How do we plot $u(4-t)$, where $u(t)$ is a step function?
$u(t)$ is a step function:
$$u(t)=\begin{cases} 1&\text{ for }t \ge 0,\\
0 & \text{ for }t \lt 0.\end{cases}$$
|
$u(4-t) = 1$ for $4-t\ge0$, that is, for $t\le4$;
$u(4-t) = 0$ for $4-t\lt0$, that is, for $t\gt4$.
So the plot is a reflected step: the graph sits at height $1$ for all $t\le 4$ and drops to $0$ for $t>4$. (The original plot image has not been preserved here.)
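For anyone who wants to reproduce the missing picture, here is a minimal sketch in Python with Matplotlib (the plotting library is my choice, not the original answerer's):

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(-2, 10, 1000)
    u = lambda s: (s >= 0).astype(float)   # unit step: 1 for s >= 0, else 0

    plt.plot(t, u(4 - t))                  # equals 1 for t <= 4, 0 for t > 4
    plt.xlabel('t'); plt.ylabel('u(4-t)'); plt.ylim(-0.2, 1.2)
    plt.show()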
|
Cauchy Problem for Heat Equation with Hölder Continuous Data This exercise comes from a past PDE qual problem. Assume $u(x,t)$ solves
$$
\left\{\begin{array}{rl}
u_{t}-\Delta u=0&\text{in } \mathbb{R}^{n}\times(0,\infty)\\
u(x,0)=g(x)&\text{on } \mathbb{R}^{n}\times\{t=0\}\end{array}\right.
$$
and $g$ is Hölder continuous with exponent $0<\delta\leq1,$
that is
$$|g(x)-g(y)|\leq|x-y|^{\delta}$$
for every $x,y\in\mathbb{R}^{n}$. Prove the estimate
$$|u_{t}|+|u_{x_{i}x_{j}}|\leq C_{n}t^{\frac{\delta}{2}-1}.$$
I have quite a few pages of scratch work in trying to prove this estimate, but I have not been able to arrive at a situation where it is even obvious how to exploit the Hölder continuity of $g$. Because of translation invariance in space, we can just prove it for the case $x=0$, so that at least simplifies some things. But again, there is a key observation that has apparently eluded me, and a hint would be appreciated!
|
This calls for a scaling argument.
As you noticed, it suffices to consider $x=0$. Replace $g$ with $g-g(0)$; this does not change the derivatives. Now we know that $$|g(x)|\le |x|^\delta\tag1$$
Prove an estimate of the form
$$|u_{t}(0,1)| + |u_{x_ix_j}(0,1)| \le C_n\tag2$$
This requires writing the derivatives
as convolutions of $g$ with the derivatives of $\Phi$, and a rough estimate such as $|g(x)|\le 1+|x|$.
For every scaling factor $\lambda$ the function $u_\lambda=\lambda^{-\delta} u(\lambda x,\lambda^2 t)$ solves the heat equation with the initial data $g_\lambda(x)=\lambda^{-\delta} g(\lambda x)$. Notice that $g_\lambda$ also satisfies (1).
Therefore, $u_\lambda$ satisfies (2). All of a sudden, we're done: unwinding the scaling gives $\partial_t u_\lambda(0,1)=\lambda^{2-\delta}\,u_t(0,\lambda^2)$, so $|u_t(0,\lambda^2)|\le C_n\lambda^{\delta-2}=C_n(\lambda^2)^{\frac{\delta}{2}-1}$, which is the claimed estimate at $t=\lambda^2$; the second derivatives are handled the same way.
|
Filling the gap in knowledge of algebra Recently, I have realized that my inability to solve problems is sometimes because I have gaps in my knowledge of algebra. For example, I recently posted a question that asked why $\sqrt{9x^2}$ was not $3x$, which was fairly embarrassing because the answer was fairly logical and something I had missed. Furthermore, I realize that when I am solving questions, I tend to get stuck on some intermediate step, because that is where most of the algebra is needed. Therefore, my question is: How can I improve my algebra? What steps are needed? What books should I be practicing from? What are a few things everyone should know?
|
As skullpatrol commented, the Khan Academy has covered a wide range of high school algebra. As you go up the scale, there are many more resources such as the Art of Problem Solving for contest mathematics. Art of Problem Solving has books on high school algebra as well as practice problems: I personally like their structure. You can use your mathematics textbooks for algebra too. Another website I want to add is Brilliant.
P.S.: Never forget the site you are already on! Set the algebra-precalculus tag as your favorite and start exploring.
|
Graph theory and computer chip design reference Wikipedia says graph theory is used in computer chip design.
... travel, biology, computer chip design, and many other fields. ...
Is there a good reference for that? I can imagine that an optimal way to lay a CPU out on a chip would be to find a shortest Hamiltonian cycle in it.
|
I don't know about a reference. However, the intuition is that an electrical circuit in a computer chip design is etched into a flat surface. This implies that the graph model of this circuit must be a planar graph. So the theory behind planar graphs is very important in designing such circuits.
|
Applying substitutions in lambda calculus For computing $2+3$, the lambda calculus goes the following: $(\lambda sz.s(sz))(\lambda wyx.y(wyx))(\lambda uv.u(u(uv)))$
I am having a hard time substituting and reaching the final form of $(\lambda wyx.y((wy)x))((\lambda wyx.y((wy)x))(\lambda uv.u(u(uv))))$. Can anyone provide a step-by-step procedure?
|
Recall that application is left-associative, e.g. $w y x = \color{red}{(}w y\color{red}{)} x$. Then the steps just follow by standard $\beta$-reduction
and $\zeta_1$ (which reduces the left-hand side of an application, i.e. $e_1 \rightarrow e_1' \Rightarrow e_1 e_2 \rightarrow e_1' e_2$).
In the following I have underlined the term which is about to be substituted:
$\color{red}{(}(\lambda sz.s(sz))\underline{(\lambda wyx.y(wyx))}\color{red}{)}(\lambda uv.u(u(uv)))$
$\xrightarrow{\zeta_1/\beta} (\lambda z.(\lambda wyx.y(wyx))((\lambda wyx.y(wyx))z))\underline{(\lambda uv.u(u(uv)))}$
$\xrightarrow{\beta} (\lambda wyx.y(wyx))((\lambda wyx.y(wyx))(\lambda uv.u(u(uv))))$
This is equal to your final form, which just has some additional parentheses to make the associativity of $wyx$ explicit (i.e. $(\lambda wyx.y((wy)x))((\lambda wyx.y((wy)x))(\lambda uv.u(u(uv))))$).
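As a sanity check, the same computation can be run directly in, say, Python, encoding Church numerals as nested functions (this sketch is my addition, not part of the original answer):

    succ  = lambda w: lambda y: lambda x: y(w(y)(x))  # λwyx. y (w y x), the successor
    two   = lambda s: lambda z: s(s(z))               # λsz. s (s z)
    three = lambda u: lambda v: u(u(u(v)))            # λuv. u (u (u v))

    five = two(succ)(three)            # 2 applied to succ and 3, i.e. succ(succ(3))
    print(five(lambda n: n + 1)(0))    # decode to an ordinary integer: prints 5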
|
Exercise 6.1 in Serre's Representations of Finite Groups I am trying to show that if $p$ divides the order of $G$ then the group algebra $K[G]$ for $K$ a field of characteristic $p$ is not semisimple. Now Serre suggests us to consider the ideal
$$U = \left\{ \sum_{s \in G} a_s e_s \hspace{1mm} \Bigg| \hspace{1mm} \sum_{s\in G} a_s = 0\right\}$$
of $K[G]$ and show that there is no $K[G]$-submodule $V$ such that $U \oplus V = K[G]$. Now I sort of have a proof in the case that $G = S_3$ and $K = \Bbb{F}_2$ but can't seem to generalise.
Suppose there were such a $V$ in this case. Then $V$ is one-dimensional, spanned by some vector $v \in K[G]$ such that the sum of the coefficients of $v$ is not zero. If that $v$ is say $e_{(1)}$, then by multiplying with all other basis elements of $\Bbb{F}_2[S_3]$ and taking their sum we will get something in $U$, contradiction.
We now see that the general plan is this: If $v \notin U$ is the sum of a certain number of basis elements not in $U$, we can somehow multiply $v$ by elements in $K[G]$ then take the sum of all these to get something in $U$, contradiction.
How can this be generalised to an arbitrary field of characteristic $p$ and finite group $G$?
Thanks.
|
Here's my shot at the problem. I thought of this solution just before going to bed last night.
Suppose that $p$ divides the order of $G$. Then Cauchy's Theorem says that there is an element $x$ of order $p$. Consider $e_x$. If we can find a submodule $V$ such that $K[G] = U \oplus V$ then we can write
$$e_x = u+v$$
for some $u\in U$ and $v \in V$. We note that $v \neq 0$ because $e_x \notin U$. Then consider the elements
$$\begin{eqnarray*} e_x &=& u + v \\
e_{x^2} &=& e_xu + e_x v \\
&\vdots & \\
e_{x^p} &=& e_{x^{p-1}}u + e_{x^{p-1}}v\end{eqnarray*}$$
and take their sum. We then have
$$\left(\sum_{i=1}^p e_{x^i}\right)\left(e_{1} - u\right) = \sum_{i=1}^{p} e_{x^i}v. $$
However the guys on the left are in $U$ while the sum on the right is in $V$, contradicting $U \cap V = \{0\}$.
|
Normalizer of $S_n$ in $GL_n(K)$ In the exercises on direct product of groups of Dummit & Foote, I proved that the symmetric group $S_n$ is isomorphic to a subgroup of $GL_n(K)$, called the permutation matrices with one 1 in each row and each column.
My question is how can I find the normalizer of this subgroup in $GL_n(K)$?
|
Edit: I've revised the answer to make it more elementary, and to fix the error YACP pointed out (thank you).
Suppose $X\in N_{GL_n(K)}(S_n)$. Then for every permutation matrix $P\in S_n$ we have $XPX^{-1}\in S_n$, so conjugation by $X$ is an automorphism of $S_n$. If $n\ne 2, 6$, then as YACP noted it must be an inner automorphism, i.e. we have some $P'\in S_n$ such that for every $P\in S_n$, $XPX^{-1}=P'P{P'}^{-1}$. Thus $(X^{-1}P')P(X^{-1}P')^{-1}=P$, so $X^{-1}P'\in C_{GL_n(K)}(S_n)$ (the centralizer of $S_n$). Thus $X\in C_{GL_n(K)}(S_n)\cdot S_n$, so all we have to do is find $C_{GL_n(K)}(S_n)$, as $C_{GL_n(K)}(S_n)\cdot S_n\subseteq N_{GL_n(K)}(S_n)$ holds trivially.
Let $\mathcal C$ denote the set of all matrices (including non-invertible ones) $X$ such that $PXP^{-1}=X$ for all $P\in S_n$. Note that conjugation is linear, i.e. $A(X+Y)A^{-1}=AXA^{-1}+AYA^{-1}$ for any $A,X,Y\in M_{n\times n}(K)$, so $\mathcal C$ is closed under addition. Conjugation also respects scalar multiplication, i.e. $AcXA^{-1}=cAXA^{-1}$, so $\mathcal C$ is closed under scalar multiplication. Recall that $M_{n\times n}(K)$ is a vector space over $K$, so this makes $\mathcal C$ a subspace of $M_{n\times n}(K)$. The use of $\mathcal C$ is that $C_{GL_n(K)}(S_n)=\mathcal C\cap GL_n(K)$, yet unlike $C_{GL_n(K)}(S_n)$ it is a vector subspace, and vector subspaces are nice to work with.
It is easy to see that $\mathcal C$ contains diagonal matrices $D$ with constant diagonal, as well as all matrices $M$ such that the entries $m_{ij}$ are the same for all $i,j$. Since $\mathcal C$ is a vector subspace, this means it contains all sums of these matrices as well. We want to show that every matrix in $\mathcal C$ can be written as $D+M$ where $D$ and $M$ are as above. If $X\in \mathcal C$ then we can subtract a diagonal matrix $D$ and a matrix $M$ of the second kind to make the upper-left and upper-right entries $0$:
$$X-D-M=\begin{pmatrix} 0 & x_{12} & \cdots & x_{1n-1} & 0\\
x_{21} & x_{22} & \cdots & x_{2n-1} & x_{2n}\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
x_{n1} & x_{n2} & \cdots & x_{nn-1} & x_{nn}\\
\end{pmatrix}$$
Call this matrix $X'$; we wish to show $X'=0$. Exchanging the second and last column must be the same as exchanging the second and last row, and since the first action switches $x_{12}$ and $x_{1n}$ while the second leaves the first row unchanged we have $x_{12}=x_{1n}=0$. Continuing in this manner we see that the whole first row is $0$. Exchanging the first and second row is the same as exchanging the first and second column, so the whole second row must be $0$ as well. Continuing in this manner we get that $X'=0$ as desired. Thus $\mathcal C$ is the set of matrices of the form $D+M$, i.e. with $a$ on the diagonal and $b$ off the diagonal.
$C_{GL_n(K)}(S_n)$ is the set of such matrices with nonzero determinant. Let $X\in \mathcal C$ have entries $a$ on the diagonal and $b$ off it. Clearly if $a=b$ then the determinant is $0$, so suppose $a\ne b$. Then we can write $X=(a-b)(I_n+cr)$ where $c$ is a column consisting entirely of $1$'s and $r$ is a row consisting entirely of entries $\frac{b}{a-b}$. By Sylvester's Determinant Theorem the determinant of this is $(a-b)^n(1+rc)$, and $rc=\frac{nb}{a-b}$, which gives us $\det(X)=(a-b)^{n-1}(a-b+nb)$. Thus for any $X\in \mathcal C$, $\det(X)=0$ iff either $a=b$ or $a=(1-n)b$.
Putting this all together, we get that
$$N_{GL_n(K)}(S_n)=\left\{\begin{pmatrix} a & b & \cdots & b \\
b & a & \ddots &\vdots \\
\vdots &\ddots & \ddots & b\\
b & \cdots & b & a\\
\end{pmatrix}P: a\neq b, a\neq (1-n)b, P\in S_n \right\}$$
|
Prove the limit problems I got two problems asking for the proof of the limit:
Prove the following limit: $$\sup_{x\ge 0}\ x e^{x^2}\int_x^\infty e^{-t^2} \, dt={1\over 2}.$$
and,
Prove the following limit: $$\sup_{x\gt 0}\ x\int_0^\infty {e^{-px}\over {p+1}} \, dp=1.$$
I feel that these two problems are of the same kind. Would anyone please help me with one of them, so that I may figure out the other one? Many thanks!
|
Let
$$ f(x)= x e^{x^2}\int_x^\infty e^{-t^2}\,dt \implies f(x)= x e^{x^2}g(x), \qquad g(x):=\int_x^\infty e^{-t^2}\,dt.$$
We can see that $ f(0)=0 $ and $f(x)>0$ for all $x>0$. Taking the limit as $x$ goes to infinity and using L'Hôpital's rule and the Leibniz integral rule yields
$$ \lim_{ x\to \infty } xe^{x^2}g(x) = \lim _{x\to \infty} \frac{g(x)}{\frac{1}{xe^{x^2}}}=\lim_{x \to \infty} \frac{g'(x)}{\left(\frac{1}{xe^{x^2}}\right)'}=\lim_{x \to \infty} \frac{-e^{-x^2}}{-\frac{e^{-x^2}\left( 2x^2+1 \right)}{x^2}} =\lim_{x \to \infty}\frac{x^2}{2x^2+1}=\frac{1}{2}. $$
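A numerical sanity check is possible because $\int_x^\infty e^{-t^2}\,dt=\frac{\sqrt\pi}{2}\operatorname{erfc}(x)$. Here is a sketch of mine using SciPy's scaled complementary error function $\operatorname{erfcx}(x)=e^{x^2}\operatorname{erfc}(x)$, which avoids overflow:

    import numpy as np
    from scipy.special import erfcx   # erfcx(x) = exp(x^2) * erfc(x)

    x = np.linspace(0.0, 50.0, 200001)
    f = x * (np.sqrt(np.pi) / 2) * erfcx(x)   # x e^{x^2} int_x^inf e^{-t^2} dt
    print(f.max())   # approaches, but stays below, 0.5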
|
Help with writing the following as a partial fraction $\frac{4x+5}{x^3+1}$. I need help with writing the following as a partial fraction:
$$\frac{4x+5}{x^3+1}$$
My attempt so far is to factor $x^3+1$ into $(x+1)$ and $(x^2-x+1)$.
This gives me: $A(x^2-x+1)+B(x+1)$.
But I have problems with solving the equation system that this gives:
$A = 0$ (since there are no $x^2$ terms in $4x+5$)
$-A+B =4$ (since there are $4$ $x$ terms in $4x+5$)
$A+B = 5$ (since the constant is $5$ in $4x+5$)
this gives me $A=0.5$ and $B=4.5$ and $\frac{1/2}{x+1}, \frac{9/2}{x^2-x+1}$
This is apparently wrong. Where is my reasoning faulty?
Thank you!
|
In the numerator over each factor you need a general polynomial of degree one less than that factor: the linear factor gets a constant, but the quadratic factor $x^2-x+1$ needs a linear numerator $Ax+B$.
This leads to:
$$\frac{Ax+B}{x^2-x+1} + \frac{C}{x+1} = \frac{4x+5}{x^3+1}$$
This gives us:
$$Ax^2 + Ax + Bx + B + Cx^2 - Cx + C = 4x + 5$$
This leads to:
$A + C = 0$
$A + B - C = 4$
$B + C = 5$
yielding:
$$A = -\frac{1}{3}, B = \frac{14}{3}, C = \frac{1}{3}$$
Writing the expansion out yields:
$$\frac{4x+5}{x^3+1} = \frac{14 - x}{3(x^2-x+1)} + \frac{1}{3(x+1)}$$
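If you want to double-check such a decomposition, SymPy can do it directly (a verification sketch of mine, not part of the original answer):

    from sympy import symbols, apart, simplify

    x = symbols('x')
    expr = (4*x + 5) / (x**3 + 1)
    print(apart(expr))   # the partial-fraction decomposition

    decomp = (14 - x)/(3*(x**2 - x + 1)) + 1/(3*(x + 1))
    print(simplify(expr - decomp))   # prints 0, confirming the identity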
|
Integration theory Any help with this problem is appreciated.
Given that $f$ is measurable and finite a.e. on $[0,1]$, prove the following statements:
$$ \int_E f = 0 \text{ for all measurable $E \subset [0,1]$ with $\mu(E) = 1/2$ }\Rightarrow f = 0 \text{ a.e. on } [0,1]$$
$$ f > 0 \text{ a.e. } \Rightarrow \inf ~ \left\{\int_E f : \mu(E) \geq 1/2\right\} > 0 $$
|
For $(1)$, we can define the sets $ P:= \{x:f(x)\ge 0\}$ and $ N:=\{x:f(x)\le 0\}$. Then either $\mu(P)\ge \frac{1}{2}$ or $\mu(N)\ge \frac{1}{2}$. Suppose $\mu(P)\ge \frac{1}{2}$, and define $SP:= \{x:f(x)> 0\}$, so that $SP\subset P$. If $ \mu(SP)<\frac{1}{2}$, we can choose a set $E$ such that $$SP \subset E,\quad f(x)\ge 0\ \text{on}\ E,\quad\text{and}\quad \mu(E)=\tfrac{1}{2}.$$ According to the hypothesis, $\int_Ef=0$, which implies $\mu(SP)=0$ (since $f$ is non-negative on $E$, $\int_E f=0$ forces $f=0$ a.e. on $E$). I think you are able to show the case when $\mu(SP)\ge\frac{1}{2}$. Similarly, if we define $ SN:=\{x: f(x)<0\} $, we can show $\mu(SN)=0$.
For $(2)$, first define $A_n:=\{x:f(x)>\frac{1}{n}\} $; then $A_n$ is increasing and $\lim_{n\to \infty}\mu(A_n)=1$ since $f>0$ a.e. Fix $\epsilon_0<\frac12$ and a sufficiently large $n_0$ so that $\mu(A_{n_0})>1-\epsilon_0$; then for any $E$ with $\mu(E)\ge \frac{1}{2}$, we have
$$ \int_Ef\,\mathrm{d}\mu=\int_{E\cap A_{n_0}^c} f\,\mathrm{d}\mu+\int_{E\cap A_{n_0}} f\,\mathrm{d}\mu\ge \int_{E\cap A_{n_0}} f\,\mathrm{d}\mu\ge \frac{1}{n_0}\cdot \mu(E\cap A_{n_0})$$
Note that $\mu(E\cap A_{n_0})\ge \mu (E)+\mu (A_{n_0})-1> \frac{1}{2}+(1-\epsilon_0)-1=\frac{1}{2}-\epsilon_0$, hence $\int_E f\,\mathrm{d}\mu>\frac{1}{n_0}\cdot (\frac{1}{2}-\epsilon_0)>0$. Since this lower bound is independent of $E$, the infimum is strictly positive.
|
How can I prove that a sequence has a given limit? For example, let's say that I have some sequence $$\left\{c_n\right\} = \left\{\frac{n^2 + 10}{2n^2}\right\}$$ How can I prove that $\{c_n\}$ approaches $\frac{1}{2}$ as $n\rightarrow\infty$?
I'm using the Buchanan textbook, but I'm not understanding their proofs at all.
|
Well we want to show that for any $\epsilon>0$, there is some $N\in\mathbb N$ such that for all $n>N$ we have $|c_n-1/2|<\epsilon$ (this is the definition of a limit). In this case we are looking for a natural number $N$ such that if $n>N$ then
$$\left|\frac{n^2+10}{2n^2}-\frac{1}{2}\right|=\frac{5}{n^2}<\epsilon$$
We can make use of what's called the Archimedean property, which is that for any real number $x$ there is a natural number larger than it. To do so, note that the above equation is equivalent to $\frac{n^2}{5}>\frac{1}{\epsilon}$, or $n^2>\frac{5}{\epsilon}$. If we choose $N$ to be a natural number greater than $\frac{5}{\epsilon}$, then if $n>N$ we have $n^2>N>\frac{5}{\epsilon}$ as desired. Thus $\lim\limits_{n\to\infty} c_n = \frac12$.
To relate this to the definition of limit in Buchanan: You want to show that for any neighborhood of $1/2$, there is some $N\in\mathbb N$ such that if $n>N$ then $c_n$ is in the neighborhood. Now note that any neighborhood contains an open interval around $1/2$, which takes the form $(1/2-\epsilon,1/2+\epsilon)$. Saying that $c_n$ is in this open interval is the same as saying that $|c_n-1/2|<\epsilon$.
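To see the quantifier game concretely, here is a small numerical illustration (my addition, not the book's) of the $N$ chosen in the argument above:

    eps = 1e-3
    N = int(5 / eps) + 1              # any natural number greater than 5/eps
    for n in (N + 1, 2 * N, 10 * N):
        c_n = (n**2 + 10) / (2 * n**2)
        assert abs(c_n - 0.5) < eps   # |c_n - 1/2| = 5/n^2 < eps for n > N
    print('bound holds for all sampled n >', N)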
|
Prove that either $m$ divides $n$ or $n$ divides $m$ given that $\operatorname{lcm}(m,n) + \operatorname{gcd}(m,n) = m + n$? We are given that $m$ and $n$ are positive integers such that $\operatorname{lcm}(m,n) + \operatorname{gcd}(m,n) = m + n$.
We are looking to prove that one of numbers (either $m$ or $n$) must be divisible by the other.
|
We may suppose without loss of generality that $m \le n$. If $\text{lcm}(m,n) > n$, then $\text{lcm}(m,n) \ge 2n$, since $\text{lcm}(m,n)$ is a multiple of $n$. But then we have
$\text{lcm}(m,n) < \text{lcm}(m,n)+\gcd(m,n) = m + n \le 2n \le \text{lcm}(m,n)$,
a contradiction. So $\text{lcm}(m,n) = n$, which says precisely that $m$ divides $n$.
|
Integral Sign with indicator function and random variable I have the following problem. I need to consider all the conditions in which the following integral may be equal to zero:
$$\int_\Omega [p\phi-\lambda(\omega)]f(\omega)\iota(\omega)d\omega$$
Here $p>0$ is a constant, $f$ is a probability density function, and $\iota$ is an indicator function (i.e. I am truncating the density). $\lambda(\cdot)$ is an increasing function of $\omega$, and $\phi$ is a random variable, so I am solving this integral for any possible realization of $\phi$. Of course there is a trivial solution when $p\phi=\lambda(\omega^*)$ for some $\omega^*$. But does there exist any other case in which this is zero that I am not considering?
|
* The expectation $\mathbb{E}\left[\iota(p\phi-\lambda)\right]$ could be zero depending on the value of $p\phi-\lambda$ on all $\omega$.
* If there is no $\omega$ for which $\iota(\omega)=1$, again you end up with a zero.
|
How many triangles are formed by $n$ chords of a circle? This is a homework problem I have to solve, and I think I might be misunderstanding it. I'm translating it from Polish word for word.
$n$ points are placed on a circle, and all the chords whose endpoints they are are drawn. We assume that no three chords intersect at one point.
a) How many parts do the chords dissect the disk?
b) How many triangles are formed whose sides are the chords or their fragments?
I think the answer to a) is $2^n$. But I couldn't find a way to approach b), so I calculated the values for small $n$ and asked OEIS about them. I got A006600. And it appears that there is no known formula for all $n$. This page says that
$\text{triangles}(n) = P(n) - T(n)$
where $P(n)$ is the number of triangles for a convex $n$-gon in general
position. This means there are no three diagonals through one point
(except on the boundary). (There are no degenerate corners.)
This number is easy to calculate as:
$$P(n) = {n\choose 3} + 4{n\choose 4} + 5{n\choose5} + {n\choose6}
= {n(n-1)(n-2)(n^3 + 18 n^2 - 43 n + 60)\over720}$$
The four terms count the triangles in the following manner [CoE76]:
$n\choose3$: Number of triangles with 3 corners at the border.
$4{n\choose4}$: Number of triangles with 2 corners at the border.
$5{n\choose5}$: Number of triangles with 1 corner at the border.
$n\choose6$: Number of triangles with 0 corners at the border.
$T(n)$ is the number of triple-crossings (this is the number of
triples of diagonals which are concurrent) of the regular $n$-gon.
It turns out that such concurrences cannot occur for n odd, and,
except for obvious cases, can only occur for $n$ divisible by $6$.
Among other interesting results, Bol [Bol36] finds values of n
for which $4$, $5$, $6$, and $7$ diagonals are concurrent and shows that
these are the only possibilities (with the center for exception).
The function $T(n)$ for $n$ not divisible by $6$ is:
$$T(n) = {1\over8}n(n-2)(n-7)[2|n] + {3\over4}n[4|n].$$
where $[k|n]$ is defined as $1$ if $k$ is a divisor of $n$ and otherwise $0$.
The intersection points need not lie on any of the lines of symmetry of
the $2m$-gon, e.g. for $n=16$ the triple intersection of $(0,7),(1,12),(2,14)$.
If I understand the text correctly, it doesn't give a general formula for $T(n)$. Also I've found a statement somewhere else that some mathematician wasn't able to give a general formula solving this problem. I haven't found a statement that it is still an open problem, but it looks like it to me.
So am I just misunderstanding the problem, or misunderstanding what I've found on the web, or maybe it is indeed a very hard problem? It's the beginning of the semester, our first homework, and it really scares me.
|
The answer to a) isn't $2^n$. It isn't even $2^{n-1}$, which is probably what you meant. Draw the case $n=6$, carefully, and count the regions.
As for $T(n)$, the number of triple-crossings, the problem statement specifically says there are no triple-crossings.
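Incidentally, the general-position count $P(n)$ is easy to experiment with; here is a small sketch (my addition) checking the binomial sum against the closed form quoted above:

    from math import comb

    def P(n):  # triangles formed by chords of n points in general position
        return comb(n, 3) + 4*comb(n, 4) + 5*comb(n, 5) + comb(n, 6)

    def P_closed(n):
        return n*(n-1)*(n-2)*(n**3 + 18*n**2 - 43*n + 60) // 720

    assert all(P(n) == P_closed(n) for n in range(3, 50))
    print([P(n) for n in range(3, 10)])   # [1, 8, 35, 111, 287, 644, 1302]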
|
Clarification Regarding the Tor Functor involved in a Finite Exact Sequence Let $\cdots\rightarrow F_1 \rightarrow F_0 \rightarrow M \rightarrow 0$ be a free resolution of the $A$-module $M$. Let $N$ be an $A$-module. I saw in some notes that we have an exact sequence $0 \rightarrow \operatorname{Tor}(M,N) \rightarrow F_1 \otimes N \rightarrow F_0 \otimes N \rightarrow M \otimes N \rightarrow 0$. Why is that true?
Edited:
Let me explain the source of my confusion. In my study, $\operatorname{Tor}_n(M,N)$ was defined as the homology of dimension $n$ of the double complex $K_{p,q}=F_p \otimes Q_q$, where $F_{\cdot}, Q_{\cdot}$ are projective resolutions of $M,N$ respectively. It was shown that $\operatorname{Tor}_n(M,N) = H_n(F_{\cdot} \otimes N) = \frac{\operatorname{Ker}(F_n \otimes N \rightarrow F_{n-1} \otimes N)}{\operatorname{Im}(F_{n+1} \otimes N \rightarrow F_{n} \otimes N)}$. Now if the sequence
$0 \rightarrow \operatorname{Tor}(M,N) \rightarrow F_1 \otimes N \rightarrow F_0 \otimes N \rightarrow M \otimes N \rightarrow 0$ is exact, then $\operatorname{Tor}_1(M,N) = \operatorname{Ker}(F_1 \otimes N \rightarrow F_{0} \otimes N)$, which means that $\operatorname{Im}(F_{2} \otimes N \rightarrow F_{1} \otimes N)=0$, which does not make sense. What am I missing?
PS: The reference that i am using is Matsumura's Commutative Ring Theory Appendix B. That's as far as i have gone so far with homological algebra.
|
The tensor functor is right-exact. We have an exact sequence
$F_1\to F_0\to M\to 0$,
implying that the complex
$F_2\otimes N\to F_1\otimes N\to F_0\otimes N\to M\otimes N\to 0$,
is exact except possibly at $F_1\otimes N$. Now how can you calculate $\mathrm{Tor}_1(M,N)$? You should be able to figure it out from here.
|
Prove that $ \left(1+\frac a b \right) \left(1+\frac b c \right)\left(1+\frac c a \right) \geq 2\left(1+ \frac{a+b+c}{\sqrt[3]{abc}}\right)$.
Given $a,b,c>0$, prove that $\displaystyle \left(1+\frac a b \right) \left(1+\frac b c \right)\left(1+\frac c a \right) \geq 2\left(1+ \frac{a+b+c}{\sqrt[3]{abc}}\right)$.
I expanded the LHS, and realized I have to prove $\displaystyle\frac a b +\frac a c +\frac b c +\frac b a +\frac c a +\frac c b \geq \frac{2(a+b+c)}{\sqrt[3]{abc}}$, but I don't know how. Please help. Thank you.
|
We can begin by clearing denominators as follows
$$a^2c+a^2b+b^2a+b^2c+c^2a+c^2b\geq 2a^{5/3}b^{2/3}c^{2/3}+2a^{2/3}b^{5/3}c^{2/3}+2a^{2/3}b^{2/3}c^{5/3}$$
Now by the Arithmetic Mean - Geometric Mean Inequality,
$$\frac{2a^2c+2a^2b+b^2a+c^2a}{6} \geq a^{5/3}b^{2/3}c^{2/3}$$
That is,
$$\frac{2}{3}a^2c+\frac{2}{3}a^2b+\frac{1}{3}b^2a+\frac{1}{3}c^2a \geq 2a^{5/3}b^{2/3}c^{2/3}$$
Similarly, we have
$$\frac{2}{3}b^2a+\frac{2}{3}b^2c+\frac{1}{3}c^2b+\frac{1}{3}a^2b \geq 2a^{2/3}b^{5/3}c^{2/3}$$
$$\frac{2}{3}c^2b+\frac{2}{3}c^2a+\frac{1}{3}a^2c+\frac{1}{3}b^2c \geq 2a^{2/3}b^{2/3}c^{5/3}$$
Summing these three inequalities together, we obtain the desired result.
|
Derivative of the off-diagonal $L_1$ matrix norm We define the off-diagonal $L_1$ norm of a matrix as follows: for any $A\in \mathcal{M}_{n,n}$, $$\|A\|_1^{\text{off}} = \sum_{i\ne j}|a_{ij}|.$$
So what is $$\frac{\partial \|A\|_1^{\text{off}}}{\partial A}\;?$$
|
$
\def\p{\partial}
\def\L{\left}\def\R{\right}\def\LR#1{\L(#1\R)}
\def\t#1{\operatorname{Tr}\LR{#1}}
\def\s#1{\operatorname{sign}\LR{#1}}
\def\g#1#2{\frac{\p #1}{\p #2}}
$Use the element-wise sign() function to define the matrix
$\,S = \s{X}.\;$
The gradient and differential of the Manhattan norm can be written as
$$\eqalign{
\g{\|X\|_1}{X} &= S \quad\iff\quad d\|X\|_1 = S:dX \\
}$$
Suppose that $X$ itself is defined in terms of another matrix $A$
$$\eqalign{
F &= (J-I), \qquad
X = F\odot A, \qquad
S = \s{F\odot A} \\
}$$
where $J$ is the all-ones matrix, $I$ is the identity, and $(\odot)$ is the Hadamard product.
$\big($ Note that $X$ is composed of the off-diagonal elements of $A\,\big)$
Substituting into the known differential yields the desired gradient
$$\eqalign{
&\quad d\|X\|_1 = S:dX = S:(F\odot dA) = (F\odot S):dA \\
&\boxed{\;\g{\|X\|_1}{A} = F\odot S\;}
\\
}$$
where $(:)$ denotes the Frobenius product, which is a convenient notation for the trace
$$\eqalign{
A:B &= \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij} \;=\; \t{A^TB} \\
A:A &= \big\|A\big\|^2_F \\
}$$
and which commutes with the Hadamard product
$$\eqalign{
A:(B\odot C) &= \sum_{i=1}^m\sum_{j=1}^n A_{ij}B_{ij}C_{ij} \\
&= (A\odot B):C \\\\
}$$
NB: The gradient above is not defined for any element of $X$ equal to zero, because the sign() function itself is discontinuous at zero.
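Here is a quick finite-difference check of the boxed gradient (a NumPy sketch of mine; per the NB above, it is only valid away from zeros of the off-diagonal entries):

    import numpy as np

    def offdiag_l1(A):
        F = 1 - np.eye(A.shape[0])       # F = J - I: kills the diagonal
        return np.abs(F * A).sum()

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    F = 1 - np.eye(4)
    grad = F * np.sign(F * A)            # the boxed result: F o sign(F o A)

    eps, num = 1e-6, np.zeros_like(A)
    for i in range(4):
        for j in range(4):
            E = np.zeros_like(A); E[i, j] = eps
            num[i, j] = (offdiag_l1(A + E) - offdiag_l1(A - E)) / (2 * eps)
    print(np.allclose(grad, num, atol=1e-4))   # True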
|
$f$ is integrable but has no indefinite integral Let $$f(x)=\cases{0,& $x\ne0$\cr 1, &$x=0.$}$$
Then $f$ is clearly integrable, yet has no antiderivative, on any interval containing $0,$ since any such antiderivative would have a constant value on each side of $0$ and have slope $1$ at $0$—an impossibility.
So does this mean that $f$ has no indefinite integral?
EDIT
My understanding is that the indefinite integral of $f$ is the family of all the antiderivatives of $f,$ and conceptually requires some antiderivative to be defined on the entire domain. Is this correct?
|
Just to supplement Emanuele’s answer. If $ I $ is an open subset of $ \mathbb{R} $, then some mathematicians define the indefinite integral of a function $ f: I \to \mathbb{R} $ as follows:
$$
\int f ~ d{x} \stackrel{\text{def}}{=} \{ g \in {D^{1}}(I) ~|~ f = g' \}.
$$
Hence, taking the indefinite integral of $ f $ yields a family of antiderivatives of $ f $. If $ f $ has no antiderivative, then according to the definition above, we have
$$
\int f ~ d{x} = \varnothing.
$$
|
Prove $1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\dots+\frac{1}{\sqrt{n}} > 2\:(\sqrt{n+1} − 1)$ Basically, I'm trying to prove (by induction) that:
$$1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\dots+\frac{1}{\sqrt{n}} > 2\:(\sqrt{n+1} − 1)$$
I know to begin, we should use a base case. In this case, I'll use $1$. So we have:
$$1 > 2\,(\sqrt{1+1}-1)=2(\sqrt{2}-1)\approx 0.83$$
Which works out.
My problem is the next step. What comes after this? Thanks!
|
Mean Value Theorem can also be used,
Let $\displaystyle f(x)=\sqrt{x}$
$\displaystyle f'(x)=\frac{1}{2}\frac{1}{\sqrt{x}}$
Using mean value theorem we have:
$\displaystyle \frac{f(n+1)-f(n)}{(n+1)-n}=f'(c)$ for some $c\in(n,n+1)$
$\displaystyle \Rightarrow \frac{\sqrt{n+1}-\sqrt{n}}{1}=\frac{1}{2}\frac{1}{\sqrt{c}}$....(1)
$\displaystyle \frac{1}{\sqrt{n+1}}<\frac{1}{\sqrt{c}}<\frac{1}{\sqrt{n}}$
Using the above ineq. in $(1)$ we have,
$\displaystyle \frac{1}{2\sqrt{n+1}}<\sqrt{n+1}-\sqrt{n}<\frac{1}{2\sqrt{n}}$
Adding the left part of the inequality we have,$\displaystyle\sum_{k=2}^{n}\frac{1}{2\sqrt{k}}<\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=\sqrt{n}-1$
$\Rightarrow \displaystyle\sum_{k=2}^{n}\frac{1}{\sqrt{k}}<2\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=2(\sqrt{n}-1)$
$\Rightarrow \displaystyle1+\sum_{k=2}^{n}\frac{1}{\sqrt{k}}<1+2\sum_{k=2}^{n}(\sqrt{k}-\sqrt{k-1})=2\sqrt{n}-2+1=2\sqrt{n}-1$
$\Rightarrow \displaystyle\sum_{k=1}^{n}\frac{1}{\sqrt{k}}<2\sqrt{n}-1$
Similarly adding the right side of the inequality we have,
$\displaystyle\sum_{k=1}^{n}\frac{1}{2\sqrt{k}}>\sum_{k=1}^{n}(\sqrt{k+1}-\sqrt{k})=\sqrt{n+1}-1$
$\Rightarrow \displaystyle\sum_{k=1}^{n}\frac{1}{\sqrt{k}}>2(\sqrt{n+1}-1)$
This completes the proof.
$\displaystyle 2\sqrt{n+1}-2<\sum_{k=1}^{n}{\frac{1}{\sqrt{k}}}<2\sqrt{n}-1.$
This is a much better proof than proving it by induction (of course, provided one knows elementary calculus).
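A numerical spot-check of the two-sided bound in the last display (my addition; note the upper bound is strict only from $n\ge 2$ on):

    import math

    def partial_sum(n):
        return sum(1 / math.sqrt(k) for k in range(1, n + 1))

    for n in (2, 10, 100, 10_000):
        s = partial_sum(n)
        assert 2 * math.sqrt(n + 1) - 2 < s < 2 * math.sqrt(n) - 1
    print('bounds verified')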
|
Elementary books by good mathematicians I'm interested in elementary books written by good mathematicians.
For example:
* Gelfand (Algebra, Trigonometry, Sequences)
* Lang (A first course in calculus, Geometry)
I'm sure there are many other ones. Can you help me to complete this short list?
As for the level, I'm thinking of pupils (can be advanced ones) whose age is less than 18.
But books a bit more advanced could interest me. For example Roger Godement books: Analysis I & II are full of nice little results that could be of interest at an elementary level.
|
Solving Mathematical Problems: A Personal Perspective by Terence Tao
|
Ramsey Number Inequality: $R(\underbrace{3,3,...,3,3}_{k+1}) \le (k+1)(R(\underbrace{3,3,...3}_k)-1)+2$ I want to prove that:
$$R(\underbrace{3,3,...,3,3}_{k+1}) \le (k+1)(R(\underbrace{3,3,...3}_k)-1)+2$$
where $R$ is a Ramsey number. In the LHS there are $k+1$ $3$'s, and in the RHS there are $k$ $3$'s. I really have no clue how to start this proof. Any help is appreciated!
|
The general strategy will probably look something like this. (Let $R = R(\underbrace{3,\ldots,3}_k)$ in what follows.)
* Take $k+1$ copies of $K_{R-1}$, the complete graph on $R-1$ vertices, and assume that the edges of each one are colored with $k+1$ colors so as to avoid monochromatic triangles.
* Then suppose that no copy contains a triangle in the $(k+1)$-th color.
* Then show that if you add 2 more vertices $u,v$ there must be some monochromatic triangle unless (something about the edges between $u$ and the $K_{R-1}$'s, and between $v$ and the $K_{R-1}$'s).
* Then show that the edge from $u$ to $v$ must complete a monochromatic triangle with some vertex in one of the $K_{R-1}$'s.
A more careful analysis would consider the many edges between one $K_{R-1}$ and another, but with any luck you won't have to do that.
On review, I think Alex's suggestion of induction on $k$ is probably an excellent one.
|
If $f(x_0+x)=P(x)+O(x^n)$, is $f$ $m$ times differentiable at $x_0$ for $m<n$? Let $f : \mathbb{R} \to \mathbb{R}$ be a real function and $x_0 \in \mathbb{R}$ be a real number. Suppose that there exists a polynomial $P \in \mathbb{R}[X]$ such that $f(x_0+x)=P(x)+ \underset{x \to 0}{O} (x^n)$ with $n> \text{deg}(P)$.
Is it true that for $m<n$, $f^{(m)}(x_0)$ exists? (Of course, it is obvious for $m=1$.) If so, we can notice that $f^{(m)}(x_0)=P^{(m)}(0)$.
|
No, these are not true, and classical counterexamples are based on $f(x)=|x|^a\sin(1/|x|^b)$ for $x\ne0$ and $f(0)=0$, considered at $x_0=0$, for well chosen positive $a$ and $b$.
Basically, the idea is that the Taylor-type expansion of $f$ at $0$ is $f(x)=O(|x|^n)$ (no polynomial term) with $n$ large if $a$ is large, but $f''(0)$ need not exist when $b$ is suitably larger than $a$.
|
How is this derived? In my textbook I find the following derivation:
$$ \displaystyle \lim _{n \to \infty} \dfrac{1}{n} \displaystyle \sum ^n _{k=1} \dfrac{1}{1 + k/n} = \displaystyle \int^1_0 \dfrac{dx}{1+x}$$
I understand that it's $\displaystyle \int^1_0$ but I don't understand the $\dfrac{dx}{1+x}$ part.
|
The sum is a Riemann Sum for the given integral. As $n\to\infty$,
$$
\sum_{k=1}^n\frac1{1+k/n}\frac1n
$$
tends to the sum of rectangles $\frac1{1+k/n}$ high and $\frac1n$ wide. This approximates the integral
$$
\int_0^1\frac1{1+x}\mathrm{d}x
$$
where $x$ is represented by $k/n$ and $\mathrm{d}x$ by $\frac1n$.
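Numerically (a sketch of mine, not part of the answer), the Riemann sums indeed approach $\int_0^1\frac{dx}{1+x}=\ln 2$:

    import math

    def riemann(n):
        return sum(1 / (1 + k / n) for k in range(1, n + 1)) / n

    print(riemann(10**6))   # 0.6931469...
    print(math.log(2))      # 0.6931471...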
|
Limit $a_{n+1}=\frac{(a_n)^2}{6}(n+5)\int_{0}^{3/n}{e^{-2x^2}} \mathrm{d}x$ I need to find the limit when $n$ goes to $\infty$ of
$$a_{n+1}=\frac{(a_n)^2}{6}(n+5)\int_{0}^{3/n}{e^{-2x^2}} \mathrm{d}x, \quad a_{1}=\frac{1}{4}$$
Thanks in advance!
|
Obviously $a_n > 0$ for all $n \in \mathbb{N}$. Since
$$
\int_0^{3/n} \exp(-2 x^2) \mathrm{d}x < \int_0^{3/n} \mathrm{d}x = \frac{3}{n}
$$
We have
$$
a_{n+1} < \frac{1}{2} a_n^2 \left(1 + \frac{5}{n} \right) \leqslant 3 a_n^2
$$
Consider sequence $b_n$, such that $b_1 = a_1$ and $b_{n+1} = 3 b_n^2$, then $a_n \leqslant b_n$ by induction on $n$. But $b_n$ admits a closed form solution:
$$
b_n = \frac{1}{3} \left(\frac{3}{4} \right)^{2^{n-1}}
$$
and $\lim_{n \to \infty }b_n = 0$. Thus, since $0 < a_n \leqslant b_n$, $\lim_{n \to \infty} a_n = 0$ by squeeze theorem.
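Simulating the recursion confirms the rapid decay (a sketch of mine, using SciPy for the integral):

    import math
    from scipy.integrate import quad

    a = 0.25   # a_1
    for n in range(1, 15):
        integral, _ = quad(lambda x: math.exp(-2 * x * x), 0, 3 / n)
        a = (a * a / 6) * (n + 5) * integral
    print(a)   # vanishingly small: the sequence crashes to 0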
|
Sequence Proof with Binomial Coefficient Suppose $\lim_{n\to \infty }z_n=z$.
Let $w_n=\sum_{k=0}^n {2^{-n}{n \choose k}z_k}$
Prove $\lim_{n\to \infty }w_n=z$.
I'm pretty sure I need to use $\sum_{k=0}^n{n \choose k} = 2^{n}$ in the proof. Help? Thoughts?
|
$$w_n-z=\sum_{k=0}^n2^{-n}\binom{n}k(z_k-z)$$
Fix $\epsilon>0$. There is an $m\in\Bbb Z^+$ such that $|z_k-z|<\frac{\epsilon}2$ whenever $k\ge m$. Clearly
$$\lim_{n\to\infty}\sum_{k=0}^m2^{-n}\binom{n}k=0\;,$$
so there is an $r\ge m$ such that $$\sum_{k=0}^m2^{-n}\binom{n}k|z_k-z|<\frac{\epsilon}2$$ whenever $n\ge r$.
Then
$$\begin{align*}
|w_n-z|&\le\sum_{k=0}^m2^{-n}\binom{n}k|z_k-z|+\sum_{k=m+1}^n2^{-n}\binom{n}k|z_k-z|\\
&\le\sum_{k=0}^m2^{-n}\binom{n}k|z_k-z|+\frac{\epsilon}2\sum_{k=m+1}^n2^{-n}\binom{n}k\\
&\le\frac{\epsilon}2+\frac{\epsilon}2\sum_{k=0}^n2^{-n}\binom{n}k\\
&=\epsilon
\end{align*}$$
for $n\ge r$.
|
Show that $\sigma(n) = \sum_{d|n} \phi(d)\, d(\frac{n}{d})$ This is a homework question and I am to show that $$\sigma(n) = \sum_{d|n} \phi(d)\, d\left(\frac{n}{d}\right)$$ where $\sigma(n) = \sum_{d|n}d$, $d(n) = \sum_{d|n} 1 $ and $\phi$ is the Euler phi function.
What I have. Well I know $$\sum_{d|n}\phi(d) = n.$$ I also know that every $n\in \mathbb{Z}^+$ has a prime factorization $n = p_1^{a_1} \cdots p_k^{a_k}$, so since $\sigma$ is a multiplicative function, we have $\sigma(n) = \sigma(p_1^{a_1})\cdots\sigma(p_k^{a_k})$.
I also know the theorem of Möbius Inversion Formula and the fact that if $f$ and $g$ are artihmetic functions, then $$f(n) = \sum_{d|n}g(d)$$ iff $$g(n) = \sum_{d|n}f(d)\mu\left(\frac{n}{d}\right)$$
Please post no solution, only hints. I will post the solution myself for others when I have figured it out.
|
First hint: verify the formula when $n$ is a power of a prime. Then, prove that the function $n \mapsto \sum_{d \mid n} \phi(d)\, d(n/d)$ is also multiplicative, so it must coincide with $\sigma$. In fact, prove more generally that if $a_n$ and $b_n$ are multiplicative, then $n\mapsto \sum_{d \mid n}a_db_{n/d}$ is multiplicative.
Second hint (more advanced): If $A(s) = \sum_{n\geq 1}\frac{a_n}{n^{s}}$ and $B(s) = \sum_{n\geq 1}\frac{b_n}{n^{s}}$ are Dirichlet series, their product is $$A(s)B(s)=\sum_{n\geq 1}\frac{(a * b)_n}{n^{s}},$$
where $(a*b)_n = \sum_{d\mid n} a_d b_{n/d}$. Express the Dirichlet series $\sum_{n\geq 1}\frac{\phi(n)}{n^{s}}$, $\sum_{n\geq 1}\frac{d(n)}{n^{s}}$ and $\sum_{n\geq 1}\frac{\sigma(n)}{n^{s}}$ in terms of the Riemann zeta function, and reduce your identity to an identity between them.
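(Not a solution, per the request, but a brute-force numerical check of the identity may help guide the algebra; this sketch is my addition:)

    from math import gcd

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    phi = lambda n: sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
    d = lambda n: len(divisors(n))
    sigma = lambda n: sum(divisors(n))

    assert all(sigma(n) == sum(phi(dd) * d(n // dd) for dd in divisors(n))
               for n in range(1, 300))
    print('identity holds for n = 1..299')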
|
Coordinates for vertices of the "silver" rhombohedron. The "silver" rhombohedron (a.k.a the trigonal trapezohedron) is a three-dimensional object with six faces composed of congruent rhombi. You can see it visualised here.
I am interested in replicating the visualisation linked to above in MATLAB, but to do that I need to know the coordinates of the rhombohedron's eight vertices (I am using the patch function).
How do I calculate the coordinates of the vertices of the silver rhombohedron?
|
Use three unit vectors making equal pairwise angles $\alpha$ as a basis: $e_1=(1,0,0)$, $e_2=(\cos{\alpha},\sin{\alpha},0)$ and $e_3=\left(\cos\alpha,\ \frac{\cos\alpha(1-\cos\alpha)}{\sin\alpha},\ z\right)$, with $z>0$ chosen so that $|e_3|=1$. (All three pairwise dot products must equal $\cos\alpha$ for the six faces to be congruent rhombi; the simpler-looking choice $e_3=(\cos\alpha,0,\sin\alpha)$ would give $e_2\cdot e_3=\cos^2\alpha$ instead.)
Then the vertices are the set of all points with each coordinate equal to $0$ or $1$: $(0,0,0)$, $(0,0,1)$, ..., $(1,1,1)$.
Or $0$, $e_1$, $e_2$, $e_3$, $e_1+e_2$, ..., $e_1+e_2+e_3$.
Multiply coordinates by constant if needed.
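A sketch of the construction in Python (my addition; the specific rhombus angle $\alpha$ of the silver rhombohedron is left as an input, since the answer does not derive it):

    import numpy as np

    alpha = np.radians(60.0)   # placeholder angle; substitute the silver-rhombus value

    c, s = np.cos(alpha), np.sin(alpha)
    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([c, s, 0.0])
    y = c * (1 - c) / s
    e3 = np.array([c, y, np.sqrt(1 - c*c - y*y)])   # unit vector, equal angles to e1 and e2

    # the eight vertices: all 0/1 coefficient combinations of e1, e2, e3
    verts = np.array([i*e1 + j*e2 + k*e3
                      for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    print(verts.round(3))   # feed these to MATLAB's patch, face by face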
|
Conditional probabilities in the OR-gate $T=A\cup B$ with zero-probabilities in $A$ and $B$? My earlier question became too long, so succinctly:
What are $P(T|A)=P(T\cap A)/P(A)$ and $P(T|B)=P(T\cap B)/P(B)$ if $P(A)=0$ and $P(B)=0$?
I think they are undefined because of the division by zero. How can I specify the conditional probabilities now? Please, note that the basic events $A$ and $B$ depend on $T$ because $T$ consists of them, namely $T=A \cup B$.
|
Yes, it is undefined in general. It is generally pointless to ask for the conditional probability of $T$ when $A$ occurs when it is known that $A$ almost surely never happens.
But a meaningful specification in your particular case that $T = A\cup B$ is by some intuitive notion of continuity. For $P(A) \neq 0$, if $C\supseteq A$ we must have $P(C|A) = 1$. Hence one can argue from a subjective interpretation of probability that $P(T|A) = 1$ since $T\supseteq A$. And similarly for $P(T|B)$.
|
Help evaluating a gamma function I need to do a calculus review; I never felt fully confident with it and it keeps cropping up as I delve into statistics. Currently, I'm working through a some proof theory and basic analysis as a sort of precursor to the calc review, and I just hit a problem that requires integration. Derivatives I'm ok with, but I really don't remember how to take integrals correctly. Here's the problem:
$$\Gamma (x) = \int_0^\infty t^{x-1} \mathrm{e}^{-t}\,\mathrm{d}t$$
If $x \in \mathbb{N}$, then $ \Gamma(x)=(x-1)!$
Check that this is true for x=1,2,3,4
I did a bit of reading on integration, but it left my head spinning. If anyone wants to lend a hand, I'd appreciate it. I should probably just push this one to the side for now, but part of me wants to plow through it.
Update: Thanks for the responses. I suspect this will all make more sense once I've reviewed integration. I'll have to revisit it then.
|
In general:
$$u=t^{x-1}\;\;,\;\;u'=(x-1)t^{x-2}\\v'=e^{-t}\;\;,\;\;v=-e^{-t}$$
so
$$\Gamma(x):=\int\limits_0^\infty t^{x-1}e^{-t}\,dt=\overbrace{\left.-t^{x-1}e^{-t}\right|_0^\infty}^\text{This is zero}+(x-1)\int\limits_0^\infty t^{x-2}e^{-t}\,dt=$$
$$=:(x-1)\Gamma(x-1)$$
So you only need to know $\,\Gamma(1)=1\,$ and this is almost immediate...
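To "check" the four requested values without doing the integration by parts at all, one can also integrate numerically (a sketch of mine using SciPy):

    import math
    from scipy.integrate import quad

    for x in (1, 2, 3, 4):
        value, _ = quad(lambda t: t**(x - 1) * math.exp(-t), 0, math.inf)
        print(x, round(value, 6), math.factorial(x - 1))   # Gamma(x) vs (x-1)!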
|
Find the common ratio of the geometric series with the sum and the first term Given: Geometric Series Sum ($S_n$) = 39
First Term ($a_1$) = 3
number of terms ($n$) = 3
Find the common ratio $r$.
*I have been made aware that the common ratio is 3, but for anyone trying to solve this: don't just plug it in as the answer to prove that it's true. Try to find it without using it.
|
The general sum formula for geometric sequences is $$S_n=\frac{a_1\left(r^n-1\right)}{r-1}.$$ In your case, $S_n=39$, $a_1=3$, $n=3$. Now you just need to find $r$:
$$39=\frac{3\left(r^3-1\right)}{r-1}$$ $$\ 13=\frac{r^3-1}{r-1}$$ $$\ 13\left(r-1\right)=r^3-1$$ $$\ 13r-13=r^3-1$$ $$\ r^3-13r+12=0$$
$$\ r_1=-4,\ r_2=3,\ r_3=1$$
The root $r=1$ is extraneous: it was introduced by multiplying by $r-1$, and indeed $3+3+3=9\neq 39$. Both remaining roots work, so the common ratio is $$r = 3 \text{ or } r=-4.$$ Plug $r=3$ back into the formula and see that it works, or check directly that $3+9+27=39$; for $r=-4$ the terms are $3+(-12)+48=39$, which is indeed true.
|
Artinian rings and PID Let $R$ be a commutative ring with unity. Suppose that $R$ is a principal ideal domain, and $0\ne t\in R$. I'm trying to show that $R/Rt$ is an artinian $R$-module, and so is an artinian ring if $t$ is not a unit in $R$.I'm not sure how to begin. please help.
|
Hint: Show that the ideals of $R/Rt$ are all principal, and in fact, are in bijection with the divisors of $t$ in $R$ (considered up to multiplication by a unit). Then use that $R$ is a UFD (since it's a PID). Finally, note that the ideals of $R/Rt$ are also precisely its $R$-submodules.
|
Suppose that $(x_n)$ is a sequence satisfying $|x_{n+1}-x_n| \leq q|x_n-x_{n-1}|$ for all $n \in \mathbb{N^+}$. Prove that $(x_n)$ converges Let $q$ be a real number satisfying $0<q<1$. Suppose that $(x_n)$ is a sequence satisfying $$|x_{n+1}-x_n| \leq q|x_n-x_{n-1}|$$ for all $n \in \mathbb{N^+}$. Prove that $(x_n)$ is a convergent sequence. I try to show that the sequence is Cauchy, but I am stuck at finding the $M$. Can anyone help me?
|
Hint: note that for every $n$ we have $|x_{n+1}-x_n|\leq q^{\,n-1}|x_2-x_1|$; this can be proven by induction on $n$. If we have $n>m$, then how can we bound $|x_n-x_m|$?
|
Example of a group where $o(a)$ and $o(b)$ are finite but $o(ab)$ is infinite Let G be a group and $a,b \in G$. It is given that $o(a)$ and $o(b)$ are finite. Can you give an example of a group where $o(ab)$ is infinite?
|
The standard example is the infinite dihedral group.
Consider the group of maps on $\mathbf{Z}$
$$
D_{\infty} = \{ x \mapsto \pm x + b : b \in \mathbf{Z} \}.
$$
Consider the maps
$$
\sigma: x \mapsto -x,
\qquad
\tau: x \mapsto -x + 1,
$$
both of order $2$. Their composition
$$
\tau \circ \sigma (x) = \tau(\sigma(x)) : x \mapsto x + 1
$$
has infinite order.
|
Writing direct proofs Let $x$, $y$ be elements of $\mathbb{Z}$. Prove if $17\mid(2x+3y)$ then $17\mid(9x+5y)$. Can someone give advice as to what method of proof should I use for this implication? Or simply what steps to take?
|
I would write the equations in $\mathbb{Z}_{17}$, which is a field, because $17$ is prime, so linear algebra applies:
$$ 2x+3y=0 $$
is a linear equation of two variables, and you seek to prove that it implies
$$ 9x+5y=0 $$
which means they're linearly dependent. Two equations are linearly dependent if and only if one is a multiple of the other - and this should be easy to prove.
Edit: Since you asked about proof strategy, I'd like to emphasize that this is not some random trick; the condition $p|x$ is not very nice to work with algebraically, but because $\mathbb{Z}_p$ is a field, the equivalent statement $x\equiv 0\mod p$ (I omitted the $\mod 17$ and the $\equiv$ above to make it look more like familiar algebra) is much simpler and better, because you can multiply, add and create linear spaces over $\mathbb{Z}_p$ that behave (in many ways) like real numbers.
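The linear-dependence claim can also be confirmed by brute force over all residues mod $17$ (a check of mine, not part of the argument):

    # every pair (x, y) mod 17 with 17 | 2x+3y also satisfies 17 | 9x+5y,
    # because 9x+5y = 13*(2x+3y) mod 17 (26 = 9 and 39 = 5 mod 17)
    assert all((9*x + 5*y) % 17 == 0
               for x in range(17) for y in range(17)
               if (2*x + 3*y) % 17 == 0)
    print('implication verified')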
|
Klein-bottle and Möbius-strip together with a homeomorphism Consider the Klein bottle (this can be done by making a quotient space). I want to give a proof of the following statement:
The Klein Bottle is homeomorphic to the union of two copies of a Möbius strip joined by a homeomorphism along their boundaries.
I know what such a Möbius band looks like and how we can obtain this also by a quotient map. I also know how to see the Klein bottle, but I don't understand that the given statement is correct. How do you construct such a homeomorphism explicitly?
|
Present a Klein bottle as a square with vertical edges identified in an orientation-reversing manner and horizontal edges identified in an orientation-preserving manner. Now make two horizontal cuts at one-third of the way up and two-thirds of the way up.
The middle third is one Möbius strip. Take the upper and lower thirds and glue them by the original horizontal identification. This is the other Möbius strip.
|
Using Pigeonhole Principle to prove two numbers in a subset of $[2n]$ divide each other Let $n$ be greater or equal to $1$, and let $S$ be an $(n+1)$-subset of $[2n]$. Prove that there exist two numbers in $S$ such that one divides the other.
Any help is appreciated!
|
HINT: Create a pigeonhole for each odd positive integer $2k+1<2n$, and put into it all numbers in $[2n]$ of the form $(2k+1)2^r$ for some $r\ge 0$.
|
Exercise on representations I am stuck on an exercise in Serre, Abelian $\ell$-adic representations (first exercise of chapter 1).
Let $V$ be a vector space of dimension $2$, and $H$ a subgroup of $GL(V)$ such that $\det(1-h)=0$ for all $h \in H$.
Show that in some basis $H$ is a subgroup of either $\begin{pmatrix} 1 & * \\ 0 &* \\ \end{pmatrix}$ or $\begin{pmatrix} 1 & 0 \\ * &* \\ \end{pmatrix}$.
I know that this means that there is a subspace or a quotient of $V$ on which $H$ acts trivially, and I know it is enough to show that $V$ is not irreducible as representation of $H$, but I don't know how to do it.
|
By hypothesis, for all $h \in H$ and basis $(e,f)$ one has
$$\det(h(e)-e,h(f)-f) =0.$$
Let $g \in H$ be different from the identity.
$\bullet$ Suppose that $1$ is the only eigenvalue of $g$. Then in some basis $(e,f)$ the matrix of $g$ is $Mat(g) = \left( \begin{smallmatrix} 1&1\\0&1 \end{smallmatrix}\right)$. Let $h \in H$; then
$$(1) \quad \det(h(e)-e,h(f)-f)=0,$$
$$(2) \quad \det(hg(e)-e,hg(f)-f)=\det(h(e)-e,h(e)+h(f)-f) = 0.$$
If $h(e)-e=0$, then $h(e)$ is colinear to $e$. If $h(f)-f=0$, then (2) shows that $h(e)$ is colinear to $e$. If $h(e) \neq e$ and $h(f)\neq f$, then $h(e)+h(f)-f$ is colinear to $h(e)-e$ (by (2)). The latter is colinear to $h(f)-f$ (by (1)). This implies that $h(e)$ is colinear to $e$. Hence $ke$ is fixed by $H$ and we are done.
$\bullet$ Suppose that $g$ has two distinct eigenvalues $1$ and $a$. Then in some basis $(e,f)$: $Mat(g) = \left( \begin{smallmatrix} 1&0\\0&a \end{smallmatrix}\right)$.
Lemma: if $h \in H$ then
(i) $h(e)=e$ or (ii) $h(f)$ is colinear to $f$.
Proof: We have
$$\det(h(e)-e,h(f)-f) = 0,$$
$$\det(hg(e)-e,hg(f)-f) = \det(h(e)-e,a h(f)-f) = 0.$$
If $h(e) \neq e$, then $h(f)-f$ and $a h(f)-f$ are colinear to each other (because they are both colinear to $h(e)-e$). Since $a \neq 1$, this implies that $h(f)$ is colinear to $f$. QED
To conclude we have to show that either every $h \in H$ satisfies (i) or every $h \in H$ satisfies (ii). If not, then let $h \in H$ not satisfying (i) and $k \in H$ not satisfying (ii). The matrices of $h$ and $k$ have the forms $Mat(h) = \left( \begin{smallmatrix} 1&0\\\alpha&* \end{smallmatrix}\right)$ and $Mat(k) = \left( \begin{smallmatrix} 1&\beta\\0&* \end{smallmatrix}\right)$ with $\alpha \neq 0$ and $\beta \neq 0$. We check that $\det(hk-{\rm{Id}}) = -\alpha \beta$ which is a contradiction.
|
Dimension of a space I'm reading a book about Hilbert spaces, and in chapter 1 (which is supposed to be a revision of linear algebra), there's a problem I can't solve. I read the solution, which is in the book, and I don't understand it either.
Problem: Prove that the space of continuous functions on the interval $[0,1]$, $C[0,1]$, has dimension $c=|\mathbb{R}|$ (the cardinality of the continuum).
Solution: The solution of the book goes by proving that the size of a minimal base of the space $B$ is first $|B|\leqslant c$ and $|B|\geqslant c$, and so $|B|=c$. the proof of it being greater or equal is simple and I understand it, the problem is the other thing. The author does this:
A continuous function is determined by the values it takes at rational numbers, so $|B|\leqslant c^{\aleph_0}=c$.
I don't get that.
|
Note that if $f,g$ are continuous and for every rational number $q$ it holds that $f(q)=g(q)$ then $f=g$ everywhere.
This means that $|B|\leq|\mathbb{R}^{\mathbb{Q}}|=|\mathbb{R}^{\mathbb{N}}|=|\mathbb{R}|$.
Also, $|\mathbb R|$ is not necessarily $\aleph_1$. This assumption is known as the continuum hypothesis.
|
functional relation I need to find the functions $f : \mathbb{R}_{+} \rightarrow \mathbb{R}$ which satisfy
$$ f(0) =1 $$
$$ f(\max(a,b))=f(a)f(b)$$
For each $a,b \geq 0$.
I have found two functions which satisfy my criteria.
$$ f_1(x)=1$$
$$f_2(x) = \begin{cases}
0, & \text{if } x>0 \\
1, & \text{if } x=0
\end{cases}
$$
Is there another function which satisfies my criteria?
|
Since $f(a)f(b)=f(b)$ for all $b\ge a$, either $f(b)=0$ or $f(a)=1$. If $f(a)=0$, then $f(b)=0$ for all $b\ge a$. If $f(b)=1$, then $f(a)=1$ for all $a\le b$.
Thus, it appears that for any $a\ge0$, the functions
$$
f_a^+(x)=\left\{\begin{array}{}
1&\text{for }x\le a\\
0&\text{for }x\gt a
\end{array}\right.
$$
and for any $a\gt0$, the functions
$$
f_a^-(x)=\left\{\begin{array}{}
1&\text{for }x\lt a\\
0&\text{for }x\ge a
\end{array}\right.
$$
and the function
$$
f_\infty(x)=1\quad\text{for all }x
$$
all satisfy the conditions, and these should be all.
|
Are sets always represented by upper case letters? -- and understanding a bit about equivalence relations I'm trying to solve a question which states:
Let $\leq$ be a preorder relation on a set $X$, and $E=\{(x,y)\in X\times X:x\leq y$ and $y\leq x\}$ the corresponding equivalence relation on $X$. Describe $E$ if $X$ is the set of finite subsets of a fixed infinite set $U$, and $\leq$ is the inclusion relation (i.e. $x \leq y$ iff $x\subseteq y$).
I naively said that $E=\{(x,y) \in X\times X: x\subseteq y$ and $y\subseteq x\}=\{(x,y) \in X\times X: x=y\}$.
I have a few concerns, however: the question says $x \leq y$ iff $x\subseteq y$. Can we just assume that $x$ and $y$ are sets, even though they are represented by lower-case letters, and thus use the $\subseteq$ relation on them (it was not stated explicitly that the elements of $X$ are sets)?
Secondly, what is the significance of stating that X is the set of finite subsets of a fixed infinite set U?
Thanks
|
In answer to your second question, this touches on a major component of axiomatic set theory. The Power Set Axiom states that $\forall x\,\exists y\,\forall u\,(u\in y\leftrightarrow u\subseteq x)$: the subsets of a given set again form a set, which is what legitimises forming a collection such as your $X$. (This also bears on your first question: the elements of $X$ are themselves sets, since $X$ consists of finite subsets of $U$; naming them with lower-case letters is only a notational convention.) Many subsequent results are built on this axiom. http://mathworld.wolfram.com/AxiomofthePowerSet.html
|
Showing if something is continuous in Topology If $f : X \to \mathbb{R}$ is continuous,
I want to show that $(cf)(x) = cf(x)$ is continuous, where $c$ is a constant.
Attempt: If $f$ is continuous, then we want to show that the inverse image of every open set in $\mathbb{R}$ is an open set of $X$. Choose an open interval in $\mathbb{R}$.
Thats as far as I got. :(
|
An alternative answer, that does not require you to prove the composition of continuous functions is continuous:
Let $U\subset\mathbb{R}$ be open. In fact we may assume $U=(a,b)$ by taking the open balls as a base for $\mathbb{R}$.
Now,
$$
(cf)^{-1}(U) = \{ x\in X: \exists y\in(a,b): (cf)(x) = y\}
= \left\{x\in X:\exists y\in(a,b): f(x)=\frac{y}{c}\right\}
$$
$$
=\left\{x\in X: \exists z\in \left(\frac{a}{c},\frac{b}{c}\right): f(x)=z\right\}
$$
and this last set is precisely $$f^{-1}\left(\left(\frac{a}{c},\frac{b}{c}\right)\right),$$ which is open since $f$ is continuous. (Here we took $c>0$; for $c<0$ the preimage is $f^{-1}\left(\left(\frac{b}{c},\frac{a}{c}\right)\right)$, and for $c=0$ the map $cf$ is constant, hence continuous.)
|
The Lebesgue Measure of the Cantor Set Why is it that $m(C+C) = 2$, where $C$ is the Cantor set, but $C$ itself has measure zero? What is happening that makes this change occur? Is it true generally that $m(A+B) \neq m(A) + m(B)$? Are there cases in which equality necessarily holds? I should note that this is all within the usual topology on $\mathbb{R}$. Thanks!
|
Adding two sets $A$ and $B$ is in general much more than taking the union of a copy of $A$ and a copy of $B$.
Rather, $A+B$ can be seen as made up of many slices that look like $A$ (one for each $b\in B$), and the Cantor set has the same size as the reals, so here we have in fact as many slices as real numbers. Therein lies the problem: Even $m(A+\{1,2\})$ may be larger than $m(A)+m(\{1,2\})$, and it can be as large as $2m(A)$ (think, for example, of $A=[0,1]$). -- Of course, the slices do not need to be disjoint in general, but this is another issue.
Ittay's answer indicates that if $A,B$ are countable we have $m(A+B)=m(A)+m(B)$, more or less for trivial reasons. There does not seem to be a natural way of generalizing this to the uncountable, in part because Lebesgue measure can never be $\mathfrak c$-additive, meaning that even if we have that $A_r$ is a measurable set for each $r\in\mathbb R$, and it happens that $\sum_r m(A_r)$ converges, and it happens that the $A_r$ are pairwise disjoint, in general $m(\bigcup_r A_r)$ and $\sum_r m(A_r)$ are different. For example, consider $A_r=\{r\}$.
|
How to solve a linear equation by substitution? I've been having a tough time figuring this out. This is how I wrote out the question with substitution.
$$\begin{align}
& I_2 = I_1 + aI_1 (T_2 - T_1) \\
& I_1=100\\
&T_2=35\\
&T_1=-5\\
&a=0.000011
\end{align}$$
My try was $I_2 = 100(1) + 0.000011(100) (35(2)-5(1))$
The answer is $100.044$m but I can't figure out the mechanics of the question.
Thanks in advance for any help
|
The lower temperature is $-5$, and you dropped that sign; also, you should not be multiplying $T_2$ by $2$: the $2$ and $1$ in $T_2$ and $T_1$ are subscripts (labels), not factors. The correct calculation is $I_2=100+0.000011\cdot 100 \cdot (35-(-5))=100.044$
|
Finding a bijection and inverse to show there is a homeomorphism I need to find a bijection and inverse of the following:
$X = \{ (x,y) \in \mathbb{R}^2 : 1 \leq x^2 + y^2 \leq 4 \}$ with its subspace topology in $\mathbb{R}^2$
$Y = \{ (x,y,z) \in \mathbb{R}^3 : x^2 + y^2 = 1$ and $ 0 \leq z \leq 1 \}$ with its subspace topology in $\mathbb{R}^3$
Then show they are homeomorphic. Not sure where to start.
|
Hint: $X$ is an annulus, and $Y$ is a cylinder. Imagine "flattening" a cylinder by forcing its bottom edge inwards and its top edge outwards; it will eventually become an annulus. This is the idea of the map; you should work out the actual expressions describing it on your own.
Here is a gif animation I made with Mathematica to illustrate the idea:
(* map (t, x, y) to the point at distance R + x from the z-axis, angle t, height y *)
F[R_][t_, x_, y_] := {(R + x) Cos[t], (R + x) Sin[t], y}
(* z interpolates the wall: z = 0 gives an upright cylinder, z = Pi/2 a flat annulus *)
BentCylinder[R_, r_, s_, t_, z_] := F[R][t, r + s*Sin[z], s*Cos[z]]
(* render one frame of the deformation *)
BendingCylinder[R_, r_, z_] :=
ParametricPlot3D[
BentCylinder[R, r, s, t, z], {s, -r, r}, {t, 0, 2 Pi}, Mesh -> None,
Boxed -> False, Axes -> None, PlotStyle -> Red,
PlotRange -> {{-10, 10}, {-10, 10}, {-5, 5}}, PlotPoints -> 50]
(* export the frames z = 0 .. Pi/2 as an animated gif *)
Export["animation.gif",
Table[BendingCylinder[6, 2, z], {z, 0, Pi/2, 0.02*Pi}],
"DisplayDurations" -> {0.25}]
|
(partial) Derivative of norm of vector with respect to norm of vector I'm doing a weird derivative as part of a physics class that deals with quantum mechanics, and as part of that I got this derivative:
$$\frac{\partial}{\partial r_1} r_{12}$$
where $r_1 = |\vec r_1|$ and $r_{12} = |\vec r_1 - \vec r_2|$. Is there any way to solve this? My first guess was to set it equal to $1$, since $r_{12}$ is just a scalar, but then I realized it really depends on $r_1$ after all.
The expression appears when I try to solve
$$\frac{\nabla_1^2}{2} \left( \frac{r_{12}}{2(1+\beta r_{12})} \right)$$
($\beta$ is constant)
|
Since these are vectors, one can consider the following approach:
Let
${\bf x}_1 := \overrightarrow{r}_1$ and ${\bf x}_2 := \overrightarrow{r}_2$; then $r_1 = \|{\bf x}_1\|$ and $r_{12} = \|{\bf x}_1 - {\bf x}_2\|$.
Define the following functions:
$g({\bf x}_1) = \|{\bf x}_1 - {\bf x}_2\| = \left[({\bf x}_1 - {\bf x}_2)^T ({\bf x}_1 - {\bf x}_2) \right]^{\frac{1}{2}}$
$f({\bf x}_1) = \|{\bf x}_1\| = \left[ {\bf x}_1^T {\bf x}_1\right]^{\frac{1}{2}}$
Note that these functions are both scalar functions of vectors. Note also the identifications
$r_{12} = g({\bf x}_1)$, and $r_1 = f({\bf x}_1)$.
$\dfrac{\partial}{\partial {r_1}} r_{12} ~=~ \dfrac{\partial}{\partial f({\bf x}_1)} g({\bf x}_1) ~=~ \dfrac{\partial}{\partial {\bf x}_1} g({\bf x}_1) ~~ \dfrac{1}{\dfrac{\partial}{\partial {\bf x}_1} f({\bf x}_1)}$ ...... chain rule
Applying Matrix Calculus and simplifying
$\dfrac{\partial}{\partial f({\bf x}_1)} g({\bf x}_1) ~=~
\dfrac{({\bf x}_1 - {\bf x}_2)^T}{g({\bf x}_1)}\,
\dfrac{f({\bf x}_1)}{{\bf x}_1^T} ~=~
\dfrac{({\bf x}_1 - {\bf x}_2)}{g({\bf x}_1)}\,
\dfrac{f({\bf x}_1)}{{\bf x}_1}
$ ...... since the transpose of a scalar is a scalar
Changing back to the original variables,
$\dfrac{\partial}{\partial {r_1}} r_{12} ~=~\dfrac{\overrightarrow{r}_1 - \overrightarrow{r}_2}{r_{12}} \dfrac{r_1}{\overrightarrow{r}_1}$
|
Limit involving probability Let $\mu$ be any probability measure on the interval $]0,\infty[$. I think the following limit holds, but I don't manage to prove it:
$$\frac{1}{\alpha}\log\biggl(\int_0^\infty\! x^\alpha d\mu(x)\biggr) \ \xrightarrow[\alpha\to 0+]{}\ \int_0^\infty\! \log x\ d\mu(x)$$
In probabilistic terms it can be rewritten as:
$$\frac{1}{\alpha}\log\mathbb{E}[X^\alpha] \ \xrightarrow[\alpha\to 0+]{}\ \mathbb{E}[\log X]$$
for any positive random variable $X$.
Can you help me to prove it?
|
We assume that there is $\alpha_0>0$ such that $\int_0^{+\infty}x^{\alpha_0} d\mu(x)$ is finite. Let $I(\alpha):=\frac 1{\alpha}\log\left(\int_0^{+\infty}x^\alpha d\mu(x)\right)$ and $I:=\int_0^{+\infty}\log xd\mu(x)$.
Since the function $t\mapsto \log t$ is concave, we have $I(\alpha)\geqslant I$ for all $\alpha$.
Now, use the inequality $\log(1+t)\leqslant t$ and the dominated convergence theorem to show that $\lim_{\alpha\to 0^+}\int_0^{+\infty}\frac{x^\alpha-1}\alpha d\mu(x)=I$. Call $J(\alpha):=\int_0^{+\infty}\frac{x^\alpha-1}\alpha d\mu(x)$. Then
$$I\leqslant I(\alpha)\leqslant J(\alpha),$$ and letting $\alpha\to 0^+$, the squeeze theorem gives $I(\alpha)\to I$, as desired.
|
Continuously differentiability of "glued" function I have the following surface for $x,t>0$: $$z(x,t)=\begin{cases}\sin(x-2t)&x\geq 2t\\
(t-\frac{x}{2})^{2}&x<2t \end{cases}$$ How can I show that this surface is not continuously differentiable along the curve $x=2t$? Truly speaking, I have no idea how to start with this example. Thank you. Andrew
|
When $x=2t$, what is the value of $\frac x2-t$? Of $\cos(x-2t)$? Are these equal? (They are the one-sided values of $z_x$ coming from the two pieces.)
|
Armijo's rule line search I have read a paper (http://www.seas.upenn.edu/~taskar/pubs/aistats09.pdf) which describes a way to solve an optimization problem involving Armijo's rule, cf. p363 eq 13.
The variable is $\beta$ which is a square matrix. If $f$ is the objective function, the paper states that Armijo's rule is the following:
$f(\beta^{new})-f(\beta^{old}) \leq \eta(\beta^{new}-\beta^{old})^T \nabla_{\beta} f$
where $\nabla_{\beta} f$ is the vectorization of the gradient of $f$.
I am having problems with this as the expression on the right of the inequality above does not make sense due to dimension mismatch. I am unable to find an analogue of the above rule elsewhere. Can anyone help me as to figure out what the right expression means? The left expression is a scalar while the right expression suffers from a dimension mismatch...
|
In general (i.e. for scalar-valued $x$), Armijo's rule states
$$f(x^{new}) - f(x^{old}) \le \eta \, (x^{new}-x^{old})^\top \nabla f(x^{old}).$$
Hence, you need the vectorization of $\beta^{new}-\beta^{old}$ on the right hand side.
(Alternatively, you could replace $\nabla_\beta f$ by the gradient w.r.t. the Frobenius inner product and write $(\beta^{new}-\beta^{old}) : \nabla_\beta f$, where $:$ denotes the Frobenius inner product.)
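For concreteness, here is a minimal backtracking sketch of this rule for a matrix variable; all names here are hypothetical, with f and grad_f standing in for the paper's objective and its gradient, and direction assumed to be a descent direction:
import numpy as np
def armijo_step(f, grad_f, beta_old, direction, eta=1e-4, shrink=0.5, t=1.0):
    # Vectorize the gradient once; the Armijo test is then scalar <= scalar.
    g = grad_f(beta_old).ravel()
    while True:
        beta_new = beta_old + t * direction
        # (beta_new - beta_old) = t * direction, both sides vectorized
        if f(beta_new) - f(beta_old) <= eta * t * (direction.ravel() @ g):
            return beta_new
        t *= shrink   # backtrack until the condition holds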
|
Solve the equation $y''+y'=\sec(x)$ Solve the equation $$y''+y'=\sec(x).$$ By solving the associated homogenous equation, I get complementary solution is $y_c(x)=A+Be^{-x}$. Then by using the method of variation of parameter, I let $y_p(x)=u_1+u_2e^{-x}$ where $u_1,u_2$ satisfy $$u_1'+u_2' e^{-x}=0, \quad -u_2' e^{-x}=\sec(x).$$ Then I stuck at finding $u_2$ as $u_2^{\prime}=-e^x \sec(x)$ and I have no idea on how to integrate this. Can anyone guide me?
|
$y''+y'=\sec(x)$
$(y'e^x)'=\sec(x)\,e^x$
$y'=e^{-x}\int \sec(x)e^x\,dx
+Ce^{-x}$
$y=\int e^{-x}\int \sec(x)e^x\,dx\, dx+ C_1e^{-x}+C_2$
I do not know of any elementary form for the integrals. Wolfram gives an answer
|
Do subspaces obey the axioms for a vector space? If I have a subspace $W$, then would elements in $W$ obey the 8 axioms for a vector space $V$ such as:
$u + (-u) = 0$ ; where $u ∈ V$
|
Yes, indeed. A subspace of a vector space is also a vector space, with the operations restricted from the vector space of which it is a subspace. As such, the axioms for a vector space are all satisfied by a subspace of a vector space.
As stated in my comment below: in fact, a subset of a vector space is a subspace if and only if it satisfies the axioms of a vector space under the inherited operations.
|
Find all solutions of the equation $x! + y! = z!$ Not sure where to start with this one. Do we look at two cases where $x<y$ and where $x>y$ and then show that the smaller number will have the same values of the greater? What do you think?
|
Let $x \le y < z$:
We get $x! = 1\cdot 2 \cdots x$, $y! = 1\cdot 2 \cdots y$, and $z! = 1\cdot 2 \cdots z$.
Now we can divide by $x!$
$$1 + (x+1)\cdot (x+2)\cdots y = (x+1)\cdot (x+2) \cdots z$$
You can easily show by induction that the right side is bigger than the left for all $z>2$. The only cases that remain are $0! + 0! = 2!$, $0! + 1! = 2!$, $1! + 0! = 2!$, and $1! + 1! = 2!$.
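A brute-force check over a small range (an illustration, not part of the proof) confirms that these are the only solutions:
from math import factorial
print([(x, y, z) for x in range(10) for y in range(10) for z in range(10)
       if factorial(x) + factorial(y) == factorial(z)])
# [(0, 0, 2), (0, 1, 2), (1, 0, 2), (1, 1, 2)]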
|
Limit point of set $\{\sqrt{m}-\sqrt{n}:m,n\in \mathbb N\} $ How can I calculate the limit points of set $\{\sqrt{m}-\sqrt{n}\mid m,n\in \mathbb N\} $?
|
The answer is $\mathbb{R}$. For $x\in (0,\infty)$ and $\epsilon >0$, since $\sqrt{n+1}-\sqrt{n}\to 0$ there is $n_0 \in \mathbb{N}$ with $\sqrt{n_0 +1}-\sqrt{n_0} <\epsilon$. For $m\geq n_0$ the numbers $\sqrt{m}-\sqrt{n_0}$ increase from $0$ to $\infty$ in steps $\sqrt{m+1}-\sqrt{m}<\epsilon$, so some $m$ gives $\sqrt{m}-\sqrt{n_0}\in N_{\epsilon} (x)$.
The proof for $(-\infty , 0)$
is the same (interchange the roles of $m$ and $n$), and $0=\sqrt{1}-\sqrt{1}$ is in the set itself.
|
Number of elements of order $5$ in $S_7$: clarification In finding the number of elements of order $5$ in $S_7$, the symmetric group on $7$ objects, we want to find products of disjoint cycles that partition $\{1,2,3,4,5,6,7\}$ such that the least common multiple of their lengths is $5$. Since $5$ is prime, this allows only products of the form $(a)(b)(cdefg)$, which is just equal to $(cdefg)$. Therefore, we want to find all distinct $5$-cycles in $S_7$.
Choosing our first number we have seven options, followed by six, and so on, and we obtain $(7\cdot 6 \cdot 5 \cdot 4 \cdot 3)=\dfrac{7!}{2!}$. Then, we note that for any $5$-cycles there are $5$ $5$-cycles equivalent to it (including itself), since $(abcde)$ is equivalent to $(bcdea)$, etc. Thus, we divide by $5$, yielding $\dfrac{7!}{2!\cdot 5}=504$ elements of order $5$.
I understand this...it makes perfect sense. What's bothering me is:
Why isn't the number of elements of order five $\binom{7}{5}/5$? It's probably simple, I'm just not seeing why it doesn't yield what we want.
|
This can be checked computationally; in GAP:
g:=SymmetricGroup(7);
h:=Filtered(Elements(g),x->Order(x)=5);
Size(h);
The size of $h$ is $504$, agreeing with the theoretical count. As for $\binom{7}{5}/5$: the binomial coefficient only chooses which five symbols appear in the cycle and ignores their order, so it undercounts; counting the $5!$ orderings and then dividing by the $5$ cyclic rotations gives $\binom{7}{5}\cdot\frac{5!}{5}=\binom{7}{5}\cdot 4!=504$.
|
Method of Undetermined Coefficients I am trying to solve a problem using the method of undetermined coefficients to derive a second order scheme for ux using three points, with coefficients c1, c2, c3, in the following way:
ux = c1*u(x) + c2*u(x - h) + c3*u(x - 2h)
Now second order scheme just means to solve the equation for the second order derivative, am I right?
I understand how this problem works for actual numerical functions, but I am unsure how to go about it when everything is theoretical and just variables.
Thanks for some help
|
Having a second order scheme means that it's accurate for polynomials up to and including second degree. The scheme should calculate the first order derivative $u_x$, as the formula says.
It suffices to make sure that the scheme is accurate for $1$, $x$, and $x^2$; then it will work for all second-degree polynomials by linearity.
To make it work for the function $u(x)=1$, we need
$$ 0= c_1+c_2+c_3
\tag1$$
To make it work for the linear function $u(x)=x$, we need
$$ 1 = c_1 x +c_2(x-h) +c_3(x-2h)
\tag2$$
which in view of (1) simplifies to
$$ 1 = c_2(-h) +c_3(-2h)
\tag{2'}$$
And to make it work for the quadratic function $u(x)=x^2$, we need
$$ 2x = c_1 x^2 +c_2(x-h)^2 +c_3(x-2h)^2
\tag3$$
which in view of (1) and (2') simplifies to
$$ 0 = c_2h^2 +c_3(4h^2)
\tag{3'}$$
Now you can solve the linear system (1), (2') and (3') for the unknowns $c_1,c_2,c_3$.
This may not be the quickest solution, but it's the most concrete one that I could think of.
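If you want to let a computer do the algebra, here is a short SymPy sketch of the same system (assuming SymPy is available):
import sympy as sp
c1, c2, c3, h = sp.symbols('c1 c2 c3 h')
sol = sp.solve([c1 + c2 + c3,            # (1)
                -h*c2 - 2*h*c3 - 1,      # (2'): 1 = c2*(-h) + c3*(-2h)
                h**2*c2 + 4*h**2*c3],    # (3'): 0 = c2*h^2 + c3*(4h^2)
               [c1, c2, c3])
print(sol)  # {c1: 3/(2*h), c2: -2/h, c3: 1/(2*h)}
This recovers the familiar second-order backward difference $u_x \approx \dfrac{3u(x)-4u(x-h)+u(x-2h)}{2h}$.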
|
Banach-algebra homeomorphism. Let $ A $ be a commutative unital Banach algebra that is generated by a set $ Y \subseteq A $. I want to show that $ \Phi(A) $ is homeomorphic to a closed subset of the Cartesian product $ \displaystyle \prod_{y \in Y} \sigma(y) $. Moreover, if $ Y = \{ a \} $ for some $ a \in A $, I want to show that the map is onto.
Notation: $ \Phi(A) $ is the set of characters on $ A $ and $ \sigma(y) $ is the spectrum of $ y $.
I tried to do this with the map
$$
f: \Phi(A) \longrightarrow \prod_{y \in Y} \sigma(y)
$$
defined by
$$
f(\phi) \stackrel{\text{def}}{=} (\phi(y))_{y \in Y}.
$$
I don’t know if $ f $ makes sense, and I can’t show that it is open or continuous. Need your help. Thank you!
|
Note that $\Phi (A)$ is compact in the w$^*$-topology. Also, $\prod \sigma(y)$ is compact Hausdorff in the product topology. The map $f$ you defined is injective, since $Y$ generates $A$ (a character is determined by its values on a generating set).
To prove continuity, take a net $\{\phi_\alpha\}_{\alpha \in I}$ in $\Phi(A)$ such that $\phi_\alpha \rightarrow \phi$ in the w$^*$-topology. Then $\phi_\alpha (y) \rightarrow \phi(y)$ in $\mathbb{C}$, for any $y \in Y$. $\quad(*)$
Now consider a basic open set $V$ around the point $\prod_{y \in Y} \phi(y)$. Then there exist $y_1, y_2, \ldots, y_k \in Y$ such that $V = \prod_{y \in Y} V_y $, where $V_y = \sigma (y)$ for any $y \in Y \setminus \{y_1, y_2, \ldots, y_k\}$ and $ V_{y_i} = V_i $ is an open ball in $\sigma (y_i)$ containing $\phi(y_i)$, for $i = 1, 2, \ldots, k$.
Using (*) we get that for each $i$, $\exists$ $\alpha_i$ such that $\phi_{\beta} (y_i) \in V_i$ for any $\beta \geq \alpha_i$. Since the index set $I$ is directed, $\exists$ $\alpha_0 \in I$ such that $\alpha_0 \geq \alpha_i$ for all $i$. Thus for any $\beta \geq \alpha_0$, we have that $\phi_{\beta} (y_i) \in V_i$ for each $i$, and hence $\prod_{y \in Y} \phi_{\beta} (y) \in V$.
Thus, it follows that $\prod \phi_\alpha (y) \rightarrow \prod \phi(y)$ in $\prod \sigma(y)$, i.e, $f$ is continuous.
|
Evaluating the similarity of watermarks I am working on an assignment where we have to use the NEC algorithm for inserting and extracting a watermark from an image file. Using the techniques described in this article. I am at the point where I want to apply the similarity function in Matlab:
$$\begin{align*}
X &= (x_1,x_2,\dotsc,x_n)\quad\text{original watermark}\\
X^* &= (x^*_1,x^*_2,\dotsc,x^*_n)\quad\text{extracted watermark}\\\\
sim(X, X^*) &= \frac{X^* * X}{\sqrt{X^* * X^*}}\\
\end{align*}$$
After that, the purpose is to compare the result to a threshold value to take a decision.
EDIT Question: what is the best way to implement the sim() function? The values of the watermark vectors are doubles.
|
Well, if you are talking from an efficient/fastest/concise point of view, then MATLAB is vectorized. Let x1 be the vector of the original watermark and let x2 be the vector of the extracted watermark; then I would just do something like
dot(x1,x2)/sqrt(dot(x2,x2))
or even
dot(x1,x2)/norm(x2,2). They are identical.
|
Linear independence of polynomials from Friedberg In Page 38 Example 4, in Friedberg's Linear Algebra:
For $k = 0, 1, \ldots, n$ let $$p_k = x^k + x^{k+1} +\cdots + x^n.$$
The set $\{p_0(x), p_1(x), \ldots , p_n(x)\}$ is linearly independent in $P_n(F)$. For if $$a_0 p_0(x) + \cdots a_n p_n(x) = 0$$ for some scalars $a_0, a_1, \ldots, a_n$ then $$a_0 + (a_0 + a_1)x +(a_0+a_1+a_2)x^2+\cdots+ (a_0+\cdots+a_n)x^n = 0.$$
My question is how he arrives at $$a_0 + (a_0 + a_1)x +(a_0+a_1+a_2)x^2+\cdots+ (a_0+\cdots+a_n)x^n = 0.$$ Why is it $(a_0+....+a_n)x^n$ instead of $(a_n)x^n$ since it was $a_n p_n(x)$? It should be $(a_0+\cdots+a_n)x^0 $ since $x^0$ is common to each polynomial and then $(a_1+\cdots+ a_n) x$, no?
|
You are taking $n$ as variable, but the variable index is $k$. If you write it out, varying $k$, only $p_0$ has the term $1$ (so only $a_0$ is multiplied by $1$), but all $p_i$ have the term $x^n$. I think if you look at your equations carefully again, you'll see it yourself (visually: in your argument, you make the polynomial shorter from the right; but it is made shorter from the left, when counting from $0$ to $n$).
|
Difference of two sets and distributivity If $A,B,C$ are sets, then we all know that $A\setminus (B\cap C)= (A\setminus B)\cup (A\setminus C)$. So by induction
$$A\setminus\bigcap_{i=1}^nB_i=\bigcup_{i=1}^n (A\setminus B_i)$$
for all $n\in\mathbb N$.
Now if $I$ is an uncountable set and $\{B_i\}_{i\in I}$ is a family of sets, is it true that:
$$A\setminus\bigcap_{i\in I}B_i=\bigcup_{i\in I} (A\setminus B_i)\,\,\,?$$
If the answer to the above question will be "NO", what can we say if $I$ is countable?
|
De Morgan's laws are most fundamental and hold for all indexed families, no matter the cardinalities involved. So, $$A-\bigcap _{i\in I}A_i=\bigcup _{i\in I}(A-A_i)$$ and dually $$A-\bigcup_{i\in I}A_i=\bigcap _{i\in I}(A-A_i).$$ The proof is a very good exercise.
|
How to evaluate $\int_0^1\frac{\log^2(1+x)}x\mathrm dx$? The definite integral
$$\int_0^1\frac{\log^2(1+x)}x\mathrm dx=\frac{\zeta(3)}4$$
arose in my answer to this question. I couldn't find it treated anywhere online. I eventually found two ways to evaluate the integral, and I'm posting them as answers, but they both seem like a complicated detour for a simple result, so I'm posting this question not only to record my answers but also to ask whether there's a more elegant derivation of the result.
Note that either using the method described in this blog post or substituting the power series for $\log(1+x)$ and using
$$\frac1k\frac1{s-k}=\frac1s\left(\frac1k+\frac1{s-k}\right)\;$$
yields
$$
\int_0^1\frac{\log^2(1+x)}x\mathrm dx=2\sum_{n=1}^\infty\frac{(-1)^{n+1}H_n}{(n+1)^2}\;.
$$
However, since the corresponding identity without the alternating sign is used to obtain the sum by evaluating the integral and not vice versa, I'm not sure that this constitutes progress.
|
I wrote this to answer a question which was deleted (before I posted) because the answers to this question answered that question.
$$
\begin{align}
\int_0^1\frac{\log^2(1+x)}x\,\mathrm{d}x
&=-2\int_0^1\frac{\log(1+x)\log(x)}{1+x}\,\mathrm{d}x\tag1\\
&=-2\sum_{k=0}^\infty(-1)^{k-1}H_k\int_0^1x^k\log(x)\,\mathrm{d}x\tag2\\
&=-2\sum_{k=0}^\infty(-1)^k\frac{H_k}{(k+1)^2}\tag3\\
&=-2\sum_{k=0}^\infty(-1)^k\left(\frac{H_{k+1}}{(k+1)^2}-\frac1{(k+1)^3}\right)\tag4\\[3pt]
&=-2\left(\frac58\zeta(3)-\frac34\zeta(3)\right)\tag5\\[6pt]
&=\frac14\zeta(3)\tag6
\end{align}
$$
Explanation:
$(1)$: integration by parts
$(2)$: $\frac{\log(1+x)}{1+x}=\sum\limits_{k=0}^\infty(-1)^{k-1}H_kx^k$
$(3)$: $\int_0^1x^k\log(x)\,\mathrm{d}x=-\frac1{(k+1)^2}$
$(4)$: $H_{k+1}=H_k+\frac1{k+1}$
$(5)$: $(7)$ from this answer
$(6)$: simplify
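As a numerical cross-check (assuming SciPy is installed):
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta
val, err = quad(lambda x: np.log(1 + x)**2 / x, 0, 1)
print(val, zeta(3) / 4)  # both approximately 0.3005142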
|
Most efficient method for converting flat rate interest to APR. A while ago, a rather sneaky car salesman tried to sell me a car financing deal, advertising an 'incredibly low' annual interest rate of 1.5%. What he later revealed that this was the 'flat rate' (meaning the interest is charged on the original balance, and doesn't decrease with the balance over time).
The standard for advertising interest is APR (annual percentage rate), where the interest charged decreases in proportion to the balance. Hence the sneaky!
I was able to calculate what the interest for the flat rate would be (merely 1.5% of the loan, fixed over the number of months), but I was unable to take that total figure of interest charged and then convert it to the appropriate APR for comparison.
I'm good with numbers but not a mathematician. To the best of my knowledge I would need to use some kind of trial and error of various percentages (a function that oscillates perhaps?) to find an APR which most closely matched the final interest figure.
What would be the most appropriate mathematical method for achieving this?
Please feel free to edit this question to add appropriate tags - I don't know enough terminology to appropriately tag the question.
|
My rule of thumb to convert APR to Flat or vice versa is as such:
APR = Flat rate x 2 x No. of payments / (No. of payments + 1)
Example: 4% x 2 x 12 / (12 + 1) = 96% / 13 = 7.38% approx.
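For an exact conversion rather than a rule of thumb, you can solve for the periodic rate numerically. The sketch below is a hypothetical helper assuming monthly payments and APR taken as 12 times the monthly rate (conventions vary); it uses simple bisection on the annuity present-value equation:
def flat_to_apr(principal, flat_annual, years):
    n = 12 * years
    payment = principal * (1 + flat_annual * years) / n
    lo, hi = 1e-9, 1.0        # bracket for the monthly rate i
    for _ in range(100):      # bisection: PV of payments is decreasing in i
        i = (lo + hi) / 2
        pv = payment * (1 - (1 + i) ** -n) / i
        lo, hi = (i, hi) if pv > principal else (lo, i)
    return 12 * i
print(flat_to_apr(10000, 0.015, 3))  # about 0.029, i.e. roughly 2.9% APR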
|
Is an infinite linearly independent subset of $\Bbb R$ dense? Suppose $(a_n)$ is a real sequence and $A:=\{a_n \mid n\in \Bbb N \}$ has an infinite linearly independent subset (with respect to field $\Bbb Q$). Is $A$ dense in $\Bbb R?$
|
If $A$ is a linearly independent subset of $\mathbb R$, for each $a\in A$ there is a positive integer $n(a)$ such that $n(a)>|a|$. The set $\left\{\dfrac{a}{n(a)}:a\in A\right\}$ is a linearly independent set with the same cardinality and span as $A$, but it is a subset of $(-1,1)$.
|
Proof of $\sin^2 x+\cos^2 x=1$ using Euler's Formula How would you prove $\sin^2x + \cos^2x = 1$ using Euler's formula?
$$e^{ix} = \cos(x) + i\sin(x)$$
This is what I have so far:
$$\sin(x) = \frac{1}{2i}(e^{ix}-e^{-ix})$$
$$\cos(x) = \frac{1}{2} (e^{ix}+e^{-ix})$$
|
Multiply $\mathrm e^{\mathrm ix}=\cos(x)+\mathrm i\sin(x)$ by the conjugate identity $\overline{\mathrm e^{\mathrm ix}}=\cos(x)-\mathrm i\sin(x)$ and use that $\overline{\mathrm e^{\mathrm ix}}=\mathrm e^{-\mathrm ix}$ hence $\mathrm e^{\mathrm ix}\cdot\overline{\mathrm e^{\mathrm ix}}=\mathrm e^{\mathrm ix-\mathrm ix}=1$.
|
Prove: $a\equiv b\pmod{n} \implies \gcd(a,n)=\gcd(b,n)$ Proof: If $a\equiv b\pmod{n}$, then $n$ divides $a-b$. So $a-b=ni$ for some integer $i$. Then, $b=a-ni$. Since $\gcd(a,n)$ divides both $a$ and $n$, it also divides $b$. Similarly, $a=ni+b$, and since $\gcd(b,n)$ divides both $b$ and $n$, it also divides $a$.
Since $\gcd(a,n)$ is a divisor of both $b$ and $n$, we know $\gcd(a,n)\leq\gcd(b,n)$. Similarly, since $\gcd(b,n)$ is a divisor of both $a$ and $n$, we know $\gcd(b,n)\leq\gcd(a,n)$. Therefore, $\gcd(a,n)=\gcd(b,n)$.
Thanks for helping me through this one!
|
HINT : Let $d = \gcd(b,n)$, then $b = dx$ and $n = dy$ for some $x$ and $y$
|
If $g$ is in $N_G(P)$ then $g\in P$ where $P$ is a $p$-Sylow subgroup Please help me to solve this problem. Let $P$ be a $p$-Sylow subgroup of the finite group $G$ and let $g$ be an element such that $\lvert g \rvert=p^k$; show that if $g \in N_G(P)$ then $g\in P$. Where to start?
|
If $H \leq G$ is a $p$-group, then $H \leq N_G(P)$ if and only if $H \leq P$. Apply this with $H=\langle g\rangle$, which is a $p$-group since $\lvert g\rvert = p^k$.
|
Graph with the smallest diameter. Consider a graph with $N$ vertices where each vertex has at most $k$ edges.
I assume that $k < N$. Which graph with this property has the smallest diameter?
Also, could you suggest good books on graph theory? Thanks.
|
This question is quite difficult. The upper bound on the number of vertices given above by @Boris Novikov coincides with Moore's bound in the case of the Petersen graph. Moore's bound is not only achieved by the Petersen graph, but also by the Hoffman-Singleton graph, and in general, by the so-called Moore graphs. Unfortunately, there are very few Moore graphs. In most other cases, i.e. when Moore's bound cannot be reached, the optimal graph is not known. For more details see "Moore Graphs and Beyond: A survey of the Degree/Diameter Problem", by Mirka Miller and Josef Siran (Elec. J. Combinatorics, Dynamic Survey 14).
|
Does a 3-Dimensional coordinate transformation exist such that its scale factors are equal? Let $\vec r=(x,y,z) $ be the position vector expressed in Cartesian coordinates. Let us define the coordinate transformation as
$\vec r(u,v,w)=(x(u,v,w),y(u,v,w),z(u,v,w)) $
The scale factors are defined by
$h_u=\vert \partial \vec r/\partial u \vert, h_v=\vert \partial \vec r/\partial v \vert, h_w=\vert \partial \vec r/\partial w \vert$
I wonder if a transformation can be defined such that
$h_u=h_v=h_w$
Now a pair of examples in the two-dimensional case.
The transformation between elliptic and Cartesian coordinates:
$\vec r(u,v)=(\cosh(u)\cos(v)/2,\ \sinh(u)\sin(v)/2) $
$h_u=h_v=\sqrt{\cosh^2(u)-\cos^2(v)}/2$
The transformation between parabolic and Cartesian coordinates:
$\vec r(u,v)=((u^2-v^2)/2,\ uv) $
$h_u=h_v=\sqrt{u^2+v^2}$
|
Suppose we add another condition: not only $$\left\lVert\frac{\partial\mathbf r}{\partial u}\right\rVert=\left\lVert\frac{\partial\mathbf r}{\partial v}\right\rVert=\left\lVert\frac{\partial\mathbf r}{\partial w}\right\rVert,$$ but also $$\frac{\partial\mathbf r}{\partial u}\cdot\frac{\partial\mathbf r}{\partial v}=\frac{\partial\mathbf r}{\partial v}\cdot\frac{\partial\mathbf r}{\partial w}=\frac{\partial\mathbf r}{\partial w}\cdot\frac{\partial\mathbf r}{\partial u}=0.$$
That is, the coordinate system is orthogonal, which is usually desirable. (In particular, both of your two-dimensional examples have this property.) This means that the Jacobian matrix of the transformation is a multiple of the identity, and the transformation $(u,v,w)\mapsto(x,y,z)$ is a conformal map.
Liouville's theorem states that in three or more dimensions, all such maps are compositions of translations, similarities, orthogonal transformations and inversions. So the space of such coordinate systems is much more restricted than in two dimensions. Nevertheless, we do have a non-Cartesian example, namely inversion:
$$(x,y,z)=\frac{(u,v,w)}{u^2+v^2+w^2}.$$
This has $\lVert\partial\mathbf r/\partial u\rVert=\lVert\partial\mathbf r/\partial v\rVert=\lVert\partial\mathbf r/\partial w\rVert=1/(u^2+v^2+w^2)$.
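Here is a quick numerical check of this claim (arbitrary sample point, central differences):
import numpy as np
def r(p):  # the inversion map
    return p / np.dot(p, p)
p = np.array([0.7, -1.2, 0.4])
eps = 1e-6
h = [np.linalg.norm((r(p + eps*e) - r(p - eps*e)) / (2*eps)) for e in np.eye(3)]
print(h, 1 / np.dot(p, p))  # three equal scale factors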
|
Linear algebra - Dimension theorem. Suppose we have a vector space $V$, and $U$, $W$ subspaces of $V$.
Dimension theorem states:
$$ \dim(U+W)=\dim U+ \dim W - \dim (U\cap W).$$
My question is:
Why is $U \cap W$ necessary in this theorem?
|
$U \cap W$ is the intersection of the vector spaces $U$ and $W$, that is, the set of all vectors of the space $V$ which are in both subspaces $U$ and $W$.
As $U$ and $W$ are both subspaces of $V$, their intersection $U \cap W$ is also a subspace of $V$ (this assertion can be easily proved). Because $U \cap W$ is a subspace, it is also a vector space itself, and as such it has a basis. The number of elements in this basis will be the space's dimension, $\dim (U \cap W)$.
Loosely speaking, one could think that summing $\dim(U)$ and $\dim(W)$ would yield $\dim(U+W)$. But as $(U \cap W) \subset U$ and $(U \cap W) \subset W$, the sum $\dim(U) + \dim(W)$ "counts" two times the dimension of $U \cap W$ - once in $\dim(U)$ and once more in $\dim(W)$. To make it sum up to $\dim(U+W)$ accurately, we must then subtract the dimension of $U \cap W$, so that it is "counted" only once. This way, we obtain:
$$ \dim(U+W) = \dim(U) + \dim(W) - \dim(U \cap W).$$
Note that this is not, by any means, a formal proof. It is only an informal explanation of why $U \cap W$ is needed in this formula.
|
There exists only two groups of order $p^2$ up to isomorphism. I just proved that any finite group of order $p^2$ for $p$ a prime is abelian. The author now asks to show that there are only two such groups up to isomorphism. The first group I can think of is $G=\Bbb Z/p\Bbb Z\oplus \Bbb Z/p\Bbb Z$. This is abelian and has order $p^2$. I think the other is $\Bbb Z/p^2 \Bbb Z$.
Now, it should follow from the fact that there is only one cyclic group of order $n$ up to isomorphism that these two are unique up to isomorphism. All I need to show is these two are in fact not isomorphic. It suffices to show that $G$ as before is not cyclic. But this is easy to see, since we cannot generate any $(x,y)$ with $x\neq y$ by repeated addition of some $(z,z)$.
Now, it suffices to show that any other group of order $p^2$ is isomorphic to either one of these two groups. If the group is cyclic, we're done, so assume it is not cyclic. One can see that $G=\langle (1,0) ,(0,1)\rangle$. How can I move on?
|
A proof of that could be:
The center of a group is a subgroup, so its order must divide $p^2$. It is a known fact that for a group of order $p^m$, with $p$ prime, the order of the center can be neither $p^{m-1}$ nor $1$; so in our case the center has order $p^2$, and the group is abelian.
That's what you already have. Now, by the structure theorem for finite abelian groups, the group can be either $\mathbb{Z}_p\times\mathbb{Z}_p$ or $\mathbb{Z}_{p^2}$. From that same theorem one can deduce that a group $\mathbb{Z}_m\times\mathbb{Z}_n$ is isomorphic to a group $\mathbb{Z}_{nm}$ iff $\gcd(m,n)=1$, so in our case those two groups are not isomorphic, and there are only two groups of order $p^2$.
|
Is there a simple algorithm for factoring polynomials over the reals? Any real polynomial can be expressed as a product of quadratic and binomial factors like $(x+a)$ and $(x^2 + bx + c)$. Given a polynomial, is there an algorithm which will find such factors?
For example, how can I express $x^4 +1$ in the form $(x^2 + bx + c)(x^2 +dx + f)?$
|
If you can find roots, you can find factors. As others have pointed out, this needs to be done using numerical methods for polynomials of degree greater than 4. In fact it's often a good idea to use numerical methods (rather than closed-form formulae) even in the degree-3 and degree-4 cases. There's lots of good software for numerically finding roots of polynomials; the best-known method is Jenkins–Traub.
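For the concrete example in the question, here is a numerical sketch (using NumPy's root finder) that pairs conjugate roots into real quadratic factors:
import numpy as np
roots = np.roots([1, 0, 0, 0, 1])           # roots of x^4 + 1
for r in [z for z in roots if z.imag > 0]:  # one root per conjugate pair
    # (x - r)(x - conj(r)) = x^2 - 2*Re(r)*x + |r|^2
    print(f"x^2 + ({-2*r.real:.6f})x + ({abs(r)**2:.6f})")
# the two factors are x^2 - sqrt(2)x + 1 and x^2 + sqrt(2)x + 1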
|
How many ways are there to consider $\Bbb Q$ as an $\Bbb R$-module? How many ways are there to consider $\Bbb Q$ as an $\Bbb R$-module?
I guess there is only one way, and that is the trivial case. i.e.
$$\forall r\in \Bbb R,\, \forall a,b\in \Bbb Q \qquad r\cdot \frac{a}{b}=0$$
With an idea I can prove this:
$$\exists r\not=s\in \Bbb R \quad\text{s.t.}\quad r\cdot \frac{a}{b}=s\cdot \frac {a}{b}$$
but I think this idea is wrong.
|
There are no ways to consider $\mathbb{Q}$ as an $\mathbb{R}$-module (i.e. $\mathbb{R}$-vector space), if you follow the usual convention of requiring that modules be unital, i.e. $1\cdot q=q$ for any $q\in\mathbb{Q}$. This is because a vector space over $\mathbb{R}$ (or indeed, any field) is free; because $\mathbb{R}$ is uncountable, any $\mathbb{R}$-vector space must consist of either one element or uncountably many elements, neither of which is the case for $\mathbb{Q}$.
If you do not require your modules to be unital, then the trivial module structure (multiplication by any real number gives 0) is the only possible one. This is because, if there is some $r\in\mathbb{R}$ and some $q\in\mathbb{Q}$ such that $r\cdot q\neq 0$, then we reach the same problem as before, because for any non-zero real number $s$, we have that
$$0\neq rq=\left(\frac{r}{s}\right)\cdot sq$$
so that $sq\neq 0$ for any non-zero $s\in\mathbb{R}$, and we have $rq\neq sq$ if $r\neq s$ (because $(r-s)q\neq 0$), thereby making $\mathbb{Q}$ uncountable again (contradiction).
|
If the sum of the digits of $n$ are divisible by 9, then $n$ is divisible by 9; help understanding part of a proof
Let $n$ be a positive integer such that $n<1000$. If the sum of the digits of $n$ is divisible by 9, then $n$ is divisible by 9.
I got up to here:
$$100a + 10b + c = n$$
$$a+b+c = 9k,\quad k \in\mathbb{Z}$$
I didn't know what to do after this, so I consulted the solution
The next step is:
$$100a+10b+c = n = 9k +99a+9b = 9(k +11a+b)$$
I don't get how you can add $99a + 9b$ seemingly out of nowhere; can someone please explain this for me?
|
Simple way to see it: Take the number $N = \sum_{0 \le k \le n} d_k 10^k$ where the $d_k$ are the digits, modulo 9 you have:
$$
N = \sum_{0 \le k \le n} d_k 10^k
\equiv \sum_{0 \le k \le n} d_k \pmod{9}
$$
since $10 \equiv 1 \pmod{9}$, and so $10^k \equiv 1 \pmod{9}$ for all $k$.
|
Is the set of integers with respect to the p-adic metric compact? Given the integers and a prime $p$. I thought I had successfully shown that $\mathbb{Z}$ was compact with respect to the metric $|\cdot |_p$, by showing that the open ball centered at zero contained all integers with more than a certain number of factors of $p$, and then showing that the remaining integers took on a finite number of possible p-adic absolute values and thus fell into a finite number of balls.
Now if the integers are compact with respect to $|\cdot |_p$, then that means they are complete with respect to $|\cdot |_p$.
But then I read that the p-adic integers $\mathbb{Z}_p$ are defined to be the completion of the integers with respect to $|\cdot |_p$, and include in their completion all the rational numbers with p-adic absolute value less than or equal to one. So this means that the integers with respect to the p-adic metric are not complete, and thus not compact, and hence there must be something wrong with my proof, correct?
Edit: Ok upon typing this up I realized that my proof is most likely wrong as there's no reason to conclude that two elements with the same absolute value are necessarily in the same ball.
|
You don't prove compactness "by showing that the open ball centered at zero contained all integers with ..., and then showing that the remaining integers ... fell into a finite number of balls", i.e. by showing that there is a finite number of open balls covering the space.
Actually you can show more : $\Bbb Z$ with the $p$-adic metric is a bounded metric space : every integer is at a distance less than $1$ from $0$.
Instead, to prove compactness you have to show that for any covering of $\Bbb Z$ by open balls, you can select a finite number of those open balls and still cover $\Bbb Z$. For example, pick the covering of $\Bbb Z$ obtained by placing an open ball on each $n$ with radius $p^{-|n|}$. For most $p$ ($p \ge 5$), you can't extract a finite cover of $\Bbb Z$ from this cover.
|
Book Recommendations and Proofs for a First Course in Real Analysis I am taking real analysis in university. I find that it is difficult to prove some certain questions. What I want to ask is:
*
*How do we come out with a proof? Do we use some intuitive idea first and then write it down formally?
*What books do you recommended for an undergraduate who is studying real analysis? Are there any books which explain the motivation of theorems?
|
While this doesn't speak, directly, to Real Analysis, it is a recommendation that will help you there, and in other courses you're encounter, or will encounter soon:
In terms of both reading and writing proofs, in general, an excellent book to work through and/or have as a reference is Velleman's great text How to Prove It: A Structured Approach. The best way to overcome doubt and apprehension about proofs, whether trying to understand them or to write them, is to be patient, persist, and dig in and do it! (often, write and rewrite, read and reread, until you're convinced and you're convinced you can convince others!)
One helpful (and free-to-use) online resource is the website maintained by MathCS.org: Interactive Real Analysis.
"Interactive Real Analysis is an online, interactive textbook for Real Analysis or Advanced Calculus in one real variable. It deals with sets, sequences, series, continuity, differentiability, integrability (Riemann and Lebesgue), topology, power series, and more."
|
Norm of the operator $Tf=\int_{-1}^0f(t)\ dt-\int_{0}^1f(t)\ dt$ Consider the operator $T:(C[-1, 1], \|\cdot\|_\infty)\rightarrow \mathbb R$ given by $$Tf=\int_{-1}^0f(t)\ dt-\int_{0}^1f(t)\ dt.$$ How can one show that $\|T\|=2$? One direction is easy:
$$\begin{align}
|Tf|&=\left|\int_{-1}^0f(t)\ dt-\int_{0}^1f(t)\ dt\right|\\&\leq \int_{-1}^0|f(t)|\ dt+\int_{0}^1|f(t)|\ dt\\ &\leq \|f\|_\infty \left(\int_{-1}^0\ dt+\int_{0}^1\ dt\right)\\ &=2\|f\|_\infty.
\end{align}$$ How to show $\|T\|\geq 2$?
|
Try $f$ piecewise linear with $f(x)=-1$ if $x\leqslant -a$, $f(x)=x/a$ if $-a\leqslant x\leqslant a$ and $f(x)=+1$ if $x\geqslant a$, when $a\to0$.
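For the record, a direct computation with this $f$ gives $\int_{-1}^0 f=-(1-a)-\frac a2$ and $\int_{0}^1 f=(1-a)+\frac a2$, so $|Tf|=2-a$ while $\|f\|_\infty=1$; letting $a\to 0$ shows $\|T\|\geq 2$.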
|
Total Orders and Minimum/Maximum elements How can I prove that for any given poset $(A,\preceq)$, if $\preceq$ is a total order then for all $a\in A$, if $a$ is maximal, then $a$ is a maximum? Same goes for minimal/minimum.
|
I assume that you are asking the following:
In a totally ordered set, why is every maximal element a greatest element?
The answer is simple: Assume $a$ is maximal. Let $b$ be an arbitrary element of our set. Since the set is totally ordered set we have either $a\leq b$ or $b\leq a$. Since $a$ is maximal, we must have $b\leq a$. Since $b$ was arbitrary, it follows that $a$ is a greatest element.
|
Range and Kernel of a $3\times 3$ identity matrix? Geometrically describe both.
The kernel would be just the zero vector, correct? And would also merely live in the 1st dimension, but geometrically speaking be nonexistent?
And for the range of the matrix, it would just be $\{(1,0,0),(0,1,0),(0,0,1)\}$. Geometrically this would span the whole 3-dimensional space, correct?
|
The kernel is a subspace of the domain, regarding the matrix as a transformation. So the kernel is written as $\{ (0, 0, 0) \}$. Now, you can certainly regard this kernel as a space as well, but it is properly called $0$-dimensional, not nonexistent.
Your range is correct. All of $\mathbb{R}^3$.
|
How is GLB and LUB different than the maximum and minimum of a poset? As the subject asks, how is the greatest lower bound different than the minimum element in a poset, and subsequently, how is the least upper bound different than the minimum? How does a set having no maximum but multiple maximal elements affect the existence of a LUB?
Edit:
Here's an assignment question I have on the topic...
Consider the subset $A=\{4,5,7\}$ of $W$.
Does $\sup(A)$ exist? What about $\inf(A)$?
I'm guessing that Sup is Suprema (or LUB), and inf is Infimum (GLB)... but I have no idea because I missed that day in class.
All I can guess is that the subset $A$ is ordered such that $7\preceq 5$, but $4$ and $5$ are incomparable. I don't know what to do with it.
|
The supremum coincides with the maximum in the finite case. That said, a great many basic examples are not finite, so the supremum is a different operation in general. The most notable point here is that the supremum is an element not necessarily in the original set.
|
Surface area of a cylinders and prisms A cylinder has a diameter of 9cm and a height of 25cm. What is the surface area of the cylinder if it has a top and a base?
|
The top and the bottom each have area $\pi(9/2)^2$.
For the rest, use a can opener to remove the top and bottom of the can. Then use metal shears to cut straight down, and flatten out the metal. We get a rectangle of height $25$, and width the circumference of the top. So the width is $9\pi$, and therefore the area of the rectangle is $(9\pi)(25)$.
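Putting the pieces together: total surface area $=2\pi\left(\tfrac92\right)^2+(9\pi)(25)=40.5\pi+225\pi=265.5\pi\approx 834.1\text{ cm}^2$.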
|
Change of Variables in a 3 dimensional integral Let $\int_0^{\infty}\int_0^{\infty}\int_{-\infty}^{\infty}f(x_1,x_2,x_3)dx_1dx_2dx_3$ be a 3-dimensional integral (i.e. $0\leq x_1,x_2<\infty$, $-\infty< x_3<\infty$). I am defining the following change of variables: $$x_1'=x_1,\quad x_2'=x_2,\quad x_3'=c_1x_1+c_2x_2+c_3x_3$$
Question : What will be my new domain expressed in $x_1',x_2',x_3'.$ I understand the Jacobian concept but it is not clear how to define my new boundaries.
|
New limits coincide with the old ones: for fixed $x_1'=x_1\geq 0$ and $x_2'=x_2\geq 0$, the variable $x_3'=c_1x_1+c_2x_2+c_3x_3$ still ranges over all of $\mathbb{R}$, provided $c_3\neq 0$ (which is needed for the change of variables to be invertible at all).
|
Derivatives of univalent functions must converge to derivative of univalent function? This is probably something basic that I am missing. I am reading the article Normal Families: New Perspectives by Lawrence Zalcman, and in one of his examples he makes the following assertion (I am paraphrasing, not quoting, for brevity - I hope I didn't ruin the correctness of the statement):
Let $f_n : D \to \mathbb C$ be a sequence of analytic univalent functions on a domain $D$. Suppose the sequence of first derivatives $g_n := f_n'$ converges locally uniformly to an analytic function $g$. Then $g$ is also the first derivative of a univalent function on $D$, or zero.
Why is this true? It looks like a twist on Hurwitz's Theorem, but I don't get it.
|
Yes, this is a twist on Hurwitz's theorem: the limit of non-vanishing functions $f_n'$ is either identically zero, or nowhere zero. The first case is clear. In the second case, fix a point $z_0\in D$ and consider the functions $\tilde f_n=f_n-f_n(z_0)$. Since $\tilde f_n$ is pinned down at $z_0$, and the derivatives $\tilde f_n'=f_n'$ converge locally uniformly, the functions $\tilde f_n$ converge locally uniformly. Let $f$ be their limit. Since $f'=\lim \tilde f_n' = g$ is nowhere zero, $f$ is locally univalent.
To prove that $f$ is univalent in $D$, argue by contradiction: suppose there is $w\in \mathbb C$ and points $z_1\ne z_2$ such that $f-w$ vanishes at $z_1,z_2$. Pick $r>0$ such that the $r$-neighborhoods of $z_1,z_2$ are disjoint. By Hurwitz's theorem, for all sufficiently large $n$ the function $\tilde f_n-w$ vanishes in the aforementioned neighborhoods. This is a contradiction, because $\tilde f_n$ is univalent.
|
Any open subset of $\Bbb R$ is a countable union of disjoint open intervals
Let $U$ be an open set in $\mathbb R$. Then $U$ is a countable union of disjoint intervals.
This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.
|
In a locally connected space $X$, all connected components of open sets are open. This is in fact equivalent to being locally connected.
Proof: (one direction) let $O$ be an open subset of a locally connected space $X$. Let $C$ be a component of $O$ (as a (sub)space in its own right). Let $x \in C$. Then let $U_x$ be a connected neighbourhood of $x$ in $X$ such that $U_x \subset O$, which can be done as $O$ is open and the connected neighbourhoods form a local base. Then $U_x,C \subset O$ are both connected and intersect (in $x$) so their union $U_x \cup C \subset O$ is a connected subset of $O$ containing $x$, so by maximality of components $U_x \cup C \subset C$. But then $U_x$ witnesses that $x$ is an interior point of $C$, and this shows all points of $C$ are interior points, hence $C$ is open (in either $X$ or $O$, that's equivalent).
Now $\mathbb{R}$ is locally connected (open intervals form a local base of connected sets) and so every open set is a disjoint union of its components, which are open connected subsets of $\mathbb{R}$, hence are open intervals (potentially of infinite "length", i.e. segments). That there are countably many of them at most, follows from the already given "rational in every interval" argument.
|
$ \int_{0}^{1}x^m(1-x)^{15-m}dx$ where $m\in \mathbb{N}$ Evaluate $\displaystyle \int_{0}^{1}x^m(1-x)^{15-m}\,dx$, where $m\in \mathbb{N}$.
My try: put $x=\sin^2 \theta$, so that $dx = 2\sin \theta\cos \theta\,d\theta$; changing the limits accordingly, we get
$ \displaystyle \int_{0}^{\frac{\pi}{2}}\sin^{2m}\theta\,\cos^{30-2m}\theta\cdot 2\sin \theta\cos \theta\, d\theta$
$ = \displaystyle 2\int_{0}^{\frac{\pi}{2}} \sin^{2m+1}\theta\,\cos^{31-2m}\theta\, d\theta$
How can I proceed from here?
Is there a method to calculate the given integral? If so, please explain.
Thanks
|
$$f(m,n)=\int_0^1 x^m(1-x)^ndx$$
Repeated partial integration on the right hand side reveals that:
$$(m+1)f(m,n)=n\,f(m+1,n-1)$$
$$(m+2)(m+1)f(m,n)=n(n-1)\,f(m+2,n-2)$$
$$\cdots$$
$$(m+n)\cdots (m+1)\,f(m,n)=n!\,f(m+n,0)$$
$$\text{i.e}$$
$$(m+n)!f(m,n)=n!\,m!\,f(m+n,0)$$
Since
$$f(m+n,0)=\int_0^1 t^{m+n}dt=\frac{1}{m+n+1}$$
We have:
$$f(m,n)=\frac{n!\,m!}{(m+n+1)!}$$
Now simply let $n=15-m$
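Explicitly, then, $\displaystyle\int_0^1 x^m(1-x)^{15-m}\,dx=\frac{m!\,(15-m)!}{16!}=\frac{1}{16\binom{15}{m}}$.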
|
How to solve $ 13x \equiv 1 ~ (\text{mod} ~ 17) $?
How to solve $ 13x \equiv 1 ~ (\text{mod} ~ 17) $?
Please give me some ideas. Thank you.
|
$$\frac{17}{13}=1+\frac4{13}=1+\frac1{\frac{13}4}=1+\frac1{3+\frac14}$$
The last but one convergent of $\frac{17}{13}$ is $1+\frac13=\frac43$
Using the relationship of the successive convergents of a continued fraction, $17\cdot3-13\cdot4=-1\implies 13\cdot4\equiv1\pmod{17}\implies x\equiv4\pmod{17}$
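As a check, $13\cdot 4=52=3\cdot 17+1\equiv 1\pmod{17}$. (In Python 3.8+ the same inverse is obtained with pow(13, -1, 17), which returns 4.)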
|
Is it necessary to know a lot of advance math to become a good junior high/high school teacher? By "advance math" I refer to Real Analysis, Abstract Algebra and Linear Algebra (to the level of Axler). I received mainly Bs in these courses with the exception of the intro-level Linear Algebra. Since I intend to be a teacher, is it necessary to have mastered these subjects at the elementary level?
|
"Is it necessary to know a lot of advanced math to become a good junior high/high school teacher?"
It is neither necessary nor sufficient.
One of the best math teachers I've ever had probably doesn't remember much (if any) advanced math beyond linear algebra. I don't know how he performed as a math student himself, but I'm not sure it really matters. He is almost universally loved by his students, is very involved in the math team, and consistently challenges students to go beyond the routine algorithms and formulas.
On the flip side, I've also had teachers who certainly knew advanced math very well, but were unable to communicate effectively even the basics to students.
Of course, as Qiaochu points out, having an understanding of advanced math certainly couldn't hurt. For example, it might help provide a deeper context for some of the math you're teaching.
Now I'm not a teacher myself yet, so maybe take what I'm saying with a grain of salt. But as far as I can tell, good teaching is ultimately about dedication, effective communication, a responsiveness to student needs, and an interest in mathematics that goes beyond the textbook. I think these are the important things.
|
composition of continuous functions I was wondering if a function $f:[a,b]\rightarrow[c,d]$ is continuous, $g:[c,d]\rightarrow\mathbb{R}$ is continuous, does it necessarily imply that $g\circ f$ is continuous? Are there counterexamples? What is the necessary and sufficient condition for $g\circ f$ to be continuous?
This is not HWQ. I am just wondering if that is possible.
|
With the sequence definition of continuity it is obvious that $g\circ f$ is continuous, because
$$\lim_{n\rightarrow \infty} g(f(x_n))=g(\lim_{n\rightarrow \infty} f(x_n)) = g(f(\lim_{n\rightarrow \infty} x_n))$$
because $f$ and $g$ are continuous.
It is hard to give a necessary condition for the composition of functions to be continuous. For example, the Dirichlet function
$$D(x)=\left\{
\begin{array}{rl}
0 & x\in \mathbb{R}\setminus \mathbb{Q}\\
1 & x \in \mathbb{Q}\\
\end{array}
\right.$$
is discontinuous in every $x\in \mathbb{R}$ but $D(D(x))=1$ is $C^\infty$.
$C^\infty$ means the function is arbitrarily often continuously differentiable.
|
Lambda Calculus Equivalence I'm a bit new to lambda calculus and was wondering about the equivalence of two expressions
$$(\lambda x.\lambda y.xy)\lambda z.z\overset{?}=(\lambda x.\lambda y.xy)(\lambda z.z)$$
Can anyone help out?
|
By convention the outermost parentheses are dropped for minimal clutter.
$$\color{red}{(\lambda x.\lambda y.xy)}\color{blue}{\lambda z.z}\iff\color{red}{(\lambda x.\lambda y.xy)}\color{blue}{(\lambda z.z)}$$
The same thing is done in algebra:
$$\color{red}{(z)}\color{blue}{(x+y)}\iff \color{red}z\color{blue}{(x+y)}$$
In lambda calculus there is a similar "order of operations" as in conventional mathematics. Note in particular that parenthesized expressions are evaluated first.
|
Ordinal exponentiation - $2^{\omega}=\omega$ This is my understanding of ordinal arithmetic - two ordinals are the same as one another if there is an order-preserving bijection between them. So for instance
$$1+\omega = \omega$$
because if
$$f(\langle x,y\rangle)=\begin{cases}y+1 & x=1\\ 0 &\text{otherwise}\end{cases}$$
Then $f$ is an order-preserving bijection between $\{ 0 \} \times 1 \cup \{ 1 \} \times \omega$ and $\omega$, where $\{ 0 \} \times 1 \cup \{ 1 \} \times \omega$ is endowed with the addition order.
Likwise if
$$g(\langle x,y \rangle)=2 \times x+y$$
Then $g$ is an order-preserving bijection between $2 \times \omega$ and $\omega$, where $2 \times \omega$ is endowed with the multiplication order, and so $2 \cdot \omega =\omega$, whereas $2 \cdot \omega \neq \omega \cdot 2$ because $\langle 0,1 \rangle$ is a limit of $\omega \times 2$ under the multiplication order whereas $2 \cdot \omega$ has no limit ordinals.
On Wikipedia's page, Exponentiation is described for ordinals, where in particular, it says that $2^{\omega} = \omega$. How can this be when $\omega$ does not even have the same cardinality as $2^{\omega}$ - to wit, isn't $2^{\omega}$ uncountable, with the same cardinality as the reals?
|
Ordinal exponentiation is not cardinal exponentiation.
The cardinal exponentiation $2^\omega$ is indeed uncountable and has the cardinality of the continuum.
The ordinal exponentiation $2^\omega$ is the supremum of $\{2^n\mid n\in\omega\}$ which in turn is exactly $\omega$ again.
Also related:
*
*How is $\epsilon_0$ countable?
*Do $\omega^\omega=2^{\aleph_0}=\aleph_1$?
|
Derivative of $e^{\ln(1/x)}$ This question looks so simple, yet it confused me.
If $f(x) = e^{\ln(1/x)}$, then $f'(x) =$ ?
I got $e^{\ln(1/x)} \cdot \ln(1/x) \cdot (-1/x^2)$.
And the correct answer is just the plain $-1/x^2$. But I don't know how I can cancel out the other two function.
|
Hint: the exponential and logarithm are inverse functions:
$$e^{\ln u}=u$$
for any $u>0$.
|
Are all integers fractions? In a college class I was asked this question on a quiz in regards to sets:
All integers are fractions. T/F.
I answered False because if an integer is written in fraction notation it is then classified as a rational number. The teacher said the answer was True and gave me the link http://www.purplemath.com/modules/numtypes.htm. As a teacher of mathematics in the K-12 system I have always taught that integers were all whole numbers above and below zero, and including zero. All of the resources I have used agree to my definition. Please clarify this for me.
What is the truth, or are she and I just mincing words?
|
Every integer $x \in \mathbb Z$ can be expressed as the fraction $x \over 1$
|
Is $f(x)= \cos(e^x)$ uniformly continuous? As in the topic, my quest is to check (and prove) whether the given function $$f(x)= \cos(e^x)$$ is uniformly continuous on each of the intervals $(-\infty,0]$ and $[0,+\infty)$.
My problem is that I have absolutely no idea how to do it. Any hints will be appreciated and do not feel offended if I ask you a question which you consider stupid, but such are the beginnings. Thank you in advance.
|
Note that
$f$ is differentiable on $\mathbb{R}$ and $f'(x)=-e^x\sin{(e^x)}.$
Let $x_n=\ln{\left(\dfrac{\pi}{6}+2\pi n\right)}, \;\; y_n= \ln{\left(\dfrac{\pi}{3}+2\pi n\right)}.$
Then $$|x_n-y_n| = {\ln{\left(\dfrac{\pi}{3}+2\pi n\right)} -\ln{\left(\dfrac{\pi}{6}+2\pi n\right)}}=\ln{\dfrac{\dfrac{\pi}{3}+2\pi n}{{\dfrac{\pi}{6}+2\pi n}}}=\\ = \ln{\left(1+\dfrac{1}{1+12 n }\right)}\sim \dfrac{1}{1+12 n }, \;\; n\to \infty .$$
By the mean value ( Lagrange's) theorem $$|f(x_n)-f(y_n|=|f'(\xi_n)|\cdot|x_n-y_n|=e^{\xi_n}|\sin(e^{\xi_n})|\cdot|x_n-y_n|\geqslant \\ \geqslant e^{x_n}\cdot\dfrac{1}{2}\cdot\dfrac{1}{1+12 n }=\dfrac{\dfrac{\pi}{6}+2\pi n}{2+24n} \underset{n\to\infty}{\rightarrow} \dfrac {\pi}{12},$$
which proves that $f$ is not uniformly continuous on $[0,\;+\infty)$.
|
Value of limsup i? This is a part of my question.
$\lim \sup \cos(n\pi/12)$ as n goes to infinity
What is the value of this limit?
|
When a sequence is bounded, the limsup is the largest limit of all convergent subsequences.
For all $n$, $\cos(n\pi/12)\leq 1$, so $\limsup\cos(n\pi/12) \leq 1$.
And for the subsequence $n=24k$, $\cos(n\pi/12)=\cos(2k\pi)=1$. So $\limsup\cos(n\pi/12)\geq 1$.
Hence
$$
\limsup_{n\rightarrow +\infty}\cos(n\pi/12)=1.
$$
|
A ‘strong’ form of the Fundamental Theorem of Algebra Let $ n \in \mathbb{N} $ and $ a_{0},\ldots,a_{n-1} \in \mathbb{C} $ be constants. By the Fundamental Theorem of Algebra, the polynomial
$$
p(z) := z^{n} + \sum_{k=0}^{n-1} a_{k} z^{k} \in \mathbb{C}[z]
$$
has $ n $ roots, including multiplicity. If we vary the values of $ a_{0},\ldots,a_{n-1} $, the roots will obviously change, so it seems natural to ask the following question.
Do the $ n $ roots of $ p(z) $ depend on the coefficients in an analytic sort of way? More precisely, can we find holomorphic functions $ r_{1},\ldots,r_{n}: \mathbb{C}^{n} \to \mathbb{C} $ such that
$$
z^{n} + \sum_{k=0}^{n-1} a_{k} z^{k} = \prod_{j=1}^{n} [z - {r_{j}}(a_{0},\ldots,a_{n-1})]?
$$
The definition of a holomorphic function of several complex variables is given as follows:
Definition Let $ n \in \mathbb{N} $ and $ \Omega \subseteq \mathbb{C}^{n} $ be a domain (i.e., a connected open subset). A function $ f: \Omega \to \mathbb{C} $ is said to be holomorphic if and only if it is holomorphic in the usual sense in each of its $ n $ variables.
The existence of $ r_{1},\ldots,r_{n}: \mathbb{C}^{n} \to \mathbb{C} $ that are continuous seems to be a well-known result (due to Ostrowski, perhaps?), but I am unable to find anything in the literature that is concerned with the holomorphicity of these functions.
Any help would be greatly appreciated. Thank you very much!
|
To give a little expansion to @Andreas’s answer, let’s examine a little more closely the way the coefficients depend on the roots. Let’s take an $n$-tuple of roots, say $\rho=(\rho_1,\cdots,\rho_n)$ and form the corresponding $n$-tuple whose entries are the coefficients $a=(a_0,\cdots,a_{n-1})$ of the monic polynomial whose roots are the $\rho_i$’s. You have the map $C\colon\rho\mapsto a$, and you can ask what the Jacobian determinant is of this map, call it $J$. Then the fact is that $J^2$ is the discriminant of the polynomial $F(x)=x^n+\sum_0^{n-1}a_{n-i}x^i$, which as you probably know is a polynomial in the $a_i$’s. This fact makes it very clear, via the Inverse Function Theorem, how and when and why the roots depend on the coefficients.
|
Number of ways to seat students at a round table subject to certain conditions. In an Olympic contest, there are $n$ teams. Each team is composed of $k$ students attending different subjects. How many ways are there to seat all the students at a round table such that $k$ students in a team sit together and there are no two students who attend the same subject seat next to one another?
My attempt:
Let $S_n$ denote the total way to seat all the student in $n$ teams with $k$ students on each team in a way that satisfies the problem.
Then I find that for all $n \geq 2$, $S_{n+1}=\alpha\, n\, S_n$ with $\alpha = 2(k-1)!-(k-2)!$.
But I have a problem finding $S_2$, because it may not be related to $S_1$. Help me!
|
I think that your recurrence isn’t quite right. If you start with an acceptable arrangement of $n$ teams, you can insert an $(n+1)$-st team in any of the $n$ slots between adjacent teams. The members of the new team can be permuted in $k!$ ways; $(k-1)!$ of these have an unacceptable person at one end, $(k-1)!$ have an unacceptable person at the other end, and $(k-2)!$ have an unacceptable person at both ends, so
$$S_{n+1}=n\Big(k!-2(k-1)!+(k-2)!\Big)S_n\;,$$
i.e., we should have $\alpha=k!-2(k-1)!+(k-2)!$.
I’m assuming now that arrangements that differ only by a rotation of the table are considered the same. Then $S_1=(k-1)!$. There are at least two ways to see that $S_2=k!\alpha$.
*
*Start with any of the $(k-1)!$ arrangements of one team. There are $k$ slots into which we can insert the second team, and the argument given above shows that within its slot it can be arranged in $\alpha$ ways, for a total of $(k-1)!k\alpha=k!\alpha$ arrangements.
*There are $k!$ ways to seat the first team around half of the table, and by the argument given above there are $\alpha$ acceptable ways to seat the second team around the other half of the table.
Combine this with the recurrence $S_n=(n-1)\alpha S_{n-1}$ for $n\ge 3$, and you can easily get a closed expression for $S_n$ in terms of $n$ and $k$.
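(For the record, unwinding the recurrence gives the closed expression $S_n=(n-1)!\,k!\,\alpha^{n-1}$ for $n\ge 2$, with $\alpha=k!-2(k-1)!+(k-2)!$.)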
|
Elementary topology problem. Let $ ((Y_{\alpha},\tau_{\alpha}) \mid \alpha \in J) $ be a $ J $-indexed family of topological spaces and $ X $ any non-empty set. Let $ (f_{\alpha} \mid \alpha \in J) $ be a $ J $-indexed family of functions, where $ f_{\alpha}: Y_{\alpha} \to X $. What topology $ \tau $ can you put on $ X $ that will make all of the $ f_{\alpha} $’s continuous with respect to the $ \tau_{\alpha} $’s and $ \tau $?
Please help me with this. I think that I only need to put the indiscrete topology on $ X $, so that the only open sets in $ X $ are $ X $ itself and $ \varnothing $, and the inverse image of each of these under $ f_{\alpha} $ is open in $ Y_{\alpha} $.
|
One can define a topology $ \tau $ on $ X $ as follows:
Declare a subset $ U $ of $ X $ to be $ \tau $-open if and only if $ {f_{\alpha}^{\leftarrow}}[U] \in \tau_{\alpha} $ for each $ \alpha \in J $.
Then $ \tau $ is the finest topology on $ X $ that makes $ f_{\alpha}: (Y_{\alpha},\tau_{\alpha}) \to (X,\tau) $ continuous for each $ \alpha \in J $.
|
diagonalizability of matrix If $A$ is invertible, $F$ is algebraically closed, and $A^n$ is diagonalizable for some $n$ that is not an integer multiple of the characteristic of $F$, then $A$ is diagonalizable.
My question is:
(1) Why is the condition "$A$ is invertible" essential?
(2) In Wikipedia, diagonalizable matrix, it says $A$ satisfies the polynomial $p(x)=(x^n- \lambda_1) \cdots (x^n- \lambda_k)$; does that mean $p(x)$ is the minimal polynomial of $A^n$? If yes, why can it be written in the form $(x^n- \lambda_1) \cdots (x^n- \lambda_k)$ instead of $(x- \lambda_1) \cdots (x- \lambda_k)$?
Thank you very much!
|
If $A$ is invertible then $A^n$ is also invertible, so $0$ is not an eigenvalue of $A^n$.
Since $A^n$ is diagonalizable, its minimal polynomial $P$ is a product of distinct linear factors over $F$:
$$P(X)=(X-\lambda_1)\cdots(X-\lambda_k),\quad \lambda_i\neq\lambda_j$$
We know that $A^n$ is annihilated by $P$: $P(A^n)=0$, so $A$ is annihilated by the polynomial
$$Q(X)=(X^n-\lambda_1)\cdots(X^n-\lambda_k),$$
and since each $\lambda_i\neq0$, each polynomial $(X^n-\lambda_i)$ has no multiple root (its derivative is $nX^{n-1}$, and $n$ is not an integer multiple of the characteristic of $F$, so the only candidate for a repeated root is $0$, which is not a root). The factors are pairwise coprime since the $\lambda_i$ are distinct, so $Q$ is a product of distinct linear factors and $A$ is annihilated by $Q$. Hence $A$ is a diagonalizable matrix.
|
$\sum \limits_{k=1}^{\infty} \frac{6^k}{\left(3^{k+1}-2^{k+1}\right)\left(3^k-2^k\right)} $ as a rational number. $$\sum \limits_{k=1}^{\infty} \frac{6^k}{\left(3^{k+1}-2^{k+1}\right)\left(3^k-2^k\right)} $$
I know from the ratio test that it converges, and from graphing it on Wolfram Alpha I suspect the sum is $2$; however, I am having trouble with the manipulation of the fraction needed to exhibit the rational value.
P.S. When it says to write it as a rational number, does it mean to write the value of $S_{\infty}$, or to rewrite the fraction?
|
That denominator should suggest the possibility of splitting the general term into partial fractions and getting a telescoping series of the form
$$\sum_{k\ge 1}\left(\frac{A_k}{3^k-2^k}-\frac{A_{k+1}}{3^{k+1}-2^{k+1}}\right)\;,$$
where $A_k$ very likely depends on $k$. Note that if this works, the sum of the series will be
$$\frac{A_1}{3^1-2^1}=A_1\;.$$
Now
$$\frac{A_k}{3^k-2^k}-\frac{A_{k+1}}{3^{k+1}-2^{k+1}}=\frac{3^{k+1}A_k-3^kA_{k+1}-2^{k+1}A_k+2^kA_{k+1}}{(3^k-2^k)(3^{k+1}-2^{k+1})}\;,$$
so you want to choose $A_k$ and $A_{k+1}$ so that
$$3^{k+1}A_k-3^kA_{k+1}-2^{k+1}A_k+2^kA_{k+1}=6^k\;.$$
The obvious things to try are $A_k=2^k$, which makes the last two terms cancel out to leave $3^{k+1}2^k-3^k2^{k+1}=6^k(3-2)=6^k$, and $A_k=3^k$, which makes the first two terms cancel out and leaves $6^k(3-2)=6^k$; both work.
However, summing $$\sum_{k\ge 1}\left(\frac{A_k}{3^k-2^k}-\frac{A_{k+1}}{3^{k+1}-2^{k+1}}\right)\tag{1}$$ to $$\frac{A_1}{3^1-2^1}=A_1$$ is valid only if
$$\lim_{k\to\infty}\frac{A_k}{3^k-2^k}=0\;,$$
since the $n$-th partial sum of $(1)$ is
$$A_1-\frac{A_{n+1}}{3^{n+1}-2^{n+1}}\;.$$
Checking the two possibilities, we see that
$$\lim_{k\to\infty}\frac{2^k}{3^k-2^k}=0,\quad\text{but}\quad\lim_{k\to\infty}\frac{3^k}{3^k-2^k}=1\;,$$
so we must choose $A_k=2^k$, and the sum of the series is indeed $A_1=2$.
|
Eigenvalues - Linear algebra If $c$ is an eigenvalue of the matrix $B$,
how can I prove that $c^k$ is an eigenvalue of the matrix $B^k$?
I am not sure what I should try here... will appreciate your help.
|
Induction...?
$$B(v)=cv\Longrightarrow B^k(v)=B(B^{k-1}v)\stackrel{\text{Ind. hypothesis}}=B(c^{k-1}v)=c^{k-1}Bv=c^{k-1}cv=c^kv$$
|
Joint density with continuous and binary random variable Assume $X\in\mathbb{R}$, $Y\in\{0,1\}$ are two random variables. What allows us to claim that $$f_{X}(x) = f_{XY}(x,1) + f_{XY}(x,0)$$
where $f_X(x)$ and $f_{XY}(x,y)$ are densities.
|
$$P(X\le x)=P(X\le x\mid Y=1)P(Y=1)+P(X\le x\mid Y=0)P(Y=0)\\
=P(X\le x, Y=1)+P(X\le x,Y=0)\\
f_X(x)=\frac{dP(X\le x)}{dx}=f_{XY}(x,1)+f_{XY}(x,0)$$
|
Normal $T\in B(H)$ has a nontrivial invariant subspace I am wondering if the following is true:
Every normal $T\in B(H)$ has a nontrivial invariant subspace if $\dim(H)>1$?
|
Let $ T \in B(\mathcal{H}) $ be a normal operator. Let $ \sigma(T) $ denote the spectrum of $ T $. We then have two cases to consider: (i) $ \sigma(T) $ is a singleton set, and (ii) $ \sigma(T) $ contains at least two points.
Case (i): Suppose that $ \sigma(T) = \{ \lambda \} $ for some $ \lambda \in \mathbb{C} $. Let $ \text{id}_{\lambda} $ and $ 1_{\lambda} $ denote, respectively, the identity function on $ \{ \lambda \} $ and the constant function on $ \{ \lambda \} $ with value $ 1 $. By the Continuous Functional Calculus, we can apply these two functions to $ T $. As $ \text{id}_{\lambda} = \lambda \cdot 1_{\lambda} $, we obtain $ T = \lambda I $, where $ I: \mathcal{H} \to \mathcal{H} $ is the identity operator. Clearly, $ I $ has non-trivial invariant subspaces (this is where the assumption $ \dim(\mathcal{H}) > 1 $ is used), so $ T $ has non-trivial invariant subspaces as well.
Case (ii): Suppose that $ \sigma(T) $ contains two distinct points $ \lambda_{1} $ and $ \lambda_{2} $ (note that this is not possible if $ \dim(\mathcal{H}) = 1 $). Let $ U_{1} $ and $ U_{2} $ be disjoint open neighborhoods (contained in $ \sigma(T) $) of $ \lambda_{1} $ and $ \lambda_{2} $ respectively. If $ \mathbf{E} $ denotes the resolution of the identity corresponding to $ T $, then we have non-zero projection operators $ P_{1} := \mathbf{E}(U_{1}) $ and $ P_{2} := \mathbf{E}(U_{2}) $ satisfying the following two properties:
1. $ P_{1} P_{2} = \mathbf{0}_{B(\mathcal{H})} = P_{2} P_{1} $, and
2. $ T P_{1} = P_{1} T $ and $ T P_{2} = P_{2} T $, i.e., $ P_{1} $ and $ P_{2} $ commute with $ T $.
Property (1) says that $ P_{1} $ is not the identity operator; otherwise $ P_{1} P_{2} = P_{2} \neq \mathbf{0}_{B(\mathcal{H})} $, which is a contradiction. Next, Property (2) says
$$
T[{P_{1}}[\mathcal{H}]] = {P_{1}}[T[\mathcal{H}]] \subseteq {P_{1}}[\mathcal{H}],
$$
which shows that $ {P_{1}}[\mathcal{H}] $ is a non-trivial invariant subspace of $ T $.
Conclusion: Every normal operator $ T \in B(\mathcal{H}) $ has a non-trivial invariant subspace if $ \dim(\mathcal{H}) > 1 $.
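A finite-dimensional sketch of case (ii) (my own toy example, where the resolution of the identity reduces to spectral projections of a normal matrix): for a normal $T$ with $\sigma(T)=\{1,3\}$, the projection $P_1=\mathbf{E}(\{1\})$ commutes with $T$ and its range is invariant.

```python
import numpy as np

# Toy case (ii): a normal matrix T with sigma(T) = {1, 3}; the spectral
# projection P1 = E({1}) commutes with T, and ran(P1) is invariant.
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U = np.linalg.qr(M)[0]                                # unitary
T = U @ np.diag([1.0, 1.0, 3.0 + 0j]) @ U.conj().T    # normal by construction

P1 = U @ np.diag([1.0, 1.0, 0.0 + 0j]) @ U.conj().T   # E({1})
print(np.allclose(P1 @ P1, P1), np.allclose(T @ P1, P1 @ T))  # True True
print(np.allclose(T @ P1, 1.0 * P1))  # True: T acts as lambda_1 = 1 on ran(P1)
```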
|
How do you solve linear congruences with two variables? For example: solving for $x$ and $y$ given the following linear congruences.
$x + 2y \equiv 3 \pmod9\,$, $3x + y \equiv 2 \pmod9$
So far, I've tried taking the difference of the two congruences.
Since $x + 2y \equiv 3 \pmod9 \Rightarrow x + 2y = 3 + 9k\,$, and $3x + y \equiv 2 \pmod9 \Rightarrow 3x + y = 2 + 9l$.
$$(x + 2y = 3 + 9k) - (3x + y = 2 + 9l) \Rightarrow -2x + y = 1 + 9(k - l).$$
So now, are you supposed to solve this like a normal linear Diophantine equation?
|
The CRT is used to solve systems of congruences of the form $\rm x\equiv a_i\bmod m_{\,i}$ for distinct moduli $\rm m_{\,i}$; in our situation there are two variables and two linear congruences, but only a single modulus, so this is not the sort of problem where CRT applies. Rather, this is linear algebra.
Instead, you are working with a $2\times2$ linear system over a given modulus, $9$. Here the two elementary methods of solving linear systems apply: substitution and elimination. The difference, however, is that we cannot generally divide by anything sharing a common factor with $9$, i.e. by multiples of $3$. And if, in your quest to eliminate variables, you multiply by things not coprime to the modulus, you can end up adding extraneous non-solutions, so it can be dangerous if you're not careful.
Let's use substitution. The congruences here are
$$\begin{cases}\rm x+2y\equiv 3 \mod 9, \\ \rm 3x+y\equiv2 \mod 9.\end{cases}$$
The first congruence gives $\rm x\equiv 3-2y$; plug this into the second to obtain
$$\rm 3x+y\equiv 3(3-2y)+y\equiv -5y\equiv2\mod 9.$$
Now $-5$ is coprime to $9$ so we can divide by it, i.e. multiply by its reciprocal mod $9$. In this case the reciprocal is $-5^{-1}\equiv-2\equiv7\bmod 9$, so the solution for $\rm y$ is $\rm y\equiv7\cdot2\equiv5\bmod 9$. To find $\rm x$, plug $\rm y\equiv5$ into the congruences, obtaining $\rm x+10\equiv3$ and $\rm 3x+5\equiv2$. The first gives $\rm x\equiv2$, so that we have the unique solution $\rm (x,y)\equiv(2,5)$. However the second gives $\rm 3x\equiv-3\bmod9$, which, after dividing by $3$, gives $\rm x\equiv-1\equiv2\bmod3$, so that $\rm x\in\{2,5,8\}\bmod 9$; this doesn't change the fact that $(2,5)$ is the unique solution to the system, but it does illustrate that dividing by things that are not coprime to the modulus can introduce unwanted, fake solutions.
Note that matrix multiplication makes sense taken modulo an integer. The potential issues arise when we want inverses. If a matrix inverse $\rm A^{-1}$ exists of an integer-entry matrix $\rm A$, and every denominator appearing in the resulting rationals is coprime to the modulus (equivalently: $\rm\det A$ is coprime to the modulus), then $\rm A^{-1}$ can be reduced under the modulus (rationals like $\rm a/b$ become $\rm ab^{-1}\bmod m$; this sort of thing is valid because there is a ring homomorphism ${\bf Z}_{(p)}\to{\bf Z}/p{\bf Z}$), and the result will be an inverse of $\rm A$ with respect to the modulus, i.e. $\rm A\, A^{-1}\equiv I\bmod m$. This is why bikenaga's answer works, using matrices, and it makes things considerably easier when applicable.
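A quick Python check of both routes (my addition; note that `pow(b, -1, m)` for modular inverses requires Python 3.8+):

```python
# Brute force over Z/9Z: the system x + 2y = 3, 3x + y = 2 (mod 9).
sols = [(x, y) for x in range(9) for y in range(9)
        if (x + 2*y) % 9 == 3 and (3*x + y) % 9 == 2]
print(sols)  # [(2, 5)] -- the unique solution

# Substitution route: y = (-5)^(-1) * 2 = 7 * 2 = 5, then x = 3 - 2y (mod 9).
inv = pow(-5, -1, 9)        # modular inverse, Python 3.8+
y = (inv * 2) % 9
x = (3 - 2 * y) % 9
print((x, y))               # (2, 5)
```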
|