INSTRUCTION | RESPONSE
---|---|
Why is the axiom of choice separated from the other axioms? I don't know much about set theory or foundational mathematics; this question arose just out of curiosity. As far as I know, the widely accepted axioms of set theory are the Zermelo-Fraenkel axioms together with the axiom of choice. But the last axiom seems to be the most special of these. A lot of theorems specifically mention that they depend on the axiom of choice. So, what is so special about this axiom?
I know that a lot of funny results occur when we assume the axiom of choice, such as the Banach-Tarski paradox. However, we are assuming the other ZF axioms at the same time. So why do we blame the axiom of choice, and not the others? To me, the axiom of regularity is less convincing than the axiom of choice (though it's probably due to my lack of understanding).
| The basic axiom of "naive set theory" is general comprehension: For any property $P$, you may form the set consisting of all elements satisfying $P$. Russell's paradox shows that general comprehension is inconsistent, so you need to break it down into more restricted types of comprehension.
The other axioms of ZF (except for well-foundedness) are all special cases of general comprehension. For example, the Power Set axiom asserts that the class of all subsets of $X$ is a set. Replacement with respect to $\phi(x,y)$ asserts that the class of pairs $(x,y)$ satisfying $\phi(x,y)$ is a set. Separation is obviously a sub-case of general comprehension.
Choice is very different, because it asserts the existence of a set which does not satisfy a specific defining sentence.
|
simple-connectedness of convex and star domains Let $D$ be a convex domain in the complex plane. Is the domain $D$ simply connected? What about when $D$ is a star domain?
| Yes, star domains are simply connected, since every loop can be contracted to the center along straight lines.
The disc with one point removed is not simply connected, but also not convex.
Open convex sets are among the star-domains.
None of this is special to $\mathbb{C}$; the same holds in any $\mathbb{R}^d$.
|
How to expand a non-differentiable function as a power series? If a function has derivatives of all orders at $0$ and $\lim_{n\to \infty}(f(x)-\sum_{i=1}^{n} a_{i}x^i)=0$ for every $x \in (-r,r)$, then it can be expanded as the power series $\sum a_{n}x^n$.
My question is: if this function is not differentiable at $0$, how can it be expanded as $\sum a_{n}x^n$ satisfying $\lim_{n\to \infty}(f(x)-\sum_{i=1}^{n} a_{i}x^i)=0$ for every $x \in (-r,r)$? Is it unique?
| If $\sum_{j=0}^\infty a_j x^j$ converges for every $x$ in an interval $(-r,r)$, then the radius of convergence of the series is at least $r$, and the sum is analytic in the disk $\{z \in {\mathbb C}: |z| < r\}$. So if $f(x)$ is not analytic in $(-r,r)$, in particular if it is not differentiable at $0$, there is no way to represent it as $\sum_n a_n x^n$ with
$\sum_{j=0}^n a_j x^j \to f(x)$.
However, you can try a series $\sum_n a_n x^n$ such that some subsequence of partial sums $P_N(x) = \sum_{j=0}^N a_j x^j$ converges to $f(x)$. Suppose $f$ is continuous on $[-r,r]$ except possibly at $0$. I'll let $a_0 = f(0)$ and $N_0 = 0$. Given $a_j$ for $0 \le j \le N_k$, let $g_k(x) = (f(x) - P_{N_k}(x))/x^{N_k}$ for $x \ne 0$, $g_k(0) = 0$. Since $g_k$ is continuous on $E_k = [-r, -r/(k+1)] \cup \{0\} \cup [r/(k+1), r]$, Stone-Weierstrass says there is a polynomial $h_k(x)$ with $|g_k(x) - h_k(x)| < r^{-N_k}/(k+1)$ on $E_k$. Moreover we can assume $h_k(0) = g_k(0) = 0$. Let $N_{k+1} = N_k + \deg(h_k)$, and let
$a_j$ be the coefficient of $x^j$ in $x^{N_k} h_k(x)$ for $N_k < j \le N_{k+1}$. Thus
$P_{N_{k+1}}(x) = P_{N_k}(x) + x^{N_k} h_k(x)$ so that
$|P_{N_{k+1}}(x) - f(x)| = |x|^{N_k} |g_k(x) - h_k(x)| < 1/(k+1)$ for $x \in E_k \backslash \{0\}$ (we already know $P_{N_{k+1}}(0) = f(0)$).
Since the union of the $E_k$ is all of $[-r,r]$, the partial sums $P_{N_k}(x)$ converge to $f(x)$ pointwise on $[-r,r]$.
|
Can asymptotic of a Mellin (or laplace inverse ) be evaluated? I mean, given the Mellin inverse integral $ \int_{c-i\infty}^{c+i\infty}dsF(s)x^{-s} $, can we evaluate this integral, at least as $ x \rightarrow \infty $?
Can the same be done for $ \int_{c-i\infty}^{c+i\infty}dsF(s)\exp(st) $ as $ t \rightarrow \infty $?
Why or why not can this be evaluated in order to get the asymptotic behaviour of Mellin inverse transforms?
| Yes, we can evaluate the above integral, but it depends on $F(s)$. What is your $F(s)$? Then we can see how to solve it. The integral above is an inverse Mellin transform. Sometimes it is very difficult to find the inverse; it all depends on what $F(s)$ is.
|
convergence with respect to integral norm but not pointwise I want to give an example of a sequence of functions $f_1 \dots f_n$ that converges with respect to the metric $d(f,g) = \int_a^b |f(x) - g(x)| dx$ but does not converge pointwise.
I'm thinking of a function $f_n$ that is piecewise triangle, whose area converges to some constant function, but doesn't converge pointwise.
I just can't manage to formalize it.
| You can suppose that $g(x) = 0$, because your metric is translation invariant ; $d(f_n,g) = d(f_n-g,0)$. Think of the sequence $f_n : [0,1] \to \mathbb R$ defined by $f_n(x) = x^n$ if $n$ is odd, and $(1-x)^n$ if $n$ is even. Therefore,
$$
d(f_{2n+1},0) = \int_0^1 x^{2n+1} \, dx = \frac 1{2n+2} \underset{ n \to \infty} {\longrightarrow} 0
$$
or
$$
d(f_{2n},0) = \int_0^1 (1-x)^{2n} \, dx = \frac 1{2n+1} \underset{ n \to \infty} {\longrightarrow} 0
$$
but $f_n$ does not converge pointwise at $0$ and $1$ because the values oscillate between $0$ and $1$.
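For a quick numerical illustration (a NumPy sketch; the grid mean is a crude stand-in for the integral):

```python
import numpy as np

x = np.linspace(0, 1, 100001)
for n in [1, 2, 5, 10, 50]:
    fn = x**n if n % 2 == 1 else (1 - x)**n   # the alternating sequence above
    d_to_zero = np.mean(np.abs(fn))           # approximates d(f_n, 0); equals 1/(n+1)
    print(n, round(d_to_zero, 5), fn[0], fn[-1])  # the endpoint values keep oscillating
```

The metric distance tends to $0$ while the values at $x=0$ and $x=1$ keep alternating between $0$ and $1$.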
I know my answer uses Brian's idea of peaking "the triangles" alternately at $0$ and $1$, but I still think it's worth it to see a more-or-less "trivial" example (using not-defined-by-parts polynomial functions), so I kept my answer there anyway.
Hope that helps,
|
Finding the singularities of affine and projective varieties I'm having trouble calculating singularities of varieties: when my professor covered it it was the last lecture of the course and a shade rushed.
I'm not sure if the definition I've been given is standard, so I will quote it to be safe:
the tangent space of a projective variety $X$ at a point $a$ is $T_aX = a + \mbox{ker}(\mbox{Jac}(X))$.
A variety is smooth at $a \in X$ if $a$ lives in a unique irreducible component $x_i$ of $X$ and $\dim T_a(X) = \dim X_i$, where dimension of a variety has been defined to be the degree of the Hilbert polynomial of $X$. A projective variety is smooth if its affine cone is.
I tried to calculate a few examples and it all went very wrong.
Example: The Grassmannian $G(2,4)$ in its Plücker embedding is $V(x_{12} x_{34} - x_{13}x_{24}+ x_{14}x_{23}) \subset \mathbb{P}^5$
I calculated the Hilbert polynomial to be $\frac{1}{12}d^4+...$, so it has dimension 4 (as expected), but I get
$$\mbox{Jac}(G(2,4))= [x_{34}, -x_{24}, x_{23}, x_{14}, -x_{13}, x_{12}]$$
Which has rank 1 where $x \ne 0$, so nullity 5. So assumedly $\dim T_aX = \dim( a + \mbox{ker}(\mbox{Jac}(X))) = \dim \mbox{ker} \mbox{Jac}(X) = \mbox{nullity} (\mbox{Jac}(X))$.
Which isn't 4?
Which is a bit silly, as the Grassmannian is obviously smooth.
I'm probably going wrong somewhere, but I've gotten myself thoroughly confused. Any help would be greatly appreciated.
Thanks!
| You are confusing $\mathbb P^5$ and $\mathbb A^6$.
The calculation you did is valid for the cone $C\subset \mathbb A^6$ with equation $ x_{12} x_{34} - x_{13}x_{24}+ x_{14}x_{23}=0$.
It is of codimension $1$ (hence of dimension $5$), and smooth outside of the origin, as your jacobian matrix shows.
The image $\mathbb G(2,4)\subset \mathbb P^5$ of the Grassmannian under the Plücker embedding is $\mathbb P(C\setminus \lbrace 0\rbrace) \subset \mathbb P^5$.
It is also smooth of codimension $1$, hence of dimension $4$ as expected.
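If it helps to see the computation, here is a small SymPy sketch (variable names are mine) evaluating the Jacobian of the Plücker quadric at a point of the affine cone $C\subset \mathbb A^6$:

```python
import sympy as sp

x12, x13, x14, x23, x24, x34 = sp.symbols('x12 x13 x14 x23 x24 x34')
gens = [x12, x13, x14, x23, x24, x34]
f = x12*x34 - x13*x24 + x14*x23

J = sp.Matrix([f]).jacobian(gens)
pt = {x12: 1, x13: 0, x14: 0, x23: 0, x24: 0, x34: 0}   # a nonzero point on the cone
print(f.subs(pt))               # 0, so the point lies on C
print(J.subs(pt))               # Matrix([[0, 0, 0, 0, 0, 1]]), rank 1
print(6 - J.subs(pt).rank())    # 5 = dim of the cone; the projective dimension is 5 - 1 = 4
```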
|
nilpotent ideals
Possible Duplicate:
The set of all nilpotent element is an ideal of R
An element $a$ of a ring $R$ is nilpotent if $a^n = 0$ for some positive integer $n$. Let $R$ be a
commutative ring, and let $N$ be the set of all nilpotent elements of $R$.
(a) I'm trying to show that $N$ is an ideal, and that the only nilpotent element of $R/N$ is the zero element.
(c) What are the nilpotent elements of $R = \mathbb{Z}_{24}$? And what is the quotient ring $R/N$ in that
case? (what known ring is it isomorphic to?)
| The first question on why the set of all nilpotent elements in a commutative ring $R$ is an ideal (also called the nilradical of $R$, denoted by $\mathfrak{R}$) has already been answered numerous times on this site. I will tell you why $R/\mathfrak{R}$ has no nilpotent elements.
Suppose in the quotient we have an element $\overline{a}$ such that $\overline{a}^n = 0$. But then multiplication in the quotient is well defined so that $\overline{a}^n = \overline{a^n}$. This means that $a^n$ must be in $\mathfrak{R}$, so that there exists a $k$ such that $(a^n)^k =0$. But then this means that $a^{nk} = 0$ so that $a \in \mathfrak{R}$. In other words, in the quotient $\overline{a} = 0$, proving that the quotient ring $R/ \mathfrak{R}$ has no nilpotent elements.
For question (c) I think you can do it by bashing out the algebra, but let me tell you that $\Bbb{Z}_{24}$ is guaranteed to have nilpotent elements because $24$ is not square free (its prime factorisation is $24 = 2^{3} \cdot 3$).
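For part (c), a quick brute-force check in Python (just a sanity check, not a proof) lists the nilpotent elements of $\Bbb{Z}_{24}$:

```python
# a is nilpotent in Z_24 iff a^n = 0 (mod 24) for some n; the exponent 24 is more than enough
nilpotents = [a for a in range(24) if pow(a, 24, 24) == 0]
print(nilpotents)        # [0, 6, 12, 18] -- the multiples of 6 = 2*3
print(len(nilpotents))   # 4, so R/N has 24/4 = 6 elements
```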
|
How to find the Laplace transform? Jobs arrive to a computer facility according to a Poisson process with rate $\lambda$ jobs / hour. Each job requires a service time $X$ which is uniformly distributed between $0$ and $T$ hours independently of all other jobs.
Let $Y$ denote the service time for all jobs which arrive during a one hour period. How should I find the Laplace transform of $Y$?
| The general form of a Poisson process (or a Lévy process) can be defined as follows: the number of events in the time interval $(t, t + T]$ follows a Poisson distribution with associated parameter $\lambda T$. This relation is given as
\begin{equation}
P[(N(t+T)-N(t))=k] = \frac{(\lambda T)^k e^{- \lambda T}}{k!}
\end{equation}
where $N(t + T) - N(t)$ is the number of events in the time interval $(t, t + T]$. It will be obvious to you that $\lambda$ is the rate parameter. Assume, in the simplest case, that $k=1$ for some $T$; then
\begin{equation}
f(\lambda) = \lambda e^{-\lambda T}
\end{equation}
Then taking Laplace transforms yields
\begin{equation}
\widehat{f(t)} = \frac{\lambda}{\lambda + s}
\end{equation}
I'll leave it to you to fill in the more specific details, I've only dropped the textbook conclusions from a Poisson process and the Laplace transform of such a process.
|
What is the runtime of a modulus operation Hi I have an algorithm for which I would like to provide the total runtime:
def foo(x):
    s = []
    if len(x) % 2 != 0:
        return False
    else:
        for i in range(len(x) // 2):
            # some more operations
            pass
        return True
The loop is in O(n/2) but what is O() of the modulus operation?
I guess it is does not matter much for the overall runtime of the Algorithm, but I would really like to know.
| There are two meanings for "run time". Many people have pointed out that if we assume the numbers fit in one register, the mod operation is atomic and takes constant time.
If we want to look at arbitrary values of $x$, we need to look at the model of computation that is used. The standard way of measuring computational complexity for arbitrary inputs is with the Turing machine model using binary representation for numbers. In this model, the run time to compute $x \% 2$ is $O(\log x)$, which is also the number of bits in the binary representation of $x$. This is because $x \% 2$ is just the last bit of $x$ in binary notation.
Moreover, it is impossible to use this model and compute the function in time $f(x)$ for any $f$ with $\lim_{x \to \infty} f(x)/\log(x) < 1$. This is because with that sort of bound, there would be values of $x$ large enough that the machine could not even get to the last bit of $x$ before time expires. So, in this strong sense, the run time is "exactly" $\log x$ in the Turing machine model.
|
Two linearly independent eigenvectors with eigenvalue zero What is the only $2\times 2$ matrix that only has eigenvalue zero but does have two linearly independent eigenvectors?
I know there is only one such matrix, but I'm not sure how to find it.
| The answer is the zero matrix, obviously.
EDIT, here is a simple reason: let the matrix be $(c_1\ c_2)$, where $c_1$ and $c_2$ are both $2\times1$ column vectors. For any eigenvector $(a_1 \ a_2)^T$ with eigenvalue $0$, $a_1c_1 + a_2c_2 = 0$. Similarly, for another eigenvector $(b_1 \ b_2)^T$, $b_1c_1 + b_2c_2 = 0$. So $(a_1b_2 - a_2b_1)c_1 = 0$, and since the eigenvectors are linearly independent we have $a_1b_2 - a_2b_1 \neq 0$, therefore $c_1=0$. From this, $c_2=0$ also.
|
Inequality ${n \choose k} \leq \left(\frac{en}{ k}\right)^k$ This is from page 3 of http://www.math.ucsd.edu/~phorn/math261/9_26_notes.pdf (Wayback Machine).
Copying the relevant segment:
Stirling’s approximation tells us $\sqrt{2\pi n} (n/e)^n \leq n! \leq e^{1/12n} \sqrt{2\pi n} (n/e)^n$. In particular we can use this to say that
$$ {n \choose k} \leq \left(\frac{en}{ k}\right)^k$$
I tried the tactic of combining bounds from $n!$, $k!$ and $(n-k)!$ and it didn't work. How does this bound follow from Stirling's approximation?
| First of all, note that $n!/(n-k)! \le n^k$. Use Stirling only for $k!$.
${n \choose k} \le \frac{n^k}{k!} \le \frac{n^k}{(\sqrt{2\pi k}(k/e)^k)} \le \frac{n^k}{(k/e)^k} = (\frac{en}{k})^k$
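A quick numerical sanity check of the bound (plain Python, standard library only):

```python
from math import comb, e

for n in [10, 50, 200]:
    for k in [1, 2, n // 4, n // 2, n]:
        assert comb(n, k) <= (e * n / k) ** k, (n, k)
print("inequality holds on all tested (n, k) pairs")
```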
|
What does $2^x$ really mean when $x$ is not an integer? We all know that $2^5$ means $2\times 2\times 2\times 2\times 2 = 32$, but what does $2^\pi$ mean? How is it possible to calculate that without using a calculator? I am really curious about this, so please let me know what you think.
| If this helps:
For all n:
$$2^n=e^{n\log 2}$$
This is a smooth function that is defined everywhere.
Another way to think about this (in a more straightforward manner than others described):
We know
$$a^{b+c}=a^ba^c$$
Then say, for example, $b=c=1/2$. Then we have:
$$a^{1}=a=a^{1/2}a^{1/2}$$
Thus $a^{1/2}=\sqrt{a}$ is a number that equals $a$ when multiplied by itself.
Now we can find the value of $a^{p/q}$ (for some integers $p$ and $q$). We know:
$(a^x)^y=a^{xy}$
thus
$(a^{p/q})^{q/p}=a^1=a$
Other exponents may be derived similarly.
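To make this concrete, here is a small Python sketch computing $2^\pi$ via $e^{\pi\log 2}$ and via rational approximations $2^{p/q}=\sqrt[q]{2^p}$:

```python
import math

print(2 ** math.pi)                      # 8.8249778...
print(math.exp(math.pi * math.log(2)))   # same value, via 2^x = e^{x log 2}

# rational approximations of pi: 2^(p/q) is the q-th root of the integer 2^p
for p, q in [(3, 1), (31, 10), (314, 100)]:
    print(p / q, (2 ** p) ** (1 / q))
```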
|
Existence of a sequence. While reading about Weierstrass' Theorem and holomorphic functions, I came across a statement that said: "Let $U$ be any connected open subset of $\mathbb{C}$ and let $\{z_j\} \subset U$ be a sequence of points that has no accumulation point in $U$ but that accumulates at every boundary point of $U$."
I was curious as to why such a sequence exists. How would I be able to construct such a sequence?
| Construct your sequence in stages indexed by positive integers $N$. At stage $N$, enumerate those points $(j+ik)/N^2$ for integers $j,k$ with $|j|+|k|\le N^3$ that are within distance $1/N$ of the boundary of $U$.
EDIT: Oops, not so clear that this will give you points in $U$. I'll have to be a bit less specific. At stage $N$, consider $K_N = \partial U \cap \overline{D_N(0)}$ where $\partial U$ is the boundary of $U$ and $D_r(a) = \{z: |z-a|<r\}$. Since this is compact, it can be covered by finitely many open disks $D_{1/N}(a_k)$, $k=1,\ldots,m$, centred at points $a_k \in K_N$. Since $a_k \in \partial U$, $U \cap D_{1/N}(a_k) \ne \emptyset$. So we take a point $z_j \in U \cap D_{1/N}(a_k)$ for $k=1,\ldots,m$.
|
give a counterexample of monoid If $G$ is a monoid, $e$ is its identity, if $ab=e$ and $ac=e$, can you give me a counterexample such that $b\neq c$?
If not, please prove $b=c$.
Thanks a lot.
| Let's look at the endomorphisms of a particular vector space: namely let our vector space $V := \mathbb{R}^\infty$, so an element of $V$ looks like a countable (not necessarily convergent) sequence of real numbers. The set of linear maps $\phi: V \rightarrow V$ forms a monoid under composition (prove it!). Let $R: V \rightarrow V$ be the right shift map, namely it takes $R: (x_0, x_1, \dots) \mapsto (0,x_0,x_1, \dots)$. Let $L: V \rightarrow V$ be the left shift map $L: (x_0, x_1, \dots) \mapsto (x_1, x_2, \dots)$; clearly $L \circ R = \textrm{id}_V = e$. Now define $R' : V \rightarrow V$ where $R' : (x_0, x_1, \dots) \mapsto (x_0, x_0, x_1, x_2, \dots)$. We also have $L \circ R' = \textrm{id}_V = e$, but these are different maps.
There are probably simpler examples, but this is pretty explicit so I thought it would be good to see.
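Here is a small Python sketch of these maps, with sequences modeled as functions $\mathbb N \to \mathbb R$ (the names `R`, `L`, `Rp` are mine), showing $L\circ R = L\circ R' = \mathrm{id}$ while $R\ne R'$:

```python
def R(seq):    # right shift: (x0, x1, ...) -> (0, x0, x1, ...)
    return lambda n: 0 if n == 0 else seq(n - 1)

def L(seq):    # left shift: (x0, x1, ...) -> (x1, x2, ...)
    return lambda n: seq(n + 1)

def Rp(seq):   # R': (x0, x1, ...) -> (x0, x0, x1, ...)
    return lambda n: seq(0) if n == 0 else seq(n - 1)

s = lambda n: n + 1    # the sequence (1, 2, 3, ...)
print([L(R(s))(n) for n in range(5)])    # [1, 2, 3, 4, 5] : L o R  = id
print([L(Rp(s))(n) for n in range(5)])   # [1, 2, 3, 4, 5] : L o R' = id
print([R(s)(n) for n in range(5)])       # [0, 1, 2, 3, 4]
print([Rp(s)(n) for n in range(5)])      # [1, 1, 2, 3, 4] : R != R'
```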
|
Capelli Lemma for polynomials I have seen this lemma given without proof in some articles (see example here), and I guess it is well known, but I couldn't find an online reference for a proof.
It states like this:
Let $K$ be a field and $f,g \in K[x]$. Let $\alpha$ be a root of $f$ in the algebraic closure of $K$. Then $f \circ g$ is irreducible over $K$ if and only if $f$ is irreducible over $K$ and $g-\alpha$ is irreducible over $K(\alpha)$.
Can you please give a proof for this?
| Let $\theta$ be a root of $g-\alpha$. From $g(\theta)=\alpha$ we get that $f(g(\theta))=0$. Now it is all a matter of field extensions. Notice that $[K(\theta):K]\le \deg (f\circ g)=\deg f\deg g$, $[K(\theta):K(\alpha)]\le \deg(g-\alpha)$ $=$ $\deg g$ and $[K(\alpha):K]\le\deg f$. Each inequality becomes equality iff the corresponding polynomial is irreducible. But $[K(\theta):K]=[K(\theta):K(\alpha)][K(\alpha):K]$ and then $f\circ g$ is irreducible over $K$ iff $g -\alpha$ is irreducible over $K(\alpha)$ and $f$ is irreducible over $K$.
|
Correct precedence of division operators Say I have the following operation: $6/3/6$. I get different answers depending on which division I perform first.
$6/3/6 = 2/6 = .33333...$
$6/3/6 = 6/.5 = 12$
So which answer is correct?
| By convention it's done from left to right, but by virtually universal preference it's not done at all; one uses parentheses.
However, I see students writing fractions like this:
$$
\begin{array}{c}
a \\ \hline \\ b \\ \hline \\ c
\end{array}
$$
Similarly they write $\sqrt{b^2 - 4a} c$ or $\sqrt{b^2 - 4}ac$ or even $\sqrt{b^2 -{}}4ac$ when they actually need $\sqrt{b^2 - 4ac}$, etc.
|
Probability of a specific event at a given trial Recently I have found some problems about probabilities that ask to find the probability of an event at a given trial.
A dollar coin is tossed several times until one gets the "one dollar" face up. What is the probability of tossing the coin at least $3$ times?
I thought to apply the binomial law. But the binomial law gives the probability for a number of favorable trials, and the question asks for a specific trial. How can I solve this kind of problem?
Is there any methodology that one can apply for this kind of problems?
| Hint: It is the same as the probability of two tails in a row, because you need to toss at least $3$ times precisely if the first two tosses are tails. And the probability of two tails in a row is $\frac{1}{4}$.
Remark: For fun, let's also do this problem the hard way. We need at least $3$ tosses if we toss $2$ tails then a head, or if we toss $3$ tails then a head, or we toss $4$ tails then a head, or we toss $5$ tails then a head, or $\dots$.
The probability of $2$ tails then a head is $\left(\frac{1}{2}\right)^3$. The probability of $3$ tails then a head is $\left(\frac{1}{2}\right)^4$. The probability of $4$ tails then a head is $\left(\frac{1}{2}\right)^5$. And so on. Add up. The required probability is
$$\left(\frac{1}{2}\right)^3 +\left(\frac{1}{2}\right)^4+\left(\frac{1}{2}\right)^5+\cdots.$$
This is an infinite geometric series, which can be summed in the usual way. We get $\frac{1}{4}$.
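If you want empirical reassurance, here is a short Monte Carlo sketch in Python (standard library only; the helper name is mine):

```python
import random

random.seed(1)

def tosses_until_head():
    n = 0
    while True:
        n += 1
        if random.random() < 0.5:   # heads ("one dollar" face up)
            return n

trials = 10**6
at_least_three = sum(1 for _ in range(trials) if tosses_until_head() >= 3)
print(at_least_three / trials)   # close to 0.25
```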
|
Splitting field of $x^6+x^3+1$ over $\mathbb{Q}$ I am trying to find the splitting field of $x^6+x^3+1$ over $\mathbb{Q}$.
Finding the roots of the polynomial is easy (substituting $x^3=t$ , finding the two roots of the polynomial in $t$ and then taking a 3-rd root from each one). The roots can be seen here [if there is a more elegant way of finding the roots it will be nice to hear]
Is it true that the splitting field is $\mathbb{Q}((-1)^\frac{1}{9})$?
I think so from the way the roots look, but I am unsure.
Also, I am having trouble finding the minimal polynomial of $(-1)^\frac{1}{9}$, it seems that it would be a polynomial of degree 9, but of course the degree can't be more than 6...can someone please help with this ?
| You've got something wrong: the roots of $t^2+t+1$ are the complex cubic roots of one, not of $-1$ ($t^3-1 = (t-1)(t^2+t+1)$, so every root $\alpha$ of $t^2+t+1$ satisfies $\alpha^3=1$). That means that you actually want the cubic roots of some of the cubic roots of $1$; that is, you want some ninth roots of $1$ (not of $-1$).
Note that
$$(x^6+x^3+1)(x-1)(x^2+x+1) = x^9-1.$$
So the roots of $x^6+x^3+1$ are all ninth roots of $1$. Moreover, those ninth roots should not be equal to $1$, nor be cubic roots of $1$ (the roots of $x^2+x+1$ are the nonreal cubic roots of $1$): since $x^9-1$ is relatively prime to $(x^9-1)' = 9x^8$, the polynomial $x^9-1$ has no repeated roots. So any root of $x^9-1$ is either a root of $x^6+x^3+1$, or a root of $x^2+x+1$, or a root of $x-1$, but it cannot be a root of two of them.
If $\zeta$ is a primitive ninth root of $1$ (e.g., $\zeta = e^{i2\pi/9}$), then $\zeta^k$ is also a ninth root of $1$ for all $k$; it is a cubic root of $1$ if and only if $3|k$, and it is equal to $1$ if and only if $9|k$. So the roots of $x^6+x^3+1$ are precisely $\zeta$, $\zeta^2$, $\zeta^4$, $\zeta^5$, $\zeta^7$, and $\zeta^8$. They are all contained in $\mathbb{Q}(\zeta)$, which is necessarily contained in the splitting field. Thus, the splitting field is $\mathbb{Q}(\zeta)$, where $\zeta$ is any primitive ninth root of $1$.
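A quick SymPy/`cmath` sanity check of the factorisation (assuming SymPy is installed):

```python
import sympy as sp
import cmath

x = sp.symbols('x')
print(sp.factor(x**9 - 1))          # (x - 1)*(x**2 + x + 1)*(x**6 + x**3 + 1)
print(sp.cyclotomic_poly(9, x))     # x**6 + x**3 + 1, the 9th cyclotomic polynomial

zeta = cmath.exp(2j * cmath.pi / 9)   # a primitive ninth root of 1
print(abs(zeta**6 + zeta**3 + 1))     # ~0, so zeta is a root
```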
|
How to find x that is defined in the picture?
$O$ is center of the circle that surrounds the ABC triangle.
$|EF| // |BC|$
we only know $a,b,c$
$(a=|BC|, b=|AC|,c=|AB|)$
$x=|EG|=?$
Could you please give me a hand to see an easy way to find the $x$ that depends on the given $a,b,c$?
| This can be done using trigonometry.
Let $D$ be the foot of perpendicular from $O$ to $BC$.
Then we have that $\angle{BOD} = \angle{BAC} (= \alpha, \text{say})$.
Let $\angle{CBA} = \beta$.
Let the radius of the circumcircle be $R$.
Let $I$ be the foot of perpendicular from $G$ on $BC$.
Then we have that $DB = R\sin \alpha$, by considering $\triangle BOD$
$GI = OD = R \cos \alpha$.
By considering $\triangle BGI$, $BI = GI \cot \beta = R \cos \alpha \cot \beta$.
Thus $x = R - OG = R - (BD - BI) = R - R\sin \alpha + R \cos \alpha \cot \beta$.
Now, $R$ and trigonometric functions of $\alpha$ and $\beta$ can be expressed in terms of $a,b,c$.
|
Explaining Horizontal Shifting and Scaling I always find myself wanting for a clear explanation (to a college algebra student) for the fact that horizontal transformations of graphs work in the opposite way that one might expect.
For example, $f(x+1)$ is a horizontal shift to the left (a shift toward the negative side of the $x$-axis), whereas a cursory glance would cause one to suspect that adding a positive amount should shift in the positive direction. Similarly, $f(2x)$ causes the graph to shrink horizontally, not expand.
I generally explain this by saying $x$ is getting a "head start". For example, suppose $f(x)$ has a root at $x = 5$. The graph of $f(x+1)$ is getting a unit for free, and so we only need $x = 4$ to get the same output as before (i.e. a root). Thus, the root that used to be at $x=5$ is now at $x=4$, which is a shift to the left.
My explanation seems to help some students and mystify others. I was hoping someone else in the community had an enlightening way to explain these phenomena. Again, I emphasize that the purpose is to strengthen the student's intuition; a rigorous algebraic approach is not what I'm looking for.
| What does the graph of $g(x) = f(x+1)$ look like? Well, $g(0)$ is $f(1)$, $g(1)$ is $f(2)$, and so on. Put another way, the point $(1,f(1))$ on the graph of $f(x)$ has become the point $(0,g(0))$ on the graph of $g(x)$, and so on. At this point, drawing an actual graph and showing how the points on the graph of $f(x)$ move
one unit to the left to become points on the graph of $g(x)$ helps the student
understand the concept. Whether the student absorbs the concept well enough
to utilize it correctly later is quite another matter.
|
Why this element in this tensor product is not zero? $R=k[[x,y]]/(xy)$, $k$ a field. This ring is local with maximal ideal $m=(x,y)R$. Then the book proves that $x\otimes y\in m\otimes m$ is not zero, but I don't understand what's going on, if the tensor product is $R$-linear, then $x\otimes y=1\otimes xy=1\otimes 0=0$, where is the mistake? And also the book proves that this element is torsion:
$(x+y)(x\otimes y)=(x+y)x\otimes y=(x+y)\otimes(xy)=(x+y)\otimes0=0$
why $(x+y)x\otimes y=(x+y)\otimes(xy)$?
| For your first question, $1$ does not lie in $m$, so $1 \otimes xy$ is not actually an element of $m \otimes m$.
$R$-linearity implies $a(b\otimes c)=(ab)\otimes c$, but (and this is important), if you have $(ab)\otimes c$, and $b$ does not lie in $m$, then "$ab$" is not actually a factorization of the element inside $m$, and if we try to factor out $a$, we get $a(b\otimes c)$, which is non-sensical since $b$ does not lie in $m$.
For your second question, $(x+y)x\otimes y=x((x+y)\otimes y)=(x+y)\otimes(xy)$. The reason we were allowed to take out the $x$ in this case was because $x+y$ lies in $m$, so all of the elements in the equations remained in $m \otimes m$.
Cheers,
Rofler
|
Techniques for (upper-)bounding LP maximization I have a huge maximization linear program (variables grow as a factorial of a parameter). I would like to bound the objective function from above. I know that looking at the dual bounds the objective function of the primal program from below.
I know the structure of the constraints (I know how they are generated, from all permutations of a certain set). I am asking if there are some techniques to find an upper bound on the value of the objective function. I realize that the technique is very dependent on the structure of the constraints, but I am hoping to find many techniques so that hopefully one of them would be suitable for my linear program.
| In determining the value of a primal maximization problem, primal solutions give lower bounds and dual solutions give upper bounds. There's really only one technique for getting good solutions to large, structured LPs, and that's column generation, which involves solving a problem-specific optimization problem over the set of variables.
|
Proving that a limit does not exist Given the function
$$f(x)= \begin{cases}
1 & x \gt 0 \\
0 & x =0 \\
-1 & x \lt 0
\end{cases}$$
What is $\lim_{a}f$ for all $a \in \mathbb{R},a \gt 0$?
It seems easy enough to guess that the limit is $1$, but how do I take into account the fact that $f(x)=-1$ when $x \lt 0$?
Thanks
| Here $\lim_{x\to 0^+}f(x)=1$ and $\lim_{x\to 0^-}f(x)=-1$; the two one-sided limits are not equal,
therefore the limit does not exist.
|
When is $X^n-a$ irreducible over $F$? Let $F$ be a field, let $\omega$ be a primitive $n$th root of unity in an algebraic closure of $F$. If $a$ in $F$ is not an $m$th power in $F(\omega)$ for any $m\gt 1$ that divides $n$, how to show that $x^n -a$ is irreducible over $F$?
| I will assume "$m \geq 1$", since otherwise $a \in F(\omega)$, but $F(\omega)$ is $(n-1)$th extension and not $n$th extension, so $x^n-a$ must have been reducible.
Let $b^n=a$ (from the algebraic closure of $F$).
$x^n-a$ is irreducible even over $F(\omega)$. Otherwise $$x^n-a= \prod_{k=0}^{n-1} (x-\omega^k b) = (x^p + \cdots + \omega^{o} b^p)(x^{n-p} + \cdots + \omega^{o'} b^{n-p}),$$ so $b^p$ and $b^{n-p}$ are in $F(\omega)$. Consequently $b^{\gcd(p,n-p)}$ is in $F(\omega)$, but $\gcd(p,n-p)$ divides $n$, so $(b^{\gcd})^\frac{n}{\gcd} = a$, a contradiction.
|
Question about the independence definition. Why does the independence definition require that every subfamily of events $A_1,A_2,\ldots,A_n$ satisfies $P(A_{i_1}\cap \cdots \cap A_{i_k})=\prod_j P(A_{i_j})$ where $i_1 < i_2 < \cdots < i_k$ and $k \le n$?
My doubt arose from this: Suppose $A_1,A_2$ and $A_3$ are such that $P(A_1\cap A_2\cap A_3)=P(A_1)P(A_2)P(A_3)$.
Then
$$P(A_1\cap A_2)=P(A_1\cap A_2 \cap A_3) + P(A_1\cap A_2 \cap A_3^c)$$
$$=P(A_1)P(A_2)(P(A_3)+P(A_3^c))=P(A_1)P(A_2).$$
So it seems to me that if $P(A_1\cap A_2\cap A_3)=P(A_1)P(A_2)P(A_3)$ then $P(A_i\cap A_j)=P(A_i)P(A_j)$, i.e., independence of the biggest collection implies that of the smaller ones. Why am I wrong? The calculations seem right to me; maybe my conclusions from them are wrong?
| $P(ABC)=P(A)P(B)P(C)$ does not imply that $P(ABC^C)=P(A)P(B)P(C^C)$, which it seems you're using. Consider, for instance, $C=\emptyset$.
However, see this question.
Another example:
Let $S=\{a,b,c,d,e,f\}$ with $P(a)=P(b)={1\over8}$, and $P(c)=P(d)=P(e)=P(f)={3\over16}$.
Let $A=\{a,d,e\}$, $B=\{a,c,e\}$, and $C=\{a,c,d\}$.
Then
$\ \ \ \ \ \ P(ABC)=P(\{a\})={1\over8}$
and
$\ \ \ \ \ \ P(A)P(B)P(C)= {1\over2}\cdot{1\over2}\cdot{1\over2}={1\over8}$.
But
$\ \ \ \ \ \ P(ABC^C)=P(\{e\})= {3\over16}$
while
$\ \ \ \ \ \ P(A)P(B)P(C^C) = {1\over2}\cdot{1\over2}\cdot{1\over2}={1\over8}$.
In fact no two of the events $A$, $B$, and $C$ are independent.
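The counterexample is easy to verify by brute force; here is a short Python check with exact fractions (same sets and probabilities as above):

```python
from fractions import Fraction

P = {'a': Fraction(1, 8), 'b': Fraction(1, 8),
     'c': Fraction(3, 16), 'd': Fraction(3, 16),
     'e': Fraction(3, 16), 'f': Fraction(3, 16)}

def pr(event):
    return sum(P[w] for w in event)

A, B, C = {'a', 'd', 'e'}, {'a', 'c', 'e'}, {'a', 'c', 'd'}
Cc = set(P) - C                                   # complement of C

print(pr(A & B & C), pr(A) * pr(B) * pr(C))       # 1/8 and 1/8: the triple product holds
print(pr(A & B & Cc), pr(A) * pr(B) * pr(Cc))     # 3/16 vs 1/8: fails with C^c
print(pr(A & B), pr(A) * pr(B))                   # 5/16 vs 1/4: A and B are not independent
```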
|
Problem of finding subgroup without Sylow's Thm. Let $G$ be a group of order $p^n$ where $p$ is prime and $n \geq 3$.
By Sylow's theorem, we know that $G$ has a subgroup of order $p^2$.
But I wonder how to prove this without Sylow's theorem.
| Well, we could apply the fact that the center of such a group $G$ is nontrivial (proof). Since the center is nontrivial, it either has order $p$ or $p^m$ for $1<m\leq n$. In the former case, $G/Z(G)$ is of order $p^{n-1}$ so $Z(G/Z(G))$ is nontrivial, hence has order a power of $p$. Since $Z(G/Z(G))$ is abelian, it is easy to find such a subgroup (i.e. using the Fundamental Theorem of Finite Abelian Groups). The latter case is similarly easy, as $Z(G)$ is then an abelian group of order $p^m$ with $m>1$.
|
limits of the sequence $n/(n+1)$ Given the problem:
Determine the limit of the sequence $\{x_n\}^\infty_{n=1}$, where
$$x_n = \frac{n}{n+1}$$
The solution to this is:
step1:
$\lim\limits_{n \rightarrow \infty} x_n = \lim\limits_{n \rightarrow \infty} \frac{n}{n + 1}$
step2:
$=\lim\limits_{n \rightarrow \infty} \frac{1}{1+\frac{1}{n}}$
step3:
$=\frac{1}{1 + \lim\limits_{n \rightarrow \infty} \frac{1}{n}}$
step4:
$=\frac{1}{1 + 0}$
step5:
$=1$
I get how you go from step 2 to 5 but I don't understand how you go from step 1 to 2.
Again, I'm stuck on the basic highschool math.
Please help
| Divide the numerator and denominator by $n$. Why is this legal, in other words, why does this leave your fraction unchanged?
Because $$\frac {\frac a n} {\frac b n}=\frac {a \cdot \frac 1 n} {b \cdot \frac 1 n}=\frac a b$$ where the last equality is because $\dfrac 1 n$'s get cancelled.
Further, remember the fact that:
$$\frac{a+b}{n}=\frac a n+\frac b n$$
|
Integral related to $\sum\limits_{n=1}^\infty\sin^n(x)\cos^n(x)$ Playing around in Mathematica, I found the following:
$$\int_0^\pi\sum_{n=1}^\infty\sin^n(x)\cos^n(x)\ dx=0.48600607\ldots =\Gamma(1/3)\Gamma(2/3)-\pi.$$
I'm curious... how could one derive this?
| For giggles:
$$\begin{align*}
\sum_{n=1}^\infty\int_0^\pi \sin^n u\,\cos^n u\;\mathrm du&=\sum_{n=1}^\infty\frac1{2^n}\int_0^\pi \sin^n 2u\;\mathrm du\\
&=\frac12\sum_{n=1}^\infty\frac1{2^n}\int_0^{2\pi} \sin^n u\;\mathrm du\\
&=\frac12\sum_{n=1}^\infty\frac1{2^{2n}}\int_0^{2\pi} \sin^{2n} u\;\mathrm du\\
&=2\sum_{n=1}^\infty\frac1{2^{2n}}\int_0^{\pi/2} \sin^{2n} u\;\mathrm du\\
&=\pi\sum_{n=1}^\infty\frac1{2^{4n}}\binom{2n}{n}=\pi\sum_{n=1}^\infty\frac{(-4)^n}{16^n}\binom{-1/2}{n}\\
&=\pi\left(\frac1{\sqrt{1-\frac14}}-1\right)=\pi\left(\frac2{\sqrt 3}-1\right)
\end{align*}$$
where the oddness of the sine function was used in the third line to remove zero terms, the Wallis formula and the binomial identity $\dbinom{2n}{n}=(-4)^n\dbinom{-1/2}{n}$ were used in the fifth line, after which we finally recognize the binomial series and evaluate accordingly.
Of course, Alex's solution is vastly more compact...
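As a numerical cross-check (a NumPy sketch; since $|\sin x\cos x|\le\tfrac12$, the series is geometric with sum $s/(1-s)$ where $s=\tfrac12\sin 2x$, and that closed form is what gets integrated here):

```python
import numpy as np
from math import gamma, pi, sqrt

x = np.linspace(0, np.pi, 400001)
s = np.sin(2 * x) / 2                 # sin(x)cos(x), always in [-1/2, 1/2]
integrand = s / (1 - s)               # closed form of the geometric series
print(np.mean(integrand) * np.pi)     # ~0.486006..., a simple Riemann estimate
print(gamma(1/3) * gamma(2/3) - pi)   # 0.486006...
print(pi * (2 / sqrt(3) - 1))         # 0.486006..., the value derived above
```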
|
Why use absolute value for Cauchy Schwarz Inequality? I see the Cauchy-Schwarz Inequality written as follows
$$|\langle u,v\rangle| \leq \lVert u\rVert \cdot\lVert v\rVert.$$
Why is the absolute value of $\langle u,v\rangle$ specified? Surely it is apparent that if the right hand side is greater than or equal to, for example, $5$, then it will be greater than or equal to $-5$?
| I assumed that we work in a real inner product space, otherwise of course we have to put the modulus.
The inequality $\langle u,v\rangle\leq \lVert u\rVert\lVert v\rVert$ is also true, but doesn't give any information if $\langle u,v\rangle\leq 0$, since in this case it's true, and just the trivial fact that a non-negative number is greater than a non-positive one. What is not trivial is that $\lVert u\rVert\lVert v\rVert$ is greater than the absolute value. But in fact the assertions
$$\forall u,v \quad \langle u,v\rangle\leq \lVert u\rVert\lVert v\rVert$$
and
$$\forall u,v\quad |\langle u,v\rangle|\leq \lVert u\rVert\lVert v\rVert$$
are equivalent. Indeed, the second implies the first, and consider successively $u$ and $-u$ in the first to get the second one.
|
Changing the bilinear form on a Euclidean space with an orthonormal basis. I'm having trouble getting my head around how Euclidean spaces, bilinear forms and the dot product all link in with each other. I am told that on a Euclidean space any bilinear form is denoted by $$\tau(u,v) = u\cdot v$$ and in an orthonormal basis we have $$u \cdot v = \underline{u}^T \underline{v}.$$ But what, say, if we have an orthonormal basis on a vector space together with a positive definite symmetric bilinear form (so a Euclidean space)? Then $$\tau(u,v) = \underline{u}^T \underline{v}.$$ But what now if we keep the same vector space, the same orthonormal basis, and the same vectors $u,v$, but we change the positive definite symmetric bilinear form $\tau$? Surely the computation $\underline{u}^T \underline{v}$ will be the same but the computation $\tau(u,v)$ will change? Can someone please explain this?
| Orthonormal is defined with respect to $\tau$. That is, with no positive definite symmetric bilinear form $\tau$ around, the statement "$\{v_1,...,v_n\}$ is a an orthonormal basis" is meaningless.
Once you have such a $\tau$, then you can say "$\{v_1,...,v_n\}$ is an orthonormal basis with respect to $\tau$." This means that $\tau(v_i,v_i) = 1$ and $\tau(v_i, v_j) = 0$ when $i\neq j$.
Often, once a $\tau$ has been chosen, one doesn't write "with respect to $\tau$", but technically it should always be there.
So, when you say
...what now if we keep the same vector space, the same orthonormal basis, and the same vectors u,v but we change the positive definite symmetric bilinear form τ surely the computation $\underline{u}^T\underline{v}$ will be the same but the computation $\tau(u,v)$ will surely change
(emphasis mine) you have to be careful because you can't stick with the same orthonormal basis. When you change $\tau$, this changes whether or not your orthonormal basis is still orthonormal. So before computing $\underline{u}^T\underline{v}$, you must first find a new orthonormal basis, then compute $u$ and $v$ in this basis, and then compute $\underline{u}^T\underline{v}$.
|
Summing an unusual series: $\frac {x} {2!(n-2)!}+\frac {x^{2}} {5!(n-5)!}+\dots +\frac {x^{\frac{n}{3}}} {(n-1)!}$ How to sum the following series
$$\frac {x} {2!(n-2)!}+\frac {x^{2}} {5!(n-5)!}+\frac {x^{3}} {8!(n-8)!}+\dots +\frac {x^{\frac{n}{3}}} {(n-1)!}$$ n being a multiple of 3.
This question is from a book; I did not make it up. I can see a pattern in each term,
as the ith term can be written as
$\frac {x^i}{(3i-1)!(n+1-3i)!}$
but I am unsure what is going on with the indexing variable's range. Any help would be much appreciated!
| Start with
$$ (1 + x)^n = \sum_{r=0}^{n} \binom{n}{r} x^r$$
Multiply by $x$
$$ f(x) = x(1 + x)^n = \sum_{r=0}^{n} \binom{n}{r} x^{r+1}$$
Now if $w$ is a primitive cube-root of unity then
$$f(x) + f(wx) + f(w^2 x) = 3\sum_{k=1}^{n/3} \binom{n}{3k-1} x^{3k}$$
Replace $x$ by $\sqrt[3]{x}$ and divide by $n!$.
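A quick numerical check of this roots-of-unity filter, for one arbitrary choice of $n$ and $x$ (plain Python; the names are mine):

```python
from math import factorial
import cmath

n, x = 9, 2.0                       # n a multiple of 3, x arbitrary
w = cmath.exp(2j * cmath.pi / 3)    # primitive cube root of unity
y = x ** (1 / 3)

def f(z):
    return z * (1 + z) ** n

lhs = sum(x**i / (factorial(3*i - 1) * factorial(n + 1 - 3*i))
          for i in range(1, n // 3 + 1))
rhs = (f(y) + f(w * y) + f(w**2 * y)) / (3 * factorial(n))
print(lhs, rhs.real, abs(rhs.imag))   # lhs matches rhs.real up to rounding; imag ~ 0
```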
|
Finding an ON basis of $L_2$ The set $\{f_n : n \in \mathbb{Z}\}$ with $f_n(x) = e^{2πinx}$ forms an orthonormal basis of the complex space $L_2([0,1])$.
I understand why it's ON but not why it's a basis.
| It is known that an orthonormal system $\{f_n:n\in\mathbb{Z}\}$ is a basis if
$$
\operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))=L_2([0,1])
$$
where $\operatorname{cl}_{L_2}$ means the closure in the $L_2$ norm.
Denote by $C_0([0,1])$ the space of continuous functions on $[0,1]$ which equal $0$ at the points $0$ and $1$. It is known that for each $f\in C_0([0,1])$ the Fejér sums of $f$ converge uniformly to $f$. This means that
$$
\operatorname{cl}_{C}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))=C_0([0,1])
$$
where $\operatorname{cl}_{C}$ means the closure in the uniform norm.
Since we always have the inequality $\|f\|_{L_2([0,1])}\leq\|f\|_{C([0,1])}$, it follows that
$$
\operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))\supseteq C_0([0,1])
$$
It remains to say that $C_0([0,1])$ is a dense subspace of $L_2([0,1])$, i.e.
$$
\operatorname{cl}_{L_2}(C_0([0,1]))=L_2([0,1])
$$
then we obtain
$$
\operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))\supseteq
\operatorname{cl}_{L_2}(C_0([0,1]))=L_2([0,1]),
$$
and hence $\operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))=L_2([0,1])$.
|
Irreducibility of polynomials This is a very basic question, but one that has frustrated me somewhat.
I'm dealing with polynomials and trying to see if they are irreducible or not. Now, I can apply Eisenstein's Criterion and deduce for some prime p if a polynomial over Z is irreducible over Q or not and I can sort of deal with basic polynomials that we can factorise easily.
However I am looking at the polynomial $t^3 - 2$.
I cannot seem to factor this down, but a review book is asking for us to factorise into irreducibles over a) $\mathbb{Z}$, b) $\mathbb{Q}$, c) $\mathbb{R}$, d) $\mathbb{C}$, e) $\mathbb{Z}_3$, f) $\mathbb{Z}_5$, so obviously it must be reducible in one of these.
Am I wrong in thinking that this is irreducible over all? (I tried many times to factorise it into any sort of irreducibles but the coefficients never match up so I don't know what I am doing wrong).
I would really appreciate if someone could explain this to me, in a very simple way.
Thank you.
| $t^3-2=(t-\sqrt[3]{2})(t^2+(\sqrt[3]2)t+\sqrt[3]4)$
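A quick SymPy check (assuming SymPy is available) of where $t^3-2$ actually becomes reducible:

```python
import sympy as sp

t = sp.symbols('t')
print(sp.factor(t**3 - 2))               # stays t**3 - 2: irreducible over Q (and Z)
print(sp.factor(t**3 - 2, modulus=3))    # (t + 1)**3 in Z_3, since 2 = -1 there
print(sp.factor(t**3 - 2, modulus=5))    # splits off a linear factor t + 2 in Z_5, as 3**3 = 27 = 2
```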
|
A basic estimate for Sobolev spaces Here is a statement that I came upon whilst studying Sobolev spaces, which I cannot quite fill in the gaps:
If $s>t>u$ then we can estimate:
\begin{equation}
(1 + |\xi|)^{2t} \leq \varepsilon (1 + |\xi|)^{2s} + C(\varepsilon)(1 + |\xi|)^{2u}
\end{equation}
for any $\varepsilon > 0$
(here $\xi \in \mathbb{R}^n$ and $C(\varepsilon)$ is a constant, dependent on $\varepsilon$).
How can I show this? Many thanks for hints!
| Let $f(x)=(1+x)^{2(t-u)}-\varepsilon(1+x)^{2(s-u)}$. We have $f(0)=1-\varepsilon$ and since $s-u>t-u$ and $\varepsilon>0$, we know $f(x)\to-\infty$ as $x\to\infty$. Hence $C(\varepsilon):=\displaystyle\sup_{0\leq x<\infty}f(x)<\infty$.
|
Are groups algebras over an operad? I'm trying to understand a little bit about operads. I think I understand that monoids are algebras over the associative operad in sets, but can groups be realised as algebras over some operad? In other words, can we require the existence of inverses in the structure of the operad? Similarly one could ask the same question about (skew-)fields.
| No, there is no operad whose algebras are groups. Since there are many variants of operads a more precise answer is that if one considers what are known as (either symmetric or non-symmetric) coloured operads then there is no operad $P$ such that morphisms $P\to \bf Set$, e.g., $P$-algebras, correspond to groups.
In general, structures that can be captured by symmetric operads are those that can be defined by a first order equational theory where the equations do not repeat arguments (for non-symmetric operads one needs to further demand that the order in which arguments appear on each side of an equation is the same). The common presentation of the theory of monoids is such and indeed there is a corresponding operad. The common presentation of the theory of groups is not of this form (because of the axiom for existence of inverses). This however does not prove that no operad can describe groups since it does not show that no other (super clever) presentation of groups can exist which is of the desired form.
It can be shown that the category of algebras in $Set$ for an operad $P$ has certain properties that are not present in the category $Grp$ of groups. This does establish that no operad exists that describes groups.
|
A finitely presented group
Given the presented group
$$G=\Bigl\langle a,b\Bigm| a^2=c,\ b(c^2)b,\ ca(b^4)\Bigr\rangle,$$
determine the structure of the quotient $G/G'$,where G' is the derived subgroup of $G$ (i.e., the commutator subgroup of $G$).
Simple elimination shows $G$ is cyclic (as it's generated by $b$) of order dividing $10$; how do I then obtain $G/G'$? Note $G'$ is the derived group, i.e. it's the commutator subgroup of $G$.
| Indeed, the group $G/G'$ is generated by $bG'$: let $\alpha$ denote the image of $a$ in $G/G'$ and $\beta$ the image of $b$. Then we have the relations $\alpha^4\beta^2 = \alpha^3\beta^4 = 1$; from there we obtain
$$\beta^2 = \alpha^{-4} = \alpha^{-1}\alpha^{-3} = \alpha^{-1}\beta^{4},$$
so $\alpha = \beta^{2}$. And therefore $\alpha^4\beta^2 = \beta^8\beta^2 = \beta^{10}=1$. So the order of $\beta$ divides $10$. Therefore $G/G'$ is a quotient of $\langle x\mid x^{10}\rangle$, the cyclic group of order $10$.
Now consider the elements $x^2$ and $x$ in $K=\langle x\mid x^{10}\rangle$. We have $x\Bigl( (x^4)^2\Bigr)x=1$ and $x^4x^2(x^4) =1$. Therefore, there is a homomorphism $G\to K$ that maps $a$ to $x^2$ and $b$ to $x$, which trivially factors through $G/G'$. Therefore, $G/G'$ has the cyclic group of order $10$ as a quotient.
Since $G/G'$ is a quotient of the cyclic group of order $10$ and has the cyclic group of order $10$ as a quotient, it follows that $G/G'$ is cyclic of order $10$ (generated by $bG'$).
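For what it's worth, the same conclusion can be read off mechanically from the relation matrix of the abelianized relations $4\alpha+2\beta=0$ and $3\alpha+4\beta=0$; a SymPy sketch (assuming the `smith_normal_form` helper of recent SymPy versions):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[4, 2],    # 4*alpha + 2*beta = 0
            [3, 4]])   # 3*alpha + 4*beta = 0
print(smith_normal_form(M, domain=ZZ))   # Matrix([[1, 0], [0, 10]])
print(abs(M.det()))                      # 10 = |G/G'|
```

The invariant factors $1$ and $10$ again give $G/G'\cong\mathbb Z/10\mathbb Z$.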
|
solution to a differential equation I have given the following differential equation:
$x'= - y$ and $y' = x$
How can I solve them?
Thanks for helping!
Greetings
| Let $\displaystyle X(t)= \binom{x(t)}{y(t)}$ so
$$ X' = \left( \begin{array}{ccc}
0 & -1 \\
1 & 0 \\
\end{array} \right)X .$$
This has solution $$ X(t)= \exp\biggl( \left( \begin{array}{cc}
0 & -t \\
t & 0 \\
\end{array} \right) \biggr) X(0)= \left( \begin{array}{cc}
\cos t & -\sin t \\
\sin t & \cos t \\
\end{array} \right)\binom{x(0)}{y(0)}$$
so $$ x(t) = x(0)\cos t - y(0)\sin t \ \ \text{ and } \ \ y(t) = x(0)\sin t + y(0)\cos t . $$
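As a quick numerical cross-check of this closed form (a sketch assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.integrate import solve_ivp

x0, y0 = 1.0, 2.0
sol = solve_ivp(lambda t, z: [-z[1], z[0]], (0, 5), [x0, y0],
                rtol=1e-10, atol=1e-10, dense_output=True)

t = np.linspace(0, 5, 11)
numeric = sol.sol(t)
exact = np.vstack([x0 * np.cos(t) - y0 * np.sin(t),
                   x0 * np.sin(t) + y0 * np.cos(t)])
print(np.max(np.abs(numeric - exact)))   # tiny, so the two solutions agree
```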
|
If p then q misunderstanding? The statement $P\rightarrow Q$ means: if $P$ then $Q$.
p | q | p->q
_____________
T | F | F
F | F | T
T | T | T
F | T | T
Lets say: if I'm hungry $h$ - I'm eating $e$.
p | q | p->q
_______________________
h | not(e) | F
not(h) | not(e) | T
h | e | T
not(h) | e | T // ?????
If I'm not hungry, I'm eating? (This does not make any sense...)
Can you please explain that for me?
| Rather than your example about food — which is not very good, there are a lot of people starving and not eating —, let us consider a more mathematical one: if $n$ equals 2 then $n$ is even.
I — How to interpret the truth table ?
Fix $n$ an integer.
Let $p$ denote the assertion “$n = 2$”, and $q$ the assertion “$n$ is even”. These two assertions can be true or false, depending on $n$. What does it mean that the assertion “$p \to q$” is true?
This means precisely:

* $p$ true and $q$ false is not possible;
* $p$ false and $q$ false is a priori possible (e.g. with $n = 3$);
* $p$ true and $q$ true is a priori possible (e.g. with $n = 2$);
* $p$ false and $q$ true is a priori possible (e.g. with $n = 4$).
II — Some common errors
The assertion “($n$ is divisible by 2) $\to$ ($n$ is divisible by 3)” is definitely not true for all $n$, but it can be true, for example for $n=3$, or $n=6$.
The fact that “false implies true” should not be read as “if not $p$ then $q$”. Indeed “false implies false” also holds, so if $p$ is false then either $q$ is false or it is true, which is not a big deal.
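If it helps, the full truth table of $p \to q$ is just “not $p$, or $q$”; a two-line Python sketch prints it:

```python
for p in (True, False):
    for q in (True, False):
        print(p, q, (not p) or q)   # p -> q is False only in the row (True, False)
```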
|
Taking the derivative of $y = \dfrac{x}{2} + \dfrac {1}{4} \sin(2x)$ Again a simple problem that I can't seem to get the derivative of
I have $\frac{x}{2} + \frac{1}{4}\sin(2x)$
I am getting $\frac{x^2}{4} + \frac{4\sin(2x)}{16}$
This is all very wrong, and I do not know why.
| You deal with the sum of functions, $f(x) = \frac{x}{2}$ and $g(x)= \frac{1}{4} \sin(2 x)$. So you would use linearity of the derivative:
$$
\frac{d}{d x} \left( f(x) + g(x) \right) = \frac{d f(x)}{d x} + \frac{d g(x)}{d x}
$$
To evaluate these derivatives, you would use $\frac{d}{d x}\left( c f(x) \right) = c \frac{d f(x)}{d x}$, for a constant $c$. Thus
$$
\frac{d}{d x} \left( \frac{x}{2} + \frac{1}{4} \sin(2 x) \right) = \frac{1}{2} \frac{d x}{d x} + \frac{1}{4} \frac{d \sin(2 x)}{d x}
$$
To evaluate derivative of the sine function, you would need a chain rule:
$$
\frac{d}{d x} y(h(x)) = y^\prime(h(x)) h^\prime(x)
$$
where $y(x) = \sin(x)$ and $h(x) = 2x$. Now finish it off using table of derivatives.
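If you want to double-check the final result, here is a one-line SymPy verification (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.diff(x/2 + sp.sin(2*x)/4, x))   # cos(2*x)/2 + 1/2
```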
|
Can someone check my work on this integral? $$
\begin{align}
\int_0^{2\pi}\log|e^{i\theta} - 1|d\theta
&= \int_0^{2\pi}\log(1-\cos(\theta))d\theta \\
&= \int_0^{2\pi}\log(\cos(0) - \cos(\theta))\,d\theta\\
&= \int_0^{2\pi}\log\left(-2\sin\left(\frac{\theta}{2}\right)\sin\left(\frac{-\theta}{2}\right)\right)\,d\theta\\
&= \int_0^{2\pi}\log\left(2\sin^2\left(\frac{\theta}{2}\right)\right)\,d\theta\\
&= \int_0^{2\pi}\log(2)d\theta + 2\int_0^{2\pi}\log\left(\sin\left(\frac{\theta}{2}\right)\right)\,d\theta\\
&= 2\pi \log(2) + 4\int_0^\pi \log\big(\sin(t)\big)\,dt\\
&=2\pi \log(2) - 4\pi \log(2) = -2\pi \log(2)
\end{align}
$$
Where $\int_0^\pi \log(\sin(t))\,dt = -\pi \log(2)$ according to this. The first step where I removed the absolute value signs is the one that worries me the most. Thanks.
| You can use Jensen's Formula, I believe: http://mathworld.wolfram.com/JensensFormula.html
Edit: Jensen's formula seems to imply that your integral is zero...
|
Morphism between projective schemes induced by surjection of graded rings Ravi Vakil 9.2.B is "Suppose that $S \rightarrow R$ is a surjection of graded rings. Show that the induced morphism $\text{Proj }R \rightarrow \text{Proj }S$ is a closed embedding."
I don't even see how to prove that the morphism is affine. The only ways I can think of to do this are to either classify the affine subspaces of Proj S, or to prove that when closed morphisms are glued, one gets a closed morphism.
Are either of those possible, and how can this problem be done?
| I think a good strategy could be to verify the statement locally, and then verify that the glueing is successful, as you said. Let us call $\phi:S\to R$ your surjective graded morphism, and $\phi^\ast:\textrm{Proj}\,\,R\to \textrm{Proj}\,\,S$ the corresponding morphism. Note that $$\textrm{Proj}\,\,R=\bigcup_{t\in S_1}D_+(\phi(t))$$
because $S_+$ (the irrelevant ideal of $S$) is generated by $S_1$ (as an ideal), so $\phi(S_+)R$ is generated by $\phi(S_1)$. For any $t\in S_1$ you have a surjective morphism
$S_{(t)}\to R_{(\phi(t))}$ (sending $x/t^n\mapsto \phi(x)/\phi(t)^n$, for any $x\in S$), which corresponds to the canonical closed immersion of affine schemes $\phi^\ast_t:D_+(\phi(t))\hookrightarrow D_+(t)$. It remains to glue the $\phi^\ast_t$'s.
|
How to show that if a matrix A is diagonalizable, then a similar matrix B is also diagonalizable? So a matrix $B$ is similar to $A$ if for some invertible $S$, $B=S^{-1}AS$. My idea was to start with saying that if $A$ is diagonalizable, that means $A={X_A}^{-1}\Lambda_A X_A$, where $X$ is the eigenvector matrix of $A$, and $\Lambda$ is the eigenvalue matrix of $A$.
And I basically want to show that $B={X_B}^{-1}\Lambda_B X_B$. This would mean $B$ is diagonalizable right?
I am given that similar matrices have the same eigenvalues, and if $x$ is an eigenvector of $B$, then $Sx$ is an eigenvector of $A$. That is, $Bx=\lambda x \implies A(Sx)=\lambda(Sx)$.
Can someone enlighten me please? Much appreciated.
| Hint: Substitute $A = X_A^{-1} \Lambda X_A$ into $B = S^{-1} A S$ and use the formula $D^{-1}C^{-1} = (CD)^{-1}$.
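If it helps to see the hint in action, here is a small NumPy sketch (random matrices, names mine), using the same convention $A = X_A^{-1}\Lambda_A X_A$ as above:

```python
import numpy as np

rng = np.random.default_rng(0)
L = np.diag([1.0, 2.0, 3.0])              # Lambda
X = rng.standard_normal((3, 3))           # X_A (almost surely invertible)
S = rng.standard_normal((3, 3))

A = np.linalg.inv(X) @ L @ X              # A is diagonalizable by construction
B = np.linalg.inv(S) @ A @ S              # B is similar to A
XS = X @ S                                # candidate eigenvector matrix "X_B"

print(np.allclose(B, np.linalg.inv(XS) @ L @ XS))   # True: B = (XS)^{-1} Lambda (XS)
print(np.sort(np.linalg.eigvals(B).real))           # [1. 2. 3.], same eigenvalues as A
```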
|
what's the ordinary derivative of the kronecker delta function? What's the ordinary derivative of the Kronecker delta function? I have used "ordinary" in order not to confuse the reader with the covariant derivative. I have tried the following:
$$\delta[x-n]=\frac{1}{2\pi}\int_{0}^{2\pi}e^{i(x-n)t}dt$$
but that doesn't work, since $x,n \in \mathbb{Z}$, while I look for the case $x \in \mathbb{R}$.
| Maybe it is already too late, but I will answer. If I am wrong, please correct me.
Let's obtain the Kronecker delta via the Fourier transform, getting a sinc function:
$$\delta_{k,0} = \frac{1}{a}\int_{-\frac{a}{2}}^{\frac{a}{2}} e^{-\frac{i 2 \pi k x}{a}} \, dx = \frac{\sin (\pi k)}{\pi k}$$
This function looks like this: [Figure: Fourier transform of "1" (the sinc function) with the Kronecker delta shown as orange dots.]
Calculating the derivative we get:
$$\frac{d \delta_{k,0}}{dk} = \frac{\cos (\pi k)}{k}-\frac{\sin (\pi k)}{\pi k^2} = \frac{\cos (\pi k)}{k}$$
for $k \in \mathbb Z$
On a plot it looks like this: [Figure: the derivative of the sinc function and the Kronecker delta.]
|
How to verify the following function is convex or not? Consider function
$$f(x)=\frac{x^{n_{1}}}{1-x}+\frac{(1-x)^{n_{2}}}{x},x\in(0,1)$$
where $n_{1}$ and $n_2$ are some fixed positive integers.
My question: Is $f(x)$ convex for any fixed $n_1$ and $n_2$?
The second derivative of the function $f$ is very complicated, so I hope there exists another method to verify the convexity property.
| In mathematics, a real-valued function defined on an interval is called convex (or convex downward or concave upward) if the graph of the function lies below the line segment joining any two points of the graph. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. More generally, this definition of convex functions makes sense for functions defined on a convex subset of any vector space (according to Wikipedia).
A real-valued function $f : X \to \mathbb{R}$ defined on a convex set $X$ in a vector space is called convex if, for any two points $x_1,x_2$ in $X$ and any $t \in [0,1]$, we have
$f(tx_1+(1-t)x_2)\le t f(x_1)+(1-t)f(x_2).$ Now let's take some fixed values of $n_1$ and $n_2$, say $5$ and $10$, and try it.
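Short of a proof, here is one way to "try it" numerically (a NumPy sketch checking discrete second differences on a grid; positive second differences suggest convexity but do not prove it):

```python
import numpy as np

def f(x, n1, n2):
    return x**n1 / (1 - x) + (1 - x)**n2 / x

x = np.linspace(0.001, 0.999, 2001)
for n1, n2 in [(1, 1), (2, 5), (5, 10), (7, 3)]:
    v = f(x, n1, n2)
    second_diff = v[:-2] - 2 * v[1:-1] + v[2:]     # discrete analogue of f''
    print(n1, n2, bool((second_diff >= 0).all()))  # True on all tested pairs
```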
|
Probability that a coin lands on tails an odd number of times when it is tossed $100$ times
A coin is tossed 100 times. Find the probability that tails occurs an odd number of times!
I do not know the answer, but I tried this: there are these $4$ possible outcomes in which the tossing of a coin $100$ times can unfold.
* head occurs odd times
* head occurs even times
* tail occurs odd times
* tail occurs even times
Getting a head is equally likely as getting a tail, similarly for odd times and even times.
Thus, all of these events must have the same probability, i.e. $\dfrac{1}{4}$.
Is this the correct answer? Is there an alternate way of solving this problem? Lets hear it!
| There are only two possible outcomes: Either both heads and tails come out an even number of times, or they both come out an odd number of times. This is so because if heads came up $x$ times and tails came up $y$ times then $x+y=100$, and the even number 100 can't be the sum of an even and an odd number.
A good way to solve this problem is to notice that if we have a sequence of 100 coin tosses in which tails came up an odd number of times, then by flipping the result of the first toss you get a sequence where tails came up an even number of times (and no matter what came up in the first toss!). Hence you have a bijection between the set of sequences where tails occurs an odd number of times, and the set of sequences where tails occurs an even number of times. Since the two sets have the same size, the required probability is $\frac{1}{2}$.
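An exact check with Python's integers (no simulation needed):

```python
from math import comb
from fractions import Fraction

odd_tails = sum(comb(100, k) for k in range(1, 101, 2))
print(Fraction(odd_tails, 2**100))   # 1/2
print(odd_tails == 2**99)            # True: exactly half of all 2^100 sequences
```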
|
Convergence to the stable law I am reading the book Kolmogorov A.N., Gnedenko B.V. Limit distributions for sums of independent random variables.
From the general theory there it is known that if $X_i$ are symmetric i.i.d r.v such that $P(|X_1|>x)=x^{-\alpha},\, x \geq 1$, then $(X_1+\ldots+X_n)n^{-1/\alpha}\to Y$, where c.f. of $Y$ equals $\varphi_Y(t)=e^{-c|t|^{\alpha}}, \alpha \in (0,2]$, so $Y$ has stable law of distribution.
I want to check it without using those general theorems. So I start as follows: $X_1$ has density of distribution $f_X(x)=|x|^{-\alpha-1}\alpha/2, |x|>1$. Using Lévy's theorem one must prove that $\varphi^n_{X_1}(t/n^{1/\alpha})\to \varphi_Y(t),\, n \to \infty$ for all $t\in \mathbb R$. $$\varphi_{X_1}(t/n^{1/\alpha})=\int_{1}^{\infty}\cos(tx/n^{1/\alpha})\alpha x^{-\alpha-1}\,dx,$$ for all $t$ it is evident that $\varphi_{X_1}(t/n^{1/\alpha})\to 1$ as $n \to \infty$, so we have the indeterminate form $1^\infty$.
So we are to find $n(\varphi_{X_1}(t/n^{1/\alpha})-1)$, but $\varphi_{X_1}(t/n^{1/\alpha})\sim 1+1/2(2txn^{-1/\alpha})^2$, and I can only say something about $\alpha=2$ and I got stuck here. Perhaps, I made a mistake somewhere.
Could you please help me? Thanks.
| In your integral make the change of variables $z = \frac{x}{n^{1/\alpha}}$. This brings a factor $\frac 1n$ out front. Then write $\cos(tz) = 1 + (\cos(tz) -1)$. Integrate the $1$ explicitly, and the integral involving $\cos(tz)-1$ converges because it is nice at zero.
|
Newton polygons This question is primarily to clear up some confusion I have about Newton polygons.
Consider the polynomial $x^4 + 5x^2 +25 \in \mathbb{Q}_{5}[x]$. I have to decide if this polynomial is irreducible over $\mathbb{Q}_{5}$.
So, I compute its Newton polygon. On doing this I find that the vertices of the polygon are $(0,2)$, $(2,1)$ and $(4,0)$. The segments joining $(0,2)$ and $(2,1)$, and $(2,1)$ and $(4,0)$ both have slope $-\frac{1}{2}$, and both segments have length $2$ when we take their projections onto the horizontal axis.
Am I correct in concluding that the polynomial $x^4 +5x^2 +25$ factors into two quadratic polynomials over $\mathbb{Q}_{5}$, and so is not irreducible?
I am deducing this on the basis of the following definition of a pure polynomial given in Gouvea's P-adic Numbers, An Introduction (and the fact that irreducible polynomials are pure):
A polynomial is pure if its Newton polygon has one slope.
What I interpret this definition to mean is that a polynomial $f(x) = a_nx^n + ... + a_0$ $\in \mathbb{Q}_{p}[x]$ (with $a_na_0 \neq 0$) is pure, iff the only vertices on its Newton polygon are $(0,v_p(a_0))$ and $(n, v_p(a_n))$. Am I right about this, or does the polynomial $x^4 + 5x^2+25$ also qualify as a pure polynomial?
| There is no vertex at $(2,1)$. In my opinion, the right way to think of a Newton Polygon of a polynomial is as a closed convex body in ${\mathbb{R}}^2$ with vertical sides on both right and left. A point $P$ is only a vertex if there's a line through it touching the polygon at only one point. So this polynomial definitely is pure, and N-polygon theory does not help you at all. Easiest, I suppose, will be to write down what the roots are and see that any one of them generates an extension field of ${\mathbb{Q}}_5$ of degree $4$: voilà, your polynomial is irreducible.
|
Find all connected 2-sheeted covering spaces of $S^1 \lor S^1$ This is exercise 1.3.10 in Hatcher's book "Algebraic Topology".
Find all the connected 2-sheeted and 3-sheeted covering spaces of $X=S^1 \lor S^1$, up to isomorphism of covering spaces without basepoints.
I need some start-help with this. I know there is a bijection between the subgroups of index $n$ of $\pi_1(X) \approx \mathbb{Z} *\mathbb{Z}$ and the n-sheeted covering spaces, but I don't see how this can help me find the covering spaces (preferably draw them). From the pictures earlier in the book, it seems like all the solutions are wedge products of circles (perhaps with some orientations?).
So the question is: How should I think when I approach this problem? Should I think geometrically, group-theoretically, a combination of both? Small hints are appreciated.
NOTE: This is for an assignment, so please don't give away the solution. I'd like small hints or some rules on how to approach problems like this one. Thanks!
| A covering space of $S^1 \lor S^1$ is just a certain kind of graph, with edges labeled by $a$'s and $b$'s, as shown in the full-page picture on pg. 58 of Hatcher's book.
Just try to draw all labeled graphs of this type with exactly two or three vertices. Several of these are already listed in parts (1) through (6) of the figure, but there are several missing.
|
Using the definition of a concave function prove that $f(x)=4-x^2$ is concave (do not use derivative).
Let $D=[-2,2]$ and $f:D\rightarrow \mathbb{R}$ be $f(x)=4-x^2$. Sketch this function.Using the definition of a concave function prove that it is concave (do not use derivative).
Attempt:
$f(x)=4-x^2$ is a downward-facing parabola with vertex at $(0,4)$. I know that. But what is $D=[-2,2]$ given for? Is it the domain or a point?
Then, how do I prove that $f(x)$ is concave using the definition of a concave function? I got the inequality which should hold for $f(x)$ to be concave:
For two distinct non-negative values of $x (u$ and $v$)
$f(u)=4-u^2$ and $f(v)=4-v^2$
Condition of a concave function:
$ \lambda(4-u^2)+(1-\lambda)(4-v^2)\leq 4-[\lambda u+(1-\lambda)v]^2$
I do not know what to do next.
| If you expand your inequality, and fiddle around you can end up with
$$
(\lambda u-\lambda v)^2\leq (\sqrt{\lambda}u-\sqrt{\lambda}v)^2.
$$
Without loss of generality, you may assume that $u\geq v$. This allows you to drop the squares. Another manipulation gives you something fairly obvious. Now, work your steps backwards to give a valid proof.
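If you want to see the algebra done symbolically, here is a small SymPy check (an illustration, not part of the proof) that the defect in the concavity inequality is exactly $\lambda(1-\lambda)(u-v)^2\ge 0$:

```python
import sympy as sp

u, v, lam = sp.symbols('u v lambda', real=True)
lhs = lam*(4 - u**2) + (1 - lam)*(4 - v**2)      # chord value
rhs = 4 - (lam*u + (1 - lam)*v)**2               # function value at the convex combination

# rhs - lhs equals lambda*(1-lambda)*(u-v)^2, which is >= 0 for 0 <= lambda <= 1
assert sp.expand(rhs - lhs - lam*(1 - lam)*(u - v)**2) == 0
```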
|
Evaluate the $\sin$, $\cos$ and $\tan$ without using calculator?
Evaluate the $\sin$, $\cos$ and $\tan$ without using calculator?
$150$ degree
the right answer are $\frac{1}{2}$, $-\frac{\sqrt{3}}{2}$and $-\frac{1}{\sqrt{3}} $
$-315$ degree
the right answer are $\frac{1}{\sqrt{2}}$, $\frac{1}{\sqrt{2}}$ and $1$.
| You can look up cos and sin on the unit circle.
These angles correspond to the special right triangles 30-60-90 and 45-45-90 on the unit circle. Note that $-315 \equiv 45 \pmod{360}$.
For tan, use the identity $\tan{\theta} = \frac{\sin{\theta}}{\cos \theta}$.
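As a quick numerical sanity check (an illustration, not a derivation), Python's `math` module confirms the listed values:

```python
import math

expected = {150: (0.5, -math.sqrt(3)/2, -1/math.sqrt(3)),
            -315: (1/math.sqrt(2), 1/math.sqrt(2), 1.0)}
for deg, (s, c, t) in expected.items():
    r = math.radians(deg)
    assert math.isclose(math.sin(r), s)
    assert math.isclose(math.cos(r), c)
    assert math.isclose(math.tan(r), t)
print("all values check out")
```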
|
Classical contradiction in logic I am studying for my logic finals and I came across this question:
Prove $((E\implies F)\implies E)\implies E$
I don't understand how there is a classical contradiction in this proof.
Using natural deduction, could some one explain why there is a classical contradiction in this proof?
Thank you in advance!
| This proof follows the OP's attempt at a proof. The main difference is that explosion (X) is used on line 5:
The classical contradictions are lines 4 and 8. According to Wikipedia,
In classical logic, a contradiction consists of a logical incompatibility between two or more propositions.
For line 4, the two lines that show logical incompatibility are 2 and 3. For line 8, the two lines that show logical incompatibility are 2 and 7.
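Independently of the natural-deduction details, you can convince yourself that the formula (Peirce's law) really is a classical tautology by brute force; this truth-table check in Python is only an illustration, not a natural-deduction proof:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# ((E -> F) -> E) -> E is true under every assignment, i.e. a classical tautology
assert all(implies(implies(implies(E, F), E), E)
           for E, F in product([False, True], repeat=2))
print("Peirce's law is a tautology")
```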
|
Combinatorics - An unproved slight generalization of a familiar combinatorial identity One of the most basic and famous combinatorial identites is that
$$\sum_{i=0}^n \binom{n}{i} = 2^n \; \forall n \in \mathbb{Z^+} \tag 1$$
There are several ways to make generalizations of $(1)$, one is that:
Rewrite $(1)$: $$\sum_{a_1,a_2 \in \mathbb{N}; \; a_1+a_2=n} \frac{n!}{a_1! a_2!} = 2^n \; \forall n \in \mathbb{Z^+} \tag 2$$
Generalize $(2)$: $$\sum_{a_1,...,a_k \in \mathbb{N}; \; a_1+...+a_k=n} \frac{n!}{a_1!...a_k!} = k^n \; \forall n,k \in \mathbb{Z^+} \tag 3$$
Using Double counting, it is easy to prove that $(3)$ is true. So we have a generalization of $(1)$.
The question is whether we can generalize the identity below using the same idea
$$\sum_{i=0}^n \binom{n}{i}^2 = \binom{2n}{n} \; \forall n \in \mathbb{Z^+} \tag 4$$
which means to find $f$ in
$$\sum_{a_1,...,a_k \in \mathbb{N}; \; a_1+...+a_k=n} \left ( \frac{n!}{a_1!...a_k!} \right )^2 = f(n,k) \; \forall n,k \in \mathbb{Z^+} \tag 5$$
That is the problem I try to solve few days now but not achieve anything. Anyone has any ideas, please share. Thank you.
P.S: It is not an assignment, and sorry for my bad English.
Supplement 1: I think I need to make it clear: The problem I suggested is about to find $f$ which satisfies $(5)$. I also show the way I find the problem, and the only purpose of which is that it may provide ideas for solving.
Supplement 2: I think I have proved the identity of $f(n,3)$ in the comment below
$$f(n,3) = \sum_{i=0}^n \binom{n}{i}^2 \binom{2i}{i} \tag 6$$
by using Double Counting:
We double-count the number of ways to choose a sex-equal subgroup, half of which are on the special duty, from the group which includes $n$ men and $n$ women (the empty subgroup is counted).
The first way of counting: The number of satisfying subgroups which contain $2i$ people is $\binom{n}{i}^2 \binom{2i}{i}$. So we have the number of satisfying subgroups is $RHS(6)$.
The second way of counting: The number of satisfying subgroups which contain $2(a_2+a_3)$ people, $a_2$ women on the duty and $a_3$ men on the duty is
$$\left ( \frac{n!}{a_1!a_2!a_3!} \right )^2$$.
So the number of satisfying subgroups is $LHS(6)$.
Hence, $(6)$ is proved.
| Induction using Vandermonde's identity
$$
\sum_{i_1+i_2=m}\binom{n_1}{i_1}\binom{n_2}{i_2}=\binom{n_1+n_2}{m}\tag{1}
$$
yields
$$
\sum_{i_1+i_2+\dots+i_k=m}\binom{n_1}{i_1}\binom{n_2}{i_2}\dots\binom{n_k}{i_k}=\binom{n_1+n_2+\dots+n_k}{m}\tag{2}
$$
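A brute-force numerical check of the generalized Vandermonde identity $(2)$ above, for a few small parameter choices (an illustration, not a proof):

```python
from math import comb
from itertools import product

def lhs(ns, m):
    total = 0
    for idx in product(range(m + 1), repeat=len(ns)):
        if sum(idx) == m:
            term = 1
            for n_j, i_j in zip(ns, idx):
                term *= comb(n_j, i_j)
            total += term
    return total

for ns in [(3, 4), (2, 3, 5), (1, 2, 3, 4)]:
    for m in range(sum(ns) + 1):
        assert lhs(ns, m) == comb(sum(ns), m)
print("identity (2) holds for the sampled parameters")
```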
|
Differential operator - what is $\frac{\partial f}{\partial x}x$? given $f$ - a smooth function, $f\colon\mathbb{R}^2\to \mathbb{R}$.
I have a differential operator that takes $f$ to $\frac{\partial f}{\partial x}x$, but I am unsure what this is.
If, for example, the operator tooked $f$ to $\frac{\partial f}{\partial x}$ then I understand that this operator derives by $x$, $\frac{\partial f}{\partial x}x$ derives by $x$ and then what ?
Can someone please help with this ? (I'm confused...)
| Let $D$ be a (first order) differential operator (i.e. a derivation) and let $h$ be a function. (In your example $h$ is the function $x$ and $D$ is $\frac{\partial}{\partial x}$). I claim that there is a differential operator $hD$ given by
$$
f \mapsto hDf.
$$
(This means multiply $h$ by $Df$ pointwise.) For this claim to hold, we just need it to be true that the Leibniz (product) rule holds, i.e.
$$
hD(fg) = f(hD)g + g(hD)f.
$$
Since $D$ itself satisfies the Leibniz rule, this is true. (We also need the operator $hD$ to take sums to sums, which it does.)
Put less formally, to compute $x \frac{\partial}{\partial x}$ on a function $f$, take the partial with respect to $x$, then multiply what you get by $x$.
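A small SymPy illustration of applying the operator $f\mapsto x\,\partial f/\partial x$ pointwise (the example function is my own choice):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x*y) + x**3

# the operator f -> x * (partial f / partial x): differentiate, then multiply by x
result = x * sp.diff(f, x)
print(result)   # x*(y*cos(x*y) + 3*x**2)
```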
|
moment generating function of exponential distribution I have a question concerning the aforementioned topic :)
So, with $f_X(t)={\lambda} e^{-\lambda t}$, we get:
$$\phi_X(t)=\int_{0}^{\infty}e^{tX}\lambda e^{-\lambda X}dX =\lambda \int_{0}^{\infty}e^{(t-\lambda)X}dX =\lambda \frac{1}{t-\lambda}\left[ e^{(t-\lambda)X}\right]_0^\infty =\frac{\lambda}{t-\lambda}[0-(1)]$$ but only, if $(t-\lambda)<0$, else the integral does not converge. But how do we know that $(t-\lambda)<0$? Or do we not know it at all, and can only give the best approximation? Or (of course) am I doing something wrong? :)
Yours,
Marie!
| You did nothing wrong. The moment generating function of $X$ simply isn't defined, as your work shows, for $t\ge\lambda$.
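A quick symbolic check with sample values $t<\lambda$ (a sketch with SymPy; the particular numbers are my own choice):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
lam, t = 2, 1   # any sample values with t < lambda, where the integral converges
mgf = sp.integrate(sp.exp(t*x) * lam * sp.exp(-lam*x), (x, 0, sp.oo))
assert mgf == sp.Rational(lam, lam - t)   # equals lambda/(lambda - t)
print(mgf)
```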
|
Prove that $|e^{a}-e^{b}|<|a-b|$
Possible Duplicate:
$|e^a-e^b| \leq |a-b|$
Could someone help me through this problem?
Let a, b be two complex numbers in the left half-plane. Prove that $|e^{a}-e^{b}|<|a-b|$
| By mean value theorem,
$$ |e^a - e^b| \leqslant |a-b|\max_{x\in [a,b]} |e^x| $$
But $a$ and $b$ have negative real part, so every $x$ on the segment $[a,b]$ also has negative real part, and $|e^x|=e^{\operatorname{Re}x}<1$. Hence the $\max$ is less than one. And thus
$$ |e^a - e^b| <|a-b|. $$
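A numerical spot-check (not a proof) of the strict inequality for randomly sampled points with negative real part; the sampling ranges below are arbitrary:

```python
import cmath
import random

random.seed(0)
for _ in range(10_000):
    a = complex(-random.uniform(0.01, 5), random.uniform(-5, 5))
    b = complex(-random.uniform(0.01, 5), random.uniform(-5, 5))
    if a != b:
        assert abs(cmath.exp(a) - cmath.exp(b)) < abs(a - b)
print("inequality held in every sampled case")
```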
|
Finding $\frac{dy}{dt}$ given a curve and the speed of the particle Today I was doing some practice problems for the AP Calculus BC exam and came across a question I was unable to solve.
In the xy-plane, a particle moves along the parabola $y=x^2-x$ with a constant speed of $2\sqrt{10}$ units per second. If $\frac{dx}{dt}>0$, what is the value of $\frac{dy}{dt}$ when the particle is at the point $(2,2)$?
I first tried to write the parabola as a parametric equation such that $x(t)=at$ and $y(t)=(at)^2-at$ and then find a value for $a$ such that $\displaystyle\int_0^1\sqrt{(x'(t))^2+(y'(t))^2}dt=2\sqrt{10}$. However, since it was a multiple choice question we were probably not supposed to spend more than 3min on the question so I though that my approach was probably incorrect. The only information that I know for sure is that since $\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}\rightarrow\frac{dy}{dt}=(2x-1)\frac{dx}{dt}$ and we are evaluating at $x=2$ and so $\frac{dy}{dt}=3\frac{dx}{dt}$. Other than that I am not sure how to proceed and so any help would be greatly appreciated!
| As you noticed, $dy=(2x-1)dx$. The speed is constant, so that $\dot{x}^2+\dot{y}^2=40$. So you get the system $$\begin{cases} \dot{y}=(2x-1)\dot{x} \\\ \dot{x}^2+\dot{y}^2=40 \end{cases}.$$ By substitution, $\left[1+(2x-1)^2 \right]\dot{x}^2 = 40$, whence, at $x=2$, $\dot{x}^2=4$. Since $\dot{x}>0$, you find $\dot{x}=2$, and then $\dot{y}=(2x-1)\dot{x}=3 \cdot 2 = 6$. I hope I did not make mistakes.
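The same little system can be handed to SymPy as a check (an illustration; the variable names are mine):

```python
import sympy as sp

xdot, ydot = sp.symbols('xdot ydot')
# at the point (2,2): dy/dt = (2*2 - 1) dx/dt and the speed is 2*sqrt(10)
sols = sp.solve([ydot - 3*xdot, xdot**2 + ydot**2 - 40], [xdot, ydot])
print([s for s in sols if s[0] > 0])   # [(2, 6)], so dy/dt = 6
```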
|
Wedge product $d(u\, dz)= \bar{\partial}u \wedge dz$. How to show that if $u \in C_0^\infty(\mathbb{C})$ then
$d(u\, dz)= \bar{\partial}u \wedge dz$.
Thank you.
| Note that $$d(u\,dz)=du\wedge dz+(-1)^0u\,ddz=du\wedge dz=(\partial u+\bar{\partial} u)\wedge dz.$$
Since $$\partial u=\frac{\partial u}{\partial z}dz\hspace{2mm}\mbox{ and }\hspace{2mm}\bar{\partial} u=\frac{\partial u}{\partial \bar{z}}d\bar{z},$$
we have
$$d(u\,dz)=\frac{\partial u}{\partial z}dz\wedge dz+\frac{\partial u}{\partial \bar{z}}d\bar{z}\wedge dz=\frac{\partial u}{\partial \bar{z}}d\bar{z}\wedge dz=\bar{\partial} u\wedge dz,$$
where we have used $dz\wedge dz=0$.
|
How to show that every connected graph has a spanning tree, working from the graph "down" I am confused about how to approach this. It says:
Show that every connected graph has a spanning tree. It's possible to
find a proof that starts with the graph and works "down" towards the
spanning tree.
I was told that a proof by contradiction may work, but I'm not seeing how to use it. Is there a visual, drawing-type of proof?
I appreciate any tips or advice.
| Let $G$ be a connected graph. If $G$ has no cycle, then $G$ is a spanning tree of itself. If $G$ has a cycle, remove one edge of that cycle (this keeps the graph connected) and repeat; when no cycles remain, the resulting graph is a spanning tree. A sketch of this edge-removal procedure is given below.
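Here is a short sketch of that top-down procedure in Python (my own illustration; it deletes any edge whose removal keeps the graph connected, which is exactly the condition that the edge lies on a cycle):

```python
def spanning_tree_down(vertices, edges):
    """Work 'down' from the graph: repeatedly delete an edge lying on a cycle
    (equivalently, an edge whose removal keeps the graph connected)."""
    edges = set(edges)

    def connected(es):
        adj = {v: set() for v in vertices}
        for u, w in es:
            adj[u].add(w)
            adj[w].add(u)
        seen, stack = set(), [next(iter(vertices))]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u] - seen)
        return seen == set(vertices)

    removed = True
    while removed:
        removed = False
        for e in list(edges):
            if connected(edges - {e}):
                edges.remove(e)
                removed = True
                break
    return edges

# a 4-cycle with a chord: two edges get removed, leaving a tree on 4 vertices
print(spanning_tree_down({1, 2, 3, 4}, {(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)}))
```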
|
How to prove this equality $ t(1-t)^{-1}=\sum_{k\geq0} 2^k t^{2^k}(1+t^{2^k})^{-1}$?
Prove the equality $\quad t(1-t)^{-1}=\sum_{k\geq0} 2^k t^{2^k}(1+t^{2^k})^{-1}$.
I have just tried to use the Taylor's expansion of the left to prove it.But I failed.
I don't know how the $k$ and $2^k$ on the right-hand side arise. This homework appears shortly after the Jacobi identity in the book Advanced Combinatorics (page 118, Ex. 10(2)).
Any hints about the proof ?Thank you in advance.
| This is all for $|t|<1$. Start with the geometric series
$$\frac{t}{1-t} = \sum_{n=1}^\infty t^n$$
On the right side, each term expands as a geometric series
$$\frac{2^k t^{2^k}}{1+t^{2^k}} = \sum_{j=1}^\infty 2^k (-1)^{j-1} t^{j 2^k}$$
If we add this up over all nonnegative integers $k$, for each integer $n$ you get
a term in $t^n$ whenever $n$ is divisible by $2^k$, with coefficient $+2^k$ when
$n/2^k$ is odd and $-2^k$ when $n/2^k$ is even. So if $2^p$ is the largest power of $2$ that divides $n$, the coefficient of $t^n$ will be $2^p -\sum_{k=0}^{p-1} 2^k = 1$.
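A numerical spot-check of the identity for a few values of $t$ in $(-1,1)$ (an illustration only; the truncation at 40 terms is more than enough here):

```python
for t in (0.1, -0.3, 0.5, 0.9):
    rhs = sum(2**k * t**(2**k) / (1 + t**(2**k)) for k in range(40))
    assert abs(rhs - t / (1 - t)) < 1e-9
print("identity verified numerically at the sampled points")
```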
|
What is so interesting about the zeroes of the Riemann $\zeta$ function? The Riemann $\zeta$ function plays a significant role in number theory and is defined by $$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} \qquad \text{ for } \sigma > 1 \text{ and } s= \sigma + it$$
The Riemann hypothesis asserts that all the non-trivial zeroes of the $\zeta$ function lie on the line $\text{Re}(s) = \frac{1}{2}$.
My question is:
Why are we interested in the zeroes of the $\zeta$ function? Does it give any information about something?
What is the use of writing $$\zeta(s) = \prod_{p} \biggl(1-\frac{1}{p^s}\biggr)^{-1}$$
| Here is a visual supplement to Eric's answer, based on this paper by Riesel and Göhl, and Mathematica code by Stan Wagon:
The animation demonstrates the eventual transformation from Riemann's famed approximation to the prime counting function
$$R(x)=\sum_{k=1}^\infty \frac{\mu(k)}{k} \mathrm{li}(\sqrt[k]{x})=1+\sum_{k=1}^\infty \frac{(\log\,x)^k}{k\,k!\zeta(k+1)}$$
to the actual prime-counting function $\pi(x)$, through a series of successive corrections based on the nontrivial roots of $\zeta(s)$. (Here, $\mu(k)$ is the Möbius function and $\mathrm{li}(x)$ is the logarithmic integral.) See the Riesel/Göhl paper for more details.
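To see how good the approximation already is for modest $x$, here is a small script (my own illustration; it assumes the `mpmath` package is available and uses a naive sieve for $\pi(x)$):

```python
import mpmath as mp

def R(x, terms=100):
    """Riemann's approximation R(x) = 1 + sum_{k>=1} (log x)^k / (k * k! * zeta(k+1))."""
    lx = mp.log(x)
    return 1 + sum(lx**k / (k * mp.factorial(k) * mp.zeta(k + 1)) for k in range(1, terms))

def primepi(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return sum(sieve)

for x in (100, 1000, 10000):
    print(x, mp.nstr(R(x), 8), primepi(x))   # R(x) tracks pi(x) closely
```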
|
Find the area of a surface of revolution I'm a calculus II student and I'm completely stuck on one question:
Find the area of the surface generated by revolving the right-hand
loop of the lemniscate $r^2 = \cos 2\theta$ about the vertical line through
the origin (y-axis).
Can anyone help me out?
Thanks in advance
| Note some useful relationships and identities:
$r^2 = x^2 + y^2$
$\cos 2\theta = \cos^2\theta - \sin^2\theta$
$\sin \theta = {y\over r} = {y\over{\sqrt{x^2 + y^2}}}$
$\sin^2 \theta = {y^2\over {x^2 + y^2}}$
These hint at the possibility of doing this in Cartesian coordinates.
|
Show that $ a,b,c, \sqrt{a}+ \sqrt{b}+\sqrt{c} \in\mathbb Q \implies \sqrt{a},\sqrt{b},\sqrt{c} \in\mathbb Q $ Assume that $a,b,c, \sqrt{a}+ \sqrt{b}+\sqrt{c} \in\mathbb Q$ are rational,prove $\sqrt{a},\sqrt{b},\sqrt{c} \in\mathbb Q$,are rational.
I know that can be proved, would like to know that there is no easier way
$\sqrt a + \sqrt b + \sqrt c = p \in \mathbb Q$,
$\sqrt a + \sqrt b = p- \sqrt c$,
$a+b+2\sqrt a \sqrt b = p^2+c-2p\sqrt c$,
$2\sqrt a\sqrt b=p^2+c-a-b-2p\sqrt c$,
$4ab=(p^2+c-a-b)^2+4p^2c-4p(p^2+c-a-b)\sqrt c$,
$\sqrt c=\frac{(p^2+c-a-b)^2+4p^2c-4ab}{4p(p^2+c-a-b)}\in\mathbb Q$.
| [See here and here for an introduction to the proof. They are explicitly worked special cases]
As you surmised, induction works, employing our prior Lemma (case $\rm\:n = 2\:\!).\:$ Put $\rm\:K = \mathbb Q\:$ in
Theorem $\rm\ \sqrt{c_1}+\cdots+\!\sqrt{c_{n}} = k\in K\ \Rightarrow \sqrt{c_i}\in K\:$ for all $\rm i,\:$ if $\rm\: 0 < c_i\in K\:$ an ordered field.
Proof $\: $ By induction on $\rm n.$ Clear if $\rm\:n=1.$ It is true for $\rm\:n=2\:$ by said Lemma. Suppose that $\rm\: n>2.$ It suffices to show one of the square-roots is in $\rm K,\:$ since then the sum of all of the others is in $\rm K,\:$ so, by induction, all of the others are in $\rm K$.
Note that $\rm\:\sqrt{c_1}+\cdots+\sqrt{c_{n-1}}\: =\: k\! -\! \sqrt{c_n}\in K(\sqrt{c_n})\:$ so all $\,\rm\sqrt{c_i}\in K(\sqrt{c_n})\:$ by induction.
Therefore $\rm\ \sqrt{c_i} =\: a_i + b_i\sqrt{c_n}\:$ for some $\rm\:a_i,\:\!b_i\in K,\:$ for $\rm\:i=1,\ldots,n\!-\!1$.
Some $\rm\: b_i < 0\:$ $\Rightarrow$ $\rm\: a_i = \sqrt{c_i}-b_i\sqrt{c_n} = \sqrt{c_i}+\!\sqrt{b_i^2 c_n}\in K\:\Rightarrow \sqrt{c_i}\in K\:$ by Lemma $\rm(n=2).$
Else all $\rm b_i \ge 0.\:$ Let $\rm\: b = b_1\!+\cdots+b_{n-1} \ge 0,\:$ and let $\rm\: a = a_1\!+\cdots+a_{n-1}.\:$ Then
$$\rm \sqrt{c_1}+\cdots+\!\sqrt{c_{n}}\: =\: a+(b\!+\!1)\:\sqrt{c_n} = k\in K\:\Rightarrow\:\!\sqrt{c_n}= (k\!-\!a)/(b\!+\!1)\in K$$
Note $\rm\:b\ge0\:\Rightarrow b\!+\!1\ne 0.\:$ Hence, in either case, one of the square-roots is in $\rm K.\ \ $ QED
Remark $ $ Note that the proof depends crucially on the positivity of the square-root summands. Without such the proof fails, e.g. $\:\sqrt{2} + (-\sqrt{2})\in \mathbb Q\:$ but $\rm\:\sqrt{2}\not\in\mathbb Q.\:$ It is instructive to examine all of the spots where positivity is used in the proof (above and Lemma), e.g. to avoid dividing by $\,0$.
See also this post on linear independence of square-roots (Besicovic's theorem).
|
$\lim\limits_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite How do I prove that $ \displaystyle\lim_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite?
| Using $\text{AM} \ge \text{GM}$
$$ \frac{1 + \frac{1}{2} + \dots + \frac{1}{n}}{n} \ge \sqrt[n]{\frac{1}{n!}}$$
$$\sqrt[n]{n!} \ge \frac{n}{H_n}$$
where $H_n = 1 + \frac{1}{2} + \dots + \frac{1}{n} \le \log n+1$
Thus
$$\sqrt[n]{n!} \ge \frac{n}{\log n+1}$$
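A quick numerical illustration of the bound (using logarithms to avoid overflow; this is just to see the growth, not part of the proof):

```python
from math import exp, log

for n in (10, 100, 1000, 10000):
    nth_root = exp(sum(log(k) for k in range(1, n + 1)) / n)    # (n!)^(1/n)
    print(n, round(nth_root, 2), round(n / (log(n) + 1), 2))    # value vs the lower bound n/(log n + 1)
```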
|
How to explain integrals and derivatives to a 10 years old kid? I have a sister that is interested in learning "what I do". I'm a 17 years old math loving person, but I don't know how to explain integrals and derivatives with some type of analogies.
I just want to explain it well so that in the future she could remember what I say to her.
| This is a question for a long car journey - both speed and distance travelled are available, and the relationship between the two can be explored and sampled, and later plotted and examined. And the questions as to why both speed and distance are important can be asked etc
|
Reference request for "Hodge Theorem" I have been told about a theorem (it was called Hodge Theorem), which states the following isomorphism:
$H^q(X, E) \simeq Ker(\Delta^q).$
Where $X$ is a Kähler Manifold, $E$ an Hermitian vector bundle on it and $\Delta^q$ is the Laplacian acting on the space of $(0,q)$-forms $A^{0,q}(X, E)$.
Unfortunately I couldn´t find it in the web. Anyone knows a reliable reference for such a theorem? (In specific I´m looking for a complete list of hypothesis needed and for a proof.)
Thank you!
| There is a proof of Hodge theorem in John Roe's book, Elliptic Operators, topology, and asymptotic expansion of heat kernel. The proof is only two page long and very readable. However he only proved it for the classical Laplace operator, and the statement holds for any generalized Laplace operator. Another place you can find a reference is Richard Melrose's notes on microlocal analysis, which you can find in his homepage. But the proof is difficult to read without some background.
|
Cardinality of the power set of the set of all primes Please show me what I am doing wrong...
Given the set $P$ of all primes I can construct the set $Q$ being the power set of P.
Now let $q$ be an element in $Q$. ($q = \{p_1,p_2,p_3,\ldots\}$ where every $p_n$ is an element in $P$.)
Now I can map every $q$ to a number $k$, where $k$ is equal to the product of all elements of $q$. ($k = p_1p_2p_3\ldots$) (for an empty set $q$, $k$ may be equal to one)
Let the set $K$ consist of all possible values of $k$.
Now because of the uniqueness of the prime factorization I can also map every number $k$ in $K$ to a $q$ in $Q$. (letting $k=1$ map to $q=\{\}$)
Thus there exists a bijection between $Q$ and $K$. But $K$ is a subset of the natural numbers which are countable, and $Q$, being the power set of $P$, needs to be uncountably infinite (by Cantor's theorem), since $P$ is countably infinite.
This is a contradiction since there cannot exist a bijection between two sets of different cardinality. What am I overlooking?
| Many (most) of the elements $q$ have an infinite number of elements. Then you cannot form the product of those elements. You have shown that the set of finite subsets of the primes is countable, which is correct.
|
Factorial number of digits Is there any neat way to solve how many digits the number $20!$ have? I'm looking a solution which does not use computers, calculators nor log tables, just pen and paper.
| I will use $\log$ for the base-$b$ logarithm and $\ln$ for the natural log.
The number of digits of $x$ in base $b$ is one more than the floor of $\log x$.
$\log(n!)=\sum_{k=1}^{n}\log k$.
We can interpret this as a Riemann sum for $\int_1^n \log x\,dx$. This integral is actually a lower bound; the upper bound is the same integral taken from $2$ to $n+1$ rather than from $1$ to $n$.
The lower bound integral is given by $n\log n-(n-1)/\ln b$. The upper bound gives $(n+1)\log(n+1)-n/\ln b-2\log 2+1/\ln b$.
For $n=20$, working in base $10$, we get about $17.8$ as the lower bound and $18.9$ as the upper bound. One more than the floor gives $18$ or $19$ digits. Not surprisingly, the answer is $19$, as the lower bound is nearly $19$ digits and the upper is nearly $20$.
The Riemann sum approximations will get better as n increases, but the answer is already quite good by n=20.
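Carrying the arithmetic out in Python, together with the exact digit count of $20!$ (an illustrative check):

```python
import math

n, b = 20, 10
ln_b = math.log(b)
lower = n * math.log(n, b) - (n - 1) / ln_b                                      # integral from 1 to n
upper = (n + 1) * math.log(n + 1, b) - n / ln_b - 2 * math.log(2, b) + 1 / ln_b  # integral from 2 to n+1
exact = len(str(math.factorial(n)))
print(round(lower, 2), round(upper, 2), exact)   # about 17.77, 18.91, and 19 digits
```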
|
What is $\limsup\limits_{n\to\infty} \cos (n)$, when $n$ is a natural number? I think the answer should be $1$, but am having some difficulties proving it. I can't seem to show that, for any $\delta$ and $n > m$, $|n - k(2\pi)| < \delta$. Is there another approach to this or is there something I'm missing?
| You are on the right track. If $|n-2\pi k|<\delta$ then $|\frac{n}{k}-2\pi|<\frac \delta k$. So $\frac{n}{k}$ must be a "good" approximation for $2\pi$ to even have a chance.
Then it depends on what you know about rational approximations of irrational numbers. Do you know about continued fractions?
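A crude numerical illustration (not a proof) that values of $\cos(n)$ creep up toward $1$ as $n$ runs through the integers:

```python
import math

for N in (10**2, 10**4, 10**6):
    print(N, max(math.cos(n) for n in range(1, N + 1)))   # the maxima approach 1
```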
|
How to find $\int{\frac{x}{\sqrt{x^2+1}}dx}$? I started simplifying $$\int{\dfrac{x}{\sqrt{x^2+1}}dx}$$
but I always get this:
$$\int{x(x^2+1)^{-1/2}dx}.$$
But I don't know how to follow by that way.
| Personally, I dislike the use of variable substitution, which is rather mechanical, for problems that can be solved by applying a concept. Not to mention that changing variables is often taken extremely lightly, as if we can just plug in any expression for $x$ and presto! new variable! For example, $u = x^2+1$ is clearly not valid on all of $\mathbb R$ as it isn't injective, but it is on $\mathbb R^+$, which works out because that's all we need in this case; yet no one seems to care to use this justification! Also, for beginner calculus students, "$\mathrm dx$" means god knows what; all they need to know is that $\mathrm d(f(x))=f'(x)\,\mathrm dx$, and hence we encourage liberal manipulation of meaningless symbols.
For these kinds of integrals just use the chain rule: $$
(f(u(x))' = f'(u(x))u'(x)\Rightarrow f(u(x))=\int f'(u(x))u'(x) \mathrm dx$$
So here just identify $u(x)=x^2+1$ and $u'(x)=2x$, so all we need is a factor of $2$ inside the integral which we can obtain if we also divide by $2$, which we can then factor out. I think this is a very good method in general, that is, the method of looking for when a function and its own derivative are present in an integral.
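And indeed a CAS confirms the antiderivative (a quick SymPy check, added only for illustration):

```python
import sympy as sp

x = sp.symbols('x')
antiderivative = sp.integrate(x / sp.sqrt(x**2 + 1), x)
print(antiderivative)   # sqrt(x**2 + 1)
assert sp.simplify(sp.diff(antiderivative, x) - x / sp.sqrt(x**2 + 1)) == 0
```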
|
Why is the Connect Four gaming board 7x6? (or: algorithm for creating Connect $N$ board) The Connect Four board is 7x6, as opposed to 8x8, 16x16, or even 4x4. Is there a specific, mathematical reason for this? The reason I'm asking is because I'm developing a program that will be able to generate Connect $N$ boards, for any given number. At first I assumed that the board size was 2n by 2n, but then I realized it's 7x6. What's going on here?
P.S.: Forgive me if my question tags are incorrect; I'm not quite sure what this falls under.
| So it seems that a 7x6 board was chosen because it's "the smallest board which isn't easily shown to be a draw". In addition, it was also speculated that there should probably be an even amount of columns. Therefore, it seems that the dimensions of a Connect $N$ board are a function of $N$. I see two possible functions:
N.B.: I'm not sure if there's a rule about the numbers being consecutive, but I'm assuming that that is the case here.
Times 1.5 function (as runnable Python):
def times_1_5_board(n):
    column_height = int(n * 1.5)          # truncate the decimal portion of n * 1.5
    if column_height % 2 == 0:
        row_height = n + 1
    else:
        column_height = int(n * 1.5) + 1  # truncate the decimal portion before adding one
        row_height = column_height + 1
    return column_height, row_height
Add 3 function (as runnable Python):
def add_3_board(n):
    column_height = n + 3
    if column_height % 2 == 0:
        row_height = n + 2
    else:
        column_height = n + 4
        row_height = n + 3
    return column_height, row_height
The first one seems more likely, but since I'm trying to generate perfectly mathematically balanced game boards and there doesn't seem to be any symmetry that I can see, I'm still not sure. Does this seem about right?
|
Why do engineers use the Z-transform and mathematicians use generating functions? For a (complex valued) sequence $(a_n)_{n\in\mathbb{N}}$ there is the associated generating function
$$
f(z) = \sum_{n=0}^\infty a_nz^n$$
and the $z$-Transform
$$
Z(a)(z) = \sum_{n=0}^\infty a_nz^{-n}$$
which only differ by the sign of the exponent of $z$, that is, both are essentially the same and carry the same information about the sequence, though encoded slightly differently. The basic idea is the same: associate a holomorphic function with the sequence and use complex calculus (or formal power series).
However, the engineering books I know which treat the $Z$-transform do not even mention the word "generating function" (well one does but means the generator of a multiscale analysis...) and the mathematics books on generating function do not mention the $Z$-transform (see for example "generatingfunctionology").
I am wondering: Why is that? Has one formulation some advantage over the other? Or is it just for historical reasons?
(BTW: There is not a tag for the $Z$-transform, and the closest thing I found was "integral-transforms"...)
| Given a sequence of numbers $\{x[n] \colon n \in \mathbb Z\}$ the $z$-transform
is defined as
$$X(z) = \sum_n x[n]z^{-n}$$ which when evaluated at $z = \exp(j\omega)$
(where $j = \sqrt{-1}$ is what electrical engineers typically use for
what mathematicians denote by $i$) gives
$${X}(\exp(j \omega)) = \sum_n x[n] \exp(-j\omega n)$$
which is called the
discrete-time Fourier Transform (DTFT) of the sequence. Engineers view this as
slightly easier to use and remember than evaluating the generating function
$$\hat{X}(D) = \sum_n x[n]D^{n}$$ (where $D$ denotes delay) at
$D = \exp(-j\omega)$ to arrive at the same result. So, it is essentially a matter of convention.
|
An infinite finitely generated group contains an isometric copy of $\mathbb{R}$, i.e., contains a bi-infinite geodesic The question is: prove that an infinite finitely generated group $G$ contains an isometric copy of $\mathbb{R}$, i.e., contains a bi-infinite geodesic ($G$ is equipped with the word metric).
I do not even know what I have to prove. It does not make sense to me. The word metric of $G$ assumes values in the natural numbers. How could there be an isometry between a subgraph of the Cayley graph of $G$ and the real line $\mathbb{R}$.
I am really confused.
I found this question here (sheet 6, ex. 1).
| I'm just going to focus on what you've said you are confused about, namely:
"How could there be an isometry between a subgraph of the Cayley graph of G and the real line $\mathbb{R}$?".
We can extend the word metric on $G$ to a metric on the Cayley graph in a natural way, with each edge being an isometric copy of a unit interval. Under this metric, the Cayley graph of $\mathbb{Z}$ with respect to the generator $1$ is isometric to $\mathbb{R}$.
|
compactness property I am a new user in Math Stack Exchange. I don't know how to solve part of this problem, so I hope that one of the users can give me a hand.
Let $f$ be a continuous function from $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$ with the following properties:$A\subset \mathbb{R}^{n}$ is open then $f(A)$ is open. If $B\subset \mathbb{R}^{m}$ is compact then $f^{-1}(B)$ is compact.
I want to prove that $f( \mathbb{R}^{n}) $ is closed.
| Take $y \in \overline{f(\mathbb{R}^n)}$.
Let $B_\varepsilon = \{x | d(x,y) \leq \varepsilon\}$.
Now,
$\emptyset \neq B_\varepsilon \cap f(\mathbb{R}^n) = f\left(f^{-1}(B_\varepsilon)\right)$.
Because $f^{-1}(B_\varepsilon)$ is compact, $B_\varepsilon \cap f(\mathbb{R}^n)$ is compact, being the image of a compact set under the continuous map $f$; as $\varepsilon$ decreases, these sets form a decreasing family of nonempty compact sets.
Therefore,
$\bigcap_\varepsilon (B_\varepsilon \cap f(\mathbb{R}^n))$ is nonempty.
Now,
$\emptyset \neq \bigcap_\varepsilon (B_\varepsilon \cap f(\mathbb{R}^n)) \subset \bigcap_\varepsilon B_\varepsilon = \{y\}$
implies that $y \in f(\mathbb{R}^n)$.
That is, $f(\mathbb{R}^n) = \overline{f(\mathbb{R}^n)}$.
By the way, the only "clopen" sets in $\mathbb{R}^m$ are $\emptyset$ and
$\mathbb{R}^m$. Since $f(\mathbb{R}^n)$ is not empty, we have that
$f(\mathbb{R}^n) = \mathbb{R}^m$.
|
How to find perpendicular vector to another vector? How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$
Could anyone explain this to me, please?
I have a solution to this when I have $3\mathbf{i}+4\mathbf{j}$, but could not solve if I have $3$ components...
When I googled, I saw the direct solution but did not find a process or method to follow. Kindly let me know the way to do it. Thanks.
| The vectors perpendicular to $(3,4,-2)$ form a two dimensional subspace, the plane $3x+4y-2z=0$, through the origin.
To get solutions, choose values for any two of $x,y$ and $z$, and then use the equation to solve for the third.
The space of solutions could also be described as $V^{\perp}$, where $V=\{(3t,4t,-2t):t\in\Bbb R\}$ is the line (or one dimensional vector space) spanned by $(3,4,-2)$.
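A concrete numerical illustration of both recipes (my own example, using NumPy):

```python
import numpy as np

v = np.array([3.0, 4.0, -2.0])

# recipe 1: pick two coordinates freely and solve 3x + 4y - 2z = 0 for the third
w1 = np.array([1.0, 1.0, (3*1 + 4*1) / 2])          # z = (3x + 4y)/2
# recipe 2: cross v with any vector that is not parallel to it
w2 = np.cross(v, np.array([1.0, 0.0, 0.0]))

print(np.dot(v, w1), np.dot(v, w2))                 # both 0.0, so both are perpendicular to v
```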
|
Successive Lottery Drawings and Choosing Winning Number Consider the following scenario:
Suppose on some date $D1$ the number $N$ is a winning number in a fair lottery where a "play" is considered the selection of a finite set of numbers. By "fair" I mean that the winning number will be selected at random. At some later date $D2$ another lottery, using the same rules, will be held. Suppose one picks the same number $N$ that was the winning number on $D1$ to play on $D2$. Does this selection increase/decrease or have no effect on ones chances of winning the lottery on $D2$?
I believe that picking a previously winning number will have no impact on one's chance of success for the simple reason that the universe has no way of remembering which previous numbers won and which ones did not; since the selection is random, each number has an equally likely chance of being picked regardless of whether it was picked before. Other than a basic assumption of causality, I really don't know though how one would rigorously prove this.
The counterargument, which I believe is faulty, argues against "reusing" winning numbers because the likelihood of the same number coming up twice is infinitesimally small so, of course, one should not reuse numbers. The problem with this as I see it though is that the probability of picking any two specific numbers, regardless of whether they are the same, is identical to picking the same number twice. The fault is that picking numbers this way is like trying to play two lotteries in succession - which is very different from the given problem.
| Just adding to what André Nicolas said, he's accurate. Some prize tiers are usually shared between all people who got a winning combination so this has an effect.
For example, in 2005 a record 110 players won the second prize tier (prizes of 500,000 or 100,000 dollars, depending on Power Play) in a single Powerball drawing. The winning numbers most of them used apparently came from a fortune-cookie message. Ordinarily only about four tickets would have been expected to win at the Match 5 prize level.
The probability of this happening by chance was so small that an investigation was launched.
|
Sex distribution Suppose there are N male and N female students. They are randomly distributed into k groups.
Is it more probable for a male student to find himself in a group with more guys and for female student to find herself in a group with more girls?
The question is motivated by an argument with my mother. She claimed that in the majority of collectives where she was, the number of women besides her was greater than men, while I had the opposite impression, that in most collectives where I was a member there was more men even if not to count myself.
I never was in a majority-girls collective, so I think for a male it is more probable to find oneself in a majority-male collective (even if we exclude oneself).
| This problem also gives the answer why you are "always" in the longer queue at the supermarket.
If $k=1$ the answer is trivial: All groups are gender balanced.
Therefore we shall assume that $k>1$.
Assume Samuel and Samantha were ill the day the groups were originally formed.
If the two Sams are assigned to the groups randomly, the result is just as if there had been no illness.
If Sam and Sam are assigned to the same group (which happens with probability $\frac1k$ if the distribution is uniform; with other distributions, your mileage may vary), we see by symmetry that more guys is exactly as probable as more gals for this group.
If Sam is assigned to a different group than Sam (which happens with probability $1-\frac1k>0$ in the uniform case, but we actually need only assume that the probability is $>0$), then three cases are possible:
* The group was gender-balanced before, with some probability $p$, say (clearly $p>0$, though the exact value depends on the group size).
* Or the group had more male members.
* Or the group had more female members.
By symmetry, the probabilities for the latter two events are equal, hence are $\frac{1- p}2$ each. Then the probability that the group including Sam has more people of the same gender as Sam is at least $p+\frac{1- p}2=\frac{1+ p}2>\frac12$.
In total, it is more probable to find oneself in a group with more people of one's own gender than with fewer.
This holds both for Sam = Samuel and Sam = Samantha.
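A Monte Carlo illustration of the conclusion (my own simulation; the group sizes and trial count are arbitrary, and the person is counted as part of their own group):

```python
import random

def majority_own_gender(N=10, k=4, trials=100_000, seed=0):
    """Fraction of trials in which a fixed person's group contains strictly more
    of their own gender than of the other (counting the person themselves)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        people = ['M'] * N + ['F'] * N
        rng.shuffle(people)
        group = people[: (2 * N) // k]      # the 'fixed' person is people[0]
        own = group.count(people[0])
        wins += own > len(group) - own
    return wins / trials

print(majority_own_gender())   # noticeably above 1/2
```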
|
Find equation of a plane that passes through point and contains the intersection line of 2 other planes Find equation of a plane that passes through point P $(-1,4,2)$ that contains the intersection line of the planes
$$\begin{align*}
4x-y+z-2&=0\\
2x+y-2z-3&=0
\end{align*}$$
Attempt:
I found the the direction vector of the intersection line by taking the cross product of vectors normal to the known planes. I got $\langle 1,10,6\rangle$. Now, I need to find a vector normal to the plane I am looking for. To do that I need one more point on that plane. So how do I proceed?
| Consider the family of planes $u(4x-y+z-2)+(1-u)(2x+y-2z-3)=0$ where $u$ is a parameter. You can find the appropriate value of $u$ by substituting in the coordinates of the given point and solving for $u$; the value thus obtained can be substituted in the equation for the family to yield the particular member you need.
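Carrying the computation out with SymPy, following the same family-of-planes idea (a sketch; the variable names are mine):

```python
import sympy as sp

x, y, z, u = sp.symbols('x y z u')
p1 = 4*x - y + z - 2
p2 = 2*x + y - 2*z - 3
family = u*p1 + (1 - u)*p2

u0 = sp.solve(family.subs({x: -1, y: 4, z: 2}), u)[0]   # force the plane through (-1, 4, 2)
plane = sp.expand(family.subs(u, u0))
print(sp.expand(3*plane))   # -4*x + 13*y - 21*z - 14, i.e. -4x + 13y - 21z - 14 = 0
```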
|
Finding subgroups of index 2 of $G = \prod\limits_{i=1}^\infty \mathbb{Z}_n$ I looked at this question and its answer. The answer uses the fact that every vector space has a basis, so there are uncountable subgroups of index 2 if $n=p$ where $p$ is prime.
Are there uncountable subgroups of index 2 if $n$ is not prime ?
The problem looks the same (with minimal change), but the way we found the subgroups is not good for this case (I think).
| If $n$ is odd, $G$ has no subgroups of index $2$. Indeed, if $H$ is a subgroup of index dividing $2$, and $g\in G$, then $2g\in H$ (since $G/H$ has order $2$, so $2(g+H) = 0+H$). Since every element of $G$, hence of $H$, has order dividing $n$, and $\gcd(2,n)=1$, then $\langle 2g\rangle = \langle g\rangle$, so $g\in\langle 2g\rangle\subseteq H$, hence $g+H$ is trivial. That is, $G\subseteq H$. So the only subgroup of index dividing $2$ is $G$.
If $n$ is even, then let $H=2\mathbb{Z}_n$, which is of index $2$ in $\mathbb{Z}_n$. Then $G/\prod_{i=1}^{\infty} H \cong \prod_{i=1}^{\infty}(\mathbb{Z}_n/2\mathbb{Z}_n) \cong \prod_{i=1}^{\infty}\mathbb{Z}_2$.
Note: $\prod_{i=1}^{\infty} H$ is not itself of index $2$ in $G$; in fact, $\prod_{i=1}^{\infty} H$ has infinite index in $G$. We are using $\prod_{i=1}^{\infty}H$ to reduce to a previously solved case.
Since $\prod_{i=1}^{\infty}\mathbb{Z}_2$ has uncountably many subgroups of index $2$ by the previously solved case, by the isomorphism theorems so does $G$.
Below is an answer for the wrong group (I thought $G=\prod_{n=1}^{\infty}\mathbb{Z}_n$; I leave the answer because it is exactly the same idea.)
For each $n$, let $H_n$ be the subgroup of $\mathbb{Z}_n$ given by $2\mathbb{Z}_n$. Note that $H_n=\mathbb{Z}_n$ if $n$ is odd, and $H_n$ is the subgroup of index $2$ in $\mathbb{Z}_n$ if $n$ is even.
Now let $\mathcal{H}=\prod_{n=1}^{\infty}H_n$. Then
$$G/\mathcal{H}\cong\prod_{n=1}^{\infty}(\mathbb{Z}_n/H_n) = \prod_{n=1}^{\infty}(\mathbb{Z}/2\mathbb{Z}).$$
(In the last isomorphism, the odd-indexed quotients are trivial, the even-indexed quotients are isomorphic to $\mathbb{Z}/2\mathbb{Z}$; then delete all the trivial factors).
Since $G/\mathcal{H} \cong \prod_{n=1}^{\infty}(\mathbb{Z}/2\mathbb{Z})$ has uncountably many subgroups of index $2$, so does $G$ by the isomorphism theorems.
|
Primitive roots as roots of equations.
Take $g$ to be a primitive root $\pmod p$ and $n \in \{0, 1,\ldots,p-2\}$; write down a necessary and sufficient condition for $x=g^n$ to be a root of $x^5\equiv 1\pmod p$. This should depend on $n$ and $p$ only, not $g$.
How many such roots $x$ of this equation are there? This answer may only depend on $p$.
At a guess for the first part I'd say as $g^{5n} \equiv g^{p-1}$ it implies for $x$ to be a root $5n \equiv p-1 \pmod p$. No idea if this is right and not sure what to do for second part. Thanks for any help.
| Hint. In any abelian group, if $a$ has order $n$, then $a^r$ has order $n/\gcd(n,r)$.
(Your idea is fine, except that you got the wrong congruence: it should be $5n\equiv p-1\pmod{p-1}$, not modulo $p$; do you see why?)
For the second part, you'll need to see what you get from the first part. That will help you figure it out.
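If you want to check numerically where the hint leads, here is a tiny experiment (my own illustration; it simply counts solutions for a few primes):

```python
from math import gcd

for p in (7, 11, 31, 41, 101):
    count = sum(1 for x in range(1, p) if pow(x, 5, p) == 1)
    print(p, count, gcd(5, p - 1))   # the count always matches gcd(5, p - 1)
```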
|
Symmetric and exterior power of representation Does there exist some simple formulas for the characters
$$\chi_{\Lambda^{k}V}~~~~\text{and}~~~\chi_{\text{Sym}^{k}V},$$
where $V$ is a representation of some finite group?
Thanks.
| This is not quite an answer, but Fulton & Harris, §2.1 on page 13 gives a Formula for $k=2$:
$$\chi_{\bigwedge^2 V}(g) = \frac{1}{2}\cdot\left( \chi_V(g)^2 - \chi_V(g^2)\right)$$
as well as, in the Exercise below,
$$\chi_{\mathrm{Sym}^2(V)}(g) = \frac{1}{2}\cdot\left( \chi_V(g)^2 + \chi_V(g^2)\right)$$
Maybe you can look into the proof for the first equality and generalize.
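Since a character value is just a trace, one can sanity-check these formulas numerically on a single matrix via its eigenvalues (an illustration with NumPy, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((4, 4))
lam = np.linalg.eigvals(g)

tr_wedge2 = sum(lam[i] * lam[j] for i in range(4) for j in range(i + 1, 4))   # trace on Lambda^2 V
tr_sym2 = sum(lam[i] * lam[j] for i in range(4) for j in range(i, 4))         # trace on Sym^2 V
assert np.isclose(tr_wedge2, (np.trace(g)**2 - np.trace(g @ g)) / 2)
assert np.isclose(tr_sym2, (np.trace(g)**2 + np.trace(g @ g)) / 2)
print("both formulas check out")
```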
|
is it possible to get the Riemann zeros since we know that the number of Riemann zeros on the interval $ (0,E) $ is given by $ N(E) = \frac{1}{\pi}\operatorname{Arg}\xi(1/2+iE) $
is then possible to get the inverse function $ N(E)^{-1}$ so with this inverse we can evaluate the Riemann zeros $ \rho $ ??
i mean the Riemann zeros are the inverse function of $\arg\xi(1/2+ix) $
| No, your formula is wrong. $N(E)= \frac{1}{\pi} Arg \xi (1/2+iE) $ + a nonzero term coming from the integration along the lines $\Im s =E$ (you are applying an argument principle).
Besides, any function $N: \mathbb{R} \rightarrow\mathbb{Z}$ can't be injective for cardinality considerations.
|
What does it really mean for something to be "trivial"? I see this word a lot when I read about mathematics. Is this meant to be another way of saying "obvious" or "easy"? What if it's actually wrong? It's like when I see "the rest is left as an exercise to the reader", it feels like a bit of a cop-out. What does this all really mean in the math communities?
| It can mean different things. For example:
Obvious after a few moments thought.
Clear from a commonly used argument or a short one line proof.
However, it is often also used to mean the most simple example of something. For example, a trivial group is the group of one element. A trivial vector space is the space {0}.
|
Factoring over a finite field Consider $f=x^4-2\in \mathbb{F}_3[x]$, the field with three elements. I want to find the Galois group of this polynomial.
Is there an easy or slick way to factor such a polynomial over a finite field?
| The coefficients are reduced modulo 3, so
$$
x^4-2=x^4-3x^2+1=(x^4-2x^2+1)-x^2=(x^2-1)^2-x^2=(x^2+x-1)(x^2-x-1).
$$
It is easy to see that neither $x^2+x-1$ nor $x^2-x-1$ have any roots in $F_3$. As they are both quadratic, the roots are in $F_9$. Therefore the Galois group is $Gal(F_9/F_3)$, i.e. cyclic of order two.
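For what it's worth, a computer algebra system reproduces this factorization directly (a quick SymPy illustration):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**4 - 2, modulus=3))   # the product of the two quadratic factors above
```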
|
Probability of components to fail I want to verify my reasoning with you.
An electronic system contains 15 components. The probability that a component might fail is 0.15 given that they fail independently. Knowing that at least 4 and at most 7 failed, what is the probability that exactly 5 failed?
My solution:
$X \sim Binomial(n=15, p=0.15)$
I guess what I have to calculate is $P(X=5 | 4 \le X \le 7) = \frac{P(5 \cap \{4,5,6,7\})}{P(\{4,5,6,7\})}$. Is it correct? Thank you
| You already know the answer is $a=p_5/(p_4+p_5+p_6+p_7)$ where $p_k=\mathrm P(X=k)$. Further simplifications occur if one considers the ratios $r_k=p_{k+1}/p_k$ of successive weights. To wit,
$$
r_k=\frac{{n\choose k+1}p^{k+1}(1-p)^{n-k-1}}{{n\choose k}p^{k}(1-p)^{n-k}}=\frac{n-k}{k+1}\color{blue}{t}\quad\text{with}\ \color{blue}{t=\frac{p}{1-p}}.
$$
Thus,
$$
\frac1a=\frac{p_4}{p_5}+1+\frac{p_6}{p_5}+\frac{p_7}{p_5}=\frac1{r_4}+1+r_5(1+r_6),
$$
which, for $n=15$ and with $\color{blue}{t=\frac3{17}}$, yields
$$
\color{red}{a=\frac1{\frac5{11\color{blue}{t}}+1+\frac{10\color{blue}{t}}6\left(1+\frac{9\color{blue}{t}}7\right)}}.
$$
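Carrying your (correct) formula out numerically (a small illustration; it just evaluates the binomial weights directly):

```python
from math import comb

n, p = 15, 0.15
weight = lambda k: comb(n, k) * p**k * (1 - p)**(n - k)
answer = weight(5) / sum(weight(k) for k in range(4, 8))
print(answer)   # roughly 0.25
```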
|
Inscrutable proof in Humphrey's book on Lie algebras and representations This is a question pertaining to Humphrey's Introduction to Lie Algebras and Representation Theory
Is there an explanation of the lemma in §4.3-Cartan's Criterion? I understand the proof given there but I fail to understand how anybody could have ever devised it or had the guts to prove such a strange statement...
Lemma: Let $k$ be an algebraically closed field of characteristic $0$. Let $V$ be a finite dimensional vector space over $k$, and $A\subset B\subset \mathrm{End}(V)$ two subspaces. Let $M$ be the set of endomorphisms $x$ of $V$ such that $[x,B]\subset A$. Suppose $x\in M$ is such that $\forall y\in M, \mathrm{Tr}(xy)=0$. Then, $x$ is nilpotent.
The proof uses the diagonalisable$+$nilpotent decomposition, and goes on to show that all eigenvalues of $x$ are $=0$ by showing that the $\mathbb{Q}$ subspace of $k$ they generate has only the $0$ linear functional.
Added: (t.b.) here's the page from Google books for those without access:
| This doesn't entirely answer your question but the key ingredients are (1) the rationals are nice in that their squares are non-negative (2) you can get from general field elements to rationals using a linear functional f (3) getting a handle on x by way of the eigenvalues of s.
|
Algebrically independent elements
Possible Duplicate:
Why does K->K(X) preserve the degree of field extensions?
Suppose $t_1,t_2,\ldots,t_n$ are algebrically independent over $K$ containing $F$.
How to show that $[K(t_1,\ldots,t_n):F(t_1,\ldots,t_n)]=[K:F]$?
| Using the answer in link provided by Zev, your question can be answered by simple induction over $n$. For $n=1$ we proceed along one of the answers shown over there. Assume we have shown the theorem for some $n$. Then we have $[K(t_1,\ldots,t_n):F(t_1,\ldots,t_n)]=[K:F]$, and by the theorem for $n=1$ we have also $[K(t_1,\ldots,t_n,t_{n+1}):F(t_1,\ldots,t_n,t_{n+1})]=[K(t_1,\ldots,t_n)(t_{n+1}):F(t_1,\ldots,t_n)(t_{n+1})]=[K(t_1,\ldots,t_n):F(t_1,\ldots,t_n)]=[K:F]$ which completes the proof by induction.
|
Showing a series is a solution to a differential equation I am attempting to show that the series $y(x)=\sum_{n=0}^{\infty} a_{n}x^n$ is a solution to the differential equation $(1-x)^2y''-2y=0$ provided that $(n+2)a_{n+2}-2na_{n+1}+(n-2)a_n=0$
So i have:
$$y=\sum_{n=0}^{\infty} a_{n}x^n$$
$$y'=\sum_{n=0}^{\infty}na_{n}x^{n-1}$$
$$y''=\sum_{n=0}^{\infty}a_{n}n(n-1)x^{n-2}$$
then substituting these into the differential equation I get:
$$(1-2x+x^2)\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-2}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$
$$\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-2}-2\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-1}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$
relabeling the indexes:
$$\sum_{n=-2}^{\infty}(n+2)(n+1)a_{n+2}x^{n}-2\sum_{n=-1}^{\infty}n(n+1)a_{n+1}x^{n}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$
and then cancelling the $n=-2$ and $n=-1$ terms:
$$\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^{n}-2\sum_{n=0}^{\infty}n(n+1)a_{n+1}x^{n}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$
but this doesn't give me what I want (I don't think) as I have $n^2$ terms as I would need
$(n^2+3n+2)a_{n+2}-(2n^2+2n)a_{n+1}+(n^2-n-2)a_{n}=0$
I'm not sure where I have gone wrong?
Thanks very much for any help
| You are correct.
You only need to go on and observe that the LHS of your last equation factorizes as: $$(n+1)[(n+2)a_{n+2}-2n a_{n+1}+(n-2)a_n]$$
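The factorization is easy to confirm symbolically (a quick SymPy check, for illustration):

```python
import sympy as sp

n, a0, a1, a2 = sp.symbols('n a_n a_n1 a_n2')
lhs = (n + 2)*(n + 1)*a2 - 2*n*(n + 1)*a1 + (n**2 - n - 2)*a0
rhs = (n + 1)*((n + 2)*a2 - 2*n*a1 + (n - 2)*a0)
assert sp.expand(lhs - rhs) == 0
```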
|
Finding a constant to make a valid pdf Let $f(x) = c\cdot 2^{-x^2}$. How do I find a constant $c$ such that the integral evaluates to $1$?
| Hint: Rewrite
$$f(x) = c \,[e^{\ln(2)}]^{-x^2} = c\, e^{-x^2\ln(2)}$$
and try to exploit the following integral together with some change of variable:
$$
\int^{\infty}_0 e^{-x^2} \,dx = \frac{\sqrt{\pi}}{2}
$$
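If you want to see the final constant, here is a SymPy computation (an illustration assuming, as the hint does, that the density is over the whole real line):

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
# 2^(-x^2) = exp(-x^2 ln 2), so this is a Gaussian integral
total = sp.integrate(c * sp.exp(-x**2 * sp.log(2)), (x, -sp.oo, sp.oo))
print(sp.solve(sp.Eq(total, 1), c))   # c = sqrt(ln(2)/pi)
```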
|
Central Limit Theorem/Markov's inequality Here is the question:
Chip dies are manufactured in a facility where it was observed that
the width of the die is normally distributed with mean 5mm and
standard deviation $\sigma$. The manufacturer wants to guarantee that no more
than 1 out of 100 dies fall outside the range of (5mm +/- 0.5mm). What
should be the maximal standard deviation $\sigma$ of this manufacturing
process?
My attempt at a solution:
I figured I could use the central limit theorem and Markov's inequality for this one:
thus-
Pr{die will be in range} = 99/100
I assumed that this should be a normal R.V. (because using a Poisson R.V. to solve this would be tedious)
And now I'm horribly stuck. Any advice as to where I went wrong?
Thank you.
| Assume, without much justification except that we were told to do so, that the width $X$ of the die has normal distribution with mean $5$ and variance $\sigma^2$.
The probability that we are within $k\sigma$ of the mean $5$ (formally, $P(5-k\sigma\le X \le 5+k\sigma)$) is equal to the probability that $|Z|\le k$, where $Z$ has standard normal distribution. We want this probability to be $0.99$.
If we look at a table for the standard normal, we find that $k\approx 2.57$.
We want $k\sigma=0.5$ to just meet the specification. Solve for $\sigma$. We get $\sigma\approx 0.19455$, so a standard deviation of about $0.195$ or less will do the job.
We did not use the Central Limit Theorem, nor the Markov Inequality, since we were asked to assume normality. The Poisson distribution has no connection with the problem.
Remark: The table that we used shows that the probability that $Z\le 2.57$ is about $0.995$. It follows that $P(Z>2.57)\approx 0.005$, we have $1/2$ of $1$ percent in the right tail. We also have by symmetry $1/2$ of $1$ percent in the left tail, for a total of $1$ percent, as desired.
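For reference, the same table lookup can be done with SciPy's inverse normal CDF (an illustration; `norm.ppf` is the quantile function):

```python
from scipy.stats import norm

k = norm.ppf(0.995)      # 99% two-sided coverage leaves 0.5% in each tail
print(k, 0.5 / k)        # about 2.576, giving a maximal sigma of about 0.194
```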
|
Lowenheim-Skolem theorem confusion This Wikipedia entry on the Löwenheim–Skolem theorem says:
In mathematical logic, the Löwenheim–Skolem theorem, named for Leopold Löwenheim and Thoralf Skolem, states that if a countable first-order theory has an infinite model, then for every infinite cardinal number κ it has a model of size κ.
What does the "size" of a model referring to (or mean)?
Edit: If it is referring to the cardinality of a model (set), how do you get the cardinality of one model (-> It's synonymous with interpretation, right?)? What is inside the model, then? I mean, it seems sensical to define a model of a language, as a language has some constant numbers and objects, but defining a model of a single object - a number - seems nonsensical to me. What is inside the model of an infinite number?
Thanks.
| Each model has a set of individuals. The size of the model is the cardinality of this set.
|
Proof of the Schwarz Lemma I have a question which is (the Schwarz Lemma):
Suppose that $f:\mathbb{D}\rightarrow\mathbb{D}$ is holomorphic and suppose that $f(0)=0$, show that $\lvert f(z)\rvert \leq \lvert z \rvert \forall{z}\in\mathbb{D}$
and the solution is:
Let $g(z)=\frac{f(z)}{z}$ for $z\neq0$ and $g(0)=f'(0)$. Then g is holomorphic in $\mathbb{D}$.
Now apply the maximum principle to to g on the disc $\bar{D(0,r)}$ for $r<1$ to conclude that for $\lvert z \rvert \leq r$ we have $\lvert g(z) \rvert \leq\frac{1}{r}$ and then letting $r\rightarrow 1$ we get $\lvert g(z)\rvert\leq 1$ and so we get $\lvert f(z)\rvert \leq \lvert z \rvert$.
I am confused as to why $\lvert f(z) \rvert \leq 1$ for $z$ on the boundary of the disc.
| $f$ is a function of the unit disk into itself. This means that $|f(z)| < 1$ for all $z \in \mathbb{D}$, and in particular this is true for all $z$ in the boundary of the disk $\mathbb{D}(0,r)$ , $r<1$.
|
Advection diffusion equation The advection diffusion equation is the partial differential equation $$\frac{\partial C}{\partial t} = D\frac{\partial^2 C}{\partial x^2} - v \frac{\partial C}{\partial x}$$ with the boundary conditions $$\lim_{x \to \pm \infty} C(x,t)=0$$ and initial condition $$C(x,0)=f(x).$$ How can I transform the advection diffusion equation into a linear diffusion equation by introducing new variables $x^\ast=x-vt$ and $t^\ast=t$?
Thanks for any answer.
| We could simply apply the chain rule, to avoid some confusions we let $ C(x,t) = C(x^* + vt,t^*) = C^*(x^*,t^*)$:
$$
\frac{\partial C}{\partial x} = \frac{\partial C^*}{\partial x^{\phantom{*}}}= \frac{\partial C^*}{\partial x^*} \frac{\partial x^*}{\partial x^{\phantom{*}}} + \frac{\partial C^*}{\partial t^*} \frac{\partial t^*}{\partial x^{\phantom{*}}} = \frac{\partial C}{\partial x^*}
$$
Remember that in the chain rule the partial derivatives are taken with respect to the first and second arguments (not to be confused with the total derivative); similarly we have $\displaystyle \frac{\partial^2 C}{\partial x^2} = \frac{\partial^2 C^*}{\partial {x^*}^2} $,
$$
\frac{\partial C}{\partial t} = \frac{\partial C^*}{\partial t} = \frac{\partial C^*}{\partial x^*} \frac{\partial x^*}{\partial t^{\phantom{*}}} + \frac{\partial C^*}{\partial t^*} \frac{\partial t^*}{\partial t^{\phantom{*}}} = -v\frac{\partial C^*}{\partial x^*} + \frac{\partial C^*}{\partial t^*}
$$
Plugging these back into the original equation, you will see that the convection term is gone after this change to the moving frame. You can think of the original equation as diffusion on a car moving with velocity $v$, as measured by a person standing still; after the change of variables it is just pure diffusion, as measured on the car:
$$
\frac{\partial C^*}{\partial t^*} = D\frac{\partial^2 C^*}{\partial {x^*}^2}
$$
and the initial condition changes to $C^*(x^*,0) = C(x^*+vt^*,t^*)\Big\vert_{t^*=0}= f(x^*)$, the boundary condition remains the same.
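As a sanity check of the moving-frame picture, one can verify with SymPy that a drifting heat kernel solves the original equation (my own illustration; the kernel is just one particular solution):

```python
import sympy as sp

x, t, v, D = sp.symbols('x t v D', positive=True)
C = sp.exp(-(x - v*t)**2 / (4*D*t)) / sp.sqrt(4*sp.pi*D*t)   # heat kernel in the frame x* = x - v t

residual = sp.diff(C, t) - D*sp.diff(C, x, 2) + v*sp.diff(C, x)
assert sp.simplify(residual) == 0
print("the drifting heat kernel satisfies C_t = D C_xx - v C_x")
```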
|
Self-Dual Code; generator matrix and parity check matrix
Hi !
I have a parity check matrix $H$ for a code $C$
0 0 0 1 1 1 1 0
0 1 1 0 0 1 1 0
1 0 1 0 1 0 1 0
1 1 1 1 1 1 1 1
I am allowed to assume that
1) the dual of an $(n,k)$-code is an $[n,n-k]$-code
2) $(C^{\perp})^{\perp} = C$ (Here $\perp$ denotes the dual)
I want to prove that my code $C$ is self-dual. (ie that $C=C^{\perp}$)
Here is my logic:
I know that, since $H$ is a parity check matrix for $C$,
$H$ is a generator matrix for $C^{\perp}$.
Since $C^{\perp}$ is an $[n,n-k]$-code, the generator matrix $H$ is an $(n-k) \times n$ matrix.
So now looking at $H$, $n=8$ and $k=4$, so a generator matrix for $C$ is a $4 \times 8$ matrix.
Now let $G=[g_{i,j}]$ be the generator matrix for $C$.
$(GH^T)=0$ since every vector in the rowspace of $G$ is orthogonal to every vector in the rowspace of $H$;
Can anyone tell me what is missing to finish off my proof?
Note: I see that each row in $H$ has an even number of ones and that the distance between any two rows is even. Maybe this helps if I can write a definition of weight relating to duals...
| The rows of $H$ generate $C^\perp$.
By definition of the parity check, $xH^\mathrm{T}=0$ iff $x\in C$.
What can you conclude from the fact that $HH^\mathrm{T}=[0]$?
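You can check the key facts numerically (an illustration; all arithmetic is mod 2):

```python
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1, 0],
              [0, 1, 1, 0, 0, 1, 1, 0],
              [1, 0, 1, 0, 1, 0, 1, 0],
              [1, 1, 1, 1, 1, 1, 1, 1]])

print((H @ H.T) % 2)   # the zero matrix: every row of H is orthogonal to every row, including itself

# row-reduce over GF(2) to confirm the four rows are independent
M = H.copy() % 2
rank = 0
for col in range(M.shape[1]):
    pivots = [r for r in range(rank, M.shape[0]) if M[r, col]]
    if pivots:
        M[[rank, pivots[0]]] = M[[pivots[0], rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
print(rank)   # 4, so the rows of H span a 4-dimensional code contained in its own dual
```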
|
Commutativity between diagonal and unitary matrices? Quick questions:
1. If you have a diagonal matrix $A$ and a unitary matrix $B$, do $A$ and $B$ commute?
2. If $A$ and $B$ are positive definite matrices, and $a$ is an eigenvalue of $A$ and $b$ is an eigenvalue of $B$, does it follow that $a+b$ is an eigenvalue of $A+B$?
| For the first question, the answer is no; an explicit example is given by $A:=\pmatrix{1&0\\ 0&2}$ and $B=\frac{1}{\sqrt 2}\pmatrix{1&1\\ -1&1}$. Another way to see it's not true is the following: take $S$ a symmetric matrix; then you can find $D$ diagonal and $U$ orthogonal (hence unitary) such that $S=U^tDU$, and if $U$ and $D$ commuted then $S$ would be diagonal.
For the second question, the answer is "not necessarly", because the set
$$\{a+b\mid a\mbox{ eigenvalue of }A,b\mbox{ eigenvalue of }B\}$$ may contain more elements than the dimension of the space we are working with.
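A two-line numerical confirmation of the first counterexample (an illustration with NumPy; the $1/\sqrt2$ factor makes $B$ a genuine rotation, hence unitary):

```python
import numpy as np

A = np.diag([1.0, 2.0])
B = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)   # unitary: B @ B.T is the identity
print(np.allclose(B @ B.T, np.eye(2)), np.allclose(A @ B, B @ A))   # True False
```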
|
Independent, Normally Distributed R.V. Working on this:
A shot is fired at a circular target. The vertical and the horizontal
coordinates of the point of impact (with the origin sitting at the
target’s center) are independent and normally distributed with $\nu(0, 1)$. Show that the distance of the point of impact from the center is
distributed with PDF $$p(r) = re^{-r^2/2}, r \geq 0.$$ Find the median of this
distribution.
So I'm guessing this would be graphed on an X and Y axis. I can intuit that I need to take the integral of the PDF from the lower bound to $m$ (or from $m$ to the upper bound), but I don't know what the normal distribution with $\nu(0, 1)$ means.
Also, how would I show that the point of impact has the desired PDF?
Thank you.
| Let $r\ge 0$; put $R = \sqrt{X^2 + Y^2}$, where $X$ and $Y$ are the coordinates of the shot. Then
$$P(R\le r) = {1\over 2\pi} \mathop{\int\!\!\int}_{B_r(0)} \exp\{-(x^2 + y^2)/2\}\,dx\,dy.$$
Change to polar coordinates to get
$$P(R\le r) = {1\over 2\pi}\int_0^r \int_0^{2\pi} \exp(-s^2/2)\,s\,d\theta\, ds
=\int_0^r s\exp\{-s^2/2\}\,ds.$$
Differentiate and you are there.
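A quick Monte Carlo illustration (not part of the derivation) that the distance behaves this way and that its median is $\sqrt{2\ln 2}$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
r = np.hypot(x, y)                     # distance of the point of impact from the center

print(np.median(r), np.sqrt(2 * np.log(2)))   # both approximately 1.1774
```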
|
Area under a curve, difference between dy and dx I am trying to find the area of $ y = 1 $ and $y = x^\frac{1}{4}$ from 0 to 1 and revolving around $ x = 1$
In class we did the problem with respect to y, so from understanding that is taking the "rectangles" from f(y) or the y axis. I was wondering why not just do it with respect to x, it would either be a verticle or horizontal slice of the function but the result would be the same. I was not able to get the correct answer for the problem but I am not sure why.
Also one other question I had about this, is there a hole at 1,1 in the shape? The area being subtracted is defined there so shouldn't that be a hole since we are taking that area away? Both function at 1 are 1.
| I expect you have drawn a picture, and that it is the region below $y=1$, above $y=x^{1/4}$, from $x=0$ to $x=1$ that is being rotated about $x=1$. When you rotate, you get a cylinder with a kind of an upside down bowl carved out of it, very thin in the middle. You have asked similar questions before, so I will be brief.
It is probably easiest to do it by slicing parallel to the $x$-axis. So take a slice of thickness $dy$, at height $y$. We need to find the area of cross-section.
Look at the cross-section. It is a circle with a circle removed. The outer radius is $1$, and the inner radius is $1-x$. So the area of cross-section is $\pi(1^2-(1-x)^2)$. We need to express this in terms of $y$. Note that $x=y^4$. so our volume is
$$\int_0^1 \pi\left(1^2-(1-y^4)^2\right)\,dy.$$
I would find it more natural to find the volume of the hollow part, and subtract from the volume of the cylinder.
You could also use shells. Take a thin vertical slice, with base going from $x$ to $x+dx$, and rotate it. At $x$, we are going from $x^{1/4}$ to $1$. The radius of the shell is $1-x$, and therefore the volume is given by
$$\int_0^1 2\pi(1-x)(1-x^{1/4})\,dx.$$
Multiply through, and integrate term by term. Not too bad, but slicing was easier.
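Both integrals can be checked symbolically, and they agree, as they must (a quick SymPy illustration):

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)
washers = sp.integrate(sp.pi * (1 - (1 - y**4)**2), (y, 0, 1))
shells = sp.integrate(2*sp.pi * (1 - x) * (1 - x**sp.Rational(1, 4)), (x, 0, 1))
print(washers, shells)   # both 13*pi/45
```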
|
Proving identities using Pythagorean, Reciprocal and Quotient Back again, with one last identity that I cannot solve:
$$\frac{\cos \theta}{\csc \theta - 2 \sin \theta} = \frac{\tan\theta}{1-\tan^2\theta}$$.
The simplest I could get the left side to, if at all simpler, is $$\frac{\cos\theta}{\csc^2\theta-2}$$
As for the right side, it has me stumped, especially since the denominator is so close to a identity yet so far away. I've tried rationalizing the denominators (both sides) to little success. From my last question, where multiplying by a '1' worked, I didn't see the association here.
Thanks!
| HINT:
$$\begin{align*}
\frac{\tan\theta}{1-\tan^2\theta}&=\frac{\sin\theta\cos\theta}{\cos^2\theta-\sin^2\theta}
\end{align*}$$
$$\begin{align*}
\frac{\cos\theta}{\csc\theta-2\sin\theta}&=\frac{\sin\theta\cos\theta}{1-2\sin^2\theta}
\end{align*}$$
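A numerical spot-check that the two sides agree away from the singularities (an illustration only; the sample angles are arbitrary):

```python
import sympy as sp

th = sp.symbols('theta')
lhs = sp.cos(th) / (sp.csc(th) - 2*sp.sin(th))
rhs = sp.tan(th) / (1 - sp.tan(th)**2)
for val in (0.3, 1.0, 2.5):
    assert abs((lhs - rhs).subs(th, val).evalf()) < 1e-12
print("the identity holds at the sampled angles")
```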
|
Question on conditional independence Consider four random vectors $X, Z, C$ and $W$ in which
$Z_i = W_i+N(0,\sigma)$: iid Gaussian noise for each element of $W$
If $X$ is conditionally independent of $Z$ given $C,$ will X be conditionally independent of $W$ given $C$?
Thank you very much.
| Not necessarily, given the conditions as stated. We can work in one dimension. Let $\eta, \xi$ be two iid $N(0,1)$ random variables. Set $X = \eta - \xi$, $W = \eta$, $Z = \eta + \xi$, and $C=0$ (so conditional independence given $C$ is just independence). Then $X$ and $Z$ are independent (they are jointly Gaussian with zero covariance) but $X$ and $W$ are not (their covariance is 1).
|
The Set of All Subsequential Limits Let $\{a_n\}_{n=0}^\infty$ and $\{b_n\}_{n=0}^\infty$ be bounded sequences; show that if $\lim \limits_{n\to \infty}(a_n-b_n)=0$, then both sequences have the same subsequential limits.
My attempt to prove this begins with: let $E_A=\{L \mid L$ is a subsequential limit of $a_n\}$
and $E_B=\{L \mid L$ is a subsequential limit of $b_n\}$. We need to show that $E_A=E_B$.
Since $a_n$ and $b_n$ are bounded, we know from the Bolzano–Weierstrass theorem that each sequence has a convergent subsequence; therefore both $E_A$ and $E_B$ are nonempty.
Let $L\in E_A$.
How can I show that $L\in E_B$?
Thank you very much.
| I think you may want to prove that:
*
*If two sequences $\{x_n\}$ and $\{y_n\}$ satisfy $x_n-y_n \to l$ and $y_n \to y$, then $$x_n \to l+y$$
*If a sequence $\{x_n\}$ converges to $x$, then all its subsequences converge to the same limit, $x$.
Do you see how these would pay off here?
*
*Let $r \in E_B$. That is, $r$ is a limit point of the sequence $\{b_n\}$. So, there is a subsequence of $\{b_n\}$, say $\{b_{n_k}\}$ that converges to $r$.
*Now, consider the corresponding subsequence of $\{a_n-b_n\}$, namely $\{a_{n_k}-b_{n_k}\}$. Since this is a subsequence of a convergent sequence, by $(2)$ it converges to the same limit: $a_{n_k}-b_{n_k} \to 0$.
Putting the two claims together, by $(1)$ you have that $a_{n_k} \to r$. That is, $r \in E_A$. This proves one inclusion, $E_B \subseteq E_A$. The proof of the other inclusion is similar.
For the other inclusion, as Brian observes, note that $a_n -b_n \to 0$ implies $b_n-a_n \to 0$. Now appeal to the previous part, to see that $E_A \subseteq E_B$.
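For a concrete illustration, take $a_n=(-1)^n+\frac1n$ and $b_n=(-1)^n$: both sequences are bounded, $a_n-b_n=\frac1n\to 0$, and indeed $E_A=E_B=\{-1,1\}$, with the even-indexed subsequences converging to $1$ and the odd-indexed ones to $-1$.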
|
In the history of mathematics, has there ever been a mistake? I was just wondering whether or not there have been mistakes in mathematics. Not a conjecture that ended up being false, but a theorem which had a proof that was accepted for a nontrivial amount of time before someone found a hole in the argument. Does this happen anymore now that we have computers? I imagine not. But it seems totally possible that this could have happened back in the Enlightenment.
Feel free to interpret this how you wish!
| Well, there have been plenty of conjectures which everybody thought were correct, which in fact were not. The one that springs to mind is the belief that the logarithmic integral $\operatorname{li}(N)$ always over-estimates the number of primes less than $N$. The estimate does over-shoot for every $N$ anyone has ever computed... and everybody assumed it always would. But Littlewood proved that if you make $N$ absurdly large, the formula eventually under-estimates, and in fact the lead changes infinitely often. Nobody expected that one. (Skewes' original bound for where the first cross-over must occur was roughly $10^{10^{10^{34}}}$, which is about as silly a number as they come.)
Fermat claimed to have had a proof of his infamous "last theorem". But given that the eventual proof is a triumph of modern mathematics, running to well over a hundred pages and understood by only a handful of mathematicians worldwide, it cannot be the proof that Fermat had over 300 years earlier. Therefore, either centuries of mathematicians have overlooked something really obvious, or Fermat was mistaken. (Since he never wrote down his proof, we can't claim that "other people believed it before it was proven false", though.)
Speaking of which, I'm told that Gauss or Cauchy [I forget which] published a proof for a special case of Fermat's last theorem - and then discovered that, no, he was wrong. (I don't recall how long it took or how many people believed it.)
|