For a Turing machine and input $w$: Is "Does M stop or visit the same configuration twice" a decidable question? I have the following question out of an old exam that I'm solving: Input: a Turing machine $M$ and input $w$. Question: On running $M$ on $w$, does at least one of the following things happen: $M$ stops on $w$, or $M$ visits the same configuration at least twice? First I thought that it's clearly in $RE$, i.e. it is a recursively enumerable question, since we can simulate the running of $M$ on $w$ and list the configurations or wait until it stops. But then I thought to myself: "If it visits the same configuration twice, it must be in an infinite loop", because, as I understand it, if it reaches the same configuration it's going to repeat the same transitions over and over again. So the problem might be in $R$, i.e. decidable, since it's the same question as "$M$ stops on $w$ or it doesn't"? What do you think? Thank you!
We can modify a Turing machine $T$ by replacing every computation step of $T$ by a procedure in which we go to the left end of the (used portion of) the tape, and one more step left, print a special symbol $U$, and then hustle back to do the intended step. So in the modified machine, a configuration never repeats. Thus if we can solve your problem, we can solve the Halting Problem. The conclusion is that your problem is not decidable.
A field without a canonical square root of $-1$ The following is a question I've been pondering for a while. I was reminded of it by a recent discussion on the question How to tell $i$ from $-i$? Can you find a field that is abstractly isomorphic to $\mathbb{C}$, but that does not have a canonical choice of square root of $-1$? I mean canonical in the following sense: if you were to hand your field to one thousand mathematicians with the instructions "Pick out the most obvious square root of $-1$ in this field. Your goal is to make the same choice as most other mathematicians," there should be a fairly even division of answers.
You can model the complex numbers by linear combinations of the $2\times 2$ unit matrix $\mathbb{I}$ and a real $2\times 2$ skew-symmetric matrix with square $-\mathbb I$, of which there are two, $\begin{pmatrix}0 & -1\\1 & 0\end{pmatrix}$ and $\begin{pmatrix}0 & 1\\-1 & 0\end{pmatrix}$. I see no obvious reason to prefer one over the other.
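If it helps to see this concretely, here is a small numerical sketch (assuming numpy is available; the helper name `embed` is just for illustration) checking that both matrices square to $-\mathbb I$ and that either choice reproduces complex multiplication equally well:

```python
# Check that both candidate matrices J square to -I, and that a*I + b*J
# multiplies exactly like the complex number a + b*i for either choice of J.
import numpy as np

I2 = np.eye(2)
J1 = np.array([[0., -1.], [1., 0.]])
J2 = np.array([[0., 1.], [-1., 0.]])
assert np.allclose(J1 @ J1, -I2) and np.allclose(J2 @ J2, -I2)

def embed(a, b, J):
    """Model a + b*i as the 2x2 matrix a*I + b*J."""
    return a * I2 + b * J

for J in (J1, J2):
    # (1+2i)(3-i) = 5+5i, and the matrix product agrees for both J's
    assert np.allclose(embed(1, 2, J) @ embed(3, -1, J), embed(5, 5, J))
print("either matrix works equally well as a square root of -1")
```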
Prove that given any $2$ vertices $v_0,v_1$ of Graph $G$ that is a club, there is a path of length at most $2$ starting in $v_0$ and ending in $v_1$ Definition of a club: Let $G$ be a graph with $n$ vertices where $n > 2$. We call the graph $G$ a club if for all pairs of distinct vertices $u$ and $v$ not connected by an edge, we have $\deg(u)+\deg(v)\ge n$. Ref: Khoussainov, B., Khoussainova, N. (2012), Lectures on Discrete Mathematics for Computer Science, World Scientific, p. 83. My strategy was to prove this using the proof by cases method. Case 1: Prove the case when $v_0$ and $v_1$ are connected by an edge, in which case there is clearly a path from $v_0$ to $v_1$ of length $1$, so this case is proven. Case 2: Prove the case when $v_0$ and $v_1$ are not connected by an edge. Now, for this case, since the hypothesis is assumed to be true and Graph $G$ is a club, $$\deg(v_0) + \deg(v_1)\ge n\;.$$ That's all I've got so far, and I'm not sure how to proceed.
The statement is not true. Consider a path of length 4, where $v_0,v_1$ are the endpoints of the path. There is no path of length at most two from $v_0$ to $v_1$, and the graph is a club by your definition. Edit (after the definition of club changed): With the new definition, the proof can be made as follows. Let $G$ be a (simple) graph which is a club. Assume there is no path of length 1 or 2 from $v_0$ to $v_1$. Let $S_0$ denote the set of vertices joined to $v_0$ by an edge, and let $S_1$ be the set of vertices joined to $v_1$ by an edge. If there is a vertex $x$ which is in both $S_0$ and $S_1$ (that is $x \in S_0 \cap S_1$), then there is a path $v_0,x,v_1$ of length 2 from $v_0$ to $v_1$, so assume there is no vertex in $S_0 \cap S_1$. We know by the definition of $S_0$ and $S_1$ that $|S_0| = \text{deg}(v_0)$ and $|S_1| = \text{deg}(v_1)$, and since $G$ is a club, $|S_0| + |S_1| = \text{deg}(v_0) + \text{deg}(v_1) \geq n$. Since $S_0$ and $S_1$ have no vertices in common, this means that every one of the $n$ vertices is in $S_0$ or in $S_1$. But this is a contradiction, since $v_0$ and $v_1$ are in neither of them: a simple graph has no loops, so $v_0 \notin S_0$ and $v_1 \notin S_1$, and if $v_0 \in S_1$ or $v_1 \in S_0$ there would be a direct edge from $v_0$ to $v_1$.
Express Expectation and Variance in other terms. Let $X \sim N(\mu,\sigma^2)$ and $$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$ where $-\infty < x < \infty$. Express $\operatorname{E}(aX + b)$ and $\operatorname{Var}(aX +b)$ in terms of $\mu$, $\sigma$, $a$ and $b$, where $a$ and $b$ are real constants. This is probably an easy question but I'm desperate at Probability! Any help is much appreciated as I'm not even sure where to start.
Not an answer: check out Wikipedia, and then learn these through comprehension and by heart:

* Normal distribution (expectation and $\sigma$ included)
* What variance is
* Important properties of variance
* Important properties of expected value
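For reference, the properties being pointed to are linearity of expectation and quadratic scaling of variance, $\operatorname{E}(aX+b)=a\mu+b$ and $\operatorname{Var}(aX+b)=a^2\sigma^2$. A quick simulation sketch (assuming numpy; the sample size is arbitrary) to sanity-check them:

```python
# Monte Carlo check of E(aX+b) = a*mu + b and Var(aX+b) = a^2 * sigma^2.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, a, b = 2.0, 3.0, -1.5, 4.0
y = a * rng.normal(mu, sigma, size=1_000_000) + b

print(y.mean(), a * mu + b)        # both approximately 1.0
print(y.var(), a**2 * sigma**2)    # both approximately 20.25
```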
What function $f$ such that $a_1 \oplus\, \cdots\,\oplus a_n = 0$ implies $f(a_1) \oplus\, \cdots\,\oplus f(a_n) \neq 0$ For a certain algorithm, I need a function $f$ on integers such that $a_1 \oplus a_2 \oplus \, \cdots\,\oplus a_n = 0 \implies f(a_1) \oplus f(a_2) \oplus \, \cdots\,\oplus f(a_n) \neq 0$ (where the $a_i$ are pairwise distinct, non-negative integers and $\oplus$ is the bitwise XOR operation). The function $f$ should be computable in $O(m)$, where $m$ is the maximum number of digits of the $a_i$. Of course the simpler the function is, the better. Preferably the output of the function would fit into $m$ digits as well. Is there something like this? It would also be okay to have a family of finitely many functions $f_n$ such that for one of the functions the result of the above operation will be $\neq 0$. My own considerations so far were the following:

* If we choose the ones' complement as $f$, we can rule out all cases where $n$ is odd.
* If $n$ is even, this means that for every bit, an even number of the $a_i$ have the bit set and the rest have not, therefore taking the ones' complement before XORing doesn't change the result.

So the harder part seems to be the case where $n$ is even.
The function $f$, if it exists, must have very large outputs. Call a set of integers "closed" if it is closed under the operation $\oplus$. A good example of a closed set of integers is the set of positive integers smaller than $2^k$ for some $k$. Let $S$ be a closed set of integers that form the domain of $f$. Take as an example those positive integers with at most $m$ bits. Let $T$ be the codomain of $f$, so that we have $f : S \to T$ being the function of interest. Assume furthermore that $T$ is a closed set of integers. Big claim: $|T| \ge (2^{|S|}-1)/|S|$. Proof sketch: Let $A$ be the set of sequences $a_1 < a_2 < \dots < a_n$ of distinct positive integers in $S$. Let $p : A \to S$ be defined by $p(a_1,a_2,\dots,a_n) = a_1 \oplus \dots \oplus a_n$, and let $q : A \to T$ be defined by $q(a_1,\dots,a_n) = f(a_1) \oplus \dots \oplus f(a_n)$. Claim: If $p(a_1,\dots,a_n) = p(b_1,\dots,b_l),$ then $q(a_1,\dots,a_n) \ne q(b_1,\dots,b_l)$. Proof: Interleave the sequences $a$ and $b$, removing duplicates, to obtain a sequence $c$. Then $p(c) = p(a) \oplus p(b) = 0$, so $q(c) \ne 0$; yet $q(c) = q(a) \oplus q(b)$. Now, note that there are $2^{|S|}-1$ elements of $A$, so there must be $(2^{|S|}-1)/|S|$ such elements sharing the same value of $p$. This means that $T$ must contain $(2^{|S|}-1)/|S|$ distinct values. So, if $S$ consists of $m$-bit integers, then $T$ must consist of roughly $(2^m-m)$-bit integers. EDIT: Incorporating comments: the function $f(a) = 2^a$ has the desired property, and roughly achieves this bound.
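A small sketch (plain Python, exhaustive over a small range) of why the suggested $f(a)=2^a$ works: pairwise distinct $a_i$ map to distinct powers of two, so the XOR of the images has exactly $n\ge 1$ one-bits and cannot vanish:

```python
# For pairwise distinct a_i with a_1 XOR ... XOR a_n = 0, check that
# 2**a_1 XOR ... XOR 2**a_n != 0 (each 2**a_i occupies its own bit).
import itertools
from functools import reduce
from operator import xor

def xor_all(vals):
    return reduce(xor, vals, 0)

for n in range(2, 6):
    for combo in itertools.combinations(range(12), n):
        if xor_all(combo) == 0:
            assert xor_all(2 ** a for a in combo) != 0
print("no counterexample among subsets of {0,...,11}")
```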
Count permutations. Hi, I have a combinatorial exercise: Let $s \in S_n$. Count the permutations such that $$s(1)=1$$ and $$|s(i+1)-s(i)|\leq 2 \,\, \mathrm{for} \, \, i\in\{1,2, \ldots , n-1 \}$$ Thank you!
This is OEIS A038718 at the On-Line Encyclopedia of Integer Sequences. The entry gives the generating function $$g(x)=\frac{x^2-x+1}{x^4-x^3+x^2-2x+1}$$ and the recurrence $a(n) = a(n-1) + a(n-3) + 1$, where clearly we must have initial values $a(0)=0$ and $a(1)=a(2)=1$. Added: I got this by calculating $a(1)$ through $a(6)$ by hand and then looking at OEIS. For completeness, here’s a brief justification for the recurrence. Start with any of the $a(n-1)$ permutations of $[n-1]$. Add $1$ to each element of the permutation, and prepend a $1$; the result is an acceptable permutation of $[n]$ beginning $12$, and every such permutation of $[n]$ is uniquely obtained in this way. Now take each of the $a(n-3)$ permutations of $[n-3]$, add $3$ to each entry, and prepend $132$; the result is an acceptable permutation of $[n]$ beginning $132$, and each such permutations is uniquely obtained in this way. The only remaining acceptable permutation of $[n]$ is the unique single-peaked permutation: $13542$ and $135642$ are typical examples for odd and even $n$ respectively. From here we can easily get the generating function. I’ll write $a_n$ for $a(n)$. Assuming that $a_n=0$ for $n<0$, we have $a_n=a_{n-1}+a_{n-3}+1-[n=0]-[n=2]$ for all $n\ge 0$, where the last two terms are Iverson brackets. Multiply by $x^n$ and sum over $n\ge 0$: $$\begin{align*} g(x)&=\sum_{n\ge 0}a_nx^n\\ &=\sum_{n\ge 0}a_{n-1}x^n+\sum_{n\ge 0}a_{n-3}x^n+\sum_{n\ge 0}x^n-1-x^2\\ &=xg(x)+x^3g(x)+\frac1{1-x}-1-x^2\;, \end{align*}$$ so $$\begin{align*}g(x)&=\frac{1-(1-x)-x^2(1-x)}{(1-x)(1-x-x^3)}\\ &=\frac{x-x^2+x^3}{1-2x+x^2-x^3+x^4}\;. \end{align*}$$ This is $x$ times the generating function given in the OEIS entry; that one is offset so that $a(0)=a(1)=1$, and in general its $a(n)$ is the number of acceptable permutations of $[n+1]$; my $a_n$ is the number of acceptable permutations of $[n]$. (I didn’t notice this until I actually worked out the generating function myself.)
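As a cross-check of the recurrence, here is a brute-force count compared against $a(n)=a(n-1)+a(n-3)+1$ (plain Python; the function names are just for illustration):

```python
# Compare the recurrence a(n) = a(n-1) + a(n-3) + 1, a(0)=0, a(1)=a(2)=1,
# against a direct enumeration of the permutations in the problem.
import itertools

def brute(n):
    return sum(
        1
        for p in itertools.permutations(range(1, n + 1))
        if p[0] == 1 and all(abs(p[i + 1] - p[i]) <= 2 for i in range(n - 1))
    )

a = [0, 1, 1]
while len(a) <= 8:
    a.append(a[-1] + a[-3] + 1)

for n in range(1, 9):
    assert brute(n) == a[n]
print(a)  # [0, 1, 1, 2, 4, 6, 9, 14, 21]
```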
Proving that the number of vertices of odd degree in any graph G is even I'm having a bit of trouble with the question below. Given $G$ is an undirected graph, the degree of a vertex $v$, denoted by $\mathrm{deg}(v)$, in graph $G$ is the number of neighbors of $v$. Prove that the number of vertices of odd degree in any graph $G$ is even.
Simply: the sum of an even number of odd numbers is even (always odd+odd=even, even+odd=odd, and even+even=even). Since the sum of the degrees of all vertices must be an even number (it is twice the number of edges), the number of odd-degree vertices must be even, which @Mike has presented very succinctly.
A tricky but silly doubt regarding the solutions of $x^2/(y-1)^2=1$ Motivation: I have been confused by some degree 2 equations. I suddenly came across a simple equation and couldn't get the quintessence behind it. I have an equation $$\dfrac{x^2}{(y-1)^2}=1 \tag{1}$$ and I was looking for its solutions. It was asked of me by some kid (of 9th standard). I did some manipulation and finally got $$x^2=(y-1)^2 \tag{2} $$ One can see that $(0,1)$ satisfies Equation $[2]$ well. I was happy, but within a short time I realized that the same solution can't satisfy Equation $[1]$: if you substitute $(0,1)$ in $[1]$ you get $\dfrac{0}{0}=1$, which is wrong. The answer that convinced me finally: We can see the same equation as $x^2 \cdot \dfrac{1}{(y-1)^2}=1$. We know that the set of integers forms a ring, so the product of two numbers is one if one number is the inverse of the other number; the '$1$' present on the R.H.S. is the identity element, and the product of an entity with its inverse always gives us the identity. So when $x$ is $0$, the $0$ doesn't have an inverse in the integers, and the case is to be omitted. Still persisting questions: But the thing that surprises me is that Wolfram Alpha gives me this solution. In the picture you can clearly see that they both intersect at $(0,1)$. But what is the confusion? We omitted that solution, but in fact $(0,1)$ is the intersection of the two lines. Questions that are to be answered by learned people:

* What is the value of the term $\dfrac{0}{0}$? Isn't it $1$?
* Why does the solution pair $(0,1)$ satisfy $x^2=(y-1)^2$ but not $\dfrac{x^2}{(y-1)^2}=1$? We know that both of them are manifestations of each other in a simple manner.
* If we need to omit that solution, why do the lines intersect at $(0,1)$?

Thank you everyone for giving your time.
The equations $$x^2=(y-1)^2\tag{1}$$ and $$\frac{x^2}{(y-1)^2}=1\tag{2}$$ do not have the same solution set. Every solution of $(2)$ is a solution of $(1)$, but $\langle 0,1\rangle$ is a solution of $(1)$ that is not a solution of $(2)$, because $\frac00$ is undefined. The reason is that $(1)$ does not imply $(2)$. Note first that $(2)$ does imply $(1)$, because you can multiply both sides of $(2)$ by $(y-1)^2$ to get $(1)$. In order to derive $(2)$ from $(1)$, however, you must divide both sides of $(1)$ by $(y-1)^2$, and this is permissible if and only if $(y-1)^2\ne 0$. Thus, $(1)$ and $(2)$ are equivalent if and only if $(y-1)^2\ne 0$. As long as $(y-1)^2\ne 0$, $(1)$ and $(2)$ have exactly the same solutions, but a solution of $(1)$ with $(y-1)^2=0$ need not be (and in fact isn’t) a solution of $(2)$. As far as the graphs go, the solution of $(1)$ is the union of the straight lines $y=x+1$ and $y=-x+1$. The solution of $(2)$ consists of every point on these two straight lines except their point of intersection.
Complete course of self-study I am about $16$ years old and I have just started studying some college mathematics. I may never manage to get into a proper or good university (I do not trust fate) but I want to really study mathematics. I request people to tell me what topics an undergraduate may/must study and the books that you highly recommend (please do not ask me to define an undergraduate). Background:

* Single variable calculus from Apostol's book Calculus;
* I have started I. N. Herstein's Topics in Algebra;
* I have a limited knowledge of linear algebra: I only know what a basis is, what a dimension is, a bit of transpose, inverse of a matrix, determinants defined in terms of co-factors, etc., but no more;
* absolutely elementary point set topology: open and closed balls, limit points, compactness, the Bolzano-Weierstrass theorem (I may have forgotten this topology bit);
* binomial coefficients, recursions, bijections;
* very elementary number theory: divisibility, modular arithmetic, Fermat's little theorem, Euler's phi function, etc.

I asked a similar question (covering less ground than this one) some time back which received no answers and which I deleted. Even if I do not manage to get into a good university, I wish to self-study mathematics. I thank all those who help me and all those who give me their valuable criticism and advice. P.S.: Thanks all of you. Time for me to start studying.
I would suggest some mathematical modeling or other practical application of mathematics. Also, finite automata and graph theory are interesting; they are further away from "pure math" as I see it, and they have given me another perspective on math.
Simple Limit Divergence I am working with a definition of a divergent limit as follows: A sequence $\{a_n\}$ diverges to $-\infty$ if, given any number $M$, there is an $N$ so that $n \ge N$ implies that $a_n \le M$. The sequence I am considering is $a_n = -n^2$, which I thought would be pretty simple, but I keep running into a snag. My Work: For a given $M$, we want to show that $n \ge N \Rightarrow -n^2 \le M$. So $n^2 \ge -M$. But here is where I run into trouble, because I can't take square roots from here. What should I do?
If $M>0$, the inequality $n^2\geq -M$ always holds, so you can take any $N$ you want. If $M\leq 0$, then you need $n\geq\sqrt{-M}$, and you can take $N=\lfloor\sqrt{-M}\rfloor +1$.
If $R$ is a ring s.t. $(R,+)$ is finitely generated and $P$ is a maximal ideal then $R/P$ is a finite field Let $R$ be a commutative unitary ring and suppose that the abelian group $(R,+)$ is finitely generated. Let also $P$ be a maximal ideal of $R$. Then $R/P$ is a finite field. Well, the fact that the quotient is a field is obvious. The problem is that I have to show it is a finite field. I do not know how to start: I think that we have to use some tools from the classification of modules over a PID (the hypothesis about the additive group is quite strong). I found similar questions here and here but I think my question is (much) easier, though I don't manage to prove it. What do you think? Have you got any suggestions? Thanks in advance.
As abelian groups, both $\,R\,,\,P\,$ are f.g. and thus the abelian group $\,R/P\,$ is f.g....but this is also a field so if it had an element of additive infinite order then it'd contain an isomorphic copy of $\,\Bbb Z\,$ and thus also of $\,\Bbb Q\,$, which of course is impossible as the last one is not a f.g. abelian group. (of course, if an abelian group is f.g. then so is any subgroup)
Frechet Differentiability of a Functional defined on some Sobolev Space How can I prove that the following functional is Fréchet differentiable and that the Fréchet derivative is continuous? $$ I(u)=\int_\Omega |u|^{p+1} dx , \quad 1<p<\frac{n+2}{n-2} $$ where $\Omega$ is a bounded open subset of $\mathbb{R}^n$ and $I$ is a functional on $H^1_0(\Omega).$
As was given in the comments, the Gâteaux derivative is $$ I'(u)\psi = (p+1) \int_\Omega |u|^{p-1}u\psi. $$ It is clearly linear, and bounded on $L^{p+1}$ since $$ |I'(u)\psi| \leq (p+1) \|u\|_{p+1}^p\|\psi\|_{p+1}, $$ by the Hölder inequality with the exponents $\frac{p+1}p$ and $p+1$. Here $\|\cdot\|_{q}$ denotes the $L^q$-norm. Then the boundedness on $H^1_0(\Omega)$ follows from the continuity of the embedding $H^1_0\subset L^{p+1}$. Now we will show the continuity of $I':L^{p+1}\to (L^{p+1})'$, with the latter space taken with its norm topology. First, some elementary calculus. For a constant $a>0$, the function $f(x)=|x|^a x$ is continuously differentiable with $$ f'(x) = (a+1)|x|^a, $$ implying that $$ \left||x|^ax-|y|^ay\right| \leq (a+1)\left|\int_{x}^{y} |t|^a\mathrm{d}t \right| \leq (a+1)\max\{|x|^a,|y|^a\}|x-y|. $$ Using this, we have $$ |I(u)\psi-I(v)\psi|\leq (p+1)\int_\Omega \left||u|^{p-1}u-|v|^{p-1}v\right|\cdot|\psi| \leq p(p+1) \int_\Omega \left(|u|^{p-1}+|v|^{p-1}\right)|u-v|\cdot|\psi|. $$ Finally, it follows from the Hölder inequality with the exponents $\frac{p+1}{p-1}$, $p+1$, and $p+1$ that $$ |I(u)\psi-I(v)\psi| \leq p(p+1) \left(\|u\|_{p+1}^{p-1}+\|v\|_{p+1}^{p-1}\right)\|u-v\|_{p+1}\cdot\|\psi\|_{p+1}, $$ which establishes the claim. Finally, the continuity of $I':H^1_0\to (H^1_0)'$ follows from the continuity of the embedding $H^1_0\subset L^{p+1}$.
One step subgroup test help Possible Duplicate: Basic Subgroup Conditions. Could someone please explain how the one-step subgroup test works? I know it's important and everything, but I do not know how to apply it, nor the two-step subgroup test. If someone could also give some examples with it, it would be really helpful. Thank you.
Rather than prove that the "one step subgroup test" and the "two step subgroup test" are equivalent (which the links in the comments do very well), I thought I would "show it in action". Suppose we want to show that $2\Bbb Z = \{k \in \Bbb Z: k = 2m, \text{for some }m \in \Bbb Z\}$ is a subgroup of $\Bbb Z$ under addition. A) The "two-step method": first, we show closure - given $k,k' \in 2\Bbb Z$, we have that: $k = 2m,k' = 2m'$ for some integers $m,m'$, so $k+k' = 2m+2m' = 2(m+m')$. Since $\Bbb Z$ is a group, and closed under addition, $m+m'$ is an integer, so $k+k' \in 2\Bbb Z$. Next, we show that if $k \in 2\Bbb Z$, $-k \in 2\Bbb Z$: since $k = 2m$, for some integer $m$, we have $-k = -(2m) = 2(-m)$, and since $-m$ is also an integer, $-k \in 2\Bbb Z$. B) The "one step method": here, we combine both steps into one: given $k,k' \in 2\Bbb Z$, we aim to show that $k + (-k') \in 2\Bbb Z$. As before, we write: $k + (-k') = k - k' = 2m - 2m' = 2(m -m')$, and since $m - m'$ is an integer, $k + (-k') \in 2\Bbb Z$. A more sophisticated use of this test, is to show that for any subgroup $H$ of a group $G$, and any element $g \in G$, $gHg^{-1} = \{ghg^{-1}: h \in H\}$ is also a subgroup of $G$. So given any pair of elements $x,y \in gHg^{-1}$, we must show $xy^{-1} \in gHg^{-1}$. Note we can write: $x = ghg^{-1}$, for some $h \in H$, $y = gh'g^{-1}$, for some $h'\in H$. Then $y^{-1} = (gh'g^{-1})^{-1} = (g^{-1})^{-1}h'^{-1}g^{-1} = gh'^{-1}g^{-1}$, so: $xy^{-1} = (ghg^{-1})(gh'^{-1}g^{-1}) = gh(g^{-1}g)h'^{-1}g^{-1} = gh(e)h'^{-1}g^{-1} = g(hh'^{-1})g^{-1}$. Since $H$ is a subgroup, it contains all inverses, so $h'^{-1}$ is certainly in $H$, and $H$ is also closed under multiplication, so $hh'^{-1} \in H$, thus: $xy^{-1} = g(hh'^{-1})g^{-1} \in gHg^{-1}$, and we are done.
How is a system of axioms different from a system of beliefs? Other ways to put it: Is there any faith required in the adoption of a system of axioms? How is a given system of axioms accepted or rejected if not based on blind faith?
There are similarities in how people obtain beliefs on different matters. However, it is hardly blind faith. There are rules that guide mathematicians in choosing axioms. There have always been discussions about whether an axiom is really true or not. For example, not long ago mathematicians were discussing whether the axiom of choice is reasonable or not. Unexpected consequences of the axiom, like the well-ordering principle, caused many to think it is not true. The same applies to axioms that are discussed today among set theorists: set-theoretic statements which are independent of ZFC. There are various views regarding these, but they are not based on blind belief. One nice paper to have a look at is Saharon Shelah's Logical Dreams. (This is only one of the views regarding which axioms we should adopt for mathematics; another interesting point of view is the one held by Gödel, which can be found in his collected works.) I think a major reason for accepting the consistency of mathematical systems like ZFC is that this statement is refutable (to refute the statement one just needs to come up with a proof of a contradiction in ZFC) but no such proof has been found. In a sense, it can be considered to be similar to physics: as long as the theory is describing what we see correctly and doesn't lead to strange things, mathematicians will continue to use it. If at some point we notice that it is not so (this happened in the last century in naive Cantorian set theory, see Russell's Paradox) we will fix the axioms to solve those issues. There have been several discussions on the FOM mailing list that you can read if you are interested. In short, the adoption of axioms for mathematics is not based on "blind faith".
A non-square matrix with orthonormal columns I know these 2 statements to be true: 1) An $n \times n$ matrix $U$ has orthonormal columns iff $U^TU=I=UU^T$. 2) An $m \times n$ matrix $U$ has orthonormal columns iff $U^TU=I$. But can (2) be generalised to become "An $m \times n$ matrix $U$ has orthonormal columns iff $U^TU=I=UU^T$"? Why or why not? Thanks!
The $(i,j)$ entry of $U^T U$ is the dot product of the $i$'th and $j$'th columns of $U$, so the matrix has orthonormal columns if and only if $U^T U = I$ (the $n \times n$ identity matrix, that is). If $U$ is $m \times n$, this requires $m \ge n$, because the rank of $U^T U$ is at most $\min(m,n)$. On the other hand, $U U^T$ is $m \times m$, and this again has rank at most $\min(m,n)$, so if $m > n$ it can't be the $m \times m$ identity matrix.
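A concrete instance of the rank obstruction (assuming numpy is available): a tall matrix with orthonormal columns satisfies $U^TU=I$ while $UU^T$ is only a projection:

```python
# For a 3x2 U with orthonormal columns: U^T U = I_2, but U U^T has rank 2,
# so it cannot be the 3x3 identity.
import numpy as np

U = np.array([[1., 0.],
              [0., 1.],
              [0., 0.]])
print(U.T @ U)                          # the 2x2 identity
print(U @ U.T)                          # projection onto col(U), not I_3
print(np.linalg.matrix_rank(U @ U.T))   # 2 < 3
```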
Proving that $2^{2^n} + 5$ is always composite by working modulo $3$ By working modulo 3, prove that $2^{2^n} + 5$ is always composite for every positive integer n. No need for a formal proof by induction, just the basic idea will be great.
Obviously $2^2 \equiv 1 \pmod 3$. If you take the above congruence to the power of $k$ you get $$(2^2)^k=2^{2k} \equiv 1^k=1 \pmod 3$$ which means that $2$ raised to any even power is congruent to $1$ modulo $3$. What can you say about $2^{2k}+5$ then modulo 3? It is good to keep in mind that you can take powers of congruences, multiply them and add them together. If you have finished the above, you have shown that $3\mid 2^{2k}+5$. Does this imply that $2^{2k}+5$ is composite?
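For a quick machine check of the conclusion (plain Python; `pow` with a modulus keeps the huge exponent manageable):

```python
# 2^(2^n) = (2^2)^(2^(n-1)) = 1 (mod 3) for n >= 1, so 3 | 2^(2^n) + 5;
# since 2^(2^n) + 5 > 3, the number is composite.
for n in range(1, 10):
    assert (pow(2, 2 ** n, 3) + 5) % 3 == 0
print("3 divides 2^(2^n) + 5 for n = 1..9")
```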
Equivalence of a Lebesgue Integrable function I have the following question: Let $X$: $\mu(X)<\infty$, and let $f \geq 0$ on $X$. Prove that $f$ is Lebesgue integrable on $X$ if and only if $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty $. I have the following ideas, but am a little unsure. For the forward direction: By our hypothesis, we are taking $f$ to be Lebesgue integrable. Assume $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) = \infty $. Then for any n, no matter how large, $\mu(\lbrace x \in X : f(x) \geq 2^n \rbrace)$ has positive measure. Otherwise, the sum will terminate for a certain $N$, giving us $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty $. Thus we have $f$ unbounded on a set of positive measure, which in combination with $f(x) \geq 0$, gives us that $\int_E f(x) d\mu=\infty$. This is a contradiction to $f$ being Lebesgue integrable. So our summation must be finite. For the reverse direction: We have that $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty \\$. Assume that $f$ is not Lebesgue integrable, then we have $\int_E f(x) d\mu=\infty$. Since we are integrating over a finite set $X$, then this means that $f(x)$ must be unbounded on a set of positive measure, which makes our summation infinite, a contradiction. Any thoughts as to the validity of my proof? I feel as if there is an easier, direct way to do it.
For $f\geqslant 0$ one has the pointwise bounds $$\sum_{n=0}^{+\infty}2^n\,\mathbf 1_{f\geqslant2^n}\leqslant 2f\qquad\text{and}\qquad f\lt1+\sum_{n=0}^{+\infty}2^n\,\mathbf 1_{f\geqslant2^n},$$ which can be checked separately on $\{f\lt 1\}$ and on each set $\{2^k\leqslant f\lt 2^{k+1}\}$, where the sum equals $2^{k+1}-1$. Integrating over $X$ (monotone convergence lets us swap sum and integral) gives $$\sum_{n=0}^{\infty}2^n \mu(\{f \geqslant 2^n\})\leqslant 2\int_X f \,d\mu\qquad\text{and}\qquad \int_X f \,d\mu\leqslant \mu(X)+\sum_{n=0}^{\infty}2^n \mu(\{f \geqslant 2^n\}).$$ Since $\mu(X)\lt\infty$, the integral is finite if and only if the sum is.
Prime factorization, Composite integers. Describe how to find a prime factor of 1742399 using at most 441 integer divisions and one square root. So far I have only square rooted 1742399 to get 1319.9996. I have also tried to find a prime number that divides 1742399 exactly; I have tried up to 71 but had no luck. Surely there is an easier way that I am missing without trying numerous prime numbers. Any help would be great, thanks.
Note that the problem asks you to describe how you would go about factorizing 1742399 in at most 442 operations. You are not being asked to carry out all these operations yourself! I think your method of checking all primes up to the square root is exactly what the problem is looking for, but to be safe you should check that there are no more than 441 primes less than or equal to 1319.
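For what it's worth, here is that procedure carried out by machine (a plain Python sketch). Incidentally, $1742399 = 1320^2 - 1 = 1319\cdot 1321$, so the factor only turns up on the very last trial division:

```python
# Trial division of 1742399 by every prime up to its square root.
import math

N = 1742399
limit = math.isqrt(N)                 # 1319, the "one square root"

# sieve of Eratosthenes (no divisions used here)
is_prime = bytearray([1]) * (limit + 1)
is_prime[0:2] = b"\x00\x00"
for p in range(2, math.isqrt(limit) + 1):
    if is_prime[p]:
        is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
primes = [p for p in range(2, limit + 1) if is_prime[p]]
print(len(primes), "primes <= 1319")  # 215, comfortably under 441

for divisions, p in enumerate(primes, start=1):
    if N % p == 0:
        print(N, "=", p, "*", N // p, "after", divisions, "divisions")
        break
```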
What is it about modern set theory that prevents us from defining the set of all sets which are not members of themselves? We can clearly define a set of sets. I feel intuitively like we ought to define sets which do contain themselves; the set of all sets which contain sets as elements, for instance. Does that set produce a contradiction? I do not have a very firm grasp on what constitutes a set versus what constitutes a class. I understand that all sets are classes, but that there exist classes which are not sets, and this apparently resolves Russell's paradox, but I don't think I see exactly how it does so. Can classes not contain classes? Can a class contain itself? Can a set?
crf wrote: "I understand that all sets are classes, but that there exist classes which are not sets, and this apparently resolves Russell's paradox...." You don't need classes to resolve Russell's paradox. The key is that, for any formula $P$, you cannot automatically assume the existence of $\{x | P(x)\}$. If $P(x)$ is $x\notin x$, we arrive at Russell's Paradox. If $P(x)$ is $x\in x$, however, you don't necessarily run into any problems. So, you can ban the use of certain formulas, hoping that the ban will cover all possibilities that lead to a contradiction. My preference (see http://www.dcproof.com ) is not to assume a priori the existence of any sets, not even the empty set. In such a system, you cannot prove the existence of any sets, problematic or otherwise. You can, of course, postulate the existence of a set in such a system, and construct other sets from it, e.g. subsets, or power sets as permitted.
About the sequence satisfying $a_n=a_{n-1}a_{n+1}-1$ "Consider sequences of positive real numbers of the form x,2000,y,..., in which every term after the first is 1 less than the product of its two immediate neighbors. For how many different values of x does the term 2001 appear somewhere in the sequence? (A) 1 (B) 2 (C) 3 (D) 4 (E) More than 4" Can anyone suggest a systematic way to solve this problem? Thanks!
(This is basically EuYu's answer with the details of periodicity added; took a while to type up.) Suppose that $a_0 , a_1 , \ldots$ is a generalised sequence of the type described, so that $a_i = a_{i-1} a_{i+1} - 1$ for all $i > 0$. Note that this condition is equivalent to demanding that $$a_{i+1} = \frac{ a_i + 1 }{a_{i-1}}.$$ Using this we find the following recurrences: $$ a_2 = \frac{ a_1 + 1}{a_0}; \\ a_3 = \frac{ a_2 + 1}{a_1} = \frac{ \frac{ a_1 + 1}{a_0} }{a_1} = \frac{ a_0 + a_1 + 1 }{ a_0a_1 }; \\ a_4 = \frac{ a_3 + 1 }{a_2} = \frac{\frac{ a_0 + a_1 + 1 }{ a_0a_1 } + 1}{\frac{ a_1 + 1}{a_0}} = \frac{ ( a_0 + 1 )( a_1 + 1) }{ a_1 ( a_1 + 1 ) } = \frac{a_0 + 1}{a_1};\\ a_5 = \frac{ a_4 + 1 }{ a_3 } = \frac{ \frac{a_0 + 1}{a_1} + 1}{\frac{ a_0 + a_1 + 1 }{ a_0a_1 }} = \frac{ \left( \frac{a_0 + a_1 + 1}{a_1} \right) }{ \left( \frac{a_0+a_1+1}{a_0a_1} \right) } = a_0 \\ a_6 = \frac{ a_5 + 1 }{a_4} = \frac{ a_0 + 1}{ \left( \frac{ a_0 + 1 }{a_1} \right) } = a_1. $$ Thus every such sequence is periodic with period 5, so if 2001 appears, it must appear as one of $a_0, a_1, a_2, a_3, a_4$.

* Clearly if $a_0 = 2001$, we're done.
* As we stipulate that $a_1 = 2000$, it is impossible for $a_1 = 2001$.
* If $a_2 = 2001$, then it must be that $2001 = \frac{ 2000 + 1 }{a_0}$ and so $a_0 = 1$.
* If $a_3 = 2001$, then it must be that $2001 = \frac{a_0 + 2000 + 1}{a_0 \cdot 2000}$, and it follows that $a_0 = \frac{2001}{2000 \cdot 2001 - 1}$.
* If $a_4 = 2001$, then it must be that $2001 = \frac{ a_0 + 1 }{2000}$, and so $a_0 = 2001 \cdot 2000 - 1$.

There are thus exactly four values of $a_0$ such that 2001 appears in the sequence.
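A short sketch (plain Python, exact rational arithmetic via `fractions`) confirming both the period-5 behaviour and the four starting values:

```python
# Iterate a_{i+1} = (a_i + 1) / a_{i-1} starting from (x, 2000) and check
# that the sequence has period 5 and that each candidate x produces 2001.
from fractions import Fraction as F

def orbit(a0, a1, steps=10):
    seq = [F(a0), F(a1)]
    for _ in range(steps):
        seq.append((seq[-1] + 1) / seq[-2])
    return seq

candidates = [F(2001), F(1), F(2001, 2000 * 2001 - 1), F(2001 * 2000 - 1)]
for x in candidates:
    seq = orbit(x, 2000)
    assert seq[0] == seq[5] and seq[1] == seq[6]   # period 5
    assert F(2001) in seq[:5]
print("all four values of x make 2001 appear: answer (D)")
```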
Find all linearly dependent subsets of this set of vectors I have vectors in such form (1 1 1 0 1 0) (0 0 1 0 0 0) (1 0 0 0 0 0) (0 0 0 1 0 0) (1 1 0 0 1 0) (0 0 1 1 0 0) (1 0 1 1 0 0) I need to find all linearly dependent subsets over $Z_2$. For example 1,2,5 and 3,6,7. EDIT (after @rschwieb) The answer for the presented vectors: 521 642 763 6541 7432 75431 765321 I did it by brute force. I mean, I wrote a program to iterate through all the variants, $${7 \choose 3} + {7 \choose 4} + {7 \choose 5} + {7 \choose 6} + {7 \choose 7} = 99$$ in total. But I just thought that some method might exist for such a task. For now I'm trying to implement http://en.wikipedia.org/wiki/Quadratic_sieve . The code is incorporated in a whole program; I plan to put it here once I organize it well.
Let $M$ denote the transpose of your matrix, $$M= \begin{pmatrix} 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$ As rschwieb already noted, a vector $v$ with $Mv=0$ solves your problem. Using elementary row operations (modulo 2), we can easily bring it into the row echelon form $$M' = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$ Now, it is easy to see that the vectors $$v = \begin{pmatrix} \alpha \\ \alpha +\beta +\gamma \\ \gamma \\ \beta +\gamma \\ \alpha \\ \beta \\ \gamma \\ \end{pmatrix} $$ parameterized by $\alpha$, $\beta$, $\gamma$ are in the kernel of $M'$ and thus in the kernel of $M$. Setting $\alpha =0,1$, $\beta=0,1$, and $\gamma=0,1$, we obtain the $2^3=8$ solutions. The solution with $\alpha=\beta=\gamma=0$ is trivial, so there are 7 nontrivial solutions.
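For the systematic method the question asks about, here is a small GF(2) Gaussian-elimination sketch (plain Python; the function names are mine) that recovers exactly this kernel basis without brute force. All $2^3-1=7$ dependent subsets are then XOR-combinations of the three basis vectors:

```python
# Null space of M over GF(2) via row reduction; M is a list of 0/1 rows.
def nullspace_mod2(M):
    A = [row[:] for row in M]
    n, m = len(A), len(A[0])
    pivot_cols, r = [], 0
    for c in range(m):
        for i in range(r, n):               # find a pivot in column c
            if A[i][c]:
                A[r], A[i] = A[i], A[r]
                for j in range(n):          # clear column c elsewhere
                    if j != r and A[j][c]:
                        A[j] = [x ^ y for x, y in zip(A[j], A[r])]
                pivot_cols.append(c)
                r += 1
                break
    free = [c for c in range(m) if c not in pivot_cols]
    basis = []
    for f in free:                          # one kernel vector per free column
        v = [0] * m
        v[f] = 1
        for i, c in enumerate(pivot_cols):
            v[c] = A[i][f]                  # pivot variable from the RREF row
        basis.append(v)
    return basis

vectors = [[1,1,1,0,1,0],[0,0,1,0,0,0],[1,0,0,0,0,0],[0,0,0,1,0,0],
           [1,1,0,0,1,0],[0,0,1,1,0,0],[1,0,1,1,0,0]]
M = [list(row) for row in zip(*vectors)]    # transpose: vectors as columns
for v in nullspace_mod2(M):
    print(sorted(i + 1 for i, bit in enumerate(v) if bit))
# [1, 2, 5], [2, 4, 6], [2, 3, 4, 7], matching alpha, beta, gamma above
```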
Intuition on proof of Cauchy Schwarz inequality To prove Cauchy Schwarz inequality for two vectors $x$ and $y$ we take the inner product of $w$ and $w$ where $w=y-kx$ where $k=\frac{(x,y)}{|x|^2}$ ($(x,y)$ is the inner product of $x$ and $y$) and use the fact that $(w,w) \ge0$ . I want to know the intuition behind this selection. I know that if we assume this we will be able to prove the theorem, but the intuition is not clear to me.
My favorite proof is inspired by Axler and uses the Pythagorean theorem (that $\|v+w\|^2 =\|v\|^2+\|w\|^2$ when $(v,w)=0$). It motivates the choice of $k$ as the component of $y$ in an orthogonal decomposition (i.e., $kx$ is the projection of $y$ onto the space spanned by $x$ using the decomposition $\langle x\rangle\oplus \langle x\rangle^\perp$). For simplicity we will assume a real inner product space (a very small tweak makes it work in both cases). The idea is that we want to show $$\left|\left(\frac{x}{\|x\|},y\right)\right| = |(\hat{x},y)| \leq \|y\|,$$ where I have divided both sides by $\|x\|$ and let $\hat{x}=x/\|x\|$. If we interpret taking an inner product with a unit vector as computing a component, the above says the length of $y$ is at least the component of $y$ in the $x$ direction (quite plausible). Following the above comments we will prove this statement by decomposing $y$ into two components: one in the direction of $x$ and the other orthogonal to $x$. Let $y=k\hat{x}+(y-k\hat{x})$ where $k = (\hat{x},y)$. We see that $$(\hat{x},y-k\hat{x}) = (\hat{x},y) - (\hat{x},k\hat{x}) = k - k(\hat{x},\hat{x})=0,$$ showing $\hat{x}$ and $y-k\hat{x}$ are orthogonal. This allows us to apply the Pythagorean theorem: $$\|y\|^2 = \|k\hat{x}\|^2+\|y-k\hat{x}\|^2 = |k|^2 + \|y-k\hat{x}\|^2 \geq |k|^2,$$ since norms are non-negative. Taking square roots gives the result. As a final comment, note that $$k\hat{x} = (\hat{x},y)\hat{x} = \left(\frac{x}{\|x\|},y\right)\frac{x}{\|x\|} = \frac{(x,y)}{\|x\|^2}x$$ matches your formulation.
Two problems with prime numbers Problem 1. Prove that there exists $n\in\mathbb{N}$ such that in the interval $(n^2, \ (n+1)^2)$ there are at least $1000$ prime numbers. Problem 2. Let $s_n=p_1+p_2+...+p_n$ where $p_i$ is the $i$-th prime number. Prove that for every $n$, there exists $k\in\mathbb{N}$ such that $s_n<k^2<s_{n+1}$. I found these two a while ago and they interested me, but I don't have any ideas.
Problem 2: For any positive real $x$, there is a square between $x$ and $x+2\sqrt{x}+2$. Therefore it will suffice to show that $p_{n+1}\geq 2\sqrt{s_n}+2$. We have $s_{n}\leq np_n$ and $p_{n+1}\geq p_n+2$, so we just need to show $p_n\geq 2\sqrt{np_n}$, i.e., $p_n\geq 4n$. That this holds for all sufficiently large $n$ follows either from a Chebyshev-type estimate $\pi(x)\asymp\frac{x}{\log(x)}\,$ (we could also use PNT, but we don't need the full strength of this theorem), or by noting that fewer than $\frac{1}{4}$ of the residue classes mod $210=2\cdot3\cdot5\cdot7$ are coprime to $210$. We can check that statement by hand for small $n$. There have already been a couple of answers, but here is my take on problem 1: Suppose the statement is false. It follows that $\pi(x)\leq 1000\sqrt{x}$ for all $x$. This contradicts Chebyshev's estimate $\pi(x)\asymp \frac{x}{\log(x)}$
Attaching a topological space to another I'm self-studying Mendelson's Introduction to Topology. There is an example in the identification topology section that I cannot understand: Let $X$ and $Y$ be topological spaces and let $A$ be a non-empty closed subset of $X$. Assume that $X$ and $Y$ are disjoint and that a continuous function $f : A \to Y$ is given. Form the set $(X - A) \cup Y$ and define a function $\varphi: X \cup Y \to (X - A) \cup Y$ by $\varphi(x) = f(x)$ for $x \in A$, $\varphi(x) = x$ for $x \in X - A$, and $\varphi(y) = y$ for $y \in Y$. Give $X \cup Y$ the topology in which a set is open (or closed) if and only if its intersections with $X$ and $Y$ are both open (or closed). $\varphi$ is onto. Let $X \cup_f Y$ be the set $(X - A) \cup Y$ with the identification topology defined by $\varphi$. Let $I^2$ be the unit square in $\mathbb{R}^2$ and let $A$ be the union of its two vertical edges. Let $Y = [0, 1]$ be the unit interval. Define $f : A \to Y$ by $f(x, y) = y$. Then $I^2 \cup_f Y$ is a cylinder formed by identifying the two vertical edges of $I^2$. I don't understand how $I^2 \cup_f Y$ can be a cylinder. The set is equal to $(I^2 - A) \cup Y$, which is a union of a subset of $\mathbb{R}^2$ and $[0, 1]$. How can this be a cylinder? The book has an exercise that constructs a torus in a similar manner. I'm hoping to be able to solve it once I understand this example. I looked up some examples online. While I understand the definitions and theorems of identification topologies, I have no clue how geometric objects are constructed.
I think it's good that you ask this question, plus one. Your intuition will eventually develop, don't worry. I had trouble understanding identification topologies too when I saw them first; it just takes some time to get used to. The way I think about it now is as follows: you have two spaces $X,Y$ and you know how you want to "glue" them together, namely, you take all the points in $A \subset X$ and stick them onto $Y$. The map $f$ tells you where on $Y$ you stick them. [The original answer illustrated this gluing, and the cylinder example, with two pictures.] In your example: each point $(0,y)$ or $(1,y)$ on the two vertical edges is glued to the point $y \in Y = [0,1]$, so both edges get identified with $Y$ height by height, and hence with each other. Identifying the two vertical edges of a square with each other is exactly rolling it up into a cylinder.
Deducing formula for a linear transformation The question I'm answering is as follows: Let $ T: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be a linear transformation such that $ T(1,1) = (2,1) $ and $ T(0,2) = (2,8) $. Find a formula for $ T(a,b) $ where $ (a,b) \in \mathbb{R}^2 $. Earlier we proved that $\{(1,1), (0,2)\}$ spans $\mathbb{R}^2$. I used this when trying to find a formula for $T$. My working is: $T(a(1,1) + b(0,2)) = aT(1,1) + bT(0,2) $ Because $T$ is linear. Thus: $ T(a(1,1) + b(0,2)) = a(2,1) + b(2,8) = T(a,b)$ Is this correct? It seems a bit too easy and so I'm wondering if I missed anything.
Note first that your computation is not quite right: $a(1,1)+b(0,2)=(a,a+2b)$, not $(a,b)$, so what you found is a formula for $T(a,a+2b)$. To get $T(a,b)$ itself: We have $$(1,0)=(1,1)-\frac{1}{2}(0,2)\qquad\text{and} \qquad(0,1)=\frac{1}{2}(0,2).\tag{$1$}$$ Note that $(a,b)=a(1,0)+b(0,1)$. So $$T(a,b)=aT(1,0)+bT(0,1).$$ Now use the values of $T(1,1)$ and $T(0,2)$, and Equations $(1)$, to find $T(1,0)$ and $T(0,1)$, and simplify a bit.
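For checking one's hand computation, a numerical sketch (assuming numpy): the matrix $A$ of $T$ satisfies $AP=Q$, where the columns of $P$ are the given inputs and the columns of $Q$ their images.

```python
# Recover the matrix of T in the standard basis from the given data.
import numpy as np

P = np.array([[1., 0.],    # columns: (1,1) and (0,2)
              [1., 2.]])
Q = np.array([[2., 2.],    # columns: (2,1) and (2,8)
              [1., 8.]])
A = Q @ np.linalg.inv(P)
print(A)                        # [[ 1.  1.] [-3.  4.]], i.e. T(a,b) = (a+b, -3a+4b)
print(A @ [1, 1], A @ [0, 2])   # recovers (2,1) and (2,8)
```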
Explicitness in numeral system Prove that for every $a\in\mathbb{N}$ there is one and only one way to express it in the system with base $\mathbb{N}\ni s>1$. Seems classical, but I don't have any specific argument.
We can express a natural number $a$ in any base $s$ by writing $k = \lceil \log_s a \rceil$ for the highest digit index we could need, $a_k=\max(\{n\in \mathbb{N}: ns^k\leq a\}),$ and recursively $a_{i}=\max(\{n\in \mathbb{N}: ns^i \leq a-\sum_{j=i+1}^k a_j s^j\})$. It's immediate that each of these maxes exists, since $0\cdot s^i\leq a$ no matter what and $a s^i\geq a$, and that they're unique by, say, the well-ordering of $\mathbb{N}$. Then $$a=\sum_{j=0}^k a_j s^j$$
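The recursive max-construction above is just the usual repeated-division algorithm; a brief sketch in Python (the helper names are mine):

```python
# Base-s digits of a (least significant first) by repeated division, and
# the inverse map showing a = sum_j a_j * s^j.
def digits(a, s):
    ds = []
    while a > 0:
        a, r = divmod(a, s)
        ds.append(r)
    return ds or [0]

def undigits(ds, s):
    return sum(d * s**j for j, d in enumerate(ds))

a, s = 2023, 7
ds = digits(a, s)
print(ds, undigits(ds, s))  # [0, 2, 6, 5] 2023, i.e. 2023 = 5*7^3 + 6*7^2 + 2*7
```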
Stabilizer of a point and orbit of a point I really need help with this topic: I have an exam tomorrow and am trying to get this stuff into my head, but the book is not explaining these two topics to me properly. It gives me the definition of a stabilizer at a point where $\mathrm {Stab}_G (i) = \{\phi \in G \mid \phi(i) = i\}$, and where $\mathrm{Orb}_G (i) = \{\phi(i) \mid \phi \in G\}$. I do not know how to calculate the stabilizer nor the orbit for this. I am also given an example Let $G = \{ (1), (132)(465)(78), (132)(465), (123)(456), (123)(456)(78), (78)\}$ and then $\mathrm{Orb}_G (1) = \{1, 3, 2\}$, $\mathrm{Orb}_G (2) = \{2, 1, 3\}$, $\mathrm{Orb}_G (4) = \{4, 6, 5\}$, and $\mathrm{Orb}_G (7) = \{7, 8\}$. also $\mathrm{Stab}_G (1) = \{(1), (78)\},\\ \mathrm{Stab}_G (2) = \{(1), (78)\},\\ \mathrm{Stab}_G (3) = \{(1), (78)\},\text {and}\\ \mathrm{Stab}_G (7) = \{(1), (132)(465), (123)(456)\}.$ If someone could PLEASE go step by step in how this example was solved it would be really helpful. Thank you
In simple terms: the stabilizer of a point $i$ is the set of permutations in the group that do not move $i$. For $i=1$, the elements fixing $1$ are $(1)$ and $(78)$, so $\mathrm{Stab}_G(1)=\{(1),(78)\}$. The orbit of a point $i$ is the set of all points that the group elements send $i$ to. For $i=1$: $(1)$ sends $1\mapsto 1$; $(132)(465)(78)$ and $(132)(465)$ send $1\mapsto 3$; $(123)(456)$ and $(123)(456)(78)$ send $1\mapsto 2$; and $(78)$ sends $1\mapsto 1$. Collecting the images gives $\mathrm{Orb}_G(1)=\{1,3,2\}$. The other orbits and stabilizers in the example are computed the same way, point by point.
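Since the question asks for the mechanics step by step, here is a small computational sketch (plain Python; representing each permutation as a dictionary is my own convention) that reproduces every orbit and stabilizer in the example:

```python
# Build the six group elements from their cycle notation, then compute
# orbits (all images of a point) and stabilizers (elements fixing it).
def perm(cycles, points=range(1, 9)):
    p = {x: x for x in points}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

G = [perm(c) for c in [
    [],                          # (1)
    [(1,3,2), (4,6,5), (7,8)],   # (132)(465)(78)
    [(1,3,2), (4,6,5)],          # (132)(465)
    [(1,2,3), (4,5,6)],          # (123)(456)
    [(1,2,3), (4,5,6), (7,8)],   # (123)(456)(78)
    [(7,8)],                     # (78)
]]

orbit = lambda i: {g[i] for g in G}
stab = lambda i: [g for g in G if g[i] == i]

print(orbit(1), orbit(4), orbit(7))   # {1, 2, 3} {4, 5, 6} {7, 8}
print(len(stab(1)), len(stab(7)))     # 2 and 3, as in the book
```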
Sum of a stochastic process I have a question regarding the distribution of the sum of a discrete-time stochastic process. That is, if the stochastic process is $(X_1,X_2,X_3,X_4,\ldots)$, what is the distribution of $X_1+X_2+X_3+\ldots$? $X_i$ could be assumed from a discrete or continuous set, whatever is easier to calculate. I understand that it mainly depends on the distribution of the $X_i$ and on whether the $X_i$ are correlated, right? If they are independent, the computation is probably relatively straightforward, right? For the case of two variables, it is the convolution of the probability distributions, and this can probably be generalized to the case of $n$ variables, can't it? But what if they are dependent? Are there any types of stochastic processes where the distribution of the sum can be computed numerically or even be given as a closed-form expression? I really appreciate any hints!
Are there any types of stochastic processes, where the distribution of the sum can be computed numerically or even be given as a closed-form expression? As stated, the problem is quite equivalent to compute the distribution of the sum of an arbritary set of random variables. Little can be said in general, as the fact that the variables ($X_i$) form a stochastic process adds practically nothing. Let's assume that the stochastic process $X(n)$ is a stationary ARMA$(P,Q)$ process, i.e., it's generated from a white noise process $R(n)$ of zero mean and given distribution that passes through a LTI causal filter with $P$ zeroes and $Q$ poles. Then, the process $Z(n) = \sum_{k=n-M+1}^{n} X(k)$ is obtained by chaining a MA$(M)$ filter, so $Z(n)$ is ARMA$(P+M,Q)$ (apart from cancellation which might occur). Now any finite order invertible causal ARMA filter can be expressed as an infinite order MA filter, so that $Z(n)$ can be expressed as a (infinite) linear combination of the white noise input: $$Z(n) = \sum_{k=-\infty}^n a_k R(k)$$ Because $R(k)$ is iid, the distribution of the sum can be obtained as a convolution. (Notice, however, that the CLT does not apply here). In terms of the characteristic functions, we'd get $$F_Z(w)=\prod_{k=-\infty}^n F_R(a_k w)$$ Notice, however, that all this might have little or not practical use. For one thing, ARMA modelling is usually applied only to second order moment analysis.
Multiplicative group of integers modulo n definition issues It is easy to verify that the set $(\mathbb{Z}/n\mathbb{Z})^\times$ is closed under multiplication in the sense that $a, b ∈ (\mathbb{Z}/n\mathbb{Z})^\times$ implies $ab ∈ (\mathbb{Z}/n\mathbb{Z})^\times$, and is closed under inverses in the sense that $a ∈ (\mathbb{Z}/n\mathbb{Z})^\times$ implies $a^{-1} ∈ (\mathbb{Z}/n\mathbb{Z})^\times$. The question is the following: Firstly, are $a$ and $b$ referring to equivalence classes of integers modulo $n$? Secondly, what is $a^{-1}$ referring to? If $a$ is an equivalence class, I cannot see (or I am not sure) how to make sense of the set of inverses.
Well $2\cdot 3\equiv 1\; \text{mod}\ 5$, so $2$ and $3$ are multiplicative inverses $\text{ mod } 5$. How to find the inverse of a number modulo a prime number was the topic of one of my previous answers. Modulo a composite number, inverses don't always exist. See Calculating the Modular Multiplicative Inverse without all those strange looking symbols for the way to find the inverse of $322$ mod $701$. It turns out to be $455$
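The standard way to compute such inverses is the extended Euclidean algorithm; a minimal sketch in plain Python (since Python 3.8 the built-in `pow(a, -1, n)` does the same thing):

```python
# Extended Euclid, maintaining the invariant r = s*a (mod n); when r
# reaches gcd(a, n) = 1, s is the inverse of a modulo n.
def inverse_mod(a, n):
    r0, r1, s0, s1 = n, a % n, 0, 1
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    if r0 != 1:
        raise ValueError("not invertible: gcd(a, n) != 1")
    return s0 % n

print(inverse_mod(3, 5))      # 2, since 2*3 = 6 = 1 (mod 5)
print(inverse_mod(322, 701))  # 455, the example from the linked answer
```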
How to find the maxium number of edge-disjoint paths using flow network Given a graph $G=(V,E)$ and $2$ vertices $s,t \in V$, how can I find the maximum number of edge-disjoint paths from $s$ to $t$ using a flow network? $2$ paths are edge disjoint if they don't have any common edge, though they may share some common vertices. Thank you.
Hint: if each edge has a capacity of one unit, different units of stuff flowing from $s$ to $t$ must go on edge-disjoint paths.
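To make the hint concrete, here is a self-contained Ford-Fulkerson sketch on unit capacities (plain Python; directed edges are assumed, so for an undirected graph add each edge in both directions). The maximum flow value equals the number of edge-disjoint paths, which is Menger's theorem:

```python
# Max number of edge-disjoint s-t paths = max flow with all capacities 1.
from collections import defaultdict

def edge_disjoint_paths(edges, s, t):
    cap = defaultdict(int)            # residual capacities
    adj = defaultdict(set)
    for u, v in edges:
        cap[u, v] += 1                # each edge carries one unit
        adj[u].add(v)
        adj[v].add(u)                 # residual arcs can point backwards

    def augment(u, seen):             # DFS for one augmenting path
        if u == t:
            return True
        seen.add(u)
        for v in adj[u]:
            if v not in seen and cap[u, v] > 0 and augment(v, seen):
                cap[u, v] -= 1
                cap[v, u] += 1
                return True
        return False

    flow = 0
    while augment(s, set()):
        flow += 1
    return flow

# two edge-disjoint a->d paths (a-b-d and a-c-d) despite the extra edge b->c
print(edge_disjoint_paths([("a","b"), ("b","d"), ("a","c"), ("c","d"),
                           ("b","c")], "a", "d"))   # 2
```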
Finding a basis for the solution space of a system of Diophantine equations Let $m$, $n$, and $q$ be positive integers, with $m \ge n$. Let $\mathbf{A} \in \mathbb{Z}^{n \times m}_q$ be a matrix. Consider the following set: $S = \big\{ \mathbf{y} \in \mathbb{Z}^m \mid \mathbf{Ay} \equiv \mathbf{0} \pmod q \big\}$. It can be easily shown that $S$ constitutes a lattice, because it is a discrete additive subgroup of $\mathbb{R}^m$. I want to find a basis of this lattice. In other words, I want to find a matrix $\mathbf{B} \in \mathbb{Z}^{m \times m}$ such that the following holds: $S = \{\mathbf{Bx} \mid \mathbf{x} \in \mathbb{Z}^m \}$. Let me give some examples:

* $q=2$, and $\mathbf{A} = [1,1]$ $\quad \xrightarrow{\qquad}\quad$ $\mathbf{B} = \left[ \begin{array}{cc} 2&1 \\ 0&1 \end{array} \right]$
* $q=3$, and $\mathbf{A} = \left[ \begin{array}{ccc} 0&1&2 \\ 2&0&1 \end{array} \right]$ $\quad \xrightarrow{\qquad}\quad$ $\mathbf{B} = \left[ \begin{array}{ccc} 3&0&1 \\ 0&3&1 \\ 0&0&1 \end{array} \right]$
* $q=4$, and $\mathbf{A} = \left[ \begin{array}{ccc} 0&2&3 \\ 3&1&2 \end{array} \right]$ $\quad \xrightarrow{\qquad}\quad$ $\mathbf{B} = \left[ \begin{array}{ccc} 4&2&1 \\ 0&2&1 \\ 0&0&2 \end{array} \right]$

Note that in all cases, $\mathbf{AB} =\mathbf{0} \pmod q$. However, $\mathbf{B}$ is not an arbitrary solution to this equivalence, since it must span $S$. For instance, in example 1 above, the matrix $\mathbf{\hat B} = \left[ \begin{array}{cc} 2&0\\ 0&2 \end{array} \right]$ satisfies $\mathbf{A \hat B} =\mathbf{0} \pmod 2$, but generates a sub-lattice of $S$. Also note that if $\mathbf{B}$ is a basis of $S$, any other basis $\mathbf{\bar B}$ is also a basis of $S$, provided that there exists a unimodular matrix $\mathbf{U}$ for which $\mathbf{\bar B} = \mathbf{BU}$. My questions:

* How to obtain $\mathbf{B}$ from $\mathbf{A}$ and $q$?
* Is it possible that $\mathbf{B}$ is not full rank, i.e. $\text{Rank}(\mathbf{B})< m$?
* Is there any difference between the case where $q$ is prime and the case where it is composite?

Side note: As far as I understood, $S$ is the solution space of a system of linear Diophantine equations. The solution has something to do with Hermite normal forms, but I can't figure out how.
Even over a field, a fair amount goes into this. The relevant background is in Linear Algebra and Matrix Theory by Evar D. Nering, second edition. [The original answer reproduced two pages from Nering as images.] From a row-echelon form for your data matrix $A,$ one can readily find the null space as a certain number of columns by placing $1$'s in certain "free" positions and back-substituting to get the other positions. Meanwhile, once you have that information, the square matrices you display above are not quite what Buchmann, Lindner, Ruckert, and Schneider want. At the bottom of page 2 they define their HNF for $B$ as being upper triangular as well, where the first several columns have merely a single $q$ on the diagonal element and otherwise $0$'s. So you were close. Note that, over a field ($q$ prime), there is a well-defined notion of the rank of $A.$ The number of non-trivial columns of $B$ is $m-n,$ as you have illustrated above. Anyway, I think everything about the field case of your problem is in ordinary linear algebra books; for the integer side I recommend INTEGRAL MATRICES by Morris Newman.
Can you construct a field with 6 elements? Possible Duplicate: Is there anything like GF(6)? Could someone tell me if you can build a field with 6 elements.
If such a field $F$ exists, then the multiplicative group $F^\times$ is cyclic of order 5. So let $a$ be a generator for this group and write $F = \{ 0, 1, a, a^2, a^3, a^4\}$. From $a(1 + a + a^2 + a^3 + a^4) = 1 + a + a^2 + a^3 + a^4$, it immediately follows that $1 + a + a^2 + a^3 + a^4 = 0$. Let's call this (*). Since $0$ is the additive inverse of itself and $F^\times$ has odd number of elements, at least one element of $F^\times$ is its own additive inverse. Since $F$ is a field, this implies $1 = -1$. So, in fact, every element of $F^\times$ is its own additive inverse (**). Now, note that $1 + a$ is different from $0$, $1$ and $a$. So it is $a^i$, where i = 2, 3 or 4. Then, $1 + a - a^i = 1 + a + a^i = 0$. Hence, by $(*)$ one of $a^2 + a^3$, $a^2 + a^4$ and $a^3 + a^4$ must be $0$, a contradiction with (**).
Can one derive the $n$th term for the sequence $u_{n+1}=2u_{n}+1$, $u_{0}=0$, where $n$ is a non-negative integer? Derive the $n$th term for the sequence $0,1,3,7,15,31,63,127,255,\ldots$ Observation gives $t_{n}=2^n-1$, where $n$ is a non-negative integer, with $t_{0}=0$.
The following is a semi-formal variant of induction that is particularly useful for recurrences. Let $x_n=2^n-1$. It is easy to verify that $x_0=0$. It is also easy to verify that $$x_{n+1}=2x_n+1,$$ since $2^{n+1}-1=2(2^n-1)+1$. So the sequence $(x_n)$ starts in the same way as your sequence and obeys the same recurrence as your sequence. Thus the two sequences must be the same.
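A three-line machine check of the same induction (plain Python):

```python
# u_0 = 0, u_{n+1} = 2*u_n + 1 stays equal to 2^n - 1 at every step.
u = 0
for n in range(20):
    assert u == 2 ** n - 1
    u = 2 * u + 1
print("u_n = 2^n - 1 verified for n = 0..19")
```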
Is $ \ln |f| $ harmonic? I'd like to show that $\ln |f| $ is harmonic, where $f$ is holomorphic, defined on a domain of the complex plane, and never takes the value 0. My idea was to use the fact that $\ln |f(z)| = \operatorname{Log} f(z) - i\,\operatorname{Arg}(f(z)) $, but $\operatorname{Log}$ is only holomorphic on some part of the complex plane and $\operatorname{Arg}$ is not holomorphic at all. Any help is welcome!
This is a local result; you need to show that given a $z_0$ with $f(z_0) \neq 0$ there is a neighborhood of $z_0$ on which $\ln|f(z)|$ is harmonic. Fix $z_0$ with $f(z_0) \neq 0$. Let $\log(z)$ denote an analytic branch of the logarithm defined on a neighborhood of $f(z_0)$. Then the real part of $\log(z)$ is $\ln|z|$; any two branches of the logarithm differ by an integer multiple of $2\pi i$. The function $\log(f(z))$, being the composition of analytic functions, is analytic on a neighborhood of $z_0$. The real part of this function is $\ln|f(z)|$, which is therefore harmonic on a neighborhood of $z_0$, which is what you need to show.
Find a side of a triangle given other two sides and an angle I have a really simple-looking question, but I have no clue how I can go about solving it? The question is Find the exact value of $x$ in the following diagram: Sorry for the silly/easy question, but I'm quite stuck! To use the cosine or sine rule, I'd need the angle opposite $x$, but I can't find that, cause I don't have anything else to help it along. Thank You!
Use the cosine rule with respect to the $60^\circ$ angle: if the two sides enclosing that angle are $a$ and $b$ and the side opposite it is $c$, the rule reads $c^2 = a^2 + b^2 - 2ab\cos 60^\circ = a^2 + b^2 - ab$, since $\cos 60^\circ = \frac12$. Substituting the side lengths from the diagram gives an equation involving $x$ as a variable, which you then solve for $x$.
Does there exist a nicer form for $\beta(x + a, y + b) / \beta(a, b)$? I have the expression $$\displaystyle\frac{\beta(x + a, y + b)}{\beta(a, b)}$$ where $\beta(a_1,a_2) = \displaystyle\frac{\Gamma(a_1)\Gamma(a_2)}{\Gamma(a_1+a_2)}$. I have a feeling this should have a closed-form which is intuitive and makes less heavy use of the Beta function. Can someone describe to me whether this is true? Here, $x$ and $y$ are integers larger than $0.$
$$ \beta(1+a,b) = \frac{\Gamma(1+a)\Gamma(b)}{\Gamma(1+a+b)} = \frac{a\Gamma(a)\Gamma(b)}{(a+b)\Gamma(a+b)} = \frac{a}{a+b} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)} = \frac{a}{a+b} \beta(a,b). $$ If you have, for example $\beta(5+a,8+b)$, just repeat this five times for the first argument and eight for the second: $$ \frac{a(1+a)(2+a)(3+a)(4+a)\cdot b(1+b)(2+b)\cdots (7+b)}{(a+b)(1+a+b)\cdots (12+a+b)}\beta(a,b). $$ In general, in terms of rising factorials (Pochhammer symbols) $(t)_n = t(t+1)\cdots(t+n-1)$, this reads $$\frac{\beta(x+a,y+b)}{\beta(a,b)} = \frac{(a)_x\,(b)_y}{(a+b)_{x+y}}.$$
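A numerical sanity check of the reduction (assuming scipy is available; the helper `rising` is mine):

```python
# Compare beta(x+a, y+b)/beta(a, b) with the rising-factorial expression.
from scipy.special import beta

a, b, x, y = 0.7, 2.3, 5, 8

def rising(t, n):                      # (t)_n = t(t+1)...(t+n-1)
    out = 1.0
    for i in range(n):
        out *= t + i
    return out

lhs = beta(x + a, y + b) / beta(a, b)
rhs = rising(a, x) * rising(b, y) / rising(a + b, x + y)
print(lhs, rhs)                        # agree up to rounding
```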
A question about applying Arzelà-Ascoli An example of an application of Arzelà-Ascoli is that we can use it to prove that the following operator is compact: $$ T: C(X) \to C(Y), f \mapsto \int_X f(x) k(x,y)dx$$ where $f \in C(X), k \in C(X \times Y)$ and $X,Y$ are compact metric spaces. To prove that $T$ is compact we can show that $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ is bounded and equicontinuous so that by Arzelà-Ascoli we get what we want. It's clear to me that if $TB_{\|\cdot\|_\infty} (0,1)$ is bounded then $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ is bounded too. What is not clear to me is why $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ is equicontinuous if $TB_{\|\cdot\|_\infty} (0, 1)$ is. I think about it as follows: $TB_{\|\cdot\|_\infty} (0, 1)$ is dense in $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ with respect to $\|\cdot\|_\infty$ hence all $f$ in $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ are continuous (since they are uniform limits of continuous sequences). Since $Y$ is compact they are uniformly continuous. Now I don't know how to argue why I get equicontinuity from this. Thanks for your help.
Following tb's comment: Claim: If $\{f_n\}$ is equicontinuous and $f_n \to f$ uniformly then $\{f\} \cup \{f_n\}$ is equicontinuous. Proof: Let $\varepsilon > 0$. (i) Let $\delta^\prime$ be the delta that we get from equicontinuity of $\{f_n\}$ so that $d(x,y) < \delta^\prime$ implies $|f_n(x) - f_n(y)| < \varepsilon$ for all $n$. (ii) Since $f_n \to f$ uniformly, $f$ is continuous and since $X$ is compact, $f$ is uniformly continuous so there is a $\delta^{\prime \prime}$ such that $d(x,y) < \delta^{\prime \prime}$ implies $|f(x) - f(y)| < \varepsilon$. Now let $\delta = \min(\delta^\prime, \delta^{\prime \prime})$; then $d(x,y) < \delta$ implies $|g(x) - g(y)| < \varepsilon$ for all $g$ in $\{f\} \cup \{f_n\}$. Edit: What I wrote above is rubbish and doesn't lead anywhere. As pointed out in Ahriman's comment to the OP, we don't need continuity of $f$. We can bound $f$ as follows (in analogy to the proof of the uniform limit theorem): Let $\delta$ be such that $|f_n(x) - f_n(y)| < \varepsilon / 3$ for all $n$ and all $x,y$ with $d(x,y) < \delta$. Since $f$ is the uniform limit of $f_n$, for $x$ fixed, $f_n(x)$ is a Cauchy sequence converging to $f(x)$. Let $n$ be such that $\|f-f_n\|_\infty < \varepsilon / 3$. Then, for $d(x,y) < \delta$, $$ |f(x) - f(y)| \leq |f(x) - f_n(x)| + |f_n(x) - f_n(y)| + |f(y)-f_n (y)| < \varepsilon$$ Hence, choosing $\delta$ from the equicontinuity of $TB(0,1)$ with $\varepsilon/3$ in place of $\varepsilon$, we get $|f(x) - f(y)| < \varepsilon$ whenever $d(x,y) < \delta$ for every $f$ in $\overline{TB(0,1)}$, so $\overline{TB(0,1)}$ is equicontinuous, too.
Direct sum of image and kernel in an infinite-dimensional space Is it true that in an infinite-dimensional Hilbert space the formula $$\text{im}\, S\oplus \ker S =H$$ holds, where $S:H\rightarrow H$? I know it is true in finitely many dimensions, but I'm not so sure about infinitely many. Would it be true under some additional assumption, like assuming that the rank of $S$ is finite?
Consider $l_2$ and the shift operator $T:~e_n\mapsto e_{n+1}$; it's injective but not surjective, so $\ker T=0$, $\operatorname{im} T\neq l_2$, and $\ker T\oplus \operatorname{im} T\neq l_2$. If $\operatorname{im} T$ is finite-dimensional, I remember having learned in some book that the equality holds (but it's vague to me now, so take care). You can try to consider the coordinate functions on the finite-dimensional space $\operatorname{im} T$ in this case. Other kinds of additional assumptions may concern:

* projection operators
* compact operators
* Fredholm operators

Try your best.
How to find $\lim\limits_{x\to0}\frac{e^x-1-x}{x^2}$ without using l'Hopital's rule nor any series expansion? Is it possible to determine the limit $$\lim_{x\to0}\frac{e^x-1-x}{x^2}$$ without using l'Hopital's rule or any series expansion? For example, suppose you are a student who has not studied derivatives yet (and so not even Taylor's formula or Taylor series).
Define $f(x)=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n$. One possibility is to take $f(x)$ as the definition of $e^x$. Since the OP has suggested a different definition, I will show they agree. If $x=\frac{p}{q}$ is rational, then \begin{eqnarray*} f(x)&=&\lim_{n\to\infty}\left(1+\frac{p}{qn}\right)^n\\ &=&\lim_{n\to\infty}\left(1+\frac{p}{q(pn)}\right)^{pn}\\ &=&\lim_{n\to\infty}\left(\left(1+\frac{1}{qn}\right)^n\right)^p\\ &=&\lim_{n\to\infty}\left(\left(1+\frac{1}{(qn)}\right)^{(qn)}\right)^{p/q}\\ &=&\lim_{n\to\infty}\left(\left(1+\frac{1}{n}\right)^{n}\right)^{p/q}\\ &=&e^{p/q} \end{eqnarray*} Now, $f(x)$ is clearly non-decreasing, so $$ \sup_{p/q\leq x}e^{p/q}\leq f(x)\leq \inf_{p/q\geq x}e^{p/q} $$ It follows that $f(x)=e^x$. Now, we have \begin{eqnarray*} \lim_{x\to0}\frac{e^x-1-x}{x^2}&=&\lim_{x\to0}\lim_{n\to\infty}\frac{\left(1+\frac{x}{n}\right)^n-1-x}{x^2}\\ &=&\lim_{x\to0}\lim_{n\to\infty}\frac{n-1}{2n}+\sum_{k=3}^n\frac{{n\choose k}}{n^k}x^{k-2}\\ &=&\frac{1}{2}+\lim_{x\to0}x\lim_{n\to\infty}\sum_{k=3}^n\frac{{n\choose k}}{n^k}x^{k-3}\\ \end{eqnarray*} We want to show that the limit in the last line is 0. We have $\frac{{n\choose k}}{n^k}\leq\frac{1}{k!}\leq 2^{-(k-3)}$, so we have \begin{eqnarray*} \left|\lim_{x\to0}x\lim_{n\to\infty}\sum_{k=3}^n\frac{{n\choose k}}{n^k}x^{k-3}\right|&\leq&\lim_{x\to0}|x|\lim_{n\to\infty}\sum_{k=3}^n \left(\frac{|x|}{2}\right)^{k-3}\\ &=&\lim_{x\to0}|x| \frac{1}{1-\frac{|x|}{2}}\\ &=&0 \end{eqnarray*}
Evaluation of $\sum\limits_{n=0}^\infty \left(\operatorname{Si}(n)-\frac{\pi}{2}\right)$? I would like to evaluate the sum $$ \sum\limits_{n=0}^\infty \left(\operatorname{Si}(n)-\frac{\pi}{2}\right) $$ Where $\operatorname{Si}$ is the sine integral, defined as: $$\operatorname{Si}(x) := \int_0^x \frac{\sin t}{t}\, dt$$ I found that the sum could be also written as $$ -\sum\limits_{n=0}^\infty \int_n^\infty \frac{\sin t}{t}\, dt $$ Anyone have any ideas?
We want (changing the sign and starting with $n=1$) : $$\tag{1}S(0)= -\sum_{n=1}^\infty \left(\mathrm{Si}(n)-\frac{\pi}{2}\right)$$ Let's insert a 'regularization parameter' $\epsilon$ (small positive real $\epsilon$ taken at the limit $\to 0^+$ when needed) : $$\tag{2} S(\epsilon) = \sum_{n=1}^\infty \int_n^\infty \frac {\sin(x)e^{-\epsilon x}}x\,dx$$ $$= \sum_{n=1}^\infty \int_1^\infty \frac {\sin(nt)e^{-\epsilon nt}}t\,dt$$ $$= \int_1^\infty \sum_{n=1}^\infty \Im\left( \frac {e^{int-\epsilon nt}}t\right)\,dt$$ $$= \int_1^\infty \frac {\Im\left( \sum_{n=1}^\infty e^{int(1+i\epsilon )}\right)}t\,dt$$ (these transformations should be justified...) $$S(\epsilon)= \int_1^\infty \frac {\Im\left(\dfrac {-e^{it(1+i\epsilon)}}{e^{it(1+i\epsilon)}-1}\right)}t\,dt$$ But $$\Im\left(\dfrac {-e^{it(1+i\epsilon)}}{e^{it(1+i\epsilon)}-1}\right)=\Im\left(\dfrac {i\,e^{it(1+i\epsilon)/2}}2\frac{2i}{e^{it(1+i\epsilon)/2}-e^{-it(1+i\epsilon)/2}}\right)$$ Taking the limit $\epsilon \to 0^+$ we get GEdgar's expression : $$\frac {\cos(t/2)}{2\sin(t/2)}=\frac {\cot\left(\frac t2\right)}2$$ To make sense of the (multiple poles) integral obtained : $$\tag{3}S(0)=\int_1^\infty \frac{\cot\left(\frac t2\right)}{2t}\,dt$$ let's use the cot expansion applied to $z=\frac t{2\pi}$ : $$\frac 1{2t}\cot\left(\frac t2\right)=\frac 1{2\pi t}\left[\frac {2\pi}t-\sum_{k=1}^\infty\frac t{\pi\left(k^2-\left(\frac t{2\pi}\right)^2\right)}\right]$$ $$\frac 1{2t}\cot\left(\frac t2\right)=\frac 1{t^2}-\sum_{k=1}^\infty\frac 2{(2\pi k)^2-t^2}$$ Integrating from $1$ to $\infty$ the term $\frac 1{t^2}$ and all the terms of the series considered as Cauchy Principal values $\ \displaystyle P.V. \int_1^\infty \frac 2{(2\pi k)^2-t^2} dt\ $ we get : $$\tag{4}S(0)=1+\sum_{k=1}^\infty\frac {\mathrm{atanh}\bigl(\frac 1{2\pi k}\bigr)}{\pi k}$$ and the result : $$\tag{5}\boxed{\displaystyle\sum_{n=0}^\infty \left(\mathrm{Si}(n)-\frac{\pi}{2}\right)=-1-\frac{\pi}4-\sum_{k=1}^\infty\frac {\mathrm{atanh}\bigl(\frac 1{2\pi k}\bigr)}{\pi k}}$$$$\approx -1.8692011939218853347728379$$ (and I don't know why the $\frac {\pi}2$ term re-inserted from the case $n=0$ became a $\frac {\pi}4$ i.e. the awaited answer was $-S(0)-\frac{\pi}2$ !) Let's try to rewrite this result using the expansion of the $\mathrm{atanh}$ : $$\mathrm{atanh(x)}=\sum_{n=0}^\infty \frac{x^{2n+1}}{2n+1}$$ so that $$A=\sum_{k=1}^\infty\frac {\mathrm{atanh}\bigl(\frac 1{2\pi k}\bigr)}{\pi k}=\sum_{k=1}^\infty \sum_{n=0}^\infty \frac 1{\pi k(2\pi k)^{2n+1}(2n+1)}$$ $$=\sum_{n=0}^\infty \frac 2{(2n+1)(2\pi)^{2n+2}}\sum_{k=1}^\infty \frac 1{ k^{2n+2}}$$ $$=2\sum_{n=1}^\infty \frac {\zeta(2n)}{2n-1}a^{2n}\quad \text{with}\ \ a=\frac 1{2\pi} $$ $$\tag{6}\boxed{\displaystyle\sum_{n=0}^\infty \left(\mathrm{Si}(n)-\frac{\pi}{2}\right)=-1-\frac{\pi}4-2\sum_{n=1}^\infty \frac {\zeta(2n)}{(2n-1)(2\pi)^{2n}}}$$ and... we are back to the cotangent function again since it is a generating function for even $\zeta$ constants ! $$1-z\,\cot(z)=2\sum_{n=1}^\infty \zeta(2n)\left(\frac z{\pi}\right)^{2n}$$ Here we see directly that $$A=\frac 12\int_0^{\frac 12} \frac {1-z\,\cot(z)}{z^2} dz$$ with the integral result : $$\tag{7}\boxed{\displaystyle\sum_{n=0}^\infty \left(\mathrm{Si}(n)-\frac{\pi}{2}\right)=-1-\frac{\pi}4-\int_0^1 \frac 1{t^2}-\frac {\cot\left(\frac t2\right)}{2t} dt}$$ (this shows that there was probably a more direct way to make (3) converge but all journeys are interesting !)
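For readers who want to cross-check the boxed identity $(5)$ numerically, here is a minimal sketch (assuming `scipy` is available; the truncation points are arbitrary): a long partial sum of the left-hand side is compared with the rapidly convergent series on the right.

```python
import numpy as np
from scipy.special import sici

# Left side: direct partial sum of Si(n) - pi/2 (tail decays only like O(1/N))
n = np.arange(0, 200001)
si, _ = sici(n)
lhs = np.sum(si - np.pi / 2)

# Right side: -1 - pi/4 - sum_k atanh(1/(2 pi k)) / (pi k), converging like 1/k^2
k = np.arange(1, 200001)
rhs = -1 - np.pi / 4 - np.sum(np.arctanh(1 / (2 * np.pi * k)) / (np.pi * k))

print(lhs, rhs)   # both approximately -1.8692011939...
```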
Trigonometry- why we need to relate to circles I'm a trigonometry teaching assistant this semester and have a perhaps basic question about the motivation for using the circle in the study of trigonometry. I certainly understand Pythagorean Theorem and all that (I would hope so if I'm a teaching assistant!) but am looking for more clarification on why we need the circle, not so much that we can use it. I'll be more specific- to create an even angle incrementation, it seems unfeasible, for example, to use the line $y = -x+1$, divide it into even increments, and then draw segments to the origin to this lattice of points, because otherwise $\tan(\pi/3)$ would equal $(2/3)/(1/3)=2$. But why mathematically can't we do this?
Suppose that instead of parametrizing the circle by arc length $\theta$, so that $(\cos\theta,\sin\theta)$ is a typical point on the circle, one parametrizes it thus: $$ t\mapsto \left(\frac{1-t^2}{1+t^2}, \frac{2t}{1+t^2}\right)\text{ for }t\in\mathbb{R}\cup\{\infty\}. \tag{1} $$ The parameter space is the one-point compactification of the line, i.e. there's just one $\infty$, which is at both ends of the line $\mathbb{R}$, rather than two, called $\pm\infty$. So $\mathbb{R}\cup\{\infty\}$ is itself topologically a circle, and $\infty$ is mapped to $(-1,0)$. Now do some geometry: let $t$ be the $y$-coordinate of $(0,t)$, and draw a straight line through $(-1,0)$ and $(0,t)$, and look at the point where that line intersects the circle. That point of intersection is just the point to which $t$ is mapped in $(1)$. Later edit: an error appears below. I just noticed I did something dumb: the mapping between the circle and the line $y=1-x$ that associates a point on that line with a point on that circle if the line through them goes through $(0,0)$ is not equivalent to the one in $(1)$ because the center of projection is the center of the circle rather than a point on the circle. end of later edit This mapping is in a sense equivalent to the one you propose: I think you can find an affine mapping from $t$ to your $x$ on the line $y=-x+1$, such that the point on the circle to which $t$ is mapped and the point on the circle to which $x$ is mapped are related by linear-fractional transformations of the $x$- and $y$-coordinates. The substitution $$ \begin{align} (\cos\theta,\sin\theta) & = \left(\frac{1-t^2}{1+t^2}, \frac{2t}{1+t^2}\right) \\[10pt] d\theta & = \frac{2\,dt}{1+t^2} \end{align} $$ is the Weierstrass substitution, which transforms integrals of rational functions of sine and cosine into integrals of simply rational functions. I'm pretty sure the proposed mapping from the line $y=-x+1$ to the circle would accomplish the same thing.
The base of a triangular prism is $ABC$. $A'B'C'$ is an equilateral triangle with side length $a$... The base of a triangular prism is $ABC$. $A'B'C'$ is an equilateral triangle with side length $a$, and the lateral edges of the prism also have length $a$. Let $I$ be the midpoint of $AB$ and suppose $B'I \perp (ABC)$. Find the distance from $B'$ to the plane $(ACC'A')$ in terms of $a$.
Put the base in the plane $z=0$: $B=(0,0,0)$, $A=(a,0,0)$, $C=\left(\frac{a}{2},\frac{\sqrt3}{2}a,0\right)$, so that $I=\left(\frac{a}{2},0,0\right)$. Since $B'I\perp(ABC)$, $B'$ lies directly above $I$, say $B'=\left(\frac{a}{2},0,k\right)$ with $k>0$, and the lateral edge $BB'=a$ gives $$ \frac{a^{2}}{4}+k^{2}=a^{2}\implies k=\frac{\sqrt3}{2}a. $$ The prism translation vector is $v=\overrightarrow{BB'}=\left(\frac{a}{2},0,\frac{\sqrt3}{2}a\right)$, so $A'=A+v$ and $C'=C+v$. Since $BB'\parallel AA'$ and $AA'$ lies in the plane $(ACC'A')$, the distance from $B'$ to that plane equals the distance from $B$. A normal to the plane is $$ n=\overrightarrow{AC}\times v=\left(\frac34,\frac{\sqrt3}{4},-\frac{\sqrt3}{4}\right)a^{2},\qquad \|n\|=\frac{\sqrt{15}}{4}a^{2}, $$ so $$ d\bigl(B',(ACC'A')\bigr)=\frac{|n\cdot\overrightarrow{AB}|}{\|n\|}=\frac{\frac34 a^{3}}{\frac{\sqrt{15}}{4}a^{2}}=\frac{3a}{\sqrt{15}}=\frac{a\sqrt{15}}{5}. $$
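A quick numerical confirmation of this computation (an illustrative sketch with $a=1$; the coordinates are exactly those set up above):

```python
import numpy as np

a = 1.0
B  = np.array([0.0, 0.0, 0.0])
A  = np.array([a, 0.0, 0.0])
C  = np.array([a / 2, np.sqrt(3) / 2 * a, 0.0])
Bp = np.array([a / 2, 0.0, np.sqrt(3) / 2 * a])   # B', directly above I = midpoint of AB
v  = Bp - B                                        # prism translation vector
n  = np.cross(C - A, v)                            # normal to the plane (ACC'A')
d  = abs(n @ (Bp - A)) / np.linalg.norm(n)
print(d, a * np.sqrt(15) / 5)                      # both ≈ 0.7745966692
```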
Application of Radon Nikodym Theorem on Absolutely Continuous Measures I have the following problem: Show $\beta \ll \eta$ if and only if for every $\epsilon > 0 $ there exists a $\delta>0$ such that $\eta(E)<\delta$ implies $\beta(E)<\epsilon$. For the forward direction I had a proof, but it relied on the use of the false statement that "$h$ integrable implies that $h$ is bounded except on a set of measure zero". I had no problem with the backward direction.
Assume that $\beta=h\eta$ with $h\geqslant0$ integrable with respect to $\eta$, in particular $\beta$ is a finite measure. Let $\varepsilon\gt0$. There exists some finite $t_\varepsilon$ such that $\beta(B_\varepsilon)=\int_{B_\varepsilon} h\,\mathrm d\eta\leqslant\varepsilon$ where $B_\varepsilon=[h\geqslant t_\varepsilon]$. Note that, for every measurable $A$, $A\subset B_\varepsilon\cup(A\setminus B_\varepsilon)$, hence $\beta(A)\leqslant\beta(B_\varepsilon)+\beta(A\cap[h\leqslant t_\varepsilon])\leqslant\varepsilon+t_\varepsilon\eta(A)$. Let $\delta=\varepsilon/t_\varepsilon$. One sees that, for every measurable $A$, if $\eta(A)\leqslant\delta$, then $\beta(A)\leqslant2\varepsilon$, QED.
using central limit theorem I recently got a tute question which I don't know how to proceed with, and I believe that the tutor won't provide a solution... The question is Pick a real number randomly (according to the uniform measure) in the interval $[0, 2]$. Do this one million times and let $S$ be the sum of all the numbers. What, approximately, is the probability that a) $S\ge1,$ b) $S\ge0.001,$ c) $S\ge0$? Express as a definite integral of the function $e^\frac{-x^2}{2}$. Can anyone show me how to do it? It is in fact from a Fourier analysis course, but I guess I need some basic results from statistics which I am not familiar with at all.
Let's call $S_n$ the sum of the first $n$ terms. Then for $0 \le x \le 1$ it can be shown by induction that $\Pr(S_n \le x) = \dfrac{x^n}{2^n \; n!}$ So the exact answers are a) $1 - \dfrac{1}{2^{1000000} \times 1000000!}$ b) $1 - \dfrac{1}{2000^{1000000} \times 1000000!}$ c) $1$ The first two are extremely close to 1; the third is 1. The central limit theorem will not produce helpful approximations here, so you may have misquoted the question.
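The small-deviation formula $\Pr(S_n \le x) = x^n/(2^n\,n!)$ is easy to sanity-check by simulation (an illustrative sketch; the choices $n=3$, $x=1$ are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, x, trials = 3, 1.0, 10**6
s = rng.uniform(0, 2, size=(trials, n)).sum(axis=1)   # sums of n uniforms on [0, 2]
print((s <= x).mean())                                # empirical, ~ 0.0208
print(x**n / (2**n * math.factorial(n)))              # exact: 1/48 ≈ 0.020833
```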
Show that for this function the stated is true. For the function $$G(w) = \frac{\sqrt2}{2}-\frac{\sqrt2}{2}e^{iw},$$ show that $$G(w) = -\sqrt2ie^{iw/2} \sin(w/2).$$ Hey everyone, I'm very new to this kind of maths and would really appreciate any help. Hopefully I can get an idea from this and apply it to other similar questions. Thank you.
Use the definition for the complex sine: $$ \sin(z)=\frac{ e^{iz}-e^{-iz} } {2i} $$ Thus, $$-\sqrt{2}ie^{i\frac{w}{2}}\sin\frac{w}{2} =-\sqrt{2}ie^{i\frac{w}{2}}\left(\frac{1}{2i}\left(e^{i\frac{w}{2}} - e^{-i\frac{w}{2}}\right)\right) $$ Now simplify to get your result: distributing gives $-\frac{\sqrt2}{2}\left(e^{iw}-1\right)=\frac{\sqrt2}{2}-\frac{\sqrt2}{2}e^{iw}=G(w)$.
Safe use of generalized inverses Suppose I'm given a linear system $$Ax=b,$$ with unknown $x\in\mathbb{R}^n$, and some symmetric $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^n$. Furthermore, it is known that $A$ is not a full-rank matrix, and that its rank is $n-1$; therefore, $A$ is not invertible. However, to compute the "solution" $x$, one may use $x=A^+b$, where $A^+$ is a generalized inverse of $A$, i.e., the Moore-Penrose inverse. What are the characteristics of such a solution? More precisely, under which conditions will $x=A^+b$ give the exact solution to the system (supposing the exact solution exists)? Could one state that in the above case, with the additional note that $b$ is orthogonal to the null space of $A$, the generalized inverse will yield the exact solution to the system?
Let $\tilde x = A^+b$. Then obviously $A\tilde x = AA^+b$. But since $AA^+$ is an orthogonal projector, and specifically $I-AA^+$ is the projector to the null space of the Hermitian transpose of $A$, $\tilde x$ is a solution iff $b$ is orthogonal to the null space of $AA^+$, that is, orthogonal to the null space of the Hermitian transpose of $A$.
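To make this concrete, here is a minimal numerical sketch (the rank-one symmetric matrix below is an arbitrary example): $x=A^+b$ solves $Ax=b$ exactly when $b$ lies in the range of $A$, and is only a least-squares solution otherwise.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0]])   # symmetric, rank 1
b = A @ np.array([2.0, 3.0])             # b is in the range of A by construction
x = np.linalg.pinv(A) @ b
print(np.allclose(A @ x, b))             # True: exact solution

b_bad = np.array([1.0, -1.0])            # b_bad spans the null space of A
x_bad = np.linalg.pinv(A) @ b_bad
print(np.allclose(A @ x_bad, b_bad))     # False: no exact solution exists
```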
Why is the last digit of $n^5$ equal to the last digit of $n$? I was wondering why the last digit of $n^5$ is that of $n$? What's the proof and logic behind the statement? I have no idea where to start. Can someone please provide a simple proof or some general ideas about how I can figure out the proof myself? Thanks.
If $\gcd(a, n) = 1$ then by Euler's theorem, $$a^{\varphi(n)} \equiv 1 \pmod{n}$$ From the tables and as @SeanEberhard stated, $$ \varphi(10) = \varphi(5\cdot2) = 10\left( 1 - \frac{1}{5} \right) \cdot \left(1 - \frac{1}{2} \right)$$ $$= 10\left(\frac{4}{5} \right) \cdot \left(\frac{1}{2} \right) = 4$$ Let $n=10$ and thus, $$a^{\varphi(10)} \equiv 1 \pmod{10} \implies a^{4} \equiv 1 \pmod{10}$$ Multiply both sides by $a$, $$a^{5} \equiv a \pmod{10}$$ (For $a$ not coprime to $10$, the congruence $a^5\equiv a\pmod{10}$ still holds: check it mod $2$ and mod $5$ separately, using the form of Fermat's little theorem $a^p\equiv a\pmod p$ that is valid for all $a$.)
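A one-line exhaustive check over the ten residues mod $10$ (trivial, but it covers the non-coprime cases too):

```python
print(all(pow(a, 5, 10) == a % 10 for a in range(10)))   # True
```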
divisibility for numbers like 13, 17 and 19 - Compartmentalization method For denominators like 13 and 17 I often see my professor use a method to test whether a given number is divisible or not. The method is not the following: Ex for 17: subtract 5 times the last digit from the original number; the resultant number should be divisible by 17, etc... The method is similar to the divisibility test for 11. He calls it the compartmentalization method. Here it goes. Rule for 17: take 8 digits at a time (sum of the 8-digit groups at odd places minus sum of the 8-digit groups at even places). For example: $9876543298765432\ldots$ ($80$ digits) - test whether this is divisible by 17 or not. There will be an equal number of groups (of 8 digits taken at a time) at odd and even places, therefore the given number is divisible by 17 - that is his explanation. The number 8 above differs based on the denominator he is considering. I am not able to understand either the method or the logic. Kindly clarify. Also, for other numbers like $13$ and $19$, what is the number of digits I should take at a time? In case my question is not clear, please let me know.
Your professor is using the fact that $100000001=10^8+1$ is divisible by $17$. Given for example your $80$-digit number, you can subtract $98765432\cdot 100000001=9876543298765432$, which will leave zeros in the last $16$ places. Slash the zeros, and repeat. After $5$ times you are left with the number $0$, which is divisible by $17$, and hence your $80$-digit number must also be divisible by $17$. When checking for divisibility by $17$, you can also subtract multiples of $102=6\cdot 17$ in the same way. For divisibility by $7$, $11$, or $13$, you can subtract any multiple of the number $1001=7\cdot 11\cdot 13$ without affecting divisibility by these three numbers. For example, $6017-6\cdot 1001=11$, so $6017$ is divisible by $11$, but not by $7$ or $13$. For divisibility by $19$, you can use the number $1000000001=10^9+1=7\cdot 11\cdot 13\cdot 19\cdot 52579$. By subtracting multiples of this number, you will be left with a number of at most $9$ digits, which you can test for divisibility by $19$ by performing the division.
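A short script makes the mechanism concrete (a sketch; the 80-digit number is "98765432" repeated ten times, as in the question):

```python
n = int("98765432" * 10)        # the 80-digit number from the question
print((10**8 + 1) % 17 == 0)    # True: 100000001 is divisible by 17
print(n % 17 == 0)              # True: the alternating 8-digit block sum is 0
```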
Solving a literal equation containing fractions. I know this might seem very simple, but I can't seem to isolate x. $$\frac{1}{x} = \frac{1}{a} + \frac{1}{b} $$ Please show me the steps to solving it.
You should combine $\frac1a$ and $\frac1b$ into a single fraction using a common denominator as usual: $$\begin{eqnarray} \frac1x& = &\frac1a + \frac1b \\ &=&{b\over ab} + {a\over ab} \\ &=& b+a\over ab \end{eqnarray}$$ So we get: $$x = {ab\over{b+a}}.$$ Okay?
Why Does Finitely Generated Mean A Different Thing For Algebras? I've always wondered why finitely generated modules are of the form $$M=Ra_1+\dots+Ra_n$$ while finitely generated algebras have the form $$R=k[a_1,\dots, a_n]$$ and finite algebras have the form $$R=ka_1+\dots +ka_n$$ It seems to me that this is a flagrant abuse of nomenclature, and it has certainly confused me in the past. Is there some historical reason for this terminology? Any references to the genesis of these terms would be greatly appreciated!
The terminology is actually very appropriate and precise. Consider that "A is a finitely generated X" means "there exists a finite set G such that A is the smallest X containing G". Looking at your examples, suppose $M$ is a finitely generated module, generated by $a_1,\dots,a_n$. Then $M$ contains $a_1,\dots,a_n$. Since it is a module, it must contain all elements of the form $Ra_i$ and their sums, so it must contain the module $Ra_1+\dots+Ra_n$. However, since this latter object is in fact a module, $M$ need not contain anything else and is in fact equal to this module. If $R$ is a finitely generated algebra, we can go through the same procedure as before. However, since algebras have an additional operation (multiplication), we must allow not only sums of elements of the form $ka_n$ but also their products. This gives us that $R$ must contain all polynomial expressions in the elements $a_1,\dots,a_n$, i.e. it must contain the algebra $k[a_1,\dots,a_n]$. Again, since this latter object is in fact an algebra, $R$ need not contain anything else and is equal to this algebra. A finite algebra seems to be a name for an algebra which is finitely generated as a module. Your example is then consistent with what I wrote above. I do admit that the name seems somewhat misleading.
relation between integral and summation What is the relation between a summation and an integral? This question is actually based on a previous question of mine here where I got two answers (one is based on summation notation and the other is based on integral notation) and I do not know yet which one to accept. So I would like to understand the connection between the two.
The Riemann or Lebesgue integral is in a sense a continuous sum. The symbol $\int$ is adapted from a letter looking like a somewhat elongated 's', from the word summa. In the definitions of the Riemann and the Lebesgue integrals the ordinary finite summation $\sum _{k=1}^n$ is used, but the relation between the two is deeper than a mere 'one is used in the definition of the other' kind of relation. The modern approach to viewing integrals is measure theory. Without getting into too many technical details, one defines a measure space as an abstraction of the notion of measuring length or volume. Then real-valued functions on a measure space may be integrated to give a single real number. For a particular measure space one gets the Lebesgue integral which itself, in a suitable way, subsumes the Riemann integral. On the other hand, given any set $X$ with $n$ elements there is a measure space structure on it such that for any function $f:X\to \mathbb {R}$ the integral of $f$ with respect to that measure is precisely the sum of the values that $f$ attains. In that sense general measure theory subsumes finite sums (of real numbers). More interestingly, given any countable set $X$ there is a measure on it such that the integrals of real-valued functions on $X$ correspond to the infinite sums of their values. (Both of these remarks are trivial.) Thus measure theory subsumes infinite sums. From that point of view, a summation corresponds to an integral on a discrete measure space and the Lebesgue or Riemann integral corresponds to an integral on a continuous measure space.
A variable for the first 8 integers? I wish to use algebra to (is the term truncate?) the set of positive integers to the first 8 and call it for example 'n'. In order to define $r_n = 2n$ or similar. This means: $$r_0 = 0$$ $$r_1 = 2$$ $$\ldots$$ $$r_7 = 14$$ However there would not be an $r_8$. edit: Changed "undefined" to "would not be", sorry about this.
You've tagged this abstract-algebra and group-theory but it's not entirely clear what you mean. However, by these tags, perhaps you are referring to $\left(\Bbb Z_8, +\right)$? In such a case, you have $r_1+r_7 = 1+_8 7 = 8 \mod 8 = 0$. So there is no $r_8$ per se; however, the re-definition of the symbols $r_0, r_1, \ldots$ is superfluous.
Can this type of series retain the same value? Let $H$ be a Hilbert space and $\sum_k x_k$ a countably infinite sum in it. Let's say we partition the sequence $(x_k)_k$ into a sequence of blocks of finite length and change the order of summation only in those blocks, like this (for brevity, illustrated only for the first two blocks): $$(x_1,\ldots,x_k,x_{k+1},\ldots,x_{k+l},\ldots )$$ becomes $$(x_{\pi(1)},\ldots,x_{\pi(k)},x_{\gamma(k+1)},\ldots,x_{\gamma(k+l)},\ldots ),$$ where $\pi$ and $\gamma$ are permutations. If we denote the elements of the second sequence by $x'$, does anyone know what will happen to the series $\sum _k x'_k$ in this case? Can it stay the same? Does staying the same require additional assumptions?
If both series converge, it doesn't change anything. This can be easily seen by considering partial sums. Put $k_j$ as the cumulative length of the first $j$ blocks. Then clearly $\sum_{j=1}^{k_n} x_j=\sum_{j=1}^{k_n} x_j'$ for any $n$, so assuming both series converge, we have that $$\sum_j x_j=\lim_{n\to \infty}\sum_{j=1}^{k_n} x_j=\lim_{n\to \infty}\sum_{j=1}^{k_n} x'_j=\sum_j x_j'$$ On the other hand, we can change a (conditionally) convergent series into a divergent one using this method – if we take the alternating harmonic sequence $x_n=(-1)^n/n$, sorting large enough blocks so that all the positive elements come before all the negative elements, we can get infinitely many arbitrarily large “jumps”, preventing convergence. On another note, if the length of permuted blocks is bounded, then I think this kind of thing could not happen (again, by considering partial sums).
Graph $f(x)=e^x$ Graph $f(x)=e^x$ I have no idea how to graph this. I looked on wolframalpha and it is just a curve. But how would I come up with this curve without the use of other resources (i.e. on a test).
You mention in your question that "I have no idea how to graph this" and "how would I come up with this curve without the use of other resources (i.e. on a test)". I know that you have already accepted one answer, but I thought that I would add a bit. In my opinion, the best thing is to remember (memorize if you will) certain types of functions and their graphs. So you probably already know how to graph a linear function like $f(x) = 5x -4$. Before a test, you would ask your teacher which types of functions you would be required to be able to sketch by hand without any graphing calculator. Then you could go study these different types. Now, if you don't have a graphing calculator, but you have a simple calculator, then you could also try to sketch the graph of a function by just "plotting points". So you draw a coordinate system with an $x$-axis and a $y$-axis, and for various values of $x$ you calculate corresponding values of $y$, and then you plot those points. In the end you connect all the dots with lines/curves (so if the dots are all on a straight line, then you can just draw a straight line, but if things seem to curve one way or the other, then you try to "curve with the points"). No matter what, I would recommend trying to sit down with a piece of paper. Draw a coordinate system (maybe using a ruler). And try to sketch graphs of various functions. This will then familiarize you with what the graphs of certain functions look like.
definition of the exterior derivative I have a question concerning the definition of $d^*$. It is usually defined to be the (formal) adjoint of $d$. What is the meaning of "formally"? Is it not just the adjoint of $d$? thanks
I will briefly answer two questions here. First, what does the phrase "formal adjoint" mean in this context? Second, how is the adjoint $d^*$ actually defined? Definitions: $\Omega^k(M)$ ($M$ a smooth oriented $n$-manifold with a Riemannian metric) is a pre-Hilbert space with inner product $$\langle \omega,\eta\rangle_{L^2} = \int_M \omega\wedge *\eta.$$ (Here, $*$ is the Hodge operator.) For the first question, the "formal adjoint" of the operator $d$ is the operator $d^*$ (if it exists, from some function space to some other function space) that has the property $$\langle d\omega,\eta\rangle_{L^2} = \langle \omega,d^*\eta\rangle_{L^2}.$$ For the second question, the operator $d^*$ is actually defined as a map $\Omega^{k+1}(M)\to\Omega^k(M)$ by setting $$d^* = \pm\, {*}\,d\,{*},$$ with a sign that depends on the degree and on conventions. Adjointness is proven by using integration by parts and Stokes' theorem.
The primes $p$ of the form $p = -(4a^3 + 27b^2)$ The current question is motivated by this question. It is known that the number of imaginary quadratic fields of class number 3 is finite. Assuming the answer to this question is affirmative, I came up with the following question. Let $f(X) = X^3 + aX + b$ be an irreducible polynomial in $\mathbb{Z}[X]$. Let $p = -(4a^3 + 27b^2)$ be the discriminant of $f(X)$. We consider the following conditions. (1) $p = -(4a^3 + 27b^2)$ is a prime number. (2) The class number of $\mathbb{Q}(\sqrt{p})$ is 3. (3) $f(X) \equiv (X - s)^2(X - t)$ (mod $p$), where $s$ and $t$ are distinct rational integers mod $p$. Question Are there infinitely many primes $p$ satisfying (1), (2), (3)? If this is too difficult, is there any such $p$? I hope that someone would search for such primes using a computer.
For (229, -4,-1) the polynomial factors as $(x-200)^2(x-58)$ For (1373, -8,-5) the polynomial factors as $(x-860)(x-943)^2$ For (2713, -13,-15) the polynomial factors as $(x-520)^2(x-1673)$
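These factorizations can be verified by machine; here is a sketch using `sympy` (it also re-checks that each $p$ equals the discriminant $-(4a^3+27b^2)$; note that sympy prints the linear factors with coefficients reduced to the symmetric range mod $p$):

```python
from sympy import symbols, factor

x = symbols('x')
for p, a, b in [(229, -4, -1), (1373, -8, -5), (2713, -13, -15)]:
    assert p == -(4 * a**3 + 27 * b**2)
    print(p, factor(x**3 + a * x + b, modulus=p))   # a repeated linear factor appears
```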
Predicting the next vector given a known sequence I have a sequence of unit vectors $\vec{v}_0,\vec{v}_1,\ldots,\vec{v}_k,\ldots$ with the following property: $\lim_{i\rightarrow\infty}\vec{v}_{i} = \vec{\alpha}$, i.e. the sequence converges to a finite unit vector. As the sequence is generated by a poorly known process, I am interested in modelling $\vec{v}_k$ given previous generated vectors $\vec{v}_0,\vec{v}_1,\ldots,\vec{v}_{k-1}$. What are the available mathematical tools which allows me to discover a vector function $\vec{f}$ such that $\vec{v}_k\approx \vec{f}(\vec{v}_{k-1},\vec{v}_{k-2},\ldots,\vec{v}_{k-n})$, for a given $n$, in the $L_p$-norm sense? EDIT: I am looking along the lines of the Newton's Forward Difference Formula, which predicts interpolated values between tabulated points, except for two differences for my problem: 1) Newton's Forward Difference is applicable for a scalar sequence, and 2) I am doing extrapolation at one end of the sequence, not interpolation in between given values. ADDITIONAL INFO: Plots of the individual components of an 8-tuple unit vector from a sequence of 200 (plots not reproduced here).
A lot of methods effectively work by fitting a polynomial to your data, and then using that polynomial to guess a new value. The main reason for polynomials is that they are easy to work with. Given that you know your functions have asymptotes, you may get better success by choosing a form that incorporates that fact, such as a rational function. If nothing else, you can always use a sledgehammer to derive a method -- e.g. use the method of least squares to select the coefficients of your rational functions.
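For the polynomial variant, a minimal sketch (the data and the degree are arbitrary placeholders): `numpy.polyfit` performs the least-squares fit, and the fitted polynomial is evaluated one step ahead. In the OP's setting this would be applied componentwise, followed by renormalization to unit length.

```python
import numpy as np

v = np.array([0.90, 0.95, 0.97, 0.985, 0.992])   # one component of the sequence
k = np.arange(len(v))
coeffs = np.polyfit(k, v, deg=2)                 # least-squares quadratic fit
print(np.polyval(coeffs, len(v)))                # predicted next value
```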
Rigorous proof of the Taylor expansions of sin $x$ and cos $x$ We learn trigonometric functions in high school, but their treatment is not rigorous. Then we learn that they can be defined by power series in a college. I think there is a gap between the two. I'd like to fill in the gap in the following way. Consider the upper right quarter of the unit circle $C$ = {$(x, y) \in \mathbb{R}^2$; $x^2 + y^2 = 1$, $0 \leq x \leq 1$, $y \geq 0$}. Let $\theta$ be the arc length of $C$ from $x = 0$ to $x$, where $0 \leq x \leq 1$. By the arc length formula, $\theta$ can be expressed by a definite integral of a simple algebraic function from 0 to $x$. Clearly $\sin \theta = x$ and $\cos\theta = \sqrt{1 - \sin^2 \theta}$. Then how do we prove that the Taylor expansions of $\sin\theta$ and $\cos\theta$ are the usual ones?
Since $x^2 + y^2 = 1$, $y = \sqrt{1 - x^2}$, $y' = \frac{-x}{\sqrt{1 - x^2}}$ By the arc length formula, $\theta = \int_{0}^{x} \sqrt{1 + y'^2} dx = \int_{0}^{x} \frac{1}{\sqrt{1 - x^2}} dx$ We consider this integral on the interval [$-1, 1$] instead of [$0, 1$]. Then $\theta$ is a monotone strictly increasing function of $x$ on [$-1, 1$]. Hence $\theta$ has an inverse function defined on [$\frac{-\pi}{2}, \frac{\pi}{2}$]. We denote this function also by $\sin\theta$. We redefine $\cos\theta = \sqrt{1 - \sin^2 \theta}$ on [$\frac{-\pi}{2}, \frac{\pi}{2}$]. Since $\frac{d\theta}{dx} = \frac{1}{\sqrt{1 - x^2}}$, $(\sin\theta)' = \frac{dx}{d\theta} = \sqrt{1 - x^2} = \cos\theta$. On the other hand, $(\cos\theta)' = \frac{d\sqrt{1 - x^2}}{d\theta} = \frac{d\sqrt{1 - x^2}}{dx} \frac{dx}{d\theta} = \frac{-x}{\sqrt{1 - x^2}} \sqrt{1 - x^2} = -x = -\sin\theta$ Hence $(\sin\theta)'' = (\cos\theta)' = -\sin\theta$ $(\cos\theta)'' = -(\sin\theta)' = -\cos\theta$ Hence by induction on $n$, $(\sin\theta)^{(2n)} = (-1)^n\sin\theta$ $(\sin\theta)^{(2n+1)} = (-1)^n\cos\theta$ $(\cos\theta)^{(2n)} = (-1)^n\cos\theta$ $(\cos\theta)^{(2n+1)} = (-1)^{n+1}\sin\theta$ Since $\sin 0 = 0, \cos 0 = 1$, $(\sin\theta)^{(2n)}(0) = 0$ $(\sin\theta)^{(2n+1)}(0) = (-1)^n$ $(\cos\theta)^{(2n)}(0) = (-1)^n$ $(\cos\theta)^{(2n+1)}(0) = 0$ Note that $|\sin\theta| \le 1$, $|\cos\theta| \le 1$. Hence, by Taylor's theorem, $\sin\theta = \sum_{n=0}^{\infty} (-1)^n \frac{\theta^{2n+1}}{(2n+1)!}$ $\cos\theta = \sum_{n=0}^{\infty} (-1)^n \frac{\theta^{2n}}{(2n)!}$ QED Remark: When you consider the arc length of the lemniscate instead of the circle, you will encounter $\int_{0}^{x} \frac{1}{\sqrt{1 - x^4}} dx$. You may find interesting functions like we did with $\int_{0}^{x} \frac{1}{\sqrt{1 - x^2}} dx$. This was young Gauss's approach and he found elliptic functions.
The measure of $([0,1]\cap \mathbb{Q})×([0,1]\cap\mathbb{Q})$ We know that $[0,1]\cap \mathbb{Q}$ is a dense subset of $[0,1]$ and has measure zero, but what about $([0,1]\cap \mathbb{Q})\times([0,1]\cap \mathbb{Q})$? Is it also a dense subset of $[0,1]\times[0,1]$ and has measure zero too? Besides, what about its complement? Is it dense in $[0,1]\times[0,1]$ and has measure zero?
To give a somewhat comprehensive answer: * *the set in question is countable (as a product of countable sets), so it is of measure zero (because any countable set has measure zero with respect to any continuous measure, such as Lebesgue measure). *it is also dense, because it is a product of dense sets. *it has measure zero, so its complement has full measure. *its complement has full measure with respect to Lebesgue measure, so it's dense in $[0,1]^2$
How to solve for $x$ in $x(x^3+\sin x \cos x)-\sin^2 x =0$? How do I solve for $x$ in $$x\left(x^3+\sin(x)\cos(x)\right)-\big(\sin(x)\big)^2=0$$ I hate when I find something that looks simple, that I should know how to do, but it holds me up. I could come up with an approximate answer using Taylor's, but how do I solve this? (btw, WolframAlpha tells me the answer, but I want to know how it's solved.)
Using the identity $\cos x=1-2\sin^2(x/2)$ and introduccing the function ${\rm sinc}(x):={\sin x\over x}$ we can rewrite the given function $f$ in the following way: $$f(x)=x^2\left(x^2\left(1-{1\over2}{\rm sinc}(x){\rm sinc}^2(x/2)\right)+{\rm sinc}(x)\bigl(1-{\rm sinc}(x)\bigr)\right)\ .\qquad(*)$$ Now ${\rm sinc}(x)$ is $\geq0$ on $[0,\pi]$ and of absolute value $\leq1$ throughout. By distinguishing the cases $0<x\leq\pi$ and $x>\pi$ it can be verified by inspection that $f(x)>0$ for $x>0$. Since $f$ is even it follows that $x_0=0$ is the only real zero of $f$. [One arrives at the representation $(*)$ by expanding the simple functions appearing in the given $f$ into a Taylor series around $0$ and grouping terms of the same order skillfully.]
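A crude numerical scan supports the sign claim (an illustrative check on a grid, not a proof):

```python
import numpy as np

x = np.linspace(1e-4, 20, 200000)
f = x * (x**3 + np.sin(x) * np.cos(x)) - np.sin(x)**2
print(f.min() > 0)   # True on this grid, consistent with x = 0 being the only real zero
```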
What is the order type of monotone functions, $f:\mathbb{N}\rightarrow\mathbb{N}$ modulo asymptotic equivalence? What about computable functions? I was reading the blog "who can name the bigger number" ( http://www.scottaaronson.com/writings/bignumbers.html ), and it made me curious. Let $f,g:\mathbb{N}\rightarrow\mathbb{N}$ be two monotonically increasing, strictly positive functions. We say that these two functions are asymptotically equivalent if $$\lim_{n \to \infty} \frac{f(n)}{g(n)}= \alpha\in(0,\infty)$$ We will say that $f>g$ if $$\lim_{n \to \infty} \frac{f(n)}{g(n)}=\infty$$ It is quite clear that this is a partial order. What, if anything, can be said about this order type?
If you want useful classes of "orders of growth" that are totally ordered, perhaps you should learn about things like Hardy fields. And even in this case, of course, Asaf's comment applies, and it should not resemble a well-order at all.
Equivalent definitions of linear function We say a transform is linear if $cf(x)=f(cx)$ and $f(x+y)=f(x)+f(y)$. I wonder if there is another definition. If it's relevant, I'm looking for sufficient but possibly not necessary conditions. As motivation, there are various ways of evaluating income inequality. Say the vector $w_1,\dots,w_n$ is the income of persons $1,\dots,n$. We might have some $f(w)$ telling us how "good" the income distribution is. It's reasonable to claim that $cf(w)=f(cw)$ but it's not obvious that $f(x+y)=f(x)+f(y)$. Nonetheless, there are some interesting results if $f$ is linear. So I wonder if we could find an alternative motivation for wanting $f$ to be linear.
Assume that we are working over the reals. Then the condition $f(x+y)=f(x)+f(y)$, together with continuity of $f$ (or even just measurability of $f$) is enough. This can be useful, since on occasion $f(x+y)=f(x)+f(y)$ is easy to verify, and $f(cx)=cf(x)$ is not.
Ellipse with non-orthogonal minor and major axes? If there's an ellipse with non-orthogonal minor and major axes, what do we call it? For example, is the following curve an ellipse? $x = \cos(\theta)$, $y = \sin(\theta) + \cos(\theta)$, i.e. the curve $C=(1,1)\cos(\theta) + (0,1)\sin(\theta)$. The "axes" directions here, $(1,1)$ and $(0,1)$, are not orthogonal. Is it still an ellipse? Suppose I have a point $P(p_1,p_2)$; can I find a point $Q$ on this curve that has the shortest euclidean distance from $P$?
Hint: From $y=\sin\theta+\cos\theta$, we get $y-x=\sin\theta$, and therefore $(y-x)^2=\sin^2\theta=1-x^2$. After simplifying and completing the square, can you recognize the curve? The major and minor axes do turn out to be orthogonal.
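To confirm the hint symbolically (a quick sketch using `sympy`), the parametrization satisfies the implicit equation, and the conic discriminant is negative, so the curve is an ellipse:

```python
from sympy import symbols, cos, sin, simplify

t = symbols('t', real=True)
x, y = cos(t), sin(t) + cos(t)
print(simplify((y - x)**2 + x**2 - 1))   # 0: the curve satisfies 2x^2 - 2xy + y^2 = 1
# For Ax^2 + Bxy + Cy^2 = 1 with A=2, B=-2, C=1: B^2 - 4AC = -4 < 0, an ellipse
```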
What are affine spaces for? I'm studying affine spaces but I can't understand what they are for. Could you explain them to me? Why are they important, and when are they used? Thanks a lot.
The first spaces we are introduced to in our lives are euclidean spaces, which are the classical starting point of geometry. In these spaces there is a natural movement between points, namely translations: you can move in a natural way from a point $p$ to a point $q$ through the vector $\overrightarrow{pq}$ that joins them. In this way, vectors represent translations in the euclidean space. Therefore vector spaces are the natural generalization of translations of spaces, but which spaces? Here is where affine spaces are important, because they recover the concept of the points which the "arrows" (vectors) of a vector space move. In conclusion, an affine space is a mathematical model of a space of points whose main feature is that there is a set of preferred movements (called translations) that permit one to go from any point to any other point in a unique way, and these movements are modelled through the concept of a vector space. Or in other words, affine spaces represent the points that the arrows of vector spaces move.
In a polynomial of $n$ degree, what numbers can fill the $n$? Until now, I've seen that the $n$ could be filled with elements of the set $\mathbb{N}_0$ and with $-\infty$, but I still haven't seen mentions of other sets of numbers. As I thought that having $0$ and $-\infty$ as degrees of a polynomial was unusual, I started to think about whether it would be possible for other numbers to also fill the gap.
The degree of a polynomial can only take the values that you've specified. For that, let's revisit the definition of a polynomial. Personally, I was taught that a polynomial (in one variable) is an algebraic expression which can be written in the form $$a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$$ where $n$ is a non-negative integer. From this definition, as the degree is the same as $n$, the degree is also a non-negative integer (I'll not go into the degree of the zero polynomial, which is best left undefined). On the other hand, Wikipedia says that A polynomial is an expression of finite length constructed from variables (also called indeterminates) and constants, using only the operations of addition, subtraction, multiplication, and non-negative integer exponents. Clearly, there is no non-integer exponentiation, nor any division by a variable, so again it is clear that the degree is a non-negative integer.
How do you take the product of Bernoulli distribution? I have a prior distribution, $$p(\boldsymbol\theta|\pi)=\prod\limits_{i=1}^K p(\theta_i|\pi).$$ $\theta_i$ can equal $0$ or $1$, so I am using a Bernoulli distribtion so that $$p(\boldsymbol\theta|\pi)=\prod\limits_{i=1}^K \pi^{\theta_i}(1-\pi)^{1-\theta_i}.$$ I then want to add this distribution onto my marginal likelihood to make up my posterior. Should I solve it as $$p(\boldsymbol\theta|\pi)=\pi^{K\theta_i}(1-\pi)^{K(1-\theta_i)} \, \, ?$$ But then is the product of bernoulli distributions the binomial distribution? Then should my answer be $$p(\boldsymbol\theta|\pi)=\left(\begin{array}c K\\ t \end{array}\right)\pi^{t}(1-\pi)^{K-t)} $$ where $K$ is the maximum number of $\theta_i$'s allowed, and $t=\{0, 1\}$ , (i.e. $t=0\, \, \text{or}\, \, 1$)? What form do I add this prior to my likelihood?
The equation you have can be represented as follows: $$p(\boldsymbol x|\theta)=\prod\limits_{i=1}^K \theta^{x_i}(1-\theta)^{1-x_i}=\theta^{\sum_i x_i}(1-\theta)^{K-\sum_i x_i}$$ We have the Bayes rule $$p(\theta|x)=\frac{p(x|\theta)p(\theta)}{p(x)}$$ as $\theta$ is known, we have the joint density $p(x,\theta)=p(\theta,x)$ which specifies all the information we need.
Prove that if $n$ is a positive integer then $2^{3n}-1$ is divisible by $7$. I encountered a problem in a book that was designed for IMO trainees. The problem had something to do with divisibility. Prove that if $n$ is a positive integer then $2^{3n}-1$ is divisible by $7$. Can somebody give me a hint on this problem. I know that it can be done via the principle of mathematical induction, but I am looking for some other way (that is if there is some other way)
Hint: Note that $8 \equiv 1~~~(\text{mod } 7)$. So, $$2^{3n}=(2^3)^n=8^n\equiv \ldots~~~(\text{mod } 7)=\ldots~~~(\text{mod } 7)$$ Try to fill in the gaps! Solution: Note that $8 \equiv 1~~(\text{mod } 7)$. This means that $8$ leaves a remainder of $1$ when divided by $7$. Now assuming that you are aware of some basic modular arithmetic, $$2^{3n}=(2^3)^n=8^n\equiv 1^n ~~(\text{mod } 7)=1~~(\text{mod } 7)$$ Now if $2^{3n}\equiv 1~~(\text{mod } 7)$ then it follows that, $$2^{3n}-1=8^n-1\equiv (1-1)~~(\text{mod } 7)~\equiv 0~~(\text{mod } 7)\\ \implies 2^{3n}-1\equiv 0~~(\text{mod } 7)$$ Or in other words, $2^{3n}-1$ leaves no remainder when divided by $7$ (i.e. $2^{3n}-1$ is divisible by $7$). As desired
Convergence of a sequence of non-negative real numbers $x_n$ given that $x_{n+1} \leq x_n + 1/n^2$. Let $x_n$ be a sequence of the type described above. It is not monotonic in general, so boundedness won't help. So, it seems as if I should show it's Cauchy. A wrong way to do this would be as follows (I'm on a mobile device, so I can't type absolute values. Bear with me.) $$\left|x_{n+1} - x_n\right| \leq \frac{1}{n^2}.$$ So, we have $$ \left|x_m -x_n\right| \leq \sum_{k=n}^{m} \frac{1}{k^2}$$ which is itself Cauchy, etc., etc. But, of course, I can't just use absolute values like that. One thing I have shown is that $x_n$ is bounded. Inductively, one may show $$ \limsup_{m \to \infty} x_m \leq x_n + \sum_{k=n}^{\infty} \frac{1}{k^2},$$ although I'm not sure this helps or matters at all. Thanks in advance. Disclaimer I've noticed that asking a large number of questions in quick succession on this site is often frowned upon, especially when little or no effort has been given by the asker. However, I am preparing for a large test in a few days and will be sifting through dozens of problems. Therefore, I may post a couple a day. I will only do so when I have made some initial, meaningful progress. Thanks.
Note that $$ \lim_{n\to\infty}\sum_{k=n}^\infty\frac1{k^2}=0 $$ and for $m\gt n$, $$ a_m\le a_n+\sum_{k=n}^\infty\frac1{k^2} $$ First, take the $\limsup\limits_{m\to\infty}$: $$ \limsup_{m\to\infty}a_m\le a_n+\sum_{k=n}^\infty\frac1{k^2} $$ which must be non-negative. Then take the $\liminf\limits_{n\to\infty}$: $$ \limsup_{m\to\infty}a_m\le\liminf_{n\to\infty}a_n $$ Thus, the limit exists.
A puzzle with powers and tetration mod n A friend recently asked me if I could solve these three problems: (a) Prove that the sequence $ 1^1, 2^2, 3^3, \dots \pmod{3}$ in other words $\{n^n \pmod{3} \}$ is periodic, and find the length of the period. (b) Prove that the sequence $1, 2^2, 3^{3^3},\dots \pmod{4}$ i.e. $\{\ ^nn \pmod{4}\}$ is periodic and find the length of the period. (c) Prove that the sequence $1, 2^2, 3^{3^3},\dots \pmod{5}$ i.e. $\{\ ^nn \pmod{5}\}$ is periodic and find the length of the period. The first two were not terribly difficult (but might be useful exercises in Fermat's little theorm), but the third one is causing me problems, since my methods are leading to rather a lot of individual cases, and I would be interested to see if anyone here can find a neater way to solve it. (In (c), I have evaluated the first 15 terms, and not found a period yet - unless I have made a mistake.)
For $k\ge 1$ we have $$\begin{align*} {^{k+2}n}\bmod5&=(n\bmod5)^{^{k+1}n}\bmod5=(n\bmod5)^{^{k+1}n\bmod4}\bmod5\\ {^{k+1}n}\bmod4&=(n\bmod4)^{^kn}=\begin{cases} 0,&\text{if }n\bmod2=0\\ n\bmod4,&\text{otherwise}\;, \end{cases} \end{align*}$$ so $${^{k+2}n}\bmod5=\begin{cases} (n\bmod5)^0=1,&\text{if }n\bmod2=0\\ (n\bmod5)^{n\bmod4},&\text{otherwise}\;. \end{cases}$$ In particular, $${^nn}\bmod5=\begin{cases} (n\bmod5)^0,&\text{if }n\bmod2=0\\ (n\bmod5)^{n\bmod4},&\text{otherwise}\;. \end{cases}$$ for $n\ge3$, where $0^0\bmod5$ is understood to be $0$. Clearly the period starting at $n=3$ divides $20$; actual calculation shows that it is exactly $20$, and that we cannot include the first two terms: $$\begin{array}{r|c|c} n:&&&&&&&&&1&2\\ {^nn}\bmod5:&&&&&&&&&1&4\\ \hline n:&3&4&5&6&7&8&9&10&11&12\\ {^nn}\bmod5:&2&1&0&1&3&1&4&0&1&1\\ \hline n:&13&14&15&16&17&18&19&20&21&22&\\ {^nn}\bmod5:&3&1&0&1&2&1&4&0&1&1 \end{array}$$ (Added: And no, I’d not seen that Qiaochu had already answered the question until after I posted.)
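For anyone who wants to replay the calculation, here is a small sketch. The `+ t` shift is the usual trick that keeps the generalized Euler reduction $a^e\equiv a^{(e\bmod\varphi(m))+\varphi(m)}\pmod m$ valid even when $\gcd(a,m)>1$ (it requires the true exponent to be reasonably large, which holds for these towers):

```python
from sympy import totient

def tower_mod(n, h, m):
    """Compute the power tower n^^h (height h) modulo m, for n >= 2."""
    if m == 1:
        return 0
    if h == 1:
        return n % m
    t = int(totient(m))
    e = tower_mod(n, h - 1, t) + t   # residue of the exponent mod t, kept large
    return pow(n, e, m)

print([tower_mod(n, n, 5) for n in range(3, 23)])
# [2, 1, 0, 1, 3, 1, 4, 0, 1, 1, 3, 1, 0, 1, 2, 1, 4, 0, 1, 1]
```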
Theorems with an extraordinary exception or a small number of sporadic exceptions The Whitney graph isomorphism theorem gives an example of an extraordinary exception: a very general statement holds except for one very specific case. Another example is the classification theorem for finite simple groups: a very general statement holds except for very few (26) sporadic cases. I am looking for more of this kind of theorems-with-not-so-many-sporadic-exceptions (added:) where the exceptions don't come in a row and/or in the beginning - but are scattered truly sporadically. (A late thanks to Asaf!)
How about the Big Picard theorem? http://en.wikipedia.org/wiki/Picard_theorem If a function $f$ is analytic on a punctured neighbourhood of $z_0\in \mathbb{C}$ and has an essential singularity at $z_0$, then in any open set containing $z_0$, $f(z)$ takes on all possible complex values, with at most one possible exception, infinitely often.
Is a right inverse of a surjective linear map linear? On a finite dimensional vector space, the answer is yes (because a surjective linear map must be an isomorphism). Does this extend to infinite dimensional vector spaces? In other words, for any linear surjection $T:V\rightarrow V$, AC guarantees the existence of a right inverse $R:V\rightarrow V$. Must $R$ be linear? How about a linear surjection $T:V\rightarrow W$ in general?
No. Let $V = \text{span}(e_1, e_2, ...)$ and let $T : V \to V$ be given by $T e_1 = 0, T e_i = e_{i-1}$. A right inverse $S$ for $T$ necessarily sends $v = \sum c_i e_i$ to $\sum c_i e_{i+1} + c_v e_1$ but $c_v$ may be an arbitrary function of $v$.
Limit of $(x_n)$ with $0<x_1<1$ and $x_{n+1}=x_n-x_n^{n+1}$ Let $0 < x_1 < 1$ and $x_{n + 1} = x_n - x_n^{n + 1}$ for $n \geqslant 1$. Prove that the limit exists and find the limit in terms of $x_1$. I have proved the existence but cannot manage the other part. Thanks for any help.
Note that $x_2=x_1(1-x_1)$ with $0\lt x_1\lt1$ hence $0\lt x_2\lt1/4$, that $(x_n)_{n\geqslant1}$ is decreasing, in particular $(x_n)_{n\geqslant1}$ converges to some value $\ell(x_1)$ in $[0,x_1)$, and that $x_n\leqslant x_2$ for every $n\geqslant2$. Hence, for every $n\geqslant2$, $x_{n+1}\geqslant x_n-x_2^{n+1}$, which implies that $x_n\geqslant x_2-x_2^3-\cdots-x_2^n$ for every $n\geqslant2$ and that $\ell(x_1)\geqslant x_2-x_2^3/(1-x_2)=x_2(1-x_2-x_2^2)/(1-x_2)$ hence $\ell(x_1)\gt0$. Auto-quote: To get an exact value of the limit $\ell(x_1)$ as an explicit function of $x_1$ is more difficult and, to venture a guess, probably not doable.
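Iterating the recursion numerically (a quick illustrative sketch) matches this picture: the limit is strictly positive and depends on $x_1$, with no obvious closed form:

```python
def ell(x1, n_max=200):
    """Iterate x_{n+1} = x_n - x_n^(n+1); the terms stabilize very quickly."""
    x = x1
    for n in range(1, n_max):
        x = x - x ** (n + 1)
    return x

for x1 in (0.1, 0.5, 0.9, 0.99):
    print(x1, ell(x1))
```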
$\operatorname{Aut}(\mathbb Z_n)$ is isomorphic to $U_n$. I've tried, but I can't solve the question. Please help me prove that: $\operatorname{Aut}(\mathbb Z_n)$ is isomorphic to $U_n$.
(If you know about ring theory.) Since $\mathbb Z_n$ is an abelian group, we can consider its endomorphism ring (where addition is component-wise and multiplication is given by composition). This endomorphism ring is simply $\mathbb Z_n$, since the endomorphism is completely determined by its action on a generator, and a generator can go to any element of $\mathbb Z_n$. Therefore, the automorphism group $\mathrm{Aut}(\mathbb Z_n)$ is the group of units in $\mathbb Z_n$, which is $U_n=U(\mathbb Z_n)$.
Integer combination I want to write a module to find the integer combinations for a multivariable formula. For example $8x + 9y \le 124$ The module will return all possible positive integers for $x$ and $y$. E.g. $x=2$, $y=12$. It does not necessarily have to be exactly $124$; it could be any number less than or equal to $124$, but it must be as close as possible to $124$ if no exact solution can be found. I do not want to solve with brute force, as the number of variables could be anything... $(5,10,100,\ldots,n)$ Is there any algorithm that could solve this?
Considering the equality case, if you analyze some of the data points that produce integer solutions to the original inequality: $$...,(-34,44), (-25,36), (-16,28), (-7,20),(2,12),(11,4),...$$ you can see that: $$x_{i}=x_{i-1}+9$$ To get the $y$ values, re-write the inequality as follows (using the equality part): $$y = \frac{124-8x}{9}$$ Using the above equations you can find $(x,y)$ values in a given range of $x$ values without brute force.
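A tiny sketch of this stepping idea for the equality case (no brute force; it starts from the particular solution $(2,12)$ in the list above and walks the family $x\mapsto x+9$, $y\mapsto y-8$):

```python
x, y = 2, 12
while y >= 0:
    print(x, y)          # prints (2, 12) and (11, 4), the nonnegative solutions
    x, y = x + 9, y - 8
```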
Optimization problem involving step function I've got to optimize the following function with respect to $\phi$: $q(\phi, x) = \frac{1}{n} \sum_{i=1}^{n}{H(y_i)}$ where $y_i = k - \phi l - x_i$ and $H(.)$ denotes the Heaviside function. $k$ and $l$ are constants, and $x$ follows either (1) a continuous uniform distribution or (2) a normal distribution. This is part of a quite standard programming problem but I'm a little stuck with finding the optimal $\phi$ I'm sure this is a totally simple question but I can't quite figure it out... any help is greatly appreciated...
The step functions all add up in the same direction, since $\phi$ has the same sign in all $y_i$ – thus $q$ is minimal for all $\phi$ such that all step functions are $0$. Writing the condition $y_i<0$ as $\phi l > k-x_i$ for every $i$, this occurs for $\phi\gtrless(k-x)/l$, where $x$ is the least of the $x_i$, and the inequality is $>$ or $<$ depending on whether $l$ is positive or negative, respectively.
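A quick numerical sanity check of that threshold (an illustrative sketch with arbitrary data, $l>0$, and the convention $H(0)=0$): $q$ stays positive just below $(k-\min_i x_i)/l$ and drops to $0$ just above it.

```python
import numpy as np

rng = np.random.default_rng(1)
k, l = 2.0, 1.5
x = rng.normal(size=10)
q = lambda phi: np.mean(k - phi * l - x > 0)     # fraction of strictly positive y_i
threshold = (k - x.min()) / l
print(q(threshold - 1e-6), q(threshold + 1e-6))  # positive, then 0.0
```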
natural solutions for $9m+9n=mn$ and $9m+9n=2m^2n^2$ Please help me find the natural solutions for $9m+9n=mn$ and $9m+9n=2m^2n^2$ where m and n are relatively prime. I tried solving the first equation in the following way: $9m+9n=mn \rightarrow (9-n)m+9n=0 $ $\rightarrow m=-\frac{9n}{9-n}$ Thanks in advance.
$$mn=9n+9m \Rightarrow (m-9)(n-9)=81$$ This equation is very easy to solve, just keep in mind that even if $m,n$ are positive, $m-9,n-9$ could be negative. But there are only 6 ways of writing 81 as the product of two integers. The second one is trickier, but if $mn >9$ then it is easy to prove that $$2m^2n^2> 18mn > 9m+9n $$ Added Also, since $9|2m^2n^2$ it follows that $3|mn$. Combining this with $mn \leq 9$ and $m|9n, n|9m$ solves immediately the equation. P.S. Your approach also works, if you do Polynomial long division you will get $\frac{9n}{n-9}=9 +\frac{81}{n-9}$. Thus $n-9$ is a divisor of $81$. P.P.S. Alternately, for the second equation, if you use $2\sqrt{mn} \leq m+n$ you get $$18 \sqrt{mn} \leq 9(m+n)=2m^2n^2$$ Thus $$(mn)^3 \geq 81$$ which implies $mn=0 \text{ or } mn \geq 5$.
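A short enumeration over the divisors of $81$ confirms the solution set of the first equation (a sketch; note that no pair is coprime, so with the coprimality condition the first equation has no solutions):

```python
sols = []
for d in (1, 3, 9, 27, 81, -1, -3, -9, -27, -81):
    m, n = 9 + d, 9 + 81 // d        # from (m - 9)(n - 9) = 81
    if m > 0 and n > 0:
        sols.append((m, n))
print(sols)   # [(10, 90), (12, 36), (18, 18), (36, 12), (90, 10)]
```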
If $T$ is bounded and $F$ has finite rank, what is the spectrum of $T+F$? Suppose that $T$ is a bounded operator with finite spectrum. What happens with the spectrum of $T+F$, where $F$ has finite rank? Is it possible that $\sigma(T+F)$ has non-empty interior? Is it always at most countable? Update: If $\sigma(T)=\{0\}$ then $0$ is in the essential spectrum of $T$ ($T$ is not invertible in the Calkin algebra), hence for any compact $K$, $\sigma_{ess}(T+K)=\sigma_{ess}(T)=\{0\}$. For operators such that the essential spectrum is $\{0\}$, it is known that their spectrum is either finite or consists of a sequence converging to $0$. I think it should be the same for operators with finite spectrum, but I cannot find a proof or reference.
It is always true if $T$ is self-adjoint. Here is a theorem that you might be interested in: If $T$ is self-adjoint, a complex number is in the spectrum of $T$ but not in its essential spectrum iff it is an isolated eigenvalue of $T$ of finite multiplicity. The result can be found on page 32 of Analytic K-homology by Nigel Higson and John Roe.
Finding a paper by John von Neumann written in 1951 There's a 1951 article by John von Neumann, Various techniques used in connection with random digits, which I would really like to read. It is widely cited, but I can't seem to find an actual copy of the paper, be it free or paying. Is there a general strategy to find copies of relatively old papers like this one? EDIT: I've searched quite a lot before posting this question and fond the following reference: Journal of Research of the National Bureau of Standards, Appl. Math. Series (1951), 3, 36-38 Unfortunately, my library doesn't have it, and it is not in NIST's online archive (neither at http://www.nist.gov/nvl/journal-of-research-past-issues.cfm nor at http://nistdigitalarchives.contentdm.oclc.org/cdm/nistjournalofresearchbyvolume/collection/p13011coll6)
One of the citations gives the bibliographic info, von Neumann J, Various Techniques Used in Connection with Random Digits, Notes by G E Forsythe, National Bureau of Standards Applied Math Series, 12 (1951) pp 36-38. Reprinted in von Neumann's Collected Works, 5 (1963), Pergamon Press pp 768-770. That should be enough information for any librarian to find you a copy.
about the differentiability: the general case Let $U$ be an open set in $\mathbb{R}^{n}$ and $f :U \subset \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ be a given function. We say that $f$ is differentiable at $x_{0}\in U$ if the partial derivatives of $f$ exist at $x_{0}$ and if $$\displaystyle \lim_{x \rightarrow x_{0}} \frac{\|f(x)-f(x_{0})-T(x-x_{0})\|}{\|x-x_{0}\|}=0$$ where $T=Df(x_{0})$ is the $ m \times n$ matrix with elements $\displaystyle \frac{\partial f_{i}}{\partial x_{j}}$ evaluated at $x_{0}$ and $T(x-x_{0})$ means the product of $T$ with $x-x_{0}$ (regarded as a column matrix). We call $T$ the derivative of $f$ at $x_{0}$. Now I consider a particular case ($m=n=2$), $f: \mathbb{R}^2 \rightarrow \mathbb{R}^2$. Following the definition I obtain: $$\displaystyle f(a,b)-f(a_{0}, b_{0})- \begin{pmatrix}\frac{\partial f_{1}}{\partial a} & \frac{\partial f_{1}}{\partial b}\\ \frac{\partial f_{2}}{\partial a} & \frac{\partial f_{2}}{\partial b} \end{pmatrix} \begin{pmatrix}a-a_{0} \\ b-b_{0} \end{pmatrix} = f(a,b)-f(a_{0},b_{0})- \begin{pmatrix}\frac{\partial f_{1}}{\partial a} \cdot(a-a_{0})+\frac{\partial f_{1}}{\partial b}\cdot(b-b_{0})\\ \frac{\partial f_{2}}{\partial a}\cdot(a-a_{0})+\frac{\partial f_{2}}{\partial b}\cdot(b-b_{0}) \end{pmatrix}$$ where $f(a,b)=(f_{1}(a,b),f_{2}(a,b))$. My question is: How can I compute this limit, since the last element is a matrix and the first two aren't? And why do I have to put $f(x)-f(x_{0})-T(x-x_{0})$ inside $\|\cdot\|$? I have an idea why, but I'm not sure. Also, can you give a concrete example of how to compute the limit - for example $f: \mathbb{R}^2 \rightarrow \mathbb{R}^2, f(a,b)=(a^2+b^2, a^2+b^2)$ when $ (a,b) \rightarrow (1,1)$. Thanks :)
Let me compute for the example $f(x,y)=(x^2+y^2, x+y)$. We write $f_1(x,y)=x^2+y^2$ and $f_2(x,y)=x+y$. Then $$ \frac{\partial f_1}{\partial x}(x,y) = 2x,\quad \frac{\partial f_1}{\partial y}(x,y) = 2y,\quad \frac{\partial f_2}{\partial x}(x,y) = 1,\quad \frac{\partial f_2}{\partial y}(x,y) = 1. $$ Since these four functions are continuous, $f$ is differentiable. Then $T$ is just the Jacobian matrix (evaluated at the point $(x_0,y_0)$): $$ T=\begin{pmatrix} 2x& 2y\\ 1&1\end{pmatrix}. $$ Finally $$ f(x,y)-f(x_0,y_0)-T\begin{pmatrix}x-x_0\\y-y_0\end{pmatrix}= \begin{pmatrix}x^2+y^2-x_0^2-y_0^2-2x_0(x-x_0)-2y_0(y-y_0) \\ x+y-x_0-y_0 -(x-x_0+y-y_0)\end{pmatrix}, $$ and taking limits $$ \lim_{(x,y)\to(x_0,y_0)} \frac{(x-x_0)^2+(y-y_0)^2}{\sqrt{(x-x_0)^2+(y-y_0)^2}}= \lim_{(x,y)\to(x_0,y_0)} \sqrt{(x-x_0)^2+(y-y_0)^2}=0. $$
How many numbers less than $x$ are co-prime to $x$ Is there a fast way, or a direct function, to give the count of numbers less than $x$ that are co-prime to $x$? For example if $x = 3$, then $n = 2$; and if $x = 8$, then $n = 4$.
Yes there is. First of all, you have to prime factorize your $x$, and put it in exponential form. Suppose you have the number $x = 50$. The prime factorization is $5^2 \cdot 2^1$. Now take each prime separately. Take the bases and subtract 1 from all of them: $5-1=4$, $2-1=1$. Now evaluate each base/exponent combination after subtracting 1 from the exponent: $5^{2-1}=5$, $2^{1-1}=1$. Now multiply your answers: $(4)(1)(5)(1) = 20$. If you want to try out your examples, $3=3^1$, so your answer would be $(3-1)(3^{1-1}) = (2)(1) = 2.$ For your other example, $8=2^3$, so $n=(2-1)(2^{3-1})=(1)(4)=4$. I guess in general, prime factorize your $x$ as $a_1^{b_1}\cdot a_2^{b_2}\cdots a_n^{b_n}$ and your answer will be $((a_1-1)(a_1^{b_1-1}))\cdot((a_2-1)(a_2^{b_2-1}))\cdots(a_n-1)(a_n^{b_n-1})$
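This procedure is exactly Euler's totient function $\varphi$; here is a minimal sketch (using `sympy` for the factorization):

```python
from sympy import factorint

def phi(x):
    """Euler's totient computed from the prime factorization, as described above."""
    result = 1
    for p, e in factorint(x).items():
        result *= (p - 1) * p ** (e - 1)
    return result

print(phi(50), phi(3), phi(8))   # 20 2 4
```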
Nonsingularity of Euclidean distance matrix Let $x_1, \dots, x_k \in \mathbb{R}^n$ be distinct points and let $A$ be the matrix defined by $A_{ij} = d(x_i, x_j)$, where $d$ is the Euclidean distance. Is $A$ always nonsingular? I have a feeling this should be well known (or, at least a reference should exists), on the other hand, this fact fails for general metrics (take e.g. path metric on the cycle $C_4$) edit: changed number of points from $n$ to general $k$
I think it should be possible to show that your distance matrix is always nonsingular by showing that it is always a Euclidean distance matrix (in the usual sense of the term) for a non-degenerate set of points. I don't give a full proof but sketch some ideas that I think can be fleshed out into a proof. Two relevant papers on Euclidean distance matrices are Discussion of a Set of Points in Terms of Their Mutual Distances by Young and Householder and Metric Spaces and Positive Definite Functions by Schoenberg. They show that an $n\times n$ matrix $A$ is a Euclidean distance matrix if and only if $x^\top Ax\le0$ for all $x$ with $e^\top x=0$ (where $e$ is the vector with $1$ in each component) and that the affine dimension of the points is $n$ if and only if the inequality is strict. It follows that a Euclidean distance matrix can only be singular if the affine dimension of the points is less than $n$: If the affine dimension is $n$, there cannot be an eigenvalue $0$, since there is a positive eigenvalue (since $e^\top Ae\gt0$), and the span of these two eigenspaces would non-trivially intersect the space $e^\top x=0$, contradicting the negative definiteness of $A$ on that space. To use all this for your case, one could try to show that a distance matrix in your sense is always a Euclidean distance matrix in the usual sense for points with affine dimension $n$. I think this could be done by continuously varying the exponent $\alpha$ in $A_{ij}=d(x_i,x_j)^\alpha$ from $1$ to $2$ and showing a) that there is always a direction in which the points can move such that $A$ remains their distance matrix with the changing exponent and b) that this movement necessarily causes them to have affine dimension $n$. To get a feel for how this might work, consider a square: The movement would bend the square into a tetrahedron. The proof would need to account for the fact that this seems to hold only for $\alpha\lt2$; you can see from the example of three points in a line that they can be bent to accommodate $\alpha\lt2$ but not $\alpha\gt2$.
What is the best way to solve a problem given remainders and divisors? $x$ is a positive integer less than $100$. When $x$ is divided by $5$, its remainder is $4$. When $x$ is divided by $23$, its remainder is $7$. What is $x$ ?
So $x=5y+4$ and $x=23z+7$ for some integers $y,z$. Hence $5y+4=23z+7$; adding $16$ to both sides gives $5y+20=23z+23$, i.e. $5(y+4)=23(z+1)$, so $y+4=\frac{23(z+1)}{5}$. Since $y+4$ is an integer and $(5,23)=1$, we get $5\mid(z+1)$, i.e. $z+1=5w$ for some integer $w$. Then $x=23(5w-1)+7=115w-16$. As $0<x<100$, putting $w=1$ gives $x=99$.

Alternatively, we have $5y=23z+3$. Using the convergents of the continued fraction of $\frac{23}{5}$:

(i) $\frac{23}{5}=4+\frac{3}{5}=4+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{2}}}$, so the last-but-one convergent is $4+\cfrac{1}{1+\cfrac{1}{1}}=\frac{9}{2}$, which gives $23\cdot2-5\cdot9=1$. Then $5y=23z+3(23\cdot2-5\cdot9)\Rightarrow 5(y+27)=23(z+6)\Rightarrow 5\mid(z+6)\Rightarrow 5\mid(z+1)$.

(ii) Writing instead $\frac{23}{5}=4+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+1}}}$, the last-but-one convergent of this longer expansion is $4+\cfrac{1}{1+\cfrac{1}{1+1}}=\frac{14}{3}$, which gives $23\cdot3-5\cdot14=-1$. Then $5y=23z-3(23\cdot3-5\cdot14)\Rightarrow 5(y-42)=23(z-9)\Rightarrow 5\mid(z-9)\Rightarrow 5\mid(z+1)$.
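Since $(5,23)=1$, the Chinese Remainder Theorem guarantees exactly one solution modulo $115$, so for a range this small a brute-force check settles it; a one-line sketch in Python:

```python
# Search 1..99 for the unique x with x ≡ 4 (mod 5) and x ≡ 7 (mod 23)
print([x for x in range(1, 100) if x % 5 == 4 and x % 23 == 7])  # [99]
```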
Infinite descent This Wikipedia article on infinite descent says: We have $ 3 \mid a_1^2+b_1^2 \,$. This is only true if both $a_1$ and $b_1$ are divisible by $3$. But how can this be proved?
Suppose $a_1 = 3 q_1 + r_1$ and $b_1 = 3 q_2 + r_2$, where $r_1$ and $r_2$ are each $-1$, $0$ or $1$. Then $$ a_1^2 + b_1^2 = 3 \left( 3 q_1^2 + 3 q_2^2 + 2 q_1 r_1 + 2 q_2 r_2 \right) + r_1^2 + r_2^2 $$ For $a_1^2 + b_1^2$ to be divisible by $3$, we must have $3 \mid r_1^2 + r_2^2$, and since $0\leqslant r_1^2+r_2^2 < 3$ this forces $r_1^2 + r_2^2 = 0$. Enumerating the nine cases, only $r_1 = r_2 = 0$ assures the divisibility. This is essentially the answer anon gave in his comment.
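The nine cases can also be enumerated mechanically; a tiny Python sketch:

```python
# Squares mod 3 are 0 or 1, so r1^2 + r2^2 ≡ 0 (mod 3) forces r1 = r2 = 0
print([(r1, r2) for r1 in (-1, 0, 1) for r2 in (-1, 0, 1)
       if (r1 * r1 + r2 * r2) % 3 == 0])  # [(0, 0)]
```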
Linear algebraic dynamical system that has complex entries in the matrix Suppose that there is a dynamical system of the form $\mathbb{x}_{k+1} = A\mathbb{x}_k$. Suppose that one eigenvalue of the matrix $A$ is a complex number of the form $a-bi$. We then substitute $\mathbb{x}_k = P\mathbb{y}_k$, where the matrix $P$ is built from the eigenvector corresponding to the eigenvalue $a-bi$. Then we can write the dynamical system as $\mathbb{y}_{k+1} = C\mathbb{y}_k$ where $C$ is \begin{bmatrix}a & -b \\b & a \end{bmatrix} and $A = PCP^{-1}$. The question is: why are the eigenvalues of the matrix $C$ equal to the eigenvalues of the matrix $A$?
Note that if $Av=\lambda v$ with $v\ne 0$, then, since $C=P^{-1}AP$, the vector $w=P^{-1}v$ is nonzero and $$Cw=P^{-1}APw=P^{-1}Av=\lambda P^{-1}v=\lambda w.$$ So every eigenvalue of $A$ is an eigenvalue of $C$, and by the symmetric argument the converse holds as well: similar matrices have the same eigenvalues.
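A quick numerical illustration of this (with an arbitrarily chosen invertible $P$; any one works):

```python
import numpy as np

a, b = 0.8, 0.6
C = np.array([[a, -b], [b, a]])
P = np.array([[2.0, 1.0], [1.0, 1.0]])   # arbitrary invertible matrix
A = P @ C @ np.linalg.inv(P)
print(np.linalg.eigvals(A))  # a ± bi, i.e. 0.8 ± 0.6i
print(np.linalg.eigvals(C))  # the same pair
```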
Dedekind Cuts in Rudin's analysis - Step 4 This question refers to the construction of $\mathbb{R}$ from $\mathbb{Q}$ using Dedekind cuts, as presented in Rudin's "Principles of Mathematical Analysis", pp. 17-21. More specifically, in the last paragraph of step 4, Rudin says that for $\alpha$ a fixed cut, and given $v \in 0^*$, setting $w=- v / 2$, there exists an integer $n$ such that $nw \in \alpha$ but $(n+1)w$ is not inside $\alpha$. Rudin says that this depends on the Archimedean property of the rationals; however, he has not proved it. Could somebody prove the existence of such an integer $n$?
If $v\in 0^*$, then $v<0$, so $w>0$. Since $\alpha$ is a cut it is nonempty and not all of $\mathbb{Q}$, so pick rationals $p\in\alpha$ and $q\notin\alpha$. The Archimedean property of $\mathbb{Q}$ gives integers $k$ and $m$ with $kw>-p$ and $mw\geq q$; then $(-k)w<p$, so $(-k)w\in\alpha$ (cuts are closed downward), while $mw\notin\alpha$ (anything $\geq q$ lies outside $\alpha$). Hence the set $S=\{j\in\mathbb{Z}: jw\in\alpha\}$ is nonempty and bounded above by $m$, so it has a largest element $n$. By maximality, $nw\in\alpha$ but $(n+1)w\notin\alpha$. (This avoids talking about $\sup\alpha$, which need not exist in $\mathbb{Q}$.)
$H\vartriangleleft G$ and $|H|\not\equiv 1 (\mathrm{mod} \ p)$ then $H\cap C_{G}(P)\neq1$ Let $G$ be a finite group with a proper normal subgroup $H$, and let $P$ be an arbitrary $p$-subgroup of $G$ ($p$ a prime). Then $$|H|\not\equiv 1 (\mathrm{mod} \ p)\Longrightarrow H\cap C_{G}(P)\neq1$$ What I have done: I can see that the following well-known theorem is a special case of the above problem: Let $G$ be a finite nontrivial $p$-group and $H\vartriangleleft G$. If $H\neq1$ then $H\cap Z(G)\neq1$. So I assume that $G$ acts on $H$ by conjugation and therefore $$|H|=1+\sum_{x\in H-\{1\}}|\mathrm{Orbit}_G(x)|$$ $|H|\not\equiv 1 (\mathrm{mod} \ p)$ means to me that there is $x_0\in H$ such that $p\nmid|\mathrm{Orbit}_G(x_0)|$. Am I doing this right? Thanks. This problem can be applied nicely in the following fact: Let $p$ be an odd prime and $q$ a prime such that $q^2\leqslant p$. Then $\mathrm{Sym}(p)$ cannot have a normal subgroup of order $q^2$.
Let $P$ act on $H$; the number of fixed points is the number of elements in $C_G(P)\cap H$. Now use the easy fact that the number of fixed points is congruent to $|H|\pmod{p}$.
Solving this linear system based on the combustion of methane, has no constants I recently discovered that I could solve a chemical reaction using a linear system. So I thought I would try something simple like the combustion of methane, where $x$, $y$, $z$ and $w$ are the moles of each molecule: $$x\,\mathrm{CH_4} + y\,\mathrm{O_2} = z\,\mathrm{H_2O} + w\,\mathrm{CO_2}$$ The linear system for this would be: $$x = w$$ $$2y = z + 2w$$ $$4x = 2z$$ I got as far as $y = z$ and $y = 2w$, but without any constants I am stumped. Can anyone help me? I was assuming that elimination and substitution would suffice, but I must be wrong.
Each of your equations can be rearranged to give $$x-w=0$$ $$2y-z-2w=0$$ $$4x-2z=0$$ Your idea of using Gaussian elimination is then a very good one. Or you can solve directly. From the first equation, you get $$x=w.$$ From the third equation, you get $$z=2x=2w.$$ Then from the second equation you get $$y=\frac{z}{2}+w=2w.$$ In each subsequent equation we have used previous information, and any value for $w$ will suffice ($w$ is called a free variable). Note that this makes sense, as $w$ counts the molecules of one particular type, and changing it just rescales how the equation is balanced. Taking $w=1$ gives the familiar balanced reaction $\mathrm{CH_4} + 2\,\mathrm{O_2} \to 2\,\mathrm{H_2O} + \mathrm{CO_2}$.
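For bigger reactions you can hand the elimination to a computer algebra system; a sketch with sympy (assuming it is available), where the nullspace of the balance matrix gives the coefficients:

```python
from sympy import Matrix

# Columns: CH4, O2, H2O, CO2; one row per element (C, H, O),
# with products entered with a minus sign.
M = Matrix([
    [1, 0,  0, -1],   # carbon:   x - w = 0
    [4, 0, -2,  0],   # hydrogen: 4x - 2z = 0
    [0, 2, -1, -2],   # oxygen:   2y - z - 2w = 0
])
print(M.nullspace())  # one basis vector, (1, 2, 2, 1): CH4 + 2 O2 -> 2 H2O + CO2
```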
Why are two vectors that are parallel equivalent? Why are two parallel vectors with the same magnitude equivalent? Why is their start point irrelevant? How can a vector starting at $\,(0, -10)\,$ going to $\,(10, 0)\,$ be the same as a vector starting at $\,(10, 10)\,$ and going to $\,(20, 20)\,$?
The components of a vector determine the length and the direction of the vector, but not its basepoint. Therefore two vectors are equivalent if and only if they have the same components, and this is why two parallel vectors with the same magnitude and direction are equal even though they have different base points. In your example, both arrows have component form $(10,10)$, so they represent the same vector. To avoid this confusion, we take all vectors to be based at the origin unless stated otherwise.
Inequality with two absolute values I'm new here, and I was wondering if any of you could help me out with this little problem that has been getting on my nerves, since I've been trying to solve it for hours. Studying for my next test on inequalities with absolute values, I found this one: $$ |x-3|-|x-4|<x $$ (I found the above inequality on this website, here to be precise.) The problem is that when I try to solve it, my answer isn't $(-1,+\infty)$ but $(1,7)$. I took the same inequalities that the asker's teacher had given to him and then put them on a number line, and my result was definitely not $(-1,+\infty)$. Here are the inequalities: $$ x−3 < x−4 +x $$ $$ x−3 < −(x−4) +x $$ $$ −(x−3)<−(x−4)+x $$ And here are my answers respectively: $$ x>1, \quad x>-1, \quad x<7 $$ I would really appreciate it if anyone could help me out, because I'm already stressed dealing with this problem (which, by the way, I'm not required to solve, but you know, why not?).
From $|x-3|-|x-4|< x$, I write $|x-3|< x+|x-4|$, and remember: $|r|< s \iff -s < r < s$. So the inequality becomes $$-x-|x-4| < x-3 < x+|x-4|,$$ which splits into two inequalities: (a) $-x-|x-4| < x-3$ and (b) $x-3 < x+|x-4|$. Remember too: $|r| > s \iff r > s \text{ or } r < -s$. I will apply this to (a) and (b).

From (a): $-x-|x-4| < x-3$. Putting the absolute value on one side gives $-|x-4| < 2x-3$; multiplying by $-1$ gives $|x-4| > -2x+3$. Writing this in the form $r>s$ or $r<-s$ gives two inequalities: $x-4 > -2x+3$ or $x-4 < 2x-3$. From the first, $3x > 7$, i.e. $x>\frac73$, giving $(\frac73,\infty)$. From the second, $-1 < x$, giving $(-1,\infty)$. These are joined by "or", so the solution of (a) is the union $(\frac73,\infty)\cup(-1,\infty)=(-1,\infty)$.

Now (b): $x-3 < x+|x-4|$ simplifies to $-3 < |x-4|$, i.e. $|x-4| > -3$, which every $x$ satisfies, so the solution of (b) is $(-\infty,+\infty)$.

The final result is the intersection of (a) and (b): $(-1,\infty)\cap(-\infty,\infty)=(-1,\infty)$, which is the set of $x$ satisfying $|x-3|-|x-4| < x$.
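A quick way to spot-check the result $(-1,\infty)$ is to test a few sample points numerically; a small Python sketch:

```python
# The inequality should hold exactly for x > -1
for x in (-2.0, -1.0, -0.999, 0.0, 3.5, 100.0):
    print(x, abs(x - 3) - abs(x - 4) < x)   # True exactly when x > -1
```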
Existence of an Infinite Length Path I came across the following simple definition: A path $\gamma$ in $\mathbb{R}^n$ that connects the point $a \in \mathbb{R}^n$ to the point $b \in \mathbb{R}^n$ is a continuous $\gamma : [0, 1] \to \mathbb{R}^n$ such that $\gamma(0) = a$ and $\gamma(1) = b$. We denote by $\ell(\gamma)$ the (Euclidean) length of $\gamma$. $\ell(\gamma)$ is always defined and is either a non-negative real number or $\infty$. However, I cannot seem to think of a path, defined in this manner (specifically, where the domain is compact), whose length is infinite. Can anyone provide an example?
Yes; the Koch snowflake curve is such an example. Consider the following construction:

* Start with the segment $A_0 = [0,1]$.
* Subdivide $A_0$ into three equal pieces.
* Replace the middle third by an equilateral triangle with base $[\frac13, \frac23]$.
* Delete the base of that triangle. You get a path $A_1$ which looks like a saw tooth.
* By induction, to construct $A_{n+1}$, replace each of the $4^n$ segments of $A_n$ by an equilateral triangle with that segment as its base, and then remove the segment.
* The limit object is a path (whose image is a compact set) joining $0$ and $1$.

The length of $A_n$ is $\left( \frac43 \right)^n$, which tends to $+\infty$ as $n \to +\infty$. The Hausdorff dimension of the limit curve is $\frac{\ln 4}{\ln 3} \approx 1.26$. The following picture summarizes the construction. http://www.cl.cam.ac.uk/~dao29/tmp/koch/geometric-construction.png
Generating integer solutions to $4mn - m^2 + n^2 = ±1$ How can I generate positive integers $m$ and $n$ that satisfy the equation $4mn - m^2 + n^2 = \pm1$, subject to the constraints that $m$ and $n$ are coprime, $m-n$ is odd, and $m > n$?
Hint: Completing the square yields an equation of the form $$x^2-Dy^2=\pm 1$$ for a particular $D$: with $x=m-2n$ and $y=n$ we get $(m-2n)^2-5n^2=m^2-4mn-n^2=\mp1$, so $D=5$. There's actually a simple recursion that generates all solutions. Let $a_0=0$, $a_1=1$, and $a_{k+2}=4a_{k+1}+a_{k}$. Then the general solution is $(m,n)=(a_{k+1},a_{k})$. This gives the positive solutions. The solutions with $n$ negative are of the form $(m,n)=(a_k,-a_{k+1})$. There are no solutions with $m$ negative and $m> n$.
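It's easy to check the recursion empirically; a short Python sketch verifying that consecutive pairs solve the equation:

```python
# a_0 = 0, a_1 = 1, a_{k+2} = 4 a_{k+1} + a_k; test (m, n) = (a_{k+1}, a_k)
a, b = 0, 1
for _ in range(10):
    m, n = b, a
    assert 4*m*n - m*m + n*n in (1, -1), (m, n)
    a, b = b, 4*b + a
print("all pairs solve 4mn - m^2 + n^2 = ±1")
```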
Sufficiency to prove the convergence of a sequence using even and odd terms Given a sequence $a_{n}$, suppose I know that the subsequence of even-indexed terms converges to the same limit as the subsequence of odd-indexed terms: $$\lim_{n\rightarrow\infty} a_{2n}=\lim_{n\to\infty} a_{2n-1}=L$$ Is this sufficient to prove that $\lim_{n\to\infty}a_{n}=L$? If so, how can I make this more rigorous? Is there a theorem I can state that covers this case?
If you are familiar with subsequences, you can prove it easily as follows. Note first that the sequence is bounded, since both the even and the odd subsequences converge. Let $a_{n_k}$ be a subsequence which converges to $\limsup a_n$. It contains infinitely many odd-indexed terms or infinitely many even-indexed terms (or both), and that sub-subsequence converges to $L$ by hypothesis; hence $\limsup a_n = L$. The same argument applies to $\liminf a_n$, hence the limit of the whole sequence exists and equals $L$.
What is exactly "Algebraic Dynamics"? Could somebody please give a big picture description of what exactly is the object of study in the area of Algebraic Dynamics? Is it related to Dynamical Systems? If yes in what sense? Also, what is the main mathematical discipline underpinning Algebraic Dynamics? Is it algebraic geometry, differential geometry e.t.c.?
The Wiki article states that it is a combination of dynamical systems and number theory. I know it's a redirect, but WP's information on this point is probably reliable enough :) (Are you checking here because you are not comfortable with WP info? It is a serious question which I'm curious about.)
Cauchy nets in a metric space Say that a net $a_i$ in a metric space is Cauchy if for every $\epsilon > 0$ there exists $I$ such that for all $i, j \geq I$ one has $d(a_i,a_j) \leq \epsilon$. If the metric space is complete, does it hold (and in either case why) that every Cauchy net converges?
Consider a Cauchy net $(x_\lambda)$: for each $n$ there is an index $\lambda_n$ such that $$\forall \lambda,\lambda'\geq\lambda_n:\quad d(x_\lambda,x_{\lambda'})<\frac{1}{n}$$ Extract a Cauchy sequence: by directedness, choose $\mu_n\geq\lambda_1,\ldots,\lambda_n$ and set $y_n:=x_{\mu_n}$. For $m,n\geq k$ we have $\mu_m,\mu_n\geq\lambda_k$, hence $d(y_m,y_n)<\frac{1}{k}$, so $(y_n)$ is Cauchy. Apply completeness: $y_n\to x$ for some $x$. Now fix $\epsilon>0$, pick $N$ with $\frac{1}{N}<\frac{\epsilon}{2}$, and then pick $n_0\geq N$ with $d(y_{n_0},x)<\frac{\epsilon}{2}$. Since $\mu_{n_0}\geq\lambda_N$, every $\lambda\geq\lambda_N$ satisfies $$d(x_\lambda,x)\leq d(x_\lambda,y_{n_0})+d(y_{n_0},x)<\frac{1}{N}+\frac{\epsilon}{2}<\epsilon$$ So yes: in a complete metric space every Cauchy net converges.
Is independence preserved by conditioning? $X_1$ and $X_2$ are independent. $Y_1|X_1\sim\mathrm{Ber}\left(X_1\right)$, $Y_2|X_2\sim\mathrm{Ber}\left(X_2\right)$. Are $Y_1$ and $Y_2$ necessarily independent? (Assume $\mathrm{P}\left(0<X_1<1\right)=1$, $\mathrm{P}\left(0<X_2<1\right)=1$)
No. Let $X_1,X_2$ be independent uniform $(0,1)$ random variables, and then define $$Y_1=1_{(X_1\geq X_2)}\,\mbox{ and }\, Y_2=1_{(X_2> X_1)}.$$ Then $\mathrm P(Y_1=1\mid X_1)=X_1$ and $\mathrm P(Y_2=1\mid X_2)=X_2$, so both have the required conditional distributions, yet $Y_2=1-Y_1$ almost surely, so $Y_1$ and $Y_2$ are as far from independent as possible.
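A simulation makes the failure of independence vivid; a sketch with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(size=10**6), rng.uniform(size=10**6)
y1 = (x1 >= x2).astype(int)   # Bernoulli(X1) given X1
y2 = (x2 > x1).astype(int)    # Bernoulli(X2) given X2, but equals 1 - y1
print(np.corrcoef(y1, y2)[0, 1])  # -1.0: perfectly (negatively) dependent
```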
Cardinality of the set of all pairs of integers The set $S$ of all pairs of integers can be represented as $\{i \ | \ i \in \mathbb{Z} \} \times \{j\ | \ j \in \mathbb{Z}\}$. In other words, all coordinates on the cartesian plane where $x, y$ are integers. I also know that a set is countable when $|S|\leq |\mathbb{N}^+|$. I attempted to map out a bijective function, $f : \mathbb{N}^+ \rightarrow S$. $1 \rightarrow (1,1) \\ 2 \rightarrow (1,2)\\ 3 \rightarrow (1,3) \\ \quad \vdots $ I determined from this that the natural numbers can only keep up with $(1,*)$. But there is the ordered pairs where $x=2,3,4,\cdots$ not to mention the negative integers. In other words, $|S|\geq |\mathbb{N}^+|$ and therefore $S$ is not countably infinite. Is this correct? (I don't think it is... Something to do with my understanding of infinite sets)
Define $\sigma: \Bbb Z \times \Bbb Z \to \Bbb Z \times \Bbb Z$ by $$ \sigma(m,n) = \left\{\begin{array}{lr} (1,-1), & \text{for } (m,n) = (0,0)\\ (m, n+1), & \text{for } m \gt 0 \, \land \, -m \le n \lt m\\ (m-1, n), & \text{for } n \gt 0 \, \land \, -n \lt m \le n\\ (m, n-1), & \text{for } m \lt 0 \, \land \, m \lt n \le -m\\ (m+1, n), & \text{for } n \lt 0 \, \land \, n \le m \lt -n-1\\ (m+2,n-1), & \text{for } n \lt 0 \, \land \, m = -n-1 \end{array}\right\} $$ Exercise: Show that $n \mapsto \sigma^n(0,0)$ is a bijective mapping between $\{0,1,2,3,...\}$ and $\Bbb Z \times\Bbb Z$.
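One way to approach the exercise experimentally: implement $\sigma$ and walk the spiral from $(0,0)$, checking that no point repeats and that every point of a large square is reached. A sketch in Python:

```python
def sigma(m, n):
    if (m, n) == (0, 0):              return (1, -1)
    if m > 0 and -m <= n < m:         return (m, n + 1)
    if n > 0 and -n < m <= n:         return (m - 1, n)
    if m < 0 and m < n <= -m:         return (m, n - 1)
    if n < 0 and n <= m < -n - 1:     return (m + 1, n)
    if n < 0 and m == -n - 1:         return (m + 2, n - 1)

p, seen = (0, 0), set()
for _ in range((2 * 10 + 1) ** 2):    # enough steps to cover the square [-10, 10]^2
    assert p not in seen              # injectivity along the walk
    seen.add(p)
    p = sigma(*p)
print(all((i, j) in seen
          for i in range(-5, 6) for j in range(-5, 6)))  # True: [-5, 5]^2 is covered
```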
Solving improper integrals and u-substitution on infinite series convergence tests This is the question: Use the integral test to determine the convergence of $\sum_{n=1}^{\infty}\frac{1}{1+2n}$. I started by writing: $$\int_1^\infty\frac{1}{1+2x}dx=\lim_{a \rightarrow \infty}\left(\int_1^a\frac{1}{1+2x}dx\right)$$ I then decided to use u-substitution with $u=1+2x$ to solve the improper integral. I got the answer wrong and resorted to my answer book, and this is where they went after setting $u=1+2x$: $$\lim_{a \rightarrow \infty}\left(\frac{1}{2}\int_{3}^{1+2a}\frac{1}{u}du\right)$$ And the answer goes on... What I can't figure out is where the $\frac{1}{2}$ came from when the u-substitution began, and also why the lower bound of the integral was changed to $3$. Can someone tell me?
$$u=1+2x\Longrightarrow du=2\,dx\Longrightarrow dx=\frac{1}{2}\,du$$ which is exactly where the $\frac{1}{2}$ in front of the integral comes from. Remember, you don't just substitute the variable and nothing more: you also have to change the $\,dx\,$ and the integral's limits: $$u=1+2x\,\,,\,\text{so}\,\, x=1\Longrightarrow u=1+2\cdot 1 =3\quad\text{and}\quad x=a\Longrightarrow u=1+2a$$
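If you want to double-check with a computer algebra system, sympy confirms both the antiderivative (with its factor $\frac12$) and the divergence; a quick sketch:

```python
from sympy import symbols, integrate, oo

x = symbols('x', positive=True)
print(integrate(1 / (1 + 2*x), x))           # log(2x+1)/2 up to a constant (printed form may vary)
print(integrate(1 / (1 + 2*x), (x, 1, oo)))  # oo, so the series diverges by the integral test
```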
$\mathbb{Z}$ has no torsion? What does it mean to say that $\mathbb{Z}$ has no torsion? Is this an important fact for the course? Thanks. I heard it in my field theory course, but I don't know what it is.
To say that $\mathbb{Z}$ has no torsion means that no nonzero element has finite order: if $nx=0$ for some integer $n\geq 1$, then $x=0$. @d555, you might also want to know that the notion of torsion is extremely important; for example, if $A$ is a finitely generated abelian group, then it can be written as the direct sum of its torsion subgroup $T(A)$ and a torsion-free subgroup (but this is not true for all infinitely generated abelian groups). $T(A)$ is uniquely determined.
Trying to find angle at which projectile was fired. So let's say I have a parabolic function that describes the displacement of some projectile without air resistance. Let's say it's $$y=-4.9x^2+V_0x.$$ I want to know at what angle the projectile was fired. I notice that $$\tan \theta_0=f'(x_0)$$ so the angle should be $$\theta_0 = \arctan(f'(0)).$$ or $$\theta_0 = \arctan(V_0).$$ Is this correct? I can't work out why it wouldn't be, but it doesn't look right when I plot curves.
From what you are saying, it looks like you are thinking of $f(x)$ as a real-valued function, and in effect you are only considering linear motion. (This would be fine if the cannon were firing straight up or straight down.) However, you are interested in the interesting two-dimensional trajectories, and this means that $V_0$ is a 2-D vector pointing in the direction of the initial trajectory, and that the coefficient of $x^2$ actually accompanies a vector pointing straight down (to show the contribution of gravity). Once you find what $V_0$ is, you can compute its angle with respect to the horizontal, and you have your answer.
Finding the value of y in terms of x. Is it possible to get the value of $y$ in terms of $x$ from the equation below? If so, please give me a clue how to do that :) $$y \sqrt{y^2 + 1} + \ln\left(y + \sqrt{y^2 + 1}\right) = \frac{a}{x^2}.$$
Since $\operatorname{asinh}(y)=\ln\left(y + \sqrt{y^2 + 1}\right)$, let's set $y:=\sinh(u)$ and rewrite your equation as: $$\sinh(u) \sqrt{\sinh(u)^2 + 1} + u = \frac{a}{x^2}$$ $$\sinh(u) \cosh(u) + u = \frac{a}{x^2}$$ $$\sinh(2u) + 2u = 2\frac{a}{x^2}$$ After that I fear you'll have to solve this numerically (to get $u$ as a function of $x$). I don't see anything simpler, sorry... To solve $\sinh(w) + w = r$ numerically you may:

* use iterations (Newton-Raphson): $\displaystyle w_{n+1}=w_n-\frac {\sinh(w_n)+w_n-r}{\cosh(w_n)+1}$, starting with $w_0=\frac r2$;
* use reversion of series to get $w=w(r)$: $w(r)= \frac 12 r - \frac 1{96}r^3 + \frac 1{1920}r^5 - \frac{43}{1290240}r^7 + \frac {223}{92897280}r^9 - \frac{60623}{326998425600}r^{11} + \frac{764783}{51011754393600}r^{13} - \frac {107351407}{85699747381248000}r^{15} + \mathrm{O}\bigl(r^{17}\bigr)$;
* perhaps other methods exist in the literature...
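Here is a minimal sketch of the Newton-Raphson option in Python (the function name and tolerance are my choices):

```python
import math

def solve_sinh_plus_w(r, tol=1e-12, max_iter=100):
    """Solve sinh(w) + w = r by Newton-Raphson, starting from w0 = r/2."""
    w = r / 2
    for _ in range(max_iter):
        step = (math.sinh(w) + w - r) / (math.cosh(w) + 1)
        w -= step
        if abs(step) < tol:
            break
    return w

r = 3.0
w = solve_sinh_plus_w(r)
print(w, math.sinh(w) + w)   # the second number should be ≈ 3.0
```

Then, with $r=2a/x^2$, take $u=w/2$ and finally $y=\sinh(u)$.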