Examples of $\kappa$-Fréchet-Urysohn spaces. We say that a space $X$ is $\kappa$-Fréchet-Urysohn at a point $x\in X$ if, whenever $x\in\overline{U}$ for a regular open subset $U$ of $X$, some sequence of points of $U$ converges to $x$. I'm looking for some examples of $\kappa$-Fréchet-Urysohn spaces. I guess it is not true that every compact Hausdorff space is $\kappa$-Fréchet-Urysohn, but how about compact Hausdorff homogeneous spaces?
What examples are you looking for? I think, for some exotic examples you should search Engelking’s “General Topology” (on Frechet-Urysohn spaces) and part 10 “Generalized metric spaces” of the “Handbook of Set-Theoretic Topology”.
An analytic function is onto All sets are subsets of $\mathbb{C}$. Suppose $f: U \to D$ is analytic where $U$ is bounded and open, and $D$ is the open unit disk. Now suppose we can continuously extend $f$ to $\bar{f}: \bar{U} \to \bar D$, such that $\bar{f}(\partial U) \subseteq \partial D$. To show that $f$ is onto, I was thinking maybe I could show that $f(U)$ is a dense subset of $\bar{D}$ , and since $\bar{f}(U) = f(U)$ is open by the open mapping theorem, it must be $D$. But to do this I would need to know that $f(\partial U) = \partial D$. Is this true? Some advice or other approaches would be greatly appreciated. Thank you.
Hint: If $f$ is not onto then $D$ contains a point $w$ which is on the boundary of $f(U)$. Take a sequence in $U$ with $f(z_k)\to w$, and use compactness.
Learning Combinatorial Species. I have been reading the book Conceptual Mathematics (first edition) and I'm also about halfway through Diestel's Graph Theory (4th edition). I was wondering if I am able to start learning about combinatorial species. This is very interesting to me because I love combinatorics and it makes direct use of category theory. Also, what are some good resources to understand the main ideas behind the area and understand the juiciest part of it? Regards, and thanks to Anon who told me about this area of math.
For an easy to understand introduction, http://dept.cs.williams.edu/~byorgey/pub/species-pearl.pdf seems to be nice. But it leans more towards the computer science applications.
Distribution of a random variable $X_1$, $X_2$, $X_3$ are independent random variables, each with an exponential distribution, but with means of $2.0, 5.0, 10.0$ respectively. Let $Y$= the smallest or minimum value of these three random variables. Derive and identify the distribution of $Y$. (The distribution function may be useful). How do I solve this question? Do I plug in each mean to the exponential distribution? I would appreciate it if someone could explain this to me, thanks.
The wiki on exponential distribution has an answer to that. The answer of course is exponential distribution. http://en.wikipedia.org/wiki/Exponential_distribution#Distribution_of_the_minimum_of_exponential_random_variables
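A worked step for reference (my addition, not part of the original answer): because the $X_i$ are independent, the survival functions multiply, so with means $2, 5, 10$ (rates $\tfrac12,\tfrac15,\tfrac1{10}$) $$P(Y>y)=P(X_1>y)\,P(X_2>y)\,P(X_3>y)=e^{-y/2}\,e^{-y/5}\,e^{-y/10}=e^{-0.8\,y},\qquad y\ge 0,$$ so $Y$ is exponential with rate $0.8$, i.e. with mean $1.25$.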
Density of sum of two independent uniform random variables on $[0,1]$ I am trying to understand an example from my textbook. Let's say $Z = X + Y$, where $X$ and $Y$ are independent uniform random variables with range $[0,1]$. Then the PDF is $$f(z) = \begin{cases} z & \text{for $0 < z < 1$} \\ 2-z & \text{for $1 \le z < 2$} \\ 0 & \text{otherwise.} \end{cases}$$ How was this PDF obtained? Thanks
Following the hint of jay-sun, consider this idea: $f_X (z-y) = 1$ if and only if $0 \le z-y \le 1$. So we get $$ z-1 \le y \le z. $$ However, since $z \in [0, 2]$, this range of $y$ may not lie inside $[0, 1]$, which is what we need in order to get $f_X (z-y) = 1$, and the value $1$ is a good splitting point, because $z-1 \in [-1, 1]$. Consider (i) if $z-1 \le 0$, then $ -1 \le z-1 \le 0$, that is $ z \in [0, 1]$, and the range of $y$ is $[0, z]$ since $z \in [0, 1]$. So $\int_{-\infty}^{\infty}f_X(z-y)dy = \int_0^{z} 1 dy=z$ if $z \in [0, 1]$. Consider (ii) if $z-1 \ge 0$, that is $ z \in [1, 2]$, then the range of $y$ is $[z-1, 1]$, and $\int_{-\infty}^{\infty}f_X(z-y)dy = \int_{z-1}^{1} 1 dy = 2-z$ if $z \in [1, 2]$. To sum up, the idea is to clip the range of $y$ so that $f_X (z-y) = 1$.
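As a quick numerical sanity check of this density (my own addition, not part of the original answer), the CDF for $0<z<1$ is $F(z)=z^2/2$, so $P(Z\le 0.5)$ should be about $0.125$:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.random(10**6) + rng.random(10**6)   # sum of two independent uniforms
# under the triangular density, P(Z <= 0.5) = 0.5**2 / 2 = 0.125
print((z <= 0.5).mean())
```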
Find a vector orthogonal to 2 vectors. I have the following problem: A, B, C, D are the 4 consecutive vertices of a parallelogram, with the following coordinates: A(1,-1,1); B(3,0,2); C(2,3,4); D(0,2,3). I must find a vector that is orthogonal to both CB and CD. How? Is there some kind of formula? Thanks,
The cross product of two vectors is orthogonal to both, and has magnitude equal to the area of the parallelogram bounded on two sides by those vectors. Thus, if you have: $$\vec{CB} = \langle3-2, 0-3, 2-4\rangle = \langle1, -3, -2\rangle$$ $$\vec{CD} = \langle0-2, 2-3, 3-4\rangle = \langle-2, -1, -1\rangle$$ Compute the following, which is an answer to your question: $$\langle1, -3, -2\rangle\times\langle-2, -1, -1\rangle = \langle1, 5, -7\rangle$$ Note, though, that there are infinitely many vectors that are orthogonal to $\vec{CB}$ and $\vec{CD}$. However, these are all non-zero scalar multiples of the cross product. So, you can multiply your cross product by any (non-zero) scalar.
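If you want to check the arithmetic numerically, a short NumPy snippet (my addition, not part of the original answer) confirms both the cross product and the orthogonality:

```python
import numpy as np

cb = np.array([1, -3, -2])    # vector CB
cd = np.array([-2, -1, -1])   # vector CD
n = np.cross(cb, cd)
print(n)                       # [ 1  5 -7]
print(n @ cb, n @ cd)          # both 0, so n is orthogonal to CB and CD
```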
Understanding the fundamentals of pattern recognition I'm learning now about sequences and series: patterns in short. This is part of my Calc II class. I'm finding I'm having difficulty in detecting all of the patterns that my text book is asking me to solve. My question at this point isn't directly about a homework problem (yet anyway), but instead help in understanding why certain statements are made in the example. So, the example from the book: $$ 1 + \frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+...\\ \begin{array}{lcc} Partial Sum & Value & Sugg. Expression \\ s_1 = 1 & 1 & 2 - 1 \\ s_2 = 1 + \frac{1}{2} & \frac{3}{2} & 2 - \frac{1}{2} \\ s_3 = 1 + \frac{1}{2} + \frac{1}{4} & \frac{7}{4} & 2 - \frac{1}{4} \\ & ... & \\ s_n = 1 + \frac{1}{2} + \frac{1}{4} + ... + \frac{1}{2^{n-1}} & \frac{2^n - 1}{2^{n-1}} & 2 - \frac{1}{2^{n-1}} \end{array} $$ Why is that for $s_1$ they say, "Suggested Expression" is 2-1? 4 - 3 also yields 1. Granted, the suggested values in the textbook are much simpler to work with. However, I'd like to know what wisdom leads the authors to say that 2-1 is the suggested expression instead of some other expression also yielding 1. It is also interesting to me that this process is necessary when this sequence is quite easily seen as $\sum_{n=1}^{\infty}\frac{1}{2^{n-1}}$. This section is all about convergence and divergence. Am I learning these extra steps because using the rule I've just outlined doesn't show what it converges to? Also, in typing this question, I think I've just discovered something the textbook was saying: a series isn't convergent unless the limit of its terms is 0. That is: $\lim_{n\to\infty}\frac{1}{2^{n-1}} = 0$. It's amazing what one finds when looking for something else. Thanks, Andy
The suggested expressions weren’t found one at a time: they’re synthesized from the whole pattern. We don’t seriously consider $4-3$ for the first one, for instance, because it doesn’t fit nicely with anything else. I’d describe the thought process this way. First we calculate the first few partial sums: $$\begin{array}{r|cc} n&1&2&3&4&5&6\\ \hline s_n&1&\frac32&\frac74&\frac{15}8&\frac{31}{16}&\frac{63}{32} \end{array}$$ At this point I can pursue either of two lines of thought. * *The partial sums seem to be getting very close to $2$. Perhaps they’re doing so in some regular, easily identifiable fashion? Let’s add another line to the table: $$\begin{array}{r|cc} n&1&2&3&4&5&6\\ \hline s_n&1&\frac32&\frac74&\frac{15}8&\frac{31}{16}&\frac{63}{32}\\ \hline 2-s_n&1&\frac12&\frac14&\frac18&\frac1{16}&\frac1{32} \end{array}$$ Now that was very informative: the denominators of the new entries are instantly recognizable as powers of $2$, specifically $2^{n-1}$, and it looks very much as if $2-s_n=\frac1{2^{n-1}}$, or $$s_n=2-\frac1{2^{n-1}}\;.$$ This is the line of thought that leads to the suggested expressions in the example. *The denominators of $s_n$ are instantly recognizable as powers of $2$, specifically $2^{n-1}$, and the numerators seem to be one less than the next higher power of $2$, or $2^n-1$. It looks very much as if $$s_n=\frac{2^n-1}{2^{n-1}}\;.$$ A little algebra of course shows that the conjectures are the same: $2-\dfrac1{2^{n-1}}=\dfrac{2^n-1}{2^{n-1}}$. Without seeing the example in full I can’t be sure, but I suspect that the suggested expressions are there because they make it immediately evident that $$\lim_{n\to\infty}s_n=\lim_{n\to\infty}\left(2-\frac1{2^{n-1}}\right)=2-\lim_{n\to\infty}\frac1{2^{n-1}}=2\;,$$ since clearly $\lim\limits_{n\to\infty}\dfrac1{2^{n-1}}=0$. Essentially the same idea is at the heart of the proof that $$\sum_{n\ge 0}x^n=\frac1{1-x}$$ if $|x|<1$; if that argument hasn’t yet appeared in your text, this example may be part of the preparation. Finally, note the correction of your final remark that joriki made in the comments.
How to generate random symmetric positive definite matrices using MATLAB? Could anybody tell me how to generate random symmetric positive definite matrices using MATLAB?
The algorithm I described in the comments is elaborated below. I will use $\tt{MATLAB}$ notation.

function A = generateSPDmatrix(n)
% Generate a dense n x n symmetric, positive definite matrix
A = rand(n,n);       % generate a random n x n matrix
% Construct a symmetric matrix; either of the next two lines works, and the
% first is significantly faster: O(n^2) compared to O(n^3)
A = 0.5*(A+A');
% A = A*A';
% Since A(i,j) < 1 by construction and a symmetric diagonally dominant matrix
% is symmetric positive definite, positive definiteness can be ensured by adding n*I
A = A + n*eye(n);
end

A few changes handle the case of a sparse matrix.

function A = generatesparseSPDmatrix(n,density)
% Generate a sparse n x n symmetric, positive definite matrix with
% approximately density*n*n nonzeros
A = sprandsym(n,density);  % generate a random sparse symmetric n x n matrix
% Adding n*I again pushes the matrix toward diagonal dominance, ensuring
% positive definiteness
A = A + n*speye(n);
end

In fact, if the desired eigenvalues of the random matrix are known and stored in the vector rc, then the command A = sprandsym(n,density,rc); will construct the desired matrix. (Source: MATLAB sprandsym website)
What is the real life use of hyperbolas? The point of this question is to compile a list of applications of the hyperbola, because a lot of people are unaware of them and ask about them frequently.
Applications of the hyperbola: Dulles Airport, designed by Eero Saarinen, has a roof in the shape of a hyperbolic paraboloid. The hyperbolic paraboloid is a three-dimensional surface that is a hyperbola in one cross-section, and a parabola in another cross section. Gear transmission is another example; a pair of hyperbolic gears is perhaps the greatest application. And hyperbolic structures are used in the cooling towers of nuclear reactors. Doesn't that make the hyperbola a great deal on earth? :)
Factor group is cyclic. Prove that a factor group of a cyclic group is cyclic. I didn't understand the last two lines of the proof: "Therefore $gH=(aH)^i$ for any coset $gH$, so $G/H$ is cyclic, by definition of cyclic groups." How does $gH=(aH)^i$ for any coset $gH$ prove the factor group to be cyclic? Please explain.
I just wanted to mention that more generally, if $G$ is generated by $n$ elements, then every factor group of $G$ is generated by at most $n$ elements: Let $G$ be generated by $\{x_1,\ldots x_n\}$, and let $N$ be a normal subgroup of $G$. Then every coset of $N$ in $G$ can be expressed as a product of the cosets $Nx_1,\ldots, Nx_n$. So the set $\{Nx_1,\ldots,Nx_n\}$ generates $G/N$, and this set contains at most $n$ elements. (Note that the cosets $Nx_i$ will not all be distinct if $N$ is non-trivial, but it's fine to write the set this way, just as $\{x^2 \mid x\in \mathbb{R}\}$ is a perfectly valid description of the set of non-negative real numbers.) The result about cyclic groups is then just the special case $n=1$ of this.
Simplifying $\sum\limits_{k=1}^{n-1} (2k + \log_2(k) - 1)$ I'm trying to simplify the following summation: $$\sum_{k=1}^{n-1} (2k + \log_2(k) - 1)$$. I've basically done the following: $$\sum_{k=1}^{n-1} (2k + \log_2(k) - 1) \\ = \sum_{k=1}^{n-1} 2k + \sum_{k=1}^{n-1} \log_2(k) - \sum_{k=1}^{n-1} 1\\ = \frac{n(n-1)}{2} + \sum_{k=1}^{n-1} \log_2(k) - (n-1)$$ Now I'm trying to do deal with this term $\sum_{k=1}^{n-1} \log_2(k)$, but I'm a bit confused. My gut tells me I can do the following: $$\sum_{k=1}^{n-1} \log_2(k)\\ = \log_2(1) + \log_2(2) + \ldots + \log_2(n-1)\\ = \log_2(\prod_{k=1}^{n-1} k)\\ = \log_2((n-1)!)$$ Using that $\log_a(b) + \log_a(c) = \log_a(b \cdot c)$. However, I'm not convinced this is an entirely valid reasoning because I can't find any rules/identities for dealing with $\sum_{k=1}^{n-1} \log_2(k)$. Is this correct or are there any rules to apply?
(Just so people know this has been answered) You are right: $$\sum_{k=1}^{n} \log k = \log (n!)$$ If you want to justify it formally, you can try using induction.
Find $x,y$ such that $x=4y$ and $1$-$9$ occur in $x$ or $y$ exactly once. $x$ is a $5$-digit number, while $y$ is a $4$-digit number, $x=4y$, and between them they use each of the digits from 1 to 9 exactly once. Find $x,y$. Can someone give me some ideas please? Thank you.
Here are the $x,y$ pairs a quick bit of code found: $(15768, 3942)$, $(17568, 4392)$, $(23184, 5796)$, $(31824, 7956)$. No insight to offer at the moment, I'm afraid.
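For the curious, a brute-force search along the lines the answer alludes to might look like this (my own sketch; the answerer's actual code is not shown):

```python
# Try every 4-digit y; keep pairs whose combined digits are exactly 1..9 once each.
for y in range(1000, 10000):
    x = 4 * y
    if sorted(str(x) + str(y)) == list("123456789"):
        print(x, y)
```

This prints exactly the four pairs listed above.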
What's the probability of somebody getting homework in class? There is a 25% chance you get homework in one class and a 40% chance you get homework in another class. What is the probability you get homework in both classes?
It is between 0 and 0.25, based on the information you have given. However, if you assume the classes are independent, i.e. they are two teachers who do not discuss whether they should give homework on the same day or not, then P(A and B) = P(A)P(B) = 0.25 * 0.4 = 0.1, where A is getting work from the 1st class and B is getting work from the 2nd class.
Evaluating $\lim \limits_{n\to \infty}\,\,\, n\!\! \int\limits_{0}^{\pi/2}\!\! \left(1-\sqrt [n]{\sin x} \right)\,\mathrm dx$ Evaluate the following limit: $$\lim \limits_{n\to \infty}\,\,\, n\!\! \int\limits_{0}^{\pi/2}\!\! \left(1-\sqrt [n]{\sin x} \right)\,\mathrm dx $$ I have done the problem. My method: First I applied L'Hôpital's rule, as it can be made of the form $\frac0 0$. Then I used the weighted mean value theorem and, using the sandwich theorem, reduced the limit to an integral which could be evaluated using properties of definite integration. I would like to see other different ways to solve for the limit.
You can have a closed form solution; in fact, if $Re(1/n)>-1$ the integral collapses to: $$\int_{0}^{\pi/2}\left[1-(\sin(x))^{1/n}\right]dx=\frac{1}{2} \left(\pi -\frac{2 \sqrt{\pi } n \Gamma \left(\frac{n+1}{2 n}\right)}{\Gamma \left(\frac{1}{2 n}\right)}\right)$$ So we define: $$y(n)=\frac{n}{2} \left(\pi -\frac{2 \sqrt{\pi } n \Gamma \left(\frac{n+1}{2 n}\right)}{\Gamma \left(\frac{1}{2 n}\right)}\right)$$ And performing the limit: $$\lim_{n \rightarrow + \infty}y(n)=-\frac{1}{4} \pi \left[\gamma +\psi ^{(0)}\left(\frac{1}{2}\right)\right]$$
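Since $\psi^{(0)}\!\left(\tfrac12\right)=-\gamma-2\ln 2$, the limit simplifies further (a completion of the last step, added for the reader): $$\lim_{n \rightarrow + \infty}y(n)=-\frac{\pi}{4}\left(\gamma-\gamma-2\ln 2\right)=\frac{\pi}{2}\ln 2\approx 1.0888.$$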
dealing with sum of squares (1) I need to be able to conclude that there are $a, b \in \Bbb Z$, not 0, such that $|a| < √p,\ |b| < √p$ and $$a^2 + 2b^2 ≡ 0\ (mod\ p)$$ I'm not sure how to go about this at all. But apparently it is supposed to help me show (2) that there are $a, b \in \Bbb Z$, such that either $$a^2 + 2b^2 = p$$ or $$a^2 + 2b^2 = 2p$$ Any idea how to go about 1 or 2?
Hint $\ $ Apply the following result of Aubry-Thue
integrate $\int_0^{2\pi} e^{\cos \theta} \cos( \sin \theta) d\theta$ How to integrate $ 1)\displaystyle \int_0^{2\pi} e^{\cos \theta} \cos( \sin \theta) d\theta$ $ 2)\displaystyle \int_0^{2\pi} e^{\cos \theta} \sin ( \sin \theta) d\theta$
Let $\gamma$ be the unit circle, positively oriented and traversed just once. Consider $\displaystyle \int _\gamma \frac{e^z}{z}\,dz$. On the one hand $$\begin{align} \int _\gamma \frac{e^z}{z}\mathrm dz&=\int \limits_0^{2\pi}\frac{e^{e^{i\theta}}}{e^{i\theta}}ie^{i\theta}\mathrm d\theta\\ &=i\int _0^{2\pi}e^{\cos (\theta)+i\sin (\theta )}\mathrm d\theta\\ &=i\int _0^{2\pi}e^{\cos (\theta )}[\cos (\sin (\theta))+i\sin (\sin (\theta))\textbf{]}\mathrm d\theta. \end{align}$$ On the other hand Cauchy's integral formula gives you: $\displaystyle \int _\gamma \frac{e^z}{z}\mathrm dz=2\pi i$. $\large \color{red}{\text{FINISH HIM!}}$
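For completeness (spelling out the step the answer deliberately leaves to the reader): writing $A$ and $B$ for the two requested integrals, the computation above gives $i(A+iB)=2\pi i$, i.e. $-B+iA=2\pi i$, so comparing real and imaginary parts, $$\int_0^{2\pi} e^{\cos \theta} \cos( \sin \theta)\, d\theta=2\pi, \qquad \int_0^{2\pi} e^{\cos \theta} \sin( \sin \theta)\, d\theta=0.$$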
How to expand $(a_0+a_1x+a_2x^2+...+a_nx^n)^2$? I know you can easily expand $(x+y)^n$ using the binomial expansion. However, is there a simple summation formula for the following expansion? $$(a_0+a_1x+a_2x^2+...+a_nx^n)^2$$ I found something called the multinomial theorem on Wikipedia but I'm not sure if that applies to this specific problem. Thanks.
To elaborate on the answer vonbrand gave, perhaps the following will help: $$(a + b)^2 \;\; = \;\; a^2 + b^2 + 2ab$$ $$(a + b + c)^2 \;\; = \;\; a^2 + b^2 + c^2 + 2ab + 2ac + 2bc$$ $$(a + b + c + d)^2 \;\; = \;\; a^2 + b^2 + c^2 + d^2 + 2ab + 2ac + 2ad + 2bc + 2bd + 2cd$$ In words, the square of a polynomial is the sum of the squares of all the terms plus the sum of double the products of all pairs of different terms. (added a few minutes later) I just realized Denise T wrote "... is there a simple summation formula for the following expansion", but maybe what I said could still be of help to others. What led me to initially respond was her comment (which initially I misunderstood) "What does the i,j notation in a summation mean?", which suggested to me that perhaps she didn't understand sigma notation and thus vonbrand's answer was much too advanced. (added 3 days later) Because of Denise T's comment, where she said in part "I am only familiar with basic sigma notation", and because of Wikipedia's notational-heavy treatment of the Multinomial theorem that in my opinion isn't very useful to someone who doesn't already mostly know the topic, I thought it would be useful to include a development that focuses more on the underlying ideas and technique. I would have written this earlier, but I've been extremely busy the past few days. Notation: In what follows, ellipses (i.e. $\dots$) denote the continuation of a finite sum, not the continuation of an infinite sum. Recall the (extended) distributive law: $$A(x+y+z+\dots) \; = \; Ax + Ay + Az + \dots$$ Using $A = (a+b+c+\dots)$, we get $$(a+b+c+\dots)(x+y+z+\dots)$$ $$= \; (a+b+c+\dots)x \; + \; (a+b+c+\dots)y \; + \; (a+b+c+\dots)z \; + \; \dots$$ Now imagine expanding each of the terms on the right side. For example, one such term to imagine expanding is $(a+b+c+\dots)y.$ It should be clear from this that the expansion of $(a+b+c+\dots)(x+y+z+\dots)$ consists of all possible products of the form (term from left sum) times (term from right sum) The above can be considered a left brain approach. A corresponding right brain approach would be the well known school method of dividing a rectangle with dimensions $(a+b+c+\dots)$ by $(x+y+z+\dots)$ into lots of tiny rectangles whose dimensions are $a$ by $x$, $a$ by $y,$ etc. For example, see this diagram. Of course, the rectangle approach assumes $a,$ $b,$ $\dots,$ $x,$ $y,$ $\dots$ are all positive. Now let's consider the case where the two multinomials, $(a+b+c+\dots)$ and $(x+y+z+\dots),$ are equal. $$(a+b+c+\dots)(a+b+c+\dots)$$ $$= \; (a+b+c+\dots)a \; + \; (a+b+c+\dots)b \; + \; (a+b+c+\dots)c \; + \; \dots$$ After expanding each of the terms on the right side, such as the term $(a+b+c+\dots)b,$ we find that the individual products of the form (term from left sum) times (term from right sum) can be classified into these two types: Type 1 product: $\;\;$ term from left sum $\;\; = \;\;$ term from right sum Type 2 product: $\;\;$ term from left sum $\;\; \neq \;\;$ term from right sum Examples of Type 1 products are $aa,$ $bb,$ $cc,$ etc. Examples of Type 2 products are $ab,$ $ba,$ $ac,$ $ca,$ $bc,$ $cb,$ etc. When all the Type 1 products are assembled together, we get $$a^2 + b^2 + c^2 + \dots$$ When all the Type 2 products are assembled together, we get $$(ab + ba) \; + \; (ac + ca) \; + \; (bc + cb) \; + \; \dots$$ $$=\;\; 2ab \; + \; 2ac \; + \; 2bc \; + \; \dots$$ Here is a way of looking at this that is based on the area of rectangles approach I mentioned above.
The full expansion arises from adding all the entries in a Cayley table (i.e. a multiplication table) for a binary operation on the finite set $\{a,\;b,\,c,\;\dots\}$ in which the binary operation is commutative. The Type 1 products are the diagonal entries in the Cayley table and the Type 2 products are the non-diagonal entries in the Cayley table. Because the operation is commutative, the sum of the non-diagonal entries can be obtained by doubling the sum of the entries above the diagonal (or by doubling the sum of the entries below the diagonal). In (abbreviated) sigma notation we have $$\left(\sum_{i}a_{i}\right)^2 \;\; = \;\; (type \; 1 \; products) \;\; + \;\; (type \; 2 \; products)$$ $$= \;\; \sum_{\begin{array}{c} (i,j) \\ i = j \end{array}} a_{i}a_{j} \;\; + \;\; \sum_{\begin{array}{c} (i,j) \\ i \neq j \end{array}} a_{i}a_{j}$$ $$= \;\; \sum_{\begin{array}{c} (i,j) \\ i = j \end{array}} a_{i}a_{j} \;\; + \;\; 2\sum_{\begin{array}{c} (i,j) \\ i < j \end{array}} a_{i}a_{j}$$ In older advanced "school level" algebra texts from the 1800s, such as the texts by George Chrystal and Hall/Knight and Elias Loomis and Charles Smith and William Steadman Aldis and Isaac Todhunter, you can often find the following even more abbreviated notation used: $$(\Sigma)^2 \;\; = \;\;(\Sigma a^2) \; + \; 2(\Sigma ab)$$ In this notation $(\Sigma a^2)$ represents the sum of all expressions $a^2$ where $a$ varies over the terms in the multinomial being squared, and $(\Sigma ab)$ represents the sum of all expressions $ab$ (with $a \neq b$) where $a$ and $b$ vary over the terms in the multinomial being squared (with the selection being "unordered", so that for example once you choose "$b$ and $c$" you don't later choose "$c$ and $b$"). This older notation is especially helpful in stating and using expansions of degree higher than $2$: $$(\Sigma)^3 \;\; = \;\; (\Sigma a^3) \; + \; 3(\Sigma a^2b) \; + \; 6(\Sigma abc)$$ $$(\Sigma)^4 \;\; = \;\; (\Sigma a^4) \; + \; 4(\Sigma a^3b) \; + \; 6(\Sigma a^2b^2) \; + \; 12(\Sigma a^2bc) \; + \; 24(\Sigma abcd)$$ Thus, using the above formula for $(\Sigma)^3$, we can immediately expand $(x+y+z+w)^3$. Altogether, there will be $20$ distinct types of terms: $$(\Sigma a^3) \;\; = \;\; x^3 \; + \; y^3 \; + \; z^3 \; + \; w^3$$ $$3(\Sigma a^2b) \;\; = \;\; 3( x^2y + x^2z + x^2w + y^2x + y^2z + y^2w + z^2x + z^2y + z^2w + w^2x + w^2y + w^2z)$$ $$6(\Sigma abc) \;\; = \;\; 6( xyz \; + \; xyw \; + \; xzw \; + \; yzw)$$ Here is why cubing a multinomial produces the pattern I gave above. By investigating what happens when you multiply $(a+b+c+\dots)$ by the expanded form of $(a+b+c+\dots)^2,$ you'll find that $$(a+b+c+\dots)(a+b+c+\dots)(a+b+c+\dots)$$ can be expanded by adding all individual products of the form (term from left) times (term from middle) times (term from right) The various products that arise can be classified into these three types: Type 1 product: $\;\;$ all $3$ terms are equal to each other Type 2 product: $\;\;$ exactly $2$ terms are equal to each other Type 3 product: $\;\;$ all $3$ terms are different from each other Examples of Type 1 products are $aaa,$ $bbb,$ $ccc,$ etc. Examples of Type 2 products are $aab,$ $aba,$ $baa,$ $aac,$ $aca,$ $caa,$ etc. Examples of Type 3 products are $abc,$ $acb,$ $bac,$ $bca,$ $cab,$ $cba,$ etc.
Note that each algebraic equivalent of a Type 2 product, such as $a^2b,$ shows up $3$ times, which explains why we multiply $(\Sigma a^2b)$ by $3.$ Also, each algebraic equivalent of a Type 3 product, such as $abc,$ shows up $6$ times, and hence we multiply $(\Sigma abc)$ by $6.$ Those who have understood most things up to this point and who can come up with an explanation for why the $4$th power of a multinomial produces the pattern I gave above (an explanation similar to what I gave for the $3$rd power of a multinomial) are probably now at the point where the Wikipedia article Multinomial theorem can be attempted. See also Milo Brandt's excellent explanation in his answer to Is there a simple explanation on the multinomial theorem?
How to solve $(1+x)^{y+1}=(1-x)^{y-1}$ for $x$? Suppose $y \in [0,1]$ is some constant, and $x \in [y,1]$. How to solve the following equation for $x$: $\frac{1+y}{2}\log_2(1+x)+\frac{1-y}{2}\log_2(1-x)=0$ ? Or equivalently $1+x = (1-x)^{\frac{y-1}{y+1}}$? Thanks very much.
If we set $f = \frac{1+x}{1-x}$ and $\eta = \frac{y+1}{2}$ some manipulation yields the following: $$2f^{\eta} - f - 1 = 0$$ For rational $\eta = \frac{p}{q}$, this can be converted to a polynomial in $f^{\frac{1}{q}}$, and is likely "unsolvable" exactly; it would require numerical methods (like Newton-Raphson etc., which will work for irrational $\eta$ too).
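As an illustration of the numerical route (my own sketch with an assumed example value $y=0.5$; note that $f=1$, i.e. $x=0$, is always a trivial root, so the code brackets the nontrivial one with $f>1$):

```python
import numpy as np
from scipy.optimize import brentq

y = 0.5                                 # assumed example value of the constant
eta = (y + 1) / 2
g = lambda f: 2 * f**eta - f - 1        # f = 1 (i.e. x = 0) is always a trivial root

f_star = brentq(g, 1 + 1e-6, 1e6)       # bracket the nontrivial root f > 1
x_star = (f_star - 1) / (f_star + 1)    # undo the substitution f = (1+x)/(1-x)
print(x_star)

# sanity check against the original equation (should print roughly 0)
print((1 + y)/2 * np.log2(1 + x_star) + (1 - y)/2 * np.log2(1 - x_star))
```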
Ways to fill an $n\times n$ square with $1\times 1$ squares and $1\times 2$ rectangles I came up with this question when I was staring at the wall of my dorm hall. I'm not sure if I'm asking it correctly, but this is roughly what I have: So, how many ways (patterns) are there to fill an $n\times n:n\in\mathbb{Z}_{n>0}$ square with only $1\times 1$ squares and $1\times 2$ rectangles? For example, for a $2\times 2$ square: * *Four $1\times 1$ squares; 1 way. *Two $1\times 1$ squares and one $1\times 2$ rectangle; $4$ ways total since we can rotate it to get different patterns. *Two $1\times 2$ rectangles; 2 ways total: placed horizontally or vertically. $\therefore$ There's a total of $1+4+2=\boxed{7}$ ways to fill a $2\times 2$ square. So, I'm just wondering if there's a general formula for calculating the ways to fill an $n\times n$ square. Thanks!
We can probably give some upper and lower bounds though. Let $t_n$ be the number of possible ways to tile an $n\times n$ square in the manner you described. At each square, we may have $5$ possibilities: either a $1\times 1$ square, or $4$ kinds of $1\times 2$ rectangles going up, right, down, or left. This gives you the upper bound $t_n \leq 5^{n^2}$. For the lower bound, consider a $2n\times 2n$ square, and divide it into $n^2$ $2\times 2$ blocks, starting from the top left and putting a $2\times 2$ square, putting another $2\times 2$ square to its right and so on... For each of these $2\times 2$ squares, we have $7$ possible distinct ways of tiling. This gives the lower bound $t_{2n} \geq 7^{n^2}$. Obviously, $t_{2n+1} \geq t_{2n},\,n \geq 1$, and therefore $t_n \geq 7^{\lfloor \frac{n}{2}\rfloor ^2}$. Hence, \begin{align} 7^{\lfloor \frac{n}{2}\rfloor ^2} \leq t_n \leq 5^{n^2}, \end{align} or roughly (if $n$ is even), \begin{align} (7^{1/4})^{n^2} \leq t_n \leq 5^{n^2}. \end{align} BTW, $7^{1/4} \geq 1.6$. So, at least we know $\log t_n \in \Theta(n^2)$. Note: Doing the $3\times 3 $ case for the lower bound, we get $(131)^{1/9} \geq 1.7$ which is slightly better.
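If you want to check small cases by machine (my own sketch, not part of the original answer), a direct recursive count that covers the first empty cell by a $1\times1$ square, a horizontal $1\times2$, or a vertical $1\times2$ reproduces the $7$ for the $2\times2$ board and the $131$ for the $3\times3$ board mentioned above:

```python
def count_tilings(n):
    def first_empty(covered):
        for i in range(n * n):
            if i not in covered:
                return i
        return None

    def rec(covered):
        i = first_empty(covered)
        if i is None:
            return 1                                  # board fully covered
        r, c = divmod(i, n)
        total = rec(covered | {i})                    # place a 1x1 square
        if c + 1 < n and (i + 1) not in covered:      # horizontal 1x2 rectangle
            total += rec(covered | {i, i + 1})
        if r + 1 < n:                                 # vertical 1x2 rectangle
            total += rec(covered | {i, i + n})
        return total

    return rec(frozenset())

print([count_tilings(n) for n in (1, 2, 3)])          # [1, 7, 131]
```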
Give an example of a simply ordered set without the least upper bound property. In Theorem 27.1 in Topology by Munkres, he states "Let $X$ be a simply ordered set having the least upper bound property. In the order topology, each closed interval in $X$ is compact." (The LUB property is if a subset is bounded above, then it has a LUB.) I don't understand how you could have a simply ordered set (a chain) WITHOUT the LUB property. If a subset is bounded and it is a chain, then how can it not have a LUB? Can someone give an example? Thanks!
According to the strict definitions given by the OP, the null set fails to have a Least Upper Bound while still being simply ordered. The Least Upper Bound of a set, as defined at the Wikipedia page he links to requires that it be a member of that set. The null set, having no members, clearly lacks a LUB. However, the definition given for being simply ordered does not require that the set have any elements. Indeed, a set can only lack the property if it has a pair of elements that are not comparable. So, the null set is indeed Simply Ordered without having the Least Upper Bound property.
The Radical of $SL(n,k)$ For an algebraically closed field $k$, I'd like to show that the algebraic group $G=SL(n,k)$ is semisimple. Since $G$ is connected and nontrivial, this amounts to showing that the radical of $G$, denoted $R(G)$, is trivial. $R(G)$ can be defined as the unique largest normal, solvable, connected subgroup of $G$. I know that the group of $n$th roots of unity of $k$ is inside of $G$, and it is normal and solvable (being in the center of $G$) but not connected, having one irreducible component for each root of unity. What are other normal subgroups in $G$? How can I show that $R(G)=e?$
The fact is that the quotient $\mathrm{PSL}_n(k)$ of $\mathrm{SL}_n(k)$ by its center is simple. Since the center of $\mathrm{SL}_n(k)$ consists, as you say, of the $n$th roots of unity, this shows that there are no nontrivial connected normal subgroups of $\mathrm{SL}_n(k)$. The fact that the projective special linear group is simple is not entirely trivial. There is a proof using Tits systems in the famous Bourbaki book on Lie groups and Lie algebras (chapter 4), which is of course more general. A more elementary approach, just using linear algebra, can be found in Grove's book "Classical groups and geometric algebra".
$\mathbb Q/\mathbb Z$ is an infinite group I'm trying to prove that $\mathbb Q/\mathbb Z$ is an infinite abelian group, the easiest part is to prove that this set is an abelian group, I'm trying so hard to prove that this group is infinite without success. This set is defined to be equivalences classes of $\mathbb Q$, where $x\sim y$ iff $x-y\in \mathbb Z$. I need help here. thanks a lot
Another hint: prove that for $$n,k\in\Bbb N\;,\;\;n\neq k\;,\;\;\;\frac{1}{n}+\Bbb Z\neq\frac{1}{k}+\Bbb Z$$
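Spelling the hint out (added for completeness): if $n\neq k$ then $0<\left|\frac1n-\frac1k\right|<1$, so $\frac1n-\frac1k\notin\Bbb Z$; hence the cosets $\frac1n+\Bbb Z$ are pairwise distinct and $\Bbb Q/\Bbb Z$ has infinitely many elements.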
How can the graph of an equivalence relation be conceptualized? Consider a generic equivalence relation $R$ on a set $S$. By definition, if we partition $S$ using the relation $R$ into $\pi_S$, whose members are the congruence classes $c_1, c_2...$ then $aRb \text{ iff a and b are members of the same congruence class in } \pi_S$. But what is the domain and codomain of $R$? Is it $S \rightarrow S $ or $S^2 \rightarrow \text{{true,false}}$? The reason I ask is to have an idea of the members of $graph(R)$. Does it only contain ordered pairs which are equivalent, i.e. $(a,b)$; or all elements of $S \times S$ followed by whether they are equivalent, i.e. $((a,b), true)$? Moreover, what would the image of $s \in S$ under $R$ look like? If one suggests that $R(s)$ would return a set of all the equivalent members, then the former definition is fitting. I suppose the source of confusion is that we rarely think of equivalence relations as 'mapping' from an input to an output, instead it tells us if two objects are similar in some way.
The domain is $S\times S$ and codomain is {true, false}. Said another way, you can just think of any relation as a subset of $S\times S$.
Dimensions: $\bigcap^{k}_{i=1}V_i \neq \{0\}$ Let $V$ be a vector space of dimension $n$ and let $V_1,V_2,\ldots,V_k \subset V$ be subspaces of $V$. Assume that \begin{eqnarray} \sum^{k}_{i=1} \dim(V_i) > n(k-1). \end{eqnarray} To show that $\bigcap^{k}_{i=1}V_i \neq \{0\}$, what must be done? Also, could there be an accompanying schematic/diagram to show the architecture of the spaces' form; that is, something like what's shown here.
Hint: take complements. That is, pick vector spaces $W_i$ with $\dim(W_i) + \dim(V_i) = n$ and $W_i \cap V_i = \{0\}$. EDIT: thanks, Ted
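A direct dimension count (added as a worked step; it does not use the hint's complements): since $\dim(U\cap W)=\dim U+\dim W-\dim(U+W)\ge \dim U+\dim W-n$ for subspaces $U,W\subseteq V$, induction on $k$ gives $$\dim\Bigl(\bigcap_{i=1}^{k}V_i\Bigr)\;\ge\;\sum_{i=1}^{k}\dim(V_i)-n(k-1)\;>\;0,$$ so the intersection contains a nonzero vector.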
Which function does this sequence of functions converge to? Prove that the sequence $$ \left(\sqrt x, \sqrt{x + \sqrt x}, \sqrt{x + \sqrt {x + \sqrt x}}, \ldots\right)$$ is convergent on $[0,\infty)$; I should find the limit function as well. To give an idea, I plotted the sequence (plot not shown here).
Note that each function in the sequence is bounded by $\sqrt{x}+1$, as can be shown easily by induction. For the limit $y=f(x)$ we have the following: $\sqrt{x+y}=y$, so $x+y=y^2$, i.e. $y^2-y-x=0$, whose discriminant is $\Delta= 1+4x$. Hence $y=f(x)=\frac{1+\sqrt{1+4x}}{2}$ for $x>0$, and $f(0)=0$.
Are there any other constructions of a finite field with characteristic $p$ except $\Bbb Z_p$? I mean, $\Bbb Z_p$ is an instance of $\Bbb F_p$, I wonder if there are other ways to construct a field with characteristic $p$? Thanks a lot!
Just to supplement the other answers: As stated in the other answers, for every prime power $p^r$, $r>0$, there is a unique (up to isomorphism) field with $p^r$ elements. There are also infinite fields of characteristic $p$, for instance if $F$ is any field of characteristic $p$ (e.g., $\mathbb Z_p$), the field $F(t)$ (the field of fractions of the polynomial ring $F[t]$, with $t$ an indeterminate), is an infinite field of characteristic $p$.
Can you find a function which satisfies $f(ab)=\frac{f(a)}{f(b)}$? Can you find a function which satisfies $f(ab)=\frac{f(a)}{f(b)}$? For example $\log(x)$ satisfies the condition $f(ab)=f(a)+f(b)$ and $x^2$ satisfies $f(ab)=f(a)f(b)$.
Let us reformulate the question as classify all maps $f : G \rightarrow H$ which need not be group morphism that satisfies the condition $f(ab)=f(a)f(b)^{-1}$ A simple calculation shows that $f(e)=f(x)f(x^{-1})^{-1}= f(x^{-1})f(x)^{-1}$ or we have $f(x)=f(e)^{-1}f(x^{-1})=f(e)f(x^{-1})$ or $f(e)^{-1}=f(e)$ Now $f(x)=f(ex)=f(e)f(x)^{-1}=f(e)^{-1}f(x)^{-1}=(f(x)f(e))^{-1}=(f(x)f(e)^{-1})^{-1}=f(xe)^{-1}=f(x)^{-1}$ so even if we don't assume a group morphism we have the image involutive. And hence it's a group morphism $f(ab)=f(a)f(b)^{-1}=f(a)f(b)$
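A concrete example consistent with this conclusion (my addition): on the multiplicative group $\Bbb R\setminus\{0\}$, the map $f(x)=\operatorname{sign}(x)$ satisfies the condition, since its values $\pm1$ are their own inverses: $f(ab)=\operatorname{sign}(a)\operatorname{sign}(b)=\operatorname{sign}(a)/\operatorname{sign}(b)=f(a)/f(b)$. The constant function $f\equiv 1$ is the trivial example.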
Why is this function Lipschitz? Let $f:A \to B$ where $A$, $B \subset \mathbb{R}^n$. Suppose $$\lVert f(y_1) - f(y_2)\rVert_{\ell_\infty} \geq C\lVert y_1 - y_2 \rVert_{\ell_\infty}$$ This tells us that $f$ is one to one and that the inverse is Lipschitz. I am told that $f$ is bi-Lipschitz; so $f$ is also Lipschitz, but I don't see why?
I was inaccurate: in the general setting of your problem, where the only thing we know about $A$ is that $A \subset \mathbb{R}^n$, it is not true that even if $f \in W^{1,\infty}(A)$ then $f \in \text{Lip}(A)$. Nevertheless, what is true is the following: $\textbf{Theorem:}$ Let $U$ be open and bounded, with $\partial U$ of class $C^1$. Then $u \colon U \to \mathbb{R}$ is Lipschitz continuous if and only if $u \in W^{1,\infty}(U)$. This is proved in Evans's Partial Differential Equations: it is Th. 4 of the additional topics in chapter 5. For the case in which $U$ is unbounded you just need to read the section dedicated to extensions. The theorem is pretty sharp: indeed, any weakening of the hypotheses makes the theorem false; Google has counterexamples. (Even the hypothesis that $\partial U$ is of Lipschitz class is too weak.) If your $A$ is a general open set, what is true is that $u \in W^{1,\infty}_{loc}(A) \Longleftrightarrow u \in \text{Lip}_{loc}(A)$. I really hope this helps!
Simplicial cohomology of $ \Bbb{R}\text{P}^2$ I've managed to confuse myself on a simple cohomology calculation. I'm working with the usual $\Delta$-complex on $X = \mathbf{R}\mathbf{P}^2$ and I've computed the complex as $\newcommand{Z}{\mathbf{Z}}$ $$ 0 \to \Z \oplus \Z \stackrel{\partial^0}{\to} \Z \oplus \Z \oplus \Z \stackrel{\partial^1}{\to} \Z \oplus \Z \to 0 $$ with $\partial^0$ given by $(l, m) \mapsto (-l+m, -l+m, 0)$ and $\partial^1$ by $(l,m,n) \mapsto (l+m-n, -l+m+n)$. Then $\mathrm{Ker}(\partial^0) = \left<(1,1)\right> \cong \Z$ and $\mathrm{Im}(\partial^0) = \left<(1,1,0)\right> \cong \Z$. For $\partial^1$ I got $\mathrm{Ker}(\partial^1) = \left<(1,0,1)\right> \cong \Z$ and I'm pretty confident about everything so far. Now for $\mathrm{Im}(\partial^1)$ I first got $2\Z \oplus 2\Z$, since $(2, 0)$ and $(0, 2)$ are both in the image while $(1, 0)$ and $(0, 1)$ are not. I don't see what's wrong with this logic, but it doesn't give the right answer: $H^2(X) \cong \Z \oplus \Z / (2\Z \oplus 2\Z) \cong \Z/2\Z \oplus \Z/2\Z$ while I believe the correct answer has only one copy. A second approach I tried is the "isomorphism theorem" which says $\mathrm{Im}(\partial^1) \cong \Z \oplus \Z \oplus \Z / \mathrm{Ker}(\partial^1) = (\Z \oplus \Z \oplus \Z) / \Z \cong \Z \oplus \Z$. But then $H^2(X) \cong \Z \oplus \Z / (\Z \oplus \Z) = 0$ is still wrong. What's wrong with both of these approaches, and what's the correct one? EDIT: I just realised that of course $\Z \oplus \Z \cong 2\Z \oplus 2\Z$ so both approaches actually give the same answer for $\mathrm{Im}(\partial^1)$. More specifically I think it is generated by $\left<(1, 1), (1, -1)\right>$. So I can only assume I'm computing the quotient $\Z^2/\mathrm{Im}(\partial^1)$ incorrectly. To be very precise, we have the isomorphism $$ H^2(X) = \Z \oplus \Z / \mathrm{Im}(\partial^1) \stackrel{\cong}{\to} \Z $$ given by $(m, n) + \left<(1, 1), (1, -1)\right> \mapsto m + n$. Since $(m,n) \sim (0, m+n)$ this map is injective; and it is obviously surjective because $(n, 0)$ always maps to $n$ for any $n \in \Z$. This is so weird...... EDIT: Of course, the problem with the above "isomorphism" is that it is not actually a well-defined homomorphism, as it doesn't agree on $(1, 1)$ and $(1, -1)$ (hence we mod out $2\Z$...)
Assuming that you have computed your cochain complex correctly, the problem with your first approach is that knowing $(2,0), (0,2) \in \operatorname{im} \partial^1$ and $(1,0), (0,1) \notin \operatorname{im} \partial^1$ does not force $\operatorname{im} \partial^1 = 2\Bbb{Z} \oplus 2\Bbb{Z}$; the image is in fact strictly larger. Instead to calculate $H^2(X)$ you will need to work with generators and relations. Define $a := (1,0)$ and $b:= (0,1)$, your basis vectors of $\Bbb{Z} \oplus \Bbb{Z}$. A basis for the image of $\partial^1$ is given by $a -b$ and $a + b$. So when you quotient out by $\operatorname{im} \partial^1$, you are effectively saying that $$H^2(X) \cong \langle a,b | a +b = a-b= 0\rangle$$ The relations $a + b= 0$ and $a - b = 0$ combine to give $2a = 0$, $a = -b$. This means $$H^2(X) \cong \langle a | 2a = 0 \rangle \cong \Bbb{Z}/2\Bbb{Z}.$$
Number of conjugacy classes of the reflections in $D_n$. Consider the conjugation action of $D_n$ on $D_n$. Prove that the number of conjugacy classes of the reflections is $\begin{cases} 1 &\text{ if } n=\text{odd} \\ 2 &\text{ if } n=\text{even} \end{cases} $ I tried this: Let $σ$ be a reflection and $ρ$ the standard rotation of $D_n$. $$ρ^l⋅σρ^k⋅ρ^{-l}=σρ^{k-2l}$$ $$σρ^l⋅σρ^k⋅ρ^{-l}σ=σρ^{-k+2l}$$ If $n$ is even, it depends on $k$ whether $-k+2l$ will stay even. But if $n$ is odd, then at some point $-k+2l=|D_n|$ and therefore you will also get the even elements. So independent of $k$ you will get all the elements. Is this the idea?
Hint. The Orbit-Stabilizer theorem gives you that $[G:C_G(g)]$ is the size of the conjugacy class containing $g$. When $n$ is odd, a reflection $g$ commutes only with the identity and itself (why?), so its conjugacy class has $[G:C_G(g)]=|G|/2$ elements, which are easily identified as the reflections. Now, use this same technique to figure out the answer for the case of even $n$, keeping in mind that dihedral groups $D_{2n}$ have nontrivial centers when $n$ is even (why?).
How to evaluate powers of powers (i.e. $2^3^4$) in absence of parentheses? If you look at $2^{3^4}$, what is the expected result? Should it be read as $2^{(3^4)}$ or $(2^3)^4$? Normally I would use parentheses to make the meaning clear, but if none are shown, what would you expect? (In this case, the formatting gives a hint, because I can either enter $2^{3^4}$ or ${2^3}^4$. If I omit braces in the MathML expression, the output is shown as $2^3^4$. Just suppose all three numbers were displayed in the same size, and the same vertical offset between 2, 3 and 3, 4.)
In the same way that any expressions in brackets inside other brackets are done before the rest of the things in the brackets, I'd say that one works from the top down in such a case. I.e., because we do $(a*b)$ first in $((a*b)*c)*d$, I'd imagine it'd be the expected thing to do $x^{(y^z)}$.
Reversing the Gram Matrix Let $A$ be a $M\times N$ real matrix, then $B=A^TA$ is the gramian of $A$. Suppose $B$ is given, is $A$ unique? Can I say something on it depending on $M$ and $N$.
$A$ will definitely not be unique without some pretty serious restrictions. The simplest case to think about might be to consider $M\times 1$ 'matrices', i.e. column vectors. Then, $A^TA$ is simply the norm-squared of $A$, so for instance $A^TA=1$ would hold for any vector with norm $1$ (i.e. the unit sphere in $\Bbb{R}^M$).
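A quick numerical illustration of the non-uniqueness (my own sketch, not part of the original answer): any orthogonal $Q$ gives $QA$ the same Gram matrix as $A$, since $(QA)^T(QA)=A^TQ^TQA=A^TA$.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 3
A = rng.standard_normal((M, N))
Q, _ = np.linalg.qr(rng.standard_normal((M, M)))  # random orthogonal M x M matrix
C = Q @ A                                         # a genuinely different matrix...
print(np.allclose(A.T @ A, C.T @ C))              # ...with the same Gram matrix: True
```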
Condition number question Please help me with this problem: Let $A$ be a matrix of order $100$, $$A\ =\ \left(\begin{array}{ccccc} 1 & 2 & & & \\ & 1 & 2 & & \\ & & \ddots & \ddots & \\ & & & 1 & 2\\ & & & & 1 \end{array}\right).$$ Show that $\mbox{cond}_2(A) \geq 2^{99}$. Thanks in advance.
We consider a general order $n$. Calculate $\|Ax\|/\|x\|$ with $x=(1,1,\ldots,1)^T$ to get a lower bound $p=\sqrt{\frac{9(n-1)+1}{n}}$ for $\sigma_1(A)$. Compute $\|Ax\|/\|x\|$ for $x=\left((-2)^{n-1},(-2)^{n-2},\,\ldots,\,-2,1\right)^T$ to get an upper bound $q=\sqrt{\frac{3}{4^n-1}}$ for $\sigma_n(A)$. Now $\frac pq$ is a lower bound for $\operatorname{cond}_2(A)$ and you may show that it is $\ge2^{n-1}$.
Standard Young Tableaux and Bijection to saturated chains in Young Lattice I'm reading Sagan's book The Symmetric Group and am quite confused. I was under the assumption that any tableau with entries weakly increasing along a row and strictly increasing down a column would be considered standard Young tableau, e.g. $$1\; 2$$ $$2\; 3$$ would be a standard Young tableau. But Sagan proposes that there is a simple bijection between standard Young tableaux and saturated $\emptyset-\lambda$ chains in the Young lattice. But this wouldn't make sense for the above tableau, since you could take both: * *$\emptyset \prec (1,0) \prec (1,1) \prec (2,1) \prec (2,2)$ *$\emptyset \prec (1,0) \prec (2,0) \prec (2,1) \prec (2,2)$ I believe I am missing something, can someone please clarify?
Actually your Young tableau corresponds to the chain $$\begin{array}{cccccc}\emptyset & \prec & \bullet & \prec & \bullet & \bullet & \prec & \bullet & \bullet \\ & & & & \bullet & & & \bullet & \bullet\end{array}$$ that is, $\emptyset \prec (1,0) \prec (2,1) \prec (2,2)$, which is not saturated.
The Best of Dover Books (a.k.a the best cheap mathematical texts) Perhaps this is a repeat question -- let me know if it is -- but I am interested in knowing the best of Dover mathematics books. The reason is because Dover books are very cheap and most other books are not: For example, while something like Needham's Visual Complex Analysis is a wonderful book, most copies of it are over $100. In particular, I am interested in the best of both undergraduate and graduate-level Dover books. As an example, I particularly loved the Dover books Calculus of Variations by Gelfand & Fomin and Differential Topology by Guillemin & Pollack. Thanks. (P.S., I am sort of in an 'intuition-appreciation' kick in my mathematical studies (e.g., Needham)) EDIT: Thank you so far. I'd just like to mention that the books need not be Dover, just excellent and affordable at the same time.
Though it lacks any treatment of cardinal functions, Stephen Willard’s General Topology remains one of the best treatments of point-set topology at the advanced undergraduate or beginning graduate level. Steen & Seebach, Counterexamples in Topology, is not a text, but it is a splendid reference; the title is self-explanatory.
Why aren't logarithms defined for negative $x$? Given that $y = \log_b{x}$ is defined to mean $b^y = x$ (with $x$ and $b$ positive, and $b$ not equal to $1$)[1], why aren't logarithms defined for negative numbers? Why can't $b$ be negative? Take $(-2)^3 = -8$ for example. Turning that into a logarithm, we get $3 = \log_{(-2)}{(-8)}$, which is an invalid equation as $b$ and $x$ are not positive! Why is this?
For the real, continuous exponentiation operator -- the one used in the definition of the real, continuous logarithm -- $(-2)^3$ is undefined, because it has a negative base. The motivation stems from continuity: If $(-2)^3$ is defined, then $(-2)^{\pi}$ and $(-2)^{22/7}$ should both be defined as well, and be "close" in value to $(-2)^3$, because $\pi$ and $22/7$ are "close" to 3. But the same line of reasoning says that $(-2)^{22/7}$ should be $8.8327...$, which isn't very close to $-8$ at all, and what could $(-2)^{\pi}$ possibly mean?!?! You can define a "discrete logarithm" operator that corresponds to the discrete exponentiation operator (i.e. the one that defines $(-2)^3$ to be repeated multiplication, and thus $-8$), but this has less general utility -- my expectation is that one would be much better off learning just the continuous logarithm and dealing with the signs "manually" in those odd cases where you want to solve questions involving discrete exponentiation with negative bases. (and eventually learning the complex logarithm)
On the weak closedness I have some difficulties in this question. Let $X$ be a nonreflexive Banach space and $K\subset X$ be a nonempty, convex, bounded and closed in norm. We consider $K$ as a subset of $X^{**}$. I would like to ask whether $K$ is closed w.r.t the topology $\sigma(X^{**}, X^*)$. Thank you for all comments and kind help.
No, this fails in every nonreflexive space for $K=\{x\in X:\|x\|\le 1\}$, the closed unit ball of $X$. Indeed, any neighborhood of a point $p\in X^{**}$ contains the intersection of finitely many "slabs" $\{x^{**}\in X^{**}: a_j< \langle x^{**}, x^*_j\rangle < b_j \}$ for some $x^*_j\in X^*$and $a_j,b_j\in \mathbb R$. It is easy to see that $\bigcap_j \{x\in X: a_j< \langle x, x^*_j\rangle < b_j \}$ is nonempty as well; indeed, $\bigcap_j \{x \in X: \langle x, x^*_j\rangle = (a_j+b_j)/2\}$ is a subspace of finite codimension. Thus, $X$ is dense in $X^{**}$. Since the embedding of $X$ into $X^{**}$ is isometric, the unit ball of $X$ is dense in the unit ball of $X^{**}$.
Finding the limit of a weird sequence of functions. Do you know any convergent sequence of continuous functions whose limit function is discontinuous at infinitely many points of its domain?
Let $$f_n(x) = \begin{cases} n\hat{x} & \hat{x}\in[0,\frac{1}{n}) \\ 1 & \hat{x}\in[\frac{1}{n},\frac{n-1}{n}) \\ n-n\hat{x} & \hat{x}\in[\frac{n-1}{n},1)\end{cases},$$ where $\hat{x}$ is the fractional part of $x$. It is not too hard to see that $f_n$ is continuous for every $n\in\mathbb{N}$, but $\lim_{n\to\infty} f_n = f_\infty$, where $f_\infty(x) = \begin{cases} 1 & x\notin\mathbb{Z} \\ 0 & x\in\mathbb{Z}\end{cases}$, which is clearly not continuous at any integer.
Measure theory problem no. 12, page 92, in the book by Stein and Shakarchi. Show that there are $f \in L^1(\Bbb{R}^d,m)$ and a sequence $\{f_n\}$ with $f_n \in L^1(\Bbb{R}^d,m)$ such that $\|f_n - f\|_{L^1} \to 0$, but $f_n(x) \to f(x)$ for no $x$.
Let us see the idea for $\mathbb{R}$. Let $f_1$ be the characteristic function of $[0,0.5]$, $f_2$ that of $[0.5,1]$, $f_3$ that of $[0,0.25]$, and so on. They form the desired sequence on $L^1([0,1])$. The rest is an exercise, because $\mathbb{R}$ can be covered by a countable number of intervals.
About injectivity of induced homomorphisms on quotient rings Let $A, B$ be commutative rings with identity, let $f: A \rightarrow B$ be a ring homomorphism (with $f(1) = 1$), let $\mathfrak{a}$ be an ideal of $A$, $\mathfrak{b}$ an ideal of $B$ such that $f(\mathfrak{a}) \subseteq \mathfrak{b}$. Then there is a well-defined homomorphism \begin{align} &\bar f: &&A / \mathfrak{a} &&\rightarrow &&B / \mathfrak{b} \\ & &&a + \mathfrak{a} &&\mapsto &&f(a) + \mathfrak{b}. \end{align} It's clear to me that if $\mathfrak{a} = \mathfrak{b}^c$, then $f$ injective $\implies \bar f$ injective. Question: Under what conditions does $\bar f$ injective $\implies f$ injective hold?
Suppose $\bar f$ is injective. Then $f(a)\in \mathfrak{b}$ implies $a\in \mathfrak{a}$. This means $\mathfrak{a} \supseteq \mathfrak{b}^c$, hence $\mathfrak{a}=\mathfrak{b}^c$.
$\mathbb{R^+}$ is the disjoint union of two nonempty sets, each closed under addition. I saw Using Zorn's lemma show that $\mathbb R^+$ is the disjoint union of two sets closed under addition. and have a question related to the answer (I'm not sure if this is the right place to post it); Why don't we just take $\mathcal{A}=\{A\subseteq \mathbb{R^+}\ :\text{A is closed under addition and all elements of A are irrational}\}$, $\mathcal{B}=\{B\subseteq \mathbb{R^+}\ :\text{B is closed under addition and all elements of B are rational}\}$ and order both of them with the partial order $\subseteq$ so that they will satisfy the chain condition? Then, they have maximal elements $\bigcup{\mathcal{A}}$ and $\bigcup{\mathcal{B}}$ respectively. So it remains to verify that $\bigcup{\mathcal{A}}$ is the set of all irrationals and $\bigcup{\mathcal{B}}$ is the set of all rationals. Is this approach correct?
There is a reason why you shouldn't try to explicitly construct such a decomposition. If $\mathbb{R}^{+}$ is a disjoint union of $A$ and $B$, where both $A$ and $B$ are closed under addition then neither one of $A$ and $B$ is either Lebesgue measurable or has Baire property since if $X$ is either a non meager set of reals with Baire property or $X$ has positive Lebesgue measure then $X + X$ has non empty interior. It is well known that such sets cannot be provably defined within set theory (ZFC).
Proving $\sqrt{2}\in\mathbb{Q_7}$? Why does Hensel's lemma imply that $\sqrt{2}\in\mathbb{Q_7}$? I understand Hensel's lemma, namely: Let $f(x)$ be a polynomial with integer coefficients, and let $m$, $k$ be positive integers such that $m \leq k$. If $r$ is an integer such that $f(r) \equiv 0 \pmod{p^k}$ and $f'(r) \not\equiv 0 \pmod{p}$ then there exists an integer $s$ such that $f(s) \equiv 0 \pmod{p^{k+m}}$ and $r \equiv s \pmod{p^{k}}$. But I don't see how this has anything to do with $\sqrt{2}\in\mathbb{Q_7}$? I know a $7$-adic number $\alpha$ is a $7$-adically Cauchy sequence $a_n$ of rational numbers. We write $\mathbb{Q}_7$ for the set of $7$-adic numbers. A sequence $a_n$ of rational numbers is $p$-adically Cauchy if $|a_{m}-a_n|_p \to 0$ as $n \to \infty$. How do we show $\sqrt{2}\in\mathbb{Q_7}$?
This is how I see it and, perhaps, it will help you: we can easily solve the polynomial equation $$p(x)=x^2-2=0\pmod 7\;\;(\text{ i.e., in the ring (field)} \;\;\;\Bbb F_7:=\Bbb Z/7\Bbb Z)$$ and we know that there's a solution $\,w:=\sqrt 2=3\in \Bbb F_{7}\,$. Since the root $\,w\,$ is simple (i.e., $\,p'(w)\neq 0\,$), Hensel's Lemma gives us lifts for the root, meaning: for any $\,k\ge 2\,$, there exists an integer $\,w_k\,$ s.t. $$(i)\;\;\;\;p(w_k)=0\pmod {7^k}\;\;\wedge\;\;w_k=w_{k-1}\pmod{7^{k-1}}$$ Now, if you know the inverse limit definition of the $\,p$-adic integers then we're done, as the above shows the existence of $\,\sqrt 2\in\Bbb Q_7\,$; otherwise you can go by some other definition (say, infinite sequences, etc.) and you get the same result.
Series with complex terms, convergence Could you tell me how to determine convergence of series with terms being products of real and complex numbers, like this: $\sum_{n=1} ^{\infty}\frac{n (2+i)^n}{2^n}$ , $ \ \ \ \ \sum_{n=1} ^{\infty}\frac{1}{\sqrt{n} +i}$? I know that $\sum (a_n +ib_n)$ is convergent iff $\sum a_n$ converges and $\sum b_n$ converges. (How) can I use it here?
We have $$\left|\frac{n(2+i)^n}{2^n}\right|=n\left(\frac{\sqrt{5}}{2}\right)^n\not\to0$$ so the series $\displaystyle\sum_{n=1}^\infty \frac{n(2+i)^n}{2^n}$ is divergent. For the second series we have $$\frac{1}{\sqrt{n}+i}\sim_\infty\frac{1}{\sqrt{n}}$$ then the series $\displaystyle\sum_{n=1}^\infty \frac{1}{\sqrt{n}+i}$ is also divergent.
Shooting game - a probability question In a shooting game, the probability for Jack to hit a target is 0.6. Suppose he makes 8 shots, find the probabilities that he can hit the target in more than 5 shots. I find this question in an exercise and do not know how to solve it. I have tried my best but my answer is different from the one in the answer key. Can anyone suggest a clear solution to it? My trial: (0.6^6)(0.4^2)+(0.6^7)(0.4)+0.6^8 But it is wrong... (The answer is 0.3154, which is correct to 4 significant figures)
Total shots: $n = 8$. Success probability: $p = 0.6$ (the probability that Jack hits the target). Failure probability: $q = 1-p = 1-0.6 = 0.4$ (the probability that Jack does not hit the target). P(he hits the target in more than 5 shots, i.e. 6 to 8 times) $= P(X = 6) + P(X = 7) + P(X = 8) = \binom{8}{6}(0.6)^{6}(0.4)^{8-6} + \binom{8}{7}(0.6)^{7}(0.4)^{8-7} + \binom{8}{8}(0.6)^{8}(0.4)^{8-8}$
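Evaluating the three terms numerically (added for completeness): $$\binom{8}{6}(0.6)^6(0.4)^2+\binom{8}{7}(0.6)^7(0.4)+\binom{8}{8}(0.6)^8\approx 0.2090+0.0896+0.0168\approx 0.3154,$$ which matches the answer key.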
Polynomials not dense in Hölder spaces. How does one prove that the polynomials are not dense in the Hölder space with exponent, say, $\frac{1}{2}$?
By exhibiting a function that cannot be approximated by polynomials in the norm of $C^{1/2}$, such as $f(x)=\sqrt{x}$ on the interval $[0,1]$. The proof is divided into steps below; you might not need to read all of them. Let $p$ be a polynomial. 1) $(p(x)-p(0))/x\to p'(0)$ as $x\to 0^+$. 2) $|p(x)-p(0)|/x^{1/2}\to 0$ as $x\to 0^+$. 3) $|(f(x)-p(x))-(f(0)-p(0))|/x^{1/2}\to 1$ as $x\to 0^+$. 4) $\|f-p\|_{C^{1/2}}\ge 1$.
Graph Concavity Test I'm studying for my final, and I'm having a problem with one of the questions. Everything before hand has been going fine and is correct, but I'm not understanding this part of the concavity test. $$f(x) = \frac{2(x+1)}{3x^2}$$ $$f'(x) =-\frac{2(x+2)}{3x^3}$$ $$f''(x) = \frac{4(x+3)}{3x^4}$$ For the increasing and decreasing test I found that the critical point is -2: $$2(x+2)=0$$ $$x = -2$$ (This part is probably done differently than most of you do this), here's the chart of the I/D 2(x+2), Before -2 you will get a negative number, and after -2 you will get a positive number. Therefore, f'(x), before -2 will give you a negative number, and after you will get a positive number, so f(x) before -2 will be decreasing and after it will be increasing. Where -2 is your local minimum. (By this I mean; 2(x+2), any number before -2 (ie. -10) it will give you a negative number.) As for the concavity test, I did the same thing basically; $$4(x + 3) = 0$$ $$x = -3$$ However, my textbook says that x = 0 is also a critical point, I don't understand where you get this from. If anyone can explain this I would appreciate it, also if there's a more simple way of doing these tests I would love to hear it, thanks.
If I understood your question correctly, you are working on the function $f(x)$ as above and want to know its concavity. First of all, note that, as the first comment above says, $x=0$ is also a critical point for $f$. Remember the definition of a critical point for a function. Secondly, you see that $x=0$ cannot give you a local max or local min because $f$ is undefined at this point. $x=0$ is not an inflection point because when $x<0$ or $x>0$ in a small neighborhood, the sign of $f''$ doesn't change. It is positive, so $f$ is concave upward around the origin.
Transformation of an integral from $(0,\infty)$ to $(0,1)$. How do I transform the integral $$\int_0^\infty e^{-x^2} dx$$ from $0$ to $\infty$ into one from $0$ to $1$? I have to devise a Monte Carlo algorithm to solve this further, so any advice would be of great help.
Pick your favorite invertible, increasing function $f : (0,1) \to (0,+\infty)$. Make a change of variable $x = f(y)$. Or, pick your favorite invertible, increasing function $g : (0,+\infty) \to (0,1)$. Make a change of variable $y = g(x)$.
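A concrete instance of this (my own sketch; the choice of map is just one of many): take $y = x/(1+x)$, i.e. $x = y/(1-y)$, so $dx = dy/(1-y)^2$ and $$\int_0^\infty e^{-x^2}\,dx=\int_0^1 \frac{e^{-\left(y/(1-y)\right)^2}}{(1-y)^2}\,dy,$$ which a plain Monte Carlo average over uniform samples on $(0,1)$ can estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random(10**6)                        # uniform samples on [0, 1)
g = np.exp(-(y / (1 - y))**2) / (1 - y)**2   # integrand after the change of variable
print(g.mean())                              # Monte Carlo estimate
print(np.sqrt(np.pi) / 2)                    # exact value, about 0.8862
```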
Prove that $\tan(75^\circ) = 2 + \sqrt{3}$ My (very simple) question to a friend was how do I prove the following using basic trig principles: $\tan75^\circ = 2 + \sqrt{3}$ He gave this proof (via a text message!) $1. \tan75^\circ$ $2. = \tan(60^\circ + (30/2)^\circ)$ $3. = (\tan60^\circ + \tan(30/2)^\circ) / (1 - \tan60^\circ \tan(30/2)^\circ) $ $4. \tan (30/2)^\circ = \dfrac{(1 - \cos30^\circ)}{ \sin30^\circ}$ Can this be explained more succinctly as I'm new to trigonometry and a little lost after (2.) ? EDIT Using the answers given I'm almost there: * *$\tan75^\circ$ *$\tan(45^\circ + 30^\circ)$ *$\sin(45^\circ + 30^\circ) / \cos(45^\circ + 30^\circ)$ *$(\sin30^\circ.\cos45^\circ + \sin45^\circ.\cos30^\circ) / (\cos30^\circ.\cos45^\circ - \sin45^\circ.\sin30^\circ)$ *$\dfrac{(1/2\sqrt{2}) + (3/2\sqrt{2})}{(3/2\sqrt{2}) - (1/2\sqrt{2})}$ *$\dfrac{(1 + \sqrt{3})}{(\sqrt{3}) - 1}$ *multiply throughout by $(\sqrt{3}) + 1)$ Another alternative approach: * *$\tan75^\circ$ *$\tan(45^\circ + 30^\circ)$ *$\dfrac{\tan45^\circ + \tan30^\circ}{1-\tan45^\circ.\tan30^\circ}$ *$\dfrac{1 + 1/\sqrt{3}}{1-1/\sqrt{3}}$ *at point 6 in above alternative
You can rather use $\tan (75)=\tan(45+30)$ and plug into the formula by Metin. The reason: your $15^\circ$ is not so trivial.
Suggest an Antique Math Book worth reading? I'm not a math wizard, but I recently started reading through a few math books to prepare myself for some upcoming classes and I'm starting to really get into it. Then I noticed a few antique math books at a used bookstore and bought them thinking that, if nothing else, they would look cool on my bookshelf. But as it turns out, I enjoy both reading and collecting them. I find myself constantly browsing used book stores, thrift stores, antique stores etc. looking for the next book to add to my library. So do you know of an antique book that you found interesting, helpful, or historically relevant? (Just some insight: some of the books I have that I like are: The Laws of Thought by George Boole; Mathematical Methods of Statistics by Harald Cramer; and Introduction to Mathematical Analysis by F.L. Griffin. I've also enjoyed reading online about probability, logic, and math history. But any area of mathematics is fine as I'm still discovering which areas interest me.)
For a book that is not going to teach you any new math, but will give you a window into how a mathematical personality might think or act, I would recommend I Want to be a Mathematician by Paul Halmos. Quite a fun read, full of all of the joys and nuisances of being a high class working mathematician.
Brent's algorithm Use Brent's algorithm to find all real roots of the equation $$9-\sqrt{99+2x-x^2}=\cos(2x),\\ x\in[-8,10]$$ I am having difficulty understanding Brent's algorithm. I looked at an example on Wikipedia and in my book, but the examples given aren't the same as this question. Any help will be greatly appreciated.
The Wikipedia entry you cite explains Brent's algorithm as a modification of other ones. Write down all the algorithms that are mentioned in there, and see how they go into Brent's. Perhaps try one or two iterations of each to get a feel for how they work. Try to write Brent's algorithm down as a program in some language you are familiar with. Make sure your program follows the logic given. Run your program on the function given (or some simpler one), and have it tell you what it is doing each step. Look over the explanation to see why that step makes sense, and check what the alternative would have been. As a result, you will have a firm grasp of the algorithm (and some others, with their shortcomings and strong points), and thus why it is done the way it is. [Presumably that is what this assignment is all about...]
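If it helps to see the method in action before writing your own version, here is a rough sketch using SciPy's implementation of Brent's method (`scipy.optimize.brentq`); the grid of 1000 points used to bracket sign changes is an arbitrary choice, not part of the algorithm itself:

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    # a root of f(x) = 0 is a solution of 9 - sqrt(99 + 2x - x^2) = cos(2x)
    return 9.0 - np.sqrt(99.0 + 2.0 * x - x * x) - np.cos(2.0 * x)

# scan [-8, 10] on a grid and run Brent's method on every sign change
xs = np.linspace(-8.0, 10.0, 1000)
roots = []
for a, b in zip(xs[:-1], xs[1:]):
    if f(a) * f(b) < 0:
        roots.append(brentq(f, a, b))
print(roots)
```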
Find the coordinates of any stationary points on the curve $y = {1 \over {1 + {x^2}}}$ and state its nature I know I could use the quotient rule to determine the second derivative and check if it's a max/min point; the problem is the book hasn't covered the quotient rule yet, and this section of the book concerns exercises related only to the chain rule, so I was wondering what other method there is to determine the nature of the stationary point (which happens to be $(0,1)$), given that this is an exercise that is meant to utilise the chain rule. $${{dy} \over {dx}} = - {{2x} \over {{{({x^2} + 1)}^2}}}$$ So essentially my question is: can I determine ${{{d^2}y} \over {d{x^2}}}$ by using the chain rule (which I don't think you can), and thus the nature of the stationary point, or would I have to determine the nature of the stationary point another way? I think I may have overlooked something, any help would be appreciated. Thank you.
As stated in the comments below, you can check whether a "stationary point" (a point where the first derivative is zero), is a maximum or minimum by using the first derivative. Evaluate points on each side of $x = 0$ to determine on which side it is decreasing (where $f'(x)$ is negative) and which side it is increasing (where $f'(x)$ is positive). Increasing --> stationary --> decreasing $\implies$ maximum. Decreasing ..> stationary ..> increasing $\implies$ minimum. In your case, we have ($f'(x) > 0$ means $f$ is increasing to left of $x = 0$) and ($f'(x) <0$ means $f$ is decreasing to the right of $x = 0$) hence the point $\;(0, 1)\,$ is a local maximum of $f(x)$. With respect to the second derivative: While the quotient rule can simplify the evaluation of $\dfrac{d^2y}{dx^2}$, you can evaluate the second derivative of your given function by finding the derivative of $\;\displaystyle {{dy} \over {dx}} = {{-2x} \over {{{({x^2} + 1)}^2}}}\;$ by using the chain rule and the product rule: Given $\quad \dfrac{dy}{dx} = (-2x)(x^2 + 1)^{-2},\;$ then using the product rule we get $$\frac{d^2y}{dx^2} = -2x \cdot \underbrace{\frac{d}{dx}\left((x^2 + 1)^{-2}\right)}_{\text{use chain rule}} + (x^2 + 1)^{-2}\cdot \dfrac{d}{dx}(-2x)$$ $$\frac{d^2y}{dx^2} = \frac{6x^2 - 2}{\left(x^2 + 1\right)^3}$$ Note: The product rule, if you haven't yet learned it, is as follows: If $\;f(x) = g(x)\cdot h(x)\;$ (i.e., if $\,f(x)\,$ is the product of two functions, which we'll call $g(x)$ and $h(x)$ respectively), then $$f'(x) = g(x)h'(x) + g'(x)h(x)\tag{product rule}$$
The angle between two unit vectors is not what I expected Ok imagine a vector with only X and Z components that make a 45 degree angle with the positive X axis. It's a unit vector. Now also imagine a unit vector that has the same direction as the positive x axis. Now imagine rotating both of these around the Z axis. I expect the angle between these vectors to still be 45 degrees. But it's not. If you don't believe me look here. Angle between two 3D vectors is not what I expected. Another way to think about it is to draw a 45 degree angle between two lines on a piece of paper. Now stand the paper up, and rotate the paper. The angle between the lines are still 45 degrees. Why is my way of thinking wrong?
If you start with the vectors $(1,0,0)$ and $(1/\sqrt{2},0,1/\sqrt{2})$, and rotate both by $45^\circ$ about the $z\text{-axis}$, then you end up with $(1/\sqrt{2},1/\sqrt{2},0)$ and $(1/2,1/2,1/\sqrt{2})$. The second point is not $(1/\sqrt{3},1/\sqrt{3},1/\sqrt{3})$ as you imagined. If you think about it, the $z\text{-coordinate}$ cannot be changed by this rotation. If the $z\text{-axis}$ is vertical, and the $x\text{-}y$ plane is horizontal, then the height of the point above the plane is not changed by rotation about the $z\text{-axis}$. The height remains $1/\sqrt{2}$, and the length of the horizontal coordinate remains $1/\sqrt{2}$ as well. That would not be the case if the final vector were what you thought it was.
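If you want to check this numerically, here is a small NumPy computation (the $45^\circ$ rotation matrix about the $z$-axis is the standard one; the two starting vectors are the ones from the answer):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)

t = np.pi / 4  # rotate both vectors by 45 degrees about the z-axis
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0,        0.0,       1.0]])

ru, rv = Rz @ u, Rz @ v
angle = np.degrees(np.arccos(np.dot(ru, rv)))  # both are unit vectors
print(ru, rv, angle)  # rv is (1/2, 1/2, 1/sqrt(2)) and the angle stays 45
```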
Show that $(2+i)$ is a prime ideal Consider the ring of Gaussian integers $\mathbb{Z}[i]$. Show that $(2+i)$ is a prime ideal. I tried to identify a quotient ring, so that the Gaussian integers modulo the ideal $(2+i)$ would be either a field or an integral domain, but I cannot see what the quotient ring is.
The quotient ring you are after must be $\Bbb Z[i]/I$ where $I=(2+i)$, otherwise it would not tell you much about the status of the ideal $I$. You must know that multiplication by a complex number is a combination of rotation and scaling, and so multiplication by $2+i$ is the unique such operation that sends $1\in\Bbb C$ to $2+i$. Therefore the image of the grid (lattice) $\Bbb Z[i]\subset\Bbb C$ is the grid $I\subset\Bbb Z[i]$ with orthogonal spanning vectors $2+i$ and $(2+i)i=-1+2i$. The square spanned by those two vectors has area $|\begin{smallmatrix}2&-1\\1&\phantom+2\end{smallmatrix}|=5$, so the density of the points of $I$ should be smaller than that of $\Bbb Z[i]$ by a factor $5$. This is a not entirely rigorous argument showing that the quotient ring $\Bbb Z[i]/I$ should have $5$ elements. There aren't very many rings with $5$ elements, so you should be able to guess which $5$ elements you can choose as representatives of the classes in $\Bbb Z[i]/I$. Now go ahead and show that all elements of $\Bbb Z[i]$ can be transformed into one and only one of those $5$ elements by adding an element of $I$. By the way, the quotient being a finite ring, it will be a field if and only if it is an integral domain: it is either both of them or none of them. In the case at hand it will be both (all rings with a prime number of elements are integral domains and also fields).
First Order Logic: Formula for $y$ is the sum of non-negative powers of $2$ As the title states, is it possible to write down a first order formula that states that $y$ can be written as the sum of non-negative powers of $2$. I have been trying for the past hour or two to get a formula that does so (if it is possible), but It seems to not work. Here's my attempt: Let $\varphi(y)$ be the formula $(\exists n < 2y)(\exists v_0 < 2)\cdots (\exists v_n < 2)(y = v_0\cdot 1 + v_1\cdot 2 + \cdots + v_n2^n)$. In the above, the $\mathcal{L}$-language is $\{+,\cdot, 0, s\}$ where $s$ is the successor function. But the problem with the formula above is that when $n$ is quantified existentially as less than $2y$, $n$ does not appear in $2^n$ when we write it out as products of $2$ $n$ times. I think this is the problem. My other attempts at this problem happen to be the same issue, where $n$ is quantified but does not appear in the statement, such as the example provided above. If you can give me any feedback, that would be great. Thanks for your time. Edit: I guess that when I write $2^n$, I mean $(s(s(0)))^n$.
In the natural numbers, the formula $\theta(x) \equiv x = x$ works. Think about binary notation. More seriously, once you have developed the machinery to quantify over finite sequences, it is not so hard to write down the formula. Let $\phi(x)$ define the set of powers of 2. The formula will look like this: $$ (\exists \sigma)(\exists \tau)[\, (\forall n < |\sigma|)[\phi(\sigma(n))] \land |\tau| = |\sigma| + 1 \land \tau(0) = 0 \land (\forall n < |\sigma|) [ \tau(n+1) = \tau(n) + \sigma(n)] \land x =\tau(|\sigma|)] $$
Directional derivative of a scalar field in the direction of fastest increase of another such field Suppose $f,g : \mathbb{R}^n \rightarrow \mathbb{R}$ are scalar fields. What expression represents the directional derivative of $f$ in the direction in which $g$ is increasing the fastest?
The vector field encoding the greatest increase in $g$ is the gradient of $g$, so the directional derivative of $f$ in the direction of $\text{grad}(g)$ is just $\text{grad}(f)\bullet \text{grad}(g)$ (divided by $\|\text{grad}(g)\|$ if, as is usual, the direction is taken to be a unit vector).
Neatest proof that set of finite subsets is countable? I am looking for a beautiful way of showing the following basic result in elementary set theory: If $A$ is a countable set then the set of finite subsets of $A$ is countable. I proved it as follows but my proof is somewhat fugly so I was wondering if there is a neat way of showing it: Let $|A| \le \aleph_0$. If $A$ is finite then $P(A)$ is finite and hence countable. If $|A| = \aleph_0$ then there is a bijection $A \to \omega$ so that we may assume that we are talking about finite subsets of $\omega$ from now on. Define a map $\varphi: [A]^{<\aleph_0} \to (0,1) \cap \mathbb Q$ as $B \mapsto \sum_{n \in \omega} \frac{\chi_B (n)}{2^n}$. Then $\varphi$ is injective hence the claim follows. (The proof of which is what is the core-fugly part of the proof and I omit it.).
A proof for finite subsets of $\mathbb{N}$: For every $n \in \mathbb{N}$, there are finitely many finite sets $S \subseteq \mathbb{N}$ whose sum $\sum S = n$. Then we can enumerate every finite set by enumerating all $n \in \mathbb{N}$ and then enumerating every (finitely many) set $S$ whose sum is $\sum S = n$. Since every finite set has such an $n$, every finite set is enumerated. QED. If you want the proof to hold for any countable $A$, first define any injective function $f: A \to \mathbb{N}$.
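If you want to see the enumeration spelled out, here is one possible Python sketch of exactly this idea (the helper name `subsets_with_sum` is just for illustration; it generates the finitely many sets of distinct positive integers with a given sum):

```python
from itertools import count

def subsets_with_sum(n, smallest=1):
    """Yield all finite sets of distinct positive integers with sum n."""
    if n == 0:
        yield frozenset()
        return
    for k in range(smallest, n + 1):
        for rest in subsets_with_sum(n - k, k + 1):
            yield rest | {k}

def enumerate_finite_subsets():
    """Enumerate every finite subset of the positive integers exactly once."""
    for n in count(0):
        yield from subsets_with_sum(n)

gen = enumerate_finite_subsets()
print([sorted(next(gen)) for _ in range(8)])  # the first few sets in the enumeration
```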
Uniform convergence for $x\arctan(nx)$ I am to check the uniform convergence of this sequence of functions : $f_{n}(x) = x\arctan(nx)$ where $x \in \mathbb{R} $. I came to a conclusion that $f_{n}(x) \rightarrow \frac{\left|x\right|\pi}{2} $. So if $x\in [a,b]$ then $\sup_{x \in [a,b]}\left|f_n(x)- \frac{\left|x\right|\pi}{2}\right|\rightarrow 0$ as $n\to\infty.$ Now, how do I check the uniform convergence? $$\sup_{x\in\mathbb{R}}\left|f_n(x)-\frac{\left|x\right|\pi}{2}\right| = ?$$ Thanks in advance!
Hint: Use the fact that $$\arctan t+\arctan\left(\frac 1t\right)=\frac{\pi}2$$ for all $t>0$ and that $f_n$ are even.
G is a group, H is a subgroup of G, and N a normal subgroup of G. Prove that N is a normal subgroup of NH. So I have already proved that NH is a subgroup of G. To prove that N is a normal subgroup of NH, I said that we need to show $xNx^{-1}$ is a subgroup of NH for all $x\in NH$. Or am I defining it wrong?
For any subgroup $K$ of $G$ with $N \subset K \subset G$, you can show that $N$ is a normal subgroup of $K$. Also, you can easily show $N \subset NH$.
How to prove that $\det(M) = (-1)^k \det(A) \det(B)?$ Let $\mathbf{A}$ and $\mathbf{B}$ be $k \times k$ matrices and $\mathbf{M}$ is the block matrix $$\mathbf{M} = \begin{pmatrix}0 & \mathbf{B} \\ \mathbf{A} & 0\end{pmatrix}.$$ How to prove that $\det(\mathbf{M}) = (-1)^k \det(\mathbf{A}) \det(\mathbf{B})$?
Here is one way among others: $$ \left( \matrix{0&B\\A&0}\right)=\left( \matrix{0&I_k\\I_k&0}\right)\left( \matrix{A&0\\0&B}\right). $$ I assume you are allowed to use the block diagonal case, which gives $\det A\cdot\det B$ for the matrix on the far right. Just in case, this follows for instance from Leibniz formula. Now it boils down to $$ \det\left( \matrix{0&I_k\\I_k&0}\right)=(-1)^k. $$ This is the matrix of a permutation made of $k$ transpositions. So the determinant is $(-1)^k$ after $k$ elementary operations (transpositions in this case) leading to the identity matrix.
Time dilation by special relativity When reading about special relativity and time dilation I encounter a problem; Here is a link: Time dilation in GPS On page 1 under header "2. Time dilation by special relativity." It says: Since $(1 – x)^{-1/2} ≈ 1 + x /2$ for small $x$, we get... How is $(1 – x)^{-1/2} ≈ 1 + x /2$ for small $x$? I really don't get it. Thank you in advance!
That is a mathematical approximation that is valid when $x\ll 1$ (meaning $x$ much less than one). It can be easily demonstrated if you know calculus: for any function $f(x)$ that is defined and derivable at $x=0$, $f(x)=f(0)+f'(0)x+\ldots$ (it is called a Taylor series expansion). The first two terms are enough if $x$ is small, otherwise the approximation requires more terms to be good.
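Spelling out the first-order computation for the function at hand: with $f(x)=(1-x)^{-1/2}$ we have $f(0)=1$ and $f'(x)=\tfrac12(1-x)^{-3/2}$, so $f'(0)=\tfrac12$ and $$ (1-x)^{-1/2}\approx f(0)+f'(0)\,x = 1+\frac{x}{2}, $$ the next term in the series being $\tfrac38x^2$, which is negligible when $x\ll 1$.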
How many subgroups of $\Bbb Z_5\times \Bbb Z_6$ are isomorphic to $\Bbb Z_5\times \Bbb Z_6$ I am trying to find the answer to the question in the title. The textbook's answer is only $\Bbb Z_5\times \Bbb Z_6$ itself. But I reason as follows: since 5 and 6 are relatively prime, $\Bbb Z_5\times \Bbb Z_6$ is isomorphic to $\mathbb{Z}_{30}$. And also, since 2, 3 and 5 are pairwise relatively prime, $\Bbb Z_2\times \Bbb Z_3\times \Bbb Z_5$ should also be isomorphic to $\Bbb Z_5\times \Bbb Z_6$. Am I wrong? Thank you
The underlying sets of these groups are by definition $$\mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_5=\{(a,b,c):a \in \mathbb{Z}_2 \text{ and } b \in \mathbb{Z}_3 \text{ and } c \in \mathbb{Z}_5\}$$ and $$\mathbb{Z}_5 \times \mathbb{Z}_6=\{(a,b):a \in \mathbb{Z}_5 \text{ and } b \in \mathbb{Z}_6\}.$$ So while $\mathbb{Z}_5 \times \mathbb{Z}_6$ is trivially a subgroup of $\mathbb{Z}_5 \times \mathbb{Z}_6$, the group $\mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_5$ is not even a subset of $\mathbb{Z}_5 \times \mathbb{Z}_6$. In fact, they have no elements in common. However, the groups $(\mathbb{Z}_2 \times \mathbb{Z}_3 \times \mathbb{Z}_5,+)$ and $(\mathbb{Z}_5 \times \mathbb{Z}_6,+)$ are isomorphic (they have essentially the same structure). In fact, they are isomorphic to infinitely many other groups.
Maximal compact subgroups of $GL_n(\mathbb{R})$. The subgroup $O_n=\{M\in GL_n(\mathbb{R}) | ^tM M = I_n\}$ is closed in $GL_n(\mathbb{R})$ because it's the inverse image of the closed set $\{I_n\}$ by the continuous map $X\mapsto ^tX X$. $O_n$ is also bounded in $GL_n(\mathbb{R})$, for example this is clear by considering the norm $||X|| = \sqrt{tr(^tX X)}$ (elements of $O_n$ are bounded by $\sqrt{n}$), so $O_n$ should be a compact subgroup of $GL_n(\mathbb{R})$. I see it claimed without proof in many places that $O_n$ is a maximal compact subgroup. How can we see this?
Let $G \subset GL_n(\mathbb{R})$ be a compact group containing $O(n)$ and let $M \in G$. Using polar decomposition, $$M=OS \ \text{for some} \ O \in O(n), \ S \in S_n^{++}(\mathbb{R}).$$ Since $O(n) \subset G$, we deduce that $S \in G$. Because $G$ is compact, $(S^n)$ has a convergent subsequence; $S$ being diagonalizable, it is possible if and only if the eigenvalues of $S$ are $\leq 1$. The same thing works for $S^{-1}$, so $1$ is the only eigenvalue of $S$, ie. $S=I_n$ hence $M=O \in O(n)$. Consequently, $G= O(n)$.
Help with proof that $I = \langle 2 + 2i \rangle$ is not a prime ideal of $Z[i]$ (Note: $Z[i] = \{a + bi\ |\ a,b\in Z \}$) This is what I have so far. Proof: If $I$ is a prime ideal of $Z[i]$ then $Z[i]/I$ must also be an integral domain. Now (I think this next step is right, I'm not sure though), $$ Z[i]/I = \{a+bi + \langle 2 + 2i \rangle\ | a,b,\in Z \}. $$ So, let $a=b=2$, then we can see that $$ (2+2i) \cdot \langle 2 + 2i \rangle = \langle 4 - 4 \rangle = 0 $$ Thus, $Z[i]/I$ has a zero-divisor. Thus, $Z[i]/I$ is not an integral domain which means that $I$ is not a prime ideal of $Z[i]$. $\square$ Now if my proof is right, am I right to think that $(2+2i) \cdot \langle 2 + 2i \rangle $ represents the following: $$ (2+2i) \cdot \langle 2 + 2i \rangle = (2+2i) \cdot \{2 +2i^2, 4 + 4i^2, 6 +6i^2, 8 + 8i^2, \ldots \} = \{0, 0, 0, 0, \ldots\} = 0? $$
A much simpler argument: Note that $2,1+i\notin \langle 2+2i\rangle$, yet $2\cdot (1+i)=2+2i\in \langle 2+2i\rangle$.
A question concerning fundamental groups and whether a map is null-homotopic. Is it true that if $X$ and $Y$ are topological spaces, and $f:X \rightarrow Y$ is a continuous map and the induced group homomorphism $\pi_1(f):\pi_1(X) \rightarrow \pi_1(Y)$ is the trivial homomorphism, then we have that $f$ is null-homotopic?
Take $X=S^{2}$, $Y=S^{2}$, and the map $f(x)=-x$. This map has degree $-1 \neq 0$, therefore it is not nullhomotopic. However, $\pi_{1} (S^{2})$ is trivial, so the induced map will be between trivial groups, and is thus trivial. The claim you're making is too strong because it asserts that whenever $Y$ is simply connected, then any continuous map into $Y$ is null homotopic.
Find the last two digits of $ 7^{81} ?$ I came across the following problem and do not know how to tackle it. Find the last two digits of $ 7^{81} ?$ Can someone point me in the right direction? Thanks in advance for your time.
$\rm{\bf Hint}\ \ mod\,\ \color{#c00}2n\!: \ a\equiv b\, \Rightarrow\, mod\,\ \color{#c00}4n\!:\ \, a^2 \equiv b^2\ \ by \ \ a^2\! = (b\!+\!\color{#c00}2nk)^2\!=b^2\!+\!\color{#c00}4nk(b\!+\!nk)\equiv b^2$ $\rm So,\, \ mod\,\ \color{}50\!:\, 7^{\large 2}\!\equiv -1\Rightarrow mod\ \color{}100\!:\,\color{#0a0}{7^{\large 4} \equiv\, 1}\:\Rightarrow\:7^{\large 1+4n}\equiv\, 7\, \color{#0a0}{(7^{\large 4})}^{\large n} \equiv\, 7 \color{#0a0}{(1)}^{\large n} \equiv 7 $
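If you want to double-check the conclusion numerically, Python's built-in three-argument pow confirms it:

```python
# quick sanity check of the modular-arithmetic argument
print(pow(7, 81, 100))  # 7, so the last two digits of 7^81 are 07
print(pow(7, 4, 100))   # 1, confirming 7^4 ≡ 1 (mod 100)
```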
Power Series Solution for $e^xy''+xy=0$ $$e^xy''+xy=0$$ How do I find the power series solution to this equation, or rather, how should I go about dealing with the $e^x$? Thanks!
When trying to find a series to represent something, it's important to decide what kind of a series you want. Even if you're dealing with power series, a small change in notation between $\displaystyle \sum a_n x^n$ and $\displaystyle\sum \frac{a_nx^n}{n!}$ can lead to substantial changes. In particular, we have that, if $f(x)=\sum a_n x^n$, then $$e^x f(x) = \left(\sum \frac{x^n}{n!}\right) \left(\sum a_n x^n\right) = \sum \left(\sum_{i+j=n}\frac{a_i}{j!} \right) x^n, $$ which, while it will lead to a perfectly good recurrence yielding a power series solution for a problem like this, is somewhat awkward, unwieldy, and likely not to lead to a recurrence that you can recognize or explicitly solve. However, if $f(x)=\sum \frac{a_n x^n}{n!}$, then $$e^x f(x) = \left(\sum \frac{x^n}{n!}\right) \left(\sum \frac{a_n x^n}{n!}\right) = \sum \left(\sum_{i+j=n}\frac{a_i n!}{i!j!} \right) \frac{x^n}{n!}=\sum \left(\sum_{k\leq n}a_k \binom{n}{k} \right) \frac{x^n}{n!}, $$ which is a nicer looking expression. Additionally, the power series expansion of $f'(x)$ has a nice form due to the cancellation between the $n$ in $n!$ and the $n$ in $D(x^n)=nx^{n-1}$. With this in mind, I suggest a slight change to the hint of @M.Strochyk. Hint: Expand $\displaystyle y(x)=\sum \frac{a_nx^n}{n!}$ and $\displaystyle e^x=\sum \frac{x^n}{n!}$ and substitute them into the equation.
Proving two graphs are isomorphic I need to prove that the following two countable undirected graphs $G_1$ and $G_2$ are isomorphic: Set of vertices of $G_1$ is $\mathbb{N}$ and there is an edge between $i$ and $j$ if and only if the $j$ th bit of the binary representation of $i$ is $1$ or the $i$ th bit of the binary representation of $j$ is $1$. In the other graph $G_2$, the set of vertices is $\mathbb{N}_+ := \lbrace n\in\mathbb{N} : n>0\rbrace$ and there is an edge between $n$ and $m$, for $n>m$, if and only if $n$ is divisible by $p_m$, the $m$ th prime. Any hints or ideas would be appreciated.
HINT: These are both the Rado graph, which is the unique countable graph with the following extension property: if $U$ and $V$ are disjoint finite sets of vertices of the graph, there is a vertex $x$ connected to each vertex in $U$ and to no vertex in $V$. The link actually demonstrates this for $G_1$, and the same article proves uniqueness. Thus, you need only prove that $G_2$ has the extension property.
Coefficients of series given by generating function How do I find the coefficients of the infinite series given by this generating function? $$g(x)=\sum_{n=0}^{\infty}a_nx^n=\frac{1-11x}{1-(3x^2+10x)}$$ I tried to expand it the way one does for the Fibonacci sequence, using geometric series and the binomial theorem, but without any success.
Use (1) the fact that $1-3x^2-10x=(1-ax)(1-bx)$ for some $a$ and $b$, (2) the fact that $$ \frac{1-11x}{(1-ax)(1-bx)}=\frac{c}{1-ax}+\frac{d}{1-bx}, $$ for some $c$ and $d$, and (3) the fact that, for every $e$, $$ \frac1{1-ex}=\sum_{n\geqslant0}e^nx^n. $$ Then put all these pieces together to deduce that, for every $n\geqslant0$, $$ a_n=c\cdot a^n+d\cdot b^n. $$ Edit: You might want to note that $g(0)=1$ hence $a_0=1$, which yields $c+d=1$. Likewise, $1/(1-u)=1+u+o(u)$ when $u\to0$ hence $g(x)=(1-11x)(1+10x+o(x))=1-x+o(x)$. This shows that $g'(0)=-1$ thus $a_1=-1$ and $ca+(1-c)b=-1$, that is, $c=(1+b)/(b-a)$. Finally, $a$ and $b$ are determined by the factorization of the denominator $1-3x^2-10x$ (they satisfy $a+b=10$ and $ab=-3$, i.e. they are the reciprocals of its roots) and, for every $n\geqslant0$, $$ a_n=\frac{(1+b)\cdot a^n-(1+a)\cdot b^n}{b-a}. $$
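As a sanity check (not needed for the exercise), one can compare the closed form against the coefficients produced by the recurrence $a_n=10a_{n-1}+3a_{n-2}$, which follows from multiplying $g(x)$ by the denominator. A small Python sketch, with $a=5+2\sqrt7$ and $b=5-2\sqrt7$:

```python
import math

# closed form: a_n = ((1+b) a^n - (1+a) b^n) / (b - a), where a + b = 10, ab = -3
a = 5 + 2 * math.sqrt(7)
b = 5 - 2 * math.sqrt(7)
closed = lambda n: ((1 + b) * a**n - (1 + a) * b**n) / (b - a)

# coefficients straight from the recurrence implied by (1 - 10x - 3x^2) g(x) = 1 - 11x
coeffs = [1, -1]
for n in range(2, 10):
    coeffs.append(10 * coeffs[-1] + 3 * coeffs[-2])

print([round(closed(n)) for n in range(10)])
print(coeffs)  # the two lists should agree
```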
Number of bases of an n-dimensional vector space over q-element field. If I have an n-dimensional vector space over a field with q elements, how can I find the number of bases of this vector space?
There are $q^n-1$ ways of choosing the first element, since we can't choose zero. The subspace generated by this element has $q$ elements, so there are $q^n-q$ ways of choosing the second element. Repeating this process, we have $$(q^n-1)(q^n-q)\cdots(q^n-q^{n-1})$$ for the number of ordered bases. If you want unordered bases, divide this by $n!$.
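If you want to convince yourself of the formula in a small case, here is a brute-force check in Python over $\mathbb{F}_2$ (the helper names are just for illustration, and the span computation as written only works for prime $q$):

```python
from itertools import product

def count_ordered_bases(n, q=2):
    """Brute force: count ordered tuples of n vectors that span F_q^n (prime q)."""
    vectors = list(product(range(q), repeat=n))

    def span_size(vs):
        combos = set()
        for coeffs in product(range(q), repeat=n):
            combos.add(tuple(sum(c * v[i] for c, v in zip(coeffs, vs)) % q
                             for i in range(n)))
        return len(combos)

    return sum(span_size(basis) == q**n for basis in product(vectors, repeat=n))

n, q = 3, 2
formula = 1
for k in range(n):
    formula *= q**n - q**k
print(count_ordered_bases(n, q), formula)  # both 168 for n = 3, q = 2
```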
How to calculate max iterations needed to equally increase row of numbers per some value each iteration? I don't know whether the title describes the main idea of my question, so I apologize for it. I have 6 numbers whose values can vary from 0 to 100, but the initial value cannot be more than 35. As an example, here is my number list: 20, 31, 15, 7, 18, 29 In one iteration we can distribute some value (5, 7, 10, 15 or so) among these numbers. Let it be 15. And each number must be increased at least once per iteration. So one iteration may look like: 20 + 3 => 23 31 + 2 => 33 15 + 3 => 18 7 + 5 => 12 18 + 1 => 19 29 + 1 => 30 The question is: how do I calculate the maximum number of iterations for any row of numbers with a constant distribution value per iteration? One should know that iterations must stop once one number reaches a value of 100. What math field should I learn to get more info on questions like this?
You start with $a_1, \ldots, a_n$, have to distribute $d\ge n$ in each round and stop at $m>\max a_i$. Then the maximal number of rounds is bounded by two effects: (1) the maximal value grows by at least one per round, so it reaches $m$ after at most $m-\max a_i$ rounds; (2) the total grows by $d$ each round, so you exceed $n\cdot (m-1)$ after at most $\left\lceil\frac{n(m-1)+1-\sum a_i}{d}\right\rceil$ rounds. Interestingly, these two conditions are all there is, i.e. the maximal number of rounds is indeed $$ \min\left\{m-\max a_i,\left\lceil\frac{n(m-1)+1-\sum a_i}{d}\right\rceil\right\}.$$ This can be achieved by (in each round) distributing one point to each term and then repeatedly increasing the smallest term until everything has been distributed. The field of math this belongs to would be combinatorics.
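Here is a small simulation of that greedy strategy, checked against the closed-form bound for the numbers in the question (a Python sketch, not a proof; for this input both should come out to 32):

```python
import math

def greedy_rounds(a, d, m=100):
    """Simulate the strategy: +1 to every term, then feed the current minimum."""
    a = list(a)
    rounds = 0
    while max(a) < m:
        for i in range(len(a)):          # each term must be increased once
            a[i] += 1
        for _ in range(d - len(a)):      # remaining points go to the smallest term
            a[a.index(min(a))] += 1
        rounds += 1
    return rounds

def bound(a, d, m=100):
    n = len(a)
    return min(m - max(a), math.ceil((n * (m - 1) + 1 - sum(a)) / d))

start = [20, 31, 15, 7, 18, 29]
print(greedy_rounds(start, 15), bound(start, 15))
```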
Nice examples of groups which are not obviously groups I am searching for some groups, where it is not so obvious that they are groups. In the lecture's script there are only examples like $\mathbb{Z}$ under addition and other things like that. I don't think that these examples are helpful to understand the real properties of a group, when only looking to such trivial examples. I am searching for some more exotic examples, like the power set of a set together with the symmetric difference, or an elliptic curve with its group law.
I always found the fact that braid groups are groups at all quite interesting. The elements of the group are all the different braids you can make with, say, $n$ strings. The group operation is concatenation. The identity is the untangled braid. But the fact that inverses exist is not obvious.
Counter-examples of homeomorphism Briefly speaking, we know that a map $f$ between $2$ topological spaces is a homeomorphism if $f$ is a bijection and both $f$ and its inverse are continuous. So, can anyone give me counterexamples (preferably simple ones) of non-homeomorphic maps $f$ between 2 topological spaces that satisfy the properties I give? (Only one of the two properties is satisfied in each case, and ideally 3 examples for each property.) (1) $f$ is bijective and continuous, but its inverse is not continuous. (2) $f$ is bijective and the inverse is continuous, but $f$ itself is not continuous. In addition, can we think about some examples of topologies that are path-connected? I will understand the concept of homeomorphism much better if I know some simple counterexamples. I hope you can help me out. Thanks!
$1.$ Let $X$ be the set of real numbers, with the discrete topology, and let $Y$ be the reals, with the ordinary topology. Let $f(x)=x$. Then $f$ is continuous, since every subset of $X$ is open. But $f^{-1}$ is not continuous. $2.$ In doing $1$, we have basically done $2$.
Nicolas Bourbaki, Algebra I, Chapter 1, § 2, Ex. 12 Nicolas Bourbaki, Algebra I, Chapter 1, § 2, Ex. 12: ($E$ is a semigroup with associative law (represented multiplicatively), $\gamma_a(x)=ax$.) Under a multiplicative law on $E$, let $ a \in E $ be such that $\gamma_a $ is surjective. (a) Show that, if there exists $u$ such that $ua=a$, then $ux=x$ for all $x\in E$. (b) For an element $b\in E$ to be such that $ba$ is left cancellable, it is necessary and sufficient that $\gamma_a$ be surjective and that $b$ be left cancellable. For those interested in part (a), a simple proof is that for every $x\in E$ there exists $x^\prime \in E$ such that $ax^\prime=x$, consequently $ua=a \Rightarrow uax^\prime=ax^\prime \Rightarrow ux=x$. In (b), surjectivity of $\gamma_a$ and left cancellability of $b$ is required. However, I am concerned with the "sufficiency" portion of part (b). When $E$ is an infinite set there can always be a surjective function $\gamma_a$ which need not be injective, and left translation by $b$ is cancellable, and yet $ba$ need not be left cancellable.
I have a Russian translation of Bourbaki. In it Ex.12 looks as follows: "For $\gamma_{ba}$ to be an one-one mapping of $E$ into $E$, it is necessary and sufficient that $\gamma_{a}$ be an one-one mapping of $E$ onto $E$ and $\gamma_{b}$ be an one-one mapping of $E$ into $E$." So I guess that there is a misprint in English translation. I wonder how it looks in the French original?
Differential equation must satisfy its edge conditions. I have this variational problem $$\text{Minimize} \; \int_0^1 \left( 12xt- \dot{x}^2-2 \dot{x} \right) \; dt$$ with the edge conditions $x(0)=0$ and $x(1)$ "free". From here I solve it and get $$x(t) = -t^3 +c_1t+c_2.$$ Up to this point it should be correct. Now I must finish solving the problem and compute $c_1$ and $c_2$. However, I don't know what to do with the edge condition that $x(1)$ is free. How do I solve this, and what exactly does "free" mean? Result: $c_1$ should be $2$ and $c_2$ should be $0$. (PS: if you can show it with Mathematica that would be great!)
Denote the functional as $J(x)$: $$ J(x) = \int_0^1 \left( 12xt- \dot{x}^2-2 \dot{x} \right) $$ Then the minimizer $x$ satisfies the following (perturbing the minimum with $\epsilon y$): $$ \frac{d}{d\epsilon} J(x + \epsilon y)\Big\vert_{\epsilon = 0} =0$$ Simplifying above gives us: $$ \int^1_0 (12 y t - 2\dot{x}\dot{y} - 2\dot{y})dt = 0 $$ Integration by parts yields: $$ \int^1_0 (12 y t + 2\ddot{x}y)dt - 2(\dot{x}+1)y\big|^1_0= 0 $$ Let's look at the boundary term: $(\dot{x}+1)y\big|^1_0 = (\dot{x}(1)+1)y(1) - (\dot{x}(0)+1)y(0)$. The term "free" from what I know would be a natural boundary condition. The essential boundary condition is $x(0)=0$ hence test function $y(0)=0$, the second term vanishes. On the natural boundary $t=1$, we do not impose anything on $y$'s value, hence we have to let $\dot{x}(1)+1 = 0$ to make variational problem well-posed, thus to get the differential equation $6 t + \ddot{x} = 0$. And the final answer is: "$x(1)$ is free" leads us to the natural boundary condition $$ \dot{x}(1)+1 = 0 $$ thus the coefficients are $c_1 = 2, c_2 = 0$.
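For completeness, plugging in: with $x(t)=-t^3+c_1t+c_2$ we get $\dot{x}(t)=-3t^2+c_1$, so the natural boundary condition $\dot{x}(1)+1=0$ gives $-3+c_1+1=0$, i.e. $c_1=2$, and the essential condition $x(0)=0$ gives $c_2=0$.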
Numbers that are divisible So I am given the following question: For natural numbers less than or equal to 120, how many are divisible by 2, 3, or 5? I solved it by the inclusion-exclusion principle, and also by using the least common multiple, $\operatorname{lcm}(2,3,5)=30$. Is this the right way to solve it, or are there other ways?
It is a right way. Inevitably there are others. For example, there are $\varphi(120)$ numbers in the interval $[1,120]$ which are relatively prime to $120$. Here $\varphi$ is the Euler $\varphi$-function. The numbers in our interval which are divisible by $2$, $3$, or $5$ are precisely the numbers in our interval which are not relatively prime to $120$. So $120-\varphi(120)$ gives the desired count. Compute $\varphi(120)$ by using the usual formula, and the fact that $120=2^3\cdot 3\cdot 5$. The Inclusion/Exclusion procedure is more versatile than the Euler $\phi$-function procedure.
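Carrying that computation out: $\varphi(120)=120\left(1-\tfrac12\right)\left(1-\tfrac13\right)\left(1-\tfrac15\right)=32$, so the count is $120-32=88$. The Inclusion/Exclusion route gives the same total: $60+40+24-20-12-8+4=88$.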
Convergence of $\prod_{n=1}^\infty(1+a_n)$ The question is motivated by the following exercise in complex analysis: Let $\{a_n\}\subset{\Bbb C}$ such that $a_n\neq-1$ for all $n$. Show that if $\sum_{n=1}^\infty |a_n|^2$ converges, then the product $\prod_{n=1}^\infty(1+a_n)$ converges to a non-zero limit if and only if $\sum_{n=1}^\infty a_n$ converges. One can get a proof by using $|a_n|^2$ to bound $|\log(1+a_n)-a_n|$. Here is my question: is the converse of this statement also true? If "the product $\prod_{n=1}^\infty(1+a_n)$ converges to a non-zero limit if and only if $\sum_{n=1}^\infty a_n$ converges", then $\sum_{n=1}^\infty |a_n|^2$ converges.
I shall try to give examples where $\sum|a_n|^2$ is divergent and all possible combinations of convergence/divergence for $\prod(1+a_n)$ and $\sum a_n$. Let $a_{2n}=\frac1{\sqrt n}$ and $a_{2n+1}=\frac1{1+a_{2n}}-1=-\frac{1}{1+\sqrt n}$. Then $(1+a_{2n})(1+a_{2n+1})= 1$, hence the product converges. But $a_{2n}+a_{2n+1}=\frac1{n+\sqrt n}>\frac1{2n}$, hence $\sum a_n$ diverges. Let $a_{2n}=\frac1{\sqrt n}$ and $a_{2n+1}=-\frac1{\sqrt n}$. Then $a_{2n}+a_{2n+1}=0$, hence $\sum a_n$ converges. But $(1+a_{2n})(1+a_{2n+1})=1-\frac1n$; the $\log$ of this is $\sim -\frac1n$, hence $\sum \log(1+a_n)$ and also $\prod(1+a_n)$ diverges. Let $a_n=\frac1{\sqrt n}$. Then $\prod(1+a_n)$ and $\sum a_n$ diverge. It almost looks as if it is not possible to have both $\prod(1+a_n)$ and $\sum a_n$ convergent if $\sum |a_n|^2$ diverges because $\ln(1+a_n) = a_n-\frac12a_n^2\pm\ldots$, but here we go: If $n=4k+r$ with $r\in\{0,1,2,3\}$, let $a_n = \frac{i^r}{\sqrt k}$. Then the product of four such consecutive terms is $(1+\frac1{\sqrt k})(1+\frac i{\sqrt k})(1-\frac1{\sqrt k})(1-\frac i{\sqrt k})=1-\frac1{k^2}$, hence the log of these is $\sim -\frac1{k^2}$ and the product converges. The sum also converges (to $0$).
Concise proof that every common divisor divides GCD without Bezout's identity? In the integers, it follows almost immediately from the division theorem and the fact that $a \mid x,y \implies a \mid ux + vy$ for any $u, v \in \mathbb{Z}$ that the least common multiple of $a$ and $b$ divides any other common multiple. In contrast, proving $e\mid a,b \implies e\mid\gcd(a,b)$ seems to be more difficult. In Elementary Number Theory by Jones & Jones, they do not try to prove this fact until establishing Bezout's identity. This Wikipedia page has a proof without Bezout's identity, but it is convoluted to my eyes. I tried my hand at it, and what I got seems no cleaner: Proposition: If $e \mid a,b$, then $e \mid \gcd(a,b)$. Proof: Let $d = \gcd(a,b)$. Then if $e \nmid d$, by the division theorem there's some $q$ and $c$ such that $d = qe + c$ with $0 < c < e$. We have $a = k_1 d$ and $b = k_2 d$, so by substituting we obtain $a = k_1 (qe + c)$ and $b = k_2 (qe + c)$. Since $e$ divides both $a$ and $b$, it must divide both $k_1 c$ and $k_2 c$ as well. This implies that both $k_1 c$ and $k_2 c$ are common multiples of $c$ and $e$. Now let $l = \operatorname{lcm}(e, c)$. $l$ divides both $k_1 c$ and $k_2 c$. Since $l = \phi c$ for some $\phi$, we have $\phi | k_1, k_2$, so $d \phi | a, b$. But we must have $\phi > 1$ otherwise $l = c$, implying $e \mid c$, which could not be the case since $c < e$. So $d \phi$ is a common divisor greater than $d$, which is a contradiction. $\Box$ Question: Is there a cleaner proof I'm missing, or is this seemingly elementary proposition just not very easy to prove without using Bezout's identity?
We can gain some insight by seeing what happens for other rings. A GCD domain is an integral domain $D$ such that $\gcd$s exist in the sense that for any $a, b \in D$ there exists an element $\gcd(a, b) \in D$ such that $e | a, e | b \Rightarrow e | \gcd(a, b)$. A Bézout domain is an integral domain satisfying Bézout's identity. Unsurprisingly, Bézout domains are GCD domains, and the proof is the one you already know. It turns out that the converse is false, so there exist GCD domains which are not Bézout domains; Wikipedia gives a construction. (But if you're allowing yourself the division algorithm, why the fuss? The path from the division algorithm to Bézout's identity is straightforward. In all of these proofs for $\mathbb{Z}$ the division algorithm is doing most of the work.)
Finding the median value on a probability density function Quick question here that I cannot find in my textbook or online. I have a probability density function as follows: $\begin{cases} 0.04x & 0 \le x < 5 \\ 0.4 - 0.04x & 5 \le x < 10 \\ 0 & \text{otherwise} \end{cases}$ Now I understand that for the median, the value of the integral must be $0.5$. We can set the integrals from negative infinity to m where m represents the median and solve. However, there are 2 functions here so how would i do that? In the answers that I was provided, the prof. simply takes the first function and applies what I said. How/why did he do that? Help would be greatly appreciated! Thank you :)
I don't know, let's find out. Maybe the median is in the $[0,5]$ part. Maybe it is in the other part. To get some insight, let's find the probability that our random variable lands between $0$ and $5$. This is $$\int_0^5(0.04)x\,dx.$$ Integrate. We get $0.5$. What a lucky break! There is nothing more to do. The median is $5$. Well, it wasn't entirely luck. Graph the density function. (We should have done that to begin with, geometric insight can never hurt.) We find that the density function is symmetric about the line $x=5$. So the median must be $5$. Remark: Suppose the integral had turned out to be $0.4$. Then to reach the median, we need $0.1$ more in area. Then the median $m$ would be the number such that $\int_5^m (0.4-0.04x)\,dx=0.1$.
Generating $n\times n$ random matrix with rank $n/2$ using matlab Can we generate $n \times n$ random matrix having any desired rank? I have to generate a $n\times n$ random matrix having rank $n/2$. Thanks for your time and help.
Generate $U,V$ random matrices of size $n \times n/2$, then almost surely $A = U \cdot V^T$ is of rank $n/2$.
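In NumPy this is essentially two lines (MATLAB's randn works the same way); the choice n = 8 below is just for illustration:

```python
import numpy as np

n = 8
U = np.random.randn(n, n // 2)
V = np.random.randn(n, n // 2)
A = U @ V.T                      # n x n matrix of rank n/2 almost surely

print(np.linalg.matrix_rank(A))  # prints 4 for n = 8
```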
$f$ is a bijective function with differentiable inverse at a single point Let $\Omega \subseteq \mathbb{R}^n$ and $p \in \Omega$. Let $f:U \to V$ be a bijection of open sets $p \in U \subseteq \Omega$ and $f(p) \in V \subseteq \mathbb{R}^n$. If $f^{-1}: V \to U$ is differentiable at $p$, then $df_p: \mathbb{R}^n \to \mathbb{R}^n$ is invertible. Suppose $f$ is a bijection. Since $f^{-1}$ is differentiable at $p$, it is continuous at $p$. Let $f^{-1}(f(x))=x$. Then \begin{align*} df^{-1}(f(x))&=1\\ df_p^{-1}df_p&=1\\ \end{align*} Following Dan Shved's suggestion I applied the chain rule, but I'm not sure that $f$ is differentiable - thus I'm not sure if $df_p$ exists. If $f^{-1}$ were continuously differentiable this would be easier because I could invoke the Inverse Function theorem.
The problem statement is either incorrect, incomplete, or both. Certainly, in order to say anything about $df_p$, the assumption on $f^{-1}$ should be made at $f(p)$. But the mere fact that $f^{-1}$ is differentiable at $f(p)$ is not enough. For example, $f(x)=x^{1/3}$ is a bijection of $(-1,1)\subset \mathbb R$ onto itself. Its inverse $f^{-1}(x)=x^3$ is differentiable at $0=f(0)$, but $f$ is not differentiable at $0$. The following amended statement is correct: Let $f:U \to V$ be a bijection of open sets $p \in U \subseteq \Omega$ and $f(p) \in V \subseteq \mathbb{R}^n$. If $f^{-1}: V \to U$ is differentiable at $\mathbf{f(p)}$, then $df_p: \mathbb{R}^n \to \mathbb{R}^n$ is invertible provided it exists. Indeed, the chain rule applies to $\mathrm{id}=f^{-1}\circ f$ at $p$ and yields $\mathrm{id}=df^{-1}_{f(p)} \circ df_p$. Hence $df_p$ is invertible.
Trace of the matrix power Say I have the matrix $A = \begin{bmatrix} a & 0 & -c\\ 0 & b & 0\\ -c & 0 & a \end{bmatrix}$. What is the trace $\operatorname{tr}(A^{200})$? Thanks much!
You may do it by first computing matrix powers and then you may calculate whatever you want. Now question is how to calculate matrix power for a given matrix, say $A$? Your goal here is to develop a useful factorization $A = PDP^{-1}$, when $A$ is $n\times n$ matrix.The matrix $D$ is a diagonal matrix (i.e. entries off the main diagonal are all zeros). Then $A^k =PD^kP^{-1} $. $D^k$ is trivial to compute. Note that columns of $P$ are n linearly independent eigenvectors of $A$.
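For the particular matrix in the question the diagonalization can be done by hand: one checks directly that $(0,1,0)^T$, $(1,0,-1)^T$ and $(1,0,1)^T$ are eigenvectors of $A$ with eigenvalues $b$, $a+c$ and $a-c$ respectively, so $$\operatorname{tr}(A^{200}) = b^{200}+(a+c)^{200}+(a-c)^{200}.$$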
Show that if $G$ is a group of order 6, then $G \cong \Bbb Z/6\Bbb Z$ or $G\cong S_3$ Show that if $G$ is a group of order 6, then $G \cong \Bbb Z/6\Bbb Z$ or $G\cong S_3$ This is what I tried: If there is an element $c$ of order 6, then $\langle c \rangle=G$. And we get that $G \cong \Bbb Z/6 \Bbb Z$. Assume there is no element of order 6. From Cauchy I know that there exists an element $a$ with $|a|=2$, and $b$ with $|b|=3$. As the inverse of $b$ doesn't equal itself, I have now 4 distinct elements: $e,a,b,b^{-1}$. As there are no elements of order $6$, we have two options left. Option 1: $c$ with $|c|=3$, and $c^{-1}$ with $|c^{-1}|=3$. Option 2: two different elements of order 2, $c,d$. My intuition says that for option 1, $H= \{e,b,b^{-1},c,c^{-1} \}$ would be a subgroup of order $5$, which would give a contradiction, but I don't know if this is true/how I could prove this. I also don't know how I could prove that option 2 must be $S_3$. I think that $G= \{e,a,b,b^{-1},c,d \}$. Looks very similar to $D_3$. But I don't know how I could prove this rigorously. Can anybody help me finish my proof?
Instead of introducing a new element called $c$, we can use the group structure to show that the elements $ab$ and $ab^2$ are the final two elements of the group, that is $G=\{e,a,b,b^2,ab,ab^2\}$. Notice that if $ab$ were equal to any of $e,a,b$, or $b^{-1}=b^2$, we would arrive at the contradictions $a=b^{-1}$, $b=e$, $a=e$, and $a=b$ respectively. Similarly, see if you can show that $ab^2$ must be distinct from the elements $e,a,b,b^2,$ and $ab$. In a manner similar to the one above, we can now show that the element $ba$ (notice the order) can only be equal to either $ab$ or $ab^2$ in our list of elements of $G$ without arriving at some contradiction. If $ba=ab$, then $|ab|=6$, so that $G$ is cyclic. If $ba=ab^2$, see if you can write down a group isomorphism between $G$ and $S_3$, and show that the images of the elements of $G$ in $S_3$ satisfy the same relations their pre-images do in $G$.
find the largest perfect number less than $10,000$ in Maple Can anyone tell me how to find largest perfect number less than 10000 in maple? Actually, I know how to find all the perfect numbers less than or equal to 10000 but I don't know how to find the largest one within the same code?
Well if you know how to find them all, I suppose you use a loop. So before your loop add a variable $max=0$. During the loop, for each perfect number $p$ you find, check if $p>max$ and if it is, then do $max=p$. The value of $max$ after the end of the loop will be the greatest number found ;)
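The question asks for Maple, which I don't have at hand, so here is the same bookkeeping written as a Python sketch (the naive divisor-sum test is just for illustration; the max-tracking loop is the point, and the Maple version follows the same structure):

```python
def is_perfect(n):
    # n is perfect if it equals the sum of its proper divisors (naive check)
    return n == sum(d for d in range(1, n // 2 + 1) if n % d == 0)

largest = 0
for p in range(2, 10001):
    if is_perfect(p) and p > largest:
        largest = p

print(largest)  # 8128
```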
Which way does the inclusion go? Lemma Let $\mathcal{B}$ and $\mathcal{B'}$ be bases for topologies $\mathcal{T}$ and $\mathcal{T'}$, respectively, on $X$. Then the following are equivalent: * *$\mathcal{T'}$ is finer than $\mathcal{T}$. *For each $x\in X$ and each basis element $B\in \mathcal{B}$ containing $x$, there is a basis element $B'\in \mathcal{B'}$ such that $x\in B' \subset B$. Why don't we write "for each $x\in X$ and each basis element $B\in \mathcal{B}$ containing $x$, there is a basis element $B'\in \mathcal{B'}$ such that $x\in B' \supset B$." (isn't that also true?) instead? I can see that the original statement is true but it seems very counterintuitive.
The idea is that we need every $\mathcal{T}$-open set to be $\mathcal{T}'$-open. Since $\mathcal{B}$ is a basis for $\mathcal{T}$, then every $\mathcal{T}$-open set is a union of $\mathcal{B}$-elements (and every union of $\mathcal{B}$-elements is $\mathcal{T}$-open), so it suffices that every $\mathcal{B}$-element is $\mathcal{T}'$-open. Since $\mathcal{B}'$ is a basis for $\mathcal{T}'$, then we must show that every $\mathcal{B}$-element is a union of $\mathcal{B}'$-elements, which is what the Lemma shows.
High-order elements of $SL_2(\mathbb{Z})$ have no real eigenvalues Let $\gamma=\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL_2(\mathbb{Z})$, $k$ the order of $\gamma$, i.e. $\gamma^k=1$ and $k=\min\{ l : \gamma^l = 1 \}$. I have to show that $\gamma$ has no real eigenvalues if $k>2$. The eigenvalues of $\gamma$ are $\gamma_{1,2} = \frac{1}{2} (a+d \pm \sqrt{(a+d)^2-4})$, i.e. I have to show that $(a+d)^2<4$ for $k>2$. How can I prove this? I have determined the first powers of $\gamma$ to get the condition directly from $\gamma^k = 1$ but I failed. Probably, there is an easier way?
Assume there is a real eigenvalue. Then the minimal polynomial of $\gamma$ is a divisor of $X^k-1$ and has degree at most $2$ and has at least one real root. If its degree is $2$, the other root must also be real. The only real roots of unity are $\pm1$, so the minimal polynomial is one of $X-1$, $X+1$ or $(X-1)(X+1)=X^2-1$. All three are divisors of $X^2-1$, i.e. we find $\gamma^2=1$.
Counting binary sequences with $S$ $0$'s and $T$ $1$'s where every pre-sequence contains fewer $1$'s than $0$'s How many $S+T$-digit binary sequences with exactly $S$ $0$'s and $T$ $1$'s exist where in every pre-sequence the number of $1$'s is less than the number of $0$'s? Examples: * *the sequence $011100$, is bad since the pre-sequence $011$ has more $1$'s than $0$'s. *the sequence $010101000$, is good since there is no pre-sequence such that there are more $1$'s than $0$'s.
This is a famous problem often called Bertrand's Ballot Theorem. A good summary is given in the Wikipedia article cited. There are a number of nice proofs. Note that your statement is the classical one ("always ahead") but the example of a good sequence that you give shows that "never behind" is intended. If that is the case, go to the "ties allowed" section of the article. The number of good sequences turns out to be $$\binom{s+t}{s}\frac{s+1-t}{s+1}.$$
proving that the following limit exist How can I prove that the following limit exist? $$ \mathop {\lim }\limits_{x,y \to 0} \frac{{x^2 + y^4 }} {{\left| x \right| + 3\left| y \right|}} $$ I tried a lot of tricks. At least assuming that this limit exist, I can prove using some special path (for example y=x) that the limit is zero. But how can I prove the existence?
There are more appropriate ways, but let's use the common hammer. Let $x=r\cos\theta$ and $y=r\sin\theta$. Substitute. The only other fact needed is that $|\sin\theta|+|\cos\theta|$ is bounded below. An easy lower bound is $\frac{1}{\sqrt{2}}$. When you substitute for $x$ and $y$ on top, you get an $r^2$ term, part of which cancels the $r$ at the bottom, and the other part of which kills the new top as $r\to 0$.
If I have $5^{200}$, can I rewrite it in terms of $2$'s and $3$'s to some powers? If I have $5^{200}$, can I rewrite it in terms of $2$'s and $3$'s to some powers? For example, if I had $4^{250}$ can be written in terms of $2$'s like so: $2^{500}$.
No. This is the Fundamental Theorem of Arithmetic: every integer $n\geq 2$ can be written in exactly one way (up to the order of factors) as a product of powers of prime numbers. Since $5^{200}$ is a power of the prime $5$, no product of powers of $2$ and $3$ can equal it.
Showing that one cannot continuously embed $\ell^\infty$ in $\ell^1$. Is it possible to embed $\ell^\infty$ into $\ell^1$ continuously? I.e. can one find a continuous linear injection $I:\ell^\infty \to \ell^1$. I have reduced a problem I have been working on to showing that this cannot happen, but I don't see how to proceed from here.
Yes, it's possible; for example, you can set $$ I(a_1,a_2,a_3,\dots):=(\frac{a_1}{1^2},\frac{a_2}{2^2},\frac{a_3}{3^2},\dots). $$
Turing Decryption MIT example I am learning mathematics for computer science on OpenCourseWare. I have no clue how to understand the small mathematical problem below. Encryption: The message m can be any integer in the set $\{0,1,2,\dots,p−1\}$; in particular, the message is no longer required to be a prime. The sender encrypts the message $m$ to produce $m^∗$ by computing: $$m^∗ = \operatorname{remainder}(mk,p).$$ Multiplicative inverses are the key to decryption in Turing’s code. Specifically, we can recover the original message by multiplying the encoded message by the inverse of the key: \begin{align*} m^*k^{-1} &\cong \operatorname{remainder}(mk,p) k^{-1} && \text{(the def. (14.8) of $m^*$)} \\ &\cong (mk) k^{-1} \pmod p && \text{(by Cor. 14.5.2)} \\ &\cong m \pmod p. \end{align*} This shows that $m^*k^{-1}$ is congruent to the original message $m$. Since $m$ was in the range $0,1,\dots,p-1$, we can recover it exactly by taking a remainder: $m = \operatorname{rem}(m^*k^{-1},p)$ --- ??? Can someone please explain the line above (marked with question marks)? I don't understand it.
The line with the question mark is just a restatement of the explanation above in symbolic form. We have a message $m$ and encrypted message $m^* = \text{remainder}(mk,p)$. If we are given $m^*$ we can recover $m$ by multiplying by $k^{-1}$ and taking the remainder mod $p$. That is, $\text{remainder}(m^* \cdot k^{-1},p) = \text{remainder}(mkk^{-1},p) = \text{remainder}(m,p) = m$. This gives $m$ exactly (and not something else congruent to $m \mod p$) because $m$ is restricted to be in $0,1,\dots,p-1$.
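A tiny numerical illustration (Python 3.8+ for the modular inverse via pow(k, -1, p); the values of p, k and m below are made up, not taken from the course notes):

```python
# toy check of the decryption identity with made-up numbers p, k, m
p, k, m = 101, 57, 42             # p prime, k the key, m the message (m < p)
m_star = (m * k) % p              # encryption: m* = rem(mk, p)
k_inv = pow(k, -1, p)             # multiplicative inverse of k mod p
print((m_star * k_inv) % p == m)  # True: rem(m* k^{-1}, p) recovers m exactly
```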
Calculating $\lim\limits_{n\to\infty}\frac{(\ln n)^{2}}{n}$ What is the value of $\lim\limits_{n\to\infty}\dfrac{(\ln n)^{2}}{n}$ and the proof ? I can't find anything related to it from questions. Just only $\lim\limits_{n\to\infty}\dfrac{\ln n}{n}=0$, which I know it is proved by Cesàro.
We can get L'Hospital's Rule to work in one step. Express $\dfrac{\log^2 x}{x}$ as $\dfrac{\log x}{\sqrt{x}}\cdot\dfrac{\log x}{\sqrt{x}}$. L'Hospital's Rule gives limit $0$ for each part. Another approach is to let $x=e^y$. Then we want to find $\displaystyle\lim_{y\to\infty} \dfrac{y^2}{e^y}$. Note that for positive $y$, we have $$e^y\gt 1+y+\frac{y^2}{2!}+\frac{y^3}{3!}\gt \frac{y^3}{3!}.$$ It follows that $\dfrac{y^2}{e^y}\lt \dfrac{3!}{y}$, which is enough to show that the limit is $0$.
Intuition behind Borsuk-Ulam Theorem I watched the following video to get more intuition behind the Borsuk-Ulam Theorem. The first part of the video was very clear to me: as I understood it, it considers only the $R^2$ case, with points $A$ and $B$ moving along the equator, and during the video we track the temperatures of the points $A$ and $B$ along the equator. The following is the picture from the second part. In the second part $R^3$ is considered, and instead of tracking the temperatures along the equator, we track the temperature along an arbitrary path from $A$ to $B$ along the sphere. But along this part we don't move $A$ and $B$, so there is no intersection of temperatures as there was in the first part (the most confusing phrase is at 4:45, "as $A$ goes to $B$ it goes from being colder than $B$ to hotter than $B$"; why? it just goes to $B$). I don't understand how the assumption that there is a point on the track where the temperature is the same as at the point $B$ can help us; even if it's true, it is not what we need, since we need the temperature at the point $A$ to be equal to the temperature at the point $B$. The second assumption is to consider all antipodal points with the same temperature, and to consider all the points on the track that have the same temperature as their opposite points; as a result we have a "club" of intermediate points with different temperatures, but each of them has the same temperature as its opposite point, so how can we connect them by a line? I have some problems understanding the idea of the second part and would appreciate some help.
Creator of the video here. but along this part we don't move A and B there is no intersection of temperatures as in was in the first part Yeah I didn't elaborate on this as much as the previous section, but the same thing is happening, just along an arbitrary path of connected opposite points, instead of a great sphere. What also might be unclear, is that B tracks the opposite point of A. Its path just isn't drawn. I hope that helps. Vsauce also released a video very recently which runs through my explanation (I think he got it from me - I'm credited in his video description), so maybe it would also be useful: https://www.youtube.com/watch?v=csInNn6pfT4
How to show that $C(\bigcup _{i \in I} A_i)$ is a supremum of a subset $\{C(A_i): i \in I \}$ of the lattice $L_C$ of closed subsets? According to Brris & Sankappanavar's "A course in universal algebra," the set $L_C$ of closed subsets of a set $A$ forms a complete lattice under $\subseteq$. Here, a subset $X$ of $A$ is said to be closed if $C(X) = X$, where $C$ is a closure operator on $A$ in the sense that it satisfies C1 - C3 below: (For any $X, Y \subseteq A$) C1: $X \subseteq C(X)$ C2: $C^2(X) = C(X)$ C3: $X \subseteq Y \Rightarrow C(X) \subseteq C(Y)$. They say that the supremum of a subset $\{C(A_i): i \in I\}$ of the lattice $\langle L_C, \subseteq \rangle$ is $C(\bigcup _{i \in I} A_i)$. If so, it must be that $$C(\bigcup _{i \in I} A_i) \subseteq \bigcup _{i \in I} C (A_i)$$ (since $\bigcup _{i \in I} C (A_i)$ is also an upper bound). But, I cannot so far show how this is so. Postscript It was an error to think that the above inclusion had to hold if $C(\bigcup _{i \in I} A_i)$ is $sup \{C(A_i): i \in I\}$. This inclusion does not follow, and its converse follows, actually, as pointed out by Brian and Abel. Still, $C(\bigcup _{i \in I} A_i)$ is the supremum of the set since, among the closed subsets of $A$, it is the set's smallest upper bound, as explained by Brian and Alexei. This question was very poorly and misleadingly stated. I will delete it if it's requested.
$\bigcup C(A_i)$ is not necessarily closed, and the smallest closed set containing it is $C[\bigcup C(A_i)]$. Now, $\bigcup A_i \subset \bigcup C(A_i)$, thus $C(\bigcup A_i) \subset C[\bigcup C(A_i)]$. Conversely, $A_i \subset \bigcup A_i$, so $C(A_i) \subset C(\bigcup A_i)$. Therefore, $\bigcup C(A_i) \subset C(\bigcup A_i)$, so $C[\bigcup C(A_i)] \subset C(\bigcup A_i)$. Thus, $C[\bigcup C(A_i)] = C(\bigcup A_i)$, Q.E.D.
Minimal Distance between two curves What is the minimal distance between curves? * *$y = |x| + 1$ *$y = \arctan(2x)$ I need to set a point with $\cos(t), \sin(t)$?
One shortcut here is to note that curves 1, 2 (say $f(x)$, $g(x)$) have a common normal line passing between the two closest points. Therefore, since $f'(x) = 1$ for all $x>0$, just find where $g'(x) = 1$, i.e. \begin{align}&\frac{2}{4x^2 +1} = 1\\ &2 = 4x^2 + 1\\ &\bf{x = \pm 1/2}\end{align}
How to show $\dim_\mathcal{H} f(F) \leq \dim_\mathcal{H} F$ for any set $F \subset \mathbb{R}$ and $f$ continuously differentiable? Let $f: \mathbb{R} \to \mathbb{R}$ be differentiable with continuous derivative. I have to show that for all sets $F \subset \mathbb{R}$, the inequality $$\dim_\mathcal{H} f(F) \leq \dim_\mathcal{H} F$$ holds, where $\dim_\mathcal{H}$ denotes the Hausdorff dimension. For some strange reason, there seems to be no definition of the Hausdorff dimension in the provided lecture notes. I looked it up on wikipedia and don't really know how I can say anything about the Hausdorff dimension of the image of a continuously differentiable function. Could anyone give me some help? Thanks a lot in advance.
Hint: Show that the inequality is true if $f$ is lipschitz. Then, deduce the general case from the following property: $\dim_{\mathcal{H}} \bigcup\limits_{i \geq 0} A_i= \sup\limits_{i \geq 0} \ \dim_{\mathcal{H}}A_i$. For a reference, there is Fractal Geometry by K. Falconer.
Calculate eigenvectors I am given the $2\times2$ matrix $$A = \begin{bmatrix} -2&-1 \\ 15&6 \end{bmatrix}$$ I calculated the eigenvalues to be 3 and 1. How do I find the vectors? If I plug the value back into the characteristic matrix, I get $$B = \begin{bmatrix} -5&1 \\ 15&3 \end{bmatrix}$$ Am I doing this right? What would the eigenvector be?
Remember what the word "eigenvector" means. If $3$ is an eigenvalue, then you're looking for a vector satisfying this: $$A\begin{bmatrix} x \\ y\end{bmatrix} = \begin{bmatrix} -2&-1 \\ 15&6 \end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix} = 3\begin{bmatrix} x \\ y\end{bmatrix}$$ Solve that. You'll get infinitely many solutions since every scalar multiple of a solution is also a solution.
Solving a set of 3 Nonlinear Equations In the following 3 equations: $$ k_1\cos^2(\theta)+k_2\sin^2(\theta) = c_1 $$ $$ 2(k_2-k_1)\cos(\theta)\sin(\theta)=c_2 $$ $$ k_1\sin^2(\theta)+k_2\cos^2(\theta) = c_3 $$ $c_1$, $c_2$ and $c_3$ are given, and $k_1$, $k_2$ and $\theta$ are the unknowns. What is the best way to solve for the unknowns? Specifically, I need to solve many independent instances of this system in an algorithm. Therefore, ideally the solution method should be fast.
Add and subtract equations $1$ and $3$, giving the system $$\begin{cases}\begin{align}k_1+k_2&=c_1+c_3\\(k_1-k_2)\sin2\theta&=-c_2\\(k_1-k_2)\cos2\theta&=c_1-c_3\end{align}\end{cases}$$ Then you find $k_1-k_2$ and $2\theta$ by a polar-to-Cartesian transform, giving $$\begin{cases}\begin{align}k_1+k_2&=c_1+c_3,\\k_1-k_2&=\sqrt{c_2^2+(c_1-c_3)^2},\\2\theta&=\arctan_2(-c_2,c_1-c_3).\end{align}\end{cases}$$
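A quick round-trip check of these formulas (a NumPy sketch; note that the closed form as written picks the branch with $k_1 \ge k_2$, so swap the two outputs if your data has $k_1 < k_2$):

```python
import numpy as np

def solve(c1, c2, c3):
    k_sum  = c1 + c3
    k_diff = np.hypot(c2, c1 - c3)           # |k1 - k2|
    theta  = 0.5 * np.arctan2(-c2, c1 - c3)
    return 0.5 * (k_sum + k_diff), 0.5 * (k_sum - k_diff), theta

# round-trip test with known k1 > k2 and theta
k1, k2, th = 5.0, 2.0, 0.3
c1 = k1 * np.cos(th)**2 + k2 * np.sin(th)**2
c2 = 2 * (k2 - k1) * np.cos(th) * np.sin(th)
c3 = k1 * np.sin(th)**2 + k2 * np.cos(th)**2
print(solve(c1, c2, c3))  # ~ (5.0, 2.0, 0.3)
```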
Find $\arg\max_x \operatorname{corr}(Ax, Bx)$ for vector $x$, matrices $A$ and $B$ This is similar to, but not the same as, canonical correlation: For $(n \times m)$ matrices $A$ and $B$, and unit vector $(m \times 1)$ $x$, is there a closed-form solution to maximize the correlation between $Ax$ and $Bx$ w.r.t. $x$? Note that I am optimizing over just one vector (in contrast to canonical correlation).
Here is an answer for the case $m>n$. Write $x=(x_1,\ldots,x_m)^T,A=(a^{1},\ldots,a^{m}),B=(b^{1},\ldots,b^{m})$, so $Ax=\sum_{i\le m} x_ia^i$, $Bx=\sum_{i\le m} x_i b^i$. Since $m>n$, columns $a^i - b^i$ of the matrix $A-B$ are linearly dependent, i.e. there is $x$ such that $Ax=Bx$. For this $x$ we have ${\rm corr}(Ax,Bx)=1$, i.e. is maximal.
Graph of an inverse trig function. Which of the following is equivalent to the graph of $arcsin(x)$ ? (a) Reflecting $arccos(x)$ about the y-axis, then shift down by $\pi /2$ units. (b) Reflecting $arccos(x)$ about the x-axis, then shift up by $\pi /2$ units. I think they are both the same thing. Can someone confirm this ?
You can look at graphs of all three functions here: http://www.wolframalpha.com/input/?i=%7Barccos%28-x%29-pi%2F2%2C-arccos%28x%29%2Bpi%2F2%2Carcsin%28x%29%7D Do they look the same to you?
How to prove that $\|AB-B^{-1}A^{-1}\|_F\geq\|AB-I\|_F$ when $A$ and $B$ are symmetric positive definite? Let $A$ and $B$ be two symmetric positive definite $n \times n$ matrices. Prove or disprove that $$\|AB-B^{-1}A^{-1}\|_F\geq\|AB-I\|_F$$ where $\|\cdot\|_F$ denotes Frobenius norm. I believe it is true but I have no clue how to prove it. Thanks for your help.
For the Frobenius Norm: Since $A$ and $B$ are positive definite, we can write $C=AB=QDQ^\dagger$, with $D$ being a diagonal matrix with the $n$ positive eigenvalues $\lambda_k$ and $Q$ a unitary matrix ($QQ^\dagger=QQ^{-1}=I$). So the inequality to prove becomes $$ ||C - C^{-1}|| \geq ||C - I|| $$ Since the Frobenius Norm is invariant under coordinate rotations, i.e. $||QA||=||A||$, we can simplify this expression to $$ ||C - C^{-1}|| = || QDQ^\dagger - QD^{-1}Q^\dagger || = ||D - D^{-1}|| = \sqrt{\sum_{k=1}^n \left(\lambda_k-\lambda_k^{-1}\right)^2} $$ and $$ ||C -I|| = ||QDQ^\dagger - I || = ||D -I|| = \sqrt{\sum_{k=1}^n \left(\lambda_k-1\right)^2} $$ For all $\lambda_k>0$, $$ \sqrt{\sum_{k=1}^n \left(\lambda_k-\lambda_k^{-1}\right)^2} \geq \sqrt{\sum_{k=1}^n \left(\lambda_k-1\right)^2} $$ holds.