Differential equations and the consequence of its scale invariance This is a question based on a Wikipedia article on scale invariance (see the claim under the section Classical Electromagnetism). The equation $$\frac{\partial^2\psi(x,t)}{\partial x^2}=\frac{\partial^2\psi(x,t)}{\partial t^2}\tag{1}$$ is invariant under $x,t\rightarrow \lambda x,\lambda t$. Wikipedia claims that if $\psi(x,t)$ is a solution, $\psi(\lambda x,\lambda t)$ is also a solution. * *Is this claim independent of the boundary condition? *All solutions of the type $\psi(\lambda x,\lambda t)$ obtained from a particular $\psi(x,t)$, form only a subset of all possible solutions of (1) i.e., they do not exhaust all possible solutions. Am I correct?
* *Yes. *Of course one can add particular solutions to any general solution, i.e. if $\psi(x,t)$ is a solution then so is $\psi(x,t)+Kx$ for any constant $K$.
Why can a basic definite integral be viewed as an integral of a connection? In Example 5. on this page (midway down the page) the author says that the definite integral $$\int_a^b f(x) dx,$$ where $f: [a, b] \to \mathbb{R}$ can be interpreted as an integral of a connection. He says that if we set $X = [a, b]$ and let $(\mathbb{R})_{x \in X}$ be the trivial bundle over $X$ then the function $f$ induces a connection $\Gamma_f$ on this bundle by setting $$\Gamma_f(x \to x + dx): y \to y + f(x)dx.$$ He says that the integral $\Gamma_f([a, b])$ of this connection along $[a, b]$ is the operation of translation by $\int_a^b f(x)$ in the real line. * *Is there a typo in the definition of the connection? Shouldn't the definition of $\Gamma_f$ map $y$ to $y + f'(x)dx$, not to $y + f(x)dx$? *I don't understand why he is saying that the integral of this connection is equivalent to translation...surely the integral of the connection just outputs some arbitrary scalar, where is the notion of it representing translation coming from?
In the diagram, vertical lines are fibres of the trivial bundle $[a, b] \times\ \mathbf{R}$, the horizontal line represents the "coordinate trivialization", the short segments (whose slopes at $(x, y)$ are $f(x)$) denote the horizontal spaces of the connection, and the dark curve is a horizontal section. * *There's no typo in the definition, though it might have been friendlier to change notation $f \to f'$ so that the slope of the connection at $(x, y)$ was $f'(x)$ instead of $f(x)$. *The integral of the connection is "translation by the integral" in the sense that if $\sigma_{1}$ is a "connection horizontal" section and $\sigma_{2}$ is the "coordinate horizontal" section satisfying $\sigma_{2}(a) = \sigma_{1}(a)$, then for each $t$ in $[a, b]$ the difference at $t$ is $$ \sigma_{1}(t) - \sigma_{2}(t) = \int_{a}^{t} f(x)\, dx. $$
Does there exist a measurable subset $E$ such that $\varepsilon < \frac{|E\cap{I}|}{|I|} < 1-\varepsilon$ for every finite interval $I =[a,b]$? Given $0 < \varepsilon < \frac{1}{10}$, does there exist a measurable subset $E \subset \mathbb{R}^1$ such that $\varepsilon < \frac{|E\cap{I}|}{|I|} < 1-\varepsilon$ for every finite interval $I =[a,b]$? Prove or disprove it. Here $|\cdot|$ denotes the Lebesgue measure. I think such a set exists, but I have no idea how to construct an example.
No. Note: $$\varepsilon<\frac{|E\cap I|}{|I|}=\frac{1}{|I|}\int_I \chi_E(s)~ds<1-\varepsilon$$ Then take $$\lim_{I\to x}\frac{1}{|I|}\int_I \chi_E(s)~ds$$ Lebesgue differentiation theorem tells us that a.e., $\lim_{I\to x}\frac{1}{|I|}\int_I \chi_E(s)~ds=\chi_E(x)$ The bounds show that $\varepsilon\le\chi_E(x)\le 1-\varepsilon$ for almost all $x$. However $\chi_E$ can only be $1$ or $0$ a.e. Contradiction.
A simple bijection between $\mathbb{N}$ and $\mathbb{Z}\times\mathbb{Z}$ (naturals and pairs of integers) I think I can use a bijection between $\mathbb{Z}$ and $\mathbb{N}$ to transform $\mathbb{Z}\times\mathbb{Z}$ into $\mathbb{N}\times\mathbb{N}$ and then use the Cantor pairing function, but is there an explicit bijection with fewer computational steps?
The function $f(n)=\left( -1\right)^n\lfloor \frac{n}{2}\rfloor$ bijects $\mathbb{N}$ with $\mathbb{Z}$ if we define $0\notin\mathbb{N}$. (Whatever bijects with this definition of $\mathbb{N}$ can trivially be modified otherwise with the shift $n\mapsto n-1$.) The function $g\left(2^k r\right)=\left(k+1, \frac{r+1}{2}\right)$ with $r$ odd bijects $\mathbb{N}$ with $\mathbb{N}^2$. Thus the function $h\left(2^k r\right)=\left(f\left( k+1\right), f\left(\frac{r+1}{2}\right)\right)$ bijects $\mathbb{N}$ with $\mathbb{Z}^2$.
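For readers who want to experiment, here is a minimal Python sketch of the bijection just described; the helper names `f` and `h` simply mirror the notation above, and $\mathbb{N}$ is taken to start at $1$, as in the answer.

```python
def f(n):
    # N = {1, 2, 3, ...} -> Z: 1, 2, 3, 4, 5 -> 0, 1, -1, 2, -2, ...
    return (-1) ** n * (n // 2)

def h(n):
    # write n = 2**k * r with r odd, then map to (f(k+1), f((r+1)/2))
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return (f(k + 1), f((n + 1) // 2))

values = [h(n) for n in range(1, 10001)]
assert len(values) == len(set(values))           # no collisions: h is injective here
grid = {(i, j) for i in range(-3, 4) for j in range(-3, 4)}
assert grid <= set(values)                       # a central patch of Z^2 is covered
print(values[:6])   # [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1)]
```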
Is it possible to find the sum of the infinite series $1/p + 2/p^2 + 3/p^3 + \cdots + n/(p^n)+\cdots$, where $p>1$? Is it possible to find the sum of the series: $$\frac{1}{p} + \frac{2}{p^2} +\frac{3}{p^3} +\dots+\frac{n}{p^n}\dots$$ Does this series converge? ($p$ is finite number greater than $1$)
For $p>1$ the series converges. Let's start from the general geometric series: $$|x|<1,\qquad a_1(1+x+x^2+x^3+\dots)=\frac{a_1}{1-x},\\a_1(1+2x+3x^2+4x^3+\dots)=\left(a_1(1+x+x^2+x^3+\dots)\right)'=\left(\frac{a_1}{1-x}\right)'=\frac{a_1}{(1-x)^2}$$ So in our case: $$a_1=\frac{1}{p}\ \ ,\ \ x=\frac{1}{p}\\\frac{1}{p}+\frac{2}{p^2}+\dots=\frac{1}{p}\cdot\frac{1}{(1-\frac{1}{p})^2}=\frac{1}{p(1-\frac{2}{p}+\frac{1}{p^2})}=\frac{1}{\frac{p^2-2p+1}{p}}=\frac{p}{(p-1)^2}$$
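As a quick numerical sanity check of the closed form (a sketch; the choice $p=3$ is arbitrary):

```python
p = 3.0
partial = sum(n / p**n for n in range(1, 200))   # truncated series
closed_form = p / (p - 1) ** 2
print(partial, closed_form)   # both 0.75 for p = 3
```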
Which books on proof-writing are suitable for someone engaged in self-study? I want to learn how to read and write proofs. I only have basic pre-calculus skills. As I am preparing for my upcoming Calculus 1 course, I wanted to understand questions like: Prove $\left|ab\right|=\left|a\right|\left|b\right|$ for any numbers $a$ and $b$. These questions pop up in various calculus books and I need to understand them. Any book recommendations?
If you're just about to move into Calculus I, you don't need to worry much about the formal logic behind the mathematics. Proofs typically pop up first in a linear algebra or abstract algebra course in the sophomore or junior year at a university. Real analysis is where you need to worry about proving the results of both integral and differential calculus, but that's a class you take after you finish the calculus sequence anyway. Any facts in a basic single-variable calculus class should be easy to prove with information from your textbook, if you are even required to prove anything at all.
Compute the $n$-th power of a triangular $3\times3$ matrix I have the following matrix $$ \begin{bmatrix} 1 & 2 & 3\\ 0 & 1 & 2\\ 0 & 0 & 1 \end{bmatrix} $$ and I am asked to compute its $n$-th power (to express each element as a function of $n$). I don't know at all what to do. I tried to compute some values manually to see some pattern and deduce a general expression, but that didn't give anything (especially for the top right entry). Thank you.
Here is another variation based upon walks in graphs. We interpret the matrix $A=(a_{i,j})_{1\leq i,j\leq 3}$ with \begin{align*} A= \begin{pmatrix} 1 & 2 & 3\\ \color{grey}{0} & 1 & 2\\ \color{grey}{0} & \color{grey}{0} & 1 \end{pmatrix} \end{align*} as adjacency matrix of a graph with three nodes $P_1,P_2$ and $P_3$, and for each entry $a_{i,j}\neq 0$ we draw a directed edge from $P_i$ to $P_j$ weighted with $a_{i,j}$. Note: When calculating the $n$-th power $A^n=\left(a_{i,j}^{(n)}\right)_{1\leq i,j\leq 3}$ we can interpret the element $a_{i,j}^{(n)}$ of $A^n$ as the number of (weighted) paths of length $n$ from $P_i$ to $P_j$. The entries of $A=(a_{i,j})_{1\leq i,j\leq 3}$ are the weighted paths of length $1$ from $P_i$ to $P_j$. See e.g. chapter 1 of Topics in Algebraic Combinatorics by Richard P. Stanley. Let's look at the corresponding graph and check for walks of length $n$. * *We see there are no directed edges from $P_2$ to $P_1$, and none from $P_3$ to $P_2$ or from $P_3$ to $P_1$, which implies there are no walks of length $n$ between these pairs either. So, due to the specific triangular structure of $A$, $A^n$ necessarily has zeroes at the same locations as $A$. \begin{align*} A^n= \begin{pmatrix} . & . & .\\ \color{grey}{0} & . & .\\ \color{grey}{0} & \color{grey}{0} & . \end{pmatrix} \end{align*} *It is also easy to consider the walks of length $n$ from $P_i$ to $P_i$. The only possibility is to travel the loop at $P_i$ weighted with $1$ again and again, and so the entries $a_{i,i}^{(n)}$ are \begin{align*} 1\cdot 1\cdot 1\cdots 1 = 1^n=1 \end{align*} and we obtain \begin{align*} A^n= \begin{pmatrix} 1& . & .\\ \color{grey}{0} & 1 & .\\ \color{grey}{0} & \color{grey}{0} & 1 \end{pmatrix} \end{align*} and now the more interesting part * *$P_1$ to $P_2$: The walks of length $n$ from $P_1$ to $P_2$ can start with zero or more loops at $P_1$, followed by a step (weighted with $2$) from $P_1$ to $P_2$, and finally zero or more loops at $P_2$. All the loops are weighted with $1$. There are $n$ possibilities to walk this way: \begin{align*} a_{1,2}^{(n)}=2\cdot 1^{n-1}+1\cdot 2\cdot 1^{n-2}+\cdots +1^{n-2}\cdot 2\cdot 1+1^{n-1}\cdot 2=2n \end{align*} *$P_2$ to $P_3$: Symmetry does the work here. When looking at the graph we observe the same situation as before from $P_1$ to $P_2$ and conclude \begin{align*} a_{2,3}^{(n)}=2n \end{align*} * *$P_1$ to $P_3$: Here two different types of walks of length $n$ are possible. The first type uses the weight-$3$ edge from $P_1$ to $P_3$, as we did when walking from $P_1$ to $P_2$ along the weight-$2$ edge. This part therefore gives \begin{align*} 3\cdot 1^{n-1}+1\cdot 3\cdot 1^{n-2}+\cdots +1^{n-2}\cdot 3\cdot 1+1^{n-1}\cdot 3=3n\tag{1} \end{align*} The other type of walk of length $n$ uses the hop via $P_2$. We observe it is some kind of concatenation of walks as considered before from $P_1$ to $P_2$ and from $P_2$ to $P_3$. In fact there are $\binom{n}{2}$ possibilities to place the two weight-$2$ steps within a walk of length $n$. All other steps are loops at $P_1,P_2$ and $P_3$, and we obtain \begin{align*} \binom{n}{2}\cdot 2\cdot 2=2n(n-1)\tag{2} \end{align*} Summing up (1) and (2) gives \begin{align*} a_{1,3}^{(n)}=3n+2n(n-1)=n(2n+1) \end{align*} and we finally obtain \begin{align*} A^n=\left(a_{i,j}^{(n)}\right)_{1\leq i,j\leq 3}=\begin{pmatrix} 1& 2n & n(2n+1)\\ \color{grey}{0} & 1 & 2n\\ \color{grey}{0} & \color{grey}{0} & 1 \end{pmatrix} \end{align*}
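One can also confirm the closed form numerically; here is a small sketch (assuming numpy is available) that checks it against a direct matrix power for the first few $n$:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 2],
              [0, 0, 1]])

def closed_form(n):
    # the formula derived above via weighted walks
    return np.array([[1, 2 * n, n * (2 * n + 1)],
                     [0, 1,     2 * n],
                     [0, 0,     1]])

for n in range(1, 10):
    assert np.array_equal(np.linalg.matrix_power(A, n), closed_form(n))
print("closed form matches A^n for n = 1..9")
```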
Bayes Formula: Intuition I found an intuitive explanation for Bayes' Theorem but I do not understand point 3: With $P(A|B) = \frac{P(A \cap B)}{P(B)}$ I see that P(B) is a scaling factor for partitions of B to be able to sum up to 1. But I can't seem to wrap my head around the meaning of the ratio in Bayes' Theorem.
I can give you a similar interpretation. Write it as $$\Pr(B|A)=\frac{\Pr(A|B)\Pr(B)}{\Pr(A)}=\frac{\Pr(B\cap A)}{\Pr(A)}$$ First notice the difference between the two events $B$ and $B|A$. The former is an event in the sample space $\Omega$, which has probability one. We can write $\Pr(B)$ as $$\frac{\Pr(B\cap\Omega)}{\Pr(\Omega)}=\frac{\Pr(B)}{\Pr(\Omega)}=\frac{\Pr(B)}{1}$$ However, $B|A$ means that for the event $B$ the sample space is reduced from $\Omega$ to $A$, hence we have $\Pr(A)$ in the denominator instead of $\Pr(\Omega)$. Similarly, in the numerator we have $\Pr(B\cap A)$ since we only consider the part of $B$ that intersects $A$.
How can an isolated point be an open set? I have the following definition: In a metric space $(X,d)$ an element $x \in X$ is called isolated if $\{x\}\subset$ X is an open subset But how can $\{x\}$ be an open subset? There has to exist an open ball with positive radius centered at $x$ and at the same time this open ball has to be a subset of $\{x\}$ but how can this be if there is only one element? I'm trying to wrap my head around this, but I can't figure it out. It doesn't make sense for metrics on $\mathbb{R}^n$ since each open ball with some positive radius has to contain other members of $\mathbb{R}^n$. The only thing I could think of was that we have some $x$ with 'nothing' around it and an open ball that contains only $x$ and 'nothing' (even though a positive radius doesn't make sense since there is nothing), so therefore the open ball is contained in $\{x\}$. But I'm not even sure we can define such a metric space, let alone define an open ball with positive radius containing only $x$ and 'nothing'.
Suppose your metric space is $\mathbb{Z}$. Then you can take a ball around the element $4 \in \mathbb{Z}$ of radius $\frac{1}{2}$. The only element of your metric space in that ball is $4$, so the ball is just the set $\{4\}$, so $\{4\}$ is open.
Why is $-\log(x)$ integrable over the interval $[0, 1]$ but $\frac{1}{x}$ not integrable? I don't understand why some functions that contain a singularity in the domain of integration are integrable but others are not. For example, consider $f(x) = -\log(x)$ and $g(x) = \frac{1}{x}$ on the interval $[0, 1]$. These functions look very similar when they are plotted but only $f(x)$ can be integrated. * *What is the precise mathematical reason(s) that makes some functions with singularities integrable while others are not? *Are $\log$ functions the only functions with singularities that can be integrated or are there other types of functions with singularities that can be integrated?
Simple: $\quad -\log x= o\Bigl(\dfrac1{\sqrt x}\Bigr)$ as $x\to 0^+$, and the integral of $\dfrac 1{\sqrt x}$ on $[0,1]$ is convergent, so $-\log x$ is integrable on $[0,1]$ by comparison.
Solving $W$ function What is the general form of Lambert $W$-function to calculate any $x$ in $\mathbb R$? I had problems solving for $x, x^2-e^x=0$. I reached $x=-2W(\frac12)$. What does $W(\frac12)$ equal to?
Short answer: As Ethan Bolker said, in general the only way to do it is using a computer and approximation methods. When a computer evaluates $W(z)$, it generally uses an approximation method. Longer answer: One such approximation method is as follows. It's an iterative method, which means you pick a starting number as a guess, plug it into the formula, and a better guess comes out. Repeat until you get the desired precision. Source: 1996: R.M. Corless, G.H. Gonnet, D.E.G. Hare, D.J. Jeffrey and D.E. Knuth: On the Lambert W Function (Vol. 5: 329 – 359) Quoted verbatim, except for changed formatting: We return to the specific problem at hand, that of computing a value of $W_k(z)$ for arbitrary integer $k$ and complex $z$. Taking full advantage of the features of iterative rootfinders outlined above, we compared the efficiency of three methods, namely, (1) Newton's method (2) Halley's method (3) the fourth order method described in [30] (as published this last method evaluates only the principal branch of $W$ at positive real arguments but it easily extends to all branches and to all complex arguments) ... The results showed quite consistently that method (2) is the optimal method ... For the $W$ function, Halley's method takes the form: $$w_{j+1} = w_j - \frac{w_j e^{w_j}-z}{e^{w_j}(w_j+1)-\frac{(w_j+2)\left(w_je^{w_j}-z\right)}{2w_j + 2}}$$ You can find more information about Newton's method and Halley's method on Wikipedia. Let me know if you're confused about the notation, what $k$ means, or what a "branch" is. If there's more than one solution to $x = We^W$, then you have to pick one in order for $W$ to be a function. For real numbers, $x=We^W$ has a unique solution $W$ for $x>0$, has two solutions for $-\frac 1 e \le x \le 0$ and no solution for $x < -\frac 1 e$. So we "cut off the branch" that gives us the second solution for $-\frac 1 e \le x \le 0$, and the remaining function is called the "principal branch".
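To make the iteration concrete, here is a small Python sketch of Halley's method as quoted above. The function name `lambert_w`, the starting guess $w_0=1$, and the restriction to real $z>0$ on the principal branch are my assumptions, not part of the quoted source:

```python
import math

def lambert_w(z, w=1.0, tol=1e-15, max_iter=100):
    # Halley iteration for the principal branch; assumes z > 0 so that
    # the naive starting guess w = 1 converges and 2*w + 2 never vanishes
    for _ in range(max_iter):
        e = math.exp(w)
        f = w * e - z
        w_new = w - f / (e * (w + 1) - (w + 2) * f / (2 * w + 2))
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    return w

w = lambert_w(0.5)
print(w, w * math.exp(w))   # ~0.351734 and ~0.5, so w = W(1/2)
print(-2 * w)               # ~-0.703467: the root of x^2 - e^x = 0 from the question
```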
Counting the total number of possible passwords I'm working from Kenneth Rosen's book "Discrete Mathematics and its applications (7th edition)". One of the topics is counting, he gives an example of how to count the total number of possible passwords. Question: Each user on a computer system has a password, which is six to eight characters long, where each character is an uppercase letter or digit. Each password must contain at least one digit. How many possible passwords are there? His solution: Let $P$ be the total number of possible passwords, and let $p_6, p_7$ and $p_8$ denote the number of possible passwords of length 6, 7, and 8, respectively. By the sum rule, $p = p_6 + p_7 + p_8$. $P_6 = 36^6 - 26^6 = 2,176,782,336 - 1,308,915,776 = 1,867,866,560$ Similar process for $p_7$ and $p_8$. This is where I'm confused, what is the logic for finding $p_6$? If I was given the question, I would have done as follows: $p_6 = 36^5 * 10$, because 5 of the 6 characters can be a letter or a number, so 36 possible values for each character. One character has to be numerical, so it has 10 possible values. All multiplied together, gives you $p_6$. Obviously I'm wrong, but why is he right? I'd just like to understand the thinking behind Rosen's solution, as he does not make that clear in the book.
This is the principle of inclusion-exclusion. We see here that it is much easier to count the number of cases without restriction and remove the undesirable cases. Thus we get $$(36^6-26^6)+(36^7-26^7)+(36^8-26^8).$$ In each case of a password of length $n$ where $n$ ranges over the integers from $6$ to $8$ inclusive we have removed the case where the password consists only of uppercase letters, leaving us with all possibilities for those passwords which contain at least one digit. Although this is not always true, the words at least in a problem may imply that the method of inclusion-exclusion is the best way forward. Keep an eye out for similar verbiage in other problems and consider applying the same principle.
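A one-line computational check of these numbers (a sketch; plain Python integers suffice):

```python
counts = {n: 36**n - 26**n for n in (6, 7, 8)}
print(counts)                 # {6: 1867866560, 7: 70332353920, 8: 2612282842880}
print(sum(counts.values()))   # 2684483063360 possible passwords in total
```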
Integrate $\int x e^{x} \sin x dx$ Evaluate: $$\int x e^{x} \sin x dx$$ Have you ever come across such an integral? I have no idea how to start with the calculation.
By indeterminate coefficients: A term like $xe^x\sin x$ can be generated by the derivative of itself (due to $e^x$), which will also generate $xe^x\cos x$ and $e^x\sin x$. Then we are tempted to try $$f(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)),$$ and $$f'(x)=e^x(x(A\sin x+B\cos x)+(C\sin x+D\cos x)+(A\sin x+B\cos x)+x(A\cos x-B\sin x)+(C\cos x-D\sin x)).$$ We identify, $$A-B=1,\\A+B=0,\\C+A-D=0,\\D+B+C=0$$ and obtain $$\frac12xe^x(\sin x-\cos x)+\frac12e^x\cos x.$$
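If you want to double-check the result symbolically, here is a short sketch (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.Rational(1, 2) * x * sp.exp(x) * (sp.sin(x) - sp.cos(x)) \
    + sp.Rational(1, 2) * sp.exp(x) * sp.cos(x)

# differentiating the candidate antiderivative should recover the integrand
assert sp.simplify(sp.diff(F, x) - x * sp.exp(x) * sp.sin(x)) == 0
print("antiderivative verified")
```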
Proving the image of a linear map with kernel =$ \{0\}$ Question from Serge Lang's Introduction to Linear Algebra: Let $F: V \to W$ be a linear map, whose kernel is $\{0\}$. Assume that $V$ and $W$ have both the same dimension $n$. Show that the image of $F$ is all of $W$. Thoughts & attempt: * *We can express any element $w \in W$ as $\sum_{i}^n c_iw_i$, where $w_1,...,w_n$ are linearly independent. I'm trying to reach a point where I can coherently say $\sum_i^nF(c_iv_i) = \sum_{i}^n c_iw_i$ by assuming that $v_1,..., v_n$ form a basis of $V$. Is it enough to say that since the ker = $\{0\}$, linear independence is preserved, and therefore $W$ also has $n$ dimension? EDIT: I meant to consider $c_i$ on each side of the equation as not necessarily equivalent scalars.
What you are really trying to show is the following: If $T:V\rightarrow W$ is an injective linear map, where $V$ and $W$ have the same dimension, then $T$ is surjective. By the rank–nullity theorem, $\dim\operatorname{im}T=\dim V-\dim\ker T=\dim V=\dim W$. As $\operatorname{im}T$ is a subspace of $W$ of full dimension, the result follows.
Prove $p\mid\frac{x^{a}-1}{x-1}$ using Fermat's little theorem Fermat's little theorem states that if $p$ is a prime number, then for any integer $a$, the number $a^{p} − a $ is an integer multiple of $p$. In the notation of modular arithmetic, this is expressed as $a^{p} \equiv a \pmod p.$ Using this theorem prove: Given a prime number $p$, show that if there are a positive integer $x$ and a prime number $a$ such that $p$ divides $\frac{x^{a}-1}{x-1}$, then either $a = p$ or $p \equiv 1 \pmod a$. $$p\mid\frac{x^{a}-1}{x-1}$$ So, I'm thinking: $$\frac{x^{a}-1}{x-1} = x^{a-1}+x^{a-2}+...+1$$ I tried the telescoping technique but that doesn't work, assuming $a = p$, shows that $x^{p-1}\equiv 1 \pmod p$. So, what else can I do?
You have: $p\mid\frac{x^a-1}{x-1}\Rightarrow p\mid\left(\frac{x^a-1}{x-1}\right)(x-1) \Rightarrow p\mid x^a-1 \Rightarrow x^a \equiv 1 \pmod p$. Hence if $m$ is the order of $x$ modulo $p$, then $m\mid a \Rightarrow m=1$ or $m=a$, since $a$ is prime. If $m=1$, then by the definition of the order we have $x\equiv 1\pmod p$, and so by the hypothesis we get: $0\equiv \frac{x^a-1}{x-1}\equiv x^{a-1}+\dots+1\equiv 1^{a-1}+\dots+1\equiv a\pmod p$. Hence in this case $p\mid a \Rightarrow p=1\ \text{or}\ p=a$, since $a$ is prime; the first is rejected because $p>1$, as $p$ is prime. If $m=a$, then $a\mid p-1 \Rightarrow p\equiv 1 \pmod a$, because it is known that the order satisfies $m\mid\varphi(p)=p-1$, where $\varphi$ is the Euler totient function. The proof of this property of the order uses Fermat's little theorem. Links: 1. Orders 2. Euler's Totient Function
Counting on a circle I feel really bad for asking so many questions... but just one more... there are 12 points that are equally spaced on a circle. Define a set of points $(A,B,C,D)$ NICE if $AB$ intersect $CD$. How many nice sets are there? What I tries to do is counting in cases... there isn't any case when they are just one point away, and there is $1\cdot 9$ case when two points away , $2\cdot 8$ 3 points away until $5\cdot 5$ when 6 points away. there are 12 points in every cases and there are $2^3$ ways to arrange $A,B,C,D$. Wrapping this up, we will have:$$8\cdot12\cdot(1\cdot9+2\cdot8+3\cdot7+4\cdot6+5\cdot5)=9120$$ However this is no way near the answer. Can anyone tell where i did it wrong?
I’m assuming that you’re actually counting ordered $4$-tuples $\langle a,b,c,d\rangle$ of distinct points (chosen from the $12$) such that $ab$ intersects $cd$. Let the points be $p_1,p_2,\ldots,p_{12}$ in order around the circle. Let $\{i,j,k,\ell\}$ be a $4$-element subset of $\{1,2,\ldots,12\}$, where $i<j<k<\ell$. Then $p_ip_k$ intersects $p_jp_\ell$. To get a $4$-tuple $\langle a,b,c,d\rangle$ from these $4$ points in such a way that $ab$ intersects $cd$, we can let $a$ be any one of the $4$. The choice of $a$ completely determines that of $b$: if $a=p_i$, $b$ must be $p_k$ and vice versa, and if $a=p_j$, $b$ must be $p_\ell$ and vice versa. We then have $2$ possible choice for $c$, after which $d$ must be the remaining point. Thus, there are $4\cdot2=8$ possible $4$-tuples using $p_i,p_j,p_k$, and $p_\ell$. Since there are $\binom{12}4=495$ ways to choose $4$ of the $12$ points, there are $495\cdot8=3960$ $4$-tuples $\langle a,b,c,d\rangle$ such that $ab$ intersects $cd$.
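The count is small enough to verify by brute force; here is a sketch. The crossing test encodes the fact that two chords of a circle intersect iff exactly one endpoint of one chord lies on the arc strictly between the endpoints of the other:

```python
from itertools import permutations

N = 12

def crosses(a, b, c, d):
    # chord ab crosses chord cd iff exactly one of c, d lies on the
    # arc strictly between a and b (going counterclockwise from a)
    m = (b - a) % N
    s = (c - a) % N
    t = (d - a) % N
    return (0 < s < m) != (0 < t < m)

count = sum(crosses(a, b, c, d) for a, b, c, d in permutations(range(N), 4))
print(count)   # 3960
```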
Neighborhood vs open neighborhood I am a beginner in the concept of general Topology. I have some confusion about the utilization of neighborhood and open neighborhood. $\mathbf{Definition}$: Let $(X,\mathcal{T})$ be a topological space and $x\in X$. $N\subset X$ is called neighborhood of $x$ if there exists $U\in \mathcal{T}$ such that $x\in U\subset N$. It is clear that any open set $U\in \mathcal{T}$, such that $x\in U$, is a neighborhood of $x$. Now, Let $(x_n)_{n\in\mathbb{N}}\subset X$ be a sequence and $x\in X$. $\mathbf{Definition\ 1}$: $x$ is a limit of the sequence $(x_n)$ if for any neighborhood $N$ of $x$, there exists $n_0\in\mathbb{N}$ such that $x_n\in N$ for all $n\geq n_0$. Some people use a different definition: $\mathbf{Definition\ 2}$: $x$ is a limit of the sequence $(x_n)$ if for any open neighborhood $N$ of $x$, there exists $n_0\in\mathbb{N}$ such that $x_n\in N$ for all $n\geq n_0$. The only difference is that, in the Definition 1, they consider all neighborhoods of $x$ where in the Definition 2, they only consider open neighborhoods. But is that difference not a problem? Are those definitions equivalent? The same for the definition of Adherent point. $\mathbf{Definition\ 1}$: $x$ is an adherent point of $A$ if for any neighborhood $N$ of $x$, $N\cap A\neq \emptyset$. $\mathbf{Definition\ 2}$: $x$ is an adherent point of $A$ if for any open neighborhood $N$ of $x$, $N\cap A\neq \emptyset$. I am really confused about those two definitions.
It should be clear that if $x$ is the limit of $(x_n)$ by Definition $1$, then Definition $2$ is satisfied as well (if it's not clear let me know and I can explain). Hence we only need to show that if Definition $2$ is satisfied, then Definition $1$ is as well: Suppose $x$ is the limit of $(x_n)$ by Definition $2$. Let $N$ be any neighborhood of $x$, so there exists some open $U$ with $x\in U\subset N$. Because $U$ is an open neighborhood of $x$, by assumption there exists $n_0\in\Bbb{N}$ such that $x_n\in U$ for $n\ge n_0$. But $U\subset N$, so $x_n\in N$ for $n\ge n_0$ as well, and Definition $1$ is indeed satisfied. So we see the two definitions of a limit are indeed equivalent. I'd recommend you try to apply a similar argument to your second pair of definitions. Edit: Here is the argument to show Definition $1$ implies Definition $2$: Suppose Definition $1$ holds. Then if $U$ is any open neighborhood of $x$, because $U$ is a neighborhood Definition $1$ tells us that there is an $n_0$ such that $x_n\in U$ for $n\ge n_0$; since $U$ was arbitrary, we see Definition $2$ holds.
A question about neighboring fractions. I have purchased I.M. Gelfand's Algebra for my soon-to-be high school student son, but I am embarrassed to admit that I am unable to answer seemingly simple questions myself. For example, this one: Problem 42. Fractions $\dfrac{a}{b}$ and $\dfrac{c}{d}$ are called neighbor fractions if their difference $\dfrac{ad - bc}{bd}$ has numerator $\pm1$, that is, $ad - bc = \pm 1$. Prove that (a.) in this case neither fraction can be simplified (that is, neither has any common factors in numerator and denominator); (b.) if $\dfrac{a}{b}$ and $\dfrac{c}{d}$ are neighbor fractions then $\dfrac{a + c}{b + d}$ is between them and is a neighbor fraction for both $\dfrac{a}{b}$ and $\dfrac{c}{d}$; moreover, ... Here is the snapshot from the book online (click on Look Inside on the Amazon page): So, (a) is simple, but I have no idea how to prove (b). It just does not seem right to me. Embarrassing. Any help is appreciated.
There appears to be a slight misunderstanding in the above discussion of betweenness in the Gelfand exercise. The fact that $a+c\over b+d$ is between $a\over b$ and $c\over d$ does not depend on the two fractions being neighbor fractions. It is true for any (positive) fractions (as stated by Gelfand himself on the previous page) and is easily proven algebraically without assuming the fractions are neighbors (a stipulation which only complicates the proof). An outline of such an algebraic proof would be: If ${a\over b}\lt {c\over d}$, then $ad\lt bc$. Add $ab$ to both sides, and it follows that ${a\over b}\lt{a+c\over b+d}$. The other half of the betweenness follows similarly. This betweenness property also follows immediately from a logical view of averaging. If a group of $b$ people has $a$ apples, and a group of $d$ people has $c$ apples, merging the groups will give each person in the new merged group an average number of apples which benefits the people in the "poorer" group and hurts the people in the "richer" group. [Note that Gelfand makes this very argument on the previous page, to explain the betweenness property.] So in Gelfand's part (b), the fact of betweenness is true always, while the fact that the resulting fraction is a neighbor to both original fractions depends, of course, from the assumption that the original fractions were neighboring. -Jonathan
Divergent set in terms of unions and intersections Let $(f_n)_{n=1}^{\infty}$ and $f $ be real valued functions defined on $\mathbb{R}$. For $ε > 0$ and $\forall m ∈ \mathbb{N}$ define $E_m (ε) = \{x ∈ \mathbb{R}\, |\, |f_m (x) − f(x)| ≥ ε\}$. Let $S = \{x ∈ \mathbb{R} \,|\, {f_n (x)},n\in \mathbb{N}$, does not converge to $f(x)\}.$ Express $S$ in terms of the sets $E_m (ε)$, $m∈\mathbb{N}$, $ε>0$ (using the set theoretic operations of unions and intersections). I think the answer is $\bigcup_{ε>0}\bigcap_{n=1}^{\infty}\bigcup_{m=n}^{\infty}E_m(ε)$, but am unable to derive it. Any help. Thanks beforehand.
Your answer is correct. Take the definition of convergence and observe from it that $(f_n(x))_n$ does NOT converge to $f(x)$ iff for some $e>0$ the set $\{m:|f_m(x)-f(x)|\ge e\}$ is infinite. For ease of notation let $F_n(e)=\cup_{m\geq n}E_m(e).$ (1). If $(f_n(x))_n$ does not converge to $f(x)$: for some $e >0$ we have: for every $n$ there exists $m\geq n$ such that $x\in E_m(e).$ Hence for some $e>0$ we have: $x\in F_n(e)$ for all $n.$ So $x\in \cap_{n=1}^{\infty}F_n(e)$ for some $e>0$. Therefore $$x\in \cup_{e >0}\cap_{n=1}^{\infty} F_n(e).$$ (2). On the other hand, if $x\in \cup_{e>0}\cap_{n=1}^{\infty}F_n(e)$: for some $e>0$ we have $x\in F_n(e)=\cup_{m\geq n}E_m(e)$ for every $n.$ So for some $e >0$ we have: for every $n$ there exists $m\geq n$ such that $x\in E_m(e).$ From this and the definition of $E_m(e)$ we have, for some $e >0$: for every $n$ there exists $m\geq n$ with $|f_m(x)-f(x)|\ge e.$ So for some $e >0 $ the set $\{m:|f_m(x)-f(x)|\ge e \}$ is infinite. Therefore $ (f_n(x))_n$ does not converge to $f(x).$
Integrating $\int\frac{5x^4+4x^5}{(x^5+x+1)^2}dx $ In the following integral: $$I = \int\frac{5x^4+4x^5}{(x^5+x+1)^2}dx $$ I thought of making partial fractions and then solving, but I am not able to form the partial fractions.
Set $x=1/y$: $$I=-\int\dfrac{5y^4+4y^3}{(1+y^4+y^5)^2}\, dy$$ Now set $1+y^4+y^5=u$, so $du=(4y^3+5y^4)\,dy$ and $$I=-\int\frac{du}{u^2}=\frac1u+C=\frac{1}{1+y^4+y^5}+C=\frac{x^5}{x^5+x+1}+C.$$
If f(x) is a polynomial of degree three with leading coefficient 1 and $f(1)=1$, $f(2)=4$, $f(3)=9$ then prove $f(x)=0$ has a root in interval $(0,1)$ If f(x) is a polynomial of degree three with leading coefficient 1 such that $f(1)=1$, $f(2)=4$, $f(3)=9$then prove $f(x)=0$ has a root in interval $(0,1)$. This is a reframed version of "more than one option correct type" questions. I could identify all the other answers but this one got left out. My Attempt: From the information given in the question, the cubic equation is $$x^3-5x^2+11x-6=0$$ Now I don't know how to prove the fact that one of the roots lies in the interval $(0,1)$. As it is a cubic equation I can't even find the roots directly to prove this.
You have that: $$f(0) = -6 < 0,\ f(1)=1 > 0.$$ Since $f$ is continuous, by the intermediate value theorem it must pass through zero at some point in $(0,1)$.
Showing $\int_x^{x+1}f(t)\,dt \xrightarrow{x\to\infty}0$ for $f\in L^2 (\mathbb{R})$ Show that if $f\in L^2 (\mathbb{R})$ then $$\lim_{x \rightarrow \infty} g(x)=0,$$ where $$g(x)=\int_x^{x+1}f(t)\,dt$$ Since $f \in L^2 (\mathbb{R})$, then $f \in L^2[x,x+1]$ for every $x$. By the Cauchy–Schwarz inequality, $$|g(x)| \leq \left(\int_x^{x+1} f(t)^2 dt\right)^{\frac{1}{2}} \left(\int_x^{x+1} 1^2\, dt\right)^{\frac{1}{2}}$$ $$|g(x)|\leq \left(\int_x^{x+1} f(t)^2 dt\right)^{\frac{1}{2}}$$
Continuing from your last step, you have $$|g(x)|\leq\left(\int_\mathbb{R} f(t)^2\chi_{[x,x+1]}\,dt\right)^{1/2}$$ where $\chi_{[x,x+1]}(t)$ is the indicator function on the interval $[x,x+1]$. So $$\lim_{x\to\infty}|g(x)|\leq\lim_{x\to\infty}\left(\int_\mathbb{R} f(t)^2\chi_{[x,x+1]}\,dt\right)^{1/2}$$ Since $|f(t)^2\chi_{[x,x+1]}|\leq|f(t)^2|$ which is integrable, by Lebesgue's Dominated Convergence Theorem, we can move the limit inside the integral, so $$\lim_{x\to\infty}\left(\int_\mathbb{R} f(t)^2\chi_{[x,x+1]}\,dt\right)^{1/2}=\left(\int_\mathbb{R} \lim_{x\to\infty}f(t)^2\chi_{[x,x+1]}\,dt\right)^{1/2}=0$$ since $\lim_{x\to\infty}\chi_{[x,x+1]}=0$ Hence $\lim_{x\to\infty}|g(x)|\leq 0$, and we can conclude from there.
Is this set of vectors linearly (in)dependent? I have the following problem: Are the following vectors linearly independent in $\mathbb{R}^2$? \begin{bmatrix} -1 \\ 2 \end{bmatrix}\begin{bmatrix} 1 \\ -2 \end{bmatrix}\begin{bmatrix} 2 \\ -4 \end{bmatrix} when I solve this using $c_1 v_1+c_2 v_2+ c_3 v_3=0$ I get an underdetermined system, can anyone help me to understand what this means for the linear independence? Thanks in advance :)
$$0v_1+2v_2-v_3=0$$ so the vectors are linearly dependent: an underdetermined homogeneous system always has nontrivial solutions. In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space.
Solution to an ordinary differential equation The general solution to the equation $y''+by'+cy=0$ approaches $0$ as $x$ approaches infinity if * *$b$ is negative and $c$ is positive *$b$ is positive and $c$ is negative *$b$ is positive and $c$ is positive *$b$ is negative and $c$ is negative
To seek a non-zero solution, put $y=e^{mx}$ ($m$ a constant) in the given ODE. Then the auxiliary equation you get is $e^{mx}(m^2+bm+c)=0$, or $m^2+bm+c=0$ as $e^{mx}\neq 0$. In the case of real roots, what you want is that this quadratic equation has only negative roots, for which $b>0$ and $c>0$ (by Descartes' rule of signs) is sufficient. In case no real root exists, even then you need the real part of the roots to be negative, for which you must have $b>0$. In both cases, option $3$ is the best choice.
Differential equation with Dirac Delta function While studying thermionic emission from metals I wanted to get a feeling for the problem with classical mechanics before delving into quantum mechanics. The potential used to model the situation is this one: $$V(x)=V_0 \Theta(x)$$ Where $\Theta (x)$ is the Heaviside step function. If we want the classical force for this potential we differentiate: $$F_x = - \frac{dV}{dx}= - V_0 \delta(x)$$ Where $\delta(x)$ is the Dirac delta function. This gives an equation of motion of the type: $$m \ddot{x} = -V_0 \delta(x)$$ With $m$ and $V_0$ positive parameters and the dots denote differentiation with respect to time. My question is: how to treat this equation? It turns out that the problem is much simpler in quantum mechanics if we try to solve the time independent Schroedinger equation. Thanks in advance.
As in the QM case, the usual way to solve differential equations involving delta-functions is to solve them piecewise on each domain. We first re-cast the equation to solve for the speed $v = dx/dt$ as a function of $x$: $$ \frac{d^2 x}{dt^2} = \frac{dv}{dt} = \frac{dv}{dx} \frac{dx}{dt} = v \frac{dv}{dx} = \frac{1}{2} \frac{d}{dx} \left(v^2 \right). $$ Our equation is now \begin{equation} \frac{m}{2} \frac{d}{dx} \left(v^2 \right) = - V_0 \delta(x). \qquad \qquad (1) \end{equation} We can now note that for the regions $x < 0$ and $x > 0$, we have $$ \frac{m}{2} \frac{d}{dx} \left(v^2 \right) = 0, $$ which implies that the solution is $$ v(x) = \begin{cases} v_- & x < 0 \\ v_+ & x>0 \end{cases} $$ To find the relationship between $v_-$ and $v_+$, we integrate equation (1) over a small interval $[-\epsilon, \epsilon]$ around 0: \begin{align*} \frac{m}{2} \int_{-\epsilon}^{\epsilon} \frac{d}{dx} \left(v^2 \right) \, dx &= - V_0 \int_{-\epsilon}^{\epsilon} \delta(x) \, dx \\ \frac{m}{2} \left[ v^2 \right]_{-\epsilon}^{\epsilon} &= - V_0 \\ \frac{m}{2} \left(v_+^2 - v_-^2 \right) &= - V_0. \end{align*} This latter equation can be recognized as energy conservation across the boundary $x = 0$: $\Delta KE = - \Delta PE$. The solution for $v(x)$, with incoming speed $v_0$, is then $$ v(x) = \begin{cases} v_0 & x < 0 \\ \sqrt{v_0^2 - 2V_0/m} & x > 0 \end{cases} $$ If you want the solution for $x(t)$, you can then integrate this with respect to time. Alternately, if you want to skip the step of finding $v(x)$, you can instead use the identity $$ \delta(x(t)) = \sum_i \frac{1}{|\dot{x}(t_i)|} \delta(t - t_i) $$ where the sum runs over the zeroes of the function $x(t)$. This then allows us to recast the equation solely in terms of $x$ as a function of $t$. One can once again solve this piecewise between successive zeroes of the function $x(t)$, and integrate over small intervals of $t$ surrounding these zeros to "patch" the piecewise solutions together. In this case, the solutions for $x(t)$ "between" the zeroes will be simply linear functions of $t$, which means that you will only have one zero for $x(t)$, and applying the above techniques will yield the same sort of solution.
How many positive integers divide $20!$ $20!$ has numbers that are multiples of $2,3,4$ and so on among its divisors. However, the total number of such integers is large. So, please help me.
Hint: Use Legendre's formula: For each prime $p\le n$, the exponent of $p$ in the prime decomposition of $n!$ is $$v_p(n!)=\biggl\lfloor\frac{n}{p}\biggr\rfloor+\biggl\lfloor\frac{n}{p^2}\biggr\rfloor+\biggl\lfloor\frac{n}{p^3}\biggr\rfloor+\dotsm$$ The number of positive divisors of $n!$ is then $$\prod_{\substack{ p\;\text{prime}\\p\le n}}\bigl(v_p(n!)+1\bigr).$$
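For $n=20$ this is easy to carry out by hand or with a few lines of code; here is a sketch applying the formula:

```python
def legendre(n, p):
    # exponent of the prime p in n!: floor(n/p) + floor(n/p^2) + ...
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

primes = [2, 3, 5, 7, 11, 13, 17, 19]   # all primes <= 20
num_divisors = 1
for p in primes:
    num_divisors *= legendre(20, p) + 1
print(num_divisors)   # 41040 positive divisors of 20!
```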
Impact of Riemannian Geometry on Group Theory In the Wikipedia page for Riemannian Geometry, it mentions that the field had made a "profound impact on group theory". What are some examples of this? Looking around a bit (including on that page), it seems more like it is the other way around (i.e. group theory informs Riemannian manifold theory). E.g. analyzing a manifold with its fundamental group, or with Lie theory. (But perhaps these can be viewed inversely.)
Felix Klein and later Élie Cartan introduced the study of geometric structures on manifolds through algebra (and in particular through group theory). This is a mutual relationship, and one may argue that the geometry motivates the study of group theory (or algebra in general), or vice versa, that the study of group theory (or algebra) is applied to geometry. In some cases this relationship is even "a bijection", e.g., compact flat Riemannian manifolds are classified by Bieberbach groups. Crystallographic groups and their generalisations are perhaps an example: they were studied originally as symmetry groups of crystals, but later received a "profound impact" from Riemannian (and pseudo-Riemannian) manifolds and their fundamental groups. In a similar way, there is an impact from number theory, via the theory of discrete groups, lattices in Lie groups, etc. The generalisation to other geometric structures, and therefore to other algebraic structures or to number theory, is still active and very modern, e.g., infra-nilmanifolds, affine and projective manifolds, see Milnor, and many more examples.
Find the biggest number $M$ such that the inequality $a^2+b^{1389} \ge Mab$ holds for every $a,b \in [0,1]$. Find the biggest number $M$ such that the following inequality holds for every $a,b \in [0,1]$: $$a^2+b^{1389} \ge Mab$$ My attempt: We should find the minimum of $\frac{a}{b}+\frac{b^{1388}}{a}$. By letting $a=b\to 0^{+}$ we get $1 \ge M$, and clearly $M \ge 0$, but what is $M$?
The condition is equivalent to $a^2 - M ab + b^{1389} \ge 0$ for $a,b \in [0,1]$. Considering it as a quadratic in $a$ and noting that $a=b=1$ gives $M \le 2\,$, its minimum is attained at $\frac{Mb}{2} \in [0,1]$ so in order for the inequality to hold the quadratic must have no distinct real roots i.e. $\Delta = M^2b^2 - 4 b^{1389} \le 0$. For $b \ne 0$ the latter gives $M^2 \le 4 b^{1387}$ and with $b \to 0^+$ it follows that $M^2 \le 0$ therefore $M=0$.
Determine the intervals in which the following inequality is satisfied: $|1-x|-x\geq 0$ Exercise: Determine the intervals in which the following inequality is satisfied: $$|1-x|-x\geq 0$$ Attempt: What to Expect: A quick manipulation renders the following: $|1-x|\geq x$. Graphing both sides: Eyeballing, the answer seems to be: $x \leq \frac{1}{2}$. Solution: (1) $|1-x|-x\geq 0$ (2) $|1-x| \geq x$ (3) * *when $x \geq 0$: * *when $1-x \geq 0$: $1-x \geq x$ *when $1-x < 0$: $-(1-x) \geq x$ *when $x < 0$: $x-1 > 0$ (4) * *when $x \geq 0$: * *when $1 \geq x$: $\frac{1}{2} \geq x$ *when $1 < x$: invalid *when $x < 0$: $x > 1$ Request: I do see the expected answer in (4), but according to my solution it's only applicable when $x \geq 0$. When $x < 0$, I get an answer that seems to have no resemblance in the expected answer. Where and what did I do wrong?
When $x \ge 1$, $$|1-x|=x-1\ge x$$ and there is no solution. When $x \le 1$, $$|1-x|=1-x\ge x$$ $$1\ge 2x$$ $$1/2\ge x$$
finding interval on which series is uniformly convergent Consider $f(x)=\sum_{n=1}^{\infty} \frac{1}{1+n^2 x}.$ For what values of $x$ does it converge uniformly? In general, is there any special criterion, besides the Weierstrass M-test, to find out whether a series of functions is uniformly convergent or not?
The series converges pointwise for $x>0$ as well as for $x<0$ as long as $-\frac1x$ is not a perfect square (so in particular for $x<-1$). Treating the negative part is a bit tricky, but on $(0,\infty)$, we see that $0\le f(x)\le\frac {\pi^2}{6x}$. As all summands are positive (on that interval), we have uniform convergence on any interval $[a,\infty)$ with $a>0$: Given $\epsilon>0$, we certainly have an error $<\epsilon$ for $x>\frac{\pi^2}{6\epsilon}$, hence need only concentrate on the interval $[a,\frac{\pi^2}{6\epsilon}]$, which is compact. In fact, we only used "positive continuous summands with a limit tending to $0$ at infinity". The same trick can be used to show that $f$, with the finitely many summands having $1+n^2a\ge 0$ removed, converges uniformly on $(-\infty,a]$ for any $a<0$. Thus after adding back those finitely many summands, $f$ converges uniformly on any closed subinterval of $\Bbb R\setminus A$ where $A=\{0\}\cup\{\,-\frac1{n^2}\mid n\in\Bbb N\,\}$ is the set of points where $f$ does not converge (or even some summand is undefined) and tends to infinity.
Determine two changing variables only knowing the result So, about a decade ago my company came up with pricing for some banners that we sell. the prices are as follows. $43.68 for a 3x4 banner $44.52 for a 3x6 banner $46.36 for a 3x8 banner $50.00 for a 3x10 banner $52.54 for a 3x12 banner and I can not figure out where these prices came from. The guy who wrote them up quit before I started, and I need to figure out the equation to extend the pricing up and down. Here's what I DO know. The equation is based off two things The cost of the banner per square foot The cost of labor I do not need to figure out the factors that went into pricing for either, I just need to know what numbers they are. Best guess for labor was 63 dollars, it might not be, but if that works, it sounds good to me. my attempt was to figure it out using substitution with a system of equations. 12(sqft) * X($/sqft) + 63($/hour) * Y (hours) = 43.68 and 18x + 63y = 44.52 with a second set of 24x + 63y = 46.36 and 30x + 63y = 50.00 BUT the first set gives me x=0.14 y=0.66667 and the second set gives me x=0.606667 y=0.504762 which leads me to believe that the hours per banner change. Meaning the y in each equation is different. Is there a way to determine what these two variables are, even though one changes, probably linearly? If not, I'll just do a whole new equation, the only issue is the number of variables going into each of these variables. Thanks.
Since you DON'T KNOW anything about "cost of materials" and "cost of labor", just ignore them and use the given data. I also see that all of these have the first dimension "3"; it is only the second measurement that is important. So we are looking for a function $f(x)$ such that $f(4)=43.68$, $f(6)=44.52$, $f(8)=46.36$, $f(10)=50.00$, $f(12)=52.54$. It is also true that any collection of $n$ points can be fitted by a degree-$(n-1)$ polynomial. Here there are 5 data points, so take a polynomial of the form $f(x)=A+Bx+Cx^2+Dx^3+Ex^4$. Taking $x$ and $y$ from the data, we have five equations to solve for $A$, $B$, $C$, $D$, and $E$.
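For what it's worth, here is a sketch of both fits (assuming numpy; the variable names are mine): the exact degree-4 interpolating polynomial the answer describes, and, since a pricing rule is more plausibly linear, a least-squares line as well:

```python
import numpy as np

widths = np.array([4, 6, 8, 10, 12])    # the varying banner dimension
prices = np.array([43.68, 44.52, 46.36, 50.00, 52.54])

# exact interpolation: degree 4 through all five points
quartic = np.polyfit(widths, prices, 4)
print(np.polyval(quartic, widths))      # reproduces the five prices

# simpler model: price = (rate per foot) * width + fixed cost
slope, intercept = np.polyfit(widths, prices, 1)
print(slope, intercept)                 # ~1.16 per foot, ~38.14 fixed
```

Under the linear reading, the pricing is roughly a $38.14 fixed charge plus about $1.16 per linear foot, though the residuals show the original prices are not exactly linear.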
For $X_1, X_2 \sim Unif(0,1)$ independent, why is $P(X_1X_2 >k) = \int_{k}^{1}\int_{k/x_1}^{1}dx_2dx_1$? For $X_1, X_2 \sim Unif(0,1)$ independent, I am trying to see why it is the case that $$ P(X_1X_2 >k) = \int_{k}^{1}\int_{k/x_1}^{1}dx_2dx_1 $$ I get the inner integral bound, but I do not know why the second outer integral starts from $k$ instead of starting from $0$?
The largest $x_2$ can be is $1$, so $1 \geqslant x_2 $, or $1/x_2 \geqslant 1$. Therefore, in order to have $x_1x_2>k$, you must have $$ x_1 > \frac{k}{x_2} \geqslant k. $$ A quick sketch of the region $\{(x_1,x_2)\in[0,1]^2 : x_1x_2>k\}$ makes the limits of integration clear.
Show a scheme is integral if and only if it is irreducible and reduced. I'm trying to solve Exercise 5.2.F from Vakil's notes: Show a scheme $X$ is integral if and only if it is irreducible and reduced. Where we say $X$ is reduced (integral) if $\mathscr O_X(U)$ is reduced (an integral domain) for all open subsets $U$ of $X$. Clearly, if $X$ is integral, then each $\mathscr{O}_X(U)$ is a domain, hence reduced, so $X$ is reduced. I'm not sure how to see that $X$ is irreducible. It's obvious for an affine scheme, since if $\mathscr{O}_X(X)=:A$ is a domain then $\text{Spec } A$ is irreducible. Am I able to use this to tackle the general case? Like if $X=\cup_i U_i$ with each $U_i$ affine open, does each $U_i$ being irreducible imply that $X$ is irreducible? This doesn't seem like the right way to approach it. I'm not sure, I just feel stuck. Any hints would be greatly appreciated (I'd prefer that over someone just giving me the answer).
Hint for the forward direction: If $X$ is not irreducible, that means there are two disjoint nonempty open subsets $U,V\subset X$. Consider $\mathscr{O}_X(U\cup V)$. Hint for the reverse direction: If $X$ is irreducible and reduced, so is any open subscheme of $X$, so every nonempty affine open subscheme of $X$ is Spec of a domain. To show $\mathscr{O}_X(U)$ is a domain for arbitrary $U$, you can show that the restriction $\mathscr{O}_X(U)\to\mathscr{O}_X(V)$ is injective for any nonempty affine open subset $V\subseteq U$. A stronger hint for the reverse direction is hidden below: To show the restriction $\mathscr{O}_X(U)\to\mathscr{O}_X(V)$ is injective, suppose $f$ is in its kernel. Then $f$ vanishes on $V$. To show $f$ vanishes on $U$, it suffices to show $f$ vanishes on $W$ for every other affine open $W\subseteq U$. Now use the fact that $W$ is Spec of a domain and $f$ vanishes on $V\cap W$.
weaker than isoperimetric inequality for surfaces in $\mathbb{R}^3$ I'm trying to prove, without the aid of the isoperimetric inequality, that for any compact manifold with given surface area $S$ in $\mathbb{R}^3$, that the volume $V$ must be bounded. By this I mean that any sequence of homeomorphisms that also preserve $S$, can't result in an unbounded volume $V$. In particular, I'm searching for an elementary proof of a weaker-than-isoperimetric inequality: $$ V \leq f(S) \tag{*}$$ I tried to find such an inequality using the Pappus Centroid Theorem but I have not been successful so far. Note 1: I am interested in a result that generalises easily to compact manifolds in $\mathbb{R}^n$ that aren't necessarily differentiable. Note 2: My motivation comes from a problem in mathematical physics where an isoperimetric inequality is deduced so I don't want to assume the isoperimetric inequality beforehand.
After discussing this with Tony Carbery, an analysis professor at Edinburgh, I realised that the Loomis-Whitney inequality answers this problem adequately. In their original paper, which is only two pages long, they use cubical projections of $n$-space to demonstrate the following theorem: Let $m$ be the measure of an open subset $O$ of $\mathbb{R}^n$, and let $m_1,\dots,m_n$ be the $(n-1)$-dimensional measures of the projections of $O$ on the coordinate hyperplanes. Then: $$m^{n-1} \leq \prod_{i=1}^{n} m_i \tag{*}$$ From this we may deduce that if $M$ is a compact manifold in $\mathbb{R}^3$ with given surface area $S$: $$V \leq S^{3/2} \tag{**}$$ This is precisely the result I wanted. Note 1: I could reproduce the details of their proof here, but I think the original paper is worth reading. Note 2: This is clearly weaker than the sharp isoperimetric bound $V \leq \frac{S^{3/2}}{3 \sqrt{4 \pi}}$, which is much more difficult to prove.
Relation between inverse tangent and inverse secant I've been working on the following integral $$\int\frac{\sqrt{x^2-9}}{x^3}\,dx,$$ where the assumption is that $x\ge3$. I used the trigonometric substitution $x=3\sec\theta$,which means that $0\le\theta<\pi/2$. Then, $dx=3\sec\theta\tan\theta\,dx$, and after a large number of steps I achieved the correct answer: $$\int\frac{\sqrt{x^2-9}}{x^3}\,dx=\frac16\sec^{-1}\frac{x}{3}-\frac{\sqrt{x^2-9}}{2x^2}+C$$ I was able to check my answer using Mathematica. expr = D[1/6 ArcSec[x/3] - Sqrt[x^2 - 9]/(2 x^2), x]; Assuming[x >= 3, FullSimplify[expr]] Which returned the correct response: Sqrt[-9 + x^2]/x^3 Mathematica returns the following answer: Integrate[Sqrt[x^2 - 9]/x^3, x, Assumptions -> x >= 3] -(Sqrt[-9 + x^2]/(2 x^2)) - 1/6 ArcTan[3/Sqrt[-9 + x^2]] Which I can write to make more clear. $$-\frac16\tan^{-1}\frac{3}{\sqrt{x^2-9}}-\frac{\sqrt{x^2-9}}{2x^2}+D$$ Now, you can see that part of my answer is there, but here is my question. How can I show that $$\frac16\sec^{-1}\frac{x}{3}\qquad\text{is equal to}\qquad -\frac16\tan^{-1}\frac{3}{\sqrt{x^2-9}}$$ plus some arbitrary constant? What identities can I use? Also, can anyone share the best web page for inverse trig identities? Update: I'd like to thank everyone for their help. The Trivial Solution's suggestion gave me: $$\theta=\sec^{-1}\frac{x}{3}=\tan^{-1}\frac{\sqrt{x^2-9}}{3}$$ Then the following identity came to mind: $$\tan^{-1}x+\tan^{-1}\frac1x=\frac{\pi}{2}$$ So I could write: \begin{align*} \frac16\sec^{-1}\frac{x}{3}-\frac{\sqrt{x^2-9}}{2x^2} &=\frac16\tan^{-1}\frac{\sqrt{x^2-9}}{3}-\frac{\sqrt{x^2-9}}{2x^2}\\ &=\frac16\left(\frac{\pi}{2}-\tan^{-1}\frac{3}{\sqrt{x^2-9}}\right)-\frac{\sqrt{x^2-9}}{2x^2}\\ &=\frac{\pi}{12}-\frac16\tan^{-1}\frac{3}{\sqrt{x^2-9}}-\frac{\sqrt{x^2-9}}{2x^2} \end{align*} Using Olivier's and Miko's thoughts, I produced this plot in Mathematica. Plot[{1/6 ArcSec[x/3] - Sqrt[x^2 - 9]/( 2 x^2), -(1/6) ArcTan[3/Sqrt[x^2 - 9]] - Sqrt[x^2 - 9]/( 2 x^2)}, {x, -6, 6}, Ticks -> {Automatic, {-\[Pi]/12, \[Pi]/12}}] Which shows that the two answers differ by $\pi/12$, but only for $x>3$.
What you are asking to prove is incorrect, I believe. By the substitution, we have that $$\frac{x}{3}=\sec(\theta)\Leftrightarrow\frac{3}{x}=\cos(\theta).$$ By the Pythagorean identity, $$\sin(\theta)=\sqrt{1-\frac{9}{x^2}}=\sqrt{\frac{x^2-9}{x^2}}.$$ Therefore, $$ \tan(\theta)=\sqrt{\frac{x^2-9}{x^2}}\frac{x}{3}=\frac{1}{3}\sqrt{x^2-9}$$ Hence, $$\theta=\sec^{-1}\frac{x}{3}=\tan^{-1}\frac{1}{3}\sqrt{x^2-9}.$$
Find the sum of all positive irreducible fractions less than 1 whose denominator is 2016. I know that we first have to use an arithmetic progression to find the sum of the first 2015 natural numbers. Then we need to subtract all those numerators that share a common factor with 2016. But there are too many. I need help with this part.
Too long for a comment. I'm trying to explain the method used by @barak manos. So we want: $$ \sum_{\substack{n=1\\ \gcd(n,2016)=1}}^{2015} \frac{n}{2016} $$ It is clear that any time $n$ is a multiple of $2$, $3$, or $7$ then $\gcd(n,2016) > 1$, because the prime factorization of $2016 = 2^5 \cdot 3^2 \cdot 7$. So we need to remove all multiples of $2$, $3$ and $7$. But we have to be careful, because if we remove all multiples of $2$ once and then all multiples of $3$, we would have removed the multiples of $6$ twice! This is where the principle of inclusion/exclusion comes into play: * *Remove all multiples of $2$, $3$ and $7$. This has the consequence of removing the multiples of $6 = 2\cdot 3$, $14 = 2 \cdot 7$ and $21 = 3 \cdot 7$ twice each, and the multiples of $42 = 2\cdot 3 \cdot 7$ three times. *Add back once the multiples of $6,14,21$ that we removed one time too many. However, this will add back the multiples of $42$ three times. *So overall we still need to remove once the multiples of $42$, which we removed $3$ times but then reintroduced $3$ times.
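The final count can be verified directly by brute force (a sketch using exact rational arithmetic):

```python
from fractions import Fraction
from math import gcd

total = sum(Fraction(n, 2016) for n in range(1, 2016) if gcd(n, 2016) == 1)
print(total)   # 288, i.e. phi(2016)/2 = 576/2
```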
How to prove the divergence/convergence of the following series: $\sum_{n=2}^\infty \frac{\sqrt[3]{n}}{\sqrt[3]{n^4}-1}$? I've tried the ratio and root tests, but both are inconclusive, and I can't find a comparison to prove whether the following series converges or diverges (the sum is taken from $n=2$, since the $n=1$ term is undefined): $$\sum_{n=2}^\infty \frac{\sqrt[3]{n}}{\sqrt[3]{n^4}-1}$$
Hint: for $n\ge 2$, $$ \frac{\sqrt[3]{n}}{\sqrt[3]{n^4}-1}\ge\frac{\sqrt[3]{n}}{\sqrt[3]{n^4}}=\frac{1}{n}, $$ and the harmonic series $\sum 1/n$ diverges, so the given series diverges by comparison.
Does the expression $\sum_{i=1}^n \sum_{j=1}^n a_{ij}b_{ij} $ have a name? Say, $A=(a_{ij})$ and $B=(b_{ij})$ are two $n$ by $n$ matrices. Does the expression $$\sum_{i=1}^n \sum_{j=1}^n a_{ij}b_{ij} $$ have a name? Is there a neat way to write it in terms of $A$ and $B$? I encountered the expression in the context of optimization. Is there any other context it comes up?
Can't comment yet... Supposing your matrices have real entries, this is the Frobenius inner product, which can be written compactly as $\operatorname{tr}(A^\top B)$: https://en.wikipedia.org/wiki/Frobenius_inner_product.
Concave down increasing example I'm looking for a concave down increasing-function, see the image in the right lower corner. Basically I need a function f(x) which will rise slower as x is increasing. The x will be in range of [0.10 .. 10], so f(2x) < 2*f(x) is true. Also if I would also like to have some constants which can change the way/speed the function is concaving. Any suggestion on a formula?
If you are restricted to a positive $x$, you could use $$ f(x) = a \sqrt{x}, \quad a > 0 $$ which satisfies $$ f(2 \cdot x) = a \sqrt{2 \cdot x} = \sqrt{2} \cdot a \sqrt{x} = \sqrt{2} \cdot f(x) < 2 \cdot f(x) $$
When do you take into account the $+2k\pi$ for complex number arguments in complex equations $$z^2 = ({2e^{i{\frac{\pi}{3}}}})^8$$ To find $z$ I took the square root of both sides, which gives me: $$z = ({2e^{i{\frac{\pi}{3}}}})^4$$ which I rewrote as $$z = 2^4e^{i\left({\frac{4\pi}{3}}+2k\pi\right)}$$ and for $k=0$ we find $$z=-8-8\sqrt{3}\,i$$ which is one of the correct answers, but for $k=1$ we find the same thing. On the other hand, the answer key states the other answer is $$z=8+8\sqrt{3}\,i$$ I think that I need to put in the $+2k\pi$ before taking the square root of $z$, but I don't understand why it changes the answer, since mathematically it should be the same. Because even when I tried with $$z = ({2e^{i\left({\frac{\pi}{3}}+2k\pi\right)}})^4$$ which I rewrote as $$z = 2^4e^{i\left({\frac{4\pi}{3}}+8k\pi\right)}$$ I still don't find the correct second answer. Thanks
* *You can use this method: $$z^2=(2e^{i{\pi\over3}})^8=(2^8e^{i{8\pi\over3}})=(2^8e^{i{8\pi\over3}+2ki\pi})$$ and now you take the square root. *Otherwise you can take the square root putting $\pm$: $$z=\pm(2e^{i{\pi\over3}})^4$$ *We can also write $2e^{i{\pi\over3}}=2e^{i{\pi\over3}+2li\pi}$ because any complex number is defined up to a phase factor $e^{2li\pi}$, so: $$z^2=(2e^{i{\pi\over3}+2li\pi})^8=(2^8e^{i{8\pi\over3}+16li\pi})=(2^8e^{i{8\pi\over3}+16li\pi+2ki\pi})=(2^8e^{i{8\pi\over3}+2\pi i(8l+k)})$$ $$z=2^4e^{i{4\pi\over3}+\pi i(8l+k)}=2^4e^{i{4\pi\over3}}e^{\pi i(8l+k)}=\pm2^4e^{i{4\pi\over3}}$$ with $l,k\in\mathbb Z$, but $8l+k$ can be even or odd so you get the same solutions.
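A quick numeric check of the two square roots (a sketch using Python's cmath, purely as a calculator):

```python
import cmath

rhs = (2 * cmath.exp(1j * cmath.pi / 3)) ** 8   # right-hand side: 256*e^{i 8pi/3}
z1 = 16 * cmath.exp(1j * 4 * cmath.pi / 3)      # k even: -8 - 8*sqrt(3)*i
z2 = 16 * cmath.exp(1j * cmath.pi / 3)          # k odd:   8 + 8*sqrt(3)*i
print(z1, z2)
print(abs(z1**2 - rhs), abs(z2**2 - rhs))       # both ~0: each squares to the RHS
```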
Lowest cardinality of a subset of $\mathbb N$ that can be partitioned in $2017$ ways Find the lowest cardinality of $A\subseteq\Bbb{N}$ such that there are 2017 different partitions $B_i\subseteq A$ and $A\setminus B_i$ for which $\operatorname{lcm}(B_i) =\gcd(A\setminus B_i)$. How is it possible that we are looking for a GCD of a factor set? And what are we really looking for? That the number of sets in the power set of $A$ is 2017? Here is my attempt [link]
For a valid partition we need in particular that every $x\in B_i$ divides every $y\in A\setminus B_i$. Hence for different indices $i,j$, one of $B_i\setminus B_j$, $B_j\setminus B_i$ must be empty, i.e., we have $B_i\subseteq B_j$ or $B_j\subseteq B_i$. So the $B_i$ are linearly ordered by inclusion: $$\emptyset\subsetneq B_1\subsetneq B_2\subsetneq B_3\subsetneq \ldots \subsetneq B_{2017}\subsetneq A$$ Suppose for some $k$, $1\le k\le 2017$, we have $|B_{k+1}\setminus B_{k-1}|=2$ (with $B_0=\emptyset$ and $B_{2018}=A$ understood), so $B_{k}=B_{k-1}\cup \{m\}$, $B_{k+1}=B_{k}\cup\{n\}$ for some distinct numbers $m,n$. Then $\operatorname{lcm}(B_{k})=m$ and $\gcd(A\setminus B_{k})=n$, i.e., we should have $n=m$. We conclude $|B_{k+1}|\ge |B_{k-1}|+3$ for $1\le k\le 2017$. By induction, $|A|=|B_{2018}|\ge |B_0|+1009\cdot 3= 3027$. But can this lower bound be achieved? Yes! For $n\in\Bbb N$ let $$S_n=\{\,2^a3^b: |a-b|\le 1, a+b<n\,\}. $$ Verify that $\operatorname{lcm}(S_{2n})=\operatorname{lcm}(S_{2n+1})=6^n$ and that for $m>2n$ or $m>2n+1$, respectively, $\gcd(S_m\setminus S_{2n})=\gcd(S_m\setminus S_{2n+1})=6^n$ as well. Also $|S_{2n}|=3n$. This allows us to pick $B_k=S_k$ for $1\le k\le 2017$ and $A=S_{2018}$ with $|A|=3027$.
Can a tree with n vertices be colored with n colors according to greedy coloring? This question is from a home assignment but it makes no sense to me. Construct a sequence of trees $(T_n)^∞_{n=1}$ with an ordering of their vertices such that the greedy colouring algorithm uses n colours to find a proper colouring of $T_n$. From what I can tell, the greedy coloring algorithm always ends up using just two colors when coloring a tree regardless of the ordering of vertices but this question assumes otherwise. Am I wrong? I'm completely stuck on this.
Such a tree $T_n$ must have at least $2^{n-1}$ vertices. Here is a simple recursive construction where the tree $T_n$ has exactly $2^{n-1}$ vertices. In each case we order the vertices so that vertices of lower degree are colored before vertices of higher degree. For $n=1$ let $T_1=P_1=K_1$. For $n=2$ let $T_2=P_2=K_2$. For $n=3$ let $T_3=P_4$; it has vertices $v_1,v_2,v_3,v_4$ and edges $v_1v_2,v_2v_3,v_3v_4$. For $n=4$ start with the tree $T_3$ and add new vertices $v_1',v_2',v_3',v_4'$ and edges $v_1v_1',v_2v_2',v_3v_3',v_4v_4'$. Edit. As requested in a comment, here is a detailed description and a proof. Let $W_1,W_2,W_3,\dots$ be disjoint sets with $|W_1|=1$ and $|W_n|=2^{n-2}$ for $n\ge2$, so that $|W_n|=|W_1\cup\cdots\cup W_{n-1}|$ for $n\ge2$. Let $V_n=W_1\cup\cdots\cup W_n$. Let $E_1=\varnothing$. For $n\ge2$ let $M_n$ be a matching between $W_n$ and $V_{n-1}=W_1\cup\cdots\cup W_{n-1}$ and let $E_n=E_{n-1}\cup M_n=M_2\cup\cdots\cup M_n$. Plainly $T_n=(V_n,E_n)$ is a tree of order $2^{n-1}$. If $v\in W_i$ and $n\ge i$, then in the tree $T_n$ the vertex $v$ has no neighbors in $W_i$, exactly one neighbor in $W_j$ for each $j$ such that $i\lt j\le n$, and, if $i\ge2$, exactly one neighbor in $W_1\cup\cdots\cup W_{i-1}$; thus $$\deg_{T_n}v=\begin{cases} n-1\quad\quad\text{ if }\ \ i=1,\\ n-i+1\ \ \text{ if }\ \ i\ge2.\\ \end{cases}$$ I claim that, if the vertices of $T_n$ are ordered so that vertices of lower degree precede vertices of higher degree, then the greedy coloring of $T_n$ uses $n$ colors. The proof is by induction on $n$. The claim is clearly true for $T_1=K_1$ and $T_2=K_2$. Suppose $n\ge3$. We start by coloring the leaves of $T_n$; these are precisely the elements of $W_n$, which is an independent set, so they all get the same color, call it red. It remains to color the vertices of $T_{n-1}$. Since $$\deg_{T_{n-1}}v=\deg_{T_n}v-1$$ for $v\in V_{n-1}$, it follows from the inductive hypothesis that the greedy algorithm on $T_n$ will use $n-1$ colors on the vertices of $T_{n-1}$. Since each vertex of $T_{n-1}$ has a neighbor in $W_n$, no vertex of $T_{n-1}$ is colored red, so a total of $n$ colors are used on $T_n$.
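Here is a sketch that builds the trees of the recursive construction above and runs the greedy algorithm with the low-degree-first ordering (Python; ties between equal-degree vertices are broken by construction order, which matched the proof in my tests):

```python
import itertools

def build_tree(n):
    # T_n: W_1 = {0}; for k = 2..n, a new layer W_k is matched
    # bijectively to all previously built vertices V_{k-1}
    adj = {0: set()}
    for _ in range(2, n + 1):
        for u in list(adj):
            w = len(adj)
            adj[w] = {u}
            adj[u].add(w)
    return adj

def greedy_colors(adj):
    # color vertices of lower degree first, as prescribed above
    order = sorted(adj, key=lambda v: len(adj[v]))
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in itertools.count() if c not in used)
    return max(color.values()) + 1

for n in range(1, 9):
    adj = build_tree(n)
    print(n, len(adj), greedy_colors(adj))   # n, 2^(n-1) vertices, n colors
```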
Polar transformation of a probability distribution function I am working through Dirk P Kroese "Monte Carlo Methods" notes with one section based on Random Variable Generation from uniform random numbers using polar transformations (section 2.1.2.6). The polar method is based on the polar coordinate transformation $ X=R \cos \Theta$, $Y=R\sin \Theta$, where $\Theta \sim \text{U}(0,2\pi)$ and $R\sim f_R$ are independent. Using standard transformation rules it follows that the joint pdf of $X$ and $Y$ satisfies: $$f_{X,Y}(x,y)=\cfrac{f_R(r)}{2\pi r}$$ with $r=\sqrt{x^2+y^2}$. I don't fully understand how the expression for $f_{X,Y}(x,y)$ is obtained from the "standard transformation rules"; please help or give a hint.
Rewrite the RHS as \begin{align}\frac{f_R(r)}{2\pi r}&=f_R(r)\cdot\frac1{2\pi}\cdot\frac1r=f_R(r)f_\Theta(\theta)\frac1r=f_{R,\Theta}(r,\theta)\frac1r\end{align} (where we used that $\Theta \sim \text{U}(0,2\pi)$ hence $f_\Theta(\theta)=\frac1{2\pi}$ and that $R,\Theta$ are independent) so that their assertion becomes $$f_{R,\Theta}(r,\theta)=f_{X,Y}(x,y)\cdot r$$ where $x$ actually stands for $x(r,\theta)=r\cos{(\theta)}$ and similarly $y$ stands for $y(r,\theta)=r\sin{(\theta)}$. Hence, we expect that $r$ is the absolute value of the determinant of the Jacobian of the transformation (this is the standard methodology for transforming bivariate random variables; look it up in a textbook or on the internet under "transformation of random vectors"), which we can indeed verify: $$\det{(J)}=\frac{\partial x(r,\theta)}{\partial r}\frac{\partial y(r,\theta)}{\partial \theta}-\frac{\partial x(r,\theta)}{\partial \theta}\frac{\partial y(r,\theta)}{\partial r}=r\cos^2{(\theta)}+r\sin^2{(\theta)}=r\cdot1=r$$ Note: I would expect $|r|$, but since they omitted the absolute value, there must be some point where they assume that $r>0$ (if I am not mistaken).
Solving $\lim\limits_{x \to \infty} \frac{\sin(x)}{x-\pi}$ using L'Hôpital's rule I know how to solve this using the squeeze theorem, but I am supposed to solve only using L'Hôpital's rule $$\lim\limits_{x \to \infty} \frac{\sin(x)}{x-\pi}$$ I tried: $$\lim\limits_{x \to \infty} \frac{\sin(x)}{x-\pi} = \lim\limits_{x \to \infty} \frac{d/dx[\sin(x)]}{d/dx[x-\pi]} = \lim\limits_{x \to \infty} \frac{\cos(x)}{1}$$ From here I am stuck because the rule no longer applies and using $\infty$ for $x$ doesn't help to simplify. Logically the limit is $0$ because $\sin(x)$ can only be $-1$ to $1$, but this is using the squeeze theorem. Is there still any way to solve this without using the squeeze theorem?
L'Hôpital's Rule only works when the limit has the indeterminate form $0/0$ or $\infty/\infty$. Here the numerator $\sin(x)$ does not tend to $\infty$, or to anything at all, since it oscillates between $-1$ and $1$; so the quotient is of neither form and you cannot use L'Hôpital's Rule on this problem (at least the way it's written).
Categorical construction - do I have a functor? I have the following situation: I have a construction that takes an object in a category $\mathcal{C}$ and via some totally noncanonical choices constructs an object in another category $\mathcal{D}$ where different choices result in (noncanonically) isomorphic objects in $\mathcal{D}$. I also have a second construction where given morphism in $\mathcal{C}$ say $f: X \to Y$, after choosing objects in $\mathcal{D}$ associated to $X$ and $Y$, say $X'$ and $Y'$, I can construct a map $f' : X' \to Y'$. Is there a way of putting these constructions together and calling it a functor? It seems I would need a new category instead of $\mathcal{D}$ in which the objects are isomorphism classes of objects in $\mathcal{D}$.
A basic trick here is to "inflate" the category $\mathcal{C}$; e.g. define $\hat{\mathcal{C}}$ to be * *The objects are pairs $(X, C)$ where $X$ is an object of $\mathcal{C}$ and $C$ represents a particular collection of choices in the construction. (e.g. we might just pick $C = X'$) *The morphisms are given by $\hom_{\hat{\mathcal{C}}}((X,C), (Y,D)) = \hom_{\mathcal{C}}(X,Y)$ The end result is that the functor $\hat{\mathcal{C}} \to \mathcal{C}$ that forgets the choices is an equivalence functor, and when doing your construction on objects of $\hat{\mathcal{C}}$, you now have canonical choices. Of course, your construction still may or may not be a functor; e.g. you have to check that it preserves identities and composition. With a sufficiently strong version of the axiom of choice, equivalence functors have weak inverses, so you also get an equivalence functor $\mathcal{C} \to \hat{\mathcal{C}}$ that makes a choice for each object of $\mathcal{C}$, which can be composed with functors $\hat{\mathcal{C}} \to \mathcal{D}$ to get a functor $\mathcal{C} \to \mathcal{D}$.
Two sets are equal, Definition. Two sets A and B are the same, if $ \displaystyle \forall X: [X \in A \Leftrightarrow X \in B] \land [A \in X \Leftrightarrow B \in X]$ Ehm. I know one explanation that says, "two sets are equal, if they have the same elements". If so, why do we need the second part of the definition? If I read this, I see: "Two sets are equal, if they have the same elements and are elements of the same other things". What is this circularity? Can somebody explain to me the hidden meaning of this second part? Thanks.
The definition of equality both tells us what equality is and also tells us what facts about equal things we may use. So if we are told $ A = B $ then we are told both that they have the same elements and that they are elements of the same sets. When we are dealing at the level of axioms we can't depend on what we expect equality to mean. For example imagine if we had symbols $ \doteq $ and $ \triangleright $. If you had the axiom $ A \doteq B \Leftrightarrow \forall X : [ X \triangleright A \Leftrightarrow X \triangleright B ] $ then one could draw a graph (with arrows indicating the $ \triangleright $ relationship) which has $ A \doteq B $ even though A and B don't act in the way you would expect them to act if they were equal.
Proof that conditional probabilities sum up to one I've read on Wikipedia that the sum (or integral, for continuous $Y$) of $P(Y=y|X=x)$ over $y$ is always equal to one. I've attempted a proof of this statement for the discrete (sum) case: Proof: By the Kolmogorov definition of conditional probability and the Law of Total Probability, $$\sum_k P(A_k | B) = \sum_k \frac{P(A_k \cap B)}{P(B)} = \frac{1}{P(B)}\sum_kP(A_k \cap B) = \frac{1}{P(B)}P(B)=1.\ \square$$ Is this a correct proof? How would I prove the above statement for the continuous case, i.e. that $\int_y P(Y=y|X=x)$ always equals one?
Your proof looks great! For the continuous case, your probability is actually a density, so you could write \begin{align*} \int_{\mathbb{R}} f(y \mid x) dy = \int_{\mathbb{R}} \frac{f(y,x)}{f(x)} dy = \frac{1}{f(x)}\int_{\mathbb{R}} f(y,x) dy = 1 \end{align*} since the integral of the joint density over the support of $Y$ is simply the marginal for $X$, so the proof is more or less the same.
Intuition of de rham cohomology I'm studying differential geometry, I find it very interesting however the teacher doesn't give much motivation for new definitions, theorems,... The last topic of the course dealt with differential forms, Stokes' theorem and de Rham cohomology. I understand that differential forms are able to generalize theorems from calculus. However I don't really see what de Rham cohomology does exactly. We really did see only a few facts without much explanation. I've read this nice answer Intuitive Approach to de Rham Cohomology on a similar question. It explains well the ability of 1-forms to spot holes in a manifold. However it says nothing about higher forms and their uses. Can someone explain the idea of cohomology? There is also an exercise about this: Given two smooth maps $f,g: S^3 \rightarrow S^2$, do they agree at the level of de Rham cohomology? So are the induced maps $H(S^2) \rightarrow H(S^3)$ the same? From what I know, I would say no because $H^1(S^2),H^1(S^3) \cong \mathbb{R}$ and the linear maps $H(S^2) \rightarrow H(S^3)$ are thus isomorphic to $\mathbb{R}$. So I would say there are many maps which induce different maps at the level of de Rham cohomology. Is this correct? What is the intuition behind this exercise? Thanks in advance
This is not a good example to consider. $H^1(S^2) = 0 = H^1(S^3)$, $H^2(S^2) \cong \Bbb R$ and $H^2(S^3) = 0$, and, similarly, $H^3(S^2) = 0$ and $H^3(S^3)\cong\Bbb R$. So even though there are an integer's worth of homotopy classes of maps $S^3\to S^2$, they all induce uninteresting (i.e., $0$-) maps $H^*(S^2)\to H^*(S^3)$. You might learn more from considering different maps $S^1\times S^1\to S^1\times S^1$, where you can explicitly compute the deRham cohomology and the induced maps.
Doing definite integration $\int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)}$ We have to evaluate the following integral $$ \int_0^{\pi/4}\frac{x\,dx}{\cos x(\cos x + \sin x)} $$ I divided both the numerator and the denominator by $\cos^2 x$, but after that I got stuck.
Let $I$ be the integral that we want to compute. First we perform the change of variables $x=u+\frac{\pi}{4}$ and then the change of variables $u=-w$ to get: $I=\displaystyle{\int_{-\frac{\pi}{4}}^{0}\frac{u}{\cos u(\cos u -\sin u)}du+\frac{\pi}{4}\int_{-\frac{\pi}{4}}^{0}\frac{1}{\cos u(\cos u -\sin u)}du}=\\ -\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{w}{\cos w(\cos w +\sin w)}dw+\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$ and therefore we have: $I=-I +\displaystyle{\frac{\pi}{4}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw\Rightarrow I=\frac{\pi}{8}\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw}$ But with the substitution $y=\tan w$ we obtain: $\displaystyle{\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos w(\cos w +\sin w)}dw=\int_{0}^{\frac{\pi}{4}}\frac{1}{\cos^2 w}\frac{1}{(1 +\tan w)}dw=\int_{0}^{1}\frac{1}{1+y}dy=\ln2}$ and finally $I=\frac{\pi}{8}\ln2$. Note: In the change of variables $x=u+\frac{\pi}{4}$ we used the trigonometric identities: $\cos (x+y)=\cos x \cos y - \sin x \sin y \\ \sin (x+y)=\sin x \cos y + \cos x \sin y$
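As a sanity check (not part of the derivation), the closed form can be compared against numerical quadrature in Python:
from math import cos, sin, pi, log
from scipy.integrate import quad

# the original integrand
f = lambda x: x / (cos(x) * (cos(x) + sin(x)))
print(quad(f, 0, pi / 4)[0])  # ~0.27220
print(pi / 8 * log(2))        # ~0.27220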
Quadratic equation system $A^2 + B^2 = 5$ and $AB = 2$ Given a system of equations $A^2 + B^2 = 5$ $AB = 2$ what is the correct way to solve it? I see immediately that the answers are * *$A=1, B=2$ *$A=2, B=1$ *$A=-1, B=-2$ *$A=-2, B=-1$ but I don't understand the correct way of getting there. I have tried to isolate one of the variables and put the resulting expression into one of the equations, but this didn't get me anywhere. What is the correct way of solving this problem?
Hint: $$A^2+B^2+2AB=(A+B)^2=5+2\cdot2=9=3^2$$ Then $A+B=\pm3$, while $AB=2$. Case 1) $A+B=3$, $AB=2$. Case 2) $A+B=-3$, $AB=2$. In each case $A$ and $B$ are the two roots of $t^2\mp3t+2=0$, by Vieta's formulas.
Angle between pair of vectors from two planes at an angle mutually. Suppose I have 2 planes mutually at an angle to each other. Would two vectors, one from each plane, then make that same angle between them? Is that so?
If I've understood your question, call the two planes you described plane A and plane B. Then consider a vector in plane A that is also completely contained in the 'hinge line' where the planes meet. Also consider a vector in plane B that is also completely contained in the 'hinge line'. The angle between these two vectors is zero, so if the planes met at, say, 30 degrees, these two vectors clearly would not make an angle of 30 degrees with each other.
About a type of integral domains I was reading the notes about factorization written by Pete L. Clark and I found that he uses the name "EL-domain" to refer to those integral domains in which every irreducible element is prime (page 16). So my question is: is the name "EL-domain" an accepted terminology? I made some google search and I haven't found anything. Perhaps this terminology has been used in some papers, but I don't know. Any help is appreciated.
It stands for "Euclid's Lemma" as Pete mentions here. As I mention there, they are more commonly called AP-domains (Atoms are Prime) in the literature on factorization theory. Remark $\ $ See this answer for common names of many closely related properties.
For what $f$ is $\int_{-\infty}^\infty f(x)\,dx = 2 \int_0^\infty f(x) \,dx \neq \pm \infty$? I just keep the cases $\pm \infty$ out, so that there are no trivial solutions such as $f(x)=x$. Can you give not just some examples but a more general formula (if there is one) for the solutions?
Continuous even functions which tend to zero quickly are a class worth looking at. For example, $f(x)=\dfrac{1}{x^2+1}$. An example of an even, continuous, function which does tend to zero for which this will not work is $g(x) = \dfrac{1}{|x|+1}$. There are many other functions (both continuous and not) that satisfy this, though, having no absolute defining pattern. And so a formula will not be possible. For example, $$h(x) = \begin{cases} 0 & x < -3 \\ 1 & -3\le x \le -2 \\ 0 & -2< x < 201.3 \\ 1 & 201.3 \le x \le 202.3 \\ 0 & x> 202.3\end{cases}$$ satisfies your criteria.
Jakob Bernoulli’s solution I'm reading this. It says, from $ds^2 = dx^2 + dy^2$, we can get $2\,ds\,d^2s =2\,dx\,d^2x$. I cannot understand this process. Please give me some references (simpler is better).
If one feels uncomfortable with differentials it is possible to explain with ordinary derivatives. Let's divide $ds^2 = dx^2 + dy^2$ by $dy^2$ to get $$ \left(\frac{ds}{dy}\right)^2=\left(\frac{dx}{dy}\right)^2+1\qquad\Leftrightarrow\qquad (s'(y))^2=(x'(y))^2+1. $$ Now differentiate both sides w.r.t. $y$ using the chain rule $$ 2s'\cdot s''=2x'\cdot x''\qquad\Leftrightarrow\qquad 2\frac{ds}{dy}\frac{d^2s}{dy^2}=2\frac{dx}{dy}\frac{d^2x}{dy^2}. $$ Multiplying by the denominator we obtain $2ds\cdot d^2s=2dx\cdot d^2x$
Nonnegative $C^{\omega}$ function $f$ with $\int_\mathbb{R} f(t) \ dt < + \infty$ but $\lim f(x) \neq 0$ Is there a nonnegative $C^{\omega}$ function $f$ with $\int_\mathbb{R} f(t) \ dt < + \infty$ (in the Riemann sense) but $\lim f(x) \neq 0$ as $x \to +\infty$ or $x \to -\infty$? Additionally, what can we say about stronger conditions? For example, if $f$ is positive (as opposed to merely nonnegative)? Or instead of $\lim f(x) \neq 0$, $\limsup f(x) = +\infty$? Unless I'm mistaken, here is an example for the $C^{\infty}$ case, which is, of course, weaker. Consider the bump function $$\sigma(x) = \begin{cases} \exp\left( -\frac{1}{1 - x^2}\right) & |x| < 1\\ 0 & x \in [-2, -1] \cup [1, 2) \end{cases}$$ extended $4$-periodically to $\mathbb{R}$. Since this is even, we will only consider $x \geq 0$. We alter this as follows. The "bump" portion of each period (i.e. where $f(x) \neq 0$) is scaled horizontally so its width is very small. A consequence of this is that the "$0$" portion of each period gets relatively larger within each period. We must make the $n$th bump so thin that the area underneath the bump is less than $2^{-n}$. This can be done since the height is bounded. Note that, since $x \geq 0$, the first bump is sort of a half-bump, but this makes no difference. Our original function $\sigma$ was $C^{\infty}$, so our new function, which has only been scaled, is as well.
I believe the function $$ f(x) = 1 + \tanh\left[\left(\sin^2x-1\right)e^{x^2}\right]\coth\left(e^{x^2}\right) $$ satisfies your condition. Analytic functions are closed under addition, multiplication, and composition, so the above is analytic itself. The limit as $x\rightarrow \pm\infty$ does not exist, because $f(x)$ returns to 1 every odd multiple of $\pi/2$, but the area under each period vanishes quickly enough to converge. In fact, the periods vanish quickly enough that I could probably add a growing multiplicative factor in front and make the lim sup infinite.
Notation for removal of row / column from matrix Is there some common notation for the result of removing the $i$th row, the $j$th column or both of them from a matrix given $A$?
How short do you need it? What's also often done is to use selection matrices that select a subset of rows/columns. E.g., to drop the third out of five rows of $A$ you can use $J_{3,5} \cdot A$ where $$J_{3,5} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$ Once you have defined what $J_{n,p}$ means in general you can also drop rows and columns via $J_{n,p} \cdot A \cdot J_{m,q}^{\rm T}$ (where $A$ is $p\times q$), apply them sequentially à la $J_{k,p-1} \cdot J_{n,p} \cdot A$ etc.
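For what it's worth, a minimal NumPy illustration of this (the helper name is mine): a selection matrix is just the identity with the unwanted row deleted, so $J\cdot A$ drops that row of $A$.
import numpy as np

def selection_matrix(drop_row, p):
    # identity of size p with row `drop_row` (1-based) removed
    return np.delete(np.eye(p), drop_row - 1, axis=0)

A = np.arange(25.0).reshape(5, 5)
J = selection_matrix(3, 5)
print(np.allclose(J @ A, np.delete(A, 2, axis=0)))  # True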
Finding the numbers $n$ such that $2n+5$ is composite. Let $n$ be a nonnegative integer. I write $$a_n = \begin{cases} 1 , &\text{ if } n=0 \\ 1 , &\text{ if } n=1 \\ n(n-1), & \text{ if $2n-1$ is prime} \\ 3-n, & \text{ otherwise} \end{cases}$$ The sequence goes like this $$1,1,2,6,12,-2,30,42,-5,72,90,-8,132,-10,-11,\ldots$$ I would like to prove the following two claims. claim 1 : If $a_n>0$ and ${a_n \above 1.5 pt 3} \notin \mathbb{Q}$ then $\sqrt{4a_n+1}$ is prime. The table below illustrates what I am seeing: \begin{array}{| l | l | l | l } \hline n & a_n & {a_n \above 1.5 pt 3} & \sqrt{4a_n+1}\\ \hline 0 & 1 & .333333.. & 2.2360679.. \\ 1 & 1 & .333333.. & 3 \\ 2 & 2 & .666666.. & 3 \\ 3 & 6 & 2 & 5 \\ 4 & 12 & 4 & 7 \\ 6 & 30 & 10 & 11 \\ 7 & 42 & 14 & 13 \\ 9 & 72 & 24 & 17 \\ 10 & 90 & 30 & 19 \\ 12 & 132 & 44 & 23 \\ 15 & 210 & 70 & 29 \\ 16 & 240 & 80 & 31 \\ 19 & 342 & 114 & 37 \\ 21 & 420 & 140 & 41 \\ 22 & 462 & 154 & 43 \\ \hline \end{array} claim 2: If $a_n<0$ then $2a_n+5$ is composite
Claim 1 is vacuously true: given any $n$, $a_{n}\in\mathbb{Z}$, so $\frac{a_n}{3}$ is always rational and the hypothesis $\frac{a_n}{3}\notin\mathbb{Q}$ is never satisfied. It also looks like $n=14$ is a counterexample to claim 2. Proof: $3-14=-11$ and $2(-11)+5=-17$ has positive divisors $1$ and $17$ only, so it is not composite.
Analytical solution to linear coupled ODEs? I have this set of equations and I am trying to find $X(t)$ and $Y(t)$ analytically with initial values known such as $X(0)=X_0$ and $Y(0)=Y_0$. How should I approach to solve it? $dX/dt=a \times X(t) + b \times Y(t)$ $dY/dt=c \times Y(t) + d \times X(t)$
Rewriting the above as: $$\begin{bmatrix} X'\\Y'\end{bmatrix}=\begin{bmatrix}a & b \\ d & c\end{bmatrix}\begin{bmatrix}X\\Y\end{bmatrix}$$ Let's assume for the moment that the coefficient matrix has two distinct eigenvalues, $r_1$ and $r_2$ (this happens exactly when the discriminant of its characteristic polynomial is nonzero), with two corresponding eigenvectors, $\vec{v_1}=(v_{1x},v_{1y})$ and $\vec{v_2}=(v_{2x},v_{2y})$. Then the solutions are given by: $X=C_1v_{1x}e^{r_1t}+C_2v_{2x}e^{r_2t}$ $Y=C_1v_{1y}e^{r_1t}+C_2v_{2y}e^{r_2t}$ where the constants $C_1$ and $C_2$ are determined by initial conditions. If instead there is a repeated eigenvalue $r$ with only one independent eigenvector $\vec{v}=(v_x,v_y)$, obtain a generalized eigenvector $\vec{w}=(w_x,w_y)$. Then the solutions are given by: $X=C_1v_{x}e^{rt}+C_2(w_x+tv_x)e^{rt}$ $Y=C_1v_{y}e^{rt}+C_2(w_y+tv_y)e^{rt}$
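Here is a short NumPy sketch of this recipe for the distinct-eigenvalue case (the coefficients and initial values below are made up for illustration), checked against a direct numerical solve:
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 2.0, -1.0, 0.5             # assumed example coefficients
M = np.array([[a, b], [d, c]])
x0 = np.array([1.0, 0.0])                    # (X(0), Y(0))

r, V = np.linalg.eig(M)                      # eigenvalues r_i, eigenvectors V[:, i]
C = np.linalg.solve(V, x0)                   # constants from the initial conditions

def analytic(t):
    # (X(t), Y(t)) = sum_i C_i * exp(r_i t) * v_i
    return (V * (C * np.exp(r * t))).sum(axis=1).real

num = solve_ivp(lambda t, x: M @ x, (0.0, 1.0), x0)
print(analytic(1.0), num.y[:, -1])           # should agree to solver tolerance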
Without a calculator, find gcd(20!+95, 19!+5). I think it is $5$ because the gcf of just $20!$ and $19!$ is $19!$, and with the addition it cancels the other factors.
$\gcd(20! + 95, 19! + 5)=$ $\gcd((20! +95)-20(19! + 5), 19! + 5)=$ $\gcd(95 - 100, 19!+5)=$ $\gcd(5, 19! + 5) = $ $5\gcd(1, \frac {19!}5 + 1)=$ $5*1 =5$
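The reduction is also trivial to confirm numerically in Python:
from math import factorial, gcd
print(gcd(factorial(20) + 95, factorial(19) + 5))  # 5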
Find Order and Normal Subgroup in a Matrix Group From a Masters Qual. Practice Exam: Let $G \leq GL(3, \mathbb{F}_3)$, be the group of invertible $3 × 3$ upper triangular matrices over the field with $3$ elements (i.e. entries below the diagonal are zero). Find the order of $G$ and show that $G$ is not a simple group i.e. show that $G$ has a proper non-trivial normal subgroup. If we didn't have "invertible" condition, this would give us $3^6$ matrices, but I'm not sure how to quickly weed out which are and which aren't invertible. Intuitively, I think the normal subgroup should be the diagonal matrices (I could be wrong), but I'm not sure how to prove that without some really gross matrix multiplication.
In order that an upper triangular matrix $$\begin{pmatrix} a & b & c \\ 0 & d& e \\ 0 &0 & f\end{pmatrix}$$ be invertible, it is necessary and sufficient that the determinant $adf$ be nonzero. This means that $b,c,e$ can be anything you want, while $a, d, f$ cannot be zero. This means there are $$3 \cdot 3 \cdot 3 \cdot 2 \cdot 2 \cdot 2 = 216 $$ such matrices. The subgroup $D$ of diagonal invertible matrices is as far from being normal in $G$ as you can get (if $x \in G$, but not in $D$, then $xDx^{-1} \neq D$). However, if you multiply two upper triangular matrices $x, y \in G$, notice that the entries on the diagonal of $xy$ are obtained by multiplying the corresponding entries on the diagonal of $x$ and $y$. This implies that $$N = \{ \begin{pmatrix} 1 & b & c \\ 0 & 1& e \\ 0 &0 & 1\end{pmatrix} : b, c, e \in \mathbb{F}_3 \}$$ is a normal subgroup of $G$. Actually, $G$ is the semidirect product of $N$ and $D$.
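A brute-force count in Python (my own quick check) confirms the order:
from itertools import product

# a, d, f are the diagonal entries (must be nonzero); b, c, e are free
count = sum(1 for a, b, c, d, e, f in product(range(3), repeat=6)
            if a != 0 and d != 0 and f != 0)
print(count)  # 216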
Without using L'Hôpital's rule and series expansion, $ \lim_{t\rightarrow 0}\left(t\cot t+t\ln t\right)$ Without using L'Hôpital's rule and series expansion, evaluate $\displaystyle \lim_{t\rightarrow 0}\bigg(t\cot t+t\ln t\bigg).$ $\displaystyle \lim_{t\rightarrow 0}\bigg(t\frac{\cos t}{\sin t}+\ln t^t\bigg) = 1+\lim_{t\rightarrow 0}\ln(t)^t$ Could someone help me with this, thanks
For $0 < t < 1$ $$ -t\ln t = -2t\ln \sqrt{t}= 2t\int_{\sqrt{t}}^1 \frac{ds}{s} \leqslant 2t\frac{1 - \sqrt{t}}{\sqrt{t}} = 2\sqrt{t}(1 - \sqrt{t})\\ -t\ln t = -2t\ln \sqrt{t}= 2t\int_{\sqrt{t}}^1 \frac{ds}{s} \geqslant 2t(1 - \sqrt{t}) $$ Now use the squeeze theorem to show $t \ln t \to 0$.
Show there is no $C^1$ homeomorphism from $\mathbb R^3$ to $\mathbb R^2$ Show that there is no $C^1$ homeomorphism from $\mathbb R^3$ to $\mathbb R^2$. I am fully aware that in general $\mathbb R^m$ and $\mathbb R^n$ are not homeomorphic, by homology theory. I wonder if, by adding the condition $C^1$, we can have a proof using differential calculus. Here is what I've tried, just in case someone asks: I figured that by the rank theorem the differential of such a map can't have rank 2. But I don't see any contradiction if the rank is 1 or 0. I don't think I'm going about it the right way.
a) If the linear map $df_A:\mathbb R^3\to \mathbb R^2$ has rank $2$ at a single point $A\in \mathbb R^3$, then locally near $A$ the rank theorem says that up to a change of coordinates $f(x,y,z)=(x,y)$, so that $f$ is not injective and thus even less a homeomorphism. b) If $f$ is nowhere of rank $2$, then all points of $\mathbb R^3$ are critical. But then $f(\mathbb R^3)$ has zero Lebesgue measure by Sard's theorem and thus $f$ is hilariously far from being surjective and even farther from being a homeomorphism. [The use of Sard requires the slightly stronger hypothesis that $f$ be at least $C^2$]
Using the Arithmetic Mean-Geometric Mean Inequality Let a, b, c be positive real numbers. Prove that $\frac{ab}{c}+\frac{bc}{a}+\frac{ca}{b}\geq \sqrt{3(a^{2}+b^{2}+c^{2})}$
Rewrite as $a^2b^2 + b^2c^2 + c^2a^2 \ge \sqrt{3(a^4b^2c^2 + a^2b^4c^2 + a^2b^2c^4)}$ Let $x = a^2b^2, y = b^2c^2, z = c^2a^2$. Therefore, now we have to prove : $x + y + z \ge \sqrt{3(xy + yz + zx)}$ Squaring both sides, $(x + y + z)^2 \ge 3(xy + yz + zx)$ $x^2 + y^2 + z^2 \ge xy + yz + zx$. Now, use A.M. $\ge$ G.M. $(\frac{x^2 + y^2}2 \ge xy)$
Using the Arithmetic Mean-Geometric Mean Inequality. Let a, b, c be real numbers. Prove that: $(a+b-c)^{2}(b+c-a)^{2}(c+a-b)^{2}\geq(a^{2}+b^{2}-c^{2})(b^{2}+c^{2}-a^{2})(c^{2}+a^{2}-b^{2}) $
We can assume that $\prod\limits_{cyc}(a^2+b^2-c^2)\geq0$. If $a^2+b^2-c^2<0$ and $a^2+c^2-b^2<0$ then $2a^2<0$, which is a contradiction. Thus, we can assume $a^2+b^2-c^2\geq0$, $a^2+c^2-b^2\geq0$ and $b^2+c^2-a^2\geq0$. Since $(a+b-c)^2(a+c-b)^2-(a^2+b^2-c^2)(a^2+c^2-b^2)=2(b-c)^2(b^2+c^2-a^2)\geq0$, we obtain: $$\prod\limits_{cyc}(a+b-c)^2(a+c-b)^2\geq\prod\limits_{cyc}(a^2+b^2-c^2)(a^2+c^2-b^2),$$ which is $$\prod\limits_{cyc}(a+b-c)^2\geq\prod\limits_{cyc}(a^2+b^2-c^2)$$ Done!
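The key identity used above can be double-checked symbolically, for instance with SymPy (my own verification, not part of the proof):
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
lhs = (a + b - c)**2 * (a + c - b)**2 - (a**2 + b**2 - c**2) * (a**2 + c**2 - b**2)
rhs = 2 * (b - c)**2 * (b**2 + c**2 - a**2)
print(sp.expand(lhs - rhs))  # 0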
Find all Polynomials P(x) with real coefficients Find all polynomials $P(x)$ with real coefficients so that $2P(2x) = P(3x) + P(x)$. I tried to substitute first degree, second and third, but couldn't get an equality. Thank you for your responses!
For a term of degree $d$, $$2a_d(2x)^d=a_d(3x)^d+a_dx^d$$ requires $$a_d(2^{d+1}-3^d-1)=0.$$ Since $2^{d+1}-3^d-1=0$ only for $d=0$ and $d=1$, every coefficient $a_d$ with $d\ge2$ must vanish, so the solutions are exactly the polynomials of degree at most one, $P(x)=ax+b$.
Polygon can be covered by circle. I saw this problem from a math forum but it has been left unanswered for months. The problem states: "Prove that every polygon with perimeter $2004$ can be covered by a circle with diameter $1002$ " I have tried the following methods but I keep failing. Any hints for a possible method are appreciated: $1)$ I tried proving that all triangles with perimeter $2004$ can be covered by a circle with diameter $1002$ and then use strong induction to say that all such $n$-gons can be covered and then try to prove it for all the $n+1$-gons $2)$ I tried to use contradiction but also failed.
Let's say that $D$ is the biggest distance between two vertices. Just for better understanding, let's consider that the upside of $AE$ has only four sides (it doesn't change anything). By the triangle inequality we have: $$d_1<a_1+a_2$$ $$d_2<d_1+a_3 \rightarrow d_2<a_1+a_2+a_3$$ $$D<d_2+a_4 \rightarrow D<a_1+a_2+a_3+a_4$$ We can use the same idea for the downside and conclude that: $$2D<a_1+a_2+...+a_n \rightarrow D<\frac{a_1+a_2+...+a_n}{2} \rightarrow D< 1002$$ Then take a circle whose diameter contains $AE$; since the diameter is $1002$, no other distance can go outside of that circle.
Finding a Mobius transformation that maps the upper half-plane to the inside of a unit-disk, such that point $i$ is mapped to $0$ and $\infty$ to $-1$ Finding a Mobius transformation that maps the upper half-plane $\{Im(z)>0\};z \in \mathbb{C}$ to the inside of a unit-disk, such that point $i$ is mapped to $0$ and $\infty$ to $-1$. Okay, to be clear, I know that a Mobius transformation $w$ is of the form: $$w=\frac{az+b}{cz+d};ad-bc\neq0.$$ I am very aware of what a unit disk is. I have done assignments like finding a Mobius transformation that maps some points $a,b,c \in \mathbb{C}$ to $d,e,f$, where these points were not infinity. But nothing like this problem that I have here. How is this done? I think I just need one more point and its image to be able to figure it out. But how?
One very useful Mobius transformation to remember is the Cayley transform defined by $$ f(z)=\frac{z-i}{z+i}$$ This maps the upper half plane to the open unit disk because the upper half plane is precisely the set of points in $\mathbb{C}$ which are closer to $i$ than to $-i$. Note also that $f(i)=0$. This map doesn't quite fit your requirements because $f(\infty)=1$, but $-f(z)$ works instead.
How can I solve this comparison between sums? Suppose $m$ is an integer and $m\ge2$. Which of these sums is asymptotically closer to the value $\log_m n!$? $\sum_{k=1}^n\lfloor\log_m k\rfloor$ or $\sum_{k=1}^n\lceil\log_m k\rceil$ Does this have anything to do with Stirling's formula?
Assume that $m$ is greater than $n!$, and $n\geq 2$. Thus, $\log_m{n!}<1$, but for $1< k\leq n$ $$\lfloor\log_m k\rfloor=0\\\lceil\log_m k\rceil=1$$ which results in $$\sum_{k=1}^n \lfloor\log_m k\rfloor=0\\\sum_{k=1}^n \lceil\log_m k\rceil=n-1\\$$ Obviously for such cases the floor function provides better approximation, though nothing but zero. However, there may be cases with the opposite result.
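For non-degenerate parameters the comparison is easy to explore numerically; here is a small exact Python experiment of my own for $m=2$, $n=100$:
from math import factorial, log

def floor_log(k, m):
    # exact floor(log_m k) for integers k >= 1, m >= 2
    e, power = 0, m
    while power <= k:
        e, power = e + 1, power * m
    return e

m, n = 2, 100
target = log(factorial(n), m)                           # log_m(n!) ~ 524.76
lo = sum(floor_log(k, m) for k in range(1, n + 1))      # 480
hi = sum(floor_log(k, m) + (m ** floor_log(k, m) != k)  # ceiling: floor + 1
         for k in range(1, n + 1))                      # unless k is a power of m
print(target, lo, hi)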
How to prove a constant function is $C^{\infty}$ Let $U\subset R^m$ be an open set and $f:U\to\mathbb R^n$ a constant function. My question is really simple. I know that this function is $C^{\infty}$, but I'm having trouble proving it formally. What I know is that its derivative is zero and $f \in C^k$ iff $f'\in C^{k-1}$. I don't know how to manage all this information to prove $f\in C^k$ for every $k=0,1,\ldots$ I've already proved $f\in C^0$ and $f\in C^1$. My problem is to prove the induction part: $f\in C^k\implies f\in C^{k+1}$
Let's strengthen your induction hypothesis. We prove $f \in \mathcal{C}^{k}$ and $d^kf = 0$ for all $k \ge 1$. * *$k = 1$: you should prove $df=0$, using the definition of differential. *If $f \in \mathcal{C}^{k-1}$ and $d^{k-1}f = 0$, you should prove $f \in \mathcal{C}^{k}$ and $d^{k}f = 0$. Once again apply the definition to the hypothesis.
Proof Vector Space is Simple over Set of all Endomorphisms First, sorry that the title is a bit messy - here is what I want to ask: Let $V$ be a finite dimensional vector space over the field $K$ , and let $S$ be the set of all linear maps (endomorphisms) of $V$ into itself. Show that $V $ is a simple $S$-space, that is the only $S$-invariant subspaces of $V$ are $V$ itself and the zero subspace. Does anyone have any idea how to do this? Thanks
If by "$S$-invariant subspace" you mean "subspace which is invariant under every $f\in S$," then the statement doesn't even need the finite dimensional hypothesis. Suppose $A\subset V$ is a nontrivial subspace. Then let $\alpha\in A$ be nonzero, and let $\beta\in V\setminus A$ (note that since $0\in A$ we have $\beta\not=0$ trivially). Since $A$ is a subspace, the set $\alpha,\beta$ is linearly independent (why?). So - by the Axiom of Choice - we may find a basis $B$ of $V$ with $\alpha,\beta\in B$. Now consider the function $\pi: B\rightarrow B$ swapping $\alpha$ and $\beta$ and leaving all other elements of $B$ fixed. This extends to a unique $f\in S$, since $B$ is a basis of $V$; and $A$ is clearly not $f$-invariant. Note that this invokes the axiom of choice to get a basis for $V$. Without the axiom of choice, bases for arbitrary vector spaces need not exist. If $V$ is finite-dimensional, though, the axiom of choice is not needed. I am not sure whether AC is needed for the general case, but I suspect it is.
Geometric structures possible given trivial tangent bundle of even dimension Let $M$ be a $2n$-dimensional manifold with trivial tangent bundle $$ TM \cong M \times \mathbb R^{2n}. $$ I know that $M$ admits the following (using the structure group or other methods): (1) An orientation (2) A volume form (3) A riemannian metric (4) An almost complex structure (5) And also these Can any other structures be put on $M$ or is this a complete list? Does the fact that TM is actually a product imply any form of integrability?
It's certainly not a complete list. How many such structures there are depends on exactly what you mean by "structures on $M$." One reasonable way to interpret that phrase would be $G$-structures on $M$ (i.e., reductions of the tangent bundle to some subgroup $G\subseteq GL(2n,\mathbb R)$). All of the structures you mentioned are of this type -- for example, an orientation is a $GL(2n,\mathbb R)^+$-structure; a volume form is an $SL(2n,\mathbb R)$-structure; a Riemannian metric is an $O(2n)$-structure; etc. A trivialization of the tangent bundle of $M$ represents a reduction of the structure group to the trivial group, so for any subgroup $G\subseteq GL(2n,\mathbb R)$ whatsoever, you can put a $G$-structure on $M$. Typically, there will be uncountably many distinct such structures. For example, in dimension $4$, let $a\in (0,1)$ and define a subgroup $G_a\subseteq GL(4,\mathbb R)$ by $$ G_a = \left\{ \left( \begin{matrix} t & 0 & x & 0\\ 0 & t^{-a} & y & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{matrix} \right): t>0, \ x,y\in\mathbb R\right\}. $$ These three-dimensional groups all have Lie algebras of Bianchi type VI (see also this paper), so they are nonisomorphic for different choices of $a$. Thus no two of these $G$-structures are isomorphic to each other. (This construction works in any dimension bigger than $2$, but I used dimension $4$ because you stipulated that you want the manifold to have even dimension. Off the top of my head, I don't know exactly what happens in dimension $2$ -- I suppose it's possible that there are only finitely many nonisomorphic $G$-structures in that case.)
2001 AMC 10 Problem 15 This problem is from the 2001 AMC 10. It is Problem 15. A street has parallel curbs $ 40 $ feet apart. A crosswalk bounded by two parallel stripes crosses the street at an angle. The length of the curb between the stripes is $ 15 $ feet and each stripe is $ 50 $ feet long. Find the distance, in feet, between the stripes. $ \textbf{(A)}\ 9 \qquad \textbf{(B)}\ 10 \qquad \textbf{(C)}\ 12 \qquad \textbf{(D)}\ 15 \qquad \textbf{(E)}\ 25 $ What is the difference between the distance between the stripes and the length of the curb between the stripes? My diagram:
The distance between the stripes is the distance between those parallel lines, while the length of the curb is measured along the curb line, i.e. the road edge. $\triangle ABE$ is similar to $\triangle CDE$, and $AB$ is the distance between the stripes. Therefore, $\displaystyle \ \ \ \ \ \frac{CD}{AB}=\frac{CE}{AE}$ $\displaystyle \Rightarrow\frac{40}{AB}=\frac{50}{15}$ $\displaystyle \Rightarrow AB=12$
How to integrate $\tan(x) \tan(2x) \tan(3x)$? We have to integrate the following $$\int \tan x \tan 2x \tan 3x \,\mathrm dx$$ Here is what I tried: first I split it into sine and cosine terms, then used $\sin 2x =2\sin x \cos x$, but after that I got stuck.
Since $$ \tan3x=\tan(x+2x)=\dfrac{\tan x+\tan2x}{1-\tan x\tan2x}, $$ we have $$ \tan3x-\tan x\tan2x\tan3x=\tan x+\tan2x, $$ i.e. $$ \tan x\tan 2x\tan3x=\tan 3x-\tan x-\tan 2x. $$ For $a\ne 0$ we have $$ \int\tan(ax)\,dx=-\dfrac{1}{a}\ln|\cos(ax)|+c, $$ and therefore \begin{eqnarray} \int \tan x\tan 2x\tan3x\,dx&=&\int(\tan 3x-\tan x-\tan 2x)\,dx\\ &=&\ln|\cos x|+\dfrac{1}{2}\ln|\cos2x|-\dfrac13\ln|\cos3x|+c \end{eqnarray}
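A quick numerical check of the antiderivative (my own sanity test) on an interval where all three cosines are positive:
from math import tan, cos, log
from scipy.integrate import quad

F = lambda x: log(cos(x)) + log(cos(2 * x)) / 2 - log(cos(3 * x)) / 3
val, _ = quad(lambda x: tan(x) * tan(2 * x) * tan(3 * x), 0, 0.4)
print(val, F(0.4) - F(0.0))  # both ~ 0.075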
Find all complex numbers $z$ such that given quadratic has real solution I have a question about this question. Find all complex numbers $z$ such that the equation $$t^2 + [(z+\overline z)-i(z-\overline z)]t + 2z\overline z\ =\ 0$$ has a real solution $t$. Attempt at a solution The discriminant is $[(z+\overline z) - i(z-\overline z)]^2 - 4(2z\overline z)$ $=\ (z+\overline z)^2 - 2i(z+\overline z)(z-\overline z) + [i(z-\overline z)]^2 -8z\overline z$ $=\ (z^2+2z\overline z+\overline z^2) -2i(z^2-\overline z^2) - (z^2-2z\overline z+\overline z^2)-8z\overline z$ $=\ -4z\overline z - 2iz^2 + 2i\overline z^2$ For real solutions, the discriminant must be non-negative. But $z$ is a complex number; how can complex numbers be positive or negative? This is what I don't understand. Would appreciate any help. Thanks.
Just set $z=u+iv$. The discriminant becomes $$\Delta=4[(u+v)^2-2(u^2+v^2)]=-4(u-v)^2.$$ Hence the condition is $\;u=v,\;$ or $\;\arg z\equiv\dfrac\pi4\mod\pi$.
How to calculate $\lim\limits_{n \to \infty } \left(\frac{n+1}{n-1}\right)^{3n^{2}+1}$? I have $\lim_{n \to \infty } \left(\dfrac{n+1}{n-1}\right)^{3n^{2}+1}$ and I use this way: $\lim_{n \to \infty } \frac{n+1}{n-1}=\left|\frac{\infty}{\infty}\right|=\lim_{n \to \infty } \frac{1+\frac{1}{n}}{1-\frac{1}{n}}=1$ and $\lim_{n \to \infty } (3n^{2}+1)=\infty$. Then $\lim_{n \to \infty } \left(\frac{n+1}{n-1}\right)^{3n^{2}+1}=|1^{\infty}|$, an indeterminate form. Continuing with the formula $\lim_{n \to \infty } (1+\frac{1}{n})^{n}=e$, I evaluate the limit and get the answer $\infty$, which is correct. But can I really treat $\lim_{n \to \infty } \left(\frac{n+1}{n-1}\right)^{3n^{2}+1}$ by separately taking $\lim_{n \to \infty } \frac{n+1}{n-1}=1$ and $\lim_{n \to \infty } (3n^{2}+1)=\infty$? I can't find such a property of limits.
Hint Consider $$a_n=\left( \frac{n+1}{n-1}\right)^{3n^{2}+1}$$ $$\log(a_n)=(3n^2+1)\log\left( \frac{n+1}{n-1}\right)=(3n^2+1)\log\left( \frac{1+\frac 1n}{1-\frac1n}\right)$$ Now, use Taylor for infinitely large $n$ $$\log\left( \frac{1+\frac 1n}{1-\frac1n}\right)=\frac{2}{n}+\frac{2}{3 n^3}+O\left(\frac{1}{n^5}\right)$$
What is twice as warm as 42 degrees? A friend once noted that the temperature had doubled from morning to afternoon, from 42 degrees to 84 (Fahrenheit; this was in the U.S.). I didn't contradict her, but thought to myself that wasn't really true, because the actual doubling of 42 degrees would be the span from absolute zero to 42 multiplied by 2. So for the temperature to double, we would have to have long before that burned to a crisp. Since "0" is a seemingly random "starting point," it really shouldn't figure into a calculation of when a temperature has doubled, correct? But we do, of course, say that if you get a raise from \$22 per hour to \$44 per hour, your salary has doubled. Because 0 is a solid basis from which to begin. Are my assumptions valid?
The only thing that makes sense in this context is the absolute zero. That's also what makes, for instance, the ideal gas law look the nicest (temperature doubles and volume stays the same means the pressure doubles, and so on). That means that twice as warm as the temperature at which water freezes turns out to be about $273^\circ C$, or $524^\circ F$. You get more or less the same thing happening with altitudes. Height above sea level is kind of arbitrary, and doesn't actually correspond to actual height above sea level most of the time because of wind and tides. And in the middle of a continent, what is sea level, really? So if you have two mountains, and say that one is twice as tall as the other, does that make sense in any objective manner?
Combinatoric proof I need to prove the following: $$\sum_{k=0}^{n-1} {n\choose k}{m-1 \choose m-n+k} = {m+n-1 \choose m} $$ I have to prove it using combinatorics (not algebra). I understand the right side is the number of ways to divide $m$ balls into $n$ different boxes, but how can I think about the left side? Thanks.
The right-hand side equals $$\binom {m+n-1}{m+n-1-m}=\binom {m+n-1}{n-1}$$ Thus, the right-hand side is the number of ways to choose $n-1$ objects out of $m+n-1$. Similarly we can rewrite one of the factors on the left-hand side as $$\binom {m-1}{m-n+k}=\binom {m-1}{(n-1)-k}$$ To see that the left-hand side is the same as the right, say we single out $n$ objects from the total. Paint them blue and paint the others red. Note that there are $m+n-1-n=m-1$ red objects. Now $k$ denotes the number of blue objects in our list of $n-1$. Thus the $k^{th}$ term in your sum is the number of ways to choose $k$ blue and $(n-1)-k$ red objects out of $m+n-1$
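A quick numerical spot-check of the identity in Python (using the convention that $\binom{n}{k}=0$ for $k<0$):
from math import comb

def C(n, k):
    return comb(n, k) if k >= 0 else 0   # math.comb is already 0 for k > n

ok = all(sum(C(n, k) * C(m - 1, m - n + k) for k in range(n)) == comb(m + n - 1, m)
         for n in range(1, 10) for m in range(1, 10))
print(ok)  # True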
Average number of pairs of distinct adjacent objects? (1) Given 8 red balls and 7 blue balls arranged randomly in a sequential fashion, what is the average number of pairs of distinct adjacent balls? For example the sequence $\mathbf{RBR}RRRRR\mathbf{RB}BBBBB$ would contribute 3 pairs, $\mathbf{RB}$ in slot $(1,2)$, $\mathbf{BR}$ in slot $(2,3)$ and $\mathbf{RB}$ in slot $(9,10)$ (2) Same question but with 6 red balls, 5 blue balls, 4 white balls * *My attempt for (1) is to consider the first 2 slots, and state that these will form a pair if it is either BR or RB, so that the probability of the first 2 slots being a pair is: $$ \frac {7} {7+8} * \frac {8} {7+7} + \frac {8} {7+8} * \frac {7} {7+7} $$ but then for slots (2,3), given that there is no replacement and the overlap with slot (2) from slots (1,2), I am not sure ... * *I guess if I can manage (1) then (2) is similar?
Use three generating functions ($A, B$ and $C$) in three variables ($z$, $w$, and $v$) to enumerate strings ending in one of three colors with the variable $u$ marking pairs of distinct adjacent colors. We have $a$ instances of the first color, $b$ of the second and $c$ of the third. We obtain $$A - z = Az + Buz + Cuz, \\ B - w = Auw + Bw + Cuw, \\ C - v = Auv + Buv + Cv.$$ Putting $Q = A + B + C$ we seek $$P= \left. \frac{d}{du} Q \right|_{u=1}.$$ Differentiate to obtain $$A' = A'z + Bz + B'uz + Cz + C'uz. \\ B' = Aw + A'uw + B'w + Cw + C'uw, \\ C' = Av + A'uv + Bv + B'uv + C'v.$$ Add these and put $u=1$ to get $$P = P (z+w+v) + \left.Q (z+w+v)\right|_{u=1} - \left.(Az + Bw + Cv)\right|_{u=1}.$$ We have by inspection that $$\left.Q \right|_{u=1} = \frac{z+w+v}{1-z-w-v}$$ and $$\left.(Az + Bw + Cv)\right|_{u=1} = \frac{z^2+w^2+v^2}{1-z-w-v}.$$ This yields $$P(1-z-w-v) = \frac{(z+w+v)^2}{1-z-w-v} - \frac{z^2+w^2+v^2}{1-z-w-v}$$ or $$P = \frac{2zw+2zv+2wv}{(1-z-w-v)^2}.$$ We now skip ahead and show how to solve the general case of $m$ colors. We get for the generating function $$P = \frac{2\sum_{1\le p \lt q\le m} w_p w_q} {\left(1-\sum_{p=1}^m w_p\right)^2}.$$ Extracting coefficients on $[w_1^{d_1} w_2^{d_2}\cdots w_m^{d_m}]$ where $d = \sum_{p=1}^m d_p$ we use the Newton binomial and obtain $$2 \sum_{1\le p \lt q\le m} (d-1) {d-2\choose d_1, d_2, \ldots d_p-1, \ldots d_q-1, \ldots d_m} \\ = {d\choose d_1, d_2, \ldots d_p, \ldots d_q, \ldots d_m} \\ \times 2 \sum_{1\le p \lt q\le m} (d-1) \frac{1}{d(d-1)} d_p d_q.$$ Divide by the multinomial coefficient to obtain the expectation $$\bbox[5px,border:2px solid #00A000]{ \frac{2}{d_1+d_2+\cdots+d_m} \sum_{1\le p \lt q\le m} d_p d_q.}$$ The original problem by the OP then produces $$\bbox[5px,border:2px solid #00A000]{ \frac{2ab+2ac+2bc}{a+b+c}.}$$ In particular we get for $(6,5,4)$ the exact value and the numerics $$\bbox[5px,border:2px solid #00A000]{ \frac{148}{15} \approx 9.866666667.}$$ There is a Maple script for this which goes as follows (warning: enumeration -- use on small values): with(combinat); ENUM := proc(L) option remember; local m, d, all, perm, pos, res, src, flips; m := nops(L); d := add(p, p in L); all := 0; res := 0; src := [seq(seq(p, q=1..L[p]), p=1..m)]; for perm in permute(src) do flips := 0; for pos to d-1 do if perm[pos] <> perm[pos+1] then flips := flips + 1; fi; od; res := res + flips; all := all + 1; od; res/all; end; X := proc(L) option remember; local m, d; m := nops(L); d := add(p, p in L); 2/d*add(add(L[p]*L[q], q=p+1..m), p=1..m); end; An elementary argument is sure to appear now that the answer, which is very simple, has been posted. What we have here is essentially the DFA method, a legacy algorithm.
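If Maple isn't at hand, the boxed expectation is just as easy to sanity-check by simulation in Python (all of the scaffolding below is my own):
import random

def expected(counts):
    d = sum(counts)
    cross = sum(counts[p] * counts[q]
                for p in range(len(counts)) for q in range(p + 1, len(counts)))
    return 2 * cross / d

def simulate(counts, trials=100_000):
    seq = [col for col, k in enumerate(counts) for _ in range(k)]
    total = 0
    for _ in range(trials):
        random.shuffle(seq)
        total += sum(seq[i] != seq[i + 1] for i in range(len(seq) - 1))
    return total / trials

print(expected([8, 7]), simulate([8, 7]))        # 112/15 ~ 7.4667
print(expected([6, 5, 4]), simulate([6, 5, 4]))  # 148/15 ~ 9.8667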
Density function of summed variables Let $X$ and $Y$ be two independent and exponentially distributed random variables with parameter $\lambda$. Let $U := \frac{X}{X + Y}$. Find the density function $f_U$ of $U$. So we basically know: $$f_X(x) = \lambda e^{-\lambda x}\\ f_Y(y) = \lambda e^{-\lambda y} $$ But I really have no clue how to find $f_{\frac{X}{X+Y}}(u)$. Does anyone have some ideas that could help me?
Use the CDF method. The support of $U$ is in $(0, 1)$ since you're dividing a positive random variable $X$ by a larger positive random variable $X+Y$, so let $u \in (0, 1)$. Then $$\mathbb{P}(U \leq u) = \mathbb{P}\left(\dfrac{X}{X+Y} \leq u\right) = \mathbb{P}\left(\dfrac{1-u}{u}\cdot X \leq Y\right)\text{.}$$ Since $u \in (0, 1)$, the graph of $y=\dfrac{1-u}{u}\cdot x$ is a positively-sloped line in the $X$-$Y$ plane, and $$\mathbb{P}\left(\dfrac{1-u}{u}\cdot X \leq Y\right) = \int_{0}^{\infty}\int_{(1-u)x/u}^{\infty}f_{X, Y}(x, y)\text{ d}y\text{ d}x\text{.}$$ By independence, $$f_{X, Y}(x, y) = \lambda^2e^{-\lambda x}e^{-\lambda y}$$ for $x, y > 0$. Hence, $$\begin{align} \int_{0}^{\infty}\int_{(1-u)x/u}^{\infty}f_{X, Y}(x, y)\text{ d}y\text{ d}x &= \lambda\int_{0}^{\infty}e^{-\lambda x}e^{-\lambda (1-u)x/u}\text{ d}x \\ &= \lambda\int_{0}^{\infty}e^{-\lambda x/u}\text{ d}x \\ &= \lambda \cdot \dfrac{u}{\lambda} \\ &= u\text{.} \end{align}$$ There are several shortcuts that I used above that you should be able to identify. If you don't understand how I got from one step to the next, please let me know. Take the derivative of this with respect to $u$ to get $$f_{U}(u) = \begin{cases} 1, & u \in (0, 1) \\ 0, & \text{otherwise.} \end{cases}$$ Hence, $U$ is uniform in $(0, 1)$.
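The conclusion is easy to corroborate with a quick Monte Carlo experiment (my own check; any $\lambda$ works):
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
x = rng.exponential(1 / lam, 10**6)   # NumPy parametrizes by the scale 1/lambda
y = rng.exponential(1 / lam, 10**6)
u = x / (x + y)
print(u.mean(), u.var())              # ~0.5 and ~1/12, as for Uniform(0,1)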
$f$ differentiable except in $1$ point Assume we have a function $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $f$ is differentiable for all non-zero $x$, $f$ is continuous at $0$ and $$\lim_{x \uparrow 0} f'(x) = \lim_{x \downarrow 0} f'(x) < \infty. $$ Is it then true that $f$ is also differentiable at $0$? If we drop the hypothesis that $f$ is continuous, this is not true since the signum function is a simple counterexample. But what if $f$ is continuous? I don't think this is true but I can't find a counterexample...
Yes, $f$ is differentiable at $0$. Hint. By the Mean Value Theorem used in the interval $[0,x]$ for all $x>0$, $$\frac{f(x)-f(0)}{x-0}=f'(t_x)$$ for some $t_x\in (0,x).$ Hence $$\lim_{x\to 0^+}\frac{f(x)-f(0)}{x-0}=\lim_{x\to 0^+}f'(t_x)=\lim_{t\to 0^+}f'(t)$$ The same can be done in $[x,0]$ for $x<0$.
Kinky functions Are there functions whereby the left-handed and right-handed derivatives are always defined but different? What made me think about this is price elasticity of demand which for psychological reasons I think wouldn't be the same from the left and right.
In Klambauer's Real Analysis pg. 101, he proves that if $f$ is a function on the open interval $(a,b)$ then there are at most countably many points $x$ such that both $f_l'(x)$ and $f_r'(x)$ exist (including the infinite cases) but are not equal. I'll repeat his argument. Let $$A=\{x\in(a,b): \text{both }f_l'(x)\text{ and } f_r'(x) \text{ exist, but }f_l'(x)< f_r'(x)\}$$ $$B=\{x\in(a,b): \text{both }f_l'(x)\text{ and } f_r'(x) \text{ exist, but }f_l'(x)> f_r'(x)\}$$ For each $x\in A$, choose a rational number $r_1^{(x)}$ such that $f_l'(x)<r_1^{(x)}<f_r'(x)$. After this, pick two more rational numbers $r_2^{(x)}$ and $r_3^{(x)}$ such that the following hold: $$a<r_2^{(x)}<r^{(x)}_3<b$$ $$\text{whenever } r_2^{(x)}<y<x\text{ we can infer }\frac{f(y)-f(x)}{y-x}>r_1^{(x)}$$ $$\text{whenever } x<y<r_3^{(x)}\text{ we can infer }\frac{f(y)-f(x)}{y-x}<r_1^{(x)}$$ These inequalities imply that \begin{equation}f(y)-f(x)< r_1^{(x)}(y-x)\end{equation} whenever $y\neq x$ and $r_2^{(x)}<y<r_3^{(x)}$. This process (with the Axiom of Choice) lets us construct a function $\varphi$ from $A$ into $\mathbb{Q}^3$ given by $\varphi(x)=(r_1^{(x)}, r_2^{(x)}, r_3^{(x)})$. This function $\varphi$ is also injective. For suppose that $\varphi(x)=\varphi(y)$. This implies that $$(r_2^{(x)}, r_3^{(x)})=(r_2^{(y)}, r_3^{(y)})\,.$$ Recall that both $x$ and $y$ are within this open interval. Thus we can infer both of these inequalities $$f(y)-f(x)< r_1^{(x)}(y-x)$$ $$f(x)-f(y)< r_1^{(y)}(x-y)$$ Since we assumed that $r_1^{(x)}=r_1^{(y)}$, adding these inequalities gives $0<0$. Which is nonsense. This shows that $A$ is countable. And by the same reasoning, $B$ is countable.
Heine-Borel theorem. I'm interested in one question about the Heine-Borel theorem. We know that if $S$ is bounded and closed then it's compact. The standard proof uses the fact that some segment is compact, and because $S$ is bounded it lies between the left and right endpoints of such a segment. But my question is: why should $S$ be closed? We could take an open, bounded $S$ and some segment which covers our set. Why is closedness so necessary?
An open interval, say $(0,2)$ is not compact in the reals, while $[0,2]$ is. Generally a subset of a compact set is not compact. Note that while one characterization of compact is that every sequence in $C$ has a convergent subsequence, this means it has to converge in $C$, that is the limit must be in $C$. So for example the sequence $(1/n)_{n\ge 1}$ shows that the set $(0,2)$ is not compact. The sequence does not converge as a sequence in $(0,2)$ since its limit (in the reals) is outside the set. If you consider the definition via open coverings, the point is that a smaller set will have coverings that won't cover the larger set, and those might not allow a finite subcover. Also, and this may be counter-intuitive at first, covering the larger set might give you a "good" building block for a cover that allows you to ditch many other sets from a cover of the smaller set. Take the example, basically as in the other answer, $(1/n,2)$ for $n\ge 1$, which is an open cover of $(0,1]$ with no finite subcover. But if you were to cover the point $0$ too (so the compact interval $[0,1]$), you'd need a "new" open set and this open set will allow you to discard almost all of the former sets. Say you cover $0$ with $(-0.001, 0.001)$; then you only need the former sets for $n$ up to $1001$ to cover $[0,1]$, so you have a finite cover. If you cover $0$ with something still smaller, just increase the $n$, but always a finite number suffices as you'll cover $0$ with an open set that thus contains some interval $(-\epsilon, \epsilon)$.
I integrated $\arccos x$ properly, but I can't seem to figure out the bounds of integration? Okay my problem is: $$ \int_0^{1/2} \arccos x \, dx $$ According to Wolfram, I performed the indefinite integral correctly. Here is my math: $$ u = \arccos x$$ $$ du = \frac{-1}{\sqrt {1-x^2}} $$ $$ v = x $$ $$dv = dx $$ ---> $$ x\arccos x + \int \frac x {\sqrt{1-x^2}} \, dx $$ So at this point I move on from using the part integration technique, and I use U-sub for the remaining integral. $$ u = -x^2 + 1 $$ $$ \frac {-1} 2 \, du = dx $$ and I end up with the final indefinite integral result of: $$x\arccos x - \sqrt{-x^2 +1} $$ Now my problem is.. according to Wolfram I am right on the money with the integral. But when I plug in my values, my answer is different from Wolfram's, and the book's. From $0$ to $1/2$: $$ \frac 1 2 \arccos \frac 1 2 - \sqrt{-(1/2)^2 + 1} = -0.604$$ $$0\arccos 0 - \sqrt{-(0)^2 + 1 } = -1 $$ $ -0.604 - (-1) = 0.395 $ but the answer is $0.657$ does anyone know why I get the bounds wrong? I'm simply plugging it in but I don't understand how my math is wrong :(
$$\int_{0}^{\frac{1}{2}}\arccos(x)dx$$ $u=\arccos(x)\implies du=\dfrac{-1}{\sqrt{1-x^2}}dx$ and $dv=dx\implies v=x$: $$uv-\int v\,du=x\arccos(x)\big\vert_0^{\frac{1}{2}}+\int_0^{\frac 1 2}\dfrac{x}{\sqrt{1-x^2}} \, dx$$ Now, $u$-sub $u=1-x^2\implies \frac{du}{-2}=x\,dx$ so that $$x\arccos(x)\big\vert_0^{\frac{1}{2}}+\int_0^{\frac{1}{2}} \frac{x}{\sqrt{1-x^2}}\,dx =x\arccos(x)\big\vert_0^{\frac{1}{2}}-\frac{1}{2}\int u^{-\frac{1}{2}}du$$ $$=x\arccos(x)\big\vert_0^{\frac{1}{2}}-\sqrt{1-x^2} \big\vert_0^{\frac{1}{2}}$$ Up to here, you've done it correctly. However, $$x\arccos(x)\big\vert_{0}^{\frac{1}{2}}=\frac{1}{2}\arccos(\frac{1}{2})-0\arccos(0)=\frac{1}{2}\cdot\frac{\pi}{3}=\frac{\pi}{6}$$ and $$\sqrt{1-x^2}\big\vert_{0}^{\frac{1}{2}}=\sqrt{\frac{3}{4}}-1$$ So, the final answer is $$\boxed{\frac{\pi}{6}-\frac{\sqrt{3}}{2}+1}\approx 0.657573$$ Overall, your problem was simply stating that $(\frac{1}{2})\arccos(\frac{1}{2}) - \sqrt{-(\frac{1}{2})^2 + 1} = -.604$ when in fact $(\frac{1}{2}) \arccos(\frac{1}{2}) - \sqrt{-(\frac{1}{2})^2 + 1}\approx -0.342$
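And a one-line numerical confirmation of the boxed value (my own check):
from math import acos, pi, sqrt
from scipy.integrate import quad

print(quad(acos, 0, 0.5)[0], pi / 6 - sqrt(3) / 2 + 1)  # both ~ 0.657573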
Uniform convergence on intervals * *$(f_n)$ converges uniformly on $(a , b)$ *$(f_n)$ converges pointwise at $x = a$ and $x = b$ Prove that $(f_n)$ converges uniformly on $[a , b]$. I am confused on how to show this, specifically at the endpoints. Using Cauchy's criterion and looking at $x = a$, we can write \begin{align*} |f_n(a) - f_m(a)| &= |f_n(a) - f_n(x) + f_n(x) - f_m(x) + f_m(x) - f_m(a)| \\ &\leq |f_n(a) - f_n(x)| + |f_n(x) - f_m(x)| + |f_m(x) - f_m(a)| \end{align*} but I am lost after that. Maybe that is not the way to go about it.
Given that $(f_n)$ converges uniformly on $(a,b)$ to some $f$, we know that $\forall \epsilon>0$ $\exists \ N_0\in \mathbb{N}$ such that $\forall n\ge N_0$ and $\forall x\in(a,b)$, $|f_n(x)-f(x)|<\epsilon$. Also, $(f_n)$ converges pointwise at $a$ and $b$; call the limits $f(a)$ and $f(b)$ respectively. Hence $\forall \epsilon>0$ $\exists$ $N_1$ and $N_2\in \mathbb{N}$ such that $|f_n(a)-f(a)|<\epsilon$ for all $n\ge N_1$ and $|f_n(b)-f(b)|<\epsilon$ for all $n\ge N_2$. Choose $N=\max\{N_0,N_1,N_2\}$; then for all $n\ge N$ and all $x\in[a,b]$ we have $|f_n(x)-f(x)|<\epsilon$, so $(f_n)$ converges uniformly on $[a,b]$.
Determining all homomorphisms from $\Bbb Z_n \times \Bbb Z_m$ to itself I know how to determine all homomorphisms from $\def\Z{\Bbb Z}\Z_n$ to $\Z_m$ (and to itself, naturally), but I can't seem to find an approach towards determining all homomorphisms from $\Z_n \times \Z_m$ to itself. For example, from $\Z_2\times\Z_2$ to itself.
You can simply send the generator of $\def\Z{\Bbb Z}\Z/n\Z$ to any element $x$ satisfying $nx=0$, and the generator of $\def\Z{\Bbb Z}\Z/m\Z$ to any element $y$ satisfying $my=0$. Then of course $(a,b)$ maps to $ax+by$. The sets of allowed values for $x,y$ are easy to determine, but depend somewhat on possible common divisors of $n$ and $m$. For instance for $(\Z/2\Z)\times(\Z/2\Z)$, all elements satisfy $2x=0$, so you have $4^2=16$ different group endomorphisms.
The reason why the existence of inaccessible cardinals cannot be proven in ZFC I am curious about why the existence of inaccessible cardinals cannot be proven in ZFC. My intuitive proof is: suppose we can prove there is an inaccessible cardinal $\kappa$; then by definition we must show that for any cardinals $b$ and $c$ satisfying $b<\kappa$ and $c<\kappa$, we have $b^c<\kappa$. Then consider the set $S$ which is the union of all the cardinals less than $\kappa$, and consider the power set of $S$, $P(S)$. It follows that $|P(S)|\ge\kappa$. This implies we can prove the existence of inaccessible cardinals under ZFC iff we can disprove it. Therefore we cannot prove the existence using ZFC. Is this the intuitive idea behind the proof? I have only learnt a little bit of set theory by self-study, and I don't know anything about mathematical logic, so I don't know things such as the second incompleteness theorem. This statement was found in my set theory book but without proof. So any comment on and improvement of my work is welcome!
The simplest proof for why we cannot prove the existence of inaccessible cardinals is as follows: If $\kappa$ is inaccessible, then $V_\kappa$ is a model of $\sf ZFC$. Now if the existence of inaccessible cardinals were provable, look inside $V_\kappa$, where $\kappa$ is the least inaccessible cardinal. There you should find some $\lambda$ which $V_\kappa$ believes is inaccessible. However this would imply $\lambda$ is actually inaccessible (being inaccessible is absolute between $V_\kappa$ and the universe for $\lambda<\kappa$), which is a contradiction since $\kappa$ was the minimal inaccessible.
Inequality with a+b+c=1 Let $a,b,c$ be positive reals such that $a+b+c=1$. Prove that $$\dfrac{bc}{a+5}+\dfrac{ca}{b+5}+\dfrac{ab}{c+5}\le \dfrac{1}{4}.$$ Progress: This is equivalent to show that $$\dfrac{1}{a^2+5a}+\dfrac{1}{b^2+5b}+\dfrac{1}{c^2+5c}\le \dfrac{1}{4abc}.$$ I'm not sure how to proceed further. Also equality doesn't occur when $a=b=c$, which makes me more confused. Edit: The original inequality is as follows: Let $a_1,a_2,\cdots{},a_n$ be $n>2$ positive reals such that $a_1+a_2+\cdots{}+a_n=1$. Prove that, $$\sum_{k=1}^n \dfrac{\prod\limits_{j\ne k}a_j}{a_k+n+2}\le \dfrac{1}{(n-1)^2}.$$
Notice that the conditions imply that $0\leq a,b,c\leq 1$; in particular, we have $$\dfrac{bc}{a+5}+\dfrac{ca}{b+5}+\dfrac{ab}{c+5}\le \dfrac{bc}{5}+\dfrac{ca}{5}+\dfrac{ab}{5} \leq \frac{b+c+a}{5} < \dfrac{1}{4}.$$ I don't think this works for the general case though, because for $n\geq 4$, we have $(n-1)^2\geq n+2$. Now for the general case, by AM-GM inequality $$\prod_{j\neq k} a_j \leq \left(\frac{\sum_{j\neq k}a_j}{n-1}\right)^{n-1}=\left(\frac{1-a_k}{n-1}\right)^{n-1}\leq \frac{1}{(n-1)^{n-1}},$$ and thus $$\sum_{k=1}^n \dfrac{\prod_{j\ne k}a_j}{a_k+n+2}\leq \frac{n}{(n+2)(n-1)^{n-1}} \leq \frac{1}{(n-1)^{n-1}}\leq \dfrac{1}{(n-1)^2}$$since $n-1\geq 2$.
Need help to solve a definite integral using the Fourier transform. Hi, I have some problems solving the following definite integral. $$ \int_{-\infty}^{\infty} e^{-\big(\dfrac{t}{\tau}\big)^2-\dfrac{i\omega t}{2}}\textrm{d}t $$ My guess is to solve it by using the Fourier transform. I need to somehow write it in the form $$ F(\omega)=\int_{-\infty}^{\infty}f(t)e^{-i\omega t}\,\textrm{d}t $$ And after I get it in this form I can simply look up in a table what $F(\omega)$ is.
Hint. Alternatively, one may observe that, by parity of the integrand, your integral reduces to $$ \int_{-\infty}^{\infty} e^{-\Big(\dfrac{t}{\tau}\Big)^2-\dfrac{i\omega t}{2}}\textrm{d}t=\int_{-\infty}^{\infty} e^{-\Big(\dfrac{t}{\tau}\Big)^2}\cos \Big(\dfrac{\omega t}{2} \Big)\:\textrm{d}t $$ then one may use the standard result: $$ \int_0^{\infty} e^{-x^2} \cos( a x) \ \mathrm{d}x=\frac{\sqrt{\pi}}{2} e^{-\large\frac{a^2}{4}}. $$
Derivatives of $f(x,y)=|x| + |y| − ||x| − |y||.$ I worked on one of my problems and I want you to check my work or give me some good ideas for a solution. $$f(x,y)=|x| + |y| − ||x| − |y||.$$ a) Find all $(x,y)$ such that $f$ is continuous at the given $(x,y)$. Solution a) It's very easy to see that the given function is continuous at every $(x,y)$ because the absolute value function $|x|$ is continuous at every $x$. b) Compute the partial derivatives of the function $f$ at the point $(0,0).$ Solution: My idea is not to compute the partial derivatives for every $x$ and $y$, but just to use the limit definition: $$\lim_{h\to0} [\frac{f(h,0)-f(0,0)}{h}=\frac{|h|+|0|-||h|-|0||}{h}=\frac{0}{h}=0]$$ This is the partial derivative with respect to $x$. Similarly we get $0$ for the partial derivative with respect to $y$ at the point $(0,0)$. c) Is the function $f$ differentiable at $(0,0)$? My work: The idea is to use the definition, so we know how our candidate differential operator looks - it's the null operator (from part b). $$\lim_{(x,y)\to0} \frac{f(x,y)-f(0,0)-A(x,y)}{||(x,y)||}=\frac{|x| + |y| − ||x| − |y||}{\sqrt{(x^2+y^2)}}$$. If we look at $x=0$ the limit is $0$, but if $y=x$ then the limit isn't equal to zero, so this limit doesn't exist.
As you may be aware, if $a$ and $b$ are real numbers, then \begin{align*} \max(a, b) &= \tfrac{1}{2}(a + b + |a - b|), \\ \min(a, b) &= \tfrac{1}{2}(a + b - |a - b|). \end{align*} (If $I$ denotes the closed interval with endpoints $a$ and $b$, each formula may be interpreted as "starting at the midpoint of $I$ and traveling to the right (max) or left (min) by half the length of $I$".) Particularly, $$ f(x, y) = |x| + |y| - \bigl||x| - |y|\bigr| = 2\min(|x|, |y|). $$ This observation makes your conclusions geometrically apparent.
Lower sum of partition and upper sum of another partition Given $P_1$, $P_2$ partitions of $[a, b]$, $$s(P_1, f) \leq S(P_2, f)$$ where $f$ is the function, and $$s(P, f) = \sum\limits_{k=1}^n(P_k - P_{k-1})m_k\text{ (lower sum)}$$ $$S(P, f) = \sum\limits_{k=1}^n(P_k - P_{k-1})M_k\text{ (upper sum)}$$ with $m_k = \inf\limits_{[P_{k-1}, P_k]}f$ and $M_k = \sup\limits_{[P_{k-1}, P_k]}f$. I can't understand this theorem: how can it hold for arbitrary partitions? I think it should require $P_1 \subseteq P_2$ or something like that...
"sandwich" another Partition $P_3:=P_1\cup P_2$ in between: (By that notation I mean the coarsest partition being finer than $P_1$ and $P_2$) Then by definition $s(P_3,f)\leq S(P_3,f)$ and you easily see that $s(P_1,f)\leq s(P_3,f)$ and $S(P_3,f)\leq S(P_2,f)$.
The permutation/combination puzzle: What is the maximum number of tickets possible in a game of Tambola? Tambola is an interesting game which is played by numbers (between 1-90) being called out and players crossing out the numbers on the ticket. Prizes are won on particular patterns. Facts about the ticket: There are 3 rows and 9 columns in every Tambola ticket. There are 90 numbers (1-90) in total in Tambola. Every ticket has exactly 15 numbers. Every row contains 5 numbers. A column may have 1, 2 or 3 numbers (there should be at least 1 number in every column). The numbers in a column are always in ascending order from top to bottom. A ticket cannot have the same number more than once. Column 1 on any ticket has numbers between 1-9. Column 2 on any ticket has numbers between 10-19. Column 3 on any ticket has numbers between 20-29. Column 4 on any ticket has numbers between 30-39. Column 5 on any ticket has numbers between 40-49. Column 6 on any ticket has numbers between 50-59. Column 7 on any ticket has numbers between 60-69. Column 8 on any ticket has numbers between 70-79. Column 9 on any ticket has numbers between 80-90. So... what is the maximum number of tickets that can be generated?
I think that User3366 wrote the correct rules. The answer is 6,080,082,602,343,750. The code (in Prolog) to reach the answer is below. Each column is described by a list of pairs [K, W]: putting K numbers in that column can be done in W = C(available, K) ways, where column 1 has 9 available numbers, columns 2-8 have 10 each, and column 9 has 11. The program enumerates every way to distribute 15 numbers over the columns (1-3 per column) and sums the products of the per-column counts.

    sum([], 0).
    sum([H|T], Result) :- sum(T, Rest), Result is H + Rest.

    :- C1 = [[1,9],[2,36],[3,84]],        % C(9,1), C(9,2), C(9,3)
       C2 = [[1,10],[2,45],[3,120]],      % C(10,k) for columns 2..8
       C3 = [[1,10],[2,45],[3,120]],
       C4 = [[1,10],[2,45],[3,120]],
       C5 = [[1,10],[2,45],[3,120]],
       C6 = [[1,10],[2,45],[3,120]],
       C7 = [[1,10],[2,45],[3,120]],
       C8 = [[1,10],[2,45],[3,120]],
       C9 = [[1,11],[2,55],[3,165]],      % C(11,k)
       findall(Z,
               ( member([X1,Y1],C1), member([X2,Y2],C2), member([X3,Y3],C3),
                 member([X4,Y4],C4), member([X5,Y5],C5), member([X6,Y6],C6),
                 member([X7,Y7],C7), member([X8,Y8],C8), member([X9,Y9],C9),
                 % keep only fill patterns placing exactly 15 numbers
                 15 is X1+X2+X3+X4+X5+X6+X7+X8+X9,
                 Z is Y1*Y2*Y3*Y4*Y5*Y6*Y7*Y8*Y9 ),
               L),
       sum(L, CARTONES),
       write(CARTONES).

If you read Spanish you can find more info here: http://pablopilotti.blogspot.com.ar/2011/02/cartones-de-bingo.html
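For readers who prefer Python, here is a sketch that re-derives the same figure. Like the Prolog program, it enumerates every admissible fill pattern (1-3 numbers per column, 15 in total) and sums the products of the per-column binomial counts:

    from math import comb, prod
    from itertools import product

    caps = [9] + [10] * 7 + [11]         # numbers available per column
    total = sum(
        prod(comb(c, k) for c, k in zip(caps, ks))
        for ks in product([1, 2, 3], repeat=9)
        if sum(ks) == 15
    )
    print(total)                         # 6080082602343750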
Proof that if $3^n - 2^n$ is prime then $n$ is prime This was a question from a previous year's test, and I haven't been able to solve it yet. If $3^n - 2^n$ is prime, then $n$ must be prime. Do you have any tips or suggestions?
Suppose that $n$ is composite, say $n = ab$ with $1 < a, b < n$. Then $$ 3^n - 2^n = (3^a)^b - (2^a)^b,$$ and $$x^b - y^b = (x -y) (x^{b-1} + x^{b-2}y + \cdots + y^{b-1}).$$ Taking $x = 3^a$, $y = 2^a$ gives a non-trivial factorization of $3^n - 2^n$: since $a \geq 2$, the factor $3^a - 2^a \geq 3^2 - 2^2 = 5$ is strictly between $1$ and $3^n - 2^n$. Hence if $n$ is composite, then $3^n - 2^n$ is as well.
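A quick empirical illustration of the contrapositive (a sketch, Python standard library only; trial division suffices at this size):

    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    for n in range(2, 25):
        if is_prime(3 ** n - 2 ** n):
            # every n printed here is itself prime, as the proof predicts
            print(n, is_prime(n))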
How to integrate $\int\sqrt{\frac{4-x}{4+x}}\,dx$? Let $$g(x)=\sqrt{\dfrac{4-x}{4+x}}.$$ I would like to find the primitive of $g(x)$, say $G(x)$. I did the following: first, the domain of $g(x)$ is $D_g=(-4, 4]$. Second, we have \begin{align} G(x)=\int g(x)\,dx &=\int\sqrt{\dfrac{4-x}{4+x}}\,dx\\ &=\int\sqrt{\dfrac{(4-x)(4-x)}{(4+x)(4-x)}}\,dx\\ &=\int\sqrt{\dfrac{(4-x)^2}{16-x^2}}\,dx\\ &=\int\dfrac{4-x}{\sqrt{16-x^2}}\,dx\\ &=\int\dfrac{4}{\sqrt{16-x^2}}\,dx-\int\dfrac{x}{\sqrt{16-x^2}}\,dx\\ &=\int\dfrac{4}{\sqrt{16(1-x^2/16)}}\,dx+\int\dfrac{-2x}{2\sqrt{16-x^2}}\,dx\\ &=\underbrace{\int\dfrac{1}{\sqrt{1-(x/4)^2}}\,dx}_{\text{set $t=x/4$}}+\sqrt{16-x^2}+C\\ &=\underbrace{\int\dfrac{4}{\sqrt{1-t^2}}\,dt}_{\text{set $t=\sin \theta$}}+\sqrt{16-x^2}+C\\ &=\int\dfrac{4\cos\theta}{\sqrt{\cos^2\theta}}\,d\theta+\sqrt{16-x^2}+C \end{align} So finally, I get $$G(x)=\pm 4\theta+\sqrt{16-x^2}+C'.$$ With WolframAlpha I found a different answer. Could you provide any suggestions? Also, is multiplying by $\frac{4-x}{4-x}$ correct at the beginning? Because then I should say that $x\neq 4$.
First of all, you are right that there is trouble in multiplying by $\frac{4-x}{4-x}$ when $x=4$. But why bother with the domain $\langle-4,4]$ in the first place? You can change the integrand at one point without changing the integral, so that one point is irrelevant. Thus, choose $D_g = \langle-4,4\rangle$. The only other issue is the one that I already mentioned in the comments. If you substitute $t = \sin\theta \in\langle -1,1\rangle$, just choose $\theta$ to be in $\langle-\frac\pi 2,\frac\pi 2\rangle$ (the substitution is a natural bijection that way). Then you have $\cos\theta>0$, and thus $\sqrt{\cos^2\theta}=\cos\theta$. So, your final result should be $$4\theta + \sqrt{16-x^2} + C = 4\arcsin\frac x4 + \sqrt{16-x^2} + C$$ and if you differentiate it, you can see that the result is just fine. Although your procedure is fine, in this case you might want to make the trigonometric substitution much sooner: $$\int\sqrt{\frac{4-x}{4+x}}\,dx = \int\frac{4-x}{\sqrt{16-x^2}}\, dx = [x = 4\sin t] =\int \frac{4-4\sin t}{4\cos t}\cdot 4\cos t\, dt=\\ = 4(t+\cos t)+C = 4\arcsin\frac x4 +4\cos\left(\arcsin\frac x4\right) + C = 4\arcsin\frac x4 +4\sqrt{1-\frac{x^2}{16}} + C$$
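If you want to double-check the antiderivative, here is a sketch assuming SymPy is available; it evaluates $G'(x)-g(x)$ at sample points of $(-4,4)$:

    import sympy as sp

    x = sp.symbols('x')
    G = 4 * sp.asin(x / 4) + sp.sqrt(16 - x**2)   # the antiderivative found above
    g = sp.sqrt((4 - x) / (4 + x))                # the original integrand
    diff = sp.diff(G, x) - g
    for v in (-3.5, -1, 0, 2, 3.9):
        print(sp.N(diff.subs(x, v)))              # all numerically zero on (-4, 4)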
If $A$ is a continuous linear operator on $l^2$ such that $\|A − T\| < 1$, show that $A$ is not invertible. Define $T : l^2 → l^2$ by $T(x) = (0, x_1, x_2, \cdots)$ where $x=(x_1, x_2, \cdots) \in l^2$. If $A$ is a continuous linear operator on $l^2$ such that $\|A − T\| < 1$, show that $A$ is not invertible. Now, I know that $T$ is not invertible, and since $\|A − T\| < 1$, the operator $\mathrm{Id}-(A-T)$ is invertible. If $A$ were invertible, would that imply that $T$ is invertible? Any help with how to solve this problem?
Assume by contradiction that $A$ is invertible. Then there exists $x=(x_1,\dots,x_n,\dots) \in l^2$ such that $$A(x)=(1,0,0,\dots,0,\dots).$$ Note that $x \neq 0$. Then $$(A-T)(x)=(1,-x_1,-x_2,\dots,-x_n,\dots).$$ Therefore, since $\|A-T\| <1$, we have $$\| (1,-x_1,\dots,-x_n,\dots) \| =\| (A-T)(x) \| \leq \|A-T\|\,\|x\| < \| x \|=\| (x_1,\dots,x_n,\dots)\|,$$ which is a contradiction: $\|(1,-x_1,\dots,-x_n,\dots)\|^2 = 1 + \|x\|^2 > \|x\|^2$.
continuous bijective maps between manifolds If $f$ is a continuous bijective map between two manifolds $M$ of dimension $m$ and $N$ of dimension $n$, why does this map not necessarily preserve dimension, i.e., why can we have $m \neq n$? What is a good example of that?
In the case $n=m$: any injective continuous map $\mathbb R^n \to \mathbb R^n$ is a homeomorphism onto its image. This is the classical invariance of domain theorem. Since an immersion is not an embedding in general, the argument I wrote is not true for $n < m$, as George Elencwajg said in the comments.
Solving proportions with $3$ ratios, $x:3:y = -2:3:-4$ Proportion seems simple enough for me. An example is $4:x = 2:5$, and the answer is $x = 10$. My problem is: how do I solve proportions with $3$ ratios like $x:3:y = -2:3:-4$? Do I write it like $$\frac{\frac{x}{3}}{y} = \frac{\frac{-2}{3}}{-4}\ ?$$
Just like chained equalities, you can break it apart. $x:3:y=-2:3:-4$ becomes $x:3=-2:3$ and $3:y=3:-4$. Now solve them independently: the first gives $\frac{x}{3} = \frac{-2}{3}$, so $x=-2$, and the second gives $\frac{3}{y} = \frac{3}{-4}$, so $y=-4$.
Log of a negative number Why is the log of $x$ equal to the log of the absolute value of $x$ plus $i$ times $\pi$? $$\log(x)=\log(|x|)+i\pi\text{, for }x < 0$$ Where does the $\pi$ come from? Is it from a logarithmic identity? I know it sounds silly, but I was not able to find an answer from existing solutions: here, here, and here.
Slightly tongue in cheek but partially seriously, I'm going to say "magic". What I mean is that before we talk about $\ln(-k)$ for a negative number, we have to define what on earth it could possibly mean to have $e^{m} = -k$. There is no "natural" meaning to that, and mathematicians, in one sense, must contrive a meaning for it. If our universe is the real numbers, then this is .... impossible. If $b > 0$ then $b^x > 0$ for every real $x$, so $e^x$ being negative is simply impossible. But what if our universe is the complex numbers? Then if $z = a + bi$ (where $i^2 = -1$), what could $e^{a+bi}$ possibly mean? After all, we can't just "multiply $e$ by itself the square root of negative one times". Well, we know for real numbers that $e^{x + y} = e^xe^y$ and $\frac {de^x}{dx} = e^x$. For these to remain true for complex numbers as well - that is, so that $e^{z+w} = e^ze^w$ and $\frac {de^z}{dz} = e^z$ - the only possible way is to define $e^{a+bi} = e^a(\cos b + i \sin b)$. Okay. That was ... a lot of hand waving. But it works, and if you study complex analysis it will be derived in great detail. BUT this means.... $e^{i\pi} = e^{0 + i\pi} = e^0(\cos \pi + i \sin \pi) = 1\cdot(-1 + i\cdot 0) = -1$. This is Euler's formula, one of the most famous mathematical formulas in history. Suddenly $e^z$ being a negative number is not impossible. But if $e^z$ is negative, then we need $e^{z} = e^{a + bi} = e^a(\cos b + i \sin b)$ with $\cos b + i \sin b$ a negative real number. That means $b = \pi$ (up to adding multiples of $2\pi$; the principal branch takes $b = \pi$). In particular, $e^z = -1$ forces $e^a = 1$, i.e. $a = 0$, so $z = i\pi$ and $\ln(-1) = i\pi$. This means that for $x > 0$, $\ln(-x) = \ln(-1) + \ln x = i\pi + \ln x$. And that's where $\pi$ comes from. When we define $e^{a+bi} = e^a(\cos b + i \sin b)$, exponents become thoroughly linked with trigonometric functions. As such, $\pi$ is an essential part of the inverse. For the real numbers only and logs of positive real numbers only, we don't have to worry, as $e^x = e^{x + 0\cdot i} = e^x(\cos 0 + i \sin 0) = e^x(1 + i\cdot 0) = e^x = k$ and $\ln k = x$, and $\pi$ is not relevant.
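Incidentally, Python's cmath module implements exactly this principal branch, so the identity in the question can be checked directly (standard library only):

    import cmath, math

    print(cmath.log(-5))                  # (1.6094379124341003+3.141592653589793j)
    print(math.log(5), math.pi)           # matches log|x| + i*pi
    print(cmath.exp(1j * math.pi))        # approximately -1 (Euler's formula)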
How to prove the positive-definiteness of this matrix? I have this matrix $$ C= \left( \begin{array}{c|c} A+\alpha I_m & -A-\alpha I_m \\ \hline -A-\alpha I_m & A+\alpha I_m \end{array} \right),\qquad A= \left( \begin{array}{cccc} (y_1,y_1) &(y_2,y_1)& \cdots&(y_m,y_1) \\ (y_1,y_2) &(y_2,y_2)&\cdots&(y_m,y_2) \\ \vdots & \vdots & \ddots & \vdots \\ (y_1,y_m) &(y_2,y_m)&\cdots&(y_m,y_m) \\ \end{array} \right) $$ where $y_i\in L^2(\Omega)$ for all $i=1,\dots,m$, $\alpha\geq 0$, and $(\cdot,\cdot)$ denotes the inner product in $L^2(\Omega)$ with $\Omega\subset\mathbb{R}^2$. I would like to prove this matrix is positive definite; any suggestions?
Since $A$ is a Gram matrix, $A$ is positive semidefinite, so $A+\alpha I$ is positive semidefinite for $\alpha \ge 0$. Then, for any vector $w = \begin{bmatrix}x\\y\end{bmatrix} \in \mathbb{R}^{2m}$ (where $x,y \in \mathbb{R}^m$) we have $w^TCw$ $= \begin{bmatrix}x^T & y^T\end{bmatrix}\begin{bmatrix}A+\alpha I & -A-\alpha I \\ -A-\alpha I & A+\alpha I\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}$ $= x^T(A+\alpha I)x - x^T(A+\alpha I)y - y^T(A+\alpha I)x + y^T(A+\alpha I)y$ $= x^T(A+\alpha I)(x-y) - y^T(A+\alpha I)(x-y)$ $= (x-y)^T(A+\alpha I)(x-y) \ge 0$, where we have used the fact that $A+\alpha I$ is positive semidefinite. Since $w^TCw \ge 0$ for all $w \in \mathbb{R}^{2m}$ and $C$ is symmetric, we have that $C$ is positive semidefinite. Note that for any $x \in \mathbb{R}^m$, the vector $w = \begin{bmatrix}x\\x\end{bmatrix} \in \mathbb{R}^{2m}$ satisfies $w^TCw = 0$. Thus, at least $m$ eigenvalues of $C$ are zero. So, $C$ is positive semidefinite but not positive definite.
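Here is a minimal numerical illustration, a sketch assuming NumPy; the matrix Y is just a placeholder generating a random Gram matrix of the required form:

    import numpy as np

    rng = np.random.default_rng(0)
    m, alpha = 4, 0.5
    Y = rng.standard_normal((m, 20))     # rows play the role of the y_i
    A = Y @ Y.T                          # Gram matrix of pairwise inner products
    B = A + alpha * np.eye(m)
    C = np.block([[B, -B], [-B, B]])

    eigs = np.linalg.eigvalsh(C)
    print(np.round(eigs, 10))            # m zero eigenvalues, the rest positive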
Basic proof that $GL(2, \mathbb{Z})$ is not nilpotent I need to show that $GL(2,\mathbb{Z})=\left\{ \begin{pmatrix} a&b \\ c&d \end{pmatrix} \mid a,b,c,d \in \mathbb{Z}, ad-bc = \pm 1\right\}$ is not nilpotent. I have seen this question but the answer given there is too advanced for where I am currently in my studies. In order to show something is not nilpotent, all I have at my disposal is to show that the upper central series does not terminate or that it is not solvable (using a derived series that does not terminate or a subnormal series that either does not terminate or whose factors are not all abelian.) This seems a little messy, and I've never used a direct approach to show that something was not nilpotent. What is the best approach to this proof without appealing to $p$-groups, Sylow Groups, Galois Theory, or free abelian groups? Thank you.
Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$. From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix. Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds are $I$ and $-I$ (see below for a proof of this claim). Thus we end up with $Z_2 = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. But then the upper central series never terminates; and hence $G$ is not nilpotent! Proof of claim: Clearly it holds for $x = I$ or $x = -I$, since these matrices are in the center of $G$. Now, let $x = \begin{pmatrix}x_1 & x_2\\x_3 & x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 & x_1+x_2\\x_3 & x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 & x_2\\x_3 & x_4\end{pmatrix}\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix} = \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}\begin{pmatrix}x_1 & x_2\\x_3 & x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3& x_2+x_4\\x_3 & x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. So $x = x_1 I$, and since $x \in G$ means $\det x = x_1^2 = \pm 1$, the only matrices satisfying this are $I$ and $-I$.
Equation of a plane $Ax+By+Cz=D$ I have a question about parametrization and plane equations. How can I write the equation of a plane in the form $Ax+By+Cz=D$ if it is given as $$x=2s+3t$$ $$y=s-2t$$ $$z=3s+t\,?$$
$$x=2s+3t$$ $$y=s-2t$$ $$z=3s+t$$ Adding the first two equations gives $x+y=3s+t$, which is exactly $z$. Then $x+y=z$, and thus $$x+y-z=0.$$
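As a cross-check (a sketch, assuming NumPy): the plane contains the direction vectors $(2,1,3)$ and $(3,-2,1)$ (the coefficient vectors of $s$ and $t$), so its normal is their cross product.

    import numpy as np

    n = np.cross([2, 1, 3], [3, -2, 1])
    print(n)   # [ 7  7 -7], proportional to (1, 1, -1)
    # the plane passes through the origin, so its equation is x + y - z = 0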