Numerical solutions to wave equation Does the wave equation always have an analytical solution given well-behaved boundary/initial conditions? If not, under what conditions does the wave equation need to be solved numerically? This figure of a simple 1D-problem seems to have been generated numerically. Any recommended reading for general theory on the wave equation is also welcome.
It has a general solution, but numerical solutions can still be an interesting exercise; a sketch of one such scheme is below. Numerical solutions become genuinely useful when you are solving some variation of the wave equation with an additional term in it which makes it unsolvable analytically.
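As an illustration of the numerical route, here is a minimal sketch of an explicit finite-difference scheme for $u_{tt}=c^2u_{xx}$; the wave speed, grid sizes, initial pulse, and fixed-end boundary conditions are all illustrative choices, not part of the original question.

```python
import numpy as np

# 1D wave equation u_tt = c^2 u_xx with fixed ends, explicit leapfrog scheme.
# All parameters below (c, grid sizes, initial pulse) are illustrative choices.
c, L, nx, nt = 1.0, 1.0, 201, 400
dx = L / (nx - 1)
dt = 0.5 * dx / c                          # CFL number 0.5 keeps the scheme stable
x = np.linspace(0.0, L, nx)

u_prev = np.exp(-200.0 * (x - 0.5) ** 2)   # initial displacement (Gaussian bump)
u = u_prev.copy()                          # crude start for zero initial velocity
r2 = (c * dt / dx) ** 2

for _ in range(nt):
    u_next = np.empty_like(u)
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_next[0] = u_next[-1] = 0.0           # fixed (Dirichlet) boundaries
    u_prev, u = u, u_next

print(u.max())   # amplitude of the numerical solution after nt time steps
```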
How to show that this limit $\lim_{n\rightarrow\infty}\sum_{k=1}^n(\frac{1}{k}-\frac{1}{2^k})$ is divergent? How to show that this limit $\lim_{n\rightarrow\infty}\sum_{k=1}^n(\frac{1}{k}-\frac{1}{2^k})$ is divergent? I applied integral test and found the series is divergent. I wonder if there exist easier solutions.
Each partial sum of your series is the difference between the partial sum of the harmonic series, and the partial sum of the geometric series. The latter are all bounded by 1. Since the harmonic series diverges, your series does also.
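A quick numerical illustration (not a proof) of how the bounded geometric part is overwhelmed by the slowly growing harmonic part:

```python
# Partial sums of sum_{k=1}^n (1/k - 1/2^k); the harmonic part grows like log n.
for n in (10, 100, 1000, 10000):
    s = sum(1.0 / k - 0.5 ** k for k in range(1, n + 1))
    print(n, round(s, 4))
```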
Definable sets à la Jech Jech in Set Theory, p. 175 defines definable sets over a given model $(M,\in)$ (where $M$ is a set) as those sets (= subsets of $M$) $X$ with a formula $\phi$ in the set of formulas of the language $\lbrace \in \rbrace$ and some $a_1,\dots,a_n \in M$ such that $$ X = \lbrace x \in M : (M,\in) \models \phi[x,a_1,\dots, a_n]\rbrace$$ Jech defines $$ \text{def}(M) := \lbrace X \subset M : X\ \text{is definable over }\ (M,\in)\rbrace $$ So far, everything is clear to me. But then, Jech claims: Clearly, $M \in \text{def}(M)$ and $M \subset \text{def}(M) \subset \text{P}(M)$. It is clear to me that and why $M \in \text{def}(M)$ and that and why $\text{def}(M) \subset \text{P}(M)$. But I cannot see at once that and why $M \subset \text{def}(M)$ for arbitrary sets $M$. Is this a typo in Jech's Set Theory or did I misunderstand something?
If $M$ is transitive and $a \in M$ then $a \subseteq M$ and moreover $$a = \{x \in M : (M,{\in}) \vDash x \in a\}.$$ So $a \in \operatorname{def}(M)$.
$A$ is a subset of $B$ if and only if $P(A) \subset P(B)$ I had to prove the following for a trial calculus exam: $A\subset B$ if and only if $P(A) \subset P(B)$ where $P(A)$ is the set of all subsets of $A$. Can someone tell me if my approach is correct and please give the correct proof otherwise? $PROOF$: $\Big(\Longrightarrow\Big)$ assume $A\subset B$ is true. Then $\forall$ $a\in A$, $a\in B$ Then for $\forall$ A, the elements $a_1, a_2,$ ... , $a_n$ in A are also in B. Hence $P(A)\subset P(B)$ $\Big(\Longleftarrow\Big) $ assume $P(A) \subset P(B)$ is true. We prove this by contradiction so assume $A\not\subset B$ Then there is a set $A$ with an element $a$ in it, $a\notin$ B. Hence $P(A) \not\subset P(B)$ But we assumed $P(A) = P(B)$ is true. We reached a contradiction. Hence if $P(A) = P(B)$ then $A\subset B$. I proved it both sides now, please improve me if I did something wrong :-)
$(\Rightarrow)$ Given any $x\in P(A)$ then $x\subset A$. So, by hypothesis, $x\subset B$ and so $x\in P(B)$. The counter part is easier, as cited by @Asaf, below.
Algebraic Solutions to Systems of Polynomial Equations Given a system of rational polynomials in some number of variables with at least one real solution, I want to prove that there exists a solution that is a tuple of algebraic numbers. I feel like this should be easy to prove, but I can't determine how to. Could anyone give me any help? I have spent a while thinking about this problem, and I can't think of any cases where it should be false. However I have absolutely no idea how to begin to show that it is true, and I can't stop thinking about it. Are there any simple properties of the algebraic numbers that would imply this? I don't want someone to prove it for me, I just need someone to point me in the direction of a proof, or show me how to find a counterexample if it is actually false (which would be very surprising and somewhat enlightening). If anyone knows anything about this problem I would be thankful if they could give me a little bit of help.
Here's a thought. Let's look at the simplest non-trivial case. Let $P(x,y)$ be a polynomial in two variables with rational (equivalently, for our purposes, integer) coefficients, and a real zero. If that zero is isolated, then $P$ is never negative (or never positive) in some neighborhood of that zero, so the graph of $z=P(x,y)$ is tangent to the plane $z=0$, so the partial derivatives $P_x$ and $P_y$ vanish at the zero, so if you eliminate $x$ from the partials (by, say, taking the resultant) you get a one-variable polynomial that vanishes at the $y$-value, so the $y$-value must be algebraic, so the $x$-value must be algebraic. If the zero is not isolated, then $P$ vanishes at some nearby point with at least one algebraic (indeed, rational) coordinate, but that point must then have both coordinates algebraic. Looking at the general case, many polynomials in many variables, you ought to be able to use resultants to get down to a single polynomial, and then do an induction on the number of variables --- we've already done the base case.
Edge coloring in a graph How do I color edges in a graph? I actually want to ask you specifically about one method that I've heard about - to find a dual (?) graph and color its vertices. What is the dual graph here? Is it really the dual graph, or maybe something different? If so, what is this? The graph I'm talking about has G* sign.
One method of finding an edge colouring of a graph is to find a vertex colouring of its line graph. The line graph is formed by placing a vertex for every edge in the original graph, and connecting them with edges if the edges of the original graph share a vertex. By finding a vertex colouring of the line graph we obtain a colour for each edge, and if two edges of the original graph share an endpoint they will be connected in the line graph and so have different colours in our colouring. From this we can see that a vertex colouring of the line graph gives an edge colouring of our initial graph.
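As an illustrative sketch (assuming the networkx library is available; the greedy colouring is a heuristic, so it may use more colours than the chromatic index requires):

```python
import networkx as nx

# Edge-colour a graph by vertex-colouring its line graph.
G = nx.petersen_graph()                      # any example graph
LG = nx.line_graph(G)                        # vertices of LG are the edges of G
colouring = nx.coloring.greedy_color(LG, strategy="largest_first")

# colouring maps each edge (u, v) of G to a colour index; adjacent edges differ.
for (u, v), colour in sorted(colouring.items()):
    print(f"edge ({u}, {v}) -> colour {colour}")
print("colours used:", len(set(colouring.values())))
```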
Prove that $(\sup\{y\in\Bbb R^{\ge0}:y^2\le x\})^2=x$ This question is a simple one from foundations of analysis, regarding the definition of the square root function. We begin by defining $$\sqrt x:=\sup\{y\in\Bbb R^{\ge0}:y^2\le x\}:=\sup S(x)$$ for $x\ge0$, and now we wish to show that it satisfies its defining property, namely $\sqrt x^2=x$. By dividing into cases $x\le1$ (where $1$ is an upper bound for $S(x)$) and $x\ge1$ (where $x$ is an upper bound for $S(x)$), we know $S(x)$ is upper bounded and hence the function is well-defined for all $x\ge0$. It follows from the intermediate value theorem applied to $f(y)=y^2$ on $[0,\max(1,x)]$ that $\sqrt x^2=x$, if we can prove that $f(y)$ is continuous, but suffice it to say that I would like to prove the IVT later in generality, when I will have the definition $|x|=\sqrt{xx^*}$ (which is used in the definition of continuity), so I hit a circularity problem. Thus I need a "bootstrap" version of the IVT for this particular case, i.e. I can't just invoke this theorem. What is the cleanest way to get to the goal here? Assume I don't have any theorems to work from except basic algebra.
Since you have the l.u.b. axiom, you can use that, for any two bounded sets $A,B$ of nonnegative reals, we have $$(\sup A) \cdot (\sup B)=\sup( \{a\cdot b:a \in A,b \in B\}).$$ Applying this to $A=B=S(x)$ we want to find the sup of the set of products $a\cdot b$ where $a^2\le x$ and $b^2 \le x.$ First note any such product is at most $x$: without loss assume $a \le b$, then we have $ab \le b^2 \le x.$ This much shows $\sqrt{x}\cdot \sqrt{x} \le x$ for your definition of $\sqrt{x}$ as a sup. Second, since we may use any (independent) choices of $a,b \in S(x)$ we may in particular take them both equal, and obtain the products $t\cdot t=t^2$ which, given the definition of $S(x)$, have supremum $x$, which shows that $\sqrt{x}\cdot \sqrt{x} \ge x$.
Comparing Areas under Curves I remembered back in high school AP Calculus class, we're taught that for a series: $$\int^\infty_1\frac{1}{x^n}dx:n\in\mathbb{R}_{\geq2}\implies\text{The integral converges.}$$ Now, let's compare $$\int^\infty_1\frac{1}{x^2}dx\text{ and }\int^\infty_1\frac{1}{x^3}dx\text{.}$$ Of course, the second integral converges "faster" since it's cubed, and the area under the curve would be smaller in value than the first integral. This is what's bothering me: I found out that $$\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx<\int^\infty_{1/2}\frac{1}{x^4}dx$$ Can someone explain to me when is this happening, and how can one prove that the fact this is right? Thanks!
The reason that $$\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx$$ is that although $$\int^\infty_1\frac{1}{x^2}dx>\int^\infty_1\frac{1}{x^3}dx,$$ we also have $$\int^1_{1/2}\frac{1}{x^2}dx<\int^1_{1/2}\frac{1}{x^3}dx.$$ The excess of the first integral on $[1,\infty)$ is exactly compensated on $[1/2,1]$, so that $$\int^\infty_1\frac{1}{x^2}dx+\int^1_{1/2}\frac{1}{x^2}dx=\int^\infty_1\frac{1}{x^3}dx+\int^1_{1/2}\frac{1}{x^3}dx,$$ which means that $$\int^\infty_{1/2}\frac{1}{x^2}dx=\int^\infty_{1/2}\frac{1}{x^3}dx.$$
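To make this concrete, here are the explicit values (a routine check that was not part of the original answer): $$\int^\infty_{1/2}\frac{dx}{x^2}=\Big[-\frac1x\Big]_{1/2}^\infty=2,\qquad \int^\infty_{1/2}\frac{dx}{x^3}=\Big[-\frac1{2x^2}\Big]_{1/2}^\infty=2,\qquad \int^\infty_{1/2}\frac{dx}{x^4}=\Big[-\frac1{3x^3}\Big]_{1/2}^\infty=\frac83,$$ and over the two pieces: $\int_1^\infty x^{-2}\,dx=1$ and $\int_{1/2}^1 x^{-2}\,dx=1$, while $\int_1^\infty x^{-3}\,dx=\frac12$ and $\int_{1/2}^1 x^{-3}\,dx=\frac32$, so both splits sum to $2$.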
Notation of random variables I am really confused about capitalization of variable names in statistics. When should a random variable be presented by uppercase letter, and when lower case? For a probability $P(X \leq x)$, what do $x$ and $X$ mean here?
You need to dissociate $x$ from $X$ in your mind—sometimes it matters that they are "the same letter" but in general this is not the case. They are two different characters and they mean two different things, and just because they have the same name when read out loud doesn't mean anything. By convention, a lot of the time we give random variables names which are capital letters from around the end of the alphabet. That doesn't have to be the case—it's arbitrary—but it's a convention. So just as an example here, let's let $X$ be the random variable which represents the outcome of a single roll of a die, so that $X$ takes on values in $\{1,2,3,4,5,6\}$. Now I believe you understand what would be meant by something like $P(X\leq 2)$: it's the probability that the die comes up either a 1 or a 2. Similarly, we could evaluate numbers for $P(X\leq 4)$, $P(X\leq 6)$, $P(X\leq \pi)$, $P(X\leq 10000)$, $P(X\leq -230)$ or $P(X\leq \text{any real number that you can think of})$. Another way to say this is that $P(X\leq\text{[blank]})$ is a function of a real variable: we can put any number into [blank] that we want and we end up with a unique number. Now a very common symbol for denoting a real variable is $x$, so we can write this function as $P(X\leq x)$. In this expression, $X$ is fixed, and $x$ is allowed to vary over all real numbers. It's not super significant that $x$ and $X$ are the same letter here. We can similarly write $P(X\leq y)$ and this would be the same function. Where it really starts to come in handy that $x$ and $X$ are the same letter is when you are dealing with things like joint distributions where you have more than one random variable, and you are interested in the probability of for instance $P(X\leq \text{[some number] and } Y\leq\text{[some other number]})$ which can be written more succinctly as $P(X\leq x,Y\leq y)$. Then, just to account for the fact that it's hard to keep track of a lot of symbols at the same time, it's convenient that $x$ corresponds to $X$ in an obvious way. By the way, for a random variable $X$, the function $P(X\leq x)$ is a very important function. It is called the cumulative distribution function and is usually denoted by $F_X$, so that $$F_X(x)=P(X\leq x)$$
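As a small illustration of the die example above (a sketch assuming nothing beyond the standard library), here is $F_X$ as an honest function of the real variable $x$:

```python
from fractions import Fraction

# CDF of a fair die: F_X(x) = P(X <= x), a function of the real variable x.
def F_X(x):
    return Fraction(sum(1 for face in range(1, 7) if face <= x), 6)

for x in (0.5, 2, 3.7, 6, 100):
    print(f"P(X <= {x}) = {F_X(x)}")
```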
Complex integration: $\int _\gamma \frac{1}{z}dz=\log (\gamma (b))-\log(\gamma (a))?$ Let $\gamma$ be a closed path defined on $[a,b]$ with image in the complex plane minus the upper imaginary axis (so $0$ isn't in this set). Then $\frac{1}{z}$ has an antiderivative there and it is $\log z$. Therefore $\int _\gamma \frac{1}{z}dz=\log (\gamma (b))-\log(\gamma (a))=0$ because it is a closed path. Now let $\psi(t)=e^{it}+3$, $0\leq t\leq 2\pi$. Then $\psi'(t)=ie^{it}, 0\leq t\leq 2\pi$. So $$\int _\psi\frac{1}{z}dz=\int _0^{2\pi}\frac{ie^{it}}{e^{it}}dt=2\pi i$$ but $\psi$ is a closed path so there's something wrong. What's going on here?
The denominator in the second expression should be $e^{it}+3$ instead of $e^{it}$.
Find maximum and minimum of $f(x,y) = x^2 y^2 - 2x - 2y$ in $0 \leq x \leq y \leq 5$. Find maximum and minimum of $f(x,y) = x^2 y^2 - 2x - 2y$ in $0 \leq x \leq y \leq 5$. So first we need to check inside the domain, I got only one point $A(1,1)$ where $f(1,1) = -3$. and after further checking it is a saddle point. Now we want to check on the edges, we have 3 edges: $(1) x=y, 0\leq x\leq y \leq 5$ and $(2) y=5, 0 \leq x \leq 5$ and $(3) x=0, 0\leq y \leq 5$. So I started with each of them, using Lagrange. I started with the first and got: $l(x,y,\lambda) = x^2y^2 - 2x - 2y + \lambda (x-y)$. $l_x(x,y) = 2xy^2 - 2 + \lambda = 0$ $l_y(x,y) = 2x^2y - 2 - \lambda = 0 $ $x-y = 0 \rightarrow x=y$. But then what to do ? I always get $x,y = \sqrt[3]{1+ 0.5\lambda}$, which gets me nowhere. Any help would be appreciated
Well, we have $f(x,y)=x^2y^2-2x-2y$ considered on the region $0\le x\le y\le 5$ (the green triangular area in the original figure). So $f_x=2xy^2-2,~~f_y=2x^2y-2$ and solving $f_x=f_y=0$ gives us the critical point $x=y=1$, which is on the border of the area. Now think about the border: $$y=5,~~ 0\le x\le 5 \;\;\;\; x=0,~~0\le y\le 5 \;\;\;\; y=x,~~ 0\le x\le 5$$ If $y=5,~~ 0\le x\le 5$, then $$f(x,y)\to f(x)=25x^2-2x-10$$ and it is easy for you to find the critical points, which are perhaps the best candidates on this border. If $x=0,~~0\le y\le 5$, then $$f(x,y)\to f(y)=-2y$$ and there is a similar way of finding such points on this border.
Only 'atomic' vectors as part of the base of a vector space? Given a vector subspace $U_1=${$\left(\begin{array}{c} \lambda+µ \\ \lambda \\ µ \end{array}\right)\in R^3$: $\lambda,µ \in R$ } Determine a possible base of this vector subspace. As far as I know, the base of a vector space is a number of vectors from which every vector in the given vector (sub-)space can be constructed. My suggestion for a base of the given vectorspace would be: $$\left(\begin{array}{c} \lambda \\ 0 \\ 0 \end{array}\right)+ \left(\begin{array}{c} µ \\ 0 \\ 0 \end{array}\right)+ \left(\begin{array}{c} 0 \\ \lambda \\ 0 \end{array}\right)+ \left(\begin{array}{c} 0 \\ 0 \\ µ \end{array}\right) $$ None of these vectors can be written as a linear combination of the other vectors. What also came to my mind as a possible solution was: $$\left(\begin{array}{c} \lambda+µ \\ 0 \\ 0 \end{array}\right)+ \left(\begin{array}{c} 0 \\ \lambda \\ 0 \end{array}\right)+ \left(\begin{array}{c} 0 \\ 0 \\ µ \end{array}\right) $$ Which - if any - of these two is a valid base for the given vectorspace? Is the second one invalid, because it can be written like the first one?
The first thing you need to know is that a subspace's dimension cannot exceed the containing space's dimension. Since the number of vectors constituting a base is equal to the dimension, your first suggestion is wrong, as it suggests that the subspace is of dimension 4 in $\mathbb{R}^3$, which is only of dimension 3. Then, if a subspace's dimension is equal to the dimension of the containing space, they are equal. This means that if your second suggestion is correct, the subspace $U_1$ is equal to $\mathbb{R}^3$, which is also false (take for instance the vector $(1,0,0)$) As a general rule, you need to factor the scalars appearing in the definition (here $\lambda$ and $\mu$), like Listing suggested, and the basis will naturally appear.
Limits and exponents and e exponent form So I know that $\underset{n\rightarrow \infty}{\text{lim}}\left(1+\frac {1}{n}\right)^n=e$ and that we're not allowed to see it as $1^\infty$ because that'd be incorrect. Why is it then that we can do the same thing with (for example): $$\lim_{n\rightarrow \infty} \left(1+\sin\left(\frac {1}{n}\right)\right)^{n\cos\left(\frac {1}{n}\right)}= \lim_{n\rightarrow \infty} \left(\left(1+\sin\left(\frac {1}{n}\right)\right)^\frac {1}{\sin\left(\frac {1}{n}\right)} \right)^{n\cdot\cos\left(\frac {1}{n}\right)\sin\left(\frac{1}{n}\right)}$$ What I mean by that is that in the example of $e$ we can't say it's $\lim \left(1+\frac {1}{n}\right)^{\lim (n)}=1^\infty$, while in the second example that's exactly what we do (we say that the limit of the base is $e$ while the limit of the exponent is $1$, which is why the entire expression is equal to $e$). Can anyone explain to me the difference between the two?
In fact there's no difference between the two examples; indeed, if you have a function $h$ such that $h(n)\to\infty$ then $$\lim_{n\to\infty}\left(1+\frac{1}{h(n)}\right)^{h(n)}=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{n}=e$$ and of course if we have another function $f$ such that $f(n)\to a\in\mathbb R$ then $$\lim_{n\to\infty}\left(\left(1+\frac{1}{h(n)}\right)^{h(n)}\right)^{f(n)}=e^a.$$
How to show that right triangle is intersection of two rectangles in Cartesian coordinates? I am trying to do the following. Given the triangle $$T:=\left\{(x,y)\mid 0\leq x\leq h,0\leq y\leq k,\frac{x}{h}+\frac{y}{k}\leq1\right\}$$ find two rectangles $R$ and $S$ such that $R\cap S=T$, $\partial R\cap T$ is the two legs of $T$, and $\partial S\cap T$ is the hypotenuse of $T$ union the vertex of $T$. The rectangle $R$ should be of smallest area. For example, for $h=4$ and $k=3$, Clearly, $$R=\left\{(x,y)\mid 0\leq x\leq h,0\leq y\leq k\right\}.$$ I am having trouble coming up with a description of $S$ to show that $R\cap S=T$. I have that $S$ has side lengths $$\sqrt{h^2+k^2}\qquad\textrm{and}\qquad\frac{hk}{\sqrt{h^2+k^2}}$$ and corners at $$(0,k)\qquad(h,0)\qquad\left(\frac{h^3}{h^2+k^2},-\frac{h^2k}{h^2+k^2}\right)\qquad\left(-\frac{k^3}{h^2+k^2},\frac{hk^2}{h^2+k^2}\right).$$ How do I describe $S$ in a way that I can show $R\cap S=T$?
Note that $\frac{x}{h}+\frac{y}{k}=1$ is equivalent to $y=-\frac{k}{h}x+k$. Define $$H_1=\left\{(x,y)\mid y\leq-\frac{k}{h}x+k\right\}\\ H_2=\left\{(x,y)\mid y\leq\frac{h}{k}x+k\right\}\\ H_3=\left\{(x,y)\mid y\geq\frac{h}{k}x-\frac{h^2}{k}\right\}\\ H_4=\left\{(x,y)\mid y\geq-\frac{k}{h}x\right\}.$$ Show that $H_1\cap H_2\cap H_3\cap H_4$ is such a rectangle that fits your condition for $S$, and that $$H_1\cap R=T\\ H_2\cap R=R\\ H_3\cap R=R\\ H_4\cap R=R;$$ thus, $H_1\cap H_2\cap H_3\cap H_4\cap R=T$. (Note that $\partial H_1\parallel\partial H_4$, $\partial H_2\parallel\partial H_3$, and $\partial H_1\perp\partial H_2$, according to the slope-intercept form of the equations given for the definitions of $H_1$, $H_2$, $H_3$, and $H_4$.)
Testing polynomial equivalence Suppose I have two polynomials, P(x) and Q(x), of the same degree and with the same leading coefficient. How can I test if the two are equivalent in the sense that there exists some $k$ with $P(x+k)=Q(x)$? P and Q are in $\mathbb{Z}[x].$
The condition that the polynomials lie in $\mathbb{Z}[x]$ isn't required. Suppose we have two polynomials $P(x)$ and $Q(x)$, whose coefficients of $x^i$ are $P_i$ and $Q_i$ respectively. If they are equivalent in the sense that $P(x+k) = Q(x)$, then: (1) their degree must be the same, which we denote as $n$; (2) their leading coefficient must be the same, which we denote as $a_n=P_n=Q_n$; (3) $P(x+k) - a_n(x+k)^n = Q(x) - a_n x^n$. By considering the coefficient of $x^{n-1}$ in the last equation, this tells us that $P_{n-1} + nk a_n = Q_{n-1}$. This allows you to calculate $k$ in terms of the various knowns, in which case you can just substitute in and check whether we have equivalence. We can simply check that $Q(i) = P(i+k)$ for $n+1$ distinct values of $i$, which tells us that they agree as degree $n$ polynomials.
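A small sketch of this test in code (the coefficient-list convention and helper names are mine, not from the answer; it assumes degree $n\ge1$ and uses exact rational arithmetic to avoid rounding issues):

```python
from fractions import Fraction

def shift_equivalent(P, Q):
    """P, Q are coefficient lists [p_n, ..., p_1, p_0], highest degree first, degree >= 1.
    Return k with P(x + k) = Q(x) if such a rational k exists, else None."""
    if len(P) != len(Q) or P[0] != Q[0]:
        return None                          # degrees or leading coefficients differ
    n, a_n = len(P) - 1, Fraction(P[0])
    k = Fraction(Q[1] - P[1]) / (n * a_n)    # from comparing x^(n-1) coefficients
    # check P(x + k) = Q(x) at n + 1 points, enough for degree-n polynomials
    def evaluate(C, x):
        return sum(Fraction(c) * x ** (len(C) - 1 - i) for i, c in enumerate(C))
    ok = all(evaluate(P, i + k) == evaluate(Q, Fraction(i)) for i in range(n + 1))
    return k if ok else None

# Q(x) = P(x + 2) for P(x) = x^2 + 1, i.e. Q(x) = x^2 + 4x + 5
print(shift_equivalent([1, 0, 1], [1, 4, 5]))   # 2
print(shift_equivalent([1, 0, 1], [1, 4, 6]))   # None
```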
Solving an equation of the type $t \sin (2t)=2$ where $0<t<3\pi$. Need to solve: How many solutions are there to the equation $t\sin (2t)=2$ where $0<t<3 \pi$? I am currently studying Calc 3 and came across this and realized I don't have a clue as to how to get started on it.
As an alternate approach, you could rewrite the equation as $$\frac{1}{2}\sin{2t}=\frac{1}{t}$$ and then observe that since $\frac{1}{t}\le\frac{1}{2}$ for $t\ge2$, the graph of $y=\frac{1}{t}$ will intersect the graph of $y=\frac{1}{2}\sin{2t}$ twice in each interval $[n\pi,(n+1)\pi]$ for $n\ge1$.
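A quick numerical cross-check of this count (illustrative only, not a proof): bracket roots of $f(t)=t\sin(2t)-2$ by sign changes on a fine grid; the count it reports is consistent with the two intersections per interval described above.

```python
import numpy as np

# Count sign changes of f(t) = t*sin(2t) - 2 on a fine grid over (0, 3*pi).
# Each sign change brackets a root; the grid is fine enough for this function.
f = lambda t: t * np.sin(2 * t) - 2
t = np.linspace(1e-6, 3 * np.pi, 200_000)
vals = f(t)
crossings = np.sum(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)
print(crossings)   # number of solutions found in (0, 3*pi)
```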
calculate the derivative using fundamental theorem of calculus This is a GRE prep question: What's the derivative of $f(x)=\int_x^0 \frac{\cos xt}{t}\mathrm{d}t$? The answer is $\frac{1}{x}[1-2\cos x^2]$. I guess this has something to do with the first fundamental theorem of calculus but I'm not sure how to use that to solve this problem. Thanks in advance.
The integral does not exist; consequently it is not differentiable. The integral does not exist because for each $x$ ($x>0$; the case of negative $x$ is dealt with similarly) there is some $\varepsilon > 0$ such that $\cos(xt)> 1/2$ for all $t$ in the $t$-range $0< t \le \varepsilon$. If one splits the integral $\int_x^0 = \int_\varepsilon^0 + \int_x^\varepsilon$, then the second integral exists, because the integrand is continuous in the $t$-range $\varepsilon \le t\le x$. The first is by definition $\lim_{\eta\rightarrow 0+}\int_\varepsilon^\eta$, whose absolute value exceeds $(\ln \varepsilon - \ln \eta)/2$, which blows up as $\eta\to 0^+$. So the limit does not exist.
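For completeness, the answer the GRE key presumably has in mind comes from differentiating formally under the integral sign (Leibniz's rule), ignoring the divergence at $t=0$ discussed above: $$\frac{d}{dx}\int_x^0 \frac{\cos xt}{t}\,dt=-\frac{\cos x^2}{x}+\int_x^0(-\sin xt)\,dt=-\frac{\cos x^2}{x}+\left[\frac{\cos xt}{x}\right]_{t=x}^{t=0}=\frac{1}{x}\left(1-2\cos x^2\right),$$ which matches the quoted answer, but the manipulation is only formal here since the integral itself diverges.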
Algebra and Substitution in Quadratic Form―Einstein Summation Notation Schaum's Outline to Tensor Calculus ― chapter 1, example 1.5 ――― If $y_i = a_{ij}x_j$, express the quadratic form $Q = g_{ij}y_iy_j$ in terms of the $x$-variables. Solution: I can't substitute $y_i$ directly because it contains $j$ and there's already a $j$ in the given quadratic form. So $y_i = a_{i \huge{j}}x_{\huge{j}} = a_{i \huge{r}}x_{\huge{r}}$. This implies $ y_{\huge{j}} = a_{{\huge{j}}r}x_r.$ But I already used $r$ (in the sentence before the previous) so need to replace $r$ ――― $ y_j = a_{j \huge{r}}x_{\huge{r}} = a_{j \huge{s}}x_{\huge{s}}.$ Therefore, by substitution, $Q = g_{ij}(a_{ir}x_r)(a_{js}x_s)$ $$ = g_{ij}a_{ir}a_{js}x_rx_s. \tag{1}$$ $$= h_{rs}x_rx_s, \text{ where } h_{rs} = g_{ij}a_{ir}a_{js}. \tag{2}$$ Equation ($1$): Why can they commute $a_{js}$ and $x_r$? How are any of the terms commutative? Equation ($2$): How does $rs$ get to be the subscript of $h$? Why did they define $h_{rs}$?
In equation (1), $a_{js}$ and $x_r$ commute because these are just ordinary (real or complex) numbers under standard multiplication, which is commutative. In equation (2), $h_{rs}$ is defined mainly to save space; the $h_{rs}$ are the coefficients of the quadratic form in the $x$-variables, and the subscript $rs$ simply records which product $x_rx_s$ each coefficient multiplies.
A simple inequality in calculus? I have to solve this inequality: $$\left(\left[\dfrac{1}{s}\right] + 1 \right) s < 1,$$ where $ 0 < s < 1 $. I guess that $s$ must be in this range: $\left(0,\dfrac{1}{2}\right]$.But I do not know if my guess is true. If so, how I can prove it? Thank you.
try $$\left(\left[\dfrac{1}{s}\right] \right) < \frac{1}{s}-1$$
p-adic Eisenstein series I'm trying to understand the basic properties of the p-adic Eisenstein series. Let $p$ be a prime number. Define the group $X = \begin{cases} \mathbb{Z}_p\times \mathbb{Z}/(p-1)\mathbb{Z} & \mbox{if }p \neq2 \\ \mathbb{Z}_2 & \mbox{if } p=2 \end{cases}$ where $\mathbb{Z}_p$ is the ring of $p$-adic integers. If $k\in X$ and $d$ is a positive integer then is it true that $d^{k-1}\in \mathbb{Z}_p$? If so, why? Thank you for your help.
Like your previous question, there's a slight philosophical issue: the question should not be "is $d^{k-1} \in \mathbb{Z}_p$", but "when and how is $d^{k-1}$ defined"? It's far from obvious what the definition should be, but once you know what the conventional definition is, the fact that it gives you something in $\mathbb{Z}_p$ whenever it is defined is totally obvious :-) So we have to do something to define $d^{x}$ for $x \in X$, and it will only work if $d \in \mathbb{Z}_p^\times$. The usual definition is as follows. Let me assume $p \ne 2$ -- you can work out the necessary modifications for $p = 2$ yourself. Suppose $x = (x_1, x_2)$ where $x_1 \in \mathbb{Z}_p$ and $x_2 \in \mathbb{Z} / (p-1)\mathbb{Z}$. Write $d = \langle d \rangle \tau(d)$, where $\tau(d)$ is a $(p-1)$-st root of unity and $\langle d \rangle$ is in $1 + p\mathbb{Z}_p$ (this can always be done, in a unique way, for any $d \in \mathbb{Z}_p^\times$). Then we define $$ d^x = \langle d \rangle^{x_1} \tau(d)^{x_2} $$ which is well-defined (using your previous question) and lies in $\mathbb{Z}_p^\times$. It's easy to check that this agrees with the "natural" definition of $d^x$ when $x \in \mathbb{Z}$ (which lives inside $X$ in the obvious way). In fact $X$ is exactly the set of group homomorphisms $\mathbb{Z}_p^\times \to \mathbb{Z}_p^\times$. If $k \in X$ we can now define $d^{k-1} = d^k / d$, where $d^k$ is defined as above. There's no sensible definition of $d^x$ for $x \in X$ and $d \in p\mathbb{Z}$, which is why the definition of the coefficient of $q^n$ in the $p$-adic Eisenstein series involves a sum over only those divisors of $n$ coprime to $p$.
Partial fraction integration $\int \frac{dx}{(x-1)^2 (x-2)^2}$ $$\int \frac{dx}{(x-1)^2 (x-2)^2} = \int \frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{C}{x-2}+\frac{D}{(x-2)^2}\,dx$$ I use the cover up method to find that B = 1 and so is C. From here I know that the cover up method won't really work and I have to plug in values for x but that won't really work either because I have two unknowns. How do I use the coverup method?
To keep in line with the processes you are learning, we have: $$\frac{1}{(x-1)^2 (x-2)^2} = \frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{C}{x-2}+\frac{D}{(x-2)^2}$$ So we want to find $A, B, C, D$ given $$A(x-1)(x-2)^2 + B(x-2)^2 + C(x-1)^2(x-2) + D(x-1)^2 = 1$$ As you found, when $x = 1$, we have $B = 1$, and when $x = 2$, we have $D = 1$. Now, we need to solve for the other two unknowns by creating a system of two equations and two unknowns: $A, C$, given our known values of $B, D = 1$. Let's pick an easy values for $x$: $x = 0$, $x = 3$ $$A(x-1)(x-2)^2 + B(x-2)^2 + C(x-1)^2(x-2) + D(x-1)^2 = 1\quad (x = 0) \implies$$ $$A(-1)((-2)^2) + B\cdot (-2)^2 + C\cdot (1)\cdot(-2) + D\cdot (-1)^2 = 1$$ $$\iff - 4A + 4B - 2C + D = 1 $$ $$B = D = 1 \implies -4A + 4 - 2C + 1 = 1 \iff 4A + 2C = 4\tag{x = 0}$$ Similarly, $x = 3 \implies $ $2A + 1 + 4C + 4 = 1 \iff 2A + 4C = -4 \iff A + 2C = -2\tag{x = 3}$ Now we have a system of two equations and two unknowns and can solve for A, C. And solving this way, gives of $A = 2, C= -2$ Now we have $$\int\frac{dx}{(x-1)^2 (x-2)^2} = \int \frac{2}{x-1}+\frac{1}{(x-1)^2}+\frac{-2}{x-2}+\frac{1}{(x-2)^2}\,dx$$
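If you want to double-check the coefficients, sympy's partial-fraction routine (assuming sympy is available) reproduces them:

```python
import sympy as sp

x = sp.symbols('x')
expr = 1 / ((x - 1)**2 * (x - 2)**2)

# Partial fractions: should match A = 2, B = 1, C = -2, D = 1 found above
# (sympy may print the terms in a different order).
print(sp.apart(expr, x))
print(sp.integrate(expr, x))   # the antiderivative, for comparison
```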
How can I calculate this determinant? Please can you give me some hints to deal with this: Let $a_1, a_2, ..., a_n \in \mathbb{R}$. Calculate $\det A$ where $A=(a_{ij})_{1\leqslant i,j\leqslant n}$ and $$a_{ij}=\begin{cases} a_i, & \text{for } i+j=n+1,\\ 0, & \text{otherwise.}\end{cases}$$
Hint: The matrix looks like the following (for $n=4$; it gives the idea though): $$ \begin{bmatrix} 0 & 0 & 0 & a_1\\ 0 & 0 & a_2 & 0\\ 0 & a_3 & 0 & 0\\ a_4 & 0 & 0 & 0 \end{bmatrix} $$ What happens if you do a cofactor expansion in the first column? Try using induction.
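If you want to see the pattern before writing the induction, a short symbolic check (assuming sympy is available) is consistent with $\det A = (-1)^{n(n-1)/2}\,a_1a_2\cdots a_n$, the sign being that of the order-reversing permutation:

```python
import sympy as sp

# Anti-diagonal matrix with a_i in row i (1-indexed) at column n + 1 - i.
for n in range(1, 7):
    a = sp.symbols(f'a1:{n + 1}')          # symbols a1, ..., an
    A = sp.Matrix(n, n, lambda i, j: a[i] if i + j == n - 1 else 0)
    print(n, sp.factor(A.det()))
```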
Sequence of natural numbers Numbers $1,2,...,n$ are written in sequence. It's allowed to exchange any two elements. Is it possible to return to the starting position after an odd number of movements? I know that it necessarily takes an even number of movements, but I can't explain why!
Basically, if you make an odd number of switches, then at least one of the numbers has only been moved once (unless you repeat the same switch over and over, which is an easy case to explain). But if you start in some configuration and move a number only once and want to return to the start, you must move again. Try induction -- an easy base case is $n=2$. (More formally: each exchange is a transposition, and every transposition flips the sign of the permutation, so after an odd number of exchanges the arrangement is an odd permutation and cannot be the identity.)
Solution to $y'' - 2y = 2\tan^3x$ I'm struggling with this nonhomogeneous second order differential equation $$y'' - 2y = 2\tan^3x$$ I assumed that the form of the solution would be $A\tan^3x$ where A was some constant, but this results in a mess when solving. The back of the book reports that the solution is simply $y(x) = \tan x$. Can someone explain why they chose the form $A\tan x$ instead of $A\tan^3x$? Thanks in advance.
Have you learned variation of parameters? This is a method, rather than lucky guessing :) http://en.wikipedia.org/wiki/Variation_of_parameters
Solving for the integrating factor in a Linear Equation with Variable Coefficients So I am studying Diff Eq and I'm looking through the following example. Solve the following equation: $(dy/dt)+2y=3 \rightarrow μ(t)*(dy/dt)+2*μ(t)*y=3*μ(t) \rightarrow (dμ(t)/dt)=2*μ(t) \rightarrow (dμ(t)/dt)/μ(t)=2 \rightarrow (d/dt)\ln|μ(t)|=2 \rightarrow \ln|μ(t)|=2*t+C \rightarrow μ(t)=c*e^{2*t} \rightarrow μ(t)=e^{2*t}$ So I have two questions regarding this solved problem. It appears that the absolute value sign is just tossed out of the problem without saying that as a result $c \ge 0$, is this not correct and if not why? Secondly and more importantly, I was confused by the assumption that $c=1$. Why should it be $1$ and would the answer differ if another number were selected is it just an arbitrary selection that doesn't influence the end result and just cancels out anyways?
Method 1: Calculus. We have: $y' + 2y = 3$. Let's use calculus to solve this and see why these statements are okay. We have: $$\displaystyle \frac{\dfrac{dy}{dt}}{y - \dfrac{3}{2}} = -2$$ Integrating both sides yields: $$\displaystyle \int \frac{dy}{\left(y - \dfrac{3}{2}\right)} = -2 \int dt$$ We get: $\ln\left|y - \dfrac{3}{2}\right| = -2t + c$. Let's take the exponential of both sides; we get: $$\left|y - \dfrac{3}{2}\right| = e^{-2t + c} = e^{c}e^{-2t} = c e^{-2t}$$ Do you see what happened to the constant now? Now, let's use the definition of absolute value and see why it does not matter. For $y \ge \dfrac{3}{2}$, we have: $$\left(y - \dfrac{3}{2}\right) = c e^{-2t} \rightarrow y = c e^{-2t} +\dfrac{3}{2}$$ For $y \lt \dfrac{3}{2}$, we have: $$-\left(y - \dfrac{3}{2}\right) = c e^{-2t} \rightarrow y = -c e^{-2t} + \dfrac{3}{2}$$ However, we know that $c$ is an arbitrary constant, so we can rewrite this as: $$y = c e^{-2t} + \dfrac{3}{2}$$ We could also leave it as $-c$ if we choose, but it is dangerous to keep those pesky negatives around. Note: it also helps to look at a graph of $\left|y - \dfrac{3}{2}\right|$. Now, can you use this approach and see why it is identical to the integrating factor (it is exactly the same reasoning)? For your second question: you could make $c$ be anything you want. Let it be $y = ke^{-2t} + \dfrac{3}{2}$. Take the derivative and substitute back into the ODE and see if you get $3 = 3$ (you do). If they gave you initial conditions, then it would be a specific value, so the authors are being a little sloppy. They should have said something like $y(0) = \dfrac{5}{2}$, which would lead to $c = 1$. Let's work this statement: $y = ke^{-2t} + \dfrac{3}{2}$, so $y' = -2 k e^{-2t}$. Substituting back into the original DEQ yields: $y' + 2y = -2 k e^{-2t} + 2(ke^{-2t} + \dfrac{3}{2}) = 3$, and $3 = 3$. What if we let $k = 1$? We have: $y' + 2y = -2 e^{-2t} + 2(e^{-2t} + \dfrac{3}{2}) = 3$, and $3 = 3$. So, you see that we can let $c$ be anything, unless given an IC. Method 2: Integrating Factor. Here is a step-by-step solution using the integrating factor: (1) $y' + 2 y = 3$; (2) $\mu y' + 2 \mu y = 3 \mu$; (3) $\dfrac{d}{dt}(\mu y) = \mu y' + \mu' y$; (4) choose $\mu$ so that $\mu' = 2 \mu \rightarrow \mu = e^{2t}$; (5) we have $y'+2y = 3$, so (6) $e^{2t}y' + 2e^{2t}y = 3e^{2t}$; (7) $\dfrac{d}{dt}(e^{2t}y) = 3 e^{2t}$; (8) $e^{2t} y = \dfrac{3}{2}e^{2t} + c$; thus (9) $y(t) = \dfrac{3}{2} + c e^{-2t} = c e^{-2t}+ \dfrac{3}{2}$.
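As a cross-check of both methods (a sketch assuming sympy is available), a computer algebra system gives the same general solution, and the illustrative initial condition $y(0)=5/2$ mentioned above indeed forces the constant to be $1$:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# General solution of y' + 2y = 3; should print something like Eq(y(t), C1*exp(-2*t) + 3/2)
print(sp.dsolve(sp.Eq(y(t).diff(t) + 2 * y(t), 3), y(t)))

# With the illustrative initial condition y(0) = 5/2, the constant becomes 1
print(sp.dsolve(sp.Eq(y(t).diff(t) + 2 * y(t), 3), y(t), ics={y(0): sp.Rational(5, 2)}))
```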
How can I show that $\sqrt{1+\sqrt{2+\sqrt{3+\sqrt\ldots}}}$ exists? I would like to investigate the convergence of $$\sqrt{1+\sqrt{2+\sqrt{3+\sqrt{4+\sqrt\ldots}}}}$$ Or more precisely, let $$\begin{align} a_1 & = \sqrt 1\\ a_2 & = \sqrt{1+\sqrt2}\\ a_3 & = \sqrt{1+\sqrt{2+\sqrt 3}}\\ a_4 & = \sqrt{1+\sqrt{2+\sqrt{3+\sqrt 4}}}\\ &\vdots \end{align}$$ Easy computer calculations suggest that this sequence converges rapidly to the value 1.75793275661800453265, so I handed this number to the all-seeing Google, which produced: * *OEIS A072449 * "Nested Radical Constant" from MathWorld Henceforth let us write $\sqrt{r_1 + \sqrt{r_2 + \sqrt{\cdots + \sqrt{r_n}}}}$ as $[r_1, r_2, \ldots r_n]$ for short, in the manner of continued fractions. Obviously we have $$a_n= [1,2,\ldots n] \le \underbrace{[n, n,\ldots, n]}_n$$ but as the right-hand side grows without bound (It's $O(\sqrt n)$) this is unhelpful. I thought maybe to do something like: $$a_{n^2}\le [1, \underbrace{4, 4, 4}_3, \underbrace{9, 9, 9, 9, 9}_5, \ldots, \underbrace{n^2,n^2,\ldots,n^2}_{2n-1}] $$ but I haven't been able to make it work. I would like a proof that the limit $$\lim_{n\to\infty} a_n$$ exists. The methods I know are not getting me anywhere. I originally planned to ask "and what the limit is", but OEIS says "No closed-form expression is known for this constant". The references it cites are unavailable to me at present.
For any $n\ge4$, we have $\sqrt{2n} \le n-1$. Therefore \begin{align*} a_n &\le \sqrt{1+\sqrt{2+\sqrt{\ldots+\sqrt{(n-2)+\sqrt{(n-1) + \sqrt{2n}}}}}}\\ &\le \sqrt{1+\sqrt{2+\sqrt{\ldots+\sqrt{(n-2)+\sqrt{2(n-1)}}}}}\\ &\le\ldots\\ &\le \sqrt{1+\sqrt{2+\sqrt{3+\sqrt{2(4)}}}}. \end{align*} Hence $\{a_n\}$ is a monotonic increasing sequence that is bounded above.
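For a quick numerical illustration of the convergence (not needed for the proof), one can evaluate $a_n$ from the inside out; the values stabilise around the constant $1.7579327566\ldots$ quoted in the question.

```python
import math

def nested_radical(n):
    """a_n = sqrt(1 + sqrt(2 + ... + sqrt(n))), evaluated from the inside out."""
    value = 0.0
    for k in range(n, 0, -1):
        value = math.sqrt(k + value)
    return value

for n in (5, 10, 20, 40):
    print(n, nested_radical(n))
```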
How many cones pass through a given conic section? Given a conic section in the $xy$-plane, how many cones (infinite double cone) in the surrounding 3D space intersect the $xy$-plane at that conic? Is the family continuous, with a nice parametization? At least one must exist, and I expect symmetry in the conic to give a few such cones by reflection, but are there more than that? Edit: Following Peter Smith's answer, it seems possible that a continuum of such cones exist. If that were to be the case, what is the locus of the apexes of those cones?
To take the simplest case, take the circle to be centred at $(0, 0, 0)$ in the $xy$-plane; and now take any point $(0, 0, z)$. Then plainly there is a double cone of rays which pass through $(0, 0, z)$ and some point on the circle (and this is a right circular cone). So there are continuum-many distinct such cones (i.e. as many as there are are points $(0, 0, z)$) which have the given circle as section in the $xy$-plane. [This observation generalizes, mutatis mutandis, to other sorts of conic section: you'll get continuum-many possibilities.]
What went wrong? Calculate mass given the density function Calculate the mass: $$D = \{1 \leq x^2 + y^2 \leq 4 , y \leq 0\},\quad p(x,y) = y^2.$$ So I said: $M = \iint_{D} {y^2 dxdy} = [\text{polar coordinates}] = \int_{\pi}^{2\pi}d\theta {\int_{1}^{2} {r^3\sin^2\theta\, dr}}$. But when I calculated that I got the answer $0$ which is wrong, it should be $\frac{15\pi}{8}$. Can someone please tell me what I did wrong?
You have the set-up correct, but you have incorrectly computed the integral. Let's work it out together. $$\int_{\pi}^{2\pi}d\theta {\int_{1}^{2} {r^3\sin^2\theta\, dr}} = \int_{\pi}^{2\pi} \sin^2\theta \,d\theta \int_{1}^{2} r^3\,dr = \left(\frac{2^4}{4} - \frac{1^4}{4}\right)\int_{\pi}^{2\pi} \sin^2\theta \,d\theta = \frac{15}{4}\int_{\pi}^{2\pi} \sin^2\theta \,d\theta.$$ Note that an antiderivative of $\sin^2\theta$ is $\frac{1}{2}(\theta - \sin\theta\cos\theta)$, so $$\int_{\pi}^{2\pi} \sin^2\theta \,d\theta = \frac{1}{2}\left(2\pi - \sin(2\pi)\cos(2\pi) - \pi +\sin(\pi)\cos(\pi)\right) = \frac{\pi}{2},$$ and therefore the mass is $\frac{15}{4}\cdot\frac{\pi}{2} = \frac{15\pi}{8}$.
Lang $SL_2$ two formulas for Harish transform Let $G = SL_2$ and give it the standard Iwasawa decomposition $G = ANK$. Let: $$D(a) = \alpha(a)^{1/2} - \alpha(a)^{-1/2} := \rho(a) - \rho(a)^{-1}.$$ Now, Lang defines ($SL_2$, p.69) the Harish transform of a function $f \in C_c(G,K)$ to be $$Hf(a) := \rho(a)\int_Nf(an)dn = |D(a)|\int_{A\setminus G} f(x^{-1}ax)d\dot x$$ My trouble is in understanding why the two definitions agree for $\rho (a)≠1$. In the second integral, we are integrating over $A\setminus G$, so we can write $x = nk$, whence $$f(x^{-1}ax) = f((nk)^{-1}ank) = f(n^{-1}an) $$ since $f \in C_c(G,K)$, i.e. it is invariant w.r.t. conjugation by elements in $K$. But now, I don't know how to get rid of the remaining $n^{-1}$ and get the factor before the integral.
This equality is not at all obvious. Just before that section, it was proven that $$ \int_{A\backslash G} f(x^{-1}ax)\;dx\;=\; {\alpha(a)\over |D(a)|} \int_K\int_N f(kank^{-1})\;dn\;dk $$ for arbitrary $f\in C_c(G)$. For $f$ left and right $K$-invariant, the outer integral goes away, leaving just the integral over $N$. The key point is the identity proven another page or two earlier, something like $$ \int_N f(a^{-1}nan^{-1})\,dn\;=\; {1\over |\alpha(a)^{-1}-1|}\int_N f(n)\,dn $$ which follows from multiplying out.
convergence to a generalized Euler constant and relation to the zeta series Let $0 \leq a \leq 1$ be a real number. I would like to know how to prove that the following sequence converges: $$u_n(a)=\sum_{k=1}^n k^a- n^a \left(\frac{n}{1+a}+\frac{1}{2}\right)$$ For $a=1$: $$u_n(1)=\sum\limits_{k=1}^{n} k- n \left(\frac{n}{1+1}+\frac{1}{2}\right)= \frac{n(n+1)}{2}-\frac{n(n+1)}{2}=0$$ so $u_n(1)$ converges to $0$. For $a=0$: $$u_n(0)=\sum\limits_{k=1}^{n} 1- \left(\frac{n}{1+0}+\frac{1}{2}\right) = n-n-\frac{1}{2}=-\frac{1}{2}$$ so $u_n(0)$ converges to $-1/2$. In general, the only idea I have in mind is the Cauchy integral criterion, but it does not work because $k^a$ is an increasing function. Does the proof involve the zeta series?
From this answer you have an asymptotics $$ \sum_{k=1}^n k^a = \frac{n^{a+1}}{a+1} + \frac{n^a}{2} + \frac{a n^{a-1}}{12} + O(n^{a-3}) $$ Use it to prove that $u_n(a)$ converges.
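A quick numerical check of this (illustrative only; the asymptotics above is what actually proves convergence). The columns stabilise as $n$ grows, with the constant $-1/2$ at $a=0$ and $0$ at $a=1$, matching the computations in the question.

```python
# u_n(a) = sum_{k<=n} k^a - n^a * (n/(1+a) + 1/2)
def u(n, a):
    return sum(k ** a for k in range(1, n + 1)) - n ** a * (n / (1 + a) + 0.5)

for a in (0.0, 0.3, 0.7, 1.0):
    print(a, [round(u(n, a), 6) for n in (10, 100, 1000, 10000)])
```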
Minimum and maximum of $ \sin^2(\sin x) + \cos^2(\cos x) $ I want to find the maximum and minimum value of this expression: $$ \sin^2(\sin x) + \cos^2(\cos x) $$
Your expression simplifies to $$1+\frac{\cos(2\cos x)-\cos (2\sin x)}{2}.$$ So we optimize $1+\frac{\cos u-\cos v}{2}$ under the constraint $u^2+v^2=4$. $\cos$ is an even function, so (reusing $u$ now for $|\cos x|$) we can say that we optimize $1+\frac{\cos (2u)-\cos(2\sqrt{1-u^2})}{2}$ for $u\in [0,1]$, which should be doable.
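A brute-force numerical scan (a check, not part of the analysis above) should land near the values obtained by taking $u=\pm2,\,v=0$ and $u=0,\,v=\pm2$ in the constrained problem, i.e. near $\cos^2(1)\approx0.292$ and $1+\sin^2(1)\approx1.708$, attained at $x=0,\pi$ and $x=\pm\pi/2$ respectively.

```python
import numpy as np

# Brute-force scan of sin^2(sin x) + cos^2(cos x) over one period.
x = np.linspace(0, 2 * np.pi, 2_000_001)
f = np.sin(np.sin(x)) ** 2 + np.cos(np.cos(x)) ** 2
print("min ~", f.min(), "max ~", f.max())
```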
I need a better explanation of $(\epsilon,\delta)$-definition of limit I am reading the $\epsilon$-$\delta$ definition of a limit here on Wikipedia. * *It says that $f(x)$ can be made as close as desired to $L$ by making the independent variable $x$ close enough, but not equal, to the value $c$. So this means that $f(x)$ defines $y$ or the output of the function. So when I say $f(x)$ close as desired to $L$, I actually mean the result of the calculation that has taken place and produced a $y$ close to $L$ which sits on the $y$-axis? * How close is "close enough to $c$" depends on how close one wants to make $f(x)$ to $L$. So $c$ is actually the $x$'s that I am putting into my $f$ function. So one is picking $c$'s that are $x$'s and entering them into the function, and he actually is picking those $c$'s (sorry, $x$'s) to make his result closer to $L$, which is the limit of an approaching value of $y$? * It also of course depends on which function $f$ is, and on which number $c$ is. Therefore let the positive number $\epsilon$ be how close one wishes to make $f(x)$ to $L$; OK, so now one picks a letter $\epsilon$ which means error, and that letter is the value of "how much one needs to be close to $L$". So it is actually the $y$ value, or the result of the function again, that needs to be close of the limit which is the $y$-coordinate again? * strictly one wants the distance to be less than $\epsilon$. Further, if the positive number $\delta$ is how close one will make $x$ to $c$, Er, this means $\delta=x$, or the value that will be entered into $f$? * and if the distance from $x$ to $c$ is less than $\delta$ (but not zero), then the distance from $f(x)$ to $L$ will be less than $\epsilon$. Therefore $\delta$ depends on $\epsilon$. The limit statement means that no matter how small $\epsilon$ is made, $\delta$ can be made small enough. So essentially the $\epsilon$-$\delta$ definition of the limit is the corresponding $y$, $x$ definition of the function that we use to limit it around a value? Are my conclusions wrong? I am sorry but it seams like an "Amazing Three Cup Shuffle Magic Trick" to me on how my teacher is trying to explain this to me. I always get lost to what letters mean $\epsilon$, $\delta$, $c$, $y$, and $x$, when the function has $x$ and $y$ only.
If you are a concrete or geometrical thinker you might find it easier to think in these terms. You are player $X$ and your opponent is player $Y$. Player $Y$ chooses any horizontal lines they like, symmetric about $L$, but not equal to it. You have to choose two vertical lines symmetric about $c$ - these create a rectangle, with $Y$'s lines. If $f(x)$ stays within the rectangle, you win. If you always win, whatever $Y$ does, you have a limit. If $Y$ has a winning strategy you don't.
Is Dirichlet function Riemann integrable? "Dirichlet function" is meant to be the characteristic function of rational numbers on $[a,b]\subset\mathbb{R}$. On one hand, a function on $[a,b]$ is Riemann integrable if and only if it is bounded and continuous almost everywhere, which the Dirichlet function satisfies. On the other hand, the upper integral of Dirichlet function is $b-a$, while the lower integral is $0$. They don't match, so that the function is not Riemann integrable. I feel confused about which explanation I should choose...
The Dirichlet function $f : [0, 1] → \mathbb R$ is defined by $$f(x) = \begin{cases} 1, & x ∈ \mathbb Q \\ 0, & x ∈ [0, 1] - \mathbb Q \end{cases}$$ That is, $f$ is one at every rational number and zero at every irrational number. This function is not Riemann integrable. If $P = \{I_1, I_2, . . . , I_n\}$ is a partition of $[0, 1]$, then $M_k = \sup_{I_k} f = 1$ and $m_k = \inf_{I_k} f = 0$, since every interval of non-zero length contains both rational and irrational numbers. It follows that $U(f; P) = 1, L(f; P) = 0$ for every partition $P$ of $[0, 1]$, so $U(f) = 1$ and $L(f) = 0$ are not equal. The Dirichlet function is discontinuous at every point of $[0, 1]$, so it is in fact not "continuous almost everywhere"; the moral of this example is that the Riemann integral of a highly discontinuous function need not exist.
Evaluating $\int_0^1 \int_0^{\sqrt{1-x^2}}e^{-(x^2+y^2)} \, dy \, dx\ $ using polar coordinates Use polar coordinates to evaluate $\int_0^1 \int_0^{\sqrt{1-x^2}}e^{-(x^2+y^2)} \, dy \, dx\ $ I understand that we need to change $x^2+y^2$ to $r^2$ and then we get $\int_0^1 \int_0^{\sqrt{1-x^2}} e^{-(r^2)} \, dy \, dx\ $. Then I know I need to change the bounds with respect to $dy$ but I am unsure on how to do that and further. Please help me.
Hints: $$\bullet\;\;\;x=r\cos\theta\;,\;\;y=r\sin\theta\;,\;0\le\theta\le \frac\pi2\;\text{(why?). The Jacobian is}\;\;|J|=r$$ So the integral is $$\int\limits_0^{\pi/2}\int\limits_0^1 re^{-r^2}\,dr\,d\theta$$
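Carrying the hint through (this completion is not part of the original hint): the inner integral is elementary, for instance with the substitution $u=r^2$, so $$\int_0^{\pi/2}\!\int_0^1 re^{-r^2}\,dr\,d\theta=\frac\pi2\left[-\frac{e^{-r^2}}{2}\right]_0^1=\frac\pi4\left(1-e^{-1}\right).$$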
The preimage of continuous function on a closed set is closed. My proof is very different from my reference, hence I am wondering is I got this right? Apparently, $F$ is continuous, and the identity matrix is closed. Now we want to show that the preimage of continuous function on closed set is closed. Let $D$ be a closed set, Consider a sequence $x_n \to x_0$ in which $x_n \in f^{-1}(D)$, and we will show that $x_0 \in f^{-1}(D)$. Since $f$ is continuous, we have a convergent sequence $$\lim_{n\to \infty} f(x_n) = f(x_0) = y.$$ But we know $y$ is in the range, hence, $x_0$ is in the domain. So the preimage is also closed since it contains all the limit points. Thank you.
Yes, it looks right. Alternatively, given a continuous map $f: X \to Y$, if $D \subseteq Y$ is closed, then $X \setminus f^{-1}(D) = f^{-1}(Y \setminus D)$ is open, so $f^{-1}(D)$ is closed.
The tangent plane of orthogonal group at identity. Why the tangent plane of orthogonal group at identity is the kernel of $dF_I$, the derivative of $F$ at identity, where $F(A) = AA^T$? Thank you ~
Proposition. Let $Z$ be the preimage of a regular value $y\in Y$ under the smooth map $f: X \to Y$. Then the kernel of the derivative $df_x:T_x(X) \to T_y(Y)$ at any point $x \in Z$ is precisely the tangent space to $Z$, $T_x(Z)$. Proof: Since $f$ is constant on $Z$, $df_x$ is zero on $T_x(Z)$. But $df_x: T_x(X)\to T_y(Y)$ is surjective, so the dimension of the kernel of $df_x$ must be $$\dim T_x(X) - \dim T_y(Y) = \dim X - \dim Y = \dim Z.$$ Thus $T_x(Z)$ is a subspace of the kernel that has the same dimension as the complete kernel; hence $T_x(Z)$ must be the kernel. Applying this to $F(A)=AA^T$ with regular value the identity gives the claim. (This is the proposition on page 24 of Guillemin and Pollack, Differential Topology.) Jellyfish, you should really read your textbook!
problem of probability and distribution Suppose there are 1 million parts which have $1\%$ defective parts, i.e. 1 million parts have $10000$ defective parts. Now suppose we are taking different sample sizes from the 1 million, like $10\%$, $30\%$, $50\%$, $70\%$, $90\%$ of the 1 million parts, and we need to calculate the probability of finding at most $5000$ defective parts in these samples. As the 1 million parts have $1\%$ defective parts, the value of success $p$ is $0.01$ and of failure $q$ is $0.99$. Now suppose we add $100{,}000$ parts to the one million parts, which makes a total of $1{,}100{,}000$ parts, but these newly added $100{,}000$ parts do not have any defective parts. So now what will be the value of success $p$ in the total $1{,}100{,}000$ parts when looking for $5000$ defective parts? Please also give justification for the choice of $p$.
There was an earlier problem of which this is a variant. In the solution to that problem I did a great many computations. For this problem, the computations are in the same style, with a different value of $p$, the probability that any one item is defective. Some of the computations in the earlier answer were done for "general" (small) $p$, so they can be repeated with small modification. We had $10000$ defectives in a population of $1000000$, and added $100000$ non-defectives. So the new probability of a defective is $p=\frac{10000}{1100000}\approx 0.0090909$. We are taking a large sample, presumably without replacement. So the distribution of the number $X$ of defectives in a sample of size $n$ is hypergeometric, not binomial. However, that does not make a significant difference for the sample sizes of interest. As was the case in the earlier problem, the probability of $\le 5000$ defectives in a sample of size $n$ is nearly $1$ up to a certain $n_0$, drops rapidly to (nearly) $0.5$ at $n_0$, and then falls very rapidly to $0$ as $n$ increases further. In the earlier problem, we had $n_0=500000$. In our new situation, the appropriate $n_0$ is obtained by solving the equation $n_0p=5000$. Solving, we get $n_0=550000$. If $n$ is significantly below $550000$, then $\Pr(X\le 5000)$ will be nearly $1$. For example, that is the case already at $n=500000$. However, for $n$ quite close to $550000$, such as $546000$, the probability is not close to $1$. Similarly, on the other side of $550000$ but close, like $554000$, the probability that $X\le 5000$ will not be close to $0$. In the earlier answer, you were supplied with all the formulas to do any needed calculations if you want to explore the "near $550000$" region in detail.
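If one wants to see these numbers, here is a minimal sketch using the binomial approximation discussed above (assuming scipy is available; the exact hypergeometric values would be very close for these sample sizes):

```python
from scipy.stats import binom

# Binomial approximation to P(X <= 5000) with p = 10000/1100000, for several
# sample sizes around n_0 = 550000.
p = 10000 / 1100000
for n in (500_000, 546_000, 550_000, 554_000, 600_000):
    print(n, binom.cdf(5000, n, p))
```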
Is there any field of characteristic two that is not $\mathbb{Z}/2\mathbb{Z}$ Is there any field of characteristic two that is not $\mathbb{Z}/2\mathbb{Z}$? That is, if a field is of characteristic 2, then does this field have to be $\{0,1\}$?
To a beginner, knowing how one could think of an answer is at least as important as knowing an answer. For examples in Algebra, one needs (at least) two things: a catalogue of the basic structures that appear commonly in important mathematics, and methods of constructing new structures from old. Your catalogue and constructions will naturally expand as you study more, so you don't need to worry about this consciously. The moral I am trying to impart is the following: instead of trying to construct a particular structure with particular properties "from scratch" (like I constantly tried to when I was starting to learn these things), first search your basic catalogue. If that doesn't turn up anything, more often than not your search will hint at some basic construction from one of these examples that will work. When you start learning field theory, your basic catalogue should include all of the finite fields, the rationals, real numbers, complex numbers, and algebraic numbers. Your basic constructions should be subfields, field extensions, fields of fractions and algebraic closures. You should also have the tools from your basic ring theory; constructions like a quotient ring and ring extensions also help with this stuff. For example, Chris came up with his answer by starting with the easy example of a field of characteristic two and wanting to make it bigger. So he extended it with an indeterminate $X$ and as a result he got the field of rational functions with coefficients in $\mathbb{Z}/(2).$ Asaf suggested two ways: making it bigger by taking the algebraic closure, or extending the field by a root of a polynomial (I personally like to see this construction as a certain quotient ring).
motivation of additive inverse of a Dedekind cut set My understanding behind motivation of additive inverse of a cut set is as follows : For example, for the rational number 2 the inverse is -2. Now 2 is represented by the set of rational numbers less than it and -2 is represented by the set of rational numbers less than it. So, if the cut set $\alpha$ represents a rational number then the inverse of $\alpha$ is the set $\{-p-r : p\notin \alpha , r >0\}$. But if the cut set does not represent a rational number then is the above definition is correct ? I think we will miss the first rational number which does not belong to $\alpha$ intuitively. Should not the set $\{-p : p\notin \alpha \}$ be the inverse now ? Confused.
I think the confusion arises when we are trying to identify a rational number, say $2$, with the cut $\{ x\mid x \in \mathbb{Q}, x < 2\}$. When using Dedekind cuts as a definition of real numbers it is important to stick to some convention and follow it properly. For example, to represent a real number we choose either 1) the set containing the smaller rationals, 2) the set containing the larger rationals, or 3) both sets. At the same time, after choosing one of these alternatives it is important to clarify whether the set contains an end point (like a least member in option 2) or a greatest member in option 1)) or not. In this particular question I believe the definition uses option 1) with the criterion that there is no greatest member in the set. When this definition is adopted and you define the additive inverse of a real number, then we must ensure that the set corresponding to the additive inverse does not have a greatest member. This needs to be taken care of only when the cut represents a rational number.
Prove that $\frac{100!}{50!\cdot2^{50}} \in \Bbb{Z}$ I'm trying to prove that : $$\frac{100!}{50!\cdot2^{50}}$$ is an integer . For the moment I did the following : $$\frac{100!}{50!\cdot2^{50}} = \frac{51 \cdot 52 \cdots 99 \cdot 100}{2^{50}}$$ But it still doesn't quite work out . Hints anyone ? Thanks
We have $100$ people at a dance class. How many ways are there to divide them into $50$ dance pairs of $2$ people each? (Of course we will pay no attention to gender.) Clearly there is an integer number of ways. Let us count the ways. We solve first a different problem. This is a tango class. How many ways are there to divide $100$ people into dance pairs, one person to be called the leader and the other the follower? Line up the people. There are $100!$ ways to do this. Now go down the line, pairing $1$ and $2$ and calling $1$ the leader, pairing $3$ and $4$ and calling $3$ the leader, and so on. We obtain each leader-follower division in $50!$ ways, since the groups of $2$ can be permuted. So there are $\dfrac{100!}{50!}$ ways to divide the people into $50$ leader-follower pairs to dance the tango. Now solve the original problem. To just count the number of democratic pairs, note that interchanging the leader/follower tags produces the same pair division. So each democratic pairing gives rise to $2^{50}$ leader/follower pairings. It follows that there are $\dfrac{100!}{2^{50}\cdot 50!}$ democratic pairings.
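Since Python integers are exact, one can also verify the divisibility directly (a check, not a substitute for the counting argument above):

```python
import math

# Direct check that 100!/(50! * 2^50) is an integer.
numerator = math.factorial(100)
denominator = math.factorial(50) * 2 ** 50
print(numerator % denominator == 0)      # True
print(numerator // denominator)          # the number of pairings itself
```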
Proving by induction: $2^n > n^3 $ for any natural number $n > 9$ I need to prove that $$ 2^n > n^3\quad \forall n\in \mathbb N, \;n>9.$$ Now that is actually very easy if we prove it for real numbers using calculus. But I need a proof that uses mathematical induction. I tried the problem for a long time, but got stuck at one step - I have to prove that: $$ k^3 > 3k^2 + 3k + 1 $$ Hints???
For your "subproof": Try proof by induction (another induction!) for $k \geq 7$ $$k^3 > 3k^2 + 3k + 1$$ And you may find it useful to note that $k\leq k^2, 1\leq k^2$ $$3k^2 + 3k + 1 \leq 3(k^2) + 3(k^2) + 1(k^2) = 7k^2 \leq k^3 \quad\text{when}??$$
Norm inequality (upper bound) Do you think this inequality is correct? I tried to prove it, but I cannot. Please help me. Assume that $\|X\| < \|Y\|$, where $\|X\|, \|Y\|\in (0,1)$ and $\|Z\| \gg \|X\|$, $\|Z\| \gg \|Y\|$. Prove that $$\|X+Z\|-\|Y+Z\| \leq \|X\|-\|Y\|$$ and that if $Z$ is increased, the left-hand side becomes smaller. I picked some examples and saw that this inequality holds, but I cannot prove it. Thank you very much.
The inequality is false as stated. Let $$ \begin{align} X &= (0.5,0)\\ Y &= (-0.7,0)\\ Z &= (z,0), 1 \ll z \end{align}$$ This satisfies all the conditions given. We have that $$ \|X + Z\| - \|Y + Z\| = z + 0.5 - (z - 0.7) = 1.2 \not\leq -0.2 = \|X\| - \|Y\| $$ From the Calculus point of view, in $n$ dimensions, we can write $$ \|X\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} $$ We have that when $\|X\| \ll \|Z\|$, we can approximate $\|X+Z\| \approx \|Z\| + X\cdot \nabla(\|Z\|)$. Now, $\nabla(\|Z\|) = \frac{Z}{\|Z\|}$ by a direct computation, so we have that $$ \|X + Z\| - \|Y + Z\| \approx (X-Y) \cdot \frac{Z}{\|Z\|} $$ From this formulation we see that even in the cases where $X,Y$ are infinitesimal the inequality you hoped for cannot hold true. However, the right hand side of this approximation can be controlled by Cauchy inequality to get (using that $Z / \|Z\|$ is a unit vector). $$ (X-Y) \cdot \frac{Z}{\|Z\|} \leq \|X - Y\| $$ So perhaps what you are thinking about is the following corollary of the triangle inequality Claim: If $X,Y,Z$ are vectors in $\mathbb{R}^n$, then $$ \|X + Z\| - \|Y + Z\| \leq \|X - Y \| $$ Proof: We write $$ X + Z = (X - Y) + (Y + Z) $$ so by the triangle inequality $$ \|X + Z\| = \|(X - Y) + (Y+Z)\| \leq \|X - Y\| + \|Y + Z\| $$ rearranging we get $$ \|X + Z\| - \|Y + Z\| \leq \|X - Y\| $$ as desired. Remark: if we re-write the expression using $-Z$ instead of $Z$, the same claim is true in an arbitrary metric space: Let $(S,d)$ be a metric space. Let $x,y,z$ be elements of $S$. Then $$ d(x,z) - d(y,z) \leq d(x,y) $$.
What is the difference between the exponential expression $a^x$ and $e^x$ in mathematical notation? I want to know the difference between $a^x$ and $e^x$, and please give me some examples of both. I asked this question because the derivative rules table I am reading contains both $a^x$ and $e^x$, and I don't know when I should use one and when I should use the other.
The two are essentially the same formula stated in different ways. They can be derived from each other as follows: Note that $$\frac{d}{dx}(e^x)=e^x \ln(e) = e^x$$ is a special case of the formula for $a^x$ because $e$ has the special property that $\ln (e) =1$ Also $a^x=e^{\ln(a) x}$, which is another way into the derivative for $a^x$. $$\frac{d}{dx}(e^{rx})=re^{rx}$$ by the chain rule. Let $r=\ln (a)$.
Proving a set of linear functionals is a basis for a dual space I've seen some similar problems on the stackexchange and I want to be sure I am at least approaching this in a way that is sensible. The problem as stated: Let $V= \Bbb R^3$ and define $f_1, f_2, f_3 \in V^*$ as follows: $f_1(x,y,z)= x-2y ,\; f_2(x,y,z)= x+y+z,\;f_3(x,y,z)= y-3z$ Prove that $f_1, f_2, f_3$ is a basis for $V^*$ and and then find a basis for V for which it is the dual basis. Here's my problem: the question feels a bit circular. But this is what I attempted: To show that the linear functionals $f$ are a basis, we want that $f_i(x_j)=\delta_{ij}$, or that $f_i(x_j)=1$ if $i=j$ and that it is zero otherwise. That means that we want to set this up so that $$1= f_1(x,y,z)= x-2y$$ $$0= f_2(x,y,z)= x+y +z$$ $$0= f_3(x,y,z)= y-3z$$ That gives us three equations and three unknowns. Solving them we get $2x-2z=1$ for $x-z=\frac{1}{2}$ and $z=x-\frac{1}{2}$ and subbing into the equation for $f_3$ I get $0=y-3x-\frac{3}{2}$ which gets us $1=x-6x+3$ or $x=\frac{2}{5}$. That gives us $y=\frac{-3}{10}$ and $z=\frac{-1}{10}$. OK, this is where I am stuck on the next step. I just got what should be a vertical matrix I think, with the values $(\frac{2}{5}, \frac{-3}{10}, \frac{-1}{10})$ but I am not sure where to go from here. I am not entirely sure I set this up correctly. thanks EDIT: I do know that I have to show that $f_1, f_2, f_3 $ are linearly independent. That I think I can manage, but I am unsure how to fit it into the rest of the problem or if I am even approaching this right.
What about a direct approach? Suppose $\,a,b,c\in\Bbb R\,$ are such that $$af_1+bf_2+cf_3=0\in V^*\implies\;\forall\,v:=(x,y,z)\in\Bbb R^3\;,\;\;af_1v+bf_2v+cf_3v=0\iff$$ $$a(x-2y)+b(x+y+z)+c(y-3z)=0\iff$$ $$\iff (a+b)x-(2a-b-c)y+(b-3c)z=0$$ As the above is true for all $\;x,y,z\in\Bbb R\,$ , we must have $$\begin{align*}\text{I}&\;\;\;\;a+b=0\\\text{II}&\;\;\;\;2a-b-c=0\\\text{III}&\;\;\;\;b-3c=0\end{align*}\;\;\implies b=-a\;,\;\;c=\frac b3=-\frac a3\implies 2a-b-c=\frac{10}3a=0\implies a=b=c=0$$ and we're done.
Can anyone provide me a step-by-step proof for proving a function IS onto/surjective? I've seen the definition, I've seen several examples and anti-examples (e.g. the typical x squared example). I get the idea, but I can't seem to find a proof for proving that a function IS onto, with proper explanation start to finish. Given: * *$f: R$ $\rightarrow$ $R$ *$f(x) = -3x + 4$ Prove that the above function is "onto." I know that this IS onto, but what would a dry, stone cold proof look like for this, that says like Step 1 with justification, step 2 with justification, and so on? The closest thing I could find to what I'm looking for: http://courses.engr.illinois.edu/cs173/sp2009/lectures/lect_15_supp.pdf in Section 3. It says to prove that g(x) = x - 8 is onto, and it does so by setting x to be (y + 8). But...why choose that value? What formula or strategy is there for determining what x should be? It appears as though you want x to be whatever will get rid of other stuff (+8 against a -8). So with some basic algebra, I think I can set x to $-\frac13$y + $\frac43$. And this is valid by the definition of real numbers, yes? This properly cancels everything out so that f(x) = y. Is that really the end of the proof?.....or am I way off the track?
What you need to do to prove that a function is surjective is to take each value $y$ and find - any way you can - a value of $x$ with $f(x)=y$. If you succeed for every possible value of $y$, then you have proved that $f$ is surjective. So we take $x=-\cfrac 13y+ \cfrac 43$ as you suggest. This is well-defined (no division by zero, for example) Then $f(x)=-3x+4=-3\left(-\cfrac 13y+ \cfrac 43\right)+4=y-4+4=y$ So your formula covers every $y$ at once. And because you have covered every $y$ you can say that you have a surjection.
Integral representation of cosh x On Wolfram math world, there's apparently an integral representation of $\cosh x$ that I'm unfamiliar with. I'm trying to prove it, but I can't figure it out. It goes \begin{equation}\cosh x=\frac{\sqrt{\pi}}{2\pi i}\int_{\gamma-i\infty}^{\gamma+i\infty} \frac{ds}{\sqrt{s}}\,e^{s+\frac{1}{4}\frac{x^{2}}{s}},\qquad \gamma >0\end{equation} The contour is taken along a vertical line with positive real part. I thought at first glance to use the residue theorem but it seems to be of no use here.
Expand the difficult part of the exponential in power series, the integral equals $$ I = \sqrt\pi \sum_{k\geq0} \frac{(x^2/4)^k}{k!} \frac{1}{2\pi i}\int_{\Re s=\gamma} s^{-k-1/2}e^{s}\,ds. $$ The integral here is the inverse Laplace transform of the function $s^{-k-1/2}$ evaluated at the point $t=1$, given by $$ \mathcal{L}^{-1}[s^{-k-1/2}](t) = \frac1{2\pi i}\int_{\Re s=\gamma}s^{-k-1/2}e^{st}\,ds. $$ So we can look it up (http://mathworld.wolfram.com/LaplaceTransform.html): $$ \frac1{2\pi i}\int_{\Re s=\gamma}s^{-k-1/2}e^{s}\,ds = \frac{1}{\Gamma(k+1/2)}, $$ which also satisfies $$ \frac{\Gamma(1/2)}{\Gamma(k+1/2)} = \frac{1}{\frac12\times\frac32\times\cdots\times(k-\frac12)}, $$ where $\Gamma(1/2)=\sqrt\pi$. Simplifying, we get $$ \sum_{k\geq0} \frac{(x^2/4)^k}{k!} \frac{\sqrt\pi}{\Gamma(k+1/2)} = \sum_{k\geq0}\frac{x^{2k}}{(2k)!} = \cosh x. $$
Trigonometry Equations. Solve for $0 \leq A \leq 360^\circ$, giving solutions correct to the nearest minute where necessary: a) $\cos^2 A -8\sin A \cos A +3=0$ Can someone please explain how to solve this? I've tried myself with no luck. Thanks!
HINT: $\cos^2 A=\frac{1+\cos 2A}{2},$ $\sin A\cos A=\frac{\sin 2A}{2}$ and $\sin^2 2A+\cos^2 2A=1$
$\int \frac{dz}{z\sqrt{(1-{1}/{z^2})}}$ over $|z|=2$ I need help in calculating the integral of $$\int \frac{dz}{z\sqrt{\left(1-\dfrac{1}{z^2}\right)}}$$ over the circle $|z|=2$. (We're talking about the main branch of the square root). I'm trying to remember what methods we used to calculate this sort of integral in my CA textbook. I remember the main idea was using the residue theorem, but I don't remember the specifics of calculating residues.... Thank you for your help!
$$\frac{1}{z \sqrt{1-\frac{1}{z^{2}}}} = \frac{1}{z} \Big( 1 + \frac{1}{2z^{2}} + O(z^{-4}) \Big) \quad \text{for } |z| >1 \implies \int_{|z|=2} \frac{1}{z \sqrt{1-\frac{1}{z^{2}}}} \ dz = 2 \pi i (1) = 2 \pi i $$
Congruence in rings Let $R$ be a commutative (and probably unitary, if you like) ring and $p$ a prime number. If $x_1,\ldots,x_n\in R$ are elements of $R$, then we have $(x_1+\cdots+x_n)^p\equiv x_1^p+\cdots+x_n^p$ mod $pR$. Why is this true? I tried to show that in $R/pR$ their congruence classes are equal, but without success.
Just compute ;-) ... we have - as $R$ is commutative - by the multinomial theorem $$ (x_1 + \cdots + x_n)^p = \sum_{\nu_1 + \cdots + \nu_n = p} \frac{p!}{\nu_1! \cdots \nu_n!} x_1^{\nu_1} \cdots x_n^{\nu_n} $$ If all $\nu_i <p $, the denominator contains no factor $p$ (as $p$ is prime), hence $\frac{p!}{\nu_1! \cdots \nu_n!} \equiv 0 \pmod p$, so the only terms which survive reduction mod $pR$ are those where one $\nu_i = p$, hence the others are $0$, so $$ (x_1 + \cdots + x_n)^p = \sum_{\nu_1 + \cdots + \nu_n = p} \frac{p!}{\nu_1! \cdots \nu_n!} x_1^{\nu_1} \cdots x_n^{\nu_n} \equiv x_1^p + \cdots + x_n^p \pmod{pR}. $$
sum of exterior angles of a closed broken line in space I am looking for a simple proof of the following fact: The sum of exterior angles of any closed broken line in space is at least $2 \pi$. I believe it equals $2 \pi$ if and only if the closed broken line is a planar convex polygon.
Quoting Curves of Finite Total Curvature by J. Sullivan: Lemma 2.1. (See[Mil50, Lemma 1.1] and [Bor47].) Suppose $P$ is a polygon in $\mathbb E^d$. If $P'$ is obtained from $P$ by deleting one vertex $v_n$ then $\operatorname{TC}(P')\leq\operatorname{TC}(P)$. We have equality here if $v_{n-1}v_nv_{n+1}$ are collinear in that order, or if $v_{n-2}v_{n-1}v_nv_{n+1}v_{n+2}$ lie convexly in some two-plane, but never otherwise. This total curvature $\operatorname{TC}$ is the sum of exterior angles you are asking about. You could reduce every closed broken line to a triangle by successive removal of vertices. Since the total curvature of a triangle is always $2\pi$, this gives the lower bound you assumed. And with the other condition, you can argue that in the case of equality the last vertex you deleted must have been in the same plane as the triangle, and subsequently every vertex deleted before that, and hence the whole curve must have been a planar and convex polygon.
Show that the matrix $A+E$ is invertible. Let $A$ be an invertible matrix, and let $E$ be an upper triangular matrix with zeros on the diagonal. Assume that $AE=EA$. Show that the matrix $A+E$ is invertible. WLOG, we can assume $E$ is Jordan form. If $A$ is Jordan form, it's trivial. If $A$ is not Jordan form, how to use $AE=EA$ to transform $A$ to a Jordan form? Any suggestion? Thanks.
$E^n=0$ and since $A,E$ commute you have $$A^{2n+1}=A^{2n+1}+E^{2n+1}=(A+E)(A^{2n}-A^{2n-1}E+...+E^{2n})$$ Since $A^{2n+1}$ is invertible, it follows that $A+E$ is invertible. P.S. I only used in the proof that $E$ is nilpotent and commutes with $A$, so more generally it holds (in any ring) that if $A$ is invertible, $E$ is nilpotent and $AE=EA$ then $A\pm E$ are invertible.
Solving Bessel integration What would be the solution of this Bessel-type integral, $$b=k A(t)\int_0^{\infty} J_0 (k \rho) e^{-\rho^2/R^2} \rho \, d \rho$$ Can I solve it by using this formulation? $$c= \int_0^{\infty}J_0(t) e^{-pt} dt= \frac{1}{\sqrt{1+p^2}}$$
According to Gradshteyn and Ryzhik, we have: $$\int_0^{\infty}x^{\mu}\exp(-\alpha x^2)J_{\nu}(\beta x)dx = \frac{\beta^{\nu}\Gamma\left(\frac{1}{2}\nu+\frac{1}{2}\mu+\frac{1}{2}\right)}{2^{\nu+1}\alpha^{\frac{1}{2}(\mu+\nu+1)}\Gamma(\nu+1)}\mbox{}_1 F_1\left(\frac{\nu+\mu+1}{2};\mbox{ }\nu+1;\mbox{ }-\frac{\beta^2}{4\alpha}\right).$$ Here $\mbox{}_1F_1$ is a hypergeometric function. Inputting the proper values gives $$\int_0^{\infty}\rho\exp\left(-\frac{\rho^2}{R^2}\right)J_0(k\rho)d\rho = \frac{\Gamma(1)}{2\frac{1}{R^2}\Gamma(1)}\mbox{}_1F_1\left(1;1;-\frac{k^2R^2}{4}\right).$$ Using a property of the hypergeometric function ($_1F_1(a;a;x) = \exp(x)$) we get: $$\frac{R^2}{2}\exp\left(-\frac{k^2R^2}{4}\right).$$
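If it helps, here is a quick numerical sanity check of that closed form (this is my own addition, not part of the quoted tables; it assumes SciPy is available and the parameter values are arbitrary):

```python
# Compare the Gaussian-Bessel integral with (R^2/2) * exp(-k^2 R^2 / 4) numerically.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def lhs(k, R):
    integrand = lambda rho: rho * np.exp(-(rho / R) ** 2) * j0(k * rho)
    value, _ = quad(integrand, 0, np.inf, limit=200)
    return value

def rhs(k, R):
    return (R ** 2 / 2) * np.exp(-(k * R) ** 2 / 4)

for k, R in [(0.5, 1.0), (2.0, 1.5), (3.0, 0.7)]:
    print(k, R, lhs(k, R), rhs(k, R))   # the two columns should agree to quadrature accuracy
```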
An odd question about induction. Given $n$ $0$'s and $n$ $1$'s distributed in any manner whatsoever around a circle, show, using induction on $n$, that it is possible to start at some number and proceed clockwise around the circle to the original starting position so that, at any point during the cycle, we have seen at least as many $0$'s as $1$'s:
Alternatively: Count the total number of ways to arrange $0$s and $1$s around a circle ($2n$ binary digits in total), consider the number of Dyck words of length $2n$, i.e., the $n$th Catalan number, and then use the Pigeonhole Principle. "QED"
Uniform grid on a disc Do there exist any known methods of drawing a uniform grid on a disk ? I am looking for a map that converts a grid on a square to a grid on a disk.
There are many possibilities to map a square on a disk. For example one possibility is: $$ \phi(x,y) = \frac{(x,y)}{\sqrt{1+\min\{x^2,y^2\}}} $$ which moves the points along the line through the origin. If you also want the map to maintain the infinitesimal area, it's a little bit more complicated. One possibility is to look for a map $\phi(x,y)$ which sends the circle to a rectangle by keeping vertical the vertical lines. On each vertical band you can subdivide the strip in equal parts. This means that you impose $\phi(x,y) = (f(x),y/\sqrt{1-x^2})$. The condition that the map preserves the area becomes: $$ \frac{f'(x)}{\sqrt{1-x^2}} = 1 $$ i.e. $f'(x) = \sqrt{1-x^2}$ which, by integration, gives $$ f(x) = \frac 1 2 (\arcsin x + x\sqrt{1-x^2}). $$
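As a small sketch of how the area-preserving version can be used in practice (my own code, not from the answer above; the bisection inverse and the grid sizes are arbitrary choices): build a uniform grid on the rectangle $[-\pi/4,\pi/4]\times[-1,1]$ and pull it back to the unit disk through the inverse of $\phi$.

```python
import numpy as np

def f(x):
    # f(x) = (arcsin x + x*sqrt(1-x^2)) / 2, the antiderivative found above
    return 0.5 * (np.arcsin(x) + x * np.sqrt(1.0 - x ** 2))

def f_inv(u, tol=1e-12):
    # f is increasing on [-1, 1], so a simple bisection recovers x from u = f(x)
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# uniform grid on the rectangle, pulled back to the unit disk
us = np.linspace(f(-1.0) + 1e-9, f(1.0) - 1e-9, 20)
vs = np.linspace(-1.0 + 1e-9, 1.0 - 1e-9, 20)
grid = [(x, v * np.sqrt(1.0 - x ** 2)) for u in us for x in [f_inv(u)] for v in vs]
```

Because $\phi$ preserves area, equal rectangles in the $(u,v)$ grid correspond to cells of equal area on the disk.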
Prove that $\vdash p \lor \lnot p$ is true using natural deduction I'm trying to prove that $p \lor \lnot p$ is true using natural deduction. I want to do this without using any premises. As it's done in a second using a truth table and because it is so intuitive, I would think that this proof shouldn't be too difficult, but I am not able to construct one.
If you have the definition Cpq := ANpq and you have that A-commutes, and Cpp, then you can do this really quickly. Since we have Cpp, by the definition we have ANpp. By A-commutation we then have ApNp. More formally, I'll first point out that natural deduction allows for definitions as well as uniform substitution on theses. So, again, here's our definition Cpq := ANpq (or C := AN for short). First one of the lemmas: 1. p (hypothesis); 2. Cpp (1-1, C-in). Now for another of the lemmas: 1. Apq (hypothesis); 2. p (hypothesis); 3. Aqp (2, A-in); 4. CpAqp (2-3, C-in); 5. q (hypothesis); 6. Aqp (5, A-in); 7. CqAqp (5-6, C-in); 8. Aqp (1, 4, 7, A-out); 9. CApqAqp (1-8, C-in). Now we have the following sequence of theses: 1. Cpp, by the above; 2. CApqAqp, by the above; 3. ANpp, definition applied to thesis 1; 4. CANppApNp, from 2 by the substitution p/Np, q/p (meaning in thesis 2 we substitute p with Np and q with p); 5. ApNp, from 3 and 4 by C-out.
Given that $\cos x =-\frac{3}{4}$ and $90^\circ<x<180^\circ$, find $\tan x$ and $\csc x$. This question is quite unusual compared to the rest of the questions in the chapter; can someone please explain how it is solved? I tried the Pythagorean Theorem, but no luck. Is it possible to teach me how to use the circle diagram?
As $90^\circ< x<180^\circ,\tan x <0,\sin x>0$ So, $\sin x=+\sqrt{1-\cos^2x}=...$ $\csc x=\frac1{\sin x}= ... $ and $\tan x=\frac{\sin x}{\cos x}=...$
Using the Casorati-Weierstrass theorem. Show that there is a complex number $z$ such that:$$\left|\cos{\left(\frac{1}{2z^4+3z^2+1}\right)}+100\tan^2{z}+e^{-z^2}\right|<1$$ It's easy to see that $z=i$ is a simple pole of $\frac{1}{2z^4+3z^2+1}$, but I want to know how to conclude that $z=i$ is an essential singularity of $\cos{\left(\frac{1}{2z^4+3z^2+1}\right)}$ so that I can use the Casorati-Weierstrass theorem.
Denote $$f(z) = \cos\left(\frac{1}{2z^4+3z^2+1}\right)$$ To prove $z=i$ is an essential singularity of $f(z)$, just find two sequences $\{z_n\}$ and $\{w_n\}$ so that $z_n, w_n\rightarrow i$ but $f(z_n) = 1$, $f(w_n)=-1$. This means both $\lim_{z\to i} f(z)$ and $\lim_{z\to i} 1/f(z)$ do not exist. By definition, $z=i$ is an essential singularity.
Diameter of finite set of points is equal to diameter of its convex hull Let $M\subset \mathbb{R}^2$ be a finite set of points, $\operatorname{C}(M)$ the convex hull of M and $$\operatorname{diam}(M) = \sup_{x,y\in M}\|x-y\|_2$$ be the diameter of $M$ What I want to show now is, that it holds $$\operatorname{diam}(M) = \operatorname{diam}(\operatorname{C}(M))$$ Because $$M\subseteq\operatorname{C}(M)$$ we obtain $$\operatorname{diam}(M) \le\operatorname{diam}(\operatorname{C}(M))$$ but how to prove that $$\operatorname{diam}(M) \ge \operatorname{diam}(\operatorname{C}(M))$$ I suppose it should be possible to construct a contradiction assuming $\operatorname{diam}(M) <\operatorname{diam}(\operatorname{C}(M))$ but I do not see how at this moment.
Hint: Prove this for a triangle and then use the fact that for every point of $C(M)$ there is a triangle that contains it, there are many ways to go from there. I hope this helps ;-)
are elementary symmetric polynomials concave on probability distributions? Let $S_{n,k}=\sum_{S\subset[n],|S|=k}\prod_{i\in S} x_i$ be the elementary symmetric polynomial of degree $k$ on $n$ variables. Consider this polynomial as a function, in particular a function on probability distributions on $n$ items. It is not hard to see that this function is maximized at the uniform distribution. I am wondering if there is a "convexity"-based approach to show this. Specifically, is $S_{n,k}$ concave on probability distributions on $n$ items?
(I know this question is ancient, but I happened to run into it while looking for something else.) While I am not sure if $S_{n,k}$ is concave on the probability simplex, you can prove the result you want and many other similar useful things using Schur concavity. A sketch follows. A vector $y\in \mathbb{R}_+^n$ majorizes $x \in \mathbb{R}_+^n$ if the following inequalities are satisfied: $$ \sum_{j=1}^i{x_{(j)}} \leq \sum_{j=1}^i{y_{(j)}} $$ for all $i$, and $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$. Here $x_{(j)}$ is the $j$-th largest coordinate of $x$ and similarly for $y$. Let's write this $x \prec y$. For intuition it's useful to know that $x \prec y$ if and only if $x$ is in the convex hull of vectors you get by permuting the coordinates of $y$. A function is Schur-concave if $x \prec y \implies f(x) \geq f(y)$. A simple sufficient condition for Schur concavity is that $\partial f(x)/\partial x_i \ge \partial f(x)/\partial x_j$ whenever $x_i \le x_j$. It is easy to verify that $S_{n,k}$ satisfies this condition for any $n$,$k$. Notice that $x=(1/n, \ldots, 1/n)$ is majorized by every vector $y$ in the probability simplex. You can see this for example by noticing that the expected sum of $i$ randomly chosen coordinates of $y$ is $i/n$, so surely the sum of the $i$ largest coordinates is at least as much. Equivalently, $x$ is the average of all permutations of $y$. This observation, and the Schur concavity of $S_{n,k}$ imply $S_{n,k}(x) \ge S_{n,k}(y)$. In fact, $S_{n,k}^{1/k}$ is concave on the positive orthant, and this implies what you want. This is itself a special case of much more powerful results about the concavity of mixed volumes. But the Schur concavity approach is elementary and pretty widely applicable.
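A tiny numerical spot-check of the conclusion (my own addition; it assumes Python 3.8+ for math.prod and of course proves nothing):

```python
import itertools, random
from math import prod

def esym(xs, k):
    # elementary symmetric polynomial e_k evaluated at xs
    return sum(prod(c) for c in itertools.combinations(xs, k))

n, k = 6, 3
value_at_uniform = esym([1.0 / n] * n, k)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    p = [wi / s for wi in w]            # a random point of the probability simplex
    assert esym(p, k) <= value_at_uniform + 1e-12
print("maximum (at uniform):", value_at_uniform)
```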
For $f,g~(f<g),t\in\mathcal C[0,1],c>0$ let $\{h\in\mathcal C[0,1]:t-c<h<t+c\}=\{h\in\mathcal C[0,1]:f<h<g\}.$ I want to show that $t-c=f,~t+c=g.$ $$t-c<t<t+c\text{ and } \\f<\dfrac{f+g}{2}<g.\\\text{Then }t-c<\dfrac{f+g}{2}<t+c\text{ and } f<t<g.$$ I don't know how to contradict the following cases: For some $y\in[0,1]$: let $t(y)-c>f(y)$; let $t(y)-c<f(y)$; let $t(y)+c>g(y)$; let $t(y)+c<g(y)$. This problem can more clearly be written as: For $f_1,g_1,f_2,g_2\in\mathcal C[0,1]$ $$\{h\in\mathcal C[0,1]:f_1<h<g_1\}=\{h\in\mathcal C[0,1]:f_2<h<g_2\}\implies f_1=f_2,g_1=g_2.$$
Let $A=\{h\in\mathcal C[0,1]|\;t-c<h<t+c\}$ and $B=\{h\in\mathcal C[0,1]|\;f<h<g\}$. For every $\epsilon\in(0,c)$ we have $t-\epsilon,t+\epsilon\in A$ by definition of $A$. Since $A=B$, this means that $t-\epsilon,t+\epsilon\in B$ for all $\epsilon\in(0,c)$, which by definition of $B$ means that $$f<t-\epsilon<t+\epsilon<g.$$ This implies that $$f\leq t-c<t+c\leq g.$$ To complete the proof, we have to show that $f\geq t-c$ and $g\leq t+c$. To do this, note that (since $f<g$) for every $\epsilon\in(0,1)$ we have $$f<(1-\epsilon)f+\epsilon g<g,$$ therefore $(1-\epsilon)f+\epsilon g\in B$. But this means that $(1-\epsilon)f+\epsilon g\in A$ for all $\epsilon\in(0,1)$, which means that $$t-c<(1-\epsilon)f+\epsilon g<t+c.$$ Now, let $\epsilon\to 0$. This gives us $$t-c\leq f\leq t+c.$$ Similarly, letting $\epsilon\to 1$ gives us $$t-c\leq g\leq t+c.$$ This completes the proof.
Techniques for determining how "random" a sample is? What techniques exist to determine the "randomness" of a sample? For instance, say I have data from a series of $1200$ six-sided dice rolls. If the results were 1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, ... Or: 1, 1, 1, ..., 2, 2, 2, ..., 3, 3, 3, ... The confidence of randomness would be quite low. Is there a formula where I can input the sequence of outcomes and get back a number that corresponds to the likelihood of randomness? Thanks UPDATE: awkward's answer was the most helpful. Some Googling turned up these two helpful resources: * *Statistical analysis of Random.org - an overview of the statistical analyses used to evaluate the random numbers generated by the website, www.random.org *Random Number Generation Software - a NIST-funded project that provides a discussion on tests that can be used against random number generators, as well as a free software package for running said tests.
What I would do is to first take the samples one by one, and check to see whether the distribution is uniform (you assign some value depending on how far the distribution is from uniform, and the way you calculate this value depends on your application). I would then take the samples two by two and do the same thing as above, and then three by three and so on. With proper weighting, your second sequence will, e.g., be flagged as "not random" with this technique. This is also what Neil's professor (in one of the answers to this question) is doing to see whether the sequences of his/her students are really random or human-generated.
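A rough sketch of that "one by one, two by two, ..." idea in code (my own; the non-overlapping grams and the plain chi-square statistic are arbitrary implementation choices):

```python
from collections import Counter
from itertools import product

def chi_square_kgrams(rolls, k, faces=6):
    # chi-square distance of the empirical k-gram distribution from uniform;
    # large values suggest the sequence is far from random
    grams = [tuple(rolls[i:i + k]) for i in range(0, len(rolls) - k + 1, k)]
    counts = Counter(grams)
    expected = len(grams) / faces ** k
    return sum((counts.get(g, 0) - expected) ** 2 / expected
               for g in product(range(1, faces + 1), repeat=k))

# A patterned sequence like 1,2,3,4,5,6,1,2,... looks fine for k=1 but scores
# very badly for k=2 or k=3, which is why several gram sizes are checked.
rolls = [1, 2, 3, 4, 5, 6] * 200
print(chi_square_kgrams(rolls, 1), chi_square_kgrams(rolls, 2))
```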
Evaluate the integral $\int^{\frac{\pi}{2}}_0 \frac{\sin^3x}{\sin^3x+\cos^3x}\,\mathrm dx$. Evaluate the integral $$\int^{\frac{\pi}{2}}_0 \frac{\sin^3x}{\sin^3x+\cos^3x}\, \mathrm dx.$$ How can I evaluate this one? I didn't find any clever substitution, and integration by parts doesn't lead anywhere (I think). Any guidelines please?
Symmetry! This is the same as the integral with $\cos^3 x$ on top. If that is not obvious from the geometry, make the change of variable $u=\pi/2-x$. Add them, you get the integral of $1$. So our integral is $\pi/4$.
Can someone explain the intuition behind this moment generating function identity? If $X_i \sim N(\mu, \sigma^2) $, we know that: $\bar{X} \sim N(\mu, \sigma^2 /n)$. But why does: $$\exp\left({\sigma^{2}\over 2}\sum_{i=1}^{n}(t_{i}-\bar{t})^{2}\right)= M_{X_{1}-\bar{X},X_{2}-\bar{X},...,X_{n}-\bar{X}}(t_1,t_2,...,t_n)$$ Where $M$ is the moment generating function? I have three pages of scratch work but it would be incredibly tedious to post that here, and I already know it's true... Thanks!
This identity relies on the fact that $$\sum_{i=1}^nt_iX_i-\sum_{i=1}^nt_i\bar X=\sum_{j=1}^n(t_j-\bar t)X_j.$$
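To spell out the rest (my own completion of the hint): since $\sum_{j}(t_j-\bar t)=0$, the sum $\sum_j(t_j-\bar t)X_j$ is normal with mean $\mu\sum_j(t_j-\bar t)=0$ and variance $\sigma^2\sum_j(t_j-\bar t)^2$, and for $Y\sim N(0,v)$ one has $\mathbb E[e^{Y}]=e^{v/2}$, so $$M_{X_1-\bar X,\ldots,X_n-\bar X}(t_1,\ldots,t_n)=\mathbb E\left[\exp\Big(\sum_{j=1}^n(t_j-\bar t)X_j\Big)\right]=\exp\left(\frac{\sigma^2}{2}\sum_{j=1}^n(t_j-\bar t)^2\right).$$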
covering space of $2$-genus surface I'm trying to build $2:1$ covering space for $2$- genus surface by $3$-genus surface. I can see that if I take a cut of $3$-genus surface in the middle (along the mid hole) I get $2$ surfaces each one looks like $2$-genus surface which are open in one side so it's clear how to make the projection from these $2$ copies to the $2$-genus surface. my question is how can I see this process in the polygonal representation of 3-genus surface ( $12$ edges: $a_{1}b_{1}a_{1}^{-1}b_{1}^{-1}......a_{3}b_{3}a_{3}^{-1}b_{3}^{-1})$ . I can't visualize the cut I make in the polygon. thanks.
Take the dodecagon at the origin with one pair of edges intersecting the $y$-axis (call them the top and bottom faces) and one pair intersecting the $x$-axis. Cut the polygon along the $x$ axis, and un-identify the left and right faces. This gives two octagons, each with an opposing pair of unmatched edges. Identify these new edges, and you should have what you're looking for.
Linear dependence of multivariable functions It is well known that the Wronskian is a great tool for checking the linear dependence between a set of functions of one variable. Is there a similar way of checking linear dependance between two functions of two variables (e.g. $P(x,y),Q(x,y)$)? Thanks.
For checking linear dependency between two functions of two variables we can follow the following theorem given by "Green, G. M., Trans. Amer. Math. Soc., New York, 17, 1916,(483-516)". Theorem: Let $y_{1}$ and $y_{2}$ be functions of two independent variables $x_{1}$ and $x_{2}$, i.e., $y_{1} = y_{1}(x_{1} ,x_{2}) $ and $y_{2} = y_{2}(x_{1} ,x_{2}) $, for which all partial derivatives of $1^{st}$ order, $\frac{\partial y_{1}}{\partial x_{k}}$, $\frac{\partial y_{2}}{\partial x_{k}}$, $(k = 1,2)$ exist throughout the region $A$. Suppose, further, that one of the functions, say $y_{1}$, vanishes at no point of $A$. Then if all the two-rowed determinants in the matrix $$\begin{pmatrix} y_{1} & y_{2} \\ \frac{\partial y_{1}}{\partial x_{1}} & \frac{\partial y_{2}}{\partial x_{1}} \\ \frac{\partial y_{1}}{\partial x_{2}} & \frac{\partial y_{2}}{\partial x_{2}} \end{pmatrix}$$ vanish identically in $A$, $y_{1}$ and $y_{2}$ are linearly dependent in $A$, and in fact $y_{2}=c y_{1}$.
Analysis of Differentiable Functions Suppose that $f : \Bbb{R} \to \Bbb{R}$ is a function such that $|f(x)− f(y)| ≤ |x−y|^2$ for all $x$ and $y$. Show that $f (x) = C$ for some constant $C$. Hint: Show that $f$ is differentiable at all points and compute the derivative I am confused as to what to use as the function in order to show that $f$ is differentiable at all points
Hint: Let $y=x+h$, then you have $$\left| \frac{f(x+h)-f(x)}{h}\right|\le |h|,$$ so that as $h \to 0$ you get that $f$ is differentiable. Maybe now you can use differentiability of $f$ to finish. Actually once you know it's differentiable, the same inequality above shows the derivative is $0$, so not really more work, just explain it in your write-up.
"IFF" (if and only if) vs. "TFAE" (the following are equivalent) If $P$ and $Q$ are statements, $P \iff Q$ and The following are equivalent: $(\text{i}) \ P$ $(\text{ii}) \ Q$ Is there a difference between the two? I ask because formulations of certain theorems (such as Heine-Borel) use the latter, while others use the former. Is it simply out of convention or "etiquette" that one formulation is preferred? Or is there something deeper? Thanks!
"TFAE" is appropriate when one is listing optional replacements for some theory. For example, you could list dozen replacements for the statements, such as replacements for the fifth postulate in euclidean geometry. "IFF" is one of the implications of "TFAE", although it as $P \rightarrow Q \rightarrow R \rightarrow P $, which equates to an iff relation.
What does $a\equiv b\pmod n$ mean? What does the $\equiv$ and $b\pmod n$ mean? For example, what does the following equation mean? $5x \equiv 7\pmod {24}$? Tomorrow I have a final exam so I really have to know what it is.
Let $a=qn+r_{1}$ and $b=pn+r_{2}$, where $0\leq r_{1},r_{2}<n$; here $r_{1}$ and $r_{2}$ are the remainders when $a$ and $b$ are divided by $n$. Then $a\equiv b\pmod n$ means precisely that $$r_{1}=r_{2}.$$
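For the specific equation in the question, here is how the congruence is used in practice (a sketch I am adding, not part of the answer above): since $\gcd(5,24)=1$ and $5\cdot 5=25\equiv 1\pmod{24}$, multiplying both sides of $5x\equiv 7\pmod{24}$ by $5$ gives $x\equiv 35\equiv 11\pmod{24}$; indeed $5\cdot 11=55=2\cdot 24+7\equiv 7\pmod{24}$.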
Constant growth rate? Say the population of a city is increasing at a constant rate of 11.5% per year. If the population is currently 2000, estimate how long it will take for the population to reach 3000. Using the formula given, so far I've figured out how many years it will take (see working below) but how can I narrow it down to the nearest month?
Let $a=1.115^{1/12}=\sqrt[12]{1.115}$, the twelfth root of $1.115$. Then $$1.115^x=(a^{12})^x=a^{12x}\;,$$ and $12x$ is the number of months that have gone by. Thus, if you can solve $a^y=1.5$, $y$ will be the desired number of months. Without logarithms the best that you’ll be able to do is find the smallest integer $y$ such that $a^y\ge 1.5$. By my calculation $a\approx1.009112468437$. You could start with $a^{36}$ and work up until you find the desired $y$.
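The month-by-month search is easy to do numerically; a quick sketch (my own, just a brute-force loop):

```python
a = 1.115 ** (1 / 12)            # monthly growth factor, the 12th root of 1.115
months, population = 0, 2000.0
while population < 3000:
    population *= a
    months += 1
print(months, population)        # smallest whole number of months reaching 3000
```

With these numbers the loop stops at 45 months, i.e. about 3 years and 9 months.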
How to determine whether an isomorphism $\varphi: {U_{12}} \to U_5$ exists? I have 2 groups $U_5$ and $U_{12}$ , .. $U_5 = \{1,2,3,4\}, U_{12} = \{1,5,7,11\}$. I have to determine whether an isomorphism $\varphi: {U_{12}} \to U_5$ exists. I started with the "$yes$" case: there is an isomorphism. So I searched for an isomorphism $\varphi$, but I didn't find one. So I guess there is no isomorphism $\varphi$. How can I prove it? Or at least explain? Please help.
Note that $x^2\equiv 1\pmod {12}$ for all elements of $U_{12}$ whereas the corresponding property does not hold in $U_5$.
For what integers $n$ does $\phi(2n) = \phi(n)$? For what integers $n$ does $\phi(2n) = \phi(n)$? Could anyone help me start this problem off? I'm new to elementary number theory and such, and I can't really get a grasp of the totient function. I know that $$\phi(n) = n\left(1-\frac1{p_1}\right)\left(1-\frac1{p_2}\right)\cdots\left(1-\dfrac1{p_k}\right)$$ but I don't know how to apply this to the problem. I also know that $$\phi(n) = (p_1^{a_1} - p_1^{a_1-1})(p_2^{a_2} - p_2^{a_2 - 1})\cdots$$ Help
Hint: You may also prove in general that $$\varphi(mn)=\frac{d\varphi(m)\varphi(n)}{\varphi(d)}$$ where $d=\gcd(m,n).$
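Applying the hint with $m=2$ (my own completion): if $n$ is odd then $d=\gcd(2,n)=1$ and $\varphi(2n)=\varphi(2)\varphi(n)=\varphi(n)$, while if $n$ is even then $d=2$ and $\varphi(2n)=\frac{2\varphi(2)\varphi(n)}{\varphi(2)}=2\varphi(n)\ne\varphi(n)$; so $\varphi(2n)=\varphi(n)$ exactly when $n$ is odd.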
Higher Moments of Sums of Independent Random Variables Let $X_1 \dots X_n$ be independent random variables taking values $\{-1,1\}$ with equal probability 1/2. Let $S_n = \sum X_i$. Is there a closed form expression for $E[(S_n)^{2j}]$. If not a closed form expression then can we hope to get a nice tight upper bound. I am leaving tight unspecified here because I do not know myself how tight I want the bound to be so please tell me any non-trivial bounds.
The random variable $S_n^{2j}$ takes the value $(2k-n)^{2j}$ with probability $\binom nk\frac 1{2^n}$, hence $$\mathbb E\left[S_n^{2j}\right]=\sum_{k=0}^n\binom nk(2k-n)^{2j}.$$ It involves computations of terms of the form $\sum_{k=0}^n\binom nk k^p$, $p\in\Bbb N$.
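A brute-force cross-check of that expression for small $n$ and $j$ (my own sketch, assuming Python 3.8+ for math.comb):

```python
from itertools import product
from math import comb

def moment_bruteforce(n, j):
    # average of (x1 + ... + xn)^(2j) over all 2^n sign patterns
    return sum(sum(signs) ** (2 * j) for signs in product([-1, 1], repeat=n)) / 2 ** n

def moment_formula(n, j):
    return sum(comb(n, k) * (2 * k - n) ** (2 * j) for k in range(n + 1)) / 2 ** n

for n in range(1, 7):
    for j in range(1, 4):
        assert abs(moment_bruteforce(n, j) - moment_formula(n, j)) < 1e-9
```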
Asking for a good starting tutorial on differential geometry for engineering background student. I just jumped into a project related to an estimation algorithm. It needs to build measures between two distributions. I found a lot of papers in this field required a general idea from differential geometry, which is like a whole new area for me as a linear algebra guy. I indeed follow a wiki leading studying by looking up the terms, and start to understand some of them, but I found this way of studying is not really good for me, because it is hard to connect these concepts. For example, I know the meaning for concepts like Manifold, Tangent space, Exponential map, etc. But I lack the understanding why they are defined in this way and how they are connected. I indeed want to put as much effort as needed on it, but my project has a quick due time, so I guess I would like to have a set of the minimum concepts I need to learn in order to have some feeling for this field. So in short, I really want to know if there is any good reference for beginner level like me -- for engineering background student? Also since my background is mainly in linear algebra and statistics, do I have to go through all the materials in geometry and topology ? I really appreciate your help. Following up: I checked the books suggested by answerers below, they are all very helpful. Especially I found Introduction to Topological Manifolds suggested by kjetil is very good for myself. Also I found http://www.youtube.com/user/ThoughtSpaceZero is a good complement (easier) resource that can be helpful for checking the basic meanings.
I suggest you read Lee's introduction to topological manifolds followed by his introduction to smooth manifolds.
What justifies assuming that a level surface contains a differentiable curve? My textbook's proof that the Lagrange multiplier method is valid begins: Let $X(t)$ be a differentiable curve on the surface $S$ passing through $P$ Where $S$ is the level surface defining the constraint, and $P$ is an extremum of the function that we're seeking to optimize. But how do we know that such a curve exists? $S$ is specifically defined as the set of points in the (open) domain of the continuously differentiable function $g$ with $g(X) = 0$ but $\operatorname{grad}g(X)\ne0$. The function $f$ that we're seeking to optimize is assumed to be continuously differentiable and defined on the same open domain as $g$, and $P$ is an extremum of $f$ on $S$.
By the Implicit Function Theorem, near $P$ you can represent your level surface as a graph, say $z=\phi(x,y)$, where $\phi$ is continuously differentiable. If $P=(a,b,\phi(a,b))$, take any line through $(a,b)$; its image under $(x,y)\mapsto (x,y,\phi(x,y))$ gives you a nice curve on $S$ through $P$.
What does it exactly mean for a subspace to be dense? My understanding of rationals being dense in the real numbers: I know when we say the rationals are dense in the reals it is because between any two rationals we can find an irrational number. In other words we can approximate irrational numbers using rationals. I think a more precise definition would be that any open ball around an irrational number will contain a rational number. If what I said is correct, I am trying to think about what it means for $C[a,b]$ (which are the continuous complex valued functions on $[a,b]$) to be a dense subspace of $L^2[a,b]$. From what I said above, I want to say that all functions in $L^2[a,b]$ can be approximated by functions from $C[a,b]$. Is the intuition correct here, and what would the precise definition be in this case?
In general topological spaces, a dense set is one whose intersection with any nonempty open set is nonempty. For metric spaces, since we have a topological base of open balls, this is equivalent to every point in the space being arbitrarily close, with respect to the metric, to a point in the dense set. Note that $L^2$ is a metric space, where $d(f,g) = ||f-g||_2$, with $||\cdot||_2$ being the $L^2$ norm.
Must certain rings be isomorphic to $\mathbb{Z}[\sqrt{a}]$ for some $a$ Consider the group $(\mathbb{Z}\times\mathbb{Z},+)$, where $(a,b)+(c,d)=(a+c,b+d)$. Let $\times$ be any binary operation on $\mathbb{Z}\times\mathbb{Z}$ such that $(\mathbb{Z}\times\mathbb{Z},+,\times)$ is a ring. Must there exist a non-square integer "$a$" such that $$(\mathbb{Z}\times\mathbb{Z},+,\times)\cong\mathbb{Z}[\sqrt{a}]?$$ Thank you. Edit: Chris Eagle noted that setting $x\times y=0$ for all $x,y\in\mathbb{Z}\times\mathbb{Z}$ would provide a counterexample. I would like to see other ecounterexamples though.
Probably the most natural counterexample is the following: If the operation $\times$ is defined such that the resulting ring is simply product of two copies of the usual ring $(\mathbb{Z},+,\times)$ (that is, if we set $(a,b)\times(c,d)=(ac,bd)$), then, again, no isomorphism exists, since the resulting ring $\mathbb{Z}\times \mathbb{Z}$ is not an integral domain and $\mathbb{Z}[\sqrt{a}]$ is.
How to check if three coordinates form a line Assume I have three coordinates from a world map in Longitude + Latitude. Is there a way to determine these three coordinates form a straight line? What if I was using a system with bounds that defines the 2 corners (northeast - southwest) in Long/Lat? The long & lat are expressed in decimal degrees.
I'll assume that by "line" you mean "great circle" -- that is, if you want to go from A to C via the shortest possible route, then keep going straight until you circle the globe and get back to A, you'll pass B on the way. The best coordinates for the question to be in are cartesian -- the 3D vector from the center of the earth to the point on the map. (Lat/long are two thirds of a set of "spherical coordinates", and can be converted to cartesian coordinates given the radius of the earth as $r$, though for this question $r$ doesn't matter). Once you have those points, which each one being a 3D vector), find the plane containing them, and check that it includes the center of the earth. Or, as a shortcut, just mark the points on a Gnomonic projection map, and use a ruler to see if they form a straight line there. Unfortunately, a given gnomonic projection map can't include the whole world.
When factoring polynomials does not result in repeated factors I found the following statement in the book introduction to finite fields and their applications: Let $x^n-1 = f_1(x)f_2(x)\dots f_m(x)$ be the decomposition of $x^n-1$ into monic irreducible factors over $\mathbb{F}_q$. If $\text{GCD}(n,q)=1$, then there are no repeated factors; i.e., polynomials $f_1, f_2, \ldots, f_m$ are all distinct. Firstly, please indicate why this statement holds. Secondly, are there similar theorems for polynomials other than $x^n-1$?
If $f(x)=g(x)^2h(x)$ then by the product rule of polynomial derivatives: $$f'(x)=2g(x)g'(x)h(x)+g(x)^2h'(x) =g(x)\left(2g'(x)h(x)+g(x)h'(x)\right)$$ So when $f(x)$ has a repeated factor, $f'(x)$ has a common factor with $f(x)$.
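To connect this to the question (a completion I am adding): the derivative of $x^n-1$ is $nx^{n-1}$, and when $\gcd(n,q)=1$ the coefficient $n$ is invertible in $\mathbb{F}_q$, so $\gcd(x^n-1,\,nx^{n-1})=1$ because $x\nmid x^n-1$; by the observation above, $x^n-1$ then has no repeated factor. The same argument works for any polynomial $f$: $f$ is squarefree whenever $\gcd(f,f')=1$.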
How to prove uniform distribution of $m\oplus k$ if $k$ is uniformly distributed? All values $m, k, c$ are $n$-bit strings. $\oplus$ stands for the bitwise modulo-2 addition. How to prove uniform distribution of $c=m\oplus k$ if $k$ is uniformly distributed? $m$ may be of any distribution and statistically independent of $k$. For example you have an $n$-bit string $m$ that is with probability $p=1$ always '1111...111'. Adding it bitwise to a uniformly distributed random $n$-bit string $k$ makes the result also uniformly distributed. Why?
This is not true. For example, if $m = k$, $c$ is not uniformly distributed.
Why does $\lim_{n \to \infty} \sqrt[n]{(-1)^n \cdot n^2 + 1} = 1$? As the title suggests, I want to know as to why the following function converges to 1 for $n \to \infty$: $$ \lim_{n \to \infty} \sqrt[n]{(-1)^n \cdot n^2 + 1} = 1 $$ For even $n$'s only $n^2+1$ has to be shown, which I did in the following way: $$\sqrt[n]{n^2} \le \sqrt[n]{n^2 + 1} \le \sqrt[n]{n^3}$$ Assuming we have already proven that $\lim_{n \to \infty}\sqrt[n]{n^k} = 1$ we can conclude that $$1 \le \sqrt[n]{n^2+1} \le 1 \Rightarrow \lim_{n \to \infty} \sqrt[n]{n^2+1} = 1.$$ For odd $n$'s I can't find the solution. I tried going the same route as for even $n$'s: $$\sqrt[n]{-n^2} \le \sqrt[n]{-n^2 + 1} \le \sqrt[n]{-n^3}$$ And it seems that it comes down to $$\lim_{n \to \infty} \sqrt[n]{-n^k}$$ I checked the limit using both Wolfram Alpha and a CAS and it converges to 1. Why is that?
It's common for CAS's like Wolfram Alpha to take $n$th roots that are complex numbers with the smallest angle measured counterclockwise from the positive real axis. So the $n$th root of negative real numbers winds up being in the first quadrant of the complex plane. As $n\to\infty$, this $n$th root would get closer to the real axis and explain why WA says the limit is 1. CAS's do this for continuity reasons; so that $\sqrt[n]{-2}$ will be close to $\sqrt[n]{-2+\varepsilon\,i}$. Instead of $\sqrt[n]{x}$, you can get around the issue with $\operatorname{sg}(x)\cdot\sqrt[n]{|x|}$ where $\operatorname{sg}(x)$ is the signum function: $1$ for positive $x$ and $-1$ for negative $x$.
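The same behaviour is easy to reproduce in Python (my own illustration), together with the $\operatorname{sg}(x)\cdot\sqrt[n]{|x|}$ workaround for odd $n$:

```python
import math

print((-8.0) ** (1 / 3))                  # complex principal root, roughly 1 + 1.732j, not -2

def real_nth_root(x, n):
    # sg(x) * |x|^(1/n); meaningful as a real n-th root when n is odd
    return math.copysign(abs(x) ** (1.0 / n), x)

print(real_nth_root(-8.0, 3))             # -2.0
```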
Explaining the physical meaning of an eigenvalue in a real world problem Contextual Problem A PhD student in Applied Mathematics is defending his dissertation and needs to make 10 gallon keg consisting of vodka and beer to placate his thesis committee. Suppose that all committee members, being stubborn people, refuse to sign his dissertation paperwork until the next day. Since all committee members will be driving home immediately after his defense, he wants to make sure that they all drive home safely. To do so, he must ensure that his mixture doesn't contain too much alcohol in it! Therefore, his goal is to make a 10 gallon mixture of vodka and beer such that the total alcohol content of the mixture is only $12$ percent. Suppose that beer has $8\%$ alcohol while vodka has $40\%$. If $x$ is the volume of beer and $y$ is the volume of vodka needed, then clearly the system of equations is \begin{equation} x+y=10 \\ 0.08 x +0.4 y = 0.12\times 10 \end{equation} My Question The eigenvalues and eigenvectors of the corresponding matrix \begin{equation} \left[ \begin{array}{cc} 1 & 1\\ 0.08 & 0.4 \end{array} \right] \end{equation} are \begin{align} \lambda_1\approx 1.1123 \\ \lambda_2\approx 0.2877 \\ v_1\approx\left[\begin{array}{c} 0.9938 \\ 0.1116 \end{array} \right] \\ v_2\approx\left[\begin{array}{c} -0.8145 \\ 0.5802 \end{array} \right] \end{align} How do I interpret their physical meaning in the context of this particular problem?
An interpretation of eigenvalues and eigenvectors of this matrix makes little sense because it is not in a natural fashion an endomorphism of a vector space: On the "input" side you have (liters of beer, liters of vodka) and on the output (liters of liquid, liters of alcohol). For example, nothing speaks against switching the order of beer and vodka (or of liquid and alcohol), which would result in totally different eigenvalues.
Expression for the Maurer-Cartan form of a matrix group I understand the definition of the Maurer-Cartan form on a general Lie group $G$, defined as $\theta_g = (L_{g^{-1}})_*:T_gG \rightarrow T_eG=\mathfrak{g}$. What I don't understand is the expression $\theta_g=g^{-1}dg$ when $G$ is a matrix group. In particular, I'm not sure how I'm supposed to interpret $dg$. It seemed to me that, in this concrete case, I should take a matrix $A\in T_gG$ and a curve $\sigma$ such that $\dot{\sigma}(0)=A$, and compute $\theta_g(A)=(\frac{d}{dt}g^{-1}\sigma(t))\big|_{t=0}=g^{-1}A$ since $g$ is constant. So it looks like $\theta_g$ is just plain old left matrix multiplication by $g^{-1}$. Is this correct? If so, how does it connect to the expression above?
This notation is akin to writing $d\vec x$ on $\mathbb R^n$. Think of $\vec x\colon\mathbb R^n\to\mathbb R^n$ as the identity map and so $d\vec x = \sum\limits_{j=1}^n \theta^j e_j$ is an expression for the identity map as a tensor of type $(1,1)$ [here $\theta^j$ are the dual basis to the basis $e_j$]. In the Lie group setting, one is thinking of $g\colon G\to G$ as the identity map, and $dg_a\colon T_aG\to T_aG$ is of course the identity. Since $(L_g)_* = L_g$ on matrices (as you observed), for $A\in T_aG$, $(g^{-1}dg)_a(A) = a^{-1}A = L_{a^{-1}*}dg_a(A)\in\frak g$.
Mean value theorem application for multivariable functions Define the function $f\colon \Bbb R^3\to \Bbb R$ by $$f(x,y,z)=xyz+x^2+y^2$$ The Mean Value Theorem implies that there is a number $\theta$ with $0<\theta <1$ for which $$f(1,1,1)-f(0,0,0)=\frac{\partial f}{\partial x}(\theta, \theta, \theta)+\frac{\partial f}{\partial y}(\theta, \theta, \theta)+\frac{\partial f}{\partial z}(\theta, \theta, \theta)$$ This is the last question. I don't have any idea. Sorry for not writing any idea. How can we show MVT for this question?
Hint: Consider $g(t):=f(t,t,t)$. What is $g'(t)$?
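Spelling the hint out (my own completion): by the chain rule $$g'(t)=\frac{\partial f}{\partial x}(t,t,t)+\frac{\partial f}{\partial y}(t,t,t)+\frac{\partial f}{\partial z}(t,t,t),$$ and the one-variable Mean Value Theorem applied to $g$ on $[0,1]$ gives some $\theta\in(0,1)$ with $g(1)-g(0)=g'(\theta)$, which is exactly the displayed identity since $g(1)=f(1,1,1)$ and $g(0)=f(0,0,0)$.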
Orthogonal Subspaces I am reading orthogonality in subspaces and ran into confusion by reading this part: Suppose S is a six-dimensional subspace of nine-dimensional space $\mathbb R^9$. a) What are the possible dimensions of subspace orthogonal to $S$? Answer: Sub spaces orthogonal to S can have dimensions $0,1,2,3.$ b) What are the possible dimensions of the orthogonal complement $S^{\perp}$ of $S$? Answer: Complement $S^{\perp}$ is the largest orthogonal subspace with dim $3$. Where I am having trouble is understanding how the answers make sense to the question, or how the answers are pretty much the answers. In other words, for a, how is the dimensions $0,1,2,3$? But maybe I am not understanding the question. Any assistance with helping me understand the answer would be appreciated.
Take a set of linearly independent vectors spanning $S$ and put them in a matrix (as rows). Since $S$ has dimension $6$, there are $6$ such vectors, so the matrix will have size 6x9. Now, the rank of this matrix is $6$, and the orthogonal complement of $S$ is the null space of this matrix, so its dimension is the nullity. So, by the rank-nullity theorem, the dimension of the orthogonal complement is $3$. Now, subspaces orthogonal to $S$ consist of vectors that belong to the null space of the above matrix, and are subspaces of the null space of the matrix (any subspace will do). These subspaces can only have dimension $0,1,2$ or $3$. Hope this helps.
Recursion Question - Trying to understand the concept Just trying to grasp this concept and was hoping someone could help me a bit. I am taking a discrete math class. Can someone please explain this equation to me a bit? $f(0) = 3$ $f(n+1) = 2f(n) + 3$ $f(1) = 2f(0) + 3 = 2 \cdot 3 + 3 = 9$ $f(2) = 2f(1) + 3 = 2 \cdot 9 + 3 = 21$ $f(3) = 2f(2) + 3 = 2 \cdot 21 + 3 = 45$ $f(4) = 2f(3) + 3 = 2 \cdot 45 + 3 = 93$ I do not see how they get the numbers to the right of the equals sign. Please someone show me how $f(2) = 2f(1) + 3 = 2 \cdot 9 + 3$. I see they get "$2\cdot$" because of $2f$ but how and where does the $9$ come from? I also see why the $+3$ at the end of each equation but how and where does that number in the middle come from?
Perhaps by considering a different sequence this may become clearer: $$f(0)=0$$ $$f(n+1)=f(n)+1$$ therefore $$\begin{align} f(n=1)&=f(n=0)+1=0+1=1\\ f(n=2)&=f(n=1)+1=(f(n=0)+1)+1=(0+1)+1=2\\ f(n=3)&=f(n=2)+1=(f(n=1)+1)+1=((f(n=0)+1)+1)+1=((0+1)+1)+1=3\\ \end{align}$$ So this will generate all the natural numbers.
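If it helps to see the original recurrence mechanically, here is a direct translation into code (my own sketch):

```python
def f(n):
    if n == 0:
        return 3                  # base case: f(0) = 3
    return 2 * f(n - 1) + 3       # f(n+1) = 2 f(n) + 3, written as f(n) = 2 f(n-1) + 3

print([f(n) for n in range(5)])   # [3, 9, 21, 45, 93], matching the table in the question
```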
Prove that every irreducible cubic monic polynomial over $\mathbb F_{5}$ has the form $P_{t}(x)=(x-t_{1})(x-t_{2})(x-t_{3})+t_{0}(x-t_{4})(x-t_{5})$? For a parameter $t=(t_{0},t_{1},t_{2},t_{3},t_{4},t_{5},)\in\mathbb F_{5}^{6}$ with $t_{0}\ne 0$ and {$t_{i},i>0$} are ordering of elements in $\mathbb F_{5}$ (t1~t5 is a permutation of [0]~[4] here at least as I think), define a polynomial $$P_{t}(x)=(x-t_{1})(x-t_{2})(x-t_{3})+t_{0}(x-t_{4})(x-t_{5}).$$ * *Show that $P_{t}(x)$ is irreducible in $\mathbb F_{5}[x]$. *Prove that two parameters $t,t'$ give the same polynomial over $\mathbb F_5$ if and only if $t_{0}=t_{0}'$ and $\{t_{4},t_{5}\}=\{t_{4}',t_{5}'\}$. *Show that every irreducible cubic monic polynomial over $\mathbb F_{5}$ is obtained in this way. After trying $x,x-1,x-2,x-3,x-4$ the first question can be solved. But I have no idea about where to start with the remaining two. Expanding the factor seems failed for proving two polynomials are equal to each other.
Hints: * *A cubic is reducible, only if it has a linear factor. But then it should have a zero in $\mathbb{F}_5$, so it suffices to check that none of $t_1,t_2,t_3,t_4,t_5$ is a zero of $P_t(x)$. *This part is tricky. I would go about it as follows. Let $t$ and $t'$ be two vectors of parameters. Consider the difference $$ Q_{t,t'}(x)=P_t(x)-(x-t'_1)(x-t'_2)(x-t'_3). $$ It is a quadratic. Show that if $\{t_1,t_2,t_3\}=\{t'_1,t'_2,t'_3\}$ then $Q_{t,t'}$ has two zeros in $\mathbb{F}_5$, but otherwise it has one or none. This allows you to make progress. *Count them! The irreducible cubics are exactly the minimal polynomials of those elements of the finite field $L=\mathbb{F}_{125}$ that don't belong to the prime field. The number of such elements is $125-5=120$. Each cubic has three zeros in $L$ (it's Galois over the prime field), so there are a total of 40 irreducible cubic polynomials over $\mathbb{F}_5$. How many distinct polynomials $P_t(x)$ are there?
Homeomorphism from the interior of a unit disk to the punctured unit sphere I need help constructing a homeomorphism from the interior of the unit disk, $\{(x,y)\mid x^2+y^2<1\}$, to the punctured unit sphere, $\{(x,y,z)\mid x^2+y^2+z^2 = 1\} - \{(0,0,1)\}$. I was thinking you could take a line passing through $(0,0,1)$ and a point in the disk and send that point to the part of the sphere the line passes through, but this function wouldn't cover the top half of the sphere.
Stereographic projection, as mentioned in the other answers, usually comes up in this context. Indeed, it is a beautiful way of identifying the punctured sphere with $\mathbb{R}^2$, in a conformal manner (preserving angles). However, if you only care about finding a homeomorphism to the disk (and there is no way to make this conformal anyway), then perhaps it is easier to just define the homeomorphism from the punctured sphere to the unit disk by $$ \phi(x,y,t) := \frac{t+1}{2} e^{i\theta}, $$ where $\theta$ is the angle of the point in the $x-y$-plane; i.e. $\theta=\arg(x+iy)$. If you prefer real coordinates, this means $$ \phi(x,y,t) = \frac{t+1}{2\sqrt{x^2+y^2}} \cdot (x,y).$$
Evaluating the integral of $f(x, y)=yx$ Evaluate the integral $I = \int_C f(x,y) ds$ where $f(x,y)=yx$, and the curve $C$ is given by $x=\sin(t)$ and $y=\cos(t)$ for $0\leq t\leq \frac{\pi}{2}$. I got the answer for this as $\frac{\sqrt{2}}{2}$ is that right?
Evaluate the parameterized integral $$ \int_0^{\pi/2} \cos t \sin t \sqrt{\cos^2 t + \sin^2 t} \, dt = \int_0^{\pi/2} \cos t \sin t \, dt. $$ I don't see a square root of 2 appearing anywhere.
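For completeness (my own addition): $$\int_0^{\pi/2}\cos t\sin t\,dt=\left[\tfrac{1}{2}\sin^2 t\right]_0^{\pi/2}=\tfrac{1}{2},$$ so the value of the line integral is $\frac12$ rather than $\frac{\sqrt2}{2}$.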
Weakly compact implies bounded in norm The weak topology on a normed vector space $X$ is the weakest topology making every bounded linear functionals $x^*\in X^*$ continuous. If a subset $C$ of $X$ is compact for the weak topology, then $C$ is bounded in norm. How does one prove this fact?
The first key point is that an element of $x$ can be identified with a linear functional of norm $\|x\|$ on the dual $X^*$. Indeed, it follows from Hahn-Banach that there exists $x^*\in X^*$ such that $x^*(x)=\|x\|$ with $\|x^*\|=1$. Therefore, denoting by $e_x$ the linear functional (called point evaluation) $e_x:x^*\longmapsto x^*(x)$ on $X^*$, we have $$ \|e_x\|=\sup_{\|x^*\|\leq 1}|e_x(x^*)|=\sup_{\|x^*\|\leq 1}|x^*(x)|=\max_{\|x^*\|\leq 1}|x^*(x)|=\|x\|. $$ This yields the canonical isometric embedding of $X$ into the double dual $X^{**}$. The second key point is that $X^*$ is always a Banach space. Therefore we can use the uniform boundedness principle (Banach-Steinhaus) for bounded linear functionals on $X^*$, that is in $X^{**}$. If $C$ is weakly compact in $X$, then the image of $C$ under every $x^*\in X^*$ is compact, hence bounded, in the base field. Just because $x^*$ is continuous for the weak topology by definition of the latter, and because the continuous image of a compact space is compact. It follows that $$ \sup_{x \in C}|x^*(x)|=\sup_{x \in C}|e_x(x^*)|<\infty \qquad \forall x^*\in X^*. $$ By the uniform boundedness principle applied to $\{e_x\,;\,x\in C\}$, this implies that $$ \sup_{x\in C}\|x\|=\sup_{x\in C}\|e_x\|<\infty $$ which says precisely that $C$ is bounded in norm. Note: since the weak topology is Hausdorff, we get that weakly compact implies weakly closed + norm bounded. The converse is false. For instance, the closed unit ball of $c_0$ is weakly closed (the weak closure of a convex set is the same as its norm closure) and norm bounded, but not weakly compact (the closed unit ball of a normed vector space $X$ - actually automatically a Banach space in either case - is weakly compact if and only if $X$ is reflexive). Things are better with the weak*-topology on the dual of a Banach space $X$: weak*-compact is equivalent to weak*-closed + norm bounded.
Proof that $ \lim_{x \to \infty} x \cdot \log(\frac{x+1}{x+10})$ is $-9$ Given this limit: $$ \lim_{x \to \infty} x \cdot \log\left(\frac{x+1}{x+10}\right) $$ I may use this trick: $$ \frac{x+1}{x+1} = \frac{x+1}{x} \cdot \frac{x}{x+10} $$ So I will have: $$ \lim_{x \to \infty} x \cdot \left(\log\left(\frac{x+1}{x}\right) + \log\left(\frac{x}{x+10}\right)\right) = $$ $$ = 1 + \lim_{x \to \infty} x \cdot \log\left(\frac{x}{x+10}\right) $$ But from here I am lost, I still can't make it look like a fondamental limit. How to solve it?
I'll use the famous limit $$\left(1+\frac{a}{x+1}\right)^x\approx\left(1+\frac{a}{x}\right)^x\to e^a$$ We have $$x \ln \frac{x+1}{x+10}=x \ln \frac{x+1}{x+1+9}=-x\ln\left( 1+\frac{9}{x+1} \right)=-\ln\left( 1+\frac{9}{x+1} \right)^x\to-9$$
Poisson Estimators Consider a simple random sample of size $n$ from a Poisson distribution with mean $\mu$. Let $\theta=P(X=0)$. Let $T=\sum X_{i}$. Show that $\tilde{\theta}=[(n-1)/n]^{T}$ is an unbiased estimator of $\theta$.
We have $\Pr(X_1=0)=e^{-\mu}=\theta$. Therefore $$ \theta=\mathbb E(\Pr(X_1=0\mid X_1+\cdots+X_n)). $$ So what is $$ \Pr(X_1=0\mid X_1+\cdots+X_n=x)\text{ ?} $$ It is $$ \begin{align} & {}\qquad \frac{\Pr(X_1=0\text{ and } X_1+\cdots+X_n=x)}{\Pr(X_1+\cdots+X_n=x)} = \frac{\Pr(X_1=0)\cdot\Pr(X_2+\cdots+X_n=x)}{e^{-n\mu}(n\mu)^x/(x!)} \\[10pt] & = \frac{\left(e^{-\mu}\right)\cdot\left(e^{-(n-1)\mu}((n-1)\mu)^x/(x!)\right)}{e^{-n\mu}(n\mu)^x/(x!)} = \left(\frac{n-1}{n}\right)^x \\[10pt] \end{align} $$ Therefore $$ \mathbb E\left( \left(\frac{n-1}{n}\right)^{X_1+\cdots+X_n} \right) = \theta. $$
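A quick Monte Carlo check of the unbiasedness (my own sketch; the Poisson sampler uses Knuth's method and the parameters are arbitrary):

```python
import math, random

def poisson(mu):
    # Knuth's method, adequate for small mu
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def estimate(mu, n, reps=100000):
    base = (n - 1) / n
    return sum(base ** sum(poisson(mu) for _ in range(n)) for _ in range(reps)) / reps

mu, n = 1.3, 5
print(estimate(mu, n), math.exp(-mu))   # the two numbers should be close
```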
Calculation for absolute value pattern I have a weird pattern I have to calculate and I don't quite know how to describe it, so my apologies if this is a duplicate somewhere.. I want to solve this pattern mathematically. When I have an array of numbers, I need to calculate a secondary sequence (position) based on the index and the size, i.e.: 1 element: index: 0 position: 1 2 elements: index: 0 1 position: 2 1 3 elements: index: 0 1 2 position: 2 1 3 4 elements: index: 0 1 2 3 position: 4 2 1 3 5 elements: index: 0 1 2 3 4 position: 4 2 1 3 5 6 elements: index: 0 1 2 3 4 5 position: 6 4 2 1 3 5 etc.... The array can be 1-based as well if that would make it easier. I wrote this out to 9 elements to try and find a pattern but the best I could make out was that it was some sort of absolute value function with a variable offset...
For an array with $n$ elements, the function is: $$f(n,k)=1+2(k-[n/2])$$ when $k\geq [n/2]$ and $$f(n,k)=2+2([n/2]-k-1)$$ when $k<[n/2]$, where $[\cdot]$ is the floor function. Example: $$f(5,4)=1+2(4-2)=5$$ $$f(5,1)=2+2(2-1-1)=2$$
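Translated into code and checked against the tables in the question (my own sketch):

```python
def position(n, k):
    half = n // 2                       # the floor [n/2]
    if k >= half:
        return 1 + 2 * (k - half)
    return 2 + 2 * (half - k - 1)

for n in range(1, 7):
    print(n, [position(n, k) for k in range(n)])
# 1 [1]
# 2 [2, 1]
# 3 [2, 1, 3]
# 4 [4, 2, 1, 3]
# 5 [4, 2, 1, 3, 5]
# 6 [6, 4, 2, 1, 3, 5]
```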
For the Fibonacci numbers, show for all $n$: $F_1^2+F_2^2+\dots+F_n^2=F_nF_{n+1}$ The definition of a Fibonacci number is as follows: $$F_0=0\\F_1=1\\F_n=F_{n-1}+F_{n-2}\text{ for } n\geq 2$$ Prove the given property of the Fibonacci numbers for all n greater than or equal to 1. $$F_1^2+F_2^2+\dots+F_n^2=F_nF_{n+1}$$ I am pretty sure I should use weak induction to solve this. My professor got me used to solving it in the following format, which I would like to use because it help me map everything out... This is what I have so far: Base Case: Solve for $F_0$ and $F_1$ for the following function: $F_nF_{n+1}$. Inductive Hypothesis: What I need to show: I need to show $F_{n+1}F_{n+1+1}$ will satisfy the given property. Proof Proper: (didn't get to it yet) Any intro. tips and pointers?
This identity is clear from the following diagram: (imagine here a generalized picture with $F_i$ notation) The area of the rectangle is obviously $$F_n(F_{n}+F_{n-1})=F_nF_{n+1}$$ On the other hand, since the area of a square with side $F_i$ is $F_i^2$, the total area of the squares is obviously: $$F_1^2+F_2^2+\dots+F_n^2$$ Therefore: $$F_1^2+F_2^2+\dots+F_n^2=F_nF_{n+1}$$ You can even convert this graphical proof to an inductive proof - your inductive step would consist of adding a square $F_{n+1} \times F_{n+1}$.
Is [0,1] closed? I thought it was closed, under the usual topology on $\mathbb{R}$, since its complement $(-\infty, 0) \cup (1,\infty)$ is open. However, then the intersection number would not agree mod 2, since it can arbitrarily intersect a compact manifold an even or odd number of times. P.S. The corollary. $X$ and $Z$ are closed submanifolds inside $Y$ with complementary dimension, and at least one of them is compact. If $g_0, g_1: X \to Y$ are arbitrary homotopic maps, then we have $I_2(g_0, Z) = I_2(g_1, Z).$ The contradiction (my question): Let [0,1] be the closed manifold $Z$, and then it can intersect an arbitrary compact manifold any number of times, contradicting the corollary. Aneesh Karthik C's comment answered my question, so just to clarify: I was thinking $g_0$ is one wiggle of [0,1] such that it intersects a compact manifold once, and $g_1$ is some other wiggle such that [0,1] intersects twice. Then it contradicts the corollary. But apparently it doesn't, because [0,1] does not satisfy the corollary as a closed manifold. By definition, a closed manifold is a type of topological space, namely a compact manifold without boundary. Since [0,1] is not a closed manifold, it can intersect a compact manifold as much as it wants, without contradicting the theorem. I didn't realize that [0,1] is not a closed manifold. So I thought it contradicted and that's why I asked the question.
A closed manifold is a compact boundaryless manifold. So the last line "Let [0,1] be the closed manifold Z" is wrong, for $\partial[0,1]\ne\emptyset$.
Isn't it a correct observation that no norm on $B[0,1]$ can be found to make $C[0,1]$ open in it? There's a problem in my text which reads as: Show that $C[0,1]$ is not an open subset of $(B[0,1],\|.\|_\infty).$ I've already shown in a previous example that for any open subspace $Y$ of a normed linear space $(X,\|.\|),~Y=X.$ Even though, using this result, this problem turns out to be immediate, the sup-norm becomes immaterial. And I can't believe what I'm left with: No norm on $B[0,1]$ can be found to make $C[0,1]$ open in it. Is this a correct observation?
This statement is actually true in more general settings. It seems convenient to talk about topological vector spaces, of which normed spaces are a very special kind. So let $X$ be a topological vector space and $Y$ be an open subspace. So we know $Y$ contains some open set. Since the topology on a topological vector space is translation invariant (that is, $V$ is open if and only if $V+x$ is open for all $x\in X$; you can check this in normed spaces), we know $Y$ contains some open neighborhood of the origin, say $0\in V\subset Y$. Another interesting fact about topological vector spaces is that for any open neighborhood $W$ of the origin, one has \begin{equation} X=\cup_{n=1}^{\infty}nW. \end{equation} Again you might check this for normed spaces. Applying this to our $V$, and noting that $Y$ is closed under scalar multiplication, we have \begin{equation} X=\cup nV\subset \cup nY=Y. \end{equation} So we have just proved: the only open subspace is the entire space. Note: If $S$ is a subset of a vector space, for a point $x$ and a scalar $\alpha$ we define \begin{equation} x+S:=\{x+s|s\in S\} \end{equation} and \begin{equation} \alpha S:=\{\alpha s|s\in S\}. \end{equation}
Finding perpendicular bisector of the line segement joining $ (-1,4)\;\text{and}\;(3,-2)$ Find the perpendicular bisector of the line joining the points $(-1,4)\;\text{and}\;(3,-2).\;$ I know this is a very easy question, and the answer is an equation. So any hints would be very nice. thanks
Hint: The line must be orthogonal to the difference vector $(3-(-1),-2-4)$ and pass through the midpoint $(\frac{-1+3}2,\frac{4-2}2)$.
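Carrying the hint through (my own completion): the perpendicular bisector consists of the points $(x,y)$ with $(x-1,y-1)\cdot(4,-6)=0$, i.e. $4(x-1)-6(y-1)=0$, which simplifies to $2x-3y+1=0$ (equivalently $y=\frac{2x+1}{3}$).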
Can we use the Second Mean Value Theorem over infinite intervals? Let $[a,b]$ be any closed interval and let $f,g$ be continuous on $[a,b]$ with $g(x)\geq 0$ for all $x\in[a,b]$. Then the Second Mean Value Theorem says that $$\int_a^bf(t)g(t)\text{d}t = f(c)\int_a^b g(t)\text{d}t,$$ for some $c\in(a,b)$. Does this theorem work on the interval $[0,\infty]$ ? EDIT: Assuming the integrals involved converge.
No. Consider $g(t)=\frac{1}{t}$, $f=g$ on $[1,\infty)$.
Floor Inequalities Proving the integrality of an fractions of factorials can be done through De Polignac formula for the exponent of factorials, reducing the question to an floored inequality. Some of those inequalities turn out to be very hard to proof if true at all. The first is, given $x_i \in \mathbb{R}$ and $\{x_i\} = x_i - \lfloor x_i \rfloor$: $$\sum_{i=1}^{n}\left \lfloor n \{x_i\} \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}\{x_i\} \right \rfloor$$ I was able to prove this one by arguing that if $\left \lfloor \sum_{i=1}^{n}\{x_i\} \right \rfloor = L$ than there is some $x_k \geq \frac{L}{n}$, so the left side is at least $L$. But I was unable to apply the same idea to the following inequality: $$\sum_{i=1}^{n}\left \lfloor q_i \{x_i\} \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}\{x_i\} \right \rfloor$$ Where $q_i \in \mathbb{N}$ and $\frac{1}{q_1} + \dotsm + \frac{1}{q_n} \leq 1$. Also, this generalization was proposed: $$\sum_{i=1}^{n}\left \lfloor q_i \{x_i\} \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}k_i\{x_i\} \right \rfloor$$ Where $q_i, k_i \in \mathbb{N}$ and $\frac{k_1}{q_1} + \dotsm + \frac{k_n}{q_n} \leq 1$. I don't know if the last two inequalities are correct neither know how to proof if wrong or any counter-example if not. Could someone help?
Let $\theta_i=\{x_i\}$, so that the second inequality reads $$\sum_{i=1}^{n}\left \lfloor q_i \theta_i \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}\theta_i \right \rfloor. \tag{1}$$ Also let $L$ denote the right side, as in the proof of the OP. Now if for each $i$ we had $\theta_i<L/q_i$ then we would have $$\sum_{i=1}^n \theta_i<L \sum_{i=1}^n \frac{1}{q_i} \le L,$$ the last inequality from the assumption that the reciprocals of the $q_i$ sum to at most $1$. From this, similar in spirit to the OP's proof, we get that for at least one index $i$ we have $\theta_i \ge L/q_i$, in other words $q_i \theta_i \ge L$, implying that the term $\left \lfloor q_i \theta_i \right \rfloor \ge L$ and establishing (1). Perhaps a similar idea would work for the final inequality. ADDED: Yes, the third inequality has a similar proof. With the notation above it reads $$\sum_{i=1}^{n}\left \lfloor q_i \theta_i \right \rfloor \geq \left \lfloor \sum_{i=1}^{n}k_i\theta_i \right \rfloor. \tag{2}$$ Again let $L$ denote the right side, and assume for each $i$ we had $\theta_i<L/q_i$. then we would have $$\sum_{i=1}^n k_i\theta_i<L \sum_{i=1}^n \frac{k_i}{q_i}\le L,$$ using $\sum (k_i/q_i) \le 1.$ This as before implies there is an index $i$ for which $q_i \theta_i \ge L$ to finish.
Indefinite integral $\int{\frac{dx}{x^2+2}}$ I cannot manage to solve this integral: $$\int{\frac{dx}{x^2+2}}$$ The problem is the $2$ in the denominator; I am trying to decompose it into something like $\int{\frac{dt}{t^2+1}}$: $$t^2+1 = x^2 +2$$ $$\int{\frac{dt}{2 \cdot \sqrt{t^2-1} \cdot (t^2+1)}}$$ But it's even harder than the original one. I also cannot try partial fraction decomposition because the polynomial has no real roots. How to go on?
Hint: $$x^2+2 = 2\left(\frac{x^2}{\sqrt{2}^2}+1\right)$$
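Following the hint to the end (my own completion): substituting $x=\sqrt2\,t$, $dx=\sqrt2\,dt$, $$\int\frac{dx}{x^2+2}=\int\frac{\sqrt2\,dt}{2(t^2+1)}=\frac{1}{\sqrt2}\arctan t+C=\frac{1}{\sqrt{2}}\arctan\frac{x}{\sqrt{2}}+C.$$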
Is a Whole Number A Rational Number Is a whole number part of the rational numbers, or is it just a whole number?
The real answer, as usual, is "it depends". As the other answers have indicated, it is possible to identify whole numbers with certain rational numbers. On the other hand, it's also possible to identify rational numbers with certain ordered pairs of integers. So it really depends on your perspective/purpose. If you're doing something like number theory, you'll be thinking in terms of a whole number being a rational number. If you're thinking in terms of "mathematical foundations", you'll most likely be looking at it from the other direction.