sequence of decreasing compact sets In Royden 3rd P192, Assertion 1: Let $K_n$ be a decreasing sequence of compact sets, that is, $K_{n+1} \subset K_n$. Let $O$ be an open set with $\bigcap_1^\infty K_n \subset O$. Then $K_n \subset O$ for some $n$. Assertion 2: From this, we can easily see that $\bigcap_1^\infty K_n$ is also compact. I know this is trivial if $K_1$ is $T_2$ (Hausdorff). But is it true if we assume only $T_0$ or $T_1$? Any counterexample is greatly appreciated.
Here's a $T_1$ space for which Assertion 2 fails. Take the set of integers. Say that a set is open iff it is either a subset of the negative integers or else is cofinite. Then let $K_n$ be the complement of $\{0, 1, \ldots, n\}$. Then each $K_n$ is compact, but $\bigcap_{n=1}^\infty K_n$ is the set of negative integers, which is open and noncompact.
Relating Gamma and factorial function for non-integer values. We have $$\Gamma(n+1)=n!,\ \ \ \ \ \Gamma(n+2)=(n+1)!$$ for integers, so if $\Delta$ is some real value with $$0<\Delta<1,$$ then $$n!\ <\ \Gamma(n+1+\Delta)\ <\ (n+1)!,$$ because $\Gamma$ is monotone there and so there is another number $f$ with $$0<f<1,$$ such that $$\Gamma(n+1+\Delta)=(1-f)\times n!+f\times(n+1)!.$$ How can we make this more precise? Can we find $f(\Delta)$? Or if we know the value $\Delta$, which will usually be the case, what $f$ will be a good approximation?
Asymptotically, as $n \to \infty$ with fixed $\Delta$, $$ f(n,\Delta) = \dfrac{\Gamma(n+1+\Delta)-\Gamma(n+1)}{\Gamma(n+2)-\Gamma(n+1)} = n^\Delta \left( \dfrac{1}{n} + \dfrac{\Delta(1+\Delta)}{2n^2} + \dfrac{\Delta(-1+\Delta)(3\Delta+2)(1+\Delta)}{24n^3} + \ldots \right) $$
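A quick numerical check of this expansion is easy with the standard library's `math.gamma`; the values of $n$ and $\Delta$ below are arbitrary choices, and this is only a sketch comparing the exact ratio with the three displayed terms:

```python
import math

def f_exact(n, delta):
    # f = (Gamma(n+1+delta) - Gamma(n+1)) / (Gamma(n+2) - Gamma(n+1))
    g1 = math.gamma(n + 1)
    return (math.gamma(n + 1 + delta) - g1) / (math.gamma(n + 2) - g1)

def f_asymptotic(n, delta):
    # first three terms of the expansion quoted above
    d = delta
    return n**d * (1/n + d*(1 + d)/(2*n**2) + d*(d - 1)*(3*d + 2)*(1 + d)/(24*n**3))

for n in (10, 50, 100):
    print(n, f_exact(n, 0.3), f_asymptotic(n, 0.3))
```

The two columns agree to more and more digits as $n$ grows, as expected for an asymptotic series with fixed $\Delta$.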
What's the difference between $\mathbb{Q}[\sqrt{-d}]$ and $\mathbb{Q}(\sqrt{-d})$? Sorry to ask this, I know it's not really a maths question but a definition question, but Googling didn't help. When asked to show that elements in each are irreducible, is it the same?
The notation $\rm\:R[\alpha]\:$ denotes a ring-adjunction, and, analogously, $\rm\:F(\alpha)\:$ denotes a field adjunction. Generally if $\alpha$ is a root of a monic $\rm\:f(x)\:$ over a domain $\rm\:D\:$ then $\rm\:D[\alpha]\:$ is a field iff $\rm\:D\:$ is a field. The same is true for arbitrary integral extensions of domains. See this post for a detailed treatment of the quadratic case.
Determine the conditional probability mass function of the size of a randomly chosen family containing 2 girls. Suppose that 15 percent of the families in a certain community have no children, 20 percent have 1, 35 percent have 2, and 30 percent have 3 children; suppose further that each child is equally likely (and independently) to be a boy or a girl. If a family is chosen at random from this community, then B, the number of boys, and G, the number of girls, determine the conditional probability mass function of the size of a randomly chosen family containing 2 girls. My attempt There are exactly three ways this can happen: 1) family has exactly 2 children, both girls 2) family has 2 girls and 1 boy 3) family has all 3 girls The first one is pretty simple. Given that you are going to "select" exactly two children, find the probability that they are BOTH girls (it's a coin flip, so p = 50% = 0.5): $0.5^2 = 0.25$ So the probability that the family has exactly 2 girls is the probability that the family has exactly two children times the probability that those two children will be girls: $\frac{1}{4} \cdot 35\% = 8.75\%$ Now find the probability that, given the family has exactly 3 children, exactly two are girls. Now you flip 3 times but only need to "win" twice; this is a binomial experiment. There are 3 choose 2 = 3 ways to have exactly two girls: 1st, 2nd, or 3rd is a boy... interestingly the probability of having any particular permutation is just $0.5^3 = 1/8$ (because it's still $0.5 \times 0.5$ for two girls, then $0.5$ for one boy). So the chance of exactly 2 girls is: $\frac{3}{8}$ Now find the probability for having exactly 3 girls... that's easy, there's only one way, you just have all 3 girls, probability is just $\frac{1}{8}$. Now, add these up $\frac{3}{8} + \frac{1}{8} = \frac{4}{8} = \frac{1}{2}$ So now use the percent of families with exactly 3 children to find this portion of the probability: $\frac{1}{2} \cdot 30\% = 15\%$ Hence, add the two probabilities... here it is in full detail $$\begin{eqnarray}\mathbb{P}(\text{contains 2 girls}) &=& \mathbb{P}(\text{2 children}) \times \mathbb{P}(\text{2 girls} \mid \text{2 children}) + \\ &\phantom{+=}& \mathbb{P}(\text{3 children}) \times \mathbb{P}(\text{2 or 3 girls} \mid \text{3 children}) \end{eqnarray}$$ $\frac{1}{4} \cdot 35\% + 30\% \times \left(\frac{3}{8} +\frac{1}{8}\right)$ $8.75\% + 15\% = 23.75\%$ Is my attempt correct?
It’s correct as far as it goes, but it’s incomplete. You’ve shown that $23.75$% of the families have at least two girls, but that doesn’t answer the question. What you’re to find is the probability mass function of the family size given that the family has at least two girls. In other words, you want to calculate $$\Bbb P(B+G=x\mid G\ge 2)$$ for the various possible values of $x$. This is very easy and obvious for $x=0$ and $x=1$, so I’ll skip to $x=2$. You calculated that $8.75$% of all the families have exactly two girls and no boys. What fraction of the families with at least two girls is this? It’s $$\frac{8.75}{23.75}=\frac7{19}\;,$$ so the conditional probability that a randomly chosen family has exactly two children given that it has at least two girls is $7/19$: $\Bbb P(B+G=2\mid G\ge 2)=7/19$. From here you should be able to finish it, I think.
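For anyone who wants to double-check the arithmetic, a brute-force enumeration reproduces the $23.75\%$ figure and the conditional probabilities ($7/19$ for two children, $12/19$ for three). A sketch in Python using exact fractions:

```python
from itertools import product
from fractions import Fraction

size_prob = {0: Fraction(15, 100), 1: Fraction(20, 100),
             2: Fraction(35, 100), 3: Fraction(30, 100)}

# joint probability of (family size, number of girls)
joint = {}
for size, ps in size_prob.items():
    for pattern in product("BG", repeat=size):
        girls = pattern.count("G")
        joint[(size, girls)] = joint.get((size, girls), 0) + ps * Fraction(1, 2**size)

p_two_girls = sum(p for (size, girls), p in joint.items() if girls >= 2)
print(p_two_girls)                        # 19/80 = 23.75%
for size in range(4):
    p = sum(p for (s, girls), p in joint.items() if s == size and girls >= 2)
    print(size, p / p_two_girls)          # conditional pmf: 0, 0, 7/19, 12/19
```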
"Weierstrass preparation" of $\mathbb{C}[[X,Y]]$ In Lang's book "Algebra", theorem 9.2, it said that suppose $f\in \mathbb{C}[[X,Y]]$, then by some conditions imposed to $f$, $f$ can be written as a product of a polynomial $g\in \mathbb{C}[[X]][Y]$ and a unit $u$ in $\mathbb{C}[[X,Y]]$. It suggests the following claim is not true in general. Let $f\in \mathbb{C}[[X,Y]]$, then there exists a $g\in \mathbb{C}[X,Y]$ and a unit $u\in \mathbb{C}[[X,Y]]$ such that $f=gu$. I would like to find a counter-example. Thanks.
It is known that there are transcendental power series $h(X)\in \mathbb C[[X]]$ over $\mathbb C[X]$. Note that $Xh(X)$ is also transcendental. Let $$f(X,Y)=Y-Xh(X)\in\mathbb C[[X,Y]].$$ Suppose $f=gu$ with $g$ polynomial and $u$ invertible. Consider the ring homomorphism $\phi: \mathbb C[[X,Y]]\to \mathbb C[[X]]$ which maps $X$ to $X$ and $Y$ to $Xh(X)$. Applying this homomorphism to $f=gu$, we get $$0 = g(X, Xh(X))\phi(u), \quad \phi(u)\in \mathbb C[[X]]^*.$$ So $g(X, Xh(X))=0$. As $Xh(X)$ is transcendental over $\mathbb C[X]$, and $g(X,Y)\in \mathbb C[X][Y]$, this implies that $g(X,Y)=0$. Hence $f(X,Y)=0$, absurd.
How to prove $l(D+P) \leq l(D) + 1$ Let $X$ be an irreducible curve, and define $\mathcal{L}(D)$ as usual for $D \in \mathrm{Div}(X)$. Define $l(D) = \mathrm{dim} \ \mathcal{L}(D)$. I'd like to show that for any divisor $D$ and point $P$, $l(D+P) \leq l(D) + 1$. Say $D = \sum n_i P_i$. I can prove this provided $P$ is not any of the $P_i$, by considering the map $\lambda : \mathcal{L}(D) \to k$, $f \mapsto f(P)$. This map has kernel $\mathcal{L}(D-P)$, and rank-nullity gives the result. But if $P$ is one of the $P_i$, say $P=P_j$, then I'm struggling. Any help would be appreciated. Thanks
Here is an elementary formulation, without sheaves. Let $t\in Rat(X)$ be a uniformizing parameter at $P$ (that is, $t$ vanishes with order $1$ at $P$) and let $n_P\in \mathbb Z$ be the coefficient of $D=\sum n_QQ$ at $P$. You then have an evaluation map $$\lambda: \mathcal L(D+P)\to k:f\mapsto (t^{n_P +1}\cdot f)(P)$$ and you can conclude with the rank-nullity theorem, or in more sophisticated terminology with the exact sequence of $k$-vector spaces $$ 0\to \mathcal L(D)\to \mathcal L(D+P)\stackrel {\lambda}{\to} k $$
Scheduling 12 teams competing at 6 different events I have a seemingly simple question. There are 12 teams competing in 6 different events. Each event sees two teams compete. Is there a way to arrange the schedule so that no two teams meet twice and no team repeats an event? Thanks. Edit: Round 1: All 6 events happen at the same time. Round 2: All 6 events happen at the same time. And so on until Round 6.
A solution to the specific problem is here:

Event 1    Event 2    Event 3    Event 4    Event 5    Event 6
1 - 2      11 - 1     1 - 3      6 - 1      10 - 1     1 - 9
3 - 4      2 - 3      4 - 2      2 - 11     2 - 9      10 - 2
5 - 6      4 - 5      5 - 7      7 - 4      3 - 11     4 - 8
7 - 8      6 - 7      8 - 6      3 - 10     4 - 6      7 - 3
9 - 10     8 - 9      10 - 11    9 - 5      8 - 5      11 - 5
11 - 12    12 - 10    9 - 12     12 - 8     7 - 12     12 - 6

which came from the following webpage: http://www.crowsdarts.com/roundrobin/sched12.html
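If you want to sanity-check a table like this, a short script can verify both conditions (no pair of teams meets twice, every team plays each event exactly once). This is only a verification sketch, with the pairings above transcribed by hand:

```python
# schedule[event] = list of (team, team) pairings at that event
schedule = {
    1: [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)],
    2: [(11, 1), (2, 3), (4, 5), (6, 7), (8, 9), (12, 10)],
    3: [(1, 3), (4, 2), (5, 7), (8, 6), (10, 11), (9, 12)],
    4: [(6, 1), (2, 11), (7, 4), (3, 10), (9, 5), (12, 8)],
    5: [(10, 1), (2, 9), (3, 11), (4, 6), (8, 5), (7, 12)],
    6: [(1, 9), (10, 2), (4, 8), (7, 3), (11, 5), (12, 6)],
}

all_pairs = [frozenset(m) for ms in schedule.values() for m in ms]
assert len(all_pairs) == len(set(all_pairs)), "some pair of teams meets twice"

for event, ms in schedule.items():
    teams = [t for m in ms for t in m]
    assert sorted(teams) == list(range(1, 13)), f"event {event}: a team repeats"

print("no pair meets twice, every team plays each event exactly once")
```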
Given an integral symplectic matrix and a primitive vector, is their product also primitive? Given a matrix $A \in Sp(k,\mathbb{Z})$, and a column $k$-vector $g$ that is primitive ($g \neq m r$ for any integer $m > 1$ and any integer column $k$-vector $r$), why does it follow that $Ag$ is also primitive? Can we take $A$ from a larger space than the space of integral symplectic matrices?
Suppose $\,Ag\,$ is non-primitive; then $\,Ag=mr\,$ for some integer $\,m>1\,$ and some integer vector $\,r\,$. Since $\,A^{-1}\,$ is again an integral (symplectic) matrix, $\,g=mA^{-1}r\,$ with $\,A^{-1}r\,$ an integer vector, which means $\,g\,$ is not primitive.
Why is solving non-linear recurrence relations "hopeless"? I came across a non-linear recurrence relation I want to solve, and most of the places I look for help will say things like "it's hopeless to solve non-linear recurrence relations in general." Is there a rigorous reason or an illustrative example as to why this is the case? It would seem to me that the correct response would be "we just don't know how to solve them," or "there is no solution using elementary functions," but there might be a solution in the form of, say, an infinite product or a power series or something. Just for completion, the recurrence relation I'm looking at is (slightly more than just non-linear, and this is a simplified version): $p_n = a_n b_n\\ a_n = a_{n-1} + c \\ b_n = b_{n-1} + d$ And $a_0 > 0, b_0 > 0, c,d$ fixed constants
Although it is possible to solve selected non-linear recurrence relations if you happen to be lucky, in general all sorts of peculiar and difficult-to-characterize things can happen. One example is found in chaotic systems. These are hypersensitive to initial conditions, meaning that the behavior after many iterations is extremely sensitive to tiny variations in the initial conditions, and thus any formula expressing the relationship will grow impossibly large. These recurrence equations can be amazingly simple, with $x_{n+1} = 4x_n(1-x_n)$, $x_0$ between $0$ and $1$, as one of the classic simple examples (i.e. merely quadratic; this is the logistic map). User @Did has already given the Mandelbrot set example--similarly simple to express, and similarly difficult to characterize analytically (e.g. by giving a finite closed-form solution). Finally, note that to solve every non-linear recurrence relation would imply that one could solve the Halting problem, since one could encode a program as initial states and the workings of the Turing machine as the recurrence relations. So it is certainly hopeless in the most general case. (Which highly restricted cases admit solutions is still an interesting question.)
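To see the hypersensitivity concretely, here is a tiny sketch iterating the logistic map from two starting points that differ by $10^{-10}$:

```python
x, y = 0.3, 0.3 + 1e-10       # two nearby initial conditions
for n in range(1, 61):
    x, y = 4*x*(1 - x), 4*y*(1 - y)
    if n % 10 == 0:
        print(n, abs(x - y))
# the gap grows roughly exponentially with n; after a few dozen iterations
# the two orbits are completely decorrelated, which is why no usable
# closed form relating x_n to x_0 survives in practice
```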
Proving that a space is disconnected Show that a subspace $T$ of a topological space $S$ is disconnected iff there are nonempty sets $A,B \subset T$ such that $T= A\cup B$ and $\overline{A} \cap B = A \cap \overline{B} = \emptyset$. Where the closure is taken in $S$. I've used this relatively simple proof for many of these slightly different types of questions so I was wondering if it's the right method. It seems pretty good, except for the 'where the closure is taken in $S$ part'. $T$ is disconnected if and only if there exists a partition $A,B \subset T$ such that $T = A \cup B$ and $A \cap B = \emptyset$. Also, $A$ and $B$ are both open and closed therefore $\overline{A} = A$ and $\overline{B} = B$. The result follows.
It looks fine to me, in particular because $\,A\subset \overline{A}\,$ implies $\,A\cap B\subset \overline{A}\cap B\,$, so if the rightmost intersection is empty then so is the leftmost one, which is the usual definition.
Birational map between product of projective varieties What is an example of a birational map from $\mathbb{P}^{n} \times \mathbb{P}^{m}$ to $\mathbb{P}^{n+m}$?
The subset $\mathbb A^n\times \mathbb A^m$ is open dense in $\mathbb P^n\times \mathbb P^m$ and the subset $\mathbb A^{n+m}$ is open dense in $\mathbb P^{n+m}$. Hence the isomorphism $\mathbb A^n\times \mathbb A^m\stackrel {\cong}{\to} \mathbb A^{n+m}$ is the required birational isomorphism. The astonishing point is that a rational map need only be defined on a dense open subset, which explains the uneasy feeling one may have toward the preceding argument, which may look like cheating. The consideration of "maps" which are not defined everywhere is typical of algebraic (or complex analytic) geometry, as opposed to other geometric theories like topology, differential geometry, ...
Need to understand question about not-a-knot spline I am having some trouble understanding what the question below is asking. What does the given polynomial $P(x)$ have to do with deriving the not-a-knot spline interpolant for $S(x)$? Also, since not-a-knot is a boundary condition, what does it mean to derive it for $S(x)$? For general data points $(x_1, y_1), (x_2, y_2),...,(x_N , y_N )$, where $x_1 < x_2 < . .. < x_N$ and $N \geq 4$, Assume that $S(x)$ is a cubic spline interpolant for four data points $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, and $(x_4, y_4)$ $$ S(x) = \begin{cases} p_1(x), & [x_1,x_2] \\ p_2(x), & [x_2,x_3] \\ p_3(x), & [x_3,x_4] \\ \end{cases} $$ Suppose $P (x) = 2x^3 + 5x +7$ is the cubic interpolant for the same four points $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, $(x_4, y_4)$ where $x_1 < x_2 < x_3 < x_4$ are knots. What is the not-a-knot spline interpolant $S(x)$?
If $S$ is a not-a-knot spline with knots $x_1, \dotsc, x_4$ then it satisfies the spline conditions: twelve equations in twelve unknowns. (Twelve coefficients, six equations to prescribe values at the knots and six more to force continuity of derivatives up to third order at $x_2$ and $x_3$.) Since $p_1, p_2, p_3$ agree up to third order at both inner knots, it follows that $p_1 = p_2 = p_3$. So $S$ and $P$ are both cubic interpolants of the same four points, and since the degree-three interpolant of four points is unique, $S=P$.
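One can confirm this numerically: fitting a not-a-knot cubic spline to four samples of any cubic should return that cubic. A sketch using `scipy.interpolate.CubicSpline` (the knot locations below are arbitrary):

```python
import numpy as np
from scipy.interpolate import CubicSpline

P = lambda x: 2*x**3 + 5*x + 7
knots = np.array([0.0, 1.0, 2.5, 4.0])           # x1 < x2 < x3 < x4, arbitrary choice
S = CubicSpline(knots, P(knots), bc_type='not-a-knot')

xs = np.linspace(0, 4, 101)
print(np.max(np.abs(S(xs) - P(xs))))              # ~1e-13: S reproduces P
```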
Weierstrass Factorization Theorem Are there any generalisations of the Weierstrass Factorization Theorem, and if so where can I find information on them? I'm trying to investigate infinite products of the form $$\prod_{k=1}^\infty f(z)^{k^a}e^{g(z)},$$ where $g\in\mathbb{Z}[z]$ and $a\in\mathbb{N}$.
The Weierstrass factorization theorem provides a way of constructing an entire function with any prescribed set of zeros, provided the set of zeros does not have a limit point in $\mathbb{C}$. I know that this generalizes to being able to construct a function holomorphic on a region $G$ with any prescribed set of zeros in $G$, provided that the set of zeros does not have a limit point in $G$. These are theorems VII.5.14 and VII.5.15 in Conway's Functions of One Complex Variable. They lead to the (important) corollary that every meromorphic function on an open set $\Omega$ is a ratio of functions holomorphic on $\Omega$.
Symmetric Matrix as the Difference of Two Positive Definite Symmetric Matrices Prove that any real symmetric matrix can be expressed as the difference of two positive definite symmetric matrices. I was trying to use the fact that real symmetric matrices are diagonalisable, but the confusion I am having is this: if $A$ is invertible and $B$ is a positive definite diagonal matrix, is $ABA^{-1}$ positive definite? Thanks for any help.
Let $ A^{*} $ be the adjoint of $ A $ and $S$ the positive square root of the positive self-adjoint operator $ S^{2}=A^{*}A $ (e.g. Rudin, ``Functional Analysis'', McGraw-Hill, New York 1973, p. 313-314, Th. 12.32 and 12.33) and write $ P=S+A $, $ N=S-A $. Let $n$ be the finite dimension of the space on which $A$ acts and $\lambda_{i}$, $i=1,\dots,n$, the eigenvalues of $A$. The eigenvalues of $S$ are $|\lambda_{i}|\ge0$, those of $P$ are $0$ if $\lambda_{i}\le0$ and $2|\lambda_{i}|$ if $\lambda_{i}>0$, and those of $N$ are $0$ if $\lambda_{i}\ge0$ and $2|\lambda_{i}|$ if $\lambda_{i}<0$. Thus $S$, $P$ and $N$ are positive definite according to the definition given by Rudin in Th. 12.32. $ A=S-N $ and $ A=(P-N)/2 $ are two possible decompositions of $A$ into the difference of two positive definite operators.
Change of Basis Calculation I've just been looking through my Linear Algebra notes recently, and while revising the topic of change of basis matrices I've been trying something: "Suppose that our coordinates are $x$ in the standard basis and $y$ in a different basis, so that $x = Fy$, where $F$ is our change of basis matrix, then any matrix $A$ acting on the $x$ variables by taking $x$ to $Ax$ is represented in $y$ variables as: $F^{-1}AF$ " Now, I've attempted to prove the above, is my intuition right? Proof: We want to write the matrix $A$ in terms of $y$ co-ordinates. a) $Fy$ turns our y co-ordinates into $x$ co-ordinates. b) pre multiply by $A$, resulting in $AFy$, which is performing our transformation on $x$ co-ordinates c) Now, to convert back into $y$ co-ordinates, pre multiply by $F^{-1}$, resulting in $F^{-1}AFy$ d) We see that when we multiply $y$ by $F^{-1}AF$ we perform the equivalent of multiplying $A$ by $x$ to obtain $Ax$, thus proved. Also, just to check, are the entries in the matrix $F^{-1}AF$ still written in terms of the standard basis? Thanks.
Without saying much, here is how I usually remember the statement and also the proof in one big picture: \begin{array}{ccc} x_{1},\dots,x_{n} & \underrightarrow{\;\;\; A\;\;\;} & Ax_{1},\dots,Ax_{n}\\ \\ \uparrow F & & \downarrow F^{-1}\\ \\ y_{1},\dots,y_{n} & \underrightarrow{\;\;\; B\;\;\;} & By_{1},\dots,By_{n} \end{array} And $$By=F^{-1}AFy$$
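A quick numerical illustration of the diagram (a sketch with random matrices; `F` must be invertible, which a random matrix is with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))     # the map in x-coordinates
F = rng.standard_normal((3, 3))     # change of basis: x = F y
B = np.linalg.inv(F) @ A @ F        # the same map in y-coordinates

y = rng.standard_normal(3)
x = F @ y
# applying A in x-coordinates and converting back equals applying B in y-coordinates
print(np.allclose(np.linalg.inv(F) @ (A @ x), B @ y))   # True
```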
Prove continuous functions are Borel functions Take $f: (a,b) \to \mathbb{R}$, continuous at every $x_{0}\in (a,b)$, and take $(\Omega, F)$ with $\Omega = (a,b)$ and $F = (a,b) \cap B(\mathbb{R})$, where $B(\mathbb{R})$ is the Borel $\sigma$-algebra. Prove $f$ is a Borel function by showing that $\{x \in(a,b): f(x) < c \}$ is in $F$. I know that continuity of $f$ means that for all $x_{0}\in(a,b)$ and all $\varepsilon>0$ there exists a $\delta>0$ such that $|x-x_{0}| < \delta$ implies $|f(x)-f(x_{0})| < \varepsilon$. But then I am stuck; how would I use these facts to help me? Thanks in advance for any help
To expand on Thomas E.'s comment: if $f$ is continuous, $f^{-1}(O)$ for $O$ open is again open. $\{x \in (a,b) : f(x) < c \} = f^{-1}((- \infty , c)) \cap (a,b)$. Now all you need to show to finish this proof is that $f^{-1}((- \infty , c))$ is in the Borel sigma algebra of $\mathbb R$. Edit (in response to comment) Reading your comment I think that your lecturer shows that $S := \{x \in (a,b) : f(x) < c \} $ is open. In a metric space, such as $\mathbb R$ with the Euclidean metric, a set $S$ is open if for all $x_0$ in $S$ you can find a $\delta > 0$ such that $(x_0-\delta, x_0+\delta) \subset S$. To show this, your lecturer picks an arbitrary $x_0 \in S$. Then by the definition of $S$ you know that $f(x_0) < c$. This means there exists an $\varepsilon > 0$ such that $f(x_0) + \varepsilon < c$, for $\varepsilon$ small enough. Since $f$ is continuous you know you can find a $\delta_1 > 0$ such that $x \in (x_0 - \delta_1, x_0 + \delta_1) $ implies that $|f(x_0) - f(x)| < \varepsilon$. Now you don't know whether $(x_0 - \delta_1, x_0 + \delta_1) $ is contained in $(a,b)$. But you know that since $(a,b)$ is open you can find a $\delta_2 > 0$ such that $(x_0 - \delta_2, x_0 + \delta_2) \subset (a,b)$. Now picking $\delta := \min (\delta_1, \delta_2)$ gives you that $(x_0 - \delta, x_0 + \delta) \subset S$ because $(f(x_0) - \varepsilon, f(x_0) + \varepsilon) \subset (-\infty , c)$.
Prove that the order of an element of $\mathbb{Z}^*_N$ is the lcm of its orders modulo $N$'s prime factors $p$ and $q$ How would you prove that $$\operatorname{ord}_N(\alpha) = \operatorname{lcm}(\operatorname{ord}_p(\alpha),\operatorname{ord}_q(\alpha))$$ where $N=pq$ ($p$ and $q$ are distinct primes) and $\alpha \in \mathbb{Z}^*_N$? I've got this: the order of an element $\alpha$ of a group is the smallest positive integer $m$ such that $\alpha^m = e$, where $e$ denotes the identity element. And I guess that the right side has to be the $\operatorname{lcm}()$ of the orders from $p$ and $q$ because they are relatively prime to each other. But I can't put it together; any help would be appreciated!
Hint. There are natural maps $\mathbb{Z}^*_N\to\mathbb{Z}^*_p$ and $\mathbb{Z}^*_N\to\mathbb{Z}^*_q$ given by reduction modulo $p$ and reduction modulo $q$. This gives you a homomorphism $\mathbb{Z}^*_N\to \mathbb{Z}^*_p\times\mathbb{Z}^*_q$. What is the kernel of the map into the product? What is the order of an element $(x,y)$ in the product?
Differential equation problem I am looking at the differential equation: $$\frac{dR}{d\theta} + R = e^{-\theta} \sec^2 \theta.$$ I understand how to use $e^{\int 1 d\theta}$ to multiply both sides which gives me: (looking at left hand side of equation only) $$e^\theta \frac{dR}{d\theta} + e^\theta R.$$ However I am not sure how to further simplify the left hand side of the equation before integrating. Can someone please show me the process for doing that? Thanks kindly for any help.
We have $$\frac{d R(\theta)}{d \theta} + R(\theta) = \exp(-\theta) \sec^2(\theta)$$ Multiplying throughout by $\exp(\theta)$, we get $$\exp(\theta) \frac{dR(\theta)}{d \theta} + \exp(\theta) R(\theta) = \sec^{2}(\theta)$$ Note that $$\frac{d (R(\theta) \exp(\theta))}{d \theta} = R(\theta) \exp(\theta) + \exp(\theta) \frac{d R(\theta)}{d \theta}.$$ Hence, we get that $$\frac{d(R(\theta) \exp(\theta))}{d \theta} = \sec^2(\theta).$$ Integrating it out, we get $$R(\theta) \exp(\theta) = \tan(\theta) + C$$ This gives us that $$R(\theta) = \exp(-\theta) \tan(\theta) + C \exp(-\theta).$$ EDIT I am adding what Henry T. Horton points out in the comments and elaborating it a bit more. The idea behind the integrating factor is to rewrite the left hand side as a derivative. For instance, if we have the differential equation in the form $$\frac{d R(\theta)}{d \theta} + M(\theta) R(\theta) = N(\theta) \tag{1}$$ the goal is to find the "integrating factor" $L(\theta)$ such that when we multiply the differential equation by $L(\theta)$, we can rewrite the equation as $$\frac{d (L(\theta)R(\theta))}{d \theta} = L(\theta) N(\theta) \tag{2}$$ The above is the key ingredient in the solving process. So the question is, how to determine the function $L(\theta)$? Since the above two equations are the same, except that the second equation is multiplied by $L(\theta)$, we can expand the second equation and divide by $L(\theta)$ to get the first equation. Expanding the second equation, we get that $$L(\theta) \frac{d R(\theta)}{d \theta} + \frac{d L(\theta)}{d \theta} R(\theta) = L(\theta) N(\theta) \tag{3}$$ Dividing the third equation by $L(\theta)$, we get that $$\frac{d R(\theta)}{d \theta} + \frac{\frac{d L(\theta)}{d \theta}}{L(\theta)} R(\theta) = N(\theta) \tag{4}$$ Comparing this with the first equation, if we set $$\frac{\frac{d L(\theta)}{d \theta}}{L(\theta)} = M(\theta)$$ then the solution to the first and second equation will be the same. Hence, we need to find $L(\theta)$ such that $$\frac{dL(\theta)}{d \theta} = M(\theta) L(\theta).$$ Note that $\displaystyle L(\theta) = \exp \left(\int_0^{\theta} M(t)dt \right)$ will do the job and this is termed the integrating factor. Hence, once we have the first equation in the form of the second equation, we can integrate directly to get $$ L(\theta) R(\theta) = \int_{\theta_0}^{\theta} L(t) N(t) dt + C$$ and thereby conclude that $$R(\theta) = \dfrac{\displaystyle \int_{\theta_0}^{\theta} L(t) N(t) dt}{L(\theta)} + \frac{C}{L(\theta)}$$ where the function $\displaystyle L(\theta) = \exp \left(\int_0^{\theta} M(t)dt \right)$ and $C$ is a constant.
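The closed form can be checked symbolically; a small sketch with SymPy:

```python
import sympy as sp

theta, C = sp.symbols('theta C')
R = sp.exp(-theta)*sp.tan(theta) + C*sp.exp(-theta)

# plug the proposed solution back into dR/dtheta + R = exp(-theta) sec^2(theta)
residual = sp.diff(R, theta) + R - sp.exp(-theta)*sp.sec(theta)**2
print(sp.simplify(residual))   # 0, so R solves the equation for any constant C
```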
Find the intersection of these two planes. Find the intersection of $8x + 8y +z = 35$ and $x = \left(\begin{array}{cc} 6\\ -2\\ 3\\ \end{array}\right) +$ $ \lambda_1 \left(\begin{array}{cc} -2\\ 1\\ 3\\ \end{array}\right) +$ $ \lambda_2 \left(\begin{array}{cc} 1\\ 1\\ -1\\ \end{array}\right) $ So, I have been trying this two different ways. One is to convert the vector form to Cartesian (the method I have shown below) and the other was to convert the provided Cartesian equation into a vector equation and try to find the equation of the line that way, but I was having some trouble with both methods. Converting to Cartesian method: normal = $ \left(\begin{array}{cc} -4\\ 1\\ -3\\ \end{array}\right) $ Cartesian of x $=-4x + y -3z = 35$ Solving simultaneously with $8x + 8y + z = 35$, I get the point $(7, 0, -21)$ to be on both planes, i.e., on the line of intersection. Then taking the cross of both normals, I get a parallel vector for the line of intersection to be $(25, -20, -40)$. So, I would have the vector equation of the line to be: $ \left(\begin{array}{cc} 7\\ 0\\ -21\\ \end{array}\right) +$ $\lambda \left(\begin{array}{cc} 25\\ -20\\ -40\\ \end{array}\right) $ But my provided answer is: $ \left(\begin{array}{cc} 6\\ -2\\ 3\\ \end{array}\right)+ $ $ \lambda \left(\begin{array}{cc} -5\\ 4\\ 8\\ \end{array}\right) $ I can see that the directional vector is the same, but why doesn't the provided answer's point satisfy the Cartesian equation I found? Also, how would I do this if I converted the original Cartesian equation into a vector equation? Would I just equate the two vector equations and solve using an augmented matrix? I tried it a few times but couldn't get a reasonable answer, perhaps I am just making simple errors, or is this not the correct method for vector form?
It's just a simple sign mistake. The equation should be $$-4x+y-3z=-35$$ instead of $$-4x+y-3z=35.$$ Your solution will work fine then.
Product Measures Consider the case $\Omega = \mathbb R^6$, $F= B(\mathbb R^6)$. Then the projections $X_i(\omega) = x_i$, $\omega=(x_1,x_2,\ldots,x_6) \in \Omega$, are random variables, $i=1,\ldots,6$. Fix $S_n = S_0\, u^{\Sigma X_i(\omega)}d^{\,n-\Sigma X_i(\omega)}$, $\omega \in \Omega$, $n=1,\ldots,6$. Choose the measure $P = \bigotimes_{i=1}^6 Q$ on ($\Omega,F$) where $Q$ denotes the measure $p\delta_1 + q\delta_0 $ on $(\mathbb R, B(\mathbb R))$ for some $p,q>0$ such that $p+q = 1$. Show that the projections $X_i$, $i=1,\ldots,6$, are mutually independent. Since $X_i$ is a random variable, am I correct in saying that to show their independence I must show that their sigma algebras $\sigma(X_i)$ are independent? How would I go about doing this? Thanks very much!
Yes, that is correct. You have to show that $\sigma(X_i)$ and $\sigma(X_j)$ are independent, when $j\neq i$ (note that I have omitted the $\omega$ in $\sigma(X_i(\omega))$, because that is not what you want). Now, recall that $$ \sigma(X_i)=\sigma(\{X_i^{-1}(A)\mid A\in \mathcal{B}(\mathbb{R})\}), $$ and hence it is enough to show that $\{X_i^{-1}(A)\mid A\in \mathcal{B}(\mathbb{R})\}$ and $\{X_j^{-1}(A)\mid A\in \mathcal{B}(\mathbb{R})\}$ are independent when $i\neq j$. Now, if $A\in\mathcal{B}(\mathbb{R})$ then $$ X_i^{-1}(A)=\{(x_1,\ldots,x_6)\in\mathbb{R}^6\mid x_i\in A\}=\mathbb{R}\times\cdots \times A\times\cdots\times\mathbb{R}, $$ where $A$ is on the $i$'th place. If $j\neq i$, then $$ X_i^{-1}(A)\cap X_j^{-1}(B)=\mathbb{R}\times\cdots \times A\times B\times\cdots\times\mathbb{R}, $$ where $A$ is on the $i$'th place and $B$ is on the $j$'th place. Now $$ P(X_i^{-1}(A)\cap X_j^{-1}(B))=Q(\mathbb{R})^{4}Q(A)Q(B)=Q(A)Q(B)=P(X_i^{-1}(A))P(X_j^{-1}(B)), $$ and hence the events are independent for every choice of $A,B\in\mathcal{B}(\mathbb{R})$.
Computing conditional probability out of joint probability If I have given a complete table for the joint probability $$P(A,B,C,D,E)$$ how can I compute an arbitrary conditional probability out of it, for instance: $$P(A|B)$$
$$\mathbb{P}(A=a \vert B=b) = \frac{\mathbb{P}(A=a, B=b)}{\mathbb{P}(B=b)} = \frac{\displaystyle \sum_{c,d,e} \mathbb{P}(A=a, B=b, C=c, D=d, E=e)}{\displaystyle \sum_{a,c,d,e} \mathbb{P}(A=a, B=b, C=c, D=d, E=e)}$$
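In code, this is just two marginalizations of the joint table. A sketch assuming the five variables are discrete with finitely many values and the joint probabilities are stored as a 5-dimensional NumPy array indexed by (a, b, c, d, e):

```python
import numpy as np

rng = np.random.default_rng(1)
joint = rng.random((2, 3, 2, 2, 2))
joint /= joint.sum()                 # a toy joint table P(A, B, C, D, E)

p_ab = joint.sum(axis=(2, 3, 4))     # P(A=a, B=b): sum out C, D, E
p_b = p_ab.sum(axis=0)               # P(B=b):      additionally sum out A
p_a_given_b = p_ab / p_b             # P(A=a | B=b), broadcast over a

print(p_a_given_b.sum(axis=0))       # each column sums to 1, as a conditional pmf should
```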
Why are zeros/roots (real) solutions to an equation of an n-degree polynomial? I can't really put a proper title on this one, but I seem to be missing one crucial point. Why do roots of a function like $f(x) = ax^2 + bx + c$ provide the solutions when $f(x) = 0$. What does that $ y = 0$ mean for the solutions, the intercept at the $x$ axis? Why aren't the solutions at $f(x) = 403045$ or some other arbitrary $n$? What makes the x-intercept special?
One reason is that it makes solving an equation simple, especially if $f(x)$ is written only as the product of a few terms. This is because $a\cdot b = 0$ implies either $a = 0$ or $b = 0$. For example, take $f(x) = (x-5)(x+2)(x-2)$. To find the values of $x$ where $f(x) = 0$ we see that $x$ must be $5$, $-2$, or $2$. To find the values of $x$ so that $f(x) = 5$, well, we can't conclude anything immediately because having 3 numbers multiply to 5 (or any non-zero number) doesn't tell us anything about those 3 numbers. This makes 0 special.
Graph decomposition What is the smallest $n \in \mathbb{N}$ with $ n \geq5$ such that the edge set of the complete graph $K_n$ can be partitioned (decomposed) into edge disjoint copies of $K_4$? I got a necessary condition for the decomposition: $12 \mid n(n-1)$ and $3\mid n-1$, which implies $n \geq 13$. But can $K_{13}$ indeed be decomposed into edge disjoint copies of $K_4$?
The degree of $K_9$ is 8, whereas the degree of $K_4$ is 3. Since $3$ does not divide $8$, there is no $K_4$ decomposition of $K_9$. $K_n$ has a decomposition into edge-disjoint copies of $K_4$ whenever $n \equiv 1 \text{ or 4 } (\text{mod} 12)$, so the next smallest example after $K_4$ is $K_{13}$.
Solving polynomial differential equation I have $a(v)$ where $a$ is acceleration and $v$ is velocity. $a$ can be described as a polynomial of degree 3: $$a(v) = \sum\limits_{i=0}^3 p_i v^i = \sum\limits_{i=0}^3 p_i \left(\frac{dd(t)}{dt}\right)^i,$$ where $d(t)$ is distance with respect to time. I want to solve (or approximate) this equation for $d(t)$, but it's been a few years since I graduated, and I seem to have forgotten most of my math skills :)
Since the acceleration is the derivative of velocity, you can write $$ \frac{\mathrm{d} v}{\mathrm{d} t} = p_0 + p_1 v + p_2 v^2 + p_3 v^3 $$ Separating the variables, we get the integral form $$ \int \frac{\mathrm{d}v}{p_0 + p_1 v + p_2 v^2 + p_3 v^3} = \int \mathrm{d}t = t + c$$ which we can integrate using partial fractions (also see this page). To summarise the method: Using the fundamental theorem of algebra we can factor the polynomial $$ p_0 + p_1 v + p_2 v^2 + p_3 v^3 = p_3 (v + \alpha_1)(v + \alpha_2)(v + \alpha_3) $$ where the $-\alpha_i$ are the roots of the polynomial (assume they are distinct for now; repeated roots will require some additional work). Then we look for $\beta_1,\beta_2,\beta_3$ such that $$ \sum \frac{\beta_i}{v+\alpha_i} = \frac{1}{(v+\alpha_1)(v+\alpha_2)(v+\alpha_3)} $$ Expanding the sum you see that this requires $$\begin{align} \beta_1 + \beta_2 + \beta_3 &= 0 \\ \beta_1 (\alpha_2 + \alpha_3) + \beta_2(\alpha_1+\alpha_3) + \beta_3(\alpha_1 + \alpha_2) &= 0 \\ \beta_1 \alpha_2\alpha_3 + \beta_2\alpha_1\alpha_3 + \beta_3 \alpha_1\alpha_2 &= 1 \end{align}$$ which is a linear system that can be solved. This way we reduce our integral equation to $$ t + c = \frac{1}{p_3}\int \frac{\beta_1}{v + \alpha_1} + \frac{\beta_2}{v+\alpha_2} + \frac{\beta_3}{v+\alpha_3} \mathrm{d}v $$ where the $\alpha$ and $\beta$ coefficients are determined from the polynomial you started with. This gives us the implicit solution $$ p_3t + C = \beta_1 \ln (v+\alpha_1) + \beta_2 \ln(v+\alpha_2) + \beta_3 \ln(v+\alpha_3) $$ or $$ e^{p_3 t + C} = (v+\alpha_1)^{\beta_1}(v+\alpha_2)^{\beta_2}(v+\alpha_3)^{\beta_3} \tag{*}$$ However, this is generally where one gets stuck. To obtain $d$ from $v$ you have to integrate $v$ one more time. But now equation (*) may not have a nice analytic representation for $v$, nevermind a simple integral for you to obtain $d$. In those cases the best you can do is probably ask Mathematica. (Sometimes you may get lucky. For example, if your polynomial is a perfect cube, then you have $$ \int \frac{\mathrm{d}v}{p(v+q)^3} = -\frac{1}{2p(v+q)^2} + C $$ and setting this equal to $t + c$ gives $$ v + q = \frac{1}{\sqrt{C' - 2pt}} $$ for a new constant $C'$, which one can easily integrate to get $d = \int v~\mathrm{d}t$. But that depends on a special form of the coefficients $p_i$, which you have not specified.)
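For a concrete cubic, the partial-fraction step and the $v$-integration can be done symbolically. A sketch with SymPy, using an arbitrary factorable polynomial chosen only for illustration (here $a(v)=(v+1)(v+2)(v+3)$, so $p_3=1$):

```python
import sympy as sp

v = sp.symbols('v')
a = (v + 1)*(v + 2)*(v + 3)       # example acceleration polynomial a(v)

decomp = sp.apart(1/a, v)         # partial fractions: 1/(2(v+1)) - 1/(v+2) + 1/(2(v+3))
print(decomp)
print(sp.integrate(decomp, v))    # the beta_i * log(v + alpha_i) terms, i.e. t + c up to the 1/p_3 factor
```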
Curve arc length parametrization definition I did some assignments related to curve arc length parametrization. But what I can't seem to find online is a formal definition of it. I've found procedures and ways to find a curve's equation by arc length parametrization, but I'm still missing a formal definition which I have to write in my assignment. I saw many links related to the topic http://homepage.smc.edu/kennedy_john/ArcLengthParametrization.pdf but they all seem too long and don't provide a short, concise definition. Could anyone help me writing a formal definition of curve arc length parametrization?
Suppose $\gamma:[a,b]\rightarrow {\Bbb R}^n$ is a smooth curve with $\gamma'(t) \not = 0$ for $t\in[a,b]$. Define $$s(t) = \int_a^t ||\gamma'(\xi)||\,d\xi$$ for $t\in[a,b]$. This function $s$ has a positive derivative, so it possesses a differentiable inverse. You can use it to get a unit-speed reparametrization of your curve.
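Numerically, $s(t)$ and its inverse can be built by cumulative integration and interpolation. A small sketch for a plane curve (an ellipse, chosen arbitrarily):

```python
import numpy as np

t = np.linspace(0, 2*np.pi, 2001)
gamma = np.stack([np.cos(t), 2*np.sin(t)], axis=1)          # the curve, sampled

speed = np.linalg.norm(np.gradient(gamma, t, axis=0), axis=1)
# cumulative arc length s(t) by the trapezoid rule
s = np.concatenate([[0], np.cumsum(0.5*(speed[1:] + speed[:-1])*np.diff(t))])

# invert s(t) by interpolation and resample the curve at equal arc-length steps
s_uniform = np.linspace(0, s[-1], 200)
t_of_s = np.interp(s_uniform, s, t)
reparam = np.stack([np.cos(t_of_s), 2*np.sin(t_of_s)], axis=1)
# consecutive points of `reparam` are now (approximately) equally spaced along the curve
print(np.std(np.linalg.norm(np.diff(reparam, axis=0), axis=1)))  # small
```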
Given n raffles, what is the chance of winning k in a row? I was reading this interesting article about the probability of of tossing heads k times in a row out of n tosses. The final result was $$P = 1-\frac{\operatorname{fib}_k(n+2)}{2^n}\;,$$ where $\operatorname{fib}_k(n)$ is the $n$-th $k$-step Fibonacci number. However, I could not figure out how to adapt it to cases where the probability is not half but just some generic $p$. How do I approach that, and is there a generic solution for all $p$? To be clear, $n$ is the number of raffles, $p$ is the probability of winning a single one, and $k$ is the number of consecutive successes required. $P$ is the desired value.
We can proceed as follows. Let $p$ be the probability that we flip a head, and $q=1-p$ the probability that we flip tails. Let us search for the probability that we do NOT have at least $k$ heads in a row at some point after $n$ flips, which we will denote $P(n,k)$. Given a sequence of coin tosses (of length at least $k)$ which does not have $k$ heads in a row, the end of sequence must be a tail followed by $i$ heads, where $0\leq i<k$. We will let $P(n,k,i)$ denote the probability that a string of length $n$ has less than $k$ heads AND ends with $i$ heads. Clearly $P(n,k)=\sum P(n,k,i)$. (Also note that we can still work with $n<k$ by treating a string of just $i$ heads as being in the class $(n,k,i)$). Suppose we have a series of $n$ coin flips, with no more than $k$ heads, and we are in the class $(n,k,i)$. What can happen if we flip the coin once more? If we get tails, we end up in class $(n+1,k,0)$, which happens with probability $q$, and if we get heads, we end up in the class $(n,k,i+1)$ which happens with probability $p$. The only caveat is that if $i=k-1$, our string will have $k$ heads in a row if the next run is a head. From this, and using the fact that the $(n+1)$st flip is independent of the flips that came before, we can calculate: $$P(n+1,k,i+1)=pP(n,k,i) \qquad 0\leq i<k, $$ and so $$P(n,k,i)=p^iP(n-i,k,0) \qquad 0\leq i<k.$$ This could have been seen more directly by noting that the only way to be in the class $(n,k,i)$ is to have a string in class $(n-i,k,0)$ and to then have $i$ heads in a row, which happens with probability $p^i$. This means that we only need to use things of the form $P(n,k)$ and $P(n,k,0)$ in our calculations. By similar reasoning about how strings come about, we have $$P(n+1,k,0)=qP(n,k)=q\sum_{i=0}^{k-1} P(n,k,i)=q\sum_{i=0}^{k-1} p^iP(n-i,k,0).$$ This gives us a nice linear recurrence relation for $P(n,k,0)$ very similar to the one for the $k$-Fibonacci numbers, and dividing by $q$, we see that $P(n,k)$ satisfies the same recurrence. Adding the initial condition $P(n,k)=1$ if $n<k$ allows us to easily generate the values we need. Moreover, if we multiply our recurrence by $p^{-(n+1)}$, we get a slightly simpler recurrence for $Q(n+1,k,0)=p^{-(n+1)}P(n+1,k,0)$, namely $$Q(n+1,k,0)=\frac{q}{p} \sum_{i=0}^{k-1} Q(n-i,k,0).$$ When $p=q$, this becomes the recurrence for the $k$-Fibonacci numbers.
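The recurrence is easy to implement and to sanity-check against simulation. A sketch in which the dynamic-programming state is the current run length of heads, with an extra absorbing state once a run of $k$ has occurred:

```python
import random

def prob_run(n, k, p):
    """P(at least k heads in a row somewhere in n flips of a p-coin)."""
    q = 1 - p
    state = [1.0] + [0.0]*k              # state[i] = P(current run = i, no k-run yet); state[k] = absorbed
    for _ in range(n):
        new = [0.0]*(k + 1)
        new[0] = q*sum(state[:k])        # a tail resets the run
        for i in range(k - 1):
            new[i + 1] = p*state[i]      # a head extends the run
        new[k] = state[k] + p*state[k - 1]   # a head on a run of k-1 completes a k-run
        state = new
    return state[k]

def simulate(n, k, p, trials=200_000):
    hits = 0
    for _ in range(trials):
        run = best = 0
        for _ in range(n):
            run = run + 1 if random.random() < p else 0
            best = max(best, run)
        hits += best >= k
    return hits/trials

print(prob_run(20, 3, 0.4), simulate(20, 3, 0.4))   # should agree to ~3 decimals
```

For $p=1/2$ this reproduces the $1-\operatorname{fib}_k(n+2)/2^n$ values from the linked article.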
cohomology of a finite cyclic group I apologize if this is a duplicate. I don't know enough about group cohomology to know if this is just a special case of an earlier post with the same title. Let $G=\langle\sigma\rangle$ where $\sigma^m=1$. Let $N=1+\sigma+\sigma^2+\cdots+\sigma^{m-1}$. Then it is claimed in Dummit and Foote that $$\cdots\mathbb{Z} G \xrightarrow{\;\sigma -1\;} \mathbb{Z} G \xrightarrow{\;N\;} \mathbb{Z} G \xrightarrow{\;\sigma -1\;} \cdots \xrightarrow{\;N\;} \mathbb{Z} G \xrightarrow{\;\sigma -1\;} \mathbb{Z} G \xrightarrow{\;\text{aug}\;} \mathbb{Z} \longrightarrow 0$$ is a free resolution of the trivial $G$-module $\mathbb{Z}$. Here $\mathbb{Z} G$ is the group ring and $\text{aug}$ is the augmentation map which sums coefficients. It's clear that $N( \sigma -1) = 0$ so that the composition of consecutive maps is zero. But I can't see why the kernel of a map should be contained in the image of the previous map. any suggestions would be greatly appreciated. Thanks for your time.
As $(\sigma-1)(c_0+c_1\sigma+\dots+c_{m-1}\sigma^{m-1})=(c_{m-1}-c_0)+(c_0-c_1)\sigma+\dots+(c_{m-2}-c_{m-1})\sigma^{m-1}$, the element $a=c_0+c_1\sigma+\dots+c_{m-1}\sigma^{m-1}$ is in the kernel of $\sigma-1$ iff all $c_i$'s are equal, i.e. iff $a=Nc$ for some $c\in\mathbb{Z}$. Similarly, $Na=(\sum c_i)N$, so here the kernel is given by the condition $\sum c_i=0$, but this means $a=(\sigma-1)(-c_0-(c_0+c_1)\sigma-(c_0+c_1+c_2)\sigma^2-\cdots)$.
Why is unique ergodicity important or interesting? I have a very simple motivational question: why do we care if a measure-preserving transformation is uniquely ergodic or not? I can appreciate that being ergodic means that a system can't really be decomposed into smaller subsystems (the only invariant pieces are really big or really small), but once you know that a transformation is ergodic, why do you care if there is only one measure which it's ergodic with respect to or not?
Unique ergodicity is defined for topological dynamical systems and it tells you that the time average of any function converges pointwise to a constant (see Walters: Introduction to Ergodic Theory, th 6.19). This property is often useful. Any ergodic measure preserving system is isomorphic to a uniquely ergodic (minimal) topological system (see http://projecteuclid.org/euclid.bsmsp/1200514225).
Density of the set $S=\{m/2^n| n\in\mathbb{N}, m\in\mathbb{Z}\}$ on $\mathbb{R}$? Let $S=\{\frac{m}{2^n}| n\in\mathbb{N}, m\in\mathbb{Z}\}$, is $S$ a dense set on $\mathbb{R}$?
Yes, it is. Given an open interval $(a,b)$ (suppose $a$ and $b$ are positive), you can find $n\in\mathbb{N}$ such that $1/2^n<|b-a|$. Then consider the set: $$X=\{k\in \mathbb{N}: k/2^n > b\}$$ This is a nonempty subset of $\mathbb{N}$, so by the well-ordering principle $X$ has a least element $k_0$; then it is enough to take $(k_0-1)/2^n\in(a,b)$. The same argument works if $a$, $b$, or both are negative (because $(a,b)$ is bounded).
What exactly is nonstandard about Nonstandard Analysis? I have only a vague understanding of nonstandard analysis from reading Reuben Hersh & Philip Davis, The Mathematical Experience. As a physics major I do have some education in standard analysis, but wonder what the properties are that the nonstandardness (is that a word?) is composed of. Is it more than defining numbers smaller than any positive real as the tag suggests? Can you give examples? Do you know of a gentle introduction to the nonstandard properties?
To complement the fine answers given earlier, I would like to address directly the question of the title: "What exactly is nonstandard about Nonstandard Analysis?" The answer is: "Nothing" (the name "nonstandard analysis" is merely a descriptive title of a field of research, chosen by Robinson). This is why some scholars try to avoid using the term in their publications, preferring to speak of "infinitesimals" or "analysis over the hyperreals", as for example in the following popular books: Goldblatt, Robert, Lectures on the hyperreals. Graduate Texts in Mathematics, 188. Springer-Verlag, New York, 1998 Vakil, Nader, Real analysis through modern infinitesimals. Encyclopedia of Mathematics and its Applications, 140. Cambridge University Press, Cambridge, 2011. More specifically, there is nothing "nonstandard" about Robinson's theory in the sense that he is working in a classical framework that a majority of mathematicians work in today, namely the Zermelo-Fraenkel set theory, and relying on classical logic.
Homotopic to a Constant I'm having a little trouble understanding several topics from algebraic topology. This question covers a range of topics I have been looking at. Can anyone help? Thanks! Suppose $X$ and $Y$ are connected manifolds, $X$ is simply connected, and the universal cover of $Y$ is contractible. Why is every continuous mapping from $X$ to $Y$ homotopic to a constant?
Let $\tilde{Y} \xrightarrow{\pi} Y$ be the universal cover of $Y$. Since $X$ is simply connected, any continuous map $X \xrightarrow{f} Y$ can be factorized as a continuous map $X \xrightarrow{\tilde{f}} \tilde{Y} \xrightarrow{\pi} Y$. Since $\tilde{Y}$ is contractible, there is a point $y \in \tilde{Y}$ and a homotopy $h$ between the identity map on $\tilde{Y}$ and the constant map $y$ : $h : \begin{array}{c}\tilde{Y} \xrightarrow{id} \tilde{Y} \\ \Downarrow \\ \tilde{Y} \xrightarrow{y} \{y\}\end{array}$ Composing this homotopy with $\tilde{f}$ and $\pi$, you get a homotopy $h'(t,x) = \pi(h(t,\tilde{f}(x)))$ $h': \begin{array}{rcl}X \xrightarrow{\tilde{f}} & \tilde{Y} \xrightarrow{id} \tilde{Y} &\xrightarrow{\pi} Y \\ &\Downarrow &\\ X \xrightarrow{\tilde{f}} & \tilde{Y} \xrightarrow{y} \{y\} & \xrightarrow{\pi} \{\pi(y)\} \end{array}$ between $f$ and the constant map $\pi(y)$.
Cumulative probability and predicted take in a raffle? Not sure if this is the right term! If I have a raffle with 100 tickets in at $5 each, and people pull a ticket sequentially, how do I calculate the likely return before the winning ticket is drawn? I'm half way there. I get that you work out the cumulative probability is But how do I add the prices to work out a likely return? I want to be able to change the number of tickets sold to adjust the return.
The calculations seem to involve a strange kind of raffle, in which the first ticket is sold (for $5$ dollars). We check whether this is the winning ticket. If it is not, we sell another ticket, and check whether it is the winner. And so on. You seem to be asking for the expected return. This is $5$ times $E(X)$, where $X$ is the total number of tickets until we reach the winning ticket. The random variable $X$ has a distribution which is a special case of what is sometimes called the Negative Hypergeometric Distribution. (There are other names, such as Inverse Hypergeometric Distribution.) The general negative hypergeometric allows the possibility of $r$ "winning tickets" among the $N$ tickets, and the possibility that we will allow sales until $k$ winning tickets have turned up. You are looking at the special case $r=k=1$ (I am using the notation of the link). In your case, if the total number of tickets is $N$, of which only one is a winning one, we have $$E(X)=\frac{N+1}{2}.$$ Taking $N=100$, and $5$ dollars a ticket, the expected return is $5\cdot\frac{101}{2}=252.50$ dollars. Remark: If the model I have described is not the model you have in mind, perhaps the question can be clarified.
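A short simulation of that model confirms the mean (a sketch; under this model the winning ticket's position is uniform among the $N$ tickets):

```python
import random

N, price, trials = 100, 5, 200_000
# number of tickets sold until (and including) the winner is drawn
draws = [random.randint(1, N) for _ in range(trials)]
print(price * sum(draws) / trials)   # close to 5 * (N + 1)/2 = 252.5
```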
Exponential objects in a cartesian closed category: $a^1 \cong a$ Hi I'm having problems with coming up with a proof for this simple property of cartesian closed categories (CCC) and exponential objects, namely that for any object $a$ in a CCC $C$ with an initial object $0$, $a$ is isomorphic to $a^1$ where $1$ is the terminal object of $C$. In most of the category theory books i've read this is usually left as an exercise, but for some reason I can't get a handle on it.
You can also reason as follows, without the Yoneda lemma. But proving uniqueness of right adjoints is cumbersome without using Yoneda, and easy with. Anyway, here it goes: The functor $(-)\times 1$ is isomorphic to the identity functor. The identity functor is a right adjoint of itself, so the identity functor is also right adjoint to $(-)\times 1$. Then uniqueness of right adjoints gives that $(-)^1$ is isomorphic to the identity functor.
Sum of three primes Can all natural numbers ($n\ge 6$) be represented as the sum of three primes? With a computer I checked up to $10000$, but I couldn't prove it.
It was proved by Vinogradov that every large enough odd integer is the sum of at most $3$ primes, and it seems essentially certain that apart from a few uninteresting small cases, every odd integer is the sum of $3$ primes. Even integers are a different matter. To prove that every even integer $n$ is the sum of three primes, one would have to prove the Goldbach Conjecture, since one of the three primes must be $2$, and therefore $n-2$ must be the sum of two primes.
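For what it is worth, the small-case check mentioned in the question is easy to reproduce. A sketch that verifies every $n$ from $6$ to $2000$ is a sum of three primes (the bound is arbitrary and can be raised):

```python
def primes_upto(n):
    sieve = bytearray([1])*(n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [i for i in range(n + 1) if sieve[i]]

N = 2000
primes = primes_upto(N)
prime_set = set(primes)
for n in range(6, N + 1):
    ok = any(n - p1 - p2 in prime_set
             for p1 in primes if p1 <= n - 4
             for p2 in primes if p2 <= n - p1 - 2)
    assert ok, f"{n} is not a sum of three primes"
print("verified up to", N)
```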
Spectra of restrictions of bounded operators Suppose $T$ is a bounded operator on a Banach space $X$ and $Y$ is a non-trivial closed invariant subspace for $T$. It is fairly easy to show that for the point spectrum one has $\sigma_p(T_{|Y})\subseteq\sigma_p(T)$ and this is also true for the approximate point spectrum, i.e. $\sigma_a(T_{|Y})\subseteq\sigma_a(T)$. However I think it is not true in general that $\sigma(T_{|Y})\subseteq\sigma(T)$. We also have $$ \partial(\sigma(T_{|Y}))\subseteq\sigma_a(T_{|Y})\subseteq\sigma_a(T) $$ Hence $\sigma(T_{|Y})\cap\sigma(T)\ne\emptyset$. Moreover, if $\sigma(T)$ is discrete then $\partial(\sigma(T_{|Y}))$ is also discrete, which implies that $\partial(\sigma(T_{|Y}))=\sigma(T_{|Y})$, so at least in this case the inclusion $\sigma(T_{|Y})\subseteq\sigma(T)$ holds true. So, for example, it holds true for compact, strictly singular and quasinilpotent operators. Question 1: Is it true, as I suspect, that $\sigma(T_{|Y})\subseteq\sigma(T)$ doesn't hold in general? A counterexample will be appreciated. One on $l_2$ will do, as I think that on some Banach spaces this holds for any operator. For example, if $X$ is hereditarily indecomposable (HI), the spectrum of any operator is discrete. Question 2 (imprecise): If the answer to Q1 is 'yes', is there some known result regarding how large the spectrum of the restriction can become? Thank you.
For example, consider the right shift operator $R$ on $X = \ell^2({\mathbb Z})$, $Y = \{y \in X: y_j = 0 \ \text{for}\ j < 0\}$. Then $Y$ is invariant under $R$, and $\sigma(R)$ is the unit circle while $\sigma(R|_Y)$ is the closed unit disk.
Analyze the convergence or divergence of the sequence $\left\{\frac1n+\sin\frac{n\pi}{2}\right\}$ Analyze the convergence or divergence of the following sequence a) $\left\{\frac{1}{n}+\sin\frac{n\pi}{2}\right\}$ The first one is divergent because of the $\sin\frac{n\pi}{2}$ term, which takes the values, for $n = 1, 2, 3, 4, 5, \dots$: $$1, 0, -1, 0, 1, 0, -1, 0, 1, \dots$$ As you can see, it's divergent. To formally prove it, I could simply notice that it has constant subsequences of $1$s, $0$s, and $-1$s, all of which converge to different limits. If the sequence converged, all of its subsequences would converge to the same limit. Is my procedure correct?
You’re on the right track, but you’ve left out an important step: you haven’t said anything to take the $1/n$ term into account. It’s obvious what’s happening, but you still have to say something. Let $a_n=\frac1n+\sin\frac{n\pi}2$. If $\langle a_n:n\in\Bbb Z^+\rangle$ converged, say to $L$, then the sequence $\left\langle a_n-\frac1n:n\in\Bbb Z^+\right\rangle$ would converge to $L-0=L$, because $\left\langle\frac1n:n\in\Bbb Z^+\right\rangle$ converges to $0$. Now make your (correct) argument about $\left\langle\sin\frac{n\pi}2:n\in\Bbb Z^+\right\rangle$ not converging and thereby get a contradiction. Then you can conclude that $\langle a_n:n\in\Bbb Z^+\rangle$ does not converge.
Evaluating $\lim\limits_{n\to\infty} \left(\frac{1^p+2^p+3^p + \cdots + n^p}{n^p} - \frac{n}{p+1}\right)$ Evaluate $$\lim_{n\to\infty} \left(\frac{1^p+2^p+3^p + \cdots + n^p}{n^p} - \frac{n}{p+1}\right)$$
The result is more general. Fact: For any function $f$ regular enough on $[0,1]$, introduce $$ A_n=\sum_{k=1}^nf\left(\frac{k}n\right)\qquad B=\int_0^1f(x)\mathrm dx\qquad C=f(1)-f(0) $$ Then, $$ \lim\limits_{n\to\infty}A_n-nB=\frac12C $$ For any real number $p\gt0$, if $f(x)=x^p$, one sees that $B=\frac1{p+1}$ and $C=1$, which is the result in the question. To prove the fact stated above, start from Taylor's formula: for every $0\leqslant x\leqslant 1/n$ and $1\leqslant k\leqslant n$, $$ f(x+(k-1)/n)=f(k/n)-(1/n-x)f'(k/n)+u_{n,k}(x)/n $$ where $u_{n,k}(x)\to0$ when $n\to\infty$, uniformly on $k$ and $x$, say $|u_{n,k}(x)|\leqslant v_n$ with $v_n\to0$. Integrating this on $[0,1/n]$ and summing from $k=1$ to $k=n$, one gets $$ \int_0^1f(x)\mathrm dx=\frac1n\sum_{k=1}^nf\left(\frac{k}n\right)-\int_0^{1/n}u\,\mathrm du\cdot\sum_{k=1}^nf'\left(\frac{k}n\right)+\frac1nu_n $$ where $|u_n|\leqslant v_n$. Reordering, this says that $$ A_n=nB+\frac12\frac1n\sum_{k=1}^nf'\left(\frac{k}n\right)-u_n=nB+\frac12\int_0^1f'(x)\mathrm dx+r_n-u_n $$ with $r_n\to0$, thanks to the Riemann integrability of the function $f'$ on $[0,1]$. The proof is complete since $r_n-u_n\to0$ and the last integral is $f(1)-f(0)=C$.
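A quick numerical check of the fact (a sketch for $f(x)=x^p$ with an arbitrary $p>0$):

```python
p = 2.7                      # any p > 0
B = 1/(p + 1)                # integral of x^p over [0, 1]
C = 1.0                      # f(1) - f(0)
for n in (10**3, 10**5, 10**6):
    A_n = sum((k/n)**p for k in range(1, n + 1))
    print(n, A_n - n*B)      # tends to C/2 = 0.5
```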
Finding the second-degree polynomial that is the best approximation for cos(x) So, I need to find the second-degree polynomial that is the best approximation for $f(x) = \cos(x)$ in $L^2_w[a, b]$, where $w(x) = e^{-x}$, $a=0$, $b=\infty$. "Best approximation" for $f$ is a function $\hat{\varphi} \in \Phi$ such that: $||f - \hat{\varphi}|| \le ||f - \varphi||,\; \forall \varphi \in \Phi$ I have several methods available:

* Lagrange interpolation
* Hermite interpolation

Which would be the most appropriate?
In your $L^2$ space the Laguerre polynomials form an orthonormal family, so if you use the polynomial $$ P(x)=\sum_{i=0}^n a_i L_i(x), $$ you will get the approximation error $$ ||P(x)-\cos x||^2=\sum_{i=0}^n(a_i-b_i)^2+\sum_{i>n}b_i^2, $$ (Possibly you need to add a constant to account for the squared norm of the component of cosine, if any, that is orthogonal to all the polynomials. If the Laguerre polynomials form a complete orthonormal family, then this extra term is not needed. Anyway, having that extra term will not affect the solution of this problem.) where $$ b_k=\langle L_k(x)|\cos x\rangle=\int_0^{\infty}L_k(x)\cos x e^{-x}\,dx. $$ I recommend that you calculate $b_0$, $b_1$ and $b_2$, and then try and figure out how you should select the numbers $a_i$ to minimize the error and meet your degree constraint.
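A sketch that computes the first few coefficients $b_k$ numerically and the resulting weighted $L^2$ error of the degree-2 approximant (using SciPy's `quad` and `eval_laguerre`):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

w = lambda x: np.exp(-x)
b = [quad(lambda x, k=k: eval_laguerre(k, x)*np.cos(x)*w(x), 0, np.inf)[0]
     for k in range(3)]
print(b)   # the optimal a_0, a_1, a_2 are exactly these b_k

approx = lambda x: sum(bk*eval_laguerre(k, x) for k, bk in enumerate(b))
err2 = quad(lambda x: (approx(x) - np.cos(x))**2 * w(x), 0, np.inf)[0]
print(err2)   # squared weighted L^2 error of the best degree-2 polynomial
```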
Limit points of sets Find all limit points of given sets: $A = \left\{ (x,y)\in\mathbb{R}^2 : x\in \mathbb{Z}\right\}$ $B = \left\{ (x,y)\in\mathbb{R}^2 : x^2+y^2 >1 \right\}$ I don't know how to do that. Are there any standard ways to do this?
1) Is set $A$ closed or not? If it is, we're done; otherwise there's some point not in it that is a limit point of $A$. 2) As before, but perhaps even easier.
Notation for infinite product in reverse order This question is related to notation of infinite product. We know that, $$ \prod_{i=1}^{\infty}x_{i}=x_{1}x_{2}x_{3}\cdots $$ How do I denote $$ \cdots x_{3}x_{2}x_{1} ? $$ One approach could be $$ \prod_{i=\infty}^{1}x_{i}=\cdots x_{3}x_{2}x_{1} $$ I need to use this expression in a bigger expression so I need a good notation for this. Thank you in advance for your help.
(With tongue in cheek:) what about this? $$\left(x_n\prod_{i=1}^\infty \right)\;$$
Calculate $\int_\gamma \frac{1}{(z-z_0)^2}dz$ This is the definition of the fundamental theorem of contour integration that I have: If $f:D\subseteq\mathbb{C}\rightarrow \mathbb{C}$ is a continuous function on a domain $D \subseteq \mathbb{C}$ and $F:D\subseteq \mathbb{C} \rightarrow \mathbb{C}$ satisfies $F'=f$ on $D$, then for each contour $\gamma$ we have that: $\int_\gamma f(z) dz =F(z_1)-F(z_0)$ where $\gamma[a,b]\rightarrow D$ with $\gamma(a)=Z_0$ and $\gamma(b)=Z_1$. $F$ is the antiderivative of $f$. Let $\gamma(t)=Re^{it}, \ 0\le t \le 2\pi, \ R>0$. In my example it said $\int_\gamma \frac{1}{(z-z_0)^2}dz=0$. Im trying to calculate it out myself, but I got stuck. I get that $f(z)=\frac{1}{(z-z_0)^2}$ has an antiderivative $F(z)=-\frac{1}{(z-z_0)}$. Thus by the fundamental theorem of contour integration: $\int_\gamma \frac{1}{(z-z_0)^2}dz =F(z_1)-F(z_0)\\F(\gamma(2\pi))-F(\gamma(0))\\F(Re^{2\pi i})-F(R)\\-\frac{1}{Re^{2\pi i}-z_0} +\frac{1}{R-z_0}\\-\frac{1}{Re^{i}-z_0} +\frac{1}{R-z_0}$ How does $\int_\gamma \frac{1}{(z-z_0)^2}dz=0$?
$\gamma(2\pi)=Re^{2\pi i}=R=Re^0=\gamma(0)$. The contour is closed, so its endpoints coincide and $F(z_1)-F(z_0)=F(\gamma(2\pi))-F(\gamma(0))=0$. (The slip in your computation is simplifying $e^{2\pi i}$ to $e^{i}$; in fact $e^{2\pi i}=1$.)
$\lim_{x\rightarrow a}\|f(x)\|$ and $\lim_{x\rightarrow a}\frac{\|f(x)\|}{\|x-a\|}$ Given any function $f: \mathbb{R^n} \to \mathbb{R^m}$ , if $$\lim_{x\rightarrow a}\|f(x)\| = 0$$ then does $$\lim_{x\rightarrow a}\frac{\|f(x)\|}{\|x-a\|} = 0 $$ as well? Is the converse true?
For the first part, consider e.g. the case $m = n$ with $f$ defined by $f(x) = x - a$ for all $x$ to see that the answer is no. For the second part, the answer is yes. If $\lim_{x \to a} \|f(x)\|/\|x-a\| = L$ exists (we do not need to assume that it is $0$), then since $\lim_{x \to a} \|x - a\| = 0$ clearly exists, we have that $$ \lim_{x \to a} \|f(x)\| = \lim_{x \to a}\left( \|x - a\| \cdot \frac{\|f(x)\|}{\|x-a\|}\right) = \lim_{x \to a} \|x - a\| \cdot \lim_{x \to a} \frac{\|f(x)\|}{\|x-a\|} = 0 \cdot L = 0 $$ exists and is $0$ by standard limit laws.
Is there a uniform way to define angle bisectors using vectors? Look at the left figure. $x_1$ and $x_2$ are two vectors with the same length (norm). Then $x_1+x_2$ is along the bisector of the angle subtended by $x_1$ and $x_2$. But look at the upper right figure. When $x_1$ and $x_2$ are collinear and in reverse directions, $x_1+x_2=0$ and no longer represent the bisector of the angle (in this case 180 deg). The bisector should be perpendicular to $x_1$ and $x_2$. (The $x_1+x_2$ works well for the case shown in the lower right figure.) Question: Is there a way to represent the bisector for all the three cases? I don't want to exclude the upper right case. Is it possibly helpful to introduce some infinity elements?
I also would like to give a solution, which I am currently using in my work. The key idea is to use a rotation matrix. Suppose the angle between $x_1$ and $x_2$ is $\theta$. Let $R(\theta/2)$ be a rotation matrix, which rotates a vector by $\theta/2$. Then $$y=R(\theta/2)x_1$$ is a unified way to express the bisector. Of course, we also need to pay attention to the details, which can be determined straightforwardly:

* does the rotation matrix rotate a vector clockwise or counter-clockwise?
* how is the angle $\theta$ defined?
* should the bisector be $y=R(\theta/2)x_1$ or $y=R(\theta/2)x_2$?

EDIT: I give an example here (see the sketch below). Consider two unit-length vectors $x_1$ and $x_2$, which give two angles: one in $[0,\pi]$ and the other in $(\pi,2\pi)$. We can define the angle $\theta$ such that rotating $x_1$ counterclockwise by $\theta$ about the origin yields $x_2$. Here $\theta\in[0,2\pi)$. Consequently, define $R(\theta/2)$ as the matrix that rotates a vector counterclockwise by $\theta/2$. (The formula for this kind of $R$ is given here.) Thus $R(\theta/2)x_1$ is a unit-length vector lying on the bisector of $\theta$. Another point, as mentioned by coffemath, is: how do we compute the angle given two vectors? It is not enough to use only $\cos \theta=x_1^Tx_2$, because $\cos \theta$ alone determines two candidate angles whose sum is $2\pi$. However, if we carefully define the angle $\theta$ and $R$, we can also compute $\sin \theta$. For example, define the angle and rotation matrix as above. Then define $x_2^{\perp}=R(\pi/2)x_2$. Then it can be calculated that $x_1^Tx_2^{\perp}=-\sin \theta$. Hence from both $\cos\theta$ and $\sin\theta$ we can compute $\theta$.
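Here is a small sketch of this construction in the plane, using `atan2` on the cross and dot products to get the counterclockwise angle from $x_1$ to $x_2$ in $[0,2\pi)$:

```python
import numpy as np

def bisector(x1, x2):
    """Unit vector along the bisector of the angle swept counterclockwise from x1 to x2."""
    x1 = x1/np.linalg.norm(x1); x2 = x2/np.linalg.norm(x2)
    cross = x1[0]*x2[1] - x1[1]*x2[0]
    theta = np.arctan2(cross, x1 @ x2) % (2*np.pi)   # angle from x1 to x2, in [0, 2*pi)
    h = theta/2
    R = np.array([[np.cos(h), -np.sin(h)], [np.sin(h), np.cos(h)]])  # CCW rotation by theta/2
    return R @ x1

print(bisector(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # ~ (0.707, 0.707)
print(bisector(np.array([1.0, 0.0]), np.array([-1.0, 0.0])))  # ~ (0, 1): antipodal case handled
```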
Why is the expected value $E(X^2) \neq E(X)^2$? I wish to use the Computational formula of the variance to calculate the variance of a normal-distributed function. For this, I need the expected value of $X$ as well as the one of $X^2$. Intuitively, I would have assumed that $E(X^2)$ is always equal to $E(X)^2$. In fact, I cannot imagine how they could be different. Could you explain how this is possible, e.g. with an example?
May as well chime in :) Expectation is linear pretty much by definition, so $E(aX + b) = aE(X) + b$; but linearity says nothing about squares. Compare with the linear function $f(x) = ax$: we have $f(x^2) = ax^2 \not= (ax)^2 = f(x)^2$ in general, and in the same way there is no reason for $E(X^2)$ to equal $E(X)^2$. For a concrete example, let $X$ be $+1$ or $-1$, each with probability $1/2$. Then $E(X) = 0$, so $E(X)^2 = 0$, but $X^2 = 1$ always, so $E(X^2) = 1$. In fact $E(X^2) - E(X)^2 = \operatorname{Var}(X) \ge 0$, with equality only when $X$ is (almost surely) constant, so the two quantities agree only in that degenerate case. :)
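If it helps to see it numerically, here is a quick simulation sketch (the choice of distribution is just my own example):

```python
import random

random.seed(0)
samples = [random.choice([-1, 1]) for _ in range(100_000)]  # X = +/-1 with probability 1/2

mean_x = sum(samples) / len(samples)
mean_x2 = sum(x * x for x in samples) / len(samples)

print("E(X)^2 ~", mean_x ** 2)   # close to 0
print("E(X^2) ~", mean_x2)       # exactly 1 here, since X^2 = 1 always
```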
Positive Operator Value Measurement Question I'm attempting to understand some of the characteristics of Positive Operator Value Measurement (POVM). For instance in Nielsen and Chuang, they obtain a set of measurement operators $\{E_m\}$ for states $|\psi_1\rangle = |0\rangle, |\psi_2\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$. They end up obtaining the following set of operators: \begin{align*} E_1 &\equiv \frac{\sqrt{2}}{1+\sqrt{2}} |1\rangle \langle 1 |, \\ E_2 &\equiv \frac{\sqrt{2}}{1+\sqrt{2}} \frac{(|0\rangle - |1\rangle) (\langle 0 | - \langle 1 |)}{2}, \\ E_3 &\equiv I - E_1 - E_2 \end{align*} Basically, I'm oblivious to how they were able to obtain these. I thought that perhaps they found $E_1$ by utilizing the formula: \begin{align*} E_1 = \frac{I - |\psi_2\rangle \langle \psi_2|}{1 + |\langle \psi_1|\psi_2\rangle|} \end{align*} However, when working it out, I do not obtain the same result. I'm sure it's something dumb and obvious I'm missing here. Any help on this would be very much appreciated. Thanks.
Yes, those are the results, but you have the subindices swapped: the projector in the numerator should be built from $|\psi_1\rangle$, not $|\psi_2\rangle$. Indeed, $I - |\psi_1\rangle\langle\psi_1| = |1\rangle\langle 1|$ and $1 + |\langle \psi_1|\psi_2\rangle| = 1 + \tfrac{1}{\sqrt{2}} = \tfrac{1+\sqrt{2}}{\sqrt{2}}$, so dividing gives exactly $E_1 = \tfrac{\sqrt{2}}{1+\sqrt{2}}|1\rangle\langle 1|$; the same formula with $|\psi_2\rangle$ in the projector produces $E_2$.
Is critical Hausdorff measure a Frostman measure? Let $K$ be a compact set in $\mathbb{R}^d$ of Hausdorff dimension $\alpha<d$, $H_\alpha(\cdot)$ the $\alpha$-dimensional Hausdorff measure. If $0<H_\alpha(K)<\infty$, is it necessarily true that $H_\alpha(K\cap B)\lesssim r(B)^\alpha$ for any open ball $B$? Here $r(B)$ denotes the radius of the ball $B$. This seems to be true when $K$ enjoys some self-similarity, e.g. when $K$ is the standard Cantor set. But I am not sure if it is also true for general sets.
Consider e.g. $\alpha=1$, $d=2$. Let $K$ be the union of a sequence of line segments of lengths $1/n^2$, $n = 1,2,3,\ldots$, all with one endpoint at $0$. Then for $0 < r < 1$, if $B$ is the ball of radius $r$ centred at $0$, $H_1(K \cap B) = \sum_{n \le r^{-1/2}} r + \sum_{n > r^{-1/2}} n^{-2} \approx r^{1/2}$, which for small $r$ is much larger than $r$, so the bound $H_1(K\cap B)\lesssim r(B)^1$ fails.
Prove that $N(\gamma) = 1$ if, and only if, $\gamma$ is a unit in the ring $\mathbb{Z}[\sqrt{n}]$ Prove that $N(\gamma) = 1$ if, and only if, $\gamma$ is a unit in the ring $\mathbb{Z}[\sqrt{n}]$ Where $N$ is the norm function that maps $\gamma = a+b\sqrt{n} \mapsto \left | a^2-nb^2 \right |$ I have managed to prove $N(\gamma) = 1 \Rightarrow \gamma$ is a unit (i think), but cannot prove $\gamma$ is a unit $\Rightarrow N( \gamma ) = 1$ Any help would be appreciated, cheers
Hint $\rm\ \ unit\ \alpha\iff \alpha\:|\: 1\iff \alpha\alpha'\:|\:1 \iff unit\ \alpha\alpha',\ $ since $\rm\:\alpha\:|\:1\iff\alpha'\:|\:1' = 1$. Finally, $\alpha\alpha' = a^2-nb^2$ is a rational integer, and a rational integer is a unit in $\mathbb{Z}[\sqrt{n}]$ iff it equals $\pm1$, i.e. iff $N(\alpha)=|a^2-nb^2|=1$.
Peano postulates I'm looking for a set containing an element 0 and a successor function s that satisfies the first two Peano postulates (s is injective and 0 is not in its image), but not the third (the one about induction). This is of course exercise 1.4.9 in MacLane's Algebra book, so it's more or less homework, so if you could do the thing where you like point me in the right direction without giving it all away that'd be great. Thanks!
Since your set has 0 and a successor function, it must contain $\Bbb N$. The induction axiom is what ensures that every element is reachable from 0. So throw in some extra non-$\Bbb N$ elements that are not reachable from 0 and give them successors. There are several ways to do this. Geometrically, $\Bbb N$ is a ray with its endpoint at 0. The Peano axioms force it to be this shape. Each axiom prevents a different pathology. For example, the axiom $Sn\ne 0$ is required to prevent the ray from curling up into a circle. It's a really good exercise to draw various pathological shapes and then see which ones are ruled out by which axioms, and conversely, for each axiom, to produce a pathology which is ruled out by that axiom. Addendum: I just happened to be reading Frege's Theorem and the Peano Postulates by G. Boolos, and on p.318 it presents a variation of this exercise that you might enjoy. Boolos states a version of the Peano axioms: * *$\forall x. {\bf 0}\ne {\bf s}x$ *$\forall x.\forall y.{\bf s}x={\bf s}y\rightarrow x=y$ *(Induction) $\forall F. (F{\bf 0}\wedge \forall x(Fx\rightarrow F{\bf s}x)\rightarrow \forall x. F x) $ And then says: Henkin observed that (3) implies the disjunction of (1) and (2)… It is easy to construct models in which each of the seven conjunctions ±1±2±3 other than –1–2+3 holds; so no other dependencies among 1, 2, and 3 await discovery. Your job: find the models!
Removing redundant sets from an intersection Let $I$ be a non-empty set and $(A_i)_{i\in I}$ a family of sets. Is it true that there exists a subset $J\subset I$ such that $\bigcap_{j\in J}A_j=\bigcap_{i\in I}A_i$ and, for any $j_0\in J$, $\bigcap_{j\in J-\{j_0\}}A_j\neq\bigcap_{j\in J}A_j$? If $I=\mathbb{N}$, the answer is yes (if I am not mistaken): $J$ can be constructed by starting with $\mathbb{N}$ and, at the $n$-th step, removing $n$ if that does not affect the intersection. What if $I$ is uncountable? I guess the answer is still "yes" and tried to prove it by generalizing the above approach using transfinite induction, but I failed. The answer "yes" or "no" and a sketch of a proof (resp. a counterexample) would be nice.
The answer is no, even in the case $I=\mathbb N$. To see this, consider the collection $A_i=[i,\infty)\subset \mathbb R$. Then $\bigcap\limits_{i\in I}A_i=\emptyset$ and this remains true if we intersect over any infinite subset $J\subseteq I$, yet it is false if we intersect over a finite subset. Thus there is no minimal subset $J$ such that $\bigcap\limits_{i\in I}A_i=\bigcap\limits_{j\in J}A_j$.
Alternative proof of the limitof the quotient of two sums. I found the following problem by Apostol: Let $a \in \Bbb R$ and $s_n(a)=\sum\limits_{k=1}^n k^a$. Find $$\lim_{n\to +\infty} \frac{s_n(a+1)}{ns_n(a)}$$ After some struggling and helpless ideas I considered the following solution. If $a > -1$, then $$\int_0^1 x^a dx=\frac{1}{a+1}$$ is well defined. Thus, let $$\lambda_n(a)=\frac{s_n(a)}{n^{a+1}}$$ It is clear that $$\lim\limits_{n\to +\infty} \lambda_n(a)=\int_0^1 x^a dx=\frac{1}{a+1}$$ and thus $$\lim_{n\to +\infty} \frac{s_n(a+1)}{ns_n(a)}=\lim_{n \to +\infty} \frac{\lambda_n(a+1)}{\lambda_n(a)}=\frac{a+1}{a+2}$$ Can you provide any other proof for this? I used mostly integration theory but maybe there are other simpler ideas (or more complex ones) that can be used. (If $a=-1$ then the limit is zero, since it is simply $H_n^{-1}$ which goes to zero since the harmonic series is divergent. For the case $a <-1$, the simple inequalities $s_n(a+1) \le n\cdot n^{a+1} = n^{a+2}$ and $s_n(a) \ge 1$ show that the limit is also zero.)
The argument below works for any real $a > -1$. We are given that $$s_n(a) = \sum_{k=1}^{n} k^a$$ Let $a_n = 1$ and $A(t) = \displaystyle \sum_{k \leq t} a_n = \left \lfloor t \right \rfloor$. Hence, $$s_n(a) = \int_{1^-}^{n^+} t^a dA(t)$$ The integral is to be interpreted as the Riemann Stieltjes integral. Now integrating by parts, we get that $$s_n(a) = \left. t^a A(t) \right \rvert_{1^-}^{n^+} - \int_{1^-}^{n^+} A(t) a t^{a-1} dt = n^a \times n - a \int_{1^-}^{n^+} \left \lfloor t \right \rfloor t^{a-1} dt\\ = n^{a+1} - a \int_{1^-}^{n^+} (t -\left \{ t \right \}) t^{a-1} dt = n^{a+1} - a \int_{1^-}^{n^+} t^a dt + a \int_{1^-}^{n^+}\left \{ t \right \} t^{a-1} dt\\ = n^{a+1} - a \left. \dfrac{t^{a+1}}{a+1} \right \rvert_{1^-}^{n^+} + a \int_{1^-}^{n^+}\left \{ t \right \} t^{a-1} dt\\ =n^{a+1} - a \dfrac{n^{a+1}-1}{a+1} + a \int_{1^-}^{n^+}\left \{ t \right \} t^{a-1} dt\\ = \dfrac{n^{a+1}}{a+1} + \dfrac{a}{a+1} + \mathcal{O} \left( a \times 1 \times \dfrac{n^a}{a}\right)\\ = \dfrac{n^{a+1}}{a+1} + \mathcal{O} \left( n^a \right)$$ Hence, we get that $$\lim_{n \rightarrow \infty} \dfrac{s_n(a)}{n^{a+1}/(a+1)} = 1$$ Hence, now $$\dfrac{s_{n}(a+1)}{n s_n(a)} = \dfrac{\dfrac{s_n(a+1)}{n^{a+2}/(a+2)}}{\dfrac{s_n(a)}{n^{a+1}/(a+1)}} \times \dfrac{a+1}{a+2}$$ Hence, we get that $$\lim_{n \rightarrow \infty} \dfrac{s_{n}(a+1)}{n s_n(a)} = \dfrac{\displaystyle \lim_{n \rightarrow \infty} \dfrac{s_n(a+1)}{n^{a+2}/(a+2)}}{\displaystyle \lim_{n \rightarrow \infty} \dfrac{s_n(a)}{n^{a+1}/(a+1)}} \times \dfrac{a+1}{a+2} = \dfrac11 \times \dfrac{a+1}{a+2} = \dfrac{a+1}{a+2}$$ Note that the argument needs to be slightly modified for $a = -1$ or $a = -2$. However, the two cases can be argued directly itself. If $a=-1$, then we want $$\lim_{n \rightarrow \infty} \dfrac{s_n(0)}{n s_n(-1)} = \lim_{n \rightarrow \infty} \dfrac{n}{n H_n} = 0$$ If $a=-2$, then we want $$\lim_{n \rightarrow \infty} \dfrac{s_n(-1)}{n s_n(-2)} = \dfrac{6}{\pi^2} \lim_{n \rightarrow \infty} \dfrac{H_n}{n} = 0$$ In general, for $a <-2$, note that both $s_n(a+1)$ and $s_n(a)$ converge. Hence, the limit is $0$. For $a \in (-2,-1)$, $s_n(a)$ converges but $s_n(a+1)$ diverges slower than $n$. Hence, the limit is again $0$. Hence to summarize $$\lim_{n \rightarrow \infty} \dfrac{s_n(a+1)}{n s_n(a)} = \begin{cases} \dfrac{a+1}{a+2} & \text{ if }a>-1\\ 0 & \text{ if } a \leq -1 \end{cases}$$
Showing that if $R$ is a commutative ring and $M$ an $R$-module, then $M \otimes_R (R/\mathfrak m) \cong M / \mathfrak m M$. Let $R$ be a local ring, and let $\mathfrak m$ be the maximal ideal of $R$. Let $M$ be an $R$-module. I understand that $M \otimes_R (R / \mathfrak m)$ is isomorphic to $M / \mathfrak m M$, but I verified this directly by defining a map $M \to M \otimes_R (R / \mathfrak m)$ with kernel $\mathfrak m M$. However I have heard that there is a way to show these are isomorphic using exact sequences and using exactness properties of the tensor product, but I am not sure how to do this. Can anyone explain this approach? Also can the statement $M \otimes_R (R / \mathfrak m) \cong M / \mathfrak m M$ be generalised at all to non-local rings?
Moreover, let $I$ be a right ideal of a (possibly noncommutative) ring $R$ and $M$ a left $R$-module; then $M/IM\cong (R/I)\otimes_R M$.
Area of ellipse given foci? Is it possible to get the area of an ellipse from the foci alone? Or do I need at least one point on the ellipse too?
No: the foci alone do not determine the area; you also need one point on the ellipse (equivalently, the string length). If the foci are points $p,q\in\mathbb{R}^{2}$ on a horizontal line and a point on the ellipse is $c\in\mathbb{R}^{2}$, then the string length $\ell=\left|p-c\right|+\left|q-c\right|$ (the distance from the first focus to the point on the ellipse to the second focus) determines the semi-axis lengths. Using the Pythagorean theorem, the vertical semi-axis has length $\sqrt{\frac{\ell^{2}}{4}-\frac{\left|p-q\right|^{2}}{4}}$. Using the fact that the horizontal semi-axis is along the line joining $p$ to $q$, the horizontal semi-axis has length $\frac{\ell}{2}$. Thus the area is $\pi\sqrt{\frac{\ell^{2}-\left|p-q\right|^{2}}{4}}\frac{\ell}{2}$ ($\pi$ times the product of the two semi-axis lengths, analogous to the circle area formula $\pi r^{2}$).
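For what it is worth, here is a small sketch of the computation above (the function name and test values are my own): given the two foci and one point on the ellipse, recover the string length and hence the area.

```python
import math

def ellipse_area(p, q, c):
    # Area of the ellipse with foci p and q that passes through the point c.
    def dist(u, v):
        return math.hypot(u[0] - v[0], u[1] - v[1])
    ell = dist(p, c) + dist(q, c)              # string length, equals 2a
    focal = dist(p, q)                         # distance between the foci, equals 2c
    a = ell / 2                                # semi-major axis
    b = math.sqrt(ell ** 2 - focal ** 2) / 2   # semi-minor axis
    return math.pi * a * b

# Foci (+-3, 0) and the point (5, 0) lie on x^2/25 + y^2/16 = 1, whose area is pi*5*4.
print(ellipse_area((3, 0), (-3, 0), (5, 0)), math.pi * 20)
```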
How to prove an L$^p$ type inequality Let $a,b\in[0,\infty)$ and let $p\in[1,\infty)$. How can I prove$$a^p+b^p\le(a^2+b^2)^{p/2}.$$
Some hints: * *By homogeneity, we can assume that $b=1$. *Let $f(t):=(t^2+1)^{p/2}-t^p-1$ for $t\geq 0$. We have $f'(t)=p\,t\left((t^2+1)^{p/2-1}-(t^2)^{p/2-1}\right)$. Since $t^2+1\geq t^2$, this derivative is non-negative when $p\geq 2$ and non-positive when $p<2$. *Since $f(0)=0$, deduce the wanted inequality: it holds as stated for $p\ge 2$, and is reversed for $1\le p\le 2$.
Finitely Generated Group Let $G$ be finitely generated; my question is: does there always exist $H\leq G$, $H\not=G$, with finite index? Of course if $G$ is finite it is true. But what if $G$ is infinite?
No. I suspect there are easier and more elegant ways to answer this question, but the following argument is one way to see it: * *There are finitely generated infinite simple groups: * *In 1951, Higman constructed the first example in A Finitely Generated Infinite Simple Group, J. London Math. Soc. (1951) s1-26 (1), 61–64. *Very popular are Thompson's groups. *I happen to like the Burger–Mozes family of finitely presented infinite simple torsion-free groups, described in Lattices in product of trees. Publications Mathématiques de l'IHÉS, 92 (2000), p. 151–194 (full disclosure: I wrote my thesis under the direction of M.B.). *See P. de la Harpe, Topics in Geometric Group Theory, Complement V.26 for further examples and references. *If a group $G$ has a proper finite index subgroup $H$ then $H$ contains a proper normal subgroup of $G$ of finite index (its normal core); in particular no infinite simple group can have a proper finite index subgroup. See also Higman's group for an example of a finitely presented group with no non-trivial finite quotients. By the same reasoning as above it can't have a proper finite index subgroup.
Simplifying quotient or localisation of a polynomial ring Let $R$ be a commutative unital ring and $g\in R[X]$ a polynomial with the property that $g(0)$ is a unit in $R$ and $g(1)=1$. Is there any possible way to understand either $$R[X]/g$$ or $$g^{-1}R[X]$$ better? Here $g^{-1}R[X]$ is the localised ring for the multiplicative set $\{g^n\}$. I can't find a way to incorporate the extra conditions on $g$. It would be favorable if the rings were expressible using the ring $R[X,X^{-1}]$ or the ideal $X(X-1)R[X]$. Also any graded ring with a simple expression in degree zero is much appreciated. Background: I'm working on some exact sequences in the K-theory of rings. The two rings above are part of some localisation sequence and understanding either one of them would help me simplify things. Edit: I tried to go for the basic commutative algebra version of it, since I expected it to be easy. What I am actually trying to show is that if $S$ is the set of all those $g$ with the property as described above, then there is an exact sequence (assume $R$ regular, or take homotopy K-theory) $$0\to K_i(R)\to K_i(S^{-1}(R[X]))\to K_i(R)\to 0$$ where the first map comes from the natural inclusion and the second map should be something like the difference of the two possible evaluation maps (as described in @Martin's answer). There are some reasons to believe that this is true (coming from a way more abstract setting), but it still looks strange to me. Moreover we should be able to work with one $g$ at a time and then take the colimit. With this edit I assume this question should go to MO as well -_-
The only thing which comes into my mind is the following: $g(1)=1$ (or $g(0)$ is a unit) ensures that $R[X] \to R$, $x \mapsto 1$ (or $x \mapsto 0$), extends to a homomorphism $g^{-1} R[X] \to R$. For more specific answers, a more specific question is necessary ;).
What are some books that I should read on 3D mathematics? I'm a first-grade highschool student who has been making games in 2D most of the time, but I started working on a 3D project for a change. I'm using a high-level engine that abstracts most of the math away from me, but I'd like to know what I'm dealing with! What books should I read on 3D mathematics? Terms like "rotation matrices" should be explained in there, for example. I could, of course, go searching these things on the interweb, but I really like books and I would probably miss something out by self-educating, which is what I do most of the time anyway. I mostly know basic mathematics, derivatives of polynomial functions is the limit to my current knowledge, but I probably do have some holes on the fields of trigonometry and such (we didn't start learning that in school, yet, so basically I'm only familiar with sin, cos and atan2).
"Computer Graphics: Principles and Practice, Third Edition, remains the most authoritative introduction to the field. The first edition, the original “Foley and van Dam,” helped to define computer graphics and how it could be taught. The second edition became an even more comprehensive resource for practitioners and students alike. This third edition has been completely rewritten to provide detailed and up-to-date coverage of key concepts, algorithms, technologies, and applications." This quote from Amazon.com represents the high regard this text on computer graphics has commanded for decades, as Foley & van Dam presents algorithms for generating CG as well as answers to more obscure issues such as clipping and examples of different methods for rendering in enough detail to actually implement solutions
UFDs are integrally closed Let $A$ be a UFD, $K$ its field of fractions, and $f$ a monic polynomial in $A[T]$. I'm trying to prove that if $f$ has a root $\alpha \in K$, then in fact $\alpha \in A$. I'm trying to exploit some fact about irreducibility; will it help? I haven't done anything with splitting fields, but this is something I can look for.
Overkill: It suffices to show that $A$ is integrally closed, which we prove using Serre's criterion. For this, we recall some of the definitions, including the definitions of the properties $R_n$ and $S_n$ that appear in Serre's criterion. We will assume $A$ is locally Noetherian (can one assume less for this approach to work?) Background Definition. A ring $A$ is said to satisfy the criterion $R_n$ if for every prime ideal $\mathfrak p$ of $A$ such that $\operatorname{ht}(\mathfrak p)\le n$, the localization $A_{\mathfrak p}$ is a regular local ring, which means that the maximal ideal can be generated by $\operatorname{ht}(\mathfrak p)$ elements. Definition. A Noetherian ring $A$ is said to satisfy the criterion $S_n$ if for every prime ideal $\mathfrak p$ of $A$, we have the inequality $$\operatorname{depth}(A_{\mathfrak p}) \ge \min\{n,\operatorname{ht}(\mathfrak p)\}$$ This relies on the notion of depth, which is the length of a maximal regular sequence in the maximal ideal. Exercise. Give a definition of the $S_n$ condition for modules. (Note: there are actually two distinct definitions in the literature, which only agree when the annihilator of the module is a nilpotent ideal.) Exercise. Show that a Noetherian ring $A$ is reduced if and only if $A$ satisfies $R_0$ and $S_1$. With these definitions out of the way, we now state the criterion of which we will benefit. Theorem. (Serre's Criterion). A Noetherian integral domain $A$ is integrally closed if and only if $A$ has the properties $R_1$ and $S_2$. Proof that UFDs are Integrally Closed Firstly, localizations of UFDs are UFDs while intersections of integrally closed domains in the field of fractions of $A$ are integrally closed. Recalling that $A=\bigcap_{\mathfrak p\in\operatorname{Spec}A} A_{\mathfrak p}$, we may assume $A$ is local. Now, $A$ is $R_1$ because prime ideals $\mathfrak p$ of height $1$ are principal and thus $A_{\mathfrak p}$ is a DVR. Also, $A$ is $S_1$ because $A$ is an integral domain and thus reduced, while for any irreducible $f\in A$ we have $A/fA$ is an integral domain, so $A/fA$ is $S_1$. This implies \begin{equation*} \operatorname{depth} A \ge \min\{2,\dim A\} \end{equation*} The argument works for any local UFD, in particular the localizations of $A$. So, $A$ is $S_2$. By Serre's criterion, $A$ is integrally closed.
Classification of automorphisms of projective space Let $k$ be a field, $n$ a positive integer. Vakil's notes, 17.4.B: Show that all the automorphisms of the projective scheme $P_k^n$ correspond to $(n+1)\times(n+1)$ invertible matrices over $k$, modulo scalars. His hint is to show that $f^\star \mathcal{O}(1) \cong \mathcal{O}(1).$ ($f$ is the automorphism. I don't know if $\mathcal{O}(1)$ is the conventional notation; if it's unclear, it's an invertible sheaf over $P_k^n$.) I can show what he wants assuming this, but can someone help me find a clean way to show this?
Well, $f^*(\mathcal{O}(1))$ must be a line bundle on $\mathbb{P}^n$. In fact, $f^*$ gives a group automorphism of $\text{Pic}(\mathbb{P}^n) \cong \mathbb{Z}$, with inverse $(f^{-1})^*$. Thus, $f^*(\mathcal{O}(1))$ must be a generator of $\text{Pic}(\mathbb{P}^n)$, either $\mathcal{O}(1)$ or $\mathcal{O}(-1)$. But $f^*$ is also an automorphism on the space of global sections, again with inverse $(f^{-1})^*$. Since $\mathcal{O}(1)$ has an $(n+1)$-dimensional vector space of global sections, but $\mathcal{O}(-1)$ has no non-zero global sections, it is impossible for $f^*(\mathcal{O}(1))$ to be $\mathcal{O}(-1)$.
Some interesting questions on completeness (interesting to me anyway) Suppose $(Y,\Vert\cdot\Vert)$ is a complete normed linear space. If the vector space $X\supset Y$ with the same norm $\Vert\cdot\Vert$ is a normed linear space, then is $(X,\Vert\cdot\Vert)$ necessarily complete? My guess is no. However, I am not aware of any examples. Side interest: If $X$ and $Y$ are Banach (with possibly different norms), I want to make $X \times Y$ Banach. But I realize that in order to do this, we cannot use the same norm as we did for $X$ and $Y$ because it's not like $X \subseteq X \times Y$ or $Y \subseteq X \times Y$. What norm (if there is one) on $X \times Y$ will guarantee us a Banach space? I'm sure these questions are standard ones in functional analysis. I just haven't come across them in my module. Thanks in advance.
You have to look at infinite dimensional Banach spaces. For example, $X=\ell^2$, the vector space of square-summable sequences of real numbers. Let $Y:=\{(x_n)_n, \exists k\in\Bbb N, x_n=0\mbox{ if }n\geq k\}$. It's a vector subspace of $X$, but not complete since it's not closed (it's in fact a strict and dense subset). However, for two Banach spaces $X$ and $Y$, you can put norms on $X\times Y$ such that this space is a Banach space. For example, if $N$ is a norm on $\Bbb R^2$, define $\lVert(x,y)\rVert:=N(\lVert x\rVert_X,\lVert y\rVert_Y)$.
Extremely Tricky Probability Question? Here's the question. It's quite difficult: David is given a deck of 40 cards. There are 3 gold cards in the deck, 3 silver cards in the deck, 3 bronze cards in the deck and 3 black cards in the deck. If David draws a gold card on his first turn, he will win $50. (The object is to get at least one gold card). The other colored cards are used to help him get the gold card, while the remaining 28 do nothing. David initially draws a hand of 6 cards, and will now try to draw a gold card, if he did not already draw one. He may now use the other cards to help him. All of the differently colored cards may be used in the first turn. David can use a silver card to draw 1 more card. David can use a bronze card to draw 1 more card. However, he can only use 1 of these per turn. David can use a black card to look at the top 3 cards of the deck, and add one of them to his hand. He then sends the rest back to the deck and shuffles. He can only use 1 of these cards per turn. What are the odds David draws the gold card on his first turn?
We can ignore the silver cards: each one, whenever we see it, is simply replaced with another card. Similarly, if you have a bronze, you should draw immediately (but subsequent bronzes don't let you replace them). So the deck is really $36$ cards, $3$ gold, $3$ black, and $30$ other (including the 2 bronzes after the first). You win if there is a gold in the first $6$, or a black in the first six, no gold in 1-6 and a gold in 7-9, or a black in 1-6, another in 1-9, no gold in 1-9 and a gold in 10-12, or a black in 1-6, another in 1-9, another in 1-12, no gold in 1-12 and a gold in 13-15. All these possibilities are disjoint, so you can just add them up.
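Because the case analysis is easy to get wrong, here is a rough Monte Carlo sketch that simply checks the disjoint events listed above, under the same simplified 36-card model (this only tests that model, not the original game rules; trial count and seed are my own choices):

```python
import random

random.seed(1)
deck = ["gold"] * 3 + ["black"] * 3 + ["other"] * 30   # the reduced 36-card deck

def wins(c):
    if "gold" in c[:6]:
        return True
    if "black" in c[:6] and "gold" not in c[:6] and "gold" in c[6:9]:
        return True
    if ("black" in c[:6] and c[:9].count("black") >= 2
            and "gold" not in c[:9] and "gold" in c[9:12]):
        return True
    if ("black" in c[:6] and c[:9].count("black") >= 2 and c[:12].count("black") >= 3
            and "gold" not in c[:12] and "gold" in c[12:15]):
        return True
    return False

trials = 100_000
hits = sum(wins(random.sample(deck, len(deck))) for _ in range(trials))
print(hits / trials)   # estimated winning probability under this simplified model
```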
Relation between min max of a bounded with compact and continuity While reading through Kantorovitz's book on functional analysis, I had a query that need clarification. If $X$ is compact, $C_{B}(X)$ - bounded continuous function, with the sup-norm coincides with $C(X)$ - continuous real valued function, with the sup-norm, since if $f:X \rightarrow \mathbb{R}$ is continuous and $X$ is compact, then $\vert f \vert$ is bounded. May I know how the above relates to the corollary that states: Let $X$ be a compact topological space. If $f \in C(X)$, then $\vert f \vert$ has a minimum and a maximum value on $X$. I believe the relation here is that the function is bounded and hence relate to the corollary but hope someone can clarify just to be sure. Thank You.
That is exactly what you said, just rephrased a little: every continuous real function on a compact space is bounded. We know that the image of a compact set under a continuous function is compact, and that implies boundedness of the image. A function is bounded exactly when its image is bounded, so it's proved! Then $C(X)=C_B(X)$. The corollary's minimum and maximum are a bonus: attaining a minimum and a maximum in particular implies boundedness, so your reasoning is correct.
Conditions for integrability Michael Spivak, in his "Calculus", writes: "Although it is possible to say precisely which functions are integrable, the criterion for integrability is too difficult to be stated here." I request someone to please state that condition. Thank you very much!
This is commonly called the Riemann-Lebesgue Theorem, or the Lebesgue Criterion for Riemann Integration (the wiki article). The statement is that a function on $[a,b]$ is Riemann integrable iff * *It is bounded *It is continuous almost everywhere, or equivalently, its set of discontinuities has Lebesgue measure zero
An alternating series ... Find the limit of the following series: $$ 1 - \frac{1}{4} + \frac{1}{6} - \frac{1}{9} + \frac{1}{11} - \frac{1}{14} + \cdots $$ If I go the integration way, all is fine for a while, but then things become pretty ugly. I'm trying to find out if there is some easier way to follow.
Let $S = 1 - x^{3} + x^{5} -x^{8} + x^{10} - x^{13} + \cdots$. Then what you want is $\int_{0}^{1} S \ dx$. But we have \begin{align*} S &= 1 - x^{3} + x^{5} -x^{8} + x^{10} - x^{13} + \cdots \\ &= -(x^{3}+x^{8} + x^{13} + \cdots) + (1+x^{5} + x^{10} + \cdots) \\ &= -\frac{x^{3}}{1-x^{5}} + \frac{1}{1-x^{5}} \end{align*} Now you have to evaluate $\displaystyle \int_{0}^{1}\frac{1-x^{3}}{1-x^{5}} \ dx$, which can be done by partial fractions (or handed to WolframAlpha) to obtain a closed form.
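A quick numerical cross-check of the series/integral identity (the cut-offs are my own choices):

```python
# Partial sum of 1 - 1/4 + 1/6 - 1/9 + 1/11 - 1/14 + ... :
# the denominators pair up as 5k+1 and 5k+4 for k = 0, 1, 2, ...
series = sum(1.0 / (5 * k + 1) - 1.0 / (5 * k + 4) for k in range(200_000))

# Midpoint-rule approximation of the integral of (1 - x^3)/(1 - x^5) over [0, 1].
n = 200_000
integral = sum((1 - ((i + 0.5) / n) ** 3) / (1 - ((i + 0.5) / n) ** 5) for i in range(n)) / n

print(series, integral)   # the two numbers should agree to several decimal places
```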
Simple recurrence relation in three dimensions I have the following recurrence relation: $$f[i,j,k] = f[i-1,j,k] + f[i,j-1,k] + f[i,j,k-1],\quad \mbox{for } i \geq j+k,$$ starting with $f[0,0,0]=1$, for $i$, $j$, and $k$ non-negative. Is there any way to find a closed form expression for $f[i,j,k]$? Note that this basically is a three dimensional version of the Catalan triangle, for which $f[i,j] = f[i-1,j] + f[i,j-1]$, for $i \geq j$, starting with $f[0,0]=1$. For this, a closed form expression is known: $f[i,j] = \frac{(i+j)!(i-j+1)}{j!(i+1)!}$. Appreciate your help!
With the constraint $i \geq j+k$ I got following formula (inspired by the Fuss-Catalan tetrahedra formula page 10 and with my thanks to Brian M. Scott for pointing out my errancies...) : $$f[i,j,k]=\binom{i+1+j}{j} \binom{i+j+k}{k} \frac{i+1-j-k}{i+1+j}\ \ \text{for}\ i \geq j+k\ \ \text{and}\ \ 0\ \ \text{else}$$ plane $k=0$ $ \begin{array} {lllll|lllll} 1\\ 1 & 1\\ 1 & 2 & 2\\ 1 & 3 & 5 & 5\\ 1 & 4 & 9 & 14 & 14\\ \end{array} $ plane $k=1$ $ \begin{array} {l} 0\\ 1 \\ 2 & 4\\ 3 & 10 & 15\\ 4 & 18 & 42 & 56\\ 5 & 28 & 84 & 168 & 210\\ \end{array} $ plane $k=2$ $ \begin{array} {l} 0\\ 0\\ 2 \\ 5 & 15\\ 9 & 42 & 84\\ 14 & 84 & 252 & 420\\ 20 & 144 & 540 & 1320 & 1980\\ \end{array} $ Without the $i \geq j+k$ constrains we get the simple : $$f[i,j,k]=\frac{(i+j+k)!}{i!j!k!}$$ That is the Trinomial expansion (extension of Pascal triangle in 3D : Pascal tetrahedron). At least with the rules : * *$f[0,0,0]=1$ *$f[i,j,k]=0$ if $i<0$ or $j<0$ or $k<0$ *$f[i,j,k] = f[i-1,j,k] + f[i,j-1,k] + f[i,j,k-1]$ in the remaining cases
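One can sanity-check the closed form against the recurrence numerically; here is a small sketch (the helper names are mine):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def f_rec(i, j, k):
    # f from the recurrence, with f[0,0,0] = 1 and f = 0 outside the region i >= j + k.
    if i < 0 or j < 0 or k < 0 or i < j + k:
        return 0
    if (i, j, k) == (0, 0, 0):
        return 1
    return f_rec(i - 1, j, k) + f_rec(i, j - 1, k) + f_rec(i, j, k - 1)

def f_closed(i, j, k):
    return comb(i + 1 + j, j) * comb(i + j + k, k) * (i + 1 - j - k) // (i + 1 + j)

print(all(f_rec(i, j, k) == f_closed(i, j, k)
          for i in range(12) for j in range(12) for k in range(12)
          if i >= j + k))
```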
Looking for intuition behind coin-flipping pattern expectation I was discussing the following problem with my son: Suppose we start flipping a (fair) coin, and write down the sequence; for example it might come out HTTHTHHTTTTH.... I am interested in the expected number of flips to obtain a given pattern. For example, it takes an expected 30 flips to get HHHH. But here's the (somewhat surprising) thing: it takes only 20 expected flips to get HTHT. The tempting intuition is to think that any pattern XXXX is equiprobable since, in batches of 4 isolated flips, this is true. But when we are looking for embedded patterns like this, things change. My son wanted to know why HTHT was so much more likely to occur before HHHH but I could not articulate any kind of satisfying explanation. Can you?
Suppose we have a 4-slot queue. By state we mean the longest tail of the coin sequence that matches an initial segment of the pattern $XXXX$. If there is no match, we denote the state as $\varnothing$. For instance, the state of the sequence $$TTTTHTHHTTTHTH,$$ given the pattern $XXXX = HTHT$, is $HTH$, and the state for the pattern $TTTT$ is $\varnothing$. Now suppose the pattern is $XXXX = HHHH$. If you get a $T$ and fail to complete the pattern, the state collapses to $\varnothing$, so that we have to start at the beginning. But if the pattern is $XXXX = HTHT$ and the previous state is either $H$ or $HTH$, then the state collapses to $H$ even if you fail. Thus we do not have to start at the beginning in this case. This difference allows us to complete the pattern faster, resulting in the shorter expected time.
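To tie this back to the numbers in the question, here is a quick simulation sketch estimating both expected waiting times (one could also solve the Markov chain exactly; trial count and seed are my own choices):

```python
import random

def waiting_time(pattern, rng):
    # Number of flips of a fair coin until `pattern` first appears as a consecutive block.
    window, flips = "", 0
    while True:
        window = (window + rng.choice("HT"))[-len(pattern):]
        flips += 1
        if window == pattern:
            return flips

rng = random.Random(0)
trials = 50_000
for pattern in ("HHHH", "HTHT"):
    avg = sum(waiting_time(pattern, rng) for _ in range(trials)) / trials
    print(pattern, avg)   # roughly 30 for HHHH and roughly 20 for HTHT
```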
Product of two ideals doesn't equal the intersection The product of two ideals is defined as the set of all finite sums $\sum f_i g_i$, with $f_i$ an element of $I$, and $g_i$ an element of $J$. I'm trying to think of an example in which $IJ$ does not equal $I \cap J$. I'm thinking of letting $I = 2\mathbb{Z}$, and $J = \mathbb{Z}$, and $I\cap J = 2\mathbb{Z}$? Can someone point out anything faulty about this logic of working with an even ideal and then an odd ideal? Thanks in advance.
Maybe it is helpful for you to realise what really happens for ideals in the integers. You probably know that any ideal in $\mathbb Z$ is of the form $(a)$ for $a\in \mathbb Z$, i.e. is generated by one element. The elements in $(a)$ are all integers which are divisible by $a$. If we are given two ideals $(a)$ and $(b)$, their intersection consists of those numbers which are divisible by $a$ and divisible by $b$. Their product consists of all numbers which are divisible by the product $ab$. If $a$ and $b$ are coprime they are the same. E.g. all numbers which are divisible by $2$ and $3$ are also divisible by $6$, and vice versa. If they are not coprime the situation changes. If a number is divisible by $4$ and $2$, then it is not necessarily divisible by $8$. Another way of saying that two integers $a$, $b$ are coprime is that there exist $x,y$, such that $xa+by=1$ (cf. Euclidean algorithm). In the language of ideals this translates to $(a)+(b)=\mathbb Z$ and the circle closes.
Finding the asymptotic limit of an integral. I'm having trouble finding the asymptotic of the integral $$ \int^{1}_{0} \ln^\lambda \frac{1}{x} dx$$ as $\lambda \rightarrow + \infty$. Can anyone help? Thank you!
Let $-\log x=u$ then the integral becomes $$\int\limits_0^1 {{{\left( { - \log x} \right)}^\lambda }dx} = \int\limits_0^{ + \infty } {{e^{ - u}}{u^\lambda }du} $$ This is Euler's famous Gamma function, which has an asymptotic formula by Stirling $$\int\limits_0^{ + \infty } {{e^{ - u}}{u^\lambda }du} \sim {\left( {\frac{\lambda }{e}} \right)^\lambda }\sqrt {2\pi \lambda } $$
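A quick numerical sanity check of both the substitution and Stirling's approximation (this assumes SciPy is available, and the value of $\lambda$ is just my own test case):

```python
import math
from scipy.integrate import quad

lam = 5.5

lhs, _ = quad(lambda x: (-math.log(x)) ** lam if x > 0 else 0.0, 0, 1)   # original integral
rhs, _ = quad(lambda u: math.exp(-u) * u ** lam, 0, math.inf)            # after u = -log x
stirling = (lam / math.e) ** lam * math.sqrt(2 * math.pi * lam)

print(lhs, rhs, math.gamma(lam + 1), stirling)   # the first three agree; Stirling is close
```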
Convex functions in integral inequality Let $\mu,\sigma>0$ and define the function $f$ as follows: $$ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\mathrm \exp\left(-\frac{(x-\mu)^2}{2\sigma ^2}\right) $$ How can I show that $$ \int\limits_{-\infty}^\infty x\log|x|f(x)\mathrm dx\geq \underbrace{\left(\int\limits_{-\infty}^\infty x f(x)\mathrm dx\right)}_\mu\cdot\left(\int\limits_{-\infty}^\infty \log|x| f(x)\mathrm dx\right) $$ which is also equivalent to $\mathsf E[ X\log|X|]\geq \underbrace{\mathsf EX}_\mu\cdot\mathsf E\log|X|$ for a random variable $X\sim\mathscr N(\mu,\sigma^2).$
Below is a probabilistic and somewhat noncomputational proof. We ignore the restriction to the normal distribution in what follows below. Instead, we consider a mean-zero random variable $Z$ with a distribution symmetric about zero and set $X = \mu + Z$ for $\mu \in \mathbb R$. Claim: Let $X$ be described as above such that $\mathbb E X\log|X|$ is finite for every $\mu$. Then, for $\mu \geq 0$, $$ \mathbb E X \log |X| \geq \mu \mathbb E \log |X| \> $$ and for $\mu < 0$, $$\mathbb E X \log |X| \leq \mu \mathbb E \log |X| \>.$$ Proof. Since $X = \mu + Z$, we observe that $$ \mathbb E X \log |X| = \mu \mathbb E \log |X| + \mathbb E Z \log |\mu + Z| \>, $$ and so it suffices to analyze the second term on the right-hand side. Define $$ f(\mu) := \mathbb E Z \log|\mu+Z| \>. $$ Then, by symmetry of $Z$, we have $$ f(-\mu) = \mathbb E Z \log|{-\mu}+Z| = \mathbb E Z \log|\mu-Z| = - \mathbb E \tilde Z \log|\mu + \tilde Z| = - f(\mu) \>, $$ where $\tilde Z = - Z$ has the same distribution as $Z$ and the last equality follows from this fact. This shows the $f$ is odd as a function of $\mu$. Now, for $\mu \neq 0$, $$ \frac{f(\mu) - f(-\mu)}{\mu} = \mathbb E \frac{Z}{\mu} \log \left|\frac{1+ Z/\mu}{1- Z/\mu}\right| \geq 0\>, $$ since $x \log\left|\frac{1+x}{1-x}\right| \geq 0$, from which we conclude that $f(\mu) \geq 0$ for all $\mu > 0$. Thus, for $\mu > 0$, $\mu \mathbb E \log |X|$ is a lower bound on the quantity of interest and for $\mu < 0$, it is an upper bound. NB. In the particular case of a normal distribution, $X \sim \mathcal N(\mu,\sigma^2)$ and $Z \sim N(0,\sigma^2)$. The moment condition stated in the claim is satisfied.
The number of elements which are squares in a finite field. Meanwhile reading some introductory notes about the projective special linear group $PSL(2,q)$ wherein $q$ is the cardinal number of the field; I saw: ....in a finite field of order $q$, the number of elements ($≠0$) which are squares is $q-1$ if $q$ is even number and is $\frac{1}{2}(q-1)$ if $q$ is a odd number..." . I can see it through $\mathbb Z_5$ or $GF(2)$. Any hints for proving above fact? Thanks.
Another way to prove it, way less elegant than Dustan's but perhaps slightly more elementary: let $q$ be an odd prime and take as representatives $$a_1=1,\ a_2=2,\ \ldots,\ a_m=m\,,\qquad m:=\frac{q-1}{2}\,.$$ Every nonzero residue modulo $q$ is congruent to $a_i$ or to $q-a_i$ for some $i$, and since $a_i^2=(q-a_i)^2 \pmod q$, all the quadratic residues must be among $$a_1^2\,,\,a_2^2\,,\ldots,a_m^2\,.$$ Note that for $1\leq i,j\leq m$ we cannot have $a_i+a_j\equiv 0\pmod q$, since $0<a_i+a_j\leq q-1<q$. Finally, we prove that no two of the above $(q-1)/2$ squares are equal. Working modulo $q$: $$a_i^2=a_j^2\Longrightarrow (a_i-a_j)(a_i+a_j)=0\Longrightarrow a_i-a_j=0\,,$$ since we already showed that $a_i+a_j\neq 0$; and $a_i\equiv a_j$ with $1\le a_i,a_j\le m$ forces $a_i=a_j$. Hence there are exactly $\frac{q-1}{2}$ nonzero squares.
Infinite Degree Algebraic Field Extensions In I. Martin Isaacs Algebra: A Graduate Course, Isaacs uses the field of algebraic numbers $$\mathbb{A}=\{\alpha \in \mathbb{C} \; | \; \alpha \; \text{algebraic over} \; \mathbb{Q}\}$$ as an example of an infinite degree algebraic field extension. I have done a cursory google search and thought about it for a little while, but I cannot come up with a less contrived example. My question is What are some other examples of infinite degree algebraic field extensions?
Another simple example is the extension obtained by adjoining all roots of unity. Since adjoining a primitive $n$-th root of unity gives you an extension of degree $\varphi(n)$ and $\varphi(n)=n-1$ when $n$ is prime, you get algebraic numbers of arbitrarily large degree when you adjoin all roots of unity.
Flux of a vector field I've been trying to solve a flux integral with Gauss' theorem so a little input would be appreciated. Problem statement: Find the flux of ${\bf{F}}(x,y,z) = (x,y,z^2)$ upwards through the surface ${\bf r}(u,v) = (u \cos v, u \sin v, u), \hspace{1em} (0 \leq u \leq 2; 0 \leq v \leq \pi)$ OK. I notice that $z = u$ so $0 \leq z \leq 2$. Furthermore I notice that $x^2 + y^2 = z^2$ so $x^2 + y^2 \leq 4$. It makes sense to use cylindrical coordinates so $(0 \leq r \leq 2)$ and $(0 \leq \theta \leq 2 \pi)$. Finally $div {\bf F} = 2(z+1)$.With this in mind I set up my integral \begin{align*} 2\int ^{2 \pi} _0 \int ^2 _0 \int _0 ^2 (z+1)rdrdzd\theta &= \int ^{2 \pi} _0 \int ^2 _0[(z+1)r^2]_0 ^2 dzd\theta \\ &= 4\int ^{2 \pi} _0 \int ^2 _0 z + 1 dzd\theta\\ &= 4\int ^{2 \pi} _0 [1/2 z^2 + z]_0 ^2 d\theta \\ &= 16 \int _0 ^{2 \pi}d\theta \\ &= 32 \pi \end{align*} And I'm not sure how to continue from this point so if anyone can offer help it would be appreciated. Thanks!
I am not convinced that your integration limits are in order. Domain of integration is the volume below a half cone. So I would proceed as follows $$2\int_{0}^{\pi}\int_{0}^{2}\int_{0}^{r}\left(z+1\right)rdzdrd\theta=2\int_{0}^{\pi}\int_{0}^{2}\left(\frac{r^{3}}{2}+r^{2}\right)drd\theta=2\int_{0}^{\pi}\left[\left.\left(\frac{r^{4}}{8}+\frac{r^{3}}{3}\right)\right|_{0}^{2}\right]d\theta=2\pi\left(2+\frac{8}{3}\right)=\frac{28\pi}{3}$$ Then by Gauss' theorem you will have calculated the flux EDIT: arithmetical error in the second transition
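Here is a quick numerical check of the triple integral above (it only verifies the value $28\pi/3$ of that volume integral, nothing about the surface flux itself):

```python
import math

# Midpoint rule for 2 * int_{theta=0..pi} int_{r=0..2} int_{z=0..r} (z + 1) r dz dr dtheta.
# The integrand does not depend on theta, so the theta-integral contributes a factor of pi.
n = 1000
dr = 2.0 / n
total = 0.0
for j in range(n):
    r = (j + 0.5) * dr
    dz = r / n
    for k in range(n):
        z = (k + 0.5) * dz
        total += (z + 1) * r * dz * dr
print(2 * math.pi * total, 28 * math.pi / 3)   # the two values should nearly agree
```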
When is something "obvious"? I try to be a good student but I often find it hard to know when something is "obvious" and when it isn't. Obviously (excuse the pun) I understand that it is specific to the level at which the writer is pitching the statement. My teacher is fond of telling a story that goes along the lines of A famous maths professor was giving a lecture during which he said "it is obvious that..." and then he paused at length in thought, and then excused himself from the lecture temporarily. Upon his return some fifteen minutes later he said "Yes, it is obvious that...." and continued the lecture. My teacher's point is that this only comes with a certain mathematical maturity and even eludes the best mathematicians at times. I would like to know : * *Are there any ways to develop a better sense of this, or does it just come with time and practice ? *Is this quote a true quote ? If so, who is it attributable to and if not is it a mathematical urban legend or just something that my teacher likely made up ?
Mathematical statements are only evaluated by individuals. Since individuals differ in mathematical ability, the answer is that "something" is never obvious to everyone, or even always to yourself. The crux of the joke is that it was only obvious to the professor after reflection, which is deliberate irony, since there would be no point in reflecting if something were genuinely obvious. Hence, the point is that if even the expert professor had to make sure it was obvious, then students should check their assumptions all the more diligently, no matter how hallowed those assumptions seem.
Doubly exponential sequence behaviour from inequality I am investigating a strictly decreasing sequence $(a_i)_{i=0}^\infty$ in $(0, 1)$, with $\lim_{i\to\infty}a_i=0$, such that there exist constants $K>1$ and $m\in\mathbb{N}$ such that $$\frac{a_{i-1}^m}{K} \leq a_i \leq K a_{i-1}^m$$ for all $i$. Even though $K>1$, is it along the right lines to conclude that $a_i \sim \alpha^{m^i}$ for some constant $0<\alpha<1$? Thanks, DW
[Edit: Now that the question has been changed a day later, I removed analysis of the old version of the first inequality. Perhaps sometime I will update to fully answer the new question. The following still applies to the second inequality.] For the second inequality, $$a_i\leq Ka_{i-1}^m\implies a_i\leq K^{1+m+m^2+\cdots+m^{i-1}}a_0^{m^i}\leq \left(K^{2/m}a_0\right)^{m^i}.$$ You could apply a similar inequality with each $N\in\mathbb N$ in place of $0$ to get $$a_i\leq \left(K^{2/(m^{N+1})}a_N^{1/m^N}\right)^{m^i}.$$ By choosing $N$ such that $K^{2/m}a_N<1$, you at least get $\displaystyle{a_i=O\left(\alpha^{m^{i}}\right)}$ with $\alpha=\left(K^{2/m}a_N\right)^{1/m^N}\in(0,1)$.
Find all $P$ with $P(x^2)=P(x)^2$ The following problem is from Golan's book on linear algebra, chapter 4. I have posted a proposed answer below. Problem: Let $F$ be a field. Find all nonzero polynomials $P\in F[x]$ satisfying $$P(x^2)=[P(x)]^2.$$
Assume first that $F$ is a field with characteristic not equal to 2. The only such polynomials are 1 and $x^n$, $n\in \mathbb{N}$. Let $a_n$ denote the coefficient of $x^n$ in $P$. Examining the constant coefficient, we see $a_0=a_0^2\Rightarrow a_0=1$ or $a_0=0$. Now proceed by induction. Consider the case where $a_0=1$. Assume we have shown $a_n=0$ for all $n<k$, $n\neq 0$. We will show $a_k=0$. The coefficient of $x^{k}$ in $P(x^2)$ is $0$: if $k$ is odd this is automatic, since $P(x^2)$ contains only even powers of $x$; if $k$ is even, that coefficient is $a_{k/2}$, which is $0$ by the inductive hypothesis. We evaluate $[P(x)]^2$ and ignore higher order terms, and see $$(a_kx^{k}+1)^2=a_k^2x^{2k}+2a_kx^k+1$$ and, since the characteristic is not $2$, the only way for the coefficient of $x^k$ to vanish here is for $a_k$ to be 0. The case with $a_0=0$ is similar. Assume we have shown $a_n=0$ for all $n<k$. The coefficient of $x^{2k}$ in $P(x^2)$ is $a_{k}$. If we evaluate $[P(x)]^2=[...a_kx^k]^2$ and ignore higher order terms again, we get $a_k^2x^{2k}$. So $a_k=1$ or $a_k=0$. If $a_k=0$, we continue the induction. If $a_k=1$, we factor $x^k$ out of the original polynomial and are reduced to the first case. In a field of characteristic 2 however, I believe that any polynomial with all coefficients equal to 0 or 1 works. Just use the "freshman's dream." Further, because comparing coefficients on both sides gives $a_n^2=a_n$, these are the only ones that work.
(Regular) wreath product of nilpotent groups Is the wreath product of two nilpotent groups always nilpotent? I know the answer is no due to a condition "The regular wreath product A wr B of a group A by a group B is nilpotent if and only if A is a nilpotent p-group of finite exponent and B is a finite p-group for the same prime p ", but I can't easily construct a counter example to show it.
Following Jug's suggestion: let $\,\,A:=C_3=\langle a\rangle\,,\,B:=C_2=\langle c\rangle\,$ , with the regular action of $\,B\,$ on itself, and form the (regular) wreath product $$A\wr B\cong \left(C_3\times C_3\right)\rtimes_R C_2$$ Take the elements $$\pi=((1,1),c))\,\,,\,\,\sigma=((a,a^2),1)$$It's now easy to check that$$\pi^2=\sigma^3=1\,\,,\,\,\pi\sigma\pi=\sigma^2$$ so we get that $\,\,\langle \pi\,,\,\sigma\rangle\cong S_3\,\,$ and thus $\,\,A\wr B\,\,$ can't be nilpotent, though both $\,\,A,B\,\,$ are (they're even abelian...)
Prove that $\sin(x+\frac{\pi}{n})$ converges uniformly to $\sin(x)$. I've just starting learning uniform convergence and understand the formal definition. What I've got so far is: $|\sin(x+ \frac{\pi}{n}) - \sin(x)| < \epsilon \ \ \ \ \forall x \in \mathbb{R} \ \ \ \ $ for $n \geq N, \epsilon>0$ LHS = $|2\cos(x+\frac{\pi}{2n})\cdot \sin(\frac{\pi}{2n})| < \epsilon $ Am I going down the right route here? I've done some examples fine, but when trig is involved on all space, I get confused as to what I should be doing... Any help at all would be VERY much appreciated, I have an analysis exam tomorrow and need to be able to practice this. Thanks.
Use the fact that the sine function's derivative has absolute value at most one (together with the mean value theorem) to see that $$|\sin(x) - \sin(y)| \le |x - y|.$$ Then $\left|\sin\left(x+\frac{\pi}{n}\right) - \sin(x)\right| \le \frac{\pi}{n}$ for every $x$, and since the bound $\frac{\pi}{n}\to 0$ does not depend on $x$, the convergence is uniform.
Localisation is isomorphic to a quotient of polynomial ring I am having trouble with the following problem. Let $R$ be an integral domain, and let $a \in R$ be a non-zero element. Let $D = \{1, a, a^2, ...\}$. I need to show that $R_D \cong R[x]/(ax-1)$. I just want a hint. Basically, I've been looking for a surjective homomorphism from $R[x]$ to $R_D$, but everything I've tried has failed. I think the fact that $f(a)$ is a unit, where $f$ is our mapping, is relevant, but I'm not sure. Thanks
Here's another answer using the universal property in another way (I know it's a bit late, but is it ever too late ?) As for universal properties in general, the ring satisfying the universal property described by Arturo Magidin in his answer is unique up to isomorphism. Thus to show that $R[x]/(ax-1) \simeq R_D$, it suffices to show that $R[x]/(ax-1)$ has the same universal property ! But that is quite easy: let $\phi: R\to T$ be a ring morphism such that $\phi(a) \in T^{\times}$. Using the universal property of $R[x]$, we get a unique morphism $\overline{\phi}$ extending $\phi$ with $\overline{\phi}(x) = \phi(a)^{-1}$. Quite obviously, $ax -1 \in \operatorname{Ker}\overline{\phi}$. Thus $\overline{\phi}$ factorizes uniquely through $R[x]/(ax-1)$. Thus we get a unique morphism $\mathcal{F}: R[x]/(ax-1) \to T$ with $\mathcal{F}\circ \pi = \phi$, where $\pi$ is the canonical map $R\to R[x]/(ax-1)$. This shows that $\pi: R \to R[x]/(ax-1)$ has the universal property of the localization, thus it is isomorphic to the localization. This is essentially another way of seeing Arturo Magidin's answer
Moment of inertia of an ellipse in 2D I'm trying to compute the moment of inertia of a 2D ellipse about the z axis, centered on the origin, with major/minor axes aligned to the x and y axes. My best guess was to try to compute it as: $$4\rho \int_0^a \int_0^{\sqrt{b^2(1 - x^2/a^2)}}(x^2 +y^2)\,dydx$$ ... I couldn't figure out how to integrate that. Is there a better way or a trick, or is the formula known? I'd also be happy with a good numerical approximation given a and b.
Use 'polar' coordinates, as in $\phi(\lambda, \theta) = (\lambda a \cos \theta, \lambda b \sin \theta)$, with $(\lambda, \theta) \in S = (0,1] \times [0,2 \pi]$. It is straightforward to compute the Jacobian determinant as $$ J_{\phi}(\lambda, \theta) = |\det D\phi(\lambda, \theta)| = \lambda a b.$$ Let $E = \{ (x,y) \,|\, 0 <(\frac{x}{a})^2 + (\frac{y}{b})^2 \leq 1 \}$. (Eliminating $(0,0)$ makes no difference to the integral, and is a technicality for the change of variables below.) We have $E = \phi (S)$, and $$\begin{align} I &= \rho \int_{\phi ( S)} (x^2+y^2) \, dx dy \\ &= \rho \int_{S} \lambda^2 (a^2 \cos^2 \theta+ b^2 \sin^2 \theta) \lambda a b \, d \lambda d \theta \\ &= \rho a b \int_{0}^1 \lambda^3 \, d \lambda \int_0^{2 \pi} a^2 \cos^2 \theta+ b^2 \sin^2 \theta \, d\theta \\ &= \rho \pi a b \frac{a^2+b^2}{4}. \end{align}$$
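A quick numeric check of that result, by brute-force integration over the ellipse in Cartesian coordinates ($a$, $b$, $\rho$ are arbitrary test values of mine):

```python
import math

a, b, rho = 3.0, 2.0, 1.7
n = 1000
dx, dy = 2 * a / n, 2 * b / n

I = 0.0
for i in range(n):
    x = -a + (i + 0.5) * dx
    for j in range(n):
        y = -b + (j + 0.5) * dy
        if (x / a) ** 2 + (y / b) ** 2 <= 1:       # keep only points inside the ellipse
            I += rho * (x ** 2 + y ** 2) * dx * dy

print(I, rho * math.pi * a * b * (a ** 2 + b ** 2) / 4)   # the two should nearly agree
```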
A Banach space is reflexive if and only if its dual is reflexive How to show that a Banach space $X$ is reflexive if and only if its dual $X'$ is reflexive?
Here's a different, more geometric approach that comes from Folland's book, exercise 5.24 Let $\widehat X$, $\widehat{X^*}$ be the natural images of $X$ and $X^*$ in $X^{**}$ and $X^{***}$. Define $\widehat X^0 = \{F\in X^{***}: F(\widehat x) = 0 \text{ for all } \widehat x \in \widehat X\}$ 1) It isn't hard to show that $\widehat{X^*} \bigcap \widehat X^0 = \{0\}$. 2) Furthermore, $\widehat{X^*} + \widehat X^0 = X^{***}$. To show this, let $f\in X^{***}$, and define $l \in X^*$ by $l(x) = f(\widehat x)$ for all $x\in X$. Then $f(\phi) = \widehat l(\phi) + [f(\phi) - \widehat l(\phi)]$. Clearly $\widehat l \in \widehat{X^*}$, and we claim $f - \widehat l \in \widehat X^0$. Let $\widehat x \in \widehat X$. Then $f(\widehat x) - \widehat l ( \widehat x) = f(\widehat x) - \widehat x (l) = f(\widehat x) - l(x) = 0$ Now that 1) and 2) are verified, we prove the claim: If $X$ is reflexive, then $\widehat X^0 = \{0\}$, and so $X^{***} = \widehat{X^*}$, so $X^*$ is reflexive. If $X^*$ is reflexive, then $X^{***} = \widehat{X^*}$, so $\widehat X^0 = \{0\}$. Since $\widehat X$ is a closed subspace of $X^{**}$ (on assumption $X$ is Banach), if $\widehat X$ were a proper subspace of $X^{**}$, we would be able to use Hahn-Banach to construct an $F \in X^{***}$ such that $F$ is zero on $\widehat X$ and has ||F|| = 1. This, however, would contradict $\widehat X^0 = \{0\}$. So we conclude $\widehat X = X^{**}$.
Probability problem of 220 people randomly selecting only 12 of 35 exclusive options. There are 220 people and 35 boxes filled with trinkets. Each person takes one trinket out of a random box. What is the probability that the 220 people will have grabbed a trinket from exactly 12 different boxes? I'm trying to calculate the probability of grabbing a trinket from at most 12 boxes, $P(12)$. Then calculate $P(11)$ with the answer being $P(12)-P(11)$ but I'm drawing blank. $P(12) = 1-(23/35)^{220}$ doesn't look right to me.
If the probability that the first $n$ people have chosen from exactly $c$ boxes out of a possible $t$ total boxes [$t=35$ in this case] is $p(n,c,t)$ then $$p(n,c,t)=\frac{c \times p(n-1,c,t)+(t-c+1)\times p(n-1,c-1,t)}{t}$$ starting with $p(0,c,t)=0$ and $p(n,0,t)=0$ except $p(0,0,t)=1$. Using this gives $p(220,12,35) \approx 4.42899922 \times 10^{-94}$. This is close to but not exactly the naive ${35 \choose 12}\times \left(\frac{12}{35}\right)^{220} \approx 4.42899948\times 10^{-94}$.
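The recurrence is easy to code up; here is a short sketch reproducing the figures quoted above (plain floating point, assuming each person picks a box uniformly at random):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def p(n, c, t):
    # Probability that n people have drawn from exactly c of the t boxes.
    if n == 0:
        return 1.0 if c == 0 else 0.0
    if c == 0:
        return 0.0
    return (c * p(n - 1, c, t) + (t - c + 1) * p(n - 1, c - 1, t)) / t

print(p(220, 12, 35))                      # ~ 4.43e-94
print(comb(35, 12) * (12 / 35) ** 220)     # the "naive" figure, for comparison
```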
Inverse of transformation matrix I am preparing for a computer 3D graphics test and have a sample question which I am unable to solve. The question is as follows: For the following 3D transfromation matrix M, find its inverse. Note that M is a composite matrix built from fundamental geometric affine transformations only. Show the initial transformation sequence of M, invert it, and write down the final inverted matrix of M. $M =\begin{pmatrix}0&0&1&5\\0&3&0&3\\-1&0&0&2\\0&0&0&1\end{pmatrix} $ I only know basic linear algebra and I don't think it is the purpose to just invert the matrix but to use the information in the question to solve this. Can anyone help? Thanks
I know this is old, but the inverse of a transformation matrix is just the inverse of the matrix. For a transformation matrix $M$ which transforms some vector $\mathbf a$ to the position $\mathbf v$, the matrix which transforms $\mathbf v$ back to $\mathbf a$ is just $M^{-1}$: $M\cdot \mathbf a = \mathbf v \\ M^{-1} \cdot M \cdot \mathbf a = M^{-1} \cdot \mathbf v \\ \mathbf a = M^{-1} \cdot \mathbf v$
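For the concrete matrix in the question, one can simply let a linear-algebra library do the work; a small sketch (this computes $M^{-1}$ numerically rather than by decomposing $M$ into fundamental transformations, which is what the exercise actually asks for):

```python
import numpy as np

M = np.array([[ 0, 0, 1, 5],
              [ 0, 3, 0, 3],
              [-1, 0, 0, 2],
              [ 0, 0, 0, 1]], dtype=float)

M_inv = np.linalg.inv(M)
print(M_inv)
print(np.allclose(M @ M_inv, np.eye(4)))   # True
```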
Finding solutions to equation of the form $1+x+x^{2} + \cdots + x^{m} = y^{n}$ Exercise $12$ in Section $1.6$ of Nathanson's : Methods in Number Theory book has the following question. * *When is the sum of a geometric progression equal to a power? Equivalently, what are the solutions of the exponential diophantine equation $$1+x+x^{2}+ \cdots +x^{m} = y^{n} \qquad \cdots \ (1)$$ in integers $x,m,n,y$ greater than $2$? Check that \begin{align*} 1 + 3 + 3^{2} + 3^{3} + 3^{4} & = 11^{2}, \\\ 1 + 7 + 7^{2} + 7^{3} &= 20^{2}, \\\ 1 + 18 +18^{2} &= 7^{3}. \end{align*} These are the only known solutions of $(1)$. The Wikipedia link doesn't reveal much about the above question. My question here would be to ask the following: * *Are there any other known solutions to the above equation. Can we conjecture that this equation can have only finitely many solutions? Added: Alright. I had posted this question on Mathoverflow some time after I had posed here. This user by name Gjergji Zaimi had actually given me a link which tells more about this particular question. Here is the link: * *https://mathoverflow.net/questions/58697/
I liked your question very much. The cardinality of the set of solutions to the above equation depends on the values of $m,n$. Let me break your problem into some cases. There are three cases possible. * *When $m = 1$ and $n = 1$, you know that there are infinitely many solutions. *When $m=2$ and $n=1$ you know that a conic may have infinitely many rational points or only finitely many. More broadly, for $m=2$ one gets curves of low genus, including the elliptic curves (e.g. when $m=2,n=3$) and hyperelliptic curves (when $m=2, n\ge 4$). In the elliptic case the set of rational points is studied via the conjecture of Birch and Swinnerton-Dyer, which predicts whether the cardinality is infinite or finite in terms of the $L$-functions associated to the curves. *When $m \ge 2, n \ge 4$ the equation typically defines a curve of higher genus. So by Faltings' theorem, it has finitely many rational points provided the curve has genus $g \ge 2$. Thank you. I will update this answer if I find something more interesting.
Proving that $\lim\limits_{x \to 0}\frac{e^x-1}{x} = 1$ I was messing around with the definition of the derivative, trying to work out the formulas for the common functions using limits. I hit a roadblock, however, while trying to find the derivative of $e^x$. The process went something like this: $$\begin{align} (e^x)' &= \lim_{h \to 0} \frac{e^{x+h}-e^x}{h} \\ &= \lim_{h \to 0} \frac{e^xe^h-e^x}{h} \\ &= \lim_{h \to 0} e^x\frac{e^{h}-1}{h} \\ &= e^x \lim_{h \to 0}\frac{e^h-1}{h} \end{align} $$ I can show that $\lim_{h\to 0} \frac{e^h-1}{h} = 1$ using L'Hôpital's, but it kind of defeats the purpose of working out the derivative, so I want to prove it in some other way. I've been trying, but I can't work anything out. Could someone give a hint?
Let's say $y=e^h -1$, then $\lim_{h \rightarrow 0} \dfrac{e^h -1}{h} = \lim_{y \rightarrow 0}{\dfrac{y}{\ln{(y+1)}}} = \lim_{y \rightarrow 0} {\dfrac{1}{\dfrac{\ln{(y+1)}}{y}}} = \lim_{y \rightarrow 0}{\dfrac{1}{\ln{(y+1)}^\frac{1}{y}}}$. It is easy to prove that $\lim_{y \rightarrow 0}{(y+1)}^\frac{1}{y} = e$. Then, using limits of composite functions, $\lim_{y \rightarrow 0}{\dfrac{1}{\ln{(y+1)}^\frac{1}{y}}} = \dfrac{1}{\ln{(\lim_{y \rightarrow 0}{(y+1)^\frac{1}{y}})}} = \dfrac{1}{\ln{e}} = \dfrac{1}{1} = 1.$
Showing the divergence of $ \int_0^{\infty} \frac{1}{1+\sqrt{t}\sin(t)^2} dt$ How can I show the divergence of $$ \int_0^x \frac{1}{1+\sqrt{t}\sin(t)^2} dt$$ as $x\rightarrow\infty?$
For $t \gt 0$: $$ 1 + t \ge 1 + \sqrt{t}\sin^2t $$ (indeed, $\sqrt{t}\sin^2 t\le\sqrt{t}\le t$ for $t\ge 1$, and $\sqrt{t}\sin^2 t\le \sqrt{t}\,t^2 = t^{5/2}\le t$ for $0<t\le 1$). Or: $$ \frac{1}{1 + t} \le \frac{1}{1 + \sqrt{t}\sin^2t} $$ Now consider: $$ \int_0^x \frac{dt}{1 + t} \le \int_0^x \frac{dt}{1 + \sqrt{t}\sin^2t} $$ The LHS diverges as $x \to +\infty$, so the RHS does too.
What is the Taylor series for $g(x) =\frac{ \sinh(-x^{1/2})}{(-x^{1/2})}$, for $x < 0$? What is the Taylor series for $$g(x) = \frac{\sinh((-x)^{1/2})}{(-x)^{1/2}}$$, for $x < 0$? Using the standard Taylor Series: $$\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!}$$ I substituted in $x = x^{1/2}$, since $x < 0$, it would simply be $x^{1/2}$ getting, $$\sinh(x^{1/2}) = x^{1/2} + \frac{x^{3/2}}{3!} + \frac{x^{5/2}}{5!} + \frac{x^{7/2}}{7!}$$ Then to get the Taylor series for $\sinh((-x)^{1/2})/((-x)^{1/2})$, would I just divide each term by $x^{1/2}$? This gives me, $1+\frac{x}{3!}+\frac{x^2}{5!}+\frac{x^3}{7!}$ Is this correct? Thanks for any help!
As Arturo pointed out in a comment, It has to be $(-x)^{\frac{1}{2}}$ to be defined for $x<0$, then you have: $$\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \frac{x^7}{7!}+\dots$$ Substituting $x$ with $(-x)^{\frac{1}{2}}$ we get: $$\sinh (-x)^{\frac{1}{2}} = (-x)^{\frac{1}{2}} + \frac{({(-x)^{\frac{1}{2}}})^3}{3!} + \frac{({(-x)^{\frac{1}{2}}})^5}{5!} + \frac{({(-x)^{\frac{1}{2}}})^7}{7!}+\dots$$ Dividing by $(-x)^{\frac{1}{2}}$: $$\frac{\sinh (-x)^{\frac{1}{2}}}{(-x)^{\frac{1}{2}}} = 1 + \frac{({(-x)^{\frac{1}{2}}})^2}{3!} + \frac{({(-x)^{\frac{1}{2}}})^4}{5!} + \frac{({(-x)^{\frac{1}{2}}})^6}{7!}+\dots$$ And after simplification: $$\frac{\sinh (-x)^{\frac{1}{2}}}{(-x)^{\frac{1}{2}}} = 1 - \frac{x}{3!} + \frac{x^2}{5!} - \frac{x^3}{7!}+\dots$$
Given an alphabet with 6 non-distinct integers, how many distinct 4-digit integers are there? How many distinct four-digit integers can one make from the digits $1$, $3$, $3$, $7$, $7$ and $8$? I can't really think how to get started with this, the only way I think might work would be to go through all the cases. For instance, two $3$'s and two $7$'s as one case, one $1$, two $3$'s and one $8$ as another. This seems a bit tedious though (especially for a larger alphabet) and so I'm here to ask if there's a better way. Thanks.
Distinct numbers with two $3$s and two $7$s: $\binom{4}{2}=6$. Distinct numbers with two $3$s and one or fewer $7$s: $\binom{4}{2}3\cdot2=36$. Distinct numbers with two $7$s and one or fewer $3$s: $\binom{4}{2}3\cdot2=36$. Distinct numbers with one or fewer $7$s and one or fewer $3$s: $4\cdot3\cdot2\cdot1=24$. Total: $6+36+36+24=102$ With larger alphabets, Suppose there are $a$ numbers with 4 or more in the list, $b$ numbers with exactly 3 in the list, $c$ numbers with exactly 2 in the list, and $d$ numbers with exactly 1 in the list. Distinct numbers with all 4 digits the same: $a$ Distinct numbers with 3 digits the same: $\binom{4}{3}(a+b)(a+b+c+d-1)$ Distinct numbers with 2 pairs of digits: $\binom{4}{2}\binom{a+b+c}{2}$ Distinct numbers with exactly 1 pair of digits: $\binom{4}{2}(a+b+c)\binom{a+b+c+d-1}{2}2!$ Distinct numbers with no pair of digits: $\binom{a+b+c+d}{4}4!$ Total: $a+4(a+b)(a+b+c+d-1)+6\binom{a+b+c}{2}+12(a+b+c)\binom{a+b+c+d-1}{2}+24\binom{a+b+c+d}{4}$ Apply to the previous case: $a=b=0$, $c=2$, and $d=2$: $0+0+6\binom{2}{2}+12(2)\binom{3}{2}+24\binom{4}{4}=102$
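This is small enough to check by brute force; a quick sketch:

```python
from itertools import permutations

digits = "133778"
# Distinct ordered 4-tuples drawn from the multiset of digits; all digits are nonzero,
# so every tuple is a valid 4-digit integer.
print(len(set(permutations(digits, 4))))   # 102
```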
Quadratic equation related to physics problem - how to proceed? It's a physics-related problem, but it has a nasty equation: Let the speed of sound be $340\dfrac{m}{s}$, then let a heavy stone fall into the well. How deep is the well when you hear the impact after $2$ seconds? The formula for the time it takes the stone to fall and the subsequent sound of impact to travel upwards is simple enough: $t = \sqrt{\dfrac{2s}{g}} + \dfrac{s}{v}$ for $s$ = distance, $g$ = local gravity and $t$ = time. Isolating the square root and squaring translates this to said nasty equation: $gs^2 - 2sv^2 - 2gstv + gt^2v^2 = 0$ Now I need to solve this in terms of $s$, but I'm at a loss as to how to accomplish this. How to proceed?
Hint: if you insert the values of $g, t$ and $v$ you have a quadratic equation in $s$. Even if you just regard $g, t$ and $v$ as constants, you can plug this into the quadratic formula.
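A worked numeric sketch, assuming $g = 9.81\ \mathrm{m/s^2}$, $v = 340\ \mathrm{m/s}$, $t = 2\ \mathrm{s}$. It substitutes $u = \sqrt{s}$, which turns $t = \sqrt{2s/g} + s/v$ into a quadratic in $u$ without the spurious root that squaring introduces:

```python
import math

g, v, t = 9.81, 340.0, 2.0

# u**2 / v + sqrt(2/g) * u - t = 0, a quadratic in u = sqrt(s); take the positive root
a, b, c = 1.0 / v, math.sqrt(2.0 / g), -t
u = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
s = u * u
print(f"depth s ~ {s:.2f} m")            # roughly 18-19 m

# sanity check: the total time should come back to t
print(math.sqrt(2 * s / g) + s / v)      # ~ 2.0
```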
expressing $x^3 /1000 - 100x^2 - 100x + 3$ in big theta Hello, can somebody help me express $x^3/1000 - 100x^2 - 100x + 3$ in big-theta notation? It looks like $\Theta(x^3)$ to me, but at $x = 0$ this polynomial obviously gives a value of $3$. And multiplying $x^3$ by any constant won't help at all. Is there a standard way to approach this kind of problem?
More generally, given an arbitrary real polynomial $p(x)=a_nx^n+\cdots+a_1x+a_0$ with $a_n>0$, let us denote by $M$ a number greater than all of $p$'s real roots. We have $$\lim_{x\to\infty}\frac{p(x)}{x^n}=a_n+a_{n-1}(0)+\cdots+a_1(0)+a_0(0)=a_n>0.$$ Now $p$ is continuous and has no roots beyond $M$ so it cannot change sign beyond $M$; at the same time the limit is positive so it must be the case that $p(x)>0$ for all $x>M$. Also, $x^{-n}p(x)-a_n$ tends to zero as $x\to\infty$ and has no singularities so it must be bounded, hence also given $0<L<a_n$ there is some $N$ such that it is $\le L$ in magnitude for all $x>N$ (by the definition of a limit at infinity). Claim: $$-L+a_n\le \frac{p(x)}{x^n}< a_n+\frac{|a_{n-1}|}{M}+\cdots+\frac{|a_1|}{M^{n-1}}+\frac{|a_0|}{M^n} \quad \text{for all }x>\max\{M,N\}.$$ Proof. The left inequality follows from hypothesis on $-L$ being a lower bound. Otherwise for $x>M$ we have that $a_k<|a_k|$ and $1/x^\ell<1/M^\ell$; the latter is because $x\mapsto 1/x^\ell$ is a decreasing function on $x>0$ for $\ell>0$, and since the latter involves positive quantities it may be multiplied by the former (see here and adapt as necessary), hence $a_k/x^\ell <|a_k|/M^\ell$. Apply this to each term in $p(x)/x^n$ and then add the inequalities together to get the right-hand inequality above. This demonstrates that $p(x)/x^n$ is squeezed between two positive reals for sufficiently large $x$; multiply both sides by $x^n$ and we have shown $p(x)$ fulfills the definition of $\Theta(x^n)$.
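For the specific polynomial in the question, a quick numeric sketch shows how slowly $p(x)/x^3$ settles near the leading coefficient $1/1000$ (the ratio is even negative until roughly $x\approx 10^5$), which is exactly why the argument above only claims the bounds for sufficiently large $x$:

```python
def p(x):
    return x**3 / 1000 - 100 * x**2 - 100 * x + 3

for x in [10**4, 10**5, 10**6, 10**7]:
    print(f"x = {x:>8}   p(x)/x^3 = {p(x) / x**3:.6f}")   # tends to 0.001
```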
Are isomorphic structures really indistinguishable? I always believed that in two isomorphic structures what you could tell for the one you would tell for the other... is this true? I mean, I've heard about structures that are isomorphic but different with respect to some property and I just wanted to know more about it. EDIT: I try to add clearer informations about what I want to talk about. In practice, when we talk about some structured set, we can view the structure in more different ways (as lots of you observed). For example, when someone speaks about $\mathbb{R}$, one could see it as an ordered field with particular lub property, others may view it with more structures added (for example as a metric space or a vector space and so on). Analogously (and surprisingly!), even if we say that $G$ is a group and $G^\ast$ is a permutation group, we are talking about different mathematical object, even if they are isomorphic as groups! In fact there are groups that are isomorphic (wrt group isomorphisms) but have different properties, for example, when seen as permutation groups.
I'm not sure, if this is what you are referring to, but here goes... There are questions that are easy to decide in one structure, but much more difficult in another isomorphic structure. The discrete logarithm problem comes to mind. The additive group $G_1=\mathbf{Z}_{502}$ is generated by $5$, and to a given $x\in G_1$ finding a multiplier $n$ such that $$ 5n=(5+5+\cdots 5)=x $$ is easy, as the generalized Euclidean algorithm will do it for us. The multiplicative group $G_2=\mathbf{Z}_{503}^*$ is also cyclic of order $502$ and also generated by $5$. Yet, to a given $x\in G_2$ the problem of finding an exponent (now an exponent as the group is multiplicative) $n$ such that $$ 5^n=(5\cdot5\cdot5\cdots5)=x $$ is more difficult. The difference in difficulty becomes more pronounced as the size of the groups grows. The problem is that describing an isomorphism is not enough to translate a question from one structure to the other, if you cannot also describe its inverse.
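A tiny illustrative sketch of the asymmetry; the target value $x=321$ is an arbitrary choice, `pow(b, -1, m)` requires Python 3.8+, and the brute-force loop stands in for a real discrete-log computation:

```python
# additive group Z_502: solving 5*n = x (mod 502) is immediate via a modular inverse
x = 321
n_add = (x * pow(5, -1, 502)) % 502
assert (5 * n_add) % 502 == x

# multiplicative group Z_503^*: solving 5**n = x (mod 503) done naively by exhaustive search;
# for groups of cryptographic size this search becomes infeasible
n_mult = next(n for n in range(502) if pow(5, n, 503) == x)
assert pow(5, n_mult, 503) == x
print(n_add, n_mult)
```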
A good reference to begin analytic number theory I know a little bit about basic number theory, much about algebra/analysis, I've read most of Niven & Zuckerman's "Introduction to the theory of numbers" (first 5 chapters), but nothing about analytic number theory. I'd like to know if there would be a book that I could find (or notes from a teacher online) that would introduce me to analytic number theory's classical results. Any suggestions? Thanks for the tips,
I'm quite partial to Apostol's books, and although I haven't read them (yet) his analytic number theory books have an excellent reputation. Introduction to Analytic Number Theory (Difficult undergraduate level) Modular Functions and Dirichlet Series in Number Theory (can be considered a continuation of the book above) I absolutely plan to read them in the future, but I'm going through some of his other books right now. Ram Murty's Problems in Analytic Number Theory is stellar as it has a ton of problems to work out!
harmonic function question Let $u$ and $v$ be real-valued harmonic functions on $U=\{z:|z|<1\}$. Let $A=\{z\in U:u(z)=v(z)\}$. Suppose $A$ contains a nonempty open set. Prove $A=U$. Here is what I have so far: Let $h=u-v$. Then $h$ is harmonic. Let $X$ be the set of all $z$ such that $h(z)=0$ in some open neighborhood of $z$. By our assumptions on $A$, $X$ is not empty. Let $z\in X$. Then $h(z)=0$ on some open set $V$ containing $z$. If $x\in V$, then $h(w)=0$ in some open set containing $x$, namely $V$. So $X$ is open. I want to show $X$ is also closed but I am having trouble doing so. Any suggestions:
Each real harmonic function $h$ on a simply connected domain defines a holomorphic function $f\in\mathcal{O}(U)$, unique up to an additive constant, such that $$ \mathrm{Im}(f)=h $$ $$ \mathrm{Re}(f)= \int\limits_{(x_0,y_0)}^{(x,y)}\left(\frac{\partial h}{\partial y}dx-\frac{\partial h}{\partial x}dy\right)+C $$ If $h=0$ on some ball $B\subset A$ (such a ball exists because $A$ contains a nonempty open set), then the corresponding $f$ is real-valued on $B$, hence constant there by the open mapping theorem, say $f=C$ on $B$. By the uniqueness (identity) principle, $f=C$ on all of $U$. Hence $h=\mathrm{Im}(f)=0$ on $U$.
What is $\lim_{(x,y)\to(0,0)} \frac{(x^3+y^3)}{(x^2-y^2)}$? In class, we were simply given that this limit is undefined since along the paths $y=\pm x$, the function is undefined. Am I right to think that this should be the case for any function, where the denominator is $x^2-y^2$, regardless of what the numerator is? Just wanted to see if this is a quick way to identify limits of this form. Thanks for the discussion and help!
For your function, in the domain of $f$ (so $x \ne \pm y$), to compute the limit you can set $x=r\cos\theta, y=r\sin\theta$, and plug it in. You get $\lim\limits_{r\to 0} \frac{r^3(\cos^3\theta+\sin^3\theta)}{r^2(\cos^2\theta-\sin^2\theta)} =\lim\limits_{r\to 0} \frac{r(\cos^3\theta+\sin^3\theta)}{\cos^2\theta-\sin^2\theta}$, and you can easily see that this is $0$ for any $\theta$ in the domain of $f$ (you need to avoid $\theta = \frac{n\pi}{2}-\frac{\pi}{4}$). Of course, if you are considering the whole plane, then the limit does not exist, because the function isn't even defined at $y=x$, so you can't compute the limit along that path.
Proof of Eberlein–Smulian Theorem for a reflexive Banach spaces Looking for the proof of Eberlein-Smulian Theorem. Searching for the proof is what I break with this morning. Some of my friends recommend Haim Brezis (Functional Analysis, Sobolev Spaces and Partial Differential Equations). After I search the book, I only found the statement of the theorem, is the proof very difficult to grasp? Why is Haim Brezis skip it in his book? Please I need a reference where I can find the proof in detail. Theorem:(Eberlein-Smul'yan Theorem) A Banach space $E$ is reflexive if and only if every (norm) bounded sequence in $E$ has a subsequence which converges weakly to an element of $E$.
Kôsaku Yosida, Functional Analysis, Springer 1980, Chapter V, Appendix, section 4. (This appears to be the 6th edition).
Why doesn't this find the mid point? I saw a simple question and decided to try an alternate method to see if I could get the same answer; however, it didn't work out how I had expected. Given $A(4, 4, 2)~$ and $~B(6, 1, 0)$, find the coordinates of the midpoint $M$ of the line $AB$. I realize that this is quite easy just taking $\frac{1}{2}(A+B) = (5, \frac{5}{2}, 1)$; however, I don't understand why this doesn't give me the same answer: If I take $\frac{1}{2}\vec{AB}~$ I would have thought that I would be half way to B from A which would be the midpoint right? but, of course I get: $\frac{1}{2}\vec{AB} = \frac{1}{2}(2, -3, -2) = (1, -\frac{3}{2}, -1)$ Is it just because this is a directional vector which doesn't indicate position in any way, and I am trying to halve the direction/angle or something?
That's right...your calculation doesn't take into account position in any way. You are going half the distance from $A$ to $B$, but starting at the origin, not at $A$. Try $A+\frac{1}{2}\vec{AB}$. EDIT: It occurred to me that I should point out: $$A+\frac{1}{2}\vec{AB}=A+\frac{1}{2}(B-A)=\frac{1}{2}(A+B)$$
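A quick numeric sketch (assuming NumPy) of the two computations side by side:

```python
import numpy as np

A = np.array([4.0, 4.0, 2.0])
B = np.array([6.0, 1.0, 0.0])

print(0.5 * (B - A))        # [ 1.  -1.5 -1. ]  -- a displacement, anchored at the origin
print(A + 0.5 * (B - A))    # [5.  2.5 1. ]     -- the actual midpoint
```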
Probability of a baseball team winning next 2 games Given their previous performance, the probability of a particular baseball team winning any given game is 4/5. The probability that the team will win their next 2 games is... I'm confused on how to start this question. Any help is appreciated.
The probability of this particular baseball team winning any given game is $4/5$. Assuming the games are independent, the probability that the team wins their next 2 games is the probability of winning the 1st game times the probability of winning the 2nd game: $$P = (4/5) \times (4/5)$$ $$P = 16/25$$
Intermediate fields of cyclotomic splitting fields and the polynomials they split Consider the splitting field $K$ over $\mathbb Q$ of the cyclotomic polynomial $f(x)=1+x+x^2 +x^3 +x^4 +x^5 +x^6$. Find the lattice of subfields of K and for each subfield $F$ find polynomial $g(x) \in \mathbb Z[x]$ such that $F$ is the splitting field of $g(x)$ over $\mathbb Q$. My attempt: We know the Galois group to be the cyclic group of order 6. It has two proper subgroups of order 2 and 3 and hence we are looking for only two intermediate field extensions of degree 3 and 2. $\mathbb Q[\zeta_7+\zeta_7^{-1}]$ is a real subfield. $\mathbb Q[\zeta_7-\zeta_7^{-1}]$ is also a subfield. How do I calculate the degree and minimal polynomial?
Somehow, the theme of symmetrization often doesn't come across very clearly in many expositions of Galois theory. Here is a basic definition: Definition. Let $F$ be a field, and let $G$ be a finite group of automorphisms of $F$. The symmetrization function $\phi_G\colon F\to F$ associated to $G$ is defined by the formula $$ \phi_G(x) \;=\; \sum_{g\in G} g(x). $$ Example. Let $\mathbb{C}$ be the field of complex numbers, and let $G\leq \mathrm{Aut}(\mathbb{C})$ be the group $\{\mathrm{id},c\}$, where $\mathrm{id}$ is the identity automorphism, and $c$ is complex conjugation. Then $\phi_G\colon\mathbb{C}\to\mathbb{C}$ is defined by the formula $$ \phi_G(z) \;=\; \mathrm{id}(z) + c(z) \;=\; z+\overline{z} \;=\; 2\,\mathrm{Re}(z). $$ Note that the image of $\phi$ is the field of real numbers, which is precisely the fixed field of $G$. This example generalizes: Theorem. Let $F$ be a field, let $G$ be a finite group of automorphisms of $F$, and let $\phi_G\colon F\to F$ be the associated symmetrization function. Then the image of $\phi_G$ is contained in the fixed field $F^G$. Moreover, if $F$ has characteristic zero, then $\mathrm{im}(\phi_G) = F^G$. Of course, since $\phi_G$ isn't a homomorphism, it's not always obvious how to compute a nice set of generators for its image. However, in small examples the goal is usually just to produce a few elements of $F^G$, and then prove that they generate. Let's apply symmetrization to the present example. You are interested in the field $\mathbb{Q}(\zeta_7)$, whose Galois group is cyclic of order $6$. There are two subgroups of the Galois group to consider: The subgroup of order two: This is the group $\{\mathrm{id},c\}$, where $c$ is complex conjugation. You have already used your intuition to guess that $\mathbb{Q}(\zeta_7+\zeta_7^{-1})$ is the corresponding fixed field. The basic reason that this works is that $\zeta_7+\zeta_7^{-1}$ is the symmetrization of $\zeta_7$ with respect to this group. The subgroup of order three: This is the group $\{\mathrm{id},\alpha,\alpha^2\}$, where $\alpha\colon\mathbb{Q}(\zeta_7)\to\mathbb{Q}(\zeta_7)$ is the automorphism defined by $\alpha(\zeta_7) = \zeta_7^2$. (Note that this indeed has order three, since $\alpha^3(\zeta_7) = \zeta_7^8 = \zeta_7$.) The resulting symmetrization of $\zeta_7$ is $$ \mathrm{id}(\zeta_7) + \alpha(\zeta_7) + \alpha^2(\zeta_7) \;=\; \zeta_7 + \zeta_7^2 + \zeta_7^4. $$ Therefore, the corresponding fixed field is presumably $\mathbb{Q}(\zeta_7 + \zeta_7^2 + \zeta_7^4)$. All that remains is to find the minimal polynomials of $\zeta_7+\zeta_7^{-1}$ and $\zeta_7 + \zeta_7^2 + \zeta_7^4$. This is just a matter of computing powers until we find some that are linearly dependent. Using the basis $\{1,\zeta_7,\zeta_7^2,\zeta_7^3,\zeta_7^4,\zeta_7^5\}$, we have $$ \begin{align*} \zeta_7 + \zeta_7^{-1} \;&=\; -1 - \zeta_7^2 - \zeta_7^3 - \zeta_7^4 - \zeta_7^5 \\ (\zeta_7 + \zeta_7^{-1})^2 \;&=\; 2 + \zeta_7^2 + \zeta_7^5 \\ (\zeta_7 + \zeta_7^{-1})^3 \;&=\; -3 - 3\zeta_7^2 - 2\zeta_7^3 - 2\zeta_7^4 - 3\zeta_7^5 \end{align*} $$ In particular, $(\zeta_7+\zeta_7^{-1})^3 + (\zeta_7+\zeta_7^{-1})^2 - 2(\zeta_7+\zeta_7^{-1}) - 1 = 0$, so the minimal polynomial for $\zeta_7+\zeta_7^{-1}$ is $x^3 + x^2 - 2x - 1$. Similarly, we find that $$ (\zeta_7 + \zeta_7^2 + \zeta_7^4)^2 \;=\; -2 - \zeta_7 - \zeta_7^2 - \zeta_7^4 $$ so the minimal polynomial for $\zeta_7 + \zeta_7^2 + \zeta_7^4$ is $x^2+x+2$.
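An optional symbolic cross-check of the two minimal polynomials (assuming SymPy; `minimal_polynomial` should recover exactly the polynomials computed by hand above):

```python
import sympy as sp

x = sp.symbols('x')
zeta = sp.exp(2 * sp.pi * sp.I / 7)

print(sp.minimal_polynomial(zeta + 1 / zeta, x))            # should print x**3 + x**2 - 2*x - 1
print(sp.minimal_polynomial(zeta + zeta**2 + zeta**4, x))   # should print x**2 + x + 2
```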
How does trigonometric substitution work? I have read my book, watched the MIT lecture and read Paul's Online Notes (which was pretty much worthless, no explanations just examples) and I have no idea what is going on with this at all. I understand that if I need to find something like $$\int \frac { \sqrt{9-x^2}}{x^2}dx$$ I can't use any other method except this one. What I do not get is pretty much everything else. It is hard to visualize the bounds of the substitution that will keep it positive, but I think that is something I can just memorize from a table. So this is similar to u-substitution except that I am not using a single variable but expressing x in the form of a trig function. How does this not change the value of the problem? To me it seems like it would. Algebraically, how is something like $$\int \frac { \sqrt{9-x^2}}{x^2}dx$$ the same as $$\int \frac {3\cos x}{9\sin^2 x}3\cos x \, dx$$ It feels like if I were to put in numbers for $x$ that it would be a different answer. Anyway, just assuming that works, I really do not understand at all what happens next. "Returning" to the original variable to me should just mean plugging back in what you had from before the substitution, but for whatever unknown and unexplained reason this is not true. Even though on problems before I could just plug back in my substitution of $u = 2x$, $\sin2u = \sin4x$, and that would work fine, for whatever reason it no longer works. I am now expected to do some pretty complex trigonometric manipulation with the use of a triangle which I do not follow at all; luckily though, this process is not explained at all in my book so I think I am just supposed to memorize it. Then when it gets time for the answer there is no explanation at all, but out of nowhere inverse sine comes in for some reason. $$\frac {- \sqrt{9-x^2}}{x} - \sin^{-1} (x/3) +c$$ I have no idea what happened, but neither does the author apparently, since there is no explanation.
There are some basic trigonometric identities which is not hard to memorise, one of the easiest and most important being $\,\,\cos^2x+\sin^2x=1\,\,$ , also known as the Trigonometric Pythagoras Theorem. From here we get $\,1-\sin^2x=\cos^2x\,$ , so (watch the algebra!)$$\sqrt{9-x^2}=\sqrt{9\left(1-\left(\frac{x}{3}\right)^2\right)}=3\sqrt{1-\left(\frac{x}{3}\right)^2}$$From here, the substitution proposed for the integral is $$\displaystyle{\sin\theta=\frac{x}{3}\Longrightarrow \cos\theta\, d\theta=\frac{1}{3}dx}\,\,,\,x=3\sin\theta\,,\,dx=3\cos\theta\,d\theta$$ so in the integral we get$$\int \frac{\sqrt{9-x^2}}{x^2}\,dx =\int \frac{3\sqrt{1-\left(\frac{x}{3}\right)^2}}{x^2}\to\int\frac{\rlap{/}{3}\sqrt{1-\sin^2\theta}}{\rlap{/}{9}\sin^2\theta}\,\rlap{/}{3}\cos\theta\,d\theta=$$$$\int\frac{\cos\theta\,\cos\theta}{\sin^2\theta}\,d\theta$$which is what you have in your book...:)
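As an optional sanity check (assuming SymPy), differentiating the book's final answer should recover the original integrand:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
antiderivative = -sp.sqrt(9 - x**2) / x - sp.asin(x / 3)
integrand = sp.sqrt(9 - x**2) / x**2

residual = sp.diff(antiderivative, x) - integrand
print(sp.simplify(residual))                     # expected: 0
print(residual.subs(x, 1.5).evalf(chop=True))    # numeric spot check, expected: 0
```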
A surjective homomorphism between finite free modules of the same rank I know a proof of the following theorem using determinants. For some reason, I'd like to know a proof without using them. Theorem Let $A$ be a commutative ring. Let $E$ and $F$ be finite free modules of the same rank over $A$. Let $f:E → F$ be a surjective $A$-homomorphism. Then $f$ is an isomorphism.
This answer is not complete. See the comments below. The modules $E$ and $F$ being free of finite rank $n$ over $A$ means that they each have a finite basis over $A$. Take $y \in F$; since $f$ is surjective, some $x \in E$ maps to $y$. Pick a basis $\langle e_1, \dots, e_n \rangle$ of $E$ over $A$, so $x = a_1e_1 + \dotsb + a_ne_n$ for some $a_i \in A$. Then for our arbitrary element $y \in F$, $$ y = f(a_1e_1 + \dotsb + a_ne_n) = a_1f(e_1) + \dotsb + a_nf(e_n) \, $$ so $\langle f(e_1),\dotsc, f(e_n)\rangle$ generates $F$. Since $F$ has the same rank as $E$, these generators must form a basis (this needs to be proven; see darij grinberg's comment below). Since these generators form a basis, $$ 0 = f(\alpha_1e_1 + \dotsb + \alpha_ne_n) = \alpha_1f(e_1) + \dotsb + \alpha_nf(e_n) $$ only when the $\alpha_i$ are all zero, so $f$ is injective and hence an isomorphism. ${_\square}$ I don't see why we need $A$ to be a commutative ring. Since we're specifying that $E$ and $F$ have the same rank, I assume they have the invariant dimension property; otherwise, commutativity would imply this. Also, we're only talking about a single map $f \colon E \to F$ and don't need to talk about the module structure on $\mathrm{Hom}_A(E,F)$, for which we would need $A$ to be commutative. Also, I've seen it asked as an exercise: is this still true if we assume $f$ is injective instead of surjective? The answer is no, as shown by the counterexample $\mathbf{Z} \to \mathbf{Z}$ where $1 \mapsto 2$, regarding $\mathbf{Z}$ as a rank $1$ free module over itself.
How to solve $x_j y_j = \sum_{i=1}^N x_i$ I have N equations and am having trouble with finding a solution. $$\left\{\begin{matrix} x_1 y_1 = \sum_{i=1}^N x_i\\ x_2 y_2 = \sum_{i=1}^N x_i\\ \vdots\\ x_N y_N = \sum_{i=1}^N x_i \end{matrix}\right.$$ where $x_i$, ($i = 1, 2, \cdots, N$) is an unknown and $y_i$, ($i = 1, 2, \cdots, N$) is a known variable. Given $y_i$'s, I have to find $x_i$'s but, I don't know where to start and even if it has a solution.
1. $x_i= 0$ $\forall i$ is always a solution. 2. Suppose that $y_i \ne 0$ $\forall i$. Then $x_1 = \frac{1}{y_1} \sum x_i$, and summing over all indexes we get $\sum x_i = \sum \frac{1}{y_i} \sum x_i$, so we must either have $\sum x_i = 0$ or $\sum \frac{1}{y_i} = 1$. 2.a The case $\sum x_i = 0$ gives only the trivial solution $x_i=0$. 2.b Otherwise, if we are given $\sum \frac{1}{y_i} = 1$, then $x_i = \frac{\alpha}{y_i}$ is a solution for any $\alpha$. 3. If $y_j=0$ for some $j$, then we must have $\sum x_i =0$ and $x_i=0$ for all $i$ with $y_i\ne 0$. This provides extra solutions if there is more than one zero-valued $y_j$. E.g., say $y_1=y_2=y_3=0$ and $y_j \ne 0$ for $j>3$; then any ${\bf x}$ with $x_j=0$ for $j>3$ and $x_1+x_2+x_3=0$ is a solution.
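A numeric spot-check of case 2.b; the vector $y$ and the value $\alpha$ below are arbitrary choices satisfying $\sum 1/y_i = 1$:

```python
import numpy as np

y = np.array([2.0, 4.0, 8.0, 8.0])   # 1/2 + 1/4 + 1/8 + 1/8 = 1
alpha = 3.7                          # arbitrary
x = alpha / y

print(x * y)        # every equation x_j * y_j = sum(x): all entries equal alpha
print(x.sum())      # ... and sum(x) equals alpha as well (up to rounding)
```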
Simplify integral of inverse of derivative. I need to simplify function $g(x)$ which I describe below. Let $F(y)$ be the inverse of $f'(\cdot)$ i.e. $F = \left( f'\right)^{-1}$ and $f(x): \mathbb{R} \to \mathbb{R}$, then $$g(x) =\int_a^x F(y)dy$$ Is it possible to simplify $g(x)$?
Let $t = F(y)$. Then we get that $y = F^{-1}(t) = f'(t)$. Hence, $dy = f''(t) dt$. Hence, we get that \begin{align} g(x) & = \int_{F(a)}^{F(x)} t f''(t) dt\\ & = \left. \left(t f'(t) - f(t) \right) \right \rvert_{F(a)}^{F(x)}\\ & = F(x) f'(F(x)) - f(F(x)) - (F(a) f'(F(a)) - f(F(a)))\\ & = xF(x) - f(F(x)) - aF(a) + f(F(a)) \end{align}
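A spot-check of the closed form with one concrete choice, $f(x) = e^x$ (so $f' = \exp$ and $F = \log$); this is an illustration under that assumption, not a proof:

```python
import sympy as sp

x, y, a = sp.symbols('x y a', positive=True)

f, F = sp.exp, sp.log                      # F is the inverse of f' = exp
g_direct = sp.integrate(F(y), (y, a, x))   # integrate F directly
g_formula = x * F(x) - f(F(x)) - a * F(a) + f(F(a))

print(sp.simplify(g_direct - g_formula))   # expected: 0
```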