Simplify $(\sqrt{x}) + x + 2 = (\sqrt{y}) + y + 2$
Hint: $x-y=(\sqrt x +\sqrt y)(\sqrt x - \sqrt y)$
Linear Algebra Question for Repeated Eigenvalues
Intuitively it means that, when combined with the other equations, they provide the same information. More rigorously, it means that there is some "linear combination" of the equations which gives $0=0$. This is easiest to see when, as in your case, you have $n$ equations and $n-1$ of them are redundant; this means that each one is a scalar multiple of the other.
Compute $\lim\limits_{n\to \infty} P(|S_n|>\frac{\pi n}{2})$ where $S_n=\sum\limits_{k=1}^n \frac{1}{X_k}$
Let $Y_n$ be defined by $$ Y_n = \frac{1}{n}\sum_{k=1}^{n} \frac{1}{X_k}. $$ Assuming that $Y_n$ converges in distribution to a continuous random variable $Y$, the limiting probability is $\Bbb{P}(|Y| > \pi/2)$, and this may be evaluated once we know the distribution of $Y$. We claim that this is indeed true: Claim. $Y_n$ converges in distribution to a random variable $Y$ having the Cauchy distribution with the scale parameter $\gamma = \pi/2$: $$ f_Y(y) = \frac{1}{\pi\gamma} \cdot \frac{\gamma^2}{y^2 + \gamma^2}. $$ Assuming this claim, we have $$ \lim_{n\to\infty} \Bbb{P}(|Y_n| > \gamma) = \Bbb{P}(|Y| > \gamma) = \int_{|y| > \gamma} f_Y(y) \, dy = \frac{1}{2}. $$ Proof of the Claim. We show that the characteristic function $\phi_{Y_n}(t)$ of $Y_n$ converges to $\phi_{Y}(t)$. Notice that $\phi_{Y_n}(t)$ is given by $$ \phi_{Y_n}(t) = \phi_{1/X}(t/n)^n = \left( \int_{|x|<1} \frac{e^{it/(nx)}}{2} \, dx \right)^n = \left( 1 - \int_{|x|<1} \frac{1 - \cos(t/(nx))}{2} \, dx \right)^n $$ The last integral can be simplified by the substitution $y = 1/(nx)$: $$ \int_{|x|<1} \frac{1 - \cos(t/(nx))}{2} \, dx = \frac{1}{n} \int_{1/n}^{\infty} \frac{1 - \cos(ty)}{y^2} \, dy. $$ Plugging this back and taking $n\to\infty$, we have $$ \lim_{n\to\infty} \phi_{Y_n}(t) = \exp\left\{ - \int_{0}^{\infty} \frac{1 - \cos(ty)}{y^2} \, dy \right\}. $$ This integral is easily computed from the Dirichlet integral. Indeed, integration by parts yields $$ \int_{0}^{\infty} \frac{1 - \cos(ty)}{y^2} \, dy = \int_{0}^{\infty} \frac{t\sin(ty)}{y} \, dy = \gamma|t|, $$ where $\gamma = \pi/2$ is the scale parameter of the distribution of $Y$. Therefore we have $$ \lim_{n\to\infty} \phi_{Y_n}(t) = e^{-\gamma|t|} = \phi_Y(t). $$
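A Monte Carlo sanity check of the claim (a sketch assuming, as in the characteristic-function computation above, that the $X_k$ are i.i.d. uniform on $(-1,1)$):

    import numpy as np

    # Estimate P(|S_n| > pi*n/2) for large n; the limit should be 1/2.
    rng = np.random.default_rng(0)
    n, trials = 1000, 10000
    X = rng.uniform(-1.0, 1.0, size=(trials, n))
    S = (1.0 / X).sum(axis=1)
    print(np.mean(np.abs(S) > np.pi * n / 2))  # prints a value near 0.5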
Density function of random variable conditioned on sum of independent Gaussians
Given that $X\sim N(0,\sigma^2_x)$ and $Y\sim N(0,1-\sigma^2_x)$ which are independent with the restriction $0<\sigma^2_x<1$ one wants to know the probability density function of $X|X+Y>z$. That pdf is $$\frac{f_X(x) \int_{z-x}^{\infty } f_Y(y) \, dy}{\Pr (X+Y>z)}=\frac{e^{-\frac{x^2}{2 \sigma_x^2}} \Phi \left(\frac{x-z}{\sqrt{1-\sigma_x^2}}\right)}{\sqrt{2 \pi } \sigma_x \Phi (-z)}$$
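The formula can be checked by simulation; below is a small Python sketch (the values of $\sigma_x^2$ and $z$ are arbitrary illustrative choices, not part of the answer):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    sx2, z = 0.4, 0.7
    X = rng.normal(0, np.sqrt(sx2), 2_000_000)
    Y = rng.normal(0, np.sqrt(1 - sx2), 2_000_000)
    keep = X[X + Y > z]  # sample from the law of X given X+Y > z

    def pdf(x):
        # f_X(x) * Phi((x - z)/sqrt(1 - sx2)) / Phi(-z), as stated above
        return (norm.pdf(x, scale=np.sqrt(sx2))
                * norm.cdf((x - z) / np.sqrt(1 - sx2)) / norm.cdf(-z))

    hist, edges = np.histogram(keep, bins=50, density=True)
    mid = 0.5 * (edges[:-1] + edges[1:])
    print(np.max(np.abs(hist - pdf(mid))))  # small if the formula is right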
$\int_0^\infty\dfrac{1}{(x+1)(x^2+1)}dx $
Let $x=1/t$ to obtain $$I=\int_0^{\infty}\frac{tdt}{(t+1)(t^2+1)}=\int_0^{\infty}\frac{(t+1-1)dt}{(t+1)(t^2+1)}=-I+\int_0^{\infty}\frac{dt}{t^2+1}$$ Hence $$I=\frac12\int_0^{\infty}\frac{dt}{t^2+1}=\frac12 \times \frac{\pi}2.$$
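A quick numerical check with scipy (an illustration, not part of the argument):

    import numpy as np
    from scipy.integrate import quad

    val, err = quad(lambda x: 1.0 / ((x + 1) * (x * x + 1)), 0, np.inf)
    print(val, np.pi / 4)  # both ~0.7853981633974483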
Cauchy sequence and convergent sequence
First of all, they are conceptually different ideas: it turns out that the same sequences satisfy those concepts. (BUT see below.) Analogously, you can say that an equilateral triangle is one with three equal sides, and an equiangular triangle is one with three equal angles. These concepts are expressing different ideas, but it turns out that they describe the same triangles. The second and more important point is that your book is probably, implicitly or explicitly, discussing sequences of real numbers. (Or perhaps complex numbers.) Suppose, on the other hand, that the only numbers we know about are rationals. Then it is still true that all convergent sequences are Cauchy; however it is not true that all Cauchy sequences are convergent. For example, $$\{3,\,3.1,\,3.14,\,3.141,\,3.1415,\,3.14159,\ldots\}$$ is a Cauchy sequence in $\Bbb Q$ which does not converge (does not have a limit) in $\Bbb Q$. Note that you can't say the limit is $\pi$ because we are assuming the only numbers we know about are rationals, and $\pi$ is not rational. To continue the triangle analogy: "equilateral" and "equiangular" do not describe the same shapes if we are talking about, say, quadrilaterals. (Or quadrangles!)
If the circumcentre of a triangle lies at (0,0) and centroid is the middle point of $(a^2+1,a^2+1)$ and $(2a,-2a)$.
You need not find the coordinates of the orthocenter exactly. Since the orthocenter, circumcenter and centroid are collinear, just find the equation of the line passing through the centroid and the circumcenter, i.e. $$\dfrac{y}{x}=\frac{\dfrac{a^2-2a+1}{2}}{\dfrac{a^2+2a+1}{2}}$$ So the required equation of the line is $$(a-1)^2x-(a+1)^2y=0$$
remainder of $a^2+3a+4$ divided by 7
$a \equiv 6 \pmod 7$
$a^2 \equiv 36 \equiv 1 \pmod 7$
$3a \equiv 18 \equiv 4 \pmod 7$
$a^2 + 3a + 4 \equiv 1 + 4 + 4 = 9 \equiv 2 \pmod 7$
First Order PDE Initial Value Conditions
The initial conditions aren't correct, as you observed. Data is prescribed along $x = r, t = \frac{-r}{3}.$ Thus, the characteristics and initial conditions should be listed as \begin{align} \frac{dx}{ds} = 3t;& \;\; x(r,0) = r\\ \frac{dt}{ds} = 1;& \;\; t(r,0) = \frac{-r}{3}\\ \frac{du}{ds} = u;& \;\; u(r,0) = 1 + \cos{(r)}. \end{align} From there, we can solve the system, try to write $r$ and $s$ as functions of $x$ and $t,$ i.e. $r = r(x,t),\; s = s(x,t),$ and then ultimately $u(x,t) = u(r(x,t),s(x,t)).$
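For concreteness, here is a sketch of that next step under the setup above. Integrating the three ODEs gives $$ t(r,s)=s-\frac{r}{3},\qquad x(r,s)=\frac{3}{2}s^{2}-rs+r,\qquad u(r,s)=(1+\cos r)\,e^{s}, $$ and substituting $s=t+\frac{r}{3}$ into the expression for $x$ reduces the inversion to solving the quadratic $\frac{r^{2}}{6}-r+x-\frac{3}{2}t^{2}=0$ for $r=r(x,t)$.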
Book recommendation for Linear Algebra
In my opinion: Linear Algebra Done Right by Sheldon Axler is high level; you could say it is a second course in linear algebra, not introductory. For an introductory course in linear algebra the following are great: 1) Elementary Linear Algebra: Applications Version by Howard Anton, Chris Rorres, and Anton Kaul; 2) Elementary Linear Algebra by Stephen Francis Andrilli, fourth edition. These two books are very good, in my opinion.
What does it mean when the extension of scalars is free?
In general nothing can be concluded: for example, if $S$ is a field then $M \otimes_R S$ is always free and this imposes no condition on $M$. (In this case $(-) \otimes_R S$ generally won't be faithful so it's not surprising that we lose information about $M$.) If $\varphi : R \to S$ is faithfully flat then faithfully flat descent is available and a natural question to ask is whether $M \otimes_R S$ free implies that $M$ is locally free. According to the Stacks project this is false in general and an open question if we additionally assume that $S$ is finitely presented over $R$ (asked on MO with no answer), although apparently it holds if in addition $R$ is Noetherian or $M$ is finitely generated. With either of these hypotheses, if $M \otimes_R S$ is free of rank $1$ then it follows that $M$ is locally free of rank $1$, or equivalently an invertible module / line bundle.
All elements of a set
The set $A$ is not properly defined. I am interpreting $A$ as the range of $\sqrt {8-t+\sqrt {2-t}}$. $\sqrt {8-t+\sqrt {2-t}}$ is defined only when $t \leq 2$. This function is continuous and decreasing. It tends to $\infty$ as $t \to -\infty$ and the value at $t=2$ is $\sqrt 6$. Hence the answer is $[\sqrt 6,\infty)$.
Does every closed, densely defined operator in a Banach space have a closed, densely defined extension on a Hilbert space?
The answer seems to be no. A counterexample follows. Let $H_1$ be a separable Hilbert space with norm $\lVert \cdot \rVert_1$ and let $U$ be a bounded, injective, self-adjoint operator on $H_1$ with $0\in\sigma(U)$. Then we may consider the completion $H_2$ of $H_1$ with respect to the norm $\lVert \psi\rVert_2 := \lVert U \psi \rVert_1$. Since $U$ is not boundedly invertible, we have $H_2\supsetneq H_1$. On the other hand, $H_1$ is clearly dense in $H_2$. Furthermore, the extension $\bar{U}:H_2\rightarrow H_1$ of $U$ to $H_2$ is a unitary transformation. For definiteness, let $H_1=\ell^2$, and let \begin{align} U e_n &= n^{-4}e_n + n^{-3}e_1, \mbox{ if }n\geqslant 2,\\ U e_1 &= a_1 e_1 + \sum_{n=2}^\infty n^{-3}e_n, \end{align} with $a_1=2\sum_{n=2}^\infty n^{-2}$. Furthermore, consider the closed, densely defined operator $T$ defined by $Te_n=n^4e_n$. Then $U$ is bounded and positive definite with $0\in\sigma(U)$, and $UT$ is not closable. Since $H_1\subseteq H_2$, we may consider $T$ as a densely defined operator in $H_2$, and we denote this operator by $T_2$. Suppose now that $T_2$ is closable, with closure $\bar{T}_2$. Then $UT\subseteq \bar{U}\bar{T}_2\!\!\upharpoonright_{H_{1}\cap\mathrm{dom}(\bar{T}_2)}$. Since $\bar{U}$ is boundedly invertible, $\bar{U}\bar{T}_2$ is closed, which ensures that $\bar{U}\bar{T}_2\!\!\upharpoonright_{H_{1}\cap\mathrm{dom}(\bar{T}_2)}$ is closed. But then $UT$ is closable, and this is a contradiction.
Any nonzero polynomial can be suitably modified to become everywhere nonzero on ${\mathbb Z}^n$?
We'll prove that it is always possible. Take the polynomial $P$ and write it as $$ \sum_r x_n^r P_r(x_1, \dots, x_{n-1}) $$ for polynomials $P_0, \dots, P_k$ not all zero. The linear dependences between the $P_r$ in $\mathbb{C}[x_1, \dots, x_{n-1}]$ form a linear subspace of $\mathbb{C}^{k+1}$, whose intersection with the set $\{(1, t, \dots, t^k): t \in \mathbb{R}\}$ is finite. In particular, we can choose $a_n \in \mathbb{R}$ such that no solution $t$ lies in $\mathbb{Z}+a_n$. Applying the flag transformation $x_n \mapsto x_n + a_n$, we get that $P(x_1, \dots, x_{n-1}, m)\neq 0$ in $\mathbb{C}[x_1, \dots, x_{n-1}]$ for all $m\in \mathbb{Z}$. Now for each fixed value of $m$ we can apply the same argument to get a finite number of values of $t$ for which $P(x_1, \dots, x_{n-2}, t, m)=0$ in $\mathbb{C}[x_1, \dots, x_{n-2}]$. Taking the union over $m$, we have a countable collection of 'bad' $t$ values. We can now apply a translation in $x_{n-1}$ to make these bad values non-integers. Keep going iteratively to get a flag transformation (in fact, a translation) which pushes all zeros of $P$ off the integer lattice.
Why Riemannian metric does not depend on the choice of coordinate system
I think you have not interpreted the question correctly (or at least you have left out too many words for me to tell whether you have interpreted it correctly). In particular, I do not see that you have actually used the definition of a smooth manifold; in particular you have not made use of the key fact about smooth manifolds, namely smoothness of overlap maps. You write about two coordinate systems $x$ and $y$, and you mention the overlap $x^{-1} y$, but you never mention or make use of the key fact that the overlap maps $x^{-1} y$ and $y^{-1} x$ are smooth. Let me separate the definition of a Riemannian metric on $M$ into two parts: the object given; the property that object must satisfy. The object given is: the correspondence which associates to each point $p \in M$ an inner product $\langle\cdot,\cdot\rangle_p$ on the tangent space $T_p M$. Up to here, coordinate systems are not involved, but now they come into the property that the correspondence must satisfy: if $x:U\subset\mathbb R^n\rightarrow M$ is a coordinate system around $p$, with $x(x_1,...,x_n)=q\in x(U)$, then $g_{i,j}(x_1,...,x_n):=\langle dx_q(e_i),dx_q(e_j)\rangle_q$ is a smooth function on $U$. The issue is whether this property depends on the coordinate system. What one must show is: if $y:V\rightarrow M$ is another coordinate system around $p$, and if we let $h_{i,j}(y_1,...,y_n) := \langle dy_q(e_i),dy_q(e_j) \rangle_q$, then on the overlap $U \cap V$, to say that each of the functions $g_{i,j} \mid x(U \cap V)$ is smooth is equivalent to saying that each of the functions $h_{i,j} \mid y(U \cap V)$ is smooth. The way you prove this equivalence is to write out how the maps $g_{i,j} \mid x(U \cap V)$ and $h_{i,j} \mid y(U \cap V)$ (for each $i,j$) are related, expressing this relation as an equation obtained from the chain rule, involving the Jacobian matrices of the overlap maps $x^{-1} y$ and $y^{-1} x$.
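For reference, that chain-rule relation can be written out as follows (a sketch, writing $\varphi = x^{-1} \circ y$ for the overlap map): on the overlap, $$ h_{k\ell} = \sum_{i,j=1}^{n} \frac{\partial \varphi^{i}}{\partial y_k}\,\frac{\partial \varphi^{j}}{\partial y_\ell}\;\left(g_{ij}\circ\varphi\right), $$ so smoothness of the $g_{ij}$ together with smoothness of $\varphi$ gives smoothness of the $h_{k\ell}$, and the other direction follows symmetrically using $y^{-1} x$.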
Parametric Curves finding its cartesian equation
My first reaction to any problem involving trig functions is to write them in terms of sine and cosine. Here, $x= 2 \csc(X)= \frac{2}{\sin(X)}$ and $y= \frac{\cos(X)}{\sin(X)}$. From the first equation $\sin(X)= \frac{2}{x}$ so that $\cos(X)= \sqrt{1- \sin^2(X)}= \sqrt{1- \frac{4}{x^2}}= \frac{\sqrt{x^2- 4}}{x}$ and $y= \frac{\frac{\sqrt{x^2- 4}}{x}}{\frac{2}{x}}= \frac{1}{2}\sqrt{x^2- 4}$. To allow for possible sign changes, multiply both sides by 2 and square: $4y^2= x^2- 4$, which reduces to the hyperbola $x^2- 4y^2= 4$.
improving estimation of integral
No. Briefly, while an error estimate can be computed, there is not enough information to determine if the technique is applicable or if the estimate is reliable. Moreover, the error estimate is comparable in magnitude to the error which the cautious user must assign to the trapezoidal sum. Let $T$ denote the target value, i.e. the integral $T = \int_1^{1.8} f(x)dx$ and let $A_h = A_h(f)$ denote the trapezoidal sum for $f$ computed with uniform step size $h$. We can compute $A_h = 1.773$ and $A_{2h} = 1.79$ for $h = 0.1$. Richardson's error estimate is $$E_h = \frac{A_h - A_{2h}}{2^{p}-1} = -0.005\overline{6}, \quad (\text{for $ p = 2$})$$ where $p>0$ depends on the function $f$. There is not enough information to justify the application of Richardson's technique, identify the value of $p$ or determine if $E_h$ is an accurate approximation of the true error $T - A_h$. For this we need more points, so that several fractions of the form $$F_h = \frac{A_{2h} - A_{4h}}{A_h - A_{2h}}$$ can be computed. If these fractions converge monotonically to a positive value as $h \rightarrow 0_+$, then this is evidence of an asymptotic error expansion of the form $$T -A_h = \alpha h^p + \beta h^q + O(h^r), \quad 0 < p < q < r.$$ Moreover, this behavior would also indicate that rounding errors are much less significant than the discretization error, that the higher order terms are insignificant compared with the dominant error term $\alpha h^p$, and that the error estimate $E_h$ is reliable. The order $p$ of the dominant error term would follow from the observed limit of $F_h$, since $F_h \rightarrow 2^p$ as $h \rightarrow 0_+$. The order $q$ of the secondary error term would follow from the observed convergence rate. If $p = 2$, then this would indicate that $f$ is several times differentiable on the entire interval. However, we do not have more points. Let us return to the information that we do have. In general, there is no reason to believe that the given values for $f$ are exact! At best, they are accurate to the number of significant figures shown. In this case we have three significant figures. In other words, the true value of, say, $f(1)$ is $f(1) = 1.54 \pm 0.005$, as any value in the interval $(1.535,1.545)$ rounds to the midpoint $1.54$. The uncertainty on $f$ translates into an uncertainty on the computed value of the trapezoidal rule of $$A_h = 1.773 \pm 0.004.$$ The width of this interval is comparable to Richardson's error estimate $E_h$ in the smooth case where $p=2$. So even in the best case, where $f$ is a smooth function and the step size $h = 0.1$ is small enough to quell the effect of the higher order terms, yet large enough that subtractive cancellation is not an issue and the error estimate would be reliable, the uncertainty on $f$ is enough to ruin our hopes. We also need to know how accurately $f$ has been computed.
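For concreteness, here is the estimate above reproduced in a few lines of Python (a sketch using only the two trapezoidal sums quoted in this answer and the assumption $p = 2$):

    # Richardson's error estimate from the two trapezoidal sums above:
    # A_h = 1.773 with h = 0.1, and A_2h = 1.79 with step 2h = 0.2.
    A_h, A_2h, p = 1.773, 1.79, 2
    E_h = (A_h - A_2h) / (2**p - 1)
    print(E_h)        # -0.005666..., matching the estimate in the text
    print(A_h + E_h)  # extrapolated value A_h + E_h = 1.767333...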
Mathematics of a Simple Counting Game
If we use continuous time, the strategy space could be very large and make the game intractable. So let's say the game takes place in discrete time: $t=1,2,3,\ldots$, and let $I$ represent the set of players. Let the state vector include the current number (the last number called before round $t$ was reached), $n_t$, the list of players that have not been eliminated yet, $J\subset I$, and their current scores (numbers of points), $\mathbf{p}=(p_j)_{j\in J}$. A typical state or history is $h_t=(n_t,J,\mathbf{p})$. A (Markov) strategy for player $i$ is a mapping $s_i$ from the set of all possible values of the state vector to the interval $[0,1]$, which we interpret as saying that $s_i(h_t)$ is the probability that player $i$ calls at round $t$ when the last called number is $n_t$, the set of active players is $J$ and their respective scores are $\mathbf{p}$. It simplifies the analysis a lot if we restrict attention to Markov strategies (i.e. strategies that depend only on the state and not on calendar time), so that $s_i$ is a function only of the state vector $h_t$ and not of $t$. The question does not make clear whether being eliminated and finishing in 2nd or 3rd are the same or not. Does a player care only about winning? A player that is behind and has no chance of winning may still eliminate the front-runner. Does a player prefer the outcome where no one else wins to the outcome where someone else wins, or does he not care? Do players care about finishing the game quickly, or not? In any case, we need to make assumptions for the payoffs to be clearly defined. Given a strategy for each player we can compute the probabilities with which the state vector transitions to its new value. If we know how players rank the final states, we can compute the expected payoffs. Of course the game is still too complex for us to obtain a solution here. But one should be able to solve the sub-game that arises when only two players remain in the game and one has $10$ points while the other has $9$ points. Perhaps (or perhaps not) we may need to make the players impatient: winning at round $t$ is worth $\delta^t$ and losing is always worth zero, where $\delta\in(0,1)$.
polynomial factorization and equation solving
You could use the Tschirnhaus transformation, which makes the coefficient of the $x^3$ term zero; this is done by considering $f(x-\frac{a_{n-1}}{na_n})$. In this case $a_{n-1}=A$, $a_n =1$ and $n=4$. Doing this you will arrive at a polynomial of the form $$x^4+Ex^2+F=0,$$ which should be nicer to factorise; find the roots, then subtract $\frac{a_{n-1}}{na_n}$ from the result.
0 constant divided by a function
Indeed we have that $$\frac{0}{f(x)}=0$$ for any $x$ such that $f(x)\neq 0$, otherwise the expression is undefined. Refer also to the related Division by $0$
Number of numbers divisible by 5 and 6
If $n$ is divisible by 5 and by 6, then it is divisible by 30, and conversely; so just apply your method with $d = 30$ and you're done. (A much more challenging, and fun, question is to count three-digit numbers divisible by 5 or 6.)
How to find the equation of other two sides of a rhombus if equation of two sides (intersecting at the origin) and diagonal is given?
Let $OABC$ be our rhombus, $3x-4y=0$ be an equation of $OC$ and $12x-5y=0$ be an equation of $OA$. Thus, $OB=12$. We know that $m_{OA}=\frac{12}{5}$ and $m_{OC}=\frac{3}{4}$. Thus, $$\tan\measuredangle AOC=\frac{\frac{12}{5}-\frac{3}{4}}{1+\frac{12}{5}\cdot\frac{3}{4}}=\frac{33}{56}.$$ Thus, $$\cos\frac{\measuredangle AOC}{2}=\frac{11}{\sqrt{130}}$$ and $$OA=\frac{6}{\cos\frac{\measuredangle AOC}{2}}=\frac{6\sqrt{130}}{11}.$$ Now, $A$ and $C$ are placed on the circle $$x^2+y^2=\left(\frac{6\sqrt{130}}{11}\right)^2.$$ Thus, for $C$ we obtain: $$x^2+\left(\frac{3}{4}x\right)^2=\left(\frac{6\sqrt{130}}{11}\right)^2,$$ which gives something very ugly.
What does this graph look like? $y = \log_x{2}$
If you plot $\log_x 2$ in a graphing calculator, you will see that the graph is basically $$f(x)=\frac{\ln2}{\ln x}\approx\frac{0.69314718056}{\ln x},$$ with an asymptote at $x=1$.
Arched coloring of graph: $A(D) \ge \log \chi(D)$
If you want to show that $A(D) \ge \log \chi(D)$, then the natural strategy is to use an arc-coloring of $D$ with $A(D)$ colors to define a vertex-coloring of $D$ with $2^{A(D)}$ colors (getting an upper bound on $\chi(D)$). Even if you succeeded in going the other way, using a vertex coloring to define an arc-coloring, you would get the reverse inequality. One way to rephrase the condition for a proper arc-coloring is that for every vertex $v$ and for every color $c$, there cannot be both an edge into $v$ and an edge out of $v$ given the color $c$. Given this observation (and an arc-coloring of $D$ with $A(D)$ colors), a natural labeling of the vertices of $D$ with $2^{A(D)}$ values is to give each vertex $v$ a label $f(v) \in \{-1,1\}^{A(D)}$ where: $f_i(v) = -1$ if all edges of color $i$ incident to $v$ are oriented into $v$. $f_i(v) = +1$ if all edges of color $i$ incident to $v$ are oriented out of $v$. $f_i(v)$ is arbitrary if there are no edges of color $i$ incident to $v$. Now we just check that this is a proper coloring. If $(v,w)$ is any edge, let $i$ be its color. Then $f_i(v) = +1$, because the edge $(v,w)$ has color $i$ and is oriented out of $v$, and $f_i(w) = -1$, because the edge $(v,w)$ has color $i$ and is oriented into $w$. Therefore $f(v) \ne f(w)$.
Are the linear combinations of iid random variables independent?
Suppose $n=2$ and each $X_i$ is a random bit, uniformly distributed on $\{0,1\}$. Then $\bar X = \frac12(X_1+X_2)$ and $X_1-\bar X=\frac12(X_1-X_2)$ are certainly not independent -- for example because if we know that $\bar X=0$, then necessarily $X_1-\bar X$ will be $0$ too.
If $a\in R\setminus\{0\}$ is not a unit, $R$ is a principal ideal domain, and $a$ cannot be factorized into irreducible elements, then $a$ is reducible
Well, suppose $a$ is reducible, and write $a=a_1b_1$, where $a_1$ and $b_1$ cannot be units. Write $a_1=a_2b_2$ similarly, and so on. This gives you a sequence of elements $a,a_1,a_2,\ldots$ such that $a_1\mid a$, $a_2\mid a_1$, and so on, or an ascending chain of ideals $$(a) \subset (a_1)\subset (a_2)\subset \ldots.$$ This chain becomes eventually constant in a Noetherian ring, and an integral domain in which every ideal is principal is Noetherian. This is what is required in the 2nd part of your proof.
Convergence/ divergence tests
HINT: Note that $$\dfrac2{\ln(n)} \leq \dfrac{3+(-1)^n}{\ln(n)} \leq \dfrac4{\ln(n)}$$
Does the Fundamental Theorem of Algebra hold true for infinite polynomials?
The fundamental theorem does not hold for infinite polynomials. Your own example of $\mathrm e^z$ is a good one. We have $\mathrm e^z \neq 0$ for all $z \in \mathbb C$. This raises a really important question. All of the non-constant, finite Taylor Polynomials for the exponential function satisfy the fundamental theorem. For example: $1+z = 0$ has, when counted with multiplicity, one solution over $\mathbb C$. $1+z+\frac{1}{2}z^2 = 0$ has, when counted with multiplicity, two solutions over $\mathbb C$. $1+z+\frac{1}{2}z^2+\frac{1}{3!}z^3 = 0$ has, when counted with multiplicity, three solutions over $\mathbb C$. $1+z+\frac{1}{2}z^2+\cdots+\frac{1}{n!}z^n = 0$ has, when counted with multiplicity, $n$ solutions over $\mathbb C$. For the Taylor Polynomial of degree $n$, let $R_n$ be the set of its roots, for example: $R_1 = \{-1\}$ $R_2 = \{-1-\mathrm i, \ -1+\mathrm i\}$ What happens to these roots as $n \to \infty$? I've calculated (using my computer) $R_1,$ $R_2$, $R_3$ and $R_4$ and found that all of the elements of $R_4$ have larger moduli than the elements of $R_3$, and similarly for $R_3$ and $R_2$, etc. It seems that the roots are getting bigger in terms of their modulus. It is tempting to think that in the limit, they all retreat to infinity and that is why there are no finite solutions to $\mathrm{e}^{z} = 0$. Sadly, $\mathrm e^z$ does not seem to behave well at complex infinity - at the North Pole of the Riemann Sphere. If we write $z = x+\mathrm i y$ then we get $\mathrm e^z = \left(\mathrm e^x\cos y\right) + \mathrm i \left(\mathrm e^x \sin y\right)$. If we let $z \to \infty$ along the positive real axis, i.e. $y=0$, $x>0$ and $x \to +\infty$, then $\mathrm e^z \to + \infty$. If we let $z \to \infty$ along the negative real axis, i.e. $y=0$, $x<0$ and $x \to -\infty$, then $\mathrm e^z \to 0$. If we let $z \to \infty$ along either the positive or the negative imaginary axis, i.e. $x=0$ and $y \to \pm \infty$, then there is no well-defined limit; the value of $\mathrm e^z$ lies in $\{\cos y +\mathrm i \sin y: y \in \mathbb R\}$, but it keeps spiralling around and never settles down. I would be tempted to say that all of the roots have retreated to infinity and imposed some irreconcilable conditions that cause a discontinuity. Don't believe a word of this though; it's all personal speculation.
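One can probe this numerically; the sketch below (a numpy illustration in the same spirit as the computation described above) prints the smallest root modulus of the degree-$n$ Taylor polynomial of $\mathrm e^z$, and the moduli indeed grow with $n$:

    import numpy as np
    from math import factorial

    # Smallest root modulus of 1 + z + ... + z^n/n!
    for n in (1, 2, 4, 8, 16, 32):
        coeffs = [1 / factorial(k) for k in range(n, -1, -1)]  # highest power first
        print(n, np.abs(np.roots(coeffs)).min())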
Where is the most appropriate place to ask about the contribution of published papers?
Ideally, the reviews in Zentralblatt and MathReviews should provide precisely this information. In practice, many of the reviews are not exactly illuminating. The most important papers, however, often end up getting a "Featured Review" on MathSciNet, and those are usually very clear and well written. Now, one other way of learning about the significance of papers (if either the paper itself does not provide sufficient introduction so Gerry's advice is hard to follow, or if for some reason you prefer to seek out third-party evaluation of the paper) is to follow the citation trail. Go to the MathSciNet entry for the corresponding paper. On the top right there is a box listing citations to that paper (these are reasonably complete for papers published within the last 40 years). If there are Review articles citing that paper, go and read them. You will likely be enlightened, since the articles will probably also give a good overview of the "lay of the land", so to speak. Citations from references are a bit more hit and miss: sometimes the paper is only cited for a minor technical fact, sometimes the paper is only cited to be polite, but sometimes you will find another paper in the field with a good expository account of why precisely the methods and results of the paper you are interested in are useful.
Limit without L'Hopital's rule: $\lim_{x\to0} \frac{1-\cos^3 x}{x\sin2x}$
Hint: Factor $1-\cos^3 x$ and note that $1-\cos x=2\sin^2(x/2)$.
what is the index of a subgroup generated by -1
Answer: it's the same as for the subgroup generated by $|k|$. Hint for a proof: any subgroup containing $-k$ contains $k$, and conversely.
Compute $\log 3$ numerically using power series
Hint: $$\log(3)=\int_{1}^{3}\frac{dx}{x}=\int_{-1}^{1}\frac{dx}{2+x}=\int_{-1}^{1}\left(\frac{1}{2}-\frac{x}{4}+\frac{x^2}{8}-\frac{x^3}{16}+\ldots\right)\,dx $$ but the integral over $(-1,1)$ of an odd integrable function is zero, hence $$ \log(3)=\int_{-1}^{1}\left(\frac{1}{2}+\frac{x^2}{8}+\frac{x^4}{32}+\ldots\right)\,dx = \sum_{m\geq 0}\frac{1}{(2m+1)4^m}$$ where $$ \sum_{m\geq 6}\frac{1}{(2m+1)4^m}\leq \sum_{m\geq 6}\frac{1}{13\cdot 4^m}=\frac{1}{39936}\approx 2.5\cdot 10^{-5} $$ $$ \sum_{m\geq 6}\frac{1}{(2m+1)4^m}\geq \frac{1}{13\cdot 4^6}\approx 1.8\cdot 10^{-5}$$ and $$ \sum_{m=0}^{5}\frac{1}{(2m+1)4^m}=\frac{3897967}{3548160}\approx 1.09859$$ so that $\log(3)=\color{red}{1.0986}\ldots$
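A two-line numerical check of the partial sum (a Python sketch):

    from math import log

    s = sum(1 / ((2*m + 1) * 4**m) for m in range(6))
    print(s, log(3))  # 1.09858... vs 1.098612...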
Set Builder Notation: "lift" the predicate
The Set-builder notation is exactly an "operation" that transforms a predicate into a term (i.e. a name). We start with the predicate $\varphi(x)$ (e.g. "$x$ is Even") and applying to it the SBN we get the "name" of the set: $E = \{ x \mid \varphi(x) \}$ of all and only those objects that satisfy the predicate (i.e. the set of even numbers). In the formal syntax, $\varphi$ can be any (well-formed) formula with one free variable. Thus, if we abbreviate with $\varphi(x)$ the formula: $\exists z \ (x = 2 \times z) \land x \le 10$ the set $E_{10} = \{ x \mid \varphi(x) \}$ will contain all and only the even numbers up to $10$. See e.g. : Herbert Enderton, A Mathematical Introduction to Logic (2nd ed - 2001), Ch.ZERO Useful Facts about Sets, page 2: We write “$\{ x \mid \text{__} x \text{__} \}$” for the set of all objects $x$ such that $\text{__} x \text{__}$. We will take considerable liberty with this notation. For example, $\{m, n \mid m < n \in \mathbb N \}$ is the set of all ordered pairs of natural numbers for which the first component is smaller than the second. And $\{x \in A \mid \text{__} x \text{__} \}$ is the set of all elements $x$ in $A$ such that $\text{__} x \text{__}$.
Finding the coefficients of $h(z)$ laurent series
We can write the series for $\psi(-z)$ at $z=-2$ as $$ \begin{align} \psi(-z) &=-\gamma+\sum_{k=0}^\infty\left(\frac1{k+1}-\frac1{k+2-(z+2)}\right)\\ &=-\gamma+\sum_{k=0}^\infty\left(\frac1{k+1}+\frac1{(z+2)-(k+2)}\right)\\ &=1-\gamma+\sum_{k=1}^\infty(1-\zeta(k+1))(z+2)^k\tag{1} \end{align} $$ where we have used that $$ \begin{align} \frac1{n!}\frac{\mathrm{d}^n}{\mathrm{d}z^n}\sum_{k=0}^\infty\frac1{(z+2)-(k+2)} &\stackrel{\hphantom{z=-2}}{=}\sum_{k=0}^\infty\frac{(-1)^n}{((z+2)-(k+2))^{n+1}}\\ &\stackrel{z=-2}{=}1-\zeta(n+1)\tag{2} \end{align} $$ Therefore, we get $$ \begin{align} \frac{\psi(-z)}{((z+2)-1)(z+2)^3} &=-\left[\sum_{k=0}^\infty(z+2)^{k-3}\right]\left[1-\gamma+\sum_{k=1}^\infty(1-\zeta(k+1))(z+2)^k\right]\\ &=\bbox[5px,border:2px solid #C00000]{\sum_{n=0}^\infty\left[-(1-\gamma)+\sum_{k=1}^n(\zeta(k+1)-1)\right](z+2)^{n-3}}\tag{3} \end{align} $$ I see that this result is used to compute a sum that is far more simply computed using partial fractions. $$ \begin{align} \sum_{n=1}^\infty\frac1{(n+1)(n+2)^3} &=\sum_{n=1}^\infty\left(\color{#C00000}{\frac1{n+1}-\frac1{n+2}}\color{#00A000}{-\frac1{(n+2)^2}}\color{#0000FF}{-\frac1{(n+2)^3}}\right)\\ &=\color{#C00000}{\frac12}\color{#00A000}{-\left(\zeta(2)-1-\frac14\right)}\color{#0000FF}{-\left(\zeta(3)-1-\frac18\right)}\\ &=\frac{23}8-\frac{\pi^2}6-\zeta(3)\tag{4} \end{align} $$
Two dimensional random variables and conditional pmf if $f(x)=24x^2, 0<x<\frac{1}{2} \text{ and} f(y|x=X)=\frac{y}{2x^2}, \text{ if} 0< y<2x$
As you have written, $f_{X,Y}(x,y)=f_X(x)\cdot f_{Y|X}(x,y)$. The notation is not always uniform. In a simplified way $f_X$, $f(x)$ and $f_X(x)$ have the same meaning: they represent the pdf of the random variable $X$. For the calculation we can plug in the terms: $$f_{X,Y}(x,y)=24x^2\cdot \frac{y}{2x^2}=12y$$ From this result we can re-calculate $f_X(x)$ by integrating. For this purpose we need the bounds for $y$. We know that $y<2x$ and we also know that $0<y$: $$f_X(x)=\int_{0}^{2x} 12y \, dy=...$$ To calculate $f_Y(y)$ you integrate $f_{X,Y}(x,y)$ as well. We know that $y<2x$ and therefore $\frac12y<x<0.5$: $$f_Y(y)=\int_{\frac12y}^{0.5} 12y \, dx=...$$
Newton Raphson Iteration method in Matlab
MATLAB allows for the creation of function handles. This can be done a few ways. One approach is to explicitly define the functions:

    function x = myNewtonCode(x_init)
        K = 25;
        x = zeros(K,1);
        x(1) = x_init;
        for k = 2:K
            % Newton step: x_k = x_{k-1} - f(x_{k-1})/f'(x_{k-1})
            x(k) = x(k-1) - f(x(k-1))/fp(x(k-1));
        end
    end

    function y = f(x)
        y = 45*x^2 + 22*x - 13;
    end

    function yp = fp(x)
        % Derivative of f
        yp = 90*x + 22;
    end

You could also do this to call it inline from the MATLAB command window:

    function x = testCode(x_init, f, fp)
        % Filename: testCode.m
        K = 25;
        x = zeros(K,1);
        x(1) = x_init;
        for k = 2:K
            x(k) = x(k-1) - f(x(k-1))/fp(x(k-1));
        end
    end

and call it with x = testCode(1,@f,@fp);. Note that you have to create separate function files for $f$ and $f'$. A less clunky way of doing this is through anonymous functions:

    f = @(x) 45*x^2 + 22*x - 13;   % replace with the actual form of f
    fp = @(x) 90*x + 22;
    x = testCode(1, f, fp);

Notice that when you do it with anonymous functions vs. static files on the disk, you drop the @ symbol. In general, you should never need to pass functions as strings. In doing so, you'd have to call into eval, which is bad practice and could corrupt variables unintentionally.
Where am I going wrong in this ODE?
$$y'(x) = \pm\left(\frac yB-1\right)^{1/2} \\ \implies \int\frac{dy}{\left(\frac yB-1\right)^{1/2}}=\pm\int dx = \pm x+C$$ $$\int\frac{dy}{\left(\frac yB-1\right)^{1/2}}=2B\sqrt{\left(\frac yB-1\right)}$$ $$2B\sqrt{\left(\frac yB-1\right)}=\pm x+C$$ $$4B^2\left(\frac yB-1\right)=x^2+C^2\pm 2Cx$$ Since $$\int\frac{dx}{\sqrt{ax}}=\frac{2}{a}\sqrt{ax}$$
What is the locus such that any vector from it has a given dot product with the given vector?
It's easy to write down one point $p = \frac{d}{|a|^2}\,a$ in this set, where $d$ is the prescribed value of the dot product. For any other point $q$ in the set, we have $(p - q) \cdot a = 0$, so the set of vectors $p - q$ is precisely the set of vectors orthogonal to $a$; the locus is the hyperplane through $p$ with normal vector $a$.
Probability question with before and after scenario
If we first randomly choose a cupcake from A and transfer it to B, we can say we transfer $\frac{2}{5}$ blue $+$ $\frac{3}{5}$ chocolate cupcakes (in expectation). If we then randomly choose a cupcake from B, the probability that the cupcake is blue is $\frac{2.4}{8} = 30\%$.
Limit of distribution functions
Take $\psi \in \mathcal{D}$ such that $\int \psi = 1$ and let $\psi_n(x) = n \psi(nx).$ Then $\psi_n \to \delta$ is a mollifier. Also take $\rho \in \mathcal{D}$ such that $\rho(0)=1$ and let $\rho_n(x) = \rho(x/n).$ Then $\rho_n \to 1$ pointwise and in $\mathcal{E} = C^\infty.$ Let $F_n = \rho_n (\psi_n*F).$ Does $F_n \in \mathcal{D}$? Does $F_n \to F$ in $\mathcal{D}'$?
Population model and t representing years with a function $\sin t$ Should this t be in radians?
Without further information, like another data point, I see no reason why you'd be wrong. You could even use grads or mils as other measures of angle. Mind, $0.41$ rad $\approx 23.5^\circ$, not $27.1^\circ$. Was that the complete question?
Prove that $S+T$ is nilpotent
Hint: Consider $(S+T)^{q+k}$. Make sure to use that $ST=TS$.
What are the detailed steps of the integral $\int_{-\infty}^{+\infty}{{\rm e}^{(-\frac{(x-(a+bc^2))^2} {2c^2})}}\,{\rm d}x$?
Put $x-(a+bc^2)=t$; then $I=\displaystyle\int_{-\infty}^{\infty}e^{-t^2/2c^2} dt$. We know that $\mathcal{F}[e^{-{at^2}}]=\displaystyle\int_{t=-\infty}^{t=\infty}e^{-at^2}e^{-j\omega t}dt=\sqrt{\dfrac{\pi}{a}}\exp\left(\dfrac{-\omega^2}{4a}\right)$. For your case put $\omega =0$ and $a=\dfrac{1}{2c^2}$; you'll get $I=c\sqrt{2\pi}$. Note: in general, $\displaystyle\int_{-\infty}^{\infty} \exp(-mx^n)dx=\dfrac{2\Gamma(\frac{1}{n})}{n\,m^{\frac{1}{n}}}$ for even $n$.
What is the probability that it was the two-headed coin?
The probability you're asking for is the conditional probability of having thrown the two-headed coin after knowing the result is heads. Its value can be calculated by the following expression (calling $X$ the event of throwing the two-headed coin, and $H$ the event of getting heads after the throw): $$P[X|H]=\frac{P[X\cap H]}{P[H]},$$ so we calculate $P[H]$, which clearly is: $$P[H]=\frac{1}{4}+\frac{1}{4}\cdot\frac{1}{2}+\frac{1}{4}\cdot\frac{1}{2}+\frac{35}{100}\cdot\frac{1}{4}=\frac{47}{80}.$$ We already know that $P[X\cap H]=P[X]=\frac{1}{4}$ because it will always give you heads as result and the coins are selected with equal chance. So the final probability is: $$P[X|H]=\frac{P[X\cap H]}{P[H]}=\frac{\frac{1}{4}}{\frac{47}{80}}=\boxed{\frac{20}{47}}$$
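As a sanity check, the computation can be reproduced with exact rational arithmetic; the four equally likely coins below (one two-headed, two fair, one with $P(H)=35/100$) are read off from the expression for $P[H]$ above:

    from fractions import Fraction

    priors = [Fraction(1, 4)] * 4
    heads = [Fraction(1), Fraction(1, 2), Fraction(1, 2), Fraction(35, 100)]
    p_h = sum(p * q for p, q in zip(priors, heads))
    print(p_h)                         # 47/80
    print(priors[0] * heads[0] / p_h)  # 20/47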
Find infinitely many pairs of integers $a$ and $b$ with $1 < a < b$, so that $ab$ exactly divides $a^2 +b^2 −1$
Note that the usual problem, and quite difficult if you do not know what to expect, is slightly different, https://en.wikipedia.org/wiki/Vieta_jumping#Example_2 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Define integer $k \geq 3$ as $$k = \frac{x^2+y^2-1}{xy}$$ We are solving $$ x^2 - kxy + y^2 = 1. $$ The discriminant of the quadratic form is $\Delta = k^2 - 4$ which is positive but not a square. As a result, if we can solve $\tau^2 - \Delta \sigma^2 = 4$ we can construct the generator of the (oriented) automorphism group of the form. Now, $k^2 - \Delta = 4,$ so $\tau = \pm k, \sigma = \pm 1.$ Given a form (positive nonsquare discriminant) of coefficients $\langle A,B,C\rangle$ we can take the generator (in $SL_2 \mathbb Z$) as $$ \left( \begin{array}{rr} \frac{\tau - B \sigma}{2} & -C \sigma \\ A \sigma & \frac{\tau + B \sigma}{2} \end{array} \right) $$ With $\langle A,B,C\rangle = \langle 1,-k,1\rangle$ we get $$ U = \left( \begin{array}{rr} k & -1 \\ 1 & 0 \end{array} \right) $$ and will stick with this one, as convenient for keeping $x \geq y \geq 0.$ The transformation from one solution $(x,y)$ to the next is matrix multiplication with the column vector $(x,y)^T$ on the right. That is, $$ \color{blue}{ (x,y) \mapsto (kx-y, x)}. $$ The first few solutions with $k \geq 3$ are $$ (1,0), $$ $$ (k,1), $$ $$ (k^2 - 1,k), $$ $$ (k^3 - 2k,k^2 - 1), $$ $$ (k^4 - 3 k^2 + 1,k^3 - 2k). $$ Oh, as far as actual division, $(1,0)$ is not legal in the original fraction. Life is like that. Let's try $k=3$ in the fraction: $(3,1)$ gives $9/3 = 3.$ $(8,3)$ gives $72/24 = 3.$ The final thing is Cayley-Hamilton. The matrix I named $U$ above satisfies $U^2 - k U + I = 0,$ or $U^2 = kU - I.$ As a result, both $x$ and $y$ satisfy linear recurrences, $$ x_{j+2} = k x_{j+1} - x_j, $$ $$ y_{j+2} = k y_{j+1} - y_j. $$ Again with $k=3$, we get $x$ in $$ 1, 3, 8, 21, 55, 144, $$ Note that these are every second Fibonacci number. With $k=4$, we get $x$ in $$ 1, 4, 15, 56, 209, 780, $$ With $k=5$, we get $x$ in $$ 1, 5, 24, 115, 551, 2640, $$ With your $k=2$ you have $x^2 - 2xy + y^2 = 1,$ or $(x-y)^2 = 1,$ or $x-y = \pm 1.$
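Here is a quick Python check (an illustration of the recurrence above) that consecutive terms really give $\frac{x^2+y^2-1}{xy}=k$:

    # Iterate (x, y) -> (k*x - y, x) starting from (k, 1).
    for k in (3, 4, 5):
        x, y = k, 1
        for _ in range(5):
            assert x*x + y*y - 1 == k * x * y
            x, y = k*x - y, x
        print(k, "ok")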
Residue theorem, double pole, sinh.
To calculate the residue of $f$ at $z_0$ for a pole of order $n$ we have $$\text{Res}(z_0,f(z))=\frac{1}{(n-1)!}\lim_{z\to z_0} \frac{d^{n-1}}{dz^{n-1}}\left((z-z_0)^n f(z)\right)$$ So, for $f(z)=\frac{e^{-iz}}{(\sinh z)^2}$, $n=2$ and $z_0=im\pi$ we have $$\begin{align} \text{Res}(im\pi,f(z))&=\frac{1}{(2-1)!} \lim_{z\to im\pi}\frac{d^{2-1}}{dz^{2-1}}\left((z-im\pi)^2 \frac{e^{-iz}}{(\sinh z)^2}\right)\\\\ &=\lim_{z\to im\pi}\frac{d}{dz}\left((z-im\pi)^2 \frac{e^{-iz}}{(\sinh z)^2}\right) \end{align}$$ Footnote: The integral of interest diverges at $x=0$!
If an infinite set of sets $\{A_1,A_2,\dots\}$ satisfies the finite intersection property, then $\bigcap_{i=1}^\infty A_i\neq\emptyset$?
Take the collection $A_i = (-\frac1{i},\frac1{i})\backslash \{0\}$. Its intersection is empty, but it has the required property...
Ridge regression to minimize RMSE instead of MSE
It is worthwhile to clarify the answer in the context of regression. For any practical consideration the solution is indeed the same. By minimizing $\| Y - Xc\|_2^2$ or $\| Y - Xc\|_2$ you are looking for a vector $c$ that minimizes the chosen loss function subject to some constraint $ \|\Gamma c \|_2^2 \le t$. In Lagrangian form the problem reads $$ \arg\min \mathcal{L}(\lambda, c) =\arg \min \left( \| Y - Xc\|_2 ^ 2+ \lambda(\| \Gamma c\|_2^2 - t) \right). $$ Note that $\lambda$ is a "dummy" parameter, the so-called regularization parameter. You search (numerically) for the $\lambda$ that minimizes your chosen loss function (MSE or RMSE), and you don't care about its value as such. As such, the optimal $c$ will be the same, while $\lambda$ will change according to the specification of the loss function. Any other combination is irrelevant or incorrect: setting $\lambda$ as a constant and searching for the optimal $c$ is statistically meaningless, and minimizing the MSE but choosing $\lambda$ according to an RMSE loss function is simply incorrect and makes no sense. As such, there is no reason to discuss these cases. Formally, let us take $\Gamma = I$ for the sake of convenience. Minimizing the MSE, the gradient (F.O.C.) is $$ -2X'(Y - Xc) + 2\lambda c = 0, \quad\text{i.e.}\quad (X'X + \lambda I)c = X'Y, $$ so the Ridge estimator of $c$ is $$ \hat{c}(\lambda) = (X'X + \lambda I)^{-1} X'Y. $$ If instead you are minimizing the RMSE then the gradient is $$ -\frac{X'(Y - Xc)}{ \|Y-Xc\|} + 2\lambda c = 0, $$ or $$ X'(Y - Xc) = 2\lambda \|Y-Xc\|\, c, $$ which is non-linear in $c$, hence has no closed-form solution. However, as the square root is a monotone, one-to-one transformation, the solution w.r.t. $c$ will be the same as for $\| \cdot \|_2^2$, while $\lambda$ will change according to the transformation you applied.
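As a numerical illustration (a minimal numpy sketch with simulated data, not part of the original answer): the ridge solution for the squared loss with penalty $\lambda$ satisfies the RMSE first-order condition with the rescaled penalty $\lambda' = \lambda / (2\|Y - Xc\|)$.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    Y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100)
    lam = 2.0
    c = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ Y)  # ridge estimator
    r = np.linalg.norm(Y - X @ c)
    lam_rmse = lam / (2 * r)
    # c satisfies X'(Y - Xc) = 2*lam_rmse*||Y - Xc||*c, the RMSE condition:
    print(np.allclose(X.T @ (Y - X @ c), 2 * lam_rmse * r * c))  # True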
Could the scalar projection be negative?
The scalar projection of $a$ onto $b$ can be defined in the following ways: $$ s = \|a\|\cos{\theta} = \frac{a\cdot b}{\|b\|} $$ This shows that the result can be negative. Geometrically this will occur when $\frac{\pi}{2} < \theta < \frac{3\pi}{2}$. Algebraically it will occur when $a\cdot b < 0$.
What is the legality of the following series?
Each term describes the previous one, reading off its digits in order:

1
11 is "one 1"
21 is "two 1s"
1211 is "one 2, one 1"

and so on.
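For the curious, a short Python sketch (an illustration, not part of the original answer) that generates the sequence:

    from itertools import groupby

    def next_term(s):
        # "Say" the digits of s: run-length encode it.
        return ''.join(str(len(list(g))) + d for d, g in groupby(s))

    t = '1'
    for _ in range(5):
        print(t)        # 1, 11, 21, 1211, 111221
        t = next_term(t)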
If $f(1)$ and $f(i)$ real, then find minimum value of $|a|+|b|$
If $a=a_1+a_2i$ and $b=b_1+b_2i$ then you have that $b_2+a_2=-1, b_2+a_1=1$. So you can pick any $b_1$, which, since you are seeking a minimum, means you can choose $b_1=0$. You get $b=b_2i$ and $a_2=-(b_2+1)$ and $a_1=1-b_2$. So you are trying to minimize: $$\sqrt{a_1^2+a_2^2}+\sqrt{b_1^2+b_2^2}=\sqrt{(-1-b_2)^2+(1-b_2)^2}+|b_2|=\sqrt{2+2b_2^2}+|b_2|$$ Apply usual calculus tricks to minimize $\sqrt{2+2x^2}+x$ with $x\geq 0$. You should get that the minimum is when $x=0$.
Conditional expectation of order statistics.
It should satisfy $\mathsf E(\max\{X,Y\})=\mathsf E(\mathsf E(\max\{X,Y\}\mid X))$, so yes, there is something wrong with your work. Since $X,Y$ are iid uniform $(0;1)$, it is simply: $$\mathsf E(\max\{X,Y\}\mid X) ~{= \int_0^X X~\mathrm d y+\int_X^1 y~\mathrm d y \\= \dfrac{X^2+1}{2}}$$ That is all. PS: for $x\in(0;1)$, the random variable $Z=\max\{x,Y\}$ has distribution function $$F_Z(z) = \begin{cases}0 & z<x\\ z & x\le z<1\\ 1 & z\ge 1,\end{cases}$$ so there is an atom of mass $x$ at $z=x$. Hence $$\mathsf E(Z) = x\cdot x + \int_x^1 z\,\mathrm dz = \frac{x^2+1}{2}.$$ It is the massive point at $Z=x$ that is throwing you off the cliff.
The process to reduce modular congruences?
$15 = 3\cdot 5$. So find the multiplicative inverse of $3$ which is $11$ and of $5$ which is $13$. Then multiply $10 \cdot 11 = 110$. Modulo $16$ that's $110 - 6 \cdot 16 = 14$. Then $14 \cdot 13 = 140 + 42$. Modulo $16$ that's $12 + 10 = 22 = 6 \pmod{16}$. But that takes a while. So notice that $15 = 16 - 1 = -1 \pmod {16}$ and that $-10 = 16 - 6$. So you multiply both sides by $-1$.
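For a quick check of the arithmetic (Python's built-in pow supports modular inverses since version 3.8):

    print(pow(3, -1, 16), pow(5, -1, 16))  # 11 13
    print(10 * 11 * 13 % 16)               # 6, solving 15x = 10 (mod 16)
    print((-10) % 16)                      # 6, via the -1 shortcut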
Binomial Expansion with fractional or negative indices
The Binomial Theorem for negative powers says that for $|x| < 1$ $$(1+x)^{-1} = 1 - x + x^2 + o(x^2)$$ Therefore we have: $$\begin{align} \frac 2{(2x-3)(2x+1)} &= \frac 1{2(2x-3)} - \frac 1{2(2x+1)} = -\frac 16\left(1-\frac 23x\right)^{-1} - \frac 12\left(1+2x \right)^{-1} \\ &= -\frac16\left(1 + \frac 23x + \frac 49x^2\right)-\frac 12\left(1 - 2x + 4x^2 \right) = \boxed{-\frac 23 + \frac 89 x - \frac{56}{27}x^2} \end{align}$$ This holds for $|x| < \frac12$.
$\int_{-2}^2\frac{x^2}{x^4+1} \sin(2x)dx$
How about: $$I(t)=\int_{-2}^2\frac{x^2}{x^4+1}\sin(tx)dx$$ so that your original integral is $I(2)$. Differentiating under the integral sign, $$I'(t)=\int_{-2}^2\frac{x^3\cos(tx)}{x^4+1}dx.$$ Integrating by parts, $$I'(t)=\left[\frac 14\ln(x^4+1)\cos(tx)\right]_{x=-2}^{2}+\frac t4\int_{-2}^2\ln(x^4+1)\sin(tx)dx.$$ The bracket vanishes because $\cos$ is even, and the remaining integrand $\ln(x^4+1)\sin(tx)$ is odd, so $I'(t)=0$ for every $t$. Since $I(0)=0$, we get $I(t)\equiv 0$, and in particular $I(2)=0$. (This is no surprise: the original integrand $\frac{x^2\sin(2x)}{x^4+1}$ is itself an odd function of $x$, so its integral over the symmetric interval $[-2,2]$ vanishes.)
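A numerical check (a small scipy sketch supporting the symmetry argument above):

    import numpy as np
    from scipy.integrate import quad

    val, err = quad(lambda x: x**2 * np.sin(2*x) / (x**4 + 1), -2, 2)
    print(val)  # ~0, up to quadrature error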
$A$ is a set containing $n$ distinct elements.A non zero subset $P$ of $A$ is chosen.
I interpret the question to be about the number of ordered pairs of non-empty subsets of $A$. There are $2^{|A|}-1$ non-empty subsets of $A$, and thus $\left(2^{|A|}-1\right)^2$ ordered pairs of them. Equating this to $225$ and solving for $|A|$ yields $|A|=4$.
Computing the specific relative error
The approximate value of $G$ is $6.67259\times 10^{-11}$, but the real value could be anything in the interval $[6.67174\times 10^{-11},6.67344\times 10^{-11}]$. You do not really know it, but the maximum absolute error would be achieved when it is one of the endpoints. Therefore, the maximum relative error is: $$\delta(\hat x)=\frac{\Delta(\hat x)}{|\hat x|}=\frac{0.00085\times 10^{-11}}{6.67259\times 10^{-11}}\approx13\times 10^{-5}$$
Vector that is tangent to the surface but normal to the boundary
Take $\mathcal H^2=\{(x,y,z)\in \mathbb R^3:z=0\wedge y\ge 0\}.$ Then, for any point $p$ lying on the $x$-axis, $v=-\left(\frac{\partial}{\partial y} \right)_p\in T_p\mathcal H^2$, so $v$ is tangent to the surface but normal to the boundary (the $x$-axis).
Estimating derivatives when given a table
There is no single way to do this. One thing that comes to my mind is to find a polynomial (of 4th degree) that passes through the points in your table. Once you have that polynomial, you may take its derivative and evaluate it to find $f'(7)$. Note: even then the result is an estimate, because you are not given that the function in question happens to be of polynomial nature. It is just an estimate, but I think it would be a good one. Do you know how to find this polynomial? I can update on that if you wish. Let me know!
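Here is a minimal numpy sketch of that idea. The table is not reproduced in this answer, so the data below are made-up placeholder values; only the method matters:

    import numpy as np

    xs = np.array([5.0, 6.0, 7.0, 8.0, 9.0])  # hypothetical x-values
    ys = np.array([2.1, 2.9, 3.4, 4.4, 6.0])  # hypothetical f(x) values
    p = np.polyfit(xs, ys, 4)   # degree-4 interpolant through the 5 points
    print(np.polyval(np.polyder(p), 7.0))  # the estimate of f'(7)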
Integration by substitution
Letting $1-x=u$, so $x=1-u$ and $-dx=du$, our integral becomes $\int(u-1)\sqrt{u}\,du=\int\left(u^{3/2}-u^{1/2}\right)du$, which is easily evaluated.
Other Idea to show an inequality $\dfrac{1}{\sqrt 1}+\dfrac{1}{\sqrt 2}+\dfrac{1}{\sqrt 3}+\cdots+\dfrac{1}{\sqrt n}\geq \sqrt n$
$$\begin{cases}\dfrac{1}{\sqrt 1}\geq \dfrac{1}{\sqrt n}\\+\dfrac{1}{\sqrt 2}\geq \dfrac{1}{\sqrt n}\\+\dfrac{1}{\sqrt 3}\geq \dfrac{1}{\sqrt n}\\ \vdots\\+\dfrac{1}{\sqrt n}\geq \dfrac{1}{\sqrt n}\end{cases} \\\\$$ The sum of the left-hand sides is $\dfrac{1}{\sqrt 1}+\dfrac{1}{\sqrt 2}+\dfrac{1}{\sqrt 3}+\cdots+\dfrac{1}{\sqrt n}$ and the sum of the right-hand sides is $n\times \dfrac{1}{\sqrt n}$, so $$\dfrac{1}{\sqrt 1}+\dfrac{1}{\sqrt 2}+\dfrac{1}{\sqrt 3}+\cdots+\dfrac{1}{\sqrt n} \geq n\cdot\dfrac{1}{\sqrt n}=\dfrac{\sqrt{n^2}}{\sqrt{n}}=\sqrt{n}. \checkmark$$
Suppose $1>a_n>0$ for $n\in \mathbb{N}$. Prove that $\prod_{n=1}^\infty (1-a_n)=0$ if and only if $\sum_{n=1}^\infty a_n=\infty$.
The convention for infinite products is to say that the infinite product diverges to $0$ when the limit of partial products is $0$. To get you on the correct track, here is one side of a correct proof. Start with the well-known inequality $1-x \leqslant e^{-x}$ for $0 \leqslant x < 1$. With $ 0 < a_n < 1$, it follows that for any $m \in \mathbb{N}$, $$0 < P_m = \prod_{n=1}^m(1-a_n) \leqslant \prod_{n=1}^m e^{-a_n} = \exp\left(-\sum_{n=1}^ma_n\right).$$ Hence if $\sum_{n=1}^{\infty}a_n = \infty$ then $\lim_{m \to \infty} P_m = 0.$
ODE- stability of a fixed point $\dot x=0$
No: points near the fixed point are typically not themselves fixed points. The most basic treatment, in the case of the autonomous equation $x'=f(x)$ with differentiable $f$, is to linearize the equation near the fixed point. This amounts to approximating it by the system $(x-x_0)'=A(x-x_0)$, where $x_0$ is the fixed point and $A$ is the Jacobian of $f$ at $x_0$. When all eigenvalues of the linearization matrix $A$ have negative real part, the fixed point of the original system is asymptotically stable. When any eigenvalues of the linearization have positive real part, the fixed point of the original system is unstable. The situation when all eigenvalues have nonpositive real part and at least one has zero real part is subtle and requires different treatments in different problems.
How is the Mean Value Theorem relevant to the formula for Arc length?
It is worth recalling the Riemann integral as $$\int_{a}^{b}f(x)dx=\lim_{\max\Delta x_k \rightarrow 0}\sum_{k=1}^nf(x_k^*)\Delta x_k \tag{1}$$ where $\{x_1,x_2,...,x_{n+1}\}$ is a partition of $[a,b]$ and $x_k^* \in [x_k,x_{k+1}]$. The Euclidean distance between the 2 points $(x_1,f(x_1)),(x_2,f(x_2)), x_1<x_2$, is $$\sqrt{\left(x_2-x_1\right)^2+\left(f(x_2)-f(x_1)\right)^2}=\left(x_2-x_1\right)\sqrt{1+\left(\frac{f(x_2)-f(x_1)}{x_2-x_1}\right)^2}$$ or, using the MVT, $\exists x_1^{*}\in (x_1,x_2)$: $$\sqrt{\left(x_2-x_1\right)^2+\left(f(x_2)-f(x_1)\right)^2}=\left(x_2-x_1\right)\sqrt{1+\left(f'(x_1^{*})\right)^2}=\sqrt{1+\left(f'(x_1^{*})\right)^2}\Delta x_1$$ The arc length is initially approximated by the sum of the distances between $(x_1,f(x_1)),(x_2,f(x_2)),...,(x_{n+1},f(x_{n+1}))$; applying the MVT for each adjacent pair of points: $$\sum_{k=1}^n \sqrt{\left(x_{k+1}-x_{k}\right)^2+\left(f(x_{k+1})-f(x_{k})\right)^2}=\sum_{k=1}^n \sqrt{1+\left(f'(x_k^{*})\right)^2}\Delta x_k=...$$ Thinking of $g(x)=\sqrt{1+\left(f'(x)\right)^2}$, we have $$...=\sum_{k=1}^n g(x_k^{*})\Delta x_k$$ and taking the limit $\max\Delta x_k \rightarrow 0$ we have the Riemann integral $(1)$ for $g(x)$: $$L=\int_{a}^{b}g(x)dx=\int_{a}^{b}\sqrt{1+\left(f'(x)\right)^2}dx$$
Explain the following statements about random variables.
A random variable is just a special type of function, so forget for a moment that it's a random variable and just pretend it's just another function, say $f$. Given a function $f : Y \to Z$ and $B \subseteq Z$, you are probably familiar with this: \begin{align*} f^{-1}(B) = \{y \in Y : f(y) \in B\}\\ \end{align*} Thus, for the first statement, it is simply all the outcomes $\omega \in \Omega$ in the sample space such that $X(\omega) \in E$. For the second and third, they're simply special cases of the first, with $E = \{x\}$ and $E = (-\infty,x]$ respectively. If you would like a plain-language reading of the first statement (the same reasoning applies to the second and third): a random variable $X$ "encodes" outcomes in $\Omega$ in the form of real numbers. For instance, in the case of roulette, I can have $X$ defined as follows: \begin{align*} X = \begin{cases} 1, &\text{if $\omega$ is a red number} \\ -1, &\text{if $\omega$ is a black number} \\ 0, &\text{if $\omega = 0$} \\ \end{cases} \end{align*} Thus, here I am encoding the roulette numbers by their colour. $X^{-1}(E)$, where $E \subseteq \mathbb{R}$, thus gives us the set of outcomes which, when encoded, lie inside the set $E$. For instance, for $E = \{\pm 1\}$, $X^{-1}(E)$ asks for the outcomes whose encoding is non-zero, which would be all numbers on the roulette except $0$.
Expansion of $n$-th order summation
One finds $\frac{m!}{m_1!\,m_2!\cdots m_n!}$. This can be proved by induction, for instance, on $n$.
Why is Chebyshev Bound stronger than Markov if it is an application of Markov?
Suppose $X\ge 0$ with mean $\mu$ and standard deviation $\sigma$. I presume you're talking about the Markov bound $$ \mathbb P[X \ge a] \le \dfrac{\mu}{a} $$ and the Chebyshev bound $$\mathbb P[|X - \mu| \ge k \sigma] \le \frac{1}{k^2}$$ Thus if $a = \mu + k \sigma$ for $k > 0$, $$ \mathbb P[X \ge a] \le \mathbb P[|X-\mu| \ge k \sigma] \le \frac{1}{k^2} = \frac{\sigma^2}{(a-\mu)^2}$$ Thus the Chebyshev bound here is stronger if $$ \dfrac{\sigma^2}{(a-\mu)^2} < \dfrac{\mu}{a}$$ This is equivalent to $$ a^2 \mu - (2 \mu^2 + \sigma^2) a + \mu^3 > 0$$ That's certainly true if $a$ is sufficiently large, but not if $a$ is close to $\mu$: the Chebyshev bound is useless unless $k > 1$ (i.e. $a > \mu + \sigma$), while Markov gives you some useful information as soon as $a > \mu$. Typically (at least in a theoretical context) we're mostly concerned with what happens when $a$ is large, so in such cases Chebyshev is indeed stronger.
construction of entire function by using Runge's Approximation theorem
Your statement of Runge's theorem is false! You're leaving out some important hypotheses. Anyway, if $R_n$ is an increasing sequence of rectangles with union equal to the right half plane and $R_n'$ is an increasing sequence of rectangles with union equal to the left half plane you can use a correct version of Runge's theorem to find polynomials $P_n$ so $|P_n-1|<1/n$ on $R_n$ and $|P_n-(-1)^n|<1/n$ on $R_n'$.
How to represent "not an empty set"?
It is perfectly fine to write $|A|>0$. However, the simplest and most common way to write this in symbols would be $$A\neq\emptyset.$$ Note that you don't want to write $|A|\neq \emptyset$, as it is $A$ itself which you are saying is not the empty set, rather than the cardinality of $A$. (The standard symbol in mathematics for "not equal" is $\neq$, rather than $!{=}$. You can make this symbol in $\LaTeX$ with the command \neq.) As mentioned in user21820's nice answer below, though, it is also very common to just write this in words ("$A$ is not empty" or "$A$ is nonempty") instead of symbols.
Speed of light moving on a wall.
Consider a coordinate system where the car is at the origin, the wall is the line $x=20$, and the beam on the wall is initially at $(20,0)$ (the closest point to the car). Then the position of the beam on the wall is given by $(x(t),y(t))=(R(t) \cos(2 \pi t),R(t) \sin(2 \pi t))$. What is the appropriate choice of $R(t)$? Once you have that, you have $y(t)$ and just need to compute $y'(t)$.
Are there sequences of positive real numbers converge to negative real number?
That cannot happen. Suppose $x_n \to y$, where $y<0$ and $x_n >0$ for all $n$. Consider the interval $(y-\epsilon, y+\epsilon)$ around $y$, where $\epsilon < \frac{|y|}{2}$. In this neighborhood, can any $x_n$ be there?
What norm on $\mathbb C (z)$
I think there is no reasonable norm for $\mathbb C(z)$. But maybe make it a complete metric space like this: Consider elements of $\mathbb C(z)$ to be continuous maps of the Riemann sphere to itself. Use uniform convergence (with respect to a metric for the Riemann sphere).
Computing probabilities involving increasing event outcomes
The previous answers flesh out the standard take quite well. Here's another slant on these kinds of problems. Treat the die faces as a polynomial: $(\frac{a}{512}+\frac{211 b}{512}+\frac{300 c}{512})$ Squaring this represents rolling two dice (or one die twice), cubing it three, etc. Squared, we get: $(\frac{a^2}{262144}+\frac{211 a b}{131072}+\frac{75 a c}{32768}+\frac{44521 b^2}{262144}+\frac{15825 b c}{32768}+\frac{5625 c^2}{16384})$ This "encodes" the possible outcomes. The coefficients (like $\frac{211}{131072}$) are the probabilities of the different possible outcomes, and the variables and exponents encode the two faces seen and their multiplicity (e.g., $a^2$ means both were $a$). So, adding up all the coefficients for terms with an $a$ component gives you the probability of getting at least one $a$. Nicely, if your problem were two dice (or more) with differing face probabilities, or differing numbers of faces, you simply multiply the polynomials representing each die as needed for the rolls, and the same applies.
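Here is a sympy sketch (an illustration of the technique, not part of the original answer) reproducing the squared polynomial and reading off one such probability:

    from sympy import symbols, expand, Rational

    a, b, c = symbols('a b c')
    die = (a + 211*b + 300*c) * Rational(1, 512)
    two_rolls = expand(die**2)
    print(two_rolls)
    # P(at least one 'a') = 1 minus the terms without any 'a':
    print(1 - two_rolls.subs(a, 0).subs({b: 1, c: 1}))  # 1 - (511/512)**2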
Confused about set theory terminology
Firstly, it isn't true that $\{\emptyset,\{1\}\} \subset \{(\emptyset,\emptyset),(\emptyset,\{1\}),(\{1\},\{1\})\}$, so this might have caused some confusion. However, both of these sets are important here, as we'll see. The set we want to define a relation on here is $\mathcal{P}(\{1\})$, the power set of $\{1\}$ (which is $\{\emptyset,\{1\}\}$). In this case, the relation is supposed to be $\subseteq$, so we want $(x,y)\in R$ exactly when $x\subset y$, where $x,y\in \mathcal{P}(\{1\})$. There aren't very many elements in $\mathcal{P}(\{1\})$, so we can state all of the inclusions: $${} \quad \quad \emptyset \subseteq \emptyset, \quad \emptyset \subseteq \{1\}, \quad \{1\}\subseteq \{1\}$$ Representing these as ordered pairs, we have $$R=\{(\emptyset,\emptyset),\quad (\emptyset,\{1\}),\quad (\{1\},\{1\})\},$$ which is exactly the set they listed above.
Sufficient condition for an homogeneous ideal to be radical.
$I$ is homogeneous if and only if for every $f\in I$, the homogeneous components of $f$ belong to $I$. Since the product of homogeneous polynomials is homogeneous, you can ensure that $h_i^n\in I$ for all $i$.
Discrete Mathematics-logic
A "formal proof" must be: assume $(a_1,a_2)\preceq(b_1,b_2)$. This implies (bi-conditional from left to right): $\text { (1) } [(a_1\neq b_1) ∧ (a_1\leq b_1)] ∨ [(a_1=b_1) ∧ (a_2\leq b_2)]$. Proof by Cases with $(a_1=b_1) \lor \lnot (a_1=b_1)$: (i) if $(a_1=b_1)$, then by Addition we have $(a_1=b_1) \lor (a_1 \lt b_1)$ which is abbreviated as $(a_1 \le b_1)$. (ii) If $\lnot (a_1=b_1)$, we have $\lnot [(a_1=b_1)∧(a_2\leq b_2)]$. Thus, applying Disjunctive Syllogism to (1) we have: $[(a_1\neq b_1) ∧ (a_1\leq b_1)]$, from which, by Simplification: $(a_1\leq b_1)$. Thus, in both cases (i) and (ii) we have proved that $(a_1\leq b_1)$ holds.
Spivak's "Differential Geometry" Volume 1, Chapter 1 ,Problem #20 part (b)
For those not looking at the diagram, the "infinite jail cell" is the boundary of a tubular neighborhood of the integer grid (including horizontal and vertical lines) in the $z = 0$ plane of $xyz$-space. If we take a tubular neighborhood of the line segments from $(0,0)$ to $(1, 0)$ to $(1,1)$ to $(0, 1)$ to $(0,0)$, we get something toroidal. The "cylinder" described by Spivak consists of half of each meridian of this torus -- the half that's closest to the point $(1/2, 1/2)$. Let's call that cylinder $C$; its boundary consists of two squared-off circles in the $z = \pm 1/2$ planes. I believe that what Spivak is suggesting is that you next consider the part of the infinite jail cell that's between two larger squared-off circles, essentially ones that are parallel to the $z=0$-plane squares with corners $(-1, -1), (2, -1), (2, 2), (-1, 2)$, offset by $1/2$ in the positive and negative $z$ directions. Call this larger part $D$. It basically consists of 9 cells, the center one of which is $C$. The boundary of $D - C$ consists of four circles; the set $D-C$ is clearly a 2-manifold-with-boundary, hence homeomorphic to a $k$-holed torus with four disks removed for some $k$. (I'm pretty sure that $k$ is $8$, but I could easily be off by one or two.) Now consider a $k+2$-holed torus, with the $k+2$ holes arranged in a line, like the infinite torus drawn by Spivak, and cut off the left and right half-tori. You end up with a $k$-holed-surface-with-boundary, having four circle-boundaries. This is clearly homeomorphic to $D - C$. And a single cylinder is homeomorphic to $C$, so the union, along common boundaries, gets you something that's homeomorphic to $D$. You now proceed by induction: drawing a 5 x 5 pair of squared-off circles, you see that the region $E$ between them and the 3x3 object $D$ is also homeomorphic to a $p$-holed chain with four circular ends, etc. In other words, the region between adjacent circle-pairs is always homeomorphic to an $s$-holed-torus-with-boundary, having four boundary components. By the classification theorem for surfaces, this is the same as a truncated piece of a linear chain of tori. You thus build up a sequence of homeomorphisms $h_i : U_i \rightarrow V_i$ where $U_i$ is a chain of $(2i + 1)^2 + 1$ tori with the last half-torus removed, and $V_i$ is the part of the infinite jail-cell enclosed by the two squared-off circles of edge-length $i$. The nice thing about this sequence is that $h_{i+1} = h_i$ on $U_i$, so the limit is well-defined and is a homeomorphism as well. ==== An alternative: Let's just talk, from here on, about squares on the integer grid, OK? So what I did in the first solution was start with a 1x1 square, move to a concentric 3x3 square, and so on. As an alternative, you can take, as your second square, a 2x2 square whose lower-left corner is the origin. The difference $D - C$ in this case clearly is homeomorphic to a 3-holed torus with some boundary, and the same sort of union argument applies. Now you take, as your third square, a 3x3 one whose lower-left is at $(-1, -1)$. Again, the difference $E - D$ is clearly homeomorphic to a 5-holed torus with some boundary. And in general, you can extend like this, alternating between adding a band to the northeast and adding a band to the southwest, gradually handling all the cells of the infinite jail-window. The advantage of the second approach is the obvious homeomorphism between the added strip and a $2i+1$-holed torus with some disks removed. The disadvantage is that the gluing-map is more complex. I think it's about a wash.
Short question about the proof of Cantor nested sets theorem
The conclusion of Cantor's theorem is that the infinite intersection $\bigcap_{n=1}^\infty [a_n,b_n]$ is not empty (in fact, this intersection is the interval $[c_a,c_b]$). As the example by @LuizCordeiro demonstrates, it is not necessary that $c_a = c_b$.

One of the proofs of the Bolzano-Weierstraß theorem constructs a convergent subsequence of a given bounded sequence $(x_k)_{k=1}^\infty \subset [a,b]$ by successively choosing an interval $[a_n,b_n]$ containing infinitely many terms of $(x_k)_{k=1}^\infty$ so that $[a_n,b_n]$ is one of the two halves of the previous interval $[a_{n-1},b_{n-1}]$. This construction explicitly ensures that $b_n-a_n = 2^{-n}(b-a) \to 0$ as $n \to \infty$, so that we can later conclude that $c_a = c_b$ is the limit of the constructed subsequence.

Below is a proof of Cantor's theorem.

The sequence $(a_n)_{n=1}^\infty$ is monotonically increasing and bounded from above by $b_1$, since $a_n \leq b_n \leq b_1$. Hence, $(a_n)_{n=1}^\infty$ converges to a finite limit $c_a \in \mathbb{R}$. Recall that $c_a:=\lim_{n\to\infty}a_n$ is in fact the least upper bound $\sup_n a_n$ of the set $\{a_n\;|\;n\geq1\}\subset \mathbb{R}$. Indeed, if we assume that $a_n>c_a$ for some $n$, then putting $\varepsilon=\frac12(a_n-c_a)>0$, we observe that for all $m \geq n$ we would have $a_m \geq a_n > c_a+ \varepsilon$, a contradiction. Also, if we assume that some $s<c_a$ is an upper bound for $(a_n)_{n=1}^\infty$, then putting $\varepsilon=\frac12(c_a-s)>0$, we obtain $a_n\leq s < c_a-\varepsilon$ for all $n$; again a contradiction.

Similarly, we have $c_b := \lim_{n\to\infty}b_n = \inf_n b_n$, the greatest lower bound of the set $\{b_n\;|\;n\geq 1\}$.

Now, observe that $a_m \leq b_n$ for arbitrary $m$ and $n$, not only when $m=n$. Indeed, for $p = \max\{m,n\}$ we have $a_m \leq a_p \leq b_p \leq b_n$.

Fix an arbitrary index $n\geq1$ and consider the inequalities $a_m \leq b_n$ for all $m \geq 1$. They imply that $b_n$ is an upper bound for $(a_m)_{m=1}^\infty$. Hence, $c_a = \sup_m a_m \leq b_n$. Since $n$ was chosen arbitrarily, the last inequality holds for all $n \geq 1$. Thus $c_a$ is a lower bound for $(b_n)_{n=1}^\infty$, and hence $c_a \leq \inf_n b_n = c_b$.

Thus, the interval $[c_a,c_b]$ is not empty. Since $a_n \leq c_a \leq c_b \leq b_n$, we have $[c_a,c_b] \subset [a_n,b_n]$ for all $n$, or in other words, $[c_a,c_b] \subset \bigcap_{n=1}^\infty [a_n,b_n]$. Therefore, the latter intersection is non-empty, Q.E.D.

In fact, we also have the reverse inclusion $\bigcap_{n=1}^\infty [a_n,b_n] \subset [c_a,c_b]$, since every element $x$ of the intersection on the left is an upper bound for $(a_n)_{n=1}^\infty$ and simultaneously a lower bound for $(b_n)_{n=1}^\infty$, and hence $c_a \leq x \leq c_b$. Thus, $\bigcap_{n=1}^\infty [a_n,b_n] = [c_a,c_b].$
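For intuition, here is a small numerical sketch (not part of the proof) of the halving construction from the Bolzano-Weierstraß argument above; the point `target` is a stand-in for "the half containing infinitely many terms":

```python
# Nested intervals [a_n, b_n] with b_n - a_n = 2^{-n}(b - a); the endpoints
# squeeze onto a common limit c_a = c_b.
a, b = 0.0, 1.0
target = 2 ** 0.5 / 2   # hypothetical point that we keep inside every interval

for n in range(50):
    mid = (a + b) / 2
    if target <= mid:   # keep the half containing `target`
        b = mid
    else:
        a = mid

print(a, b)      # both approximately 0.70710678...
print(b - a)     # about 2**-50
```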
Module in exact sequence isomorphism
The given information is not sufficient to conclude; one needs to know the maps in the short exact sequence. Case 1 corresponds to $$ 0 \to R/(p) \oplus 0 \to R/(p) \oplus R/(p) \to 0 \oplus R/(p) \to 0,$$ and Case 2 corresponds to $$ 0 \to (p)/(p^2) \to R/(p^2) \to R/(p) \to 0 $$ with the isomorphism $(p)/(p^2) \cong R/(p)$.
Limit involving floor function summation
Hint: Since $$u-1<\lfloor u\rfloor\le u$$ we can write $${\sum_{i=1}^n(i\sqrt{2007}-1)\over \sum_{i=1}^ni\sqrt{2008}}\le{\sum_{i=1}^n\lfloor i\sqrt{2007}\rfloor\over \sum_{i=1}^n\lfloor i\sqrt{2008}\rfloor} \le {\sum_{i=1}^ni\sqrt{2007}\over \sum_{i=1}^n(i\sqrt{2008}-1)}$$
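If you want to convince yourself numerically before finishing the squeeze (both bounds tend to $\sqrt{2007/2008}$, since $\sum_{i=1}^n i$ grows like $n^2/2$ while the correction is only $n$), a quick check:

```python
# Numerical check of the squeezed ratio (a sketch; double precision is
# accurate enough here, though floor() near exact integers can be off by 1).
from math import floor, sqrt

def ratio(n):
    num = sum(floor(i * sqrt(2007)) for i in range(1, n + 1))
    den = sum(floor(i * sqrt(2008)) for i in range(1, n + 1))
    return num / den

print(ratio(10_000))       # ~0.999751...
print(sqrt(2007 / 2008))   # ~0.999751...
```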
Evaluate integral over path using parametrisation
A line integral of a field $\vec{F}$ along a path $\vec{r}(t)$, where $t \in [a,b]$, is given by $$\int \vec{F}\cdot d\vec{r} = \int_a^b \vec{F}(\vec{r}(t))\cdot \vec{r}'(t)\, dt $$ The position vector is $$ \vec{r}(t) = x(t) \hat{i} + y(t)\hat{j} + z(t)\hat{k}=t \hat{i} + t\hat{j} + t\hat{k}$$ So you need to calculate $$\vec{F}(\vec{r}(t)) = e^{-x(t)}\hat{i}+e^{-y(t)}\hat{j}+e^{-z(t)}\hat{k} = e^{-t}\hat{i}+e^{-t}\hat{j}+e^{-t}\hat{k} = e^{-t}(\hat{i}+\hat{j}+\hat{k})$$ with the velocity vector $$\vec{r}'(t) = \hat{i}+\hat{j}+\hat{k} $$ Then calculate the dot product and integrate over the bounds of $t$: $$ \int_0^1 (e^{-t}(\hat{i}+\hat{j}+\hat{k})) \cdot (\hat{i}+\hat{j}+\hat{k})\,dt = \int_0^1 3 e^{-t} dt $$ I think you should be able to calculate the rest from there.
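If you want to check the final number, here is a quick symbolic evaluation (a sketch, assuming the path $\vec r(t)=(t,t,t)$, $t\in[0,1]$, as above):

```python
# Evaluate the remaining integral  \int_0^1 3 e^{-t} dt  with sympy.
import sympy as sp

t = sp.symbols('t')
value = sp.integrate(3 * sp.exp(-t), (t, 0, 1))
print(value, float(value))   # 3 - 3*exp(-1), approximately 1.8964
```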
optimization with ellipsoidal constraints
From my working here using Lagrange multipliers, I have shown that the optimal solution is $$x = \frac{P^{-1}c}{\sqrt{c^TP^{-1}c}}$$ Hence $$c^Tx=\frac{c^TP^{-1}c}{\sqrt{c^TP^{-1}c}}=\sqrt{c^TP^{-1}c}$$
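A quick numerical sanity check of the closed form (assuming the constraint set is the ellipsoid $\{x : x^TPx \le 1\}$ with $P$ symmetric positive definite):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = A @ A.T + 3 * np.eye(3)              # a random SPD matrix
c = rng.standard_normal(3)

Pinv_c = np.linalg.solve(P, c)
x_star = Pinv_c / np.sqrt(c @ Pinv_c)

print(x_star @ P @ x_star)               # 1.0: the maximizer sits on the boundary
print(c @ x_star, np.sqrt(c @ Pinv_c))   # optimal value equals sqrt(c^T P^{-1} c)
```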
Given a function $f(x,y)$, can two different level curves of $f$ intersect? Why or why not?
Two different level curves can, by definition, not intersect. One level curve is defined by $f(x,y)=c_1$, the other by $f(x,y)=c_2$ with $c_1\neq c_2$ (otherwise they are the same curve). If there existed a point on both level curves, we would have $f(x,y)=c_1$ and $f(x,y)=c_2$ at that point, so $c_1=f(x,y)=c_2$, which cannot be true. It is possible, however, for one level curve to be composed of more than one 'line'. For example, for $f(x,y)=xy$, the level curve for $f(x,y)=0$ is composed of both the line $x=0$ and the line $y=0$. This does not mean, however, that two different level curves intersect: both lines belong to the same level curve.
How do I find the number of distinct elements in a certain set?
When $t$ is less than all the $x$s all the functions are $0$. When $t$ is greater than all of them they are all $1$. As $t$ increases it converts each entry from $0$ to $1$ at a different point, and once an entry is converted it stays $1$, so each element of $A$ is distinct and there are $n$ of them.
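To make this concrete, here is a toy version (my assumption about the setup: $A$ consists of the $0/1$ vectors $(\mathbf 1_{x_1\le t},\dots,\mathbf 1_{x_n\le t})$ as $t$ varies, with the $x_i$ distinct):

```python
xs = sorted([0.3, 1.7, 2.0, 5.1])
# one representative t between each pair of consecutive thresholds,
# plus one t beyond the largest; each crossing flips one more 0 to a 1
ts = [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
vectors = {tuple(1 if x <= t else 0 for x in xs) for t in ts}
print(sorted(vectors))   # n = 4 distinct monotone 0/1 vectors
```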
Fundamental group of the connected sum of $n$ tori with $m$ points removed
Draw the polygonal representation of your $n$-holed torus. Put your $k$ "missing" points in a small disk around the center of your $4n$-gon. Draw a circle $S_1$ around the $k$ deleted points; draw a circle $S_2$ outside that. Then the portion of the figure outside $S_1$ is topologically an $n$-holed torus with $1$ point removed, so you can use the previous computation to figure out generators and relations for its fundamental group. The portion INSIDE $S_2$ is a disk with $k$ holes; you can compute $\pi_1$ for that as well. And then you can use Seifert-van Kampen to compute $\pi_1$ of the union of the two.
How should I write this function $f:\mathcal P_{fin}(\omega^{<\omega})\to \mathcal P_{fin}(\omega^{<\omega})$?
In your written definition of the function, there are several things that are unclear; for example, I don't really understand the emphasis on $s_1$ over the other terms, or how you're interpreting "$\bigcup$." I also don't understand your claim that it maps sets to singletons, given your example $\{\omega+1,\omega+2,\omega+3,\omega^2\cdot 3\}\mapsto\{1,\omega\cdot 3\}$.

Based on your example in the comments, though, I suspect that what you're asking for is the following. I'll first build it in stages, rather than describing it all at once, and then give an all-at-once definition below. Incidentally, it does not always terminate; e.g. $f(\{\omega^n:n\in\mathbb{N}\})=\{\omega^n: n\in\mathbb{N}\}$ if I understand it correctly. Maybe you want to look at $\mathcal{P}_{fin}(\omega^{<\omega})$ - the set of finite sets of finite strings of naturals - instead of $\mathcal{P}(\omega^{<\omega})$? Also, the term "contraction mapping" has a technical meaning in the study of metric spaces, which this could conceivably fit, but only after you've defined an appropriate metric structure.

First, let $C: \omega^{<\omega}\rightarrow\omega^\omega$ be the Cantor normal form function, sending a finite string of naturals to the ordinal $<\omega^\omega$ which it represents. (Note that $C$ is noninjective - for example, it sends both $\langle 0,1\rangle$ and $\langle 1\rangle$ to $1$ - so if you want an injection, it will be a slight modification of the obvious $C$.)

Next, we have the truncation function $$trunc:\omega^{<\omega}\setminus\{\langle\rangle\}\rightarrow\omega^{<\omega}: \langle a_1,..., a_n\rangle\mapsto \langle a_1,..., a_{n-1}\rangle.$$ Note that this isn't defined on the empty string; if you prefer, you could define it to be the identity on the empty string, that is, set $trunc(\langle\rangle)=\langle\rangle$.

Next, we have the pointwise version of your $f$: $$g:\omega^{<\omega}\rightarrow\omega^{<\omega}: C(\sigma)\mapsto C(trunc(\sigma)).$$ This does what you want on a single ordinal, given by a finite string of naturals; e.g. $$g(\omega^2+\omega)=g(C\langle1,1,0\rangle)=C(trunc(\langle 1,1,0\rangle))=\omega+1.$$ Note that we have to show that $g$ is actually well-defined - that is, if $C(\alpha)=C(\beta)$ then $C(trunc(\alpha))=C(trunc(\beta))$ - but this is easy.

Finally, your $f$ is the "set version" of $g$: $$f:\mathcal{P}(\omega^{<\omega})\rightarrow\mathcal{P}(\omega^{<\omega}): S\mapsto\{g(s): s\in S\}.$$

OK, now here's the all-at-once definition: $f$ is the function from $\mathcal{P}(\omega^{<\omega})$ to $\mathcal{P}(\omega^{<\omega})$ which sends a set $X$ to the set $$\{\omega^n\cdot s_{n+1}+\omega^{n-1}\cdot s_n+...+\omega^0\cdot s_1: \exists t(\omega^{n+1}\cdot s_{n+1}+\omega^{n}\cdot s_n+...+\omega^{1}\cdot s_1+t\in X)\}.$$
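Here is the staged definition as a short Python sketch, identifying an ordinal below $\omega^\omega$ with its coefficient string $\langle a_1,\dots,a_n\rangle$ (so $g$ and $trunc$ coincide under this encoding; the tuple representation is my choice, not part of the question):

```python
def trunc(s):
    """Drop the last coefficient; identity on the empty string."""
    return s[:-1] if s else s

def g(s):
    """Pointwise map on coefficient strings, e.g.
    omega^2 + omega = <1,1,0>  |->  <1,1> = omega + 1."""
    return trunc(s)

def f(S):
    """The 'set version' of g, acting on a finite set of strings."""
    return {g(s) for s in S}

print(f({(1, 1, 0), (3,)}))   # {(1, 1), ()}, i.e. {omega + 1, 0}
```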
Middle School Level - Probability
There are $9 \choose 2$ ways seven coins can fall into three boxes, and $5 \choose 1$ ways for four coins to fall into two boxes, so the probability is $$ {5\choose1}/{9\choose2}={5\over36} $$
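The two stars-and-bars counts are easy to confirm by brute enumeration:

```python
from itertools import product
from math import comb

ways_7_in_3 = sum(1 for c in product(range(8), repeat=3) if sum(c) == 7)
ways_4_in_2 = sum(1 for c in product(range(5), repeat=2) if sum(c) == 4)

print(ways_7_in_3, comb(9, 2))    # 36 36
print(ways_4_in_2, comb(5, 1))    # 5 5
print(ways_4_in_2 / ways_7_in_3)  # 0.13888... = 5/36
```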
Find whether : $\sum_{ I \subset \mathbb{N}} e^{-\sqrt{S(I)}}$ converges
To make things precise, when $|I| = \infty$, we redefine the value of $S(I)$ as $+\infty$ and $e^{-\sqrt{S(I)}} = 0$. After this change, the summand inside the sum $\sum\limits_{I\subset \mathbb{N}} e^{-\sqrt{S(I)}}$ will be non-zero for countably many $I$. Since all summands are non-negative, the sum is well defined and takes value in $[0,\infty]$. Furthermore, we can compute it by enumerating those $I$ with $|I| < \infty$ in arbitrary order and get the same result. As a result, $$\begin{align}\sum_{I\subset \mathbb{N}} e^{-\sqrt{S(I)}} \stackrel{def}{=} \sum_{I\subset \mathbb{N}, |I| < \infty} e^{-\sqrt{S(I)}} &= 2\sum_{I\subset \mathbb{Z}_{+}, |I| < \infty} e^{-\sqrt{S(I)}} = 2\sum_{n=0}^\infty \sum_{I \subset \mathbb{Z}_{+}, S(I) = n} e^{-\sqrt{n}}\\ &= 2\sum_{n=0}^\infty q(n) e^{-\sqrt{n}} \end{align} $$ where $q(n) = | \{ I \subset \mathbb{Z}_{+} : S(I) = n \} |$ is the number of partitions of the integer $n$ into distinct parts. The OGF of $q(n)$ equals $$\sum_{n=0}^\infty q(n) z^n = \prod_{k=1}^\infty ( 1 + z^k )$$ The closed form of $q(n)$ is not known. However, we do know that for large $n$,${}^{\color{blue}{[1]}}$ $$q(n) \sim \frac{3^{3/4}}{12 n^{3/4}} \exp\left(\pi\sqrt{\frac{n}{3}}\right) $$ Since $\alpha \stackrel{def}{=} \frac{\pi}{\sqrt{3}} - 1 > 0$, the sub-sum over those $I$ with $S(I) = n$ blows up like $e^{\alpha\sqrt{n}}$. From this, we can deduce $$\sum_{I\subset \mathbb{N}} e^{-\sqrt{S(I)}} = \infty$$

Refs

$\color{blue}{[1]}$ - Philippe Flajolet, Robert Sedgewick, Analytic Combinatorics, Cambridge University Press (1st ed., 2009). Formula found at VIII.6 Saddle-point asymptotics / Integer partitions.
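For a numerical illustration of the blow-up, one can compute $q(n)$ directly from the product $\prod_{k\ge1}(1+z^k)$ and compare with the cited asymptotic (a sketch; agreement is up to the slowly varying error of the asymptotic):

```python
from math import exp, sqrt, pi

N = 400
q = [1] + [0] * N            # q[0] = 1 for the empty set
for k in range(1, N + 1):    # multiply the OGF by (1 + z^k)
    for n in range(N, k - 1, -1):
        q[n] += q[n - k]

for n in (100, 200, 400):
    asym = 3 ** 0.75 / (12 * n ** 0.75) * exp(pi * sqrt(n / 3))
    print(n, q[n], round(asym), q[n] * exp(-sqrt(n)))
# q(n) tracks the asymptotic, and q(n) e^{-sqrt(n)} grows like e^{alpha sqrt(n)}
```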
Finite free graded modules and the grading of their duals
As YACP has already said you are correct on $1$. Here's some elaboration about $2$. Let's let $\hom(A, B)$ be ungraded homomorphisms between left $S$-modules $A$ and $B$. For any $i$ let's let $\hom^i(A, B)$ be homogeneous maps of degree $i$ from $A$ to $B$. So left $S$-module homomorphisms $\phi\colon A \to B$ satisfying $\phi(A_n) \subseteq B_{i + n}$ (where $A_n$ is the $n^\text{th}$ graded piece). Now let $\hom^\bullet(A, B) = \bigoplus_{i \in \mathbb Z}\hom^i(A, B)$. If we declare $(\phi\cdot s)(a) = s\phi(a)$ then $\hom^\bullet(A, B)$ is a right $S$-module. Moreover, if we declare $\hom^i(A, B)$ to be the $i^\text{th}$ graded piece of $\hom^\bullet(A, B)$ then $\hom^\bullet(A, B)$ is a graded right $S$-module. Now we naturally have $\hom^\bullet(A, B) \subseteq \hom(A, B)$ but in general these need not be equal. For example let $S = \mathbb Z$ with everything in degree $0$. Let $A = B = \mathbb Z[x]$, but we will let $x^i$ have degree $0$ in $A$ and have degree $i$ in $B$. Now the identity map of $\mathbb Z[x]$ is an element of $\hom(A, B)$, but if you try and write it as a sum of homogeneous maps you'll find you need a map in every positive degree, so it is not an element of $\hom^\bullet(A, B)$. So $\hom^\bullet(A, B)$ is a graded right $S$-module which in general is not equal to $\hom(A, B)$. But, when $A$ is finitely generated they are equal. I'll let you figure out the proof, it's not to hard. I'll just give you the hint that the finitely many generators can only land in finitely many different degrees. Finally, let me note that as $S$-modules we have $\hom(S, S) \simeq S$. One can check that if we choose the gradings on these $S$'s to be $\hom(S(v_i), S) \simeq S(-v_i)$ then this map is a graded map (the element $1$ on the right has degree $v_i$ and corresponds to the identity map on the left. This identity map sends $1 \in S(v_i)$ which is of degree $-v_i$ to $1 \in S$ which is of degree $0$, so it is homogeneous of degree $v_i$ as well.)
Why does the residue method not work straight out of the box here?
The reason is actually that $(\cos{z}-1)/z^2$, the obvious first guess for the complex function, diverges as $|\Im(z)| \to \infty$. (Basically, either $e^{iz}$ or $e^{-iz}$ blows up, depending on the direction you choose.) Therefore you can't use the residue theorem in the usual way, where you persuade the integral over the semicircle to die off as its radius gets large. On the other hand, $(e^{iz}-1)/z^2$ doesn't have this problem: it is $O(e^{-\Im(z)})$ as $\Im(z)\to\infty$, and Jordan's lemma (or similar) tells us the semicircle integral dies off and we just have to look at the residues.
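For what it's worth, a numerical check (assuming the integral in question is $\int_{-\infty}^\infty \frac{\cos x-1}{x^2}\,dx$, whose value via the $e^{iz}$ trick is $-\pi$):

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # removable singularity at 0: the limit of the integrand there is -1/2
    return -0.5 if x == 0.0 else (np.cos(x) - 1.0) / x ** 2

val, err = quad(f, -np.inf, np.inf, limit=500)
print(val, -np.pi)   # both approximately -3.14159
```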
Solve the inequation for $x$
Well, let's see if we can simplify your expression first: $$ (x-1)^{2005} x^{2006} (x+1)^{2007} \le 0 $$ $$(x^2 - 1)^{2005}x^{2006}(x+1)^{2} \le 0$$ Think first: what qualifies the expression to be $\le 0$? It must be $0$ or a negative number, correct? Well, our expression is $0$ if $x = -1, 0, 1$. Now let's look at when our expression yields a negative number. Look at the powers and see which factors could be negative: $$(x^2 - 1)^{2005}x^{2006}(x+1)^{2} \le 0$$ We can see that only $(x^2 - 1)^{2005}$ can be negative, since the other $2$ factors have even powers, meaning they can only be non-negative. Therefore, when is $(x^2 - 1)^{2005} \le 0$? Well, $$x^2 - 1 \le 0$$ $$x^2 \le 1$$ $$-1 \le x \le 1$$ And thus we arrive at our answer.
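A quick spot check of the solution set (using small exponents of the same parity, since $2005$, $2006$, $2007$ only matter through being odd/even/odd):

```python
expr = lambda x: (x - 1) ** 5 * x ** 6 * (x + 1) ** 7

for x in (-2, -1, -0.5, 0, 0.5, 1, 2):
    print(x, expr(x) <= 0)   # True exactly for x in [-1, 1]
```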
Develop the Taylor series for the function $f(x)={\frac{1-x}{\sqrt{1+x}}}$
The generalized binomial theorem says that $$ (1+x)^{-1/2}=\sum_{n\ge0}\binom{-1/2}{n}x^n \tag{1} $$ (the radius of convergence is discussed later on). Then $$ \frac{1-x}{\sqrt{1+x}}= (1-x)\sum_{n\ge0}\binom{-1/2}{n}x^n $$ Now you can distribute and collect terms: \begin{align} \frac{1-x}{\sqrt{1+x}} &=(1-x)\sum_{n\ge0}\binom{-1/2}{n}x^n \\[6px] &=\sum_{n\ge0}\binom{-1/2}{n}x^n-\sum_{n\ge0}\binom{-1/2}{n}x^{n+1} \\[6px] &=1+\sum_{n\ge1}\binom{-1/2}{n}x^n-\sum_{n\ge1}\binom{-1/2}{n-1}x^{n} \\[6px] &=1+\sum_{n\ge1}\left(\binom{-1/2}{n}-\binom{-1/2}{n-1}\right)x^n \end{align} The radius of convergence of a power series doesn't change when it's multiplied by a polynomial (verify it), so it's sufficient to look at the radius of convergence of $(1)$. With the ratio test, $$ \left|\frac{\dbinom{-1/2}{n+1}x^{n+1}}{\dbinom{-1/2}{n}x^{n}}\right|= \frac{n+1/2}{n+1}|x| $$ because $$ \binom{k}{n}=\frac{k(k-1)\dotsm(k-n+1)}{n!} $$ so $$ \frac{\dbinom{k}{n+1}}{\dbinom{k}{n}}= \frac{k(k-1)\dotsm(k-n)}{(n+1)!} \frac{n!}{k(k-1)\dotsm(k-n+1)}= \frac{k-n}{n+1} $$ which for $k=-1/2$ has absolute value $\frac{n+1/2}{n+1}$. Since the limit at $\infty$ of the ratio is $|x|$, the radius of convergence is $1$.
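One can confirm the first few coefficients, and the coefficient formula $\binom{-1/2}{n}-\binom{-1/2}{n-1}$, with sympy:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series((1 - x) / sp.sqrt(1 + x), x, 0, 4))
# 1 - 3*x/2 + 7*x**2/8 - 11*x**3/16 + O(x**4)

for n in range(1, 4):
    b = sp.binomial(sp.Rational(-1, 2), n) - sp.binomial(sp.Rational(-1, 2), n - 1)
    print(n, b)   # -3/2, 7/8, -11/16
```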
We have four events A, B, C and D, and we are trying to find the probability that exactly one event occurs.
There are many possible formulas. The appropriate one depends on the information you are given. I assume that you know the basic Inclusion-Exclusion Principle for computing $\Pr(A\cup B\cup C\cup D)$. A modification of that idea will take care of your problem. When we find $$\Pr(A)+\Pr(B)+\Pr(C)+\Pr(D),$$ the situations in which two of $A$, $B$, $C$, and $D$ occur have been counted twice. But we want to count them zero times. So we want to subtract $$2\left(\Pr(A\cap B)+\Pr(A\cap C)+\Pr(A\cap D)+\Pr(B\cap C)+\Pr(B\cap D)+\Pr(C\cap D)\right).$$ But we have been overenthusiastic in our subtractions, for we subtracted too often the probability that three of $A$, $B$, $C$, $D$ occur. Use a picture (Venn diagram) to decide what we must add back. And there will also be an issue about the probability that all four of $A$, $B$, $C$, and $D$ occur.
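If you carry this out, the coefficients that come out are $1, -2, 3, -4$; here is a brute-force check of the resulting formula on a random finite probability space (a sketch, not a proof):

```python
from itertools import combinations
from random import Random

rng = Random(1)
omega = range(1000)    # uniform outcomes
events = [set(rng.sample(omega, rng.randint(100, 600))) for _ in range(4)]
P = lambda E: len(E) / len(omega)

exactly_one = sum(1 for w in omega
                  if sum(w in A for A in events) == 1) / len(omega)

S = [sum(P(set.intersection(*c)) for c in combinations(events, k))
     for k in (1, 2, 3, 4)]
print(exactly_one, S[0] - 2 * S[1] + 3 * S[2] - 4 * S[3])   # the two agree
```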
Calculating following probabilities when $X_i\sim N(\mu,\sigma^2)$
Assuming that the $X_i$ are independent here: $\frac{4S^2}{\sigma^2}$ has a chi-squared distribution with $4$ degrees of freedom; $T=\frac{\overline{X}-\mu}{S}$ has a Student's t-distribution with $n-1=5-1=4$ degrees of freedom; and $T^{2}$ has an F-distribution with parameters $1$ and $4$. Moreover, $\overline X$ and $S^2$ are independent, so that the outcome in $(3)$ is actually the product of the outcomes in $(1)$ and $(2)$. Hint on $(1)$: you can rewrite $P\left(\overline{X}-2.059S\leq\mu\leq\overline{X}+2.059S\right)=P\left(T^{2}\leq2.059^{2}\right)$.
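A Monte Carlo sketch of the chi-squared fact (that $\frac{4S^2}{\sigma^2}=\frac{(n-1)S^2}{\sigma^2}$ is chi-squared with $4$ degrees of freedom when $n=5$); the parameter values are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n = 3.0, 2.0, 5
X = rng.normal(mu, sigma, size=(200_000, n))
W = (n - 1) * X.var(axis=1, ddof=1) / sigma ** 2

for w in (1.0, 4.0, 9.0):   # empirical vs. theoretical chi-squared(4) cdf
    print(w, (W <= w).mean(), stats.chi2.cdf(w, df=n - 1))
```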
Prove or disprove $(1-y)^x(1+xy)<1$ for $x>1$, $0<y<1$.
Using Bernoulli's Inequality, $$(1-y)(1+xy)^{1/x}\leqslant (1-y)(1+y)=1-y^2&lt;1$$ Raise both sides to the power $x$ to get your desired inequality.
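A spot check of the resulting inequality $(1-y)^x(1+xy)<1$ on a grid (assuming, per the question, $x>1$ and $0<y<1$):

```python
for x in (1.5, 2.0, 5.0, 20.0):
    for y in (0.1, 0.5, 0.9):
        v = (1 - y) ** x * (1 + x * y)
        assert v < 1                     # the claimed bound holds
        print(f"x={x}, y={y}: {v:.6f}")
```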
Propositional logic: subjective statements
Both sentences are totally normal propositions in that they can either be true or false.

In natural language use, propositions always have to be evaluated w.r.t. a current utterance, i.e. including the utterance time, utterance place, speaker, addressee, and so on. Mathematics usually doesn't care about this context dependence and automatically assumes that such statements trivially can only be valid in a certain time and place, but the fact that a sentence might have a different outcome in a different situation doesn't change the proposition's ability to have a truth value in the first place. In more advanced logics, you could also introduce symbols and semantic evaluation functions for time (e.g. a sentence like "The coffee can WAS empty" is true at a point of time $t$ if and only if there exists a point in time $t'$ such that $t'$ stands in a before-relation to $t$ and the proposition is true at $t'$, and so on), but this only adds another factor on which to evaluate a statement, without impacting a proposition's ability to carry a truth value at all.

This means that statement A) obviously can have different truth values in different situations (i.e. different points in space and time), but since it is always evaluated w.r.t. a specific situation, this context dependence (which applies to the vast majority of utterances) is rather a triviality than a serious issue - from a mathematical point of view, it is pretty clear whether it is true or false under the given circumstances.

As for B), although the evaluation of whether this is a true matter of fact is rather vague, one can certainly make judgements about whether you are convinced that you feel good or not: either you uttered something truthful, or you made a false statement if in fact you do not feel good. That the statement is about a subjective feeling is already included in the sentence, and the sentence's truth value only depends on whether the statement about your subjective perception is true or not; it does not depend on anyone's judgement whether you actually feel good or not, because this is indeed subjective and not really a matter of propositional logic.

Things would look different if it were about "Coffee tastes good": here the truth value judgement is fully dependent on the evaluator's opinion, and yes, here logic cannot make objective statements. But it doesn't really want to either; that judgements about certain states of affairs in our world are often opinion-based is a trivial truth, and propositional logic is not interested in making any attempt to set an (impossible) objective world view on any proposition once and for all.

So yes, subjective statements cannot be objectively evaluated in propositional logic without explicitly encoding in your model what is assumed to be true and what is not. But this is not the matter here - the sentence B) is only about the truth of the subjective perception (i.e. whether the speaker's subjective feeling actually is good or bad) and not a subjective perception itself, and as long as it is true that the speaker feels good, the proposition is true and otherwise false. Thus, the sentence can very naturally have a truth value, no matter how vague such statements may be.
Bounded positive sequence
Hint: The limit being zero means that for some $N$ you have $\frac{a_{n+1}}{a_n}<1$ for all $n>N$.
Solutions of $10^{x} -5^{x}-2^{x}=y^{2}-1$.
Here I discuss a certain case of $10^x-2^x-5^x=y^2-1$.

If $p= x+1$ is prime, we may write:

$10^{p-1} \equiv 1 \pmod p$

$2^{p-1} \equiv 1 \pmod p$

$5^{p-1} \equiv 1 \pmod p$

$10^{p-1}-2^{p-1}-5^{p-1}\equiv -1 \equiv p-1 \pmod p$

$10^{p-1}-2^{p-1}-5^{p-1}= t \cdot p -1= y^2-1 \Rightarrow y^2=t\cdot p$

This is possible only if $t=p$, which gives $y=\pm p$. For example, $x=1 \Rightarrow t=p=2 \Rightarrow y=\pm 2$. In this case the question reduces to $10^x-2^x-5^x=(x+1)^2-1$, or $10^{p-1}-2^{p-1}-5^{p-1}=p^2-1$, which are 'single unknown' equations.

We may also write $10^{p-1}-2^{p-1}-5^{p-1}= t \cdot p + p -1= y^2-1 \Rightarrow y=t(p+1)=t(x+2)$. Since $p+1$ is not prime, if $t$ completes the factors of $p+1$ to a square, then we have an integer solution for $y$. It is not known whether, in addition to $2$, there are other primes that would give integer solutions for $y$.
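A quick brute-force search over small $x$ ($x=1$, $y=\pm2$ shows up; the search will report any other small solutions):

```python
from math import isqrt

for x in range(1, 60):
    v = 10 ** x - 2 ** x - 5 ** x + 1   # should be a perfect square y^2
    if v >= 0 and isqrt(v) ** 2 == v:
        print(x, isqrt(v))              # x = 1 gives y = 2
```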
Why is the transpose of an interpolation matrix the "inverse" interpolation matrix?
To me this looks more like an artifact of the bidiagonal structure of the interpolation matrix than anything suggestive of a deep relationship between the matrix and its transpose. The nonzero entries are all equal and lie on the two central diagonals. The transpose has the same bidiagonal structure, so aside from the first and last rows, you have the same averaging operations in each row that you had in the original matrix.
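To make the structure concrete, here is a toy averaging matrix of this shape (sizes and weights are my own choice for illustration, not from the original question):

```python
import numpy as np

n = 5
A = np.zeros((n - 1, n))
for i in range(n - 1):
    A[i, i] = A[i, i + 1] = 0.5   # equal entries on the two central diagonals

print(A)
print(A.T)   # same bidiagonal shape: interior rows again average two entries
```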
Show that a tree (a connected, nonempty finite graph with no cycles) with $n$ edges has $n+1$ vertices.
Perhaps a cleaner way of presenting your argument is as follows: we'll build your graph vertex-by-vertex. So pick a vertex somewhere in your tree, since it's non-empty. This gives us 1 vertex and no edges, as we'd hope! Call this $\Gamma_1$, and call your tree $T$. Now suppose we've already got $\Gamma_n$, a connected, non-empty finite graph with $n$ vertices and $n-1$ edges. If $\Gamma_n$ is all of $T$, then we're done. If not, then there are some vertices or edges in $T$ we haven't added to $\Gamma_n$ yet. We can never reach the case where only edges are missing, because $T$ has no cycles ($\Gamma_n$ is itself a tree, so if we add another edge between two of its vertices then we've made a cycle). So we can focus on just the case where there are more vertices to add. There must be a vertex that has an edge joining it to $\Gamma_n$, since $T$ is connected. So add this vertex and this edge to $\Gamma_n$ to form $\Gamma_{n+1}$, which has $n+1$ vertices and $n$ edges. Since $T$ is finite, this process eventually terminates, with $\Gamma_n$ being all of $T$; see the sketch below.
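Here is the induction as a tiny program, on an example tree (the edge list is made up for illustration):

```python
tree_edges = {(0, 1), (1, 2), (1, 3), (3, 4)}   # a 5-vertex tree T

vertices, edges = {0}, set()                     # Gamma_1: one vertex, no edges
while True:
    frontier = [(u, v) for (u, v) in tree_edges
                if (u in vertices) != (v in vertices)]
    if not frontier:
        break                                    # Gamma_n is all of T
    u, v = frontier[0]                           # an edge joining T to Gamma_n
    vertices |= {u, v}
    edges.add((u, v))
    print(len(vertices), len(edges))             # always (n+1, n): 2 1, 3 2, ...
```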