Can someone help me with this proof?
It's generally a better idea to assume it's true for $n$, then plug in $n+1$ and see what you can do, rather than writing it down for $n$ and then trying to somehow turn it into the $n+1$ case. So assume it's true for $n$ and write down the sum for $n+1$: $$\sum_{k=1}^{n+1}(-1)^{k-1}k^2$$ The nice thing about sums and induction is that the case for $n$ easily breaks away (another reason why writing down the $n+1$ case is usually a better approach): $$=\left(\sum_{k=1}^{n}(-1)^{k-1}k^2\right)+(-1)^n(n+1)^2=(-1)^{n-1}\frac{n(n+1)}{2}+(-1)^n(n+1)^2$$ Now, the expression you want will have a factor of $(-1)^n$, so factor one of those out and see what happens: $$=(-1)^{n}\left(-\frac{n(n+1)}{2}+(n+1)^2\right)$$ Now just follow your nose...
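As a quick numerical sanity check of the closed form $\sum_{k=1}^{n}(-1)^{k-1}k^2=(-1)^{n-1}\frac{n(n+1)}{2}$ (not a substitute for the induction, of course):

```python
# Check sum_{k=1}^n (-1)^(k-1) k^2 against the closed form for small n.
def alt_square_sum(n):
    return sum((-1) ** (k - 1) * k * k for k in range(1, n + 1))

def closed_form(n):
    # n(n+1) is always even, so integer division is exact
    return (-1) ** (n - 1) * n * (n + 1) // 2

for n in range(1, 50):
    assert alt_square_sum(n) == closed_form(n)
print("identity holds for n = 1..49")
```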
Prove that a second order diff. eq. has only two linearly independent solutions.
Assume that this refers to an open $t$-interval $\Omega$ with $0\in\Omega$. According to the standard existence theorem (extended to second-order ODEs) there are two solutions $y_1$, $y_2:\>\Omega\to{\mathbb R}$ realizing the particular initial values $$y_1(0)=1, \quad y_1'(0)=0,\qquad{\rm resp.}\qquad y_2(0)=0,\quad y_2'(0)=1\ .$$ The existence theorem guarantees this first for some neighborhood of $0$, but in the case of a linear ODE the two solutions can actually be extended to all of $\Omega$. The two functions $y_1$ and $y_2$ are obviously linearly independent. It follows that the solution space ${\cal L}$, which is a vector space in our case, has dimension $\geq 2$. Now let $t\mapsto y(t)$ be an arbitrary solution. Then the function $$y_*(t):=y(t)- y(0)y_1(t)-y'(0)y_2(t)$$ is also a solution, and in addition satisfies $y_*(0)=y_*'(0)=0$. But the zero solution satisfies these two conditions as well, and by the uniqueness part of the main theorem about ODEs it follows that in fact $y_*(t)\equiv0$. This proves that $y_1$, $y_2$ generate all of ${\cal L}$, whence ${\rm dim}({\cal L})\leq2$ as well.
Theory Book to go with Demidovich's Exercises
It is "Analysis Basics" by Grigorii Fichtenholz; the ISBN on my Russian hardcover is 5-9511-0010-0. "Differential and Integral Calculus" by Piskonov is, as far as the proofs are concerned, a subset of Fichtenholz. Demidovich's exercise book contains proof exercises analogous to those in "Analysis Basics" which have no counterpart in Piskonov.
How many solutions for equation, with Inequation (Combinatorics)
Addition is commutative and associative, so there's nothing in the expression that distinguishes any $x_i$ from the others. Any fact about them should therefore hold true under a permutation of the indices. In particular, the number of solutions for which $$x_1 + x_2 + x_3 > x_4 + x_5 + x_6$$ should equal the number of solutions for which $$x_4 + x_5 + x_6 > x_1 + x_2 + x_3$$ (which can be re-written as $x_1 + x_2 + x_3 < x_4 + x_5 + x_6$). The reason why you need $D(3,12)^2$ is that when $x_1 + x_2 + x_3 = x_4 + x_5 + x_6$, we know that $x_1 + x_2 + x_3 = 12$ and $x_4 + x_5 + x_6 = 12$. Because we know how many solutions there are to each of these, we can multiply the two counts to get how many possible combinations there are for all of $x_1, \ldots, x_6$ in this case.
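A brute-force check of both claims. I'm assuming here (the question itself isn't quoted) that each $x_i$ ranges over $\{1,\dots,6\}$ and the underlying problem fixes $x_1+\cdots+x_6=24$, which is what forces both halves of a tie to equal $12$; the notation $D(3,12)$ is consistent with that reading:

```python
from itertools import product

def D(k, s):
    """Number of k-tuples over {1,...,6} summing to s (brute force)."""
    return sum(1 for t in product(range(1, 7), repeat=k) if sum(t) == s)

greater = equal = total = 0
for t in product(range(1, 7), repeat=6):
    if sum(t) != 24:          # assumed global constraint
        continue
    total += 1
    a, b = sum(t[:3]), sum(t[3:])
    if a > b:
        greater += 1
    elif a == b:
        equal += 1

assert equal == D(3, 12) ** 2        # ties come from 12 + 12 splits
assert total - equal == 2 * greater  # ">" and "<" counts are symmetric
print(D(3, 12), equal)               # 25 and 625
```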
Solution of Simultaneous Equations
From the second equation we get $$C=\frac{A-B}{2}.$$ Then $$Q+S\sin\left(\frac{A-B}{2}\right)=T\cos(B),$$ so $$T=\frac{Q+S\sin\left(\frac{A-B}{2}\right)}{\cos(B)}.$$ Substituting this expression for $T$, we get $S$ from $$S\cos\left(\frac{A-B}{2}\right)=\tan (B)\left(Q+S\sin\left(\frac{A-B}{2}\right)\right).$$ From here we get $S$, then $T$, and then $R$.
examples of infinitely differentiable function
A continuous function has this property if and only if there is a point $x$ for which $|f(x)| = 1$ and $|f(y)| \leq 1$ for all $y$ in a neighborhood of $x$.
I have a question about homotopy of the unit circumference with a point
$\Gamma$ doesn't stay inside $\Bbb C\setminus\{0\}$. So no homotopy, and no contradiction. For the question itself, how about $\Gamma(s,t)=e^{2\pi i s(1-t)}$.
Gradient and Hessian of $\sum_i \log \left(1 + \exp\left\{ -t_i \left(w^T x_i\right)\right\} \right) + \mu \|w \|_2^2$?
The gradient looks correct, but the Hessian doesn't. Here's how I did the calculations. In order to write the function in purely matrix form, first note that the $\{x_i\}$ vectors are columns of a single matrix $X$. Next use $(\circ)$ to denote the elementwise/Hadamard product and $(:)$ to denote the trace/Frobenius product, i.e. $$A:B = {\rm Tr}(A^TB)$$ Define the following variables. $$\eqalign{ a &= t\circ X^Tw &\implies da = t\circ X^Tdw \cr b &= \exp(-a) &\implies db = -b\circ da \cr p &= \exp(a) &\implies dp = p\circ da \implies 1=b\circ p \cr c &= \log(1+b) &\implies dc = \frac{db}{1+b} \cr }$$ Write the function in terms of these variables. Then calculate its differential and back-substitute variables until we arrive at the gradient with respect to $w$. $$\eqalign{ f &= \mu\,w:w + 1:c \cr df &= 2\mu\,w:dw + 1:dc \cr &= 2\mu\,w:dw + \frac{1}{1+b}:db \cr &= 2\mu\,w:dw - \frac{1}{1+b}:b\circ da \cr &= 2\mu\,w:dw - \frac{b}{1+b}:t\circ X^Tdw \cr &= 2\mu\,w:dw - X\Big(\frac{t\circ b}{1+b}\Big):dw \cr &= \bigg(2\mu\,w - X\Big(\frac{t}{p+1}\Big)\bigg):dw \cr g = \frac{\partial f}{\partial w} &= 2\mu\,w - X\Big(\frac{t}{1+p}\Big) \cr }$$ Now find the differential and derivative of $g$ (the derivative of the gradient is the Hessian). $$\eqalign{ dg &= 2\mu\,dw + X\Big(\frac{t\circ dp}{(1+p)\circ(1+p)}\Big) \cr &= 2\mu\,dw + X\Big(\frac{t\circ p\circ da}{1+2p+p\circ p}\Big) \cr &= 2\mu\,dw + X\Big(\frac{t\circ p\circ t\circ X^Tdw}{1+2p+p\circ p}\Big) \cr }$$ Replace the Hadamard products with diagonal matrices, e.g. $$P = {\rm Diag}(p),\quad T = {\rm Diag}(t),\quad I = {\rm Diag}(1),\qquad Px = p\circ x$$ Therefore $$\eqalign{ dg &= \Big(2\mu I + X(I+2P+P^2)^{-1}T^2PX^T\Big)\,dw \cr H = \frac{\partial g}{\partial w} &= 2\mu I + X(I+2P+P^2)^{-1}T^2PX^T \cr }$$
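A finite-difference check of the final formulas, on random test data (the matrix $X$, labels $t$, weight $w$ and $\mu$ below are made up for the check; $(I+2P+P^2)^{-1}T^2P$ is the same as ${\rm Diag}(t^2\circ p/(1+p)^2)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6
X = rng.normal(size=(n, m))            # columns are the x_i
t = rng.choice([-1.0, 1.0], size=m)
mu = 0.3
w0 = rng.normal(size=n)

def f(w):
    a = t * (X.T @ w)
    return np.sum(np.log1p(np.exp(-a))) + mu * w @ w

def grad(w):
    p = np.exp(t * (X.T @ w))
    return 2 * mu * w - X @ (t / (1 + p))

def hess(w):
    p = np.exp(t * (X.T @ w))
    return 2 * mu * np.eye(n) + X @ np.diag(t**2 * p / (1 + p) ** 2) @ X.T

eps = 1e-6
E = np.eye(n)
g_num = np.array([(f(w0 + eps * e) - f(w0 - eps * e)) / (2 * eps) for e in E])
H_num = np.array([(grad(w0 + eps * e) - grad(w0 - eps * e)) / (2 * eps) for e in E])

assert np.allclose(grad(w0), g_num, atol=1e-5)
assert np.allclose(hess(w0), H_num, atol=1e-4)
print("gradient and Hessian formulas check out")
```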
Inverse Fourier transform of Gaussian
The Gaussian $e^{-\alpha x^{2}}$ satisfies the differential equation $$ \frac{df}{dx} = -2\alpha x f,\;\;\; f(0)=1. $$ The Fourier transform turns differentiation into multiplication, and multiplication into differentiation, so you get back the same differential equation with different constants. For example, if $f(x)=e^{-\alpha x^{2}}$, then \begin{align} \frac{d}{dk}\hat{f}(k) & =\frac{d}{dk}\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\alpha x^{2}}e^{-ikx}dx \\ & = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\alpha x^{2}}e^{-ikx}(-ix)dx \\ & = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}(-2\alpha xe^{-\alpha x^{2}})\left(\frac{i}{2\alpha}e^{-ikx}\right)dx \\ & = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\left(\frac{d}{dx}e^{-\alpha x^{2}}\right)\left(\frac{i}{2\alpha}e^{-ikx}\right)dx \\ & = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\alpha x^{2}}\left(-\frac{d}{dx}\frac{i}{2\alpha}e^{-ikx}\right)dx \\ & = -\frac{k}{2\alpha}\hat{f}(k). \end{align} And $$ \hat{f}(0) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\alpha x^{2}}dx. $$ Therefore $$ \hat{f}(k) = \left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\alpha x^{2}}dx\right]e^{-k^{2}/4\alpha}. $$ Determining the multiplier constant is normally done with a trick using polar coordinates: \begin{align} \left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-\alpha x^{2}}dx\right]^{2} & = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-\alpha x^{2}}e^{-\alpha y^{2}}dxdy \\ & = \frac{1}{2\pi}\int_{0}^{2\pi}\int_{0}^{\infty}e^{-\alpha r^{2}}rdrd\theta \\ & = \int_{0}^{\infty}e^{-\alpha r^{2}}rdr \\ & = -\frac{1}{2\alpha}\int_{0}^{\infty}\frac{d}{dr}e^{-\alpha r^{2}}dr \\ & = \frac{1}{2\alpha}. \end{align} Finally, $$ \hat{f}(k) = \frac{1}{\sqrt{2\alpha}}e^{-k^{2}/4\alpha}. $$
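A numeric cross-check of the final formula, using a midpoint-rule approximation of the transform with the same convention $\hat f(k)=\frac{1}{\sqrt{2\pi}}\int f(x)e^{-ikx}\,dx$ (the grid parameters are arbitrary choices of mine):

```python
import math

def fhat(k, alpha, L=12.0, N=200_000):
    """Midpoint-rule approximation of (1/sqrt(2*pi)) * int e^{-a x^2} e^{-ikx} dx."""
    h = 2 * L / N
    s = 0.0
    for i in range(N):
        x = -L + (i + 0.5) * h
        s += math.exp(-alpha * x * x) * math.cos(k * x)  # odd sine part integrates to 0
    return s * h / math.sqrt(2 * math.pi)

alpha = 0.7
for k in (0.0, 0.5, 1.3):
    exact = math.exp(-k * k / (4 * alpha)) / math.sqrt(2 * alpha)
    assert abs(fhat(k, alpha) - exact) < 1e-8
print("matches e^{-k^2/(4 alpha)} / sqrt(2 alpha)")
```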
Find the least number of balls that must be taken from the bag..
For (a), let $A$ be the event that at least one $2$ is drawn in $n$ draws. Then $$ P(A)=1-P(A^c)=1-\left(\frac{4}{6}\right)^n $$ We want $P(A)\geq 0.95$ which you can solve for $n$. For the second question let $X, Y, Z$ be the number of balls numbered $1$, $2$, $3$ drawn from the $8$ draws respectively. Then $X, Y, Z$ are binomially distributed (since the balls are replaced before being drawn). In particular, it is given that $$ EX=4.8=8p_1 $$ $$ \text{Var}(Y)=1.5=8p_2(1-p_2) $$ where $p_i$ is the probability of drawing a ball numbered $i$ on a single trial for $i=1,2,3$. You can solve for $p_1, p_2$ and hence $p_3$ which will yield the answer to (b.)
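For part (a), a two-line search for the smallest $n$ (the $4/6$ is taken from the formula above):

```python
# Smallest n with P(at least one "2" in n draws) >= 0.95,
# i.e. 1 - (4/6)**n >= 0.95.
n = 1
while 1 - (4 / 6) ** n < 0.95:
    n += 1
print(n)  # -> 8
```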
Density and Probabilities of Geometric Brownian Motion
a) $P(S_5\leq x)=P\left(Z\leq \frac{\ln x-25}{3\sqrt{5}}\right)=\int_{-\infty}^{\frac{\ln x-25}{3\sqrt{5}}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}dy=\int_{-\infty}^{\frac{\ln x-25}{3\sqrt{5}}}I(y)dy$, where $I(y)$ denotes the standard normal density (the integrand above). So, thanks to the fundamental theorem of calculus, $f_{S_5}(x)=I\left(\frac{\ln x-25}{3\sqrt{5}}\right)\frac{\partial}{\partial x}\left(\frac{\ln x-25}{3\sqrt{5}}\right)$ is the density of $S_5$. c) $P(S_3-S_1<0) = P(e^{3(B_3-B_1)} < e^{-6}) = P(B_3-B_1<-3)$. But $(B_t)_t$ is a Brownian motion, so $B_t-B_s \sim N(0,t-s)$ for $0\leq s\leq t$, and hence $Y=B_3-B_1 \sim N(0,2)$. Then $P(Y<-3)=P\left(Z<-\frac{3}{\sqrt{2}}\right)$ with $Z\sim N(0,1)$.
An example of an algebraic loop which has different L and R inverses?
Pointer: https://math.stackexchange.com/a/1444602/123905 The referenced MSE answer exhibits a five element example of a non-associative loop having at least one element with distinct left- and right-inverses.
Question about proof extending measure to complete measure
Hint. You have $$(E \cup N^\prime)^c=E^c \cap (N^\prime)^c$$ and both sets on the RHS belong to $M$.
Find the value of $\cos 40^\circ(2\cos 80^\circ-1)$
$$ \cos(40°)\cdot(2\cdot \cos(80°)-1)\\ = \cos(40°)\cdot(4\cos^2(40°)-3)\\ = 4\cos^3(40°)-3\cos(40°)\\ = \cos(120°)\\ = -\frac{1}{2} $$ Just look how simple this solution is; why would you not want to use this approach, unless you want to prove the triple-angle identity from scratch? Or $$ \cos(40°)\cdot(2\cdot \cos(80°)-1)\\ = 2\cdot \cos(40°) \cdot \cos(80°) - \cos(40°)\\ = \frac{2\cdot \sin(40°)\cdot \cos(40°) \cdot \cos(80°)}{\sin(40 °)} - \cos(40°)\\ = \frac{\sin(20°)}{2\sin(40°)} - \cos(40°)\\ = \frac{1}{4\cos(20°)} - \cos(40°)\\ = \frac{1-4\cos(40°)\cos(20°)}{4\cos(20°)}\\ $$ And this mess is so unnecessary that I'll let you continue, if you wish.
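A one-line numeric confirmation of the value $-\frac12$:

```python
import math

deg = math.pi / 180
val = math.cos(40 * deg) * (2 * math.cos(80 * deg) - 1)
print(val)  # -0.5 up to floating-point rounding
assert math.isclose(val, -0.5, abs_tol=1e-12)
```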
Factor out (m+1) in the following so that the final answer is $\frac{(2m+1) (m+2) (m+1)} {6}$
The expression in the title can't be the simplified answer, because as a test, if we substitute $m=0$ in your original (you could use any number but $0$ is convenient) we get $$\dfrac {(2(0)+1)(0+1)+6(0+1)^2}{6}=\dfrac 76$$ Substituting $m=0$ in the expression in the title yields $$\dfrac {(2(0)+1)(0+2)(0+1)}{6} = \dfrac 26$$ So the $2$ expressions cannot be equal. What you can do to simplify your original problem is the following: Both terms in the numerator, $(2m+1)(m+1)$ and $6(m+1)^2$, have a common factor of $(m+1)$, so we can factor that out $$\dfrac {(m+1)[(2m+1)+6(m+1)]}{6}$$ Now you should distribute the $6$ in the numerator and combine like terms $$\dfrac {(m+1)(2m+1+6m+6)}{6}$$ $$\dfrac {(m+1)(8m+7)}{6}$$ If you want to keep all integers, this is as far as you can go.
Decompose solution of pde in harmonic and non-harmonic part
Hopefully you're familiar with basic elliptic theory (the Lax-Milgram theorem and Sobolev spaces are all you really need). The equation for $w_1$ is simply Laplace's equation with Dirichlet boundary conditions $w$, and likewise the equation for $w_2$ is Poisson's equation with source term $f(w)$ and zero boundary condition. Since we have already fixed a solution $w$ to (1), there exists such a decomposition so long as $w$ and $f(w)$ are regular enough for the usual elliptic existence theory to apply; e.g. $w \in H^1(\Omega)$, $f(w) \in H^{-1}(\Omega)$. In fact, all you really need is one of the above conditions: if for example $w \in H^1(\Omega)$ then we get a $w_1$ solving one equation, and $w_2 := w - w_1$ immediately solves the second. Since the boundary condition for (1) is $w=g$ on $\partial\Omega$, this reduces to just needing $g \in H^1(\Omega)$; hopefully this is required to solve (1) in the first place? If not then you may need some facts about the regularity of solutions to (1).
Conjugacy of factors and semidirect products
After playing with this for a while, I think the following is a counter-example: Let $$G = \mathbb{Z}/4\mathbb{Z} \rtimes_\varphi \mathbb{Z}/2\mathbb{Z} = \langle x, y \mid x^4 = y^2 = 1\,, yxy^{-1} = x^3\rangle\,,$$ and take $N = \langle x \rangle$, $H = \langle y \rangle$ and $K = \langle xy \rangle$. We have both $G \simeq N \rtimes H$ and $G \simeq N \rtimes K$. However, any conjugate of $y$ by $x^i \in N$ satisfies: $$x^iyx^{-i} = x^ix^{-3i}y = x^{-2i}y\,.$$ Hence, $K$ is not a conjugate of $H$ by an element of $N$.
Prove that there exist uncountable family of subsets of $\mathbb{N}$ which is linearly (totally) ordered by relation of inclusion.
Hint: Fix a bijection $f \colon \mathbb N \to \mathbb Q$. For each $x \in \mathbb R$ let $$ C_x := \{ q \in \mathbb Q \mid q \le x \}, $$ where $\le$ is the natural order of $\mathbb R$. Show that $(\{ C_x \mid x \in \mathbb R \}; \subseteq )$ is a linear order, that $C_x \neq C_y$ for $x \neq y$ and conclude, by pulling the $C_x$ back via $f$ $(\dagger)$, that there is an uncountable family $\{ D_x \mid x \in \mathbb R \}$ of subsets $D_x \subseteq \mathbb N$ which is also linearly ordered by $\subseteq$. $(\dagger)$ I'm intentionally vague here.
Graphing solutions to the Schrodinger equation for a periodic potential
I think what Ian said is right: your function should only be plotted for negative $z$. Also, in your Wolfram Alpha plot, what you meant to do was set the plot range to $-1..1$; instead you plotted the constants $-1$ and $1$ on top of your function, and since the range got too big, Wolfram Alpha gave up.
$\binom{n}{k} : \binom{n}{k+1} : \binom{n}{k+2} = a : b : c$
Sketch: Suppose $\binom{n}{k},\binom{n}{k+1},\binom{n}{k+2}$ are in ratio $1:2:3$. Then $\frac{n-k}{k+1}=2$ and $\frac{n-k-1}{k+2}=\frac{3}{2}$. Solving these two equations gives $n=14$ and $k=4$.
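The solution is easy to confirm with `math.comb`:

```python
from math import comb

a, b, c = comb(14, 4), comb(14, 5), comb(14, 6)
print(a, b, c)                     # 1001 2002 3003
assert b == 2 * a and c == 3 * a   # ratio 1 : 2 : 3
```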
Axiomatic proof help?
This is how the proof would go if you know the field axioms of the real numbers. For a general field $F$ the reciprocal of $x \neq 0$ is the multiplicative inverse, denoted $x^{-1}$; in $\mathbb{R}$ this is the usual reciprocal $\frac1x$. Please only use this if you know these properties; I can make edits if you explain which axioms you do have. $\mathbb{R}$ is a field. $(1)$ You know that if $x>y$ then $\dfrac{1}{x}<\dfrac{1}{y}$ for $x,y \not = 0$. $(2)$ $a,b \in \mathbb{R}_{>0}$. $(3)$ $a<b \Rightarrow a-b<0 \Rightarrow -1(a-b)>0 \Rightarrow -a-(-b)>0 \Rightarrow -a+b>0 \Rightarrow -a>-b$. $(4)$ Apply statement $(1)$ to the last inequality.
Why does $A \times B \cap \operatorname{diag} X^2 = \emptyset$ imply $A \cap B = \emptyset$?
If $x \in A \cap B$, then clearly $(x,x) \in A \times B$ and $(x,x) \in \operatorname{diag} X^2$, so $(A \times B) \cap \operatorname{diag} X^2 \neq \emptyset$. In addition, if $(a,b) \in (A \times B) \cap \operatorname{diag} X^2$, then we must have $a=b$, hence $A \cap B \neq \emptyset$, so in fact it is an equivalence.
integrate function with change of variable
The substitutions will work fine...but your evaluation is problematic. Most problematic is the fact that you are integrating each factor of a product in the integrand and expressing the result as the product of the integrated factors, which you cannot do (unless you are using, say, integration by parts, which proceeds much differently). I.e. $$\int [f(x)\cdot g(x)]\,dx \;\neq \;\int f(x) \,dx \cdot \int g(x)\,dx$$ So, let's start from the point after which we've substituted: $$\int \underbrace{(u - 1)^2}_{\text{expand}} \cdot u^{1/2} \; du \; = \;\int \underbrace{(u^2 - 2u + 1)u^{1/2}}_{\text{distribute}} \,\;du = \int \left(u^{5/2} - 2u^{3/2} + u^{1/2}\right) \,du$$ Now integrate, and then back-substitute.
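Carrying out the last integration term by term gives $\frac27 u^{7/2}-\frac45 u^{5/2}+\frac23 u^{3/2}+C$ (this closed form is my own completion of the final step); differentiating it numerically recovers the integrand:

```python
import math

def integrand(u):
    return (u - 1) ** 2 * math.sqrt(u)

def antiderivative(u):
    return (2 / 7) * u ** 3.5 - (4 / 5) * u ** 2.5 + (2 / 3) * u ** 1.5

# central differences of the antiderivative should reproduce the integrand
h = 1e-6
for u in (0.5, 1.0, 2.0, 3.7):
    approx = (antiderivative(u + h) - antiderivative(u - h)) / (2 * h)
    assert math.isclose(approx, integrand(u), rel_tol=1e-6, abs_tol=1e-6)
print("antiderivative verified")
```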
Geometry - Tangent circles
First of all, from previous parts, $\beta = \beta_1 = ... = \beta_6$. Following part of the given solution, $\triangle MAE \sim \triangle MTA$. This leads to $\dfrac {MA}{MT} = \dfrac {EM}{AM}$, which by part (a) further means $\dfrac {MI}{MT} = \dfrac {ME}{MI}$. The newly formed ratio, together with the common angle $\tau$, gives $\triangle MEI \sim \triangle MIT$. This further means $\lambda = \lambda_1$, so $\angle 1 = \tau + \lambda = \tau + \lambda_1 = \angle 2 = \angle 3$. The result follows from the converse of the angle in alternate segment theorem.
Is every dual space with strong topology locally convex?
"Does this mean that every dual space with this topology is locally convex?" Yes! "Or am I missing something?" No, your reasoning is correct. Another way to see it: the strong topology is a polar topology (see the book Topological Vector Spaces by Lawrence Narici and Edward Beckenstein, Example 8.5.5), and every polar topology is locally convex (see Section 8.5, Polar Topologies, in the same book, or Example 11.2.5).
Geometric intuition for the concept of analytical function
Consider a map $f\colon\mathbb{C}\to\mathbb{C}$ which is differentiable when considered as a map $\mathbb{R}^2\to\mathbb{R}^2$. Then its derivative at any point is a linear map $\mathbb{R}^2\to\mathbb{R}^2$, or if you wish (and I do), an $\mathbb{R}$-linear map $\mathbb{C}\to\mathbb{C}$. Analyticity requires this map to be $\mathbb{C}$-linear, i.e., multiplication by a complex number. Geometrically, this requires the map to be the composition of a uniform scaling and a rotation. So there you have it: An analytic map needs to look like scaling and rotation around every point in its domain. This is equivalent to conformality, i.e., preservation of angles.
Finding an Expression for $\sum_{i=1}^{n}{\frac{i}{(i+1)!}}$
$$\sum_{i=1}^{n}{\frac{i}{(i+1)!}}=\sum_{i=1}^{n}{\frac{(i+1)-1}{(i+1)!}}=\sum_{i=1}^{n}\left({\frac{1}{i!}}-{\frac{1}{(i+1)!}}\right)=\frac{1}{1!}-\frac{1}{(n+1)!}$$
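The telescoped form $1-\frac{1}{(n+1)!}$ can be verified exactly with rational arithmetic:

```python
from fractions import Fraction
from math import factorial

for n in range(1, 20):
    s = sum(Fraction(i, factorial(i + 1)) for i in range(1, n + 1))
    assert s == 1 - Fraction(1, factorial(n + 1))
print("telescoped form verified exactly for n = 1..19")
```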
Find all solutions of the equation z.
Plugging your $z_1$ in the equation, we get $$\left(\frac{1+i}2\right)^3=\left(\frac{-1+i}2\right)^3,$$ which cannot be true. The solutions of $$3z^2+3z+1=0$$ are $$\frac{-3\pm\sqrt{9-12}}6=\frac{-3\pm i\sqrt3}{6}.$$ For a more "trigonometric" solution, $$\left(1+\frac1z\right)^3=1=e^{i2k\pi}$$ so that $$z=\frac1{e^{i2k\pi/3}-1},$$ for $k=1,2$.
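The two descriptions of the solutions can be checked against each other numerically, and each root verified to satisfy $(1+1/z)^3=1$:

```python
import cmath

# roots from the quadratic 3z^2 + 3z + 1 = 0
quad = sorted([(-3 + 1j * 3 ** 0.5) / 6, (-3 - 1j * 3 ** 0.5) / 6],
              key=lambda z: z.imag)
# roots in "trigonometric" form, z = 1/(e^{2k pi i/3} - 1), k = 1, 2
trig = sorted([1 / (cmath.exp(2j * cmath.pi * k / 3) - 1) for k in (1, 2)],
              key=lambda z: z.imag)

for z1, z2 in zip(quad, trig):
    assert abs(z1 - z2) < 1e-12                  # the two forms agree
    assert abs((1 + 1 / z1) ** 3 - 1) < 1e-12    # and each solves the equation
print(quad)
```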
Evaluating this triple integral
I cannot comment on the previous answer because I don't have enough reputation. There is a mistake there: $\left[ -e^{-xy} \right]^1_0 = -e^{-y} - (-e^0) = 1-e^{-y}$. Correctly done: $$\int\limits^1_0\int\limits^1_0\int\limits^1_0 {y e^{-xy}}\,dx dy dz = \int\limits^1_0\int\limits^1_0 {[-e^{-xy}]}^1_0\,dy dz = $$ $$= \int\limits_0^1\int\limits_0^1 (1-e^{-y})\,dydz = \int\limits_0^1 (y + e^{-y})\big|^1_0\,dz =$$ $$= \int\limits_0^1 (1 + e^{-1} -1)\,dz = \int\limits_0^1 e^{-1}\,dz = e^{-1}z\big|^1_0 = e^{-1} = {1\over e}$$
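A quick numeric check of the value $1/e$ (the $z$-integral is trivial, so a 2-D midpoint rule on the unit square suffices; the grid size is an arbitrary choice of mine):

```python
import math

N = 400
h = 1.0 / N
total = 0.0
for i in range(N):
    x = (i + 0.5) * h
    for j in range(N):
        y = (j + 0.5) * h
        total += y * math.exp(-x * y) * h * h   # dz contributes a factor of 1

print(total, 1 / math.e)
assert abs(total - 1 / math.e) < 1e-4
```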
Joint PDF with graph and variable, problem
You are given a uniform density (in fact $f(x,y)=c$, a constant) on a specific area, namely the colored area in the picture, so the density is exactly the reciprocal of that area. This is because your joint distribution is a solid with base the area $A$ in the picture and constant height $c$. The probability is the volume of this solid, thus you have $$A\times c=1 \rightarrow c=\frac{1}{A}$$ You can integrate the constant over the colored area, finding the same result: $$\int_0^{1/2}\left[\int_{1/2-x}^{1-x}dy \right]dx+\int_{1/2}^1\left[\int_{0}^{1-x}dy \right]dx+\int_{1/2}^1\left[\int_{3/2-x}^{1}dy \right]dx=\frac{1}{c}$$ You get $c=2$.
$T^2v= \lambda v \implies Tv= \sqrt{\lambda} v$?
As we have $T^2\nu = \lambda \nu$ and $\nu\neq0$, we can say that $\lambda$ is an eigenvalue of $T^2$. Hence $\det(T^2-\lambda I) = 0$. Therefore we have $$\det(T^2-\lambda I) = \det\left((T-\sqrt{\lambda}I)(T+\sqrt{\lambda}I)\right) = 0$$ As referenced here, we can say $$\det(T-\sqrt{\lambda}I)\det(T+\sqrt{\lambda}I) = 0$$ So $\det(T-\sqrt{\lambda}I) = 0$ or $\det(T+\sqrt{\lambda}I) = 0$; that is, at least one of $\sqrt{\lambda}$ and $-\sqrt{\lambda}$ is an eigenvalue of $T$. However, you can't necessarily say that $\sqrt{\lambda}$ is an eigenvalue of $T$. If we know all eigenvalues of $T$ are real and positive, then we can say the statement is true. As mentioned by Jyrki, be aware that this analysis is about $\lambda$, not about $\nu$ as an eigenvector; so if your question is also about $\nu$, you can't conclude anything about it from this analysis.
Show that $\{x \in \mathbb{Q}:x \geq 0, x^2 \leq 2\}$ has no rational least upper bound.
First, I can't follow your argument for why, if $\alpha^2>2$, then $\alpha$ can't be a least upper bound: you only give another upper bound which is greater. Also, if $\alpha^2 <2$ then you have to find a rational number $q$ such that $\alpha^2< q^2< 2$. Let us show that if $\alpha^2>2$ then $\alpha$ can't be a least upper bound. Because $\alpha^2 >2$ there is an $\varepsilon >0$ such that $\alpha^2 > 2+\varepsilon$, and by the Archimedean property we may assume w.l.o.g. that this $\varepsilon$ is rational. We now want to find a $\delta >0$ such that $\alpha-\delta$ is another upper bound, which would contradict that $\alpha$ is the least upper bound. We have \begin{align*} (\alpha -\delta)^2 &= \alpha^2 - \delta(2 \alpha -\delta)\\ & > 2 + \varepsilon - \delta (2 \alpha - \delta), \end{align*} which tells us we want $$\varepsilon > \delta(2\alpha-\delta)$$ in order to conclude that for this choice of $\delta$ the number $\alpha-\delta$ is still an upper bound for our set. Even though you could solve this inequality exactly, since the terms are easy, it is even easier (and should be trained) to make some cruder estimates. As $\delta >0$ and $\alpha >0$, if we choose $\delta$ such that $$\varepsilon > 2\delta \alpha,$$ in other words $\delta < \frac{\varepsilon}{2 \alpha}$, we see that $$\varepsilon > 2\delta \alpha > \delta( 2\alpha -\delta)$$ and hence $\alpha-\delta$ is a smaller upper bound, the desired contradiction.
Computing the homology groups.
Many thanks to Steve D, user17786, and Dave Hartman for their helpful corrections. First, I put a cell structure on the twice-punctured disk with 3 0-cells, 5 1-cells, and 1 2-cell: Note that the boundary of the 2-cell $D$ is $$d_2D=\alpha+\beta+\gamma-\beta+\delta+\epsilon-\delta=\alpha+\gamma+\epsilon,$$ and that the boundaries of the 1-cells are $$\begin{align} d_1\alpha&=0\\ d_1\beta&=y-x\\ d_1\gamma&=0\\ d_1\delta&=z-x\\ d_1\epsilon&=0 \end{align} $$ Now, we identify $y$ with $z$, and $\gamma$ with $\epsilon$, to produce a cell structure on $X$: For $X$, the chain groups are $$\begin{align} C_0(X)&=\langle x,y\rangle\\ C_1(X)&=\langle \alpha,\beta,\gamma,\delta\rangle\\ C_2(X)&=\langle D\rangle \end{align}$$ where $D$ is our 2-cell, and we have $$\begin{align} d_1\alpha&=0\\ d_1\beta&=y-x\\ d_1\gamma&=0\\ d_1\delta&=y-x \end{align} $$ $$d_2D=\alpha+\beta+\gamma-\beta+\delta+\gamma-\delta=\alpha+2\gamma.$$ Thus, $$H_0(X)=\ker(d_0)/\mathrm{im}(d_1)=\langle x,y\rangle/\langle y-x\rangle=\left\langle\overline{x}\right\rangle\cong\mathbb{Z}$$ $$H_1(X)=\ker(d_1)/\mathrm{im}(d_2)=\langle \alpha,\gamma,\beta-\delta\rangle/\langle \alpha+2\gamma\rangle=\left\langle\overline{\gamma},\overline{\beta-\delta}\right\rangle\cong\mathbb{Z}^2$$ $$H_2(X)=\ker(d_2)/\mathrm{im}(d_3)=0/0\cong 0.$$
Expected value from cumulative distribution function
Since the distribution is nonnegative, you can use this formula for the expectation of a nonnegative random variable given its CDF $F$. $$E[X] = \int_0^\infty P(X \ge x) \, dx = \int_0^\infty (1-F(x))\,dx.$$
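As an illustration with a concrete distribution (an exponential with rate $\lambda$, chosen only as an example; its tail is $1-F(x)=e^{-\lambda x}$ and its mean is $1/\lambda$):

```python
import math

lam = 2.0                      # exponential rate, so 1 - F(x) = exp(-lam * x)
N, L = 200_000, 20.0           # midpoint rule on [0, L]; tail beyond L is negligible
h = L / N
tail_integral = sum(math.exp(-lam * (i + 0.5) * h) for i in range(N)) * h

print(tail_integral)           # should be close to E[X] = 1/lam = 0.5
assert abs(tail_integral - 1 / lam) < 1e-6
```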
Are circles also squares in $(\mathbb{R}^2,||\cdot||_{\infty})$?
Let $V$ be a real or complex inner product space with inner product $( \cdot, \cdot)$ and induced norm $||v||=(v,v)^{1/2}.$ Then we have for $v,w \in V$: $ (v,w)=0 \iff ||v|| \le ||v +tw ||$ for all scalars $t$. This motivates the following definition: let $(X, ||\cdot||)$ be a normed space. For $x,y \in X$ define $ x \perp y : \iff ||x|| \le ||x+ty||$ for all scalars $t$.
Correctly count all unique near-coincident hexagonal lattice pairs?
If you're trying to pick points geometrically you are working too hard. Here's how it's really done, with the example of finding a decent match for a size ratio of $x = \pi$. We will be iterating over the Loeschian numbers, which is equivalent to considering points in order of Euclidean distance from the origin. We'll start with $a_2=1$, using $k=1$ and $\ell=0$. We now find lower and upper $a_{1-}$, $a_{1+}$ and corresponding $i_-,j_-, i_+, j_+$ by starting at $\lfloor x^2 \rfloor$ and $\lceil x^2 \rceil$ and going down and up respectively until we land on Loeschian numbers. Then $p = a_{1-}/a_2$ and $q = a_{1+}/a_2$ are our current best approximations -- well, the squares of the best approximations; working with integers is less of a pain in general. Repeatedly: increment $a_2$ to the next Loeschian number. Calculate $a_2x^2$ again, and also $a_2p$ and $a_2q$. Starting at $\lfloor a_2x^2 \rfloor$ and $\lceil a_2x^2 \rceil$, again go down and up respectively to find Loeschian numbers... but if you reach $a_2p$ or $a_2q$ before that happens, give up: something earlier worked better. On the other hand, if you do find one, then you can update $p$ or $q$ as appropriate. This can go on forever, so long as it's never true that $a_2x^2$ is itself a Loeschian number; in that case you've found an exact match. So, for $\pi$: $\pi^2 \approx 9.9$, so our starting point is $p=9$, $q=12$. $3\pi^2 \approx 29.6$, and we find $3\cdot9 < 28 < 3\pi^2 < 31 < 3\cdot12$, so now $p=28/3$, $q=31/3$. $4\pi^2 \approx 39.5$, and $4\cdot\frac{28}{3} < 39 < 4\pi^2 < 4\cdot\frac{31}{3}$ -- this time, we don't find a better upper bound, so only $p$ changes, to $\frac{39}{4}$. $7\pi^2 \approx 69.1$, and in this case we don't find anything that improves the situation. Proceeding in this fashion we get $q = \frac{91}{9}$, $q = \frac{121}{12}$, $p = \frac{127}{13}$, $q=\frac{129}{13}$, $p = \frac{157}{16}$, $q=\frac{208}{21}$, and $q=\frac{247}{25}$, and so on. 
Not all the numbers we get are optimal: $\frac{157}{16}$ isn't as good as $\frac{129}{13}$, but it is on the low side instead of the high side so we get to keep it for now to keep us on track.
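The procedure above can be sketched in a few lines (this is my reading of the scheme; the function and variable names are mine, and it reproduces the worked $p=39/4$, $q=31/3$ stage for $x=\pi$):

```python
from fractions import Fraction
from math import ceil, floor, isqrt, pi

def is_loeschian(n):
    """True if n = i*i + i*j + j*j for some integers i >= j >= 0."""
    return any(i * i + i * j + j * j == n
               for i in range(isqrt(n) + 1) for j in range(i + 1))

def approximants(x, a2_max):
    """Best lower/upper approximations p <= x^2 <= q whose numerators and
    denominators are Loeschian, following the scheme described above."""
    x2 = x * x
    lo, hi = floor(x2), ceil(x2)
    while not is_loeschian(lo):
        lo -= 1
    while not is_loeschian(hi):
        hi += 1
    p, q = Fraction(lo), Fraction(hi)          # the a2 = 1 starting point
    for a2 in range(2, a2_max + 1):
        if not is_loeschian(a2):
            continue
        lo = floor(a2 * x2)
        while lo > a2 * p and not is_loeschian(lo):
            lo -= 1
        if lo > a2 * p:                        # found a better lower bound
            p = Fraction(lo, a2)
        hi = ceil(a2 * x2)
        while hi < a2 * q and not is_loeschian(hi):
            hi += 1
        if hi < a2 * q:                        # found a better upper bound
            q = Fraction(hi, a2)
    return p, q

print(approximants(pi, 4))   # (Fraction(39, 4), Fraction(31, 3)), as in the text
```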
Evaluating sums question $\sum_{k=2}^n \frac{1}{k^2-1}$
$$\frac12\sum_{k=2}^N\left(\frac1{k-1}-\frac1{k+1}\right)=$$ $$=\frac12\left(1-\color{red}{\frac13}+{\frac12}-\color{green}{\frac14}+\color{red}{\frac13}-\color{blue}{\frac15}+\color{green}{\frac14}-\frac16+\ldots+\color{purple}{\frac1{N-1}}-\frac1{N+1}\right)=\ldots$$
Future Lifetime Distribution
Hint: The random variable $\min(T_{20},50)\mid T_{20} > 50$ doesn't vary much! If you then want to use a formula, call the above random variable $Y$. We want $E((Y-50)^2)$. How did you decide earlier that $E(Y)=50$?
Find all complex numbers $z$ for which $\left(\frac{z - i - 1}{iz + 1}\right)^2$ is a real number
You want $\frac{z-i-1}{iz+1}$ to be real or purely imaginary (i.e., $t$ or $it$ with $t\in\Bbb R$). Note that $z\mapsto \frac{dz-b}{-cz+a}$ is an inverse map to $z\mapsto \frac{az+b}{cz+d}$.
Show Schwarz-Christoffel retains the same form
The Schwarz-Christoffel formula shows how to map the upper half plane onto any convex polygon. By setting all exponents $\beta = 2/n$, we get a mapping of the upper half plane to a regular $n$-gon. Letting $n$ go to infinity, I would expect this formula to agree with your $\phi^{-1}$, since both would map the upper half plane to circles.
Why is the result not $0$?
It is not a Symbolab error. If we have two powers $x^m$ and $x^n$, then $x^m\cdot x^n = x^{m+n}$. This is known as the product rule for powers, because we are taking the product of two powers with a common base $x$. If $m = \dfrac{1}{i}$ and $n = \dfrac{1}{j}$ then the same rule still applies, so that $x^{1/i}\cdot x^{1/j} = x^{1/i\,+\, 1/j}$. When we raise $x$ to the power of a fraction with numerator $1$ and denominator $h$ (i.e. the reciprocal of $h$), this is the same as $\sqrt [h]x$; therefore this power rule also applies to roots (radicals). It follows, then, that Symbolab makes no mistake in writing $$\Large \left(\sqrt [n] a\cdot \sqrt[k] a\right) - \left(a^{(n+k)/nk}\right) = a^{1/n\, + \, 1/k} - a^{(n+k)/nk}.$$ And this difference does equal $0$, since $$\frac{n+k}{nk} = \frac{n}{nk} + \frac{k}{nk} = \frac{\require{cancel}\cancel n}{\cancel nk} + \frac{\cancel k}{n\cancel k} = \frac 1k + \frac 1n,$$ so the two exponents are equal.
Apostol calculus 2- Legendre polynomials
Note that: $$(2m+2)(2m+4)\dotsb(2m+2n)=2^n(m+1)(m+2)\dotsb(m+n)$$ Now $$\frac{(2m)!(2m+1)(2m+3)\dotsb(2m+2n-1)\cdot(2m+2)(2m+4)\dotsb(2m+2n)}{(2m)!(2m+2)(2m+4)\dotsb(2m+2n)}=\frac{(2m+2n)!}{2^{n}(2m)!(m+1)(m+2)\dotsb(m+n)}=\frac{(2m+2n)!m!}{2^{n}(2m)!m!(m+1)(m+2)\dotsb(m+n)}=\frac{(2m+2n)!m!}{2^{n}(2m)!(m+n)!}$$
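The resulting identity $(2m+1)(2m+3)\dotsb(2m+2n-1)=\frac{(2m+2n)!\,m!}{2^{n}(2m)!(m+n)!}$ is easy to verify numerically:

```python
from math import factorial

def odd_product(m, n):
    """(2m+1)(2m+3)...(2m+2n-1); an empty product (n = 0) is 1."""
    out = 1
    for r in range(1, n + 1):
        out *= 2 * m + 2 * r - 1
    return out

def closed_form(m, n):
    # integer division is exact here, since the ratio is a product of integers
    return factorial(2 * m + 2 * n) * factorial(m) // (
        2 ** n * factorial(2 * m) * factorial(m + n))

for m in range(8):
    for n in range(8):
        assert odd_product(m, n) == closed_form(m, n)
print("identity verified for 0 <= m, n < 8")
```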
Calculating $\|A\|_2$ in terms of eigenvalues of $A^\ast A$
As the answer you looked at indicates, the key here is that a unitary change of basis preserves the matrix norm. In particular, if $A$ has SVD $A= UDV^T$, then we'll have $\|A\|=\|D\|$. So indeed, $\|A\|=\sqrt{\lambda_{\max}(A^TA)}$.
Elements of Order 6 in a simple group of order 168
If there is an element of order $6$ with cycle structure $(6\,2)$ then for each pair of points $\{a\,b\}$ there are at least two elements of order $6$ with $(a\,b)$ as a cycle and at least $56$ elements of order $6$, since $G$ is $2$-transitive. Likewise there are at least $56$ elements of order $3$. As there are $48$ elements of order $7$ there are at most $168-56-56-48=8$ elements of $2$-power order, so the Sylow $2$-subgroup is unique, and normal, contradicting the simplicity of $G$.
Understanding some basics
Given $x_1,\ldots,x_n$, you can define a function $$f(a) = \sum_{i=1}^n (x_i-a)^2.$$ This function is defined for all real $a$, and in some sense it measures the centrality of $a$. You can try some simple example on Wolfram alpha, say "plot((a-1)^2+(a-2)^2+(a-4)^2,a=0..5)". In this case the data points are $1,2,4$, and you get a nice, parabolic-like shape (this is no coincidence). The lowest point of the function is the minimum. The function $f(a)$ is bounded from below (since $f(a) \geq 0$), and so it cannot happen that $f(a)$ can be arbitrarily small. Therefore there must be a value below which $f(a)$ cannot reach, and moreover an optimal one - this is known as the infimum. The infimum is a value $L$ such that $f(a) \geq L$ always, and moreover for each $l > L$, there is some value of $a$ such that $f(a) < l$; that expresses the fact that $L$ is optimal. In general, it might be that $f(a)$ can get arbitrarily close to the infimum, but never reach it. In the case at hand, this doesn't happen, and the function $f(a)$ is actually minimized at some point, namely the average (or mean). The equation you state can be used to justify the definition of average - the average is the value that minimizes the average squared distance from the datapoints. You could choose other criteria - for example, if you replace squared distance by absolute distance, then the optimal value of $a$ is the median. The reason that people care about squared distance is that it's much easier to work with, and the resulting theory is very nice. For example, the central limit theorem states that in many cases, processes converge to a normal distribution whose parameters depend on the mean and variance of the original distribution - the variance is just $\min f(a)$ (normalized).
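A small grid search illustrates the two claims above, using the same data points $1,2,4$: the summed squared distance is minimized near the mean, and the summed absolute distance near the median.

```python
data = [1, 2, 4]
grid = [i / 1000 for i in range(5001)]   # candidate values of a in [0, 5]

best_sq = min(grid, key=lambda a: sum((x - a) ** 2 for x in data))
best_abs = min(grid, key=lambda a: sum(abs(x - a) for x in data))

print(best_sq, best_abs)   # approximately 7/3 (the mean) and 2 (the median)
assert abs(best_sq - sum(data) / len(data)) < 1e-3
assert abs(best_abs - 2) < 1e-3
```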
Finding the intersection point of a line with a rectangle defined by 4 points
I assume that the naming of the points is such that $p_1p_2,\;p_2p_3,\;p_3p_4,\;p_4p_1$ are the four sides of the rectangle. For finding $t$, it does not matter if you take $p_3$ or $p_4$ to get $v_2.$ You only need a set of two independent vectors to span the plane in which the rectangle resides. However it matters for the question 2. You must use $v_2=p_4-p_1$ Example: $$ p_1=\begin{pmatrix} 0 \\ 0\\ 0\end{pmatrix}\;,\;\; p_2=\begin{pmatrix} 1 \\ 0\\ 0\end{pmatrix}\;,\;\; p_3=\begin{pmatrix} 1 \\ 1\\ 0\end{pmatrix}\;,\;\; p_4=\begin{pmatrix} 0 \\ 1\\ 0\end{pmatrix} $$ and $$ p=\begin{pmatrix} \frac{1}{4} \\ \frac{5}{4} \\ 0\end{pmatrix} $$ If you define $v_2=p_3-p_1,$ then $$ 0 < (p-p_1)\cdot v_1 = \frac{1}{4} < 1 = v_1\cdot v_1 \\ 0 < (p-p_1)\cdot v_2 = \frac{3}{2} < 2 = v_2\cdot v_2 $$ although the point $p$ is clearly outside of the rectangle. The reasoning behind the inequalities: Let us call the direction of $v_1$ "right" and the direction of $v_2$ "up" (which only makes sense if we define $v_2=p_4-p_1$ as you proposed). In the following, I use "left", "right", "above" and "below" accordingly. Then $(p-p_1)\cdot v_1 < 0$ means that $p$ is more to the left than $p_1.$ Due to all the right angles this means "more to the left than the $p_4p_1$ edge." $(p-p_1)\cdot v_1 > v_1\cdot v_1$ means that $p$ is more to the right than $p_2.$ This means "more to the right than the $p_2p_3$ edge." $(p-p_1)\cdot v_2 < 0$ means that $p$ is below $p_1.$ This means "below the $p_1p_2$ edge." $(p-p_1)\cdot v_2 > v_2\cdot v_2$ means that $p$ is above $p_4.$ This means "above the $p_3p_4$ edge."
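The test described above, reproducing the worked example (a sketch; the helper names are mine). With $v_2=p_4-p_1$ the point is correctly rejected, while the wrong choice $v_2=p_3-p_1$ accepts it:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return tuple(a - b for a, b in zip(u, v))

def in_rectangle(p, p1, p2, p4):
    """Containment test using v1 = p2 - p1 and v2 = p4 - p1."""
    v1, v2 = sub(p2, p1), sub(p4, p1)
    d = sub(p, p1)
    return (0 <= dot(d, v1) <= dot(v1, v1)) and (0 <= dot(d, v2) <= dot(v2, v2))

p1, p2, p3, p4 = (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)
p = (1 / 4, 5 / 4, 0)

assert not in_rectangle(p, p1, p2, p4)        # correctly rejected

# with the wrong choice v2 = p3 - p1, the same p passes both inequalities:
v1, v2, d = sub(p2, p1), sub(p3, p1), sub(p, p1)
assert 0 <= dot(d, v1) <= dot(v1, v1)
assert 0 <= dot(d, v2) <= dot(v2, v2)
print("example reproduced")
```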
Given two relations R, S on a Set X (so that R, S ⊆ X . X) : Prove or disprove, If R and S is transitive, then so is R ∪ S
Welcome to MathStackExchange. A binary relation $R$ on a set $X$ is a subset of $X\times X$. The relations $R = \{(1,2)\}$ and $S=\{(2,1)\}$ are transitive, but the union $R\cup S = \{(1,2),(2,1)\}$ is not since $(1,1)$ and $(2,2)$ are missing.
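The counterexample is easy to check mechanically; `is_transitive` below is a small helper written for this sketch.

```python
def is_transitive(rel):
    # rel is transitive iff (a,b) and (b,d) in rel imply (a,d) in rel
    return all((a, d) in rel for (a, b) in rel for (c, d) in rel if b == c)

R = {(1, 2)}
S = {(2, 1)}
union = R | S   # {(1, 2), (2, 1)}: transitivity would require (1, 1), (2, 2)
```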
How to pronounce of $0^+$ and $0^-$?
Those are right-sided and left-sided limits. You can say, as $x$ approaches $0$ from the right side, $f(x)$ approaches $L$. You can also say the limit of $f(x)$ as $x$ approaches $0$ from the right side. Of course, the natural association of being to the left and right of a value depends on the dialect and language. You could use the notion of being below or above a value in place of mentioning right or left.
Finding a bound on the difference of the products of two sequences
For simplicity take $N=2$ and notice: $$ |a_1a_2-b_1b_2|=|a_1a_2-((b_1-a_1)+a_1)b_2|=|a_1a_2-[b_2(b_1-a_1)+a_1b_2]| $$ $$ =|a_1(a_2-b_2)-b_2(b_1-a_1)|\leq |a_2-b_2|+|b_1-a_1| \leq 2\delta $$ I'm using here $|a_k|,|b_k| \leq 1$, which is not quite the condition you gave; if we only suppose $a_k,b_k \in (-\infty, 1]$ I don't believe this will be true. If that's what you're asking about, I can try to think of a counterexample.
Integrate $ \int_a^b \frac{1}{\sqrt{Ax-\frac{x^2}{2}}}dx$
$$\int_a^b \frac{1}{\sqrt{Ax-\frac{x^2}{2}}}dx$$ $$=\int_a^b \frac{\sqrt 2}{\sqrt{2Ax-x^2}}dx$$ $$=\sqrt 2\int_a^b \frac{1}{\sqrt{A^2-(x-A)^2}}dx$$ $$=\sqrt 2\int_a^b \frac{d(x-A)}{\sqrt{A^2-(x-A)^2}}$$ $$=\sqrt 2\left[\sin^{-1}\left(\frac{x-A}{A}\right)\right]_a^b $$
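A quick numerical sanity check of the closed form (midpoint rule; the values of $A$, $a$, $b$ below are arbitrary choices with $0 < a < b < 2A$ so the radicand stays positive):

```python
import math

A, lo, hi = 1.0, 0.5, 1.5       # illustrative values with 0 < lo < hi < 2A
n = 100_000
h = (hi - lo) / n

# midpoint rule for the integral of 1/sqrt(A*x - x^2/2) over [lo, hi]
numeric = 0.0
for i in range(n):
    x = lo + (i + 0.5) * h
    numeric += h / math.sqrt(A * x - x * x / 2)

# the antiderivative found above: sqrt(2) * arcsin((x - A)/A)
closed = math.sqrt(2) * (math.asin((hi - A) / A) - math.asin((lo - A) / A))
```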
Can a divergent alternating series by rearrangement of terms be made to converge to a value?
Maybe or maybe not. If you follow through the proof that rearranging the conditionally convergent series can make the sum anything you want, an important point is that the sum of the positive terms diverges to $+\infty$ and the sum of the negative terms diverges to $-\infty$. If that happens, you can use the same style of rearrangement. If not, consider the series $$a_n=\begin {cases} 2^{-n}&n \text { odd}\\ -1& n \text { even} \end {cases}$$ This is a diverging alternating series, but you cannot make it converge to anything. If you insist that the terms go to zero in absolute value (as required in the alternating series theorem) you can use $$a_n=\begin {cases} 2^{-n}&n \text { odd}\\ -1/n& n \text { even} \end {cases}$$
Multiplying the roots of polynomials using only their integer coefficients
The following algorithm suggests itself. Let $\alpha$ be an unknown root of a polynomial $f(x)$ of degree $n$, and $\beta$ be an unknown root of a polynomial $g(x)$ of degree $m$. Using the equation $f(\alpha)=0$ allows us to rewrite any power $\alpha^k$ as a linear combination of $1,\alpha,\alpha^2,\ldots,\alpha^{n-1}$. Similarly using the equation $g(\beta)=0$ allows us to rewrite any power $\beta^\ell$ as a linear combination of $1,\beta,\beta^2,\ldots,\beta^{m-1}$. Putting these two pieces together shows that we can write any monomial $\alpha^k\beta^\ell$ as a linear combination of the $mn$ quantities $\alpha^i\beta^j, 0\le i<n, 0\le j<m.$ Denote $c_k=\alpha^i\beta^j$, where $k=mi+j$ ranges from $0$ to $mn-1$. Next let us use the expansions of $(\alpha\beta)^t$, $0\le t\le mn$ in terms of the $c_k$:s. Let these be $$ (\alpha\beta)^t=\sum_{k=0}^{mn-1}a_{kt}c_k. $$ Here the coefficients $a_{kt}$ are integers. We seek to find integers $x_t,t=0,1,\ldots,mn$, such that $$ \sum_{t=0}^{mn}x_t(\alpha\beta)^t=0.\qquad(*) $$ Let us substitute our formula for the power $(\alpha\beta)^t$ above. The equation $(*)$ becomes $$ 0=\sum_t\sum_kx_ta_{kt}c_k=\sum_k\left(\sum_t x_ta_{kt}\right)c_k. $$ This will be trivially true, if the coefficients of all $c_k$:s vanish, that is, if the equation $$ \sum_{t=0}^{mn}a_{kt}x_t=0 \qquad(**) $$ holds for all $k=0,1,2,\ldots,mn-1$. Here there are $mn$ linear homogeneous equations in the $mn+1$ unknowns $x_t$. Therefore linear algebra says that we are guaranteed to succeed in the sense that there exists a non-trivial vector $(x_0,x_1,\ldots,x_{mn})$ of rational numbers that is a solution of $(**)$. Furthermore, by multiplying with the least common multiple of the denominators, we can make all the $x_t$:s integers. The polynomial $$ F(x)=\sum_{t=0}^{mn}x_tx^t $$ is then an answer. Let's do your example of $f(x)=x^2-1$ and $g(x)=x^2+3x+2$. Here $f(\alpha)=0$ tells us that $\alpha^2=1$. 
Similarly $g(\beta)=0$ tells us that $\beta^2=-3\beta-2$. This is all we need from the polynomials. Here $c_0=1$, $c_1=\beta$, $c_2=\alpha$ and $c_3=\alpha\beta$. Let us write the power $(\alpha\beta)^t$, $t=0,1,2,3,4$ in terms of the $c_k$:s. $$ (\alpha\beta)^0=1=c_0. $$ $$ (\alpha\beta)^1=\alpha\beta=c_3. $$ $$ (\alpha\beta)^2=\alpha^2\beta^2=1\cdot(-3\beta-2)=-2c_0-3c_1. $$ $$ (\alpha\beta)^3=\alpha\beta(-3\beta-2)=\alpha(-3\beta^2-2\beta)=\alpha(9\beta+6-2\beta)=\alpha(7\beta+6)=6c_2+7c_3. $$ $$ (\alpha\beta)^4=(-3\beta-2)^2=9\beta^2+12\beta+4=(-27\beta-18)+12\beta+4=-14-15\beta=-14c_0-15c_1. $$ We are thus looking for solutions of the homogeneous linear system $$ \left(\begin{array}{ccccc} 1&0&-2&0&-14\\ 0&0&-3&0&-15\\ 0&0&0&6&0\\ 0&1&0&7&0 \end{array}\right) \left(\begin{array}{c}x_0\\x_1\\x_2\\x_3\\x_4\end{array}\right)=0. $$ Let us look for a solution with $x_4=1$ (if the polynomials you started with are monic, then this always works AFAICT). The third equation tells us that we should have $x_3=0$. The second equation allows us to solve that $x_2=-5$. The last equation and our knowledge of $x_3$ tells that $x_1=0$. The first equation then tells that $x_0=4.$ This gives us the output $$ x^4-5x^2+4. $$ The possibilities for $\alpha$ are $\pm1$ and the possibilities for $\beta$ are $-1$ and $-2$. We can easily check that all the possible four products $\pm1$ and $\pm2$ are zeros of this quartic polynomial. My solution to the linear system was ad hoc, but there are known algorithms for that: elimination, an explicit solution in terms of minors,...
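As a quick sanity check of the worked example: the roots of $f$ are $\pm1$ and the roots of $g$ are $-1,-2$, so every pairwise product should be a root of the computed quartic.

```python
# roots of f(x) = x^2 - 1 and g(x) = x^2 + 3x + 2
alphas = [1, -1]
betas = [-1, -2]

def F(x):
    # the polynomial produced by the worked example above
    return x**4 - 5 * x**2 + 4

products = [a * b for a in alphas for b in betas]  # -1, -2, 1, 2
```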
3 digit Prime Palindrome Numbers.
$90$ is not so many, so you can just check them. We know that primes end in $1,3,7,$ or $9$, so $a$ must be one of those and you are down to $40$. You should be able to find a condition on $a,b$ that will guarantee that $aba$ is divisible by $3$, which will cut the number down a few more.
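Since the check is so small, it can be done by machine too; this sketch enumerates the palindromes $aba$ directly (there turn out to be $15$ of them).

```python
def is_prime(n):
    # trial division is plenty for 3-digit numbers
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# aba = 101*a + 10*b, with a restricted to the possible prime last digits
prime_palindromes = [101 * a + 10 * b
                     for a in (1, 3, 7, 9) for b in range(10)
                     if is_prime(101 * a + 10 * b)]
```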
Conifolds and Exotic Spheres
No, the authors are not suggesting that the two manifolds are homeomorphic and not diffeomorphic. Your two manifolds are Riemannian manifolds: smooth manifolds equipped with a Riemannian metric. The volume of a Riemannian manifold is preserved under isometry: it's defined by integrating the volume form over the manifold, and an isometry $M \to N$ pulls back the volume form of $N$ to the volume form of $M$. But this is far from true just for diffeomorphisms. For instance, the $2$-sphere of any radius is diffeomorphic to the 2-sphere of any other radius; but the volume of $S^2(r)$ is $4\pi r^2$. So we can give a sphere a Riemannian metric of any given volume. The authors in your paper show that the two Riemannian manifolds are not isometric by showing that they have different volumes; but they are still diffeomorphic.
If 1 is the identity of the multiplicative (semi)group what is the term for 0?
There is no such element in a nontrivial group. Every element of a group has an inverse. Let $G$ be a group and suppose (for purpose of contradiction) $G$ contains your proposed element $O \neq e$. Then there is a $p = O^{-1} \in G$ such that $pO = e \neq O$. But this contradicts the definition of $O$. Therefore, there is no nontrivial group, $G$, containing an $O \neq e$ as described. Another way to get at this, using the required existence of inverses, is that from $$ x O = O \text{,} $$ we have $$ x = xO O^{-1} = O O^{-1} = e \text{.} $$ So the assumed multiplication properties of $O$ are incompatible with its membership in a group unless the only element of the group is $e$ (in which case $e = O$ does satisfy the properties of both the multiplicative identity in a multiplicative group and the properties of the $O$ element you describe). (This is why I wrote "$O \neq e$" in the second paragraph: to avoid the case that we were secretly only talking about the group with one element.)
If $\lim_{x\to c}f'(x)=A$ then $f'(c)=A$
First, let us look at the left-hand side. For all $x_1 \in (a,c)$, there exists $c_1 \in (x_1, c)$ such that $f(x_1) - f(c) = f'(c_1) (x_1 - c)$ by MVT. Note that this is so because $f$ is continuous on $[x_1, c]$. Therefore, $$ \lim_{x_1 \nearrow c} \dfrac{f(x_1) - f(c)}{x_1 - c} = \lim_{c_1 \nearrow c} f'(c_1) $$ since $x_1 \nearrow c$ implies $c_1 \nearrow c$ by the condition. Hence, we have $$ \lim_{x_1 \nearrow c} \dfrac{f(x_1) - f(c)}{x_1 - c} = A $$ Likewise, we can do a similar process for the case that, say, $x_2 \in (c,b)$.
Some help with an intuition behind a two-stage game
This doesn't pretend to be an answer and I might say some bullshit. But consider this as more like an invitation for a discussion. Also this looks like a research problem, so most probably, there is no direct and truly correct answer. Consider first situation where none of the agents decide to discover information. Then I would use a framework of global interactions of Brock Durlauf to derive some sort of mean field equilibrium. Essentially agents form rational expectations on probabilities with which others will choose $A=1$, say this expectation is equal to $m$, then in equilibrium $m = \bar{F}(\delta (N-1) m)$ where $N$ is the total number of firms and $\bar{F}(\cdot)$ is complementary cdf of $s$. Consider then the case where all agents decide to discover information. For simplicity, say we have only 2 agents. Then we have the following payoff matrix of the game with complete information: \begin{vmatrix} -c,-c & s_1-c, -c \\ -c,s_2-c & \delta + s_1-c, \delta + s_2-c \end{vmatrix} Here $c$ is cost of acquiring information. Depending on realizations of $s_1$ and $s_2$ we might get different equilibria. And it looks to me that the payoffs that agents get here might be lower than what they get in case 1. After that I am not sure how to proceed with the scenario where some of the agents acquired information and some of them didn't. What are the payoffs in this case?
U-substitution of 2x in trigonometric substitution
The hint: $$\frac{x^3}{\sqrt{(4x^2+9)^3}}=\frac{x^3+\frac{9}{4}x-\frac{9}{4}x}{\sqrt{(4x^2+9)^3}}=\frac{x}{4\sqrt{4x^2+9}}-\frac{9x}{4\sqrt{(4x^2+9)^3}}.$$ Can you end it now?
What is maximum of $\frac{x^2+y^2+z^2}{xy+xz+yz}$ when $x, y, z \in [1, 2]$?
Let $k$ be the maximal value and $f(x,y,z)=x^2+y^2+z^2-k(xy+xz+yz)$. Since $f$ is a convex function of $x$, of $y$ and of $z$ separately, its maximum over the cube is attained at a vertex, so $$0=\max_{\{x,y,z\}\subset[1,2]}f=\max_{\{x,y,z\}\subset\{1,2\}}f,$$ which for $x=y=1$ and $z=2$ gives $k=\frac{6}{5}$ and we are done!
A series of rational number converges to an irrational number
With your definition, it's clearly true that $\liminf \frac{n_k}{n_1n_2\cdots n_{k-1}} \le 1$. But that doesn't seem relevant to the stated problem, given its hypothesis. It's not necessarily true that $n_{k+1} \ge 3n_k$ for $k\ge3$ (the sequence can start with $n_k=k$ for as many terms in a row as you like), but there does exist $k_0$ such that it's true for $k\ge k_0$.
Finding a cluster of size $k$ with minimum intra-distance
You can solve the problem via integer linear programming as follows. Let binary decision variable $y_i$ indicate whether $i \in S$. The problem is to minimize $$\sum_{i<j} d(x_i,x_j) y_i y_j \tag1$$ subject to $$\sum_i y_i = k \tag2$$ You can linearize the objective $(1)$ by introducing binary (or just nonnegative) variables $z_{i,j}$ and minimizing $$\sum_{i<j} d(x_i,x_j) z_{i,j}$$ subject to $(2)$ and $$z_{i,j} \ge y_i + y_j - 1 \quad\text{for all $i<j$} \tag3$$ Constraint $(3)$ enforces the logical implication $y_i \land y_j \implies z_{i,j}$.
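For small instances you can also just enumerate all size-$k$ subsets and compare against whatever the ILP returns; `min_intra_cluster` is a helper name invented for this sketch, and exhaustive search is only meant as a reference implementation (the ILP is the formulation that scales).

```python
from itertools import combinations

def min_intra_cluster(points, k, d):
    # exhaustive search over all size-k index subsets
    def cost(S):
        return sum(d(points[i], points[j]) for i, j in combinations(S, 2))
    return min(combinations(range(len(points)), k), key=cost)

# toy 1-D instance: the tight cluster {10, 11, 12} should win
points = [0.0, 1.0, 10.0, 11.0, 12.0]
best = min_intra_cluster(points, 3, lambda a, b: abs(a - b))
```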
Brauer Group of $\mathbb{Q}_2$
I discovered what is wrong. If $a=2, b=3$, and we let $x\in\mathbb{Q}_2$ be a solution to $x^2=17$, which exists by Hensel, then $\left(\frac{xv+uv}{3}\right)^2=5$, by anticommutativity, and $\frac{xv+uv}{3}$ and $u$ anticommute, so the three division algebras are actually all the same. So there's only one division algebra as expected.
$\sqrt{n}$ in scaled random walk
While I'm not entirely sure about your notation here, if the $X_j$ are iid then your $W^n$ have constant variance. This is in turn because of the basic properties $$\text{Var} \left ( \sum_{j=1}^n X_j \right ) = n \text{Var}(X_1) \\ \text{Var}(cX)=|c|^2 \text{Var}(X)$$ when the $X_j$ are iid and $c$ is a constant. Combining these gives $$\text{Var} \left ( \frac{1}{\sqrt{n}} \sum_{j=1}^n X_j \right ) = \frac{1}{n} n \text{Var}(X_1)=\text{Var}(X_1).$$ This is probably why you would want the walk to be scaled in that way. This same scaling is used in, for example, the central limit theorem.
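A quick simulation (assuming iid $\pm1$ steps, so $\text{Var}(X_1)=1$) shows the variance of the scaled endpoint staying near $1$ as $n$ grows:

```python
import random

def scaled_walk_end(n):
    # W^n = n^{-1/2} * (X_1 + ... + X_n) with iid +-1 steps
    return sum(random.choice((-1, 1)) for _ in range(n)) / n ** 0.5

random.seed(0)
variances = []
for n in (10, 50, 250):
    samples = [scaled_walk_end(n) for _ in range(1500)]
    # the mean is 0, so the sample variance is just the mean square
    variances.append(sum(s * s for s in samples) / len(samples))
```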
Unordered outcomes (counting)
HINT: If you have $n$ numbered marbles, in how many distinct ways can you paint some of them red and the rest blue? Now replace the marbles by pairs of users and the colors by ... ?
Stiffness matrix for 3-nodes beam elements FEM (M+N)
It's very exciting that you are going to build an FEM model from scratch! The stiffness matrix in your case is simply: $$ K_m+K_n $$ But this stiffness matrix only applies in each edge's local coordinate system, while the variables shown in the triangle are inevitably in a global coordinate system. A conventional routine would be: (1) First construct the local stiffness matrix (6 DOF per node) per edge. Don't forget to transform it into the global coordinate system by $$ K=\mathbf{v}^T k' \mathbf{v} $$ where the $12\times 12 $ matrix $\mathbf{v}$ represents the transformation according to the edge's direction. (2) Virtually construct a full matrix $A$ whose dimension is $3N\times 3N$ (truss) or $6N \times 6N$ (frame), where $N$ denotes the number of nodes. Then add every edge's local matrix onto $A$ (the nodal indices in the full matrix should be consistent with those in the local matrix). (3) Distinguish the free nodal displacements $\mathbf{x}_a$ from the constrained ones $\mathbf{x}_c$ according to the boundary conditions. Then $A$ can be partitioned into: $$ \begin{bmatrix} A_{aa} & A_{ac} \\ A_{ca} & A_{cc} \end{bmatrix} \begin{bmatrix} \mathbf{x}_{a} \\ \mathbf{x}_{c} \end{bmatrix} = \begin{bmatrix} F_{a} \\ F_{c} \end{bmatrix} $$ If everything is set up correctly, you get a full-rank square matrix $A_{aa}$. Then solve $$ A_{aa} \mathbf{x}_{a} = F_{a} $$ to find out the unknown displacements $\mathbf{x}_{a}$. (4) The displacement $\mathbf{x}_{a}$ in the last step is represented in the global coordinate system. One needs to go back to each edge's local coordinate system to compute its axial forces and bending moments.
Representation of a bilinear form on an Hilbert space
1) There exists a bijection between bounded bilinear forms and bounded operators. The proof of this fact requires the Banach-Steinhaus theorem. If the bilinear form is symmetric, then the respective operator is (obviously) symmetric too. 2) The spectrum of any bounded operator is compact, and as a consequence closed. 3) No, consider the bilinear form $$ b:\ell_2\times\ell_2\to\mathbb{R}:(x,y)\mapsto\sum\limits_{i=1}^\infty i^{-1}x_iy_i $$ It is symmetric and positive but not coercive.
Sequence of matrices converging to a positive semidefinite matrix.
No. Suppose $A=0$, and $A_{n}=-\frac{1}{n}I$. The sequence of $A_{n}$ matrices converges in Frobenius norm to $A$ but none of them are positive semidefinite. If $A$ were positive definite, then you could reach the desired conclusion. The OP has since modified the question to include a restriction that all of the elements of the matrices be positive. The OP also wants $A$ to have at least one positive eigenvalue. It's still easy to find a counterexample. For example, let $A=\left[ \begin{array}{ccc} 1 & 1 & 0.01 \\ 1 & 1 & 0.01 \\ 0.01 & 0.01 & 1 \end{array} \right]$ and $A_{n}=\left[ \begin{array}{ccc} 1 & 1+1/n & 0.01 \\ 1+1/n & 1 & 0.01 \\ 0.01 & 0.01 & 1 \end{array} \right]$
Comprehension: Consider the $8$ digit number $N=22234000$
A simple way: you computed the answer as $1120$. Just multiply it by $\dfrac58$, the probability that the number starts with a non-zero digit. The answer it yields (which others have also got) is $700$; the options are definitely wrong.
Maximizing a function subject to a constraint
The Lagrangian relaxation can be written as $$ \text{Maximize } L(\lambda) = (xyz)^{1/4} + \lambda(d - ax - by - cz) \\ \lambda \geq 0 $$ Partially differentiating with respect to each variable, we get \begin{align} \frac{\partial L}{\partial x} = \frac{(yz)^{1/4}}{4x^{3/4}}-a\lambda = 0 \tag 1\\ \frac{\partial L}{\partial y} = \frac{(xz)^{1/4}}{4y^{3/4}}-b\lambda = 0 \tag 2\\ \frac{\partial L}{\partial z} = \frac{(xy)^{1/4}}{4z^{3/4}}-c\lambda = 0 \tag 3\\ \frac{\partial L}{\partial \lambda} = ax + by + cz - d = 0 \tag 4 \end{align} Dividing equation $(1)$ by $(2)$ we get $ax = by$, and dividing equation $(2)$ by $(3)$ we get $by = cz$. Once we know this, we can find out the optimal values: $$ x^* = \frac{d}{3a} \\ y^* = \frac{d}{3b} \\ z^* = \frac{d}{3c} \\ \lambda^* = \frac{1}{4}\Big(\frac{3}{abcd}\Big)^{1/4} \\ $$
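The stationary point can be checked numerically: for arbitrary illustrative constants, the candidate $x^*=d/3a$, $y^*=d/3b$, $z^*=d/3c$ exhausts the budget and beats randomly sampled feasible points (which follows from AM-GM: the product of budget shares is maximized when the shares are equal).

```python
import random

def u(x, y, z):
    # the objective (xyz)^(1/4)
    return (x * y * z) ** 0.25

a, b, c, d = 2.0, 3.0, 5.0, 30.0        # illustrative constants
x_opt, y_opt, z_opt = d / (3 * a), d / (3 * b), d / (3 * c)
best = u(x_opt, y_opt, z_opt)

# random feasible points: budget shares w_i / sum(w) spent on x, y, z
random.seed(0)
trials = []
for _ in range(200):
    w = [random.random() for _ in range(3)]
    s = sum(w)
    trials.append(u(w[0] * d / (s * a), w[1] * d / (s * b), w[2] * d / (s * c)))
```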
Using comparison theorem for integrals to prove an inequality
HINTS: Note that for $x\in [0,\pi/2]$, we have $$0\le \frac{\sin(x)}{x}\le 1$$ and $$0\le \frac{1}{x+5}\le \frac15$$
Similarity transformation, symmetric and diagonal matrices
For a square orthogonal matrix $C$, $C^{-1}=C^T$, so you’re really asking if $CDC^T$, where $D$ is diagonal, is symmetric. Let’s see: $$(CDC^T)^T = (C^T)^TD^TC^T = CDC^T.$$
Prove that a definite integral is an infinite sum
It's really just a geometric series: \begin{align} \int_0^\infty\frac{x\,dx}{1+e^x}&=\int_0^\infty xe^{-x}(1+e^{-x})^{-1}\,dx =\int_0^\infty x\sum_{n=1}^\infty(-1)^{n-1}e^{-nx}\,dx\\ &=\sum_{n=1}^\infty(-1)^{n-1}\int_0^\infty xe^{-nx}\,dx =\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^2}. \end{align}
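Both ingredients can be sanity-checked numerically: the termwise integral $\int_0^\infty xe^{-nx}\,dx=1/n^2$ (here for $n=3$, via a midpoint rule), and the resulting alternating series, whose value is known to be $\eta(2)=\pi^2/12$.

```python
import math

# midpoint rule for the termwise integral, n = 3; expect 1/9
n, h, upper = 3, 1e-4, 20.0
termwise = sum((i + 0.5) * h * math.exp(-n * (i + 0.5) * h) * h
               for i in range(int(upper / h)))

# partial sum of sum_{j>=1} (-1)^{j-1} / j^2; its value is pi^2 / 12
partial = sum((-1) ** (j - 1) / j**2 for j in range(1, 100_001))
```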
Calculating the angle of a disc to provide given ellipse
Set up a Cartesian coordinate system. Let the radius of the disc be $r$. Let the camera take the $xy$-plane. Case (1): the plane of the disc is parallel to the $xy$-plane. This is essentially an ellipse with one of the axes $b=r$, whereas the remaining axis $a$ changes periodically with the angular displacement of the camera $\theta$: $a=r|\sin\theta|$, in equation: $$\frac{x^2}{(r\sin\theta)^2}+\frac{y^2}{r^2}=1$$ Case (2): the plane of the disc is tilted w.r.t. the $xy$-plane by an angle of $\phi$ ($0\leq\phi\leq \pi/2$), an angle of "tilting", like the Tower of Pisa. By Lambert's cosine law, we first obtain the projected radius on the $xy$-plane, $r'=r\cos\phi$. This "corrected" disc then rotates as in Case (1). Hence, in equation: $$\frac{x^2}{(r'\sin\theta)^2}+\frac{y^2}{r'^2}=1$$ $$\frac{x^2}{(r\sin\theta)^2}+\frac{y^2}{r^2}=\cos^2\phi$$ Notice $\phi=0$ corresponds to Case (1). How to find the angles? You measure three things, $r,a,b$, where $b$ is the elliptical axis that remains unchanged during rotation of the disc. Then find $\phi$ using the relation $b=r\cos\phi$, and find $\theta$ (your angle in the prompt) using the relation $a=r|\sin\theta|\cos\phi$: $$\phi=\arccos\frac{b}{r} \quad \theta=\arcsin\frac{a}{b}$$
How to calculate double integral of a step function
$$\int_0^1\left(\frac1N\sum_{i=1}^{n} f_i+f_{n+1}\left(s-\frac nN\right)\right)^2 ds,\qquad n=\lfloor Ns\rfloor,\\ =\int_0^{1/N}(f_1s)^2ds+\int_{0}^{1/N}\left(\frac{f_1}N+f_2s\right)^2ds+...\\ =\frac{f_1^2/3}{N^3}+\frac{f_1^2+f_1f_2+f_2^2/3}{N^3}+\left(\frac{(f_1+f_2)^2+(f_1+f_2)f_3+f_3^2/3}{N^3}\right)+...\\ =\frac1{N^3}\left(\sum_if_i^2(N-i+1/3)+\sum_{i<j}f_if_j(1+2(N-j))\right) $$
Covariance of X^2 Y^2 when Cov(X,Y) = 0?
That is true under independence, but not from zero covariance alone. If $X$ and $Y$ are independent, then their covariance is zero (the converse is not necessarily true). Moreover, measurable functions of independent random variables are again independent, so $X^2$ and $Y^2$ are independent as well: squaring does not introduce any new dependence. In this way we conclude $Cov(X,Y) = Cov(X^2,Y^2) = 0$ whenever $X$ and $Y$ are independent (and the relevant moments exist). If we only know $Cov(X,Y)=0$, however, $Cov(X^2,Y^2)$ need not vanish: take $X$ standard normal and $Y=X^2$, so that $Cov(X,Y)=E[X^3]=0$ but $Cov(X^2,Y^2)=E[X^6]-E[X^2]E[X^4]=15-3=12\neq 0$.
Finding a solution of $x^{2}=a \pmod p$
You are on the right path. Let's try to prove that $a^{(p+3)/4}\equiv a \pmod{p}$. This is not always true, so our proof will run into a stumbling block. However, the stumbling block will tell us what the fix might be. (Conveniently, we are told what it is!) To prove $a^{(p+3)/4}\equiv a \pmod{p}$ is equivalent to proving that $a^{(p-1)/4}\equiv 1 \pmod{p}$. Since $a$ is a quadratic residue, say $a\equiv b^2\pmod{p}$, we have $a^{(p-1)/4}\equiv b^{(p-1)/2}\equiv \pm 1\pmod p$. If you prefer, you may express this in terms of order. The order of $a^{(p-1)/4}$ is either $1$ or $2$. If $a^{(p-1)/4}$ has order $1$, we are finished. What about if the order is $2$? Then $a^{(p+3)/4}\equiv -a \pmod{p}$. Awfully close, except for that unfortunate minus sign. That's where the $r$ of the statement of the problem comes to the rescue.
Suppose that$\ gcd(b, a) = 1$. Prove that $\gcd(b + a, b − a) \leq 2$
Let $d=\gcd(b+a,b-a)$. Since $d$ divides both $(b+a)+(b-a)=2b$ and $(b+a)-(b-a)=2a$, we have $d\mid 2a$ and $d\mid 2b$. If $d$ is odd, then $d\mid 2a$ implies $d\mid a$ and $d\mid 2b$ implies $d\mid b$, so $d\le\gcd(a,b)$. If $d$ is even, say $d=2e$, then $d\mid 2a$ means $2a=kd=2ke$ for some integer $k$, i.e. $e\mid a$; similarly, $e\mid b$, hence $e\le \gcd(a,b)$. In other words, we have the more general statement $$ \gcd(a+b,a-b)\le 2\gcd(a,b)$$
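The general inequality is easy to verify exhaustively for small values (Python's `math.gcd` treats a zero argument as expected, so the $a=b$ case is covered):

```python
from math import gcd

# exhaustive check of gcd(a+b, a-b) <= 2*gcd(a, b) for small a, b
violations = [(a, b)
              for a in range(1, 60) for b in range(1, 60)
              if gcd(a + b, a - b) > 2 * gcd(a, b)]
```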
Does $\cos \pi/2$ really equal Zero?
$\cos\left(\frac{\pi}{2}\right)=0$. The reason you are getting $-0.5$ is because you are not putting brackets around $\pi/2$. Thus, you are obtaining the value of $(\cos\pi)/2=(-1)/2=-0.5$. You need to use the brackets as follows: $\cos\left(\frac{\pi}{2}\right)$.
Proving Riemann Integrability- Uniform Convergence
For $\epsilon > 0$, there is $N \in \mathbb{N}$ such that $$ |f(x) - f_n(x)| < \epsilon \quad\forall n\geq N, x \in I $$ Whence $$ f(x) < f_N(x) + \epsilon \leq \sup_{n\in \mathbb{N}} f_n(x) + \epsilon \quad\forall x\in I $$ and so $$ f < \sup f_n + \epsilon $$
Calculate the sum $\sum_{n=0}^\infty\frac{1}{(4n)!}$
Hint: (a) Write down the Maclaurin series for $\cos x$; (b) Write down the Maclaurin series for $\cosh x$, that is, $\frac{e^x+e^{-x}}{2}$; (c) Look.
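Spelling out the hint: averaging the two Maclaurin series kills the terms whose exponent is not a multiple of $4$, so the sum at hand equals $\frac{\cos 1+\cosh 1}{2}$; a quick numerical check:

```python
import math

# partial sum of 1/(4n)! ; the terms decay factorially, so 10 terms suffice
s = sum(1 / math.factorial(4 * n) for n in range(10))

# closed form suggested by the hint: (cos 1 + cosh 1) / 2
closed_form = (math.cos(1) + math.cosh(1)) / 2
```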
How to prove that $n^{\log_{2} (n) } = O (2^n) $ ?
HINT: What is $$\lim_{x\to\infty}\frac{(\ln x)^2}x\;?$$
tautological implication and deduction
Indeed, if $\Sigma$ is finite, the above steps finish at some point; one can actually show that the process finishes when the step $(*)$ has been applied repeatedly until all the members of $\Sigma$ have been used. The reason is $\Sigma\vDash\tau$. Let's look at an example. Let $\Sigma=\{\alpha,\beta,\gamma\}$, and suppose we want a deduction from $\Sigma$ whose last component is $\tau$, and say $\Sigma\vDash\tau$. If we apply the step $(*)$ repeatedly until we use all the members of $\Sigma$, then we get, for instance, $<\alpha,\alpha\rightarrow(\beta\rightarrow(\gamma\rightarrow \tau)),\beta, \beta\rightarrow(\gamma\rightarrow \tau), \gamma,\gamma\rightarrow \tau,\tau>$. Now if we look at $\alpha\rightarrow(\beta\rightarrow(\gamma\rightarrow \tau))$ we can see that it is a tautology, because of $\Sigma\vDash\tau$: $\alpha\rightarrow(\beta\rightarrow(\gamma\rightarrow \tau))$ cannot have the truth value $F$, because if $\alpha,\beta,\gamma$ are true, then $\tau$ cannot be $F$ (because of $\Sigma\vDash\tau$), and that is the only way to get $F$ for $\alpha\rightarrow(\beta\rightarrow(\gamma\rightarrow \tau))$. If $\Sigma$ is infinite, then consider the following corollary to the compactness theorem: $$\text{if }\Sigma\vDash\tau,\text{ then there is a finite }\Sigma_0\subseteq\Sigma\text{ such that }\Sigma_0\vDash\tau.$$Then we can argue as above with $\Sigma_0$.
What is the distribution of randomly choosing a binomial group?
Let $X$ be the number of people who correctly chose the outcome of the coin toss. Let $Y$ be a Bernoulli random variable with $Y=1$ corresponding to the coin coming up heads. $$ X\mid Y=1\sim \text{Bin}(n, p) $$ and $$ X\mid Y=0\sim \text{Bin}(n, 1-p) $$ so by the law of total probability $$ \begin{align} P(X=k)&=P(X=k\mid Y=1)P(Y=1)+P(X=k\mid Y=0)P(Y=0)\\ &=qP(X=k\mid Y=1)+(1-q)P(X=k\mid Y=0) \end{align} $$ for $0\leq k\leq n$ and the distribution of $X$ is a mixture of binomial distributions.
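The mixture pmf is easy to write down directly; `mixture_pmf` is a helper name made up for this sketch. One sanity check: for a fair coin ($q=1/2$) the mixture is symmetric, $P(X=k)=P(X=n-k)$, since $\text{Bin}(n,1-p)$ at $k$ equals $\text{Bin}(n,p)$ at $n-k$.

```python
from math import comb

def binom_pmf(j, n, p):
    return comb(n, j) * p**j * (1 - p) ** (n - j)

def mixture_pmf(j, n, p, q):
    # P(X = j) = q * Bin(n, p)(j) + (1 - q) * Bin(n, 1-p)(j)
    return q * binom_pmf(j, n, p) + (1 - q) * binom_pmf(j, n, 1 - p)
```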
Example of a finite non-commutative ring without a unity
There are many examples in this spirit: the $n\times n$ matrices over a finite field with bottom row zero.
Calculating the integral $ \int_{e}^{\infty} e^{-\frac{1}{2} (nx)^2 }dx$
There is no solution in a finite number of terms involving the elementary functions for the first integral; its value is $a(1-\Phi(y))$ where the values of $a$ and $y$ can be determined via a change of variables that transform the integral into the integral formula for $aP\{X > y\}$ where $X$ is a standard normal random variable. The second integral should be integrated by parts to simplify it a little. Try going the other way first: what is the derivative of $e^{-n^2x^2/2}$? Can you express the integral you need to compute in terms of $\int x\ \mathrm d(e^{-n^2x^2/2})$ and use integration by parts? The third question you have solved already as your response to Nate Eldredge's hint seemed to indicate, but if not, look again to my suggestion of the derivative of $e^{-n^2x^2/2}$.
Removing $\phi(1)=1$ in a ring homorphism
A multiplicative map $\phi: R \to S$ must send $1_R$ to an idempotent of $S$ because $\phi(1_R)^2=\phi(1_R^2)=\phi(1_R)$. If $S$ only has trivial idempotents, $0_S$ and $1_S$, then $\phi$ is either the zero map or $\phi(1_R)=1_S$.
Verification of category
Given $f\in \hom((A,\alpha,a),(B,\beta,b)),\ g\in\hom((B,\beta,b),\ (C,\gamma,c))$, the first candidate that comes to mind for $gf$ is the set-theoretic function $g\circ f$. You need to verify that $g\circ f\in\hom((A,\alpha,a),\ (C,\gamma,c))$, i.e. that $[g\circ f](a)=c$ and that $(g\circ f)\circ \alpha=\gamma\circ(g\circ f)$. Once you've done this, associativity follows from the fact that the usual composition of functions is associative. In this case, proving that composition preserves the commuting condition is quite immediate, since it amounts to using the associativity of function composition twice together with the hypotheses $f\circ\alpha=\beta\circ f$ and $g\circ\beta=\gamma\circ g$.
How do I find the derivative to this function?
From the diagram, we have $$\cos\theta = \frac xa\tag 1$$ and the derivative of the function is $$y'=-\tan\theta = -\sqrt{\sec^2\theta -1}= -\sqrt{\frac1{\cos^2\theta} -1}\tag 2$$ Plug (1) into (2) to obtain the derivative $$y'= -\frac{\sqrt{a^2-x^2}}x$$
Show that a group of order $p^2q^2$ is solvable
Your argument works just as well with $p$ and $q$ switched, so the only time you have trouble is if both $n_p=q$ and $n_q = p$. Since $1\equiv n_p \mod p$ and $1\equiv n_q \mod q$ this puts very strong requirements on $p$ and $q$. Hint 1: Unless $n_p=1$, $n_p > p$. Hint 2: If $n_p=q$, then $q>p$. If $n_q =p$, then $p>q$. Oops. Fix for OP's argument: The OP's argument is currently flawed in the case $n_p=q^2$, so this answer is only truly helpful after that flaw is fixed. A very similar argument to the one given in this answer works. The first part of your argument works, and the $p-q$ symmetry helps: If $n_p=1$ or $n_q=1$, then the group is solvable. Now we use the Sylow counting again to get some severe restrictions: If $n_p \neq 1$, then $n_p \in \{q,q^2\}$ and in both cases we have $1 \equiv q^2 \mod p$. Similarly, if $n_q \neq 1$, then $1 \equiv p^2 \mod q$. Unfortunately now we don't get an easy contradiction, but at least we only get one possibility: Since $p$ divides $q^2-1 = (q-1)(q+1)$, we must also have $p$ divides $q-1$ or $q+1$, so $p \leq q+1$ and $q \leq p+1$, so $p-1 \leq q \leq p+1$. If $p=2$ is even, then $q$ is trapped between 1 and 3, so $q=3$. If $p$ is odd, then $p-1$ and $p+1$ are both even, so the only possibility for $q \neq p$ is $q=p-1=2$ (so $p=3$) or $q=p+1=2$ (so $p=1$, nope). Hence the only possibility is $p=2$ and $q=3$ (or vice versa). In this case, we get: If $p=2$ and $q=3$, then $n_q \in \{2,4\}$. Considering the permutation action of $G$ on its Sylow $q$-subgroups, we know that $n_q=2$ is impossible (Sylow normalizers are never normal) and $n_q=4$ means $G$ has a normal subgroup $K$ so that $G/K$ is isomorphic to a transitive subgroup of $S_4$ containing a non-normal Sylow 3-subgroup and having order a divisor of 36. The only such subgroup is $A_4$, so $K$ has order 3. Hence $G/K\cong A_4$ and $K \cong A_3$ are solvable, so $G$ is solvable.
proof verification: $f(x) = 1/x$ is not uniformly continuous on the open interval (0,1).
The proof is absolutely correct. If you take $x_n=1/n$ and $y_n=1/(n+1)$, then you don't need the extra condition, i.e. $n\leq3$.
$a_{2014}$ sought in a sequence
This sequence is related to one special case of an unproven conjecture by Erdős, which is equivalent to: "For any $m > 2$, given a sequence $a_k$ containing no subsequences of length $m$ that are in arithmetic progression, the harmonic sum $\sum_k \frac{1}{a_k}$ converges." The sequence you are looking at is one less than the "greedy" sequence for $m=3$, which is known as Szekeres's sequence. I'm unsure whether this sub-case of the Erdős conjecture has been proven, but it probably has because of this interesting feature: the sequence of $a_k$ you present consists of all numbers whose ternary (base 3) representation does not include the tigit (ternary digit) $2$. Now we can find $a_{2014}$ as follows: There are 2 single-tigit numbers ($0$ and $1$) in the sequence ($a_2 = 1$). Then there are two 2-tigit members ($10$ and $11$), so that $a_4 = 11 = 4$ in base 10. Then there are four 3-tigit members; $a_8 = 111 = 13$ in base 10. You see the pattern: if we take the ternary representation, which will have no 2's, of any $a_k$, then interpret those 1's and 0's as binary digits, we will have the binary representation of $k-1$. Thus the binary representation of $2013$ will be the ternary representation of $a_{2014}$. So the ternary representation of $a_{2014}$ is $11111011101$. In base ten, $$ a_{2014} = 88327 $$
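The binary-to-ternary correspondence above is one line of code, which confirms the value $88327$:

```python
def a(k):
    # ternary representation of a_k = binary representation of k - 1,
    # so: write k-1 in binary, then read those digits in base 3
    return int(bin(k - 1)[2:], 3)
```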
What am I doing wrong in calculating the Laplace Inverse Transform of $\mathcal L^{-1} \left(\frac{3}{s}-\frac{3s}{s^2+1}\right)e^{-(s+1)x}$?
Hint: $$\mathcal L^{-1}\left(\frac1s\right)=u(t)$$ Thus $$\mathcal L^{-1}\left(\frac{e^{-sx}}{s}\right)=u(t-x)$$ And according to the 2nd Shift Theorem: $$\mathcal L^{-1}(F(s)\,e^{-sx})=f(t-x)u(t-x)$$ Also $e^{-x}$ is a constant (with respect to $t$). So rewriting the equation: $$\mathcal L^{-1}\left(\frac{3}{s}-\frac{3s}{s^2+1}\right)e^{-(s+1)x}={-3\times(\color{red}{\cos(t-x)-1})\color{purple}{u(t-x)}}{}e^{-x}$$
How do I solve this exact differential equation
Hint As mlainz commented, the equation is separable; write $$x e^x dx =-\frac {dy}{y^2}$$ which you can easily integrate (the lhs will need integration by parts; the rhs is immediate). Note that since we divided both sides by $y^2$, we assumed that $y \neq 0$; $y=0$ is also a solution of the equation.
Conditional proability store
For the sake of the other users to understand the solution to this question, I'll redo the entire problem. As you stated in your question, you identified that this problem requires Bayes' Theorem and conditional probability. Let $A$ be the event that the chosen employee is a woman, and $B$ be the event that the chosen woman came from the store with $12$ employees. We are trying to find the probability of $B$ given $A$, or $P(B|A)$: $$P(B|A) = \frac{P(A \cap B)}{P(A)}$$ Let's first calculate $P(A \cap B)$, the probability that the employee is a woman and came from the store with $12$ employees. We have a $\frac{1}{3}$ chance of choosing the store with $12$ employees, since there are $3$ stores, and there is a $\frac{8}{12}$ chance to select a woman, since there are $8$ women in the store with $12$ employees. We then multiply these $2$ probabilities to get: $$P(A \cap B) = \frac{1}{3}\cdot\frac{8}{12} = \frac{2}{9}$$ Now, let's calculate $P(A)$, the probability that the chosen employee is a woman. For this probability we look at the $3$ stores, and calculate the probabilities that a woman is chosen from each of these stores. As stated earlier, the probability that we choose any one of the $3$ stores is $\frac{1}{3}$. For the first store, the probability of choosing a woman is $\frac{3}{8}$. For the second store, it is $\frac{8}{12}$. For the third store, it is $\frac{7}{15}$. Therefore, our total probability is: $$P(A) = \frac{1}{3}\cdot\frac{3}{8} + \frac{1}{3}\cdot\frac{8}{12} + \frac{1}{3}\cdot\frac{7}{15} = \frac{181}{360}$$ We therefore have our answer: $$P(B|A) = \frac{P(A \cap B)}{P(A)} = \frac{\frac{2}{9}}{\frac{181}{360}} = \frac{80}{181}$$ Hope this helped. Comment if you have any questions.
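The arithmetic above can be reproduced exactly with rational numbers:

```python
from fractions import Fraction as F

p_store = F(1, 3)                                # each store equally likely
p_woman_given_store = [F(3, 8), F(8, 12), F(7, 15)]

p_A = p_store * sum(p_woman_given_store)         # P(woman) = 181/360
p_A_and_B = p_store * F(8, 12)                   # P(woman and 12-person store)
p_B_given_A = p_A_and_B / p_A                    # Bayes: 80/181
```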
Prove equality of integrals
\begin{align} \int_0^{\infty} f(x,y) dy & = \int_0^{\infty} \text{sgn}(x-y) \exp(-\lvert x - y\rvert) dy\\ & = \int_0^{x} \text{sgn}(x-y) \exp(-\lvert x - y\rvert) dy + \int_x^{\infty} \text{sgn}(x-y) \exp(-\lvert x - y\rvert) dy\\ & = \int_0^{x} \exp(-\lvert x - y\rvert) dy - \int_x^{\infty} \exp(-\lvert x - y\rvert) dy\\ & = \int_0^{x} \exp(-(x - y)) dy - \int_x^{\infty} \exp(x-y) dy\\ & = \int_0^{x} \exp(y-x) dy - \int_x^{\infty} \exp(x-y) dy\\ & = \left. \exp(y-x) \right \rvert_{0}^{x} + \left. \exp(x-y) \right \rvert_{x}^{\infty}\\ & = \left(1 - \exp(-x) \right) + \left( 0 - 1\right)\\ & = - \exp(-x) \end{align} Hence, \begin{align} \int_0^{\infty} \left(\int_0^{\infty} f(x,y) dy \right) dx & = \int_0^{\infty} - \exp(-x) dx = \left. \exp(-x) \right \rvert_{0}^{\infty} = 0 - 1 = -1 \end{align} You can do a similar thing for $$\int_0^{\infty} \left(\int_0^{\infty} f(x,y) dx \right) dy$$ \begin{align} \int_0^{\infty} f(x,y) dx & = \int_0^{\infty} \text{sgn}(x-y) \exp(-\lvert x - y\rvert) dx\\ & = \int_0^{y} \text{sgn}(x-y) \exp(-\lvert x - y\rvert) dx + \int_y^{\infty} \text{sgn}(x-y) \exp(-\lvert x - y\rvert) dx\\ & = \int_0^{y} -\exp(-\lvert x - y\rvert) dx + \int_y^{\infty} \exp(-\lvert x - y\rvert) dx\\ & = \int_0^{y} -\exp(x - y) dx + \int_y^{\infty} \exp(y-x) dx\\ & = -\left. \exp(x-y) \right \rvert_{0}^{y} - \left. \exp(y-x) \right \rvert_{y}^{\infty}\\ & = - \left(1 - \exp(-y)\right) - \left( 0 - 1\right)\\ & = \exp(-y) \end{align} Hence, \begin{align} \int_0^{\infty} \left(\int_0^{\infty} f(x,y) dx \right) dy & = \int_0^{\infty} \exp(-y) dy = - \left. \exp(-y) \right \rvert_{0}^{\infty} = - \left(0 - 1 \right) = 1 \end{align}
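As a numerical sanity check (a hypothetical script, not part of the original computation), the inner integral $\int_0^{\infty} f(x,y)\,dy = -e^{-x}$ can be verified at, say, $x = 1$, splitting the quadrature at the jump $y = x$ and truncating the exponentially small tail:

```python
import math

def f(x, y):
    # f(x, y) = sgn(x - y) * exp(-|x - y|), with sgn(0) = 0
    s = (x > y) - (x < y)
    return s * math.exp(-abs(x - y))

def trap(g, a, b, n):
    # Composite trapezoidal rule for g on [a, b] with n subintervals
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        s += g(a + i * h)
    return s * h

x = 1.0
# Integrate [0, x] and [x, 40] separately, since f jumps at y = x;
# the tail beyond y = 40 contributes only about exp(x - 40)
inner = trap(lambda y: f(x, y), 0.0, x, 200000) + trap(lambda y: f(x, y), x, 40.0, 200000)
print(inner, -math.exp(-x))  # both close to -0.368
```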
Does the series solution to $2\frac{d^2y}{dx^2}-8x y=0$ have an infinite radius of convergence?
To solve $$ y''=4xy $$ (the given equation divided by $2$), let $$ y=\sum_{k=0}^\infty a_kx^k $$ Then $$ \sum_{k=0}^\infty k(k-1)a_kx^{k-2}=4\sum_{k=0}^\infty a_kx^{k+1} $$ Comparing coefficients of $x^{k+1}$ gives the recurrence $$ (k+3)(k+2)a_{k+3}=4a_k $$ By the ratio test, $$ \limsup_{k\to\infty}\left|\frac{a_{k+3}}{a_k}\right|=\limsup_{k\to\infty}\frac4{(k+3)(k+2)}=0 $$ Thus, the series has an infinite radius of convergence.
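A small sketch (hypothetical, not part of the original answer) of how the recurrence generates the coefficients, taking $a_0 = 1$, $a_1 = 0$; note $a_2 = 0$ is forced, since the $x^0$ coefficient on the left is $2a_2$ while the right side has no constant term:

```python
from fractions import Fraction

# Build coefficients of y = sum a_k x^k from (k+3)(k+2) a_{k+3} = 4 a_k
N = 30
a = [Fraction(0)] * (N + 1)
a[0] = Fraction(1)  # a_1 = a_2 = 0
for k in range(N - 2):
    a[k + 3] = 4 * a[k] / ((k + 3) * (k + 2))

# For this choice of a_0, a_1 only indices k ≡ 0 (mod 3) are nonzero
print(a[3], a[6])  # 2/3 4/45
```

The ratios $a_{k+3}/a_k = 4/((k+3)(k+2))$ shrink to zero, which is exactly why the radius of convergence is infinite.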
symbols to denote mathematical structures
Concerning Galois theory, field extensions are often denoted by $L/K$, with intermediate fields written $E, E', M, M', N, N'$ and so on. Specific fields have their own symbols, like $\Bbb F_p$ for the finite field with $p$ elements, or $\Bbb F_q$ with $q=p^n$. An algebraic closure of a field $K$ is often denoted by $\overline{K}$, e.g., the Galois extension $\overline{\Bbb Q}/\Bbb Q$ with absolute Galois group $\rm{Gal}(\overline{\Bbb Q}/\Bbb Q)$. The field of meromorphic functions on a Riemann surface $X$ is often denoted by $\mathcal{M}(X)$.
Differentiation from first principles
> How would I prove that $\frac{d}{dr}(1+t-2t^2) = 1-4t$? I assume you want to find the derivative with respect to $t$, not $r$. > Using differentiation from first principles. I tried to integrate the equation and got the following: $f(t) = t + \frac{1}{2}t^2 - \frac{2}{3}t^3$. Why would you integrate if you want to differentiate (from first principles or otherwise)? > Then I tried to use the equation $\frac{f(t+h)-f(t)}{h}$. That's better: use the definition and find the following limit: $$\lim_{h \to 0}\frac{f(t+h)-f(t)}{h}$$ for $f(t) = 1+t-2t^2$. > Is this correct, and what do I do after this? Use $f$ to evaluate $f(t+h)$ and $f(t)$ in the limit above: substitute and simplify first.
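Carrying the substitution through (a sketch of the simplification): $$f(t+h)-f(t) = \bigl(1+(t+h)-2(t+h)^2\bigr)-\bigl(1+t-2t^2\bigr) = h - 4th - 2h^2,$$ so $$\frac{f(t+h)-f(t)}{h} = 1 - 4t - 2h \;\xrightarrow{\;h\to 0\;}\; 1-4t.$$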
Proving that $\sum_{i=2}^{M}\frac{\pi(x^{1/i})}{i}=O(x^{1/2})+O(Mx^{1/3})$
We don't even need the prime number theorem for this. Trivially, we have that $\pi(x)\leq x,$ so $$\sum_{i=2}^{M}\frac{\pi\left(x^{1/i}\right)}{i}\leq\sum_{i=2}^{M}\frac{1}{i}x^{\frac{1}{i}}$$ $$\leq\frac{1}{2}x^{1/2}+x^{1/3}\sum_{i=3}^{M}\frac{1}{i}x^{1/i-1/3},$$ and since $1/i-1/3\leq0$ for $i\geq3,$ we have the inequality $$\sum_{i=2}^{M}\frac{\pi\left(x^{1/i}\right)}{i}\leq\frac{1}{2}x^{1/2}+x^{1/3}\sum_{i=3}^{M}\frac{1}{i}.$$ From here, we immediately obtain your inequality, but notice that by using the upper bound $\sum_{i=1}^{n}\frac{1}{i}\ll\log n$, we could replace $M$ by $\log M.$ Additional Remarks: Using Chebyshev's upper bound, or equivalently the upper bound from the prime number theorem, we could obtain a factor of $\log x$ in the denominator as well. Note as well that the function you are looking at is a truncated version of Riemann's $\Pi(x)$, which we may define as $$\Pi(x)=\sum_{n=1}^\infty \frac{1}{n}\pi\left(x^{1/n}\right).$$
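A quick numerical sanity check (a hypothetical script, not part of the original argument) of the inequality $\sum_{i=2}^{M}\pi(x^{1/i})/i \le \frac{1}{2}x^{1/2} + x^{1/3}\sum_{i=3}^{M}\frac{1}{i}$ for one choice of $x$ and $M$:

```python
def prime_sieve(n):
    # Sieve of Eratosthenes: sieve[k] is True iff k is prime
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sieve

x, M = 10**6, 10
sieve = prime_sieve(int(x**0.5) + 1)  # x^{1/2} is the largest root needed
pi = lambda t: sum(sieve[:int(t) + 1])  # prime-counting function

lhs = sum(pi(x**(1 / i)) / i for i in range(2, M + 1))
rhs = 0.5 * x**0.5 + x**(1 / 3) * sum(1 / i for i in range(3, M + 1))
print(lhs <= rhs)  # True
```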
Result of topological definition of continuity of a function
$f|_{A_k}$ is not continuous. For example, the map $x \mapsto [x]$ (the floor function) defined on $[0,1]$ is explicitly written as $$x \mapsto \begin{cases} 0 & x \in [0,1) \\ 1 & x=1 \end{cases}$$ which is not continuous at $x=1$.