probability terminology for parameter in a Markov process
I don't recognize the formula exactly but $1/\tau$ is proportional to a rate at which something happens and $\tau$ a time scale (as Didier suggests in the comment). Imagine a light or other two-state system (e.g., feature on / feature off) that is being activated, de-activated, or reversed, at random times according to a random process with some characteristic rate of occurrence of each action per unit of time. The events ON (1), OFF (0), ON-reversal (10), and OFF-reversal (01) could have rates (or probabilities) $r_0, r_1, \pi_{10}, \pi_{01}$, and the events could happen according to a Poisson process or suitable Markov chain. If, on average, the light is on for a fraction $\beta$ of the time, then for large $\Delta t$ what happens at the two times is independent, and the probability that the light is on at both times is close to $\beta^2$. For small $\Delta t$ the probability of a change of state is also small, and the probability of the light being on at both times is almost the same as that of the light being on at time $t$, which is $\beta$. $\exp(-\Delta t/\tau) = \exp (-\Delta t (\pi_{01} + \pi_{10}))$ looks like a probability that no reversals occur. For example, it is that probability for an interval of length $\Delta t$ if, on average, one person per unit time randomly walks by the light switch and a fraction $\pi_{10}$ of people always will turn the light off (if on) and $\pi_{01}$ will always turn it on (if off).
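A quick simulation can make the two limits concrete. Below is a minimal sketch (with my own illustrative rates, not values from the question) of a two-state chain with off-to-on rate $\pi_{01}$ and on-to-off rate $\pi_{10}$, discretized in small time steps; for small $\Delta t$ the joint on-probability is close to $\beta$, and for $\Delta t \gg \tau$ it approaches $\beta^2$.

```python
import random

p01, p10 = 1.0, 3.0       # off->on and on->off rates (illustrative values)
beta = p01 / (p01 + p10)  # long-run fraction of time "on"; here tau = 0.25
dt_step = 0.001           # simulation step; rate * dt_step is the flip probability

def simulate(T):
    """Return the state (0/1) sampled every dt_step up to time T."""
    state, path = 0, []
    for _ in range(int(T / dt_step)):
        r = p01 if state == 0 else p10
        if random.random() < r * dt_step:
            state = 1 - state
        path.append(state)
    return path

path = simulate(500.0)
for dt in (0.01, 5.0):    # a lag much smaller and much larger than tau
    lag = int(dt / dt_step)
    both = [a and b for a, b in zip(path, path[lag:])]
    print(dt, sum(both) / len(both))   # approx beta for small dt, beta^2 for large dt
print("beta =", beta, "beta^2 =", beta**2)
```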
Let $p(t), q(t) ∈ \mathbb C[t]$ be relatively prime, $A ∈ M_n(\mathbb{C})$. Show that $\operatorname{rank}(p(A))+\operatorname{rank}(q(A)) ≥ n$.
Using $1=pr+qs$ one can easily deduce $\operatorname{ker}(p(A)) \subset \operatorname{Im}(q(A))$: if $p(A)x=0$, then $x = r(A)p(A)x + s(A)q(A)x = q(A)\bigl(s(A)x\bigr)$. Thus we get $$n = \operatorname{rank}(p(A)) + \dim \operatorname{ker}(p(A)) \leq \operatorname{rank}(p(A))+\operatorname{rank}(q(A)).$$ This proof gives you for free: Equality holds iff $\operatorname{ker}(p(A)) \supset \operatorname{Im}(q(A))$ iff $(pq)(A)=0$.
Equation of one variable
It's $$(x-2)(16x^6+30x^5+65x^4+86x^3+240x^2+128x+1024)=0$$ and since $$16x^6+30x^5+65x^4+86x^3+240x^2+128x+1024=$$ $$=(16x^6+30x^5+15x^4)+(50x^4+86x^3+37x^2)+(203x^2+128x+1024)>0,$$ we get the answer: $\{2\}$.
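To see why each group is positive: each one is a nonnegative power of $x$ times a quadratic with positive leading coefficient and negative discriminant (and the last group is positive at $x=0$ as well). A quick sympy check of both the grouping and the discriminants:

```python
import sympy as sp

x = sp.Symbol('x')
sextic = 16*x**6 + 30*x**5 + 65*x**4 + 86*x**3 + 240*x**2 + 128*x + 1024
groups = [x**4*(16*x**2 + 30*x + 15),
          x**2*(50*x**2 + 86*x + 37),
          203*x**2 + 128*x + 1024]

print(sp.expand(sum(groups) - sextic))   # 0, so the grouping is correct
for q in (16*x**2 + 30*x + 15, 50*x**2 + 86*x + 37, 203*x**2 + 128*x + 1024):
    print(sp.discriminant(q, x))         # all negative, so each quadratic is > 0
```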
integer programming formulation problem
After thinking further, the following either-or constraints do the trick. So, when $x<\sigma_l$, then $z$ must equal one and $u$ is set to $\sigma_w-x$. Otherwise, $z=0$ and $u=0$. $$\begin{aligned} 0 &\leq x - \sigma_l + Mz\\ 0 &\leq -u + Mz \\ x - \sigma_l &\leq M(1-z) \\ \sigma_w - x &\leq u \leq \sigma_w - x + M(1-z) \\ u &\geq 0 \end{aligned}$$
How to find numbers $k$ such that $kx - \ln(ex + 1-x) $ is positive on $(0,1]$?
Hint: In order that $$g(x)= kx - \ln(ex + 1-x) > 0, \quad x\in (0,1],$$ $k$ must be such that for no $x$ in the interval could $g(x)$ be negative. So, first look at where the extremum is located (corresponding to $g'(x)=0$), and use the second derivative test to see whether it is a maximum or a minimum. If it is a minimum, express the value of $g(x_*)$ (where $g'(x_*)=0$) and find the condition on $k$. I am sure that you can take it from here. I hope and wish that you understand the reasoning; if not, just ask.
Derive Richardson's method to central divided difference
Note that by Taylor's expansion around $x=x_0$ $$f(x_0+h)=f(x_0)+f'(x_0)h+o(h)$$ $$f(x_0-h)=f(x_0)-f'(x_0)h+o(h)$$ subtracting both sides $$f(x_0+h)-f(x_0-h)=2f'(x_0)h+o(h)$$ thus $$f'(x_0)=\frac{f(x_0+h)-f(x_0-h)}{2h}+\frac{o(h)}{h}$$ and recall that $\frac{o(h)}{h}\to 0$.
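A small numerical sketch (illustrative, not part of the original answer) comparing the forward and central difference quotients for $f=\exp$ at $x_0=1$ makes the improvement visible; the central error shrinks like $h^2$, the forward error only like $h$:

```python
import math

f, x0 = math.exp, 1.0
exact = math.exp(1.0)   # derivative of exp at 1
for h in (0.1, 0.01, 0.001):
    forward = (f(x0 + h) - f(x0)) / h
    central = (f(x0 + h) - f(x0 - h)) / (2 * h)
    print(h, abs(forward - exact), abs(central - exact))
```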
If $P \leq G$, $Q\leq G$, are $P\cap Q$ and $P\cup Q$ subgroups of $G$?
$P$ and $Q$ are subgroups of a group $G$. Prove that $P \cap Q$ is a subgroup. Hint 1: You know that $P$ and $Q$ are subgroups of $G$. That means they each contain the identity element, say $e$, of $G$. So what can you conclude about $P\cap Q$ if $e \in P$ and $e \in Q$? (Just unpack what that means for their intersection.) Hint 2: You know that $P, Q$ are subgroups of $G$. So they are both closed under the group operation of $G$. If $a, b \in P\cap Q$, then $a, b \in P$ and $a, b \in Q$. So what can you conclude about $ab$ with respect to $P\cap Q$? This is about proving closure under the group operation of $G$. Hint 3: You can use similar arguments to show that for any element $c \in P\cap Q$, $c^{-1} \in P\cap Q$. That will establish that $P\cap Q$ is closed under inverses. Once you've completed each step above, what can you conclude about $P\cap Q$ in $G$? $P$ and $Q$ are subgroups of a group $G$. Is $P\cup Q$ a subgroup of $G\;$? Here, you need to provide only one counterexample to show that it is not necessarily the case that $P\cup Q$ is a subgroup of $G$. Suppose, for example, that your group $G = \mathbb{Z}$, under addition. Then we know that $P = 2\mathbb{Z} \le \mathbb{Z}$ under addition (all even integers), and $Q = 5\mathbb{Z} \le \mathbb{Z}$ under addition (all integer multiples of $5$). So $P \cup Q$ contains $2\in P$ and $5 \in Q.\;\;$ But:$\;$ is $\;2 + 5 = 7 \in P\cup Q\;$? So what does this tell you regarding whether or not $P \cup Q$ is a subgroup of $\mathbb{Z}\;$?
How to make derivative operation in matrix space?
I refuse to answer this question without adding the following comment: one common reason for confusion when it comes to taking the derivative of equations from linear algebra is a poor or inconsistent choice of notation. This is the case here, too. If you take the transpose of an $n$-vector you get a $(1,n)$-matrix. If you apply this from the left to an $(n,m)$-matrix you get a $(1, m)$-matrix, an object from which you cannot subtract an $m$-vector. The result of this kind of inconsistency in notation is more inconsistencies when you perform operations on these constructions, like taking the derivative. Consequently your computation contains some lines which are not really well defined, e.g. if they contain products $XX$ -- you do not assume $X$ to be a square matrix, so this is not defined. Another choice of notation which can turn into a trap is using the square notation for the scalar product of a vector with itself, like $v^2 = \langle v, v\rangle = v^T v$. While $v^2$ is common, it is easier to use one of the other forms when it comes to taking the derivative. I'm also not too happy with the notation $\frac{d}{d\theta}$, since $\theta$ is a vector. I prefer something like $D_v f(\theta)$, denoting the directional derivative of $f(\theta)$. Now if $f(\theta) = X\theta$ you just get $D_v f = Xv$ (assuming $X$ is independent of $\theta$). Combining the comments: what you could look at is either $\langle X\theta - y, X\theta -y\rangle$ or $\langle (\theta^T X)^T - y, (\theta^T X)^T -y\rangle$, depending on what $X$ looks like. I assume you are interested in the first variant. Then, taking the derivative with respect to a vector $v$ you get $$D_v \frac{1}{2}\langle X\theta-y,X\theta-y\rangle = \frac{1}{2}\bigl(\langle Xv,X\theta -y\rangle + \langle X \theta -y, Xv\rangle\bigr) = \langle Xv,X \theta -y\rangle = \langle v,X^TX\theta-X^Ty\rangle $$ This will be $=0$ for any $v$, so the second factor needs to be $=0$, hence $$X^TX\theta = X^Ty \Rightarrow \theta = (X^TX)^{-1}X^T y$$ (assuming $X^TX$ is invertible ...) which miraculously is the result you were expecting.
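Here is a small numeric check of the final normal-equation formula (a sketch with made-up random data) against numpy's built-in least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))   # 20 samples, 3 features
y = rng.normal(size=20)

theta = np.linalg.inv(X.T @ X) @ X.T @ y         # (X^T X)^{-1} X^T y
theta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(theta, theta_lstsq))           # True
```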
Exchange limit on bounds of Lebesgue integral
Take the Lebesgue measure space $(\mathbb R, \mu)$, $E_n= (-\infty, 1/n)$ and $$f(x)=\begin{cases} 0 & x \le 0\\ 1/x & x>0 \end{cases}$$ Then $E:=\lim\limits_{n \to \infty}E_n =(-\infty , 0]$, and $$\infty = \lim_{n \to \infty}\int_{E_n}f \,d \mu \neq \int_{E}f \,d\mu =0.$$
Limit of Euler's Totient function
Certainly. Let $\{p_i\}_{i\in\mathbb{N}}$ enumerate the prime numbers, and take $s_n=p_np_{n+1}$. We have $$\lim_{n\rightarrow\infty}\frac{\phi(p_np_{n+1})}{p_np_{n+1}}=\lim_{n\rightarrow\infty}\frac{p_np_{n+1}-p_n-p_{n+1}+1}{p_np_{n+1}}=1.$$
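A quick computation with sympy illustrates the convergence (`prime(n)` is the $n$-th prime):

```python
from sympy import prime, totient

for n in (10, 100, 1000):
    s = prime(n) * prime(n + 1)
    print(n, float(totient(s)) / s)   # creeps up towards 1
```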
How do I find this line integral using Green's Theorem and Change of Variables?
Can we at least assume that you know what "Green's theorem" is? Green's theorem says that $\oint L\,dx+ M\,dy= \iint_D \left(\frac{\partial M}{\partial x}- \frac{\partial L}{\partial y}\right) dx\,dy$ where $D$ is the region bounded by the closed curve that the integral on the left is taken over. Here we have $L= xy- \sin(x)$ and $M= y^2- \cos(y)$, so $\frac{\partial M}{\partial x}= 0$ and $\frac{\partial L}{\partial y}= x$. By Green's theorem, we only have to integrate $\iint_D (-x)\, dx\,dy$ over the region inside that contour. Of course that contour is a little "complicated". It is a rectangle with vertices (0, 0), (2, 1), (1, -2), and (3, -1), but the edges are not parallel to the x and y axes. That's where the "change of variable" comes in. The line from (0, 0) to (2, 1) is x= 2y. If we make the change of variable $u= \frac{2\sqrt{5}}{5}x+ \frac{\sqrt{5}}{5}y$, $v= \frac{\sqrt{5}}{5}x- \frac{2\sqrt{5}}{5}y$ (those coefficients are the sine and cosine from a 2, 1, $\sqrt{5}$ right triangle; I am effectively rotating the coordinate system), then when (x, y)= (0, 0), (u, v)= (0, 0); when (x, y)= (2, 1), (u, v)= ($\sqrt{5}$, 0); when (x, y)= (1, -2), (u, v)= (0, $\sqrt{5}$); and when (x, y)= (3, -1), (u, v)= ($\sqrt{5}$, $\sqrt{5}$). That is the rectangle with vertices at (0, 0), ($\sqrt{5}$, 0), (0, $\sqrt{5}$), and ($\sqrt{5}$, $\sqrt{5}$). Since the transformation is a rotation, the Jacobian is 1 and $x= \frac{\sqrt 5}{5}(2u+ v)$. Integrate, in the u,v coordinate system, taking u from 0 to $\sqrt{5}$ and v from 0 to $\sqrt{5}$.
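As a numerical sanity check (a sketch assuming the square above, traversed counterclockwise), the line integral should match the double integral of $-x$, which by symmetry is $-\bar x\cdot\text{Area} = -1.5\cdot 5 = -15/2$:

```python
import numpy as np
from scipy.integrate import quad

L = lambda x, y: x*y - np.sin(x)
M = lambda x, y: y**2 - np.cos(y)

# the square from the answer, traversed counterclockwise
verts = [(0.0, 0.0), (1.0, -2.0), (3.0, -1.0), (2.0, 1.0)]

total = 0.0
for i in range(4):
    (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % 4]
    dx, dy = x1 - x0, y1 - y0
    # edge parametrized as (x0 + t*dx, y0 + t*dy), t in [0, 1]
    edge = lambda t: L(x0 + t*dx, y0 + t*dy)*dx + M(x0 + t*dx, y0 + t*dy)*dy
    total += quad(edge, 0.0, 1.0)[0]

print(total)   # approx -7.5, as Green's theorem predicts
```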
Number of ways to finish a game
If player a is awarded a point then player b loses a point, and the other way around. Therefore the final score is always going to be $(4,-4)$ or $(-4,4)$. However, there are infinitely many ways this can happen: the game can last 4 turns, 6 turns, 8 turns, any even number of turns. Therefore there are infinitely many ways it can happen, but only two possible outcomes.
Unique solution in linear optimisation problem
Your primal problem is $$\min_x\left\{a^Tx : B^Tx = c, x\geq 0\right\}$$ The dual problem is: $$\max_y \left\{c^Ty : By \leq a\right\}$$ To ensure that the dual has a unique solution, you could regularize the dual into: $$\max_y \left\{c^Ty - \rho ||y||_2 : By \leq a\right\}$$ with $\rho>0$. The corresponding primal is: $$\min_x\left\{a^Tx : ||B^Tx-c||_2 \leq \rho, x\geq 0\right\}$$ If you pick $\rho$ sufficiently small, the optimal value will not change too much.
$z_1,...z_n$ are the $n$ solutions of $z^n =a$ and $a$ is real number, show that $z_1+...+z_n$ is a real number
Two observations: If $z$ is a root of $x^n = a$, then $\overline{z}$ is as well, since $\overline{z}^n = \overline{(z^n)} = \overline{a} = a$. $z + \overline{z}$ is real. Now do you see why it's true?
Numerical Integration of a Gaussian Distribution in Polar Coordinates
This is to be expected; in fact we can calculate from the error you report that your $\sigma$ value must have been around $0.1$. From the way you set up your $r$ grid, you're effectively using the trapezoidal rule. You've got $N+1$ points and you're normalizing by $N$. The two outer points should be weighted by $1/2$, but since the values there are $0$ that doesn't matter. The error of the trapezoidal rule on one interval can be estimated by expanding the function at the centre of the trapezoid and comparing the trapezoidal approximation of the quadratic term with its exact integral, which yields $$\frac{f''(x_0)}2\left(\frac h2\right)^2h-\int_{x_0-h/2}^{x_0+h/2}\frac{f''(x_0)}2(x-x_0)^2\,\mathrm dx=\frac{f''(x_0)}{12}h^3\;,$$ where $h$ is the interval length. Approximating the sum over all intervals by an integral then yields $$ \int_a^b\frac{f''(x)}{12}h^3\frac1h\,\mathrm dx=\frac{h^2}{12}\left(f'(b)-f'(a)\right)=\frac{(b-a)^2}{12N^2}\left(f'(b)-f'(a)\right)\;, $$ where $N$ is the number of intervals. In your case $b-a=1$, $N=50$, $f'(b)\approx0$ and $f'(a)=1/\sigma^2$ (after cancelling $2\pi$ with the angular integration), so the error would be expected to be about $-(12\cdot50^2\sigma^2)^{-1}$, which is $-0.0033$ for $\sigma\approx0.1$. The reason you didn't have this problem in the first integration is that you had $f'(a)\approx f'(b)\approx0$ in that case. I guess the moral of the story is to use quadrature rules with higher error orders.
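The OP's exact integrand isn't shown, so here is a self-contained sketch with a similar radial profile, $g(r) = (2r/\sigma^2)e^{-r^2/\sigma^2}$ on $[0,1]$ (so $g'(0)=2/\sigma^2$ and $g'(1)\approx 0$); the trapezoidal error matches the $\frac{h^2}{12}\bigl(g'(b)-g'(a)\bigr)$ estimate:

```python
import numpy as np

sigma, N = 0.1, 50
r = np.linspace(0.0, 1.0, N + 1)
g = (2 * r / sigma**2) * np.exp(-(r / sigma)**2)

h = 1.0 / N
trap = h * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)   # composite trapezoidal rule
exact = 1.0 - np.exp(-1.0 / sigma**2)               # closed form of the integral
dg_a, dg_b = 2 / sigma**2, 0.0                      # g'(0) and g'(1) (approx 0)
print(trap - exact, (h**2 / 12) * (dg_b - dg_a))    # both approx -0.0067
```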
Finding aggregate score from incomplete data
Your "updating" formula is correct. You do not need to know every value to know the average, all you need is the sum of scores and the number of scores. Your formula is effectively "recovering" the previous sum, adding in the new result, and re-averaging with one higher number of scores. A more compact version might be: $\bar x_{n} = \frac{\bar x_{n-1}(n-1)+x_n}{n}$ Now you really only need to know the total number of reviews, the previous average score and the current review score.
Using Weierstrass approximation to show $f(x) = 0.$
I cannot really follow what you are trying to do in your argument, nor where your inequalities come from. The way it is usually done is this: because you can approximate $x^5f(x)$ uniformly with polynomials, you get $\int_0^1(x^5f(x))^2\,dx=0$, and then $f(x)=0$ by continuity. In more detail: let $\{p_n\}$ be polynomials with $p_n(x)\to x^5f(x)$ uniformly. The uniform convergence makes the integrals $0=\int_0^1 x^5 f(x)p_n(x)\,dx$ converge to $\int_0^1(x^5f(x))^2\,dx=0$. In part (ii), you can show that $\int_0^1 p(x^2)\,x^2f(x)\,dx=0$ for all $p$. Now choose polynomials $\{p_n\}$, with zero constant term, that approximate uniformly the function $g(x)=x\,f(\sqrt x)$. Then $$ 0=\int_0^1p_n(x^2)x^2f(x)\,dx \to \int_0^1g(x^2)x^2f(x)\,dx=\int_0^1 x^4(f(x))^2\,dx. $$ So $x^2f(x)=0$; this immediately implies that $f(x)=0$ for $x\ne0$, and we also get $f(0)=0$ by continuity.
Find displacement function using mass and spring constant?
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\omega = \root{k \over m} = \root{300 \over 3}\mrm{sec}^{-1} = 10\,\mrm{sec}^{-1}}$ The general solution is given by: \begin{align} \mrm{y}\pars{t} &= A\cos\pars{\omega t} + B\sin\pars{\omega t}\implies \mrm{y}'\pars{t} = -A\omega\sin\pars{\omega t} + B\omega\cos\pars{\omega t} \\ &\ \mrm{y}\pars{0} = 10\,\mrm{cm} = {1 \over 10}\,\mrm{m} \implies A = {1 \over 10}\,\mrm{m} \\ &\ \mrm{y}'\pars{0} = 100\,{\mrm{cm} \over \mrm{sec}} = 1\,{\mrm{m} \over \mrm{sec}} \implies B = 1\,\mrm{m \over sec}\,{1 \over \omega} = {1 \over 10}\,\mrm{m} \end{align} $$\bbx{\ds{% \mrm{y}\pars{t} = \bracks{\cos\pars{10\,\mrm{sec}^{-1}\,t} + \sin\pars{10\,\mrm{sec}^{-1}\,t}}{1 \over 10}\,\mrm{m}}} $$
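A quick symbolic check of the boxed answer (a sketch using sympy, with $y$ in metres, $t$ in seconds, $m=3$, $k=300$):

```python
import sympy as sp

t = sp.Symbol('t')
y = sp.Rational(1, 10) * (sp.cos(10*t) + sp.sin(10*t))

print(sp.simplify(3*sp.diff(y, t, 2) + 300*y))   # 0, so m y'' + k y = 0
print(y.subs(t, 0), sp.diff(y, t).subs(t, 0))    # 1/10 and 1, the initial data
```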
Most direct way to prove the domain of $A^2$ is dense.
Assume that $A : \mathcal{D}(A)\subseteq X\rightarrow X$ is self-adjoint on the complex Hilbert space $X$. Then $A\pm iI$ are surjective, which also makes the following surjective on $\mathcal{D}(A^2)$: $$ (A+iI)(A-iI)= A^2+I $$ That is enough to imply that $A^2$ is self-adjoint, a result which follows from the theorem given below. Theorem: Let $B : \mathcal{D}(B)\subset \mathcal{H}\rightarrow\mathcal{H}$ be a symmetric linear operator on a linear domain $\mathcal{D}(B)$. If $B$ is surjective, then $B$ is densely-defined and self-adjoint. Proof: Suppose that $B$ as described is surjective, and suppose $y\in\mathcal{D}(B^*)$. Then there exists $x\in\mathcal{D}(B)$ such that $Bx=B^*y$, which gives $$ \langle Bz,y\rangle = \langle z,B^*y\rangle=\langle z,Bx\rangle=\langle Bz,x\rangle,\;\;\; z\in\mathcal{D}(B). $$ Therefore, $$ \langle Bz,y-x\rangle = 0,\;\;\; z\in\mathcal{D}(B). $$ It follows that $y=x$ because $B$ is surjective. Hence, $y\in\mathcal{D}(B^*)$ is in $\mathcal{D}(B)$. So $B$ is self-adjoint. $\blacksquare$
Calculating sin and cos based on combination of exponentiation and power series?
Computing $(\pi/2)^{22} / 22!,$ $(\pi/4)^{18}/18!,$ and $(\pi/8)^{14}/14!$ shows that halving the interval length allows chopping off only two terms from the Taylor polynomial; computing $(\pi/16)^{12}/12!$ shows that an additional halving of the interval length only allows chopping off one additional term of the Taylor polynomial. This means halving the interval can only eliminate at most two multiplications. The double angle formula for $\cos(2x)$ from $\sin(x)$ and $\cos(x)$ requires two multiplications, plus the computation of both $\sin(x)$ and $\cos(x)$; the double angle formula for $\sin(2x)$ requires one multiplication and computation of both $\sin(x)$ and $\cos(x)$. So if you need to compute both $\sin(x)$ and $\cos(x)$ at the same time, you could reduce the total number of multiplications by one if you compute $\sin(x/2)$ and $\cos(x/2)$ and then compute $\sin(x) = \sin(x/2)\cos(x/2) + \sin(x/2)\cos(x/2)$ and $\cos(x)=\cos(x/2)^2 - \sin(x/2)^2.$ On the other hand, if you only need to compute one of $\cos(x)$ or $\sin(x),$ it will be much more expensive to compute both $\sin(x/2)$ and $\cos(x/2)$ and use the double angle formula than it would be to compute $\cos(x)$ or $\sin(x)$ directly from the Taylor polynomial.
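Here is a small sketch of the simultaneous scheme (illustrative, not from the original answer): evaluate the Taylor polynomials at $x/2$, then recover both values with the double angle formulas.

```python
import math

def sincos_taylor(x, terms=8):
    # Taylor polynomials for sin and cos about 0
    s = sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))
    c = sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))
    return s, c

def sincos_half_angle(x, terms=8):
    # evaluate at x/2, then use the double angle formulas
    s, c = sincos_taylor(x / 2, terms)
    return 2*s*c, c*c - s*s

x = 1.3
print(sincos_half_angle(x))            # both values from one half-angle evaluation
print(math.sin(x), math.cos(x))        # reference
```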
$\int\frac{1}{\sqrt{a^2-x^2}}dx\overset{?}=\cos^{-1}\left(\frac{x}{a}\right)$
Assuming you haven't read any book's solution manual about this problem, we can start again by letting $\theta \in (-\frac{\pi}{2}, \frac{\pi}{2}), x = a\sin \theta\Rightarrow dx = a\cos \theta\, d\theta\Rightarrow \sqrt{a^2-x^2} = \sqrt{a^2-a^2\sin^2\theta}= a\cos \theta\Rightarrow I = \displaystyle \int 1\,d\theta= \theta + C = \sin^{-1}\left(\dfrac{x}{a}\right)+C$. Since $\sin^{-1}\left(\frac{x}{a}\right)+\cos^{-1}\left(\frac{x}{a}\right)=\frac{\pi}{2}$ is constant, this also shows the antiderivative in the title is off by a sign: $\int\frac{dx}{\sqrt{a^2-x^2}} = -\cos^{-1}\left(\frac{x}{a}\right)+C'$.
Suppose $C_1,C_2,...,C_n$ are compact sets on $\mathbb{R}$. Prove that $\underset{i = 1}{\bigcup}^n C_i$ is compact
$\mathcal U\cup\{\Bbb R\setminus C_n\}$ covers all the $C_i$-s because it covers $\Bbb R$. Your mistake lies elsewhere. Other than the small imprecision of you saying that $\mathcal U$ is an open cover, but then claiming to take "intervals" from $\mathcal U$ (in fact, $\mathcal U$ may not contain intervals at all), there's the catastrophic issue that you have only proved that all covers coming from a construction such as yours admit a finite subcover: effectively, just the covers $\mathcal V$ of $\bigcup_{i=1}^n C_i$ such that $\Bbb R\setminus C_n\in \mathcal V$ (or, in a broader sense, the ones such that there is some $i$ such that $\Bbb R\setminus C_i\in\mathcal V$). But you have to prove existence of a finite subcover for all open covers of $\bigcup_{i=1}^n C_i$.
Method of Lagrange multipliers to find all critical points of a function
Let us discuss the example you were given. Generally, this optimization method uses the following strategy. Let $f(x,y,z)$ be the function that we are attempting to determine the critical points for, subject to the constraint equation $$g(x,y,z)=k$$ for some $k \in \mathbb{R}$. We solve the following system: $$\nabla f(x,y,z) = \lambda \nabla g(x,y,z) \\g(x,y,z)=k$$ of four equations and four unknowns (note that $\nabla$ is the gradient function which returns the vector composed of partial derivatives with respect to $x$, $y$, and $z$). In this case, we have $f(x,y,z)=2x+y-2z$ and $g(x,y,z)=x^2+y^2+z^2=4$ (this is a sphere of radius $2$). Thus, we have the following system of equations: $$\begin{cases}2 = 2\lambda x \,\,\,\,\,\,(f_x = \lambda g_x) \\ 1 = 2\lambda y \,\,\,\,\,\, (f_y=\lambda g_y)\\ -2 = 2\lambda z \,\,\,\,\,(f_z= \lambda g_z)\\ x^2+y^2+z^2=4\end{cases}$$ There are various ways that you can solve this, but we will solve in the following way. Multiplying the first equation by $yz$, the second equation by $xz$, and the third equation by $xy$ and setting each of these equal to one another, we obtain $$2\lambda xyz = \begin{cases} 2yz \\ xz \\ -2xy \end{cases}$$ So, first we have $x = 2y$ upon dividing $2yz=xz$ by $z \neq 0$. Then we also have $z=-2y$ upon dividing $xz=-2xy$ by $x \neq 0$. Finally, we have $x=-z$ upon dividing $2yz = -2xy$ by $2y \neq 0$. Applying this, we substitute for $x$ and $z$ in terms of $y$ into the fourth equation to get $$x^2 +y^2 +z^2 =4 \implies 4y^2 + y^2 + 4y^2 = 9y^2 = 4 \implies y = \mp \frac{2}{3}$$ I will let you solve for the other $3$ unknowns (consider each case separately: assume $y = -\frac{2}{3}$ and solve for $x,z,\lambda$ and then assume $y=\frac{2}{3}$ and solve for $x,z,\lambda$). Recall from before that $z = -2y$ and $x=-z$. You will find the two solutions $$(x,y,z,\lambda)=\left(\mp \frac{4}{3},\mp \frac{2}{3}, \pm \frac{4}{3},\mp \frac{3}{4}\right) .$$ These solutions $(x,y,z)$ are the critical points of the function $f$ under this constraint $g(x,y,z)=4$ and we can use multiple ways to classify them (as, for instance, maximums, minimums, or saddle points).
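If you want to confirm the two critical points, sympy can solve the same four-equation system directly (a quick check, not a replacement for the hand computation):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)
eqs = [2 - 2*lam*x,                  # f_x = lam * g_x
       1 - 2*lam*y,                  # f_y = lam * g_y
       -2 - 2*lam*z,                 # f_z = lam * g_z
       x**2 + y**2 + z**2 - 4]       # the constraint
print(sp.solve(eqs, [x, y, z, lam]))
# the two solutions (-4/3, -2/3, 4/3, -3/4) and (4/3, 2/3, -4/3, 3/4)
```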
Expected value of sum of a random number of i.i.d. random variables
$$\begin{aligned}\mathsf{E}Y & =\sum_{n=0}^{\infty}\mathsf{E}\left[Y\mid N=n\right]\mathsf{P}\left(N=n\right)\\ & =\sum_{n=1}^{\infty}\mathsf{E}\left[Y\mid N=n\right]\mathsf{P}\left(N=n\right)\\ & =\sum_{n=1}^{\infty}\mathsf{E}\left[X_{1}+\cdots+X_{n}\mid N=n\right]\mathsf{P}\left(N=n\right)\\ & =\sum_{n=1}^{\infty}\mathsf{E}\left[X_{1}+\cdots+X_{n}\right]\mathsf{P}\left(N=n\right)\\ & =\sum_{n=1}^{\infty}\left[\mathsf{E}X_{1}+\cdots+\mathsf{E}X_{n}\right]\mathsf{P}\left(N=n\right)\\ & =\sum_{n=1}^{\infty}n\mathsf{E}X_{1}\mathsf{P}\left(N=n\right)\\ & =\mathsf{E}X_{1}\sum_{n=1}^{\infty}n\mathsf{P}\left(N=n\right)\\ & =\mathsf{E}X_{1}\mathsf{E}N \end{aligned} $$ second equality: because the first term is $0$ since $\mathsf E[Y\mid N=0]=0$. fourth equality: because $N$ is independent wrt the $X_i$ fifth equality: linearity of expectation. seventh equality: factor $\mathsf EX_1$ does not depend on index $n$ so can be taken out of the summation and placed before summation symbol.
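To see the identity numerically, here is a small simulation sketch with made-up distributions ($N$ uniform on $\{0,\dots,10\}$, drawn independently of the $X_i$, which are $U(0,1)$), so $\mathsf EX_1\,\mathsf EN = 0.5\cdot 5 = 2.5$:

```python
import random

def sample_Y():
    N = random.randint(0, 10)                       # E[N] = 5, independent of the X_i
    return sum(random.random() for _ in range(N))   # X_i ~ U(0,1), E[X] = 1/2

trials = 200_000
est = sum(sample_Y() for _ in range(trials)) / trials
print(est)   # close to E[X] * E[N] = 2.5
```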
Prove that $\left(\dfrac{b}{a}+\dfrac{d}{c}\right)\cdot\left(\dfrac{a}{b}+\dfrac{c}{d}\right)\geq4$ with $a>0, b>0 , c> 0$ and $d>0.$
Use AM-GM: $\frac{ad + bc}{2} \ge \sqrt{abcd}$. Squaring both sides gives $(ad+bc)^2 \ge 4abcd$, and since the left-hand side of your inequality equals $\frac{(ad+bc)^2}{abcd}$, you get the answer. A tiny tip: If everything is positive, and you have an inequality, think about AM-GM once at least.
If $m\left( \sum_{k=1}^n E_k \right)>n-1 $ and each $m(E_k)>0,$ is it true that $m\left( \bigcap_{k=1}^n E_k \right)$ has positive measure?
For $1\le k\le n$, let $$ A_k=k\sqrt 2+\Bbb Q,$$ let $$A=\bigcup_k A_k $$ and finally $$E_k=\left(\left[\tfrac {k-1}n,\tfrac {k}n\right)\setminus A\right)\;\cup\; \bigl([0,1]\cap A_k \bigr)$$ Then the $E_k$ have measure $\frac 1n$, are even pairwise disjoint, but $\sum E_k$ differs from $[0,n]$ by only countably many points.
How to show associativity of multiplication of polynomials in $R[x]$, where $R$ is a commutative, associative ring
Instead of directly showing that two expressions for triple products with different orders are the same, it might be both easier and more illuminating to show that the $k$-th coefficient in the product of any number of polynomials taken in any order is the sum of all products of coefficients, one from each factor, whose powers add up to $k$. This implies associativity. We can prove this by structural induction over the factors. So let some multiple product of polynomials be given, and focus on the "outermost" multiplication, the one to be applied last. We have $$p(x) = \sum_{i=0}^n a_i x^i, \ \ q(x) = \sum_{i=0}^m b_i x^i\;,$$ where $p(x)$ and $q(x)$ are the results of carrying out all the other multiplications. By the induction hypothesis, $a_i$ is the sum of all products of coefficients, one from each factor in the left factor, whose powers add up to $i$, and the same for $b_i$ and the right factor. Now the $k$-th coefficient in the product is by definition $$\sum_{i+j=k, \ i,j \in \mathbb{N}_0} a_i b_j\;.$$ Clearly when we multiply all these products out using distributivity, we'll get every product of coefficients, one from each factor, whose powers add up to $k$, and we'll get it exactly once. That completes the induction.
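The coefficient description is exactly the convolution formula; a tiny sketch that implements it and checks associativity on an example:

```python
def poly_mul(p, q):
    # coefficient lists, index = power; r[k] = sum of a_i * b_j with i + j = k
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

p, q, r = [1, 2], [3, 0, 1], [5, 4]    # 1+2x, 3+x^2, 5+4x
print(poly_mul(poly_mul(p, q), r) == poly_mul(p, poly_mul(q, r)))   # True
```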
Prove this Floor function identity $\sum_{k=0}^{n-1} \bigl\lfloor \frac{ak+b}{c} \bigr\rfloor$
Here is a way to compute the LHS. Consider the line $L$ in the plane defined by $y = \frac{ax+b}{c}$. Clearly, the LHS counts the lattice points in $[0,n)\times (0,\infty)$ on or under the line. Call the set of such lattice points $A$. Alternatively, one may count the lattice points in $A$ "horizontally". Specifically, for each $y\in \{1, \ldots, Y\}$, where $Y:=\lfloor(an+b)/c\rfloor$ (an upper bound for the $y$-coordinates of points in $A$; any extra rows contribute $0$), the size of $A_y := A\cap \bigl([0,n)\times \{y\}\bigr)$ is equal to $n$, if $y \leq b/c$ (even $k=0$ is on or under the line); $\lfloor\frac{an+b-cy}{a}\rfloor$, if $y > b/c$. Therefore, the LHS equals $$n\lfloor b/c \rfloor + \sum_{y=\lfloor b/c\rfloor+1}^{\lfloor(an+b)/c\rfloor}\left\lfloor\frac{an+b-cy}{a}\right\rfloor.$$
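A brute-force check of the horizontal count (a quick sketch; note the cutoff $\lfloor b/c\rfloor$ is the last row containing all $n$ points):

```python
import random

def lhs(a, b, c, n):
    return sum((a*k + b) // c for k in range(n))

def horizontal(a, b, c, n):
    t = b // c   # rows y <= b/c contain all n lattice points
    return n * t + sum((a*n + b - c*y) // a
                       for y in range(t + 1, (a*n + b) // c + 1))

for _ in range(1000):
    a, b, c, n = (random.randint(1, 20) for _ in range(4))
    assert lhs(a, b, c, n) == horizontal(a, b, c, n)
print("identity verified on 1000 random cases")
```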
Fourier transform for sum-of-divisors function
Computing the sum-of-divisors function $\sigma_1(n)$ is at least as hard as determining whether or not $n$ is prime, since $n$ is prime iff $\sigma_1(n) = n + 1$. Computing with any fancy formula you write down to do this therefore ends up doing at least as much work as running a primality test, and usually it will be a bad one (e.g. trial division). Edit: Also, if $n = pq$ is a product of two distinct primes, then computing $\sigma_1(n) = pq + p + q + 1$ is exactly as hard as computing the totient $\varphi(n) = (p - 1)(q - 1) = pq - p - q + 1$. This is tantamount to solving the RSA problem, which is widely believed to be difficult (in the same way that factoring in general is widely believed to be difficult), and this belief is the foundation for the assumption that RSA is secure.
If 0.01% of a population has a disease, what sample size k is needed so that there is a 95% chance that one person with this disease falls into k?
In this case, you can use the "exact" Binomial distribution. We want $k$ such that $\mathbb{P}(j \geq 1) \geq 95\%$, where $j$ is the number of persons with the disease. This is equivalent to saying that $\mathbb{P}(j < 1) = \mathbb{P}(j = 0)\leq 5\%$. The probability is: $$ \mathbb{P}(j = 0) = \binom{k}{0} p^0 (1-p)^{k-0} = (1-p)^k $$ where $p = 0.0001$ is the probability of getting the disease. Therefore, we have to solve for $k$ $$ 0.9999^k = 0.05 $$ I.e., $$ k \log 0.9999 = \log 0.05 $$ Which gives $$ k \approx 29\ 955.8\ldots $$ So, a sample size of $29\ 956$ individuals. If you really want to use the Poisson approximation, then you need the same reasoning, but now the probability is: $$ \mathbb{P}(j = 0) = e^{-pk} \frac{(pk)^0}{0!} = e^{-pk} $$ where $p = 0.0001$ is the probability of getting the disease and $pk$ is the expected (average) number of people with the disease in a sample of size $k$. Can you take it from here and get the value of $k$ in this case? The result will not be the same as before. Edit: We just need to plug in the value of $p$ and solve: $$ e^{-0.0001k} = 0.05 $$ So, applying natural logarithms: $$ -0.0001k = \log 0.05 $$ Which gives $$ k \approx 29\ 957.3 $$ So, a sample size of $29\ 957$ will have an approximate probability of $95\%$ of containing at least one diseased person. In case you need at least $95\%$ probability, $k$ will need to be at least $29\ 958$.
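Both computations in a few lines (a quick check):

```python
import math

p = 0.0001
k_exact = math.log(0.05) / math.log(1 - p)   # binomial model: (1-p)^k = 0.05
k_pois = -math.log(0.05) / p                 # Poisson approximation: e^{-pk} = 0.05
print(k_exact, math.ceil(k_exact))   # 29955.8... -> 29956
print(k_pois, math.ceil(k_pois))     # 29957.3... -> 29958
```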
Integrate $\int_{-\infty}^\infty e^{\frac{-x^2}{a^2}} dx$
Notice, $$\int_{-\infty}^{\infty}e^{-x^2/a^2}\ dx$$ using property of even function, $f(-x)=f(x)$, $$=2\int_{0}^{\infty}e^{-x^2/a^2}\ dx$$ Now, let $\frac{x}{a}=t\implies \frac{dx}{a}=dt$ or $dx=a\ dt$, $$=2\int_{0}^{\infty}e^{-t^2} (a\ dt)$$ $$=2a\int_{0}^{\infty}e^{-t^2} dt$$ we know $\int_{0}^{\infty}e^{-x^2} dx=\frac{\sqrt \pi}{2}$, $$=2a\cdot \frac{\sqrt \pi}{2}$$ $$=\color{red}{a\sqrt \pi}$$
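A quick numerical confirmation with scipy (a sketch with an arbitrary value of $a$):

```python
import math
from scipy.integrate import quad

a = 2.5
val, _ = quad(lambda x: math.exp(-x**2 / a**2), -math.inf, math.inf)
print(val, a * math.sqrt(math.pi))   # both approx 4.4311
```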
solve the differential equation $2y^{(5)}-7y^{(4)}+12y'''-8y''=0$.
Assuming $y^{\left(n\right)}\equiv\frac{\partial^{n}y}{\partial t^{n}}$. Let $w=y^{\prime\prime}$ so that the equation becomes $$ 2w^{\prime\prime\prime}-7w^{\prime\prime}+12w^{\prime}-8w=0. $$ The characteristic polynomial, as you pointed out, is $$ 2r^{3}-7r^{2}+12r-8 $$ with imaginary and real roots (solvable in $\mathbb{Q}$). As @Pieter21 points out, you can use Wolfram to get the roots; they are not "nice" numbers. Perhaps you can just use the approximate forms $1.2581$ and $1.210\pm 1.3867i$. With a solution for $w$, you can derive a solution for $y$ since $y=\int\int w$.
Let $G = (V,E)$ be a $3$-critical graph. Show that $G$ is a cycle of odd length.
A graph has chromatic number at most 2 if and only if it has no odd cycles. Since $G$ has chromatic number 3, it has an odd cycle. Pick such an odd cycle. Since $G$ is 3-critical, for every edge $e$ the chromatic number of $G - e$ is $2$, which means $G - e$ has no odd cycles. This means that every edge of $G$ must lie on the chosen odd cycle. Therefore $G$ itself is an odd cycle.
Expressing "formally" $f(x)=\frac {1}{\sqrt {1-2x}}$ as a power series
How can I prove the result in the general case of the binomial series or how can I formally prove in the function I'm given that it is represented by it's Taylor Series? There are several nice and easy ways. Which is easiest depends on what you already know. One way uses complex analysis. The function $f_\alpha(z) = (1+z)^\alpha = \exp\bigl(\alpha \log (1+z)\bigr)$ is holomorphic on the open unit disk $\mathbb{D} = \{z\in \mathbb{C} : \lvert z\rvert < 1\}$, hence its Taylor series about $0$ converges locally uniformly to $f_\alpha$ on $\mathbb{D}$. Now one can show $$\Biggl(\frac{d}{dz}\Biggr)^n f_\alpha(z) = n!\cdot \binom{\alpha}{n}\cdot (1+z)^{\alpha-n}\tag{1}$$ by induction, and sees that the coefficients of the Taylor series about $0$ are $\binom{\alpha}{n}$. Another way uses the theory of differential equations. If we let $f_\alpha(x) = (1+x)^\alpha$ and $$g_\alpha(x) = \sum_{n=0}^\infty \binom{\alpha}{n}x^n,$$ then it is easily seen that $f_\alpha(0) = g_\alpha(0) = 1$. And if we differentiate and then multiply with $(1+x)$, we have $$(1+x)\cdot\frac{d}{dx}f_\alpha(x) = (1+x)\cdot\frac{d}{dx}(1+x)^\alpha = (1+x)\cdot \alpha (1+x)^{\alpha-1} = \alpha(1+x)^\alpha,$$ and \begin{align} (1+x)\cdot \frac{d}{dx} g_\alpha(x) &= (1+x)\sum_{n=1}^\infty n\binom{\alpha}{n}x^{n-1}\\ &= (1+x) \sum_{n=1}^\infty \alpha\binom{\alpha-1}{n-1}x^{n-1}\\ &= \alpha\cdot(1+x)\sum_{m=0}^\infty \binom{\alpha-1}{m}x^m\\ &= \alpha\Biggl(\sum_{m=0}^\infty \binom{\alpha-1}{m}x^m + \sum_{m=0}^\infty \binom{\alpha-1}{m}x^{m+1}\Biggr)\\ &= \alpha \Biggl(1 + \sum_{m=1}^\infty \biggl(\binom{\alpha-1}{m} + \binom{\alpha-1}{m-1}\biggr)x^m\Biggr)\\ &= \alpha \Biggl(1+\sum_{m=1}^\infty \binom{\alpha}{m} x^m\Biggr)\\ &= \alpha g_\alpha(x). \end{align} So the two functions satisfy the same differential equation $$(1+x)\cdot y' = \alpha y\tag{$\ast$}$$ with the initial condition $y(0) = 1$. By the uniqueness of solutions of $(\ast)$(1), we have $f_\alpha \equiv g_\alpha$ on $\{ x : \lvert x\rvert < 1\}$. (1) The map $F\colon (x,y) \mapsto \alpha\frac{y}{1+x}$ locally satisfies a Lipschitz condition in $y$ on $(\mathbb{R}\setminus \{-1\})\times \mathbb{R}$, so the Picard-Lindelöf theorem asserts the existence and uniqueness of solutions of $(\ast)$ to any given initial condition $y(x_0) = y_0$ in some neighbourhood of $x_0 \neq -1$. Not quite as nice and short, but not too bad either is showing that the series converges to $f_\alpha$ by showing that the remainder tends to $0$, where we use the integral form of the remainder term. We assume that $\alpha$ is not a non-negative integer, for in that case, the series is actually a finite sum, and the equality is just the binomial formula. Like in the complex analysis method, we see by induction that the relation $(1)$ holds, where now we restrict $z$ to be real and $> -1$. Thus the series $$\sum_{n=0}^\infty \binom{\alpha}{n} x^n$$ is the Taylor series of $f_\alpha$ with centre $0$. 
The remainder term in the integral form is $$R_n(x) = \frac{1}{n!} \int_0^x (x-t)^n f_\alpha^{(n+1)}(t)\,dt,$$ using $(1)$, that becomes $$R_n(x) = (n+1)\binom{\alpha}{n+1}\int_0^x (x-t)^n(1+t)^{\alpha-1-n}\,dt.$$ If $0 \leqslant x < 1$, we let $K = \max \{ 1, (1+x)^\alpha\}$, then $0 < (1+t)^{\alpha-1-n}\leqslant \frac{K}{(1+t)^{n+1}}\leqslant K$ for all $n\in\mathbb{N}$ and $t\in [0,x]$, and we have the estimate $$\lvert R_n(x)\rvert \leqslant K\cdot(n+1)\left\lvert\binom{\alpha}{n+1}\right\rvert \int_0^x (x-t)^n\,dt = K\left\lvert \binom{\alpha}{n+1}\right\rvert\cdot x^{n+1},$$ and since the series converges for $\lvert y\rvert < 1$, in particular for $y = x$, the terms $\binom{\alpha}{n+1} x^{n+1}$ converge to $0$, so $\lvert R_n(x)\rvert \to 0$. For $-1 < x < 0$, we write \begin{align} \lvert R_n(x)\rvert &= (n+1)\left\lvert \binom{\alpha}{n+1} \int_0^x (x-t)^n(1+t)^{\alpha-1-n}\,dt \right\rvert\\ &= (n+1)\left\lvert \binom{\alpha}{n+1} \int_0^{\lvert x\rvert} (x+t)^n(1-t)^{\alpha-1-n}\,dt\right\rvert\\ &= (n+1) \left\lvert\binom{\alpha}{n+1}\right\rvert \int_0^{\lvert x\rvert} (\lvert x\rvert-t)^n(1-t)^{\alpha-1-n}\,dt\\ &= (n+1) \left\lvert\binom{\alpha}{n+1}\right\rvert \int_0^{\lvert x\rvert} \biggl(\frac{\lvert x\rvert-t}{1-t}\biggr)^n (1-t)^{\alpha-1}\,dt. \end{align} Now we note that $t \mapsto \frac{\lvert x\rvert-t}{1-t}$ is monotonically decreasing on $[0,\lvert x\rvert]$, so we obtain the estimate $$\lvert R_n(x)\rvert \leqslant (n+1)\left\lvert\binom{\alpha}{n+1}\right\rvert\cdot \lvert x\rvert^n \underbrace{\int_0^{\lvert x\rvert} (1-t)^{\alpha-1}\,dt}_C = C\lvert \alpha\rvert\cdot\left\lvert \binom{\alpha-1}{n} x^n\right\rvert.$$ Since also the series $\sum\limits_{n=0}^\infty \binom{\alpha-1}{n}x^n$ converges on the interval $(-1,1)$, it follows that $R_n(x) \to 0$ also for $x\in (-1,0)$. So we have shown that $R_n(x) \to 0$ for all $x\in (-1,1)$, hence the identity $$(1+x)^\alpha = \sum_{n=0}^\infty \binom{\alpha}{n} x^n$$ for $x\in (-1,1)$ is proved.
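As a sanity check tying this back to the original function, sympy confirms that the Taylor coefficients are the generalized binomial coefficients (here with $\alpha=-\tfrac12$; replacing $x$ by $-2x$ then gives the series for $1/\sqrt{1-2x}$):

```python
import sympy as sp

x = sp.Symbol('x')
a = sp.Rational(-1, 2)
lhs = sp.series((1 + x)**a, x, 0, 6).removeO()
rhs = sum(sp.binomial(a, n) * x**n for n in range(6))
print(sp.expand(lhs - rhs))   # 0
```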
In how many ways can we rearrange $n$ alphabets in a row while not keeping the 2 special alphabets at the front or the end?
There are $n$ positions for alphabets, of which $2$ are not allowed for the special ones. You can place the first special one in $n-2$ locations. You have used up one location, so the second special one can go in $n-3$ locations. Then there are $n-2$ locations left, so you can arrange the rest in $(n-2)!$ ways, for a total of $(n-2)(n-3)(n-2)!$ as the book says. Your factor of $2!$ is overcounting as you have already placed the special alphabets in specific locations in the previous stage.
Linear Maps, Basis of Domain, and Matrix
First question: $W$ is some arbitrary space that YOU CHOSE as the codomain in your map $T$. But $ T $ IS defined by its action on a basis of its domain. Since $ T $ is a linear transformation, we know that for any vectors $a, b \in V , c \in F$, where $ F $ is the field over which your vector spaces lie, $T(ca + b) = cT(a) + T(b)$. So, if you have a basis $ \{ e_1, ..., e_n \} $ and you know the outputs $ T(e_1), ..., T(e_n) $, given a linear combination of those vectors, say $v = c_1 e_1 + ... + c_n e_n $, now you know exactly what happens to $v $ under $T$, since you already have the values $ T(e_1), ..., T(e_n) $ and you just split up $ T(v) = T(c_1 e_1 + ... + c_n e_n) = c_1 T(e_1) + ... + c_n T(e_n) $. Also, I would like to point out that the matrices are useful for recording values if you have a basis for the domain and one for the codomain. The matrix will tell you what happens to COORDINATES in those bases. If you change your bases, you get a different matrix. For the second question: This just means there is only one linear map $T $ for which $T(v_1) = w_1, T(v_2) = w_2, ... $. This is again for the same aforementioned reason. Last question: $ V , W $ are ABSTRACT vector spaces. So a vector could be a row vector, column vector, polynomial, linear transformation, etc. The COORDINATES are column vectors. If you have an $n$ dimensional vector space, then in any basis, the coordinates will be of the form $ \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, ..., \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} $. Second thing: the columns of the matrix may not be linearly independent, so they may not be a basis, but they SPAN the image up to isomorphism. If you put in coordinates of a basis vector into a linear transformation, then the matrix tells you the coordinates to which that vector is sent by the transformation and the coordinates of the $j$th ordered basis vector will be sent to the $j$th column of the matrix. Try working an example yourself to see this. To see the power of this idea in the abstract, try the space of polynomials up to degree n and note that differentiation is a linear transformation. Find a reasonable basis for the polynomials (you can take the easy way out, or try something fun like the Lagrange interpolation polynomials), find a matrix for the differentiation transformation, and test out this matrix on some polynomials (which you first have to convert into coordinates in the bases you chose).
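To make the last suggestion concrete, here is a sketch for polynomials of degree $\le 3$ in the basis $\{1, x, x^2, x^3\}$: column $j$ of the matrix holds the coordinates of the derivative of the $j$-th basis vector.

```python
import numpy as np

# d/dx maps 1 -> 0, x -> 1, x^2 -> 2x, x^3 -> 3x^2;
# each image, written in coordinates, becomes a column.
D = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

p = np.array([5, 0, -1, 2], dtype=float)   # coordinates of 5 - x^2 + 2x^3
print(D @ p)                               # [0, -2, 6, 0], i.e. -2x + 6x^2
```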
Find the least next N-digit number with the same sum of digits.
Divide the number into four regions: Region 1: Trailing zeros. Region 2: The lowest digit not in Region 1. Region 3: Consecutive $9$s starting with the digit above Region 2. Region 4: All remaining digits. Region 1 and Region 3 may be empty. Region 4 may also be empty: if it is assume that it has value 0. The required number is made up from bolting together the values $$\text{Region 4} + 1\quad\text{Region 1} \quad\text{Region 2} - 1 \quad\text{Region 3.}$$ It is obvious that the number of digits is the same because the $1$ added to Region 4 is cancelled out by the $1$ subtracted from Region 2 and all the other digits are the same, if in a different order. Example 1: $217$ has no Region 1 or Region 3, Region 2 is $7$ and Region 4 is $21$. The required number is made up from $21+1$ and $7-1$, or $226$. Example 2: $197$ has no Region 1, Region 2 is $7$, Region 3 is $9$ and Region 4 is $1$. The required number is made up from $1+1$ and $7-1$ and $9$, or $269$. Example 3: $97$ has no Region 1, Region 2 is $7$, Region 3 is $9$ and Region 4 is empty so is assigned $0$. The required number is made up from $0+1$ and $7-1$ and $9$, or $169$. Example 4: $199$ has no Region 1, Region 2 is $9$, Region 3 is $9$ and Region 4 is $1$. The required number is made up from $1+1$ and $9-1$ and $9$, or $289$. Example 5: $468992000$ Region 1 is $000$, Region 2 is $2$, Region 3 is $99$ and Region 4 is $468$. The required number is made up from $468+1$ and $000$ and and $2-1$ and $99$, or $469000199$.
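Here is the recipe in code (a sketch of my reading of the region rules), together with a brute-force cross-check on the worked examples:

```python
def next_same_digit_sum(n: int) -> int:
    s = str(n)
    i = len(s)
    while i > 0 and s[i - 1] == '0':   # Region 1: trailing zeros
        i -= 1
    region1 = s[i:]
    region2 = s[i - 1]                 # Region 2: lowest digit not in Region 1
    j = i - 1
    k = j
    while k > 0 and s[k - 1] == '9':   # Region 3: consecutive 9s above Region 2
        k -= 1
    region3 = s[k:j]
    region4 = s[:k] or '0'             # Region 4: the rest (empty -> value 0)
    return int(str(int(region4) + 1) + region1 + str(int(region2) - 1) + region3)

def brute(n):
    target = sum(map(int, str(n)))
    m = n + 1
    while sum(map(int, str(m))) != target:
        m += 1
    return m

for n in (217, 197, 97, 199, 468992000):
    assert next_same_digit_sum(n) == brute(n)
    print(n, "->", next_same_digit_sum(n))
```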
Fixing Hasse principle
The Hasse principle basically addresses the question of whether an algebraic variety defined over a global field $K$ has any $K$-rational points at all. For cubic equations (curves and varieties), this was a topic of much research. The most commonly studied obstruction to the Hasse principle is the Manin obstruction, also called the Brauer–Manin obstruction. See for example the last chapter of Y. I. Manin's book "Cubic forms: algebra, geometry and arithmetic", preview available for free on google books. Others too have taken up this line of thought and come up with various other results -- for instance, in some cases the Manin obstruction is insufficient to explain the nonexistence of rational points. This is an active topic of research. Refer to the works of David Harari, Bjorn Poonen, Alexei Skorobogatov, among others, for examples of results obtained in the last few decades. It appears to me that you have a misunderstanding about definitions. The Tate–Shafarevich group is an object attached to an elliptic curve $E$ over a number field $K$. It can be equivalently described as equivalence classes of all torsors of $E$ that have $K_v$-rational points for all places $v$ of $K$, but no $K$-rational point, modulo equivalence as $E$-torsors, the isomorphisms being defined over $K$. Here it so happens that the given elliptic curve is the Jacobian variety of the torsor, the latter being a genus $1$ curve over $K$ possibly without a $K$-rational point. The definition of an elliptic curve $E/K$ is that it is a genus $1$ curve $E$ defined over $K$ with a specified rational point (the origin), and therefore you do not need to consider the Hasse principle for the elliptic curve itself; it is a moot question, and your statement of "fixing" the Hasse principle for elliptic curves is meaningless. Lastly, the phrase "error term" is used in analytic expressions involving asymptotic results or approximations, such as in the prime number theorem. In the algebraic context that phrase is strange. You should rather say "obstruction", "obstruction group", etc.
Why is the Jacobi method defined the way it is?
It's easy to lose sight of the simple, clear intuition behind the Jacobi method when it's expressed using matrix notation. Here's the idea. Suppose we want to solve the linear system \begin{align} a_{11} x_1 + \cdots + a_{1n} x_n &= b_1 \\ & \vdots \\ a_{n1} x_1 + \cdots + a_{nn} x_n &= b_n, \end{align} and suppose that our current best guess for the solution is $$ x^k = \begin{bmatrix} x_1^k \\ \vdots \\ x_n^k \end{bmatrix}. $$ A very simple idea is to solve the first equation for $x_1$, and use our most recent estimate for the other components of $x$, like this: $$ x_1^{k+1} = \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}} x_2^k - \cdots - \frac{a_{1n}}{a_{11}} x_n^k. $$ We compute $x_2^{k+1},\ldots, x_n^{k+1}$ similarly (and we can compute them all in parallel). That's all the Jacobi method is.
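Written out in numpy (a sketch, on a diagonally dominant example where the iteration converges), each sweep solves equation $i$ for $x_i$ using the previous iterate:

```python
import numpy as np

def jacobi(A, b, iterations=50):
    x = np.zeros_like(b)
    D = np.diag(A)              # diagonal entries a_ii
    R = A - np.diagflat(D)      # off-diagonal part
    for _ in range(iterations):
        x = (b - R @ x) / D     # solve equation i for x_i, all in parallel
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b), np.linalg.solve(A, b))   # both approx [1/6, 1/3]
```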
Sum of an unknown sequence (perhaps arithmetic or geometric)
Using your relation for $n=4$ and $n=5$: $$\begin{align} a_1+a_2+a_3+a_4= 1+2^{4+1} &= 33\\ a_1+a_2+a_3+a_4+a_5= 1+2^{5+1} &= 65 \end{align}$$ So you get $$ a_5 = (a_1+a_2+a_3+a_4+a_5)-(a_1+a_2+a_3+a_4) = 65-33 = 32 $$
Manipulating a sum and showing it has an upper bound
Consider $x_1=1$, $x_2=0$ and $y_1=0$, $y_2=\lambda$. Then $$ \left|\sum_{k=1}^2x_ky_k\right|=0\le1 $$ and $$ \sum_{k=1}^2x_k^2=1 $$ However, $$ \sum_{k=1}^2y_k^2=\lambda^2 $$ which can be as large as we want. The second part follows from the Cauchy-Schwarz Inequality.
Splitting an electricity bill
Any split is going to be based on assumptions that may or may not actually be true. The most straightforward approach is to assume that each person used the same amount of electricity per week. On that basis Party $A$ used $6$ person-weeks of electricity, and Party $B$ used $6\cdot2=12$ person-weeks of electricity. Between them the two parties used $6+12=18$ person-weeks, so Party $A$ used $\frac6{18}=\frac13$ of the total and therefore owed $$\frac13\cdot\$201.44=\$67.15\;.$$ (I’ve rounded that to the nearest cent.) Party $B$ owes the remaining $\$134.29$ and will presumably split this $6$ ways.
Expand $\frac{z}{z^4+9}$ To Taylor Series
Your expansion as Taylor series around $0$ (i.e. as Maclaurin series) is fine. But we also have to state the validity of the series representation of $f(z)$ \begin{align*} f(z)=\frac{z}{z^4+9}=\frac{z}{9}\cdot\frac{1}{1-\left(-\frac{z^4}{9}\right)}=\sum_{n=0}^\infty (-1)^n\frac{z^{4n+1}}{9^{n+1}} \end{align*} The range of validity is $\left|-\frac{z^4}{9}\right|<1$ or equivalently $|z|<\sqrt{3}$. We now consider the Taylor expansion around other points $z_0\in\mathbb{C}$. The function \begin{align*} f(z)&=\frac{z}{z^4+9}\\ &=-\frac{i}{12}\cdot\frac{1}{z-\sqrt{\frac{3}{2}}\left(1+i\right)} -\frac{i}{12}\cdot\frac{1}{z-\sqrt{\frac{3}{2}}\left(-1-i\right)}\\ &\qquad+\frac{i}{12}\cdot\frac{1}{z-\sqrt{\frac{3}{2}}\left(-1+i\right)} +\frac{i}{12}\cdot\frac{1}{z-\sqrt{\frac{3}{2}}\left(1-i\right)}\tag{1} \end{align*} has four simple poles at $z_1=\sqrt{\frac{3}{2}}\left(1+i\right),z_2=\sqrt{\frac{3}{2}}\left(1-i\right),z_3=\sqrt{\frac{3}{2}}\left(-1+i\right)$ and $z_4=\sqrt{\frac{3}{2}}\left(-1-i\right)$, one in each quadrant residing on the diagonals. According to (1) we derive an expansion at $z=z_0$ via \begin{align*} \frac{1}{z-a}&=\ \frac{1}{(z-z_0)-(a-z_0)}\\ &=-\frac{1}{a-z_0}\cdot\frac{1}{1-\frac{z-z_0}{a-z_0}}\\ &=-\frac{1}{a-z_0}\sum_{n=0}^\infty\left(\frac{z-z_0}{a-z_0}\right)^n\\ &=-\sum_{n=0}^\infty\frac{1}{(a-z_0)^{n+1}}(z-z_0)^n\tag{2} \end{align*} We obtain from (1) and (2) for $z,z_0\in\mathbb{C}\setminus\{z_1,z_2,z_3,z_4\}$: \begin{align*} f(z)&=\frac{z}{z^4+9}\\ &=\frac{i}{12}\sum_{n=0}^\infty\left[ \frac{1}{\left(\sqrt{\frac{3}{2}}\left(1+i\right)-z_0\right)^{n+1}} +\frac{1}{\left(\sqrt{\frac{3}{2}}\left(-1-i\right)-z_0\right)^{n+1}}\right.\\ &\qquad\qquad\qquad\left.-\frac{1}{\left(\sqrt{\frac{3}{2}}\left(-1+i\right)-z_0\right)^{n+1}} -\frac{1}{\left(\sqrt{\frac{3}{2}}\left(1-i\right)-z_0\right)^{n+1}}\right](z-z_0)^n \end{align*} The radius of convergence of the expansion around $z=z_0$ is the distance to the pole in the same quadrant of $z_0$ or the distance to the two nearest poles if $z_0$ resides on an axis.
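sympy reproduces the Maclaurin expansion around $0$ (a quick check of the first few terms):

```python
import sympy as sp

z = sp.Symbol('z')
print(sp.series(z / (z**4 + 9), z, 0, 10))
# z/9 - z**5/81 + z**9/729 + O(z**10)
```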
Real Variable, Complex Integral $\int_{0}^{\pi} ie^{3it}\,dt$
Your working is fine so far, just evaluate $$\frac13 \left( \cos(3t) +i \sin(3t) \right) \mid_0^\pi$$
discrete probability distributions; Dish problem
The probability distribution of fish, meat, and vegetarian dishes among five customers has generating function $$ \left( {{3 f + 6 m + 5 v} \over {3 + 6 + 5}} \right) ^5 $$ So $\ldots$
Which of the following are true about the ring of continuous real valued functions C[0,1]
a),b) and d) have been treated in the comments. For c) you should notice that the constant functions give you a natural inclusion $\mathbb R^* \subset C[0,1]^*$ and $\mathbb R^* = \mathbb R \setminus \{0\}$ is not even countable, in particular not cyclic.
The sum of odd powered real numbers equals zero implies the numbers are inverses
In principle we do an induction on $n$. The result is obviously true at $n=1$ and $n=2$. Without loss of generality we may assume that none of the $a_i$ is $0$. For if some $a_i$ is equal to $0$, we can remove it, reducing the problem to the case $n-1$. Without loss of generality we may assume that if $a_i$ and $a_j$ have the same absolute value, they are in fact equal. For if $a$ and $-a$ both occur in the sequence, they can be paired and removed, and we are at the case $n-2$. It will make things easier if we use the fact that the problem is scale-invariant: The result holds for the numbers $a_1,a_2,\dots,a_n$ if and only if it holds for the numbers $Ca_1,Ca_2,\dots,Ca_n$, whatever non-zero constant $C$ we choose. Arrange the numbers in non-decreasing order of absolute value. Call them $b_1,b_2,\dots,b_n$. By scale invariance, we may assume that $b_n=1$. Several other $b_i$ may be equal to $1$, say $k$ of them. Let the largest absolute value smaller than $1$ that occurs among the absolute values of the $b_i$ be $\delta\lt 1$. Then $$b_n^l +b_{n-1}^l +b_{n-2}^l +\cdots +b_1^l \ge k(1^l) -(n-k)\delta^l.\tag{1}$$ We show that for large enough $l$, we have $$k-(n-k)\delta^l \gt 0.\tag{2}$$ That, together with Inequality (1), will contradict the assumption that $a_1^l+a_2^l +\cdots+a_n^l=0$ for all odd $l$. To prove Inequality (2), we need to show equivalently that $$\delta^l \lt \frac{k}{n-k}$$ for large enough $l$. This is clear, since $\frac{k}{n-k}$ is positive, and $\lim_{l\to\infty} \delta^l=0$. Remark: Basically, it comes down to the fact that for large enough $l$, $a_1^l+a_2^l+\cdots+a_n^l$ is dominated by the terms of largest absolute value. The above inequalities pin down the details.
Measure Theory on integrals
Note that $$\int_X|f|=\int_X\chi_E|f|+\int_X\chi_{E^c}|f|.$$
find a value so that the matrix system has a solution
It means that whenever $a\neq1$ there is a solution. (In the case $a=0$ there are infinitely many.)
Using 10p,5p,2p and 1p coins to make 10 p where p stands for PENCE.
I'll try to give you just a sketch of the reason why that works. You must take into account first of all the formal series expansion: $$ \tag{a} {1\over 1-t}=1+t+t^2+t^3+\ldots $$ Suppose you represent a total sum of $n$ pence by $x^n$, so that to combine two or more coins of values $a$ and $b$ to get a total of $a+b$ pence you just have to multiply the powers: $x^a\cdot x^b=x^{a+b}$. With only 10p coins you can get 0, 10, 20, … pence and you can represent that as an infinite polynomial $1+x^{10}+x^{20}+x^{30}+\ldots={1\over1-x^{10}}$ where I have used (a) to write that in a compact form. The same goes for the other coins, so that all the possible combinations of your coins are given by the product of all those polynomials, which is just $\displaystyle{1\over(1-x^{10})(1-x^5)(1-x^2)(1-x)}$. By multiplying out the four geometric series given by (a), this can be written as an infinite polynomial: $$ {1\over(1-x^{10})(1-x^5)(1-x^2)(1-x)}= 1+x+2 x^2+2 x^3+3 x^4+4 x^5+5 x^6+6 x^7+7 x^8+8 x^9+11 x^{10} +12 x^{11}+\ldots $$ The coefficient of $x^{10}$ will then give you the number of different combinations of coins which can be combined to give 10 pence.
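The coefficient extraction is the classic coin-counting dynamic program (a sketch; it is the same computation as multiplying out the four series):

```python
def ways(total, coins=(1, 2, 5, 10)):
    dp = [1] + [0] * total   # dp[n] = number of combinations summing to n pence
    for c in coins:
        for n in range(c, total + 1):
            dp[n] += dp[n - c]
    return dp[total]

print(ways(10))   # 11, the coefficient of x^10 above
```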
Isomorphism from $\langle Z, +\rangle$ onto $\langle Z, \ast\rangle$?
You need to define the operation $*$ on $\mathbb Z$ in order to make $f$ an isomorphism. So we need $f$ to satisfy the homomorphism property: $$f(m+n) = m+n+1 = f(m)*f(n) = (m+1)*(n+1)$$ Now, what operation $*$ will give us the equality: $$m+n + 1 = (m+1)*(n+1)\;\;?$$ How about: $p*q = p + q -1$. Compute, now, $(m+1)*(n+1)$, and see if you obtain the desired $m+n +1$.
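A quick check of the homomorphism property on a range of integers (a sketch):

```python
f = lambda n: n + 1
star = lambda p, q: p + q - 1

print(all(f(m + n) == star(f(m), f(n))
          for m in range(-5, 6) for n in range(-5, 6)))   # True
```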
summation notation of elements in several different (sub)sets
Well, assuming that the sets are finite and that $A\cap B = \emptyset$, then indeed: $$\sum\limits_{a\in A}a+\sum\limits_{b\in B}b = \sum\limits_{x\in A\cup B} x+\underbrace{\sum\limits_{y\in A\cap B}y}_{=\,0}$$ where the last sum vanishes because $A\cap B$ is empty.
Proof Check: If $f$ is multiplicative then $F(n)=\sum\limits_{d|n}f(d)$ is also multiplicative.
Maybe a bit simpler: if $n = n_1 n_2$ with $n_1$ and $n_2$ coprime, then $d | n$ iff $d = d_1 d_2$ where $d_1 | n_1$ and $d_2 | n_2$. Of course $d_1$ and $d_2$ are coprime. So $$\sum_{d | n} f(d) = \sum_{d_1 | n_1} \sum_{d_2 | n_2} f(d_1 d_2) = \sum_{d_1 | n_1} \sum_{d_2|n_2} f(d_1) f(d_2) = \left(\sum_{d_1 | n_1} f(d_1)\right) \left(\sum_{d_2 | n_2} f(d_2)\right) $$
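A brute-force check of the theorem (a sketch using $f(d)=d^2$, a multiplicative function whose divisor sum is $\sigma_2$, which is genuinely non-trivial):

```python
from math import gcd
from sympy import divisors

def F(f, n):
    return sum(f(d) for d in divisors(n))

f = lambda d: d * d
for m in range(1, 30):
    for n in range(1, 30):
        if gcd(m, n) == 1:
            assert F(f, m * n) == F(f, m) * F(f, n)
print("multiplicativity of F verified for coprime pairs up to 29")
```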
Mean (average) distance of point to line segment
To gain more insight into your answer, first consider a special case of the problem in which the line segment is positioned on the $+x$ axis, with $x_1=0$, and $x_2= l = $ segment length and $y_1=y_2=0$. Then $(x,0)$ is a typical point on the segment, the distance to which becomes \begin{align} f(x) = \sqrt{ (x-x_3)^2 + y_3^2}, ~ x\in [0,l] \end{align} The mean of $f(x)$ over the relevant interval is then $\int_{0\leq x\leq l} \frac{f(x)dx }{l}$. Next, note that no matter how your initial line segment is positioned, it can always be rotated and translated to fall on $[0,l]$. The point $(x_3,y_3)$ should be similarly transformed. Even in the transformed coordinate system, the indefinite integral $\int f(x) dx$ works out to a very complicated expression, just as you pointed out: Integrate[Sqrt[(x - a)^2 + b^2], x] == ((-a + x)*(a^2 + b^2 - 2*a*x + x^2) - b^2*Sqrt[a^2 + b^2 - 2*a*x + x^2]* Log[2*(a + Sqrt[b^2 + (a - x)^2] - x)])/ (2*Sqrt[a^2 + b^2 - 2*a*x + x^2]) where I set $(a,b):=(x_3,y_3)$. See Wolfram Alpha for this.
Open subsets of perfect Polish spaces
$G_\delta$-sets in a completely metrizable space are completely metrizable, so the same is certainly true of open sets. An open subset of a separable metrizable space is also separable and metrizable. Finally, a non-empty open subset of a perfect space is perfect: an isolated point in it would be isolated in the whole space. Thus, every non-empty open subset of a perfect Polish space is a perfect Polish space and therefore has cardinality $2^\omega=\mathfrak{c}$.
Test the binary relation on the set for reflexivity, symmetry, antisymmetry, and transitivity.
$\langle x, y\rangle$ is an ordered pair that is a member of $R$ if, and only if, $x+y=5$. For example, the ordered pair $\langle 1 , 4\rangle$ is a member of $R$, but the ordered pair $\langle 1, 3\rangle$ is not. Does that clear it up for you?
Isomorphic Local Fields from Extension of Valuation
I keep your notations $K,L,v,w$ etc. You must distinguish $2$ cases: 1) Suppose $L/K$ is Galois, with group $G$. Then $G$ permutes transitively all the $w$'s above $v$. For $a\in L$ and $\sigma \in G$, by definition $\sigma w (a)=w(\sigma^{-1}a)$. A Cauchy sequence for $w$ in $L$, acted on by $\sigma$, gives a Cauchy sequence for $\sigma w$, and conversely a Cauchy sequence for $\sigma w$, acted on by $\sigma^{-1}$, gives a Cauchy sequence for $w$; so $\sigma$ induces by continuity an isomorphism $L_w \cong L_{\sigma w}$ , and your "conjecture" holds. Note that the decomposition subgroup $G_w$:={$\sigma \in G; \sigma w =w$} is naturally isomorphic to $Gal(L_w /K_v)$. For more details, see Cassels-Fröhlich, "Algebraic Number Theory", chapter 7, §1. 2) The non Galois case can be completely different because the $w$'s above $v$ can behave independently. In general, $K_v \otimes_K L\cong \oplus L_w$ , the direct sum bearing over all the $w$'s above $v$ (op. cit., chap. 2, thm. of §10). Let us construct an example which contradicts your "conjecture". As in @KCd's comment, take $K=\mathbf Q, L=\mathbf Q(\sqrt [3]2)$ and $v=$the $5$-adic valuation. There are exactly $2$ prime ideals of $L$ above $5$, which are $P_1=(5, \alpha +1)$ and $P_2=(5, \alpha^2+3\alpha -1)$, where $\alpha=\sqrt [3]2$ for short. For details, see D. Marcus, "Number Fields", chap. 3, example p. 70, or exercise 12, p.84). For $P_1$(resp. $P_2$), the inertia index is $1$ (resp. $2$), hence the ramification index is $1$ (resp. $1$) because of the usual formula degree $3=e_1f_1 + e_2 f_2$. This means that $L_{w_1}=\mathbf Q_5$ and $L_{w_2}/\mathbf Q_5 $ is quadratic. Note that the difference with 1) is that in the Galois case all the indices $e_i$ (resp. $f_i$) must be the same. Another explanation lies in the proof of the relation $K_v \otimes_K L\cong \oplus L_w$ above (op. cit., chap. 2, §9, lemma) : the minimal polynomial $X^3 -2$ of $\alpha$ over $\mathbf Q$ has no reason to remain irreducible over $\mathbf Q_5$; actually, notice that $3^3=2$ in $\mathbf F_5$ and apply Hensel's lemma.
How to find out the solid angle subtended by a tetrahedron at its vertex?
Denote the solid angle by $\omega$ and let $v_1,v_2,v_3$ be the vectors from vertex $A$ along the edges $AB, AD, AC$. Then we have (using the usual cross product, dot product, and Euclidean norm): $$(4 \pi)\omega + \pi = \cos ^{-1} \left( \frac{ (v_1 \times v_2) \cdot (v_1 \times v_3)}{||v_1 \times v_2|| ||v_1 \times v_3||}\right) + \cos ^{-1} \left( \frac{ (v_2 \times v_1) \cdot (v_2 \times v_3)}{||v_2 \times v_1|| ||v_2 \times v_3||}\right) + \cos ^{-1} \left( \frac{ (v_3 \times v_1) \cdot (v_3 \times v_2)}{||v_3 \times v_1|| ||v_3 \times v_2||}\right)$$
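Here is the formula in numpy, checked on the corner of a cube (three mutually orthogonal edges): each dihedral angle is $\pi/2$, the sum is $3\pi/2$, and the vertex subtends $(3\pi/2-\pi)/(4\pi)=1/8$ of the sphere, as expected. (Note the formula above treats $\omega$ as the fraction of the full sphere.)

```python
import numpy as np

def solid_angle_fraction(v1, v2, v3):
    def dihedral(a, b, c):
        # angle between the planes spanned by (a, b) and (a, c)
        n1, n2 = np.cross(a, b), np.cross(a, c)
        return np.arccos(n1 @ n2 / (np.linalg.norm(n1) * np.linalg.norm(n2)))
    s = dihedral(v1, v2, v3) + dihedral(v2, v1, v3) + dihedral(v3, v1, v2)
    return (s - np.pi) / (4 * np.pi)

e1, e2, e3 = np.eye(3)
print(solid_angle_fraction(e1, e2, e3))   # 0.125
```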
Let $t_n$ denote the $n$th triangular number. For what values of $n$ does $t_n$ divide $t_1^2+t_2^2+ \cdots +t_n^2$
The equality $$ t_{1}^{2}+t_{2}^{2}+\cdots+t_{n}^{2}=t_n(3n^3+12n^2+13n+2)/30$$ can be proved by mathematical induction: (1) $n=1$. This is obvious: $t_{1}^{2}=1=t_1(3+12+13+2)/30$. (2) Assume that the formula holds for $n$. Then $$ t_{1}^{2}+t_{2}^{2}+\cdots+t_{n}^{2}+t_{n+1}^{2}=t_n(3n^3+12n^2+13n+2)/30+t_{n+1}^{2} = $$ $$=\frac{n(n+1)}{2}\cdot \frac{3n^3+12n^2+13n+2}{30}+\frac{(n+1)^2(n+2)^2}{4}=$$ $$=\frac{(n+1)(3n^4+27n^3+88n^2+122n+60)}{60}. $$ On the other hand $$t_{n+1}\frac{3(n+1)^3+12(n+1)^2+13(n+1)+2}{30}=\frac{(n+1)(n+2)(3(n+1)^3+12(n+1)^2+13(n+1)+2)}{60}= $$ $$=\frac{(n+1)(3n^4+27n^3+88n^2+122n+60)}{60}. $$ Thus the formula holds for any $n\in {\mathbb N}$. Since $t_n$ divides $t_1^2+\cdots+t_n^2$ exactly when $(3n^3+12n^2+13n+2)/30$ is an integer, we must determine when $30\mid(3n^3+12n^2+13n+2)$. Note first that $3n^3+12n^2+13n+2$ is an even integer for any $n$. For divisibility by $3$ we consider three cases: (i) $n\equiv 0\pmod 3 \Rightarrow 3n^3+12n^2+13n+2\equiv 2\pmod 3$; (ii) $n\equiv 1\pmod 3 \Rightarrow 3n^3+12n^2+13n+2\equiv 0\pmod 3$; (iii) $n\equiv 2\pmod 3 \Rightarrow 3n^3+12n^2+13n+2\equiv 1\pmod 3$. For divisibility by $5$ we consider five cases: (i) $n\equiv 0\pmod 5 \Rightarrow 3n^3+12n^2+13n+2\equiv 2\pmod 5$; (ii) $n\equiv 1\pmod 5 \Rightarrow 3n^3+12n^2+13n+2\equiv 0\pmod 5$; (iii) $n\equiv 2\pmod 5 \Rightarrow 3n^3+12n^2+13n+2\equiv 0\pmod 5$; (iv) $n\equiv 3\pmod 5 \Rightarrow 3n^3+12n^2+13n+2\equiv 0\pmod 5$; (v) $n\equiv 4\pmod 5 \Rightarrow 3n^3+12n^2+13n+2\equiv 3\pmod 5$. We conclude that $n$ has to satisfy $n\equiv 1\pmod 3$ and $n\equiv 1, 2,\ \text{or}\ 3\pmod 5$. It follows that $n$ is of the form $$ n=15m+r\quad \text{where} \quad m\in {\mathbb N}\cup \{ 0\}\quad \text{and}\quad r\in \{ 1, 7, 13\}. $$ I hope that there is no mistake in my calculations.
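A brute-force check (a quick sketch) agrees with the conclusion:

```python
def t(n):
    return n * (n + 1) // 2

for n in range(1, 200):
    total = sum(t(k)**2 for k in range(1, n + 1))
    assert (total % t(n) == 0) == (n % 15 in (1, 7, 13))
print("divisibility criterion verified for n up to 199")
```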
What does $\text{rank}(AB) = \text{rank}(A)$ imply?
It implies that $\operatorname{rank}(B)$ is at least $\operatorname{rank}(A)$. Since the rank of a matrix is the dimension of its image, and applying a linear map cannot increase dimension, we have $\dim A(B(\mathbb{R}^\ell)) \leq \dim B(\mathbb{R}^\ell)$, hence $\operatorname{rank}(AB) \leq \operatorname{rank}(B)$. As noted in the comments, you cannot deduce that $B$ is full rank. Consider $$A = \begin{pmatrix} 1 & 0 & 0\end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$ Then $$AB = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}.$$ Hence $AB = A$, and in particular $\operatorname{rank}(AB) = \operatorname{rank}(A)$. But $\operatorname{rank}(B) = 2 <3$. Edit, another example: suppose for some reason we want $A$ to be $m \times n$ and $B$ to be $n \times \ell$ with $n < m < \ell$. Let $$A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0& 0 \end{pmatrix}, \quad B = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 &0 \end{pmatrix}.$$ Then $A$ has rank $0$, $AB$ rank $0$, and $B$ rank $1 < 2$.
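A quick numerical check of the first counterexample (a sketch with numpy):

```python
import numpy as np

A = np.array([[1, 0, 0]])
B = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 0]])

print(np.linalg.matrix_rank(A @ B),   # 1
      np.linalg.matrix_rank(A),       # 1: rank(AB) = rank(A), yet...
      np.linalg.matrix_rank(B))       # 2: ...B is not of full rank 3
```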
Some questions regarding partial derivatives
Roughly speaking, taking a partial derivative means that we treat all variables of the function as independent variables and take the derivative with respect to one of them. Say, for a function $F(x,y,y')$ in general, or for your example $F(x,y,y')=x+y+y'$ specifically, we treat it as a function of the three independent variables $x$, $y$, and $y'$, and therefore there are three separate partial derivatives: $$\frac{\partial F}{\partial x}=1, \quad \frac{\partial F}{\partial y}=1, \quad\frac{\partial F}{\partial y'}=1.$$ And when treating all three of them as independent variables, then of course, the derivative of any one with respect to another is zero. However, if one of these variables in turn depends on some other variable, then the multivariate Chain Rule kicks in. What you calculated in your question, where $y=x^3$, is not a partial derivative at all, but the total derivative with respect to $x$. It would even be denoted differently: $$\frac{\mathrm{d}F}{\mathrm{d}x}=\frac{\partial F}{\partial x}\cdot\frac{\mathrm{d}x}{\mathrm{d}x}+\frac{\partial F}{\partial y}\cdot\frac{\mathrm{d}y}{\mathrm{d}x}+\frac{\partial F}{\partial y'}\cdot\frac{\mathrm{d}y'}{\mathrm{d}x}=1\cdot1+1\cdot3x^2+1\cdot6x=3x^2+6x+1.$$
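The distinction is easy to see with sympy (a sketch; `yp` stands for $y'$):

```python
import sympy as sp

x, y, yp = sp.symbols('x y yp')
F = x + y + yp

print([sp.diff(F, v) for v in (x, y, yp)])    # [1, 1, 1]: the three partials

F_total = F.subs({y: x**3, yp: 3 * x**2})     # impose y = x**3, y' = 3x**2
print(sp.expand(sp.diff(F_total, x)))         # 3*x**2 + 6*x + 1
```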
Is there any symbol for "undefined"?
According to a Wikipedia article on the subject, in Herbert B. Enderton's book Computability Theory: An Introduction to Recursion Theory (2011), even if nowhere else (no other reference is given, and I've never seen the usage): If $f$ is a partial function on $S$ and $a$ is an element of $S$, then this is written as $f(a)\!\downarrow$ and is read as "$f(a)$ is defined." If $a$ is not in the domain of $f$, then this is written as $f(a)\!\uparrow$ and is read as "$f(a)$ is undefined".
What’s a good (fairly advanced) book on probability, cumulants and CLT please?
I would recommend Introduction to Mathematical Statistics by Robert V. Hogg, Joseph McKean, and Allen T. Craig, 7th Edition. (I'm not sponsored.) The first chapter gives you pretty detailed information about probability and distributions. I also recommend looking at the second and third chapters, which cover multivariate distributions and some special distributions. If you go to the index of the book, you can look for the CLT. The copy that I currently own indicates that pages 170, 220, 225, 249, 313, 317, 323-324, 340, 342, 352, 370, 448, 456, 463, 541, 552, 570, 605, and 626 have information related to the CLT. I was struggling a lot with statistics, not coming from a math or stat background, and this book really helped me a lot. Thanks.
Matrix location by indices?
The reason why $n$ doesn't appear in $f(x,y)=m(x-1)+y$ is because your matrix is written left-to-right and then top-to-bottom. With $f(x,y)=m(x-1)+y$, when you jump to the next row ($x \mapsto x+1$), the entry in the matrix increases by $m$; however when you jump to the next column ($y \mapsto y+1$) the entry in the matrix increases only by $1$. Had you written your matrix top-to-bottom and then left-to-right, i.e. $$\begin{pmatrix} 1 & 4 & 7 & 10 \\ 2 & 5 & 8 & 11 \\ 3 & 6 & 9 & 12 \end{pmatrix}$$ then the formula would have been $g(x,y)=n(y-1)+x$, where $n$ is the number of rows; this time it is $m$ that does not appear. If you wanted to make your function give a one-to-one correspondence between the numbers $1,2,\cdots,mn$ and the entries of the $n \times m$ matrix, you'd need to specify $1 \le x \le n$ and $1 \le y \le m$ in the definition of the function; so really both numbers, $m$ and $n$, do implicitly appear.
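Here are both conventions side by side (a sketch; 1-based indices, $x$ = row, $y$ = column):

```python
def row_major(x, y, m):       # matrix filled left-to-right, top-to-bottom
    return m * (x - 1) + y

def col_major(x, y, n):       # matrix filled top-to-bottom, left-to-right
    return n * (y - 1) + x

# The 3x4 example above (n = 3 rows, m = 4 columns):
print([[col_major(x, y, 3) for y in range(1, 5)] for x in range(1, 4)])
# [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]]
```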
Show that $\frac{\pi}{4} = 1 − \frac13 +\frac15 −\frac17 + \cdots$ using Fourier series
You just have to expand $y=x/2$ as a Fourier series on $(-\pi,\pi)$: $$ {x\over2}=\sin (x)-\frac{1}{2} \sin (2 x)+\frac{1}{3} \sin (3 x)-\frac{1}{4} \sin (4x)+\frac{1}{5} \sin (5 x)-\ldots $$ and put $x=\pi/2$ here: the even-indexed terms vanish, and since $\sin\left((2k+1)\frac{\pi}{2}\right)=(-1)^k$, the right-hand side becomes $1-\frac13+\frac15-\frac17+\cdots$ while the left-hand side is $\frac{\pi}{4}$.
Fact about measurable functions defined on $\sigma$-finite measure spaces.
A diagonalization argument would work, we just need to take care how to approximate our functions. For ease of notation, define $$\begin{align}H_1 &= F_1\\ H_{n+1} &= F_{n+1}\setminus F_n\end{align}$$ so that the $H_n$'s are pairwise disjoint and $\bigcup_{k=1}^n H_k = F_n$. For all $n\geq 1$ we can take measurable simple functions $\{g^{(n)}_m\}_{m\geq 1}$ such that $g^{(n)}_m\mathop{\longrightarrow}_{m\to\infty} f^{(n)} := \chi_{H_n}f$. W.l.o.g. we can assume that $g^{(n)}_m$ is supported in $H_n$. Define the functions $$h_n := \sum_{k=1}^n g^{(k)}_n.$$ Clearly, for all $n\geq 1$ we have $\chi_{F_n}h_m\mathop{\longrightarrow}_{m\to\infty} \sum_{k=1}^n f^{(k)}=\chi_{F_n} f$ almost everywhere (since for every sufficiently large $m$ we have $\chi_{F_n}h_m = \sum_{k=1}^n g^{(k)}_m$), and since $\Omega=\bigcup F_n$ we're done.
Translating a 3D graph plot
So there are a couple of things going on here. Firstly, you are correct in one sense. $x^2 + y^2 + z^2= 4$ is the equation of a sphere of radius $2$ centered at the origin in 3-space. $(x-2)^2 + (y-2)^2 + z^2 = 4$ is that same sphere, but with center at $(2,2,0)$. So there, you are correct. But in your second example, $x + y - 4 = z$, or rather $x + y - z = 4$, these are planes in 3-space. The $4$ offsets all three coordinates, $x$, $y$, and $z$ (not just $x$). For example, one can see where the plane intercepts the three axes. In the case without the $4$, all intercepts are at $0$. But now, the $x$-intercept is $4$, the $y$-intercept is $4$, and the $z$-intercept is $-4$. You might think: that's really weird! But a plane is rigid, and moving the $x$ part affects the $y$ part too. Does that all make sense?
Riccati equation $y'+\frac{y^2}{x}=1$
[First]: Rewrite the ODE in the form $$ y'(x)=p(x)+q(x)y(x)+r(x)(y(x))^2\implies y'=1-\frac{y^2}{x}, $$ where $p(x)=1$, $q(x)=0$, and $r(x)=-\frac{1}{x}$.

[Second]: Let $y(x)=-\frac{u'(x)}{r(x)u(x)}$; we obtain $$ r(x)u''(x)-\left(r'(x)+q(x)r(x)\right)u'(x)+p(x)(r(x))^2u(x)=0\implies-\frac{1}{x}u''-\frac{1}{x^{2}}u'+\frac{1}{x^{2}}u=0. $$ That is, assuming $x\neq0$ and multiplying by $-x$, we get $$ xu''+u'-u=0. $$ You can now proceed from here.
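Here is a numerical sanity check of the substitution (a sketch with scipy; the initial data are arbitrary): integrate $xu''+u'-u=0$ away from $x=0$ and verify that $y = xu'/u$ satisfies the Riccati equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, s):            # s = (u, u'), so u'' = (u - u') / x
    u, up = s
    return [up, (u - up) / x]

sol = solve_ivp(rhs, (1.0, 3.0), [1.0, 0.5], dense_output=True, rtol=1e-10)

def y_of(x):
    u, up = sol.sol(x)
    return x * up / u     # the inverse substitution y = -u'/(r*u) = x*u'/u

xs = np.linspace(1.2, 2.8, 5)
h = 1e-6
residual = (y_of(xs + h) - y_of(xs)) / h + y_of(xs) ** 2 / xs - 1
print(np.max(np.abs(residual)))  # ~0, up to finite-difference error
```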
Prove that if matrix $A$ is an $m\times n$ and $B$ is $n\times p$, then $\operatorname{rank} AB$ is less than or equal to $\operatorname{rank} B$
I am assuming that you are using the fact that the rank of a matrix is the number of linearly independent columns of the matrix. The trick is to use block multiplication. If we write $B$ columnwise as $$B=\begin{pmatrix}\mathbf{b}_1 & \cdots & \mathbf{b}_p\end{pmatrix}$$ then we may express $AB$ in the same way as $$AB = \begin{pmatrix}A\mathbf{b}_1 & \cdots & A\mathbf{b}_p\end{pmatrix}$$ From this, it follows that if column $i$ of $B$ is a linear combination of the other columns of $B$, then column $i$ of $AB$ is the same linear combination of the corresponding columns of $AB$ (just apply $A$ to the relation). This shows that the number of linearly dependent columns of $AB$ is at least the number of linearly dependent columns of $B$. In other words, the number of linearly independent columns of $AB$ is at most the number of linearly independent columns of $B$, i.e. $$\rm rank(B) \ge \rm rank(AB)$$ You'll probably need to flesh out the above argument a bit, but this is the general idea.
The distribution of $x^t \Sigma x$
Hint: If $X \sim N(\mu,\Sigma)$, then $X=\mu + \Sigma^{1/2} Z$ where $Z \sim N(0,I)$. If $X \sim N(0,\Sigma)$, then $$X^\top \Sigma^{-1} X = Z^\top \Sigma^{1/2}\Sigma^{-1} \Sigma^{1/2} Z = Z^\top Z\sim \chi^2_n.$$
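A Monte Carlo illustration of the hint (a sketch; the matrix and names are mine):

```python
# For X ~ N(0, Sigma), the quadratic form X' Sigma^{-1} X is chi^2 with n dof.
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
n = Sigma.shape[0]

X = rng.multivariate_normal(np.zeros(n), Sigma, size=200_000)
q = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)

print(q.mean(), q.var())  # close to the chi^2_n moments: mean n, variance 2n
```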
Algebraic-Topology/Differential Topology books that also introduce General Topology
My book Topology and Groupoids has its first half giving a geometric approach to general topology appropriate for algebraic topology, including adjunction spaces, finite cell complexes (with projective spaces as examples), and function spaces. It does not include the more analysis-oriented theorems you mention. This book is almost unique among algebraic topology texts in its use of the fundamental groupoid on a set of base points, which is of course appropriate for discussing unions of non-connected spaces such as the circle (see this mathoverflow discussion) and was supported by Grothendieck in Section 2 of his 1984 Esquisse d'un Programme. Other background to the methodology is in this paper Modelling and Computing Homotopy Types: I.
Parametric solution of the Diophantine equation $\frac{1}{p}=\frac{1}{x}+\frac{1}{y}+\frac{1}{z} ,x,y,z∈Z^+.$
Got it. Your equation is $$ xy = px + py, $$ $$ xy - px - py = 0, $$ $$ xy - px - py + p^2 = p^2, $$ $$ (x-p)(y-p) = p^2. $$ Apparently this observation occurs at Number of solution for $xy +yz + zx = N$. All solutions are given by finding a divisor $w$ of $p^2,$ with triple $$ \color{magenta}{ \left( p, \; \; p + w, \; \; p + \frac{p^2}{w} \; \right).} $$ If $w < p$ these are in order; if $w=p$ it is just $(p,2p,2p)$; if $w > p$ it is a repeat, but out of order. So the total number of solutions is $$ \frac{1 + d(p^2)}{2}, $$ where $d(n)$ is the number of positive divisors of $n.$ Note that the primitive triples, those with $\gcd(p,x,y)=1$, come when my $w$ is $1$ or some other square, so that $p^2/w$ is also a square; in addition we require $\gcd(w,p^2/w)= 1$. For example $(6,10,15)$, with $w=4$ and $p^2/w = 9$. Or $$ (30,31,930); \; \; (30,34,255); \; \; (30,39,130); \; \; (30,55,66). $$

All solutions, listed as pairs $(x,y)$ for each $p$ up to $30$:

p = 1: (2, 2)
p = 2: (4, 4), (3, 6)
p = 3: (6, 6), (4, 12)
p = 4: (8, 8), (6, 12), (5, 20)
p = 5: (10, 10), (6, 30)
p = 6: (12, 12), (10, 15), (9, 18), (8, 24), (7, 42)
p = 7: (14, 14), (8, 56)
p = 8: (16, 16), (12, 24), (10, 40), (9, 72)
p = 9: (18, 18), (12, 36), (10, 90)
p = 10: (20, 20), (15, 30), (14, 35), (12, 60), (11, 110)
p = 11: (22, 22), (12, 132)
p = 12: (24, 24), (21, 28), (20, 30), (18, 36), (16, 48), (15, 60), (14, 84), (13, 156)
p = 13: (26, 26), (14, 182)
p = 14: (28, 28), (21, 42), (18, 63), (16, 112), (15, 210)
p = 15: (30, 30), (24, 40), (20, 60), (18, 90), (16, 240)
p = 16: (32, 32), (24, 48), (20, 80), (18, 144), (17, 272)
p = 17: (34, 34), (18, 306)
p = 18: (36, 36), (30, 45), (27, 54), (24, 72), (22, 99), (21, 126), (20, 180), (19, 342)
p = 19: (38, 38), (20, 380)
p = 20: (40, 40), (36, 45), (30, 60), (28, 70), (25, 100), (24, 120), (22, 220), (21, 420)
p = 21: (42, 42), (30, 70), (28, 84), (24, 168), (22, 462)
p = 22: (44, 44), (33, 66), (26, 143), (24, 264), (23, 506)
p = 23: (46, 46), (24, 552)
p = 24: (48, 48), (42, 56), (40, 60), (36, 72), (33, 88), (32, 96), (30, 120), (28, 168), (27, 216), (26, 312), (25, 600)
p = 25: (50, 50), (30, 150), (26, 650)
p = 26: (52, 52), (39, 78), (30, 195), (28, 364), (27, 702)
p = 27: (54, 54), (36, 108), (30, 270), (28, 756)
p = 28: (56, 56), (44, 77), (42, 84), (36, 126), (35, 140), (32, 224), (30, 420), (29, 812)
p = 29: (58, 58), (30, 870)
p = 30: (60, 60), (55, 66), (50, 75), (48, 80), (45, 90), (42, 105), (40, 120), (39, 130), (36, 180), (35, 210), (34, 255), (33, 330), (32, 480), (31, 930)

All primitive solutions are given by finding a divisor $w$ of $p$ such that $\gcd(w,p/w) = 1$, with triple $$ \color{magenta}{ \left( p, \; \; p + w^2, \; \; p + \frac{p^2}{w^2} \; \right).} $$ To keep them ordered we also choose $w \leq \sqrt p.$ If $p$ is a square in the first place, larger than $1,$ then $w=\sqrt p$ never gives a primitive solution anyway; that just gives $(p,2p,2p).$ Here are just the primitive ones, again as pairs $(x,y)$, for $p \leq 30$:

p = 1: (2, 2)
p = 2: (3, 6)
p = 3: (4, 12)
p = 4: (5, 20)
p = 5: (6, 30)
p = 6: (7, 42), (10, 15)
p = 7: (8, 56)
p = 8: (9, 72)
p = 9: (10, 90)
p = 10: (11, 110), (14, 35)
p = 11: (12, 132)
p = 12: (13, 156), (21, 28)
p = 13: (14, 182)
p = 14: (15, 210), (18, 63)
p = 15: (16, 240), (24, 40)
p = 16: (17, 272)
p = 17: (18, 306)
p = 18: (19, 342), (22, 99)
p = 19: (20, 380)
p = 20: (21, 420), (36, 45)
p = 21: (22, 462), (30, 70)
p = 22: (23, 506), (26, 143)
p = 23: (24, 552)
p = 24: (25, 600), (33, 88)
p = 25: (26, 650)
p = 26: (27, 702), (30, 195)
p = 27: (28, 756)
p = 28: (29, 812), (44, 77)
p = 29: (30, 870)
p = 30: (31, 930), (34, 255), (39, 130), (55, 66)

and for $p = 210$:

(211, 44310), (214, 11235), (219, 5110), (235, 1974), (246, 1435), (259, 1110), (310, 651), (406, 435).
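For what it's worth, the tables above are easy to regenerate (a sketch; it assumes sympy for the divisor list, and the function name is mine):

```python
from sympy import divisors

def solutions(p):
    """All (p, x, y) with 1/p = 1/x + 1/y and x <= y, via w | p^2, w <= p."""
    return [(p, p + w, p + p * p // w) for w in divisors(p * p) if w <= p]

for triple in solutions(30):
    print(*triple)   # the p = 30 block, from (30, 31, 930) up to (30, 60, 60)
```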
What's the probability of getting the same objects of the same colour?
$2$ white socks from $4$ white socks can be chosen in $\binom 42=\frac{4\cdot3}{2\cdot1}=6$ ways. Any $2$ socks can be chosen from the $9$ socks in $\binom92=\frac{9\cdot8}{2\cdot1}=36$ ways. So, the probability of the first two socks being white is $\frac{\text{ the number of favourable cases }}{\text{ the number of possible cases }}=\frac{6}{36}=\frac16$. Similarly, the probability of the first two socks being black is $\frac{\binom52}{\binom92}=\frac{10}{36}=\frac5{18}$. So, the probability of the first two socks being of the same colour is $\frac16+\frac5{18}=\frac49$. Alternatively: the probability of the first sock being white is $\frac4{4+5}=\frac49$. The probability of the second sock being white, given that the first one is also white, is $\frac{4-1}{4+5-1}=\frac38$. So, the probability of the first two socks being white is $\frac49\cdot\frac38=\frac16$. Similarly, the probability of the first two socks being black is $\frac5{4+5}\cdot\frac{5-1}{4+5-1}=\frac5{18}$.
Bound for multi-index sum
To provide an answer: it is basically just what @Willie Wong already said in his comments. The boundedness conditions are $|x_i| \leq a$ and $\left|\frac{1}{x_i}\right| \leq b$. By definition, $x^i= x_1^{i_1} \cdots x_n^{i_n}$. Note that for $|i|=3$ we can write $x^i=x_{j_1} x_{j_2} x_{j_3}$ (it is clear that there are at most three $j_k\neq 0$, but we can also always write it as a product of three factors; for example, if $i_j=3$, then $x_j^{i_j}=x_{j_1} x_{j_2} x_{j_3}$ where $j_1=j_2=j_3:=j$ and $i_{j_1}=i_{j_2}=i_{j_3}=1$). Therefore $|x^i| \leq a^3$, and each summand is at most $\frac{a^3 b}{j!}$, so that $$(\ast) \leq a^3 b C, \qquad C:= \sum_{|j|=3} \frac{1}{j!}.$$ As I said in my comments, for the underlying task it was important for me to see that $\frac{(hx)^i}{hx_l} = \frac{h^{|i|}}{h} \frac{x^i}{x_l} = h^2 \frac{x^i}{x_l}$ for $|i|=3$. Thanks a lot for all your help.
Length of a union of intervals
Let $A=X\setminus U$. We want to show that $|A|=0$, where $|\cdot|$ denotes the Lebesgue measure of any measurable set. Suppose by contradiction that $|A|>0$. By Lebesgue's density theorem, for any measurable set $A\subset\mathbb{R}$, the $1$-dimensional density $\Theta^1(x,A)$ exists at almost every point $x$ and equals $1$ at almost every point of $A$. Since $|A|>0$, there is a point $x\in A$ such that $\Theta^1(x,A)$ exists and equals $1$, that is $$ \lim_{r\searrow 0} \frac{|A\cap (x-r,x+r)|}{2r}=1. $$ Since $x\in X$, $(x,x+\epsilon_x)\subset U$ for some $\epsilon_x>0$, and thus $(x,x+\epsilon_x)\cap A=\emptyset$. It follows that for any $0<r<\epsilon_x$ we have that $$ |A\cap (x-r,x+r)|\le |(x-r,x)|=r, $$ and then $$ \lim_{r\searrow 0} \frac{|A\cap (x-r,x+r)|}{2r}\le\frac12, $$ that gives a contradiction. Hence we have that $|A|=|X\setminus U|=0$, that is to say that almost every point of $X$ belongs to $U$, and thus $|U|\ge|X|=1$.
Show using logarithms that the first equation can be transformed into the second.
The properties that you need are: $$ \log ab=\log a + \log b \qquad \text{and} \qquad \log a^b=b \log a. $$ Using these properties your expression becomes: $$ \log y^k=k \log y= \log\left((1-k)zx^ka^{-1}\right)=\log(1-k)+\log z+k\log x-\log a. $$ Now you can find $\log y$ as: $$ \log y=\dfrac{1}{k}\left[\log(1-k)+\log z+k\log x-\log a\right]= \dfrac{1}{k}\log(1-k)+\dfrac{1}{k}\log z+ \log x-\dfrac{1}{k}\log a $$ and, using the same properties, this becomes: $$ \log y=\log \left[ (1-k)^{\frac {1}{k}}z^{\frac {1}{k}}xa^{-\frac {1}{k}}\right]; $$ now, exponentiating, you have the result. But note that you can find the same result simply using the rules of radicals, since from $y^k=A$ you have $y=\sqrt[k]{A}=A^{\frac{1}{k}}$. In this way you see immediately that, if $k$ is even, we have a real solution only if $A=(1-k)zx^ka^{-1}$ is positive. This condition is a bit more hidden when using logarithms. Can you see where it is required?
How do I prove the middle-$\alpha$ Cantor set is perfect?
Almost entirely irrelevant to the question: That the middle-$\alpha$ Cantor set is closed is easy, as it is the intersection of closed sets. The rest will be a fairly broad outline, and there are some details to fill in. Let $C$ denote the middle-$\alpha$ Cantor set, and let $x \in C$ be arbitrary. We need to show that for all $\epsilon > 0$ there is a $y \in C \cap ( x - \epsilon , x + \epsilon )$ distinct from $x$. Note that there must be an $n$ such that the unique closed interval containing $x$ in the $n$th stage of the construction of $C$ is entirely contained within $( x - \epsilon , x + \epsilon )$. (Remember how I said some details are missing? This is where they would go. One has to determine the lengths of the intervals at each stage of the construction, but it is not overly difficult.) Note that the endpoints of this interval will be elements of $C$, and (at least) one of them is distinct from $x$. Clearly each endpoint of $I$ is an endpoint of either $I_0$ or $I_1$.

Perhaps slightly relevant to the question: I think your problem might come down to notational issues. Perhaps a better way of attacking this problem is to determine the endpoints of the open middle-$\alpha$ interval removed from an arbitrary closed interval $[a,b]$. A relatively simple calculation shows that this open interval is $\left( a+\frac{(b-a)(1-\alpha)}{2} , a+\frac{(b-a)(1+\alpha)}{2}\right)$, meaning that the subintervals remaining are $\left[ a , a+\frac{(b-a)(1-\alpha)}{2} \right]$ and $\left[ a+\frac{(b-a)(1+\alpha)}{2} , b \right]$. From here the result you are looking for is easy. As it stands, your functions $T_0$ and $T_1$ seem to really mix up the intervals, and it will make it quite difficult to find, for each interval remaining in the $(n+1)$st stage, which interval from the $n$th stage generated it. (You would have to play around with how these interact, and you could come up with a formula, but it won't be pretty.)
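The endpoint formulas are easy to iterate explicitly (a sketch; names are mine):

```python
def next_stage(intervals, alpha):
    out = []
    for a, b in intervals:
        L = b - a
        out.append((a, a + L * (1 - alpha) / 2))   # left surviving piece
        out.append((a + L * (1 + alpha) / 2, b))   # right surviving piece
    return out

stage = [(0.0, 1.0)]
for _ in range(2):
    stage = next_stage(stage, alpha=1 / 3)  # alpha = 1/3: the classical Cantor set
print(stage)  # [(0.0, 1/9), (2/9, 1/3), (2/3, 7/9), (8/9, 1.0)], as floats
```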
Working out the average age of different sub-groups
Here's how to find the average age of all users, using only your data in your last box. Convert the number of users in each age group to a percentage. You seem to be able to do this, so I won't show how it is done. Now "weigh" each age with the percentage, as such: $$0.086\cdot15+0.349\cdot21+0.343\cdot30+0.142\cdot40+0.044\cdot50=26.789$$ This calculation is called the arithmetic mean (or just mean), and tells you the average (in the sense of the mean) age of your users.
Existence and value of $\lim_{n\to\infty} (\ln\frac{x}{n}+\sum_{k=1}^n \frac{1}{k+x})$ for $x>0$
We have $$\sum_{k=1}^n \dfrac1{k+x} = \int_{1^-}^{n^+} \dfrac{d \lfloor t \rfloor}{t+x} = \left. \dfrac{\lfloor t \rfloor}{t+x} \right\vert_{t=1^-}^{t=n^+} + \int_{1^-}^{n^+} \dfrac{\lfloor t \rfloor}{(t+x)^2} dt = \dfrac{n}{n+x} + \int_{1^-}^{n^+} \dfrac{\lfloor t \rfloor}{(t+x)^2} dt$$ Now $$\int_{1^-}^{n^+} \dfrac{\lfloor t \rfloor}{(t+x)^2} dt = \int_1^{n^+} \dfrac{t}{(t+x)^2} dt - \int_1^{n^+} \dfrac{\{t\}}{(t+x)^2} dt$$ $$\int_1^{n^+} \dfrac{t}{(t+x)^2} dt = \int_1^{n} \dfrac{dt}{t+x} - x \int_1^n \dfrac{dt}{(t+x)^2} = \log(n+x) - \log(1+x) -x \left(\dfrac1{1+x} - \dfrac1{n+x}\right)$$ Hence, we get that \begin{align} \log(x/n) + \sum_{k=1}^n \dfrac1{k+x} & = \log\left(\dfrac{x}n \right) + \dfrac{n}{n+x} + \log \left(\dfrac{n+x}{1+x}\right) -\dfrac{x}{1+x} + \dfrac{x}{n+x} - \int_1^{n^+} \dfrac{\{t\}}{(t+x)^2} dt\\ & = - \dfrac{x}{1+x} + 1 + \log\left(\dfrac{x}n \cdot \dfrac{n+x}{1+x}\right) - \int_1^{n^+} \dfrac{\{t\}}{(t+x)^2} dt\\ & = \dfrac1{1+x} + \log\left(\dfrac{x}n \cdot \dfrac{n+x}{1+x}\right) - \int_1^{n^+} \dfrac{\{t\}}{(t+x)^2} dt \end{align} Now letting $n \to \infty$, we get that $$W(x) = \dfrac1{1+x} + \log \left(\dfrac{x}{1+x}\right) - \underbrace{\int_1^{\infty} \dfrac{\{t\}}{(t+x)^2} dt}_{\text{Converges since }\{t\} \in [0,1)}$$ $$0 \leq \overbrace{\int_1^{\infty} \dfrac{\{t\}}{(t+x)^2} dt}^{g(x)} \leq \int_1^{\infty} \dfrac1{(t+x)^2} dt = \dfrac1{1+x}$$ There might be some name for $g(x)$ (probably some of the number theorists on this website might be able to identify this). For instance, $g(0) = 1-\gamma$, where $\gamma \approx 0.57721$ is the Euler Mascheroni constant. Now $$\lim_{x \to \infty} W(x) = 0 + \log(1) + 0 = 0$$ Another method is as follows. From here, we have \begin{align} \sum_{k=1}^n \left(\dfrac1k - \dfrac1{x+k} \right) & = \sum_{k=1}^n \int_0^1 (y^{k-1} - y^{x+k-1})dy\\ & = \int_0^1 (1-y^x) \sum_{k=1}^n y^{k-1} dy\\ & = \int_0^1 (1-y^x) \dfrac{1-y^n}{1-y} dy \end{align} Hence, we have \begin{align} \log(x/n) + \sum_{k=1}^n \dfrac1{k+x} & = \log(x/n) + \sum_{k=1}^n \left(\dfrac1{k+x} - \dfrac1k \right) + \sum_{k=1}^n \dfrac1k\\ & = \log(x) + \sum_{k=1}^n \dfrac1k - \log(n) + \sum_{k=1}^n \left(\dfrac1{k+x} - \dfrac1k \right)\\ & = \log(x) + \sum_{k=1}^n \dfrac1k - \log(n) - \int_0^1 (1-y^x) \dfrac{1-y^n}{1-y} dy \end{align} Now letting $n \to \infty$, we get that $$W(x) = \log(x) + \gamma - \int_0^1 \dfrac{1-y^x}{1-y} dy$$ Now as $x \to \infty$, we have $$\int_0^1 \dfrac{1-y^x}{1-y} dy = \log(x) + \gamma + \mathcal{O}(1/x)$$ Hence, we get that $$\lim_{x \to \infty} W(x) = 0$$ Let us prove that, as $x \to \infty$, we have $$\int_0^1 \dfrac{1-y^x}{1-y} dy = \log(x) + \gamma + \mathcal{O}(1/x)$$ The proof is the same as before. We have $$\sum_{k=1}^n \dfrac1k = \int_0^1 \sum_{k=1}^n y^{k-1} dy = \int_0^1 \dfrac{1-y^n}{1-y} dy$$ But we know that $\displaystyle \sum_{k=1}^n \dfrac1k = \log(n) + \gamma + \mathcal{O}(1/n)$. Hence, we get that $$\int_0^1 \dfrac{1-y^n}{1-y} dy = \log(n) + \gamma + \mathcal{O}(1/n)$$ Replacing $n$ by $x$, and because the integral is a smooth function of $x$, we can conclude that $$\int_0^1 \dfrac{1-y^x}{1-y} dy = \log(x) + \gamma + \mathcal{O}(1/x)$$
The sum of a family of submodules
A module is an abelian group, and so every subgroup (and therefore any submodule) is a normal subgroup. Further, it's easy to show that scalar multiplication is well-defined in the quotient module, and so we don't need any added conditions. For your second question, that condition (that only finitely many $x_i$ are nonzero) is basically saying it contains all the finite sums of vectors in the respective spaces (so your definition was a bit redundant: you merely specified that the sums have to be finite in two different ways). This condition is necessary because there isn't a good way (for these purposes) to define an infinite sum of elements of a module. These conditions are sufficient to ensure that the sum is the module additively "generated" by the modules which you are summing, i.e. the sum module is the smallest module containing the union of all of the submodules.
Compactness of a set: where am I going wrong?
In this situation, try to apply your argument to the counterexample you found, to see where it goes wrong. Let us take for $K$ the circle of center $(1,0)$ and radius $1$. Then $S := \bigcup_{\lambda \geq 0} \lambda K = \{(0,0)\} \cup (\mathbb{R}_+^* \times \mathbb{R})$. It is not closed, because one can take the family $f(t) = (t,1)$, which belongs to $S$ for $t>0$ and converges to $(0,1) \notin S$ as $t$ goes to $0$. The corresponding families of parameters $\lambda (t)$ and $k(t)$ are: $$\lambda(t) = \frac{1+t^2}{2t}, \quad k(t) = \frac{2t}{1+t^2} (t,1).$$ As $t$ goes to $0$, we have that $k(t)$ converges to $0$. But $\lambda (t) k(t)$ does not converge to $0$, because $\lambda(t)$ increases fast enough to compensate for the decay of $k(t)$. Hence your mistake is there: the fact that $(k_n)$ converges to $0$ does not imply that $(\lambda_n k_n)$ also converges to $0$, because $\lambda_n$ has no reason to be bounded.
$\int_{J}(f(x))^{2} dx = \int_{J}(g(x))^{2} dx \Rightarrow \int_{J}|f(x)| dx = \int_{J}|g(x)| dx$
You need to be careful with quantifiers. What you proved is that if $\int_J f(x)^2\; dx = \int_J g(x)^2\; dx$ for all bounded intervals $J$, then $\int_J |f(x)|\; dx = \int_J |g(x)|\; dx$ for all bounded intervals $J$. Leaving out the for all, what you wrote says: if $\int_J f(x)^2\; dx = \int_J g(x)^2\; dx$ for some interval $J$, then $\int_J |f(x)|\; dx = \int_J |g(x)|\; dx$. That is incorrect.
$(n-1)! \equiv \text{?} \bmod n$
For primes, $(p-1)! \equiv -1 \pmod p$: Wilson's theorem. And note that $(n-1)! \equiv 0 \pmod n$ for all composite $n$ greater than $4$. You can see a similar question here, and a proof for composite $n$ is given here.
Edge coloring of a bipartite graph with a maximum degree of $D$ requires only $D$ colors
No, it's not. For each edge $vw$, we have up to $D-1$ colors that are already used on existing edges out of $v$ (in $G_n$). So you're right that there is some color we can give $vw$ that does not conflict with other edges out of $v$. But this is not enough, by itself. We have many such edges $v_1w, \dots, v_Dw$ out of $w$, and even if we can give them colors that don't conflict at $v_1, \dots, v_D$, there's no reason to think that they will all be different. In all likelihood, this strategy will give several of these edges the same color, and then you have a conflict at $w$. A "greedy" proof of König's theorem will not work. Instead, induct on $D$, proving the auxiliary theorem: in a bipartite graph with maximum degree $D$, there is a matching saturating all vertices of degree $D$. (Make this matching one of your color classes, and repeat.) This problem, in turn, can be reduced to the problem of finding a perfect matching in a $D$-regular graph.
What are some easy Banach fixed point theorem applications?
Here is a "fun" application of the Banach fixed point theorem: Pick a map of your city. Drop it on the floor of your room/office/classroom. Prove that there exists a unique point on the map which sits exactly above the corresponding point in the room. (Hint: the correspondence sending each point of the room to the room-point lying under its image on the map is a contraction, since the map is drawn at a scale smaller than $1$.)
Formulating the Twin Prime Conjecture as a Language Recognition problem.
The property $p\in B$ is decidable, just run a Turing machine which tests $p$ and $p+2$ for being prime numbers. This machine will definitely give you a result (either way) and thus decide whether $p\in B$ or $p\notin B$. For the other set $A$: If there are infinitely many twin primes, then $A$ would be empty by definition (i.e. there is no largest twin). If there are only finitely many twin primes, then $A$ would be a singleton, containing the smaller number in the biggest twin primes pair. By definition the empty set and a particular singleton (any finite set) are decidable, i.e. there are Turing machines which can decide (case by case) if an object is in an explicitly given finite set or not. The problem here is that your definition of $A$ relies on a possibly undecidable property (i.e. it could be that the twin primes problem itself is undecidable), hence the two cases of whether $A$ is empty or a singleton would not be decidable. But even if the twin problem were decidable, it could be decidable in a non-constructive fashion, meaning that the biggest twin prime would not be given explicitly (and neither would there be an upper bound on the biggest twin prime) and then the unique element in the set $A$ would not be known (or computable from the knowledge about the cardinality of existing twin primes alone).
Axiom of Limitation of Size implies Axiom of Union?
Never mind, I did a little more searching and I finally found the information I was looking for. Sorry for the hassle. Here's where I found it: https://www.jstor.org/stable/2315201?seq=1#page_scan_tab_contents
How to fit fixed data from two linear functions
Imagine that your data is presented as a binary two-dimensional image (assuming a unit value whenever the point coordinate $(x,y)$ is present in your data). Then the problem is equivalent to straight line detection in binary images. The equivalent problem can be solved using the Hough transform for straight line detection. The basic idea of the Hough transform is as follows: it maps the image from the geometrical coordinates $(x,y)$ to a parameter space of intercepts and slopes. Every point contributes one vote to each (intercept, slope) pair describing a line passing through that point. The Hough transform of the whole array is the linear superposition of the transforms of its individual points. The detected lines appear as peaks in the Hough transform image.
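Here is a minimal accumulator (a sketch; the helper name and all parameters are mine). It uses the normal form $\rho = x\cos\theta + y\sin\theta$ rather than slope/intercept, which avoids infinite slopes for vertical lines:

```python
import numpy as np

def hough_accumulator(points, n_theta=180, n_rho=200, rho_max=100.0):
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        cols = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[np.arange(n_theta), cols] += 1   # one vote per (theta, rho) cell
    return acc, thetas

# Points drawn from two lines, y = 2x + 1 and y = -x + 5:
pts = [(t, 2 * t + 1) for t in range(20)] + [(t, -t + 5) for t in range(20)]
acc, thetas = hough_accumulator(pts)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(thetas[i], acc[i, j])   # the strongest cell collects one line's votes
```

The two lines show up (up to binning effects) as the two dominant cells of `acc`.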
Applying Bolzano-Cauchy theorem with infinities
Your reasoning is correct. Notice that, by definition, $\lim_{x \to 0^-} f(x) = +\infty$ means that for all $M$, there exists a $\delta > 0$ such that $0 < 0 - x < \delta$ implies that $f(x) > M$. On the other hand, $\lim_{x \to -1^+} f(x) = -\infty$ means that for all $N$, there exists an $\eta > 0$ such that $0 < x + 1 < \eta$ implies that $f(x) < - N$. Now, fix $C$ and choose $N = M = 2|C| + 1$. Then, choosing $\delta$ and $\eta$ from the definitions (shrinking them if necessary so that $-1 + \eta/2 < -\delta/2$) gives us that $f(-1 + \eta/2) < -2|C|$ and $f(- \delta/2) > 2|C|$. By applying the Bolzano-Cauchy theorem (better known as the Intermediate Value Theorem) to the interval $[-1 + \eta/2, -\delta/2]$, we see that there must exist a $c$ with $-1 < -1 + \eta/2 < c < - \delta/2 < 0$ such that $f(c) = C$. Do notice, however, that you need a little more information to conclude that $f$ is positive to the right of $c$ and negative to the left of $c$: this requires that $f$ be increasing over $(-1, 0)$ (and strictly increasing near $0$). You can justify this by taking the derivative of $f$ and noticing that it is positive over this interval.
What does it mean for a function $f$ to be defined on a disk? (Clairaut's theorem)
No, it means that $f$ must be defined at least on a disk (a filled in circle) centred at $(a, b)$. It may be defined elsewhere too. It doesn't really matter for the purposes of determining derivatives anyway; only points around $(a, b)$ will determine its various derivatives.
Proving that if $\lim_{n \rightarrow \infty} a_n = a$ then $\lim_{n \rightarrow \infty} a_n^2 = a^2$
Hint A convergent sequence is bounded. So you can also bound $\vert a+a_n\vert$.
Commutative subring of matrices iff trivial unit group
You are very close; you just aren't taking the right products. Suppose $u\in R^*$, and consider the invertible matrices $$\left(\begin{array}{cc}u&1\\0&1\end{array}\right)\qquad\text{and}\qquad \left(\begin{array}{cc}u^{-1}&0\\0&1\end{array}\right).$$ If $T^*$ is abelian, then the two products are equal, so $$\begin{align*} \left(\begin{array}{cc} u&1\\ 0 &1\end{array}\right)\left(\begin{array}{cc} u^{-1}&0\\0&1\end{array}\right) &= \left(\begin{array}{cc} 1 & 1\\ 0 & 1 \end{array}\right)\\ \left(\begin{array}{cc} u^{-1}&0\\ 0&1\end{array}\right)\left(\begin{array}{cc} u&1\\ 0&1\end{array}\right) &= \left(\begin{array}{cc} 1 & u^{-1}\\ 0 & 1 \end{array}\right), \end{align*}$$ hence $u^{-1}=1$.
Two differentials within one integral?
The two final steps should be$$\int2\,\mathrm dx+\int\frac{\mathrm dx}{(x-1)^2}$$and$$2x-\frac1{x-1}+C$$respectively.
Does $f'(x)>\frac{f(x)}{x}$ for $x>0$ imply $f$ is convex?
Let us consider a simple polynomial function $f(x) = ax^4+bx^3+cx^2$. Obviously we have $f(0)=0$; to satisfy the condition, we need \begin{equation} f'(x) = 4ax^3+3bx^2+2cx > \frac{f(x)}{x} = ax^3+bx^2+cx. \end{equation} Thus we require $3ax^2+2bx + c > 0$ for every $x$, which is guaranteed by \begin{equation} a>0, \quad b^2 - 3ac < 0. \end{equation} To violate convexity, we need some $x_0 > 0$ such that \begin{equation} f''(x_0) = 12a x_0^2+6b x_0 + 2c < 0. \end{equation} Thus what we need is \begin{equation} \left\{ \begin{array}{c} {a>0,} \\ {b^2 - 3ac < 0,} \\ {\exists x_0, \ 12ax_0^2 + 6bx_0 + 2c <0.} \end{array} \right. \end{equation} Here we simply choose $x_0=1$, and we can easily find a feasible solution: \begin{equation} a=0.1, \quad b=-0.4, \quad c=0.55. \end{equation} A plot of this $f$ (figure omitted) confirms it: $f''(1) = -0.1 < 0$ while $f'(x) > f(x)/x$ holds for all $x>0$, so $f$ satisfies the hypothesis but is not convex.
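In place of the figure, a quick numerical confirmation (a sketch with numpy):

```python
import numpy as np

a, b, c = 0.1, -0.4, 0.55
x = np.linspace(1e-6, 5, 100_000)

f   = a * x**4 + b * x**3 + c * x**2
fp  = 4 * a * x**3 + 3 * b * x**2 + 2 * c * x
fpp = 12 * a * x**2 + 6 * b * x + 2 * c

print((fp > f / x).all())  # True: f'(x) > f(x)/x on the whole grid
print(fpp.min() < 0)       # True: f'' dips below zero, so f is not convex
```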
Difference of support functions and its minimum points
We have $\|a^0-b\|\ge\|a^0-b^0\|\, \forall b\in B$. Hence $$\langle a^0-b^0,\, (a^0-b)-(a^0-b^0)\rangle=\langle a^0-b^0,\, b^0-b\rangle\ge 0\, \forall b\in B.$$ Thus, $$\langle g^0,b^0\rangle\ge\langle g^0,b\rangle\, \forall b\in B.$$ This implies that $\langle g^0,b^0\rangle=\sigma_B(g^0)$. Similarly, we also have $\langle g^0,a^0\rangle=\sigma_A(g^0)$. Consequently, $$\sigma_A(g^0)-\sigma_B(g^0)=\|a^0-b^0\|.$$ The equality in the question is verified.
Solving an integral that depends on a parameter
The subsequent integral is amenable to integration by parts using a familiar trick. Let $$I(x) = \int e^{-x} \cos yx \, dx.$$ Then with the choice $$u = \cos yx, \quad du = -y \sin yx \, dx, \\ dv = e^{-x} \, dx, \quad v = -e^{-x},$$ we obtain $$I(x) = -e^{-x} \cos yx - y \int e^{-x} \sin yx \, dx.$$ Repeating this process with $$u = \sin yx, \quad du = y \cos yx \, dx, \\ dv = e^{-x} \, dx, \quad v = -e^{-x},$$ we get $$I(x) = -e^{-x} \cos yx + y e^{-x} \sin yx - y^2 \int e^{-x} \cos yx \, dx = e^{-x} (y \sin yx - \cos yx) - y^2 I(x).$$ Therefore, $$I(x) = \frac{e^{-x} (y \sin yx - \cos yx)}{y^2 + 1} + C.$$ The definite integral is then $$\int_{x=0}^\infty e^{-x} \cos yx \, dx = \frac{1}{y^2+1}.$$ I leave the rest as an exercise.
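If you want to check the computation symbolically, here is a quick sympy sketch (not part of the answer; it verifies the antiderivative by differentiation and reads off the definite integral from the boundary values):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = sp.exp(-x) * (y * sp.sin(y * x) - sp.cos(y * x)) / (y**2 + 1)

print(sp.simplify(sp.diff(F, x) - sp.exp(-x) * sp.cos(y * x)))  # 0
print(sp.simplify(-F.subs(x, 0)))  # 1/(y**2 + 1), since F -> 0 as x -> oo
```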
Calculating The Reduction Of An Effect With Distance in 2D Space
I can see how you might reach that conclusion - in some ways it seems intuitively true - but it's not quite correct. If the light bulb is fed 10W of energy then it is, as you correctly pointed out, distributing 10W of energy as well. But how is that energy distributed? If you were to place a 100% efficient solar panel ring at any arbitrary distance, then that ring must collect 10W of power, by definition - there's just nowhere else for the power to go. Instead, let's say that we've got a panel on that ring (specifically a curved panel which matches the circle at that distance) which is 1 metre across, but 10 metres away. What would be the circumference of a ring at that distance? It's $2\pi r = 20\pi$. This means that the amount of energy harvested by that panel is $10\times\frac{1}{20\pi}=\frac{1}{2\pi}W$, and that's what we'd refer to as the intensity at that distance. If we were to double our distance, we would be doubling the circumference of the circle, while still having the same sized panel - and therefore we'd be halving the amount of energy collected. In this particular example, the closest the panel could get would be a distance $r$ such that $2\pi r=1$, or $r=\frac{1}{2\pi}$. At that distance, the energy collected is $10\div (2\pi\frac{1}{2\pi})=10W$. Now let's assume that our panel is particularly special, and that if we decrease the distance below $r=\frac{1}{2\pi}$, the overlapping sections of the panel are both generating power (i.e. they're magically not hiding each other). If, for example, we set $r=\frac{1}{4\pi}$, then the energy collected would be $10\div(2\pi\frac{1}{4\pi})=20W$. Now obviously that doesn't make sense, but that's the point. The formula stops working when you start to get too close to the energy source - because it doesn't actually make sense to be able to get that close. In particular, if you were at a distance of 0 from the source, then no matter the size of the panel you were using, you'd still be harvesting 10W. That in turn means the intensity at that distance is undefined, because even though the power being produced is 10W, it could (theoretically) be harvested with a panel that is infinitely small. The main question, then, is: does your formula need to take into account a situation where the distance from the source will ever be that small? You'll save yourself a lot of trouble if you don't need to consider such a scenario. If you do, then there would be certain ways to approach the issue that recognize the specific parameters of the object on which the intensity is being measured - however these would need to be specialised for your particular scenario.
On the category of Sets as an example of an algebraic category
About point 2): if you consider $\mathcal T_1^\text{op}$, the full subcategory of $Set$ whose objects are the natural numbers, you clearly have that this category is closed under coproducts (i.e. disjoint unions in $Set$). When you consider $\mathcal T_1$, which is the opposite/dual category of the above-mentioned full subcategory of $Set$, then since duality turns coproducts into products, you get that $\mathcal T_1$ has products.

3) Since $\mathcal T_1^\text{op}$ is a full subcategory of $Set$, its morphisms are just functions between the natural numbers; morphisms of $\mathcal T_1$ are just the opposites of such morphisms (or, depending on your definition of the dual category, the same morphisms of $\mathcal T_1^\text{op}$ with reversed direction).

1) The reason why we use $\mathcal T_1$ as a subcategory of $Set^\text{op}$ instead of $Set$ is that in this way the sum of natural numbers is the product in the category (that's because $n+m$ is the coproduct of $n$ and $m$ seen as objects in $Set$, and so it's the product in $Set^\text{op}$). In this way you get the cool property that every object $n\in \mathcal T_1$ is an iterated finite product of $1$. In particular this implies that every model $A \colon \mathcal T_1 \to Set$ can be recovered just from the set $A(1)$: for instance, for every natural number $n \in \mathcal T_1$ we have $A(n)=A(1)^n$, and in a very similar way you can recover the other information about the functor (the images of the various morphisms in $\mathcal T_1$). This property is what makes possible the existence of the equivalence between the category of sets ($Set$) and the category of functors $[\mathcal T_1,Set]$.

If you instead choose to work with $\mathcal T_1$ defined as the full subcategory of $Set$ spanned by the natural numbers, you lose the above-mentioned property (it is not true that every natural number is generated by products from one object), hence you cannot recover all the information of a model (i.e. a functor of type $A \colon T \to Set$) from the set $A(1)$ alone. So from this different theory you cannot get a category of models which is equivalent to $Set$. (That's not actually a proof, but it should give the intuition of why we cannot use the full subcategory of $Set$ instead of that of $Set^\text{op}$.) Hope this helps.
Understanding the proof of the Hausdorff-Young theorem
As it turns out it is completely irrelevant which values $\widetilde C$ can attain; it is far more important which values $\lambda$ attains. Proposition: Let $U\subseteq\mathbb R^n$ be an open set and let $\Lambda:=\{\lambda>0 \mid \lambda U\subseteq U\}$ have a limit point at $\lambda=0$. If $1<p,q<\infty$ are such that $\|\hat f\|_{L^q(U)}\leq C\|f\|_{L^p(\mathbb R^n)}$ for all $f\in\mathcal S(\mathbb R^n)$, then $q\leq p'$ (and $p\in[1,2]$, which I'm not going to prove). Proof: Choose some $\psi\in\mathcal S(\mathbb R^n)$ such that $\|\hat\psi\|_{L^q(U)}>0$ and any $\lambda\in1/\Lambda$. For $f:=\psi(\cdot~/~\lambda)$ we have $\hat f(\xi)=\lambda^n\hat\psi(\lambda\xi)$, and thus $$ \lambda^n\lambda^{-n/q}\|\hat\psi\|_{L^q(U)} =\|\hat f\|_{L^q(\lambda^{-1}U)} \overset{\lambda^{-1}\in\Lambda}\leq\|\hat f\|_{L^q(U)} \leq C\|f\|_{L^p(\mathbb R^n)} =C\lambda^{n/p}\|\psi\|_{L^p(\mathbb R^n)}, $$ which is equivalent to $$ \lambda^{n-\frac nq-\frac np}\leq\underbrace{C\,\frac{\|\psi\|_{L^p(\mathbb R^n)}}{\|\hat\psi\|_{L^q(U)}}}_{\text{independent of }\lambda}<\infty. $$ Since $\lambda\in1/\Lambda$ can be chosen arbitrarily large, we must have $n-\frac nq-\frac np\leq0$, giving us $q\leq p'$.
Separable kernel in convolution
Eigenvalues can give you some hint. However, a more direct method is to treat the 2D kernel as a matrix $K$ and to take its singular value decomposition (SVD). If only the first singular value is non-zero, the kernel is separable: $K$ is then the outer product of a single column and a single row, so the 2D convolution factors into a 1D convolution along each axis.
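A quick demonstration of the test (a sketch with numpy; the kernel is a made-up example):

```python
# A 2D Gaussian kernel is an outer product of 1D Gaussians, so it has
# exactly one nonzero singular value.
import numpy as np

g = np.exp(-np.linspace(-2, 2, 9) ** 2)   # samples of a 1D Gaussian
K = np.outer(g, g)                        # separable 2D kernel

s = np.linalg.svd(K, compute_uv=False)
print(s / s[0])                           # ~[1, 0, 0, ...]: K has rank one

# Recovering the 1D factors from the first singular triple:
U, S, Vt = np.linalg.svd(K)
col, row = U[:, 0] * np.sqrt(S[0]), Vt[0] * np.sqrt(S[0])
print(np.allclose(K, np.outer(col, row)))  # True: convolve by `col`, then `row`
```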