Coin Tosses Conditioning
You're losing track of how often you've multiplied by $\tfrac 12$.   You need to keep the probabilities of the conditions separate from the conditional expectations.   Thus by using the Law of Total Expectation we get: $$\begin{align} \mathsf E(N) & = \tfrac 12 \mathsf E(N\mid H)+\tfrac 12 \mathsf E(N\mid T) \\[1ex] & = \tfrac 12(\mathsf E(N)+1)+\tfrac 12 \mathsf E(N\mid T) \\[2ex] \mathsf E(N) & = 1+\mathsf E(N\mid T) & (\star) \\[1ex] & = 1+\tfrac 12\mathsf E(N\mid TT)+\tfrac 12\mathsf E(N\mid TH) \\[1ex] & = 1+\tfrac 12(1+\mathsf E(N\mid T))+\tfrac 12\mathsf E(N\mid TH) \\[1ex] & = 1+\tfrac 12\mathsf E(N)+\tfrac 12\mathsf E(N\mid TH) & \textsf{because }(\star) \\[2ex] \mathsf E(N) & = 2+\mathsf E(N\mid TH) \\[1ex] & = 2+\tfrac 12 \mathsf E(N\mid THH)+\tfrac 12\mathsf E(N\mid THT) \\[1ex] & = 2+\tfrac 32+\tfrac 12(2+\mathsf E(N\mid T)) \\[1ex] & = 4+\tfrac 12\mathsf E(N)& \textsf{because }(\star) \\[2ex] \mathsf E(N) & = 8 \end{align}$$
Probability that Array of Numbers Came from Distribution
What you are looking for is a goodness-of-fit test. An often used one is the chi-squared test, but there are other fitting tests as well (the Kolmogorov–Smirnov test, for example).
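For concreteness, here is a minimal sketch of a chi-squared goodness-of-fit test in Python (the bin counts and expected frequencies below are made up for illustration; assumes scipy is installed):

```python
import numpy as np
from scipy import stats

# Hypothetical binned data: observed counts vs. counts expected under H0.
observed = np.array([18, 22, 19, 25, 16])
expected = np.array([20, 20, 20, 20, 20])  # must have the same total as observed

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.3f}, p-value = {p:.3f}")
# A small p-value is evidence against H0 ("the data come from this distribution").
```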
relationship between $Syl_p(G)$ and $Syl_p(N_G(P))$ for a $p$-subgroup
Let $P$ be a $p$-subgroup of a (finite) group $G$. Then the following holds. Proposition $P \in Syl_p(G) \iff P \in Syl_p(N_G(P))$. Proof Assume that $P \in Syl_p(G)$. Since $P \subseteq N_G(P)$, we have $|G:P|=|G:N_G(P)| \cdot |N_G(P):P|$, hence $p$ does not divide $|N_G(P):P|$, meaning $P$ is a $p$-Sylow subgroup of $N_G(P)$. Conversely, suppose $P \in Syl_p(N_G(P))$. Now $P$ is a $p$-subgroup, hence applying Sylow Theory in $G$, there must be some $Q \in Syl_p(G)$ with $P \subseteq Q$. Of course, $P \subseteq Q \cap N_G(P)$. But $Q \cap N_G(P)$ is a $p$-subgroup of $N_G(P)$, hence, $P$ being Sylow (and thus a maximal $p$-subgroup), we must have $P=Q \cap N_G(P)=N_Q(P)$. If we had $P \lt Q$, then by the "normalizers grow" principle in $p$-groups, we would have $N_Q(P) \gt P$, a contradiction. So in fact $P=Q$ and we are done. Note There is yet another proof that depends on a number theoretic argument. Lemma Let $P$ be a $p$-subgroup of a (finite) group $G$. Then the following holds. $|G:P| \equiv |N_G(P):P|$ mod $p$ Proof (sketch) Let $P$ act on its left cosets in $G$. The fixed points are exactly those cosets which have a representative in $N_G(P)$. Now apply the Cauchy-Frobenius (sometimes erroneously called the Burnside) counting formula.$\square$ From the Lemma it is crystal clear that the Proposition holds.
Express $z = (6 − 2i)(4 − 7i)$ in polar form and calculate $z^2$.
Well, we have: $$(6-2i)(4-7i)=6\cdot4+6\cdot(-7i)-2i\cdot4-2i\cdot(-7i)=24-42i-8i-14=10-50i\tag1$$ So: The magnitude is: $$\left|(6-2i)(4-7i)\right|=\left|10-50i\right|=\sqrt{10^2+50^2}=10\sqrt{26}\tag2$$ The phase/argument is: $$\arg\left((6-2i)(4-7i)\right)=\arg\left(10-50i\right)=\frac{3\pi}{2}+\arctan\left(\frac{10}{50}\right)=\frac{3\pi}{2}+\arctan\left(\frac{1}{5}\right)\tag3$$ So: $$(6-2i)(4-7i)=10\sqrt{26}\cdot\left(\cos\left(\frac{3\pi}{2}+\arctan\left(\frac{1}{5}\right)\right)+\sin\left(\frac{3\pi}{2}+\arctan\left(\frac{1}{5}\right)\right)i\right)\tag4$$ And: $$\left((6-2i)(4-7i)\right)^2=\left(10\sqrt{26}\cdot\exp\left(\left(\frac{3\pi}{2}+\arctan\left(\frac{1}{5}\right)\right)i\right)\right)^2=$$ $$2600\cdot\exp\left(2\cdot\left(\frac{3\pi}{2}+\arctan\left(\frac{1}{5}\right)\right)i\right)=2600\cdot\exp\left(\left(3\pi+2\arctan\left(\frac{1}{5}\right)\right)i\right)=$$ $$2600\cdot\left(\cos\left(3\pi+2\arctan\left(\frac{1}{5}\right)\right)+\sin\left(3\pi+2\arctan\left(\frac{1}{5}\right)\right)i\right)\tag5$$
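A quick numerical check of the above (Python's cmath, purely as a sanity check):

```python
import cmath

z = (6 - 2j) * (4 - 7j)
r, phi = cmath.polar(z)       # modulus and argument
print(z)                       # (10-50j)
print(r, 10 * 26 ** 0.5)       # both ~50.9902 = 10*sqrt(26)
print(abs(z ** 2))             # 2600.0
```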
Find the general solution to the nonhomogeneous equation $x^2y''+xy'-n^2y=x^m$
HINT: Your expression simplifies to $$λ(λ-1)x^λ+λx^λ-n^2x^λ=x^λ(\lambda^2-n^2)=0.$$
How to solve this sum limit? $\lim_{n \to \infty } \left( \frac{1}{\sqrt{n^2+1}}+\cdots+\frac{1}{\sqrt{n^2+n}} \right)$
$$ \frac n{\sqrt{n^2+n}}\le \frac1{\sqrt{n^2+1}}+\cdots+\frac{1}{\sqrt{n^2+n}}\le \frac n{\sqrt{n^2+1}}$$ Apply the Squeeze Theorem. Observe that for finite $a,b$, $$\lim_{n\to\infty}\frac n{\sqrt{n^2+an+b}}=\lim_{n\to\infty}\sqrt{\frac{n^2}{n^2+an+b}}=\lim_{n\to\infty}\sqrt{\frac1{1+a\cdot\frac1n+b\cdot\frac1{n^2}}}=1$$
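A quick numerical illustration of the squeeze (Python, just as a sanity check):

```python
import math

for n in [10, 100, 1000, 10000]:
    s = sum(1 / math.sqrt(n * n + k) for k in range(1, n + 1))
    print(n, s)   # tends to 1 as n grows
```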
Example of measure sequence
Everything goes wrong. Take $F_n = \left]-\infty, -n\right] \cup \left[n,+\infty\right[$ for all $n\geq 1$, in $\Bbb R$ with the Lebesgue measure ${\frak m}$. We have $F_1 \supset F_2 \supset \cdots$. Also, we have that $\bigcap_{n\geq 1}F_n = \varnothing$ and ${\frak m}(F_n) = +\infty$ for all $n\geq 1$. And finally: $$\lim_{n\to +\infty} {\frak m}(F_n) = +\infty \neq 0 = {\frak m}\left(\bigcap_{n\geq 1}F_n\right).$$
Convexity of log-determinant
(EDIT: If $A,X$ are both square, then ...) As the comments suggest, we know that since the determinant is multiplicative and $\det X^T=\det X$, we have that $$\det(X^TAX)=\alpha\det(X)^2$$ where $\alpha=\det(A)>0$. However, the function $g(x)=x^2$ is not logarithmically convex, therefore the function $f(X)$ is not either.
DuBois-Reymond Lemma
Let $g, f \in L^1[a,b]$, and suppose that $\nu$ is a weak derivative of $g$. That means $$\int_a^b g h'\, dx = - \int_a^b \nu h \,dx$$ for all $h \in C^\infty_0[a,b]$. If $\int_a^b (fh + gh')\, dx = 0$ for all $h \in C^\infty_0$, then $$\int_a^b h f\,dx = \int_a^b h \nu \,dx.$$ Therefore, viewed as measures, $f$ and $\nu$ agree after integration against $C^\infty_0$ functions, which means that $f = \nu$ almost everywhere; this comes from measure theory.
Why is the covariant derivative of the metric tensor physically zero?
I think you are confusing several things. In the first video you have linked, the manifold $M$ you are working with is embedded in an ambient flat space; then the covariant derivative of a tensor along some path in $M$ is indeed the usual derivative with respect to the ambient coordinates "minus the normal component". As far as I can understand, what you mean by "intrinsic plane" is NOT embedded isometrically into an ambient flat space, so there is no sense in which there is a "normal derivative"; this whole description is not applicable for manifolds described in intrinsic terms, without an isometric embedding. Thus one uses the machinery of Levi-Civita connections and all that. Now, if you start with a manifold isometrically embedded in a flat space (or, say, use the (difficult) Nash embedding theorem to embed your $M$ in this way) then you can compute the derivative of the metric tensor by differentiating the (flat, constant) ambient metric tensor, and "removing the normal component" - and then restricting to the appropriate tensor subbundle corresponding to vectors and covectors tangent to $M$; but of course the ambient metric tensor has zero derivative (being constant). "Removing components" and "restricting" zero still gives zero, so the resulting covariant derivative of the metric tensor on $M$ is also zero, just as the intrinsic computation said. In some sense, the point is that the ambient metric is constant, and it's the restriction that changes as we move in $M$; and since the covariant differentiation takes the derivative first, and restricts afterwards, the result is 0.
Proper definition of set between two functions
$A = \{ (r,\delta) : r \in (0,\ln(4)],\ \delta \in [\underline \delta(r), \overline \delta(r)] \}$.
Amenability of the integers (with the Følner condition)
Here the operation of the group is addition, so it is more proper to write $F+s$ instead of $sF$. For the symmetric difference to be small, you need the intersection of the sets to be big. We may replace $E$ by $[-m,m]\cap\mathbb Z$ for some $m\geq\max E$ (this can only increase the max, so if we are below $\epsilon$ for this new $E$, we will also be for the original one). Now fix $\epsilon>0$. We need a set that is not "changed much" by a translation by $m$. Let us take $$ F=[-m^2,m^2]\cap\mathbb Z.$$ Then, for $k\in E$, $F+k$ is the set $F$ shifted $|k|$ units (one way or the other, depending on the sign of $k$). The important thing is that $$ |F\Delta (F + k)|=2|k|. $$ Thus, for any $k\in E$, $$ \frac{|F\Delta (F + k)|}{|F|}=\frac{2|k|}{2m^2+1}\leq\frac{2m}{m^2}=\frac2m. $$ Now it is enough to choose $m$ with $m>2/\epsilon$.
Cardinality of Cartesian Product where (a,b) are elements of A x B
You may begin by omitting all $a \in A$ such that $a\ge 7$ and all $b \in B$ such that $b \ge 4$ because we know ordered pairs with said elements will not be included in the Cartesian product we're looking for. So, after omission, we have $A = \{0,1,2,3,4,5\}$ and $B=\{0,2,-1\}$. Now, the cardinality of the Cartesian product of sets $A$ and $B$, denoted as $|A \times B|$, is given by $|A \times B| = |A| \cdot |B| = 6 \cdot 3 = 18$. So you are correct! If your book says differently, then I highly suspect your book contains a mistake.
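You can confirm the count by brute force (Python sketch, using the reduced sets from above):

```python
from itertools import product

A = {0, 1, 2, 3, 4, 5}
B = {0, 2, -1}
print(len(list(product(A, B))))   # 18
```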
Find an equation of the tangent plane to the cone which is perpendicular to a given plane
Hint: You can let $f(x,y,z)=\sqrt{x^2+y^2}-z$. Then a normal vector to the tangent plane is given by $\nabla f =\frac{x}{\sqrt{x^2+y^2}}i+\frac{y}{\sqrt{x^2+y^2}}j-k$, and you want this to be orthogonal to the vector $i+k$, which is a normal vector to $x+z=4$ (since the planes are perpendicular). Now set the dot product of these two vectors equal to zero.
Hypothesis Testing of Percentage Content of Cadmium
It's correct if you assume that the data is normally distributed. Otherwise you can use a Wilcoxon test.
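A minimal sketch of both tests in Python (the measurements and the hypothesized value mu0 below are made up for illustration; assumes scipy):

```python
import numpy as np
from scipy import stats

mu0 = 0.05  # hypothesized mean cadmium content (illustrative)
x = np.array([0.044, 0.052, 0.049, 0.060, 0.047, 0.051, 0.055, 0.046])

print(stats.ttest_1samp(x, popmean=mu0))  # assumes normality
print(stats.wilcoxon(x - mu0))            # nonparametric alternative
```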
A problem related to intersecting lines
This is a brutal question, but I enjoyed doing it quite a lot. Let’s say the new line to be added is $$y=mx + c $$ which has two parameters. We’ll find the set of solutions $(m,c)$ that satisfy the given conditions. Firstly, get the $x$-coordinate of the point of intersection of this new line with each of the previous lines: $$L_1 :\ x=\frac{1-c}{m-1} \\ L_2: x=\frac{2-c}{m-1} \\ L_3: x= \frac{3-c}{m-1} \\ L_4: x=\frac{1+c}{2-m}$$ where $L_1, L_2, L_3, L_4$ are the lines in the order specified in the question. Now, we need each of these intersection abscissas to lie outside of the interval $[0,1]$, which is equivalent to saying $$ x \gt 1\, \vee \, x\lt 0$$ in each case. Applying this condition yields $4$ different conditional inequalities in $m$ and $c$ whose intersection we must take. Simplifying these we get the following: $$\bullet \space m+c \lt 2 \, \vee \, (c \gt 1 \wedge m \gt 1)$$ $$\bullet \space m+c \lt 3 \, \vee \, (c \gt 2 \wedge m \gt 1)$$ $$\bullet \space m+c \lt 4 \, \vee \, (c \gt 3 \wedge m \gt 1)$$ $$\bullet \space m+c \gt 1 \, \vee \, (c \lt -1 \wedge m \lt 2)$$ It should be fairly easy to solve these inequalities graphically. Here’s a very rough picture of the solution set. Note the $x$ and $y$ axes represent $c$ and $m$ respectively. The yellow region is the solution set. We can infer from the graph that the solution set is as follows for $m,c \in \mathbb R$: $$ ( m \lt 2 \wedge c \lt -1)\, \vee \, ( 2 \lt m+c \lt 3 ) \, \vee \, (m \gt 1 \wedge c \gt 1 \wedge m+c \lt 3) \, \vee \, (m \gt 1 \wedge c \gt 2 \wedge m+c \lt 4) \, \vee \, (m \gt 1 \wedge c \gt 3)$$
Find a value of a strange improper integral.
By substituting $x^2=z$ we have: $$ I = \frac{1}{2}\int_{0}^{+\infty}\frac{\sin(z)}{z^{1/2}(1+z)^{3/2}}\,dz\tag{1}$$ but since $\mathcal{L}\left(\sin(z)\right)=\frac{1}{s^2+1}$ and: $$ \mathcal{L}^{-1}\left(\frac{1}{z^{1/2}(1+z)^{3/2}}\right)= se^{-s/2}\left(I_0\left(\frac{s}{2}\right)-I_1\left(\frac{s}{2}\right)\right)\tag{2}$$ it follows that: $$ I = \frac{1}{2}\int_{0}^{+\infty}\frac{se^{-s/2}}{1+s^2}\left(I_0\left(\frac{s}{2}\right)-I_1\left(\frac{s}{2}\right)\right)\,ds\\=\frac{\pi}{4}\left[\left(J_1\left(\frac{1}{2}\right)-Y_0\left(\frac{1}{2}\right)\right)\cos\left(\frac{1}{2}\right)+\left(J_0\left(\frac{1}{2}\right)+Y_1\left(\frac{1}{2}\right)\right)\sin\left(\frac{1}{2}\right)\right].\tag{3} $$
Derive a recursive formula using integration by parts
Change of variables $y=x-a$ gives you $$ \eqalign{F(y+a,n) &= \int \frac{(y+a)^n}{y^\alpha} \; dy\cr &= \sum_{k=0}^n {n \choose k} a^{n-k}\int y^{k-\alpha}\; dy\cr &= \sum_{k=0}^n {n \choose k} a^{n-k} \frac{y^{k-\alpha+1}}{k-\alpha+1} + C }$$ (except, if $k-\alpha+1 = 0$, replace $y^0/0$ by $\log(y)$).
Prove that a map is continuous
Hint: if $\tau(\lambda_0) = \tau_0 < 1$ and $\epsilon > 0$ is small, there is $\delta > 0$ such that $u_\lambda(t) > \delta$ if $\epsilon \le t \le \tau_0 - \epsilon$, while $u_\lambda(\tau_0 + \epsilon) < 0$.
How to calculate two populations combined mean and standard deviation.
You have $\bar{x}=18$ and so $\Sigma x=18\times 10=180$ Similarly, $\bar{y}=15$ so $\Sigma y=15\times20=300$ So the pooled mean is $$\frac{180+300}{10+20}=16$$ For the standard deviation, the combined sum of squares is $2950+5000=7950$ Using the standard formula, the pooled standard deviation is $$\sqrt{\frac{7950}{30}-16^2}=3$$
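The arithmetic, as a quick check (Python):

```python
n1, xbar, n2, ybar = 10, 18, 20, 15
sum_sq = 2950 + 5000                          # combined sum of squares

pooled_mean = (n1 * xbar + n2 * ybar) / (n1 + n2)
pooled_sd = (sum_sq / (n1 + n2) - pooled_mean ** 2) ** 0.5
print(pooled_mean, pooled_sd)                 # 16.0 3.0
```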
Isomorphism of the perfection of two rings
$\newcommand{\perf}{\mathrm{perf}}$ I'm not totally sure what you're trying to do. Can you add more detail? If you're just looking for a solution, here's one I believe. By definition we have that $$R^\perf=\{(x_i)\in \textstyle\prod_{i\geqslant 0}R : x_{i+1}^p=x_i\}$$ Note then that the map $f:R^\perf\to S^\perf$ is merely the map $f((x_i))=(f(x_i))$. Let us set $I:=\ker f$ and suppose that $I^{p^k}=0$ for some fixed $k\geqslant 0$. We first claim that $f$ is surjective. Suppose now that $(y_0,\ldots)\in S^\perf$. Let us then for all $i\geqslant 0$ take some $x'_i$ such that $f(x'_i)=y_{i+k}$. Let us then consider $(x_i)$ with $x_i=(x'_i)^{p^k}$. Note that $f(x_i)=f(x'_i)^{p^k}=y_{i+k}^{p^k}=y_i$. The only thing to verify is that $x_{i+1}^p=x_i$. To see this note that since $$f((x'_{i+1})^p)=y_{i+k+1}^p=y_{i+k}=f(x'_i)$$ we have $(x'_{i+1})^p-x'_i\in I$. This implies that $$0=\left((x'_{i+1})^p-x'_i\right)^{p^k}=x_{i+1}^p-x_i$$ (using the Frobenius, since we are in characteristic $p$), from where the conclusion follows. To see that it's injective note that if $f((x_i))=(0)$ then $x_i\in I$ for all $i$. Note though that since $x_i=x_{i+k}^{p^k}$ this implies, since $I^{p^k}=0$, that $x_i=0$. The conclusion follows.
Finding the eccentricity of the hyperbola $\left|\sqrt{(x-3)^2+(y-2)^2}\right|-\sqrt{(x+1)^2+(y+1)^2}=1$
First off, the absolute value does not matter, because square roots are always non-negative. Because of the distance formula, this means: distance from $(x,y)$ to $(3,2)$ minus distance from $(x,y)$ to $(-1,-1)$ equals $1$. Therefore, by the definition of a hyperbola, the foci are $(3,2),(-1,-1)$, and the distance from the center to each vertex is $a=\frac{1}{2}$ (since the difference of distances is $2a=1$). The center is at the midpoint of the 2 foci, or $(1,0.5)$. The distance from a focus to the center is $c=2.5$, by the distance formula. Therefore, the eccentricity is $\frac{\mathrm{distance}\:\mathrm{from}\:\mathrm{focus}\:\mathrm{to}\:\mathrm{center}}{\mathrm{distance}\:\mathrm{from}\:\mathrm{vertex}\:\mathrm{to}\:\mathrm{center}}=\frac{\frac{5}{2}}{\frac{1}{2}} = 5$ Hopefully this helps!
Induced map between zeroth homology groups an isomorphism?
A map $\Delta^0 \to X$ is just a point $x\in X$, so we can identify $C_0(X) = \mathbb ZX$ (free $\mathbb Z$ module on basis $X$). What does it mean for two points $x, x'\in C_0(X)$ to become identified in $H_0(X)$? It means that there is a map $p: I \cong \Delta^1 \to X$ such that $p(0) - p(1) = x - x'$, and so $p(0) = x$ and $p(1) = x'$. That is, $x$ and $x'$ are in the same path component of $X$. So, $H_0(X) = \mathbb Z \pi_0(X)$ where $\pi_0(X)$ is the set of path components of $X$. Thus, if any map $X\to Y$ induces a $\pi_0$ isomorphism, it follows that it also induces an $H_0$ isomorphism. If both spaces are path connected, this is automatic. All of this goes through with any coefficient ring $R$ in place of $\mathbb Z$, by the way.
Sequence of orthogonal vectors in a Hilbert space
$(a)\iff (b)$: You can write $x_{n}=\alpha_{n}e_{n}$ where $\{ e_{n}\}_{n=1}^{\infty}$ is an orthonormal subset of $X$. Then $\sum_{n}x_{n}=\sum_{n}\alpha_{n}e_{n}$ converges iff $\sum_{n}|\alpha_{n}|^{2} = \sum_{n}\|x_{n}\|^{2} < \infty$. $(a)\implies (c)$: If $\sum_{n}x_{n}$ converges, then $(\sum_{n}x_{n},y)=\sum_{n}(x_{n},y)$ converges, regardless of $y$, by continuity of the inner product with respect to norm convergence. This part (c)$\implies$ (b) is the trickiest, and I had that wrong, as pointed out by Jonas Meyer. Following the comment of PhoemueX above, suppose $\sum_{n}(x_{n},y)$ converges for all $y$, and define linear functionals $F_{k}(y)=\sum_{n=1}^{k}(y,x_{n})$. Each $F_{k}$ is bounded and $\sup_{k}|F_{k}(y)| < \infty$ for all $y$. Therefore, $\|F_{k}\| \le M$ for some constant $M$ by the Uniform Boundedness Principle. In particular, for $k \ge l$, $$ \sum_{n=1}^{l}\|x_{n}\|^{2}=|F_{k}(\sum_{n=1}^{l}x_{n})| \le M\|\sum_{n=1}^{l}x_{n}\| = M\left(\sum_{n=1}^{l}\|x_{n}\|^{2}\right)^{1/2}, $$ which implies the following for all $l$: $$ \left(\sum_{n=1}^{l}\|x_{n}\|^{2}\right)^{1/2} \le M. $$ So $(c)\implies (b)$ per PhoemueX's suggestion.
Existence of non-constant real-valued continuous functions on a topological space
Here is a very explanatory proof of Urysohn's lemma, which constructs a non-constant function for each pair of disjoint closed subsets of $X$. Let $X$ be a Hausdorff normal space and $A$ and $B$ be two disjoint closed subsets of $X$. First prove that for any closed set $A$ and any open set $U$ containing $A$, there is an open set $V$ such that $A \subset V \subset clV \subset U$. Consider the countable set $Q = [0,1]\cap \mathbb{Q}$. We will construct a family of open sets of $X$ and use it to build such a function. Since $Q$ is countable, we can construct a sequence $(p_n)_{n \in \mathbb{N}_{0}}$ that consists of all rationals in $Q$. W.l.o.g. assume $p_0 = 0$ and $p_1 = 1$. The sets must satisfy: for all $p < p'$ in $Q$, $clU_p \subset U_{p'}$. Define $U_1 = X\setminus B$. Note that $A \subset U_1$, so we can find an open set $U_0$ containing $A$ whose closure is contained in $U_1$. As $Q$ is countable, we can inductively define all such sets $U_p$ for $p \in Q$ as follows: Suppose $U_0,U_1, U_{p_2}, U_{p_3}, \ldots, U_{p_n}$ are given. Let $p_{n+1}$ be the next rational in the sequence. Then, since $\{p_i\ |\ 0\leq i \leq n+1\}$ is a finite totally ordered set, $p_{n+1}$ has an immediate predecessor and an immediate successor in it, say $p_i < p_{n+1} < p_j$. Then, we know that $clU_{p_i}\subset U_{p_j}$ and so by the same argument as above, we can find an open set $U_{p_{n+1}}$ such that $clU_{p_i}\subset U_{p_{n+1}}$ and $cl U_{p_{n+1}} \subset U_{p_j}$. By induction we can find all of these $U_{p_n}$ satisfying the requirement. Now, we extend $U_p$ to all rationals in $\mathbb{R}$ by letting $U_p = \emptyset$ if $p <0$ and $U_p = X$ if $p > 1$. You can check that $p < q$ in $\mathbb{Q}$ implies $cl U_p \subset U_q$. For each $x \in X$, let $\mathbb{Q}(x) := \{p \in \mathbb{Q}\ |\ x \in U_p\}$. Note $\mathbb{Q}(x)$ is bounded below by $0$, so it makes sense to define $f:X\rightarrow [0,1]$ by $f(x) = \inf \mathbb{Q}(x)$. If $x \in A$, then $x \in U_p$ for each rational $p > 0$, so that $f(x)=0$; and if $x \in B$, then $\mathbb{Q}(x)$ consists of all rationals greater than $1$, thus $f(x)=1$. Now we need only show that $f$ is continuous, so to that end we prove: given $x$ in $X$ with $f(x) \in (c,d)$, we can find an open neighborhood $U$ of $x$ such that $f[U]\subset (c,d)$. Choose rational numbers $r,s$ such that $c<r<f(x) <s <d$. Then $U:= U_s\setminus clU_r$ is the desired open set.
Correct name for equation with negative delay
That's a differential equation with advanced (not delayed) argument.
intersections between 3d planes
Take $z=\lambda$ and solve for $x$ and $y$ in terms of $\lambda$. You will get the required line. ($y=0;\ \frac{x}{3}=\frac{z}{2}$)
Generating function of a convolution
You’re thinking in the right direction, but the details are a bit off. It makes no sense to write $$\sum_{k=0}^{n} (n-k) a_k = \sum_{n\ge 0} nx^n \cdot \sum_{n\ge 0} a_nx^n \cdot \frac{1}{1-x}\;:$$ the lefthand side is a constant, and the righthand side is a non-constant function of $x$. What is true is that $$\left(\sum_{n\ge 0}nx^n\right)\left(\sum_{n\ge 0}a_nx^n\right)=\sum_{n\ge 0}\left(\sum_{k=0}^n(n-k)a_k\right)x^n\;.$$ We know that $$\frac{x}{(1-x)^2}=\sum_{n\ge 0}nx^n\;,$$ so if $g$ is the generating function for $\langle a_n:n\in\Bbb N\rangle$, then $\sum_{k=0}^n(n-k)a_k$ is the coefficient of $x^n$ in the power series expansion of $$\frac{xg(x)}{(1-x)^2}\;.\tag{1}$$ If $g(x)$ is a rational function, you can in principle expand $(1)$ into partial fractions, expand in power series, and combine to get the desired coefficient.
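As a concrete check, take the sample sequence $a_n=1$, so $g(x)=1/(1-x)$ and $\sum_{k=0}^n(n-k)a_k=n(n+1)/2$; expanding $(1)$ with sympy reproduces exactly these coefficients (Python sketch):

```python
import sympy as sp

x = sp.symbols('x')
h = x / (1 - x) ** 3                   # x*g(x)/(1-x)^2 with g(x) = 1/(1-x)
series = sp.series(h, x, 0, 8).removeO()
print([series.coeff(x, n) for n in range(8)])   # [0, 1, 3, 6, 10, 15, 21, 28]
print([n * (n + 1) // 2 for n in range(8)])     # same
```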
Trying to integrate a stochastic RV, $\int_0^t sZ_s \, ds$
The fastest way to verify your formula is to apply Ito's lemma: $$ \mathrm d\left(\tfrac25t^2Z_t\right) = \tfrac45tZ_t\,\mathrm dt + \tfrac25t^2\,\mathrm dZ_t\neq tZ_t\,\mathrm dt $$ so the answer is incorrect. Another point is that the representation via $Y$ may not be applicable in case of computing integrals. Also you have to remember that in your case you have different representations for each $Z_s$ where $s$ runs in $[0,t]$ - but those representations must be clearly dependent. In particular, it is unclear to me how you got $s^{\frac32}$ and took $Y$ outside of the integral. In case $Z_t$ is a standard Brownian motion, sometimes it helps to use integration by parts: $$ \mathrm d(f_tZ_t) = f'_tZ_t\,\mathrm d t+f_t\,\mathrm dZ_t \implies f'_tZ_t\,\mathrm dt = \mathrm d(f_tZ_t) - f_t\,\mathrm dZ_t $$ which holds for any deterministic $C^1$ function $f_t$. Now, in your case $f'_t = t$, so $$ \int_0^t sZ_s\,\mathrm ds = \frac12 t^2Z_t - \frac12\int_0^t s^2\,\mathrm dZ_s $$ which does not seem to be much of a simplification, though. In the end, it does not seem to me that the original integral can be simplified.
Probability of getting at least 1 white ball
Here the question is: what is the probability of getting at least 1 white ball? There are 3 possible cases: 1. the 1st is white AND the 2nd is not, with probability $(3/10)\cdot(7/9)$; OR 2. the 1st is not white AND the 2nd is white, with probability $(7/10)\cdot(3/9)$; OR 3. both are white, with probability $(3/10)\cdot(2/9)$. For the answer, add these 3 cases; the answer is $48/90$, which is $0.5333\ldots$ In probability questions remember: OR means add (the cases here are mutually exclusive), AND means multiply (by the conditional probability of the second draw).
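You can confirm this exactly by enumerating all ordered draws (Python sketch, assuming 3 white and 7 other balls):

```python
from itertools import permutations

balls = ['W'] * 3 + ['O'] * 7
draws = list(permutations(range(10), 2))          # 90 ordered pairs
hits = sum(1 for i, j in draws if 'W' in (balls[i], balls[j]))
print(hits, len(draws), hits / len(draws))        # 48 90 0.5333...
```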
Small Changes of Sphere
As David notes in the comments, $\Delta V$ should have units of length cubed, so the given answer is probably a typo. For an alternate method, you can always just attack it directly. To wit: Let's say $\Delta r = \delta$. Letting $V'$ denote the new volume, $V' = \displaystyle \frac{4}{3} \pi (r+\delta)^3$. Expanding this out: $$V' = \frac{4}{3} \pi \Big( r^3 + 3\delta r^2 + 3\delta^2r + \delta^3 \Big)$$ Since $\delta$ is very small, we can use the approximation $\delta^n \approx 0$ for all $n > 1$. So this becomes: $$V' \approx \frac{4}{3} \pi \Big( r^3\ + 3\delta r^2 \Big) = V + 4 \pi \delta r^2$$ The difference in volume, $\Delta V = V' \!-\! V$, is therefore roughly $4 \pi \delta r^2 = 4 \pi r^2 ( \Delta r )$.
Help buying a calculator program
The Python programming language is free. It comes with a GUI (the IDLE shell) that you can type arithmetic into, and it automatically extends from ordinary machine integers to arbitrary-length integers when needed.
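For example, in the interactive shell:

```python
>>> 2 ** 100
1267650600228229401496703205376
>>> 123456789 * 987654321
121932631112635269
```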
Linearize product of three variables (one binary, two non-negative)
No, not if you mean linearize in the sense of converting it to an equivalent logic model. The bilinear product $yz$ is inherently nonlinear.
Fundamental property of Dirac Delta function on shifted functions and compositions
Just define $h(x):= f(x-b)$ and $k(x):=f(g(x))$. Then you can use the well known property of the Dirac delta, which you already stated, for the functions $h$ and $k$ and conclude the two identities: $$ \int_{a-\epsilon}^{a+\epsilon}f(x-b)\delta(x-a)dx = \int_{a-\epsilon}^{a+\epsilon}h(x)\delta(x-a)dx = h(a) = f(a-b) $$ and $$ \int_{a-\epsilon}^{a+\epsilon}f(g(x))\delta(x-a)dx = \int_{a-\epsilon}^{a+\epsilon}k(x)\delta(x-a)dx = k(a) = f(g(a)) $$
multiplication and addition fractions
Let us say you have 2 fractions you need to multiply, $\frac{2}{7}$ and $\frac{4}{5}$ for example. Then the multiplication rule you explained above means that: $$\frac{2}{7} \times \frac{4}{5}=\frac{2\times4}{7\times5}=\frac{8}{35}$$ In general: $$\frac{a}{b}\times\frac{c}{d}=\frac{ac}{bd}$$ Now, to explain why this works, you need to consider what a fraction is or means. $\frac{2}{7}$ actually means $2\div7$. So above, when I did $\frac{2}{7} \times \frac{4}{5}$, I was actually doing the same thing as $(2\div7)\times(4\div5)$. Now: visualise some cakes (or pizzas or pies). Let's say I have 3 cakes. If I multiply the number of cakes I have by 5, then I take 5 sets of 3 cakes to make 15 cakes. If I divide the number of cakes I have by 6, I am splitting the cakes into 6 groups of equal size, so each group ends up with half a cake. Now notice what happens if I first multiply my 3 cakes by 5 and then divide by 6. I get the same answer as first dividing by 6 and then multiplying by 5. So order does not matter when it comes to multiplication and division (note you still have to pay attention to brackets though). Therefore, we could rearrange what we had above: $$\frac{2}{7} \times \frac{4}{5}=(2\div7)\times(4\div5)=2\div7\times4\div5=2\times4\div7\div5=(2\times4)\div(7\times5)=\frac{2\times4}{7\times5}$$ Another helpful way of understanding what I said above is to think about the analogy between addition/subtraction and multiplication/division. What I said above is still true if I replace all of the $+$ and $-$ with $\times$ and $\div$ respectively (ignoring all of the fractions, as there isn't a "fraction" sign for subtraction). The ways in which these 2 pairs are related are very similar. If your original question had been about addition/subtraction instead, it would have been something like: Show me why this rule works: $$(5-3)+(4-9)=(5+4)-(3+9)$$ $$(a-b)+(c-d)=(a+c)-(b+d)$$ It is because you can change around the order of things without changing the meaning. In mathematics, this means that an operation is "commutative". Yet another way of understanding this is to consider the idea of "inverses". An inverse operation undoes an operation. So for example, if I add 5 to a number, the inverse operation to this would be subtracting 5 from the new number, as this gets me back to my original number. Subtraction is the inverse operation of addition. Division is the inverse operation of multiplication. With inverses, you can change around the order in which you do things. This is actually related to basic Group Theory. Group theory replaces the operations $+$ $-$ $\times$ $\div$ with other symbols like $*$ to prove generic properties about these operations without it being necessary for you to know what that particular operation is. I hope this helps!
Non-trivial Polynomial Solution to $P(x)=(P'(x))^2, P(2)=0$?
Hint: Suppose $P$ is a solution of degree $n$. What is the degree of $(P')^2$?
How to find (research) literature?
I think that what you're looking for may be the Princeton Companion to Mathematics. This book provides introductory survey articles, written by experts, covering much of pure mathematics. According to the publisher: This is a one-of-a-kind reference for anyone with a serious interest in mathematics. Edited by Timothy Gowers, a recipient of the Fields Medal, it presents nearly two hundred entries, written especially for this book by some of the world's leading mathematicians, that introduce basic mathematical tools and vocabulary; trace the development of modern mathematics; explain essential terms and concepts; examine core ideas in major areas of mathematics; describe the achievements of scores of famous mathematicians; explore the impact of mathematics on other disciplines such as biology, finance, and music--and much, much more. The Princeton Companion to Mathematics is definitely where I'd go to get the lay of the land regarding some area of mathematics before delving into more technical articles. On the other hand, if you're interested in knowing what's being done regarding a more specific topic and what sort of techniques the experts are using, you might find the survey articles in the Bulletin of the American Mathematical Society helpful. The description of the journal on BAMS' website states: The Bulletin publishes expository articles on contemporary mathematical research, written in a way that gives insight to mathematicians who may not be experts in the particular topic.
Short Exact Sequence of Tangent Space of smooth morphism
Question: "Is there any elegent way to get it? I always feel confused about this kind of thing because the same notation might have different meaning when compute(as ring or as module)." Answer: Let $f:X:=Spec(B)\rightarrow Y:Spec(A)$ with $\phi:A \rightarrow B$ the corresponding morphism of rings and let $\mathfrak{m}_x \subseteq B$ with $\mathfrak{m}_x \cap A=\mathfrak{m}_y \subseteq A$. There is a canonical surjective map $$p: B \rightarrow B/\mathfrak{m}_yB:=(A/\mathfrak{m}_yA) \otimes_A B$$ and define $\mathfrak{n}_x:=p(\mathfrak{m}_x)$ and you get a canonical sequence $$\mathfrak{m}_y/\mathfrak{m}_y^2 \rightarrow \mathfrak{m}_x/\mathfrak{m}_x^2 \rightarrow \mathfrak{n}_x/\mathfrak{n}_x^2 \rightarrow 0.$$ This becomes exact when you tensor with $\kappa(x)$: $$T1.\text{ }\mathfrak{m}_y/\mathfrak{m}_y^2 \otimes \kappa(x) \rightarrow \mathfrak{m}_x/\mathfrak{m}_x^2 \rightarrow \mathfrak{n}_x/\mathfrak{n}_x^2 \rightarrow 0.$$ When you dualize T1 you get your sequence $$0 \rightarrow T_{X_y,x} \rightarrow T_{X,x} \rightarrow T_{Y,y}\otimes_{\kappa(y)} \kappa(x) \rightarrow 0.$$ Note: There is a field extension $\kappa(y) \subseteq \kappa(x)$ and you get an isomorphism $$ Hom_{\kappa(x)}(\mathfrak{m}_y/\mathfrak{m}_y^2\otimes_{\kappa(y)} \kappa(x), \kappa(x)) \cong Hom_{\kappa(y)}(\mathfrak{m}_y/\mathfrak{m}_y^2, \kappa(y))\otimes_{\kappa(y)} \kappa(x).$$ Im not used to the Liu book and he is making references to results earlier in the book. In Hartshorne Proposition III.10.4 is proved: If $f: X \rightarrow Y$ is a morphism of nonsingular varieties over an algebraically closed field $k$ let $n:=dim(X)-dim(Y)$. There is an exact sequence $$C1.\text{ } f^*\Omega^1_{Y/k} \rightarrow \Omega^1_{X/k} \rightarrow \Omega^1_{X/Y} \rightarrow 0$$ where $\Omega^1_{X/Y}$ is locally trivial of rank $n$ iff $f$ is smooth of relative dimension $n$ iff for every closed point $x\in X$ it follows the tangent mapping $T_{f,x}: T_{y} \rightarrow T_x$ is surjective. Since $\Omega^1_{X/k}$ is locally trivial of rank $dim(X)$ and $\Omega^1_{Y/k}$ is locally trivial of rank $dim(Y)$ we get at each closed point $x\in X$ an exact sequence of $\kappa(x)$-vector spaces $$0 \rightarrow f^*\Omega^1_{Y/k}\otimes \kappa(x) \rightarrow \Omega^1_{X/k}\otimes \kappa(x) \rightarrow \Omega^1_{X/Y}\otimes \kappa(x) \rightarrow 0$$ and dualizing this sequence you get your sequence. This proves your case in a special case.
Calculate GCD(Fib(531185354674),Fib(613570636967))
GCD(Fib(531185354674),Fib(613570636967)) = Fib(GCD(531185354674,613570636967))
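A sketch of how you would use this in practice (Python; fast doubling is one standard way to compute Fibonacci numbers, and math.gcd handles the indices):

```python
from math import gcd

def fib(n):
    """Fast doubling: returns the pair (F(n), F(n+1))."""
    if n == 0:
        return (0, 1)
    a, b = fib(n // 2)
    c = a * (2 * b - a)      # F(2k)   = F(k) * (2F(k+1) - F(k))
    d = a * a + b * b        # F(2k+1) = F(k)^2 + F(k+1)^2
    return (d, c + d) if n % 2 else (c, d)

g = gcd(531185354674, 613570636967)
print(g)             # gcd of the indices
print(fib(g)[0])     # the desired gcd, Fib(g)
```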
Union of finite linearly independent subsets of eigenspaces, is a lin-indep subset.
Question 1: Do induction on $k$. If $k=1$, there's nothing to prove. Assume the assertion holds for $k$ eigenvalues and suppose $$ v_1+v_2+\dots+v_k+v_{k+1}=0 $$ where $v_i$ is an eigenvector of $\lambda_i$ ($i=1,\dots,k+1$) and $\lambda_1,\dots,\lambda_{k+1}$ are pairwise distinct eigenvalues of $T$. Evaluate the relation with $T$ to get $$ \lambda_1v_1+\dots+\lambda_kv_k+\lambda_{k+1}v_{k+1}=0 $$ Multiply the relation by $\lambda_{k+1}$ to get $$ \lambda_{k+1}v_1+\dots+\lambda_{k+1}v_k+\lambda_{k+1}v_{k+1}=0 $$ Subtract the two relations to get $$ (\lambda_1-\lambda_{k+1})v_1+\dots+(\lambda_k-\lambda_{k+1})v_k=0 $$ By the induction hypothesis you conclude \begin{gather} (\lambda_1-\lambda_{k+1})v_1=0\\ \dots\\ (\lambda_k-\lambda_{k+1})v_k=0 \end{gather} because $v_i\in E_{\lambda_i}$ implies $\alpha v_i\in E_{\lambda_i}$ for all scalars $\alpha$, because $E_{\lambda_i}$ is a subspace. Therefore $$ v_1=v_2=\dots=v_k=0 $$ because $\lambda_i-\lambda_{k+1}\ne0$ ($i=1,2,\dots,k$), whence also $v_{k+1}=0$. Having this, the rest is easy. About your other questions. Question 3: yes, the eigenspaces are vector subspaces. Question 2: if you have a sum $v_1+\dots+v_k$, you want to assume that some vector is nonzero, by contradiction. You can't know which ones are nonzero, so you assume they're the first in the ordering, as the order in which the eigenvalues are presented is irrelevant. So it's not restrictive (by renumbering, if necessary) to assume that $v_1,\dots,v_m\ne0$ and $v_{m+1},\dots,v_k=0$. It's a very common technique and avoids having to say something like "suppose $v_{i_1}\ne0,\dots,v_{i_m}\ne0$ with $1\le i_1<i_2<\dots<i_m\le k$ and $v_j=0$ if $j\notin\{i_1,\dots,i_m\}$", which just complicates the proof.
Differential equation problem, how to solve such types?
$$y(x-y\ln(y))dx+x(y-x\ln(x))dy=0$$ $$p(x,y)dx+q(x,y)dy=0\quad\begin{cases} p=y(x-y\ln(y))\\ q=x(y-x\ln(x)) \end{cases}$$ We look for an integrating factor $\mu(x,y)$ to make it an exact differential of a function $F(x,y)$. $$\begin{cases} \frac{\partial F}{\partial x}=\mu(x,y)y(x-y\ln(y))\\ \frac{\partial F}{\partial y}=\mu(x,y)x(y-x\ln(x)) \end{cases}$$ See http://mathworld.wolfram.com/OrdinaryDifferentialEquation.html , Eq.$(12)$. $$\frac{\frac{\partial q}{\partial x}-\frac{\partial p}{\partial y}}{xp-yq}= \frac{(y-2x\ln(x)-x)-(x-2y\ln(y)-y)}{xy(x-y\ln(y))-yx(y-x\ln(x))}=-\frac{2}{xy}$$ Thus $\mu$ is a function of $xy$. $$\begin{cases} \frac{\partial F}{\partial x}=\mu(xy) y(x-y\ln(y))\\ \frac{\partial F}{\partial y}=\mu(xy) x(y-x\ln(x)) \end{cases}$$ $\frac{\partial \mu}{\partial x}=y\mu'(xy)$ and $\frac{\partial \mu}{\partial y}=x\mu'(xy)$ $$\frac{\partial^2 F}{\partial x \partial y}=\\=x\mu'y(x-y\ln(y))+\mu(x-2y\ln(y)-y)=y\mu'x(y-x\ln(x))+\mu(y-2x\ln(x)-x)$$ After simplification : $\quad\mu'yx +2\mu=0$ Since $\mu$ is a function of $xy$, let $X=xy$. $$\frac{d\mu}{\mu}=-2\frac{dX}{X}\quad\to\quad \mu(X)=\frac{c}{X^2}\quad\to\quad \mu(x,y)=\frac{c}{(xy)^2}$$ The value of $c$ doesn't matter; any nonzero choice is sufficient as integrating factor. $$\begin{cases} \frac{\partial F}{\partial x}=\frac{1}{(xy)^2} y(x-y\ln(y))\\ \frac{\partial F}{\partial y}=\frac{1}{(xy)^2} x(y-x\ln(x)) \end{cases}$$ $$F(x,y)=\int\frac{1}{x^2y}(x-y\ln(y))dx = \int \frac{1}{y^2x} (y-x\ln(x))dy$$ $y$ is a constant parameter in the first integral and $x$ is a constant parameter in the second integral. $$F(x,y)=\frac{\ln(x)}{y}+\frac{\ln(y)}{x}$$ Coming back to the initial ODE with integrating factor : $$\mu y(x-y\ln(y))dx+\mu x(y-x\ln(x))dy=dF=0$$ Thus $F=$constant. The solution of the ODE expressed in implicit form is : $$\frac{\ln(x)}{y}+\frac{\ln(y)}{x}=c$$ To find an explicit form of the solution : $$\frac{x\ln(x)}{y}+\ln(y)=cx\quad\to\quad y\exp\left(\frac{x\ln(x)}{y}\right)=e^{cx}$$ $$\frac{1}{y}\exp\left(-\frac{x\ln(x)}{y}\right)=e^{-cx} $$ $$\left(-\frac{x\ln(x)}{y}\right)\exp\left(-\frac{x\ln(x)}{y}\right)=-e^{-cx}x\ln(x)$$ From the definition of the Lambert W function : $\quad W(\xi)e^{W(\xi)}=\xi$ and with $\xi=-e^{-cx}x\ln(x)$ $$-\frac{x\ln(x)}{y}=W\left(-e^{-cx}x\ln(x) \right)$$ $$y(x)=-\frac{x\ln(x)}{W\left(-e^{-cx}x\ln(x) \right)}$$
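The implicit solution can be verified mechanically (sympy sketch): with $F=\ln(x)/y+\ln(y)/x$ we should have $F_x=\mu p$ and $F_y=\mu q$ with $\mu=1/(xy)^2$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
p = y * (x - y * sp.log(y))
q = x * (y - x * sp.log(x))
F = sp.log(x) / y + sp.log(y) / x
mu = 1 / (x * y) ** 2

print(sp.simplify(sp.diff(F, x) - mu * p))   # 0
print(sp.simplify(sp.diff(F, y) - mu * q))   # 0
```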
Logarithmic differentiation (how to solve log differentiation with two different term)
You have, as you wrote, $$f(m)=m \log (m)+(n-m) \log (n-m)$$ $$f'(m)=\log (m)-\log (n-m)$$ $$f''(m)=\frac{1}{n-m}+\frac{1}{m}$$ So, the derivative vanishes when $$\log (m)=\log (n-m)\implies m=n-m\implies 2m=n\implies m=\frac n 2$$ Now, using the second derivative test $$f''\left(\frac n 2\right)=\frac{4}{n}>0$$ So, the point is the minimum and $$f\left(\frac n 2\right)=n \log \left(\frac{n}{2}\right)$$
How do you show Tonelli’s theorem can fail in the σ-nonfinite case, and also that product measure does not need to be unique.
$1$. Yes, your proof that Tonelli's theorem conclusion fails in this case is correct. The only minor correction is: The diagonal set $\Delta=\{(x,x) : x∈[0,1]\}$ is a closed subset of $I_2$. Hence $\Delta\in\mathcal{B}(I_2)=\mathcal{B}(I)×\mathcal{B}(I)\subset \mathcal{B}(I)×2^I$. $2$. Now, let us show that there is more than one measure $\mu$ on $\mathcal{B}([0,1])\times 2^{[0,1]}$ with the property that $\mu(E\times F)=\lambda(E)\#(F)$ (where $\lambda$ is the Lebesgue measure and $\#$ is the counting measure) for all $E\in \mathcal{B}([0,1])$ and $F\in 2^{[0,1]}$. Let $\mu_1$ be the product measure $\lambda \times \#$. It is immediate that for all $E\in \mathcal{B}([0,1])$ and $F\in 2^{[0,1]}$, $$\mu_1(E\times F) = (\lambda \times \#) (E\times F)= \lambda(E)\#(F)$$ Let $\mu_2$ be a measure defined on $\mathcal{B}([0,1])\times 2^{[0,1]}$ by, for all $A \in \mathcal{B}([0,1])\times 2^{[0,1]}$, $$\mu_2(A) = \sum_{y\in[0,1]} \lambda(([0,1]\times\{y\}) \cap A)$$ It is easy to see that $\mu_2$ is in fact a measure. Moreover, for all $E\in \mathcal{B}([0,1])$ and $F\in 2^{[0,1]}$, $$\mu_2(E\times F) = \sum_{y\in[0,1]} \lambda(([0,1]\times\{y\}) \cap (E\times F))=\sum_{y\in F}\lambda(E)=\lambda(E)\#(F)$$ Let us prove that $\mu_1\neq \mu_2$. Consider the diagonal set $\Delta=\{(x,x) : x∈[0,1]\}$ again. Note that, if $E\times F$ is a rectangle such that $\mu_1(E\times F)$ is finite, then $\lambda(E)=0$ or $F$ is a finite set. So $\Delta$ can not be covered by any countable collection of rectangles $E\times F$ with $\mu_1(E\times F)$ finite. So $\mu_1(\Delta)=+\infty$ (in fact, we proved more: $\Delta$ is not even $\mu_1$-$\sigma$-finite). On the other hand, $$\mu_2(\Delta) = \sum_{y\in[0,1]} \lambda(([0,1]\times\{y\}) \cap \Delta)= \sum_{y\in[0,1]} \lambda(\{y\})=0$$
Where is the flaw in my proof claiming that if $P$ is a prime minimal over $\operatorname{ann}M$, then $P=\operatorname{rad}(\operatorname{ann}M)$.
You obtain $$PR_P=(\operatorname{rad}(\operatorname{ann} M))R_P$$ and conclude from this that $P=\operatorname{rad}(\operatorname{ann} M)$. But this implication is not true; indeed, in your example where $R=\mathbb{Z}$, $P=(2)$ and $\operatorname{rad}(\operatorname{ann} M)=(6)$, you have $(2)R_P=(6)R_P$, as $3$ is a unit in $R_P$, but of course $(2) \ne (6)$ in $\mathbb{Z}$.
Can someone prove the statement related to convexity?
Let's think of this in terms of ideas rather than formulae. $\phi(\theta)$ is a function that, for fixed $x$ and $y$, goes a certain distance along the line between $x$ and $y$ and checks the value of the original function at that point. When we take $\theta_1$ and $\theta_2$ and draw a line between them and check the value at a point on that line, we can construct an $x'$ and a $y'$ that correspond to the location specified by $\theta_1$ and $\theta_2$ and check the value of $f$ on the corresponding point on that line and get the same number back. Since $f$ is convex, it then follows that $\phi$ is convex. If this idea is clear, it shouldn't be too hard to just sit down and write the inequality that demonstrates convexity.
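Written out (assuming the standard definition $\phi(\theta)=f(x+\theta(y-x))$ for fixed $x,y$), the inequality is, for $\lambda\in[0,1]$: $$\phi(\lambda\theta_1+(1-\lambda)\theta_2)=f\big(\lambda\,[x+\theta_1(y-x)]+(1-\lambda)\,[x+\theta_2(y-x)]\big)\le\lambda\,\phi(\theta_1)+(1-\lambda)\,\phi(\theta_2),$$ where the middle step is the convexity of $f$ applied to the two points $x'=x+\theta_1(y-x)$ and $y'=x+\theta_2(y-x)$.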
Prove that $|z-10|=3|z-2|$ is the equation of a circle with radius $3$ and center $1$.
You are almost at end of your exercise. Just notice that $$0=x^2+y^2-2x-8=(x-1)^2+y^2-9=|z-1|^2-3^2\Leftrightarrow |z-1|=3$$ where $z=x+iy$.
Result relating fundamental groups and covering spaces
Choose a base point $b$ in $X$ and one point from $p^{-1}(b)$, say $a$. Since $\bar X$ is path connected there exists a path from $a$ to every point in $p^{-1}(b)$. The compositions of these paths with $p$ give non-homotopic loops in $X$. Also, since $\bar X$ is simply connected, every loop in $X$ will be homotopic to one of these. Hence there are exactly $m$ elements in $\pi_{1}(X)$ and hence it is $\Bbb Z_{m}$.
Any positive semidefinite matrix can be written as $AA^{\ast}$
If $A \ge 0$ and $A = A^*$, then $A$ is unitarily diagonalizable and all eigenvalues are real and non-negative. That is, for some unitary $U$, we have $U^*AU = \Lambda$, where $\Lambda = \operatorname{diag}(\lambda_1,...,\lambda_n)$, and $\lambda_k \ge 0$. If we let $\Lambda^{\frac{1}{2}} = \operatorname{diag}(\sqrt{\lambda_1},...,\sqrt{\lambda_n})$, then we note that $(\Lambda^{\frac{1}{2}})^* = \Lambda^{\frac{1}{2}}$ and see that $A = U \Lambda U^* = U \Lambda^{\frac{1}{2}} \Lambda^{\frac{1}{2}} U^*= U \Lambda^{\frac{1}{2}} (\Lambda^{\frac{1}{2}})^* U^* = U \Lambda^{\frac{1}{2}} ( U \Lambda^{\frac{1}{2}})^*$, as desired. Aside: The Hermitian assumption is necessary. The matrix $A= \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ satisfies $\langle x, Ax \rangle \ge 0$ for all $x$, but cannot be written as $B B^*$ for any matrix $B$. (This follows since $B B^*$ is self-adjoint, but $A$ is not.)
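A numerical illustration of the construction (numpy sketch; the random Hermitian PSD matrix is just for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = C @ C.conj().T                        # a random Hermitian PSD matrix

lam, U = np.linalg.eigh(A)                # eigenvalues lam >= 0, unitary U
B = U * np.sqrt(np.clip(lam, 0, None))    # B = U @ diag(sqrt(lam))
print(np.allclose(B @ B.conj().T, A))     # True
```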
$A$ is closed if and only if $A=\overline {A}$.
Hints: $\;A\;$ is closed iff it is one of the subsets in $\;\{B\subset\Bbb R\;/\;B\;\text{is closed and contains}\;A\}...$
If $a_{ij}=|A_i\cap A_j|$ and let $A=(a_{ij})\in M_n(\mathbb{R})$, show that $\det(A)\geq0$
A similar idea was used in my previous answer here. We enumerate elements of $\bigcup_{i=1}^n A_i = \{b_1,\ldots, b_k\}$. Let $v_j\in\mathbb{R}^k$ with $v_j=\mathbf{1}_{A_j}(b_i)$ which is $1$ if $b_i\in A_j$, and $0$ otherwise. Then each $v_j$ encodes the elements of $A_j$. Moreover, the $k\times n$ matrix $M$ whose $j$ th column is $v_j$, satisfies $$ M^T M=(m_{ij})=A, \ \ m_{ij} = |A_i\cap A_j|. $$ As darij grinberg commented, such matrix is positive semidefinite. Therefore, $\det A \geq 0$.
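An empirical illustration with random sets (numpy sketch): encoding each $A_j$ as an indicator column of $M$ gives $A=M^TM$, hence $\det A\ge 0$:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.integers(0, 2, size=(12, 5))      # column j = indicator vector of A_j
A = M.T @ M                               # A[i, j] = |A_i ∩ A_j|
print(np.linalg.det(A))                   # nonnegative (up to rounding)
print(np.linalg.eigvalsh(A))              # all eigenvalues >= 0 (up to rounding)
```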
How to find $w_s$ and $w_f$ when I have to Lagrange multipliers
To solve these problems, you can use the fact that either the multiplier is zero or the constraint is binding (or both). Assuming that the constraints are binding leaves you with the following system of equations $$ \begin{split} -0.75 +\frac{\lambda \cdot 0.25}{2 \cdot \sqrt{w_s}} + \frac{\mu \cdot 0.75}{2 \cdot \sqrt{w_s}} &= 0 \\ -0.25 -\frac{\lambda \cdot 0.25}{2 \cdot \sqrt{w_f}} + \frac{\mu \cdot 0.25}{2 \cdot \sqrt{w_f}} &= 0 \\ 0.25( \sqrt{w_s} - \sqrt{w_f}) -1 &= 0 \\ 0.75 \cdot \sqrt{w_s} + 0.25 \cdot \sqrt{w_f} - 1 - \bar{U} &= 0. \end{split} $$ The last two can be solved for $w_s,w_f$ (solution is $w_s=(\bar U +2)^2$, $w_f=(\bar U -2)^2$). To check whether the assumption of binding constraints is justified, you can solve for the multipliers; they need to be nonnegative.
If $\int_0^1 x^n d\mu = 0$ for $n = 0, 1, 2, 3, ...$ then $\mu = 0$
Let $L(f)=\int_{[0,1]}fd\mu$ for $f\in C[0,1]$. We are to show that $L=0$, in other words, $L(f)=0$ for any $f\in C[0,1]$. Consider the set \begin{align*} \mathcal{D}=\{f\in C[0,1]: L(f)=0\}. \end{align*} If $f$ is a polynomial, then $L(f)=0$, and hence $\mathcal{D}$ contains all the polynomials. Therefore, $\mathcal{D}$ is dense in $C[0,1]$ by the Weierstrass approximation theorem. Now we note that \begin{align*} |L(f)|\leq\int_{[0,1]}|f|d|\mu|\leq|\mu|([0,1])\cdot\|f\|_{C[0,1]}, \end{align*} this implies $L$ is continuous on $C[0,1]$. Since $L$ is continuous, $\mathcal{D}=L^{-1}(\{0\})$ is closed; being closed and dense, $\mathcal{D}=C[0,1]$, so $L=0$. Finally, $\int f\,d\mu=0$ for all $f\in C[0,1]$ forces $\mu=0$, by the uniqueness part of the Riesz representation theorem.
Does $|x|^p$ with $0<p<1$ satisfy the triangle inequality on $\mathbb{R}$?
By homogeneity, we try to see whether $|1+t|^p\leq 1+|t|^p$ for all $t$, where $0<p<1$. We first look whether it's the case if $t\geq 0$. Define the map $f(t):=(1+t)^p-t^p-1$ on $\mathbb R_{\geq 0}$. The derivative is $f'(t)=p((1+t)^{p-1}-t^{p-1})\leq 0$, so $f(t)\leq f(0)=0$ and the inequality holds for $t\geq 0$. To deal with the general case, note that $$|1+t|^p\leq |1+|t||^p\leq 1+||t||^p=1+|t|^p.$$ This inequality is useful to deal with the $L^p$ spaces for $0<p<1$, showing that the map $d(f,g)=\int |f-g|^pd\mu$ is a metric.
Trisect a square on its sides. How many isosceles triangles can you find by connecting three of the dots?
Some hints: The peak of the triangle can be a vertex or a trisection point. It is sufficient to analyze these two cases. For the first case take the upper left vertex $A$ of the square, and write next to each other marked point its distance to $A$. Looking at the resulting numbers you should be able to find the number of isosceles triangles with peak at $A$. Same for the marked point $B$ at one third of the upper edge of the square. Now think of how many marked points of type $A$ you have, and how many marked points of type $B$.
Solving a system of equations involving holomorphic functions
This is not possible without more restrictions on the $h_{ij}$. For instance, in the case $n=1$, you just have a single real-analytic function $h=h_{11}:\mathbb{C}\to(0,\infty)$ and want to find a holomorphic function $p=p_1$ such that $p'+\overline{p'}=h$. That is, you want $p$ such that $2\operatorname{Re}(p')=h$. This is possible only if $h$ is harmonic, since $p'$ will be holomorphic and the real part of a holomorphic function in one variable is harmonic. In fact, $h$ must be constant in this case, since a bounded-below harmonic function on all of $\mathbb{C}$ is constant. I don't know how these restrictions generalize for $n>1$, though it seems a similar argument will show that $h_{ii}$ must be constant for each $i$.
Linear dependence after a linear transformation
Since $l_1,l_2,l_3, l_4$ are linearly dependent, there exists some nontrivial set of scalars $(c_1,c_2,c_3,c_4)$ such that $c_1l_1 + c_2l_2 +c_3l_3 + c_4l_4 = 0$. Since $\phi$ is a linear map, $\phi(c_1l_1 + c_2l_2 +c_3l_3 + c_4l_4) = c_1\phi(l_1) + c_2\phi(l_2) + c_3\phi(l_3) + c_4\phi(l_4) = 0$, so $\phi(l_1),\phi(l_2),\phi(l_3),\phi(l_4)$ are linearly dependent. 2) Your example seems fine, but you could probably find something even simpler.
Extraneous and Missing Solution Confusion
After implicit differentiation I came to $$y'x(2xy+1)+y(2xy+1)=0\tag{1}$$ $$(2xy+1)(y'x+y)=0$$ This is correct. Now rather than solving this for $y'$, you could simply substitute $y'=-1$, since this is what you want, and add the equation of the curve to get the following system: $$\left\{\begin{array}{rcl} \left(\color{red}{2xy+1}\right)\left(\color{blue}{y-x}\right) & = & 0 \\ (xy)^2+xy & = & 2 \end{array}\right.$$ You need to consider both equations since you're looking for points: on the curve (i.e. satisfying the second equation); and with the correct slope (i.e. satisfying the first equation). Now from the first equation you have either $\color{red}{xy=-\tfrac{1}{2}}$ or $\color{blue}{y=x}$. You say this first case corresponds to an "extraneous solution" but it's not a solution since no points lying on the curve satisfy this equation. Solutions are only those points satisfying both equations. Substitution of $y=x$ into the second equation leads to $x=\pm 1$ which then leads to the two correct solutions, namely the points $(1,1)$ and $(-1,-1)$.
Prove that $D$ is a core for $T_f$.
How does the Holder inequality argument lead us to conclude that $L^q \subset D(T_f)$? Let $g \in L^q$. Since $\|g\|_2 \leq \|1\|_p \|g\|_q < \infty$ ($M$ has finite measure), you know that $L^q \subseteq L^2$. By $\|fg\|_2 \leq \|f\|_p \|g\|_q < \infty$, you know that $L^q$ is furthermore a subset of $D(T_f)$ (note that $D(T_f)$ is by definition a subset of $L^2$; this is why the first application of Hölder was needed). How does the fact that each $g_n$ is in $L^q$ allow us to conclude that $L^q$ is a core for $T_f$? By the first part, we know that the restriction of $T_f$ to $L^q$ is well-defined. You should also convince yourself that $T_f$ is densely defined and closed. In this setting, the question of whether the closure of $T_f \upharpoonright L^q$ equals $T_f$ makes sense. What the proof shows is in fact $\overline{T_f \upharpoonright L^q} = T_f$, by showing that $\overline{\Gamma (T_f \upharpoonright L^q)} = \Gamma(T_f)$, where $\Gamma(A)$ denotes the graph of the function $A$. Since $T_f$ is closed, we know that $\overline{\Gamma (T_f \upharpoonright L^q)} \subseteq \Gamma(T_f)$, so we (and accordingly Reed and Simon) only have to prove $\supseteq$. Therefore, let $(g, fg) \in \Gamma(T_f)$, i.e. $g \in D(T_f)$ and $fg \in L^2$. Then, define $g_n := \chi_{\{|g| \leq n\}} g$. Again, by Hölder you get $$ \|g_n\|_q \leq \| \chi_{\{|g| \leq n\}} \|_r \|g\|_2 < \infty, $$ with $\frac{1}{q} = \frac{1}{r} + \frac{1}{2}$, i.e. $r = \frac{2q}{2 - q} > 0$. Note that since $p > 2$ by assumption, we must have $q < 2$ so the expression for $r$ is well-defined. But now you have found a sequence $(g_n, f g_n) \in \Gamma(T_f \upharpoonright L^q)$ converging to $(g, fg)$ in the product topology. I also don't have much of an intuition for what the core of an unbounded operator is, if someone could provide any intuition that would be great. Here is my two cents. If it doesn't satisfy you, consider reposting this specific part of your original question: The Proposition deals with two different types of domains: $D(T_f)$ on the one hand and $L^q$ and $D$ on the other hand. $D(T_f)$ sometimes is called "maximal domain of definition" for obvious reasons. $L^q$ and $D$ are "cores" for $T_f$. "Cores" to $D(T_f)$ correspond to the dense subspaces of $D(T_f)$. The philosophy behind considering cores: work on a smaller "handy" set ($L^q$ is handier than just $D(T_f)$) without "losing" your original operator - you can always go back to $T_f$ by considering the closure $\overline{T_f \upharpoonright D}$. Note that for the closure to be well-defined it is essential that $T_f$ is a closed operator.
If dim$(W)+$ dim$(U)=$ dim$(V)$ and $W+U=V$, then $W\cap U=\{0\}$.
If you know that the sum of subspaces is the span of their union, then it should be clear enough that the dimension of the sum is the sum of the dimensions minus the dimension of the intersection. What does this look like in symbols: $$ \dim(V)=\dim(U+W)=\dim(U)+\dim (W)-\dim(U\cap W) $$ now if the above is equal to $\dim(U)+\dim(W)$, can you conclude?
Bound the pseudo-inverse matrix of a product
There is no such bound. Let $A=\begin{pmatrix}1&0\end{pmatrix}$ be orthogonal projection from plane to line. The norm of $A^\dagger$ is $1$. Next, let $B=\begin{pmatrix}\cos t\\ \sin t\end{pmatrix} $ be isometric embedding of line into plane. The norm of $B^\dagger$ is also $1$, no matter what $t$ is. But $AB=\begin{pmatrix} \cos t\end{pmatrix}$ has pseudoinverse of norm $1/|\cos t|$ as long as $\cos t$ is not zero.
Tangent of a unit circle
$r(t)=(\cos(t),\sin(t))$ is the unit circle parametrised so that it traverses the circle counter-clockwise. $r'(t)=(-\sin(t),\cos(t))$ is the tangent at the point $r(t)=(\cos(t),\sin(t))$. In your notation, if $(a,b)$ is on the unit circle then $(-b,a)$ is the tangent vector at $(a,b)$.
What is $\lim\limits_{n\rightarrow\infty}\left(1-\left(1-\frac1n\right)^{f(n)}\right)^{2f(n)}$ when $f(n)$ grows faster than $n$?
tl;dr: the limit could be anything in $[0,1]$. Let $f$ be a function of the form $$ f(n) = n g(n) $$ with $g(n)\xrightarrow[n\to\infty]{} \infty$. Then we can write $$ \left(1-\left(1-\frac1n\right)^{f(n)}\right)^{2f(n)} = e^{ 2f(n) \ln \left(1-\left(1-\frac1n\right)^{f(n)}\right) } = e^{ 2n g(n) \ln \left(1-\left(1-\frac1n\right)^{n g(n)}\right) }. $$ Let's break it down. $$ \left(1-\frac1n\right)^{n g(n)} = e^{g(n)\cdot n \ln\left(1-\frac1n\right) } = e^{g(n) (-1+o(1)) } \xrightarrow[n\to\infty]{} 0 $$ since $g(n)\to\infty$. Therefore, we can use the Taylor expansion of $\ln$ to get: $$ 2n g(n) \ln \left(1-\left(1-\frac1n\right)^{n g(n)}\right) = - 2n g(n) \left(1-\frac1n\right)^{n g(n)} = - 2n g(n)e^{-g(n) (1+o(1)) } $$ This gives us some intuition. This can go, depending on $g(n)$, to either $0$, $-\infty$, or basically any constant $c<0$. For instance, take $g(n) = \ln n$: then $- 2n g(n)e^{-g(n) (1+o(1)) } = -2(1+o(1))\ln n\xrightarrow[n\to\infty]{} -\infty$, and the overall limit will be $0$. (After taking the exponential). $g(n) = 2\ln n$: then $- 2n g(n)e^{-g(n) (1+o(1)) } = -2(1+o(1))\frac{\ln n}{n}\xrightarrow[n\to\infty]{} 0$, and the overall limit will be $e^0=1$. $g(n) = \ln n + \ln \ln n$: then one can check (being a bit more careful in the above, that is in the $o(1)$ term) that $2n g(n) \ln \left(1-\left(1-\frac1n\right)^{n g(n)}\right)\xrightarrow[n\to\infty]{} -2$, and the overall limit will be $e^{-2}$.
Special case of the Eilenberg-Watts theorem for the base ring
First, let me point out that you shouldn't really expect there to be such a characterization that does not involve some extra data, because the $R$-module isomorphism from $M$ to $R$ is not unique. So, you are going to need some sort of extra data to encode the choice of that isomorphism. A related issue is that if you allow noncommutative rings, the property you are asking about is not Morita-invariant: if you identify $\text{Mod}(R)$ with $\text{Mod}(M_2(R))$ in the canonical way, then a functor $F:\text{Mod}(A)\to\text{Mod}(R)$ with your property would not have that property as a functor $\text{Mod}(A)\to\text{Mod}(M_2(R))$. So you are going to need to use something about the commutativity of your rings, which basically means using the monoidal structure. You don't need to go all the way to giving $F$ a strong monoidal structure, though: it suffices for it just to preserve the unit. That is, the data of an isomorphism from $F$ to a functor $R\otimes_A -$ for some $A$-algebra structure on $R$ is equivalent to the data of an isomorphism $F(1_{\text{Mod}(A)})\to 1_{\text{Mod}(R)}$. Indeed, identifying $F$ with $M\otimes_A-$ for some bimodule $M$, an isomorphism $F(1_{\text{Mod}(A)})\to 1_{\text{Mod}(R)}$ is just an isomorphism $M\to R$ as an $R$-module. It is then automatic that the $A$-module structure on $M$ comes from a homomorphism $A\to R$, since the $A$-module structure must commute with the $R$-module structure and every $R$-module endomorphism of $M$ is multiplication by an element of $R$.
Solution to linear congruences
See the entry in Wikipedia: "Linear congruence theorem." What you might be recalling is the fact that $a$ has no inverse mod $m$ unless $\gcd (a, m) = 1$. But here, the following applies: if $d = \gcd (a,m) > 1$, then solutions exist only when $d\mid b$, and in that case $${\small \dfrac ad }x\equiv {\small \dfrac bd} \left(\mod \small\frac md\right)$$
torus crossing directions & Conway polynomial
For a reference, you might consider looking at chapter 8 of Lickorish's book An Introduction to Knot Theory. In short, the Conway polynomial is from a normalized Alexander polynomial, and the Alexander polynomial is invariant under mirror image. I think it's Fox's A Quick Trip Through Knot Theory that gives a nice reason for this, but this follows from the fact that the Wirtinger presentation can have relations that are either from loops above or below a crossing, and one can pass between these group presentations by formally replacing each generator with its inverse; since these are group presentations for the same group, this is reflected in the Alexander polynomial by a $t\leftrightarrow t^{-1}$ symmetry. A definition of the Conway polynomial is that it is $\det(t^{1/2}A-t^{-1/2}A^T)$ with $x=t^{-1/2}-t^{1/2}$, where $A$ is a Seifert matrix for a given link, which for a Seifert surface $S\subset S^3$ is the matrix of a form $H_1(S)\times H_1(S)\to \mathbb{Z}$ given by $([\alpha],[\beta])\mapsto \operatorname{lk}(\alpha,\beta^+)$ when $\alpha,\beta$ are simple closed curves and $\beta^+$ is the curve pushed off $S$ in the positive normal direction with respect to the orientation of the surface. The corresponding Seifert matrix for the mirror image of everything is $-A^T$, as outlined in this answer: Seifert Matrix of Amphichiral Knots. Thus, the Conway polynomial of a mirror image is \begin{align} \det(t^{1/2}(-A^T)-t^{-1/2}(-A^T)^T)&amp;=\det(-(t^{1/2}A-t^{-1/2}A^T)^T)\\ &amp;=(-1)^n\det(t^{1/2}A-t^{-1/2}A^T) \end{align} where $n=\operatorname{rank}(H_1(S))$, which at least for a knot is even. Hence, for a knot the Conway polynomial is invariant under mirror images. (In general, this is true for links with an odd number of components.) So: the Conway polynomial of a knot and its mirror image are the same. In particular, the two chiral forms of the trefoil both have $1+x^2$ as their Conway polynomials.
Trigonometric integral with different angles and different powers
As you probably know, $$\cos 3x=4\cos^3 x-3\cos x$$ $$\implies \cos^3 x=\frac{\cos 3x +3\cos x}{4}$$ Putting this in the integral, and breaking it into two parts, yields $$\frac{1}{4}\int \cos {3x} \sin^4{3x} dx+ \frac{3}{4} \int \cos x \sin^4{3x}dx$$ The first one is trivial: just substitute $\sin 3x=t$ to get $\frac{\sin^5 3x}{60}$. For the next one, notice that $$\sin 3x = 3\sin x -4\sin^3 x$$ Putting it into the integral gives $$ \int \cos x(3\sin x-4 \sin^3 x)^4 dx$$ You may put $\sin x =t$ to get $$\int (3t-4t^3)^4dt$$ which you may expand via the binomial theorem and integrate (it will be messy), so W|A is probably right.
Showing $\sum_{j=1}^{n}\left(\frac{\partial}{\partial x_{j}}\right)^{m}$ is not elliptic for odd $m$
Indeed, the factor of $-i$ makes no difference: it's all about $\sum \xi_j^m$ being zero or not. Your proof goes wrong when you "distribute" the absolute value over a sum. When $m$ is even, each $\xi_j^m$ is nonnegative, so the only way for their sum to be $0$ is if every term is $0$. When $m$ is odd, the above no longer applies, and counterexamples such as $\xi=(1,-1,0,\dots,0)$ present themselves.
Expected number of cuts to partition the interval $[0, n]$ into segments of unit length or less
You need at least one point in each interval of the form $(k, k+1)$ for integer $k$. The coupon collector argument says this takes about $n \log n + \gamma n$ points on average. If you get one point in every interval of the form $(k, k+\frac 12)$ and $(k+\frac 12, k+1)$, that is guaranteed to be sufficient. The same coupon collector argument says this takes about $2n \log(2n) + 2\gamma n$ points, which is also $O(n \log n)$, so the answer is $\Theta(n \log n)$. Thanks to Jorge Fernández Hidalgo for the half unit idea.
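If you want to see the scaling empirically, here is a minimal simulation sketch in R (illustrative only; the function and variable names are my own):
sim_cuts <- function(n) {
  pts <- numeric(0)                        # cut points placed so far
  repeat {
    pts <- c(pts, runif(1, 0, n))          # drop one uniform point
    gaps <- diff(sort(c(0, pts, n)))       # current segment lengths
    if (max(gaps) <= 1) return(length(pts))
  }
}
mean(replicate(1000, sim_cuts(10)))        # compare with 10*log(10) ~ 23 and 20*log(20) ~ 60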
Density of the class of "semialgebraic" sets in the class of compact sets
Sure, because finite sets are dense with respect to the Hausdorff distance (cover $A$ by $\epsilon$-balls, pick a finite subcover by compactness, take the centers of these balls). This even gives you that algebraic sets are dense with respect to Hausdorff distance.
Explicit homeomorphism from $I \times I$ to itself that maps $I\times \{0\} \cup \{0,1\} \times I$ to $I \times \{0\}$
Here is an illustration of a piecewise linear homeomorphism with the desired property defined on 3 pieces: two triangles and a trapezoid. On each piece you can obtain an explicit formula by putting any three vertices and their images into the general form of an affine linear map.
Find the kernel and range of the following transformation
Before seeing the answer, please see this source. Given, $$ T\cdot \begin{bmatrix} v_1\\ v_2\\ v_3\\ \end{bmatrix} = \begin{bmatrix}v_1\\ v_2\\ \end{bmatrix} $$ Calculating the nullspace is easy. $$ T\cdot \begin{bmatrix} 0\\ 0\\ k\\ \end{bmatrix} = \begin{bmatrix}0\\ 0\\ \end{bmatrix} $$ Therefore $Nullspace(T) = span\left\{\begin{bmatrix} 0 \\ 0\\ 1\\ \end{bmatrix}\right\}$ To get the column space or range, we need to find the matrix behind this transformation. We are going to look at what $T$ does to each basis vector. $$ T\cdot \begin{bmatrix} 1\\ 0\\ 0\\ \end{bmatrix} = \begin{bmatrix}1\\ 0\\ \end{bmatrix} $$ $$ T\cdot \begin{bmatrix} 0\\ 1\\ 0\\ \end{bmatrix} = \begin{bmatrix}0\\ 1\\ \end{bmatrix} $$ $$ T\cdot \begin{bmatrix} 0\\ 0\\ 1\\ \end{bmatrix} = \begin{bmatrix}0\\ 0\\ \end{bmatrix} $$ Therefore the matrix looks like $T = \begin{bmatrix}1 & 0& 0 \\ 0 & 1& 0\end{bmatrix}$ The matrix $T$ above is already in reduced row-echelon form. Therefore $$ Col(T) = span\left\{ \begin{bmatrix}1 \\0 \end{bmatrix}, \begin{bmatrix}0 \\1 \end{bmatrix} \right\} $$
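As a quick numerical sanity check (an illustrative sketch in base R; the matrix is the one found above):
A <- rbind(c(1, 0, 0),
           c(0, 1, 0))    # the matrix of T
A %*% c(0, 0, 7)          # any multiple of e3 maps to zero: in the nullspace
A %*% c(1, 0, 0)          # gives e1 of R^2
A %*% c(0, 1, 0)          # gives e2 of R^2, so the range is all of R^2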
Attempting to draw G(x) from G'(x)
Another way to attack this problem is to draw a tangent field diagram. This is usually done when the gradient is a function of both $x$ and $y$, but there is no reason not to use it here. The basic idea is to draw a short line with gradient $G'(x)$ at every point $(x,y)$: You then take your starting point $(0,0)$ and draw a curve that fits the general pattern.
Covariance between fitted values and residuals
I assume that you meant their dot product is zero. If so, let $\hat{y} = Hy$ where $H = X(X'X)^{-1}X'$ (the hat matrix from OLS). Note that $H$ is symmetric ($H' = H$) and idempotent, i.e. $H^2 = H.$ Then, $$\hat y'e = y'H'(I - H)y = y'(H - H^2)y = y'(H - H)y = 0.$$
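A quick empirical check in R (an illustrative sketch on simulated data):
set.seed(1)
x <- rnorm(20)
y <- 2 * x + rnorm(20)           # arbitrary simulated regression data
fit <- lm(y ~ x)
sum(fitted(fit) * resid(fit))    # ~ 0 up to floating-point error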
$\prod_{k \in Q}{\frac{k^2-1}{k^2+1}}$ is a rational number for $Q \subset \mathbb{N}$
If $1\in Q$, then the product is $0$ -- the $k=1$ factor kills it immediately. That's indeed an uninteresting case. Otherwise, since each term decreases the product, the smallest result we can get is to include all $k\ge 2$. Now that smallest result, $$ \alpha = \prod_{k=2}^{\infty} \frac{k^2-1}{k^2+1}, $$ is positive, because we can take logarithms factor by factor and get $$ \tag{*} -\log \alpha = \sum_{k=2}^{\infty} \log\frac{k^2+1}{k^2-1} = \sum_{k=2}^{\infty} \log\Bigl(1+\frac{2}{k^2-1}\Bigr) < \frac{2\pi^2}{6} $$ because $\frac{1}{k^2-1} < \frac{1}{(k-1)^2}$ when $k\ge 2$, and $\log(1+x)\le x$. However, every real number $\beta\in(\alpha,1)$ -- and in particular every rational number in that interval -- can be produced by some $Q$. All we have to do is to find a subset of the $\log(1+2/(k^2-1))$ terms that sum to $-\log \beta$. This is always possible in this case -- we can simply start from $k=2$ and select every term that doesn't make the sum so far too large. There's a subtlety, however: this "greedy" procedure doesn't work for all series. For example, we clearly can't pick a subset of the terms in $\sum_{k=1}^{\infty} 10^{-k}$ to reach an arbitrary number between $0$ and $0.111\ldots$. What saves us in this case is that each term in (*) is at least half the size of the previous term. (This is easily the case asymptotically, and can be verified by direct computation for the first few terms.) Thus whenever we skip a term because we're too close to the target, the sum of the terms we haven't considered yet will be larger than the term we skipped. Therefore the difference between the partial sum and our target number remains bounded by the sum of not-yet-considered terms -- and that bound goes to $0$. Note that the resulting $Q$ is a computable subset of $\mathbb N$, as long as we can effectively compare $\beta$ to the partial products that would result if we pick a candidate term. That's clearly the case when $\beta$ is rational, and also for a large class of interesting irrational numbers. (We don't need to compute with the logarithms during the practical calculation; that was just for proving the procedure gives the right result.) For example, suppose we want to find a $Q$ where the product is exactly $\frac12$. We take the factors one by one: $k=2$, candidate factor $\frac{3}{5}$. We'll include this factor. The partial product so far is $\frac{3}{5}$. $k=3$, candidate factor $\frac{8}{10}$. If we include this factor the product would be $\frac{3}{5}\cdot\frac{8}{10}=\frac{24}{50}$, which is too small. So we skip it. $k=4$, candidate factor $\frac{15}{17}$. If we include this factor, the partial product becomes $\frac{3}{5}\cdot\frac{15}{17} = \frac{9}{17}$, which is $>\frac12$, so include this factor. $k=5$, candidate factor $\frac{24}{26}$. Partial product would be $\frac{9}{17}\cdot\frac{24}{26} = \frac{216}{442}$, which is too small. Skip. $k=6$, candidate factor $\frac{35}{37}$, new partial product $\frac{9}{17}\cdot\frac{35}{37} = \frac{315}{629}$, just barely over $\frac12$. Include. $k=7$, candidate factor $\frac{48}{50}$, new partial product would be $\frac{315\cdot 48}{629\cdot 50}=\frac{15120}{31450} < \frac12$. Skip. ... all factors end up skipped until ... $k=35$, new partial product would be $\frac{315\cdot1224}{629\cdot1226} = \frac{385560}{771154} < \frac12$, so skip. $k=36$, new partial product would be $\frac{315\cdot1295}{629\cdot1297} = \frac{407925}{815813} = \frac{11025}{22049} > \frac12$, so include. 
The next included factor ends up being for $k=210$. Continuing this way, we find $Q=\{2,4,6,36, 210, 44100, \ldots\}$, so $$ \frac12 = \frac{2^2-1}{2^2+1} \times \frac{4^2-1}{4^2+1} \times \frac{6^2-1}{6^2+1} \times \frac{36^2-1}{36^2+1} \times \frac{210^2-1}{210^2+1} \times \frac{44100^2-1}{44100^2+1} \times \cdots $$ This is far from the only $Q$ that produces $\frac12$, though.
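The greedy procedure is easy to automate; here is an illustrative sketch in R (double precision happens to suffice for the factors shown above, though exact rational arithmetic would be more robust):
beta <- 1/2                # the target value
p <- 1; Q <- integer(0)
for (k in 2:50000) {
  f <- (k^2 - 1) / (k^2 + 1)
  if (p * f >= beta) {     # include a factor only if we stay at or above the target
    p <- p * f
    Q <- c(Q, k)
  }
}
Q                          # 2 4 6 36 210 44100
p                          # just barely above 1/2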
The "beach problem": does anyone know it? or know how to solve it?
For old problems, the "Bundeswettbewerb Informatik" offers detailed solutions. In your case you find the solution here, on pages 16 to 38 (!). An optimal algorithm is also presented, which leads to the conclusion that 17 is the maximum.
Prove that $\sqrt{10} - \sqrt6 - \sqrt5 + \sqrt3$ is irrational
Let $a=\sqrt6+\sqrt5$ and $b=\sqrt{10}+\sqrt3$. You want to show that $b-a\notin\mathbb Q$. Notice that $$ \begin{align*} a^2&amp;=11+2\sqrt{30}\\ b^2&amp;=13+2\sqrt{30} \end{align*} $$ hence $$b^2-a^2=(b-a)(b+a)=2$$ and $b+a=\frac2{b-a}$. Now if $b-a\in\mathbb Q$, then also $b+a=\frac2{b-a}\in\mathbb Q$ and $$b=\frac{(b-a)+(b+a)}2\in\mathbb Q,$$ a contradiction. (To prove that $b$ is irrational you can use methods from the thread you linked.)
Galois group of $(x^2 + 2x - 1)(x^3 + 3x^2 + 3x - 6)$ over $\Bbb Q$
Both things you've noticed are of help. The second note just shows that the roots of $(x+1)^{2}-2$ are $x=\pm\sqrt{2}-1$, because $x'=x+1$ and $x'=\pm\sqrt{2}$ are the roots of $(x')^{2}-2$, and you can show that $K_{1}=\mathbb{Q}(\sqrt{2})=\mathbb{Q}(\sqrt{2}-1)$. Likewise for the second factor (where the splitting field is $K_{2}=\mathbb{Q}(\sqrt[3]{7},\zeta_{3})$ where $\zeta_{3}$ is a primitive third root of unity). Now just show that $$\mathbb{Q}(\sqrt[3]{7},\zeta_{3})\cap\mathbb{Q}(\sqrt{2})=\mathbb{Q},$$ and consider the compositum $L=K_{1}K_{2}=\mathbb{Q}(\sqrt[3]{7},\zeta_{3},\sqrt{2})$ as an extension over $\mathbb{Q}$. $L/\mathbb{Q}$ is Galois and we have the following isomorphism of Galois groups $$\operatorname{Gal}(L/\mathbb{Q})\stackrel{\simeq}{\longrightarrow}\operatorname{Gal}(K_{1}/\mathbb{Q})\times\operatorname{Gal}(K_{2}/\mathbb{Q}),$$ which you can use to deduce the answer.
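(For completeness: the intersection is trivial because the unique quadratic subfield of $K_{2}$ is $\mathbb{Q}(\sqrt{-3})\neq\mathbb{Q}(\sqrt{2})$, and then $\operatorname{Gal}(K_{1}/\mathbb{Q})\cong\mathbb{Z}/2\mathbb{Z}$ and $\operatorname{Gal}(K_{2}/\mathbb{Q})\cong S_{3}$ give a Galois group isomorphic to $\mathbb{Z}/2\mathbb{Z}\times S_{3}$, of order $12$.)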
Existence of a root for a specific function
Let $\eta:I\rightarrow I$ be such that $2^{n}|f(\eta^{(n)}(x))|\leq|f(x)|$, where $\eta^{(n)}=\eta\circ\cdots\circ\eta$ ($n$ times). Fix an $x_{0}\in I$; then the sequence $(\eta^{(n)}(x_{0}))$ is a bounded sequence in $I$, so by Bolzano–Weierstrass there exists some subsequence $(n_{k})$ (depending on $x_{0}$, but this doesn't matter) such that $\eta^{(n_{k})}(x_{0})\rightarrow L(x_{0})$. We also have \begin{align*} |f(\eta^{(n_{k})}(x_{0}))|\leq\dfrac{1}{2^{n_{k}}}|f(x_{0})|. \end{align*} Taking $k\rightarrow\infty$, continuity of $f$ gives \begin{align*} |f(L(x_{0}))|\leq 0, \end{align*} so $f(L(x_{0}))=0$.
$\frac{d}{du}\left(\frac{1}{u}\right) = -\frac{1}{u^2}$ for $u \in \mathbb{R}\setminus\{0\}$
Since $1 = u\cdot\frac1u$, differentiating yields $$0 = \frac{\mathsf d}{\mathsf du}[1] = \frac{\mathsf d}{\mathsf du}\left[u\cdot \frac1u\right]. \tag 1$$ By the product rule, the RHS of (1) is $$u\frac{\mathsf d}{\mathsf du}\left[\frac1u\right] + \frac{\mathsf d}{\mathsf du}[u]\frac1u = u\frac{\mathsf d}{\mathsf du}\left[\frac1u\right]+\frac1u. \tag 2$$ Equating $(1)$ and $(2)$, we have $$\frac{\mathsf d}{\mathsf du}\left[\frac1u\right]=-\frac1{u^2}. $$
why is the domain of the function in the picture >=0, why is it not just !=0
The problem is that Wolfram Alpha isn't interpreting your input as you intend. For Mathematica, the domain of $x$ raised to a non-integral power is taken to be the non-negative reals. This isn't correct according to many people's interpretation of that formula. To make this match what you intend, use the "surd" command instead. You have the following Wolfram Alpha input: domain z=((x^2+y^2-2x)/(2y-y^2-x^2))^(1/131) I believe what you actually want is (in Wolfram Alpha): domain z=surd((x^2+y^2-2x)/(2y-y^2-x^2),131) Annoyingly, it looks like both expressions are interpreted the same way, but this is not the case.
Is a t distribution for a certain degree of freedom equivalent to the sample mean distribution for the corresponding sample size?
It seems you may be confusing $Z = \frac{\bar X - \mu_0}{\sigma/\sqrt{n}}$ and $T = \frac{\bar X - \mu_0}{S/\sqrt{n}}.$ The crucial difference is that only the numerator of $Z$ is random, while both numerator and denominator of $T$ are random. So you cannot talk about the number of SDs $\bar X$ is from $\mu_0$ without being clear whether the "SD" is $\sigma$ or its estimate $S.$ Suppose we are testing $H_0: \mu = 100$ against $H_a: \mu \ne 100$ at the 5% level and that $n = 5$ and $\sigma = 15.$ If we know that $\sigma = 15,$ then we reject $H_0$ when $|Z| > 1.96$. If we do not know the value of $\sigma,$ then we reject when $|T| > 2.776.$ The figure below shows $S$ plotted against $\bar X$ for 200,000 tests in which the data are sampled from $Norm(100, 15)$. For the t tests, the 5% of the points in red are the $(\bar X, S)$ points for which $H_0$ is incorrectly rejected. It is not just the distance between $\bar X$ and $\mu_0 = 100$ that matters; the random variable $S$ must be taken into account. By contrast, for the z tests, the 5% of the points outside the vertical green lines are the ones for which $H_0$ is incorrectly rejected. (The green lines are vertical because the value of $S$ is not relevant.) Notes: (1) As a bonus, the figure illustrates that for normal data the random variables $\bar X$ and $S$ are independent. (2) The R code to make the figure is provided below.
m = 200000; n = 5; x = rnorm(m*n, 100, 15)
DTA = matrix(x, nrow=m); a = rowMeans(DTA); s = apply(DTA, 1, sd)
plot(a, s, pch=".", xlab="Sample Mean", ylab="Sample SD")
t = (a-100)*sqrt(n)/s; t.crit = qt(.975, n-1); cond = (abs(t) > t.crit)
points(a[cond], s[cond], pch=".", col="red")
pm = c(-1,1); abline(v=100+pm*1.96*15/sqrt(n), col="darkgreen", lwd=2)
Proving a biconditional relation between wfs $B$ and $C$, where $C$ is obtained by erasing all quantifiers from $B$
A quantifier that quantifies a variable that otherwise does not occur in the scope of that quantifier is called a 'null' quantifier. For example, in $(\forall x) A(y)$, the quantifier is a null quantifier. The exercise you have to do is to show that once you remove all (and only) the null quantifiers from $B$, the result $C$ is equivalent to $B$. The way they phrase this: 'erase all quantifiers whose scope does not contain x free' is a little confusing, but what they mean is this: Say you have the following statement: $(\forall x)(\forall y)(A(y) \rightarrow (\exists x)(B(x)\land C(x,x)))$ The $x$'s in $B(x)$ and $C(x,x)$ are bound by the $(\exists x)$, and the $(\forall x)$ is in fact a null quantifier. So: even though the $B(x)$ and $C(x,x)$ occur in the scope of the $(\forall x)$, they do not become free once you remove the $(\forall x)$. Indeed, within the scope of $(\forall x)$ they are not free ... they are bound ... just not by the $(\forall x)$ itself. So that's what they mean by 'erase all quantifiers $(\forall x)$ and $(\exists x)$ whose scope does not contain x free'. In your example: $B = (\forall x)(\exists y) A(x,y)$ the $x$ in $A(x,y)$ is not free, exactly because it is bound by the $(\forall x)$ before it. So, the $(\forall x)$ is not a null quantifier, and you therefore shouldn't remove it. The same goes for the $y$. Hence, the result of removing all null quantifiers from $B$ is $B$ itself! Finally, a hint for proving what you need to prove: use structural induction on the syntactical formation of $B$. Use the Null quantification Equivalence laws: $(\forall x) \phi \Leftrightarrow \phi$ $(\exists x) \phi \Leftrightarrow \phi$ for any variable $x$ and formula $\phi$ that does not contain $x$ as a free variable. and indeed use the replacement theorem as you already suspected.
How do we divide $P(x)$ with $ax+b$?
Note that when you divide any polynomial $P(x)$ by $Q(x)$ where $\deg(P)\geq \deg(Q)$, the remainder is a polynomial $R(x)$ with $\deg(R)<\deg(Q)$. So here you have $Q(x)=ax+b$, hence $R(x)=c$, i.e. a constant. By the Division Theorem, you can write $P(x)=(ax+b)D(x)+c$ where $D(x)$ is your quotient. Put $x=\dfrac{-b}{a}$ (assuming $a\neq0$), which gives $c=P\left(\dfrac{-b}{a}\right)$. Hence $D(x)=\dfrac{P(x)-P\left(\dfrac{-b}{a}\right)}{ax+b}$.
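As a worked example: dividing $P(x)=x^2+1$ by $2x-4$ gives $c=P(2)=5$ and $$D(x)=\frac{x^2+1-5}{2x-4}=\frac{(x-2)(x+2)}{2(x-2)}=\frac{x+2}{2},$$ and indeed $(2x-4)\cdot\frac{x+2}{2}+5=x^2+1$.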
What are the Pontryagin duals of additive and multiplicative group of complex number?
For Pontryagin duality, the characters are the continuous homomorphisms to $S^1$, not to all of $\mathbb{C}^\ast$, as characters are defined in some other circumstances. We have $\widehat{G\times H} = \widehat{G}\oplus \widehat{H}$, and it is probably known that $\widehat{\mathbb{R}} \cong \mathbb{R}$ and $\widehat{S^1} \cong \mathbb{Z}$. Writing $(\mathbb{C},+)$ and $(\mathbb{C}^\ast,\cdot)$ as products of simpler groups then gives the answer.
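(Explicitly: $(\mathbb{C},+)\cong\mathbb{R}^2$ as a topological group, so its dual is $\mathbb{R}^2\cong\mathbb{C}$; and $\mathbb{C}^\ast\cong\mathbb{R}_{>0}\times S^1\cong\mathbb{R}\times S^1$ via $z\mapsto(\log|z|,\,z/|z|)$, so its dual is $\mathbb{R}\oplus\mathbb{Z}$.)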
When do we have ker $T$ = im $S$ and im $T$ = ker $S$?
An example where this happens is when $T=P$ and $S=I-P$ where $P$ is any projection.
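For a concrete instance in $\mathbb{R}^2$, take $P=\begin{pmatrix}1&0\\0&0\end{pmatrix}$: then $\ker T=\operatorname{im}S$ is the $y$-axis and $\operatorname{im}T=\ker S$ is the $x$-axis.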
Mechanical vibration: single degree of freedom model of wheel mounted on a spring
This is more a physics problem than a math one. But here are some points that you should consider: The moment of inertia of a filled disk is not $J=mr^2$, but $J=\frac 12 mr^2$, so the kinetic energy of rotation would be half of the kinetic energy of translation: $$K.E.R.=\frac 12 \frac12 mr^2\frac{v^2}{r^2}=\frac 14 mv^2$$ You have $J=mr^2$ only if the mass of the wheel is uniformly distributed on the periphery of the disk. So bodies with the same mass can have different moments of inertia. If the kinetic energy of translation is the same as the kinetic energy of rotation, which formula would you use? Why is the other formula wrong? Let's look at a simpler case: a wheel rolling with constant velocity $v$, with the mass of the disk concentrated in two points on opposite sides of the disk, a mass $m/2$ at each of these points. The kinetic energy of the center of mass is $$K.E._{CM}=\frac12(m/2+m/2)v^2=\frac{mv^2}2$$ Is this equal to the sum of the kinetic energies of the two particles? When one of the particles is at the bottom, the velocity of that particle is $v_b=0$. At the same time, the velocity of the particle at the top is $v_t=2v$. The sum of kinetic energies is then $$\frac 12 \frac m2 0^2+\frac 12\frac m2 (2v)^2=mv^2\ne \frac{mv^2}2$$ So if the sum is $mv^2$ but the kinetic energy of the center of mass is only half of that, where did the other half go? The answer is in the kinetic energy of rotation.
Prove or Disprove, if R and S are partial order relations on a set A, then $R \cup S$ is a partial order relation on A
The example $A=\{1,2,3\}$, $R=\{(1,1),(2,2),(3,3),(1,2)\}$ and $S=\{(1,1),(2,2),(3,3),(2,3)\}$ works, as $R\cup S$ is not transitive anymore: it contains $(1,2)$ and $(2,3)$ but not $(1,3)$. Your proof goes wrong in the transitivity step, where you conclude from $(a,b),(b,c) \in R\cup S$ that $(a,b),(b,c) \in R$ or $(a,b),(b,c)\in S$. It may well be that $(a,b) \in R$ and $(b,c)\in S$, while $(b,c)\not\in R$ and $(a,b)\not\in S$. Actually, your proof is wrong in some other steps as well. For reflexivity you have $(a,a)\in R$ as well as $(a,a)\in S$, hence $(a,a) \in R \cup S$. In the antisymmetry part you make the same (wrong) conclusion as in the transitivity part.
Measures: Atom Definitions
First, as mentioned in the comments, excluding the pathological case is not enough: (Thanks a lot @tomasz!!) Consider the measure space: $$\mu:\mathcal{P}(\mathbb{N}\cup\{\infty\})\to\overline{\mathbb{R}}_+:\quad\mu(\varnothing):=0,\;\mu(\{\infty\}):=1,\;\mu(E):=\infty\text{ else}$$ Then the atoms w.r.t. (1) are all nonempty subsets of $\mathbb{N}$, whereas w.r.t. (2) they are only the singletons; for both (1) and (2), the point at infinity $\{\infty\}$ is additionally an atom. Next, for atoms of finite mass the proof goes as follows: (Thanks a lot @asaf karagila!!) Assume that (1) holds. Let $E\in\Sigma$ be arbitrary. If $\mu(A\cap E)=0$ we're done. Otherwise $\mu(A\cap E)=\mu(A)$ by (1). But then $\mu(A\cap E^c)=0$, as $\mu(A)<\infty$, and we're done too, concluding that (2) holds. Finally, in the sigma-finite case atoms have finite mass and the preceding applies: Assume $A$ is an atom w.r.t. (1) with $\mu(A)=\infty$. By sigma-finiteness, there exists a sequence $A_n\uparrow A$ with $\mu(A_n)<\infty$. By continuity, there exists a natural number $n_0\in\mathbb{N}$ with $\mu(A_{n_0})>0$. By definition of an atom, this implies $\mu(A_{n_0})=\mu(A)=\infty$. Contradiction! Concluding that $\mu(A)<\infty$.
Random variables, calculating probability for an event occurring after a number of attempts
Formula for $P(x=3)$: $\frac{3!}{3^3}=\frac{6}{27}=\frac{2}{9}$. In the second step, we have probability $\frac{2}{3}$ of getting a new letter. Number of strings of length $9$ using one letter: $3$. Number of strings of length $9$ using exactly two letters: $3\cdot (2^9-2)=1530$ strings; in total $1533$ strings out of $3^9=19683$ that do not use all three letters. Use this result to solve the last part.
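(If the last part asks for the probability that all three letters appear among the nine draws -- my reading of the question -- the complement count above gives $1-\frac{1533}{19683}=\frac{18150}{19683}\approx 0.922$.)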
If $f$ is smooth and the inverse Fourier transform is zero
Yes: for $f$ a tempered distribution, so that $\widehat{f}$ makes sense as a tempered distribution, if $\widehat{f}$ is the $0$ distribution, then $f$ must be zero. The smoothness plays no role. Saying $\int f=0$ is ambiguous without further info about $f$, even for smooth $f$, since $f$ certainly need not be in $L^1$, so the literal integral expression for the Fourier transform is ambiguous, and it is not at all the case that every such $f$ (smooth or not) gives a tempered distribution... so it's not clear what the Fourier transform of such $f$ would be. (And then, certainly, if $f=0$ as a distribution and $f$ is smooth, it has pointwise values, which are necessarily all $0$, yes.)
What is the difference between a point and 2 points that are separated by an infinitesimal distance?
The OP seems to be struggling with the notion of limit, when for example a point $A=1$ on the real axis is being approached by a variable point $B=1+\frac{1}{n}$ so that for $n=1$ the point $B$ is located at distance $1$ from $A$ and as $n$ increases, $B$ gets closer and closer to $A$. To speak loosely, for infinite $n$ it seems that $B$ is infinitely close to $A$ or more precisely the distance between $A$ and $B$ is infinitesimal, but still not zero. Where does the limit come in, and how can you claim that in the limit $A$ and $B$ are the same? The above is my understanding of the OP's query. The answer is provided conveniently in the hyperreal framework by the standard part function. The limit is not the value of $B$ for an infinite index $n$ but rather the standard part thereof. Only after taking the standard part can one claim that the points are the same. This is a more direct interpretation of the limit concept than in the epsilon, delta approach where the value of the limit needs to be given in advance, rather than computed directly. For an accessible introduction see Keisler's Elementary Calculus.
A couple of questions using Ramsey Theorem
In general, you want to use Ramsey's theorem when you want to construct a large/infinite set of objects such that any $n$-sized group of them has a certain property. With that in mind, here's an example of how you can use Ramsey's Theorem on your two problems: For (1), draw a blue edge between two elements of the sequence if they are ascending (i.e. the one that comes later in the sequence is the larger of the two) and draw a red edge if they are descending. By Ramsey's Theorem, we are done. (2) is harder. The first obstacle is that there's no obvious disjunction: there are pairs of circles which do not touch yet are closer than $M$ apart, which really throws a wrench into the "obvious" Ramsey approach. However, we can try naively coloring any such pair of circles a third color and working with the 3-color version of Ramsey. There's another obstacle, though. If we somehow get a set of $k$ circles such that each pair has a common point, there's still no guarantee that all of the circles have a common point. We can fix this too: instead of just saying that pairs of circles should have a common point, say that their centers should be less than 1/1000 units apart. If, in a set of $k$ circles, every pair satisfies this, then all $k$ circles definitely have a point in common. With this in mind, color a pair of circles red if their centers are less than 1/1000 units apart, blue if they're more than $M$ units apart, and orange if neither. Now, note that there's a limit on the number of circles we can have in an all-orange subgraph. In particular, any all-orange subgraph must fit inside a huge circle of radius $M$ - otherwise, some pairs will be blue. Also, each circle in an all-orange subgraph must have a small circle of radius 1/1000 to itself - otherwise, some pairs will be red. Therefore, by Pigeonhole, there's some number $B$ such that no all-orange subgraph has size $B$ or greater. Now we can finish it off: by Ramsey's Theorem, there's a number of circles so large that there's either a $B$-sized all-orange subgraph, a $k$-sized all-red subgraph, or a $k$-sized all-blue subgraph. The $B$-sized all-orange subgraph is impossible, so it must be one of the other two possibilities, and we are done. Note that (2) might also be solved just using the Pigeonhole Principle. Ramsey's Theorem is almost never your only recourse, although it's really helpful to know!
IMO 1988 Q6 $a_n = ...$
$$ \frac{n_{i}^2+x^2}{1+n_{i} x}=s \tag{1}$$ $$n_{i}^2+x^2=s(1+n_{i} x) \\ x^2 + (-s n_{i})x+(n_{i}^2-s)=0 \\ x_{1,2}=\frac{1}{2}\bigg(n_{i} s\pm \sqrt{n_{i}^2 s^2+4(s-n_{i}^2)}\bigg) $$ If $n_{i}s \neq 0$, $$x_{1,2}=\frac{n_{i} s}{2}\bigg(1\pm \sqrt{1+4 \bigg( \frac{s-n_{i}^2}{n_{i}^2 s^2} \bigg)} \bigg) \tag{2} $$ Note that the expression under the square root looks a lot like the square of a binomial (here $q$ denotes the other root of the quadratic): $$ 1+4 \bigg( \frac{s-n_{i}^2}{n_{i}^2 s^2}\bigg)=\bigg(1-2 \frac{ q}{n_{i} s} \bigg)^2=1+4 \frac{q^2}{n_{i}^2 s^2}-4\frac{q}{n_{i} s} $$ So we can rewrite $(2)$ as: $$ x_{1} =\frac{n_{i} s}{2} \Bigg(1 - \bigg(1-\frac{2 q}{n_{i} s} \bigg) \Bigg)=q \\ x_{2} =\frac{n_{i} s}{2} \Bigg(1 + \bigg(1-\frac{2 q}{n_{i} s} \bigg) \Bigg)= n_{i} s -q $$ Since equation $(1)$ is symmetric in $n_{i}$ and $x$, the procedure done in $(2)$ to get $x$ can be used equivalently to get $n_{i}$ and vice versa, so we can write $$ n_{i+1}=n_{i} s -n_{i-1} \tag{3} $$ $$ \frac{0^2+n_{i+1}^2}{1+0\cdot n_{i+1}} =n_{i+1}^2=s \iff n_{i+1}=\sqrt{s} $$ The solutions $n_{i}$ must be nonnegative, so the solution $n_{i}=0$ is surely the smallest that can be found; we therefore call it $n_{0}=0$, and then $n_{1}=\sqrt{s}$. We have found two solutions which, compared with all the others for the same $s$, take the smallest values: $$ \forall i>1,\ s>1 : \quad 0=n_{0}< \sqrt{s}=n_{1}<n_{i}. $$ Knowing the first two solutions, we can find the third and so on: $$n_{0}=(0)\sqrt{s} \\ n_{1}=(1) \sqrt{s} \\ n_{2}=s n_{1}-n_{0}=(s) \sqrt{s}\\ n_{3}=s n_{2}-n_{1}=(s^{2}-1)\sqrt{s} \\ \vdots$$ Equation $(3)$ has the following closed-form solution: $$ n_{i}=\frac{\sqrt{s}\bigg( \big(s+\sqrt{s^2-4}\big)^i - \big(s-\sqrt{s^2-4}\big)^i \bigg)}{2^{i} \sqrt{s^2-4}} $$ which for $i>1$ has the following series representation: $$ n_{i}=\sum_{k=0}^{\lfloor (i-1)/2 \rfloor} (-1)^k \binom{i-k-1}{k} s^{\frac{1}{2} (2i-1-4k)} $$ For more, look at: Simple proof for the Legendary Question 6. International Mathematical Olympiad (IMO) 1988
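A quick numeric sanity check of the recurrence $(3)$ in R (illustrative; the choice $s=4$ is arbitrary):
s <- 4
n <- c(0, sqrt(s))                 # n0 = 0, n1 = sqrt(s)
for (i in 2:6) n[i + 1] <- s * n[i] - n[i - 1]
n                                  # 0 2 8 30 112 418 1560
(head(n, -1)^2 + tail(n, -1)^2) / (1 + head(n, -1) * tail(n, -1))   # all equal to s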
Morphisms in the category of group presentations
I've never heard of the "category of group presentations". If I had to guess what was intended by such a name, it would be a category whose objects are diagrams $$ F \to G $$ where $F$ and $G$ are free groups. Correspondingly, the morphisms ought to be morphisms of diagrams: that is, commutative squares $$ \begin{matrix} F &\to& G \\ \downarrow & & \downarrow \\ F' &\to& G' \end{matrix} $$
Notation in Complex Analysis Theorem
The line above the statement of the theorem has the definition $\langle u, v\rangle = \operatorname{Re}(u\overline{v})$. This is a real inner product on $\mathbb{C}$ - the notation $\langle\cdot, \cdot\rangle$ is usually used to denote inner products. More generally, $\langle u, v\rangle = \operatorname{Re}(u^T\overline{v})$ is a real inner product on $\mathbb{C}^n$.
Relation between homogeneous and non-homogeneous system of linear equations
$\alpha = \frac {\det(A_1)}{\det(A)}$ where $\alpha \ne 0$ $\implies \det(A_1) \ne 0$. Then $A_1$ is invertible, meaning $$(A_1)x=0 \\ \implies x=(A_1)^{-1}0 = 0$$ Edit: OK. You asked for an alternate way, so here's a method that doesn't explicitly use Cramer's rule: We know that $$A(\alpha e_1 + 2\alpha e_3) = b$$ And we're asked about the solution to $$A_1y = 0 \\ \pmatrix{b & A_2 & A_3}y = 0 \\ \pmatrix{\alpha Ae_1 + 2\alpha Ae_3 & A_2 & A_3}y=0 \\ \pmatrix{\alpha A_1 + 2\alpha A_3 & A_2 & A_3}y=0$$ We know that $\det(A)\ne 0$ because $Ax=b$ has a unique solution. Let's look at the determinant of $A_1$. It's $$\begin{align}\det(A_1) &= \det\pmatrix{\alpha A_1 + 2\alpha A_3 & A_2 & A_3} \\ &= \det\pmatrix{\alpha A_1 & A_2 & A_3} \\ &= \alpha\det\pmatrix{A_1 & A_2 & A_3} \\ &\ne 0\end{align}$$ Thus $A_1$ is invertible. Thus $y=0$ is the only solution to $A_1y=0$.
Evaluate $\int_\gamma \frac{e^{iz}}{z^2+1} \ dz$
I misread your question initially. This is in fact quite easy to do via the Residue Calculus: note that $\gamma$ is not just a line segment, but a semi-circular contour. Notice that the integrand has poles at $\pm i$. Only the pole at $i$ could matter, and so we have three cases: $R<1$: we don't include any poles, so the integral is zero by Cauchy's Theorem. $R>1$: we include the pole at $i$, so our integral is $2\pi i \operatorname{Res}(i) = (2\pi i)\left(\frac{-i}{2e}\right) = \pi/e$. $R = 1$: Things become a bit complicated because the pole at $i$ is on the boundary of the circle, but we can use the Cauchy Principal Value to associate a value with the integral. Since in your case $R>1$, we can safely say the answer is $\pi/e.$
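For reference, the residue is the standard simple-pole computation: $$\operatorname{Res}_{z=i}\frac{e^{iz}}{z^2+1}=\lim_{z\to i}\,(z-i)\,\frac{e^{iz}}{(z-i)(z+i)}=\frac{e^{i\cdot i}}{2i}=\frac{e^{-1}}{2i}=-\frac{i}{2e}.$$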
Log laws proof using only rational exponents
$b\ln a=b\int_1^a\frac{1}{x}\,dx=\int_1^a\frac{b}{x}\,dx$. Let $u=x^b$. Then $du= bx^{b-1}\,dx$ and $\frac{du}{u}=\frac{b}{x}\,dx$. Therefore, $\int_1^a\frac{b}{x}\,dx = \int_1^{a^b}\frac{du}{u}=\ln {a^b}$.
Determine the complete real solution to a homogeneous differential equation
Well, you could use Variation of Parameters. The fourth order ODE $$ y^{(4)} - 16 y = u' + u $$ has the homogeneous solution $$ y_h(x) = c_1 \cosh(2 x) + c_2 \sinh(2 x) + c_3 \cos(2 x) +c_4 \sin(2 x). $$ Now, for the particular solution, take $$ y_p(x) = a_1(x) \cosh(2 x) + a_2(x) \sinh(2 x) + a_3(x) \cos(2 x) + a_4(x) \sin(2 x), $$ where \begin{align} a_1'(x) \cosh(2 x) + a_2'(x) \sinh(2 x) + a_3'(x) \cos(2 x) + a_4'(x) \sin(2 x) &=0 \\ a_1'(x) \sinh(2 x) + a_2'(x) \cosh(2 x) - a_3'(x) \sin(2 x) + a_4'(x) \cos(2 x)&=0 \\ a_1'(x) \cosh(2 x) + a_2'(x) \sinh(2 x) - a_3'(x) \cos(2 x) - a_4'(x) \sin(2 x)&=0 \end{align} Substituting in the ODE we have $$ a_1'(x) \sinh(2 x) + a_2'(x) \cosh(2 x) + a_3'(x) \sin(2 x) - a_4'(x) \cos(2 x) = \frac{1}{8}\big(u'(x) +u(x)\big). $$ These four equations can be solved for $a_1'(x)$, $a_2'(x)$, $a_3'(x)$ and $a_4'(x)$, leading to \begin{align} a_1'(x) &= -\frac{\sinh(2 x)}{16} \big(u'(x) + u(x)\big)\\ a_2'(x) &= \frac{\cosh(2 x)}{16} \big(u'(x) + u(x)\big)\\ a_3'(x) &= \frac{\sin(2 x)}{16} \big(u'(x) + u(x)\big)\\ a_4'(x) &= -\frac{\cos(2 x)}{16} \big(u'(x) + u(x)\big) \end{align} and, assuming that it's an Initial Value Problem given at $x = 0$ (w.l.o.g.), \begin{align} a_1(x) &= -\frac{\sinh(2 x) u(x)}{16} - \int_0^x \frac{\sinh(2 t) - 2\cosh(2 t)}{16} u(t)\, dt\\ a_2(x) &= \frac{\cosh(2 x) u(x) - u(0)}{16} + \int_0^x \frac{\cosh(2 t) - 2\sinh(2 t)}{16} u(t)\, dt\\ a_3(x) &= \frac{\sin(2 x) u(x)}{16} + \int_0^x \frac{\sin(2 t) - 2 \cos(2 t)}{16} u(t)\, dt\\ a_4(x) &= -\frac{\cos(2 x) u(x) - u(0)}{16} - \int_0^x \frac{\cos(2 t) + 2\sin(2 t)}{16} u(t)\, dt \end{align} the solution is \begin{multline} y(x) = \big(c_1 + a_1(x)\big) \cosh (2 x) + \big(c_2 + a_2(x)\big) \sinh (2 x) \\ + \big(c_3 + a_3(x)\big) \cos (2 x) + \big(c_4 + a_4(x)\big) \sin (2 x). \end{multline} For a Boundary Value Problem the construction is a bit different, but the same principle can be applied.
Solutions to the Laplace Equation $\Delta u =0$, where $u= \log p$
This is a heavy method, but you can continue. $$U_{xx} + U_{yy} = \frac{2A(Ax^2 + By^2 +Cxy + D) - (2Ax + Cy)^2}{\ln(10) (Ax^2 + By^2 +Cxy + D)^2} + \frac{2B(Ax^2 + By^2 +Cxy + D) - (2By + Cx)^2}{\ln(10) (Ax^2 + By^2 +Cxy + D)^2} = 0$$ After simplification: $$2A(Ax^2 + By^2 +Cxy + D)-(2Ax + Cy)^2 + 2B(Ax^2 + By^2 +Cxy + D)-(2By + Cx)^2=0$$ It is easy to see that $A=B=1,\ C=D=0$ is a solution. Hence $p(x,y)=x^2+y^2$ and $$U(x,y)=\log(x^2+y^2)$$ A simpler method consists of first solving the PDE: $$U(x,y)=f(x+iy)+g(x-iy)$$ for arbitrary functions $f$ and $g$. Taking $f=g=\log$ gives $U=\log(x+iy)+\log(x-iy)$, i.e. $$U(x,y)=\log(x^2+y^2)$$
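As a quick verification that the result is harmonic away from the origin (the base of the logarithm only contributes a constant factor, which doesn't affect harmonicity): with $U=\ln(x^2+y^2)$, $$U_{xx}=\frac{2(y^2-x^2)}{(x^2+y^2)^2},\qquad U_{yy}=\frac{2(x^2-y^2)}{(x^2+y^2)^2},\qquad U_{xx}+U_{yy}=0.$$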