Prove that there are no integer solutions x,y to the following system of equations using mod 4 arithmetic:
If you have a system of equations, it implies more than one equation and that you are supposed to find a solution that satisfies all of them simultaneously. I presume the system consists of those three equations, not that you are supposed to "show this for" each of them individually. What they mean is to reduce both sides of each equation modulo $4$ and see what it gives: $2x + 3y \equiv 3 \pmod{4}$, $3x \equiv 3 \pmod{4}$, $y \equiv 1 \pmod{4}$. From the second congruence, multiplying by $3$ (the inverse of $3$ modulo $4$) gives $x \equiv 1 \pmod{4}$; combined with $y \equiv 1 \pmod{4}$ from the third, this forces $2x+3y \equiv 5 \equiv 1 \pmod{4}$, contradicting the first.
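A quick exhaustive check of the three congruences (a sketch in Python; residues mod $4$ suffice):

```python
# No pair (x, y) of residues mod 4 satisfies all three congruences at once.
solutions = [(x, y) for x in range(4) for y in range(4)
             if (2 * x + 3 * y) % 4 == 3 and (3 * x) % 4 == 3 and y % 4 == 1]
print(solutions)  # expected: []
```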
Approximation of Stochastic integral with Stieltjes integrals
The answer is negative: they converge to the Stratonovich integral. References: the book by Karatzas and Shreve, and http://www.sciencedirect.com/science/article/pii/0047259X8390043X
If $x^a \equiv x^b \bmod p$, what can we say about $a$ and $b$?
All that you can say is that the order of $x$ divides $b-a$. Relatedly, all that you know about the order of $x$ without any additional information is that it is a divisor of $p-1$.
Using average value to find average with respect to t and s.
So we have $s(t) = \frac{1}{2}gt^2$ and therefore $v(t) = gt$. To compute the average velocity with respect to time, we integrate the velocity over time and divide by the total time: $$ v_{avg_t} = \frac{\int\limits_0^T gt\,dt}{T - 0} = \frac{\frac{1}{2}gT^2}{T} = \frac{1}{2}gT = \frac{1}{2}v_T $$ because $v(T) = gT = v_T$. BTW, this is the usual definition of "average velocity": $v_{avg} = \frac{\Delta s}{\Delta t}$. If, however, we want to find the spatial average, that is the average velocity over the displacement, then we need to do a different integral: $$ v_{avg_s} = \frac{\int\limits_{s_0}^{s_T}v(t)ds}{s_T - s_0} $$ Notice that we have a mix of variables: $t$ and $s$. We can use substitution to find $ds$ in terms of $dt$: $$ \frac{ds}{dt} = v(t) \rightarrow ds = v(t)dt $$ Therefore we have: $$ v_{avg_s} = \frac{\int\limits_{0}^{T}v^2(t)dt}{s_T - s_0} = \frac{\int\limits_{0}^{T}g^2t^2dt}{s_T - s_0} = \frac{\frac{g^2T^3}{3}}{\frac{1}{2}gT^2} = \frac{2}{3}gT = \frac{2}{3}v_T $$ Notice that in the substitution the limits of integration "magically" changed from $s_0$ to $s_T$ to $t = 0$ to $t = T$. This is because, since we changed the integration to be over $t$, we need to find the $t$ values where $s = s_0$ and $s = s_T$; by definition these occur at $t = 0$ and $t = T$, respectively. Alternatively, you could find $v$ in terms of $s$: \begin{align} s(t) = \frac{1}{2}gt^2 \rightarrow&\ t = \sqrt{\frac{2s}{g}}\\ v(t) = gt \rightarrow&\ v(s) = g\sqrt{\frac{2s}{g}} = \sqrt{2gs} \end{align} Now the integral becomes $$ \require{cancel} \sqrt{2g}\int\limits_0^{\frac{1}{2}gT^2} \sqrt{s}\,ds = \sqrt{2g}\frac{2}{3}\left(\frac{1}{2}gT^2\right)^{\frac{3}{2}} = \frac{1}{3}g^{\frac{1}{2}}g^{\frac{3}{2}}\frac{\cancel{2^{\frac{1}{2}}\cdot 2}}{\cancel{2^{\frac{3}{2}}}}T^3 = \frac{g^2T^3}{3} $$ which is the same result we got from the substitution above.
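A quick symbolic check of both averages (a sketch, assuming sympy is available; $g$ and $T$ stay symbolic):

```python
import sympy as sp

t, g, T = sp.symbols('t g T', positive=True)
v = g * t
v_avg_t = sp.integrate(v, (t, 0, T)) / T                   # time average
v_avg_s = sp.integrate(v * v, (t, 0, T)) / (g * T**2 / 2)  # space average, ds = v dt
print(sp.simplify(v_avg_t), sp.simplify(v_avg_s))          # g*T/2 and 2*g*T/3
```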
Find all solutions for f(x) (without induction)
$f(2)=2f(1)+1=2\cdot1+1=3$, $f(3)=2f(2)+1=2\cdot3+1=7$, $f(4)=2f(3)+1=2\cdot7+1=15$, $f(5)=2f(4)+1=2\cdot15+1=31$, and so on, which suggests $f(x)=2^x-1$. Note that $y=a^x$ is a monotonically increasing function of $x$ for all $a>1$, so $f(x)=2^x-1$ is an increasing function of $x$. The second derivative of $f$ with respect to $x$ is $f''(x)=[\ln(2)]^2\cdot2^x$, which is positive for all values of $x$; this means that $f$ is always concave up. Now ask: is there any other function that is monotonically increasing, concave up, and satisfying the given definition $f(x)=2f(x-1)+1$? No (using basic calculus). $$\boxed{f(x)=2^x-1}$$
How to evaluate one improper integral
Let $u=\log x$ (as per martini's hint) and $f(u)=u^{40021}$. Then $$\begin{equation*} I:=\int_{0}^{\infty }\frac{\left( \log x\right) ^{40021}}{x}dx=\int_{-\infty}^{\infty}f(u)\,du. \end{equation*}$$ The function $f$ is odd, $f(-u)=-f(u)$, but the one-sided integrals $$ \begin{equation*} \int_{-\infty }^{0}f(u)\,du=-\int_{0}^{\infty }f(u)\,du \end{equation*}$$ do not converge. Hence the integral $I$ is undefined (as commented by pedja).
Space as a network. Is it possible to model topological properties of Euclidean (or non-Euclidean) space using a discrete network of points?
I really like the spirit of your question - what discrete spaces (with graph metrics) have Euclidean space as their continuum limit? First, I don't think you have to worry about multiple shortest length paths on the graphs. If we think about the rectangular lattice $\mathbb{Z}^2$, the number of shortest paths between $(0,0)$ and $(1,n)$ is $n+1 \to \infty$, and this is practically the best case scenario. It doesn't jibe with the fact that in Euclidean space we have a unique shortest length path between any two points. However, the graph metric as you've defined it in this space is $$ d_G((x_1,y_1),(x_2,y_2)) = |x_1 - x_2| + |y_1 - y_2|, $$ which would more naturally have (and does have) the $L^1$ norm (taxicab norm) as its limit: $$ \| (x_1,y_1) - (x_2,y_2)\|_{L^1} = |x_1 - x_2| + |y_1 - y_2|. $$ This metric's Borel sigma-algebra is generated by the diamonds $B_{L^1}(a,r) = \{b : \|a - b\|_{L^1} < r \}$ (where $a,b$ are points in $\mathbb{R}^2$). If we are only interested in the topology of Euclidean space, this does a fine job of recovering it; the $L^1$ basis described generates the same topology as the Euclidean topology. Now if we hope to recover the Euclidean geometry I think we'll need to alter the graph metric, so that we end up with straight-line shortest paths between points, as opposed to the $L^1$ case. A shortest path between $(0,0)$ and $(1,1)$ in the $L^1$ distance is given by any path moving only straight right or straight up between the two points. To get a more familiar geometry (metric) we could make the lattice complete and weight the edges so that (embedding the graph into $\mathbb{Z}^2$) the edge lengths are given as $$ d_G((x_1,y_1),(x_2,y_2)) = |x_1 - x_2|^{\alpha} + |y_1 - y_2|^{\alpha}, ~~~ \alpha > 1. $$ It isn't obvious to me that we can get the classic geometry using only the graph distance.
every root of unity $x$ determines a degree one representation of $G$
Only $n$th roots of unity arise, where $n$ is the order of $g$. Then $\rho(g)=(\zeta)$, for $\zeta$ an $n$th root of unity, defines the representation.
solution of 1st order PDE
Let $u=v^2$. Then $u_x=2vv_x$ and $u_y=2vv_y$, so the equation $u_xu_y=u$ with $u(x,0)=0$ becomes
$2vv_x\cdot2vv_y=v^2$ with $v(x,0)=0$,
$4v^2v_xv_y=v^2$ with $v(x,0)=0$,
$v_y=\dfrac{1}{4v_x}$ with $v(x,0)=0$,
$v_{xy}=-\dfrac{v_{xx}}{4v_x^2}$ with $v(x,0)=0$.
Let $w=v_x$. Then $w_y=-\dfrac{w_x}{4w^2}$ with $w(x,0)=0$, i.e. $\dfrac{w_x}{4w^2}+w_y=0$ with $w(x,0)=0$. Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example:
$\dfrac{dy}{dt}=1$; letting $y(0)=0$, we have $y=t$.
$\dfrac{dw}{dt}=0$; letting $w(0)=w_0$, we have $w=w_0$.
$\dfrac{dx}{dt}=\dfrac{1}{4w^2}=\dfrac{1}{4w_0^2}$; letting $x(0)=f(w_0)$, we have $x=\dfrac{t}{4w_0^2}+f(w_0)=\dfrac{y}{4w^2}+f(w)$, i.e. $w=F\left(x-\dfrac{y}{4w^2}\right)$.
From $w(x,0)=0$ we get $F(x)=0$, so $w=0$, i.e. $v_x=0$, hence $v(x,y)=g(y)$ and $u(x,y)=G(y)$. Then $u_x(x,y)=0$ and $u_y(x,y)=G_y(y)$, so the PDE forces $0\cdot G_y(y)=G(y)$, i.e. $G(y)\equiv0$. Hence $u(x,y)=0$.
Evaluating $\lim_{x \to 0}\frac{4^{\arccos^2 \frac{1}{1 + x^2}} - 1}{\log_4(1 + x)}$
The limit is zero, since $\arccos^2\frac{1}{1+x^2}$ is an analytic and even function in a neighbourhood of zero, so: $$\arccos^2\frac{1}{1+x^2}=O(x^2)$$ and $$ 4^y = e^{y\log 4}=1+y\log 4+o(y) $$ gives: $$4^{\arccos^2\frac{1}{1+x^2}} = 1+O(x^2)$$ while: $$\log_4(1+x) = \frac{x+o(x)}{\log 4}.$$
Absolute continuity of the kernel
Suppose that $K(x,\cdot)$ is a measure for each $x$. Define $$ \mu(B) = \sum_{x\in X}K(x,B). $$ This possibly uncountable sum can be understood either as the supremum over all finite subsums, or as the integral with respect to counting measure on $X$. Those are equivalent definitions. The latter is useful, so that we may apply tools of integration theory, such as Fubini/Tonelli. To check that $\mu$ is a measure, first note that $\mu(\emptyset)=0$. Let $\{B_n\}$ be pairwise disjoint. Then \begin{align*} \mu\bigg(\bigcup_{n=1}^\infty B_n\bigg) &= \sum_{x\in X}K\bigg(x,\bigcup_{n=1}^\infty B_n\bigg)\\ &= \sum_{x\in X}\sum_{n=1}^\infty K(x,B_n)\\ &= \sum_{n=1}^\infty\sum_{x\in X} K(x,B_n)\\ &= \sum_{n=1}^\infty\mu(B_n). \end{align*} Reversing the order of summation is justified by Tonelli's theorem. Finally, if $\mu(B)=0$, then $K(x,B)=0$ for all $x$. Hence, $K$ is absolutely continuous with respect to $\mu$. Edit: To address the comments, let us suppose that $x\mapsto K(x,U)$ is Borel measurable for each open $U$. We will show that this implies $x\mapsto K(x,B)$ is Borel measurable for every $B\in\mathcal{B}(X)$. Let $f_B(x)=K(x,B)$. Let $$ \mathcal{D} = \{B\in\mathcal{B}(X): f_B\text{ is Borel measurable}\}. $$ Since $X$ is open, we have $X\in\mathcal{D}$. If $A,B\in\mathcal{D}$ and $B\subset A$, then $f_{A\setminus B}=f_A-f_B$ is measurable, which implies $A\setminus B\in\mathcal{D}$. And if $\{A_n\}_{n=1}^\infty$ is an increasing sequence in $\mathcal{D}$ with $A=\bigcup_{n=1}^\infty A_n$, then $f_A=\sup_n f_{A_n}$ is measurable, which shows that $A\in\mathcal{D}$. In all, we have shown that $\mathcal{D}$ is a Dynkin system that contains the open sets. So by the $\pi$-$\lambda$ theorem, $\mathcal{D}=\mathcal{B}(X)$. If $\nu$ is a measure on $(X,\mathcal{B}(X))$, then we may now define $$ \mu(B) = \int_X K(x,B)\nu(dx). $$ The proof that $\mu$ is a measure is the same as what was done above for counting measure. However, now if $\mu(B)=0$, then we may only conclude that $K(x,B)=0$ for $\nu$-a.e. $x$. A word of warning: the exceptional null set may depend on $B$. More specifically, we can say that $$ \forall B,\;\mu(B)=0 \quad\Rightarrow\quad(K(x,B)=0\text{ for $\nu$-a.e. $x$}). $$ But we can *not* say that $$ (\forall B,\;\mu(B)=0 \quad\Rightarrow\quad K(x,B)=0)\text{ for $\nu$-a.e. $x$}. $$ In fact, there may not be a single $x\in X$ such that $K(x,\cdot)$ is absolutely continuous with respect to $\mu$. For example, let $X=\mathbb{R}$ and $K(x,B)=1_B(x)$. In other words, $K(x,\cdot)=\delta_x$, the point mass centered at $x$. Let $\nu=m$, where $m$ is Lebesgue measure. Then $$ \mu(B) = \int_{\mathbb{R}}1_B(x)m(dx) = m(B). $$ That is, $\mu=m$. It is true that for each fixed $B$ with Lebesgue measure zero, $\delta_x(B)=0$ for Lebesgue a.e. $x$. However, there is not a single $x\in\mathbb{R}$ such that $\delta_x(B)=0$ for every Lebesgue null $B$.
improper integrals (very basic)
The point is that if the function is nice, then the values $f(c_i)$ for all valid choices of $c_i$ are roughly the same. Therefore, for any given partition, if it is fine enough, changing $c_i$ to some other valid value makes very little change in the total sum. However, if $f$ is unbounded, then you can choose $c_i$ that makes $f(c_i)$ as large as you want, which means that for any given partition, different choices of $c_i$ can make the sum be basically whatever you want. That is why the limit is not defined.
Why are call options necessary?
This is a good question. As @Macavity states, when jumping to the continuous model things are more complicated. Another important point is that in practice it may not always be possible to replicate, for the following reasons:
- Depending on the markets you are in, it may not always be possible to buy or sell the replicating assets when needed (liquidity issues).
- For legal reasons, it may not be possible to always trade in the stocks that you need to replicate the claim with.
- When you hedge against some underlying asset, the assets in your portfolio may for practical reasons be only loosely correlated with the underlying; it's easier to buy an option than to work out the exact relationship between the prices in your hedging portfolio and the asset you are trying to hedge against.
Hope this helps.
Do subspaces of polyhedron give subcomplexes?
Let us call $h : \lvert K \rvert \to X$ a triangulation of the polyhedron $X$. Polyhedra have many triangulations, and I suggest considering the following more general question: If we have a subspace $A \subset X$, is there a triangulation $h : \lvert K \rvert \to X$ and a subcomplex $L \subset K$ such that $h(\lvert L \rvert) = A$? The answer is "no". The minimal requirement is that $A$ is a polyhedron, hence any $A \subset X$ which cannot be triangulated (e.g. a copy of the Cantor set) provides a counterexample. But even if $A$ is a polyhedron, the answer is in general "no". As an example (with a non-trivial proof!) take the Alexander horned sphere $A$ sitting in $S^3$ (see https://en.wikipedia.org/wiki/Alexander_horned_sphere). I do not know general conditions assuring that $A$ is triangulated by a subcomplex, and I doubt that such conditions exist, but perhaps somebody else can help.
Showing a sequence defined recursively is convergent
Continuing as Thomas Andrews suggested, $a_{n+1}-a_{n+2} =\frac1{2+a_n}-\frac1{2+a_{n+1}} =\frac{(2+a_{n+1})-(2+a_n)}{(2+a_n)(2+a_{n+1})} =\frac{a_{n+1}-a_n}{(2+a_n)(2+a_{n+1})} $ so $|a_{n+1}-a_{n+2}| =\big|\frac{a_{n+1}-a_n}{(2+a_n)(2+a_{n+1})}\big| <|\frac{a_{n+1}-a_n}{4}| $. From this, $|a_{n+k}-a_{n+k+1}| <|\frac{a_{n+1}-a_n}{4^k}| $. Putting $n = 0$, $|a_{k}-a_{k+1}| <|\frac{a_{1}-a_0}{4^k}| $ which is more than enough to get convergence. It is interesting that this shows that the convergence is at least as fast as $\frac1{4^k}$, not just $\frac1{2^k}$. To find the limit: $|a_{n+1}-a_{n}| =|a_n-\frac1{2+a_n}| =|\frac{a_n(2+a_n)-1}{2+a_n}| =|\frac{a^2_n+2a_n-1}{2+a_n}| =|\frac{(a_n+1)^2-2}{2+a_n}| $. Since $a_n$ converges, $|a_{n+1}-a_n|\to0$, so $(a_n+1)^2-2 \to 0$ and hence $a_n \to \sqrt{2}-1$. To find the true rate of convergence, since $a_n \to \sqrt{2}-1$, $|a_{n+1}-a_{n+2}| =\big|\frac{a_{n+1}-a_n}{(2+a_n)(2+a_{n+1})}\big| \approx \big|\frac{a_{n+1}-a_n}{(2+\sqrt{2}-1)(2+\sqrt{2}-1)}\big| = \big|\frac{a_{n+1}-a_n}{(\sqrt{2}+1)^2}\big| = \big|\frac{a_{n+1}-a_n}{3+2\sqrt{2}}\big| $, so the convergence is like $\frac1{(3+2\sqrt{2})^k} $.
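A quick numerical probe of the limit and rate (a sketch; the starting value $a_0 = 1$ is an arbitrary choice, not taken from the question):

```python
import math

a = 1.0  # hypothetical starting value a_0
L = math.sqrt(2) - 1
prev_err = None
for n in range(12):
    err = abs(a - L)
    if prev_err is not None:
        print(n, err, prev_err / err)  # ratio tends to (1+sqrt(2))**2 = 3+2*sqrt(2)
    prev_err = err
    a = 1 / (2 + a)
```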
Prove the existence of some weird family of sets
Here's a construction which I think combines the cleanest elements of Noah's and Gro-Tsen's. By applying bijections, we can use any sets $A,B$ with $|A| = |\mathbb{N}|$ and $|B| = |\mathbb{R}|$ in place of $\mathbb{N}$ and $\mathbb{R}$. Consider the infinite rooted binary tree. Take $A$ to be the set of $k$-tuples of vertices in the tree, where we require the vertices in each $k$-tuple to all lie on the same level of the tree. Take $B$ to be the set of infinite rays in the tree which start at the root. These sets have the correct cardinalities. For each ray $\rho$ in $B$, let $X_\rho$ be the set of $k$-tuples in $A$ which contain at least one element of $\rho$. The intersection of any $k$ sets of this form is infinite, since on each level of the tree we can choose a $k$-tuple containing one vertex from each ray. The intersection of any $k+1$ sets of this form is finite, since on some level of the tree, the $k+1$ rays must have all branched apart, so there is no $k$-tuple of vertices on that level or below which hits all $k+1$ rays.
Solving Polynomial of the Resultant Dual Problem
I don't know the "dual problem" concept you are asking about, but I can solve it directly. First, if $p \le 0$, then obviously the solution is $b = 0, a = r$, and if $0 < p \le \frac{|r|^2}s$, the solution is $b = p, a = r$. So for the remainder we can assume that $p > \frac{|r|^2}s$. Second, $\left|\frac {|r|a}{r} - |r|\right| = |a - r|$. Since $a$ varies over all of $\Bbb C$ we can replace $\frac {|r|a}{r}$ with $a'$, and if $a'$ is the solution for $|r|$, then $a = \frac{ra'}{|r|}$ is the solution for $r$, and since $|a| = |a'|, b$ is unaffected. Thus it suffices to solve the problem for positive real-valued $r$. The process is the same as complex $r$, but it makes the notation cleaner. In the following, I'll assume $r > 0$. Third, if we fix $|a - r| = \rho < p - \frac{r^2}s$, then the minimum value occurs when $b$ is as close to $p$ as possible. I.e., when $b = \frac{|a|^2}s$. So, among all $a$ with $|a - r| = \rho$, the minimum will occur for the one with maximum $|a|$, since this allows $b$ to get closest to $p$. That maximum value of $|a|$ is when $0,r,a$ all lie on the same line, which gives $a = tr$ for some real $t\ge 1$. So we can dispense with the complex numbers altogether. Now the original problem becomes finding $$\min_{b,t}~(b-p)^2 + (t-1)^2r^2, \quad 0 \le b \le t^2\frac {r^2}s, t \ge 1$$ The location of the minimum is not changed if you multiply the expression by the positive number $\frac 1{r^2}$. So letting $q = \frac pr, u = \frac br$, the problem changes into $$\min_{t,u}~(u-q)^2 + (t - 1)^2, \quad 0 \le u \le t^2\frac rs, t \ge 1$$ with $q > \frac rs$. And again for fixed $t$ this expression becomes smaller as $u$ increases (assuming $u < q$), so we can just let it take on the maximum value allowed, changing the problem to $$\min_{t\ge 1}~ (t^2\frac rs - q)^2 + (t - 1)^2$$ Which is now a single variable polynomial problem. Setting the derivative to $0$ gives $$2\frac {r^2}{s^2}t^3 + 2\left(1-\frac rsq\right)t - 1 = 0$$ Since the original quartic opens upward, the two outside roots of the derivative will be local minima, while the middle root is a local maximum. Once you've determined which of the local minima is lower, the solution to the original problem is $$a = tr, b = t^2\frac {|r|^2}s$$
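A numerical sketch of the final single-variable problem (the values of $r$, $s$, $p$ are arbitrary illustrations, and the boundary $t=1$ is included among the candidates):

```python
import numpy as np

r, s, p = 2.0, 3.0, 5.0      # arbitrary values with p > r**2/s
q = p / r
# roots of the derivative: 2 (r/s)^2 t^3 + 2 (1 - (r/s) q) t - 1 = 0
coeffs = [2 * (r / s)**2, 0.0, 2 * (1 - (r / s) * q), -1.0]
phi = lambda t: (t**2 * r / s - q)**2 + (t - 1)**2
ts = [1.0] + [t.real for t in np.roots(coeffs)
              if abs(t.imag) < 1e-9 and t.real >= 1]
t_opt = min(ts, key=phi)
a, b = t_opt * r, t_opt**2 * r**2 / s
print(t_opt, a, b)
```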
Why $\mathbb P(X\boldsymbol 1_{\{X>a\}}>y)=\mathbb P(X>y\mid X>a)$?
It does not seem correct. $$\mathbb P(XY>z)=\mathbb P(A)=\mathbb E(1_A)=\mathbb E(\mathbb E(1_A\mid Y))=\mathbb E(\mathbb P(A\mid Y))=\mathbb E(\mathbb P(XY>z\mid Y))$$ and $$\mathbb P(XY>z\mid Y=y)=\mathbb P(Xy>z\mid Y=y),$$ which depends on whether $y<0$ or $y\geq 0$. If $Y=1_{X>a}$, then $Y\sim \operatorname{Ber}(p=P(X>a))$, so $$\mathbb P(XY>z)=\mathbb P(A)=\mathbb E(1_A)=\mathbb E(\mathbb E(1_A\mid Y))= \mathbb E(g(Y))$$ $$=P(Y=1)\, g(1) +P(Y=0)\, g(0)$$ $$=P(Y=1)\, E(1_A\mid Y=1)+P(Y=0)\,E(1_A\mid Y=0)$$ $$=P(Y=1)\, P(A\mid Y=1)+P(Y=0)\,P(A\mid Y=0)$$ $$=P(1_{X>a}=1)\, P(X1_{X>a}>z\mid 1_{X>a}=1)+P(1_{X>a}=0)\,P(X1_{X>a}>z\mid 1_{X>a}=0)$$ $$=P(1_{X>a}=1)\, P(X>z\mid 1_{X>a}=1)+P(1_{X>a}=0)\,P(0>z\mid 1_{X>a}=0)$$ Note that $P(0>z\mid 1_{X>a}=0)$ is always $0$ or $1$. $$=P(X>a)\, P(X>z\mid X>a)+P(X\leq a)\,P(0>z\mid X\leq a)$$ For example, if $z\geq 0$, $$=P(X>a)\, P(X>z\mid X>a)+0=P(X>z ,\ X>a)=P(X>\max(a,z)),$$ but if $z<0$, $$=P(X>a)\, P(X>z\mid X>a)+P(X\leq a)=P(X>\max(a,z))+P(X\leq a).$$
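A Monte Carlo sanity check of the $z \ge 0$ case (a sketch; the standard normal $X$ and the values of $a$, $z$ are arbitrary choices):

```python
import numpy as np

# P(X * 1_{X>a} > z) should equal P(X > max(a, z)) when z >= 0.
rng = np.random.default_rng(0)
X = rng.standard_normal(10**6)
a, z = 0.5, 1.0
lhs = np.mean(X * (X > a) > z)
rhs = np.mean(X > max(a, z))
print(lhs, rhs)  # should be close
```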
Converse of 0/0 Type Stolz Theorem
For a counterexample, take $$a_n = \frac{3}{n} - \frac{(-1)^n}{n^2}, \quad b_n =\frac{3}{n} + \frac{(-1)^n}{n^2},$$ where $a_n, b_n \to 0$ and $\frac{a_n}{b_n} \to 1$ as $n \to \infty.$ We have $$a_{n+1} - a_n = \frac{-3}{n(n+1)} +(-1)^n\frac{2n^2 + 2n +1}{n^2(n+1)^2}, \\ b_{n+1} - b_n = \frac{-3}{n(n+1)} -(-1)^n\frac{2n^2 + 2n +1}{n^2(n+1)^2}$$ Note that $b_n$ is strictly decreasing since, clearly, $b_{n+1} -b_n <0$ when $n$ is even and when $n$ is odd $$n^2 (n+1)^2 (b_{n+1}-b_n) = -3n^2 -3n + 2n^2 + 2n +1 = -n^2 - n + 1 < 0$$ However, $$c_n =\frac{a_{n+1}-a_n}{b_{n+1} - b_n} = \frac{-3 + (-1)^n\left(2+ \frac{1}{n^2+n}\right)}{-3 - (-1)^n\left(2+ \frac{1}{n^2+n}\right)}$$ does not converge, with $\limsup_{n \to \infty} c_n = 5$ and $\liminf_{n \to \infty}c_n = 1/5$.
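A quick numerical look at the counterexample (a sketch):

```python
import numpy as np

n = np.arange(1, 2000)
a = 3 / n - (-1.0)**n / n**2
b = 3 / n + (-1.0)**n / n**2
c = (a[1:] - a[:-1]) / (b[1:] - b[:-1])
print(a[-1] / b[-1])   # a_n/b_n -> 1
print(c[-2], c[-1])    # c_n keeps oscillating between ~5 and ~1/5
```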
What's the correspondence between the model-theoretic and the set-theoretic kernel of a homomorphism?
If $\ker_m(h)=\operatorname{diag}(A)$, then $h$ is an embedding, and vice versa (see the diagram lemma in Hodges' book). Hence $\ker_m(h)\neq \operatorname{diag}(A)$ iff the set-kernel has an equivalence class with more than one element.
If a module homomorphic image is free, then is the module in the domain free?
Certainly not. Take $\text{Whatever}\to0$. Or, perhaps, more egregiously $R\oplus\text{Whatever}\to R$. EDIT: Of course, if $f$ was injective then this is certainly true because $A$ sits inside $B$ and subthings of torsion-free things are torsion-free.
Sum $\sum_{i=\left\lceil\frac{n}{2}\right\rceil}^n \left\lceil\frac{n}{2}\right\rceil^k$
The summation $\displaystyle\sum_{i = \lceil n/2\rceil}^{n}\left\lceil\dfrac{n}{2}\right\rceil^k$ is adding the constant $\left\lceil\dfrac{n}{2}\right\rceil^k$ exactly $n-\left\lceil\dfrac{n}{2}\right\rceil+1$ times. Hence, the result is simply $\left(n-\left\lceil\dfrac{n}{2}\right\rceil+1\right)\left\lceil\dfrac{n}{2}\right\rceil^k$.
Inequality used to prove the lemma Brezis-Lieb
Hint: if $|A|\le d$ then $-d\le A\le d$. So the desired inequality $\bigl||a+b|^p - |a|^p - |b|^p\bigr|\le C \bigl(|a|^{p-1}|b| + |a||b|^{p-1}\bigr)$ splits into two one-sided inequalities: $$1.\quad |a+b|^p - |a|^p - |b|^p\le C \bigl(|a|^{p-1}|b|+ |a||b|^{p-1}\bigr)\color{red}{\iff_1}\color{green}{|a+b|^p\le |a|^{p-1}\bigl(|a|+C|b|\bigr)+|b|^{p-1}\bigl(|b|+C|a|\bigr)}$$ $$2.\quad |a+b|^p - |a|^p - |b|^p\ge -C \bigl(|a|^{p-1}|b|+ |a||b|^{p-1}\bigr)\color{red}{\iff_2}\color{green}{|a+b|^p \ge|a|^{p-1}\bigl(|a|-C|b|\bigr)+|b|^{p-1}\bigl(|b|-C|a|\bigr)}$$ and one checks that both $\color{red}{1}$ and $\color{red}{2}$ hold once $C$ is chosen large enough (depending only on $p$).
question about vacuous truth and function
It is false that $f(1.5)=3$. As you have pointed out, $(\forall y)(1.5,y)\notin f$, so $(\forall y)y\neq f(1.5)$. In particular $3\neq f(1.5)$. The fact that $1.5$ is not in the domain of $f$ does not stop the proposition "$f(1.5)=3$" from being false.
What is a group action in simple English?
A group acts on a space if it shuffles around the elements of the space (i.e. each group element gives a bijection of the space). For it to be a group action, something involving the definition of the group has to be involved. Groups are defined by a kind of multiplication, so for our shufflings to be a group action, doing two shufflings in a row associated to different group elements should give us the same result as the single shuffling associated to the product of those group elements. For instance, in $\mathbb{Z}_2$, any group action is a shuffling that, when repeated, gives you back the original space, and vice versa. Semigroups work just the same as groups; you only need composition to behave well, as above. In fact, a semigroup action is more natural than a group action. For rings and fields, you either get a group action multiplicatively or additively, but it's hard to get a good definition of how they should interact. I should mention that semigroup actions don't need to be bijections, since they don't need inverses.
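A tiny illustration (a sketch): $\mathbb{Z}_2=\{0,1\}$ acting on the set $\{1,2,3,4\}$ by swapping $1 \leftrightarrow 2$, with the action axiom checked exhaustively.

```python
def act(g, x):
    # element 0 acts as the identity shuffle; element 1 swaps 1 <-> 2
    if g % 2 == 0:
        return x
    return {1: 2, 2: 1}.get(x, x)

# Acting by g and then by h equals acting by the product (here: sum) h + g.
for g in (0, 1):
    for h in (0, 1):
        for x in (1, 2, 3, 4):
            assert act(h, act(g, x)) == act(h + g, x)
```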
Proving a function (on metric space) is continuous
Here is my idea. Assume $f$ is continuous on $M$; that is, for any open set $E$ in $\mathbb{R}$, the pre-image of $E$ is open in $M$. Fix an arbitrary $c \in \mathbb{R}$ and choose the open set $\{y\mid y<c\}$; the pre-image of this set under $f$ is $\{x\mid f(x)<c\}$, which is therefore open. For the converse, assume both sets you provided are open in $M$ for every $c \in \mathbb{R}$, and take an arbitrary open set $E \subset \mathbb{R}$. Any open set in $\mathbb{R}$ is a union of open intervals of the forms $(a, \infty)$, $(-\infty, a)$ and $(a,b)$. From this you can deduce that the pre-image of $E$ is open in $M$, and hence that $f$ is continuous on $M$. Hopefully this is helpful for you.
The length of the diagonal of a hypercube of side 1
The diagonal $D$ can be written as $D = e_{1}+e_{2}+\cdots + e_{n}$, where $\{e_{1},...,e_{n}\}$ is the canonical basis for $\mathbb{R}^{n}$. Thus, with the usual Euclidean norm: $$||D|| = \sqrt{n}$$ Can you evaluate the other ones?
Showing that $U\cap W$ is a subspace of $U$ (or $W$ or both)
First note that $U \cap W$ must contain at least the zero vector, since both are vector subspaces, which by definition means they include the zero vector. Now assume there is a vector, let us call it $v$, such that $v \in U \cap W$. This means that $v \in U \ \&\ v \in W$. Since the vector $v$ is in the vector spaces $U$ and $W$ defined over the field $F$, then $\forall c \in F$ the vector $cv \in U \ \&\ cv \in W$. Therefore $cv \in U \cap W \ \forall c \in F$, which means the set $U \cap W$ is closed under scalar multiplication by an element of $F$. Now assume there are two vectors, let us call them $v_1, v_2 \in U \cap W$. This means that $v_1,v_2 \in U \ \&\ v_1,v_2 \in W$. Since $U$ and $W$ are vector subspaces themselves, they must be closed under addition, so $v_1 + v_2 \in U \ \&\ v_1 + v_2 \in W$. Therefore $v_1 + v_2 \in U \cap W$, and the set $U \cap W$ is closed under addition. This shows that $U \cap W$ is a subspace of both $U$ and $W$. Note: notice how it did not matter whether or not $U$ and $W$ were subspaces of another vector space; the criterion was for them to be defined over the same field $F$.
Visualizing a vector field
We need to distinguish between $\mathbb{R}^n$, the vector space, and $\mathbb{R}^n$, the manifold. Obviously, they are related, but we're going to be doing different things with each one, so it is good to set them apart. Let $V$ be the former and $X$ the latter. At every point $P\in X$, we can get a copy of $V$, denoted $V_P$, whose origin is at $P$. A vector in $V_P$ is then just an ordinary vector drawn in $X$, whose base is at $P$. If $f:X\rightarrow V$ is a function, then you can view its vector field as the set of vectors: $$ \textrm{vectorField}(f) = \{f(P)\in V_P \;|\; P\in X\} $$ Let $f(x,y)=(x,y)$. If $P=(a,b)$, then the corresponding vector in the vector field will be the vector $(a,b)$ living in $V_{(a,b)}$. This means that its tail is at $(a,b)$ and its head will be at $(2a,2b)$.
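A minimal matplotlib sketch of this picture for $f(x,y)=(x,y)$ (assuming matplotlib is available):

```python
import numpy as np
import matplotlib.pyplot as plt

# At each point P = (a, b) draw the vector f(P) = (a, b) with its tail at P.
a, b = np.meshgrid(np.linspace(-2, 2, 9), np.linspace(-2, 2, 9))
plt.quiver(a, b, a, b)  # tails at (a, b), components (a, b)
plt.show()
```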
About limits and sequences
I guess there's supposed to be some assumption which ensures that $x_n$ occurs on the $n$th "bump" of the cosine wave. Otherwise there is nothing I can see in the question as stated to prevent $x_n$ from growing arbitrarily fast, destroying any hope of evaluating $\lim_{n\to\infty} (x_n-n\pi)$. So I'll just assume that $$ n\pi - \frac\pi2 \le x_n \le n\pi \tag{$\ast$} $$ (If that needs more justification, let me know. Edit: see below.) Having this statement available simplifies things a lot. For the first part, ($\ast$) shows $x_n\to\infty$; combining that with your finding that $\tan(x_n) = -1/x_n$ yields $$ |\sin(x_n-n\pi)| = |\sin x_n| = \frac{|\cos x_n|}{x_n} \to 0 $$ which since $x_n-n\pi\in[-\frac\pi2,0]$ (again by ($\ast$)) implies $x_n-n\pi\to 0$. For the second part, I'd like to write it this way: $$ nc_{2n} = -n\sin(x_{2n}) = \frac{n}{x_{2n}} \cos(x_{2n}) = \frac{n}{x_{2n}} \cos(\underbrace{x_{2n}-2n\pi}_{\to0}) \to \frac2\pi $$ Edit: Here's some more about ($\ast$). As you said, the tangent condition in the problem is equivalent to saying that the $x_n$ are solutions to the equation $$ \tan x = -\frac1x \tag{$\dagger$} $$ Lemma. Equation ($\dagger$) has exactly one solution in each interval $I_k = (k\pi-\frac\pi2,k\pi)$, and no other positive solutions. Proof. Let $g(x) = \tan x + 1/x$. First consider intervals of the form $[k\pi,k\pi+\frac\pi2)$, where $k$ is a nonnegative integer. On such intervals $\tan$ is nonnegative, so $g>0$. Thus there are no roots in such intervals, as desired. Next consider intervals $(k\pi-\frac\pi2, k\pi)$ where $k$ is a positive integer. Since $g$ is continuous on such intervals and $\lim_{x\to (k\pi-\frac\pi2)^+} g(x) = -\infty$ and $\lim_{x\to k\pi^-} g(x) = 1/k\pi > 0$, by IVT the function $g$ has at least one zero in each such interval. On the other hand, if $g$ had two or more zeroes in any such interval, then by Rolle's theorem $g'$ would have a zero in that interval, which is impossible since $g'(x) = \sec^2 x - 1/x^2 \ge 1 - 1/(\frac\pi2)^2 > 0$. (If you want a proof without so much calculus, use the fact that the cosine function alternates between being positive and concave and being negative and convex, which I guess can be shown directly from the addition formula.) From the lemma we can see that the conditions given in the problem are not sufficient to determine $\lim_{n\to\infty} (x_n-n\pi)$. Indeed, let the $n$th positive solution of equation ($\dagger$) be denoted $a_n$. The conditions of the problem — that the sequence $\{x_n\}$ is increasing and its terms are positive solutions of ($\dagger$) — just mean that the sequence $\{x_n\}$ is a subsequence of $\{a_n\}$. For example, we could have $x_n = a_{n^2}$, for which we'd have $x_n - n\pi \ge n^2\pi - \frac\pi2 - n\pi \to \infty$. So we need to assume something else about the $x_n$ to solve the problem. The simplest thing is to assume that $x_n=a_n$. (That assumption might be more aesthetically pleasing in another form, e.g., we could assume that the sequence $\{x_n\}$ grows as slowly as possible given the other assumptions.)
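A numerical illustration of the lemma (a sketch using scipy; it brackets the unique root of $\tan x = -1/x$ in each interval $I_k$ and shows $x_k - k\pi \to 0$):

```python
import numpy as np
from scipy.optimize import brentq

g = lambda x: np.tan(x) + 1 / x
for k in range(1, 8):
    # g -> -inf at the left end of I_k and g(k*pi) = 1/(k*pi) > 0
    x_k = brentq(g, k * np.pi - np.pi / 2 + 1e-9, k * np.pi)
    print(k, x_k - k * np.pi)  # tends to 0 from below
```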
Showing that this intersection is an open set
$M \subset A\cup B$ is open in $A \cup B$, therefore $\forall x \in M, \exists \epsilon>0: B_\epsilon(x)\cap(A\cup B)\subset M$. If, in addition, $x\in A$, then (since $A\cap B=\emptyset$): $ \begin{aligned} M\cap A\supset (B_\epsilon(x)\cap(A\cup B))\cap A=(B_\epsilon(x)\cap A\cap A)\cup (B_\epsilon(x)\cap B\cap A) = (B_\epsilon(x)\cap A)\cup \emptyset=B_\epsilon(x)\cap A \end{aligned} $ This means that $M\cap A$ is open in $A$.
Group homomorphism $\mathbb{Z}_p \longrightarrow\mathbb{F}_p$
There is a homomorphism, called the reduction modulo $p$, given by $$ \sum_{i\ge 0}a_ip^i\mapsto a_0 \mod p. $$ This is a standard definition, which is used among other things to show that the invertible elements in the ring $\mathbb{Z}_p$ consist of $\mathbb{Z}_p^{\times}=\{\sum_{i\ge 0}a_ip^i \mid a_0\neq 0\}$.
Differentiating definite integral seems weird to me.
Use the fundamental theorem of calculus. If the integral in question is denoted by $F(k)$ then the limit in question is $F'(0)$. The integrand $f(x)$ has a removable discontinuity at $x=0$, so let us redefine $f(0)=e^2$ to make it continuous at $0$. By the FTC we have $F'(0)=f(0)=e^2$. The use of L'Hospital's Rule is pretty roundabout here (since when did L'Hospital's Rule become a tool to compute derivatives?). I wonder why the solution manual suggests L'Hospital's Rule.
The gradient of the standard mollifier
I don't think you need the computation of $D\eta_\epsilon$. The gradient of any smooth compactly supported function is bounded, being itself a continuous function with compact support. Since $\eta$ is smooth and has compact support, the rescaled function $\eta_\epsilon$ also has those properties. The conclusion follows.
Using differentials to optimize a function
You should be able to use it to solve constrained optimization. Let me give you sort of the big-picture overview. So you know that if $du$ is small compared to $u$ then you can linearize about $u$. When you want to generalize to more variables, you have to do partial derivatives: for example $$ f(x + dx, y + dy, z + dz) \approx f(x,y,z) + \left(\frac{\partial f}{\partial x}\right)_{y,z} dx + \left(\frac{\partial f}{\partial y}\right)_{x,z} dy + \left(\frac{\partial f}{\partial z} \right)_{x,y} dz. $$ There is a nice shorthand for this expression using the dot product of the vector $d\vec r = (dx, dy, dz)$ as $$ f(\vec r + d\vec r) \approx f(\vec r) + \nabla f \cdot d\vec r.$$ If you calculate it out, the last term here is the "differential" approach that you get above. Your constraints will generate fixed conditions for $d\vec r$, $$\nabla g_1 \cdot d\vec r = \nabla g_2 \cdot d\vec r = 0.$$ Now we want to find the points $\vec r$ such that, if $d\vec r$ obeys the constraints, then $\nabla f \cdot d\vec r = 0$. There is no reason that I see that this should lead to a wrong answer. In fact, it leads to a matrix solution $$\left[\begin{array}{c}(\nabla f)^T\\(\nabla g_1)^T\\(\nabla g_2)^T\end{array}\right] \left[\begin{array}{c}dx \\ dy \\ dz\end{array}\right] = \left[\begin{array}{c} 0 \\ 0 \\ 0\end{array}\right]$$which amounts to looking for a nontrivial kernel of that matrix, which amounts to saying that the vectors are linearly dependent, which says that $\nabla f + \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 = \nabla (f + \lambda_1 g_1 + \lambda_2 g_2) = 0$, which gives you back exactly the method of Lagrange multipliers! So these are totally equivalent approaches if you do them correctly. I guess my basic question is, "are you sure that you've been doing partial derivatives correctly when doing constrained optimization?" Because the general idea of trying to do a simultaneous-solution of differential-based equations seems pretty solid to me.
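A toy check that this reduces to Lagrange multipliers (a sketch with a hypothetical objective and constraints, not taken from the question):

```python
import sympy as sp

# Minimize f = x^2 + y^2 + z^2 subject to g1 = g2 = 0 (made-up example).
x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2')
f = x**2 + y**2 + z**2
g1 = x + y + z - 1
g2 = x - y
L = f + l1 * g1 + l2 * g2
sols = sp.solve([sp.diff(L, v) for v in (x, y, z)] + [g1, g2],
                [x, y, z, l1, l2], dict=True)
print(sols)  # x = y = z = 1/3, the constrained critical point
```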
What kind of formula would I use to get all possible outcomes?
You can consider each deck separately and multiply the number of options for each deck. For the Draw Deck, each card can come in zero to three copies, so without the $45$ card minimum you would have $176^4=959512576$ possibilities. The practical answer is that the choices less than $45$ cards will be a small fraction of this, so ignore it. Similarly for the Problem Deck, without the $10$ card minimum you would have $35^3=42875$ choices, giving about $2.5\cdot 10^{14}$ choices.
if I know $f(x+1) = 2f(x) + 1$, how do I solve f(x)
Let $g(x)=f(x)+1$. Then $$ g(x+1)=f(x+1)+1=2f(x)+1+1=2(f(x)+1)=2g(x). $$ This suggests a representation of the form $g(x)=g(0)2^x$ so that $f(x)=g(x)-1=g(0)2^x-1$. Conversely, you can check that all $f(x)=c2^x-1$ satisfy your functional equation.
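A quick check of the closed form against the functional equation (a sketch; $c=3$ is an arbitrary choice):

```python
c = 3
f = lambda x: c * 2**x - 1
for x in range(10):
    assert f(x + 1) == 2 * f(x) + 1  # c*2^(x+1) - 1 = 2(c*2^x - 1) + 1
```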
Finite difference method, boundaries
You could make use of the ghost-point method. Intuitively, the ghost-point method is based on the analytic continuity of the solution. It assumes that the governing equation not only holds for all $x\in\left(-1,1\right)$, but also holds on $x=-1$ and $x=1$. With this trick, the Robin (or simply Neumann) boundary conditions can be implemented in a natural fashion. Suppose $$ -1=x_0<x_1<\cdots<x_N=1 $$ are your equi-spaced grid points, with $$ x_j=jh-1,\quad h=\frac{2}{N}. $$ Now define \begin{align} x_{-1}&=-1-h,\\ x_{N+1}&=1+h. \end{align} Then implement $$ -3\frac{u_{j+1}-2u_j+u_{j-1}}{h^2}+\left(x_j+2\right)u_j=4x_j,\quad j=0,1,\cdots,N $$ for the governing equation, with boundary conditions \begin{align} \frac{u_1-u_{-1}}{2h}+4u_0&=3,\\ -\frac{u_{N+1}-u_{N-1}}{2h}+2u_N&=0. \end{align} Note that for the governing equation, the index $j$ runs from $0$ to $N$, instead of from $1$ to $N-1$ as in the usual boundary-value-problem case. Thus, combining the main scheme with the boundary conditions above, you will be able to determine all of $u_{-1}$, $u_0$, ..., $u_{N+1}$, because there are $N+3$ unknowns and you have $N+3$ linear equations.
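A minimal sketch of assembling and solving this linear system with numpy (assuming the equation and boundary conditions exactly as written above; entry $i$ of the unknown vector holds $u_{i-1}$):

```python
import numpy as np

N = 50
h = 2.0 / N
x = np.array([-1 + j * h for j in range(-1, N + 2)])  # x_{-1}, ..., x_{N+1}

A = np.zeros((N + 3, N + 3))
rhs = np.zeros(N + 3)

# Interior scheme at j = 0, ..., N (row j+1; column j+1 holds u_j)
for j in range(N + 1):
    row = col = j + 1
    A[row, col - 1] = -3.0 / h**2
    A[row, col]     = 6.0 / h**2 + (x[col] + 2)
    A[row, col + 1] = -3.0 / h**2
    rhs[row] = 4 * x[col]

# Boundary conditions close the system (rows 0 and N+2)
A[0, 2], A[0, 0], A[0, 1] = 1/(2*h), -1/(2*h), 4.0   # (u_1-u_{-1})/(2h) + 4u_0 = 3
rhs[0] = 3.0
A[N+2, N+2], A[N+2, N], A[N+2, N+1] = -1/(2*h), 1/(2*h), 2.0  # -(u_{N+1}-u_{N-1})/(2h) + 2u_N = 0

u = np.linalg.solve(A, rhs)[1:-1]  # drop the ghost values u_{-1}, u_{N+1}
```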
Integer coefficient polynomial - values as powers of 2
Claim: There is no polynomial $f(x)$ with integer coefficients whose values at all non-negative integers are distinct powers of $2$. Proof by contradiction: Let $m$ be the smallest power of $2$ attained by $f(x)$ when evaluated at non-negative integers, so that: $f(v)=m$ for some non-negative integer $v$. $f(v+2m)$ is a different, and therefore higher, power of $2$, so it's divisible by $2m$. $f(v+2m)-f(v)$ is divisible by $2m$ (general property of polynomials with integer coefficients) But that implies $f(v)$ is divisible by $2m$ too; a contradiction. Claim: For every $n\geq 0$, there is a polynomial $f_n(x)$ with integer coefficients, such that $f(0), f(1), \ldots, f(n)$ are all distinct powers of $2$. Proof by construction: Let $n! = 2^m\times r$, with $r$ odd and $q=2^{\varphi(r)}$ and let $$f_n(x)=2^m\times \sum_{k=0}^n \binom{x}{k}(q-1)^k$$ Clearly, $f_n(x)$ is a polynomial of degree $n$. Evaluating it for $0\leq x\leq n$ yields: $$f(x) = 2^m\times \sum_{k=0}^n \binom{x}{k}(q-1)^k = 2^m\times \sum_{k=0}^x \binom{x}{k}(q-1)^k = 2^m\times q^x = 2^{m+x\varphi(r)}$$ so the values of the polynomial for $0\leq x\leq n$ are indeed powers of $2$. In order to prove that it has integer coefficients, we can write $$2^m\binom{x}{k}(q-1)^k = \frac{2^m (q-1)^k}{k!}\prod_{i=0}^{k-1}(x-i)$$ This is trivially seen to be integer if $k=0$. For $k\geq 1$, the numerator is divisible by $2^m$ and also by $r$; since it's a multiple of $(q-1)$, which Euler's generalization of Fermat's Little Theorem shows to be a multiple of $r$. But being divisible by both $2^m$ and $r$ implies divisibility by $n!$ which, in turn, implies divisibility by $k!$. Thus, the polynomial's coefficients are indeed integers, as we required. Note: The value of $q$ used in the preceding proof is often unnecessarily large; it's sufficient for $k!$ to divide $2^m(q-1)^k$, while the value used in the proof guarantees that $n!$ already divides $2^m(q-1)$. For example, for $n=12$, the proof-used value of $q$ would be $2^{194400}$, while $q=2^{60}$ would work just as well. The value of $m$ can sometimes be lowered too (e.g. for $n=12$, it's sufficient to take $m=8$ rather than $m=10$ based on the proof), but unlike the lower $q$, this is not easily seen from the proof.
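A small sanity check of the construction (a sketch, taking $n=4$; the helper names are mine):

```python
from math import comb, gcd

n = 4
# factor n! as 2^m * r with r odd
fact, m = 1, 0
for i in range(2, n + 1):
    fact *= i
while fact % 2 == 0:
    fact //= 2
    m += 1
r = fact
phi = sum(1 for k in range(1, r + 1) if gcd(k, r) == 1)  # Euler phi(r)
q = 2 ** phi

def f(x):
    return 2**m * sum(comb(x, k) * (q - 1)**k for k in range(n + 1))

for x in range(n + 1):
    v = f(x)
    assert v > 0 and v & (v - 1) == 0     # v is a power of 2
    print(x, v.bit_length() - 1)          # exponent equals m + x*phi(r)
```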
How do I parametrically represent this region?
Your region is not a triangle. Here is a plot of this region; it is not bounded, but the plot should give the idea. It was made with Mathematica; the code is RegionPlot[-1<= x+y <=1,{x,-10,10},{y,-10,10}], which can be used in WolframAlpha too. It is not possible to find a basis in the sense of a basis of a vector space, as the region is closed neither under addition nor under multiplication by a scalar. Still you can represent it as $$M=\left\{ x\in \mathbb{R}^2 : x= \lambda\cdot \begin{pmatrix} -1 \\ 1 \end{pmatrix} + \gamma \cdot \begin{pmatrix} 1 \\ 1 \\ \end{pmatrix} \text{ with } \gamma \in [-0.5,0.5], \lambda \in \mathbb{R} \right\}$$ So you can describe every point in the region uniquely with the parameters $\gamma$ and $\lambda$, but this is not a basis in the sense of a vector space.
finding hypergeometric solutions for a recurrence relation
The algorithm Hyper is the general method for solving this kind of recurrence, and there is no reason to believe that this particular case is simple. It is, of course, possible to guess two answers by inspiration from the source of the problem and prove them by induction, but this is hardly a method.
Precise limit definition
For your argument to work, you'll need $\epsilon < 1/2$, to ensure that $1 - 2\epsilon > 0$ and the last inequality makes sense. After that last step, you may set $\delta$ to be the distance from $2$ to the nearer endpoint of $(2/(1 + 2\epsilon), 2/(1 - 2\epsilon))$, i.e., $$\delta = \min\{2 - 2/(1 + 2\epsilon), 2/(1 - 2\epsilon) - 2\}.$$ For all $x$, if $|x - 2| < \delta$, then $x - 2 < \delta \le 2/(1 - 2\epsilon) - 2$ and $2 - x < \delta \le 2 - 2/(1 + 2\epsilon)$. This reduces to $2/(1 + 2\epsilon) < x < 2/(1 - 2\epsilon)$, which implies $|\frac{1}{x} - \frac{1}{2}| < \epsilon$ by your previous analysis.
Prove $p$ has only $2$ divisors: $1$ and $p$
Suppose that there is in fact a number $1<m<p$ which divides $p$. What would be a good choice of $k$ and $l$ to show that the second property cannot hold?
Show that $\mathcal{L}$ structures $\mathcal{A}$ and $\mathcal{B}$ are not equivalent
Exactly one element of $\mathcal{A}$ has no predecessors, while two distinct elements of $\mathcal{B}$ have no predecessors. See if you can write $\text{isPred}(x,y)$ and use this to distinguish the structures. I hope this helps ^_^
Do all analytic and $2\pi$ periodic functions have a finite Fourier series?
A Fourier series represents an analytic function if and only if its coefficients decrease at least as fast as a geometric progression: $$\limsup_{n\to\infty}\,(|a_n|+|b_n|)^{1/n}=q<1.$$ This fact can be found in books on Fourier series.
Remove singular points
If $x_0$ is a zero of $J_0$, then near $x=x_0$ you have $$ \frac{J_0(x)}{x - x_0} = J_0'(x_0) + \frac{x-x_0}{2} J_0''(x_0) + \frac{(x-x_0)^2}{6} J_0'''(x_0) + \ldots $$ Use as many terms as you need to get sufficient accuracy.
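A sketch of this in Python (assuming scipy; the switching tolerance is an arbitrary choice, and the identities $J_0'(x_0)=-J_1(x_0)$ and $J_0''(x_0)=J_1(x_0)/x_0$ at a zero $x_0$ of $J_0$ follow from Bessel's equation):

```python
from scipy.special import j0, j1, jn_zeros

x0 = jn_zeros(0, 1)[0]  # first positive zero of J0

def g(x, tol=1e-4):
    """Evaluate J0(x)/(x - x0) stably, switching to the Taylor expansion near x0."""
    if abs(x - x0) > tol:
        return j0(x) / (x - x0)
    d1 = -j1(x0)          # J0'(x0)
    d2 = j1(x0) / x0      # J0''(x0), from Bessel's equation with J0(x0) = 0
    return d1 + 0.5 * (x - x0) * d2

print(g(x0), g(x0 + 1e-6), g(x0 + 0.5))
```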
$J$-homomorphism for unitary group
First of all note that the homotopy groups of $O(n)$ are the same as that of $SO(n)$. Then the natural inclusion $U(n) \subset O(2n)$ gives an induced map on homotopy. In the colimit this gives the complex $J$-homomorphism $\pi_i(U) \to \pi_i^S$. According to Ravenel "this is well known to coincide up to a factor of 2 with that of the real $J$-homomorphism" (Complex Cobordism and Stable Homotopy Groups of Spheres - pp. 168)
Using complete induction, prove that if $a_1=2$, $a_2=4$, and $a_{n+2}=5a_{n+1}-6a_n$, then $a_n=2^n$
It’s pretty straightforward; you really need only follow the models that you’ve already seen. Your induction hypothesis will be that $a_k=2^k$ for $k=1,2,\dots,n$, where $n$ is any integer greater than $2$, and your induction step will then be $$\begin{align*} a_{n+1}&=5a_n-6a_{n-1}&&\text{by the given recurrence}\\ &=5\cdot2^n-6\cdot2^{n-1}&&\text{by the induction hypothesis}\\ &=5\cdot2\cdot2^{n-1}-6\cdot2^{n-1}&&\text{to get a common factor}\\ &=(10-6)2^{n-1}&&\text{pulling out the common factor}\\ &=\ldots \end{align*}$$ Can you finish the calculation?
Radius of convergence from recurrence with variable coefficients
All $a_n$ are $>0$. From $a_{n+1}\sim{2\over n+1}a_{n-1}$ we see that we gain a factor $\sim{2\over n}$ in two steps. This observation leads to the claim that $$a_n\leq{4^n\over\sqrt{n!}}\qquad(n\geq1)\ .\tag{1}$$ Proof. The statement is true for $n=1$ and $n=2$, by inspection. Assume that it holds for $n-1$ and $n$. Then $$\eqalign{a_{n+1}&\leq{2\over n+1}\left({4^n\over\sqrt{n!}}+{4^{n-1}\over\sqrt{(n-1)!}}\right)\leq {2\over \sqrt{n(n+1)}}\left({4^n\over\sqrt{(n-1)!}}+{4^n\over\sqrt{(n-1)!}}\right)\cr &={4^{n+1}\over\sqrt{(n+1)!}}\ .\cr}$$ Using $(1)$ and, e.g., Stirling's formula it is then easy to show that $$\rho=\left(\limsup_{n\to\infty}|a_n|^{1/n}\right)^{-1}=\infty\ .$$ As to your second question: The standard proof of Picard's theorem in a complex analysis setting shows that the solution is analytic in a neighborhood of $1\in{\mathbb C}$.
Integral reducing to infinite summation
$$\sum_{n=1}^{+\infty}\frac{1}{n\cdot 2^n}=\sum_{n=1}^{+\infty}\int_{0}^{1}\frac{x^{n-1}}{2^n}\,dx =\int_{0}^{1}\frac{dx}{2-x}=\log 2.$$
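A one-line numerical check (a sketch):

```python
from math import log

print(sum(1 / (n * 2**n) for n in range(1, 60)), log(2))  # should agree
```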
Definition clarification for $k$-colourable graphs
The smallest number of colors needed to color the vertices of a graph $G$ is called its chromatic number, $\chi(G) =k$. And yes, such a graph is $(k+z)$-colorable for every $z \in \mathbb{N}$.
Why is the trivial vector space the smallest vector space?
I suspect part of your confusion is, what does "smallest" mean? It seems to imply a partial ordering somehow, so here are two possible definitions: $V$ is smaller than $W$ provided there is an injective linear map $V\to W$. $V$ is smaller than $W$ provided $|V| \leq |W|$. (Bonus questions: Is there any relationship between these definitions? Can one be proven from the other, and vice versa?) Now, given either definition, say $V$ is the smallest vector space provided $V$ is smaller than $W$ for any vector space $W$. From this definition, can you prove that $\{0\}$ is the smallest vector space? (Hint: Every vector space must have a $0$ element, so ...)
Is This Conditional Probability Problem asking for Pr(A|B) or Pr(A and B)?
The events labelled "Black" and "White" are really "first marble black" and "second marble white" (I think this is confusing: it would be better to label these as something like "B1" and "W2"). If they just said "a black marble and a white marble" it would also include the case of a white marble first and a black marble second.
How do I prove this with induction?
Assume $a_n \leq 4$. Then $a_{n+1} = \sqrt{a_n+12} \leq \sqrt{4+12} = 4$, which completes the induction step (together with checking the base case).
Is the space of continuous maps $Top(X,Y)$ between two topological spaces compact if $X$ is?
Suppose $X$ is a one-point space. Then $Y^X$ is naturally homeomorphic to $Y$, so is not compact unless $Y$ is.
If a family has two children, what is the probability that at least one is a boy?
It's $\frac{3}{4}$. The mistake in your second example {0B, 1B, 2B} is that these 3 possibilities are not equally probable. I'm assuming that there's an equal probability of a child being a boy or a girl. With that in mind, if a family has 2 children then we have 4 possible outcomes each with equal probability: GG GB BG BB Each of these outcomes has a probability of $\frac{1}{4}$. Now we can just count how many of them correspond to the outcome we're interested in, namely that at least one is a boy. The answer is clearly that 3 of them correspond to that, so the probability is $\frac{3}{4}$.
intersection of convex sets
Take three general lines in the plane. Any two of them intersect in a single point, but since they are not concurrent the intersection of all of them is empty.
Difference of two complex polynomials
We follow the comments and consider the polynomial \begin{align*} u^{2m}=1\qquad\text{ with roots }\qquad \exp\left(\frac{k\pi i}{m}\right), 1\leq k\leq 2m \end{align*} The function \begin{align*} u(z)=\frac{z-a}{z+a} \end{align*} has the inverse function \begin{align*} z(u)=-a\frac{u+1}{u-1} \end{align*} We conclude that the roots of the equation \begin{align*} \left(\frac{z-a}{z+a}\right)^{2m}=1\tag{1} \end{align*} are, for $-m+1\leq k\leq m-1$ with $k\ne 0$, \begin{align*} -a\cdot\frac{\exp\left(\frac{k\pi i}{m}\right)+1}{\exp\left(\frac{k\pi i}{m}\right)-1} =-ai\cdot\frac{\frac{1}{2}\left(\exp\left(\frac{k\pi i}{2m}\right)+\exp\left(-\frac{k \pi i}{2m}\right)\right)}{\frac{1}{2i}\left(\exp\left(\frac{k\pi i}{2m}\right)-\exp\left(-\frac{k \pi i}{2m}\right)\right)} =ia\cot \frac{k\pi}{2m} \end{align*} together with the root $z=0$. We can now write (1) in polynomial form and as product of factors \begin{align*} (z+a)^{2m}-(z-a)^{2m}&=Az\prod_{k=1}^{m-1}\left[\left(z-ia\cot \frac{k\pi}{2m}\right)\left(z+ia\cot \frac{k\pi}{2m}\right)\right]\\ &=Az\prod_{k=1}^{m-1}\left(z^2+a^2\cot ^2\frac{k\pi}{2m}\right) \end{align*} with $A$ constant. In order to determine $A$ it is convenient to calculate the coefficient of $z^{2m-1}$, denoted by $[z^{2m-1}]$. We obtain \begin{align*} [z^{2m-1}]&\left((z+a)^{2m}-(z-a)^{2m}\right) =\binom{2m}{2m-1}a-\binom{2m}{2m-1}(-a)=4ma\\ [z^{2m-1}]&Az\prod_{k=1}^{m-1}\left(z^2+a^2\cot ^2\frac{k\pi}{2m}\right)=A \end{align*} We finally obtain with $A=4ma$ \begin{align*} \color{blue}{(z+a)^{2m}-(z-a)^{2m}=4maz\prod_{k=1}^{m-1}\left(z^2+a^2\cot ^2\frac{k\pi}{2m}\right)} \end{align*}
Line integral over a vector field of a set of points where a sphere intersects $2z + x = 0$
The curve is a circle, so its projection on the $x$-$y$ plane is an ellipse (precisely $\frac{5}{16}x^2+\frac{1}{4}y^2=1$, with $x(t)=\dfrac{4\sqrt{5}}{5}\cos t$, $y(t)=2\sin t$), the one you got. Nevertheless, you almost have it: we need $\vec{g}(t) = \left(x(t),y(t),z(t)\right)$, a vector with three components, but you did the parametrization for the $x$ and $y$ ones. We can complete it because we know that $z=-x/2$, that is, $z= -\dfrac{1}{2}\dfrac{4\sqrt{5}}{5}\cos t$: $$\vec{g}(t) = \left(\frac{4\sqrt{5}}{5}\cos t,\;2\sin t,\frac{-2\sqrt{5}}{5}\cos t\right)$$ Now, for the line integral, $\mathrm dx=x'(t)\,\mathrm dt=-\dfrac{4\sqrt{5}}{5}\sin t\,\mathrm dt$ and $\mathrm dy=y'(t)\,\mathrm dt=2\cos t\,\mathrm dt$, so $$\int_C zdx+xdy=\int_0^{2\pi}\frac{-2\sqrt{5}}{5}\cos t\cdot\dfrac{-4\sqrt{5}}{5}\sin t\,\mathrm dt+\int_0^{2\pi}\frac{4\sqrt{5}}{5}\cos t\cdot 2\cos t\,\mathrm dt=$$ $$=\int_0^{2\pi}\left(\dfrac{8}{5}\cos t\sin t+\frac{8\sqrt{5}}{5}\cos^2 t\right)\mathrm dt$$ Here $C$ is taken anticlockwise, since you parametrized its projection that way.
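A numerical check of the line integral with scipy (a sketch using the parametrization above; the first term integrates to $0$ and the second to $\frac{8\sqrt5}{5}\pi$):

```python
import numpy as np
from scipy.integrate import quad

c = 4 * np.sqrt(5) / 5
x  = lambda t: c * np.cos(t)
z  = lambda t: -0.5 * c * np.cos(t)
dx = lambda t: -c * np.sin(t)
dy = lambda t: 2 * np.cos(t)

integrand = lambda t: z(t) * dx(t) + x(t) * dy(t)
val, _ = quad(integrand, 0, 2 * np.pi)
print(val, 8 * np.sqrt(5) * np.pi / 5)  # both ≈ 11.24
```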
Uncoupled Linear System: Differential Equations
You don't need to write $\vec{u}$ and $\vec{v}$ in components. Use the facts $\vec{u}'=A\vec{u}$ and $\vec{v}'=A\vec{v}$. Since $$\vec{w}=b\vec{u}+c\vec{v}$$ then $$\vec{w}'=b\vec{u}'+c\vec{v}'$$ What can you get using the facts you have? For the second part, you had a typo in either the matrix or the solution with $2$ or $-2$.
Torsion group of $y^2 = x^3 + B$ has order dividing $6$
Then what? Then you have an integer $n$ which divides $p + 1$ for almost all (i.e. all except finitely many) prime numbers $p \equiv -1 \mod 6$. Suppose that $n$ does not divide $6$. Let $m = \operatorname{lcm}(6, n)$. I claim that there is an integer $d$ which satisfies the following properties: $d \equiv -1 \mod 6$; $d \not\equiv -1 \mod m$; $d$ is coprime to $m$. The existence of such a $d$ is a simple exercise in elementary number theory. Now by Dirichlet's theorem, the arithmetic sequence $\{mk + d: k \geq 1\}$ contains infinitely many prime numbers. In particular, since these prime numbers are all $\equiv -1\mod 6$, there must be at least one prime number $p$ of the form $mk + d$ such that $n$ divides $p + 1$. But this contradicts our construction, as $p = mk + d \equiv d \not\equiv -1 \mod m$.
What's the result? $1/i=?$, where $i=\sqrt{-1}$
Consider: $$\dfrac{1}{i}=\dfrac{1}{i}\times\dfrac{i}{i}=\dfrac{i}{-1}=-i$$ However, the following argument would not work: $$\dfrac{1}{i}=\dfrac{\sqrt{1}}{\sqrt{-1}}=\sqrt{\dfrac{1}{-1}}=\sqrt{-1}=\pm i .$$ The latter argument fails because $\frac{\sqrt{a}}{\sqrt{b}} \equiv \sqrt{\frac{a}{b}}$ holds (if and) only if $a,b>0.$
In arbitrary commutative rings, what is the accepted definition of "associates"?
In rings with zero-divisors, factorization theory is much more complicated than in domains, e.g. $\rm\:x = (3+2x)(2-3x)\in \Bbb Z_6[x].\:$ Basic notions such as associate and irreducible bifurcate into at least a few inequivalent notions, e.g. see the papers below, where three different notions of associateness are compared: $\ a\sim b\ $ are $ $ associates $ $ if $\, a\mid b\,$ and $\,b\mid a$ $\ a\approx b\ $ are $ $ strong associates $ $ if $\, a = ub\,$ for some unit $\,u.$ $\ a \cong b\ $ are $ $ very strong associates $ $ if $\,a\sim b\,$ and $\,a\ne 0,\ a = rb\,\Rightarrow\, r\,$ unit When are Associates Unit Multiples? D.D. Anderson, M. Axtell, S.J. Forman, and Joe Stickles. Rocky Mountain J. Math. Volume 34, Number 3 (2004), 811-828. Factorization in Commutative Rings with Zero-divisors. D.D. Anderson, Silvia Valdes-Leon. Rocky Mountain J. Math. Volume 28, Number 2 (1996), 439-480
Prove the lecturer is a liar...
Let $a$ = the number of women, $b$ = the number of men, and $n = a + b$ be the total number of attendees. The probability that the first 3 students to leave are all female is $\frac{a}{n} \cdot \frac{a-1}{n-1} \cdot \frac{a-2}{n-2}$. Setting this expression equal to $\frac{1}{3}$ and cross-multiplying gives $3a(a-1)(a-2)=n(n-1)(n-2)$. The product of any three consecutive integers is divisible by 6, so the left-hand side is divisible by 18. For the equation to work out, we must have $n \in \{0, 1, 2\}$ modulo 9. This doesn't solve your puzzle, but it does rule out (informally) 2/3 of the domain.
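A brute-force probe of this Diophantine condition (a sketch; the search bound is arbitrary):

```python
# Look for integers n, a with 3a(a-1)(a-2) = n(n-1)(n-2),
# i.e. P(first 3 to leave are all women) = 1/3.
for n in range(4, 10**5):
    rhs = n * (n - 1) * (n - 2)
    a0 = round(n / 3 ** (1 / 3))        # a is near n / 3^(1/3)
    for a in range(max(3, a0 - 2), a0 + 3):
        if 3 * a * (a - 1) * (a - 2) == rhs:
            print(n, a)
```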
Is Bayes' Theorem really that interesting?
You are mistaken in thinking that what you perceive as "the massive importance that is afforded to Bayes' theorem in undergraduate courses in probability and popular science" is really "the massive importance that is afforded to Bayes' theorem in undergraduate courses in probability and popular science." But it's probably not your fault: This usually doesn't get explained very well. What is the probability of a Caucasian American having brown eyes? What does that question mean? By one interpretation, commonly called the frequentist interpretation of probability, it asks merely for the proportion persons having brown eyes among Caucasian Americans. What is the probability that there was life on Mars two billion years ago? What does that question mean? It has no answer according to the frequentist interpretation. "The probability of life on Mars two billion years ago is $0.54$" is taken to be meaningless because one cannot say it happened in $54\%$ of all instances. But the Bayesian, as opposed to frequentist, interpretation of probability works with this sort of thing. The Bayesian interpretation applied to statistical inference is immune to various pathologies afflicting that field. Possibly you have seen that some people attach massive importance to the Bayesian interpretation of probability and mistakenly thought it was merely massive importance attached to Bayes's theorem. People who do consider Bayesianism important seldom explain this very clearly, primarily because that sort of exposition is not what they care about.
Bridge resonance Differential equation
When the bridge is idle we have $$ y''+ky = 0 $$ which has general solution of the form $$ y = C_1 \cos(\sqrt k t)+C_2 \sin(\sqrt k t) $$ The natural-frequency plot can be built having in mind the concept of the transfer function: $$ y''+k y = u\Rightarrow (k-\omega^2)Y(j\omega) = U(j\omega)\Rightarrow G(j\omega) = \frac{Y(j\omega)}{U(j\omega)} = \frac{1}{k-\omega^2}\Rightarrow |G(j\omega)| = \frac{1}{|k-\omega^2|} $$ so the natural resonance angular frequency is $\omega = \sqrt k$. NOTE: Here $Y(s) = \mathcal{L}(y(t))$ is the Laplace transform.
Question about speed of convergence of some sequence.
Your previous post indicates you meant $A_{n}(x)=\sum_{k=1}^{n}\frac{(-1)^k\cos(\ln(k))}{k^x}$. Then $$\sum_{k=1}^{2n-1} (-1)^k k^{-s}= -\eta(s) -\sum_{k=2n}^\infty (-1)^k k^{-s} \\=-\eta(s) -\sum_{k=n}^\infty (2k)^{-s}-(2k+1)^{-s}=-\eta(s)- \sum_{k=n}^\infty \int_{2k}^{2k+1} s x^{-s-1}dx \\= -\eta(s)-\sum_{k=n}^\infty \frac12 \int_{2k}^{2k+2} s x^{-s-1}dx + \frac12\int_{2k}^{2k+1} s (x^{-s-1}-(x+1)^{-s-1})dx$$ $$ = -\eta(s) - \frac12\int_{2n}^\infty s x^{-s-1}dx + \sum_{k=n}^\infty O( \frac12 s (s+1) (2k)^{-s-2}) \\ =-\eta(s) - 2^{-s-1}n^{-s} + O(2^{-s-1} s(s+1) \frac{|n^{-s-1}|}{\Re(s)+1})$$ Take the real and imaginary part with $s = x-i$ to get your $A_n(x),B_n(x)$. For $2n $ instead of $2n-1$ it works the same way up to a sign obtaining $$\sum_{k=1}^{2n} (-1)^k k^{-s}=-\eta(s) + 2^{-s-1}n^{-s} + O(2^{-s-1} s(s+1) \frac{|n^{-s-1}|}{\Re(s)+1})$$
Completing a commutative diagram
Let $f,g$ be as in the diagram in the question, and let $f$ be an epimorphism and $g$ a monomorphism (equivalently, $f$ has trivial cokernel and $g$ has trivial kernel). Let us consider the case when $h \colon C \to D$ is an epimorphism, which holds in your more particular case. Then the composition $A \overset{f}{\to} C \overset{h}{\to} D$ is an epimorphism, and thus the composition $A \to B \overset{g}{\to} D$ would have to be an epimorphism. Then $g$ would have to be an epimorphism, and hence an isomorphism. So in this case there is such a map $A \to B$ if and only if $g$ is an isomorphism. If the map $h \colon C \to D$ is arbitrary, I am not much help. You can say such a map $A \to B$ exists if and only if the composition $\mathrm{Cok}(g) \circ h \circ f = 0$, but that is really just saying the same thing with different words, using that $g$, being a monomorphism, is the kernel of its cokernel.
Calculating multivariable limit
The function is not continuous at (0,0). Another way to see this is to take the limit along $y = 0 $ and $y = x^6 - x^3$. Your approach is correct.
Why would a branch cut not end at a branch point?
Hmm, here is one arguable case: The lacunary function $f(z)=\sum_{n=0}^{\infty} z^{n!} $ is defined on the unit disc and has singularities at every point on the unit circle. Now, the function $$ g(z) = f(e^{-\sqrt z}) $$ where the square root is the usual principal square root, is defined on the complex plane except for the (closed) negative real axis. Then one might consider the negative real axis to be a branch cut for $g$. Or perhaps not; that depends on exactly what one takes "branch cut" to mean and how seriously one takes the requirement (in both Wikipedia's and Wolfram's definitions) that the function must be "multi-valued". In any case its endpoint at $0$ is definitely not a branch point: one cannot continue the function analytically along any closed curve around $0$. Alternatively, and less dramatically, one may consider $$ h(z) = \sum_{n=1}^\infty \frac{\sqrt{z+1/n}}{2^{n}} $$ where again all of the square roots are principal ones. Here the discontinuity along the negative real axis is actually a bona fide branch cut at all except countably many points -- but still the endpoint at $0$ is not a branch point.
Inequality in a paper from Terence Tao on the Collatz Conjecture
Thank you for the suggestion of writing to Terence Tao; he answered my question on his blog. Just in case someone else faces the same question in future: https://terrytao.wordpress.com/2019/09/10/almost-all-collatz-orbits-attain-almost-bounded-values/#comment-570769
What does "$\frac{m}{n}$ is fully reduced" mean?
It means just what it says - that $m$ and $n$ are coprime, meaning that the greatest common divisor of $m$ and $n$ is 1, so you cannot simplify the fraction further. If a fraction is not fully reduced, then we have $\text{gcd}(m, n) = k$, so we can factor out $k$ in both the numerator and the denominator, cancel it out and get the fully reduced form.
Periodicity of a Certain Generalized Power Series
This is closely related to series multisection. Theorem. For an analytic function $f(z)=\sum^\infty_{n=0}a_n z^n$, $$\sum^\infty_{m=0}a_{qm+p}\cdot z^{qm+p}=\frac1q\sum_{k=0}^{q-1}\omega^{-kp}f(\omega^k z)$$ where $\omega=e^{2\pi i/q}$ and $p,q$ are integers with $0\le p< q$. Hence $$H(j,k,x)=\frac1{k}\sum^{k-1}_{n=0}\omega^{-nj}\exp\left(\omega^n x\right)$$ where $\omega=e^{2\pi i/k}$. Not to spoil everything, I will stop here and I am sure you can proceed with this hint. Unfortunately, periodicity does not exist for $k\ge 3$. Denote by $T_n$ the period of the $n$th term in the summation. Then $$T_n=\frac{2\pi i}{\omega^n}$$ If a ‘universal period’ exists, then for every pair $(n_1,n_2)$ there exist positive coprime integers $a,b$ such that $$aT_{n_1}=bT_{n_2}\implies \omega^{n_2-n_1}\in\mathbb Q$$ Take $n_2=n_1+1$. It is clear that there is no universal period for $k\ge 3$.
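A numerical check of the multisection formula for $H$ (a sketch, assuming $H(j,k,x)=\sum_{m\equiv j \pmod k} x^m/m!$ as in the question):

```python
import cmath
from math import factorial

def H_direct(j, k, x, terms=80):
    return sum(x**m / factorial(m) for m in range(j, terms, k))

def H_formula(j, k, x):
    w = cmath.exp(2 * cmath.pi * 1j / k)
    return sum(w**(-n * j) * cmath.exp(w**n * x) for n in range(k)) / k

print(H_direct(1, 3, 0.7), H_formula(1, 3, 0.7))  # real parts should agree
```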
Is it always true if two random variables are jointly Gaussian, then they must be individually Gaussian as well?
Yes, it is always true. If $X$ and $Y$ are jointly Gaussian with mean $\mu = (\mu_x, \mu_y)$ and covariance matrix $\Sigma$, then $X$ is Gaussian with mean $\mu_x$ and variance $\Sigma_{1,1}$. Just integrate out the other variable: $$f_X(x) = \int_{-\infty}^\infty f_{X,Y}(x,y)\; dy $$
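To see this concretely in the standard bivariate case (zero means, unit variances, correlation $\rho$ with $|\rho|<1$), complete the square in $y$: since $x^2-2\rho xy+y^2=(y-\rho x)^2+(1-\rho^2)x^2$, $$f_X(x) = \int_{-\infty}^\infty \frac{1}{2\pi\sqrt{1-\rho^2}}\exp\left(-\frac{x^2-2\rho xy+y^2}{2(1-\rho^2)}\right)dy = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2},$$ because the remaining $y$-integral is a Gaussian integral equal to $\sqrt{2\pi(1-\rho^2)}$. The general case reduces to this one by an affine change of variables.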
Leading terms in asymptotic expansion of modified bessel function of the first kind
From $$ I_n(x) ~=~\frac{1}{\pi}\int_0^{\pi} \! \mathrm{d}\theta ~\exp\left(x\cos\theta\right)\cos n\theta, \qquad n\in~\mathbb{N}_0,\tag{A} $$ we calculate $$\begin{align} \sqrt{x}\pi e^{-x}I_n(x) &~~=~\sqrt{x} \int_0^{\pi} \! \mathrm{d}\theta ~\exp\left(- x(1-\cos\theta)\right)\cos n\theta \cr &\stackrel{t=\sqrt{x}\theta}{=}~ \int_0^{\pi\sqrt{x}} \! \mathrm{d}t ~\exp\left(- x(1-\cos\frac{t}{\sqrt{x}})\right)\cos \frac{nt}{\sqrt{x}} \cr &~~=~ \int_0^{\infty} \! \mathrm{d}t ~\exp\left(- \frac{t^2}{2} + \frac{t^4}{24 x} + O(x^{-2})\right) \left(1- \frac{(nt)^2}{2x} + O(x^{-2})\right) \cr &~~=~ \int_0^{\infty} \! \mathrm{d}t ~\exp\left(- \frac{t^2}{2}\right) \left(1- \frac{(nt)^2}{2x}+ \frac{t^4}{24 x} + O(x^{-2})\right) \cr &\stackrel{u=t^2/2}{=}~ \int_0^{\infty} \! \frac{\mathrm{d}u}{\sqrt{2u}} ~\exp\left(- u\right) \left(1- \frac{n^2u}{x}+ \frac{u^2}{6x} + O(x^{-2})\right) \cr &~~=~\frac{1}{\sqrt{2}}\left( \Gamma(\frac{1}{2}) - \frac{n^2}{x}\Gamma(\frac{3}{2}) +\frac{1}{6x}\Gamma(\frac{5}{2}) + O(x^{-2})\right)\cr &~~=~\sqrt{\frac{\pi}{2}}\left( 1+ \frac{1-4n^2}{8x} + O(x^{-2})\right), \end{align}\tag{B}$$ which agrees with OP's sought-for formulas (1).
Matrix of complex linear transformation!
The general approach is to write down the equation $T(z,w)=(z,0)$ in matrix form. So for all $z,w \in \mathbb{C}$ $$ \begin{bmatrix}a & b \\ c & d\end{bmatrix} \begin{bmatrix} z \\ w \end{bmatrix} = \begin{bmatrix} z \\0 \end{bmatrix},$$ where $a,b,c,d \in \mathbb{C}$ are the unknowns we must find. A good trick is to use simple values of $z$ and $w$ so that you get easy equations for $a,b,c,d$. For instance, take $z=1$, $w=0$ and also $z=0$, $w=1$.
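Carrying the trick through: $z=1$, $w=0$ forces the first column, $(a,c)=(1,0)$, and $z=0$, $w=1$ forces the second, $(b,d)=(0,0)$, so $$\begin{bmatrix}a & b \\ c & d\end{bmatrix} = \begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}.$$ By linearity these two checks suffice, since $(1,0)$ and $(0,1)$ form a basis of $\mathbb{C}^2$.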
Analysis Question Involving Real Numbers
Take $A$ and $B=A^c$. We have to show first that $(A|B)$ is a cut. For this we need to show that for all $x\in A$ and $y\in B$ we have $x\leq y$. Assume $x\in A$ and $y\in B$. Then, $y>0$ and $y^2\geq2$. If $x>y$ then $x>0$ and it follows that $x^2>xy>y^2\geq2$. Contradiction. Therefore $x\leq y$. Therefore $(A|B)$ is a cut. Now consider $(A_1|B_1)=(A|B)^2$, i.e. where $B_1=\{z=y_1y_2:\ y_1,y_2\in B\}$ and $A_1=B_1^c$. Observe that $B_1\supset \{y>0:\ y\geq2\}$. Conversely, if $y\in B_1$ then $y=y_1y_2$ with $y_1,y_2\in B$. Therefore $y_1,y_2>0$ and $y_1^2,y_2^2\geq2$. Then $(y_1y_2)^2\geq4$ and $y_1y_2>0$. Therefore $y_1y_2\geq2$. Hence $B_1=\{y>0:\ y\geq2\}$. That is, $(A_1|B_1)=2$.
Ramsey cycle property
Since $K_p$ contains a $p$-cycle whenever $p\ge 3$, it’s clear that a graph on $R(p,q)$ nodes has the $(p,q)$ Ramsey cycle property, where $R(p,q)$ is a Ramsey number; thus, $$R(p,q)\in\{n\in\Bbb Z^+:n\text{ has the }(p,q)\text{ Ramsey cycle property}\}\ne\varnothing\,,$$ and $Z(p,q)$ is well-defined and finite for $p,q\ge 3$. The Wikipedia article to which I linked has a proof of the $2$-color Ramsey theorem, which is what I’m using here, by way of a recursion that shows that $$R(3,4)\le R(2,4)+R(3,3)-1\,.$$ It also shows that $R(2,4)=4$ and $R(3,3)=6$, so $Z(3,4)\le R(3,4)\le 9$. On this web page you’ll find a graph on $8$ nodes that contains no $K_3$ and whose complement contains no $K_4$, showing that in fact $R(3,4)=9$. Of course it contains no $3$-cycle, since $K_3$ is a $3$-cycle, and you’ll find that its complement contains no $4$-cycle.
$\sigma$-algebra generate by uncountable collection of subsets.
Use the "good set principle", i.e. Define $$ M = \{A \in \sigma(F) \mid \exists \text{ countable } F_A \subset F \text{ such that } A \in \sigma(F_A)\}. $$ Show that this is a sigma algebra containing $F$.
Lyapunov stability dynamic systems
The system has two equilibrium points, $x=1$ and $x=2$. To study the first equilibrium point $x=1$, consider the variable transformation $z=x-1$. Then the original dynamical system is rewritten as follows: $$\dot{z}=-z(z-1)^2$$ Let $V(z)=\frac{1}{2}z^2$. Its time derivative is $$\dot{V}(z)=z\dot{z}=-z^2(z-1)^2<0\quad\text{for } 0<|z|<1,$$ so the equilibrium point $z=0$ (or equivalently $x=1$) is asymptotically stable. For the second equilibrium $x=2$, we cannot construct a Lyapunov function because it is not stable: indeed, $\dot{z}<0$ on both sides of $z=1$, so trajectories starting just below $z=1$ move away from it.
Find nth term of sequence
Well, the pattern's clear, right? So $A_1 = 1$ $A_2,A_3 = 1,2$ $A_4,\dots, A_6 = 1,\dots,3$ .... $A_{foo}, \dots, A_{bar} = 1,\dots,n$ Well, okay: the index $foo = 1 + 2 + 3 + \cdots +(n-1) + 1$ and the index $bar = 1 + 2 + 3 + \cdots + (n-1) + n$. So we have $A_{1 + 2+ \cdots + (n -1) +1},\dots, A_{1+2+\cdots+(n-1) + n} = 1,\dots, n$. Or in other words $A_{1 + 2 + \cdots + (n - 1) + i} = i$. Is there any way we can find a formula from that? Would it help if I told you $1 + 2 + 3 + \cdots + (n-1) = \frac {(n-1)n}{2}$ and that $1 + 2 + 3 + \cdots + n = \frac {n(n+1)}{2}$? So we have $A_{\frac {(n-1)n}{2} + i} = i$. Can we make a formula from that?
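In case it helps to see where the hint leads, here is a small illustrative sketch in Julia (the function name is my own); it inverts the triangular numbers to find which block a given index $m$ falls in:

# Blocks end at the triangular numbers n(n+1)/2, so the block containing
# index m is the smallest n with n(n+1)/2 >= m, and the term is the
# offset of m into that block.
function nth_term(m::Integer)
    n = ceil(Int, (sqrt(8m + 1) - 1) / 2)
    return m - n * (n - 1) ÷ 2
end

# sanity check: the sequence starts 1, 1,2, 1,2,3, 1,2,3,4, ...
@assert [nth_term(m) for m in 1:10] == [1, 1, 2, 1, 2, 3, 1, 2, 3, 4]

(For very large $m$ one would want an integer square root instead of the floating-point one.)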
If a matrix power has a limit which is positive, then there exists some power of the matrix such that the matrix is positive for this power.
Since $\lim\limits_{k\to\infty} A^k=L$, this holds entrywise. In other words, $$ \lim\limits_{k\to\infty} (A^k)_{ij}=L_{ij}>0\tag{1} $$ for all $i$ and $j$. Now take arbitrary $i,j$; from $(1)$ it follows that there exists $m_{ij}\in\mathbb{N}$ such that $(A^k)_{ij}>0$ for all $k\geq m_{ij}$. Now consider $m=\max_{i,j} m_{ij}$; then for all $k\geq m$ and all $i,j$ we have $(A^k)_{ij}>0$. This means that there exists $m\in\mathbb{N}$ such that $A^k>0$ for all $k\geq m$.
Existence theorem for antiderivatives by Weierstrass approximation theorem
I propose now the following proof (thanks to R. Israel for the hint): One can easily write down antiderivatives for polynomials. Let now $f\in C^0[a,b]$ and $(p_k)$ be a sequence of polynomials with $p_k\rightarrow f$ uniformly (by the Weierstrass approximation theorem); let $(P_k)$ be a sequence of polynomials with $P'_k=p_k$, made unique with this property by requiring that $P_k(a)$ be equal to some constant $C$ (independent of $k$) for all $k$.

Let $\varepsilon>0$; since $(p_k)$ is a Cauchy sequence there is an index $N$ such that $||p_m-p_n||<\varepsilon/|b-a|$ for all $m,n\geq N$ (here $||\cdot||$ denotes the sup norm). By the Mean value theorem we have for all $x\in [a,b]$ and all indices $m,n$ $$|P_m(x)-P_n(x)|=|P_m(x)-P_n(x)-(P_m(a)-P_n(a))|\leq||p_m-p_n||\,|x-a|\leq||p_m-p_n||\,|b-a|,$$ hence taking the sup with respect to $x\in[a,b]$ gives $||P_m-P_n||\leq||p_m-p_n||\,|b-a|$. Therefore $(P_k)$ is a Cauchy sequence, as $||P_m-P_n||\leq||p_m-p_n||\,|b-a|<(\varepsilon/|b-a|)\cdot|b-a|=\varepsilon$ whenever $m,n\geq N$; let $F\in C^0[a,b]$ be its limit (it exists by the completeness of $C^0[a,b]$ with respect to the sup norm).

Then $F\in C^1(a,b)$ and $F'=f$ by the standard theorem on differentiating uniform limits (applied to $P_k\to F$ with $P'_k=p_k\to f$ uniformly), completing the proof.
Show that for any classes $\mathbf A, \mathbf B: \mathbf B - (\mathbf A - \mathbf B) = \emptyset$
Let $$A=\{1,2,3\} \text{ and } B=\{3,4,5\}.$$ Then $$A-B=\{1,2\},$$ but $$B-(A-B)=\{3,4,5\} \neq \emptyset.$$ In fact, $$B-(A-B)=B \cap (A \cap B^c)^c=B \cap (A^c \cup B)=(B\cap A^c) \cup B=B.$$ Alternatively: since $B \cap (A-B) = \emptyset$, we have $B-(A-B)=B$.
Complex Analysis textbook - specific criteria
I think that John Conway's Functions of One Complex Variable is precisely what you want. For example it defines the complex integrals using rectifiable curves as you want, also it does the homotopy version of Cauchy's theorem, it defines the winding number as you want, and it even has a chapter on analytic continuation and Riemann surfaces (with a discussion of sheaves of germs). Now, maybe it does not have a lot of analytic number theory in it, but it has some relevant sections on the factorization of the Sine function, and on the Gamma and Zeta functions. Finally, another book worth looking at is Lars Ahlfors' Complex Analysis. Although I personally prefer Conway's book.
all complex solutions of $z\sin(z)=1$?
See the following paper: Charles Edward Siewert and Ernest Edmund Burniston, Exact analytical solutions of $ze^{z} = a,$ Journal of Mathematical Analysis and Applications 43 #3 (September 1973), 626-632. Authors' abstract: By means of the theory of complex variables, the solutions of $ze^{z} = a,$ where $a$ is in general complex, are established analytically, and thereby reduced to elementary quadratures.
Big-Oh Analysis of For Loop
When dealing with for loops, the strategy is as simple as stacking summations one on top of the other. Each loop corresponds to a sum, thus the complexity of this function is: $$ \sum_{i=1}^{n}\sum_{j=1}^{i^3}j $$ The sums correspond to the first two loops, and the $j$ term accounts for the last one. We can now expand this expression using the partial-sum formula for arithmetic progressions. Therefore, the number of operations being done (and thus the complexity of the algorithm) is a function of $n$ in $\mathcal{O}\left(\sum_{i=1}^{n}\sum_{j=1}^{i^3}j\right)$.
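Carrying the computation through: $$\sum_{i=1}^{n}\sum_{j=1}^{i^3}j=\sum_{i=1}^{n}\frac{i^3(i^3+1)}{2}=\Theta\left(\sum_{i=1}^{n}i^6\right)=\Theta(n^7),$$ so the loop nest performs $\Theta(n^7)$, and in particular $\mathcal{O}(n^7)$, operations.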
Integrating $\int^{+\infty}_{-\infty} dz e^{-m\sqrt{z^2+r^2}}\frac{1}{(z^2 + r^2)^{3/2}}$
By parity the given integral equals $$ 2\int_{0}^{+\infty}\frac{dz}{(z^2+r^2)^{3/2}\exp(m\sqrt{z^2+r^2})}\stackrel{z\mapsto r\sinh u}{=}\frac{2}{r^2}\int_{0}^{+\infty}\frac{du}{\cosh^2(u)\exp(mr\cosh u)} $$ (the Jacobian $dz=r\cosh u\,du$ cancels one power of $\cosh u$) or, by setting $u=\text{arccosh}(v)$, $$ \frac{2}{r^2}\int_{1}^{+\infty}\frac{e^{-mrv}}{v^2\sqrt{v^2-1}}\,dv =\frac{2}{r^2 e^{mr}}\int_{0}^{+\infty}\frac{e^{-mrt}\,dt}{(t+1)^2 \sqrt{t(t+2)}}$$ where the last integral tends to $1$ when $mr\to 0^+$ and is close to $\sqrt{\frac{2\pi}{9+4mr}}$ when $mr\to +\infty$, by approximating$^{(*)}$ $\frac{1}{(t+1)^2\sqrt{t+2}}$ with $\frac{1}{\sqrt{2}}e^{-9t/4}$. I believe there is no simple representation in terms of standard hypergeometric functions. $(*)$ The Laplace method is the standard technique for deriving the asymptotic behaviour of $K_0$ and similar functions.
How to Solve Large Scale Matrix Least Squares with Frobenius Regularization Problem efficiently?
Here is a simple Julia script. If you translate it to another language beware of the nested loops. Julia handles these efficiently but they should be vectorized for Matlab or Python. The first time the script is run it will create tab-separated-values (TSV) files for the $X$ and $W$ matrices. On subsequent runs, the script will read the TSV files, execute $k_{max}$ iterations, update the TSV files, and exit. Thus you can intermittently refine the solution until you run out of patience. #!/usr/bin/env julia # Sequential Coordinate-wise algorithm for Non-Negative Least-Squares # as described on pages 10-11 of # http://users.wfu.edu/plemmons/papers/nonneg.pdf # # Convergence is painfully slow, but unlike most other NNLS # algorithms the objective function is reduced at each step. # # The algorithm described in the PDF was modified from its # original vector form: |Ax - b|² # to the matrix form: |LXKᵀ - M|² + λ|X|² # # and to include the regularization term. using LinearAlgebra, MAT, DelimitedFiles function main() matfile = "problem.mat" Xfile = "problem.mat.X.tsv" Wfile = "problem.mat.W.tsv" # read the matrices from the Matlab file f = matopen(matfile) K = read(f,"K1"); println("K: size = $(size(K)),\t rank = $(rank(K))") L = read(f,"K2"); println("L: size = $(size(L)),\t rank = $(rank(L))") M = read(f, "M"); println("M: size = $(size(M)),\t rank = $(rank(M))") # S = read(f,"S00");println("S: size = $(size(S)),\t rank = $(rank(S))") close(f) A = L'L B = K'K C = -L'M*K m,n = size(C) λ = 1/10 # regularization parameter kmax = 100 # maximum iterations # specify the size of the work arrays X = 0*C W = 1*C H = A[:,1] * B[:,1]' # resume from latest saved state ... or reset to initial conditions try X = readdlm(Xfile); println("X: size = $(size(X)), extrema = $(extrema(X))") W = readdlm(Wfile); println("W: size = $(size(W)), extrema = $(extrema(W))") println() catch @warn "Could not read the saved X,W matrices; re-initializing." X = 0*C W = 1*C end fxn = (norm(L*X*K' - M)^2 + λ*norm(X)^2) / 2 println("at step 0, fxn = $fxn") k = 0 while k < kmax for i = 1:m for j = 1:n mul!(H, A[:,i], B[:,j]') H[i,j] += λ δ = min( X[i,j], W[i,j]/H[i,j] ) X[i,j] -= δ H .*= δ W .-= H end end k += 1 fx2 = (norm(L*X*K' - M)^2 + λ*norm(X)^2) / 2 println("after step $k, fxn = $fx2") # convergence check if fx2 ≈ fxn; break; end fxn = fx2 end # save the current state for the next run writedlm(Xfile, X) writedlm(Wfile, W) # peek at the current solution println("\nsummary of current solution") println(" vector(X) = $(X[1:4]) ... $(X[end-3:end])") println("extrema(X) = $(extrema(X))") end # invoke the main function main()
joint probability in sum
Read $\sum_{x,y}p\left(x,y\right)$ as $\sum_{x}\sum_{y}p\left(x,y\right)$. Then: $$\sum_{x,y}p\left(x,y\right)=\sum_{x}\sum_{y}p\left(x,y\right)=\sum_{x}p\left(x\right)$$ as a consequence of: $$\sum_{y}p\left(x,y\right)=p\left(x\right)$$ Applying the same idea to $\sum_{x,y}p\left(x,y\right)\log p\left(x\right)$ gives: $$\sum_{x,y}p\left(x,y\right)\log p\left(x\right)=\sum_{x}\log p\left(x\right)\sum_{y}p\left(x,y\right)=\sum_{x}p\left(x\right)\log p\left(x\right)$$
Asymptotic equivalence and $\lim_{x\to 0} \frac{\sin x}{x}=1$
No, asymptotic equivalence doesn't justify the limit $\lim_{x \to 0}\frac{\sin x}{x} = 1$; it is the limit that justifies the equivalence, so arguing in the other direction would be circular.
Halmos, Finite-Dimensional Vector Spaces, Sec. 7, Ex. 8
First, the set contains $8$ elements, of which $7$ can be used to form a basis (the zero vector $(0,0,0)$ does not count). So there are $\binom{7}{3} = 35$ possible combinations. A combination does not form a basis if one of its elements can be expressed as a linear combination of the other two. In this particular case, the only possibility is $v_1 = v_2 + v_3$, since the vectors have components $0$ or $1$. Moreover, the two vectors $v_2$ and $v_3$ must be like $(0,0,1)$ and $(0,1,0)$, so that the sum does not have $2$ as a component. So there are merely $2$ possible situations:

1. $v_2$ and $v_3$ both have exactly one $1$ in their coordinates and in different places, like $(0,0,1)$ and $(0,1,0)$. There are $\binom{3}{2} = 3$ possibilities.
2. $v_2$ has one $1$ and $v_3$ has two $1$s in their coordinates, and the $1$s are in a totally complementary position, for example $(0,0,1)$ and $(1,1,0)$, so that they add up to $(1,1,1)$. There are $\binom{3}{1} = 3$ possibilities.

So the answer is $35 - 3 - 3 = 29$.
Challenging recurrence relation problem
I've checked some parts of your question. Your approach seems feasible and many calculations look quite OK. But I see two possible sources of problems: When calculating the derivative of e.g. $$ \lambda_{n,c}(x) = \sum_{k=c}^n \sum_{j=0}^{k-c} {k-c \choose j} \ln(g(x))^{k-c-j} \frac{d^j}{df^j}[f(x)_c] B_{n,k}^{(f \diamond g)^c}(x) $$ and we take boundary values of the indices, e.g. $k=c$ and $j=0$, the inner part of the sum gives $$\ln(g(x))^{k-c-j}=\ln(g(x))^{0}=1.$$ So, when doing differentiation the product rule might not be applicable in all cases. Your explanation in the case (B) is not easy (for me) to check. Although the switch from $c$ to $c-1$ sounds reasonable, it's hard to review, especially since a combination with the cases (A), (C), and (D) is not given. In fact, providing the complete expression with all sums and settings of the indices is essential for a proper review. Hint: Select small values of $n,k$ and $c$ which permit a non-trivial (i.e. none of the inner sums is empty) but manageable manual calculation of the formulas in order to check your approach. I suggest $n=3,k=2$ and $c=1$. Some more aspects: With respect to this related question your main task is to find a recurrence relation for \begin{align*} B_{n,k}^{(f \diamond g)^c}(x):=\frac{(a^{(k-c)\diamond}\diamond b^{c\diamond})_n}{(k-c)!c!}\qquad 1\leq k\leq n, 0\leq c \leq k \end{align*} $a=(f^\prime,f^{\prime\prime},\ldots),b=(g^\prime,g^{\prime\prime},\ldots)$ with $f,g$ sufficiently often differentiable real-valued functions. This is an expression involving the convolution identity $\diamond$ of certain partial Bell polynomials. According to point (6) in this answer we already know an explicit representation in terms of the partial Bell polynomials \begin{align*} B_{n,k}^f&:=B_{n,k}(f^\prime,f^{\prime\prime},\ldots,f^{(n-k+1)})\qquad \qquad 1\leq k \leq n\\ B_{n,k}^g&:=B_{n,k}(g^\prime,g^{\prime\prime},\ldots,g^{(n-k+1)})\\ \end{align*} namely (arguments in the following sometimes omitted): The following is valid for $1\leq k \leq n$: \begin{equation*} B_{n,k}^{(f \diamond g)^c}= \begin{cases} B_{n,k}^f\qquad& c=0\\ \sum_{l=1}^{n-1}\binom{n}{l}B_{l,k-c}^fB_{n-l,c}^g\qquad& n>1, 0 < c < k\tag{1}\\ B_{n,k}^g\qquad& c=k\\ \end{cases} \end{equation*} We also know according to (1) and (2) of this answer \begin{align*} \frac{d}{dx}&B_{n,k}^f(x)=B_{n+1,k}^f(x)-f^{\prime}(x)B_{n,k-1}^f(x)\tag{2} \end{align*} and \begin{align*} B_{n,k}^f(x)&=\frac{1}{k!}\sum_{j=0}^k\binom{k}{j}\left(-f(x)\right)^{k-j}\frac{d^n}{dx^n}\left(f(x)\right)^j \end{align*} Approach: So, based upon (1) you could also try to start from $$B_{n+1,k}^{(f \diamond g)^c}=\sum_{l=1}^{n-1}\binom{n+1}{l}B_{l,k-c}^fB_{n+1-l,c}^g\qquad n>1, 0<c<k$$ and then you could apply (2) in order to properly replace certain parts of the expression together with $\binom{n+1}{l}=\binom{n}{l}+\binom{n}{l-1}$ and some index shifting.
If two transitive models of ZFC have the same sets of ordinals, they are equal
HINT: Take any set $x\in M$, show that by knowing $\operatorname{tc}(\{x\})$ we know what $x$ is; then use the axiom of choice to find a set of ordinals $A$ such that by decoding $A$ into a set of ordered pairs of ordinals, we obtain a structure isomorphic to $\operatorname{tc}(\{x\})$, finally use Mostowski's collapse lemma.
Integer solutions of the equation: $x^2+y^2+z^2=kxyz$
For $k=3$ this is the Markov Equation. It has the solution $(1,1,1)$, and from this solution you can build any other via the map $$(x_0,y_0,z_0) \to (x_0, y_0, 3x_0y_0 - z_0)$$ and by noticing that you can permute $x,y,z$. $$(1,1,1) \to (1,1,2)$$ $$(1,2,1) \to (1,2,5)$$ $$(1,5,2) \to (1,5,13)$$ $$(2,5,1) \to (2,5,29)$$ etc.
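For illustration, here is a small sketch in Julia (the function name and the size cutoff are my own choices) that enumerates the triples for $k=3$ by applying this map in every coordinate, starting from $(1,1,1)$:

# Enumerate Markov triples up to a size limit by "Vieta jumping":
# from (x,y,z), each coordinate can be replaced using the map above.
function markov_triples(limit)
    seen = Set{NTuple{3,Int}}()
    stack = [(1, 1, 1)]
    while !isempty(stack)
        (x, y, z) = pop!(stack)
        t = Tuple(sort([x, y, z]))          # store triples up to permutation
        (t in seen || maximum(t) > limit) && continue
        push!(seen, t)
        push!(stack, (3y * z - x, y, z))    # jump in the first coordinate
        push!(stack, (x, 3x * z - y, z))    # ... the second
        push!(stack, (x, y, 3x * y - z))    # ... the third
    end
    sort!(collect(seen); by = maximum)
end

markov_triples(30)   # (1,1,1), (1,1,2), (1,2,5), (1,5,13), (2,5,29)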
Can $\sin(x^2)$ be solution of the diff equation $y''+p(x)y'+q(x)y=0$ in some interval containing $0$
We have $y = \sin (x^2)$, $y' = 2x \cos (x^2)$, and $y'' = 2 \cos(x^2) - 4x^2 \sin (x^2)$. Substituting, $$ 2 \cos(x^2) - 4x^2 \sin (x^2) + p(x) 2x \cos (x^2) + q(x) \sin (x^2) = 0$$ Following A.G.'s solution given in the comments: if $p(x)$ and $q(x)$ are bounded in a neighbourhood of $0$, then letting $x\to 0$ in the equation gives $2=0$, since every term except $2\cos(x^2)$ tends to $0$. Hence $p(x)$ and $q(x)$ cannot both be continuous at $0$.
Why isn't every subcollection of a set a set in ZF?
The "formula" "$y\in z$" . . . isn't really a formula. It's not expressible, unless $z$ is definable from parameters in the model somehow. So in general the axiom of separation won't apply. For example, take (by the Lowenheim-Skolem theorem) a model $M$ of ZFC containing $\omega$ but which is countable. Then most subsets of $\omega$ - that is, most reals - won't be in $M$, but will of course be sub-collections of an element of $M$ (namely $\omega$). Do you see why this isn't a contradiction?
Nonlinear ODE Abel equation of the first kind
Hint (Before edit): I assume $A_1$, $A_3$ and $F$ to be constants. Then this differential equation is separable: $$\dfrac{dy}{F-A_1y-A_3y^3}=dx$$ The solution is not very nice, but if you can narrow the range of $y$ you could simplify it. EDIT: In the case of a general $F(x)$ it is very likely that there is no simple closed-form solution to the ODE. If you are trying to solve a practical problem you could check how much $F(x)$ changes over the practical range of values of $x$. It might be that $F(x)\approx \text{const.}$ in the practical range of values.
Local Extrema of a 2D Surface in R^3 (Math GRE Question)
Find the critical points by setting the partial derivatives simultaneously to zero. In this case, $3x^2+3y=0$ and $3y^2+3x=0$. Subtracting, $$3(x^2-y^2)-3(x-y)=0\quad\longrightarrow\quad (x-y)(x+y-1)=0.$$ If $x=y$ then $x^2=-x$, which gives $x=y=0$ or $x=y=-1$. If $y=1-x$ then $x^2-x+1=0$, which has no real solutions. Therefore the two critical points are $(0,0)$ and $(-1,-1)$. Evaluate the Hessian at both points to determine the nature of each critical point.
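For completeness, the partials shown come from $f(x,y)=x^3+y^3+3xy$, whose Hessian is $$H(x,y)=\begin{pmatrix}6x & 3\\ 3 & 6y\end{pmatrix}.$$ At $(0,0)$ we get $\det H=-9<0$, a saddle point (so not an extremum); at $(-1,-1)$ we get $\det H=36-9=27>0$ with $f_{xx}=-6<0$, a local maximum.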
Prove if $|z|,|w|<1 $, then $|\frac{z-w}{1-zw}|<1 $ and if $|w|=|z|=1 $, then $\frac{z-w}{1-zw}\in{\Re}$
Let $|w|=|z|=1$ (and $zw\neq 1$). We claim that the fraction equals its own conjugate: $$\frac{z-w}{1-wz}=\frac{\bar{z}-\bar{w}}{1-\bar{z}\bar{w}}.$$ Indeed, $$(1-\bar{w}\bar{z})(z-w)=z-\bar{w}-w+\bar{z}=(\bar{z}-\bar{w})(1-zw),$$ using $z\bar{z}=w\bar{w}=1$. From this calculation we have that the above complex number is equal to its conjugate, thus it is a real number.