Proving that the derivative is unique in higher dimensions
Let $E$ and $F$ be normed linear spaces, so $\mathbb{R}^n$ and $\mathbb{R}^m$ in your case. Let $T_1$ and $T_2$ be continuous linear maps, each satisfying $$ f(x+h) = f(x)+T_j(h)+|h|\psi_j(h),$$ where $\psi_j:E\to F$ satisfies $\lim_{h\to 0}\psi_j(h)=0$ and $\psi_j(0)=0$ for $j=1,2$. Let $v\in E$ and $t\in\mathbb{R}$, $t>0$, be such that $x+tv$ lies in $D$, and set $h=tv$. Then $$\begin{aligned} f(x+h)-f(x)&=T_1(h)+|h|\psi_1(h)\\ &=T_2(h)+|h|\psi_2(h). \end{aligned}$$ Subtracting the two expressions for $f(x+h)-f(x)$, we have that $$T_1(h)-T_2(h)=|h|(\psi_2(h)-\psi_1(h)). $$ Setting $h=tv$ and using linearity of $T_1-T_2$, $$t(T_1(v)-T_2(v))=t|v|(\psi_2(tv)-\psi_1(tv)). $$ Dividing by $t$: $$T_1(v)-T_2(v)=|v|(\psi_2(tv)-\psi_1(tv)). $$ Taking the limit as $t\to 0$, the right-hand side tends to $0$. This implies that $T_1(v)-T_2(v) = 0\implies T_1(v)=T_2(v)$, and since $v$ was arbitrary, $T_1=T_2$.
State which are one-one and onto or not?
Well, I am not a professional, but it seems that your first function does not map integers to positive integers for $x \leq 0$. So, for the first function, we may say that it is one-to-one but NOT onto (since every positive integer will be mapped to an odd positive integer but NEVER an even positive integer). On the other hand, the second function is also one-to-one and onto, since the first part maps non-negative integers to odd positive integers and the second part maps negative integers to the even positive integers. Hence, we may say that the second function is one-to-one and onto. As said earlier, I am not really a professional (in fact, this is my first answer), but I hope this helps.
Do transcendental numbers contain any string of digits?
No. Consider the Liouville number $\sum_k{\frac{1}{10^{k!}}}$. It is transcendental but all of its digits are $0$ or $1$. As for $\pi$, it is not known whether its decimal expansion contains every finite string of digits, but it is conjectured to have the stronger property of being normal in all bases.
What is the generator of all polynomials that determine the zero function in a finite field?
This is correct. A generator $g$ has to be a multiple of $x-c$ for every element $c \in F$ as you can see by considering the Euclidean division of $g$ by $x-c$. We get the conclusion as $x-c_1,x-c_2$ are coprime polynomials for $c_1 \neq c_2$.
Compute or approximate the inverse of $\mathbf{A} + \mathbf{u}\mathbf{v} + \lambda \mathbf{I}$
Given any invertible matrix $B$ whose eigenvalue of smallest absolute value is $\mu\ne 0$, $1/|\mu|$ is the spectral radius of $B^{-1}$, so, if $|\lambda|<|\mu|$, we have $$ (B+\lambda I)^{-1} = B^{-1}\sum_{i=0}^{\infty} B^{-i}\lambda^i(-1)^i, $$ so you can approximate the inverse with a partial sum of the series, with $B=A+uv^T$.
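A minimal numerical sketch of the truncated series above (the matrices, the scaling, and the helper name `neumann_inverse` are made-up illustration choices, not part of the original answer):

```python
import numpy as np

def neumann_inverse(B, lam, terms=30):
    """Approximate (B + lam*I)^{-1} by the partial sum B^{-1} * sum_i (-lam)^i B^{-i}."""
    Binv = np.linalg.inv(B)
    acc = np.zeros_like(B)
    power = np.eye(B.shape[0])          # holds B^{-i}, starting with i = 0
    for i in range(terms):
        acc += (-lam) ** i * power
        power = power @ Binv
    return Binv @ acc

rng = np.random.default_rng(0)
A = 5 * np.eye(4) + 0.1 * rng.standard_normal((4, 4))      # example data only
u, v = 0.3 * rng.standard_normal(4), 0.3 * rng.standard_normal(4)
B = A + np.outer(u, v)
lam = 0.1                                                  # must satisfy |lam| < |mu|
error = np.abs(neumann_inverse(B, lam) - np.linalg.inv(B + lam * np.eye(4))).max()
print(error)                                               # tiny when the series converges
```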
Prove that $||f||_p=\sup|\int fg d\mu|$
For $p>1$: choose sets $A_n$ of finite measure increasing to $X$. LHS $\leq$ RHS is immediate from Hölder's inequality. Now consider $g=\frac 1 c I_{A_n} |f|^{p-1} \operatorname{sgn}(f) I_{\{x:|f(x)| \leq N\}}$ where $c=(\int_{\{x:|f(x)| \leq N\}} I_{A_n} |f|^{p})^{1/p'}$ and $p'$ is the conjugate exponent of $p$. You can see that RHS $ \geq \int fg$ for each $N$ and that this tends to LHS as $N \to \infty$. [$\operatorname{sgn}(f)$ stands for the sign of $f$.] I will let you handle the case $p=1$.
Show $ F^{\prime}\left(t_{0}\right)=D f_{p+t_{0} v}(v)=D_{v} f\left(p+t_{0} v\right) $
You're basically showing how the Fréchet and Gateaux derivatives are related: first, write $F$ as a composition $F=f\circ g:\mathbb R\to \mathbb R^n$ where $g(t)=p+tv$. Now then, by the chain rule: $\tag1 DF(t_0)=Df(g(t_0))(Dg(t_0))=Df(g(t_0))v$ But $Df(g(t_0))$ is the linear transformation $\mathbb R^m\to \mathbb R^n$ that satisfies $\tag2 f(g(t_0)+u)=f(g(t_0))+Df(g(t_0))u+r(u)$ where $\frac{r(u)}{|u|}\to 0$ as $|u|\to 0$. Putting this together, with $u=tv$, we get $\tag3 \frac{f(g(t_0)+tv)-f(g(t_0))}{t}=Df(g(t_0))v+\frac{r(tv)}{t}=Df(g(t_0))v+\frac{r(tv)}{t|v|}|v|$ Letting $t\to 0$, we get $\tag4 D_{v} f\left(p+t_{0} v\right)=\underset{t\to 0}\lim \frac{f(g(t_0)+tv)-f(g(t_0))}{t}=Df(g(t_0))v=DF(t_0)$
Transform $\iint_D \sin(x^2+y^2)~dA$ to polar coordinates and evaluate the polar integral.
No, it is not correct. The polar representation you give yields a quarter circle of radius $\sqrt{3\pi}$ in the first quadrant, which is obviously not the region described by the question. The first thing I always do to solve these types of problems is to sketch the region of integration. In this case, it turns out to be extremely helpful: the region $D$ you are concerned about is the one highlighted in yellow. It is easy to see that $x^2+y^2=2\pi$ and $x^2+y^2=3\pi$ are just circles of radius $\sqrt{2\pi}$ and $\sqrt{3\pi}$ respectively. Similarly, it can be seen that $y=\sqrt{3}\cdot x$ and $x=0$ correspond to $\theta=60^{\circ}=\pi/3$ (since $\arctan(\sqrt{3})=\pi/3$) and $\theta=90^{\circ}=\pi/2$ respectively. Hence, the region can be described in polar coordinates as: $$D=\{(r,\theta)\in \mathbb{R}^2\mid \sqrt{2\pi}\leq r\leq \sqrt{3\pi},\ \pi/3\leq \theta\leq \pi/2\}$$ Hence, converting the double integral to polar coordinates gives (don't forget the Jacobian): $$\iint_D \sin(x^2+y^2)~dA=\int_{\pi/3}^{\pi/2} \int_{\sqrt{2\pi}}^{\sqrt{3\pi}} \sin(r^2)\cdot r~dr~d\theta$$ This should be easy to evaluate.
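If you want a quick sanity check of that last step (which the answer leaves to the reader), SymPy confirms the value; this is just an illustrative sketch:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
inner = sp.integrate(sp.sin(r**2) * r, (r, sp.sqrt(2*sp.pi), sp.sqrt(3*sp.pi)))
print(sp.simplify(sp.integrate(inner, (theta, sp.pi/3, sp.pi/2))))   # pi/6
```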
Homomorphism, Kernel and Image
As the matrix notation of the permutation is correct, to find the kernel and image of the group homomorphism $f$, note that $1$ is a generator of $\Bbb{Z}$, so $f(1) = (123)(456)$ generates $\operatorname{Im} f$, which is cyclic of order $3$ because $(123)(456)$, being a product of two disjoint $3$-cycles, has order $3$. It follows that the kernel is $3\Bbb{Z}$.
Is contraction of radical ideals and/or prime ideals surjective in this case?
Yes, this is true. For prime ideals, this is a special case of the "lying over theorem" for integral extensions. If $B$ is any commutative ring with a subring $A$ such that $B$ is integral over $A$, then every prime ideal of $A$ is the contraction of some prime ideal of $B$. In this case, $B=\overline{K}[x_1,\ldots,x_n]$ is integral over $A=K[x_1,\ldots,x_n]$ since it is generated by $\overline{K}$ as an $A$-algebra and every element of $\overline{K}$ is integral over $K$. As a sketch of a proof, let $P\subset A$ be a prime ideal. Localizing at $P$, we may assume $P$ is the unique maximal ideal of $A$. It then suffices to show that $PB$ is a proper ideal of $B$, so it is contained in some maximal ideal of $B$ (which can then only lie over $P$ since $P$ is maximal). If $PB=B$, then this equation remains true when we replace $B$ with some finitely generated $A$-subalgebra $B_0\subseteq B$. But then $B_0$ is a finitely generated $A$-module since $B$ is integral over $A$, so $PB_0=B_0$ implies $B_0=0$ by Nakayama. This is a contradiction, and hence $PB$ must be a proper ideal of $B$, as desired. Once we have the result for prime ideals, it immediately follows for radical ideals, since radical ideals are just intersections of prime ideals. Note that the statement $(\sqrt{I})^c = \sqrt{I^c}$ is comparatively trivial and holds for any extension of commutative rings $A\subseteq B$. Indeed, if $f\in (\sqrt{I})^c$, then $f\in \sqrt{I}$ and so $f^d\in I$ for some $d$. But then $f^d\in A$ as well and so $f^d\in I^c$ and $f\in \sqrt{I^c}$. Conversely, if $f\in\sqrt{I^c}$, then $f^d\in I^c$ for some $d$ and so $f^d\in I$ and hence $f\in (\sqrt{I})^c$.
Verify the limit using the formal definition
The problem you are facing arises because you are reluctant to simplify. Use intermediate inequalities first to reduce the complexity, then bound the result. Since $x\to\infty$, we can assume $x>5$, and then $(x+5)<2x$. So we get $0<\dfrac{5+x}{x^2}<\dfrac 2x<\varepsilon\quad$ for $x>\delta$, where $\delta=\max(5,\frac 2\varepsilon)$.
Are homotopy types of finite CW complexes countable?
Every finite CW complex has the homotopy type of a finite simplicial complex, and there are only countably many finite simplicial complexes up to isomorphism.
Write the logical statement of this circuit - am I correct?
Your initial "answer" is correct, but can be simplified: Recall the distributive law (DL): $(a\lor b) \land c \equiv (a \land c) \lor (b \land c)$. And recall the law of the excluded middle (LEM): $p\lor \lnot p\equiv T$ where $T \equiv \;\text{true}\;\equiv 1$. $$\begin{align}(p\land q)\lor (\lnot p \land q) & \equiv (p\lor \lnot p) \land q\tag{DL} \\ \\ & \equiv T \land q \tag{LEM}\\ \\ & \equiv q\end{align}$$
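As a quick cross-check, SymPy's boolean simplifier reaches the same conclusion (just an illustrative sketch, not part of the original answer):

```python
from sympy import symbols
from sympy.logic.boolalg import simplify_logic

p, q = symbols('p q')
print(simplify_logic((p & q) | (~p & q)))   # q
```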
$|G|=p^2$ then $G \cong \mathbb Z_{p^2}$ or $G \cong \mathbb Z_{p} \times \mathbb Z_{p}$
It is not clear in your proof why $G$ is the semi-direct product of $\langle x \rangle$ and $\langle y \rangle$ and why it turns out this is actually a direct product. You must check a final condition: that $\langle x \rangle$ is normal in $G$ for the first, and that both $\langle x \rangle$ and $\langle y \rangle$ are normal in $G$ for the second. Do you see why this is the case?
Ordinary generating function of $n\cdot 2^{n-1}$ demonstration
Hint: What is the ordinary generating function of $2^n$? What happens when you take a derivative?
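One way to carry the hint through (a sketch of the computation the hint is pointing at): since $\sum_{n\ge 0}2^nx^n=\frac1{1-2x}$, differentiating term by term gives $$\sum_{n\ge1}n\,2^n x^{n-1}=\frac{d}{dx}\,\frac{1}{1-2x}=\frac{2}{(1-2x)^2},$$ so dividing by $2$ and multiplying by $x$ yields $\displaystyle\sum_{n\ge1}n\,2^{n-1}x^{n}=\frac{x}{(1-2x)^2}$.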
Book on the Rigorous Foundations of Mathematics- Logic and Set Theory
Gosh. I wonder if those recommending Bourbaki have actually ploughed through the volume on set theory, for example. For a sceptical assessment, see the distinguished set theorist Adrian Mathias's very incisive talk https://www.dpmms.cam.ac.uk/~ardm/bourbaki.pdf Bourbaki really isn't a good source on logical foundations. Indeed, elsewhere, Mathias quotes from an interview with Pierre Cartier (an associate of the Bourbaki group) which reports him as admitting 'Bourbaki never seriously considered logic. Dieudonné himself was very vocal against logic' -- Dieudonné being very much the main scribe for Bourbaki. And Leo Corry and others have pointed out that Bourbaki in their later volumes don't use the (in fact too weak) system they so laboriously set out in their Volume I. Amusingly, Mathias has computed that (in the later editions of Bourbaki) the term in the official primitive notation defining the number 1 will have 2409875496393137472149767527877436912979508338752092897 symbols. It is indeed a nice question what possible cognitive gains in "security of foundations" in e.g. our belief that 1 + 1 = 2 can be gained by defining numbers in such a system!
Limit of $\sum_{i=1}^n \left(\frac{{n \choose i}}{2^{in}}\sum_{j=0}^i {i \choose j}^{n+1}\right)$
For $p\ge2$, Hölder says $$ \|f\|_p^p\le\|f\|_2^2\,\|f\|_\infty^{p-2}\tag{1} $$ and we know that $$ \sum_{j=0}^i\binom{i}{j}^2=\binom{2i}{i}\tag{2} $$ and $$ \binom{i}{j}\le\binom{i}{i/2}\tag{3} $$ For odd $i$, use $\binom{i}{i/2}=\binom{i}{(i\pm1)/2}=\frac12\binom{i+1}{(i+1)/2}$. Applying $(1)$ to $(2)$ and $(3)$ and using $\binom{2n}{n}\le\frac{4^n}{\sqrt{\pi n}}$ yields $$ \begin{align} \sum_{j=0}^i\binom{i}{j}^{n+1} &\le\binom{2i}{i}\binom{i}{i/2}^{n-1}\\ &\le\frac{4^i}{\sqrt{\pi i}}\left(\frac{2^i}{\sqrt{\pi i/2}}\right)^{n-1}\\ &=\frac{2^{in+n/2+i-1/2}}{(\pi i)^{n/2}}\tag{4} \end{align} $$ Apply $(4)$ and note that for $i\ge6$, we have $\sqrt{\frac{2}{\pi i}}\le\frac1{\sqrt{3\pi}}\lt\frac13$ $$ \begin{align} S_{n} &=\sum_{i=1}^n\left(\frac{\binom{n}{i}}{2^{in}}\sum_{j=0}^i\binom{i}{j}^{n+1}\right)\\ &\le\frac1{\sqrt2}\sum_{i=1}^n\left(\frac2{\pi i}\right)^{n/2}2^i\binom{n}{i}\\ &\le\frac1{\sqrt2}\sum_{i=1}^5\left(\frac2{\pi i}\right)^{n/2}2^i\binom{n}{i} +\frac1{\sqrt2}\sum_{i=6}^n\left(\frac1{\sqrt{3\pi}}\right)^n2^i\binom{n}{i}\\ &\le\frac1{\sqrt2}\sum_{i=1}^5\left(\frac2{\pi i}\right)^{n/2}2^i\binom{n}{i} +\frac1{\sqrt2}\left(\frac{3}{\sqrt{3\pi}}\right)^n\tag{5} \end{align} $$ Each of the $6$ terms in $(5)$ decays exponentially to $0$.
Let $f \in \mathbb{Z}[t]$. What is the density of $\{n \mid f(n) \text{ is prime}\}$?
Yes this is true. Without loss of generality assume that $f$ is irreducible. Let $w(n)$ denote the number of prime factors of $n$. Given $\varepsilon > 0$, the following stronger statement holds: $$ \#\{n \leq x: w(f(n)) < (1 - \varepsilon)\log\log x \} = o(x). $$ That is, most $f(n)$ with $n \leq x$ have more than $\log\log x$ prime factors! This result (in fact a stronger result) was originally established by Halberstam in the 50's. A recent account of methods leading to this theorem is available in Granville's and Soundararajan's survey, http://www.dms.umontreal.ca/%7Eandrew/PDF/ErdosKac.pdf .
Congruent parts of triangles
Clearly, $\angle ABC>\angle ACB$. Let $D$ be a point on $AC$ such that $\angle ABD=\angle ADB$; then $\triangle ABD$ is isosceles, so $AB=AD$. Also let $E$ be the midpoint of $BD$. In $\triangle ABE$ and $\triangle ADE$ we have $(1)\ BE=DE$, $(2)\ AE$ is the common side and $(3)\ AB=AD$, so $\triangle ABE$ and $\triangle ADE$ are congruent (by the SSS criterion). Hence $\angle AEB=\angle AED$, and since these two angles sum to $\pi$, each equals $\frac\pi2$. By the Pythagorean theorem, $AC^2=AE^2+EC^2,\ AD^2=AE^2+ED^2\implies AC^2-AD^2=EC^2-ED^2>0$. So $AC>AD\implies AC>AB$.
Error Function limit
I don't know the constant, but when it comes to calculating the product, we might note that it can be written $$\prod_{n=1}^\infty\Bigl(1-\frac{2}{\sqrt{\pi}}\int_n^\infty e^{-x^2}\,dx\Bigr).$$ When $n$ grows large, partial integration yields $$\int_n^\infty e^{-x^2}\,dx=\frac{e^{-n^2}}{2n}+\frac12\int_n^\infty\frac{e^{-x^2}}{x^2}\,dx,$$ which gives you a reasonable handle on the integral on the left. Taking the log of the infinite product, you get a sum in which this will give you a somewhat decent approximation for the tails of the sum. There are many details to fill in, but off the top of my head this outlines how I would go about computing the product.
It is possible to prove that these two collections generate the same topology on $ \mathbb{X} $?
Suppose that $F\subseteq\Lambda$ is finite, $V_i\in\tau$ for each $i\in F$, and $x\in\bigcap_{i\in F}\varphi_i^{-1}[V_i]$. For each $i\in F$ let $y_i=\varphi_i(x)\in V_i$; there is a $B_i\in\mathcal{B}$ such that $y_i\in B_i\subseteq V_i$, so $x\in\bigcap_{i\in F}\varphi_i^{-1}[B_i]\subseteq\bigcap_{i\in F}\varphi_i^{-1}[V_i]$. It follows immediately that $\mathcal{A}_\tau$ and $\mathcal{A}_{\mathcal{B}}$ generate the same topology on $\Bbb X$.
Complex number and conjugate
Note that $|z|^{6}=|z^{6}|= |-16z^{2}|=|-16|\cdot |z^{2}| = 16|z|^{2}$. Hence, $|z|^{6} = 16|z|^{2}$ and so $|z|^{2}(|z|^{4}-16) = 0$. So, $|z|=0$ or $|z|=2$.
For what kind of $f$ is $(\mathbb{R}, |f(.)-f(.)|)$ complete?
It is a complete space iff the range of $f$ is closed in the standard topology. First of all note that a metric space is complete iff every Cauchy sequence admits a convergent subsequence (this is an easy exercise). Now if $x_n$ is a Cauchy sequence wrt. your new metric, the sequence $f(x_n)$ is bounded. But this means it contains a convergent subsequence and since the range of $f$ is closed, there is a $y$ so that $f(y)$ is the limit of our subsequence in the standard topology. But this means that $x_n$ converges to $y$ in the new topology. To see the other direction, assume the range is not closed and choose a sequence $f(x_n)$ in the range whose limit point does not lie in the range of $f$. Now $x_n$ is a Cauchy sequence in the new topology without a limit.
A positive integer is equal to the sum of digits of a multiple of itself.
I got it. I was trying to pick a multiple of $n$ and check whether its digit sum is $n$; what we should do instead is build a number with digit sum $n$ and arrange for it to be a multiple of $n$. In other words, which numbers have digit sum $n$? A particular kind of such numbers are the ones whose decimal expansion consists of $n$ ones and no other nonzero digits. We can build such a number that is a multiple of $n$. Write $n$ as $2^a5^bj$ with $(j,10)=1$. Then $10^{k\varphi(j)}$ is congruent to $1\bmod j$ by Euler's theorem, and if $k$ is large enough then $10^{k\varphi(j)}$ is a multiple of $2^a5^b$. So just take $n$ numbers of the aforementioned form (with distinct exponents, so the ones land in different decimal places), add them, and you get a number with digit sum $n$ that is congruent to $n\equiv 0\bmod j$ and is a multiple of $2^a5^b$, in other words a multiple of $n$ with digit sum $n$, as desired.
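A tiny brute-force sanity check of the statement (the helper names are made up for illustration and are not from the answer):

```python
def digit_sum(m):
    return sum(int(d) for d in str(m))

def multiple_with_digit_sum(n, limit=10**6):
    """Return some multiple of n whose digit sum is n (None if not found below the cutoff)."""
    for k in range(1, limit):
        if digit_sum(n * k) == n:
            return n * k
    return None

for n in range(1, 31):
    print(n, multiple_with_digit_sum(n))
```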
Completeness of $\langle \mathscr{C} [0, 1], \| \cdot \|_1 \rangle$
This space is not complete. Consider functions $$ f_n(t)=\begin{cases}0\qquad\qquad\qquad\qquad t\in\left(0,\frac{1}{2} -\frac{1}{2n}\right)\\\frac{1}{2}+n\left(t-\frac{1}{2}\right)\qquad t\in\left[\frac{1}{2}-\frac{1}{2n},\frac{1}{2}+\frac{1}{2n}\right]\\1\qquad\qquad\qquad\qquad t\in\left(\frac{1}{2}+\frac{1}{2n},1\right)\end{cases} $$ It is easy to check that $\{f_n\}_{n=1}^\infty\subset \mathscr{C}([0,1])$. Moreover this is a Cauchy sequence. Indeed $$ \lim\limits_{N\to\infty}\sup\limits_{m,n>N}\Vert f_n-f_m\Vert_1 =\lim\limits_{N\to\infty}\sup\limits_{m,n>N}2^{-2}|n^{-1}-m^{-1}| \leq\lim\limits_{N\to\infty}2^{-1}N^{-1}=0 $$ Assume that there exists $f\in \mathscr{C}([0,1])$ such that $\lim\limits_{n\to\infty}\Vert f_n-f\Vert_1=0$, and assume that there exists $t_0\in\left(\frac{1}{2},1\right)$ such that $f(t_0)\neq 1$. Since $f$ is continuous there exists a neighborhood $U_\delta(t_0)$ such that for all $t\in U_\delta(t_0)$ we have $|1-f(t)|\geq\frac{|1-f(t_0)|}{2}$. Now take $\delta'=\frac{1}{2}\min\left(\delta,\,t_0-\frac{1}{2},\,1-t_0\right)$, so that $U_{\delta'}(t_0)\subset \left(\frac{1}{2},1\right)$ and $t_0-\delta'>\frac{1}{2}$. Define $N=\frac{1}{2(t_0-\delta')-1}$; then for all $n>N$ we have $\frac{1}{2}+\frac{1}{2n}<t_0-\delta'$, hence $U_{\delta'}(t_0)\subset \left(\frac{1}{2}+\frac{1}{2n},1\right)$. Note that for all $n>N$ and $t\in U_{\delta'}(t_0)$ we have $f_n(t)=1$, so $$ \Vert f_n-f\Vert_1= \int\limits_{(0,1)}|f_n(t)-f(t)|dt\geq \int\limits_{(t_0-\delta',t_0+\delta')}|f_n(t)-f(t)|dt= $$ $$ \int\limits_{(t_0-\delta',t_0+\delta')}|1-f(t)|dt\geq \int\limits_{(t_0-\delta',t_0+\delta')}\frac{|1-f(t_0)|}{2}dt\geq \delta'|1-f(t_0)|>0 $$ and as a consequence $$ 0=\lim\limits_{n\to\infty}\Vert f_n-f\Vert_1\geq\delta'|1-f(t_0)|>0 $$ Contradiction, therefore $f(t_0)=1$ for all $t_0\in\left(\frac{1}{2},1\right)$. Similarly one can show that for all $t_0\in\left(0,\frac{1}{2}\right)$ we also have $f(t_0)=0$. Now note that $$ \lim\limits_{t\to 1/2-0}f(t)=0\qquad\qquad\lim\limits_{t\to 1/2+0}f(t)=1 $$ so $f$ is not continuous. Contradiction, hence $\mathscr{C}([0,1])$ with $\Vert\cdot\Vert_1$ is not complete.
Baire one functions, characteristic functions of intervals
For $[0,1]$, consider the polygonal interpolation of $(-n^{-1},0),(0,1),(1,1),(1+n^{-1},0)$ and $f_n=0$ if $x<-n^{-1}$, $x>1+n^{-1}$. Use a similar idea for open intervals, and a scaling argument for general intervals. For the second question, pointwise convergence is preserved by linear combinations, and so is continuity.
In a sequence of Bernoulli trials let X be the length of the run
Calling the possible trial outcomes Heads and Tails, the event $\{X=k\}$ occurs if and only if the sequence is either $k$ Heads followed by a Tail, or $k$ Tails followed by a Head, so $$P(X=k)=P(H^kT\cup T^kH)=P(H^k)P(T)+P(T^k)P(H)=p^kq+q^kp$$ where $p=P(H)$ and $q=1-p=P(T)$. Now $$\begin{align}E(X)&=\sum_{k=1}^\infty k P(X=k)\\ &=\sum_{k=1}^\infty k (p^kq+q^kp)\\ &=q\sum_{k=1}^\infty k p^{k} + p\sum_{k=1}^\infty k q^{k}\\ \end{align}$$ $$\begin{align}E(X^2)&=\sum_{k=1}^\infty k^2 P(X=k)\\ &=\sum_{k=1}^\infty k^2 (p^kq+q^kp)\\ &=q\sum_{k=1}^\infty k^2 p^{k} + p\sum_{k=1}^\infty k^2 q^{k}\\ \end{align}$$ so we have $E(X)$ and $V(X)=E(X^2)-(E(X))^2$ once the above summations are calculated. Letting $a$ stand for either $p$ or $q$, we have $$\begin{align}\sum_{k=1}^\infty k a^{k}&=a\sum_{k=1}^\infty k a^{k-1}\\ &=a\frac{d}{da}\sum_{k=1}^\infty a^{k}\\ &=a\frac{d}{da}((1-a)^{-1}-1)\\ &=a(1-a)^{-2}\\ \end{align}$$ $$\begin{align}\sum_{k=1}^\infty k^2 a^{k}&=a\sum_{k=1}^\infty k^2 a^{k-1}\\ &=a\frac{d}{da}\sum_{k=1}^\infty k\,a^{k}\\ &=a\frac{d}{da}(a(1-a)^{-2})\\ &=a\{2a(1-a)^{-3}+(1-a)^{-2}\}\\ &=a(1+a)(1-a)^{-3} \end{align}$$ so all four of the required summations are covered by the latter two results, giving the posted answers upon substitution.
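For a quick numerical sanity check: carrying the algebra above to the end gives $E(X)=q\,p(1-p)^{-2}+p\,q(1-q)^{-2}=\frac pq+\frac qp$, and a small simulation agrees (an illustrative sketch with a made-up parameter, not part of the original answer):

```python
import random

def first_run_length(p, rng):
    """Length of the initial run (of whichever outcome comes first) in a Bernoulli(p) sequence."""
    first = rng.random() < p
    length = 1
    while (rng.random() < p) == first:
        length += 1
    return length

p = 0.3                       # example value only
q = 1 - p
rng = random.Random(0)
trials = 200_000
mean = sum(first_run_length(p, rng) for _ in range(trials)) / trials
print(mean, p / q + q / p)    # the two values should be close
```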
Show that a prime divisor to $x^4-x^2+1$ has to satisfy $p=1 \pmod{12}$
Observe that $p$ cannot divide $x$, so $x$ is invertible modulo $p$. From $x^4-x^2+1\equiv 0\pmod p$ we get $(x^2-1)^2=x^4-2x^2+1\equiv -x^2\pmod p$, i.e. $$\left( \frac{x^2-1}{x} \right)^2 \equiv -1 \pmod{p} .$$ This shows that $-1$ is a quadratic residue modulo $p$, implying that $p\equiv 1\pmod{4}$.
Show that the closure of $A$ is the intersection of all closed sets containing $A$, topology proof needed
You’re getting bogged down in the details of the definitions and thereby making it much harder than it really is. For the first inclusion, start, as you did, with an arbitrary $x\in\operatorname{cl}A$. Let $C$ be any closed set such that $A\subseteq C$. Suppose that $x\notin C$: then $x\in X\setminus C$, and $X\setminus C$ is open, so $(X\setminus C)\cap A\ne\varnothing$. But on the other hand we know that $A\subseteq C$, so $A\cap(X\setminus C)=\varnothing$. This contradiction shows that $x\in C$, and since $C$ was an arbitrary closed set containing $A$, we conclude that $$\operatorname{cl}A\subseteq\bigcap\{C\subseteq X:A\subseteq C\text{ and }C\text{ is closed}\}\;.$$ For the opposite inclusion just observe that $\operatorname{cl}A$ is one of the closed sets containing $A$, so if $x\in\bigcap\{C\subseteq X:A\subseteq C\text{ and }C\text{ is closed}\}$, then automatically $x\in\operatorname{cl}A$. It follows that $$\bigcap\{C\subseteq X:A\subseteq C\text{ and }C\text{ is closed}\}\subseteq\operatorname{cl}A$$ and hence that $$\bigcap\{C\subseteq X:A\subseteq C\text{ and }C\text{ is closed}\}=\operatorname{cl}A\;.$$
$n \in \Bbb N$ and $p$ is prime and odd,
Using Fermat's Little Theorem, $a^{p-1}\equiv1\pmod p$ if $(a,p)=1$ $\implies a^n\equiv1\pmod p$ if $(p-1)\mid n$. So, $\sum_{1\le a\le p-1 }a^n\equiv \sum_{1\le a\le p-1 }1\equiv p-1\pmod p$ if $(p-1)\mid n$. If $(p-1)\not\mid n$: since the sets of reduced residue classes $\{1, 2,\cdots, p-1\}$ and $\{g, 2g,\cdots, (p-1)g\}$ are the same if $(g,p)=1$, then $\sum_{1\le a\le p-1 }a^n \equiv\sum_{1\le a\le p-1}(g\cdot a)^n\pmod p$. So, $p\mid (g^n-1)\sum_{1\le a\le p-1 }a^n$. If $g$ is set to be a primitive root of $p$, $\operatorname{ord}_p g=p-1\implies g^n\equiv 1 \pmod p\iff (p-1)\mid n$. As $(p-1)\not\mid n$, $g^n\not\equiv 1 \pmod p\implies p\not\mid (g^n-1)$ $\implies p\mid \sum_{1\le a\le p-1 }a^n\iff \sum_{1\le a\le p-1 }a^n\equiv0\pmod p$
Union of ascending ideals is an ideal
The nested chain is important for showing closure under addition. If we have $a\in I_m$ and $b \in I_n$, how can we guarantee $a+b$ belongs to $I$?
Would it be reasonable to define $\binom{n}{n+1} = 0$?
One of many equivalent definitions of Newton symbol is that $${n \choose k} = \frac{n\cdot(n-1)\cdot(n-2)\cdot\ldots\cdot(n-k+1)}{k!}.$$ Therefore $${n \choose n+1} = \frac{n\cdot(n-1)\cdot(n-2)\cdot\ldots\cdot1\cdot 0 }{(n+1)!} = 0.$$
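A quick check (illustrative only): Python's `math.comb` already follows this convention, and the falling-factorial formula above gives the same value because the factor $0$ appears in the numerator.

```python
from math import comb, factorial

def binom_falling(n, k):
    """n*(n-1)*...*(n-k+1) / k!, the definition quoted above."""
    num = 1
    for i in range(k):
        num *= n - i
    return num // factorial(k)

for n in range(1, 6):
    print(n, comb(n, n + 1), binom_falling(n, n + 1))   # both columns are 0
```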
Uniform convergence of the series $\sum_{n=1}^{\infty} \frac{1}{((x+\pi)n)^2}$ for $x \in (-\pi,\pi)$?
Hint. Note that for $x_n=-\pi+1/n$ we have that $$\frac{1}{((x_n+\pi)n)^2}=1.$$ On the other hand, when $x\in [a,\pi)$ for $a\in (-\pi,\pi)$, $$0<\frac{1}{((x+\pi)n)^2}\leq \frac{1/(a+\pi)^2}{n^2}.$$
Partial Sum of Sequence
This is $\sum_{t=1}^n t^2x^t$ where $x=1/(1+r)$. Now $$\sum_{t=1}^n t^2x^t=x\frac d{dx}\sum_{t=1}^n tx^t$$ and $$\sum_{t=1}^n tx^t=x\frac d{dx}\sum_{t=1}^nx^t$$ etc.
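To see the hint in action, here is a small SymPy sketch (with a concrete $n$ chosen only for illustration) applying $x\frac{d}{dx}$ twice to the geometric sum:

```python
import sympy as sp

x = sp.symbols('x')
n = 6                                           # example value only
geom = sum(x**t for t in range(1, n + 1))       # sum_{t=1}^n x^t
closed = sp.expand(x * sp.diff(x * sp.diff(geom, x), x))
direct = sum(t**2 * x**t for t in range(1, n + 1))
print(sp.simplify(closed - direct))             # 0
```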
Is this string ascending or descending?
\begin{align*} a_n &= \sqrt{n+5}-\sqrt{n} = \left(\sqrt{n+5}-\sqrt{n}\right)\frac{\sqrt{n+5}+\sqrt{n}}{\sqrt{n+5}+\sqrt{n}} \\ &= \frac{\left(\sqrt{n+5}-\sqrt{n}\right)\left(\sqrt{n+5}+\sqrt{n}\right)}{\sqrt{n+5}+\sqrt{n}} \\ &= \frac{n+5 - n}{\sqrt{n+5}+\sqrt{n}} \\ &= \frac{5}{\sqrt{n+5}+\sqrt{n}} \\ \end{align*} This is clearly a decreasing sequence. As $n$ increases, the denominator increases so the sequence as a whole decreases. If you insist on using $\frac{a_{n+1}}{a_n}$, use the same approach above to show that: \begin{align*} \frac{a_{n+1}}{a_n} &= \frac{\sqrt{n+6}-\sqrt{n+1}}{\sqrt{n+5}-\sqrt{n}} \\ &= \frac{\sqrt{n+5}+\sqrt{n}}{\sqrt{n+6}+\sqrt{n+1}} \end{align*} Now, notice that: $$ n+1 \gt n \Rightarrow \sqrt{n+1} \gt \sqrt{n} $$ And: $$ n+6 \gt n+5 \Rightarrow \sqrt{n+6} \gt \sqrt{n+5} $$ Add the inequalities side by side to get: $$ \sqrt{n+6} + \sqrt{n+1} \gt \sqrt{n+5} + \sqrt{n} $$ Divide both sides by $\sqrt{n+6} + \sqrt{n+1}$ to get: $$ \frac{a_{n+1}}{a_n} = \frac{\sqrt{n+5}+\sqrt{n}}{\sqrt{n+6}+\sqrt{n+1}} < 1 $$
How can I justify this existential quantifier transformation in predicate logic?
You should have access to a rule that when $z$ does not occur free in $P$ then $$(\exists z~Q(z))\wedge P \iff \exists z~(Q(z)\wedge P)$$ Do you have any trouble seeing why that is justified? This also works in restricted domains: $(\exists z{\in}Z~Q(z))\wedge P\iff \exists z{\in}Z~(P\wedge Q(z))$ So applying this and the commutativity of conjunction, we have $$\begin{align} & \exists y{\in}Y~(y\in B\wedge (\exists x{\in}X~x\in A)) \\ \iff & (\exists y{\in}Y~y\in B)~\wedge~(\exists x{\in}X~x\in A) \\ \iff & (\exists x{\in}X~ x\in A)~\wedge~(\exists y{\in}Y~y\in B) \\ \iff & \exists x{\in} X~(x\in A\wedge (\exists y{\in}Y~y\in B)) \\[2ex] \iff & \exists x{\in}X~\exists y{\in} Y~(x\in A\wedge y\in B) \end{align}$$
for which positive integer $m$ does $(ab)^{2015} = (a^2 + b^2)^m$ have positive integer solutions
Let $p$ be some prime dividing $b$. Then mod $p$, the equation gives that $$0\equiv a^{2m}$$ telling us that $p$ divides $a$. This implies that every prime which divides $b$ divides $a$ and vice versa. So we have $$\begin{align}a&=\prod_{i=1}^np_i^{\alpha_i} & b&=\prod_{i=1}^np_i^{\beta_i}\end{align}$$ with all the $\alpha_i$ and $\beta_i$ at least $1$. Now write the original equation as $$\left(\prod_{i=1}^np_i^{\alpha_i+\beta_i}\right)^{2015}=\left(\prod_{i=1}^np_i^{2\alpha_i}+\prod_{i=1}^np_i^{2\beta_i} \right)^m$$ $$\prod_{i=1}^np_i^{2015\alpha_i+2015\beta_i}=\prod_{i=1}^np_i^{2m\mu_i}\left(\prod_{i=1}^np_i^{2\alpha_i-2\mu_i}+\prod_{i=1}^np_i^{2\beta_i-2\mu_i} \right)^m$$ where $\mu_i=\min\{\alpha_i,\beta_i\}$. Examine the factor in parentheses with respect to one prime $p_i$. We can't have both summands in that factor divisible by $p_i$ or else a higher power of $p_i$ would have been distributed out. We can't have exactly one of the two summands in that factor still be divisible by $p_i$, because then wlog $\alpha_i>\beta_i$. Yet examining powers of $p_i$ dividing the two sides of the equation would give that $2015\alpha_i+2015\beta_i=2m\beta_i$. This would give that $$4030\beta_i<2015\alpha_i+2015\beta_i=2m\beta_i\implies m>2015$$ As pointed out in the comments to the OP, no such $m$ allow for solutions. Therefore neither summand in that factor is divisible by $p_i$, each power of $p_i$ divides $a$ and $b$ to the same extent, each $\alpha_i=\beta_i$, and so $a=b$. And the quantity in parentheses is $(1+1)$. So the equation becomes $$\prod_{i=1}^np_i^{4030\alpha_i}=2^m\prod_{i=1}^np_i^{2m\alpha_i}\tag{$\star$}$$ Consider the powers of $2$ in this equation. (Evidently, since $m\geq1$, $2$ is indeed a divisor of $a$, so we can agree that $2=p_1$ and use the exponent $\alpha_1$.) $$2^{4030\alpha_1}=2^{m+2m\alpha_1}=2^{m(1+2\alpha_1)}$$ So $$4030\alpha_1=m(1+2\alpha_1)$$ and since $(1+2\alpha_1)$ is coprime to $\alpha_1$, we must have that $1+2\alpha_1$ divides $4030=2\cdot5\cdot13\cdot31$. Note that $1+2\alpha_1$ is odd though. So there are $8$ numbers then that $1+2\alpha_2$ can be: $1,5,13,31,65,155,403,2015$. These lead correspondingly to $\alpha_1=0,2,6,15,32,77,201,1007$. And to $m=\frac{4030\alpha_1}{1+2\alpha_1}$ being one of $$0,1612,1860,1950,1984,2002,2010,2014.$$ Although $m=0$ is not permitted. So there are only seven possibilities for $m$, as listed. And for each of these values of $m$, we can demonstrate that $a=b=2^{m/(4030-2m)}$ is at least one integer solution set for $a$ and $b$: $$\begin{align}(ab)^{2015} &=\left(2^{m/(4030-2m)}2^{m/(4030-2m)}\right)^{2015}\\ &=2^{2015m/(2015-m)} \end{align}$$ while $$ \begin{align} (a^2+b^2)^m &=\left(2^{m/(2015-m)}+2^{m/(2015-m)}\right)^m\\ &=\left(2\cdot2^{m/(2015-m)}\right)^m\\ &=\left(2^{m/(2015-m)+1}\right)^m\\ &=\left(2^{2015/(2015-m)}\right)^m\\ &=2^{2015m/(2015-m)} \end{align}$$ In fact for these $m$, there can be no more integer solutions $(a,b)$ other than the simple powers of $2$, because equation ($\star$) reveals that if some $p\neq2$ divides $a$, then $4030\alpha_i=2m\alpha_1$, and so $m$ would have to divide $2015$. But none of the seven values for $m$ do this. So $m$ must be $1612$ (with the only solution for $a$, $b$ being $a=b=2^2$), $1860$ ($a=b=2^6$), $1950$ ($a=b=2^{15}$), $1984$ ($a=b=2^{32}$), $2002$ ($a=b=2^{77}$), $2010$ ($a=b=2^{201}$), or $2014$ ($a=b=2^{1007}$).
Noetherian ring question.
HINT: Use Correspondence Theorem for Ideals and the definition of Noetherian ring.
Does there exist any positive integer $n$ such that $e^n$ is an integer (to show $\log 2$ is irrational)?
No: $e$ is transcendental. If there were positive integers $n$ and $m$ such that $e^n=m$, then $e$ would be algebraic over $\mathbb{Q}$, being a root of the polynomial $X^n-m$. Contradiction.
L'Hospital's Rule for $u^{x}e^{-u}$
Assuming $x$ is a positive integer: apply L'Hôpital's rule $x$ times, differentiating numerator and denominator each time, $$\lim_{u \to \infty} \frac{u^x}{e^{u}} = \lim_{u \to \infty} \frac{\dfrac{d^x}{du^x}\,u^x}{\dfrac{d^x}{du^x}\,e^{u}} = \lim_{u \to \infty} \frac{x!}{e^{u}},$$ since taking the derivative of $u^x$ a total of $x$ times leaves the constant $x!$, while the denominator remains $e^u$. The denominator grows without bound and the numerator is a fixed finite value, so the limit is zero.
Using the Snowflake Method to Factor Trinomials
You need two numbers, say $p$ and $q$, with $p+q=37$ and $p\times q=252$; these are $p=28$ and $q=9$, not $36$ and $1$. With those numbers the method works.
Continuity and Subsequences
Notice that the question is essentially asking you to show that continuous functions are Riemann-integrable. If you have the theorem that states that continuous functions defined on compact sets are uniformly continuous, then it suffices to choose $$ a = x_0 < x_1 < \dots < x_n = b $$ such that $$ |x_{i+1} - x_i| < \delta, \quad i = 0, 1, \dots, n-1, $$ where $\delta$ is as in $$ \forall \varepsilon > 0,\quad \exists \delta > 0,\quad \forall x,y \in [a,b], \quad |x-y| < \delta \quad \Rightarrow \quad |f(x) - f(y)| < \varepsilon, $$ i.e. the definition of uniform continuity. I believe this is a sufficiently good hint; if you want more details you can ask me again. Hope that helps.
Finding roots of polynomial using roots of unity
We have: $$z^6+z^4+z^3+z^2+1$$ $$= z^6 + z^5 + z^4 + z^3 + z^2 - z^5 - z^4 - z^3-z^2-z + z^4 + z^3 + z^2+z+1 $$ $$= (z^2-z+1)(z^4+z^3+ z^2+z+1)=0$$ Surely you can take it from here.
Prove that all diagonal entries of a negative definite matrix are negative
If you choose the vector $x=(0,0,\dots,1,\dots)$ with the $1$ in the $k$-th position, then $$ x^TAx=a_{kk} $$ Since $x\neq 0$ you should have $a_{kk}<0$.
Statements number system, negating of a sentence.
"If $9>10$ then $3^2=5$" is the same as "$9\le 10$ or $3^2=5$". If you say "$9$ is not greater than $10$", it means $9$ is either less than $10$ or it is equal to $10$, therefore $\sim(9>10)$ is the same as $9\not >10$ which is the same as $9\le 10$.
Are there some fast algorithm for convolution of sequences of matrices?
These papers can help: "Fast Multiplication of Polynomials over Arbitrary Rings" and "On fast multiplication of polynomials over arbitrary algebras".
Line integral: Circulation to the square
You can also apply Stokes' theorem $$\oint\hat{v}\cdot d\hat{r}=\int\!\!\!\int_S(\nabla\times\hat{v})\cdot dS,$$ where $$(\nabla\times\hat{v})\cdot dS=\left(\frac{\partial v_y}{\partial x}-\frac{\partial v_x}{\partial y}\right)dx\,dy=-2\cos x\cos y\,dx\,dy.$$ Then the circulation is $$\oint\hat{v}\cdot d\hat{r}=\int_{-\pi/2}^{\pi/2}\int_{-\pi/2}^{\pi/2}(-2\cos x\cos y)\,dx\,dy=-2\left(\int_{-\pi/2}^{\pi/2}\cos x\,dx\right)^2=-8.$$
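A quick SymPy cross-check of the double integral (an illustrative sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
print(sp.integrate(-2 * sp.cos(x) * sp.cos(y),
                   (x, -sp.pi/2, sp.pi/2), (y, -sp.pi/2, sp.pi/2)))   # -8
```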
Complex conjugation for absolute value calculation
Assume that $z = e^{\frac{2\pi i}{3}}a^{*}b$; then we have $z^{*} = e^{-\frac{2 \pi i}{3}}ab^{*}$. You can easily check that $z + z^* = 2\operatorname{Re}(z)$. Then substitute this into your equation.
Triangular numbers for numbers.
Gauss proved that every nonnegative integer is a sum of three triangular numbers. I don't think there's a good formula for the summands.
if $dU=TdS-PdV$ then $U=U(S,V)$ - rigorous proof?
OK, I think I finally got it. An important hypothesis not written is that $S, V$ are mutually independent. Let us consider $$dU=T\,dS-P\,dV$$ From this, 6 cases are possible:

1. $U=U(S,V,\{X_i\})$, where $\{X_i\}=\{X_1,X_2,..,X_n\}$ is a subset of all additional independent variables different from $S,V,U$ (note: if one of these additional variables had some dependence on $S$ and/or $V$ it should not be included among the dependencies of $U$; if instead $S$ and/or $V$ had some dependence on one or more $X_i$, then $S$ and/or $V$ would be totally determined by a particular set of $X_i$s, and then $S$ and/or $V$ should not be included among the dependencies of $U$, but Case 2 already possibly deals with this situation)
2. $U=U(\{X_i\})$
3. $U=U(S,V)$
4. $U=U(S)$
5. $U=U(V)$
6. $U$ has no dependencies

Case 1 - $U=U(S,V,\{X_i\})$. Let's calculate $$\frac{\partial U}{\partial X_i}=\frac{dU}{dX_i}\Bigg|_{S,V,\{X_{j}\}-X_i}=\bigg(T\,\frac{dS}{dX_i}-P\,\frac{dV}{dX_i}\bigg)\Bigg|_{S,V,\{X_{j}\}-X_i}=0$$ Thus we conclude that if $U=U(S,V,\{X_i\})$, $U$ cannot be a function of any additional variable $X_i$; hence Case 1 reduces to one of the remaining cases.

Case 2 - $U=U(\{X_i\})$. Let's calculate $$\frac{\partial U}{\partial X_i}=\frac{dU}{dX_i}\Bigg|_{\{X_{j}\}-X_i}=T\,\frac{dS}{dX_i}\Bigg|_{\{X_{j}\}-X_i}-P\,\frac{dV}{dX_i}\Bigg|_{\{X_{j}\}-X_i}$$ Now, if $S,V$ do not depend on any $X_i$, then $dS$ and $dV$ are just arbitrary increments, so we can choose them to be null, making the expression above zero. In this eventuality, we conclude that if $U=U(\{X_i\})$, $U$ cannot be a function of any variable $X_i$, so this eventuality reduces to Case 6. If instead $S$ is determined by a certain set $\{X_i\}'\subset\{X_i\}$, we cannot make the expression above zero, but certainly, since $$\frac{d U}{d S}\bigg|_{\{X_i\}-\{X_i\}'}=T\neq 0$$ and since $S$ is totally determined by $\{X_i\}'$, we could equivalently consider Case 1, Case 3 and Case 4 instead. The same goes for the situation in which $V$ is determined by $\{X_i\}''\subset\{X_i\}$: we could equivalently consider Case 1, Case 3 and Case 5. In conclusion, considering what was already concluded for Case 1, Case 2 reduces to one of the remaining cases.

Case 4 - $U=U(S)$. If $U$ is solely a function of $S$, then for any variable $A\neq S$ we should have $\frac{dU}{dA}\Big|_S=0$. But for $A=V$ $$\frac{dU}{dV}\Bigg|_S=-P\neq 0$$ Thus, we conclude that Case 4 is NOT possible.

Case 5 - $U=U(V)$. If $U$ is solely a function of $V$, then for any variable $A\neq V$ we should have $\frac{dU}{dA}\Big|_V=0$. But for $A=S$ $$\frac{dU}{dS}\Bigg|_V=T\neq 0$$ Thus, we conclude that Case 5 is NOT possible.

Case 6 - $U$ has no dependencies. If $U$ has no dependencies, then we might choose $dU$ arbitrarily; in particular we could choose it such that for any variable $A$ we have $\frac{dU}{dA}=0$. But for $A=V$ $$\frac{dU}{dV}=T\,\frac{dS}{dV}-P\neq 0$$ having selected $dS=0$, which is allowed since $dS$ is arbitrary, not being dependent on $V$. Thus, we conclude that Case 6 is NOT possible.

Case 3 - $U=U(S,V)$. This is the only case left, so we can finally conclude that $$U=U(S,V)$$
A Generalization of Carmichael Numbers
Suppose that your condition holds, and let $a$ be any solution to $T^{\frac{p-1}{2}} \equiv -1 \pmod{p}$. Then $x\mapsto ax$ gives a bijection between the solutions to $T^{\frac{p-1}{2}} \equiv -1 \pmod{p}$ and the solutions to $T^{\frac{p-1}{2}} \equiv 1 \pmod{p}$. But then there are at least $\frac{p-1}{2} + \frac{p-1}{2} = p-1$ invertible elements in $\mathbb{Z}/p$, in other words $p$ is prime.
What is the distribution of this sampled sequence of random variables?
You take the product of measures when you have independence, so $(1)$ would be the conditional law of $(Z_n)_{n\in \mathbb{N}}$ where you draw independently, for each $Z_i$, whether or not it follows the law $\mu$ with probability $p$. The conditional distribution of your sequence $(Y_n)_{n\in \mathbb{N}}$ would be $(2)$, assuming the $(X_n)_{n\in \mathbb{N}}$ are independent.
Simplify the difference quotient $\frac{f(x+h)-f(x)}{h}$.
$f(x+h)$ means replace $x$ with $x+h$ in your function definition. If $f(x) = 2x+3$, then $f(x+h) = 2(x+h)+3$, not $2x+3+f(h)$. Therefore, $$\frac{f(x+h)-f(x)}{h} = \frac{2(x+h)+3-\left(2x+3\right)}{h} = \frac{2x+2h+3-2x-3}{h} = \cdots$$ Edit: To answer your second question, how do you handle just $\frac{1}{x+h+1}-\frac{1}{x+1}$? One simple way: multiply the first term above and below by the second term's denominator, and vice-versa: $$\frac{1}{x+h+1}\frac{x+1}{x+1}-\frac{1}{x+1}\frac{x+h+1}{x+h+1} = \frac{(x+1)-(x+h+1)}{(x+h+1)(x+1)}.$$
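A quick SymPy check of the two simplifications above (illustrative only):

```python
import sympy as sp

x, h = sp.symbols('x h')
print(sp.simplify((2*(x + h) + 3 - (2*x + 3)) / h))     # 2
print(sp.factor(1/(x + h + 1) - 1/(x + 1)))             # -h/((x + 1)*(h + x + 1))
```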
How do you parameterize a sphere so that there are "6 faces"?
Parametrizing squares with circular coordinates is doomed to bring pain! Parametrizing a square with a square is much easier: up = NestList[RotateLeft, {s, t, 1}/Sqrt[s^2 + t^2 + 1], 2]; ParametricPlot3D[ Join[up, -up], {s, -0.9, 0.9}, {t, -0.9, 0.9}, PlotStyle -> {Red, Blue, Yellow, Green, Magenta, Cyan}, Boxed -> False, Axes -> False ] produces a picture of the sphere covered by the six patches. Notice I draw from -0.9 to 0.9 in order to emphasize the patches, but you want to do it from -1 to 1. (If you want the parametrizations to have their obvious normals going out of the sphere, then you need to be more gentle when constructing the last 3 patches: I just changed the sign of the other three, but that gets them oriented the wrong way.)
Any algorithm in literature to solve my problem? Should call it "Segment ordering"?
I don't know if this is a known problem, but it can be solved in linear time. View the problem as a directed graph $G$, with vertices $V(G) = \{x_1, \dots, x_n\}$ and an edge $uv$ whenever "$u$ precedes $v$" is a constraint. Finding a feasible solution to your problem is simply a topological sort of this graph. To find an optimal solution, you want to guide your topological sort. For now, assume $x_i$ must precede $x_j$ (add the edge $x_ix_j$ if necessary). Begin with a "$x_i$-phobic topological sort", where we refuse to add $x_i$ to the list (i.e., never allow $x_i \in S$ in Kahn's algorithm from the wiki page), obtaining a list $L_1$. Now, perform a $x_j$-phobic topological sort, but follow edges backwards, obtaining a list $L_2$. Reverse $L_2$ to make it topologically sorted; also remove any elements in $L_1$ from it. We now simply need to add the remaining vertices between $L_1$ and $L_2$; this can be done with a topological sort rooted at $x_i$ which ignores all nodes in $L_1$ and $L_2$, obtaining a new list $L_3$. We finally construct $L = (L_1,L_3,L_2)$ as our answer. The above certainly produces a topological sort (i.e., a feasible solution). I claim it is also optimal, and in fact, any element $x_k$ between $x_i$ and $x_j$ in $L$ must be between them for any feasible solution. Since $x_k$ is not in $L_1$, one can show that there is a path from $x_i$ to $x_k$; thus, $x_i$ must precede $x_k$. Similarly, since $x_k$ also wasn't found by $L_2$, there is a path from $x_k$ to $x_j$ (i.e., a path from $x_j$ to $x_k$ when the edges were reversed), so $x_k$ must precede $x_j$. Thus, our algorithm is optimal. Since topological sorting is linear in $|V(G)| + |E(G)|$, this is a linear-time algorithm. Note that we have to run this process once for forcing $x_ix_j$, and once for $x_jx_i$. Also note that the segment lengths didn't matter.
How to prove that every power of 6 ends in 6?
Use induction in order to complete the hint given by @ThePortakal. First, show that this is true for $n=1$: $6^1=6$ Second, assume that this is true for $n$: $6^n=10k+6$ Third, prove that this is true for $n+1$: $6^{n+1}=$ $6\cdot\color{red}{6^n}=$ $6\cdot(\color{red}{10k+6})=$ $60k+36=$ $60k+30+6=$ $10(6k+3)+6$ Please note that the assumption is used only in the part marked red.
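A quick empirical check of the statement being proved (not a proof, of course; just an illustrative one-liner):

```python
print(all(str(6 ** n).endswith('6') for n in range(1, 50)))   # True
```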
Two points $A,B$ on a right path, point $C$ can be seen with an angle of $CAB=60^\circ,\ CBA=45^\circ$. Determine the distance between $C$ and $AB$
Let the perpendicular distance of $C$ from $AB$ be $h$. Then from triangle ratio $\cot(\theta)$: $$h (\cot(60) + \cot(45)) = 400\\ h= \frac{400 \sqrt 3}{1+\sqrt 3} = 253.59$$ Here is an image, all angles in degrees. Try computing $\cot(\theta)$ for the two small triangles.
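A numeric check of the formula above (an illustrative sketch; $AB=400$ is the value used in the answer):

```python
from math import sqrt, tan, radians

cot = lambda deg: 1 / tan(radians(deg))
h = 400 / (cot(60) + cot(45))
print(h, 400 * sqrt(3) / (1 + sqrt(3)))   # both approximately 253.59
```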
Improving a proof for a basic property of convergent series
I think proving this by contradiction is unnecessary, as it follows almost immediately by definition. Let $(s_n)$ be the sequence of partial sums. Then since $a_k\geq0$, $(s_n)$ is increasing. Hence if the limit is $s$ then $$\sum_{i=n}^\infty a_i=s-s_{n-1}=|s-s_{n-1}|$$ so your result follows by definition of a limit.
Basis vector partition
I think it depends on whether the polytope is a Minkowski sum of two polytopes from linearly independent subspaces. Consider the sum of a vertical segment with a horizontal hexagon. Then, in the resulting hexagonal prism, normals of caps are in subspace $\{e_z\}$ and the normals of sides are in the subspace $\{e_x, e_y\}$.
How to show that the image of a complete metric space under an isometry is closed?
Let $y\in N$ be a limit point of $f(M)$. Then there's a sequence $(f(x_n))\in f(M)$ converging to $y$. Then $(f(x_n))$ is Cauchy in $(N,\sigma)$. Since $f$ is an isometry, $(x_n)$ is Cauchy in $(M,d)$. Since $(M,d)$ is complete, $(x_n)$ converges to $x\in M$. Then $f(x)=\lim_{n\to\infty}f(x_n)=y$. Thus $y\in f(M)$. So $f(M)$ is closed. A slicker way would be to note that an isometry is a homeomorphism, and a complete space is closed.
Is this moment inequality valid?
No inequality $\mathrm E(X^p)^2\leqslant c(p)\cdot\mathrm E(X^2)^p$ $(\ast)$ may hold for a finite $c(p)$ and for every nonnegative random variable $X$, as soon as $p\gt2$. To see why, note that the LHS of $(\ast)$ may be infinite while its RHS is finite. This happens, for example, when the tail distribution is $\mathrm P(X\geqslant x)\sim c/x^q$ when $x\to+\infty$, for some $c\gt0$ and $2\lt q\leqslant p$. The same argument also shows that, even when restricted to bounded random variables (and in particular, with every moment finite), $(\ast)$ cannot hold for $p\gt2$, for any finite $c(p)$. To wit, consider a random variable $X$ as above and, for every $u\gt0$, $X_u=\min\{X,u\}$. Then every moment of every $X_u$ is finite and, when $u\to+\infty$, $\mathrm E(X_u^2)\to\mathrm E(X^2)$, which is finite, and $\mathrm E(X_u^p)\to+\infty$. Hence, for every finite $c$, one sees that $\mathrm E(X_u^p)^2\gt c\cdot\mathrm E(X_u^2)^p$ for some finite $u\gt0$, and in particular for some bounded random variable $X_u$. If $1\leqslant p\lt2$, any nonzero $X$ which is almost surely constant disproves the proposed inequality. Finally, if $p=2$, the proposed inequality reduces to a tautology.
Show that a submanifold is lagrangian
One approach is to work in coordinates. Let $k = \dim(C)$ and $n = \dim(M)$. Write the usual symplectic structure on $T^*M$ as $(T^*M, \omega)$. To show that $L \subset T^*M$ is Lagrangian, we must show that: (a) $\dim(L) = n$, and (b) $\omega|_L = 0$. Since we are working with a submanifold $C \subset M$, we can choose "slice" coordinates $(x^1, \ldots, x^k, x^{k+1}, \ldots, x^n)$ for an open set $U \subset M$. In other words, the map $(x^1, \ldots, x^n) \colon U \to \mathbb{R}^n$ identifies $C \cap U \to \mathbb{R}^k \times 0$. In general, a covector $p \in T^*M$ has the form $$p = \xi^1\,dx^1|_x + \cdots + \xi^k \,dx^k|_x + \xi^{k+1}dx^{k+1}|_x + \cdots + \xi^n\,dx^n|_x,$$ where $x = \pi_M(p) \in M$. Note that $p \in T^*M$ is specified by $2n = \dim(T^*M)$ numbers: the basepoint $x \in M$ gives $n$, and the components $(\xi^1, \ldots, \xi^n)$ give $n$. However, a typical element $p \in L \subset T^*M$ takes the form $$p = \frac{\partial f}{\partial x^1}(x)\,dx^1|_x + \cdots + \frac{\partial f}{\partial x^k}(x)\,dx^k|_x + \xi^{k+1}\,dx^{k+1}|_x + \cdots + \xi^n\,dx^n|_x,$$ where now $x = \pi_M(p)$ lies in $C \subset M$. Notice that $p \in L$ is specified by $n = k + (n-k)$ numbers: the basepoint $x \in C$ gives $k$, and the components $(\xi^{k+1}, \ldots, \xi^n)$ give $n-k$. This illustrates that $\dim(L) = n$. (To make this completely precise, one should use this idea to define a coordinate chart $L \to \mathbb{R}^k \times \mathbb{R}^{n-k}$.) Now, notice that we not only chose coordinates $(x^1, \ldots, x^n)$ on an open set $U \subset M$, but we've also described coordinates $(\widetilde{x}^1, \ldots, \widetilde{x}^n, \widetilde{\xi}^1, \ldots, \widetilde{\xi}^n)$ on the open set $\widetilde{U} = \pi^{-1}(U) \subset T^*M$. In these coordinates, the usual symplectic form on $T^*M$ is $$\omega = \sum d\widetilde{x}^i \wedge d\widetilde{\xi}^i = (d\widetilde{x}^1 \wedge d\widetilde{\xi}^1 + \cdots + d\widetilde{x}^k \wedge d\widetilde{\xi}^k) + (d\widetilde{x}^{k+1} \wedge d\widetilde{\xi}^{k+1} + \cdots + d\widetilde{x}^n \wedge d\widetilde{\xi}^n).$$ Using this, one can show that $\omega|_L = 0$. Details are left to you. (Hint: Show that $d\widetilde{\xi}^j|_L = 0$ for $1 \leq j \leq k$, while $d\widetilde{x}^j|_L = 0$ for $k+1 \leq j \leq n$.) Note: The original question statement states that $(M, \omega_M)$ is a symplectic manifold. However, one only needs that $M$ is a smooth manifold, while $T^*M$ is given its usual symplectic structure.
Is the Petersen Graph k-partite?
The Petersen graph is not bipartite, because it has a 5-cycle. It is 3-colorable. You can find such a coloring here, midway down the page on the right. So the smallest $k$ for which the Petersen graph is $k$-partite is $k=3$.
Let the focus S of parabola divide one of its focal chord in the ratio 2:1. If the tangent at Q cuts directrix at R such that RQ=6...
The mistake lies in taking incorrect sign of $t$ for point $Q$. It lies on opposite side of $P$. So for $Q$, parameter is $-1/t=-1/\sqrt{2}$ but in the tangent equation at $Q$ you took $1/t=1/\sqrt{2}$ which gave wrong coordinates for $R$. Since $(x,y)=(at^2,2at)$, $t$ has same sign as $y$. So the entire upper branch of a parabola has $t$ positive and entire lower branch has $t$ negative - something to keep in mind.
Finding $P(X \ge Y)$ when X and Y are independent and uniformly distributed on $\{1,2,...,N\}$
Hint: $$1=\Pr(X\geq Y)+\Pr(Y\geq X)-\Pr(X=Y)$$ and $$\Pr(X\geq Y)=\Pr(Y\geq X)$$
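Carrying the hint through gives $\Pr(X\geq Y)=\frac12\left(1+\frac1N\right)$; a small simulation agrees (an illustrative sketch with a made-up $N$, not part of the original hint):

```python
import random

N = 10                     # example value only
rng = random.Random(1)
trials = 200_000
hits = sum(rng.randint(1, N) >= rng.randint(1, N) for _ in range(trials))
print(hits / trials, (1 + 1 / N) / 2)   # both approximately 0.55
```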
Overlapping null spaces
In the statement of your question there seem to be a few unjustified claims. First, why is it obvious that the nullspaces intersect? Of course we always have $0 \in N(A) \cap N(B)$, but here I am referring to intersection in the sense of containing other, non-zero vectors. Secondly, why should $\dim(N(A)\cap N(B))=M-2N$? Suppose $M=3$ and $N=2$; then we would have $\dim(N(A)\cap N(B))=3-4=-1$, which is not possible. Anyway, apart from these observations, I think it is best to determine the nullspace of each matrix (in the usual way, by solving the homogeneous system associated with each matrix) and then to check whether each of the basis vectors for one space is in the span of the other space's basis vectors.
Size of $Th_{\kappa\kappa}$.
If I understand what you're asking (per your comments), the answer is yes via a general construction. Suppose $v$ is a signature and $\mathfrak{A}$ is a $v$-structure. There is an infinitary sentence characterizing $\mathfrak{A}$ up to isomorphism amongst $v$-structures; roughly, it has the form $$\exists (x_i)_{i<\mu}[\forall y(\bigvee_{i<\mu}y=x_i)\wedge(\bigwedge_{i<j<\mu}x_i\not=x_j)\wedge \mathbb{A}]$$ where $\mu=\vert\mathfrak{A}\vert$ and $\mathbb{A}$ is basically the atomic diagram of $\mathfrak{A}$ pulled back along a fixed bijection $\{x_i: i\in\mu\}\rightarrow\mathfrak{A}$. This is exactly analogous to how we show that every finite structure in a finite language is pinned down up to isomorphism by a single first-order sentence. Now let $\theta=\max\{\mu, \vert v\vert, \omega\}$; the sentence above belongs to $\mathcal{L}_{\theta^+,\mu^+}$, so a fortiori to $\mathcal{L}_{\theta^+,\theta^+}$. In particular, if $\kappa>\theta$ then the $\mathcal{L}_{\kappa,\kappa}$-theory of $\mathfrak{A}$ is determined by a single $\mathcal{L}_{\kappa,\kappa}$-sentence.
$\int_0^\infty \frac{1}{1+x^ 9} \, dx$
There is a simpler way, using Euler's $B$ function: 1) change the variable from $x$ to $y=x^9$. You'll get $\frac 1 9 \int \limits _0 ^\infty \frac {y^{- \frac 8 9}} {1+y} \mathbb{d}y$. 2) make a second change of variables: $t=\frac y {1+y}$; you'll get $\frac 1 9 \int \limits _0 ^1 t^{- \frac 8 9} (1-t)^ {- \frac 1 9} \mathbb{d}t$, which is $\frac 1 9 B(\frac 1 9, \frac 8 9)$. 3) finally, using that $B(x,y)=\frac {\Gamma(x) \Gamma(y)} {\Gamma(x+y)}$ and $\Gamma(x) \Gamma(1-x) = \frac \pi {\sin( \pi x)}$, you'll get $\frac \pi {9 \sin \frac \pi 9}$.
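A numerical sanity check of the closed form (an illustrative sketch using SciPy):

```python
from math import pi, sin
from scipy.integrate import quad

numeric, _ = quad(lambda x: 1 / (1 + x**9), 0, float('inf'))
print(numeric, pi / (9 * sin(pi / 9)))   # both approximately 1.0206
```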
Product of schemes picture
(1) No, the fiber product of schemes doesn't really involve gluing or identification at all. Instead, it is like a cartesian product (but more complicated since schemes are weird). For instance, the fiber product of two copies of $\mathbb{A}^1$ over $\operatorname{Spec} k$ is $\mathbb{A}^2$, and (assuming $k$ is algebraically closed) the you can think of closed points of $\mathbb{A}^2$ as ordered pairs of two closed points of $\mathbb{A}^1$. So on closed points, the fiber product is just the cartesian product in this case (though on non-closed points things are more complicated). More generally, if $X$ and $Y$ are schemes with morphisms $f:X\to Z$ and $g:Y\to Z$, then the fiber product $X\times_Z Y$ is like an algebro-geometric version of the set $\{(x,y)\in X\times Y: f(x)=g(y)\}$ (so if $Z$ has just one point, this is all of $X\times Y$). In general, there is a natural surjection from $X\times_Z Y$ to this set, and in nice cases this surjection will be injective on the closed points. (2) It is much easier to find a counterexample to your second question here. For instance if $A=k[x,y]/(xy)$ and $X=\operatorname{Spec} k[x,y]/(xy)$, then $X$ is connected but the affine open subset $U=\operatorname{Spec} A_{x+y}$ is disconnected. Geometrically, $X$ is two lines which meet at a point and $U$ is the complement of the intersection point, which is disconnected since you can decompose it into the two punctured lines. Finding a connected but not locally connected scheme is a lot harder; for instance, any Noetherian space is locally connected (given a point, the complement of all the irreducible components that don't contain it is a connected neighborhood). Here's one example. Let $K$ be your favorite compact Hausdorff space that is connected but not locally connected (e.g., a topologist's sine curve), and let $C(K)$ be the ring of all continuous functions $K\to\mathbb{C}$. Then the topology of $\operatorname{Spec} C(K)$ is closely related to the topology of $K$ (see this MO post, for instance) and in particular it will also be connected but not locally connected. (3) No. If $A$ is a commutative ring, then the idempotent elements of $A$ are in bijection with the clopen subsets of $\operatorname{Spec} A$. If there are only finitely many idempotents in $A$, then the connected components of $\operatorname{Spec} A$ are exactly the minimal nonempty clopen subsets, and every clopen subset is a union of some collection of connected components. So if there are $n$ connected components, there will be $2^n$ idempotents, one for each union of connected components. If $A$ has infinitely many idempotents, the relationship between idempotents and connected components is much more complicated (the connected components correspond to the ultrafilters on the Boolean algebra of idempotents of $A$).
A multi-dimensional Frobenius problem
Take $\mathcal{E}=(e_1,\ldots,e_d) \in A^d$ that is a basis of $\mathbb{Q}^d$. We choose the norm $|| \cdot ||$ to be $|| \cdot ||_{\infty,\mathcal{E}}$ (sup-norm of the coefficients in the basis $\mathcal{E}$). $\mathbb{Z}e_1 + \ldots + \mathbb{Z}e_d$ contains $N \mathbb{Z}^d$ for some positive integer $N$. Let $C_0$ be the cone spanned by the $e_i$'s. Take $C$ to be the cone spanned by $f_1,\ldots,f_d$ where $f_i = e_i + \epsilon \sum_{j \neq i} e_j$, choosing $\epsilon>0$ small enough so that $(f_1,\ldots,f_d)$ is free. There exists $\eta>0$ (depending on $\epsilon$) such that for every $x \in C$, the coefficients of $x$ in the basis $\mathcal{E}$ are all $\geq \eta ||x||$, so $B(x,\eta ||x||) \subset C_0$ (draw a picture!). Now choose $s_1,\ldots,s_k \in S$ exhausting $\mathbb{Z}^d/N\mathbb{Z}^d$. Let $M=\max_i ||s_i||$. Take $x \in C \cap \mathbb{Z}^d$, with $||x||>M/\eta$. There exists $i$ such that $x-s_i \in N \mathbb{Z}^d$, and $x-s_i \in B(x,M) \subset C_0$, so the coefficients of $x-s_i$ in $\mathcal{E}$ are non-negative integers. As a consequence, $x=s_i+(x-s_i)$ is in $S$. Edit: Just to add rigor to the statement with the "picture": if $x = \sum_i x_i f_i \in C$, and if $x_{i_0} = \max_i x_i$, then $x = \sum_i y_i e_i$ with $y_{i_0} = \max_i y_i \leq (1+(d-1)\epsilon ) x_{i_0}$ and $y_i \geq \epsilon x_{i_0}$ for each $i$, so that $y_i \geq \epsilon / (1+(d-1)\epsilon) ||x||$.
Given that $f(z)$ and $|f(z)|$ are each analytic in a domain D, prove that $f(z)$ is constant in D.
Hint: a function that is analytic on a domain and takes only real values there must be constant, and $|f(z)|$ is real-valued.
Is every cauchy sequence bounded?
For $n=1$ we have $n-1=0$, so $\frac{1}{n-1}$ is not defined; hence you cannot start your sequence at $n=1$. $x_1$ is not infinite; rather, $x_1$ is simply not defined, at least in the set of real numbers $\mathbb{R}$. The symbol $\infty$ is used in mathematics, but you should always check what its meaning is in the context where it is used. In the context you use it (as an element of the real numbers) it makes absolutely no sense, and so you cannot use it. The sequence $$1,\frac12,\frac13,\ldots$$ (this is your sequence $x_2,x_3,x_4,\ldots$) is a Cauchy sequence and it is bounded. (What is a bound for this sequence?) The sequences $1,2,3,4,\ldots$ and $1,2,1,2,1,2,1,2,\ldots$ are not Cauchy sequences, but the second one is bounded and the first one is not (why?). Annotation: one can construct extensions of the set of real numbers $\mathbb{R}$ that contain $\infty$, but statements that are valid in $\mathbb{R}$ need not be valid in such an extension of $\mathbb{R}$.
Prove by mathematical induction that $n(n+1)(n+2)...(n+r-1)$ is divisible by $r!$.
To save a lot of typing, let me define $$f(n,r)=n(n+1)(n+2)\cdots(n+r-1).$$ You can verify the identity $$f(n,r+1)=(r+1)f(n,r)+f(n-1,r+1)$$ which you can then use to prove the statement $$f(n,r)\text{ is divisible by }r!$$ by induction on the quantity $n+r.$ Alternatively, if you're allowed to recast the problem in terms of binomial coefficients, you can prove that $$\frac{f(n,r)}{r!}=\binom{n+r-1}r$$ is an integer by using Pascal's identity $$\binom x{r+1}=\binom{x-1}r+\binom{x-1}{r+1}$$ to prove by induction on $m$ that $\binom mr$ is an integer for all integers $m,r\ge0.$ (Of course this is just the same proof in different notation.)
Find the moment generating function of a discrete random variable given its probability mass function
First we use the fact that the PMF must sum to unity, so with $P_Y(y) = \frac{C}{2^y}$ for $y \in \mathbb N$: $$\sum_{y=1}^{\infty}\frac{C}{2^y}=1 \Rightarrow C \frac{\frac{1}{2}}{1-\frac{1}{2}}=1 \Rightarrow C = 1$$ Then, by definition of the MGF, $$M_Y(s) = \mathbb{E}\{ e^{s Y} \}= \sum_{y=1}^{\infty} \frac{e^{s y}}{2^y} = \frac{\frac{e^{s}}{2}}{1-\frac{e^{s}}{2}}=\frac{e^{s}}{2-e^{s}},\qquad s<\ln 2.$$ (Replacing $s$ by $i\omega$ with $i=\sqrt{-1}$ gives the characteristic function $\Phi_Y(\omega)=\frac{e^{i\omega}}{2-e^{i\omega}}$, which can also be interpreted as the discrete-time Fourier transform of the PMF, obtainable from its z-transform as well.) Here we used the geometric series formula: for a complex number $z$ with $|z|<1$ we have $$\sum_{n=a}^{N}z^n = S = z^a+z^{a+1} +\cdots+z^N$$ $$\sum_{n=a}^{N}z^{n+1} = zS = z^{a+1}+z^{a+2} +\cdots+z^{N+1}$$ $$zS-S = (z-1)S = z^{N+1}-z^a \Rightarrow S = \frac{z^{N+1}-z^a}{z-1}= \frac{z^a-z^{N+1}}{1-z}$$ When $N\rightarrow\infty$ we have $|z|^{N+1}\rightarrow 0$ and the formula becomes $$S=\frac{z^a}{1-z}$$ For both sums above take $a=1$: in the first one $z=\frac{1}{2}$ and in the second $z = \frac{e^{s}}{2}$.
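A quick numeric check of the closed form (an illustrative sketch): truncate the series at a large cutoff and compare with $e^s/(2-e^s)$ for one value of $s<\ln 2$.

```python
from math import exp

s = 0.3                                              # any s < ln 2 works; example value only
series = sum(exp(s * y) / 2**y for y in range(1, 200))
print(series, exp(s) / (2 - exp(s)))                 # the two values agree closely
```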
Say p $> 7$ be a prime such that p $\equiv$ 1 (3). Show that the set of cubic residues (including zero) do not form an arithmetic progression.
Hint: As you indicated, there are $k+1$ cubic residues, including $0$. $1$ and $p-1$ are cubic residues ($1^3$ and $(-1)^3$, respectively).
Find all continuous functions satisfying $\int_0^xf=(f(x))^2+C$ for some constant $C \neq 0$.
[EDIT: my previous answer was incomplete. This was updated to derive rigorously the full set of solutions] 1) Assume $f(x)\not=0$. Then $f^2$ differentiable at $x$ implies $(f^2(x+h)-f^2(x))/h$ has a finite limit as $h\to 0$, equivalently ${f(x+h)-f(x)\over h} (f(x+h)+f(x))\to c$. Since $f(x)\not = 0$, $f(x+h)+f(x)\to 2f(x)=k\not =0$, hence $f$ is differentiable at $x$. 2) Now, as in the text, this implies $f(x) = {1\over 2} x +b$ in every interval where $f(x)\not=0$. In other words, the graph of $f$ is a straight line segment parallel to $y=\frac12 x$ in every such interval. There are 3 types of intervals to be considered: maximal intervals $S_-=(u,v)$ in which $f(x)<0$ (hence $f(v)=0$ if $v<+\infty$), maximal intervals $S_0=[u,v]$ in which $f(x)=0$, and maximal intervals $S_+=(u,v)$ in which $f(x)>0$ (hence $f(u)=0$ if $u>-\infty$). Such maximal intervals exist because $f$ is continuous. It has just been shown that in every interval of the first and third form, the graph of $f$ is a straight segment parallel to $x/2$. Since $f$ is continuous, the end points of two contiguous such segments must be the same (this is what the text expresses as "the two possible solutions at these points must be the same"). By maximality of the intervals, there may exist at most one interval $S_-$, $S_0$ and $S_+$ of each type: $$S_- = (-\infty, \alpha),\quad S_0 = [\alpha, \beta],\quad {\rm and}\quad S_+= (\beta, +\infty).$$ Observe that setting $x=0$ in the proposed equation leads to $0=f(0)^2+C$, hence $C$ must be $\leq 0$ (otherwise there is no solution), and $f(0) = \pm \sqrt{-C}$. Now, if $C< 0$, $f(0)=\pm\sqrt{-C}$, hence $x=0$ is inside $S_+$ or $S_-$ according to the sign of $f(0)$, and there holds $\frac12\cdot 0+b=f(0)=\pm\sqrt{-C}$, hence $b=\pm \sqrt{-C}$. In both cases, the line $x/2+b$ intersects the axis $y=0$ at $x=-2b$. In conclusion, if $b=\sqrt{-C}\ (>0)$, the general solution must be of this form, for some $\alpha\leq -2b$: $$ f(x)=\cases{ x/2 - \alpha/2 , & $x\in (-\infty, \alpha) $,\cr 0, & $x\in [\alpha,-2b]$, \cr x/2 + b, & $x\in (-2b, +\infty)$.\cr } $$ If $b = -\sqrt{-C}\ (<0)$, the general solution must be of this form, for some $\beta \geq -2b$: $$ f(x)=\cases{ x/2 + b, & $x\in (-\infty, -2b) $,\cr 0, & $x\in [-2b, \beta]$, \cr x/2 - \beta/2 , & $x\in (\beta, +\infty)$.\cr } $$ (notice that in these solutions, $\alpha$ and $\beta$ may be equal to $-\infty$ or $+\infty$ resp.). Conversely, it is easily seen that the above functions are solutions of the proposed equation. So, we have found the general form of the solution of this equation whenever $C<0$. If $C=0$, $x=0$ is in $S_0$ hence the general solution must be of the following form, with $\alpha\leq 0$ and $\beta\geq 0$: $$ f(x)=\cases{ x/2 - \alpha/2 , & $x\in (-\infty, \alpha) $,\cr 0, & $x\in [\alpha,\beta]$, \cr x/2 - \beta/2, & $x\in (\beta, +\infty)$.\cr } $$ (again, $\alpha$ may be equal to $-\infty$ and $\beta$ to $+\infty$). Conversely, such a function is easily seen to be a solution of the equation for every $\alpha\leq 0$ and $\beta\geq 0$.
Solve: $xdx+ydy=\frac{xdy-ydx}{x^2+y^2}$
$$xdx+ydy=\frac{xdy-ydx}{x^2+y^2}$$ $$\frac12d(x^2)+\frac12d(y^2)=d\arctan\frac{y}{x}$$ $$\frac12(x^2+y^2)=\arctan\frac{y}{x}+C$$
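A quick symbolic check of the implicit solution (a sketch of my own, assuming sympy is available; it verifies that differentiating $\frac12(x^2+y^2)-\arctan\frac yx$ along a solution curve reproduces the original equation):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
F = (x**2 + y**2)/2 - sp.atan(y/x)          # candidate first integral
dF = sp.diff(F, x)                           # total derivative along y = y(x)
# the ODE, written as (x + y y') - (x y' - y)/(x^2 + y^2) = 0
ode_lhs = (x + y*sp.diff(y, x)) - (x*sp.diff(y, x) - y)/(x**2 + y**2)
print(sp.simplify(dF - ode_lhs))             # prints 0, so F = const encodes the ODE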
Does weak convergence in $L^{q}$ imply weak convergence in $L^{p}$
Since $\Omega$ is bounded, we have a continuous inclusion $$j \colon L^q(\Omega) \hookrightarrow L^p(\Omega).$$ Now we have the general fact that a continuous linear map $T\colon X \to Y$ between two normed spaces (it holds more generally for Hausdorff locally convex spaces) is also continuous if we endow both spaces with their respective weak topology. Since a continuous map maps convergent sequences to convergent sequences, we have indeed the conclusion that weak convergence in $L^q(\Omega)$ implies weak convergence in $L^p(\Omega)$ for $\Omega$ of finite measure and $p \leqslant q$.
Solution to recurrence $c_{l+1,t}=c_{l,t+1}-c_{l-1,t+1}$.
There actually is a pattern, though it shows up more clearly after another step or two: if we ignore the signs, which simply alternate, the coefficients can be read off diagonals in Pascal’s triangle. (I’ve tried to emphasize them by coloring them alternately black and brown.) $$\newcommand\br{\color{brown}}\begin{array}{cc} 1\\ \br{1}&1\\ 1&\br{2}&1\\ \br{1}&3&\br{3}&1\\ 1&\br{4}&6&\br{4}&1\\ \br{1}&5&\br{10}&10&\br{5}&1\\ 1&\br{6}&15&\br{20}&15&\br{6}&1\\ \br{1}&7&\br{21}&35&\br{35}&21&\br{7}&1\\ \end{array}$$ In other words, it appears that $$c_{k,t}=\sum_i(-1)^i\binom{k-i}ic_{0,t+k-i}\;;$$ note that $\binom{k-i}i=0$ if $i<0$ or $i>\frac{k}2$. Since $c_{0,t}=C_t$, the $t$-th Catalan number, we can rewrite this as $$c_{k,t}=\sum_i(-1)^i\binom{k-i}iC_{t+k-i}\;.$$ Those diagonals are actually rather interesting: $$\sum_i\binom{k-i}i=F_{k+1}\;,$$ the $(k+1)$-st Fibonacci number, and $$\sum_i(-1)^i\binom{k-i}i=\begin{cases} 1,&\text{if }k\bmod 6=0\\ 1,&\text{if }k\bmod 6=1\\ 0,&\text{if }k\bmod 6=2\\ -1,&\text{if }k\bmod 6=3\\ -1,&\text{if }k\bmod 6=4\\ 0,&\text{if }k\bmod 6=5\;. \end{cases}$$ (For the latter see this question and answer.) I’m not sure, however, that this really gets you much further. Another approach is to start with the generating function $c(x)$ for the Catalan numbers; it’s well-known that it satisfies the equation $c(x)=x\big(c(x)\big)^2+1$, which I will rewrite as $x\big(c(x)\big)^2=c(x)-1$. From this it follows immediately that $$x\big(c(x)\big)^{k+2}=\big(c(x)\big)^{k+1}-\big(c(x)\big)^k$$ for $k\ge 0$. This looks very much like your recurrence, the extra factor of $x$ on the lefthand side corresponding to the offset in $t$ in the recurrence. This suggests that we’re looking at repeated convolutions of the Catalan numbers, and so we are: a generalization of these numbers is treated in Example $\mathbf{5}$ in Section $7.5$ of Graham, Knuth, & Patashnik, Concrete Mathematics. After getting the notation properly matched up, we find that $$c_{\ell,t}=\binom{2t+\ell+1}t\frac{\ell+1}{2t+\ell+1}=\binom{2t+\ell}t\frac{\ell+1}{t+\ell+1}\;.$$ This PDF is also helpful. This PDF on the Catalan transform is also relevant: Problem $\mathbf{1}$ shows how it generates your numbers $c_{2,t}$. This one discusses the Catalan transform in more detail.
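A small numerical check of the closed form against the recurrence (a sketch of my own; it assumes Python 3.8+ for math.comb and uses the recurrence only for $\ell\ge 1$, with $c_{0,t}=C_t$):

from math import comb

def c(l, t):
    # closed form: binom(2t+l, t) * (l+1) / (t+l+1); the division is exact
    return comb(2*t + l, t) * (l + 1) // (t + l + 1)

# c_{0,t} should be the Catalan numbers 1, 1, 2, 5, 14, 42
print([c(0, t) for t in range(6)])

# check the recurrence c_{l+1,t} = c_{l,t+1} - c_{l-1,t+1}
ok = all(c(l + 1, t) == c(l, t + 1) - c(l - 1, t + 1)
         for l in range(1, 10) for t in range(10))
print(ok)   # True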
How to show that $f(x)=|x|/x$ does not have any limit as $x\to0$?
You just have to find one point $x_0$ in $(-\delta, \delta)$ such that $|f(x_0) - L| \geq 1/2$. Let's say we first suppose that $L \geq 0$ -- could you then find a point that would do the trick?
Matrix Transpose Property Proof
Hint. $A^TA=A\Rightarrow(A^TA)^T=A^T$.
Express the mathematical constant $e$ in terms of a limit that goes to zero.
Is this what you want? $$\lim_{n\to 0}(1+n)^{1/n}=e$$
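For a quick numerical illustration (a small sketch of my own, not part of the original statement):

import math

for n in (0.1, 0.01, 0.001, 0.0001):
    print(n, (1 + n)**(1/n))   # approaches e as n -> 0
print(math.e)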
Minimum perimeter of a three-sided rectangular fence with given enclosed area
What got you into difficulty is that you did not take into account that one side of the rectangular region is bounded by the river. Thus, the perimeter of the fence includes just three sides of the rectangle. The area of the rectangular region is $1800 = lw$, so $$w = \frac{1800~\text{ft.}^2}{l}$$ The perimeter of the fence is $P = l + 2w$. Substituting for $w$ yields \begin{align*} P(l) & = l + 2\left(\frac{1800~\text{ft.}^2}{l}\right)\\ & = l + \frac{3600~\text{ft.}^2}{l} \end{align*} Differentiating yields $$P'(l) = 1 - \frac{3600~\text{ft.}^2}{l^2}$$ Setting the derivative equal to zero yields $l = 60~\text{ft.}$ The derivative is negative if $l < 60~\text{ft.}$ and positive if $l > 60~\text{ft.}$ Thus, by the First Derivative Test, the perimeter is minimized if $l = 60~\text{ft.}$ If $l = 60~\text{ft.}$, then the width of the rectangular region is $$w = \frac{1800~\text{ft.}^2}{l} = \frac{1800~\text{ft.}^2}{60~\text{ft.}} = 30~\text{ft.}$$ so the fence has perimeter $$P = l + 2w = 60~\text{ft.} + 2(30~\text{ft.}) = 120~\text{ft.}$$
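The same computation can be reproduced symbolically; here is a minimal sketch of my own assuming sympy (the variable names are mine):

import sympy as sp

l = sp.symbols('l', positive=True)
P = l + 3600/l                                        # perimeter with w = 1800/l substituted
crit = [s for s in sp.solve(sp.diff(P, l), l) if s > 0]
print(crit)                                           # [60]
print(P.subs(l, crit[0]))                             # 120
print(1800/crit[0])                                   # width w = 30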
Link between Fourier transform of $\Bbb Z$-valued function and $\Bbb T$
$\operatorname{supp}\hat{f} \subset [-\Omega,\Omega] \implies \hat f(\omega)=\sum_k a_ke^{-ik\frac\omega\Omega\pi},\,\forall \omega\in[-\Omega,\Omega]$ due to the Fourier series expansion. But $a_k=\big(\int_{-\Omega}^\Omega=\int_{\Bbb R}\big)\hat f(\omega)e^{ik\frac\omega\Omega\pi}d\omega=f(k)$. So $\hat f(\omega)=\sum_k f(k)e^{-ik\frac\omega\Omega\pi},\,\forall \omega\in[-\Omega,\Omega]$. Going one step beyond the question, substitute the last equation into the Fourier inversion formula and get a nice expression $$f(x)=\int_{-\Omega}^\Omega \hat f(\omega)e^{i2\pi\omega x}\,d\omega=2\sum_k f(k)\frac{\sin\big(\pi(2\Omega x-k)\big)}{\pi\big(2x-\frac k\Omega\big)}.$$ So an $L^1(\Bbb R)$ function with compactly supported Fourier transform is an interpolation of its values at integral points.
Prove that $a_n = \{\left(1+\frac{1}{n}\right)^n\}$ is bounded sequence, $ n\in\mathbb{N}$
Let me denote $S_n=\left(1+\frac{1}{n}\right)^n$. \begin{align*} \left(1+\frac{1}{n}\right)^n &=1+ {_nC_1} \frac{1}{n}+ {_nC_2} \left(\frac{1}{n}\right)^2+\dots+{_nC_n} \left(\frac{1}{n}\right)^n\\ &=1+ n \frac{1}{n}+\frac{n(n-1)}{2!} \left(\frac{1}{n}\right)^2+ \frac{n(n-1)(n-2)}{3!} \left(\frac{1}{n}\right)^3+\dots+\frac{n(n-1)(n-2)\cdots(n-(n-1))}{n!}\left(\frac{1}{n}\right)^n\\ &=1+ 1+ \frac{(1-\frac{1}{n})}{2!} + \frac{(1-\frac{1}{n})(1-\frac{2}{n})}{3!}+\dots+\frac{(1-\frac{1}{n})(1-\frac{2}{n})\cdots(1-\frac{n-1}{n})}{n!}\\ &<1+ 1+ \frac{1}{2!} + \frac{1}{3!}+\dots+\frac{1}{n!}\\ &<1+ 1+ \frac{1}{2} + \frac{1}{2^2}+\dots+\frac{1}{2^{n-1}}\\ &=1+\frac{1-(\frac{1}{2})^n}{1-\frac{1}{2}}\\ &=1+2\left(1-\frac{1}{2^n}\right)\\ &=3-\frac{1}{2^{n-1}}\\ &<3 \end{align*} So $S_n$ is bounded above by $3$. Also note that $S_n\ge 2$, so the sequence is bounded. I have not written the justification for every step, but I hope the argument is clear.
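Numerically the bound is easy to see as well (a quick check of my own, not part of the proof):

for n in (1, 2, 5, 10, 100, 10**4, 10**6):
    print(n, (1 + 1/n)**n)   # increases towards e = 2.718..., always strictly below 3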
Friedberg book on linear algebra theorem 6.6
Well, no: he only defined a vector by the formula of the second part and showed what he did show. He is not using part of what has to be proved within the proof, which would be completely wrong in any logic I can think of. So he defines $\;u\;$ as he does, and then he shows $\;z=y-u\in W^\perp\;$, and since trivially $\;y=u+z\;$ we are done except for the uniqueness part, which I presume is what appears in the part where you wrote the ellipsis ... Nothing wrong here.
Solve nonlinear ordinary differential equation?
$$ f'(t) + \gamma\left[ r+\frac{\sigma^{-2}B^2}{2(1-\gamma)} \right]f(t)+(1-\gamma)e^{-\frac{\rho t}{1-\gamma}}\left( f(t) \right)^{\frac{\gamma}{\gamma-1}} = 0, $$ $$ f(T) = e^{-\rho T}, $$ This ODE can be re-written as $$f'(t)+af(t)+b e^{-ct} f^q=0 \iff f^{-q}f'+a f^{1-q}=-be^{-ct},$$ which is a Bernoulli ODE as pointed out in the comment above. Take $f^{1-q}=z \implies (1-q) f^{-q} f'=z'$, then $$\frac{z'}{1-q}+az=-be^{-ct} \implies z'+(1-q)az=-b(1-q)e^{-ct}$$ This is a linear ODE with integrating factor $I=e^{(1-q)at}$. Then $$z=f^{1-q}= e^{-(1-q)at} \int e^{(1-q)at} (-b(1-q)e^{-ct})\, dt+D e^{-(1-q)at}$$ $$\implies f(t)=\left(\frac{-b(1-q)}{a-aq-c} e^{-ct}+ D e^{(q-1)at}\right)^{1/(1-q)}.$$
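As a sanity check, here is a small sympy sketch of my own (the parameter values $a,b,c,q,D$ are arbitrary sample numbers, not taken from the question) verifying that the final formula, with exponent $1/(1-q)$, solves $f'+af+be^{-ct}f^q=0$:

import sympy as sp

t = sp.symbols('t')
a, b, c, q, D = 1, 1, 2, 3, sp.Rational(1, 2)            # arbitrary sample parameters
coef = sp.Rational(-b*(1 - q), a - a*q - c)               # -b(1-q)/(a-aq-c)
f = (coef*sp.exp(-c*t) + D*sp.exp((q - 1)*a*t))**sp.Rational(1, 1 - q)
residual = sp.diff(f, t) + a*f + b*sp.exp(-c*t)*f**q      # should vanish identically
for t0 in (sp.Rational(1, 2), 1, 2):
    print(residual.subs(t, t0).evalf())                   # numerically ~0 at each sample point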
Higher order Taylor method error
It always helps to stay as close to the theoretical formula as possible. In this case, $$ w_{k+1}=w_k+hf(t_k,w_k)+\frac12h^2(f_t(t_k,w_k)+f_y(t_k,w_k)f(t_k,w_k)) \\ =w_k+h\left(\left(1+\frac h2f_y(t_k,w_k)\right)f(t_k,w_k)+\frac h2f_t(t_k,w_k)\right) $$

import numpy as np

def f(t, y): return np.sin(t) + np.exp(-t)
def f_t(t, y): return np.cos(t) - np.exp(-t)
def f_y(t, y): return 0

t, y, h = 0, 0, 0.5
while t < 1 + 0.1*h:
    # the whole right-hand side is evaluated at the old (t, y) before the tuple assignment
    t, y = t + h, y + h*((1 + 0.5*h*f_y(t, y))*f(t, y) + 0.5*h*f_t(t, y))
    print(t, y)

which prints out

0.5 0.5
1.0 1.0768595869306357
1.5 1.7030876580073921

In the formula you used you have an error in the inner factor 0.5*h, as instead of $0.5^2=0.25$ you got there $0.125=0.5^3$.
Need Help Finding Area of A Rectangle
You do get the correct relation; here is how it can be derived. Let $a$ be the side of the square garden; then the new rectangular garden will have length $2a$ (i.e. double the original side $a$) and width $a-3$ (i.e. the original side decreased by $3$ m). Now from the given condition we have $$\text{area of new rectangular garden}=1.25\times(\text{area of original square garden})$$ $$\implies (2a)(a-3)=1.25(a^2)$$ $$\implies 2a^2-6a=1.25a^2$$ $$\implies 0.75a^2-6a=0$$ $$\implies \frac{3}{4}a^2-6a=0$$ $$\implies \color{blue}{a^2-8a=0}$$ $$\implies a(a-8)=0 \implies a=0, 8$$ But the side of the original garden satisfies $a>0$, hence the side of the original square garden is $a=8$ m. Now the dimensions of the new rectangular garden are length $2a=16$ m and width $a-3=5$ m, hence the area of the rectangular garden is $$16\times 5=80 \ \mathrm m^2.$$
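For completeness, the same equation can be solved mechanically (a small sketch of my own using sympy):

import sympy as sp

a = sp.symbols('a', positive=True)
sol = [s for s in sp.solve(sp.Eq(2*a*(a - 3), sp.Rational(5, 4)*a**2), a) if s > 0]
print(sol)                       # [8]
print(2*sol[0], sol[0] - 3)      # dimensions 16 and 5
print(2*sol[0]*(sol[0] - 3))     # area 80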
Can we say that $\mu^* (A) = \mu (A),$ for all $A \in \mathcal A$?
Indeed, we cannot assert that $\bigcup_{j=1}^\infty A_j \in \mathcal{A}$. But we don't have to. First note that $\mu$ enjoys the following countable sub-additivity property: (*) If $B_j, B \in \mathcal{A}$ and $B = \bigcup_{j=1}^\infty B_j$, then $\mu(B) \le \sum_{j=1}^\infty \mu(B_j)$. This is proved by the usual "disjointification": define $C_j$ inductively as $C_1 = B_1$, $C_{j+1} = B_{j+1} \setminus (B_1 \cup \dots \cup B_j)$. Then the $C_j$ are disjoint and $B = \bigcup_{j=1}^\infty C_j$ so by countable additivity $\mu(B) = \sum_{j=1}^\infty \mu(C_j)$. But we have $C_j \subseteq B_j$ so by finite additivity $\mu(C_j) \le \mu(B_j)$, hence $ \sum_{j=1}^\infty \mu(C_j) \le \sum_{j=1}^\infty \mu(B_j)$. Now, returning to the original problem, apply property (*) with $B_j = A \cap A_j$, so that $A = \bigcup_{j=1}^\infty B_j$. We get $\mu(A) \le \sum_{j=1}^\infty \mu(B_j)$, and since $B_j \subseteq A_j$, we conclude $\mu(A) \le \sum_{j=1}^\infty \mu(A_j)$ as desired. So we do indeed have the following improved countable sub-additivity property: (**) If $A_j, A \in \mathcal{A}$ and $A \subseteq \bigcup_{j=1}^\infty A_j$, then $\mu(A) \le \sum_{j=1}^\infty \mu(A_j)$. This holds whether $\bigcup_{j=1}^\infty A_j \in \mathcal{A}$ or not. This justifies the line of the proof that you were asking about.
$\Bbb Q\cap\Bbb Z[\frac{1+\sqrt n}2]=\Bbb Z$ when $n=4k+1$ is square-free
Note that $$\left(\frac{1+\sqrt n}2\right)^2=\frac{2k+1+\sqrt n}2 =k+\frac{1+\sqrt n}2.$$ It follows by induction that $$\left(\frac{1+\sqrt n}2\right)^r=a_r+b_r\left(\frac{1+\sqrt n}2\right)$$ where $a_r$ and $b_r$ are integers, and so $$\Bbb Z\left[\frac{1+\sqrt n}2\right] =\left\{a+b\frac{1+\sqrt n}2:a,b\in\Bbb Z\right\}.$$ As $\frac12(1+\sqrt n )$ is irrational, the only rationals in this set are the $a+\frac b2(1+\sqrt n)$ with $b=0$, that is, the integers.
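To see the induction concretely, here is a tiny numerical sketch of my own for $n=5$ (so $k=1$), using $\omega^2=k+\omega$ to generate the integer coefficients $a_r, b_r$:

from math import sqrt, isclose

n = 5
k = (n - 1) // 4          # n = 4k + 1
w = (1 + sqrt(n)) / 2
a, b = 0, 1               # omega^1 = 0 + 1*omega
for r in range(1, 10):
    assert isclose(w**r, a + b*w)   # omega^r = a_r + b_r*omega with integer a_r, b_r
    a, b = k*b, a + b               # omega^(r+1) = k*b_r + (a_r + b_r)*omega
print("checked")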
Dirichlet conditions for the existence of the Fourier transform
2) No. 3) These two are completely different: Dirichlet's condition talks about pointwise convergence while the $L^2$ condition talks about convergence in norm, and neither type of convergence implies the other. As a senior in high school I sometimes have to interpret a paragraph as a whole in an English test, since I might have to do it that way on the NCAT (National College Aptitude Test) on 14 Nov. And even then I might not be able to understand every word (and instead work the meaning out impromptu). When I was in grade school, though, English classes and tests were mainly about learning English word by word. But knowing the meaning of every word of a paragraph did not guarantee an understanding of the paragraph. Interpreting paragraphs as a whole = convergence in norm; learning word by word = pointwise convergence. Now you can see why neither implies the other. P.S. No, they don't teach any notion of convergence other than the pointwise one at high-school level, even in Korea, but I found out by myself what "convergence in norm" is via the Internet: just typing "Fourier transform" on Google leads you to some American colleges' websites (that I don't think I'll apply to, lol) on which the $L^2$ Fourier transform is mentioned.
An algorithm for the open travelling salesman problem
I coded your heuristic in MATLAB (because I already happened to have a bunch of other TSP heuristics coded) and tested it on 50 random instances on the unit square with up to 200 nodes. I compared it to several other heuristics, as well as the Held-Karp lower bound. (In a handful of cases, the HK lower bound is the optimal objective function value, but in most cases it is strictly smaller.) Here are the tour lengths plotted against the number of nodes. (I called your heuristic "Successive Grouping".) For each heuristic, here is the average ratio between the tour length from the heuristic and the HK lower bound): Successive Grouping : 1.5890 Nearest Neighbor : 1.2096 Nearest Insertion : 1.1899 Farthest Insertion : 1.0731 Cheapest Insertion : 1.1558 Convex Hull : 1.0778 Minimum Spanning Tree : 1.3078 Christofides : 1.1100 As you can see, your heuristic is not great. :) It's an interesting idea, though. I think part of the problem stems from the fact that after you connect all the pieces, you're left with an additional edge that must be added (from the first to the last node), and this edge is not considered at all during the heuristic, so it could be quite long. (In your example, it's quite long.) By the way, your heuristic is not really a dynamic programming algorithm. It's just iterative. Here's my MATLAB script, for the record: % number of trials T = 50; % maximum number of nodes maxN = 200; % output arrays N_arr = zeros(1, T); SG_z = zeros(1, T); NN_z = zeros(1, T); NI_z = zeros(1, T); FI_z = zeros(1, T); CI_z = zeros(1, T); CH_z = zeros(1, T); MST_z = zeros(1, T); C_z = zeros(1, T); HK_z = zeros(1, T); HK_code = zeros(1, T); for t = 1:T if mod(t, 10) == 0 display(t) end % generate instance N = randi(maxN); N_arr(t) = N; x = rand(N, 1); y = rand(N, 1); c = squareform(pdist([x y])); % solve with heuristics [~, SG_z(t)] = TSPSuccessiveGrouping(c); [~, NN_z(t)] = TSPNearestNeighbor(c, 1); [~, NI_z(t)] = TSPNearestFarthestInsertion(c, 1, true); [~, FI_z(t)] = TSPNearestFarthestInsertion(c, 1, false); [~, CI_z(t)] = TSPCheapestInsertion(c, 1); [~, CH_z(t)] = TSPConvexHull(c, x, y); [~, MST_z(t)] = TSPMST(c); [~, C_z(t)] = TSPChristofides(c); % solve with Held-Karp [HK_z(t), ~, HK_code(t), ~] = TSPHeldKarpSO(c, ones(1, N), NN_z(t)); end % plot close all figure [N_arr_sorted, I] = sort(N_arr); plot(N_arr_sorted, SG_z(I), '-o'); hold on plot(N_arr_sorted, NN_z(I), '-+'); plot(N_arr_sorted, NI_z(I), '-*'); plot(N_arr_sorted, FI_z(I), '-.'); plot(N_arr_sorted, CI_z(I), '-x'); plot(N_arr_sorted, CH_z(I), '-s'); plot(N_arr_sorted, MST_z(I), '-d'); plot(N_arr_sorted, C_z(I), '-^'); plot(N_arr_sorted, HK_z(I), '-v'); legend({'Successive Grouping', 'Nearest Neighbor', 'Nearest Insertion', 'Farthest Insertion', 'Cheapest Insertion', 'Convex Hull', 'MST', 'Christofides', 'Held-Karp Lower Bound'}, 'Location', 'southeast'); xlabel('N'); ylabel('Tour Length'); fprintf('Successive Grouping : %.4f\n', mean(SG_z ./ HK_z)); fprintf('Nearest Neighbor : %.4f\n', mean(NN_z ./ HK_z)); fprintf('Nearest Insertion : %.4f\n', mean(NI_z ./ HK_z)); fprintf('Farthest Insertion : %.4f\n', mean(FI_z ./ HK_z)); fprintf('Cheapest Insertion : %.4f\n', mean(CI_z ./ HK_z)); fprintf('Convex Hull : %.4f\n', mean(CH_z ./ HK_z)); fprintf('Minimum Spanning Tree : %.4f\n', mean(MST_z ./ HK_z)); fprintf('Christofides : %.4f\n', mean(C_z ./ HK_z)); fprintf('Held-Karp Lower Bound : %.4f\n', mean(HK_z ./ HK_z));
How fast can $\Sigma_{n=1}^{\infty}a_nr^n$ blow up as $r \to 1$?
$\newcommand{\eps}{\varepsilon}$If $(a_{n}) \to 0$, then for every $\eps > 0$, there exists a positive integer $N$ such that $|a_{n}| < \eps$ for $n \geq N$. If $|r| < 1$, then \begin{align*} \left|(1 - r)\sum_{n=0}^{\infty} a_{n} r^{n}\right| &= \left|(1 - r)\sum_{n=0}^{N} a_{n} r^{n} + (1 - r)\sum_{n=N+1}^{\infty} a_{n} r^{n}\right| \\ &\leq \left|(1 - r)\sum_{n=0}^{N} a_{n} r^{n}\right| + \eps\left|(1 - r)\sum_{n=N+1}^{\infty} r^{n}\right| \\ &= \left|(1 - r)\sum_{n=0}^{N} a_{n} r^{n}\right| + \eps|r^{N+1}| \\ &< \left|(1 - r)\sum_{n=0}^{N} a_{n} r^{n}\right| + \eps. \end{align*} Since $\eps > 0$ was arbitrary, the left-hand side approaches $0$ as $r \to 1$.
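A numerical illustration of the statement (a sketch of my own; it takes $a_n = 1/(n+1)\to 0$ and truncates the series far enough that the tail is negligible for the chosen values of $r$):

import numpy as np

n = np.arange(0, 500000)
a = 1.0 / (n + 1)                       # a_n -> 0
for r in (0.9, 0.99, 0.999, 0.9999):
    s = np.sum(a * r**n)                # truncated power series
    print(r, (1 - r) * s)               # tends to 0 as r -> 1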
Calculating Eigenvectors: Is my book wrong?
TLDR: The answers are the same. The vectors $(0.646586,1)$ and $(0.54,0.84)$ go in (almost) the same direction (the only differences are due to rounding and the magnitude of the vector). The first has the benefit that one of its entries equals one. The second has the benefit that its magnitude is (almost) $1$, but they both give essentially the same information. Remember that an eigenvector for a specific eigenvalue $\lambda$ is any nonzero vector $v$ such that $Av=\lambda v$, and these vectors (together with the zero vector) make up an entire subspace of your vector space, referred to as the eigenspace for the eigenvalue $\lambda$. In the problem of determining eigenvalues and corresponding eigenvectors, you need only find some collection of eigenvectors such that they form a basis for each corresponding eigenspace. There are infinitely many correct choices for such eigenvectors.
How do you formulate the linearity condition for a covariant derivative on a vector bundle in terms of parallel transport?
The term "linear connection" refers to linearity in the other slot. So the point here is that $\nabla_X(s_1+s_2)=\nabla_X s_1+\nabla_X s_2$. This is indeed equivalent to the parallel transport being linear as a map between fibers (and thus only makes sense on vector bundles). What you discuss in the post is linear dependence on the direction into which you differentiate. This is not a specific feature of linear connections, it is also true for connections on general fiber bundles and indeed for arbitrary tangent maps.
Expression for largest eigen value of a real symmetric matrix
Yes there is. Note that if we write our matrix $A$ in the form $A = Q\Lambda Q^t$, where $Q$ is orthogonal and $\Lambda$ diagonal with entries $\lambda_1 \ge \lambda_2$ , then we have for any $x \in S^1 = \{x \in \mathbf R^2 \mid \def\abs#1{\left|#1\right|}\abs x = 1\}$ that $$ \def\<#1>{\left<#1\right>}\<Ax,x> = \<\Lambda Q^t x, Q^tx> = \sum_{i=1}^2 \lambda_i (Q^tx)^2_i \le \lambda_1 \abs{Q^tx}^2 = \lambda_1$$ So $$ \lambda_1 \ge \max_{x \in S^1} \<Ax,x> $$ Choosing the eigenvector $v \in S^1$ such that $Av = \lambda_1 v$ shows that $$ \lambda_1 = \<Av,v> \le \max_{x\in S^1} \<Ax,x> $$ hence $$\lambda_1 = \max_{x\in S^1} \<Ax,x> = \max_{x \ne 0}\frac{\<Ax,x>}{\<x,x>}$$
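A quick numerical check of this variational characterization (a sketch of my own, using numpy and a random symmetric $2\times 2$ matrix):

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2))
A = (M + M.T) / 2                          # random symmetric matrix
lam_max = np.linalg.eigvalsh(A).max()      # largest eigenvalue

# maximize <Ax, x> over a fine grid of unit vectors x = (cos t, sin t)
t = np.linspace(0, 2*np.pi, 100000)
X = np.vstack([np.cos(t), np.sin(t)])      # columns are unit vectors
quad = np.einsum('ij,ij->j', X, A @ X)     # <A x, x> for each column
print(lam_max, quad.max())                 # agree to within the grid resolution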
How many cubic (i.e., third-degree) polynomials $f(x)$ are there such that $f(x)$ has positive integer coefficients and $f(1)=9$?
Hint: the number of $(a,b,c,d)\in\left(\mathbb{Z}^+\right)^4$ such that $a+b+c+d=9$ is given by the coefficient of $x^9$ in the product $(x+x^2+x^3+\ldots)^4$, i.e. by $$ [x^9]\frac{x^4}{(1-x)^4} = [x^5]\frac{1}{(1-x)^4}=[x^5]\sum_{n\geq 0}\binom{n+3}{3}x^n =\binom{5+3}{3}=\color{red}{56}.$$ See also stars and bars.
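The count is small enough to verify by brute force (a sketch of my own):

from itertools import product

# each coefficient is a positive integer, so it lies between 1 and 6 here
count = sum(1 for a, b, c, d in product(range(1, 10), repeat=4) if a + b + c + d == 9)
print(count)   # 56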
The number of possible combination of column values
Assuming all the words are different, you will have $y^x\, x!$ different possibilities: $2^3\cdot 3! = 48$ and $2^4\cdot 4! = 384$. To see this, first fix the $x$ columns, each of which is selected from $y$ words. This gives $y^x$ possibilities. There are $x!$ possible permutations of the columns, hence you have $y^x\, x!$ different possibilities.
How are the homology groups of map space related to the "factors"?
In general the topology of $Map(X,Y)$ will be very complicated, even for reasonable spaces $X$ and $Y$. This answer cannot possibly claim to be complete but hopefully will be useful to you. Apologies for its unwieldy length. We'll assume throughout that $X$ and $Y$ are pointed CW complexes, and use the more standard notation $Map(X,Y)$ for the space of unbased maps and $Map_*(X,Y)$ for the space of based maps. By a result of Milnor $Map(X,Y)$ and $Map_*(X,Y)$ are CW complexes. Now it is true that if $A$ is an abelian group and $Y=K(A,n)$ is the Eilenberg-Mac Lane space in degree $n$, then $Map(X,K(A,n))$ is homotopy equivalent to a product of Eilenberg-Mac Lane spaces. This is true when $X$ is a finite complex, or more generally of finite type. It may still be true for more general $X$, although I would suggest care taken with this. I'll prove the statement for $X$ a simply connected finite complex. For more details look at Moller's paper Spaces of Sections of Eilenberg-Mac Lane Fibrations and the references therein. To see that the statement should be true first consider the based mapping space $Map_*(X,K(A,n))$. Then $\pi_0Map_*((X,K(A,n)))=[X,K(A,n)]\cong \tilde H^n(X;A)$, and more generally $$\pi_k(Map_*(X,K(A,n))=\pi_0(\Omega^kMap_*(X,K(A,n)))=\pi_0(Map_*(S^k,Map_*(X,K(A,n))))\cong\pi_0(Map_*(S^k\wedge X,K(A,n)))\cong\tilde H^n(\Sigma^kX;A)\cong\tilde H^{n-k}(X;A).$$ Thus $Map_*(X,K(A,n))$ has the same homotopy groups as the product $\prod K(\tilde H^{n-k}(X;A),k)$. There are several ways of increasing sophistication to realise the homotopy equivalence. We'll follow what is perhaps the simplest and assume that $X$ is a finite, simply connected CW complex, using induction on its cellular structure. Now when $X=S^{k-1}$ it is clear that $$Map_*(S^{k-1},K(A,n))=\Omega^{k-1}K(A,n)\simeq K(A,n-k+1)\simeq K(\tilde H^{k-1}(S^{k-1};A),n-k-1)$$ has the correct homotopy type, and more generally that so does $$Map_*\left(\bigvee S^{k-1},K(A,n)\right)\simeq K\left(\tilde H^{k-1}\left(\bigvee S^{k-1};A\right),n-k-1\right)$$ when $X$ is a wedge of $(k-1)$-spheres. Now assume the statement is true for all $X'$ of cellular dimension $<k$ and that $X$ is obtained from $X'$ by adding $k$-cells. Then there is a cofibration sequence $$\bigvee S^{k-1}\xrightarrow{\varphi}X'\xrightarrow{i}X\xrightarrow{q}\bigvee S^k\rightarrow\dots$$ which induces a fibration sequence $$\dots\rightarrow Map(X,K(A,n))\xrightarrow{i^*} Map(X',K(A,n))\xrightarrow{\varphi^*}Map(\bigvee S^{k-1},K(A,n)).$$ By assumption $Map(X',K(A,n))\simeq\prod K(\tilde H^{n-i}(X';A),i)$ and $Map(\bigvee S^{k-1},K(A,n))\simeq \prod K(A,n-k+1)$. For dimensional reasons $\varphi^*$ is trivial when restricted to all factors except $K(\tilde H^{k-1}(X';A),n-k+1)$, and on this factor it is the map corresponding to the change of coefficients induced by the homomorphism $\varphi^*$ in the long exact cohomology sequence $$0\rightarrow \tilde H^{k-1}(X;A)\xrightarrow{i^*}\tilde H^{k-1}(X';A)\xrightarrow{\varphi^*} \tilde H^{k-1}\left(\bigvee S^{k-1};A\right)\rightarrow \tilde H^k(X;A)\rightarrow 0.$$ Thus the fibre of $\varphi^*$ resticted to $K(\tilde H^{k-1}(X';A),n-k+1)$ identifies with $K(\tilde H^{k-1}(X;A);k-1)$ and it follows that the fibre of $\varphi^*$ is homotopy equivalent to $K(\tilde H^{k-1}(X;A);k-1)\times\prod_{i<k-1} K(\tilde H^{n-i}(X';A),i)$. Using the fibration sequence above, this fibre is precisely $Map_*(X,K(A,n))$ . 
Now since $\tilde H^{k-i}(X;A)\cong\tilde H^{k-i}(X';A)$ for $i<1$, we have from all this that $$Map_*(X,K(A,n))\simeq \prod K(\tilde H^{n-i}(X;A),i)$$ and the proof is complete upon appealing to the inductive hypothesis. Now consider the unbased mapping space $Map(X,K(A,n))$ and the evaluation fibration $$Map_*(X,K(A,n))\xrightarrow{j} Map(X,K(A,n))\xrightarrow{ev} K(A,n).$$ Since $K(A,n)$ is an H-space so too is the mapping space $Map(X,K(A,n))$. The map $\theta:K(A,n)\rightarrow Map(X,K(A,n))$ defined by $\theta(a)(x)=a$ for $a\in K(A,n)$, $x\in X$, splits $ev$, and using the H-space multiplication we get a map $$j+\theta:Map_*(X,K(A,n))\times K(A,n)\rightarrow Map(X,K(A,n))$$ which is seen to be a weak homotopy equivalence. Since $X$ is CW, so too are the mapping spaces and we conclude that this map is a homotopy equivalence. We can sum up the result with the following, $$Map(X,K(A,n))\simeq \prod_{k\geq 0}K(\tilde H^{n-k}(X;A),K).$$ Now let us turn to your final question. Using the above we have $$Map(S^1,S^1\times S^1)\cong Map(S^1,S^1)\times Map(S^1,S^1)\simeq S^1\times S^1\times \mathbb{Z}\times \mathbb{Z}.$$ On the other hand, using the fact that the composite $S^2\hookrightarrow S^1\vee S^1\vee S^2\xrightarrow{pinch} S^2$ is the identity we see that $Map(S^1,S^2)$ retracts off of $Map(S^1,S^1\vee S^1\vee S^2)$. In particular $H^*(Map(S^1,S^2))$ is a direct summand of $H^*(Map(S^1,S^1\vee S^1\vee S^2))$. Now consider the rationalisation of this last function space. Since $S^1$ is a finite complex and $S^2$ is simply connected it holds that $$Map(S^1,S^2)_\mathbb{Q}\simeq Map(S^1,S^2_\mathbb{Q})$$ Now there is a homotopy fibration sequence $$S^2_\mathbb{Q}\rightarrow K(\mathbb{Q},2)\xrightarrow{\iota^2} K(\mathbb{Q},4)$$ to which we can apply the mapping functor and use the previous results to get a homotopy fibration $$Map(S^1,S^2_\mathbb{Q})\rightarrow K(\mathbb{Q},1)\times K(\mathbb{Q},2)\xrightarrow{\ast\times \iota^2} K(\mathbb{Q},3)\times K(\mathbb{Q},4).$$ This shows that $Map(S^1,S^2_\mathbb{Q})\simeq K(\mathbb{Q},1)\times K(\mathbb{Q},2)\times S^2_\mathbb{Q}$. The point is that this tells us that $H^*(Map(S^1,S^2))$ contains free $\mathbb{Z}$ factors in all even degrees, corresponding rationally to the homology of $K(\mathbb{Q},2)$. As we have already observed, this means that $Map(S^1,S^1\vee S^1,S^2)$ has integral summands in all even degrees. This is clearly not the case for $Map(S^1,S^1\times S^1)\simeq S^1\times S^1\times\mathbb{Z}\times\mathbb{Z}.$