What does a restriction mean?
If $f$ is a function $X \to Y$ between sets and $Z \subseteq X$, then the restriction $f|Z$ is the function $Z \to Y$ given by $(f|Z)(z) = f(z)$. Informally, a restriction keeps the rule but shrinks the domain. Note that this can have consequences: for instance, the restriction of $x \mapsto x^2$ to $(0,\infty)$ is injective, but the original function with domain $\mathbb R$ is not. In your linear algebra example, $A|(\ker A)=0$ means that $A$ sends every vector in $\ker A$ to the zero vector, which follows from the definition of $\ker A$. In other words, $A$ is the zero transformation on $\ker A$.
Simple calculus question about differentiation
$$dx=\frac{(\phi a^{\phi-1}da+\log a\,a^\phi\,d\phi)b-a^\phi db}{b^2}$$ and $$dy=\frac{da\,b-a\,db}{b^2}$$ so that $$\frac{dx}{dy}=\dfrac{\phi a^{\phi-1}b\,da+\log a\,a^\phi b\,d\phi-a^\phi db}{da\,b-a\,db}.$$ If there are no dependencies between $a,b,\phi$, this is about all you can say.
What is the partition for a dice, where the universe is $\Omega=\{1, 2, 3, 4, 5, 6\}$?
"A partition is a grouping of its elements into non-empty subsets, in such a way that every element is included in exactly one subset." (Wikipedia) There are many partitions of the same set. You have listed some of the possible ones: $$\mathcal{P}_1 = \Big\{ \{1,2,3,4,5,6\}\Big\} $$ $$\mathcal{P}_2 = \Big\{ \{1\},\{2\},\{3\},\{4\},\{5\},\{6\} \Big\} $$ $$\mathcal{P}_3 = \Big\{ \{1,2\},\{3,4\},\{5,6\}\Big\} $$ But there are many others: $$\mathcal{P}_4 = \Big\{ \{1,4\},\{2\},\{3,5,6\}\Big\} $$ $$\mathcal{P}_5 = \Big\{ \{1\},\{2\},\{3,4,5,6\}\Big\} $$ $$\mathcal{P}_6 = \Big\{ \{1,2\},\{3\},\{4\},\{5,6\}\Big\} $$ and so on.
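The defining conditions (non-empty blocks, pairwise disjoint, union equal to $\Omega$) are easy to check mechanically; here is a small sketch, where `is_partition` is my own helper name:

```python
def is_partition(parts, omega):
    if any(not p for p in parts):
        return False  # no empty blocks allowed
    union = set().union(*parts)
    # blocks are pairwise disjoint exactly when the sizes add up to |union|
    return union == omega and sum(len(p) for p in parts) == len(omega)

omega = set(range(1, 7))
print(is_partition([{1, 4}, {2}, {3, 5, 6}], omega))   # True, this is P_4
print(is_partition([{1, 2}, {2, 3, 4, 5, 6}], omega))  # False, 2 appears twice
```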
Centralizer and Normalizer as Group Action
It's saying that if you want to prove that centralizers, normalizers, and kernels are subgroups of $G$, it is enough to show that they are stabilizers or kernels of group actions. They then exhibit a particular group action (conjugation) and show that $N_G(A)$ is the stabilizer of $A$ and $Z(G)$ is the kernel of this action; thus those two sets are subgroups. Then they let $N_G(A)$ act only on $A$ and show that $C_G(A)$ is the kernel of this second action and thus a subgroup.
Are there prime gaps of every size?
It appears to be open whether every even number is the difference of two primes, let alone of consecutive primes. Here is an m.se question mentioning that, and an mo question here
For certain pairs of Platonic Solids, are the edge-centers and face-centers equivalent?
The tetrahedron can be vertex-inscribed into the cube (just take alternating vertices of the cube). The tetrahedron's edges then become diagonals of the cube's faces, so each edge of the tetrahedron uniquely corresponds to a face of the cube. This correspondence can indeed be given by their respective midpoints. The cube, on the other hand, can be vertex-inscribed into a dodecahedron. This can best be seen by attaching hipped roofs to the cube's faces in alternating orientations, where the trapezia and obtuse triangles pairwise become reconnected (across the cube's edges) into the required pentagons. Thus again you have a unique correspondence from the edges of the cube to the faces of the dodecahedron. But, in contrast to the above case, the edges of the cube do not run through the centers of the pentagons, so the respective centers do not align here. --- rk
When does Order of Second Partial Derivatives Matter?
It may not be true if the function is not of class $C^2$. You can check that $$f(x,y) = \begin{cases} \dfrac{xy(x^2-y^2)}{x^2+y^2}, \mbox{ if }(x,y) \neq (0,0) \\[1em] 0, \mbox{ if }(x,y) = (0,0) \end{cases}$$is a counter-example. See more details in the relevant Wikipedia page, for example.
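A quick numerical check of this counterexample (step sizes are my choice) shows the two mixed partials at the origin really do disagree: one is $-1$, the other $+1$.

```python
def f(x, y):
    # the counterexample above, with the piecewise value at the origin
    return 0.0 if (x, y) == (0.0, 0.0) else x*y*(x*x - y*y)/(x*x + y*y)

def fx(x, y, h=1e-6):  # central difference in x
    return (f(x + h, y) - f(x - h, y)) / (2*h)

def fy(x, y, h=1e-6):  # central difference in y
    return (f(x, y + h) - f(x, y - h)) / (2*h)

k = 1e-4
fxy = (fx(0.0, k) - fx(0.0, -k)) / (2*k)  # d/dy of f_x at (0,0)
fyx = (fy(k, 0.0) - fy(-k, 0.0)) / (2*k)  # d/dx of f_y at (0,0)
print(round(fxy, 3), round(fyx, 3))  # -1.0 and 1.0
```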
$\lim _{x\rightarrow\infty} \left(\sqrt{(1+ab)(1+ab+(1-a)cx^{-d})}-\sqrt{ab(ab+(1-a)cx^{-d})}\right)^{-x}$
I solved this, and figured that I should post the answer here. Taking $\log f(x)$ (in the original form) yields: $$\log f(x)=-x\log\left(\sqrt{(1+ab)(1+ab+(1-a)cx^{-d})}-\sqrt{ab(ab+(1-a)cx^{-d})}\right)$$ Now, $f(x)=\exp[\log f(x)]$, but we can take the limit through the exponent since it's continuous. Substituting $y=1/x$ and taking the limit as $y\rightarrow 0$ results in an indeterminate form 0/0. However, that allows us to use L'Hopital's rule in the above expression. For the sake of clarity, let's also substitute $u=ab$ and $v=(1-a)c$. The new expression looks as follows: $$\lim_{y\rightarrow0}\log f(y)=-\lim_{y\rightarrow0}\frac{\log\left(\sqrt{(1+u)(1+u+vy^{d})}-\sqrt{u(u+vy^{d})}\right)}{y}$$ Per L'Hopital's rule, we take the derivative with respect to $y$ of the numerator and denominator and evaluate the limit of their ratio. Since the derivative of the denominator is 1, we are interested in the limit of the derivative of the numerator. The expression for it is as follows: $$\lim_{y\rightarrow0}\frac{\frac{d (1+u) v y^{d-1}}{2 \sqrt{(1+u) \left(1+u+v y^d\right)}}-\frac{d u v y^{d-1}}{2 \sqrt{u^2+u v y^d}}}{\sqrt{(1+u) \left(1+u+v y^d\right)}-\sqrt{u^2+u v y^d}}$$ The above looks daunting but it can be simplified by multiplying the numerator and the denominator of the large fraction by $\frac{d (1+u) v y^{d-1}}{2 \sqrt{(1+u) \left(1+u+v y^d\right)}}+\frac{d u v y^{d-1}}{2 \sqrt{u^2+u v y^d}}$. Doing some tedious algebra results in the following expression: $$L=\lim_{y\rightarrow0}\frac{d v^2 y^{2 d-1}}{2 \left(u+u^2+2 u v y^d+v y^d \left(1+v y^d\right)+\sqrt{u \left(u+v y^d\right)} \sqrt{(1+u) \left(1+u+v y^d\right)}\right)}$$. Now, $\lim f(x)=\exp(-L)$. Thus, we can analyze the limit as $y\rightarrow0$ (or, correspondingly, $x\rightarrow\infty$) that we originally set out to analyze by looking at $L$ for different cases of $d$. 
When $d<1/2$, $L\rightarrow\infty$ and $\lim f(x)=0$; when $d>1/2$, $L=0$ and $\lim f(x)=1$; when $d=1/2$, $L=\frac{v^2}{8(u+u^2)}$ and $\lim f(x)=\exp\left(-\frac{v^2}{8(u+u^2)}\right)$, as my numerical experiments suggested.
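The $d=1/2$ case can be spot-checked numerically. Here is a sketch with sample parameters of my choosing, $a=0.5$, $b=1$, $c=1$ (so $u=0.5$, $v=0.5$): evaluating the original expression at a large $x$ should match $\exp\left(-\frac{v^2}{8(u+u^2)}\right)$.

```python
import math

a, b, c, d = 0.5, 1.0, 1.0, 0.5   # sample parameters, d = 1/2 case
u, v = a*b, (1 - a)*c

def f(x):
    t = v * x**(-d)
    base = math.sqrt((1 + u)*(1 + u + t)) - math.sqrt(u*(u + t))
    return base ** (-x)

predicted = math.exp(-v**2 / (8*(u + u**2)))
print(f(1e10), predicted)  # both approximately 0.9592
```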
Proof-verification: $(x_n)$ bounded and $y_n \to 0$ implies $(x_ny_n)\to 0$
If we correct the obvious typo then this is fine; exactly this sort of argument is given all the time. But it's somewhat "informal", in that the conclusion does not look like what's required by the definition. A more formally correct version would read "Since $y_n\to0$, given $\epsilon>0$ there exists $N$ such that $|y_n|<\epsilon/A$ for all $n>N$..." So whether it's actually "right" depends on the context. In a research paper, or some other context where the point is to convince the reader it's true, what you wrote is fine - people do write this sort of thing all the time. In a calculus class it's different - there the point is to convince the reader that you know exactly how to prove it, and in that context you're much better off getting a literal $|x_ny_n|\dots<\epsilon$ at the end. Note: Of course @pre-kidney is also correct in saying that the last sentence is missing the point; that issue doesn't come up if we change it as suggested.
If $\lim t_n=0$, when does $\lim s_nt_n \neq 0$?
There are many such examples, one of them is $s_n = n, t_n = \dfrac{1}{n}$
Prove using induction that $\binom22+\dots+\binom n2 = \binom{n+1}3$
To prove $P(k+1)$, start with your left hand side, i.\,e. $$ \def\P#1{\binom 22 + \binom 32 + \cdots + \binom{#1}2}\P{k+1}$$ Write it as $$ \P k + \binom{k+1}2 \tag+ $$ Now use that, by the induction hypothesis, we have $$ P(k): \P k = \binom{k+1}3 $$ Hence, we can write $(+)$ as $$ \P k + \binom{k+1}2 = \binom{k+1}3 + \binom{k+1}2 $$ But the right hand side here equals (as you write - but for $k+1$ instead of $k$) $$ \binom{k+1}3 + \binom{k+1}2 = \binom{k+2}3 $$ Collecting everything, you are done.
Solving a cubic function with P and Q
Using Descartes' rule of signs, there can be at most two positive roots, and we observe the sign changes $f(0) = 3, f(6) = -3, f(7)=45$, so the largest root is $\alpha \in (6, 7)$. Writing $\alpha = 6+\epsilon$, we have $0< \epsilon < 1$, and we get $f(6+\epsilon) = -3+35\epsilon + 12\epsilon^2+\epsilon^3 = 0$. Ignoring the cubic term and solving the remaining quadratic, we get $\epsilon = \frac1{12}$, so $\alpha \approx 6\frac1{12}$ should be a good approximation.
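The cubic itself is not written out above, but matching the expansion $f(6+\epsilon)=\epsilon^3+12\epsilon^2+35\epsilon-3$ (together with $f(0)=3$, $f(6)=-3$, $f(7)=45$) pins it down as $f(x)=x^3-6x^2-x+3$; assuming that reconstruction, a quick check of the approximation:

```python
def f(x):
    # reconstructed from the expansion of f(6 + eps) given above
    return x**3 - 6*x**2 - x + 3

print(f(0), f(6), f(7))   # 3 -3 45, matching the sign data
print(f(6 + 1/12))        # close to 0, so alpha is near 6 + 1/12
```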
Looking for some help with integration of $y=(1+x^2) \cos x$
\begin{gather*} I=\int \left( 1+x^{2}\right)\cos x\,dx \end{gather*} For integration by parts we have \begin{gather*} \int u\cdot v\,dx=u\int v\,dx-\int \frac{du}{dx}\left(\int v\,dx\right) dx \end{gather*} Taking $u=1+x^{2}$ as the first part and $v=\cos x$ as the second, \begin{gather*} I=\left( 1+x^{2}\right)\sin x-\int 2x\sin x\,dx \end{gather*} For the remaining integral $I_{2} =\int 2x\sin x\,dx$, take $u=2x$ as the first part and $v=\sin x$ as the second: \begin{gather*} I_{2} =-2x\cos x+\int 2\cos x\,dx=-2x\cos x+2\sin x \end{gather*} Therefore \begin{gather*} I=\left( 1+x^{2}\right)\sin x-I_{2} =\left( 1+x^{2}\right)\sin x-( -2x\cos x+2\sin x)\\ =\left( 1+x^{2}\right)\sin x+2x\cos x-2\sin x =\left( x^{2} -1\right)\sin x+2x\cos x+C \end{gather*} Hope this answers your query! Edit: If I interpret R correctly to be the real number line, then the improper integral over all of $\mathbb{R}$ does not converge, because the antiderivative oscillates with unbounded amplitude. (Note that \begin{equation*} \int ^{a}_{-a} f( x)\, dx=0 \end{equation*} holds for odd integrands; here the integrand is even, so the integral over $[-a,a]$ equals $2\int_0^a f(x)\,dx$.)
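A numerical spot check of the antiderivative: differentiating $F(x)=(x^2-1)\sin x+2x\cos x$ by central differences should reproduce the integrand at any point (sample points are arbitrary).

```python
import math

def F(x):  # the antiderivative obtained above
    return (x**2 - 1)*math.sin(x) + 2*x*math.cos(x)

def f(x):  # the integrand
    return (1 + x**2)*math.cos(x)

h = 1e-6
errors = [abs((F(x + h) - F(x - h))/(2*h) - f(x)) for x in (0.3, 1.0, 2.5)]
print(max(errors))  # tiny, confirming F' = f
```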
How to verify $PA+A^tP=-I$?
Let's do it in a very simple way: If $A = \begin{pmatrix} 0 & 1\\ -2 & -3 \end{pmatrix}$ and $P = \begin{pmatrix} a & b\\ b & c \end{pmatrix}$ then: $$PA+A^tP = \begin{pmatrix} -4b & a-2c-3b\\ a-2c-3b & 2b-6c \end{pmatrix}$$ which should be equal to $-I = \begin{pmatrix} -1 & 0\\ 0 & -1 \end{pmatrix}$.
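Matching entries gives $-4b=-1$, $2b-6c=-1$ and $a-2c-3b=0$, i.e. $b=c=\frac14$ and $a=\frac54$. A small sketch in plain $2\times2$ arithmetic confirming $PA+A^tP=-I$ for that $P$:

```python
A = [[0, 1], [-2, -3]]
P = [[5/4, 1/4], [1/4, 1/4]]  # a = 5/4, b = 1/4, c = 1/4 from matching entries

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

At = [[A[j][i] for j in range(2)] for i in range(2)]  # transpose of A
PA, AtP = matmul(P, A), matmul(At, P)
S = [[PA[i][j] + AtP[i][j] for j in range(2)] for i in range(2)]
print(S)  # [[-1.0, 0.0], [0.0, -1.0]], i.e. -I
```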
What can r.v.s mean?
The abbreviation "r.v.s" stands for "random variables".
Finite Group and normal Subgroup
There are two things you need to prove: $H\subseteq X$, which follows from Lagrange, and $X\subseteq H$. To see the latter, consider the quotient group $G/H$. Suppose $x\in X$. Then $(xH)^d=x^dH=H$. This means that $xH$ has order dividing $d$. Applying Lagrange again, we have that $xH=H$, so $x\in H$ as required. These combine to prove the result. Finally, a quick note on your attempt. You said that you want to show that $|X|=d$, and your logic was that you knew that $H$ was the only subgroup of order $d$. However, $X$ is not necessarily a subgroup! For example, the permutations $(123)$ and $(345)$ both have order $3$, but they multiply to give $(1 2 3 4 5)$, a permutation of order $5$.
Show that the sequence $\sqrt{n+1} - \sqrt{n}$ converge toward 0
This is very good. You are missing some parentheses, as was mentioned. I would rephrase your application of the Archimedean property: generally it refers to your $N$ as being an integer, so you would need $$N\ge \frac{1}{\epsilon^2} - 1.$$ Also, you are using $N$ for two different things; use $\mathbb{N}$ (\mathbb{N}) or $\mathbb{Z}^+$ (\mathbb{Z}^+) for the set of natural numbers. I don't know your preference, but I like to end with a statement such as "since for any $\epsilon >0$ there exists an $N\in \mathbb{N}$ such that $n\ge N$ implies $|u_n|<\epsilon$, by definition, $$\lim_{n\rightarrow \infty} \sqrt{n+1} -\sqrt{n} = 0."$$ Very good work.
How do I integrate this shape in 3D given as the area in between $z=\sqrt 2$ $;z=x^2+y^2$ and $x^2+y^2+z^2=2$?
I assume you want to find the volume of the described region, and are not integrating some function over that region. The region is the first-octant part of the region between the two surfaces. Your region $G_1$ seems to be the upper portion, above the ring-shaped intersection of the surfaces up to the top of the sphere. Some of your limits on $G_1$ are incorrect. You are right that $1\le z\le \sqrt 2$, since one can show that the circular intersection happens at $z=1$ and the top of the sphere is at $z=\sqrt 2$. However, the limits on $x$ are given by the inequality $x\ge 0$ and the equation $x^2+y^2+z^2=2$ where $y=0$, namely $0\le x\le\sqrt{2-z^2}$. The resulting limits on $y$ come from the inequality $y\ge 0$ and the same equation: $0\le y\le\sqrt{2-z^2-x^2}$. Therefore your integral should be $$G_1=\int_1^{\sqrt 2}\int_0^{\sqrt{2-z^2}}\int_0^{\sqrt{2-z^2-x^2}}dy\,dx\,dz$$ The limits on $G_2$, the paraboloid below the circular intersection, are similar but based on the equation $z=x^2+y^2$. $z$ goes from $0$ to $1$. The limits on $x$ are given from zero to where $y=0$, and the limits on $y$ come from the equation. The integral is $$G_2=\int_0^1\int_0^{\sqrt z}\int_0^{\sqrt{z-x^2}}dy\,dx\,dz$$ However, I would not recommend finding the volumes by that method. Better would be using cylindrical coordinates, or the equivalent of using the method of disks to find the volume in all four octants above $z=0$ and dividing the result by four. Let me know if you need details.
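If it helps, here is a rough numerical cross-check (grid sizes are my own arbitrary choice): the disk-method value for the first-octant volume, plus a crude midpoint Riemann sum of the $G_2$ triple integral, which should come out to $\pi/8$.

```python
import math

# Disk method in the first octant: quarter-disk cross sections,
# radius^2 = 2 - z^2 on the sphere part, radius^2 = z on the paraboloid part.
G1 = (math.pi/4) * ((2*math.sqrt(2) - 2*math.sqrt(2)/3) - (2 - 1/3))
G2 = (math.pi/4) * 0.5  # integral of z from 0 to 1 is 1/2
print(G1 + G2)          # total first-octant volume

# Midpoint Riemann sum for the G_2 triple integral (inner y-integral done exactly).
n = 400
riemann = 0.0
for i in range(n):
    z = (i + 0.5) / n
    dx = math.sqrt(z) / n
    for j in range(n):
        x = (j + 0.5) * dx
        riemann += math.sqrt(z - x*x) * dx / n
print(riemann)  # close to pi/8
```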
Let $I$ and $J$ be ideals of a commutative ring $R$ such that $I + J = R$. Show that there is an ideal $K$ in $R$ with $R/K \cong R/I \times R/J$
Consider $f=(p_I,p_J):R\rightarrow R/I\times R/J$, where $p_I$ is the quotient map $R\rightarrow R/I$; you have to show that $f$ is surjective, and its kernel will be the required $K$. Let $r,r'\in R$. Since $I+J=R$, there exist $i,i'\in I$ and $j,j'\in J$ with $i+j=r$ and $i'+j'=r'$. We have $p_I(i+j)=p_I(j)=p_I(r)$ and $p_J(r')=p_J(i')$. We deduce that $f(i'+j)=(p_I(i'+j),p_J(i'+j))=(p_I(j),p_J(i'))=(p_I(r),p_J(r'))$, so $f$ is surjective. We denote by $K$ its kernel and deduce that $R/K$ is isomorphic to $R/I\times R/J$.
Prove that GCD$(n^a - 1,n^b -1)= n^{GCD(a,b)} -1$
By symmetry, we can assume $b\geq a$. Now observe that $n^b-1=n^{b-a}(n^a-1)+n^{b-a}-1$. So $\gcd(n^b-1,n^a-1)=\gcd(n^{b-a}(n^a-1)+n^{b-a}-1,n^a-1)=\gcd(n^{b-a}-1,n^a-1)$. Continuing in this way, we find that the exponent ends up as $\gcd(a,b)$, exactly as in the Euclidean algorithm.
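A brute-force spot check of the identity over a grid of small cases (the ranges are arbitrary):

```python
from math import gcd

# Verify gcd(n^a - 1, n^b - 1) == n^gcd(a,b) - 1 for small n, a, b.
for n in (2, 3, 10):
    for a in range(1, 8):
        for b in range(1, 8):
            assert gcd(n**a - 1, n**b - 1) == n**gcd(a, b) - 1
print("ok")
```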
Describing $Hom(\mathbb{Z}^2, G)$ as a subset of $G \times G$
Note that the map is completely determined by the images of $(1,0)$ and $(0,1)$, but these commute in the domain, so their images must commute in $G$. This means the hom-set consists of commuting pairs of elements of $G$. Your last observation is not correct, because $\mathbb Z\times \mathbb Z$ is not the coproduct of $\mathbb Z$ with itself in groups, so that isomorphism doesn't hold. It holds, however, if $G$ is abelian.
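As a concrete illustration, one can count $|\operatorname{Hom}(\mathbb Z^2, G)|$ for a small non-abelian $G$ by counting commuting pairs. For $G=S_3$ the count is $18$, since summing centralizer sizes over the elements gives $6+6+6$. A sketch:

```python
from itertools import permutations, product

def compose(p, q):  # (p o q)(i) = p[q[i]], permutations stored as tuples
    return tuple(p[q[i]] for i in range(3))

S3 = list(permutations(range(3)))
pairs = [(p, q) for p, q in product(S3, repeat=2) if compose(p, q) == compose(q, p)]
print(len(pairs))  # 18 commuting pairs, so |Hom(Z^2, S_3)| = 18
```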
Time averaged delta function
Dirac delta is a distribution, which means that it is a linear functional acting upon the space of test functions. The same is true for $f$. That said, if you want to recognize $f$, then you can first apply $f$ to test functions. Now if $\varphi \in \mathcal{D}(\mathbb{R})$ is a test function, then (by abusing notations to denote the distribution-function pairing by the usual integral notation) we find that \begin{align*} \int_{\mathbb{R}} \varphi(x)f(x) \, dx &=\int_{\mathbb{R}} \varphi(x) \left( \frac{\omega}{2\pi} \int_{0}^{2\pi/\omega} \delta(x - a\sin(\omega t)) \, dt \right) \, dx \\ &= \frac{\omega}{2\pi} \int_{0}^{2\pi/\omega} \left( \int_{\mathbb{R}} \varphi(x) \delta(x - a\sin(\omega t)) \, dx \right) \, dt \\ &= \frac{\omega}{2\pi} \int_{0}^{2\pi/\omega} \varphi(a\sin(\omega t)) \, dt \tag{1} \\ &= \frac{1}{\pi} \int_{-a}^{a} \frac{\varphi(x)}{\sqrt{a^2 - x^2}} \, dx. \tag{2} \end{align*} The computation $\text{(1)}$ tells that your integral is just a disguise of the following familiar-looking linear functional $$ \varphi \quad \mapsto \quad \frac{\omega}{2\pi} \int_{0}^{2\pi/\omega} \varphi(a\sin(\omega t)) \, dt. $$ Moreover, $\text{(2)}$ tells that $f$ can be identified with the following function $$ f(x) = \frac{1}{\pi\sqrt{a^2 - x^2}} \mathbf{1}_{[-a,a]}(x). $$ This is not surprising, since you are building up $f$ by summing up infinitesimal masses.
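The agreement between $\text{(1)}$ and $\text{(2)}$ can be sanity-checked numerically with a sample test function, say $\varphi(x)=x^2$, for which both sides equal $a^2/2$ (the crude midpoint discretizations below are my own choice):

```python
import math

a = 2.0
phi = lambda x: x * x  # sample test function

# (1): average of phi(a sin(theta)) over one period (omega drops out).
n = 100000
time_avg = sum(phi(a*math.sin(2*math.pi*(k + 0.5)/n)) for k in range(n)) / n

# (2): (1/pi) * integral of phi(x)/sqrt(a^2 - x^2) over (-a, a), midpoint rule
# (midpoints keep us away from the endpoint singularity).
m = 100000
h = 2*a/m
density_avg = sum(
    phi(-a + (k + 0.5)*h) / math.sqrt(a*a - (-a + (k + 0.5)*h)**2)
    for k in range(m)
) * h / math.pi

print(time_avg, density_avg)  # both approximately a^2/2 = 2.0
```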
Why can't change of variables be done using the density distribution?
The mechanics of doing a transformation differ, depending on whether the distribution is discrete or continuous. Continuous Distribution. Suppose $X \sim \mathsf{Exp}(rate = \lambda = 2),$ so that the PDF is $f_X(x) = \lambda e^{-\lambda x},$ for $x > 0;$ and the CDF is $F_X(x) = 1 - e^{-\lambda x},$ for $x > 0.$ Method 1: The CDF of $Z = 5X$ can be found as $$F_Z(z) = P(Z \leq z) = P(5X \leq z) = P(X \leq .2z) = 1 - e^{-\lambda(.2z)} = 1 - e^{-.2\lambda z},$$ for $z > 0.$ Thus $Z \sim \mathsf{Exp}(rate = .2\lambda = \lambda/5).$ Notice that $E(X) = 1/2 = 1/\lambda$ and $E(Z) = E(5X) = 5E(X) = 5/\lambda = 2.5.$ Method 2: Denote the transformation as $z = h(x) = 5x,$ so that the inverse transformation is $x = h^{-1}(z) = 0.2z$ and $dx/dz = 0.2.$ Then for single-valued increasing transformations such as this one, there is a theorem that states: $$f_Z(z) = f_X(h^{-1}(z))\times \frac{dh^{-1}}{dz} = f_X(0.2z)\times 0.2 = \lambda e^{-\lambda(0.2z)}\times 0.2 = .4 e^{-0.4 z},$$ for $z > 0.$ This is the density function of $\mathsf{Exp}(2/5).$ I suppose you can find this theorem in your text. Intuitively, the reason for the factor $dh^{-1}/dz = .2$ is that the PDF of $Z$ is 'five times as wide' as the PDF of $X,$ so it needs to be 'one-fifth as tall' in order to maintain the mandatory unit area under the PDF curve. (Under each curve, 95% of the area is to the left of the vertical red line.) Discrete Distribution. Suppose that $Y$ is the number of spots showing when a fair die is rolled, and that the payout in a game is \$5 per spot. Then the payout from a roll of the die is $W = 5Y$ dollars. The possible values of $Y$ are the integers from 1 through 6. The possible values of $W$ are 5, 10, 15, 20, 25, 30. For $i = 1, 2, 3, 4, 5,$ we have $p_Y(i) = P(Y = i) = 1/6.$ And $p_W(5i) = P(W = 5i) = 1/6.$ Here, $E(Y) = 3.5$ and $E(W) = E(5Y) = 5E(Y) = 5(3.5) = 17.5$ dollars.
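A quick simulation (the sample size and check point are arbitrary) confirming that $Z=5X$ has the $\mathsf{Exp}(rate=\lambda/5)$ CDF:

```python
import random, math

random.seed(0)
lam = 2.0
zs = [5 * random.expovariate(lam) for _ in range(100000)]  # Z = 5X

z0 = 3.0
empirical = sum(z <= z0 for z in zs) / len(zs)
theoretical = 1 - math.exp(-(lam/5) * z0)  # Exp(rate = 0.4) CDF at z0
print(empirical, theoretical)  # both approximately 0.699
```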
change of variables of a ODE, proof
Your original ODE is: $$\frac{d^2y}{dx^2} = -\frac{y}{x^4}$$ If we let $s= x^{-1}$, then we have: $$\frac{d^2y}{dx^2} = \frac{d}{dx} \left(\frac{dy}{dx}\right)$$ But, $$\frac{dy}{dx} = \frac{dy}{ds} \frac{ds}{dx} = \frac{dy}{ds} [-x^{-2}]$$ Thus, we have: $$\frac{d^2y}{dx^2} = \frac{d}{dx} \left(\frac{dy}{ds} [-x^{-2}]\right)$$ By the product rule, the above simplifies to: $$\frac{d^2y}{dx^2} = -x^{-2} \frac{d}{dx} \left(\frac{dy}{ds}\right) + \frac{dy}{ds} [2x^{-3}]$$ The first term above simplifies to: $$-x^{-2} \frac{d^2y}{ds^2} \left(\frac{ds}{dx}\right)$$ which in turn simplifies to: $$x^{-4} \frac{d^2y}{ds^2}$$ Putting together everything we have: $$x^{-4} \frac{d^2y}{ds^2} + \frac{dy}{ds} [2x^{-3}] = -s^4 y$$ Finally, since $x^{-4}=s^4$ and $x^{-3}=s^3$, dividing through by $s^4$ gives $$\frac{d^2y}{ds^2} + \frac{2}{s}\frac{dy}{ds} + y = 0.$$
Find all solutions to the functional equation $f(x) +f(x+y)=y+2 $
You are correct. Setting $y = 0$ gives us $$ \forall x \in \mathbb{R} : f(x) = 1$$ In particular, $$ \forall y \in \mathbb{R} : 1 + 1 = y + 2 \iff y = 0$$ which is clearly a contradiction, so no such function exists!
Residue of $f(z) = \frac{e^{3z}}{(z+1)^2}$
Your thoughts are correct. The residue $= 3e^{-3}.$
Can I reverse an ODE?
You need to distinguish the independent variable of the equation and the fixed time where you consider the correspondence of initial point and final point. Name the latter time $T$, the points $x_0$ and $x_T$. Then to reverse the flow you have to consider the problem $$ \dot x(t)=u(t,x(t)),~~ x(T)=x_T $$ where you want to compute $x_0=x(0)$ backwards in time. If you want to integrate forward in time, you can consider the function $y(t)=x(T-t)$, which then has the differential equation $$ \dot y(t)=-\dot x(T-t)=-u(T-t,x(T-t))=-u(T-t,y(t)),~~ y(0)=x_T. $$
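A small RK4 sketch of this recipe (the field $u$ and the numbers are arbitrary examples of my choosing): integrate forward to obtain $x_T$, then integrate $\dot y=-u(T-t,y)$ forward from $y(0)=x_T$ and recover $x_0$.

```python
import math

def rk4(f, y0, t0, t1, n=1000):
    """Classical Runge-Kutta integration of y' = f(t, y) from t0 to t1."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h*k1/2)
        k3 = f(t + h/2, y + h*k2/2)
        k4 = f(t + h, y + h*k3)
        y += h*(k1 + 2*k2 + 2*k3 + k4)/6
        t += h
    return y

u = lambda t, x: math.sin(t) * x   # example vector field
T, x0 = 2.0, 1.5
xT = rk4(u, x0, 0.0, T)                               # forward flow
x0_back = rk4(lambda t, y: -u(T - t, y), xT, 0.0, T)  # reversed flow
print(x0, x0_back)  # x0_back recovers x0
```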
Is an integral ring extension a local property?
Unintentionally answering my own question here, although I will leave the question open to see if there is a more direct solution. But the answer to this question: Show that a morphism of algebraic sets which restricts to finite morphisms of principal sets is finite also answers my question using the following fact: Let $\phi : R \to S$ be a ring extension such that $S$ is finitely generated as an $R$-algebra. Then $\phi$ is an integral ring extension if and only if $S$ is finitely generated as an $R$-module.
Splitting up an infinite sum
You could note that$$\sum_{n=1}^\infty\frac{n-2n\sqrt n+n^2}{n^3}=\sum_{n=1}^\infty\frac{1-2\sqrt n+n}{n^2}$$and that$$\lim_{n\to\infty}\frac{\frac{1-2\sqrt n+n}{n^2}}{\frac1n}=\lim_{n\to\infty}\frac{1-2\sqrt n+n}n=1.$$Therefore, your series diverges, by the limit comparison test with the harmonic series $\sum\frac1n$.
fibres are connected, total space connected $\Rightarrow$ base space is so?
N.B: The original question has been completely re-written since this answer was posted. Check the page history to see the original question. In short: yes! You have a fibre bundle $\pi : T \twoheadrightarrow B$, where $\pi$ is a continuous surjection and both the fibres $\pi^{-1}(b)$ and the total space $T$ are connected. Connectedness is preserved by continuous maps, so if $T$ is connected then $B$ must be connected too. We can show that the base space is connected as follows: Assume that the base space $B$ is disconnected. Then there exist non-empty open subsets $X,Y \subset B$ such that $X \cup Y = B$ while $X \cap Y = \emptyset$. Since $\pi$ is a continuous surjection, it follows that $\pi^{-1}(X)$ and $\pi^{-1}(Y)$ are non-empty open subsets of $T$ with $\pi^{-1}(X) \cup \pi^{-1}(Y) = T$ and $\pi^{-1}(X) \cap \pi^{-1}(Y) = \emptyset$. It follows that $T$ is also disconnected, which is a contradiction.
Proving big-o notation with induction
Assuming $n=2^k$, let's unroll the recurrence a few steps: $$f(n)=2f\left( \frac{n}{2} \right)+n=2\left[2f\left( \frac{n}{2^2} \right)+\frac{n}{2} \right]+n=\\ =2^2f\left( \frac{n}{2^2} \right)+2n=2^3f\left( \frac{n}{2^3} \right)+3n=\cdots=\\ =2^kf\left( \frac{n}{2^k} \right)+kn$$ Now using $k=\log_{2}n$, we obtain $$f(n)=nf(1)+n\log_{2}n \in O(n\log n)$$
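The closed form is easy to confirm by running the recurrence directly (taking $f(1)=1$ as a sample base value):

```python
def f(n):
    # f(n) = 2 f(n/2) + n with f(1) = 1, for n a power of two
    return 1 if n == 1 else 2*f(n // 2) + n

for k in range(0, 15):
    n = 2**k
    assert f(n) == n + n*k  # n*f(1) + n*log2(n)
print("ok")
```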
Show that $\{\mathbf{u},\mathbf{v}\}$ is linearly independent.
Let $au+bv=0$, where $\{a,b\}\subset\mathbb R$. Thus, $$(au+bv)\cdot u=0$$ or $$a(u\cdot u)+b(v\cdot u)=0$$ or $$a(u\cdot u)=0.$$ Can you end it now?
Cantor-Bernstein theorem for magmas
No, here's a counterexample. Let $H = \mathbb R^2$ under addition, $G = H_1 = \mathbb R \times [0,\infty)$, and $G_1 = \mathbb R \times \{0\}$. It is well known that $H \cong G_1$, so all we need to prove is that $G \not\cong H$. To that end, observe that every element of $H$ has an inverse, but some elements of $G$ have no inverse. This prohibits there being ANY isomorphism between the two magmas.
I have to show either $q_\lambda$ is relatively prime to $p$
Since $F[x]$ is a Euclidean domain, irreducible and prime are equivalent concepts. Hence $p$ and $q_\lambda$ are either coprime or associated. Since both are monic, if they are associated, they are equal.
$K[X,Y]$ is a PID and a primary ideal in it is not power of a maximal ideal?
Yes, if $Q$ is an ideal and $\sqrt{Q}$ is maximal then $Q$ is primary. This is proposition 4.2 in Atiyah and MacDonald.
Ordering sequences of prime decompositions
First, I believe the actual problem can be more explicitly stated as something like Out of $n + 1$ distinct positive integers $\le 2n$, prove you can find distinct $p,q$ s.t. $p \mid q$. The "pigeonhole", so to speak, occurs when you check just one prime, i.e., $2$, and just treat potential values of the odd part of each integer as a separate group. Doing this simplifies your solution attempt to something which can then be fairly easily shown, such as I've done below. In particular, note each integer $1 \le i \le 2n$ can be uniquely expressed as a product of some power of $2$ (including just a power of $0$) and an odd part, i.e., $$i = 2^j k, \text{ with } j \ge 0, \gcd(k,2) = 1 \tag{1}\label{eq1A}$$ The range $1$ to $2n$ has $n$ odd integers which $k$ can represent, i.e., $1, 3, 5, \ldots, 2n - 1$. Since there are $n + 1$ distinct integers which have been selected, by the Pigeonhole principle, at least $2$ of them, call them $p$ and $q$ with $q \gt p$, have the same $k$, say $k_1$, i.e., $$p = 2^{j_1}k_1, \; q = 2^{j_2}k_1 \text{ with } j_2 \gt j_1 \tag{2}\label{eq2A}$$ Thus, $q = \left(2^{j_2 - j_1}\right)p$ meaning that $p \mid q$.
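The claim is easy to brute-force for small $n$, which makes a nice check of the argument:

```python
from itertools import combinations

# Every (n+1)-element subset of {1, ..., 2n} contains p < q with p | q.
for n in range(1, 8):
    for subset in combinations(range(1, 2*n + 1), n + 1):
        assert any(q % p == 0 for p, q in combinations(subset, 2))
print("ok")
```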
Prove that an expression is not a prime
For (a) and (b), we can use the Sophie Germain identity (the identity underlying Aurifeuillean factorizations): $$ a^{4}+4b^{4}=(a^{2}-2ab+2b^{2})(a^{2}+2ab+2b^{2}) $$ Take $a=5^3$ and $b=2^2$. Then $$ 2^{10}+5^{12} = 4b^{4}+a^{4} = a^{4}+4b^{4} = (a^{2}-2ab+2b^{2})(a^{2}+2ab+2b^{2}) = 14657 \cdot 16657 $$ Now take $a=65^{16}$ and $b=2$. Then $$ 65^{64}+64 = a^{4}+4b^{4} = (a^{2}-2ab+2b^{2})(a^{2}+2ab+2b^{2}) = \small 10309258098174834118790766041058622855698420907745361328133 \quad\cdot 10309258098174834118790766041870898989955232237091064453133 $$ (c) is easier: $$ x(x+12)(x+18)+320 = (x + 2) (x + 8) (x + 20) $$ Now take $x=989$ and get $$ 989\cdot 1001\cdot 1007+320 = 991 \cdot 997 \cdot 1009 $$ (I was lucky that the first form I tried, $x(x+12)(x+18)+320$, factored so nicely!)
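The factorizations in (a) and (c) are quick to verify with exact integer arithmetic:

```python
# (a): 2^10 + 5^12 with a = 5^3, b = 2^2
a, b = 5**3, 2**2
assert 2**10 + 5**12 == (a*a - 2*a*b + 2*b*b) * (a*a + 2*a*b + 2*b*b)
assert (a*a - 2*a*b + 2*b*b, a*a + 2*a*b + 2*b*b) == (14657, 16657)

# (c): x(x+12)(x+18) + 320 = (x+2)(x+8)(x+20) at x = 989
x = 989
assert x*(x + 12)*(x + 18) + 320 == (x + 2)*(x + 8)*(x + 20) == 991*997*1009
print("ok")
```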
hat problem and probability
Given a cyclic binary $1$-covering code of length $7$, the prisoners can ensure that they're freed unless the hat assignment corresponds to one of the codewords. Each prisoner should form the two binary strings obtained by completing what she sees with the two possible colours of her hat. If one colour yields a codeword, she should guess the other colour; otherwise she should remain silent. Since the code is $1$-covering, at least one prisoner will guess her colour, and she will guess correctly unless the hat assignment corresponds to a codeword. Thus the probability of failing is the proportion of words that are codewords. A minimal cyclic binary $1$-covering code of length $7$ is given by $0000000$, the seven cyclic shifts of $1101000$ and the complements of those eight strings, for a total of $16$ codewords and a probability $\frac{16}{128}=\frac18$ of failing, and thus $\frac78$ of being freed.
Arbitrary intersection of compact sets is compact.
The proof is somewhat dependent on the definitions you use. The simple part (as you basically have only one possible definition) is the intersection of bounded sets. You either define bounded as having a radius such that every member of the set has absolute value (or norm) less than or equal to the radius, or more generally that any two points $x,y$ in the set have distance $d(x,y)$ less than or equal to the radius (I use the latter). I don't require the radius to be the smallest bound here (if one bound exists, the infimum of all bounds exists and is itself a bound, by the greatest-lower-bound property). Let $B_j$ be bounded sets, which means there exists a radius $r_j$ such that $d(x,y)\le r_j$ for all $x,y\in B_j$. Now consider the intersection $B = \bigcap B_j$ and fix one index $o$. Since $B \subseteq B_o$, we have for any $x, y\in B$ that $x,y\in B_o$ and therefore $d(x,y)\le r_o$, so $r_o$ is a radius for $B$. To prove that the intersection of closed sets is closed, we proceed differently depending on the definition: Closed meaning that the complement is open: use De Morgan's laws and the fact that $\overline{F_j}$ is open; then $\bigcap F_j = \overline{\bigcup \overline {F_j}}$, and since the union of open sets is open, the intersection is the complement of an open set. Closed meaning containing all its limit points: suppose $a$ is a limit point of $F =\bigcap F_j$; then there exists a sequence $a_k\in F$ with $\lim_{k\to\infty}a_k = a$. Since $F\subseteq F_j$, the $a_k$ also form a sequence in $F_j$, and therefore $a\in F_j$ as $F_j$ is closed. This holds for every $j$, so $a\in \bigcap F_j = F$. Hence $F$ contains all its limit points. Also for compactness it's a matter of definition: Compact meaning closed and bounded: the intersection of closed sets is closed, and the intersection of bounded sets is bounded; therefore the intersection of compact sets is compact.
Compact meaning that every open cover has a finite subcover: this is a lot trickier (and may be out of your scope), and I will need more assumptions here. Suppose we have an open cover $F \subseteq \bigcup \Omega_j$. We extend this to a cover of any one of the sets, say $F_o$, by observing that for each $\phi\in F_o\setminus F$ there is an open set $\Omega_\phi$ containing $\phi$ that doesn't intersect $F$ (assuming the space is regular Hausdorff, for example the set of real numbers). Now we have an open cover of $F_o$, which has a finite subcover; discarding the added sets $\Omega_\phi$ from that finite subcover leaves a finite subcover of $F$.
Integral and differential calculus, after a good (and mathematically correct) explanation of the two.
You are correct. Just some further explanation: Regarding the derivative part, here is a more precise notation: we usually say the slope is $\frac{\Delta{y}}{\Delta{x}}$ as $\Delta{x} \to 0$, and we use the notation $\frac{dy}{dx}$ for the exact derivative. Regarding the integral, your explanation and approach are again correct. A "Riemann sum" is another kind of approximation for calculating the area under a curve, and it may be easier because it uses rectangles, which are a simpler shape to work with. But, overall, you are good to go.
Rigorous Proof?: Proving Cauchy Criterion of Integrals
For every $\varepsilon\gt0$, one wants to exhibit a partition $P$ such that $U(f,P)-L(f,P)\lt\varepsilon$. Since $U(f,P_n)\to s$ when $n\to\infty$, there exists $n_U(\varepsilon)$ such that for every $n\geqslant n_U(\varepsilon)$, $|U(f,P_n)-s|\lt\frac12\varepsilon$. Since $L(f,P_n)\to s$ when $n\to\infty$, there exists $n_L(\varepsilon)$ such that for every $n\geqslant n_L(\varepsilon)$, $|L(f,P_n)-s|\lt\frac12\varepsilon$. Choose $n^*(\varepsilon)=\max\{n_U(\varepsilon),n_L(\varepsilon)\}$ and $P=P_{n^*(\varepsilon)}$. Then $$ U(f,P)-L(f,P)\leqslant |U(f,P_{n^*(\varepsilon)})-s|+|L(f,P_{n^*(\varepsilon)})-s|\lt \tfrac12\varepsilon+\tfrac12\varepsilon=\varepsilon. $$ This proves that $f$ is integrable. Furthermore, $L(f,P_n)\leqslant \int\limits_a^b f\leqslant U(f,P_n)$ for every $n$, hence $$ \sup\limits_n\ L(f,P_n)\leqslant \int\limits_a^b f\leqslant \inf\limits_n\ U(f,P_n). $$ The supremum on the LHS and the infimum on the RHS are both equal to $s$ hence $\int\limits_a^b f=s$.
How to yield lcm between several numbers?
Welcome to MSE! If the lcm between two numbers $a$ and $b$ is denoted by $[a,b]$, then for three numbers $a,b,c$, $[a,b,c]=[[a,b],c]$, and so on.
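In code, the same folding idea looks like this (the helper names are mine; Python 3.9+ also ships `math.lcm` directly):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def lcm_many(nums):
    # [a, b, c, ...] -> lcm(lcm(a, b), c) -> ..., folding pairwise
    return reduce(lcm, nums)

print(lcm_many([4, 6, 10]))  # 60
```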
Expectation of a multivariate normal distribution
In the general case, to get $E(x)$ you can complete the square in $y, z, u$, etc., by bringing the terms in $xy, xz, xu$, etc. together with $y^2, z^2, u^2$. These are then integrated out with respect to $dy\,dz\,du\ldots$ Here $F$ is a quadratic form in $n$ variables consisting of sums of terms like $ax^2 + bxy + cy^2 + dxz + gz^2 + \cdots$. Completing the square on the terms involving $y, z$, etc., you get $ax^2 + c(y-h)^2 + g(z-k)^2 + \cdots$ plus a constant term. The $h, k,\ldots$ will involve $x$. But these will integrate out with respect to $dy\,dz\,du\ldots$, as the range is infinite, to give a constant factor. You are then left with the one-variable $x$ integration, which is the normal expectation. You could state this in more general terms using indices $i, j$.
How to change Cylindrical $\nabla$ to cartesian $\nabla$?
Note that the position vector is invariant under the coordinate transformation and therefore we can write $$\hat xx+\hat yy=\hat \rho\rho \tag 1$$ Taking partial derivatives of $(1)$ with respect to $\rho$ and $\phi$, we obtain respectively $$\begin{align} \hat \rho&=\hat x \frac{\partial x}{\partial \rho}+\hat y \frac{\partial y}{\partial \rho} \\\\ &=\hat x \frac{\partial \rho}{\partial x}+\hat y \frac{\partial \rho}{\partial y}\tag 2\\\\ \hat \phi&=\frac{1}{\rho}\left(\hat x \frac{\partial x}{\partial \phi}+\hat y \frac{\partial y}{\partial \phi}\right)\\\\ &= \rho\left(\hat x \frac{\partial \phi}{\partial x}+\hat y \frac{\partial \phi}{\partial y}\right) \tag 3 \end{align}$$ Using $(2)$ and $(3)$ in the expression for the gradient operator in cylindrical coordinates yields $$\begin{align} \nabla &=\hat \rho\frac{\partial }{\partial \rho}+\hat \phi \frac{1}{\rho}\frac{\partial }{\partial \phi}+\hat z\frac{\partial }{\partial z}\\\\ &=\left(\hat x \frac{\partial \rho}{\partial x}+\hat y \frac{\partial \rho}{\partial y}\right)\frac{\partial }{\partial \rho}+\left(\rho\left(\hat x \frac{\partial \phi}{\partial x}+\hat y \frac{\partial \phi}{\partial y}\right) \right)\frac{1}{\rho}\frac{\partial }{\partial \phi}+\hat z\frac{\partial }{\partial z}\\\\ &=\hat x \left(\frac{\partial \rho}{\partial x}\frac{\partial }{\partial \rho}+\frac{\partial \phi}{\partial x}\frac{\partial }{\partial \phi}\right)+\hat y \left(\frac{\partial \rho}{\partial y}\frac{\partial }{\partial \rho}+\frac{\partial \phi}{\partial y}\frac{\partial }{\partial \phi}\right)+\hat z \left(\frac{\partial }{\partial z}\right)\\\\ &=\hat x \frac{\partial }{\partial x}+\hat y \frac{\partial }{\partial y}+\hat z \frac{\partial }{\partial z} \end{align}$$ Note that in arriving at $(2)$ and $(3)$, we made use of the equalities $$\begin{align} \frac{\partial x}{\partial \rho}&=\frac{\partial \rho}{\partial x}\\\\ \frac{\partial y}{\partial \rho}&=\frac{\partial \rho}{\partial y}\\\\ \frac{\partial x}{\partial \phi}&=\rho^2 \frac{\partial \phi}{\partial x}\\\\ \frac{\partial y}{\partial \phi}&=\rho^2\frac{\partial \phi}{\partial y} \end{align}$$
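As a quick check (my addition, using SymPy), the four partial-derivative equalities can be verified symbolically, writing $\rho=\sqrt{x^2+y^2}$ and $\phi=\operatorname{atan2}(y,x)$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r, p = sp.symbols('rho phi', positive=True)

# Cartesian coordinates as functions of (rho, phi), and vice versa.
X, Y = r*sp.cos(p), r*sp.sin(p)
rho, phi = sp.sqrt(x**2 + y**2), sp.atan2(y, x)

def on_curve(expr):
    # substitute x = rho*cos(phi), y = rho*sin(phi) and simplify
    return sp.simplify(expr.subs({x: X, y: Y}))

assert sp.simplify(sp.diff(X, r) - on_curve(sp.diff(rho, x))) == 0       # dx/drho = drho/dx
assert sp.simplify(sp.diff(Y, r) - on_curve(sp.diff(rho, y))) == 0       # dy/drho = drho/dy
assert sp.simplify(sp.diff(X, p) - on_curve(r**2*sp.diff(phi, x))) == 0  # dx/dphi = rho^2 dphi/dx
assert sp.simplify(sp.diff(Y, p) - on_curve(r**2*sp.diff(phi, y))) == 0  # dy/dphi = rho^2 dphi/dy
```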
The minimum value of the polynomial $x(x+1)(x+2)(x+3)$ is
Hint: It is $$x(x+1)(x+2)(x+3)+1=(1+3x+x^2)^2$$
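A quick check of the hint (my addition): the identity holds, so the polynomial is $\ge -1$, with equality at the roots of $x^2+3x+1$.

```python
import sympy as sp

x = sp.symbols('x', real=True)
p = x*(x + 1)*(x + 2)*(x + 3)

# The hinted identity: p + 1 is a perfect square.
assert sp.expand(p + 1 - (x**2 + 3*x + 1)**2) == 0

# Hence p >= -1, with the minimum -1 attained where x^2 + 3x + 1 = 0.
for root in sp.solve(x**2 + 3*x + 1, x):
    assert abs(float(p.subs(x, root)) + 1) < 1e-12
```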
How does extension of restriction of $M$ relate to $M$?
There's a natural short exact sequence of $(A, A)$-bimodules $$0 \to I \to A \otimes_B A \xrightarrow{m} A \to 0$$ where $m$ is the multiplication map and $I$ is its kernel. Geometrically (in the case that everything is commutative), $I$ is the ideal cutting out the relative diagonal $\Delta : \text{Spec } A \to \text{Spec } A \times_{\text{Spec } B} \text{Spec } A$. Tensoring this short exact sequence with an arbitrary left $A$-module $M$, and using the isomorphism $A \otimes_B A \otimes_A M \cong A \otimes_B M$, produces a short exact sequence of left $A$-modules $$0 \to I \otimes_A M \to A \otimes_B M \to M \to 0$$ since $\text{Tor}_1(A, M) = 0$. So the kernel is precisely $I \otimes_A M$. This always vanishes iff the multiplication map $m$ is an isomorphism, which occurs for example if (let me restrict to the commutative case for safety) $A$ is a quotient or localization of $B$. More generally, if $A$ and $B$ are commutative, the multiplication map $m$ being an isomorphism is also equivalent to the map $f : B \to A$ being an epimorphism of commutative rings. There are a lot of other things to say here but I'm not sure what you're interested in. You can think of the map $A \otimes_B M \to M$ as the counit of the natural adjunction between $A$-modules and $B$-modules given by restriction and extension of scalars. As such it is part of the structure of an induced comonad on $A$-modules which can be used to study the question of when an $A$-module descends to a $B$-module. In this setting the multiplication map $m$ being an isomorphism is equivalent to the counit being an isomorphism, which is equivalent to the right adjoint (restriction of scalars) being fully faithful.
Subgroups of $\Bbb Z_n$
Pair each $b$ in $H$ with its additive inverse, $n-b$.
Proof of $\lim_{k\to \infty}{|x^k-x|\over { 1+|x^k-x|}}=0$
If $0<x<1$, then $|x^k-x|=x-x^k\to x$, so $$\lim_{k\rightarrow\infty}\frac{x-x^k}{1+x-x^k}=\frac{x}{1+x}.$$ If $x=1$, then $x^k-x=0$ for every $k$, so $$\lim_{k\rightarrow\infty}\frac{|x^k-x|}{1+|x^k-x|}=0.$$ If $x>1$, then $|x^k-x|=x^k-x\to\infty$, so $$\lim_{k\rightarrow\infty}\frac{x^k-x}{1+x^k-x}=1-\lim_{k\rightarrow\infty}\frac{1}{1+x^k-x}=1.$$ (For negative $x$ the sign of $x^k-x$ alternates, but $|x^k-x|\to|x|$ when $|x|<1$ and $|x^k-x|\to\infty$ when $|x|>1$, giving the limits $\frac{|x|}{1+|x|}$ and $1$; at $x=-1$ the ratio alternates between $0$ and $\frac23$, so no limit exists.)
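A numerical illustration of these cases (my addition, interpreting $x^k$ as the $k$-th power):

```python
def ratio(x, k):
    d = abs(x**k - x)
    return d / (1 + d)

# 0 < x < 1: the ratio tends to x/(1+x), not 0.
x = 0.5
assert abs(ratio(x, 60) - x/(1 + x)) < 1e-12

# x = 1: the ratio is identically 0.
assert ratio(1.0, 60) == 0.0

# x > 1: the ratio tends to 1.
assert abs(ratio(2.0, 60) - 1.0) < 1e-12
```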
The value of the Nth derivative equals the function.
First observe that if $f(x) = f^{(n)}(x)$ for all $n \geq 1$ and for all $x$, then in particular $f(x) = f'(x)$. The answer is actually different if the domain is disconnected. We only consider $f$ defined on an interval (which is connected, of course). Claim: $f = f'$ if and only if there is a constant $k \in \mathbb{R}$ such that $f(x) = ke^{x}$. The only non-trivial direction is the "only-if". Assume $f = f'$ on an interval. \begin{eqnarray*} f' &=& f \\ f' - f &=& 0\\ e^{-x}f' - e^{-x}f &=& 0 \\ (e^{-x}f)' &=& 0 \end{eqnarray*} Thus, since the interval is connected, we know that $(e^{-x}f) = k$ for some constant of integration $k$. This yields $f = ke^x$.
Normal distribution probability density function for dummies
The simplest cumulant-generating function of a non-constant variable is quadratic, say $i\mu t-\frac12\sigma^2t^2$ (see here for further motivation). It can be shown that the resulting distribution has mean $\mu$ and variance $\sigma^2$, and the PDF you cited. Because of how distributions respond to linear transformations, we need only check the $\mu=0,\,\sigma=1$ case, i.e. prove $$\varphi(t):=\exp\left(-\frac12t^2\right)\implies\int_{\Bbb R}\frac{1}{2\pi}\varphi(t)\exp(-itx)dt=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right).$$ (This integral is the PDF, by the inversion formula.) The proportionality constants boil down to the $\alpha=\frac12$ special case of $$\int_{\Bbb R}\exp(-\alpha y^2)dy=\sqrt{\frac{\pi}{\alpha}}.$$ Again, verification need only check $\alpha=1$. This has many proofs, the first here being the standard one in textbooks.
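As a numerical sanity check (my addition, using SciPy), both the Gaussian integral and the inversion integral can be verified; by symmetry the imaginary part of the inversion integrand cancels, leaving a cosine transform:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Gaussian integral: int exp(-y^2) dy = sqrt(pi)  (the alpha = 1 case)
val, _ = quad(lambda y: np.exp(-y**2), -np.inf, np.inf)
assert abs(val - np.sqrt(np.pi)) < 1e-8

# Fourier inversion of phi(t) = exp(-t^2/2) recovers the standard normal PDF.
def pdf_via_inversion(x):
    v, _ = quad(lambda t: np.exp(-t**2/2) * np.cos(t*x), -np.inf, np.inf)
    return v / (2*np.pi)

for x in (0.0, 0.5, 1.7):
    assert abs(pdf_via_inversion(x) - norm.pdf(x)) < 1e-9
```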
Find the average to reduce the expenditure
In both (a) and (c) the only way to get "95" from the given data is to subtract 5 from 100. But "5" was the desired savings in dollars and 100 was the distance driven in miles. Subtracting dollars from miles is meaningless. In (b) 25/4 is "miles per gallon" divided by "dollars per gallon", so "miles per dollar". The "5" on the right side is 5 dollars hoped to be saved. Multiplying "miles per dollar" by m "miles" will not give "dollars". In (d) 4/25 is "dollars per gallon" divided by "miles per gallon", so "dollars per mile". Here multiplying "dollars per mile" by m "miles" does give dollars! So this one is worth looking at. If he drives m miles and gets 25 miles per gallon, then he uses m/25 gallons. At 4 dollars per gallon that costs (4/25)m dollars. But that should be equal to the 16 - 5 = 11 dollars he wants to spend, not the 5 dollars he wants to save. You are right. None of these is correct.
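Finishing the corrected version of (d) with the numbers from the problem (my addition): at 25 miles per gallon and 4 dollars per gallon, driving $m$ miles costs $(4/25)m$ dollars, and setting this equal to the $16-5=11$ dollars he wants to spend gives $m=68.75$ miles.

```python
# ($/gal) / (mi/gal) = $/mi, so cost-per-mile times miles gives dollars.
cost_per_mile = 4 / 25
m = 11 / cost_per_mile          # miles he can drive on 11 dollars
assert m == 68.75
assert abs(cost_per_mile * m - 11) < 1e-12
```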
Hint to prove that $\phi^n + \phi'^n$ is an integer.
Hint The inductive step will make use of $\varphi^2=\varphi+1$.
Nonhomogeneous Poisson Process question
No, this is not true. To generalize your question a little, you are asking if the value of $N(2)$ has any influence on the value of $N(1)$, and it certainly does. Consider the similar expression $E[N(1) | N(2) = 0].$ In this case, you know that at time $2$ you have had no arrivals, and since $N(1)\leq N(2)$, you also know that you must have $N(1)=0$. This means that $E[N(1) | N(2) = 0]=0.$ On the other hand, you have that $E[N(1)] = \lambda_1,$ where $\lambda_1$ is the Poisson parameter of $N(1)$, so the two expressions are not equal. Concerning your case, given that you know $N(2)=4$, you must have that the value of $N(1)$ is some number in the set $\{0,1,2,3,4\}$, so $N(1)|N(2)=4$ is no longer Poisson, since it can take only a finite number of values. I hope this helps.
How does hard clipping change the frequency of a pure sinusoidal signal?
In general, the Fourier series of a signal $f(t)$ (given all the conditions for such a Fourier expansion are satisfied) is represented as $$ f_0 + \sum_{n=1}^{\infty}{A_n\cos(nt) + B_n \sin(nt)}$$ where $A_n = \frac{1}{\pi} \int_{0}^{2\pi}{f(t) \cos(nt) dt}$ and $B_n = \frac{1}{\pi} \int_{0}^{2\pi}{f(t) \sin(nt) dt}$ If an LTI (linear time-invariant) system is applied to a cosine wave, the output is also a cosine wave, possibly with different phase and amplitude, but no higher order harmonics. Now for the cosine wave passed through this clipping device, the tops and bottoms of the output wave are clipped to $\pm V_s$ when $A > V_s$; we no longer have an LTI system, so we must do the whole Fourier analysis on the output waveform to understand its harmonic content. This distorted wave has a Fourier series in which the $A_n$ coefficients are zero because the output wave is odd in $\omega t$. Thus, to find the Fourier series for this nonlinearity, we only have to evaluate $B_n$ as follows: $$ \begin{aligned} B_n = \frac{1}{\pi} \left[ \int_{0}^{\beta}{V_s \sin(nt) dt} + \int_{\beta}^{\pi-\beta}{\cos(\omega t) \sin(nt) dt} - \\ \int_{\pi-\beta}^{\pi+\beta}{V_s \sin(nt) dt} + \int_{\pi+\beta}^{2\pi-\beta}{\cos(\omega t) \sin(nt) dt} + \int_{2\pi-\beta}^{2\pi}{V_s \sin(nt) dt} \right] \end{aligned} $$ where $\beta\omega = \arccos\left( \frac{V_s}{A}\right)$. I didn't try to simplify these terms, to make the point that when you have to deal with a nonlinear system, you need to proceed with the calculation of Fourier coefficients and figure out how the nonlinear distortion modifies the integrand terms and integral boundaries. Unless some properties of the system force particular Fourier coefficients to be zero, higher order harmonics are not zero in general.
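To see the harmonics appear in practice, here is a small numerical experiment (my addition; the clip level $V_s=0.5$ on a unit-amplitude sine is an arbitrary choice):

```python
import numpy as np

# Clip one period of a unit sine at +/- 0.5 and inspect its spectrum.
N = 4096
t = np.arange(N) / N                                   # one full period
signal = np.clip(np.sin(2 * np.pi * t), -0.5, 0.5)
spec = 2 * np.abs(np.fft.rfft(signal)) / N             # harmonic amplitudes

assert spec[1] > 0.1                       # the fundamental survives
assert spec[3] > 1e-3 and spec[5] > 1e-4   # odd harmonics appear
assert spec[2] < 1e-8 and spec[4] < 1e-8   # even ones vanish by half-wave symmetry
```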
Interval Notation for the set of real numbers
A common misconception beginning mathematicians have is that mathematical notation follows some universal and inviolate rules; perhaps this stems from the fact that mathematical arguments follow rigorous rules of inference. Notation is all about human communication. It is "right" if your readers can easily understand what you are trying to say and "wrong" if it is incomplete, confusing, or misleading. Clashing with some common conventions is one way of being confusing, so it's good to familiarize yourself with the most common ways others write things, but keep in mind that different mathematicians and authors will use different notation and that there is no "one right way" to write mathematics. With that in mind, I would say that you have identified two common ways to define sets: the first as an explicit enumeration of elements, $$S = \{1, 2, 5\}.$$ You can write (small) intervals of natural numbers this way, but it is not so convenient to define intervals of real numbers like this, since you cannot list the uncountably many real numbers between the endpoints $a$ and $b$ of an interval. The above, though, is a special case of set-builder notation $$S = \{x\in\mathbb{R}\ \mathrm{s.t.}\ \textrm{some condition on } x \};$$ using this notation you can write an open interval as $$S = \{x\in\mathbb{R}\ \mathrm{s.t.}\ a < x < b\}.$$ Set-builder notation is extremely powerful and common, but notice that again there is no universal "right" notation; some authors will write ":" or "|" instead of "s.t", etc. The second set of examples you list are common shortcuts for writing intervals, using $()$ for open endpoints and $[]$ for closed ones. Notice that some authors write $]a,b[$ for the open interval instead of $(a,b)$, but yours is probably the most common convention. 
In this notation $(-\infty, \infty)$ would indeed indicate the set of all real numbers, although you should be aware that this notation is not completely free of potential confusion: is this an interval of real numbers, rational numbers, integers, or something else? In context it might be obvious, but there is a potential ambiguity. Like your books, I wouldn't try to write the set of real numbers using an interval at all: this is a bit circular, since intervals are subsets of the real numbers, and in any case it's quite safe to assume that your readers know what the real numbers are without being handed an explicit set. Actually defining the real numbers rigorously is a surprisingly subtle problem, and if you are interested, one keyword for starting your research is "Dedekind cuts." For saying that a variable comes from the real numbers, I would write "$x\in\mathbb{R}$" or "a real number $x$" instead of "$x\in (-\infty,\infty)$" in most situations.
Complete representative set of squares modulo $15$.
If you really wanted to, you could choose the set $\{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14\}$, but notice that $8 \equiv -7 \mod 15$, $9 \equiv -6 \mod 15$, etc. So then it is more convenient to choose the representative set $\{0, \pm 1, \pm 2, \pm 3, \pm 4, \pm 5, \pm 6, \pm 7\}$. This cuts down the amount of work from $15$ computations to $8$, as clearly $a^2 \equiv (-a)^2$.
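The squares modulo $15$ from the smaller representative set, checked against the full residue system (my addition):

```python
# Squares of 0, +/-1, ..., +/-7 modulo 15: only 0..7 need to be squared.
squares = sorted({(a * a) % 15 for a in range(8)})
assert squares == [0, 1, 4, 6, 9, 10]

# Same answer as squaring the full residue system 0..14.
assert squares == sorted({(a * a) % 15 for a in range(15)})
```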
Need help explaining how the elasticity was derived
An economics question? It might belong on a different exchange, but I find myself partial to this one... We have the demand function for good x: QxD = 5 - 2Px + .03I - .4P + 2P and we want to find the own price elasticity of demand, ExD, where Px = 2 and QxD = 1. ExD = 2QxD / 2Px = 2 * 2/1 = -4 The "2" in the elasticity formula (2$ Q_x^D$) is really the "$\partial$" sign ($\partial Q_x^D$), representing a partial derivative (partial slope), and the correct formula is: $ E_x^D = \frac{\partial Q_x^D}{\partial P_x} \frac{ P_x}{Q_x^D} = -2 \frac{2}{1} = -4$ Your interpretation is correct. What happened to .03I - .4P + 2P? When taking a partial derivative, the remaining terms (other than $P_x$) are treated as constants. Now, since the derivative measures the rate of change (or slope), and since the constant terms do not change, their rate of change is zero and they ($I$, $P$, and $P$) drop out of the elasticity term. Does this reply help?
RHL and LHL of $\frac{\sin([x])}{[x]}$
The R.H.L. will have numerator $=0$ and denominator $=0$, since $[x]=0$ just to the right of $0$, and $\cfrac 00$ is undefined, so the right-hand limit does not exist.
Why does tensoring a projective resolution with a flat module give another projective resolution?
This is true with added generality, that is in the non commutative world. So, assume $\phi\colon R\to S$ is a ring homomorphism between not necessarily commutative rings. If $P_{R}$ is a projective right $R-$module, say a direct summand of the free right module $R^{(X)}$ for some set $X$, we get that $$ P_{R}\otimes_{R} S $$ is a direct summand of $R^{(X)}\otimes_{R} S\simeq S^{(X)}$, thus it is a projective right $S-$module. It follows that tensoring with $S$ sends projective resolutions of a right $R-$module $M_{R}$ into $S-$projective resolutions of $M_{R}\otimes_{R} S$. In particular, if $M_{R}$ is projective, so is $M_{R}\otimes_{R} S$ (as an $S$-module). I would like to mention that this allows us to prove what follows. Under the above hypothesis of flatness of $S$, if $C$ is a left $S-$module, then, for each $n\in \mathbb{N}$, we have $$ Tor_{n}^{R}(M_{R},\ C)\simeq Tor_{n}^{S}(M_{R}\otimes_{R} S,\ C). $$ Indeed, the above argument says that, if $P_{\bullet}\to M\to 0$ is a projective resolution of $M_{R}$, then $$Tor_{n}^{S}(M_{R}\otimes_{R} S,\ C)\simeq H_{n}((P_{\bullet}\otimes_{R}S)\otimes_{S} C).$$ Since, for each (projective) $R-$module $P_{R}$, there is a natural isomorphism $$(P_{R}\otimes_{R}S)\otimes_{S} C\simeq P_{R}\otimes_{R}(S\otimes_{S} C)\simeq P_{R}\otimes_{R} C,$$ we get $$H_{n}((P_{\bullet}\otimes_{R}S)\otimes_{S} C)\simeq H_{n}(P_{\bullet}\otimes_{R} C),$$ which allows us to conclude. As a corollary, we get that, if $R, S$ are commutative (with $S$ flat as an $R-$module) and $M,N$ are $R-$modules, then $$Tor_{n}^{R}(M,\ N)\otimes_{R}S\simeq Tor_{n}^{S}(M\otimes_{R}S,\ N\otimes_{R}S).$$
Is there $n$ such that $n,n^2,n^3$ start with the same digit ($\neq 1)$
It is a bit unfortunate that I already saw that one answer would be 99, but it would probably have been one of my guesses anyway, since $99, 99^2, 99^3$ are very close to respectively $100, 100^2, 100^3$, so they all start with the same digit, 9. For example, if one had to find an $n$ where the first two digits of $n, n^2, n^3$ are the same, 999 would work, since all three are even closer to $1000, 1000^2, 1000^3$. Here is a list of numbers I found just by using this logic: 97, 98, 99, 966, 967, 968, 969, ... 997, 998, 999, 9655, 9656, ... , 9998, 9999, ... In general, any number $k$ with $n$ digits such that $k > 10^n \cdot \sqrt[3]{0.9}$ works. Note that $\sqrt[3]{0.9} \approx 0.96549$. This also means that there are infinitely many such numbers, and that they make up about 3.5% of the positive integers. I suspect that these are all such numbers.
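A brute-force search confirms the pattern (my addition): up to $10^4$, every solution has leading digit $9$ and obeys the $k > 10^n \cdot \sqrt[3]{0.9}$ rule.

```python
# Search for n whose first digit (not 1) matches that of n^2 and n^3.
def leading(m):
    return int(str(m)[0])

hits = [n for n in range(2, 10000)
        if leading(n) != 1 and leading(n) == leading(n**2) == leading(n**3)]

assert hits[:3] == [97, 98, 99]
# Every hit starts with 9, consistent with k > 10^n * 0.96549...
assert all(leading(n) == 9 for n in hits)
assert min(h for h in hits if h > 100) == 966
```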
Origins of conjunction and disjunction symbols ($\wedge$ and $\vee$) in formal logic
This is not an answer but more like a comment. I decided to write as an answer because I wanted to include an image. If you insist on writing your top at the top of the picture and your bottom at the bottom, you can (as I do, most of the time*) picture disjunction as a pushout and conjunction as a pullback, and think about the symbols as the markings one does for these special commutative squares. Here is an image of what I mean: After I came up with this I could finally sleep well at night. (*). There are certain situations when it is easier for me to think about the total space as being in the bottom: e.g. thinking about local sections of a sheaf or the closed sets of the spectrum of a ring. One may argue that in these two examples things are "flipped over" by some contravariant functor somewhere (a sheaf in the first case and the correspondence between ideals and closed sets in the second).
Picard's theorem application
The premises say that $f$ omits no real value: $$\mathbb{R}\subset f(B(0,r)\setminus \{0\}).$$ So if $f$ omits any value $w$, that value must be non-real. Now consider the function $$g(z) = \overline{f(\overline{z})}.$$ $g$ is holomorphic on $B(0,r)\setminus \{0\}$, and if $f$ omits $w$, then $g$ omits $\overline{w}$. But since $f$ is real on $(B(0,r)\setminus \{0\}) \cap \mathbb{R}$, we have $g(t) = f(t)$ for $t \in (B(0,r)\setminus \{0\}) \cap \mathbb{R}$, and by the identity theorem, $g \equiv f$. So if $f$ omits any non-real value, it omits two values, which is impossible by Picard's theorem.
Geometrical representation of complex numbers.
First observe that for all real numbers $x$ the distance from $f(x)$ to $i/2$ is $1/2$: $$\begin{align}|f(x)-i/2|^2&=(f(x)-i/2)\overline{(f(x)-i/2)}\\ &=\left(\frac{1}{x-i}-i/2\right)\left(\frac{1}{x+i}+i/2\right)\\ &=\frac14, \end{align}$$ that is the image of $\mathbb R$ under $f$ is (contained in) the circle with center $i/2$ and radius $1/2$. From here it's easy to find the area of the desired square.
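A quick numerical confirmation (my addition) that $f(x)=1/(x-i)$ maps the real line into the circle $|z-i/2|=1/2$:

```python
# |1/(x - i) - i/2| should equal 1/2 for every real x.
for x in (-10.0, -1.0, -0.25, 0.0, 0.5, 3.0, 100.0):
    f = 1 / (x - 1j)
    assert abs(abs(f - 0.5j) - 0.5) < 1e-12
```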
Is $\forall n\exists m:\, m^2=n,\text{ where }m,n ∈ \mathbb N$ true or false?
That's correct: providing one counterexample is sufficient to prove that the statement in question is false. You may also note that the statement basically says that all natural numbers are perfect squares, which is of course false. Depending on the circumstances, you might still need to elaborate on your proof, i.e. give a "full" proof of the fact that $3$ is not a perfect square.
Inequality $ \left(1+\frac{x}{n}\right)^n \leq \exp(x) $
In this answer I define the natural logarithm as $$ \log x =\int_{1}^{x}\frac{1}{t} \, dt \, . $$ and the exponential function as the inverse of the logarithm. We begin with the fact that $$ \frac{x}{n} \geq \log\left(1+\frac{x}{n}\right) \, . $$ which can be deduced geometrically: $\log\left(1+\frac{x}{n}\right)$ is the area of the region bounded by the hyperbola $y=1/t$, the $t$-axis, and the vertical lines $t=1$ and $t=1+\frac{x}{n}$. The rectangle with a width of $x/n$ and a height of $1$ gives us an upper bound for this region. A little care is needed when $x/n$ is less than $0$, but the result still holds. Hence, $$ n\log\left(1+\frac{x}{n}\right) \leq x \, , $$ and therefore $$ \exp\left(n\log\left(1+\frac{x}{n}\right)\right)=\left(1+\frac{x}{n}\right)^n \leq \exp(x) \, . $$
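A numerical spot check of the final inequality (my addition; the sample points are arbitrary, restricted to $1+x/n>0$, where the argument applies):

```python
import math

# (1 + x/n)^n <= exp(x) whenever 1 + x/n > 0
for x in (-0.5, 0.3, 2.0, 10.0):
    for n in (1, 2, 5, 50):
        if 1 + x/n > 0:
            assert (1 + x/n)**n <= math.exp(x) + 1e-12
```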
Is it true that any non-convergent function or series must eventually "terminate" to some infinitely repeating sequence or series?
What you can show is that if $f(x)$ is bounded then its values have an accumulation point. This is the Bolzano-Weierstrass theorem. It certainly does not have to have any regular cycle. For a sequence, you can have $f(n)=\{10^{n-1}\pi\}$, basically taking the decimal expansion of $\pi$ starting from the $n^{th}$ place. It is nicely bounded between $0$ and $1$ but bounces around between those limits erratically. On the assumption that $\pi$ is normal, its values are dense in the interval, so every point in $(0,1)$ is an accumulation point. I described a function $(0,1)\to(0,1)$ that takes all values in any interval here. It is very messy. You can make it go from $\Bbb R \to (0,1)$ with your favorite bijection and it is also bounded. Functions can be much messier than the nice curves we draw on the whiteboard.
The image of a mono (or epi) under forgetful functors
For any variety of universal algebras, the forgetful functor is representable (in fact, has a left adjoint), and hence preserves monomorphisms. In the case of $\mathbf{Ab}$, or even $\mathbf{Grp}$, the representing object is the (abelian) group $\mathbf{Z}$. But in general, the forgetful functor need not preserve epimorphisms, as the inclusion $\mathbf{Z} \hookrightarrow \mathbf{Q}$ in $\mathbf{Ring}$ shows. The same example also shows that a variety of universal algebras need not, in general, be a balanced category.
What is $\frac{(-2)^{x}}{2^{x-1}}$
Laws of exponents say that $(ab)^x=a^xb^x$, and $a^{x+y}=a^xa^y$. Therefore $$\frac{(-2)^x}{2^{x+1}}=\frac{(-1)^x2^x}{2\cdot 2^x}=\frac{(-1)^x}{2}.$$ This expression is a real number if $x$ is an integer. If $x$ is an even integer, it is equal to $1/2$. If $x$ is an odd integer, then it is equal to $-1/2$.
The domain of function f(x) = $\log_e(\frac{x}{1-x})$
Your answer is not correct, but the start is right. $$\frac{x}{1-x}>0$$ or $$0<x<1$$ by the method of intervals.
Generating Pythagorean Triples from Others via Dissections
You can generate all triples where $|b-a|=1$, such as $(3,4,5)\quad (20,21,29) ...$, if you start with a seed of $(a_0,b_0,c_0)=(0,0,1)$ and apply the following formula $$a_{n+1}=3a_n+2c_n+1\quad b_{n+1}=3a_n+2c_n+2\quad c_{n+1}=4a_n+3c_n+2$$ You can also generate all triples of the subset where $GCD(a,b,c)$ is an odd square (which contains all primitives and no trivials) for all natural numbers $(m,n)\in\mathbb{N}$ if you replace $m$ in Euclid's formula with $2m-1+n$.
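The recurrence is easy to verify by iterating from the seed (my addition): each step should produce a Pythagorean triple with consecutive legs.

```python
# Iterate the recurrence from the seed (0, 0, 1).
a, b, c = 0, 0, 1
triples = []
for _ in range(5):
    a, b, c = 3*a + 2*c + 1, 3*a + 2*c + 2, 4*a + 3*c + 2
    triples.append((a, b, c))
    assert a*a + b*b == c*c and b - a == 1

assert triples[:3] == [(3, 4, 5), (20, 21, 29), (119, 120, 169)]
```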
How can I calculate $\int_{0}^{2\pi}\sqrt{2-\sin(2t) }dt$?
Using symmetry: $$ \int_0^{2\pi} \sqrt{2 - \sin(2t)} \mathrm{d}t = 2 \int_0^{\pi} \sqrt{2 - \sin(2t)} \mathrm{d}t = \int_0^{2 \pi} \sqrt{2 - \sin(t)} \mathrm{d}t $$ Now we use $\sin(t) = 1 - 2 \sin^2\left(\frac{\pi}{4} - \frac{t}{2} \right)$: $$ \int_0^{2 \pi} \sqrt{2 - \sin(t)} \mathrm{d}t = \int_0^{2\pi} \sqrt{1+2 \sin^2\left(\frac{\pi}{4} - \frac{t}{2} \right)} \mathrm{d} t = 2\int_{-\tfrac{\pi}{4}}^{\tfrac{3 \pi}{4}} \sqrt{1+2 \sin^2(u)} \mathrm{d} u $$ The anti-derivative is not elementary (uses the elliptic integral of the second kind): $$ \int \sqrt{1 - m \sin^2 \phi} \, \mathrm{d} \phi = E\left(\phi|m\right) + C $$ Thus the integral of interest equals $$ 2 \left( E\left(\left.\frac{3 \pi}{4}\right|-2\right) - E\left(\left.-\frac{\pi}{4}\right|-2\right)\right) = 2 \left( E\left(\left.\frac{3 \pi}{4}\right|-2\right) + E\left(\left.\frac{\pi}{4}\right|-2\right)\right) $$ Inspired by Mhenni's result: $$ 2\int_{-\tfrac{\pi}{4}}^{\tfrac{3 \pi}{4}} \sqrt{1+2 \sin^2(u)} \mathrm{d} u = 2\int_{0}^{\pi} \sqrt{1+2 \sin^2(u)} \mathrm{d} u = 4 \int_0^{\frac{\pi}{2}} \sqrt{1+2 \sin^2(u)} \mathrm{d} u = 4 E(-2) $$ where $E(m)$ is the complete elliptic integral of the second kind. Numerical verification in Mathematica: In[67]:= NIntegrate[Sqrt[2 - Sin[2 t]], {t, 0, 2 Pi}, WorkingPrecision -> 20] Out[67]= 8.7377525709848047416 In[68]:= N[4 (EllipticE[-2]), 20] Out[68]= 8.7377525709848047416
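The same numerical verification can be done in Python (my addition); SciPy's `ellipe(m)` is the complete elliptic integral of the second kind with the parameter convention $E(m)=\int_0^{\pi/2}\sqrt{1-m\sin^2\theta}\,d\theta$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe

# Direct numerical integration of the original integral ...
val, _ = quad(lambda t: np.sqrt(2 - np.sin(2*t)), 0, 2*np.pi)

# ... should match 4 E(-2) and the quoted Mathematica value.
assert abs(val - 4*ellipe(-2)) < 1e-6
assert abs(val - 8.7377525709848047) < 1e-6
```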
Solve the equation by using logarithms
When you divide $8x = (\ln(195/64))/\ln(12)$ by $8$, you get $x = (\ln(195/64))/(8\ln(12))$. The $8$ goes into the denominator when you divide the fraction. Thank you @GerryMyerson
Can $x^{n}-1$ be prime if $x$ is not a power of $2$ and $n$ is odd?
The following result can be found in most introductions to Number Theory. Theorem: Let $x$ and $n$ be integers greater than $1$. If $x^n-1$ is prime, then $x=2$ and $n$ is prime. Proof: Note that $x-1$ divides $x^n-1$, for $$x^n-1=(x-1)(x^{n-1}+x^{n-2}+ \cdots + 1).$$ If $x>2$ and $n>1$, each of the above factors of $x^n-1$ is $>1$. It follows that if $x>2$ and $n>1$, then $x^n-1$ cannot be prime. So if $x^n-1$ is prime, then $x=2$. Next we show that if $2^n-1$ is prime, then $n$ itself must be prime. For suppose to the contrary that $n=ab$ where $a$ and $b$ are greater than $1$. Then $$2^n-1=2^{ab}-1=(2^a)^b-1.$$ Let $y=2^a$. Then $2^n-1=y^b-1$. But $y-1$ divides $y^b-1$. Thus $2^a-1$ divides $2^n-1$. It is easy to see that in fact $2^a-1$ is a proper divisor of $2^n-1$, so $2^n-1$ cannot be prime. This concludes the proof. By calculating, we can verify that $2^n-1$ is prime for $n=2$, $3$, $5$, and $7$. But one should not jump to conclusions. When $n=11$, $2^n-1$ is not prime, for $23$ divides $2^{11}-1$. Primes of the form $2^n-1$ (where $n$ is necessarily prime) are called Mersenne primes. There has been interest in what would later be called Mersenne primes ever since the time of Euclid, because of their connection with the even perfect numbers. Despite a search spanning millennia, only $47$ Mersenne primes are currently known. The current record holder is $2^{43,112,609}-1$. From the computational evidence, it appears that for "most" primes $p$, the number $2^p-1$ is not prime. It is not known whether there are infinitely many Mersenne primes.
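The facts quoted above are quick to check with SymPy's `isprime` (my addition):

```python
from sympy import isprime

# x^n - 1 with x > 2 is never prime (x - 1 > 1 divides it).
for x in (3, 4, 5, 10):
    for n in (2, 3, 5):
        assert (x**n - 1) % (x - 1) == 0 and not isprime(x**n - 1)

# 2^n - 1 is prime for n = 2, 3, 5, 7 ...
for n in (2, 3, 5, 7):
    assert isprime(2**n - 1)

# ... but not for the prime n = 11: 2047 = 23 * 89.
assert not isprime(2**11 - 1)
assert (2**11 - 1) % 23 == 0
```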
Derivative verification
Here's a somewhat more systematic way of getting these kinds of relations, for this kind of function. We'll de-clutter notation a little bit. Let $$e = e^x, t_n = \frac{x^n}{n!}, e_n = \sum_{k=0}^n t_k, e_n - e_{n-1} = t_n$$ You can write a function like your own as $$a = \frac{t_{n+1}}{e-e_n}$$ To derive the first order relation we'll note that $$t_{n+1}' = t_n, e_n' = e_{n-1}, \frac{t_{n+1}}{t_n} =\frac{x}{n+1}$$ Then you can write the equation as $$a(e-e_n) = t_{n+1}$$ and take derivatives $$a'(e-e_n) + a(e' - e_n') = t_{n+1}'$$ $$a'(e-e_n) + a(e - e_{n-1}) = t_n$$ From the original equation we know that $$e - e_n = \frac{t_{n+1}}{a}$$ and $$e - e_{n-1} = \frac{t_{n+1}}{a} + e_n - e_{n-1} = \frac{t_{n+1}}{a} + t_n$$ Putting these together leads to $$\frac{t_{n+1}a'}{a} + a\left(\frac{t_{n+1}}{a}+t_n\right) = t_n$$ Multiplying by $a$, collecting terms and dividing by $t_n$ leads to $$a^2 = \left(1-\frac{t_{n+1}}{t_n}\right)a - \frac{t_{n+1}}{t_n}a' = \left(1-\frac{x}{n+1}\right)a - \frac{x}{n+1}a'$$ Multiplying by $n+1$ leads to $$(n+1)a^2 = (n+1-x)a - xa'$$ To de-clutter things even more we replace $n+1$ by $n$: $$na^2 = (n-x)a - xa'$$ where $$a = \frac{t_n}{e - e_{n-1}}$$ From here on out we have $$xa' = (n-x)a -na^2$$ Taking the first derivative gives $$a' + xa'' = -a + (n-x)a' - 2naa'$$ Collecting $a'$ and multiplying by $x$ gives $$x^2a''+ax = (n-x-1-2na)a'x = (n-x-1-2na)((n-x)a -na^2)$$ Expanding and collecting terms gives $$x^2a''-2n^2a^3+(n-n^2+2nx-x^2)a+(3n^2-n-3nx)a^2 = 0$$ and replacing $na^2$ with $(n-x)a-xa'$ leaves us with $$2n^2a^3 = (2x^2-(4n-1)x+2n^2)a + x(3x-(3n-1))a'+x^2a''$$ One could probably take some more derivatives here to get the next few terms. In your case $n = 3$ we find $$18a^3 = (2x^2-11x+18)a + x(3x-8)a' + x^2a''$$ which is the equation you have. 
Your linked problem has $n = 2$, which leads to $$8a^3 = (2x^2-7x+8)a + x(3x-5)a' + x^2 a''$$ The exponential generating function for the Bernoulli numbers has $n = 1$, which leads to $$2b^3 = (2x^2-3x+2)b + x(3x-2)b' + x^2 b''$$ so the equations you found are not a coincidence. The relationship between the partial sums of the exponential function, their derivatives and differences leads to the first order differential equation. Then the algebra is a bit messy but we get a second order differential equation that will help you get the 3rd-order convolution. I think this is a very interesting problem.
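The second-order relation can be spot-checked with SymPy for $n=1,2,3$ (my addition; numeric evaluation at a few points rather than a full symbolic simplification):

```python
import sympy as sp

x = sp.symbols('x')

def check(n):
    # a = t_n / (e - e_{n-1}) as in the derivation above
    a = x**n/sp.factorial(n) / (sp.exp(x) - sum(x**k/sp.factorial(k) for k in range(n)))
    lhs = 2*n**2 * a**3
    rhs = ((2*x**2 - (4*n - 1)*x + 2*n**2)*a
           + x*(3*x - (3*n - 1))*sp.diff(a, x)
           + x**2*sp.diff(a, x, 2))
    return all(abs(float((lhs - rhs).subs(x, v).evalf(30))) < 1e-20
               for v in (sp.Rational(1, 2), 1, sp.Rational(7, 3)))

assert check(1) and check(2) and check(3)
```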
Can there be a power series that converges for all reals but not for the complex numbers?
No. If a power series converges on all of $\mathbb{R}$, then it (or rather its extension) converges for all complex numbers. One quick way to see why: the radius of convergence is determined by the coefficients alone, and the series converges absolutely at every point strictly inside that radius. Convergence on all of $\mathbb{R}$ forces the radius to be infinite, so the series converges on all of $\mathbb{C}$ as well.
Given a Parallellogram $OACB$, How to Evaluate $\vec{OC}.\vec{OB}$?
I think there is no need to memorize the cosine law. Put the origin of the coordinate system at O, then we can write $$\mathop {OB}\limits^ \to .\mathop {OC}\limits^ \to = {\bf{B}}.{\bf{C}} = {\bf{B}}.\left( {{\bf{A}} + {\bf{B}}} \right) = {\bf{A}}.{\bf{B}} + {\bf{B}}.{\bf{B}} = {\bf{A}}.{\bf{B}} + {b^2}\tag{1}$$ so, your main task will be to compute ${\bf{A}}.{\bf{B}}$, you can do it as follows $$\eqalign{ & \mathop {AB}\limits^ \to .\mathop {AB}\limits^ \to = \left( {{\bf{B}} - {\bf{A}}} \right).\left( {{\bf{B}} - {\bf{A}}} \right) = {\bf{A}}.{\bf{A}} + {\bf{B}}.{\bf{B}} - 2{\bf{A}}.{\bf{B}} \cr & {c^2} = {a^2} + {b^2} - 2{\bf{A}}.{\bf{B}} \cr & {\bf{A}}.{\bf{B}} = \frac{{{a^2} + {b^2} - {c^2}}}{2} \cr}\tag{2}$$ Now, combine $(1)$ and $(2)$ to get $$\mathop {OB}\limits^ \to .\mathop {OC}\limits^ \to = \frac{{{a^2} + {b^2} - {c^2}}}{2} + {b^2} = \frac{{{a^2} + 3{b^2} - {c^2}}}{2}\tag{3}$$
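A quick numerical check of formulas $(2)$ and $(3)$ with random vectors (my addition):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal(2), rng.standard_normal(2)
C = A + B                                  # OACB a parallelogram: OC = OA + OB
a, b, c = np.linalg.norm(A), np.linalg.norm(B), np.linalg.norm(B - A)

# formula (2): OA . OB = (a^2 + b^2 - c^2) / 2
assert abs(np.dot(A, B) - (a**2 + b**2 - c**2) / 2) < 1e-12

# formula (3): OB . OC = (a^2 + 3 b^2 - c^2) / 2
assert abs(np.dot(B, C) - (a**2 + 3*b**2 - c**2) / 2) < 1e-12
```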
Computing the gradient of $\det \left( \mathbf{X}^{T} \mathbf{X} \right)$
Let $$Y=X^TX$$ Then the gradient of the function $(\det Y)$ with respect to $Y$ is a well known result which can be looked up in the Matrix Cookbook or on Wikipedia. $$\eqalign{ f &= \det Y \\ G = \frac{\partial f}{\partial Y} &= (\det Y)\;Y^{-T} \\ }$$ All that's needed to answer this question is to perform a change of variables from $Y\to X$. $$\eqalign{ df &= G:dY \\ &= G:(dX^TX + X^TdX) \\ &= G:dX^TX + G:X^TdX \\ &= G^T:X^TdX + G:X^TdX \\ &= (G^T+G):X^TdX \\ &= 2G:X^TdX \\ &= 2XG:dX \\ &= 2(\det Y)XY^{-T}:dX \\ &= 2(\det X^TX)X(X^TX)^{-1}:dX \\ \frac{\partial f}{\partial X} &= 2(\det X^TX)X(X^TX)^{-1} \\ }$$ where a colon is employed as a convenient product notation for the trace, i.e. $$\eqalign{ A:B = {\rm Tr}(A^TB) \\ }$$
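As a numerical sanity check (my addition), the closed-form gradient agrees with central finite differences on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))

def f(M):
    return np.linalg.det(M.T @ M)

# Closed-form gradient: 2 det(X^T X) X (X^T X)^{-1}
G = 2 * f(X) * X @ np.linalg.inv(X.T @ X)

# Central finite differences for comparison.
eps = 1e-6
G_num = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        G_num[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

assert np.allclose(G, G_num, rtol=1e-5, atol=1e-6)
```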
Why is the presheaf of holomorphic functions that admit a square root not a sheaf?
The problem is the "gluing axiom": Let $U^{-}$ and $U^{+}$ denote the complements of the non-negative and non-positive real axis, respectively. The function $f(z) = z$ admits a square root in each of $U^{\pm}$, but does not admit a square root on their union.
combinations and permutations - choosing when there's a limit
For the first question, note that it is $\binom{5}{3}\binom{7}{4}$. (I am assuming the doughnuts are lined up in a row.) For there is a total of $12$ doughnuts. If we are to eat $7$, exactly $3$ of which are among the first $5$, we must choose $3$ from the first $5$, and the remaining $4$ from the last $7$. For at least three, it is the same idea. Add to our expression for exactly three the number $\binom{5}{4}\binom{7}{3}$ for exactly four, and $\binom{5}{5}\binom{7}{2}$ for exactly five.
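These counts are easy to confirm with `math.comb` (my addition):

```python
from math import comb

# Exactly three of the first five doughnuts among the seven eaten:
exactly_three = comb(5, 3) * comb(7, 4)
assert exactly_three == 350

# At least three: add the cases "exactly four" and "exactly five".
at_least_three = exactly_three + comb(5, 4) * comb(7, 3) + comb(5, 5) * comb(7, 2)
assert at_least_three == 546
```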
How can a vector represent velocity and a position as well?
A vector can represent a position by thinking of it as specifying the displacement from the origin of your system of coordinates to the position in question; hence such vectors are, naturally enough, called position vectors. Since velocity is by definition the derivative of position, velocity is also a vector, obtained by differentiating the position vector. Continuing this train of thought, you can see why acceleration, force (the derivative of momentum, which involves velocity), etc. are all vectors.
How can I prove this statement about square root?
For the induction step, you assume that $a \in \mathbb{N}$ and that $$\exists b \in \mathbb{N} . (b \times b \le a) \land (a < (b+1) \times (b+1)). \tag{1}$$ Now, depending on exactly how your formal system is formulated, there should be some way to identify a witness for $(1)$, that is, for the inductive step you can introduce a free variable $b_0$ and assume that $(b_0 \times b_0 \le a) \land (a < (b_0+1) \times (b_0+1)).$ For $a + 1$, then, there are two cases. In one case, $a + 1 < (b_0 + 1) \times (b_0 + 1).$ You can then prove (via some additional formal steps) that $(b_0 \times b_0 \le a + 1)$, and therefore that $$\exists b \in \mathbb{N} . (b \times b \le a+1) \land (a+1 < (b+1) \times (b+1)).$$ In the other case, $a + 1 \geq (b_0 + 1) \times (b_0 + 1).$ If you can then prove (via additional formal steps) that $(a + 1 < ((b_0+1)+1) \times ((b_0+1)+1)),$ these two inequalities establish that $$\exists b \in \mathbb{N} . (b \times b \le a + 1) \land (a + 1 < (b+1) \times (b+1)), \tag{2}$$ using $b_0 + 1$ as a witness for $b$. You then need at least one more formal step in order to combine the conclusions of the two cases so that you can state formula $(2)$ outside the context of either case, that is, in a context where you have assumed only $a \in \mathbb N$ and formula $(1)$, to complete the inductive step. This is merely a strategy for the proof. Obviously you will need to produce a sequence of formal inferences; the details depend on exactly what inferential tools you are allowed to use and on the way in which they have been formulated.
How to prove $\lim_{x\rightarrow\infty}-x^2+\log(x^2)=-\infty$
Hint: If we put $t=x^2$, it is equivalent to prove that $$\lim_{t\to+\infty}\Bigl(-\frac t2+\frac 72\ln(t+2\sqrt{2})\Bigr)=-\infty$$ Since $$\ln(t+2\sqrt{2})=\ln(t)+\ln\Bigl(1+\frac{2\sqrt{2}}{t}\Bigr),$$ we want $$\lim_{t\to+\infty}\Bigl(t\Bigl(\frac{-1}{2}+\frac 72\frac{\ln(t)}{t}\Bigr)+\frac 72\ln\Bigl(1+\frac{2\sqrt{2}}{t}\Bigr)\Bigr)$$ $$=+\infty\cdot\Bigl(\frac{-1}{2}+0\Bigr) + 0 =-\infty$$
Transverse Manifolds. Intersection is a Manifold again.
This result is a generalization of the pre-image theorem (which you can find on page 21 of Guillemin & Pollack), which states: If $y$ is a regular value of $f: X \to Y$, then the preimage $f^{-1}(y)$ is a submanifold of $X$, with $\dim f^{-1}(y) = \dim X - \dim Y$. For a map transverse to a submanifold, this generalizes to (page 28 of Guillemin & Pollack): If $f:X \to Y$ is transverse to a submanifold $Z \subset Y$, then the preimage $f^{-1}(Z)$ is a submanifold of $X$. Moreover, the codimensions of $Z$ and $f^{-1}(Z)$ are equal. Lastly, suppose $X$ and $Y$ are submanifolds of $\mathbb{R}^n$. Saying that $X$ and $Y$ intersect transversely is the same as saying that the inclusion $i:X \to \mathbb{R}^n$ is transverse to $Y$. So we get that $i^{-1}(Y)$ is a submanifold of $X$. But $i^{-1}(Y) = X \cap Y$, so $X \cap Y$ is indeed a manifold.
Why is the derivative of an (everywhere differentiable) function on the real line the limit of a sequence of continuous functions?
Try $f_n(x)=n\cdot(g(x+1/n)-g(x))$. Each $f_n$ is continuous (since $g$, being differentiable, is continuous), and $f_n(x)\to g'(x)$ for every $x$ by the definition of the derivative.
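For a concrete (hypothetical) illustration of the hint, take $g=\sin$, so $f_n(x)=n(\sin(x+1/n)-\sin x)$ should approach $\cos x$ pointwise:

```python
import math

# g = sin is everywhere differentiable; f_n(x) = n*(g(x + 1/n) - g(x))
# is continuous for each n and converges pointwise to g'(x) = cos(x)
def f(n, x):
    return n * (math.sin(x + 1 / n) - math.sin(x))

for x in (0.0, 1.0, -2.5):
    errors = [abs(f(10 ** k, x) - math.cos(x)) for k in (1, 3, 5)]
    assert errors[0] > errors[1] > errors[2]   # error shrinks as n grows
    assert errors[-1] < 1e-4
```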
how to find the vector that makes the maximum change
Your function is actually $$z = x^2 + y^2$$ which can be written as $$F(x, y, z) = x^2 + y^2 - z = 0$$ Its gradient is $\nabla F(x,y,z) = (\partial F/\partial x, \partial F/\partial y, \partial F/\partial z)$, $$\vec{n}(x, y, z) = \nabla F(x,y,z) = (2 x, 2 y, -1)$$ (Note that some write $\nabla F = (\partial F/\partial x, \partial F/\partial y, \partial F/\partial z) = ( F_x, F_y, F_z )$; the subscripts then indicate partial derivatives, and not components of $F$. I personally dislike that notation.) Normalizing the gradient vector $\vec{n}(x,y,z)$ to unit length gives you a better idea of its direction: $$\hat{n}(x, y, z) = \frac{\vec{n}(x,y,z)}{\lvert\vec{n}(x,y,z)\rvert} = \left( \frac{x}{\sqrt{x^2 + y^2 + \frac{1}{4}}}, \; \frac{y}{\sqrt{x^2 + y^2 + \frac{1}{4}}}, \; -\frac{1}{\sqrt{4 x^2 + 4 y^2 + 1}} \right)$$ Now, to the issues in your problem statement (or rather, assumptions about the solution.) The surface $F(x,y,z)=0$ is constant; there is no change between different points in the surface (so the fastest change vector is definitely not in the tangent plane). You only see change when you go away or towards the surface. This direction, $\vec{n}(x,y,z)$, is the surface normal. If you draw the normal vector from point $(x, y, z)$ towards $(x, y, z) + \vec{n}(x, y, z)$, you'll see it always points downwards and away from the $z$ axis (the line from $(x, y, z)$ to $(x, y, z) - \vec{n}(x, y, z)$ always passes through the $z$ axis). It is not the vector you have drawn in your figure; it is perpendicular to it. $F(x,y,z) = 0$ defines an implicit surface. Because the normal vector $\vec{n}(x, y, z)$ is nonzero everywhere on the surface (because the $z$ component is always -1), the surface is regular. This in turn means that the tangent plane is well defined for every point on the surface. The equation for the plane tangent to the surface at $(x_0, y_0, z_0)$ is $$\vec{n}(x_0, y_0, z_0) \cdot ( x - x_0, y - y_0, z - z_0) = 0$$ i.e. 
$$2 x_0 (x - x_0) + 2 y_0 ( y - y_0) - (z - z_0) = 0$$ which is equal to $$(2 x_0) x + (2 y_0) y + (-1) z + (z_0 - 2 y_0^2 - 2 x_0^2) = 0$$ I wrote the last form, so you could easily see that it describes a plane perpendicular to the normal vector $(2 x_0, 2 y_0, -1)$ at point $(x_0, y_0, z_0)$. (The scalar part, the last part in parenthesis, determines the signed distance from origin (scaled by the length of the normal vector) along the normal vector to the plane.) If you want to parametrize the tangent plane -- say, your underlying question is actually about something completely different to the direction of most change --, there are ways to do that, too, but I think that should be a separate question. (The easiest suggestion, using say $v$ for the direction away from the $z$ axis, and $u$ perpendicular to it, is ill-defined at origin -- since the point is on the $z$ axis, every direction is away, much like every direction is "North" on the South pole.)
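A quick numerical illustration (at a hypothetical point, not one from the question): a small displacement along the surface $z=x^2+y^2$ is orthogonal to $\vec{n}=(2x_0,2y_0,-1)$ up to second order, which is exactly the tangent-plane statement above.

```python
# Check that (2*x0, 2*y0, -1) is normal to z = x^2 + y^2 at (x0, y0):
# a small step along the surface is orthogonal to n up to O(step^2).
x0, y0 = 1.0, -2.0
z0 = x0 ** 2 + y0 ** 2
n = (2 * x0, 2 * y0, -1.0)

for dx, dy in [(1e-4, 0.0), (0.0, 1e-4), (7e-5, -3e-5)]:
    dz = (x0 + dx) ** 2 + (y0 + dy) ** 2 - z0
    dot = n[0] * dx + n[1] * dy + n[2] * dz
    assert abs(dot) < 1e-7        # dot = -(dx^2 + dy^2), second-order small
```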
Linear programming with what I think has 3 variable. I need to plot a graph of the constraints too.
Variables: $x, y, z$ litres of oil from refineries A, B and C respectively. Constraints: $4x+2y+6z \leq 3.5$ $70x+60y+80z \geq 65 $ $x+y+z = 100$ $x, y, z \geq 0$ Objective function: $C=21x+23y+26z$ (the objective needs its own symbol, since $z$ is already one of the decision variables).
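Since the original question isn't shown, the scaling of the first two constraints is ambiguous; one plausible reading is that 3.5 (sulphur) and 65 (octane) are blend averages over the 100 litres, i.e. $4x+2y+6z\le 350$ and $70x+60y+80z\ge 6500$, with the objective a cost to be minimised. Under those assumptions, a small vertex-enumeration sketch solves the LP after eliminating $z=100-x-y$:

```python
from itertools import combinations

# Hedged reading of the constraints as blend averages over 100 litres:
#   sulphur: 4x + 2y + 6z <= 3.5 * 100,  octane: 70x + 60y + 80z >= 65 * 100.
# Substituting z = 100 - x - y leaves a 2-variable LP; each constraint is
# stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [
    (-1, -2, -125),   # sulphur cap:  x + 2y >= 125
    (1, 2, 150),      # octane floor: x + 2y <= 150
    (1, 1, 100),      # z = 100 - x - y >= 0
    (-1, 0, 0),       # x >= 0
    (0, -1, 0),       # y >= 0
]

def cost(x, y):
    return 21 * x + 23 * y + 26 * (100 - x - y)

# An LP optimum sits at a vertex: intersect each pair of constraint
# boundaries and keep the cheapest feasible intersection.
best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                  # parallel boundaries, no vertex
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if all(a * x + b * y <= c + 1e-9 for a, b, c in constraints):
        if best is None or cost(x, y) < best[0]:
            best = (cost(x, y), x, y, 100 - x - y)

print(best)   # cheapest feasible blend (cost, x, y, z) under the assumptions
```

Under this (assumed) scaling the optimum uses 75 litres from A and 25 from B at cost 2150.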
How to classify stationary points of a multivariable function?
Hint: The system of equations $$ x+y+z=0\\ 2y+x=0\\ 3z+x=0 $$ looks interesting. So does computing a Hessian.
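Working the hint through (a sketch, assuming the three displayed equations are $f_x=f_y=f_z=0$ in that order): the second and third give $x=-2y$ and $z=-x/3=2y/3$, so the first becomes $$-2y+y+\tfrac{2y}{3}=-\tfrac{y}{3}=0,$$ forcing $(x,y,z)=(0,0,0)$ as the only stationary point. The Hessian is then the constant matrix $$H=\begin{pmatrix}1&1&1\\1&2&0\\1&0&3\end{pmatrix},$$ whose leading principal minors are $1$, $1$ and $1$, all positive, so $H$ is positive definite and the stationary point is a local minimum.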
compact in the product topology
Indeed it suffices to show that your set $A = \{f \in X: |f(t)| < 1 \mbox{ for all } t \in [0,1]\}$ is not closed in $X = \mathbb{R}^{[0,1]}$, because $X$ is Hausdorff (a product of Hausdorff spaces is Hausdorff) and a compact subset of a Hausdorff space is closed. One way to see it is not closed: the (constant) functions $f_n(t) = 1 - \frac{1}{n}$ all lie in $A$, and the sequence converges pointwise to the constant function $1$, which is not in $A$.
How does $Mor(V,W)$ becomes a vector bundle and what are the transition functions of it?
If $V$ is defined using transition functions $f_{ij} : U_i \cap U_j \to GL_n(\mathbb{C})$, thought of as acting on $\mathbb{C}^n$, and $W$ is defined using transition functions $g_{ij} : U_i \cap U_j \to GL_m(\mathbb{C})$, thought of as acting on $\mathbb{C}^m$ then the hom bundle $[V, W]$ is defined using transition functions $$T \mapsto g_{ij} T f_{ij}^{-1}$$ thought of as acting on $[\mathbb{C}^n, \mathbb{C}^m]$. Similarly, the tensor product bundle $V \otimes W$ is defined using transition functions $f_{ij} \otimes g_{ij} : U_i \cap U_j \to GL_{nm}(\mathbb{C})$, thought of as acting on $\mathbb{C}^n \otimes \mathbb{C}^m$.
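A small sanity check (with made-up $2\times 2$ transition matrices, in exact arithmetic): if $f$ and $g$ each satisfy the cocycle condition, then so does $T \mapsto g_{ij} T f_{ij}^{-1}$, which is what makes the hom-bundle construction consistent.

```python
from fractions import Fraction as F

# 2x2 matrices over Q; made-up transition matrices satisfying the cocycle
# condition f_ik = f_ij * f_jk (and likewise for g) by construction.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

f_ij, f_jk = [[F(1), F(2)], [F(0), F(1)]], [[F(2), F(1)], [F(1), F(1)]]
g_ij, g_jk = [[F(1), F(0)], [F(3), F(1)]], [[F(1), F(1)], [F(1), F(2)]]
f_ik, g_ik = mul(f_ij, f_jk), mul(g_ij, g_jk)

def h(g, f, T):                 # the hom-bundle transition function
    return mul(mul(g, T), inv(f))

T = [[F(1), F(4)], [F(2), F(0)]]
# cocycle: h_ik(T) equals h_ij applied after h_jk
assert h(g_ik, f_ik, T) == h(g_ij, f_ij, h(g_jk, f_jk, T))
```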
Sum only positive numbers in series
This can be obtained with the absolute value $$\sum_k\frac{(c_k-x)+|c_k-x|}2$$ to cancel out the negative terms. $$\frac{32-4x+|10-x|+|3-x|+|4-x|+|15-x|}2.$$ As the function is piecewise linear and the constants seem irregular, there is no way to simplify it. It is also customary to denote the positive part of an expression with a $+$ exponent, giving $$\sum_k(c_k-x)^+=\sum_k\max(0,c_k-x).$$
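A quick numerical check of the formula, using the constants above:

```python
# positive-part sum vs the closed form with absolute values, for the
# constants 10, 3, 4, 15 appearing above (their sum is 32)
cs = [10, 3, 4, 15]

def positive_part_sum(x):
    return sum(max(0, c - x) for c in cs)

def closed_form(x):
    # (sum(c_k - x) + sum |c_k - x|) / 2 -- the negative terms cancel
    return (32 - 4 * x + sum(abs(c - x) for c in cs)) / 2

for x in (-5, 0, 3.5, 7, 12, 20):
    assert positive_part_sum(x) == closed_form(x)
```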
Theta functions identites
From $$\theta(z)=\sum_{n=-\infty}^{\infty}\exp(\pi i[n^2\gamma+2nz])$$ we get \begin{align} \theta(z+\gamma)&=\sum_{n=-\infty}^{\infty}\exp(\pi i[n^2\gamma+2nz+2n\gamma])\\ &=\exp(-\pi i\gamma)\sum_{n=-\infty}^{\infty}\exp(\pi i[(n+1)^2\gamma+2nz])\\ &=\exp(-\pi i\gamma-2\pi iz) \sum_{n=-\infty}^{\infty}\exp(\pi i[(n+1)^2\gamma+2(n+1)z])\\ &=\exp(-\pi i\gamma-2\pi iz)\theta(z), \end{align} where the last equality re-indexes the sum by $n+1\mapsto n$.
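A truncated numerical check of the quasi-periodicity identity (with an arbitrary $\gamma$ in the upper half plane, where the series converges):

```python
import cmath

GAMMA = 0.3 + 1.1j   # arbitrary point in the upper half plane

def theta(z, N=30):
    # truncated version of the series defining theta(z)
    return sum(cmath.exp(cmath.pi * 1j * (n * n * GAMMA + 2 * n * z))
               for n in range(-N, N + 1))

for z in (0.2 + 0.1j, -0.7 + 0.4j):
    lhs = theta(z + GAMMA)
    rhs = cmath.exp(-cmath.pi * 1j * GAMMA - 2 * cmath.pi * 1j * z) * theta(z)
    assert abs(lhs - rhs) < 1e-9
```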
Having problems evaluating some limits
All of the problems can be done by using L'Hospital's Rule. Please read about it, say in Wikipedia, specially for the conditions under which it can be applied. Please note that the Rule can only be applied when we are dealing with certain types of "indeterminate forms." Problem 1 has already been dealt with by Kevin. For Problem 2, rewrite our function as $\frac{1-\tan x}{\cos 2x}$. Note that as $x\to \pi/4$, top and bottom approach $0$. So by L'Hospital's Rule, our limit is equal to $$\lim_{x\to\pi/4} \frac{-\sec^2 x}{-2\sin 2x}.$$ Now we are finished, the new top and bottom behave nicely as $x\to\pi/4$. The top approaches $-2$, as does the bottom, so the limit is $1$. For Problem 3, rewrite our function as $\frac{2^{1/x}-1}{1/x}$. Then top and bottom approach $0$. We could apply L'Hospital's Rule directly, but it is convenient to let $t=1/x$. So we want $$\lim_{t\to 0^+} \frac{2^t-1}{t}.$$ To apply L'Hospital's rule, we need to differentiate top and bottom. The derivative of the top will be clearer if we note that $2^t=e^{(\ln 2) t}$. So the derivative of the top is $(\ln 2) 2^t$. The derivative of the bottom is $1$. So we conclude that our limit is $\ln 2$. For Problem 4, rewrite our function as $e^{\frac{\ln x}{1-x}}$. We find the limit as $x\to 1$ of the exponent $\frac{\ln x}{1-x}$. Note that top and bottom of the exponent approach $0$ as $x\to 1$. A routine application of L'Hospital's Rule shows that $$\lim_{x\to 1} \frac{\ln x}{1-x}=\lim_{x\to 1} \frac{1/x}{-1}=-1.$$ So the exponent has limit $-1$, and therefore the original expression has limit $e^{-1}$.
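If you want to sanity-check these limits numerically (a rough illustration, not a proof):

```python
import math

# Problem 2: (1 - tan x)/cos 2x -> 1 as x -> pi/4
x = math.pi / 4 + 1e-6
assert abs((1 - math.tan(x)) / math.cos(2 * x) - 1) < 1e-4

# Problem 3: x*(2^(1/x) - 1) -> ln 2 as x -> infinity
# (expm1 avoids the cancellation in 2^(1/x) - 1 for large x)
x = 1e8
assert abs(x * math.expm1(math.log(2) / x) - math.log(2)) < 1e-6

# Problem 4: x^(1/(1-x)) -> 1/e as x -> 1
x = 1 + 1e-6
assert abs(x ** (1 / (1 - x)) - 1 / math.e) < 1e-4
```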
Posterior is proportional to prior times likelihood
It is a useful trick when manipulating conditional density functions to just ignore some subset of the variables and then, after the manipulation, return the "ignored" variables to the resulting expression after the $"|"$. This works because the conditional probabilities are themselves valid probability density functions - if you like, you could write $q(\cdot) := p(\cdot | \alpha,\sigma^2)$ - and so $$ q(\mathbb{w}|\mathbb{t}) = \frac{q(\mathbb{t}|\mathbb{w})q(\mathbb{w})}{q(\mathbb{t})} $$ becomes $$ p(\mathbb{w}|\mathbb{t},\sigma^2,\alpha) = \frac{ p(\mathbb{t}|\mathbb{w},\sigma^2,\alpha) p(\mathbb{w}|\sigma^2,\alpha)}{p(\mathbb{t}|\sigma^2,\alpha)}. $$ You then inspect all of these terms and apply any conditional independence properties of the generative model to remove any redundant conditioning. For example, the paper you link to has the hierarchical structure $$ \begin{align*} \alpha &\sim\pi(\alpha) \\ \mathbb{w}|\alpha &\sim p(\mathbb{w}|\alpha)\\ \mathbb{t}|\mathbb{w},\sigma^2 &\sim p(\mathbb{t}|\mathbb{w},\sigma^2) \end{align*} $$ from which $p(\mathbb{t}|\mathbb{w},\sigma^2,\alpha)=p(\mathbb{t}|\mathbb{w},\sigma^2)$, and similarly $p(\mathbb{w}|\sigma^2,\alpha)=p(\mathbb{w}|\alpha)$. Inserting these, we have $$ p(\mathbb{w}|\mathbb{t},\sigma^2,\alpha)=\frac{p(\mathbb{t}|\mathbb{w},\sigma^2)p(\mathbb{w}|\alpha)}{p(\mathbb{t}|\alpha,\sigma^2)}. $$
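As a toy illustration of "posterior $\propto$ likelihood $\times$ prior" (made-up discrete numbers, with $\alpha$ and $\sigma^2$ held fixed throughout, as in the $q(\cdot)$ trick above):

```python
# Everything below implicitly conditions on (alpha, sigma^2); the numbers
# are made up for illustration, with a binary "weight" w.
prior = {0: 0.3, 1: 0.7}          # p(w | alpha)
likelihood = {0: 0.2, 1: 0.9}     # p(t | w, sigma^2) at the observed t

# p(t | alpha, sigma^2): marginalise w out of likelihood * prior
evidence = sum(likelihood[w] * prior[w] for w in prior)
posterior = {w: likelihood[w] * prior[w] / evidence for w in prior}

assert abs(sum(posterior.values()) - 1) < 1e-12   # a proper distribution
assert abs(posterior[1] - 0.63 / 0.69) < 1e-12    # likelihood x prior, renormalised
```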
Solving a Second Order Linear Equation with Power Series
$$y=\sum a_kx^k$$ $$xy=\sum a_kx^{k+1}=\sum a_{k-1}x^k$$ $$y'=\sum ka_kx^{k-1}=\sum(k+1)a_{k+1}x^k$$ $$y''=\sum k(k+1)a_{k+1}x^{k-1}=\sum(k+1)(k+2)a_{k+2}x^k$$ $$0=\sum(a_{k-1}+(k+1)a_{k+1}+(k+1)(k+2)a_{k+2})x^k$$ So I get $$a_{k-1}+(k+1)a_{k+1}+(k+1)(k+2)a_{k+2}=0$$ EDIT: Which is what OP got, so we have independent confirmation, always important in the experimental sciences.
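Reading off the ODE from the sums above (the recurrence is the coefficient equation of $y''+y'+xy=0$), here is a further independent check: build the coefficients from the recurrence and compare the truncated series with a direct RK4 integration. The initial data are chosen arbitrarily.

```python
from fractions import Fraction

# Coefficients from a_{k-1} + (k+1) a_{k+1} + (k+1)(k+2) a_{k+2} = 0
# (for k = 0 the a_{k-1} term is absent), with arbitrary initial data.
N = 25
a = [Fraction(0)] * (N + 2)
a[0], a[1] = Fraction(1), Fraction(0)      # y(0) = 1, y'(0) = 0
a[2] = -a[1] / 2                           # k = 0 gives a_1 + 2 a_2 = 0
for k in range(1, N):
    a[k + 2] = -(a[k - 1] + (k + 1) * a[k + 1]) / ((k + 1) * (k + 2))

def series(x):
    # evaluate the truncated power series at an exact x (a Fraction)
    return sum(c * x ** i for i, c in enumerate(a))

def rk4(x_end, steps=2000):
    # integrate y'' = -y' - x*y as the first-order system (y, v = y')
    f = lambda x, y, v: (v, -v - x * y)
    h = x_end / steps
    x, y, v = 0.0, 1.0, 0.0
    for _ in range(steps):
        k1 = f(x, y, v)
        k2 = f(x + h / 2, y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(x + h / 2, y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(x + h, y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return y

assert abs(float(series(Fraction(1, 2))) - rk4(0.5)) < 1e-10
```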
Determining if ___ is an ideal:
Considering that the list of justifications hasn't been fully populated, I will provide an answer so as to remove this from the unanswered queue. (1) $k$ is not an ideal, since it is not closed under multiplication by elements from outside $k$: for instance, $1\in k$ but $1\cdot x_1=x_1\not\in k$. (2) This is basically the same as (1). (3) This is actually an ideal! (4) You are correct. (5) You are correct, it is not closed under multiplication by outside elements. To be clear, we are multiplying by something like $x+1$. Also not closed under addition if the degrees vary.
A real tridiagonal matrix A satisfying $ a_{i,i+1}a_{i+1,i} \geq 0 $ has real eigenvalues?
If $a_{i,i+1}a_{i+1,i}=0$, then one of these off-diagonal entries is zero (or both). For example, if $a_{i+1,i}=0$ the matrix $A$ has the block form $$ A=\pmatrix{A_{11}&A_{12}\\0&A_{22}}, \quad $$ where $a_{i+1,i}$ is the top-right element of the zero block and $A_{12}$ has at most one nonzero entry (the bottom-left entry is $a_{i,i+1}$). The eigenvalues of $A$ are then given by the eigenvalues of the diagonal blocks $A_{11}$ and $A_{22}$. On the other hand, if $a_{i,i+1}$ is zero then $A$ can be partitioned in a similar manner to the lower block triangular form and if the entries are both zero, the matrix is block diagonal. Now if $A_{11}$ and $A_{22}$ are such that no entry in the first super or subdiagonal are zero, you can apply what you already know (the blocks are similar to a real symmetric matrix and the eigenvalues are real). Otherwise, you can devise a similar partitioning and apply the argument recursively. Note that if only one of the entries $a_{i,i+1}$ and $a_{i+1,i}$ is zero, you cannot make this matrix similar to a symmetric matrix (at least not by a diagonal scaling).
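The recursive block argument can also be seen computationally: only the products $b_k=a_{k,k+1}a_{k+1,k}$ enter the characteristic polynomial of a tridiagonal matrix (via the standard three-term recurrence), so a zero product splits it into the product of the block polynomials. A small exact-arithmetic sketch with made-up entries:

```python
from fractions import Fraction

# Characteristic polynomial det(A - x I) of a tridiagonal matrix via the
# three-term recurrence  p_k = (d_k - x) p_{k-1} - b_{k-1} p_{k-2},
# where d = diagonal and b[k] = a_{k,k+1} * a_{k+1,k}.
def charpoly(d, b):
    prev, cur = [Fraction(1)], [Fraction(d[0]), Fraction(-1)]
    for k in range(1, len(d)):
        nxt = [Fraction(0)] * (len(cur) + 1)
        for i, c in enumerate(cur):          # multiply cur by (d_k - x)
            nxt[i] += d[k] * c
            nxt[i + 1] -= c
        for i, c in enumerate(prev):         # subtract b_{k-1} * prev
            nxt[i] -= b[k - 1] * c
        prev, cur = cur, nxt
    return cur                               # coefficients, lowest degree first

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

d = [Fraction(v) for v in (3, -1, 2, 5, 0)]      # made-up diagonal
b = [Fraction(v) for v in (2, 0, 6, -4)]         # b[1] = 0 splits rows 0..1 | 2..4
i = 1
assert charpoly(d, b) == poly_mul(charpoly(d[:i + 1], b[:i]),
                                  charpoly(d[i + 1:], b[i + 1:]))
```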
Adjugate of adjacency matrix of a directed bipartite graph
No: if you reverse the direction of all the edges, the new adjacency matrix will be the transpose of the old one, not the adjugate.
Geometric intuition behind subspaces in $\mathbb C^n$
In $\mathbb{R}^n$, a vector $v$ can be thought of as a list of components $v_i$. Each component has a magnitude, which is a positive real number $|v_i|$, and a direction, which is one of $1$ and $-1$. In $\mathbb{C}^n$, a vector $v$ is likewise a list of components $v_i$. Each component has a magnitude $|v_i|$, and a direction, which now can be any angle between $0$ and $2\pi$ and not just positive and negative directions. In a sense, this means an element of $\mathbb{C}^n$ is a vector of vectors. The norm of a vector in $\mathbb{C}^n$ is just like the norm of a vector in $\mathbb{R}^n$: you add up the magnitude squared of each component, and take the square root. Just like in $\mathbb{R}^n$ where you ignore the angle $1$ or $-1$ in computing the norm, here you ignore the argument of the complex number. The conjugate of a vector is formed by reversing the angle in each component. Multiplying by a complex number $re^{i \theta}$ scales the vector by a certain amount $r$, and rotates each component by some angle $\theta$. This is the same as in the real case, where multiplication by $2$ and $-2$ are essentially the same up to changing the sign of the components. This is how I think of it anyway. Hope you find this helpful.
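A tiny concrete illustration (with an arbitrary vector in $\mathbb{C}^2$): the norm uses only the magnitudes, conjugation reverses each component's angle, and multiplying by a unit complex number rotates every component without changing the norm.

```python
import cmath

v = [3 + 4j, 1 - 1j]                              # arbitrary vector in C^2

norm = sum(abs(c) ** 2 for c in v) ** 0.5          # arguments ignored, as in R^n
conj = [c.conjugate() for c in v]                  # reverse each component's angle
rotated = [cmath.exp(1j * 0.7) * c for c in v]     # rotate each component by 0.7 rad

assert abs(norm - 27 ** 0.5) < 1e-12               # |3+4i|^2 + |1-i|^2 = 25 + 2
assert abs(sum(abs(c) ** 2 for c in rotated) ** 0.5 - norm) < 1e-12
assert abs(cmath.phase(conj[0]) + cmath.phase(v[0])) < 1e-12
```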
Using the product of the roots of $(z+1)^n=1$ to prove that $\prod_{k=1}^{n-1} \sin\frac{k\pi}{n}=\frac{n}{2^{n-1}}$
Hint. Note that $$\prod_{k=1}^{n-1}\sin\left(\frac{k\pi}{n}\right)=\prod_{k=1}^{n-1}\frac{e^{i\frac{k\pi}{n}}-e^{-i\frac{k\pi}{n}}}{2i}= \frac{e^{-i\frac{n(n-1)\pi}{2n}}}{(2i)^{n-1}}\prod_{k=1}^{n-1}\left(e^{i\frac{2k\pi}{n}}-1\right) =\frac{(-1)^{n-1}\prod_{k=2}^{n}z_{k}}{2^{n-1}}.$$ where $$\frac{(z+1)^n-1}{z}=z^{n-1}+nz^{n-2}+\dots +n=(z-z_2)\dots(z-z_n).$$ Can you take it from here?
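A quick numerical check of the target identity for small $n$:

```python
import math

# prod_{k=1}^{n-1} sin(k*pi/n) should equal n / 2^(n-1)
for n in range(2, 12):
    p = math.prod(math.sin(k * math.pi / n) for k in range(1, n))
    assert abs(p - n / 2 ** (n - 1)) < 1e-12
```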
complement of zero set of holomorphic function is connected
Suppose $\overline{A}\cap\overline{B}\cap Z = \varnothing$. Since $A$ and $B$ are open (in $U$, or equivalently in $\mathbb{C}^n$), we have $$\overline{A}\cap B = \varnothing = A\cap\overline{B},$$ and thus $\overline{A}\cap \overline{B} \subset Z$. The supposition thus implies that $\overline{A}$ and $\overline{B}$ are disjoint, and thus $$\varnothing = \overset{\Large\circ}{Z} = U\setminus \overline{U\setminus Z} = U\setminus \overline{A\cup B} = U\setminus (\overline{A}\cup \overline{B}),$$ which means that $U$ is the disjoint union of the nonempty closed sets $\overline{A}$ and $\overline{B}$, and therefore $U$ is not connected. This contradicts the premise that $U$ is connected, hence the supposition $\overline{A}\cap\overline{B}\cap Z = \varnothing$ must have been wrong. So the conclusion that $\overline{A}\cap\overline{B}\cap Z \neq \varnothing$ follows if $Z$ is any nowhere dense closed subset of $U$. Since there are nowhere dense closed sets $F\subset U$ such that $U\setminus F$ is not connected, you need special properties of the zero sets of holomorphic functions to conclude that $U\setminus Z$ must be connected. Off the top of my head, I can't think of another way than the Riemann extension theorem.