Valid proof regarding complexity class?
This is incorrect. If $L$ were in $RP$, there would be a deterministic TM (taking a second input of random bits) that accepts with probability 2/3 on a correct input and rejects with probability 1 on an incorrect input, in polynomial time. You're correct that $BPP$ gives you the first condition, but $NP$ makes no guarantees about the second: it only says that there's a nondeterministic TM that accepts/rejects $x$ in polynomial time. As of this writing, the only known relationship between $NP$ and $RP$ is that $RP\subseteq NP$. Since it's also true that $RP\subseteq BPP$, your conclusion would imply that $BPP\cap NP = RP$, which would be pretty significant.
Global Extrema on a Multivariable function
For positive $y$ values the dominant term is $e^{3y}$, so there is no absolute maximum. For negative $y$ values the dominant term is $x^3$, so there is no absolute minimum. There is a local minimum, though, at $(1,0)$. First derivatives are zero when $$ \begin{cases} 3 x^2-3 e^y=0\\ 3 e^{3 y}-3 x e^y=0\\ \end{cases} $$ $$H(x,y)=\left( \begin{array}{rr} 6 x & -3 e^y \\ -3 e^y & 9 e^{3 y}-3 x e^y \\ \end{array} \right) $$ $$\det(H(x,y))=-18 x^2 e^y+54 x e^{3 y}-9 e^{2 y};\;\det(H(1,0))=27>0$$ The determinant of the Hessian matrix is positive at $(1,0)$, and $f_{xx}(1,0)=6>0$, therefore it is a local minimum.
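A quick symbolic check with sympy (assumed available); the function $f=x^3-3xe^y+e^{3y}$ is reconstructed from the gradient above, which is an assumption, but it matches $f_x=3x^2-3e^y$ and $f_y=3e^{3y}-3xe^y$:

```python
from sympy import symbols, exp, hessian

x, y = symbols('x y', real=True)
f = x**3 - 3*x*exp(y) + exp(3*y)   # reconstructed from the stated gradient
H = hessian(f, (x, y))
H10 = H.subs({x: 1, y: 0})
print(H10, H10.det())              # det = 27 > 0 and f_xx = 6 > 0: local min
```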
Given that $a$ and $b$ are integers satisfying $3 \mid ab(a + b) + 2$, prove that $9 \mid ab(a + b) + 2$.
At first, work $\pmod 3$. Clearly we can't have either $a,b\equiv 0 \pmod 3$ so by symmetry we only have to consider $3$ pairs $$(a,b)\in \{(1,1),(1,2), (2,2)\}$$ A quick computation shows that only $(2,2)$ works so we must have $a=3m+2,b=3n+2$ for some integers $m,n$. We now check that $$(3m+2)(3n+2)(3m+3n+4)+2\equiv 0\pmod 9$$ and we are done. May be worth noting: to do the final check you don't need to multiply everything out (though that's not all that hard). It's clear that the coefficients of $n^2,m^2,mn$ are divisible by $9$. The coefficients of $m,n$ are both $3\times 2\times 4+2\times 2 \times 3=24+12=36\equiv 0 \pmod 9$ so the product is $16\pmod 9$ and we are done (since adding $2$ to the product gives us $18$ which of course is $0\pmod 9$). Note: I believe this is substantially similar to the argument you give in your post, but your version appears to be more complex and involved. I think the way I've written it gets at the main issues quickly.
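A minimal brute-force sanity check of the statement, using nothing beyond plain Python:

```python
# Whenever 3 | ab(a+b)+2, check that 9 | ab(a+b)+2 as well.
for a in range(-60, 61):
    for b in range(-60, 61):
        v = a * b * (a + b) + 2
        if v % 3 == 0:
            assert v % 9 == 0, (a, b)
print("claim verified for all |a|, |b| <= 60")
```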
Why this answer does not provide counter-example for $C([0,1/2],||\cdot||_\infty)$
That answer makes no sense, because in that answer the number $\varepsilon$ is $4^{-N}$; it should be a fixed number greater than $0$.
Lax-Milgram theorem on Evans. If the mapping is injective why do we need to prove uniqueness again?
Step 6 is indeed redundant. One can see that the argument used in Step 6 is essentially identical to that used for proving the 1-1 property of $A$. Evans might just think it is clearer for students to put in the extra step.
Generalizing $f(n)=\int_0^\infty \frac{1}{e^{x^n}+1}=\left(1-2^{(n-1)/n}\right )\zeta(n^{-1})\Gamma(1+n^{-1})$
$$ \begin{align} \int_0^\infty\frac{\mathrm{d}x}{e^{x^n}+1} &=\frac1n\int_0^\infty\frac{x^{\frac1n-1}\,\mathrm{d}x}{e^x+1}\\ &=\frac1n\int_0^\infty x^{\frac1n-1}\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\,\mathrm{d}x\\ &=\frac1n\sum_{k=1}^\infty\frac{(-1)^{k-1}}{k^{1/n}}\int_0^\infty x^{\frac1n-1}e^{-x}\,\mathrm{d}x\\[3pt] &=\frac1n\eta\left(\frac1n\right)\Gamma\left(\frac1n\right)\tag{1} \end{align} $$ where $\eta(s)$ is the Dirichlet eta function: $$ \begin{align} \eta(s) &=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n^s}\\[6pt] &=\left(1-2^{1-s}\right)\zeta(s)\tag{2} \end{align} $$ and as you say, $\eta(1)=\log(2)$. See this recent answer.
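A numerical check of $(1)$ for a few values of $n$; this sketch assumes the mpmath library is available:

```python
from mpmath import mp, quad, exp, gamma, zeta, inf, mpf

mp.dps = 25
for n in [2, 3, 5]:
    lhs = quad(lambda x: 1 / (exp(x**n) + 1), [0, inf])
    s = mpf(1) / n
    rhs = (1 - 2**(1 - s)) * zeta(s) * gamma(s) / n  # (1/n) eta(1/n) Gamma(1/n)
    print(n, lhs, rhs)  # the two columns agree
```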
Calculating probability using joint density
Your calculation is fine. You can double-check your result by changing the order of integration. $$\int_0^{\infty} \int_y^{\infty} 2\cdot e^{-(x+2y)} \, dx \, dy=\frac23$$
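The same number can also be obtained numerically; a sketch assuming scipy is installed (note that dblquad integrates its first lambda argument, here $x$, as the inner variable):

```python
import numpy as np
from scipy.integrate import dblquad

# Outer: y from 0 to infinity; inner: x from y to infinity.
val, err = dblquad(lambda x, y: 2 * np.exp(-(x + 2 * y)),
                   0, np.inf, lambda y: y, lambda y: np.inf)
print(val)  # ~0.6667 = 2/3
```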
How to prove a Ramanujan-type series for Pi?
Re-typeset version of Ramanujan's 1914 article Modular equations and approximations to π: http://ramanujan.sirinudi.org/Volumes/published/ram06.pdf (number 6 from http://ramanujan.sirinudi.org/html/published_papers.html). Lorenz Milla, A detailed proof of the Chudnovsky formula with means of basic complex analysis: https://arxiv.org/abs/1809.00533.
What is the intuition behind a low-rank covariance matrix?
One way this matrix can be low-rank is if $X$ and $Y$ are linear functions of the same random vector $Z$, which is itself much lower-dimensional. For simplicity, suppose that $X$ and $Y$ are mean zero, $n$-dimensional random vectors, so that $K=\mathbb{E}[XY^T]$. As an example, let $A,B\in \mathbb{R}^{n\times k}$, and let $Z\sim \mathcal{N}(0,I_k)$ be a standard $k$-dimensional Gaussian, where $k\ll n$. Then let $X=AZ$ and $Y=BZ$. In this case, $X$ and $Y$ depend linearly on the same randomness, so we would guess the covariance rank to be of order $k$, not $n$. Indeed, we find that \begin{equation} K=\mathbb{E}[XY^T]=\mathbb{E}[AZZ^TB^T]=A\mathbb{E}[ZZ^T]B^T=AB^T. \end{equation} Notice that $AB^T$ has rank at most $k$, so it is low-rank.
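A minimal numerical sketch of this construction (numpy assumed; the sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
A = rng.standard_normal((n, k))
B = rng.standard_normal((n, k))
K = A @ B.T                      # = E[X Y^T] for X = AZ, Y = BZ, Z ~ N(0, I_k)
print(np.linalg.matrix_rank(K))  # 3: the rank is at most k, not n
```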
Find coefficient of $X^{12}$
You have to use the binomial theorem: $(a+b)^n=\sum_{k=0}^{n}\binom{n}{k}a^{n-k}b^k$. In this case you have $a=1$, $b=-2X$ and $n=19$. Therefore the term which contains $X^{12}$ is the term in which $k=12$; for this $k$, we have that $[X^{12}]=\binom{19}{12} 1^{19-12}(-2)^{12}=\binom{19}{12} (-2)^{12}$. Here $[X^{12}]$ denotes the coefficient of $X^{12}$ in the expansion.
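A quick check of the coefficient with sympy (assumed available):

```python
from sympy import symbols, Poly, binomial

X = symbols('X')
p = Poly((1 - 2*X)**19, X)
print(p.coeff_monomial(X**12))       # coefficient read off the expansion
print(binomial(19, 12) * (-2)**12)   # the formula above; both agree
```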
Function $f(x,y)=u(x)v(y)$ differentiable of 2 variables
Yes, because the functions $a(x,y) = u(x)$ and $b(x,y) = v(y)$ are differentiable (as two variable functions), hence the product of $a$ and $b$, $f(x,y) = a(x,y) b(x,y)$, is differentiable.
Using binomial theorem to prove an identity
After you take the first derivative, you have $$n(x+1)^{n-1}=\sum_{k=1}^nkx^{k-1}\binom{n}k\;.$$ Now multiply both sides by $x$ before you differentiate again. Here’s an alternative that uses the binomial theorem only to evaluate $\sum_k\binom{m}k=2^m$. Using the fact that $k\binom{n}k=n\binom{n-1}{k-1}$, we can calculate $$\begin{align*} \sum_kk^2\binom{n}k&=\sum_kkn\binom{n-1}{k-1}\\ &=n\sum_kk\binom{n-1}{k-1}\\ &=n\sum_k(k+1)\binom{n-1}k\\ &=n\sum_kk\binom{n-1}k+n\sum_k\binom{n-1}k\\ &=n\sum_k(n-1)\binom{n-2}{k-1}+n2^{n-1}\;, \end{align*}$$ and I expect that you can finish it from there.
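For reference, the last line collapses to $n(n-1)2^{n-2}+n2^{n-1}=n(n+1)2^{n-2}$; here is a quick numerical confirmation of that closed form with the standard library:

```python
from math import comb

for n in range(2, 12):
    lhs = sum(k * k * comb(n, k) for k in range(n + 1))
    print(n, lhs == n * (n + 1) * 2**(n - 2))  # True for each n
```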
$\mathbb{Z} \times \mathbb{Z}$ and application of isomorphism theorems
I think that you are possibly confusing two different subgroups. (i) $G=\mathbb{Z}\times\mathbb{Z}$ has a subgroup $H_1=\{(2k,3k)| k\in\mathbb{Z}\}$, and then $G/H_1\simeq\mathbb{Z}$. (ii) $G=\mathbb{Z}\times\mathbb{Z}$ has a subgroup $H_2=\{(2k,3\ell)| k,\ell\in\mathbb{Z}\}$, and then $G/H_2\simeq\mathbb{Z}_2\times\mathbb{Z}_3\simeq\mathbb{Z}_6$. Of course $H_1\leqslant H_2$, but they are not equal.
Applications of the lack of compactness of the closed unit ball in infinite-dimensional Banach spaces
One of the simplest consequences is that a continuous function need not attain its infimum on the closed unit ball or unit sphere, a fact that is always true in finite-dimensional spaces by the Weierstrass theorem. Consider for example the integral functional $\int_0^1|\cdot|:(C([0,1]),\Vert\cdot\Vert_{\infty})\to\mathbb{R}$, whose infimum over the unit sphere is zero but is never attained there: any continuous $f$ with $\Vert f\Vert_\infty=1$ has $\int_0^1|f|>0$, yet the integral can be made arbitrarily small. Another interesting thing is that it is easy to find infinitely many elements of the ball which are $\epsilon$-separated. For an example consider the characteristic functions $\{f_n=\chi_{[n,n+1]}(x)\}_{n\in\mathbb{N}}\subset L^\infty(\mathbb{R})$. One can easily check that they belong to the unit ball of $L^\infty$ and are $1$-separated, i.e. $\Vert f_n-f_m\Vert_\infty=1$ for each $n\ne m$.
Why called double centralizer property?
Let $S$ be the ring of all endomorphisms of $A$ as an abelian group. We can identify $R$ with a subring of $S$ via the left action on $A$. Then $D$ is exactly the centralizer of $R$ inside $S$, and $\operatorname{End}(A_D)$ is exactly the centralizer of $D$ inside $S$. So to say that $f$ is an isomorphism is the same as saying that $R$ is equal to its own double centralizer, as a subset of $S$.
Normalizing Matrix wrt Exponential
Correct me if I'm wrong, but I suspect you're trying to implement the Sinkhorn algorithm in a numerically stable way. In that case you are constructing a kernel $K := e^{-\frac{C}{\varepsilon}}$, where $C$ is some cost matrix and division by a regularization constant $\varepsilon>0$ is meant elementwise. So if elements in $C$ get large, elements in $K$ get closer to zero and numerically you might get underflows. One important thing here is that $K$ is not the cost matrix, but $C$ is - and since the absolute cost from one unit of mass to another is not important for our transport problem, but only the relative cost, we can rescale $C$ by some factor, for example divide it by the largest value in $C$.
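A tiny numerical illustration of the rescaling (numpy assumed; the cost values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.uniform(0.0, 1e4, size=(5, 5))   # large absolute costs
eps = 1.0

K_raw = np.exp(-C / eps)                 # entries underflow to exactly 0
K_scaled = np.exp(-(C / C.max()) / eps)  # same relative costs, no underflow
print(K_raw.min(), K_scaled.min())
```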
Rolle's Theorem $2+\sin(2x)-a\cdot\sin(x)\cos(x)=0$
Just note that since $\sin x \cos x = \frac 12 \sin (2x)$, the equation becomes $$ 2 + \sin (2x)-\frac a2 \sin (2x) = 0\Leftrightarrow \sin(2x) = \frac{-2}{1-\frac a2} = -\frac{4}{2-a} $$ You get solutions if $-1 \leq -\frac{4}{2-a} \leq 1$.
Do all the solutions have to be in an affine variety?
Yes, by definition, $V(X)$ means all the solutions. It's perfectly ok to talk about open subsets of this set, e.g. $V(X)$ minus the origin --- the fancy name for such things is quasi-affine varieties --- but you can't give those the same name!
For $M\otimes_R N$, why is it so imperative that $M$ be a right $R$-module and $N$ a left $R$-module?
Try tensoring a left $R$-module $M$ with a left $R$-module $N$. This tensor product should be comprised of "tensors" $m\otimes n$ (sorry for denoting them by the same notation as usual tensors; they don't really deserve this) satisfying $rm\otimes n = m\otimes rn$ for all $r\in R$. Now, for any $r,s\in R$, we must have $rsm\otimes n=sm\otimes rn=m\otimes srn$, but at the same time $rsm\otimes n=m\otimes rsn$ (because we can move the $rs$ as a whole across the tensor sign). Thus, $m\otimes srn=m\otimes rsn$. In other words, $m\otimes\left(rs-sr\right)n=0$. Similarly, $\left(rs-sr\right)m\otimes n=0$. But this means that our fake tensor product $M\otimes N$ does not depend on the left $R$-modules $M$ and $N$, but only on the left $R / \left(\mathrm{Ab} R\right)$-modules $M / \left(\left(\mathrm{Ab} R\right) M\right)$ and $N / \left(\left(\mathrm{Ab} R\right) N\right)$, where $\mathrm{Ab} R$ is the commutator ideal of $R$ (that is, the ideal generated by all differences of the form $rs-sr$ with $r,s\in R$). And it is actually exactly the tensor product of these two $R / \left(\mathrm{Ab} R\right)$-modules over the commutative ring $R / \left(\mathrm{Ab} R\right)$. This tensor product, of course, can be reinterpreted as a tensor product of a left module with a right module (since over a commutative ring, modules can be switched from left to right at will). So the notion of the tensor product of a right module with a left module is more reasonable and completely encompasses the notion of a "tensor product" of two left modules.
How many ways of assigning beds are possible?
There are $3!$ ways the pairs can be assigned rooms. In EACH room there are $2=2!$ ways the twins can be assigned beds, and there are $3$ rooms, hence you get $3$ factors of $2!$. Does that help? For b: there are $6!$ possible sleeping arrangements in all, so the probability of each pair sharing a room is $(3!)(2!)(2!)(2!)/6!=48/720=1/15$.
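A brute-force confirmation of the $1/15$, assigning 6 labeled people to 6 beds grouped two per room (plain Python):

```python
from itertools import permutations
from math import factorial

pairs = [(0, 1), (2, 3), (4, 5)]   # the three twin pairs, people 0..5
rooms = [(0, 1), (2, 3), (4, 5)]   # beds, grouped two per room
good = sum(all(any({p[b] for b in r} == set(t) for r in rooms) for t in pairs)
           for p in permutations(range(6)))
print(good, factorial(6), good / factorial(6))   # 48 720 0.0666... = 1/15
```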
Proof $f(x)\equiv 0$
1) We have $f(x)\geq 0$ for all $x$, as you have shown. 2) Let $a\in \mathbb{R}$ be fixed; put, for $x\geq a$, $\displaystyle F(x)=\int_a^x f(t)^2dt$. We have $\displaystyle F^{\prime}(x)=f(x)^2$, hence $\displaystyle f(x)=\sqrt{F^{\prime}(x)}$, and we get $\displaystyle F(x)-f(a)\leq \sqrt{F^{\prime}(x)}$ for $x\geq a$. 3) Suppose now that there exists $b>a$ such that $F(b)-f(a)>0$. Then we have $F(x)-f(a)>0$ for all $x\geq b$, and so $\displaystyle \frac{F^{\prime}(x)}{(F(x)-f(a))^2}\geq 1$. We integrate from $b$ to $x>b$ and get $$-\frac{1}{F(x)-f(a)}+\frac{1}{F(b)-f(a)}\geq x-b$$ hence $$\frac{1}{F(b)-f(a)}\geq x-b$$ If we let $x\to +\infty$, we get a contradiction. Hence for all $a\in \mathbb{R}$ and $x\geq a$, we have $F(x)\leq f(a)$. 4) Fix $b\in \mathbb{R}$ and put $\displaystyle G(x)=\int_x^bf(t)^2dt$ for $x<b$. We have $G^{\prime}(x)=-f(x)^2$ and $G(x)\leq f(x)$ by 3). Suppose that there exists $a<b$ such that $G(a)>0$. Then we have $G(x)>0$ for all $x\leq a$, and $G(x)\leq \sqrt{-G^{\prime}(x)}$. Hence $\displaystyle -\frac{G^{\prime}(x)}{G(x)^2}\geq 1$ for $x\leq a$. We integrate from $x<a$ to $a$ and get $$\frac{1}{G(a)}-\frac{1}{G(x)}\geq a-x$$ and so $\displaystyle \frac{1}{G(a)}\geq a-x$. If $x\to -\infty$, we find a contradiction. 5) Hence $\displaystyle \int_a^b f(t)^2dt=0$ for all $a,b$ with $b>a$, and this implies $f\equiv 0$.
Isomorphism classes of $\mathbb{Z}[i]$ modules.
As you suggested, both of those are primes/irreducible in the PID $R=\mathbb{Z}[i]$, so $M \cong R/(a_{1}+ib_{1}) \oplus R/(a_{2}+ib_{2}) \oplus R/(a_{3}+ib_{3}) \oplus \cdots$ where $(a_{1}+ib_{1}) \mid (a_{2}+ib_{2}) \mid (a_{3}+ib_{3}) \mid \cdots$. Now, the order of such a module is $(a_{1}^{2}+b_{1}^{2})(a_{2}^{2}+b_{2}^{2})\cdots$. Since it is equal to $5$, you can conclude that $a_{2}+ib_{2}$ and everything coming after are units, and so one must have $a_{1}^{2}+b_{1}^{2}=5$. This being said, $(a_{1},b_{1})=(1,2)$ or $(-1,2)$ or $(1,-2)$ or $(-1,-2)$, or $(2,1)$, or $(2,-1)$, and so on. Notice that many solutions will give you the same ideal! I think you will only have two ideals, the ones you mentioned! And so there are only two types of $\mathbb{Z}[i]$-modules of order $5$: $\mathbb{Z}[i]/(2-i)$ and $\mathbb{Z}[i]/(2+i)$. Those are not isomorphic as $\mathbb{Z}[i]$-modules, as they are cyclic and have different annihilators!
If $x,y,z>0$ and $x^2+7y^2+16z^2=1\;,$ Then $\min(xy+yz+zx)$
Well, $xy+yz+zx>0$ obviously, but can be made as close to zero as needed say when $x\to 1, y=z\to 0$, so it doesn't have a minimum, and the infimum is $0$.
calculating value for tangent in complex polar form.
First, we have for $z=x+iy$, with $x,y\in \mathbb{R}$ $$|z|=\sqrt{x^2+y^2}$$ and $$\arg(z)=\text{atan2}(y,x)$$ where $\text{atan2}(y,x)$ is defined HERE. Therefore, we see that $$\frac{-2+i4}{4+i5}=\frac{(-2+i4)(4-i5)}{41}=\frac{12}{41}+i\frac{26}{41}$$ such that $$\left|\frac{-2+i4}{4+i5}\right|=2\sqrt{\frac{5}{41}}$$ and $$\arg\left(\frac{-2+i4}{4+i5}\right)=\arctan(13/6)$$
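A numerical spot-check with Python's standard library:

```python
import cmath, math

z = (-2 + 4j) / (4 + 5j)
print(z)                                   # (12 + 26j)/41
print(abs(z), 2 * math.sqrt(5 / 41))       # the two moduli agree
print(cmath.phase(z), math.atan(13 / 6))   # the two arguments agree
```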
Uncountable set of $\Bbb R^n$ that does not contain any perfect subset
Assuming the axiom of choice, such sets do indeed exist. This is a straightforward transfinite recursion argument: Fix a listing $\{P_\eta: \eta<2^{\aleph_0}\}$ of the set of perfect subsets of $\mathbb{R}^n$ (crucially, there are only continuum many perfect sets!). Now we build a sequence of pairs of sets $(In_\eta, Out_\eta)_{\eta<2^{\aleph_0}}$ as follows: We set $In_0=Out_0=\emptyset$. At stage $\eta+1$, we pick some point $r\in\mathbb{R}^n\setminus (In_\eta\cup Out_{\eta})$ and some $s\in P_\eta\setminus In_\eta$, and set $$In_{\eta+1}=In_\eta\cup\{r\}, \quad Out_{\eta+1}=Out_\eta\cup\{s\}.$$ At a limit stage $\lambda$ we set $$In_\lambda=\bigcup_{\alpha<\lambda} In_\alpha,\quad Out_\lambda=\bigcup_{\alpha<\lambda} Out_\alpha.$$ Then it's easy to check that $\bigcup_{\eta<2^{\aleph_0}} In_\eta$ is uncountable but contains no perfect subset. However, if the axiom of choice fails, this might not work: it is consistent with ZF that every uncountable set has a perfect subset. The most natural way this can happen is if the Axiom of Determinacy (AD) holds. This has the drawback that ZF+AD has strictly greater consistency strength than ZF. A more artificial model of ZF + "every uncountable set has a perfect subset" can be constructed merely from the assumption that ZF is consistent; this was done by Truss. SEMI-RELATED (but hopefully interesting) CODA: The necessity of choice suggests that "naturally occurring" sets probably won't be counterexamples. Say that a class $\mathcal{S}$ of sets has the perfect set property if for all $X\in\mathcal{S}$, either $X$ is countable or $X$ contains a perfect subset; can we prove that "large" classes of sets have the perfect set property? The answer turns out to be yes. ZFC proves that the class of Borel sets - indeed, the class of analytic sets - has the perfect set property. And while ZFC doesn't prove that the class of coanalytic (= complement is analytic) sets has the perfect set property, there are natural axioms which do prove this when added to ZFC - namely, large cardinal axioms. In general, the following is a good (informal) heuristic: If $X$ is a "nicely definable" set of reals, then - possibly assuming large cardinals - $X$ is either countable or contains a perfect subset. The study of regularity properties (e.g. perfect set property, Lebesgue measurability, property of Baire, ...) of "nicely definable" sets is (one aspect of) descriptive set theory. Over time, one of the fundamental aspects of descriptive set theory has turned out to be the importance of determinacy principles and the central role of large cardinal axioms; Larson has a nice paper on the history of this, and for further detail I would recommend Kanamori's book.
Permutation and Combination problem : In how many ways can Rs. 16 be divided among 4 persons when none of them gets ...
The easiest way to solve it is to imagine giving each person Rs. $3$ right away and then distributing the remaining Rs. $4$ arbitrarily. In other words, reduce the problem to distributing Rs. $4$ amongst $4$ people with no restrictions: you have the formula for that, with $n=r=4$: $$\binom{4+4-1}{4-1}=\binom73=35\;.$$
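A direct enumeration confirming $\binom{7}{3}=35$ (plain Python):

```python
from itertools import product

# Rs. 16 among 4 people, each receiving at least Rs. 3.
count = sum(1 for a in product(range(3, 17), repeat=4) if sum(a) == 16)
print(count)  # 35
```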
Proving Lebesgue measurability of Dirichlet-like functions
The functions $x \mapsto e^x$ and $x \mapsto x e^x$ are both continuous, hence measurable (you can also use your criterion to show that they are increasing, hence measurable). Using the fact that $\mathbb{Q} \cap [0,1]$ is measurable, and that sums and products of measurable functions are measurable, we find that $$ F(x) = \mathbf{1}_{\mathbb{Q} \cap [0,1]} \, e^x + \mathbf{1}_{\mathbb{Q}^c \cap [0,1]} \,x e^x $$ is measurable as well.
Polynomial and its derivative have a common factor?
Hint: working in $\mathbb C$, we have $$p(x) = \prod_{i=1}^k\left(x-\alpha_i\right)^{n_i}$$ where $\alpha_1,\dots,\alpha_k$ are the distinct roots of $p(x)$ in $\mathbb C$. What is $p'(x)$? If $\gcd(p(x), p'(x)) = 1$, what can you say about the roots of $p$? Extra hint: what happens if $n_i > 1$ for some $i$?
How does the Metropolis Algorithm work? (for idiots)
This answer is aimed at non-mathematicians and is intentionally put in layman's terms...

The Metropolis Algorithm

The Metropolis algorithm will move across a Markov chain using known values for the probability of being in a certain kind of state. It CAN be used to simulate (and therefore demonstrate) an even distribution across a fixed or infinite number of nodes. It has 3 distinct steps:

The Proposal
The Acceptance/Rejection
The Transition

This is based on a formula that calculates Acceptance/Rejection, which is: $P(a \to b) = min[1, π(b)/π(a)]$. Translated into English, this means: the probability of transitioning from node $a$ to node $b$ is the smaller of either certainty or the chance of being in node $b$ divided by the chance of being in node $a$. We need the "smaller of either..." bit because a probability cannot exceed 1, or Maths will explode.

The Scenario

Imagine you are magically deposited in the middle of a desert island. You want to move around and explore but also need to drink from the springs that you are surrounded by. You have a map of the island in your pocket. The world here moves in discrete time - i.e. chunks of 1 hour. No other unit of time is possible and this unit is atomic.

Example 1

For now, assume that the springs have no animals in them and there are no barriers to drinking from them. The springs on your map are arranged in a 3 x 3 grid and you are currently standing next to spring 5 - the middle spring/cell.

Step 1 - The Proposal: You are at spring 5. You choose another spring at random - say spring 2. In Maths language, $π(i)$ = the chance of being in node $i$, where $i$ in this case refers to the index of a spring; here the spring we propose to move to is spring 2, so we care about $π(2)$. Therefore our proposal is $P(spring5 \to spring2)$, or put simply, $P(5 \to 2)$.

Step 2 - The Acceptance/Rejection: We can use the Metropolis acceptance/rejection formula to calculate your chance of successfully moving between the two springs. We need to know what $P$ means: $P(5 \to 2)$ <- "I want to navigate from spring 5 to spring 2. What is the probability of me getting there successfully?" Therefore: $P(a \to b) = min[1, π(b)/π(a)]$, or to put it another way: $P(5 \to 2) = min[1, π(2)/π(5)]$. Now all we need to know is what $min$ means. $min$ = the minimum of 2 arguments; in this case, take the smaller of $1$ and $π(b)/π(a)$. We know what $π(i)$ is for every spring: it's $1/9$. We have 9 springs, they are not distinct and there aren't any barriers, so we could be in any 1 of them if we wandered around aimlessly for a gazillion hours. As we know $π(i)$, we can fill in the rest of the equation: $P(5 \to 2) = min[1, (1/9)/(1/9)]$. This resolves to $P(5 \to 2) = min[1, 1] = 1$. This means that it is CERTAIN that if we choose a spring, we will get there unimpeded. This means ACCEPT.

Step 3 - The Transition: We just have to update our state to $π(2)$. Success!

Example 2

Exactly the same as before, but we are near the ocean (in an "edge" node) and we want to know the chance of going into the sea by mistake (outside of the grid). Let's call the sea $x$.

Step 1 - The Proposal: $P(2 \to x)$

Step 2 - The Acceptance/Rejection: Everyone knows that the sea doesn't contain fresh water. You would never, ever make this mistake. Therefore the probability of making this transition is 0: $P(2 \to x) = min[1, 0/(1/9)] = min[1, 0] = 0$. This means REJECT.

Step 3 - The Transition: The transition proposal failed. There was no transition. You did not move into the sea and drown. Good for you! You stayed put.
You remain in $π(2)$.

Example 3

The springs are no longer in a 3 x 3 grid but an infinite grid. There is no advantage to the springs being in a finite grid: the maths for this works exactly the same if the grid is infinite. Instead of having 9 springs, we can say that $1/9$th of our springs are of type $a$, another $1/9$th of our springs are of type $b$, etc. Here are the 3 steps.

Step 1 - The Proposal: $P(a \to b)$

Step 2 - The Acceptance/Rejection: $P(a \to b) = min[1, (1/9)/(1/9)] = min[1, 1] = 1$ (only the proportions matter, not the number of springs).

Step 3 - The Transition: $π(b)$

Example 4

Let's go back to the 9 springs arranged in a grid. 4 of the springs now have orange juice and 5 of the springs have vodka. We label the ones that have orange juice as $y$ and the ones with vodka as $z$. We are currently at a vodka spring and now, utterly inebriated, we want some orange juice. We don't care which spring it is - as long as it has orange juice. Being totally drunk as we are, we don't even know which spring we're in. Thankfully, that doesn't matter. So, we just have to pick a direction at random. What we want to know is the answer to the following question: "Given that I am in a spring of vodka, what is the chance of successfully going to a spring of orange juice if I walk to a random spring?"

Step 1 - The Proposal: $P(z \to y)$

Step 2 - The Acceptance/Rejection: $P(z \to y) = min[1, (4/9)/(5/9)] = min[1, 0.8] = 0.8$

Step 3 - The Transition: This part isn't so black and white any more. We have an 80% chance of transition ACCEPTANCE and a 20% chance of REJECTION. This implies there are 2 things that could happen: we could successfully move from a vodka spring to an orange spring with 80% probability, or we could fail to transition (pass out, walk into a tree, etc.) with 20% probability, in which case we do NOT transition and remain in $π(z)$.

So to answer the question "What is the chance of going from a vodka spring to an orange spring if I walk to a random spring": $4/9 \times 4/5 = 16/45 \approx 36\%$.

ChanceOfChoosingAnOrangeSpring * ChanceOfSuccessfulTransition = ChanceOfTransitioningToAnOrangeSpring

How we actually compute this is irrelevant (dice roll, random number, etc.). Also, note that if we go from orange to vodka, we are not drunk and therefore the transition probability is 1 (certain).
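For readers who want to see the three steps in action, here is a minimal generic sketch in Python; the four-state target distribution is an arbitrary illustrative choice, and the proposal simply picks any state uniformly at random:

```python
import random
from collections import Counter

random.seed(0)
pi = [0.1, 0.2, 0.3, 0.4]      # target distribution over 4 "springs"
state, visits = 0, Counter()

for _ in range(200_000):
    proposal = random.randrange(len(pi))              # Step 1: The Proposal
    if random.random() < min(1, pi[proposal] / pi[state]):
        state = proposal                              # Steps 2-3: accept & move
    visits[state] += 1                                # on reject: stay put

print([round(visits[s] / 200_000, 3) for s in range(len(pi))])  # ~ pi
```

After enough steps, the fraction of time spent in each state approaches $π$, which is exactly the "even distribution" behaviour described above when $π$ is uniform.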
finding a solution of an autonomous differential equation with two variables
You are supposed to see that on the unit circle you have $$ 1-x_1^2-x_2^2=0 $$ and that the vector field is tangential to the unit circle. Uniqueness then provides the separation of the inside and outside of the unit circle.
Interesting Difference between Lebesgue and Riemann Integral
Perhaps you already know most of this, but here are some things to consider. There is only one definition of Riemann integrability, and it must be very restrictive in order to work. I am not talking about improper integrals here. On the other hand, an effective notion of Lebesgue integrability can be defined hierarchically as these restrictive conditions are weakened. Start with sets of finite measure $E \subset \mathbb{R}$ and bounded functions $f:E \to \mathbb{R}$.

(1) Strictly speaking the Riemann integral is defined for functions on a closed and bounded interval $[a,b]$. Also, it is necessary for the function to be bounded to meet the requirement that there exists $I \in \mathbb{R}$ such that for any $\epsilon > 0$ there exists a partition $P_\epsilon$ of $[a,b]$ such that for any partition $P$ that is a refinement of $P_\epsilon$ and any Riemann sum $S(P,f)$, we have $|S(P,f) - I| < \epsilon$. That $f$ must be bounded is not just an arbitrary part of the definition. It is, of course, possible to extend the definition to open intervals or even general subsets $E$ of finite measure with $\int_E f$ defined as $\int_a^b f(x) \chi_E(x) \, dx$. Nevertheless, the definition of Riemann integrability can only be met when the measure of the boundary $\partial E$ is $0$, and this is related to the notion of Jordan measurability. Clearly, there are bounded functions defined on sets of finite measure that are not Riemann integrable -- as with the Dirichlet function you mention -- and this is entirely due to "too much" discontinuity.

(2) Again for bounded functions on sets of finite measure, there always exist lower and upper Lebesgue integrals $$\underline{\int}_E f = \sup_{\phi \leqslant f} \int_E \phi, \quad \overline{\int_E} f = \inf_{\psi \geqslant f} \int_E \psi,$$ where $\phi$ and $\psi$ are simple functions, and we must have $$\underline{\int}_E f\leqslant \overline{\int_E} f$$ The most basic definition in this restrictive case is that $f$ is "Lebesgue integrable" on $E$ if $$\underline{\int}_E f = \overline{\int_E} f$$ There are two important theorems for bounded functions on finite measure sets.

Theorem 1: If a function is Riemann integrable then it is Lebesgue integrable.

Theorem 2: A function is Lebesgue integrable if and only if it is measurable.

An important consequence of Theorem 1 is that the class of Lebesgue integrable functions includes the class of Riemann integrable functions. An important consequence of Theorem 2 is that, similar to the Riemann integral, there exist bounded functions defined on a set of finite measure that are not Lebesgue integrable. To see this take $E$ as a non-measurable set and consider the function $\chi_E$. You do raise an interesting question of why the Lebesgue integral is less impacted by the extent of discontinuity as long as we have measurability.

Next consider sets of infinite measure $E \subset \mathbb{R}$ and/or unbounded functions $f:E \to \mathbb{R}$. Here we cannot even speak of Riemann integrals, yet the Lebesgue integral can be extended. First, we extend to nonnegative functions, where the Lebesgue integral can be defined using the previous definition as the supremum of $\int_E g$ over all nonnegative, bounded, measurable functions $g$ with compact support in $E$. In this case the integral may take the value $+\infty$, so satisfaction of this definition alone does not mean that $f$ is Lebesgue integrable. For nonnegative $f$ to be Lebesgue integrable we must have $\int_E f < +\infty$.
The reason for this definition of Lebesgue integrability is to make it possible to extend the definition of the integral further to include general functions. In this case, we consider positive and negative parts $f^+$ and $f^-$ (which are themselves nonnegative functions) and define the Lebesgue integral as $$\tag{*}\int_E f = \int_Ef^+ - \int_E f^-$$ Since $\infty - \infty$ cannot be defined in a meaningful way, this explains why Lebesgue integrability of a nonnegative function stipulates that the Lebesgue integral is finite; otherwise, (*) is not well defined. In this way, Lebesgue integrability of a general function $f$ implies that we also have $$\int_E|f| = \int_Ef^+ + \int_E f^- < +\infty$$

Improper Riemann Integrals

In your question, you cite functions like $x \mapsto 1/x$ on $(0,1]$ and $x \mapsto 1/\sqrt{x}$ on $[1, \infty)$ as examples where the Lebesgue integral "fails". Needless to say, these functions are not Riemann integrable, but we can say that we have well-defined Lebesgue integrals $$\int_{(0,1]} \frac{1}{x} = +\infty , \quad \int_{[1,\infty)} \frac{1}{\sqrt{x}} = +\infty$$ We just cannot say these functions are Lebesgue integrable, as explained above. Some of the deficiencies of the Riemann integral can be corrected by introducing the improper Riemann integral. We can even find examples where a function is improperly Riemann integrable but not Lebesgue integrable. Perhaps that should be considered as well in assessing the relative merits of Riemann and Lebesgue integration.
Replace all variables in $Γ$ and $φ$ such that given $Γ\vdash φ$ we can derive $Γ'\vdashφ'$
I assume that you are working with (a) a propositional language with letters $p,q,\dots$ and connectives $\neg$ and $\to$; (b) a set of axioms which are all instances of some specified set of formulas; (c) a single rule of inference, MP: from $\Gamma\vdash (\alpha\to\beta)$ and $\Gamma\vdash\alpha$ deduce $\Gamma\vdash\beta$.

The first thing is to be clear that the function $\gamma\mapsto\gamma'$ is well-defined on the set of formulas. This is easy, because the formulas are defined recursively and we have unique readability: hence the map can be recursively defined by $p':=\theta$, $q':=q$ if $q$ is a propositional letter other than $p$, and then $(\neg\alpha)':=(\neg\alpha')$, $(\alpha\to\beta)':=(\alpha'\to\beta')$. We can then, for any set of formulas $\Delta$, put $\Delta':=\{\delta' \mid \delta\in\Delta\}$.

We now need a simple observation: if $\alpha$ is an instance of an axiom, so is $\alpha'$. To see this it does not matter what the axioms are precisely; we just need the recursive definition of $\gamma'$.

Now suppose we know that $\Gamma\vdash\phi$. That means we have a sequence of formulas $(\alpha_1,\dots,\alpha_k=\phi)$ where for each $i$ we have one of: (a) $\alpha_i\in\Gamma$; (b) $\alpha_i$ is an instance of an axiom; (c) (MP step) there exist $i_1,i_2<i$ with $\alpha_{i_1}=\beta$, $\alpha_{i_2}=(\beta\to\alpha_i)$.

We now assert that the sequence $(\alpha'_1,\dots,\alpha'_k=\phi')$ is a derivation of $\Gamma'\vdash\phi'$. To see this we note that (a) if $\alpha_i\in\Gamma$ then $\alpha'_i\in\Gamma'$; (b) if $\alpha_i$ is an instance of an axiom then so is $\alpha'_i$; (c) if there exist $i_1,i_2<i$ with $\alpha_{i_1}=\beta$, $\alpha_{i_2}=(\beta\to\alpha_i)$ then there exist $i_1,i_2<i$ (the same ones!) with $\alpha'_{i_1}=\beta'$, $\alpha'_{i_2}=(\beta\to\alpha_i)'$, and as we observed $(\beta\to\alpha_i)'=(\beta'\to\alpha'_i)$, so this is a proper MP step. (We could write this as an induction, but there is no need, since the derivation of $\phi$ translates line-for-line into a derivation of $\phi'$.)
Find a function that makes two regions of a rectangle have a certain proportion
Hint: we look for $f(x)$ such that $$(1+10)\int_0^x f(t)\,dt=xf(x)$$ (sum of the two areas $=$ area of the rectangle). Differentiating gives $$11f(x)=f(x)+xf'(x)$$ You can finish and get $$f(x)=\lambda x^{10}$$ The other solution satisfies $$\left(1+\frac{1}{10}\right)\int_0^x f(t)\,dt=xf(x)$$ which gives $$f(x)=\mu x^{1/10}$$
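A one-line symbolic verification of the first solution (sympy assumed; $\lambda$ cancels, so it is omitted):

```python
from sympy import symbols, integrate, simplify

x, t = symbols('x t', positive=True)
f = t**10
# 11 * (integral of f from 0 to x) minus x*f(x) should vanish:
print(simplify(11 * integrate(f, (t, 0, x)) - x * x**10))  # 0
```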
Existence of the limit of a subsequence of a double sequence
I'm not quite sure if I understood your question correctly, but knowing that $\displaystyle \lim_{n\rightarrow \infty} a(m,n) = L$ for all $m\in\mathbb{N}$ and that $m_n$ is a non-decreasing function of $n$ does not guarantee that $\displaystyle \lim_{n\rightarrow \infty} a(m_n,n)$ exists. Consider the following example: $$L:=0$$ $$a(m,n):= \begin{cases} 0, & \text{if $n>m^2$} \\ 1, & \text{if $n\leq m^2$} \end{cases} $$ $$m_n:= \lfloor \sqrt{n} \rfloor \text{ $\qquad$ for all $n\in \mathbb{N}$}$$ Clearly, $a(m_n,n)$ does not converge to $0$, despite the convergence for each fixed $m\in \mathbb{N}$. If we suppose that $\displaystyle \lim_{n,m\rightarrow \infty} a(m,n) = 0$ we get a contradiction, since for $\epsilon=\frac{1}{2}$ we can pick neither $N$ nor $(N,M)$ such that for all $n>N$ (respectively $n>N$ and $m>M$ in the second case) the distance between $a(m,n)$ and $0$ is no greater than $\epsilon$. The same argument goes for $N$ such that $n+m>N$. I hope this example is clear and provides an answer to your question.
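The counterexample can be checked directly (plain Python; math.isqrt computes $\lfloor\sqrt{n}\rfloor$):

```python
from math import isqrt

def a(m, n):
    return 0 if n > m * m else 1

print([a(isqrt(n), n) for n in range(1, 26)])
# 1 at the perfect squares, 0 elsewhere -- the subsequence never settles
```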
Finding the possible dimensions of the intersection of subspaces
The dimension of a subspace $V$ of $\Bbb R^n$ is determined by the dimension of its orthogonal complement $V^\perp$. $\operatorname{dim}V=n-\operatorname{dim}\ V^\perp$ where $$V^\perp=\{x\in \Bbb R^n| \langle x,y\rangle=0,\text{for all}\ y\in V\} $$ $\langle,\rangle$ is the usual dot product. $(V\cap Y)^\perp=V^\perp+Y^\perp$, so the question becomes equivalent to looking at possible dimensions for sums of vector spaces. For example if $V$ and $Y$ are 2-dimensional in $\Bbb R^4$, then $\operatorname{dim}(V^\perp+Y^\perp)=4,3,$ or $2$ by an easy argument. For $\Bbb R^3$, we have $\operatorname{dim}(V^\perp+Y^\perp)$ can only be $3$ or $2$, so we can conclude your cases.
$f_n$ uniformly absolutely continuous implies $f_n^+$ uniformly absolutely continuous?
Hint: $\int_A f^+\, d\mu = \int_{A\cap \{f\ge0\}} f\, d\mu.$
Integer Partition by Counting Repetition : Conjecture ??
It’s not clear that there’s a general phenomenon to be explained. Note that $n=2,3$, and $4$ behave somewhat differently: none of them ends up in a single one-element loop irrespective of the starting point. For $n=2$ we have the loop $$2,0\to1,1\to2,0\;,$$ a single two-element loop that does not depend on the starting point. For $n=3$ each starting point leads to the loop $$3,0,0\to1,2,0\to1,1,1\to3,0,0\;.$$ For $n=4$ all starting points save $2,2,0,0$ lead to the loop $$4,0,0,0\to 1,3,0,0\to 1,1,2,0\to2,1,1,0\to1,2,1,0\to1,1,1,1\to4,0,0,0\;,$$ while $2,2,0,0$ is a fixed point (a one-element loop). That’s already several different behaviors in the first few values of $n$. The only thing that’s clear is that for each $n$, each starting point must eventually fall into a cycle, since there are only finitely many $n$-tuples that sum to $n$.
why are all the weights of the Gaussian quadrature formula non zero
There are other formulas for finding the weights. For example $$w_i = \frac {-2}{(n+1)P'_n(x_i)P_{n+1}(x_i)} $$ which clearly shows that $$ w_i\ne 0 $$ For a derivation of this formula see Atkinson (1989, p. 276) and Ralston and Rabinowitz (1978, p. 105).
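An empirical illustration (numpy assumed): the Gauss-Legendre weights returned by leggauss are all strictly positive.

```python
import numpy as np

for n in [2, 5, 10, 20]:
    nodes, weights = np.polynomial.legendre.leggauss(n)
    print(n, weights.min() > 0)   # True for each n
```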
Trying to solve conic for ellipse equation
If, for some reason, what you say you get is correct then $$9(x+1)^2+4(y-2)^2=1\iff \frac{(x+1)^2}{\frac19}+\frac{(y-2)^2}{\frac14}=1\implies a=\frac13\;,\;b=\frac12$$ Remember that for any non-zero number $\;a\;$ , we have $$a=\frac1{\frac1a}$$
$1-a$ is a unit in the ring $R$
We have $$(1-a)(a^2+a+1)=1-a^3=1,$$ since $a^3=0$ by hypothesis; hence $1-a$ is a unit with inverse $a^2+a+1$.
Formula to calculate the number of possible models with hierarchical structure
It seems that if you have $n$ covariates you start by checking $2^n$ models, and there are ${n \choose k}$ of these that use $k$ of the covariates with $0 \le k \le n$. If one of those is successful then you need to check a further $2^{k(k+1)/2}-1$ models, so $2^{k(k+1)/2}$ including the original successful one. So the answer may be $$\sum\limits_{k=0}^n {n \choose k} 2^{k(k+1)/2}$$ which seems to be related to OEIS A006898, which is in turn related to OEIS A006896 described as "the number of hierarchical linear models on n labeled factors allowing 2-way interactions (but no higher order interactions)"
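The proposed count is easy to tabulate for small $n$ (plain Python, math.comb):

```python
from math import comb

def models(n):
    return sum(comb(n, k) * 2**(k * (k + 1) // 2) for k in range(n + 1))

print([models(n) for n in range(7)])  # values to compare with the OEIS entries
```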
Solve the system of linear congruence.
Hint $\ {\rm mod}\ 3=\gcd(9,12)\ $ we have $\,x\equiv 8\equiv 2\,$ by the first, contra $\,x\equiv 6\equiv 0\,$ by the second. Similarly if $\, x\equiv a\pmod m,\ x\equiv b\pmod n\,$ then $\, a\equiv x\equiv b\pmod d\,$ for $\,d =\gcd(m,n),\ $ hence $\,d\mid a-b\,$ is a necessary condition for the existence of a solution. This compatibility condition is also a sufficient condition for the existence of solution, and it extends pairwise to any number of congruences - see this answer for a constructive proof (which depends on the key fact that gcd distributes over lcm).
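The compatibility criterion in code, using sympy's congruence solver as an oracle (sympy assumed):

```python
from math import gcd
from sympy.ntheory.modular import solve_congruence

for (a, m, b, n) in [(8, 9, 6, 12), (8, 9, 5, 12)]:
    solvable = solve_congruence((a, m), (b, n)) is not None
    criterion = (a - b) % gcd(m, n) == 0
    print((a, m, b, n), solvable, criterion)   # the two flags always match
```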
Unsure how to prove that this function is a homeomorphism:
You were given $$ f : Q \rightarrow S^1 ~;~ (x,y) \mapsto \left( \frac{x}{\sqrt{x^2 + y^2}}, \frac{y}{\sqrt{x^2 + y^2} } \right) $$ (Result from analysis:) $f$ is continuous on $\mathbb{R}^2 \setminus \{(0,0)\}$ as a composition of continuous maps on this domain. Thus it is also continuous on the subset $Q \subset \mathbb{R}^2 \setminus \{(0,0)\}$ with the subspace topology. Now let $g$ be $$ g : S^1 \rightarrow Q ~;~ (x,y) \mapsto \left( \frac{x}{ \vert x \vert + \vert y \vert }, \frac{y}{ \vert x \vert + \vert y \vert } \right) $$ By the same argument as above, we can also conclude that $g$ is continuous. We will now see that $g$ is also the inverse of $f$. For $(x,y) \in Q$ we have \begin{align} g(f(x,y)) &= g \Big( \underbrace{\frac{x}{\sqrt{x^2 + y^2}}}_{=:~a}, \underbrace{\frac{y}{\sqrt{x^2 + y^2}}}_{=:~b} \Big) = g(a,b) \\&= \left( \frac{a}{ \vert a \vert + \vert b \vert }, \frac{b}{ \vert a \vert + \vert b \vert } \right) \end{align} where we have $$ \vert a \vert + \vert b \vert = \frac{\vert x \vert}{\sqrt{x^2 + y^2}} + \frac{\vert y \vert}{\sqrt{x^2 + y^2}} = \frac{1}{\sqrt{x^2 + y^2}} $$ since $\vert x \vert + \vert y \vert = 1$. So it follows \begin{align} g(f(x,y)) &= \left( a \sqrt{x^2 + y^2} ,~ b \sqrt{x^2 + y^2} \right) = (x,y) \end{align} Similarly, you can check that $f(g(x,y)) = (x,y)$. So we found a continuous inverse of $f$, which shows that $f$ is a homeomorphism.
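A numerical spot-check of the inverse relation, assuming nothing beyond the two formulas above:

```python
import math, random

def f(x, y):
    r = math.hypot(x, y)
    return x / r, y / r

def g(x, y):
    s = abs(x) + abs(y)
    return x / s, y / s

random.seed(0)
for _ in range(3):
    t = random.uniform(0.0, 1.0)
    x, y = t, 1.0 - t            # a point of Q in the first quadrant
    print((x, y), g(*f(x, y)))   # g(f(x, y)) returns (x, y)
```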
Bound for $(ax+b)e^{-cx^2-dx}$
You can apply the classic method of finding points where the derivative is $0$; those are going to be local extrema of your function. Then a quick study of the sign of the derivative will lead you to the exact global maximum. Take your function $g : x \mapsto (ax + b)e^{-cx^2 - dx}$ and differentiate it. This gives: $g'(x) = (a + (ax + b)(-2cx - d))e^{-cx^2 - dx} = (a - bd - 2bcx - adx - 2acx^2)e^{-cx^2 - dx}$. Can you take it from there?
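A symbolic confirmation of that derivative (sympy assumed):

```python
from sympy import symbols, exp, diff, simplify

x, a, b, c, d = symbols('x a b c d')
g = (a*x + b) * exp(-c*x**2 - d*x)
claimed = (a - b*d - 2*b*c*x - a*d*x - 2*a*c*x**2) * exp(-c*x**2 - d*x)
print(simplify(diff(g, x) - claimed))   # 0
```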
Multiple Integral, substitution
Here's your hint: try polar coordinates again. Your region is everything in the fourth quadrant that's both red and blue. You can get your bounds for $\theta$ just based on the fact that you're in Quadrant IV. As for finding bounds on $r$: the lower bound should be easy. If you need a hint for the upper bound on $r$, here it is: Write the equation for $x=y+1$ in polar coordinates. How does that help you find the upper bound for $r$? Hopefully that's enough to help you solve it!
Solving a Probability Question
The wording of the problem is not optimal. It should say something like "The probability that among the books she checks out there is at least one fiction book is $0.4$," and so on for the others. With that interpretation, your calculation is right. To be very formal, let $A$ be the event that she checks out at least one fiction book, and let $B$ be the event she checks out at least one non-fiction book. We want $\Pr(A\cup B)$. We have in general $$\Pr(A\cup B)=\Pr(A)+\Pr(B)-\Pr(A\cap B).$$ Thus in our case we have $\Pr(A\cup B)=0.4+0.4-0.2=0.6$.
Find the inverse linear transformation of a matrix
Hint: Since $T$ is invertible, we have $$T^{-1} \left[ \begin{array}{c} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 2 \end{array} \right] \text{ and } T^{-1}\left[ \begin{array}{c} 0 \\ 1 \end{array} \right]= \left[ \begin{array}{c} 1 \\ -2 \end{array} \right].$$ Can you take it from here?
Question referring to minimum value of a expected value.
$$ E\min\{X,1\} = \int_{-\infty}^\infty \min\{x,1\} f(x) \, dx .$$ where $f(x)$ is the PDF.
Curl in cylindrical coordinates
I'm assuming that you already know how to get the curl for a vector field in a Cartesian coordinate system. When you try to derive the same for a curvilinear coordinate system (cylindrical, in your case), you encounter problems. The Cartesian coordinate system is "global" in a sense, i.e. the unit vectors $\mathbb {e_x}, \mathbb {e_y}, \mathbb {e_z}$ point in the same direction irrespective of the coordinates $(x,y,z)$. On the other hand, the curvilinear coordinate systems are in a sense "local", i.e. the direction of the unit vectors changes with the location of the coordinates. For example, in a cylindrical coordinate system, you know that one of the unit vectors is along the direction of the radius vector. The radius vector can have different orientations depending on where you are located in space. Hence the unit vectors of point A differ from those of point B, in general. I'll first try to explain how to go from a Cartesian system to a curvilinear system and then just apply the relevant results for the cylindrical system.

Let us take the coordinates in the new system as functions of the original coordinates. $$q_1 = q_1(x,y,z) \qquad q_2 = q_2(x,y,z) \qquad q_3 = q_3(x,y,z) $$ Let us consider the length of a small element $$ ds^2 = d\mathbf{r}\cdot d\mathbf{r} = dx^2 + dy^2 + dz^2 $$ The small element $dx$ can be written as $$ dx = \frac{\partial x}{\partial q_1}dq_1 + \frac{\partial x}{\partial q_2}dq_2+\frac{\partial x}{\partial q_3}dq_3 $$ Doing the same for $dy$ and $dz$, we can get the distance element in terms of partial derivatives of $x,y,z$ in the new coordinate system. This will be of the form $$ ds^2 = \sum_{i,j} \frac{\partial \mathbf{r}}{\partial q_i} \cdot \frac{\partial \mathbf{r}}{\partial q_j} dq_i dq_j = \sum_{i,j} g_{ij} dq_i dq_j $$ Here $\frac{\partial \mathbf{r}}{\partial q_j}$ represents the tangent vector along which $q_i = $ constant for $i\neq j$. For an orthogonal coordinate system, where the coordinate surfaces are mutually perpendicular, the dot product becomes $$ \frac{\partial \mathbf{r}}{\partial q_i} \cdot \frac{\partial \mathbf{r}}{\partial q_j} = c\, \delta_{ij}$$ where the scaling factor $c$ arises because we haven't considered unit vectors. These factors are taken as $$c = \frac{\partial \mathbf{r}}{\partial q_i} \cdot \frac{\partial \mathbf{r}}{\partial q_i} = h_i^2$$ $$ ds^2 = \sum_{i=1}^3 (h_i dq_i)^2 = \sum_i ds_i^2$$ Hence the length element along direction $q_i$ is given by $ds_i = h_i dq_i$.

Now we are equipped to get the curl for the curvilinear system. Consider an infinitesimal closed path in the $q_1 q_2$ plane, and evaluate the path integral of the vector field $\mathbf{V}$ along this path. $$\oint \mathbf{V}(q_1,q_2,q_3)\cdot d\mathbf{r} = \oint \mathbf{V}\cdot\left( \sum_{i=1}^2 \frac{\partial \mathbf{r}}{\partial q_i} dq_i\right)$$

```
(q1, q2+ds_2)            (q1+ds_1, q2+ds_2)
      -----------<------------
      |                      |
      |                      |
      V                      ^
      |                      |
      |---------->-----------|
(q1, q2)                 (q1+ds_1, q2)
```

$$ \oint \mathbf{V}\cdot d\mathbf{r} = V_1 h_1 dq_1 - \left( V_1 h_1 + \frac{\partial (V_1 h_1)}{\partial q_2} dq_2\right)dq_1 - V_2 h_2 dq_2 + \left( V_2 h_2 + \frac{\partial (V_2 h_2)}{\partial q_1} dq_1\right)dq_2$$ $$ = \left( \frac{\partial (V_2 h_2)}{\partial q_1} - \frac{\partial (V_1 h_1)}{\partial q_2} \right)dq_1 dq_2$$ From Stokes' theorem, $$ \oint \mathbf{V}\cdot d\mathbf{r} = \int_S \nabla \times \mathbf{V} \cdot d\mathbf{\sigma} = \nabla \times \mathbf{V} \cdot \mathbf{\hat{q}_3}\, (h_1 dq_1) (h_2 dq_2) = \left( \frac{\partial (V_2 h_2)}{\partial q_1} - \frac{\partial (V_1 h_1)}{\partial q_2} \right)dq_1 dq_2 $$ Hence the $q_3$ component of the curl can be written as $$(\nabla \times \mathbf{V})_3 = \frac{1}{h_1 h_2} \left( \frac{\partial (V_2 h_2)}{\partial q_1} -\frac{\partial (V_1 h_1)}{\partial q_2} \right) $$ Similarly, the other components can be evaluated, and all the components can be assembled in the familiar determinant format. $$\nabla \times \mathbf{V} = \frac{1}{h_1 h_2 h_3} \begin{vmatrix} \mathbf{\hat{q}_1}h_1 & \mathbf{\hat{q}_2}h_2 & \mathbf{\hat{q}_3}h_3\\ \frac{\partial}{\partial q_1} & \frac{\partial}{\partial q_2} & \frac{\partial}{\partial q_3} \\ V_1 h_1 & V_2 h_2 & V_3 h_3 \\ \end{vmatrix}$$ Now the expression for the curl is ready. All we need to do is find the values of $h$ for the cylindrical coordinate system. This can be obtained if we know the transformation between Cartesian and cylindrical polar coordinates. $$ (x,y,z) = (r\cos\phi, r\sin\phi, z)$$ Now the length element $$ ds^2 = dx^2 + dy^2 + dz^2 = (d(r\cos\phi))^2 + (d(r\sin\phi))^2 + dz^2 $$ Simplifying the above expression, we get $$ ds^2 = (dr)^2 + r^2(d\phi)^2 + (dz)^2 $$ From the above equation, we can obtain the scaling factors: $h_1 = 1$, $h_2 = r$, $h_3 = 1$. Hence the curl of a vector field can be written as $$ \nabla \times \mathbf{V} = \frac{1}{r} \begin{vmatrix} \mathbf{\hat{r}} & r\mathbf{\hat{\phi}}& \mathbf{\hat{z}}\\ \frac{\partial}{\partial r} & \frac{\partial}{\partial \phi} & \frac{\partial}{\partial z} \\ V_r & r V_\phi & V_z \\ \end{vmatrix} $$
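A short sympy check of the scale factors $h_r=1$, $h_\phi=r$, $h_z=1$ straight from $ds^2$ (sympy assumed):

```python
from sympy import symbols, cos, sin, simplify, Matrix

r, phi, z = symbols('r phi z', positive=True)
X = Matrix([r*cos(phi), r*sin(phi), z])   # Cartesian position vector
for q in (r, phi, z):
    t = X.diff(q)                          # tangent vector dr/dq
    print(q, simplify(t.dot(t)))           # h^2: 1, r^2, 1
```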
Balls and bins conditioned on the number of non-empty bins
The standard argument to get the expected number of bins occupied is greatly simplified by the linearity of expectation. We only need to compute the probability that a single bin is filled. This is simplified by computing the probability that that bin is empty: $\left(1-\frac1n\right)^m$. Thus, the probability that that bin is filled is $1-\left(1-\frac1n\right)^m$. Linearity of expectation says that the expected number of bins filled would be $$ n\left(1-\left(1-\frac1n\right)^m\right)\tag{1} $$ as you have stated. However, when assuming that at least $k$ bins have been filled, we can not use all of the preceding simplifications. Inclusion-Exclusion One method of attack is using the Inclusion-Exclusion Principle. Let $S(i)$ be all the possible arrangements where bin $i$ is empty. Then we can compute $N(k)$, the size of the intersection of $k$ of the $S(i)$: there are $\binom{n}{k}$ ways to choose the empty bins and $(n-k)^m$ ways to put the $m$ marbles in to the remaining bins. Therefore, $$ N(k)=\binom{n}{k}(n-k)^m\tag{2} $$ Now, to compute the number of arrangements in which exactly $k$ bins are filled, we compute the number of elements in exactly $n-k$ of the $S(i)$: $$ \begin{align} &\sum_{j}(-1)^{j-n+k}\binom{j}{n-k}N(j)\\ &=\sum_{j}(-1)^{j-n+k}\binom{j}{n-k}\binom{n}{j}(n-j)^m\\ &=\sum_{j}(-1)^{j-n+k}\binom{k}{j+k-n}\binom{n}{k}(n-j)^m\\ &=\binom{n}{k}\sum_{j}(-1)^{k-j}\binom{k}{j}j^m\tag{3} \end{align} $$ The number of ways to get at least $k$ bins filled is $$ \begin{align} &\sum_{i\ge k}\sum_j(-1)^{i-j}\binom{n}{i}\binom{i}{j}j^m\\ &=\sum_i\sum_j(-1)^{k-j}\binom{-1}{i-k}\binom{n}{j}\binom{n-j}{n-i}j^m\\ &=\sum_j(-1)^{k-j}\binom{n}{j}\binom{n-j-1}{n-k}j^m\tag{4} \end{align} $$ The number of bins times the number of ways to get at least $k$ bins filled is $$ \begin{align} &\sum_{i\ge k}\sum_j(-1)^{i-j}i\binom{n}{i}\binom{i}{j}j^m\\ &=\sum_i\sum_j(-1)^{k-j}i\binom{-1}{i-k}\binom{n}{j}\binom{n-j}{n-i}j^m\\ &=\sum_i\sum_j(-1)^{k-j}[k+(i-k)]\binom{-1}{i-k}\binom{n}{j}\binom{n-j}{n-i}j^m\\ &=\sum_i\sum_j(-1)^{k-j}\binom{n}{j}\left[k\binom{-1}{i-k}-\binom{-2}{i-k-1}\right]\binom{n-j}{n-i}j^m\\ &=\sum_j(-1)^{k-j}\binom{n}{j}\left[k\binom{n-j-1}{n-k}-\binom{n-j-2}{n-k-1}\right]j^m\tag{5} \end{align} $$ Dividing $(5)$ by $(4)$ yields the expected value $$ k-\frac{\sum\limits_{j=0}^n(-1)^{k-j}\binom{n}{j}\binom{n-j-2}{n-k-1}j^m}{\sum\limits_{j=0}^n(-1)^{k-j}\binom{n}{j}\binom{n-j-1}{n-k}j^m}\tag{6} $$ where $\binom{-1}{k}=(-1)^k$ and $\binom{-2}{k}=(-1)^k(k+1)$ and $0^0=1$. Example For $n=6$, $m=4$, $k=2$: $$ \begin{align} &2-\frac{\small\binom{6}{0}\binom{4}{3}0^4{-}\binom{6}{1}\binom{3}{3}1^4{+}\binom{6}{2}\binom{2}{3}2^4{-}\binom{6}{3}\binom{1}{3}3^4{+}\binom{6}{4}\binom{0}{3}4^4{-}\binom{6}{5}\binom{-1}{3}5^4{+}\binom{6}{6}\binom{-2}{3}6^4} {\small\binom{6}{0}\binom{5}{4}0^4{-}\binom{6}{1}\binom{4}{4}1^4{+}\binom{6}{2}\binom{3}{4}2^4{-}\binom{6}{3}\binom{2}{4}3^4{+}\binom{6}{4}\binom{1}{4}4^4{-}\binom{6}{5}\binom{0}{4}5^4{+}\binom{6}{6}\binom{-1}{4}6^4}\\ &=2-\frac{0-6+0-0+0-(-3750)+(-5184)}{0-6+0-0+0-0+1296}\\ &=\frac{670}{215}\doteq3.11627906976744 \end{align} $$ whereas, without the knowledge that at least two bins were not empty, we would get $$ 6\left(1-\left(1-\frac16\right)^4\right)=\frac{671}{216}\doteq3.10648148148148 $$ Not a terribly large difference, but with $4$ balls into $6$ bins, you wouldn't expect them to all land in one bin, so the knowledge that at least two bins were not empty is not very significant.
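A Monte Carlo sanity check of the $n=6$, $m=4$, $k=2$ example (plain Python):

```python
import random

random.seed(1)
n, m, k = 6, 4, 2
total = kept = 0
for _ in range(500_000):
    filled = len({random.randrange(n) for _ in range(m)})
    if filled >= k:          # condition on at least k non-empty bins
        total += filled
        kept += 1
print(total / kept)          # ~3.1163 = 670/215
```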
Problems about $\sin(n)$ where $n$ is an integer $\in (0,1000]$
There's no clever way to do this by hand. It's probably intended for you to use a program. The point of the exercise is to illustrate how close you can get to particular values using integer inputs. In fact you can get arbitrarily close (without being equal) to any value between $-1$ and $1$ using integer inputs.
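For instance, the sort of program the exercise expects is a one-liner in Python:

```python
import math

# Which integer n in (0, 1000] makes sin(n) largest?
best = max(range(1, 1001), key=math.sin)
print(best, math.sin(best))
```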
Representing a first order like condition as the solution of an optimization problem
If $f$ is strictly concave and $g$ is strictly convex in $x$ (for all $y$), then the sum $$f(x)-g(x,y)$$ is strictly concave in $x$ (since $-g$ is concave if $g$ is convex, and the sum of concave functions is concave). Thus, $x^*$ fulfilling $$f_1(x^*)=g_1(x^*,y)$$ would be the solution to the maximization problem $$\max_x f(x)-g(x,y)$$ for a given $y$, since the maximization problem yields the first order condition $0=f_1(x^*)-g_1(x^*,y)$, which is similar but not equivalent to your condition. Similarly, you can flip concavity/convexity of $f/g$: If $f$ is strictly convex and $g$ is strictly concave in $x$ (for given $y$), then the maximization problem $$\max_x -f(x)+g(x,y),$$ is again strictly concave, so the first order condition $f_1(x^*)=g_1(x^*,y)$ is necessary and sufficient for the maximum. Finally, you can phrase both of these as minimization problems, just flip the signs in front of the $f$ and $g$ functions. EDIT: In order to match your condition exactly, so that $y=x^*$, you indeed need to look at the maximization problems $$\max_x f(x)-g(x,y=x^*)$$ with $f$ being strictly concave and $g$ being strictly convex; however, this might not be attractive since you need to fix $y=x^*$ before you computed $x^*$. In response to your first comment, if both $f$ and $g$ are concave in $x$, then you cannot establish your condition as a result of an optimization problem without further assumptions, because you need different signs in front of $f$ and $g$ (so if both functions are concave, then flipping the sign on one means this is convex, but the sum of a convex and a concave function is neither necessarily concave nor convex). One additional assumption, informally, would be that either $$f(x)-g(x,y)$$ is concave for all $x$ and $y$, which is not implied by $f$ and $g$ being concave, or that $$-f(x)+g(x,y)$$ is concave, so that the first order condition is necessary and sufficient for the maximum. Thus, you could allow $-f$ or $-g$ to be convex as long as the other term is "much more concave" so that their sum is concave. If the functions are twice differentiable, this boils down to assuming $$f_{xx}(x)-g_{xx}(x,y)<0\text{ or }-f_{xx}(x)+g_{xx}(x,y)<0$$ for all $x$ and $y$. Then use the above formulation with fixing $y=x^*$.
calculate the directional derivative in the direction of v at the given point
What you have done is not correct. Note that $$f_x=\frac{y}{1+x^2y^2},f_y=\frac{x}{1+x^2y^2}$$
Proving $2^{\aleph_0} = {\aleph_0}!$
For $x \subset \omega$ let $x^* =\{n+2 : n \in x \}$. Let $\phi(x)$ be a bijection $ f:\omega \to \omega$ where $f(m)=m$ when $m \in x^*$ and $f(m) \ne m$ when $m \not \in x^*$; such a bijection exists because $\omega\setminus x^*$ always has at least two elements (it contains $0$ and $1$), so it can be permuted without fixed points. Then $x\mapsto\phi(x)$ is injective, giving $2^{\aleph_0}\le\aleph_0!$; combined with $\aleph_0!\le\omega^\omega=2^{\aleph_0}$ this proves the equality.
Basis, polynomial vectors
The two polynomials contain $2x^2$ and the constant $2$, obtained respectively as their sum and difference; hence their linear span contains all polynomials of the form $a+cx^2$. Throwing in the two linearly independent polynomials (imitating the existing ones) $x^3+x$ and $x^3-x$ then takes care of all polynomials of degree 3 or less. It is a basis because these four polynomials span the space of polynomials of degree at most $3$, which has dimension $4$.
Manifolds and open sets in them with different dimensions
No, this is in contradiction to invariance of domain.
Real function, satisfying $\frac{\mathrm{d} }{\mathrm{d} x}f(x_0)<1+f^2(x_0)$
Consider the differentiable function $F(x):=\arctan(f(x))$. Then $F(x)\in (-\pi/2,\pi/2)$. Moreover, if the desired inequality does not hold, it follows that $$F'(x)=\frac{f'(x)}{1+f^2(x)}\geq 1$$ for all $x\in (a,b)$, which implies $$\pi=\frac{\pi}{2}+\frac{\pi}{2}>F(b)-F(a)=\int_a^b F'(x)\,dx\geq \int_a^b 1\,dx=b-a\geq 4$$ which is a contradiction.
Proving $R$ is a division ring
Every $k$-algebra is necessarily a vector space, so that doesn't in itself tell you anything. What you need is that it is finite-dimensional when viewed as a vector space. Under no circumstances should you say $R=\{x_1,x_2,\ldots,x_n\}$ -- nothing guarantees you that $R$ has a finite number of elements. That's something quite different from having finite dimension. Having finite dimension means that $R$ is isomorphic as a vector space to $k^n$ for some $n$. So another way to pose the problem would be: Let $k$ be a field, and assume that some binary operation $*$ on $k^n$ is given that makes $k^n$ into a $k$-algebra. Suppose also that $(k^n,+,*)$ viewed as a ring is an integral domain. Show $k^n$ is actually a division ring. Hint: For $a\in k^n\setminus\{0\}$, consider the mapping $T: b\in k^n\mapsto a*b$. This is a linear operator on $k^n$ when $k^n$ is considered a vector space (show this). What more can you say about $T$ given the assumptions?
Pollard p-1 in Pari/GP
Here is the code (in the example, a factor of $2^{67}-1$ is searched):

```
? n=2^67-1;x=3;s=1;while(gcd(x-1,n)==1,s=s+1;x=lift(Mod(x,n)^s));print(gcd(x-1,n))
193707721
```

After the loop, $x=3^{s!}\bmod n$, so the gcd picks up any prime factor $p$ of $n$ for which $p-1$ divides $s!$, i.e. for which $p-1$ is sufficiently smooth.
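For readers without Pari/GP, here is a direct Python transcription of the same loop (standard library only):

```python
from math import gcd

n, x, s = 2**67 - 1, 3, 1
while gcd(x - 1, n) == 1:
    s += 1
    x = pow(x, s, n)   # now x = 3^(s!) mod n
print(gcd(x - 1, n))   # 193707721
```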
Counter-examples to $x \not \in K \implies \Bbb Q(x) \cap K = \Bbb Q$
Let $x=\sqrt[4]{2}$ and $K=\mathbb Q(a_1)$ where $a_1=i\sqrt[4]{2}$.
Finding $(a,b)\,,a>b$ such that $\int^b_a(6-x-x^2)\,dx$ is maximum
Yes, you are correct. Note that $f(x)=6-x-x^2=-(x-2)(x+3)$, so in order to get the maximum value of $\int_a^bf(x)dx$ with $a<b$, it suffices to integrate over the interval where the integrand is non-negative, that is $[-3,2]$.
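The maximum itself is then quick to compute (sympy assumed):

```python
from sympy import symbols, integrate

x = symbols('x')
print(integrate(6 - x - x**2, (x, -3, 2)))   # 125/6
```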
Interpreting a lemma on coset representatives
The “preimage of $D'$” is just the usual pre-image, no need to factor through any quotient or induced function. That is, $$D = \{d\in G\mid f(d)\in D'\}.$$ I'll use additive notation, since the groups are abelian. To show $D$ is a complete set of coset representatives for $S$ in $G$, let $g\in G$. Then $f(g)$ is in a coset of $S'$, so there exists $d'\in D'$ such that $f(g)\in d'+S'$. Therefore, there exists $s'\in S'$ such that $f(g)=d'+s'$. Since $f$ maps $S$ isomorphically into $S'$, there exists a (unique) $s\in S$ such that $f(s)=s'$. Thus $f(g)=d'+f(s)$, so $d'=f(g)-f(s)=f(g-s)$. Thus, $g-s\in D$. And clearly, $g\in (g-s)+S$. Thus, every element of $G$ is in a coset of the form $d+S$ for some $d\in D$. Now assume that $d_1,d_2\in D$ are such that $d_1+S = d_2+S$. Then there exists $s\in S$ such that $d_1=d_2+s$. That means that $f(d_1) = f(d_2)+f(s)\in f(d_2)+S'$, hence $f(d_1)+S'=f(d_2)+S'$. But that means that $f(d_1)=f(d_2)$ (since both $f(d_1)$ and $f(d_2)$ are in $D'$, which is a complete set of coset representatives for $S'$). Therefore, $f(s)=0$ (since $f(d_1)=f(d_2)+f(s)$). Since $f$ is one-to-one when restricted to $S$, we must have $s=0$, so $d_1=d_2+s=d_2$. Thus, distinct elements of $D$ correspond to distinct cosets of $S$ in $G$. Hence, every element is in a coset of the form $d+S$ with $d\in D$, and distinct elements of $D$ are incongruent modulo $S$. Thus, $D$ is a complete set of coset representatives for $S$ in $G$, as claimed.
What is the inverse function of $\alpha\mathrm{e}^{\beta x}+\gamma\mathrm{e}^{\delta x}$?
Letting $z = \alpha e^{\beta x}/\epsilon$, where $\epsilon$ is the value to be inverted (i.e. we solve $\alpha e^{\beta x}+\gamma e^{\delta x}=\epsilon$ for $x$), and assuming the parameters are positive, the equation becomes $$ z + \dfrac{\gamma \epsilon^{\delta/\beta-1}}{\alpha^{\delta/\beta}} z^{\delta/\beta} - 1 = 0$$ which I'll write as $$ z + c z^p - 1 = 0 $$ This has a series solution in powers of $c$ that should converge for small $|c|$: $$z = \sum_{n=0}^\infty \dfrac{(-c)^n}{n!} \prod_{j=0}^{n-2} (np - j) = 1 - c + p c^2 - \dfrac{3p(3p-1)}{6} c^3 + \dfrac{(4p)(4p-1)(4p-2)}{24} c^4 + \ldots$$
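A quick numerical sanity check that the truncated series nearly solves $z+cz^p-1=0$ for small $|c|$ (plain Python; $p$ and $c$ are arbitrary illustrative values):

```python
p, c = 1.7, 0.05
z = (1 - c + p*c**2
     - 3*p*(3*p - 1)/6 * c**3
     + 4*p*(4*p - 1)*(4*p - 2)/24 * c**4)
print(z + c * z**p - 1)   # tiny residual, O(c^5)
```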
Find the set of $k \geq 3$ satisfying $(k-2)|2k$
Write $2k=(k-2)p$ with $p \in \mathbb{Z}$. Now $\gcd(k-2,2k)\,|\,(k-2)$ and $\gcd(k-2,2k)\,|\,2k$; hence if $(k-2)|2k$, then $\gcd(k-2,2k)=k-2$. You can combine this with Bézout's Lemma.
If $T_1, T_2$ are one-to-one linear transformations, prove that $W$ is not one to one
Hint. You can't prove that $W$ isn't one-to-one. (For example, take $T_1 = T_2 = I$.) You can prove that $W$ isn't necessarily one-to-one by exhibiting two one-to-one mappings $T_1$ and $T_2$ for which $T_1 + T_2$ isn't one-to-one (for instance $T_1 = I$ and $T_2 = -I$, whose sum is the zero map).
Green's theorem for conservative fields - are partials equal?
1) Both $P$ and $Q$ identically zero will do. A little less obvious example is when $Q$ is a function of $y$ only and $P$ is a function of $x$ only. The most general case is $(P, Q) = \nabla F = (\frac{\partial F}{\partial x}, \frac{\partial F}{\partial y})$ where $F$ is some nice enough function. 2) It is possible to find $(P, Q)$ with $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \ne 0$ such that the integral over some particular domain is $0$. However, if the integral vanishes over every region, squeeze regions as small as you like around a point where $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}$ is not zero: for a small enough region the integral will be non-zero (assuming $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}$ is continuous), a contradiction.
Show that $d: X \times X \rightarrow \mathbb{R}$ is a metric on $X$
1) Taking $x=y$: $d(x,x)\le d(x,z) + d(y,z)=2\cdot d(x,z)$, and $d(x,z)\ge 0$ follows. 3) Taking $z=x$: $d(x,y)\le d(x,z)+d(y,z)=d(x,x)+d(y,x)=d(y,x)$, and so $d(x,y)\le d(y,x)$. The other direction follows similarly.
non-trivial convergent sequence
(1) "Trivial" means "constant after some point". (2a) The $B_n$ are clearly disjoint. If $m\neq n$ then $B_n\not\in \mathcal V_m$ because $B_n\subset A_n$ and $A_n\not\in\mathcal V_m$. If $m=n$ then $A_i\notin\mathcal V_m$ for all $i=0,\dots ,n-1$, so $X\setminus A_i\in\mathcal V_m$ (since $\mathcal V_m$ is an ultrafilter) and hence $B_n=A_n\cap\left( \bigcap_{i=0}^{n-1} (X\setminus A_i)\right)\in\mathcal V_m$. (2b) "Clopen" means "closed and open".
$A\in M_n(\mathbb{R})$ is symmetric s.t. $A^{10}=I.$ Prove $A^2=I$
The minimal polynomial of $A$ divides $x^{10}-1$. $x^{10}-1=(x^2-1)q(x)$, where $q(x)$ has no real roots. The eigenvalues of a real symmetric matrix are all real and so its minimal polynomial is a product of linear real factors. Therefore, the minimal polynomial of $A$ divides $x^2-1$ and so $A^2=I$.
Prove that the function's derivative is continuous for x>0
Hint: the series is easily seen to be convergent for every $x>0$. Moreover, you can prove that the series of derivatives is uniformly convergent on every compact interval $[a,b]\subset (0, +\infty)$, so that $F$ is continuously differentiable on those intervals. But this implies that $F$ is continuously differentiable in $(0,+\infty)$. (Given any point $x_0 > 0$, it is enough to apply the above reasoning on a compact interval $[a,b]$ with $0 < a < x_0 < b$.)
If I subtract 1 from the n,n entry of a Pascal Matrix, why does the determinant become zero?
Write the symmetric Pascal matrix as $H = Q^T D Q$, where $Q$ is the upper-triangular matrix of binomial coefficients. Setting the last diagonal entry of $D$ to $0$ produces exactly the Pascal matrix with its $(n,n)$ entry decreased by $1$, and then $\det = \det(Q^T)\det(D)\det(Q) = 0$; with $D = I$ we recover the Pascal matrix itself, of determinant $1$. Four by four: $$ Q^T D Q = H $$ $$\left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 1 & 3 & 3 & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 3 & 6 & 10 \\ 1 & 4 & 10 & 19 \\ \end{array} \right) $$ Compare $$\left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 \\ 1 & 3 & 3 & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \\ 1 & 3 & 6 & 10 \\ 1 & 4 & 10 & 20 \\ \end{array} \right) $$ Five by five: $$ Q^T D Q = H $$ $$\left( \begin{array}{rrrrr} 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 & 0 \\ 1 & 3 & 3 & 1 & 0 \\ 1 & 4 & 6 & 4 & 1 \\ \end{array} \right) \left( \begin{array}{rrrrr} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} \right) \left( \begin{array}{rrrrr} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 & 6 \\ 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrrr} 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 \\ 1 & 3 & 6 & 10 & 15 \\ 1 & 4 & 10 & 20 & 35 \\ 1 & 5 & 15 & 35 & 69 \\ \end{array} \right) $$ $$\left( \begin{array}{rrrrr} 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 & 0 \\ 1 & 3 & 3 & 1 & 0 \\ 1 & 4 & 6 & 4 & 1 \\ \end{array} \right) \left( \begin{array}{rrrrr} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right) \left( \begin{array}{rrrrr} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 & 6 \\ 0 & 0 & 0 & 1 & 4 \\ 0 & 0 & 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrrrr} 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 & 5 \\ 1 & 3 & 6 & 10 & 15 \\ 1 & 4 & 10 & 20 & 35 \\ 1 & 5 & 15 & 35 & 70 \\ \end{array} \right) $$
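A short numerical check of the pattern for any size $n$ (here $n=5$; note the lower-triangular binomial factor is built directly, so in this convention $H = QDQ^T$):

```python
import numpy as np
from math import comb

n = 5
Q = np.array([[comb(i, j) for j in range(n)] for i in range(n)], dtype=float)

H = Q @ Q.T                     # symmetric Pascal matrix, determinant 1
D = np.eye(n); D[-1, -1] = 0.0  # zero out the last diagonal entry
H1 = Q @ D @ Q.T                # Pascal matrix with its (n, n) entry reduced by 1
print(round(np.linalg.det(H)), round(np.linalg.det(H1)))   # 1 0
```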
Occurrence of 5 consecutive tails before occurrence of 2 consecutive heads
A different approach: consider two independent geometric variables $X_1,X_2$, each of which measures the number of trials until the first success, in experiments with probability of success $p_1$ (resp. $p_2$). Then $$P(X_1 \le X_2)=p_1 + p_1 q_1 q_2 +p_1 (q_1 q_2)^2+\cdots=\frac{p_1}{1- q_1 q_2}, $$ $$P(X_1 < X_2)=p_1 q_2 + p_1 q_2 q_1 q_2 +p_1 q_2 (q_1 q_2)^2+\cdots=\frac{p_1 q_2}{1- q_1 q_2}. $$ We can treat each maximal run of tails/heads as one such trial, with $p_1=1/2^{5-1}=2^{-4}$, $p_2 =1/2$. Let $E$ be the desired event (a run of 5 tails happens before a run of 2 heads), and let $T$ be the event that the first coin is a tail. Then $$P(E)=P(E|T)P(T)+P(E|T^c)P(T^c)=\\ =\frac{p_1}{1- q_1 q_2} \frac{1}{2}+\frac{p_1 q_2}{1- q_1 q_2} \frac{1}{2}=\\=\frac{1}{2} (1+q_2)\frac{p_1}{1- q_1 q_2} =\frac{3}{34}. $$ In general, for a run of $t$ tails before a run of $h$ heads, $$P(E)=\frac{2^h-1}{2^t+2^h-2}.$$ Seeing that the final formula is so simple, I wonder if there is a simpler derivation.
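A Monte Carlo check of the $3/34$ answer (a pure-Python simulation; the sample size is arbitrary):

```python
import random

def tails_run_first(t=5, h=2):
    # Flip a fair coin until a run of t tails or h heads appears;
    # return True if the tail run comes first.
    run_t = run_h = 0
    while True:
        if random.random() < 0.5:        # tails
            run_t, run_h = run_t + 1, 0
            if run_t == t:
                return True
        else:                            # heads
            run_h, run_t = run_h + 1, 0
            if run_h == h:
                return False

N = 200_000
print(sum(tails_run_first() for _ in range(N)) / N, 3 / 34)  # both ~0.088
```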
Understanding the meaning of Hamilton's general equation of motion
The book (which book?) seems to be discussing a $2$-dimensional phase space with canonical coordinates $q,p$, unless there is some abuse of notation. The Hamiltonian $H$ is a smooth real-valued function on phase space. Hamilton's equations $$\frac{dq}{dt} = +\frac{\partial H}{\partial p}, \qquad \frac{dp}{dt}=-\frac{\partial H}{\partial q}$$ define the time evolution of the points $(q,p)$ in phase space under the Hamiltonian $H$: each point $(q_0,p_0)$ provides an initial condition for the ODE, and hence yields a flow $(q(t),p(t))$ starting at that point $q(0)=q_0, p(0)=p_0$. The time evolution of a smooth real-valued function $F$ on phase space (under the same flow) is then given by $$\frac{dF}{dt} = \frac{d}{dt}\left(F(q(t),p(t))\right)= \frac{\partial F}{\partial q}\frac{dq}{dt}+\frac{\partial F}{\partial p}\frac{dp}{dt}=\frac{\partial F}{\partial q}\frac{\partial H}{\partial p}-\frac{\partial F}{\partial p}\frac{\partial H}{\partial q}\mathrel{=:}\{F,H\}.$$ More generally in $2n$-dimensional phase space with canonical coordinates $q_1,\ldots,q_n,p_1,\ldots,p_n$, Hamilton's equations are $$\frac{dq_k}{dt}=+\frac{\partial H}{\partial p_k},\qquad \frac{dp_k}{dt}=-\frac{\partial H}{\partial q_k};$$ the time evolution of a smooth real-valued function $F$ is then given by $$\frac{dF}{dt}=\frac{d}{dt}\left(\ldots\right)=\ldots = \sum_{k=1}^n\left(\frac{\partial F}{\partial q_k}\frac{\partial H}{\partial p_k}-\frac{\partial F}{\partial p_k}\frac{\partial H}{\partial q_k}\right)\mathrel{=:}\{F,H\}.$$ It is just the same calculation as above, repeated $n$ times (i.e. once for each pair of canonically conjugate coordinates). To answer your questions: $F$ is a real-valued function. If the book uses an abuse of notation, then the above provides the correct interpretation of it. As you write, $n=3N$ (hence $6N$ coordinates in total) is common for direct descriptions of mechanical systems with $N$ point particles.
Function that is uniquely determinable in one variable, but not the other.
Let $n=pq,$ where $p$ is a large prime, say 1000 bits in size, so that the discrete logarithm problem modulo $p$ is hard. Let $q$ be much smaller, say 100 bits in size, and let $q$ be made public together with a generator $g$ of the group $Z^{\ast}_p.$ Let $X=\{0,1,\ldots,q-1\}$ be the domain of $x$ and let $Z^{\ast}_p$ be the domain of $y.$ Let $f(x,y)=x+qg^{y}.$ Given $f(x,y)$, reduction modulo $q$ yields $x.$ But $z=(f(x,y)-x)/q$ gives $z=g^y$, and it is difficult to compute $\log_g(z)$ in $Z^{\ast}_p.$
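A toy-sized illustration of the asymmetry (the parameters below are deliberately tiny and completely insecure; a real $p$ would be around 1000 bits):

```python
p, q, g = 101, 7, 2          # toy parameters; g plays the role of the generator

def f(x, y):
    return x + q * pow(g, y, p)

x, y = 3, 5                  # x must lie in {0, ..., q-1}
v = f(x, y)
print(v % q)                 # 3: recovering x is just reduction mod q
z = (v - x) // q             # z = g^y mod p; recovering y needs a discrete log
print(z == pow(g, y, p))     # True
```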
Prove by induction for $P(x)$
Note that: \begin{align*} P_0^{(0)}(x) &= P_0(x) = a_0 = 0!a_0 \\ P_1^{(1)}(x) &= P_1'(x) = \frac{d}{dx}[a_1x + a_0] = a_1 = 1!a_1 \\ P_2^{(2)}(x) &= P_2''(x) = \frac{d^2}{dx^2}[a_2x^2 + a_1x + a_0] = 2a_2 = 2!a_2 \\ &~~\vdots \end{align*}
Why is $Z*Z/({xyx^{-1}y} )= 1$?
You're misreading the (admittedly rather poor) notation. The statement isn't that $\Bbb Z * \Bbb Z / \langle xyx^{-1} y\rangle$ is the trivial group. (The $xyx^{-1}y = 1$ bit is supposed to invoke the idea that we're setting $xyx^{-1}y$ to one and seeing what we get.) The author is saying that the fundamental group of the Klein bottle has presentation $\langle x, y \mid xyx^{-1}y \rangle$, which means that it's the quotient of the free group $F_2$ on two generators $x,y$ by the normal subgroup 'normally generated' by $xyx^{-1}y$; that is, the smallest normal subgroup of $F_2$ containing this word $xyx^{-1}y$.
(Vol. 1 Shafarevich) Two questions about finite maps
For question (1), what you are trying to show is that if you have a finite ring map $A \to B$ and some element $f \in A$ then $A_f \to B_f$ is finite (meaning invert the multiplicative subset of powers of $f$ in both rings). That is, we have some $A$-module surjection $A^n \twoheadrightarrow B$. Since localization is exact, and it commutes with direct sums, we get a surjection $A_f^n \twoheadrightarrow B_f$, as desired. In general you're coming upon the idea of properties of morphisms that are affine-local on the target, where their truth can be verified either by checking every affine open on the target, or by checking it on a cover of the target by affine opens. This idea is discussed very cleanly in Vakil's Rising Sea notes section 5.3.
Monoidal categories in which $\mathrm{Aut}(X \otimes Y) \cong \mathrm{Aut}(X) \sqcup \mathrm{Aut}(Y).$
This is almost never satisfied since the endomorphisms (automorphisms) of $X$ resp. $Y$ commute when extended to $X \otimes Y$. It is more reasonable to ask if $\mathrm{End}(X \otimes Y) = \mathrm{End}(X) \times \mathrm{End}(Y)$ holds. For example, this holds in the free monoidal category on a category. It fails in most symmetric resp. braided monoidal categories because we cannot express the symmetry $X \otimes X \cong X \otimes X$ as a tensor product of two endomorphisms of $X$. For closed symmetric monoidal categories it makes more sense to ask if $\underline{\mathrm{End}}(X \otimes Y) = \underline{\mathrm{End}}(X) \otimes \underline{\mathrm{End}}(Y)$ holds. For example, this holds in the category of finite-dimensional vector spaces over a field (but not in the full category of vector spaces). More generally, let $C$ be a closed symmetric monoidal category and $X,Y,X',Y' \in C$. There is a canonical homomorphism $$\underline{\mathrm{Hom}}(X,X') \otimes \underline{\mathrm{Hom}}(Y,Y') \to \underline{\mathrm{Hom}}(X \otimes Y,X' \otimes Y').$$ It is an isomorphism when $X$ and $Y$ are dualizable, because then both sides identify with $X^* \otimes Y^* \otimes X' \otimes Y'$. In particular, for two dualizable objects $X,Y$ we have $\underline{\mathrm{End}}(X \otimes Y) = \underline{\mathrm{End}}(X) \otimes \underline{\mathrm{End}}(Y)$.
Convert the power series solution of $(1+x^2)y''+4xy'+2y=0$ into simple closed-form expression
Hint: Your power series are both geometric series.
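Spelling out the hint: summing the two geometric series gives $y=(c_1+c_2x)/(1+x^2)$, since the recurrence for the coefficients reduces to $a_{n+2}=-a_n$. A sympy verification of that closed form:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = (c1 + c2 * x) / (1 + x**2)   # sum of the two geometric series
ode = (1 + x**2) * sp.diff(y, x, 2) + 4 * x * sp.diff(y, x) + 2 * y
print(sp.simplify(ode))          # 0
```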
Symmetric latin square of order 9 & 10 ? (focusing the diagonal)
For any $n$, $$\matrix{1&2&3&\cdots&n\cr 2&3&4&\cdots&1\cr 3&4&5&\cdots&2\cr \vdots&\vdots&\vdots&\ddots&\vdots\cr n&1&2&\cdots&n-1\cr}$$ is a symmetric Latin square.
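In code (a small Python sketch generating and checking this circulant construction, e.g. for $n=9$):

```python
def symmetric_latin_square(n):
    # entry (i, j) = i + j + 1, reduced mod n into {1, ..., n}
    return [[(i + j) % n + 1 for j in range(n)] for i in range(n)]

n = 9
L = symmetric_latin_square(n)
assert all(L[i][j] == L[j][i] for i in range(n) for j in range(n))   # symmetric
assert all(sorted(row) == list(range(1, n + 1)) for row in L)        # Latin rows
assert all(sorted(L[i][j] for i in range(n)) == list(range(1, n + 1))
           for j in range(n))                                        # Latin columns
for row in L:
    print(row)
```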
What's the equation of this parametric surface?
Let's call your things $x(t), y(t), z(t)$, OK? Write $\gamma(t) = (x(t), y(t), z(t))$ for the point on the curve. Let $$ v(t) = \begin{bmatrix} x'(t) \\ y'(t) \\ z'(t) \end{bmatrix}, \qquad T(t) = v(t) / \| v(t) \|. $$ Then $T$ will be tangent to your curve at each time $t$. Do the same thing with $x'', y'', z''$ to get $$ w(t) = \begin{bmatrix} x''(t) \\ y''(t) \\ z''(t) \end{bmatrix}, \qquad u(t) = w(t) - ( w(t) \cdot T(t) ) T(t), \qquad N(t) = u(t) / \| u(t) \|. $$ Then $N(t)$ will be perpendicular to $T(t)$ at each point. Finally, let $$ B(t) = T(t) \times N(t). $$ Now: $$ S(s, t) = \gamma(t) + r \sin(s) N(t) + r \cos(s) B(t) $$ will, as $s$ ranges from $0$ to $2\pi$, and $t$ ranges over its usual range, and for small enough values of $r$ (like $r = 0.1$), sweep out a tube around your curve. What I've done is construct for you the Frenet-Serret frame for the curve, which assumes that the curvature (essentially the length of the vector $u(t)$) is never zero; if it is, you have to use the "Bishop frame", which ... takes more work to write out. I think that for your curve, the F-S frame will work fine.
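Numerically, the construction looks like this (a minimal numpy sketch, with a hypothetical helix standing in for your curve; swap in your own $x,y,z$):

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 400)
gamma = np.stack([np.cos(t), np.sin(t), 0.3 * t], axis=1)   # example curve

def unit(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

v = np.gradient(gamma, t, axis=0)                # gamma'
w = np.gradient(v, t, axis=0)                    # gamma''
T = unit(v)
u = w - np.sum(w * T, axis=1, keepdims=True) * T # remove tangential component
N = unit(u)
B = np.cross(T, N)

r = 0.1
s = np.linspace(0, 2 * np.pi, 60)
# S(s, t) = gamma(t) + r sin(s) N(t) + r cos(s) B(t): points on the tube
tube = gamma[:, None, :] + r * (np.sin(s)[None, :, None] * N[:, None, :]
                                + np.cos(s)[None, :, None] * B[:, None, :])
print(tube.shape)   # (400, 60, 3)
```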
Find all functions satisfy an equality
Set $y=f(x)$ so $f(0)=f(x^{2002}-f(x))-2001f(x)^2$. Set $y=x^{2002}$ to get $f(x^{2002}-f(x))=f(0)-2001x^{2002}f(x)=f(x^{2002}-f(x))-2001f(x)^2-2001x^{2002}f(x)$ and therefore $2001f(x)^2+2001x^{2002}f(x)=0$. Can you continue from here?
Do complex eigenvalues of a real matrix imply a rotation-dilation?
I'll try to formulate this in a way that does not depend on the real $n$-dimensional space $V$ being acted upon being $\Bbb R^n$; in other words, it does not depend on choosing a basis in $V$. Let $\def\C{\Bbb C}V_\C$ be the complexification of $V$, which is a complex vector space built from the real vector space $V\oplus V$ by defining multiplication by $\def\ii{\mathbf i}\ii\in\C$ by $\ii\cdot(v,w)=(-w,v)$, and let $L_\C:V_\C\to V_\C$ be the complexification of $L$, defined by $L_\C(v,w)=(L(v),L(w))$ (it is clearly complex-linear). Now $L_\C$ as complex-linear map has the same characteristic polynomial as $L$ has as real-linear map (indeed on suitable bases they have identical matrices), so for every complex eigenvalue $\lambda=a+b\ii$ of $L$ there is a corresponding eigenvector $(v,w)\in V_\C$. This means that $$ (L(v),L(w))=(a+b\ii)(v,w)=(av-bw,bv+aw). $$ One easily sees that $v,w\in V$ must be $\Bbb R$-linearly independent in the case $\lambda\notin\Bbb R$ of interest here, that is $b\neq 0$: supposing $cv+dw=0$ and applying $L$ one gets after simplification $b(-cw+dv)=0$, so $dv-cw=0$, and forming linear combinations with the supposed relation gives $(c^2+d^2)v=0$ and $(d^2+c^2)w=0$, which contradicts $(v,w)\neq(0,0)$. So the $\Bbb R$-subspace $\langle v,w\rangle_\Bbb R\subseteq V$ is $2$-dimensional and $L$-stable, with the restriction of $L$ to it given by the matrix $$ \begin{pmatrix}a&b\\-b&a\end{pmatrix}. $$ I guess this gives what you asked in the title. For the "bigger proof" you need to modify the statement first to exclude taking the empty set for $K$.
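One can also check the relations $L(v)=av-bw$, $L(w)=bv+aw$ numerically (a small numpy experiment; the random matrix is arbitrary, and if all its eigenvalues happen to be real the check is vacuously true with $b=0$):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 3))
vals, vecs = np.linalg.eig(L)
k = np.argmax(np.abs(vals.imag))   # pick an eigenvalue with largest |imag part|
lam, z = vals[k], vecs[:, k]
a, b = lam.real, lam.imag
v, w = z.real, z.imag              # z = v + i w
print(np.allclose(L @ v, a * v - b * w), np.allclose(L @ w, b * v + a * w))
```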
Proving a Certain $\mathbb{C}$-Algebra is a Domain Using a Specified Method
Since the question contains a mistake, let me fix it. The ring $R=\mathbb{C}[X,Y]/(X^2+Y^2-1)$ is a UFD because it is isomorphic to $\mathbb{C}[U,V]/(UV-1)$ (via the substitutions $U\mapsto X+iY$ and $V\mapsto X-iY$) and the last one is a ring of fractions of $\mathbb{C}[U]$ (with respect to the multiplicative system $\{1,U,U^2,\dots\}$). Remark. If $R$ is an integral domain and $\alpha\in R$, then $R[X]/(X^2−\alpha)$ is an integral domain if and only if there are no non-zero elements $a,b\in R$ such that $b^2=\alpha a^2$.
Change of Variables in Second Order ODE
Since $x$ does not appear explicitly in the equation, let $y$ be the independent variable and set $u=\dfrac{dy}{dx}$; then $$y''=\dfrac{du}{dx}=\dfrac{du}{dy}~\dfrac{dy}{dx}=u'u,$$ so the new DE is $$8y^2u'u+6yu=0$$
Why is the probability density function always positive?
By definition, the probability density function is the derivative of the distribution function. But a distribution function is non-decreasing on $\mathbb{R}$, so its derivative, wherever it exists, is non-negative.
$\sigma$-field generated by random variables
I guess that $\mathscr F_1$ is the $\sigma $-algebra generated by $X_1$. So it is $$\{\{H,T\}, \{H\}, \{T\},\emptyset\}.$$ Then $\mathscr F_2$ is the $\sigma $-algebra generated by $X_1,X_2$. Its atoms are the elements of $\Omega_2=\{H,T\}^2$, the set of possible pairs of $H$ and $T$, i.e. all the outcomes of a double coin flip; so $\mathscr F_2$ consists of all subsets of $\Omega_2$: $$\mathscr F_2=2^{\Omega_2}.$$ And for the $\sigma $-algebra generated by $X_1,X_2, X_3$ you will have to do the same with $\Omega_3$, the set of the possible triplets of $H$ and $T$.
How many recurrence relations are possible for a sequence?
Let $s$ be an arbitrary number. Then the recurrence $$T_n=s\left(3T_{n-1}-4\right)+(1-s)\left( T_{n-1}+6\cdot 3^{n-1} \right)$$ gives the same sequence for any $s$.
Let $a,b,n$ be positive integers, if $n|a^n-b^n$ then $n|\frac{a^n-b^n}{a-b}$
Suppose $(n, a-b)=d$ is the highest common factor. Use $a=b+kd$ and the binomial expansion to show that $d^2|a^n-b^n$.
What does the term $c_x$ mean in the theorem of Taylor's remainder?
Maybe the best way to see where $c_x$ "comes from" is to do the proof: Take $x_0=0$ for convenience, suppose $f:\mathbb R\to \mathbb R$ has (at least) $n+1$ derivatives at $0$ and let $p(x)=\sum^n_{k=0}\frac{f^{(k)}(0)}{k!}x^k$ be the Taylor (well, Maclaurin) polynomial for $f$ of degree $n$. The idea is that $p$ should be a good approximation to $f$ in some interval $I$ containing $0$. That is, $f(x)-p(x)$ should be small in this interval. We want to see how small it is. So, fix $b\in I$. We will approximate $f(b)$ using $p$. The trick is to choose $K$ so that $f(b)-p(b)-Kb^{n+1}=0$, and set $T(x)=f(x)-p(x)-Kx^{n+1}$. The first thing to notice is that $T^{(k)}(0)=0$ for all $0\le k\le n$. That is, the first $n$ derivatives of $T$ at $0$ are all equal to $0$. Then, since by construction $T(b)=f(b)-p(b)-Kb^{n+1}=0$, we have $T(b)=T(0)=0$ and Rolle's theorem applies to give a $0<c_1<b$ such that $T'(c_1)=0$. But we also have $T'(0)=0$, so Rolle applies again, and we get a $0<c_2<c_1<b$ such that $T''(c_2)=0.$ By now we see what is happening: we are getting a sequence of numbers $0<c_k<c_{k-1}<b$ with the property that $T^{(k)}(c_k)=0.$ This process continues until, at the $(n+1)^{st}$ step, we get a $0<c_{n+1}<\cdots<b$ such that $T^{(n+1)}(c_{n+1})=0.$ Then, since $T^{(n+1)}(x)=f^{(n+1)}(x)-0-K(n+1)!$, we get $0=f^{(n+1)}(c_{n+1})-0-K(n+1)!$, and therefore $K=\frac{f^{(n+1)}(c_{n+1})}{(n+1)!}$. The upshot of all this is that we now have, since $T(b)=0,$ $f(b)=p(b)+\frac{f^{(n+1)}(c_{n+1})}{(n+1)!}b^{n+1}=\sum^n_{k=0}\frac{f^{(k)}(0)}{k!}b^k+\frac{f^{(n+1)}(c_{n+1})}{(n+1)!}b^{n+1}.$ So, by expressing $K$ in terms of the $(n+1)^{st}$ derivative of $f$ at a point in $I$, we have found a bound on the error we make by replacing $f$ by $p$. And we also know where $c_{n+1}$ comes from: it is the last point we get from $n+1$ repetitions of Rolle's theorem.
Orthogonal Operator Infinite Dimensional Inner Product Space
Consider $\Bbb V$, the real vector space formed by the quasi-null families of real numbers $(x_n)_{n \in \Bbb Z}$, "quasi-null" meaning that $\{ n \in \Bbb Z ~:~x_n \neq 0\}$ is finite. The operations are defined in the usual way. Define the inner product: $$\langle x, y \rangle = \sum_{n \in \Bbb Z}x_n y_n, \qquad x = (x_n)_{n \in \Bbb Z}, y = (y_n)_{n \in \Bbb Z} \in \Bbb V$$ Consider the right/left shift operators $R,L: \Bbb V \rightarrow \Bbb V$ such that $(Rx)_n = x_{n-1}$ and $(Lx)_n = x_{n+1}$. They are clearly isomorphisms and $R = L^{-1}$. We also have $R^\ast = L$, and hence $L^\ast = R$; thus $R$ and $L$ are unitary, and in particular normal. If you let $\Bbb W = \{ x \in \Bbb V ~:~ x_n = 0,~\forall~n < 0 \} \leq \Bbb V$, then $\Bbb W$ is $R$-invariant, but not $L$-invariant. Having this in mind, we can consider the restriction $R_{|_\Bbb W}:\Bbb W \rightarrow \Bbb W$. Finally, we have that $(R_{|_\Bbb W})^\ast$ is the operator: $$L': \Bbb W \rightarrow \Bbb W \\ (L'x)_n = \begin{cases} 0 \mbox{, if } n < 0 \\ x_{n+1} \mbox{, if } n \geq 0\end{cases}$$ So it's not hard to check that $R_{|_\Bbb W}$ is not normal.
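The failure of normality is already visible in a finite-dimensional caricature of $R_{|_\Bbb W}$ (the truncated shift on $\Bbb R^5$; a tiny numpy check):

```python
import numpy as np

n = 5
R = np.eye(n, k=-1)                     # truncated right shift: (Rx)_i = x_{i-1}
print(np.allclose(R @ R.T, R.T @ R))    # False: the truncated shift is not normal
```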
Why the derivative of a complex function is a "Strong Demand"
If a function of two real variables is differentiable, it may not be twice-differentiable. Indeed, its derivatives might not even be continuous. If a function of a complex variable is differentiable, then it is infinitely differentiable, indeed analytic – it has a power series that converges to it.
Counterexample with finite index
Take $G=S_3$, $H=\langle (1\ 2)\rangle$ and $g=(2\ 3)$. The minimal $k$ such that $g^k\in H$ is $2$, and it doesn't divide the index $[G:H]=3$.
Left derived functors vanish on a projective.
If $A$ is projective, then $$ 0 \longrightarrow A \overset{\mathrm{id}_A}{\longrightarrow} A \longrightarrow 0 $$ is exact, where $\mathrm{id}_A$ is the identity morphism of $A$. Since each term is projective, it is a projective resolution for $A$. Applying $F$ gives $$ 0 \longrightarrow F(A) \longrightarrow F(A) \longrightarrow 0 \,, $$ where the map $F(A) \to F(A)$ is $F(\mathrm{id}_A) = \mathrm{id}_{F(A)}$, the identity morphism of $F(A)$. To get the $L^i F(A)$, we remove the second $F(A)$ term to get the chain complex $$ 0 \longrightarrow F(A) \longrightarrow 0 \,, $$ and take its homology. But since only the $0$-th term is nontrivial, the homology must be trivial for all $i > 0$. Note that we did not show $F$ is exact. The result of applying $F$ does end up being exact to the left of the $F(A)$ term, but only because it vanishes completely.
How to show the logical equivalence of the following two definitions of continuity in a topological space?
The definitions are not equivalent. Consider $X=Y=\Bbb R$ with usual topology and $$f(x)=\begin{cases}0&x\in\Bbb Q\\x&x\notin \Bbb Q\end{cases}$$ Then $f$ is definition-1-continuous at $0$: For every open set $V\ni 0$, we can let $U=V$ and find that $f(U)\subseteq V$. But $f$ is not definition-2-continuous at $0$: Let $V=(-1,1)$. Then $f^{-1}(V)=\Bbb Q\cup(-1,1)$, which is not open.
How do I write $A \# n$ in set builder notation?
$\{ X : X \subseteq A, \forall x,y \in X, |x - y| \le n \}$
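For a finite $A$, the same set can be enumerated directly (a Python analogue of the set-builder expression, with made-up $A$ and $n$):

```python
from itertools import combinations

A, n = {1, 2, 3, 7, 8}, 2      # made-up example data
close_subsets = [set(X)
                 for r in range(len(A) + 1)
                 for X in combinations(sorted(A), r)
                 if all(abs(x - y) <= n for x in X for y in X)]
print(close_subsets)           # includes the empty set, vacuously
```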
Do these two r.v. generate the same $\sigma$-algebra?
Hint: Borel sets are closed under translations and reflections. So for $B\subseteq\mathbb R$: $$B\text{ is a Borel set if and only if }1-B:=\{1-r\mid r\in B\}\text{ is a Borel set}$$ Note that $Y^{-1}(B)=1-X^{-1}(B)$.
Proving that a discrete set percentage always approximates the value of the percentage
Let $r(x)$ be the closest integer to $x$ (with upwards rounding in the middle). Then $$x - \frac{1}{2} < r(x) \le x + \frac{1}{2}.$$ Thus, if $n>0,$ \begin{align*} &nP - \frac{1}{2} \;<\; r(nP) \;\le\; nP + \frac{1}{2}\\[12pt] \implies\;&\frac{nP - \frac{1}{2}}{n} \;<\; \frac{r(nP)}{n} \;\le\; \frac{nP + \frac{1}{2}}{n}\\[12pt] \implies\;&P - \frac{1}{2n} \;<\;\frac{r(nP)}{n} \;\le\; P + \frac{1}{2n} \end{align*}
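A quick numerical illustration of the bound (with an arbitrary $P$; note that Python's `round` uses banker's rounding at exact halves, which still satisfies $|r(x)-x|\le 1/2$):

```python
P = 0.137                      # arbitrary percentage
for n in (10, 100, 1000, 10**6):
    approx = round(n * P) / n
    print(n, approx, abs(approx - P) <= 1 / (2 * n))   # always True
```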
Finding the derivative of $x$ tetrated to the $x$
Writing $^nx = x^{(^{n-1}x)}$ and differentiating logarithmically gives $$\frac{d}{dx}(^nx) = (^nx)\Bigl((\ln x)\frac{d}{dx}(^{n-1}x)+\frac{^{n-1}x}{x}\Bigr)$$
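A sympy check of the formula at $n=3$ (so $^3x=x^{x^x}$ and $^2x=x^x$):

```python
import sympy as sp

x = sp.Symbol('x', positive=True)
t2 = x**x                      # ^2 x
t3 = x**t2                     # ^3 x

# rhs of the recursion: (^3 x) * (ln(x) * d/dx(^2 x) + (^2 x)/x)
rhs = t3 * (sp.log(x) * sp.diff(t2, x) + t2 / x)
print(sp.simplify(sp.diff(t3, x) - rhs))   # 0
```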
Quotient of two assets following a GBM
Neither expression is correct: interchanging $X$ with $Y$ negates $\log(X/Y)$, and hence negates the mean, so the mean must be an antisymmetric function of $X$ and $Y$, which it is not in either of your two expressions. That said, the second expression is clearly closer to being correct; the mistake is in your constant term. Considering the degenerate case $\sigma_X=\sigma_Y=0$, in which everything is deterministic and $X(t)=X(0)e^{rt}$ (and similarly for $Y(t)$), should clarify what your constant term needs to be.