title | upvoted_answer
---|---|
Polytopes defined by $x_i \geq 0, Ax = b$ are generic? (Understanding simplex method) | Your point (1) is not correct. There can be at most $C(n,r)$ vertices, but it is easy to formulate LPs in which some basic solutions are infeasible because the corresponding point doesn't satisfy $x \geq 0$. Furthermore, there are degenerate LPs in which a basic variable is 0 in the basic solution. In this situation, different bases can easily correspond to the same vertex.
You should read up on degeneracy in linear programming. |
Motivation for the binary entropy function | Usually one requires certain properties (axioms) for an entropy: Nonnegativity; the bigger the uncertainty, the bigger the entropy; and additivity for independent observations/measurements. The last property implies that there should be a logarithmic dependence.
For more details on the axiomatic formulation of entropy see
http://www.math.nyu.edu/faculty/kleeman/infolect1.pdf
or
http://arxiv.org/pdf/quant-ph/0511171.pdf |
When a mathematician says $f(y,x)$ is strictly increasing in $x$, what do they mean? | "$f(x,y)$ is strictly increasing in $y$" means that, for every $y_1, y_2$, and $x$ we have
$$y_1<y_2\Rightarrow f(x,y_1)<f(x,y_2)$$ |
Mutually exclusive events-Probability | One way to think about 2)
We always have:
$P(A) = P(A\cap B) + P(A \cap B^C)$
but now that $A$ and $B$ are mutually exclusive, we have $P(A\cap B)=0$
thus: $P(A\cap B^C)=P(A)$ |
Using notion of disjoint open sets to prove a connectedness property involving separated sets | Suppose $Y = A \cup B$, where $Y$ is any space, then TFAE:
$A$ and $B$ are separated in $Y$.
$A$ and $B$ are both open and disjoint in $Y$.
$A$ and $B$ are both closed and disjoint in $Y$.
1 implies 3, as 1 implies $\overline{A} \subseteq Y \setminus B \subseteq A$ and so $A$ is closed and a symmetric argument shows that $B$ is closed as well. The disjointness is then immediate from the separatedness.
3 implies 2, as $A$ and $B$ are each other's complement so $A$ closed implies $B$ open and vice versa.
2 implies 1, as $x \in \overline{A} \cap B$ would imply that $B$ is an open neighbourhood of $x$, which would have to intersect $A$ since $x \in \overline{A}$. But $A$ and $B$ are disjoint so this cannot happen. Symmetrically $\overline{B} \cap A = \emptyset$ and 1 holds.
Also, if $A$ and $B$ are separated in $X$, then $A$ and $B$ are also separated in any subspace $Y$ that contains $A$ and $B$.
So TFAE:
a. We can write $X$ as a union of two separated non-empty sets.
b. $X$ has a closed-and-open set $\emptyset \neq C \neq X$.
c. We can write $X$ as a disjoint union of two non-empty open subsets.
d. We can write $X$ as a disjoint union of two non-empty closed subsets.
e. There is a continuous non-constant map $f: X \to \{0,1\}$, where $\{0,1\}$ has the discrete topology.
The proofs are obvious from the first equivalence and the fact that if $X= A\cup B$, with both $A,B$ disjoint, non-empty and closed/open, then both $A$ and $B$ are non-trivial closed-and-open sets, and the observation that if we have such a partition, mapping $A$ to $0$ and $B$ to $1$ or vice versa, gives a continuous map and such a map induces a partition by $f^{-1}[\{0\}], f^{-1}[\{1\}]$. They're obvious but useful reformulations.
If $X$ satisfies one of these equivalent conditions a-e, $X$ is called disconnected. If $X$ is not disconnected, $X$ is called connected.
So if $A,B$ are separated in $X$, they are a disconnection of $Y=A \cup B$ and a connected subset $C$ of $Y$ must be contained in $A$ or in $B$ or else $A \cap C$ and $B \cap C$ would disconnect $C$. |
Formula to get the sum of values from an adjustable spinner | Let's say you have the probability $P$ of being in a state $S$ defined as:
$P(S_2)=0.08$, $P(S_5)=0.40$, $P(S_{10})=0.24$, $P(S_{15})=0.20$, $P(S_{40})=0.07$, $P(S_{50})=0.01$,
Now if you get $50$ or $40$ in the first go, you don't need to spin the wheel anymore.
If you get any other number you will have to spin again. Now suppose you get $10$ and in the next spin $40$: you have already reached the sum. But if you get $15$ you will again have to spin and go through the same process.
Forming this mathematically,
Let $M_{1,40}$ be the probability of reaching a sum of at least $40$ in the first spin; then $M_{1,40} = P(S_{40}) + P(S_{50})$.
Similarly, $M_{2,40} = M_{1,40} + [P(S_{2})+P(S_{5})+P(S_{10})+P(S_{15})].[P(S_{40}) + P(S_{50})] $.
And all other calculations would go on like this.
If you want you can come up with a formula for values $A,B,C,D,E$ having probabilities $a,b,c,d,e$ by hand in a similar fashion and use it directly in simulation.
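As a sketch of how one might carry this out in code (the values and probabilities are the ones above; the target sum of $40$ is an illustrative choice, and `expected_spins` is a hypothetical helper, not from the question), backward induction over the partial sums gives both $M_{1,40}$ and the expected number of spins:

```python
# Spinner outcomes and their probabilities (from the answer above).
values = [2, 5, 10, 15, 40, 50]
probs  = [0.08, 0.40, 0.24, 0.20, 0.07, 0.01]

def expected_spins(target):
    # Backward induction: E[s] = 1 + sum of p_v * E[s + v] over outcomes v
    # that leave the running sum still below the target.
    E = [0.0] * target
    for s in range(target - 1, -1, -1):
        E[s] = 1.0 + sum(p * E[s + v] for v, p in zip(values, probs) if s + v < target)
    return E[0]

# Probability of reaching the target in a single spin from sum 0 (M_{1,40}):
m1 = sum(p for v, p in zip(values, probs) if v >= 40)
```

With these numbers $m_1 = P(S_{40}) + P(S_{50}) = 0.08$, matching the formula for $M_{1,40}$ above.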
It is evident here that the number of spins needed in the worst case to always attain a sum of $N$ is $\left\lceil \frac{N}{\min(A,B,C,D,E)} \right\rceil$. |
Rudin functional analysis theorem 3.28, $P$ is weak*-compact in $C(Q)^*$. | Assuming that $\|h\|$ denotes the $\sup$-norm of $h$, we have that $$\bigg |\int_Q h d\mu \bigg| \leq \int_Q |h| d \mu \leq \int_Q \|h\| d\mu = \|h\| \mu(Q) = \|h\|$$
for $\mu \in P$, since $P$ contains only probability measures on $Q$. Hence, if $\|h\| < 1$ and $\mu \in P$ then $$\bigg |\int_Q h d\mu \bigg| \leq 1$$ as desired. |
Angle quadrisection in a triangle | Once you've used the angle bisector theorem you know that the center bisector breaks up 98 into 42 and 56. Now drop a height to the 98 side. You know that it has to be closer to the 84 side than that center bisector. Why? Now break up the base into 42-x and 56+x according to where you put the height. Using the Pythagorean theorem, we know $84^2 - (42-x)^2 = 112^2 - (56+x)^2$. Solve and you get $x=21$. This means that the altitude (height) of the triangle formed by the 84 side, the 42 side, and the center bisector is also a median. Therefore, that triangle is isosceles and the middle bisector has length 84 and the result follows from another application of the angle bisector theorem. |
How to imagine/prove that all of the following pictures are 2-torus? | The comments are misinformed: a monkey saddle is not a Morse singularity. You cannot find the Euler characteristic by counting critical points, since they are not all nondegenerate!
Instead you must use the following variation of the Morse Lemma.
Consider the sequence of groups $H_*(M_t)$, where $M_t = f^{-1}(-\infty, t]$. This sequence changes precisely at critical values $t$. So we just need to see what happens at the monkey saddle (let's say that happens at time $t=1$). For $M_{.999}$, we have a manifold diffeomorphic to a disc, as it has exactly one nondegenerate critical point.
Now look at the picture in IV. What happens as we go from the bottom to the top? We've taken $M_{.999} \times [0,1]$ and added a "tripod" on the top --- a space that looks like a thickened letter $Y$ --- by attaching the three boundary arcs $(\text{3 points}) \times I$ to $M_{.999} \times \{1\}$. You can see by the Mayer-Vietoris sequence that the result has $H_*(M_{1.001}) = \Bbb Z$ in degree 0 and $\Bbb Z^2$ in degree 1. Another way to think about this: attaching this "tripod" is functionally equivalent to attaching two handles, which correspond to two nondegenerate index 1 critical points.
Anyway, all that's left is to attach the cap, which only adds something to top degree homology. What we get out of this is that the homology of this surface matches up with the torus, and so it is a torus. |
How can I show that the Zariski topology is the discrete topology for any finite field? | Let $x_1,\dots,x_n$ be elements of the finite field $F$; then $V((x-x_1)\cdots(x-x_n)) = \{x_1,\dots,x_n\}$, so any (finite) subset of $F$ is closed. This implies that every subset is open, since its complement is a finite subset and thus closed. |
Really Stuck on Partial derivatives question | We are given:
$$u(x, y)=\begin{cases}
xy \frac {x^2-y^2}{x^2+y^2}, ~(x, y) \ne (0,0)\\\\
~~~0, ~~~~~~~~~~~~~~~(x, y) = (0,0)\;.
\end{cases}$$
I am going to multiply out the numerator for ease in calculations, so we have:
$$\tag 1 u(x, y)=\begin{cases}
\frac {x^3y - xy^3}{x^2+y^2}, ~(x, y) \ne (0,0)\\\\
~~~0, ~~~~~~~~~~~~~~(x, y) = (0,0)\;.
\end{cases}$$
We are asked to:
(a) Find: $\displaystyle \frac{\partial u} {\partial x} (x,y) ~\forall (x, y) \in \Bbb R^2$
(b) Find: $\displaystyle \frac{\partial u} {\partial y} (x,y) ~\forall (x, y) \in \Bbb R^2$
(c) Show that $ \displaystyle \frac {\partial^2 u} {\partial x \partial y} (0,0) \neq \frac {\partial^2 u} {\partial y \partial x} (0,0)$
(d) Check, using polar coordinates, that $\displaystyle \frac {\partial u}{\partial x} \text{ and } \frac {\partial u}{\partial y} $ are continuous at $(0,0)$
Using $(1)$, for part $(a)$, we get:
$\tag 2 \displaystyle \frac{\partial u} {\partial x} (x,y) = \frac{(3x^2y- y^3)(x^2+y^2) - 2x(x^3y - xy^3)}{(x^2 + y^2)^2} = \frac{x^4y + 4x^2y^3-y^5}{(x^2+y^2)^2}$
Using $(1)$, for part $(b)$, we get:
$\tag 3 \displaystyle \frac{\partial u} {\partial y} (x,y) = \frac{(x^3-3xy^2)(x^2+y^2) - 2y(x^3y - xy^3)}{(x^2+y^2)^2} = \frac{x^5 - 4x^3y^2-xy^4}{(x^2+y^2)^2}$
Next, we need mixed partials, so using $(2)$, we have:
$ \tag 4 \displaystyle \frac {\partial^2 u} {\partial x \partial y} (x, y) = \frac{(x^4+ 12x^2y^2-5y^4)(x^2+y^2)^2 - 2(x^2+y^2)(2y)(x^4y + 4x^2y^3-y^5)}{(x^2+y^2)^4} = \frac{x^6 + 9x^4y^2-9x^2y^4-y^6}{(x^2+y^2)^3} = \frac {\partial^2 u} {\partial y \partial x} (x, y)$
Thus, for $(x, y) \neq (0,0)$ the mixed partials agree:
$$\tag 5 \displaystyle \frac {\partial^2 u} {\partial x \partial y} (x, y) = \frac {\partial^2 u} {\partial y \partial x} (x, y) = \frac{x^6 + 9x^4y^2-9x^2y^4-y^6}{(x^2+y^2)^3}\;.$$
At the origin this formula does not apply; there the mixed partials must be computed from the limit definition.
Now, for part $(c)$, we want to show that $ \frac {\partial^2 u} {\partial x \partial y} (0,0) \neq \frac {\partial^2 u} {\partial y \partial x} (0,0)$, so we need to find the limits of each mixed partial.
We have:
$ \tag 6 \displaystyle \frac{\partial^2 u} {\partial x \partial y} (0,0) = \lim\limits_{h \to 0} \frac{\frac{\partial u}{\partial x} (0, h) - \frac{\partial u}{\partial x} (0, 0)}{h} = \lim\limits_{h \to 0} \frac{-h^5/h^4}{h} = \lim\limits_{h \to 0} \frac{-h}{h} = -1$, and
$ \tag 7 \displaystyle \frac{\partial^2 u} {\partial y \partial x} (0,0) = \lim\limits_{h \to 0} \frac{\frac{\partial u}{\partial y} (h, 0) - \frac{\partial u}{\partial y} (0, 0)}{h} = \lim\limits_{h \to 0} \frac{h^5/h^4}{h} = \lim\limits_{h \to 0} \frac{h}{h} = +1$
$\therefore$, for part $(c)$, we have shown that:
$$\frac {\partial^2 u} {\partial x \partial y} (0,0) \neq \frac {\partial^2 u} {\partial y \partial x} (0,0)$$
as desired.
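The inequality of the mixed partials at the origin can also be sanity-checked numerically; the sketch below codes the first partials $(2)$ and $(3)$ directly and forms the difference quotients from $(6)$ and $(7)$:

```python
# First partial derivatives away from the origin, from (2) and (3);
# both first partials vanish at the origin itself.
def u_x(x, y):
    if x == 0.0 and y == 0.0:
        return 0.0
    return (x**4 * y + 4 * x**2 * y**3 - y**5) / (x**2 + y**2)**2

def u_y(x, y):
    if x == 0.0 and y == 0.0:
        return 0.0
    return (x**5 - 4 * x**3 * y**2 - x * y**4) / (x**2 + y**2)**2

h = 1e-6
u_xy_at_0 = (u_x(0.0, h) - u_x(0.0, 0.0)) / h   # difference quotient from (6)
u_yx_at_0 = (u_y(h, 0.0) - u_y(0.0, 0.0)) / h   # difference quotient from (7)
```

The two quotients come out to $-1$ and $+1$ respectively, in line with $(6)$ and $(7)$.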
Can you handle part $(d)$?
Regards |
Simplifying $(A\implies B)\implies(A\implies C)$ | If you go on with De Morgan laws you get
$$
(A\land \lnot B)\lor(\lnot A\lor C)
$$
that can be reordered as
$$
\bigl((A\land \lnot B)\lor\lnot A\bigr)\lor C
$$
Apply distributivity to the expression $(A\land \lnot B)\lor\lnot A$, that gives $(A\lor\lnot A)\land(\lnot B\lor\lnot A)$; the first term can be removed, being true, so we remain with
$$
(\lnot B\lor\lnot A)\lor C
$$
and, again with De Morgan, we get $(A\land B)\Rightarrow C$ |
Proving Two sets have same cardinality | Hint: treat $B$ as a kind of indicator function for $P(A)$ - i.e. if $f:A\to\{0,1\}$ is a function, then $\{a \in A : f(a) = 1\}$ is a subset of $A$.
With regards to your answer: if two sets have the same cardinality, then there exists a bijective function between them. But that doesn't mean that every function between them has to be bijective (e.g. $\{0,1\}$ and $\{0,1\}$ have the same cardinality, but I can take a function that maps everything to $0$ - it is not bijective).
So in order to prove that two things have the same cardinality, you need to find a bijection between them, not prove that any function between them is a bijection. |
Proving continuity with $\epsilon$-$\delta$ criterion | 1) You say: "let's assume without loss of generality that $\|(x,y) -(a,b)\|_\infty = |x-a|$." You can't really do this WLOG (at least I don't see how you don't lose generality), but more importantly you don't need to make this assumption; it isn't used anywhere. By the definition of the $\|\cdot \|_\infty$ norm, $|x-a|, |y-b|<\delta$.
2) You write that you define $\delta := \frac{\epsilon}{2|a| + 2}$, but you really need to take $\delta := \min( \frac{\epsilon}{2|a| + 2}, 1)$ since you used $\delta \leq 1$ above.
Other than that, everything looks good. |
Demonstration with fitch notation and quantifiers | In the version of a Fitch style proof that you are apparently using, when you get to your line 5, you can go on to just quantify by UQ int.
Thus
$1\quad \forall x(Ax \to Bx)\\
2\quad \forall xAx\\
3\quad\quad d\mid \quad Ad \to Bd\\
4\quad\quad \ \mid \quad Ad\\
5\quad\quad \ \mid \quad Bd\\
6\quad \forall xBx$
Check out your version of the universal quantifier introduction rule which we are appealing to at the last step!
[Actually I don't think this is the most transparent version of the Fitch style proof-layout, although it is [near] Thomason's for example. I prefer to only indent columns when a new, independent, temporary assumption is being made which doesn't follow from what's gone before, e.g. as when starting a conditional proof, or making an assumption for reductio. So even in a Fitch-style natural deduction system where you indent to the right every time you make a new assumption, and track back to the left when you eventually discharge that assumption, I'd prefer to write this proof simply as
$1\quad \forall x(Ax \to Bx)\\
2\quad \forall xAx\\
3\quad\ Ad \to Bd\\
4\quad Ad\\
5\quad Bd\\
6\quad \forall xBx$
Since, whatever $d$ picks out, (3) and (4) follow from (1) and (2) without further ado. And (6) follows from (5) as (5) depends on no special premisses about $d$.] |
Sum of reciprocals of powers of two and factorials | It is, hopefully, a well known result that
$$ \mathrm{e}^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$$
Your series can be rewritten as
\begin{align}
\sum_{n=1}^{\infty} \frac{1}{n! 2^n}
&= \sum_{n=1}^{\infty} \frac{ \left(\frac{1}{2} \right)^n }{n!} \\
&= \left[
\frac{ \left(\frac{1}{2}\right)^0 }{0!} \color{red}{- \frac{ \left(\frac{1}{2}\right)^0 }{0!}}\right] + \sum_{n=1}^{\infty} \frac{ \left(\frac{1}{2} \right)^n }{n!} && (\text{add zero}) \\
&= \color{red}{-1} + \sum_{n=0}^{\infty} \frac{ \left(\frac{1}{2} \right)^n }{n!} \\
&= -1 + \mathrm{e}^{\frac{1}{2}} && (\text{use the well known result}) \\
&= \sqrt{\mathrm{e}} -1.
\end{align} |
A limit theorem in Rudin. Please elaborate? | Let $n$ be an arbitrary positive integer. Since $\frac{1}{n}>0$, it follows from the definition of a limit point that
\begin{equation}B_{\frac{1}{n}}(p) \cap E \setminus \{ p \} \neq \emptyset \, .
\end{equation}
Hence there is a point that we denote $p_n$ different from $p$ so that $d(p_n, p)<\frac{1}{n}$. Since $n$ was arbitrary it now follows that, for each positive integer $n$ there is a point $p_n \in E \setminus \{ p \}$ such that $d(p_n,p)<\frac{1}{n}$.
Let $\varepsilon > 0$ be given. By the Archimedean property, there is a positive integer $N$ so that $N\varepsilon>1$. It now follows that $d(p_n,p)<\frac{1}{n} \leq \frac{1}{N} < \varepsilon$ whenever $n \geq N$, so $p_n \to p$. |
Let $V $be a vector space. Prove/Disprove: There is a norm $\|\cdot\|$, such that all subsets of $V$ are open sets in $(V,\|\cdot\|)$. | HINT: This is impossible. Show that there is a homeomorphism between a $1$-dimensional subspace and $\Bbb R$. |
Show that the set of lines in $\mathbb{R}^n$ is a (smooth) manifold of dimension $2(n-1)$ | I tried considering $n$ open sets $\lbrace U_i \rbrace_{1\leq i\leq n}$ where $U_i$ is the set of all the lines not parallel to $e_i$ the $i$-th standard basis vector, but could not get very far.
You have to exclude more than a single point for this to work cleanly; it's best to exclude a subspace of codimension $1$.
The atlas: Let $U_i$ be the set of all lines not contained in a hyperplane $x_i=c$ parallel to the $i$th coordinate hyperplane. For coordinates on $U_i$, associate each such line to the ordered pair $(u,v)$ of its intersections with the hyperplanes $x_i=0$ and $x_i=1$ respectively.
The dimension is $2(n-1)$, of course. |
Prove that $\lim _{n\to \infty \:}\left(\sqrt{n^2+3}-\sqrt{n^2+1}\right)=0$ | You first have to multiply the top and bottom by the conjugate, which is $\sqrt{n^2+3} + \sqrt{n^2+1}$.
So you get $$\dfrac{(\sqrt{n^2+3}-\sqrt{n^2+1})(\sqrt{n^2+3}+\sqrt{n^2+1})}{\sqrt{n^2+3}+\sqrt{n^2+1}} = \dfrac{n^2+3 - (n^2+1)}{\sqrt{n^2+3} + \sqrt{n^2+1}} = \dfrac{2}{\sqrt{n^2+3} + \sqrt{n^2+1}}$$
Clearly, the $\lim_{n\to\infty}$ of this expression $= 0$, since the numerator is constant while the denominator grows without bound. |
How many distinct integer solutions does the inequality $|x_{1}|+|x_{2}|+...+|x_{n}| \leq t$ have? | For $k=1,\ldots,n$ there are $\binom{n}k$ ways to choose $k$ positions to be non-zero, $2^k$ ways to assign algebraic signs to those positions, and $\binom{s-1}{k-1}$ ways to assign non-zero absolute values to those positions to get a sum of $s$. Letting $s$ range from $1$ to $t$, and adding $1$ for the all-zero solution with sum $0$, we get a total of
$$\begin{align*}
1+\sum_{s=1}^t\sum_{k=1}^n2^k\binom{s-1}{k-1}\binom{n}k&=1+\sum_{k=1}^n2^k\binom{n}k\sum_{s=1}^t\binom{s-1}{k-1}\\\\
&=1+\sum_{k=1}^n2^k\binom{n}k\binom{t}k\\\\
&=\sum_{k=0}^n2^k\binom{n}k\binom{t}k
\end{align*}$$
solutions, which is at least a bit simpler to work with. I’ve not been able to find a closed form for this, however, even in the special case $t=n$ (and the sequence that results in that case is unknown to OEIS). |
Distance between parametric function and a point | I'm not sure if this is homework.
Hint:
I'm assuming that we're given $R, c, d, x_0, v_x, a_x, y_0, v_y, a_y.$
The squared distance between $P = (c,d)$ and $(x,y)$ is given by
$$ R^2 = (x - c)^2 + (y - d)^2 \tag{1}$$
Substitute the definition of $$x = x_0 + v_x t + \frac{1}{2} a_x t^2 \tag{2} \\ y = y_0 + v_y t + \frac{1}{2} a_y t^2$$ in $(1)$
You're left with a polynomial in $t.$ Solve for $t,$ this will give you different values of $t = \{t_1, \ldots, t_4\}.$ Substitute each $t_i$ in $(2)$ to get different $(x,y)$ points.
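A minimal numerical sketch of this recipe (the motion parameters below are made-up sample values, not from the question): scan $t$ for sign changes of the squared-distance function minus $R^2$, then refine each root by bisection.

```python
# Times t at which the moving point is at distance R from P = (c, d).
# Hypothetical sample parameters: uniform motion along the x-axis.
c, d, R = 1.0, 0.0, 2.0
x0, vx, ax = 0.0, 1.0, 0.0
y0, vy, ay = 0.0, 0.0, 0.0

def f(t):
    # Squared distance to P minus R^2; roots are the times we want.
    x = x0 + vx * t + 0.5 * ax * t * t
    y = y0 + vy * t + 0.5 * ay * t * t
    return (x - c)**2 + (y - d)**2 - R**2

def roots(t_min, t_max, steps=1000):
    ts = [t_min + (t_max - t_min) * i / steps for i in range(steps + 1)]
    found = []
    for a, b in zip(ts, ts[1:]):
        if f(a) == 0.0:
            found.append(a)
        elif f(a) * f(b) < 0:          # sign change: refine by bisection
            lo, hi = a, b
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            found.append(0.5 * (lo + hi))
    return found
```

With this straight-line example the point sits at $(t,0)$, so it is at distance $2$ from $(1,0)$ exactly at $t=-1$ and $t=3$.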
Update # 1:
Here is the resulting equation (output of a computer algebra system):
$$(0.25\,a_y^2 + 0.25\,a_x^2)\,t^4 + (v_y a_y + v_x a_x)\,t^3 + (v_x^2 + x_0 a_x - a_x c - a_y c + v_y^2 + y_0 a_y)\,t^2 + (-2 v_y c + 2 y_0 v_y + 2 x_0 v_x - 2 v_x c)\,t + x_0^2 + 2c^2 - R^2 - 2 x_0 c - 2 y_0^2 = 0$$
Anyways, that's the degree $4$ polynomial in $t.$ To find a closed-form expressions for the roots $$\{t_1 = \cdots, t_2 = \cdots, t_3 = \cdots, t_4 = \cdots\},$$ you will need to do a lot of algebra on this polynomial. For example this. |
If $X$ is regular, then so is $X\times \Bbb A^1$ | Your interpretation of $X\times\Bbb A^1$ to mean $X\times_{\Bbb Z} \Bbb A^1_{\Bbb Z}$ is correct. The reduction to the affine case is good, but you don't have to assume that $B$ is local, and I think this is actually maybe leading you astray here.
What do we know? We know that for every prime $\mathfrak{p}\subset B$, the ring $B_\mathfrak{p}$ is a regular local ring. What do we want to prove? That for every prime ideal $\mathfrak{q}\subset B[x]$, the ring $(B[x])_\mathfrak{q}$ is a regular local ring. Since $\mathfrak{q}\cap B$ is a prime ideal of $B$, we can apply the proposition to $B_{\mathfrak{q}\cap B} \to (B[x])_{\mathfrak{q}}$ as you've done and get that $(B[x])_\mathfrak{q}$ is regular, and we're done. |
Is $\frac{2^{3510\times2}-1}{218^2-1}$ prime? | Facts:
$6 \mid k\implies 9\mid(2^k-1)$
$6 \mid (2\times3510)$
$9 \mid (2^{2\times3510}-1)$
$9 \not\mid (218^2-1)$
$3 \mid \frac{2^{2\times3510}-1}{218^2-1}$, so the quotient is divisible by $3$ and therefore not prime. |
A difficulty in understanding a use of Cauchy Schwartz inequality. | For any complex number $z=u+iv$, we have
$$ \Re z=u\leq \sqrt{u^2+v^2}=|z|$$
Setting $z=\langle x,y\rangle$ and using the Cauchy-Schwarz inequality, we get
$$ \Re\langle x,y\rangle\leq |\langle x,y\rangle|\leq \|x\|\|y\|$$ |
Prove that $P(A\cup B) = P (B) + P (A)\cdot P (B^c)$ | Make use of the formula $P(A\cup B) = P(A)+P(B)-P(A\cap B)$.
You have also provided $P(A\cap B) = P(A)\cdot P(B)$ since $A$ and $B$ are independent.
You can use them to express the terms in the problem. |
To show a family of holomorphic functions on the unit disc satisfying $|f(0)|^2+\int_D|f'(z)|^2dA\leq1$ is normal | Let $\epsilon >0$. It is enough to show that the family is uniformly bounded on $\{z:|z|\leq 1-\epsilon\}$. Let $|z|\leq 1-\epsilon$. Consider the disk $D_{\epsilon}$ of radius $\epsilon /2$ around $z$. It is well known that analytic functions have the mean value property, so $f'(z)$ is the average of the values of $f'$ on $D_{\epsilon}$. Using the hypothesis and the Cauchy-Schwarz inequality we get an upper bound for $|f'(z)|$. Since $|f(0)| \leq 1$ and $f(z)$ is $f(0)$ plus the integral of $f'$ on the line segment from $0$ to $z$, we have proved that the family is uniformly bounded on $\{z:|z|\leq 1-\epsilon\}$. |
Bayes Rule and Conditional Probability | A common mistake is to try to solve problems about Bayes' rule by intuition, which often leads to errors. I found that the following methodical steps work well.
Write down conditions given and questions asked in precise probability terms.
Use mathematical manipulation (including Bayes rule and rules about marginal probability, joint probability) to solve the problem.
Specifically in this question, the given conditions:
P(test is positive | has measles) = P(test is negative | no measles) = 0.76
P(has measles) = 0.02%=0.0002
Questions:
P(test is positive)
P(has measles | test is positive)
Now mathematical manipulation:
P(test is positive) = P(test is positive, has measles) + P(test is positive, no measles) = P(test is positive | has measles)P(has measles) + P(test is positive | no measles) P(no measles) = 0.76 * 0.0002 + (1-0.76)*(1-0.0002) = 0.240104.
P(has measles | test is positive) = P(test is positive | has measles) * P(has measles) / P(test is positive) = 0.76 * 0.0002 / 0.240104 = 0.000633.
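The manipulation above is easy to check in a few lines of code (a sketch; the variable names are mine):

```python
# Numbers from the problem: sensitivity = specificity = 0.76, prior 0.0002.
p_measles = 0.0002                    # P(has measles)
p_pos_given_m = 0.76                  # P(test positive | has measles)
p_neg_given_no = 0.76                 # P(test negative | no measles)

# Law of total probability:
p_pos = p_pos_given_m * p_measles + (1 - p_neg_given_no) * (1 - p_measles)
# Bayes rule:
p_m_given_pos = p_pos_given_m * p_measles / p_pos
```

This reproduces $P(\text{positive}) = 0.240104$ and $P(\text{measles} \mid \text{positive}) \approx 0.000633$.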
The main point of my answer is more of showing the trick than giving the answer. Hope it helps. |
Is $\mathbb{R}$ a subset of $\mathbb{R}^2$? | It's not really true that $\mathbb R^n$ is a subset of $\mathbb R^m$ when $n<m$. It is true that there is a subspace of $\mathbb R^m$ that is isomorphic to $\mathbb R^n$, but unfortunately, there are way too many of them, and there is no real way to pick one of them usefully as "the obvious embedding."
A simple example when $n=1$ and $m=2$ is that the $x-$ and $y-$axes are both embeddings of $\mathbb R^1$ into $\mathbb R^2$, and there is no "natural" way to choose between these embeddings (or any other embeddings of the real line into $\mathbb R^2$.) |
Finitely Additive Set Function Inequality | Your idea is essentially this: we know that for every $m$ we have $\bigcup_{i=1}^{m} A_i \subseteq \bigcup_{i=1}^\infty A_i$, so also (by monotonicity of $\mu$) $\mu(\bigcup_{i=1}^m A_i) \le \mu(\bigcup_{i=1}^\infty A_i)$.
(You reprove the monotonicity ($A \subseteq B$ in the field, then $\mu(A) \le \mu(B)$) for this specific case, but you need not do that, if it was proved before: if $A \subseteq B$, both in $\mathcal{F}$, then $\mu(B) = \mu((B \setminus A) \cup A) = \mu(B \setminus A) + \mu(A) \ge \mu(A)$ as $\mu$ is non-negative and the union is disjoint.)
As the (finite) union is disjoint, for all $m$ we know, as you state correctly, that $a_m := \sum_{i=1}^m \mu(A_i) \le \mu(\bigcup_{i=1}^\infty A_i)$.
Now the $a_m$ form an increasing sequence of reals, bounded above by the fixed real number $\mu(\bigcup_{i=1}^\infty A_i)$, so their limit exists and is, by definition of a convergent series, equal to $\sum_{i=1}^\infty \mu(A_i)$; this limit is still bounded above by $\mu(\bigcup_{i=1}^\infty A_i)$, essentially because all sets of the form $(-\infty, c]$ are closed in the reals.
I wrote it down a bit pedantically, perhaps, but yes, this argument is perfectly valid. The essential idea is that the finite sums etc. are all in the reals so you use all the standard facts for the reals there. You have proved an inequality of real numbers, using monotonicity of $\mu$, which you need not re-prove, from axioms for $\mu$ and then proceed from there.
Added after comment: without extra information, $\mu$ can have two possible co-domains, namely the non-negative reals $[0,+\infty)$ (no infinite values) or $[0,+\infty]$ in the extended reals, so that $+\infty$ is also an allowed value (and sums are adapted in the logical way).
If the former is the case, $\mu(\bigcup_{i=1}^\infty A_i) < +\infty$ by definition, as we take $\mu$ of a set in the field (!). If the latter, then the above argument also holds, and even if $\mu(\bigcup_{i=1}^\infty A_i) = +\infty$ we have nothing to prove, as all numbers are $\le +\infty$ and the statement becomes trivially true (no need to go through the proof): any series of $\mu(A_i)$ values is either a finite non-negative number or $+\infty$, and both are $\le +\infty$. |
Calculate Overlapping Area of $2$-Dimensional Shapes | Hint:
This is a pretty complex problem as there are many possible cases, but you can address it in a manageable way by the sweepline approach.
Draw horizontal lines through the corners of the square and through the N and S poles of the circle. The two shapes delimit line segments on these horizontals, and finding the intersection of two segments is trivial. When you go down from one line to the next, if the endpoints switch position, you have an intersection between the circle and the square outlines.
Scanning all horizontals in turn, you can follow the outline of the intersection shape, and as you go you describe curvilinear trapezoids (the oblique sides are circular arcs). Now it "suffices" to accumulate the areas of these trapezoids, using the standard formula, corrected with the areas of circular segments.
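For sanity-checking a sweepline implementation, a much simpler (if slower) alternative is direct numerical integration over thin horizontal strips. A sketch, assuming an axis-aligned square $[x_0,x_1]\times[y_0,y_1]$ and a circle of radius $r$ centred at $(c_x,c_y)$ (all parameter names are mine):

```python
import math

def overlap_area(x0, x1, y0, y1, cx, cy, r, strips=200000):
    # At height y the circle covers [cx - w, cx + w] with
    # w = sqrt(r^2 - (y - cy)^2); intersect that with [x0, x1] and
    # integrate the overlap width over y with the midpoint rule.
    lo, hi = max(y0, cy - r), min(y1, cy + r)
    if lo >= hi:
        return 0.0
    dy = (hi - lo) / strips
    area = 0.0
    for i in range(strips):
        y = lo + (i + 0.5) * dy
        w = math.sqrt(max(0.0, r * r - (y - cy)**2))
        area += max(0.0, min(x1, cx + w) - max(x0, cx - w)) * dy
    return area
```

A circle of radius $1$ entirely inside the square gives area $\approx \pi$, and a square covering exactly half the circle gives $\approx \pi/2$.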
Warning:
The criterion "if the endpoints switch position" is not sufficient. You can very well have a large arc which is intersected twice by a vertical side, so that the endpoints switch position twice. So it is safer to write a test that explicitly looks for intersections between an arc and a vertical line in the current vertical span. |
Action of Unipotent algebraic group | This might be a somewhat silly example: Take $G$ to be any unipotent group and consider the $\Bbb R$-variety $V:=\operatorname{Spec}(\Bbb C)$, which clearly satisfies $V(\Bbb R)=\emptyset$. If we let $G$ act on $V$ trivially, it nevertheless acts transitively and $\Bbb R$-morphically. The fundamental issue is visible however: Given a $K$-variety $V$ with transitive $K$-morphic $G$-action, we may consider $V$ as a $k$-variety for a proper subfield $k\subsetneq K$. Since $V(k)=\emptyset$ (there can be no $k$-morphism $K\to k$), this is also an example because the action of $G$ remains transitive and is also $k$-morphic because it is $K$-morphic.
So for a more sophisticated example, take the $\Bbb R$-variety $V=\operatorname{Spec}(\Bbb C[x])=\Bbb A_{\Bbb C}^1$ and $G=\Bbb G_a$, i.e. $G=V$ acting on itself by addition. |
Area of trapezoid based on radii of excircles | The excircle radii are not enough to compute the area of the trapezoid, as you can see from the animation below: the excircles always have radii $1$, $1$, $1$ and $3$, but the area of the trapezoid varies. |
When is a module finitely generated over its endomorphism ring? | There is the following theorem by Morita [Lang, Algebra - Theorem 7.1]:
Let $R$ be a ring and $M$ an $R$-module. Then $M$ is a generator iff $M$ is balanced and finitely generated projective over $R' = \mathrm{End}_R(M)$.
This generalizes the statement above, as if $R$ is simple Artinian and $M$ is simple, then $R \cong M^{(n)}$ for some $n$, so $M$ is in particular a generator.
Here is an elementary proof of the implication “$M$ is a generator $\implies M$ is f.g. over $R'$”:
Proof.
Let $M$ be a right $R$-module, so it can be considered as a left $R'$-module. Being a generator means that $R$ is an epimorphic image of $M^{(n)}$ for some $n$, so there are elements $x_1, \dots, x_n \in M$ and homomorphisms $\varphi_1, \dots, \varphi_n \colon M \to R$ such that $\sum_i \varphi_i x_i = 1$. For any $x \in M$ let $\alpha_x$ be the unique homomorphism $R \to M$ with $\alpha_x(1) = x$. Then for all $x \in M$ we have $$ x = \alpha_x(1) = \alpha_x \left( \sum_i \varphi_i x_i \right) = \sum_i (\alpha_x \varphi_i) x_i. $$ Since $\alpha_x \varphi_i \in R'$ for all $i$, this means that $M$ is generated by $x_1, \dots, x_n$ as an $R'$-module. $\square$ |
How many ways $12$ persons may be divided into three groups of $4$ persons each? | We can also organize the count in a different way. First line up the people, say in alphabetical order, or in student number order, or by height.
The first person in the lineup chooses the $3$ people (from the remaining $11$) who will be on her team. Then the first person in the lineup who was not chosen chooses the $3$ people (from the remaining $7$) who will be on her team. The double-rejects make up the third team.
The first person to choose has $\binom{11}{3}$ choices. For every choice she makes, the second person to choose has $\binom{7}{3}$ choices, for a total of
$$\binom{11}{3}\binom{7}{3}.$$
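Both this count and the multinomial count $\frac{12!}{(4!)^3\,3!}$ can be verified by brute force (a small sketch):

```python
from itertools import combinations
from math import comb, factorial

# The "lineup" device: person 0 picks 3 teammates, then the lowest-numbered
# person not yet chosen picks 3 teammates; the remaining 4 form the third team.
people = set(range(12))
count = 0
for team1_rest in combinations(sorted(people - {0}), 3):
    rest = sorted(people - {0} - set(team1_rest))
    for team2_rest in combinations(rest[1:], 3):   # rest[0] is the next captain
        count += 1                                  # the third team is forced

formula = comb(11, 3) * comb(7, 3)
multinomial = factorial(12) // (factorial(4)**3 * factorial(3))
```

All three agree at $5775$.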
Remark: The lineup is a device to avoid multiple-counting the divisions into teams. The alternate (and structurally nicer) strategy is to do deliberate multiple counting, and take care of that at the end by a suitable division. |
What does $\Big(\frac{(x+1)^2}{2}\Big)^n-\Big(\frac{(x-1)^2}{2}\Big)^n$ equal to? | You can find the highest degree term in $(x+1)^{2n}-(x-1)^{2n}$, then divide it by $2^n$. Now apply the binomial theorem:
\begin{align}
(x+1)^{2n}&=x^{2n}+2nx^{2n-1}+\text{lower degree terms}\\[4px]
(x-1)^{2n}&=x^{2n}-2nx^{2n-1}+\text{lower degree terms}
\end{align}
Subtracting we get
$$
(x+1)^{2n}-(x-1)^{2n}=4nx^{2n-1}+\text{lower degree terms}
$$
So the required term is
$$
\frac{4n}{2^n}x^{2n-1}
$$ |
If $\varphi(x) = m$ has exactly two solutions is it possible that both solutions are even? | As I requested, here are the first twenty examples of numbers $m$ with two answers to $\phi(x)=m.$ I did not print out the cases where $m+1$ is prime; those are the majority of the $m$'s.
54 = 2 * 3^3 81 = 3^4 162 = 2 * 3^4
110 = 2 * 5 * 11 121 = 11^2 242 = 2 * 11^2
294 = 2 * 3 * 7^2 343 = 7^3 686 = 2 * 7^3
342 = 2 * 3^2 * 19 361 = 19^2 722 = 2 * 19^2
506 = 2 * 11 * 23 529 = 23^2 1058 = 2 * 23^2
580 = 2^2 * 5 * 29 649 = 11 * 59 1298 = 2 * 11 * 59
812 = 2^2 * 7 * 29 841 = 29^2 1682 = 2 * 29^2
930 = 2 * 3 * 5 * 31 961 = 31^2 1922 = 2 * 31^2
1144 = 2^3 * 11 * 13 1219 = 23 * 53 2438 = 2 * 23 * 53
1210 = 2 * 5 * 11^2 1331 = 11^3 2662 = 2 * 11^3
1456 = 2^4 * 7 * 13 1537 = 29 * 53 3074 = 2 * 29 * 53
1540 = 2^2 * 5 * 7 * 11 1633 = 23 * 71 3266 = 2 * 23 * 71
1660 = 2^2 * 5 * 83 1837 = 11 * 167 3674 = 2 * 11 * 167
1780 = 2^2 * 5 * 89 1969 = 11 * 179 3938 = 2 * 11 * 179
1804 = 2^2 * 11 * 41 1909 = 23 * 83 3818 = 2 * 23 * 83
1806 = 2 * 3 * 7 * 43 1849 = 43^2 3698 = 2 * 43^2
1936 = 2^4 * 11^2 2047 = 23 * 89 4094 = 2 * 23 * 89
2058 = 2 * 3 * 7^3 2401 = 7^4 4802 = 2 * 7^4
2162 = 2 * 23 * 47 2209 = 47^2 4418 = 2 * 47^2
2260 = 2^2 * 5 * 113 2497 = 11 * 227 4994 = 2 * 11 * 227
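These rows can be reproduced by inverting the totient by brute force; the sketch below uses the elementary bound $\phi(x) \ge \sqrt{x}$ for $x > 6$ to bound the search range (the function names are mine):

```python
def phi_table(limit):
    # Sieve-style computation of Euler's totient for 0..limit.
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                  # p is prime (still untouched)
            for k in range(p, limit + 1, p):
                phi[k] -= phi[k] // p
    return phi

def preimages(m):
    # phi(x) >= sqrt(x) for x > 6, so any solution satisfies x <= max(6, m*m).
    limit = max(6, m * m)
    phi = phi_table(limit)
    return [x for x in range(1, limit + 1) if phi[x] == m]
```

For the first two rows, `preimages(54)` returns `[81, 162]` and `preimages(110)` returns `[121, 242]`.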
In the examples where the odd $x$ is squarefree, it seems that the prime factors are always $2 \pmod 3.$ |
What is the difference between Newton interpolation and Lagrange interpolation? | What is great about Newton's interpolation is the fact that if you add new points you don't have to re-calculate all the coefficients (see the forward divided difference formula), which can be really useful! You'll find more details at the end of this article: https://en.wikipedia.org/wiki/Newton_polynomial |
Velocity and acceleration problem | For the first one, notice that:
$${\bf v}\cdot {\bf a}={\bf v}\cdot{d{\bf v}\over dt}={d\over dt}\left({1\over 2}{\bf v}\cdot {\bf v}\right)={d\over dt}\left({1\over 2}\|{\bf v}\|^2\right)$$
for the second:
$${d{\bf r}\over dt}={\bf r} \quad \Rightarrow \quad {\bf r}(t)={\bf r}(0)\, e^t={\bf r}_0\, e^t \\ \Rightarrow {\bf v}(t)={\bf a}(t)={\bf r}_0\, e^t$$
It is easy to see that the path of the motion is a ray containing ${\bf r}_0$. |
Compute expectation of stopped Brownian motion | Since $t \wedge \tau_a$ is a bounded stopping time, a direct application of the optional stopping theorem shows that $\mathbb{E}(B_{t \wedge \tau_a}) = \mathbb{E}(B_0)=0$, i.e. $\mathbb{E}(X_t)=0$ for all $t \geq 0$.
If you do not want to use the optional stopping theorem, then there are several possibilities to compute the expectation but as far as I can see the hint, which you were given, does not work. As
$$X_t = B_{t \wedge \tau_a} = a 1_{\{\tau_a \leq t\}} + B_t 1_{\{\tau_a>t\}}$$
we have
$$\mathbb{E}(X_t) = a \mathbb{P}(\tau_a \leq t) + \mathbb{E}(B_t 1_{\{\tau_a>t\}}). $$
Using that $\{\tau_a>t\} = \{M_t < a\}$ for $M_t := \sup_{s \leq t} B_s$, we get
$$\mathbb{E}(X_t) = a \mathbb{P}(\tau_a \leq t) + \mathbb{E}(B_t 1_{\{M_t < a\}}). \tag{1}$$
If we could replace the second term on the right-hand side by $\mathbb{E}(B_t 1_{\{B_t<a\}})$, then we could differentiate this identity with respect to $t$ to get the equation which you were given as a hint. However, I don't see why it should be possible to replace the second term. Here are two alternative approaches:
Approach 1 via strong Markov property
As $\mathbb{E}(B_t)=0$ for all $t \geq 0$, we have
$$\mathbb{E}(B_t 1_{\{M_t<a\}}) = - \mathbb{E}(B_t 1_{\{M_t \geq a\}})$$
and so
$$\mathbb{E}(B_t 1_{\{M_t<a\}}) = - \mathbb{E}(B_t 1_{\{\tau_a \leq t\}}).$$
Using the tower property of conditional expectation and the strong Markov property of Brownian motion, we get
$$\begin{align*} \mathbb{E}(B_t 1_{\{M_t<a\}})&= - \mathbb{E} \bigg( \mathbb{E}(B_t 1_{\{\tau_a \leq t\}} \mid \mathcal{F}_{\tau}) \bigg) = - \mathbb{E} \bigg( 1_{\{\tau_a \leq t\}} \mathbb{E}^{B_{\tau_a}}(B_{t-s}) \big|_{s=\tau_a} \bigg). \tag{2} \end{align*}$$
Since $$\mathbb{E}^{B_{\tau_a}}(B_r) =\mathbb{E}^a(B_r) = \mathbb{E}(a+B_r) = a$$
for any $r \geq 0$, we conclude from $(2)$ that
$$\mathbb{E}(B_t 1_{\{M_t<a\}}) =- a \mathbb{P}(\tau_a \leq t). $$
Plugging this into $(1)$ we infer that $\mathbb{E}(X_t)=0$.
Approach 2 via joint density of $(B_t,M_t)$
It is known that the joint density of $(B_t,M_t)$ equals
$$q_t(x,y) = 1_{\{x<y\}} \frac{2(2y-x)}{\sqrt{2\pi t^3}} \exp \left(- \frac{(2y-x)^2}{2t} \right),$$
and therefore the second term on the right-hand side of $(1)$ equals
$$\begin{align*} \mathbb{E}(B_t 1_{\{M_t<a\}}) &=\frac{2}{\sqrt{2\pi t^3}} \int_{-\infty}^a \int_{-\infty}^y x (2y-x) \exp \left( - \frac{(2y-x)^2}{2t} \right) \, dx \, dy \\ &\stackrel{\text{Fubini}}{=} \frac{2}{\sqrt{2\pi t^3}} \int_{-\infty}^a x \int_{x}^a (2y-x) \exp \left( - \frac{(2y-x)^2}{2t} \right) \, dy \, dx. \end{align*}$$
The inner integral can be computed easily and we get
$$\begin{align*} \mathbb{E}(B_t 1_{\{M_t<a\}}) &= \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^a x \left( \exp \left[ - \frac{x^2}{2t} \right] - \exp \left[ - \frac{(2a-x)^2}{2t} \right] \right) \, dx. \end{align*}$$
If we denote by $p_t$ the density of $B_t$ then we obtain
$$\begin{align*} \mathbb{E}(B_t 1_{\{M_t<a\}}) &= \int_{-\infty}^a x p_t(x) \, dx - 2a \int_{-\infty}^a p_t(2a-x) \, dx + \int_{-\infty}^a (2a-x) p_t(2a-x) \, dx. \end{align*}$$
Performing a simple change of variables ($z=2a-x$) we conclude that
$$\begin{align*} \mathbb{E}(B_t 1_{\{M_t<a\}}) &= \underbrace{\int_{-\infty}^{\infty} x p_t(x) \, dx}_{0} - 2a \int_{a}^{\infty} p_t(z) \, dz = - a \mathbb{P}(|B_t| > a). \end{align*}$$
By the reflection principle, we have $\mathbb{P}(|B_t| \geq a) = \mathbb{P}(M_t \geq a)$, and therefore our computation entails that the terms on the right-hand side of $(1)$ cancel each other, i.e. $\mathbb{E}(X_t)=0$. |
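As a sanity check (not part of the proof), one can simulate discretized Brownian paths, freeze each path at the first grid point where it reaches the level $a$, and verify that the empirical mean of $X_t = B_{t \wedge \tau_a}$ is close to $0$. Note that for the discretized walk this mean is exactly $0$ in expectation, by optional stopping for discrete-time martingales; the parameters below are arbitrary choices for illustration.

```python
import math
import random

random.seed(0)
a, t = 1.0, 1.0
n_steps, n_paths = 200, 5000
dt = t / n_steps
sigma = math.sqrt(dt)

total = 0.0
for _ in range(n_paths):
    b = 0.0
    for _ in range(n_steps):
        b += random.gauss(0.0, sigma)
        if b >= a:            # freeze the path at the first grid point above a
            break
    total += b

mean = total / n_paths
print(mean)                   # close to 0, up to Monte Carlo noise
```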
Is the function continuous and differentiable at $x=-2$? | make a change of variable $x = -2 + h.$ now look at $$\frac{x^2 + 5x + 7}{x+3} = \frac{4 -4h - 10+10h + 7 + 4h^2}{1+h} = (1+6h+\cdots)(1 - h + \cdots) = 1+5h+\cdots\\
-x-e^{-x}+e^2 - 1=2-h-e^2(1-h+\cdots)+e^2 - 1=1+(e^2 - 1)h+\cdots$$
you can see that the graph of $y=f(x)$ at $(-2, 1)$ has slopes $5$ on the left and $e^2 - 1$ on the right of $x = -2.$ therefore it is not differentiable at $x = -2.$
Relation between hypergeometric functions? | First, you need to use Barnes integral representation
$${}_2F_1(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\frac{1}{2\pi i}
\int_{-i\infty}^{+i\infty}\frac{\Gamma(a+s)\Gamma(b+s)\Gamma(-s)}{\Gamma(c+s)}(-z)^sds. $$
The Gauss hypergeometric function ${}_2F_1(a,b;c;z)$ is usually defined by a power series that converges only for $|z|<1$, but you have to extend the definition on the whole complex plane such that the function can be evaluated at both $1-z$ and $1/(1-z)$.
Using the integral representation,
\begin{align*}
\ _2F_1(1,-a;1-a;{1-z}) &=
\frac{\Gamma(1-a)}{\Gamma(1)\Gamma(-a)}\frac{1}{2\pi i}
\int_{-i\infty}^{i\infty} \frac{\Gamma(1+s)\Gamma(-a+s)\Gamma(-s)}{\Gamma(1-a+s)}(z-1)^sds \cr
&=\frac{(-a)}{2\pi i}\int_{-i\infty}^{+i\infty}\frac{\Gamma(1+s)\Gamma(-s)}{s-a}(z-1)^sds,
\end{align*}
and
\begin{align*}
\ _2F_1\left(1,a;1+a;\frac{1}{1-z}\right) &=
\frac{\Gamma(1+a)}{\Gamma(1)\Gamma(a)}\frac{1}{2\pi i}
\int_{-i\infty}^{i\infty} \frac{\Gamma(1+s)\Gamma(a+s)\Gamma(-s)}{\Gamma(1+a+s)}(z-1)^{-s}ds \cr
&=\frac{a}{2\pi i}\int_{-i\infty}^{+i\infty}\frac{\Gamma(1+s)\Gamma(-s)}{s+a}(z-1)^{-s}ds.
\end{align*}
In the second relation, change $s$ to $-s$, we get (keeping track of all the minus signs)
$$
\ _2F_1\left(1,a;1+a;\frac{1}{1-z}\right) =
\frac{a}{2\pi i}\int_{-i\infty}^{+i\infty}\frac{\Gamma(1-s)\Gamma(s)}{a-s}(z-1)^{s}ds.
$$
Finally using the relation
$$\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin \pi s}
\quad \mbox{and}\quad \Gamma(-s)\Gamma(1+s)=-\frac{\pi}{\sin \pi(-s)} = -\frac{\pi}{\sin \pi s},$$
we get
$$\ _2F_1(1,-a;1-a;{1-z}) = \ _2F_1\left(1,a;1+a;\frac{1}{1-z}\right)$$
and similar the equivalence for the rest two (or just change $a$ to $-a$). |
transform a formula for doubling time | It is using logarithm base $e$. That didn't matter in the first formula, because the difference was cancelled by the two logarithms.
Two solutions: 1) there might be a logarithm spelt 'log10' instead of 'log'
Or, 2) you could use $\exp(\log(2)/p)-1$ |
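To see that the closed form really inverts the doubling-time relation, here is a tiny numeric check (the doubling time $p = 7$ is an arbitrary illustrative value): if $r = \exp(\log(2)/p) - 1$ is the per-period growth rate, then $(1+r)^p$ recovers $2$.

```python
import math

p = 7.0                              # hypothetical doubling time (in periods)
r = math.exp(math.log(2) / p) - 1    # per-period growth rate
print((1 + r) ** p)                  # recovers 2.0 (up to rounding)
```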
Confusion about the definition of self adjoint and formally self-adjoint | I think you are confusing the framework for studying a symmetric operator. The setting is
$$
T : \mathcal{D}(T)\subseteq X\rightarrow X.
$$
If you consider $T : \mathcal{D}(T) \rightarrow X$ where you put the graph norm on $\mathcal{D}(T)$, then $T$ always becomes bounded, but that's not the framework for a symmetric operator; you can't even take the inner product of $Tx$ with $y$ in this setting because $x$, $Tx$ no longer lie in the same space. And that's what you're doing when you consider $P : H_{s}(M)\rightarrow H_{s-d}(M)$. The proper setting is that you consider $\mathcal{D}(P)=H_{s}(M)$ not in its native norm, but as a subspace of $H_{s-d}(M)$. Then you can form the inner product of $x$ and $Px$ for $x\in\mathcal{D}(P)$.
Any time you have a closed operator $T : \mathcal{D}(T)\subseteq X\rightarrow X$, you can always define $T$ to become bounded by defining a new space $Y=\mathcal{D}(T)$ endowed with the graph norm $\|y\|_{Y}=\|Tx\|_{X}+\|x\|_{X}$. The new map $T : Y\rightarrow X$ now becomes continuous because
$$
\|Ty\|_{X} \le \|Ty\|_{X}+\|y\|_{X}= \|y\|_{Y},
$$
but the map is now from $Y$ to $X$ instead of from a subspace of $X$ to $X$.
This becomes particularly confusing in $L^{2}$ Sobolev spaces, where the norms are essentially defined in terms of the domains of fractional powers of the Laplacian. You're tempted to want to do exactly what you did. |
Distance between two coordinates | The distance is $\sqrt{23^2+29^2} \approx 37$ according to Pythagoras, the $x$-distance is $23$ and $y$-distance is $29$ as you concluded. So did you use Pythagoras correctly? |
general vector solution given two others | $c$ is perpendicular to $a$ and to $b$. Then, the first step is to find the plane of $a$ and $b$. For that, we need two vectors in the plane. One of them is obviously $a$, the other is $a\times c$. Why this choice? First, $a\times c$ is perpendicular to $c$, so it must be in the plane. Secondly, it is perpendicular to $a$. If you have two vectors in the plane, any other vector (in particular $b$) can be written as a linear combination of the two:$$b=\alpha a+\beta (a\times c)$$Here $\alpha$ and $\beta$ are some arbitrary numbers. However, you also know that $a\times b=c$, so $$a\times(\alpha a+\beta(a\times c))=c$$
By expanding the outer parenthesis, and using the properties of the cross product, we have $a\times a=0$ and, by the BAC-CAB rule $a\times(b\times c)=b(a\cdot c)-c(a\cdot b)$, $$a\times(a\times c)=(ac)a-a^2c=-a^2c.$$ I've used here that $a$ and $c$ are perpendicular, so $ac=0$. Then you get $-\beta a^2c=c$ or $\beta=-1/a^2$. So your final solution for $b$ is $$b=\alpha a-\frac 1{a^2}a\times c$$
Probability of result | Hint:
You can find this using the binomial distribution: you want to find $P(X=4)+P(X=5)$.
You have to set the number of trials n=5 and p=0.7.
$X\sim Bin (n,p)$
$P(X=4)= \binom{5}{4}0.7^4(1-0.7)^1$
$P(X=5)= \binom{5}{5}0.7^5(1-0.7)^0$
Hope this helps |
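A quick numeric version of the two terms above, with $n=5$ trials and success probability $p=0.7$:

```python
from math import comb

n, p = 5, 0.7
p4 = comb(n, 4) * p ** 4 * (1 - p) ** 1
p5 = comb(n, 5) * p ** 5 * (1 - p) ** 0
print(p4, p5, p4 + p5)   # ≈ 0.36015, 0.16807, 0.52822
```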
Can books be arranged into bags? | Adding an answer following @Irvan's comment above:
The bin packing problem ("Given $n$ items of sizes $d_1,...d_n$ and $m$ bins with capacity $c_1,...,c_m$ store all the items using the smallest number of bins.") could be reduced to this problem in polynomial time.
If we had the algorithms in the question, we could map the items to books and bins to bags, allowing each book to enter every bag. And the question of smallest number of bins is answered by running the algorithm a maximum of $m$ times (sorting the bins in decreasing order and adding a bin at every run).
Thus, the problem in question is NP-hard. |
Transitivity of almost surely | $P(B) = P(B|A)P(A) + P(B|\tilde A)P( \tilde A)$
but if you know that $P(A)=1$ then $P(\tilde A)=0$ and so this simplifies to
$P(B) = P(B|A) = 1$ |
Meaning of "circumference" | You can draw a circle with a circumference that goes through the three vertices of the triangle, it means this circle. |
How does $ax^2 − 2x^2 − a^2x − 2a^2 $ result in $ |ax||x − a| + 2|x^2 − a^2 | $ by the triangle inequality? | Yes, it is by the triangle inequality. But you also made a mistake in the expression in the title of your question. After subtracting those two fractions, you will have
$$ax^2−2x^2−\color{red}{(}a^2x−2a^2\color{red}{)}=ax^2−2x^2−a^2x\color{red}{+}2a^2.$$
The lesson here is: use parentheses for grouping, because you may need to distribute something!
Then, for this epsilon-delta proof one needs to estimate $|ax^2−2x^2−a^2x+2a^2|$. Note that
$$|ax^2−2x^2−a^2x+2a^2|=|(ax^2−a^2x)−(2x^2-2a^2)|\le|ax^2−a^2x|+|2x^2-2a^2|$$
by the triangle inequality. |
chain rule for multi-variables | If I understood your problem correctly, then you have to first plug $x = 20 + t^{\frac{2}{3}}$ and $y = t\ln(1 + t)$ into $D(x,y)$ and then calculate it's derivative with respect to $t$ at the point $t = 8$. Here is a possible solution:
First we define
\begin{equation}
\begin{split}
\tilde D(t) := D(20 + t^{\frac{2}{3}}, t\ln(1+t)) &= \frac{1}{125}(20 + t^{\frac{2}{3}})\text{e}^\frac{(20t + t^{\frac{5}{3}})\ln(1 + t)}{1000}\\ &= \frac{1}{125}(20\text{e}^\frac{(20t + t^{\frac{5}{3}})\ln(1 + t)}{1000} + t^{\frac{2}{3}}\text{e}^\frac{(20t + t^{\frac{5}{3}})\ln(1 + t)}{1000})
\end{split}
\end{equation}
Next we want to calculate it's derivative. To keep things simple, we set
\begin{equation}
g(t) := \text{e}^\frac{(20t + t^{\frac{5}{3}})\ln(1 + t)}{1000}
\end{equation}
Because
\begin{equation}
\frac{\text{d}}{\text{d}t} \frac{(20t + t^{\frac{5}{3}})\ln(1 + t)}{1000} = \frac{1}{1000}((20 + \frac{5}{3}t^{\frac{2}{3}})\ln(1 + t) + \frac{20t + t^{\frac{5}{3}}}{1+t})
\end{equation}
we have
\begin{equation}
g'(t) =\frac{1}{1000}((20 + \frac{5}{3}t^{\frac{2}{3}})\ln(1 + t) + \frac{20t + t^{\frac{5}{3}}}{1+t})g(t)
\end{equation}
Finally, we get
\begin{equation}
\tilde D'(t) = \frac{1}{125}(20g'(t) + \frac{2}{3}t^{-\frac{1}{3}}g(t) + t^{\frac{2}{3}}g'(t))= \frac{1}{125}((20 + t^{\frac{2}{3}})g'(t) + \frac{2}{3}t^{-\frac{1}{3}}g(t))
\end{equation}
All we have to do now is to evaluate $\tilde D'(t)$ at $t = 8$, and according to Mathematica this is
\begin{equation}
\tilde D'(8) = 0.0274655 \approx 0.027
\end{equation} |
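One can double-check the hand computation with a finite difference. The composite function below is taken directly from the definition of $\tilde D$ above, i.e. $\tilde D(t) = \frac{1}{125}(20+t^{2/3})\,e^{(20t+t^{5/3})\ln(1+t)/1000}$; a central difference at $t=8$ should reproduce the value $\approx 0.0275$.

```python
import math

def D_tilde(t):
    # composite D(20 + t^(2/3), t*ln(1+t)) as defined above
    x = 20 + t ** (2 / 3)
    return x * math.exp((20 * t + t ** (5 / 3)) * math.log(1 + t) / 1000) / 125

h = 1e-6
deriv = (D_tilde(8 + h) - D_tilde(8 - h)) / (2 * h)   # central difference
print(deriv)   # ≈ 0.0275
```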
Constructing an equilateral triangle of a given side length inscribed in a given triangle | $\mathbf{1}.$ Notations, definitions and classifications used in our answer
The given scalene triangle is denoted by $ABC$. Its sides $a, b,$ and $c$ are sized according to $a > b > c$, and, hence its vertex angles $A, B,$ and $ C$ obey the inequality $\measuredangle A > \measuredangle B > \measuredangle C$, which implies that $\measuredangle A > 60^o$ as well. $\Delta$ stands for the area of $ABC$.
The sidelength of the inscribed equilateral triangle is denoted by $d$. The sidelength of the largest of the inscribable equilateral triangles is $d_{max}$, whereas that of the smallest is $d_{min}$. We denote the smallest and largest inscribed equilateral triangles as $DEF$ and $XYZ$ respectively. In a similar vein, $PQR$ and $STU$ are the sought pair of inscribable equilateral triangle with sidelength $d$.
For the ease of elucidating the construction, we discriminate between three types of triangles as depicted in $\mathrm{Fig.1}$. If the largest vertex angle of an obtuse triangles (i.e. $\measuredangle A$) is greater than or equal to $120^o$, we call it a triangle of Type-I. Type-II contains acute and obtuse triangles having just one angle (i.e. $\measuredangle A$), which is greater than $60^o$ and less than $120^o$. Acute and obtuse triangles having only one vertex angle (i.e. $\measuredangle C$) less than $60^o$ together with all equilateral triangles make up the group named Type-III.
$\mathbf{2}.$ Construction
The construction described below, in which we do a vertex-chasing, is, so-to-speak, a geometrical iteration, where the outcome at the end of each iteration is checked to see whether it has achieved the desired accuracy. This procedure makes sure that the points found in succession on the sides of $ABC$ converge very fast to the vertices of the coveted inscribed equilateral triangle. Because of its iterative nature, a pair of steady hands, a pair of sharp eyes, and a very sharp pencil are essential to achieve acceptably accurate outcome.
However, before attempting to construct an inscribed equilateral triangle with the given sidelength, we should make sure that such a triangle or triangles actually exist. Otherwise, we could find ourselves chasing wild geese instead of vertices. For that matter, we need to carry out two additional constructions beforehand, one to determine the smallest inscribable equilateral triangle, the other to find the largest. Neither of these constructions needs iterations, and therefore the exact locations of the vertices of the sought equilateral triangles can be determined directly.
$\mathbf{3}.$ Construction of the smallest inscribable equilateral triangle of the given triangle $ABC$
If you are dealing with a triangle of Type-I or Type-II, draw the angle bisector of the largest vertex angle (i.e. $\measuredangle A$) as shown in $\mathrm{Fig.3.1}$, so that it meets the longest side (i.e. $BC$) at $U$. Point $U$ is the vertex of the inscribed equilateral triangle that lies on the side $BC$ of $ABC$. If $ABC$ is a Type-III triangle, draw the angle bisector of the smallest vertex angle (i.e. $\measuredangle C$) to intersect the shortest side (i.e. $AB$) at $U$ (see $\mathrm{Fig.3.2}$). As in the previous case, point $U$ is one of the vertices of the inscribed equilateral triangle, but now it lies on the side $AB$ of $ABC$. Please note that, regardless of type of the triangle, if its second largest angle is equal to $60^o$ (i.e. $\measuredangle B = 60^o$), the angle to be bisected can be either $\measuredangle A$ or $\measuredangle C$ (see $\mathrm{Fig.3.3}$).
To complete the construction, draw two lines flanking the drawn angle bisector, so that each of them makes an angle of $30^o$ with it at $U$. Their internal intersection points with the nearest sides of $ABC$ mark the other two vertices of the inscribed equilateral triangle.
A triangle, whether it is scalene, isosceles, or equilateral, has only one smallest inscribable equilateral triangle. The two triangles share their incenter.
It is also possible to determine the value of $d_{min}$ numerically using the appropriate equation given below.
$$d_{min}=\frac{2\Delta}{\left(b+c\right) \sin\left(30^o+\frac{A}{2}\right)} \tag{for Type-I & II triangles}$$
$$d_{min}=\frac{2\Delta}{\left(a+b\right) \sin\left(30^o+\frac{C}{2}\right)}\tag{for Type-III triangles}$$
$\mathbf{4}.$ Construction of the largest inscribable equilateral triangle of the given triangle $ABC$
If $ABC$ is a Type-I triangle, its vertex $A$, which has the largest angle, coincides with one of the vertices (i.e. $Z$) of its largest inscribable equilateral triangle. One side of the inscribed triangle of this type of triangle (i.e. $YZ$) always lies on its side $CA$. Therefore, to obtain the vertex lying on the side $BC$, draw a line, which makes an angle $60^o$ with the side $CA$, through the vertex $A$ to meet the side $BC$ at $X$ (see $\mathrm{Fig.4.1}$). Since we now know two vertices of the sought inscribed equilateral triangle, its third vertex $Y$ on the side $CA$ can be easily found.
If $ABC$ is a Type-II triangle, as in the case of Type-I triangles, one of the vertices of the largest inscribable equilateral triangle $Y$ coincides with its vertex $A$, the vertex with the largest angle. However, this type of triangles have one of its sides (i.e. $YZ$) lying on the side $AB$ of $ABC$. The vertex lying on the side $BC$ can be pinpointed by drawing a line, which makes an angle $60^o$ with the side $AB$, through the vertex $A$ to meet the side $BC$ at $X$ (see $\mathrm{Fig.4.2}$).
If the triangle $ABC$ is of Type-III, its vertex $B$, where the second largest vertex angle is, harbours one of the vertices of the largest inscribable equilateral triangle, i.e. $Z$. One side of the inscribed triangle of this type of triangle (i.e. $ZX$) always lies on its side $BC$. To locate the vertex lying on the side $CA$, draw a line that makes an angle $60^o$ with the side $BC$ and goes through the vertex $B$ to meet the side $CA$ at $Y$ (see $\mathrm{Fig.4.3}$).
There are a few noteworthy special cases. All triangles, which has a vertex angle equal to $120^o$ (i.e. $\measuredangle A = 120^o$), have two identical largest inscribed equilateral triangles, which do not overlap as shown in $\mathrm{Fig.4.4}a$. If the second largest angle of the given triangle is equal to $60^o$ (i.e. $\measuredangle B = 60^o$), the given triangle and its largest inscribable equilateral triangle share the shortest side (i.e. $AB$) as depicted in $\mathrm{Fig.4.4}b$. All isosceles triangles have two partially overlapping identical largest inscribed equilateral triangles (see $\mathrm{Fig.4.4}c$). An equilateral triangle and its largest inscribed equilateral triangle are one and the selfsame (see $\mathrm{Fig.4.4}d$). All triangles other than isosceles triangles have a unique largest inscribed equilateral triangle.
Following equations can be used to calculate the value of $d_{max}$.
$$d_{max}=\frac{2\Delta}{a \sin\left(60^o+C\right)} \tag{ for Type-I triangles }$$
$$d_{max}=\frac{2\Delta}{a \sin\left(60^o+B\right)} \tag{ for Type-II triangles }$$
$$d_{max}=\frac{2\Delta}{b \sin\left(60^o+C\right)} \tag{ for Type-III triangles}$$
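Taking the formulas above at face value, here is a quick numeric illustration for a sample Type-II triangle (angles $80^o$, $55^o$, $45^o$, chosen arbitrarily, sides normalized by the law of sines); we check the expected ordering $0 < d_{min} < d_{max}$.

```python
import math

# sample Type-II triangle: only A exceeds 60 degrees, and A < 120 degrees
A, B, C = map(math.radians, (80, 55, 45))
a, b, c = math.sin(A), math.sin(B), math.sin(C)   # law of sines with 2R = 1
area = 0.5 * b * c * math.sin(A)

# formulas stated above for Type-I/II (d_min) and Type-II (d_max)
d_min = 2 * area / ((b + c) * math.sin(math.radians(30) + A / 2))
d_max = 2 * area / (a * math.sin(math.radians(60) + B))
print(d_min, d_max)
assert 0 < d_min < d_max
```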
$\mathbf{5}.$ Construction of inscribed equilateral triangles with a given sidelength $d$
Once you know for sure that there are inscribed equilateral triangles with a given sidelength, you can follow the steps outlined below to construct them. We hope that the series of diagrams from $\mathrm{Fig.5.1}$ to $\mathrm{Fig.5.4}$ would help you to comprehend the description.
Draw the angle bisector of the largest angle $\measuredangle A$ of the given triangle $ABC$ to meet its largest side $BC$ at $D$. As shown in $\mathrm{Fig.5.1}$, draw a circle or an arc with $D$ as the center and $d$ as the radius to cut the sides $CA$ and $AB$ at $Q$ and $U$ respectively, each of which serves as the educated guess to start the geometrical iteration leading us to one of the sought pair of inscribable equilateral triangles with sidelength $d$, i.e. either $PQR$ or $STU$.
Obviously, to construct $PQR$, we need to consider the point $Q$. As shown in $\mathrm{Fig.5.2}$, we draw a circle with $Q$ as the center and $d$ as the radius to cut the side $AB$ at $R$. Next, draw a circle with $R$ as the center and $d$ as the radius to cut the side $BC$ at $P$. If you measure the sides of the triangle $PQR$ after the end of this first iteration, you will find that $QR = RP = d$, but $PQ ≠ d$. As a consequence, we have to perform further iterations as follows. Draw a circle with $P$ as the center and $d$ as the radius to intersect the side $CA$ and move the point $Q$ to this point of intersection. Now, you may find that $QR ≠ d$. Therefore, we proceed along by drawing a circle with $Q$ as the center and $d$ as the radius to intersect the side $AB$. This point of intersection is the new location of $R$. Now, you have to measure $RP$ to check whether it is exactly equal or almost equal to $d$. If you are satisfied with the length of $RP$, you can stop the iteration, because you have found one of the two inscribable equilateral triangles to a certain degree of accuracy. However, if you want to increase the accuracy of the construction, you have to iterate further to improve the positions of the three vertices $P$, $Q$, and $R$ (e.g. $\mathrm{Fig.5.3}$). To find the other inscribable equilateral triangle $STU$ (e.g. $\mathrm{Fig.5.4}$), a similar series of iterations starting from the point $U$ in $\mathrm{Fig.5.1}$ should be carried out.
$\mathbf{6}.$ Points to ponder
You may have already noticed that we have not provided any proof of what we have stated in our answer. All our deductions set out above are evidence-based, meaning our inferences came only through the observations made during a thorough analysis of the problem. If you find mistakes, errors, or counter-evidence, please post them. If we cannot rectify errors or are unable to argue against counter-evidence, we are ready to take this post down immediately. |
Another upper bound for the Stirling numbers of the first kind | I think this proof by induction works:
\begin{align}
{n \brack n-k}
&= (n-1){n-1 \brack n-k}+{n-1\brack n-k-1}
\\&\le (n-1)\frac{(n-1)^{k-1}}{2^{k-1}}\binom{n-2}{k-1}+\frac{(n-1)^k}{2^k}\binom{n-2}{k}
\\&=\frac{(n-1)^k}{2^k}\binom{n-1}{k}\left[\frac{2k}{n-1}+\frac{n-1-k}{n-1}\right]
\\&=\frac{n^k}{2^k}\binom{n-1}{k}\cdot {\frac{(n-1)^k}{\color{blue}{n^k}}\cdot\frac{n+k-1}{n-1}}
\\&\le\frac{n^k}{2^k}\binom{n-1}{k}\cdot {\frac{(n-1)^k}{\color{blue}{(n-1)^k+k(n-1)^{k-1}}}\cdot\frac{n+k-1}{n-1}}
\\&=\frac{n^k}{2^k}\binom{n-1}{k}
\end{align}
To prove that second inequality, expand $n^k=((n-1)+1)^k$ with the binomial theorem, and only keep the first two terms. |
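A brute-force check of the bound ${n \brack n-k} \le \frac{n^k}{2^k}\binom{n-1}{k}$ for small $n$, using the same recurrence the induction is based on:

```python
from math import comb

N = 12
s = [[0] * (N + 1) for _ in range(N + 1)]   # s[n][j]: unsigned Stirling numbers, 1st kind
s[0][0] = 1
for n in range(1, N + 1):
    for j in range(1, n + 1):
        s[n][j] = (n - 1) * s[n - 1][j] + s[n - 1][j - 1]

assert s[5][3] == 35                        # spot check of the recurrence
for n in range(1, N + 1):
    for k in range(n):
        assert s[n][n - k] <= (n / 2) ** k * comb(n - 1, k), (n, k)
print("bound holds for all n <=", N)
```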
Singular homology of cofinite topology space | I adapted the answer to this question (which deals with the case of the fundamental group). The crucial part is that a map into $X$ is continuous iff the preimage of any point is closed (directly from the definition). The following works as soon as $X$ has at least the cardinality of the continuum (so in particular when $X = \mathbf{CP}^1$):
First, $X$ is path-connected: let $x,y \in X$ with $x \neq y$, and let $\gamma : [0,1] \to X$ be any injection such that $\gamma(0) = x$ and $\gamma(1) = y$ (these exist since $X$ has cardinality at least $|[0,1]|$). The preimage of any point under $\gamma$ is either a singleton or empty, both of which are closed in $[0,1]$, hence $\gamma$ is continuous and $X$ is path-connected.
Now pick any base point $x_0 \in X$. I will show that $\pi_n(X,x_0) = 0$ for all $n > 0$, so that $X$ is weakly contractible and therefore has the singular homology of a point. Let $f : S^n \to X$ be a continuous map such that $f(u_0) = x_0$ (where $u_0$ is the base point of $S^n$). Then define a homotopy $H : S^n \times [0,1] \to X$ by:
$$H(u,t) = \begin{cases}
f(u) & t = 0 \\
x_0 & t = 1 \text{ or } u = u_0 \\
g(u,t) & \text{otherwise}
\end{cases}$$
where $g$ is any injection from $(S^n \setminus \{u_0\}) \times (0,1)$ into $X$ (again by the cardinality assumption on $X$). Then an easy check shows that the preimage under $H$ of any point of $X$ is closed in $S^n \times [0,1]$: it is the union of a closed slice of $S^n \times \{0\}$, at most one point coming from $g$, and, for the point $x_0$ itself, the closed set $S^n \times \{1\} \cup \{u_0\} \times [0,1]$. Hence $H$ is continuous, and $f$ is homotopic to the constant map (as based maps).
Prove that $(-1)^n - n$ is divergent to $-\infty$ | Guide:
$$(-1)^n-n \leq 1-n$$
Just show that $1-n \to -\infty $ as $n\to \infty$.
Remark:
We can say that the sequence is divergent, but we can't conclude it goes to $-\infty$ just by looking at two subsequences.
How to compute norm of linear functional | It is a bad idea to use $x \leq 1$ in getting an upper bound for $\|F\|$ . Let $q$ be the index conjugate to $p$ (i.e. $q=\frac p {p-1}$). Then $|F(f)| \leq \|f\|_p (\int x^{q})^{1/q}$ by Holder's inequality. Hence $\|F\| \leq (\int_o^{1} x^{q}dx)^{1/q}=\frac 1 {(q+1)^{1/q}}$. This is actually an equality. To see this you have to use the condition for equality in Holder's inequality. Recall that for $f,g \geq 0$ the conidtion for $\int fg =(\int f^{p})^{1/p} (\int g^{q})^{1/q}$ is $f^{p}= cg^{q}$. So take $f(x)=x^{q/p}$. Compute $\|f\|$ and check that $|F(\frac f {\|f\|})| =\frac 1 {(q+1)^{1/q}}$. This proves that $\|F\| =\frac 1 {(q+1)^{1/q}}$.
PS
This is precisely the argument used to prove that the dual of $L^{p}$ is $L^{q}$. So we have the following general fact: If we define $F(f)=\int_0^{1} f(x)g(x)d\lambda (x)$ for all $f \in L^{p}$ then the norm of $F$ is equal to $(\int_0^{1}|g(x)|^{q}d\lambda(x))^{1/q}$. |
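A quick numeric illustration of the equality case, for the concrete choice $p=3$ (so $q=3/2$): with the extremal $f(x)=x^{q/p}$, the ratio $|F(f)|/\|f\|_p$ should match $1/(q+1)^{1/q}$. The midpoint-rule discretization below is just one convenient way to approximate the integrals.

```python
p = 3.0
q = p / (p - 1)                                 # conjugate index, q = 3/2

N = 100000
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]          # midpoint rule on (0, 1)

f = [x ** (q / p) for x in xs]                  # extremal function f(x) = x^(q/p)
F_f = sum(x * fx for x, fx in zip(xs, f)) * h   # F(f) = ∫ x f(x) dx
norm_f = (sum(fx ** p for fx in f) * h) ** (1 / p)

print(F_f / norm_f, 1 / (q + 1) ** (1 / q))     # the two numbers agree
```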
Why the faithful $D$-submodule of $F$ which is finitely generated as $D$ module, $M=M^2$? | Just note that $M$ is none other than the ring $D[u]$ itself, since any higher power of $u$ can be reduced to an element of $M$ using $f(u)=0$. Thus $M$ is closed under multiplication. |
Approximating $\int_0^1 x\sin xdx$ | With $\sin x>x-\frac {x^3}{3!}$, we have:
\begin{align}
\int _0^1x\sin x \, dx&>\int _0^1x^2-\frac {x^4}{6}\, dx\\
&=\big[\frac {x^3}3-\frac {x^5}{30}\big]_0^1\\
&=0.3
\end{align} |
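As a cross-check, the integral can be done exactly: an antiderivative of $x\sin x$ is $\sin x - x\cos x$, so $\int_0^1 x\sin x\,dx = \sin(1)-\cos(1)$, which is indeed just above the bound $0.3$:

```python
import math

exact = math.sin(1) - math.cos(1)   # d/dx (sin x - x cos x) = x sin x
print(exact)                        # ≈ 0.30117, which exceeds 0.3
```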
How to prove MAE is convex? | The function is not differentiable. Fixing a particular $i$ and let $\lambda \in [0,1]$, we can then use triangle inequality to conclude that
\begin{align}
f(\lambda w_{j,1} + (1-\lambda) w_{j,2}) &= \frac{|x_{i,j}(\lambda w_{j,1} + (1-\lambda) w_{j,2})-y_i|}{n} \\
&= \frac{|\lambda (x_{i,j}w_{j,1}-y_i) + (1-\lambda) (x_{i,j}w_{j,2}-y_i)|}{n} \\
&\le \frac{\lambda| (x_{i,j}w_{j,1}-y_i)| + (1-\lambda) |(x_{i,j}w_{j,2}-y_i)|}{n} \\
&= \lambda f(w_{j,1})+(1-\lambda) f(w_{j,2})
\end{align}
Also, note that sum of convex function is convex. |
Is the function $P_m(x) = 2^mx + 3^mx^2 + ... +p_n^mx^n +...$ mathematically useful? Does it converge to known constants for rational m and x? | This first part will be to answer the convergence questions. Using an upper bound from the Prime Number Theorem and Rosser's Theorem, namely $p_n < n\ln n +n\ln\ln n$ for $n \geq 6$, we can set an easy to work with upper bound $p_n<n^2$ just to get some basic concepts out of the way. Therefore, we can consider the series
$$\sum_{n=1}^\infty {n^{2m}}{x^n}$$
to be greater than your prime series. This series converges for $-1<x<1$ regardless of what $m$ is. At $x=-1$, we need $m<0$ for the series to converge by the Alternating Series Test. The last case to consider, $x=1$, is the hardest, because we can't really use our upper bound of $p_n<n^2$ anymore. It also gets a little delicate, because a polynomial function will always eventually be greater than a logarithmic function, but a logarithmic function will always eventually be greater than a constant function. In particular, for any $\epsilon>0$, the function $x^{1+\epsilon}$ will eventually be an upper bound to $x\ln x$, no matter how small the $\epsilon$. We can use the series upper bound $\sum_{n=1}^\infty {n^{(1+\epsilon)m}}$. This series will only converge if $(1+\epsilon)m<-1$, and, since $\epsilon>0$ can be taken arbitrarily small, this inequality can be satisfied exactly when $m<-1$. In summary, your series should converge for all $m$ when $-1<x<1$, for $m<0$ when $x=-1$, and for $m<-1$ when $x=1$.
This part will deal with the convergence of specific values. Basically, all power series that can converge can converge to any specific value within a specific range. For instance, the Maclaurin Series for $e^x$ can converge to all positive numbers, but no negative numbers. For this function, it can converge to all numbers, as setting $m=0$ yields the generic geometric series, which can converge to any number.
Finally, this part will deal with the use of this function. It is important to realize that this function will only be useful when the actual primes do not need to be found or there is no other function that can model a specific behavior and therefore it must be used (i.e. if someone found a way to encode things using your series that worked better than other encryptions). For the Euler example, both of these were true. |
Question about heights on elliptic curves- the set $\{P \in E(K) : h_f(P) \leq C \}$ is finite (Silverman's AEC) | Are you okay with the fact that we are identifying $f$ with a map $E\to\Bbb P^1$? [If not, go back and look at example $2.2$].
If yes, then this is just the fact that a nonconstant map between curves over $K$ is always finite-to-one. This is a combination of the following two facts [see Theorems $2.4$ and $2.6$]:
If $\phi:C_1\to C_2$ is a nonconstant map of curves defined over $K$, then $\deg(\phi):=[K(C_1):\phi^*K(C_2)]$ is finite.
If $\phi:C_1\to C_2$ is a nonconstant map of smooth curves, then for every $Q\in C_2$ we have $$\sum_{P\in\phi^{-1}(Q)}e_\phi(P)=\deg(\phi),$$ where $e_\phi(P)\ge1$ is the ramification index of $\phi$ at $P$. |
Inequalities in integration | Look at this picture. The red curve is the function $\displaystyle f(x)=\frac{1}{x^2+x+1}$
The red curve is decreasing on $[0,1]$, since $\displaystyle f'(x)=-\frac{2x+1}{\left(x^2+x+1\right)^2}<0$ for $0\leq x \leq 1$.
The gray shaded area is the value of $\displaystyle\frac{1}{n}\sum_{k=1}^{n}\frac{1}{\left(\frac{k}{n}\right)^2+\frac{k}{n}+1}$ when $n=6$.
Can you see why? The width of each box is $\displaystyle \frac{1}{n}$, and the height of the $k$th box is $\displaystyle f\left(\frac{k}{n}\right)=\frac{1}{\left(\frac{k}{n}\right)^2+\frac{k}{n}+1}$.
As you can see, as $n$ increases, the number of boxes will increase, but the shaded area will never be equal to or greater than the area under the red curve for the interval $[0,1]$. (By the way, these approximations of the area under the curve are called Riemann sums.)
Ergo, we conclude that $\displaystyle S_n<\int_{0}^{1}\frac{dx}{x^2+x+1}$
$$\begin{align}
\int_{0}^{1}\frac{dx}{x^2+x+1}&=\int_{0}^{1}\frac{dx}{\left(x+\frac{1}{2}\right)^2+\frac{3}{4}}
\\
&=\frac{2}{\sqrt{3}}\int_{\frac{1}{\sqrt{3}}}^{\sqrt{3}}\frac{du}{u^2+1} \qquad\text{(Substitute $u=\frac{2x+1}{\sqrt{3}}$, $\frac{du}{dx}=\frac{2}{\sqrt{3}}$)}\\
&=\frac{2}{\sqrt{3}}\arctan(u)\bigg|_{\frac{1}{\sqrt{3}}}^{\sqrt{3}}\\
&=\frac{\pi}{3\sqrt{3}}
\end{align}$$
And thus we have our desired result.
(By the way, since I am bad at calculus, I used the following sites: http://www.integral-calculator.com/, http://www.derivative-calculator.net/) |
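The picture argument can be checked numerically: since $f$ is decreasing, the right-endpoint Riemann sums $S_n$ stay strictly below the integral $\pi/(3\sqrt{3})$ for every $n$, while approaching it as $n$ grows.

```python
import math

def S(n):
    # right-endpoint Riemann sum of f(x) = 1/(x^2 + x + 1) on [0, 1]
    return sum(1 / ((k / n) ** 2 + k / n + 1) for k in range(1, n + 1)) / n

integral = math.pi / (3 * math.sqrt(3))
for n in (6, 100, 10000):
    print(n, S(n), S(n) < integral)   # always strictly below the integral
```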
order of accuracy for numerically evaluating $u''(x) +u'(x) +u(x) = e^{-x^2}$ with regard to grid resolution | Your first conclusion of a cosine series is wrong, the ODE is not symmetric and the second boundary condition is not $u'(π)=-u'(−π)$ as an even symmetry would demand.
The quest for accuracy assumes that the error behaves dominantly as $E(N)\simeq C\cdot N^{-p}$. Taking the logarithm gives
$$
\log|E(N)|\simeq c-p⋅\log|N|
$$
so that in a loglog plot you should get a straight line with slope $-p$. You can now read off the order visually or do a linear regression and round to the next integer. |
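A minimal illustration of the regression step, using synthetic data where the true order $p=2$ and the constant $C$ are made up for the example; the least-squares slope of $\log|E|$ against $\log N$ recovers $-p$.

```python
import math

p_true, C = 2.0, 5.0
Ns = [16, 32, 64, 128, 256]
errors = [C * N ** (-p_true) for N in Ns]     # synthetic E(N) = C * N^(-p)

# least-squares slope of log|E| against log N
xs = [math.log(N) for N in Ns]
ys = [math.log(E) for E in errors]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
print(-slope)   # recovered order p ≈ 2
```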
Number Theory question - divisibility | The digit sum of $s + 1$ for your number $18999999999999$ is $10$, not divisible by $19$.
If there are $k$ $9$'s at the end of $s$, then the digit sum of $s$ and $s + 1$ differ by $9k - 1$.
Therefore there should be at least $17$ $9$'s at the end of $s$ (as $17$ is the inverse of $9$ modulo $19$). In order for the sum to be divisible by $19$, we should add another $18$. But it is not possible to do that in two digits, as that would require another two $9$'s.
So we must have at least $20$ digits, and the smallest such $s$ is $19899999999999999999$. |
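To double-check the stated properties of $s$ (not its minimality, which the argument above establishes):

```python
# Verify that both s and s + 1 have digit sums divisible by 19.
def digit_sum(n):
    return sum(int(d) for d in str(n))

s = 19899999999999999999
print(digit_sum(s), digit_sum(s + 1))  # -> 171 19, both multiples of 19
```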
$\lim_{n\to\infty}\int_0^n\left(1-\frac{x}{n}\right)^n\text{e}^{\frac{x}{2}}\text{d}x$ Evaluating this limit | Use the fact that
$$\left(1 - \dfrac{x}n \right)^n \leq e^{-x}$$
i.e.
$$\left(1 - \dfrac{x}n \right)^n e^{x/2} \leq e^{-x/2}$$
and
$$\lim_{n \to \infty}\left(1 - \dfrac{x}n \right)^n = e^{-x}$$
Consider the sequence $$f_n(x) = \begin{cases} \left(1 - \dfrac{x}{n} \right)^n e^{x/2} & x \in [0,n]\\ 0 & x > n\end{cases}$$ which is dominated by $g(x) = e^{-x/2}$. Now apply dominated convergence theorem to get that
$$\lim_{n \to \infty} \int_0^n f_n(x) dx = \lim_{n \to \infty} \int_0^{\infty} f_n(x) dx = \int_0^{\infty} \lim_{n \to \infty} f_n(x) dx = \int_0^{\infty} e^{-x/2} dx = 2$$ |
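A numerical sanity check of the limit, using a crude trapezoidal rule (the logarithmic form of the integrand is just there to avoid floating-point overflow; no claim of efficiency):

```python
import math

def f(x, n):
    # (1 - x/n)^n * e^{x/2}, computed via logs so neither factor over/underflows
    if x >= n:
        return 0.0
    return math.exp(n * math.log1p(-x / n) + x / 2)

def integral(n, steps=200_000):
    h = n / steps
    total = (f(0, n) + f(n, n)) / 2
    total += sum(f(i * h, n) for i in range(1, steps))
    return total * h

val = integral(2000)
print(val)  # close to 2, the value of the limit
```

For finite $n$ the value sits a little below $2$ (the deficit is of order $1/n$), consistent with the dominated convergence argument.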
"Let $\epsilon > 0$be given .... is $\epsilon < 1$ ok? | Without loss of generality, you can restrict yourself to $\epsilon \in (0,1)$, or more generally, $\epsilon \in (0,a)$ for each $a > 0$.
To make this formal, you can always modify your argument as follows. You open with, "Let $\epsilon > 0$ be given. Define $\epsilon' = \min(\epsilon,a/2)$." Then you carry out your ordinary argument using $\epsilon'$. Finally, at the end where you conclude
$$
... < \epsilon'
$$
you finish with "and $\epsilon' \leq \epsilon$". |
conditional probabilites - pairs of balls | answered by the comments and sadly i can't mark them as answers
Perhaps I am mistaken, but it looks like (not A2) and A3 are mutually
exclusive (i.e. they can't both happen). If I'm right, either you have
a typo or the problem is meaningless or you are supposed to realize
this and declare the probability = 0. |
Continuous functions with $f^2(x)=g^2(x)\neq 0$ | Hint: Consider the function
$$
h(x) = \frac{f(x)}{g(x)}
$$
and apply the intermediate value theorem. |
Subgroups of Index $2$ of $(\mathbb{Z}_{2})^{\aleph_{0}}$ | Take any non-empty set $X$ of $\mathbb{N}$ forms :
$$e_X:=(\chi_X(x))_{x\in\mathbb{N}}\in(\mathbb{Z}_2)^{\mathbb{N}} $$
Here $\chi_X(x)$ is $0$ if $x\notin X$ and $1$ if $x\in X$.
Each $e_X$ allows you to construct a dual form $e_X^*$. Denote the kernel of $e_X^*$ by $K_X$.
Since $K_X$ is the kernel of a non-trivial group morphism from $(\mathbb{Z}_2)^{\mathbb{N}}$ to $\mathbb{Z}_2$, it is of index $2$.
Basically what you need to show is that $X\mapsto K_X$ is injective. Then you are done. |
Charts for a real vector bundle | Yep! Given an open set $U$ such that $E$ is trivial over $U$ and an open set $V$ that is diffeomorphic to an open subset of $\mathbb{R}^n$, $U\cap V$ is an open set which is both diffeomorphic to an open subset of $\mathbb{R}^n$ and on which $E$ is trivial. Letting $U$ and $V$ range over open covers of $M$, you get an open cover of $M$ by sets of this form. So you can always find an atlas for $M$ such that $E$ is trivial on each set in the atlas. |
How can we solve this ODE | Multiplying both sides by $\frac{dx}{dy}$ gives
$$ \frac{dx}{dy}\frac{d(\frac{dx}{dy})}{dy}=\frac{1}{x^2}\frac{dx}{dy} $$
and hence
$$ \frac12\frac{d}{dy}\bigg(\frac{dx}{dy}\bigg)^2=-\frac{d}{dy}\bigg(\frac1{x}\bigg). $$
So
$$ \bigg(\frac{dx}{dy}\bigg)^2=-\frac2{x}+C_1. $$
Taking square root gives
$$ \frac{dx}{dy}=\pm\sqrt{-\frac2{x}+C_1}. $$
so
$$ \frac{dx}{\sqrt{-\frac2{x}+C_1}}=\pm dy. $$
Now integrating both sides will give
$$ y=\pm\int\frac{dx}{\sqrt{-\frac2{x}+C_1}}+C_2 $$
which is easy to solve. I omit the detail. |
Does the following function exist? | $$g(f(x))=x^{2019}$$
$$\implies f(g(f(x)))=f(x^{2019})$$
$$\implies (f(x))^{2018}=f(x^{2019})$$
(by associativity of function composition, applying $f\circ g$ to $f(x)$).
There exist no such $f$ and $g$: if $f(1)=1$ and $f(-1)=1$, then $g(1)=g(f(1))=1^{2019}=1$, while also $g(1)=g(f(-1))=(-1)^{2019}=-1$, a contradiction.
Well, one may argue $f(1)=0$;
in that case consider $f(0)$, $f(1)$, $f(-1)$: each takes the value $1$ or $0$, so by the pigeonhole principle at least two of them agree, and the same argument applies, hence we are done.
Relation between semisimple Lie Algebras and Killing form | Hint: Recall that $X_\alpha$, $X_{-\alpha}$ and $[X_\alpha,X_{-\alpha}]$ span a Lie subalgebra of $\mathfrak g$ that is isomorphic to $\mathfrak{sl}(2,\mathbb C)$. Now given another root $\beta$, you have the $\beta$-string through $\alpha$, which defines a representation of this subalgebra that you can analyze using the representation theory of $\mathfrak{sl}(2,\mathbb C)$. If you assume that $\alpha$ and $\beta$ are both simple, then there are some simplifications, for example, you automatically get $p=0$. |
Question about basic optimization. | Let $p$ be the required perimeter. Then $2(l+w)=p \Rightarrow w=p/2-l$. The area is then:
$$
A(l) = l(p/2-l)
$$
You want to maximize the area $A(l)$: setting $A'(l) = p/2 - 2l = 0$ gives $l = p/4$, and then $w = p/2 - l = p/4 = l$, so the optimal rectangle is a square.
find all matrices that com-mute with the given matrix | Consider a generic matrix
$$
X=\begin{bmatrix}
a & b \\
c & d
\end{bmatrix}
$$
If $A$ is your matrix, the condition $AX=XA$ translates into
$$
\begin{bmatrix}
2a+3c & 2b+3d \\
-3a+2c & -3b+2d
\end{bmatrix}
=
\begin{bmatrix}
2a-3b & 3a+2b \\
2c-3d & 3c+2d
\end{bmatrix}
$$
By comparing coefficients,
$$
\begin{cases}
2a+3c=2a-3b \\[4px]
2b+3d=3a+2b \\[4px]
-3a+2c=2c-3d \\[4px]
-3b+2d=3c+2d
\end{cases}
$$
The first equation is $3c=-3b$, or $c=-b$. Similarly for the others, so we get
$$
\begin{cases}
c=-b \\[4px]
d=a \\[4px]
a=d \\[4px]
-b=c
\end{cases}
$$
that simplifies into $d=a$ and $c=-b$. There's no restriction on $a$ and $b$, so you get all matrices of the form
$$
X=\begin{bmatrix}
a & b \\
-b & a
\end{bmatrix}
$$ |
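A quick sanity check that every matrix of this form really commutes with $A$, for a few sample values of $a$ and $b$:

```python
# Verify that X = [[a, b], [-b, a]] commutes with A = [[2, 3], [-3, 2]].
def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 3], [-3, 2]]
for a, b in [(1, 0), (0, 1), (5, -7), (2, 4)]:
    X = [[a, b], [-b, a]]
    assert matmul(A, X) == matmul(X, A)
print("all commute")
```

This is no accident: $A$ itself has the form $aI + bJ$ with $J=\begin{bmatrix}0&1\\-1&0\end{bmatrix}$, and all such matrices commute with one another.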
Is there a function $f : N\to N$ such that every $(k-1)$-connected graph with minimum degree at least $f(k)$ is at least $k$-connected? | Fix $m$ and $n$ with $n\gg m$ and consider a "$K_m$ made of $K_n$'s", i.e., we have $m$ disjoint copies of $K_n$'s, pick a vertex from each of these, and add edges between any two of the $m$ picked vertices.
The minimum degree of this graph is $n-1$, but the connectivity is determined by $m$.
How to solve limits for trigonometric sequences? $\lim_{n \to \infty} n^3\left(1-\cos\left(\frac{1}{n}\right)\right)\sin\left(\frac{1}{n}\right) $ | For the first one, set $1/n=2h$ to get
$$\lim_{h\to0}\dfrac{(1-\cos2h)\sin2h}{(2h)^3}=\dfrac12\cdot\left(\lim_{h\to0}\dfrac{\sin h}h\right)^2\cdot\lim_{h\to0}\dfrac{\sin2h}{2h}$$ |
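The product of limits on the right is $\frac12\cdot 1^2\cdot 1=\frac12$; a quick numeric check of the original sequence:

```python
# Numerically confirm n^3 (1 - cos(1/n)) sin(1/n) -> 1/2 as n grows.
import math

for n in [10, 100, 10000]:
    v = n**3 * (1 - math.cos(1 / n)) * math.sin(1 / n)
    print(n, v)
# the printed values approach 0.5
```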
If $\gcd(x,y)=1379,\operatorname{lcm}(x,y)=n!$ then find $n$ | Since $\gcd(x,y)=7\cdot 197$, we can assume $x=7\cdot 197a$, and $y=7\cdot 197b$, where $\gcd(a,b)=1$. Then $\text{lcm}(x,y)=7\cdot 197\cdot ab=n!$. Thus, $n\geqslant 197$. Actually, we can assign different prime factors of $\frac{n!}{7\cdot 197}$ to $a$ and $b$ respectively to make sure they are coprime. |
How do you calculate $E[U_1 | U_1 > U_2]$ where $U_1$ and $U_2$ are independent Uniform RVs from $(0,1)$ | You are not conditioning over a random variable (or its sigma algebra), but rather over an event.
By definition for conditioning over a non-zero probability event:$$\begin{align}\mathsf E(U_1\mid U_1 > U_2) = &~\dfrac{\mathsf E(U_1\mathbf 1_{U_1>U_2})}{\mathsf P(U_1>U_2)}\\ =&~ \dfrac{\displaystyle \iint_{u>v} u~f_{U_1,U_2}(u,v)~\mathsf d (u, v)}{\displaystyle \iint_{u>v} f_{U_1,U_2}(u,v)~\mathsf d (u, v)}\\ =&~ \dfrac{\displaystyle \int_0^1\int_v^1 u~\mathsf d u~\mathsf d v}{\displaystyle \int_0^1\int_v^1 1~\mathsf d u~\mathsf d v}\\ \vdots~&\end{align}$$ |
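Completing the integrals gives numerator $\int_0^1\int_v^1 u\,\mathsf du\,\mathsf dv=\frac13$ and denominator $\frac12$, so the answer is $\frac23$. A Monte Carlo check:

```python
# Estimate E[U1 | U1 > U2] by simulation; the exact value is 2/3.
import random

random.seed(0)
total, count = 0.0, 0
for _ in range(200_000):
    u1, u2 = random.random(), random.random()
    if u1 > u2:
        total += u1   # accumulate U1 only on the conditioning event
        count += 1
print(total / count)  # close to 0.6667
```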
Upper bound for eigenvalues | This is a consequence of the Gershgorin circle theorem. |
Do integers modulo $n$ minus $\frac n 2$ (i.e. signed integers) still form a commutative ring? | The answer to your question depends on how you define the operations.
For instance what should $2^{63}-2$ plus $7$ be?
If you are willing to say that it is $-2^{63}+5$ then you are effectively calculating modulo $2^{64}$, so in $\mathbb{Z}/2^{64}\mathbb{Z}$.
One would say you chose $\{-2^{63},\cdots,0,\cdots,2^{63}-1\}$ as a set of representatives for the classes modulo $2^{64}$. Whether you choose this set of representatives, or $\{0,\cdots,2^{64}-1\}$, or $\{17,\cdots,2^{64}+16\}$, or still something else, is often irrelevant in a math context.
But if you do this the sign stops making sense.
If you do not use that definition for $2^{63}-2$ plus $7$, the question would be what else you use. You might say it is not defined, but then this is not a commutative ring anymore, since in a ring addition is defined for any two elements.
However, it would still stay true that whenever you have expressions in which all the operations are defined, they are distributive, associative and commutative. To see this, note that you are really only working with the usual integers and just disallow some that are too large.
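A minimal sketch of this reduction, treating "signed 64-bit" values as the representatives in $[-2^{63},2^{63}-1]$ of classes modulo $2^{64}$ (two's complement):

```python
# Map an integer to its two's-complement representative in [-2^63, 2^63 - 1].
def to_signed64(n):
    n &= (1 << 64) - 1                          # reduce mod 2^64
    return n - (1 << 64) if n >= (1 << 63) else n

# The example from the text: 2^63 - 2 plus 7 wraps around to -2^63 + 5.
print(to_signed64(2**63 - 2 + 7))               # -> -9223372036854775803
print(to_signed64(2**63 - 2 + 7) == -2**63 + 5) # -> True
```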
The pumping theorem and regular language | If we had $L_1 = L(a^*bba^*a^*bba^*)$, $L_1$ would (obviously) be regular, as generated by a regular expression. But note the difference:
\begin{align*} L(a^*bba^*a^*bba^*) &= L_0 \circ L_0 = \{uv \mid u,v\in L_0\} \\
L_1 &= \{uu\mid u \in L_0\}
\end{align*}
The point in the definition of $L_1$ is that words of $L_1$ consist of two copies of the same word of $L_0$. Such languages are seldom regular, as a finite automaton cannot remember a whole word from $L_0$ in order to repeat it. This is only a handwaving argument; to prove that $L_1$ is not regular, you should, as you wrote, use the pumping lemma.
So, suppose $L_1$ were regular and $p$ its pumping length. Consider $w = a^pbba^pa^pbba^p \in L_1$. By the pumping lemma there would be a decomposition $w = xyz$ with $|xy| \le p$, $|y| \ge 1$. Then $y$ consists only of $a$s, and $xy^2z = a^{p+|y|}bba^pa^pbba^p \not\in L_1$. So $L_1$ isn't regular.
One-to-one function (Linear Algebra) | Let's break down the questions into what you have to prove:
(a)(i) If $T$ is injective, and $S$ is linearly independent, then $T(S)$ is linearly independent.
(a)(ii) If, given any linearly independent $S$, $T(S)$ is linearly independent, then $T$ is injective.
(b)(i) If $T$ is injective, and $S$ is linearly independent, then $T(S)$ is linearly independent.
(b)(ii) If $T$ is injective and $T(S)$ is linearly independent, then $S$ is linearly independent.
As you can see, (a)(i) and (b)(i) are the same, but (b)(ii) does not immediately follow from part (a). It's not quite the same as (a)(ii), as the conclusions and the assumptions are swapped around, in a sense. So no, the question doesn't ask you the same thing, but it's a question that masquerades as a question implicitly with $4$ parts, but really only has $3$.
To prove part (b)(ii) from part (a), you could use the fact that $T$ is invertible on its range. Note that $T(S)$ is a linearly independent subset of the range of $T$, show that the inverse $T'$ is also injective, then use the fact that $S = T'(T(S))$, where $T'$ is injective and $T(S)$ is linearly independent. By (a), $S$ is also linearly independent. |
Let $\tau,\phi:A \rightarrow \mathbb{C}$ be faithful states on a C*-algebra A, with $\tau \geqslant \phi$. How do the GNS representations compare? | I don't have a general answer, but here are some ideas. Given a state $\tau$, let $h\in Z(A)_+$ with $\|h\|=1$, and define $\phi(a)=\tau(ha)$; then $\phi$ is a state with $\phi\leq\tau$.
You can check that in this case $p^*=p^*p=pp^*=M_h$ (multiplication by $h$). So $p$ is injective/surjective if and only if $h$ is, and you can construct examples with all three possible situations.
For $\hat p:x\longmapsto pxp^*$ to be a homomorphism you need, in particular, that $pxp^*px^*p^*=pxx^*p^*$ for all $x\in A_\tau''$. This equality is
$$
px(I-p^*p)x^*p=0,
$$
so $(I-p^*p)xp=0$ for all $x\in A_\tau''$. As the range of $p$ is dense, you get $(I-p^*p)x=0$. And by using $x\in A$ you obtain that $I-p^*p=0$. So $\hat p$ is a homomorphism if and only if $p^*p=I$. |
Counting vectors of natural numbers with restrictions on coordinates | We can write a recurrence. Define $N(p,n)$ as the number of strictly increasing sequences of length $p$ with maximum term no more than $n$ and minimum term at least $1$. $p$ here corresponds to $n-k+1$ in your question. The recurrence is $N(p,n)=N(p-1,n-1)+N(p,n-1)$, where the first term covers the sequences that end in $n$ and the second term covers those that do not contain $n$. The boundary conditions are that $N(p,p)=1$, as you must have the sequence from $1$ to $p$, and $N(0,n)=1$, as there is one empty sequence. The number of acceptable sequences of length $n$ is then $\sum_{i=0}^nN(i,n)$, because we can pad the ones shorter than $n$ with any number of zeros. It turns out $N(p,n)={n \choose p}$: this is Pascal's triangle. The sum just adds up the entries in row $n$ of Pascal's triangle, which is known to be $2^n$.
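The recurrence, the binomial identification, and the row-sum identity can all be checked directly:

```python
# Check N(p, n) = N(p-1, n-1) + N(p, n-1) against C(n, p), and row sums against 2^n.
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def N(p, n):
    if p == 0:
        return 1          # the empty sequence
    if p == n:
        return 1          # must be exactly 1, 2, ..., p
    return N(p - 1, n - 1) + N(p, n - 1)

for n in range(12):
    assert sum(N(p, n) for p in range(n + 1)) == 2**n
    for p in range(n + 1):
        assert N(p, n) == comb(n, p)
print("ok")
```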
calculate surface normal using light | Gentle
Interesting. I was contemplating a similar question while staring at the reflection of the sun off a choppy ocean: could I estimate the choppiness as a function of position by taking some sort of average of the intensity of the reflection?
The literature on optics is full of terminology concerning this sort of problem. In satellite imaging one wishes to analyze materials using spectral information, for example. This requires a host of information so that you can determine how much of the light that you sense from a particular point was attenuated by the atmosphere or dust, how bright the source was, etc. There are databases full of data collected in laboratories about how much light is reflected as a function of wavelength, angle of incidence, angle of reflection, and other parameters.
Generally speaking, the maximum amount of light will be reflected when the angle between the light source and the normal is the same as that between the sensor and the normal. So if you can figure out a way to discover the orientation that produces this maximum you've got a start.
Of course doing this once only provides a plane of possible normals. You need to repeat with a second orientation of the sensor (unless the light source is collocated with the sensor).
If you cannot control the orientation of the glitter you could try measuring the returned light for a number of points and assume that the maximum observed value represents those pieces of glitter with the optimal geometry and then try to infer how much orientation change is associated with various lesser values to guess the orientations of the rest of the glitter. You would then have a graph that relates intensity to the angle between source and sensor -- roughly. |
Probability of survival on an infinite go board | Here's a rephrasing of the question for those who are unfamiliar with Go:
The vertices of an infinite square grid are labeled with $b$ ("black stone"), $w$ ("white stone"), or $e$ ("empty"). The vertex at $(0,0)$ is always labeled with $b$; the other vertices are randomly and independently labeled with $b$, $w$, or $e$, with equal probability.
Define the "black group" $B$ as the largest connected set of vertices which includes $(0,0)$ and where every vertex is labeled with $b$. ($B$ is finite with probability $1$.) What is the probability that $B$ is adjacent to at least one vertex which is labeled with $e$ (meaning that $B$ "survives")?
This probability can be written out as a pseudo-formula like so:
$$1 - \sum_{B} \text{(probability of $B$ occurring)}\text{(probability, given $B$, that $B$ does not survive)}.$$
The summation runs over all possible (finite) black groups $B$. Note that a "possible black group" is simply a "pointed fixed polyomino": in other words, a fixed polyomino where one particular point has been distinguished as $0,0$.
Now, let's define $N(B)$ has the number of vertices ("black stones") in $B$, and define $F(B)$ as the number of vertices which are not in $B$ but which are adjacent to $B$ (the number of "frontier vertices" of $B$).
The probability of any particular black group $B$ occurring is just the probability that $N(B) - 1$ vertices have randomly been labeled with $b$ and $F(B)$ vertices have randomly been labeled with $w$ or $e$. In other words:
$$P(\text{$B$ occurs}) = \left (\frac13 \right)^{(N(B) - 1)} \left (\frac23 \right)^{F(B)}.$$
Meanwhile, given a particular black group $B$, the probability that $B$ does not survive is simply the probability that $F(B)$ vertices have randomly been labeled with $w$, or
$$\left (\frac12 \right)^{F(B)}.$$
Substituting, we find that the overall probability of survival is
$$1 - \sum_{B} \left (\frac13 \right)^{(N(B) - 1)} \left (\frac23 \right)^{F(B)} \left (\frac12 \right)^{F(B)}\\
= 1 - \sum_{B} \left (\frac13 \right)^{(N(B) - 1)} \left (\frac13 \right)^{F(B)}\\
= 1 - \sum_{B} \left (\frac13 \right)^{(N(B) + F(B) - 1)}.$$
Recall that the sum ranges over all pointed fixed polyominoes $B$. It may be hard to find information about pointed fixed polyominoes, so let's rewrite the summation to instead range over all fixed polyominoes $P$. Each fixed polyomino $P$ appears as a pointed fixed polyomino $N(P)$ times, so we get
$$1 - \sum_{P} N(P) \left (\frac13 \right)^{(N(P) + F(P) - 1)}.$$
The value of this sum is almost certainly not known. But by enumerating all of the polyominoes up to, say, size 10, it should be possible to get very good upper and lower bounds. |
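Here is a small enumeration sketch in that spirit, going up to size 6 rather than 10 (the helper names are my own). Summing $N(P)\,(1/3)^{N(P)+F(P)-1}$ over the enumerated polyominoes gives a lower bound on the capture probability, hence an upper bound on survival:

```python
# Enumerate fixed polyominoes up to a small size and sum the capture terms.
def normalize(cells):
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def neighbors(c):
    x, y = c
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def polyominoes(max_size):
    current = {normalize([(0, 0)])}
    for _ in range(max_size):
        yield current
        nxt = set()
        for P in current:
            for c in P:
                for nb in neighbors(c):
                    if nb not in P:
                        nxt.add(normalize(P | {nb}))
        current = nxt

partial = 0.0
counts = []
for level in polyominoes(6):
    counts.append(len(level))
    for P in level:
        size = len(P)                                        # N(P)
        frontier = {nb for c in P for nb in neighbors(c)} - P  # F(P)
        partial += size * (1 / 3) ** (size + len(frontier) - 1)

print(counts)                 # -> [1, 2, 6, 19, 63, 216], the fixed polyomino counts
print("survival <=", 1 - partial)
```

As a sanity check, the single-cell polyomino contributes $(1/3)^4$, the probability that an isolated stone has all four neighbors white.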
Understanding this proof that $\lim\limits_{h\to 0}\frac{\cos(h)-1}{h}=0$ | The simplest proof is this:
$$\frac{\cos h-1}h=\frac{(\cos h-1)(\cos h+1)}{(\cos h+1)h}=\frac{\cos^2h-1}{(\cos h+1)h}=-\frac{\sin^2h}{(\cos h+1)h}=-\frac{\sin h}h\cdot\frac{\sin h}{\cos h+1}.$$
The first fraction tends to $1$, the second tends to $\dfrac 02=0$, hence the limit is $\color{red}0$.
For the proof you mention, at the third line, you should have
$$=\lim_{h\to 0}\Bigl( -\frac{2 \sin^2(h/2)}{h}\Bigr)=\lim_{h\to 0}\Bigl( -\frac{\sin^2(h/2)}{h/2}\Bigr)=\dots$$ |
Proof that $\pi$ is rational | Let's apply this technique to a more transparent question.
CLAIM: $0.333\ldots < 1/3$
Proof: We induct on the number of decimal digits. Clearly, $0.3 < 1/3$. Now, by induction, if the $n$-digit truncation satisfies $0.333\ldots 3 < 1/3$, then in particular $3 \cdot 0.333\ldots3 = 0.999\ldots9 < 1$, and so $0.999\ldots 99$ (i.e. with one more $9$ digit) $<1$, and thus the claim holds for $n+1$ as well. So by induction, the claim is proven.
What's wrong with this? Induction is a proof for all natural numbers, not for $\infty$. It's clear that $0.333\ldots = 1/3$. But any finite decimal representation is less than $1/3$. And the induction only shows that any finite decimal representation is, in fact, less than $1/3$.
This is the same flaw at the heart of the $\pi$ rational argument. |
Equivalence in Portmanteau's lemma | For your first hesitation: we can take $f$ to have values in $[-1,1]$ because we are already assuming $f$ is bounded. Thus, if $M = \sup_x |f(x)|$, $f/M$ does take values in $[-1,1]$. The constant $M$ does no harm because $E[f(X)/M) = \frac{1}{M}E[f(X)]$, so the general case of values in $[-M,M]$ follows immediately from the $[-1,1]$ case.
For the question about the inequality, if you agree that $|f - f_\epsilon| < \epsilon$ on $I$,
\begin{align*}
|Ef(X_n) - Ef_\epsilon(X_n)| &= \left|\int_\Omega f - f_\epsilon dP^{X_n}\right|\\
&=\left|\int_I f-f_\epsilon dP^{X_n} + \int_{I^c} f-f_\epsilon dP^{X_n}\right| \\
&\leq \int_I |f-f_\epsilon| dP^{X_n} + \int_{I^c} |f-f_\epsilon| dP^{X_n}.
\end{align*}
If we agree that $|f-f_\epsilon| < \epsilon$ on $I$, then it does really mean the integral over $I$ is less than $\epsilon$ because $I$ has measure at most $1$ (whereas you might be thinking of $I$ in terms of area in the Lebesgue sense...):
$$\int_I |f-f_\epsilon| dP^{X_n} \leq \int_I \epsilon dP^{X_n} \leq \epsilon \int_\Omega dP^{X_n} = \epsilon\cdot 1.$$
For the second term, notice that on $I^c$, $f_\epsilon \equiv 0$. Then, by the comment about $f$ being bounded,
$$\int_{I^c} |f-f_\epsilon| dP^{X_n} \leq \int_{I^c} dP^{X_n} = P(X_n \notin I).$$
Does this help? |
Integral by trigonometric substitution: $ \int \:\frac{2y^3}{\sqrt{1-y^2}}\mathrm dy $ | This type of problem is much easier if you do a $u-$sub by picking $u=1-y^2$ and $du=-2ydy,$ then
$$\int \frac{2y^3dy}{\sqrt{1-y^2}}=-\int\frac{(1-u)}{\sqrt{u}}du$$
and this becomes a simple power rule computation afterward. |
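Carrying the power rule through and back-substituting gives the antiderivative $F(y)=\frac23(1-y^2)^{3/2}-2\sqrt{1-y^2}$ (my completion of the omitted step); a numeric comparison against a trapezoidal estimate on $[0,0.5]$:

```python
import math

def f(y):
    return 2 * y**3 / math.sqrt(1 - y**2)

def F(y):
    # antiderivative obtained from u = 1 - y^2 via the power rule
    u = 1 - y**2
    return (2 / 3) * u**1.5 - 2 * math.sqrt(u)

a, b, steps = 0.0, 0.5, 100_000
h = (b - a) / steps
trap = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, steps))
trap *= h
print(trap, F(b) - F(a))  # the two values agree to many decimal places
```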
Show an open set has no minimum | Suppose $m \in A$ is the minimum of A. By definition of open set, you have that $(m-\delta , m + \delta) \subseteq A$ with $\delta > 0$ sufficiently small, then $m- \frac{\delta}{2} \in A$ but this is a contradiction (because we have found an element in A smaller than $m$) |
Calculate variance for a probability mass function | Note that$$\sum_{x=0}^{10}9=9(11)=99$$
$$10K(11)+99K=1$$
$$110K+99K=1$$
$$209K=1$$
$$K=\frac1{209}$$
\begin{align}
E(X^2) &= K \sum_{x=0}^{10} x^2(2x+9)\\
&=K\left( 2\left(\frac{(10)(11)}{2}\right)^2+9\cdot\frac{(10)(11)(21)}{6}\right) \\
&= 10(11)K\left(\frac{10(11)}{2} + \frac{3(21)}{2} \right)\\
&=\frac{10(11)}{2}K\left(110+63 \right)\\
&=9515K
\end{align}
\begin{align}
E(X) &= K \sum_{x=0}^{10} x(2x+9)\\
&=K\left( 2\cdot\frac{(10)(11)(21)}{6}+9\left(\frac{(10)(11)}{2}\right)\right) \\
&= 10(11)K\left(7 + \frac{9}{2} \right)\\
&=\frac{10(11)}{2}K\left(23 \right)\\
&=1265K
\end{align}
$$Var(X)=E(X^2)-(E(X))^2=9515K-(1265K)^2$$ |
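The arithmetic can be confirmed with exact fractions; the variance comes out to $\frac{9515}{209}-\left(\frac{1265}{209}\right)^2=\frac{3210}{361}$:

```python
# Recompute K, E(X), E(X^2) and Var(X) directly from the pmf p(x) = K(2x + 9), x = 0..10.
from fractions import Fraction

K = Fraction(1, sum(2 * x + 9 for x in range(11)))
EX = K * sum(x * (2 * x + 9) for x in range(11))
EX2 = K * sum(x * x * (2 * x + 9) for x in range(11))
var = EX2 - EX**2

print(K)    # -> 1/209
print(var)  # -> 3210/361
```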
Visualize and solve $\int\int_D e^{y^2}dxdy$ with $D=\{x\geq 0$ , $x\leq y\leq 1\}$ | The integration is over the triangular region $0\le x\le y\le 1$. So, change the order of integration as follows,
$$\int_0^1dx \int_x^1 e^{y^2}dy = \int_0^1dye^{y^2}\int_0^ydx=\int_0^1e^{y^2}ydy=\frac{e-1}2$$ |
Uniform convergence of a function composition | $g$ is decreasing and $0\le f_n(x)\le1$. This implies that $g\circ f_n(x)\ge g(1)=1-e^{-1}=0.632121\dots$ and that the series $\sum g\circ f_n$ does not converge.
Also, it is easy to see that $g\circ f_n(x)$ converges pointwise, but not uniformly, to $1$.
For Green's theorem, why is the region of integration of the line integral a weird partial derivative character? | Because the double integral is over some region $Q$ (e.g. a disc), while the line integral is over the boundary of $Q$ (e.g. the circle bounding that disc): they're not the same.
You can give it another name if you like, such as "$C$, the boundary of $Q$", but I wouldn't call it $Q$ again since $Q$ is already used for the entire region!
For this boundary of $Q$, the notation $\partial Q$ is common.
Referring to my example above, you could have the unit disc $Q$:
$$Q=\left\{\left(x,y\right)\in\mathbb{R}^2 \;\vert\; x^2+y^2 \le 1\right\}$$
and its boundary, the unit circle $\partial Q$ (possibly with a chosen orientation):
$$\partial Q=\left\{\left(x,y\right)\in\mathbb{R}^2 \;\vert\; x^2+y^2 = 1\right\}$$
Related: Meaning of partial differential in limits of integration? |
Linear Programming (Approach) | for me, it seems correct.
the constraints are perfectly formulated, and the profit is precise. |
4 simultaneous equations in real numbers | Let $x+y+z=3u$, $xy+xz+yz=3v^2$, where $v^2$ can be negative, and $xyz=w^3$.
Thus, $w^3=1$, and summing the first three equations we obtain:
$$\sum_{cyc}(x^2+xy)=2(x+y+z)$$ or
$$9u^2-3v^2=6u,$$ which gives $$v^2=3u^2-2u.$$
Now, $$(x-y)^2(x-z)^2(y-z)^2\geq0$$ gives
$$3u^2v^4-4v^6-4u^3w^3+6uv^2w^3-w^6\geq0$$ or
$$3u^2(3u^2-2u)^2-4(3u^2-2u)^3-4u^3+6u(3u^2-2u)-1\geq0$$ or
$$(u-1)^2(81u^4-18u^3+15u^2+2u+1)\leq0$$ and since
$$81u^4-18u^3+15u^2+2u+1=u^2(9u-1)^2+(u+1)^2+13u^2>0,$$ we obtain $u=1$, $v^2=1$, and since $x$, $y$ and $z$ are roots of the equation
$$(t-x)(t-y)(t-z)=0$$ or
$$t^3-3ut^2+3v^2t-w^3=0$$ or $$t^3-3t^2+3t-1=0$$ or $$(t-1)^3=0,$$ we obtain
$$x=y=z=1.$$
Easy to see that these values are valid and we are done!
There is also the following solution.
We'll prove that $$\sum_{cyc}(x^2+xy)\geq2(x+y+z)$$ for all reals $x$, $y$ and $z$ such that $xyz=1$.
Indeed, we need to prove that
$$\left(\sum_{cyc}(x^2+xy)\right)^3\geq8xyz(x+y+z)^3,$$ which is linear in $w^3$; hence it's enough to prove our inequality for an extreme value of $w^3$, which happens when two of the variables are equal.
Since the last inequality is of even degree and homogeneous, it's enough to assume $y=z=1$, which gives
$$(x-1)^2(x^4+8x^3+28x^2+44x+27)\geq0$$ and since
$$x^4+8x^3+28x^2+44x+27=(x^2+4x+5)^2+2(x+1)^2>0,$$ our inequality is proven.
The equality occurs for $x=y=z$, which gives a solution of our system. |
How do i solve this hard joint variation problem (also known as joint proportion)? | Problem 1
Let
$$\begin{align}n &= \text{no. of woodchucks} \\
w &= \text{wood chopped} \\
t &= \text{time taken}\end{align}$$
Case 1: $ n = 5, \ w = 8, \ t = 2$
Case 2: $ n = 1, \ w = ?, \ t = 24$
Logically, $w\propto n $ and $w\propto t $
$$\implies \ w=knt \ \ \ \ .\ .\ .\ (1)$$
where $k$ is a constant.
Substituting values given in case 1, $k = \dfrac{8}{10}$
Now we put the value of k in case 2 and calculate $w$ using $\rm eqn\ 1$
We get $w = \dfrac{24\times 8}{10} = 19.2$ pieces of wood
Problem 2
Let
$$\begin{align}c &= \text{no. of chicken} \\
b &= \text{bags} \\
d &= \text{days}\end{align}$$
Case 1: $ c = 5, \ b = 10, \ d = 20$
Case 2: $ c = 18, \ b = 100, \ d = ?$
Logically, $d\propto b $ and $d\propto \dfrac{1}{c} $
$$\implies \ d=\dfrac{kb}{c} \ \ \ \ .\ .\ .\ (1)$$
where $k$ is a constant.
Substituting values given in case 1, $k = 10$
Now we put the value of $k$ in case 2 and calculate $d$ using $\rm eqn\ 1$
We get $d = \dfrac{100\times 10}{18} = \dfrac{500}{9}$ days |
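Both fitted models can be sanity-checked numerically (using the relations that match the computed answers above):

```python
# Problem 1: wood chopped w = k * n * t, fitted from case 1 (n=5, w=8, t=2).
k1 = 8 / (5 * 2)
w = k1 * 1 * 24          # case 2: one woodchuck, 24 hours
print(w)                 # ~ 19.2 pieces

# Problem 2: days d = k * b / c, fitted from case 1 (c=5, b=10, d=20).
k2 = 20 * 5 / 10
d = k2 * 100 / 18        # case 2: 18 chickens, 100 bags
print(d)                 # ~ 55.56 days, i.e. 500/9
```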