Balancing Chemical Reactions using Systems of Equations | Also remember that three of those terms are balanced against the value of the fourth, whose value is then chosen to make them all the smallest integer values possible.
Tip: Unless there is evidence for decomposition, then you should treat compound ions, such as $\rm CO_3^{2-}, NO_3^{-}$ et cetera, as single units rather than collections of elements.
$$ x{\rm KNO_3}+y{\rm H_2CO_3}\rightleftharpoons z{\rm K_2CO_3}+w{\rm HNO_3}$$
$$\begin{array}{r|l} \rm K & x=2z & x-2z=0\\ \rm NO_3 & x=w & x=w\\ \rm H & 2y=w & 2y=w\\ \rm CO_3 & y= z & y-z=0
\end{array}$$
Then $$\left[\begin{array}{rrr|r}1&0&-2&0\\1&0&0&w\\0&2&0&w\\0&1&-1&0
\end{array}\right]\to\left[\begin{array}{rrr|r}1&0&0&w\\0&1&0&w/2\\0&0&1&w/2\\0&0&0&0
\end{array}\right]$$Conclusion: $x=w$, $2y=w$, $2z=w$, so if we let $w=2$ then $x=2, y=1, z=1$.$$\rm 2KNO_3+H_2CO_3\rightleftharpoons K_2CO_3+2HNO_3$$
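If you want to automate the elimination for larger reactions, here is a minimal sketch using sympy (assuming rows and columns are set up exactly as in the table above); the smallest integer coefficients come out of the null space:

from sympy import Matrix

M = Matrix([
    [1, 0, -2,  0],   # K:   x - 2z = 0
    [1, 0,  0, -1],   # NO3: x - w  = 0
    [0, 2,  0, -1],   # H:   2y - w = 0
    [0, 1, -1,  0],   # CO3: y - z  = 0
])
basis = M.nullspace()[0]        # the null space is one-dimensional here
coeffs = basis / min(basis)     # rescale to the smallest integer values
print(coeffs.T)                 # Matrix([[2, 1, 1, 2]]) -> x, y, z, w
|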
Normal family of complex functions | Recall the Weierstrass convergence theorem: If $(f_n)\subset\text{Hol}(\Omega)$ and $f_n\to f$ locally uniformly (that is, uniformly over each compact set) then $f\in\text{Hol}(\Omega)$ and moreover $f_n^{(k)}\to f^{(k)}$ locally uniformly.
Let $\mathcal{F}\subset\text{Hol}(\Omega)$ be a normal family. By definition, this means that if $(f_n)\subset\mathcal{F}$ then $(f_n)$ has a subsequence $(f_{n_k})$ such that $f_{n_k}\to f$ locally uniformly, where $f\in\text{Hol}(\Omega)$. Now let $\mathcal{F}'$ be the family you described and let $(f_n')$ be a sequence in $\mathcal{F}'$. Since $(f_n)\subset\mathcal{F}$, let $(f_{n_k})$ be the locally-uniformly-convergent subsequence of $(f_n)$ and let $f$ be the limit function. Then by Weierstrass's theorem, $(f_{n_k}')$ converges to $f'$ locally uniformly and $f'\in\text{Hol}(\Omega)$, since $f$ is analytic.
The converse is not true: Let $\Omega=\mathbb{C}$ and $\mathcal{F}=\{f_n:n\in\mathbb{N}\}$ with $f_n:\mathbb{C}\to\mathbb{C}$ and $f_n(z)=z+n$. Obviously, $\mathcal{F}'=\{h\}$, where $h$ denotes the constant function $1$, therefore it is a normal family. But there exists a sequence of $\mathcal{F}$ (actually, every sequence has this property) that has no locally uniformly convergent subsequence: indeed, suppose that $(f_{m_k})\subset\mathcal{F}$ converges locally uniformly to $f\in\text{Hol}(\mathbb{C})$. Then $f(0)=\lim_{k\to\infty}f_{m_k}(0)=\lim_{k\to\infty}m_k=\infty$, which is impossible, since $f$ admits values in $\mathbb{C}$. |
When the injective hull is indecomposable | It's elementary if you know the basic properties: "$E(M)$ is a maximal essential extension of $M$", and "injective submodules are direct summands".
Suppose $E(M)$ is indecomposable. Let $A$ and $B$ be nonzero submodules of $M$ such that $A\cap B=0$. Then $E(A)$ is an injective submodule of $E(M)$, and hence it is a direct summand. The only possibilities are $E(M)$ and $0$. Since $A$ is nonzero, it is not the latter, so $E(A)=E(M)$. But this means that $A$ is essential in $E(M)$, and so $A\cap B\neq 0$, a contradiction.
Now suppose $0$ is irreducible. Let $C\oplus D=E(M)$ with $C\neq 0$. Now $(C\cap M)\cap (D\cap M)=0$, and since $M$ is essential in $E(M)$, $M\cap C\neq 0$. By irreducibility of $0$, $M\cap D=0$, but again because $M$ is essential, this amounts to $D=0$. Thus, $E(M)$ is indecomposable. |
Derivative of a fraction | I think your error has been pointed out in the comments. Here is another way to proceed. Instead of using the quotient rule you can do some manipulation and then use the power rule, note that :
$$\frac{x^2-\frac{1}{3}}{x^3} = \frac{x^2}{x^3} - \frac{1}{3x^3}$$ so $$f(x) = \frac{1}{x} - \frac{1}{3x^3}.$$ Using the power rule we have $$f'(x) = \frac{-1}{x^2} + \frac{1}{x^4};$$ combining the fractions gives $$f'(x) = \frac{-x^2+1}{x^4}.$$ The first manipulation is really useful in classic integration and differentiation problems and is worth remembering.
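As a quick sanity check (not part of the argument), sympy confirms the computation:

from sympy import symbols, Rational, diff, simplify

x = symbols('x')
f = (x**2 - Rational(1, 3)) / x**3
assert simplify(diff(f, x) - (-x**2 + 1) / x**4) == 0   # passes
|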
Asymptotics and function composition -- which is bigger | It definitely does not hold in general. Let $f(x)=x^2$ and $g(x)=x$; then
$$\lim_{x\to\infty}\frac{g(x)}{f(x)}=\lim_{x\to\infty}\frac{x}{x^2}=0\;,$$
so $g(x)\in o(f(x))$, but $g\circ f=f\circ g$.
Taking $f(x)=1$ and $g(x)=\frac1x$ yields a similar example. |
How to solve the limit $\lim_{n\to\infty}\sum_{k=1}^{n} \frac{k}{k\,n+2n^2}$ | Hint: This is the limit of a Riemann sum.
$$\lim_{n\to\infty}\sum_{k=1}^{n} \frac{k}{k\,n+2n^2} = \lim_{n\to\infty}\frac1{n}\sum_{k=1}^{n} \frac{k/n}{k/n+2}= \,\,\ldots$$
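If you want to convince yourself numerically, here is a small Python check; the closed form used below is my evaluation of $\int_0^1\frac{x}{x+2}\,dx=1-2\ln\frac32$:

import math

def partial_sum(n):
    return sum(k / (k * n + 2 * n**2) for k in range(1, n + 1))

limit = 1 - 2 * math.log(3 / 2)   # the integral of x/(x+2) over [0,1]
for n in (10, 100, 10000):
    print(n, partial_sum(n), limit)
|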
Double integral of $e^{x^2+y^2}dydx$? | Without context it's hard to suggest the tools, but for the exact problem you are posing notice that
$$
\int_{y=0}^{y=2} e^{x^2+y^2} dy
= \int_{y=0}^{y=2} e^{x^2} e^{y^2} dy
= e^{x^2} \int_{y=0}^{y=2} e^{y^2} dy
$$
So if we define $\int_0^x e^{u^2} du = F(x)$ then your double integral becomes
$$
\int_{x=0}^{x=1} \int_{y=0}^{y=2} e^{x^2+y^2} dy dx
= \left( \int_{x=0}^{x=1} e^{x^2} dx \right) \cdot
\left( \int_{y=0}^{y=2} e^{y^2} dy \right)
= F(2) \cdot F(1).
$$
How you find $F(x)$ is a more complex problem.
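For what it's worth, $F$ can be expressed through the imaginary error function, $F(x)=\frac{\sqrt\pi}2\operatorname{erfi}(x)$. Here is a sketch (using scipy) checking the product formula numerically:

import numpy as np
from scipy import integrate, special

def F(x):
    # F(x) = integral of e^(u^2) from 0 to x, via the imaginary error function
    return np.sqrt(np.pi) / 2 * special.erfi(x)

direct, _ = integrate.dblquad(lambda y, x: np.exp(x**2 + y**2),
                              0, 1, lambda x: 0, lambda x: 2)
print(direct, F(1) * F(2))   # the two values agree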
Here is another approach. Note that you can switch to polar coordinates, and then
$$
\int \int e^{x^2+y^2} dxdy = \int \int e^{r^2} r dr d\theta
$$
and
$$
\int r e^{r^2} dr
$$
can be integrated using the substitution $u = r^2$. The trick is to map the region correctly, which for your square is not an easy thing to do either.
Hence me asking, in what context is your question? |
Block diagonalization of a symmetric square boolean matrix | I think this will work, if you are looking for an algorithm:
Start with a vertex.
Find all its neighbors $N_1$, and delete the vertex itself.
Find all the neighbors $N_2$ of the neighbors, and delete the neighbors $N_1$.
Continue the process until no more neighbors are found.
Check whether any vertex remains. If yes, then the matrix is block diagonalizable; otherwise it is not.
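Here is a minimal sketch of this procedure in Python, reading the boolean matrix as an adjacency matrix (a plain breadth-first search; the "deletions" are modeled by a visited set):

from collections import deque

def block_diagonalizable(M):
    n = len(M)
    seen = {0}                       # start with a vertex
    frontier = deque([0])
    while frontier:                  # find neighbors, "delete" the old layer
        v = frontier.popleft()
        for w in range(n):
            if M[v][w] and w not in seen:
                seen.add(w)
                frontier.append(w)
    return len(seen) < n             # a vertex remains -> at least two blocks

M = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
print(block_diagonalizable(M))       # True: two 2x2 blocks
|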
harmonic function. How to prove? | This $|x|$ should mean $\sqrt{x^2+y^2+z^2}$. This would be harmonic on $\mathbb{R}^3\backslash \{0\}$. |
Polynomial System has only isolated solutions | What you are asking about is whether the ideal generated by the polynomials of the system is zero dimensional. This is a difficult condition to test, which requires the computation of a Gröbner basis for the ideal. Certainly the fact of having as many polynomials as indeterminates, as your question suggests, does not suffice, even if the set of polynomial equations is independent (none of them is implied by the others). |
Proof of uncountability of irrationals without using completeness of real numbers | Say a Dedekind cut $(A, B)$ is between two irrationals $p<q$ if there are elements of both $A$ and $B$ in the interval $(p, q)$. We now argue as follows:
Suppose $(A_n, B_n)$ (for $n\in\mathbb{N}$) is a sequence of Dedekind cuts; we want to build a Dedekind cut not in this sequence.
Fix an enumeration $(s_i)_{i\in\mathbb{N}}$ of $\mathbb{Q}$.
We define a pair of sequences of rationals $p_i, q_i$ as follows:
$p_0$ is any element of $A_0$, $q_0$ is any element of $B_0$.
Having defined $p_i, q_i$, we now define $p_{i+1}, q_{i+1}$ as follows:
If $(A_{i+1}, B_{i+1})$ is not between $p_i$ and $q_i$, then we pick any rationals $p_{i+1}, q_{i+1}$ with $p_i<p_{i+1}<q_{i+1}<q_i$ with $s_i\not\in (p_{i+1}, q_{i+1})$.
If $(A_{i+1}, B_{i+1})$ is between $p_i$ and $q_i$, we let $q_{i+1}$ be some element of $A_{i+1}$ in $(p_i, q_i)$, and let $p_{i+1}$ be some rational such that $s_i\not\in (p_{i+1}, q_{i+1})$.
It's an easy exercise to show that this construction does in fact give rationals $p_0<p_1<p_2<...<q_2<q_1<q_0$ and that there is no rational contained in every $(p_i, q_i)$ (think about how we handled $s_i$). Letting $$A=\{s\in\mathbb{Q}: s<p_i\mbox{ for some $i$}\},\quad B=\{s\in\mathbb{Q}: s>q_i\mbox{ for some $i$}\}$$ we get that $(A, B)$ is a Dedekind cut not equal to any $(A_n, B_n)$.
Note that what's really going on here is that completeness is essentially built into Dedekind cuts automatically.
A point which may look fishy in the above is the use of arbitrary choices. However, this is really just a device to make the proof more readable, and is easily avoided: we can always just pick an appropriate rational with minimal index according to the enumeration $(s_i)_{i\in\mathbb{N}}$, and such an enumeration can be given explicitly. |
Using Bayesian Networks to find Probability | $\newcommand{\pass}{\operatorname{pass}}\newcommand{\smart}{\operatorname{smart}}
\newcommand{\study}{\operatorname{study}}\newcommand{\prepared}{\operatorname{prepared}}\newcommand{\fair}{\operatorname{fair}}\newcommand{\plus}[1]{{^+\!}\color{navy}{#1}}\newcommand{\minus}[1]{{^-\!}\color{purple}{#1}}$
I'm not sure how to isolate $P(\plus\pass\mid\plus\smart)$ when what we're given is $P(\pass\mid\smart,\prepared,\fair)$
From the diagram, $\fair$ is independent of $\prepared$ and $\smart$, so we use the Law of Total Probability:
$$\begin{align}\mathsf P(\plus\pass\mid\plus\smart)~&=~{\mathsf P(\plus\pass\mid\plus\smart,\plus\prepared,\plus\fair)~\mathsf P(\plus\prepared\mid\plus\smart)~\mathsf P(\plus\fair)\\+\mathsf P(\plus\pass\mid\plus\smart,\plus\prepared,\minus\fair)~\mathsf P(\plus\prepared\mid\plus\smart)~\mathsf P(\minus\fair)\\+\mathsf P(\plus\pass\mid\plus\smart,\minus\prepared,\plus\fair)~\mathsf P(\minus\prepared\mid\plus\smart)~\mathsf P(\plus\fair)\\+\mathsf P(\plus\pass\mid\plus\smart,\minus\prepared,\minus\fair)~\mathsf P(\minus\prepared\mid\plus\smart)~\mathsf P(\minus\fair)}\end{align}$$
On the other hand, $\prepared$ is dependent on $\smart$ and $\study$, while they are independent of each other.
$$\begin{align}\mathsf P(\plus\prepared\mid\plus\smart)~&=~{\mathsf P(\plus\prepared\mid\plus\smart,\plus\study)~\mathsf P(\plus\study) \\+ \mathsf P(\plus\prepared\mid\plus\smart,\minus\study)~\mathsf P(\minus\study)} \\[2ex]\mathsf P(\minus\prepared\mid\plus\smart)~&=~{\mathsf P(\minus\prepared\mid\plus\smart,\plus\study)~\mathsf P(\plus\study) \\+ \mathsf P(\minus\prepared\mid\plus\smart,\minus\study)~\mathsf P(\minus\study)}\end{align}$$
PS: Also, as $$\mathsf P(\plus\smart\mid\plus\pass) = \dfrac{\mathsf P(\plus\pass\mid\plus\smart)\mathsf P(\plus\smart)}{\mathsf P(\plus\pass)}$$
...then, you are not quite done.
$$\begin{align}\mathsf P(\plus\pass)~&=~{\mathsf P(\plus\pass\mid\plus\smart)~\mathsf P(\plus\smart)+\mathsf P(\plus\pass\mid\minus\smart)~\mathsf P(\minus\smart)}\end{align}$$
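To make this concrete, here is a small Python sketch of the two marginalizations above, with made-up CPT values (the actual numbers from your exercise are not reproduced here):

# All numbers below are made up for illustration only.
P_smart, P_study, P_fair = 0.8, 0.6, 0.9
P_prep = {(True, True): 0.9, (True, False): 0.7,     # P(+prepared | +smart, study?)
          (False, True): 0.6, (False, False): 0.2}   # P(+prepared | -smart, study?)
P_pass = {(True, True): 0.9, (True, False): 0.5,     # P(+pass | +smart, prepared?, fair?)
          (False, True): 0.7, (False, False): 0.3}

# P(+prepared | +smart): marginalize over study
p_prep = P_prep[(True, True)] * P_study + P_prep[(True, False)] * (1 - P_study)

# P(+pass | +smart): marginalize over prepared and fair
p_pass = sum(P_pass[(prep, fair)]
             * (p_prep if prep else 1 - p_prep)
             * (P_fair if fair else 1 - P_fair)
             for prep in (True, False) for fair in (True, False))
print(p_pass)
|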
Clarification needed on ordering $\mathbb{Z}[X]$ in the language of arithmetic. | As you pointed out, it's not entirely clear what the author meant but I have a good feeling that I know what they had in mind:
$L_{A}$ - the language of arithmetic - includes two constant symbols $\dot{0},\dot{1}$, one (or two, depending on your convention) relation symbols $\dot{\prec}$ (and $\dot{=}$), and two function symbols $\dot{\circ}$ and $\dot{+}$. If we want to consider $\mathbb Z[X]$ as an $L_{A}$-structure, we must decide how to interpret all these symbols in this structure. The interpretation of $\dot{0}, \dot{1}$ should be clear (it's the zero polynomial and the constant polynomial with value $1 \in \mathbb Z$ respectively). The interpretation of $\dot{\circ}, \dot{+}$ is given by polynomial multiplication and addition respectively. The interpretation of $\dot{=}$ is equality.
What remains now is the interpretation of $\dot{\prec}$. Call this interpretation $<$. We want that $(\mathbb Z[X]; 0,1, +, \circ, \prec)$ is an ordered ring and here is what we can do to achieve this.
Let $f, g \in \mathbb Z[X]$ be constant polynomials. We let $$f \prec g : \iff f(0) \lt g(0), $$ where $\lt$ is the natural strict order of $\mathbb Z$.
For any constant polynomial $f \in \mathbb Z[X]$, we let $f \prec X$.
We now expand $\prec$ to all polynomials via the rules for ordered rings, i.e. if $f \prec g$ and $0 \prec h$ then $f \circ h \prec g \circ h$ and the second rule: If $f \prec g$ then for all $h \in \mathbb Z[X] \colon f + h \prec g + h$.
Evaluating $\prec$ with only this information given can be quite painful so let me spare you some time and tell you that
$$
f \prec g \iff g-f \text{ has positive leading coefficient}.
$$
It's a good exercise to prove that this equivalence holds. |
How to prove that $\mathbb{R}$ is $T_1$? | Let $x,y\in \mathbb{R}, x\neq y$. Take $0<r<\frac{|x-y|}{2}$. Then you can take $U=B(x,r)$ and $V = B(y,r)$, which are disjoint. This proves even more: $\mathbb{R}$ is $T_2$. |
Show that a function has directional derivatives at a point but is not differentiable there. Need help understanding proof | I take the question to be "How did someone come up with the particular choice of $x^2$ and $x$ to plug into $g$ and discover discontinuity?" Of course, not being a mind-reader, I don't know how it was actually done, but here's how it might have been done. The key observation is that, in the fraction that defines $g(x,y)$, the numerator $xy^2$ is the geometric mean of the two terms in the denominator, $x^2$ and $y^4$. (That might sound complicated, but it really just means that the exponents of $x$ and $y$ in the numerator are the averages of their exponents in the two terms of the denominator: For the exponents of $x$, $1$ is the average of $2$ and $0$, and for the exponents of $y$, $2$ is the average of $0$ and $4$.) So if we give $x$ and $y$ values that make the two denominator terms equal, then the numerator will automatically be equal to them also. So the numerator will match each of the terms in the denominator, and the fraction will be $1/2$. If we can find such values for $x$ and $y$ arbitrarily close to $0$, then that will make $g$ discontinuous at $(0,0)$. The easiest way to achieve this, meaning to make $x^2=y^4$, is to give $y$ an arbitrary value, say $t$, and to set $x=t^2$. So you set $(x,y)=(t^2,t)$ to find points, as close to $(0,0)$ as you like if $t$ is very small, where $g$ takes the value $1/2$.
Finally, if you're in a nasty mood, you re-name the variable $t$ as $x$, even though it serves as the value to substitute for $y$, just to confuse readers.
P.S. If you know how to use a program like Mathematica, you can have it plot the graph of $g$, and you'll probably be able to see a sort of a ridge in the graph, over the parabola $x=y^2$, at height $z=1/2$. So this might be another way to "guess" the substitution that proves discontinuity of $g$ at $(0,0)$.
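If Mathematica isn't at hand, a matplotlib sketch along the same lines (my own plotting choices, not from the original) also shows the ridge:

import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-1, 1, 400), np.linspace(-1, 1, 400))
g = x * y**2 / (x**2 + y**4)     # this grid does not contain (0,0) itself

ax = plt.figure().add_subplot(projection='3d')
ax.plot_surface(x, y, g, cmap='viridis', linewidth=0)
plt.show()                        # the ridge at height 1/2 sits over x = y^2
|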
How to prove that the subset of polynomials $A=\{\sum_{k=0}^n{a_kx^{2k}}:a_i\in\mathbb{R},n\in\mathbb{N} \}$ is dense in $C[0,1]$ | Let $f\in C([0,1])$ and consider $g(x)=f(\sqrt{x})\in C([0,1])$. By Weierstrass approximation theorem, there is a sequence of polynomials such that $\|p_n-g\|_{\infty}\to 0$. This gives $\|p_n(x^2)-g(x^2)\|_{\infty}\to 0$, but $p_n(x^2)$ belongs to $A$ and $g(x^2)=f(x)$, so $A$ is dense in $C([0,1])$. |
Maximum amount willing to gamble given utility function $U(W)=\ln(W)$ and $W=1000000$ in the game referred to in St. Petersberg's Paradox? | I see no flaw in your reasoning. Unfortunately it leads to an equation that you (and I, and maybe everyone) cannot solve explicitly, so you might need to solve it numerically. This can be done using OpenOffice Calc for instance, and shows that the maximal $F$ is between 10.935 and 10.94, hoping that I encoded the formula right (the computations run over $h$ going from 0 to 98).
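Here is a sketch of the same computation in Python, assuming the standard setup (the game pays $2^{k-1}$ with probability $2^{-k}$, and the maximal fee $F$ solves $\sum_k 2^{-k}\ln(W-F+2^{k-1})=\ln W$):

import math

W = 1_000_000

def expected_utility(F, terms=200):
    return sum(2.0**-k * math.log(W - F + 2.0**(k - 1))
               for k in range(1, terms + 1))

lo, hi = 0.0, 100.0               # bisect: EU(lo) >= ln W > EU(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if expected_utility(mid) >= math.log(W):
        lo = mid
    else:
        hi = mid
print(lo)                          # approximately 10.94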
Notice that the wikipedia article on the paradox treats explicitly the question of the maximum price a millionaire would be ready to pay with log utility (under the section "Solving the paradox") and gets the same result of $10.94, which is reassuring. If, as I did when I first saw it, you have a doubt on whether the formula in the wikipedia article is the same as the one you propose, see this question : Mistake in Wikipedia article on St Petersburg paradox? |
Average Employment Time (Poisson Distribution) | (a) Hint - If you work for the company, the odds of leaving within a year are $\frac{744}{2418}$. So your odds of still being there are $\frac{1674}{2418}$.
If you are still there after a year, the odds of leaving or staying during the next year are the same as in the first year. That is, the odds of you being there after $2$ years is $\left(\frac{1674}{2418}\right)^2$.
You can work with fractional years. If 744 people leave per year, that works out to around $2$ per day.
You can see that pattern, and figure out the odds after n years. How big does n have to be to make the odds $50$%?
(b) looks right to me. |
The annihilator of an intersection is the sum of annihilators | There is a reason why the first relation
$$(M+N)^\circ =M^\circ \cap N^\circ \tag1$$
is easier to prove than
$$(M\cap N)^\circ =M^\circ +N^\circ\tag2$$
Indeed, (1) is true for arbitrary vector spaces. (The proof is short: a linear function annihilates $M+N$ if and only if it annihilates both $M$ and $N$.)
But (2) fails for general vector spaces (it holds for finite dimensional ones). More specifically, the inclusion $(M\cap N)^\circ \supseteq M^\circ +N^\circ$ is true in general vector spaces, but not the reverse inclusion. So, any proof of (2) must use the finiteness of dimension somewhere. It cannot be a twin of the proof of (1).
(As gerw said, (2) can be made true in full generality by replacing $M^\circ +N^\circ$ with $\overline{M^\circ +N^\circ}$.) |
How can I solve for the general solution of a second ODE using Laplace transform? | Since solving linear ODEs by Laplace transform involves the terms $y^{(n)}(0)$, if they appear without given values, don't panic; just denote them by arbitrary symbols.
However, in those situations Laplace transforms are often not the best choice, since inverse Laplace transforms are often very difficult to compute. For homogeneous linear ODEs there is a method that avoids inverse transforms entirely and only requires a suitable choice of constants: the so-called "kernel method". For example, homogeneous linear ODEs of the type $\sum\limits_{k=0}^n(a_kx+b_k)y^{(k)}(x)=0$ can in many cases be solved by assuming an integral kernel of the form $y=\int_Ce^{xs}K(s)~ds$ . Particular solution to a Riccati equation $y' = 1 + 2y + xy^2$ and Help on solving an apparently simple differential equation are good examples. |
Evaluate the path integral of $f(x, y) = y$ over the graph of the semicircle $y = \sqrt{1-x^2}, -1 \leq x \leq 1$ | You can also parameterise the curve: $x=\cos t$, $y=\sin t$.
$$r'(t) = \langle -\sin t,\cos t\rangle$$
$$\|r'(t)\| = 1$$
$$F = \sin t$$
$$I = \int_{0}^{\pi} \sin t\,dt = -\cos t\,\Big|_{0}^{\pi} = 2$$ |
Is having models with ever increasing cardinality of the power set of $\omega$ is a theorem of ZFC? | Again, you're conflating theories and objects: if $\alpha$ is an arbitrary ordinal in some model $M$ there's no reason for "$2^\omega=\aleph_\alpha$" to be in any sense expressible by a first-order sentence in the language of set theory. So while we can ask whether that expression is satisfied in a structure containing $\alpha$, it doesn't make sense to ask whether ZFC proves it (or anything involving it).
I think the right way to ask the intuitive question you have in mind is:
Does ZFC prove that, whenever $M\models$ ZFC and $\alpha\in Ord^M$, there is some $N\models$ ZFC with $Ord^M=Ord^N$ and $N\models 2^\omega>\aleph_\alpha$?
If we add the assumption that $M$ is countable, this is easy: forcing over arbitrary countable models can be formalized inside ZFC, so just consider the forcing adding $\aleph_{\alpha+1}$-many Cohen reals.
For uncountable models, however, the statement is clearly false: we could have an $\omega$-model which literally contains every real, so we clearly can't push the continuum any higher. |
Prove $\sum_{i=1}^n\frac{1}{\sqrt{i}}\le\frac{1}{\sqrt{n!}}\prod_{i=2}^n(\sqrt{i-1})+2\sum_{i=2}^n\frac{1}{\sqrt{i}}$ using Weierstrass inequality | Hint:
Since
$$
\sum_{k = 1}^{n} \frac{1}{\sqrt{k}} = 1 + \sum_{k = 2}^{n} \frac{1}{\sqrt{k}},
$$
we can rewrite the inequality, which has to be shown (by subtracting $2 \sum_{k = 2}^{n} \frac{1}{\sqrt{k}}$ from both sides) as
$$
1 - \sum_{k = 2}^{n} \frac{1}{\sqrt{k}}
\le \frac{1}{\sqrt{n!}} \prod_{k = 2}^n (\sqrt{k-1}).
$$
By the Weierstrass inequality (since $\frac{1}{\sqrt{k}} \in (0,1]$ for all $k \in \mathbb{N}_{>0}$) we have
$$
1 - \sum_{k = 2}^{n} \frac{1}{\sqrt{k}}
\le \prod_{k = 2}^{n} \left(1 - \frac{1}{\sqrt{k}}\right)
= \prod_{k = 2}^{n} \frac{\sqrt{k} - 1}{\sqrt{k}}
= \frac{1}{\sqrt{n!}} \prod_{k = 2}^{n} (\sqrt{k}-1)
\le \frac{1}{\sqrt{n!}} \prod_{k = 2}^{n} \sqrt{k - 1},
$$
where the last equality is achieved by pulling the denominators out of the product (note that $\prod_{k=2}^{n}\sqrt{k}=\sqrt{n!}$) and the last inequality uses the fact that $\sqrt{k}-1 \le \sqrt{k - 1}$ for all $k \in \mathbb{N}_{>1}$, which you can easily verify either by induction or by drawing a graph. |
Prove that $\frac{1}{x^2}$ is continuous for any $x \in (0, \infty)$. | The $\frac{x_0}2$ is required for proving continuity from below.
It is worth noting that continuity from above is somewhat easier to show because (for $\delta \gt 0$):
$$
\frac1{x_0^2} - \frac1{(x_0+\delta)^2} = \frac{\delta^2+2\delta x_0}{x_0^2(x_0+\delta)^2} \lt \frac{2\delta^2+2\delta x_0}{x_0^2(x_0+\delta)^2} = \frac{2\delta}{x_0^2(x_0+\delta)} \lt \frac{2\delta}{x_0^3}
$$
For continuity from below we have:
$$
\frac1{(x_0-\delta)^2}-\frac1{x_0^2}= \frac{2\delta x_0-\delta^2}{x_0^2(x_0-\delta)^2}
$$
The simplification now requires $\delta \lt \frac{x_0}2$, in which case we have
$$
\frac{2\delta x_0-\delta^2}{x_0^2(x_0-\delta)^2} \lt \frac{2\delta x_0}{x_0^2(x_0-\delta)^2} \lt \frac{2\delta x_0}{x_0^2(\frac{x_0}2)^2}=\frac{8\delta}{x_0^3}
$$ |
How to find the volume of a cube remaining after drilling a cylinder with a diameter larger than the cube's side length? | Due to symmetry, you can restrict your calculations to one quadrant. In your figure for example, look at the square from $0$ to $10$. Take the area of the piece outside the circle, multiply by the height of the cube, and by $4$ (you have 4 identical pieces), and you get the volume.
The only issue is how to calculate the area of the small piece. Let's name the corners of the square in the first quadrant $OABC$, with $O$ at the origin, and $A$ along the horizontal axis, and let's call $D$ and $E$ the intersections of the circle with this square. The area you require is then the area of the square minus the area inside the circle. We divide this inside area into three pieces. You have two right-angle triangles $OAD$ and $OEC$ of equal area. You know the hypotenuse (the radius of the circle) and one of the sides (the side of the square in the first quadrant = half of the side of your original cube). You can use Pythagoras' theorem to calculate the other side, then the area. You can also calculate the angle $\angle AOD$ from the same triangle, using the $\arccos$ function. The only piece left is a sector of a circle. Its area is half the radius squared, multiplied by the angle $\angle DOE$. Since you calculated $\angle AOD$, you get $$\angle DOE=\pi/2-2\angle AOD$$
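Here is a sketch of the whole computation in Python, with assumed numbers (cube side 20, so the quadrant square runs from 0 to 10 as in your figure, and drill radius 12, which can be any value between $10$ and $10\sqrt2$):

import math

a = 10.0                             # half the cube side (quadrant square side)
h = 2 * a                            # height of the cube
r = 12.0                             # drill radius (assumed)

leg = math.sqrt(r**2 - a**2)         # other side of triangle OAD (Pythagoras)
triangles = 2 * (0.5 * a * leg)      # two equal right triangles OAD and OEC
angle_AOD = math.acos(a / r)
angle_DOE = math.pi / 2 - 2 * angle_AOD
sector = 0.5 * r**2 * angle_DOE      # sector area = (1/2) r^2 * angle
outside = a**2 - (triangles + sector)

print(4 * h * outside)               # the remaining volume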
You now have all the information to finish the problem. |
Sphere packing in a sphere | Actually, it's the same, though the parameters indicated in the tables are different: maximum radius of inner spheres $r$ in the 1st table, enclosing circle diameter $R$ in the 2nd table.
$$r=(2\sqrt{3}-3)R$$
If $r=1$, then
$$R=\frac{r}{2\sqrt{3}-3}=\frac{1}{2\sqrt{3}-3}=\frac{2\sqrt{3}+3}{12-9}=1+\frac{2}{\sqrt{3}}$$
Sorry for non-decent format, I'm a rookie in that. |
Pulling balls out of a box | Revised answer
Part 2 is easy, $\dfrac {r}{r+b}$, because last being red is the same as 1st being red by symmetry.
For part 1, let $n = b+r$. If ball $\#k$ is the 1st red, then the $r-1$ remaining red balls lie among the $n-k$ balls after it,
thus $\Pr = {n-k\choose r-1}\Big/{n \choose r}$. |
The equation of the surface in cylindrical coordinates: ${r^{2}-2z^{2}=4r\cosθ-8r\sinθ-12z}$ . What is the equation in perpendicular coordinates? | You have a mistake in a sign in your last step:
$$(x-2)^2+(y+4)^2{\color{red}-}2(z-3)^2=2.$$
So the surface is a hyperboloid of one sheet (see https://en.wikipedia.org/wiki/Quadric):
$$\frac{(x-2)^2}{(\sqrt{2})^2}+\frac{(y+4)^2}{(\sqrt{2})^2}-\frac{(z-3)^2}{1^2}=1.$$ |
Fraction inequality $|(\Delta+a)/(\Delta+b)|<\varepsilon$ | No, you can't - unless $a=0$. For otherwise in particular $\Delta=0$ will lead to $\left|\frac{\Delta+a}{\Delta+b}\right|=\left|\frac ab\right|>\epsilon$ as soon as we pick $\epsilon$ small enough. |
Derivative of $\frac { y }{ x } +\frac { x }{ y } =2y$ with respect to $x$ | Assuming your $y'$ is correct, we should get rid of the compound fractions: $$y'=\frac{\frac{-y}{x^2}+\frac{1}{y}}{2-\frac{1}{x}+\frac{x}{y^2}}$$ Now we multiply top and bottom by $x^2y^2$, the lcm of the denominators of the mini-fractions: $$y'=\frac{-y(y^2)+1(x^2)(y)}{2x^2y^2-xy^2+x(x^2)}$$ $$y'=\frac{-y^3+x^2y}{2x^2y^2-xy^2+x^3}$$ |
'Strange' trigonometric roots of $x^5-4x^4+2x^3+5x^2-2x-1$ - could someone explain? | Using the equality $\cos(\pi \pm x)=-\cos(x)$ you get
$$x_1=-\frac{\cos \frac{3 }{22} \pi}{\cos \frac{1}{22} \pi} \\
x_2=-\frac{\cos \frac{9}{22} \pi}{\cos \frac{3}{22} \pi} \\
x_3=-\frac{\cos \frac{15}{22} \pi}{\cos \frac{5}{22} \pi} \\
x_4=-\frac{\cos \frac{21}{22} \pi}{\cos \frac{7}{22} \pi} \\
x_5=-\frac{\cos \frac{27}{22} \pi}{\cos \frac{9}{22} \pi}
$$
Note that the identity
$$\cos(3x)= \cos(x) [2 \cos(2x)-1]$$
gives you a nicer form for the roots. With this form, after the proper substitution you should be able to reduce your polynomial to the minimal polynomial of $\cos(\frac{\pi}{11})$, which will explain where the roots are coming from. |
Is $\mathbb{C}[x,y]/(x^3+y^3−1)$ is a UFD or not? | Hint. Let $\alpha$ be the class of $x$ and let $\beta$ be the class of $y$.
Show that $\alpha$ is irreducible, but that $(\alpha)$ is not a prime ideal, for example. For the last part, notice that $\alpha^3=(1-\beta)(1+\beta+\beta^2)$. |
Changing a double integral into a single integral - Volterra-type integral equations | There is an issue of confusion between the dummy variable of integration $t$ and the upper limit on the outer integral. Instead, write
$$F(t)=\int_0^t\int_0^s y(x)\,dx\,ds\tag1$$
Then, note that the region $0\le x\le s$, for $0\le s\le t$ is a triangular shaped region with vertices in the $(x,s)$-plane at $(0,0)$, $(0,t)$, and $(t,t)$.
So, this triangular region is also defined by $x\le s\le t$, for $0\le x\le t$. Thus, we can write $(1)$ as
$$F(t)=\int_0^t\int_x^t y(x)\,ds\,dx\tag2$$
But note that in $(2)$, $y(x)$ is independent of $s$. So, we can "take $y(x)$ outside the inner integral" to obtain
$$F(t)=\int_0^t y(x)\int_x^t 1\,ds\,dx=\int_0^t (t-x)y(x)\,dx$$
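A quick sympy check of the identity for the concrete choice $y(x)=x^2$ (both sides come out to $t^4/12$):

from sympy import symbols, integrate

t, s, x = symbols('t s x', nonnegative=True)
y = x**2
double = integrate(integrate(y, (x, 0, s)), (s, 0, t))
single = integrate((t - x) * y, (x, 0, t))
assert (double - single).simplify() == 0   # both equal t**4/12
|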
What's the meaning of "decreasing filtration"? | It's explained in the line following the term. A sequence of indexed objects (I'd assume subsets of $H^n$ in this case) with the inclusion relation "$\supset$". The word decreasing refers to the direction of the inclusion. (If you'd replace "$\supset $" by "$\subset $" it would be increasing. )
(Sometimes additional requirements are imposed, e.g. that $H^n$ is the limit (in whatever sense) of the sequence in the increasing direction or $\{0\}$ the limit in the other direction, but I'd expect this to be stated).
If additional structure is relevant, then usually one wants the sequence to respect it, e.g. in case of vector spaces you'd require that the inclusion relation is actually a subspace relation, in case of groups you'd want to have subgroups etc.
See here: https://en.wikipedia.org/wiki/Filtration_(mathematics) |
number of subgroups of $\mathbb{Z}_p \oplus \mathbb{Z}_p \oplus \mathbb{Z}_p$ | You can consider this group a vector space over the field $\mathbb{Z}_p$. Your question thus reduces to this question How to count number of bases and subspaces of a given dimension in a vector space over a finite field? |
Decomposition of Torsion Module | This is true in a more general setting. Let $R$ be a principal ideal domain, $M$ a torsion $R$-module, and let $p$ be a prime element of $R$. Define
$$M_1 = \{a\in M \ | \ p^n a = 0 \textrm{ for some } n\in \mathbb{N} \};$$
$$M_2 = \{a\in M \ | \ qa=0 \textrm{ for some } q\in R \textrm{ coprime with } p \}.$$
Now, let $a\in M$. Since $M$ is a torsion module, there exists a non-zero $z\in R$ such that $za=0$. Write $z=p^nq$, where $n$ is the greatest integer such that $p^n$ divides $z$, so that $q$ is coprime with $p$. By Bézout's identity, there exist $\alpha, \beta\in R$ such that $\alpha p^n + \beta q = 1$. Then
$$ a = \alpha p^n a + \beta q a,$$
with $\alpha p^n a \in M_2$ and $\beta q a\in M_1$. Moreover $M_1\cap M_2=0$: if $a$ is killed both by some $p^n$ and by some $q$ coprime with $p$, then Bézout again gives $a=(\alpha p^n+\beta q)a=0$. This finishes the proof.
You can then apply this to the special case $R=k[X]$ and $p=X$. |
How many vertices of degree 3 or more can a tree have at most? | If we denote the number of vertices as $n$, from the hand shaking lemma and the fact that the number of edges in a tree is $n-1$, we have
$$\sum_{v \in V} d_v = 2 \vert E \vert = 2(n-1)$$
If $s_k$ vertices have degree at least $k$, then we have
$$2(n-1) = \sum_{v \in V} d_v \geq s_k \times k +(n-s_k)$$
Hence, we get that
$$s_k(k-1) \leq n-2 \implies s_k \leq \dfrac{n-2}{k-1} \implies s_k \leq \left\lfloor\dfrac{n-2}{k-1} \right\rfloor$$
This bound is tight, i.e., you can construct a tree with exactly $\left\lfloor\dfrac{n-2}{k-1} \right\rfloor$ vertices of degree $k$.
Hence, in the case of $k=3$, we have $s_3 \leq \left\lfloor\dfrac{n}2 \right\rfloor - 1$.
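For instance, for $k=3$ a caterpillar (a path with one extra leaf on each internal vertex) attains the bound; here is a small Python sketch verifying the count (my own construction, one of many that work):

def caterpillar(m):
    """Path 0..m-1, plus one leaf attached to each internal path vertex."""
    edges = [(i, i + 1) for i in range(m - 1)]
    leaf = m
    for i in range(1, m - 1):
        edges.append((i, leaf))
        leaf += 1
    return leaf, edges                       # (number of vertices, edge list)

n, edges = caterpillar(7)                    # n = 12 vertices, 11 edges: a tree
deg = [0] * n
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
print(sum(d >= 3 for d in deg), n // 2 - 1)  # both equal 5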
I hope I understood your question correctly. |
Interior Product and Pullback Properties | Recall that $(f^*\omega)_y(Y_1,\dots,Y_k)=\omega_{f(y)}(df_y(Y_1),\dots,df_y(Y_k))$ and
$(i_X\omega)_x(X_1,\dots,X_{k-1})=\omega_x(X,X_1,\dots,X_{k-1})$. We deduce that
$$(f^*i_X\omega)_y(Y_1,\dots,Y_{k-1})=\omega_{f(y)}(X(f(y)),df_y(Y_1),\dots,df_y(Y_{k-1})).$$
On the other hand, since ($f$ being a diffeomorphism) $(f^*X)(y)=(df_y)^{-1}X(f(y))$, we deduce that
$$(i_{f^*X}f^*\omega)_y(Y_1,\dots,Y_{k-1})=(f^*\omega)_y\big((df_y)^{-1}X(f(y)),Y_1,\dots,Y_{k-1}\big)$$
$$=\omega_{f(y)}\big(df_y(df_y)^{-1}X(f(y)),df_y(Y_1),\dots,df_y(Y_{k-1})\big)$$
$$=\omega_{f(y)}(X(f(y)),df_y(Y_1),\dots,df_y(Y_{k-1})),$$
which agrees with the first expression, so $f^*(i_X\omega)=i_{f^*X}(f^*\omega)$. |
An Elliptic Curve Question | In the cases you are interested in, the cubic has a single root and a double root, and so its discriminant is zero. That is, $4a^3+27b^2=0$, since the Wikipedia article on the discriminant states, in the case of a cubic:
The discriminant is zero if and only if at least two roots are equal.
If the curve crosses itself, then two roots approach each other and coincide. In other words, the equation has a double root. |
$n$ as an integer and a real number | I read your question as one about intuition on numbers, and I will answer from that point of view.
There are many equivalent ways to construct different kinds of numbers.
For example, you can build the reals with Dedekind cuts or with Cauchy sequences.
If you construct the natural numbers first and then expand to integers, rationals, and reals, you have a large number of options.
As others have pointed out, at each step the previous kind of numbers can be seen as a subset of the new ones via an embedding.
The exact embedding depends on your choice of constructions.
I see all of this as formalization or rigorous definition of the various kinds of numbers, not the essence of numbers.
The way I see numbers is different.
I might think of numbers as quantities, points on the geometrical real line, or something else.
I think of them flexibly in different ways.
To me Dedekind cuts and the like come from the intuitive meaning of a number and are secondary.
While rigorous definitions are crucial for building a sound theory, on an intuitive level I see definitions as descriptions.
The number two can be thought of as a Dedekind cut, a supremum of a set of rationals, the formal limit of a Cauchy sequence, the number of elements in some finite sets, the set consisting of the empty set and the set containing the empty set, the successor of one, the ratio or difference of two naturals or integers, or something yet different.
It is useful to see the same thing in many ways. |
Conditions for simplified rational expressions | The fraction $\frac{x^2+6x+5}{x^2-x-2}$ is defined when $x$ is not $2$ or $-1$. Its numerator vanishes at $-5$ and $-1$; however, the fraction is not defined at $-1$, so only $-5$ is a root. What you call the "end result" is that $\frac{x+5}{x-2}$ is already not defined at $2$. So depending on how the original question was formulated, you might or might not say that $x$ can't be $2$. Also, you didn't provide the actual question, so we can't know. |
Prove that number of subgroups of order p equals the number of subgroups of index p. | Both $A_p$ and $A/A^p$ have exponent $p$, so, as they have the same
order, they are both isomorphic to $C_p^n$ for some $n$.
Each order $p$ subgroup of $A$ is contained in $A_p$, and there
are $(p^n-1)/(p-1)$ of these. Each index $p$ subgroup of $A$
corresponds to an index $p$ subgroup of $A/A^p$. These subgroups
are kernels of non-zero homomorphisms from $A/A^p$ to $C_p$
and there are $p^n-1$ of these homomorphisms. But two of these
homomorphisms have the same kernel iff they differ by a scalar
factor, so there are $(p^n-1)/(p-1)$ of these kernels. |
Harmonic functions constant on circumferences | Identifying $\mathbb{R}^2$ with $\mathbb{C}$ as usual, if we compose a harmonic function $h$ on $\mathbb{C}\setminus \{0\}$ with the exponential function, we obtain a harmonic $2\pi i$-periodic function $g$ on $\mathbb{C}$.
If $h$ is constant on circles with centre $0$, then $g$ is constant on the lines $\operatorname{Re} z = \operatorname{const}$, so $g$ is an entire harmonic function depending only on $\operatorname{Re} z$. That means
$$g(z) = a + b\operatorname{Re} z$$
for some constants $a,b$. It remains to write $\operatorname{Re} z$ as a function of $e^z$ to find $h$. |
Functions satisfying $f(x+y)+f(x-y)=2f(x)g(y)$ | Let $M$ be the supremum of $|f(x)|$ on $\Bbb R$. For $0 < \epsilon < M$
choose $x \in \Bbb R$ such that $|f(x)| > M - \epsilon$.
Then
$$
|g(y)| = \frac{|f(x+y) + f(x-y)|}{2|f(x)|} \le \frac{M}{M - \epsilon} \, .
$$
With $\epsilon \to 0$ it follows that $|g(y)| \le 1$.
The bound for $|g|$ is best possible, as the examples $f(x) = 1$, $g(y) = 1$, or $f(x) = \sin(x)$, $g(y) = \cos(y)$ show.
It is also necessary to require that $f$ is bounded, otherwise
$f(x) = e^x$, $g(y) = (e^y + e^{-y})/2$ would be a counter-example. |
Geometric Progression Word Question | It's not entirely obvious which formula (geometric/arithmetic/etc) to use at the beginning of the problem, so it makes sense to do some scratch work to figure out what is going on.
The $7$ times that the dog runs back and forth from the house make things complicated. What happens in the first run to the house?
Callum and the dog are both $36$ meters from the house. The dog runs to the house ($36$ meters) and then back to Callum. Since Callum has continued to walk this is $36$ meters plus the extra distance that Callum has walked.
Let $d$ be the distance that Callum walks, then the distance the dog goes is $72+d$. Since the dog goes at $4$ times Callum's speed, this means that $4d=72+d$. Hence, $d=24$ meters. Therefore, the second run starts at $36+24=60$ meters from the house.
Now, you could replicate this calculation $6$ more times or (better) try to generalize. If Callum and the dog are at a distance of $h$ from the house, how far does Callum walk in one run of the dog? |
Prove the following sequence diverges to infinity using the definition of convergence | Hint:
To find a lower bound for a ratio, it is enough to find a lower bound for the numerator and an upper bound for the denominator. |
MAGMA: One-sided ideals in finitely presented algebras | Try the following:
K := RationalField();
F<x> := FreeAlgebra(K,1);
A := quo<F|x^3>;
I := ideal<A|x^2>;
MAGMA sometimes gives problems if you redefine the x. Also, since your algebra is commutative, the rideal and lideal constructors are not available. |
Martingales, martingale transform, $L_2$ norm and $\textbf{Itô′s isometry}$. | This is mostly right, but it looks like you might be confusing continuous time and discrete time somewhat. The martingale transform you write, $(C \circ X)_n = \sum_{k=1}^n C_k(X_{k}-X_{k-1})$, is generally defined for discrete time processes so I'm a little confused about why you mention continuous martingales in your definition of $\mathcal{X}_0^{2,c}$. This isn't too big of a problem, but it makes it somewhat confusing when you talk about simple processes since in discrete time all processes are simple processes.
I also don't think I've heard something be called a simple predictable process of another process before. It is not the case that every simple predictable process is in $L^2(X)$. However, if we know that $C$ is predictable and $\|C\|_{L^2(X)} < \infty$ then Ito's isometry applies and $\|C\|_{L^2(X)} = \|C\circ X \|_{\mathcal X_0^{2,c}}.$ |
Analytic continuation of power series of holomorphic with real nonnegative coefficients | Assuming $f$ is analytic at $1$, since everything is increasing $$f^{(k)}(1)=\lim_{z\to 1^-}f^{(k)}(z)= \lim_{z\to 1^-} \sum_{n=k}^\infty a_n \frac{n!}{(n-k)!} z^{n-k}=\sum_{n=k}^\infty a_n \frac{n!}{(n-k)!}$$
Next we are told that for any $s>0$ $$\sum_{n=0}^\infty a_n (1+s)^n= \infty$$
Everything is non-negative so we can change the order of summation as we want obtaining
$$\infty=\sum_{n=0}^\infty a_n \sum_{k=0}^n s^k\frac{ n!}{k!(n-k)!} = \sum_{k=0}^\infty \frac{s^k}{k!} \sum_{n=k}^\infty a_n\frac{n! }{(n-k)!}=\sum_{k=0}^\infty \frac{s^k}{k!} f^{(k)}(1)$$
contradicting that $f$ is analytic at $1$. |
On a property of a uniformly bounded sequence of holomorphic functions on $D$ | I suppose $D=\{|z|<1\}$.
Our sequence $\{f_n\}_n$ of holomorphic functions on $D$ is uniformly bounded, so by Montel's theorem it admits some converging subsequence $\{f_{n_\nu}\}_\nu$: let's call $f$ the limit function.
Since $f_n^{(k)}(a)\stackrel{n\to+\infty}{\longrightarrow}0$ for all $k\ge1$, then this condition holds even for the subsequence $f_{n_\nu}^{(k)}(a)\stackrel{\nu\to+\infty}{\longrightarrow}0$ for all $k\ge1$; but it's clear that $f_{n_\nu}\to f$ wrt compact subsets topology implies that
$f_{n_\nu}^{(k)}\to f^{(k)}$ for every $k\ge0$, thus
$$
f_{n_\nu}^{(k)}(a)\stackrel{\nu\to+\infty}{\longrightarrow} f^{(k)}(a)\;\;\;\;\forall k\ge1
$$
from which we get that $f^{(k)}(a)=0$ for all $k\ge1$ and thus (expanding $f$ near $a$ in Taylor series first, and then applying identity principle) we get $f\equiv f(a)$ on the whole $D$.
Now, this argument can be done for every converging subsequence of $\{f_n\}_n$, thus every converging subsequence of it converges to the same function (the constant function $f(a)$); thus, by the Montel convergence criterion, we get that $\{f_n\}_n$ converges, and it converges necessarily to $f(a)$ (wrt the topology above). |
Let $A\in M_{5×5}(\mathbb{R})$ be a matrix such that $\operatorname{rank}(A)=2$ and $A^3 = 0$. Is A guaranteed to be diagonalizable over R? | If $A$ were diagonalizable, say $A=PDP^{-1}$ with $D$ diagonal, then $D^3=P^{-1}A^3P=0,$ so $D=0,$ so $A=0,$ contradicting the fact that the rank is $2.$ |
Is this a LQGI controller - Linear Quadratic Gaussian Integral regulator | Due to certainty equivalence you can split the observer and feedback control up into two separate problems. So the full state observer can indeed be found using a Kalman filter/LQE. For the control LQI can be used, which tries to minimize
$$
J(u) = \int_0^\infty \left[z^\top(t)\,Q\,z(t) + u^\top(t)\,R\,u(t) + 2\,z^\top(t)\,S\,u(t)\right] dt, \tag{1}
$$
with $z(t) = \begin{bmatrix}x^\top(t) & x_i^\top(t)\end{bmatrix}^\top$ and $x_i(t)=\int (r-y)dt$. So using the standard state space model (without the feed through matrix $D$) and augmenting the state space yields
$$
\begin{bmatrix}
\dot{x} \\ \dot{x}_i
\end{bmatrix} =
\begin{bmatrix}
A & 0 \\ -C & 0
\end{bmatrix}
\begin{bmatrix}
x \\ x_i
\end{bmatrix} +
\begin{bmatrix}
B \\ 0
\end{bmatrix} u. \tag{2}
$$
The combined optimization problem using $(1)$ and $(2)$ can just be solved with LQR. However, I am not entirely sure myself why you are allowed to disregard the reference input in $(2)$.
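For completeness, here is a minimal numerical sketch of the LQI design, with an assumed toy plant and weights (and taking the cross-term $S=0$), using scipy's Riccati solver:

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy double-integrator plant (assumed)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augmented system (2): z = [x; x_i], with the reference input dropped
Aa = np.block([[A, np.zeros((2, 1))], [-C, np.zeros((1, 1))]])
Ba = np.vstack([B, np.zeros((1, 1))])

Q = np.eye(3)                             # weights on z and u (assumed)
R = np.eye(1)

P = solve_continuous_are(Aa, Ba, Q, R)    # algebraic Riccati equation
K = np.linalg.solve(R, Ba.T @ P)          # feedback gain, u = -K z
print(K)
|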
Integrate $\int x^x dx$ | You arrived at
$$\int { { x }^{ x } } dx=\sum _{ n=0 }^{ \infty }{ \int { \frac { { { x }^{ n }\left( \ln {x} \right) }^{ n } }{ n! } } dx}$$ which is equal to $$=\sum _{ n=0 }^{ \infty }{ \frac { 1 }{ n! } \int { { x }^{ n } } { \left( \ln { x } \right) }^{ n } dx }$$
Consider $u = {\left(\ln {x} \right)}^{n} $, $dv = {x}^{n} dx $.
Now, you should use integration by parts to complete the exercise.
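Incidentally, on $[0,1]$ the integration by parts leads to the closed form $\int_0^1 x^n(\ln x)^n dx=(-1)^n n!/(n+1)^{n+1}$, so the resulting series can be checked numerically:

from scipy.integrate import quad

series = sum((-1)**n / (n + 1)**(n + 1) for n in range(20))
direct, _ = quad(lambda x: x**x, 0, 1)
print(series, direct)    # both about 0.78343
|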
Why is " omega" = ord (N) a " limit element". | Because, topologically, $\omega$ is the limit point of the preceding ordinals, i.e. the natural numbers.
A "limit point" of a set is a point which can be "approximated as close as we like" by elements of that set. This, of course, requires us to have a notion of "approximation", which is what topology does, and orders, like those on the ordinals, can be considered as a source of such because you can consider the approximation of a point by an interval: if you have a point $p$, and you can bracket it in $(a, b)$, i.e. $a < p < b$, then you can say $(a, b)$ is like a "confidence interval", as in taking measurements: e.g. if I say I'm around 169-171 cm tall, that's an approximation with a 2 cm interval. If you now make $a$ and $b$ "closer", i.e. have an interval $(a', b')$ such that $a < a'$ and $b' < b$, and yet $p$ is still contained therein, then the second interval approximates better.
Likewise, if we have a subset, $S$ of some ordered set, here (a suitable initial segment of (**)) the ordinals, we can say that that subset has as limit point $l$ - which need not be in it, but must be in the larger ordered set - if in every, open interval $(a, b)$ which contains $l$, there is also a point from $S$. That point is an "approximation", and the intuition is that the smaller the interval, the closer that approximation is thereto, and you can always find an approximation from $S$ no matter how small you make the interval.
In that regard, $\omega$ has this property with respect to $\mathbb{N}$ because if we have any open interval $(a, b)$ of ordinals containing $\omega$, then we must of course have $a < \omega < b$, but that first inequality means $a$ must be a natural since there is nothing before $\omega$ but naturals. Hence there is a natural one larger (its successor) and such "approximates" $\omega$(*) and, moreover, we can, by making $a$ larger, get better approximations thereof. Thus $\omega$ is a limit point and hence this is why the name "limit ordinal". Similar considerations apply to all other limit ordinals.
(*) The notion of something being "approximately infinite", despite not making strict sense, is very, very useful, but can also be a poison pill if you're not careful: if something is really huge, but finite, compared to something else that is at a much smaller scale we happen to be interested in, it so often happens that taking the "really huge" thing as actually infinite often makes things a lot easier. One example is from physics, where you can, say, model a capacitor made from two parallel metal sheets very accurately, provided they aren't too far apart, as being made from infinite sheets. A misuse of this notion is in treating the Earth as infinite when you have 7.5 billion people covering it all demanding more and more profligate use of natural resources :) There you're in territory where the approximation breaks down as the scales are now comparable.
(**) The ordinals are too numerous to be collected into a set without creating a logical contradiction, at least in typical conceptions of set theory. |
Triangular Factorials | I've e-mailed Christopher Tomaszewski who, according to the OEIS, is the source of this information. I'll report here if he responds.
I will point out that, as far as I can tell, the paper Matthew Conroy links to does not answer this question. (Great survey though!)
As discovered by user Charles in a comment below, deep in the OEIS history (find page with edit #105 and see "Discussion") the following comment by Vladimir Reshetnikov can be seen:
From e-mail communication with Christopher M. Tomaszewski I learnt
that he found that his purported proof of 1-6-120 conjecture was
incorrect. But he claimed that there is no counterexample below
10^77337, so it still remains an interesting conjecture. |
Calculating gradient, how to do this? | To simplify notation, let us define $u= xy+x^2$, $v=xy-y^2$, so that:
$$
\frac{\partial u}{\partial x}=y+2x ,\quad \frac{\partial v}{\partial x}=y
$$
Then, using the chain rule, since $g(x,y)=f(u,v)$:
$$\frac{\partial g}{\partial x}=\frac{\partial f}{\partial u}\frac{\partial u}{\partial x}+\frac{\partial f}{\partial v}\frac{\partial v}{\partial x}$$
Now, if $q=(x,y)=(3,-3)$ we have $p=(u,v)=(0,-18)$, and we already know:
$$
\left.\frac{\partial f}{\partial u}\right|_p =-2 , \quad
\left.\frac{\partial f}{\partial v}\right|_p =3
$$
Also we can compute from above:
$$
\left.\frac{\partial u}{\partial x}\right|_q =3 ,\quad \left. \frac{\partial v}{\partial x}\right|_q=-3
$$
Then,
$$\left.\frac{\partial g}{\partial x}\right|_q=-2 \cdot 3+ 3 \cdot (-3)=-15 $$
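A quick cross-check with sympy, using a hypothetical $f$ chosen to match the given partials at $p$ (the linear choice $f(u,v)=-2u+3v$ is the simplest one):

from sympy import symbols, diff

x, y = symbols('x y')
u = x * y + x**2
v = x * y - y**2
g = -2 * u + 3 * v                        # hypothetical f(u,v) = -2u + 3v
print(diff(g, x).subs({x: 3, y: -3}))     # -15, as above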
Can you repeat the operation with $\dfrac{\partial g}{\partial y}$ ? |
Number of solutions of polynomials in a field | Let $n=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$, where the $p_i$ are distinct primes and the $a_i$ are $\ge 1$.
Consider the system of congruences $x\equiv e_i \pmod{p_i^{a_i}}$ ($i=1$ to $k$) where the $e_i$ can be either $0$ or $-1$. There are $2^k$ such systems.
By the Chinese Remainder Theorem, the above system of congruences has a unique solution modulo $n$.
If $x$ is any such solution, then $x(x+1)\equiv 0\pmod{p_i^{a_i}}$, and hence modulo $n$. Conversely, if $x(x+1)\equiv 0\pmod{n}$, then $x(x+1)\equiv 0\pmod{p_i^{a_i}}$. But any solution of the congruence modulo $p_i^{a_i}$ must be congruent to $0$ or $-1$ modulo $p_i^{a_i}$.
It follows that our congruence has precisely $2^k$ solutions modulo $n$.
For exactly $8$ solutions, we can therefore use $n=(2)(3)(5)$, or any other positive integer with three distinct prime factors.
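A brute-force check for $n=30=2\cdot3\cdot5$ (so $k=3$ and $2^3=8$ solutions are expected):

n = 30
sols = [x for x in range(n) if x * (x + 1) % n == 0]
print(len(sols), sols)   # 8 solutions: [0, 5, 9, 14, 15, 20, 24, 29]
|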
Simplify $B=\sqrt{x^2} - x$ | $B=0$ if $x\geq0$ and $B=-2x$ if $x\leq 0$ |
How common (dense) are undecidable propositions in Peano Arithmetic? | There is an analogy between undecidable propositions and computer programs which do not halt. The density of halting programs is called Chaitin's constant (see http://en.wikipedia.org/wiki/Halting_probability ), and it is a transcendental number; in particular it is neither 0 nor 1, so both the halting and the non-halting programs make up positive proportions of the set of programs. I couldn't tell if those results can be directly transferred to propositions in PA, but it's worth a look. |
What is numerical step by step logic of SVD (singular value decomposition)? | If you absolutely, genuinely, really, truly need to implement the singular value decomposition yourself (which I have already advised you not to do based on my reading of you), you will want to see the ALGOL code in Golub and Reinsch's classic paper. (Alternatively, see the EISPACK implementation.) There have been a lot of improvements to the basic two-part algorithm (bidiagonalize and apply a modified QR algorithm to the resulting bidiagonal matrix) since then, but not knowing your background in numerical linear algebra matters, I'll refrain from mentioning them for the time being. |
Why $x\longmapsto \mathrm{sgn}(x)$ not in $W^{1,p}(\mathbb R)$? | $\text{sgn}(x)$ is not in $L^p({\mathbb{R}})$, hence not in $W^{1, p}({\mathbb R})$ (when $p<+\infty$). Also, $\text{sgn}(x)$ is the derivative of $|x|$ in the distribution sense, but the derivative of $\text{sgn}(x)$ is $2\,\delta_0$, not a $L^p({\mathbb R})$ function (even for $p=+\infty$).
When $p=\infty$, you can take a test function $\psi(x)$ such that $\psi(0)\ne 0$, and the sequence $\phi_n(x) = n\psi(n x)$. A change of variable shows that
$$\|\phi_n\|_{L^1} = \|\psi\|_{L^1}\qquad\text{and}\qquad-\int\text{sgn}(x)\phi_n'(x)d x = 2 n \psi(0)$$ If $\text{sgn}(x)$ was in $W^{1,\infty}$, the integral would be bounded by $C\|\text{sgn}(x)\|_{W^{1,\infty}} \|\phi_n\|_{L^1}$. |
Intuition behind growth rate of some functions | First you should notice that 4. and 5. should be out of the question. This is because $\lim_{n\rightarrow\infty} 10000^{10000/n}=10000^{\lim_{n\rightarrow\infty}10000/n}=10000^0$. That basically means that the function will grow at a smaller and smaller rate.
Now we are left with 1., 2. and 3. To solve this let us rewrite the three functions.
$$2^{n/2}=(2^{1/2})^n=(\sqrt{2})^n \ \ \ (1)$$
$$3^{n/3}=(3^{1/3})^n=(\sqrt[3]{3})^n \ \ \ (2)$$
$$5^{n/5}=(5^{1/5})^n=(\sqrt[5]{5})^n \ \ \ (3)$$
Now all we have to do is to see which number inside the brackets is larger. For 1. we have $\approx1.414^n$, for 2. we have $\approx1.442^n$ and for 3. we have $\approx1.380^n$. Since 2. is larger than both 1. and 3. it is therefore the one with the greatest growth rate. |
Clarification of the proof of the euclidean division algorithm for polynomials over a field. | You're right. Presumably the equation in the text is supposed to say $$q(x)=(a_n/b_m)x^{n-m}+q_1(x)$$ and the plus sign was just omitted by accident. |
what's the general form of 3D projective mapping? | The transformation from 3D to 2D is the same, just with two extra terms, one in the denominator and one in the numerator. This is an 11-parameter projective mapping, since one of the 12 parameters can be set to 1.
Some more info on this "camera model" can be found here. |
Binomial Theorem twice in one equation? | Hint:
The $x$ term of $(1+\frac{2}{3}x)^n$ is $$\binom{n}{1}1^{n-1}\left( \frac{2}{3}x\right)=\frac{2n}{3}x$$.
The $x$ term of $(3+nx)^2 = 9+6nx+n^2x^2$ is $6nx$ (and the constant term is $9$).
So, the $x$ term of the product is: $$9\cdot\frac{2n}{3}x+6nx=12nx$$.
Any reference that explains generated equivalence relation | Recall that the equivalence relation generated by a relation is the smallest equivalence relation containing it. Equivalently, the equivalence relation $S$ generated by a relation $R$ is such that $(x,y)$ is in $S$ if and only if there exist $n\geqslant0$ and $(x_k)_{0\leqslant k\leqslant n}$ such that $x= x_0$, $y = x_n$, and for every $1\leqslant k\leqslant n$, either $(x_{k-1},x_{k})$ or $(x_{k},x_{k-1})$ belongs to $R$.
Since every equivalence relation is generated by itself, for every equivalence relation there exists a relation generating it.
To get $R\subseteq S$ generating $S$, one can erase from $S$ some of the following: (1) every $(x,x)$, (2) either $(x,y)$ or $(y,x)$ for every $x\ne y$ such that $(x,y)$ is in $S$, (3) pairs $(x,y)$ such that $(x,z)$ and $(z,y)$ are kept for some $z\ne x,y$ (taking care that each erased pair can still be recovered by symmetry and transitivity from what remains). |
Is there any analysis method of non-differentiability? | There is Rademacher's theorem which says that a function that is Lipschitz continuous is differentiable (almost everywhere). So one way of interpreting your question is to "how close to Lipschitz continuous is a function". Lipschitz continuity is a specific case of Hölder continuity.
A function is $\alpha$-Hölder continuous for $\alpha\in (0,1]$ if
$$|f(x)-f(y)|<K|x-y|^\alpha$$
If $\alpha=1$ then we say the function is Lipschitz.
Note that if $f$ is $\alpha$-Hölder, then it is $\beta$-Hölder for $\beta<\alpha$.
So if $f$ is $.6$-Hölder and $g$ is $.7$-Hölder, then $g$ is "closer" to being Lipschitz, i.e. differentiable.
Here are some pictures of various functions that are of increasing Hölder continuity (the original post showed four sample paths, Hölder continuous for all $\alpha$ in $(0,0.15)$, $(0,0.55)$, $(0,0.75)$, and $(0,0.95)$, respectively).
Note how the paths get "nicer"/closer to being able to differentiate. |
what is a magnitude of an element in an vector? how to compute this magnitude? | The magnitude of an element in a vector is whatever makes sense at the time; it will be explicitly defined for whatever context you are working in, in the event that it is a more exotic scenario.
Generally, in most normal scenarios where you are working with the $L^\infty$ norm, you would be working in $\Bbb R^n$ or $\Bbb C^n$ or something similar for your vector space in which case the magnitude of a specific entry of a vector is very simply the "absolute value" which we are used to using every day of our lives.
For example if the vector was $\begin{bmatrix}1\\-7\\3\end{bmatrix}$, the $L^\infty$ norm of this vector would be $7$ as this is the largest absolute value of the entries of the vector. |
Clearly, $3 \in \mathbb{Z}$ is not a unit, because $1/3 \notin \mathbb{Z}$. What theorem does this kind of reasoning appeal to? | I have often seen the proof "$\mathbb{Z}$ is not a field: $2$ is not invertible, since $1/2 \notin \mathbb{Z}$" (of course the same can be said with $3$ instead of $2$). But this is not a proof. It is only a reformulation of the claim: We use the embedding $\mathbb{Z} \to \mathbb{Q}$ and say that $1/2 \in \mathbb{Q}$ does not lie in the image of $\mathbb{Z}$. Ok but one has to argue why this is the case. Why is there no integer among $0,1,-1,2,-2,\dotsc$ which becomes $1$ when doubled? Of course we learn this in school, but in school one doesn't learn a proof for this either. $^{(1)}$
Anyway, here is a correct proof, in fact for a stronger statement: $\pm 1$ are the only units of $\mathbb{Z}$. For that, let $n \in \mathbb{Z}$. Assume $n \geq 2$. Then for every $m \in \mathbb{Z}$ we have either $m \leq 0$, hence $mn \leq 0$, or $m \geq 1$, hence $mn \geq n > 1$ (these properties of inequalities can be derived from any formal definition of the set of natural numbers, using induction). In both cases $mn \neq 1$. Hence, $n$ is not a unit in $\mathbb{Z}$. Assuming $n \leq -2$, then $-n \geq 2$, hence $-n$ is not a unit, and therefore $n$ is not a unit. $\square$
$^{(1)}$ Here is another example which hopefully convinces you that this proof is incomplete: Consider the ring of formal power series with, say, rational coefficients $\mathbb{Q}[[x]]$. Then one might say "this is not a field, since $\frac{1}{1-x} \notin \mathbb{Q}[[x]]$". Well, it is true that a priori this fraction is not represented as a formal power series. And this fake proof could well convince students who have just learned the definition of formal power series, and remember that $1-x$ is not a unit in $\mathbb{Q}[x]$. But actually $1-x$ is a unit, the inverse is the well-known geometric series $1+x+x^2+\dotsc$. Actually this is the key step to the more general result $R[[x]]^* = \{f \in R[[x]] : f(0) \in R^*\}$ for commutative rings $R$. From this we can build more complicated examples, such as
$$\frac{1}{x^2+x+1} = \sum_{k=0}^{\infty} x^{3k} - \sum_{k=0}^{\infty} x^{3k+1},$$
$$\frac{1}{x^2+x-2}=\sum_{k=0}^{\infty} \bigl(-\frac{1}{3} - \frac{(-1/2)^k}{6}\bigr) x^k.$$ |
Presenting Factorials as a sequence of multiplied numbers | First, that original representation you have should not have $(n-2)!$, but just $(n-2)$.
The "..." is just to show you are continuously multiplying by the next lowest integer until you reach $1$. For someone who has never learned about the factorial that second notation could be confusing (it may seem obvious to most to stop at $1$, and not $0$ or negative numbers, but it's good to be clear in the beginning).
For the case $n=3$ as you asked, you wouldn't list $3$ twice in the product, one of the $(n-something)$ would be your $3$, and you would multiply all lesser integers (one time each) until you reach $1$ (which means you would just multiply $3$, $2$ and $1$).
So $3! = 3 \times 2 \times 1$ is all you need and when you see
$n! = n \times (n-1) \times (n-2) \times ... \times 3 \times 2 \times 1$
it just means take your $n$ and multiply it with all the smaller integers less than $n$. The "..." is just a placeholder for all the other integers less than $n$. For $n=3$ your $n$, $(n-1)$, and $(n-2)$ are already your $3$, $2$, and $1$ values, so you don't list them twice. |
Is $N_{k\subset K}$ the only *norm* on the field extension $k\subset K$? | Let $k \subset K$ be an extension of fields. A function $F: K \rightarrow k$ satisfying your requirements is a homomorphism of $K^\times$ to $k^\times$ when restricted to $K^\times$. In particular given a homomorphism $\psi:K^\times\rightarrow k^\times$ we can extend it to a function satisfying 1) and 2) by letting $\psi(0)=0$.
So for instance we could take the trivial homomorphism between $K^\times$ and $k^\times$. Or we could compose the norm map with a suitable automorphism of $k$ to get another such function. For instance if we considered the norm on the extension $\mathbb C(x,y) \subset \mathbb C(\sqrt{x},y)$ and composed the norm with the automorphism switching $x$ and $y$. |
How to solve this system of nonlinear ODE's? | You can simplify your system through a sequence of variable transformations. First, based on your discovery and that of @Michael, it seems reasonable to define
\begin{equation}
x = a b,\quad y = a c,\quad z = c d,\quad w = b d
\end{equation}
and write the system in terms of those variables. Then, we introduce
\begin{equation}
A = x + z,\quad B = x-z,\quad C = y+w,\quad D = y-w
\end{equation}
to yield $A'=0$, which you found. @Michael's observation then boils down to $C = B^2 + k$, with $k$ a constant. The only two remaining free variables $B$ and $D$ can be used to obtain the following second order equation for $B$:
\begin{equation}
B'' = 4 B (1 +2 k +2 B^2),
\end{equation}
which should be straightforward to analyse. |
I am stuck with some details when graphing this function | A qualitative approach.
The function can be written as $x^2+\dfrac1x$, and is the sum of a parabola and an equilateral hyperbola.
The hyperbola has a vertical asymptote at $x=0$ and the horizontal asymptote $y=0$. Hence for small $|x|$ it is dominant, and for large $|x|$ it is negligible and the function gets closer and closer to the parabola.
In the negatives, both functions are decreasing and the sum is decreasing (to $-\infty$).
In the positives we sum an increasing and a decreasing function, so we cannot immediately conclude about the variation. The first derivative is $2x-\dfrac1{x^2}$, which has a single positive root, hence the function has a single minimum.
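Explicitly, setting the derivative to zero gives
$$2x-\frac1{x^2}=0\iff x^3=\frac12\iff x=2^{-1/3}\approx 0.794,$$
and the minimum value is $f(2^{-1/3})=2^{-2/3}+2^{1/3}=3\cdot 2^{-2/3}\approx 1.89$.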
This is confirmed by a plot of the function.
Why does a set of m elements have 2$^m$ subsets? | An element of the set can either be included in the subset or not. So there are $2$ possibilities for each of the $m$ elements, which gives $2^m$ possibilities, i.e. $2^m$ subsets, in total.
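For example, for $m=2$ and the set $\{a,b\}$, the four include/exclude choices give the $2^2=4$ subsets $\emptyset$, $\{a\}$, $\{b\}$, $\{a,b\}$.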
Self adjoint operator property | Let $\varphi = d\gamma/d\mu$ be the Radon-Nikodym derivative, then for any $f \geq 0$ we have
$$
\int fd\gamma = \int f\varphi d\mu \qquad (\ast)
$$
In addition to absolute continuity, you also need the fact that
$$
\varphi \in L^{\infty}(\mu)
$$
The proof is in the following steps:
- Since $\mu$ and $\gamma$ are both positive measures, $\varphi \geq 0$ a.e. $[\mu]$
- Define $U : L^2(\gamma) \to L^2(\mu)$ by
$$
f \mapsto \sqrt{\varphi}f
$$
By $(\ast)$ applied to $|f|^2$ it follows that
$$
\|U(f)\|_2^2 = \int \varphi\,|f|^2\,d\mu = \int |f|^2\,d\gamma = \|f\|_2^2
$$
so $U$ is an isometry.
- Now do the same with
$$
\psi := d\mu/d\gamma
$$
to get an operator
$$
V : L^2(\mu) \to L^2(\gamma)
$$
- Check that this satisfies both $V = U^{-1}$ and $V = U^{\ast}$ (once again, using $(\ast)$)
- Finally check that $U$ conjugates $A$ to $B$ (or rather, the multiplication operators corresponding to $A$ and $B$) |
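For instance (a step not spelled out above), the identity $V = U^{-1}$ comes down to the chain rule for Radon-Nikodym derivatives: assuming, as the existence of both derivatives requires, that $\mu$ and $\gamma$ are mutually absolutely continuous, we have $\varphi\psi = \frac{d\gamma}{d\mu}\cdot\frac{d\mu}{d\gamma} = 1$ a.e., and hence $V(U(f)) = \sqrt{\psi}\,\sqrt{\varphi}\,f = f$.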
How to find irreducible representations of $\mathbb{C}S_2$ and $\mathbb{C}S_3$ | I am not sure what a representation is.
You should be making sure you fix this problem before you consider finding irreducible representations! I will try to motivate and then explain the idea.
Let's get some historical perspective. Originally, groups were conceived of as symmetry groups, sets of transformations that preserve the structure or features of a mathematical object, such as a geometric figure in space or arithmetic equations in a number system. These always are sets of functions which can be composed and inverted. One can then lose the function interpretation of the elements of a group and abstract this to a generic set with an associative binary operation satisfying the same group axioms - this is an abstract group. Finally, if in nature one finds a structure which is naturally a group and whose elements have concrete interpretations (numbers, loops, games), this is a concrete group. Symmetry groups are in particular concrete groups. The distinction between abstract and concrete groups is informal, not formal, by the way.
With the abstraction of groups, we can understand the idea of two groups being "the same" but only having their elements (and the corresponding entries of the Cayley table) relabelled - we say that two groups are isomorphic. In this way they are "essentially the same." Thus, we can speak of different mathematical objects having "the same symmetry" (isomorphic symmetry groups) or otherwise comparing symmetries.
Instead of taking a mathematical object and then attaching a symmetry group to it, one can reverse our perspective by starting with a group and attaching objects for it to "act" on. The fundamental mathematical object (okay, debatable) is a set. The term "group action" is usually reserved for having a group act on a set. This means that every element of $G$ gets an interpretation as an invertible function on a set $X$, in such a way that composition in $G$ corresponds to composition of functions, just as with the idea of a symmetry group. This may be formally described as a group homomorphism $G\to{\rm Perm}(X)$. There is an alternate definition as a map $G\times X\to X$, given the abbreviated notation $(g,x)\mapsto gx$, satisfying "associativity" $(gh)x=g(hx)$.
(Fun fact: if one writes this associativity condition using commutative diagrams, as per the "categorical imperative" to think with diagrams, one may transplant this definition into other categories to define group objects. If one takes group objects in the categories of sets, topological spaces, smooth manifolds or algebraic varieties one gets groups, topological groups, Lie groups and algebraic groups respectively. Anyway, tangential remark.)
Of course, functions on sets hardly captures all there is about symmetry. In different categories there are different types of invertible maps - topological spaces have continuous maps, posets have monotone maps, vector spaces have linear maps - and in general we would want a group to act in the "appropriate" ways on these objects. This gives rise to the most general notion of a group representation: a map $G\to{\rm Aut}(X)$ where $X$ is an object in some category and ${\rm Aut}(X)$ is its automorphism group.
Almost always, by representation we mean group representation (I haven't hinted at any other kind yet), and even more specifically we mean a linear group representation, which uses the category of vector spaces over some field (usually the field $\Bbb R$ because we use it in geometry, or in $\Bbb C$ because the algebra works out so much nicer). In particular, a linear group representation of $G$ is formally a group homomorphism $\rho:G\to{\rm GL}(V)$. This means every element of $G$ gets interpreted as an invertible linear transformation of the vector space $V$.
From there, you can build the idea of isomorphisms of representations, direct sums, indecomposability, irreducibility, semisimplicity, tensor product and so on. I will omit explaining these concepts in this answer. There is also a notion of linear representations of algebras: if $A$ is an algebra over a field, then a representation is an algebra homomorphism $A\to{\rm End}(V)$ for some vector space $V$ over the same field. Thus, every element of $A$ gets interpreted as a linear transformation of $V$, just as with groups, but there's an addition operation too. At this juncture one can prove a complex group representation of $G$ is "essentially the same" as an algebra representation of $\Bbb C[G]$, the complex group algebra. Indeed there is an equivalence of categories. There are a few questions on this site laying around explaining this idea.
(One may also call $V$ an $A$-module, or alternatively define it by a linear map $A\otimes V\to V$ satisfying the aforementioned commutative diagram.) Note that with finite groups, if $|G|$ is invertible in the scalar field then all finite-dimensional representations are semisimple. In particular, indecomposable representations are irreducible. (Maschke's theorem.)
I want to find all the complex irreducible representations of $S_2$ and $S_3$.
Over $\Bbb C$, all irreducible representations of an abelian group are one-dimensional. (This is a consequence of Schur's lemma.) You've given a good representation of $S_2$, but even over $\Bbb R$ it's not irreducible: it fixes the "diagonal" subspaces in the Cartesian plane, $\Bbb R\cdot(1,1)$ and $\Bbb R\cdot(1,-1)$.
For a one-dimensional representation of a group $G$, since ${\rm GL}_1(\Bbb C)$ can be identified with $\Bbb C^\times$ you need to specify a group homomorphism $G\to\Bbb C^\times$. It should be obvious then what the two irreducible representations of $S_2$ are.
Now consider $S_3$. There are automatically two one-dimensional representations $S_3\to\Bbb C^\times$, the trivial representation and the sign representation. (Remember, taking signs of permutations is a group homomorphism $S_n\to\{\pm1\}$.) One can also take the standard representation of $S_3$ by permutation matrices acting on $\Bbb C^3$, but this is not irreducible. Indeed, it has an obvious $G$-invariant subspace of vectors of the form $(z,z,z)$. Indeed, this is the subspace of fixed points, denoted $V^G$ in general.
One thing you learn in the beginning of representation theory is Weyl's trick: how to construct a $G$-invariant inner product $\langle\cdot,\cdot\rangle_G$ from any choice of inner product $\langle\cdot,\cdot\rangle$. One employs an averaging formula $\langle v,w\rangle_G:=\frac{1}{|G|}\sum_{g\in G}\langle gv,gw\rangle$, which is clearly $G$-invariant and one may check is an inner product. Then, if $W$ is a representation with subrepresentation $V$, one may find a complementary $G$-invariant subspace $U$ so that $W=U\oplus V$ as representations - indeed, one may find such a $U$ as the orthogonal complement of $V$ with respect to $\langle\cdot,\cdot\rangle_G$.
As it turns out, the standard inner product on $\Bbb C^3$ is already $S_3$-invariant. If one takes the orthogonal complement of $(\Bbb C^3)^{S_3}$ one obtains the two-dimensional subrepresentation of points $(x,y,z)\in\Bbb C^3$ satisfying $x+y+z=0$. We can check that it is irreducible too: since it's two-dimensional, if it had a proper nontrivial subrepresentation it would have to be one-dimensional. See if you can continue from there to prove it's irreducible.
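To make this concrete (an added illustration, not part of the original answer), take the basis $b_1=e_1-e_2$, $b_2=e_2-e_3$ of this two-dimensional subrepresentation. With the action $\sigma\cdot e_i = e_{\sigma(i)}$, the transposition $(1\,2)$ and the $3$-cycle $(1\,2\,3)$, which together generate $S_3$, act by
$$(1\,2)\mapsto\begin{pmatrix}-1&1\\0&1\end{pmatrix},\qquad (1\,2\,3)\mapsto\begin{pmatrix}0&-1\\1&-1\end{pmatrix},$$
and a direct check shows these two matrices have no common eigenvector, so there is no one-dimensional invariant subspace, which is exactly the irreducibility claimed above.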
This gives altogether three irreducible complex representations of $S_3$, two that have one dimension and one that has two dimensions. (Later, you will learn how to check if you have all of the representations: the sum of the squares of their dimensions is the group's order; in this case we have $1^2+1^2+2^2=3!$) It also showcases a bit of technology, like Schur's lemma, standard representations of symmetric groups, and $G$-invariant inner products yielding $G$-invariant complementary subspaces, which should be covered in introductions to representation theory.
Is the cardinality of the basis of a vector space $V$ having infinite dimension necessarily countable infinite? | Not all infinite dimensional vector spaces have countable basis. In fact, most spaces you come across in "real life" don't. The simplest example would be a sequence space like
$$
\ell^2 = \{(x_n) \subset \mathbb{C} : \sum_{n=1}^{\infty} |x_n|^2 < \infty\}
$$
This is an example of a Banach space - a vector space that comes with a nice notion of distance (a norm) and one that is complete with respect to that norm.
It is a general fact that an infinite dimensional Banach space cannot have a countably infinite basis (due to the Baire Category theorem).
However, the space
$$
c_{00} := \{(x_n) \subset \mathbb{C} : x_n \neq 0 \text{ for only finitely many } n\}
$$
is an example of an infinite dimensional vector space whose basis is countable (the "standard" basis $\{e_n\}$, where $e_n$ has a $1$ in the $n$-th position and $0$ elsewhere).
Most books on Linear Algebra mention only finite dimensional vector spaces because they are easy to visualize (just extend your notion of a vector in $\mathbb{R}^2$), but they are also deep enough to prove some rather interesting results (for instance the spectral theorem). Furthermore, infinite dimensional vector spaces are best analysed with some topology in mind - this is what functional analysis studies. |
One-sided limits proof | You need to know the definition of the limit at a real number. Assume that both $\lim_{x\to a^-}f(x)$ and $\lim_{x\to a^+}f(x)$ exist and equal $L$. Then, given any $\epsilon>0$, there are $\delta_1,\delta_2>0$ witnessing the two one-sided limits; taking $\delta=\min(\delta_1,\delta_2)$, we have $|f(x)-L|<\epsilon$ for all $x$ such that $0<x-a<\delta$ or $0<a-x<\delta$ (in fact, for all $x$ such that $0<|x-a|<\delta$). But then $\lim_{x\to a}f(x)$ exists and it is equal to $L$.
Apparent paradox on d-separation and conditioning in Bayesian Networks | D-separation between $A$ and $B$ implies that $A\perp\!\!\!\!\perp B\mid C$ is true.
The d-separation of nodes in the unconditioned DAG does not imply that the nodes will be d-separated when conditioned. A collider in a conditioning set does not block the path.
In the DAG $(A)\to(C)\gets(B)$, the collider $(C)$ blocks the path between nodes $(A)$ and $(B)$. $(A),(B)$ are d-separated. $$A\perp\!\!\!\!\perp B$$
When $(C)$ is a member of the conditioning set, the path is unblocked. $(A),(B)$ are d-connected when conditioned by $\{C\}$. $$A\not\perp\!\!\!\!\!\perp B\mid C$$
If a node in a connecting path is in the conditioning set, the node becomes a blocker.
If a collider, or a descendant of a collider, is in the conditioning set, that collider is not a blocker.
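To contrast with the collider case (an added illustration): in the chain $(A)\to(C)\to(B)$ the path through $(C)$ is open, so $A\not\perp\!\!\!\!\!\perp B$, but conditioning on the middle node blocks it, giving $A\perp\!\!\!\!\perp B\mid C$, exactly the opposite behaviour.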
Operations in the exterior algebra. Multiplication in the direct sum of rings. | The product is defined component-wise, i.e. $$(x_0,\dots,x_n)\wedge (y_0,\dots,y_n) = (z_{0}, \ldots z_{n})$$ where $$z_{i} = \sum_{j = 0}^{i} x_{j} \wedge y_{i-j}$$ |
Homomorphisms S4 to Z | There is only one such homomorphism, the zero homomorphism (mapping everything to 0).
The reason is that every non-zero element of $\mathbb{Z}$ has infinite order, and the order of the image (under a homomorphism) of any element of finite order must be finite. Concretely, if $\varphi\colon S_4\to\mathbb{Z}$ and $g\in S_4$ has order $k$, then $k\varphi(g)=\varphi(g^k)=\varphi(e)=0$, forcing $\varphi(g)=0$.
Showing the triviality of homomorphisms | You were on the right track.
To show the image has order $1$, note that the order of the quotient is the order of $G$ divided by the order of the kernel, hence divides the order of $G$. But the order of the image is the same as the order of the quotient, hence divides both the order of $G$ and the order of $H$. Assuming, as in the usual statement of this problem, that $|G|$ and $|H|$ are coprime, the only possibility is that the image has order $1$.
How to find a joint PDF? | In case 1) $f_Y(y)=\int_1^{2} ax^{2}\, dx$ and in case 2) $f_Y(y)=\int_y^{2} ax^{2}\, dx$. |
Bounded operator but not Compact | Hint: Instead, try
$$ f_n(x) = x^{2n}, $$
and observe that any subsequence of $Tf_n=f_n$ converges pointwise to a discontinuous function: assuming the underlying space is $C[0,1]$, we have $f_n(x)\to 0$ for $0\le x<1$ while $f_n(1)=1$, so no subsequence can converge uniformly.
Is the system wrong in this multiple choice problem? | Is $\{0\}$ an orthogonal set? Apparently so, according to the definition. Similarly,
$\{0,1\}$ is an orthogonal set (of real numbers, that is in dimension $1$).
Are they linearly independent? No.
So isn't C also true? I would think it is.
You did not give a definition of an orthogonal matrix; according to Wikipedia, its columns are orthogonal unit vectors, so B would indeed be correct.
Interpretation of this linear application | It's the mirror reflection on the plane $(r_1,r_2)$:
$(I-2r\cdot r^\top)\cdot v = v-2\left( (v\cdot r_1)r_1+( v\cdot r_2)r_2 \right) = v-2v_p$, where $v_p$ is the projection of $v$ onto the plane $(r_1,r_2)$.
Find the unique monic polynomial that is an associate of 4 in $Z_7$ | Then yes: the wanted monic polynomial is $\;1\;$ , since $\;4\cdot2=8=1\;$ in $\;\Bbb F_7\;$ and $\;2\;$ is a unit in $\;\Bbb F_7[x]\;$ .
For the other one:
$$(5x^2+3)\cdot3=15x^2+9=x^2+2\;\text{ in }\;\Bbb F_7[x]\;,\;\;\text{and}\;\;3\;\;\text{is a unit, again}$$
Alternating versions of Convergent Series summing to half | Note that we have in general
$$\begin{align}
\sum_{n=1}^{2N} (-1)^na_n&=\sum_{n=1}^N a_{2n}-\sum_{n=1}^Na_{2n-1}\\\\
&=\sum_{n=1}^N a_{2n}-\left(\sum_{n=1}^{2N}a_n-\sum_{n=1}^Na_{2n}\right)\\\\
&=2\sum_{n=1}^N a_{2n}-\sum_{n=1}^{2N} a_n\tag1
\end{align}$$
We can use $(1)$ to evaluate the series of interest. Using $(1)$, for $a_n=\frac1{n^2}$ we have
$$\begin{align}\sum_{n=1}^{2N}\frac{(-1)^n}{n^2}&=2\sum_{n=1}^N \frac1{(2n)^2}-\sum_{n=1}^{2N}\frac1{n^2}\\\\
&=-\frac12\sum_{n=1}^N \frac1{n^2}-\sum_{n=N+1}^{2N}\frac1{n^2}\tag2
\end{align}$$
Since the first term on the right-hand side of $(2)$ converges to $-\frac12\cdot\frac{\pi^2}{6}=-\frac{\pi^2}{12}$ while the second term converges to $0$, we obtain the expected result
$$\sum_{n=1}^\infty \frac{(-1)^n}{n^2}=-\frac{\pi^2}{12}$$
Note that the expression in $(1)$ can be useful in evaluating other series.
EXAMPLE $1$:
For example, in THIS ANSWER, I used $(1)$ with $a_n=\frac{\log(n)}{n}$ to show that
$$\sum_{n=1}^\infty \frac{(-1)^n\log(n)}{n}=\gamma\log(2)-\frac12\log^2(2)$$
EXAMPLE $2$:
As another example of $(1)$, let $a_n=\frac1n$. Then, we see that
$$\begin{align}
\sum_{n=1}^{2N}\frac{(-1)^n}{n}&=2\sum_{n=1}^N \frac1{2n}-\sum_{n=1}^{2N}\frac1{n}\\\\
&=-\sum_{n=N+1}^{2N}\frac1n\\\\
&=-\frac1N\sum_{n=1}^N \frac1{1+(n/N)}\tag3
\end{align}$$
The right-hand side of $(3)$ is the Riemann sum for $-\int_0^1\frac1{1+x}\,dx=-\log(2)$. Therefore, we find that
$$\sum_{n=1}^\infty \frac{(-1)^n}{n}=-\log(2)$$ |
Prove that the distance between opposite edges of a regular hexagon of side length $\sqrt3$ is a rational value | You're right: it is equivalent to saying that the altitude in an equilateral triangle with sides $\sqrt 3$ is equal to $3/2$.
Indeed, if we denote $h$ this altitude, we have
$$ \frac1{\sqrt 3}=\tan\tfrac\pi 6= \frac{\frac{\sqrt 3}2}h,\quad\text{whence }\quad h=\frac{(\sqrt 3)^2}2=\frac32. $$ The distance between opposite edges of the hexagon is twice this apothem, namely $2h=3$, which is indeed rational.
A fair die is rolled 15 times. What is the probability that at least 8 ones are rolled given no twos? | $$\sum\limits_{i=8}^{15} {15 \choose i} \left(\frac{1}{5}\right)^i \left( \frac{4}{5} \right)^{15-i} = \frac{129386893}{30517578125} = 0.00423975.$$
Given that you have no $2$s, you can consider the die to have just five equally likely sides (as noted by @RossMillikan). We sum the binomial probabilities for $8$ to $15$ successful events (appearances of a $1$), where each roll is a success with probability $1/5$ and a failure with probability $4/5$.
The number of $1$s then follows the binomial distribution $\operatorname{Bin}(15,\,1/5)$.
Notice that, as expected, the most likely number of appearances of $1$s is $3$ (because $\frac15 \cdot 15 = 3$).
You sum this from $8 \to 15$ (inclusive). |
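As a quick numerical sanity check (an addition, not part of the original answer), the quoted value can be reproduced with standard-library Python:

```python
from math import comb
from fractions import Fraction

# P(at least 8 ones in 15 rolls of the conditional, five-sided die):
# each roll shows a one with probability 1/5, anything else with 4/5
p = sum(comb(15, i) * Fraction(1, 5)**i * Fraction(4, 5)**(15 - i)
        for i in range(8, 16))
print(p)         # should print the fraction 129386893/30517578125
print(float(p))  # should print roughly 0.00423975
```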
Product of indexed parentheses | This is
$$K^{n-1}\left(1+\frac1K\right)\left(2+\frac1K\right)\cdots
\left(n-1+\frac1K\right).$$
Using $\Gamma(x+1)=x\Gamma(x)$ gives
$$K^{n-1}\frac{\Gamma(n+1/K)}{\Gamma(1+1/K)}.$$ |
Do these intervals have the same image under the $e^z$ transformation? | For fixed $\alpha$ and $\beta$, there are always points in $D_1 \setminus D_2$. On the other hand, you are correct that
$$ \bigcup_{0<\alpha<\beta<\pi} D_2 = D_1. $$ |
Understanding the definition of a set with $C^k$ boundary and of the outward pointing normal vector field | The definition says, geometrically, that the boundary of $U$ is locally the graph of a function of class $C^{k}$, i.e., a sufficiently small piece of $U$ near a boundary point $x_{0}$ is (possibly after re-indexing coordinates) the super-level set
$$
F(x_{1}, \dots, x_{n-1}, x_{n}) := x_{n} - \gamma(x_{1}, \dots, x_{n-1}) > 0.
$$
For example, the function $\gamma(x) = |x|x^{k}$ is of class $C^{k}$ but not of class $C^{k+1}$, so the (unbounded) region
$$
U = \{(x, y) \text{ in } \mathbf{R}^{2} : y - \gamma(x) > 0\}
$$
has boundary of class $C^{k}$, but not of class $C^{k+1}$. (At most points the boundary is real-analytic, but this example should convey the idea. If you have "more interesting" examples of $C^{k}$ functions, you can cook up correspondingly more interesting regions.)
If $\partial U$ is of class $C^{1}$, then by definition a small piece of $\partial U$ may be written as the graph of a $C^{1}$ function. Consequently, at each boundary point $x_{0}$, there is a well-defined tangent space $T_{x_{0}}$, of dimension $(n - 1)$; the outward unit normal $\nu(x_{0})$ is the unique unit vector orthogonal to $T_{x_{0}}$ and satisfying the directional derivative condition $\nu(F) < 0$, with $F$ a local defining function for the boundary, as above. Geometrically, $\nu(x_{0})$ spans the orthogonal complement of $T_{x_{0}}$ and points toward the exterior of $U$, where $F < 0$. |
Why negating universal quantifier gives existential quantifier? | The following two statements are equivalent:
"It is not true that all men have red hair."
"There exists at least one man who does not have red hair."
Hence $\neg\forall x\ \varphi$ is the same as $\exists x\ \neg\varphi$.
The following are equivalent:
"It is not true that some men have green hair."
"All men have non-green hair."
Hence $\neg \exists x\ \varphi$ is the same as $\forall x\ \neg\varphi$.
However, the form in which you've written them is not correct (as pointed out in Daniel Fischer's comment). |
If the scalar product of two vectors is equal to the magnitude of their vector product, find the angle between them. | With two vectors $u,v\in\Bbb R^3$ and angle $\alpha$ between them, we know
$$ u\cdot v = |u||v|\cos\alpha$$
and
$$ |u\times v|=|u||v|\sin \alpha$$
Hence the given conditions imply
$$ |u||v|\cos\alpha=|u||v|\sin \alpha.$$
This holds trivially when at least one of $u,v$ is the zero vector (in which case we say they are orthogonal, though actually speaking of an angle between them makes no sense). If we only consider the case that $u,v$ are both non-zero, we can divide by the lengths and find
$$ \cos\alpha=\sin\alpha.$$
As additionally $0\le\alpha\le\pi$, the only solution to this is $$\alpha=\frac\pi4,$$ where $\sin\alpha=\cos\alpha =\frac{\sqrt 2}2$. |
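For a concrete check (an added illustration): with $u=(1,0,0)$ and $v=(1,1,0)$ we get $u\cdot v=1$ and $u\times v=(0,0,1)$, so $|u\times v|=1=u\cdot v$, and indeed the angle between $u$ and $v$ is $\frac\pi4$.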
proving the conjugate transpose of a linear map is the adjoint | See that the adjoint is defined by
$$g(Tv, w) = g(v, T^\ast w) \qquad \forall v,w\in V$$
So we must verify that
$$g(Tv, w) = g(v, \left[\overline{[T]_B^T}\right]_V w) \qquad\forall v,w\in V$$
where I invent the notation $[M]_V$ to mean the map $\tilde T$ such that $[\tilde T]_B = M$.
Or, in the basis $B$,
$$[v]_B^\ast [T]_B [w]_B = \overline{[w]_B^\ast \overline{[T]_B^T} [v]_B} \qquad \forall v,w\in V$$
where $v^\ast = \overline{v^T}$ by definition.
Chain rule proof. Why is $\Phi = f'(g(a))$ if $\Delta_h = 0$ | Equation $(4)$ says
$$\begin{align}
\Phi(h) &= \frac{f(g(a) + \Delta_h) - f(g(a))}{\Delta_h} \ \ \ \ \ \text {if $\Delta_h \ne 0$} \ \ (*) \\\\
&= f'(g(a)) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text {if $\Delta_h = 0$} \ \ (**)
\end{align}$$
The second part just assigns to $\Phi$ at $\Delta_h = 0$ the value of the limit defining the derivative. Recall
$$p'(x) = \lim_{h \to 0} \frac{p(x + h) - p(x)}{h}$$
If we take the limit of $(*)$ as $\Delta_h \to 0$, we find
$$\begin{align}
\lim_{\Delta_h \to 0} \Phi(h) &= \lim_{\Delta_h \to 0} \frac{f(g(a) + \Delta_h) - f(g(a))}{\Delta_h} \\\\
&= f'(g(a)) \\\\
&= \Phi(h) \ \text {at $\Delta_h = 0$} \\\\
&= (**)
\end{align} $$
Which is what the text says. |
Show $\vdash \phi$ implies $\vdash \psi \to \phi$. | Yes, correct. You start with a derivation the end of which is $\phi$ and for which you assume it has no open assumptions, and argue that if this derivation exists, you can also find a derivation that ends in $\psi \to \phi$ with no further open assumption. When writing down the proof, you can use $\vdots$ or $\mathcal{D}$ on top of the conclusion to indicate the unknown start of the derivation.
For this particular proof (and many others of the form "If $\vdash A$ then $\vdash B$"), the argumentation that the other derivation exists will boil down to showing that the derivation ending with $\phi$ can be continued, using the rules of ND, to reach the conclusion $\psi \to \phi$. Which step(s) that continuation will consist of should now be easy to see, if you correctly understood the business about rules with dischargeable assumptions.
Which integration theory to use? | I got a fun standard one where Lebesgue integral is trivial but Riemann integration is slightly annoying:
$f(x)=1/q$ if $x=p/q$ is rational in lowest terms (with $q>0$), and $f(x)=0$ otherwise.
It's a bit of work to even show this is Riemann integrable, but the Lebesgue integral is zero as the function is zero almost everywhere.
This doesn't adequately address the question I am sure but it is worth noting. I think the value in Lebesgue integration is exactly the limit theorems you mention, dominated convergence being the most useful. |
Finding the closed form solution of a third order recurrence relation with constant coefficients | Let the generating function $$g(x)=\sum_{n=0}^\infty a_nx^n$$ with the recurrence $a_n-pa_{n-1}-qa_{n-2}-ra_{n-3}=0$
Now consider $$(1-px-qx^2-rx^3)g(x)=a_0+(a_1-pa_0)x+(a_2-pa_1-qa_0)x^2=A(x)$$ Note that all the other terms vanish because of the recurrence. We then have $$g(x)=\frac{A(x)}{1-px-qx^2-rx^3}$$
$A(x)$ is quadratic (and we have an explicit expression for it). If the denominator factors as $(1-sx)^2(1-tx)$, we have the partial fraction decomposition $$g(x)=\frac B{(1-sx)^2}+\frac C{1-sx}+ \frac D{1-tx}$$ (where $B,C,D$ are constants).
We expand using the binomial theorem and equate coefficients; the coefficients involving $n$ come from the quadratic factor in the denominator, via $\frac1{(1-sx)^2}=\sum_{n\ge0}(n+1)s^nx^n$.
If the denominator factors as $(1-sx)^3$ we have the partial fraction decomposition $$g(x)=\frac B{(1-sx)^3}+\frac C{(1-sx)^2}+ \frac D{1-sx}$$ The cubic factor, via $\frac1{(1-sx)^3}=\sum_{n\ge0}\binom{n+2}2 s^nx^n$, gives the $n^2$ factor which appears as a coefficient in the expression for $a_n$.
Note: this was adjusted in the light of Brian's comment, which highlighted a careless basic error in the original text. |