title | upvoted_answer |
---|---|
Improper Integral Question $\int_0^1 \ln\sqrt{x}dx $ | It would have been slightly more pleasant to rewrite the integrand as $\frac{1}{2}\ln x$.
After the integration, the issue is whether $\lim_{\epsilon\to 0^+}\epsilon\ln \epsilon$ exists. It does, and is equal to $0$.
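For reference, carrying out the integration with the rewritten integrand gives
$$\int_\epsilon^1 \frac{1}{2}\ln x\,dx=\frac{1}{2}\Big[x\ln x-x\Big]_\epsilon^1=-\frac{1}{2}-\frac{1}{2}\epsilon\ln\epsilon+\frac{1}{2}\epsilon,$$
so the improper integral equals $-\frac{1}{2}$ once that limit is settled.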
One way of showing it is by writing $\epsilon=e^{-t}$. Then we are interested in the limit as $t\to\infty$ of $e^{-t}(-t)$.
The fact that
$$\lim_{t\to\infty} -\frac{t}{e^t}=0$$
can be proved in various ways. It follows that $\lim_{\epsilon\to 0^+}\epsilon\ln \epsilon=0$. |
What is the correct shape of the sine integral? | Take care since there are two definitions
$$\text{sinc}(x)=\frac {\sin(x)} x \qquad \text{and} \qquad \text{sinc}(x)=\frac {\sin(\pi x)} {\pi x}$$
In digital signal processing and information theory, the second is used. |
If $i_1 : k \rightarrow K$ and $i_2 : K \rightarrow k$ are field extensions, do we have $k \cong K$? | Let $E_1$ and $E_2$ be elliptic curves over $\Bbb Q$ which are isogenous, but not isomorphic. Let $K_1$ and $K_2$ be their function fields. There are then isogenies
$i_1:E_1\to E_2$ and $i_2:E_2\to E_1$. These induce maps of function fields
$i_1^*:K_2\to K_1$ and $i_2^*:K_1\to K_2$. Then $K_2$ is isomorphic to a finite
extension of $K_1$, and vice versa yet $K_1$ and $K_2$ are not isomorphic. |
Calcuating an Integral via Residues | Consider $\displaystyle \oint_C \frac{z \, \textrm{Log }z \, dz}{(z^2+1)(z+2)}$ where $C$ is the keyhole contour that goes from $\epsilon$ to $R$ then around a large circle of radius $R$ counterclockwise, from $R$ to $\epsilon$ (below the positive real axis) and around a circle of radius $\epsilon$ clockwise. There are three poles within the contour at $\{-2,i,-i \}$.
The integral is equal to $2\pi i$ times the sum of the residues at the poles inside the contour.
The integral around the keyhole is equal to the sum of the integrals along the curves making up the keyhole contour. On the circles of radius $R$ and radius $\epsilon$, the integrals approach zero in the limit.
Taking the limit as $\epsilon\rightarrow 0$ and $R\rightarrow \infty$, and showing only the non-zero terms that remain, we have
$$\lim_{\epsilon \rightarrow 0 , R \rightarrow \infty} \left[ \int_\epsilon^R \frac{x \log x \, dx}{(x^2+1)(x+2)} - \int_\epsilon^R \frac{x (\log x+2\pi i)\,dx}{(x^2+1)(x+2)}\right]= 2\pi i \sum_{z\in \textrm{poles}} \textrm{Res }\left[ \frac{z \, \textrm{Log }z}{(z^2+1)(z+2)}\right].$$
The left-hand side is
$$-2\pi i \int_0^\infty \frac{x \, dx}{(x^2+1)(x+2)}$$
The sum of the residues (multiplied by $2\pi i$) is also purely imaginary, so we obtain a real expression for the integral.
$$ \int_0^\infty \frac{x \, dx}{(x^2+1)(x+2)} =
- \left. \text{Res } \frac{z\,\text{Log }z}{(z^2+1)(z+2)}\right|_{z=i}
- \left. \text{Res } \frac{z\,\text{Log }z}{(z^2+1)(z+2)}\right|_{z=-i}
- \left. \text{Res } \frac{z\,\text{Log }z}{(z^2+1)(z+2)}\right|_{z=-2}.$$
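(For instance, at the simple pole $z=i$,
$$\operatorname{Res}_{z=i}\,\frac{z\,\text{Log }z}{(z^2+1)(z+2)}=\left.\frac{z\,\text{Log }z}{(z+i)(z+2)}\right|_{z=i}=\frac{i\left(\frac{i\pi}{2}\right)}{2i(2+i)},$$
using $\text{Log }i=\frac{i\pi}{2}$ on the branch with argument in $(0,2\pi)$; the other two residues are computed the same way.)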
$$ \int_0^\infty \frac{x \, dx}{(x^2+1)(x+2)} =
- \frac{ i (\frac{i\pi}{2})}{2i(2+i)}
- \frac{ (-i) (\frac{3\pi i}{2})}{(-2i)(2-i)}
+ \frac{2(\log 2 + \pi i)}{5}.$$
After simplifying, we get
$$\int_0^\infty \frac{x \, dx}{(x^2+1)(x+2)}=\frac{\pi}{10} + \frac{2 \ln 2}{5}.$$
This agrees with the result you quoted from Mathematica. |
Conditional Expectation of $M_n$ given $F_n$ | This question lacks the machinery/terminology required to capture what you are (probably) trying to express, but I imagine what you meant to say is that the process $X$ is adapted to $\left(F_n\right)_{n\in \mathbb{N} }$ and that $M_n \equiv f\left(X_1,X_2,\ldots,X_n\right)$ and $f$ is "nice enough". If you want more than a heuristic explanation, I would start by reading a text on measure-theoretic probability.
A link describing what measure-theoretic probability is https://en.wikipedia.org/wiki/Probability_theory#Measure-theoretic_probability_theory
Good books are found at "Best measure theoretic probability theory book?" |
Inequality with invertible symmetric matrices | Clearly not, because the LHS can be nonzero when the RHS vanishes. E.g. let $\phi=\frac{1+\sqrt{5}}2$ and
$$v=\pmatrix{-1\\ 0\\ 0\\ \phi},\ w=\pmatrix{0\\ 1\\ 1-\phi\\ 0},\ B=\pmatrix{2\\ &1\\ &&1 \\ &&&1}.$$
Then $\langle Jv,w\rangle=0$ but $\langle JBv,Bw\rangle=1-\phi$. |
$n!>n^m$ for $n\ge?$ | Note the following:
$n\ge n$.
$(n-1)(n-2) \ge n$, for $n$ sufficiently large
$(n-3)(n-4) \ge n$, for $n$ sufficiently large
$(n-5)(n-6) \ge n$, for $n$ sufficiently large
If we do this for $m$ steps (and provided that $n$ is sufficiently large for all of them), we may multiply the LHS to get $n(n-1)(n-2)\cdots(n-2m+2)$, which (provided $n>2m-2$) will be smaller than $n!$, while the RHS will be $n^m$. All of the inequalities are implied by the last one, which is $(n-2m+3)(n-2m+2)\ge n$. This rearranges to $n^2+(-4m+4)n+(2m-3)(2m-2)\ge 0$. Take the larger root of this quadratic, and $2m-2$ from above, and the larger of these will serve for $M$.
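For example, with $m=3$ the last inequality is $(n-3)(n-4)\ge n$, i.e. $n^2-8n+12\ge 0$, whose larger root is $6$; since $2m-2=4$, we may take $M=6$, and indeed $7!=5040>7^3=343$. |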
Simplify expression in Boolean algebra | No, both questions are not correct.
The first statement is a tautology. You can prove it.
The second statement is not a tautology. You can prove it is not always true. (via counter example) |
Eigenvalue of the sum of a symmetric matrix and the outer product of its eigenvector | The action of $B$ on $v_j$ is
$$ Bv_j = (A - \lambda_i v_iv_i^T)v_j = Av_j - \lambda_i v_i v_i^T v_j $$
Since the eigenvectors are orthogonal, $v_i^Tv_j = \delta_{ij}$, assuming the eigenvectors are suitably normalized. For $j\ne i$, this gives $Bv_j = A v_j = \lambda_j v_j$. Therefore $\lambda_j$ is an eigenvalue of $B$ when $j\ne i$, and the corresponding eigenvector is the same $v_j$.
When $j=i$, then
$$ Bv_i = Av_i - \lambda_i v_i = \lambda_i v_i - \lambda_i v_i = 0 $$
so the eigenvalue of $B$ corresponding to $v_i$ is zero.
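A quick numerical illustration (my own sketch, not part of the original answer; the matrix size and index are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                              # random symmetric matrix
lam, V = np.linalg.eigh(A)                     # orthonormal eigenvectors as columns
i = 2
B = A - lam[i] * np.outer(V[:, i], V[:, i])    # deflate the i-th eigenpair
# B keeps every eigenpair of A except (lam[i], v_i), whose eigenvalue becomes 0
print(np.sort(np.linalg.eigvalsh(B)))
print(np.sort(np.append(np.delete(lam, i), 0.0)))

The two printed vectors agree up to rounding. |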
What is a "cubical map" between cubical complexes? | I think one can define cubical maps between cubical complexes in a way similar to simplicial maps between simplicial sets. A cubical map between cubical sets is a map of vertices which is compatible with the face and degeneracy maps. |
Number of equivalence classes of $w \times h$ matrices under switching rows and columns | This has a very straightforward answer using the Burnside lemma. With
$n$ rows, $m$ columns and $q$ possible values we simply compute the
cycle index of the cartesian product group ($S_n \times S_m$, consult
Harary and Palmer, Graphical Enumeration, section 4.3) and evaluate
it at $a[p]=q$ as we have $q$ possibilities for an assignment that is
constant on the cycle. The cycle index is easy too -- for two cycles
of length $p_1$ and $p_2$ that originate in a permutation $\alpha$
from $S_n$ and $\beta$ from $S_m$ the contribution is
$a[\mathrm{lcm}(p_1, p_2)]^{\gcd(p_1, p_2)}.$
We get for a $3\times3$ the following colorings of at most $q$ colors:
$$1, 36, 738, 8240, 57675, 289716, 1144836, 3780288,\ldots$$
which points us to OEIS A058001 where
these values are confirmed.
We get for a $4\times 4$ the following colorings of at most $q$ colors:
$$1, 317, 90492, 7880456, 270656150, 4947097821,
\\ 58002778967, 490172624992,\ldots$$
which points us to OEIS A058002 where
again these values are confirmed.
We get for a $5\times 5$ the following colorings of at most $q$ colors:
$$1, 5624, 64796982, 79846389608, 20834113243925, 1979525296377132,
\\ 93242242505023122, 2625154125717590496,\ldots$$
which points us to OEIS A058003 where
here too these values are confirmed.
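Before the Maple code below, an independent brute-force cross-check in Python (my own addition, not part of the original answer): it applies Burnside directly, counting the cycles that each pair of row/column permutations induces on the cells.

from itertools import permutations
from math import factorial

def mat_count_bruteforce(n, m, q):
    # Burnside: average q^(number of cell cycles) over all pairs (alpha, beta)
    total = 0
    for alpha in permutations(range(n)):
        for beta in permutations(range(m)):
            seen, cycles = set(), 0
            for i in range(n):
                for j in range(m):
                    cell = (i, j)
                    if cell in seen:
                        continue
                    cycles += 1
                    while cell not in seen:
                        seen.add(cell)
                        cell = (alpha[cell[0]], beta[cell[1]])
            total += q ** cycles
    return total // (factorial(n) * factorial(m))

print([mat_count_bruteforce(3, 3, q) for q in range(1, 5)])  # [1, 36, 738, 8240]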
This was the Maple code.
with(combinat);
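# cycle index Z(S_n), by the recurrence Z(S_n) = (1/n) sum_{l=1..n} a[l] Z(S_{n-l})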
pet_cycleind_symm :=
proc(n)
option remember;
if n=0 then return 1; fi;
expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
end;
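# split a cycle-index monomial into [coefficient, list of cycle variables with repetition]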
pet_flatten_term :=
proc(varp)
local terml, d, cf, v;
terml := [];
cf := varp;
for v in indets(varp) do
d := degree(varp, v);
terml := [op(terml), seq(v, k=1..d)];
cf := cf/v^d;
od;
[cf, terml];
end;
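# multiply two cycle lists: cycles of lengths p and q merge into a[lcm(p,q)]^(p*q/lcm(p,q))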
pet_cycles_prod :=
proc(cyca, cycb)
local ca, cb, lena, lenb, res, vlcm;
res := 1;
for ca in cyca do
lena := op(1, ca);
for cb in cycb do
lenb := op(1, cb);
vlcm := lcm(lena, lenb);
res := res*a[vlcm]^(lena*lenb/vlcm);
od;
od;
res;
end;
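# cycle index of S_n x S_m acting on the cells of an n x m matrix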
pet_cycleind_symmNM :=
proc(n, m)
local indA, indB, res, termA, termB, flatA, flatB;
option remember;
if n=1 then
indA := [a[1]];
else
indA := pet_cycleind_symm(n);
fi;
if m=1 then
indB := [a[1]];
else
indB := pet_cycleind_symm(m);
fi;
res := 0;
for termA in indA do
flatA := pet_flatten_term(termA);
for termB in indB do
flatB := pet_flatten_term(termB);
res := res + flatA[1]*flatB[1]*
pet_cycles_prod(flatA[2], flatB[2]);
od;
od;
res;
end;
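# count n x m matrices over q symbols up to row/column permutations: set a[p] = q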
mat_count :=
proc(n, m, q)
subs([seq(a[p]=q, p=1..n*m)],
pet_cycleind_symmNM(n, m));
end;
Addendum. The above can be optimized so that the contribution from
a pair $(\alpha,\beta)$ does not require computing $l_\alpha \times
l_\beta$ cycle pairs (product of total number of cycles) but only
$m_\alpha \times m_\beta$ cycle pairs (product of number of different
cycle sizes present). This is shown below.
with(combinat);
pet_cycleind_symm :=
proc(n)
option remember;
if n=0 then return 1; fi;
expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
end;
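# flatten to [coefficient, list of [cycle length, multiplicity]] to avoid repeating cycles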
pet_flatten_termA :=
proc(varp)
local terml, d, cf, v;
terml := [];
cf := varp;
for v in indets(varp) do
d := degree(varp, v);
terml := [op(terml), [op(1,v), d]];
cf := cf/v^d;
od;
[cf, terml];
end;
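# as before, but the exponent now scales with the multiplicities insta*instb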
pet_cycles_prodA :=
proc(cyca, cycb)
local ca, cb, lena, lenb, insta, instb, res, vlcm;
res := 1;
for ca in cyca do
lena := op(1, ca);
insta := op(2, ca);
for cb in cycb do
lenb := op(1, cb);
instb := op(2, cb);
vlcm := lcm(lena, lenb);
res := res*
a[vlcm]^(insta*instb*lena*lenb/vlcm);
od;
od;
res;
end;
pet_cycleind_symmNM :=
proc(n, m)
local indA, indB, res, termA, termB, flatA, flatB;
option remember;
if n=1 then
indA := [a[1]];
else
indA := pet_cycleind_symm(n);
fi;
if m=1 then
indB := [a[1]];
else
indB := pet_cycleind_symm(m);
fi;
res := 0;
for termA in indA do
flatA := pet_flatten_termA(termA);
for termB in indB do
flatB := pet_flatten_termA(termB);
res := res + flatA[1]*flatB[1]*
pet_cycles_prodA(flatA[2], flatB[2]);
od;
od;
res;
end;
mat_count :=
proc(n, m, q)
subs([seq(a[p]=q, p=1..n*m)],
pet_cycleind_symmNM(n, m));
end;
Addendum Nov 17 2018. There is additional simplification possible
here, based on the simple observation that a product of powers of
variables implements the multiset concept through indets (distinct
elements) and degree (number of occurrences). This means there is
no need to flatten the terms of the cycle indices $Z(S_n)$ and
$Z(S_m)$ to build multisets, we have multisets already and we may
instead iterate over the variables present in pairs of monomials
representing a conjugacy class from $Z(S_n)$ and $Z(S_m)$ and compute
$a[\mathrm{lcm}(p_1, p_2)]^{\gcd(p_1, p_2)}$ for pairs of cycles
$a_{p_1}$ and $a_{p_2}.$ This makes for a highly compact algorithm,
which will produce e.g. for a three by four,
$${\frac {{a_{{1}}}^{12}}{144}}+1/24\,{a_{{1}}}^{6}{a_{{2}}}^{3}
+1/18\,{a_{{1}}}^{3}{a_{{3}}}^{3}+1/12\,{a_{{2}}}^{6}
\\+1/6\,{a_{{4}}}^{3}+1/48\,{a_{{1}}}^{4}{a_{{2}}}^{4}
+1/8\,{a_{{2}}}^{5}{a_{{1}}}^{2}+1/6\,a_{{1}}a_{{2}}a_{{3}}a_{{6}}
\\+1/8\,{a_{{3}}}^{4}+1/12\,{a_{{3}}}^{2}a_{{6}}
+1/24\,{a_{{6}}}^{2}+1/12\,a_{{12}}.$$
This is the Maple code.
with(combinat);
pet_cycleind_symm :=
proc(n)
option remember;
if n=0 then return 1; fi;
expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
end;
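# same computation, iterating directly over the variables of each pair of monomials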
pet_cycleind_symmNM :=
proc(n, m)
local indA, indB, res, termA, termB, varA, varB,
lenA, lenB, instA, instB, p, lcmv;
option remember;
if n=1 then
indA := [a[1]];
else
indA := pet_cycleind_symm(n);
fi;
if m=1 then
indB := [a[1]];
else
indB := pet_cycleind_symm(m);
fi;
res := 0;
for termA in indA do
for termB in indB do
p := 1;
for varA in indets(termA) do
lenA := op(1, varA);
instA := degree(termA, varA);
for varB in indets(termB) do
lenB := op(1, varB);
instB := degree(termB, varB);
lcmv := lcm(lenA, lenB);
p :=
p*a[lcmv]^(instA*instB*lenA*lenB/lcmv);
od;
od;
res := res + lcoeff(termA)*lcoeff(termB)*p;
od;
od;
res;
end;
mat_count :=
proc(n, m, q)
subs([seq(a[p]=q, p=1..n*m)],
pet_cycleind_symmNM(n, m));
end; |
How to tell Mathematica about a constant? | If you don't specify any information about $a$, Mathematica automatically treats it as a constant; this is the default behaviour. For example, if you input "Integrate[$a$ $x$,$x$]", the output will be $a x^2/2$ as expected. Do not forget that in Mathematica, multiplication is written as a space, so you need to put a space between $a$ and $x$ for the command I gave to work (otherwise it will treat $ax$ as a single variable). |
If $||A^N|| < 1$ then is $||A||<1$? | Counterexample to the first question: a nilpotent matrix. The second part is true, however: Since $A$ is bounded linear there is a uniform bound $c$ on $||A^1||,\dots, ||A^n||$, and then we have $|A^{kn+i}v|\le c||A^n||^k|v|\to 0$. |
References on similarity orbits of operators | I really want to close this problem. As mentioned by user6299 in his/her comment, Herrero has done a lot of work on problems related to orbits of operators in Hilbert spaces. Also a good reference (but quite hard) is his book Approximation of Hilbert Space Operators. |
subtraction of subspace? | If you have the bases in the form of matrices of column vectors, say $A$ for $V\cap W$ and $B$ for $V+W$ then put them together as $[A \ B]$ and make column reductions towards the right.
If $\dim(V\cap W)=k < \dim(V+W)=n$ then the first $k$ vectors form a basis of $V\cap W$ and the next (non-zero) $n-k$ vectors will give a complement (i.e. a basis for $X$) in $V+W$. |
Calculus: How do you solve this double sequence as its limit approaches infinity? | $$\lim_{n\rightarrow \infty}\sum_{k=1}^{n}[2+\frac{3}{n}k]^2\left(\frac{3}{n}\right)$$
We first remember that
$$b\int_0^1 f(x) \, dx = \lim_{n \to \infty} \frac{b}{n}\sum_{i=1}^{n} f\left(\frac{i}{n}\right)$$
If we adjust the sum we get
$$\lim_{n\rightarrow \infty}\frac{3}{n}\sum_{i=1}^{n}\left[2+3\frac{i}{n}\right]^2$$
And we now transform this to
$$3\int_0^1[2+3x]^2\, dx = \color{red}{39}$$
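Indeed, $\int_0^1 (2+3x)^2\,dx=\left[\frac{(2+3x)^3}{9}\right]_0^1=\frac{125-8}{9}=13$, and $3\cdot 13=39$. |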
Combinatorial group theory books | As mentioned, Presentations of Groups by D.L. Johnson is nice.
Try also these books:
Topics in the Theory of Group Presentations by D.L. Johnson
Combinatorial Group Theory by Roger C. Lyndon and Paul E. Schupp
Combinatorial Group Theory: Presentations of Groups in Terms of Generators and Relations, by Wilhelm Magnus, Abraham Karrass, Donald Solitar |
Coproducts in $\mathbf{Grp}$ and in $\mathbf{Ab}$ | In the category of groups, the coproduct is the free product of groups. For example, $A = B = \mathbb Z$ has $F_2$ (free group on $2$ generators) as coproduct and $\mathbb Z^2$ as product. |
diophantine equation with squares over 3 variables | by considering x,y are both even and let $x^2=s ,y^2=r$
then $sr+s+r=(2z)^2$ this equation is equivalent to:
$(2s+r+1)^2=(r-1)^2+(2s)^2+(4z)^2$ you can check that
and the positive solutions for the last equation are given by the dimensions and the length of the diagonal of a rectangular box which is a related problem to Pythagorean Triple.
so $s=a , b=a+1 , r=(4z^2-a)/b$
& b is a divisor of $a^2+4z^2 ,b<√(a^2+4z^2 ) , 1<2z^2$
one can solve this equation over the positive integers if he just know the value of $a$ |
Let $T : V → W$ be a linear transformation. | HINT: If $\{\alpha_1,...,\alpha_n\}$ is a basis for $V$, is $\{T(\alpha_1), ..., T(\alpha_n)\}$ a basis for $W$? Also, you may have to be careful if the dimensions of $V$ and $W$ differ, as the matrix of the linear transformation won't be square. |
If I take a line, take the cube, add a constant, over what range is the cube root roughly linear? | We have that
$$\begin{align*}
y &= \sqrt[3]{(mx+b)^3+C} \\
&= \sqrt[3]{(mx+b)^3\cdot\left(1+\dfrac{C}{(mx+b)^3}\right)} \\
&= (mx+b)\cdot\sqrt[3]{1+\dfrac{C}{(mx+b)^3}} \\
\end{align*}$$
As $x\to\infty$, or as $x\to-\infty$, we have that $\dfrac{C}{(mx+b)^3}\to0$ and $\sqrt[3]{1+\dfrac{C}{(mx+b)^3}}\to1$.
So $\sqrt[3]{(mx+b)^3+C}$ will get closer to $mx+b$ if we make $x$ very big or very negative.
Edit: When I wrote the above, I was assuming that $m\ne0$. Of course, if $m=0$ and $y=\sqrt[3]{(mx+b)^3+C}$, then $y$ is constant. |
Intuition behind the Jacobi triple product | The lecture notes of Igor Pak "Partition bijections, a survey" give several nice combinatorial proofs in chapter $6$ - Jacobi’s triple product identity: http://www.math.ucla.edu/~pak/papers/psurvey.pdf. It is perhaps a matter of taste what the intuition behind it is. This involves certainly much more than just a nice combinatorial argument, e.g., elliptic functions, Jacobi theta functions, etc., see the interesting discussion here: Motivation for/history of Jacobi's triple product identity. |
Uses for esoteric integral symbols | The one with the sloped dash is sometimes used to denote integral averages:
$$\fint_A f(x)\,d\mu(x) = \frac{1}{\mu(A)} \int_{A} f(x)\, d\mu(x)$$ |
Solving a "simple" quadratic/quartic equation | Hint:
Write the polynomial in canonical form:
$$(w^2-w_0^2)^2-w_0^4\Bigl(\Bigl(1+\frac1{2Q^2}\Bigr)^2-1\Bigr)=(w^2-w_0^2)^2-\frac{w_0^4}{2Q^2}\frac{4Q^2+1}{2Q^2},$$
whence
$$w^2=w_0^2\pm\frac{w_0^2}{2Q^2}\sqrt{4Q^2+1}.$$ |
Why is $f = x^3$ a homeomorphism when its inverse is undefined for all negative numbers? | In some contexts, it is preferable to leave $x^a$ undefined when $x<0$ and $a\notin \mathbb{Z}$.
But when considering the function $f(x)=x^n$ with odd $n$, we are faced with the fact that it's naturally defined on all of $\mathbb{R}$, is one-to-one, and its range is also all of $\mathbb{R}$. So it has an inverse, $f^{-1}$. Then we need notation for that inverse, and there isn't any better than $x^{1/3}$... even if in other contexts, we might leave $(-2)^{1/3}$ undefined. |
Shared terms in an arithmetic sequence | As requested in comments:
Hint: Consider $$2,10,18,\mathbf{26},34,42,\mathbf{50},\ldots$$ and $$5,8,11,14,17,20,23,\mathbf{26},29,32,35,38,41,44,47,\mathbf{50},\ldots$$
and think about the gaps between the shared terms and how many of them you need to get $41$ shared terms. |
a question related to distribution of a sum of random variables | In general
What we're doing from step (A) to step (B) is basically calculating the probability of $X$ being less than or equal to $z-Y$, with the help of double integrals. To begin with, let's generalize a little bit. Let's calculate the probability of events that satisfy $G(x,y)\leqslant 0$.
On the $XOY$ plane you can draw the domain $D_{XOY}$ that solves $G(x,y)\leqslant 0$. Integrating the density over the region where $X\leqslant z-Y$ gives the desired probability. So:
$$
\operatorname{Pr}[X \leqslant z-Y] = \iint_{D_{XOY}} f_{X,Y}(x, y) \mathrm d x \mathrm d y
$$
With the definition of the joint probability distribution in mind, we have
$ f_{X,Y}(x, y) = f_{X|Y}(x|y)f_Y(y) = f_{Y|X}(y|x)f_X(x) $.
Only when $X$ and $Y$ are independent of one another do we have $f_{X,Y}(x,y)=f_X(x)f_Y(y)$.
Proper domain
Remember that with a proper shape of $D_{XOY}$ you could rewrite the double integral by choosing an optimal order of integration. This is discussed at Order of Integration (Wikipedia).
If your domain is improper, it simply tells you that you should divide your $D_{XOY}$ into multiple parts with proper domain, and sum their probability.
A few common cases
Case of Addition, $G(x,y)=ax+by-c\leqslant0$
In your case, assume $X$ and $Y$ are independent random variables.
As a matter of fact, since your domain $D_{XOY}$ is a proper triangular shape, you could integrate in any order. Let's try first $x$, last $y$; i.e., for every possible $y\in Y$, $x\in D_x(y)=\left[x_{min}(y),x_{max}(y)\right]$; then
$$
\iint_{D_{XOY}} f_{X,Y}(x, y) \mathrm d x \mathrm d y = \int_{D_Y} \int_{x=x_{min}(y)}^{x_{max}(y)} f_{X,Y}(x, y) \mathrm d x \mathrm d y
$$
Assuming independence, then $f_{X,Y}(x,y)=f_X(x)f_Y(y)$.
$$
\int_{D_Y} \int_{x=x_{min}(y)}^{x_{max}(y)} f_{X,Y}(x, y) \mathrm d x \mathrm d y
=
\int_{-\infty}^{+\infty} {\color{blue}{\int_{-\infty}^{z-y} f_X(x)f_Y(y) \mathrm d x}}\mathrm d y
=
\int_{-\infty}^{+\infty} F_x(z-y) f_Y(y) \mathrm d y
$$
Since your domain of integration $D_{XOY}$ is also vertically-sliceable (first $y$ last $x$), we may also calculate it in the following manner:
$$\begin{align}
\operatorname{Pr}[X \leqslant z-Y] &= \iint_{D_{XOY}} f_{X,Y}(x, y) \mathrm d x \mathrm d y
= \int_{-\infty}^{+\infty} {\color{blue}{\int_{-\infty}^{z-x} f_X(x)f_Y(y) \mathrm d y}}\mathrm d x\\
&= \int_{-\infty}^{+\infty} F_Y(z-x) f_X(x) \mathrm d x
\end{align}$$
About Multiplication, $G(x,y)=xy-z\leqslant0$
Given any distributions $X\geqslant0$, $Y\geqslant0$ where X and Y are independent, find $\operatorname{Pr}[XY\leqslant z]$ where $z\geqslant0$. So here we have $G(x,y)=xy-z\leqslant0$.
All right. It's the area to the lower left of hyperbola $xy=z$. So, Find $\operatorname{Pr}[X\leqslant z/Y]$ and apply the same steps above.
About Division, $G(x,y)=y/x-z\leqslant0$
Given any distributions $X>0$, $Y\geqslant0$ where X and Y are independent, find $\operatorname{Pr}[\frac{Y}{X}\leqslant z]$. Hmm, this looks suspicious.
It so happens that $\operatorname{Pr}[\frac{Y}{X}\leqslant z] = \operatorname{Pr}[Y\leqslant z X]$. Remember that the geometric interpretation for $\frac{Y}{X}$ is the slope of line segment with ends $(0,0)$ and $(X,Y)$. So, the desired shape of integration is yet another infinitely-large triangular area.
Other cases
It's high time that you tried the general method on all other cases. |
Dirac delta on differential equation | We have for any $\epsilon>0$, any fixed $i$, and any smooth test function $\phi$
$$\begin{align}
\int_{t_i+\epsilon}^{t_{i+1}-\epsilon}\phi(t)\frac{dx(t)}{dt}dt&=\int_{t_i+\epsilon}^{t_{i+1}-\epsilon}\phi(t)F(x(t),t)dt+\sum_{k=1}^n\int_{t_i+\epsilon}^{t_{i+1}-\epsilon}\phi(t)g(t,x)\delta(t-t_k)dt\\\\
&=\int_{t_i+\epsilon}^{t_{i+1}-\epsilon}\phi(t)F(x(t),t)dt
\end{align}$$
since the Dirac Delta "functions" were not "active" in the interval of integration. Thus,
$$\begin{align}
\int_{t_i+\epsilon}^{t_{i+1}-\epsilon}\phi(t)\left(\frac{dx(t)}{dt}-F(x(t),t)\right)dt&=0 \tag 1
\end{align}$$
for all test functions $\phi$. Noting that $(1)$ holds for $\phi(t)=\frac{dx(t)}{dt}-F(x(t),t)$ reveals that
$$\begin{align}
\int_{t_i+\epsilon}^{t_{i+1}-\epsilon}\left(\frac{dx(t)}{dt}-F(x(t),t)\right)^2dt&=0 \tag 2
\end{align}$$
The only way for $(2)$ to be zero is if the integrand is zero almost everywhere. Inasmuch as we assumed that $\phi=x'-F$ is a "smooth" function, then we have $\frac{dx}{dt}=F$ as was to be shown. |
How to prove that $E[||\mathbf x||^4]=K(K+1)$? where $K$ is the length of $\mathbf x$ | Let $y_i=\operatorname{re}(x_i)$ and $z_i=\operatorname{im}(x_i)$. Then, for $\sigma_x=1$,
\begin{align}
\mathsf{E}|\mathbf{x}|^4&=\mathsf{E}\left(\sum_{i=1}^K (y_i^2+z_i^2)\right)^2 \\
&=\sum_{i=1}^K\mathsf{E}(y_i^2+z_i^2)^2+\sum_{i=1}^K\sum_{j\ne i}\mathsf{E}(y_i^2+z_i^2)(y_j^2+z_j^2) \\
&=\sum_{i=1}^K(\mathsf{E}y_i^4+\mathsf{E}z_i^4+2\mathsf{E}y_i^2\mathsf{E}z_i^2)+\sum_{i=1}^K\sum_{j\ne i}\mathsf{E}(y_i^2+z_i^2)\mathsf{E}(y_j^2+z_j^2) \\
&=\sum_{i=1}^K 8\cdot2^{-2}+\sum_{i=1}^K\sum_{j\ne i}4 \cdot 2^{-2}=K(K+1).
\end{align}
For $\sigma_x\ne 1$, note that $\mathbf{x}/\sigma_x\sim \mathcal{CN}(0,I)$. |
Is there a characterization for pairs $(m,n)$ such that $m|n^2-1$ and $n|m^2-1$? | Given $m,n$ with $m<n$, write $p=\frac{m^2-1}{n}$. We clearly have $p<m$.
Now, we see that $np\equiv -1\pmod m$. Since $n^2\equiv 1\pmod m$ this means that $p^2\equiv 1\pmod{m}$. So $m\mid p^2-1$.
But we also have that $p\mid m^2-1$. So we have a "smaller" solution, $(p,m)$.
This lets us see that the sequence must be of the form:
$$a_0=0,a_1=1, a_2=?, a_{n+1}=\frac{a_n^2-1}{a_{n-1}}$$
The value of $a_2$ then determines $k$ - that is, $a_2=k$.
Then it is just an induction proof to show that this $a_i$ is your $a_i^k$.
Then consider the case $m=n$. |
Prove/disprove: $\lim_{x\to x_0}f(x)=L\iff \leq \delta$ and $\leq \epsilon$ | Your attempt for the $\implies$ direction falls short when you try to work out the details. You'd have to show $|x-x_0|\leq \delta \implies |x-x_0|< \delta \implies|f(x)-L|< \epsilon \implies|f(x)zL|\leq \epsilon$. You have the second implication by definition and the third implication by the argument you presented, but not the first.
Instead, you want to think about $\delta$ as a function of $\epsilon$ and change the parameters as follows. Let $\Delta(\epsilon)$ give the $\delta$ which satisfies the limit definition for $\epsilon$. To satisfy the given condition, choose a smaller $\delta$, e.g., $\frac{\Delta(\epsilon)}2$.
I'll leave you to work out the details, and the other direction is similar. Let me know if you need more help. |
What does $f:[0, 1] \rightarrow [0,1]\times[0, 1]$ mean? | The output lives in the set $\{(x,y) | x \in [0,1], y\in [0,1] \}$.
In LaTeX you should use "\times" rather than a capital X to represent this - it's called a Cartesian product of two sets. |
Is there a connection between $tanh(nx)$ and $\frac{nx}{1+nx}$? | We have
$$\tanh(nx) = \frac{e^{2nx}-1}{e^{2nx}+1}.$$
Note as $x \to 0$ then $e^{2nx}$ can be approximated by $1 + 2nx$ so that then
$$\tanh(nx) = \frac{e^{2nx}-1}{e^{2nx}+1} \approx \frac{1+2nx - 1}{1 + 2nx + 1} = \frac{nx}{1+nx}.$$
So they're related in the sense that the latter approximates the former at $0$. |
What is the probability of occurrence of natural numbers? | A distribution like $P(n) = 2^{-n}$ isn't going to give very good predictions. According to that I'm $8$ times more likely to say "$997$" than "$1000$". You mention that base $10$ is sort of a cultural thing that should maybe be ignored, but plenty of numbers distinguish themselves for other reasons and should be more common than their neighbors.
One way of improving this situation is to fix some description language and then define $P(n) = c \cdot 2^{-2^{K(n)}}$ where $K(n)$ is the Kolmogorov complexity of some number $n$ and $c$ is a constant chosen to make all the probabilities add up to $1$. This will give higher probabilities to numbers with shorter descriptions, so "googolplex" will have a much higher probability than some random number with a googol digits. Like base $10$, whatever description language you choose will come with some cultural baggage but some are definitely more natural than others (for example English vs. combinator calculus). |
Discrete maths... Is this a valid argument?? | This is a valid argument, by Modus Tollens applied twice.
Let p be "one reads a lot".
Let q be "one is a brilliant conversationalist".
Let r be "one has many friends".
By MT, given "if p then q", or in English, "If one reads a lot, one is a brilliant conversationalist". If ~q, or, "one isn't a brilliant conversationalist", then ~p holds, or "one doesn't read a lot".
By MT for the second pair, and by the same line of logic, if one doesn't have many friends, one isn't a brilliant conversationalist. And by our first line of logic, if one isn't a brilliant conversationalist, one doesn't read a lot.
Therefore, if one doesn't have many friends, one doesn't read a lot.
Obviously in the real world, this could be false, but by the premises given, it is a valid conclusion. |
Are there trigonometric and hyperbolic identities that are true in $\mathbb{R}$ but not true in $\mathbb{C}$ | If an identity of analytic functions holds on a set with a limit point, then it holds everywhere.
Of course $\mathbb R$ is a set with a limit point.
So a counterexample must involve non-analytic functions. Like this
$$
x^2 = |x|^2
$$
holds on $\mathbb R$, but
$$
z^2 = |z|^2
$$
fails on most of $\mathbb C$.
Another example, deadly to calculus students:
$$
\int\frac{dx}{x}=\log|x|+C
$$
true in real calculus, but
$$
\int \frac{dz}{z} = \log|z|+C
$$
is false in complex calculus. The remedy:
$$
\int\frac{dx}{x}=\log x +C
$$
is true even in the case of a real variable, but may have a complex constant $C$ since $\log x$ may be complex when $x<0$. |
Gaussian Distribution with conditional mean | The expression $N(x \mid \mu_0, \sigma_0)$ is the notation of Bishop to denote the Gaussian distribution under the condition that the mean and standard deviation are given by $\mu_0$ and $\sigma_0$ respectively. So, your expression means: $\mu$ is normally distributed with mean $\mu_0$ and standard deviation $( \lambda_0 \tau )^{-1}$, which translates to the pdf
$$
\frac{\lambda_0 \tau}{\sqrt{ 2 \pi }} e^{-\frac{(\lambda_0 \tau)^2}{2} (\mu - \mu_0)^2} .
$$ |
monotonically reducing euclidean distance | Denote the angle TQP as b, and TPQ as c. Since a < TPX, we have that $\sin b = \sin TQX > \sin a$. We also have that: $\frac{|PT|}{\sin b} = \frac{|QT|}{\sin c}$ & $ \frac{|PT|}{\sin a} =\frac{|TX|}{\sin c}$. Therefore, $|QT|<|TX|$. |
Is a metrizable subspace of a separable space separable? | No. Let $X=[0;1]^{\frak c}$ be a Tychonoff cube. Then $X$ is separable, $w(X)=\frak c$, but it contains a discrete subspace $M=\{e_\alpha:\alpha<\frak c\}$ (where $e_\alpha=((e_\alpha)_{\beta})_{\beta<\frak c}$ with $(e_\alpha)_{\alpha}=1$ and $(e_\alpha)_{\beta}=0$ for $\beta\ne\alpha$) of size $\frak c$. |
Curious case of asymptotic equivalence | Just a comment as a contribution: I believe you could be interested in Theorem 8 of Jakimczuk, Functions of Slow Increase and Integer Sequences, Journal of Integer Sequences, Vol. 13 (2010), Article 10.1.1. It is an open access journal. |
For which natural numbers $a$ the function $f(x)=x^ae^x$ has exactly one extremum? | I don't know why you say "it's not true". By the usual definition, a stationary point of $f$ is a zero of $f′$, so you are indeed looking
for when $f'$ has one zero.
If $a>1$, both $x=0$ and $x=−a$ are stationary points. What happens if $a=0$? If $a=1$? |
Prove that a non-positive definite matrix has real and positive eigenvalues | $J$ is $2 \times 2$ and has rank $2$, so $J$ is invertible. Consequently, the Moore-Penrose pseudoinverse of $J$, $J^+ = J^{-1}$. Then \begin{align*}
J^+ A J = J^{-1} A J
\end{align*} is similar to $A$ and has the same eigenvalues as $A$. $A$ is a diagonal matrix, so is its own symmetric part. Therefore, since $A$ is a real positive definite matrix, $A$ and $J^+ A J$ have all positive eigenvalues.
Edit:
Assuming your $3 \times 3$ version of $A$ is also diagonal... Take \begin{align*}
J &= \begin{pmatrix} 1 & 4 \\ 0 & 1 \\ 0 & 1 \end{pmatrix} \text{,} \\
A &= \begin{pmatrix} 1/2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \text{.}
\end{align*} Then the rank of $J$ is $2$. (To see this, subtract row 3 from row 2 to get row-echelon form and note that we have two nonzero rows. Alternatively, the two columns are linearly independent.) We find \begin{align*}
J^+ &= \begin{pmatrix} 1 & -2 & -2 \\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix} \text{,} \\
B = J^+ A J &= \begin{pmatrix} \frac{1}{2} & -2 \\ 0 & 1 \end{pmatrix} \text{,}
\end{align*} and the symmetric part of $B$ is \begin{align*}
B_S &= \begin{pmatrix} \frac{1}{2} & -1 \\ -1 & 1 \end{pmatrix} \text{.}
\end{align*} The eigenvalues of $B_S$ are $\frac{1}{4}(3 \pm \sqrt{17})$ and the smaller one is negative. Thus, we have found choices for $J$ and $A$ yielding a $J^+ A J$ which is not positive definite.
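A short numpy confirmation of these numbers (my addition; it simply replays the computation above):

import numpy as np

J = np.array([[1.0, 4.0], [0.0, 1.0], [0.0, 1.0]])
A = np.diag([0.5, 1.0, 1.0])
B = np.linalg.pinv(J) @ A @ J          # J^+ A J
B_S = (B + B.T) / 2                    # symmetric part
print(np.linalg.eigvalsh(B_S))         # approx [-0.281, 1.781] = (3 -+ sqrt(17))/4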
(Note, these values aren't "special". They're more or less the first thing I tried after using a CAS to find the expressions for the eigenvalues. Two more: let the right column of $J$ and $A$ be $(85, -21, -42)$ and $\mathrm{diag}(15, 2173/29, 58)$, respectively, or $(-271, 0,-18)$ and $\mathrm{diag}(93, 70, 148)$, respectively. These were found with that same CAS using a random number function for several numbers and messing around with the remaining ones.) |
Geometric intepretation of matrix vector product | It looks as if you're on the right track. The problem really wants you to look at this in terms of $c$, so restating what you've said,
For $c = 0$, the vector $\pmatrix{0 \\ 1}$ is a solution (as is any multiple of it).
For $c = 1$, the vector $\pmatrix{1 \\ 0}$ is a solution (as is any multiple of it).
For any other $c$, $Av = cv$ has no nonzero solutions (indeed, no solutions at all).
In short: you're on exactly the right track, and a little reorganizing of what you wrote gets things to be expressed the way the question-asker was probably hoping they'd be. |
$\sum_{i=1}^n 1/i \leq c\log n$ | Consider the function $ f(x) = \frac{1}{x} $; it is decreasing on $(0, + \infty )$, so comparing the sum with the area under the curve we have $$\sum_{i = 2}^{n}\frac{1}{i} \leq \int_{1}^{n} \frac{1}{x} dx = \log n$$ and so $$\sum_{i = 1}^{n}\frac{1}{i} \leq \log n + 1 \leq c \log n$$ for all $n \geq 2$, taking e.g. $c = 1 + \frac{1}{\log 2}$. |
Very basic question on vector line integrals. | The point is that when you write $\vec{dl} = -dx\hat{i}$, you are already setting the orientation of the path, but the orientation is already included in your order of integration from $a$ to $0$, which says you are going backwards. So, you should use one of the following:
$$\int_{0}^{a}\hat{i}\cdot(-dx\hat{i}) \quad \mbox{or} \quad \int_{a}^{0}\hat{i}\cdot dx\hat{i}$$
Note that there is another approach which is more rigorous and prevents this sort of mistake. You have $\vec{F}(\vec{r}) = (1,0)$ and the parametrized curve is $\gamma(t) = (t, 1)$, where $t$ goes from $a$ to $0$. Now, $\gamma'(t) = (1,0)$ and $F(\gamma(t))=(1,0)$, so:
$$\int_{C}\vec{F}(\vec{r})\cdot d\vec{l} = \int_{a}^{0}\vec{F}(\gamma(t))\cdot\gamma'(t)dt = \int_{a}^{0}(1,0)\cdot (1,0)dt = \int_{a}^{0}dt = -a$$ |
Big-Oh of exponent of exponent | We could try to compute
$$L_1=\lim_{n\to \infty} \frac{a^{b^n}}{b^{a^n}}\text{,}$$
but it is harder than computing the limit
$$L_2=\lim_{n\to \infty} \log \left(\frac{a^{b^n}}{b^{a^n}} \right) = \lim_{n\to \infty} \left(\log (a^{b^n}) - \log (b^{a^n})\right)\text{.}$$
Since $\log x^y = y\log x$ for all $x,y>0$, it holds that
$$L_2=\lim_{n\to \infty} \left(b^n \log a -a^n \log b \right)\text{.}$$
Now, consider all the cases: $1<a<b$, $a<1<b$, $a=b<1$, ...
If $L_2=\infty$, then note that $L_1 = \infty$ and prove $b^{a^n}\in\mathcal{O}(a^{b^n})$.
If $L_2=-\infty$, then note that $L_1 = 0$ and prove that $a^{b^n}\in\mathcal{O}(b^{a^n})$.
If $-\infty<L_2<\infty$, then note that $-\infty<L_1<\infty$ and prove that $b^{a^n}\in\mathcal{O}(a^{b^n})$ and $a^{b^n}\in\mathcal{O}(b^{a^n})$. |
Does the boundaries of non-disjoint sets in Euclidean space have common element? | As commenters pointed out, this is false as stated: e.g., David Mitra gave the intervals $(0,2)$ and $(1,3)$ as a counterexample.
Here is a positive result (a rather trivial one). Assume $\partial A$ is connected and the sets $\partial A\cap B$ and $\partial A\setminus B$ are not empty. Then $\partial A\cap \partial B$ is not empty.
Indeed, if $\partial A\cap \partial B=\varnothing$, then every point of $\partial A$ is either an interior point of $B$ or an exterior point of $B$. Then
$$\partial A=(\partial A\cap \operatorname{int}B)\cup (\partial A\cap \operatorname{ext}B)$$
where both sets on the right are relatively open in $\partial A$ and nonempty. This contradicts $\partial A$ being connected. |
Solve $(x^4+y^4)\,\text{d}x-xy\,\text{d}y=0$ | $$x^4+y^4=x^2\frac{2ydy}{2xdx}$$
Let $\quad\begin{cases}x^2=X\\y^2=Y\end{cases}$
$$X^2+Y^2=X\frac{dY}{dX}$$
$$\frac{dY}{dX}=X+\frac{Y^2}{X}$$
Riccati ODE. Let : $\quad Y(X)= -X\frac{F'(X)}{F(X)} \quad\implies\quad Y'=-\frac{F'}{F} -X\frac{F''}{F}+X\frac{(F')^2}{F^2} $
$\frac{dY}{dX}=X+\frac{Y^2}{X}=-\frac{F'}{F} -X\frac{F''}{F}+X\frac{(F')^2}{F^2}=X+\frac{\left(-X\frac{F'}{F} \right)^2}{X}$
$-\frac{F'}{F} -X\frac{F''}{F}=X$
$$F''(X)+\frac{1}{X}F'(X)+F(X)=0$$
Bessel equation :
$$F(X)=c_1J_0(X)+c_2Y_0(X)$$
$J_0(X)$ and $Y_0(X)$ are the Bessel functions of first and second kind, of order $0$.
$\frac{dJ_0(X)}{dX}=-J_1(X)$ and $\frac{dY_0(X)}{dX}=-Y_1(X)$
$J_1(X)$ and $Y_1(X)$ are the Bessel functions of first and second kind, of order $1$.
$\frac{F'(X)}{F(X)}=-\frac{c_1J_1(X)+c_2Y_1(X)}{c_1J_0(X)+c_2Y_0(X)} = -\frac{C\,J_1(X)+Y_1(X)}{C\,J_0(X)+Y_0(X)} $ where $C=\frac{c_1}{c_2}$
$Y(X)=-X\frac{F'(X)}{F(X)}=X\frac{C\,J_1(X)+Y_1(X)}{C\,J_0(X)+Y_0(X)}$
$$y(x)=\pm \,x\,\sqrt{\frac{C\,J_1(x^2)+Y_1(x^2)}{C\,J_0(x^2)+Y_0(x^2)}}$$
$C$ is an arbitrary constant. |
Volume form on a complex manifold vs. volume form on the underlying real manifold | A nowhere zero section $\Omega$ of $K_X$ is called a holomorphic volume form, but it is not a volume form on the underlying smooth manifold $X$ precisely for the reason you point out. However, in any coordinate chart $(U, (z^1, \dots, z^n))$, $\Omega = fdz^1\wedge\dots\wedge dz^n$ for some nowhere-zero holomorphic function $f$ on $U$, so
\begin{align*}
\Omega\wedge\overline{\Omega} &= fdz^1\wedge\dots\wedge dz^n\wedge\overline{(fdz^1\wedge\dots\wedge dz^n)}\\
&= fdz^1\wedge\dots\wedge dz^n\wedge(\bar{f}d\bar{z}^1\wedge\dots\wedge d\bar{z}^n)\\
&= |f|^2dz^1\wedge\dots\wedge dz^n\wedge d\bar{z}^1\wedge\dots\wedge d\bar{z}^n\\
&= (-1)^{n(n-1)/2}|f|^2dz^1\wedge d\bar{z}^1\wedge\dots\wedge dz^n\wedge d\bar{z}^n\\
&= (-1)^{n(n-1)/2}|f|^2(-2idx^1\wedge dy^1)\wedge\dots\wedge(-2idx^n\wedge dy^n)\\
&= (-1)^{n(n-1)/2}(-2i)^n|f|^2dx^1\wedge dy^1\wedge\dots\wedge dx^n\wedge dy^n.
\end{align*}
As $f$ is nowhere-vanishing, $\Omega\wedge\overline{\Omega}$ is a nowhere vanishing $2n$-form and hence a volume form on the smooth $2n$-dimensional manifold $X$.
Note, another distinction is that an orientable smooth manifold always admits a volume form, but a complex manifold need not admit a holomorphic volume form. A complex manifold which admits a holomorphic volume is called a Calabi-Yau manifold. |
How $\nabla_x f(x,y) = -2y$ holds when $f(x,y) = 2x \cdot y$ | For arbitrary functions $g(x,y)$ which is differentiable in $x$ we have
$$ \nabla_xg = \left( \begin{matrix} \frac{\partial g}{\partial x_1} \\ \frac{\partial g}{\partial x_2} \\ \frac{\partial g}{\partial x_3}\end{matrix} \right)$$
As $f(x,y) = 2\sum_{i=1}^3 x_i y_i$, we have $$\frac{\partial f}{\partial x_j} = 2\sum_{i=1}^3 \frac{\partial}{\partial x_j}\left(x_i y_i\right) = 2 y_j$$
So $\nabla_x f = 2y$, unless somewhere in the paper they define $\nabla_x$ to be minus the 'normal' $\nabla_x$. |
Red black ordering of a matrix | Check this link to some slides:
http://tel-zur.net/teaching/bgu/pp/slides11.ppt
and please tell me whether that was helpful. |
Power series expansion of f(x)=1/(1-x) around x=0 and x=-1 | Hint: remember that for $\lvert x\rvert <1$, $$\frac{1}{1-x} = \sum_{n=0}^\infty x^n = 1+x+x^2+x^3+\dots$$
This gives you the power series expansion at $0$. For the one at $-1$, write $x=-1+\varepsilon$ and
$$\frac{1}{1-x} = \frac{1}{2-\varepsilon} = \frac{1}{2}\frac{1}{1-\frac{\varepsilon}{2}}$$
and use the same formula, now with $\frac{\varepsilon}{2}$ in place of $x$.
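Explicitly,
$$\frac{1}{1-x}=\frac{1}{2}\sum_{n=0}^\infty\left(\frac{\varepsilon}{2}\right)^n=\sum_{n=0}^\infty\frac{(x+1)^n}{2^{n+1}},$$
valid for $|\varepsilon|=|x+1|<2$. |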
Finding the integer solutions of $y^2 = x^3 - 12$ | As you've noticed, $y^2+4=(x-2)\left(x^2+2x+4\right)$.
If $x$ is even, then let $x=2k$. Then $y=2m$, so $m^2=2k^3-3$, impossible (use mod $8$, as has been suggested in the comments. $2k^3\equiv \{0,2,6\}\pmod{8}$, but then $m^2\equiv \{5,7,3\}\pmod{8}$, contradiction, because $5,7,3$ are not quadratic residues mod $8$).
If $x$ is odd, then: let $p$ be a prime divisor of either $x-2$ or $x^2+2x+4$. Then $p$ is odd and $p\mid y^2+4$, so $\left(y2^{-1}\right)^2\equiv -1\pmod{p}$, so $p=4t+1$ (by Quadratic Reciprocity).
Therefore all the prime divisors of $x-2$, $x^2+2x+4$ are of the form $4t+1$. Also $x^2+2x+4=(x+1)^2+3>0$ and $y^2+4>0$, so $x-2>0$, so $x-2\equiv 1\pmod{4}$ and $x^2+2x+4\equiv 1\pmod{4}$. The first congruence gives $x\equiv 3\pmod{4}$, but $3^2+2\cdot 3+4\not\equiv 1\pmod{4}$, contradiction. |
How to make truth table | Try filling in the rest of this truth table:
\begin{array}{c|c|c|c|c|c|c}
p & q & F(p,q) & \lnot p &\lnot q& F(\lnot p,\lnot q)&\lnot F(\lnot p,\lnot q)\\
\hline
\top & \top & \top & \bot & \bot \\
\hline
\top & \bot & \bot\\
\hline
\bot & \top &\top\\
\hline
\bot & \bot &\bot\\
\end{array}
You can easily find the values of $F(\lnot p,\lnot q)$
by using the first three columns of the table.
For example, on the first line of the table $\lnot p = \bot$ and
$\lnot q = \bot$, and you find the value of $F(\bot,\bot)$ on
the last line of the table.
I find that if there is any amount of complication in the
logical expressions, it almost always makes truth-table problems much easier
if I insert auxiliary columns like this to show the intermediate steps
of the calculations. Of course you still want to compare only the columns
for $F(p, q)$ and $\lnot F(\lnot p,\lnot q)$ in the end.
As you may discover, there is really no need to "simplify"
$\lnot F(\lnot p,\lnot q)$ if you use the truth-table entries
you already discovered for $F(p, q)$ in this manner.
Edit: As pointed out by Mauro ALLEGRANZA, the entry for $F(p,q)$ on
the last line was incorrect in the truth table originally presented in
this answer (as well as in the truth table in the original question,
from which that entry was copied). This does not invalidate the method
described in this answer, but it does have an effect on the values
that result on the second line of the table. |
Correspondence between one-parameter subgroups of G and TeG | $1.$ The correspondence is that there is a bijection between one-parameter subgroups and elements of $T_eG$. A one-parameter subgroup is a homomorphism $\mathbb{R}\rightarrow G$. The correspondence takes a one-parameter subgroup, $\phi$, such that $\phi(0)=e$ and computes the derivative $d\phi_0(1)\in T_eG$. The object is to find a $\phi$ such that $d\phi_0(1)=v$ for some given $v\in T_eG$.
$2.$ Call $\psi(t)=\phi(s)\phi(t)$. Then
$$
d\psi_t=\phi(s)d\phi_t=\phi(s)X^v_{\phi(t)}=X^v_{\phi(s)\phi(t)}=X^v_{\psi(t)}
$$
where the second equality follows by definition of $\phi$. For the third equality $\phi(s)\in G$ and $X^v$ is invariant under multiplication from group elements, i.e. $gX^v_h=X^v_{gh}$.
$3.$ The number $1$ is just the time you are plugging into the derivative. Something like if $f(x)=x^2$, then $f'(1)=2$. Here $\phi$ is a function from $\mathbb{R}$ to $G$. So, it has a derivative at $t=0$. That derivative is a function from $\mathbb{R}$ to $T_eG$. So, $d\phi_0(1)$ is that derivative evaluated at $1$. In the above $d\phi_0(1)=v$. |
How many Spherical codes are there, given a constraint of distance between any pair of code? | I'm fairly sure that for the choice of parameters: length $256$, max absolute value of a non-trivial inner product $16$, the answer is $4096$ achieved by the so called small Kasami code (I used to know such facts by heart, but I need to check what the bounds say about codes with these parameters).
Let's begin by abbreviating $K=\Bbb{F}_{2^4}$ and $L=\Bbb{F}_{2^8}$.
Let $$tr_8:L\to\Bbb{F}_2$$ and
$$tr_4:K\to\Bbb{F}_2$$
be the trace maps. I will also be needing the fact that for all $x\in L$ the power $N(x)=x^{17}\in K$. This follows from the fact that $N$ is the relative norm.
Let $a\in L$ and $b\in K$ be arbitrary. We use them to define a function $f_{a,b}:L \to\{\pm1\}$ by the formula
$$
f_{a,b}(x)=(-1)^{tr_8(ax)+tr_4(b N(x))}.
$$
With $x$ ranging over $L$ in some fixed order, we can identify $f_{a,b}$ with a vector in $\{-1,+1\}^{256}$. The small Kasami code $S$ then consists of the vectors
$$
S=\{f_{a,b}\mid a\in L,b\in K\}\subset\{\pm1\}^{256}.
$$
By additivity of the trace functions we see that the inner product
$$
\langle f_{a,b},f_{a',b'}\rangle=\sum_{x\in L}(-1)^{tr_8([a+a']x)+tr_4([b+b']N(x))}=\sum_{x\in L}f_{a+a',b+b'}(x).
$$
Therefore it suffices to prove that unless $a=b=0$ the sum
$$
S(a,b)=\sum_{x\in L}(-1)^{tr_8(ax)+tr_4(b N(x))}
$$
has absolute value at most $16$. Let us first introduce an element $\epsilon\in L$ with the property $\epsilon^{16}+\epsilon=1$. Such an element exists by surjectivity of the relative trace $tr_{8/4}:L\to K, tr_{8/4}(x)=x^{16}+x$.
By transitivity of trace $tr_8=tr_4\circ tr_{8/4}$ and $K$-linearity of $tr_{8/4}$ we can write everything using the additive character $e:L\to\{\pm1\},x\mapsto (-1)^{tr_8(x)}$ as
$$
S(a,b)=\sum_{x\in L}e(ax+b\epsilon N(x)).
$$
Squaring this gives
$$
S(a,b)^2=\sum_{x\in L, y\in L}e(a[x+y]+b\epsilon[N(x)+N(y)]).\tag{1}
$$
Here we make the substitution $y=x+z$. Using the Freshman's dream we calculate
$$
\begin{aligned}
N(x+z)&=(x+z)^{16}(x+z)\\
&=(x^{16}+z^{16})(x+z)\\
&=x^{17}+[x^{16}z+xz^{16}]+z^{17}\\
&=N(x)+[x^{16}z+xz^{16}]+N(z).
\end{aligned}
$$
Plugging this into $(1)$ gives
$$
S(a,b)^2=\sum_{z\in L}e(az+b\epsilon N(z))\sum_{x\in L}e(b\epsilon[x^{16}z+xz^{16}]).\tag{2}
$$
Let's look at the inner sum (over $x$). Because conjugate elements share the same trace
$$
\begin{aligned}
e(b\epsilon x^{16}z)&=e((b\epsilon x^{16}z)^{16})\\
&=e(b^{16}\epsilon^{16}x^{256}z^{16})\\
&=e(b\epsilon^{16}xz^{16})
\end{aligned}
$$
as $b\in K\implies b^{16}=b$ and $x\in L\implies x^{256}=x$.
Taking into account the equation $\epsilon^{16}+\epsilon=1$ we have arrived at the form
$$
S(a,b)^2=\sum_{z\in L}e(az+bN(z))\sum_{x\in L}e(bz^{16}x).\tag{3}
$$
By orthogonality of characters the inner sum vanishes unless $bz^{16}=0$ in which case it is equal to $|L|=256$.
If $b=0$, we thus have
$$S(a,0)^2=256\sum_{z\in L}e(az).$$
Another application of the orthogonality of characters tells us that this sum vanishes unless we also have $a=0$. But in that case we have the trivial inner product $S(0,0)=256$.
If $b\neq0$ the inner sum in $(3)$ vanishes except when $z=0$ implying
$$
S(a,b)^2=256e(a\cdot 0+b N(0))=256
$$
proving that in the non-trivial cases we either have $S(a,b)=0$ or $S(a,b)=\pm16$. QED. |
Validity of a q-series theorem | Take the logarithm of the quotient of infinite products to get a difference of infinite sums. Both sums converge absolutely, and hence you can rearrange their terms at will and let them cancel. |
Splitting sum into two sums | Denote the set of divisors of $n$ by $D(n)$. Then since $n_1$ and $n_2$ are coprime there is a bijection $D(n_1) \times D(n_2) \to D(n_1n_2)$ given by $(a,b) \mapsto ab$. It follows that
$$
S = \sum_{x \in D(n_1n_2)} f(x) = \sum_{a \in D(n_1)} \sum_{b \in D(n_2)} f(ab) = \sum_{a \mid n_1} \sum_{b \mid n_2} f(a)f(b) = \sum_{a \mid n_1} f(a) \sum_{b \mid n_2} f(b).
$$
We have $f(ab)=f(a)f(b)$ since $a$ and $b$ are coprime for $a \in D(n_1)$, $b \in D(n_2)$. |
d'Alembert functional equation: $f(x+y)+f(x-y)=2f(x)f(y)$ | There are good resources about this equation that study the solutions of
$$f(x+y)+f(x-y)=2f(x)f(y)\tag0\label0$$
or its generalizations. For example, Functional Equations in
Several Variables by J. Aczel and J. Dhombres, or Introduction to
Functional Equations by P. Kannappan and P. Sahoo, and of course many
other books and papers which can be found on the internet. Here, I try to
solve the equation for two more common cases.
Every continuous function $f:\mathbb R\to\mathbb R$ satisfying \eqref{0} is
of the form $f(x)=0$, $f(x)=\cos(\alpha x)$ or $f(x)=\cosh(\alpha x)$.
Letting $x=y=0$ in \eqref{0} we find out that $f(0)=0$ or $f(0)=1$. If
$f(0)=0$, then letting $y=0$ in \eqref{0}, we infer that $f$ is the constant zero function. So let's suppose $f(0)=1$. Letting $x=0$ in \eqref{0}, we find
out that $f$ is an even function. Also, since $f$ is continuous, there is a positive $t$ such that for every $-t\leq y\leq t$ we have $f(y)>0$. Now, $f$ is locally integrable because it's continuous. Thus we have:
$$\int_{-t}^tf(x+y)\,dy+\int_{-t}^tf(x-y)\,dy=2f(x)\int_{-t}^tf(y)\,dy$$
$$\therefore\quad2\int_{x-t}^{x+t}f(y)\,dy=2f(x)\int_{-t}^tf(y)\,dy\tag1\label1$$
So by the fundamental theorem of calculus, the left hand side of \eqref{1} is a
differentiable function of $x$, because $f$ is continuous. Hence the right
hand side is differentiable too and we have:
$$f(x+t)-f(x-t)=f^\prime(x)\int_{-t}^tf(y)\,dy\tag2\label2$$
$$\therefore\quad f(t)-f(-t)=f^\prime(0)\int_{-t}^tf(y)\,dy$$
But since $f$ is even and $\int_{-t}^tf(y)\,dy>0$, we have $f^\prime(0)=0$.
Again, since $f$ is differentiable, so the left hand side of \eqref{2} is a
differentiable function of $x$ and thus the right hand side is so, which
yields that $f$ is two times differentiable (continuing this way, we can
prove that $f$ has derivatives of any order). Now, differentiating \eqref{0}
twice, with respect to $y$ we get:
$$f^{\prime\prime}(x+y)+f^{\prime\prime}(x-y)=2f(x)f^{\prime\prime}(y)$$
$$\therefore\quad f^{\prime\prime}(x)=f^{\prime\prime}(0)f(x)\tag3\label3$$
It's a well-known result of elementry differential equations that if $f$
satisfies \eqref{3} and $f(0)=1$ and $f^\prime(0)=0$, then
$f(x)=\cosh\left(\sqrt{f^{\prime\prime}(0)}x\right)$ when $f^{\prime\prime}(0)\geq0$ and
$f(x)=\cos\left(\sqrt{-f^{\prime\prime}(0)}x\right)$ when $f^{\prime\prime}(0)\leq0$.
A function $f:\mathbb R\to\mathbb C$ is a solution of \eqref{0} iff it's of
the form $f(x)=\frac{E(x)+E(-x)}2$ for some $E:\mathbb R\to\mathbb C$
satisfying $E(x+y)=E(x)E(y)$ for all $x$ and $y$.
Assume that $E:\mathbb R\to\mathbb C$ satisfies $E(x+y)=E(x)E(y)$. we have:
$$\frac{E(x+y)+E(-x-y)}2+\frac{E(x-y)+E(-x+y)}2\\
=\frac{E(x)E(y)+E(-x)E(-y)+E(x)E(-y)+E(-x)E(y)}2\\
=2\cdot\frac{E(x)+E(-x)}2\cdot\frac{E(y)+E(-y)}2$$
So $\frac{E(x)+E(-x)}2$ satisfies \eqref{0}. Conversely, suppose that $f$
satisfies \eqref{0}. If $f$ is the constant zero function, then we let $E$ be
the constant zero function and we would have $f(x)=\frac{E(x)+E(-x)}2$ and
$E(x+y)=E(x)E(y)$. Otherwise, $f(0)$ must be equal to $1$. Now, letting $y=x$
in \eqref{0} we get:
$$f(2x)=2f(x)^2-1$$
Replacing $x$ by $x+y$ and $y$ by $x-y$ in \eqref{0} we have:
$$f(2x)+f(2y)=2f(x+y)f(x-y)$$
$$\therefore\quad f(x+y)f(x-y)=f(x)^2+f(y)^2-1$$
Thus we conclude that:
$$\big(f(x+y)-f(x)f(y)\big)^2\\
=f(x+y)^2-2f(x)f(y)f(x+y)+f(x)^2f(y)^2\\
=f(x+y)^2-\big(f(x+y)+f(x-y)\big)f(x+y)+f(x)^2f(y)^2$$
$$\therefore\quad\big(f(x+y)-f(x)f(y)\big)^2=\left(f(x)^2-1\right)
\left(f(y)^2-1\right)\tag4\label4$$
Now, if for every $x$ we have $f(x)=-1$ or $f(x)=1$, then by \eqref{4} we have
$f(x+y)=f(x)f(y)$. Since in this case $f(x)f(-x)=1$, if we let $E(x):=f(x)$
then $f(x)=\frac{E(x)+E(-x)}2$ and $E(x+y)=E(x)E(y)$. So let's suppose that
for some $x_0$ we have $f(x_0)\neq-1$ and $f(x_0)\neq1$. We define
$\alpha=f(x_0)$, $\beta^2:=\alpha^2-1$ and $E(x):=\frac1\beta\big(f(x+x_0)+
(\beta-\alpha)f(x)\big)$. By \eqref{4} we have:
$$\big(E(x)-f(x)\big)^2=\frac1{\beta^2}\big(f(x+x_0)-f(x_0)f(x)\big)^2=f(x)^2-1$$
$$\therefore\quad E(x)^2-2f(x)E(x)+1=0$$
$$\therefore\quad f(x)=\frac{E(x)+E(x)^{-1}}2$$
Next we consider:
$$E(x)E(y)=\frac1{\beta^2}\big(f(x+x_0)+(\beta-\alpha)f(x)\big)
\big(f(y+x_0)+(\beta-\alpha)f(y)\big)\\
=\frac1{\beta^2}\Big(f(x+x_0)f(y+x_0)+(\beta-\alpha)\big(f(x)f(y+x_0)+f(y)f(x+x_0)\big)+
(\beta-\alpha)^2f(x)f(y)\Big)\tag5\label5$$
We also note that by \eqref{0}:
$$2f(x+x_0)f(y+y_0)=f(x+y+2x_0)+f(x-y)\\
=\big(2f(x_0)f(x+y+x_0)-f(x+y)\big)+\big(2f(x)f(y)-f(x+y)\big)\\
=2\big(\alpha f(x+y+x_0)+f(x)f(y)-f(x+y)\big)\tag6\label6$$
And again by \eqref{0}:
$$2\big(f(x)f(y+x_0)+f(y)f(x+x_0)\big)\\
=2f(x+y+x_0)+f(x_0+x-y)+f(x_0-x+y)\\
=2f(x+y+x_0)+2f(x_0)f(x-y)\\
=2\Big(f(x+y+x_0)+\alpha\big(f(x)f(y)-f(x+y)\big)\Big)\tag7\label7$$
Combining \eqref{5}, \eqref{6} and \eqref{7} we get:
$$E(x)E(y)\\
=\frac1{\beta^2}\Big(\left((\beta-\alpha)^2+2\alpha(\beta-\alpha)+1\right)
f(x)f(y)+\beta f(x+y+x_0)-\big(1+\alpha(\beta-\alpha)\big)f(x+y)\Big)\\
=\frac1{\beta^2}\Big(\left(\beta^2-\alpha^2+1\right)f(x)f(y)+\beta
f(x+y+x_0)-\left(\alpha\beta+1-\alpha^2\right)f(x+y)\Big)\\
=\frac1{\beta^2}\big(\beta f(x+y+x_0)+\beta(\beta-\alpha)f(x+y)\big)$$
$$\therefore\quad E(x)E(y)=E(x+y)$$
Since $E(x)E(-x)=1$ we have $f(x)=\frac{E(x)+E(-x)}2$. |
Show the isomorphism between $\mathbb{Q}(2^{1/n}; 3^{1/m})=\mathbb{Q}(2^{1/n}3^{1/m})$ when $\gcd(n,m)=1$ | The inclusion $\mathbb{Q}(2^{1/n}3^{1/m})\subseteq\mathbb{Q}(2^{1/n},3^{1/m})$ should be clear. For the reverse inclusion, it suffices to establish $2^{1/n}$ and $3^{1/m}$ are elements of $\mathbb{Q}(2^{1/n}3^{1/m})$.
Hint #1: What happens when you take certain powers of $2^{1/n}3^{1/m}$?
Hint #2: Bezout's identity says if $\gcd(n,m)=1$ then there exist $a,b$ such that $an+bm=1$.
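For instance, with $n=2$, $m=3$: if $\alpha=2^{1/2}3^{1/3}$, then $\alpha^3=2^{3/2}\cdot 3=6\cdot 2^{1/2}$, so $2^{1/2}=\alpha^3/6\in\mathbb{Q}(\alpha)$, and then $3^{1/3}=\alpha/2^{1/2}\in\mathbb{Q}(\alpha)$.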
(You should be able to take $2^{1/n}3^{1/m}$ to a power until it is just one of $2$ or $3$ to a rational power, up to a rational multiple. And then you should be able to take said rational powers of $2$ or $3$ to another power so that it is $2^{1/n}$ or $3^{1/m}$ up to a rational multiple.) |
A confusion about a proof of that $f(x_n) \to f(x)$ then $x_n \to x$. | No. The converse is this:
Let $f\colon X\longrightarrow Y$. If $X$ is metrizable and if for every convergent sequence $x_n\to x$, the sequence $f(x_n)$ converges to $f(x)$, then $f$ is continuous.
So, now he is assuming that $f$ maps convergent sequences into convergent sequences. And he wants to prove that $f$ is continuous, which he does using the fact that the continuity of $f$ is equivalent to the assertion that, if $A\subset X$, then $f\bigl(\overline A\bigr)\subset\overline{f(A)}$. |
I have to use this formula to prove that the ycoordinate of a stationary point is is an integer without a calculator | Let $x$ be the stationary point. You know that $\sinh x = -3$. You know that $\cosh^2 x = 1 + \sinh^2 x$. You don't need anything else to figure out what $y = 6\sinh x + \cosh^2 x$ is. |
Open Set in Composition of Mappings | Yes if $g \circ f$ is a quotient map. And this happens if $g$ and $f$ are quotient maps. But otherwise there can be loads of counterexamples. You need assumptions on $f$ and $g$ or there is no hope of this holding. |
Reference to this Young Inequality for matrices | I found the reference for this refinement of the Young inequality:
https://ieeexplore.ieee.org/abstract/document/7524904/ |
Computing $\int{2\sec^3xdx}$ | Use the fact that $\sec^2x=(\tan x)'$ and integrate by parts:
$$
2\int{\sec^3(x)dx}=2\int(\tan x)'\sec x dx=2\tan x\sec x -2\int \frac{\sin^2x}{\cos^3x}dx$$
In order to solve the last integral, use the formula $\sin^2x=1-\cos^2x$ and split the integral in two parts:
$$=2\tan x\sec x +2\int \frac{1}{\cos x}dx-2\int \frac{1}{\cos^3 x}dx$$
$$=2\tan x\sec x +2\ln(\sec x+\tan x)-2\int \sec^3 x dx$$
Bring the integral $\int{\sec^3x dx}$ to the other side of the equation:
$$4\int{\sec^3(x)dx}=2\tan x\sec x +2\ln(\sec x+\tan x)$$
In conclusion:
$$\int{\sec^3(x)dx}=\frac{1}{2}\tan x\sec x +\frac{1}{2}\ln(\sec x+\tan x)+C$$ |
Can the solution of $(a+x^2)=0\bmod(b-x)$ such that $x<b-1$ be found without brute force? | You can certainly reduce the brute force effort somewhat.
By using long division we can find that $$ x^2+a = -(b+x)(b-x) + (b^2+a) $$
and so applying $\mod (b-x)$ we get $$ x^2+a \mod (b-x) \equiv (b^2 +a) \mod (b-x)$$
This reduces your problem to finding $$ b^2+a \equiv 0 \mod (b-x)$$
which can be done by finding the prime factorisation of $b^2+a$ and determining which, if any, factors are strictly smaller than $b-1$.
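A brute-force sketch of that search (my own illustration; the function name is made up):

def solve_for_x(a, b):
    # x^2 + a == b^2 + a (mod b - x), so test every divisor d = b - x with d > 1
    n = b * b + a
    return [b - d for d in range(2, b + 1) if n % d == 0]

print(solve_for_x(5, 7))  # b^2 + a = 54; divisors 2, 3, 6 give x = 5, 4, 1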
This will work reasonably well for any numbers you can factorise quickly. |
How many ways are there to arrange 4n people around a square table if the 4 cyclic shifts around the table constitute the same seating? | The total number of ways to line people up in a row is $(4n)!$.
I guess we're then assuming that each side of the table is going to have $n$ people, so each of these arrangements gives us 4 cyclic shifts, so you need simply to divide by 4 (as you originally thought).
So the answer is $(4n)!/4$. |
Localization of a polynomial ring at a maximal ideal | $$\frac{R[X]}{mR[X]+(X)}\simeq\frac{R[X]/(X)}{(mR[X]+(X))/(X)}\simeq R/m$$
$R[X]_N$ is regular since $R[X]$ is regular and localizations of regular rings are regular. (Alternatively, notice that $X\notin (NR[X]_N)^2$, and $R[X]_N/XR[X]_N\simeq R_m=R$ is regular.) |
How to find the probabilities on getting none of the numbers and/or at least one number in the lottery? | As said in comments, you are correct for (a). You just haven't simplified the fraction. Simplified, the fraction is $$\frac{38786}{68103}.$$
For (b), this is a great use of complementary probability. Instead of finding the probability that there is at least $1$, find the probability that there are $0$ numbers picked. Then subtract from $1$.
You already have the probability of $0$ numbers picked in part (a). So subtract this from $1$ to get your answer.
Complementary probability is an ingenious way to solve problems like these. Whenever you see something along the lines of "probability that at least 1", always think complementary. |
Experiences with Kumon | I have had no personal dealings with Kumon. I have some friends who had their children in it for a while, and I know several people who did it as children, and I know one person who taught at a Kumon center for a while. They varied in their opinions of it.
My understanding of the Kumon method is that it focuses overwhelmingly on computation, and that the method's strongest proponents are unapologetic about this. I do not mean to suggest that they should be apologetic--- I just mean, they are entirely aware of this focus and they do not see it as a liability. Frankly it would surprise me to hear someone suggesting the Kumon system as a way of promoting any kind of generalized "understanding" of math. Whenever I have heard people talk about the Kumon method, it is promoted as, in decreasing order of frequency,
(a) a way of raising flagging grades in mathematics, due to a student's difficulty with getting the right answers, or taking too long to do so,
(b) a supplement to a school curriculum that is seen as lacking in computational essentials,
(c) a way of making a student's calculational skills "automatic", so their mind is free to focus on the non-calculational aspects of their non-Kumon math education.
I have never heard of anyone using Kumon to directly promote "conceptual understanding" of mathematics, and if that is what you want to invest in, my personal opinion is that Kumon is not the way to go. (I don't really know what would work at that age, beyond giving your son access to puzzles, books, and people who are enthusiastic about mathematics.)
Additional random thoughts based on my own experience as a tutor and teacher:
I would not evaluate any method of instruction of 5 year old children by the speed with which it acquaints students with the fact that order does not matter in addition. Many concepts like this--- concepts that generalize or abstract large numbers of individual facts, each of which can be verified through a computation--- take a long time for people to internalize. (For one thing, the language one really needs to express general relations like this is the language of algebra, and it is not taught until much later.) I do not mean to say that students at age 5 cannot grasp that order doesn't matter, only that it takes a long time for this to really "sink in". People forget about things like that, and needlessly duplicate calculations, even at the university level.
I no longer teach math, but when I did, I found that the longer I taught it, the less sympathetic I was to the idea that it is harmful for students to see math as a bunch of rules to follow. It is harmful for them to see math that way if (i) they are never given a coherent set of rules, or (ii) they aren't competently taught about how to use the rules, or (iii) they never personally practice what it means to follow symbolic rules, or (iv) they never develop any idea of where the rules came from. Your own example (that $x + y = y + x$ for any two integers $x$ and $y$) is itself a rule, and as you argued in your own question, it is immensely helpful to be able to recognize instances of this rule. From this point of view, your objection to Kumon (in that instance) is only that they haven't gotten to that rule yet.
Having said that, I don't know how well the Kumon method really rates by the standards (i), (ii), (iii), (iv) just listed. My guess is that Kumon makes very little attempt at addressing the last point, and this is a legitimate source of concern, if you expect Kumon to fill that role. The person I knew who taught at a Kumon center said they were extremely restricted in what they were supposed to teach--- she didn't go so far as to say there was a script, like telemarketers have, but it almost sounded like that. I think this would preclude any meaningful discussion of the "real meaning" of a lot of calculation. At the same time, I do not think your son will be getting that in his regular math classroom either. It is extremely difficult to teach basic mathematics "conceptually" and most elementary teachers are not competently trained to do it.
Long story short, I think Kumon is more about (a), (b), and (c) above than it is about "conceptual understanding." I personally probably would not use Kumon except for reasons (a) and (b) above. |
Reducing nonlinear equations to obtain simplified form | Write your two equations in the following form:
$$\cos\psi\,\lambda_v + \sin\psi\,\lambda_w = \cot\gamma\,\lambda_u$$
$$-\sin\psi\,\lambda_v + \cos\psi\,\lambda_w = 0$$
and solve them to get
$$\lambda_v = \cot\gamma\,\lambda_u\cos\psi$$
$$\lambda_w = \cot\gamma\,\lambda_u\sin\psi$$
It follows that $\lambda_V = \lambda_u\sqrt{1+\cot^2\gamma} = \lambda_u/\sin\gamma$, whence the values
of the ratios $\lambda_u/\lambda_V$, $\lambda_v/\lambda_V$, $\lambda_w/\lambda_V$ follow readily. |
Using the observation vector $ \vec{y}$ instead of the centered observation vector $ \vec{y_{d}} $ doesn't change the projection $\vec{\hat y}$ | To see why, first let's take an ordinary regression:
$ y_i=\alpha + X_i'\beta +\epsilon_i \space \space \space \space $ (1)
Therefore we have
$\hat{y_i} = \hat{\alpha} + X_i'\hat{\beta}$
But by definition, $\hat{\alpha} = \bar{y}-\bar{X}'\hat{\beta}$ and thus
$\hat{y_i} = \bar{y} + (X_i-\bar{X})'\hat{\beta} = \bar{y} + X_d'\hat{\beta}$
Once again, take the relation:
$ \bar{y}=\alpha + \bar{X}'\beta \space \space \space$ (2)
Subtract (2) from (1) to get
$ y_d= X_d'\beta +\epsilon_i $. Now note that if we run this regression, the estimated coefficient $\hat{\beta}$ will be identical to the one above, and
$\hat{y_d} = \hat{y_i} - \bar{y} = X_d'\hat{\beta}$ which should give you what you need.
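A quick numerical illustration of this, assuming NumPy:
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 3.0 + rng.normal(size=50)

# Regression (1): intercept plus slopes
A = np.column_stack([np.ones(len(y)), X])
coef = np.linalg.lstsq(A, y, rcond=None)[0]

# Centered regression: same slopes, same fitted values
Xd = X - X.mean(axis=0)
beta_d = np.linalg.lstsq(Xd, y - y.mean(), rcond=None)[0]

print(np.allclose(coef[1:], beta_d))                  # True: identical slopes
print(np.allclose(A @ coef, y.mean() + Xd @ beta_d))  # True: identical y-hat
```
Both checks should print `True`. |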
Integrating $ \int\frac{5}{\ 16 + 9\cos^2(x)}\,dx $ | Notice that you should use $1+\tan^2x=\sec^2x$.
You can get to that integral as follows
$$ =5\int\frac{1+\tan^2x}{\ 16\tan^2 x + 25}\,dx $$
$$ =5\int\frac{\sec^2x}{\ 16\tan^2 x + 25}\,dx $$
Let $\tan x=u\implies \sec^2x\ dx=du$
$$ =5\int\frac{du}{16u^2 + 25}=\frac{1}{4}\arctan\left(\frac{4u}{5}\right)+C=\frac{1}{4}\arctan\left(\frac{4\tan x}{5}\right)+C $$ |
Homework Problem on Absolute + Relative Error | Since that is the only question you asked, I will focus my answer on it.
(Is this right? I'm suspicious this isn't the answer expected.)
What you did is correct, but I would write the result slightly differently.
\begin{align*}|f(x_0) - \tilde{f}(x_0)| &= |f'(c) \cdot \epsilon| \\ &= |f'(c)|\,|x_0-\tilde{x_0}|\leqslant \max_{c∈(a,b)}|f'(c)|\,|x_0-\tilde{x_0}|.\end{align*}
Now you have an estimate in the form $$\text{"absolute error in the result"} = \text{constant}*\text{"absolute error in the data"},$$
which is better in the sense that you explicitly see the connection between the two errors. Also, you don't know the value of $c$, because the Mean Value Theorem only states "there exists a $c$…". So the only thing you can evaluate numerically is $\underset{c}{\max}…$.
For the relative error we can do the same
\begin{align*}\frac{|f(x_0) - \tilde{f}(x_0)|}{|f(x_0)|} &= \frac{|f'(c) \cdot \epsilon|}{|f(x_0)|} \\ &= |f'(c)| \frac{|x_0|}{|f(x_0)|}\frac{|x_0-\tilde{x_0}|}{|x_0|} \\& \leqslant \max_{c∈(a,b)}|f'(c)| \frac{|x_0|}{|f(x_0)|}\frac{|x_0-\tilde{x_0}|}{|x_0|}.\end{align*}
I think this also answers your unasked questions about b) and c).
It is nice to see that this factor closely resembles the condition number $$κ(x)=\frac{∂f(x)}{∂x}\frac{x}{f(x)}.$$
In fact, the condition number is a concept that describes how the relative error in the data connects to the relative error in the result (see here). The reason why we get a slightly different result here is that we used the Mean Value Theorem, while the condition number uses Taylor's expansion. |
How to solve explicitly the Dirichlet problem (in dimension 2) with boundary data $f(e^{i\theta}) := ( \sin(\theta) - \cos(2\theta))^2$ | In this situation, it is easier to deal with the following expression for the Poisson Kernel:
$$P_r(\vartheta)=\sum_\mathbb Z e^{in\vartheta}r^{|n|}$$
Using this, and
$$u(re^{i\phi})=\frac{1}{2\pi}\int_0^{2\pi} P_r(\phi-\vartheta)f(e^{i\vartheta})\,d\vartheta$$
The result follows easily:
$$u(re^{i\phi})=\sum_\mathbb{Z} \left(\frac{1}{2\pi}\int_0^{2\pi}f(e^{i\vartheta})e^{-in\vartheta}d\vartheta\right)r^{|n|}e^{in\phi}$$
It is not hard to see that
$$f(e^{i\phi})=1+\sin(\phi)-\frac12\cos(2\phi)-\sin(3\phi)+\frac12\cos(4\phi)$$
Splitting into real and imaginary parts, we get (since $u$ is real):
$$u(re^{i\phi})=\frac{1}{2\pi}\int_0^{2\pi}f(e^{i \vartheta})\,d\vartheta+ \sum_{n=1}^\infty
\left(\frac{1}{\pi}\int_0^{2\pi}f(e^{i\vartheta})\cos(n\vartheta)\,d\vartheta\right)r^n\cos(n\phi)+
\sum_{n=1}^\infty \left(\frac{1}{\pi}\int_0^{2\pi}f(e^{i\vartheta})\sin(n\vartheta)\,d\vartheta\right)r^n\sin(n\phi)$$
Using the usual orthogonality relations between $\sin(nx),\cos(mx)$, one gets
$$u(re^{i\phi})=1+r\sin(\phi)-\frac12r^2\cos(2\phi)-r^3\sin(3\phi)+\frac12r^4\cos(4\phi)$$
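Both the boundary values and harmonicity of this $u$ can be verified symbolically; a minimal check, assuming SymPy:
```python
import sympy as sp

r, phi = sp.symbols('r phi', real=True)
f = (sp.sin(phi) - sp.cos(2*phi))**2
u = (1 + r*sp.sin(phi) - sp.Rational(1, 2)*r**2*sp.cos(2*phi)
       - r**3*sp.sin(3*phi) + sp.Rational(1, 2)*r**4*sp.cos(4*phi))

# At r = 1, u reproduces the boundary data f
print(sp.simplify(sp.expand_trig(u.subs(r, 1) - f)))  # 0

# u is harmonic: the polar Laplacian u_rr + u_r/r + u_phiphi/r^2 vanishes
lap = sp.diff(u, r, 2) + sp.diff(u, r)/r + sp.diff(u, phi, 2)/r**2
print(sp.simplify(sp.expand_trig(lap)))  # 0
```
Both expressions should simplify to $0$. |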
Orders of Asymptotes | How about
$$
(\log X)^{(\log X)^a},\quad 0<a<1?
$$ |
When deriving the equation of a plane from a normal vector u, how can all dimensions vary? | As some of the comments have mentioned, your normal vector is off. The normal vector to the $xy$ plane is, by convention, $\hat{k}=<0,0,1>$. But nonetheless we can show this in other ways. Given your information, we have the points
$$(0,0,0),(2,2,0), \, \text{and}\,(a,b,0)$$
All of which lie on the plane. The last one is obtained by the fact that you mentioned all $z$ coordinates are zero. This gives us two vectors that lie in the plane:
$$\vec{v}_1=<2,2,0>\,\text{and}\,\,\vec{v}_2=<a,b,0>$$
We can then get a normal vector to the plane by taking their cross product
$$\vec{N}=\vec{v}_1\times \vec{v}_2=\left|\begin{array}{ccc}
\hat{i} & \hat{j} & \hat{k} \\
2 & 2 & 0 \\
a & b & 0 \\
\end{array}\right|=\hat{i}\left|\begin{array}{cc}
2 & 0 \\
b & 0 \\
\end{array}\right|-\hat{j}\left|\begin{array}{cc}
2 & 0 \\
a & 0 \\
\end{array}\right|+\hat{k}\left|\begin{array}{cc}
2 & 2 \\
a & b \\
\end{array}\right|$$
$$=\hat{i}\cdot 0-\hat{j}\cdot 0+2\hat{k}(b-a)$$
Thus normal vectors to such a plane are of the form
$$\vec{N}=<0,0,2(b-a)>=<0,0,c>\quad c\in\mathbb{R}$$
Hence your equation is off because of the normal vector.
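A quick numerical check of that cross product, assuming NumPy, with arbitrary sample values for $a$ and $b$:
```python
import numpy as np

a, b = 3.0, 7.0  # arbitrary sample point (a, b, 0) in the plane
v1 = np.array([2.0, 2.0, 0.0])
v2 = np.array([a, b, 0.0])
print(np.cross(v1, v2))  # [0. 0. 8.], i.e. <0, 0, 2(b-a)>, along k-hat
```
Any choice with $a\neq b$ gives a nonzero multiple of $\hat{k}$, consistent with the convention above. |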
Why is there a one-to-one correspondence between homomorphic images of a group $G$ and normal subgroups of $G$? | The trick between the correspondence lies in the isomorphism theorems. You already know that to every normal subgroup of $G$ there corresponds a unique homomorphic image (upto isomorphism). Now, suppose that $h:G\to H$ is a group homomorphism, then $\ker(h)$ is normal, and by the first isomorphism theorem
$$G/\ker(h)\cong h(G).$$ |
Can we estimate $ \max_{x,y \in [0,1]} \,\left|\frac{\sin x - \sin y }{x-y} \right| < \infty $? | Since $\sin(x)-\sin(y)=2\,\sin\left(\frac{x-y}{2}\right)\,\cos\left(\frac{x+y}{2}\right)$, so
$$\left|\frac{\sin(x)-\sin(y)}{x-y}\right|= \left|\frac{\sin\left(\frac{x-y}2\right)}{\left(\frac{x-y}{2}\right)}\right|\,\Biggl|\cos\left(\frac{x+y}{2}\right)\Biggr|\leq 1\,.$$
However, the value $1$ is not achievable (i.e., the maximum does not exist) because the inequality $\left|\frac{\sin(t)}{t}\right|<1$ holds for all $t\in\mathbb{R}\setminus\{0\}$, but $1$ is the supremum value.
To see why $1$ is the supremum, we can set $x=3y$, so that
$$\left|\frac{\sin(x)-\sin(y)}{x-y}\right|= \left|\frac{\sin(y)}{y}\right|\,\big|\cos\left(2y\right)\big|\,.$$
Taking $y\to 0^+$, we see that $\left|\frac{\sin(y)}{y}\right|\,\big|\cos\left(2y\right)\big|\to 1^-$. |
difference between an arithmetic mean and n Arithmetic mean | Arithmetic mean (AM) of two numbers $a$ and $b$ is just the average of the two numbers defined by $\frac{a+b}{2}$.
But when you are asked to find $n$ AM's between $a$ and $b$, it means to find a sequence of numbers $\{a_1, \dots , a_n\}$ such that $a,a_1, \dots,a_n,b$ are in arithmetic progression. To find them, let's use the well-known formula for AP. Here $b$ is the $(n+2)^{th}$ term of the AP and hence $b = a + (n+2-1)d$ or $d = \frac{b-a}{n+1}$.
Now we can write $a_1 = a+d$, $a_2 = a+2d$, and in general $a_k = a+kd$, where $d = \frac{b-a}{n+1}$.
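A minimal sketch of this construction in code:
```python
def n_arithmetic_means(a, b, n):
    """Insert n arithmetic means between a and b."""
    d = (b - a) / (n + 1)
    return [a + k * d for k in range(1, n + 1)]

# Example: 3 arithmetic means between 2 and 14
print(n_arithmetic_means(2, 14, 3))  # [5.0, 8.0, 11.0]
```
The full progression $2, 5, 8, 11, 14$ has common difference $d = \frac{14-2}{3+1} = 3$. |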
Complex exponent $(z^{\frac{m}{n}})^{\frac{n}{m}}$ | You've pretty much done the work. If you look at $(z^{1/2})^{2/1}$ you get a single answer, since
$$ z^{1/2} = (re^{i\theta})^{1/2} = \{ \sqrt{r}e^{i\theta/2}, ~ \sqrt{r}e^{i\theta/2+i\pi} \} $$
and squaring we get
$$ (z^{1/2})^{2/1} = re^{i\theta} $$ for both possible roots.
On the other hand,
$$ z^2 = r^2e^{i2\theta}$$
and taking a square root we have
$$ (z^2)^{1/2} = \{ re^{i\theta}, ~ re^{i\theta + i\pi} \}. $$
This shows that your values of $(z^{m \over n})^{n\over m}$ and $(z^{n\over m})^{m\over n}$ will in general be different, just like you stated in your question. I.e., power functions do not commute.
Accepting this, $(z^{m\over n})^{n\over m}$ has the $m$ values $\{ re^{i\theta + i2\pi k/m } \}$ for $k = 0, 1, \dots, m-1$ where $z = re^{i\theta}$.
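A numeric illustration of the two computations, using the standard library's cmath:
```python
import cmath

z = 1 + 1j
r, theta = abs(z), cmath.phase(z)

# (z^{1/2})^2: both square roots of z square back to z, a single value
roots = [cmath.rect(r**0.5, theta/2 + k*cmath.pi) for k in (0, 1)]
print([w**2 for w in roots])  # both approximately (1+1j)

# (z^2)^{1/2}: two distinct values, z and -z
print([cmath.rect(r, theta + k*cmath.pi) for k in (0, 1)])  # ~(1+1j), ~(-1-1j)
```
Up to floating-point rounding, the first set collapses to one value while the second has two. |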
How to prove $\int_0^\pi\frac{\ln(2+\cos\phi)}{\sqrt{2+\cos\phi}}d\phi=\frac{\ln3}{\sqrt3}K\left(\sqrt{\frac23}\right)$? | $\int^{\pi}_0\cos^{2k+1}\phi d\phi=0$, and $$\int^{\pi}_0\cos^{2k}\phi d\phi=\sqrt{\pi}\Gamma(k+1/2)/\Gamma(k+1)=2^{-2k}\pi\binom{2k}{k}.$$
Therefore, writing $I(n)=\int^{\pi}_0(2+\cos\phi)^n\,d\phi$ and expanding the integrand with the binomial series, $$
\begin{align*}
I(n) &=\int^{\pi}_0\sum_{m=0}^{\infty}2^{n-m}\binom{n}{m}\cos^m\phi~d\phi\\
&=2^n\sum_{m=0}^{\infty}2^{-m}\binom{n}{m}\int^{\pi}_0\cos^m\phi~d\phi\\
&=2^n\pi\sum_{k=0}^{\infty}2^{-2k}\binom{n}{2k}2^{-2k}\binom{2k}{k}\\
&=2^n\pi~{}_2F_1\left(\frac{1-n}2,-\frac{n}{2};1;\frac14 \right)\\
&=2^n\pi\left(\frac23\right)^{-n}~{}_2F_1\left(-n,\frac{1}{2};1;\frac23\right)\\
&=3^n\pi~{}_2F_1\left(-n,\frac{1}{2};1;\frac23\right)\\
\end{align*}
$$
Using DLMF 15.8.13 with $a=-n$, $b=1/2$ and $z=2/3$.
We note that $I(-1/2)=\frac{2}{\sqrt{3}}K(\sqrt{2/3})$.
Edit: We have $$
I(n)=3^n\pi~{}_2F_1\left(-n,\frac{1}{2};1;\frac23\right)=\frac{\pi}{\sqrt{3}}~{}_2F_1\left(n+1,\frac{1}{2};1;\frac23\right)=3^{n+1/2}I(-n-1).$$
Therefore, If we write $J(n)=3^{-n/2}I(n)$, then $J(n)=J(-n-1)$, and consequently $J'(-1/2)=0$.
Thus $$
\begin{align*}
I'(-1/2) &=\left.\frac{d}{dn}3^{n/2}J(n)\right|_{n=-1/2}\\
&=\left.\left(3^{n/2}J'(n)+3^{n/2}\frac{\log 3}{2}J(n)\right)\right|_{n=-1/2}\\
&=3^{-1/4}\frac{\log 3}{2}J(-1/2)\\
&=\frac{\log 3}{2}I(-1/2)\\
&=\frac{\log 3}{\sqrt{3}}K(\sqrt{2/3}).
\end{align*}
$$
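A numerical confirmation of the final identity, assuming SciPy (note that scipy.special.ellipk takes the parameter $m=k^2$, so the modulus $\sqrt{2/3}$ corresponds to $m=2/3$):
```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk  # takes m = k^2, not the modulus k

lhs, _ = quad(lambda p: np.log(2 + np.cos(p)) / np.sqrt(2 + np.cos(p)), 0, np.pi)
rhs = np.log(3) / np.sqrt(3) * ellipk(2 / 3)
print(lhs, rhs)
```
The two printed values should agree to roughly machine precision. |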
$ \sum _{n=1}^{\infty} \frac 1 {n^2} =\frac {\pi ^2}{6} $ then $ \sum _{n=1}^{\infty} \frac 1 {(2n -1)^2} $ | $$\sum_{n=1}^\infty \frac1{n^2} =\sum_{n=1}^\infty \frac1{(2n)^2} + \sum_{n=1}^\infty \frac1{(2n-1)^2}=\frac14 \sum_{n=1}^\infty \frac1{n^2} + \sum_{n=1}^\infty \frac1{(2n-1)^2}$$
So, $\sum\limits_{n=1}^\infty \frac1{(2n-1)^2}= \frac34 \sum\limits_{n=1}^\infty \frac1{n^2} = \frac{\pi^2}{8}$.
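A quick numerical check:
```python
import math

# Partial sum of the first 10^6 terms; the omitted tail is about 1/(4N)
s = sum(1 / (2*n - 1)**2 for n in range(1, 10**6 + 1))
print(s, math.pi**2 / 8)
```
The two values should agree to roughly six decimal places. |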
Time dependent temperature distribution of bar | Here's a guide on how to provide context, so you can ask a better question next time. You did not provide the full PDE, or even the full solution from your textbook. With so much missing information, this should be flagged as "missing context".
I'm going to read between the lines to infer that the full equation is
$$ u_t - \alpha^2u_{xx} = 0 $$
With boundary conditions
$$u(0,t) = 0,\quad u(2,t) = 100 $$
and initial condition
$$ u(x,0) = 0 $$
where $u$ is the temperature of the bar. Now, you might think the initial condition is inconsistent with the boundary temperature at $x=2$. You can reconcile them by considering that at time $t=0$, $u=0$ for $0 \le x < 2$ and $u=100$ only at the point $x=2$. This might not make physical sense, but the math will work out.
Moving on, since the boundary conditions are inhomogeneous, you'll want to split it up into two pieces
$$ u(x,t) = v(x,t) + w(x) $$
where $w(x)$ is the steady-state solution, and $v(x,t)$ is the transient (or homogeneous) solution. Some texts may use the same letter for these solutions, so that might have been what confused you.
First off, the steady-state solution is one that satisfies the heat equation, except with $w_t=0$. This is the temperature distribution the bar will reach after an indefinitely long time. It also matches the temperature at the endpoints, thus you have
$$ \alpha^2w_{xx} = 0 $$
$$ w(0) = 0, \quad w(2) = 100 $$
Solving the above ODE gives $w(x) = 50x$. This is also what you incorrectly have as the initial condition.
Now for the other solution. It's called the transient solution because it will decay to $0$ as $t \to \infty$, as you will see later.
Plugging in $u = v + w$ to the original equation, we have
$$ (v_t + w_t) - \alpha^2(v_{xx} + w_{xx}) = 0 \implies v_t - \alpha^2 v_{xx} = 0 $$
Due to linearity, $v(x,t)$ must also satisfy the heat equation, except the boundary conditions are now homogeneous, since
$$ v(0,t) = u(0,t) - w(0) = 0 $$
$$ v(2,t) = u(2,t) - w(2) = 0 $$
And the initial condition is now
$$ v(x,0) = u(x,0) - w(x) = -50x $$
Using separation of variables, we can obtain a particular solution of the form
$$ v_p(x,t) = \big[a\sin(kx) + b\cos(kx)\big] e^{-\alpha^2k^2t} $$
The first B.C $v(0,t)=0$ gives $b=0$, as you've already done, so
$$ v_p(x,t) = a\sin(kx) e^{-\alpha^2k^2t} $$
The second B.C. is also $v(2,t)=0$, so this gives
$$ \sin(2k) = 0 \implies 2k = n\pi $$
where $n$ is an integer, so we can write
$$ v_n(x,t) = a\sin\left(\frac{n\pi}{2}x\right)e^{-\alpha^2(\frac{n\pi}{2})^2t} $$
Since any value of $n$ will satisfy the equation, we must use the law of superposition to get the full solution
$$ v(x,t) = \sum_n c_nv_n(x,t) = \sum_{n=1}^\infty c_n \sin\left(\frac{n\pi}{2}x\right)e^{-\alpha^2(\frac{n\pi}{2})^2t} $$
From here on, all that's needed is to match up the initial condition
$$ v(x,0) = -50x = \sum_{n=1}^\infty c_n \sin\left(\frac{n\pi}{2}x\right) $$
The constants $c_n$ are derived from the Fourier series expansion
$$ c_n = \int_0^2 (-50x) \sin\left(\frac{n\pi}{2}x\right)\ dx = \frac{200}{n\pi}(-1)^n $$
Putting everything together, we have the final temperature distribution
$$ u(x,t) = 50x + \sum_{n=1}^\infty \frac{200(-1)^n}{n\pi} \sin\left(\frac{n\pi}{2}x\right)e^{-\alpha^2(\frac{n\pi}{2})^2t} $$
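A quick numerical sanity check of this solution, taking $\alpha=1$ and truncating the series:
```python
import numpy as np

alpha = 1.0

def u(x, t, terms=2000):
    n = np.arange(1, terms + 1)[:, None]
    c = 200 * (-1.0)**n / (n * np.pi)  # the coefficients c_n above
    series = c * np.sin(n*np.pi*x[None, :]/2) * np.exp(-(alpha*n*np.pi/2)**2 * t)
    return 50*x + series.sum(axis=0)

print(u(np.array([0.5, 1.0, 1.5]), 0.0))  # ~[0, 0, 0]: initial condition
print(u(np.array([0.0, 2.0]), 0.5))       # ~[0, 100]: boundary conditions
```
The initial-condition check is only approximate near $x=2$, where the Fourier series of the jump converges slowly (Gibbs phenomenon). |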
Initial form of a polynomial | Have a look at Remark 5.7 of Gubler's "Guide to tropicalization"; I find his approach easier to understand than the definition you seem to have copied from Sturmfels' book.
Initial forms can be used to define the tropicalization of a variety, and are especially important if the field $K$ is trivially valued.
The set of all weight vectors $w$ such that $\mathrm{in}_w(f)$ is not a monomial for every $f$ in the ideal defining a variety $V$ coincides with the set of valuation points of the $K$-points of the variety for a non-trivial valuation. The closure of these sets is the tropicalization. |
Problems taking the limit in $\int_a^b f=\lim_{c\to a}\int_c^b f$ from definitions | As $f$ is bounded on $[a,b]$, we have $|f(x)| \leqslant M $ for all $x \in [a,b]$.
Let $\epsilon > 0$ be given. Choose $x_1$ such that
$$x_1 - a < \frac{\epsilon}{4M}.$$
As $f$ is integrable over $[x_1,b]$, there is a partition $P': x_1 < x_2 < \ldots < x_n = b$ such that the difference between the upper and lower Darboux sums satisfies
$$U(P',f) - L(P',f) < \frac{\epsilon}{2}.$$
Consider the partition $P: a = x_0 < x_1 < x_2 < \ldots < x_n = b$ of $[a,b]$. Then the difference between the upper and lower sums is
$$U(P,f) - L(P,f) = U(P',f) - L(P',f) + [\sup_{a \leqslant x \leqslant x_1}f(x) - \inf_{a \leqslant x \leqslant x_1}f(x)](x_1-a) \\ < \frac{\epsilon}{2} + 2M(x_1-a) \\ < \epsilon.$$
Therefore, $f$ satisfies the Riemann criterion and is integrable over $[a,b]$.
Furthermore, $f$ is integrable over $[a,c]$ and
$$0 \leqslant \left|\int_a^cf(x) \, dx\right|\leqslant \int_a^c|f(x)| \, dx\leqslant M(c-a)\\ \implies \lim_{c \to a} \int_a^cf(x) \, dx =0.$$
Hence,
$$\int_a^bf(x) \, dx = \\ \lim_{c \to a} \int_a^bf(x) \, dx \\ = \lim_{c \to a} \int_a^cf(x) \, dx + \lim_{c \to a} \int_c^bf(x) \, dx \\ = \lim_{c \to a} \int_c^bf(x) \, dx $$ |
Is there such a thing as a mathematical thesaurus? | There is a very comprehensive (at least very long!) "Handbook of Mathematical Discourse" which you can find here.
Under the various topics they list some synonyms, but I can't vouch for its completeness or accuracy. Also, it is more of a dictionary or encyclopedia than a pure thesaurus.
EDIT: Another question on Math.SE asks a similar question and one of the answers points to this online encyclopedia. It seems, however, that this one might be of limited use as a thesaurus. After a few tries it looks like synonyms are rarely included. |
Probability of being hired out of 3 people | You should have $P(B) = P(J) + 0.2$, etc. |
Any open interval in an n-dimensional Euclidean space is connected | OK. This follows from the basic result that the Cartesian product of connected sets is connected (an open interval in $n$ dimensions is a product of one-dimensional intervals). You may look here for the proof:
http://planetmath.org/encyclopedia/ProofOfProductsOfConnectedSpaces.html |
Need help identifying what property of these integrals make them 0 | Hint:
Expand the integrand by distributivity and use for the result the linearisation formula
$$\sin a\sin b=\frac12\bigl(\cos(a-b)-\cos(a+b)\bigr),$$
in order to compute the integral. |
What is the probability of getting multiple choice question correct | There are $\binom{4}{2}=6$ unordered pairs of two choices. If the student chooses at random, each of these pairs is equally probable, so the probability of choosing the correct pair is $1/6$. |
What's the typical role of the constant $e^{-\gamma}$? | This constant appears in Weierstrass factorization of $\Gamma(s)$:
$$
{1\over\Gamma(s)}=se^{\gamma s}\prod_{k\ge1}\left(1+\frac sk\right)e^{-s/k}
$$
and also in Mertens' formula:
$$
\prod_{p\le x}\left(1-\frac1p\right)={e^{-\gamma}\over\log x}\left[1+\mathcal O\left(1\over\log x\right)\right]
$$
Mertens' formula accounts for most appearances of $e^\gamma$ in number theory. For instance, it can be used to deduce the minimal order of Euler's totient function:
$$
\liminf_{n\to\infty}{\varphi(n)\log\log n\over n}=e^{-\gamma}
$$
and the maximal order for divisor sum function (aka Gronwall's theorem):
$$
\limsup_{n\to\infty}{\sigma(n)\over n\log\log n}=e^\gamma
$$
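A numerical illustration of Mertens' formula, assuming SymPy's prime generator:
```python
import math
from sympy import primerange

x = 10**6
prod = 1.0
for p in primerange(2, x + 1):
    prod *= 1 - 1/p

print(prod * math.log(x))             # close to 0.5615...
print(math.exp(-0.5772156649015329))  # e^{-gamma} = 0.5615...
```
Even at $x=10^6$, the product times $\log x$ is already close to $e^{-\gamma}$. |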
Which of the following cannot possibly be the average - Standardized Test | The question might have been better worded. For instance: "The lightest man weighs 110 pounds, and the heaviest man weighs 135 pounds." As it stands, I find it ambiguous. |
Fundamental theorem of Calculus: $\frac{d}{dx} \int_{-x}^{x} \frac{1}{3+t^2} \ dt$ Possible textbook mistake | Your answer is correct. Note that if the textbook were correct, then the integral would be independent of $x$: this is clearly false because the integrand is always positive, so increasing $x$ will add more to the integral. |
Why is it natural to define a prime as $p|ab$ implies $p|a$ or $p|b$? | Good question!
There are probably several possible answers to this question, but here is my perspective.
Reason 1: Ring Theoretic
Let $R$ be a commutative ring. Let $f\in R$, then consider $fR=\{af : a\in R\}$.
You can check that this is an ideal of $R$, and it is also often denoted by simply $(f)$ (meaning the ideal of $R$ generated by $f$).
Now we can ask the question
When is $R/fR$ a domain?
It turns out the answer is:
Precisely when $f$ is prime. (Or $f=0$)
Proof:
If $p$ is prime, then if $$ab\equiv 0 \pmod{pR},$$
by definition this means $p\mid ab$, which since $p$ is prime implies that $p\mid a$ or $p\mid b$. However this in turn means
$$a\equiv 0\!\!\pmod{pR}\quad\text{or}\quad b\equiv 0\!\!\pmod{pR},$$
which is what it means for $R/pR$ to be a domain.
Conversely, if $R/pR$ is a domain, then if $p\mid ab$, $ab\equiv 0 \pmod{pR}$, so either $a\equiv 0 \pmod{pR}$ or $b\equiv 0 \pmod{pR}$, which means either $p\mid a$ or $p\mid b$. Hence $p$ is prime. $\blacksquare$
Note:
As rschwieb points out in the comments, I should have been a bit more careful when originally writing this. We usually exclude $0$ from the definition of prime (as you've done above). However $R/0R=R/0=R$ is certainly a domain. I suspect that the reason for excluding $0$ is a function of the other motivations for this definition discussed below: if we allow $0$ to be prime, then it complicates the statement of unique prime factorization, since after all, $0=0\cdot 3^2=0\cdot 101=0\cdot (-17)$, so how can $0$ have a unique prime factorization?
For more on this and a different perspective, I recommend rschwieb's excellent answer here (same link as in the comments).
Reason 2: Number Theoretic (kind of)
The other way we come up with this naturally is that it is the condition we need to hold in order to get unique factorizations.
I.e., suppose we have two factorizations of an element $x\in R$ into irreducibles
$$x = \prod_i p_i = \prod_j q_j,$$
with $p_i,q_j\in R$ irreducibles,
then when are we guaranteed that some $p_i$ occurring in the first factorization also appears in the second factorization (or an associate of $p_i$, i.e. a unit times $p_i$, since for example in the integers we could have $9=3\cdot 3 = (-3)\cdot (-3)$)?
Well, we need to have $p_i$ divide one of the $q_j$ (for then they are associates, since $p_i$ and $q_j$ are irreducibles).
The condition that for all multiples $x$ of $p_i$, $p_i$ divides some $q_j$ for any factorization $x=\prod_j q_j$ of $x$ into irreducibles is equivalent to $p_i$ being prime (for a Noetherian ring, so that we are guaranteed to have factorizations into irreducibles, otherwise bad things could happen).
Proof:
A note on notation: I'll replace $p_i$ with $p$.
Suppose $p$ is prime, and $p\mid x$, and $\prod_j q_j$ is a factorization of $x$ into irreducibles, then we induct on the length of the factorization. If $x=q_1$ is irreducible, then $p\mid q_1$ by definition, and we are done. Otherwise, since $p\mid (q_1\cdots q_{n-1})q_n$, then by primality of $p$, either $p\mid q_1\cdots q_{n-1}$, in which case $p\mid q_j$ for some $j$ by the inductive hypothesis, or $p\mid q_n$, and we are done.
Conversely, if $p$ has the property discussed above, then if $p\mid ab$ for some $a$ and $b$, let $a=\prod_i\alpha_i$ and $b=\prod_j\beta_j$ be factorizations of $a$ and $b$ into irreducibles (these exist since $R$ is Noetherian; if you aren't familiar with Noetherianness yet, just take the existence of factorizations into irreducibles as a black box for now). Then $p\mid ab= \prod_i \alpha_i \prod_j \beta_j$, so by the property we're assuming $p$ has, either $p\mid \alpha_i$ for some $i$, or $p\mid \beta_j$ for some $j$, and thus either $p\mid a$ or $p\mid b$. Hence $p$ is prime.
Reason 3: (Actually a consequence of reason 2)
There is a theorem, which is relevant here.
A Noetherian domain $R$ is a UFD if and only if every irreducible is prime.
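A standard example showing how this can fail (added here for illustration): in $\mathbb{Z}[\sqrt{-5}]$, the element $2$ is irreducible, yet $2\mid 6=(1+\sqrt{-5})(1-\sqrt{-5})$ while $2\nmid 1+\sqrt{-5}$ and $2\nmid 1-\sqrt{-5}$, so $2$ is not prime. Consistent with the theorem, $\mathbb{Z}[\sqrt{-5}]$ is not a UFD: $6=2\cdot 3=(1+\sqrt{-5})(1-\sqrt{-5})$ gives two genuinely different factorizations into irreducibles. |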
Interpolate 4 points by an increasing polynomial | Here are some thoughts about this question, but they do not lead to a conclusive statement.
I will use the following fact:
If a polynomial $P$ is such that $P(a)=b$, then $P$ is necessarily of the form
$$ P(X)=(X-a)Q(X)+b,$$
where $Q$ is a polynomial.
The idea is to apply this iteratively to $P$ such that $P(0)=0$, $P(x_i)=y_i$ with $1\leq i\leq3$. Since $P(0)=0$, $P$ is of the form
$$P=XA(X)$$
where $A$ is a polynomial.
Let us compute $A(x_1)$. We have $P(x_1)=x_1A(x_1)=y_1$, so we have
$$ A(x_1)=\frac{y_1}{x_1}\equiv z_1.$$
We conclude that $A$ is of the form
$$A(X)=z_1+(X-x_1)B(X)$$
where $B$ is a polynomial. We can compute $B(x_2)$ using $$y_2=P(x_2)=x_2A(x_2)=x_2\big[z_1+(x_2-x_1)B(x_2)\big].$$
We get $$B(x_2)=\frac{1}{x_2-x_1}\left(\frac{y_2}{x_2}-\frac{y_1}{x_1}\right)\equiv z_2.$$
We can likewise write $B$ as
$$B(X)=z_2+(X-x_2)C(X),$$
where $C$ is a polynomial. We compute $C(x_3)$ using
$$\begin{split}y_3&=P(x_3)=x_3A(x_3)=x_3\Big[z_1+(x_3-x_1)B(x_3)\Big]\\&=x_3\Bigg\{z_1+(x_3-x_1)\Big[z_2+(x_3-x_2)C(x_3)\Big]\Bigg\}.\end{split}$$
We obtain $$C(x_3)=\frac{1}{x_3-x_2}
\left[\frac{1}{x_3-x_1}\left(\frac{y_3}{x_3}-\frac{y_1}{x_1}\right)-z_2\right]\equiv z_3.$$
Finally (or at last) we can write $C$ as
$$C(X)=z_3+(X-x_3)Q(X).$$
By necessary conditions, any polynomial satisfying the requirements
$P(0)=0$, $P(x_i)=y_i$ is of the form
$$P(X)=X\Bigg\{z_1+(X-x_1)\Big[z_2+(X-x_2)\big(z_3+(X-x_3)Q(X)\big)\Big]\Bigg\}.$$
Clearly if $Q=0$ then we have the Lagrange interpolation result.
If this polynomial does not work, then we must consider $Q\neq0$ with a positive leading coefficient.
We must have $P'(0)\geq0$. Let us compute $P'(0)$. We have
$$P'(0)=A(0)=z_1-x_1z_2+x_1x_2z_3-x_1x_2x_3Q(0).$$
Consequently, we want
$$Q(0)\leq\frac{z_1-x_1z_2+x_1x_2z_3}{x_1x_2x_3}\equiv q_0.$$
If $q_0\leq0$ then necessarily $Q$ must be of degree at least 1.
By considering $P'(x_i)\geq0$, we can find an inequality for $Q(x_i)$.
This sums up to four inequalities, so the required minimal degree of $Q$
for the general case is probably at least $3$, unless I'm missing something.
I believe the general case requires a computer's help.
This does not provide a full answer, I'm afraid, but I hope it helps anyway.
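A quick numerical sketch of the nested form, with arbitrary sample data and $Q=0$ (the Lagrange case):
```python
x1, x2, x3 = 1.0, 2.0, 3.0   # arbitrary sample data
y1, y2, y3 = 2.0, 1.0, 5.0

z1 = y1 / x1
z2 = (y2/x2 - z1) / (x2 - x1)
z3 = ((y3/x3 - z1) / (x3 - x1) - z2) / (x3 - x2)

def P(X, Q=lambda t: 0.0):   # Q = 0 recovers plain interpolation
    return X * (z1 + (X - x1)*(z2 + (X - x2)*(z3 + (X - x3)*Q(X))))

print([P(t) for t in (0.0, x1, x2, x3)])  # [0.0, 2.0, 1.0, 5.0]
```
Monotonicity would then have to be enforced by choosing $Q$ subject to the four derivative inequalities above. |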
What to study after Calculus I? | I would recommend Combinatorics. There are many different directions you can go to give you a taste of many types of math and it’s relatively easy to self study.
Alternatively getting a start in group theory, or more broadly abstract algebra, is a good idea. You will likely see related concepts in linear algebra in your next semester. |
Show that $\frac{d}{dx}f(0,0)$ exists | No. By definition $f_x(0,0)=\lim_{h \to 0} \frac {f(h,0)-f(0,0)}h$. And $f(h,0)-f(0,0)=0-0=0$ for all $h \neq 0$. Hence the limit is $0$. |
Which is the trick to solve this simple equation? | $$2000x=(10-18000x)^2$$ and try using $2000x=t$:
$t=(10-9t)^2$
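Carrying the substitution through, a quick check of the resulting quadratic, assuming SymPy:
```python
import sympy as sp

t = sp.symbols('t')
sols = sp.solve(sp.Eq(t, (10 - 9*t)**2), t)
print(sols)                      # t = 1 and t = 100/81
print([s / 2000 for s in sols])  # the corresponding values of x
```
The quadratic $81t^2-181t+100=0$ has discriminant $181^2-4\cdot81\cdot100=361=19^2$, so both roots are rational. |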
Differentiating by $\ln$ of $x$ | It's correct. A more direct way is $\frac d{d(\ln x)} \ln \alpha x = \frac d{d(\ln x)} \ln \alpha + \frac d{d(\ln x)} \ln x = 0 +1=1$ as $\ln \alpha$ is a constant.
It is assumed that both $\alpha$ and $x$ take positive values. |