title | upvoted_answer
---|---
Quadratic subfield of $\mathbb{Q}(\zeta)$ | It's not true. Consider $\Bbb Q[\zeta_{12}]$. It contains both $\Bbb Q[i]$ and $\Bbb Q[\zeta_3]$, each of which has degree $2$ over $\Bbb Q$. |
How to determine the outcome of the recursive sequence $a_n=\frac{1}{\operatorname{abs}\left(a_{n-1}\right)-1}$ | Since the iterating map $f : a \mapsto \frac 1 {|a|-1}$ is a piecewise rational homography, and because rational homographies form a group under composition, any iterate of $f$ is again a piecewise rational homography.
Then, cycles can only occur at a fixpoint of one of those homographies, and their fixpoints are quadratic numbers: if $x = \frac {ax+b}{cx+d}$ for rational $a,b,c,d$, then $cx^2+(d-a)x-b = 0$, which is a quadratic equation in $x$ with rational coefficients.
We can count the number of cycles for each $n$, and also see which sequences of signs are possible.
Split the circle $\Bbb R \cup \{\infty\}$ into four parts $A=[\infty ; -1],B=[-1;0],C=[0;1],D=[1;\infty]$. $f$ sends $A$ and $D$ onto $C \cup D$, and sends $B$ and $C$ onto $A$.
With this we can quickly determine by induction that if $F_0 = 0, F_1 = 1, \ldots$ is the Fibonacci sequence,
$f^{\circ n}$ sends $A$ and $D$ onto ($F_{n-1}$ copies of $A$ and $F_n$ copies of $C$ and $D$); and it sends $B$ and $C$ onto ($F_{n-2}$ copies of $A$ and $F_{n-1}$ copies of $C$ and $D$).
Each time you have a piece of $f^{\circ n}$ sending a subinterval of $A$ onto a whole $A$, you get exactly one fixpoint in that interval, and the same goes for $B,C,D$.
This gives you $F_{n-1} + F_{n-1} + F_n$ fixpoints of $f^{\circ n}$ in total. That's the Fibonacci-like sequence $1,3,4,7,11,18,\ldots$ (the Lucas numbers).
(The cycles all have to lie inside the intervals, so each one is counted only once. The exception is the cycle $0 \mapsto -1 \mapsto \infty \mapsto 0$, which sits on endpoints; but it too is counted once, because we only count the "half" of $0$ that occurs in $C$.)
By looking at that Fibonacci-like sequence then removing the smaller cycles coming from divisors of $n$, you get $1,1,1,1,2,2,\ldots$ cycles of length exactly $1,2,3,4,5,6\ldots$
Moreover, a number is completely determined by the sequence of intervals $C,A,D$ it goes through. Any $C$ is followed by an $A$, while $A$ or $D$ can be followed by $C$ or $D$.
Going back to signs, knowing the sequence of intervals is the same as knowing the sequence of signs. A negative sign corresponds to an $A$; a positive sign corresponds to a $C$ or a $D$ according to whether the next sign is negative or positive. So the possible sign sequences are exactly those with no two consecutive $-$ signs, and again, an irrational number is completely determined by the sequence of signs it goes through.
If you are on a positive real, then you will see a bunch of $CA (+-)$ and $D (+)$, and of course, numbers that are on cycles correspond to periodic sequences.
For example, the $4$-cycle corresponds to repeating $CADD$ ($+-++$), and the two $5$-cycles correspond to $CACAD$ ($+-+-+$) and $CADDD$ ($+-+++$).
If you know about continued fractions, they come from a similar piecewise homography $g$ on positive real numbers ($g(x) = x-1$ if $x \ge 1$ and $g(x) = \frac 1x$ if $0 < x \le 1$).
In fact, they are very much intertwined.
If $2 \le x$ then $f(x) = \frac 1{x-1} > 0$, $f(f(x)) = \frac{1-x}{x-2} < 0$, and $f(f(f(x))) = x-2 = g(g(x))$
If $1 \le x \le 2$ then $f(x) = \frac 1{x-1} = g(g(x))$
If $0 \le x \le 1$ then $f(x) = \frac {-1}{1-x} < 0$, $f(f(x)) = \frac{1-x}x = \frac 1x - 1 = g(g(x))$
So iterating $f$ on some number will reach (among other values) all the iterates of $g\circ g$ on that number.
In particular, if you start on a positive real, you reach a large number if and only if you reach a large number when computing the continued fraction of that real (that's what you see happening with $\pi$, for example).
In particular, any quadratic number has a continued fraction which is eventually periodic, so under your iteration its trajectory will be eventually periodic as well: quadratic numbers correspond exactly to eventually periodic sign sequences. |
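The eventual periodicity for quadratic numbers is easy to illustrate numerically (a Python sketch, my own addition; it happens that $\sqrt2$ lies on an exact $6$-cycle of $f$):

```python
import math

def f(a):
    # the iterated map a -> 1 / (|a| - 1)
    return 1.0 / (abs(a) - 1.0)

a0 = math.sqrt(2)          # a quadratic irrational
orbit = [a0]
for _ in range(6):
    orbit.append(f(orbit[-1]))

# sign pattern of one period: no two consecutive minus signs
signs = "".join("+" if v > 0 else "-" for v in orbit[:6])
```

With floating point, the orbit returns to $\sqrt 2$ up to round-off after six steps, and the sign pattern is `+++-+-`, which indeed contains no two consecutive minus signs.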
Muller's method's initial approximations for non-real roots | Due to the square root in the quadratic solution formula, even if you start with real numbers the iteration can spontaneously leave the real axis. Of course you can also force this by giving one or more of the initial points an imaginary part. Note that this is no guarantee of finding a complex root; the iteration may still converge to the real axis.
The roots of the given polynomial have magnitudes between $\frac12$ and $5$ (by standard root-radius bounds), so you could for instance start with $x_0=2$, $x_1=2i$ and $x_2=-2$. Or you could construct points with a random radius in $[2,3]$ and a random angle.
Remember that Muller's method finds the roots of the quadratic Newton interpolation polynomial
\begin{align}
p(x)&=f(x_2)+f[x_1,x_2](x-x_2)+f[x_0,x_1,x_2](x-x_1)(x-x_2)
\\
&=f(x_2)+\Bigl(f[x_1,x_2]+f[x_0,x_2]-f[x_0,x_1]\Bigr)(x-x_2)+f[x_0,x_1,x_2](x-x_2)^2
\end{align}
and sets the next point $x_3$ as the root that is closest to $x_2$.
The divided differences can be computed for complex points the same way as for real points.
Introducing $w=f[x_1,x_2]+f[x_0,x_2]-f[x_0,x_1]$ gives the root computation
\begin{align}
0&=4f_2^2+2\,(2f_2)\,w(x-x_2)+4f_2f_{012}(x-x_2)^2\\
&=(2f_2+w(x-x_2))^2-(w^2-4f_2f_{012})(x-x_2)^2\\
x&=x_2-\frac{2f_2}{w\pm\sqrt{w^2-4f_2f_{012}}}
\end{align}
In a complex version you will have to take the complex square root. It is easiest to compute both possible denominators and select the one with the larger absolute value. |
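As a sketch of the iteration described above (my own code; the polynomial and the starting points are arbitrary choices), note how purely real starting points on $x^2+1$ still produce a complex root, because the square root of the negative discriminant leaves the real axis:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, maxit=100):
    """Muller iteration; complex arithmetic lets real iterates leave the real axis."""
    for _ in range(maxit):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        d01 = (f1 - f0) / (x1 - x0)      # divided differences
        d12 = (f2 - f1) / (x2 - x1)
        d02 = (f2 - f0) / (x2 - x0)
        w = d12 + d02 - d01
        f012 = (d12 - d01) / (x2 - x0)
        disc = cmath.sqrt(w * w - 4 * f2 * f012)
        # pick the denominator with the larger absolute value
        den = w + disc if abs(w + disc) >= abs(w - disc) else w - disc
        x3 = x2 - 2 * f2 / den
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# x^2 + 1 has no real roots, yet real starting points converge to one of +-i
root = muller(lambda x: x * x + 1, 0.5, 1.0, 1.5)
```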
Probability generating function question. | Using independence,
$$ \mathbb{P}(Y\leq y)=\sum_{n=1}^{\infty}\mathbb{P}(\max\{X_1,\dots,X_n\}\leq y,\nu=n)=\sum_{n=1}^{\infty}\mathbb{P}(\max\{X_1,\dots,X_n\}\leq y)\mathbb{P}(\nu=n)$$
As you said in the question, since the $X_i$ are iid, it follows that
$$ \mathbb{P}(\max\{X_1,\dots,X_n\}\leq y)=F(y)^n $$
Therefore
$$\sum_{n=1}^{\infty}\mathbb{P}(\max\{X_1,\dots,X_n\}\leq y)\mathbb{P}(\nu=n)=\sum_{n=1}^{\infty}\mathbb{P}(\nu=n)F(y)^n=P(F(y))$$
as desired. |
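A quick Monte Carlo sanity check of this identity (my own sketch, assuming for concreteness that $\nu$ is geometric with $p=\frac12$ and the $X_i$ are uniform on $[0,1]$, so that $F(y)=y$ and the pgf is $P(s)=\frac{ps}{1-(1-p)s}$):

```python
import random

random.seed(12345)

p = 0.5   # nu ~ Geometric(p) on {1, 2, ...} (assumed for concreteness)
y = 0.5   # X_i ~ Uniform(0, 1), so F(y) = y

def sample_Y():
    nu = 1
    while random.random() >= p:   # count Bernoulli trials up to the first success
        nu += 1
    return max(random.random() for _ in range(nu))

N = 200_000
emp = sum(sample_Y() <= y for _ in range(N)) / N
pgf = p * y / (1 - (1 - p) * y)   # P(s) evaluated at s = F(y); here 1/3
```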
How to prove PQ = QP using the product formula of polynomials? | Quite obvious if you write it in a symmetric way:
$$PQ = \sum_{i = 0}^{n + m}\biggl ( \sum_{k+\ell =i}a_k b_{\ell}\biggr )X^{i}= \sum_{i = 0}^{m+n}\biggl ( \sum_{\ell+k =i} b_{\ell}a_k\biggr )X^{i}=QP.$$ |
Hatcher 1.3.27: Clarification on left and right actions | This is explained in Hatcher's book on p. 69. The elements of $\pi_1(X,x_0)$ are path homotopy classes of closed paths $\gamma : [0,1] \to X$ based at $x_0$ (i.e. $\gamma(0) = \gamma(1) = x_0$). Given $y \in p^{-1}(x_0)$ and a path $\gamma$ based at $x_0$, we find a unique lift $\tilde \gamma$ of $\gamma$ such that $\tilde \gamma (0) = y$. Define $a(y, \gamma) = \tilde \gamma (1)$. It is easy to verify that $\tilde \gamma (1)$ only depends on $[\gamma] \in \pi_1(X,x_0)$. That is, $a(y, [\gamma]) = a(y, \gamma)$ is well-defined. The product of $[\gamma], [\delta] \in \pi_1(X,x_0)$ is given by $[\gamma] \cdot [\delta] = [\gamma \cdot \delta]$, where the path $\gamma \cdot \delta$ is defined by $(\gamma \cdot \delta)(t) = \gamma (2t)$ for $t \le 1/2$ and $(\gamma \cdot \delta)(t) = \delta (2t-1)$ for $t \ge 1/2$ (that is, we first travel with $\gamma$ and then with $\delta$). This shows that
$$a(y, [\gamma] \cdot [\delta]) = a(a(y, [\gamma]), [\delta]) .$$
In fact, we first lift $\gamma$ to $\tilde \gamma$ with $\tilde \gamma (0) = y$ to get $a(y, [\gamma]) = \tilde \gamma (1)$. Next we lift $\delta$ to $\tilde \delta$ with $\tilde \delta (0) = a(y, [\gamma]) = \tilde \gamma (1)$ to get $a(a(y, [\gamma]),[\delta]) = \tilde \delta (1)$. But the composed path $\tilde \gamma \cdot \tilde \delta$ is a lift of $\gamma \cdot \delta$ such that $(\tilde \gamma \cdot \tilde \delta)(0) = \tilde \gamma(0) = y$. Hence $a(y, [\gamma] \cdot [\delta]) = (\tilde \gamma \cdot \tilde \delta)(1) = \tilde \delta(1) = a(a(y, [\gamma]),[\delta])$. This shows that $a$ is a right action. See for example right group action. Writing suggestively $y \cdot [\gamma] = a(y, [\gamma])$ we get
$$y \cdot ([\gamma] \cdot [\delta]) = (y \cdot [\gamma]) \cdot [\delta] .$$
This explains the name right action: The above "associative property" works via multiplication on the right side of $y$.
But isn't writing $y \cdot [\gamma]$ quite arbitrary, couldn't we write $[\gamma] \cdot y$? Yes, we can, but then we get
$$([\gamma] \cdot [\delta]) \cdot y = [\delta] \cdot ([\gamma] \cdot y) $$
which doesn't look like a usual associative property.
This is the reason why Hatcher defines a left action by
$$[\gamma] * y = y \cdot [\gamma]^{-1} .$$
In fact
$$([\gamma] \cdot [\delta]) * y = y \cdot ([\gamma] \cdot [\delta])^{-1} = y \cdot ([\delta]^{-1} \cdot [\gamma]^{-1}) = (y \cdot [\delta]^{-1}) \cdot [\gamma]^{-1} = ([\delta] * y ) \cdot [\gamma]^{-1} \\= [\gamma] * ([\delta] * y) .$$ |
Is Grzegorczyk's theory $TC$ interpretable in Robinson Arithmetic $Q$ | It is. The theory is easily interpretable in $I\Delta_0$, which is interpretable in $Q$. For the latter, see for example Interpretability in Robinson's Q (Ferreira, Fernando; Ferreira, Gilda. Bull. Symbolic Logic 19 (2013), no. 3, 289--317.) |
What is the consistency strength of $ZC+\neg CH\ \forall x (|x|>1)$? | No. Just start with a model of GCH, force CH to fail below $\aleph_\omega$, without going above it, and consider $V_{\omega+\omega}$ of the extension.
Large cardinals only come into play when you want to also have strong limit cardinals. |
Comparison convergence test $\sum_{n=1}^\infty (2n+1)\beta^{-\sqrt{2n+1}}$ | If $b > 1$ then, since $\dfrac{\log(n)}{\sqrt{n}} \to 0$, we have $\sqrt{2n+1} \gt 3\log(n)/\log(b)$ for all large enough $n$, so
$\begin{array}\\
b^{-\sqrt{2n+1}}
&=\dfrac1{b^{\sqrt{2n+1}}}\\
&\lt\dfrac1{b^{3\log(n)/\log(b)}}\\
&=\dfrac1{e^{3\log(b)\log(n)/\log(b)}}\\
&=\dfrac1{e^{3\log(n)}}\\
&=\dfrac1{n^3}.\\
\end{array}
$
Hence $(2n+1)\,b^{-\sqrt{2n+1}} \lt \dfrac{2n+1}{n^3} \le \dfrac{3}{n^2}$ for such $n$, and the sum of these converges by comparison with $\sum \frac1{n^2}$. |
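Numerically (my own sketch, taking $\beta = 2$), the partial sums of the series stabilize very quickly, consistent with convergence:

```python
import math

def partial_sum(beta, N):
    """S_N = sum_{n=1}^{N} (2n+1) * beta**(-sqrt(2n+1))."""
    return sum((2 * n + 1) * beta ** (-math.sqrt(2 * n + 1))
               for n in range(1, N + 1))

s1 = partial_sum(2.0, 2000)
s2 = partial_sum(2.0, 4000)
# doubling the number of terms changes the sum by a negligible amount
```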
Trying to solve infinite integral involving error function | When I substitute $y=2x^2$ I get
$$I=4\sqrt2\int_0^\infty x^2e^{-x^2}\,dx.$$
By integration by parts, this can be reduced to the usual "probability integral"
$$\int_0^\infty e^{-x^2}\,dx.$$ |
Confusion with Recursion Theorem | In (1) you have given a set of properties of $f$ but not actually shown that a function with these properties exists. In your head you are probably saying something like "the function is defined for zero, and for each $n$, if the function is defined for $n,$ then it is defined for $n+1$." But when you start a sentence with 'the function', you are presupposing its existence. If it helps, remember that we must show the function is a set whose existence follows from the axioms, i.e. we establish its existence using pairing, unions, etc.
So for each $k\in \mathbb N$, we then replace the vague "$f$ is defined at $k$" with the rigorous "$f_k$ exists." You can show by induction on $k$ that for each $k,$ there is a unique function $f_k$ with domain $\{0,\ldots, k\}$ that satisfies your properties up to $f(k) = g(f(k-1),k-1).$ The proof of the induction step (that if an $f_k$ with the properties exists, then an $f_{k+1}$ does) is just adding the ordered pair $\langle k+1,g(f_k(k),k)\rangle$ to $f_k$. Then, as you've shown all the $f_k$ exist, you can define $f = \{\langle k,f_k(k)\rangle\mid k\in \mathbb N\}.$ |
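The finite approximations $f_k$ can be mirrored computationally (an illustrative sketch; `build_fk` and the example $g$ are my own choices, not from the question):

```python
def build_fk(g, a, k):
    """The unique function on {0, ..., k} with f(0) = a and f(n+1) = g(f(n), n)."""
    f = {0: a}
    for n in range(k):
        f[n + 1] = g(f[n], n)
    return f

# example: g(v, n) = v + (n + 1) gives the triangular numbers
f5 = build_fk(lambda v, n: v + (n + 1), 0, 5)
```

Each `build_fk(g, a, k)` extends `build_fk(g, a, k-1)` by one ordered pair, exactly as in the induction step, and the full $f$ is the union of these approximations.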
In a field, $a \mapsto a^2$ is bijective. Then the characteristic is..? | If $a\mapsto a^2$ is a permutation, then $1^2\neq(-1)^2$ unless $1=-1$. But, on the other hand, you always have $1^2=(-1)^2$. Therefore, $1=-1$. In other words, the characteristic is $2$. |
How many 6 digit numbers with 2 or 3 repetitions allowed | For pairs: the number of arrangements of $6$ digits forming $3$ pairs is $\binom{6}{2,2,2}=90$, and the ordered choices of the $3$ digits give $(10)_3$. Two corrections are needed: numbers cannot start with $0$, and strings are over-counted when digit groups have the same cardinality. By symmetry, exactly $10\%$ of the strings start with $0$, so we keep $\frac{9}{10}$ of the total; and the over-counting coming from the ordered choice of digits is removed by dividing by $k!$ for the $k$ pairs and by $(6-2k)!$ for the single digits ($\frac{1}{k!(6-2k)!}$). So, for example, for $3$ pairs this gives:
$$90\cdot (10)_3\cdot\frac{9}{10}\cdot \frac{1}{3!}=3^5\cdot 40$$
The general case is
$$\frac{9}{10}\sum_{k=0}^{3}\frac{6!\,(10)_{6-k}}{2^k\,k!\,(6-2k)!}=\frac{9\cdot6!\cdot(10)_3}{10}\sum_{k=0}^{3}\frac{(7)_{3-k}}{2^k\,k!\,(6-2k)!}=X$$
For triples, the total is the previous number plus the strings containing a triple together with a pair, a triple with three singles, or two triples.
The point is to change the parts in the denominator of the multinomial coefficient so that they represent the groups composing the number; for every such composition, multiply by the number of variations, i.e. the falling factorial of $10$ down to the number of distinct digits used; and afterwards reduce the total to $90\%$ of itself, because the numbers cannot start with $0$ if they are to have $6$ digits.
For triples this gives
$$X+\frac{9}{10}\sum_{j=1}^{3}\binom{6}{j,3}\frac{(10)_{5-j}}{2^{\delta_{j,3}}3^{\delta_{j,1}}}$$
EDIT: Sorry for the many edits; there were a lot of tiny mistakes. I think it is correct now. |
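The closed forms can be checked against a brute-force count (my own sketch, not part of the answer; note the divisor $2^k$ for the $k$ pairs):

```python
from collections import Counter
from math import factorial, perm

def brute(max_rep):
    """Count 6-digit numbers in which no digit appears more than max_rep times."""
    return sum(1 for n in range(10**5, 10**6)
               if max(Counter(str(n)).values()) <= max_rep)

# strings with k doubled digits and 6-2k singles, k = 0..3
pair_strings = sum(
    factorial(6) * perm(10, 6 - k) // (2**k * factorial(k) * factorial(6 - 2 * k))
    for k in range(4))

# strings containing one triple, a group of size j, and 3-j singles, j = 1..3
triple_strings = sum(
    factorial(6) * perm(10, 5 - j)
    // (factorial(j) * factorial(3) * factorial(3 - j))
    // ((2 if j == 3 else 1) * (3 if j == 1 else 1))
    for j in (1, 2, 3))

max2 = pair_strings * 9 // 10                     # drop the 10% starting with 0
max3 = (pair_strings + triple_strings) * 9 // 10
```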
Numbering points on a line based on their position without checking all positions. | If you have $N$ equally spaced points, numbered $1$ to $N$, with point $1$ at position $L_0$, and point $N$ at position $L_1$, then point $i$ is at position
$$x_i = L_0 + \frac{i-1}{N-1} \left ( L_1 - L_0 \right)$$
See the resemblance to linear interpolation? That's how I constructed it.
If we solve the above for $i$, we find that the number at position $x$ is
$$i = 1 + \frac{(x - L_0)(N - 1)}{L_1 - L_0}$$
If $L_0 = 0$ and $L_1 = L$, those simplify to
$$x_i = \frac{i-1}{N-1} L$$
and
$$i = 1 + \frac{x \, (N - 1)}{L}$$
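These two formulas translate directly into code (a sketch; the function names are my own):

```python
def position(i, N, L0, L1):
    """Position of point i (1-based) among N equally spaced points from L0 to L1."""
    return L0 + (i - 1) * (L1 - L0) / (N - 1)

def index_at(x, N, L0, L1):
    """Inverse of `position`: the (possibly fractional) point number at position x."""
    return 1 + (x - L0) * (N - 1) / (L1 - L0)
```

A fractional result from `index_at` can be rounded to get the nearest point number.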
The OP has a JavaScript project with non-equally spaced points.
For this case, I recommend using an array with the points sorted by value, so that a binary search can be used to very efficiently find the correct (exact or closest) point to a given value.
Here is an example (wrapped in HTML for easy testing) of how to use binary search to find the correct or nearest label in O(log₂ N) time. In practice, it means you can easily have thousands of labels in a range, and the functions are not much slower than when you have, say, a dozen.
This example is of course in public domain (CC0).
<html>
<head>
<title> JavaScript range with labels </title>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<style type="text/css">
html, body { color: #000000; background: #ffffff; }
.highlight { background: #99ccff; }
</style>
<script type="text/javascript">
var labels = [];
/* Add a label object. You can add other properties as needed.
If overwrite is true, overwrites an existing label with the same
value. If false, and a label with that value already exists,
the existing label is kept. Returns true if added to array. */
function addLabel(value, label, overwrite)
{
var imin = 0;
var imax = labels.length - 1;
var newLabel = { value:value, label:label };
/* No labels yet? After last existing label? */
if (imax < 0 || labels[imax].value < value) {
labels.push(newLabel);
return true;
}
/* Before first existing label? */
if (value < labels[0].value) {
labels.unshift(newLabel);
return true;
}
/* Replace first existing label? */
if (value == labels[0].value) {
if (!overwrite)
return false;
labels[0] = newLabel;
return true;
}
/* Replace last existing label? */
if (value == labels[imax].value) {
if (!overwrite)
return false;
labels[imax] = newLabel;
return true;
}
/* Binary search to find insertion point. */
while (imax - imin > 1) {
var i = imin + Math.floor((imax - imin)/2);
if (labels[i].value < value)
imin = i;
else
if (labels[i].value > value)
imax = i;
else {
/* Found at [i]. */
if (!overwrite)
return false;
labels[i] = newLabel;
return true;
}
}
/* Not found. Insert after imin. */
labels.splice(imin + 1, 0, newLabel);
return true;
}
/* Delete (and return) a label with the specified value. */
function delLabel(value)
{
var imin = 0;
var imax = labels.length - 1;
/* No labels? */
if (imax < 0)
return null;
/* Outside the array range? */
if (value < labels[imin].value ||
value > labels[imax].value)
return null;
/* First? */
if (value == labels[imin].value)
return labels.shift();
/* Last? */
if (value == labels[imax].value)
return labels.pop();
/* Binary search. */
while (imax - imin > 1) {
var i = imin + Math.floor((imax - imin)/2);
if (labels[i].value < value)
imin = i;
else
if (labels[i].value > value)
imax = i;
else {
var found = labels[i];
/* Remove [i] */
labels.splice(i, 1);
/* Return the label removed from the array. */
return found;
}
}
/* Not found. */
return null;
}
/* Find and return the label closest to value. */
function closestLabel(value)
{
var imin = 0;
var imax = labels.length - 1;
/* No labels? */
if (imax < 0)
return null;
/* Before or at first label? */
if (value <= labels[imin].value)
return labels[imin];
/* At or after last label? */
if (value >= labels[imax].value)
return labels[imax];
/* Binary search. */
while (imax - imin > 1) {
var i = imin + Math.floor((imax - imin)/2);
if (labels[i].value < value)
imin = i;
else
if (labels[i].value > value)
imax = i;
else
return labels[i];
}
/* Pick the closer one. */
if (value - labels[imin].value <= labels[imax].value - value)
return labels[imin];
else
return labels[imax];
}
/* The following are needed only for the example user interface. */
function ui_list(val)
{
var n = labels.length;
var html = [];
for (var i = 0; i < n; i++) {
if (val === labels[i].value)
html.push("<tt class=\"highlight\">{ value: ");
else
html.push("<tt>{ value: ");
html.push(labels[i].value, ", label:\"", labels[i].label, "\" }</tt><br>\n");
}
document.getElementById("table").innerHTML = html.join("");
}
function ui_exact()
{
var val = parseFloat(document.getElementById("value").value);
var item = closestLabel(val);
if (item !== null && item.value === val)
ui_list(val);
else
ui_list(null);
}
function ui_closest()
{
var val = parseFloat(document.getElementById("value").value);
var item = closestLabel(val);
if (item !== null)
ui_list(item.value);
else
ui_list(null);
}
function ui_add()
{
var value = parseFloat(document.getElementById("value").value);
var label = document.getElementById("label").value;
if (addLabel(value, label, false))
ui_list(null);
document.getElementById("label").value = "";
}
function ui_delete()
{
var value = parseFloat(document.getElementById("value").value);
if (delLabel(value))
ui_list(null);
}
</script>
</head>
<body>
<form method="GET" action="#">
<fieldset>
<legend>Labels</legend>
<div id="table"></div>
</fieldset>
<fieldset>
<legend>Actions</legend>
<input type="button" value="Delete" onClick="ui_delete();">
<input type="button" value="Add" onClick="ui_add();">
<input id="value" type="text" value="" size="5" title="Value">
<input id="label" type="text" value="" size="10" title="Label">
<input type="button" value="Find Exact" onClick="ui_exact();">
<input type="button" value="Find Closest" onClick="ui_closest();">
</fieldset>
</form>
</body>
</html>
If you save the above as an HTML file, and open it in your browser, fill in the two text boxes with a value (a number) and a label (some text), and click the Add button. All the other buttons only use the value text box (smaller one, on the left). If a label matching some value is found, it is highlighted.
For further info on binary search, start at the Binary search algorithm Wikipedia page.
Note that the above implementation does not allow more than one label with the same value. If you allow duplicate values, a binary search finds one of them, but not necessarily the first one, so a bit of additional code (finding the leftmost of multiple matches) is needed then. I don't think multiple labels for the same value are that useful, though.
Since a binary search only works on a sorted array, the array is always kept in sorted order. This lets us do insertion and deletion with the JS array .splice() method; we only need to find the insertion point (index in the array) -- for which we can again use a binary search.
Note that I effectively duplicated the binary search code. This is not good practice. One should write it just once in a separate helper function (returning the array index i, or -1 if not found, for example), which would also make testing easier. (If you copy and paste the same bit of code, and later on fix a problem in only some of the copies, you're going to feel pain trying to find the ensuing bugs.)
However, in this particular example I decided to make an exception, and copy-paste the code into the functions, so that the logic would be easier to follow. |
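For comparison (my own sketch, not a translation of the JavaScript above): in Python, the "write the binary search once" advice is essentially what the standard `bisect` module provides, so the closest-match helper reduces to a few lines.

```python
import bisect

def find_closest(values, x):
    """Index of the element of the sorted list `values` closest to x."""
    i = bisect.bisect_left(values, x)
    if i == 0:
        return 0
    if i == len(values):
        return len(values) - 1
    # choose the nearer of the two neighbours (ties go left, as in the JS version)
    return i - 1 if x - values[i - 1] <= values[i] - x else i
```

Insertion into the sorted array is likewise a one-liner with `bisect.insort`.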
Is there a novel way to integrate this without using complex numbers? | The given integral does not converge, so I assume you wanted to study:
$$f(a)=\int_{0}^{+\infty}\frac{1-\cos(ax)}{x^2}\,dx$$
that is an even function, hence we can assume $a\geq 0$ WLOG. Integration by parts then gives:
$$ f(a) = a\int_{0}^{+\infty}\frac{\sin(ax)}{x}\,dx $$
and by replacing $x$ with $\frac{z}{a}$ we get:
$$ f(a)=a\int_{0}^{+\infty}\frac{\sin x}{x}\,dx = \frac{\pi}{2}\,a $$
leading to:
$$\forall r\in\mathbb{R},\qquad \int_{0}^{+\infty}\frac{1-\cos(rx)}{x^2}\,dx = \frac{\pi}{2}|r|.$$ |
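A numerical sanity check of this identity (my own sketch: composite Simpson's rule with $r=2$, truncating at $T=400$ and estimating the tail, which is $1/T + O(1/T^2)$):

```python
import math

def integrand(x, r):
    # (1 - cos(rx)) / x^2, with the removable singularity filled in at 0
    if x == 0.0:
        return r * r / 2.0
    return (1.0 - math.cos(r * x)) / (x * x)

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

r, T = 2.0, 400.0
est = simpson(lambda x: integrand(x, r), 0.0, T, 80000)
est += 1.0 / T   # tail estimate: the integral beyond T is 1/T + O(1/T^2)
```

For $r=2$ the target value is $\frac{\pi}{2}\lvert r\rvert = \pi$, and the estimate matches it to well under $10^{-3}$.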
Prove that $L^1\cap L^{\infty }\subseteq L^p$ for all $p\in [1,\infty]$ | Yes this is correct. Maybe a step could be added after the first equality, saying that $\lvert f(x)\rvert \leqslant \lVert f\rVert_\infty$ for almost every $x$ hence $\lvert f(x)\rvert^{p-1} \leqslant \lVert f\rVert_\infty^{p-1}$ almost everywhere. |
Why do I get a different result with partial fractions than with parts? | Observe that $$\frac3{x+3}=\frac{x+3-x}{x+3}=-\frac x{x+3}+1$$
i.e., they differ only by a constant, so they have the same derivative |
Find the Laurent series of $1/(z^3+1)$ in the annulus $1<|z|<3$ | HINT:
Note that
$$
f(z) = \frac{1}{z^3}\frac{1}{1+1/z^3}
$$
and remember that
$$
\frac{1}{1+\omega} = \sum_{k=0}^{+\infty}(-1)^k \omega^k ~~~\mbox{for}~~~ |\omega| < 1
$$ |
Does the Weil conjecture work with 0 dimensional varieties? | That's not the zeta function. There are two points over $\mathbb{F}_{q^n}$ if $n$ is even and none if $n$ is odd. This means the zeta function is
$$\exp \left( \sum_{k \ge 1} \frac{2}{2k} t^{2k} \right) = \frac{1}{1 - t^2}.$$ |
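The identity behind this computation, $\exp\bigl(\sum_{k\ge1} t^{2k}/k\bigr)=\frac{1}{1-t^2}$, can be checked on truncated power series with exact rational arithmetic (my own sketch):

```python
from fractions import Fraction

K = 20  # truncation order

# log of the zeta function: sum_{k>=1} (2/(2k)) t^{2k} = sum_{k>=1} t^{2k}/k
p = [Fraction(0)] * (K + 1)
for k in range(1, K // 2 + 1):
    p[2 * k] = Fraction(1, k)

# exponentiate the series: E = exp(P) satisfies E' = P'E, so n e_n = sum j p_j e_{n-j}
e = [Fraction(0)] * (K + 1)
e[0] = Fraction(1)
for n in range(1, K + 1):
    e[n] = sum(j * p[j] * e[n - j] for j in range(1, n + 1)) / n

# 1/(1 - t^2) has coefficients 1, 0, 1, 0, ...
```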
$x_n^3- y_n^3 \rightarrow 0 \ as \ n \rightarrow \infty$, then $x_n- y_n \rightarrow 0 $ if | It's not (d), since for $x_n=y_n=n$ we have $x_n-y_n=0\to0$ while $x_n^2+x_ny_n+y_n^2=3n^2\to\infty$ |
How do I prove these properties on similar triangles (Euclidian Geometry)? | Hint:
$$\frac{[ADE]}{[ABC]}=\underbrace{\frac{[ABE]}{[ABC]}}_{\frac{AE}{AC}}\cdot\underbrace{\frac{[ADE]}{[ABE]}}_{\frac{AD}{AB}}.$$ |
Prove that the center of a ring is a subring. | Your proof is right, but unnecessarily long. When you want to prove that some nonempty set is a subring, you can use the subring test. Denoting the center of your ring by $Z(R)$, you only have to prove that $1\in Z(R)$ and that if $x,y\in Z(R)$, then $x-y, x\cdot y\in Z(R)$. Since you have proved all that, $Z(R)$ is a subring of $R$.
The answer to the other question is right too: the center of a commutative ring is the ring itself. |
Cramér-Rao Lower Bound-Exponential distribution | a)
I'm not sure though if this is correct.
If you have a single observation, yes, it is! If you have a size-$n$ random sample you get
$$V(T)\geq \frac{k^2\theta^{2k}}{n}$$
b) Observe that $T=\overline{X}_n$ is an unbiased estimator for $\theta$ and it is a function of $S=\Sigma_iX_i$, a complete and sufficient statistic... thus $T$ is the UMVUE for $\theta$
Given this, one may suspect that $[\overline{X}_n]^k$ is UMVUE for $\theta^k$. To verify this it can be useful to calculate the expectation of
$$T_k=[\Sigma_iX_i]^k$$
This is easy because $\Sigma_iX_i$ has a known distribution (it is a gamma) and $T_k$ is a function of $S$, which is complete and sufficient.
Thus, once you have calculated the expectation of $T_k$, all you have to do is to correct its bias, if necessary |
Invertibility of $T:\mathbb R^3\to \mathbb R_2[X]$ and determinant | It seems to me that you are correct: the determinant is $-1$ and the transformation is of course invertible. I don't think you deserved $0/10$.
Maybe there is some confusion in the input or in the presentation of the result? |
$|b|^{\frac {1}{\log b}}= |a|^{\frac {1}{\log a}}$ implies $|b|=b^{\lambda}$ | Let $c:=|b|^{\frac {1}{\log b}}= |a|^{\frac {1}{\log a}}.$ Then we have
$$
|b|=c^{\log(b)}=e^{\log(c)\log(b)}=b^{\log(c)}=b^{\lambda}.
$$
Hence $|b|=|b|_{\infty}^{\lambda}$, where $|b|_{\infty}$ is the usual absolute value on $\mathbb{Q}$. |
Question about 'strong' assumptions and proving 'strong' result. | As you noted, weak induction trivially implies strong induction, and ordinarily we would therefore say that weak induction is at least as strong as strong induction. However in this special case the naming is probably due to the fact that many times using strong induction yields a shorter proof than weak induction (shorter by the length of the proof of strong induction from weak induction). So in this sense strong induction seems stronger, but completely not for the usual reason. |
Particular sequence in $L^2[0,1]$ that fades at $\infty$ in $L^2$ norm and whose limit is not defined $\forall$ t | Start from the function $\omega(x)=|x|$, defined in $[-1,1]$, and extend it periodically over $\mathbb{R}$. Then set
$$\omega_n = \omega(nx)^n ,\qquad x\in [0,1]$$
Edit: Never mind this previous example, here is a much simpler one to work with. Construct a sequence $\omega_n$ in the following way.
\begin{align*}
\omega_1&=\chi_{[0,1/2)},\qquad \omega_2=\chi_{[1/2,1]}\\
\omega_3&=\chi_{[0,1/3)},\quad \omega_4=\chi_{[1/3,2/3)},\quad \omega_5=\chi_{[2/3,1]}\\
\omega_6&=\chi_{[0,1/4)},
\end{align*}
and so on. Clearly $\|\omega_n\|_2\to 0$, because the sequence $\left\{\omega_n\right\}$ consists of characteristic functions of intervals whose measures converge to $0$. Moreover, for each $x_0\in [0,1]$ and for each $n\in \mathbb{N}$ there is a $k\in \left\{0,\dots,n-1\right\}$ so that
$$x_0\in \left[\frac{k}{n},\frac{k+1}{n}\right) $$
So when the sequence $\left\{\omega_n\right\}$ gets to the term $\omega_{\alpha_n}$ given by the characteristic function of this interval, we will have $\omega_{\alpha_n}(x_0)=1$. On the other hand, from the way the sequence is constructed we see that $\omega_{\alpha_n+1}(x_0)=0$, and therefore we have two subsequences of $\left\{\omega_n\right\}$ which are constantly equal to $1$ and $0$ at $x_0$ respectively. Hence $\lim_{n\to \infty}\omega_n(x_0)$ does not exist. |
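This "typewriter" sequence is easy to reproduce numerically (my own sketch; the sample point $x_0=0.3$ is arbitrary):

```python
from math import sqrt

def typewriter(m_max):
    """Yield the intervals [k/m, (k+1)/m), m = 2..m_max, in the order of the sequence."""
    for m in range(2, m_max + 1):
        for k in range(m):
            yield k / m, (k + 1) / m

x0 = 0.3  # arbitrary sample point
vals = [1 if a <= x0 < b else 0 for a, b in typewriter(60)]
norms = [sqrt(b - a) for a, b in typewriter(60)]
# the L2 norms sqrt(1/m) tend to 0, yet vals keeps taking both values 0 and 1
```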
When does the function $a\cdot \sin(x)+b\cdot \cos(x)-x$ have exactly one real root with multiplicity $1$? | A necessary, but insufficient condition for a solution to exist is the fact that
$x^2+1=a^2+b^2$
which can be obtained by squaring both equations (the original and its derivative) and adding them together.
Most likely, you will need to analyse the equation with the aid of a graphical depiction, in order to determine sufficient conditions for solutions to exist.
Update: Dividing the initial equation by the derivative equation, we obtain:
$\displaystyle \frac{a\sin{x} + b\cos{x}}{a\cos{x}-b\sin{x}} = x$
$\displaystyle \frac{\tan{x}+\frac{b}{a}}{1-\frac{b}{a}\times\tan{x}} = x$
which is equivalent to:
$\displaystyle \tan{ ( x+\tan^{-1}{\frac{b}{a})}} = x$
This definitely requires graphical aid, but it is significantly easier to handle. |
Finding the equation of a line given a line perpendicular to it and the area of a triangle with the axes | Here is the picture, drawn out as requested, to accompany the well-written answer above. |
Solve integral equations | I don't see why $f$ is necessarily unique. Any $f(x,t) = g(t)h(x)$ with $\int_a^b h(x)\, dx = 1$ is a solution. |
Does there exist an English translation of Distler's paper on solving polynomials by radicals? | I've checked around, but can't seem to find a translated version of the paper, be it English or Russian. I did Google it. As a last resort, you can try Google's "translate" option: it rendered a decent translation, though formatting suffers.
Perhaps your best bet would be to contact Andrew Distler directly? |
$J_n(z)=(z/2)^n\frac{1}{\pi(\frac{1}{2})_n}\int_{0}^{\pi}\cos(z\cos\theta)\sin^{2n}\theta d\theta$ | Notice that
\begin{align*}
\sum\limits_{m = 0}^\infty {\frac{{( - 1)^m z^{2m} (1 - t)^{m - 1/2} }}{{(2m)!}}} & = (1 - t)^{ - 1/2} \sum\limits_{m = 0}^\infty {\frac{{( - 1)^m (z\sqrt {1 - t} )^{2m} }}{{(2m)!}}} \\ & = (1 - t)^{ - 1/2} \cos (z\sqrt {1 - t} )
\end{align*}
and perform the substitution $t=\sin^2 \theta$, $0<\theta<\frac{\pi}{2}$. This, together with the definition of the Pochhammer symbol, gives
$$
J_n (z) = \left( {\frac{z}{2}} \right)^n \frac{1}{{\pi \left( {\frac{1}{2}} \right)_n }}2\int_0^{\pi /2} {\cos (z\cos \theta )\sin ^{2n} \theta d\theta } .
$$
Finally,
\begin{align*}
\int_0^{\pi /2} {\cos (z\cos \theta )\sin ^{2n} \theta d\theta } & = \int_0^{\pi /2} {\cos (z\cos (\pi - \theta ))\sin ^{2n} (\pi - \theta )d\theta }
\\ & = \int_{\pi /2}^\pi {\cos (z\cos \varphi )\sin ^{2n} \varphi d\varphi }
\end{align*}
shows that
$$
2\int_0^{\pi /2} {\cos (z\cos \theta )\sin ^{2n} \theta d\theta } = \int_0^\pi {\cos (z\cos \theta )\sin ^{2n} \theta d\theta } .
$$ |
Why is $1-Cx\approx \frac{1}{1+Cx}$? | This is because for $|t|<1$, we have $(1-t)^{-1} = 1+t+t^2+t^3+\cdots$. Then set $t=-Cx$. |
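A tiny numeric illustration (my own sketch, with $t = Cx = 0.01$): the error of the approximation is second order, namely $t^2/(1+t)$.

```python
t = 0.01            # t = Cx, assumed small
approx = 1 - t
exact = 1 / (1 + t)
# discarded terms are t^2 - t^3 + ..., so the error is t^2/(1 + t)
```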
Hint for Complex Analysis Proof of $f = \frac{g'}{g}$ | Note that $g \longmapsto \frac{g'}{g}$ maps products to sums, so you can split $f$ into a holomorphic part and $\frac{n}{z}$, and treat each part separately. |
Why is the given smooth vector field non zero? | The $\frac{\partial}{\partial x_i}$ are linearly independent. So, for it to be zero one must have $x_1=x_2=...=x_{2k-1}=0$, but this point is not on the sphere. |
Are components of complement of a set $S$ always close to $S$ | A nice metric example where $S$ and $K$ are both singletons (similar to example 119 in Counterexamples in Topology) is explained in this answer.
More drastic examples, where every component of $X \setminus S$ is separated
from $S$, can be found by looking for T1 spaces with a dispersion point, such as the one-point compactification of the rationals or the Knaster-Kuratowski fan. In such spaces choosing $S$ to contain only the dispersion point and $K$ any other singleton will do the trick. |
$2019^{2n}$ can be expressed as a sum of ten different positive squares, for every positive integer $n$. | Hint: if $2019^{2m} = a_1^2 +\ldots + a_{10}^2$, then
$2019^{2m+2} = (2019 a_1)^2 + \ldots + (2019 a_{10})^2$. So you really only
need to do it for $m=1$. |
Finding the Maximum Likelihood Estimator of a Median | Consider $s_n=x_1+\cdots+x_n$ and $t_n=\max\{x_k\mid1\leqslant k\leqslant n\}$, then
$$
f(x_1,\ldots,x_n\mid\theta)=\mathrm e^{-s_n}(1-\mathrm e^{-\theta})^{-n}\mathbf 1(\theta\geqslant t_n).
$$
Since $2\mathrm e^{-\lambda}=1+\mathrm e^{-\theta}$, this is also
$$
f(x_1,\ldots,x_n\mid\theta)=\mathrm e^{-s_n}2^{-n}(1-\mathrm e^{-\lambda})^{-n}\mathbf 1(\lambda\geqslant\log2-\log(1+\mathrm e^{-t_n})).
$$
Thus, $f(x_1,\ldots,x_n\mid\theta)$ is indeed maximum when $\lambda=\hat\lambda$, with
$$
\hat\lambda=\log2-\log(1+\mathrm e^{-t_n}).
$$
Note that $\hat\lambda=g(\hat\theta)$ where the function $g$ is defined by $\lambda=g(\theta)$. |
Parametrizing $x^2(x^2+y^2)=4(x-y)^2$ | As you can see from the original equation, $x=y=r=0$ is a point on the curve, so dividing by $r^2\cos^2\theta$ is not allowed there.
Here's the first step after making your substitution:
$$x^2 (x^2 + y^2) = 4(x-y)^2 $$
$$\to\frac{1}{2}r^2\left(r^2 + r^2\cos(2\theta) +8\sin(2\theta) - 8\right) = 0$$ |
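One can sanity-check the polar form against the original equation (my own sketch; $\theta = 0.5$ is an arbitrary angle on the branch where $r^2 \ge 0$):

```python
import math

theta = 0.5  # arbitrary sample angle
# nonzero-r branch of the polar equation: r^2 (1 + cos 2θ) = 8 (1 - sin 2θ)
r = math.sqrt(8 * (1 - math.sin(2 * theta)) / (1 + math.cos(2 * theta)))
x, y = r * math.cos(theta), r * math.sin(theta)
# (x, y) should satisfy the original Cartesian equation
residual = x * x * (x * x + y * y) - 4 * (x - y) ** 2
```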
Tools or Resources for pictures and visualizations | A few suggestions for things that might improve the situation:
(1) Buy the books that are well-illustrated. This will indicate to authors and publishers that illustrations are important. Another very well illustrated one is Trefethen's book on approximation. See also this question.
(2) Learn to use the graphical tools in packages like Matlab and Mathematica. Good for situations where you have objects described by known equations.
(3) Get proficient with some free-hand drawing software package, preferably a 3D one. Good for those (more common) situations where you don't have any equations.
(4) Learn to draw by hand. Personally, I think drawing by hand is best for learning and conceptualizing; using a computer typically interrupts the thought process (for me, anyway). Take a drafting class (if you can find one these days) or an art class. There is an entire chapter explaining how to draw good diagrams in this book by Koenderink . You can see some examples on its cover. Koenderink's advice is especially interesting (to me, at least) because he knows a lot about human vision/perception. Simply making your drawings "accurate" or "correct" is not really the point.
(5) If some lecturer tells you that "pictures have no place in mathematics because they do not constitute proof", walk out of the room. |
Gamma function of negative argument | The relation $\Gamma(-1+\epsilon) = \Gamma(\epsilon)/(-1+\epsilon)$ is true so long as $\epsilon$ is not an integer $\leq 1$ (so that neither $\epsilon$ nor $-1+\epsilon$ is a non-positive integer), since the gamma function is extended to the complex plane minus the non-positive integers by using the relation $\Gamma(z)=\Gamma(z+1)/z$ or by using analytic continuation.
Thus, you can say something about the limiting behaviour of $\Gamma(\epsilon)$ and $\Gamma(-1+\epsilon)$, in that you can say that
$$\lim_{\epsilon\to 0} \frac{\Gamma(-1+\epsilon)}{\Gamma(\epsilon)} = \lim_{\epsilon\to 0} \frac{1}{-1+\epsilon} = -1.$$
Note that the fact that $\Gamma(z)$ is not defined at $-1$ does not affect this, since for the limit, we are only interested in the values of the function close to $-1$.
In other words, $|\Gamma(z)\vert$ tends to infinity "at the same rate" as $z\to 0$ or as $z\to -1$, and similar results could be proved at any negative integer. |
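Since the limit only involves values near the poles, it is easy to check numerically; here is a quick sketch of mine using Python's `math.gamma`, which is defined for negative non-integer arguments:

```python
import math

def gamma_ratio(eps):
    """Ratio Gamma(-1+eps)/Gamma(eps); by the functional equation it equals 1/(-1+eps)."""
    return math.gamma(-1.0 + eps) / math.gamma(eps)

# As eps -> 0 the ratio approaches -1, even though Gamma blows up at both 0 and -1.
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, gamma_ratio(eps))
```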
Evaluating $\int_{0}^{\infty} \frac{\cos {ax}}{\cosh{x}}dx$ | First off, let us notice that the function $\cos (ax) / \cosh (x)$ does not change under $x \mapsto -x$. So,
$$
\int\limits_0^\infty \frac{\cos ax}{\cosh x} \, dx = \frac{1}{2} \int\limits_{-\infty}^\infty \frac{\cos ax} {\cosh x} \, dx.
$$
Next, we take the contour $\gamma_M$ to be the boundary of the rectangle with vertices $-M$, $M$, $M+\pi i$, $-M+\pi i$: line (1) is the segment $[-M, \, M]$ of the real line, the upper line (3) is the same segment translated by $\pi i$, and lines (2) and (4) are the vertical sides.
The integral over line (3) could be rewritten as
$$
\int\limits_{(3)} \frac{\cos ax}{ \cosh x} \, dx = \int\limits_{(1)} \frac{\cos a(x + \pi i)}{\cosh (x + \pi i)} \, dx = (*).
$$
Notice, that $\cosh (x + \pi i) = - \cosh x$ and
$$
\cos a (x + \pi i) = \cos (ax) \cos ( a \pi i) - \sin (ax) \sin ( a \pi i).
$$
Therefore,
$$
(*) = -\cos (a \pi i )\int\limits_{(1)} \frac{\cos a x}{\cosh x} \, dx + \sin (a \pi i) \int\limits_{(1)} \frac{\sin a x}{\cosh x} \, dx.
$$
The last integral clearly equals zero, as it is the integral of an odd function over a symmetric interval.
Integrals over lines (2) and (4) go to zero as $M \to \infty$. Since line (3) is traversed in the negative direction along $\gamma_M$, the residue theorem (applied to the only pole inside the contour, at $z = \frac{\pi i}{2}$) now gives
$$
2 \pi i \, \operatorname*{res}_{z = \pi i/2} \; \frac{\cos a z}{\cosh z} = \lim\limits_{M \to \infty} \int\limits_{\gamma_M} \frac{\cos a z}{\cosh z} \, dz = (1 + \cos (a \pi i)) \int\limits_{-\infty}^\infty \frac{\cos a x}{\cosh x} \, dx
$$ |
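Carrying the computation to the end gives the classical value $\int_0^\infty \frac{\cos ax}{\cosh x}\,dx = \frac{\pi}{2\cosh(\pi a/2)}$. A quick numerical sanity check (my own sketch with plain trapezoid quadrature, not part of the original answer):

```python
import math

def closed_form(a):
    """Classical value pi / (2 cosh(pi a / 2))."""
    return math.pi / (2.0 * math.cosh(math.pi * a / 2.0))

def numeric(a, upper=30.0, n=200_000):
    """Composite trapezoid rule on [0, upper]; the neglected tail is O(e^-upper)."""
    f = lambda x: math.cos(a * x) / math.cosh(x)
    h = upper / n
    s = 0.5 * (f(0.0) + f(upper))
    for k in range(1, n):
        s += f(k * h)
    return s * h
```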
Question on existence of coproducts. | A very good way to proceed would be to answer the second part of the problem first:
How can you describe such a product (coproduct) in more familiar terms?
Here, "more familiar terms" means using the language of partial order rather than the language of categories. Take your definition of (co)product and unfold what it means in the particular case of this category.
Note that the parts of the definitions that say that such-and-such an arrow must be unique turn out to be satisfied automatically because there is no hom-set with more than one element anyway. For the same reason, the parts of the definition that ask for two arrows (with the same beginning and end) to be identical will be satisfied automatically too.
The only parts of the definition that remain are the ones that talk about whether arrows between certain objects exist at all.
Given a set of Eigenvectors - find the Eigenvalues | There are two ways to solve for the eigenvalues.
One way is to directly solve for eigenvalues from the matrix using $\det(A-\lambda I)=0$, you would only have at most two eigenvalues and two sets of eigenvectors, so just solve it straight and you will find $\lambda_1$ and $\lambda_2$
The second way is to use the basic definition of eigenvalues: $$Ax=\lambda x$$
$A=\frac15 \begin{bmatrix}-3&4\\4&3\end{bmatrix}$, $x_1=\begin{bmatrix}2\\-1\end{bmatrix}$So$$Ax_1=\frac15 \begin{bmatrix}-3&4\\4&3\end{bmatrix}\begin{bmatrix}2\\-1\end{bmatrix}=\begin{bmatrix}-2\\1\end{bmatrix}=\lambda_1 x_1$$
Therefore $\lambda_1=-1$.
Similarly, plugging in $x_2$ you can get $\lambda_2$.
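As a sanity check (plain Python; the second eigenvector $x_2$ is not reproduced here, but since $\operatorname{tr} A = 0$ and $\det A = -1$, the characteristic polynomial $\lambda^2 - (\operatorname{tr} A)\lambda + \det A = \lambda^2 - 1$ forces $\lambda_2 = 1$):

```python
# A = (1/5) [[-3, 4], [4, 3]], x1 = (2, -1)
A = [[-3/5, 4/5], [4/5, 3/5]]
x1 = [2.0, -1.0]

# Verify A x1 = -1 * x1 directly from the definition of an eigenvalue.
Ax1 = [A[0][0]*x1[0] + A[0][1]*x1[1],
       A[1][0]*x1[0] + A[1][1]*x1[1]]
lam1 = -1.0
assert all(abs(Ax1[i] - lam1 * x1[i]) < 1e-12 for i in range(2))

# Characteristic polynomial: lambda^2 - (trace) lambda + det = lambda^2 - 1.
trace = A[0][0] + A[1][1]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
```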
CDF of $\max(x_1,x_2)+\max(x_3,x_4)$ where all $x_i$s are iid from $U[a,b]$ | Suppose $u,v$ are i.i.d. $U(0,1)$. Then for any $w\in[0,1]$, $\operatorname{Pr}(\max(u,v)\le w)=w^2$. Hence the density of $W=\max(u,v)$ is given by $f(w)=2w$. Therefore, if $X=(\max(x_1,x_2)-a)/(b-a)$ and $Y=(\max(x_3,x_4)-a)/(b-a)$, the densities of $X$ and $Y$ on $[0,1]$ are $2x$ and $2y$ respectively.
Now let $Z=X+Y=\left[\max(x_1,x_2)+\max(x_3,x_4)-2a\right]/(b-a)$. Then for any $m\in[0,\,2]$, we have
$$
\phantom{=}\operatorname{Pr}\left(Z\le m\right)
=\begin{cases}
\int_0^m \int_0^{m-y} 4xy\, dx dy=\frac{m^4}{6} &\text{ if } m\le1,\\
1-\int_{m-1}^1 \int_{m-y}^1 4xy\, dx dy = 1-\frac16m(m-2)^2(m+4) &\text{ otherwise}.
\end{cases}
$$ |
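A Monte Carlo check of this piecewise CDF for the normalized case $a=0$, $b=1$ (a sketch of mine; the general case just rescales):

```python
import random

def cdf_closed(m):
    """Piecewise CDF of Z derived above, for m in [0, 2]."""
    if m <= 1.0:
        return m**4 / 6.0
    return 1.0 - m * (m - 2.0)**2 * (m + 4.0) / 6.0

def cdf_mc(m, n=200_000, seed=0):
    """Monte Carlo estimate of Pr(max(u1,u2) + max(u3,u4) <= m) for iid U(0,1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        z = max(rng.random(), rng.random()) + max(rng.random(), rng.random())
        if z <= m:
            hits += 1
    return hits / n
```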
Convergence almost surely of Bernoulli r.v. | According to the SLLN,
$$
\frac{1}{n}\sum_{i=1}^n I_i \xrightarrow{a.s.} p,
$$
hence your sequence is
$$
a_n \cdot \frac{T}{a_n},
$$
where $a_n = n$, thus
$$
n \frac{T}{n} \xrightarrow{a.s.} \infty p = \infty.
$$ |
What should a student (with algebraic-geometry minded) study in differential geometry? | To start with the basics, the strangest thing in differential geometry for somebody coming from an algebraic-geometry background is the idea of partition of unity. So this would have to be internalized as an introduction to a DG point of view as opposed to AG. Next, I would suggest performing the following thought experiment: take $\mathbb{CP}^2$ blown up an $k$ points. Now reverse the orientation of this beautiful algebraic variety. Does the resulting manifold make any sense? |
example of a sequence of uniformly continuous functions on a compact domain converging, not uniformly, to a uniformly continuous function | It should be sufficient to take a sequence of continuous (and therefore uniformly continuous) "triangle functions" on $[0, 1]$ where the $n$th function is supported on $[0, \frac{1}{n}]$, but the heights of the triangles do not approach 0 as $n \to \infty$. Then the pointwise limit is 0 everywhere (for $x=0$ because each triangle function has value 0 there, and for $x>0$ because eventually the functions become 0 for such an $x$); and therefore the pointwise limit is continuous. However, because of the condition on the heights of the triangles, the convergence to 0 will not be uniform. |
How to check whether a function $f : \mathbb{R}^2 \to \mathbb{R}$ is differentiable or not | The partial derivatives $f_x$ and $f_y$ of $f(x, y) = \frac{x}{\sqrt{x^2 + y^2}}$ exist and are continuous on $\mathbb{R}^2 \setminus\{(0, 0)\}.$ Thus $f$ is differentiable on $\mathbb{R}^2 \setminus\{(0, 0)\}.$ |
Root test and $\lim\sup$ | The superior limit of a sequence $(x_n)_{n\in\mathbb N}$ which is not bounded above can simply be defined as $\infty$. It will still be the supremum (in $\overline{\mathbb R}$) of all cluster points.
And it is not hard to turn $\overline{\mathbb R}$ into a metric space. You consider the bijection$$\begin{array}{rccc}f\colon&\overline{\mathbb R}&\longrightarrow&[-1,1]\\&x&\mapsto&\begin{cases}\frac x{1+\lvert x\rvert}&\text{ if }x\in\mathbb R\\\pm1&\text{ if }x=\pm\infty\end{cases}\end{array}$$and you define the distance $d$ in $\overline{\mathbb R}$ by $d(x,y)=\bigl\lvert f(x)-f(y)\bigr\rvert$.
Finally, yes, if $\bigl(\lVert x_n\rVert\bigr)_{n\in\mathbb N}$ is unbounded, then you can just say that the series $\sum_{n=1}^\infty x_n$ diverges and that's it. |
Actuarial Studies: what is the formula for compound discount with simple discount over the final fractional period? | Let $d$ be the discount rate, $t$ the total time of investment, and $k=\lfloor t\rfloor$ the number of whole periods. First we calculate the compound discount over time $k$, then multiply that by the simple-discount factor for the remaining time $t-k$. Thus your accumulation function is:
$ a(t) = (1 -d)^{-k} (1-d(t-k)) $
Similarly, the accumulation function for compound interest with interest rate $i$ is:
$ a(t) = (1 + i)^k (1 + i(t-k)) $
Bonus points if you don't forget to take $ (a(t))^{-1} $ for present value in your question like I did. |
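A small sketch of both accumulation functions as stated (function names are mine; note that the fractional factor uses $t-k$ in each case):

```python
import math

def accum_discount(d, t):
    """Compound discount over the k = floor(t) whole periods,
    simple discount over the fractional remainder t - k."""
    k = math.floor(t)
    return (1.0 - d) ** (-k) * (1.0 - d * (t - k))

def accum_interest(i, t):
    """Compound-interest analogue: compound over k whole periods,
    simple interest over the fractional remainder."""
    k = math.floor(t)
    return (1.0 + i) ** k * (1.0 + i * (t - k))
```

At integer times both reduce to the pure compound formulas, e.g. `accum_discount(d, 3.0) == (1 - d)**-3`.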
If the matrix $A \in K^{m \times n}$ has $m$ pivots and the matrix $B \in K^{n \times r}$ has $n$ pivots, does the matrix $AB$ has $m$ pivots? | A matrix $A$ has the same number of pivots as rows (or perhaps more simply, has a pivot in every row) if and only if the equation $Ax=y$ has a solution for every $y$. To see that this is true, just think about how you use echelon forms to solve linear equations.
So, suppose this is true for $A$ and also for $B$. We need to prove that it is true for $AB$. For any $y$ there exists $z$ such that $Az=y$; then there exists $x$ such that $Bx=z$; and so $ABx=y$. Thus $ABx=y$ has a solution for every $y$, and so $AB$ has the same number of pivots as rows. |
Kernel of homomorphism $(J:I)\rightarrow \text{Hom}_R(R/I, R/J)$ | $xr\in J$ for all $r\in R$, in particular for $r=1$, so $x\in J$. |
If $C$ is compact in $X$, is $C \cap A$ compact in $A$ (with $A \subset X$, and the relative topology)? | HINT: if $C\supseteq A$, then $C\cap A=A$. Is every subset of a compact set also compact? |
A couple of inequality / similarity that don't make sense to me. | The first one is easy. If you compare the two sides, it's really just asserting that
$${n\choose t} \leq\frac{n^t}{t!}$$
But that follows quickly from the fact that
$${n\choose t}=\frac{n!}{t!(n-t)!}=\frac{\overbrace{n(n-1)(n-2)\cdots(n-t+1)}^{t{\textrm{ factors}}}}{t!}$$
since each of the factors in the last numerator is no more than $n$.
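Since both sides involve only integers, the inequality can be cleared of fractions as ${n\choose t}\, t! \leq n^t$ and checked exactly with Python's integer arithmetic (a quick sketch):

```python
from math import comb, factorial

# Exact integer check of C(n, t) <= n^t / t!, i.e. C(n, t) * t! <= n^t.
for n in range(1, 40):
    for t in range(0, n + 1):
        assert comb(n, t) * factorial(t) <= n**t
```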
Calculating velocity of an object moving at 12m/s north, with 5m/s wind from the west | If you think about the problem geometrically/trigonometrically, you can immediately calculate the solution: the two velocities are perpendicular, so the resultant speed is $\sqrt{12^2+5^2}=13$ m/s, directed at $\arctan(5/12)\approx 22.6^\circ$ east of north.
What conditions must satisfy the source term of an ODE given some boundary conditions? | The general question for arbitrary ODE is quite difficult. But in this specific case, there is some estimate of $S(x)$ that you can obtain.
Note that $\lim_{x\rightarrow \infty} u(x) = 0$, thus for large $x$, your equation is approximated by
$$x^2 \frac{d}{dx}\left[\left(1 - \frac 2x\right)\frac{du(x)}{dx}\right] = S(x)$$
Which is equivalent to
$$\frac{d}{dx}\left[\left(1 - \frac 2x\right)\frac{du(x)}{dx}\right] = \frac{S(x)}{x^2}$$
Since $\lim_{x\rightarrow \infty} u(x) = 0$, the limit of its derivative must behave similarly, so $\lim_{x\rightarrow \infty} u'(x) = 0$, by Barbalat's lemma. Consequently, the LHS is $0$. Thus,
$$\lim_{x\rightarrow\infty} \frac{S(x)}{x^2} = 0$$
This is true only if $S(x)$ has order smaller than $x^2$, e.g. $S(x) = x$, etc. I can't see precisely how Mathematica comes up with $S(x) \sim \frac{1}{x^3}$ for large $x$, but that does satisfy the above condition.
A very informal analysis does give a better asymptotic estimation, though. Since $\lim_{x\rightarrow\infty} u(x) = 0$, suppose $u(x)$ is of order $\frac{1}{x}$. This means for large $x$, $u'(x)$ is of order $\frac{1}{x^2}$. "Patching" things together you have something like this
$$x^2 \frac{d}{dx}\left[O\left(\frac{1}{x}\right) O\left(\frac{1}{x^2}\right)\right] - O\left(\frac{1}{x}\right) = S(x)$$
This is, loosely speaking, equivalent to
$$O\left(x^2\right) O\left(\frac{1}{x^4}\right) - O\left(\frac{1}{x}\right) = S(x)$$
Or
$$O\left(\frac{1}{x^2}\right) - O\left(\frac{1}{x}\right) = S(x)$$
Since $S(x)$ drives the equation, it has to go to $0$ faster than any term on the left hand side, thus $S(x)$ is of order $\frac{1}{x^3}$ or lower. If it is of a lower order than $\frac{1}{x^3}$, then we can expect $u(x)$ to be of a lower order. But since $u(x) \sim \frac{1}{x}$ is a perfectly fine assumption (without solving the equation, and this is a very informal analysis), we conclude that
$$S(x) \sim \frac{1}{x^3}$$
Edit: with the new information, then I think my very informal analysis makes more sense (since you were simply trying out different solution for $S(x)$). |
Second order differential equation and its related power series | hint
Write your equation as
$$(1-x)y''+y'×(1-x)y=0$$
with
$$y=a_0+a_1x+a_2x^2+a_3x^3+\cdots+a_nx^n+\cdots$$
$$y'=a_1+2a_2x+\cdots+(n+1)a_{n+1}x^n+\cdots$$
$$y''=2a_2+2\cdot3\,a_3x+\cdots+(n+1)(n+2)a_{n+2}x^n+\cdots$$
Evaluating the integral of $(\csc(x))^5$ with the reduction formula of $(\sin(x))^n$ | Use the reduction formula
$\int \csc ^{m}(x) d x=-\frac{\cos (x) \csc ^{m-1}(x)}{m-1}+\frac{m-2}{m-1} \int \csc ^{-2+m}(x) d x$
When $m=5$, you get
$-\frac{1}{4} \cot (x) \csc ^{3}(x)+\frac{3}{4} \int \csc ^{3}(x) d x$
Using the formula for $m=3$ to carry out the remaining integral, you get
$-\frac{3}{8} \cot (x) \csc (x)-\frac{1}{4} \cot (x) \csc ^{3}(x)+\frac{3}{8} \int \csc (x) d x$
\begin{aligned}
&\text { Using the fact that integral of } \csc (x) \text { is }-\log (\cot (x)+\csc (x))\\
&-\frac{1}{4} \cot (x) \csc ^{3}(x)-\frac{3}{8} \cot (x) \csc (x)-\frac{3}{8} \log (\cot (x)+\csc (x))+\text { constant }
\end{aligned}
Which can be simplified to
$$-\frac{1}{64} \csc ^4\left(\frac{x}{2}\right)-\frac{3}{32} \csc ^2\left(\frac{x}{2}\right)+\frac{1}{64} \sec ^4\left(\frac{x}{2}\right)+\frac{3}{32} \sec ^2\left(\frac{x}{2}\right)+\frac{3}{8} \log \left(\sin \left(\frac{x}{2}\right)\right)-\frac{3}{8} \log \left(\cos \left(\frac{x}{2}\right)\right)+constant$$ |
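The closed form can be sanity-checked by differentiating it numerically (a sketch of mine; `F` is the antiderivative found above, constant omitted):

```python
import math

def F(x):
    """Antiderivative of csc(x)^5 obtained above (constant omitted)."""
    cot, csc = math.cos(x) / math.sin(x), 1.0 / math.sin(x)
    return (-0.25 * cot * csc**3
            - 0.375 * cot * csc
            - 0.375 * math.log(cot + csc))

def check(x, h=1e-5):
    """Central difference of F compared against csc(x)^5."""
    deriv = (F(x + h) - F(x - h)) / (2.0 * h)
    return deriv, (1.0 / math.sin(x)) ** 5
```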
Wikipedia wrong? Convergence of finite difference | You are right. In the first case, for
$$\frac{f(x+h)-f(x)}{h} - f'(x)$$
you only have an $o(1)$ bound. A function like $f(x) = x\cdot\lvert x\rvert^\alpha$ for $0 < \alpha < 1$ is continuously differentiable on all of $\mathbb{R}$, but at $0$ the difference quotient converges only of the order $\lvert h\rvert^{\alpha}$ to the derivative.
In the second case, choosing $1 < \alpha < 2$ gives a twice continuously differentiable function with
$$\frac{f\left(x + \tfrac{h}{2}\right) - f\left(x - \tfrac{h}{2}\right)}{h} - f'(x) \in \Theta(h^{\alpha}).$$
The order of convergence under the assumption of differentiability resp. twice differentiability is
$$\frac{f(x+h)-f(x)}{h} - f'(x) \in o(1)$$
resp.
$$\frac{f\left(x + \tfrac{h}{2}\right) - f\left(x - \tfrac{h}{2}\right)}{h} - f'(x) \in o(h),$$
nothing better is to be had without stronger assumptions. |
Integrate $e^{-x^4+x^2}$ | In general, $~\displaystyle\int_0^\infty\exp\Big(-\sqrt[N]x\Big)~dx~=~N!~,~$ so even a relatively simple looking expression
like $~\displaystyle\int_0^\infty\exp\Big(-x^4\Big)~dx~=~\Big(\tfrac14\Big)!~=~\Gamma\bigg(\frac54\bigg)~$ cannot be expressed in terms of elementary
functions, let alone a slightly more complex one, like $~\displaystyle\int_0^\infty\exp\Big(-x^4+ax^2\Big)~dx,~$ for whose
evaluation even more obscure special functions are required. A first step, in this case, would
be to employ the parity of the integrand, by rewriting $~\displaystyle\int_{-\infty}^\infty~=~2\displaystyle\int_0^\infty$
Basically, just like $~\displaystyle\int_0^\infty\exp\Big(-\sqrt[n]x\Big)~dx~$ cannot be expressed in terms of elementary
functions, but requires the creation of a completely new function, called factorial, to
help express its value, yielding the more general result $~\displaystyle\int_0^\infty x^{m-1}\cdot\exp\Big(-\sqrt[n]x\Big)~dx$
$=~n~\Gamma(mn),~$ which, by replacing the lower limit with an arbitrary value becomes
inexpressible even in terms of the latter, thus requiring the creation of yet another
special function to help parse its value, finally yielding $~\displaystyle\int_\ell^\infty x^{m-1}\cdot\exp\Big(-\sqrt[n]x\Big)~dx$
$=~n~\Gamma\Big(mn,~\sqrt[n]\ell\Big),~$ so this latter expression also becomes equally useless when asked
to evaluate $~\displaystyle\int_\ell^\infty(x+u)^{m-1}\cdot\exp\Big(-\sqrt[n]x\Big)~dx,~$ which, for $~m=n=\dfrac12$ and $~u~=-\ell$
$=~\dfrac a2~,~$ becomes our original integral. |
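For instance, the statement $\int_0^\infty \exp\big(-x^4\big)\,dx = \Gamma\big(\tfrac54\big)$ above is easy to confirm numerically (a quick sketch using plain trapezoid quadrature):

```python
import math

def quad(f, a, b, n=200_000):
    """Plain composite trapezoid rule on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for k in range(1, n):
        s += f(a + k * h)
    return s * h

# The integrand decays like e^{-x^4}, so the tail beyond 4 is ~e^{-256}.
approx = quad(lambda x: math.exp(-x**4), 0.0, 4.0)
exact = math.gamma(1.25)
```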
About the limit of the coefficient ratio for a power series over complex numbers | Hint: Assume that you have a simple pole at $z = z_0$, where $|z_0| = 1$ and try to prove it. In particular, take $f(z) = \frac{g(z)}{z-z_0}$ where $g(z)$ is holomorphic on $\Omega$. Prove the result for this case. (Expand $\frac{1}{z-z_0}$ about $z=0$ and do some manipulations). Now the same idea can be extended for higher order poles.
EDIT: For a simple pole, $f(z) = \frac{g(z)}{z-z_0} = \displaystyle \sum_{n=0}^{\infty} a_n z^n$. Since $g(z)$ is holomorphic, $g(z) = \displaystyle \sum_{n=0}^{\infty} b_n z^n$. So $\displaystyle \sum_{n=0}^{\infty} b_n z^n = (z-z_0) \displaystyle \sum_{n=0}^{\infty} a_n z^n \Rightarrow b_{n+1} = a_n - z_0 a_{n+1}$.
Now what can we say about $\displaystyle \lim_{n \rightarrow \infty} b_n$ and $\displaystyle \lim_{n \rightarrow \infty} a_n$?
(Note: $g(z)$ holomorphic on $\Omega$ whereas $f(z)$ is holomorphic except at $z_0$, a point on the unit disc).
This same idea will work for higher order poles as well. |
Stokes theorem to get $\oint \vec{F}d\vec{R}$ | Surface Integral
You would need to calculate
$$
I = \int\limits_A \operatorname{curl} F \cdot dA =
\int\limits_A (0, 0, 2x) \cdot dA =
2 \int\limits_A x \, dA_z
$$
where $A$ is a surface with boundary $\partial A = C$.
The open task is to choose some feasible $A$ which allows for a simple calculation of the last term.
So what is $C$? We had
$$
r(t) = (1+\cos t, 1+\sin t, 1-\cos t-\sin t)^t
$$
this gives
$$
r(t) \cdot (1,1,1)^t = 3 \iff
r(t) \cdot n = \sqrt{3} \quad n = (1,1,1)^t/\sqrt{3}
$$
So the $r(t)$ endpoints form a curve $C$ which lies in a plane $E$ with normal vector $n$ and origin $\sqrt{3} n = (1,1,1)^t$:
$$
E = \{ x \mid \left(x - \sqrt{3} n\right) \cdot n = 0 \}
$$
Let $A$ be the part of $E$ that is inside $C$. From the observation that the projection of $A$ onto the $x$-$y$-plane is a circle of radius $1$ with center $(1,1)$ we assume the parametrization:
$$
A(r, t) = (1 + r \cos t, 1 + r \sin t, 1 - r \cos t - r \sin t)^t
\quad (r \in [0, 1], t \in [ 0, 2\pi ])
$$
with the partial derivatives along the parameters
$$
\partial_r A = (\cos t, \sin t, - \cos t - \sin t)^t \\
\partial_t A = (-r \sin t, r \cos t, r \sin t - r \cos t)^t
$$
then we get
$$
(\partial_r A \times \partial_t A)_x =
r (\sin t)^2 - r \cos t \sin t + r (\cos t)^2 + r \cos t \sin t =
r \\
(\partial_r A \times \partial_t A)_y =
r \cos t \sin t + r (\sin t)^2 - r \cos t \sin t + r (\cos t)^2 =
r \\
(\partial_r A \times \partial_t A)_z =
r (\cos t)^2 + r (\sin t)^2 = r
$$
this gives
$$
dA
= \partial_r A \times \partial_t A \, dr \, dt
= r \, (1,1,1)^t \, dr \, dt
= n \, \sqrt{3} \, dA^{xy}
$$
So we have
\begin{align}
I
&= 2 \int\limits_A (1 + r \cos t) r dr dt \\
&= 2 \int\limits_0^1 dr \, \int\limits_0^{2\pi} dt \, (r + r^2 \cos t) \\
&= 2 \int\limits_0^1 dr \, 2\pi r \\
&= 2 \pi
\end{align}
Line Integral
With the help of a computer algebra system (link) we get
\begin{align}
I
&= \int\limits_C F \cdot dr \\
&= \int\limits_C (ye^x, x^2+e^x, z^2 e^z) \cdot dr \\
&= \int\limits_0^{2\pi} (ye^x, x^2+e^x, z^2 e^z) \cdot \dot{r} dt \\
&= \int\limits_0^{2\pi} (ye^x, x^2+e^x, z^2 e^z) \cdot (-\sin t, \cos t, \sin t - \cos t) dt \\
&= \int\limits_0^{2\pi} \left(-(1+\sin(t)) e^{1+\cos(t)} \sin(t) + \cos(t) ((1+\cos(t))^2 + e^{1+\cos(t)}) + (\sin(t) - \cos(t)) (1-\cos(t) - \sin( t))^2 e^{1-\cos(t) - \sin(t)}\right) dt \\
&= \left[ t+\frac{7}{4}\sin(t)+\frac{1}{12} \sin(3 t)+\sin(t) \cos(t)+(\sin(t)+1) e^{\cos(t)+1}+(\sin(2 t)+2) e^{-\sin(t)-\cos(t)+1}\right]_0^{2\pi} \\
&= 2 \pi
\end{align} |
Meaning of the term $1$-complemented | Your guess is correct. One says that $F$ is $\lambda$-complemented if there is a projection onto $F$ with norm $\le \lambda$. If $\lambda=1$, this is a $1$-complemented subspace.
Example of usage: for every normed space $X$, the space $X^*$ is $1$-complemented in $X^{***}$. |
Use of the concept of subgroup vs field extension | My answer must be partial and idiosyncratic. People may, should, and I hope will, object to my opinions. And correct my misstatement of facts too!
When we look at the smallest possible group, namely the trivial group, to ask for extensions is to ask about all groups! Not an interesting viewpoint, I think. Even when we look at a group like $C_2$, the cyclic group of order $2$, this group is contained in (more exactly, has an injective homomorphism into) every finite group of even order. Even if you demand that $C_2$ should be a normal subgroup of the extended group, you can always look at $C_2\times G$ for any group at all, and many others, containing the group of order $2$.
On the other hand, when we look at a smallest possible field, it’s definitely an interesting question to ask what extensions it has. If your “smallest possible” is $\mathbb F_p=\mathbb Z/(p)$, it’s interesting that there’s only one extension of each finite degree, and these all are normal over $\mathbb F_p$ with cyclic Galois group. If your “smallest” is $\mathbb Q$, then there are extensions of all possible finite degrees, some normal, some not, and yet we don’t yet know whether among the normal extensions, all finite groups occur as Galois group. Interesting facts and questions all.
For substructures, when we look at a group, we can ask about all sorts of properties: abelian or not, soluble or not, nilpotent, etc. And of course we have techniques, extremely well developed. In a way, by looking at the substructures rather than the overlying structures, we have restricted our investigations to more manageable questions.
For substructures of fields, there's the fact that some fields that are important outside the narrow subject of field theory have no proper subfields, and even for fields that aren't prime fields, such as $\mathbb C$, there certainly are subfields, but we deal with them in most cases as extensions of $\mathbb Q$ not subfields of $\mathbb C$. A notable exception is the theorem that the only proper subfields $K$ with $[\mathbb C\colon K]<\infty$ have this degree equal to $2$.
The split is not absolute. There are ways of studying situations in which, given a group $G$, we can make statements about groups $X$ that have $G$ as a normal subgroup. Similarly, there are statements about subfields of a given field. |
Show that $\sum_{k=0}^{\infty} \frac{k^2+3k+2}{2} z^k = \frac{1}{(1-z)^3}$, without using differentiation | Hint: consider the sum of an infinite geometric series and differentiate it twice.
Antiderivative of discontinuous function | There is no such function, because by Darboux's theorem (cf. Wikipedia), every derivative has to fulfill the intermediate value theorem, but $f$ does not. |
At what point does the normal line intersect the curve a second time? | Correcting some algebra; I will look into the analytic geometry when I have time.
$$C_{norm}: y= \frac{21}{10}-\frac{x}{10}$$
Then you have $$\frac{21}{10}-\frac{x}{10}=-5+4x+3x^2 \iff -3x^2-\frac{41x}{10}+\frac{71}{10}=0 \iff -\frac{1}{10} ((x-1)(71+30x))=0 \iff (x-1)(71+30x)=0 \iff x=1 \; \mathrm{or} \; x=-\frac{71}{30}$$ |
How to write elements of uncountable products of a group? | The product $H^I$ is the set of functions $I \to H$. So, suppose we have an element in this product that is the identity everywhere except on one place. Say the value is $h\neq id$ on place $i\in I$. Then we can write this element which I call $f$ as
$$f(j)= \begin{cases} id \quad j\neq i\\ h \quad j=i\end{cases}$$ |
Interior of union of three sets | Let $D_1=\{(x,y)\mid \|(x,y)-(-2,0)\|\leq1\}$ and $D_2=\{(x,y)\mid \|(x,y)-(2,0)\|\leq1\}$ (closed balls) and $C=[-1,1]\times\{0\}$. Let $U_1=\{(x,y)\mid \|(x,y)-(-2,0)\|<1\}$ and $U_2=\{(x,y)\mid \|(x,y)-(2,0)\|<1\}$. Let $X=D_1\cup D_2\cup C$. We need to show that $\overset{\circ}{X}=U_1\cup U_2$.
Let $x\in X$. Then $x\in D_1$ or $x\in D_2$ or $x\in C$. If $x\in D_1$ then it is either in $U_1$ or in $D_1\setminus U_1$. If $x$ is in $U_1$, let $U=U_1$; $U$ is open and we have $x\in U\subseteq X$, thus $x\in\overset{\circ}{X}$. If $x$ is in $D_1\setminus U_1$, then write $x=(a,b)$ and suppose $b\geq0$. Then $(a,b+1/n)$ is not in $X$ but $(a,b+1/n)\rightarrow(a,b)$, so $(a,b)\not\in \overset{\circ}{X}$. Similarly, if $b<0$ then $(a,b-1/n)\rightarrow(a,b)$. Thus the points of $D_1$ that are in $\overset{\circ}{X}$ are exactly the points of $U_1$. By an analogous argument the points of $D_2$ that are in $\overset{\circ}{X}$ are exactly the points of $U_2$.
Now suppose $(x,0)\in C$. Then $(x,1/n)\rightarrow (x,0)$ and each $(x,1/n)$ lies in $X^c$, so $(x,0)\not\in \overset{\circ}{X}$. Thus $\overset{\circ}{X}=U_1\cup U_2$.
Could you suggest me that can I prove " For every $x$ $\in$ $[\frac{\pi}{2},\pi]$, $\sin(x)-\cos(x) $ $\geq$ $1$ " like this? | You can directly prove this:
Let $f(x)=\sin x-\cos x$ where $D_f=[\frac{\pi}{2},\pi]$
$$f'(x)=\cos x +\sin x$$
Now if $f'(x_0)=0$ for some $x_0$, then:
$$\cos x_0+\sin x_0=0\Rightarrow \cos x_0=-\sin x_0$$
Now $f(x_0)=\sin x_0-\cos x_0=2\sin x_0$
Also $x_0\in D_f$
Thus the only $x_0\in D_f$ such that $\cos x_0=-\sin x_0$ is $x_0=\dfrac{3\pi}{4}$
$$\therefore f(x_0)=\sqrt{2}$$
Now, since the only interior critical point is a maximum, the minimum value of the function must be attained at one of the end points.
$$f(\frac{\pi}{2})=f(\pi)=1$$
$\therefore f(x)\in [1,\sqrt{2}]$ which means that $\sin x-\cos x\geq 1 \space \forall\ x\in [\frac{\pi}{2}, \pi]$ |
Show $V=\ker(f(T))\oplus \ker(g(T))$ if $T$ has characteristic polynomial $f(t)g(t)$. | Since $f,g$ are coprime, $uf+vg=1$ for some $u,v$. Hence if $x\in V$ then $x=u(T)f(T)x+v(T)g(T)x$ and $f(T)x\in {\rm Ker}g(T),g(T)x\in {\rm Ker}f(T) $ etc. |
Limit of $(1-2^{-x})^x$ | \begin{align*}
\lim_{x\to\infty} (1-2^{-x})^x&=\lim_{x\to\infty} e^{\ln(1-2^{-x})^x}\\
&=\lim_{x\to\infty} e^{x\ln(1-2^{-x})}\\
&=e^{\lim_{x\to\infty} x\ln(1-2^{-x})}\\
\end{align*}
now lets just focus on the exponent...
\begin{align*}
\lim_{x\to\infty} x\ln(1-2^{-x})&=\lim_{x\to\infty} \frac{\ln(1-2^{-x})}{\frac{1}{x}} \to \frac{0}{0} \tag{then by L'Hôpital...}\\
&=\lim_{x\to\infty} \frac{\frac{1}{1-2^{-x}}(1-2^{-x})^\prime}{\frac{-1}{x^2}} \\
&=\lim_{x\to\infty} \frac{\frac{2^{-x}\ln 2}{1-2^{-x}}}{\frac{-1}{x^2}} \\
&=\lim_{x\to\infty} \frac{ x^2\ln 2}{1-2^x} \to 0\tag{after applying L'Hôpital twice more} \\
\end{align*}
so the limit is $$e^0=1$$ |
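Numerically the convergence is quite fast, since the exponent $x\ln(1-2^{-x})\approx -x\,2^{-x}$ decays exponentially (a quick sketch):

```python
def f(x):
    """The expression (1 - 2^{-x})^x whose limit is computed above."""
    return (1.0 - 2.0 ** (-x)) ** x

# The values climb toward 1 as x grows.
for x in (10.0, 30.0, 50.0):
    print(x, f(x))
```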
Which function is larger as $n$ gets very large? | We work with logarithms to base $2$. So $\log f(n) = 2^{2^{n}}\log 2 = 2^{2^{n}}$.
Repeat: $\log \log f(n) = 2^n \log 2=2^n$.
But $\log g(n) = 256^n \log 256 = 8 \cdot 256^n$ so $\log \log g(n) = \log 8 + n \log 256 = 8n + 3$.
Since eventually it's obvious we have $2^n > 8n + 3$ (if not, use induction) then $\log \log f(n) > \log \log g(n)$ and since $\log$ is strictly increasing we have $f(n) > g(n)$ for sufficiently large $n$.
Alternatively (well, really the same thing written differently), note that $256 = 2^8$ so $$g(n) = \left(2^{8}\right)^{2^{8n}}=2^{2^{8n+3}} < 2^{2^{2^n}}$$ when $2^n > 8n+3.$ |
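The identity $g(n)=2^{2^{8n+3}}$ and the crossover point of $2^n$ versus $8n+3$ can be verified exactly with Python's arbitrary-precision integers (a quick sketch; comparing $f$ and $g$ directly is hopeless, since even $f(6)=2^{2^{64}}$ has astronomically many digits):

```python
# g(n) = 256^(256^n) = 2^(2^(8n+3)) because 256 = 2^8; check exactly at n = 1:
assert 256 ** (256 ** 1) == 2 ** (2 ** (8 * 1 + 3))

# f(n) = 2^(2^(2^n)) dominates once 2^n > 8n + 3, which happens exactly from n = 6 on:
assert all((2 ** n > 8 * n + 3) == (n >= 6) for n in range(1, 60))
```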
$xz_x-yz_y=z\left(x,y\right)\text{ for }y=1,\:z=3x$ - Is my solution right? | From
$$
F(C_1,C_2) = 0,\ \ \exists f\ |\ C_2 = f(C_1)
$$
now
$$
\frac xz = f(xy)\Rightarrow z = x g(xy)
$$
and with the boundary conditions
$$
3x = x g(x)\Rightarrow g(x) = 3\Rightarrow z = 3x
$$ |
Probability question - Independent events - find probability of $A$ given $B$ | By the law of total probability,
$$
P(A)=P(A\cap B)+P(A\cap B^c).
$$
Using the independence of $A$ and $B$ and de Morgan's laws,
$$
P(A)=P(A)P(B)+1-P(A^c\cup B).
$$
Hence,
$$
P(A)=0.44P(A)+1-0.74
$$
and $P(A)=0.26/0.56=13/28$. |
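The arithmetic can be replayed in exact rational arithmetic (the values $P(B)=0.44$ and $P(A^c\cup B)=0.74$ are those used in the computation above):

```python
from fractions import Fraction

pB = Fraction(44, 100)      # P(B)
pAcB = Fraction(74, 100)    # P(A^c ∪ B)

# P(A) = P(A)P(B) + 1 - P(A^c ∪ B)  =>  P(A)(1 - P(B)) = 1 - P(A^c ∪ B)
pA = (1 - pAcB) / (1 - pB)
```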
$|f(x)| \leq |g(x)|$ implies $f$ is differentiable at $0$? | Suppose $f,g: \mathbb{R} \to \mathbb{R}.$
If $g$ is differentiable at a fixed $x$ and both $g(x) = g'(x) =0$ then having $|f| \leqslant |g|$ will imply differentiability of $f$ at this $x$ as well. In particular $f'(x) = 0$.
You basically already proved this. Note that $|f(x)| \leqslant |g(x)| = 0 \Longrightarrow f(x) = 0$, whence
\begin{align*} \left\lvert \frac{f(x+h)-f(x)}{h} - 0 \right\rvert &= \frac{|f(x+h)|}{|h|} \\[10pt] &\leqslant \frac{|g(x+h)|}{|h|} \\[10pt] &= \left\lvert \frac{g(x+h) -g(x)}{h} -
g'(x) \right\rvert \xrightarrow{h \to 0} 0. \end{align*}
I don't think too much can be said if $g(x),g'(x)$ aren't both zero. For example, the constant function $g \equiv 1$ is infinitely differentiable on the entire real line ($g'(x) = 0$ everywhere as well), but although the map
$$f(x) = \begin{cases} 0, & x \in \mathbb{Q} \\ 1,& x \notin \mathbb{Q} \end{cases}$$
satisfies, $|f| \leqslant |g|$, it is not true that $f$ is differentiable anywhere. It is not even continuous anywhere. |
Find all $n$ such that $n/d(n) = p$, a prime, where $d(n)$ is the number of positive divisors of $n$ | Observation 1. It is easy to prove (e.g. by induction) that:
$$p^n \geq n+1, \forall p\geq2, \forall n\geq 1$$
Thus
$$p^r\cdot \color{blue}{p_1^{r_1}\cdot ... \cdot p_k^{r_k}}=
p(r+1)\cdot \color{blue}{(r_1+1)...(r_k+1)}\geq
p^r \cdot \color{blue}{(r_1+1)...(r_k+1)}$$
or
$$p(r+1)\geq p^r \iff r+1\geq p^{r-1}$$
These are the only $(r,p)$ combinations possible: $(1,2), (2,2), (3,2)$, then $(1,3),(2,3)$, and finally $(1,p)$ for every prime $p>3$.
Observation 2. Let's check the following case $(1,p), \forall p>3, p$ - prime (i.e. $r=1$).
$$p\cdot \color{blue}{p_1^{r_1}\cdot ... \cdot p_k^{r_k}}=
p\cdot 2\cdot \color{blue}{(r_1+1)...(r_k+1)} \iff \\
p_1^{r_1}\cdot ... \cdot p_k^{r_k}=
2\cdot (r_1+1)...(r_k+1) \iff ...$$
or one of the $p_i=2$, for simplicity let's say $p_1=2$.
$$... \iff 2^{r_1-1}\cdot \color{blue}{p_2^{r_2} ... \cdot p_k^{r_k}}=
(r_1+1)\color{blue}{(r_2+1)...(r_k+1)}\geq
2^{r_1-1}\cdot \color{blue}{(r_2+1)...(r_k+1)}$$
or again
$$(r_1+1)\geq 2^{r_1-1}$$
or $r_1 \in \{1,2,3\}$.
This reduces the problem to the following cases: $(1,2), (2,2), (3,2)$, then $(1,3),(2,3)$.
Distinguishable Objects in a Circular Arrangement | Since the chairs are distinguishable, cyclic shifts very well may yield different subsets. For instance, you want to treat $\{1,2,3\}$ and $\{2,3,4\}$ as distinct subsets of $\{1,2,3,4,5,6,7,8,9,10\}$. |
Show that $(\forall x)\alpha(x) \lor (\forall x)\beta(x)\rightarrow (\forall x)(\alpha(x)\lor \beta(x)) $ | See Prenex Normal Form for justification:
\begin{align}
& (\forall x)\alpha(x) \lor (\forall x)\beta(x) \\
\iff & (\forall x)\alpha(x) \lor (\forall y)\beta(y) \\
\iff & (\forall y)((\forall x)\alpha(x) \lor \beta(y)) \\
\iff & (\forall x)(\forall y)(\alpha(x) \lor \beta(y))
\end{align}
In the last line if one takes the special case $x = y$, it implies the required formula. |
N marbles puzzle: find the heaviest among them. | Each weighing can reduce the number of possibilities to $1/3$ of that from before, using the same argument as in the example you linked. If you have $3^n$ marbles, then you need at least $n$ weighings to find the heavier marble. If you have $3^n < N \leq 3^{n+1}$ marbles you should need $n+1$ weighings--since $N$ is not a power of three, some weighings are not as 'efficient' as possible. (i.e. don't eliminate exactly $2/3$ of possibilities since possibilities are no longer divisible by 3. In the worst case scenario, $\lceil N/3 \rceil$ possibilities remain after an 'inefficient' weighing.) Thus the number of weighings needed is
$$\lceil \log_3(N) \rceil $$ |
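Since floating-point logarithms can misround at exact powers of $3$, the count $\lceil\log_3 N\rceil$ is best computed with integer arithmetic (a sketch):

```python
def weighings(n_marbles):
    """Smallest w with 3^w >= n_marbles, i.e. ceil(log3 N), in exact integer arithmetic."""
    w, capacity = 0, 1
    while capacity < n_marbles:
        capacity *= 3
        w += 1
    return w
```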
prove if $A\subset M$ is open and $B\subset A$, then $B$ is open in $A$ iff $B$ is open in $M$. | From $G_M \cap A=B$: Let $x\in B$. Since $x\in A$ and $A$ is open in $M$, there exists $r_1>0$ s.t. $B_M(x;r_1)\subset A$. Since $x \in G_M$ and $G_M$ is open in $M$, there exists $r_2>0$ s.t. $B_M(x;r_2) \subset G_M$. Choose $r=\min\{r_1,r_2\}$. Then $B_M(x;r)\subset G_M$ and $B_M(x;r)\subset A$ so $B_M(x;r)\subset B$. This shows $B$ is open in $M$.
For the other direction, since $B\subset A$, we may write $B=B\cap A$. Since $B$ is open in $M$, we see that $B$ is open in $A$. |
Uncountable, algebraically independent subset of $\mathbb{C}$? | Order the algebraically independent subsets of $\mathbb{C}$ by inclusion. The union of a chain of algebraically independent subsets of $\mathbb{C}$ is algebraically independent. Thus by Zorn's Lemma there is a maximal algebraically independent subset $I$ of $\mathbb{C}$.
The set $I$ cannot be countable, since the algebraic closure of a countable subset of $\mathbb{C}$ is countable.
One can give an alternate proof by well-ordering the reals as $r_{\alpha}$, where $\alpha$ ranges over the ordinals $\lt c$. Let $i_0$ be the first transcendental under this ordering. For any ordinal $\beta\gt 0$, let $i_{\beta}$ be the smallest real under the well ordering which is not algebraic over the $i_{\gamma}$ where $\gamma$ ranges over the ordinals $\lt \beta$. |
Different answers with $\sec(x) = 2\csc(x)$ | The two first methods led to $\cos^2x=\frac15$ and to $\sin^2x=\frac45$. That's the same assertion, since $\cos^2x+\sin^2x=1$.
But if you apply the $\arccos$ function to $\pm\dfrac1{\sqrt5}$, that will give you only the solutions that belong to the range of $\arccos$, which is $[0,\pi]$. And if you apply the $\arcsin$ function to $\pm\dfrac2{\sqrt5}$, that will give you only the solutions that belong to the range of $\arcsin$, which is $\left[-\dfrac\pi2,\dfrac\pi2\right]$. So, you will have to provide the extra solutions yourself. For instance, if you used the $\arcsin$ function and you get an $\alpha\in\left[-\dfrac\pi2,0\right)$, then use $2\pi+\alpha$ instead; it is also a solution and it belongs to the right range.
Finally, if you are solving an equation of the type $f(x)=g(x)$ and if $x_0$ is such that $f^2(x_0)=g^2(x_0)$, then what you have to do is to compute $f(x_0)$ and $g(x_0)$. Either they're equal or they're negatives of each other. If they're equal, then you have a solution in your hands. Keep it. Otherwise, throw it away.
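Here is a numeric version of that filtering step for this equation, $\sec x = 2\csc x$ (equivalently $\sin x = 2\cos x$), over one period; the candidate list is my own enumeration of the solutions of the squared equation $\cos^2 x = 1/5$:

```python
import math

# Candidates on [0, 2*pi) with cos^2 x = 1/5, i.e. cos x = +/- 1/sqrt(5).
c = 1 / math.sqrt(5)
candidates = [math.acos(c), math.acos(-c),
              2*math.pi - math.acos(c), 2*math.pi - math.acos(-c)]

def keep(x):
    # sec x = 2 csc x  <=>  sin x = 2 cos x; compare directly, not squared.
    return math.isclose(math.sin(x), 2*math.cos(x), abs_tol=1e-12)

# Only two of the four squared-equation candidates survive the check.
solutions = [x for x in candidates if keep(x)]
```

The survivors are $\arctan 2$ and $\arctan 2 + \pi$, as expected for $\tan x = 2$.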
Confusion about the radius of convergence and the ratio test | The root formula with lim sup in the computation of $\rho$ is always correct, this is the Cauchy-Hadamard theorem.
The ratio formula is only correct if the limit exists. If it does not exist, the lim sup and lim inf give upper and lower bounds on the radius of convergence.
As for the example, the convergence analysis you did is correct, but it uses the criterion for numerical series, treating $x$ as a parameter. Be careful how you assign the letters; here one could use
$$
a_n=c_n\,x^n=\frac{x^n}{4^n\ln n},\text{ i.e., }c_n=\frac{1}{4^n\ln n}.
$$
In power series terms, $\rho=\limsup_{n\to\infty}\sqrt[n]{|c_n|}=\frac14$ and also $\rho=\lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right|=\frac14$, since this limit exists. So $R=\rho^{-1}=4$. |
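A numeric sketch: both formulas applied to $c_n = 1/(4^n \ln n)$ approach $1/4$ as $n$ grows (working in log-space, since $c_n$ itself underflows for large $n$):

```python
import math

def log_c(n):
    # log of c_n = 1 / (4^n * ln n), computed in log-space to avoid underflow
    return -(n * math.log(4) + math.log(math.log(n)))

n = 10_000
root_test  = math.exp(log_c(n) / n)             # |c_n|^(1/n)
ratio_test = math.exp(log_c(n + 1) - log_c(n))  # |c_{n+1} / c_n|
# Both approach rho = 1/4, so R = 1/rho = 4.
```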
Can $\{\Bbb R, [0, 1],\varnothing\}$ form a topology on $\mathbb R$? | $\{\mathbb R,[0,1],\varnothing\}$ is a topology on $\mathbb R$. Also, an open set may contain its boundary; for example, $\mathbb R$ itself. This uses the definition that the boundary of a set is the intersection of its closure and the closure of its complement, so that the boundary of $\mathbb R$ is $\varnothing$. |
Show that $|f(z)|\le M\frac{\prod_{k=1}^n|z-z_k|}{\prod_{k=1}^n|z+\overline{z_k}|}$ on the right half plane | The result as stated is not true. The function $e^z$ is a counterexample with $n=0$; this can easily be modified to give a counterexample for $n>0$.
The result becomes true if we assume in addition that $f$ is bounded. We need the following version of MMT:
Theorem. Suppose $f$ is analytic and bounded in the region $\Re z\ge0$. If $|f(z)|\le M$ for all $z$ with $\Re z=0$ then $|f(z)|\le M$ for all $z$ with $\Re z > 0$.
Proof. The simplest proof may be a Phragmen-Lindelofish argument. For $\epsilon>0$ let $$g_\epsilon(z)=f(z)/(1+\epsilon z).$$Then $|g_\epsilon(z)|\le M$ for $\Re z=0$. And also $g_\epsilon\to0$ at infinity, since $f$ is bounded. So the lim sup of $g_\epsilon$ is less than or equal to $M$ at every boundary point in the extended plane, hence a suitable version of MMT shows that $|g_\epsilon|\le M$ in $\Re z>0$. Now let $\epsilon\to0$. QED.
Now to do the problem. Given $f$ as in the problem, assuming $f$ is also bounded, let $g=$???. Applying the theorem to $g$ shows that $|g|\le M$ in the right half plane, and that shows that $f$ satisfies the conclusion. (Saying $|g|\le M$ implies what you need should be a good hint what $g$ should be...)
The Point: When you figure out what function $g$ you need to apply the theorem to in order to do the problem, you see that it's not obvious that $|g|\le M$ in the entire region, even if we begin by assuming that $|f|\le M$ in the entire region. But it is clear that $g$ is bounded (by something, we don't know what) in the entire region, and that $|g|\le M$ on the imaginary axis. Hence the theorem shows that $|g|\le M$ everywhere. (In other words, even if we assume $|f|\le M$ everywhere to start, we still need that theorem.)
Is it a composite number? | Three cases:
$n$ is even. Then $$19\cdot 8^n+17\equiv 1\cdot (-1)^n+17\equiv 1+17\equiv 0\pmod{\! 3}$$
$n=4k+1$ for some $k\in\Bbb Z_{\ge 0}$. Then $$19\cdot 8^{4k+1}+17\equiv 6\cdot \left(8^2\right)^{2k}\cdot
8+4\equiv 6\cdot (-1)^{2k}\cdot 8+4$$
$$\equiv 48+4\equiv 52\equiv 0\pmod{\! 13}$$
$n=4k+3$ for some $k\in\Bbb Z_{\ge 0}$. Then $$19\cdot 8^{4k+3}+17\equiv (-1)\cdot \left(8^2\right)^{2k}\cdot 8^3+17\equiv (-1)\cdot (-1)^{2k}\cdot 8^3+17$$
$$\equiv -(-2)^3+17\equiv 8+17\equiv 25\equiv 0\pmod{\! 5}$$ |
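The three congruences can be checked by brute force for small $n$ (a quick verification, not part of the proof):

```python
def divisor_certificate(n):
    """The small prime from the case analysis that divides 19*8^n + 17."""
    if n % 2 == 0:
        return 3
    return 13 if n % 4 == 1 else 5

for n in range(0, 200):
    value = 19 * 8**n + 17
    p = divisor_certificate(n)
    assert value % p == 0 and value > p  # composite, with a known factor
```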
Finding the number of subgroups of a finite group | For any group of order $pq$ with primes $p<q$ and $p\mid q-1$ there is exactly one non-abelian group up to isomorphism, namely the semidirect product of $\mathbb{Z}/p$ by $\mathbb{Z}/q$, see this answer. It also shows in the proof how many subgroups there are of order $p=3$, for $(p,q)=(3,13)$, namely $q=13$. |
Check whether field extension is splitting field | It is not a splitting field because it only contains one of the three roots of $x^3-3$. To get the other two roots, the field would need to contain $\sqrt{-3}$ (or equivalently, a third root of unity $\omega$) which it doesn't contain. |
Determining if two integers can be coprime | Assume that for all $r$, some prime divides both $n_1$ and $n_2$. If there are only finitely many such primes, we can directly use the Chinese Remainder Theorem to find some value of $r$ such that no prime of that finite set divides either expression. Thus, we must have infinitely many primes dividing both expressions simultaneously for some $r$. This shows that:
$$\frac{-a-\lambda b}{pb} \equiv \frac{-c-\lambda d}{pd} \pmod{q}$$
for infinitely many primes $q$. Then:
$$pd(a+\lambda b) \equiv pb(c+\lambda d) \pmod{q} \implies pd(a+\lambda b)=pb(c+ \lambda d)$$
as it isn't possible for two distinct values to be congruent modulo arbitrarily large primes (primes larger than them). This shows that $ad+\lambda bd=bc+\lambda bd \implies ad=bc$. Since $\gcd(a,b)=\gcd(d,b)=1$, this shows $b=1$ and $c=ad$. If this is the case, then:
$$n_1=a+\lambda+rp$$
is a constant. We can find $r$ such that none of the primes dividing $n_1$ divide $n_2$, by the Chinese Remainder Theorem. By contradiction, this shows that $n_1$ and $n_2$ are relatively prime for some value of $r$.
Let $f(z)$ be entire. Suppose $\mathrm{Im} f(z) > 0$ for all $z$. Prove that $f =$ const. | Composing $f$ with $L$, we obtain
$$L\circ f=z\mapsto \frac{1+if(z)}{1-if(z)}$$
which is an entire function taking values in the unit disk, hence bounded, so it's constant by Liouville's theorem.
Then $f=L^{-1}\circ L\circ f$ is also constant. |
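Numerically, the Möbius map $w \mapsto \dfrac{1+iw}{1-iw}$ really does send the upper half-plane into the open unit disk, which is what makes $L\circ f$ bounded (a spot check on random points):

```python
import random

def L(w):
    # Moebius map sending the upper half-plane into the open unit disk
    return (1 + 1j * w) / (1 - 1j * w)

random.seed(0)
for _ in range(1000):
    w = complex(random.uniform(-10, 10), random.uniform(1e-3, 10))  # Im w > 0
    assert abs(L(w)) < 1
```

In fact $|1+iw|^2 - |1-iw|^2 = -4\,\mathrm{Im}\,w < 0$ whenever $\mathrm{Im}\,w > 0$, which is what the sampling illustrates.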
How $\sum\limits_{i=1}^{n}i(\frac{1+i}{2})=\frac{1}{6}n(n+1)(n+2)$? | Note that, if we call your sum $S$ and do some basic algebra, we have
$$S := \sum_{i=1}^n i \left( \frac{i+1}{2} \right) = \frac 1 2 \left( \sum_{i=1}^n i^2 + \sum_{i=1}^n i \right)$$
The two remaining summations are fairly well-known:
$$\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6} \qquad \sum_{i=1}^n i = \frac{n(n+1)}{2}$$
Thus,
$$S = \frac 1 2 \left( \frac{n(n+1)(2n+1)}{6} + \frac{n(n+1)}{2} \right)$$
From here, factor out $n(n+1)/2$ and you obtain
$$S = \frac{n(n+1)}{4} \left( \frac{2n+1}{3} + 1 \right)$$
Of course,
$$\frac{2n+1}{3} + 1 = \frac{2n+4}{3} = 2 \cdot \frac{n+2}{3}$$
Thus, letting that factor of $2$ cancel with the $4 = 2^2$ partially, we then have
$$S = \frac{n(n+1)}{2} \cdot \frac{n+2}{3} = \frac{n(n+1)(n+2)}{6}$$
as desired.
Granted this is purely an algebraic argument. Your observation of it being tied to a binomial coefficient is interesting; I wonder if there's a combinatorial argument one can make for it. |
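Incidentally, there is a combinatorial reading: $i(i+1)/2 = \binom{i+1}{2}$, and $\sum_{i=1}^n \binom{i+1}{2} = \binom{n+2}{3}$ is the hockey-stick identity. Either way, the formula is easy to spot-check numerically:

```python
def lhs(n):
    # sum of i*(i+1)/2, i.e. sum of binomial(i+1, 2)
    return sum(i * (i + 1) // 2 for i in range(1, n + 1))

def rhs(n):
    # n(n+1)(n+2)/6 = binomial(n+2, 3)
    return n * (n + 1) * (n + 2) // 6

assert all(lhs(n) == rhs(n) for n in range(1, 500))
```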
How do you make a Cayley table for the molecule ethane from the D3h group? | The group of symmetries is $D_3 \cong S_3$; it has order 6, with 2 elements of order 3, which are inverses of one another, and 3 elements of order 2.
You have noted one element of order three; its square is the other one.
To be specific. Label the atoms on top as 1, 2, and 3 in clockwise order and the ones right underneath them as 1', 2', and 3'. Since we have 6 atoms I will represent the group as a subgroup of $S_6$ on the set $\{1, 2, 3, 1', 2', 3'\}$
The group has the following elements
$$\{e,(123)(1'2'3'), (132)(1'3'2'),\\(11')(23')(32'), (22')(13')(31'), (33')(12')(21') \}$$
In order to develop a Cayley Table define $a:=(11')(23')(32')$ and $b:=(123) (1'2'3')$ then
the elements of the group can be written in the form $a^{\epsilon}b^i \text { for } \epsilon =0,1 \text { and } i = 0,1,2$ and multiplication can be derived from the relations $$a^2=b^3=(ab)^2=e$$ |
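With the permutation labels above, the Cayley table can be generated mechanically. Here is a sketch using Python dicts for permutations of $\{1,2,3,1',2',3'\}$, encoding $1',2',3'$ as $4,5,6$:

```python
def compose(f, g):
    # (f o g)(x) = f(g(x)); permutations of {1,...,6}, with 4,5,6 = 1',2',3'
    return {x: f[g[x]] for x in g}

e = {x: x for x in range(1, 7)}
a = {1: 4, 4: 1, 2: 6, 6: 2, 3: 5, 5: 3}   # (1 1')(2 3')(3 2')
b = {1: 2, 2: 3, 3: 1, 4: 5, 5: 6, 6: 4}   # (1 2 3)(1' 2' 3')

# Build the six elements a^eps * b^i, eps in {0,1}, i in {0,1,2}.
names, elems = [], []
for eps in (0, 1):
    for i in (0, 1, 2):
        g = a if eps else e
        for _ in range(i):
            g = compose(g, b)
        names.append(f"a^{eps} b^{i}")
        elems.append(g)

# Check the defining relations a^2 = b^3 = (ab)^2 = e.
assert compose(a, a) == e
assert compose(b, compose(b, b)) == e
ab = compose(a, b)
assert compose(ab, ab) == e

# Cayley table: entry (row, col) is the index (into names) of row * col.
table = [[elems.index(compose(g, h)) for h in elems] for g in elems]
```

Row and column $i$ of `table` correspond to `names[i]`, so the table can be printed with the $a^{\epsilon}b^i$ labels directly.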
Quadratic equation problem. Composition of functions | Hint: Let the roots of $ p(x) = 0 $ be $\alpha, \beta$.
What can you say are the values of $ q(10), q(20), q(23) $?
They are either $ \alpha$ or $\beta$. Note that at most two of them have the same value, since $q(x)$ is a quadratic function.
Hence, can you conclude what the possibilities for the last root are? (Note that there is more than one possibility.)
We split into casework, depending on what values they are.
Case 1: $q(10) = \alpha, q (20) = \beta , q (23) = \alpha $. The last solution is $r$ and satisfies $ q(r) = \beta$.
Observe that for any constant $C$, the solutions to $q(x) = C$ sum up to the same value by Vieta's formula. Hence, we have $ 10 + 23 = 20 + r$, which gives $r = 13$.
Complete the rest of the cases. |
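The key observation, that for a quadratic $q$ the solutions of $q(x)=C$ sum to the same value for every $C$, is easy to verify numerically (a sketch with an arbitrary sample quadratic):

```python
import cmath

def roots_sum(a, b, c, C):
    # Solutions of a*x^2 + b*x + c = C sum to -b/a, independent of C (Vieta)
    disc = cmath.sqrt(b * b - 4 * a * (c - C))
    r1 = (-b + disc) / (2 * a)
    r2 = (-b - disc) / (2 * a)
    return (r1 + r2).real

for C in (0, 1, -7, 100):
    assert abs(roots_sum(2, -6, 1, C) - 3.0) < 1e-9   # always -b/a = 3
```

This is exactly what justifies $10 + 23 = 20 + r$ in Case 1.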
How to prove $T_{m}T_{n}=\sum_{d|\gcd(m,n)}d^{k-1}T_{mn/d^2}?$ $T_{n}$ is Hecke operator. | Let $f=\sum_{m=0} ^{\infty} a(m)q^m$ be a Fourier expansion of $f\in M_k(\Gamma(1))$, then we have:
\begin{equation}(1) \ \ \
T_n f=\sum_{m=0} ^{\infty} \left(\sum_{d|(n,m)} d^{k-1}a\left(\frac{mn}{d^2}\right)\right)q^m .
\end{equation}
Let $b(l)$ denote the $l$-th Fourier coefficient of $T_n T_m f$. Then, by (1),
\begin{equation}(2) \ \ \
b(l)=\sum_{\substack{{d|n}\\{d|l}}} d^{k-1} \sum_{\substack{{e|m}\\{e|{\frac{nl}{d^2}}}}} e^{k-1} a\left(\frac{mnl}{d^2 e^2}\right).
\end{equation}
On the other hand, the $l$-th Fourier coefficient $c(l)$ of $\sum_{u|(n,m)} u^{k-1} T_{\frac{mn}{u^2}}f$ is
\begin{equation} (3) \ \ \
c(l)=\sum_{\substack{{u|n}\\{u|m}}} u^{k-1} \sum_{\substack{{v|l}\\{v|{\frac{mn}{u^2}}}}} v^{k-1} a\left(\frac{mnl}{u^2 v^2}\right).
\end{equation}
Consider a change of variable in (2) with
$$
(4) \ \ \ u=\frac {e(d,\frac me)}{(e,\frac ld)}, \ \ v=\frac{(e,\frac ld)d}{(d,\frac me)}.
$$
This is an invertible mapping between the sets of pairs $(d,e)$ in (2) and $(u,v)$ in (3).
Hence, $b(l)=c(l)$ as desired. Then it completes the proof of
$$
T_nT_m=\sum_{u|(n,m)} u^{k-1} T_{\frac{mn}{u^2}}.
$$
To see that the change of variable (4) is invertible, we have to show that $(u,v)$ in (4) satisfy the conditions in (3) under the assumptions of $(d,e)$ in (2). This can be shown by
$u|e \frac me = m$, $u| \frac nd(d,\frac me) | n$, $v|\frac ld d = l$,
$$
u^2 v = \frac{e^2 (d,\frac me)^2}{(e,\frac ld)^2} \frac{(e,\frac ld)d}{(d,\frac me)}=\frac{e^2(d,\frac me) d}{(e,\frac ld)}=ude \Bigg\vert n(d,\frac me) e \Bigg\vert n \frac me e = nm.
$$
Also, the same procedure
$$
(5) \ \ \ d=\frac{v(u,\frac lv)}{(v,\frac mu)}, \ \ e=\frac{(v,\frac mu) u}{(u,\frac lv)},
$$
takes the pairs $(u,v)$ in (3) to $(d,e)$ in (2).
Moreover, the change of variables (4) and (5) are in fact inverse of each other, since
$$
\frac{(u,\frac lv)}{(v,\frac mu)} = \frac{\left( \frac{e(d,\frac me)}{(e,\frac ld)}, \frac{(d,\frac me)l}{(e,\frac ld)d}\right)}{\left(\frac{(e,\frac ld)d}{(d,\frac me)},\frac{(e,\frac ld) m}{(d,\frac me)e}\right)}=\frac{(d,\frac me)}{(e,\frac ld)}.
$$
Note that this is essentially interchanging places of $l$ and $m$ in (2). Since places of $n$ and $l$ can be interchanged, the whole problem is equivalent to the result $T_mT_n = T_nT_m$.
The change of variable (4) is not easy to see. However, (4) becomes a lot easier if we impose $(m,n)=1$. So, an alternative method is to derive $T_{mn}=T_mT_n$ when $(m,n)=1$, then prove that $T_{p^s}$ is expressed as a polynomial of $T_p$. |
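Because the proof of $b(l)=c(l)$ is a purely combinatorial bijection, the coefficient identity holds for *any* sequence $a(m)$, modular or not, so it can be machine-checked (a sketch; $a(m)$ below is an arbitrary test sequence, and $k=12$ is an arbitrary weight):

```python
from math import gcd

def hecke(n, k, a):
    """Coefficient action of T_n from formula (1):
    (T_n a)(m) = sum over d | gcd(n, m) of d^(k-1) * a(m*n / d^2)."""
    def Ta(m):
        g = gcd(n, m)
        return sum(d**(k - 1) * a(m * n // (d * d))
                   for d in range(1, g + 1) if g % d == 0)
    return Ta

a = lambda m: m * m + 3 * m + 7   # arbitrary sequence, not a modular form
k = 12

for n, m, l in [(4, 6, 12), (9, 6, 18), (8, 12, 24)]:
    lhs = hecke(n, k, hecke(m, k, a))(l)
    rhs = sum(u**(k - 1) * hecke(m * n // (u * u), k, a)(l)
              for u in range(1, gcd(n, m) + 1) if gcd(n, m) % u == 0)
    assert lhs == rhs
```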
Trisecting an angle $\theta$ equally via applying trigonometry | I'm not sure what you mean by "reverse proof". Assuming $\phi=\theta/3$ and then reaching conclusions from there can't convince me that $\phi$ must be $\theta/3$, since I don't know we're not going to reach a contradiction later, or if $\theta/3$ is just one of multiple possibilities. This is the logical fallacy of "circular logic" or the classical meaning of "begging the question".
For Archimedes' argument using the same diagram but without trigonometry, see this subsection of the Wikipedia article on angle trisection. For other methods, see the rest of that entire section. |
Formulate two matrices A B such that B*A won't equal A*B | If you are looking for a systematic approach: Let $A= \begin{bmatrix} a & b \\ c& d \end{bmatrix}$ and $B= \begin{bmatrix} x & y \\ z& t \end{bmatrix}$.
Then
$$AB= \begin{bmatrix} ax+bz & ay+bt \\ cx+dz& cy+dt \end{bmatrix}\\
BA= \begin{bmatrix} ax+cy & bx+dy \\ az+ct&bz+dt \end{bmatrix}$$
Now, you must seek an entry where $AB$ and $BA$ have different expressions, which in this case is every entry. Pick such an entry, let's say $(1,1)$, and make sure you pick numbers such that
$$ax+bz \neq ax+cy$$ |
Conditional probability, bayes formula | A ball was taken from each bag.
We want to find the probability for taking a red from bag A (and therefore a blue from bag B) given that a red and a blue ball were taken out; let us represent that as: $\newcommand{\pair}[2]{\langle{#1,#2}\rangle} \mathsf P(\pair r b \mid \pair r b \cup\pair b r )$ , where the pair $\pair ab$ represents that the colours taken from bag $A$ and $B$ respectively (not first and second bag).
(Whichever bag the red ball was taken from is the "first bag".)
Bag A contains 30% blue balls, 40% red and 30% black.
Bag B contains 30% red, 50% blue and 20% yellow.
$$\begin{align}\mathsf P(\pair r b \mid \pair r b \cup\pair b r )~&=~\dfrac{\mathsf P(\pair r b )}{\mathsf P(\pair r b )+\mathsf P(\pair b r )} \\[1ex] &=~ \dfrac{0.40\cdot 0.50}{0.40\cdot 0.50 + 0.30\cdot 0.30}\end{align}$$ |
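Plugging in the numbers gives $20/29$, and a quick Monte-Carlo simulation of the two draws agrees (a sketch):

```python
import random

exact = (0.40 * 0.50) / (0.40 * 0.50 + 0.30 * 0.30)   # = 20/29

random.seed(1)
hits = total = 0
for _ in range(200_000):
    ball_a = random.choices(["blue", "red", "black"], [0.3, 0.4, 0.3])[0]
    ball_b = random.choices(["red", "blue", "yellow"], [0.3, 0.5, 0.2])[0]
    if {ball_a, ball_b} == {"red", "blue"}:   # one red and one blue drawn
        total += 1
        hits += ball_a == "red"               # the red came from bag A

assert abs(hits / total - exact) < 0.01
```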
Compactness argument in SVD existence proof | We are interested in the compactness of the subset $S = \{v\mid ||v|| = 1\}\subseteq \Bbb C^n$. Compactness is relevant because among those vectors, $||Av||_2 \leq \sigma_1$, and compactness is used to guarantee the existence of a vector $v_1$ such that there is equality. In other words, the function $f:S \to \Bbb R$ given by $f(v) = ||Av||_2$ has sup $\sigma_1$ by definition of operator norm, and compactness guarantees that it is actually a max. |
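A numeric illustration of the point: maximizing $\|Av\|_2$ over a fine grid of unit vectors recovers $\sigma_1$, because the sup over the compact unit sphere is attained (a sketch for a sample $2\times 2$ real matrix, with $\sigma_1$ computed by hand from the eigenvalues of $A^T A$):

```python
import math

A = [[2.0, 1.0],
     [0.0, 1.0]]

def norm_Av(theta):
    # v = (cos t, sin t) is a unit vector; return ||A v||_2
    v = (math.cos(theta), math.sin(theta))
    w = (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])
    return math.hypot(*w)

# The sup over the (compact) unit circle is attained; grid search finds it.
best = max(norm_Av(2 * math.pi * i / 100_000) for i in range(100_000))

# sigma_1 = sqrt of the largest eigenvalue of A^T A = [[4, 2], [2, 2]],
# whose eigenvalues are 3 +/- sqrt(5).
sigma1 = math.sqrt(3 + math.sqrt(5))
assert abs(best - sigma1) < 1e-6
```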