title | upvoted_answer
---|---
Math competitions for hobbyists? | You can "unofficially" (well, nobody really keeps track) participate in (mostly short answer, but some proof-based) student-run contests like the NIMO and OMO (see http://internetolympiad.org/), or the (proof-based) Olympiad-style ELMO (see this year's AoPS-version of the contest). Of course, AoPS also has a good repository of past contest problems; in particular, for the aforementioned NIMO, OMO, and ELMO (including ELMO Shortlist), you can find corresponding archives in the USA Contests subsection.
(Disclaimer: I was heavily involved with the OMO and ELMO in high school.)
In any case, AoPS is probably the best place to look for such contests (at least those based in the US). And as Will Jagy's comment suggests, high school students are probably much more willing to spend time and energy writing these contests than others. |
Divergence of modified harmonic series | Use the estimate $H_{n-1}\le C\log(n)$ for a suitable constant $C$.
Then it suffices to show that $\sum^\infty_{n=2} \frac{1}{n\log(n)}$ is divergent, which follows by the integral comparison test:
$$\int_2^R \frac{dt}{t\log(t)}=\log(\log(R))+\text{const.}\rightarrow \infty$$
as $R\rightarrow\infty$, although the divergence is extremely slow. |
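For a sense of how slowly this diverges, here is a rough numeric sketch (Python, purely for illustration): the partial sums of $\sum 1/(n\log n)$ grow like $\log\log N$.

    # partial sums of 1/(n log n) grow roughly like log(log(N)) -- extremely slowly
    import math
    for N in (10**2, 10**4, 10**6):
        s = sum(1 / (n * math.log(n)) for n in range(2, N))
        print(N, round(s, 3), round(math.log(math.log(N)), 3))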
Prove that ln(n) is not a Cauchy Sequence? | You need to prove that there is some $\epsilon > 0$ such that for any $N > 0$ there are $n, m > N$ with $|\ln(n)-\ln(m)| = |\ln(n/m)| \geq \epsilon$.
For example, if you take $\epsilon = \tfrac12$ and $n = 2m$ with $m > N$, then $|\ln(n/m)| = \ln 2 > \tfrac12$, so the inequality is satisfied no matter how large $N$ is. |
Minkowski's Inequality in $L^\infty$ space | We have $|f(x) + g(x)| \leq |f(x)| + |g(x)|$ for every $x$, by the triangle inequality. Now $|f(x)| \leq \|f\|_\infty$ almost everywhere, and similarly $|g(x)| \leq \|g\|_\infty$ almost everywhere. Therefore,
$$|f(x) + g(x)| \leq \|f\|_\infty + \|g\|_\infty$$
almost everywhere. Since the right hand side is an almost-everywhere upper bound for $|f(x) + g(x)|$, and $\|f + g\|_\infty$ is the infimum of all such almost-everywhere upper bounds, it follows that
$$\|f + g\|_\infty \leq \|f\|_\infty + \|g\|_\infty$$
Edit to respond to the question raised in the comments:
@George: I mean that there is a set $N$ with $\mu(N) = 0$ such that $|f(x)| \leq \|f\|_\infty$ for all $x \in N^c$. This is an easy consequence of the definition of the essential supremum.
Indeed, if there is no such set $N$, then $|f(x)| > \|f\|_\infty$ on a set of positive measure. Therefore, at least one of the sets $\{x : |f(x)| > \|f\|_\infty + 1/n\}$ must have positive measure (where $n$ is a positive integer). But this contradicts the definition of $\|f\|_\infty$. |
In a set $X$ with discrete topology, show that if $X$ is connected, then it contains exactly one element. | Your proof is correct if you assume that the space is metric, but that is in fact just an unnecessary complication. A more general proof (but along exactly the same lines) is:
If $X$ contains at least two points, then choose any point $*\in X$ and take
$$A = \{*\},\qquad B=X\backslash\{*\}.$$
As we are working in the discrete topology, every subset of $X$ is open, so $(A,B)$ form an open partition of $X$, showing that $X$ is not connected.
Conversely, if $X$ consists of a single point, then it is obviously connected. |
Does the proof of MCT need linearity of Lebesgue integration? | [Partial answer.]
When I revisit this question, I don't even remember how on earth I came up with such a question. In lots of the standard texts in measure theory, convergence theorems (such as the MCT) come after the linearity of Lebesgue integration.
In Folland's Real Analysis, for example, linearity of the Lebesgue integral for simple functions is built up first, and the proof of the MCT uses this linearity.
In Tao's Introduction to Measure Theory, the additivity
$$
\int_X (f+ g)\ d\mu= \int_X f\ d\mu +\int_X g\ d\mu
$$
for unsigned measurable functions is stated as a theorem, whose proof is "highly nontrivial" and does not use the MCT. |
Finding the $n^{th}$ derivative of $x^r$ | You are correct:
$$ \frac{{\rm d}^n}{{\rm d}x^n} x^r = \frac{r!}{(r-n)!} x^{r-n} = r (r-1) (r-2) \cdots (r-n+1) \,x^{r-n}, $$
where for non-integer $r$ the ratio $\frac{r!}{(r-n)!}$ is understood as the falling factorial $r(r-1)\cdots(r-n+1)$ (equivalently, via the Gamma function).
Run a few test cases through Wolfram Alpha and you can confirm this yourself. |
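If you prefer a symbolic check over Wolfram Alpha, here is a small sketch with SymPy (the variable names are just illustrative):

    # verify d^3/dx^3 of x^r against r(r-1)(r-2) x^(r-3) for a symbolic exponent r
    import sympy as sp
    x, r = sp.symbols('x r', positive=True)
    lhs = sp.diff(x**r, x, 3)
    rhs = r * (r - 1) * (r - 2) * x**(r - 3)
    print(sp.simplify(lhs - rhs))   # prints 0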
Proving that $\frac{e^x + e^{-x}}2 \le e^{x^2/2}$ | $$\frac{e^x+e^{-x}}{2} = \frac{\left(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \right) + \left(1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \cdots \right)}{2} = \frac{2 + 2 \frac{x^2}{2!} + 2 \frac{x^4}{4!} + \cdots}{2}$$
$$ = 1 + \frac{x^2}{2} + \frac{x^4}{24} + \frac{x^6}{720}+ \cdots$$
$$e^{\frac{x^2}{2}} = 1 + \frac{x^2}{2} + \frac{\left(\frac{x^2}{2}\right)^2}{2!} + \frac{\left(\frac{x^2}{2}\right)^3}{3!} + \cdots = 1 + \frac{x^2}{2} + \frac{x^4}{8} + \frac{x^6}{48} + \cdots$$
Basically, it comes down to comparing $\displaystyle \frac{1}{(2n)!}$ and $\displaystyle \frac{1}{2^n n!}$, because they are the coefficients for $\displaystyle \frac{e^x + e^{-x}}{2}$ and $e^{\frac{x^2}{2}}$, respectively. We note that $$\frac{1}{(2n)!} = \frac{1}{1 \cdot 2 \cdot 3 \cdots (2n-1) \cdot (2n)}$$ while $$\frac{1}{2^n n!} = \frac{1}{2 \cdot 4 \cdot 6 \cdots (2n-2) \cdot (2n)}$$
Because the denominator of the second is a product of fewer (and no larger) factors, it is smaller than the denominator of the first, so the second fraction is at least as big as the first. This holds for every $n$, and since both series involve only even powers $x^{2n}\ge 0$, comparing them term by term gives $$\frac{e^x+e^{-x}}{2} \le e^{\frac{x^2}{2}}$$ |
can't understand a question about finding general solution | The general solution is $$\pmatrix{2\\-3\\1}+s\pmatrix{1\\0\\1}+t\pmatrix{-1\\1\\1}$$, where $s$ and $t$ are real numbers.
The general solution of an inhomogeneous system of linear equations is equal to the sum of the general solution of the corresponding homogeneous system and some particular solution of the inhomogeneous system.
Every linear combination of solutions of the homogeneous system is again a solution of the homogeneous system. |
How am I supposed to tell if a number is divisible by $13$ (I need a shortcut)? | I find that with the possible exceptions of the rules for $9$ and $11$, the digit-based divisibility rules are annoyingly slow to actually use, e.g. in your head. If you've ever passed the time by looking at random numbers around you and trying to find their prime factors (and who hasn't?), you would never use a digit-based divisibility rule for testing for divisibility by even $7$, let alone $13$.
Here is what I do: I start with a number $n$ and a prime $p$ I want to test divisibility for, and then I transform the number in various ways such that the transformed number is divisible by $p$ if and only if $n$ is. The most common transformation is subtracting powers of $10$ times small multiples of $p$ (which ideally should be memorized). Eventually I try to get a number where it's either obviously divisible by $p$ or obviously not divisible by $p$, e.g. because it has a lot of trailing zeroes or is small.
For example, let's take the current year, $2014$. Is it divisible by...
$7$? Well, $2014 - 14 = 2000$, so no. (See how fast that was?)
$13$? Well, $2014 - 1300 = 714$, and $780 - 714 = 66$, so no. (Here I'm taking advantage of my freedom to subtract a number slightly larger than my current number, then take its negative.)
$17$? Well, $2014 - 1700 = 314$, and $340 - 314 = 26$, so no.
$19$? Well, $2014 - 1900 = 114$, and $114 - 95 = 19$, so yes! But let's pretend we didn't notice that so I can demonstrate the method for larger primes.
$23$? Well, $2300 - 2014 = 286$, and $286 - 230 = 56$, so no.
$29$? Well, $2900 - 2014 = 886$, and $886 - 870 = 16$, so no.
$31$? Well, $3100 - 2014 = 1086$, and $1086 - 930 = 156$, so no.
These kinds of arguments are relatively easy to perform in your head because, rather than keep track of two pieces of information (the digits you have left to use, and the digit sum or whatever that you're in the process of computing), you only keep track of one piece of information (a number which is divisible by $p$ if and only if your original number was).
Because you're only testing for divisibility and not trying to find either the remainder or the quotient, you can ignore information and there are more moves available to you: for example, you can at any step in this process multiply or divide by other primes. I didn't use this freedom above, but I did refrain from dividing by $2$; that happened to make some of the arguments more convenient. |
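For what it's worth, here is a rough Python sketch of the same idea (my own illustration, not part of the answer above): keep a single number, repeatedly subtract, or deliberately over-subtract, $p$ times a power of $10$, and stop once the number is smaller than $p$.

    def divisible(n, p):
        # keep one number that is divisible by p iff the original n was;
        # at each step subtract (or over-subtract) p scaled by a power of 10
        while n >= p:
            k = 10 ** (len(str(n)) - len(str(p)))
            if p * k > n:
                k //= 10
            q = n // (p * k)
            n = min(n - p * k * q, abs(n - p * k * (q + 1)))
        return n == 0

    for p in (7, 13, 17, 19, 23, 29, 31):
        print(p, divisible(2014, p))   # only 19 divides 2014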
Example of matrices A, B, and row-echelon form matrix C | A and B are row-equivalent to C, but not to each other | Is this a true-false question? Because I think the statement that "when C is in Row Echelon Form, A is row-equivalent to C, B is row-equivalent to C, but A and B are not row-equivalent" is incorrect, therefore you will not be able to find any examples.
Simply put, two matrices are row equivalent if and only if one may be obtained from the other via elementary row operations. If $A$ is row-equivalent to $C$, by the definition, there are elementary matrices $E_1, \dots , E_k$ such that $$A=E_1\cdots E_kC.$$ If $B$ is row-equivalent to $C$, by the definition, there are elementary matrices $F_1, \dots , F_j$ such that $$C=F_1\cdots F_jB.$$
Substituting we get $$A=E_1\cdots E_kF_1\cdots F_jB.$$ Since $E_1, \dots , E_k, F_1, \dots , F_j$ are elementary matrices, by the definition, $A$ is row equivalent to $B$. |
Checking my proof for the following question | Remember that in this formal axiomatic approach, every little detail matters.
So, when iv) says that if $x<y$ and $z>0$, then $xz<yz$, then that is not the same as $zx<zy$
Accordingly, the result of the second step is $cb<db$, rather than $bc<bd$. You will need to use two applications of A5 to change that into $bc<bd$.
And frankly, I am surprised that there is no explicit axiom that says that $x<y$ if and only if $y > x$, because while you are given that $0<c$, what you really need to apply iv) is $c>0$ ... I think you should point this out in your answer: a good professor will give you extra credit!
Otherwise good. |
When, how many times & how often (less's better) switch batteries to end up with 2 dead batteries? | Let the capacity of each battery be $C$. Your total lifetime is $\frac {2C}{M+S}$ hours from dividing total capacity by total consumption. Swap them halfway through in time, at $\frac C{M+S}$ hours. |
Proof that $\mathbb{R}^+$ is a vector space | By definition $\alpha\odot(x\oplus y)=(xy)^\alpha$ and $\alpha\odot x\oplus\alpha \odot y=x^\alpha\oplus y^\alpha=x^\alpha y^\alpha$, and those two are the same. I denoted vector space addition and scalar multiplication by $\oplus$ and $\odot$ for distinguishability.
Edit: Though avid19 already answered this, any nonzero vector will provide a one-element basis for $\mathbb{R}^+$; note that in this case the zero vector is $1$, since $1\oplus x=1x=x$.
We can check this as follows. Let $g$ be a nonzero (i.e. $\neq 1$) element of $\mathbb{R}^+$, and let $x$ be an arbitrary element of $\mathbb{R}^+$. Also let $\alpha\in\mathbb{R}$ be a scalar. In this case the equation $$ \alpha\odot g=x $$ is written as $$ g^\alpha=x. $$ Taking the base-$g$ logarithm of both sides (remember $g$ and $x$ are larger than zero): $$ \alpha=\log_gx, $$ which, by the properties of logarithm functions, always exists. Thus given a non-$1$ element of $\mathbb{R}^+$, we can always find a scalar which, combined with it by the vector space scalar multiplication, results in any desired vector, so $\{g\}$ is a generating set.
It is also linearly independent, since there is only one element in it, and it isn't the zero vector, so $\{g\}$ is a basis. |
Quotient of matrix group | $G$ itself is isomorphic to ${\mathbb R}_{> 0} \ltimes {\mathbb R}$ (the left ${\mathbb R}_{> 0}$ with multiplication; the right ${\mathbb R}$ with addition). Under this isomorphism, $N$ corresponds to $\{1 \} \times {\mathbb R}$. The quotient, therefore, is isomorphic to ${\mathbb R}_{> 0}$.
More explicitly, consider the subgroup $H$ of $G$ defined by
$$H = \left\{ \begin{pmatrix} a & 0 \\ 0 & a^{-1} \end{pmatrix} \mid a > 0 \right\}.$$
Then every element of $G$ can be written as $hn$ with $h \in H$ and $n \in N$, i.e., $G = HN$; and also $H \cap N = \{ I \}$; and, as the OP already noticed, $N$ is a normal subgroup of $G$. Therefore $G = H \ltimes N$. Consequently, $G/N \cong H \cong {\mathbb R}_{> 0}$. |
Incorrect cancellation of fraction with correct answer | We'll solve $$\frac{10 a + n}{10 n + b} = \frac{a}{b},$$ or equivalently $n (10 a - b) = 9 a b$; to interpret the l.h.s. as a ratio of $2$-digit numbers, we restrict to solutions for which $1 \leq a, b, n \leq 9$.
Reducing modulo $9$ reduces $n(10 a - b) = 9 ab$ to $$n (a - b) \equiv 0 \pmod 9.$$ In particular $n$ and $(a - b)$ must have two factors of $3$ between them. This gives three cases:
CASE 1 $(3^2) \mid n$ : This forces $n = 9$, and canceling gives $10 a - b = ab$. Rearranging gives $(a + 1)(10 - b) = 10$, so since $1 \leq a, b \leq 9$ we must have either $a + 1 = 10$, or one of $a + 1$ and $10 - b$ is $2$ and the other is $5$. These cases respectively give $$\frac{99}{99}, \quad \frac{19}{95}, \quad \frac{49}{98}.$$
CASE 2 $3 \mid n$ and $3 \mid (a - b)$: We may as well assume $9 \nmid n$, as this is the content of case $1$, so, $n = 3$ or $n = 6$. If $n = 3$, simplifying gives $$10 a - b = 3 a b ,$$ and rearranging gives $(3 a + 1)(10 - 3 b) = 10$, which forces $a = b = 3$, giving $$\frac{33}{33} .$$ If $n = 6$, simplifying and rearranging gives $(3 a + 2)(20 - 3 b) = 40$. The factors of $40$ of the form $3 a + 2$ for $1 \leq a \leq 9$ are $5, 8, 20$, and these lead to $a = 1, b = 4$ and $a = 2, b = 5$, $a = 6, b = 6$, which respectively give
$$\frac{16}{64}, \quad \frac{26}{65}, \quad \frac{66}{66}.$$
CASE 3 $9 \mid a - b$: Since $1 \leq a, b \leq 9$, we have $a = b$, and so $n (10 a - b) = 9 a b$ simplifies to $n = a$, giving the trivial cases
$$\frac{11}{11}, \ldots, \frac{99}{99} .$$
In summary, the nontrivial solutions are
$$\color{#bf0000}{\boxed{\frac{16}{64}, \qquad \frac{19}{95}, \qquad \frac{26}{65}, \qquad \frac{49}{98}}}. $$
Of course, we can also look for "false cancellations" of the form $$\frac{10 n + a}{10 b + n} = \frac{a}{b} ,$$ but this is exactly the equation formed by taking the reciprocals of both sides of the above equation and interchanging the roles of $a, b$, so this equation leads precisely to the four reciprocals, $\frac{64}{16}$, etc., of the above solutions.
(There are other two-digit "false cancellation" fractions, corresponding to other equations, but they are all trivial in some sense; they have the forms $\frac{d0}{e0}$ and $\frac{dd}{ee}$ for digits $d, e$.) |
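A brute-force check of the four nontrivial solutions (a small Python sketch, not part of the argument above):

    # enumerate digit triples with a != b and confirm the four "false cancellations"
    sols = [(a, n, b) for a in range(1, 10) for b in range(1, 10) for n in range(1, 10)
            if (10 * a + n) * b == (10 * n + b) * a and a != b]
    print([(10 * a + n, 10 * n + b) for a, n, b in sols])
    # [(16, 64), (19, 95), (26, 65), (49, 98)]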
Relation between characters of symmetric group and general linear group | There is a link between the irreducible representations known as Schur-Weyl duality.
https://en.wikipedia.org/wiki/Schur%E2%80%93Weyl_duality |
Irreducible representation restricted to index 2 subgroup | This follows from the following form of Frobenius reciprocity: for every $k[G]$-module $V$ and every $k[H]$-module $U$, there is an isomorphism of groups
$$
{\rm Hom}_G(V,{\rm Ind}_{G/H}U) \cong {\rm Hom}_H({\rm Res}_{G/H}V,U).$$
So if the right hand side is non-trivial, then so is the left hand side. But if $V$ is a simple module (i.e. the representation is irreducible), then any homomorphism from it to anything is either 0 or injective, since the kernel of a hom is a submodule. So in your situation you have an injective homomorphism from $\rho$ to ${\rm Ind}_{G/H}\psi_1$. Since the two have the same dimension, this must be an isomorphism. |
Solving $|x-1|+|2-x|>3+x$ | You made a mistake by saying $|2-x|=2-x$ when $x\ge2$. The opposite is true.
In your first case, when $x\ge2$ we have $(x-1)+(x-2)>3+x$, which means $x>6$.
In your third case, when $x\lt1$ we have $(1-x)+(2-x)>3+x$, which means $x<0$. |
When is the largest eigenvalue of a matrix equal to the sum of its diagonal elements? | Let $r$ be the largest eigenvalue of our $n\times n$ matrix $A$. Then $r>0$, it is associated to a positive eigenvector, and $(n-1)a+\min_i(x_i)\leq r\leq (n-1)a+\max_i(x_i)$.
If, for example, $\min_i(x_i)=x_1,\ \max_i(x_i)=x_n$, then a necessary condition for $r=trace(A)$ is
$x_1+\cdots x_{n-1}\leq (n-1)a\leq x_2+\cdots +x_n$.
Of course, if $a=x_1=\cdots=x_n$, then $r=trace(A)$.
EDIT 1. An example
Let $A=\begin{pmatrix}1&a&a\\a&2&a\\a&a&3\end{pmatrix}$ where $3/2\leq a\leq 5/2$. If $\chi_A$ is the characteristic polynomial of $A$, then $\chi_A(trace(A))=-2a^3-12a^2+60=0$ for $a\approx 1.94338$.
Here, we are not interested in the hypothesis $\det(A)=0$.
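A quick numerical check of this example (a sketch assuming NumPy, using the approximate value of $a$ found above):

    # for a ~ 1.94338 the largest eigenvalue of the example matrix equals its trace
    import numpy as np
    a = 1.94338
    A = np.array([[1, a, a], [a, 2, a], [a, a, 3]])
    print(max(np.linalg.eigvalsh(A)), np.trace(A))   # both are approximately 6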
EDIT 2. Now we consider also the condition $\det(A)=0$. By homogenization, we may assume $a=1$. Since $A$ is real symmetric, $A$ is diagonalizable and its eigenvalues are real.
$n=2$. The NS condition is $x_1x_2=1$.
$n=3$. Since $spectrum(A)=\{0,0,trace(A)\}$, one has $rank(A)=1$ and necessarily $x_1=x_2=x_3=1$.
$n=4$. $spectrum(A)=\{\alpha,-\alpha,0,trace(A)\}$. What follows is a solution, in a neighborhood of $(1,\cdots,1)$, s.t. $\sigma_{3}\not= 4$.
$x_1\approx 1.0031092671785911483799797182382559149854506473873$
$x_2\approx 1.0698119384582623856800661089697199349941153438922$
$x_3\approx 0.93098479177751996117373383813583484579257578850415$
$x_4\approx 0.99689878083018837746093466133628743171083298298820$. |
Comparing theoretical value with experimental value with measurement uncertainty | You seem to be talking about inference using confidence intervals. Generally speaking, if the confidence interval does not include a hypothetical value, then the hypothesis is rejected.
More specifically, suppose you have $n= 100$ observations from a normal population
with unknown mean $\mu$ and standard deviation $\sigma.$ If the sample mean is $\bar X = 23.4$ and sample standard deviation is $S = 3.11,$ then a 95% confidence interval for $\mu$ is $\bar X \pm 1.97 S/\sqrt{100}$ or $(22.79, 24.01).$ [You can find the formula for this t confidence interval online
or in an elementary statistics textbook.]
If you want to test the null hypothesis $H_0: \mu = 20$ against the alternative
$H_a: \mu \ne 20,$ then you would reject $H_0$ on the basis of the confidence
interval, which does not contain $20$ (working at significance level 5%). If you got the confidence interval from a journal article and are trying to test the hypothesis on your own, then this is a reasonable ad hoc inference.
However if you have access to the 100 observations, it is better to do the test yourself. Then you could get a p-value, which would give you a better
idea of the strength of the evidence that the population mean is not $\mu = 20.$
You could also check the data to get some idea whether the appropriate statistical method was used. Specifically, you could get an idea whether the population is normal and the observations were chosen at random. |
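For illustration, the numbers above can be reproduced from the summary statistics alone; here is a sketch assuming SciPy (the variable names are mine):

    # 95% t confidence interval and two-sided test of H0: mu = 20 from summary statistics
    from scipy import stats
    n, xbar, s = 100, 23.4, 3.11
    tcrit = stats.t.ppf(0.975, df=n - 1)
    print(xbar - tcrit * s / n**0.5, xbar + tcrit * s / n**0.5)   # about (22.78, 24.02)
    tstat = (xbar - 20) / (s / n**0.5)
    print(2 * stats.t.sf(abs(tstat), df=n - 1))                   # tiny p-value: reject H0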
Finding a point on a circle that has a distance L (arc length) from another point | Traveling along a fixed circle is basically rotating a vector around a fixed point; for an arc length $L$ on a circle of radius $r$ the rotation angle is $\theta = L/r$. For that we can use the well known rotation matrix $\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}$. But to successfully do this you need to know the center of your circle first, like ccorn mentioned. |
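A small computational sketch of this (Python, my own illustration; the function name is made up):

    # rotate point P around the circle's center C by theta = L / r
    import math

    def point_at_arc_length(C, P, L, r):
        theta = L / r
        c, s = math.cos(theta), math.sin(theta)
        dx, dy = P[0] - C[0], P[1] - C[1]                        # vector from center to P
        return (C[0] + c * dx - s * dy, C[1] + s * dx + c * dy)  # apply the rotation matrix

    print(point_at_arc_length((0, 0), (1, 0), math.pi / 2, 1))   # ~(0, 1), a quarter turn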
If $a:[0,1) \to \mathbb C$ defined by $a(s)=e^{2\pi i s}$ is not an embedding, how is the restriction of $a$ to $[0,b)$ for $0<b<1$ an embedding? | The image of an open set doesn't need to be open in $\mathbb C,$ it needs to be open in the image of $a$. So if your $a$ is the one restricted to $[0,b),$ with $b<1$, to check if $a$ is an open mapping you need to check if it takes every open subset of $[0,b)$ to an open subset of the "half-open arc" $a([0,b)).$ You will find that it does.
However, if we take $b=1,$ then the image of $[0,1/2)$ is not open in the circle $a([0,1)).$ The difference is that when you take the complement of the image in the circle, the disappearance of the image of zero leaves an 'open endpoint' on the other side, so the complement is not closed in the circle. This didn't happen in the case with $b<1$, since there the other end of the arc didn't touch zero. |
Transformation Definition | No. If you assume $AS=SB$, then $T(S)$ is a zero map for all $S$. This cannot happen if $A, B$ have no common eigenvalues. |
To show martingale, what other conditions do you need to check if the process has zero drift? | If an Ito process has no drift, then we have a process of the type $I_t = \int_0^t H_u dW_u$ (i.e. the stochastic differential is $dI_u = H_u dW_u$). The previous integral is defined for integrand processes $H$ such that $H$ is $\{ \mathscr{F}_t \}$-adapted and $\int_0^t |H_u|^2 du < \infty $ a.s. for every $t>0.$ But $I_t$ defined like that is ("only") a local martingale. For $I_t$ to be a (true) martingale we have to check that $E \int_0^t |H_u|^2 du < \infty $ for every $t>0.$ |
$4$ digit numbers with even digits | First question: $4\cdot 5\cdot 5\cdot 5$. Second question: $8\cdot 9\cdot 9-7\cdot 8\cdot 8$, i.e. the number of three-digit numbers without a $7$ minus the number of three-digit numbers containing neither $7$ nor $8$. |
rotation after translation as translation after rotation | Yes, you can. I don't know.
Originally, I assumed you meant applying the operations as rotation-rotation-translation, but since you mean rotation-translation-rotation, my reasoning does not apply to your problem. I'll leave it up here anyway, for history.
Think of it in terms of general matrix multiplication: the order of the multiplications on the left hand side here is unimportant, since matrix multiplication is associative.
In other words, if you are transforming a vector $v$, you have the first transformation as $v'=Rtv$. Applying the second rotation, $R'$, you obtain $R'v' = R'(Rtv) = (R'R)tv = R''tv$, again because of the associativity of matrix multiplication.
Note, though, that the order in which you apply these transformations is important - matrix multiplication is in general not commutative, and rotational matrices are no exception. |
Area and Volumes of revolution using disc method | Let's bring ourselves down to the 2D case first. One might ask why can I approximate the area under a curve over some small dx by assuming the top is flat and at a fixed height y, when in fact I cannot approximate the length of an arc over some small dx by assuming the same thing.
Without getting too "mathy" this should be intuitive. Imagine the arc and splitting the area underneath into small rectangles. Each time we split into smaller and smaller rectangles our approximations always get better. This gives us the notion that as we go to infinity our approximation will be exact. However when we imagine doing the same thing with length our approximation always stays the same and is not converging anywhere. (It is always just the length of the portion of the x axis underneath)
This is similar to the notion that if I draw a right triangle I cannot say that c = a + b. Even though you could imagine that I approximate the diagonal with a zig zag of infinitely small "steps". This doesn't work because I am not reducing the error in the approximation in each application of making the steps smaller. So the length of the zig zag is in fact not approaching the length of the diagonal. (It is staying exactly the same). However the area under the zig zag is clearly visually approaching the area under the initial right triangle.
So we see that arc length does not do well under these naive approximations. We need to do something different (in this case approximate with tiny diagonal lines rather than horizontal). In the same manner think about where the volume comes from when we do the rotation. It comes from the area under the curve so it is intuitive that the same approach of using dx should work. The surface area however results by rotating a piece of arc. So it seems unlikely that this method would suddenly work.
For a less hand wavy explanation we can do the same reasoning of asking does the approximation get better as I make things smaller and smaller. In the case of volume the answer is yes. In the case of surface area the answer is no (Do the computation of approximating the surface area of a cone with a cylinder that has the average radius of the region. First just with one cylinder with radius r/2. And then with two cylinders with the top one having radius r/4 and the bottom one having radius 3r/4 and so on. The approximation doesn't go anywhere and clearly is not correct from the get go)
I'm sure you can find more technical answers that dive into analysis of the error of the approximation converging to 0 in one case and not in the other, but in my opinion sticking with the intuitive is the way to go. Hope this helps! |
If $m,M$ are the minimum and maximum value of $\alpha^2+\beta^2$,then find $m+M.$ | Your delta must be greater than (or equal to) $0$:
$$\Delta\geqslant 0$$
$$k^2-4(k^2+k-5)\geqslant 0$$ |
Solve for given probability sum? | Hint: Use the formula for the sum of the infinite geometric series, $\frac{a}{1-r}$, where $a$ is the first term of the series and $r$ is the common ratio between successive terms. |
Suppose $X$ is a finite set and $f : X \to X$ is a function. Then $f$ is injective if and only if $f$ is surjective. | Your proof is a little informal. Normally this wouldn't be a problem, but the issue is that the formal version of your "counting" argument is precisely what you wish to prove. The proof is not as simple as one would initially be led to believe.
Let's prove the following statement. Your desired statement will follow.
Let $n\in\mathbb{N}$ and let $f:\{1,\ldots,n\}\to\{1,\ldots,n\}$ be a function. Then $f$ is injective iff $f$ is surjective.
Proof:
We use induction. This clearly holds whenever $n=1$. Fix $n\in\mathbb{N}$ and suppose that every function from $\{1,\ldots,n\}$ into $\{1,\ldots,n\}$ is injective iff it is surjective. Let $f:\{1,\ldots,n+1\}\to\{1,\ldots,n+1\}$ be a function.
Suppose $f$ is injective. Denote $m:=f(n+1)$, and observe that since $f$ is injective, the function $g:\{1,\ldots,n\}\to\{1,\ldots,n+1\}\setminus\{m\}$ given by $g(k)=f(k)$ is well-defined and one-to-one. Define the injection $h:\{1,\ldots,n+1\}\setminus\{m\}\to\{1,\ldots,n\}$ by
$$
h(k) = \begin{cases}
k & \text{if}\ k<m, \\
k-1 & \text{if}\ k>m.
\end{cases}
$$
Due to the fact that the composition of injective functions is injective, we have that $h\circ g:\{1,\ldots,n\}\to\{1,\ldots,n\}$ is one-to-one. By the induction hypothesis, $h\circ g$ is onto and hence $h$ is onto. Therefore $g=h^{-1}\circ(h\circ g)$ is a composition of surjective functions, which means that it is surjective. Consequently $f$ is surjective.
(I'll leave the converse to you). $\square$
As far as the examples go, the exponential function works perfectly. However, a logarithmic function doesn't work because it isn't defined on all of $\mathbb{R}$ (and it is injective). To fix this, you could define $f:\mathbb{R}\to\mathbb{R}$ by $f(x)=\ln|x|$ if $x\ne0$ and $f(0)=0$.
Edit: Actually, one can prove that this statement never holds whenever $X$ is infinite. Suppose $X$ is an infinite set. Then there is a countably infinite subset $\{d_1,d_2,\ldots\}$ of $X$. (Caution: This uses the axiom of choice.) Define $f_i,f_s:X\to X$ by
$$
f_i(x)=\begin{cases}
d_{i+1} & \text{if}\ x=d_i\ \text{for some}\ i, \\
x & \text{otherwise},
\end{cases}
\quad\text{and}\quad
f_s(x)=\begin{cases}
d_{i-1} & \text{if}\ x=d_i\ \text{for some}\ i>1, \\
x & \text{otherwise}.
\end{cases}
$$
Then $f_i$ is an injection and $f_s$ is a surjection, but neither is a bijection. |
Greatest number you can write using 3 symbols of a standard keyboard. | Using tetration, you can get quite a large number. ${^{n}a}$ is defined as the power tower $a^{a^{\cdot^{\cdot^{a}}}}$ with $n$ copies of $a$, evaluated from the top down. In other words, it is repeated exponentiation.
Therefore, using just ^, 9, 9 you can get:
${^{9}9}$
which is veeeery big. |
Why does no undirected graph has eigenvalue $\sqrt{2+\sqrt{5}}$? | The adjacency matrix is a symmetric real matrix, hence all eigenvalues are real (spectral theorem).
But the matrix is also rational, hence any algebraic conjugate of an eigenvalue is again an eigenvalue. Among the algebraic conjugates of $\sqrt{2+\sqrt 5}$ is $\sqrt{2-\sqrt 5}=\sqrt{\sqrt5-2}\,i\notin\Bbb R$. |
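For reference (a short addition, not part of the answer above), the conjugates can be read off from the minimal polynomial: if $x=\sqrt{2+\sqrt 5}$ then
$$x^2=2+\sqrt5 \implies (x^2-2)^2=5 \implies x^4-4x^2-1=0,$$
and this rational polynomial is irreducible over $\Bbb Q$ with roots $\pm\sqrt{2+\sqrt5}$ and $\pm\sqrt{2-\sqrt5}$, the latter pair being non-real.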
Find domain and range of given function | $\frac{x^{2}+1}{x^{2}+2} = 1- \frac{1}{x^{2}+2}$, and $x^{2}\geq 0 \Longrightarrow \frac{1}{2}\leq \frac{x^{2}+1}{x^{2}+2}< 1$; the domain of $\sin^{-1}x$ is $[-1,1]$.
$[-1,1] \cap [\frac{1}{2}, 1) = [\frac{1}{2},1)$, and $\frac{1}{2}\leq \frac{x^{2}+1}{x^{2}+2}< 1 \Longleftrightarrow 0 < \frac{1}{x^{2}+2} \leq \frac{1}{2}$, which is true $\forall x \in\mathbb{R}$.
Hence the domain is $\mathbb{R}$.
Range: $\frac{1}{2}\leq \frac{x^{2}+1}{x^{2}+2}< 1 \Longrightarrow \sin^{-1}\left(\frac{1}{2}\right)\leq\sin^{-1}\left(\frac{x^{2}+1}{x^{2}+2}\right)< \sin^{-1}(1)$, since $\sin^{-1}$ is increasing.
$\Longrightarrow$ Range $= \left[\frac{\pi}{6},\frac{\pi}{2}\right)$ |
How Much water can a tank hold? | HINT: Let $V$ be the volume of the large tank and $v$ be the volume of the smaller tank. Then from the problem, you can derive that
$$V=v+500$$
$$\frac{1}{2}V=\frac{2}{3}v$$
Can you solve this system of equations? |
Metrically bounded equivalent to order bounded for $\mathbf{R}$ | If $Y$ is the empty set then any $r > 0$ does the job.
Otherwise choose an arbitrary $y' \in Y$. Then
$$
|y| = |y-y' + y'| \le |y-y'| + |y'| \le b + |y'|
$$
for all $y \in Y$, i.e. you can choose $r = b + |y'|$. |
Let K/F be a transcendental extension, then does every F-homomorphism have to be an automorphism? | Clearly $X \mapsto X^2$ is an $F$-algebra endomorphism of $K=F(X)$ that is not an automorphism. |
Are there standard terms for more fine/coarse grained (but otherwise consistent) ways of ordering values? | I would call the relevant maps here order-preserving surjections. Alternately, if instead of using the language of partial orders you want to use the language of preorders, you could say that a preorder $\le$ refines a preorder $\le'$ on the same set if whenever $x \le y$ it is also true that $x \le' y$. |
Check Solution of the following ODE: $\frac{\mathrm{d}V}{\mathrm{d}t} = - \frac{V}{RC} + \frac{I}{C}$ | $$\frac{dV}{dt} = - \frac{V}{RC} + \frac{I}{C}$$
You had an extra $C$ in your attempts:
$$\frac{dV}{dt} = -\frac 1 {RC} \left ({V} - {RI}\right)$$
$$\frac{dV}{\left ({V} - {RI}\right)} = -\frac {dt} {RC} $$
Integrate and don't forget the constant of integration.
$$\ln{\left |{V} - {RI}\right|} = K-\frac {t} {RC} $$
$${\left |{V} - {RI}\right|} = Ke^{-\frac {t} {RC} }$$
$${V(t)} = {RI} + Ke^{-\frac {t} {RC} }$$
Maybe you have initial condition so that you can determine the value of the constant $K$ ? It seems that at $t=0$ you have $V(t)=0$ so that the constant $K=-RI$
$$\implies V(t)=RI(1-e^{-t/(RC)})$$ |
Minimum value of the expression given below. | You can see $a^3+b^3+c^3-3abc=\frac{1}{2}[(a-b)^2+(a-c)^2+(c-b)^2](a+b+c)$
For $[(a-b)^2+(a-c)^2+(c-b)^2]$, since they are three different integers, this is greater or equal to $[(0-1)^2+(0-2)^2+(1-2)^2]=6$;
For $(a+b+c)$, $(a+b+c)=\sqrt{(a+b+c)^2}\geq \sqrt{3(ab+bc+ca)}\geq\sqrt{3\cdot 107}>17$. Since they're integers, we have $a+b+c\geq 18$.
Check for $a=5, b=6, c=7$, should be the minimum point. |
sufficient and necessary conditions for holomorphic functions | Hint: A holomorphic function has an antiderivative on the domain $D$ if and only if $\int_\gamma f(z)\,dz = 0$ for every closed curve in $D$. What does that tell you about how $f$ must behave at $z = \pm 1$? (I assume you mean $\mathbb{C} \setminus \{ -1, 1 \}$, rather than $\mathbb{C} \setminus \{ 0, 1 \}$.) |
How to solve this limit of a function? ($\cos^3x$) | Hint:
$$
\begin{align}
\frac{1-\cos^3(x)}{x\sin(x)}
&=(1+\cos(x)+\cos^2(x))\frac{1-\cos(x)}{x\sin(x)}\\
\end{align}
$$ |
Is there a better way to input an $n$-cycle in GAP? | Thank you, it's a good question. In brief, your suggestion to use PermList is right, but its argument may be assembled faster:
alpha:=PermList(Concatenation([2..n],[1]));
Now a bit longer version of the reply with timings. First, another alternative is MappingPermListList:
gap> n:=5;;MappingPermListList([1..n],Concatenation([2..n],[1]));
(1,2,3,4,5)
Nevertheless, your suggestion with PermList is already slightly faster:
gap> n:=100000;;for i in [1..100] do alpha:=PermList(List([1..n],i->(i mod n)+1));od;time;
984
gap> n:=100000;;for i in [1..100] do alpha:=MappingPermListList([1..n],Concatenation([2..n],[1]));od;time;
1361
However, one could do even better, avoiding the (i mod n)+1 computation and using Concatenation instead. Then the performance is ~20 times faster:
gap> n:=100000;;for i in [1..100] do alpha:=PermList(Concatenation([2..n],[1]));od;time;
47
Summarising, I'd recommend to use PermList in this particular case when one needs to create a cycle $(1,2,...n)$. In a general case, one should choose between PermList and MappingPermListList depending on the data available.
P.S. Note that the naive approach to multiply transpositions does not scale well at all:
gap> n:=5;; alpha:=(1,2);;for j in [3..n] do alpha:=alpha*(1,j);od; alpha;
(1,2,3,4,5)
gap> n:=100000;; alpha:=(1,2);;for j in [3..n] do alpha:=alpha*(1,j);od; time;
14429 |
If $u,v,w$ are linearly independent, then is it true that $Tu, Tv, Tw$ are linearly independent? | Not necessarily, as you can see from Nick's example.
This would be guaranteed if the linear transformation $T$ were injective, that is, iff $\operatorname{Ker} T = \{0\}$. |
Is it enough to show that $\lim_{x\rightarrow 0}\cos(1/x)$ doesn't exist to show that $\lim_{x \rightarrow0}(2x\sin(1/x)-\cos(1/x))$ doesn't exist? | The contradiction comes from the following theorem:
If $\lim_{x\to a}f(x)$ and $\lim_{x\to a} g(x)$ both exist, then $\lim_{x\to a}(f(x)+g(x))$ exists (and is equal to $\lim_{x\to a}f(x)+\lim_{x\to a}g(x)$).
This theorem is being applied with $f(x)=\cos(1/x)-2x\sin(1/x)$ and $g(x)=2x\sin(1/x)$. You have assumed that $\lim_{x\to 0}f(x)$ exists, and you have proved that $\lim_{x\to 0}g(x)$ exists. The theorem then tells you that $\lim_{x\to 0}(f(x)+g(x))=\lim_{x\to0}\cos(1/x)$ exists. Since it doesn't exist, this is a contradiction.
Note that it is absolutely essential to prove that $\lim_{x\to 0}g(x)$ exists here. In particular, the following statement which it seems you intend to use (with $h(x)=\cos(1/x)$ and $g(x)=2x\sin(1/x)$) is not true in general:
(FALSE) If $\lim_{x\to a}h(x)$ does not exist, then $\lim_{x\to a}(h(x)-g(x))$ does not exist.
For instance, $\lim_{x\to 0}\frac{1}{x}$ does not exist, but $\lim_{x\to 0}\left(\frac{1}{x}-\frac{1}{x}\right)$ does exist since $\frac{1}{x}-\frac{1}{x}=0$ for all $x\neq 0$.
The FALSE statement above, however, is true if $\lim_{x\to a}g(x)$ exists. The proof is exactly the argument given above: define $f(x)=h(x)-g(x)$, and suppose for a contradiction that $\lim_{x\to a} f(x)$ does exist. Then $\lim_{x\to a}(f(x)+g(x))$ would exist, but this is $\lim_{x\to a} h(x)$ which we know does not exist. |
What does $E(f)(n)$ and $E^5(n^2)$ mean? | $E(f)$ is a function from $\mathbb{N}\rightarrow\mathbb{N}$. On the other hand, $E(f)(n)$ is the function $E(f)$ evaluated at $n$. Now, $E$ is a function that maps functions to functions, (pause and think for a bit), or more formally we write
$$E:\{f:f\text{ is a function }\mathbb{N}\rightarrow \mathbb{N}\}\rightarrow \{f:f\text{ is a function }\mathbb{N}\rightarrow \mathbb{N}\}.$$
So regarding $f(n)=n^2$ as the squaring function, we have
$$E(f)(n)=2^{n^2}.$$
This means that $E(f)$ is the function that maps $n\in\mathbb{N}$ to $2^{n^2}\in\mathbb{N}$. Now repeating,
$$E^2(f)(n)=E(E(f))(n)=2^{(2^{n^2})}.$$
So $E^2(f)$ is the function that maps $n\in\mathbb{N}$ to $2^{(2^{n^2})}\in\mathbb{N}$. Continuing this way, you can calculate $E^5(n^2)$, which is a function not a number. |
Convergence or divergence of $\sum\limits^{\infty}_{n=0} \frac{(2n+1)^{2}} {3^{n}(2n)!}$ | $$\dfrac{(2n+1)^2}{(2n)!} = \dfrac{4n^2+4n+1}{(2n)!} = \dfrac{(2n)(2n-1)+6n+1}{(2n)!} = \dfrac1{(2n-2)!}+ 3 \cdot \dfrac1{(2n-1)!} + \dfrac1{(2n)!}$$
Now $$\sum_{n=0}^{\infty}\dfrac{(2n+1)^2}{(2n)!}x^n = \sum_{n=0}^{\infty}\dfrac{x^n}{(2n)!} + 3 \cdot\sum_{n=1}^{\infty}\dfrac{x^n}{(2n-1)!} + \sum_{n=1}^{\infty}\dfrac{x^n}{(2n-2)!}$$
\begin{align}
\sum_{n=0}^{\infty}\dfrac{x^n}{(2n)!} & = \dfrac{\exp(\sqrt{x})+\exp(-\sqrt{x})}2\\
\sum_{n=1}^{\infty}\dfrac{x^n}{(2n-1)!} & = \sqrt{x} \cdot \dfrac{\exp(\sqrt{x})-\exp(-\sqrt{x})}2\\
\sum_{n=1}^{\infty}\dfrac{x^n}{(2n-2)!} & = x \cdot \dfrac{\exp(\sqrt{x})+\exp(-\sqrt{x})}2
\end{align}
Hence,
$$\sum_{n=0}^{\infty} \dfrac{(2n+1)^2}{(2n)!} x^n = \dfrac{(x+3\sqrt{x}+1) \exp(\sqrt{x})+(x-3\sqrt{x}+1) \exp(-\sqrt{x})}2$$ |
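A quick numeric check at $x=\tfrac13$, the value relevant to the series in the title (a Python sketch, not part of the derivation above):

    # compare partial sums of sum (2n+1)^2 x^n / (2n)! at x = 1/3 with the closed form
    import math
    x = 1 / 3
    partial = sum((2 * n + 1)**2 * x**n / math.factorial(2 * n) for n in range(20))
    r = math.sqrt(x)
    closed = ((x + 3 * r + 1) * math.exp(r) + (x - 3 * r + 1) * math.exp(-r)) / 2
    print(partial, closed)   # the two agree, so the series converges to the closed form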
path space is contractible | Yes, I think all that is exactly right. In fact $H$ appears to give a strong deformation retraction to $\{\gamma_*\}$, since it's constant on $\gamma_*$.
The only slightly unclear part to me is why $H$ is continuous. Checking that requires going into the topology defined on $PX$. But that shouldn't be hard. |
Show at most one solution exists for Cauchy Problem | My proposed solution:
Let $w = u-v.$ Then $w_t - \Delta w = |v_{x_1}| - |u_{x_1}|.$
By the reverse triangle inequality, we have,
$$|w_{x_1}| = |u_{x_1}-v_{x_1}| \geq \big| |u_{x_1}| - |v_{x_1}| \big| \geq |u_{x_1}| - |v_{x_1}| = -( |v_{x_1}| - |u_{x_1}|),$$
meaning
$$ -|w_{x_1}| \leq w_t - \Delta w \;\text{ or } \; w_t - \Delta w + |w_{x_1}| \geq 0.$$
My claim, which I'll prove after this solution, is that $\min_{U_T} w = \min_{\partial U_T} w,$ where $U_T = \{ (x,t) \, | \, t \geq 0, x \in \mathbb{R}^n\}.$ We see that if this is true, then $\min_{U_T} (u-v) = 0,$ since $w$ decays at infinity and $w(x,0) = 0.$
Similarly, we could go through the same exact steps for $w = v-u,$ and conclude that $\min_{U_T} (v-u) = -\max_{U_T} (u-v) = 0,$ meaning that $u-v=0$ and therefore we have uniqueness.
Proof of claim: Let $z = w + \mu e^{t}$ where in the end, we will let $\mu$ go to zero. Differentiating, $$z_t = w_t + \mu e^{t} \geq \Delta z - |z_{x_1}| + \mu e^{t}.$$
Assume $z$ takes a minimum inside the domain. Then $z_t \leq 0,\; -\Delta z \leq 0, \;z_{x_1} = 0,$ so $z_t - \Delta z - \mu e^{t} < 0$ since $-\mu e^{t} < 0,$ thus a contradiction with the fact that $z_t - \Delta z + |z_{x_1}| - \mu e^{t} \geq 0$ for all $(x,t).$ Letting $\mu \to 0,$ we see that $w$ must take its minimum on the boundary of the domain, meaning at $t = 0.$
If anyone spots any errors, please comment! |
How do I show $\|(e^{-2\pi ihx_j} - 1)/h \| \leq 2 \pi \|x\|$? | The inequality $|e^{i\theta}-1| \le |\theta|$ is clear from geometry: the LHS is the Euclidean distance between $e^{i\theta}$ and $1$ as complex numbers on the unit circle, while the RHS is the length of the arc joining them (the radian measure). Now set $\theta=-2\pi h x_j$ and you are done. |
Klenke's construction of Brownian motion | Let $0=t_0<t_1<\cdots <t_n \in \mathbb R$ ($0<n$) and define $J_0:=\left\{t_0, t_1, \dots, t_n\right\}$, $J:=\left\{t_1, \dots, t_n\right\}$.
Define $\underline{X}_0 := \left(X_{t_0}, X_{t_1}, X_{t_2}, \dots , X_{t_n}\right)$, $\underline X := \left(X_{t_1}, X_{t_2}, \dots , X_{t_n}\right)$, $\Delta\underline{X}_0 := \left(X_{t_1}-X_{t_0}, X_{t_2}-X_{t_1}, \dots, X_{t_n}-X_{t_{n-1}}\right)$. Note that, since $\mathbb P_{X_0}=\mathcal N_{0,0}$ (i.e. $X_0=0\space\space a.s.$), $\mathbb P_{\Delta\underline{X}_0}=\mathbb P_{\left(X_{t_1}, X_{t_2}-X_{t_1}, \dots, X_{t_n}-X_{t_{n-1}}\right)}$.
Let $\underline Y := \left(Y_1, Y_2, \dots, Y_n\right)$ be independent and such that $\mathbb{P}_{Y_i} = \mathcal{N}_{0, t_{i}-t_{i-1}}, \space i=1,\dots,n$. I'll show that $\mathbb{P}_{\Delta\underline{X}_0}=\mathbb{P}_\underline{Y}$.
From Corollary 14.44, $\mathbb{P}_{\underline{X}_0}=\delta_0\otimes\bigotimes_{i=1}^n\kappa_{t_i-t_{i-1}}$. To each rectangle $Q_0=B_{t_0}\times \underbrace{B_{t_1}\times\cdots\times B_{t_n}}_{=:Q}\in \times_{i=0}^n \mathcal B$, we get $\mathbb{P}_{\underline{X}_0}\left(Q_0\right)=\mathbb 1_{B_{t_0}}(0)\cdot\bigotimes_{i=1}^n\kappa_{t_i-t_{i-1}}\left(0, Q\right)$. So, based on Theorem 14.28, $\mathbb{P}_{\underline{X}_0}\left(Q_0\right)=\mathbb 1_{B_{t_0}}\left(0\right)\cdot\mathbb{P}_\underline{S}\left(Q\right)$, where $\underline{S}=\left(S_1,S_2,\dots,S_n\right), \space S_m:=\sum_{i=1}^m Y_i \space\space \left(m=1,\dots,n\right)$. Hence $\mathbb P_{\underline{X}}\left(Q\right)=P_{\underline{X}_0}\left(\mathbb R \times Q\right)=P_{\underline{S}}\left(Q\right)$. As the rectangles $Q$ comprise a $\pi$-system that generates $\mathcal B_J$, $\mathbb P_{\underline{X}}=P_{\underline{S}}$ (cf. Lemma 1.42, "Uniqueness by an $\cap$-closed generator", p. 20).
So $\mathbb{P}_{\Delta \underline{X}}=\mathbb{P}_{\left(S_1, S_2-S_1, \dots, S_n-S_{n-1}\right)}=\mathbb{P}_\underline{Y}$. |
Minimize $f(x, y, z) = \frac{a}{x} + \frac{b}{y} +\frac{c}{z}$ with $x+y+z=1$ | Another way to do the problem is to use Lagrangian optimisation. Then you would try to minimize the Lagrangian $$L(x,y,z,\lambda)=f(x,y,z)-\lambda(x+y+z-1)$$
Computing partial derivatives gives: $$\frac{\partial L}{\partial x}=-\frac a{x^2}-\lambda\\\frac{\partial L}{\partial y}=-\frac b{y^2}-\lambda\\\frac{\partial L}{\partial z}=-\frac{c}{z^2}-\lambda\\\frac{\partial L}{\partial \lambda}=-(x+y+z-1)$$
We require these to all be $0$. The fourth equation is then just the constraint, while the others say the minimum occurs at $$(x,y,z)=\left(\sqrt{-\frac a\lambda},\sqrt{-\frac b\lambda},\sqrt{-\frac c\lambda}\right)$$
Substituting this into the constraint gives $$\sqrt{-\frac a\lambda}+\sqrt{-\frac b\lambda}+\sqrt{-\frac c\lambda}=1$$ We can solve this to give $$\lambda=-\left(\sqrt a+\sqrt b+\sqrt c\right)^2$$And so the solution to the problem is the same as what you got.
You can then use the bordered Hessian method to determine if it's a minimum or maximum.
Note: if you wanted to check this some other way, you would not want to show that $\forall x,y,z,a,b,c>0$ $$\frac ax+\frac by+\frac cz\ge\left(\sqrt a+\sqrt b+\sqrt c\right)^2$$ since this loses sight of the constraint $x+y+z=1$. Without the constraint, we can just take $x,y,z$ to be very large, and then the LHS is clearly smaller than the RHS.
In fact, what needs to be verified is the above inequality $\forall a,b,c>0$ and $\forall (x,y,z)\in P$ where $P$ is the part of the 2-D plane $x+y+z=1$ that is in the region $x,y,z>0$.
It is hard to capture this constraint correctly by simply substituting something in. The best way to try to prove this is to find the minimum of the LHS minus the RHS. This is the same optimisation problem as before, and is most easily solved / proven in the same way as before, with a Lagrangian.
I say this to reassure you that there's no need to be "more sure" about the answer being correct - if it is what you get from minimising the function while considering the constraint, then it is correct. Proving such inequalities is a lot more difficult and not necessary. |
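If you do want a quick sanity check, here is a small numeric sketch (Python, my own addition) comparing the claimed minimizer with random feasible points:

    # minimum of a/x + b/y + c/z on x + y + z = 1 should be (sqrt(a)+sqrt(b)+sqrt(c))^2
    import math, random
    a, b, c = 2.0, 3.0, 5.0
    s = math.sqrt(a) + math.sqrt(b) + math.sqrt(c)
    xs = (math.sqrt(a) / s, math.sqrt(b) / s, math.sqrt(c) / s)   # claimed minimizer
    f = lambda x, y, z: a / x + b / y + c / z
    print(f(*xs), s**2)                                           # these two agree
    for _ in range(5):                                            # random feasible points do worse
        u, v = sorted(random.random() for _ in range(2))
        print(f(u, v - u, 1 - v) >= s**2)                         # always True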
Error formula when using a polynomial interpolation | For some fixed $\tilde{x} \notin \{ x_0, \dots, x_n \}$ construct the function
$$
F(x) = f(x) - p_n(x) - g(\tilde{x}) \prod_{i=0}^n (x - x_i),
$$
where $g(\tilde{x})$ is defined such that $F(\tilde{x}) = 0$, i.e.
$$
g(\tilde{x}) = (f(\tilde{x}) - p_n(\tilde{x})) \left( \prod_{i=0}^n (\tilde{x} - x_i) \right)^{-1}.
$$
$F(x)$ has $(n+2)$ distinct roots, namely $x_0, \dots, x_n, \tilde{x}$. From what you stated about continuity of the derivatives of $f$ it follows that $F$ is $(n+1)$ times continuously differentiable.
Now, apply Rolle's theorem $(n+1)$ times, from which it follows that there is some $t$ such that $F^{(n+1)}(t) = 0.$ Since $p_n$ is a polynomial of degree $n$ we have
$$
F^{(n+1)}(x) = f^{(n+1)}(x) - g(\tilde{x})(n+1)!
$$
Inserting $t$ into the above expression gives us
$$
g(\tilde{x}) = \frac{f^{(n+1)}(t)}{(n+1)!},
$$
which completes the proof. |
Missing finitely many points by rational curve with parametrization by rational functions and rational curve self intersection | I think it is not possible for any plane algebraic curve $C$ to have infinitely many self-intersections. If $p$ is a self-intersection then locally around $p$ the curve looks like $f=f_1f_2=0$, thus $df|_p=0$. So, infinitely many points of self-intersection give us infinitely many points where $df=0$, i.e. infinitely many zeroes of the system of equations $f=0$, $\frac{\partial f}{\partial x}=0$ and $\frac{\partial f}{\partial y}=0$. But if two polynomials in two variables have infinitely many common zeroes they have a common irreducible component. Therefore, $\frac{\partial f}{\partial x}=0$ and $\frac{\partial f}{\partial y}=0$ along $C$, and every point of $C$ is singular. That is not possible because singular points form a proper closed subset of a variety.
If you have such a parametrization, your curve is necessarily rational. This is because the existence of a nonconstant rational map from $\mathbb{P}^1$ to the curve tells us that the curve is unirational. On the other hand, Luroth's theorem tells us that any unirational curve is rational. |
Classification of groups of order 66 | As you say, it's a semidirect product of $Z_2$ and $Z_{33}$. But there are other
ways for $Z_2$ to act on $Z_{33}$. If $a$ and $b$ are generators of $Z_{33}$ and $Z_2$
then $bab^{-1}=a^r$ where $r^2\equiv1\pmod{33}$. But there are four possible $r$
modulo $33$ solving this, not just $\pm1$, there are $\pm10$ also. This gives two
other groups, which are $Z_3\times D_{11}$ and $Z_{11}\times D_3$. |
The Generalized Pigeonhole Principle | Your inequality $\lceil N/50\rceil\ge 100$ is correct. When $N=4951$, we have $N/50=99.02$, so $\lceil N/50\rceil=100$. Moreover, this is the smallest value of $N$ for which $\lceil N/50\rceil\ge 100$. |
Prove that $\cos\left(\frac{2\pi}{n}\right)+\cos\left(\frac{4\pi}{n}\right)+\ldots+\cos\left(\frac{2(n-1)\pi}{n}\right)=-1$ | Hint:
Compute
$$e^{i\frac{2\pi}{n}} + \cdots + e^{i\frac{2(n - 1)\pi}{n}}$$
then compare real and imaginary parts. |
Tensor product, injective homomorphism | The linear map $f$ sends a basis $\mathcal{B}$ of $A \otimes B$ to a linearly independent set in the image. Therefore $f(\mathcal{B})$ is a basis of the subspace $im(f)$. Any linear map that sends a basis to a basis is an isomorphism. The map $f$ itself is therefore an isomorphism followed by an inclusion. |
How to compute the number of Sylow p-subgroups of $GL_{n}(F_{p})$ ? | You know that a Sylow $p$-subgroup is of the form
$$T = \left\{\begin{bmatrix}
1 & a_{12} & a_{13} & \dots & a_{1,n-1} & a_{1n}\\
0 & 1 & a_{23} & \dots &a_{2,n-1} & a_{2n}\\
&&&\ddots\\
0 & 0 & 0 & \dots &1 & a_{n-1n}\\
0 & 0 & 0 & \dots & 0 & 1\\
\end{bmatrix}
: a_{ij} \in F_{p} \right\}
$$
Find its normaliser to be
$$N = \left\{\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \dots & a_{1,n-1} & a_{1n}\\
0 & a_{22} & a_{23} & \dots &a_{2,n-1} & a_{2n}\\
&&&\ddots\\
0 & 0 & 0 & \dots &a_{n-1,n-1} & a_{n-1,n}\\
0 & 0 & 0 & \dots & 0 & a_{nn}\\
\end{bmatrix}
: a_{ij} \in F_{p}, a_{ii} \ne 0 \right\}.
$$
The index of the normalizer in $\operatorname{GL}_{n}(F_{p})$ is the number of the $p$-Sylow subgroups.
To determine the normaliser of $T$, let $e_{i}$ be the standard basis, let $V_{i} = \langle e_{1}, e_{2}, \dots, e_{i} \rangle$. Note that $T V_{i} \subseteq V_{i}$, and that the $V_{i}$ are the only subspaces $U$ such that $T U \subseteq U$. If $g \in N_{G}(T)$, then for $t \in T$ we have $g^{-1} t g = s \in T$, so that $g s V_{i} = g V_{i} = t (g V_{i})$. So $U = g V_{i}$ is a subspace such that $T U \subseteq U$, from which it follows that $g V_{i} = V_{i}$ for each $i$, which means $g \in N$.
Thanks to i707107 for noticing a misprint. |
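For reference (a short addition, using the two orders above): $|\operatorname{GL}_{n}(F_{p})| = p^{n(n-1)/2}\prod_{k=1}^{n}(p^{k}-1)$ and $|N| = (p-1)^{n}\,p^{n(n-1)/2}$, so the number of Sylow $p$-subgroups works out to
$$[\operatorname{GL}_{n}(F_{p}):N] = \prod_{k=1}^{n}\frac{p^{k}-1}{p-1} = \prod_{k=1}^{n}\left(1+p+\cdots+p^{k-1}\right),$$
which for $n=2$ gives the familiar count $p+1$.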
Locus of the Centers of Circles. | Let $S$ be the center of circle with a (variable) radius $r$ which touches both circles.
Since $F(0,0)$ is the center of the first and $F'(2a,0)$ is the center of the second circle we have:
$$SF' -SF = (r+2a)-(r+a) = a$$ so $S$ describes one branch of a hyperbola with foci $F$ and $F'$. |
Prove two topologies are the same in a vector space | I suppose that this is what the commutative diagram is intended to express, but I am not sure about the notation. The point is that if $\phi$ and $\psi$ are the homeomorphisms associated to two different choices of bases, there is an invertible $n\times n$-matrix $A$ such that $\psi(v)=A\phi(v)$, and left multiplication $m_A$ by the invertible matrix $A$ defines a homeomorphism $\mathbb R^n\to \mathbb R^n$ (with inverse given by $m_{A^{-1}}$). But this exactly says that $id_E$ can be written as $\psi^{-1}\circ m_A\circ \phi$ so it is a homeomorphism from the topology on $E$ defined by $\phi$ to the topology on $E$ defined by $\psi$. |
A+B=AB for matrices and infinite dimensional setting | No. Consider $\ell^2$. Define $S, T \in L(\ell^2)$ by
\begin{align*}
Sx &= (0,x_0, x_1, \ldots), \quad x = (x_0, x_1, \ldots)\in \ell^2\\
Tx &= (x_1, x_2, \ldots).
\end{align*}
Note that $TS = \def\I{\mathrm{Id}_{\ell^2}}\I$, but $ST \ne \I$. Now let $A := \I-T$, $B := \I- S$. Then
\begin{align*}
AB &= TS - S - T + \I\\
&= \I - \I + A - \I + B + \I\\
&= A + B\\
BA &= ST - T - S + \I\\
&= ST - \I + A -\I + B + \I\\
&= (ST - \I) + A + B\\
&\ne A + B = AB.
\end{align*} |
Finding the null space of symmetric matrix generated by outer product | First notice that any vector $w$ orthogonal to both $p$ and $q$ is the null space of $A$, since
$$Aw=p(q^Tw)+q(p^Tw)=0.$$
Thus the null space has dimension at least $n-2$. Since the eigenvectors $p\pm q$ correspond to eigenvalue $p^Tq\pm 1$, at least one of the two eigenvalues is non-zero, so the nullspace has dimension at most $n-1$.
Now we have two cases to cover :
$p,q$ are linearly independent: in this case the Cauchy-Schwarz inequality implies that both eigenvalues above are non-zero, and the null space has dimension $n-2$.
$p,q$ are not linearly independent: then since they have the same norm we must have $p=\pm q$, thus $\langle p,q\rangle$ has dimension $1$, and its orthogonal complement has dimension $n-1$.
In both case, the null space is the orthogonal complement of $\langle p,q\rangle$ because one is included in the other and their dimensions agree.
You can see the result even more easily, without considering eigenvalues and eigenvectors at all. The first equation shows that $\langle p,q\rangle^{\perp}\subset Ker A$. For the reverse inclusion, just notice that if $p,q$ are linearly independent then
$$0=Aw=pq^Tw+qp^Tw\Rightarrow q^T w =0=p^Tw,$$and if they aren't then the dimensions agree (as explained above); or you can notice that $p=\pm q$ and thus $A=\pm 2 qq^T$, and thus
$$0=Aw=\pm 2qq^Tw\Rightarrow q^Tw=0.$$ |
Joint CDF's of both continuous and discrete random variables | I think your first answer reverses what you want.
You could say $$\displaystyle F_Y(y) = \sum_x \mathbb{P}(X=x) F_{Y|X=x}(y)$$ and possibly even turn this into a density $\displaystyle f_Y(y) = \sum_x \mathbb{P}(X=x) f_{Y|X=x}(y)$
So your particular example would give $\displaystyle F_Y(y) = 1 - \tfrac16(e^{-y}+e^{-2y}+e^{-3y}+e^{-4y}+e^{-5y}+e^{-6y})=1-\frac{e^{-y}(1-e^{-6y} )}{6(1-e^{-y})}$ |
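A quick simulation check of that mixture formula (a Python sketch; it assumes, as the formula suggests, a fair die $X$ and, given $X=x$, an exponential $Y$ with rate $x$):

    # empirical P(Y <= y) from simulation vs. the mixture CDF above
    import math, random
    y, N = 0.5, 200000
    hits = sum(random.expovariate(random.randint(1, 6)) <= y for _ in range(N))
    exact = 1 - sum(math.exp(-x * y) for x in range(1, 7)) / 6
    print(hits / N, exact)   # the two should be close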
How many orthogonal basis does a set of vectors have? | Excluding the trivial case of the zero subspace, there are always infinitely many orthogonal bases, since we can scale the basis vectors as we want while preserving orthogonality.
Otherwise, for orthonormal bases we have
$2$ choices for $1$ dimensional subspaces
infinitely many choices for $n-$dimensional subspaces with $n\ge 2$ |
On the size of maximal clique and intersection number of graphs | Yes, it's true; what's more, the condition on maximal cliques of $G^c$, and the $-1$ in the inequality, are both unnecessary.
If $\omega(G^c) = k$, then there is a set of vertices $\{v_1, v_2, \dots, v_k\}$ forming a clique in $G^c$, and so none of the edges $v_iv_j$ exist in $G$. Therefore in any cover of $G$ by cliques, we are forced to put each vertex $v_i$ in its own clique, and so at least $k$ cliques are needed. This means that $\Omega(G) \ge k$. |
Relating some infinite summations | For all $x\in\mathbb{R}-\{1\}$ and $n\in\mathbb{N^\star}$, we have :
$$\sum_{k=0}^{n-1}x^k=\frac{1-x^n}{1-x}$$
Differentiating with respect to $x$ gives us (for $n\ge2$):
$$\sum_{k=1}^{n-1}kx^{k-1}=\frac{-nx^{n-1}(1-x)+(1-x^n)}{(1-x)^2}=\frac{(n-1)x^n-nx^{n-1}+1}{(1-x)^2}$$
Now, if we make the assumption that $\vert x\vert<1$, we know that $\lim_{n\to\infty}nx^n=0$ and therefore :
$$\sum_{k=1}^\infty kx^{k-1}=\frac{1}{(1-x)^2}$$
and of course (multiplying by $x$):
$$\sum_{k=1}^\infty kx^k=\frac{x}{(1-x)^2}$$ |
Finding the continuity of function | The limit as x approaches zero exists and is $1$ as you say. The limit does not care about what the function value is exactly at zero. The fact that $f(0)=0 \neq \lim_{x \to 0} f(x)$ says the function is not continuous there. |
Difference between normal convergence, pointwise convergence and uniform convergence | I believe when you say "normal" convergence that it actually is the same thing as pointwise convergence. What would it mean for a sequence of functions to converge to a function in a function space? Functions are defined by acting on elements of a given set, so the only way to check convergence would be to check how they act on their domain, but this is just how they act for each point, hence pointwise convergence.
So really there only is a distinction between pointwise and uniform convergence. This difference can best be summarized by the following: pointwise convergence is concerned with a single point at a time. Uniform convergence is concerned with ALL points in the domain at the SAME time. This is important because while a sequence of functions may converge pointwise, if it is converging at different "rates" at each point in may not converge uniformly.
For example,$f_n:(0,\infty)\to \mathbb{R}$ given by $f_n(x) = \frac{1}{nx}$ is a sequence of functions that converges pointwise to 0. This is clear because if you pick any $x_0\in (0,\infty)$, then $f_n(x_0) = \frac{1}{nx_0}$ is just a sequence of real numbers clearly converging to $0$. But $f_n$ does not converge uniformly to $0$, because you can always find, for some $\varepsilon$, a $\delta>0$ such that for all $x\in (0,\delta)$, $f_n(x) \geq \varepsilon$. The idea is when $x$ is large then clearly $f_n(x)=\frac{1}{nx}$ will be small for all values of $n\in\mathbb{N}$. But when you shrink $x$ you have to increase $n$ for $f_n(x)$ to remain small and this relationship between $x$ and $n$ implies that the convergence is not uniform as it depends on what point we pick for us to determine the convergence, whereas if it were uniform then for large enough $n$, EVERY $x$ would satisfy $f_n(x) <\varepsilon$ |
Every prime power ideal in a Noetherian Ring of dimension one can be written uniquely as a power of a prime. | It suffices to show that the chain
$$\mathfrak p\supseteq\mathfrak p^2\supseteq\mathfrak p^3 \supseteq \cdots$$
is strictly descending.
Suppose instead that $\mathfrak p^{n+1}=\mathfrak p^n$ for some positive integer $n$.
Let $M=\mathfrak p^n$, regarded as an $A$-module.
Since $A$ is a domain and $\mathfrak p\ne 0$, we get $M\ne 0$.
Since $A$ is Noetherian, $M$ is finitely generated.
From $\mathfrak p^{n+1}=\mathfrak p^n$ we get $\mathfrak pM=M$, hence by Nakayama's lemma
$\;\;\;\;$ https://en.wikipedia.org/wiki/Nakayama%27s_lemma#Statement
we get $aM=0$ for some $a\in A$ with $a\equiv 1\;(\text{mod}\;\mathfrak p)$.
But then $a\mathfrak p^n=0$, contradiction, since $a\ne 0$,$\;\mathfrak p^n\ne 0$, and $A$ is a domain. |
Find the $\frac{dy}{dx}$ for $x= \cos^{-1} \left(8t^4 - 8t^2 +1\right)$, $y= \sin^{-1} \left(3t-4t^3\right)$ | $$
\frac{dy}{dx} = -\frac{\sin x}{\cos y}\frac{3-12t^2}{32t^3-16t}
$$
I will leave the rest to do.
The key thing is when you have something like this
$$
y = f^{-1}(g(t))
$$
where $f^{-1}$ is a simple function, then it is much easier to compute this
$$
f(y) = g(t)
$$
and take derivatives and the invert again.
$$
\sin x = \sqrt{1-\cos^2 x}\\
\cos y = \sqrt{1-\sin^2y}
$$
using the above we can express the $\sin x, \cos y$ in terms of $t$.
Now for $\frac{dy}{dx}$ to be finite we require
$$
\cos y = \sqrt{1- \left(3t-4t^3\right)^2}\neq 0 \implies 1- \left(3t-4t^3\right)^2\neq 0\\
32t^3-16t = 32t\left(t^2-\frac{1}{2}\right)\neq 0
$$
For the first condition, the cubic equation for each sign leads to only one root: for $\pm 1$ we have $t = \mp 1$. Putting it all together, we can only have two domains
$$
0 < t < \frac{1}{\sqrt{2}}\\
-\frac{1}{\sqrt{2}} < t < 0.
$$
which does not match your original problem, which was $0 < t < 1/2$. |
Evaluate $\sum_{n=1}^N n {n \choose k}$ and get a closed form solution | Hint:
$$
\sum_{n=k}^N n {n \choose k} = \sum_{n=k}^N (n+1) {n \choose k} - \sum_{n=k}^N {n \choose k} = \sum_{n=k}^N (k+1) {n+1 \choose k+1} - \sum_{n=k}^N {n \choose k}
$$
and continue from there.... |
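For reference (not part of the hint): by the hockey-stick identity, $\sum_{n=k}^{N}\binom{n+1}{k+1}=\binom{N+2}{k+2}$ and $\sum_{n=k}^{N}\binom{n}{k}=\binom{N+1}{k+1}$, so the sum closes to
$$\sum_{n=k}^N n {n \choose k} = (k+1){N+2 \choose k+2}-{N+1 \choose k+1}.$$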
Expected number of coin tosses to get a skewed score | Here's a partial answer:
Let $X_i$ be the number of heads after $i$ tosses and $s$ your target ratio.
We can define a stochastic process $s_i:=\frac{1000+X_i}{2000+i}$
Define a stopping time, $\tau_s:=\{\inf i: s_i>s\}$
Note that $E[s_i|s_{i-1}]=0.5\left(s_{i-1}\frac{2000+i-1}{2000+i}\right)+0.5\left(s_{i-1}\frac{2000+i-1}{2000+i}+\frac{1}{2000+i}\right)=s_{i-1}\frac{2000+i-1}{2000+i}+\frac{1}{4000+2i}=s_{i-1}+\frac{1-2s_{i-1}}{4000+2i}$
As you've pointed out, $s\le 0.5 \implies \tau_s=0$, so we can focus on the case $s>0.5$. Note that this process is not any type of martingale, as its conditional expectation can be above, at, or below the current value, depending on whether its current value is below, at, or above $0.5$ (i.e., we see regression towards the mean). However, it is both approximately and asymptotically a martingale.
Thus, you need to find $E[\tau_s]$ for $s>0.5$.
Let's define a new stopping time: $\tau'_s:=\{\inf i: s_i>s>0.5\text{ or } s_i<t<0.5\}$. Since $s_i$ is approximately a martingale, let's try using the Optional Stopping Theorem to conclude
$E[s_{\tau'_s}]\approx E[s_0]=0.5=sP(s_{\tau'_s}=s)+t[1-P(s_{\tau'_s}=s)]=(s-t)P(s_{\tau'_s}=s)+t$
Thus:
$P(s_{\tau'_s}=s) = \frac{0.5-t}{s-t}$. Since we only care about $s$, let's set $t=0$:
$P(s_{\tau'_s}=s) = \frac{1}{2s}$. Unfortunately, $t=0,s>0.5 \implies P(s_{\tau'_s}=0)>0 \implies E[\tau'_s]=\infty$
The assumption of approximate martingale was actually conservative, since the regression-to-mean behavior will simply slow the processes approach to the cutoff. It looks like you can expect to stop immediately, or never (although there is a positive probability of stopping sometime before forever). |
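If you want to get a feel for these quantities, here is a rough Monte Carlo sketch (the trial count and the truncation at `max_steps` are arbitrary choices, and truncation biases the estimate downward); its output can be compared with the $1/(2s)$ heuristic above:

```python
import random

def hit_prob(s, trials=500, max_steps=20000):
    """Estimate P(the running ratio ever exceeds s), starting from 1000 heads
    in 2000 tosses; runs are truncated at max_steps, which biases the estimate down."""
    hits = 0
    for _ in range(trials):
        heads, tosses = 1000, 2000
        for _ in range(max_steps):
            heads += random.random() < 0.5     # fair coin: heads with probability 1/2
            tosses += 1
            if heads / tosses > s:
                hits += 1
                break
    return hits / trials

for s in (0.52, 0.55, 0.60):
    print(s, hit_prob(s), 1 / (2 * s))         # estimate vs. the optional-stopping heuristic
```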
Singular points query | Answered in the comments by Brenin:
If $f$ does not vanish at $P$, then $P$ is not a point of the variety defined by $f$, so it does not make sense to ask whether it is singular there. |
Reflection on a circle | $|AX|+|XB|$ shortest possible implies that either $OX$ bisects the angle $AXB$ or $A$ $X$ $B$ lie on a straight line with $X$ between $A$ and $X$ (if we reword it: the normal at $X$ has to bisect $AXB$, then it is true for any smooth curve, not just circle).
To see it, suppose the angle is not bisected and the points are not on the same straight line. Replace the circle with the straight line tangent to the circle at $X$. And now I'm probably cheating: move $X$ a little in the right direction along the line so that the sum of distances decreases (as you know by the reflection method) and check that if you moved it along the circle instead, the sum of distances would change only slightly differently, i.e. it would still decrease. [Properly speaking we're computing the derivative of $|AX|+|XB|$ in a geometric way. I hope I'm not waking up the calculus inquisition :) ]
edit: I forgot about the straight line possibility |
Independent math learning | I do not know much about the course on Brownian Motion and Stochastic Calculus you have taken. Assuming that they are of sufficiently advanced, you can try reading 'Introduction to Analytic Number Theory' by Tom Apostol. You will be able to understand how much you need to know. For analytic geometry, better read Calculus II by Apostol. If it is too easy (seeing the courses you have taken) you can read Differential geometry of Pressley. All the best. |
Given some positive integer $n$, how many general pentagonal numbers are there smaller than it? | Once $k$ is moderately large, $k^2$ will be rather small compared to $k^3$. You can do fixed point iteration. Start with $k_0=\sqrt[3]{2n}$, then iterate $k_{i+1}=\sqrt[3]{2n-k_i^2}$. It will converge quickly. As an example, if $k=100, n=5050$. We get $k_0\approx 100.33$ and it converges quickly to $k=100$ |
Norm for bounded variations function | $v(f)=0$ does in general not imply that $f(x)=0$ for all $x \in [a,b]$ ! |
Are there any good ways to see the universal cover of $GL^{+}(2,\mathbb{R})$? | This is borrowed from Clifford Taubes's differential geometry:
The group $SL(2,\mathbb{R})$ is diffeomorphic to $\mathbb{S}^{1}\times \mathbb{R}^{2}$. This can be seen by using a linear change of coordinates on $M(2,\mathbb{R})$ that writes the entries in terms of $(x,y,u,v)$ as follows: $$M_{11}=x-u,\quad M_{22}=x+u,\quad M_{12}=v-y,\quad M_{21}=v+y$$
The condition $\det(M)=1$ now says that $x^{2}+y^{2}=1+u^{2}+v^{2}$. This understood, the diffeomorphism from $\mathbb{S}^{1}\times \mathbb{R}^{2}$ to $SL(2,\mathbb{R})$ sends a triple $(\theta,a,b)$ to the matrix determined by $$x=(1+a^{2}+b^{2})^{1/2}\cos[\theta],y=(1+a^{2}+b^{2})^{1/2}\sin[\theta],u=a,v=b$$ Here $\theta\in [0,2\pi]$ is the angular coordinate for $\mathbb{S}^{1}$.
And it should not be difficult for you to see the universal cover of this space is $\mathbb{R}^{3}$.
The typo in the solution was corrected by Taubes' email. |
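If it helps, here is a quick numerical sanity check of the parametrization (the random sample values are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, a, b = rng.uniform(0, 2 * np.pi), rng.normal(), rng.normal()

r = np.sqrt(1 + a**2 + b**2)
x, y, u, v = r * np.cos(theta), r * np.sin(theta), a, b
M = np.array([[x - u, v - y],
              [v + y, x + u]])
print(np.linalg.det(M))   # 1.0 up to floating-point error
```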
Reconfiguring this poker equation to solve for a different variable | You can write
$$\frac s{1+2s}=f\\
s=f(1+2s)\\
s=f+2fs\\
s-2fs=f\\
s(1-2f)=f\\
s=\frac f{1-2f}$$ |
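As a quick sanity check with a hypothetical value: taking $f=\tfrac14$ gives $s=\frac{1/4}{1-1/2}=\tfrac12$, and indeed $\frac{s}{1+2s}=\frac{1/2}{2}=\tfrac14$.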
Compress a three digit number into a single number | No, assuming by three digit number you mean $100\leq n\leq 999$, then there are $999-100+1=900$ three digit numbers. But there are only $10$ single digit numbers (the integers $0$ through $9$). Since sets with different cardinalities can not have a bijection between them, there is no invertible function which maps three digit numbers to single digit numbers.
Of course, if you do not need the map to be invertible, then there are any number of functions you can use. For example
$$f(x)=\left\lfloor\frac{x}{100}\right\rfloor$$
simply maps every three digit number to its first digit (i.e. $f(293)=2$). |
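A one-line sketch of that map in Python (the function name is just for illustration):

```python
def first_digit(x):
    """Map a three-digit number to its leading digit via floor division."""
    return x // 100

print(first_digit(293))   # 2
```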
Does the size of a Venn diagram indicate the cardinality of the set? | Usually, no, because just reproducing faithfully the intersections on a $2$-d sheet of paper is enough of a technical difficulty without bothering with estimates of the areas.
The usual way to encode cardinality effectively in a Venn diagram is by labelling each region with its own. |
Show that $P$ is divided to simple roots knowing that $a_{k}^2-4a_{k-1}a_{k+1}>0$ | The assumption can be slightly generalized to $a_k>0$ and
$$
a_k^2> q^2\,a_{k-1}a_{k+1}.
$$
for all $k=1,...,d-1$ for some $q\ge2$.
Because these are finitely many conditions on the coefficients, there also exist $q>2$ for the given set of coefficients.
Define $b_k=q^{k^2}\,a_k$, then
$$
b_{k-1}b_{k+1}=q^{2k^2+2}\,a_{k-1}a_{k+1}< q^{2k^2}\,a_k^2=b_k^2
$$
Rewriting that relation as
$$
\frac{b_{k-1}}{b_k}<\frac{b_k}{b_{k+1}}
$$
one recognizes that the sequence of these fractions is strictly monotonically increasing.
Now fix some $m$ and consider values
$$
q^{2m}\,\frac{b_{m-1}}{b_m}=\frac{qa_{m-1}}{a_m}< x< \frac{a_m}{qa_{m+1}}=q^{2m}\,\frac{b_m}{b_{m+1}}
$$
Proposition 1: For these short intervals, $p(-x)$ is non-zero and has the sign $(-1)^m$.
Consider the quotient
$$
\frac{p(-x)}{(-x)^m}=\dots+(-a_{m-3}\,x^{-3}+a_{m-2}\,x^{-2})+(-a_{m-1}\,x^{-1}+a_m-a_{m+1}\,x)+(a_{m+2}\,x^2-a_{m+3}\,x^3)+...
$$
The claim of the proposition is proved if this fraction is positive.
Lemma 2: All these groups of terms on the right of the fraction are positive.
Check first the group at the center around $a_m$, then the groups to the left and finally those to the right. If at one end a group is incomplete, because one of $m$ or $\deg p-m$ is even, then that group consists of a single term, which is positive.
\begin{align}
-a_{m-1}\,x^{-1}+a_m-a_{m+1}\,x
&> a_m\,(-q^{-1}+1-q^{-1})
\\
&=
a_m\, q^{-1}\,(q-2)
\\&
\ge0
\\[1.2em]%\hline
(-a_{m-2p-1}\,x^{-2p-1}+a_{m-2p}\,x^{-2p})
&=a_{m-2p}\,x^{-2p-1}\,\left(x-\frac{a_{m-2p-1}}{a_{m-2p}}\right)
\\
&>
a_{m-2p}\,x^{-2p-1}\,\left(q^{2m\,}\frac{b_{m-1}}{b_m}-q^{2m-4p-1}\,\frac{b_{m-2p-1}}{b_{m-2p}}\right)
\\
&>
a_{m-2p}\,x^{-2p-1}\,q^{2m}\,\left(1-q^{-4p-1}\right)\,\frac{b_{m-1}}{b_{m}}
\\&> 0
\\[1.2em]%\hline
a_{m+2p}\,x^{2p}-a_{m+2p+1}\,x^{2p+1}
&=
a_{m+2p+1}\,x^{2p}\,\left(\frac{a_{m+2p}}{a_{m+2p+1}}-x\right)
\\
&>
a_{m+2p+1}\,x^{2p}\,q^{2m}\,\left(q^{4p+1}-1\right)\,\frac{b_{m}}{b_{m+1}}
\\&> 0
\end{align}
One sees that in the sum there is a lower positive bound on $\frac{p(-x)}{(-x)^m}$ for the selected interval.
Conclusion: Since $p(0)=a_0>0$ and the sign of $p(-x)$ for very large $x$ is $(-1)^{\deg p}$, one identifies $\deg p$ disjoint intervals
$$
\left(-\infty, -\frac{a_{\deg p-1}}{qa_{\deg p}}\right],\;
\left[-\frac{qa_{\deg p-2}}{a_{\deg p-1}},-\frac{a_{\deg p-2}}{qa_{\deg p-1}}\right],\;...,\;
\left[-\frac{qa_{1}}{a_{2}},-\frac{a_{1}}{qa_{2}}\right],\;
\left[-\frac{qa_{0}}{a_{1}},0\right]
$$
on the negative ray where $p$ changes its sign and thus must have a real root in between. This already is the number of all roots of $p$, so that the factorization claim follows.
Remark: This question is closely related to the Newton polygon of the polynomial. Indeed, the assumption leads to the equivalent formulation that
$$
-\ln(a_k)=\alpha \,k^2+\phi(k)
$$
where $\alpha=\ln q$ and $\phi$ is a convex function on the integers. The condition on $x$ is related to the slopes of supporting linear functions at the corner $(m,\phi(m))$ of the graph of $\phi$.
Some of that is discussed in the articles of Malajovich/Zubelli on the geometry of the Graeffe iteration resp. the tangent Graeffe iteration. |
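A quick numerical sanity check of the conclusion (the coefficient family $a_k=3^{-k^2}$ is an arbitrary choice satisfying the hypothesis):

```python
import numpy as np

a = [3.0 ** (-k * k) for k in range(5)]                          # a_k = 3^(-k^2)
assert all(a[k]**2 > 4 * a[k-1] * a[k+1] for k in range(1, 4))   # hypothesis holds

roots = np.roots(a[::-1])       # numpy wants the leading coefficient first
print(roots)                    # four distinct real negative roots, as the proof predicts
```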
Newton’s method to estimate a root of $f(x)=x^5-3x^2+1$ | This should really be a comment. Consider this python script. Now I let myself to borrow your table structure and the header. The script produces this:
\begin{array}{|c|c|c|c|c|}
\hline
x_n& f(x_n) & f'(x_n) & \frac{f(x_n)}{f'(x_n)} & x_n-\frac{f(x_n)}{f'(x_n)} \\ \hline
x_1=3.000000 & 217.000000 & 387.000000 & 0.560724 & 2.439276 \\ \hline
x_2=2.439276 & 69.508302 & 162.380993 & 0.428057 & 2.011220 \\ \hline
x_3=2.011220 & 21.772682 & 69.742981 & 0.312185 & 1.699035 \\ \hline
x_4=1.699035 & 6.498158 & 31.471553 & 0.206477 & 1.492558 \\ \hline
x_5=1.492558 & 1.724044 & 15.858533 & 0.108714 & 1.383844 \\ \hline
x_6=1.383844 & 0.329922 & 10.033520 & 0.032882 & 1.350962 \\ \hline
x_7=1.350962 & 0.024737 & 8.549144 & 0.002894 & 1.348068 \\ \hline
x_8=1.348068 & 0.000181 & 8.424276 & 0.000021 & 1.348047 \\ \hline
x_9=1.348047 & 0.000000 & 8.423353 & 0.000000 & 1.348047 \\ \hline
\end{array}
There's no room for a mistake in the script, you see? :)
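Since the linked script may not be available, here is a minimal sketch of what it might look like; it reproduces the table above:

```python
def f(x):  return x**5 - 3*x**2 + 1
def fp(x): return 5*x**4 - 6*x          # derivative

x = 3.0
for n in range(1, 10):
    step = f(x) / fp(x)
    print(f"x_{n} = {x:.6f}, f = {f(x):.6f}, f' = {fp(x):.6f}, "
          f"f/f' = {step:.6f}, next = {x - step:.6f}")
    x -= step
```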
Finding the limit of the recursive sequence $r_{n+1} = \sqrt{2 + r_n}$ | First, you need to solve $L=\sqrt {2+L}$ Square it and you have a quadratic. Once you get the solutions, plug them into the original equation to see which one is not spurious. You also have to show that the limit exists. In this case, you can show that (if $r_n \lt 2$, then $r_{n+1} \lt 2$) and $r_{n+1} \gt r_n$ so you have a monotonic sequence bounded above. That gives you convergence. |
How to write an equation of a line bisecting an angle in terms of the slope of the bisector. | Are you converting the slopes of the two lines to angles first? (Use the $\arctan$ function). Then you average them and take the tangent.
$$\theta_1 = \arctan(m_1)$$
$$\theta_2 = \arctan(m_2)$$
$$\theta_b = \frac{(\theta_1 + \theta_2)}{2}$$
$$m_b = \tan(\theta_b)$$
Your final answer for $m_b$ should be about -0.31, giving you $y-8=-0.31(x+6)$.
(If this is what you did, sorry. It's working for me.) |
Expectation of Gaussian random measure | In case $\mu(\xi) = \mathcal N(\xi, C)$, and $\xi \sim \mathcal N(\xi_m,C_\xi)$, it holds
$$
\bar \mu=\mathcal N(\xi_m,C+C_\xi),
$$
which can be found by conditioning on $\xi$. Generalizing to $\mu(\xi)=\mathcal N(m(\xi),C)$ with an affine function $m$ is then trivial. |
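A sketch of the conditioning argument: write $x=\xi+\eta$ with $\eta\sim\mathcal N(0,C)$ independent of $\xi$. Then
$$
E[x]=E[\xi]=\xi_m,\qquad \operatorname{Cov}(x)=\operatorname{Cov}(\xi)+\operatorname{Cov}(\eta)=C_\xi+C,
$$
and a sum of independent Gaussians is again Gaussian, giving $\bar\mu=\mathcal N(\xi_m,C+C_\xi)$.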
Simple solving Skanavi book exercise: $\sqrt[3]{9+\sqrt{80}}+\sqrt[3]{9-\sqrt{80}}$ | HINT:
$$x^3=9+\sqrt{80}+9-\sqrt{80}+3\sqrt[3]{(9+\sqrt{80})(9-\sqrt{80})}(x)=18+3x$$
$$\iff x^3-3x-18=0$$
of which $x=3$ is a root (by inspection).
Find the other two roots from $$\frac{x^3-3x-18}{x-3}=0$$ |
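Carrying out that division gives
$$
x^3-3x-18=(x-3)(x^2+3x+6),
$$
and $x^2+3x+6$ has discriminant $9-24<0$, so $x=3$ is the only real root and the original expression equals $3$.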
Integrating pressure with respect to time | Rearrange the equation to read
$$-V_t\frac{P_0'}{P_0^2} - \frac{i}{P_0} = m$$
Then rewrite as
$$ \frac{d}{dt} \frac{1}{P_0} - \frac{i}{V_t} \frac{1}{P_0} = \frac{m}{V_t}$$
This may be re-expressed using an integrating factor:
$$\frac{d}{dt} \left [e^{-i t/V_t} \frac{1}{P_0} \right ] = \frac{m}{V_t} \, e^{-i t/V_t}$$
Integrating, we get
$$\frac{1}{P_0(t)} = C \, e^{i t/V_t} - \frac{m}{i}$$
where $C$ is a constant of integration. Rearranging and taking logs, we get
$$\log{\left (\frac{1}{P_0(t)}+\frac{m}{i} \right )} - \frac{i}{V_t} t = \log{C}$$
This means, for times $t=t_1$ and $t=t_2$ we have
$$\log{\left (\frac{1}{P_0(t_1)}+\frac{m}{i} \right )} - \frac{i}{V_t} t_1 = \log{\left (\frac{1}{P_0(t_2)}+\frac{m}{i} \right )} - \frac{i}{V_t} t_2$$
Set $P_1 = P_0(t_1)$ etc., and the rest is algebra. |
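For instance, if the quantity of interest is the elapsed time between the two measurements, that algebra gives
$$
t_2-t_1=\frac{V_t}{i}\,\log\left(\frac{\dfrac{1}{P_2}+\dfrac{m}{i}}{\dfrac{1}{P_1}+\dfrac{m}{i}}\right).
$$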
Why is dx/dt = -(∂u/∂t) / (∂u/∂x)? | This expression does not mean anything. So first we must define the variables. I assume that you meant: let $u(\alpha,\beta)$ be a smooth function of two real variables and let $x$ be a smooth univariate function with real values such that $u(t,x(t))=0$.
Then, $x'(t)=-\dfrac{\frac{\partial u}{\partial \alpha}(t,x(t))}{\frac{\partial u}{\partial \beta}(t,x(t))}$.
This is straightforward when you differentiate the equation $u(t,x(t))=0$ with respect to $t$.
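Explicitly, the chain rule applied to $u(t,x(t))=0$ gives
$$
\frac{\partial u}{\partial\alpha}(t,x(t))+\frac{\partial u}{\partial\beta}(t,x(t))\,x'(t)=0,
$$
and solving for $x'(t)$ (assuming $\frac{\partial u}{\partial\beta}(t,x(t))\neq0$) yields the stated formula.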
Find the field that $\mathbb{Z}_7[x,y]/\langle y-2x^2, 4xy + y +1 \rangle$ is isomorphic to | A very useful trick to remember is that a multivariate polynomial ring is (isomorphic to) an iterated construction of polynomial rings. Namely, in your example, we have
$$
\Bbb F_7[x,y] \simeq \Bbb F_7[x][y].
$$
This is useful because polynomial rings in one variable may be more tractable.
For example: you can show that in general, if $T-\alpha \in A[T]$ is a degree $1$ monic polynomial over a ring $A$ and $P \in A[T]$, then there always exist unique $Q \in A[T]$ and $R \in A$ such that $P = (T-\alpha)Q+R$.
In particular, for any poynomial $p$ in $\Bbb F_7[x,y]$ there are unique $q \in \Bbb F_7[x,y]$ and $r \in \Bbb F_7[x]$ such that
$$
p(x,y) = (y-2x^2)q(x,y)+r(x).
$$
From this observation you can see that the map $\tau \colon \Bbb F_7[x,y] \to \Bbb F_7[x]$ sending $x \mapsto x, y \mapsto 2x^2$ has kernel precisely $(y-2x^2)$. Thus
$$
\Bbb F_7[x,y]/(y-2x^2) \simeq \Bbb F_7[x]
$$
and by the third isomorphism theorem we get
$$
\Bbb F_7[x,y]/I \simeq \Bbb F_7[x]/J.
$$
Now, you already observe that $J = (x^3+2x^2+1)$ is maximal, hence the latter quotient a field.
Since $\{[1],[x],[x^2]\}$ is a $\Bbb F_7$-basis for $\Bbb F_7[x]/J$ as a vector space, we have $\dim_{\Bbb F_7}\Bbb F_7[x]/J = 3$ and so $|\Bbb F_7[x]/J| = 7^3$. But then by uniqueness of finite fields, it has to be $\Bbb F_7[x]/J \simeq \Bbb F_{7^3}$.
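As a quick sanity check of the irreducibility of $x^3+2x^2+1$ over $\Bbb F_7$ (a cubic with no roots in the field is irreducible):

```python
# roots of x^3 + 2x^2 + 1 in F_7: the list below is empty, so the cubic is irreducible over F_7
print([x for x in range(7) if (x**3 + 2*x**2 + 1) % 7 == 0])   # []
```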
Edit: here are some generalizations that use the exact same arguments: if $p = y-\alpha \in R[x][y]$ is of degree one, then $R[x,y]/(p) \simeq R[x]$ for any ring $R$, the isomorphism sends $x$ to $x$ and $y$ to $\alpha$.
Moreover, if $g \in R[x,y]$ then its image via this iso is $g(x,\alpha)$ and thus
$$
R[x,y]/(p,g) \simeq R[x]/(g(x,\alpha)).
$$
Fix $d = \deg g(x,\alpha(x))$. Then $\{[1],\ldots, [x^{d-1}]\}$ is a basis of $R[x]/(g(x,\alpha))$ as a free module.
In particular if $R = k$ is a field, then $k[X]/(g(x,\alpha))$ is a $d$-dimensional $k$-algebra, and if $g(x,\alpha)$ is irreducible then it is a field extension of dimension $d$. In that case, some computations follow:
If $k = \Bbb F_q$ is finite, then $\Bbb F_q[X]/(g(x,\alpha)) \simeq \Bbb F_{q^d}$.
If $d = 2 \neq \mathbf{char} (k)$, then $k[X]/(g(x,\alpha)) = k(\sqrt{\delta})$ for some $\delta \in k$ that is not a square in $k$. Indeed, if $g(x,\alpha) = x^2+bx+c$ we have $k[X]/(g(x,\alpha)) \simeq k(\omega)$ with $\omega$ a root of $g(x,\alpha)$ in some algebraic closure, and we can apply the classification of quadratic extensions.
In general, if $R$ at least has the invariant basis property, you can (maybe) distinguish such quotients by looking at the total degree of a polynomial. |
Why is this 'obviously' positive semi-definite? | In general, let $a\in\mathbb R^n$ and $A=aa^T\in\mathbb R^{n\times n}$, as in the case of your matrix.
Clearly, $A$ is symmetric, as $A^T= (aa^T)^T=(a^T)^Ta^T=aa^T=A$.
$A$ is positive semi-definite because $(x,Ax)\ge 0$ for every $x\in\mathbb R^n$.
Indeed
$$
(x,Ax)=x^TAx=x^Taa^Tx=(x^Ta)(a^Tx)=(x^Ta)^2\ge 0.
$$
Note that $x^Ta$ is nothing but the inner product of $x$ and $a$. |
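A small numerical illustration (the vectors $a$ and $x$ are arbitrary choices):

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0])
A = np.outer(a, a)                      # A = a a^T, symmetric and rank one

print(np.allclose(A, A.T))              # True: symmetric
print(np.linalg.eigvalsh(A))            # eigenvalues ~ [0, 0, |a|^2], all nonnegative

x = np.array([0.5, 1.0, -1.0])
print(x @ A @ x, (a @ x) ** 2)          # both equal (a.x)^2 >= 0
```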
Boundedness of a Continuous Function with a Bounded Derivative | Assume that $\sup_{x\in[0,1]}|f'(x)|=1$, then a convergent sequence $(x_{n})\subseteq[0,1]$ is such that $|f'(x_{n})|\rightarrow\sup_{x\in[0,1]}|f'(x)|$, say, $x_{n}\rightarrow x$, then by continuity of $f'$, $|f'(x)|=1$, a contradiction.
So you can take $M=\sup_{x\in[0,1]}|f'(x)|<1$. |
Is it true that $\left|\sum_{n=1}^\infty \frac{x}{n^2+x^2}\right|<\frac{\pi}{2}$ for any $x\in \Bbb{R}$? | Thanks to @Grumpy Parsnip, I find that I just made a silly comment to the answer of the quoted question.
It is not true that $a_n<b_n$ implies $\lim_n a_n<\lim b_n$. It is also not true that
$$
\sum_{n=1}^Na_n<\sum_{n=1}^Nb_n
$$
implies
$$
\sum_{n=1}^\infty a_n<\sum_{n=1}^\infty b_n.
$$
However, when both series converge, it is true that
$$
a_n<b_n \quad\text{for every } n
$$
implies
$$
\sum_{n=1}^\infty a_n<\sum_{n=1}^\infty b_n,
$$
since $\sum_{n=1}^\infty (b_n-a_n)\ge b_1-a_1>0$.
Therefore, according to the accepted answer, the strict inequality is obtained. |
A Question with both poisson and gamma distribution | If $\lambda\sim \Gamma(p,b)$, I think you are looking for
$$E(X) = E\left(\sum_{k=0}^{\infty}k\frac{\lambda^k}{k!}\,\mathrm{e}^{-\lambda}\right) = E(\lambda) = \frac{p}{b}
$$
and, by the law of total variance,
$$ \operatorname{Var}(X) = E[\operatorname{Var}(X\mid\lambda)] + \operatorname{Var}(E[X\mid\lambda]) = E(\lambda) + \operatorname{Var}(\lambda) = \frac{p}{b} + \frac{p}{b^2}.
$$
Trigonometric near-identity | Call $K=\csc\left(\theta\right)+\sec\left(\theta\right)$ and $W=a/\sin\left(2\theta\right)$ for some $a\gt0$ and $\theta\in\left]0,\pi/2\right[$ instead. You are interested in why $K$ and $W$ almost have a linear relationship. First, we know that $$K^2=\frac{1}{\sin^2\left(\theta\right)}+\frac{1}{\cos^2\left(\theta\right)}+\frac{2}{\sin\left(\theta\right)\cos\left(\theta\right)}\\=\frac{4a^2}{a^2\cdot 4\sin^2\left(\theta\right)\cos^2\left(\theta\right)}+\frac{4a}{a\cdot 2\sin\left(\theta\right)\cos\left(\theta\right)}=\frac{4W^2}{a^2}+\frac{4W}{a}\\\Longrightarrow K=\frac{2W}{a}\left(1+\frac{a}{W}\right)^{1/2}\\ \Longrightarrow \csc\left(\theta\right)+\sec\left(\theta\right)=\frac{2\sqrt{1+\sin(2\theta)}}{\sin\left(2\theta\right)}.$$ This is an exact identity. In the given interval, $\displaystyle \left|\sin\left(2\theta\right)\right| =\left|\frac{a}{W}\right|\lt 1$. We can invoke the binomial theorem: $$K=\frac{2}{\sin\left(2\theta\right)}\left[1+\frac{\sin\left(2\theta\right)}{2}-\frac{\sin^2\left(2\theta\right)}{8}+\text{higher order terms}\right]\approx \frac{2}{\sin\left(2\theta\right)}+1$$ as the other terms are going to zero. As said, this is almost a linear relationship. How good is it? Not much. |
Is each integer $n > 2$ is divisible by $4$ or by an odd prime number | Two cases.
It's odd. Then $2$ doesn't divide it; but some prime does divide it, and that prime has to be odd because it's not $2$.
It's even. Then consider $n/2$. Either $2$ divides this (in which case $4$ divides $n$ and we're done) or $n/2$ is odd; but that latter case means an odd prime divides $n/2$ so an odd prime divides $n$. |
$\delta$ = min {1, $\epsilon$} works for proving $\lim_{x->0}$ $x^3$ = 0? | Your work is correct. There are many possibilities for $\delta$, in fact, if $\delta$ is a number that works, then any number smaller than $\delta$ will also work. This follows directly from the definition of the limit, i.e. $|x < c| < \gamma$ holds whenever, $\gamma \le \delta$. |
Should I avoid distribution functions in probability? | Distribution functions (which are not the same as probability distributions) are a tool, just as characteristic functions for example.
Tools should be judged by what they help us accomplish. To prove the Central Limit Theorem, the characteristic function is an invaluable tool. To describe the distribution of the maximum of an i.i.d. sample, the distribution function is very useful (whereas the characteristic function is not).
I am not aware of the context of the quote, but it sounds a bit like telling a painter not to use a particular type of brush. Perhaps the author's intent was to tell the painter that there are other useful brushes, i.e. not to rely exclusively on distribution functions. While it is certainly true that some or even most subjects in probability theory can not even be formulated without using measure theory, this in my personal opinion does not constitute an argument against distribution functions, as using one does not exclude using the other. |
Find the curve, given that $r'(t) = Cr(t)$ | $r'(t)=Cr(t)$ implies $(x'(t),y'(t),z'(t))=C(x(t),y(t),z(t))$ So $x(t)=K_{1}e^{Ct}$, $y(t)=K_{2}e^{Ct}$ $z(t)=K_{3}e^{Ct}$ with $x(0)=1$, $y(0)=2$, $z(0)=3$, where $K_{1}$, $K_{2}$ $K_{3}$ re constants to determine.$x(0)=1$ then $K_{1}=1$, similarly, $K_{2}=2$, $K_{3}=3$. So I think $r(t)=e^{Ct}i+2e^{Ct}j+3e^{Ct}k$ |