Combinatoric proof for $\sum_{k=0}^n{n\choose k}\left(-1\right)^k\left(n-k\right)^4 = 0$ ($n\geqslant5$) | The expression counts the number of ways to partition a set of $4$ elements into $n$ distinguishable non-empty cells, which certainly must give zero for $n \geq 5$.
The result follows by inclusion-exclusion. Alternatively, see Stirling numbers of the second kind for a more general view.
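If you want to see the collapse happen numerically, here is a quick check (a minimal Python sketch of my own, not from the original post); the values for $n\le4$ are the surjection counts $n!\,S(4,n)$:

```python
from math import comb

# The alternating sum counts surjections from a 4-element set onto n
# labelled cells, i.e. n! * S(4, n); it must vanish once n >= 5.
for n in range(1, 9):
    print(n, sum(comb(n, k) * (-1)**k * (n - k)**4 for k in range(n + 1)))
# n = 1..4 print 1, 14, 36, 24; every n >= 5 prints 0
```

The output confirms the identity for every $n \geq 5$. |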
Integration of $x^a$ and Summation of first $n$ $a$th powers | Integration is a continuous version of the sum. So your sum is actually a discrete approximation of the integral. It will have the same leading term, but the next terms differ (the sum approximates a smooth function with rectangles, as in the Riemann sum, so there are some discrepancies).
The discrete version for arbitrary powers is given by Faulhaber's formula
[ http://en.wikipedia.org/wiki/Faulhaber%27s_formula ]. With a bit of thought you can see this can also help you derive the connection between the sum of a series ($\sum_n a_n$) and an integral ($\int a(x)dx$). With careful manipulation of the Taylor series of $a(x)$ under the integral, using Faulhaber's formula in the process, you will arrive at the Euler-Maclaurin formula
[ http://en.wikipedia.org/wiki/Euler_maclaurin_formula ]
which is very useful for calculating the series (frequently it's easier to integrate). You will notice how the Bernoulli numbers appear in both Faulhaber's formula and the Euler-Maclaurin series.
It's quite fun to try and do this on paper -- you learn a lot.
EDIT:
A smooth function can be approximated as a polynomial around some point... the coefficients are derivatives of the function:
$$f(x+h)=f(x)+f'(x)h+\frac{1}{2!}f''(x)h^2+\cdots$$
where $h$ is a displacement from the chosen origin $x$. Now you can approximate the values $f(1)$, $f(2)$, $\dots$, $f(n)$, and so on by setting $x=0$, $h=n$:
$$f(n)=f(0)+f'(0)n+\frac{1}{2!}f''(0)n^2+\cdots$$
Now, take the sum you want to evaluate:
$$\sum_{n=0}^N f(n)=f(0)\sum_{n=0}^N 1 +f'(0)\sum_{n=0}^N n+\frac{1}{2!} f''(0)\sum_{n=0}^N n^2+\cdots$$
$$=f(0)(N+1) +f'(0)\frac{N(N+1)}{2}+\frac{1}{2!}f''(0)\frac{N(N+1)(2N+1)}{6}+\cdots$$
So, if you have the expression for $n$-th term of some series that you want to sum ($\sum a_n=\sum f(n)$) then you can express it as a sum of derivatives times the Faulhaber's polynomials that you listed in your original post.
Now, if instead of summing you just integrate the function (again using the Taylor series):
$$\int_0^N f(x)dx=f(0)N+f'(0)\frac{N^2}{2}+\frac{1}{2!}f''(0)\frac{N^3}{3}+\cdots$$
You can now see the result is very similar... it only differs in a few higher-order terms. If you evaluate $\sum_{n=0}^N f(n)-\int_0^N f(x)dx$ you get the residual in the Euler-Maclaurin formula, which is just the extra terms that you noticed when comparing the integral to the Faulhaber polynomial. Congratulations, you have almost derived a very famous and mysterious formula from classical calculus.
In this form, it only matches the lower limit of the conventional Euler-Maclaurin; if you want the full version with derivatives on the upper limit, you have to write $\sum_{n=0}^N=\sum_{n=0}^\infty-\sum_{n={N+1}}^\infty$ and do a shifted version of the above formula for the second sum (starting at $N+1$ instead of $0$).
Of course... the convergence of this is questionable (it's an asymptotic series) -- the Taylor series usually doesn't converge on all reals.
EDIT2:
And how do you get the Faulhaber polynomials? You get them by setting up a polynomial with undetermined coefficients and comparing terms:
$$\sum_{n=0}^N n^k = a_0 +a_1 N + a_2 N^2+\cdots+ a_{k+1}N^{k+1}$$
$$\sum_{n=0}^N n^k=\sum_{n=0}^{N-1}n^k+N^k$$
Use the first one in the second:
$$a_0 +a_1 N + a_2 N^2+\cdots+ a_{k+1}N^{k+1}=a_0 +a_1 (N-1) + a_2 (N-1)^2+\cdots+ a_{k+1}(N-1)^{k+1}+N^k$$
From $N=0$ you get $a_0=0$. Then collect terms with $N$, terms with $N^2$ and so on. You get a set of linear equations for the coefficients $a_n$. What you get are the Faulhaber polynomials, which (check the wiki) can alternatively also be expressed with Bernoulli numbers and binomial coefficients.
For $k=3$, for instance, you get
$$a_1 N + a_2 N^2+a_3 N^3 + a_4 N^4=a_1 (N-1) + a_2 (N-1)^2+a_3 (N-1)^3+a_4 (N-1)^4+N^3$$
which after expansion (note that the leading terms on the left side cancel with those on the right) gives:
$$0=-a_1+a_2(1-2N)+a_3(-1+3N-3N^2)+a_4(1-4N+6N^2-4N^3)+N^3$$
split by powers:
$$a_1-a_2+a_3-a_4=0$$
$$2a_2-3a_3+4a_4=0$$
$$3a_3-6a_4=0$$
$$4a_4=1$$
The system is always upper triangular (actually, if you write down the matrix, it's a part of the Pascal triangle with alternating signs; there's no need to actually expand the polynomials, just write the matrix and fix the right-hand column to $(0,0,0,\dots,1)$), so the solution is a simple back-substitution:
$$a_4=1/4$$
$$a_3=1/2$$
$$a_2=1/4$$
$$a_1=0$$
Leading to
$$\sum_{n=0}^N n^3=\frac{N^2+2N^3+N^4}{4}$$
So everything can actually be done by hand. Have fun with $\sum n^4$ :)
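Since the system above is just linear algebra, a computer can grind it out too; here is a small sympy sketch (my own notation, following the setup with $a_0=0$) that recovers the Faulhaber polynomial for $k=4$:

```python
import sympy as sp

N = sp.symbols('N')
k = 4
a = sp.symbols(f'a1:{k + 2}')            # a1, ..., a_{k+1}; a0 = 0 as derived above
S = sum(ai * N**i for i, ai in enumerate(a, start=1))
# Impose S(N) - S(N-1) = N^k as a polynomial identity in N and solve
# the resulting (triangular) linear system for the coefficients.
sol = sp.solve(sp.Poly(S - S.subs(N, N - 1) - N**k, N).all_coeffs(), a)
print(sp.factor(S.subs(sol)))            # N*(N+1)*(2*N+1)*(3*N**2+3*N-1)/30
```

The printed factorization is the classical $\sum_{n=0}^N n^4 = \frac{N(N+1)(2N+1)(3N^2+3N-1)}{30}$. |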
Finding the median value on a probability density function | I don't know, let's find out. Maybe the median is in the $[0,5]$ part. Maybe it is in the other part. To get some insight, let's find the probability that our random variable lands between $0$ and $5$. This is
$$\int_0^5(0.04)x\,dx.$$
Integrate. We get $0.5$. What a lucky break! There is nothing more to do. The median is $5$.
Well, it wasn't entirely luck. Graph the density function. (We should have done that to begin with, geometric insight can never hurt.) We find that the density function is symmetric about the line $x=5$. So the median must be $5$.
Remark: Suppose the integral had turned out to be $0.4$. Then to reach the median, we need $0.1$ more in area. Then the median $m$ would be the number such that $\int_5^m (0.4-0.04x)\,dx=0.1$.
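If you want to verify both computations symbolically, here is a small sympy sketch; I am assuming, as the Remark's integrand suggests, that the second piece of the density is $0.4-0.04x$ on $[5,10]$:

```python
import sympy as sp

x, m = sp.symbols('x m')
# First piece of the density: 0.04*x = x/25 on [0, 5]
print(sp.integrate(sp.Rational(1, 25) * x, (x, 0, 5)))    # 1/2, so the median is 5

# The Remark's hypothetical: solve  integral_5^m (0.4 - 0.04x) dx = 0.1  for m
eq = sp.Eq(sp.integrate(sp.Rational(2, 5) - sp.Rational(1, 25) * x, (x, 5, m)),
           sp.Rational(1, 10))
print([r for r in sp.solve(eq, m) if 5 <= r <= 10])       # [10 - 2*sqrt(5)]
```

The first print gives $1/2$ exactly, and the hypothetical median comes out as $10-2\sqrt{5}\approx 5.53$. |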
Sum of constants convergence rate | Yes this is true. This statement holds for every $g$.
Small note: We can easily see that
\begin{equation}
\sum_{i=1}^{n}a_i \le n \iff \sum_{i=1}^{n}a_i =Cn \quad\text{for some } C\in[0,1].
\end{equation}
The first expression seems more efficient to use.
If $g=O(n)$, the $O(g)$ term then becomes unnecessary, since for sufficiently large $n$ it does not affect our leading growth term $n$. Writing this compactly in big $O$ notation,
\begin{equation}
\sum_{i=1}^{n}a_i =O(n).
\end{equation}
If $n=O(g)$, the statement is still true but yields a worse bound; in that case we may write\begin{equation}
\sum_{i=1}^{n}a_i =O(g(n)).
\end{equation} |
Show that $A$ is positive definite via the Cholesky decomposition | No, you need to show that $x^T A x > 0$ for all $x \not = 0$ and you are not using that $L = \tilde{L}$ is nonsingular.
You have $$x^TAx = x^T LL^T x = (L^Tx)^T(L^Tx) = \|L^Tx\|_2^2 \ge 0$$ for any $x$.
Moreover, if $x^T A x = 0$, then $L^Tx = 0$, which implies that $x=0$ because $L$, and hence $L^T$, is nonsingular.
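As a numerical illustration (not a substitute for the proof), one can check the identity $x^TAx=\|L^Tx\|_2^2$ with a random nonsingular triangular factor; a minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
L = np.tril(rng.standard_normal((4, 4)))
np.fill_diagonal(L, np.abs(np.diag(L)) + 1.0)  # strictly positive diagonal => L nonsingular
A = L @ L.T                                    # A is then positive definite
x = rng.standard_normal(4)
print(x @ A @ x, np.linalg.norm(L.T @ x)**2)   # the two numbers agree and are > 0
```

Both printed numbers coincide and are strictly positive, as the argument predicts. |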
Exercise in Tarski's Introduction to Logic | IMO, the problem is stated diffrently
Show that the three following "axioms" are independent: Axiom I and Theorems I and II of Sect.37, that are:
$\text {Ax.I } x \cong x \text { (Reflex)}$
$\text {Th.I } \text { if } y \cong z, \text { then } z \cong y \text { (Symm)}$
$\text {Th.II } \text { if } x \cong y \text { and } y \cong z, \text { then } x \cong z \text { (Trans)}$ |
What books to refer while preparing for rmo? | I would recommend 'Challenge and Thrill of Pre-College Mathematics' by V Krishnamurthy, C R Pranesachar (New Age International Publishers).
You may like to have a look at http://www.cmi.ac.in/~vipul/olymp_resources/ |
What's the correct notion of equivalence in a double category? | I don't think there is a correct notion. A double category is a couple of interacting 2-categories, moreso than a generalization of a single 2-category.
For instance in the double category of categories, functors, and profunctors, which is even a proarrow equipment, a vertical equivalence is an ordinary equivalence of categories, while a horizontal equivalence is a Morita equivalence, in this case an equivalence up to splitting of idempotents. The notions are distinct and separately significant, and it's complicated to see internally when a horizontal equivalence gives rise to a vertical one. |
Importance of $|\mu(A)|<\infty$ in complex measure? | Having a complex measure without imposing the condition of it being finite gives us an extremely weak object, because it doesn't need to be monotone.
In the case of positive measures we had that $\mu(A) \leq \mu(B)$ whenever $A\subseteq B$, and we would like to recover a similar, albeit weaker, property for complex measures.
Such a property is the Hahn decomposition theorem, which allows us to break a complex measure into two measures that are again monotone. Moreover, every proof that I have seen of this decomposition theorem uses that the complex measure is finite. I can't think of a counterexample right now, but I think that the result is false if one removes the hypothesis of it being finite.
Derivative of an integral of differential form | Let's make a substitution:
$$
(x_1,x_2,...,x_n) \mapsto (g(x),x_2,x_3,...,x_n)
$$
with Jacobian $J = \frac{1}{\partial_{1} g(x)}$. Then we can rewrite $f(t)$ as
$$
f(t) = \int\limits_{0}^{t} \int\limits_{\mathbb{R}^{n-1}_+} \frac{a(x_1(\tau;y);y)}{\partial_{1}g(x_1(\tau;y);y)} \,dy \,d\tau
$$
where $y = (y_1,...,y_{n-1})$ and $x_1(\tau;y)$ is defined by $g(x_1(\tau;y);y) = \tau$. It is easy to differentiate $f(t)$ now:
$$
f'(t) = \int\limits_{\mathbb{R}^{n-1}_+} \frac{a(x_1(t;y);y)dy}{\partial_1 g(x_1(t;y);y)}
$$
We can consider $M_t = \{ x \in \mathbb{R}^n_+ \mid g(x) = t \}$ as a smooth manifold with the atlas $\{ (M_t,\varphi) \}$ such that $\varphi \colon M_t \to \mathbb{R}^{n-1}_+$ and $\varphi(x_1,...,x_n) = (x_2,...,x_n)$. Then a form $\omega$ defined by
$$
\omega = \frac{a(x_1(t;y);y)}{\partial_1 g(x_1(t;y);y)} dy_1 \wedge ... \wedge dy_{n-1}
$$
is a pullback via $(\varphi^{-1})^{*}$ of a form $\alpha$ defined by
$$
\alpha = \frac{a(x)}{\partial_1 g(x)} dx_2 \wedge ... \wedge dx_n
$$
Hence
$$
f'(t) = \int\limits_{M_t} \alpha
$$
Since $dg \wedge \omega_1 = dg \wedge \omega_2$ implies
$$
\int\limits_{M_t} \omega_1 = \int\limits_{M_t} \omega_2
$$
and $dg \wedge \alpha = a(x) dx_1 \wedge ... \wedge dx_n$, then for any form $\beta$ such that $dg \wedge \beta = a(x)dx$ we have
$$
f'(t) = \int\limits_{M_t} \beta
$$
For example, we can choose $\beta$ defined by
$$
\sum\limits_{k=1}^{n} (-1)^{k-1} \frac{a(x)\partial_k g(x)}{| \nabla g(x) |^2} dx_1 \wedge ... \wedge \widehat {dx_k} \wedge ... \wedge dx_n
$$
Proof on a conjecture involving $d(N)$ | It turns out that this conjecture is false: there is no number $N$ whose "index of beauty" is $18$.
Note that $d(N) \le 2\sqrt N$ for any number $N$, since the divisors come in pairs $m,N/m$ and one member of each pair is at most $\sqrt N$. Thus $N/d(N) \ge \frac12\sqrt N$. One can compute by brute force that no number less than $(2\cdot 18)^2 = 1296$ has "index of beauty" equal to $18$, and numbers larger than $1296$ all have $N/d(N) \ge \frac12\sqrt N > \frac12\sqrt{1296} = 18$.
The omitted integer values of $N/d(N)$ under $1000$ (found by exhaustive computation, in the above manner) are:
$\{18, 27, 30, 45, 63, 64, 72, 99, 105, 112, 117, 144, 153, 160, 162,
165, 171, 195, 207, 225, 243, 252, 255, 261, 279, 285, 288, 294, 320,
333, 336, 345, 352, 360, 369, 387, 396, 405, 416, 423, 435, 441, 465,
468, 477, 490, 504, 531, 544, 549, 555, 567, 576, 603, 608, 612, 615,
616, 625, 639, 645, 657, 684, 705, 711, 726, 728, 735, 736, 747, 792,
795, 801, 810, 828, 840, 873, 880, 885, 891, 909, 915, 927, 928, 936,
952, 960, 963, 981, 992\}$
My gut feeling is that the set of omitted values in fact has density $1$.
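For anyone who wants to reproduce the computation, here is a minimal Python sketch of the brute force described above (the bound $N/d(N)\ge\frac12\sqrt N$ caps the search):

```python
def d(N):
    """Number of divisors of N, by trial division."""
    cnt, i = 0, 1
    while i * i <= N:
        if N % i == 0:
            cnt += 2 - (i * i == N)
        i += 1
    return cnt

# Since N/d(N) >= sqrt(N)/2, values <= B can only arise from N <= (2B)^2.
B = 100
attained = {N // d(N) for N in range(1, (2 * B)**2 + 1) if N % d(N) == 0}
print(sorted(set(range(1, B + 1)) - attained))
# [18, 27, 30, 45, 63, 64, 72, 99]
```

The printed list agrees with the beginning of the table above. |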
Lagrange Interpolating Polynomial , Error Estimation | Similar to here in Theorem 2, we can estimate $\|\Phi_{n+1}\|_{\infty}$ the following way:
First set $h := 2/n$. Assume $x_j \leq x < x_{j+1}$. We see that
$$|\Phi_{n+1}(x)| = |x-x_0|\dots|x-x_n|$$
Now, each factor $|x-x_i|$ is at most the corresponding multiple of $h$ shown below, because consecutive nodes satisfy $x_{j+1}-x_j = h$ by construction. So we get
\begin{align}
|\Phi_{n+1}(x)| &\leq jh \cdot (j-1)h \cdots 2h \cdot (x-x_j) (x_{j+1} - x) \cdot 2h \cdots (n+1-j)h \\
&= (x-x_j)(x_{j+1}-x)h^{n-1}j!(n+1-j)!.
\end{align}
Now since we had $x_j \leq x < x_{j+1}$ by assumption we can further estimate the term:
\begin{align}
(x-x_j)(x_{j+1}-x)h^{n-1}j!(n+1-j)! \leq \frac{(x_{j+1} - x_j)^2}{4} h^{n-1} j!(n+1-j)!\leq \frac{n!}{4}h^{n+1}.
\end{align} |
Positive integer solution to equation $(x_1+x_2+x_3)(y_1+y_2+y_3+y_4)=15$ | Since $15 = 1\cdot15 = 3 \cdot 5$, the only possibilities are
$$(x_1 + x_2 + x_3, y_1 + y_2 + y_3 + y_4) = (1, 15), (15, 1), (3, 5), (5, 3)$$
Since the variables are all at least $1$, we can rule out the cases $x_1 + x_2 + x_3 = 1$ and $y_1 + y_2 + y_3 + y_4 = 1, 3$, because the minimum possible values of these sums ($3$ and $4$ respectively) exceed those assignments.
This leaves us with
$$(x_1 + x_2 + x_3, y_1 + y_2 + y_3 + y_4) = (3, 5)$$
Here we see that the only possible assignment for $x_1, x_2, x_3$ is $x_1 = x_2 = x_3 = 1$. So we only need to count the number of solutions to
$$y_1 + y_2 + y_3 + y_4 = 5$$
which is possible via the Stars and Bars method: there are $\binom{5-1}{4-1}=4$ solutions. In fact, you don't even need this at all if you consider the fact that $y_1, y_2, y_3, y_4 \ge 1$: exactly one of them equals $2$ and the rest equal $1$, giving the same count of $4$. |
Degree of splitting field over $\mathbb Q$ | The degree is $2$ since it is generated by $i$ such that $i^2+1=0$. |
How many perms of 4 from letters of MATHEMATICS? | $^8C_4 * 24$ is $1680$, not $672$. |
System of equations and perturbation methods | Write $Z(x,y,\epsilon) = Z_0(x,y) + \epsilon Z_1(x,y)$. For $\epsilon=0$, stationary points of $Z = Z_0$ satisfy
$$
y^2 F'(x) = 0 \quad \text{and} \quad 2 y F(x) = 0.
$$
Notice that any point $(x_0,0)$ obeys these equations, regardless of the value of $x_0$, as long as $x_0$ is in the domain of $F$. Furthermore, double roots of $F$ (for which $F(x_0) = 0$ and $F'(x_0) = 0$) are automatically stationary points of $Z_0$; therefore, the total set of stationary points of $Z_0$ consists of the union of lines
$$
\{(x,y)\;|\;y=0\} \,\cup\,\{(x,y) \;|\; F(x)=0\;\text{and}\;F'(x)=0\}.
$$
So far, so good. Now, however, we take a closer look at the perturbation
$$
Z_1(x,y) = -y^2\left[1 + \text{log}\left(\frac{1}{y}-x\right)\right].
$$
Here, we encounter the problem that the logarithm is only defined when $\frac{1}{y}-x > 0$. This region in the plane is bounded by the hyperbola $x y = 1$ and by the line $y=0$; it looks like this:
In particular, the (orange) line $\left\{ (x,y)\,|\,y=0\right\}$ is on the boundary of the logarithm domain. However, the limit $\lim_{y \downarrow 0} Z_1(x,y)$ exists (check this!), so $Z_1$ extends continuously to the horizontal axis, which we may therefore include in its domain. Furthermore, both limits $\lim_{y\downarrow 0} \frac{\partial Z_1}{\partial x}$ and $\lim_{y\downarrow 0} \frac{\partial Z_1}{\partial y}$ exist and are equal to zero (check this!), so the entire horizontal axis consists of stationary points of $Z_1$. We have just seen that the horizontal axis consists of stationary points of $Z_0$, so we conclude that the entire horizontal axis consists of stationary points of $Z = Z_0 + \epsilon Z_1$. This is true for all $\epsilon$, not necessarily small. No need for perturbation theory here.
What about the other stationary points of $Z_0$, that are characterised by double roots of $F$? Here, we do need perturbation theory. So, let's take a point $(x_0,y_0)$ such that $x_0$ is a double zero of $F$, i.e. $F(x_0) = 0$ and $F'(x_0)=0$. The value of $y_0$ is as yet unspecified. However, if we want to have any chance at all for $(x_0,y_0)$ to be a stationary point of $Z = Z_0 + \epsilon Z_1$, we must at the very least have that $(x_0,y_0)$ is in the domain of $Z_1$ -- otherwise it doesn't make any sense to talk about the value of $Z_1(x_0,y_0)$, because it doesn't exist. So, we must assume that $y_0$ is such that $(x_0,y_0)$ lies somewhere inside the blue region in the image above. Since we've covered the case $y_0 = 0$ already above, we have the following conditions on $(x_0,y_0)$:
$$
F(x_0) = 0,\quad F'(x_0) = 0,\quad \frac{1}{y_0} - x_0 > 0\quad\text{and}\quad y_0 \neq 0.
$$
Only for these points can you start to apply perturbation theory. As the perturbation is regular, you just have to substitute $x = x_0 + \epsilon x_1 + \mathcal{O}(\epsilon^2)$ and $y = y_0 + \epsilon y_1 + \mathcal{O}(\epsilon^2)$, and expand the resulting expressions up to second order in $\epsilon$. I leave this up to you, but I can tell you that I get, at order $\epsilon$, the equations
$$
x_1 F''(x_0) + \frac{y_0}{1 - x_0 y_0} = 0
$$
and
$$
\frac{1}{1-x_0 y_0}-2-2\,\text{log}\left(\frac{1}{y_0}-x_0\right) = 0.
$$ |
Maximal unramified outside $p$ extension contained in $\mathbb Q(\zeta_n)$ | Yes. Any proper extension field $L$ of $\mathbb{Q}(\zeta_{p^a})$ that is contained in $\mathbb{Q}(\zeta_n)$ can be written as the compositum $K \mathbb{Q}(\zeta_{p^a})$ with $K = L \cap \mathbb{Q}(\zeta_k)$ and $[K \colon \mathbb{Q}] \geq 2$. Since $\mathbb{Q}$ admits no unramified extensions, some prime $q$ ramifies in $K/\mathbb{Q}$. From the theory of cyclotomic fields, $q$ must divide $k$, so $q \neq p$. Thus, the inertial degree of $q$ in $L/\mathbb{Q}$ is $\geq 2$. Since $q \neq p$, then also $\mathbb{Q}(\zeta_{p^a})/\mathbb{Q}$ is unramified above $q$. Through multiplicativity of inertial degrees, it must happen that the primes above $q$ in $\mathbb{Q}(\zeta_{p^a})$ ramify in $L$, and this answers your question.
When one studies your question in the context of class field theory, it becomes a special case of an interesting more general question: given a number field $F$, what can we say about the maximal Abelian extension $M$ of $F$ that is unramified away from some prime $p$? Work on this question has been done by George Gras, Thong Nguyen Quang Do, Jean-François Jaulent, and others. In general, it is easier to work locally, and we split $\mathrm{Gal}(M/F)$ into a profinite $p$-group and a group of order relatively prime to $p$. The latter part is much easier to understand.
The harder part is to understand the maximal Abelian extension of $F$ of $p$-power order that is unramified away from $p$. Here is a special case: if $F$ is a totally real number field (i.e. in every embedding into $\mathbb{C}$, the image of $F$ is in $\mathbb{R}$), then the maximal subfield of $F(\zeta_{p^a})$ of $p$-power degree over $F$ is totally real and unramified away from $p$. It can happen that every such extension of $F$ of $p$-power degree is contained in one of these fields, as is the case with $\mathbb{Q}$ (as your question shows), but other times this is false. What is interesting is that the existence of other such fields is determined by simple invariants of $F$, the main ones being the class group and the $p$-adic regulator. Leopoldt's conjecture states that you can never find an infinite tower of such extensions that is disjoint from the cyclotomic one -- it is still open. |
How we can find a straight line so that at least p percent of the points are exactly on that line? | You can solve it in $O(n^2)$ time, assuming your coordinates are rational. If they are real (and have to deal with floating point inaccuracies) it will take $O(n^2 \log n)$ time.
This is because we'll be using a multiset data structure. In the case of rational numbers I assume the existence of an efficient hash function that allows us to guarantee $O(1)$ insertion and $O(1)$ time getting the count of the most common element. When that hash function doesn't exist you can do the same with a multiset based on trees, but the time guarantee for insertion goes down to $O(\log n)$.
Given that, what is the algorithm? For each point $p$ you initialize an empty multiset and loop over all other points $q$. Compute the slope $\frac{q_y - p_y}{q_x - p_x}$ (treating vertical lines as a special value) and insert it into the multiset. After you're done looping, you ask how many times the most common element occurs. If this count, plus one for $p$ itself, is at least $np/100$, you can recover the line from the slope and $p$ and return it.
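Here is a minimal Python sketch of that loop (my own code, not from the question); it represents slopes exactly with `fractions.Fraction`, uses `None` as the special slope for vertical lines, and uses `collections.Counter` as the multiset:

```python
from fractions import Fraction
from collections import Counter

def line_with_p_percent(points, p):
    """Return (anchor point, slope) of a line through at least p percent
    of the points (slope None meaning vertical), or None if none exists."""
    n = len(points)
    need = n * p / 100
    for i, (px, py) in enumerate(points):
        slopes = Counter()
        for j, (qx, qy) in enumerate(points):
            if i != j:
                slopes[None if qx == px else Fraction(qy - py, qx - px)] += 1
        if slopes:
            slope, count = slopes.most_common(1)[0]
            if count + 1 >= need:        # +1 counts the anchor point itself
                return (px, py), slope
    return None

print(line_with_p_percent([(0, 0), (1, 1), (2, 2), (3, 5)], 75))
# ((0, 0), Fraction(1, 1)): the line y = x through three of the four points
```

With exact rationals the hashing assumption above holds and the whole loop is $O(n^2)$. |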
Converse of Equal Area Theorem for Medians | Let $[AFG] = [AGE] = [BDG] = x$, $[GDC] = y$, $[GEC] = z$, and $[BFG] = w$.
Then, because $\frac{BD}{DC} = \frac{x}{y} = \frac{2x+w}{x+y+z}$ and $\frac{AE}{EC} = \frac{x}{z} =\frac{2x+w}{x+y+z}$, we get $\frac{x}{y} = \frac{x}{z}$, so $y = z$.
Now, $\frac{AF}{FB} = \frac{x}{w} = \frac{2x+y}{w+x+y}$. Therefore, $x^2 + xy = wx+wy \implies x(x+y) = w(x+y) \implies x = w$.
Also, from $\frac{BD}{DC} = \frac{x}{y} = \frac{3x}{x+2y}$, we find that $x^2 + 2xy = 3xy \implies x^2 = xy \implies x = y$.
To finish, $\frac{AE}{EC} = \frac{CD}{DB} = \frac{BF}{FA} = \frac{x}{x} = 1$, so we are done. |
Just How Independent are Events anyways? | Assuming that the generators are independent and of known bias, no. Knowing some subset of the coin flips, even if that subset was chosen randomly, does not give you any information about the remaining coin flips. Saying "but $8$ heads is super unlikely so since I've seen $6$ it's much likely that it's $6$ and $2$ rather than $8$ and $0$" is just the gambler's fallacy.
Assuming you agree that the traditional gambler's fallacy is a fallacy, the way I would assure you that the random element doesn't matter is this: instead of flipping coins one by one and looking after each flip, flip a large number of them without looking, rearrange them, and then pick a random number $n$ and uncover $n$ coins one by one. Symmetry makes gambling on the next uncover the same as being asked to gamble on a coin flip after $n$ observations (where $n$ was chosen randomly by the casino). If we shout 'tails is due! I bet on tails' we are guilty of the gambler's fallacy. But this is just what we'd be doing in your situation.
Addendum
I suppose, since you never explicitly said they were independent or of known bias in your question, I should add that if they are not independent (or you don't know if they're independent) or are of unknown bias, then all bets are off. Still, the most plausible scenario here actually calls for you to predict more heads if you see a lot of heads in your first sample: seeing a lot of heads seems like evidence that these machines are biased toward flipping heads. |
Does $1/n$ converge to $0$ in $\mathbb{R}$ with the discrete metric | Only sequences that are eventually constant can converge in discrete metric. Assume $1/n$ converges to $x$. Then $\{x\}$ is an open set that only contains $x$ and doesn’t contain the tail of $1/n$. So a contradiction. |
Change in Angle Between Vector and Positive x axis | If $\theta$ in your spherical system is going to have the same meaning as $\theta$ in the polar coordinates, then it should be $130$. The change formula is
$$
\theta_{\rm spherical}=\arctan\left(\frac yx\right)=\arctan\left(\frac{8\sin 130}{8\cos 130}\right)=\arctan(\tan(130))=130.
$$ |
Equivalent codes: Are these two approaches the same? | Yes. This is exactly the MacWilliams equivalence theorem (sometimes also called extension theorem). |
Constant-Width Curves and Circles | What is a property true for circles but false for constant-width curve?
There are many such properties. The most obvious ones that I can think of are
A circle has a point (its midpoint) which has the same distance (the radius) to every point on the curve.
A circle has constant and non-zero curvature.
How do I prove that a constant-width curve has constant-width curve?
Do you mean to ask about how to prove that a curve has constant width? You'll have to consider all pairs of parallel lines which touch the curve without cutting into its interior. If the curve is convex, then these lines are tangents, so you'd be interested in the pair of tangents for every angle. Perhaps you can use that angle as a parameter. But it really depends on how you describe the curve in the first place. |
Invariant proof for a commutation identity involving the Hodge stars on $V,V^*$ | It seems that you have a sign error. The one before last line in your calculation should end with $(-1)^{d-1}$ unless you use a strange sign convention for the pairing $\Lambda^k(V) \times \Lambda^k(V^{*}) \rightarrow \mathbb{R}$.
I can offer an alternative point of view and argument that doesn't use coordinates directly but does use another property of the Hodge star which I don't see how one can prove without introducing a basis. In what follows, all the vector spaces involved are real and finite dimensional. To fix notation, I will denote by $\Lambda(V)$ the exterior algebra on $V$ and use the pairing $\Lambda^k(V) \times \Lambda^k(V^{*}) \rightarrow \mathbb{R}$ given by
$$ (v_1 \wedge \dots \wedge v_k , \varphi^1 \wedge \dots \wedge \varphi^k) = \det(\varphi^i(v_j)) $$
in order to identify elements of $\Lambda^k(V^{*})$ as functionals on $\Lambda^k(V)$.
Given an inner product $\left< \cdot, \cdot \right>_V$ on $V$, it induces naturally an inner product $\left< \cdot, \cdot \right>_{\Lambda(V)}$ on $\Lambda(V)$. This construction is functorial in the sense that if $T \colon (V, \left< \cdot, \cdot \right>_V) \rightarrow (W, \left< \cdot, \cdot \right>_W)$ is an isometry then $\Lambda(T) \colon (\Lambda(V), \left< \cdot, \cdot \right>_{\Lambda(V)}) \rightarrow (\Lambda(W), \left< \cdot, \cdot \right>_{\Lambda(W)})$ is also an isometry of inner product spaces. This can be verified without choosing bases.
Given an inner product $g = \left< \cdot, \cdot \right>_V$ on $V$ and an orientation $\omega \in \Lambda^{\text{top}}(V)$ that is compatible with the inner product in the sense that $\left< \omega, \omega \right> = 1$, we can define a linear operator $\star_{(V,g,\omega)} = \star_V \colon \Lambda(V) \rightarrow \Lambda(V)$ that is determined uniquely by the following two properties:
The operator $\star_V$ maps $\Lambda^k(V)$ into $\Lambda^{n-k}(V)$ for all $0 \leq k \leq n = \dim V$.
For all $0 \leq k \leq \dim V$ and $\alpha, \beta \in \Lambda^k(V)$ we have
$$ \alpha \wedge \star_V (\beta) = \left< \alpha, \beta \right>_{\Lambda^k(V)} \omega. $$
The two properties above determine $\star_V$ uniquely since the pairing $\Lambda^k(V) \times \Lambda^{n-k}(V) \rightarrow \mathbb{R}$ induced by the choice of orientation is non-degenerate. This involves some argument that uses the structure of $\Lambda^k(V)$ and is best seen by choosing a basis.
The Hodge star is an isomorphism. This follows from the defining property: $\alpha \wedge \star \alpha = \left< \alpha, \alpha \right>_{\Lambda^k(V)} \omega$, so if $0 \neq \alpha \in \Lambda^k(V)$ and $\star \alpha = 0$ then $\left< \alpha, \alpha \right>_{\Lambda^k(V)} = 0$, which is impossible. Since $\star$ behaves well with respect to the grading and the dimensions are right (I guess this uses a basis argument in the background), the result follows.
The Hodge star construction is natural. Namely, if $T \colon (V, g, \omega) \rightarrow (W, h, \eta)$ is a bijective isometry between the inner product spaces $(V,g)$ and $(W,h)$ that respects the orientations in the sense that $\Lambda^{\text{top}}(T)(\omega) = \eta$ then $\star_W \circ \Lambda(T) = \Lambda(T) \circ \star_V$. That is, the following diagram commutes:
$$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{lll}
\Lambda(V) & \ra{\star_V} & \Lambda(V) \\
\da{\Lambda(T)} & & \da{\Lambda(T)} \\
\Lambda(W) & \ra{\star_W} & \Lambda(W).
\end{array}
$$
In fancy language, $\star$ defines a natural automorphism of the functor $$ (V,\left< \cdot, \cdot \right>_V,\omega) \xrightarrow[]{\Lambda} (\Lambda(V), \left< \cdot, \cdot \right>_{\Lambda(V)}). $$
This can be checked from the defining property together with the previous item without choosing a basis by letting $\varphi = \Lambda(T)^{-1} \circ \star_W \circ \Lambda(T)$ and then checking that $\varphi$ also satisfies the defining property of $\star_V$. By uniqueness, $\star_V = \varphi$.
The Hodge star is an isometry. This is usually proved by analyzing explicitly the Hodge star action on an orthonormal basis and unfortunately I don't see how one can avoid here an argument that is more or less comparable to choosing an orthonormal basis. The point is that any proof must use the explicit form of the inner product on $\Lambda(V)$ which is defined on elementary $k$-wedges using the inner product on $V$ and then extended linearly. The abstract characterizing property doesn't even make it clear that one can choose a basis of $\Lambda^k(V)$ of elementary $k$-wedges that $\star$ sends to elementary $n-k$-wedges and without this information, not much more can be deduced.
If $\dim V = n$ and $\alpha \in \Lambda^k(V)$ then $(\star_V \circ \star_V)(\alpha) = (-1)^{k(n-k)} \alpha$. This follows from (and is actually equivalent to) the previous item and the defining property as
$$ (\star \beta) \wedge (\star (\star \alpha)) = \left< \star \beta, \star \alpha \right> \omega = \left< \alpha, \beta \right> \omega = \alpha \wedge \star \beta = (\star \beta) \wedge ((-1)^{k(n-k)} \alpha) $$
for all $\beta \in \Lambda^k(V)$ and since $\star$ is an isomorphism and the pairing is non-degenerate, we get the required result.
Finally, let us use the properties above to prove the result. Denote by $T \colon V \rightarrow V^{*}$ the isometry obtained using the inner product on $V$ (so $T(v) = \left< v, \cdot \right>$). Note that if
$$u_1 \wedge \dots \wedge u_k, v_1 \wedge \dots \wedge v_k \in \Lambda^k(V)$$
we have
$$ (\Lambda(T)(u_1 \wedge \dots \wedge u_k))(v_1 \wedge \dots \wedge v_k) = (Tu_1 \wedge \dots \wedge Tu_k)(v_1 \wedge \dots \wedge v_k) \\ = \det(T(u_i)(v_j)) = \det( \left< u_i, v_j \right> ) = \left< u_1 \wedge \dots \wedge u_k, v_1 \wedge \dots \wedge v_k \right>. $$
By bilinearity, we get that $(\Lambda(T)(\alpha))(\beta) = \left< \alpha, \beta \right>$ for all $\alpha, \beta \in \Lambda^k(V)$.
Now let $\varphi \in \Lambda^k(V^{*})$ and choose $\alpha \in \Lambda^k(V)$ such that $\Lambda(T)(\alpha) = \varphi$. By the naturality of the Hodge star, we get for all $\beta \in \Lambda^{n-k}(V)$
$$ (\star_{V^*} \varphi)(\beta) = ((\star_{V^{*}} \circ \Lambda(T))(\alpha))(\beta) = ((\Lambda(T) \circ \star_V)(\alpha))(\beta) = (\Lambda(T)(\star_V(\alpha)))(\beta) = \left< \star_V \alpha, \beta \right> \\ = (-1)^{k(n-k)} \left< \alpha, \star_V \beta \right> = (-1)^{k(n-k)}(\Lambda(T)(\alpha))(\star_V \beta) = (-1)^{k(n-k)}\varphi(\star_V \beta). $$
Your result is obtained by taking $k = 1$ and $\beta = v_1 \wedge \dots \wedge v_{n-1}$ (with $\varphi$ instead of $\alpha$ and $n$ instead of $d$).
Addendum: There is an argument that proves that the Hodge star is an isometry that doesn't involve analyzing the action of $\star$ on an orthonormal basis and is more coordinate-free, but it requires much more work. Let $(U,g,\omega)$ and $(V,h,\nu)$ be two finite dimensional oriented inner product spaces and consider the direct sum $(U \oplus V, g \oplus h)$. I will continue to denote by $g$ the inner product induced by $g$ on $\Lambda(U)$, and similarly for $V$.
Lemma: There is a natural isometry $\varphi \colon (\Lambda(U) \otimes \Lambda(V), g \otimes h) \rightarrow (\Lambda(U \oplus V), g \oplus h)$ of bi-graded graded-commutative algebras endowed with an inner product. The product structure on the tensor product $\Lambda(U) \otimes \Lambda(V)$ is defined by the usual formula for the product of graded-commutative algebras:
$$ (\alpha \otimes \beta) \hat{\wedge} (\gamma \otimes \delta) := (-1)^{\deg \beta \deg \gamma} (\alpha \wedge \gamma) \otimes (\beta \wedge \delta). $$
The isomorphism is determined by its action on elementary tensors by the formula
$$ \varphi((u_1 \wedge \dots \wedge u_i) \otimes (v_1 \wedge \dots \wedge v_j)) = u_1 \wedge \dots \wedge u_i \wedge v_1 \wedge \dots \wedge v_j. $$
The proof is a tedious verification that everything makes sense, is well-defined and behaves as expected. The reason we get an isometry is that $U$ and $V$ are orthogonal inside $U \oplus V$ and so when we compute the induced inner product, the terms that mix vectors from $U$ and $V$ die.
In particular, using $\varphi$, we can define a volume form on $U \oplus V$ by $\varphi(\omega \otimes \nu)$ (this is the standard induced volume form on a direct sum) and consider the Hodge star of $(\Lambda(U \oplus V), g \oplus h, \varphi(\omega \otimes \nu))$.
Lemma: Given $U,V$ as above, the following diagram commutes:
$$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{lll}
\Lambda(U) \otimes \Lambda(V) & \ra{\varphi} & \Lambda(U \oplus V) \\
\da{(-1)^{\sigma} \star_U \otimes \star_V} & & \da{\star_{U \oplus V}} \\
\Lambda(U) \otimes \Lambda(V) & \ra{\varphi} & \Lambda(U \oplus V).
\end{array}
$$
Here, $\sigma$ is a sign factor that depends on the bi-degree of elements in $\Lambda(U) \otimes \Lambda(V)$ that will be made explicit.
Proof: Let $\alpha, \gamma \in \Lambda(U)$ and $\beta, \delta \in \Lambda(V)$ be homogeneous elements. We calculate
$$ \varphi(\alpha \otimes \beta) \wedge \star(\varphi(\gamma \otimes \delta)) = \left< \varphi(\alpha \otimes \beta), \varphi(\gamma \otimes \delta) \right>_{g \oplus h} \varphi(\omega \otimes \nu) = \varphi( \left< \alpha \otimes \beta, \gamma \otimes \delta \right>_{g \otimes h} \omega \otimes \nu) = \varphi( (\left< \alpha, \gamma \right>_g \omega) \otimes (\left< \beta, \delta \right>_h \nu)) = \varphi( (\alpha \wedge \star_U \gamma) \otimes (\beta \wedge \star_V \delta)) = (-1)^{\deg \beta \deg \star_U \gamma} \varphi( (\alpha \otimes \beta) \hat{\wedge} (\star_U \gamma \otimes \star_V \delta)) \\
= (-1)^{\deg \beta \deg \star_U \gamma} \varphi(\alpha \otimes \beta) \wedge (\varphi \circ (\star_U \otimes \star_V))(\gamma \otimes \delta).$$
By taking $\alpha, \beta$ with $\deg \alpha = \deg \gamma$ and $\deg \beta = \deg \delta$ and using the non-degeneracy of the (bi)-pairing, we see that the diagram commutes up to the sign factor given by
$$ \star\varphi(\gamma \otimes \delta) = (-1)^{(\dim U - \deg \gamma)\deg \delta} \varphi(\star \gamma \otimes \star \delta).$$
Of course, an alternative proof would involve choosing orthonormal bases for $U,V$ and analyzing the operators explicitly. A less abstract way of stating the proposition above is that if $\gamma \in \Lambda(U)$ and $\delta \in \Lambda(V)$ and $U \perp V$ then
$$ \star(\gamma \wedge \delta) = (-1)^{(\dim U - \deg \gamma) \deg \delta} (\star \gamma) \wedge (\star \delta). $$
Finally, we can prove that $\star$ is an isometry by an inductive argument. Almost by definition, the Hodge star on an $n$-dimensional vector space is an isometry between $\Lambda^n(V)$ and $\Lambda^0(V)$ and in particular it is an isometry on the whole of $\Lambda(V)$ if $\dim V \leq 1$. If $\dim V > 1$, split it into a direct sum of orthogonal subspaces of lower dimension and use the diagram above and the induction hypothesis. The specific sign factor plays no role in showing that $\star$ is an isometry. |
Is this problem NP-complete? | The problem is in $P$, and hence not $NP$-complete unless $P=NP$.
Because we are only concerned with parity, we can treat the matrices and vectors as having entries lying in $\mathbb{F}_2$. Then the condition just becomes $uM=j$, where $j$ is the all-ones row vector. This can be solved with Gaussian elimination.
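For concreteness, here is a small Python/numpy sketch of that elimination over $\mathbb{F}_2$ (my own code, not from the question); it solves $uM=j$ by reducing the transposed augmented system:

```python
import numpy as np

def solve_gf2(M, j):
    """Solve u M = j over GF(2), i.e. M^T u = j; return u or None."""
    A = np.concatenate([M.T % 2, (j % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    rows, cols = A.shape[0], A.shape[1] - 1
    pivots, r = [], 0
    for c in range(cols):
        nz = np.nonzero(A[r:, c])[0]
        if len(nz) == 0:
            continue
        A[[r, r + nz[0]]] = A[[r + nz[0], r]]      # swap a pivot row into place
        for rr in range(rows):                     # XOR-eliminate column c elsewhere
            if rr != r and A[rr, c]:
                A[rr] ^= A[r]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    if any(A[i, :-1].sum() == 0 and A[i, -1] for i in range(rows)):
        return None                                # inconsistent system
    u = np.zeros(cols, dtype=np.uint8)
    for i, c in enumerate(pivots):
        u[c] = A[i, -1]                            # free variables default to 0
    return u

M = np.array([[1, 0, 1], [0, 1, 0]])
u = solve_gf2(M, np.ones(3, dtype=int))
print(u, (u @ M) % 2)    # [1 1] [1 1 1]: every column sum is odd
```

The whole procedure runs in polynomial time, which is the point of the answer. |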
What does the $\ll$ operator signify? This paper says "order" but why didn't they use the big or little Oh notation? is the same thing? | Vinogradov used "<<" to mean what Landau (and almost all following writers) writes as $O(...)$ |
Differentiable functions $f'(x)=f(-x)^4f(x)$ | The defining relation $f'(x)=f^4(-x)f(x)$ implies that $f$ is continuously differentiable (in fact it's even $C^\infty$).
Substituting $-x$ for $x$ in the relation $f'(x)=f^4(-x)f(x)$, one gets $\forall x, f'(-x) = f^4(x)f(-x)$.
Multiplying $f'(x)=f^4(-x)f(x)$ by $f^3(x)$ yields $$\forall x, f'(x)f^3(x) = f^4(-x)f^4(x)=[f^4(x)f(-x)]f^3(-x)=f'(-x)f^3(-x)$$
that is to say $x\mapsto f'(x)f^3(x) $ is even.
Therefore, $\displaystyle \int_{-x}^xf'(t)f^3(t) dt = 2\int_{0}^xf'(t)f^3(t) dt$, and since an antiderivative of $f'f^3$ is $\displaystyle \frac{f^4}4$, this implies $$\forall x, f^4(x)+f^4(-x)=2$$
Replacing $f^4(-x)$ in the defining relation, one gets $$\forall x, f'(x)=2f(x)-f^5(x)$$
By the Picard–Lindelöf theorem, this non-linear differential equation (with initial condition $f(0)=1$) has a unique global solution. Using a computer or other means, one finds that $$x\mapsto \frac{\sqrt[4]{2} e^{2 x}}{\sqrt[4]{e^{8 x}+1}}$$ is a solution of the differential equation, thus it must be the only one.
Conversely, it's easily checked that $x\mapsto \frac{\sqrt[4]{2} e^{2 x}}{\sqrt[4]{e^{8 x}+1}}$ is indeed a solution to the original functional equation.
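For the record, the verification is quick with a CAS; a small sympy sketch (my own):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Rational(2)**sp.Rational(1, 4) * sp.exp(2*x) / (sp.exp(8*x) + 1)**sp.Rational(1, 4)

print(f.subs(x, 0))                                        # 1
print(sp.simplify(sp.diff(f, x) - f.subs(x, -x)**4 * f))   # should print 0
```

The second line simplifying to zero confirms $f'(x)=f(-x)^4f(x)$. |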
derivative and integral | $\int_0^b f(t,s)g(s,x)ds = \int_0^t f(t,s)g(s,x)ds + \int_t^{b}f(t,s)g(s,x)ds$
So $\frac{d}{dt}\int_0^b f(t,s)g(s,x)ds = \frac{d}{dt}[e^{-t}\int_0^t (e^s -e^{-s})g(s,x)ds]+\frac{d}{dt}[(e^t - e^{-t})\int_t^{b}e^{-s}g(s,x)ds] = -e^{-t}\int_0^t (e^s - e^{-s})g(s,x)ds +e^{-t}(e^t-e^{-t})g(t,x)+ (e^t + e^{-t})\int_t^b e^{-s}g(s,x)ds-(e^{t}-e^{-t})e^{-t}g(t,x) = \int_0^t f_t(t,s)g(s,x)ds$
Using the product rule |
Fourier transform of the raised cosine pulse. | $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\on{p}\pars{t} & \equiv
\bbox[5px,#ffd]{\on{sinc}\pars{Rt}
\cos\pars{\pi aRt} \over 1 - 4a^{2}R^{2}t^{2}}
\end{align}
Let $\ds{\pars{~\tau \equiv \pi\verts{aR}t \implies t = {\tau \over \pi\verts{aR}}~}}$ and $\ds{\beta \equiv {1 \over \pi\verts{a}}}$ such that
\begin{align}
\on{p}\pars{t} & \equiv
{\on{sinc}\pars{\beta\tau}
\cos\pars{\tau} \over 1 - 4\tau^{2}/\pi^{2}} =
-\,{\pi^{2} \over 4}\,
{\on{sinc}\pars{\beta\tau}
\cos\pars{\tau} \over \tau^{2} - \pars{\pi/2}^{2}}
\end{align}
Then,
\begin{align}
&\bbox[#ffd,5px]{\int_{-\infty}^{\infty}
\on{p}\pars{t}\expo{-\ic\omega t}\,\dd t}
\\ = &\
-\,{\pi^{2} \over 4\pi\verts{aR}}\
\overbrace{\int_{-\infty}^{\infty}
{\on{sinc}\pars{\beta\tau}
\cos\pars{\tau} \over \tau^{2} - \pars{\pi/2}^{2}}
\,\expo{-\ic\nu\tau}\,\dd\tau}
^{\ds{\equiv {\cal J}}}
\\[2mm] &\
\mbox{where}\quad \nu \equiv {\omega \over \pi\verts{aR}}
\end{align}
\begin{align}
{\cal J} & \equiv
\int_{-\infty}^{\infty}
{\on{sinc}\pars{\beta\tau}
\cos\pars{\tau} \over \tau^{2} - \pars{\pi/2}^{2}}
\,\expo{-\ic\nu\tau}\,\dd\tau
\\[5mm] & =
{1 \over \pi}\int_{-\infty}^{\infty}
{\on{sinc}\pars{\beta\tau}
\cos\pars{\tau} \over \tau - \pi/2}
\,\expo{-\ic\nu\tau}\,\dd\tau
\\[2mm] & -
{1 \over \pi}\int_{-\infty}^{\infty}
{\on{sinc}\pars{\beta\tau}
\cos\pars{\tau} \over \tau + \pi/2}
\,\expo{-\ic\nu\tau}\,\dd\tau
\\[5mm] & =
-{\expo{-\ic\nu\pi/2} \over \pi}\int_{-\infty}^{\infty}
{\on{sinc}\pars{\beta\tau + \beta\pi/2}
\sin\pars{\tau} \over \tau}
\,\expo{-\ic\nu\tau}\,\dd\tau
\\[2mm] &
-{\expo{\ic\nu\pi/2} \over \pi}\int_{-\infty}^{\infty}
{\on{sinc}\pars{\beta\tau - \beta\pi/2}
\sin\pars{\tau} \over \tau}
\,\expo{-\ic\nu\tau}\,\dd\tau
\\[5mm] & =
-{\expo{-\ic\nu\pi/2} \over \pi}\int_{-\infty}^{\infty}
\on{sinc}\pars{\beta\tau + \beta\pi/2}
\on{sinc}\pars{\tau}
\,\expo{-\ic\nu\tau}\,\dd\tau
\\[2mm] &
-{\expo{\ic\nu\pi/2} \over \pi}\int_{-\infty}^{\infty}
\on{sinc}\pars{\beta\tau + \beta\pi/2}
\on{sinc}\pars{\tau}
\,\expo{\ic\nu\tau}\,\dd\tau
\\[5mm] & =
-{2 \over \pi}\Re\bracks{\expo{-\ic\nu\pi/2} \int_{-\infty}^{\infty}
\on{sinc}\pars{\beta\tau + \beta\pi/2}
\on{sinc}\pars{\tau}
\,\expo{-\ic\nu\tau}\,\dd\tau}
\end{align}
\begin{align}
&\int_{-\infty}^{\infty}
\on{sinc}\pars{\beta\tau + \beta\pi/2}
\on{sinc}\pars{\tau}
\,\expo{-\ic\nu\tau}\,\dd\tau
\\[5mm] = &\
\int_{-\infty}^{\infty}
\bracks{{1 \over 2}\int_{-1}^{1}\expo{\ic k\pars{\beta\tau + \beta\pi/2}}\,\,\dd k}
\bracks{{1 \over 2}\int_{-1}^{1}\expo{-\ic q\tau}
\,\dd q}
\expo{-\ic\nu\tau}\,\dd\tau
\\[5mm] = &\
{\pi \over 2}\int_{-1}^{1}\expo{\ic k\beta\pi/2}
\int_{-1}^{1}\
\overbrace{\int_{-\infty}^{\infty}
\expo{\ic\pars{k\beta - q - \nu}\tau}\,\,
{\dd\tau \over 2\pi}}
^{\ds{\delta\pars{k\beta - q - \nu}}}\
\dd q\,\dd k
\\[5mm] = &\
{\pi \over 2}\int_{-1}^{1}\expo{\ic k\beta\pi/2}\
\bracks{-1 < k\beta - \nu < 1}\,\dd k
\\[5mm] = &\
{\pi \over 2}\int_{-1}^{1}\expo{\ic k\beta\pi/2}\
\bracks{{\nu - 1 \over \beta} < k <
{1 + \nu \over \beta}}\,\dd k
\end{align}
Now, you can finish the job. |
Is this claim true about the global minimum of a function? | $f(x)$ being positive when $x > 0$ doesn't seem to be relevant or necessary.
The KEY aspect is $f'$ is continuous and $f'$ has only one root.
If $f''(a)>0$ and $f'(a)=0$, then $f'$ is increasing at $x=a$, so $f'(x) > 0$ for all $x > a$ and $f'(x) < 0$ for all $x < a$ (as $f'$ is continuous it can't "jump" from negative to positive without "going through" zero, but $a$ is the only root). So $f$ is decreasing on $(-\infty, a)$ and increasing on $(a, \infty)$, hence $f(a)$ is an absolute global minimum.
But without knowing $f'$ has only one root we'd have no way of claiming $f(a)$ was a global minimum, although we would know it is a local minimum.
Geometric progression question | assume that the terms are $a, ar, ar^2$
the sum of the first two terms is $a+ar=17.5$
the third term is $ar^2=14/3$
thus you can write $a$ as $\frac{14}{3r^2}$ and put in the first equation
$$ \frac{14}{3r^2}+\frac{14}{3r}=17.5$$
$$ 14+14r=17.5*3r^2$$
now can you solve it |
Given $[D]$ and $\vec p$, can we solve $[D]=\begin{bmatrix}p\\q\end{bmatrix}^{\top}\begin{bmatrix}c\\1...\end{bmatrix}$ for $\vec q$ and $\vec c$? | The notation $[D]: \mathbb R \to \mathbb R^{m \times n}$ confuses me, so I suppose you've meant simply $D \in \mathbb R^{n \times m}$. I also will treat $p, q \in \mathbb R^m$ and $c \in \mathbb R^n$ as column vectors.
Let's simplify the matrix notation:
$$
D =
\begin{bmatrix}
p^\top\\
q^\top
\end{bmatrix}^\top
\begin{bmatrix}
c^\top\\
e^\top
\end{bmatrix} =
\begin{bmatrix}
p & q
\end{bmatrix}
\begin{bmatrix}
c^\top\\
e^\top
\end{bmatrix} =
p c^\top + q e^\top.
$$
Here $e \in \mathbb R^n$ denotes a vector of ones. It is now obvious that the right hand side is linear both in $c$ and $q$.
The least squares solution minimizes the residual's sum of squares that is
$$
E = \sum_{i=1}^m \sum_{j=1}^n (d_{ij} - p_i c_j - q_i)^2 \to \min_{c_j, q_i}.
$$
This is a quadratic problem and the solution can be obtained from the optimality conditions:
$$
0 = \frac{\partial E}{\partial c_j} = \sum_{i=1}^m 2 (d_{ij} - p_i c_j - q_i) (-p_i) =
-2\sum_{i=1}^m p_i (d_{ij} - p_i c_j - q_i), \quad\text{i.e.}\quad p^\top (D - pc^\top - qe^\top) = 0,\\
0 = \frac{\partial E}{\partial q_i} = -2\sum_{j=1}^n (d_{ij} - p_i c_j - q_i), \quad\text{i.e.}\quad
(D - pc^\top - qe^\top) e = 0.
$$
Transposing the first equation we have
$$
(D^\top - cp^\top - eq^\top) p = 0.
$$
Now separating unknowns from the knowns we obtain a system of linear equaions:
$$
(cp^\top + e q^\top) p = D^\top p\\
(pc^\top + qe^\top) e = D e
$$
Using property $a^\top b = b^\top a$ when the product is scalar and noting that $a^\top b$ as a scalar can be safely moved across the product we get
$$
(p^\top p) c + (e p^\top) q = D^\top p\\
(pe^\top) c + (e^\top e) q = De
$$
In matrix form this system can be written as
$$
\begin{bmatrix}
p^\top p I & ep^\top\\
pe^\top & n I
\end{bmatrix}
\begin{bmatrix}
c\\q
\end{bmatrix} =
\begin{bmatrix}
D^\top p\\De
\end{bmatrix}
$$
This system's matrix is singular, but consistent. It can be checked by multiplying with $[e^\top\; -p^\top]$ on the left. The solution is also not unique. If $c_0, q_0$ is a solution then
$$
c = c_0 + \alpha e, \quad q = q_0 - \alpha p.
$$
would also be. A particular solution $c_0, q_0$ may be obtained using the pseudoinverse matrix.
Note that for arbitrary $\alpha$ the difference between $D$ and $pc^\top + qe^\top$ will be the same due to
$$
pc^\top + qe^\top = pc_0^\top + \alpha pe^\top + q_0 e^\top - \alpha pe^\top =
pc_0^\top + q_0 e^\top.
$$
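A small numpy sketch of this recipe (random data and names of my own choosing), using the pseudoinverse for the minimum-norm particular solution:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 7
p = rng.standard_normal(m)
D = rng.standard_normal((m, n))
e = np.ones(n)

# Assemble the singular-but-consistent block system derived above
A = np.block([[(p @ p) * np.eye(n), np.outer(e, p)],
              [np.outer(p, e),      n * np.eye(m)]])
b = np.concatenate([D.T @ p, D @ e])
sol = np.linalg.pinv(A) @ b            # minimum-norm particular solution c0, q0
c, q = sol[:n], sol[n:]
print(np.allclose(A @ sol, b))         # True: the system is consistent
# The fit p c^T + q e^T is invariant under c -> c + a*e, q -> q - a*p
print(np.linalg.norm(D - np.outer(p, c) - np.outer(q, e)))
```

Shifting along the one-parameter family with any $\alpha$ leaves the residual unchanged, as shown above. |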
Derivation of the "Combined Work Formula" | Answer: I have always used direct relation ratios and dimensional analysis to attack problems more difficult than this, still works for simple problems like this. Here it goes:
A takes 9hrs to paint a fence. In one hour he paints $\frac{1}{9}$ fence.
Similarly B takes 5 hrs to paint a fence. In one hour he paints $\frac{1}{5}$.
If both of them worked together, in one hour they would complete $\frac{1}{9}+\frac{1}{5}=\frac{14}{45}$ of a fence (fences/hour).
Now: with combined work at $\frac{14}{45}$ fence per hour, one fence would take $\frac{45}{14}$ hours/fence, which is the reciprocal. So $3$ fences would take $\frac{3\cdot 45}{14}$ hours. There goes your answer.
Hope it helps.
Thanks
Satish |
Arrangement of zeroes and ones | First, list out the $M-N$ zero's as:
$$\wedge_10\wedge_20\wedge_3\dots\wedge_{M-N}0\wedge_{M-N+1}$$
Then let each $\wedge_i$ carry the generating function $g_i(x)=1+x+x^2+\dots+x^L$; this means each $\wedge$ is allowed to hold at most $L$ ones. Since there are $(M-N+1)$ $\wedge$'s, the generating function is $$G_1(x)=\prod_{i=1}^{M-N+1}g_i(x)=(1+x+x^2+\dots+x^L)^{M-N+1},$$ and we solve for $[x_1^N]$, the coefficient of $x^N$: this counts the arrangements in which every run of consecutive ones has length at most $L$. But we are looking for the arrangements that contain a run of exactly $L$ consecutive ones, so we need to subtract the arrangements in which each $\wedge$ holds at most $(L-1)$ ones; this leaves the arrangements with at least one run of exactly $L$ consecutive ones (and none longer).
By the same method as above, the generating function for the arrangements in which each $\wedge$ holds at most $(L-1)$ ones is
$$G_2(x)=(1+x+x^2+\dots+x^{L-1})^{M-N+1}$$
and solve for $[x_2^N]$.
Finally, $[x_1^N]-[x_2^N]$ is the answer.
This is just the idea of the solution; if you want the general closed form, it will take you a little time to actually extract $[x_1^N]$ and $[x_2^N]$.
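If you want to test the idea before chasing the closed form, here is a small sympy/Python sketch (my own; note that $[x_1^N]-[x_2^N]$ counts arrangements whose longest run of ones has length exactly $L$), compared against brute force:

```python
from itertools import combinations
import sympy as sp

def by_gf(M, N, L):
    x = sp.symbols('x')
    g1 = sum(x**i for i in range(L + 1))**(M - N + 1)   # runs of length <= L
    g2 = sum(x**i for i in range(L))**(M - N + 1)       # runs of length <= L-1
    return sp.expand(g1 - g2).coeff(x, N)

def brute(M, N, L):
    total = 0
    for ones in combinations(range(M), N):
        s = ''.join('1' if i in ones else '0' for i in range(M))
        runs = [len(r) for r in s.split('0') if r]
        if runs and max(runs) == L:
            total += 1
    return total

for M, N, L in [(6, 3, 2), (7, 4, 3), (8, 4, 2)]:
    print((M, N, L), by_gf(M, N, L), brute(M, N, L))
```

The two printed counts agree on every test case. |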
Topology of the space of hermitian positive definite matrices | Edit: I think that the first answer below is more efficient and easier to write. Nevertheless, I have added a more concrete approach which might be closer to what you were after.
By some sort of transitivity: recall that every open or closed subset of a locally compact space is a locally compact space with the induced topology. We will use both, for open and for closed.
I consider every space here equipped with the topology induced by the Euclidean norm of $\mathbb{C}^{n\times n}$. Hence every space is Hausdorff. This is good to know, even though your question does not specifically ask about this aspect.
Like every finite-dimensional vector space over $\mathbb{R}$ or $\mathbb{C}$, $\mathbb{C}^{n\times n}$ is locally compact when equipped with the topology induced by any norm.
Clearly, $\mathcal{H}_n^+\mathbb{C}$, the set of positive semidefinite matrices, is closed in the locally compact $\mathbb{C}^{n\times n}$. So $\mathcal{H}_n^+\mathbb{C}$ is a locally compact space.
Now $\mathcal{P}=\{A\in \mathcal{H}_n^+\mathbb{C}\;;\det A>0\}$. By continuity of the determinant, it follows that $\mathcal P$ is open in the locally compact $\mathcal{H}_n^+\mathbb{C}$. Hence $\mathcal P$ is a locally compact space. QED.
Parametrized alternative: fix $A_0$ hermitian positive definite, and denote by $\{t^0_1,\ldots,t_n^0\}$ its (positive) eigenvalues. Now let $\epsilon:=\min t_j^0/2>0$. Then denote by $U_n$ the unitary group and by $\mathcal{H}_n^{++}$ the cone of positive definite hermitian matrices. Now consider the map
$$
\phi:U_n\times \prod_{j=1}^n[t_j^0-\epsilon,t_j^0+\epsilon]\longrightarrow \mathcal{H}_n^{++}
$$
which sends $(U,t_1,\ldots,t_n)$ to $U\operatorname{diag}(t_1,\ldots,t_n)U^*$. Since $U_n$ is compact, the domain is compact. And since $\phi$ is continuous, the range of $\phi$ is a compact subset of $\mathcal{H}_n^{++}$ containing $A_0$. So it only remains to check that this is a neighborhood of $A_0$ in $\mathcal{H}_n^{++}$. To that aim, note that it contains
$$
\phi(U_n\times \prod_{j=1}^n(t_j^0-\epsilon,t_j^0+\epsilon))\ni A_0
$$
i.e. the set of hermitian positive definite matrices with spectrum $\{t_1,\ldots,t_n\}$ such that, up to a permutation, $|t_j-t_j^0|<\epsilon$ for every $j=1,\ldots,n$. By continuity of polynomial roots over $\mathbb{C}$ applied to the characteristic polynomial, this is open in $\mathcal{H}_n^{++}$. QED. |
Point multiplication in elliptic curve | The answer is no. See how the addition on a elliptic curve is motivated by geometry: Adding $P$ and $Q$ is done by finding the third point of intersection between the line through $P$ and $Q$ and the curve, then reflecting at the $x$-axis. For $P+P$ you take the tangent at $P$, find the second intersection point and reflect that. This will in general not coincide with just adding the coordinates.
This is illustrated in the Wikipedia article on Elliptic Curves. |
Diagonizable linear operator and invariant subspace question | I'm missing a couple of details here, but...
Since $W$ is invariant under $T$, $T|_W$, the restriction of $T$ to $W$, makes sense. It should then follow from the fact that $T$ is diagonalisable on $V$ that $T|_W$ is diagonalisable on $W$ (one possible route: the minimal polynomial of $T|_W$ divides that of $T$, which splits into distinct linear factors). Since $T|_W$ is diagonalisable on $W$, there exists a basis $(v_1, \ldots, v_k)$ of $W$ consisting of eigenvectors of $T|_W$. In particular, span$(v_1, \ldots, v_k) = W$, and $(v_1, \ldots, v_k)$ are also eigenvectors of $T$.
Any help with the critical step is appreciated. |
Proof Verification: $4x^2+6xy+4y^2>0$ unless both $x$ and $y$ are equal to $0$ | It is almost correct. The only flaw lies in the fact that you should assume only that $x\neq0$ or $y\neq0$. Therefore, you cannot say that both numbers $x^2$ and $y^2$ are greater than $0$. But you can say that their sum is greater than $0$, and that is quite enough for the proof. |
If $\forall x \exists y : R(x, y)$, then is it true that $y = y(x)$? | You seem to be asking: if for every $x$ there is a $y$ for which $R(x,y)$ holds, can we then define a function $f$ (in the usual sense) such that for every $x$, $R(x,f(x))$ holds?
From a set-theoretic point of view, that statement is in fact none other than the Axiom of Choice.
The Axiom of Choice has many forms, but one form is the following: given a nonempty family of nonempty sets, $S_i$ with $i\in I$ ("nonempty family" means that $I$ is not empty, so we have some sets; "nonempty sets" means that each $S_i$ is not empty), there exists a "choice function": a way to select for each $i\in I$ an element of the corresponding $S_i$. Formally, there exists a function $f\colon I\to \cup S_i$ from the index set to the union of the sets, with the property that $f(i)\in S_i$ for each $i$.
Suppose that by assumption, for every $x$ in our universe $X$, the set $S_x = \{y\mid R(x,y)\}$ is nonempty, and you have a function $f$ with the desired property. The function $f$ will be a "choice function": a function $f:X \to \cup_{x\in X}S_x$ such that for each $x\in X$, $f(x)\in S_x$. Conversely, a choice function for the family $\{S_x\}_{x\in X}$ will yield an $f$ with the desired property. The existence of such functions for every nonempty family of nonempty sets is precisely the Axiom of Choice. Conversely, suppose that the Axiom of Choice fails, and let $\{S_x\}_{x\in X}$ be a nonempty family of nonempty sets for which no choice function exists. Then let $R(x,y)$ be "$y$ is in $S_x$". For every $x$ there exists a $y$ such that $R(x,y)$, since $S_x\neq\emptyset$. But since we are assuming that the family has no choice function, there does not exist any function $f$ such that for every $x$, $f(x)$ is in $S_x$. Thus, the question not only depends on the Axiom of Choice, but is equivalent to the Axiom of Choice.
In some circumstances, the exact nature of the sets $S_x$ mean that you do not need to invoke the Axiom of Choice, because we have some "natural way" of defining the choice. For instance, if every set $S_x$ is a subset of the natural numbers, then you can define $f(x)$ to be "the least element of $S_x$". Another standard example is when $S_x$ is a singleton for every $x$; that is, for each $x$ there exists one and only one $y$ for which $R(x,y)$ holds. In that case, you can define your function $f$ to be "$f(x)$ is the unique $y$ for which $R(x,y)$ holds".
But in general, the Axiom of Choice is, in some sense, "inherently non-constructive". And the thing is, the Axiom of Choice is independent of the usual axioms of set theory. Like the parallel postulate in geometry, you can accept it or not at your taste (the consequences of accepting and of not accepting it are different; for example, if you accept the Axiom of Choice, then that means that there is a way to slice a ball of radius 1 into a finite number of parts, and then rearrange the parts without changing their shape or size, into two balls of radius of 1; this is called the Banach-Tarski paradox; on the other hand, if you want all your vector spaces to have bases, then you must accept the Axiom of Choice. In short, it has good consequences and weird/bad consequences).
Edit: In the situation you have, it is possible to describe a function that will work; however, the "description" is a bit like cheating, in the sense that you can give an explicit rule for what the value of the function will be, but in practice it would be difficult to actually find the value. Namely, for every $\epsilon\gt 0$ you know that the set of $\delta\gt 0$ for which the implication $||v-v_0||\lt\delta\Rightarrow ||f(v)-f(v_0)||\lt\epsilon$ holds is nonempty. In particular, by the Archimedean property, the set $A_{\epsilon}=\{n\in\mathbb{N}| ||v-v_0||\lt\frac{1}{n} \Rightarrow ||f(v)-f(v_0)||\lt\epsilon\}$ is nonempty. This is a nonempty set of positive integers, so it has a least element. You can then define your function $\mathbf{f}\colon (0,\infty)\to(0,\infty)$ by the "formula" $\mathbf{f}(\epsilon)=1/\min(A_{\epsilon})$, the reciprocal of the smallest $n$ in the set. This definition would not require you to invoke the Axiom of Choice.
As to your question 2: you almost had it, I think, but you should not invoke (implicitly) the Axiom of Choice: pick a specific $\epsilon$ (for example, $|A|/2$). You know that there exists a $\delta\gt 0$ that works for that epsilon, so you pick any $\delta_0$ that works and use that. That would be the standard way of proving the proposition at hand. If you assume you have some kind of "black-box function" that will hand you a $\delta$ whenever you hand it an $\epsilon$, you are invoking the Axiom of Choice and you are making your proof somewhat non-constructive. |
Minimum Mean Square Error Estimate Example | We have a prior $P(\vec{v}) \propto e^{-1/2(\vec{v} - \vec{\mu})^T P^{-1} (\vec{v} - \vec{\mu})}$.
We observe that $v_1 = 1$, and we are asked for (moments of) the posterior distribution of $v_2$. So plug $v_1 = 1$ into the above equation. $P^{-1} = \begin{bmatrix} 20/14 & -2/14 \\ -2/14 & 10/14 \end{bmatrix}$, so $\begin{bmatrix} 1 & x \end{bmatrix} P^{-1} \begin{bmatrix} 1 \\ x \end{bmatrix} = 20/14 - 4x/14 + 10x^2/14 = \frac{10}{14}(x-1/5)^2 + C$.
$P(x \mid v_1=1) \propto e^{-\frac{5}{14} (x-1/5)^2}$
So $E[x] = 1/5$ and $\operatorname{Var}[x] = 14/10 = 7/5$.
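The same numbers drop out of the standard Gaussian conditioning formulas; a quick numpy check (assuming, as above, a zero prior mean):

```python
import numpy as np

Pinv = np.array([[20, -2], [-2, 10]]) / 14
P = np.linalg.inv(Pinv)                 # covariance [[5/7, 1/7], [1/7, 10/7]]
v1 = 1.0
mean = P[1, 0] / P[0, 0] * v1           # E[v2 | v1] = S21/S11 * v1
var = P[1, 1] - P[1, 0]**2 / P[0, 0]    # Var[v2 | v1] = S22 - S21^2/S11
print(mean, var)                        # ~0.2, ~1.4
```

This matches $E=1/5$ and $\operatorname{Var}=7/5$. |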
Products of ideals being a subset of their intersection. | $IJ\subseteq IR\subseteq I$
$IJ\subseteq RJ\subseteq J$
Therefore $IJ\subseteq I\cap J$.
An example to show they don't have to be equal:
Consider the ideal generated by $x$ in $\mathbb R[x]/(x^2)$, and use that ideal for both $I$ and $J$. |
Can I write any element from a cyclic group as a power of other any element from that cyclic group? | Let $G$ be a cyclic group and $y,x\in G$ be elements of the group such that $o(x)|o(y)$. First thing to observe is that $H=\langle y \rangle \subset G$ and $K=\langle x \rangle \subset G$, and that both $H$ and $K$ are the unique subgroups of $G$ with orders $o(y)$ and $o(x)$ respectively. Since $H$ is a cyclic group itself, and $o(x)|o(y)$ there's a subgroup $L$ of $H$ with order $o(x)$, which is also a cyclic subgroup of $G$ of order $o(x)$. This tells us that $L=K$, and so $x\in \langle y \rangle$ which in turn implies that $y^k=x$ for some $k\in \mathbb{Z}$ |
Show that partial derivative is continuous | No. Consider
$$
f(x,y)=\begin{cases}
x,& \text{ if } y=0 \\
0,& \text{ otherwise }\\
\end{cases}
$$
That means
$$
f_x(x,y)=\begin{cases}
1,& \text{ if } y=0 \\
0,& \text{ otherwise }\\
\end{cases}
$$
which is not continuous at any point $(x,0)$. |
Taylor series representation of a function | The trick is to let $u = x+2$ so that we want a power series in $u$. Note $x=u-2$ so that $1-2x = 1-2(u-2) = 5 - 2u$ so that we have the expression
$$\frac{5}{5 - 2u} = \frac{1}{1 - \frac{2}{5}u} = \sum \left(\frac{2}{5}\right)^n u^n = \sum \left(\frac{2}{5}\right)^n (x+2)^n$$
(remember the basic geometric series $\sum_{n\geq 0} x^n = \frac{1}{1-x}$, valid for $|x|\lt 1$; here this requires $\left|\frac{2}{5}u\right|<1$, i.e. $|x+2|<\frac{5}{2}$)
Which of the following transforms preserve the angle between two lines? Select all that apply? | My guess would be that you might not agree with the author in regard to one of these matters:
whether scaling stands for uniform scaling or non-uniform scaling (the most likely of the two)
whether preservation of angles is meant as signed angles, in which case flips shouldn't be there. I think this is unlikely. |
Why do we have to use the exponents for this decay problem. | Remember that $dy$ is the differential on $y$, meaning that $dy$ should be the difference between the initial amount and the final amount after the two hours.
Is it true that $\sqrt{(a_1+b_1+c_1)(a_2+b_2+c_2)}\geq \sqrt{a_1a_2}+\sqrt{b_1b_2}+\sqrt{c_1c_2}$? | Square both sides (they are non-negative) and rewrite the terms:
$$\iff(\sqrt{a_1}\sqrt{a_1}+\sqrt{b_1}\sqrt{b_1}+\sqrt{c_1}\sqrt{c_1})(\sqrt{a_2}\sqrt{a_2}+\sqrt{b_2}\sqrt{b_2}+\sqrt{c_2}\sqrt{c_2})\ge(\sqrt{a_1}\sqrt{a_2}+\sqrt{b_1}\sqrt{b_2}+\sqrt{c_1}\sqrt{c_2})^2$$
This is precisely the Cauchy–Schwarz inequality applied to $(\sqrt{a_1},\sqrt{b_1},\sqrt{c_1})$ and $(\sqrt{a_2},\sqrt{b_2},\sqrt{c_2})$.
I need a small hint in my geometry task. | It seems the answer is quite trivial. (Correct me if my logic is wrong.)
We have a symmetric drawing with MNK as the axis of symmetry when we ignore the blue line and the green line.
We can form the circumcircle using M, X, K. By symmetry, X' is another concyclic point. Then, $\angle MXK = \text{red} + \text{blue} = ... = 90^\circ$.
Shifted spherical coordinates | If you use spherical conversions, you get the equation
$$0 \leq \rho^2 \leq \rho \sin(\phi) \cos(\theta) \implies 0 \leq \rho \leq \sin(\phi)\cos(\theta)$$
So long as you are careful about how you do your bounds for $\phi$ and $\theta$ wouldn't that solve your problem? |
Open Functions and Continuity | I'll assume $M$ and $N$ are just topological spaces.
(a) No. The sign function $\mathbb{R} \to \{-1,0,1\}$ is open but not continuous at $0$. Consider $\mathbb{R}$ with the usual topology and $\{-1,0,1\}$ with the discrete topology. No open set around $0$ in $\mathbb{R}$ is sent into the open set $\{0\} \subset \{-1,0,1\}$.
(b) Yes. The continuity of $f^{-1}$ is the openness of $f$.
(c) Yes. The openness of $f$ is the continuity of $f^{-1}$.
(d) No. Take a continuous surjection which is constant on $[0,1]$. The image of $(0,1)$ will not be open.
(e,f) Check this answer. |
What is the length of the side of a regular hexagon and why? | The hexagon can be partitioned into six equilateral triangles (three of them are shown in the picture).
Find area of shaded area in curve with range of values for $y$ | Let's consider the region as follows:
As you said before, we need just one half and then multiply it by $2$. Indeed, $y=32-2x^2$ gives us $x^2=\frac{32-y}{2}$ and then $x=\pm \sqrt{\frac{32-y}{2}}$. We need here just the $+$ sign, since we consider the right half of the $xy$-plane. Now you should work on the following integral:
$$\int_{14}^{24} x dy$$ |
A question about Quadratic residue | Hint: suppose for the sake of contradiction that $-1-a^2$ is not a quadratic residue mod $p$, for any $a \in \{0, 1, 2, \ldots, p-1\}$. Then count how many quadratic nonresidues you get as a result. Use the fact that $x^2 \equiv y^2 \mod p$ if and only if $x \equiv \pm y \mod p$. |
If $(W,<)$ is a well-ordered set and $f : W \rightarrow W$ is an increasing function, then $f(x) ≥ x$ for each $x \in W$ | In the proof we pick $z$ to be the least element, then we set $w = f(z)$; we know from the definition of the set that $f(z) < z$, so $w < z$. However, $f$ is a (strictly) increasing function, so $w < z$ implies $f(w) < f(z) = w$. This in turn implies that $w$ should be an element of $X$, but $w < z$, which contradicts that we picked the least element (which we could do because $W$ is well-ordered).
An alternative (and, I think, a bit more intuitive) view of this proof is that the existence of an element $z$ such that $$f(z) < z$$ implies that
$$z > f(z) > f(f(z)) > f^{(3)}(z) > f^{(4)}(z) > \ldots$$
is an infinite strictly decreasing sequence, which is a contradiction due to the fact that $W$ is well-founded.
I hope this helps ;-) |
Solve by determinant | A reasonable attempt is to use the Rational root theorem, to find that if $p(x)$ has rational roots, then they must be in $\{\pm 1, \pm 3\}$. |
Is there any difference between statistical learning and machine learning? | These are pretty much the same thing. But if you want to make a distinction: for me, machine learning, as a subfield of artificial intelligence, takes a more practical (computer-science) point of view, while statistical learning is a general (abstract) theory that can be applied with the different methods and algorithms that your course contains.
See https://en.wikipedia.org/wiki/Statistical_learning_theory |
What is the extension of function fields corresponding to the Artin-Schreier isogeny? | You don't have the right definition of the minimal polynomial. The minimal polynomial $f$ for the extension $k(x^q-x) \hookrightarrow k(x)$ should have coefficients in $k(x^q-x)$ and satisfy $f(x) = 0$. It's given by $f(T) = T^q-T - (x^q-x)$.
Here is an argument for the irreducibility of $f$: Assume we have a polynomial of degree $< q$ with coefficients in $k(x^q-x)$ annihilating $x$, say $\sum_{i=0}^{q-1} f_i(x^q-x) x^i = 0$. Multiplying with a suitable factor we can assume that the $f_i$ are polynomials. For $a \in \mathbb F_q$ we get $\sum_{i=0}^{q-1} f_i(0) a^i = 0$ by setting $x = a$ in the relation. Now $\sum_{i=0}^{q-1} f_i(0)T^i$ has $q$ distinct roots, so it is the zero polynomial. Hence $f_i(0) = 0$ for all $i$, so we can cancel $T$ from the $f_i(T)$ and obtain another relation with $f_i$ of smaller degree. The argument can be repeated indefinitely, so we must have $f_i = 0$ for all $i$. |
Prime ideals in a Dedekind domain | You can show that the ordinary definition of primes in commutative rings is equivalent to this one:
$P$ is prime if whenever $A,B$ are ideals such that $AB\subseteq P$, then $A\subseteq P$ or $B\subseteq P$.
By applying this to the product of prime powers in your factorization, you get an affirmative answer.
What does the "n choose multiple numbers" symbol stands for? | See multinomial coefficient for an explanation:
It is denoted, $$\displaystyle \binom{n}{k_1, k_2..., k_m} = \dfrac{n!}{k_1! \times k_2! \times \cdots \times k_m!}$$ and in your case, $(5\;\text{choose}\, 2, 2, 1)$ is denoted and computed as: $$\binom{5}{2, 2, 1} = \frac{5!}{2!\,2!\,1!} = \dfrac{120}{2\cdot 2 \cdot 1} = \dfrac{120}{4} = 30$$
Recall, for any number $n,\,$ $n\,! = n(n-1)(n-2)\cdots(2)(1)$. |
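For computing such coefficients directly, here is a minimal sketch using only Python's standard library:

```python
from math import factorial

def multinomial(n, *ks):
    """n! / (k_1! * k_2! * ... * k_m!); the k's must sum to n."""
    assert sum(ks) == n
    result = factorial(n)
    for k in ks:
        result //= factorial(k)
    return result

print(multinomial(5, 2, 2, 1))  # 30
```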
When is the difference of two consecutive positive cubes a perfect square? | There are infinitely many of them. Let
$$f(m,n) \stackrel{def}{=} (m+1)^3 - m^3 - n^2$$
It is clear the equation $f(m,n) = 0$ has a trivial solution $(m,n) = (0,1)$.
By brute force, one can verify
$$f(7m+4n+3,12m+7n+6) = f(m,n)$$
This means if $(m,n)$ is a solution for $f(m,n) = 0$, so does $$(m',n') = (7m+4n+3,12m+7n+6).$$
Start from the trivial solution $(0,1)$, we can use this to generate infinitely many positive integer solutions for the equation $f(m,n) = 0$. |
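Here is a minimal sketch that generates solutions from the recurrence above and checks them against $f$:

```python
def f(m, n):
    return (m + 1)**3 - m**3 - n**2

m, n = 0, 1  # the trivial solution
for _ in range(5):
    assert f(m, n) == 0
    print(m, n)  # each pair satisfies (m+1)^3 - m^3 = n^2
    m, n = 7*m + 4*n + 3, 12*m + 7*n + 6
```

This prints $(0,1), (7,13), (104,181), \ldots$, confirming the infinite family.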
A man can cut the grass in $T$ minutes. What part of the lawn can be cut in $30$ minutes? | The correct answer is (a).
His cutting speed is $1/T$, meaning one lawn per $T$ minutes.
So if you multiply his cutting speed by the time used, you get the part of the lawn:
$$1/T\cdot 30 =\frac{30}{T}$$
Explanation
The part of the lawn mown per unit time is constant.
This means
$$\frac{\text{part of lawn mown}}{\text{time used}}= \text{cutting speed}$$
or
$$\frac{s_i}{t_i} = v \quad \forall i$$
with $s_i$, $t_i$, and $v$ defined as in the text equation above.
We are given a part $s_1=1$ and $t_1 =T$, so we know
$$v=\frac{1}{T}$$
And we know $t_2=30$ and want to know $s_2$.
So
$$s_2=v\cdot t_2 = \frac{30}{T}$$ |
Is the function uniformly continuous? . | The derivative of $f_1$ is bounded: you can prove that $\|f'_1\|_\infty=1$; so it is $1$-Lipschitz. Then $f_n(x)=f_1(nx)$ is $n$-Lipschitz. |
Expressing the local volatility in terms of the implied volatility | You are missing terms. The partial derivatives in the first formula must take into account the explicit dependence of the option price on expiry $T$ and strike $K$ and implicit dependence when using an implied volatility $\hat{\sigma}(T,K)$ that also depends on $T$ and $K$.
Using the ordinary derivative notation in the first formula to avoid confusion, we would expand as follows
$$\frac{d}{dT} C(S,K,T, r, \hat{\sigma}(T,K)) = \frac{\partial C}{\partial T} + \frac{\partial C}{\partial \hat{\sigma}} \frac{\partial \hat{\sigma}}{\partial T}$$ |
Fixed Points in Phase Portraits | The existence and uniqueness theorem stated a page earlier in the book tells you that the initial value problem has a unique solution close to $t=0.$
When the book says "Consider," it is providing an example, being $x_0=0.$
Also note that the solution (in the case of $x_0=0$) is $x(t)=\tan(t)$ and not $x(t)=\tan(x)$. |
Converting between SVD and Eigenvector-based expressions | Hint : The eigenvalue decomposition of $X'X$ and the singular value decomposition of $X'X$ are the same. |
Hill Cipher and Exponential Cipher Questions | I wrote a small Python program trying all $16^x \pmod {29}$ and found that $x=3,10,17,24$ all give rise to the result $7$, which corresponds to 'g' (as 'a' = 1 etc.; we must work in the units modulo $29$, i.e. all nonzero elements).
The first one $x=3$ already gives $10^3 \equiv 14 \pmod{29}$, so 'n' which is wrong.
Trying the other options gave me $d=17$ as the decryption exponent, and we get 'goodguess' as the solution.
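The brute force mentioned above takes only a few lines; a sketch (the values $16$, $29$, $7$ and $10$ are the ones appearing in the computations above):

```python
p = 29
# All x with 16^x = 7 (mod 29); 7 encodes 'g' when 'a' = 1
candidates = [x for x in range(1, p) if pow(16, x, p) == 7]
print(candidates)  # [3, 10, 17, 24]

# Test each candidate as a decryption exponent on another ciphertext value
for x in candidates:
    print(x, pow(10, x, p))  # x = 3 gives 14, i.e. 'n', which is wrong
```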
is for Hill, I suppose? What is $p$ and the encoding? It's just linear algebra after that.
Maybe it is a digraphic system? So two ciphertext letters from A-E correspond to 1 plain letter? A square with A-E on the sides, or encoding letters (minus q maybe) as pairs $(0,0)$ to $(4,4)$ and applying a Hill cipher mod 5? Guess an encoding of letters and try it out!
Oh no a bad RSA again...(\begin{rant} why don't they teach proper padding in schools nowadays and why do they use weird encodings and non-random encryption and all sorts of bad habits \end{rant}).
HEL is encoded as 080512 which is $< n$ at least?
It seems so as then I do get 68660 as the ciphertext, as you do...
LOS becomes 121519 which does become 196630 in the ciphertext (which cannot be encoded as letters).
To find $d$ compute $\phi(n)=(p-1)(q-1)$ and find the inverse of $e$ modulo $\phi(n)$ using the extended Euclidean algorithm. |
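In code the last step is short; a sketch with placeholder values ($p$, $q$, $e$ here are hypothetical, standing in for the problem's actual data; `pow(e, -1, phi)` needs Python 3.8+):

```python
# Hypothetical primes and public exponent, standing in for the problem's data
p, q, e = 281, 293, 29
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # modular inverse via the extended Euclidean algorithm
assert (e * d) % phi == 1
print(d)
```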
Prove that $\varphi^{-1}$ is the inverse of $\varphi$ | I'll take it for granted that $$\phi\quad (x,y,z)\mapsto(u,v,w):={(x,y,z)\over\sqrt{x^2+y^2+z^2}}$$ maps $C$ bijectively onto $S^2$. Consider now the map
$$\psi:\quad (u,v,w)\mapsto {(u,v,w)\over\max\{|u|,|v|,|w|\}}\qquad\bigl((u,v,w)\in S^2\bigr)\ .$$
Then $\psi$ is continuous on $S^2$: From $u^2+v^2+w^2=1$ it follows that $\max\{u^2,v^2,w^2\}\geq{1\over3}$, whence $\max\{|u|,|v|,|w|\}\geq{1\over\sqrt{3}}$, and $\max$ is a continuous function of its arguments. Furthermore one easily checks that
$$\psi(\lambda u,\lambda v,\lambda w)=\psi(u,v,w)\qquad(\lambda>0)\ .$$
We therefore have
$$\psi\circ\phi(x,y,z)=\psi\left({(x,y,z)\over\sqrt{x^2+y^2+z^2}}\right)=\psi(x,y,z)={(x,y,z)\over\max\{|x|,|y|,|z|\}}=(x,y,z)$$
for all $(x,y,z)\in C$. |
Let $F$ be a field. Prove that for every integer $n \geq 2$, there exist $p,q \in F$ such that $x^2 + x + 1$ divides $x^n + px + q$. | You are asked to prove existence of $p,q$. So let's make this as simple as possible.
$$x^n +x+1 = p(x)[x^2+x+1] + ax+b$$ for some $a,b \in F$ and some $p(x) \in F[x]$. Do you see why this is? (Divide $x^n+x+1$ by $x^2+x+1$ with remainder.) You don't need to know precisely what $a$ and $b$ are, just that $x^n +x+1$ can be written in this form for some $a,b \in F$.
If $a$ and $b$ are both 0 then we are done.
Otherwise, note the following: $$x^n +x+1 = p(x)[x^2+x+1] +(ax+b).$$ So subtracting $ax+b$ from both sides gives $$x^n+x+1 -(ax+b) = p(x)[x^2+x+1].$$ Thus $x^2+x+1$ divides $x^n +x +1 -(ax+b)$. So take $p =1-a$ and $q=1-b$.
Asymptotic notation (big Theta) | Yes, you are right: intuitively it's clear that it does not change a thing, since when $n \to \infty$ effects like this are definitely too small to notice.
The whole concept behind asymptotics is that we ignore small terms; sometimes we can even ignore something "huge" like $2^n$, for example $\Theta(2^n + 3^n) = \Theta(3^n)$.
So we don't really care about effects like this; yes, if $n=0.6$ you would get $1$, but the error is at most $1/2$, which is not relevant. If you had $n = 10^9 + 0.6$, would you be worried about whether the result is rounded up or down?
If you want to be a little bit more formal, you can start by noting that $\lfloor \frac n2 \rfloor \le n$, and $\Theta(n) + \Theta(n) = \Theta(n)$.
Or you can also notice that the error from rounding is at most $1/2$ or $1$, depending on the rounding convention one uses, and find some bounds from there.
What is the name of $V_\alpha$? | The sets $V_\alpha$ are usually referred to either as levels of the cumulative hierarchy (as mentioned in the comments) or as rank initial segments of V. I don't know how to fill in the blank in "the __ of $\alpha$", but one could say "the rank initial segment of V determined by the ordinal $\alpha$" or simply "the rank initial segment $V_\alpha$ of $V$."
It's not good to call it just the $\alpha$th level of $V$ without specifying which hierarchy; that could cause confusion when $V=L$, because $V_\alpha \ne L_\alpha$ in general. |
Ordinary differential equation eigenvalues/eigenfuntions | Yes, your approach so far is correct. One should also check (preferably first, so you don't forget!) the $\lambda=0$ case: here, it is easy to check that the boundary conditions can't be satisfied.
One can actually extract the general solution from your analysis. In order for there to be a nonzero solution, the system of equations
$$ \begin{pmatrix} 1 & 1 \\ e^{ka} & e^{-ka} \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix} = 0 $$
must be degenerate (else the only solution is zero). Hence the determinant of the matrix must vanish; this is
$$ e^{-ka}-e^{ka}=0, $$
or $e^{2ka}=1$. Recalling Euler's formula, this occurs if and only if $2ka=2n\pi i $ for some integer $n$. Hence $k=\frac{n\pi i}{a}$. The condition at zero gives $y=2A\sinh{kx}$, and $\sinh{iz} = i\sin{z}$, so the eigenfunctions are proportional to $\sin{(n\pi x/a)}$, for $n \in \{ 1,2,3,\dotsc\}$. Certainly this looks plausible, since this satisfies the equation and boundary conditions. |
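A quick symbolic check of the conclusion (a sketch, assuming the equation $y''=\lambda y$ with $y(0)=y(a)=0$, as reconstructed from the analysis above):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = sp.symbols('n', integer=True, positive=True)
y = sp.sin(n*sp.pi*x/a)

# y'' = -(n*pi/a)^2 * y, i.e. the eigenvalue is lambda = k^2 = -(n*pi/a)^2
print(sp.simplify(sp.diff(y, x, 2) + (n*sp.pi/a)**2 * y))  # 0
print(y.subs(x, 0), y.subs(x, a))                          # 0 0 (boundary conditions)
```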
Why is $J_{n}(B):=\{j \in J: \mu(B \cap A_{j}) > \frac{1}{n} \}$ finite | Suppose that $J_{n}(B)$ is infinite.
Take the union over $J_{n}(B)$:
$\begin{equation}\mu(\bigcup\limits_{j \in J_{n}(B)} B \cap A_{j}) \geq \sum_{j \in J_{n}(B)} \frac{1}{n} = \infty \end{equation}$. This contradicts the fact that $\mu$ is $\sigma$-finite.
Presentation of discrete upper triangular group | There is a paper by D. Biss and S. Dasgupta (A Presentation for the Unipotent Group over Rings
with Identity, J. Algebra, vol. 237, 691-707 (2001)) which gives a presentation of the group of upper triangular matrices over $R$ with diagonal entries equal to 1, where $R$ is any ring with identity.
(Moreover, in some cases they show that their presentations are minimal both in the sense of having no redundant relations, and in the stronger sense of having the fewest possible number of generators and relations.)
In the case $R=\mathbb{Z}$, their generators are the same as the ones in the question above, and the relations are $$[a_{i,i+1},a_{j,j+1}] =1\ \ (i<j-1\leq n-2)$$
$$[a_{i,i+1},[a_{i,i+1},a_{i+1,i+2}]]=[a_{i+1,i+2},[a_{i,i+1},a_{i+1,i+2}]]=1$$ $$[[a_{i,i+1},a_{i+1,i+2}],[a_{i+1,i+2},a_{i+2,i+3}]]=1.$$ |
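For small $n$ these relations can be checked directly on matrices; a sketch for $n = 4$ over $\mathbb{Z}$, taking $a_{i,i+1} = I + E_{i,i+1}$ (the usual elementary unitriangular generators):

```python
import sympy as sp

def gen(i, n=4):
    # a_{i,i+1} = identity plus a single 1 just above the diagonal
    A = sp.eye(n)
    A[i - 1, i] = 1
    return A

def comm(A, B):
    # group commutator [A, B] = A B A^{-1} B^{-1}
    return A * B * A.inv() * B.inv()

a1, a2, a3 = gen(1), gen(2), gen(3)
I = sp.eye(4)
print(comm(a1, a3) == I)                      # [a_{12}, a_{34}] = 1
print(comm(a1, comm(a1, a2)) == I)            # [a_{12}, [a_{12}, a_{23}]] = 1
print(comm(a2, comm(a1, a2)) == I)            # [a_{23}, [a_{12}, a_{23}]] = 1
print(comm(comm(a1, a2), comm(a2, a3)) == I)  # [[a_{12},a_{23}], [a_{23},a_{34}]] = 1
```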
Conditional distribution involving both discrete and continuous random variables | This is a simple hierarchical model. To get the conditional distribution $Y \mid X = x$, we note $$f_{Y \mid X}(y) = \frac{\Pr[X = x \mid Y = y]f_Y(y)}{\Pr[X = x]}.$$ The denominator, the unconditional (marginal) probability mass function of $X$, is not a function of $y$, thus we do not need to compute it to recognize the form of the likelihood in the numerator. Indeed, we can remove all constants of proportionality with respect to $y$: we simply have $$\begin{align*} f_{Y \mid X}(y)
&\propto e^{-y} \frac{y^x}{x!} \cdot \frac{b^a y^{a-1} e^{-by}}{\Gamma(a)} \\
&\propto y^{a+x-1} e^{-(b+1)y}.
\end{align*}$$
The other factors, $x!$, $b^a$, and $\Gamma(a)$, are not functions of $y$. What remains is very clearly proportional to a gamma distribution with shape $a+x$ and rate $b+1$; i.e., the conditional density is $$f_{Y \mid X}(y) = \frac{(b+1)^{a+x} y^{a+x-1} e^{-(b+1)y}}{\Gamma(a+x)}.$$ |
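A numerical sanity check of this conjugacy (a sketch with arbitrary values for $a$, $b$ and the observation $x$):

```python
import numpy as np
from scipy.stats import gamma, poisson

a, b, x = 2.0, 3.0, 4  # arbitrary hyperparameters and observation

# Unnormalized posterior on a grid: Poisson(x | y) * Gamma(y | shape=a, rate=b)
y = np.linspace(1e-6, 30, 300001)
dy = y[1] - y[0]
post = poisson.pmf(x, y) * gamma.pdf(y, a, scale=1/b)
post /= post.sum() * dy  # normalize numerically

print((y * post).sum() * dy)             # numeric posterior mean, ~1.5
print(gamma.mean(a + x, scale=1/(b+1)))  # (a + x)/(b + 1) = 1.5
```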
What are the $\mathbb{R}$-vector subspaces invariant under this action? | There is an invariant subspace that you have not considered : the one consisting of vectors with all their coordinates equal, i.e. the one generated by $(1,1,1)$. Clearly all the vectors in this subspace are fixed by the action, so the subspace is invariant. Moreover, the action preserves the scalar product on $\Bbb R^3$; hence the orthogonal subspace is invariant as well. This orthogonal subspace consists of all vectors such that $x+y+z=0$, so you can see directly that it has to be invariant.
By the way your intuition is right : the action correspond indeed to rotations by an angle of $\frac{2\pi}{3}$, but the axis of the rotation is the line generated by $(1,1,1)$. Indeed, you can take a vector in the orthogonal subspace $(a,b,-a-b)$, and check that it makes an angle $\alpha=\frac{2\pi}{3}$ with its image under the action, $(-a-b,a,b)$:
\begin{align}\cos \alpha & = \frac{(a,b,-a-b)\cdot (-a-b,a,b)}{\|(a,b,-a-b)\|\cdot\|(-a-b,a,b)\|}\\ & =\frac{-a^2-ab+ba-ba-b^2}{a^2+b^2+(a+b)^2}\\ & =\frac{-a^2-ab-b^2}{2a^2+2ab+2b^2}=\frac{-1}{2}\\ &=\cos \left(\frac{2\pi}{3}\right) .\end{align}
This gives us two non-trivial invariant subspaces. We can show that there cannot be other invariant subspaces. Indeed, if $U$ is an invariant subspace, then :
Either $U$ has dimension $1$; then $U$ is generated by a vector $(a,b,c)\neq 0$. Then $(c,a,b)\in U$ by invariance, and thus there exists $\lambda \in \Bbb R$ such that $(c,a,b)=\lambda (a,b,c)$. Then we have $a=\lambda b=\lambda^2 c=\lambda^3 a$, and the same holds for $b$ and $c$. Hence $\lambda^3=1$, and thus $\lambda =1$ and $a=b=c$, so $U$ is the subspace we already knew.
Or $U$ has dimension $2$. Then its intersection with the plane $x+y+z=0$ is also invariant, and has dimension at least $1$. If the intersection had dimension $1$, the previous point shows that it would be generated by $(1,1,1)$, which is impossible as it must also lie in the plane $x+y+z=0$. Thus the intersection has dimension $2$, and thus $U$ is the plane of equation $x+y+z=0$.
Sum of three uniformly distributed random variables. | Writing $f$ for the (triangular) density of the sum of two of the uniform variables and $z$ for the third one, we get
\begin{equation}
f(b-z) = \begin{cases}
b-z & b-z\in (0,1)\\
2-(b-z) & b-z\in[1,2)\\
0 & \text{ otherwise}
\end{cases}.
\end{equation}
We have that
\begin{equation}
0<z<1
\end{equation}
and hence
\begin{equation}
b-1<b-z<b
\end{equation}
If $b < 0$, then $b - z < 0$, so $h(b) = 0$ according to the boundaries we have. On the other hand, if $b - 1 > 2$ (that is, $b > 3$), then $b - z > 2$ and we also get $h(b) = 0$. Other than that, we can distinguish three cases:
Case 1:
If $0<b<1$
\begin{equation}
h(b)
=
\int_{0}^{1}f(b-z)dz
=
\int_0^{b} b-z \ dz
=
\frac{b^2}{2}
\end{equation}
Case 2: If $1<b<2$
\begin{equation}
h(b)
=
\int_{0}^{1}f(b-z)dz
=
\int_{b-1}^{1} b-z \ dz
+
\int_{0}^{b-1} 2-(b-z) \ dz
= \frac{b(2-b)}{2}-\dfrac{b^2-4b+3}{2}
\end{equation}
Case 3: If $2<b<3$, we have
\begin{equation}
h(b)
=
\int_{0}^{1}f(b-z)dz
=
\int_{b-2}^{1} 2-(b-z) \ dz
=
\frac{b^2-6b+9}{2}
\end{equation}
Distribution of B
\begin{equation}
h(b) = \begin{cases}
\frac{b^2}{2} & b\in (0,1)\\
\frac{b(2-b)}{2}-\frac{b^2-4b+3}{2} & b\in(1,2)\\
\frac{b^2-6b+9}{2} & b\in(2,3)\\
0 & \text{ otherwise}
\end{cases}.
\end{equation} |
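A Monte Carlo sanity check of this density (a sketch comparing a windowed histogram of $B$ against the piecewise formula):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(3, 1_000_000)).sum(axis=0)

def h(b):
    # Piecewise density derived above
    if 0 < b < 1:
        return b**2 / 2
    if 1 <= b < 2:
        return b*(2 - b)/2 - (b**2 - 4*b + 3)/2
    if 2 <= b < 3:
        return (b**2 - 6*b + 9)/2
    return 0.0

for b in (0.5, 1.5, 2.5):
    # Fraction of samples in a window of width 0.01, divided by the width
    empirical = np.mean(np.abs(samples - b) < 0.005) / 0.01
    print(b, h(b), round(float(empirical), 3))
```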
Dynamic real-time system problem | We have
$$Y(s) = \left(\frac1{5+s}\right)\left(\frac{1+s}{s(2+s)^2}\right) = \frac{1+s}{s(2+s)^2(5+s)}. $$
The partial fraction decomposition is of the form
$$\frac{1+s}{s(2+s)^2(5+s)} = \frac As + \frac B{2+s} + \frac C{(2+s)^2} + \frac D{5+s}. $$
This implies
$$A(2+s)^2(5+s) + B(s(2+s)(5+s)) + C(s(5+s)) + D(s(2+s)^2) = 1+s $$
for all $s$. Taking $s=0$ yields
$$20A=1\implies A=\frac1{20}, $$
taking $s=-2$ yields
$$-6C = -1\implies C=\frac16, $$
taking $s=-5$ yields
$$-45D = -4 \implies D = \frac4{45}, $$
and finally, taking $s=-1$ yields
$$4A - 4B - 4C -D = 0\implies B = \frac1{4}\left(\frac4{20}-\frac46-\frac4{45}\right)=-\frac5{36}. $$
It follows that
$$Y(s) = \frac{1}{20 s}-\frac{5}{36 (s+2)}+\frac{1}{6 (s+2)^2}+\frac{4}{45 (s+5)} ,$$
and so
$$y(t) = \mathcal L^{-1}\{Y(s)\}=\frac1{20} -\frac5{36}e^{-2t} + \frac16 te^{-2t} +\frac4{45}e^{-5t} \qquad (t\ge 0). $$
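This can be cross-checked symbolically; a sketch with SymPy, whose `apart` reproduces the decomposition and whose `inverse_laplace_transform` gives the time-domain answer:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
Y = (1 + s) / (s * (2 + s)**2 * (5 + s))

# Partial fraction decomposition; should reproduce the four terms found above
print(sp.apart(Y, s))

# Time-domain answer
print(sp.simplify(sp.inverse_laplace_transform(Y, s, t)))
```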
Does this partial derivative exist at (0,0)? | Yes, the limit does not exist. To see this, approach $(0,0)$ along a line $y=mx$: putting $y=mx$ gives $f(x,mx)=\frac {2m}{1+m^2}$. The value the function tends to at $(0,0)$ thus varies with $m$, so the limit is non-existent at $(0,0)$.
If every path crosses a subset, then is the subset connected? | The answer is no. Let $X$ be the unit circle in $\Bbb R^2$, $a=(1,0)$, $b=(-1,0)$ and $S=\{(0,1),(0,-1)\}$.
Note also that $X\setminus S$ can be connected, for example for $X$ the Warsaw Circle and $S$ a point on the nice part of the arc. |
recovering the information of the vertices from a simplex | Trickier than I expected. Suppose that $\{v_0,\dots,v_k\}$ does not equal $\{w_0,\dots,w_k\}$. Then without loss of generality we can assume $v_0 = \sum_i r_i w_i$ where at least two of the $r_i$ are nonzero. Again, without loss of generality assume that these are $r_0,r_1$. Then there is a function $(-\epsilon, \epsilon) \rightarrow \sigma$ defined by $t \mapsto (r_0 +t)w_0 + (r_1-t)w_1 + r_2 w_2 +\dots + r_k w_k$. Such a function cannot exist, because at $t=0$ it passes through $v_0$, and no such line segment through $v_0$ is contained in $\sigma$: such a segment would necessarily contain points that, when written as a sum of the $v_i$, would have negative coefficients.
The distribution of the number of time an asymmetric Brownian Motion or Random Walk across zero | The fact that the random walk crosses zero finitely often, simply means that $\mathbb P[M=\infty]=0$. But it does not mean that the support of $M$ is finite. In fact, $M$ has a geometric distribution, as shown below.
Let $0=S_0,S_1,S_2,\ldots$ be an asymmetric random walk such that $\mathbb P[S_1=1]=p$ and $\mathbb P[S_1=-1]=1-p$. Assume $p>\frac 1 2$. Exercise 1.13 of Durrett's book shows that $\mathbb P[M=0] = p(2p-1)=:q$. Now, the fact that $M$ has a geometric distribution follows from noting that $M=k$ if and only if the random walk returns to zero $k$ times and does not return to zero afterwards, hence, $\mathbb P[M=k]=(1-q)^k q$. This is true for any transient Markov chain (see Theorem 3.1 of Durrett's book).
For the Brownian motion, the number of returns to zero is necessarily infinite. But you might be interested in the local time at zero. By the invariance principle, I guess that the local time has an exponential distribution. |
Entire function missing 2 values | Yes, for functions of finite order, we can prove it in an elementary way. If $f$ has finite order, so has $f - c$ for any $c \in \mathbb{C}$, so we may assume that $f$ misses $0$. Then, as a function of finite order without zeros,
$$f(z) = e^{P(z)}$$
for a polynomial $P$. If $f$ is not constant, $P$ is not constant, and as a polynomial, it follows that $P$ attains every complex value. But then $f$ attains every complex value except $0$, so misses only one value. |
Proving a result in Baire categories | Since $A$ is dense and $A \subset C_n$, it follows that $C_n$ is dense.
Now show that the complement of a dense open set is a (closed) nowhere dense set. |
Geometry pigeonhole principle problem. | Every $A_i$ contains ${6\choose 3}$ 3-subsets. Therefore, in total, the $A_i$'s combined contain $13{6\choose 3}=260$ 3-subsets.
We now count the number of possible different 3-subsets. In total, there are ${10\choose3}=120$ different possible 3-subsets. Thus, there are 260 3-subsets across the $A_i$'s, of which at most 120 are different. Since $260 > 2\cdot 120$, by the pigeonhole principle there must be a 3-subset that appears at least 3 times across the $A_i$'s.
How to prove that this function is convex? | Hint
I think that your life could be easier if you start by replacing $x$ by $y/a$. Then, and I shall let you establish the second derivative (do not forget to simplify as much as possible), the problem of concavity reduces to the analysis of the sign of $$y^2+y \sinh (y)-4 \cosh (y)+4$$ If this quantity is positive, then the function is concave; otherwise, it is convex.
Question about normal module. | $\varphi(x^2) = x\varphi(x)$. One possible point of confusion: why does $\varphi(x) = x\varphi(1)$ not hold? |
Friedman's approach of proving Cauchy-Schwarz inequality | I expect it's a typo, and $\ \beta\ $ should be $\ -\big\langle x_n, y\big\rangle\ $. You'll then get (with $\ \alpha=|y|^2\ $)
\begin{align}
\big|\alpha x_n+\beta y|^2&=|y|^4|x_n|^2-2|y|^2\big\langle x_n, y\big\rangle^2+ |y|^2\big\langle x_n, y\big\rangle^2\\
&=|y|^4|x_n|^2-|y|^2\big\langle x_n, y\big\rangle^2\ ,
\end{align}
or
\begin{align}
\big\langle x_n, y\big\rangle^2&= |y|^2|x_n|^2-\frac{\big|\alpha x_n+\beta y|^2}{|y|^2}\\
&\le |y|^2|x_n|^2\ ,
\end{align}
from which the Cauchy-Schwarz inequality follows immediately. |
Is this good enough of a proof? (I know almost nothing about proofs) | There are only two small errors that I can see.
You write $(2a)d + b(3c) / bd$ when you mean $((2a)d + b(3c)) / bd$.
You say that $a, b, c,$ and $d$ were declared as non-zero integers; in fact, only $b$ and $d$ are guaranteed to be non-zero here.
Other than that, your proof is fine. |
Rigorous proof that any linear map between vector spaces can be represented by a matrix | Start from the definition of a linear map $f$ between $E$ and $F$. A linear map has the following properties:
$f(\mathbf{x}+\mathbf{y}) = f(\mathbf{x})+f(\mathbf{y})$ (additivity)
$f(\alpha \mathbf{x}) = \alpha f(\mathbf{x})$ (homogeneity of degree $1$)
As a result, $f$ is determined by its values $f(e_i)$ on any basis $(e_i)$. Express each $f(e_i)$ in a basis of $F$ and you've got the matrix: its $i$-th column is the coordinate vector of $f(e_i)$.
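A concrete illustration (a sketch; the particular map $f:\mathbb{R}^3\to\mathbb{R}^2$ is made up for the example):

```python
import numpy as np

def f(v):
    # Some fixed linear map R^3 -> R^2, used only as an example
    x, y, z = v
    return np.array([2*x + y, y - 3*z])

# The matrix of f: its columns are the images of the standard basis vectors
basis = np.eye(3)
A = np.column_stack([f(e) for e in basis])

v = np.array([1.0, 2.0, 3.0])
print(A @ v, f(v))  # identical results: the matrix represents f
```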
How can I translate this affine hyperplane? | Let $w\in H$. Since $<\mu,w-v>=0$, there are $u_i\in\mathbb{Z}$ s.t. $w-v=\sum_{i=1}^{d-1}u_ia_i=A[u_1,\cdots,u_{d-1}]^T$, that is $w=\phi([u_1,\cdots,u_{d-1}]^T)$.
EDIT. I wrote too fast. OK, you know that $\ker(<\mu,.>)$ has a basis with $d-1$ elements because $\mathbb{Z}$ is a PID. |
If $A>0$ is symmetric and has exactly one positive eigenvalue, then $a_{ij} \ge \sqrt {a_{ii}a_{jj}} \ge \min\{a_{ii},a_{jj}\}$? | HINT:
$A$ has eigenvalues
$$\lambda_1 \le \lambda_2 \le \ldots \le \lambda_n$$
and the smallest $(n-1)$ of them are $\le 0$. Using the Cauchy interlacing theorem we have that the smallest eigenvalue of any principal $2\times 2$ matrix of $A$ is $\le \lambda_{n-1} \le 0$. That implies that the determinant of such a matrix is $\le 0$ ( since the diagonal elements are $>0$) |
$X-A\subset\overline{X-\operatorname{Int}(A)}$ is true or false? | If $M \subset N$ then $X\setminus N\subset X\setminus M$.
Pf: just element-chase. If $a\in X\setminus N$ then $a\in X$ but $a\not \in N$, so $a \not\in M$ (since $M \subset N$). So $a \in X\setminus M$.
Now $\operatorname{Int}(A)\subset A$, so
$X\setminus A \subset X\setminus Int(A)\subset \overline{X\setminus Int(A)}$.
The statement is true even if you don't actually know anything. |
Dimension of $L(V,W)$ when $V, W$ are infinite dimensional? | The answer I linked in my comment shows in particular that if we write $|X|$ for the cardinality of a set $X$, then for a vector space $V$ over $K$:
$$ |V| = \max(|K|,\dim(V)).$$
Now if $(e_i)_{i\in I}$ is a basis of $V$, then $L(V,W)$ is in bijection with the functions $I\to W$ (this is what it means to be a basis), so
$$ |L(V,W)| = |W|^{\dim(V)}.$$
Since that cardinal is clearly strictly bigger than $|K|$ when $\dim(V)$ is infinite, we have $|L(V,W)| = \dim(L(V,W))$ so
$$\dim(L(V,W)) = |W|^{\dim(V)}.$$
So now you have two cases:
If $\dim(W)\leqslant |K|$, then $\dim(L(V,W)) = |K|^{\dim(V)}$;
If $\dim(W)\geqslant|K|$, then $\dim(L(V,W)) = \dim(W)^{\dim(V)}$.
Note that the behaviour of cardinal exponentiation can be a little tricky: see the problem of the continuum hypothesis. If you are comfortable with assuming the Generalized Continuum Hypothesis (I personally am), then https://en.wikipedia.org/wiki/Continuum_hypothesis#The_generalized_continuum_hypothesis gives you a complete list of formulas to compute exponentiations. |
uniform convergence of $u_\varepsilon(x)=-\varepsilon\log\left(\frac{\cosh(\frac{x}{\varepsilon})}{\cosh(\frac{1}{\varepsilon})}\right)$ | Since $u_\varepsilon$ is an even function, we can, to simplify notation, assume that $0 \leqslant x \leqslant 1$. To identify the behaviour, we write (setting $n = 1/\varepsilon$)
\begin{align}
-\frac{1}{n}\log \frac{e^{nx}+e^{-nx}}{e^n+e^{-n}} - 1 + x
&= -\frac{1}{n}\log \frac{e^{-n(1-x)} + e^{-n(x+1)}}{1+e^{-2n}} - 1 + x\\
&= \frac{1}{n}\log (1+e^{-2n}) - \frac{1}{n}\log (e^{-n(1-x)}+e^{-n(1+x)}) - 1 + x\\
&= \frac{1}{n}\log (1+e^{-2n}) - \frac{1}{n}\log (1+e^{-2nx}) + \frac{n(1-x)}{n}-1+x\\
&= \frac{1}{n}\log (1+e^{-2n}) - \frac{1}{n}\log (1+e^{-2nx})
\end{align}
and hence obtain
$$\left\lvert -\frac{1}{n}\log \frac{e^{nx}+e^{-nx}}{e^n+e^{-n}} - 1 + x\right\rvert \leqslant \frac{\log 2}{n},$$
since $1 \leqslant 1+e^{-2n} \leqslant 2$ and $1 \leqslant 1+e^{-2nx} \leqslant 2$. |
Determine the smallest value of $k$ corresponding to the level of significance $α = 0.01$ | You are in the right direction. You need to solve the inequality $P(T\geq k|p=\frac{1}{2})\leq 0.01$.
(I hope this makes sense: this is just the definition of the level of the test. Some authors also call it the size, but there is a distinction between the two. To be more precise, if you were given the size of a test of a simple hypothesis, we would have used an equality sign above.)
$T$ being the total no. of throws required for obtaining the first head, its probability distribution can be computed as mentioned in the comment above. I will rather apply an intuitive method to do the same.
\begin{equation}
\begin{aligned}
P(T\geq k|p=\frac{1}{2})&=1-P(T<k)=1-\left[P(T=1)+P(T=2)+\dots+P(T=k-1)\right] \\
&=1-\left[\frac{1}{2}+\left(\frac{1}{2}\right)^2\dots+\left(\frac{1}{2}\right)^{k-1}\right]\\&\mbox{(When we say that $T=i, i>1$, we mean that the first $(i-1)$ results in tails. }\\
&\hspace{2mm} \mbox{Head is obtained only on the $i^{th}$ toss. Since the trials are independent, this }\\
&\hspace{2mm}\mbox{probability is just $\left(\frac{1}{2}\right)^{i-1}.\frac{1}{2}=\left(\frac{1}{2}\right)^{i}$})\\
&=\left(\frac{1}{2}\right)^{k-1}\leq 0.01
\end{aligned}
\end{equation}
The least value of $k$ for which this holds is $8$. |
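The last step in code (a one-loop sketch):

```python
k = 1
while (1/2)**(k - 1) > 0.01:
    k += 1
print(k)  # 8
```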
Question about Selberg's formula | For some $\eta\in(n,n+1)$,
$$
\begin{align}
&\sum_{n\le x}\pi(n)(\log(n+1)^2-\log(n)^2)\\
&=\sum_{n\le x}\pi(n)\frac{2\log(\eta)}{\eta}\\
&=\sum_{n\le x}\left(\frac{n}{\log(n)}+O\left(\frac{n}{\log(n)^2}\right)\right)\left(\frac{2\log(n)}{n}+O\left(\frac{\log(n)}{n^2}\right)\right)\\
&=2x+O\left(\frac{x}{\log(x)}\right)
\end{align}
$$
Therefore, summing by parts,
$$
\begin{align}
\sum_{p\le x}\log(p)^2
&=\sum_{n\le x}\log(n)^2(\pi(n)-\pi(n-1))\\[6pt]
&=\log(x)^2\pi(x)-\sum_{n\le x}\pi(n)(\log(n+1)^2-\log(n)^2)\\
&=\log(x)^2\left(\frac{x}{\log(x)}+\frac{x}{\log(x)^2}+O\left(\frac{x}{\log(x)^3}\right)\right)-2x+O\left(\frac{x}{\log(x)}\right)\\
&=x\log(x)-x+O\left(\frac{x}{\log(x)}\right)
\end{align}
$$
Experimental Results
$$
\begin{array}{r|c|c|c}
x&\sum_{p\le x}\log(p)^2&x\log(x)-x&\frac{x}{\log(x)}\\
\hline\\
100 & 309.0926254 & 360.5170186 & 21.71472410\\
1000 & 5686.965759 & 5907.755279 & 144.7648273\\
10000 & 81399.38488 & 82103.40372 & 1085.736205\\
100000 & 1048435.866 & 1051292.546 & 8685.889638
\end{array}
$$ |
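The table can be reproduced with a few lines (a sketch using SymPy's prime iterator):

```python
from math import log
from sympy import primerange

for x in (100, 1000, 10000, 100000):
    s = sum(log(p)**2 for p in primerange(2, x + 1))
    print(x, s, x*log(x) - x, x/log(x))
```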
can someone explain the difference in uniform convergence and pointwise convergence | In the definition of pointwise convergence, the $N$ is chosen after $x$ is chosen, so it may depend on $x$ (there is no guarantee that the same $N$ works for every $x$). The definition of uniform convergence is stronger, because it states that there is a single $N$ that works for all $x$.
Edit. Uniform convergence implies pointwise convergence because, of course, even knowing $x$ we can still choose the same $N$ that the definition of uniform convergence gave us. (We don't need to have it depend on $x$.) |
Models of real numbers combined with Peano axioms | It is not the case that the class of all real-closed fields works in this case. There is a classical theorem due to Shepherdson that in general, a model $M$ in the language of Peano arithmetic is the integer part of a real-closed field if and only if $M$ satisfies the induction scheme for quantifier-free formulas (also called "open induction" or "IOpen"), which is much weaker than the induction scheme of Peano arithmetic. The paper you are interested in is:
P. D'Aquino, J. F. Knight, and S. Starchenko, "Real closed fields and models of Peano arithmetic", Journal of Symbolic Logic, v. 75 n. 1, 2010, pp. 1-11.
Abstract.
Shepherdson [14] showed that for a discrete ordered ring I, I is a model of IOpen iff I is an integer part of a real closed ordered field. In this paper, we consider integer parts satisfying PA. We show that if a real closed ordered field R has an integer part I that is a nonstandard model of PA (or even IΣ₄), then R must be recursively saturated. In particular, the real closure of I, RC(I), is recursively saturated. We also show that if R is a countable recursively saturated real closed ordered field, then there is an integer part I such that R = RC(I) and I is a nonstandard model of PA.
Basically, to get the integer part of a real-closed field to be a model of PA, we need to assume a stronger property of the field, which is known in model theory as "recursive saturation". |
What is the fastest way to calculate $(x+1)^3-(x-1)^3$? | $$(x+1)^3-(x-1)^3=(1+x)^3+(1-x)^3$$ is an even polynomial of degree at most three. Hence the expansion has only even terms and must be of the form
$$ax^2+b.$$
Setting $x=0$, you find $b=2$, then with $x=1$, $a+b=8$ and you are done.
Alternatively, you may happen to know by heart the fourth row of Pascal's Triangle, $1\ 3\ 3\ 1$, and you get the coefficients $1+1,3-3,3+3,1-1$ (by increasing powers). |
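A one-line symbolic check (a sketch with SymPy):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.expand((x + 1)**3 - (x - 1)**3))  # 6*x**2 + 2
```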