title | upvoted_answer |
---|---|
Method to solve a system of PDEs in this form | From the first equation, by integration
$$K(r,\theta)=\int f(r,\theta)\,dr=F(r,\theta)+c(\theta)$$ and from the second
$$K(r,\theta)=\int h(r,\theta)\,d\theta=H(r,\theta)+d(r)$$ where $c,d$ are unknown functions.
By identification, $F,H,c,d$ must be such that
$$K(r,\theta)=F(r,\theta)+c(\theta)=H(r,\theta)+d(r).$$ |
Checking only for null measure that the Fourier transform characterises the measure | Measures here are not necessarily positive measures. $\mu -\nu$ is a measure whose characteristic function is $0$ and the argument shows that $\mu -\nu=0$ so $\mu =\nu$. |
How to show that an event is in the tail $\sigma$ field | For any $m < n$ $$\frac{S_n}{n} = \frac{S_m}{n} + \frac{S_n-S_m}{n}$$ which is the sum of one term that tends to zero as $n\to\infty$ and another in $\sigma(X_k : k>m)$ |
Definition of $\pi_0 p^{-1}(u)$ | Yes, $\pi_0(G)$ should be the class of objects modulo isomorphism.
Note this is also consistent with the interpretation of groupoids as models for homotopy 1-types, since $\pi_0(G) \cong \pi_0(|G|)$ for any groupoid, and $\pi_0(X) \cong \pi_0(\Pi_1 (X))$ for a topological space $X$. |
Factorization for $e^{\lambda x}$ | Suppose we could. Then $1=e^0=e^{\lambda 0} = f(\lambda) g(0)$, so $f(\lambda)=\frac{1}{g(0)}$ is constant, but similarly we get $g(x)$ must also be constant, so $e^{\lambda x}$ would have to be constant for all $\lambda$ and $x$. But that's clearly nonsense. |
$X$ be Banach space with a proper dense subspace $Y$. Can the identity operator on $Y$ be extended to a continuous function from $X$ into $Y$? | Suppose $F$ is such an extension. Then $F(x) = x$ for $x \in Y$. But since $Y$ is dense, that implies $F(x) = x$ for all $x \in X$. So we must have $Y = X$. |
Geometric intuition behind pullback? | Suppose you have a map $\alpha: A \rightarrow B$. There are certain associated items that "go forward" or "go backwards":
Points are sent forward. Given $p \in A$ we have $\alpha(p) \in B$.
Functions are sent back, i.e. pull back from $B$ to $A$. If we have a function $f: B \rightarrow \mathbb{R}$ then we get the composition $f \circ \alpha : A \rightarrow \mathbb{R}$.
Vectors are sent forward. Given $v \in TA_p$ we have the derivative map $d\alpha_p: TA_p \rightarrow TB_{\alpha(p)}$ (or maybe your book calls this $\alpha_*$; in coordinates it's just the derivative matrix). So we get $d\alpha_p(v) \in TB_{\alpha(p)}$.
One-forms give linear functionals on each tangent space; that is, they're functions which take vectors as input. As such, just like functions, they're sent back. If we have a one-form $\omega$ on $B$, then we get $\omega_q : TB_q \rightarrow \mathbb{R}$. So given $p \in A$ with $\alpha(p) = q$ we get $\omega_q \circ d\alpha_p : TA_p \rightarrow \mathbb{R}$ is a linear functional on $TA_p$. We get a one-form on $A$ this way.
Basically, geometric objects "go forward" and functions on them "go back." |
If in a group $G$, we have $a^5 = e$ and $aba^{-1} = b^2$ for some $a$, $b$ in $G$, then what is the order of $b$? | It is not hard to see that $a^nba^{-n}=b^{2^n}$. Thus, $b^{31}=1$. Note that $31$ is prime and so $o(b)=31$ or $b=1$. |
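A quick sanity check in Python, using as a hypothetical model the maps $x\mapsto 2x$ and $x\mapsto x+1$ on $\mathbb Z/31\mathbb Z$, which satisfy exactly $a^5=e$ and $aba^{-1}=b^2$ (a sketch, not the only such model):

```python
# Hypothetical concrete model of the relations a^5 = e, a b a^{-1} = b^2 on Z/31Z:
# a: x -> 2x (2^5 = 32 = 1 mod 31) and b: x -> x + 1.
MOD = 31

def a(x): return (2 * x) % MOD
def a_inv(x): return (16 * x) % MOD      # 16 is the inverse of 2 mod 31
def b(x): return (x + 1) % MOD

def power(f, n):
    def h(x):
        for _ in range(n):
            x = f(x)
        return x
    return h

assert all(power(a, 5)(x) == x for x in range(MOD))                              # a^5 = e
assert all(a(b(a_inv(x))) == b(b(x)) for x in range(MOD))                        # a b a^{-1} = b^2
assert all(power(b, 31)(x) == x for x in range(MOD))                             # b^31 = e
assert all(any(power(b, k)(x) != x for x in range(MOD)) for k in range(1, 31))   # o(b) = 31
print("model verifies o(b) = 31")
```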
Is there a mathematical way to measure complexity? | To the best of my knowledge, there isn't an overarching/unifying theory for this. There are some directions on this within Computational Complexity and Theoretical Computer Science that are worth considering.
As you said, Kolmogorov Complexity measures the length of the shortest program required to generate an object.
Descriptive complexity measures how succinctly we can express certain properties about a given object with respect to an underlying logic. For example, in first-order logic, we are interested in both the number of variables and the quantifier depth of a given expression. The descriptive complexity of an object is closely related to its Weisfeiler--Leman dimension (see for instance, the Cai-Furer-Immerman paper, https://people.cs.umass.edu/~immerman/pub/opt.pdf). Descriptive Complexity also seeks to characterize complexity classes in terms of logical definability. Immerman's text (https://www.springer.com/gp/book/9780387986005) on this is a good starting point. Grohe has a text on descriptive complexity for graphs (https://www.cambridge.org/core/books/descriptive-complexity-canonisation-and-definable-graph-structure-theory/BC758F6004BD96F6995D5F1EF1E29BAD).
One way to approach Automata Theory is to ask the following: given a class $\mathcal{C}$ of languages, what is the minimally powerful type of automaton needed to characterize $\mathcal{C}$? For example, regular languages are precisely captured by finite-state automata.
In circuit complexity, we study the size and depth of circuits, as well as the minimal complexity of circuits needed to compute a given function. One result on the latter is that $\textsf{Parity} \not \in \textsf{AC}^{0}$. This result provides a separation between $\textsf{AC}^{0}$ and $\textsf{ACC}^{0}$ (that is, $\textsf{AC}^{0} \subsetneq \textsf{ACC}^{0}$).
Edinah Gnang has a very recent paper on Partial Differential Encodings of Boolean Functions. This is an algebraic analogue of Kolmogorov Complexity (https://arxiv.org/pdf/2008.06801.pdf).
In Geometric Complexity Theory, one way to try and separate complexity classes $\mathcal{C}_{\text{Easy}}$ and $\mathcal{C}_{\text{Hard}}$ is by finding a module $T$ (think polynomials) that vanishes on every function in $\mathcal{C}_{\text{Easy}}$, but does not vanish on at least one function in $\mathcal{C}_{\text{Hard}}$. The Introduction and Preliminaries of Joshua A. Grochow's paper (https://link.springer.com/article/10.1007%2Fs00037-015-0103-x) serve as a nice introduction.
As was mentioned above, the dimension of a mathematical object is a measure of complexity.
Edit: Since Hausdorff Dimension was mentioned in one of the comments, it is worth pointing out that Hausdorff Dimension has close connections both to fractal geometry and Kolmogorov Complexity (e.g., https://arxiv.org/pdf/cs/0208044.pdf). Jack Lutz is an expert on measures of geometric dimension, including their connections to computational complexity and information theory (e.g., relationships to Kolmogorov Complexity). It would be worthwhile to check out some of his papers (http://web.cs.iastate.edu/~lutz/papers.html). |
How to solve this Riccati equation $4xy'=4x^2y^2-4y-1$? | The general solution is
$$
-\frac{\tanh \left(\frac{x}{2}-i c\right)}{2 x}
$$
To get it, change the variable $z(x)=x\,y(x)$; after that it is standard |
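A quick SymPy check, a sketch using a real integration constant $C$ (which plays the role of the answer's $-ic$), that this family satisfies $4xy'=4x^2y^2-4y-1$:

```python
import sympy as sp

x, C = sp.symbols('x C')
y = -sp.tanh(x/2 + C) / (2*x)            # same family as -tanh(x/2 - i c)/(2x), real constant
residual = 4*x*sp.diff(y, x) - (4*x**2*y**2 - 4*y - 1)
print(sp.simplify(residual))             # 0, so the ODE is satisfied
```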
Incorrect combinatorial argument- 5 card hand with at least 3 red cards | This counting argument is incorrect because it will count configurations with more than three red cards multiple times; e.g., $$\{10\color{red}\heartsuit, J\color{red}\heartsuit, Q\color{red}\heartsuit, K\color{red}\heartsuit, A\color{red}\heartsuit\} = \{\{10\color{red}\heartsuit, J\color{red}\heartsuit, Q\color{red}\heartsuit \}, \{K\color{red}\heartsuit, A\color{red}\heartsuit\}\} = \{\{10\color{red}\heartsuit, J\color{red}\heartsuit\}, \{ Q\color{red}\heartsuit, K\color{red}\heartsuit, A\color{red}\heartsuit\}\}$$ but under such an enumeration scheme, the middle and right hand expressions are considered distinct. |
If $(A \vee B) \wedge (\lnot B \vee C)$ is true, then $(A \vee C)$ must be true ... can I argue that? | $$(A\lor B) \land (\lnot B \lor C)$$
If the premise above is true, then by conjunction elimination,
$A\lor B$ is true $(1)$
and $\lnot B \lor C \equiv B\rightarrow C$ is true. $(2)$
$(1) \;A:\;$Suppose A is true. Then $A\lor C$ is true ($\lor$-Introduction).
$(1)\;B:\;$ Suppose B is true. Then by modus ponens with $(2): B\rightarrow C$, we have that $C$ is true. If C is true, then so is $A\lor C$, by $\lor$-Introduction.
In either case: since $A\lor B$ is true, $A$ is true or $B$ is true (or both), and in both cases we have shown, using $(2)$, that $A\lor C$ must be true. |
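A brute-force truth-table check of the entailment, as a small Python sketch:

```python
from itertools import product

# every valuation satisfying (A or B) and (not B or C) also satisfies (A or C)
assert all((A or C)
           for A, B, C in product((False, True), repeat=3)
           if (A or B) and ((not B) or C))
print("entailment holds on all 8 valuations")
```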
Evaluate $ \lim_{n\to+\infty}\frac{1}{n}\sum_{k=1}^n\Big[\frac{2n}{k}\Big]-2\Big[\frac{n}{k}\Big]$. | Hint: For fixed (large-ish) $n$, for which (specifically, how many) $k$ is $\lfloor\frac nk\rfloor=m$? |
Does the Chinese Remainder Theorem hold for "incongruence" equations? | Well, it's not clear what you mean by the "incongruence" version of the Chinese remainder theorem.
But one thing we can definitely say is the following. Say we know $x \not\equiv 0 \pmod{p_1}$, $x \not\equiv 0 \pmod{p_2}$, and so on. Then there are $p_1 -1$ options for what $x$ could be modulo $p_1$; $p_2 - 1$ options modulo $p_2$, and so on.
For every one of the $(p_1 - 1)(p_2-1)\cdots$ combinations $(b_1, b_2, \dots)$ of these options, you could write down the equations $x \equiv b_1 \pmod{p_1}$, $x \equiv b_2 \pmod{p_2}$, and so on. Here, the ordinary Chinese remainder theorem applies, telling us that there is a unique answer modulo $p_1p_2\cdots$.
We need a slightly different (but very similar) argument when higher powers of a prime divide $n$, so watch out for that. |
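A small numeric illustration with hypothetical moduli $p_1=5$, $p_2=7$: there are exactly $(p_1-1)(p_2-1)$ residues modulo $p_1p_2$ that are nonzero modulo both primes, matching the count above.

```python
p1, p2 = 5, 7   # hypothetical example moduli
solutions = [x for x in range(p1 * p2) if x % p1 != 0 and x % p2 != 0]
print(len(solutions), (p1 - 1) * (p2 - 1))   # 24 24
```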
Maximum value of $\sin(A/2)+\sin(B/2)+\sin(C/2)$? | Since $\sin x$ is concave for $x\in(0,\pi/2)$, by Jensen's inequality the maximum is attained at $A/2=B/2=C/2=\pi/6$, where the sum equals $3\sin(\pi/6)=3/2$.
Edit: since the OP mentioned in a comment on @B.Goddard's answer that they know differentiation, here's another proof the equilateral case achieves a maximum:
Keep using $\frac{C}{2}=\frac{\pi}{2}-\frac{A+B}{2}$. To extremize $\sin\frac{A}{2}+\sin\frac{B}{2}+\cos\frac{A+B}{2}$ simultaneously solve $$\tfrac12\cos\tfrac{A}{2}-\tfrac12\cos\tfrac{C}{2}=0,\,\tfrac12\cos\tfrac{B}{2}-\tfrac12\cos\tfrac{C}{2}=0$$ viz. $A=B=C$. I'll leave the reader to check it's a maximum by considering second derivatives. |
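A quick numeric spot-check (a sketch) that random triangles never exceed the equilateral value $3/2$:

```python
import math, random

best = 0.0
for _ in range(200_000):
    A = random.uniform(0, math.pi)
    B = random.uniform(0, math.pi - A)
    C = math.pi - A - B
    best = max(best, math.sin(A/2) + math.sin(B/2) + math.sin(C/2))
equilateral = 3 * math.sin(math.pi / 6)
print(best, "<=", equilateral)   # random triangles stay at or below 3/2
```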
Using CP prove the truth of a tautology | For A) :
$[P \to (Q \to R)] \supset [Q \to (P \to R)]$
1) $[P \to (Q \to R)]$ --- assumed for Conditional Proof
2) $Q$ --- assumed [a] for CP
3) $P$ --- assumed [b] for CP
4) $(Q \to R)$ --- from 1) and 3) by Modus Ponens or Conditional Elimination
5) $R$ --- from 4) and 2) by MP
6) $(P \to R)$ --- from 3) and 5) by CP, discharging assumption [b]
7) $[Q \to (P \to R)]$ --- from 2) and 6) by CP, discharging assumption [a]
8) $[P \to (Q \to R)] \supset [Q \to (P \to R)]$ --- from 1) and 7) by CP
The same for B) :
$[Q \to (P \to R)] \supset [P \to (Q \to R)]$
From A) and B) we conclude with:
$[P \to (Q \to R)] \equiv [Q \to (P \to R)]$
by Biconditional introduction. |
Magic dice question. | Think of the sample space as $(M,n,m)$ where $M=0$ if the dice behave normally and $M=1$ otherwise. $n,m \in \{1,...,6\}$. $M, n,m $ are independent and uniformly distributed as appropriate. Hence $P[(M,n,m) ] = {1 \over 2 \cdot 6^2}$.
Define $f(M,n,m) = \begin{cases} nm,& M=0 \\
n^2, & M=1\end{cases}$.
Then $E[f] = {1 \over 2 \cdot 6^2} (\sum_n \sum_m nm + 6 \sum_n n^2) = {1 \over 2 \cdot 6^2} ((\sum_n n)^2 + 6 \sum_n n^2) = {329 \over 24} \approx 13.708$. |
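A direct enumeration of the finite sample space (sketch) reproduces $329/24$:

```python
from fractions import Fraction

expectation = Fraction(0)
for M in (0, 1):                      # M = 1: the magic case where the second die copies the first
    for n in range(1, 7):
        for m in range(1, 7):
            value = n * m if M == 0 else n * n
            expectation += Fraction(value, 2 * 36)
print(expectation, float(expectation))   # 329/24 ≈ 13.708
```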
Odds of getting the largest possible hand in blackjack | There are $\binom{52}{11}$ ways that you can choose $11$ cards out of $52$. Now consider which hands we are interested in: the hand should contain $4$ aces, $4$ twos and $3$ threes. There are $\binom{4}{3}$ ways that you can choose the threes. So, in the end, we are interested in $\binom{4}{3}$ hands out of the total $\binom{52}{11}$. The probability will be $$\binom{4}{3}/\binom{52}{11} = 4\frac{11!\,41!}{52!}$$ |
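As a numeric cross-check (sketch):

```python
from math import comb, factorial

p = comb(4, 3) / comb(52, 11)
p_formula = 4 * factorial(11) * factorial(41) / factorial(52)
print(p, p_formula)   # both ≈ 6.6e-11
```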
Find minimum of $\frac{a+3c}{a+2b+c}+\frac{7a+6b+3c}{a+b+2c}+\frac{c-a}{2a+b+c}$ for non-negative reals | (This initial part is not a solution. It is too long to be a comment. This demonstrates that the suggested approach of substituting the denominator doesn't work out because equality is not achieved as the condition $ a \geq 0$ is not met.)
We use the substitution $x = a + 2b + c, y = a+b+2c, z = 2a + b + c$.
This system gives us $ a = (-x-y+3z)/4, b = (3x-y-z)/4, c = (-x+3y-z)/4$.
The expression becomes:
$$\frac{ -x + 2y } { x} + \frac{ 2 x - y + 3z } { y} + \frac{ y - z } { z} = \frac{ 2y}{x} + \frac{ 2x}{y} + \frac{3z}{y} + \frac{ y}{z} -3. $$
Since $ \frac{ 2y}{x} + \frac{ 2x}{y} \geq 2\sqrt{4} = 4$ and $ \frac{ 3z}{y} + \frac{y}{z} \geq 2 \sqrt{3} $, hence a lower bound is $ 2 \sqrt{3} +1 \approx 4.46 $.
Equality is achieved when $ y = x, y = \sqrt{3} z$, or equivalently when
$ a : b : c = 3 - 2 \sqrt{3} : 2 \sqrt{3} -1 : 2 \sqrt{3} - 1 $, which we have to verify have the same sign.
However, they do not have the same sign, so equality CANNOT be achieved. The minimum, or infimum, is higher.
We need the extra condition $3z \geq x + y$ (and its symmetric counterparts) in order to get $a \geq 0$ (and similarly $b, c \geq 0$).
This condition was violated when we tried to find the equality case.
Since the expression is homogeneous of degree $0$, WLOG we may assume that $ y = 1$. If so, we want to minimize
$$ \frac{2}{x} + 2x + 3z + \frac{1}{z} - 3 $$
subject to $3z \geq x + 1, 3 \geq x + z, 3x \geq z + 1$.
Even doing Lagrangian, this is horrendous and has very ugly solutions. We can show that the minimum happens on the boundary $3z=x+1$, but the actual solution ($\approx 4.493$) is pretty ugly.
I'm doubtful there's a contest-math solution.
My guess is that the problem setters made a mistake by not checking $ a : b : c$.
I do wish there was a nice way to complete this problem though, since it would illustrate the importance of checking conditions instead of just assuming as most of us did. |
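A rough numerical sketch supporting the $\approx 4.493$ value; since the expression is invariant under scaling $(a,b,c)$, I fix $b=1$ for the grid search (an assumption of this sketch, not part of the answer):

```python
def expr(a, b, c):
    return (a + 3*c)/(a + 2*b + c) + (7*a + 6*b + 3*c)/(a + b + 2*c) + (c - a)/(2*a + b + c)

grid = [i / 100 for i in range(301)]                  # a, c in [0, 3] with step 0.01
best = min((expr(a, 1.0, c), a, c) for a in grid for c in grid)
print(best)   # ≈ (4.4936, 0.0, 1.16): minimum near a = 0, consistent with the ≈ 4.493 above
```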
Simple line integral | It is correct if the orientation of the contour is left-to-right (starting at $(0,0)$ and ending at $(0,1)$).
If the orientation is reversed, the value will be $-1/2$.
For line integrals, you should imagine a point in motion along the contour, so there is a "time orientation" of the path. It's not enough to specify the path as a point set, a direction must also be specified (unless the value is zero). |
Chain rule special application | What you did is right, but it requires you to obtain $\partial\xi/\partial t$ and $\partial\xi/\partial x$ given $x(\xi,\theta)$ – it would probably be preferable to use the chain rule the other way around to express $u_t$ and $u_x$ in terms of $u_\xi$ and $u_\theta$; that would involve $x_\xi$ and $x_\theta$ instead of $\xi_t$ and $\xi_x$; and then you can solve for $u_t$ and $u_x$. |
Ratio between trigonometric sums: $\sum_{n=1}^{44} \cos n^\circ/\sum_{n=1}^{44} \sin n^\circ$ | The last line in the argument you give could say
$$
\sum_{n=1}^{44} \cos\left(\frac{\pi}{180}n\right)\,\Delta n \approx \int_1^{44} \cos n^\circ\, dn.
$$
Thus the Riemann sum approximates the integral. The value of $\Delta n$ in this case is $1$, and if it were anything but $1$, it would still cancel from the numerator and the denominator.
Maybe what you didn't follow is that $n^\circ = n\cdot\dfrac{\pi}{180}\text{ radians}$?
The identity is ultimately reducible to the known tangent half-angle formula
$$
\frac{\sin\alpha+\sin\beta}{\cos\alpha+\cos\beta}=\tan\frac{\alpha+\beta}{2}
$$
and the rule of algebra that says that if
$$
\frac a b=\frac c d,
$$
then this common value is equal to
$$
\frac{a+c}{b+d}.
$$
Just iterate that a bunch of times, until you're done.
Thus
$$
\frac{\sin1^\circ+\sin44^\circ}{\cos1^\circ+\cos44^\circ} = \tan 22.5^\circ
$$
and
$$
\frac{\sin2^\circ+\sin43^\circ}{\cos2^\circ+\cos43^\circ} = \tan 22.5^\circ
$$
so
$$
\frac{\sin1^\circ+\sin2^\circ+\sin43^\circ+\sin44^\circ}{\cos1^\circ+\cos2^\circ+\cos43^\circ+\cos44^\circ} = \tan 22.5^\circ
$$
and so on.
Now let's look at $\tan 22.5^\circ$. If $\alpha=0$ then the tangent half-angle formula given above becomes
$$
\frac{\sin\beta}{1+\cos\beta}=\tan\frac\beta2.
$$
So
$$
\tan\frac{45^\circ}{2} = \frac{\sin45^\circ}{1+\cos45^\circ} = \frac{\sqrt{2}/2}{1+(\sqrt{2}/2)} = \frac{\sqrt{2}}{2+\sqrt{2}} = \frac{1}{\sqrt{2}+1}.
$$
In the last step we divided the top and bottom by $\sqrt{2}$.
What you have is the reciprocal of this.
Postscript four years later: In my answer I explained why the answer that was "given" was right, but I forgot to mention that $$ \frac{\displaystyle\sum_{n=1}^{44} \cos n^\circ}{\displaystyle \sum_{n=1}^{44} \sin n^\circ} = \sqrt{2}+1 \text{ exactly, not just approximately.} $$ The reason why the equality is exact is in my answer, but the explicit statement that it is exact is not. |
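A numeric confirmation of the exact value (sketch):

```python
import math

num = sum(math.cos(math.radians(n)) for n in range(1, 45))
den = sum(math.sin(math.radians(n)) for n in range(1, 45))
print(num / den, math.sqrt(2) + 1)   # both 2.414213562373095...
```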
For small x, Taylor Series to determine constants n and C in the approximation | Hint
Just as you did, start with the basic series $$\cos(x)=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}x^{2n}$$ and $$e^{-y}=\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}y^n$$ In the second one, replace $y$ by $\frac{x^2}{2}$ and now combine them for your expression. |
Proof that $ \lim_{r \to 0^+} \frac{m(f(I[a,r]))}{r^n} = |\, \det f'(a)\, | $ | Hint:
$$\frac{m(f(I[a,r]))}{r^n} - |\, \det f'(a)\, |= \frac{1}{r^n}\int_{I[a,r]} (|\, \det f'(x)\, | - |\, \det f'(a)\, |)\, dm(x).$$
Use the continuity of $\det f'(x)$ at $a.$ |
Suppose that $a$ and $b$ are positive integers, then $\phi(a^b) = a^{b-1}\phi(a)$ | This follows directly from
Show that $\phi(mn) = \phi(m)\phi(n)\frac{d}{\phi(d)}$
For $m=n=a$ this gives $$\phi(a^2)=\phi(a)^2\cdot \frac{a}{\phi(a)}=\phi(a)a$$
Now use induction. |
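A quick SymPy check of $\phi(a^b)=a^{b-1}\phi(a)$ on small cases (sketch):

```python
from sympy import totient

assert all(totient(a**b) == a**(b - 1) * totient(a)
           for a in range(1, 40) for b in range(1, 5))
print("phi(a^b) = a^(b-1) * phi(a) verified for a < 40, b < 5")
```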
Cauchy criterion problem | Assume that $\sum_{n=1}^\infty \frac{x_n}{S_n}$ is convergent. Let's write out the Cauchy criterion: Let $\varepsilon>0$ be any number less than $1$ (if you want you can fix e.g. $\varepsilon=\frac{1}{2}$). Then there is some natural number $N$ such that for all $m\geq N,k\geq 1$ we have $$\sum_{j=m+1}^{k+m}\frac{x_j}{S_j}<\varepsilon$$
By shifting indices we get: $$1-\frac{S_m}{S_{m+k}}\leq\sum_{j=1}^{k}\frac{x_{j+m}}{S_{j+m}}<\varepsilon$$
Now what happens if we let $k\to\infty$? |
How does teacher get first step? | I'm posting this answer in response to the comment thread under picakhu's answer; writing comments was getting a bit tedious. The answer has the same content as Isaac's, but is explained a little differently, which might be useful for the OP.
The general problem, of which this is a special case, is:
Given an expression of the form $a \sin x + b \cos x$, for some numbers $a$ and $b$, to rewrite it in the form $c \sin (x + \theta)$, for some appropriate choice
of $c$ and $\theta$ (which will be related to $a$ and $b$ in some way, of course --- and we have to find out what that way is!).
Suppose first that $a^2 + b^2 = 1$. Then trigonometry (especially the point of view of the unit circle) tells us that there is an angle $\theta$ such
that $a = \cos \theta$ and $b = \sin \theta$.
Thus we can write $a \sin x + b \cos x = \cos \theta \sin x + \sin \theta \cos x,$ and via the addition theorem for $\sin$, we recognize this as being $\sin(x + \theta).$
Now in most examples, it may not be that $a^2 + b^2 = 1$. But it equals something (!), so let's call that something $c^2$. Then we see
that $(a/c)^2 + (b/c)^2 = 1$, so we can choose $\theta$ so that
$a/c = \cos \theta$ and $b/c = \sin \theta$. Then as above
we find that $(a/c)\sin x + (b/c) \cos x = \sin (x + \theta),$
and so (finishing finally) we have
$$a \sin x + b \cos x = c \sin (x + \theta).$$
In the OP's example, $a = \sqrt{3}$ and $b = 1$, so $c = 2$,
and we are led to the given answer.
Note that in practice this whole process will be easiest when $a/c$ and $b/c$ are easily recognized trig function special values, like $1/2$ or $\sqrt{2}/2$ or $\sqrt{3}/2$. If they are just somewhat random numbers, then you won't be able to figure out the correct choice of $\theta$ without using a calculator to
compute $\theta$. |
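For the OP's example, where the recipe gives $\theta=\pi/6$, a quick numeric check (sketch):

```python
import math

for x in (-2.0, -0.5, 0.0, 0.7, 1.3, 3.0):
    lhs = math.sqrt(3) * math.sin(x) + math.cos(x)
    rhs = 2 * math.sin(x + math.pi / 6)
    assert abs(lhs - rhs) < 1e-12
print("sqrt(3)*sin(x) + cos(x) = 2*sin(x + pi/6) at sample points")
```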
How to prove that the gradient of nonconvex smooth function is Lipschitz continuous? | Every concave, smooth function $f$ satisfies your first inequality for $L = 0$. Hence, your claim is not true. |
Different proofs of uniqueness of the Laplace transform | I don't think this question has a well defined answer. As far as I know, all proofs of the uniqueness of the Laplace transform are essentially corollaries of this one statement:
$\displaystyle \int_{0}^{\infty} f(t) \text{e}^{-st} dt = 0 \text{ for all } s \implies f(t) = 0$ (almost everywhere)
The number of statements equivalent to this is infinite (e.g. you could add $1$ to both sides of the equation, to get a stupid yet true statement). Perhaps you mean to restrict the proofs to those which are reasonably simple and useful, but then we start to get into highly subjective territory. |
Are there infinitely many prime quadruples of the form $10n + 1$, $10n + 3$, $10n + 7$, $10n + 9$? | I suppose you will not find a proof of either the positive or the negative result here.
The positive result would obviously imply the (unproven and presumably difficult) twin prime conjecture.
The negative result would disprove the first Hardy-Littlewood conjecture about the density of prime sets with a given pattern, which (among other things) conjectures a (positive) density for prime quadruples. |
The cyclic vector-space | Since they correspond to distinct eigenvalues, the set of vectors $\left\{u_1,u_2,...,u_n \right\}$ is linearly independent, and thus we can use it as a basis for $V$. With respect to this basis, $y=[1,1,\ldots,1]^t$, $T(y)=[\lambda_1,\lambda_2,\ldots,\lambda_n]^t$, $T^2(y)=[\lambda_1^2,\lambda_2^2,\ldots,\lambda_n^2]^t$, $\ldots$, $T^{n-1}(y)=[\lambda_1^{n-1},\lambda_2^{n-1},\ldots,\lambda_n^{n-1}]^t$. And now Vandermonde's determinant proves that they are linearly independent. |
Do you need commutative algebra for Milne's Algebraic Number Theory course? | You can read the first few pages of the course notes to find out.
Prerequisites
The algebra usually covered in a first-year graduate course, for example, Galois theory, group
theory, and multilinear algebra. An undergraduate number theory course will also be helpful.
The name of the first chapter is
Preliminaries from Commutative Algebra
That should be enough to begin with. The first footnote to the first sentence in the first chapter says
See also the notes A Primer of Commutative Algebra available on my website.
Those notes are found here: A Primer of Commutative Algebra. |
Calculating expectation w.r.t. the empirical dist. fcn | There seems to be an error either in that lemma or in your rendition of it. The function
$$
F(t) = \sum_{i=1}^{N}a_i \ 1\{t\le t_i\}
$$
with $a_i\gt0$ is a decreasing step function (the value at $-\infty$ is $\sum_ia_i$ and the value at $+\infty$ is $0$). A correct formulation of the lemma would be for an increasing step function $F$ of the form
$$
F(t) = \sum_{i=1}^{N}a_i \ 1\{t\ge t_i\}\;.
$$
Then you don't need to turn the inequality around, and the result comes out right.
Your manipulation involving $t$ and $x$ isn't valid because you applied the lemma for $t$ even though the measure is $\mathrm dF_n(x)$, not $\mathrm dF_n(t)$. A valid way to turn around the inequality (if there'd been a need for turning it around) would have been to use $1\{t\ge t_i\}=1-1\{t\lt t_i\}$ and thus $\mathrm d1\{t\ge t_i\}=-\mathrm d1\{t\lt t_i\}$. |
Questions concerning the differential equation $\frac{dy}{dx}=ky$ | The function $y$ is assumed differentiable everywhere so it must be continuous everywhere as well.
If $y$ is only zero part of the time then on its nonzero parts it has to take the exponential form that we have found, possibly with different constants $A$ on different disconnected components.
On the other hand, on the parts where $y$ vanishes so does its derivative by virtue of the original equation.
This gives a contradiction at boundary points of the zero set of $y.$ |
Independent events from any other | For any events $A$ and $E$ we can write
$$ \mathbb{P}(E)=\mathbb{P}(A\cap E)+\mathbb{P}(A^c\cap E) $$
If $\mathbb{P}(A)=1$ then $\mathbb{P}(A^c)=1-1=0$ and
$$ 0\leq \mathbb{P}(A^c\cap E)\leq \mathbb{P}(A^c)=0 $$
so $\mathbb{P}(E)=\mathbb{P}(A\cap E)$.
Similarly, if $\mathbb{P}(A)=0$ then $0\leq \mathbb{P}(A\cap E)\leq \mathbb{P}(A)=0$, hence $\mathbb{P}(A\cap E)=0$ as well. |
Existence of solution of first order ordinary differential equation | Picard's Existence and Uniqueness Theorem states that:
Given IVP: $y'=f(x,y), \enspace y(x_0)=y_0$ with $f(x,y)$ and $\frac{\partial f}{\partial y}(x,y)$ continuous functions in some open rectangle $R = (a,b)\times(c,d)$ containing $(x_0,y_0)$, then the IVP has a unique solution in the closed interval $I = [x_0-h,x_0+h]$ for some $h>0$. (It also gives a sequence of functions which converge uniformly on I on the solution $y$ of the IVP).
Now, if you meant $y'(t)=2 \sqrt{g(t)}$, for some continuous function $g:J \subset \mathbb{R} \rightarrow \mathbb{R}_0^+,0 \in J$, then the function $f(t,y)$ is continuous on $\mathbb{R}^2$, and since it does not depend on $y$, $\frac{\partial f}{\partial y}(t,y)$ exists and $\frac{\partial f}{\partial y}(t,y)=0$ i.e. continuous on all $\mathbb{R}^2$. Thus the IVP has a unique solution (local) on some closed interval $I$ containing $0$.
If you meant $y'=2 \sqrt{y}$, where $y:J \subset \mathbb{R} \rightarrow \mathbb{R}_0^+$ continuous function, $0 \in J$, $J$ open interval, then, if $a=0$, Picard's theorem does not guarantee you a unique solution (nor does Picard-Lindelof theorem for that matter), because $\frac{\partial f}{\partial y}(t,y)$ does not exist at $(0,0)$. However if $a>0$ then by Picard's theorem with $R=J\times(0,\infty)$ we have a unique solution.
(So, 1. Yes 2. on some open interval containing the point of the intial condition) |
Graph of $(e^x-1)/x$ as x->0 | If you take your function on Wolfram Alpha, try replacing $-14$ by $-15$. You will see that Wolfram Alpha evaluates at a few discrete points, then connects the dots in some way. Hence, your viewing window is right up against the limits of computation, and rounding error (as @hardmath describes in more detail in the comments) becomes amplified. |
How to show that $y^T x - \frac{1}{2}x^T Q x$ is bounded above? | We can complete the square. We want to write $F(x) = \frac12 x^T Q x - y^T x$ in the form
\begin{align}
\frac12 (x - x_0)^T Q (x - x_0) + c &=
\frac12 x^T Q x - x^T Q x_0 + \frac12 x_0^T Q x_0 + c.
\end{align}
To make things match up, we should pick $x_0$ such that $Q x_0 = y
\iff x_0 = Q^{-1} y$, and we should pick
\begin{align}
c &= - \frac12 x_0^T Q x_0 \\
&= -\frac12 y^T Q^{-1} y.
\end{align}
We have discovered that
\begin{align}
\frac12 x^T Q x - y^T x &=
\underbrace{\tfrac12(x - Q^{-1} y)^T Q (x - Q^{-1}y)}_{\text{nonnegative}} -\frac12 y^T Q^{-1} y.
\end{align}
This shows that $F$ is bounded below, and that it attains a minimum at
$x = Q^{-1}y$.
(Note that minimizing $F$ is equivalent to solving $Qx = y$. That's a very useful fact.) |
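A small numeric sketch, using a hypothetical random positive-definite $Q$, illustrating that $F(x)=\frac12x^TQx-y^Tx$ attains its minimum $-\frac12y^TQ^{-1}y$ at $x=Q^{-1}y$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = A @ A.T + 4 * np.eye(4)                    # hypothetical positive-definite Q
y = rng.standard_normal(4)

F = lambda x: 0.5 * x @ Q @ x - y @ x
x_star = np.linalg.solve(Q, y)

assert all(F(x_star + rng.standard_normal(4)) >= F(x_star) - 1e-9 for _ in range(1000))
print(F(x_star), -0.5 * y @ x_star)            # both equal -0.5 * y^T Q^{-1} y
```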
what is the slope of a line that 1) can rotate around a point, and 2) its reflection from a circle has a specific direction | Let's call $c$ the center of the circle. Let's also assume that $i$ and $s$ (my name for $r_{need}$) are both unit vectors.
Then you need to find $p$ with two properties:
$(p-c) \cdot i = (p-c) \cdot r$
$(p-c) \cdot (p-c) = R^2$
where $R$ is the circle's radius. Rewrite to get
$(p-c) \cdot (i-r) = 0$
and suddenly it's easy: Let $h = (i-r)$ and let $k = rot(h)$, where "rot" means 'rotate 90 degrees'. Then $p = c \pm \frac{R}{\|k\|} k$.
This gives two answers; the other is the diametrically opposite point on the circle in your second drawing. You'll probably want to compute both and decide which one corresponds to an external rather than an internal reflection. |
A series related to prime numbers | First of all $f(t) = \sum_p t^p$ means $f'(t)/f(t) = \sum_n b_n t^n$ where $b_n=0$ for $n < -1$ and $\sum_p b_{n-p} = \begin{cases}n+1 & \text{if } n+1 \text{ is prime}\\ 0 & \text{otherwise}\end{cases}$.
$f(e^{-x})$ is the inverse Mellin transform of $\Gamma(s)\sum_p p^{-s}$ while things like $1/f(t), \log f(t), f'(t)/f(t)$ are quite inaccessible, in the same way that $\zeta(s)$ is accessible from only the integers while $1/\zeta(s),\log \zeta(s),\zeta'(s)/\zeta(s)$ need the primes.
The number of ordered ways to write $n$ as a sum of $m$ primes are the coefficients of $f(t)^m$ and the number of ordered ways to write $n$ as a sum of primes are the coefficients of $\frac{1}{1-f(t)}-1 = \sum_{m=1}^\infty f(t)^m$.
$\frac{1}{f(t)}=\frac{1}{t^2(1-(1-\frac{f(t)}{t^2}))}= t^{-2}\sum_{m=0}^\infty (-1)^m(\frac{f(t)}{t^2}-1)^m$ where the coefficients of $(\frac{f(t)}{t^2}-1)^m$ are the number of ordered ways to write $n+2m$ as a sum of $m$ primes $\ge 3$. $\frac{1}{t^2-f(t)}=t^{-2}\sum_{m=0}^\infty (\frac{f(t)}{t^2}-1)^m$.
Let $g(x) = \sum_{p^k} e^{-p^k x} \log p$ be the inverse Mellin transform of $\Gamma(s) \frac{-\zeta'(s)}{\zeta(s)}$. We have the explicit formula $g(x) = \sum Res(\Gamma(s) \frac{-\zeta'(s)}{\zeta(s)} x^{-s}) = x^{-1}- \sum_\rho \Gamma(\rho) x^{-\rho}-\sum_{k=0}^\infty (a_k+b_k \log(x))x^k$. A corresponding explicit formula exists for $f(t)$ but it will be messy because $\Gamma(s)\sum_p p^{-s}$ has many branch points and a natural boundary $\Re(s)=0$.
We don't know anything about the zeros and particular values of $f(t)$, and since it is analytic only for $|t|<1$, $f'(t)/f(t)$ won't be equal to a sum over $f$'s zeros. Most of what we know is the asymptotic of $f(t)$ as $t \to 1$; the error terms depend on the RH. It gives the asymptotic of $\log f(t)$, and possibly of $f'(t)/f(t)$, as $t \to 1$.
If $f(t)$ has a zero on $|t|< 1$, let $z_0$ be one with minimal absolute value, assume there is no other zero on $|z_0|$, then $\frac{f'(t)}{f(t)}-\frac{1}{t-z_0}$ is analytic for $|t|\le |z_0|+\epsilon$ so that $b_n = z_0^{-n}+O(|z_0|+\epsilon)^{-n}$ and $\lim_{n \to \infty} b_n/b_{n+1} = z_0$. By numerical approximation you can show such a $z_0$ exists in which case $z_0 \in (-1,0)$. So your claim $z_0= -\gamma$ is that $f'(t)/f(t)$ is analytic for $|t| < \gamma$. It is plausible there are some heuristics for such a thing given $\frac{f(t)}{1-t} = \sum_{n \ge 2} \pi(n) t^n \approx\sum_{n \ge 2} \frac{n}{\log n} t^n$ |
pigeon-hole and parity | Let
$$S=\{(x_1,\ldots,x_m):x_1,\ldots,x_m\textrm{ is a permutation of }1,\ldots m\}.$$
So for a vector $a=(a_1,\ldots,a_m)\in\mathbb{Z}^n$ we need to show there are
distinct $x$, $y\in S$ with $x\cdot a\equiv y\cdot a$ (mod $m!$). If this
wasn't true, $x\cdot a$ runs through all congruence classes modulo $m!$
as $x$ runs through $S$. Therefore
$$\left(\sum_{x\in S}x\right)\cdot a\equiv\sum_{k=0}^{m!-1}k\pmod{m!}.$$
Each coordinate of $X=\sum_{x\in S}x$ equals $(m-1)!\sum_{j=1}^m j$.
But since $m$ is odd, this is divisible by $m!$. Hence $X\cdot a\equiv0$
(mod $m!$) but $\sum_{k=0}^{m!-1}k\not\equiv0$ (mod $m!$). |
cross product evaluations | $$A \times (B\times C)=A\times(B\times(B\times A))$$
$$=A\times(B(A\cdot B)-A(B\cdot B))$$
$$=(A\cdot B)(A\times B)-(B\cdot B)(A\times A)$$
$$=(A\cdot B)(A\times B)$$ |
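A numeric check of the identity with random vectors (sketch; here $C=B\times A$ as in the question):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    A, B = rng.standard_normal(3), rng.standard_normal(3)
    C = np.cross(B, A)
    lhs = np.cross(A, np.cross(B, C))
    rhs = np.dot(A, B) * np.cross(A, B)
    assert np.allclose(lhs, rhs)
print("A x (B x (B x A)) = (A.B)(A x B) verified")
```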
solving ODE : $xy'=\sin(x+y)$ | Let $u=x+y$ ,
Then $y=u-x$
$y'=u'-1$
$\therefore x(u'-1)=\sin u$
$x\dfrac{du}{dx}-x=\sin u$
$x\dfrac{du}{dx}=\sin u+x$
$\dfrac{du}{dx}=\dfrac{\sin u}{x}+1$
Follow the method in http://science.fire.ustc.edu.cn/download/download1/book%5Cmathematics%5CHandbook%20of%20Exact%20Solutions%20for%20Ordinary%20Differential%20EquationsSecond%20Edition%5Cc2972_fm.pdf#page=223:
Let $v=\tan\dfrac{u}{2}$ ,
Then $\dfrac{dv}{dx}=\dfrac{v^2}{2}+\dfrac{v}{x}+\dfrac{1}{2}$
Let $v=-\dfrac{2}{w}\dfrac{dw}{dx}$ ,
Then $\dfrac{dv}{dx}=-\dfrac{2}{w}\dfrac{d^2w}{dx^2}+\dfrac{2}{w^2}\left(\dfrac{dw}{dx}\right)^2$
$\therefore-\dfrac{2}{w}\dfrac{d^2w}{dx^2}+\dfrac{2}{w^2}\left(\dfrac{dw}{dx}\right)^2=\dfrac{2}{w^2}\left(\dfrac{dw}{dx}\right)^2-\dfrac{2}{xw}\dfrac{dw}{dx}+\dfrac{1}{2}$
$\dfrac{2}{w}\dfrac{d^2w}{dx^2}-\dfrac{2}{xw}\dfrac{dw}{dx}+\dfrac{1}{2}=0$
$4x\dfrac{d^2w}{dx^2}-4\dfrac{dw}{dx}+xw=0$
$w=C_1xJ_1\left(\dfrac{x}{2}\right)+C_2xY_1\left(\dfrac{x}{2}\right)$ (according to http://www.wolframalpha.com/input/?i=4xw''-4w'%2Bxw%3D0)
$\therefore v=-\dfrac{2\dfrac{d}{dx}\left(C_1xJ_1\left(\dfrac{x}{2}\right)+C_2xY_1\left(\dfrac{x}{2}\right)\right)}{C_1xJ_1\left(\dfrac{x}{2}\right)+C_2xY_1\left(\dfrac{x}{2}\right)}=-\dfrac{C_1xJ_0\left(\dfrac{x}{2}\right)-C_1xJ_2\left(\dfrac{x}{2}\right)+4C_1J_1\left(\dfrac{x}{2}\right)+C_2xY_0\left(\dfrac{x}{2}\right)-C_2xY_2\left(\dfrac{x}{2}\right)+4C_2Y_1\left(\dfrac{x}{2}\right)}{2C_1xJ_1\left(\dfrac{x}{2}\right)+2C_2xY_1\left(\dfrac{x}{2}\right)}=\dfrac{xJ_2\left(\dfrac{x}{2}\right)-xJ_0\left(\dfrac{x}{2}\right)-4J_1\left(\dfrac{x}{2}\right)+CxY_2\left(\dfrac{x}{2}\right)-CxY_0\left(\dfrac{x}{2}\right)-4CY_1\left(\dfrac{x}{2}\right)}{2xJ_1\left(\dfrac{x}{2}\right)+2CxY_1\left(\dfrac{x}{2}\right)}$
$u=2\tan^{-1}\dfrac{xJ_2\left(\dfrac{x}{2}\right)-xJ_0\left(\dfrac{x}{2}\right)-4J_1\left(\dfrac{x}{2}\right)+CxY_2\left(\dfrac{x}{2}\right)-CxY_0\left(\dfrac{x}{2}\right)-4CY_1\left(\dfrac{x}{2}\right)}{2xJ_1\left(\dfrac{x}{2}\right)+2CxY_1\left(\dfrac{x}{2}\right)}$
$y=2\tan^{-1}\dfrac{xJ_2\left(\dfrac{x}{2}\right)-xJ_0\left(\dfrac{x}{2}\right)-4J_1\left(\dfrac{x}{2}\right)+CxY_2\left(\dfrac{x}{2}\right)-CxY_0\left(\dfrac{x}{2}\right)-4CY_1\left(\dfrac{x}{2}\right)}{2xJ_1\left(\dfrac{x}{2}\right)+2CxY_1\left(\dfrac{x}{2}\right)}-x$ |
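A finite-difference spot-check, assuming SciPy is available (sketch), that $w=xJ_1(x/2)$ and $w=xY_1(x/2)$ both satisfy $4xw''-4w'+xw=0$:

```python
import numpy as np
from scipy.special import jv, yv

h = 1e-4
for f in (lambda x: x * jv(1, x / 2), lambda x: x * yv(1, x / 2)):
    for x in np.linspace(1.0, 10.0, 19):
        w = f(x)
        wp = (f(x + h) - f(x - h)) / (2 * h)
        wpp = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
        assert abs(4 * x * wpp - 4 * wp + x * w) < 1e-4
print("both Bessel combinations satisfy the ODE (numerically)")
```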
Prove that the limit of this sequence is $\frac{1}{2}(1 + \sqrt{5})$ | To be rigorous: first check that $S_2>S_1$ and $S_1<S_0\equiv\frac{1}{2}(1+\sqrt{5})$ and use an induction argument to show that $S_n$ is increasing and each $S_n$ is bounded above by $S_0$. Thus, a limit $S$ exists satisfying $S=\sqrt{S+1}$. You can then check that $S=S_0$.
More details: to check $S_1<S_2$, you simply plug in the numbers. Similarly, do the same for $S_1<S_0$. Now, in one induction argument, you can verify both that $S_n$ is increasing and $S_n$ is bounded above by $S_0$. We have just done the base cases. The induction steps are
$$
S_{n+1}-S_n=\sqrt{S_n+1}-\sqrt{S_{n-1}+1}\geq 0\text{ if }S_n\geq S_{n-1};\\
S_{n+1}=\sqrt{S_n+1}\leq\sqrt{S_0+1}=S_0\text{ if }S_n\leq S_0.
$$
Therefore, now you have $\{S_n\}$ is an increasing sequence that is bounded above. So it has some limit $S$. This $S$ satisfies $S=\sqrt{S+1}$ which you obtain by taking the limits on both sides of $S_{n+1}=\sqrt{S_n+1}$. Note that this taking limits step isn't proper unless you first justify that the limit exists. Hence, the induction argument we did in the beginning. |
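A quick numeric illustration (sketch; the starting value $S_1=1$ is a hypothetical choice, any positive start works):

```python
import math

S = 1.0
for _ in range(50):
    S = math.sqrt(S + 1)
print(S, (1 + math.sqrt(5)) / 2)   # both 1.618033988749895
```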
$AB$ invertible but $A$ not | Use $\operatorname{rank}(AB)\leq \min\{\operatorname{rank}(A),\operatorname{rank}(B)\}$ and the fact that a matrix with full row rank has a right inverse. |
Do minimal hyperbolic surfaces exist? What do they look like? | $\newcommand{\Reals}{\mathbf{R}}$There is no minimal surface of constant negative Gaussian curvature in $\Reals^{3}$, even locally.
Up to scaling, the principal curvatures would satisfy
$$
\kappa_{2} = -\kappa_{1},\qquad
-1 = \kappa_{1} \kappa_{2} = -\kappa_{1}^{2},
$$
so the principal curvatures would be constant: $\kappa_{1} = 1 = -\kappa_{2}$ without loss of generality.
If a surface in $\Reals^{3}$ has constant principal curvatures, the Codazzi equations give $\kappa_{1} - \kappa_{2} = 0$ or $\kappa_{1}\kappa_{2} = 0$. (See, for example, O'Neill, Elementary Differential Geometry, Second revised edition, Theorem 2.6, page 272.) This excludes a minimal surface of constant negative Gaussian curvature.
In case it's of interest, a connected surface in $\Reals^{3}$ having constant principal curvatures is part of a plane, cylinder, or sphere. See, for example, O'Neill, Elementary Differential Geometry, Second revised edition, Exercise 5 on page 280. |
Prove that if $X$ and $Y$ are sets where $\,\left|X\right|=\left|Y\right|,\,$ then $\,\left|P\left(X\right)\right|=\left| P\left(Y\right)\right|$. | We have a bijection $f$ from $X$ to $Y$. What function comes to mind from $P(X)$ to $P(Y)$?
Send $S\subseteq X$ to $f(S)$ and prove this correspondence is injective and surjective. |
Atomless probability measure | No.
If $X_1=0$ with full probability and the CDF of $X_2$ is continuous then the distribution of $(X_1,X_2)$ is atomless. |
Show that there is sequence of homeomorphism polynomials on [0,1] that converge uniformly to homeomorphism | We can show the result along the lines you sketched. First, we note that it suffices to consider monotonically increasing $f$, for the transformation $g \mapsto 1-g$ is an isometry that preserves polynomials.
Now we need to approximate the homeomorphism $f$ by continuously differentiable homeomorphisms. For that, extend $f$ to $\mathbb{R}$ by setting
$$g(x) = \begin{cases} f(x) &, x \in [0,1] \\ x &, x \notin [0,1].\end{cases}$$
Further, let $\varphi$ be a non-negative even smooth function with compact support and $\int_\mathbb{R} \varphi(x)\,dx = 1$. Convolution yields a family
$$g_\eta(x) = \int_\mathbb{R} g(x-\eta y)\varphi(y)\,dy$$
of strictly increasing smooth functions converging uniformly to $g$ on $\mathbb{R}$ for $\eta \searrow 0$, with $g_\eta(x) = x$ for $x \leqslant -\eta K$ or $x \geqslant 1+\eta K$ if the support of $\varphi$ is contained in $[-K,K]$.
Given $0 < \varepsilon < \frac{1}{2}$, choose $\eta > 0$ so small that $\lvert g_\eta(x)-g(x)\rvert < \frac{\varepsilon}{10}$ for all $x\in\mathbb{R}$. Then
$$h(x) = \frac{g_\eta(x) - g_\eta(0)}{g_\eta(1) - g_\eta(0)}$$
defines a smooth homeomorphism of $[0,1]$. We have
$$\begin{align}
\lvert g_\eta(x) - h(x)\rvert &= \left\lvert \frac{g_\eta(x)(g_\eta(1)-g_\eta(0)) - (g_\eta(x)-g_\eta(0))}{g_\eta(1)-g_\eta(0)}\right\rvert\\
&\leqslant \frac{\lvert g_\eta(x)\rvert\cdot\lvert g_\eta(1)-1\rvert + \lvert g_\eta(0)\rvert\cdot \lvert 1-g_\eta(x)\rvert}{g_\eta(1)-g_\eta(0)}\\
&\leqslant 2\frac{\varepsilon}{10}\frac{1+\frac{\varepsilon}{10}}{1-2\frac{\varepsilon}{10}}\\
&< \frac{\varepsilon}{4},
\end{align}$$
so $\lvert g(x) - h(x)\rvert < \frac{\varepsilon}{2}$ for all $x\in [0,1]$.
Now, since $h$ is smooth and strictly increasing, $h'$ is continuous and strictly positive, hence you can uniformly approximate $h'$ by positive polynomials. If $Q$ is a positive polynomial such that $\lvert Q(x)-h'(x)\rvert < \frac{\varepsilon}{5}$ for $x\in [0,1]$, and $P(x) = \int_0^x Q(t)\,dt$, then $P$ is a strictly increasing (on $[0,1]$) polynomial with $\lvert P(x) - h(x)\rvert < \frac{\varepsilon}{5}$ for all $x\in [0,1]$ and $R(x) = \frac{P(x)}{P(1)}$ is a polynomial homeomorphism of $[0,1]$ with
$$\lvert P(x) - R(x)\rvert = \left\lvert \frac{P(x)(P(1)-1)}{P(1)}\right\rvert\leqslant \frac{\varepsilon}{5}\cdot\frac{1+\frac{\varepsilon}{5}}{1-\frac{\varepsilon}{5}} < \frac{\varepsilon}{4},$$
so
$$\lvert f(x) - R(x)\rvert \leqslant \lvert f(x) - h(x)\rvert + \lvert h(x) - P(x)\rvert + \lvert P(x) - R(x)\rvert < \varepsilon.$$ |
How to do these Geometry proofs using vectors? | We do not work with free vectors. We work with position vectors, which are fixed on the Euclidean plane wrt some origin. (See figure.) The underlying coordinate plane provides coordinates to any points $P,Q$. This facilitates writing $\vec{OP}=P-O$ and $\vec{PQ}=Q-P$. For example if $P=(3,4)$ on coordinate plane, then $\vec{OP}=(3-0,4-0)=3\hat{i}+4\hat{j}$.
So let position vectors of the vertices of trapezium be $A,B,C,D$. Since $E$ is midpoint of $AD$, its position vector will be (think coordinates)
$$E=\frac{A+D}{2} \, , \quad F=\frac{B+C}{2}$$
Now displacements are given by differences of position vectors
$$\vec{AB}=B-A \, , \quad \vec{DC}=C-D$$ and lengths are their magnitudes, $|AB|=|\vec{AB}|$ and $|DC|=|\vec{DC}|$.
Since $AB // DC$, $\,\vec{DC}=k\vec{AB}$.
Now $$\vec{EF}=F-E=\frac{(B-A)+(C-D)}{2}=\frac{\vec{AB}+\vec{DC}}{2}$$ and since $\vec{AB}$ and $\vec{DC}$ have the same direction, $|EF|=\frac{1}{2}(|AB|+|DC|)$.
You can also conclude about parallelism.
This was Ques $1$. Now you can do Ques $2$. |
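A coordinate spot-check of the midline result on a hypothetical trapezium with $AB\parallel DC$ (sketch):

```python
import numpy as np

A, B = np.array([0.0, 0.0]), np.array([4.0, 0.0])
D, C = np.array([1.0, 2.0]), np.array([3.5, 2.0])     # DC parallel to AB
E, F = (A + D) / 2, (B + C) / 2

EF = np.linalg.norm(F - E)
assert np.isclose(EF, 0.5 * (np.linalg.norm(B - A) + np.linalg.norm(C - D)))
det = (F - E)[0] * (B - A)[1] - (F - E)[1] * (B - A)[0]
assert abs(det) < 1e-12                                # EF is parallel to AB
print("|EF| =", EF)                                    # 3.25 = (4 + 2.5)/2
```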
Finding Vector Spaces | (a) looks right, hint for (b): $\quad\quad
\begin{bmatrix}4a+3b\\ 0\\ a+b+c\\ c-2a\\ \end{bmatrix}
= a \cdot \begin{bmatrix}4\\ 0\\ 1\\ -2\\ \end{bmatrix}
+ b \cdot \begin{bmatrix}3\\ 0\\ 1\\ 0\\ \end{bmatrix}
+ c \cdot \begin{bmatrix}0\\ 0\\ 1\\ 1\\ \end{bmatrix}
$ |
Not sure about proof presentation - existence of division with remainder | Your proof of existence looks fine; however, there's something off in your uniqueness proof. You state first that $d > r_1-r_2$, and then that $d = r_1-r_2$. I would go for a simpler argument like the following:
Suppose $a = q_1d + r_1 = q_2d+r_2$ with $q_1,q_2\geq 0$ and $d>r_1\geq r_2\geq 0$. Then $r_1-r_2 =(q_2-q_1)d \geq 0$. By choice of $r_1,r_2$, we must have $d>r_1-r_2\geq 0$, which means $d > (q_2-q_1)d \geq 0$. Dividing each side by $d$, we get $1 > q_2-q_1 \geq 0$. Since $q_1$ and $q_2$ are integers, this means $q_2-q_1 = 0$, and hence $r_1-r_2 = 0$. |
$\mathbb{Z}^{+}$ includes zero or not? | A number $x$ is defined to be positive if $x > 0$. Is $0 > 0$? No, so it is non-positive (and it is also non-negative).
$\mathbb Z^+$ is a notation, so it is difficult to argue about it, because some authors do use non-standard notation and it's alright as long as they're consistent. But $\mathbb Z^+_0$ is a better notation for the set of non-negative integers. |
Can a torus be cut into a MΓΆbius strip with zero number of half twists? | If you think of a torus by taking a cylinder and identifying its ends, then it is clear that cutting along this circle gives you a cylinder (a MΓΆbius strip with zero half twists). |
Is the compact interval $[0,1]$ in the usual topology compact in this new topology? | It is not a compact space.
Let $\pi_n$ be a positive increasing sequence converging to an irrational number $p<1$. You have that $\{p\}=\emptyset \cup \{p\}$ is open in $[0,1]$.
So in particular $X=\cup_{n>0} \{(0,\pi_{n})\} \cup \{[0,\pi_1),\{p\},(p,1]\}$ is an open cover from which you cannot extract a finite subcover. |
Left ideals of matrix rings are direct sum of column spaces? | A small correction first, $I$ is isomorphic to a direct sum of $C_j$'s (and all of the $C_i$ are isomorphic between themselves) - not necessarily equal.
The reason is that $M_n(K)$ is a direct sum of isomorphic (minimal) left ideals
$$M_n(K) = \oplus_{j=1}^n C_j$$
and that should finish it with some theory of semi simple modules.
Or, you can reason as follows: every left ideal $I$ of $M_n(K)$ is of the form
$$I = I_B = \{ A \mid A \cdot B = 0 \}$$
for some matrix $B$; equivalently, $I = I_W$ consists of all the matrices which are $0$ on a given subspace $W$ of $\mathbb{K}^n$. For $W$ = $K e_{l+1} + \cdots + K e_{n}$ we get $I_W = C_1 \oplus \cdots \oplus C_l$. |
$S$ is a subset of $\mathbb N$. Prove $S$ $=$ $\mathbb N$ | Using the method that if $1\in S$ and, $\forall k\in\Bbb N$, $k\in S$ implies $k+1\in S$, then $S=\Bbb N$:
First, $2\ge2$ and $2=2^1\in S$, so $2-1=1\in S$.
Second, let $m\in\Bbb N$ and let $k\in\Bbb N$ be the smallest value for which $2^k>m$.
Since $2^k\in S$, so is $2^k-1$,
and as $2^k-1\in S$, so is $(2^k-1)-1$.
Continuing in this way, $2^k-l\in S$ for every $l\in\Bbb N$ with $l<2^k$.
In particular, taking $l=2^k-m-1$ gives $m+1=2^k-l\in S$.
Hence $S=\Bbb N$ |
Understanding how the hint will prove injectivity. | Taking $J=\langle X\rangle$, let $r$ be as in the conclusion of Nakayama's Lemma. Then $r=p(X)X + 1$ for some $p(X)\in R[X]$. If $m\in\mathrm{ker}(f)$, then $p(X)X\cdot m = 0$, so
$$ m = 1\cdot m + p(X)X\cdot m = (1+p(X)X)\cdot m = r\cdot m = 0.$$ |
Show $\sqrt{1 + (\int_{\Omega} h d\mu)^2} \leq \int_{\Omega} \sqrt{1+h^2} d\mu$ | Hint: Apply Jensen's inequality to the convex function $x\mapsto \sqrt{1+x^2}$.
Trivia: If $\Omega=[0,1]$, $\mu=\lambda$ and $h=f'$ for some increasing $C^1$ function $f$ with $f(0)=0$ and $f(1)=1$, then the inequality reads
$$
\sqrt{1^2+1^2}=\sqrt{1+\left(\int_0^1 f'(x)\,\mathrm dx\right)^2}\leq \int_0^1 \sqrt{1+f'(x)^2}\,\mathrm dx
$$
meaning that the shortest path between the points $(0,0)$ and $(1,1)$ is obtained when travelling in a straight line, since the right-hand side is the arc length of the path obtained by travelling along $f$. |
How many different sums can be obtained of those p students, knowing that all of them filled their boards correctly? | HINT
Show that any kind of distribution of minus signs can be transformed into any other by repeating the following step:
If you have a minus sign in square $(i,j)$ (row $i$ and column $j$), and a minus sign in $(k,l)$ with $i \not = k$ and $j \not = l$, and you do not have a minus sign in either $(i,l)$ or $(k,j)$, then you can move the minus sign from $(i,j)$ to $(i,l)$, and the minus sign from $(k,l)$ to $(k,j)$
Note that any one such step will keep the sum the same, so no matter how many of these kinds of steps we take, the sum remains the same as you transform one distribution into another. Also, to show that all distributions can be transformed into each other, it might be helpful to show that such distributions can be transformed to and from a kind of 'canonical' distribution, i.e. one that has a 'nice' pattern of minus signs. |
Benedict Gross Abstract Algebra | There is a link on Benedict Gross's own Harvard webpage to the 'old' lecture notes and assignments since they were deleted off the original webpage. |
Problem: Connection between cross ratio and collinearity | The setup is invariant under projective transformations, so without loss of generality choose $$A_1=[1:0:0]\qquad A_2=[0:1:0]\qquad A_3=[0:0:1]$$ Then with $P=[p_1:p_2:p_3]$ you get $$B_1=[0:p_2:p_3]\qquad B_2=[p_1:0:p_3]\qquad B_3=[p_1:p_2:0]$$ either via cross-product computation or by observing that points on $A_2A_3$ have a zero in the first coordinate, and $B_1=P-p_1A_1$ is exactly the linear combination of $P$ and $A_1$ which satisfies this property. Likewise for the other two. Similarly you can assume $C_i=\lambda_i P+\mu_i A_i$ and then compute the cross ratio with respect to the basis $P,A_i$ as
$$a_i=(A_i,B_i;P,C_i)=
\left(
\begin{bmatrix}0\\1\end{bmatrix},\begin{bmatrix}1\\-p_i\end{bmatrix};
\begin{bmatrix}1\\0\end{bmatrix},\begin{bmatrix}\lambda_i\\\mu_i\end{bmatrix}
\right)=\\
\frac
{\begin{vmatrix}0&1\\1&0\end{vmatrix}\cdot
\begin{vmatrix}1&\lambda_i\\-p_i&\mu_i\end{vmatrix}}
{\begin{vmatrix}0&\lambda_i\\1&\mu_i\end{vmatrix}\cdot
\begin{vmatrix}1&1\\-p_i&0\end{vmatrix}}=
\frac{\mu_i+\lambda_ip_i}{\lambda_ip_i}\\
\lambda_ip_ia_i=\mu_i+\lambda_ip_i\\
0=\mu_i+\lambda_ip_i(1-a_i)$$
Solutions to this are as usual only defined up to some scalar factor, but one simple solution would be the following:
$$\mu_i=p_i(a_i-1)\qquad \lambda_i=1\\
C_i=P + p_i(a_i-1)A_i
$$
All three points are collinear if their determinant vanishes.
$$0=\det[C_1,C_2,C_3]=\begin{vmatrix}
p_1+p_1(a_1-1) & p_1 & p_1 \\
p_2 & p_2+p_2(a_2-1) & p_2 \\
p_3 & p_3 & p_3+p_3(a_3-1)
\end{vmatrix}$$
The rest is just a bit of algebra. |
How to show that the inequality with exponents is true | (In the true spirit of stackexchange:)
Yes. |
Proof of a weaker version of Hall's Marriage Theorem | Actually, all you need to show is that the inequality $N_{G_2}(U) \ge |U|$ holds for all $U \subseteq V_1 \setminus S$. Go back and reread your top paragraph which is shaded orange in your OP to see for yourself. We do this next.
Let $U \subseteq V_1 \setminus S$. Suppose that the strict inequality $|N_{G_2}(U)| < |U|$ holds.
Then on the one hand
$N_G(U \cup S) \subseteq N(S) \cup N_{G_2}(U)$ [make sure you see why]
and so $|N_G(U \cup S)|$ $\le |N(S)| + |N_{G_2}(U)|$ $=$ $|S| + |N_{G_2}(U)| <
|S|+ |U|$.
On the other hand $U$ and $S$ are disjoint so $|U \cup S|=|U|+|S|$.
Can you finish from here?
Thus if the strict inequality $|N_{G_2}(U)| < |U|$ holds then it follows that the strict inequality $|N_G(U \cup S)| < |U \cup S|$ also holds, which contradicts the original assumption that $|N_{G}(S')| \ge |S'|$ for all subsets $S' \subseteq V_1$; indeed take $S'=S \cup U$. |
Probability of three specific cards from different suits? | I'm assuming you are doing this without replacement. Let $S$ be the set of suits $[c,s,h,d]$ and let $s_1$ be the suit of the first card drawn. Since $s_1$ can by any of the four, then the probability for the first card is just the probability of getting a three, which is
$$\frac{4}{52}$$
For the next card, let the suit be $s_2$. Then $s_2$ can be chosen from only three of the four suits - the three not yet chosen. Since it must also be a $5$, then the probability is
$$\frac{3}{4}\cdot\frac{4}{51}$$
For the last card, $s_3$ can only be chosen from $2$ of the $4$ suits, and it must be a $7$, so its probability is
$$\frac{2}{4}\cdot\frac{4}{50}$$
Thus the probability of all $3$ happening is
$$\frac{4}{52}\cdot\frac{3}{4}\cdot\frac{4}{51}\cdot\frac{2}{4}\cdot\frac{4}{50}$$
I'll let you simplify that. |
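An exact enumeration over ordered three-card draws agrees with that product (sketch):

```python
from itertools import permutations

deck = [(rank, suit) for rank in range(1, 14) for suit in "CDHS"]
favourable = total = 0
for c1, c2, c3 in permutations(deck, 3):          # ordered draws without replacement
    total += 1
    if c1[0] == 3 and c2[0] == 5 and c3[0] == 7 and len({c1[1], c2[1], c3[1]}) == 3:
        favourable += 1
print(favourable / total, (4/52) * (3/51) * (2/50))   # both ≈ 0.000181
```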
Help understanding Feigenbaum constants from bifurcation mapping | Sorry for the confusion in the comments.
The $\mu_k$ seems to be the least parameter of the non-linear mapping at which $k$ bifurcations have occurred (doublings of the solutions that the iterate "hops" between). This is modelled as a geometric series with quotient $$\delta \approx \frac{\mu_{k}-\mu_{k-1}}{\mu_{k+1}-\mu_k}$$
i.e., how much larger the span of parameters is from one bifurcation to the next.
The Feigenbaum constant is the number $\delta \approx 4.66...$, the common ratio of bifurcation gaps in parameter space (that this is the same for all second-degree mappings was the remarkable thing Feigenbaum found in 1978).
The $c$ is the distance in parameter space from the first bifurcation to where chaos occurs. I don't think this one is independent of which second-order mapping we are testing.
So use the sequence $\{\mu_k\}$ that you can measure to estimate $\delta$, then use $\mu_\infty - \frac{c}{\delta^k}$ to estimate $c$.
You can probably derive it from wikipedia expressions of geometric series if you want to. |
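A quick illustration using the first few logistic-map bifurcation parameters (standard published values, quoted here as sample data rather than computed):

```python
# mu_k = smallest parameter of x -> mu*x*(1-x) at which the k-th period doubling has occurred
mu = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]
ratios = [(mu[k] - mu[k-1]) / (mu[k+1] - mu[k]) for k in range(1, len(mu) - 1)]
print(ratios)   # ≈ [4.75, 4.66, 4.67], approaching the Feigenbaum constant 4.669...
```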
Principal ideal ring analytic functions | While I am not very familiar with analytic functions themselves, I would imagine that one could construct an infinite ascending chain of ideals, showing that the ring is not Noetherian (and hence, definitely not principal.)
So say for example $I_j=\{f \mid \forall n\in\mathbb{N}, n\geq j\ \ f(n)=0\}$ would probably constitute an infinite ascending chain of ideals (as it does in the ring of continuous functions.)
Added: Experts have added useful examples in the comments to show that there is such an ascending sequence:
Ragib Zaman: $f_j(x)=\prod_{k=j+1}^{\infty} (1-z^2/k^2).$
Georges Elencwajg: $f_j(z)=\frac {\sin(\pi z)}{z(z-1)\cdots(z-j)}$
Thank you for contributing these: I too have learned something, now :) |
f is a meromorphic function satisfying $\vert f(z)\vert\leq\vert z\vert^n$ then f is a rational function | As $\overline D(0, r)$ is compact, $f$ has a finite number of poles in this disc. So by multiplying by the polynomial $P=\prod (z-\alpha_i)^{m_i}$ for $\alpha_i$ the poles and $m_i$ their multiplicity, you still have $|Pf(z)|\leq |Q|$ for a certain polynomial $Q$, and $z$ outside the disc. But $|Pf|$ is continuous function on this compact disc so we have $C$ such that $|Pf|\leq C$ on the disc. Then, $|Pf|\leq C + |Q|$ everywhere. Then, you can see that there is a polynomial $R$ such that $|Pf|\leq|R|$ everywhere. This is a classical result that an entire function bounded by a polynomial is a polynomial (see for example this thread), so we are done. |
Integral: $\aleph(f) = \int_0^1 (f(x^k) + f(x^{\frac{1}{k}})) f'(x) \; \mathrm{d}x$ is not dependent on $k$ | Substitute $y=x^{1/k}$ in the second integral and then integrate by parts:
$$
\int_0^1 f(x^{1/k}) f'(x) \, dx = \int_0^1 f(y) f'(y^k) k y^{k-1} \, dy
= \bigl[ f(y) f(y^k)\bigr]_{y=0}^{y=1} - \int_0^1 f'(y) f(y^k) \, dy
$$
so that
$$
\int_0^1 \bigl(f(x^k) + f(x^{1/k}) \bigr) f'(x) \, dx = f(1)^2 - f(0)^2
$$
holds for all (continuously differentiable) functions $f$.
In the same manner one can show that
$$
\int_0^1 \bigl(f(\phi(x)) + f(\phi^{-1}(x)) \bigr) f'(x) \, dx = f(1)^2 - f(0)^2
$$
for an increasing differentiable mapping $\phi$ from the interval $[0, 1]$ onto itself. |
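A numeric sketch with the hypothetical test function $f=\tanh$, showing the value does not depend on $k$:

```python
import math

f = math.tanh
fp = lambda x: 1 / math.cosh(x)**2

def integral(k, N=200_000):          # midpoint rule on [0, 1]
    h = 1.0 / N
    return sum((f(((i + 0.5) * h)**k) + f(((i + 0.5) * h)**(1.0 / k))) * fp((i + 0.5) * h) * h
               for i in range(N))

for k in (2.0, 3.0, 7.5):
    print(k, integral(k), f(1)**2 - f(0)**2)   # all ≈ 0.5800, independently of k
```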
Absolutely continuous and almost everywhere solution of a controlled dynamical system | The matrix $A$ is a generalized function, its values can be changed on any null set without changing it, that is, without leaving the equivalence class of ordinary functions that make up $A$. Any claims involving values of $A$ can only be "a.e."
"absolutely continuous" is the strongest regularity you can get in an anti-derivative of a locally integrable function. It replaces the differentiability you would get if $A$ were continuous. |
Probability problem: Commission of $4$ members votes for a bill | It is not easier to use complementation in this instance.
If $X$ is the count of up-votes, then:
$$\begin{aligned}\mathsf P(X\geq 3)~ &=~1-\mathsf P(X=0)-\mathsf P(X=1)-\mathsf P(X=2)&&\text{(the correct way to use complements; don't forget the case of zero up-votes)}\\[2ex]&=~\mathsf P(X=3)+\mathsf P(X=4)&&\text{(anyway, it's easier to just use this)}\end{aligned}$$ |
General expression of polynomial roots sequence. | By Descartes rule of signs, $P_n(x)$ has one sign change, so it has one root on the positive half of the real line. $P_n(2) = 1$ and $P_n \left( 2-\frac{1}{n} \right) = \frac{n - \left( 2-\frac{1}{n} \right)^n}{n-1}$. Since $n-1>0$ for $n > 1$, the sign of this latter expression is controlled by the numerator. If we can show the numerator is negative for $n$ sufficiently large, the root must lie in $(2-\frac{1}{n}, 2)$. As the lower end of this interval is approaching $2$, the limit of the only positive root of $P_n$ approaches $2$ as $n \rightarrow \infty$.
For $n > 2$, $2 - \frac{1}{n} > 3/2$, so $\left( 2 - \frac{1}{n} \right)^n > \left( \frac{3}{2} \right)^n$. We find that $\frac{\mathrm{d}}{\mathrm{d}n} \left( n - \left( \frac{3}{2} \right)^n \right) = 0 $ when $n = \frac{\ln(\ln 3 - \ln 2)}{\ln 2 - \ln 3}$ at which, $\frac{\ln(\ln 3 - \ln 2)}{\ln 2 - \ln 3} - \left( \frac{3}{2} \right)^{\frac{\ln(\ln 3 - \ln 2)}{\ln 2 - \ln 3}} = \frac{-(1+\ln\ln(3/2))}{\ln(3/2)} = -0.2399{\dots} < 0$. Therefore, the numerator above is negative (at least) for all $n > 2$. |
$10-e$ interesting decimal expansion property | This works given that in $e$ we are getting several pairs of digits that add up to $9$, nothing more.
That is, let's take some random decimal number, making sure that we get pairs of digits adding up to $9$, e.g.:
$$4.637236091881....$$
Subtract from $10$
$$10-4.637236091881....=5.362763908118...$$
And you get the same pairs of digits back, swapped within each pair! |
Subspace generated by three bivectors | Hint
(a) It's straightforward to find examples for which $\dim \operatorname{span}\{x_1, x_2, x_3\}$ is $1$, $3$.
So, suppose $\dim \operatorname{span} \{x_1, x_2, x_3\} < 3$. Then (by relabeling if necessary) we may assume that $x_3 \in S := \operatorname{span}\{x_1, x_2\}$. What can we conclude about $V_1, V_2, V_3$ if $\dim S = 2$?
(b) Suppose $\dim \operatorname{span} \{x_1, x_2, x_3\} = 1$. Then, we may take $\alpha_1 = \alpha_2 = \alpha_3 = x_1$ and thus $B_i = x_1 \wedge \beta_i$ for $i=1,2,3$. What can we then conclude from the linear independence of $\{B_1, B_2, B_3\}$? |
Is $P$ a subspace of $\mathbb{R_{β€2}[x]}$? | If $p \in P$, $p_0 = p(0)=0$, so your objection to the third condition is not true. |
How should I interpret notation such as $(X \times I) / (X \times \{0\})$ when dealing with quotient spaces? | Informally I think of $X/A$ to mean all the points of $A$ are glued together as to be indistinguishable while the other elements of $X$ are left unchanged. This means the equivalence classes are all singletons except for $A$ which is its own equivalence class as you have correctly deduced. |
Natural Deduction Proof: {A v B, Β¬A v C} β’ B v C | You are setting up for disjunction elimination, so when you derive contradiction, use explosion to derive the target conclusion.$$\def\fitch#1#2{~~\begin{array}{|l}#1\\\hline#2\end{array}}\fitch{~~1.~A\lor B\hspace{10ex}\textsf{Premise}\\~~2.~\lnot A\lor C\hspace{8.5ex}\textsf{Premise}}{\fitch{~~3.~A\hspace{12.5ex}\textsf{Assumption}}{\fitch{~~4.~\lnot A\hspace{8.5ex}\textsf{Assumption}}{~~5.~\bot\hspace{10ex}\textsf{Negation Elimination}\\~~6.~B\lor C\hspace{5.75ex}\textsf{Explosion (aka }\textit{Ex Falso Quodlibet}\textsf{)}}\\~~\vdots\\~~m.~B\lor C }\\~~\vdots\\~n.~B\lor C}$$
I am sure you can fill out the rest of this "proof by cases". |
Adding a combination of and + or operator constraint in Linear Programming | Assume $x$, $y$ and $z$ are binary variables.
For $z$ to be $x$ OR $y$ (inclusive or): $z\le x + y$; $z\ge x$; $z\ge y$.
For $z$ to be $x$ XOR $y$ (exclusive or): $z\le x + y$; $z\le 2 - (x + y)$; $z\ge x - y$; $z\ge y - x$.
For $z$ to be $x$ AND $y$: $z\ge x + y - 1$; $z\le x$; $z\le y$. |
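A tiny brute-force check (sketch) that each linearization forces the intended value of $z$ for binary $x,y$:

```python
from itertools import product

OR  = [lambda x, y, z: z <= x + y, lambda x, y, z: z >= x, lambda x, y, z: z >= y]
XOR = [lambda x, y, z: z <= x + y, lambda x, y, z: z <= 2 - (x + y),
       lambda x, y, z: z >= x - y, lambda x, y, z: z >= y - x]
AND = [lambda x, y, z: z >= x + y - 1, lambda x, y, z: z <= x, lambda x, y, z: z <= y]

def feasible(x, y, cons):
    return [z for z in (0, 1) if all(c(x, y, z) for c in cons)]

for x, y in product((0, 1), repeat=2):
    assert feasible(x, y, OR)  == [x | y]
    assert feasible(x, y, XOR) == [x ^ y]
    assert feasible(x, y, AND) == [x & y]
print("each linearization admits exactly z = x OR/XOR/AND y")
```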
Homomorphism $\phi: \Bbb Z\to R$ where $R$ is a ring with identity. | You are right it is unique, basically for the reasons you gave.
However, it is not "the identity map" as the term is usually understood. For example note that when $R$ is finite, such as $R = \mathbb{Z}/n \mathbb{Z}$, your map certainly cannot even be injective.
What is true, and maybe you meant this, is that for each $n \in \mathbb{Z}$ you necessarily have $\phi(n)= n \cdot 1_R$.
To complete your argument you should consider negative integers too, and, depending on how formal you want to be, give a proof of the general case for positive $n$ via induction. |
Joint entropy of 2 independent random variables | When you rewrote $-\sum P(x) \log_2 P(x)$ as $H(x)$ on the left hand side, you neglected that $1/\log_2 P(x)$ on the right hand side still depends on the summation variable $x$. |
Prove that if $u$ and $v$ are harmonic conjugate then $\nabla u\bot \nabla v$ | To keep the question from being unanswered. Yes, you are correct. |
Finding patterns in differential equation coefficients | There's a pattern but it's somewhat complicated. Let's write your relation as
$$A_m b^{m+1} = a_{0,m} b + a_{1,m} b' + \cdots + a_{m,m} b^{(m)}$$
where $A_m$ is a constant and $a_{i,j}$ are functions of $x$. Taking the derivative of the $LHS$ gives
$$(A_m b^{m+1})' = A_m (m+1) b^m b'$$
We already know that
$$A_1 b^2 = a_{0,1}b + a_{1,1}b'$$
so we solve this for $b'$
$$b' = \frac{A_1 b^2 - a_{0,1}b}{a_{1,1}}$$
Substituting this back into the derivative of the $LHS$ gives
$$LHS' = A_m (m+1) b^m \frac{A_1 b^2 - a_{0,1}b}{a_{1,1}} = \frac{(m+1)A_1 A_m}{a_{1,1}}b^{m+2} - \frac{a_{0,1}}{a_{1,1}}(m+1)A_m b^{m+1}$$
Here we can use the $A_m$ relation. So
$$a_{1,1} LHS' = (m+1)A_1A_m b^{m+2} - a_{0,1}(m+1)(a_{0,m} b + a_{1,m} b' + \cdots + a_{m,m} b^{(m)})$$
Since differentiating both sides of the original relation gives $LHS' = RHS'$, we can write
$$(m+1)A_1 A_m b^{m+2} = a_{0,1}(m+1)(a_{0,m} b + a_{1,m} b' + \cdots + a_{m,m} b^{(m)}) + a_{1,1}RHS'$$
The derivative of the $RHS$ is
$$RHS' = (a_{0,m}' b + a_{1,m}' b' + \cdots + a_{m,m}' b^{(m)}) +( a_{0,m} b' + a_{1,m} b'' + \cdots + a_{m,m} b^{(m+1)})$$
Combining terms gives
$$RHS' = a_{0,m}' b + (a_{1,m}'+a_{0,m})b' + \cdots + (a_{m,m}'+a_{m-1,m})b^{(m)} + a_{m,m} b^{(m+1)}$$
Finally
$$(m+1)A_1 A_m b^{m+2} = a_{0,1}(m+1)(a_{0,m} b + a_{1,m} b' + \cdots + a_{m,m} b^{(m)}) + a_{1,1}(a_{0,m}' b + (a_{1,m}'+a_{0,m})b' + \cdots + (a_{m,m}'+a_{m-1,m})b^{(m)} + a_{m,m} b^{(m+1)})$$
and combining terms yields
$$\begin{align*}A_{m+1} b^{m+2} = &((m+1)a_{0,1}a_{0,m} + a_{1,1}a_{0,m}')b
\\&+((m+1)a_{0,1}a_{1,m}+a_{1,1}(a_{1,m}'+a_{0,m})) b'
\\&+\cdots
\\&+((m+1)a_{0,1}a_{m,m} + a_{1,1}(a_{m,m}'+a_{m-1,m}))b^{(m)}
\\&+a_{1,1}a_{m,m} b^{(m+1)}
\\&= a_{0,m+1}b + a_{1,m+1}b' + \cdots + a_{m,m+1}b^{(m)} + a_{m+1,m+1} b^{(m+1)}
\end{align*}$$
So we can read off the coefficient recurrence relations directly as
$$\begin{align*}
A_{m+1} &= (m+1)A_1 A_m\\
a_{0,m+1} &= (m+1)a_{0,1}a_{0,m} + a_{1,1}a_{0,m}'\\
a_{i,m+1} &= (m+1)a_{0,1}a_{i,m}+a_{1,1}(a_{i,m}'+a_{i-1,m}),\quad \text{for all } 1\leq i \leq m\\
a_{m+1,m+1} &= a_{1,1}a_{m,m}
\end{align*}$$
Finally, from your equations
$$A_{1} = -n, \quad a_{0,1} = n-x, \quad a_{1,1} = x$$
The recurrence relations become
$$\begin{align*}
A_{m+1} &= -n(m+1) A_m\\
a_{0,m+1} &= (m+1)(n-x)a_{0,m} + xa_{0,m}'\\
a_{i,m+1} &= (m+1)(n-x)a_{i,m}+x(a_{i,m}'+a_{i-1,m}),\quad \text{for all } 1\leq i \leq m\\
a_{m+1,m+1} &= x a_{m,m}
\end{align*}$$
I could go further and try to solve these recurrences if I have time. |
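For what it's worth, here is a small symbolic sketch (my own verification code; the helper name `advance` is made up) that iterates these recurrences with sympy from the seed values $A_1=-n$, $a_{0,1}=n-x$, $a_{1,1}=x$:

```python
import sympy as sp

x, n = sp.symbols('x n')

# Seed values read off from the m = 1 relation.
A = {1: -n}
a = {(0, 1): n - x, (1, 1): x}

def advance(m):
    """Apply the coefficient recurrences once, producing the data for m + 1."""
    A[m + 1] = sp.expand(-n * (m + 1) * A[m])
    a[(0, m + 1)] = sp.expand((m + 1) * (n - x) * a[(0, m)] + x * sp.diff(a[(0, m)], x))
    for i in range(1, m + 1):
        a[(i, m + 1)] = sp.expand((m + 1) * (n - x) * a[(i, m)]
                                  + x * (sp.diff(a[(i, m)], x) + a[(i - 1, m)]))
    a[(m + 1, m + 1)] = sp.expand(x * a[(m, m)])

for m in (1, 2):
    advance(m)

print(A[3])
print([a[(i, 3)] for i in range(4)])
```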
How to factor cubics having no rational roots | Note that your polynomial is not monic but has leading coefficient $-8$. Thus, by the rational root theorem, you have to test
$$
\pm 1, \pm 3, \pm \frac 1 2, \pm \frac 3 2, \pm \frac 1 4, \pm \frac 3 4, \pm \frac 1 8, \text{ and } \pm \frac 3 8
$$
to find all rational roots of $-8x^3 + 8x - 3 = 0$.
One of these will indeed be a root and long division will leave you with a quadratic equation for the other two roots. |
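If you want to run through the candidate list mechanically, here is a small sketch using exact arithmetic (my own code, not part of the answer):

```python
from fractions import Fraction

def f(x):
    """The cubic -8x^3 + 8x - 3, evaluated exactly."""
    return -8 * x**3 + 8 * x - 3

# Rational root theorem: p divides the constant term 3, q divides the leading 8.
candidates = {Fraction(s * p, q) for p in (1, 3) for q in (1, 2, 4, 8) for s in (1, -1)}
print([c for c in sorted(candidates) if f(c) == 0])   # [Fraction(1, 2)]
```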
Limits of function and squeeze theorem | $18(\frac{2}{3})^n = 18\cdot\frac{2^n}{3^n}$, which is always at least as large as $\frac{2^n}{n!}$: for $n>6$ we have $3^n < n!$, hence $\frac{2^n}{3^n}>\frac{2^n}{n!}$ there, and for $n\le 6$ the factor $18$ ensures $18\cdot\frac{2^n}{3^n}>\frac{2^n}{n!}$. |
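A quick numerical check of that bound (my own sketch, not part of the original answer):

```python
from math import factorial

# Verify 18*(2/3)^n > 2^n/n! for the first few n; for n > 6 this follows from 3^n < n!.
for n in range(1, 30):
    assert 18 * (2 / 3) ** n > 2 ** n / factorial(n)
print("18*(2/3)^n dominates 2^n/n! for n = 1, ..., 29")
```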
What is the possible minimum dimension for this linear space | Choose two non-zero and linearly independent vectors $w$ and $z$ in $V$ and put $v_1 = w$, $v_2 = v_3 = v_4 = v_5 = z$. Then the condition is satisfied. |
Inequality with two unknowns and two real parameters | If $a \ge 1 $ and $b \ge 1$ we have
$(x_1 - x_2)^2 + (a - 1)x_1^2 + (b - 1)x_2^2 \ge 0$
since $(x_1 - x_2)^2 \ge 0$, $(a - 1)x_1^2 \ge 0 $ and $(b - 1)x_2^2 \ge 0$. |
linear and a projection | Hint:
$$P^2f=P\left(\frac{f(x)+f(-x)}2\right)=\frac12\left(Pf(x)+Pf(-x)\right)=$$
$$\frac12\left(\frac{f(x)+f(-x)}2+\frac{f(-x)+f(-(-x))}2\right)=\;\ldots$$ |
Is there a closed-form or combinatorial proof for the 'multisections' of the binomial series $\sum_{k=0}^{n-1}\binom{n}{k}$? | With the adjustment mentioned in the comments (adding 1 to $S_{m,0}(n)$), we have that $S_{m,j}(n)$ is the number of ways to color $mn$ distinguishable objects black and white so that the number of black objects is congruent to $j \pmod m.$
It turns out there's no particular reason to restrict attention to the case where the number of objects is divisible by $m$, so let's talk instead about $\tilde S_{m,j}(n)$, defined to be the number of ways to color $n$ distinguishable objects black and white so that the number of black objects is congruent to $j \pmod m$. So we have $$S_{m,j}(n) = \tilde S_{m,j}(mn).$$
If we're fixing $m$ there's a nice recursive relationship between these: $$\tilde S_{m,j}(n+1) = \tilde S_{m,j}(n) + \tilde S_{m,j-1}(n).$$
That's because given a coloring of $n+1$ balls black or white with $j \pmod m$ black balls, either the last ball is white and the coloring of the first $n$ balls has $j \pmod m$ black balls, or the last ball is black and the coloring of the first $n$ balls has $j-1 \pmod m$ black balls. (Here, $j-1$ is evaluated modulo $m$, of course.)
For a fixed $m$, the set of these observations for all $j$ combines into one matrix equation:
$$
\pmatrix {\tilde S_{m,0}(n+1) \\ \tilde S_{m,1}(n+1) \\ \vdots \\ \tilde S_{m,m-1}(n+1)} =
\pmatrix {1 & 0 & 0 & \dots & 0 & 1 \\ 1 & 1 & 0 & \dots & 0 & 0 \\ 0 & 1 & 1 & \dots & 0 & 0 \\ \vdots \\ 0 & 0 & 0 & \dots &1 & 1 }
\pmatrix {\tilde S_{m,0}(n) \\ \tilde S_{m,1}(n) \\ \vdots \\ \tilde S_{m,m-1}(n)}
$$
So,
$$
\pmatrix {\tilde S_{m,0}(n) \\ \tilde S_{m,1}(n) \\ \vdots \\ \tilde S_{m,m-1}(n)} =
\pmatrix {1 & 0 & 0 & \dots & 0 & 1 \\ 1 & 1 & 0 & \dots & 0 & 0 \\ 0 & 1 & 1 & \dots & 0 & 0 \\ \vdots \\ 0 & 0 & 0 & \dots &1 & 1 } ^ n
\pmatrix {1 \\ 0 \\ \vdots \\ 0}
$$
So diagonalizing that matrix should give you explicit formulas for each $\tilde S_{m, j}(n)$ as a linear combination of exponential terms in $n$ where the bases of the exponent are the eigenvalues of the matrix.
Diagonalizing it is pretty easy: if $M$ is that matrix and $R$ is the rotation matrix
$$R = \pmatrix {0 & 0 & 0 & \dots & 0 & 1 \\ 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 1 & 0 & \dots & 0 & 0 \\ \vdots \\ 0 & 0 & 0 & \dots &1 & 0 }, $$
we have $M = I + R$ and the eigenvalues of $R$ are precisely the $m$'th roots of unity. That is, the eigenvalues of $M$ take the form $\lambda_0 = 2, \lambda_1 = 1+\zeta, \lambda_2 = 1+\zeta^2, ..., \lambda_{m-1} = 1+\zeta^{m-1}$ where $\zeta$ is a primitive $m$'th root of unity.
So, explicitly we expect for each $j$ there should be some coefficients $C_{j, k}$ for which
$$\tilde S_{m,j}(n) = C_{j,0} 2^n + C_{j, 1} (1+\zeta)^n + \dots + C_{j, m-1} (1+\zeta^{m-1})^n$$
In your case, we have
$$S_{m,j}(n) = C_{j,0} 2^{mn} + C_{j, 1} (1+\zeta)^{mn} + \dots + C_{j, m-1} (1+\zeta^{m-1})^{mn}.$$
The thing that makes things nice in the examples you're looking at ($m = 2, 3, 4, 6$) is the fact that the $m$'th power of one plus an $m$'th root of unity is nice for those values of $m$. That is, for a primitive $m$'th root of unity $\zeta$ with $m$ in that set, $(1+\zeta^k)^m$ is always just an integer. This is not true for general $m$.
If you like, equivalently, the matrix $M^m$ has particularly nice eigenvalues for those values of $m$.
Uniformity
You mentioned that these are roughly uniform. That's because the eigenvector of $M$ corresponding to the largest (in absolute value) eigenvalue $2$ is the uniform vector $(1, \dots, 1)$. All the other exponential terms are relatively small.
Random walks
Another interpretation of this is that $\tilde S_{m, j}(n)$ is the number of walks of length $n$ on $\mathbb Z / m \mathbb Z$ which land on $j$, where each step in the walk either leaves you where you started or moves you one step up in $\mathbb Z / m \mathbb Z$. Or, if you normalize, $\tilde S_{m, j}(n) / 2^n$ is the probability that a random walk in $\mathbb Z / m \mathbb Z$ ends up at $j$ after $n$ steps, where the steps in the random walk are chosen uniformly from those two options.
Then the normalized vector
$$ 2^{-n} \pmatrix {\tilde S_{m,0}(n) \\ \tilde S_{m,1}(n) \\ \vdots \\ \tilde S_{m,m-1}(n)} $$
just keeps track of the probability distribution of your location in $\mathbb Z / m \mathbb Z$ after $n$ steps of this random walk, and the normalized matrix $M/2$ is the transition matrix for this random walk.
This should also make it fairly clear that the distribution should approach a uniform one, since there's nothing in the random walk that tends to keep you in near one element of $\mathbb Z / m \mathbb Z$ over another.
More combinatorial approach
For the nice values of $m$ in the question, $M^m$ diagonalizes over the rational numbers. With enough effort, it should be possible to make this observation into a more combinatorial argument for the identities that you're talking about. (There's lots of things you could mean by "combinatorial", e.g. I consider the above to already be fairly combinatorial, especially in terms of the "random walk" interpretation above.) You can be explicit about the diagonalization, and write down some combinatorial interpretations of the individual terms in the diagonalization. You might need to do messy things like putting negative terms on one side of an equation and positive terms on another, and construct bijections recursively and hope that those bijections turn out to be something nice. I don't think the outcome of this process is likely to be worth the effort. It might be easier to work with $M^{2m}$ instead, since then the eigenvalues will all be positive.
The combinatorial things you put together this way can only exist and be meaningful for these special values of $m$, so I don't think they'll give that much insight over the more abstract approach. I think the more abstract approach (with a concrete interpretation in terms of random walks) is "the right way to think about it".
Edit: One exciting form of argument that might exist here: For those values $m$, it might be relevant that $\mathbb Q[\zeta]$ is a degree 2 extension of $\mathbb Q$, and that there's a corresponding rank 2 lattice of algebraic integers. Maybe the walks in $\mathbb Z / m \mathbb Z$ have some interesting combinatorial relationship to this lattice (and walks in this lattice?). |
Group Theory (Abstract Algebra) question! o(G) vs o(g)?? | Typically $$o(g)=\min\{n\in\mathbb{N}:\,g^n=e\}$$ is the order of the element $g$, whereas $$o(G)=\sum_{x\in G}{1}$$ is the number of elements in the group $G$ as a whole. |
How to prove that $W=Y$, where $Y=\bigcap\{U:U \text{ is a subspace of }V, \text{dim } U=n-1, W\subset U\}$. | The inclusion $W\subseteq Y$ is immediate.
Let $z\in V$ such that $z\not\in W$. Fix a basis $(e_1,\ldots,e_r)$ of $W$. Then the family $(e_1,\ldots,e_r,z)$ is linearly independent, so we can complete it to a basis of $V$, say $(e_1,\ldots,e_r,z,e_{r+2},\ldots,e_n)$.
Now let $U$ be the subspace of $V$ generated by $e_1,\ldots,e_r,e_{r+2},\ldots,e_n$; it is a subspace of dimension $n-1$ containing $W$, and $z\notin U$, hence $z\notin Y$.
Conclusion: $z\notin W\Rightarrow z\notin Y$, that is, $Y\subseteq W$. |
Prove by induction that $0 < a_n < a_{n + 1} < b_{n + 1} < b_n$ for $a_{n + 1} = \sqrt{a_nb_n}$ and $b_{n + 1} = \frac 12(a_n + b_n)$. | As everything is positive here
$$a_1<a_2\iff a_1<\sqrt{a_1b_1}\iff a^2_1<a_1b_1\iff a_1(b_1-a_1)>0\;\color{red}\checkmark$$
Do something similar for $\;b_n\;$ |
Is empty language a singly capacitated regular language? | You seem to confuse several things
(1) The empty language can be defined on any alphabet, empty or not.
(2) A DFA is not necessarily minimal and not necessarily connected.
(3) A word is accepted if it is the label of a successful path, that is, a path starting in the initial state and ending in some final state.
If you read this definition carefully, you will conclude that the DFA represented in your picture accepts the empty language. |
Is the matrix $A$ diagonalizable if $A^2=I$ | Since $A^2=I$, then $A$ satisfies the polynomial $t^2-1 = (t-1)(t+1)$. Hence, the minimal polynomial of $A$ divides $(t-1)(t+1)$; so the minimal polynomial of $A$ splits and has distinct roots, so $A$ is diagonalizable.
As N.S. points out in the comments, the above fails if you are working in characteristic 2. There, the matrix
$$A=\left(\begin{array}{cc}
0 & 1\\
1 & 0
\end{array}\right)$$
has minimal and characteristic polynomials $t^2+1 = (t+1)^2$, and it is not diagonalizable (the eigenspace of $1$ has dimension $1$). But if $1\neq -1$, you are set. |
How prove this integral limit is exsit $\lim_{\varepsilon\to 0^{+} }f(x,y)dxdy$ | 1) Without loss of generality we can take the region $D$ to be the following:
$$
D = \{(x,y): (x-x_0)^2+(y-y_0)^2 \leqslant R^2 \},
$$
where $R > 0$. This does not change anything essential; the argument for an arbitrary domain $D$ is the same.
Let us now consider the calculation of $A$. The integral becomes simple if we use polar coordinates centered at $(x_0,y_0)$:
$$
x = x_0 + \sigma \cos \phi\\
y = y_0 + \sigma \sin \phi
$$
For the region $D\backslash B_\varepsilon (P_0)$ it means that $\phi \in [ 0, 2\pi)$ and $\sigma \in [\varepsilon, R]$. Now let's make the substitution:
$$
A = \lim\limits_{\varepsilon \to +0} \int\limits_{D\backslash B_\varepsilon (P_0)} \frac{\frac{\partial f}{\partial x}\cdot \sigma \cos \phi + \frac{\partial f}{\partial y} \cdot \sigma \sin \phi}{\sigma^2} \sigma d\sigma d\phi = \lim\limits_{\varepsilon \to +0} \int\limits_{D\backslash B_\varepsilon (P_0)} [ \frac{\partial f}{\partial x}\cdot \cos \phi + \frac{\partial f}{\partial y} \cdot \sin \phi ] d\sigma d\phi
$$
Here we keep in mind that the substitution has also been carried out in the partial derivatives; it is omitted to avoid overloading the notation.
It is clear that the limit exists, since the function $\frac{\partial f}{\partial x}\cdot \cos \phi + \frac{\partial f}{\partial y} \cdot \sin \phi$ is continuous (as $f$ is continuously differentiable) and bounded, so the integrals over $D\backslash B_\varepsilon (P_0)$ converge as $\varepsilon \to +0$.
Ergo we have just proven the first part. Let's move to the second one.
2) I suggest the following notation in this part: $K_\varepsilon$ is the boundary of $B_\varepsilon(P_0)$, A is an arbitrary point on the boundary $\partial D$, and B is the point on the boundary $\partial K_\varepsilon$ such that A, B and $P_0$ lie on the same line. Let's consider the contour:
$$
\Gamma = \partial D + AB + \partial K^-_\varepsilon + BA
$$
(Remember the contours used in the theory of complex variables?) Let's apply Cauchy's integral theorem to the domain bounded by $\Gamma$; since the contributions of $AB$ and $BA$ cancel, we obtain:
$$
\oint\limits_\Gamma = \oint\limits_{\partial D} + \oint\limits_{\partial K^-_\varepsilon}
$$
First let's calculate:
$$
\lim\limits_{\varepsilon \to +0}\oint\limits_{\partial K^-_\varepsilon} \frac{f(x,y)}{(x-x_0)^2+(y-y_0)^2}[(x-x_0)dy - (y-y_0) dx] =\\ -\lim\limits_{\varepsilon \to +0} \int\limits_0^{2\pi} \frac{f(x_0+\varepsilon \cos \phi, y_0 + \varepsilon \sin \phi)}{\varepsilon^2} [\varepsilon^2 \cos^2\phi + \varepsilon^2 \sin^2\phi]d\phi = \\-\lim\limits_{\varepsilon \to +0} \int\limits_0^{2\pi} f(x_0+\varepsilon \cos \phi, y_0 + \varepsilon \sin \phi)d\phi = -2\pi f(x_0,y_0).
$$
Now let's move to the integral over the curve $\Gamma$ (using Green's theorem):
$$
\lim\limits_{\varepsilon \to +0}\oint\limits_{\Gamma} \frac{f(x,y)}{(x-x_0)^2+(y-y_0)^2}[(x-x_0)dy - (y-y_0) dx] = \\
\lim\limits_{\varepsilon \to +0} \int\limits_{D\backslash B_\varepsilon (P_0)} \frac{\frac{\partial f}{\partial x}(x-x_0) + \frac{\partial f}{\partial y}(y-y_0)}{(x-x_0)^2+(y-y_0)^2} dx dy = A
$$
Substituting the calculated integrals into Cauchy's theorem, one gets:
$$
f(x_0, y_0) = \frac{1}{2\pi} \oint\limits_{\partial D} \frac{f(x,y)}{(x-x_0)^2+(y-y_0)^2}[(x-x_0)dy - (y-y_0) dx] - \frac{A}{2\pi}.
$$
I hope the proof was understandable. |
Demonstrating that constructed solution to PDE satisfies initial condition | By Aubin--Lions you have that $u_n\to u$ strongly in $L^\infty(0,T;L^2(\Omega))$, hence the second integral can be made arbitrarily small for large $n$ independent of $m$, as $f_n\to0$ in $L^\infty(0,T;L^2(\Omega))$. |
If $x, y, z \in (0, 1)$ and $x+y+z = 2$, prove that $8(1-x)(1-y)(1-z) \leq xyz$. | As $0 \leq x, y, z \leq 1, (1-x), (1-y), (1-z) \geq 0$
Using the AM-GM inequality,
$(1-x) + (1-y) = z \geq 2 \sqrt{(1-x)(1-y)} \ $ (as $ \ x + y + z = 2$)
Similarly, $x \geq 2\sqrt{(1-y)(1-z)}$ and $y \geq 2\sqrt{(1-x)(1-z)}$.
Multiplying them, we have
$xyz \geq 8 (1-x)(1-y)(1-z)$ |
What is $\limsup_{n\to\infty} \frac{p_{n+1}}{p_n}$? | It follows from the Prime Number Theorem that
$$
\lim_n \frac{p_{n+1}}{p_n}=\lim_n\left( \frac{p_{n+1}}{(n+1)\ln(n+1)}\cdot \frac{n\ln(n)}{p_n}\cdot \frac{(n+1)\ln(n+1)}{n\ln(n)}\right)=1 \cdot 1 \cdot 1
$$
Now, since $\lim_n \frac{p_{n+1}}{p_n}=1$, the limit superior coincides with this limit, so $\limsup_{n\to\infty} \frac{p_{n+1}}{p_n}=1$. |
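A quick numerical illustration of this (my own sketch, using sympy's `prime` for the $n$-th prime):

```python
from sympy import prime

# The ratio of consecutive primes drifts towards 1, as the PNT argument predicts.
for n in (10, 100, 1_000, 10_000):
    p, q = prime(n), prime(n + 1)
    print(n, q / p)
```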
Show that this lengthy integral is equal to $\int_{0}^{\infty}{1\over x^8+x^4+1}\cdot{1\over x^3+1}\mathrm dx={5\pi\over 12\sqrt{3}}$ | We want to compute:
$$ I = \int_{0}^{1}\frac{x^4-1}{x^{12}-1}\cdot\frac{\mathrm dx}{x^3+1}+\int_{0}^{1}\frac{x^{8}-x^{12}}{1-x^{12}}\cdot\frac{x\,\mathrm dx}{x^3+1}=\int_{0}^{1}\frac{1-x^3+x^6}{1+x^4+x^8}\,\mathrm dx $$
that is:
$$ I = \int_{0}^{1}\left(1-x^3-x^4+x^6+x^7-x^{10}\right)\frac{\mathrm dx}{1-x^{12}} $$
or:
$$ I=\sum_{k\geq 0}\left(\frac{1}{12k+1}-\frac{1}{12k+4}-\frac{1}{12k+5}+\frac{1}{12k+7}+\frac{1}{12k+8}-\frac{1}{12k+11}\right). $$
By the reflection formula for the digamma function we have:
$$ \sum_{k\geq 0}\left(\frac{1}{12k+\tau}-\frac{1}{12k+(12-\tau)}\right)=\frac{\pi}{12}\,\cot\frac{\pi \tau}{12} $$
and the wanted result follows from $\cot\left(\frac{\pi}{12}\right)=2+\sqrt{3}$, $\cot\left(\frac{5\pi}{12}\right)=2-\sqrt{3}$ and $\cot\left(\frac{\pi}{3}\right)=\frac{1}{\sqrt{3}}$. |
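A numerical sanity check of the closed form (my own sketch, using mpmath's adaptive quadrature):

```python
from mpmath import mp, quad, inf, pi, sqrt

mp.dps = 30
integrand = lambda x: 1 / ((x**8 + x**4 + 1) * (x**3 + 1))
numeric = quad(integrand, [0, inf])
closed = 5 * pi / (12 * sqrt(3))
print(numeric)   # should agree with the closed form to many digits
print(closed)
```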
Radius of Convergence of a Finite Sum | Our $f(x)$ is a polynomial and it is given by its Maclaurin expansion. Since the domain of a polynomial is $\Bbb R$, the radius of convergence is infinite. Formally you could apply the Cauchy-Hadamard theorem: because almost all terms of this expansion are zero, the root limit $\lim\limits_{n\to\infty}\sqrt[n]{|a_n|}$ vanishes. This means that the radius of convergence is infinite. |
Large deviation theory--examples of irregular sets | Let $(B_t)_{t \geq 0}$ be a Brownian motion. It is well-known that the scaled process $X_t^{\epsilon} := \sqrt{\epsilon} B_t$ satisfies a large deviation principle in $C[0,1]$ as $\epsilon \to 0$ with (good) rate function
$$I(f) := \begin{cases} \frac{1}{2} \int_0^1 |f'(s)|^2 \, ds, & f(0)=0, \, \text{$f$ absolutely continuous}, \\ \infty, & \text{otherwise} \end{cases}$$
If we set $$A := \{f: [0,1] \to \mathbb{R}; f(0) \neq 0\},$$ then $A$ is open in $C[0,1]$ and therefore
$$\inf_{f \in A} I(f) = \inf_{f \in \text{int} \, A} I(f) = \infty.$$
On the other hand, it is not difficult to see that
$$\inf_{f \in \text{cl} \, A} I(f) = 0.$$ |