title | upvoted_answer
---|---|
What is the difference between the boundary set and a bounded set? | Take the following example: in the Euclidean plane, the open ball of radius $1$ centered at $(0,0)$ is a bounded set, because each point $\mathbf{x}$ in it satisfies $\lVert \mathbf{x} \rVert < 1$. However, the set contains none of its boundary points: because $\lVert \mathbf{x} \rVert < 1$, you can write
$$
\lVert \mathbf{x} \rVert = 1 - \epsilon, \quad \epsilon \in (0,1].
$$
Thus every point of the set is the center of an open ball wholly contained in the set: for $\mathbf{x}$, the open ball centered at it of radius $\frac{\epsilon}{2}$ is always fully contained in the set, so $\mathbf{x}$ is an interior point rather than a boundary point.
The boundary of a set consists of all the points $\mathbf{x}$ with the following property: every open ball centered at $\mathbf{x}$ intersects both the set and its complement. |
Examples for Burnside problem. | In the class of so-called automaton groups there are multiple examples, such as the Gupta-Sidki groups (On the Burnside problem for periodic groups, 1983), Brieussel's example (An automata group of intermediate growth and exponential activity, Journal of Group Theory), or another construction of Grigorchuk on a larger alphabet leading to p-groups (Degrees of growth of p-groups and torsion-free groups, 1985; see also https://arxiv.org/pdf/math/0005113.pdf for some generalizations).
Recently, Nekrashevych gave explicit examples of infinite simple torsion groups (which have in addition intermediate growth) https://arxiv.org/pdf/1601.01033.pdf
In addition, a quick look at the Wikipedia page https://en.wikipedia.org/wiki/Burnside_problem shows that the Burnside problem was originally solved by Golod and Shafarevich in 1964, and that later Novikov and Adian proved that, for odd $m$ large enough ($>4381$), there exist groups
$$B(k,m) = \langle a_1, ..., a_k | w^m =\mathbb{1} \:\:\forall w \in \{a_1, ..., a_k\}^*\rangle$$
(i.e. the groups where the $m$-th power of any word is trivial)
which are infinite.
This has been extended by Ivanov and Ol'shanskii to the case of even $m$. |
Work done to pump water out of a conical tank into a window above it | To draw the problem in a coordinate system, choose a positive $x$ axis in the downward direction since the motion is vertical. Take the window at $x = 0$, the top of the tank at $x = 10$ and the bottom of the tank at $x = 15$. The water therefore occupies the closed interval $[10,15]$ on the $x$ axis.
The radius of the cone at $x_i$ is $9-3x_i/5$. Take an element of volume to be a circular disk having thickness $\Delta_i x$ and the radius of the cone at $x_i$; its volume is $\Delta_i V = \pi (9-3x_i/5)^2 \Delta_i x$. With $62.4$ as the weight of water in pounds per cubic foot, the force required to lift an element is $\Delta_i F = 62.4 \pi (9-3x_i/5)^2 \Delta_i x$.
If $\Delta_i x$ is close to $0$, then the distance through which an element moves is approximately $x_i$. Thus the work done pumping an element to the window is $\Delta_i W = 62.4 \pi x_i (9-3x_i/5)^2 \Delta_i x$ and the total work is the integral:
$$
W = 62.4 \pi \int_{10}^{15} \! x(9-3x/5)^2 \, \textrm{d}x
$$ |
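A quick check of the work integral above, sketched in Python (assuming sympy is available):

```python
from sympy import symbols, integrate, pi, Rational

x = symbols('x')
W = 62.4 * pi * integrate(x * (9 - Rational(3, 5) * x)**2, (x, 10, 15))
print(W, float(W))   # exact multiple of pi and its numerical value in foot-pounds
```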
How to prove that the difference between two consecutive squares is odd? | First of all, you probably meant $p,q \in \textbf{Z}$. Now, if the smallest of two numbers is $n$, then you are interested in $(n+1)^2-n^2=n^2+2n+1-n^2=2n+1$ - which is odd.
Note: to avoid considering all cases, as you asked, you can say (without loss of generality) that $n$ is the smaller of the two numbers, which gives you that $n+1$ is the second number. |
Prove this claim about language and structures. | Basically, we can base a proof of this fact on building a truth table for $\psi$ in terms of the sub-formulae $\phi_1, \ldots, \phi_n$.
We have to pick out the rows of the truth table for which $\psi$ is evaluated to $TRUE$ and then write a "long" conjunction with $\phi_i$ if in that row it has the value $TRUE$, and $\lnot \phi_i$ if in that row it has the value $FALSE$.
This accounts for:
$$\left(\bigwedge_{i \in X} \phi_i \wedge \bigwedge_{i \notin X} \neg \phi_i \right).$$
Each row corresponding to $TRUE$ under $\psi$ accounts for one of the "long" conjunctions above, and we have to build a "long" disjunction of all of them.
This accounts for:
$$\bigvee_{X \in S}\left(\bigwedge_{i \in X} \phi_i \wedge \bigwedge_{i \notin X} \neg \phi_i \right).$$
See Disjunctive normal form.
Examples
1) Consider $\psi := \lnot (\phi_1 \rightarrow \lnot \phi_2)$ and build up the truth table:
\begin{array}{cc|cc|c}\phi_1&\phi_2&\lnot \phi_2&\phi_1 \to \lnot \phi_2&\psi\\\hline T&T&F&F&T\\T&F&T&T&F\\F&T&F&T&F\\F&F&T&T&F\end{array}
You can see that :
$\psi \Leftrightarrow (\phi_1 \land \phi_2)$.
In this trivial case we have : $n=2$ and thus $\mathcal P( \{1,2 \}) = \{ \emptyset, \{ 1 \}, \{ 2 \}, \{ 1, 2 \} \}$.
$X= \{ 1, 2 \}$ is the only element of $S$, because we have only one row of the truth table evaluated to $TRUE$.
Thus, the only "long" conjunction is made with all the sub-formulae which in that row are evaluated to $TRUE$.
2) Consider instead $\psi := \phi_1 \rightarrow \lnot \phi_2$; now we have that $\psi$ is evaluated to $TRUE$ in three rows of the above truth table.
Thus we have :
$\psi \Leftrightarrow [(\phi_1 \land \lnot \phi_2) \lor (\lnot \phi_1 \land \phi_2) \lor (\lnot \phi_1 \land \lnot \phi_2)]$.
In this case we have that $S$ has three elements (we have three $X$s): $S = \{ \{ 1 \}, \{ 2 \}, \emptyset \}$.
You can try a slightly more complex case, with a boolean combination of $\phi_1, \phi_2, \phi_3$. |
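As an illustration, here is a small Python sketch (the names are my own) that builds the disjunctive normal form described above from the truth table of an arbitrary Boolean function:

```python
from itertools import product

def dnf(psi, n):
    """Return a DNF string for psi, a function of n Boolean arguments,
    built row by row from its truth table as described above."""
    terms = []
    for row in product([True, False], repeat=n):
        if psi(*row):
            lits = [f"phi_{i+1}" if v else f"~phi_{i+1}" for i, v in enumerate(row)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms) if terms else "False"

# Example 2 from the answer: psi = phi_1 -> ~phi_2
print(dnf(lambda p1, p2: (not p1) or (not p2), 2))
```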
Condition number of $A^TA$ | I assume you mean the condition number defined by the Euclidean norm, i.e.
$$
\kappa = \|A\|\cdot \|A^{-1}\|
$$
Note also that $\|A\| = \|\sqrt{A^TA}\|$, i.e. the singular values of $A^TA$ are the squares of the singular values of $A$. Conclude that $\kappa(A^TA) = \kappa(A)^2$. |
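A quick numerical sanity check of this relation (numpy assumed; `np.linalg.cond` uses the 2-norm by default):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
print(np.linalg.cond(A) ** 2)      # kappa(A)^2
print(np.linalg.cond(A.T @ A))     # kappa(A^T A), should match the line above
```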
Finding Min/Max Of $f(x,y)=x^3+y^3-3x-12y+20$ | Your work seems fine to me. If $(x,y)$ is a (local) maximum or minimum of a differentiable function defined in an open set, then $\nabla f(x,y) = (0,0)$. So the critical points $(1,2)$, $(-1,2)$, $(1,-2)$ and $(-1,-2)$ you found are candidates for (local) maxima or minima.
If $f$ is twice differentiable, then you can use the Hessian to classify the critical points previously found - which you did pretty much correctly. But since this analysis is only local, you only have that $(1,2)$ is a local minimum and $(-1,-2)$ is a local maximum.
There is no global minimum or global maximum, since $\lim_{x \to \pm \infty} f(x,0) = \pm \infty$. |
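A short sympy sketch of the same classification (assuming sympy is available):

```python
from sympy import symbols, hessian, solve

x, y = symbols('x y', real=True)
f = x**3 + y**3 - 3*x - 12*y + 20
crit = solve([f.diff(x), f.diff(y)], [x, y], dict=True)
H = hessian(f, (x, y))
for pt in crit:
    print(pt, H.subs(pt).eigenvals())
    # all eigenvalues positive -> local min, all negative -> local max, mixed -> saddle
```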
Solving for $x$ in an inequality | You can add or subtract the same quantity from both sides of an inequality. In this case, I suggest adding $\frac{1}{x+2}$ to both sides, so you get $0$ on the right. You'll have two fractions on the left, which you need to combine by finding a common denominator. Then you'll have a single rational expression on the left, and you only need to determine which values of $x$ make it negative.
Does that all make sense? |
Quick question about grammars | Your answer is correct, that only grammar D generates the required language. And your explanations are mostly correct, except for a few minor errors.
For grammar B, you said “the possible combinations all commence with 0”, which is almost true, but grammar B also generates the string 1. But as you said, it fails to derive 11, so it isn't what is wanted. (It also derives 001, 00001, and so on.)
For grammar C, you said “derivations all end with a letter”. Usually some of the symbols are designated as “terminal” and the rest as “non-terminal”, and the language only includes sequences of terminal symbols. Sequences that include non-terminal symbols are unfinished. Sometimes the lists of terminal and non-terminal symbols are given explicitly, but when they aren't as in this case, you are supposed to assume that capital letters like A, B, and S are non-terminal, and digits and lowercase letters are terminal. All of this is by way of saying that the language derived by grammar C is actually empty and contains no strings at all: there is no way to finish any derivation!
But your idea of what is going on seems generally correct, and you did get the right answer to this tricky-seeming question. |
Joint Probability function in expected value of sum of 2 random variable and product of 2 random variable | In general, if you have two random variables $X$ and $Y$ with joint density $f(x,y)$, then the expectation of the random variable $g(X,Y)$ is given by
$$
E(g(X,Y))=\iint g(x,y)f(x,y)dxdy\tag{0}
$$
This is sometimes called the Law of the unconscious statistician.
Notes 1.
The "density" $f_{X+Y}(x,y)$ does not makes sense. Similar for the other one. If you write $Z=X+Y$, then this is one random variable. It expectation is given by
$$
E(Z)=\int zf_Z(z)dz
$$
if you know the density function of $Z$. The mentioned law above allows you to calculate the expectation without knowing the density $f_Z$.
Notes 2.
Given two random variables $X$ and $Y$ and a function $g:\mathbb{R}^2\to\mathbb{R}$ (with good properties), you can define another random variable $Z$ by $Z=g(X,Y)$. For instance, if $g(x,y)=x+y$, then you have $Z=X+Y$; if $g(x,y)=xy$, you have $Z=XY$.
By definition of the expectation, for any continuous random variable $W$ with density function $f_W$, you have
$$
E(W)=\int xf_W(x)dx:=\int_{-\infty}^\infty xf_W(x)dx\tag{1}
$$
So in the example of $Z=X+Y$, there are three ways to express its expectation.
1. You can use the linearity of expectation to get
$$
E(X+Y)=E(X)+E(Y)=\int xf_X(x)dx+\int yf_Y(y)dy
$$
where $f_X$ and $f_Y$ are the density functions for $X$ and $Y$. Note that in the second integral, $y$ is a dummy variable; you can also write it as $\int xf_Y(x)dx$.
2. You can use (0):
$$
E(X+Y)=\iint (x+y)f(x,y)dxdy
$$
where $f$ is the joint density function of $X$ and $Y$.
3. If you denote $f_Z$ as the density function for $Z=X+Y$, then
$$
E(X+Y)=\int zf_Z(z)dz
$$
In these three ways, the functions $f$, $f_X$, $f_Y$, $f_Z$ are related. |
Set with infinitely many limit points not contained in S | For example
$$S = \left\lbrace k + \frac{1}{n} : k,n \in \mathbb N \text{ and } n \geq 3 \right\rbrace $$ |
How do you factor this? $x^3 + x - 2$ | By inspection, we see that $1$ is a root of $x^3 + x - 2$, so it is divisible by $x - 1$; alternatively, the rational roots theorem would suggest this too. Dividing, $x^3 + x - 2 = (x-1)(x^2 + x + 2)$.
Now $x^2 + x + 2 = x^2 + x + \frac{1}{4} + \frac{7}{4} = (x + \frac{1}{2})^2 + \frac{7}{4}$ has no real roots, and is irreducible. If you're factoring over $\Bbb{C}$, then it's got roots at $\pm \sqrt{\frac 7 4}i - \frac{1}{2}$; denoting these as $\alpha_+$ and $\alpha_-$, the original polynomial then splits as
$$(x - 1)(x - \alpha_+)(x - \alpha_-)$$ |
Calculating Flux Across a Simple Closed Curve lying on an (x,z)-cylinder | The statement is not true. Take the "$(x,z)$" cylinder $x^2+(z-1)^2=1$ for instance. Computing the surface integral of the curl over a slice of constant $y$ gives us
$$\iint\limits_{x^2+(z-1)^2\le1}z\hat{j}\cdot \hat{j}\,dS = \int_0^\pi \int_0^{2\sin\theta}r^2\sin\theta\:dr\:d\theta = \frac{8}{3} \int_0^\pi \sin^4\theta \:d\theta \neq 0$$ |
A region in the plane that has to intersect unit circle | Yes, it would have to contain a point both inside and outside the circle.
Let $P$ be a point of the circle and $U_\epsilon$ an epsilon neighborhood of $P$ centered at $P$ and contained in the region.
Then the ray from the origin containing $P$ contains points of the region at distances $1-\frac{\epsilon}{2}$ and $1+\frac{\epsilon}{2}$ from the origin. |
What does $\propto$ mean? | It means proportional as a function of two variables $x$ and $k$ with $y$ fixed.
You have a prior probability density as a function of $x$ and $k$, which is just a product of a function of $x$ and a function of $k$, so that the random variables involved are independent in the prior distribution. Then new data arrives: a random variable is observed to be equal to a number $y$. The conditional density of that random variable given that the first two were equal to $x$ and $k$, is the first factor in the expression on the right.
The expression on the left is the conditional density of the random variables corresponding to $x$ and $k$ given that observed value equal to $y$.
Since it means proportional as a function of $x$ and $k$ with $y$ fixed, you need to multiply it by a normalizing constant that may depend on $y$ but does not depend on $x$ and $k$, in order to make it a probability density function, as a function of $x$ and $k$. "Constant" in this case would mean not depending on $x$ and $k$.
"Constant" always means not depending on something. Usually it's clear from the context what the "something" is, but I think it should be stated explicitly more often than it is in present-day conventional practice. Here's a favorite example of mine, involving differentiation of exponential functions:
\begin{align}
\frac{d}{dx} 2^x & = \lim_{h\to0}\frac{2^{x+h}-2^x}{h} \\[10pt]
& = 2^x\lim_{h\to0}\frac{2^h-1}{h}\tag{1} \\[10pt]
& = (2^x\cdot\text{constant})\tag{2}.
\end{align}
In $(1)$, the factor $2^x$ can be taken out of the limit because it's "constant" but "constant" means not depending on $h$.
In $(2)$, "constant" means not depending on $x$.
Some instructors in calculus classes actually present this proof without mentioning the contextual change in the meaning of "constant".
Later note in response to comments below: The linked paper uses a rather obnoxious notation, $p(x)$ and $p(k)$ for the probability density functions of two different random variables. One should distinguish between capital $X$ and lower-case $x$ in expressions like $\Pr(X=x)$, where capital $X$ is a random variable and lower-case $x$ is a particular value that $X$ might be equal to. Then, if one writes $p_X(x)$ for the value of the probability density function of a random variable (capital) $X$ at the point (lower-case) $x$, then one knows that $p_X(3)$ means something different from $p_Y(3)$.
But at any rate, $p(x)p(k)$ is the notation used in the linked paper for the joint density function of a pair of independent random variables, and $p(y\mid x,k)$ is the conditional density of another random variable given the values of those two. The idea is that if you multiply those, what you get is proportional, as a function of $x$ and $k$ with $y$ fixed, to the conditional density function of the random variables corresponding to $x$ and $k$, given an observed value of the random variable corresponding to $y$. |
Balls fitted in empty boxes | Hint: adding a ball to box 7 and removing a ball from box 6 is equivalent to moving a ball from box 6 to box 7. So, at every step, you can either
move a ball in one of the boxes to an adjacent empty box, or
add a ball to an empty box whose neighbors are empty |
If $f_n$ converges to $f$ pointwise and $f_n$ is differentiable everywhere, can $f$ be differentiable but with $f'\ne\lim f_n'$? | Yes! $f'$ may exist but $\lim f_n' \neq f'$. Here's one example: $$f_n(x)=\frac{x}{1+n^2x^2}\;;\;x\in \Bbb R$$
Here, $f_n \rightrightarrows f\equiv0$ . But $$f_n'(x)=\frac{1-n^2x^2}{(1+n^2x^2)^2} \longrightarrow \begin{cases} 0&\text{if}\;x \neq 0\\1&\text{if}\;x=0\end{cases} \neq f' \equiv0$$ |
Prove that every extension of a finite field is normal | If the field $F$ is a subfield of $K$ with $K$ finite of order $q$, then the elements of $K$ are all roots of $x^q-x \in F[x]$, so this equation has $q$ (distinct) roots in $K$, and so $K$ is its splitting field over $F$. So $K$ is a Galois extension of $F$. |
Is it hard to solve equations such as $\sin x + \sin(2x \sin x) = \sin 3x$? | Just kidding: if you expect analytical solutions for such monsters, stop dreaming!
Now, being serious: take into account that even the simple $x=\cos(x)$ has no analytical solution and needs to be solved using numerical methods.
Consider that you look for the zeros of
$$f(x)=\sin (x) + \sin(2x \sin (x)) -\sin (3x)$$ Plot the function and, visually, locate where (more or less) the root you want is (there is an infinite number of solutions). Call this value $x_0$ and start using Newton's method, which will update the guess according to
$$x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}$$
For example, there is a root "around" $x=2$. Let us start there and get the following iterates
$$\left(
\begin{array}{cc}
k & x_k \\
0 & 2.0000000000000000000 \\
1 & 2.2077878077914573503 \\
2 & 2.2372992747296569378 \\
3 & 2.2389331854770425722 \\
4 & 2.2389386369537428867 \\
5 & 2.2389386370146724403
\end{array}
\right)$$ which is the solution to twenty significant figures. |
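A minimal pure-Python version of the Newton iteration above (the derivative is written out by hand):

```python
from math import sin, cos

f  = lambda x: sin(x) + sin(2*x*sin(x)) - sin(3*x)
df = lambda x: cos(x) + cos(2*x*sin(x)) * (2*sin(x) + 2*x*cos(x)) - 3*cos(3*x)

x = 2.0                      # initial guess near the root discussed above
for k in range(6):
    x = x - f(x) / df(x)
    print(k + 1, x)          # converges to about 2.2389386370...
```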
Find value of $\lim_{n\rightarrow \infty} \sum^{(n+1)^2}_{n^2}\frac{1}{\sqrt{k}}$ | Another way:
$$
\int_{n^2}^{(n+1)^2}{1\over\sqrt x}dx=2
$$
This works since the difference between the sum and the integral approaches zero as $n\to\infty$. |
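A quick numerical check of the limit (pure Python):

```python
from math import sqrt

for n in (10, 100, 1000):
    s = sum(1/sqrt(k) for k in range(n*n, (n+1)*(n+1) + 1))
    print(n, s)   # approaches 2 as n grows
```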
Proof of symmetry for trigonometric functions | You can start from the definition $\sin(x) = \sum^\infty_{n=0}\frac{(-1)^nx^{2n+1}}{(2n+1)!}$
From there, factor $-x$ into $(-1)x$ and use $(xy)^a = x^ay^a$
Notice since $n$ is an integer you can say that $(-1)^{2n+1} = (-1)^{1}= -1$ |
Dividing data into categories through graphical analysis | Clearly each quadrant contains a "category".
Beyond that, the question is highly ambiguous. |
Accumulation points of the graph of a function | (Edited to fix a serious – and elementary – mistake.) Let $G$ be any discrete subset of $\mathbb{R}^2$. Then $G$ is countable. To see this, consider the open disk centered at each point of $G$ with radius $r/2$, where $r$ is the infimum of distances to the other points of $G$. These disks are mutually disjoint, and so their number must be countable because each contains points with rational coordinates.
Since $G$ is countable, so is its projection to the $x$-axis; therefore, every nonempty open interval $I$ contains some $x$ so that $G\cap(\{x\}\times\Bbb{R})=\emptyset$.
Now let $G$ be the graph of the function $f$. By the above result, $G$ cannot be discrete, so it contains at least one accumulation point.
To see that the set $A$ of $x$, so that $(x,f(x))$ is an accumulation point of $G$, is itself dense, let $I$ be a nonempty open interval and apply the above result to the function $f\circ\phi$, where $\phi\colon\Bbb{R}\to I$ is a homeomorphism.
Finally, any dense subset of $\mathbb{R}$ contains a countable dense subset. Applying this to $A$ finishes the argument. |
Question regarding the simple proof involving dual space and annihilator | I believe it is probably this. The convention you are talking about is to identify $x\in v$ with a functional $\overline{x}:v'\to k$ such that $\overline{x}(y)=y(x)$ for every $y\in v'$. That means that
$$[y,\overline{x}]=\overline{x}(y)=y(x)=[x,y]$$
Now, $\eta^{00}$ is the set of all vectors $\overline{y}\in v''$ such that $[x,\overline{y}]=0$ for all $x\in\eta^0$. (Note we can always see those vectors as being of the form $\overline{y}$ for some $y\in v$, because the correspondence $v\to v''$ happens to be "onto".)
Swap the role of the letters $x$ and $y$, and you get that $\eta^{00}$ is the set of all vectors $\overline{x}\in v''$ such that $[y,\overline{x}]=0$ for all $y\in\eta^0$.
Now, identify $\overline{x}$ with $x$ and you get that $\eta^{00}$ is the set of all vectors $x\in v$ such that $[x,y]=0$ for all $y\in\eta^0$. |
Proving $\lim_{(x,y)\to(0,0)}\frac{\sin^2(xy)}{x^2+y^2}=0$ | As $\sin t$ is asymptotic to $t$, you may consider
$$\frac{x^2y^2}{x^2+y^2}=\frac1{\dfrac1{x^2}+\dfrac1{y^2}}\to0.$$ |
Prove {$(x, y) \in \Bbb{R}^2 : x \notin \Bbb{Z}$} is open | This set is the inverse image of the open subset $\Bbb R\setminus \{0\}$ of $\Bbb R$ under the continuous function $f:\Bbb R^2\to \Bbb R$ given by $f(x,y)=\sin(\pi x)$. |
Is there any math function that changes the power of the denominator of an input fraction? | You can define a function as follows. Let $f(a/b) := \log(a/b^2)$ where $a$ and $b$ are relatively prime positive integers. The function $f$ "maps" positive rational numbers (i.e. fractions) to real numbers. Note that $f$ is not smooth or continuous. |
What is the difference between meters squared and square meters? | So out of curiosity, I searched for "square meters vs meters squared" and found this post. So is Olivia right?
No. That is a complete posterior extraction.
When people say "$4$ meters squared", practically without exception they mean an area equal to that of a $2\text{ m}\times 2\text{ m}$ square, exactly like "$4$ square meters". A few noobs might mistake it for $(4\text{ m})^2$, but they would quickly find out the error of their ways. |
Finding a general solution of a differential equation using the method of undetermined coefficients | You have found the particular solution, but you need to also find the complementary solution $Y_{c}(t)$, which solves the homogeneous equation (which is $y''+y'+4y=0$). The general solution is then $y=Y_{c}(t)+Y(t)$.
To find $Y_{c}(t)$, guess a solution of the form $e^{\lambda t}$ to the equation $y''+y'+4y=0$; substituting gives the characteristic equation $\lambda^2+\lambda+4=0$, whose roots $\lambda = \frac{-1\pm i\sqrt{15}}{2}$ yield two linearly independent solutions. |
Tensor Analysis On Manifolds: Question on an example manifold | We are trying to chart $\mathbb{R}^2$ with the map $\mu\colon\mathbb{R}^2\to\mathbb{R}^2$ and analyse where and how it fails.
One way of interpreting "folding" here is a topological conjugate of $(x,y)\mapsto(\lvert x\rvert, y)$ that you can imagine as folding the plane in half with the singular locus $x=0$.
When the Jacobian determinant is zero, some tangent curves with nonzero speed become instantaneously at rest. "Bad" things can therefore happen, e.g. a curve can suddenly change direction as you see in the example.
The inverse function theorem: if $f\colon\mathbb{R}^n\to\mathbb{R}^n$ is continuously differentiable with $f'(0)$ invertible, then there is a neighbourhood $U\ni 0$, $V\ni f(0)$ such that $f^{-1}\colon V\to U$ exists and is continuously differentiable with derivative $(f^{-1})'(f(0))=f'(0)^{-1}$.
The four regions of the $xy$-plane separated by the lines $y=\pm x/\sqrt 2$. Each is mapped bijectively, smoothly with smooth inverse, to $V$. |
Trigonometric equation solved with exponential functions | Write your equation as
$$\frac{c}{\sqrt{c^2+d^2}} \sin(a) + \frac{d}{\sqrt{c^2+d^2}} \cos(a) = 0 $$
Let $\theta$ be an angle such that either $\cos(\theta) = c/\sqrt{c^2 + d^2}$ and
$\sin(\theta) = d/\sqrt{c^2 + d^2}$ or $\cos(\theta) = -c/\sqrt{c^2 + d^2}$ and
$\sin(\theta) = -d/\sqrt{c^2 + d^2}$. Then the equation becomes
$$ \sin(\theta + a) = \cos(\theta)\sin(a) + \sin(\theta) \cos(a) = 0$$
So $a = - \theta + n \pi$ will work for any integer $n$, and we can take
$\theta = \arctan(d/c)$. |
How to show $a,b$ coprime to $n\Rightarrow ab$ coprime to $n$? | $\gcd(a,n)=1$ implies $ar+ns=1$ for some integers $r,s$. $\gcd(b,n)=1$ implies $bt+nu=1$ for some integers $t,u$. So $$1=(ar+ns)(bt+nu)=(ab)(rt)+(aru+sbt+snu)n$$ so $\gcd(ab,n)=1$. |
Proving a left-hand limit exists | We have that
$$F(x)=F(0)+\int_0^{x}f(t) dt$$
and $$\lim_{x\to 1^-} F(x)=F(0)+\int_0^{1}f(t) dt$$
which exists since $f$ is continuous and bounded.
Refer also to the related
Is any continuous bounded function on $(a,b)$ Riemann integrable? |
Lusin's Theorem for finite measure spaces | Here is a proof of the Lusin's Theorem where $f:[a,b]\to \mathbb{C}$.
Lusin's Theorem: If $f:[a,b]\to \mathbb{C}$ is Lebesgue measurable and $\epsilon > 0$, there is a compact set $E\subset [a,b]$ such that $\mu(E^c) < \epsilon$ and $f|E$ is continuous. (Note I will use Egoroff's theorem, Theorem 2.26, Theorem 1.18, and Corollary 2.32 in Folland's Real Analysis; I am sure there are the same theorems in your book.)
Proof: Let $f:[a,b]\to \mathbb{C}$ be Lebesgue measurable and $\epsilon >0$. By Theorem 2.26 we can build a sequence of continuous functions $\{g_n\}$ such that $$g_n\to f \text{ in } L^1$$
Then by corollary 2.32 there is a subsequence $\{g_{n_j}\}$ of $\{g_n\}$ such that $g_{n_{j}}\to f$ a.e. Now by Egoroff's theorem, for any $\epsilon > 0$ there exists a set $F\subset [a,b]$ with $\mu(F) < \epsilon/2$ such that $g_{n_j}\to f$ uniformly on $F^{c}$.
Now by theorem 1.18, since $\mu([a,b]) < \infty$, there is $E$ compact subset of $[a,b]$ such that $E\subset F^{c}$ and $$\mu(F^c) - \epsilon/2 < \mu(E) \leq \mu(F^c)$$ So $F\subset E^c$ and we have
\begin{align*}
\mu(E^c) &= \mu(F) + \mu(E^c\setminus F)\\
&= \mu(F) + \mu(E^c\cap F^c)\\
&= \mu(F) + \mu(F^c\setminus E)\\
&= \mu(F) + (\mu(F^c) - \mu(E))\\
&\leq \epsilon/2 + \epsilon/2\\
&= \epsilon
\end{align*}
Note since $E\subset F^c$ and $g_{n_j}\to f$ uniformly on $F^c$, we have that $g_{n_j}\to f$ uniformly on $E$. Since, for all $j$, $g_{n_j}$ is continuous, we have that $f$ is continuous on $E$, that is $f|E$ is continuous.
Hope this helps. |
Relation between group zero-cohomology and the dual of group zero-homology | For any module $M$ the natural quotient map $A \to H_0(\Gamma,A)$ induces
an isomorphism $Hom(H_0(\Gamma,A),M) \cong H^0(\Gamma,Hom(A,M)).$ (Here
we regard $Hom(A,M)$ as a $\Gamma$-module by the contragredient action.)
Thus, in order to get a relationship between $Hom(H_0(\Gamma,A),\mathbb Z)$
and $H^0(\Gamma,\mathbb Z)$, you would need an isomorphism of $\Gamma$-modules
$A \cong Hom(A,\mathbb Z)$. For this, $A$ should be a finitely generated free
$\mathbb Z$-module, with a self-dual $\Gamma$-action. |
When does an eigenvector of a matrix contain only a constant? | Take an $n \times n$ matrix $A$, and suppose that $v$ is an eigenvector of $A$, with all entries of $v$ equal to a constant $k$. Naturally, $k \ne 0$. Let $\lambda$ be the eigenvalue of $A$ that has $v$ as an eigenvector. If $(b_1, b_2, \dots, b_n)$ is any row of $A$, then by the definition of eigenvalue and eigenvector, we have
$$kb_1+kb_2+\cdots +kb_n=\lambda k,$$
from which we conclude that $b_1+b_2+\cdots+b_n=\lambda$. It follows that each row sum of the matrix is equal to $\lambda$.
Conversely, suppose that all row sums of $A$ are equal to $\sigma$. Let $v$ be the vector with all entries equal to $1$. Then $Av$ is a vector with all entries equal to $\sigma$, which means that $v$ is an eigenvector of $A$ with eigenvalue $\sigma$.
Thus $A$ has an eigenvector with all entries equal if and only if all row sums of $A$ are equal. |
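A quick numpy illustration of the equivalence above (the matrix is my own example):

```python
import numpy as np

A = np.array([[1., 4., 2.],
              [3., 0., 4.],
              [2., 2., 3.]])   # every row sums to 7
v = np.ones(3)
print(A @ v)                   # equals 7 * v, so v is an eigenvector with eigenvalue 7
```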
How many composite functions from $(f \circ f)(1) = 2$ | By no means, since $|S^S| = 4^4$.
Hint: You need that $f(f(1)) = 2$. First restrict the value $f(1) = k$ to $k \in \{2,3,4\}$, because if $f(1) = 1$, then $f(f(1)) = 1 \neq 2$. |
Fast $L^{1}$ Convergence implies almost uniform convergence | Put $g_n:=|f_n-f|$, and fix $\delta>0$. We have $\sum_{n\in\mathbb N}\lVert g_n\rVert_{L^1}<\infty$ so we can find a strictly increasing sequence $N_k$ of integers such that $\sum_{n\geq N_k}\lVert g_n\rVert_1\leq \delta 4^{-k}$. Put $A_k:=\left\{x\in X:\sup_{n\geq N_k}g_n(x)>2^{1-k}\right\}$. Then $A_k\subset\bigcup_{n\geq N_k}\left\{x\in X: g_n(x)\geq 2^{-k}\right\}$ so
$$2^{-k}\mu(A_k)\leq \sum_{n\geq N_k}2^{-k}\mu\left\{x\in X: g_n(x)\geq 2^{-k}\right\}\leq \sum_{n\geq N_k}\lVert g_n\rVert_1\leq \delta 4^{-k},$$
so $\mu(A_k)\leq \delta 2^{-k}$. Put $A:=\bigcup_{k\geq 1}A_k$. Then $\mu(A)\leq \sum_{k\geq 1}\mu(A_k)\leq \delta\sum_{k\geq 1}2^{-k}=\delta$, and if $x\notin A$ we have for all $k$: $\sup_{n\geq N_k}g_n(x)\leq 2^{1-k}$ so $\sup_{n\geq N_k}\sup_{x\notin A}g_n(x)\leq 2^{1-k}$. It proves that $g_n\to 0$ uniformly on $A^c$, since for a fixed $\varepsilon>0$, we take $k$ such that $2^{1-k}\leq\varepsilon$, so for $n\geq N_k$ we have $\sup_{x\notin A}g_n(x)\leq\varepsilon$. |
Explanation about an identity involving inverse binomial coefficients. | \begin{align}\sum^{\infty}_{k=0}\frac{1}{n+k\choose n}=\sum^{\infty}_{k=0}\frac{k!\cdot n!}{(n+k)!}&=\sum^{\infty}_{k=0}\frac{\Gamma{(n+1)}\cdot \Gamma{(k+1)}}{\Gamma(n+k+2)}\cdot (n+k+1)\\&=\sum^{\infty}_{k=0}(n+k+1)\cdot B(n+1,k+1)\end{align}
Where $B(x,y)$ is the Beta function defined as
$$B(x,y)=\int^{1}_{0}u^{x-1}(1-u)^{y-1}\,du$$
where $x,y>0$. So we can rewrite the last result as
\begin{align}\sum^{\infty}_{k=0}(n+k+1)\cdot B(n+1,k+1)&=(n+1)\sum^{\infty}_{k=0}B(n+1,k+1)+\sum^{\infty}_{k=0}kB(n+1,k+1)\\
&=(n+1)\sum^{\infty}_{k=0}\int^{1}_{0}u^{k}(1-u)^{n}\,du+\sum^{\infty}_{k=0}k\int^{1}_{0}u^{k}(1-u)^{n}\,du\\
&=(n+1)\int^{1}_{0}(\sum^{\infty}_{k=0}u^{k})(1-u)^{n}\,du+\int^{1}_{0}u(\sum^{\infty}_{k=1}ku^{k-1})(1-u)^{n}\,du\\&
=(n+1)\int^{1}_{0}\frac{1}{1-u}(1-u)^{n}\,du+\int^{1}_{0}u\frac{1}{(1-u)^2}(1-u)^{n}\,du\\&
=(n+1)\frac{1}{n}+B(2,n-1)\\&
=\frac{n+1}{n}+\frac{1}{n(n-1)}\\&
=\frac{(n+1)(n-1)+1}{n(n-1)}\\&
=\frac{n^2-1+1}{n(n-1)}=\frac{n}{n-1}\end{align} |
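A numerical check of the final identity $\sum_{k\ge0}\binom{n+k}{n}^{-1}=\frac{n}{n-1}$ (pure Python):

```python
from math import comb

for n in (2, 3, 5, 10):
    s = sum(1 / comb(n + k, n) for k in range(2000))   # truncated series
    print(n, s, n / (n - 1))                            # the two values agree
```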
How to find all the equivalence classes of a given language without using the Table-Filling-Algorithm. | The Myhill-Nerode equivalence relation is that $x \sim_R y$ iff $x$ and $y$ have no distinguishing extension; i.e., if $xz$ and $yz$ are both in $B$ or both not in $B$ for any string $z$. A way to generate the equivalence classes is to start with $B$ and $B^c$ (which are distinguished from each other by the empty string), and then split classes on every possible distinguishing extension. In this case, if $z$ ends in $01$, then $xz$ and $yz$ are both not in $B$, and if $z$ ends in $0$ or $11$, then $xz$ and $yz$ are both in $B$, independent of $x$ and $y$. So the only possible distinguishing extension is the string $1$. Adding $1$ to two strings in $B^c$ (strings that end in $01$) does not distinguish them: both wind up in $B$. So $B^c$ is an equivalence class. Adding $1$ to two strings in $B$ (strings that don't end in $01$) will distinguish them if one ends in $0$ and the other doesn't. So $B$ splits into two equivalence classes: strings that end in $0$, and strings that don't end in $01$ or $0$ (this set includes the empty string). You wind up with three equivalence classes. |
Reconstruction of a matrix given its eigenvalues and eigenvectors dot products | Every nonsingular matrix has a unique polar decomposition. It follows that if $P$ is the unique positive definite matrix square root of $V^TV$, then $A=U(P\Lambda P^{-1})U^T$ for some real orthogonal matrix $U$. Since inner products are preserved under changes of orthonormal bases, there is not enough information to determine $U$ and you can only determine $A$ up to unitary equivalence. |
Are equivalence classes of subobjects of some X in Set just equivalence classes of subsets of X with a specific cardinality? | In $Set$, at least it has exactly the four subobjects you wrote up.
Note that a subobject of some object $X$ is not only a mere object $A$ in the same category, but it is understood to be together with a monomorphism $i:A\hookrightarrow X$ which plays the role of the inclusion.
That said, indeed nothing prevents us from taking $\{7\}$ as a subobject of $\{0,1\}$; moreover, there are exactly two ways to do that: $i$ either sends $7\mapsto 0$, or $7\mapsto 1$.
So the first one represents the same subobject as $\{0\}\hookrightarrow\{0,1\}$, while the second one represents the same subobject as $\{1\}\hookrightarrow\{0,1\}$, but these two are distinct. |
Comparing 2 Gaussian Distribution | Make a Q-Q plot, that is, sort your two samples in increasing order of magnitude, then make a scatter plot.
If the samples follow the same distribution, up to a scaling factor, the Q-Q plot will be a straight line. Otherwise, you will learn here how to interpret the difference, although Tukey's Exploratory Data Analysis (1977) is much clearer. |
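A minimal Q-Q plot sketch (numpy and matplotlib assumed; the data are made up for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 500)
b = rng.normal(2.0, 3.0, 500)      # same family, different location and scale

plt.scatter(np.sort(a), np.sort(b), s=5)   # sorted values plotted against each other
plt.xlabel("sample a quantiles")
plt.ylabel("sample b quantiles")
plt.show()                                  # roughly a straight line in this case
```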
Problem 16 Chapter 1 From Rudin: Have I Understood The Question Correctly? | I think your second statement is closest to what you must show. Given two points $x$ and $y$ that are distance $d$ apart, and given a particular but arbitrary $r>0$ for which $r>{d\over2}$, you need to show that there are infinitely many points $z$ that are a distance $r$ from each of the points $x$ and $y$. |
Prove subgaussian norm of subgaussian random variables is a norm | I think you are misreading the definition.
The sub-gaussian norm is
$$
\|X\|_{\psi_2} = \inf\left\{t>0:\mathbb{E}\exp\left( X^2/t^2\right)\le2\right\}.
$$
What you want to show is that
$$
\|X+Y\|_{\psi_2} \le \|X\|_{\psi_2} + \|Y\|_{\psi_2}.
$$
To show this, let $f(x) = e^{x^2}$, which is increasing and convex.
Then, we have
$$
f\left(\frac{|X+Y|}{a+b} \right) \le f\left(\frac{|X|+|Y|}{a+b} \right)\\
\le \frac{a}{a+b}f\left(\frac{|X|}{a}\right) + \frac{b}{a+b}f\left(\frac{|Y|}{b}\right)
$$
by Jensen's Inequality. After that, taking expectations,
$$
\mathbb{E}f\left(\frac{|X+Y|}{a+b} \right)
\le \frac{a}{a+b}\mathbb{E}f\left(\frac{|X|}{a}\right) + \frac{b}{a+b}\mathbb{E}f\left(\frac{|Y|}{b}\right).
$$
If we insert $a=\|X\|_{\psi_2}, b=\|Y\|_{\psi_2}$, then by definition, we have
$$
\mathbb{E}f\left(\frac{|X+Y|}{\|X\|_{\psi_2}+\|Y\|_{\psi_2}} \right) \le
\frac{a}{a+b}\times2 + \frac{b}{a+b}\times2 = 2.
$$
So, $\|X\|_{\psi_2}+\|Y\|_{\psi_2}$ is in the set $\left\{t>0:\mathbb{E}\exp\left( (X+Y)^2/t^2\right)\le2\right\}$, and this completes the proof as follows:
$$
\|X + Y\|_{\psi_2} \le \|X\|_{\psi_2}+\|Y\|_{\psi_2}.
$$ |
Perfect sets: squaring rudin's definition of "limit point" with that of another book's. | In general, $x$ is a limit point of a set $A$ when for every open neighborhood $U$ of $x$ there exists $y \in U \cap A$ with $y \neq x$.
$x$ is an adherence point of $A$ when for every open neighborhood $U$ of $x$, the intersection $U \cap A$ is not empty.
For first countable spaces, $x$ is an adherence point of $A$ iff there is a sequence within $A$ that converges to $x$.
$x$ is a limit of a sequence $s$ when for every open neighborhood $U$ of $x$, $s$ is eventually within $U$.
It is not a limit point. |
Show that $\int^\infty_0\left(\frac{\ln(1+x)} x\right)^2dx$ converge. | Since
$$\lim_{x\to0}\left(\frac{\ln(1+x)}{x}\right)^2=1$$
then the integral
$$\int_0^1\left(\frac{\ln(1+x)}{x}\right)^2dx$$
exists, moreover we have
$$\ln^2(1+x)=_\infty o(x^{1/2})$$
so
$$\left(\frac{\ln(1+x)}{x}\right)^2=_\infty o\left(\frac{1}{x^{3/2}}\right)$$
and then the integral
$$\int_1^\infty \left(\frac{\ln(1+x)}{x}\right)^2dx$$
also exists. Conclude. |
Is $(\mathbb{Z} \cup \{-\infty, \infty\},\leq)$ a complete lattice? | Yes, $(\mathbb{Z}\cup\{-\infty,\infty\},\leq)$ is a complete lattice, as given $S\subset \mathbb{Z}\cup\{-\infty,\infty\}$, either $S$ is bounded above, in which case the greatest integer in $S$ is the supremum, or its supremum is $\infty$, and likewise either $S$ is bounded below, in which case the least integer in $S$ is the infimum, or its infimum is $-\infty$.
What complete means is not really a matter of opinion. A lattice $(L,\leq)$ is complete exactly when every subset of $L$ has a supremum and infimum, and since $L$ and $\emptyset$ are perfectly good subsets of $L$, we know that their supremum and infimum must exist, and this provides the minimum and maximum of $L$.
To perhaps capture what you want, however, is the idea of Dedekind completeness. I.e. a lattice $(L,\leq)$ is said to be Dedekind complete exactly when every non-empty subset $S$ of $L$ which has an upper bound (i.e. bounded above) has a least upper bound. By adding a maximum and minimum to a Dedekind complete lattice, you can make a complete lattice (though the reverse does not hold; you can't in general take a complete lattice and remove its minimum and maximum and have a Dedekind complete lattice).
This is in contrast to a bounded-complete lattice, in which the above non-emptiness condition is not included (and so $\emptyset$ has a supremum, which is necessarily a minimum). |
Legendre symbol of $(-2/p)$. | The proofs that
$\Big(\frac{-1}{p}\Big) = \begin{cases} 1 & \text{if } p \equiv 1 \pmod 4 \\ -1 & \text{if } p \equiv 3 \pmod 4 \end{cases}$ and
$\Big(\frac{2}{p}\Big) = \begin{cases} 1 & \text{if } p \equiv \pm1 \pmod 8 \\ -1 & \text{if } p \equiv \pm3 \pmod 8 \end{cases}$
are classical. The first formula becomes $\Big(\frac{-1}{p}\Big) = \begin{cases} 1 & \text{if } p \equiv 1,5 \pmod 8 \\ -1 & \text{if } p \equiv 3,7 \pmod 8 \end{cases}$.
Multiplying the two Legendre symbols gives
$$\Big(\frac{-2}{p}\Big) = \begin{cases} 1 & \text{if } p \equiv 1,3 \pmod 8 \\ -1 & \text{if } p \equiv 5,7 \pmod 8 \end{cases}.$$ |
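A quick numerical check of this rule via Euler's criterion (pure Python; the list of primes is just for illustration):

```python
primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]
for p in primes:
    e = pow(-2 % p, (p - 1) // 2, p)        # Euler's criterion: a^((p-1)/2) mod p
    symbol = 1 if e == 1 else -1
    expected = 1 if p % 8 in (1, 3) else -1
    print(p, symbol, expected)              # the last two columns agree
```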
Prove that in any convex polyhedron, the number of faces that have an odd number of edges is even. | Let the faces be $F_1,\ldots,F_k$. Consider the sum $S=n_1+\cdots+n_k$
where face $F_i$ has $n_i$ edges.
This sum counts each edge twice: $S$ is even. So the number of $j$ for
which $n_j$ is odd must be even. |
Solution to $x^n=a \pmod p$ where $p$ is a prime | Let $g$ be a generator of the multiplicative group, and let $a=g^s$. We are trying to find a number $e$ between $1$ and $p-1$ such that $(g^e)^n=g^s$.
So we want $g^{en}\equiv g^s\pmod{p}$. This holds if and only if $en\equiv s\pmod{p-1}$.
Now consider the congruence $en\equiv s\pmod{p-1}$, where $e$ should be considered variable. This congruence has a solution if and only if $\gcd(n,p-1)$ divides $s$.
We therefore want to show that $\gcd(n,p-1)$ divides $s$ if and only if
the order of $a$ divides $\frac{p-1}{\gcd(n,p-1)}$.
We will do one direction in detail, and leave the other direction to you. We show that if $\gcd(n,p-1)$ divides $s$, then the order of $a$ divides $\frac{p-1}{\gcd(n,p-1)}$.
It is enough to show that $a$ raised to the power $\frac{p-1}{\gcd(n,p-1)}$ is congruent to $1$ modulo $p$. So we want to show that $g^s$ raised to the power $\frac{p-1}{\gcd(n,p-1)}$ is congruent to $1$ modulo $p$. To do this we need to show that $s\cdot \frac{p-1}{\gcd(n,p-1)}$ is a multiple of $p-1$.
This is obvious. For we are told that $\gcd(n,p-1)$ divides $s$, say $s=s'\gcd(n,p-1)$. Then
$$s\cdot \frac{p-1}{\gcd(n,p-1)}=s'\gcd(n,p-1)\cdot \frac{p-1}{\gcd(n,p-1)}=s'(p-1),$$
and $s'(p-1)$ is a multiple of $p-1$. |
What's the density of $(0, Z)$ when $Z \sim U([0,1])$ | When handling a vector of random variables (random vector) it is natural to consider not just their respective marginals but also the joint. Without more context, it seems reasonable to opt for the following convention:
The first component of $(X, Z)$ is a degenerate distribution $X = c=\text{constant}$, a single point mass, thus necessarily $Z$ is independent of $X$. Namely, the joint "density" is
$$f_{X,Z}(x,z)=\delta(x-c) \cdot \frac1{b-a}\mathbb{I}_{a \le z \le b} \qquad \text{, with } c = 0,\, b=1,\, a=0,$$
where $\delta(x)$ is the Dirac delta function and $\mathbb{I}_{\text{blah}}$ is the indicator function.
Now, in the paper it is stated that $(0, Z)$ is "a zero on the $x$-axis and the random variable $Z$ on the $y$-axis". Seems pretty clear to me that the two components are meant to be independent. After all, the intro to Example $\pmb 1$ is "The following example illustrates how apparently simple sequences ... (converge under ... but do not converge...)"
Also, a few lines after listing the several distances, the authors say "Although this simple example features distributionS with disjoint supports..." , which implies that they view this random vector as merely concatenating two independent distributions. |
implications about the existence of limits | If $g$ is the constant function $0$, and $f(x)=1$ for $|x|>0$, $f(0)=-1$, then $\lim_{y\rightarrow 0}f(y)=1$ and $\lim_{x\rightarrow x_{0}}f(g(x))=\lim_{x\rightarrow x_{0}}f(0)=-1$.
For the existence, let $f(x)=\chi_{\bf{Q}}(x)$, the function taking value $1$ on rational numbers and zero otherwise, and let $g$ be the constant $0$, then $\lim_{x\rightarrow 0}f(g(x))=\lim_{x\rightarrow 0}f(0)=1$ and $\lim_{y\rightarrow 0}f(y)$ does not exist.
Another example: let $g(x)=x\sin(1/x)$ for $x\ne 0$, $g(0)=0$, and $f(y)=1$ for $y\ne 0$, $f(0)=-1$. Then $\lim_{y\rightarrow 0}f(y)=1$ exists, and $\lim_{x\rightarrow 0}f(g(x))$ does not exist, because for $x_{n}=\dfrac{1}{2n\pi}$, $f(g(x_{n}))=f(0)=-1$, but for $z_{n}=\dfrac{1}{(2n+(1/2))\pi}$, $f(g(z_{n}))=f\left(\dfrac{1}{(2n+(1/2))\pi}\right)\rightarrow 1$. |
Probability question involving playing cards and more than one players | Any player that has no hearts has two diamonds, so you want the chance that the two heart cards are in the same hand. If you deal the first heart to somebody, he gets one of the remaining three cards, so the chance is $\frac 13$. This approach may not help you with the larger problem.
Added: for a slightly larger problem, take a standard deck, a player drawing two cards without replacement and we ask the chance that he gets no hearts and at least one diamond. The first card can be a diamond, probability $\frac 14$, in which case the second card can be any non-heart, probability $\frac {38}{51}$ or the first card can be a club or spade, probability $\frac 12$ and the second a diamond, probability $\frac {13}{51}$. The total is then $\frac 14 \cdot \frac {38}{51}+\frac 24\cdot \frac {13}{51}=\frac {64}{204}=\frac {16}{51}$. It takes care to get all the possibilities, and once each. In this case it would be easy to double count the draws of two diamonds.
Added: to have two players have no hearts each and at least one diamond, you just keep going. The new twist is that the chance of the second getting this is influenced by whether the first has two diamonds or not. So now we figure the chance the first player has two diamonds: $\frac 14 \cdot \frac {12}{51} = \frac 3{51}$ Then the chance the first player has exactly one diamond and one non-heart is $\frac {16}{51}-\frac 3{51}=\frac {13}{51}$ Given two diamonds in the first hand, the second player can draw a diamond then a non-heart with probability $\frac {11}{50}\cdot \frac {36}{49}$ and a club or spade then a diamond with probability $\frac {26}{50}\cdot \frac {11}{49}$. Given only one diamond and no hearts in the first hand, the second player can draw a diamond then a non-heart with probability $\frac {12}{50}\cdot \frac {36}{49}$ and a club or spade then a diamond with probability $\frac {25}{50}\cdot \frac {12}{49}$. So the total is $\frac 3{51}\cdot (\frac {11}{50}\cdot \frac {36}{49}+\frac {26}{50}\cdot \frac {11}{49})+\frac {13}{51}(\frac {12}{50}\cdot \frac {36}{49}+\frac {25}{50}\cdot \frac {12}{49})$. I'll leave it to you to gather all this together and to do the three card case. |
Find the $n$th term of sequence in the form of $a_{n+2}=ba_{n+1}+ca_n+d$ | The method is fairly similar: you suppose the solution is composed of two parts, namely the particular solution (which takes care of the constant) and the homogeneous solution, which is the case you know how to solve.
$$a_{n} = a^{p}_{n} + a^{h}_{n}$$
To solve for $a_{n}^{p}$, say it is some constant $c$ and plug it into the recurrence relation.
$$c = c - 2c - 1$$
$$2c = -1$$
$$c = -\frac{1}{2}$$
For the homogeneous solution, suppose the $-1$ isn't there; you get the following recurrence relation:
$$a_{n+2} = a_{n+1} -2a_{n}$$
Which has the characteristic polynomial:
$$x^2 - x + 2 = 0$$
$$x_{1, 2} = \frac{1 \pm \sqrt{1 - 8}}{2} = \frac{1 \pm i \sqrt{7}}{2}$$
Then our solution has the following form:
$$a_{n} = \alpha(\frac{1 - i \sqrt{7}}{2})^n + \beta(\frac{1 + i \sqrt{7}}{2})^n - \frac{1}{2}$$
All that remains is to solve for $\alpha, \beta$ from the initial conditions, e.g. by evaluating the formula at $n=1, 2$. |
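As a sanity check, a small Python sketch comparing the recurrence $a_{n+2}=a_{n+1}-2a_n-1$ with the closed form above (the initial values $a_1=1$, $a_2=2$ are an arbitrary choice of mine):

```python
from cmath import sqrt

a1, a2 = 1, 2
r1 = (1 - sqrt(-7)) / 2
r2 = (1 + sqrt(-7)) / 2

# Solve alpha*r1 + beta*r2 = a1 + 1/2 and alpha*r1**2 + beta*r2**2 = a2 + 1/2.
beta = ((a2 + 0.5) - (a1 + 0.5) * r1) / (r2**2 - r1 * r2)
alpha = ((a1 + 0.5) - beta * r2) / r1
closed = lambda n: (alpha * r1**n + beta * r2**n - 0.5).real

seq = [a1, a2]
for _ in range(8):
    seq.append(seq[-1] - 2 * seq[-2] - 1)
print(seq)
print([round(closed(n), 6) for n in range(1, len(seq) + 1)])   # matches seq
```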
Four sided dice rolled to match number | The die has 4 sides (1, 2, 3 and 4), so there are 2 even sides and 2 odd sides.
We roll the die twice with the restriction that the first and the second throw show different results. So these are the possible results:
(1,2), (1,3), (1,4), (2,1), (2,3), (2,4), (3,1), (3,2), (3,4), (4,1), (4,2) and (4,3). Since all possibilities are equally likely, your questions (a,b and c) are simple counting questions.
a) There are 6 out of 12 possibilities, where $k_{1}$ is even $\Rightarrow\;p=\frac{1}{2}$
b) There are 2 out of 12 possibilities, where $k_{1}$ and $k_{2}$ are even $\Rightarrow\;p=\frac{1}{6}$
c) There are 4 out of 12 possibilities, where $k_{1}+k_{2}<5\;\Rightarrow\;p=\frac{1}{3}$ |
Defining $\epsilon$ in terms of $|x|$ | I think this is similar to your proposal, but set out a little differently. Apologies if too pedantic. I don't know about \vspace for MathJax.
Suppose otherwise, so that for a given $ y \neq a $ we have $ \lim_{x \to y} f(x) = l $. Then for every $ \varepsilon > 0 $ there exists $ \delta > 0 $ with the property that whenever $ x $ satisfies $ | x - y | \leq \delta ~ (*) $, we have $ | f(x) - l | \leq \varepsilon $. But there always exist both a rational $ x_r $ and an irrational $ x_i $ that satisfy $ (*) $, with function values $ a $ and $ x_i $ respectively, and so $ |a - l| \leq \varepsilon $ and $ | x_i - l | \leq \varepsilon $.
Now $ x_i $ can be chosen as close to $ y $ as we like, so we may as well fix on a value that satisfies the stronger requirement $ | x_i - y | \leq \min( \varepsilon, \delta ) $.
We then have three inequalities. The value $ \varepsilon $ can be chosen freely, so the inequality $ | a - l | \leq \varepsilon $ implies $ a = l $. Use the triangle inequality with the other two to obtain $ | l - y | \leq 2\varepsilon $, and by the same reasoning we must also have $ y = l $.
Thus $ y = a $ which contradicts our requirement $ y \neq a $. |
How to find the determinant of a matrix using the PA = LU method. | $PA=LU$ and $\det(P)=+1$ or $-1$ depending on the sign of the permutation, so $\det(A)=\det(L)\det(U)\det(P)$. So if $L$ only has $1$'s on the diagonal, $\det(L)=1$ and so you just have to compute $\det(U)$ which is easy (product of diagonal). The $\det(P)$ is easy: start with $p=1$ and with every (row or column) swap (during the $LU$ computation), multiply $p$ by $-1$. The final $p$ is what you need. It's an almost free byproduct. |
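A small numerical illustration of the answer above (scipy assumed; note that scipy's `lu` returns the factorization as $A = PLU$, so $\det A = \det P\cdot\det L\cdot\det U$ with $\det P=\pm1$):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])

P, L, U = lu(A)                          # A = P @ L @ U, L has 1's on the diagonal
det_P = round(np.linalg.det(P))          # +1 or -1 for a permutation matrix
det_A = det_P * np.prod(np.diag(U))      # det(L) = 1
print(det_A, np.linalg.det(A))           # both give the same value
```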
Scaling a grading system | If you want to interpolate linearly, you must solve
$$75\% = a \cdot 40\% + (1-a) \cdot 100\%\\
90\% = b \cdot 40\% + (1-b) \cdot 100\%$$
This particular choice yields $a = \frac5{12}$ and $b = \frac16$.
Then given a new threshold $t$, the percentages for silver and gold would be
$$p_s = a \cdot t + (1-a) \cdot 100\% \\
p_g = b \cdot t + (1-b) \cdot 100\%$$ |
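A tiny helper implementing the interpolation above (pure Python; the function name is my own):

```python
def thresholds(t, a=5/12, b=1/6):
    """Silver and gold cutoffs (in percent) for a new base threshold t."""
    silver = a * t + (1 - a) * 100
    gold = b * t + (1 - b) * 100
    return silver, gold

print(thresholds(40))   # (75.0, 90.0), reproducing the original scheme
```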
Show $A=(I-S)(I+S)^{-1}$ is an orthogonal matrix if $S$ is a real antisymmetric matrix | One approach is to note that, by the Cayley–Hamilton theorem, $(I-S)^{-1}$ can be expressed as a polynomial in $S$, and for any two polynomials $p,q$ we have $p(S)q(S) = q(S)p(S)$.
For a direct proof, we could note that
$$
(I + S)(I - S)^{-1} \\
= (2I - (I - S))(I - S)^{-1}\\
= 2(I - S)^{-1} - I\\
= (I - S)^{-1}(2I - (I - S))\\
= (I - S)^{-1}(I + S).
$$ |
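A quick numpy check of the orthogonality claim for a random antisymmetric $S$ (the matrix size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
S = M - M.T                                  # real antisymmetric matrix
I = np.eye(4)
A = (I - S) @ np.linalg.inv(I + S)
print(np.allclose(A.T @ A, I))               # True: A is orthogonal
```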
how to calculate 1000 digit number divided by 13 | Note that your number $N=11 \ldots 1$, with $1000$ digits, is equal to $(10^{1000}-1)/9$, and since gcd$(10,13)=1$, we have by Fermat's little theorem that $10^{12} \equiv 1 \pmod{13} \Rightarrow (10^{12})^{83} \equiv 1 \pmod{13}$ and so $10^{1000} = 10^{12\times 83 +4} \equiv 10^4 \equiv 3 \pmod{13}$, then, finally, $10^{1000}-1\equiv 2 \pmod{13}$, which gives us that $9N \equiv 2 \pmod{13}$. Multiplying both sides by $3$, we get $N \equiv 3\times9 N \equiv 6 \pmod{13}$. So, $N \equiv 6 \pmod{13}$. |
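A one-line check of the answer above in Python:

```python
print(int("1" * 1000) % 13)   # prints 6, matching N ≡ 6 (mod 13)
```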
Optimizing a linear equation with multiple variables | Never rusty.
I am going to divide the answer in two (I didn't completely understand your question).
You have two liquids:
oil = A(1)*t + B(1)*x + C(1)*z;
shampoo = A(2)*t + B(2)*x + C(2)*z;
I suppose that you have a set of data for each equation.
1) Supposing that you want a function for each liquid with specific values of A, B, C:
You can run fminsearch or a similar optimisation function (if you have access to MATLAB or Octave).
What you have to do is combine the results in a single function: create a function that takes the values of t, x, z for all your measurements and depends on the wanted coefficients A, B, C (in this case vectors):
function y = liquids(A, B, C, t, x, z)
  y1 = A(1)*t + B(1)*x + C(1)*z;   % oil
  y2 = A(2)*t + B(2)*x + C(2)*z;   % shampoo
  y = y1 + y2;
end
If you run an optimisation on this function, you will get the coefficient values for each equation.
2) If you want a unique value of A, B, C, you have to take A, B, C to be the same in each equation. |
Proving that a function is nowhere differentiable | If $f(z)$ is differentiable at $z_0$ and $f(z_0)\ne0$, then $1/f(z)$ is differentiable at $z_0$. In your case, if $1/\bar z$ were differentiable at some point, then so would be $1/f=\bar z$. But $\bar z$ does not satisfy the Cauchy–Riemann equations. Contradiction. |
An example of a simple graph whose automorphism group is isomorphic to the cyclic group on 4 elements. | 10 vertices:
In this drawing, the vertices are colored according to their degree.
The red square must be preserved by any automorphism, and its orientation is fixed because each red vertex is part of exactly one red-green-blue triangle whose green member points towards the next red node.
Fewer than 10 vertices won't do:
Suppose the graph has an automorphism $\sigma$ of order $4$. Then the orbit of each vertex under $\sigma$ has $1$, $2$ or $4$ elements, and there has to be at least one orbit of size $4$ because otherwise $\sigma^2$ would be the identity.
If there is exactly one orbit of size $4$, then $\sigma^2$ swaps exactly two pairs of vertices, and swapping just one of those pairs will necessarily be an automorphism too, so the automorphism group is not $\langle \sigma\rangle$.
Now suppose there are exactly two orbits of size $4$ and every other orbit (if any) has size $1$. Consider how many vertices in orbit A each vertex in orbit B is connected to:
If $0$ or $4$, then the two orbits can be rotated independently of each other, and the automorphism group is not $\langle \sigma \rangle$.
If $1$ or $3$, then draw orbits A and B in concentric circles such that each vertex is right outside the vertex in the other orbit it is (or isn't, in case $3$) connected to. This drawing has enough symmetry that the automorphism group is at least $D_8$.
If $2$, and the two vertices a node is connected to are opposite each other, then turning one of the orbits through $180^\circ$ while leaving the other alone will be an automorphism, and the automorphism group is not $\langle\sigma\rangle$.
If $2$, and the two vertices a node is connected to are neighbors, then draw orbits A and B in concentric circles such that each vertex is in the middle of the two vertices it is connected to. Again, this drawing has enough symmetry that the automorphism group is at least $D_8$.
Therefore, if our graph has symmetry group $C_4$, then there will be at least two orbits of size $4$ and at least one additional orbit of size either $2$ or $4$. That accounts for at least $10$ vertices in total.
A similar argument shows that the smallest graph whose automorphism group is $C_3$ or $C_5$ has $9$ or $15$ vertices, respectively. This means that the sequence of minimal sizes of graphs with automorphism group $C_n$ is
$$ 2, 9, 10, 15, \ldots $$
which is enough to find the sequence as A058890 in OEIS; see this for further references and a (complex) general formula. |
Integral$=-\frac{4}{3}\log^3 2-\frac{\pi^2}{3}\log 2+\frac{5}{2}\zeta(3)$ | Defining $I$ as the definite integral,
$$I:=
\int\limits_{0}^{1}\left[\frac{\zeta{(2)}-2\log^2{2}}{1-x}-\frac{1}{x(1-x)}\left(2\operatorname{Li}_2{\left(\frac{1-\sqrt{1-x}}{2}\right)}-\log^2{\left(\frac{1+\sqrt{1-x}}{2}\right)}\right)\right]\mathrm{d}x,$$
prove:
$$I=\frac52\zeta{(3)}-2\zeta{(2)}\log{2}-\frac43\log^3{2}.$$
Substituting
$$\xi=\frac{1-\sqrt{1-x}}{2},$$
the integral becomes:
$$I=
\int_{0}^{\frac12}\left[4\left(\zeta{(2)}-2\log^2{2}\right)-\frac{1}{\xi(1-\xi)}\left(2\operatorname{Li}_2{(\xi)}-\log^2{(1-\xi)}\right)\right]\frac{\mathrm{d}\xi}{1-2\xi}.$$
It turns out that the derivative of the expression $2\operatorname{Li}_2{(\xi)}-\log^2{(1-\xi)}$ is much simpler than the expression itself:
$$\frac{d}{d\xi}\left(2\operatorname{Li}_2{(\xi)}-\log^2{(1-\xi)}\right)=-\frac{2(1-2\xi)}{\xi (1-\xi)}\log{(1-\xi)}.$$
This suggests that we should integrate by parts.
$$\begin{align}
I
&=\int_{0}^{\frac12}\left[4\left(\zeta{(2)}-2\log^2{2}\right)-\frac{1}{\xi(1-\xi)}\left(2\operatorname{Li}_2{(\xi)}-\log^2{(1-\xi)}\right)\right]\frac{\mathrm{d}\xi}{1-2\xi}\\
&=\int_{0}^{\frac12}\left[4\left(\zeta{(2)}-2\log^2{2}\right)\xi(1-\xi)-\left(2\operatorname{Li}_2{(\xi)}-\log^2{(1-\xi)}\right)\right]\frac{\mathrm{d}\xi}{\xi(1-\xi)(1-2\xi)}\\
&=-\int_{0}^{\frac12}\left[4\left(\zeta{(2)}-2\log^2{2}\right)(1-2\xi)+\frac{2(1-2\xi)}{\xi (1-\xi)}\log{(1-\xi)}\right] \log{\frac{\xi(1-\xi)}{(1-2\xi)^2}} \mathrm{d}\xi\\
&=-2\int_{0}^{\frac12}\left[2\left(\zeta{(2)}-2\log^2{2}\right)+\frac{\log{(1-\xi)}}{\xi (1-\xi)}\right] (1-2\xi)\log{\frac{\xi(1-\xi)}{(1-2\xi)^2}} \mathrm{d}\xi\\
&=-4\left(\zeta{(2)}-2\log^2{2}\right)\int_{0}^{\frac12}(1-2\xi) \left[\log{\xi}+\log{(1-\xi)} - 2\log{(1-2\xi)}\right] \mathrm{d}\xi\\
&~~~~-2\int_{0}^{\frac12}\frac{(1-2\xi)}{\xi (1-\xi)} \log{(1-\xi)} \left[\log{\xi}+\log{(1-\xi)} - 2\log{(1-2\xi)}\right] \mathrm{d}\xi\\
&=4\left(\zeta{(2)}-2\log^2{2}\right)\int_{0}^{\frac12}(2\xi-1) \left[\log{\xi}+\log{(1-\xi)} - 2\log{(1-2\xi)}\right] \mathrm{d}\xi\\
&~~~~+2\int_{0}^{\frac12}\left(\frac{1}{1-\xi}-\frac{1}{\xi}\right) \left[\log{(1-\xi)}\log{\xi}+\log^2{(1-\xi)} - 2\log{(1-\xi)}\log{(1-2\xi)}\right] \mathrm{d}\xi.
\end{align}$$
Now, distributing factors in the integrands and integrating term-by-term, we can write the integral $I$ as a sum of a dozen or so primitive integrals that each have anti-derivatives in terms of polylogarithms that may be easily evaluated and added up with the aid of a computer algebra program such as WolframAlpha, thus obtaining the desired result. However, such a solution leaves much to be desired in the way of elegance.... |
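For what it's worth, a quick numerical check of the claimed closed form (a sketch using mpmath; the integrand has removable singularities at the endpoints, so extra working precision is used):

```python
from mpmath import mp, mpf, quad, polylog, log, sqrt, zeta

mp.dps = 30
z2 = zeta(2)

def integrand(x):
    xi = (1 - sqrt(1 - x)) / 2
    inner = 2 * polylog(2, xi) - log((1 + sqrt(1 - x)) / 2) ** 2
    return (z2 - 2 * log(2) ** 2) / (1 - x) - inner / (x * (1 - x))

numeric = quad(integrand, [0, mpf(1) / 2, 1])
closed = mpf(5) / 2 * zeta(3) - 2 * z2 * log(2) - mpf(4) / 3 * log(2) ** 3
print(numeric)
print(closed)   # the two values agree to many digits
```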
Every smooth vector field generates a smooth flow | Fix $p \in M$. Consider the first order differential equation:
\begin{align}
\forall t \in I, ~ \gamma'(t) = X(\gamma(t)),~ \gamma(0) = p
\end{align}
In charts, this is an ordinary differential equation in $\mathbb{R}^n$ and it has a unique maximal solution, say $\gamma_p : I_p \to M$, with $I_p$ open.
Now, let $U = \bigcup_{p \in M} I_p\times \{p\}$. This is an open subset of $\mathbb{R}\times M$ containing $\{0\}\times M$ (this is the difficult statement of the theorem, I will not provide a proof here). Define on $U$:
\begin{align}
\varphi^X : U &\to M \\
(t,p) & \mapsto \gamma_p(t)
\end{align}
One can show that it is a flow and that it generates the vector field $X$.
First, it is a flow because of the Cauchy–Lipschitz theorem on uniqueness of solutions of differential equations. For $t,s$ such that $\varphi^X(t+s,p)$ and $\varphi^X(t,\varphi^X(s,p))$ are defined, one can show, differentiating with respect to the $t$ variable, that they both are solutions of the same first order ODE with initial data $\varphi^X(s,p)$ at $t=0$, and consequently they are equal.
Moreover, by the very definition of $\varphi^X$:
\begin{align}
\left.\frac{\mathrm{d}}{\mathrm{d}t}\varphi^X(t,p)\right|_{t=0} = \left.\frac{\mathrm{d}}{\mathrm{d}t} \gamma_p(t)\right|_{t=0}=\gamma_p'(0) = X(\gamma_p(0))=X(p)
\end{align}
and $\varphi^X$ generates $X$. |
Does a derivative exist if it is equal to infinity | As some people in the comments have said, there really lacks a consensus about whether a limit exists if it is equal to $\infty$. Some will say "it diverges" if it is unbounded, others will say "it converges to $\infty$", saving "diverges" for when it oscillates or something. As far as the sum rule goes, if you are adding $\infty$ to any constant then it works, but if you are adding $\infty$ to $-\infty$, then you have indeterminate and should try calculating the derivative another way. |
Wagner's theorem about 3-transitive groups of odd degree. | We have that $N_{(x,y)}$ is normal in $N_{[x,y]}$ of index $2$.
Since $N$ is $2$-transitive, the orders of these groups do not depend on the choice of $x$ or $y$. In particular, a $2$-Sylow $P_{[x,y]}$ of $N_{[x,y]}$ is not contained in $N_{(w,z)}$ for any alternative pair $w,z$, and has order greater than the corresponding $2$-Sylow $P_{[w,z]}$.
OK, so assume that $P_{[x,y]}$ has $k \ge 2$ orbits. Then $P_{[x,y]}$ has $k \ge 2$ fixed points. So there are $2$ points $w$ and $z$ such that every element of $P_{[x,y]}$ fixes both $w$ and $z$. So there is an inclusion
$$P_{[x,y]} \subset P_{(w,z)}$$
and this is a contradiction as above. If $k = 1$, then $P_{[x,y]}$ would necessarily fix only $k = 1$ point $z$ and $P_{[x,y]} \subset P_{z}$ is no contradiction. |
Show that a continuous periodic function on $\mathbb{R}$ attains its maximum and minimum. - Proof Verification | Actually, for this question, there is no need to consider separately the cases where the function is constant vs when it isn't.
By the way, your indexing for the union is weird. It might be easier to just write it out explicitly:
\begin{equation}
\Bbb{R} = \dots \cup [x-2d, x-d] \cup [x-d, x] \cup [x, x+d] \cup [x+d, x+2d] \dots
\end{equation}
This, although tedious, gives a clear meaning of what you intend to say (and I'm sure most people will be fine with the use of $\dots$ because the intended meaning is clear). This is better than giving an incorrect statement, because currently what you have written does not make sense.
However, if you really want to write $\Bbb{R}$ as an infinite union, you could do something like:
\begin{equation}
\Bbb{R} = \bigcup_{n \in \Bbb{Z}} \left[x+nd, x+(n+1)d \right]
\end{equation}
Added Remarks:
The idea of your proof is certainly correct (and if you make the notational change I suggested above, it will certainly constitute a rigorous proof), but you should note that there is no need to consider an arbitrary $x$, simply pick $x=0$. |
ceiling functions inequality | $$ \lceil x \rceil = \min\{k \in \mathbb{Z} \mid k \geq x\}$$
So, basically:
$$\left\lceil {n\over 4} \right\rceil \ge 3 \iff {n\over 4} > 2 \iff n > 8$$ |
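Checking the boundary cases confirms this: for $n=9$ we get $\left\lceil \frac94\right\rceil=\lceil 2.25\rceil=3\ge 3$, while for $n=8$ we get $\left\lceil \frac84\right\rceil=2<3$.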
The steps to get $\beta$ in Least Square Estimation, why $x_i$ was removed? | $\sum\limits_{i=1}^n \left(y_i-\bar{y}+\hat{\beta}(\bar{x}-x_i)\right) =0 \implies \sum\limits_{i=1}^n (y_i-\bar{y}) = - \hat{\beta}\sum\limits_{i=1}^n(\bar{x}-x_i) \implies \hat{\beta} = \dfrac{\sum\limits_{i=1}^n (y_i-\bar{y}) }{\sum\limits_{i=1}^n(x_i-\bar{x}) }$ is a strange chain to write, since $\sum\limits_{i=1}^n (y_i-\bar{y})=0$ and $\sum\limits_{i=1}^n (x_i-\bar{x})=0$ by the definition of $\bar{y}$ and $\bar{x}$, so saying $\hat{\beta} =\frac00$ would not be helpful.
That being said, I would have thought you could say that $\sum\limits_{i=1}^n \left(y_i-\bar{y}+B(\bar{x}-x_i)\right) =0$ is obviously true for any $B$; in particular $\sum\limits_{i=1}^n \left(y_i-\bar{y}+\hat{\beta}(\bar{x}-x_i)\right) =0$, and so $\sum\limits_{i=1}^n \left(y_i-\bar{y}+\hat{\beta}(\bar{x}-x_i)\right)\bar{x} =0$. Subtract that from the earlier $\sum\limits_{i=1}^n \left(y_i-\bar{y}+\hat{\beta}\bar{x}-\hat{\beta}x_i\right)x_i =0$ and you can get the final line and the desired conclusion.
Let $G = GL(2,F)$ and $B$ be the subgroup of upper triangular matrices in $G$. Show $ B \backslash G/B = \{ B,BwB \} $ | If $c \neq 0$, you need to find two upper triangular matrices $M_1, M_2$ such that $M_1 \begin{bmatrix}a & b \\ c & d\end{bmatrix}M_2 = \begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}$.
Try to think in terms of row and column operations. The first reduction to try is to apply two shearing operations, i.e., find $b_1, b_2$ such that
$$
\begin{bmatrix}1 & b_1 \\ 0 & 1\end{bmatrix}\begin{bmatrix}a & b \\ c & d\end{bmatrix}\begin{bmatrix}1 & b_2 \\ 0 & 1\end{bmatrix} = \begin{bmatrix}0 & b' \\ c & 0\end{bmatrix}.
$$
This is possible since $c \neq 0$. Then, another scaling matrix either in the front or in the back suffices. |
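Concretely, taking $b_1=-a/c$ and $b_2=-d/c$ gives
$$
\begin{bmatrix}1 & -a/c \\ 0 & 1\end{bmatrix}\begin{bmatrix}a & b \\ c & d\end{bmatrix}\begin{bmatrix}1 & -d/c \\ 0 & 1\end{bmatrix} = \begin{bmatrix}0 & b-\frac{ad}{c} \\ c & 0\end{bmatrix},
$$
and $b-\frac{ad}{c}=-\frac{ad-bc}{c}\neq 0$, so one further upper triangular (in fact diagonal) scaling, e.g. by $\operatorname{diag}\!\left(\frac1c,\,-\frac{c}{bc-ad}\right)$ on the right, produces $\begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}$.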
How many matrices do we need to add? | Some remarks about the writing of the question.
I have rarely read such an unclear post. Let $p_k$ be the $k^{th}$ prime number (of the list $2,3,5,\cdots$). You must absolutely define the matrix $A_{n,k}\in M_n$ explicitly, for example as follows:
$A_{n,k}=M_{n,2}+M_{n,3}+\cdots+M_{n,p_k}$ where $M_{n,p}[i,j]=R_p[i_1,j_1]$, with $i_1=i\bmod p$ if $i\bmod p\neq 0$ and $i_1=p$ otherwise (similar definition for $j_1$). In particular $M_{n,p}$ is not tiled by the $R_p$ because you use scissors!
For a fixed $n$, you seek the minimal $k$ s.t. $A_{n,k}$ is invertible. In general, such a problem is difficult; yet, in $M_{n,p_k}$ (the last matrix), there are still many copies of the matrix $R_{p_k}$, and I think that the problem is feasible.
What follows is the result of some experiments (up to $k=23$).
i) I think that it is better to consider that $k$ is given and to look for the maximal $n$ s.t. $A_{n,k}$ is invertible. Indeed, when the $R_p$ are randomly chosen, $A_{n,k}$ is "always" invertible or "always" non-invertible. Moreover, if $A_{n,k}$ is invertible, then the $A_{m,k}$, with $m<n$, are "always" invertible.
ii) Then let $f(k)$ be this maximal $n$. Several calculations seem to "prove":
Conjecture (it may be the meaning of Matt Groff's last comment): $f(k)=2+3+\cdots+p_k-k+1$.
I think that it is much better to rewrite your post, using the lines above (definition of $A_{n,k}$ and conjecture). |
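For anyone who wants to reproduce this kind of experiment, here is a minimal sketch (the small random integer entries for the $R_p$ and the numerical rank test are only illustrative choices):
    import numpy as np
    from sympy import prime

    def A(n, k, rng):
        # A_{n,k} = M_{n,2} + M_{n,3} + ... + M_{n,p_k}, where
        # M_{n,p}[i, j] = R_p[i mod p, j mod p] (0-based indices here)
        total = np.zeros((n, n))
        for j in range(1, k + 1):
            p = prime(j)
            R = rng.integers(-5, 6, size=(p, p))   # a random p x p matrix R_p
            idx = np.arange(n) % p
            total += R[np.ix_(idx, idx)]
        return total

    rng = np.random.default_rng(0)
    k = 4                              # primes 2, 3, 5, 7
    n = 2 + 3 + 5 + 7 - k + 1          # conjectured maximal n for this k
    for m in (n, n + 1):
        print(m, np.linalg.matrix_rank(A(m, k, rng)) == m)   # invertible?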
$P(A_n)\sim e^{-n}$, then there is a $c>1$ such that $P(A_n)\leq c^{-n}$ for al $n$? | $P\left(A_{n}\right)\sim e^{-n}$ (probably) stands for $$\lim_{n\rightarrow\infty}e^{n}P\left(A_{n}\right)=1$$
If that is the case and $0<c<e$ then $c^{n}P\left(A_{n}\right)=\left(\frac{c}{e}\right)^{n}e^{n}P\left(A_{n}\right)\rightarrow0$, showing that $P(A_n)\leq c^{-n}$ for $n$ large enough. Note that $e:=\exp 1>1$, so we can find a $c>1$ that satisfies this condition.
If indeed $P(A_n)\leq c^{-n}$ for each $n$ for some $c>1$ then $P(A_n)<1$ for each $n$.
This however is not a consequence of $P(A_n)\sim e^{-n}$. For instance take $P(A_n)=1$ for $n\leq N$ and $P(A_n)=e^{-n}$ for $n>N$ where $N$ denotes some positive integer. Then $P\left(A_{n}\right)\sim e^{-n}$ . |
Can we study sequences in the extended real numbers? | The map $(-1,1)\ni x\mapsto \frac{x}{1-\vert x\vert}:=f(x)\in\mathbb{R}$ may be extended to a bijection $F\colon [-1,1]\to \Bbb R \cup \{-\infty,\infty\}$ by $F(x):=f(x)$ for $-1<x<1$ and $F(\pm1):=\pm\infty$. Then, defining a distance on $\Bbb R \cup \{-\infty,\infty\}$ by $d(u,v):=\vert F^{-1}(u)-F^{-1}(v)\vert$, you make $\Bbb R \cup \{-\infty,\infty\}$ a metric space. And your sequences are now nothing else but sequences in a (special) metric space.
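For example, for the sequence $x_n=n$ one has $F^{-1}(x_n)=\frac{n}{1+n}\to 1=F^{-1}(\infty)$, hence $d(x_n,\infty)=\frac{1}{1+n}\to 0$: the sequence converges to $\infty$ in this metric, exactly as one would hope.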
find bound on condition number of matrix given matrix norm | $100=\big\Vert A \big\Vert_2 = \sigma_1$
$101^2 = \big\Vert A \big\Vert_F^2 = \sigma_1^2 + \sigma_2^2 + ....+ \sigma_{201}^2 + \sigma_{202}^2$
$k(A) =\frac{\sigma_1}{\sigma_{n}} =\frac{\sigma_1}{\sigma_{202}}= \frac{100}{\sigma_{202}}$
To get a lower bound on $k(A)$ we want to maximize the denominator. So what's the largest value that $\sigma_{202}$ may take? |
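(Spoiler, in case it helps: since $\sigma_1^2=100^2$, we get $\sigma_{202}^2 \le \sigma_2^2+\cdots+\sigma_{202}^2=101^2-100^2=201$, hence $k(A)\ge \dfrac{100}{\sqrt{201}}\approx 7.05$.)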
How do Integral Transforms work | There are many classes of problems that are difficult to solve - or at least quite unwieldy algebraically - in their original representations. An integral transform "maps" an equation from its original "domain" into another domain. Manipulating and solving the equation in the target domain can be much easier than manipulation and solution in the original domain. The solution is then mapped back to the original domain with the inverse of the integral transform. They have been successfully used for almost two centuries in solving many problems in applied mathematics, mathematical physics, and engineering science.
General formula: An integral transform is any transform $~\text T~$ of the following form:
$$F(u)={\displaystyle (Tf)(u)=\int _{t_{1}}^{t_{2}}f(t)\,K(t,u)\,dt}$$
The input of this transform is a function $~f~$, and the output is another function $~Tf~.~$ An integral transform is a particular kind of mathematical operator.
There are numerous useful integral transforms. Each is specified by a choice of the function $~K~$ of two variables, the kernel function, integral kernel or nucleus of the transform.
Of course, the interpretation of this new function $~F(u)~$ will depend on what the function $~K(t,u)~$ is. Choosing $~K(t,u)=0~$, for example, will mean that $~F(u)~$ will always be zero. But this tells us nothing about $~f(t)~$. Whereas choosing $~K(t,u)=t^u~$ will give us the $~u^\text{th}~$ moment of $~f(t)~$ whenever $~f(t)~$ is a probability density function. For $~u=1~$ this is just the mean of the distribution $~f(t)~$. Moments can be really handy.
A particularly interesting class of functions $~K(t,u)~$ are ones that produce invertible transformations (which implies that the transform destroys no information contained in the original function). Some kernels have an associated inverse kernel $~K^{−1}(u, t)~$ which (roughly speaking) yields an inverse transform:
$${\displaystyle f(t)=\int_{u_{1}}^{u_{2}}(Tf)(u)\,K^{-1}(u,t)\,du}$$
Whenever this is the case, we can view our operation as changing the domain from $~t~$ space to $~u~$ space. Each function $~f~$ of $~t~$ becomes a function $~F~$ of $~u~$ that we can convert back to $~f~$ later if we so choose to. Hence, we’re getting a new way of looking at our original function!
The Fourier transform :
It turns out that the Fourier transform, which is one of the most useful and magical of all integral transforms, is invertible for a large class of functions. We can construct this transformation by setting:
$$K(t,u) = e^{-i t u}\qquad\text{and}\qquad K^{-1}(u,t) = \frac{1}{2\pi}\,e^{i t u}$$
which leads to a very nice interpretation for the variable $~u~$. We call $~F(u)~$ in this case the “Fourier transform of $~f~$”, and we call $~u~$ the frequency. Why is $~u~$ frequency ? Well, we have Euler’s famous formula:
$$e^{i t u} = \cos(t u) + i \sin(t u)$$
so modifying $~u~$ modifies the oscillatory frequency of $~\cos(tu)~$ and $~\sin(tu)~$ and therefore of $~K(t,u)~$. There is another reason to call $~u~$ frequency though. If $~t~$ is time, then $~f(t)~$ can be thought of as a waveform in time, and in this case $~|F(u)|~$ happens to represent the strength of the frequency $~u~$ in the original signal. You know those bars that bounce up and down on stereo systems? They take the waveforms of your music, which we call $~f(t)~$, then apply (a discrete version of) the Fourier transform to produce $~F(u)~$. They then display for you (what amounts to) the strength of these frequencies in the original sound, which is $~|F(u)|~$. This is essentially like telling you how strong different notes are in the music sound wave.
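To make the "stereo bars" picture concrete, here is a minimal numerical sketch (the discrete Fourier transform in numpy stands in for the continuous transform; the two tone frequencies are arbitrary choices):
    import numpy as np

    fs = 256                                             # sampling rate (Hz), one second of signal
    t = np.arange(fs) / fs
    f = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)   # 5 Hz and 12 Hz tones

    F = np.fft.rfft(f)                                   # discrete analogue of F(u)
    freqs = np.fft.rfftfreq(fs, d=1/fs)
    strongest = freqs[np.argsort(np.abs(F))[-2:]]        # two strongest frequencies
    print(sorted(strongest))                             # [5.0, 12.0]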
The Laplace transform :
$$K(t,u) = e^{-tu}$$
This is handy for making certain differential equations easy to solve.
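For instance, a quick symbolic check with sympy (the particular function $t\,e^{-3t}$ is just an illustrative choice) shows how exponential damping in $t$ becomes a simple shift in $s$:
    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    F = sp.laplace_transform(t*sp.exp(-3*t), t, s, noconds=True)
    print(F)   # -> 1/(s + 3)**2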
The Hilbert transform :
$$K(t,u) = \frac{1}{\pi} \frac{1}{t-u}$$
This has the property that (under certain conditions) it transforms a harmonic function into its harmonic conjugate, elucidating the relationship between harmonic functions and holomorphic functions, and therefore connecting problems in the plane with problems in complex analysis.
The identity transform :
$$K(t,u) = \delta(t-u)$$
Here $~\delta~$ is the Dirac delta function. This is the transformation that leaves a function unchanged, and yet it manages to be damn useful.
References:
"Integral Transforms and Their Applications", by Lokenath Debnath and Dambaru Bhatta
"Mathematics for Physical Science and Engineering" by Frank E. Harris
https://www.askamathematician.com/2011/07/q-what-are-integral-transforms-and-how-do-they-work/
https://en.wikipedia.org/wiki/Integral_transform
How to learn Integral Transform?
https://mathoverflow.net/questions/2809/intuition-for-integral-transforms |
How do I derive a contradiction from an assumption that is "not asymmetric" | Proof Outline
1. Assume that $S$ is not asymmetric. Then $\exists x, y \in A, x \ne y$ such that $Sxy$ and $Syx$.
2. Use the definition of $S$ to write these in terms of $R$.
3. Then use transitivity of $R$. What do you get?
4. Again use the definition of $S$ on the result of Step 3. You get a contradiction.
Proving $R[Y_1, \ldots, Y_r]_{P}$ is integrally closed (trying to prove $\mathbb{P}^r_R$ is normal) | This is a community wiki answer recording the discussion from the comments, in order that this question might be marked as answered (once this post is upvoted or accepted).
If $R$ is a valuation ring then it is integrally closed, then so is any polynomial ring over $R$, and its localizations as well. – user26857 |
Calculating $ E[X\mid E[X\mid Y]]$ | The answer is $E(X|Y)$. To show that, we have to show that $$\int_E XdP= \int_E E(X|Y) dP$$ for all $E \in \sigma (E(X|Y))$. But note that $\sigma (E(X|Y)) \subseteq \sigma (Y)$. Hence the equation holds by definition of $E(X|Y)$.
Number of multisets with restrictions on specific element count | First, substitute $x_2 \mapsto x_2' + 1, x_3 \mapsto x_3' + 1$. Then this formulation is equivalent to
$$x_0 + x_1 + x_2' + x_3' = k-2$$
with the variables $\geq 0$. The total number of solutions is then ${k+1 \choose 3}$ by standard methods. The cases which don't work are those where $x_2' = x_3' = 0$. Can you take it from here?
Solving $\sin x = x^3-2x^2+1$ using Newton's Method | I do not understand why, starting with $x_0=1$, you would have a problem. $$f(x)=x^3-2 x^2+1-\sin (x)$$ $$f'(x)=3 x^2-4 x-\cos (x)$$ $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}=\frac{2 x_n^3-2 x_n^2+\sin (x_n)-x_n \cos (x_n)-1}{3 x_n^2-4 x_n-\cos (x_n)}$$ So, the iterates are
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 1 \\
1 & 0.5798249292 \\
2 & 0.5765888390 \\
3 & 0.5765861544
\end{array}
\right)$$
In fact, your first formula is totally correct but "after making a common denominator and simplifying more", there is a "small" mistake: $x_n \cos(x_n)$ is not $\cos(x_n^2)$.
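For reference, a minimal implementation of the iteration (a fixed number of steps rather than a tolerance, to keep it short) converges to the root $\approx 0.5765861544$ quoted above:
    import math

    def f(x):
        return x**3 - 2*x**2 + 1 - math.sin(x)

    def fprime(x):
        return 3*x**2 - 4*x - math.cos(x)

    x = 1.0
    for _ in range(6):
        x -= f(x) / fprime(x)   # Newton step
    print(x)                    # approximately 0.5765861544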
Repeating Square Root Closed Form | If we let $$y = \sqrt{x+\color{red}{\sqrt{x+\sqrt{x + \cdots}}}},$$ then if the limit were to exist, the inner square-root in $\color{red}{\text{red}}$ is also $y$ and hence we have
$$y = \sqrt{x+y}$$
Squaring the above, we get that
$$y^2 = x+y$$from which we can find $y$ as you have done. The slightly non-trivial part though is proving that $$\sqrt{x+\color{black}{\sqrt{x+\sqrt{x + \cdots}}}}$$ makes sense, i.e., the limit exists.
EDIT
To prove the above limit exists, here is one way.
$1$. Define $y_1 = \sqrt{x}$ and $y_{n+1} = \sqrt{x+y_n}$.
$2$. Prove by induction that $$ \sqrt{x} \le y_n<\dfrac{1+\sqrt{4x+1}}2$$
$3$. Make use of $(2)$ and prove by induction that $y_n$ is an increasing sequence.
$4$. Recall from real analysis that $\mathbb{R}$ is complete (i.e., every bounded monotone sequence converges) to conclude that the limit exists.
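A quick numerical check of steps $1$–$4$, for the sample value $x=2$ (where the limit should be $\frac{1+\sqrt{4\cdot 2+1}}{2}=2$):
    import math

    x = 2.0
    y = math.sqrt(x)              # y_1
    for _ in range(50):
        y = math.sqrt(x + y)      # y_{n+1} = sqrt(x + y_n)
    print(y, (1 + math.sqrt(4*x + 1)) / 2)   # both approximately 2.0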
Phase plane portrait center ellipse equations | Equation for the phase paths:
$$\frac {dx}{dy}=\frac {x+13y}{-2x-y}$$
$$({-2x-y}){dx}= ({x+13y})dy$$
$$-2x{dx}= ydx+xdy+13ydy$$
Note that $y\,dx+x\,dy=d(xy)$:
$$-2x\,{dx}= d(xy)+13y\,dy$$
Integrate:
$$C=x^2+ xy+ \frac {13} 2 y^2$$
It's an ellipse. |
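Indeed, the quadratic form $x^2+xy+\frac{13}{2}y^2$ is positive definite, since $B^2-4AC = 1-4\cdot 1\cdot\frac{13}{2}=-25<0$ and $A=1>0$; so for each $C>0$ the phase path $x^2+xy+\frac{13}{2}y^2=C$ is an ellipse centred at the origin, i.e. the equilibrium is a centre.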
LQR Controller for a nonlinear system - how to split the SS model as A and B matrices? | LQR stands for linear quadratic regulator, where: linear refers to the linear dynamics of the system (which can be either time-invariant or time-varying); quadratic refers to the cost function, an integral of a quadratic form, which the LQR minimizes; regulator refers to the goal of the control input, namely to bring the system to zero.
Since your system is not linear, you can't directly use LQR for it. You would either have to resort to linearizing your model around an equilibrium point, or use another technique like model predictive control (MPC). Linearization will in general only stabilize the system locally around the equilibrium point, while nonlinear MPC might be non-convex and therefore potentially really hard to solve.
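If you go the linearization route, the recipe is: compute the Jacobians of your dynamics at the equilibrium to get $A$ and $B$, then solve the algebraic Riccati equation for the gain. A minimal sketch (the system here is a hypothetical pendulum linearized about its upright equilibrium with $g/\ell=1$ and unit input gain; replace $A$, $B$, $Q$, $R$ with your own):
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # hypothetical linearized model x' = A x + B u, with x = [angle, angular velocity]
    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)           # state weighting in the quadratic cost
    R = np.array([[1.0]])   # input weighting

    P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)        # LQR gain, u = -K x
    print(K)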
Counter example of Lusin's theorem | Enumerate the rationals in $[0,1]$ as $a_1,a_2,\ldots$. Remove an open interval of length $2^{-n}\epsilon$ centred at $a_n$ for each $n$.
A compact set $E$ remains, of Lebesgue measure $\ge1-\epsilon$
and on this set $f$ is zero (so certainly continuous). |
Simplification ideas for an integration | Let's consider the rescaled version of your integral
$$
I_n(k)=\frac{-2}{\pi }\int_{-\infty}^{\infty} \frac{1}{y^2}\sin(2y)\sin(ny)e^{i y k}dy \quad (1)
$$
differentiating two times w.r.t $k$ gives us
$$
I_n''(k)=\frac{2}{\pi }\int_{-\infty}^{\infty} \sin(2y)\sin(ny)e^{i y k}dy \quad (2)
$$
Observe that this integral doesn't converge in the usual sense, so we have to be a little bit careful here. I won't go into the details, but I can assure you that everything is well defined if one thinks about all the operations in the sense of (tempered) distributions.
using the facts that
$$
\mathcal{FT}(\cos(a x))= \pi(\delta(k+a)+\delta(k-a))\\
\delta(\gamma x)=\frac{1}{|\gamma|}\delta(x)\\
2\sin(\alpha)\sin(\beta)=\cos(\alpha-\beta)-\cos(\alpha+\beta)
$$
we might calculate (2) as
$$
I_n''(k)= \delta(k+n-2)+\delta(k-n+2)-\delta(k+n+2)-\delta(k-n-2)
$$
Now, for integrating back w.r.t. $k$ we need the following (distributional) identities
$$
\int\delta(x)dx=\Theta(x)+C\\
\int\Theta(x)dx= x \Theta(x)+C
$$
we get
$$
I_n'(k)= \Theta(k+n-2)+\Theta(k-n+2)-\Theta(k+n+2)-\Theta(k-n-2)+C_1
$$
and
$$
I_n(k)= (k+n-2)\Theta(k+n-2)+(k-n+2)\Theta(k-n+2)\\-(k+n+2)\Theta(k+n+2)-(k-n-2)\Theta(k-n-2)\\+C_1k+C_2
$$
Now all what is left is to fix the constants $C_1,C_2$
we can do that by observing 1) that $I_0(k)=0$ and 2) that $I_n(\infty)$ should also vanish (this is a consequence of the Riemann-Lebesgue lemma). Therefore $C_1=C_2=0$ and we have
$$
I_n(k)= (k+n-2)\Theta(k+n-2)+(k-n+2)\Theta(k-n+2)\\-(k+n+2)\Theta(k+n+2)-(k-n-2)\Theta(k-n-2)
$$
which may be simplified by exploiting the properties of the Heaviside function. |
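For anyone who wants a numerical sanity check of this distributional computation, the following sketch compares the closed form with direct quadrature at one sample point (it uses the fact that the integrand of (1) is even in $y$ and its odd imaginary part cancels; the finite cutoff makes the agreement only approximate):
    import numpy as np
    from scipy.integrate import quad

    def I_closed(n, k):
        ramp = lambda u: u * (u > 0)          # u * Theta(u)
        return (ramp(k + n - 2) + ramp(k - n + 2)
                - ramp(k + n + 2) - ramp(k - n - 2))

    def I_numeric(n, k, cutoff=400):
        g = lambda y: np.sin(2*y) * np.sin(n*y) * np.cos(k*y) / y**2
        val, _ = quad(g, 1e-9, cutoff, limit=2000)
        return (-4/np.pi) * val

    print(I_closed(3, 0.7), I_numeric(3, 0.7))   # both close to -4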
A game of lines and points | Since player $A$ chooses the point randomly, the only way $B$ can win is if the line segments eventually get arbitrarily short. But if a line segment gets very short then $A$ is essentially choosing a fixed point on the boundary of the disc. Then when $B$ chooses the next random line, there is a positive probability that the line segment will be longer than some fixed positive length. Thus the probability that the line segments will stay very short forever starting at some iteration is 0. So player B loses with probability 1. |
Maximum Adjacency Matching | First, an adjacency matching is defined for a simple undirected (finite) graph $G$ to be a disjoint collection of unordered pairs of (distinct) edges which share exactly one endpoint. Therefore an edge of $G$ can belong to at most one such pair in the collection.
Then a maximum adjacency matching is one that has the largest possible number of edge pairs. Finiteness of $G$ guarantees that the maximum number of pairs is attained by some adjacency matching. |
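As a small example, in the path $1-2-3-4$ with edges $e_1=\{1,2\}$, $e_2=\{2,3\}$, $e_3=\{3,4\}$, the admissible pairs are $\{e_1,e_2\}$ and $\{e_2,e_3\}$; since both use $e_2$, an adjacency matching can contain at most one of them, so a maximum adjacency matching of this graph consists of exactly one pair.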
Finding the root of an equation involving digamma functions | I do not think that a closed form solution does exist.
The solution $x=0.607778331746657190693779866290$ is not recognized by inverse symbolic calculators. It is "surprisingly" close to
$$\frac{3 I_0(1)+I_0(2)}{10} =0.607778294$$ |
Are there real numbers that cannot be uniquely expressed with a finite number of symbols? | Internally, most of them cannot be so expressed, assuming you have a finite alphabet. The set of all finite strings from a finite alphabet is countable, but the reals are uncountable. Consequently, every function from the set of strings to the set of reals is missing uncountably many reals.
Externally, it is possible (but not necessary) that every real number can be uniquely expressed in the same metalanguage you use to describe the set theory you're using. Here is a related question asking whether, not just each individual real number, but every set can be defined. |
Operational priority calculation | What interests us is the expected time we need.
When we do Operation $1$ first, we are successful after $12$ minutes with probability $\frac{1}{2}$, otherwise we require $18$ minutes. We therefore expect the process to take $\frac{1}{2} \cdot 12 + \frac{1}{2}\cdot 18 = 15$ minutes on average.
When we do Operation $2$ first, we are successful after $6$ minutes with probability $\frac{1}{6}$, otherwise we require $18$ minutes. We expect the process to take $\frac{1}{6} \cdot 6 + \frac{5}{6} \cdot 18 = 16$ minutes.
Therefore doing Operation $1$ first will save us a minute. |
Knights and Knaves, both accuse each other of being Knaves. | If both were knaves, both would be telling the truth => contradiction.
If both were knights, both would be lying => contradiction.
Therefore, one of them is a knight and the other a knave, which it turns out does not lead to a contradiction: The knight tells the truth about the other being a knave, and the knave lies about the other being a knight, thus both claim the other to be a knave. |
Determine polynomial function of degree 4. | $f$ is a polynomial of degree $4$ so $f'$ is a polynomial of degree $3$. Given the hypotheses, it has two roots $-3$ and $1$ of odd multiplicity (or they would not be local extrema). Then the last root has to be real as well, given that the polynomial under study has real coefficients, contradicting the last hypothesis.
Sufficient statistic for $\theta$ from $f_Y(y) = e^{−(y − \theta)}$? | Presumably you are trying to factor the likelihood into something like
$$f_\theta(y)=h(y) \, g(u(y),\theta)$$ where $u(y)$ is the sufficient statistic (I would use a slightly different notation so may have made an error in understanding yours).
In that case you need the $e^{n\theta}\mathbb{1}_{\{Y_{(1)}\geq\theta\}}$ in the $g$ as they involve $\theta$, but not the $e^{-\sum\limits_{i=1}^{n}y_i}$. So you can choose
$h(y)=e^{-\sum\limits_{i=1}^{n}y_i}$
$g(u(y),\theta) = e^{n\theta}\mathbb{1}_{\{Y_{(1)}\geq\theta\}}$
$u(y) = y_{(1)}$ since it is part of the $g$ and has all the useful information from $y$ for $g$ but does not involve $\theta$ |
Let $U,V\subset \mathbb R^n$ is there a function $f:U\to V$ continuous, bijective s.t. $f^{-1}$ is not continuous. | Let $U \subset \mathbb R$ be any countable discrete set, for example $U = \{1,2,3, \ldots \}$ and let $V = \mathbb Q$ be the rational numbers. Observe both sets are countable. Then since $U$ is discrete any bijection $f \colon U \to V$ will be continuous (the preimage of any set at all is open). But $f^{-1}$ being continuous would imply that $\mathbb Q$ has the discrete topology, which is untrue; hence $f^{-1}$ cannot be continuous.
Replacing parts of inqueality in context of big o notation | For any $n\geq 1,$ it's obvious that $1\leq 4^{n-1}.$ Also, $$4^{n-2}\leq 4^{n-1}$$ for any $n$. So, they just bounded the terms in (1) by the above in order to combine everything together to get $4^n$, like you wanted. Since the inequalities hold, doing this process is valid (to answer your final question). Just be careful that you don't get overzealous and bound by something too large! |
Find the sum of $2^{-x}/x$ | According to Taylor's series,
$$\ln(1+x)=x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots, ~~~\forall x \in (-1,1].$$
Let $x=-\dfrac{1}{2}.$ Then $$\ln \frac{1}{2}=\ln\left(1-\frac{1}{2}\right)=-\dfrac{1}{2}-\frac{1}{2 \cdot 2^2}-\frac{1}{3 \cdot 2^3}-\frac{1}{4 \cdot 2^4}-\cdots=-\sum_{n=1}^{\infty}\frac{1}{n\cdot 2^n}
$$
Therefore
$$\sum_{n=1}^{\infty}\frac{1}{n\cdot 2^n}=-\ln\frac{1}{2}=\ln2.$$ |
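A one-line numerical check of this value:
    import math
    print(sum(1/(n * 2**n) for n in range(1, 60)), math.log(2))   # both approximately 0.693147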
100 numbers chosen on unit interval | Indeed binomial distribution with parameters $n=100$ and $p=0.5$.
Let's say that there is a success if the chosen number does not exceed $0.5$.
The event that the second largest does not exceed $0.5$ is the same as the event that there are at least $99$ successes.
So to be found is: $$P(X\geq99)=P(X=99)+P(X=100)=0.5^{100}\left(\binom{100}{99}+\binom{100}{100}\right)=0.5^{100}\cdot101$$ |
Direct Proof even and odd | Almost. It should be $8s^3$. Close, though! :) |