title | upvoted_answer
---|---|
How to solve for unknown variables in a matrix? | There's a fundamental result for this kind of problem:
A non-homogeneous linear system $AX=B$ has solutions if and only if the matrix $A$ and the augmented matrix $A|B$ have the same rank. When this condition is satisfied, the common rank is the codimension of the affine subspace of solutions.
Therefore, if $(a^2-4)b\ne 0$, the matrix $A$ has rank $3$, which is also the maximum rank of the augmented matrix, and there is a unique solution.
If $a=\pm 2$, $A$ has rank $2$, but the augmented matrix may have rank $3$. If it has rank $2$, the set of solutions has dimension $1$.
To determine the solutions, write the augmented matrix in reduced row echelon form; in the unique-solution case, the solution can be read off from the last column. |
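As a quick illustration of the rank test (a minimal sketch; the concrete matrix below is an assumed stand-in, since the original system is not reproduced in the question):

```python
import sympy as sp

a, b = 2, 1  # assumed sample parameter values
A = sp.Matrix([[1, 1, a], [1, a, 1], [a, 1, 1]])  # hypothetical coefficient matrix
B = sp.Matrix([1, b, 1])                          # hypothetical right-hand side

aug = A.row_join(B)
print(A.rank(), aug.rank())  # the system is solvable iff the two ranks agree
print(aug.rref()[0])         # in the unique-solution case, read the solution off the last column
```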
Theorem: If $E$ is a finite extension of $F$, then $E$ is an algebraic extension of $F$. | In a finite dimensional space, there are many characterizations of a basis.
1) A spanning linearly independent set.
2) A maximal linearly independent set.
3) A maximal orthonormal set.
The second condition is the important one for us. You are given the set $\{ 1, \alpha, \alpha^2, \ldots, \alpha^n\}$. This set has $n+1$ elements.
By definition of dimension, every basis of the given space has $n$ elements. But then, a basis, by the second definition, is a maximal linearly independent set. That means any set with cardinality larger than $n$ must have linearly dependent elements, and since $n+1>n$, $\{ 1, \alpha, \alpha^2, \ldots, \alpha^n\}$ must have linearly dependent elements. A nontrivial dependence relation $c_0 + c_1\alpha + \cdots + c_n\alpha^n = 0$ with coefficients in $F$, not all zero, exhibits $\alpha$ as a root of a nonzero polynomial over $F$, so $\alpha$ is algebraic over $F$. |
What is an ideal in topology? | That definition tells you exactly what an ideal is in this context: it’s a family of sets in some space that is closed under taking subsets and closed under unions. For example, let $X$ be any infinite space, and let $\mathscr{F}$ be the family of all finite subsets of $X$; then $\mathscr{F}$ is an ideal.
Subsets of finite sets are finite, so if $G\subseteq F\in\mathscr{F}$, then clearly $G$ is finite, and therefore $G\in\mathscr{F}$: $\mathscr{F}$ is closed under taking subsets.
If $F,G\in\mathscr{F}$, then $F\cup G$ is finite, so $F\cup G\in\mathscr{F}$.
In particular, if $\tau$ is the cofinite topology (sometimes called the finite closed topology) on an infinite set $X$, then $\mathscr{F}=\{X\setminus U:U\in\tau\setminus\{\varnothing\}\}$: the closed proper subsets of $X$ form an ideal.
An easy and useful exercise is to prove that if $\mathscr{L}$ is an ideal, and $\mathscr{A}$ is a finite subset of $\mathscr{L}$, then $\bigcup\mathscr{A}\in\mathscr{L}$: an ideal is not closed just under taking simple unions of two of its members, but also under taking the union of any finite number of its members.
If you’re acquainted with the notion of a filter on a set $X$, you should verify that a family $\mathscr{L}$ of subsets of $X$ is an ideal if and only if $\{X\setminus L:L\in\mathscr{L}\}$ is a filter on $X$, and $\mathscr{U}$ is a filter on $X$ if and only if $\{X\setminus U:U\in\mathscr{U}\}$ is an ideal on $X$: ideals and filters are dual notions.
It’s worth noting that neither of these notions actually involves the topology: you define ideals and filters in this way on any set. In a specifically topological setting we sometimes look at slightly more general notions of ideal and filter that do involve the topology. We might, for instance, speak of an ideal $\mathscr{C}$ of closed sets in a space $X$, meaning that $\mathscr{C}$ is a family of closed subsets of $X$ such that
if $H\subseteq C\in\mathscr{C}$, and $H$ is closed, then $H\in\mathscr{C}$, and
if $C,D\in\mathscr{C}$, then $C\cup D\in\mathscr{C}$. (Of course $C\cup D$ is automatically closed if $C$ and $D$ are.)
Added: Henno Brandsma reminds me that the nowhere dense sets are an excellent genuinely topological example of an ideal on a topological space. Let $\langle X,\tau\rangle$ be a topological space; a set $N\subseteq X$ is nowhere dense if $\operatorname{cl}N$ does not contain any non-empty open set. (For example, $\Bbb Z$ is nowhere dense in $\Bbb R$: it is already closed, and its interior is empty.) Let
$$\mathscr{N}=\{N\subseteq X:N\text{ is nowhere dense}\}\,;$$
then $\mathscr{N}$ is an ideal, though I’ll leave it to you to check that it satisfies the defining properties. |
Calculate the limit $\lim_{x\to 0} \left(\frac 1{x^2}-\cot^2x\right)$ | Here's a non-series solution.
$$\lim_{x \rightarrow 0} \left(\frac{1}{x^2} - \cot^2(x)\right) = \lim_{x \rightarrow 0} \frac{\sin^2(x) - x^2\cos^2(x)}{x^2\sin^2(x)}$$
Instead of directly doing l'Hospital, we can make our life easier using $\lim_{x \rightarrow 0}\frac{\sin(x)}{x} = 1$, or rather, $\lim_{x \rightarrow 0}\frac{x}{\sin(x)} = 1$:
$$\lim_{x \rightarrow 0} \frac{\sin^2(x) - x^2\cos^2(x)}{x^2\sin^2(x)} = \lim_{x \rightarrow 0}\left(\frac{x}{\sin(x)}\right)^2\left(\frac{\sin^2(x) - x^2\cos^2(x)}{x^4}\right) = \lim_{x \rightarrow 0} \frac{\sin^2(x) - x^2\cos^2(x)}{x^4}$$
Now l'Hospital. Note that we can freely factor out constants, and also $\cos(x)$ (as it approaches 1 as $x \rightarrow 0$):
\begin{align*} \lim_{x \rightarrow 0} \frac{\sin^2(x) - x^2\cos^2(x)}{x^4} &= \lim_{x \rightarrow 0} \frac{2\sin(x)\cos(x) - 2x\cos^2(x) + 2x^2\sin(x)\cos(x)}{4x^3} \\
&= \frac{1}{2}\lim_{x \rightarrow 0}\frac{\sin(x) - x\cos(x) + x^2\sin(x)}{x^3} \\
&= \frac{1}{2}\lim_{x \rightarrow 0}\frac{\cos(x) - \cos(x) + x\sin(x) + 2x\sin(x) + x^2\cos(x)}{3x^2} \\
&= \frac{1}{6}\lim_{x \rightarrow 0}\frac{3x\sin(x) + x^2\cos(x)}{x^2} \\
&= \frac{1}{6}\left(\lim_{x \rightarrow 0}3\frac{\sin(x)}{x} + \lim_{x \rightarrow 0} \cos(x)\right) \\
&= \frac{1}{6}(3 + 1) = \frac{2}{3}\end{align*} |
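A quick numerical sanity check of the value $\frac23$ (a sketch; for much smaller $x$ the subtraction of two nearly equal large numbers loses precision):

```python
import numpy as np

for x in [1e-1, 1e-2, 1e-3]:
    print(x, 1/x**2 - 1/np.tan(x)**2)   # tends to 2/3 ≈ 0.6667
```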
About set of measurable function | Let's restrict the domain to non-negative $x$ (it can be extended to negative $x$, but that would need some thought as to what the function might then mean).
Your function can be rewritten as $$f(x)=1 - x\bigg\lfloor\frac{1}{x}\bigg\rfloor$$
for $x \gt 0$; its graph (image omitted here) is a sawtooth-like curve.
It has a countable number of discontinuities, at reciprocals of positive integers, though is right-continuous at $0$.
It is therefore integrable. Indeed, for $x \ge 1$ I think you would have $\int^x_0 f(y)\,dy= x - \frac{\pi^2}{12}$, and rather more complicated expressions for the integrals $\int_0^x f(y)\,dy$ when $0 \lt x \lt 1$. |
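A numerical check of the claim at $x=1$ (a sketch; `quad` complains mildly about the infinitely many kinks near $0$, so raise its subdivision limit and treat the output as approximate):

```python
import numpy as np
from scipy.integrate import quad

f = lambda y: 1 - y * np.floor(1 / y)
val, err = quad(f, 0, 1, limit=1000)
print(val, 1 - np.pi**2 / 12)   # both ≈ 0.177533
```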
Random Variable Vs. Probability Function Intuition? | The word "experiment", because of its connotations in chemistry or physics, clouds our understanding. In statistics we need not "conduct" the experiment as in science, tightly controlling the settings. No.
To understand where probability models can fit in, one should come out of textbook examples of probability associated with gambling (cards, dice, coin tosses). Then we see natural examples of random variables that will give us intuition. Many times we are not interested in the outcome directly, but on some aspect (that could be measured with numbers) of the outcome.
For example, a telemarketing executive might call all numbers from a printed phone book, in order. Making a call is a random experiment.
We don't know who would pick up, whether the line would be busy, etc.; all of these are unpredictable in advance.
One random variable could be the duration of the call (a numerical measurement on the outcome of the dialling of the call). Another random variable (in the same experiment, that is, the same sample space) could be how much in sales (measured in dollars) the executive accomplishes with that call.
Another "experiment": assume a restaurant opens at noon every day. Who the first customer will be is unpredictable, so it is a random experiment. The sample space here is potentially the whole population of the city. A random variable for this experiment could be how much the customer spent (the bill paid). Another could be how long the customer stayed in the restaurant. Another random variable could be how many persons came in as the first batch of customers (a family dining out, a person treating friends, a group of co-workers coming out together from the office for lunch). |
Is it possible to define the flow of a continuous function? | It is possible to define a multifunction (that is, a function from some subset of $\mathbb{R} \times \mathbb{R}^n$ into the set of compact connected subsets of $\mathbb{R}^n$).
But perhaps you are asking whether one can choose in a continuous way one solution for each initial value. This is impossible, as the following example shows.
Let
$$
f(y) = \begin{cases}
2\sqrt{y} & \text{ for } y \ge 0,
\\
0 & \text{ for } y < 0.
\end{cases}
$$
Since for each $p > 0$ we have uniqueness of solutions, your function must have the property
$$
\varphi(p,s) = (\sqrt{p} + s)^2, \quad p > 0, \ s \in \mathbb{R}.
$$
Similarly,
$$
\varphi(p,s) = p, \quad p < 0, \ s \in \mathbb{R}.
$$
We thus have
$$
\lim\limits_{p\to 0^-} \varphi(p, 1) = 0 \ne 1 = \lim\limits_{p\to 0^+} \varphi(p, 1).
$$
Incidentally, notice that $\lim\limits_{p\to 0^-} \varphi(p, \cdot)$ is the minimal solution of the IVP
$$
\begin{cases}
y' = f(y) \\
y(0) = 0,
\end{cases}
$$
and $\lim\limits_{p\to 0^+} \varphi(p, \cdot)$ is the maximal solution of the above IVP. |
Non-finite abelian extensions of $\mathbb{Q}$ | All right, let's take these in turn. Firstly, the maximal abelian extension of $\Bbb Q$ is $\displaystyle\cup_n\Bbb Q(\zeta_n)$; this is just the Kronecker-Weber theorem. Now, writing $n=p_1^{e_1}\ldots p_r^{e_r}$, so that $\Bbb Q(\zeta_n) = \displaystyle\prod_{i=1}^r\Bbb Q(\zeta_{p_i^{e_i}})$, the fact that each $\Bbb Q(\zeta_{p^r})$ is ramified only at $p$ tells us that the field you require, which is both an abelian extension of $\Bbb Q$ and unramified at all the primes in $S$, is the stated one.
As for the Galois group, well we know that $\langle\zeta_{p^r}\rangle$ is a cyclic group of order $p^r$, and that the automorphism group of such a group is just
$$(*)\qquad \begin{cases} \Bbb Z/2\times\Bbb Z/2^{r-2} && p = 2 \\ \Bbb Z/(p-1)\times \Bbb Z/p^{r-1} && p > 2\end{cases}$$
the former case being understood to have $r>1$. Now since for each pair of integers $m,n$ with $\gcd(m,n)=1$ we have $\Bbb Q(\zeta_n)\cap\Bbb Q(\zeta_m)=\Bbb Q$, we know that
$$\text{Gal}\left(\Bbb Q(\zeta_n)\Bbb Q(\zeta_m)/\Bbb Q\right)\cong \text{Gal}\left(\Bbb Q(\zeta_n)/\Bbb Q\right)\times \text{Gal}\left(\Bbb Q(\zeta_m)/\Bbb Q\right)$$
so we can reduce the problem to the product of the Galois groups of each of the $\Bbb Q(\mu(p^\infty))$. But this is where the factor you are confused about shows up: the result $(*)$ is originally due to Gauß and is well known. You can find it in many places, e.g. Serre's Course in Arithmetic around page 15, where he talks about the multiplicative group of $\Bbb Q_p$ (which of course boils down to the units of $\Bbb Z_p$, which is what you're looking at in this case). The main intuition is that the $\Bbb Z/(p-1)$ factor comes from the fact that in $\Bbb F_p$, the field with $p$ elements, an automorphism is given by multiplication by a non-zero element, and there are $p-1$ such possibilities. The fact that this is cyclic comes from the existence of a primitive root, which you typically prove in a first course in number theory.
To see the projective limit is actually quite easy, you have $\displaystyle\varprojlim_n\Bbb Z/(p-1)\times\Bbb Z/p^n$ is the object which projects onto every $\Bbb Z/(p-1)\times\Bbb Z/p^n$ in a way consistent with the usual projection maps, but we know by definition $\Bbb Z_p$ is this group for the projective system of $\{\Bbb Z/p^n\}$, and the $\Bbb Z/(p-1)$ factor is constant, so the limit is exactly the stated object with the projection maps being $id\times \phi_n$ where $id$ is the identity map on the first factor and $\phi_n$ is the canonical projection map from $\Bbb Z_p\to\Bbb Z/p^n\cong \Bbb Z_p/p^n$. |
Find the remainder when $ p(x) = (x+2)^{101} + (x+3)^{200}$ is divided by $ x^2 +5x + 6 $. | HINT. Set $P(x)=Q(x)(x^2+5x+6)+ax+b$ and choose two suitable values of $x$ to get simultaneous equations for $a$ and $b$. |
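For reference, the two suitable values are the roots $x=-2$ and $x=-3$ of $x^2+5x+6$, giving $-2a+b=1$ and $-3a+b=-1$; a one-line computer check of the resulting remainder (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
p = (x + 2)**101 + (x + 3)**200
print(sp.rem(p, x**2 + 5*x + 6, x))   # -> 2*x + 5
```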
Second partial derivatives and chain rule doubt | There is a mistake in your computations. Note that
$$\frac{\partial z}{\partial r}=\frac{\partial z}{\partial x} \frac{\partial x}{\partial r}+\frac{\partial z}{\partial y}\frac{\partial y}{\partial r}=
\frac{\partial z}{\partial x} 2r+\frac{\partial z}{\partial y}2s.$$
For the second derivative, it is useful to change notation by considering
$$\frac{\partial z}{\partial r}=g\cdot 2r+ h\cdot 2s,
$$
with $g(x,y):=\frac{\partial z}{\partial x}$ and $h(x,y):=\frac{\partial z}{\partial y}$. All you need is to apply the partial derivative, as before. This answer can help you with notation. |
How do I solve $\frac{d}{dt}x(t)+3x(t)=\sin(2t)$? | Note that the derivative of $z:=x e^{3t}$ is $\dot z=(\dot x+3x)e^{3t}$.
Once you solve $\dot z = e^{3t}\sin 2t$, you also find $x$, so
$$x(t)=e^{-3t}\cdot\left(\int_0^te^{3\tau}\sin 2\tau\,\mathrm d\tau+C\right) $$
That integral calls for integration by parts. |
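A quick symbolic check of the whole procedure (a sketch using sympy's ODE solver):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
sol = sp.dsolve(sp.Eq(x(t).diff(t) + 3*x(t), sp.sin(2*t)), x(t))
print(sol)   # x(t) = C1*exp(-3*t) + 3*sin(2*t)/13 - 2*cos(2*t)/13
```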
Poisson Distribution with a maximum data value | Let the daily demand $X$ be a Poisson random variable with mean $\lambda = 2.6$. Then the daily income random variable is $Y = 5(X \wedge 3) = 5\min(X,3)$. We want to find $\operatorname{E}[Y]$, the expected value of the daily income, so $$\begin{align*} \operatorname{E}[Y] &= \operatorname{E}[5 \min(X,3)] = 5 \sum_{x=0}^\infty \min(x,3) \Pr[X = x] \\ &= 5 \Biggl(\sum_{x=0}^2 (x-3) \Pr[X = x] + \sum_{x=0}^\infty 3 \Pr[X = x] \Biggr) \\ &= 5(-3\Pr[X = 0] - 2\Pr[X = 1] - \Pr[X = 2] + 3) \\ &= 15 - 5e^{-\lambda}\left(3 + 2\lambda + \frac{\lambda^2}{2}\right) \\ &\approx 10.6996.\end{align*}$$ The explanation is as follows: if $X \ge 3$, then there can be at most a daily income of $15$, since the demand exceeds the supply of $3$ vacuum cleaners. So another way to write this is $$\begin{align*} \operatorname{E}[Y] &= 5(0 \Pr[X = 0] + 1 \Pr[X = 1] + 2 \Pr[X = 2] + 3 \Pr[X \ge 3]) \\ &= 5(\Pr[X = 1] + 2\Pr[X = 2] + 3(1 - \Pr[X \le 2])) \\ &= 5(3 - 3\Pr[X = 0] - 2 \Pr[X = 1] - \Pr[X = 2]), \end{align*}$$ which is exactly the same expression as above. |
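The closed form can be checked numerically by truncating the Poisson sum (a sketch; the tail beyond $k=60$ is negligible):

```python
import math

lam = 2.6
pmf = lambda k: math.exp(-lam) * lam**k / math.factorial(k)
print(5 * sum(min(k, 3) * pmf(k) for k in range(60)))   # ≈ 10.6996
```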
Profit Function where total revenue is re-spent on production? | Let's derive an equation that gives you profits and total wealth after reinvesting for $T\ge 1$ iterations.
The per-unit profit $\pi$ depends on per-unit production cost $c$ and revenue $r$, $\pi=r-c$. Your initial endowment of wealth is $w_1$. In the first iteration, using all your wealth $w_1$ you can produce $w_1/c$ units (I assume for simplicity that this is an integer value), and the total profit from the first iteration is
$$\Pi_1=\pi w_1/c=(r-c)w_1/c=rw_1/c-w_1.$$
In the second iteration, your wealth is
$$w_2=w_1+\Pi_1=w_1+rw_1/c-w_1=rw_1/c.$$
The total profit in the second iteration is
$$\Pi_2=\pi w_2/c=rw_2/c-w_2=r^2w_1/c^2-rw_1/c.$$
In the third iteration, your wealth is
$$w_3=w_2+\Pi_2=r^2w_1/c^2,$$
and the total profit is
$$\Pi_3=\pi w_3/c=rw_3/c-w_3=r^3w_1/c^3-r^2w_1/c^2.$$
You can go on like this, but using induction you will find that for iteration $T$, you get a profit of
$$\Pi_T= r^Tw_1/c^T-r^{T-1}w_1/c^{T-1}.$$
The total money you have after iteration $T$ is
$$w_{T+1}=\Pi_T+w_T=r^Tw_1/c^T.$$
In your case, plug in the values $c=5$, $r=8$, and $w_1=50,000$.
Just one non-mathematical remark: it is not plausible that conditions such as prices (revenue $r$) or costs $c$ do not change if you produce a lot as $T\to\infty$. |
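A short sketch of the recursion with the given numbers, confirming $w_{T+1}=(r/c)^T w_1$:

```python
c, r, w1 = 5, 8, 50_000
w = w1
for T in range(1, 6):
    profit = (r - c) * w / c   # Pi_T: per-unit profit times units produced
    w += profit                # w_{T+1} = w_T + Pi_T = (r/c)**T * w1
    print(T, profit, w, (r / c)**T * w1)
```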
Compute the $E(X)$ and $\sigma$ of a return? | Let us compute the variance of $R$. The calculation is substantially more complicated than the one in the post. We work with variance, not standard deviation. But recall that variance is the square of the standard deviation.
You may recall the formula
$$\operatorname{Var}(aX+bY)=a^2\operatorname{Var}(X)+b^2\operatorname{Var}(Y)+2ab\operatorname{Cov}(X,Y).$$
That gives you everything that is needed to calculate the variance of $R$, except that you were given the correlation coefficient $\rho(R_s,R_b)$, not the covariance. However, we have
$$\rho(X,Y)=\frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}},$$
so now you have all the necessary ingredients. Putting things together, and doing the computations, is left to you.
As to the second question, it is a lot easier than the first. You could compute the mean of $R$ explicitly as a function of $w$, and graph the mean return $y$ as a function of $w$. Note that $w$ ranges between $0$ and $1$. Then the $w$ that yields the greatest mean is visually obvious. But instead, think money. The stock has greater mean yield (but greater volatility). So to maximize your mean, what should you do?
Added: This is about the additional computation that you did. The covariance was calculated correctly. The variance of $aR_s+bR_b$ is therefore $$a^2(0.0049)+b^2(0.0016)+2ab(0.0007).\tag{$1$}$$
Here $a=1-w$ and $b=w$. But you were only asked to compute for $w=\frac{1}{2}$, so we take $a=b=\frac{1}{2}$. The last line "Therefore the Variance $\dots$" is not correct. We calculate, using $(1)$. The variance turns out to be $0.001975$. You were asked for the standard deviation, which is therefore about $0.04444$. |
"Direct" proof for a bound on the difference between endpoints of a path? | Use
$$\eqalign{\|\gamma(b)-\gamma(a)\|^2&=\bigl(\gamma(b)-\gamma(a)\bigr)\cdot\bigl(\gamma(b)-\gamma(a)\bigr)=\bigl(\gamma(b)-\gamma(a)\bigr)\cdot\int_a^b\gamma'(t)\>dt\cr&=\int_a^b \bigl(\gamma(b)-\gamma(a)\bigr)\cdot\gamma'(t)\>dt\leq\int_a^b \bigl\|\gamma(b)-\gamma(a)\bigr\|\>\bigl\|\gamma'(t)\bigr\|\>dt\cr&= \bigl\|\gamma(b)-\gamma(a)\bigr\|\int_a^b\bigl\|\gamma'(t)\bigr\|\>dt\ .\cr}$$ |
Is this direct proof of an inequality wrong? | The problem is that you did not state that those are equivalences (usually denoted by $\iff$) between your lines. And you do not even need the equivalences; you only need the implications from the bottom to the top, so you should perhaps write your proof "upside down".
The way your proof is presented right now makes it look like the top implies the bottom. (Which it does) but that doesn't mean that the bottom implies the top. So I'd write down:
For any positive integer $n$ the following inequality obviously holds:
$$2n > n$$
This implies
$$n^2-n^2+2n > n^2-n^2+n$$
etc.
Notice that this is also the way you'd want to read your proof so that people understand it. And it is important to realize that this is not necessarly the way you found the proof.
Whenever you find a proof somewhere you can be quite sure that the way it is presented to you has nothing to do with how the one who proved it found the proof. It is really just written down nicely in order for the reader to be able to nicely follow the chain of arguments.
But don't worry, writing "nice" proofs does take a while at the beginning of your math career =)
Regarding the comment: You could prove the inequality by contradiction, e.g. suppose that the inequality does not hold, and then find a contradiction, but this is not necessary here.
EDIT: It seems to me that you are unfamiliar with the concept of logical implications. If two mathematical statements $A,B$ (e.g. equations, inequalities etc.) mean the same thing, they are called equivalent, denoted by the bidirectional double arrow. $$A \iff B$$
Example: Let $x$ be a real number. Then following equivalence holds: $$x -5 = 0 \iff x = 5$$
But if $A$ holds only if $B$ holds, then we say $A$ implies $B$, or alternatively, "if $A$ holds then $B$ must hold too." This is denoted by a simple double arrow: $$A \implies B$$
Example: Let $x$ be a real number. Then following implication holds:
$$x = 5 \implies x^2 = 25$$ But "the other way around" is not necessarily true, as the right statement is also true for $x=-5$ but not the left one. |
Random sequence is dense in (0,1) | Define $S:=\left\{ X_{n}\mid n=1,2,\dots\right\}$ as a random set.
Define $I:=\left\{ \left\langle r,s\right\rangle\in\mathbb{Q}\cap\left(0,1\right) \mid r<s\right\} $ and note that $I$ is countable.
For $\left\langle r,s\right\rangle \in I$
let $E_{r,s}$ denote the event that $S\cap\left(r,s\right)=\emptyset$.
Then $P\left(E_{r,s}\right)=0$ and consequently $P\left(\bigcup_{\left\langle r,s\right\rangle \in I}E_{r,s}\right)=0$.
(This can be proved on the basis of $P\{X_1\notin (r,s)\}=1-(s-r)<1$ combined with the fact that the $X_n$ are i.i.d.)
Equivalently: $$P\left(\bigcap_{\left\langle r,s\right\rangle \in I}E_{r,s}^{c}\right)=1$$
and this
can be recognized as the event that $S$ is dense in $\left(0,1\right)$. |
Limits of integration for random variable | Hint: Are the random variables independent?
If so, you can avoid integration by using the facts
the sum of independent normally distributed random variables has a normal distribution
the mean of the sum of random variables is equal to the sum of the means
the variance of the sum of independent random variables is the sum of the variances
for a standard normal distribution $N(0,1)$: $\Phi^{-1}(0.99)\approx 2.326$ |
Adaptation of sum of arrival times of Poisson process | Let $L_t=\sum\limits_{i=1}^{N_t}T_i$, then $L_t=\int\limits_0^t(N_t-N_s)\mathrm ds$. Since $(N_s)_{0\leqslant s\leqslant t}$ is $F_t$-measurable, this proves that $L_t$ is $F_t$-measurable.
To prove the key formula used above, fubinize $L_t$; that is, using the identity $[s\lt T_i]=[N_s\lt i]$, write $L_t$ as
$$
L_t=\sum\limits_{i\geqslant1}\mathbf 1_{i\leqslant N_t}\int_0^\infty \mathbf 1_{s\lt T_i}\mathrm ds=\int_0^\infty \sum\limits_{i\geqslant1}\mathbf 1_{N_s\lt i\leqslant N_t}\mathrm ds=\int_0^\infty(N_t-N_s)^+\mathrm ds.
$$ |
Prove that $(A-B)\cup (B-A)=(A\cup B)-(A\cap B)$ | Looks like you're on the right track. You could make two cases now based on your "or" statement.
Case $1$: $x \in A$ and $x\notin B$. Then obviously $x\notin A\cap B$.
Case $2$: $x \in B$ and $x\notin A$. Then again $x \notin A\cap B$.
In either case (that is, whether $x\in A$ or $x\in B$, i.e. $x \in A\cup B$) we know $x\notin A\cap B$, so $x\in (A\cup B)\setminus (A\cap B)$, and you can conclude that $$(A-B)\cup (B-A)\subseteq (A\cup B)-(A\cap B)$$ Can you complete the second half of the proof? |
Is there any efficient algorithm for computing all semigroups of order n? | There is no efficient algorithm for computing all the semigroups of order $n$. The number of semigroups of order 10 is not even known (up to isomorphism and anti-isomorphism). This is an extremely hard problem, and the number of semigroups grows very, very quickly.
Almost all semigroups of any given size are 3-nilpotent, meaning that the product of any three elements is some fixed element 0, and that there is a product of two elements that is not 0. Arguably these semigroups are not very interesting.
See my answer here where I answer a similar question.
A library of all semigroups of order at most 8 (again up to isomorphism and anti-isomorphism) is available in the GAP package Smallsemi. |
characterization of Riemann integrability | Let $\{p_n\}$ be a sequence of all rationals in $[0,1]$. For each $n$, put $f_n(p_m) = 1$ for $m\leq n$, and $f_n(x)=0$ everywhere else on $[0,1]$. Each $f_n$ has a finite number of discontinuities, hence is Riemann-integrable. Their limit is not. |
How can one know that a given integral diverges? | $$\int^\Lambda d^dk \frac{k^n}{(k^{2}-\mu^2)^m}\approx\int^\Lambda dk k^{d-1} \frac{k^n}{(k^{2}-\mu^2)^m}\approx \Lambda\Lambda^{d-1+n-2m}=\Lambda^{d+n-2m}$$
this is the behaviour of the integral in the ultraviolet regime, namely as $k$ goes to infinity. You have to think of $\Lambda$ as a constant (cut-off) and then push it to infinity to understand the kind of divergence. Recall that:
$$\int^\Lambda dk \frac{1}{k}=\log\Lambda \quad \text{(log divergent)}$$
$$\int^\Lambda dk =\Lambda \quad \text{(linearly divergent)}$$
$$\int^\Lambda dk\, k =\Lambda^2 \quad \text{(quadratically divergent)}$$ |
Differentiation under the integral sign, please help | The steps to differentiate under the integral sign are as follows;
\begin{align}
\frac{\mathrm{d}}{\mathrm{d}x} &\left (\int_{a(x)}^{b(x)}f(x,t)\,\mathrm{d}t \right)= \\
&\quad= f\big(x,b(x)\big)\cdot b'(x) - f\big(x,a(x)\big)\cdot a'(x) + \int_{a(x)}^{b(x)} f_x(x,t)\; \mathrm{d}t.
\end{align}
Basically, the limits are important, as you must evaluate your function at these limits, and you evaluate a partial derivative in the last term; that's what the subscript $x$ means.
For reference, see the Leibniz Integral Rule on Wolfram Mathworld. |
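A symbolic spot check of the rule on an assumed sample integrand and limits (a sketch; $f$, $a$, $b$ below are arbitrary choices, not from the original question):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = sp.sin(x * t)     # assumed sample integrand
a, b = x, x**2        # assumed sample variable limits

lhs = sp.diff(sp.integrate(f, (t, a, b)), x)
rhs = (f.subs(t, b) * sp.diff(b, x) - f.subs(t, a) * sp.diff(a, x)
       + sp.integrate(sp.diff(f, x), (t, a, b)))
print(sp.simplify(lhs - rhs))   # 0
```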
Convolution with Gaussian, without distribution theory, part 3 | Well, it seems that Folland's Real Analysis contains a theorem that applies in this case. My edition is the second one, so you might look at Thm. 8.15 for an idea of the proof in an even more general setting (the same result is true if you only require that $p\in [1,+\infty]$). |
$(2p-x-y)^2 = 4xy+1$ and $2p-x-y = 2m+1$ | Hint:
From first two identities, you have
\begin{align*}
(2p-x-y)^2 &=(2m+1)^2 \\
&=4m^2+4m+1\\
&=4xy+1
\end{align*}
So you have $m^2+m=xy$, i.e. $m(m+1)=xy$. Writing $d:=\gcd(m,x)$, $m=dh$, and $x=dk$ (so that $\gcd(h,k)=1$), you can write:
\begin{align*}
m^2+m &= xy \\
(m+1)dh &= dky \\
(m+1)h &= ky
\end{align*}
Therefore you have $\frac{m+1}{y} = \frac{k}{h}$ where $h,k$ are relatively prime. Hence you should have
$$
m+1 = \gcd(m+1,y)k, \quad y = \gcd(m+1,y)h
$$ |
$\sum$ over disjoint union of sets | This holds in general only for $a_k\ge 0$.
(For a counterexample consider e.g. $A_k:=\{2k,\ 2k+1\}$, and $a_n:=(-1)^n$.)
Now suppose that $a_k\ge 0$. Then $\sum_{k\in A}a_k\ =\ \sup\bigl\{\sum_{k\in A'}a_k \mid A'\text{ is a finite subset of }A\bigr\}$.
Using this, for any $N\in\Bbb N$, we always have
$$\sum_{k\in A}a_k \ \ge\ \sum_{n=0}^N\left(\sum_{k\in A_n}a_k\right)\,,$$
as the right-hand side is a finite sum of suprema of finite sums (of non-negative numbers).
By the same reasoning, we also have
$$\sum_{k\in A}a_k\ \le\ \sum_{n=0}^\infty\left(\sum_{k\in A_n}a_k\right)\,.$$ |
How to find the formula to this summation and the prove it by induction? | We have
$$
\sum_{k = 0}^{n} (2k + 1) \binom{n}{k}
= 2 \sum_{k = 0}^{n} k \binom{n}{k} + \sum_{k = 0}^{n} \binom{n}{k}
= 2 \underbrace{\sum_{k = 0}^{n} k \binom{n}{k}}_{:= S} + 2^n
$$
Since $(x + 1)^n = \sum_{k = 0}^{n} \binom{n}{k} x^k$ we have
$$
n (x + 1)^{n - 1}
= \frac{d}{dx} (x + 1)^n
= \sum_{k = 0}^{n} \binom{n}{k} \frac{d}{dx} x^k
= \sum_{k=0}^{n} \binom{n}{k} k x^{k-1}
$$
With $x = 1$ you find $S = n 2^{n - 1}$ and therefore
$$
\sum_{k = 0}^{n} (2k + 1) \binom{n}{k}
= 2n 2^{n - 1} + 2^n
= n 2^n + 2^n
= 2^n (n + 1).
$$ |
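A one-line numerical confirmation of the identity (a sketch):

```python
from math import comb

n = 7
print(sum((2*k + 1) * comb(n, k) for k in range(n + 1)), 2**n * (n + 1))   # 1024 1024
```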
Calculate random integer inside a range of real numbers | If an integer exists between minFloat and maxFloat, this should work:
let min = ceil(minFloat)   // smallest integer in the range
let max = floor(maxFloat)  // largest integer in the range
// r is a uniform random real in [0, 1)
F(minFloat, maxFloat) = floor(r * (max - min + 1) + min) |
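The same idea as a runnable Python sketch (here `random.randint` already samples uniformly, with both endpoints inclusive):

```python
import math
import random

def random_int_between(min_float: float, max_float: float) -> int:
    lo, hi = math.ceil(min_float), math.floor(max_float)
    if lo > hi:
        raise ValueError("no integer lies in the given range")
    return random.randint(lo, hi)

print(random_int_between(2.3, 7.9))   # one of 3, 4, 5, 6, 7
```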
Kernel and rank involving composition of linear transformations | $A \subseteq B \Leftrightarrow A \cap B = A$
$ ker(S) \cap ker(T∘S) \subset ker(S)$
Let $ x \in ker(S) \cap ker(T∘S)$, $x \in ker(S)$
$ ker(S) \subset ker(S) \cap ker(T∘S)$
Let $ x \in ker(S) $ , $T(S(x)) =T(0) = 0$, so $ x \in ker(S) \cap ker(T∘S)$
Then $$ker(S) \subseteq ker(T∘S) \Leftrightarrow ker(S) \cap ker(T∘S) = ker(S)$$
and $$dim(ker(S)) ≤ dim(ker (T∘S))$$
According to the rank–nullity theorem: $ r(T∘S) + dim(ker (T∘S)) = dim(U)$
and $ r(S) + dim(ker (S)) = dim(U)$
Then $rk(T∘S) ≤ rk(S)$
and $rk(T∘S)≤ rk(T)$ because $ Im(T∘S) \subseteq Im(T)$:
Let $ y \in Im(T∘S)$; then $\exists x \in U$ such that $T(S(x)) = y$,
therefore $ y \in Im(T)$ because $ S(x) \in V$.
Finally $$rk(T∘S)≤min(r(T),r(S))$$ |
Representation theory of Lie groups and outer automorphisms | If $\varphi : G \to G$ is an automorphism of $G$, then given any representation $\rho : G \to \text{Aut}(V)$, we get another representation $\rho \circ \varphi : G \to \text{Aut}(V)$, which will generally be different if $\varphi$ is an outer automorphism. In particular the two $3$-dimensional representations of $SL_3$ are related by the nontrivial outer automorphism of $SL_3$ in this way. But a priori there could've been other unrelated representations of the same dimension. |
Why are bounded functions dense? | A few corrections:
On a finite measure space $L^p$ is dense in $L^q$ when $p > q$. So you have this inequality backwards.
On discrete measure spaces, the hierarchy goes the other way, and $L^p$ is dense in $L^q$ when $p<q$. When the space is infinite, this hierarchy is strict. $\mathbb{N}$ with the counting measure is the canonical example here, to the point that $L^p$ in this setting is commonly denoted by $\ell^p$. This also generalizes directly to atomic measure spaces, where all positive measure subsets have at least some specified measure.
On an infinite measure space which is not purely atomic, there is no hierarchy of $L^p$ spaces. This is because large $p$ amplifies singularities but dampens out infinite tails. $\mathbb{R}$ with the Lebesgue measure is the canonical example of such a space. The two example functions to keep in mind are as follows. Take $c \in (0,1)$. Then $x^{-c} \chi_{[0,1]}$ is in $L^p$ only for small $p$ while $x^{-c} \chi_{[1,\infty)}$ is in $L^p$ only for large $p$.
As for your original question, there are a number of views. One "low level" view notes that simple functions are dense and bounded. This "simple approximation theorem" is either one of the foundational results proven early on in the theory of Lebesgue integration, or is a definition. I call this view "low level" because the way you discuss it depends on the specifics of your definitions of Lebesgue integration.
A "high level" view is to use the Lebesgue dominated convergence theorem to build a concrete approximating sequence. For instance, $f_n = \text{sign}(f) \cdot \min \{ |f|,n \}$ is a sequence of bounded functions which converges to $f$. I call this view "high level" because it depends only on the theorems of Lebesgue integration, not on the specifics of the construction. |
every non-principal ultrafilter contains a cofinite filter. | Yes, every non-principal ultrafilter on an infinite set $X$ contains the cofinite filter on $X$. Let $\mathscr{U}$ be a non-principal ultrafilter on $X$, and let $x\in X$ be arbitrary. Since $\mathscr{U}$ is an ultrafilter, exactly one of the sets $\{x\}$ and $X\setminus\{x\}$ belongs to $\mathscr{U}$, and since $\mathscr{U}$ is non-principal, $\{x\}\notin\mathscr{U}$. Thus, $X\setminus\{x\}\in\mathscr{U}$ for each $x\in X$. Now let $F$ be any finite subset of $X$; then $$X\setminus F=\bigcap_{x\in F}\left(X\setminus\{x\}\right)$$ is the intersection of finitely many members of $\mathscr{U}$. Every filter is closed under finite intersections, so $X\setminus F\in\mathscr{U}$. The cofinite filter on $X$ is precisely $\{X\setminus F:F\subseteq X\text{ is finite}\}$, so we’ve just shown that $\mathscr{U}$ contains the cofinite filter on $X$.
Fréchet filter and cofinite filter are two names for the same thing; there is no difference.
Yes, an ultrafilter on $X$ is free iff it contains the Fréchet filter on $X$. In the first paragraph I proved that every free ultrafilter on $X$ contains the Fréchet filter on $X$. For the converse, suppose that $\mathscr{U}$ is a fixed ultrafilter on $X$; then there is an $x\in X$ such that $\{x\}\in\mathscr{U}$. But then $X\setminus\{x\}$ is an element of the Fréchet filter that is not in $\mathscr{U}$, so $\mathscr{U}$ does not contain the Fréchet filter. |
Probability of rolling product of 200 | First, you can consider that the die has only $5$ faces, numbered from $2$ to $6$, and is fair, since rolling a $1$ doesn't change the product.
Then, factorising $200$ gives you $200=2^3\times5^2$. It can be done either in 4 rolls (two 5's, one 2, one 4) or 5 rolls (three 2's instead of one 2 and one 4).
The probability of reaching 200 in 4 rolls is $(\frac15)^4\times(\frac{4!}{2!})$. The only problem here is that 2 can't be the last number rolled because we would stop at 100. Hence, the actual probability of this happening is $(\frac15)^4\times(\frac{4!}{2!}-\frac{3!}{2!})=(\frac15)^4\times9$.
When we want to count in how many ways we can reach 200 in 5 rolls without having 2 as the last roll, we are just left to find in how many ways we can order one 5 and three 2's, since the last roll will be a 5 for sure. This can be done in 4 different ways. Hence, the probability of reaching 200 in 5 rolls is: $(\frac15)^5\times 4$.
So the total probability is $(\frac15)^4\times9+(\frac15)^5\times4$. |
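Collecting the two terms as exact fractions (a sketch):

```python
from fractions import Fraction

p = Fraction(1, 5)**4 * 9 + Fraction(1, 5)**5 * 4
print(p, float(p))   # 49/3125 ≈ 0.01568
```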
The necessity of the axiom of induction | First Question: You say "in your opinion". Try to prove it, then, and we will show the flaw.
Second Question: You proved every $I_n:=\{1,...,n \}$ is finite. "By induction", you just repeat it: you proved that for every $n \in \mathbb{N}$ the property holds, meaning every $I_n$ is finite. But $\mathbb{N}$ is none of those $I_n$, so why do you assume $\mathbb{N}$ is finite? |
Cardinality of the set of all total orders on $\Bbb{N}$ | What the definition says, somewhat convolutedly, is:
Let $S$ be some subset of $\mathbb N_0$. Use $S$ to split $\mathbb N_+$ into two subsets $A$ and $B$, such that each number $n$ ends up in either $A$ or $B$ depending on whether or not $n-1$ is in $S$. It is possible that either $A$ or $B$ ends up being empty; that is fine.
Now consider the following total order on $\mathbb N$:
First come all elements of $B$, with the usual ordering between them.
Then comes $0$.
Finally come all elements of $A$, in their usual order.
This gives a different order for different $S$, because if we have some $n$ such that $n\in S_1$ but $n\notin S_2$, then we have $0\le_{S_1}n+1$ but $n+1\le_{S_2}0$. So $\le_{S_1}$ and $\le_{S_2}$ are different orderings. More generally, we can reconstruct $S$ just by knowing the ordering $\le_S$, simply by using it to compare $0$ to each nonzero number.
A different construction that would be simpler to explain would be
Given $S\subseteq\mathbb N$ take the usual order on $\mathbb N$, and then interchange the elements $2n$ and $2n+1$ for each $n\in S$.
or, in symbols, as a set of ordered pairs:
$$ \bigl({\leq} \setminus \{ \langle 2n,2n+1\rangle \mid n\in S \}\bigr) \cup \{\langle 2n+1,2n\rangle \mid n \in S \} $$ |
eigenvalue of P9 of the differential operator | Yes. Note that for $1\in V$,
$$
D(1)=0=0\cdot 1
$$
which implies that $0$ is an eigenvalue of $D$. |
Why is $E_{\lambda}$ the kernel of the linear map $\alpha-\lambda I$ | $\alpha(v) = \lambda v$ if and only if $\alpha(v) - \lambda v = 0$ if and only if $(\alpha - \lambda I)(v) = 0$. |
Proof about lucas numbers. | In the induction step for $\,n = 1\,$ you implicitly use $\,l_0 = f_{-1}\! + f_{1}\, (= 1 + 1).\ $ The induction proof essentially shows that if $\,l_n = f_{n-1} + f_{n+1}$ holds true for two successive values $\, n = k,\, k\!+\!1,\:$ then it remains true for all $\,n \ge k.$
Equivalently $\ g_n = f_{n-1}\!+f_{n+1}\!-l_n\ $ satisfies the Fibonacci recurrence (being a sum of solutions) but has initial conditions $\:g_0\! = 0,\,\ g_1\! = 0,\:$ so $\:g_n\! = 0\:$ by the uniqueness theorem for recurrences. Said uniqueness theorem has a straightforward two-line inductive proof.
As I often emphasize, uniqueness theorems provide powerful tools for proving equalities. |
Evaluate limit of $a_n = \bigg( 1 + \frac{2}{n} \bigg)^n$ | Look up how to use l'Hopital's rule. To use it on an indeterminate form $1^\infty$, as we have here, we need to take the logarithm, get an indeterminate form of either $0/0$ or $\infty/\infty$, apply l'Hopital's rule to get the limit of that, then exponentiate.
\begin{align}
f(n) &= \left(1+\frac{2}{n}\right)^n,\qquad\text{indeterminate form }{1^\infty}
\\
\log f(n) &= n\log\left(1+\frac{2}{n}\right),\qquad\text{indeterminate form }\infty \times 0
\\
\log f(n) &= \frac{\log\left(1+2/n\right)}{1/n},\qquad\text{indeterminate form }\frac{0}{0}
\\
\log f(n) &\sim \frac{\frac{d}{dn}[\log\left(1+2/n\right)]}{\frac{d}{dn}[1/n]}\\
&\text{where I wrote $\sim$ for: "has the same limit as,}
\\
&\qquad\text{provided the limit on the right exists"}
\\
\log f(n) &\sim
\frac{-2/n^2}{(-1/n^2)(1+2/n)} = \frac{2}{1+2/n}
\\
\lim\log f(n) &= 2
\\
\log \lim f(n) &= 2
\\
\lim f(n) &= e^2 .
\end{align}
Question: Why is $1^\infty$ indeterminate?
Answer. Because we can get different results for limits of that form. Example
$$
\lim_{n\to\infty}\left(1+\frac{2}{n}\right)^n = e^2
\\
\lim_{n\to\infty} 1^n = 1
\\
\lim_{n\to\infty} \left(1+\frac{1}{n}\right)^{n^2} = +\infty
$$
and so on. |
is subset of probability measures with finite second moment a Borel set? | Yes, of course. Write the finiteness in terms of integrals of bounded continuous functions: there exists $N$ such that for all $n$, ... |
Integrate Product of Matrix Exponentials | You must do a numerical calculation. Let $Z(t)=\int_0^t e^{\tau A}e^{\tau A^T}d\tau$.
For example, $Z(t)$ is the solution of the ODE: $AZ(t)+Z(t)A^T=Z'(t)-I_n$, s.t. $Z(0)=0$. |
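A numerical sketch with an assumed sample matrix $A$, solving this ODE and cross-checking against direct quadrature of the integrand:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed sample matrix
n, t_end = A.shape[0], 2.0

# Z'(t) = I + A Z(t) + Z(t) A^T, Z(0) = 0, with Z flattened to a vector
def rhs(t, z):
    Z = z.reshape(n, n)
    return (np.eye(n) + A @ Z + Z @ A.T).ravel()

Z_ode = solve_ivp(rhs, (0.0, t_end), np.zeros(n * n),
                  rtol=1e-10, atol=1e-12).y[:, -1].reshape(n, n)

# trapezoid-rule quadrature of exp(tau A) exp(tau A^T)
taus = np.linspace(0.0, t_end, 2001)
vals = np.array([expm(tau * A) @ expm(tau * A.T) for tau in taus])
dt = taus[1] - taus[0]
Z_quad = (vals[0] + vals[-1]) / 2 * dt + vals[1:-1].sum(axis=0) * dt
print(np.abs(Z_ode - Z_quad).max())   # ≈ 0
```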
Suppose $BAC=A$ where $A$ is a $k \times n $ matrix. What's the relation between $B$ and $C$? | If $C$ is invertible we have
$$
BA=AC^{-1}
$$
Right multiply by $A^T$
$$
BAA^T=AC^{-1}A^T
$$
Note that $AA^T$ is a square matrix, and if it is invertible we have:
$$
B=AC^{-1}A^T(AA^T)^{-1}
$$
If $B$ and $A^TA$ are invertible we can do the same to the other side and find:
$$
C=(A^TA)^{-1}A^TB^{-1}A
$$ |
How to obtain the convolution directly (not graphical) of the two functions $e^{-t}u(t)$ and $e^{-2t}u(t)$? | Yes, you can find it directly as:
$$\displaystyle \int_{-\infty}^{\infty} e^{- \tau}~u(\tau)~e^{-2(t-\tau)}~u(t-\tau)~d\tau = e^{-2t}~\int_0^t e^{\tau}~d\tau = e^{-2t}(e^t-1)$$
(A plot of the result, omitted here, shows a pulse that rises from $0$ to a maximum at $t=\ln 2$ and then decays back to $0$.)
When we have functions $f(t)u(t)$ and $g(t)u(t)$ with the Heaviside Unit Step Function, we can just write:
$$\displaystyle (f*g)(t) = \int_0^t f(\tau)~g(t-\tau) ~ d\tau$$
Having said all of that, I think it is very important to understand what is going on graphically. I recommend spending time with the examples, particularly $3.4.1$, Example $1$, as they solve a general version of your example and do it both ways. It is critical to understand the graphical method, as it can keep you away from unrecognizable integrals.
This is also a useful Convolution Table. Especially review "Convolution using graphical method (1)". |
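The direct integral is also easy to verify symbolically (a sketch):

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
conv = sp.integrate(sp.exp(-tau) * sp.exp(-2*(t - tau)), (tau, 0, t))
print(sp.simplify(conv))   # exp(-t) - exp(-2*t), i.e. exp(-2t)*(exp(t) - 1)
```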
Prop: Every sequence has a Cauchy subsequence. | You need to do a recursive construction that involves a sequence of radii converging to $0$. I’ll use radii $2^{-n}$ for $n\in\Bbb N$.
Start with your sequence $\langle p_n:n\in\Bbb N\rangle$. $X$ has a finite cover by open balls of radius $1$, so there is an infinite $A_1\subseteq\Bbb N$ such that for all $k,\ell\in A_1$, $d(p_k,p_\ell)<2\cdot 1=2$. Similarly, $X$ has a finite cover by open balls of radius $\frac12$, and $A_1$ is infinite, so there is an infinite $A_2\subseteq A_1$ such that for all $k,\ell\in A_2$, $d(p_k,p_\ell)<2\cdot\frac12=1$.
In general, if $A_n$ is an infinite subset of $\Bbb N$, $X$ has a finite cover by open balls of radius $2^{-n}$, so there is an infinite $A_{n+1}\subseteq A_n$ such that for all $k,\ell\in A_{n+1}$, $d(p_k,p_\ell)<2\cdot2^{-n}=2^{-n+1}$.
Now let $k_1=\min A_1$, and for $\ell\in\Bbb N$ let $k_{\ell+1}=\min(A_\ell\setminus\{k_1,\ldots,k_\ell\})$. Show that $\langle p_{k_i}:i\in\Bbb Z^+\rangle$ is a Cauchy subsequence of $\langle p_n:n\in\Bbb N\rangle$. |
Showing $\pi\int_{0}^{\infty}[1+\cosh(x\pi)]^{-n}dx={(2n-2)!!\over (2n-1)!!}\cdot{2\over 2^n}$ | First: remove the useless constant by setting $x=\frac{z}{\pi}$. Then, through $z=\log u$ and $v=\frac{1}{u}$:
$$ \int_{0}^{+\infty}(1+\cosh(z))^{-n}\,dz = 2^n\int_{1}^{+\infty}\frac{\left(2+u+\frac{1}{u}\right)^{-n}}{u}\,du=2^n\int_{0}^{1}\frac{\left(2+v+\frac{1}{v}\right)^{-n}}{v}\,dv$$
so the LHS equals:
$$ 2^{n-1}\int_{0}^{+\infty}\frac{u^{n-1}}{(u+1)^{2n}}\,du = 2^n\int_{0}^{+\infty}\frac{t^{2n-1}\,dt}{(1+t^2)^{2n}}= 2^{n-1} B(n,n) = 2^{n-1}\frac{\Gamma(n)^2}{\Gamma(2n)}.$$
As an alternative, just apply IBP multiple times. It leads to a recursion similar to the one for $\int_{0}^{\pi/2}\sin^{2n}(\theta)\,d\theta.$ |
Do you need given standard deviation when determining the critical values of a test statistic? | Without knowing the hypothesis being tested or the distribution of the test statistic under the null hypothesis, it's unreasonable to state the critical value of the test, so I consider this question ill-posed. For example, without knowing whether the test is one-tailed or two-tailed, the critical value is impossible to state. Even the way the sample was collected will affect the choice of test.
That said, if we assume that the test is for a location parameter and is two-tailed; e.g., $$H_0 : \mu = \mu_0 \quad \text{vs.} \quad H_a : \mu \ne \mu_0,$$ and a simple random sample of size $n = 15$ was taken, and that the observations are assumed to have been drawn from a distribution with known variance $\sigma^2 = 15$, then the test statistic $$Z \mid H_0 = \frac{\bar x - \mu_0}{\sigma/\sqrt{n}}$$ is approximately standard normal (unless the observations were drawn from a normal distribution, in which case $Z \mid H_0$ is exactly standard normal).
This test statistic will reject $H_0$ at $\alpha = 0.025$ if $|Z| > z^*_{\alpha/2}$, where $\Phi(z^*_{\alpha/2}) = 1 - \alpha/2 = 0.9875$; that is to say, $z^*_q$ is the upper $q^{\rm th}$ quantile of the standard normal distribution. Using a computer or statistical table, $$z^*_{0.0125} \approx 2.2414.$$ This is the critical value of the test.
Note that this is where the two-tailed nature of the hypothesis test comes into play. Had the test been one-tailed, e.g. $$H_a : \mu > \mu_0,$$ then you would reject $H_0$ in favor of $H_a$ if $Z > z^*_{\alpha}$, with no absolute values, and the critical value is now $$z^*_{0.025} \approx 1.95996.$$
And if the test were one-tailed in the other direction, $$H_a : \mu < \mu_0,$$ then your critical value would be $z_{0.025} = -1.95996$, and you would reject $H_0$ if $Z < z_{0.025}$.
So as you can see, I have insufficient information to uniquely identify the appropriate critical value for your test. One thing I can say is that a $t$-test is not appropriate if the variance is assumed to be known. |
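The two quantiles quoted above are easy to reproduce (a sketch using scipy):

```python
from scipy.stats import norm

print(norm.ppf(0.9875))   # ≈ 2.2414  (two-tailed, alpha = 0.025)
print(norm.ppf(0.975))    # ≈ 1.95996 (one-tailed, alpha = 0.025)
```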
Why does free probability use Von Neumann algebras instead of C star algebras? | I'm not yet an expert in the field, so maybe someone more experienced can add to this later.
In my understanding, Voiculescu was originally interested in the free group factor problem, which concerns specifically the von Neumann algebras coming from the free groups on $n$ generators. For this reason, much of the focus of free probability theory has been on the von Neumann algebraic level, hoping to obtain some nice invariants for these von Neumann algebras.
That said, free probabilistic techniques are still interesting on the C* level. For a reference on free probability that focuses only on C* aspects and not at all on the von Neumann algebraic aspects, try the nice book "Lectures on the Combinatorics of Free Probability" by Nica and Speicher. |
Properties, bounds and limits about difference of two inverse standard normal CDF variables and extreme value distribution | We start with
$$\Phi^{-1}\left(1-\frac{x}{n}\right) = \sqrt{2}\ \text{erf}^{-1}\left(1-\frac{2x}{n}\right)$$ which makes
$$\sigma_n=\sqrt{2}\Bigg[\text{erf}^{-1}\left(1-\frac{1}{en}\right)-\text{erf}^{-1}\left(1-\frac{1}{n}\right) \Bigg]$$
For small $x$ we have
$$\text{erf}^{-1}\left(1-x\right)=\sqrt{\frac{1}{2} \left(\log \left(\frac{2}{\pi x^2}\right)-\log \left(\log
\left(\frac{2}{\pi x^2}\right)\right)\right)}$$ (have a look here). It is very good for $0\leq x \leq 0.1$.
Using it, we have
$$\sigma_n=\sqrt{\log \left(\frac{2 e^2 n^2}{\pi }\right)-\log \left(\log \left(\frac{2 e^2 n^2}{\pi }\right)\right)}-\sqrt{\log \left(\frac{2 n^2}{\pi }\right)-\log \left(\log \left(\frac{2 n^2}{\pi }\right)\right)}$$ and the limit is $0$.
Making $n=10^k$, the table contains the approximate and exact values of $\sigma_n$
$$\left(
\begin{array}{ccc}
k & \text{approximation} & \text{exact} \\
1 & 0.430284 & 0.443256 \\
2 & 0.328501 & 0.328637 \\
3 & 0.272163 & 0.271588 \\
4 & 0.236707 & 0.236187 \\
5 & 0.211987 & 0.211577 \\
6 & 0.193551 & 0.193231 \\
7 & 0.179145 & 0.178892 \\
8 & 0.167499 & 0.167296 \\
9 & 0.157836 & 0.157671 \\
10 & 0.149656 & 0.149518 \\
11 & 0.142615 & 0.142498 \\
12 & 0.136474 & 0.136354 \\
13 & 0.131056 & 0.131152 \\
14 & 0.126231 & 0.126569 \\
15 & 0.121898 & 0.133751
\end{array}
\right)$$ |
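The first rows of the table can be reproduced as follows (a sketch; for large $k$ the "exact" column needs more than double precision, since $1-\frac1n$ rounds toward $1$, which also explains the drift in the last rows above):

```python
import numpy as np
from scipy.special import erfinv

for k in range(1, 8):
    n = 10.0**k
    exact = np.sqrt(2) * (erfinv(1 - 1/(np.e * n)) - erfinv(1 - 1/n))
    a, b = np.log(2 * np.e**2 * n**2 / np.pi), np.log(2 * n**2 / np.pi)
    approx = np.sqrt(a - np.log(a)) - np.sqrt(b - np.log(b))
    print(k, round(approx, 6), round(exact, 6))
```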
How to prove that a sequence is Cauchy | Easier:
$$x_n = \frac{1}{1-\frac{1}{2n^2}} \stackrel{n \to \infty}\longrightarrow 1$$
and convergent sequences are Cauchy sequences. |
Moment generating function of two variables | We have $M(2t)=M(t)^3 M(-t)$ and $M(t)=M(-t)$, so $M(2t)=M(t)^4$.
Hence we have a functional equation for $m(t)=t^{-2}\log M(t)$ (let $m(0)=\frac12s^2=\frac12$):
$$m(2t)=m(t)\quad\text{for all }t\tag{1}$$
But we know
$$
m(t)=\frac12+o(1)\text{ for small }t\tag{2}
$$
from the expansion of $M(t)$.
Now equations (1) and (2) together imply that $m$ is constant, equal to $\frac12$. Can you see why? |
Find the area of the Grayed triangle Given the following Figure | As mentioned in the comments, there simply isn't enough information. All you know about the grayed triangle is that it is a right triangle with one of its legs having length $4,$ and the other having length somewhere between $0$ and $4\sqrt3.$ With only this information, the best we can conclude is that the grayed triangle can have any area between $0$ and $8\sqrt3.$ (And I do mean any such area. All we have to do is position $C$ appropriately.)
Is there perhaps some other information that you left out because you weren't sure how it was relevant, or because you missed it on your first read-through? |
If $a>0$ and $ab>0$, then $b>0$ | By way of contradiction, if $b \leq 0$, then since $a > 0$ we have $ab \leq 0$. The proof for the latter is similar. |
Is $\mathcal{U}(\mathfrak{g})$ semisimple (as a module over itself)? (and related examples) | If $U(\mathfrak{g})$ were semi-simple as a $U(\mathfrak{g})$-module, then the same would be true of any quotient. However, $U(\mathfrak{g})/U(\mathfrak{g})\mathfrak{b}\cong M(0)$, where $M(0)$ is the Verma module of highest weight $0$, $\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{n}^+$ and $\mathfrak{g}=\mathfrak{n}^-\oplus\mathfrak{h}\oplus\mathfrak{n}^+$ is a triangular decomposition of $\mathfrak{g}$.
It is well known, and easy to check that $M(0)$ is not semi-simple. Try it for $\mathfrak{sl}_2$. |
Isomorphism from Adjoining Root of Minimal Polynomial | The comment is confusing at best. What he means is define
$$K[x] \to K[\alpha]\quad f(x) \mapsto f(\alpha) $$
This is easily verified to be a surjective homomorphism of rings whose kernel is precisely the ideal generated by $m_\alpha$, so the first isomorphism theorem provides the desired isomorphism.
EDIT: oh, I'm a necromancer. |
Finding Taylor Series of the following function? | Hint
$$\sin x=x-{x^3\over 6}+\cdots$$and $${1\over {8-x}}={1\over 8}\left(1+{x\over 8}+{x^2\over 64}+{x^3\over 512}+\cdots\right)$$and calculate the terms containing $x^\alpha$ in the Taylor expansion of $f(x)$, where $\alpha\in\{0,1,2,3,4,5\}$ |
Line that separates one point from $n-1$ others points. | Let the $n$ points be denoted $P_1,...,P_n$, and let $S = \{P_1,...,P_n\}$.
Since $S$ is finite, $S$ is bounded, so is contained in some disk, $D$ say.
Take any line which is not parallel to any of the lines through two distinct points of $S$.
Slide it in parallel until it's fully outside of $D$.
Now slide it in parallel, back towards $D$, and keep sliding it until it hits some point of $S$.
Argue that it hits exactly one point of $S$.
Now slide it in parallel a little more, but not so much as to hit another point of $S$.
Stop$\,-\,$you're done.
That was a geometric proof, and in my opinion, it's the "right" proof.
But if desired, it can easily be converted to an algebraic proof.
Here's an algebraic version of the same proof . . .
Choose a line $L$ not parallel to any of the lines between pairs of points of $S$, and suppose $L$ has the equation
$$ax + by = c$$
where $a,b,c \in \mathbb{R}$, and $a,b$ are not both zero.
Define $c_1,...,c_n$ by
$$c_i = ax_i + by_i$$
where $P_i = (x_i,y_i)$.
Let $L_i$ be the line with equation $ax + by = c_i$.
By choice of $c_i$, the line $L_i$ hits $S$ at the point $P_i$.
But each $L_i$ is parallel to $L$.
Hence each $L_i$ hits $S$ only at the point $P_i$.
It follows that $c_1,...,c_n$ are distinct.
Relabel the points $P_1,...,P_n$ so that
$$c_1 < \cdots < c_n$$
Then the line $L'$ with equation
$$ax + by = c'$$
$$\text{where}$$
$$c' = \frac{c_1+c_2}{2}$$
separates $P_1$ from the rest of the points of $S$. |
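The algebraic version translates almost line by line into code (a sketch; picking a random direction avoids parallelism with probability $1$, and we simply retry on the null event):

```python
import random

def separating_line(points):
    # find (a, b, c') with a*x + b*y = c' separating one point from the rest
    while True:
        a, b = random.uniform(-1, 1), random.uniform(-1, 1)
        cs = [a * x + b * y for x, y in points]
        if len(set(cs)) == len(cs):   # all c_i distinct
            break
    c1, c2 = sorted(cs)[:2]
    return a, b, (c1 + c2) / 2        # separates the point with smallest c_i

print(separating_line([(0, 0), (1, 0), (0, 1), (2, 3)]))
```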
Complex bilinear transformation. | $T(z)=e^{i\theta}\frac{z-z_0}{z-\bar{z}_0}$ is the most general transformation mapping the upper half plane to the unit circle, provided $z_0$ is in the upper half plane. (Link on MSE to the proof)
$$T(2i)=0 \Rightarrow 2i-z_0=0 \Rightarrow z_0=2i$$
So :
$$T(z)=e^{i\theta}\frac{z-2i}{z+2i}$$
Can you go from there ? |
Show that $S'$ is closed for any $S \subseteq \mathbb{C}$ | To continue from where you left:
Claim. $D(z, \epsilon) \cap S' = \emptyset.$
Proof. Suppose not. Let $z' \in D(z, \epsilon) \cap S'.$
Choose $r > 0$ sufficiently small such that $D(z', r) \subset D(z, \epsilon)$.
(You can do this since $D(z, \epsilon)$ is open.)
Now, since $z'$ is a limit point (it's an element of $S'$), we must have that $D(z', r) \cap S \neq \emptyset$.
As $D(z', r) \subset D(z, \epsilon)$, this gives us that $D(z, \epsilon) \cap S \neq \emptyset$. A contradiction!
Thus, you get that $D(z, \epsilon) \subseteq \Bbb C\setminus S'$, as desired. |
What is the domain of this composite function $\sin^2 x$ | The domain $D$ of $f \circ g$ is
$$D=\{x \in \mathbb R: \sin x \ge 0\}.$$
Hence
$$D=\bigcup_{n \in \mathbb Z} [2n\pi, (2n+1)\pi],$$
as Kavi Rama Murthy wrote in his comment. |
Using the Riesz Representation theorem to show a measure is unique. | Let $E$ be a measurable set; then $\mu_{\Phi}(E)=\int 1_E\,d\mu_{\Phi}=\int 1_E\circ \phi\, d\mu=\int 1_{\phi^{-1}(E)}\,d\mu=\mu(\phi^{-1}(E))$.
Remark that $(1_E\circ\phi)(x)=1$ iff $1_E(\phi(x))=1$; this is equivalent to saying that $\phi(x)\in E$, or equivalently $x\in \phi^{-1}(E)$, so $1_E\circ \phi =1_{\phi^{-1}(E)}$. |
Applying the Law of Large Numbers recursively | I suppose it depends on the situation; are you looking at something like this: probabilities of probabilities?
Where, for example, a coin is tossed, and there is a 50 percent chance that it has a 40 percent chance of coming up heads, and in addition a 50 percent chance that it has a 30 percent chance of coming up heads.
And if you are using the strong law of large numbers, etc.: there are generalizations of this law, such as Kolmogorov's generalized strong law of large numbers, which extends to independent but not identically distributed random variables, given that certain variances and conditions are bounded. See http://mathworld.wolfram.com/StrongLawofLargeNumbers.html
And perhaps you want to know how this differs, with regard to variance and rates of convergence, from a standard biased coin with a fixed chance of coming up heads of 0.35 (i.e. a binomial distribution with p=0.35).
Moreover, whether one can take the limit of the expected value in the former case (0.35) to make claims like: almost surely, almost all the measure (with Pr ≈ 1) is concentrated on sequences whose limiting relative frequency of heads is 0.35 (the expected sample average). I have some results on this, using generating functions.
That is, formally, it appears that in the traditional Kolmogorov (measure-theoretic) interpretation, whilst the two situations are formally distinguished, the empirical consequences entailed by said formalism appear to be identical and indistinguishable. At least that is what I and one of my co-supervisors have concluded.
This being at least in relation to limiting relative frequencies (within standard probability theory; as I said, I am not sure what occurs when one uses a Banach-space real-valued random variable approach, or a non-standard hyperfinite formalism).
I suppose it would be interesting to consider the variances and rates of convergence using a non-standard approach.
Perhaps, with each progressive iteration of the LLN (i.e., "with probability 1, the limiting relative frequency of trials with chance of chance of chance ..., with probability 1, ..., the limiting relative frequency of heads is 0.35"), the rate of frequency convergence to the expected value slows down or changes.
But presumably this would require an uncountable iteration of said almost-surelys (Pr = 1), if the appropriate variance and mean conditions are met each time, although I am not sure. I can only presume that in some sense the measure of convergent sequences is uncountably greater, but this may not be quite correct.
Perhaps something might change if there is an infinite (presumably it would have to be uncountable, or maybe not) meta-distribution of probabilities,
due to the limitations of the sequences themselves being countably infinite: there may not be enough elements in the sequence to have one, or what is more important, infinitely many of said elements with each appropriate probability value. |
How to describe the Caratheodory extension induced by this step function? | $\mu$ is simply the 'delta measure at $0$': $\mu(A)=1$ if $0 \in A$ and $\mu(A)=0$ otherwise. |
Algebraic transformations to continuously extend functions | In the context of finding derivatives (Question 2), the only problem with $$f(x+h)-f(x)\over h$$ is that at $h=0$ it gives you $0/0$. And the only way to fix that is to find an $h$ in the numerator to cancel that $h$ in the denominator; or, to finesse the problem by doing manipulations to bring it to a form already handled, as in the proof of the product rule.
As an example of what you may have to do to get cancelation of $h$, if you have $$\sqrt{x+h}-\sqrt x\over h$$ you have to multiply top and bottom by $\sqrt{x+h}+\sqrt x$.
I think the 1st question is too vague to admit of any useful answer. |
Expressing a cycle/set as odd or even. | A cycle of even length / order is an odd permutation, and vice versa (that's just a conventional dissonance we have to live with; the alternative is worse). The product of two odd or of two even permutations (not just cycles) is even, while the product of one even and one odd permutation (in whatever order) is odd. The identity permutation is considered even.
Armed with this knowledge, we get that the cycle $(1,5,6)$ has length $3$ and is therefore even, and $(2,8)$ has length $2$ and is therefore odd, so their product is odd. |
$f(x) =\frac{1}{x^{\alpha}}$ continuity and uniform continuity | For $0<\alpha\leqslant1$, we have
$$f(x) = x^{-\alpha} = \exp\left(-\alpha\log x \right), $$
so continuity follows from the composition of continuous functions.
However, given the Cauchy sequence $x_n=\frac1n$ in $(0,1)$, we have
$$f(x_n) = \left(\frac1n\right)^{-\alpha} = n^\alpha\stackrel{n\to\infty}\longrightarrow\infty, $$
so $f$ is not Cauchy-continuous, and therefore not uniformly continuous. To see this more explicitly, fix $m$ and consider the behavior of $|f(x_n)-f(x_m)|$ as $n\to\infty$. |
Limit of norm of bounded operators | This is not true even in dimension $1$.
Consider, for example, the linear operators $T_n \colon \mathbb{R}\to \mathbb{R}$ defined by
$T_n x := (-1)^n x$. |
how to justify the probability of a linear transformation of a event? | It is because of simple algebra (for $\sigma > 0$): $$\begin{align}
x>a &\iff x-\mu > a-\mu \\
&\iff \frac{x-\mu}{\sigma} > \frac{a-\mu}{\sigma}.
\end{align}$$
So $\{X> a\}$ occurs if and only if $\left\{\frac{X-\mu}{\sigma} > \frac{a-\mu}{\sigma}\right\}$ occurs. Thus the two probabilities are the same. |
How to calculate center point in geographic coordinates? | For completeness (I know this is pretty late; no need to change your accepted answer):
You have $n$ points on the globe, given as latitude $\phi_i$ and longitude $\lambda_i$ for $i=1,\ldots,n$ (adopting Wikipedia's notation). Consider a Cartesian coordinate system in which the Earth is a sphere centered at the origin, with $z$ pointing to the North pole and $x$ crossing the Equator at the $\lambda=0$ meridian. The 3D coordinates of the given points are
$$\begin{align}
x_i &= r\cos\phi_i\cos\lambda_i,\\
y_i &= r\cos\phi_i\sin\lambda_i,\\
z_i &= r\sin\phi_i,
\end{align}$$
(compare spherical coordinates, which uses $\theta=90^\circ-\phi$ and $\varphi=\lambda$).
The centroid of these points is of course
$$(\bar x,\bar y,\bar z) = \frac1n \sum (x_i, y_i, z_i).$$
This will not in general lie on the sphere, but we don't need to actually project it onto the sphere to determine the geographic coordinates its projection would have. We can simply observe that
$$\begin{align}
\sin\phi &= z/r, \\
\cos\phi &= \sqrt{x^2+y^2}/r, \\
\sin\lambda &= y/(r\cos\phi), \\
\cos\lambda &= x/(r\cos\phi),
\end{align}$$
which implies, since $r$ and $r\cos\phi$ are nonnegative and $\operatorname{atan2}$ is unchanged when both arguments are scaled by the same positive factor, that
$$\begin{align}
\bar\phi &= \operatorname{atan2}\left(\bar z, \sqrt{\bar x^2+\bar y^2}\right), \\
\bar\lambda &= \operatorname{atan2}(\bar y, \bar x).
\end{align}$$
So yes, the code you linked to does appear to switch the latitude and longitude in the output. You should submit a patch to the author.
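For concreteness, here is a minimal Python sketch of the whole computation (my own code, not the code from the linked question). It takes latitude/longitude pairs in degrees and assumes a spherical Earth; the radius $r$ cancels in the $\operatorname{atan2}$ calls, so it is simply set to $1$:

    from math import radians, degrees, sin, cos, atan2, sqrt

    def geographic_centroid(points):
        """points: list of (latitude, longitude) pairs in degrees."""
        n = len(points)
        # average the 3D Cartesian coordinates (r = 1; it cancels in atan2)
        x = sum(cos(radians(lat)) * cos(radians(lon)) for lat, lon in points) / n
        y = sum(cos(radians(lat)) * sin(radians(lon)) for lat, lon in points) / n
        z = sum(sin(radians(lat)) for lat, _ in points) / n
        lat = atan2(z, sqrt(x * x + y * y))  # latitude from z and the horizontal norm
        lon = atan2(y, x)
        return degrees(lat), degrees(lon)

    print(geographic_centroid([(0.0, 10.0), (0.0, 20.0)]))  # -> (0.0, 15.0)

The $r$'s cancel inside $\operatorname{atan2}$, which is why the sphere radius never appears. |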
Problem With Lower And Upper No-Arbitrage Barriers And Inequalities-Financial Mathematics | (a) Use put-call parity and the fact that the put price must be positive. (b) The call price cannot exceed the value of the stock, as otherwise buying the stock and selling the call would generate an arbitrage profit. |
Exercise on Manifolds: Transition Maps | Note that $\varphi, \psi \colon U \rightarrow \mathbb{R}^3$ and you have no idea what $U$ is (it is some open subset of some manifold $M$).
The equations
$$y_1 = x_1, y_2 = x_2 - x_1^3, y_3 = x_3 + 3x_1 x_2^2$$ already give you (practically by definition)
$$(\psi \circ \varphi^{-1})(x_1,x_2,x_3) = (y_1,y_2,y_3) = (x_1, x_2 - x_1^3, x_3 + 3x_1 x_2^2). $$
In more detail, if $q \in U$ then the coordinates of $q$ with respect to the coordinate system $\varphi$ are $\varphi(q) = (x_1(q), x_2(q),x_3(q))$. Similarly, the coordinates of $q$ with respect to the coordinate system $\psi$ are $\psi(q) = (y_1(q),y_2(q),y_3(q))$. The transition function $\psi \circ \varphi^{-1}$ eats a triple $(x_1,x_2,x_3)$ and returns the $y_i$ coordinates of the point $q = \varphi^{-1}(x_1,x_2,x_3)$.
The meaning of the equations $y_i = f_i(x_1,x_2,x_3)$ you are given is that $y_i(q) = f_i(x_1(q),x_2(q),x_3(q))$ (the $y_i$ coordinate of $q \in U$ is related to the $x_i$ coordinates of $q$ by $f_i$). Letting $q = \varphi^{-1}(x_1,x_2,x_3)$ we get
$$ y_i(\varphi^{-1}(x_1,x_2,x_3)) = f_i(x_1(\varphi^{-1}(x_1,x_2,x_3)),x_2(\varphi^{-1}(x_1,x_2,x_3)),x_3(\varphi^{-1}(x_1,x_2,x_3))) = f_i(x_1,x_2,x_3) $$
but the left hand side is precisely the $i$-th coordinate of $\psi \circ \varphi^{-1}$.
In order to get $\varphi \circ \psi^{-1} = \left( \psi \circ \varphi^{-1} \right)^{-1}$ you need to invert $\psi \circ \varphi^{-1}$. Namely, you need to solve for the $x_i$'s in terms of the $y_i$'s. In your case,
$$ x_1 = y_1, x_2 = y_2 + x_1^3 = y_2 + y_1^3, \\
x_3 = y_3 - 3x_1x_2^2 = y_3 - 3y_1(y_2 + y_1^3)^2 $$
so
$$ (\varphi \circ \psi^{-1})(y_1,y_2,y_3) = (y_1, y_2 + y_1^3, y_3 - 3y_1(y_2 + y_1^3)^2). $$
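If you want to double-check the algebra, SymPy will happily confirm that the two transition maps are inverse to each other (a quick verification of mine, not part of the original exercise):

    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3')

    # psi o phi^{-1}
    y1, y2, y3 = x1, x2 - x1**3, x3 + 3*x1*x2**2
    # phi o psi^{-1}, applied to (y1, y2, y3)
    back = (y1, y2 + y1**3, y3 - 3*y1*(y2 + y1**3)**2)

    print([sp.simplify(b - v) for b, v in zip(back, (x1, x2, x3))])  # [0, 0, 0]

All three differences simplify to zero, confirming the inversion. |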
Let $x$ be a $p$-letter word, $f^{n}(x)=x \implies x$ must be a 'boring' word | First observe that $f^p(x) = x$ for all $x\in D$, i.e. $f^p = \mathrm{id}_D$. In particular, $f$ is invertible. Furthermore, notice that if $f^n(x) = x$ for some $x$, then $f^{kn}(x) = x$, for any integer $k$.
Now, let $1\leq n < p$ and let $x$ be such that $f^n(x) = x$. Since $p$ is prime, $n$ and $p$ are relatively prime. By Bézout's identity, there exist integers $a$ and $b$ such that $an+bp = 1$. Finally,
$$f(x) = f^{an+bp}(x) = (f^{an}\circ f^{bp})(x) = x.$$
Thus, $x$ is boring. |
Domain when dividing rational expression | As Bill has stated in the comments this is essentially a matter of definition. When the concept of division is rigorously defined for functions in this sense, it is generally defined to be multiplication by the reciprocal function. In this case the expression
$$R(x)\div\frac{x+a}{x-b},$$
where $R(x)$ is some rational function, is technically just shorthand for
$$R(x)\times\left(\frac{x+a}{x-b}\right)^{-1}=R(x)\times\frac{x-b}{x+a}.$$
This corresponds to your reasoning that the domain of the expression does not include $x=-a$ but does include $x=b$.
Your textbook seems to be taking the more heuristic approach though, where we interpret
$$R(x)\div \frac{x+a}{x-b}$$
to be a new function, that for any inputted value of $c$ takes the value of $R(c)$ (now just a number) divided by $\frac{c+a}{c-b}$ (another number) to be the output. In this case we see that now the domain of this new function can't include $b$, because then we'll be attempting to divide one number by $0$ (this is again just multiplication by the reciprocal of numbers, but the problem is $0$ has no reciprocal obviously).
It basically comes down to whether you first consider multiplying by the reciprocal of the rational function to construct your new function (and how exactly you are defining inverse), or whether you define your new function only by what happens at the evaluation step. Either way is fine in your setting, so I would just follow the conventions that your textbook and teacher do, but also know that your interpretation is perfectly valid, and is often the one used when doing more advanced mathematics. |
Does $u \in W^{1,2}_0(\Omega)$ imply $|u| \in W^{1,2}_0(\Omega)$? | Apparently there is a chain rule for weak derivatives; someone on this site recommends Evans and Gariepy as a reference: How to prove the chain rule with respect to weak derivatives?
EDIT: The below argument is wrong - see comments.
Define $F_\epsilon(t)=\sqrt{t^2+\epsilon^2}-\epsilon.$ For smooth $u$, the usual chain rule shows that $u_\epsilon=F_\epsilon(u)$ is in $W^{1,2}_0(\Omega)$ with weak derivatives $D_iu_\epsilon=F_\epsilon'(u)D_iu,$ which converge in $L^2$ to $\operatorname{sgn}(u)D_iu$ by dominated convergence (taking $\operatorname{sgn}(0)=0$). Note $\|\operatorname{sgn}(u)D_iu\|_2\leq \|D_iu\|_2.$ (Presumably they're actually equal, though I think that would take more work to show.) Since $C^\infty_0(\Omega)$ is dense in $W^{1,2}_0(\Omega)$, this proves that the map $u\mapsto |u|$ extends to a $1$-Lipschitz operator on $W^{1,2}_0(\Omega).$ |
How can I prove this inequality? $c\leq x\leq a+b$ | The inequality $c\leq x$ is just the statement "the shortest distance between any two points is a straight line." The upper bound $x\leq a+b$ is harder.
So far as I know, there isn't an elementary geometric argument. Suppose the graph is rectifiable (let's say with arclength $s$ rather than $x$), and let $\gamma(x) = (x,f(x))$ parameterize the curve. Given $\epsilon>0$, the definition of rectifiability implies there are points $0=x_0<x_1<\dotsm< x_n=b$ such that
$$s-\epsilon<\sum_{i=0}^{n-1}d(\gamma(x_i),\gamma(x_{i+1}))$$
where $d$ is distance in $\mathbb{R}^2$. By triangle inequality we have
$$d(\gamma(x_i),\gamma(x_{i+1})) = \sqrt{(x_i-x_{i+1})^2+(f(x_i)-f(x_{i+1}))^2}\leq |x_i-x_{i+1}|+|f(x_i)-f(x_{i+1})|.$$
Moreover, a continuous injective function is monotone (and hence decreasing in our case), so $|x_i-x_{i+1}|=x_{i+1}-x_i$ and $|f(x_i)-f(x_{i+1})| = f(x_i)-f(x_{i+1})$. Thus
$$s-\epsilon < \sum_{i=0}^{n-1}d(\gamma(x_i),\gamma(x_{i+1}))\leq\sum_{i=0}^{n-1}(x_{i+1}-x_i)+\sum_{i=0}^{n-1}\big(f(x_i)-f(x_{i+1})\big) = b+a.$$
Therefore $s<a+b+\epsilon$. Since $\epsilon>0$ was arbitrary in our argument, we conclude $s\leq a+b$, as desired.
As an aside, I believe the upper bound should be achievable with a construction similar to that of the Cantor function (whose graph in the unit square is nondecreasing and has length 2). |
finding $\arg\min_c \sum(c_{yj}-c_j)^2 +\lambda\sum|c_j|$ | You're missing an important detail in your argument: that function is not differentiable at zero. The case where $-\lambda/2<c<\lambda/2$ corresponds to the case where $x=0$. One way to see this is the following: suppose $-\lambda/2<c<\lambda/2$; then if $x>0$ you will want to decrease its value, and the contrary if $x<0$. Thus, the optimum is $x=0$. Of course there is a literature on how to solve problems with kinks, but I don't think it adds much value here. Now, can you figure out why this solution converges to zero?
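The resulting optimum is the well-known soft-thresholding rule. A small Python sketch (my own illustration, writing the one-dimensional objective as $(x-c)^2+\lambda|x|$) compares the closed form against a brute-force scan:

    import numpy as np

    def soft_threshold(c, lam):
        # minimizer of (x - c)^2 + lam*|x|: shrink c toward 0 by lam/2,
        # and return exactly 0 whenever |c| <= lam/2 (the kink at 0 wins)
        return np.sign(c) * max(abs(c) - lam / 2, 0.0)

    c, lam = 0.4, 1.0                     # here |c| < lam/2, so the optimum is 0
    xs = np.linspace(-2, 2, 400001)       # grid containing 0 exactly
    brute = xs[np.argmin((xs - c)**2 + lam * np.abs(xs))]
    print(soft_threshold(c, lam), brute)  # both ~0.0

The brute-force minimizer agrees with the closed form, and both sit exactly at zero. |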
Given the fundamental theorem of calculus part 1, prove the part 2 | See that we have, given $f(x)$ is continuous and that for some unknown continuous function $F(x)$,
$$\frac d{dx}\left(\int_a^xf(t)dt-[F(x)-F(a)]\right)=f(x)-F'(x)=0$$
The derivative of the integral is taken care of by FTOC Part I, and we are trying to find the function $F(x)$ that satisfies this. It is then clear that since we must have $F'(x)=f(x)$, $F(x)$ is, by definition, an antiderivative of $f(x)$.
Thus,
$$\int_a^xf(t)dt-[F(x)-F(a)]=c\quad\forall\ x,a\in\mathbb R$$
for an unknown constant $c$, since the only function with $0$ as its derivative for all $x$ is a constant function. Setting $x=a$, you can see that $c=0$; thus, we have
$$\int_a^xf(t)dt-[F(x)-F(a)]=0$$
$$\int_a^xf(t)dt=F(x)-F(a)$$ |
Prime ideals in $L \otimes_K \overline{K}$ | First, recall the standard fact that for a finite separable extension like $L/K$ there are precisely $n = [L : K]$ embeddings of $L$ into $\overline K$ which fix $K$. Hence, this tensor product is $L \otimes_K \overline K \cong (\overline K)^n$. It seems to me that you already used this fact, but I just want to explicitly mention it.
Anyway, let's focus on the alleged $2^n - 1$ prime ideals of the form $(1) \times \dots \times (1)$. It seems like you're saying that for every nonempty subset $S$ of $\{1, \dots, n\}$ we associate an ideal by taking a product with $(1)$ in the slots corresponding to the elements of $S$. More formally, we associate to this subset $S$ the product $I_1 \times \dots \times I_n$ such that $I_j = (1)$ if $j \in S$ and $I_j = (0)$ otherwise. Now, this construction will indeed get you all $2^n - 1$ nonzero ideals of $(\overline K)^n$, but they will not all be prime. Indeed, say we had $(1) \times (0) \times (0) \subseteq (\overline K)^3$, corresponding to $S = \{1\}$. Then the quotient $(\overline K)^3 / (1) \times (0) \times (0)$ is $(\overline K / (1)) \times (\overline K / (0)) \times (\overline K / (0)) \cong (\overline K)^2$. This is not a domain, so the ideal $(1) \times (0) \times (0)$ is not prime.
Indeed, not all subsets $S \subseteq \{1, \dots, n\}$ yield prime ideals in this fashion. So which ones do? We can compute the quotient $(\overline K)^n / \prod I_j$ explicitly as $\prod \overline K / I_j$. Now, if all of these factors are trivial we get the trivial ring, which is not a domain. That is, $S = \{1, \dots, n\}$ corresponds to the unit ideal, which is not prime. If at least two of these factors are nontrivial then we get a product of at least two nontrivial rings, which is not a domain. Hence, any subset with $|S| \leq n - 2$ fails to yield a prime ideal. We are left precisely with those subsets with $|S| = n - 1$, corresponding to the ideals of the form $(1) \times \dots \times (1) \times (0) \times (1) \times \dots \times (1)$ which are the unit ideal in all but one slot. The quotient by this ideal is precisely $\overline K$, so it is in fact maximal.
Hence, $Spec(L \otimes_K \overline K)$ corresponds to subsets of $\{1, \dots, n\}$ of cardinality $n - 1$. There are $\binom{n}{n - 1} = n$ of these, as desired.
As a completely unnecessary aside, there is an interesting generalization of this computation to the prime spectrum of an arbitrary product of fields $Spec(\prod_{i \in I} F_i)$. The prime ideals turn out to be in bijection with the ultrafilters on $I$, and the set of all ideals corresponds to the filters on $I$. |
Formula for composite numbers | You can generate the composites with this Excel cell formula which uses no number theoretic functions at all:
=IF(COLUMN()=1,1,IF(ROW()=COLUMN(),ROW()-ROW()*PRODUCT(INDIRECT(ADDRESS(ROW(),1)
&":"&ADDRESS(ROW(),COLUMN()-1))),IF(ROW()>COLUMN(),INDIRECT(ADDRESS(ROW()
-COLUMN(),COLUMN())),"")))
but if you look at the cell formula and the table in the picture, you will find that it is really just a sieve: whenever a row contains a zero, the diagonal entry in that row equals the row index, and this happens exactly at the composite numbers.
You can make it into the characteristic sequence of composite numbers by not multiplying with the row index:
=IF(COLUMN()=1,1,IF(ROW()=COLUMN(),1-PRODUCT(INDIRECT(ADDRESS(ROW(),1)
&":"&ADDRESS(ROW(),COLUMN()-1))),IF(ROW()>COLUMN(),INDIRECT(ADDRESS(ROW()
-COLUMN(),COLUMN())),"")))
But I don't think a recurrence for the prime counting function is possible with these spreadsheet formulas.
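For readers who would rather not decode the Excel, here is my line-by-line Python rendering of the first formula (a translation of mine, not the original author's code; blank cells are skipped in the product, just as Excel's PRODUCT ignores blanks):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def cell(r, c):
        if c == 1:
            return 1
        if r == c:  # diagonal: r - r * PRODUCT(row r, columns 1..c-1)
            prod = 1
            for k in range(1, c):
                v = cell(r, k)
                if v is not None:  # blank cells don't enter the product
                    prod *= v
            return r - r * prod
        if r > c:   # copy the entry c rows above in the same column
            return cell(r - c, c)
        return None  # blank cell

    print([cell(n, n) for n in range(2, 16)])
    # [0, 0, 4, 0, 6, 0, 8, 9, 10, 0, 12, 0, 14, 15]

The printed diagonal is $0$ at the primes and $n$ at the composites, exactly as described. |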
the meaning of the notation behind derivatives | The notation is reminiscent of that for differences. The point is that the first difference of a function, $\Delta f=f(x+\Delta x)-f(x)$, is typically of the same order as $\Delta x$ (it is so at points where $f$ is differentiable and $f'(x)\neq 0$). If you consider the second difference $\Delta^2f=\Delta f(x+\Delta x)-\Delta f(x)$ (a difference of differences), it is a smaller infinitesimal, typically of the order of $\Delta x^2$, etc. This is why the interesting limits are the limits of $\Delta f/\Delta x$, $\Delta^2f/\Delta x^2$, etc., which are precisely the corresponding derivatives. If you divide $\Delta^2f/\Delta x^3$ you get infinity in the limit; if you divide $\Delta^2f/\Delta x$ you get zero, because the infinitesimals have different orders. This is why $d^nf/dx^m$ with $n\neq m$ is not an interesting object, unless, say, at a critical point you have $f'(x)=0$ and $\Delta f$ happens to be typically quadratic. Then it makes sense to look at $\lim \Delta f/\Delta x^2$, which is equal to $f''(x)/2$ (just do a Taylor expansion). At certain singular points such mixed objects are finite and make perfect sense.
Nowadays they teach you that $df/dx$ is just a symbol and not a fraction. This is really sad, since the connection to differences is lost, not to mention the intuition behind the concept. There are fashions in Math like in anything. $df/dx$ is not a fraction now, but it was a fraction for the Mathematicians who created all the fundamentals of infinitesimal calculus in the XVIII and XIX centuries. |
How to show that this limit does not exist? $\lim_{(x,y) \to 0, (x,y)\neq 0} \frac{x^3 y}{(x^4+y^2)\sqrt{x^2+y^2}}$ | On the path $\{(0,y):y \in \mathbb R \}$ the limit is zero.
On the path $\{(x,x^2):x > 0\}$ the expression equals $\frac{x^5}{2x^4\cdot x\sqrt{1+x^2}}=\frac{1}{2\sqrt{1+x^2}}$, which tends to $\frac12\neq 0$ as $x \to 0^+$.
Since two paths give different limits, the limit at $(0,0)$ does not exist. |
Prove that ∀n≥1, (1/(1⋅3))+(1/(3⋅5))+(1/(5⋅7))+...+(1/(2n−1)(2n+1)) =( n/(2n+1))| | Just suggesting another approach.
If you are familiar with partial fractions, notice that
$$\frac{1}{(2k-1)(2k+1)}=\frac12\left( \frac{1}{2k-1}-\frac{1}{2k+1} \right)$$
Using a telescoping sum,
$$\sum_{k=1}^n \frac{1}{(2k-1)(2k+1)}=\sum_{k=1}^n \frac12\left( \frac{1}{2k-1}-\frac{1}{2k+1} \right)=\frac12\left(1-\frac{1}{2n+1} \right)=\frac{n}{2n+1}$$
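Both the partial-fraction identity and the closed form are easy to sanity-check exactly with Python's rationals:

    from fractions import Fraction

    for n in range(1, 51):
        s = sum(Fraction(1, (2*k - 1) * (2*k + 1)) for k in range(1, n + 1))
        assert s == Fraction(n, 2*n + 1)  # exact rational equality
    print("identity verified for n = 1..50")

This is only reassurance, of course; the telescoping above is the actual proof. |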
Differentiation of series with vectors and matrices | Let's denote the elementwise (Hadamard) product by $(A\odot B)$, the inner (Frobenius) product by $(A:B)$ and the ordinary matrix product by $(AB)$.
Let $\{e_k\}$ denote the standard Euclidean basis vectors. The vector of all ones is then $$\eqalign{u=\sum_{k=1}^n e_k \cr}$$
Finally, let's define a vector specific to your problem
$$\eqalign{x = w\odot(Qw)\cr\cr}$$
Putting this all together, your function can be written as
$$\eqalign{
R&=\sum_{i=1}^n\sum_{j=1}^n \,\,(x_i-x_j)^2 \,=\,\sum_{i=1}^n\sum_{j=1}^n\,\Big((e_i-e_j)^Tx\Big)^2 \cr
&=\sum_{i=1}^n\sum_{j=1}^n\,\Big((e_i-e_j)(e_i-e_j)^T\Big):(xx^T) \cr
&=\sum_{i=1}^n\sum_{j=1}^n\,\Big(e_ie_i^T-e_je_i^T-e_ie_j^T+e_je_j^T\Big):(xx^T) \cr
&=\sum_{i=1}^n\,\Big(ne_ie_i^T-ue_i^T-e_iu^T+I\Big):(xx^T) \cr
&=\Big(nI-uu^T-uu^T+nI\Big):(xx^T) \cr
&= 2(nI-uu^T):(xx^T) \cr
\cr
}$$
In this form, it's straightforward to find the differential and gradient
$$\eqalign{
dR &= 2(nI-uu^T):2\,{\rm sym}(dx\,x^T) \cr
&= 4\,{\rm sym}(nI-uu^T):(dx\,x^T) \cr
&= (4nI-4uu^T)x:dx \cr
&= (4nx-4\beta u):dx \cr
&= (4nx-4\beta u):(dw\odot Qw + w\odot Qdw) \cr
&= \Big(Qw\odot(4nx-4\beta u) + Q(w\odot (4nx-4\beta u))\Big) : dw \cr
&= \Big(4n\,Qw\odot x-4\beta Qw + 4nQ(w\odot x)-4\beta Qw\Big) : dw \cr
&= \Big(4n\,Qw\odot x-8\beta Qw + 4nQ(w\odot x)\Big) : dw \cr
&= \Big(4n\,Qw\odot Qw\odot w-8\beta Qw + 4nQ(w\odot Qw\odot w)\Big) : dw \cr
\cr
\frac{\partial R}{\partial w}
&= 4n\,Qw\odot Qw\odot w-8\beta Qw + 4nQ(w\odot Qw\odot w) \cr
}$$
The key property used in the derivation is the mutual commutability of the Hadamard and Frobenius products
$$\eqalign{
A:B &= B:A \cr
A\odot B &= B\odot A \cr
A\odot B:C &= A:B\odot C \cr
}$$
Another fact used is that the all-ones vector $u$ is the identity element for the Hadamard product.
To get rid of the Hadamard products in the final result, change the vectors into diagonal matrices. Then you can use ordinary matrix products
$$\eqalign{
A &= {\rm Diag}(Qw) \cr
W &= {\rm Diag}(w) \cr
\beta &= u^Tx = w^TQw \cr\cr
\frac{\partial R}{\partial w}
&= (4n\,A^2-8\beta Q+ 4n\,QWA)\,w
}$$
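Since derivations like this are easy to get wrong by a factor somewhere, here is a finite-difference check of the final gradient (a sketch under the assumption that $Q$ is symmetric, which the derivation uses when moving $Q$ across the Frobenius product):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2  # derivation assumes Q symmetric
    w = rng.normal(size=n)

    def R(w):
        x = w * (Q @ w)  # x = w Hadamard (Qw)
        return np.sum((x[:, None] - x[None, :])**2)

    def grad(w):
        Qw = Q @ w
        beta = w @ Qw    # beta = u^T x = w^T Q w
        return 4*n*Qw*Qw*w - 8*beta*Qw + 4*n*(Q @ (w*Qw*w))

    eps = 1e-6           # central finite differences
    fd = np.array([(R(w + eps*e) - R(w - eps*e)) / (2*eps) for e in np.eye(n)])
    print(np.max(np.abs(fd - grad(w))))

The printed discrepancy is at the level of finite-difference noise, so the formula checks out. |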
3D positively curved space | For the 3-sphere the radius of curvature is $R$. For more information, have a look at these notes:
Cosmology Mathematical Tripos
If you look at equations 1.1.5 and 1.1.14, you will see that the author defines the radius of the 3-sphere as "a" and uses it to describe the metric. |
Proving the distance between two points is 1 on a continuous function | Without loss of generality, let $y = x + 1$. Define $g(x) = f(x+1) - f(x), 0 \leq x \leq 1$. This function is continuous on $[0, 1]$ since $f(x)$ is continuous. Observe that
$$
g(1) = f(2) - f(1)
$$
and
$$
g(0) = f(1) - f(0) = f(1) - f(2)
$$
If $f(2) = f(1)$, we've found a pair $(x, y)$ satisfying the problem, with $x = 1, y = 2$. Otherwise, one of $g(1)$ and $g(0)$ is greater than $0$ and the other is less than $0$, so there must exist an $x' \in [0, 1]$ such that $g(x')=0$, since $g(x)$ is continuous. Then $f(x') = f(x' + 1)$.
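To see the argument in action, here is a bisection demo in Python with the concrete choice $f(x)=\sin(\pi x)+(x-1)^2$, which satisfies $f(0)=f(2)$ as in the problem (my example, not from the question):

    from math import sin, pi

    f = lambda x: sin(pi * x) + (x - 1)**2  # f(0) = f(2) = 1
    g = lambda x: f(x + 1) - f(x)           # g(0) = -1 < 0 < 1 = g(1)

    lo, hi = 0.0, 1.0                       # sign change, so IVT gives a root
    for _ in range(60):                     # bisection, keeping g(lo) < 0 < g(hi)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)

    x = (lo + hi) / 2
    print(x, f(x), f(x + 1))

After $60$ bisection steps, $f(x)$ and $f(x+1)$ agree to machine precision. |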
Definite integral $\int_{0}^{1}\left(\frac{x^{2}-1}{x^{2}+1}\right)\ln\left(\operatorname{arctanh}x\right)dx$ | $$I=-\int _0^1\frac{1-x^2}{1+x^2}\:\ln \left(\operatorname{arctanh}\left(x\right)\right)\:dx$$
$$=-\int _0^1\frac{1-x^2}{1+x^2}\ln \left(\ln \left(\frac{1+x}{1-x}\right)\right)\:dx-\ln \left(\frac{1}{2}\right)\int _0^1\frac{1-x^2}{1+x^2}\:dx$$
$$=-4\int _0^1\frac{t\ln \left(-\ln \left(t\right)\right)}{\left(1+t^2\right)\left(1+t\right)^2}\:dt-\ln \left(\frac{1}{2}\right)\left(\frac{\pi }{2}-1\right)$$
$$=-2\int _0^1\frac{\ln \left(-\ln \left(t\right)\right)}{1+t^2}\:dt+2\int _0^1\frac{\ln \left(-\ln \left(t\right)\right)}{\left(1+t\right)^2}\:dt-\ln \left(\frac{1}{2}\right)\left(\frac{\pi }{2}-1\right)$$
$$=-2\int _0^{\infty }\frac{\ln \left(x\right)}{1+e^{-2x}}\:e^{-x}\:dx+2\int _0^{\infty }\frac{\ln \left(x\right)}{\left(1+e^{-x}\right)^2}\:e^{-x}\:dx+\frac{\pi }{2}\ln \left(2\right)-\ln \left(2\right)$$
$$=-2\sum _{k=0}^{\infty }\left(-1\right)^k\:\int _0^{\infty }e^{-x\left(2k+1\right)}\ln \left(x\right)\:dx+2\int _0^{\infty }\frac{\ln \left(x\right)}{\left(1+e^{-x}\right)^2}\:e^{-x}\:dx+\frac{\pi }{2}\ln \left(2\right)-\ln \left(2\right)$$
$$=2\sum _{k=0}^{\infty }\left(-1\right)^k\:\left(\frac{\ln \left(2k+1\right)+\gamma }{2k+1}\right)+2\int _0^{\infty }\frac{\ln \left(x\right)}{\left(1+e^{-x}\right)^2}\:e^{-x}\:dx+\frac{\pi }{2}\ln \left(2\right)-\ln \left(2\right)$$
$$=2\sum _{k=0}^{\infty }\frac{\left(-1\right)^k\ln \left(2k+1\right)}{2k+1}+2\gamma \sum _{k=0}^{\infty }\frac{\left(-1\right)^k}{2k+1}+2\underbrace{\int _0^{\infty }\frac{\ln \left(x\right)}{\left(1+e^{-x}\right)^2}\:e^{-x}\:dx}_{K}+\frac{\pi }{2}\ln \left(2\right)-\ln \left(2\right)$$
$$\boxed{I=-2\beta '\left(1\right)+\frac{\gamma \pi }{2}+\ln \left(\pi \right)-\gamma +\frac{\pi }{2}\ln \left(2\right)-2\ln \left(2\right)}$$
Where $\displaystyle \beta '\left(s\right)$ is the derivative of the Dirichlet beta function.
Also see here for the integral $K$. |
Proof of multivariable chain rule for $f(t,x(t))$ | You can use $f(t+h,x(t+h))-f(t,x(t+h))=hf_x(t+\theta_1 h,x(t+h))$ for some $\theta_1\in(0,1)$
and $f(t,x(t+h))-f(t,x(t))=f\big(t,x(t)+hx'(t+\theta_2h)\big)-f(t,x(t))=hx'(t+\theta_2h)\,f_y\big(t,x(t)+\theta_3hx'(t+\theta_2h)\big)$ for some $\theta_2,\theta_3\in(0,1)$. |
Linear Algebra: Cross Product w/ Matrices | It's the matrix$$\begin{bmatrix}0&-v_3&v_2\\v_3&0&-v_1\\-v_2&v_1&0\end{bmatrix},$$as you can check.
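A one-line numerical check of the formula (using NumPy's built-in cross product for comparison):

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])
    w = np.array([4.0, 5.0, 6.0])
    print(K @ w, np.cross(v, w))

Both prints give $(-3, 6, -3)$. |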
Induction Proof Verification | correction
$x^{2(k+1)}=x^{2k+2}=x^{2k}x^{2}=(-x)^{2k}(-x)^{2}=(-x)^{2k+2}=(-x)^{2(k+1)}$
same way you would show $(x+5)^{k+1}=(x+5)^{k}(x+5)$
this step doesn't prove anything $(-x)^{2k}x^{2}=(-x)^{2k+2}$ |
Circumradius of trigonic | The reduction of the problem to three “peaks” sounds promising. Intuitively it seems correct.
Assuming the problem really can be reduced this way, the three “peaks” form a triangle in some plane. Find the circumcircle of that triangle. From the center of that circle, project a line perpendicular to the plane of the circle. Where that line intersects the $x,y$ plane is the center of your sphere. |
Is a connected $\sigma$-locally compact metric space necessarily "connectedly $\sigma$-locally compact"? | Okay, I just found an answer, in the paper http://www.few.vu.nl/~dijkstra/research/papers/sinecurve.pdf. The answer is no; just take a "punctured topologist's sine curve"
$$ X \ := \ \{(0,y) : y \in [-1,1) \} \, \cup \, \{ (x,\sin(\tfrac{1}{x})) : x \in (0,1] \}. $$
It is clear that for every $r \in [-1,1)$, if $A$ is a connected neighbourhood of $(0,r)$ then $A$ contains $\{ (x,\sin(\tfrac{1}{x})) : x \in (0,\varepsilon] \}$ for some $\varepsilon>0$; but then if $A$ is also closed, then $A$ must also contain $\{(0,y) : y \in [-1,1) \}$. So $A$ cannot be compact. |
If $\mathbf{A} = \mathbf{LL}^{\top}$, can we find $\mathbf{LDL}^{\top}$ for diagonal $\mathbf{D}$ without computing the decompositon? | It's not possible: if we had a formula to compute $\mathbf{LDL}^{\top}$, it would be straightforward from there to compute $\mathbf{L}$.
Set $\mathbf{D}$ to be the matrix with $\mathbf{D}_{ii}=1$ and all other entries $0$. Then $\mathbf{LDL}^{\top}$ simplifies to $\mathbf{xx}^\top$ where $\mathbf x$ is the $i^{\text{th}}$ column of $\mathbf L$. In particular, the diagonal entries of $\mathbf{xx}^\top$ are the squared entries of $\mathbf x$, so we can read off $\mathbf x$ up to signs from there alone.
(The signs of the entries of $\mathbf x$ relative to each other can be deduced from looking at the off-diagonal entries. We can't deduce the absolute sign, since negating a column of $\mathbf L$ won't change $\mathbf{LL}^\top$.)
Doing this for all $i$ gets us $\mathbf{L}$.
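A small NumPy illustration of the argument (any positive definite $\mathbf A$ will do; the setup here is my own):

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.normal(size=(4, 4))
    A = B @ B.T + 4 * np.eye(4)          # a positive definite matrix
    L = np.linalg.cholesky(A)            # A = L L^T

    i = 2
    D = np.zeros((4, 4)); D[i, i] = 1.0  # D = e_i e_i^T
    M = L @ D @ L.T                      # output of the hypothetical "LDL^T oracle"

    x = L[:, i]                          # i-th column of L
    print(np.allclose(M, np.outer(x, x)))  # True: M = x x^T
    print(np.sqrt(np.diag(M)), np.abs(x))  # the diagonal recovers |x| entrywise

So a single call to the hypothetical oracle already reveals a column of $\mathbf L$ up to sign. |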
Metrics vs. Norms (Fréchet spaces, Banach spaces, etc.) | In a metric space, $d(u,0)=0$ implies $u=0$ (an axiom of a metric space), so property 3 is satisfied for this "faux norm" coming from a linear-space distance.
So defining $\|x\|=d(x,0)$ only fails to be a norm because of the essential property $\|t\cdot x\|=|t|\|x\|$ for norms.
An example of a Fréchet space that is not normable, is $\mathbb{R}^{\mathbb{N}}$ in its standard complete metric (inducing the product topology): $$d((x_n), (y_n))=\sum_{n=0}^\infty \frac{1}{2^n} \min(1,|x_n - y_n|)$$
It's easy to see that $d$ is translation-invariant, and its induced "faux-norm" $d(x,0)$ does not obey property 2, but of course does obey 1 and 3.
In general, when looking at these concepts for different types of linear spaces, it's good to compare concrete examples. For these product topologies, note that not all coordinates behave the same: in the metric, displacements in high coordinates contribute less to the metric distance. (Because basic open sets essentially depend on finitely many coordinates, this is unavoidable, and any metric inducing the product topology will have this effect; here I chose the standard metric induced by the seminorms $\|(x_n)\|_m=|x_m|, m \in \Bbb N$, in fact.)
There is no norm that can induce the same topology on $\mathbb{R}^{\mathbb{N}}$, because no open neighbourhood of $0$ is "bounded", while in a normed space (absolute) homogeneity is used in a strong way to show that all open balls are bounded in the linked sense.
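A tiny numerical illustration that $d(x,0)$ fails absolute homogeneity for the metric defined above (pure Python; for finitely supported sequences the tail of the sum is zero, so truncation is exact):

    def d(x, y):
        # the standard product metric above, for finitely supported sequences
        n = max(len(x), len(y))
        x = x + [0.0] * (n - len(x)); y = y + [0.0] * (n - len(y))
        return sum(min(1.0, abs(a - b)) / 2**k for k, (a, b) in enumerate(zip(x, y)))

    x = [3.0, 3.0]  # supported on the first two coordinates
    print(d(x, []), 10 * d([xi / 10 for xi in x], []))

If homogeneity held, the two printed numbers would be equal; they are $1.5$ and $4.5$. |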
Decomposition of $\mathbb{C}\mathbb{Z_5}$ into irreducible one dimensional modules using Artin-Wedderburn Theorem | I understand the following part of the question:
Decompose $\mathbb{C} \mathbb{Z}_5$ using Artin Wedderburn Theorem and then determine the isomorphism explicitly in terms of the natural basis on the right hand side.
In this case, the Artin Wedderburn theorem allows us to conclude that there is an isomorphism $\Phi:\Bbb C \Bbb Z_5 \to [M_1(\Bbb C)]^5$, and that the diagonal entries in the image corresponds to the irreducible representations $\phi:\Bbb Z_5 \to \Bbb C^\times$.
Every irreducible representation of $\Bbb Z_5$ has the form $\phi(g^k) = \omega^k$ for some $\omega$ satisfying $\omega^5 = 1$. These solutions $\omega$ can be written in the form $\omega = e^{2 \pi ji/5}$ for $j = 0,1,2,3,4$. With that, we note that one such isomorphism can be written explicitly as
$$
\Phi(g^k) = \pmatrix{1\\&e^{2 \pi ki/5} \\ && \ddots \\ &&& e^{4(2 \pi k)i/5}}, \quad k = 0,\dots,4.
$$
This isomorphism is expressed in terms of the "natural basis on the right hand side". That is, we have written the elements of $\Phi(G) = [M_1(\Bbb C)]^5 \subset M_5(\Bbb C)$ in the way that they would "naturally" be written. This is consistent with the notation and definitions of Dummit and Foote.
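As a sanity check (not part of the problem), one can verify numerically that these diagonal matrices multiply the way $\Bbb Z_5$ does, i.e. that $\Phi$ is a homomorphism:

    import numpy as np

    def Phi(k):
        # diag(1, w^k, w^{2k}, w^{3k}, w^{4k}) with w = e^{2 pi i/5}
        w = np.exp(2j * np.pi / 5)
        return np.diag([w**(j * k) for j in range(5)])

    ok = all(np.allclose(Phi(a) @ Phi(b), Phi((a + b) % 5))
             for a in range(5) for b in range(5))
    print(ok)  # True

All $25$ products match, as they must, since $\omega^5=1$.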
Depending on how you or your professor prefer to think about $[M_1(\Bbb C)]^5$, we could also write this in the form
$$
\Phi(g^k) = (1,e^{2 \pi ki/5} , \cdots , e^{4(2 \pi k)i/5}), \quad k = 0,\dots,4.
$$
In this case, the set corresponding to the product $[M_1(\Bbb C)]^5$ of algebras is taken to be a Cartesian product $M_1(\Bbb C) \times \cdots \times M_1(\Bbb C)$ rather than a "block-diagonal" subset of $M_5(\Bbb C)$.
I am not totally sure what your instructor means by "determine the isomorphism explicitly in terms of the basis {...} of $\Bbb C \Bbb Z_5$".
My best guess is that for this part of the problem, we're meant to interpret each element of $\Bbb C \Bbb Z_5$ as a linear operator on the vector space $\Bbb C \Bbb Z_5$ and obtain the matrix $\Phi(g^k)$ by writing the matrix of that linear operator relative to the given basis.
If that is the case, then we obtain an isomorphism with a subset of $M_5(\Bbb C)$ which is no longer a standard presentation of $[M_1(\Bbb C)]^5$. Also, this construction doesn't have anything to do with the Artin-Wedderburn theorem on its own. |
Ratios of real analytic functions | Quotient of two real-analytic functions is real-analytic whenever denominator is not zero. You can find the proof here: https://www.math.ucdavis.edu/~hunter/intro_analysis_pdf/ch10.pdf (pages 10-11) |
Knowing number of elements in a relation from this information about equivalence classes. | Within an equivalence class, every element is paired with every element; every possible ordered pair is there. So for instance, if one equivalence class is $\{a,b,c,d\}$, you have every possible ordered pair $(x,y)$ where $x,y\in\{a,b,c,d\}$, i.e. you have the full cross product and thus $16$ ordered pairs relating those $4$ elements. The same goes for the three sets: you have $3^2$ pairs for each one. Add them up. |
Criteria of regularity that a Cayley's Diagram should meet . | A graph can be a Cayley diagram only if all the subgraphs consisting of blue edges (and the adjacent vertices) are isomorphic, and all the subgraphs containing red edges (and the adjacent vertices) are isomorphic (this generalises to more generators). This is essentially what "regularity" is getting at.
In this example, it is not regular because the top red region is different from the bottom two red regions. To utilise this to find a contradiction, note that the blue edges imply that there is a symmetry along them (obtained by swapping as follows: $1\leftrightarrow5$, $2\leftrightarrow6$, $3\leftrightarrow7$, $4\leftrightarrow 8$). However, this is not a symmetry of your graph! |
How do I find the value of this sort of series? | I am renaming $p$ to $x$. Call your series $f(x)$. Divide your series by $x$, which gives
$$
f(x)/x = \sum_{k=0}^\infty kx^{k-1}.
$$
An antiderivative of this is
$$
G(x) = \sum_{k=0}^\infty x^k = \frac{1}{1-x}.
$$
Now what you want is
$$
f(x) = xG'(x) = \frac{x}{(1-x)^2}.
$$
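A quick numeric check of the closed form (valid for $|p|<1$, where the series converges):

    x = 0.3
    partial = sum(k * x**k for k in range(200))  # f(x) = sum_k k x^k, truncated
    closed = x / (1 - x)**2
    print(partial, closed)

The truncated series and the closed form agree to many digits (both are about $0.6122449$). |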
show that $x^2e^x-\ln{x}-1>0$ | Rewrite the inequality to be proved as
$$\left(1+u\over2\right)^2e^{(1+u)/2}-\ln\left(1+u\over2\right)-1\gt0$$
for $-1\lt u\lt1$. This can be rewritten as
$$(1+u)^2\sqrt ee^{u/2}-4\ln(1+u)+4\ln2-4\gt0$$
Now $e^{u/2}\ge1+{u\over2}$, so
$$(1+u)^2\sqrt ee^{u/2}\ge(1+2u)\sqrt e\left(1+{1\over2}u\right)\ge\left(1+{5\over2}u\right)\sqrt e$$
Also, $\ln(1+u)\le u$, so
$$-4\ln(1+u)\ge-4u$$
Putting these together, we have, for $|u|\le1$,
$$\begin{align}
(1+u)^2\sqrt ee^{u/2}-4\ln(1+u)+4\ln2-4
&\ge\sqrt e+4\ln2-4+\left({5\over2}\sqrt e-4\right)u\\
&\ge\sqrt e+4\ln2-4-\left|{5\over2}\sqrt e-4\right|
\end{align}$$
It remains to compute $\sqrt e+4\ln2-4\approx0.4213$ while ${5\over2}\sqrt e-4\approx0.1218$.
There might be some slicker way to do this that avoids the messy numerics at the end. I'd like to see it.
Added later: Oh, here's a way to avoid the messy numerics. Note first that $e\gt2.56=1.6^2=(8/5)^2$, so ${5\over2}\sqrt e-4$ is positive. This simplifies things to showing $4\ln2\ge{3\over2}\sqrt e$. If we now allow ourselves to know that $\ln2\gt{2\over3}$, it suffices to show that ${16\over9}\gt\sqrt e$, or ${256\over81}\gt e$, which is clear, since ${256\over81}\gt3\gt e$.
If you want a quick proof that $\ln2\gt{2\over3}$, note that that inequality is equivalent to $8\gt e^2$, which follows from $8\gt{196\over25}=\left(14\over5\right)^2=2.8^2\gt e^2$.
And just to leave no numerical stone unturned, here's why we can confidently say that ${14\over5}\gt e\gt{64\over25}$:
$$e\gt1+1+{1\over2}+{1\over6}=2+{2\over3}\gt2+{14\over25}={64\over25}$$
since $2\cdot25\gt3\cdot14$, and
$$e^{-1}\gt1-1+{1\over2}-{1\over6}+{1\over24}-{1\over120}={11\over30}\gt{5\over14}$$
since $11\cdot14\gt5\cdot30$.
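Finally, a coarse numeric confirmation of the original inequality over $x\in(0,1)$, the range covered by the substitution $x=(1+u)/2$ (reassurance only, not a proof):

    import numpy as np

    x = np.linspace(1e-6, 1 - 1e-6, 1_000_000)
    vals = x**2 * np.exp(x) - np.log(x) - 1
    print(vals.min() > 0, vals.min())

The grid minimum is about $0.105$, comfortably positive. |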
Find all $x \in \mathbb{Z}$ satisfying the congruence equation $x \equiv 1 \pmod 5$ | You are doing fine but actually you could have just written $x = 5m+1$ where $m\in \mathbb{Z}$.
$$\{ x \in \mathbb{Z}: x \equiv 1 \pmod 5 \} = \{ x \in \mathbb{Z}: \exists m \in \mathbb{Z}, x=5m+ 1 \}$$
While it is not wrong, letting $m=n+1$ doesn't really simplify things. |