Isotropy over $p$-adic numbers | Given your questions, you absolutely must purchase the Cassels book, and quickly, otherwise you are completely screwed.
http://www.amazon.com/Rational-Quadratic-Forms-Dover-Mathematics/dp/0486466701
We find tables of the Hilbert Norm Residue Symbol $(a,b)_p$ on pages 43 and 44, one for $\mathbb Q_p$ with odd prime $p,$ a bigger table for $\mathbb Q_2,$ then a smaller one for $\mathbb Q_\infty.$
On page 55, for a quadratic form
$$ f(x_1, \ldots, x_n) = a_1 x_1^2 + \ldots + a_n x_n^2, \; \; a_j \in \mathbb Q_p^\ast,$$
he gives the definition
$$ c_p(f) = \prod_{i < j} (a_i, a_j)_p $$
which is his version of the Hasse-Minkowski Invariant.
On page 59, we have Lemma 2.5, which says a (nondegenerate) ternary is isotropic in $\mathbb Q_p$ if and only if
$$ c_p(f) = (-1, - \det(f))_p$$
Then we have the calculations. This is intricate and prone to error, but not conceptually difficult at this stage.
$$
\begin{array}{ccccccc}
p & (3,7)_p & (3,-15)_p & (7,-15)_p & c_p & (-1,315)_p & \mbox{comment} \\
2 & -1 & 1 & 1 & -1 & -1 & 315 \equiv 3 \pmod 8 \\
3 & 1 & -1 & 1 & -1 & 1 & 35 = \frac{315}{9} \equiv -1 \pmod 3 \\
5 & 1 & -1 & -1 & 1 & 1 & 63 = \frac{315}{5} \equiv 3 \pmod 5 \\
7 & -1 & 1 & -1 & 1 & -1 & 45 = \frac{315}{7} \equiv 3 \pmod 7 \\
\infty & 1 & 1 & 1 & 1 & 1 & \mbox{indefinite}
\end{array}
$$
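The rows for the odd primes can be double-checked mechanically. The following sketch (my own code, not Cassels's) implements the standard formula for the Hilbert symbol at an odd prime, as found e.g. in Serre's *A Course in Arithmetic*:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p and a coprime to p."""
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else -1

def hilbert_odd(a, b, p):
    """Hilbert symbol (a,b)_p for an odd prime p: write a = p^alpha * u,
    b = p^beta * v with u, v units; then
    (a,b)_p = (-1)^(alpha*beta*(p-1)/2) * (u/p)^beta * (v/p)^alpha."""
    alpha, u = 0, a
    while u % p == 0:
        u //= p
        alpha += 1
    beta, v = 0, b
    while v % p == 0:
        v //= p
        beta += 1
    sign = (-1) ** (alpha * beta * ((p - 1) // 2))
    return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha

# rows p = 3, 5, 7 of the table above
for p in (3, 5, 7):
    print(p, hilbert_odd(3, 7, p), hilbert_odd(3, -15, p),
          hilbert_odd(7, -15, p), hilbert_odd(-1, 315, p))
```

The $p=2$ and $p=\infty$ rows need the separate tables from the book.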
Some authors define their version of the Hasse-Minkowski Invariant as the quantity that Cassels would call
$$ c_p(f) \; \cdot \; (-1, - \det(f))_p,$$ in which case you get isotropy if and only if the number is 1. |
Optimized version of sieving for primes | This is just a standard sieve, optimized slightly for space but not for speed. Instead of an array with an entry for each positive integer, it uses an array with only entries for the odd numbers from 3 onwards. So entry $i$ in the array (with $0$-based indexing) represents the integer $2i+3$.
Note that contrary to what you said, it does not matter whether or not the index $i$ is prime, it only matters whether the number it represents, $2i+3$, is prime.
Anyway, the algorithm removes multiples of all numbers up to $n$. That is to say, the corresponding index $i$ is such that $2i+3\le n$. This leads to the expression you ask about.
That is actually not an optimized bound. Every composite number $\le n$ has a prime factor that is $\le\sqrt{n}$, so the crossing-off loop really only needs to run while the represented number $2i+3$ is at most $\sqrt{n}$; they could have lowered that bound significantly. |
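A minimal Python rendition of this odd-only sieve (my own sketch, not the OP's code), using the tighter $\sqrt n$ bound just mentioned:

```python
def primes_upto(n):
    """Sieve storing only odd numbers: index i represents 2*i + 3."""
    if n < 2:
        return []
    size = (n - 1) // 2            # count of odd numbers 3, 5, ..., <= n
    is_prime = [True] * size
    i = 0
    while True:
        p = 2 * i + 3
        if p * p > n:              # the sqrt(n) bound is enough
            break
        if is_prime[i]:
            # cross off odd multiples of p, starting at p*p
            for j in range((p * p - 3) // 2, size, p):
                is_prime[j] = False
        i += 1
    return [2] + [2 * i + 3 for i in range(size) if is_prime[i]]

print(primes_upto(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```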
Cannot understand why do we need to split $x$ variables into $x^+$ and $x^-$ in Linear Programming | When you're converting an optimisation problem from general form to standard form, this is how you deal with variables which have no sign constraints.
If $x$ is a variable without a sign constraint, that means that $x$ can take on both positive and negative values. But in standard form, we want all our variables to be non-negative. To deal with this, we introduce two non-negative variables in place of $x$, chosen so that $x$ can still take the whole spectrum of positive and negative values. If we write $x = x^+ - x^-$ where $x^+, x^- \geq 0$, then $x \geq 0$ when $x^+ \geq x^-$, and $x < 0$ when $x^+ < x^-$. This way we meet the criteria for converting our variable (or variables) into standard form. |
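As a tiny numeric illustration (my own, not part of any standard-form solver): the canonical choice is $x^+=\max(x,0)$ and $x^-=\max(-x,0)$, the positive and negative parts of $x$.

```python
def split_free(x):
    """Represent a free variable x as (x_plus, x_minus), both >= 0,
    with x == x_plus - x_minus."""
    return (max(x, 0.0), max(-x, 0.0))

for x in (3.5, 0.0, -2.25):
    xp, xm = split_free(x)
    assert xp >= 0 and xm >= 0 and xp - xm == x
    print(x, "->", (xp, xm))
```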
Mutually orthogonal vectors in a complex vector space? | This is not possible because you can't have a collection of $n$ orthonormal vectors in $\mathbb{C}^m$ if $n > m$. Alternatively,
$$ \operatorname{rank}(A^{\dagger} A) \leq \min ( \operatorname{rank}(A^{\dagger}), \operatorname{rank}(A) ) \leq \operatorname{rank}(A) \leq m $$
since $A$ is an $m \times n$ matrix with $m < n$ and in particular, we can't have $A^{\dagger} A = I_n$ as this would imply that $\operatorname{rank}(A^{\dagger} A) = n$. |
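A quick numerical illustration with NumPy (a sketch; the size $2\times 3$ is an arbitrary choice for $m < n$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
G = A.conj().T @ A                  # the n x n matrix A^dagger A
print(np.linalg.matrix_rank(G))     # at most m = 2, so G cannot equal I_3
```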
Binomial Probability Problem (Not even sure?) | Hint: Use the Law of Total Probability
$$\mathsf P(A)=\mathsf P(A\mid B)\mathsf P(B)+\mathsf P(A\mid B^\complement)\mathsf P(B^\complement)$$ |
Check whether the given function is injective or surjective | For surjectivity, you need to ask yourself whether every $z \in \mathbb{Z}$ is the image of some $n \in \mathbb{N}$.
Scratch work: for $n$ odd, you have $f(n)=\tfrac{n+1}{2}$, which is clearly positive. How do you need to pick $n$ to get a fixed, positive $z$-value?
$$\frac{n+1}{2} = z \iff n=2z-1$$
Note that for $z > 0$, $n=2z-1$ is odd so... Similar scratch work for $n$ even, corresponding to the non-positive $z$-values. Concluding:
for $z > 0$, take $n=2z-1$, then $n$ is odd and $f(n) = \ldots$
for $z \le 0$, take $n=-2z$, then $n$ is even and $f(n) = \ldots$ |
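Concretely (a sketch assuming the full definition of $f$ implied by the scratch work, $f(n)=(n+1)/2$ for odd $n$ and $f(n)=-n/2$ for even $n$, with $0\in\mathbb N$):

```python
def f(n):
    # assumed from the scratch work: (n+1)/2 for odd n, -n/2 for even n
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

image = {f(n) for n in range(200)}        # n = 0, 1, ..., 199
print(image == set(range(-99, 101)))      # every z in [-99, 100] is hit
```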
Estimating lambda parameter | Comment continued: Just to illustrate the suggestion. (Not a proof.)
If this makes sense to you, then you can fill in the analytic details.
Suppose $n=20,$ $\mu=\sqrt{3},$ $\mu^2 = 3.$ Then let's see
what happens with a million random samples of size 20 from an exponential
distribution with mean $\mu.$ Denote $Q = \frac 1n\sum_i X_i^2.$ (Computation using R statistical software.)
m = 10^6; n = 20; mu = sqrt(3); lam=1/mu; x = rexp(m*n, lam)
MAT=matrix(x, nrow=m) # m x n matrix, each row a sample of 20
a = rowMeans(MAT); mean(a)
## 1.731704 # aprx E(X_i) = sqrt(3)
q = rowMeans(MAT^2); mean(q)
## 5.996244 # aprx E(Q) = 6 |
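For readers without R, here is the same experiment in Python/NumPy (with $m=10^5$ rather than $10^6$, to keep it fast; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 100_000, 20
mu = np.sqrt(3.0)
x = rng.exponential(scale=mu, size=(m, n))  # NumPy parametrizes by the mean
a = x.mean(axis=1)                          # per-sample means
q = (x ** 2).mean(axis=1)                   # per-sample Q = (1/n) sum X_i^2
print(a.mean())  # approx E(X_i) = sqrt(3) = 1.732...
print(q.mean())  # approx E(Q) = 2*mu^2 = 6
```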
Question involving the Cauchy-Goursat Theorem | Without loss of generality let $z_0 = 0$ and $C$ the unit circle traversed counterclockwise. Then
$$ \int_C \frac{dz}{z^n} = i\int_0^{2\pi} e^{-in\phi} e^{i\phi} d\phi = \frac{1}{1-n} \left. e^{i(1-n)\phi} \right|_0^{2\pi} = 0
$$
for $n \neq 1$. The case $n = 1$ follows from the Cauchy integral formula. |
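One can sanity-check this numerically by discretizing the parametrization (a sketch using a simple trapezoidal sum on the unit circle):

```python
import numpy as np

phi = np.linspace(0.0, 2 * np.pi, 20001)
dphi = phi[1] - phi[0]
for n in (1, 2, 3):
    integrand = 1j * np.exp(1j * (1 - n) * phi)   # dz/z^n with z = e^{i phi}
    val = np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dphi
    print(n, val)   # ~2*pi*i for n = 1, ~0 otherwise
```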
E is an elliptic curve over the finite field Z/pZ. Let N= number of points on E. If N is divisible by p, show that either N=p or p=2 | As stated, your problem is slightly wrong. Let's see first what the Hasse bound gives us. Let $N=pk$ for some $k\in \mathbb N_{>0}$ be the number of points of $E$. Then $|pk-p-1|\leq 2\sqrt{p}$. Now if $k\geq 2$ we have that $|pk-p-1|\geq p-1$. But the inequality $p-1\leq 2\sqrt{p}$ doesn't hold for $p\geq 7$. Hence, if $p\geq 7$, then $k=1$ and $N=p$.
Let us check the cases $p=3,5$. If $p=5$ and $k\geq 3$, then $|pk-p-1|\geq 9$, which is bigger than $2\sqrt{5}$, so the only possibilities are $k=1,2$. And $k=2$ does in fact occur: the elliptic curve $y^2=x^3+3x$ over $\mathbb F_5$ has exactly $10$ points. Indeed, $x=0$ contributes the single point $(0,0)$; for $x=1,2,3,4$ the values $x^3+3x\equiv 4,4,1,1\pmod 5$ are all nonzero squares, each contributing two points; together with the point at infinity this gives $1+8+1=10$ points. (The curve is nonsingular, since its discriminant is $-4\cdot 3^3 \equiv 2 \pmod 5$, which is nonzero.)
For $p=3$, one can check directly that the elliptic curve $y^2 = x^3 + x^2 + 1$ over $\mathbb F_3$ has $6$ points and the curve $y^2 = x^3 + x^2 + 2$ has $3$ points. Therefore the right claim is: if $p$ divides $N$, then $N=p$ if $p\geq 7$, while $N\in\{p,2p\}$ if $p\in\{3,5\}$. |
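The $p=3$ point counts are easy to confirm by brute force (a quick sketch; the count includes the point at infinity):

```python
from collections import Counter

def count_points(p, f):
    """#E(F_p) for y^2 = f(x), including the point at infinity."""
    sq = Counter(y * y % p for y in range(p))   # how many y give each square
    return 1 + sum(sq.get(f(x) % p, 0) for x in range(p))

print(count_points(3, lambda x: x**3 + x**2 + 1))  # 6
print(count_points(3, lambda x: x**3 + x**2 + 2))  # 3
```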
Is there a proof that $\int \frac {dx}{x}=\ln |x|+c$? | If we accept that $\frac{d}{dx}e^x=e^x$, then we can simply use implicit differentiation.
$$\begin{align}y&=\ln x\\x&=e^y\\\frac{d}{dx}x&=\frac d {dx}e^y\\1&=\frac{dy}{dx}e^y\\\frac 1{e^y}&=\frac{dy}{dx}\\\frac{dy}{dx}&=\frac 1 x\end{align}$$ |
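The implicit-differentiation argument above covers $x>0$; a quick central-difference check (my own sketch) confirms that $\ln |x|$ has derivative $1/x$ on both sides of the origin:

```python
import math

def deriv(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (2.0, 0.5, -0.5, -2.0):
    print(x, deriv(lambda t: math.log(abs(t)), x))  # ~1/x in each case
```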
Solvability and representation of finite groups | The finite Heisenberg group $H_n(q)$ has $q-1$ irreducible representations of dimension $q^n$ (and $q^{2n}$ irreducible representations of dimension $1$), but its derived series just has length $2$. |
How to solve this combinatorial problem correctly? | Here is how I solve it:
1- Use FrobeniusSolve to find all possible solutions (how many candies each kid gets):
FrobeniusSolve[{1, 1, 1}, 10]
(*Out: {{0, 0, 10}, {0, 1, 9}, {0, 2, 8}, {0, 3, 7}, ...} *)
(*Output Length: 66 *)
2- Since each kid should get at least one candy, filter out the possibilities that contain a zero:
ps = DeleteCases[FrobeniusSolve[{1, 1, 1}, 10], l_ /; MemberQ[l, 0]];
(*Out: {{1, 1, 8}, {1, 2, 7}, {1, 3, 6}, {1, 4, 5}, ...} *)
(*Output Length: 36 *)
To create a list like yours that specifies which candy goes to which kid, I use MapIndexed, for example:
Flatten[MapIndexed[ConstantArray[#2, #1] &, {1, 8, 1}]]
(*Out: {1, 2, 2, 2, 2, 2, 2, 2, 2, 3} *)
For finding out how many ways you can distribute in {1,8,1} format, I use Permutations, which produces all the tuples without duplicates:
Length[Permutations[Flatten[MapIndexed[ConstantArray[#2, #1] &, {1, 8, 1}]]]]
(*Out: 90 *)
3- Now just apply previous steps to all the possibilities stored in ps and sum it up:
Sum[Length[Permutations[Flatten[MapIndexed[ConstantArray[#2, #1] &, i]]]], {i,ps}]
(*Out: 55980 *) |
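For cross-checking outside Mathematica, the same count follows from multinomial coefficients, or from inclusion-exclusion as $3^{10}-3\cdot 2^{10}+3$; a Python sketch:

```python
from math import factorial

total = 0
for a in range(1, 9):
    for b in range(1, 10 - a):
        c = 10 - a - b          # c >= 1 automatically in this range
        total += factorial(10) // (factorial(a) * factorial(b) * factorial(c))

print(total)                     # 55980
print(3**10 - 3 * 2**10 + 3)     # 55980, by inclusion-exclusion
```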
How to find all ring homomorphisms from $\mathbb Z_{12} \to \mathbb Z_{30}$? | I don't know why the OP thought that there is "a lot to check". There are only six possibilities, $f(1)\in\{0,5,10,15,20,25\}$ (the solutions of $12x\equiv 0 \pmod{30}$), and among these the idempotents are $\{0,10,15,25\}$, so there are four ring homomorphisms. |
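The check is short enough to brute-force (a sketch: a homomorphism $\mathbb Z_{12}\to\mathbb Z_{30}$ is determined by $x=f(1)$, which must satisfy $12x\equiv 0$ and $x^2\equiv x \pmod{30}$):

```python
candidates = [x for x in range(30) if 12 * x % 30 == 0]
homs = [x for x in candidates if x * x % 30 == x]
print(candidates)  # [0, 5, 10, 15, 20, 25]
print(homs)        # [0, 10, 15, 25]
```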
Find the length of the the green line | The given information is not enough to find the length of the green line.
If we make the assumption that those right triangles are congruent to each other, then we may proceed as follows.
Using the length of the blue segment and the measure of the angle opposite to it, the length of the green line is $$8 \tan (38.928^\circ) \approx 6.4616 .$$ |
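In Python (assuming, as the decimal value suggests, that the angle is given in degrees):

```python
import math

length = 8 * math.tan(math.radians(38.928))  # convert degrees to radians
print(length)  # ~6.46
```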
Can a function be both upper and lower quasi-continuous? | For a slightly non-trivial example, consider
$$f(x)=\begin{cases}\sin\Bigl(\dfrac1x\Bigr)&x\ne0,\\a&x=0.\end{cases}$$
I think you will find that this function is quasi-continuous (i.e. upper and lower) if $\lvert a\rvert\le1$, more generally upper quasi-continuous iff $a\ge -1$ and lower quasi-continuous iff $a\le1$. |
Why does this pattern occur when using modular arithmetic against set of prime numbers? | Let $p_n$ be the $n$'th prime, and let $y_n = p_{n + p_{n}} \mod p_n$, i.e.
$p_{n+p_{n}} = x_n p_n + y_n$ where $0 \le y_n < p_n$ and $x_n = \lfloor p_{n+p_n}/p_n \rfloor$. Now the point is that
$r_n = p_{n+p_n}/p_n$ will tend to grow, but
slowly: $p_n \sim n \log n$, $p_{n+p_n} \sim (n + p_n) \log(n + p_n) \sim n (\log n)^2$, so $r_n \sim \log n$. $x_n = \lfloor r_n \rfloor$ will typically be constant for long intervals. For example, it is $12$ for
$3070 \le n \le 7073$. On an interval where $x_n = c$, $y_n = (r_n - c) p_n$
will tend to be increasing, from near $0$ at the beginning of the interval to near $p_n$ at the end. |
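A short script to reproduce the quantities involved for small $n$ (a sketch; $1$-based indexing with $p_1=2$):

```python
def first_primes(count):
    """The first `count` primes, by trial division."""
    ps, cand = [], 2
    while len(ps) < count:
        if all(cand % q for q in ps if q * q <= cand):
            ps.append(cand)
        cand += 1
    return ps

ps = first_primes(400)
for n in range(1, 11):
    p_n = ps[n - 1]
    p_big = ps[n + p_n - 1]            # p_{n + p_n}
    x_n, y_n = divmod(p_big, p_n)      # quotient and remainder
    print(n, p_n, p_big, x_n, y_n)
```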
How to mathematically write: last matrix position that equals one? | Usually for a matrix A we identify the elements by the symbol $$A_{ij}\quad \text{or}\quad a_{ij}$$
where $i$ is the row index and $j$ is the column index.
In your matrix for example $a_{34}=1$. |
Resolve the equation $xy''-(x+n)y'+ny=0$ | $$xy'' - (x+n)y' + ny = x(y'-y)' - n(y'-y) = 0$$
Let $y'-y = z$. We then get
$$xz' - nz = 0 \implies \dfrac{dz}z = n \dfrac{dx}x \implies \log(z) = n \log(x) + c \implies z = k x^n$$
Hence,
$$y'-y = kx^n \implies (y e^{-x})' = kx^ne^{-x} \implies y(x) = k e^x \int_0^x t^n e^{-t} dt + ce^x$$ |
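As a spot check for $n=3$ (my own sketch): both $e^x$ and the polynomial $x^3+3x^2+6x+6$, which arises from the integral formula above with $k=-1$, $c=6$, satisfy the equation.

```python
import math

def residual(y, y1, y2, n, x):
    """Value of x*y'' - (x+n)*y' + n*y at the point x."""
    return x * y2(x) - (x + n) * y1(x) + n * y(x)

n = 3
for x in (0.5, 1.0, 2.0, 5.0):
    r1 = residual(math.exp, math.exp, math.exp, n, x)
    r2 = residual(lambda t: t**3 + 3*t**2 + 6*t + 6,
                  lambda t: 3*t**2 + 6*t + 6,
                  lambda t: 6*t + 6, n, x)
    assert abs(r1) < 1e-9 and abs(r2) < 1e-9
print("both solutions satisfy the ODE for n = 3")
```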
given Sn, find a_n and sum of a_n | Actually, $a_1 = s_1 = 0, a_n = \frac{2}{n} - \frac{2}{n+1} ,n \ge 2$, so there is no contradiction. |
There's a link between annihilating polynomials and annihilators (ring theory)? | If $V$ is a finite dimensional vector space over $k$ and $M$ is an endomorphism of $V$, then $V$ is a module over the polynomial ring $k[x]$ by having $x^n$ act like $M^n$ and extending linearly.
The annihilator of $V$ will be the set of annihilating polynomials for $M$. |
Does knowing the nth digit of $\pi$ help in finding the next digit? | That depends, perhaps, on which kinds of "help" you're willing to accept.
$\pi$ is widely expected to be normal in all bases. If this is true, an arguable answer to your question would be "no": In a precise technical sense, knowing one digit of $\pi$ tells you nothing about what the next digit is. If you pick a digit position at random, every digit is equally likely to be there no matter what the preceding digit is.
(A caveat here is that nobody has been able to prove that $\pi$ is normal; we mostly believe it is because nobody has been able to suggest a good reason it wouldn't be normal either, and the trillions of digits that have been computed so far sure look like it's normal in base 10).
On the other hand, some possible algorithms for approximating $\pi$ can be sped up very slightly by knowing any one of the previous digits. For example, suppose you compute $\pi$ by bisection, testing whether each candidate is greater or smaller than $\pi$ (say, by computing $\cos\frac x2$ for the candidate $x$ using the power series). Then knowing a digit in advance lets you narrow the candidate interval and skip, once during the entire computation, the bisection steps that would otherwise be needed to pin that digit down. However, the most efficient known methods for computing $\pi$ are not directly amenable to this optimization anyway -- choosing a method that can make use of the next-to-last digit would be a net loss in efficiency. |
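For concreteness, here is the bisection idea in a few lines of Python (a toy, far from how $\pi$ is computed in practice): each iteration tests a candidate against $\pi$ via the sign of $\cos(x/2)$.

```python
import math

lo, hi = 3.0, 3.5              # an interval known to contain pi
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if math.cos(mid / 2) > 0:  # cos(x/2) > 0 exactly when x < pi here
        lo = mid
    else:
        hi = mid
print(lo)  # 3.14159265358...
```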
Tate's proof that the character group of a local field is isomorphic to its additive group | Any topological abelian group naturally carries a uniform space structure, and there is a notion of completion for uniform spaces. It is not too hard to show that a locally compact (Hausdorff) abelian group is complete in this uniformity; this was discussed in another MSE post in somewhat broken English, but the idea should be clear: if $x_\alpha$ is a Cauchy net, then the differences $x_\alpha - x_\beta$ lie within a compact neighborhood of the identity for all $\alpha, \beta$ sufficiently large. Hold such $\beta$ fixed and let such $\alpha$ vary; by compactness, the limit
$$\lim_\alpha x_\alpha - x_\beta$$
exists. If this limit is $x$, then $x+x_\beta$ is the limit of the original Cauchy net.
This applies in particular to the Pontryagin dual $\widehat{k^+}$: it is locally compact, hence complete as a uniform space. So is the image of $k$ under the map $i: k^+ \to \widehat{k^+}$ induced by the pairing on $k^+$, according to Tate. In other words, $i(k^+)$ is complete as a subspace of $\widehat{k^+}$, and since complete subspaces of complete Hausdorff spaces are closed, this completes Tate's argument.
I actually hadn't known this argument; it's nice. But I think the surjectivity you were asking about can also be deduced by other means. I'll be sketchy here. Let $\mathcal{O} \hookrightarrow k^+$ be the ring of integers in the completion, viewed as an additive subgroup. We have an exact sequence
$$0 \to \mathcal{O} \to k^+ \to k^+/\mathcal{O} \to 0$$
where the quotient is a discrete group (for example, in the paradigmatic case where $k^+ = \mathbb{Q}_p$ and $\mathcal{O} = \mathbb{Z}_p$, the quotient will be a Prüfer group $\mathbb{Z}[1/p]/\mathbb{Z}$). Now apply the Pontryagin dual functor to this exact sequence. It turns out that the Pontryagin dual of the discrete group $k^+/\mathcal{O}$ is isomorphic to the compact group $\mathcal{O}$ (and so the dual of $\mathcal{O}$ will be $k^+/\mathcal{O}$ again). Thus the dual sequence takes the form
$$0 \to \mathcal{O} \to \widehat{k^+} \to k^+/\mathcal{O} \to 0.$$
There is a map from the first short exact sequence to the second exact sequence, induced from the pairing on $k^+$, that restricts to the identity maps on the ends. One concludes that the middle map $k^+ \to \widehat{k^+}$ is an isomorphism of topological abelian groups, by a topological version of the short five lemma (see remark 4 here).
To answer another question in the post: if the underlying space is Hausdorff and the uniformity admits a countable sub-base -- which turns out to be the case here -- then the uniform space is in fact metrizable, i.e., the uniformity is indeed induced from a metric. |
Linear Programming Confusion about Complexity | Linear Programming can be solved in polynomial time with ellipsoid algorithms.
However, in practice, the simplex algorithm is more efficient, despite the fact that its worst-case running time is exponential.
So in summary: linear programming problems are not "exponentially complex", but they are typically solved with an algorithm whose running time may be exponential in worst-case scenarios. |
Is it possible to find the difference between two sets in set-builder form? | The result $A-B$ can be written, in set-builder form, as $\{x\in\mathbb{R}\mid 0<x<3 \wedge \neg (1\leq x\leq 5)\}$. Other set operations such as union, intersection, and complement can also be expressed this way. However, when you go on to use such a set, you will usually need to simplify the condition and describe the elements explicitly. |
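The set-builder condition translates directly into a predicate; a tiny Python sketch with $A=(0,3)$ and $B=[1,5]$:

```python
in_A = lambda x: 0 < x < 3                     # A = (0, 3)
in_B = lambda x: 1 <= x <= 5                   # B = [1, 5]
in_diff = lambda x: in_A(x) and not in_B(x)    # A - B, which simplifies to (0, 1)

for x in (0.5, 0.999, 1.0, 2.0, 4.0):
    print(x, in_diff(x))
```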
Prove that $\inf_{t>0}\frac{ω(t)}t=\lim_{t\to\infty}\frac{ω(t)}t$ if $ω:[0,\infty)\toℝ$ be bounded above on every finite interval and subadditive | I'm sorry, but the answer to my question is trivial: If $\omega_0>-\infty$, then $\gamma=\omega_0+\varepsilon$ for some $\varepsilon>0$ and we all know that, by definition of the infimum, there is some $t_0>0$ with $$\frac{\omega(t_0)}{t_0}<\omega_0+\varepsilon\;.$$ And if $\omega_0=-\infty$, then $$\frac{\omega(t_0)}{t_0}<-\left|\theta\right|\le\theta\;.$$ for some $t_0>0$, again, by definition of the infimum. |
Curious limit of a sequence used to prove Etemadi's SLLN | The mentioned paper of Etemadi is this one. I found a free copy by search engine, but I'm not sure if it is legal so I will not include a link here. (I can't find the result in Durrett, but the book in question has at least 4 editions.) In addition to the already linked question, there is even this deleted question. Someone's class has a lot of MSE users?
Anyway. Fix an arbitrary $\alpha>1$. For every natural $k\ge\lceil \alpha \rceil $, there is an $n=n(k)$ such that $k$ belongs to an interval of the form $$\lceil \alpha^n \rceil \le k \le \lceil \alpha^{n+1} \rceil -1.$$
Thus by $x_i\ge 0$,
$$ \sum_1^{\lceil \alpha^{n} \rceil} x_i \le \sum_1^{k} x_i \le \sum_1^{\lceil \alpha^{n+1} \rceil} x_i, $$
and therefore,
$$ \frac1{\lceil \alpha^{n+1} \rceil-1}\sum_1^{\lceil \alpha^{n} \rceil} x_i \le \frac1k\sum_1^{k} x_i \le \frac1{\lceil \alpha^{n} \rceil}\sum_1^{\lceil \alpha^{n+1} \rceil} x_i. $$
Actually, we want a slightly different inequality. Note that $\lceil \alpha^{n+1} \rceil \le \lceil \alpha \lceil \alpha^n \rceil \rceil \le {\lceil \alpha^{n} \rceil }\alpha+1$. This means that we have the more relevant inequalities
$$\frac{1}{\alpha \lceil \alpha^{n}\rceil} \sum_1^{\lceil \alpha^{n} \rceil} x_i \le \frac1k\sum_1^{k} x_i \le \frac{\alpha \lceil \alpha^{n+1} \rceil}{\lceil \alpha^{n+1} \rceil-1}\frac1{\lceil \alpha^{n+1} \rceil}\sum_1^{\lceil \alpha^{n+1} \rceil} x_i .$$
As $k\to\infty$, $n(k)\to\infty$. Thus, we obtain
$$ \frac c{\alpha}=\lim_{n\to\infty} \frac1{\alpha \lceil \alpha^{n} \rceil}\sum_1^{\lceil \alpha^{n}\rceil} x_i=\lim_{k\to\infty} \frac1{\alpha \lceil \alpha^{n(k)} \rceil}\sum_1^{\lceil \alpha^{n(k)} \rceil} x_i=\liminf_{k\to\infty} \frac1{\alpha \lceil \alpha^{n(k)} \rceil}\sum_1^{\lceil \alpha^{n(k)} \rceil} x_i \le \liminf_{k\to\infty} \frac1k\sum_1^{k} x_i .$$
In an analogous manner, using the fact that $\lim_{n\to\infty} \frac{\lceil \alpha^{n+1}\rceil}{\lceil \alpha^{n+1}\rceil-1}=1$, we can control the limsup by $\alpha c$. Since a liminf is bounded by the corresponding limsup, we have proven that
$$ \frac c\alpha \le \liminf_{k\to\infty} \frac1k\sum_1^{k}x_i \le \limsup_{k\to\infty} \frac1k\sum_1^{k}x_i \le \alpha c.$$
Since $\alpha>1$ is arbitrary, we conclude $\liminf_{k\to\infty} \frac1k\sum_1^{k}x_i = \limsup_{k\to\infty} \frac1k\sum_1^{k}x_i = c$, and therefore
$$ \lim_{k\to\infty} \frac1k\sum_1^{k}x_i = c.$$ |
Ratio and Proportion Maths Problem Solving | You should assign letters to the unknown quantities and form simultaneous equations from them.
I would let the jar's weight be $J$ and the total weight of the cookies be $C$.
The first statement tells you:
$$J+C=700\tag 1$$
Then Meghan eats $\frac 45$ of the cookies, and the new total weight is $400g$. Can you then see that this means:
$$J +\frac15 C = 400 \tag 2$$
You now have a pair of simultaneous equations. I'm sure you know how to continue this. |
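For completeness, subtracting $(2)$ from $(1)$ eliminates $J$; in code:

```python
# J + C = 700 and J + C/5 = 400; subtracting leaves (4/5)*C = 300
C = (700 - 400) * 5 / 4
J = 700 - C
print(J, C)  # 325.0 375.0
```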
$f$ is product and $ \sum_{d|n}f(d)=(1+a_1t)...(1+a_kt) $ | Notice that $f$ is a multiplicative function. To show this, note that if $n = ab$ with $a$ and $b$ coprime, then the prime factors of $n$ are split between $a$ and $b$. If $k$ primes divide $n$, $k_1$ primes divide $a$ and $k_2$ primes divide $b$, we have $k_1 + k_2 = k$. Therefore
$$f(n) = t^k = t^{k_1}t^{k_2}=f(a)f(b)$$
Let $F(n)$ denote the sum. It follows that $F$ is also multiplicative. For a prime power $p^a$, we have
$$F(p^a) = f(1) + f(p) + f(p^2) + \cdots + f(p^a) = 1+\underbrace{t+t+\cdots+t}_{a\ \text{times}}=1+at$$
Therefore if $n$ has prime factorization $$n = p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$$
it follows that we have
$$F(n) = F(p_1^{a_1}\cdots p_k^{a_k}) = F(p_1^{a_1})\cdots F(p_k^{a_k})=(1+a_1t)\cdots(1+a_kt)$$ |
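A quick numerical confirmation for $n=12=2^2\cdot 3$, where the identity predicts $(1+2t)(1+t)$ (a sketch; $t$ is given a test value, since the identity is polynomial in $t$):

```python
def omega(n):
    """Number of distinct prime factors of n."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1
    return count + (1 if n > 1 else 0)

def F(n, t):
    """Sum of f(d) = t^omega(d) over the divisors d of n."""
    return sum(t ** omega(d) for d in range(1, n + 1) if n % d == 0)

t = 5
print(F(12, t), (1 + 2 * t) * (1 + t))   # both 66
```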
Finding remaining eigenvector | The row echelon form has leading $1$'s in columns $2$ and $3$, so the variables $x_1$ and $x_4$ corresponding to the other two columns are arbitrary. The nonzero rows correspond to equations
$$ \eqalign{x_2 + 2 x_4 &= 0\cr
x_3 &= 0\cr} $$
The general solution is thus
$$\pmatrix{x_1 \cr -2 x_4 \cr 0\cr x_4}
$$
With $x_1 = 1$ and $x_4 = 0$ you get vector
$$ \pmatrix{1 \cr 0 \cr 0\cr 0\cr}$$
and with $x_1 = 0$ and $x_4 = 1$ you get $$\pmatrix{0\cr -2 \cr 0\cr 1\cr}$$ |
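Assuming the nonzero rows of the echelon form are $(0,1,0,2)$ and $(0,0,1,0)$ (my reading of the equations above), both vectors are readily checked to lie in the null space:

```python
import numpy as np

R = np.array([[0., 1., 0., 2.],    # x2 + 2 x4 = 0
              [0., 0., 1., 0.]])   # x3 = 0
v1 = np.array([1., 0., 0., 0.])
v2 = np.array([0., -2., 0., 1.])
print(R @ v1, R @ v2)   # both [0. 0.]
```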
BCH formula for finding $\hat{C}$ in equation $e^{\hat{A}}e^{\hat{B}}=e^{\hat{A}+\hat{B}}e^{\hat{C}}$ | For $t\in\mathbb{R}$, let $$\hat F(t):=\exp(\hat At)\exp(\hat Bt)\exp\big(-(\hat A+\hat B)t\big)\,.$$
Then,
$$\hat F'(t)=\text{e}^{\hat At}\hat A\,\text{e}^{\hat Bt}\,\text{e}^{-(\hat A+\hat B)t}+\text{e}^{\hat At}\,\hat B\text{e}^{\hat Bt}\,\text{e}^{-(\hat A+\hat B)t}-\text{e}^{\hat At}\,\text{e}^{\hat Bt}\,(\hat A+\hat B)\text{e}^{-(\hat A+\hat B)t}\,.$$
Hence,
$$\hat F'(t)=\text{e}^{\hat At}\,\Big[(\hat A+\hat B),\text{e}^{\hat Bt}\Big]\,\text{e}^{-(\hat A+\hat B)t}\,.$$
Since $\hat Z:=[\hat A,\hat B]$ commutes with $\hat B$, we obtain
$$\left[\hat A,\hat B^k\right]=k\,\hat Z\,\hat B^{k-1}$$
so that
$$\left[\hat A,\text{e}^{\hat Bt}\right]=\hat Zt\,\text{e}^{\hat Bt}\,.$$
Because $\hat B$ commutes with $\text{e}^{\hat Bt}$, we get
$$\hat F'(t)=\text{e}^{\hat At}\,\hat Zt\,\text{e}^{\hat Bt}\,\text{e}^{-(\hat A+\hat B)t}\,.$$
As $\hat Z$ commutes with $\hat A$ as well, we obtain
$$\hat F'(t)=\hat Zt\,\left(\text{e}^{\hat At}\,\text{e}^{\hat Bt}\,\text{e}^{-(\hat A+\hat B)t}\right)=\hat Zt\,\hat F(t)\,.$$
Consequently,
$$\hat F(t)=\exp\left(\int_0^t\,\hat Zs\,\text{d}s\right)\,\hat F(0)=\exp\left(\frac{1}{2}\,\hat Zt^2\right)\,,$$
since $\hat F(0)$ is the identity operator. Hence, $\hat F(1)=\exp\left(\dfrac{1}{2}\,\hat Z\right)$, which implies
$$\text{e}^{\hat A}\,\text{e}^{\hat B}=\text{e}^{\hat A+\hat B}\,\text{e}^{\frac{1}{2}\hat Z}=\text{e}^{\frac{1}{2}\hat Z}\,\text{e}^{\hat A+\hat B}=\text{e}^{\hat A+\hat B+\frac{1}{2}\hat Z}\,.$$ |
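A concrete sanity check with $3\times 3$ strictly upper-triangular (Heisenberg) matrices, where $[\hat A,\hat B]$ indeed commutes with both; since these matrices satisfy $M^3=0$, the exponential series terminates (my own sketch):

```python
import numpy as np

def expm_nil(M):
    """exp(M) for a 3x3 matrix with M^3 = 0: the series stops at M^2/2."""
    return np.eye(3) + M + M @ M / 2

A = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
Z = A @ B - B @ A                       # [A, B]; commutes with A and B here
lhs = expm_nil(A) @ expm_nil(B)
rhs = expm_nil(A + B) @ expm_nil(Z / 2)
print(np.allclose(lhs, rhs))            # True
```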
Let $a,b \in$ group $G$ such that $ab=ba, \gcd(O(a),O(b))=1$. Prove that $O(ab)=O(a)O(b)$. | You're close!
Unfortunately your proof isn't correct, because $a^pb^p=e\Rightarrow a^p=b^{-p}$, but that element may still not be $e$. The result is true nonetheless: it is in fact $e$, because if you consider the two cyclic subgroups $\langle a\rangle$ and $\langle b\rangle$ of $G$ and set $H=\langle a\rangle\cap \langle b\rangle$, then by Lagrange's Theorem $|H|\mid m$ and $|H|\mid n$, since $H$ is a subgroup of both $\langle a\rangle$ and $\langle b\rangle$ and $|\langle a\rangle|=O(a)$. Thus $|H|\mid\gcd(m,n)=1$, so $H=\{e\}$. Hence: $$a^p=b^{-p}\Rightarrow a^p\in \langle a\rangle\cap \langle b\rangle\Rightarrow a^p=e.$$
This proves that $m|p$ and $n|p$. Since $\gcd(m,n)=1$ therefore $mn|p$.
Now $(ab)^{mn}=a^{mn}b^{mn}=(a^m)^n(b^n)^m=e$. Thus $p|mn$. |
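A tiny check in the abelian group $\mathbb Z_{12}$ (my example: $a=3$ has order $4$, $b=4$ has order $3$, and $\gcd(4,3)=1$):

```python
def additive_order(x, mod):
    """Order of a nonzero element x in the additive group Z_mod."""
    k, s = 1, x % mod
    while s != 0:
        s = (s + x) % mod
        k += 1
    return k

mod, a, b = 12, 3, 4
m, n = additive_order(a, mod), additive_order(b, mod)
p = additive_order((a + b) % mod, mod)
print(m, n, p)   # 4 3 12, and indeed p == m * n
```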
Show that $3$ distinct points $(p, p^2), (q, q^2)$ and $(r, r^2)$ can never be collinear, using the triangle formula. | If factoring area formula seems hard, equivalently, you may row reduce the determinant and show the product along diagonal is never zero:
$$\begin{vmatrix}1&p&p^2\\1&q&q^2\\1&r&r^2 \end{vmatrix}\ne 0$$
when $p,q,r$ are distinct. |
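The row reduction amounts to evaluating a Vandermonde determinant; SymPy confirms the factorization $(q-p)(r-p)(r-q)$, which is nonzero exactly when $p,q,r$ are distinct:

```python
import sympy as sp

p, q, r = sp.symbols('p q r')
M = sp.Matrix([[1, p, p**2],
               [1, q, q**2],
               [1, r, r**2]])
det = M.det()
print(sp.expand(det - (q - p) * (r - p) * (r - q)) == 0)  # True
```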
Finding the Standard Matrix of a Linear Transformation | It looks like your solution is correct. |
How to prove an implication within an if and only if | It helps to call $D$ the statement $B\implies C$. One has to prove $A\iff D$: that $A$ implies $D$, and that $D$ implies $A$. Spelled out, this means: assuming $A$, it must follow that $C$ holds whenever $B$ is assumed; and conversely, whenever $C$ follows from $B$, then $A$ follows. Now check your $4$ statements according to this reasoning. |
Approximation when $|a(t)|\ll b$ | No, the smallness of the function $a$ does not give any information about the size of $\dot a$. The derivative can take arbitrarily large values if $a$ oscillates rapidly. Taking the idea of Henning Makholm's example, let
$$a(t) = \epsilon b t^2 \sin(t^{-10}) \tag1$$
where $\epsilon$ can be as small as you wish. The derivative of (1) exists at all $t$: it is $0$ when $t=0$, and
$$\dot a(t) = 2\epsilon b t \sin(t^{-10}) - 10 \epsilon b t^{-9}\cos(t^{-10}),\quad t\ne 0 \tag2$$
The formula (2) shows that $\dot a$ is unbounded in any neighborhood of $t=0$. |
Understanding a proof of the Uniform LLN | To answer your last paragraph: yes, I think the proof sketch given in the answer to math.stackexchange.com/questions/2469152 does describe the overall structure of the proof you are reading.
The rest of this answer is predicated on knowledge of that answer. I will keep it short (unusually short for me), just giving a hint.
A key step is covering the space of functions with a collection of finitely many compact sets of functions that catch most of the probability mass. Recall the Arzelà–Ascoli theorem, which describes relatively compact subsets of continuous functions: they have to be bounded, and they have to be equicontinuous. That is, they cannot wiggle or oscillate too much. The mysterious sup - inf terms in the $\max_k$ expression you ask about have to do with bounding the oscillation of the summand functions over certain $\delta$-balls covering $\Theta$. The $\max$ picks the worst such $\delta$-ball, and guess what, it's tame enough, after all. |
Recursive function with code $n$, whose output is the number $n$ itself | This is not entirely just the s-m-n theorem. You also need the Recursion Theorem, which is a fairly easy consequence of the s-m-n theorem.
Define the partial computable function
$\Psi(x,y) = x$
Since it is partial computable, there exists an $e$ such that
$\Psi(x,y) = \Phi_e(x,y)$
By applying the s-m-n theorem (just like I described in one of your earlier questions, i.e. letting $f(x) = s(e,x)$, where $s$ is the function coming from the s-m-n theorem), there exists a computable function $f$ such that
$\Psi(x,y) = \Phi_e(x,y) = \Phi_{f(x)}(y)$
By the recursion theorem, there exists an $n$ such that
$\Phi_{f(n)} = \Phi_n$
Hence on all input $y$,
$\Phi_n(y) = \Phi_{f(n)}(y) = \Phi_e(n,y) = \Psi(n,y) = n$
So $\Phi_n$ is a program with code $n$ which is also the constant function with output $n$. |
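The recursion theorem is exactly the self-reference principle behind quines; for intuition, here is a two-line Python quine, a program whose output is its own source, the programming analogue of $\Phi_n$ producing data about its own code $n$:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```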
Union of two denumerable sets is denumerable | The proof is almost correct, and is actually a good way of approaching the problem. One small error is that $h$ is not actually bijective, since you can have $a_i=b_j$ for some $i, j$. However, $h$ is still surjective, which suffices to show the union is countable. |
Does this statement on symmetric matrix hold? | Yes, they are equal.
$$(C^T K D)^T = D^T K^T C = D^T K C$$
Since $C^T K D$ is a scalar (a $1\times1$ matrix), it equals its own transpose, which by the computation above is $D^T K C$. |
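Numerically, with a random symmetric $K$ (a quick sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(4, 4))
K = K + K.T                      # make K symmetric
C = rng.normal(size=(4, 1))
D = rng.normal(size=(4, 1))
print(np.allclose(C.T @ K @ D, D.T @ K @ C))   # True
```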
How to show nonassociativity of the positive rationals under a binary operation defined in terms of max and min? | Select $q<p<p+q/2<r$, for example, $q=2,p=3,r=5$.
We get $(p\circ q)\circ r=r+p/2+q/4$ and $p\circ(q\circ r)=r+p/2+q/2$. |
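With the operation from the question read as $p\circ q=\max(p,q)+\tfrac12\min(p,q)$ (my reading, matching the formulas above), the counterexample checks out:

```python
def op(a, b):
    # assumed from the question: the max plus half the min
    return max(a, b) + min(a, b) / 2

p, q, r = 3, 2, 5
left = op(op(p, q), r)     # r + p/2 + q/4 = 7.0
right = op(p, op(q, r))    # r + p/2 + q/2 = 7.5
print(left, right)         # 7.0 7.5
assert left != right
```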
Basic guidance to write a mathematical article. | Terence Tao has posted some lucid remarks about writing on his blog, here. |
Partial derivative and dependent variables | If you enforce that $a+b=1$, then $f$ is not really a function of both $a$ and $b$, but of just one of them, say $a$. Then $f(a,b)=f(a,b(a))\equiv F(a)$ where $b(a)=1-a$, and there are only ordinary derivatives, namely $\frac{dF}{da}(a)$.
Alternatively, you may consider $f(a,b)$ as mathematically valid even outside of the line $a+b=1$, but from a physical point of view you happen to be interested only in the derivatives along the line $a+b=1$. Then you are dealing with a function of two variables, and the actual derivative of $f$ is really the gradient of $f$, $\nabla f$. See for instance here.
In short, the derivative is well defined at a point along that line, assuming $f$ is indeed differentiable at all points of that line.
You can also think of it in the following way: taking the partials is just a trick, or a mnemonic, for us to easily calculate the limit involved in the definition of $\nabla f$ or $dF/da$. |
How do I apply the Yoneda lemma to this functor? | Fix $a$.
The diagram and the formula show that there is a unique arrow $f_x :X(x,G'a) \to X(x,Ga)$ making the wanted diagram commute.
Now if you prove that this $f_x$ (namely $\varphi\circ (\sigma_x)^* \circ \varphi'^{-1}$; $\varphi$ and $\varphi'$ should actually be indexed by $x$ and $a$, and so for our purposes at least by $x$, since $a$ is fixed, but Mac Lane often doesn't index these) is the component of a natural transformation $f:X(-, G'a) \to X(-, Ga)$, the Yoneda lemma tells you that this map is induced by a unique arrow $\tau_a: G'a\to Ga$.
This $\tau_a$, he claims, is then the component of a natural transformation $\tau : G'\to G$ (it's actually easy to see because if you follow the proof of the Yoneda Lemma you can actually make this $\tau_a$ explicit, and it is clearly natural in $a$, since $\varphi, \varphi'$ are).
I think this is what is meant by "apply the Yoneda Lemma to $\varphi\circ (\sigma_x)^* \circ \varphi'^{-1}$", though the way he writes it is a bit unclear, as it should be something more like $\varphi\circ (\sigma_{-})^* \circ \varphi'^{-1}$, or ($x\mapsto \varphi\circ (\sigma_x)^* \circ \varphi'^{-1}$): indeed, $\varphi\circ (\sigma_x)^* \circ \varphi'^{-1}$ is not by itself a natural transformation, it's only the component of such a transformation evaluated at $x$. |
Determine convergence of $\frac{2n^3 + 7}{n^4 \sin^2 n}$ | We have
$$\frac{2n^3+7}{n^4 \sin^2 n}\geq \frac{2n^3+0}{n^4 \cdot 1}=\frac{2}{n}$$
Thus, the sum
$$\sum_{n=1}^\infty \frac{2n^3+7}{n^4 \sin^2 n}\geq \sum_{n=1}^\infty \frac{2}{n}=\infty$$
and diverges. |
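Numerically the comparison is visible in the partial sums (a sketch; the divergent harmonic lower bound $\sum 2/n$ is always dominated):

```python
import math

N = 10_000
partial = sum((2 * n**3 + 7) / (n**4 * math.sin(n) ** 2) for n in range(1, N + 1))
lower = sum(2.0 / n for n in range(1, N + 1))
print(partial, lower)   # the partial sum dominates the divergent bound
assert partial >= lower
```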
$3$. If the difference between the simple interest and compound interest on some principal amount at $20$% per annum for $3$ years is $48$ | Given,
T = 3 years
Rate of interest = 20%
Principal = x (to be found)
$$\therefore S.I. = \frac{x \cdot T \cdot R}{100} = \frac{x \cdot 3 \cdot 20}{100} = \frac{3x}{5}$$
$$\implies \text{Amount on S.I.} = x + \frac{3x}{5} = \frac{8x}{5}$$
$$\therefore \text{Amount on C.I.} = x \left(1 + \frac{R}{100}\right)^T = x \left(1 + \frac{20}{100}\right)^3 = x \left(\frac{6}{5}\right)^3 = \frac{216x}{125}$$
$$\text{Amount on C.I.} - \text{Amount on S.I.} = (x + C.I.) - (x + S.I.) = C.I. - S.I.$$
$$\implies \frac{216x}{125} - \frac{8x}{5} = 48 \implies \frac{216x - 200x}{125} = 48$$
I hope you can do the rest on your own. |
Prove that there are no natural numbers $x$ whose digits are $0$ or $2$, such that $x$ is a perfect square. | Dividing by an appropriate power of $100$ we can ensure that the final two digits are not both $0$. But a simple search (or congruence argument) shows that none of $2,20,22$ are squares $\pmod {100}$. |
Can a submanifold of a flat Manifold be curved? | Curvature is independent of coordinate system. If a metric is flat, it is flat. (For example, in $\Bbb R^2$, we have the standard metric $ds^2=dx^2+dy^2$; when we switch to polar coordinates, we have $ds^2=dr^2+r^2\,d\theta^2$. Christoffel symbols may be nonzero, but the curvature is still $0$.)
Of course flat spaces have submanifolds of all sorts of curvature. Just take any surfaces you wish sitting in $\Bbb R^3$; the same holds in all dimensions. It all then depends on the second fundamental form, which is not intrinsic to the submanifold. And you can certainly foliate flat space by non-flat hypersurfaces, yes. |
A probability interview questions on pen type | How can I select a pen randomly? Supposing that it is not possible to detect which pen is which merely by feeling the pens (because the difference is only their color, for example),
I can shake up the contents of the box, perhaps reach in and stir them with my hand,
then finally grasp a pen and pull it out.
Suppose that while I am stirring the pens I grasp one and then put my fingers on a second pen;
I then carefully grasp the second pen while dropping the first pen so that I make sure
the first pen is not the one I am holding, and I pull the second pen from the box.
With respect to the probability that I finally choose a type A pen, how is this procedure different from any other method of stirring the pens and finally pulling one out?
With respect to the probability that I finally choose a type A pen, how is this procedure different from putting the first pen grasped in an empty corner of the box and then grabbing the second pen, ensuring that I do not take the first pen?
With respect to the probability that I finally choose a type A pen, how is this procedure different from removing the first pen without looking and
putting it in some other place outside the box where I still do not see it, and then
grabbing the second pen?
I think there is no need here for any calculation more involved than dividing one number by another. |
Let $a,b,c,d>0$ and $a+b+c+d=1$. Prove that $\frac{abc}{1+bc}+\frac{bcd}{1+cd}+\frac{cda}{1+ad}+\frac{dab}{1+ab}\le \frac{1}{17}$ | You could approach it as follows:
$$
\sum_{cyc} \frac{abc}{1+bc}=\sum_{cyc} a\left(1-\frac{1}{1+bc}\right)=\sum_{cyc} \left(a-\frac{a}{1+bc}\right)=1-\sum_{cyc} \frac{a}{1+bc}
$$
So the inequality is equivalent to:
$$
1-\sum_{cyc} \frac{a}{1+bc}\le\frac{1}{17}\iff\frac{16}{17}\le\sum_{cyc} \frac{a}{1+bc}
$$
Using CS, this can be reduced to prove the following:
$$
\left(\sum_{cyc} \frac{a}{1+bc}\right)\cdot\left(\sum_{cyc} a(1+bc)\right)\ge(a+b+c+d)^2=1\iff\sum_{cyc} \frac{a}{1+bc}\ge\frac{1}{\sum_{cyc} a(1+bc)}=\frac{1}{a+b+c+d+abc+bcd+cda+dab}=\frac{1}{1+abc+bcd+cda+dab}
$$
So if
$$
\frac{1}{1+abc+bcd+cda+dab}\ge\frac{16}{17}\iff abc+bcd+cda+dab\le\frac{1}{16}
$$
is true, the original inequality would be true as well.
Edit:
The inequality
$$
abc+bcd+cda+dab\le\frac{1}{16}
$$
is true due to Maclaurin's inequality, which, in a special case, states that:
$$
\left(\frac{abc+bcd+cda+dab}{4}\right)^{\frac13}\le\frac{a+b+c+d}{4}=\frac{1}{4}\iff abc+bcd+cda+dab\le\frac{1}{16}
$$
And your inequality is proven. |
Are all operators to or from $\ell_1$ completely continuous? | Operators with the property you are interested in are called completely continuous. The answer in both cases is yes.
Every weakly convergent sequence in $\ell_1$ converges strongly (Schur's property), so the answer is trivially yes if $E=\ell_1$. Suppose now that $F=\ell_1$ and let $(x_n)_{n=1}^\infty$ be a weakly convergent sequence in $E$. As $T$, being bounded, is weak-to-weak continuous, $(Tx_n)_{n=1}^\infty$ is weakly convergent, hence also strongly convergent.
How does this factoring work? | The first one
is using
$z^2-a^2
=(z-a)(z+a)
$
where
$a = \sqrt{2i}
=1+i
$
since
$(1+i)^2
=1+2i-1 = 2i
$.
The second one
just uses the quadratic formula,
which works for
complex as well as real coefficients,
to solve
$z^2 − 3iz − 3 + i
= 0
$.
If the roots are
$u$ and $v$,
then
$z^2 − 3iz − 3 + i
= (z-u)(z-v)
$.
(This was added later)
Using the quadratic formula,
the roots are
$$\begin{aligned}
\dfrac{3i\pm\sqrt{(-3i)^2-4(-3+i)}}{2}
&=\dfrac{3i\pm\sqrt{-9+12-4i}}{2}\\
&=\dfrac{3i\pm\sqrt{3-4i}}{2}\\
&=\dfrac{3i\pm(2-i)}{2}
\qquad\text{since }\sqrt{3-4i} = 2-i\\
&=\dfrac{3i+(2-i)}{2},\ \dfrac{3i-(2-i)}{2}\\
&=\dfrac{2+2i}{2},\ \dfrac{-2+4i}{2}\\
&=1+i,\ -1+2i
\end{aligned}$$
General solution of a second order non homogenous ODE | We just need to find two linearly independent solutions $y_1,y_2$ to the homogeneous equation, then the general solution to the non-homogeneous equation is $y_P+c_1y_1+c_2y_2$, where $y_P$ is a particular solution to the non-homogeneous equation (e.g. $y_P(t)=\phi_1(t)=t^2$).
By linearity, the difference of two solutions to the non-homogeneous equation is a solution to the homogeneous equation. So we can take $y_1(t)=\phi_3(t)-\phi_2(t)=1$ and $y_2(t)=\phi_2(t)-\phi_1(t)=e^{2t}$ as our solutions to the homogeneous equation. It is easy to check that $y_1$ and $y_2$ are linearly independent.
Thus the general solution to the non-homogeneous equation is $$y(t)=t^2+c_1+c_2e^{2t},$$
for some arbitrary constants $c_1,c_2$. |
Group and subgroup proof | If $g \in H$, then $Hg=H=gH$; if $g \notin H$, then necessarily both $Hg=\complement_GH$ and $gH=\complement_GH$, so $Hg=gH$. This is because the cosets have the same cardinality, so in your case they are just 2.
Now, $Hg=gH \Leftrightarrow (Hg \subseteq gH) \wedge (gH \subseteq Hg)$; $Hg \subseteq gH \Leftrightarrow \forall h \in H, \exists h' \in H \mid hg=gh' \Leftrightarrow \forall h \in H, \exists h' \in H \mid h=gh'g^{-1} \Leftrightarrow H \subseteq gHg^{-1}$, and likewise for the other term of the "$\wedge$", whence sets' equality. |
Converse version of the mean value theorem (TAF) | Sure. Let $\phi(x) = f(c) x$.
A question about sequences of integrable functions | Nope, not true, even for everywhere pointwise convergence: Consider
$$
f_n(x) = \begin{cases}
n^2x & 0\leq x\leq 1/n\\
2n-n^2x & 1/n<x\leq 2/n\\
0 & x>2/n
\end{cases}.
$$
(Note that all $f_n$ are continuous…). Also look up Lebesgue's dominated convergence theorem. |
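One can watch this counterexample numerically: every spike has area $1$, yet at any fixed $x>0$ the values eventually vanish. A minimal sketch using only the standard library (the grid size is an arbitrary choice):

```python
def f(n, x):
    # The triangular spike from the answer: height n at x = 1/n, support [0, 2/n]
    if 0 <= x <= 1 / n:
        return n * n * x
    if 1 / n < x <= 2 / n:
        return 2 * n - n * n * x
    return 0.0

def integral(n, steps=100_000):
    # Trapezoidal rule on [0, 1]; the spike lies inside [0, 2/n] for n >= 2
    h = 1.0 / steps
    total = 0.5 * (f(n, 0.0) + f(n, 1.0))
    for k in range(1, steps):
        total += f(n, k * h)
    return total * h

for n in (10, 100, 1000):
    print(n, round(integral(n), 6), f(n, 0.5))  # area stays 1; value at x = 0.5 is 0
```

The mass escaping to a shrinking set is exactly the behavior that a dominating function, as in the dominated convergence theorem, would forbid.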
What is the relationship between completeness and local compactness? | Note that the Baire space $\mathbb{N}^\mathbb{N}$ (which is homeomorphic to the space of irrationals) is a completely metrizable space which is not locally compact (in fact all compact subsets of $\mathbb{N}^\mathbb{N}$ have empty interior).
While locally compact metric spaces may not be complete (for example, as in Brad's answer, the open unit interval $(0,1)$ under the usual metric), these spaces will always be completely metrizable. There are at least two ways to see this result:
Every locally compact metric space will be an open subset of its completion, and every G$_\delta$ subset of a complete metric space is completely metrizable.
Every locally compact completely regular space is Čech-complete (i.e., is a G$_\delta$ subset of its Stone-Čech (or any other) compactification), and a metric space is completely metrizable iff it is Čech-complete. |
How to find number of maps from set $A$ to set $B$ | First of all, map is another way of saying function, a relation between A and B such that each element in A is connected to only one element in B.
Select an element from the set A, let's call it a. The map must connect a to one of the elements of B, so there are 2 choices for a. Since a is arbitrary, there are 2 choices for each element.
Therefore, first we select an element from B for a, then we select an element from B for the second element of A, and then the third one. As a result, there are
2 x 2 x 2 = $2^3$
different maps from A to B. |
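The count is easy to confirm by brute-force enumeration; a small sketch (the set names are illustrative):

```python
from itertools import product

A = ["a1", "a2", "a3"]  # a 3-element set
B = ["b1", "b2"]        # a 2-element set

# A map assigns one element of B to each element of A, so a map
# is exactly a tuple of len(A) independent choices from B.
maps = [dict(zip(A, choice)) for choice in product(B, repeat=len(A))]
print(len(maps))  # 8
```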
Pigeonhole principle question - relatively prime | Think about prime factorizations. Here's an analogy: Suppose you have a list of at least 27 English words (or strings of letters, really). There are only 26 letters in the English alphabet, so at least two words contain the same letter. |
countable or uncountable set of functions | your right, the first set is countable, it is essentially $\mathbb N ^{|S|}$. My favorite injection from this set to $\mathbb N$ is the one that sends $(a_1,a_2,\dots,a_{|S|})$ to the number $2^{a_1}3^{a_2}5^{a_3}\dots p_{|S|}^{a_{|S|}}$
The second set is $|S|^\mathbb N$ and is not countable. You can show that $2^\mathbb N$ is not countable by using Cantor's theorem for example, or bijecting it with the real numbers via binary representation. |
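The prime-exponent injection can be spot-checked on a finite window (the window size and the choice $|S|=3$ are arbitrary): distinct tuples always receive distinct codes, by unique factorization.

```python
from itertools import product

PRIMES = (2, 3, 5)  # one prime per coordinate; here |S| = 3

def encode(t):
    # (a1, a2, a3) -> 2**a1 * 3**a2 * 5**a3
    code = 1
    for p, a in zip(PRIMES, t):
        code *= p ** a
    return code

tuples = list(product(range(6), repeat=3))  # a finite window of N^3
codes = [encode(t) for t in tuples]
print(len(tuples), len(set(codes)))  # 216 216: no two tuples collide
```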
Linear Algebra question help. | We know that for any square matrix, its trace is the sum of its eigenvalues and its determinant is the product of the eigenvalues. (The eigenvalues could be complex).
Further, if $t$ is an eigenvalue of a Matrix $M$, then $1+t$ is an eigenvalue of $M+I$:
$$\det [(A+I)-(1+t) I] = \det [A-tI]=0$$
Let $A$ be a $n \times n$-matrix of rank $1$.
Since $A$ has rank $1$, it has the eigenvalue $0$ with multiplicity $n-1$ and one further eigenvalue $\lambda$ with multiplicity $1$.
Thus we get $\det (A+I) = 1^{n-1} \cdot(\lambda+1)= \lambda+1$ and $\operatorname{tr}(A) = (n-1)\cdot 0+\lambda= \lambda $
This implies the claim.
Edit: (This is more like a comment, but too long)
I see, you want to use that for $\lambda$ an eigenvalue of $M$
$$0= \det(M-\lambda I) = \sum_{i=0}^n b_i (-\lambda)^{n-i} $$
with $b_i$ the sum of all principal minors of order $i$.
In particular $b_0:=1$, $b_1 = \operatorname{tr} M$, $b_n= \det M$.
So for $M=A+I$:
$$ 0= \det (A+I-\lambda I) = (-\lambda)^n + (-\lambda)^{n-1}\operatorname{tr}(A+I)+\det(A+I) + \dots ,$$
and thus for $\lambda=1: $
$$ 0=\det(A) = (-1)^n (1-\operatorname{tr}(A+I)+ \dots) +\det(A+I).$$
This implies $$\det(A+I)= (-1)^{n+1} (1-\operatorname{tr}(A) -n+\dots) $$
But I don't see how we can use $\operatorname{rk} A=1$ in order to calculate the remaining summands. |
If $S$ is an open set in $\mathbb{R}^n$, $p \in S$, $q \notin S$, then there is a boundary point of $S$ on the line segment joining $p$ and $q$ | Hint: Consider the real number $t^*=\sup\{t\in[0,1]\mid tq+(1-t)p\in S\}$ and the point $p^*=t^*q+(1-t^*)p$. |
Show that: $F$ is a ring with addition and multiplication given by matrix multiplication in $Mat(2;\mathbb{F_5})$ | For the first part, typically the := means “is defined as”. That notation just represents that it is being defined this way. For the second part, are you assuming that it is a field because it is labeled F? You would definitely need to show it’s a ring before it’s a field which I’m sure that you know. For the third part, I interpret $M(2;\mathbb{F}_5)$ as the space of $2\times2$ matrices with coefficients in the finite field with 5 elements. Addition and Multiplication are defined by matrix addition and multiplication but you need to be careful since you’re working in $\mathbb{Z}/5\mathbb{Z}$. So really the last part is just asking you to show that matrices of the given form form an abelian group with standard matrix addition and satisfy the ring multiplication axioms with standard matrix multiplication. |
Probability of random events | Turned out to be a long answer. The outline is:
(1) What are we trying to do with sampling?
(2) Tool #1: Bernoulli random variable (generalized coin flip)
(3) Tool #2: Expected value
(4) Tool #3: Unbiased estimators
(1) By sampling, we are trying to discover an unknown parameter, the proportion of good apples in the box -- call it $p$. Put another way, if we are drawing (and putting back into the box after each draw) a random apple and defining a good apple draw as a success, each draw has probability $p$ of success.
Note that we could find $p$ directly by looking through all the apples at once and counting the rotten ones. Then $p=\frac{\text{Number of good apples}}{100}.$ But we are trying to guess at $p$ indirectly by sampling. The idea is to come up with a formula that will "reliably" give us the correct answer with a "large enough" sample.
(2) It will be helpful to introduce a concept called a Bernoulli random variable. This is essentially a way to describe a coin flip with probabilities other than 1/2 of getting heads. Let's say flipping heads is a success, tails is failure. Suppose the coin is not necessarily fair, and has probability $p$ of heads, $1-p$ of tails. We could describe this coin by a random variable $X=1$ with probability $p$, $X=0$ with probability $1-p$. We could describe the 6th flip of this coin by $X_6=1$ with probability $p$, $X_6=0$ with probability $1-p$ since it's the same coin and probabilities don't change over time. Similarly, we could describe the kth coin flip with $X_k=1$ with probability $p$, $X_k=0$ with probability $1-p$. Notice that in our situation, we can consider the kth draw of an apple to be a Bernoulli random variable with parameter p, i.e. $X_k=1$ with probability $p$, $X_k=0$ with probability $1-p$.
(3) Next, we introduce the idea of expected value. This is just an idea of weighted averages: Given possible values of a random variable and probabilities for each possibility, we define the expected value to be the sum of the possible values weighted by the probabilities. (Example: X=1 with probability 1/3, X=2 with probability 1/3, X=3 with probability 1/3. Then the expected value of X is $E[X]=1/3 * 1 + 1/3 * 2 + 1/3 * 3=2$.) Notice that the expected value of a Bernoulli random variable is just the probability of success, p. We will use this fact in (4).
(4) Lastly, we consider the idea of estimators and unbiased estimators. Let $\hat p=\frac{\text{Number of good apples drawn}}{\text{Number of total apples drawn}}=\frac{1}{n}\sum_{k=1}^n X_k$ . Call this our estimator. An estimator is called unbiased if the expected value of the estimator is the same as the value of the true parameter (here, the proportion of good apples). Expected value can be interchanged with sums and the expected value of a constant is just that constant, so $E[\hat p]=E[\frac{1}{n}\sum_{k=1}^n X_k]=\frac{1}{n}\sum_{k=1}^n E[X_k]=\frac{1}{n}\sum_{k=1}^n p=\frac{1}{n} *np=p$. So $\hat p$ is an unbiased estimator for the true proportion of good apples! This means we can use it as a good estimate, given a random sample.
So the question could be interpreted as "Find an unbiased estimator for the true proportion of good apples, and tell me what the value of that estimator is, given the random sample of 47 good and 3 rotten." |
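The behavior of the estimator in (4) can be illustrated by simulation; the sketch below uses only the standard library (the true proportion 0.94, the seed, and the sample sizes are arbitrary choices):

```python
import random

rng = random.Random(42)

def estimate(p, n):
    # n draws with replacement; 1 = good apple, drawn with probability p
    draws = [1 if rng.random() < p else 0 for _ in range(n)]
    return sum(draws) / n

p_true = 0.94  # e.g. 94 good apples out of 100 in the box
for n in (50, 500, 50_000):
    print(n, estimate(p_true, n))  # sample proportions cluster around 0.94
```

Drawing without replacement instead would give a hypergeometric sample; the sample proportion is still unbiased there, with slightly smaller variance.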
Converges of integral, knowing the derivative has a limit | Hint: if $(\ln f)'(x) \le c < 0$ for all $x \ge x_0$, then
$\ln f(x) \le \ln f(x_0) + \ldots $ |
how to solve $\int e^{7x}\cos(2x)dx$ | Hint: Complexify the problem by noting that $\cos(nx)+i\sin(nx)=\exp(inx)$. Then determine the integral by:
$$\int \exp(7x)\cos(2x) dx =\operatorname{Re}\left[ \int \exp(7x)\exp(2ix) dx\right],$$
in which $\operatorname{Re}\left[ z \right]$ returns the real part of $z$. |
Why is this subset not open? | First of all two basic notions:
I assume that $S^1$ is equipped with the Subspace Topology. (the most natural topology in this case)
So the open sets $V$ in $S^1$ are precisely of the form $V = S^1 \cap \Omega$ where $\Omega$ is open in $\mathbb{R}^2$. It's very easy to see that in this case ($\mathbb{R}^2$ equipped with the usual topology, generated by the open balls) we have a base for our topology on $S^1$ (explained in the link). In this case the base is the family of "open" circular arcs (the starting and final point of the arc don't belong to the arc). So if a set in $S^1$ can be written as a union of these arcs it's open, and vice versa.
The problem:
Let's figure out what $f([0,\frac{1}{2}))$ is. The image of $f$ traces a path beginning at the point $(1,0) = (\cos(2\pi\cdot 0),\sin(2\pi\cdot 0))$, running anticlockwise along the circumference towards the point $(\cos(2\pi\cdot \frac{1}{2}),\sin(2\pi\cdot \frac{1}{2}))=(-1,0)$, getting arbitrarily near it but never reaching it.
The informal explanation:
The problem is that the image ($:= f([0,\frac{1}{2}))$) we traced isn't open in the topology of $S^1$, because the point $(1,0)$ has no neighborhood entirely contained in that image. (By the definition of basis, the neighborhood can be taken to be an open arc centered at $(1,0)$; it's immediate to notice that half of this arc lies outside our image, no matter how small the arc is, as long as it contains $(1,0)$.) Please note that this reasoning doesn't work if we consider a point near $(-1,0)$. Call such a point $y$; then $y$ has an open neighborhood contained in the image, because the path gets arbitrarily near $(-1,0)$, so there is always room inside the image for a little arc containing $y$.
The formalization:
It is a routine argument with arbitrarily small balls (centered at $(1,0)$ and at $y$) intersected with $S^1$.
Proving function's continuity | HINT: Try taking a sequence $(x_n, y_n)\in D$ converging to $(x, y)\in D$ and prove that
$$(1) \lvert f(x_n, y_n) - f(x, y)\rvert \to 0.$$
Your adversary is the fact that in (1) you have both $y_n$ and $y$, which might be different, so you cannot apply Lipschitz condition directly. Try solving it by means of the triangle inequality, with a typical $2\epsilon$ argument. |
Circle - finding the equation | I think I know what you're trying to solve. So you want to find the equation of a circle such that the circle is sitting tangent to the x-axis at $\left(4,0\right)$ and its two intercepts of the y-axis must result in a line of length $6$ between them. In other words, you need to know its radius so that you can find the center and therefore its equation. Well, you already have the x-value, $4$, so no need to worry about that. To find the radius you'll need to draw a triangle connecting the center of your proposed circle to both of the y-intercepts. From there you know the length of the distance between them to be $6$, and the height of that triangle to be $4$ units. Then by simply dividing $6$ in two you arrive at one of the two triangle's bases, and then the Pythagorean theorem allows you to arrive at the missing side's length, which is the radius of your circle:
$$ 3^2+4^2=r^2\implies{r}=\sqrt{25}=\boxed{5} $$
So the equation of your circle is thus
$$ 25=\left(x-4\right)^2+\left(y-5\right)^2 $$
Just drew a quick picture, I hope I'm right...double check me. |
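Here is that double check, done numerically (a small sketch):

```python
import math

cx, cy, r = 4.0, 5.0, 5.0  # proposed circle (x-4)^2 + (y-5)^2 = 25

# Tangency to the x-axis at (4, 0): the lowest point of the circle is (cx, cy - r)
lowest = (cx, cy - r)

# y-intercepts: setting x = 0 gives (y - cy)^2 = r^2 - cx^2
half_chord = math.sqrt(r ** 2 - cx ** 2)
y1, y2 = cy - half_chord, cy + half_chord

print(lowest, y1, y2, y2 - y1)  # (4.0, 0.0) 2.0 8.0 6.0
```

So the circle touches the x-axis at $(4,0)$ and cuts the y-axis at $(0,2)$ and $(0,8)$, a chord of length $6$, as required.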
Determine whether $\int_1^\infty \sin \frac{1}{x^2}\,dx $ is divergent or convergent | $|\sin u| \le u$ for all $u \ge 0$. Your integral absolutely converges. |
Divisibility by 9 Proof | $$n=d_kd_{k-1}. . .d_2d_1=d_1+d_2\times 10+d_3\times 10^2+ . . . d_{k-1}\times 10^{k-2}+ d_k\times 10^{k-1}$$
$10^i=(9+1)^i= 9m_i +1$ for some $m_i \in \mathbb{N}$, by the binomial theorem (note $m_0=0$).
$$\implies n=d_1+d_2+ \dots+d_{k-1}+d_k+ 9(d_1m_0+d_2m_1+d_3m_2+ \dots +d_km_{k-1})$$
Now if $9|n$ then we must have:
$$9|d_1+d_2+ . . .+d_{k-1}+d_k$$ |
Why $\alpha |u|^p+\beta \leq \gamma |u|$ implies that $u$ is bounded? | Let $f(y)=\alpha y^p-\gamma y +\beta$, $y\in \{|u(x)| :x\in \mathbb{R} \}$.
Here to show $\sup\{y: f(y)\leq 0 \}$ is finite.
We can start with all $y\in \mathbb{R}^+$. Note that, $f'(y)=0$ gives us the solution $y^*=(\frac{\gamma}{\alpha p})^{\frac{1}{p-1}}$
$f''(y^*)=\alpha p(p-1)(\frac{\gamma}{\alpha p})^{\frac{p-2}{p-1}}>0$.
Then $f$ has a single minimum, at $y^*$, and as $y\to\infty$, $f(y)\to\infty$, since $|\beta|<\infty$. Hence $\sup\{y\in \mathbb{R}: f(y)\leq 0 \}$ is finite. Hence $\sup\{y\in \{|u(x)| :x\in \mathbb{R} \}: f(y)\leq 0 \}$ is finite.
What rule can I use to compute $\frac{d^{107}}{dx^{107}} \sin x$? | One important thing to remember is that
$$\frac{d^4}{dx^4}(\sin x)=\sin x$$
So...
$$\frac{d^{4n}}{dx^{4n}}(\sin x)=\sin x$$
Now, what is the remainder when $107$ is divided by $4$ (a.k.a. $107\mod 4$)?
Basically, the derivative will "cycle" around and around (through $\sin$, $\cos$, etc.), so just the very last few derivatives after it "stops cycling" are important. |
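Putting the hint together in a few lines (the string labels are only for display):

```python
# Derivatives of sin repeat with period 4: sin -> cos -> -sin -> -cos -> sin ...
CYCLE = ["sin x", "cos x", "-sin x", "-cos x"]

def nth_derivative_of_sin(n):
    return CYCLE[n % 4]

print(107 % 4)                     # 3
print(nth_derivative_of_sin(107))  # -cos x
```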
What is the solution to linear ODE $\dot x = Ax + b$? | As an alternative approach, we can use a technique that you might have seen for solving the inhomogeneous scalar differential equation $\dot x= kx+b(t)$: guess that a particular solution is of the form $\mathbf x(t) = \exp(tA)\mathbf w(t)$. Substituting this into the differential equation gives $$A e^{tA}\mathbf w(t)+e^{tA}\dot{\mathbf w}(t) = Ae^{tA}\mathbf w(t)+\mathbf b(t)$$ so that $$\dot{\mathbf w}(t) = e^{-tA}\mathbf b(t).$$ Integrating, we get $$\mathbf w(t) = \int_0^t e^{-sA}\mathbf b(s)\,ds$$ and $$\mathbf x(t) = e^{tA}\mathbf w(t) = \int_0^t e^{(t-s)A}\mathbf b(s)\,ds.$$ This obviously satisfies $\mathbf x(0)=0$, so the general solution is $$\mathbf x(t) = e^{tA}\mathbf x_0+\int_0^t e^{(t-s)A}\mathbf b(s)\,ds.$$
When $\mathbf b$ is constant and $A$ invertible, the above integral is equal to $(\exp(tA)-I)A^{-1}\mathbf b$. |
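The closed form is easy to sanity-check numerically. Below is a dependency-free sketch for a $2\times 2$ system (the matrix $A$, the vectors $b$ and $x_0$, the evaluation time, and the series truncation used for $\exp(tA)$ are all illustrative choices):

```python
def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def mat_mat(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(M, t, terms=40):
    # exp(tM) via its truncated Taylor series; adequate for small t
    tM = [[t * m for m in row] for row in M]
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        P = mat_mat(P, tM)
        fact *= k
        E = [[E[i][j] + P[i][j] / fact for j in range(2)] for i in range(2)]
    return E

A = [[0.0, 1.0], [-2.0, -3.0]]  # arbitrary invertible test matrix
b = [1.0, 0.0]                  # constant forcing
x0 = [1.0, 1.0]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

def x(t):
    # x(t) = exp(tA) x0 + (exp(tA) - I) A^{-1} b
    E = expm(A, t)
    w = mat_vec(Ainv, b)
    return [mat_vec(E, x0)[i] + mat_vec(E, w)[i] - w[i] for i in range(2)]

# Central differences confirm dx/dt = A x + b at t = 0.3
t, h = 0.3, 1e-5
lhs = [(x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2)]
rhs = [mat_vec(A, x(t))[i] + b[i] for i in range(2)]
residual = max(abs(lhs[i] - rhs[i]) for i in range(2))
print(residual)  # tiny: the formula does solve the ODE
```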
How to find the Laurent series expansion of an exp function. | Yes your answer is correct. Since the taylor series of $e^{x}$ converges absolutely it is the same as the laurent series. Now if you want to you can simplify.
$\frac{1}{z^{3}}\sum\frac{z^{2n}}{n!}$
$\sum\frac{z^{2n-3}}{n!}$
Which expands as...
$$\frac{1}{z^3}+\frac{1}{z}+\frac{z}{2!}+\frac{z^3}{3!}...$$ |
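A numerical spot check that the truncated series matches the function being expanded (which, from the series above, is $e^{z^2}/z^3$; the sample point and truncation order are arbitrary):

```python
import cmath
from math import factorial

def laurent_partial(z, terms=20):
    # sum over n >= 0 of z^(2n-3) / n!
    return sum(z ** (2 * n - 3) / factorial(n) for n in range(terms))

z = 0.3 + 0.2j  # any nonzero point works
exact = cmath.exp(z ** 2) / z ** 3
print(abs(exact - laurent_partial(z)))  # negligibly small
```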
What is the difference between $\dot{x} = Ax + b +Bu$ and $\dot{x} = Ax + Bu$, where b is a constant vector? | If you read the equation as
$$\dot x-Ax=b+Bu,$$
$b$ is nothing but a constant excitation, i.e. a translation of the equilibrium point.
As you probably know, we can get rid of it with $y:=x+A^{-1}b$, giving
$$\dot y-Ay=Bu.$$ |
Examples of Smooth Immersions | Your example in 1. is indeed unbounded, but it is not closed: the point $(1,1,0)$ is the limit of the sequence of points
$$\left(\frac{n+1}{n},\frac{n+1}{n},0 \right) = \Psi\left(\frac{n}{n+1},\frac{n}{n+1}\right) \in \Psi(M)
$$
but $(1,1,0) \not\in \Psi(M)$.
Regarding 2, I would simply suggest that you write down formulas for the very simplest examples of immersions, and then attempt to use the formulas to prove the properties of $\Psi(M)$ that you are asked to prove.
Monotonicity of an integral function $p_{k}$ | Yes, it's quite hard. Note that we can rewrite it as $$p_{k}=k\exp\left(-2H_{k-1}\right)\int_{0}^{1}\exp\left(2\sum_{i=1}^{k-1}\frac{s^{i}}{i}\right)ds$$ where $H_{n}$ is the $n$-th harmonic number, and using the well-known bound $$H_{n}=\log\left(n\right)+O\left(1\right)$$ we get, as $k\longrightarrow\infty$, $$p_{k}=\frac{k}{\left(k-1\right)^{2}}e^{O\left(1\right)}\int_{0}^{1}\exp\left(2\sum_{i=1}^{k-1}\frac{s^{i}}{i}\right)ds.$$ The problem is the estimation of the integral. Note that, if $k$ is big, then $$\sum_{i=1}^{k-1}\frac{s^{i}}{i}\approx-\log\left(1-s\right)$$ and so $$\int_{0}^{1}\exp\left(2\sum_{i=1}^{k-1}\frac{s^{i}}{i}\right)ds\approx\int_{0}^{1}\frac{1}{\left(1-s\right)^{2}}ds$$ and the last integral doesn't converge. Using the closed form $$2\sum_{i=1}^{k-1}\frac{s^{i}}{i}=-2\log\left(1-s\right)-2s^{k}\Phi\left(s,1,k\right)$$ where $\Phi\left(a,b,c\right)$ is the Lerch transcendent, for a precise control of the integral we have to study $$\int_{0}^{1}\exp\left(2\sum_{i=1}^{k-1}\frac{s^{i}}{i}\right)ds=\int_{0}^{1}\frac{1}{\left(1-s\right)^{2}}\exp\left(-2s^{k}\Phi\left(s,1,k\right)\right)ds$$ and I think it's not trivial.
Uniquely decodable code - extension is non singular | So $C^∗$ is singular because we have 2 strings that map to the same codeword ?
Yes (though I'd rather say "macrocodeword" - i.e., concatenation of codewords). Hence, because that code is singular with respect to the extensions, it's not uniquely decodable.
does $C^∗$ just mean all possible combinations and lengths of the codeword given the symbols and alphabet
Yes, one should consider all possible concatenations of input symbols.
Random number generator with discrete probability distribution | Use the Mersenne Twister (https://en.wikipedia.org/wiki/Mersenne_twister)
to generate uniformly distributed random numbers first. It has a very
long period and other great properties. Alternatively, you could use
another uniformly distributed random number generator instead.
Suppose now you have generated a random number $x$ this way and that
the probability density you want to sample from is $\phi$. You need
to find $y$ s.t.
$$
x=\int_{-\infty}^{y}\phi\left(s\right)ds.
$$
In other words, if $\Phi$ is the cumulative distribution, you need
to find
$$
y=\Phi^{-1}\left(x\right).
$$
There are special cases in which you can make very fast algorithms
to do this. For example, for normally distributed random numbers,
there exists two methods:
Box-Muller: https://en.wikipedia.org/wiki/Box_muller
Ziggurat (arguably better): https://en.wikipedia.org/wiki/Ziggurat_algorithm |
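For the discrete case in the question title, the inversion $y=\Phi^{-1}(x)$ becomes a table lookup on cumulative weights, which a binary search does in logarithmic time. A minimal sketch (the example alphabet, probabilities, and seed are arbitrary; Python's `random` module happens to use a Mersenne Twister underneath):

```python
import bisect
import random
from itertools import accumulate

def make_sampler(values, probs, seed=0):
    rng = random.Random(seed)
    cdf = list(accumulate(probs))
    cdf[-1] = 1.0  # guard against floating-point round-off in the last entry
    def sample():
        # Invert the step-function CDF with a binary search
        return values[bisect.bisect_right(cdf, rng.random())]
    return sample

sample = make_sampler(["a", "b", "c"], [0.2, 0.5, 0.3])
draws = [sample() for _ in range(100_000)]
freq = {v: draws.count(v) / len(draws) for v in ("a", "b", "c")}
print(freq)  # close to {'a': 0.2, 'b': 0.5, 'c': 0.3}
```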
Find the number of binary sequences with a certain property. | After finding the first few values $a_0=1, a_1=2, a_2=4, a_3=7$, look in the OEIS. It is sequence A000071 with a different offset. Thus, $a_n=F_{n+3}-1$ with recursion $a_n=a_{n-1}+a_{n-2}+1$. This comes from looking at any sequence $b\in \{0,1\}^*$, where if it ends in $0$ the rest is counted by $a_{n-1}$, if it ends in $1$ the rest is counted by $a_{n-2}$, or else it is the empty sequence and counted once. |
For the variable triangle ABC with... | You will get the system
$$3x-1=\sin(t)+\cos(t)$$
$$3y-2=\sin(t)-\cos(t)$$
adding these two equations we get
$$\sin(t)=\frac{3x+3y-3}{2}$$
and subtracting them gives
$$\cos(t)=\frac{3x-3y+1}{2}$$
Finally, calculate $$\sin(t)^2+\cos(t)^2=1$$ with the terms above.
Behavior of $|\Gamma(z)|$ as $\text{Im} (z) \to \pm \infty$ | Let $z=a+ib$. As $\vert b \vert \to \infty$, the argument of $z$ does not approach $\pi$. Thus Stirling's approximation applies, giving
$$\vert \Gamma(z) \vert \sim \left\vert \sqrt{\frac{2\pi}{a+ib}}\left(\frac{a+ib}{e}\right)^{a+ib}\right\vert\sim \sqrt{2\pi}\left\vert(ib)^{-1/2}\left(\frac{a+ib}{e}\right)^{a+ib}\right\vert$$
$$\sim\sqrt{2\pi}\vert b \vert^{-1/2} \left\vert \left(\frac{a+ib}{e}\right)^{a+ib}\right\vert \sim \sqrt{2\pi}\vert b \vert^{-1/2}e^{-a} \vert z^z\vert,$$
The $\vert z^z \vert$ term above satisfies
$$\vert z^z \vert = \vert e^{z \log z} \vert = \vert e^{(a+ib)(\log \vert z \vert + i \mathrm{arg}z)}\vert=e^{a \log \vert z \vert-b \mathrm{arg} z} \sim \vert b \vert^a e^{-b \mathrm{arg} z}$$
because $\vert z \vert \sim \vert b \vert$ as $b \to \infty$. Thus, it remains to approximate $e^{-b \,\mathrm{arg}\, z}$ as $\vert b \vert \to \infty$.
Note: An easy mistake at this point is the false implication
$$\mathrm{arg}z \to \pm \pi/2 \quad \Longrightarrow \quad e^{b \mathrm{arg} z} \to e^{\pm b \pi/2}.$$
However, this need not hold, because the exponential is sensitive to non-dominant terms.
We compute
$$\lim_{\vert b \vert \to \infty} e^{-b\, \mathrm{arg} z}e^{-a+\vert b \vert \pi/2}=\mathrm{exp}\lim_{\vert b \vert \to \infty}\left(-b \arctan(b/a)-a+\vert b \vert \pi/2\right)$$
$$=\mathrm{exp}\lim_{\vert b \vert \to \infty}-b\left(\frac{\pm\pi}{2}-\frac{a}{b}+O\left(\vert b \vert^{-3}\right)\right)-a+\vert b \vert \pi/2,$$
in which we take the positive sign if $b \to \infty$ and the negative sign if $b \to -\infty$. (This is simply the Taylor series to $\arctan(b/a)$ at $\pm \infty$.) Therefore we can replace said $\pm$ with $\vert b \vert/b$, at which point we see that our limit is $1$.
Thus
$$\vert \Gamma(z) \vert \sim \sqrt{2\pi}\vert b \vert^{-1/2}e^{-a}\vert b \vert^a e^{-b \,\mathrm{arg} z} \sim \sqrt{2\pi}\vert b \vert^{-1/2}e^{-a}\vert b \vert^a e^{a-\vert b \vert \pi/2}$$
$$\sim\sqrt{2\pi}\vert b \vert^{a-1/2} e^{-\vert b \vert \pi/2},$$
as claimed. |
Number of Vertices in a Self-Complementary Graph | The core of your proof is correct, but it's very difficult to read. My primary complaint would be the unnecessary use of contradiction. I might proceed directly as follows:
Let $G$ be a self-complementary graph. First note that the number of edges in $G$ must be exactly $\frac{1}{2}\binom{n}{2}$ since there are a total of $\binom{n}{2}$ possible edges on $n$ vertices. It follows that $\binom{n}{2}$ must be even. Observe the following cases:
If $n = 2$, then $\binom{n}{2} = 1$.
If $n = 3$, then $\binom{n}{2} = 3$.
If $n = 4t + 2$ for $t \in \mathbb{Z}^+$, then $\binom{n}{2} = 8t^2 + 6t + 1$.
If $n = 4t + 3$ for $t \in \mathbb{Z}^+$, then $\binom{n}{2} = 8t^2 + 10t + 3$.
In all these cases, we find that $\binom{n}{2}$ is odd. This shows that $n$ must be of the form $4t$ or $4t+1$ for $t \in \mathbb{Z}^+$, as desired. |
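The case analysis amounts to the statement that $\binom{n}{2}$ is even precisely when $n \equiv 0$ or $1 \pmod 4$; a quick check over a finite range (the bound $30$ is arbitrary, and $n=1$ appears because the one-vertex graph is trivially self-complementary):

```python
from math import comb

# Orders n for which C(n, 2) is even, a necessary condition for
# a self-complementary graph on n vertices to exist
feasible = [n for n in range(1, 30) if comb(n, 2) % 2 == 0]
print(feasible)  # [1, 4, 5, 8, 9, 12, 13, ...]: exactly n % 4 in {0, 1}
```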
Is $f(x)=x^2+\frac{x^2}{1+x^2}+\frac{x^2}{(1+x^2)^2}+\dots,x \in \Bbb{R}$ continuos? | As I commented, $f(0) \neq g( 0)$, since the formula of geometric series is not applicable when $x = 0$. Thus
$$
f(x) =
\begin{cases}
0, & x=0\\
g(x), & \text {else}
\end{cases}.
$$ |
What does this notation mean? $\mathbb{Z}_{2}^{3}$ | $\mathbb Z_2^3\,$ can mean $\mathbb Z_2\times \mathbb Z_2\times \mathbb Z_2$, the set of triplets of elements of $\mathbb Z_2$.
I.e., $X=\{(0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1)\}$. |
can you solve $x'+x''=\sqrt {x}$ | There is a way to reduce such equations (in which the independent variable $t$ does not appear explicitly) to the integration of first order equations.
Let $p=x'$ and consider $p$ as a function of $x$. Then
$$
x''=\frac{dp}{dt}=\frac{dp}{dx}\,\frac{dx}{dt}=p\,\frac{dp}{dx}.
$$
The equation becomes
$$
a\,p\,\frac{dp}{dx}+b\,p=f(x),\quad\text{or}\quad\frac{dp}{dx}=\frac{f(x)}{a}\,\frac1p-\frac{b}{a}.
$$
Unfortunately this equation does not always have an easy solution. If you find $p=p(x)$, you still have to solve
$$
\frac{dx}{dt}=p(x),
$$
a first order equation whose solution is
$$
\int\frac{dx}{p(x)}=t+C.
$$
Again, in a lot of cases the integral does not admit a closed form solution in terms of elementary functions. |
Radon-Nikodym derivative as a Martingale | Hints:
Show (or recall) that any $\mathcal{F}_n$-measurable function $X$ is of the form $$X(\omega) = \sum_{j=0}^{2^n-1} c_j 1_{(j 2^{-n},(j+1)2^{-n}]}(\omega)$$ for suitable constants $c_j \in \mathbb{R}$.
Since $X_n$ is $\mathcal{F}_n$-measurable, Step 1 shows that there exist constants such that $$X_n(\omega) = \sum_{j=0}^{2^n-1} c_{j,n} 1_{(j 2^{-n},(j+1)2^{-n}]}(\omega).$$
If $X_n$ is the Radon-Nikodym derivative of $\nu$ with respect to $P$ restricted to $\mathcal{F}_n$, it holds that $$\int_F X_n \, d\mathbb{P} = \nu(F)$$ for all $F \in \mathcal{F}_n$. Choosing $F := (j2^{-n},(j+1)2^{-n}] \in \mathcal{F}_n$, we find $$c_{j,n} 2^{-n} = \int_F X_n \, d\mathbb{P}= \nu(F) = \nu((j 2^{-n},(j+1)2^{-n}]),$$ i.e. $$c_{j,n} = \frac{\nu((j 2^{-n},(j+1)2^{-n}])}{2^{-n}}.$$ Hence, $$X_n(\omega) = \sum_{j=0}^{2^n-1} \frac{\nu((j 2^{-n},(j+1)2^{-n}])}{2^{-n}}1_{(j 2^{-n},(j+1)2^{-n}]}(\omega).$$ |
Change of variables and linear transformations in multiple integrals | $D$ is the region bounded by the lines $y=2x$, $y=2x-2$, $y=0$, $y=2$. Or:
$\frac{2x-y}{2}=0$, $\frac{2x-y}{2}=1$, $y=0$, $y=2$.
So let $u=\frac{2x-y}{2}$ and $v=y$.
The answer isn’t unique, notice that the boundary can also be written as: $\frac{y}{2}=0$, $\frac{y}{2}=1$, $2x-y=0$, $2x-y=2$.
So $u=\frac{y}{2}$ and $v=2x-y$ also works. |
How to decide if these two maps are proper? | First recall that the image of a compact set under a continuous function between topological spaces is itself compact. This is proved by taking the given open cover on the image, using continuity to get an open cover of the original set, using compactness to reduce this to a finite open cover, and then taking the corresponding finite cover of the image set.
Then notice how if $m$ is odd, then $f$ and $g$ have (continuous) inverses $f^{-1}(y) = ~ ^m \sqrt y$ and $g^{-1}(y) =~^{m} \sqrt {a -y}$ which must map compact sets to compact sets.
If $m$ is even the problem is slightly harder. If $C$ is compact then the inverse map to $f$ is only defined over the positive numbers. So $f^{-1}(C)= f^{-1}(C \cap [0, \infty))$. Since $C$ is compact in $\mathbb{R}$ it is also closed. $[0, \infty)$ is also closed, so the intersection is a closed subset of a compact set, which requires it to be compact. The preimage of this compact set under $f$ is the union of its images under the two continuous branches $y \mapsto \pm\sqrt[m]{y}$, hence a union of two compact sets, and therefore compact.
The even case for $g$ can be handled by a similar method. The only difference is that the restriction of domain we must make to define the inverse map is a bit more complicated.
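For concreteness (my addition; I am assuming $f(x)=x^m$ with $m=2$, which the answer does not fix explicitly), the even-case identity $f^{-1}(C)= f^{-1}(C \cap [0,\infty))$ can be seen on a sample grid:

```python
# With f(x) = x^2 and the compact set C = [-2, 9], the preimage of C equals the
# preimage of C ∩ [0, ∞) = [0, 9], namely [-3, 3] -- checked on a grid.
f = lambda x: x**2
C = (-2.0, 9.0)                                   # compact interval containing negatives

xs = [i / 100 for i in range(-500, 501)]          # sample points in [-5, 5]
pre_C     = [x for x in xs if C[0] <= f(x) <= C[1]]
pre_C_pos = [x for x in xs if 0 <= f(x) <= C[1]]  # preimage of C ∩ [0, ∞)

assert pre_C == pre_C_pos                         # the two preimages coincide
assert min(pre_C) == -3.0 and max(pre_C) == 3.0   # = [-3, 3] on this grid
```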
$\Delta ^2(.)-\frac{\lambda}{|x|^4}(.): W^{2,2}(\Omega) \cap W^{1,2}_0(\Omega) \to W_0^{-2,2}(\Omega) $ is coercive. | It appears that you are using the notation $\langle \cdot, \cdot \rangle_{\mathbb{H}}$ for the inner product on you Hilbert space $W^{2,2}(\Omega)\cap W^{1,2}_0(\Omega)$ and $\langle \cdot,\cdot\rangle_{L^2}$ for the dual pairing with the space $W^{-2,2}_0(\Omega)$, which is the co-domain of the operator $L$. The term $\langle Lu,u\rangle_{\mathbb{H}}$ then does not make sense unless $u$ has more differentiability and even then will not be coercive.
The appropriate bi-linear form for coercivity, i.e. for existence of solutions via the Lax-Milgram theorem, is $\langle Lu,u\rangle_{L^2}$ which you have already shown is coercive. |
Stuck with Euclidean space geometry exercise | You are right in using the recipeocal of Thales. To continue the proof, you need to invoke another Greek guy’s result.
Consider $\frac{AM}{MB}$ and $\frac{AN}{NC}$.
We’d like to show that the two are equal. Where could we get these values from? Well, $N$ is the intersection of a line with another line in a triangle, so Menelaus’ Theorem is your friend here.
Applying Menelaus in triangle $ABD$ with the line $E$-$M$-$I$, we obtain $\frac{AM}{MB}\frac{EA}{AD}\frac{IB}{BD} = 1$, which gives a value for $\frac{AM}{MB}$. Similarly, applying Menelaus in triangle $ACD$ with the line $E$-$N$-$J$, we get $\frac{AN}{NC}\frac{EA}{AD}\frac{JC}{CD} = 1$. Since $\frac{JC}{CD} =\frac{IB}{BD}$, you have all the ingredients to show that $\frac{AM}{MB}$ and $\frac{AN}{NC}$ are equal.
Not strongly positive operator | I assume that in this context, $A$ is positive definite if $(x,Ax) \geq 0$ for all $x$ and $Ax = 0 \implies x = 0$.
With this in mind, an example on $\ell^2$: take the operator
$$
A(x_1,x_2,x_3,\dots) = \left(\frac{x_1}{1},\frac{x_2}{2},\frac{x_3}{3}, \dots\right).
$$
The induced bilinear form is not strongly positive because for $e_n$ (the $n$th standard basis vector) we have $\|e_n\| = 1$ and $(e_n,A e_n) = \frac 1n$. |
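A finite-dimensional truncation illustrates this (my sketch, standard library only): $(x,Ax)>0$ for nonzero $x$, yet $(e_n, A e_n) = \tfrac1n$ becomes arbitrarily small, so no $c>0$ satisfies $(x,Ax)\ge c\|x\|^2$.

```python
# Truncate A to the first N coordinates of l^2 and test the two claims.
import random

N = 500
def A(x):
    return [xk / (k + 1) for k, xk in enumerate(x)]        # divide k-th entry by k (1-based)

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(N)]
assert inner(x, A(x)) > 0                                  # (x, Ax) = sum x_k^2 / k > 0

e = lambda n: [1.0 if k == n - 1 else 0.0 for k in range(N)]
ratios = [inner(e(n), A(e(n))) for n in (1, 10, 100, 500)]
assert ratios == [1.0, 0.1, 0.01, 0.002]                   # (e_n, A e_n) = 1/n -> 0
```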
Show $f$ is continuous everywhere but nowhere differentiable | Large comment
Fix some $x\in\mathbb{R}$ and consider the difference:
$$f(x+h)-f(x)=\sum_{n=1}^\infty a_n\cos(b_n(x+h))-\sum_{n=1}^\infty a_n\cos(b_nx)=\sum_{n=1}^\infty a_n(\cos(b_n(x+h))-\cos(b_nx)),$$
since both series are absolutely convergent. We also have:
$$\frac{f(x+h)-f(x)}{h}=\sum_{n=1}^\infty a_n\frac{\cos(b_n(x+h))-\cos(b_nx)}{h}.$$
Now, since $\cos(b_nx)$ is differentiable on $\mathbb{R}$, by the M.V.T. there exists some $\xi_n=\xi_n(h)$ between $x$ and $x+h$ such that:
$$-b_n\sin(b_n\xi_n)=\frac{\cos(b_n(x+h))-\cos(b_nx)}{h}.$$
So, we have:
$$\frac{f(x+h)-f(x)}{h}=-\sum_{n=1}^\infty a_nb_n\sin(b_n\xi_n)=-\sum_{n=1}^\infty 10^{n^2-n}\sin(10^{n^2}\xi_n).$$
Note that now you can take $\xi_n$ to be arbitrarily close to $x$, since $\xi_n$ is dependent on $h\neq0$ and then you can verify that the last series does not converge. |
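Reading off $a_n = 10^{-n}$ and $b_n = 10^{n^2}$ from the last display (an inference on my part), here is a short exact-arithmetic check of the two competing behaviours: $\sum a_n$ converges (so $f$ is continuous by the Weierstrass M-test), while the coefficients $a_nb_n$ of the termwise-differentiated series grow without bound.

```python
from fractions import Fraction as F

a = lambda n: F(1, 10**n)          # a_n = 10^{-n}
b = lambda n: 10**(n * n)          # b_n = 10^{n^2}

# Absolute convergence: the partial sums of the geometric series approach 1/9.
s = sum(a(n) for n in range(1, 21))
assert s == F(10**20 - 1, 9 * 10**20)          # = (1 - 10^{-20}) / 9

# But the coefficients a_n * b_n = 10^{n^2 - n} of the differentiated series diverge.
coeffs = [a(n) * b(n) for n in range(1, 6)]
assert coeffs == [1, 100, 10**6, 10**12, 10**20]
```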
Amount of all subsets of size $n$ in a set with $e$ elements | What you are looking for are binomial coefficients. In your case, the quantity you want is $\binom{e}{n}$.
Using induction, you will find that $\binom{e}{n}=\frac{e!}{n!(e-n)!}$.
If you did not have previous knowledge of binomial coefficients, here is how you might find those values:
you choose a first element (you have $e$ choices), $e-1$ elements remain
you choose a second element (you have $e-1$ choices), $e-2$ remain
you go on until you have chosen $n$ elements; you had $e\times (e-1) \times \cdots \times (e-n+1) = \frac{e!}{(e-n)!}$ ways of doing that
what you now have is an ordered collection of elements, but you only want a subset of your initial set, that is, an unordered collection. To switch between the two, you must find how many ways you have to order the $n$ elements you extracted. That number is $n!$.
So in the end, you indeed have $\frac{e!}{n!(e-n)!}$ ways of extracting your subset. |
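The counting above can be sanity-checked against Python’s built-in binomial coefficient (my addition; standard library only):

```python
from math import comb, factorial

def n_subsets(e, n):
    """Number of n-element subsets of an e-element set: e! / (n! (e-n)!)."""
    return factorial(e) // (factorial(n) * factorial(e - n))

# The formula agrees with math.comb for every valid pair (e, n).
for e in range(10):
    for n in range(e + 1):
        assert n_subsets(e, n) == comb(e, n)

assert n_subsets(5, 2) == 10    # e.g. the 10 two-element subsets of a 5-element set
```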
Why is this matrix skew-symmetric? | You cannot conclude directly to the skew symmetry of the matrix $A=B-\tilde B$ because you have the same variable vector $x$ in the bilinear form $$(Ax,x)=0.$$
In order to conclude, you can proceed as follows. For any two vectors $x,y$, you have $$\begin{aligned}0=(A(x+y),x+y)&=(Ax+Ay,x+y)\\
&=(Ax,x)+(Ax,y)+(Ay,x)+(Ay,y)\\
&=(Ax,y)+(Ay,x)\end{aligned}$$
which proves that $A$ is skew-symmetric, provided that $(\cdot,\cdot)$ is nondegenerate.
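A numerical illustration of the computation above (my sketch, with the standard dot product and a matrix built to be skew-symmetric): both $(Az,z)=0$ and the polarization identity $(Ax,y)=-(Ay,x)$ hold.

```python
import random

random.seed(1)
n = 4
# Build a random skew-symmetric matrix: A[i][j] = -A[j][i], zero diagonal.
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        A[i][j] = random.uniform(-1, 1)
        A[j][i] = -A[i][j]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
mat = lambda M, v: [dot(row, v) for row in M]

x = [random.uniform(-1, 1) for _ in range(n)]
y = [random.uniform(-1, 1) for _ in range(n)]
z = [a + b for a, b in zip(x, y)]

assert abs(dot(mat(A, z), z)) < 1e-12                     # (A(x+y), x+y) = 0
assert abs(dot(mat(A, x), y) + dot(mat(A, y), x)) < 1e-12 # (Ax, y) = -(Ay, x)
```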
Showing that $f(x) \mid f(x^{m + i}) - f(x^{m})$ given that $f(x) \mid x^m - 1$ | This is not true as it stands: for a counterexample consider $m=2,\ f(x):=x+1$ and $i=1$. Then $f(x^{m+i})=f(x^3)=x^3+1$ and $f(x^m)=f(x^2)=x^2+1$, so $f(x^3)-f(x^2)=x^3-x^2=x^2(x-1)$ which is not dividable by $x+1$.
You probably want to conclude instead that $f(x)\ |\ \,f(x^{m+i})-f(x^i)$.
Your attempt 1 is wrong! Just consider divisibility among numbers: $3\,|\,4-1$ though $3$ divides neither $4$ nor $1$.
Instead, perhaps the simplest way is to consider the quotient ring $R:=\Bbb Z[x]\,/\,(f(x))$. The elements of this quotient ring are basically those of $\Bbb Z[x]$, only that $f(x)=0$ (and all its consequences, using the ring structure) holds in $R$. In particular, since $x^m-1=h(x)\cdot f(x)$ for some polynomial $h\in\Bbb Z[x]$, we have that
$$x^m=1\quad\text{ in }R\,.$$
Then, in $R$ we have
$$
f(x^{m+i}) = a_0+a_1x^mx^i+a_2(x^m)^2x^{2i}+\dots+a_n(x^m)^nx^{ni} \\
= a_0+a_1x^i+a_2x^{2i}+\dots+a_nx^{ni} = f(x^i).
$$
So we conclude $f(x^{m+i})-f(x^i)=0$ in $R$, and that proves exactly the statement.
(Alternatively, we can basically use that $x^m-1\ |\ (x^m)^k-1$.) |
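A sketch check of the corrected statement (my own code, with the concrete witness $f=x^2+x+1$, which divides $x^3-1$, so $m=3$); since $f$ is monic, remainders can be computed entirely in $\Bbb Z[x]$:

```python
from itertools import zip_longest

def poly_rem(num, den):
    """Remainder of num mod den; coefficient lists, lowest degree first.
    den must be monic, which keeps all arithmetic inside Z[x]."""
    num = num[:]
    while len(num) >= len(den):
        c = num[-1]                                   # leading coefficient
        for k in range(len(den)):                     # subtract c * x^(deg diff) * den
            num[len(num) - len(den) + k] -= c * den[k]
        num.pop()                                     # leading term is now zero
    while num and num[-1] == 0:
        num.pop()
    return num

def f_of_power(coeffs, e):
    """Coefficients of f(x^e) from the coefficients of f(x)."""
    out = [0] * ((len(coeffs) - 1) * e + 1)
    for k, c in enumerate(coeffs):
        out[k * e] += c
    return out

f, m = [1, 1, 1], 3                                   # f = x^2 + x + 1 divides x^3 - 1
for i in range(1, 6):
    diff = [a - b for a, b in zip_longest(f_of_power(f, m + i),
                                          f_of_power(f, i), fillvalue=0)]
    assert poly_rem(diff, f) == []                    # f | f(x^{m+i}) - f(x^i)

# The counterexample above: f = x + 1, m = 2, i = 1 leaves remainder -2.
assert poly_rem([0, 0, -1, 1], [1, 1]) == [-2]        # x^3 - x^2 mod (x + 1)
```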
Difficulties in denesting radicals $\sqrt{17+12\sqrt{2}}\,+\,\sqrt{17-12\sqrt{2}}$ | Note that $17+12\sqrt{2}=(3+2\sqrt{2})^2$ and (therefore) $17-12\sqrt{2}=(3-2\sqrt{2})^2$. |
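A floating-point confirmation (my addition): each nested radical is a perfect square, and the sum collapses to $(3+2\sqrt2)+(3-2\sqrt2)=6$.

```python
from math import sqrt, isclose

r = sqrt(2)
assert isclose((3 + 2*r)**2, 17 + 12*r)     # 17 + 12*sqrt(2) = (3 + 2*sqrt(2))^2
assert isclose((3 - 2*r)**2, 17 - 12*r)     # 17 - 12*sqrt(2) = (3 - 2*sqrt(2))^2

total = sqrt(17 + 12*r) + sqrt(17 - 12*r)
assert isclose(total, 6)                    # the denested sum
```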
Difference between two integers | (1) We are given $(10a+b)-(10b+a)=27$. The left hand side is $9(a-b)$, hence $a-b=3$. |
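A brute-force confirmation of the claim (my addition; I allow $b=0$, i.e. reversals with a leading zero): the reversal difference is $27$ exactly for the digit pairs with $a-b=3$.

```python
# Enumerate all two-digit numbers 10a + b and keep those whose difference with
# the reversed number 10b + a equals 27.
pairs = [(a, b) for a in range(1, 10) for b in range(10)
         if (10*a + b) - (10*b + a) == 27]

assert all(a - b == 3 for a, b in pairs)
assert pairs == [(3, 0), (4, 1), (5, 2), (6, 3), (7, 4), (8, 5), (9, 6)]
```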
Central Limit Theorem to evaluate $\frac{1}{(n-1)!} \int_0^n e^{-x}x^{n-1}dx$ | We're actually using the large-$n$ approximation $S_n\approx N(n,\,n)$ to evaluate $P(S_n\le\color{blue}{n})$, so the probability is approximated as $\tfrac12$. |
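A numerical sketch (my addition): the integral is $P(S_n\le n)$ for $S_n\sim\Gamma(n,1)$, and by the Poisson/Gamma duality it equals $1-e^{-n}\sum_{k=0}^{n-1} n^k/k!$, which indeed drifts toward $\tfrac12$.

```python
from math import exp

def p(n):
    """P(S_n <= n) = 1 - e^{-n} * sum_{k=0}^{n-1} n^k / k!, computed iteratively."""
    term, total = 1.0, 1.0                 # k = 0 term of the sum n^k / k!
    for k in range(1, n):
        term *= n / k                      # n^k / k! from the previous term
        total += term
    return 1.0 - exp(-n) * total

assert abs(p(1) - (1 - exp(-1))) < 1e-12   # n = 1: integral of e^{-x} over [0, 1]
for n in (10, 100, 500):
    assert abs(p(n) - 0.5) < 0.5 / n**0.5  # approaches 1/2 at rate O(1/sqrt(n))
```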
Showing $F[x,y]/(ax^2+by^2-1)$ is a Dedekind domain | To show that $R$ is a UFD is not a good idea. For instance, if $F=\mathbb R$, and $a=b=1$ this is not true; see here.
Instead, you can prove that $R$ is integrally closed. This follows from a general result which I proved here. See also the (solved) exercise 4.H from these notes.
For $\dim R=1$ you have to recall that $\dim F[X,Y]=2$. |