Moment generating function: $Z$ is a random variable such that $M_Z(t) = \infty$ for $t\geq 5$ (explain why it is undefined) | I will give one example and leave it to you to find another example.
Let $X\sim\mathrm{Exp}(\lambda)$, so that $X$ has density $f(x) = \lambda e^{-\lambda x}\cdot\mathsf 1_{(0,\infty)}(x)$. Then the moment-generating function $M_X(t)$ of $X$ is computed by
\begin{align}
M_X(t) &= \mathbb E[e^{tX}]\\
&= \int_0^\infty e^{tx}f(x)\ \mathsf dx\\
&= \int_0^\infty e^{tx}\lambda e^{-\lambda x}\ \mathsf dx\\
&=\lambda \int_0^\infty e^{-(\lambda-t)x}\ \mathsf dx\\
&=\frac\lambda{\lambda-t}.
\end{align}
Now, this only converges for $t<\lambda$ (you can see there is an obvious problem when $t=\lambda$). So set $\lambda = 5$, then for $t\geqslant 5$ the moment-generating function does not converge.
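As a quick numerical sanity check (a sketch assuming NumPy/SciPy are available, with $\lambda=5$ as above), one can compare the integral against the closed form for $t<5$:

```python
# Sketch: numerically evaluate M_X(t) = E[e^{tX}] for X ~ Exp(5).
import numpy as np
from scipy.integrate import quad

lam = 5.0
mgf = lambda t: quad(lambda x: np.exp(t * x) * lam * np.exp(-lam * x), 0, np.inf)[0]

for t in [0.0, 2.0, 4.9]:
    print(t, mgf(t), lam / (lam - t))   # the two columns agree for t < 5
```

For $t\geqslant 5$ the integrand no longer decays, so the integral diverges, exactly as the analysis above predicts. |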
Check whether two conjugate subgroups are still conjugate in some subgroup | $G=S_6$, $H_1=\langle (1,2,3) \rangle$, $H_2=\langle (4,5,6) \rangle$, $K = \langle (1,2,3),(4,5,6),(4,5) \rangle \cong C_3 \times S_3$. The subgroups $H_1$ and $H_2$ are conjugate in $G$, but $H_1$ lies in the central $C_3$ factor of $K$, so its only conjugate in $K$ is itself; hence $H_1$ and $H_2$ are not conjugate in $K$.
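A brute-force verification (a sketch using SymPy's permutation groups, with points $0$-indexed):

```python
# Sketch: H1, H2 are conjugate in S6 but not in K = <(123),(456),(45)>.
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup

a = Permutation([[0, 1, 2]], size=6)   # (1 2 3)
b = Permutation([[3, 4, 5]], size=6)   # (4 5 6)
c = Permutation([[3, 4]], size=6)      # (4 5)
H1 = {a**i for i in range(3)}
H2 = {b**i for i in range(3)}

def conjugate_in(G):
    return any({g * h * g**-1 for h in H1} == H2 for g in G.elements)

print(conjugate_in(SymmetricGroup(6)))             # True
print(conjugate_in(PermutationGroup([a, b, c])))   # False
```

So the two subgroups are conjugate in $S_6$ but not in $K$. |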
Resource for a library of closed curves. | This is really a good site for what you are looking for. |
Some Questions regarding preparing for Math Olympiads (searched but didn't get answers) | Here are some things to keep in mind:
Not everybody solves every problem, nor solves every problem quickly.
It is likely that you'll learn the most from trying to solve problems. You can probably learn helpful strategies about solving problems from "coaching" texts, but you are likely to learn (and retain) a lot more from practical experience.
Don't "try to hard." Speed is not everything in life. As a rule of thumb, chess players get good at fast games by first getting good at slow games. I think mathematics is much the same.
Try to "relax": the type of problems you are solving will usually be most efficiently solved by some insight. Ask yourself "does this look like anything I've seen before?" A lot of students struggle with problems when they are overthinking them.
Another thing I transfer over from chess is to not get discouraged by slow progress. Problem solving is a skill that takes time to develop, and not everybody develops at the same speed.
Another thing from chess: hindsight is $20/20$, so don't feel bad when solutions seem obvious. Hamilton struggled for a decade to multiply triples of real numbers to form a division algebra, but the proof that this is impossible is now an accessible exercise in undergraduate ring theory.
If you do eventually wind up reading a solution, try to reason to yourself how you could have figured that out. This is usually possible, and helps to build those problem solving muscles.
Exposure to many problems is good. (This is actually more advice from chess.) The more you see, the more you learn and hopefully can transfer to other problems. This is one reason not to linger on problems too long. But it's not justification to give up on problems after thinking for only 15 minutes :)
If you can find a teacher or friend that's skillful enough not to "spoil" problems for you, you might find it refreshing to exercise solving problems together.
If you ever make a mistake (don't worry, you will) don't just brush it off and try to forget it. Think about what happened for a while and try to understand what went wrong, and you will probably be wiser the next time around. (coughchesscough)
From your rough description, it sounds like you are just feeling the initial discouragement that lots of people feel when first starting against a challenge. My advice would be to pick a fixed duration to struggle with a problem. Feel free to break the rules and extend it if you're having a good time. If I were you, I might spend an hour or two or more on a problem before looking for a hint. It depends on the problem. I would definitely avoid looking for a full solution for as long as possible. When practicing for qualification exams, I definitely spent an hour on many problems. One qual itself took $8$ hours to finish, and we were only submitting $8$ solutions. We all took the full time :)
Actually, think about moving on to other problems before looking up the solution to the last one. Sometimes you will find yourself solving the problem later, maybe the next day. There is no rule that you have to solve and understand them sequentially. I know for sure that in every math test I took, I jumped around doing different problems, and often felt like I was finishing the test more efficiently that way. Problems that were initially puzzling usually resolved themselves by the time I got around to them. |
Coercivity for functional and complete orthonormal system | To be coercive you need to have an estimate of the form $J(\rho) > C\|\rho\|_{W^{1,2}}^2$. If you consider (say) $\rho_k(x) = \sin kx$ you have $J(\rho_k) = \dfrac{\pi}{4}$ and $\|\rho_k\|_{W^{1,2}}^2 = \dfrac{\pi}{4} ( 1 + k^2)$. The coercivity condition would force $C$ to satisfy $ 0 < C < \dfrac{1}{1 + k^2}$ for all natural numbers $k$, which is impossible for a fixed $C>0$; hence the functional is not coercive. |
Expected matching number | Let $X_i=1$ if student $i$ is sitting in her original seat, and let $X_i=0$ otherwise. Then the number $Y$ of students sitting in their original seats is given by $Y=X_1+X_2+\cdots+X_n$.
By the linearity of expectation, we have $E(Y)=E(X_1)+E(X_2)+\cdots +E(X_n)$.
We have $E(X_i)=P(X_i=1)=\frac{1}{n}$, so $E(Y)=1$.
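A quick Monte Carlo confirmation (a sketch in plain Python):

```python
# Sketch: empirical mean of the number of fixed points of a random permutation.
import random

def fixed_points(n):
    seats = list(range(n))
    random.shuffle(seats)
    return sum(i == s for i, s in enumerate(seats))

n, trials = 20, 100_000
print(sum(fixed_points(n) for _ in range(trials)) / trials)   # close to 1.0
```

The empirical mean stays near $1$ no matter what $n$ is, just as linearity of expectation predicts. |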
If I define $F(x)=\int f(x)\:dx$, is right to say that $F(u)=\int f(u)\:du$ | Well, let us make some clarifications about the symbol $\int f(x) dx$. In general, with this we denote a whole family of functions, not just one function - you can see the specific construction of this family in the first part of this answer. So, writing:
$$F(x)=\int f(x)dx$$
has no meaning, since the left hand side of this equation is a function and the right one is a set of functions. Even writing:
$$F(t)=\int f(x)dx$$
has no meaning, for the same reasons.
So, keeping in mind that the indefinite integral is a symbol for a set of functions - with a common property - what one could use is the symbol of the definite integral, as mentioned in the comments section by @Mark Viola. So, for instance, one can write:
$$F(x)=\int_a^xf(t)dt$$
which is, if $f(x)\geq0$ over its domain, the area under $f$ from $a$ to $x$ - or the opposite of this area, if $x<a$. Note that "inside" the integral we use a different variable than the one we use at its limits. So, one can also write:
$$F(x)=\int_a^xf(u)du$$
Using the same variable "inside" and "outside" the integral would cause great confusion as to which variable we are integrating with respect to.
You can also make both limits of integration be variable:
$$F(x)=\int_x^{x+1}f(t)dt$$
or, more generally, if $g,h$ are two "nice" functions - for instance, continuous - we can consider:
$$F(x)=\int_{g(x)}^{h(x)}f(t)dt$$
Note, however, that the following has a meaning:
$$F(x)=\int_a^xf(x)dt$$
To interpret this, one can think as follows: $dt$ tells us with respect to which variable we are integrating. So, in our case, this variable is $t$ and, hence, everything that does not contain $t$ is considered to be a fixed number. So, in this case $f(x)$ is a plain number, and we have:
$$F(x)=\int_a^xf(x)dt=f(x)\int_a^x1dt=f(x)(x-a)$$
So, this was a more "tricky" way to write the function $f(x)(x-a)$.
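This last identity is easy to confirm symbolically (a sketch using SymPy):

```python
# Sketch: integrating f(x) with respect to t treats f(x) as a constant.
import sympy as sp

x, t, a = sp.symbols('x t a')
f = sp.Function('f')
print(sp.integrate(f(x), (t, a, x)))   # (-a + x)*f(x), i.e. f(x)*(x - a)
```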
Finally, we can have even more complex cases, as the following:
$$F(x)=\int_a^xg(x-t)f(t)dt$$
but the discussion has already gone too far... |
Given a commutative ring $R$ and a monic polynomial $p(x) \in R[x]$ is $R[x]/\langle p(x) \rangle$ always a finite integral extension of $R$? | If $p(x)=x^d+a_{d-1}x^{d-1}+\dots+a_0$, then the quotient is generated (in fact, freely generated) by $S=\{1,x,\dots,x^{d-1}\}$. Indeed, you can prove by induction that $x^n$ is in the submodule generated by $S$ for each $n$. For $n<d$ this is trivial. For $n\geq d$, you have $x^{n-d}p(x)=0$ so $x^n=-a_{d-1}x^{n-1}-\dots-a_0x^{n-d}$, which is generated by lower powers of $x$ and hence by $S$. To see that $S$ freely generates the quotient, note that any nonzero multiple of $p(x)$ has degree at least $d$, so no nontrivial $R$-linear combination of the elements of $S$ can be a multiple of $p(x)$. |
Parabolic subalgebra | The answer is NO.
Take $\mathfrak g=\mathfrak{sl}(3)$ and $\Delta=\{a_1,a_2\}$. The unique choice for $\Delta'$ is $\{a_2\}$.
The result is clearly false in this setting. |
Describing the symmetries of a $2n$-gon in $\Bbb R^2$ with matrices. | Note that we can write any such reflection as the matrix product:
$R^kS$, where:
$R^k = \begin{bmatrix}\cos(\frac{\pi k}{n})&-\sin(\frac{\pi k}{n})\\ \sin(\frac{\pi k}{n})&\cos(\frac{\pi k}{n})\end{bmatrix}$, $k = 0,1,\dots,2n-1$;
and
$S = \begin{bmatrix}1&0\\0&-1\end{bmatrix}$.
It should not be hard for you to prove that:
$R^{2k}S$ fixes the $k$-th vertex and the $(k+n)$-th vertex, and is thus the reflection we seek.
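A numerical check (a sketch with NumPy, taking the vertices to be $v_j=(\cos(\pi j/n),\sin(\pi j/n))$):

```python
# Sketch: R^{2k} S fixes the k-th and (k+n)-th vertices of the 2n-gon.
import numpy as np

n, k = 5, 2
R = lambda m: np.array([[np.cos(np.pi*m/n), -np.sin(np.pi*m/n)],
                        [np.sin(np.pi*m/n),  np.cos(np.pi*m/n)]])
S = np.array([[1.0, 0.0], [0.0, -1.0]])
v = lambda j: np.array([np.cos(np.pi*j/n), np.sin(np.pi*j/n)])

M = R(2*k) @ S
print(np.allclose(M @ v(k), v(k)), np.allclose(M @ v(k+n), v(k+n)))   # True True
```

Since $S$ sends $v_j$ to $v_{-j}$ and $R^{2k}$ shifts vertex indices by $2k$, the fixed vertices are exactly $j=k$ and $j=k+n$. |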
Systematic way to solve modular equations? | There is no such general approach that I know of; in general these problems only look difficult (possibly because you lack training or self-confidence) - in reality they are quite simple, and one minute should suffice for each of them.
You have $n (n + 4) = n^2 + 4n \equiv 7 \pmod {10}$ and $n \not\equiv 7 \pmod {10}$.
Question: what may be the last digits of $n$ and $n+4$ such that their product should have the last digit $7$? We only have the possibilities $(1, 7)$, $(7, 1)$, $(3, 9)$, $(9, 3)$.
Note that the possibility $(7, 1)$ is ruled out by the requirement that $n \not\equiv 7 \pmod {10}$.
Note also that if $n \equiv 3 \pmod {10}$, then $n^2$ ends in $9$ and $4n$ ends in $2$, so $n^2 + 4n$ ends in $9 + 2 \equiv 1 \pmod {10}$, not in $7$ as required. Similarly, if $n \equiv 1 \pmod {10}$, then $n^2 + 4n$ ends in $1^2 + 4 \cdot 1 \equiv 5 \pmod {10}$, again not satisfying the requirement of the problem.
Therefore, the only remaining possibility for the last digits of the pair $(n, n+4)$ is $(9, 3)$, so the last digit of $n+3$ is $9 + 3 \equiv 2 \pmod {10}$.
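The residue analysis can also be checked exhaustively (a short brute-force sketch):

```python
# Sketch: all n mod 10 with n(n+4) = 7 (mod 10) and n != 7 (mod 10).
sols = [n for n in range(10) if n * (n + 4) % 10 == 7 and n % 10 != 7]
print(sols)                  # [9]
print((sols[0] + 3) % 10)    # 2, the last digit of n + 3
```

Only the residue $9$ survives, in agreement with the case analysis above. |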
Real $2\pi$ periodic functions | I think that the above means the following:
You have a function $f(x)\in C^{2\pi}$ defined on $\mathbb{R}^1$. You must make it a function on $\mathbb{T}=\{z\in \mathbb{C}: |z|=1\}.$ We can take the segment $[0,2\pi]$, and the map $t\mapsto e^{it}$ carries it to the unit circle. Right?
For example: Let $f(x)=\sin x$. We know that it is a continuous function with period $2\pi$ defined on $\mathbb{R}^1$. Then $f(e^{ix})=\sin x$ is well defined precisely because $f$ is $2\pi$-periodic. |
Partial derivatives of $\ln(x^2+y^2)$ | It is correct. To justify your feeling, you can apply the chain rule to the maps $g\colon x\mapsto x^2+y^2$ (where $y$ is fixed) and $f\colon x\mapsto \ln x$.
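A one-line symbolic confirmation (a sketch with SymPy):

```python
# Sketch: the chain rule gives 2x/(x^2 + y^2).
import sympy as sp

x, y = sp.symbols('x y')
print(sp.diff(sp.ln(x**2 + y**2), x))   # 2*x/(x**2 + y**2)
```

which matches the chain-rule computation. |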
Vector spaces and subspaces. | Your answer to (a) doesn't work because $a^{-1}$ doesn't exist when $a=0$. Even apart from this difficulty, your proposed multiplication won't satisfy the distributive law (with respect to scalar addition). There won't be any example over the field of rational numbers; I suggest trying the field of complex numbers.
For (b), I think the intention is that $S$ should be a subspace of $V$ using (as the definition requires) the restrictions to $S$ of the operations of $V$, and should also be a vector space with some other operations. In other words, it's similar to question (a) in that the same set is a vector space for two different choices of operations. |
$L^2$ function on finite interval implies $L^1$? | To give the question a chance to get off the unanswered list:
Yes, every square integrable function on a finite interval is integrable. Even more generally, if $(X,\mathcal{A},\mu)$ is a finite measure space ($\mu(X) < \infty$), then for all $1 \leqslant q < p \leqslant \infty$ we have $L^p(\mu) \subset L^q(\mu)$.
Hölder's inequality states that for $r,s \geqslant 1$ with $\frac1r + \frac1s = 1$, and any measurable $u,v$, we have
$$\int_X \lvert u(x)v(x)\rvert\,d\mu \leqslant \left(\int_X \lvert u(x)\rvert^r\,d\mu\right)^{1/r}\cdot \left(\int_X \lvert v(x)\rvert^s\,d\mu\right)^{1/s}.$$
Applying that with $u = 1$, $v = \lvert f\rvert^q$, $r = \frac{p}{p-q}$ and $s = \frac{p}{q}$ yields
$$\lVert f\rVert_q^q \leqslant \mu(X)^{1-q/p}\cdot \lVert f\rVert_p^{q},$$
or
$$\lVert f\rVert_q \leqslant \mu(X)^{1/q-1/p}\cdot \lVert f\rVert_p.$$
So $\lVert f\rVert_p < \infty$ implies $\lVert f\rVert_q < \infty$ for $q < p$ if $\mu(X) < \infty$.
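A concrete illustration on $[0,1]$ with $p=2$, $q=1$ (a sketch using SciPy; the function is a made-up example):

```python
# Sketch: f(x) = x^(-1/3) on [0,1]; then ||f||_1 <= mu(X)^(1/2) * ||f||_2.
import numpy as np
from scipy.integrate import quad

f = lambda x: x ** (-1 / 3)
L1 = quad(f, 0, 1)[0]                               # = 3/2
L2 = np.sqrt(quad(lambda x: f(x) ** 2, 0, 1)[0])    # = sqrt(3)
print(L1, L2, L1 <= L2)   # 1.5  1.732...  True
```

Here $\mu(X)=1$, so the bound reads $\lVert f\rVert_1\leqslant\lVert f\rVert_2$, as observed. |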
sequence with infinitely many limit points | You can use something like $\{1/n\}_{n}\cup\{1+1/n\}_{n}\cup\cdots\cup\{k+1/n\}_{n}\cup\cdots$ |
Module homomorphism is well defined | You are mapping $m+A$ to $m+N$ and have to show that this is independent of the representing element of the equivalence class $m+A$. So suppose $m+A=n+A$ in $M/A$. In other words, $m-n=a\in A$. You have to show that $f(m)-f(n)\in N$. So consider $$f(m)-f(n)=f(m-n)=f(a)=a\in A\subset N.$$ So your map is independent of the representing element and well-defined. |
Proving distributive law of natural numbers | Perfect ... though you may want to note that you need associativity of addition, since:
$a \times (b + S(c)) = a \times (S(b + c)) = (a \times (b+c)) + a = \color{red}{(}(a \times b) + (a \times c)\color{red}) + a$
while:
$(a \times b) + (a \times S(c)) = (a \times b) + \color{red}((a \times c) + a\color{red})$ |
Impose PDE itself as Boundary Condition? | The underlying problem is that the boundary condition you have provided does not assist in evaluating the solution.
Consider that, for your PDE, if $u=f(x,y)$ is a solution, then so is $u=f(x,y)+C$ for any constant $C$. This constant cannot be determined using your boundary condition.
Indeed, consider that the following expression is a solution for any choice of constants $C_i$:
$$
u=C_1+C_2(x-y)+C_3e^{-x}+C_4e^{-y}+C_5e^{-x-y}
$$
None of these constants can be determined by the boundary condition as provided.
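Judging from this family (and from the finite-difference scheme in the edit below), the PDE in question is $u_x+u_y+u_{xx}+u_{yy}=0$; here is a quick symbolic check (a sketch with SymPy) that every member of the family solves it:

```python
# Sketch: u = C1 + C2(x-y) + C3 e^{-x} + C4 e^{-y} + C5 e^{-x-y}
# solves u_x + u_y + u_xx + u_yy = 0 for all constants C_i.
import sympy as sp

x, y = sp.symbols('x y')
C1, C2, C3, C4, C5 = sp.symbols('C1:6')
u = C1 + C2*(x - y) + C3*sp.exp(-x) + C4*sp.exp(-y) + C5*sp.exp(-x - y)
pde = sp.diff(u, x) + sp.diff(u, y) + sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(pde))   # 0
```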
That being said, there are often implied conditions that can be used to obtain boundary conditions. For example, if you are working in a real-world context, you may need to apply an energy-conservation requirement. Alternatively, boundedness and related concepts often imply some form of boundary condition. For instance, consider
$$
xy'+y=0
$$
In the limit as $x\to0$, we get $y=0$ unless $y'$ tends to infinity. Indeed, the general solution is $y=C/x$, and so if $y$ must remain bounded, then $y(0)=0$, and thus $y=0$.
However, this is actually a separate condition on the problem that generates a boundary condition. You cannot simply apply the DE to the boundary, as it provides no additional information, and boundary conditions are applied to provide information that is otherwise missing.
The mistake you make is the claim that you must apply something at the boundary. This is untrue. You must apply a condition that allows you to identify the correct solution (unless you want a family of solutions, that is).
The nature of the condition will depend on the goal of the differential equation. As mentioned above, sometimes you need to apply some energy-conservation condition. Other conditions are possible - minimisation conditions are common. For example, you might want the solution to your problem that has the least amount of variation (that is, minimise $\iint u_{x}^2+u_{y}^2\ dA$).
There is one other category - cases where your choice won't matter. This is much like how, when performing integration by parts, you don't need to include the constant of integration for the "$\int dv$" - it will cancel out anyway. In this situation, pick the simplest boundary condition for the problem - for instance, if $u(0,0)=1$ and $u(0,\infty)=0$, then you might pick $u(0,y)=e^{-y}$.
EDIT: Here, I'll show what your forward-difference application of the PDE at the boundary is actually assuming for the boundary. Assuming you use central difference for first derivatives when not at the boundary, you have...
$$
\begin{align*}
\frac{u_{1,j}-u_{0,j}}{\Delta x}+\frac{u_{0,j+1}-u_{0,j-1}}{2\Delta y}+\frac{u_{2,j}-2u_{1,j}+u_{0,j}}{(\Delta x)^2}+\frac{u_{0,j+1}-2u_{0,j}+u_{0,j-1}}{(\Delta y)^2}&=0\\
\frac{u_{2,j}-u_{0,j}}{2\Delta x}+\frac{u_{1,j+1}-u_{1,j-1}}{2\Delta y}+\frac{u_{2,j}-2u_{1,j}+u_{0,j}}{(\Delta x)^2}+\frac{u_{1,j+1}-2u_{1,j}+u_{1,j-1}}{(\Delta y)^2}&=0
\end{align*}
$$
With a bit of algebra, we can cancel the $u_{2,j}$ terms to get
$$\begin{align*}
\left(\frac12+\frac{2+\Delta x}{(\Delta y)^2}\right)u_{0,j} - \left(\frac12+\frac2{(\Delta y)^2}\right)u_{1,j} & \\
+ \left(\frac{\Delta x}{4\Delta y} - \frac{\Delta x}{2(\Delta y)^2} + \frac{1}{2\Delta y} - \frac{1}{(\Delta y)^2}\right)u_{0,j-1} - \left(\frac{1}{2\Delta y} - \frac{1}{(\Delta y)^2}\right)u_{1,j-1} & \\
- \left(\frac{\Delta x}{4\Delta y} + \frac{\Delta x}{2(\Delta y)^2} + \frac{1}{2\Delta y} + \frac{1}{(\Delta y)^2}\right)u_{0,j+1} + \left(\frac{1}{2\Delta y} + \frac{1}{(\Delta y)^2}\right)u_{1,j+1} & =0
\end{align*}$$
By Taylor expansion around the $(0,j)$ point, letting $u$ be the function at that point and stopping at $O(\Delta)$ (treating both $\Delta x$ and $\Delta y$ as proportional to $\Delta$), we get
$$\begin{align*}
\frac{\Delta x}{(\Delta y)^2} u - \frac{\Delta x}2 (u_x+u_y+2u_{xy}) - \frac{\Delta x^2}4 u_{xx} + O(\Delta) & =0
\end{align*}$$
where I have left in the terms that come out explicitly... but note that this is equivalent to
$$
\frac{\Delta x}{(\Delta y)^2} u + O(\Delta) = 0
$$
And so, if $\Delta x$ and $\Delta y$ are both scaled down in proportion with each other, your boundary condition is effectively $u(0,y)=0$.
However, be careful, as this analysis applies for keeping $\Delta x$ and $\Delta y$ in proportion. I suspect that some issues may arise if you reduce $\Delta x$ without reducing $\Delta y$. |
Probability of fair coin given 20 heads observed | Let A denote the event of picking a fair coin.
Let B denote the event of tossing $20$ consecutive heads.
$$P(A|B)=\frac{P({A}\cap{B})}{P(B)}=\frac{\frac{1000000-1}{1000000\cdot2^{20}}}{\frac{1000000-1}{1000000\cdot2^{20}}+\frac{1}{1000000\cdot1^{20}}}\approx48.81\%$$
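The arithmetic is easy to reproduce (a sketch; as in the formula, the setup has $10^6$ coins, exactly one of which is two-headed):

```python
# Sketch: P(fair | 20 heads) with 999,999 fair coins and 1 two-headed coin.
fair = (10**6 - 1) / 10**6 * (1 / 2**20)
biased = 1 / 10**6 * 1.0
print(fair / (fair + biased))   # 0.48814...
```

So even after twenty heads in a row, the picked coin is still roughly a coin flip away from being fair. |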
Linear Programming:What combination of two loams to minimize cost | One way to get around the lack of a constraint on the total amount produced is:
Let $x$ be the number of pounds of premium loam used in the production of 100lbs of your final mixture. Let $y$ be the number of pounds of generic loam used in the production of 100lbs of your final mixture.
The problem is to $$\min \; \frac{5}{50}x + \frac{1}{50}y$$ subject to
\begin{align}
0.6x + 0.2y &\ge 36 \\
0.4x + 0.1y &\ge 20 \\
x+y&=100\\
x &\ge 0 \\
y &\ge 0 \\
\end{align}
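The LP is small enough to solve numerically (a sketch assuming SciPy; the $\ge$ rows are negated to fit `linprog`'s $\le$ convention):

```python
# Sketch: minimize (5/50)x + (1/50)y subject to the constraints above.
from scipy.optimize import linprog

c = [5 / 50, 1 / 50]
A_ub = [[-0.6, -0.2], [-0.4, -0.1]]   # flipped >= constraints
b_ub = [-36, -20]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[[1, 1]], b_eq=[100],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # [40. 60.]  5.2
```

The first constraint binds: $x=40$, $y=60$, with cost $5.2$. |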
Proving the existence of $[2k, k, k]$-codes | Let us separate it into two cases :
Case 1 : $k$ is not of the form $2^r$
Let us define $m:=\lceil\log_2(k)\rceil-1$ where we have $2^m\lt k\lt 2^{m+1}$.
Then,
$$2^{m-i+1}\lt \frac{k}{2^{i-1}}\lt 2^{m-i+2}$$
So,
$$\begin{align}\sum_{i=1}^{k}\left\lceil\frac{k}{2^{i-1}}\right\rceil&=\sum_{i=1}^{m+1}\left\lceil\frac{k}{2^{i-1}}\right\rceil+\sum_{i=m+2}^{k}\left\lceil\frac{k}{2^{i-1}}\right\rceil\\\\&\ge\sum_{i=1}^{m+1}(2^{m-i+1}+1)+\sum_{i=m+2}^{k}1\\\\&=2^{m+1}-1+m+1+k-(m+2)+1\\\\&=2^{m+1}-1+k\\\\&\gt 2k-1\end{align}$$
Since $\sum_{i=1}^{k}\left\lceil\frac{k}{2^{i-1}}\right\rceil$ is an integer, we are done.
Case 2 : $k=2^m$
$$\begin{align}\sum_{i=1}^{k}\left\lceil\frac{k}{2^{i-1}}\right\rceil&=\sum_{i=1}^{m+1}\left\lceil\frac{2^m}{2^{i-1}}\right\rceil+\sum_{i=m+2}^{2^m}\left\lceil\frac{2^m}{2^{i-1}}\right\rceil\\\\&=\sum_{i=1}^{m+1}2^{m-i+1}+\sum_{i=m+2}^{2^m}1\\\\&=2^{m+1}-1+2^m-(m+2)+1\\\\&=2k+(k-\log_2k-2)\\\\&\gt 2k\end{align}$$
Here, we used
$$k-\log_2k-2\gt 0\tag1$$
for $k\ge 5$.
Finally, let us prove $(1)$ for $k\ge 5$.
Let $f(k)=k-\log_2k-2$.
Then,
$$f'(k)=1-\frac{1}{k\ln 2}=\frac{k\ln 2-1}{k\ln 2}\gt 0$$
So, $f(k)$ is increasing with $f(5)=3-\log_2(5)\gt 3-\log_2(8)=0$.
It follows from this that $f(k)\gt 0$ for $k\ge 5$.
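The key inequality is also easy to confirm numerically (a sketch):

```python
# Sketch: verify sum_{i=1}^k ceil(k / 2^(i-1)) >= 2k for 3 <= k < 300.
from math import ceil

for k in range(3, 300):
    s = sum(ceil(k / 2 ** (i - 1)) for i in range(1, k + 1))
    assert s >= 2 * k, (k, s)
print("inequality verified for 3 <= k < 300")
```

(For $k=1,2$ the sum falls short of $2k$, which is why those small cases need separate treatment.) |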
Does pullback of schemes by monomorphism produce topological pullback? | Yes.
By the explicit construction of the fiber product of locally ringed spaces (in particular, of schemes), it follows that the continuous map $|X \times_Z Y|\to |X| \times_{|Z|} |Y|$ is surjective and that the fiber over some point $(x,y,z)$ in $|X| \times_{|Z|} |Y|$ is $\mathrm{Spec}(\kappa(x) \otimes_{\kappa(z)} \kappa(y))$. A basis of the topology on $|X \times_Z Y|$ is given by the open subsets
$$\Omega(U,V,T,f) = \{(x,y,z,\mathfrak{p}) : x \in U, y \in V, z \in T, f(x,y,z) \notin \mathfrak{p}\}$$
where $U \subseteq X$, $V \subseteq Y$, $T \subseteq Z$ are open subsets such that $U$ and $V$ map into $T$, $f \in \mathcal{O}_X(U) \otimes_{\mathcal{O}_Z(T)} \mathcal{O}_Y(V)$, and $f(x,y,z)$ denotes the image of $f$ in $\kappa(x) \otimes_{\kappa(z)} \kappa(y)$.
A reference for the theory of monomorphisms of schemes resp. epimorphisms of commutative rings is
Séminaire Samuel. Algèbre commutative, 2, 1967-1968, Les épimorphismes d'anneaux, available at Numdam.
By Prop. 1.5 in Lazard's Exp. No. 4 a monomorphism $X \to Z$ is injective on the underlying sets and induces isomorphisms on residue fields. Therefore, $\mathrm{Spec}(\kappa(x) \otimes_{\kappa(z)} \kappa(y)) = \mathrm{Spec}(\kappa(y))$ is a single point. It follows that the continuous map $|X \times_Z Y|\to |X| \times_{|Z|} |Y|$ is bijective. The image of a basic-open subset $\Omega(U,V,T,f)$ is
$$\{(x,y,z) \in |U| \times_{|T|} |V| : f(x,y,z) \neq 0\}.$$
It is open: If $f(x,y,z) \neq 0$, then $f_{x,y,z} \in \mathcal{O}_{X,x} \otimes_{\mathcal{O}_{Z,z}} \mathcal{O}_{Y,y}$ is invertible. Since directed colimits commute with tensor products, we infer that there are open neighborhoods $U',V',T'$ of $x,y,z$ inside $U,V,T$ such that a) the inverse $f^{-1}$ of $f$ is defined in $\mathcal{O}_X(U') \otimes_{\mathcal{O}_Z(T')} \mathcal{O}_Y(V')$, and b) the equation $f f^{-1} = 1$ already holds in this tensor product. It follows that $|U'| \times_{|T'|} |V'|$ is contained in the set, and this is open with respect to the fiber product topology. |
Find the general solution of equation $\cot x+\tan x=2$ | Hmm, wrong reduction to algebraic form.
$$\tan x\equiv u\implies u+1/u=2\implies u=1$$
$$u=1\implies x=n\pi+\frac\pi4$$ |
Relationship between determinant and integral? | Consider a sphere $M$ with standard metric and spherical coordinate chart ($\varphi$, $\theta$). Let's have a field $\phi=(\cos\theta, 0)$, then its Jacobian matrix:
$$
\nabla\phi = \begin{pmatrix}0&-\sin\theta\\0&0\end{pmatrix}.
$$
Take some constant $B=\begin{pmatrix}1&1\\-1&2\end{pmatrix}$, then $\det(B+\nabla\phi)=3-\sin\theta$.
Consider integral:
$$
\int_M \det(B+\nabla\phi) = \int_{-\pi}^{\pi}\int_{-\pi/2}^{\pi/2}(3-\sin\theta)\cos\theta d\theta d\varphi = 12\pi = \underbrace{\mathop{\mathrm{vol}}(M)}_{4\pi}\underbrace{\det B}_{3}.
$$
where $\mathrm{vol}(M)=\int_M1$ is the total volume (in this case, area) of the manifold $M$.
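A symbolic confirmation of the integral (a sketch with SymPy):

```python
# Sketch: integrate (3 - sin(theta)) * cos(theta) over the spherical chart.
import sympy as sp

theta, phi = sp.symbols('theta phi')
I = sp.integrate((3 - sp.sin(theta)) * sp.cos(theta),
                 (theta, -sp.pi / 2, sp.pi / 2), (phi, -sp.pi, sp.pi))
print(I)   # 12*pi
```

which equals $\mathrm{vol}(M)\det B=4\pi\cdot 3$, as claimed. |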
Proper ideals of local rings | A maximal ideal is maximal by set-inclusion among proper ideals. In a local ring, there is just one maximal ideal, and since every proper ideal is contained in some maximal ideal, every proper ideal must be contained in it. |
difference between similarity and affine transformation | In very simple words,
An affine transformation can be thought of as the composition of two operations: (1) First apply a linear transformation, (2) Then, apply a translation
Essentially, an affine transformation is like a linear transformation but now you can also "shift" or translate the origin. (Recall that in a linear transformation, the origin is sent to the origin)
This means that an affine transformation will still transform lines to lines and parallel lines to parallel lines. But in general, angles may not be preserved (just like a linear transformation).
A similarity transform is a special kind of affine transformation that preserves "shape". You can think of this as some combination of (1) Translation, (2) Rotation, (3) Uniform Scaling (all dimensions are scaled the same way), and (4) Reflection
Preserving shape means that a similarity transform also preserves angles.
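A tiny numerical illustration of the decomposition (a sketch with NumPy; the matrices are made-up examples):

```python
# Sketch: an affine map p -> A p + b; the origin goes to b, not to the origin.
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 1.0]])   # linear part (a shear, not a similarity)
b = np.array([3.0, -1.0])                # translation part
affine = lambda p: A @ p + b

print(affine(np.array([0.0, 0.0])))      # [ 3. -1.]: the origin is translated
# For a similarity, A would be s * R with R orthogonal and s > 0 a uniform scale.
```

The angle-preservation claim corresponds exactly to the linear part being a scalar multiple of an orthogonal matrix. |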
Tangent bundle to Grassmannian | The Grassmannian represents a functor. You can compute the tangent bundle by evaluating the functor on square zero nilpotent extensions. |
What Method is used for Projecting the Rauzy Fractal? | Have a look at this paper by Sirvent and Wang. It's not elementary but it defines the process quite completely with a worked example. In particular, the contracting space ${\mathbb H}_c$ is clearly defined. On slide 3 of the presentation, it simply says that ${\mathbb H}_c$ is "generated by the eigenvectors of $\beta$ Galois conjugates". (Of course, it's a presentation so we could perhaps excuse the terseness.) The relevant part in the paper is the so called valuation map $E$ which accepts strings and returns complex numbers. The image of the orbit of the fixed word is exactly the Rauzy fractal.
Suppose, for example, that you're dealing with the substitution
$$1\to 12, \: 2\to 13, \: 3\to 1.$$
This has incidence matrix
$$
\left(
\begin{array}{ccc}
1 & 1 & 0 \\
1 & 0 & 1 \\
1 & 0 & 0 \\
\end{array}
\right)
$$
one of whose eigenvectors is approximately
$$\omega = \langle \omega_1, \omega_2, \omega_3 \rangle =\langle -0.412 + 0.61 i, 0.223 - 1.12i, 1\rangle.$$
The complex conjugate of $\omega$ is also an eigenvector; the remaining eigenvector corresponds to a real eigenvalue and is not relevant.
Given a finite word $U$ let $|U|_i$ denote the number of occurrences of the symbol $i$ in $U$. Then, according to equation 3 in the paper, the valuation $E$ is defined by
$$E(U) = \sum_{i=1}^3 |U|_i \omega_i.$$
Note that the $\omega_i$s are simply the components of the vector $\omega$ and have complex values. So clearly, $E(U)$ is a complex number.
Now, the fixed word for this substitution starts something like so:
12131211213121213121121312131211213121213121121312112131212131211213121312112131212131
Of course, the first few terms of the orbit of this finite word under the shift operator are
12131211213121213121121312131211213121213121121312112131212131211213121312112131212131
2131211213121213121121312131211213121213121121312112131212131211213121312112131212131
131211213121213121121312131211213121213121121312112131212131211213121312112131212131
31211213121213121121312131211213121213121121312112131212131211213121312112131212131
If we apply the valuation map to each of these, we obtain points in the complex plane. If we start with a much longer approximation to the fixed word and perform the process for many points in the orbit, we get a good approximation to the Rauzy fractal. We even obtain a decomposition of the Rauzy fractal by examining those terms in the orbit starting with 1, 2, or 3.
That is exactly the process I used to generate the image of the fractal.
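The computation is compact in code (a sketch; it plots the closely related prefix points $E(U_m)$ for prefixes $U_m$ of the fixed word, which fill the same fractal):

```python
# Sketch: Rauzy fractal points via the valuation map E, for 1->12, 2->13, 3->1.
import numpy as np

sub = {'1': '12', '2': '13', '3': '1'}
word = '1'
for _ in range(14):                        # iterate the substitution
    word = ''.join(sub[c] for c in word)

M = np.array([[1, 1, 0], [1, 0, 1], [1, 0, 0]], dtype=float)
vals, vecs = np.linalg.eig(M)
omega = vecs[:, np.argmin(np.abs(vals))]   # eigenvector for a contracting eigenvalue

counts = np.zeros(3)
pts = []
for c in word:
    counts[int(c) - 1] += 1                # running letter counts |U|_i
    pts.append(counts @ omega)             # E(U) = sum_i |U|_i * omega_i
pts = np.array(pts)
print(len(pts), np.abs(pts).max())         # the points stay in a bounded set
```

Scatter-plotting the real and imaginary parts of `pts`, colored by the next letter of the word, reproduces the fractal together with its three-piece decomposition. |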
Show $b^n > n$ from first principles | If $b > 1$ you can write $b = 1 + x$ with $x > 0$, and since every term in the binomial expansion of $(1+x)^n$ is positive, $$b^n = (1+x)^n \ge \frac{n(n-1)}{2}x^2.$$ Thus $$\frac{b^n}{n} \ge \frac{(n-1)x^2}{2} \ge 1$$ for all $n > \dfrac{2}{x^2} + 1.$ |
$R$ is a Noetherian ring if and only if both $I$ and $J$ are Noetherian $R$-modules, where $I,J$ are distinct maximal ideals | Hint:
If $I$ and $J$ are distinct maximal ideals, you have a surjective homomorphism
$$I\oplus J\longrightarrow I+J=R.$$ |
A doubt about lower nil radical while proving 2-primality of ring.( Baer-McCoy Radical) | I am guessing the intent is to prove that $RaR$ is nilpotent to show that it is contained in all prime ideals, and then use the characterization of $Nil_\ast(R)$ as the intersection of all prime ideals to conclude that $RaR\subseteq Nil_\ast(R)$.
If you haven't proven that equivalent definition of $Nil_\ast(R)$, it is highly recommended, and may even be carried out in the text already.
I'm also not totally sure why $(aRa)^n=\{0\}$ would actually help (is that really what's written?) Knowing that $aRaaRaaRaa\ldots =\{0\}$ does not seem to directly imply that $RaRaRa\ldots aR=\{0\}$.
But it is easy enough to show that $aRaRa\ldots aRa=\{0\}$ whenever there are $n$ or more $a$'s. Let $m\geq n$ and $r_i\in R$. Then
$$a^mr_1=0\implies a^{m-1}r_1a=0\\
a^{m-1}r_1ar_2=0\implies a^{m-2}r_1ar_2a=0\\
\ldots\\
a^2r_1ar_2a\ldots r_{m-1}=0\implies ar_1ar_2a\ldots r_{m-1}a=0$$
So $aRaR\ldots aRa=\{0\}$ whenever there are at least $n$ $a$'s, and that means $(RaR)^n=R(aRaRa\ldots aRa)R=\{0\}$. Thus $RaR$ is contained in every prime ideal, and therefore it is contained in $Nil_\ast(R)$.
Note: an element $a\in R$ is called strongly nilpotent if there exists a nonnegative integer $k$ such that $ar_1ar_2a\ldots ar_{k-1} a=0$ for all choices of $r_i\in R$. Just as the nilradical of a commutative ring is the set of all nilpotent elements, the lower nilradical $Nil_\ast(R)$ is the set of all strongly nilpotent elements.
Edit Poster has since changed $(aRa)^n$ to $(RaR)^n$. In that case, it looks like you might have overlooked the fact that for an ideal $A$ and prime ideal $P$, $A^n\subseteq P\implies A\subseteq P$. This is all you need to prove that if $RaR$ is nilpotent, it's contained in all prime ideals. It's analogous to the idea that nilpotent elements of commutative rings are contained in all prime ideals. |
Intuition about $P(\hat {x}|x)$ in rate distortion R(D) | It appears that you think that $\hat{X}$ should be an explicit function of $X$, say $\hat{X} = g(X)$ for some numerical function $g(\cdot)$. In that case, indeed, it makes more sense to optimize over $g(\cdot)$ since the distribution $p(\hat{x}\mid x)$ is, trivially, equal to $\delta(\hat{x}-g(x))$. However, note that optimizing over $p(\hat{x}|x)$ also covers the above case, i.e., the definition is more general.
Now, what you should understand is that the optimal encoder is not necessarily of the form $\hat{X} = g(X)$ as discussed above. In the general case, the optimal encoder operates as follows:
Map (encode) each "symbol" $X$ to a "symbol" $\hat{X}$, which is
obtained randomly from a distribution $p(\hat{x}\mid x)$.
Note that this does not mean that the encoder is random in the sense that, for the same $X$, it generates a random $\hat{X}$! It means that for the same $X$, it always generates the same $\hat{X}$ (known both by encoder and decoder), however, the value of $\hat{X}$ that is used is obtained by generating (offline) a single realization of a random variable.
See the "Gaussian source" example in the "Elements" as an example of a source where the above approach is indeed optimal.
This procedure is exactly parallel to the transmission capacity scenario, where the set of codewords ("codebook") is fixed during transmission, however, the codeword values are selected offline as a (single) realization of a random variable. |
Induction proof for an inequality with a recursive formula $a_n = a_{\lfloor{n/2}\rfloor} + a_{\lceil{n/2}\rceil} + 3n + 1$ | This is NOT a complete answer! (It is now, I believe, after the edits.)
I do not understand your recursion on $k$. $3k\cdot2^{k}+4\cdot2^{k}$ is strictly increasing in $k$ hence it suffices to show the inequality for the lowest $k$ such that $n\leq 2^{k}$ (for all $n$).
Let me reformulate your problem.
First, note that $a_{n}$ is strictly increasing. To see this, notice the $a_{n}$ sequence can be rewritten as $a_{1}=3$, $a_{2}=13$ and $a_{n}=a_{n-1}+3+a_{n/2}-a_{n/2-1}$ when $n$ is even and $a_{n}=a_{n-1}+3+a_{(n+1)/2}-a_{(n-1)/2}$ when $n$ is odd.
Second, given this, you need to prove the inequality only for values of $n$ and $k$ such that $n=2^{k}$. For $k\in\{0,1,2,\ldots\}$ this implies $n\in\{1,2,4,8,\ldots\}$.
Third, with $n=2^{k}$ and hence $k=\log{n}/\log{2}$, your inequality becomes $a_{n}\leq n\left(3\frac{\log{n}}{\log{2}}+4\right)-1$ for $n\in\{1,2,4,8,\ldots\}$. Or, alternatively, $a_{2^{k}}\leq2^k(3k+4)-1$ for $k\in\{0,1,2,\ldots\}$. My Mathematica in fact claims that the inequality holds with equality and I think this is the direction in which a complete argument should go.
Fourth, using the expressions for $a_{n}$ above, one can write $a_{2^{k}}$ for $k\in\{0,1,2,\ldots\}$ as a sequence $b_{k}$, with $b_{0}=3$, $b_{1}=13$ and $b_{k}=3b_{k-1}-2b_{k-2}+3\cdot2^{k-1}$. Solution to this recurrence relation, according to my Mathematica (and easily checked), is $b_{k}=3\cdot2^{k}k+2^{k+2}-1$.
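The claimed equality can be confirmed numerically (a sketch; it uses the base case $a_1=3$ stated above):

```python
# Sketch: check a_{2^k} = 2^k (3k + 4) - 1 for the recurrence.
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    if n == 1:
        return 3
    return a(n // 2) + a((n + 1) // 2) + 3 * n + 1

assert all(a(2 ** k) == 2 ** k * (3 * k + 4) - 1 for k in range(15))
print("equality holds for k = 0, ..., 14")
```

So the bound is attained with equality along the powers of two, matching the Mathematica observation. |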
points vs functions in prime spectra | There are many zeros, one for each term in the union. So for instance $f(x) = 0$ really means $f(x) = 0_x$ Where $0_x\in R/p_x$ and $0_x \neq 0_y$ when $x\neq y$. |
Find if this sequence has a limit | You do not need Riemann sums:
Note that $\sum_{k=0}^{n} \frac{(-1)^k}{2k+1}\left ( \frac{1}{3} \right )^k=\sqrt{3}\sum_{k=0}^{n} \frac{(-1)^k}{2k+1}\left ( \frac{1}{\sqrt{3}} \right )^{2k+1}$.
Now consider the series $\sum_{k=0}^{n }(-1)^{k}x^{2k}=\sum_{k=0}^{n }(-1)^{k}\left ( x^{2} \right )^{k}$.
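Numerically, the partial sums approach $\sqrt{3}\arctan(1/\sqrt{3})=\pi\sqrt{3}/6$, which is where the geometric-series hint leads (a quick sketch):

```python
# Sketch: partial sums of the series vs. the closed-form limit.
import math

s = sum((-1) ** k / (2 * k + 1) * (1 / 3) ** k for k in range(50))
print(s, math.pi * math.sqrt(3) / 6)   # both approximately 0.906900
```

Integrating the geometric series term by term turns it into the arctangent series used above. |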
Fast way to check if finite extension is unramified? | Observe that if $L/K/\mathbb Q_p$, and $\alpha \in \mathcal O_L$ then $\overline \alpha = \alpha \pmod {\mathfrak p}\in k_L$, where $\mathfrak p$ is the maximal ideal of $\mathcal O_L$ and $L$ has residue field $k_L$.
So if $k$ is the residue field of $K$, then $k(\overline \alpha)\subset k_L$, although you won't have equality in general.
This is sometimes useful. For example, we can see that $\mathbb Q_5(\sqrt 2)/\mathbb Q_5$ is unramified because $X^2-2$ is irreducible in $\mathbb F_5$, and hence $[k:\mathbb F_5]\ge 2$; since $[\mathbb Q_5(\sqrt 2) :\mathbb Q_5] = 2$, it follows that $[k:\mathbb F_5] = 2$, and hence the extension is unramified.
In your case, this won't be of much use, as $X^2 -5$ is reducible over $\mathbb F_5$. In fact, your extension is ramified, which can be seen in two ways:
As Jyrki pointed out, $X^2-5$ is Eisenstein, so the extension must be totally ramified.
Alternatively, you could observe that the unique prime ideal $(5) \subset \mathbb Z_5$ factorises as $(\sqrt 5)^2$ in $\mathcal O_{\mathbb Q_5(\sqrt 5)}$. |
Varying definitions of symmetric and selfadjoint operators | Ok, let me try to sort this out.
There is no disagreement between the definitions you quote. However, your initial paraphrases of those definitions are not accurate.
You wrote:
For example Lax, in his "Functional Analysis" book, says on pp. 354 that a symmetric operator $M:H\rightarrow H$ on a Hilbert space $H$ is one that fulfills [A] $\left< Mx,y \right>=\left<x,My \right>$ for all $x,y\in H$ and on pp. 377 defines a selfadjoint operator to be an operator $M:D\subseteq H\rightarrow H$ with a dense subset $D$, such that [B] $\left<Mx,y \right>=\left<x,My \right>$ for all $x,y\in D$.
I have added the marks [A], [B] for reference.
I say that your [A] is what everyone calls "symmetric and bounded", your [B] is what everyone calls "symmetric", and that neither of them is what anyone calls "self-adjoint".
Your definition [A] does match Lax's first definition; however, in that definition he is defining what it means for a bounded operator to be symmetric. For a bounded operator defined on all of $H$, [A] and [B] are of course equivalent.
Your definition [B] does not match Lax's second definition; it is completely different. The key point is the introduction of the new subspace $D^*$ which in general may be different from $D$; Lax's requirement that $D = D^*$ is crucial and by no means trivial. Lax's second definition agrees with every other definition of "self-adjoint" (for unbounded operators) that I have ever seen.
Kreyszig's first definition matches your definition [B], which as I claimed is what people call "symmetric".
Kreyszig's second definition matches neither your [A] nor [B], but if you investigate his definition of $T^*$, I claim you will find it equivalent to Lax's second definition. It is again the standard definition of "self-adjoint".
Edit. I'll address your added comments.
Let me first mention that when working with unbounded operators, one has to keep in mind that an unbounded operator is really a pair $(A,D)$ of an operator and its domain: a linear subspace $D \subset H$ and a linear map $A: D \to H$. In particular, for two operators to be equal, they have to have the same domain. The fact that people often just use the word "operator" and talk mostly about $A$ instead of $D$ tends to obscure this essential point. For the rest of this answer, I'll write the operator and its domain explicitly.
So suppose $(A,D)$ is an unbounded operator and $D$ is dense (people say "$A$ is densely defined"). As you say, we define a new operator $(A^*, D^*)$, called the adjoint of $(A,D)$, as in Lax: $v \in D^*$ iff there is a vector $w_v \in H$ such that for every $u \in D$ we have $\langle Au, v \rangle = \langle u, w_v \rangle$. For each $v \in D^*$, if $w_v$ exists then it is unique, and we define $A^*v := w_v$. We say $(A,D)$ is self-adjoint if $(A,D) = (A^*, D^*)$, i.e. $D=D^*$ and $A=A^*$.
It is true that we have the relationship
$$\langle Au, v \rangle = \langle u, A^* v \rangle, \quad \forall u \in D, v \in D^* \quad \quad (\star)$$
But I want to emphasize that $(A^*, D^*)$ is not just some operator satisfying $(\star)$; it is the specific operator defined in the previous paragraph. (In some sense, it is the operator satisfying $(\star)$ and having largest possible domain.)
Then, as you have argued, if $(A,D)$ is self-adjoint, then in particular [B] holds: we have $\langle Au, v \rangle = \langle u, A v \rangle$ for every $u,v \in D$. That is to say, if $A$ is self-adjoint then it is symmetric.
The converse is false: a symmetric operator need not be self-adjoint. Here is a typical counterexample.
Let $H = L^2([0,1])$, and let $D = C^2_c((0,1))$. That is, $D$ consists of $C^2$ functions having compact support contained in $(0,1)$. In particular, if $f \in D$ then $f$ and all its derivatives vanish at $0$ and $1$. For $f \in D$, let $Af = f''$ be the second derivative of $f$.
Recall from calculus that for any $f,g \in C^2([0,1])$ we have, by integrating by parts twice:
$$\int_0^1 f''(x) g(x)\,dx = f'(1) g(1) - f'(0) g(0) - f(1)g'(1) + f(0)g'(0) + \int_0^1 f(x) g''(x)\,dx.$$
If $f,g \in D$, then all the boundary terms vanish and we have $\int_0^1 f''(x) g(x)\,dx = \int_0^1 f(x) g''(x)\,dx$. That is, for all $f,g \in D$, $\langle Af, g \rangle = \langle f, Ag \rangle$. So $(A,D)$ is symmetric.
But I claim that $(A,D)$ is not self-adjoint; in particular, $D^*$ is a proper superset of $D$. Indeed, we can see that if $f \in D$ and $g \in C^2([0,1])$, then again all boundary terms vanish and $\int_0^1 f''(x) g(x)\,dx = \int_0^1 f(x) g''(x)\,dx$, i.e. $\langle Af, g \rangle = \langle f, g'' \rangle$. So $g \in D^*$ and $A^* g = g''$. Thus $C^2([0,1]) \subset D^*$: we see that $D^*$ contains functions which do not vanish at 0 and 1 and hence are not in $D$. (In fact, $D^*$ is the Sobolev space $H^2([0,1])$, though it takes a little more work to show this.) So $(A,D) \ne (A^*, D^*)$.
Moreover, $(A^*, D^*)$ is not symmetric. Take for example $f(x) = x$ and $g(x) = x^2$. Since $f,g$ are $C^2$ they are contained in $D^*$ and $A^*f = f'' = 0$, $A^*g = g'' = 2$. Then you can check that $\langle A^* f, g \rangle = 0$ while $\langle f, A^*g \rangle = 1$.
(Actually, you can show that if $(A,D)$ and $(A^*, D^*)$ are both symmetric, then $(A^*, D^*)$ is self-adjoint, and in fact $(A^*, D^*)$ equals the closure of $(A,D)$. In this case $(A,D)$ is said to be essentially self-adjoint.) |
Showing the range of Fourier transform on $L^1(\mathbb{R})$ is in $L^1(\mathbb{R})$ for a particular scenario | You could use the heat kernel. For example, the following holds for all $f\in L^{1}(\mathbb{R})$:
$$
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(s)e^{-s^2t}e^{isx}ds =
\frac{1}{\sqrt{4\pi t}}\int_{-\infty}^{\infty}f(y)e^{-(x-y)^{2}/4t}dy.
$$
Under your conditions, $\hat{f} \ge 0$ and $f$ is continuous at $0$. Setting $x=0$ gives
$$
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(s)e^{-s^{2}t}ds
=\frac{1}{\sqrt{4\pi t}}\int_{-\infty}^{\infty}f(y)e^{-y^{2}/4t}dy
$$
Now, letting $t\downarrow 0$, the right side converges to $f(0)$ because $f$ is continuous at $0$. By the monotone convergence theorem,
$$
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat{f}(s)ds = f(0).
$$
The left side is finite because the limit on the right exists. |
Property of a set of a positive Lebesgue measure | Define $A=\{(x,y): x<0,y<0, x-2y <0\}.$ This is the open region in the third quadrant between the $x$-axis and the line $y=x/2.$ Note that if $D_r$ is the open disc centered at $(0,0)$ of radius $r,$ then
$$\frac{m(A\cap D_r)}{m(D_r)} = \frac{\arctan (1/2)}{2\pi}$$
for all $r>0.$ Here $m$ is Lebesgue measure on $\mathbb {R}^2.$
Assume $E\subset \mathbb {R}^2, m(E)>0.$ All we seek is an $(a_1,a_2) \in E$ such that $[(a_1,a_2) + A]\cap E$ is nonempty. In fact any $(a_1,a_2) \in E$ that is a point of (symmetric) density of $E$ will work. To see this, note that if $(a_1,a_2)$ is a point of density of $E$ and the above set is empty, then
$$\frac{m([(a_1,a_2) + D_r]\cap E)}{m(D_r)} \le \frac{2\pi - \arctan (1/2)}{2\pi}$$
for all $r.$ Thus $(a_1,a_2)$ can't be a point of density, contradiction. Thus for all points of density $(a_1,a_2)$ in $E,$ we can find $(b_1,b_2)\in E$ to give us the conclusion. By Lebesgue of course, a.e. $(a_1,a_2) \in E$ is a point of density, so we not only have one instance, but an embarrassment of riches (as often happens in the Lebesgue setting). |
Concerning a Cyclic Galois Group | Let us assume $\;\Bbb Q(i)\subset \Bbb K\;$ is such that $\;\Bbb K/\Bbb Q\;$ is a Galois cyclic extension, and thus $\;Gal(\Bbb K/\Bbb Q)=\langle\sigma\rangle\;$.
Now, since $\;\sigma (q)=q\;\;\;\forall\,q\in\Bbb Q\;$ ,we have that
$$\sigma(i)=i\iff \sigma=\text{Id.}_{\Bbb Q(i)}$$
But this can't be, by the Fundamental Theorem of Galois Theory, since the fixed field of $\;\sigma\;$ is $\;\Bbb Q\;$ ... |
Algebraic structure of the roots of a complex number | Given any two roots, $z_1,z_2$ you have $$\left(\frac{z_1}{z_2}\right)^n =\frac ww=1.$$ So, once you have one solution, the other $n$ can be expressed as the product of that solution and some solution to $y^n=1.$
You still need to find one root of $z^n=w,$ but I don’t think there is an algebraic solution there. |
Real Number Continuum | Intuitively speaking, you can describe a bijection as follows. Warning: some cheating is involved.
First, identify a real $x$ in $[0,1]$ with its expansion base $2$, $x = 0.x_1x_2\dots$. For instance, $0 = 0.000\dots$, $1/2 = 0.100\dots$, $1 = 0.111\dots$, and so on. (It's not quite honest what I did, because some numbers have more than one binary expansion - but trust me that it won't change much).
Next, a sequence $x_1,x_2,\dots$ of $0$s and $1$s can be identified with a set of natural numbers. Simply look at the set $A = \{ n \in \mathbb{N} \ | \ x_n = 1 \}$. You can convince yourself that this correspondence is indeed a bijection between all sequences and all sets.
Finally, combining the two steps, you find a bijection between $[0,1]$ and the set of all subsets of $\mathbb{N}$. If you want to replace $[0,1]$ by $\mathbb{R}$, just find any bijection between the two (hint: think of $\tan x$). |
A difficulty in understanding The implementation level description of a multi tape TM. | To put multiple tapes on a single unified tape, write the following on the unified tape:
List the current contents of the tapes in order and put a dividing symbol $\mathtt{\#}$ between them so you know where the contents of each tape begin and end.
Each of the tapes has its own TM head, so use a marker decoration $\dot{}$ to mark the cell where the TM's head is supposed to be on that tape.
Now you can simulate moving the head left and right on each of the tapes separately (just erase the "head marking" dot and move it to the left or right.)
The virtual tapes are supposed to seem infinite, even though you've only allocated finite space to them with $\mathtt{\#}$ divider symbols in between. If you ever move past the end of the allocated space (i.e. you reach a divider symbol), you can always create some more space: to make more space on the right of the current position, just shift all those symbols one step rightward. |
How to turn this matrix to Jordan normal form? | Let's follow the algorithm described here.
The characteristic polynomial of $A$ is
$$
\chi_A(t)=-(t+1)^3
$$
so the only eigenvalue of $A$ is $\lambda=-1$ with algebraic multiplicity $m=3$. Note that
\begin{align*}
A+I&=\begin{bmatrix}4&0&8\\ 3&0&6\\-2&0&-4\end{bmatrix} &
(A+I)^2 &= \begin{bmatrix}0&0&0\\0&0&0\\0&0&0\end{bmatrix}
\end{align*}
so
\begin{align*}
\dim\operatorname{null}(A+I) &= 2 & \dim\operatorname{null}(A+I)^2&=3
\end{align*}
We then compute the numbers
\begin{align*}
d_1 &= \dim\operatorname{null}(A+I) & d_2 &= \dim\operatorname{null}(A+I)^2-\dim\operatorname{null}(A+I) \\
&= 2 & &= 3-2 \\
&&&= 1
\end{align*}
so we must fill the boxes
$$
\begin{matrix}
\Box & \Box \\
\Box
\end{matrix}
$$
with vectors. Note that $u=\begin{bmatrix}1&0&0\end{bmatrix}^\top$ satisfies $u\in\operatorname{null}(A+I)^2$ but $u\notin\operatorname{null}(A+I)$. Put $v=(A+I)u=\begin{bmatrix}4&3&-2\end{bmatrix}^\top$ so the diagram takes the form
$$
\begin{matrix}
v & \Box \\
u &
\end{matrix}
$$
Since $w=\begin{bmatrix} 0&1&0\end{bmatrix}^\top$ lies in $\operatorname{null}(A+I)$ and is linearly independent from $v$, we complete the Jordan basis $\{u,v,w\}$.
Finally, put
$$
P=
\begin{bmatrix}
4&1&0\\ 3&0&1\\ -2&0&0
\end{bmatrix}
$$
and note that the Jordan form is
$$
J=P^{-1}AP=\begin{bmatrix}-1&1&0\\0&-1&0\\0&0&-1\end{bmatrix}
$$
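A quick check (a sketch with SymPy; $A$ itself is reconstructed from the matrix $A+I$ displayed above):

```python
# Sketch: verify that P^{-1} A P is the Jordan matrix computed by hand.
import sympy as sp

A = sp.Matrix([[3, 0, 8], [3, -1, 6], [-2, 0, -5]])   # since A + I is as above
P = sp.Matrix([[4, 1, 0], [3, 0, 1], [-2, 0, 0]])
print(P.inv() * A * P)   # Matrix([[-1, 1, 0], [0, -1, 0], [0, 0, -1]])
```

SymPy's built-in `A.jordan_form()` returns an equivalent pair $(P, J)$, possibly with the basis and blocks ordered differently. |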
Parametric Curves, Lines and Projections | $\newcommand{\Cv}{\mathcal{C}}\DeclareMathOperator{\Atan}{atan2}$In the diagram, and as clarified in the comments, the curve $\Cv$ is a polar graph $r = f(\theta)$, for some positive function $f$ to be determined from the component functions $(x, y)$ of the parametrization. In terms of $f$, the desired distance function is
$$
d_{\theta} = f(\theta) + f(\theta + \pi).
$$
To find $f$, write $\theta = \Theta(t)$ and
\begin{alignat*}{2}
x(t) &= r\cos\theta &&= f(\theta) \cos\theta, \\
y(t) &= r\sin\theta &&= f(\theta) \sin\theta,
\end{alignat*}
so that
$$
f(\theta) = \sqrt{x(t)^{2} + y(t)^{2}}
\tag{1}
$$
and
$$
\Theta'(t) = \frac{d\theta}{dt} = \frac{x(t) y'(t) - y(t) x'(t)}{x(t)^{2} + y(t)^{2}}.
\tag{2}
$$
Under the geometric assumptions on $\Cv$, the preceding expression is non-vanishing, and without loss of generality (i.e., assuming $\Cv$ is traced counterclockwise) may be taken to be positive. If $[a, b]$ is the domain of the given parametrization of $\Cv$, then
$$
\Theta(t) = \int_{a}^{t} \frac{x(\tau) y'(\tau) - y(\tau) x'(\tau)}{x(\tau)^{2} + y(\tau)^{2}}\, d\tau
\tag{3}
$$
defines an invertible function $\Theta:[a, b] \to [0, 2\pi]$. The function $f$ is given by (1):
$$
f(\theta) = \sqrt{x(\Theta^{-1}(\theta))^{2} + y(\Theta^{-1}(\theta))^{2}}.
$$
While this analytic expression is exact, it entails inverting the definite integral (3), and may therefore be inconvenient for practical use. If instead the goal is to calculate $d_{\theta}$ for finitely many specified values of $\theta$, it suffices, for a given $\theta$, to find (using Newton's method, say) the values $t_{1}$ and $t_{2}$ such that
$$
\Atan(y(t_{1}), x(t_{1})) = \theta,\qquad
\Atan(y(t_{2}), x(t_{2})) = \theta + \pi,
$$
so that
$$
d_{\theta} = \sqrt{x(t_{1})^{2} + y(t_{1})^{2}} + \sqrt{x(t_{2})^{2} + y(t_{2})^{2}}.
$$
As usual, $\Atan(y, x)$ denotes the branch of polar angle of the ray from the origin through $(x, y)$ and taking values in $(-\pi, \pi)$. |
How many numbers between 1 and 150 contain the digit '1' in their base 10 representation? | Mathematical "law" is not well defined in this context. I guess you are after a formula or an algorithm (other than listing all numbers) that gives us the result for any arbitrary integer $n \ge 0$. Yes, we can find one, and I'll describe it in the end. Here's my reasoning and how I got there:
Start counting at $0$.
We notice that every ten numbers we get one number that has a '1' in it, except for one particular/special set of ten numbers that has all 10 numbers with a '1' (these are the numbers $10$ to $19$). In other words from $0$ to $9$ we have 1 number with a '1' in it, from $10$ to $19$ all numbers have '1', from $20$ to $29$ we have one number with '1' in it and so on.
So for the first one hundred numbers ($0$-$99$) we get $9\times 1 + 10= 19$ numbers that have '1' in them. And this happens for every one hundred numbers, except for one particular/special set of 100 numbers that all of them have a '1' (these are numbers $100$ to $199$).
We can do the same process for the first 1000 numbers, the first 10,000 numbers and so on. And we can see the general pattern: the number of '1' depends on what we have found in the previous 'level'. So in general, for every $10^r$ numbers, we get $S_r$ numbers that have '1' in them. $S_r = 9\times S_{r-1} + 10^{r-1}, \text{with } S_0=0$. Except one particular/special set of $10^r$ numbers where all of them have a '1' in them.
Here are the values for the first few $S_r$
$$\begin{align}
S_1 &= 1 \\
S_2 &= 9\times 1 + 10 = 19 \\
S_3 &= 9\times 19 + 100 = 271 \\
S_4 &= 9\times 271 + 1000 = 3439
\end{align}$$
There are ways to find a closed formula from a recursive formula, but I do not find it necessary to complicate things at this point, since we can quickly and easily calculate any $S_r$.
We notice that every number $n$ can be broken into units, tens, hundreds, thousands, etc. For example, $251$ can be broken into 2 hundreds, 5 tens, and one unit. Two hundreds means that we have two groups of $100(=10^2)$ numbers. How many numbers in these two groups have '1' in them? We know that at least one group has $S_2$ numbers that have '1' in them (it's a 'normal' group). The other group has either another $S_2$ (it's a normal group) or another $100$ numbers (it's a special group) that have '1' in them. It's easy to see that if the hundreds are above 1 then the "special" hundreds are always included. So we should have $100 + S_2$ for our first $200$ numbers. Then we can move to the tens. We have 5 of them. Similarly, since we are above 1 ten, we have the special group and 4 regular groups, meaning we have another $4\times S_1 + 10$ numbers that have '1' in them. Finally we have 1 unit, which means we have yet another number with '1' in it. Final result is 134.
There is a tricky situation when we have just one hundred, or just one ten, just one of any group. For example, think about $n=1033$. We realise that the first thousand from 0-999 has $S_3$ numbers with '1' in them, and all the rest, 1000-1033 (that is 34 numbers), all have '1' in them.
Here's the general algorithm:
Let's have an integer $n$ that is represented with $k$ digits in base
10 as $(d_{k-1}\cdots d_1d_0)_{10}$.
Starting from the most significant digit and an intermediate result of
zero, we apply these rules:
If digit $d_i=0$ we move on to the next digit.
If digit $d_i = 1$ then we have $S_i + (d_{i-1}\cdots d_0)_{10} +1$ numbers that have '1' in them. $\text{Final answer} = S_i + (d_{i-1}\cdots d_0)_{10} +1 + \text{intermediate result}$. Stop.
If $d_i > 1$ then $(d_i-1)S_i + 10^i$ numbers have '1' in them. $\text{intermediate result} = (d_i-1)S_i + 10^i + \text{intermediate result}$. We continue with the next digit in the sequence.
If there are no more digits we stop, and the intermediate result is our final answer.
Let's see some examples:
Take the number $n=150$, which means $d_2=1, d_1=5, d_0 = 0$
$d_2=1$ so $\text{Final answer} = S_2 + (50)_{10} +1 + \text{intermediate result} = 19 +50 + 1 + 0 = 70$
Let's see a more complicated example $n=4561$.
$d_3=4$, so intermediate result $= (d_3-1)S_3 + 10^3 + \text{intermediate result} = (4-1) \times 271 + 1000 + 0 = 1813$. Moving to the next digit.
$d_2=5$, so intermediate result $= (d_2-1)S_2 + 10^2 + \text{intermediate result} = (5-1) \times 19 + 100 + 1813 = 1989$. Moving to the next digit.
$d_1=6$, so intermediate result $= (d_1-1)S_1 + 10^1 + \text{intermediate result} = (6-1) \times 1 + 10 + 1989 = 2004$. Moving to the next digit.
$d_0=1$, so Final answer $= S_0 + 0 +1 + \text{intermediate result} = 0 + 0 + 1 + 2004 = 2005$ (there are no digits below the units, so the middle term is $0$).
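Here is the algorithm as code, cross-checked against brute force (a sketch):

```python
# Sketch: count how many of 1..n contain the digit '1', via the S_r recursion.
def S(r):
    s = 0
    for i in range(1, r + 1):
        s = 9 * s + 10 ** (i - 1)   # S_r = 9 S_{r-1} + 10^{r-1}, S_0 = 0
    return s

def count_ones(n):
    digits = str(n)
    k, total = len(digits), 0
    for pos, ch in enumerate(digits):
        i, d = k - pos - 1, int(ch)
        if d == 0:
            continue
        if d == 1:
            rest = int(digits[pos + 1:]) if pos + 1 < k else 0
            return total + S(i) + rest + 1
        total += (d - 1) * S(i) + 10 ** i
    return total

brute = lambda n: sum('1' in str(m) for m in range(1, n + 1))
print(count_ones(150), brute(150))      # 70 70
print(count_ones(4561), brute(4561))    # 2005 2005
```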
I hope this answers your question. Let me know in the comments if you would like clarifications on any part. |
Why am I allowed to remove $\ln$ from both sides of an equation? | Equations don't equal anything. Equations are sentences. You can replace a sentence with a different sentence as long as it is also true. 1) $27x = 54y$ is a sentence. 2) $x = 2y$ is a different sentence. The first sentence is only true if the second one is too, and vice versa. But they are different sentences. We can replace one with the other because one follows from the other. We don't care what $27x$ equals. We care what $x$ equals.
If $a = b$ then $anythinginvolving(a) = anythinginvolving(b)$. So if $a = b$ then $e^a = e^b$. Likewise $a + 27 = b+ 27$ or $a^2 - \sqrt{3a} + 5 = b^2 -\sqrt{3b} + 5$. If $a = b$ we can do anything to either side.
So if $\ln (x+2) = \ln x^2$ we can do $e^{\ln(x+2)} = e^{\ln x^2}$. Why not? If we wanted to, we could say $\ln(x+2)$ fried in batter and stuffed inside $\sqrt{\text{a tomato}}=\ln(x^2)$ fried in batter and stuffed inside $\sqrt{\text{a tomato}}$ for all anyone cares.
But $e^{\ln(x+2)}= e^{\ln x^2}$ is a smart thing to care about. (Whereas $\ln(x+2)$ fried in batter and stuffed inside $\sqrt{\text{a tomato}}=\ln(x^2)$ fried in batter and stuffed inside $\sqrt{\text{a tomato}}$ is an utterly stupid thing to care about.)
BY DEFINITION: $e^{\ln a} = a$, ALWAYS. And for real numbers $\ln e^a = a$. $e^x$ and $\ln x$ are inverses and that means they can "undo" each other. In this way they are just like multiplication and division. If you have $3(x+2) = 3x^2$ you can "undo" the "times $3$" by dividing both sides by $3$, because division "undoes" multiplication.
So $\ln(x+2) = \ln x^2$ is a true sentence. We don't care.
That means $e^{\ln(x+2)} = e^{\ln x^2}$ is a different true sentence.
But this different true sentence means the same thing as:
$x+2 = e^{\ln(x+2)} = e^{\ln x^2} = x^2$ so now we have yet another true sentence:
$x + 2= x^2$. And we care because this takes us one step closer to solving for $x$.
====
Postscript:
If $a = b$ we can do anything to either side and get a true statement.
But be careful. Not everything we do can be undone.
For example: If $x = 7$ then $3x + 5 = 3\cdot 7 + 5 = 26$. And $0\times (3x+5) = 0\times 26$ and so $0 = 0$. These are all true sentences. But they are not equivalent sentences. Multiplying by $0$ can not be "undone". But multiplying by $3$ can be undone (by dividing by $3$) and adding $5$ can be undone (by subtracting $5$).
Raising $e$ to a power can be undone by taking the logarithm. And taking the logarithm can be undone by raising $e$ to the power, but only if the number was positive to begin with. |
When is this norm defined? | Let $n=2$. If $k=1$, let $\|.\|$ be the corresponding defined norm.
If $a_0=1, a_1=-1$, then we have $\|x^2-1\|=0$, even though $x^2-1 \ne 0$.
In fact, if $n=2$ and $k=1$, then we have $\|(x-a_0)(x-a_1)\|=0$.
In general, if $k+1 \le n$, then we have $\|\prod_{i=0}^k (x-a_i) \|=0$ and $\prod_{i=0}^k (x-a_i)$ is of degree $k+1$ and non-zero.
Hence we need $k \ge n$ due to the positive definite requirement.
Now to verify sub-additivity,
\begin{align}
\|f + g\| &= \max_{0 \le i \le k} \{|(f+g)(a_i)| \} \le \max_{0 \le i \le k} \{|f(a_i)| + |g(a_i)| \} \le \|f\| + \|g\|
\end{align}
Verification of absolutely homogeneous is direct.
As an exercise, you might like to work out when it is a seminorm. |
Random variable problems | Your proof looks correct to me. You could clean it up a bit in places if you want, but the changes are mostly aesthetic.
For example, your argument in part (a) could be rewritten: Since $\mathscr{F}$ doesn't contain singleton sets, we have $\{c\} = H^{-1}(\{1\}) \not\in \mathscr{F}$. So $H$ isn't a random variable.
In part (b), your argument is again correct. It might be worth noting that, if you want to check whether $X$ is a random variable, it suffices to check that $X^{-1}(B)$ is measurable (i.e. in $\mathscr{F}$) for all $B$ in some generating set for $\mathscr{B}(\mathbb{R}) \cap X(\Omega)$ instead of checking all possible $B$ (as you've done). So, in this case, since $\mathscr{B}(\mathbb{R}) \cap X(\Omega) = \sigma(\{-1\}, \{0\}, \{1\}) \cap X(\Omega)$, it suffices to check that $X^{-1}(\{-1\}), X^{-1}(\{0\}),$ and $X^{-1}(\{1\})$ are measurable instead of checking all combinations (as you've done).
As a side note, you probably mean $B_1^c \cap B_2$ instead of $B_1^cB_2$? I haven't seen the latter notation used. |
How to characterize a Backward Euler Method | You're quite right.
An explicit method is given by $y_{n+1} = F(t_n,y_n)$, with no mention of $y_{n+1}$ on the RHS, and you can compute $y_{n+1}$ directly.
An implicit method is given by $F(t_n,y_n,y_{n+1})=0$, and you have to solve an equation to find $y_{n+1}$.
Both these definitions are for one-step methods: $y_{n+1}$ depends directly or indirectly just on $t_n$ and $y_n$, not on any $y_k$ for $k<n$.
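A minimal side-by-side comparison (a sketch; the test equation $y'=-5y$, $y(0)=1$ and step $h=0.5$ are made-up choices, with $h$ deliberately outside the explicit method's stability region):

```python
# Sketch: forward (explicit) vs. backward (implicit) Euler on y' = lam * y.
lam, h, steps = -5.0, 0.5, 10
y_exp = y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + h * lam * y_exp   # explicit: y_{n+1} computed directly
    y_imp = y_imp / (1 - h * lam)     # implicit: solve y_{n+1} = y_n + h*lam*y_{n+1}
print(y_exp, y_imp)   # explicit oscillates and blows up; implicit decays to 0
```

For this linear equation the implicit step can be solved in closed form; in general, each backward Euler step requires a root-finder. |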
Find $ \int {dt\over 2t+1}$ | You've got the right idea about the "form" of the integral, but recall, we need to account for the chain-rule before integrating, by u-substitution for example.
In general, when you have an integral of the form $\int \dfrac{f'(t)}{f(t)} \,dt$
your result will indeed be: $$\int \dfrac{f'(t)}{f(t)} \,dt = \ln|f(t)| + C$$
In your case, we have $f(t) = u =2t + 1$. Now, we need $f'(t):\; du = 2 dt\iff \frac 12 du = dt$, so we need to obtain the form $$\int \frac{f'(t)}{f(t)}\,dt$$ which we can obtain directly, or through substitution:
$$\int \frac{dt}{2t + 1} = \frac 12 \int \underbrace{\frac{2\,dt}{2t + 1}}_{\dfrac{f'(t)}{f(t)}\, dt} \quad \overset{\text{substitute}}{=} \quad \frac 12 \int \frac{du}{u} \quad = \quad \frac 12\ln|u| + C = \cdots$$
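Substituting back $u = 2t+1$ gives $\frac 12\ln|2t+1| + C$.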
If $u \in W^{1,p}(U)$, prove that $Du=0$ a.e. on the set $\{u=0\}$. | This exercise is the result of Chain rule:
Recall that if $f\in C^1(\mathbb R)$ with $f'\in L^\infty(\mathbb R)$ then we have $f(u)\in W^{1,p}(U)$ if $U$ is bounded, and we have $\partial_i f(u)=f'(u)\partial_i u$ in weak sense.
Hence, for part $(a)$, the function $f(x)=|x|$ has derivative $-1$ or $1$ and is (piecewise) $C^1$, and hence the chain rule applies.
Part $(b)$ works in the same way. The function $F_\epsilon$ satisfies the conditions of the chain rule, and hence $F_\epsilon(u)\in W^{1,p}(U)$ and the chain rule applies. What you need is to push $\epsilon\to 0$; notice that $F_\epsilon(x)\to x^+$ as $\epsilon\to 0$.
For part $(c)$, notice that $u^+=u^-=0$ on the set $\{u\equiv0\}$. And from part $(b)$, you actually have
$$Du^+=\begin{cases}
Du & x\in \operatorname{spt}(u^+)\\
0 & x\notin \operatorname{spt}(u^+)
\end{cases}$$
Differences between REF and RREF | REF is enough. RREF will give you the solutions if any, but REF is enough since the necessary and sufficient condition to have no solution is that the rank of the augmented matrix is greater than the rank of the matrix of the linear system. |
Find extra work done by Bob | Each tile moves independently. For Type 1 moves, Bob has to move each tile that doesn't start out in the right place once; at a maximum he will make $N^2-1$ such moves. For Type 2 moves, each tile has to be moved a number of times equal to the Manhattan distance from its starting spot to its final spot, which ranges from $0$ to $2N-2$. To get the average, note that we can consider the two axes independent. In the continuous approximation, the average distance is the average distance between two points in a unit square, $\frac 1{15}(2+\sqrt 2 +5 \sinh^{-1}1) \approx 0.521405433$, times the size of the square, $N-1$, so on average Bob will make $2(N-1)\cdot\frac 1{15}(2+\sqrt 2 +5 \sinh^{-1}1)$ Type 2 moves.
Is there a formula for this integral $I(a,b)=\int_0^1 t^{-3/2}(1-t)^{-1/2}\exp\left(-\frac{a^2}{t}-\frac{b^2}{1-t} \right)dt$ | I will derive the following result using the convolution theorem for Laplace Transforms:
$$I(a,b) = \int_0^1 dt \, t^{-3/2} \, e^{-a^2/t} \, (1-t)^{-1/2} \, e^{-b^2/(1-t)} = \frac{\sqrt{\pi}}{|a|} e^{-(|a|+|b|)^2}$$
I assume $a$ and $b$ are $\gt 0$ for the derivation below, but you will see where the absolute values come from. It is very easy to check the correctness of this result with a few numerical examples in, say, Wolfram Alpha.
To begin, I refer you to the derivation of the following LT relation:
$$\int_0^{\infty} dt \, t^{-3/2} \, e^{-1/(4 t)}\, e^{-s t} = 2 \sqrt{\pi} \, e^{-\sqrt{s}}$$
We may rescale this to get
$$\int_0^{\infty} dt \, t^{-3/2} \, e^{-a^2/t}\, e^{-s t} = \frac{\sqrt{\pi}}{a} \, e^{-2 a\sqrt{s}}$$
Now the convolution theorem states that the convolution of the above
$$\int_0^1 dt \, t^{-3/2} \, e^{-a^2/t} \, (1-t)^{-3/2} \, e^{-b^2/(1-t)}$$
is equal to the inverse LT of the product of the individual LTs. This is easily expressed as follows:
$$\frac{1}{i 2 \pi} \int_{c-i \infty}^{c+i \infty} ds \, e^s \frac{\pi}{a b} e^{-2 (a+b) \sqrt{s}}$$
Note that this is being evaluated at $t=1$. And we of course know what this integral evaluates to:
$$\int_0^1 dt \, t^{-3/2} \, e^{-a^2/t} \, (1-t)^{-3/2} \, e^{-b^2/(1-t)} = \sqrt{\pi} \left (\frac{1}{a} + \frac{1}{b} \right ) e^{-(a+b)^2}$$
Of course, this is not the integral sought. But we may derive this integral by differentiating with respect to the parameter $b$:
$$\frac{\partial}{\partial b} \int_0^1 dt \, t^{-3/2} \, e^{-a^2/t} \, (1-t)^{-1/2} \, e^{-b^2/(1-t)} = -2 b \int_0^1 dt \, t^{-3/2} \, e^{-a^2/t} \, (1-t)^{-3/2} \, e^{-b^2/(1-t)} $$
which means we need to evaluate the following integral:
$$-2 \sqrt{\pi} \int db \, b\, \left (\frac{1}{a} + \frac{1}{b} \right )\, e^{-(a+b)^2} = -\frac{2 \sqrt{\pi}}{a} \int db\, (a+b) e^{-(a+b)^2} = \frac{\sqrt{\pi}}{a} e^{-(a+b)^2} + C$$
Using the fact that the sought-after integral goes to zero as $b \to \infty$, $C=0$ and we get
$$I(a,b) = \int_0^1 dt \, t^{-3/2} \, e^{-a^2/t} \, (1-t)^{-1/2} \, e^{-b^2/(1-t)} = \frac{\sqrt{\pi}}{a} e^{-(a+b)^2}$$
BONUS
Of course, I could have considered the convolution between two different functions:
$$f(t) = t^{-3/2} e^{-a^2/t}$$
$$g(t) = t^{-1/2} e^{-b^2/t}$$
with corresponding LTs
$$\hat{f}(s) = \frac{\sqrt{\pi}}{a} \, e^{-2 a\sqrt{s}}$$
$$\hat{g}(s) = \sqrt{\frac{\pi}{s}} \, e^{-2 b\sqrt{s}}$$
(I will not derive the latter LT right now.) The convolution is then
$$\frac{\pi}{a} \frac{1}{i 2 \pi} \int_{c-i \infty}^{c+i \infty} ds \, e^s \, s^{-1/2} e^{-2 (a+b) \sqrt{s}} = \frac{\sqrt{\pi}}{a} e^{-(a+b)^2}$$ |
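For the numerical check mentioned above, a minimal midpoint-rule sketch works (the integrand vanishes at both endpoints, so no special endpoint handling is needed); the test values $a=1$, $b=2$ are arbitrary choices of mine:
#include <cmath>
#include <cstdio>

// Midpoint-rule check of I(a,b) = sqrt(pi)/a * exp(-(a+b)^2) for a, b > 0.
// The integrand tends to 0 at both endpoints, so the open midpoint rule is safe.
int main() {
    const double pi = std::acos(-1.0);
    const double a = 1.0, b = 2.0;   // arbitrary test values
    const int n = 2000000;
    const double h = 1.0 / n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double t = (i + 0.5) * h;
        sum += std::pow(t, -1.5) * std::exp(-a * a / t)
             * std::pow(1.0 - t, -0.5) * std::exp(-b * b / (1.0 - t));
    }
    double numeric = sum * h;
    double closed  = std::sqrt(pi) / a * std::exp(-(a + b) * (a + b));
    std::printf("numeric = %.10g, closed form = %.10g\n", numeric, closed);
}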
Order of the product of two elements in an abelian group | Since $G$ is abelian, $(ab)^{mn}=a^{mn} b^{mn}=(a^m)^n (b^n)^m=1$. Thus
$o(ab)\mid mn$. So $mn=o(ab)s_1$ for some $s_1 \in \Bbb{Z}$.
On the other hand, $A:=\operatorname{lcm}(m,n)/\gcd(m,n)=mn/\gcd(m,n)^2$. Hence we have $o(ab)=\frac{mn}{s_1}=\frac{A \gcd(m,n)^2}{s_1}$. Put $s_2=\frac{\gcd(m,n)^2}{s_1}$. So $o(ab)=As_2$, i.e. $A\mid o(ab)$, as desired.
Are there arbitrarily long runs of consecutive integers n that are NOT of the form $n = p^k$ or $n = 2p^k$ for some $k>0$ and $p$ an odd prime? | There are $\pi(x)\sim x/\log x$ primes up to $x$ and $\pi(x/2)\sim x/2\log x$ even semiprimes up to $x$. The highest power up to $x$ is $\lfloor\log_2x\rfloor$ and so there are $\ll2\log_2x\sqrt x$ other numbers of the form $p^k$ or $2p^k$ up to $x$. As a result $P(n)\sim\frac{3}{2\log n}\to0$ and hence there are arbitrarily long runs of numbers without primitive roots.
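Not needed for the proof, but here is a brute-force sketch that marks every $n$ of the form $p^k$ or $2p^k$ ($p$ an odd prime) up to a limit and reports the longest unmarked run; the bound $10^6$ is arbitrary:
#include <cstdio>
#include <vector>

// Marks n <= LIMIT of the form p^k or 2*p^k (p an odd prime, k >= 1),
// then reports the longest run of consecutive unmarked integers.
int main() {
    const int LIMIT = 1000000;
    std::vector<bool> prime(LIMIT + 1, true);   // simple Eratosthenes sieve
    prime[0] = prime[1] = false;
    for (long long i = 2; i * i <= LIMIT; ++i)
        if (prime[i])
            for (long long j = i * i; j <= LIMIT; j += i) prime[j] = false;

    std::vector<bool> marked(LIMIT + 1, false);
    for (long long p = 3; p <= LIMIT; p += 2) {
        if (!prime[p]) continue;
        for (long long q = p; q <= LIMIT; q *= p) {   // q = p^k
            marked[q] = true;
            if (2 * q <= LIMIT) marked[2 * q] = true;
        }
    }
    int best = 0, cur = 0, last = 0;
    for (int n = 2; n <= LIMIT; ++n) {
        cur = marked[n] ? 0 : cur + 1;
        if (cur > best) { best = cur; last = n; }
    }
    std::printf("longest run up to %d: %d numbers ending at %d\n", LIMIT, best, last);
}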
How to evaluate the integral $\displaystyle\int \frac{x^{15}}{{(1+x^3)^{\frac{2}{5}}}} dx$ | $$\int\frac{x^{15}}{(1+x^3)^{2/5}}\,\mathrm dx$$
Let $u=1+x^3$, so that $x=(u-1)^{1/3}$ and $\mathrm dx=\frac13(u-1)^{-2/3}\,\mathrm du$. Then the integral becomes
$$\frac13\int\frac{(u-1)^5}{u^{2/5}}(u-1)^{-2/3}\,\mathrm du=\frac13\int(u-1)^{13/3}u^{-2/5}\,\mathrm du$$
which almost resembles the beta function. However, you mention in the comments that the original integral is taken over the set $x\in[0,\sqrt[4]2]$, so the latter integral is taken over $u\in[1,1+\sqrt[4]8]$ and has a value of
$$\frac13\int_1^{1+\sqrt[4]8}(u-1)^{13/3}u^{-2/5}\,\mathrm du=3\,{}_2F_1\left(\frac25,\frac{16}3;\frac{19}3;-\sqrt[4]8\right)\approx2.11381$$
(computation courtesy of Mathematica) where ${}_2F_1$ is the hypergeometric function. I wouldn't expect this to have an elementary form, so perhaps there is an error made by the author or your instructor. |
Isomorphism of inverse limits. | Someone might want to translate this into ring-theoretic terms, but to expand on Damien L's pos(e)t, let
$\mathcal P$ be the poset category with objects polynomials $f$ in $M^*$ and with morphisms $f_1 \to f_2$ iff $f_1\mid f_2$
$\mathcal Q$ be the poset category with objects pairs $(f,M')$ such that $f\in M'\in\mathcal S_R$ and with morphisms $(f_1,M')\to (f_2,M'')$ iff $f_1\mid f_2$ and $M'\subseteq M''$
(I.e. $\mathcal Q$ agrees with $B$.) Consider the diagrams $$X:\mathcal P^{\text{op}}\to \textbf{Ring}\text{ mapping }\left\{f_1\to f_2\right\}\overset{X}{\longmapsto} \left\{R/(f_1)\leftarrow R/(f_2)\right\}$$ $$Y:\mathcal Q^{\text{op}}\to \textbf{Ring}\text{ mapping }\left\{(f_1,M')\to (f_2,M'')\right\}\overset{Y}{\longmapsto} \left\{R/(f_1)\leftarrow R/(f_2)\right\}$$ where $R/(f_1)\leftarrow R/(f_2)$ is in both cases the obvious quotient morphism. By definition of $X$ and $Y$, $$\lim X=\varprojlim_{f \in M^*}R[q]/(f)=R[q]^M$$ $$\lim Y=\varprojlim_{f\in (M')^*,\ M' \in \mathcal{S}_R} R[q]/(f)=R[q]^S$$ Now consider the functor $F:\mathcal Q\to \mathcal P$ mapping $\left\{(f_1,M')\to (f_2,M'')\right\}\overset{F}{\longmapsto} \left\{f_1\to f_2\right\}$, and observe the natural isomorphism $$X\circ F^{\text{op}}\simeq Y$$ (They're the exact same map.) Furthermore $F$'s fullness and essential surjectivity imply the equivalence of the images of $X\circ F^\text{op}$ and $X$. (The two actually turn out to be the exact same.) It follows that $$\lim X\cong \lim X\circ F^\text{op}\cong \lim Y$$ As desired $\blacksquare$
Note that the "fullness and essential surjectivity" of $F$ follows from the fact that for any two $f_1,f_2\in M^*$ for which $f_1\mid f_2$, one may choose an $M'\in \mathcal S_R$ such that $f_1,f_2\in M'$. Some care is also required in justifying $$\varprojlim_{M' \in \mathcal{S}_R} \left( \varprojlim_{f \in (M')^*} R[q]/(f) \right)=\varprojlim_{f\in (M')^*,\ M' \in \mathcal{S}_R} R[q]/(f)$$ The main idea is to describe a natural bijection between cones over the former diagram and those over the latter. (Drawing a picture helps.) |
Covariance for Optimal Bayes Estimator using Gaussian distributions | Yeah, I assumed my covariance wrongly, since I forgot that I have to subtract the mean in the covariance formula, which was always $0$ for the previous distributions. Therefore, since $E[\beta | y] = (X^TX + I_d)^{-1}X^Ty = A y$ and $\beta = Ay + z$:
$$ Cov(\beta | y) = E[(\beta - E[\beta | y])(\beta - E[\beta | y])^T | y] = E[(Ay + z - Ay)(Ay + z - Ay)^T | y] = E[zz^T |y] = E[zz^T] = (X^TX + I_d)^{-1}$$ |
Fourier transform of a linear operator | This is not true for any linear operator. Consider for example the linear operator:
$$\mathcal{L}: u \mapsto a(x) \cdot u$$
where $a$ is any sufficiently regular function.
Then:
$$\widehat{\mathcal{L} u} = \widehat{a} \star \widehat{u} $$
and for a general $a$, $\widehat{\mathcal{L} u}_q$ does not depend only on $\widehat{u}_q$. |
What is the difference between these Calculus topics? | While not exactly a book, Paul's online notes provide material from algebra/pre-calc all the way up to Calculus III and differential equations.
You will also notice that under the "contents" tab, there are links to
Notes
Practice Problems
Assignment problems
I've heard of some of the books you've mentioned, and from what I can see, some of them are probably bad places to start, depending on your level of skill. Clearly, if the first chapter is too much, you should put the book down, wait until another day, or find another book.
You will also find that plenty of books will assume you have background knowledge. Multi-variable Calculus, for example, requires you to understand single-variable Calculus.
AFAIK, the general starter for how you should tackle your first Calculus course should be something like
Limits
Derivatives
Integrals
And applications should be sprinkled in between.
Also don't forget that if you think you're getting it but maybe one particular problem is stumping you, MSE is happy to try and help.
Central limit theorem, problem with distribution value | I assume that the rolls are independent. Let $X$ denote the sum of the rolled numbers after 1000 rolls. Then, indeed by the central limit theorem, the distribution of $(X-3500)/(1.71\sqrt{1000})$ will be approximately standard normal. All your calculations so far are correct. The last step missing is to evaluate the cdf at those points. First, the symmetry of the standard normal distribution implies $\phi(-x)=1-\phi(x)$ for $x\in\mathbb{R}$. Hence,
$$\phi\left(\frac{4000-3500} { 1.71\sqrt{1000}}\right)-\phi\left(\frac{3000-3500} {1.71\sqrt{1000}}\right)= \phi\left(\frac{500} {1.71\sqrt{1000}}\right)-\phi\left(\frac{-500} {1.71\sqrt{1000}}\right) \\ = 2\phi\left(\frac{500} {1.71\sqrt{1000}}\right)-1 = 2\phi(9.25)-1.$$
Unfortunately, the cdf of the normal distribution cannot be written down using elementary functions. So, we have to rely on numerical approximations, which are usually implemented in software packages (e.g. pnorm(x) in R) or written down as tables in statistics books. Note that 9.25 is more than 9 standard deviations (=1) away from the mean (=0). We have that $\phi(3)\approx 0.9986501$ and $\phi(5)\approx0.9999997$ are already quite close to 1. Remembering that the tails of the normal distribution get thinner and thinner the more we move away from the mean, $\phi(9.25)$ will be a number very, very close to 1. Hence, the probability is approximately $2\phi(9.25)-1\approx 1.$
Note that most software packages will evaluate $\phi(9.25)$ simply as 1 (because there is only a limited number of digits available in floating point numbers).
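In C++ the same lookup can be done with std::erfc from &lt;cmath&gt;, via the standard identity $\phi(x)=\tfrac12\operatorname{erfc}(-x/\sqrt2)$; a minimal sketch:
#include <cmath>
#include <cstdio>

// Standard normal cdf via the complementary error function:
// phi(x) = erfc(-x / sqrt(2)) / 2.
double phi(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

int main() {
    double z = 500.0 / (1.71 * std::sqrt(1000.0));  // approximately 9.25
    std::printf("z = %.4f, 2*phi(z) - 1 = %.15f\n", z, 2.0 * phi(z) - 1.0);
}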
Difference of numbers in a unit interval | Let $E$ be the event $\{ \max(x, y, z) - \min(x, y, z) \le \frac{2}{3}\}$. Our goal is to find $\Pr[E]$. Now let's split $E$ into 6 events $E_{xy}, E_{yx}, E_{xz}, E_{zx}, E_{yz}, E_{zy}$, where
$$E_{xy} = \{ (x = \max(x, y, z)) \land (y = \min(x, y, z)) \land (x - y \le 2/3)\},$$
and the rest are defined similarly. Note that $E$ is the union of these six events and the probability of intersection of any two of these events is $0$. Also, due to symmetry, all these 6 events have the same probability. Hence $\Pr[E] = 6\Pr[E_{xy}]$.
It remains to compute $\Pr[E_{xy}]$. Note that $E_{xy}$ happens iff $y\le x\le y + 2/3$ and $z\in[y, x]$. I.e.
\begin{align*}
\Pr[E_{xy}] &= \int\limits_0^1 dy \int\limits_y^{\min(y + \frac{2}{3}, 1)} dx \int\limits_y^x dz
= \int\limits_0^1 dy \int\limits_y^{\min(y + \frac{2}{3}, 1)} (x - y) dx \\
&= \int\limits_0^1 dy \cdot \frac{(x - y)^2}{2} \Bigg|_{y}^{\min(y + \frac{2}{3}, 1)}
= \int\limits_0^1 \frac{\left(\min(y + \frac{2}{3}, 1) - y\right)^2}{2} dy\\
&= \int\limits_0^{\frac{1}{3}} \frac{\left(\frac{2}{3}\right)^2}{2}dy + \int\limits_{\frac{1}{3}}^1 \frac{(1 - y)^2}{2}dy \\
&= \frac{2}{27} -\frac{(1 - y)^3}{6} \Bigg |_{\frac{1}{3}}^1 = \frac{10}{81}.
\end{align*}
And thus the answer is $ 6\cdot \frac{10}{81} = \frac{20}{27}$ |
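For anyone who wants to double-check $\frac{20}{27} \approx 0.74074$, a quick Monte Carlo sketch:
#include <algorithm>
#include <cstdio>
#include <random>

// Monte Carlo estimate of P(max(x,y,z) - min(x,y,z) <= 2/3)
// for x, y, z independent uniform on [0,1]; exact value 20/27.
int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const int trials = 10000000;
    int hits = 0;
    for (int i = 0; i < trials; ++i) {
        double x = u(gen), y = u(gen), z = u(gen);
        double hi = std::max({x, y, z}), lo = std::min({x, y, z});
        if (hi - lo <= 2.0 / 3.0) ++hits;
    }
    std::printf("estimate = %.5f, exact 20/27 = %.5f\n",
                (double)hits / trials, 20.0 / 27.0);
}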
asymptotic expansions of integral | First note that, for $x \in \left[ {0,1} \right]$, $f'(x) = a - 2bx \geq a-2b > 0$ and $f(0)=1$. So $f(x)\geq 1$ on $\left[ {0,1} \right]$. Accordingly, we can write
$$
\int_0^1 {f^{n - 1}(x) dx} = \int_0^1 {e^{(n-1)\log f(x)} dx} .
$$
Since $f'(x) > 0$, the exponent is increasing on the interval of integration and reaches a maximum at the endpoint $x=1$. Since the saddle point is at $x=\frac{a}{2b}>1$, this is the linear endpoint case of Laplace's method (see, e.g, Eq. (4.3) in http://www.macs.hw.ac.uk/~simonm/ae.pdf). Thus,
$$
\int_0^1 {e^{(n - 1)\log f(x)} dx} \sim \frac{1}{{n - 1}}\frac{{f(1)}}{{f'(1)}}e^{(n - 1)\log f(1)} \sim \frac{1}{n}\frac{{f^n (1)}}{{f'(1)}}
$$
as $n\to +\infty$. |
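For a feel of how fast the ratio approaches $1$, here is a small numerical sketch; the coefficients $a=3$, $b=1$ are my own arbitrary choice satisfying $a>2b$, so $f(x)=1+3x-x^2$ with $f(0)=1$:
#include <cmath>
#include <cstdio>

// Compares int_0^1 f(x)^{n-1} dx (midpoint rule) with the endpoint
// Laplace asymptotic f(1)/((n-1) f'(1)) * f(1)^{n-1}, for f = 1 + 3x - x^2.
int main() {
    const double a = 3.0, b = 1.0;             // assumed sample values, a > 2b
    const int ns[] = {50, 200, 600};
    for (int n : ns) {
        const int m = 4000000;
        const double h = 1.0 / m;
        double sum = 0.0;
        for (int i = 0; i < m; ++i) {
            double x = (i + 0.5) * h;
            sum += std::exp((n - 1) * std::log(1.0 + a * x - b * x * x));
        }
        double integral = sum * h;
        double f1 = 1.0 + a - b, fp1 = a - 2.0 * b;          // f(1), f'(1)
        double asymp = f1 / ((n - 1) * fp1) * std::pow(f1, n - 1.0);
        std::printf("n = %3d: integral/asymptotic = %.4f\n", n, integral / asymp);
    }
}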
Narrow convergence of probability measures with a not bounded function | This is false. Let $m$ be Lebesgue measure on $[0,1]$. Let $X_n$ be a sequence of random variables converging almost surely to $0$ such that $EX_n$ does not tend to $0$. (Example: $X_n=nI_{(0,\frac 1n)}$ on $(0,1)$ with Lebesgue measure). Let $\nu_n=m \times \mu_n$ and $\nu=m \times \mu$ where $\mu_n$ is the law of $X_n$ and $\mu$ is the law of $0$ (i.e. $\mu =\delta_0$). Then you can check that $\nu_n \to \nu$ but $\int yd\nu_n =1$ for all $n$ and $\int yd\nu=0$. |
Solve $-x^2+\sqrt{5}x \le 0$ | Your calculations are correct. In fact, you can first do: $$x(\sqrt{5}-x)\leq0$$ Using the product law, we can say that the solutions are: $$x\geq\sqrt{5} \; \vee \; x\leq0$$
Matlab - Multiplication of matrices $A.B$, A being a triangular matrix | $$C_{ij}=\sum_{k=1}^n A_{ik}B_{kj}$$
When $i < k$, $A_{ik}=0$, hence we can simplify the expression above as
$$C_{ij}=\sum_{\color{red}k=1}^\color{red}i A_{ik}B_{kj}$$
Try to edit your code based on the expression above.
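Written out in C++ for illustration (the MATLAB edit is the same change to the inner loop's upper bound); with $0$-indexing, the red bound becomes $k \le i$:
#include <cstdio>
#include <vector>

// C = A*B with A lower triangular: A[i][k] = 0 for k > i, so the
// k-loop can stop at i, halving the work on average.
std::vector<std::vector<double>>
triMul(const std::vector<std::vector<double>>& A,
       const std::vector<std::vector<double>>& B) {
    int n = (int)A.size();
    std::vector<std::vector<double>> C(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k <= i; ++k)   // the dense version would run k < n
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

int main() {
    std::vector<std::vector<double>> A = {{1, 0}, {2, 3}}, B = {{4, 5}, {6, 7}};
    auto C = triMul(A, B);                 // expected: 4 5 / 26 31
    std::printf("%g %g / %g %g\n", C[0][0], C[0][1], C[1][0], C[1][1]);
}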
modular exponentiation $14^{20} \pmod{33}$ | Hint: Consider $14^{20}$ mod $3$ and mod $11$. Use Fermat's theorem. |
Find all holomorphic functions giving $ u(x,y) = \phi (x^2+y^2) $ | I assume your function is supposed to be entire, i.e. holomorphic on all of $\mathbb C$.
Let $a = \text{Re}(f(1))$. Thus $g(z) = i (f(z) - a)$ is real on the circle $|z| = 1$. Note that $h(z) = \overline{g(1/\overline{z})}$ is analytic on
$\mathbb C \backslash \{0\}$ and equal to $g(z)$ on $|z|=1$, so it is equal to $g(z)$ everywhere. In particular, $g$ is bounded. Now use Liouville's theorem. |
How can I think of infimum of set sequence when there is no common intersection? | Consider $A_1 = [-1, 1]$, $A_2 = [-0.5, 0.5]$, so in general $A_n = [-1/n, 1/n]$. $0$ is in all of the sets so the intersection is not empty. So yes, if there is no intersection then the infimum would be the empty set. |
Is algebra over a set also algebra over a field? | There is a connection.
An algebra over a set is actually a boolean ring: if we interpret $+$ as symmetric difference $\triangle$ and $\cdot$ as intersection $\cap$, they satisfy all (unital) ring axioms (with $0=\emptyset$ and $1$ equal to the universe), and moreover $x+x=x\triangle x=\emptyset=0$. (However, it is never a field, unless it has two elements, despite being called a field of sets: no element except $1$ itself has a $\cap$-inverse.)
Every boolean ring is trivially an algebra over the two-element field, so there is your connection (moreover, every ring is an algebra over the ring of integers, but that is not a field).
But this is, I think, mostly incidental. As far as I can tell, historically, the term algebra was used in a sense closer to the one used in universal algebra now, which is rather more abstract than modern "algebra over a field". |
(continuously) extend a function from a $C^1$-boundary to the whole space | It is not completely clear what is given and what can be chosen here.
I describe the situation where $1\leq k<d$ and we have a compact $k$-dimensional $C^\alpha$ ($\alpha\geq 1$) submanifold $M$ (without boundaries) of ${\Bbb R}^d$ and a function $f\in C^\beta(M)$, $0\leq \beta\leq \alpha$.
For each $a\in M$ you may choose an open neighborhood $U_a \subset {\Bbb R}^d$ of $a$ and a $C^\alpha$ diffeo $\phi_a : U_a \rightarrow V_a=\phi_a(U_a)\subset {\Bbb R}^d$ so that $\phi_a(a)=0_d$ and
$$ \phi_a(U_a \cap M) = V_a\cap({\Bbb R}^k \times \{0_{d-k}\}).$$
(This is one definition of a smooth submanifold and equivalent to any other as far as I know). Let $\pi:{\Bbb R}^k \times {\Bbb R}^{d-k} \rightarrow{\Bbb R}^k \times \{0_{d-k}\}$ be the natural projection. Possibly shrinking neighborhoods we may assume that $\pi (V_a)\subset V_a$.
Use compactness to construct a finite cover $U_1,\ldots,U_N$ with an associated smooth partition of unity (see wiki) for $M$ subordinate to the cover. In other words, we obtain $C^\infty$ functions $\chi_i\in C^\infty_c(U_i)$ such that $\sum_i \chi_i(x)=1$ for each $x\in M$.
Given the function $f\in C^\beta(M)$ we obtain a function $f_i$ with compact support and of the same regularity by declaring for each $i$: $$f_i(x) = f(\phi_i^{-1} \circ \pi \circ \phi_i(x)) \chi_i(x), \ x\in U_i,$$
and extending by zero to the rest of ${\Bbb R}^d$.
Note that for $x\in M\cap U_i$ we have by construction $f_i(x)=f(x)\chi_i(x)$.
Therefore, $F(x) = \sum_i f_i(x)\in C^\beta_c({\Bbb R}^d)$ provides an extension of $f$ as required.
Remarks: There is no need for tubular neighborhoods (created e.g. using normal flows) to make such an extension unless you want to study formulae for co-area, Laplacians, or similar stuff; it makes life more complicated than necessary. The above extension is obviously far from unique.
Steps in Finding the Multiplicity of a Point in a Projective Curve | Usually when I need to calculate the multiplicity of a point on a projective curve I do the following. Say $C$ is your projective curve.
1) Knowing what $x\in C$ is, $x$ is in a chart of $\mathbb{P}^2$ (Define $E_i = \{(x_0:x_1:x_2)\in \mathbb{P}^2|x_i\neq 0\}$, then $E_0, E_1, E_2$ is an atlas of $\mathbb{P}^2$ and each $E_i$ I call a chart). Multiplicity being a local quantity of the curve it only depends on a neighborhood of $x\in C$ in $\mathbb{P}^2$. Since $E_i$ are open, assuming $x\in E_0$ (or any other $E_i$) then you can study the multiplicity of $x$ in $E_0$ (the neighborhood containing it).
2) Dehomogenize the homogeneous polynomial with respect to this chart, meaning $f(x_1, x_2) = P(1, x_1, x_2)$ where $P$ is the homogeneous polynomial. Now multiplicity of $x=(1:x_1:x_2)$ on $C$ is the same as multiplicity of $(x_1, x_2)$ in affine curve $C'\subset \mathbb{A}^2$ where $C'$ is determined by $f$.
3) Shift $(x_1, x_2)$ to $(0,0)$ by a coordinate transformation. This doesn't change the multiplicity. Denote $\tilde{f}$ as your transformed polynomial.
4) The multiplicity of $(0,0)$ for a curve in $\mathbb{A}^2$ is simply the degree of the leading homogeneous polynomial (leading means least degree) in $\tilde{f}$; i.e. if $\tilde{f}=\tilde{f}_n + \cdots + \tilde{f}_r$ (decreasing order) where $\tilde{f}_d$ is homogeneous of degree $d$, then multiplicity is simply $r$.
As a side note one can prove that the multiplicity of $x\in C$ is independent of what chart you choose for doing this procedure. |
In $\mathbb{Z}[X]$ what inputs can X take? | In $R[X]$, $X$ is an indeterminate, i.e. a symbolic variable.
When you say you "put" some $x$ in place of $X$, you're talking about the mapping
$f_x: \sum_i r_i X^i \mapsto \sum_i r_i x^i$.
If $R$ is a commutative ring and $x$ a member of some commutative ring $S$ that contains $R$, this is a ring homomorphism of $R[X]$ into $S$. |
If $\{f_n\}$ is Cauchy in measure, then there is a measurable function $f$, such that $\{f_n\}$ converges in measure to $f$ | The previous inequality shows that
$$|f(x)-f_{n_k}(x)| < 2^{-k+1} \qquad \text{for all $x \in D \backslash \bigcup_{j=k}^{\infty} E_j$,}$$
i.e.
$$x \in D \backslash \bigcup_{j=k}^{\infty} E_j \implies x \in \{|f-f_{n_k}| < 2^{-k+1}\}.$$
This is equivalent to
$$D \backslash \bigcup_{j=k}^{\infty} E_j \subseteq \{|f-f_{n_k}| < 2^{-k+1}\}.$$
Taking complement on both sides yields
$$\{|f-f_{n_k}| \geq 2^{-k+1}\} \subseteq \bigcup_{j=k}^{\infty} E_j,$$
and now the monotonicity of the measures proves the claimed inequality. |
Convergence in measure implies convergence almost everywhere (on a countable set!) | Assume that $X=\{x_j,j\in\mathbb N\}$ and that $\mu\{x_j\}>0$ for each $j$ and $f=0$, $f_n\geqslant 0$. We have to show that for each $j$, $f_n(x_j)\to 0$ as $n$ goes to infinity.
Fix $j$ and $\varepsilon\gt 0$; for $n$ large enough, we have $\mu\{x,f_n(x)\geqslant\varepsilon\}\lt\mu\{x_j\}$, hence $f_n(x_j)\lt\varepsilon$ for these $n$.
In general, it may be possible that $\{x_j\}$ is not measurable. However, the $\sigma$-algebra on $X$ is generated by a countable partition and each $f_n$ is constant on each element of the partition, hence we can without loss of generality assume that $X=\{x_j,j\in\mathbb N\}$ endowed with the powerset.
This shows the convergence at the points $x_j$ which have positive measure. In a general measure space, we can show that if $f_n\to 0$ in measure and $\{x_0\} $ has positive measure, then $f_n(x_0)\to 0$. But this does not guarantee the almost everywhere convergence because we cannot remove (as in the case where $X$ is countable) the points $x$ such that the measure of $\{x\}$ is $0$. |
Zig-Zag traversal of list | You have a pointer $L$ to the beginning of the list. First, create a new pointer $R$ that also points to the beginning of the list. Now, traverse the list with $R$, looking for the struct with string $w$. While $R$ is traversing the list, you should also keep a count $p$ equal to how many elements $R$ has iterated over. When you find the struct with string $w$, simply compute $p-n$ and iterate this many times from $L$ to find what you're looking for.
In C++, the algorithm may look something like this:
#include <cstdlib>
#include <iostream>
#include <string>

// Assumed node layout (not shown in the question):
// struct LL { std::string w; int n; LL* next; };
int spaces_to_move(LL* R, const std::string& word_to_find)
{
    int i = 0;  // how many nodes R has iterated over so far
    while (R->w != word_to_find) {
        R = R->next;
        i++;
        if (!R) {  // ran off the end of the list
            std::cout << "Word not found\n";
            std::exit(0);
        }
    }
    return i - R->n;  // the count p minus the stored n
}
Show that $\int\int_R f(x,y)dxdy\not=\int\int_R f(x,y) dydx$ For $R=[0,1]\times[0,1]$ and $f(x,y)=\frac{x^2-y^2}{(x^2+y^2)^2}$ | Look at the Wikipedia article; there is your integral:
Failure of Fubini's theorem for non-integrable functions. |
Does this sequence of functions defined on a closed disk have a pointwise limit? | Consider
$$f_n (1) = (-1)^n\longrightarrow?? \text{ as } n\longrightarrow \infty$$ |
equivalence of two complex integrals in Ahlfors Complex Analysis | There is a condition for this: The map $f:\gamma \rightarrow \Gamma$ should be 1-1 and orientation preserving. For example, $z\mapsto z^2$ maps the unit-circle to itself but winds around twice; the integrals are not the same. Similarly for $z\mapsto 1/z$. When the condition is verified it amounts to a reparametrization of the closed curve $\Gamma$. |
One to one functions and inverse | The inverse of a function is an equation for which $f(y)=x$. That means that for every point $(x,y)$ on the original function, there is a point $(y,x)$ on the inverse. This means that $g(x)=g^{-1}(y)$ for any pair $(x,y)$ on $g(x)$. So, the value of $g^{-1}(3)$ is asking for what value of $x$ is $y$ equal to $3$, the converse of $g(3)$, which is asking for what value of $y$ is $x$ equal to $3$. For sets like $g(x)$, that means looking through and finding pairs of the form $(x,3)$. For equations like $h(x)$, replace the $x$ with $y$ and the $y$ with $x$, and isolate the new $y$ in the new equation. That equation is then the equation of the inverse. Then you can evaluate that equation normally to find various values of $h^{-1}(x)$.
Mark a six-sided die with the results of six rolls of a previous die. How many iterations until all the faces match? | The probability that all sides are the same is $6$ times the probability that all sides are $1$. So we don’t have to worry about the distribution of all numbers on the die, we just have to keep track of the number of $1$s. That yields a Markov chain with $7$ different states, two of which ($0$ and $6$) are absorbing. The transition matrix is
$$
\mathsf P(i\to j)=6^{-6}\binom6ji^j(6-i)^{6-j}
$$
(where $0^0=1$), or in matrix form:
$$
P=6^{-6}\pmatrix{
46656&0&0&0&0&0&0\\
15625&18750&9375&2500&375&30&1\\
4096&12288&15360&10240&3840&768&64\\
729&4374&10935&14580&10935&4374&729\\
64&768&3840&10240&15360&12288&4096\\
1&30&375&2500&9375&18750&15625\\
0&0&0&0&0&0&46656\\
}\;.
$$
To my surprise, this matrix has a rather nice eigensystem:
$$
P=\pmatrix{6\\5&5&-5&25&-5&3725&1\\4&8&-4&-8&8&-10576&2\\3&9&0&-27&0&14337&3\\2&8&4&-8&-8&-10576&4\\1&5&5&25&5&3725&5\\&&&&&&6}\\
\times\pmatrix{6\\&38160\\&&120\\&&&2448\\&&&&120\\&&&&&648720\\&&&&&&6}^{-1}
\\
\times
\pmatrix{1\\&\frac56\\&&\frac59\\&&&\frac5{18}\\&&&&\frac5{54}\\&&&&&\frac5{324}\\&&&&&&1}
\\\times\pmatrix{1\\-2681&981&1125&1150&1125&981&-2681\\7&-8&-5&0&5&8&-7\\-14&33&-6&-26&-6&33&-14\\1&-4&5&0&-5&4&-1\\-1&6&-15&20&-15&6&-1\\&&&&&&1}\;,
$$
where the first diagonal matrix is just for normalization (so that I could write the matrices containing the left and right eigenvectors with integers) and the second diagonal matrix contains the eigenvalues.
Thus, since we start in state $1$, the probability to have reached state $6$ after $n$ rolls is
$$
-\frac{5\cdot2681}{38160}\left(\frac56\right)^n+\frac{5\cdot7}{120}\left(\frac59\right)^n-\frac{25\cdot14}{2448}\left(\frac5{18}\right)^n+\frac{5\cdot1}{120}\left(\frac5{54}\right)^n-\frac{3725\cdot1}{648720}\left(\frac5{324}\right)^n+\frac16
\\[3pt]
=
-\frac{2681}{7632}\left(\frac56\right)^n+\frac7{24}\left(\frac59\right)^n-\frac{175}{1224}\left(\frac5{18}\right)^n+\frac1{24}\left(\frac5{54}\right)^n-\frac{745}{129744}\left(\frac5{324}\right)^n+\frac16\;.
$$
We need to multiply this by $6$ to get the probability of having reached this state for any of the $6$ numbers on the die; this is the probability that the number $N$ of required rolls is less than or equal to $n$:
$$
\mathsf P(N\le n)=1-\frac{2681}{1272}\left(\frac56\right)^n+\frac74\left(\frac59\right)^n-\frac{175}{204}\left(\frac5{18}\right)^n+\frac14\left(\frac5{54}\right)^n-\frac{745}{21624}\left(\frac5{324}\right)^n\;.
$$
Here we can check that $\mathsf P(N\le0)=0$ and $\mathsf P(N\le1)=6^{-5}$, as they must be. The probability that we need exactly $n$ rolls is
\begin{eqnarray}
\mathsf P(N=n)
&=&
\mathsf P(N\le n)-\mathsf P(N\le n-1)
\\[3pt]
&=&
\frac{2681}{6360}\left(\frac56\right)^n-\frac75\left(\frac59\right)^n+\frac{455}{204}\left(\frac5{18}\right)^n-\frac{49}{20}\left(\frac5{54}\right)^n+\frac{47531}{21624}\left(\frac5{324}\right)^n
\end{eqnarray}
for $n\gt0$ and $\mathsf P(N=0)=0$. Here’s a plot, in agreement with your numerical results. The expected value of $N$ is
\begin{eqnarray}
\mathsf E[N]
&=&\sum_{n=0}^\infty\mathsf P(N\gt n)
\\[3pt]
&=&
\sum_{n=0}^\infty\left(\frac{2681}{1272}\left(\frac56\right)^n-\frac74\left(\frac59\right)^n+\frac{175}{204}\left(\frac5{18}\right)^n-\frac14\left(\frac5{54}\right)^n+\frac{745}{21624}\left(\frac5{324}\right)^n\right)
\\[6pt]
&=&
\frac{2681}{1272}\cdot\frac6{6-5}-\frac74\cdot\frac9{9-5}+\frac{175}{204}\cdot\frac{18}{18-5}-\frac14\cdot\frac{54}{54-5}+\frac{745}{21624}\cdot\frac{324}{324-5}
\\[6pt]
&=&
\frac{31394023}{3251248}
\\[6pt]
&\approx&9.656\;,
\end{eqnarray}
also in agreement with your numerical results. |
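As a cross-check, here is a short simulation sketch of the process (each iteration replaces all six faces by six independent rolls of the current die), which reproduces the mean $\approx 9.656$:
#include <array>
#include <cstdio>
#include <random>

// Each step replaces the six faces by six rolls of the current die;
// N counts steps until all faces show the same number.
int main() {
    std::mt19937 gen(123);
    std::uniform_int_distribution<int> pick(0, 5);
    const int trials = 200000;
    long long total = 0;
    for (int t = 0; t < trials; ++t) {
        std::array<int, 6> face = {1, 2, 3, 4, 5, 6}, next;
        int n = 0;
        bool done = false;
        while (!done) {
            for (int i = 0; i < 6; ++i) next[i] = face[pick(gen)];
            face = next;
            ++n;
            done = true;
            for (int i = 1; i < 6; ++i) if (face[i] != face[0]) done = false;
        }
        total += n;
    }
    std::printf("mean N = %.4f (exact 31394023/3251248 = %.4f)\n",
                (double)total / trials, 31394023.0 / 3251248.0);
}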
Is my method for finding the Basis for the Row(A) correct? | Elementary row operations preserve the row space of $A$ (but change the column space). If you want to find a basis for $\mathrm{row}(A)$, perform elementary row operations on $A$ until you reach the reduced row echelon form. The non-zero rows of the reduced row echelon form will form a basis for $\mathrm{row}(A)$. In fact, you don't even have to perform row operations until you reach the reduced row echelon form - you can stop whenever it is clear what is the basis for $\mathrm{row}(A)$.
In your example,
$$ \left( \begin{matrix} -1 & 0 & 2 \\ 3 & 2 & 0 \\ 0 & 1 & 3 \end{matrix} \right) \xrightarrow{R_1 = (-1) \cdot R_1}
\left( \begin{matrix} 1 & 0 & -2 \\ 3 & 2 & 0 \\ 0 & 1 & 3 \end{matrix} \right) \xrightarrow{R_2 = R_2 - 3R_1}
\left( \begin{matrix} 1 & 0 & -2 \\ 0 & 2 & 6 \\ 0 & 1 & 3 \end{matrix} \right) $$
and from the last matrix it is already clear that
$$ \mathrm{row}(A) = \mathrm{span} \{ (1, 0, -2), (0, 1, 3) \}. $$
If you want to find the column space of $A$, you can do the same with column operations (which preserve the column space but change the row space). |
Simplify the expression: $a^{\log {\sqrt \frac bc}}×b^{\log {\sqrt \frac ca}}×c^{\log {\sqrt \frac ab}}$ | Technically correct.
But we can try as follows:
As $\log \dfrac bc=\log b-\log c$ when all the logarithms remain defined,
$$a^{\log\sqrt{\dfrac bc}}=a^{\dfrac{\log b-\log c}2}=\left(e^{\log a}\right)^{\dfrac{\log b-\log c}2}$$
$$=e^{\dfrac{\log a\log b-\log c\log a}2}$$ |
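Treating the other two factors the same way and adding the three exponents, everything cancels:
$$\frac{(\log a\log b-\log c\log a)+(\log b\log c-\log a\log b)+(\log c\log a-\log b\log c)}2=0,$$
so the whole product equals $e^0=1$.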
Surjections of functions and proof by counterexample | The map $f$ is not a function there, since it has only one element in its domain while the codomain has more elements; this goes against the definition of a function, and it makes your argument wrong: $f(1)$ can also be $2$ instead, which makes $g$ not a surjection.
Line$ 3x-4y+k$ touches a circle $x^2+y^2-4x-8y-5 = 0$ at $(a,b)$, then find $k+a+b =?$ | We can take advantage of the fact that the line touches the circle at a single point:
$$y=\frac34x+\frac{\lambda}{4} \Rightarrow \\
x^2+\left(\frac34x+\frac{\lambda}{4}\right)^2-4x-8\left(\frac34x+\frac{\lambda}{4}\right)-5 = 0 \Rightarrow \\
25x^2+2(3\lambda-80)x+\lambda^2-32\lambda-80=0 \quad (1)$$
The quadratic equation has a single solution when $D=0$:
$$(3\lambda-80)^2-25(\lambda^2-32\lambda-80)=0 \Rightarrow \\
\lambda_1=-15,\lambda_2=35.$$
When we plug $\lambda=35>0$ into $(1)$:
$$25x^2+50x+1225-1120-80=0 \Rightarrow \\
25x^2+50x+25=0 \Rightarrow \\
x^2+2x+1=0 \Rightarrow \\
(x+1)^2=0 \Rightarrow \\
x=-1=a \Rightarrow y=8=b$$
Hence:
$$\lambda+a+b=35-1+8=42.$$ |
Can an algebraic variety be described as a category, in the same way as a group? | The conjectural remarks by Qiaochu are supported by the following theorem (joint work with Alexandru Chirvasitu):
Let $X,Y$ be schemes, where $X$ is quasi-compact and quasi-separated. Then $f \mapsto f^*$ establishes an equivalence between $\hom(Y,X)$ and the category of cocontinuous symmetric monoidal functors $\mathsf{Qcoh}(X) \to \mathsf{Qcoh}(Y)$.
Therefore, the category of (nice) schemes embeds fully faithfully into the $2$-category of cocomplete symmetric monoidal categories. Large parts of algebraic geometry can be translated and generalized by means of this embedding; this will be hopefully explained in full detail in my thesis. For example, under suitable finiteness assumptions, one can construct the "projective tensor bundle" $\mathbb{P}_{\otimes}(\mathcal{E})$ for an object $\mathcal{E}$ of a cocomplete symmetric monoidal category $\mathcal{C}$, which classifies invertible quotients of $\mathcal{E}$, and when $\mathcal{C}=\mathsf{Qcoh}(S)$ for some scheme $S$ this coincides with $\mathsf{Qcoh}(\mathbb{P}_S(\mathcal{E}))$.
In this picture, the monoidal structure is crucial; there are lots of auto-equivalences of $\mathsf{Qcoh}(X)$ which are not induced by automorphisms of $X$. On the other hand, if $X,Y$ are quasi-separated schemes such that $\mathsf{Qcoh}(X)$ and $\mathsf{Qcoh}(Y)$ are equivalent as abstract categories, then $X$ and $Y$ are isomorphic; this is a quite deep theorem (proved by Gabriel for noetherian schemes, then generalized by Rosenberg, and corrected by Gabber) and has motivated a new perspective on non-commutative algebraic geometry using abelian categories (see the work by Artin, Zhang, Rosenberg and others).
If I have this relation: $f(x)f(1-y) = f(y)f(1-x)$ and I know that $f'()<0$, what can I say of $x$ and $y$? | You cannot, since it might not be true. It depends on $f$. For example, let $f(x) = 1- e^x$. Then $x = \ln(1+e), y = 2$ is a solution to your equation. (Thanks to smcc, who pointed out the exact solution to me.)
Limit superior of iid Poisson random variables | The notation $b_n=o(c_n)$ means $b_n/c_n\to0$ as $n\to\infty$. Therefore, $b_n=o(1)$ simply means $b_n\to0$. Thus, the first equality is asserting that
$$
-\lambda + a_n\log\lambda - \sum_{j=1}^{a_n}\log j = -a_n\log a_n(1 + b_n),
$$
for some sequence $\{b_n\}$ satisfying $b_n\to0$. This is equivalent to
$$
\frac{\lambda - a_n\log\lambda + \sum_{j=1}^{a_n}\log j}{a_n\log a_n}
\to 1.
$$
The main thing needed to prove this is to show that $\sum_{j=1}^n \log j\sim n\log n$, meaning their ratio tends to $1$ as $n\to\infty$. You can guess at this result by comparing the sum to the corresponding integral $\int_1^n\log x\,dx$. To check it rigorously, you could use the Stolz–Cesàro theorem.
The second equality asserts that if $b_n\to0$, then
$$
-a_n\log a_n(1 + b_n) = (-\delta + c_n)\log n
$$
for some sequence $\{c_n\}$ satisfying $c_n\to0$. This is equivalent to
\begin{equation}\label{1}
\frac{a_n\log a_n}{\log n} \to \delta.\tag{1}
\end{equation}
A complete, rigorous proof of this would need to account for the fact that $a_n$ is an integer. But in this sketch, I will work as though
$$
a_n = \frac{\delta\log n}{\log\log n}.
$$
In this case,
$$
\log a_n = \log\delta + \log\log n - \log\log\log n.
$$
The leading term here is $\log\log n$ and
$$
\frac{a_n\log\log n}{\log n} = \delta,
$$
suggesting that \eqref{1} holds and verifying the second equality. |
Show that set is null set | Hint: Note that for every $k \in \mathbb N$ we have
$$ A \subseteq \bigcup_{n \ge k} A_n $$
and hence $\mu(A) \le \sum_{n \ge k}\mu(A_n)$. |
Cauchy completeness of an ordered field | The field $\mathbb{R}(t)$ itself is not Cauchy complete and neither is $\mathbb{Q}(t)$. Indeed, writing $t$ for the identity function, the sequence $(\sum \limits_{k=0}^n \frac{1}{k! \ t^k})_{n \in \mathbb{N}}$ is Cauchy but does not converge.
The Cauchy completion of $\mathbb{R}(t)$ is the field $\mathbb{R}((t))$ of formal Laurent series, i.e. formal sums $\sum \limits_{k>n} a_k t^{-k}$ where $n \in \mathbb{Z}$ and $(a_k)_{k >n}$ is a family of real numbers. Fields of hyperreal numbers don't embed into $\mathbb{R}((t))$. Indeed, on the one hand, in such a field $^*\mathbb{R}$, for each positive infinite element $H$, there is a hyperreal $\exp(H)$ with $\exp(H)>H^n$ for all $n\in \mathbb{N}$.
On the other hand, for any positive infinite $y \in \mathbb{R}((t))$, the sequence $(y^n)_{n \in \mathbb{N}}$ has no upper bound.
Likewise, the Cauchy completion of $\mathbb{Q}(t)$ is the field $\mathbb{Q}((t))$ of formal Laurent series with rational coefficients. This field does not contain an isomorphic copy of $\mathbb{R}$, for it contains no square root of $2$. |
Basic Math, exponents and algebra | Square both sides:
$$
\frac{x_1^{-1}}{x_2^{-1}}=\frac{p_1^2}{p_2^2}
$$
Rewrite the left hand side:
$$
\frac{x_1^{-1}}{x_2^{-1}}=
\frac{1/x_1}{1/x_2}=\frac{1}{x_1}\left(\frac{1}{x_2}\right)^{-1}=
\frac{1}{x_1}x_2=\frac{x_2}{x_1}
$$
Thus your equality is
$$
\frac{x_2}{x_1}=\frac{p_1^2}{p_2^2}
$$
or
$$
x_2=\frac{p_1^2}{p_2^2}x_1
$$ |
Assistance on simplifying a set | If you keep doing distribution, you'll just end up going around in circles:
$$A' \cap (A' \cup B') =$$
$$ (A ' \cap A') \cup (A' \cap B') = A' \cup (A' \cap B') = (A ' \cup A') \cap (A' \cup B') = $$
$$A' \cap (A' \cup B')$$
Instead, you can do:
$$A' \cap (A' \cup B') = (A' \cup \emptyset) \cap (A' \cup B') = A' \cup (\emptyset \cap B') = A' \cup \emptyset = A'$$
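Even quicker: by the absorption law $X\cap(X\cup Y)=X$, we have $A' \cap (A' \cup B') = A'$ in one step.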
What do we know if we know the determinant and trace of a matrix? | Specifying the determinant and trace doesn't restrict the set of all matrices as much as you'd like it to. In lower dimensions (namely $n=2$), things are a little better, but for $n\geq 3$ the situation isn't all that great.
One thing we do know is that the determinant and trace both show up in the characteristic polynomial of the matrix. For example, in the $2\times 2$ case, if the characteristic polynomial of $A$ factors into $(t-a)(t-b)$, then
$$
(t-a)(t-b) = t^2 - (a+b)t + ab = t^2 - \text{trace}~A\cdot t + \det A.
$$
At least in this case, having the determinant and trace completely determines the characteristic polynomial, and conversely. Sadly, this is known not to be enough to completely determine a matrix up to similarity. In particular, if the eigenvalues are repeated in the $2\times 2$ case then there are two possible forms of the Jordan matrices. On the other hand, if the eigenvalues are distinct then the matrix is diagonalizable, so in this specific case among $2\times 2$ matrices we can determine $A$ up to similarity explicitly.
In higher dimensions, we can show the same thing happens, but we get additional terms whose coefficients are traces of what are known as exterior powers of $A$. But if the matrix is unknown a priori then you won't be able to determine the characteristic polynomial with just the determinant and trace.
Of course, if one knows the Jordan form then one knows essentially everything one could possibly want about the matrix. |
Number of values of Z(real or complex) satisying the given system of equations..? | When you manipulate equations in non-reversible ways (like multiplying by $z-1$, or dividing) you are potentially introducing additional roots. In this case the roots of the original system are among the roots of $z^4=1$, but not all roots of $z^4=1$ necessarily satisfy the original system.
Alternative hint to solve it: $\;\gcd(z^m-1,z^n-1)=z^{\gcd(m,n)}-1\,$ and of course $\gcd(18,14)=2\,$.
[ EDIT ] Combining the same $\gcd$ idea with OP's approach:
$z^{18}=1 \implies \left(z^{18}\right)^3=z^{54}=1$
$z^{14}=1 \implies \left(z^{14}\right)^4=z^{56}=1$
Dividing the two equations gives $z^2=1\,$, and dropping the extraneous root $z=1$ which was introduced by the multiplication with $z-1$ leaves $z=-1\,$. |
Proving $f \in L^{2}[0, 1] = 0$ a.e. if integral of $x^{n}f(x)$ is 0 for each n | First note, by Cauchy-Schwarz (or Hölder), that $L^2[0,1] \subset L^1[0,1]$. Suppose first that $f$ is continuous. Then for every $\epsilon > 0$, there is a polynomial $p(x)$ such that $|f(x) - p(x)| < \epsilon$ on $[0,1]$. By the hypothesis, $\int f\,p = 0$, so $\int f^2 = \int f(x)(f(x) - p(x)) \le \epsilon \int |f|$. Since $\epsilon$ was arbitrary, $0 \le \int f^2 = 0$, and $f$ must be zero a.e.
Now use the density of continuous functions with compact support in the $L^2$ norm. |
Find the solution $\Phi$ of an IVP | You solved the problem for the first equation and obtained $y_1=2e^t$. As said by Amzoti, substitute the expression for $y_1$ in the second equation and you arrive at $$y_2' =2 e^{-t}+t y_2$$ So, as usual, you first solve $$y_2' =t y_2$$ which leads to $$y_2=K(t) e^{\frac{t^2}{2}}$$ which you differentiate again and bring back to the differential equation. This now leads to $$e^{\frac{t^2}{2}} K'(t)=2 e^{-t}$$ that is to say $$K'(t)=2 e^{-\frac{t^2}{2}-t}$$ which you need to integrate. As Amzoti said, this is not the most pleasant integration, but a rather simple change of variable leads to the solution $$K(t)=\sqrt{2 e \pi }\, \text{erf}\left(\frac{t+1}{\sqrt{2}}\right)+C$$ so the solution is $$y_2=e^{\frac{t^2}{2}} \left(C+\sqrt{2 e \pi }\, \text{erf}\left(\frac{t+1}{\sqrt{2}}\right)\right)$$ Applying the boundary condition, you obtain $$C=1-\sqrt{2 e \pi }\, \text{erf}\left(\frac{1}{\sqrt{2}}\right)$$ and so $$y_2=e^{\frac{t^2}{2}} \left(\sqrt{2 e \pi } \left(\text{erf}\left(\frac{t+1}{\sqrt{2}}\right)-\text{erf}\left(\frac{1}{\sqrt{2}}\right)\right)+1\right)$$ A tedious differentiation will confirm that the second differential equation and its boundary condition are satisfied with this expression.
In this answer, I tried to go into as much detail as possible to refresh your memory. Do not hesitate to post if you need further clarification.
$|\mu(A \cap B) − \mu(A \cap C)| \leq \mu(B \triangle C)$ provided $\mu(A)<+\infty$. | Since
$$
\mu(A\cap B)=\mu(A\cap (B\cap C)) + \mu(A\cap (B-C))
$$
and
$$
\mu(A\cap C)=\mu(A\cap (B\cap C)) + \mu(A\cap (C-B)).
$$
Since $\mu(A)<\infty$, we get
$$\mu(A\cap B)-\mu(A\cap C)=\mu(A\cap (B-C))-\mu(A\cap (C-B)).$$
So
$$
\begin{aligned}
|\mu(A\cap B)-\mu(A\cap C)| &= |\mu(A\cap (B-C))-\mu(A\cap (C-B))|\\
&\le \mu(A\cap (B-C))+\mu(A\cap (C-B)) \\
&=\mu (A\cap (B\triangle C)) \le \mu(B\triangle C)
\end{aligned}
$$ |