title | upvoted_answer
---|---|
A bijection from N^2 to N that is not Cantor's pairing function | Yes. Here's an example. Let$$S=\{2^a3^b\mid a,b\in\mathbb N\}=\{6,12,18,\ldots\}$$(taking $\mathbb N=\{1,2,\ldots\}$). There is a bijection $\beta\colon\mathbb N\longrightarrow S$: just define $\beta(n)$ as the $n$th element of $S$. And there is a bijection $\varphi\colon\mathbb N^2\longrightarrow S$: $\varphi(a,b)=2^a3^b$. Now take $\beta^{-1}\circ\varphi\colon\mathbb N^2\longrightarrow\mathbb N$. |
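A concrete sketch in Python (taking $\mathbb N=\{1,2,\dots\}$, as the listing $6,12,18,\dots$ suggests): enumerate $S$ in increasing order to realize $\beta$, and invert $\varphi$ by factoring out the $2$s and $3$s; composing the two maps gives the bijection between $\mathbb N$ and $\mathbb N^2$.

```python
# Enumerate S = {2^a 3^b : a, b >= 1} in increasing order (this is beta),
# then invert phi(a, b) = 2^a 3^b by factoring out 2s and 3s.
LIMIT = 10**9
S = sorted(2**a * 3**b
           for a in range(1, 31) for b in range(1, 20)
           if 2**a * 3**b <= LIMIT)

def pair(n):
    """phi^{-1}(beta(n)): the n-th element of S, factored as (a, b)."""
    s = S[n - 1]          # beta(n), with 1-based indexing of S
    a = b = 0
    while s % 2 == 0:
        s //= 2; a += 1
    while s % 3 == 0:
        s //= 3; b += 1
    return (a, b)

pairs = [pair(n) for n in range(1, 101)]
assert S[:3] == [6, 12, 18]
assert pair(1) == (1, 1)
assert len(set(pairs)) == 100   # distinct outputs: the map is injective here
```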
Proof help: Log Inequalities | What you suggest, $x<a\log(x)\Rightarrow x<2a\log(a)$, does not follow from the proposition.
The argument in the task is an argument taken from formal logic: A implies ($\Rightarrow$) B, is equivalent to saying: $\bar A$ is a necessary condition for $\bar B$.
Example: if it rains (A), (this implies that) the street is wet (B). This is equivalent to: for the street to be dry ($\bar B$), it is necessary that it doesn't rain ($\bar A$). It's not sufficient, since somebody can spill water on the street.
However, none of that is actually used in the proof. I guess they will use the argument later. In the proof, they always prove A $\Rightarrow$ B directly.
In the proof's chain of arguments, they first prove that directly (for $a\in(0,\sqrt{e}]$), and then (for $a>\sqrt{e}$) they prove that $f(x)=x-a\ln(x) > 0$, or $x > a\ln(x)$, (B), when $x\geq 2a\ln(a)$ holds (A).
A little piece is missing. They say: "Thus, for $x>a$ the derivative is positive and the function increases." However, what they can take for granted is $x>2a\log(a)$. So they must show that $x>2a\log(a)$ implies $x>a$. That will be the case for $2 \log(a) > 1$ . Since we are in the domain $a>\sqrt{e}$, we have indeed $2 \log(a) > 2 \log \sqrt{e} = 1 $. So here we see that it makes sense to split the domain for $a$. |
an AP is changed to form a GP | Let the three numbers be $5-d,5, 5+d$
The new numbers $6-d,6,9+d$ are in G.P.
Clearly $6^2=(9+d)(6-d)$, which gives $d=3$ or $d=-6$.
Original numbers are $2,5,8$ or $11,5,-1$. |
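A quick arithmetic check of both roots of the G.P. condition (a sketch):

```python
# 6^2 = (6 - d)(9 + d) is the condition for 6 - d, 6, 9 + d to be geometric
for d in (3, -6):
    assert 36 == (6 - d) * (9 + d)              # G.P. condition holds
    g = (6 - d, 6, 9 + d)
    assert g[1] ** 2 == g[0] * g[2]             # middle term is the geometric mean
    ap = (5 - d, 5, 5 + d)
    assert ap[1] - ap[0] == ap[2] - ap[1]       # the original triple is an A.P.

assert [(5 - d, 5, 5 + d) for d in (3, -6)] == [(2, 5, 8), (11, 5, -1)]
```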
Can any highest weight module in $\mathcal{O}$ be constructed from a Verma module? | A module $V$ is highest weight of highest weight $\lambda$ if there exists a weight vector $v_\lambda \in V_\lambda$ such that $e_i(v_\lambda) = 0$ for all $i$, and $v_\lambda$ generates $V$ as a $\mathfrak{g}$-module. (This is just the definition of a highest weight module.)
If $V$ is a highest weight module of highest weight $\lambda$, then there exists a surjection $M(\lambda) \twoheadrightarrow V$ taking the highest-weight vector $m_\lambda$ to $v_\lambda$. So Verma modules always surject onto highest-weight modules. (In fact, you can use this as a definition of what a Verma module is, via a universal property).
Not every module of category $\mathcal{O}$ is highest weight. For example, if $V$ and $W$ are highest-weight modules, then their direct sum $V \oplus W$ is not, since it is not generated by a single weight vector. For a more involved example, take an indecomposable highest-weight module with nontrivial composition factors, and take the BGG dual.
Category $\mathcal{O}$ is not semisimple, since the Verma modules $M(\lambda)$ for dominant integral weights $\lambda$ have proper submodules which are not summands. You can see this easily even in the case $\mathfrak{g} = \mathfrak{sl}_2$, by writing out the basis for such a Verma module. |
Lack of torsion implies 1x = 1 | Assume that $1\cdot x=y\neq x$. Then $1\cdot y=y$, because $1\cdot y=(1\cdot 1)\cdot x=1\cdot x=y$. So $1\cdot (x-y)=0$, but $1\neq 0$ and $x-y\neq 0$, so $x-y$ would be a torsion element, a contradiction. Hence $1\cdot x=x$. |
Normal operator + only real eigenvalues implies self-adjoint operator? | To elaborate on fedja's comment: Let $(X,\mu)$ be a measure space, let $h$ be a bounded measurable complex-valued function on $X$, and let $T$ be the multiplication operator on $L^2(X,\mu)$ defined by $Tf = hf$. Show that $T$ is normal, and is self-adjoint iff $h$ is real-valued almost everywhere. Now show that $\lambda$ is an eigenvalue of $T$ iff $\mu(\{h= \lambda\}) > 0$. Taking as an example $X = [0,1]$ with Lebesgue measure, you should be able to use this to construct a normal, non-self-adjoint operator with only real eigenvalues, or with no eigenvalues at all. |
Expected value of randomly answered questions | Guide:
Let $X$ denote the number of questions that are answered correctly.
Then the number of points at the end can be written as $f(X)$ where $f$ is the function that maps the number of questions correctly answered to the associated amount of points.
The last step is determining:$$\mathbb Ef(X)=\sum_{k=0}^{10}f(k)P(X=k)$$ (summing over the full support of $X$, including $k=0$). |
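As a concrete sketch of that last step, with a hypothetical setup (10 four-choice questions answered uniformly at random, so $X\sim\operatorname{Bin}(10,\frac14)$, and the made-up scoring rule $f(k)=k$):

```python
from math import comb

n, p = 10, 0.25          # hypothetical: 10 questions, 4 choices each

def f(k):
    return k             # hypothetical scoring rule: one point per correct answer

# E f(X) = sum_k f(k) P(X = k), summed over the full support k = 0..n
ev = sum(f(k) * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))
assert abs(ev - 2.5) < 1e-9    # with f(k) = k this is just E X = n p = 2.5
```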
Solving integral with substitution gives wrong result | The function is even, so rewrite the integral as
$$\int_{-\infty}^\infty \frac{x^2}{(1+|x|^3)^4}dx = 2\int_0^\infty\frac{x^2}{(1+x^3)^4}dx$$
Now try your substitution. |
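As a sanity check of where the computation should land (a numerical sketch): with $u=1+x^3$, $du=3x^2\,dx$, the right-hand side is $2\int_1^\infty\frac{du}{3u^4}=\frac29$.

```python
# Composite Simpson's rule for x^2 / (1 + x^3)^4 on [0, 50];
# the integrand decays like x^-10, so the tail beyond 50 is negligible.
def simpson(g, a, b, n):          # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, n, 2))
    return s * h / 3

g = lambda x: x**2 / (1 + x**3)**4
half = simpson(g, 0.0, 50.0, 100_000)
assert abs(half - 1/9) < 1e-6        # substitution u = 1 + x^3 gives exactly 1/9
assert abs(2 * half - 2/9) < 1e-6    # value of the original integral over all of R
```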
sum in closed form $\frac{1}{3}+\frac{1\cdot 5}{3\cdot 7}+\frac{1\cdot 5\cdot 9}{3\cdot 7 \cdot 11}+...............$ | The series does not converge. You can use the comparison test against the divergent series $$\frac{1}{3} + \frac{1}{7} + \frac{1}{11} + \cdots$$ Indeed, the $n$th term $\frac{1\cdot 5\cdots(4n-3)}{3\cdot 7\cdots(4n-1)}$ is at least $\frac{1}{4n-1}$, by an easy induction. |
Prove relative error with condition number of matrix inequality | The standard perturbation inequality when perturbing only the matrix of the linear system is (in your notation)
$$
\frac{\|x-x_2\|}{\|x\|}\leq \frac{\epsilon}{1-\epsilon}, \qquad
\epsilon=\mathrm{cond}(U)\frac{\|U-U_2\|}{\|U\|},
$$
assuming that $\epsilon<1$ (I guess there's a mistake or a typo in your definition of $y$, it should be $y=\|U^{-1}\|\|U-U_2\|<1$ instead).
Considering the function $f(\epsilon)=\epsilon/(1-\epsilon)$, you can make its Taylor expansion at $0$: $f(\epsilon)=\epsilon+O(\epsilon^2)$. Hence
$$
\frac{\|x-x_2\|}{\|x\|}\leq \epsilon+O(\epsilon^2) = \mathrm{cond}(U)\frac{\|U-U_2\|}{\|U\|} + O(\epsilon^2).
$$ |
A product of two functions is periodic; are the functions individually periodic? | For the sum, find some periodic function $P$ and a non-periodic function $f$, and set $g(x) = P(x) - f(x)$.
For the product, find some periodic function $P$, and a non-periodic function $f$ which never equals $0$, and set $g(x) = \frac{P(x)}{f(x)}$. |
Proving summation identities | Write $S=\sum_{i=1}^{n}\sum_{j \neq i} \frac{z_i}{z_i-z_j}$. Then
$S=\sum_{i=1}^{n}\sum_{j \neq i} \left(1+\frac{z_j}{z_i-z_j} \right)=n(n-1)-\sum_{i=1}^{n}\sum_{j \neq i} \frac{z_j}{z_j-z_i},
$
and after swapping the names of the indices $i$ and $j$, the last double sum is again $S$. Hence $S=n(n-1)-S$, so $S=\frac{n(n-1)}{2}$.
This proves the first identity.
You can now use the same strategy to split
$
\frac{z_i^2}{z_i-z_j}=\frac{(z_i^2-z_j^2)+z_j^2}{z_i-z_j}=(z_i+z_j)+\frac{z_j^2}{z_i-z_j}
$
on the second equation, so on and so forth. |
How to use density argument to obtain inequality? | The answer to your question is no. There are linear operators $T$ that are bounded on a dense subspace, but not bounded.
See this example which has $Y=\mathbb R$, but is not constructive.
For your edit:
Here the operator is not given on all of $X$, but only on a dense subset $D$. Therefore we can choose which values $T$ will take at the points of $X\setminus D$.
You have to do the following steps (I will still leave out some details):
1. Define $Tf= \lim_n T f_n$, where $f_n\in D$ is such that $f_n\to f$, for every $f\in X$.
2. To justify step 1, show that the limit actually exists (hint: use Cauchy sequences and the boundedness of $T$ on $D$).
3. To justify step 1, show that the definition is independent of the chosen sequence $(f_n)$.
4. Show that the resulting operator $T:X\to Y$ is bounded. |
Vector Problem in 3d - Find A Point on A Plane | Let $\mathbf{n}$ be the vector $\langle3,2,4\rangle$ normal to the plane. Note
\begin{align}
\mathbf{w}=\stackrel{\longrightarrow}{BA}-\operatorname{proj}_{\mathbf{n}}{\stackrel{\longrightarrow}{BA}}&=\stackrel{\longrightarrow}{BA}-\left(\frac{\stackrel{\longrightarrow}{BA}\cdot\mathbf{n}}{\left\|\mathbf{n}\right\|}\right)\frac{\mathbf{n}}{\left\|\mathbf{n}\right\|} \\
&=\langle2,-2,4\rangle-\left(\frac{\langle2,-2,4\rangle\cdot\langle3,2,4\rangle}{\sqrt{29}}\right)\frac{\langle3,2,4\rangle}{\sqrt{29}}\\
&=\langle2,-2,4\rangle-\left(\frac{18}{\sqrt{29}}\right)\frac{\langle3,2,4\rangle}{\sqrt{29}}\\
&=\langle2,-2,4\rangle-\langle\frac{54}{29},\frac{36}{29},\frac{72}{29}\rangle\\
&=\langle\frac{4}{29},-\frac{94}{29},\frac{44}{29}\rangle
\end{align}
is a vector orthogonal to $\mathbf{n}$ (since $\mathbf{w}\cdot\mathbf{n}=0$). Then $C$ is the endpoint of $\stackrel{\longrightarrow}{OB}+\mathbf{w}=\langle\frac{62}{29},-\frac{65}{29},\frac{73}{29}\rangle$. As a check, we can compute $\stackrel{\longrightarrow}{AC}\cdot\stackrel{\longrightarrow}{CB}$:
\begin{align}
\stackrel{\longrightarrow}{AC}\cdot\stackrel{\longrightarrow}{CB} & =\langle-\frac{54}{29},-\frac{36}{29},-\frac{72}{29}\rangle\cdot\langle-\frac{4}{29},\frac{94}{29},-\frac{44}{29}\rangle \\
&=\frac{216-3384+3168}{29^2} \\
&=0.
\end{align} |
Why is $S^\infty \times_{\mathbb{Z}/2} S^2 \sim RP^2$? | You have a fiber bundle $S^\infty \to (S^\infty \times_{\Bbb Z/2} S^2) \to (* \times_{\Bbb Z/2} S^2)$. The fiber is contractible, so the last map is at least a weak homotopy equivalence; indeed it's an actual homotopy equivalence by Whitehead's theorem. |
Help with Implicit Differentiation: Finding an equation for a tangent to a given point on a curve | You made an error when you differentiated implicitly. You did not apply the Product Rule $(fg)' = f'g + fg'$ to the term $xy$. Keeping in mind that $y$ is a function of $x$, you should obtain
$$(xy)' = 1y + xy' = y + xy'$$
Therefore, when you differentiate
$$x^2 + xy + y^2 = 3$$
implicitly with respect to $x$, you should obtain
$$2x + y + xy' + 2yy' = 0$$
Solving for $y'$ yields
\begin{align*}
xy' + 2yy' & = -2x - y\\
(x + 2y)y' & = -2x - y\\
y' & = -\frac{2x + y}{x + 2y}
\end{align*}
As you can check, evaluating $y'$ at the point $(1, 1)$ yields $y' = -1$. Therefore, the tangent line equation is $y = -x + 2$. |
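A quick numerical sketch confirming the slope: near $(1,1)$ the relevant branch of the curve is $y=\frac{-x+\sqrt{12-3x^2}}{2}$ (solving the quadratic in $y$), and the tangent line $y=-x+2$ should match it to second order.

```python
from math import sqrt, isclose

def y_branch(x):
    # solve y^2 + x*y + (x^2 - 3) = 0 for the branch through (1, 1)
    return (-x + sqrt(12 - 3 * x**2)) / 2

assert isclose(y_branch(1.0), 1.0)

# slope from the implicit-differentiation formula at (1, 1)
x0, y0 = 1.0, 1.0
slope = -(2 * x0 + y0) / (x0 + 2 * y0)
assert slope == -1.0

# the tangent line y = -x + 2 agrees with the curve to O((x - 1)^2)
h = 1e-4
assert abs(y_branch(1 + h) - (-(1 + h) + 2)) < 1e-6
```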
Area between the curves $x=(5/3)y$ and $x=\sqrt{1+y^2}$ | In a first year calculus course, symbolab is your best friend.
http://symbolab.com/math/search/%5Cint_%7B0%7D%5E%7B0.75%7D%5Csqrt%7B1%2By%5E%7B2%7D%7Ddy/?origin=button |
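For what it's worth, the integral can also be finished by hand: the curves meet where $\frac{25}{9}y^2=1+y^2$, i.e. $y=\frac34$, and assuming the region is the one between the curves for $0\le y\le\frac34$, the area is $\int_0^{3/4}\bigl(\sqrt{1+y^2}-\frac53 y\bigr)\,dy=\frac{\ln 2}{2}$ (the $\frac{y\sqrt{1+y^2}}{2}$ term from the first antiderivative cancels the $\frac56 y^2$ term at $y=\frac34$). A numerical sketch:

```python
from math import sqrt, log

# Simpson's rule for the gap between x = sqrt(1 + y^2) and x = (5/3) y
def simpson(g, a, b, n):          # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, n, 2))
    return s * h / 3

gap = lambda y: sqrt(1 + y**2) - (5 / 3) * y
area = simpson(gap, 0.0, 0.75, 10_000)
assert abs(area - log(2) / 2) < 1e-9   # closed form: (1/2) ln 2 ~ 0.34657
```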
Determining graph minors quickly | Sorry for the answer on an extremely old question but I came across it and could not ignore it.
These are the things I came up with in worst case nlogn time you can sort all the degrees adn compare them. if there is a single time in which position x in the sorted list of B is smaller than the same position x in the sorted list of A . B cannot be a minor of A. same thing goes with the length of edges (non existant lengths and degrees are set to minus infinity). |
Method for computing polar coordinates surface element? | When the variables are changed between $x,y$ and $u,v$, your $dx dy$ comes from the area of a rectangle, but it is mapped to a region that is not a rectangle. So instead of doing $du dv$, you will have to do $|d\vec{u}\times d\vec{v}|$, the area of a parallelogram, which gives you the jacobian. In fact, here is the formula,
$$dx\, dy =\left|\det\begin{bmatrix}
\frac{\partial x}{\partial u} & \frac{\partial x}{\partial v}\\
\frac{\partial y}{\partial u} & \frac{\partial y}{\partial v}
\end{bmatrix}\right| du\, dv.$$ |
Argument Principle Application | No, not quite. Notice that $\Gamma$ winds twice around the origin. |
Uniqueness of ODE solution with Dirichlet conditions on the half line | For uniqueness of a second-order differential equation you generally need two initial conditions, typically $u(0) = \ldots$ and $u'(0) = \ldots$.
In this case, solutions other than $u = 0$ are
$$ u(x) = c \sqrt{\dfrac{2}{c^2+1}} \text{sn} \left(\dfrac{x}{\sqrt{c^2+1}},c\right) $$
for arbitrary real $c$, where $\text{sn}$ is a Jacobi elliptic function. In particular, for $c=\pm 1$ this is $\pm \tanh(x/\sqrt{2})$, corresponding to
$u'(0) = \pm 1/\sqrt{2}$. If $|u'(0)| > 1/\sqrt{2}$, it seems the solutions blow up at a finite value of $x$, but if $0 < |u'(0)| < 1/\sqrt{2}$ you get a periodic solution with a real value of $c$. |
Compute $\sum_{k=1}^{\infty}e^{-\pi k^2}\left(\pi k^2-\frac{1}{4}\right)$ | I don't know about high school math, but there is an answer using Mellin transforms. First compute the Mellin transform of the sum, then invert to get a closed form expression.
Introduce $$ f(x) = \sum_{k\ge 1} e^{- k^2 x} \left(\pi k^2 - \frac{1}{4} \right),$$
so that we are looking for $f(\pi).$
We have straightforwardly (using the definition of the Mellin transform) that the Mellin transform $f^*(s)$ of $f(x)$ is given by
$$ f^*(s) = \mathfrak{M}\left(f(x); s\right) =
\Gamma(s)
\sum_{k\ge 1} \left(\frac{\pi}{k^{2(s-1)}} - \frac{1}{4} \frac{1}{k^{2s}} \right) =
\Gamma(s) \left(\pi \zeta(2(s-1)) - \frac{1}{4} \zeta(2s) \right).$$
Now the Mellin inversion integral (which we'll evaluate at $x=\pi$) is
$$\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} f^*(s) x^{-s} ds =
\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \Gamma(s) \left(\pi \zeta(2(s-1)) - \frac{1}{4} \zeta(2s) \right) x^{-s} ds.$$
Now the only singularity of the first zeta term is at $s=3/2$, with residue
$$ \operatorname{Res}\left(\Gamma(s) \pi \zeta(2(s-1)) x^{-s}; s=3/2\right) =
1/2\,{\frac {\Gamma \left( 3/2 \right) \pi }{{x}^{3/2}}}.$$
The only singularity of the second zeta term is at $s=1/2$, with residue
$$ \operatorname{Res}\left(\Gamma(s) \frac{1}{4} \zeta(2s) x^{-s}; s=1/2\right) =
1/8\,{\frac {\Gamma \left( 1/2 \right) }{\sqrt {x}}}.$$
It follows that
$$ f(x) = 1/2\,{\frac {\Gamma \left( 3/2 \right) \pi }{{x}^{3/2}}}
- 1/8\,{\frac {\Gamma \left( 1/2 \right) }{\sqrt {x}}}.$$
Finally set $x=\pi$ to get
$$\frac{1}{\sqrt{\pi}} \left(1/2\Gamma(3/2)-1/8\Gamma(1/2)\right) =
\frac{1}{\sqrt{\pi}} \left(1/4\Gamma(1/2)-1/8\Gamma(1/2)\right) =
\frac{1}{8} \frac{1}{\sqrt{\pi}} \Gamma(1/2) = \frac{1}{8}.$$
The reason why there is only one pole in every case is because the trivial zeros of the zeta function cancel the poles of the gamma function. |
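The closed form is easy to confirm numerically, since the series converges extremely fast:

```python
from math import exp, pi

# partial sum of sum_{k>=1} e^{-pi k^2} (pi k^2 - 1/4); terms decay like e^{-pi k^2}
total = sum(exp(-pi * k**2) * (pi * k**2 - 0.25) for k in range(1, 20))
assert abs(total - 1/8) < 1e-10   # matches the Mellin-transform answer 1/8
```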
Example showing L1 is not a reflexive space | This was answered in this question, the top two answers have a textbook reference and an actual proof. |
max min is less than min max proof | $f(x_0, y_0) \leq f(x, y)$ because $f(x, y_0) \leq f(x, y)$ for all $x$ and $y$, and $f(x_0, y_0) \geq f(x, y_0)$ since $x_0$ is a maximizer. Similar logic applies to $f(x_1, y_1)$. |
Interchanging order of limits | The polynomial does not converge to $e^{\lambda}$ uniformly on the whole real line, so you cannot carelessly interchange the limit and the summation.
For a thorough discussion, see this link or any general textbook on analysis. |
Viewing a UHF-algebra by an infinite tensor-product | We consider $M_{2^\infty}$ as the limit of
$$
(M_2^{\otimes n}, \varphi_n)
$$
where $\varphi_n(x) = x \otimes 1$. Furthermore, we identify elements in $M_{2^n}$ with their images in $M_{2^\infty}$.
Then, for say $x = e_{11} \in M_2$ you get
$$
\sigma(x) = 1 \otimes x.
$$
But $1 \otimes x \neq x \otimes 1$ and $x \otimes 1 = x$ in $M_{2^\infty}$. |
How many numbers (up to $2^n$) is the power of any number (exponent greater than $1$)? | The number of squares below or equal to $2^n$ is exactly
$$
\lfloor \sqrt{2^n}\rfloor = \lfloor\sqrt{2}^n\rfloor\approx 1.41^n
$$
which grows exponentially in $n$. If you include higher powers into the count, then it will get larger, but it's still clearly bounded above by $2^n$. We conclude that the number of pure powers below $2^n$ grows exponentially in $n$. |
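A brute-force sketch of the square count:

```python
from math import isqrt

for n in range(1, 16):
    N = 2**n
    brute = sum(1 for k in range(1, N + 1) if k * k <= N)
    assert brute == isqrt(N)      # exactly floor(sqrt(2^n)) squares up to 2^n

# the count floor(sqrt(2)^n) grows exponentially; e.g. for n = 20 it is 2^10
assert isqrt(2**20) == 1024
```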
Orthogonal Complement of the Column Space | You want those vectors which are orthogonal to the columns of the given matrix $A$. So you want a row vector $x$ such that
$$xA=0$$
Taking transpose
$$A^Tx^T=0$$
So essentially you want to solve for the solution of the homogeneous system with the matrix $A^T$.
Note: this exercise is based on the fact that the null space is orthogonal to the row space (which is the same as the column space of the transposed matrix). |
Difference between Zorn's Lemma and the ascending chain condition | The hypothesis of Zorn's lemma is much weaker than the ascending chain condition.
Consider the natural numbers, with an added maximal point which we shall call $\infty$. This partial order satisfies Zorn's lemma, every chain has an upper bound, indeed $\infty$ itself is an upper bound (it's a maximum element!) of any chain.
But the chain $\Bbb N$ is strictly increasing and does not stabilize. So this linear order does not satisfy the ascending chain condition.
The two statements are very close, and for a good reason. Stating that "the ascending chain condition implies the existence of maximal elements" is equivalent to stating Zorn's lemma when the chains are finite. Namely, "if every chain is finite then there exists a maximal element". This is also equivalent to the Principle of Dependent Choice which is a weak version of the axiom of choice (although strong enough to prove countable choice). |
Homotopic maps to $S^n$ | Here's a proof in terms of homology. Let $x$ be an $n$-chain on $M$ that is a fundamental class of $(M,\partial M)$ (i.e., $x$ is a cycle relative to $\partial M$ and $[x]\in H_n(M,\partial M)$ is the generator corresponding to the orientation on $M$). Let $i,j:M\to M'$ be the two inclusion maps. Then I claim that $y=i_*(x)-j_*(x)$ is a fundamental class for $M'$. Indeed, since $\partial x$ is contained in $\partial M$, $\partial i_*(x)=\partial j_*(x)$, so $y$ is a cycle. Now note that $H_n(M',j(M))\cong H_n(M,\partial M)$ by excision and that the image of $[y]$ in $H_n(M',j(M))\cong H_n(M,\partial M)$ is just $[x]$, which is a generator of $H_n(M,\partial M)$. If $\varphi:\mathbb{Z}\to\mathbb{Z}$ is a homomorphism and $\varphi(n)=1$, we must have $n=\pm 1$. Applying this to the map $\mathbb{Z}\cong H_n(M')\to H_n(M,\partial M)\cong\mathbb{Z}$, we conclude that $[y]$ must be a generator of $H_n(M')$.
So to compute the degree of $f\circ \pi$, we can just compute $(f\circ\pi)_*([y])\in H_n(S^n)$. But $\pi_*i_*=\pi_*j_*$ so $f_*\pi_*(y)=f_*\pi_*i_*(x)-f_*\pi_*j_*(x)=0$ (as a chain, not just as a homology class!). Thus $\deg(f\circ \pi)=0$. |
Find the Ф(28) = 12 primitive roots modulo 29... | A primitive root modulo $29$ is a generator of the cyclic group $C_{28}$.
Hence if $a$ is a primitive root modulo $29$, then $a^k$ is also a primitive root modulo $29$ if and only if $\gcd(k,\phi(29))=1$. Hence the primitive roots for $29$ are: $2$, $2^3 = 8$, $2^5 \equiv 3$, $2^9 \equiv 19$,
$2^{11}\equiv 18$, $2^{13}\equiv 14$, $2^{15}\equiv 27$, $2^{17}\equiv 21$, $2^{19}\equiv 26$, $2^{23}\equiv 10$, $2^{25}\equiv 11$, and $2^{27}\equiv 15 \pmod{29}$. |
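The list is easy to verify by brute force (a sketch): $g$ is a primitive root mod $29$ exactly when its multiplicative order is $28$.

```python
from math import gcd

def order(g, p):
    """Multiplicative order of g modulo the prime p."""
    k, x = 1, g % p
    while x != 1:
        x = x * g % p
        k += 1
    return k

prim_roots = {g for g in range(1, 29) if order(g, 29) == 28}
assert len(prim_roots) == 12          # phi(28) = 12 of them
assert prim_roots == {2, 3, 8, 10, 11, 14, 15, 18, 19, 21, 26, 27}
# equivalently: the primitive roots are 2^k mod 29 for gcd(k, 28) = 1
assert prim_roots == {pow(2, k, 29) for k in range(1, 29) if gcd(k, 28) == 1}
```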
Homotopy class of a loop in the $2$-skeleton of a simplicial complex | The inclusion of the $2$-skeleton into $X$ induces an isomorphism on $\pi_1$, and in particular an injection. This means that two loops in the skeleton which are homotopic in $X$ are already homotopic in the skeleton! |
How is quantifier elimination accomplished in second and higher order logic? | You just replace $\forall x \exists y.\phi$ by $\exists f \forall x.\phi[f(x)/y]$ for higher-order formulas $\phi$ in the same way that you did for a first-order formula. The Skolem functions will acquire more and more arguments as you go. The axiom of choice is needed to show that the skolemised formula is equivalent to the original.
Your example would skolemise like this:
$$
\forall x \exists y \exists R \exists f. P(x, y) \land Q(f(x)) \land R(y, f(x))
\Leftrightarrow
\exists y' \exists R' \exists f \forall x.
P(x, y'(x)) \land Q(f(x)) \land R'(x, y'(x), f(x))
$$
where I have made the slight optimisation of leaving $f$ as it is, because it is already parametrized by $x$. |
Will the product of these two matrices be unitary? | $V'$ is unitary, therefore yes. |
When I read the question related to two population hypothesis test, how can I decide whether the population is dependent or independent? | It does not make sense to talk about dependence of populations. You need to ask yourself: "if I take item #1 from sample set #1, can I find an item in sample set #2 to which it has a real relationship, i.e., can two such items logically be paired?"
Example: say you own a pizza shop and you decide to change your recipe. You take 10 customers and ask each of them to rate your old-style pizzas, and then ask the same 10 people to rate the new pizzas. Then you can match these two sample sets into pairs, because they involve the same people: Sarah's score for the old style forms a pair with Sarah's score for the new pizzas.
This is when you will apply a paired-sample t-test (for average scores).
However, if you asked 10 (or some other number of) DIFFERENT people after changing the recipe, John's score for the old-style pizza cannot be related to any of the new-style scores, because John is not part of the new scores. More strongly, if even one person in the survey has been added/deleted/exchanged, you no longer have 10 matching pairs of scores, hence you cannot apply a paired-sample t-test but must resort to an independent-samples t-test.
While an independent-samples test can always be applied, it gives slightly larger uncertainties (a larger p-value) than a matched-pairs test, so you should apply the latter whenever it is feasible. |
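A small sketch showing how the paired statistic is computed on the pizza example; the scores below are invented for illustration.

```python
from statistics import mean, stdev
from math import sqrt

# hypothetical scores: the same 10 customers rate the old and the new recipe
old = [7, 6, 8, 5, 9, 6, 7, 8, 6, 7]
new = [8, 7, 9, 6, 9, 7, 8, 8, 7, 8]

# paired-sample t statistic: work with the per-person differences
diffs = [b - a for a, b in zip(old, new)]
t_paired = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
assert abs(t_paired - 6.0) < 1e-9   # mean diff 0.8, sample sd sqrt(1.6/9), n = 10
```

With scipy available, the same statistic (up to sign convention) comes from `scipy.stats.ttest_rel(old, new)`, and `scipy.stats.ttest_ind` gives the independent-samples version; the pure-Python computation above is just to make the formula explicit.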
$2$ different $3\times 3$ matrices, $1$ of its eigenvalue is exactly the same, is the eigenvector same? | No.
$\begin{pmatrix}
1 & 0 & 0 \\
0 & 2 & 0\\
0 & 0 & 3
\end{pmatrix}
$
and
$\begin{pmatrix}
0 & 0 & 0 \\
0 & 1 & 0\\
0 & 0 & 0
\end{pmatrix}
$
both have an eigenvalue of $1$ but have different eigenvectors. |
Solve $\frac{\sin (10^\circ) \sin (30^\circ)}{\sin 40^\circ \sin (80^\circ-x^\circ)} = \frac{\sin 20^\circ}{\sin x}$ | You can use
\begin{equation}
\sin a\sin b\sin c=\frac{1}{4}(\sin(a+b-c)+\sin(a-b+c)+\sin(-a+b+c)-\sin(a+b+c))
\end{equation}
With this you get
\begin{equation}
\sin(40-x)+\sin(-20+x)+\sin(20+x)-\sin(40+x) \\
=
\sin(-20+x)+\sin(60-x)+\sin(100-x)-\sin(140-x)
\end{equation}
Then you can use $\sin(140 - x) = \sin(180 - (140 - x)) = \sin(40+x)$ to reduce this to
\begin{equation}
\sin(40-x)+\sin(20+x) =\sin(60-x)+\sin(100-x) \\
\sin(70-(30+x))+\sin(-10+(30+x)) - \sin(90-(30+x)) - \sin(130-(30+x)) = 0 \\
\end{equation}
Now I use formula $\sin(a\pm(30+x))=\sin a\cos (30+x)\pm \cos a\sin (30+x)$.
\begin{equation}
\cos(30+x)(\sin 70 - \sin 10 - 1 -\sin130)+\sin(30+x)(-\cos 70 +\cos 10 + \cos 130) = 0 \\
-\cos(30+x)(\sin 250 + \sin 10 + 1 + \sin130)+\sin(30+x)(\cos 250 +\cos 10 + \cos 130) = 0
\end{equation}
Now we can notice the following
$(\cos 250 +\cos 10 + \cos 130) = \Re e^{i 10^\circ}(e^{i 0^\circ} + e^{i 120^\circ} + e^{i 240^\circ})$ and $(\sin 250 +\sin 10 + \sin 130) = \Im e^{i 10^\circ}(e^{i 0^\circ} + e^{i 120^\circ} + e^{i 240^\circ})$. But $e^{i 0^\circ} + e^{i 120^\circ} + e^{i 240^\circ} = 0$, which yields
\begin{equation}
-\cos(30+x) = 0
\end{equation}
which has two solutions $x=60$ and $x=-120$. |
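A numerical check of these solutions against the cross-multiplied form $\sin 10^\circ\sin 30^\circ\sin x=\sin 40^\circ\sin(80^\circ-x^\circ)\sin 20^\circ$ of the original equation:

```python
from math import sin, radians

d = lambda t: sin(radians(t))   # sine of an angle given in degrees

for x in (60, -120):
    left = d(10) * d(30) * d(x)
    right = d(40) * d(80 - x) * d(20)
    assert abs(left - right) < 1e-12   # both roots of cos(30 + x) = 0 check out
```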
Differentiabily of a complex valued function | $f(z)=x^2+y^2+y+1+ix$ where $z=x+iy$
$u=x^2+y^2+y+1$ ; $v=x$
Compute the partial derivatives and check the Cauchy-Riemann equations
$u_x=2x,u_y=2y+1;v_y=0;v_x=1$
which are satisfied only when $x=0$, $y=-1$, i.e. option (c).
Also, the partial derivatives are continuous at $-i$, so $f$ is differentiable at $z=-i$ (and only there). |
Rudin's discussion on cantor set | As you observed the length of the interval $\big(\frac{3k+1}{3^m},\frac{3k+2}{3^m}\big)$ is $\frac{1}{3^m}$. Now given $\alpha<\beta\in\mathbb{R}$, let $m$ be sufficiently large so that $\frac{1}{3^m}<\frac{\beta-\alpha}{4}$. This choice of $m$ implies that $$\text{($\star$) }\beta-\frac{4}{3^m}>\alpha$$
From now on $m$ is fixed and we have to choose $k$ (which is going to be dependent on $m$). Since $\frac{3k+1}{3^m}\rightarrow\infty $ as $k\rightarrow\infty$ there exists $k$ for which $\frac{3k+1}{3^m}>\alpha$, let $k_0$ be the minimal $k$ satisfying this condition.
It is left to show that the interval $\big(\frac{3k_0+1}{3^m},\frac{3k_0+2}{3^m}\big)$ lies in $(\alpha,\beta)$. We already know that $\alpha<\frac{3k_0+1}{3^m}$, hence it is left to show that $\frac{3k_0+2}{3^m}<\beta$.
Suppose not, then $\frac{3k_0+2}{3^m}\geq\beta$, hence $\frac{3k_0-2}{3^m}+\frac{4}{3^m}\geq \beta$ we conclude that $\frac{3k_0}{3^m}\geq\beta-\frac{4}{3^m}$ From ($\star$) we have that $\frac{3(k_0-1)+1}{3^m}>\alpha$ which is a contradiction to the minimality of $k_0$. This completes the proof. |
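The construction in the proof can be carried out mechanically (a sketch, for $0\le\alpha<\beta$, using exact rational arithmetic):

```python
from fractions import Fraction

def middle_third_interval(alpha, beta):
    """Follow the proof: pick m with 3^-m < (beta - alpha)/4, then the minimal
    k0 with (3 k0 + 1)/3^m > alpha; return the interval from the statement."""
    alpha, beta = Fraction(alpha), Fraction(beta)
    m = 1
    while Fraction(1, 3**m) >= (beta - alpha) / 4:
        m += 1
    k0 = 0
    while Fraction(3 * k0 + 1, 3**m) <= alpha:
        k0 += 1
    return Fraction(3 * k0 + 1, 3**m), Fraction(3 * k0 + 2, 3**m)

for a, b in [(Fraction(1, 10), Fraction(2, 10)),
             (Fraction(355, 1000), Fraction(360, 1000))]:
    lo, hi = middle_third_interval(a, b)
    assert a < lo < hi < b   # the middle-thirds interval lands inside (a, b)
```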
Using thm.13 on pg.374 in Royden and Fitzpatrick "Real analysis" fourth edition. | If you have $E=\bigcup_nE_n$ where $E_n\in\mathcal{M}$ you can use this result to prove countable additivity. First, the typical trick to express as an integral over the whole space,
$$
\nu(E)=\int_Efd\mu=\int_Xf\chi_Ed\mu.
$$
Next, it is clear that $X=E^c\cup E=E^c\cup\bigcup_nE_n$. Using the theorem,
$$
\nu(E)=\int_{E^c}f\chi_Ed\mu+\sum_n\int_{E_n}f\chi_Ed\mu=0+\sum_n\int_{E_n}fd\mu=\sum_n\nu(E_n).
$$
Since $f\ge 0$ $\mu$-a.e., $\nu(E)\ge 0$ for all $E\in\mathcal{M}$, so $\nu$ is a measure. |
Papers published by mathematicians | Papers published in the field of mathematics are no different from those in any other field. In a PhD thesis, what you usually do is put all your related work together and defend it before a committee. For example, if you have published three papers improving on or related to your previous work, then you combine them and defend them as your PhD thesis; if you have published just one, then you defend that alone. The question should rather be: is the conference/journal you're publishing in of high quality/rank? The higher the rank, the better the recognition. |
Is the space $s$ separable? | What you’ve written as a norm isn’t one: it’s not true, for instance, that $\|2x\|_s=2\|x\|_s$. However, if you set
$$d(x,y)=\sum_{k\ge 1}\frac{|x_k-y_k|}{2^k\left(1+|x_k-y_k|\right)}\;,$$
the function $d$ is a metric on the space of real sequences. And yes, the set of rational sequences with only finitely many non-zero terms is indeed a countable dense subset of the resulting metric space. |
How do I solve this problem using binomial theorem with square root in it? | Hint:
Any term in the expansion would be of the form $${18\choose r} \cdot (x^2)^r \cdot\left(\frac 12 x^{-1/2}\right)^{18-r} $$ You need $x^{2r -\frac{18-r}{2}}=\frac{1}{x\sqrt x} =x^{-3/2}$ or $$2r-\frac{18-r}{2} =-\frac 32$$ Solve for $r$ from this equation and just plug its value in the first expression. |
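Carrying the hint to its conclusion (a sketch, assuming the expansion is of $\left(x^2+\frac{1}{2\sqrt x}\right)^{18}$, as the general term suggests):

```python
from fractions import Fraction
from math import comb

# exponent of x in the r-th term: 2r - (18 - r)/2; we need it to equal -3/2
sols = [r for r in range(19)
        if Fraction(2 * r) - Fraction(18 - r, 2) == Fraction(-3, 2)]
assert sols == [3]

# coefficient of x^{-3/2}: C(18, 3) * (1/2)^{15}
coeff = comb(18, 3) * Fraction(1, 2)**15
assert coeff == Fraction(51, 2048)
```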
If (X,$\tau$) Hausdorff, then $Der(E) \in \tau^* \ \forall E \subseteq X$ | If A is a subset of a T1 space S, then
A' (the derived set of is A) is closed.
To show S - A' is open, assume x in S - A'.
Thus some open U with x in U and empty U $\cap$ {A - {x}).
If y in U, y /= x, then y in S\A'. Proof:
y in open U - {x}; (U - {x}) $\cap$ (A - {y}) is empty; y not in A'
As x in S - A', altogether U subset S - A' and S - A' is open.
Notice that only T1 is needed to show A' is closed.
For other spaces there is the theorem
(for all x, {x}' is closed) iff (for all A, A' is closed). |
Stirling number of the second kind recurrence relations | Hint for the first two:
1) Fix the $(n+1)$-th element in one class. You want to construct every possibility for that particular class, so consider the cases when that class has 1 element, 2 elements, etc. If you want $j$ elements in that class, you must choose $j-1$ of the $n$ remaining elements, and the other $n-(j-1)$ elements must be divided into the other $k$ classes.
2) Fix the $(n+1)$-th element in one class. Think of $(k+1)^{n-j}$ as saying, combinatorially, that you assign to each of those $n-j$ elements one of the $k+1$ possible colors. |
Are $\operatorname{asin}$, $\operatorname{acos}$ and $\operatorname{atan}$ considered acceptable standard notation? | All three notations are quite common. You can almost certainly use the notation $\operatorname{atrig}$ (where $\operatorname{trig}$ is any trigonometric function you like, e.g. $\sin$, $\cos$, $\tan$, etc.) freely without causing any confusion or worrying about ambiguity.
For example:
The question What are "$\tan$" and "$\operatorname{atan}$"? on Math SE asks for an explanation of the $\operatorname{atan}$ function in the context of Java programming. The Java documentation confirms that this is part of the language's syntax.
Other programming languages also use the abbreviated $\operatorname{atrig}$ syntax: C++, Python, FORTRAN, and so on. Indeed, on the Wikipedia page discussing inverse trigonometric functions, it is claimed in the notation section that "In computer programming languages the inverse trigonometric functions are usually called by the abbreviated forms asin, acos, atan." This is marked "citation needed", but still appears reliable.
The calculator I have used since high school uses ASIN, ACOS, and ATAN.
On the other hand, TI and Casio calculators appear to use the $\operatorname{trig}^{-1}$ notation.
That being said, my feeling is that the use of $\operatorname{atrig}$ in purely mathematical writing is somewhat uncommon. I can't recall ever having seen this abbreviated notation in a paper, and I likely wouldn't use it myself, but I can't imagine that you would cause confusion. On the other hand, mathematical typesetting packages make it easy to change notation on the fly. For example using the AMS's LaTeX packages, use the \DeclareMathOperator macro, e.g.
\DeclareMathOperator{\asin}{arcsin}
in order to use \asin to typeset the text $\DeclareMathOperator{\asin}{arcsin}\asin$. If you don't like that, you could instead use
\renewcommand{\asin}{\sin^{-1}}
to get $\renewcommand{\asin}{\sin^{-1}}\asin$. This lets you display a more verbose notation while retaining the more abbreviated notation in your LaTeX source. |
Evaluation $\lim_{n\to \infty}\frac{n^{\log m}}{m^{\log n}}$ | $$n^{\log m} = e^{\log n \log m} = m ^{\log n},$$ so the ratio is identically $1$ and the limit is $1$. |
Quotient Ring of $M_n(\mathbb Z)$ | For $I$ a non-zero two-sided ideal of $M_n(\Bbb{Z})$ then $\Bbb{Q} I$ is a two sided ideal of $M_n(\Bbb{Q})$ thus the whole it, a $n^2$-dimensional $\Bbb{Q}$-vector space, generated by $n^2$ elements, once multiplied by the right integer they are in $I$ and they generate a finite index sub-$\Bbb{Z}$-module of $M_n(\Bbb{Z})$. |
Free context grammar for $\mathscr{L_2}=\{0^i1^j2^k|i,j,k\geq0,i+j=2k\}$ | You need to add any two symbols of $\{0,1\}$ when adding one $2$, so there are three options:
$$S \to T_{00} \\
T_{00} \to 00T_{00}2|T_{01} \\
T_{01} \to 01T_{11}2|T_{11} \\
T_{11} \to 11T_{11}2|\epsilon$$ |
Does it stand that $\lim \inf |f_n|^p=|f|^p$ for this reason? | Firstly, the right-hand side of the implication you wrote is exactly the definition of the left-hand side, so the implication as stated says nothing.
As for your question: since $f_n \rightarrow f$ a.e., we have $\liminf |f_n (x)|^p = \lim |f_n (x) |^p = |f(x)|^p$ for a.e. $x$, which is written as $\liminf |f_n|^p = |f|^p$ a.e. |
Simple question on using Equivalence relation property to solve an equation | Well it's reflexive so we must have $(m,n)\sim(m,n)$ or $m-An=Bm-n$ for all $m,n$. So $(1-B)m=(A-1)n$ for all $m,n$. If $1-B \ne 0$ then $\frac {A-1}{1-B} = \frac mn$ for all $m,n; n\ne 0$. But that's impossible as $\frac {A-1}{1-B}$ would be a constant. So $1-B=0$ and $B=1$ and $(A-1)n=0$ for all $n$. So $A-1=0$ and $A=1$.
That is the only case that makes $\sim$ reflexive. So $(m,n)\sim(u,v) \iff m-n=u-v$.
So that is the only possible answer. But we have to show that that is an answer and that $(m,n)\sim(u,v) \iff m-n = u-v$ is an equivalence.
We have to show that it is an equivalence relation. It is reflexive, since $m-n= m-n$ for all $m,n$.
Is it symmetric? Does $m-n = u-v \implies u-v = m-n$? Yes. So it is symmetric.
Is it transitive? If $m-n = u-v$ and $u-v = w-z$, does that imply $m-n=w-z$? Yes it does, so it is transitive. |
The number of dominating sets of a bipartite graph is not exactly divisible by $2$ | This is actually true for all graphs $G$, and can be proven by considering the Domination Polynomial of a graph; $D(G,x) := \sum_{k=0}^{|V(G)|} d_k(G) x^k$
where $d_k(G)$ is the number of dominating sets of cardinality $k$ in $G$.
Your problem is thus to show that $D(G,1) \equiv 1 \mod 2$.
We proved it as a corollary to another result in Subset-Sum Representations of Domination Polynomials. (Graphs Combin. 30 (2014), no. 3, 647–660.)
It can also be shown by induction using this recurrence from Recurrence relations and splitting formulas for the domination polynomial: (Electron. J. Combin. 19 (2012), no. 3, Paper 47)
$$D(G, x) = xD(G/u, x) + D(G − u, x) + xD(G − N[u], x) − (1 + x)p_u(G, x)$$
When we substitute $x=1$ into this you can see that the last term is even and thus there are 3 terms belonging to smaller graphs which are necessarily odd by the induction hypothesis; 3 odd numbers added makes an odd number.
The first proofs we found of this result were published by Andries E. Brouwer: The number of dominating sets of a finite graph is odd. |
Why does the following set always have finite measure? | If $ n=\infty $, then $ n\not\in\mathbb N $. |
Finding derivative at a point from functional equation | Just using the functional relation, for $y \ne 0$, we have that
$$
\frac{f(x+y)-f(x)}{y} =\frac{f(x)f(y)-f(x)}{y} = f(x) \cdot \frac{f(y)-1}{y-0}
$$
Since $f(0)=1$, and taking the limit as $y \to 0$, we get
$$
f'(x) = f(x)\cdot f'(0).
$$
In particular, $f'(5) = f(5)\cdot f'(0)$. So, the expected answer would be $f'(5)=6$. As it was pointed out by @DMcMor, another issue is to see if such a function exists... |
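A quick numeric sanity check (not from the original answer): $f(x)=e^{cx}$ satisfies the functional equation $f(x+y)=f(x)f(y)$ with $f(0)=1$ for any constant $c$ (the value $c=0.37$ below is arbitrary), and finite differences confirm $f'(5)=f(5)\,f'(0)$.

```python
import math

c = 0.37  # arbitrary; any f(x) = e^{cx} satisfies f(x+y) = f(x)f(y), f(0) = 1
f = lambda x: math.exp(c * x)

h = 1e-6
deriv = lambda x: (f(x + h) - f(x - h)) / (2 * h)  # central difference

lhs = deriv(5.0)
rhs = f(5.0) * deriv(0.0)
rel_err = abs(lhs - rhs) / abs(rhs)
```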
Cesàro sum of $1+ 0 - 1 + 1 + 0 - 1 + \dots$ | Let's say the first term is the $0$th term. If $s_n,n=0,1,2,3,\dots$ are the partial sums,
$$ s_{3n} = 1,\\s_{3n+1}=1,\\s_{3n+2}=0.$$
Consequently
$$ \sum_{n=0}^{3k} s_n = 2k+1, \\\sum_{n=0}^{3k+1} s_n= 2k+2, \\\sum_{n=0}^{3k+2}s_n = 2k+2. $$
The Cesàro sums are the average of the partial sums $c_k = \frac1{k+1}\sum_{n=0}^k s_n$, and it's easy to check via the squeeze theorem that $c_{3k}\to 2/3$, $c_{3k+1}\to 2/3$, $c_{3k+2}\to 2/3$. Hence $c_{k}\to 2/3$.
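The limit $2/3$ can be confirmed by direct computation of the running averages (a sanity check, not part of the argument above):

```python
# partial sums s_k of 1 + 0 - 1 + 1 + 0 - 1 + ... and their running averages c_k
N = 300_000
terms = [[1, 0, -1][k % 3] for k in range(N)]

s, total, cesaro = 0, 0, []
for k, t in enumerate(terms):
    s += t          # s_k
    total += s      # s_0 + ... + s_k
    cesaro.append(total / (k + 1))

final = cesaro[-1]
err = abs(final - 2 / 3)
```

Since $N$ is a multiple of $3$, the last average is exactly $\frac{2k+2}{3k+3}=\frac23$.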
Pigeonhole Principle and Geometry | HINT: Think parity. The midpoint of $\langle a,b\rangle$ and $\langle r,s\rangle$ is $\left\langle\frac{a+r}2,\frac{b+s}2\right\rangle$. What does it take for both $a+r$ and $b+s$ to be even? |
Is an open subset of an open set of a topological space open? | Yes. $U$ open in $V$ means that $U=V\cap W$ for some $W$ open in $X$. So $U$ is an intersection of two open sets in $X$. |
Why is a curve parameterized by arc length necessarily a unit speed curve? | If $t=s$ your definition of arc-length becomes
$$
s=\int_{s_0}^s|\sigma'(u)|\,du
$$
Differentiating w.r.t $s$ we get
$$
1=|\sigma'(s)|
$$
which shows when parametrized using $s$, the curve becomes unit speed. |
Proving there are only finitely many $n$-tuples of prime numbers such that $(p_{1}p_{2}...p_{n})|(p_{1}+k)(p_{2}+k)...(p_{n}+k)$ | Let $p_1,\ldots,p_n$ be such a tuple. Choose a sequence $q_0,q_1,\ldots\in\{p_1,\ldots,p_n\}$ such that $q_i|q_{i+1}+k$ for all $i$, and $q_0 = \max\{p_i\}$. Let $i_0$ be the smallest value of $i$ for which $q_{i}\neq q_{i+1}+k$. Write $\ell$ for the smallest prime not dividing $k$.
I claim $i_0< \ell$. If instead $i_0\geq\ell$, then since $k$ is coprime to $\ell$, one of the $\ell$ values $q_i=q_0-ik$ for $0\leq i\leq \ell-1$ would be divisible by $\ell$, hence equal to $\ell$, in which case $q_{i+1}=\ell-k\leq 1$ could not be prime.
Now, by assumption $q_{i_0} = q_0-i_0k$ is a proper divisor of $q_{i_0+1}+k$, so
$$
q_0 + k\geq q_{i_0+1} + k \geq 2(q_0-i_0k),
$$
which implies
$$
q_0\leq (2i_0+1)k <2\ell k.
$$
So we have a bound on $q_0=\max\{p_i\}$ depending only on $k$. This implies that for fixed $k$, there are only finitely-many tuples satisfying the given divisibility. |
Mapping (in/sur/bi-jective) between a group to its (normal subgroup, quotient group) | There is a bijection $G\to N\times Q$ as follows:
I'll take $Q$ to be the set of right cosets of $N$. For each $q\in Q$ fix a representative $g_q\in G$ which maps to $q$ in the quotient map $G\to Q$. The bijection is $x\mapsto(xg_{Nx}^{-1},Nx)$. This has inverse $(n,q)\mapsto ng_q$ so is bijective.
Rather trivially, the above is a homomorphism if and only if $G\cong N\times Q$.
As for when an injective homomorphism $G\to N\times Q$ exists, I don't believe there's any general criteria, other than the trivial case $G\cong N\times Q$ - it's certainly not possible in general, for example $G=\mathbb{Z}/4\mathbb{Z}$ has normal subgroup and quotient $N\cong Q\cong\mathbb{Z}/2\mathbb{Z}$, but $G\not\cong\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$.
It is however possible to find infinite classes of non-trivial examples where it is possible, like $G=\mathbb{Z}$ and $N=n\mathbb{Z}$ for any integer $n$, with injective map $x\mapsto (nx,0)$ |
Let $u$ be transcendental over $\mathbb{Z}_p$. Why is $t^p-u$ irreducible over $\mathbb{Z}_p(u)$? | The top degree term of $v(u)$ has degree divisble by $p$. The top degree term of $uw(u)^p$ has degree congruent to $1$ mod $p$ (the $w(u)^p$'s leading term has degree divisible by $p$ and multiplying by $u$ adds $1$) and in particular not divisible by $p$. So these two terms are unequal and hence cannot cancel. |
How to find sum of coefficients of $\frac{d^n}{dx^n} \left( Z(x)^m \right)$ | Here is a simple approach: for every fixed $(m,n)$, Leibniz rule shows that indeed there exists some unique (polynomial) function $U_{m,n}$ such that the desired identity holds for every regular enough function $Z$ and every $x$. Let us use a specific function $Z$, say, $$Z(x)=\mathrm e^x.$$ Then $Z^{(k)}(x)=Z(x)$ for every $k$ and $Z(x)^m=\mathrm e^{mx}$, hence $$U_{m,n}(\mathrm e^x,\mathrm e^x,\ldots,\mathrm e^x)=\frac{\mathrm d^n}{\mathrm dx^n}\left(\mathrm e^{mx}\right)=m^n\mathrm e^{mx},$$ in particular, for $x=0$, $\mathrm e^x=\mathrm e^{mx}=1$ hence
$$U_{m,n}(1,1,\ldots,1)=m^n.$$
Likewise, for every $a$,
$$U_{m,n}(a,a,\ldots,a)=m^na^m,$$
and finally, for every $\color{blue}{c}$ and $\color{red}{a}$,
$$U_{m,n}(\color{red}{a},\color{red}{a}\color{blue}{c},\color{red}{a}\color{blue}{c}^2,\ldots,\color{red}{a}\color{blue}{c}^n)=(m\color{blue}{c})^n\color{red}{a}^m.$$ |
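The identity $U_{m,n}(a,ac,\ldots,ac^n)=(mc)^n a^m$ can be checked numerically (an illustration, not part of the argument): with $Z(x)=e^{cx}$ we have $Z^{(k)}(x)=c^k Z(x)$, so the $n$-th derivative of $Z(x)^m$ should equal $(mc)^n Z(x)^m$. Below, repeated central differences approximate the third derivative.

```python
import math

def nth_deriv(f, x, n, h=1e-2):
    """n-th derivative by repeated central differences (O(h^2) error)."""
    if n == 0:
        return f(x)
    return (nth_deriv(f, x + h, n - 1, h) - nth_deriv(f, x - h, n - 1, h)) / (2 * h)

m, c, x0 = 2, 0.7, 0.3
Z = lambda x: math.exp(c * x)        # Z^{(k)}(x) = c^k Z(x): derivatives a, ac, ac^2, ...
num = nth_deriv(lambda x: Z(x) ** m, x0, 3)
exact = (m * c) ** 3 * Z(x0) ** m    # U_{m,n}(a, ac, ..., ac^n) = (mc)^n a^m with n = 3
rel_err = abs(num - exact) / abs(exact)
```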
Help with showing $\int_{\mathbb{C}P^1} \omega_{FS} = \pi$ | Let $\varphi : U_1 \to \mathbb C$ mapping $p \mapsto z(p)$ be the first coordinate chart.
$\omega_{FS}$ is a differential form on $\mathbb CP^1$; it can't be written in terms of $z$'s and $\bar z$'s.
However, $(\varphi^{-1})^\star \omega_{FS}$ is a differential form on $\mathbb C$. $z$ and $\bar z$ are coordinates on this $\mathbb C$. We can write
$$
(\varphi^{-1})^\star \omega_{FS} = \frac i 2 \frac 1 {(1 + |z|^2)^2} dz \wedge d\bar z
$$
So
$$
\int_U \omega_{FS} = \int_{\varphi(U)} (\varphi^{-1})^\star \omega_{FS} = \int_{\mathbb C} \frac i 2 \frac 1 {(1 + |z|^2)^2} dz \wedge d\bar z
$$
To address the question about $U_2$, note that $U_1 \cap U_2 $ is non-empty. So
$$
\int_{\mathbb {CP}^1} \omega_{FS} \neq \int_{U_1} \omega_{FS} +\int_{U_2} \omega_{FS}
$$
because the expression on the RHS double-counts on the overlap!
To evaluate the integral correctly, you can use a partition of unity subordinate to the cover $U_1, U_2$. Or even better, notice that $U_2 \setminus U_1$ is a single point - the south pole - which has measure zero. So really, $\int_{\mathbb {CP}^1} \omega_{FS} =\int_{U_1} \omega_{FS} = \pi $.
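The value $\pi$ can be confirmed numerically (a sanity check, not part of the answer): in polar coordinates the integral over $\mathbb C$ becomes $\int_0^\infty \frac{2\pi r}{(1+r^2)^2}\,dr$, which a truncated midpoint sum approximates well.

```python
import math

# ∫∫_C dx dy /(1+x²+y²)² in polar coordinates: ∫_0^∞ 2πr dr/(1+r²)²,
# truncated at R and approximated by a midpoint sum.
R, N = 1000.0, 1_000_000
dr = R / N
integral = sum(2 * math.pi * (i + 0.5) * dr / (1 + ((i + 0.5) * dr) ** 2) ** 2
               for i in range(N)) * dr
err = abs(integral - math.pi)
```

The truncation error is $\pi/(1+R^2)\approx 3\times 10^{-6}$, well inside the tolerance.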
Homeomorphism between $Y$ and $\{b\}\times Y$ | Yes, your $T$ is the restriction (to $\{b\} \times X_2$) of the second projection $\pi_2: X_1 \times X_2 \to X_2$, which is continuous by the definition of the product topology, and restrictions of continuous functions are still continuous.
Its inverse is $j: X_2 \to X_1 \times X_2$ defined by $j(x)=(b,x) \in \{b\} \times X_2$, which is continuous as $\pi_1 \circ j$ is constantly $b$ (so continuous) and $\pi_2 \circ j$ is the identity on $X_2$ (so continuous too), using the universal property of continuity of $X_1 \times X_2$. So $T$ is continuous with continuous inverse and so a homeomorphism. |
Prove that the determinant of this matrix is non-zero. | Considering all possible combinations you have that
$$D=(\pm6\pm25)\pm3(\pm9\pm20)\pm4(\pm15\pm8)=\{\pm31,\pm19\}\pm\{\pm87,\pm33\}\pm\{\pm92,\pm28\}$$ where the $,$ in the $\{,\}$ should be interpreted as "or".
Now since the summand of the last $\{\}$ is even you should try to cancel it out with a combination of numbers from the first two brackets. But after a not very tedious inspection you can see that no combination of them yields either $\pm92$ (this is almost immediate) or $\pm28$, so the even last term cannot be canceled out. Hence $D\neq0$.
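The "not very tedious inspection" can be delegated to a brute-force enumeration over all sign patterns (an illustration, not part of the original answer):

```python
from itertools import product

# Every sign pattern of D = (±6±25) ± 3(±9±20) ± 4(±15±8); none of them is 0.
values = set()
for s in product([1, -1], repeat=8):
    D = (s[0]*6 + s[1]*25) + s[2]*3*(s[3]*9 + s[4]*20) + s[5]*4*(s[6]*15 + s[7]*8)
    values.add(D)
min_abs = min(abs(v) for v in values)
has_zero = 0 in values
```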
Describe geometrically what the vector signifies. | I suppose that you know that $\mathbb{R}^2$ is a vector space of dimension $2$, and this means (by definition of the dimension of a vector space) that a basis of $\mathbb{R}^2$ consists of two linearly independent vectors. In other words, $\mathbb{R}^2$ is the span of any pair of linearly independent vectors.
Now note that $\vec v=[1,1]$ and $\vec u= [2,6]$ are linearly independent, and, as a consequence, the span of these two vectors is $\mathbb{R}^2$; in particular, $\vec w=[3,5]=2[1,1]+\frac{1}{2}[2,6]$ is a linear combination of $\vec v$ and $\vec u$.
So the ''geometric description'' of $B$ is the whole $\mathbb{R}^2$ |
Prove that matrix $A=c \cdot I$ where $I$ is an identity matrix | Use special matrices. If $E_{h}$ is the matrix having $1$ in place $(h,h)$ and $0$ elsewhere, then $AE_{h}$ is the matrix
$$
\begin{matrix}
[\,0&\dots&0&a_h&0&\dots&0\,]\\
&&&\uparrow\\
&&& \scriptstyle h
\end{matrix}
$$
having all zero columns except for the $h$-th column, which is the $h$-th column of $A$. Similarly, the matrix $E_{h}A$ is the matrix having all zero rows except for the $h$-th row, which is the $h$-th row of $A$.
Since $A$ commutes with $E_h$, these two matrices are equal; in particular, $a_{hj}=0$ for $j\ne h$. Since $h$ is arbitrary, we conclude that $A$ is diagonal.
Now consider the matrix $E_{hk}$ that's obtained from the identity by switching the $h$-th row with the $k$-th row (for $h\ne k$). It's easy to show that $E_{hk}A$ is the matrix obtained from $A$ by switching the $h$-th row with the $k$-th row. Similarly, $AE_{hk}$ is obtained from $A$ by switching the $h$-th column with the $k$-th column.
Thus, by comparing $E_{1k}A$ with $AE_{1k}$, we conclude that the coefficient in place $(h,h)$ of $A$ is the same as the coefficient in place $(1,1)$.
Therefore all coefficients on the diagonal are equal. Call this common coefficient $c$ and we have proved that $A=cI$.
Note that this uses only matrix multiplication where the special matrices used have only coefficients $0$ and $1$, so it holds for matrices over any commutative ring. |
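A small pure-Python illustration of the mechanism (the matrices are hypothetical examples, not from the question): a scalar matrix commutes with the special matrices used above, while a non-scalar diagonal matrix already fails against the row swap $E_{12}$.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

n = 3
I = [[int(i == j) for j in range(n)] for i in range(n)]
E12 = [I[1], I[0], I[2]]                                 # identity with first two rows swapped
E1 = [[int(i == j == 0) for j in range(n)] for i in range(n)]

cI = [[3 * I[i][j] for j in range(n)] for i in range(n)]       # scalar matrix 3I
D = [[(i + 1) * I[i][j] for j in range(n)] for i in range(n)]  # diag(1,2,3), not scalar

scalar_commutes = (matmul(cI, E12) == matmul(E12, cI)) and (matmul(cI, E1) == matmul(E1, cI))
diag_fails = matmul(D, E12) != matmul(E12, D)
```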
Is given vector in spanned vector space | HINT
Since you got a row of zeros, there must exist such numbers $a,b,c \in \mathbb{R}$ that $$a \vec{v} + b\vec{w} = c \vec{u}.$$ |
Find and classify the stationary points of $y = x^ 2/(x-4)$ | Take the second derivatives to determine whether a function is maximized or minimized at a point.
$4^+$ means approaching the point $4$ from above. |
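Working this out (a sketch, not part of the original answer): $y'=\frac{x^2-8x}{(x-4)^2}$ vanishes at $x=0$ and $x=8$, and the second derivative classifies them as a local maximum and a local minimum respectively. A finite-difference check:

```python
y = lambda x: x**2 / (x - 4)

h = 1e-6
dy = lambda x: (y(x + h) - y(x - h)) / (2 * h)
# derivative vanishes at the stationary points x = 0 and x = 8
flat = abs(dy(0.0)) < 1e-5 and abs(dy(8.0)) < 1e-5

h2 = 1e-4
d2y = lambda x: (y(x + h2) - 2 * y(x) + y(x - h2)) / h2**2
is_max_at_0 = d2y(0.0) < 0   # y'' < 0: local maximum
is_min_at_8 = d2y(8.0) > 0   # y'' > 0: local minimum
```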
Limit of a sequence in a decreasing family of open sets | The same counterexample as to the linked question works:
Let $A_0 = (0, 3)$
Let $A_i = (0, 1/i) \cup (1-1/i, 1+1/i)$ for $i>0$
Then $A = \{1\}$ but if we pick $a_i = \frac{1}{2i} \to 0$, we don't have $0 \in A$. |
$A^n=0$ if $a_{ij}=0$ for all $i\geq j$. | Observe that such a matrix is strictly upper triangular, i.e., all entries on and below the main diagonal are zero. Our claim is therefore that any strictly upper triangular matrix is nilpotent.
Consider a strictly upper triangular $n \times n$ matrix $A.$ Observe that the matrix $xI - A$ is upper triangular. Particularly, all of the diagonal entries are $x.$ Considering that the determinant of an upper triangular matrix is the product of the diagonal entries, we have that the characteristic polynomial of $A$ is given by $\chi_A(x) = \det(xI - A) = x^n.$ By the Cayley-Hamilton Theorem, we have that $0 = \chi_A(A) = A^n,$ as desired. |
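A concrete illustration (the matrix below is an arbitrary example): for a strictly upper triangular $4\times 4$ matrix, $A^4=0$ while $A^3$ may still be nonzero.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# an arbitrary strictly upper triangular 4x4 matrix
A = [[0, 2, -1, 5],
     [0, 0,  3, 7],
     [0, 0,  0, 4],
     [0, 0,  0, 0]]

P = matmul(A, A)
P3 = matmul(P, A)            # A^3
P4 = matmul(P3, A)           # A^4

a4_is_zero = all(e == 0 for row in P4 for e in row)
a3_nonzero = any(e != 0 for row in P3 for e in row)
```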
Convex combination and convex set | Yes, $tx + (1-t)x'$ is a (the) straight line from the point $x$ to $x'$, for values of $t$ between $0$ and $1$.
Reversing the positions of $x$ and $x'$ makes no difference, it's still a line - it just runs in the opposite direction as you increase/decrease $t$.
And finally, yes, that would be a proof of convexity. |
If a sequence of measurable functions $f_{n}$ converge to $f$ almost everywhere then $f$ is measurable | The equality doesn't seem to be correct.
$$
\begin{alignat}{10}
A \cap f^{-1}((a,\infty)) &= \{ x \in A : f(x) > a \}\\
&= \left\{ x \in A : \lim_{i \to \infty} f_i(x) > a \right\} \\
&= \left\{ x \in A : \lim_{i \to \infty} f_i(x) \geq a + 1/n \text{ for some } n \geq 1 \right\} \\
&= \bigcup_{n=1}^\infty \left\{ x \in A : \lim_{i \to \infty} f_i(x) \geq a + 1/n \right\} \\
&= \bigcup_{n=1}^\infty \{ x \in A : \exists\ N \geq 1 \text{ such that } f_i(x) \geq a + 1/n \text{ for all } i \geq N \}\\
&= \bigcup_{n=1}^\infty \bigcup_{N=1}^\infty \bigcap_{i=N}^\infty \{ x \in A : f_i(x) \geq a + 1/n \}\\
&= \bigcup_{n=1}^\infty \bigcup_{N=1}^\infty \bigcap_{i=N}^\infty A \cap f_i^{-1}([a+1/n,\infty))\\
&= A \cap \bigcup_{n=1}^\infty \bigcup_{N=1}^\infty \bigcap_{i=N}^\infty f_i^{-1}([a+1/n,\infty)).
\end{alignat}
$$
So, you should be getting
$$
A \cap f^{-1}((a,\infty)) = A \cap \bigcup_{n=1}^\infty \bigcup_{N=1}^\infty \bigcap_{i=N}^\infty f_i^{-1}([a+1/n,\infty)),
$$
where the union over $N$ is needed because the index beyond which $f_i(x) \geq a + 1/n$ holds depends on $x$.
Alternatively, you could use that for all $x \in A$,
$$
f(x) = \lim_{n \to \infty} f_n(x) = \limsup_{n \to \infty} f_n(x) = \inf_{n \geq 1} \left \{ g_n(x) \right\},
$$
where for each $n \in \mathbb{N}$, $g_n : A \to \mathbb{R}$ is defined by
$$
g_n(x) = \sup_{i \geq n} \{f_i(x)\}.
$$
This gives, since $\inf_n g_n(x) \geq a$ exactly when $g_n(x) \geq a$ for every $n$,
$$
A \cap f^{-1}([a,\infty)) = A \cap \bigcap_{n=1}^\infty g_n^{-1}([a,\infty)),
$$
and sets of this form (for all $a$) suffice to establish measurability. Each $g_n$ is measurable because a supremum exceeds $a$ exactly when some term does:
$$
g_n^{-1}((a,\infty)) = \bigcup_{i=n}^\infty f_i^{-1}((a,\infty)).
$$
Projective Tetrahedral Representation | Yes. Explicitly one has:
$
\newcommand{\ze}{\zeta_3}
\newcommand{\zi}{\ze^{-1}}
\newcommand{\vp}{\vphantom{\zi}}
\newcommand{\SL}{\operatorname{SL}}
\newcommand{\GL}{\operatorname{GL}}
$
$$ \SL(2,3) \cong
G_1 = \left\langle
\begin{bmatrix} 0 & 1 \vp \\ -1 & 0 \vp \end{bmatrix},
\begin{bmatrix} \ze & 0 \\ -1 & \zi \end{bmatrix}
\right\rangle
\cong
G_2 = \left\langle
\begin{bmatrix} 0 & 1 \vp \\ -1 & 0 \vp \end{bmatrix},
\begin{bmatrix} 0 & -\zi \\ 1 & -\ze \end{bmatrix}
\right\rangle
$$
and
$$G_1 \cap Z(\GL(2,R)) = G_2 \cap Z(\GL(2,R)) = Z = \left\langle\begin{bmatrix}-1&0\\0&-1\end{bmatrix}\right\rangle \cong C_2$$
and
$$G_1/Z \cong G_2/Z \cong A_4$$
This holds over any ring R which contains a primitive 3rd root of unity, in particular, in the 13-adics, $\mathbb{Z}_{13}$. The first representation has rational (Brauer) character and Schur index 2 over $\mathbb{Q}$ (but Schur index 1 over the 13-adics $\mathbb{Q}_{13}$), and the second representation is the unique (up to automorphism of $A_4$) 2-dimensional projective representation of $A_4$ with irrational (Brauer) character.
You can verify that if $G_i = \langle a,b\rangle$, then $a^2 = [a,a^b] = -1$, $ a^{(b^2)} = aa^b$, and $b^3 = 1$. Modulo $-1$, one gets the defining relations for $A_4$ on $a=(1,2)(3,4)$ and $b=(1,2,3)$. |
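These claims can be machine-checked over $\mathbb F_{13}$ (a verification sketch, not part of the original answer), taking $\zeta_3 = 3$ since $3^3 = 27 \equiv 1 \pmod{13}$: the closure of the first pair of generators under multiplication is a group of order $24$ whose center is $\{\pm I\}$.

```python
p = 13
z = 3                      # primitive cube root of unity mod 13: 3**3 % 13 == 1
zi = pow(z, -1, p)         # = 9

def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return (((a*e + b*g) % p, (a*f + b*h) % p),
            ((c*e + d*g) % p, (c*f + d*h) % p))

a = ((0, 1), ((-1) % p, 0))
b = ((z, 0), ((-1) % p, zi))

# closure of {a, b} under multiplication (enough, since the group is finite)
G = {a, b}
frontier = [a, b]
while frontier:
    X = frontier.pop()
    for g in (a, b):
        for Y in (mul(X, g), mul(g, X)):
            if Y not in G:
                G.add(Y)
                frontier.append(Y)

I = ((1, 0), (0, 1))
minus_I = (((-1) % p, 0), (0, (-1) % p))
center = {X for X in G if all(mul(X, Y) == mul(Y, X) for Y in G)}

order_ok = len(G) == 24                    # |SL(2,3)| = 24
a_squared_ok = mul(a, a) == minus_I        # a^2 = -1
b3_ok = mul(mul(b, b), b) == I             # b^3 = 1
center_ok = center == {I, minus_I}         # G ∩ Z(GL(2,R)) = <-I>
```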
Tangent Question: Find the Y-int? | If the equation of the tangent is y=2x+5, then it's already in slope - intercept form, that is y=mx+b where b is the y-intercept. So the y - intercept of the tangent at 4 is 5. |
Can an open subset of an affine variety and the variety itself have isomorphic rings of regular functions? | 1) If $X$ is a locally noetherian normal scheme, $F\subset X$ a closed subset of codimension at least $2$ and $U=X\setminus F$, then the restriction morphism $$\mathcal O(X)\stackrel {\cong}{\to} \mathcal O(U)$$ is an isomorphism.
This is the analogue of a much earlier result of Hartogs in the holomorphic context and is the geometric reflexion of the algebraic formula $$ A=\cap_{ht(\mathfrak p)=1} A_\mathfrak p $$ where $A$ is a normal noetherian domain and $\mathfrak p$ runs through the height $1$ primes of $A$.
2) The above result is completely false if $F$ is of codimension one.
For example if $A$ is a domain and $f\in A$ is non invertible, the restriction morphism $$\mathcal O(\operatorname {Spec}(A))=A \hookrightarrow \mathcal O(\operatorname {Spec}(A)\setminus V(f))=A_f$$ cannot be surjective since the function $\frac 1f \in A_f$ is not in $A$ .
3) So in particular the answer to your question is:
The ring of regular functions on $\mathbb A^n_K\setminus F$ (where $F$ is a closed subvariety) is the polynomial ring $K[T_1,...,T_n]$ if and only if $F$ has codimension $\geq 2$. |
I need someone to explain this shape to me. | Your confusion probably comes from the fact that you write $a = b$ to mean “$a$ fits perfectly into $b$” instead of the usual “$a$ is equal to $b$”.
The ‘$=$’ symbol carries some assumptions, and one of them is that it is a transitive relation. This is exactly what you were describing: a relation ‘$\sim$’ (using a more generic symbol to avoid confusion) is called transitive if and only if $a\sim b$ and $b\sim c$ together imply $a\sim c$ (for all $a,b,c$ in the domain of interest). Equality is a transitive relation, and several other relations are transitive as well, e.g. the ordering relation ‘$<$’. Or in this case more relevant, the congruence between shapes would be a transitive relation as well. (It would even be an equivalence relation, which is a stronger requirement and one often associated with a symbol like ‘$=$’.)
But your “fits perfectly into” relation is not transitive. As you observed yourself. And there is no mathematical reason it should be. The main reason why you assumed that it might be is in my opinion because you used a symbol for this relation which usually describes a transitive relation. |
Why is $\int_{-\infty}^{+\infty} (e^{itx}-1-\frac{itx}{1+x^2})\frac{1}{\pi x^2} dx = -|t|?$ | We will use the fact that $\int^{\infty}_{-\infty}\frac{\sin tx}{x}dx=\pi\ \text{sgn}\ t.$ First, split the integral up into two parts and write the second part as ${\displaystyle\int}\dfrac{1}{\left(\frac{1}{x^2}+1\right)x^3}\,\mathrm{d}x$. We find that
$\displaystyle\int \dfrac{1}{x\left(x^2+1\right)}dx=-\dfrac{\ln\left(\frac{1}{x^2}+1\right)}{2}+C$. Thus, this part of the definite integral is zero.
For the first part, omitting the factor $1/\pi$ for the moment, integrate ${\displaystyle\int}\dfrac{\mathrm{e}^{\mathrm{i}tx}-1}{x^2}\,\mathrm{d}x$ by parts, to get $-\dfrac{\mathrm{e}^{\mathrm{i}tx}-1}{x}-{\displaystyle\int}-\dfrac{\mathrm{i}t\mathrm{e}^{\mathrm{i}tx}}{x}\,\mathrm{d}x.$ The first term gives zero so we are left with
${\displaystyle\int^{\infty}_{-\infty}}\dfrac{\mathrm{i}t\mathrm{e}^{\mathrm{i}tx}}{x}\,\mathrm{d}x=\displaystyle it\int^{\infty}_{-\infty}\frac{\cos tx}{x}dx-t\int^{\infty}_{-\infty}\frac{\sin tx}{x}dx.$
Using the definition of the Cauchy principal value, we may interpret the first of these integrals on the right-hand side as in $(1)$ in the above-cited link. It is then seen to be zero because the integrand is odd. So, finally, we have
$\displaystyle -t\int^{\infty}_{-\infty}\frac{\sin tx}{x}dx=-t(\pi\ \text{sgn}\ t)$. All that remains now is to multiply by $1/\pi$. |
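A numeric sanity check (not part of the derivation): after the odd terms cancel as principal values, the real part of the integrand is $(\cos tx - 1)/(\pi x^2)$, and its integral should be $-|t|$. A truncated midpoint sum for $t=2$:

```python
import math

# Real part of the integrand: (cos(tx) - 1)/(π x²); the odd imaginary parts
# vanish as principal values. Midpoint sum on (0, R], doubled by symmetry.
t = 2.0
R, N = 500.0, 1_000_000
dx = R / N
half = sum((math.cos(t * ((i + 0.5) * dx)) - 1) / (math.pi * ((i + 0.5) * dx) ** 2)
           for i in range(N)) * dx
approx = 2 * half
err = abs(approx - (-abs(t)))
```

The tail beyond $R$ contributes roughly $2/(\pi R)\approx 1.3\times 10^{-3}$, so the result matches $-2$ to about three decimal places.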
Three equal rough cylinders on a horizontal plane. | Taking moments at the point of contact between one of the lower cylinders and the plane gives $$Sr\sin 30=\mu S(r+r\cos 30)$$
$$\Rightarrow \frac 12 S=\mu S(1+\frac{\sqrt{3}}{2})$$
$$\Rightarrow \mu =2-\sqrt{3}$$ |
Discrete mathematics set relations anti symmetric | A relation $R$ on a set $A$ is antisymmetric if for any $x,y\in A,$ we have $x=y$ when $x\:R\:y$ and $y\:R\:x.$ Your second relation satisfies $x=y$ when and only when $x\:R\:y$ and $y\:R\:x,$ meaning that the second relation is antisymmetric, and is also reflexive on $A.$ As a side note, the second relation is the only antisymmetric relation with domain $A$ that is also symmetric on $A$, as discussed here.
For the first relation, $x\:R\:y$ and $y\:R\:x$ is never satisfied, so it is vacuously antisymmetric.
Added: One fairly natural way to think about a (binary) relation $R$ on a set $A$ is as a subset of the "square" $A^2=\bigl\{\langle x,y\rangle: x,y\in A\bigl\}.$ We distinguish the diagonal of $A$ as the set of elements of $A^2$ whose entries are equal--more formally, $$\Delta_A:=\bigl\{\langle a,a\rangle: a\in A\bigl\}.$$ We then define the reflection across the diagonal of $A$ to be the function $\rho_A:A^2\to A^2$ given by $\langle x,y\rangle\mapsto\langle y,x\rangle.$
Then the reflexive relations on $A$ are precisely those that contain the diagonal of $A$--that is, those $R\subseteq A^2$ such that $\Delta_A\subseteq R.$ The symmetric relations on $A$ are those that are symmetric across the diagonal of $A$--that is, those $R\subseteq A^2$ such that $\rho_A[R]=R.$ The asymmetric relations on $A$ are those having no points in common with their reflections--that is, those $R\subseteq A^2$ such that $R\cap\rho_A[R]=\emptyset.$ Finally, the antisymmetric relations on $A$ are those that have only points of the diagonal in common with their reflection--that is, those $R\subseteq A^2$ such that $R\cap\rho_A[R]\subseteq\Delta_A.$
Hopefully, this aids in the intuition of why the first relation is antisymmetric. |
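The definition is easy to check mechanically for relations given as sets of pairs (an illustration; the relations below are hypothetical stand-ins for the ones in the question, mirroring the vacuous case and the diagonal case discussed above):

```python
def is_antisymmetric(R):
    """R is a set of ordered pairs; antisymmetric iff x R y and y R x imply x = y."""
    return all(x == y for (x, y) in R if (y, x) in R)

A = {1, 2, 3}
R1 = {(1, 2), (2, 3), (1, 3)}   # x R y and y R x never both hold: vacuously antisymmetric
R2 = {(x, x) for x in A}        # the diagonal: antisymmetric (and symmetric, and reflexive)
R3 = {(1, 2), (2, 1)}           # violates antisymmetry

results = (is_antisymmetric(R1), is_antisymmetric(R2), is_antisymmetric(R3))
```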
Let $X$ and $Y$ be Banach spaces, show that if they are isomorphic, then $X$ is reflexive iff $Y$ is reflexive. | I feel like the other answer is a bit sketchy, and possibly is begging the question (by assuming that the diagram commutes). So here's my own proof:
If we have a bounded operator $f:X \to Y$, with adjoint $f^*:Y^* \to X^*$ and double adjoint $f^{**} = (f^*)^*: X^{**} \to Y^{**}$, then
$$f^{**}(i_X x) = i_Y f(x)$$
because
$$f^{**}(i_X x)(\psi) = i_X x (f^*(\psi)) = f^* (\psi)(x) = \psi(fx) = i_Y(fx)(\psi)$$
where $\psi \in Y^*$ and $i_X, i_Y$ are the canonical injective maps from $X,Y$ to $X^{**},Y^{**}$ respectively. We say that $X$ is reflexive (by definition) if $X \cong X^{**}$ isometrically via $i_X$ (so it suffices to show that $i_X$ is onto, as we know it's an isometry).
Now assume $X$ is reflexive. If $f$ is an isomorphism then $f^*$ is an isomorphism (because $(f^{-1})^*=(f^*)^{-1}$ and is linear) and so likewise $f^{**}$ is an isomorphism. But isomorphisms are necessarily onto, so $f^{**} \circ i_X$ is onto and hence given any $\xi \in Y^{**}$ we can find an $x\in X$ such that the first $=$ below is true, then the second $=$ below was proven above:
$$\xi (\psi)=f^{**}(i_Xx)(\psi) = i_Y (fx)(\psi)$$
So this shows that $i_Y$ is onto, hence $Y$ is reflexive. |
finding a close enough point with implicit function theorem | Define $F:\mathbb R^{2n}\to\mathbb R^n$ as
$$
F(u_1,\cdots,u_n,C)=\begin{bmatrix}
u_1+|B-a_1|-|C-a_1|\\
\vdots\\
u_n+|B-a_n|-|C-a_n|
\end{bmatrix}=\begin{bmatrix}
F_1(u_1,\cdots,u_n,C)\\
\vdots\\
F_n(u_1,\cdots,u_n,C)
\end{bmatrix}\\\\
F(0,\cdots,0,B)=\begin{bmatrix}
0\\
\vdots\\
0
\end{bmatrix}\\
$$
Use $${\partial |C-a_i|\over\partial C_j}={\partial \sqrt{\sum_{k=1}^n(C_k-a_{ik})^2}\over\partial C_j}={(C_j-a_{ij})\over|C-a_i|}$$
to see that
$$J=\begin{bmatrix}
{\partial F_1\over\partial C_1}\;\cdots\;{\partial F_1\over\partial C_n}\\
\vdots\\
{\partial F_n\over\partial C_1}\;\cdots\;{\partial F_n\over\partial C_n}
\end{bmatrix}\Big{|}_{(0,\cdots,0,B)}\\$$
has
$$
{B-a_i\over|B-a_i|}
$$
as its $i$-th row. If these vectors are linearly independent $\det J\neq0$. Then according to implicit function theorem $\exists$ open sets $U,V\subset\mathbb R^n$ containing $(0,\cdots,0),B$ respectively and a unique $\mathcal C^1$ function $g:U\to V:$
$$
x\in U\implies F(x,g(x))=[0,\cdots,0]^\top
$$
So $g(u_1,\cdots,u_n)$ is the required $C$. Uniqueness of $g$ implies uniqueness of $C$. I think it might be helpful if you take a look at my answer to another implicit function theorem question. |
Multiplying equations by variables | In your second example you create an indeterminate form, 0/0, which is why it fails to work. |
Differentiation Of A Inverse Hyperbolic Sine Function | For checking answers, one could consult wolfram alpha.
However, it is a very useful skill to be able to come up with ways to sanity-check your own work. Some ideas of how you might do that by hand are:
You could numerically estimate the derivative at several points; e.g. compute $(f(1.00001) - f(1)) / (1.00001 - 1)$ and compare to the value of the claimed derivative
You could do symbolic estimation or compute the first two terms of the Taylor series about some point by alternate means. e.g. around $x=1$ we can estimate:
$$ \sqrt{2x^2 - 1} = \sqrt{1 + 4(x-1) + 2(x-1)^2} \approx 1 + \frac12 \left( 4(x-1) \right) $$
where the second approximation uses the Taylor series for $\sqrt{1 + t}$ about $t=0$. You can continue along until you get an estimate $y \approx a + b (x-1) $, and then compare $b$ to the value of the claimed derivative at $x=1$.
Neither of these are foolproof, but these sorts of things are very frequently useful for getting some quick sanity checks on your work. |
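The first idea looks like this in practice (a hypothetical instance, since the original function isn't quoted: suppose we claim $\frac{d}{dx}\operatorname{asinh}\sqrt{2x^2-1}=\frac{\sqrt2}{\sqrt{2x^2-1}}$ for $x>1/\sqrt2$, and compare against the numerical estimate described above):

```python
import math

# Hypothetical claim: d/dx asinh(sqrt(2x^2 - 1)) = sqrt(2)/sqrt(2x^2 - 1), x > 1/sqrt(2)
f = lambda x: math.asinh(math.sqrt(2 * x**2 - 1))
claimed = lambda x: math.sqrt(2) / math.sqrt(2 * x**2 - 1)

x0 = 1.0
numeric = (f(x0 + 1e-5) - f(x0)) / 1e-5   # forward difference, as in the text
diff = abs(numeric - claimed(x0))
```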
How should I start Factoring this? | Note, for $f(x)=2x^5+3x^4−10x^3−15x^2+8x+12$, $$f(1)=f(-1)=f(2)=f(-2)=f(-\frac{3}{2})=0$$
Using synthetic divisions with $1$ and $-1$, you get that $$f(x)=(x+1)(x-1)(2x^3+3x^2-8x-12)$$
You can use the other roots to further factor the degree-3 polynomial. |
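Both the listed roots and the quoted partial factorization are easy to verify numerically (a sanity check, not part of the answer):

```python
f = lambda x: 2*x**5 + 3*x**4 - 10*x**3 - 15*x**2 + 8*x + 12

roots = [1, -1, 2, -2, -1.5]
roots_ok = all(abs(f(r)) < 1e-9 for r in roots)

# the quoted partial factorization, checked by evaluation at a few sample points
g = lambda x: (x + 1) * (x - 1) * (2*x**3 + 3*x**2 - 8*x - 12)
match = all(abs(f(x) - g(x)) < 1e-6 for x in [0, 0.5, -3, 2.7, 10])
```

Continuing with the remaining roots gives the full factorization $2x^3+3x^2-8x-12=(2x+3)(x-2)(x+2)$.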
Lebesgue-Radon-Nikodym representation | Because $\lambda$ and $f\,dm$ are mutually singular, one can easily check that
$$
|\nu|=|\lambda|+|f\,dm|.
$$
And writing $f=h\,|f|$, with $|h|=1$, it is apparent from the definition of total variation that $|f\,dm|=|f|\,dm$. |
Subsheaves and stalks | It is a general fact that if a morphism of sheaves induces an isomorphism on all the stalks, then it is an isomorphism. Working with strict equalities with sheaves is a bit awkward since there is so much data. I believe this is how it would be done using equality at the stalks.
If $s \in F(U)$ is a section, then $[s,U] \in F_p$ for all $p \in U$. Then, since $F_p = G_p$, we know that there is some $U_p$ so that $[s|_{U_p}, U_p] \sim [s, U]$ is in $G_p$ (using the equality, some representative of $[s, U]$ is in $G_p$) so that $s|_{U_p} \in G(U_p)$. Then $\{U_p\}_{p \in U}$ is an open cover of $U$ with $s|_{U_p}$ all agreeing on intersections so by the sheaf condition, $s \in G(U)$. This shows that $F(U) \subseteq G(U)$ and the reverse direction is symmetric. |
Evaluating Sums $\sum_{i=1}^{n} \sum_{j=0}^{n-i}$ | First off, that last step isn’t right, so I’m going to start from the beginning.
$$\begin{align*}
\sum_{i=1}^n\sum_{j=0}^{n-i}\left(3j^2-2\right)&=\sum_{i=1}^n\left(\sum_{j=0}^{n-i}3j^2-\sum_{j=0}^{n-i}2\right)\\\\
&=\sum_{i=1}^n\sum_{j=0}^{n-i}3j^2-\sum_{i=1}^n\sum_{j=0}^{n-i}2\;.
\end{align*}$$
Now let’s deal separately with these summations. There are $n-i+1$ terms in the inner summation, so
$$\begin{align*}
\sum_{i=1}^n\sum_{j=0}^{n-i}2&=\sum_{i=1}^n2(n-i+1)\\\\
&=2\left(\sum_{i=1}^n(n+1)-\sum_{i=1}^ni\right)\\
&=2\left(n(n+1)-\frac{n(n+1)}2\right)\\\\
&=n(n+1)\;.
\end{align*}$$
The first one is a little messier:
$$\begin{align*}
\sum_{i=1}^n\sum_{j=0}^{n-i}3j^2&=\sum_{i=1}^n\frac{(n-i)(n-i+1)\big(2(n-i)+1\big)}2\\\\
&=\frac12\sum_{i=1}^n\Big((n-i)(n-i+1)\big(2(n-i)+1\big)\Big)\\\\
&\overset{*}=\frac12\sum_{k=0}^{n-1}k(k+1)(2k+1)\\\\
&=\frac12\sum_{k=0}^{n-1}\left(2k^3+3k^2+k\right)\\\\
&=\sum_{k=0}^{n-1}k^3+\frac32\sum_{k=0}^{n-1}k^2+\frac12\sum_{k=0}^{n-1}k\\\\
&=\left(\frac{(n-1)n}2\right)^2+\frac{(n-1)n\big(2(n-1)+1\big)}4+\frac{(n-1)n}4\;.
\end{align*}$$
The step marked with an asterisk is accomplished by letting $k=n-i$: as $i$ runs from $1$ through $n$, $n-i$ runs from $n-1$ down through $0$. Now just simplify this last result, combine with the first one, and simplify again to get the final result. |
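A brute-force comparison confirms the derivation (a sanity check, not part of the answer): the double sum agrees with the combined closed form $\left(\frac{(n-1)n}{2}\right)^2+\frac{(n-1)n(2(n-1)+1)}{4}+\frac{(n-1)n}{4}-n(n+1)$ for many values of $n$.

```python
def closed_form(n):
    first = ((n - 1) * n / 2) ** 2 + (n - 1) * n * (2 * (n - 1) + 1) / 4 + (n - 1) * n / 4
    second = n * (n + 1)
    return first - second

def brute(n):
    return sum(3 * j**2 - 2 for i in range(1, n + 1) for j in range(0, n - i + 1))

agree = all(brute(n) == closed_form(n) for n in range(1, 30))
```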
Showing $\int_a^{a+T}f(x) dx=\int_0^T f(x)dx$ for all $a\in\Bbb R$ | If $f$ is $T$-periodic, then $f(t+T)=f(t)$. Now, note that we can write
$$\int_a^{a+T}f(t)\,dt=\int_a^0 f(t)\,dt+\int_0^T f(t)\,dt+\int_T^{T+a} f(t)\,dt \tag 1$$
Enforcing the substitution $t\to t+T$ in the third integral of $(1)$ reveals
$$\begin{align}\int_a^{a+T}f(t)\,dt&=\int_a^0 f(t)\,dt+\int_0^T f(t)\,dt+\int_0^{a} f(t+T)\,dt\\\\&= \int_a^0 f(t)\,dt+\int_0^T f(t)\,dt+\int_0^{a} f(t)\,dt\\\\&=\int_0^T f(t)\,dt
\end{align}$$
as was to be shown! |
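A numeric illustration (with an arbitrary $2\pi$-periodic test function, not from the question): the integral over any window of length $T$ agrees with the integral over $[0,T]$.

```python
import math

# f(t) = sin t + cos(2t) + 1 has period T = 2π
f = lambda t: math.sin(t) + math.cos(2 * t) + 1

def integral(lo, hi, n=200_000):
    dt = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * dt) for k in range(n)) * dt   # midpoint rule

T = 2 * math.pi
ref = integral(0, T)              # = 2π: the sin and cos terms average to 0 over a period
shifted = integral(1.3, 1.3 + T)  # arbitrary a = 1.3

err_value = abs(ref - 2 * math.pi)
err_shift = abs(shifted - ref)
```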
Big O Notation proof check | if $d(n) =\mathcal{O}(f(n))\iff \exists K>0, \exists n_0\ge 0,\;\forall n>n_0,\quad|d((n)|\le K|f(n)|$
so $a>0,\quad a|d((n)|\le aK|f(n)|\iff |ad((n)|\le K'|f(n)|\iff ad(n) =\mathcal{O}(f(n))$ with $K'=aK$ |
Integrating cos and csc | Guide:
\begin{align}
\int \frac{(\cos x+1)^\frac12}{ \csc x} \, dx = \int \sin x(\cos x+1)^\frac12 \, dx
\end{align}
Try substitution $u = \cos x + 1$. |
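Carrying the substitution through (a sketch, not stated in the answer above): $u=\cos x+1$, $du=-\sin x\,dx$ turns the integral into $-\int u^{1/2}\,du$, suggesting the antiderivative $F(x)=-\frac23(\cos x+1)^{3/2}$. A finite-difference check that $F'$ matches the integrand:

```python
import math

# Candidate antiderivative from u = cos x + 1: F(x) = -(2/3)(cos x + 1)^{3/2}
F = lambda x: -(2 / 3) * (math.cos(x) + 1) ** 1.5
integrand = lambda x: math.sin(x) * math.sqrt(math.cos(x) + 1)

h = 1e-6
ok = all(abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x)) < 1e-6
         for x in [0.3, 1.0, 2.0, 4.5])
```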
If $X$ is uncountable, then the set of bounded injective functions: $f: X \rightarrow \mathbb{R}$ has empty interior. | Your proof is good, though as a comment mentioned you should be clearer what topology (and ambient space) you're using.
Another idea is to define $g(x)=\epsilon\lfloor f(x)/\epsilon\rfloor$ for all $x$. This gives $|f(x)-g(x)|<\epsilon$ for all $x$ as in your argument. But the image of $g$ is finite: it's a subset of $\{-N\epsilon,-(N-1)\epsilon,\dots,N\epsilon\}$ for some bound $N$. So by the pigeonhole principle $g$ cannot be injective. |
Convexity of CES functions | The CES utility function is not always convex, and nor is it always concave. Indeed, one can show that in your particular case, it will be concave whenever $\alpha$ is in $[0, 1]$ and $\rho \leq 1$.
With regards to your statement about monotonic transformations, let us review the following result (see, for example, Jehle and Reny's book on Microeconomic Theory for a proof).
Theorem. Suppose $u: X \to \mathbb{R}$ is a utility function that represents a preference relation $\succeq$ on $X$, and $g: \mathbb{R} \to \mathbb{R}$ is a monotonic function. Then $g \circ u: X \to \mathbb{R}$ is also a utility function that represents $\succeq$.
What you don't seem to grasp is that it doesn't matter whether $u$ itself is monotonic. What the theorem says is that you can always apply a monotonic transformation to a utility function without modifying the preference relation it represents. Another way of saying this is that utility functions are unique up to a monotonic transformation.
In order to show that $u$ is concave for some range of values of $\alpha$ and $\rho$, you may show that $u$ is quasiconcave and homogeneous of degree 1.
Another point of confusion you seem to be having is that preferences and utility functions representing them are two separate objects. The indifference curves associated with the CES function for some range of $\alpha$ and $\rho$ values is convex, but the function itself is quasiconcave when this is the case.
This mimeo on the CES function might come in handy if you're struggling with either step. |
How do I show that an angle is a certain value in a triangle with two sides given? | $12-5\sqrt{3}=\sqrt{3}*(4\sqrt{3}-5)$ |
How to find $\frac{\partial x^TAx}{\partial A}$ | The differential of your map $f=x^TAx$ is $df=x^TdAx$. It is a linear map sending $H\mapsto x^T H x$, a scalar. Then write $tr(x^T H x)=tr(xx^T H)=\langle\nabla f,H\rangle_F$ with the "gradient vector" $\nabla f=(xx^T)^T=xx^T.$ |
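A finite-difference illustration (not part of the answer): since $f$ is linear in $A$, perturbing the entry $A_{ij}$ changes $f=x^TAx$ at the rate $x_i x_j$, i.e. $\nabla_A f = xx^T$ entrywise.

```python
import random

random.seed(0)
n = 4
x = [random.uniform(-1, 1) for _ in range(n)]
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

f = lambda M: sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

eps, ok = 1e-6, True
for i in range(n):
    for j in range(n):
        B = [row[:] for row in A]
        B[i][j] += eps
        grad_ij = (f(B) - f(A)) / eps      # ∂f/∂A_{ij}
        ok = ok and abs(grad_ij - x[i] * x[j]) < 1e-6
```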
$\int _{\mathbb{T}^n}f(t)g(x-t)d\lambda (t)\in\mathbb{C}$ for all $x\in\mathbb{T}^n$ and $f,g\in L^1(\mathbb{T}^n)$? | Let $f$ be any function in $L^{1}$ which is not in $L^{2}$. Take $g(y)=f(-y)$ so that $g$ is also in $L^{1}$. Then then convolution does not exist at $x=0$. |
What curves have a closed-form formula for projecting a point onto them in multiple dimensions? | This means that you would like the following equation
$F(t)=(P-C(t)) \cdot C^{'}(t)=0$
to have a closed-form solution for its roots.
When $C(t)$ is a polynomial of degree 1 or 2, $F(t)$ is a polynomial of degree 1 or 3, which has a closed-form solution for its root(s).
When $C(t)$ is a circle (partial or not), I think there is also a closed-form solution for the projected point.
Show that the limit of $\frac{\sin(x) - \sin(y)}{x+y}$ does not exist. | Hint: If you take the limit along the line $x=0$ what do you get? How about along the line $y=0$? |
A question involving polynomials and sine | Let $h$ be the function defined by $h(x)=f(x)-\sin x$, so $h$ is continuous, and since $f$ has odd degree then
$$\lim_{x\to\pm\infty}h(x)=\pm\infty\; \text{or}\; \mp\infty$$
and we use the intermediate value theorem to conclude. |
Need help with proving a lemma | [This answer is given at http://mathhelpboards.com/differential-equations-17/need-help-proving-lemma-10176.html#post47171 .]
You have quoted Grönwall's inequality wrongly. The $\phi$ in the exponential should be $\psi$.
You can write the term $\delta_2(t-a)$ as $\displaystyle\delta_1\int_a^t \frac{\delta_2}{\delta_1}\,ds.$ Now replace the function $\phi(t)$ by $\phi(t) + \dfrac{\delta_2}{\delta_1}$. Take $\alpha(t)$ to be the constant $\dfrac{\delta_2}{\delta_1} + \delta_3$ and $\psi(t)$ to be the constant $1$, and apply Grönwall's inequality. |