title (string, 20–150 chars) | upvoted_answer (string, 0–54.3k chars)
---|---|
subgroups with cycles | The existence of an $n$-cycle shows that $H$ is transitive. Then the existence of an $(n-1)$-cycle shows that it is $2$-transitive. .... Prove by induction on $r$ that the existence of the $(n-r)$-cycle implies that $H$ is $(r+1)$-transitive. |
cross product between two vectors | The cross product or vector product is defined as it is because it is a useful concept in physics, and helps us write down physical laws and equations of motion in a concise way - especially laws involving angular momentum, moments of forces, torques and the interaction of magnetic fields and electric charges (as in the Lorentz force equation).
Mathematically, the cross product of two vectors is a less "natural" concept than the dot or scalar product, and it does not generalise to higher dimensions as simply as the scalar product does.
I cannot think of a physical example where the cross product of two force vectors is physically meaningful, and I don't think it has a specific name. |
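To make the algebra concrete (my own illustrative sketch, not part of the answer above), here is the component formula in plain Python, checking anti-commutativity and orthogonality to both factors:

```python
def cross(a, b):
    """Cross product of two 3-vectors, from the component formula."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

u, v = (1, 2, 3), (4, 5, 6)
w = cross(u, v)

print(w)                      # (-3, 6, -3)
print(cross(v, u))            # anti-commutative: (3, -6, 3)
print(dot(w, u), dot(w, v))   # orthogonal to both factors: 0 0
```

The lack of an equally simple higher-dimensional generalisation mentioned above is visible here: the formula is special to three components.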
Harmonic functions proof explanation | I think there is a mistake in the first equality of your formula.
$$f'(z)=\lim_{t \to 0} \frac{f(z+t)-f(z)}{t}$$
Choosing $t \in \mathbb R$ gives
$$f'(z)=\lim_{t \to 0} \frac{u(x+t,y)-u(x,y)+i\,[v(x+t,y)-v(x,y)]}{t}=\frac{\partial u}{\partial x} + i \frac{\partial v}{{ \partial \color{red} x}}$$
Now, C-R gives the first equality.
Differentiating one more time, you should get
$$f''(z) = \frac{\partial^2 u}{ \partial x^2} + i \frac{\partial^2 v}{\partial \color{red} x^2}$$ |
Tangent space on the sphere | You are almost there for $\subset$ since $f(x)\in S^m$, $\langle f(x),f(x)\rangle =1$ so it is constant, and for the other direction, use the fact that the dimensions of the image $df_x$ and $TS^m_p$ are equal. |
Estimate the error that results when $\sqrt{1 + x}$ is replaced by $1 + \frac{1}{2}x$ if $|x| < 0.01$ | expanding about $x=0$
$$ \begin{equation}
\begin{aligned}
P_n(x) = f(0)
+ \frac{f'(0)}{1!}(x - 0)
+ \frac{f''(0)}{2!}(x - 0)^2
& + \ldots
+ \frac{f^{(n)}(0)}{n!}(x - 0)^n \\
\end{aligned}
\end{equation}\\
$$
$$ \begin{equation}
\begin{aligned}
P_n(x) = 1
+ \frac{1}{2}x + \frac{-\frac14}{2!}x^2
& + \ldots
\end{aligned}
\end{equation}\\
$$ so, with respect to $|x| < 0.01$,
$$\begin{equation}
\begin{aligned}
R_n(x) & = \frac{- \frac{1}{4} (1 + \xi)^{-3/2} }{2!}\,x^2
\end{aligned}
\end{equation}\implies |R_n(x)| \leq \frac{\frac{1}{4} (1 - 0.01)^{-3/2}}{2!}\,(0.01)^2=\frac{\frac{1}{4} (0.99)^{-3/2}}{2!}\,(0.01)^2$$ |
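As a quick numerical sanity check of that bound (my own addition; it just scans a grid of $x$ values):

```python
import math

# Error of the linear approximation sqrt(1+x) ~ 1 + x/2 on |x| <= 0.01,
# compared with the Lagrange remainder bound (1/4)(0.99)^(-3/2)/2! * (0.01)^2.
bound = 0.25 * 0.99**-1.5 / 2 * 0.01**2

worst = max(abs(math.sqrt(1 + x) - (1 + x/2))
            for x in (i/1000 * 0.01 for i in range(-1000, 1001)))

print(bound)   # ≈ 1.269e-05
print(worst)   # ≈ 1.2563e-05, below the bound
```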
Equivalent definition of Jordan measure | Suppose $E$ is Jordan measurable. Then, for every $\varepsilon>0$, there exist two elementary sets $A,B$ such that $A\subset E\subset B$ and $m(B\setminus A)<\varepsilon$. So you obtain the condition because $A\triangle E \subset B\setminus A$.
Conversely, suppose the condition is valid. Then there exists an elementary set $A'$ such that $A\triangle E \subset A'$ and $m(A')<\varepsilon$.
The sets $A\setminus A'$ and $A\cup A'$ are elementary. Moreover you have $$A\setminus A'\subset E\subset A\cup A'$$ and $$A\cup A'\setminus (A\setminus A')=A'.$$
That is enough to say that $E$ is Jordan measurable. |
Vector in row space and null space of a matrix is necessarily 0 | This is not true. Consider the matrix $\begin{pmatrix} 1 & 1 \end{pmatrix}$ in the field of integers modulo 2. The only row of the matrix is in its null space. |
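A one-line check of this counterexample (my addition; arithmetic is taken mod $2$):

```python
# Over GF(2), the row (1, 1) of A = [1 1] is also in the null space of A,
# since 1*1 + 1*1 = 2 = 0 (mod 2).
row = (1, 1)
Ax = sum(a*x for a, x in zip((1, 1), row)) % 2
print(Ax)  # 0 -> the row vector lies in the null space
```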
I need to identify $G/H$ upto isomorphism. | Your map is surjective. It follows from one of the isomorphism theorems that its codomain is isomorphic to the quotient of its domain by its kernel. |
Homology of the Empty set | Here's an idea. The reduced homology of the empty set is what you describe: one $\mathbb Z$ in degree $-1$ and $0$ otherwise. This is because the chain complex computing the reduced homology groups has an extra $\mathbb Z$ in degree $-1$, whose role is to kill one $\mathbb Z$ in degree $0$. However, if we are looking at the empty set, all chain groups are $0$ so there is only the extra $\mathbb Z$ in degree $-1$. I'm just guessing here. |
Bijection from GL$_{n}$ ($\mathbb{R}$) to $\mathbb{R} \setminus\{0\}$ | The cardinalities are the same (both have the cardinality of the continuum), but I'm not sure how to construct an explicit bijection. The case $n = 1$ is easy, though. Pick a basis vector for $\mathbb R$. Then every element of $GL_1(\mathbb R)$ is represented by a $1 \times 1$ matrix whose determinant is nonzero, i.e. a nonzero real number.
In general, an element of $GL_n(\mathbb R)$ is represented by a $n \times n$ matrix with nonzero determinant. This gives an injection from $GL_n(\mathbb R)$ into the set $\mathbb R^{n^2}$. So the cardinalities of $GL_n(\mathbb R)$ and $\mathbb R^{n^2}$ are the same. It's then a standard result that the cardinality of $\mathbb R^{n^2}$ is the same as the cardinality of $\mathbb R$. |
Is there any examples of a Banach algebra which every ideal of it, is a maximal ideal? | Consider the reals under multiplication and addition. Then the only proper ideal is $0$. Thus every proper ideal is maximal. |
Prove that $\forall A \exists !B\forall C (\forall x[(x\in C\land x\notin A)\iff (x\in C \land x\in B )])$ | Here is an answer in a more 'calculational' style. Using slightly different logical notations (see EWD1300),$
\newcommand{\calc}{\begin{align} \quad &}
\newcommand{\op}[1]{\\ #1 \quad & \quad \unicode{x201c}}
\newcommand{\hints}[1]{\mbox{#1} \\ \quad & \quad \phantom{\unicode{x201c}} }
\newcommand{\hint}[1]{\mbox{#1} \unicode{x201d} \\ \quad & }
\newcommand{\endcalc}{\end{align}}
\newcommand{\ref}[1]{\text{(#1)}}
\newcommand{\P}[1]{\mathscr P(#1)}
\newcommand{\true}{\text{true}}
$ you are trying to rewrite
$$
\tag 0
B\in \P{U} \;\land\; \langle \forall C : C \in \P{U} : \langle \forall x :: x \in C \land x \notin A \;\equiv\; x \in C \land x \in B \rangle \rangle
$$
in the form $\;B = \ldots\;$.
Let's see how far we get rewriting the main part of $\ref 0$:
$$\calc
\langle \forall C : C \in \P{U} : \langle \forall x :: x \in C \land x \notin A \;\equiv\; x \in C \land x \in B \rangle \rangle
\op\equiv\hints{logic: extract common conjunct $\;x \in C\;$ out of $\;\equiv\;$}
\hint{-- to move $\;x \in C\;$ closer to the other occurrences of $\;C\;$}
\langle \forall C : C \in \P{U} : \langle \forall x : x \in C : x \notin A \;\equiv\; x \in B \rangle \rangle
\op\equiv\hint{logic: merge quantifications; definition of $\;\P{\cdot}\;$}
\langle \forall C,x : x \in C \subseteq U : x \notin A \;\equiv\; x \in B \rangle
\op\equiv\hint{set theory: simplify quantifications}
\langle \forall x : x \in U : x \notin A \;\equiv\; x \in B \rangle
\op\equiv\hints{logic: reintroduce common conjunct $\;x \in U\;$}
\hint{-- working towards $\;\langle \forall x :: \ldots \;\equiv\; x \in B \rangle\;$ and extensionality}
\langle \forall x :: x \in U \land x \notin A \;\equiv\; x \in U \land x \in B \rangle
\op\equiv\hint{simplify RHS using $\;B \in \P{U}\;$ or equivalently $\;B \subseteq U\;$}
\langle \forall x :: x \in U \land x \notin A \;\equiv\; x \in B \rangle
\op\equiv\hint{extensionality}
U \setminus A \;=\; B
\endcalc$$
That leaves us with the obligation to show that this $\;B\;$ satisfies the first part of $\ref 0$, $\;B \in \P{U}\;$, which is simple.
Note that we did not need to use the assumption that $\;A \in \P{U}\;$.
To formally wrap things up, we conclude
$$\calc
\langle \forall A :: \langle \exists! B : B\in \P{U} \;\land\; \langle \forall C : C \in \P{U} : \langle \forall x :: x \in C \land x \notin A \;\equiv\; x \in C \land x \in B \rangle \rangle \rangle \rangle
\op\equiv\hint{by the above calculation; $\;U \setminus A \in \P{U}\;$}
\langle \forall A :: \langle \exists! B : B \;=\; U \setminus A \rangle \rangle \op\equiv\hint{by definition, $\;\langle \exists! B : B \;=\; \ldots \rangle \;\equiv\; \true\;$}
\true
\endcalc$$ |
When a function is defined at some point, does limit exist at that specific point? | Note that, for a limit to exist at a point,
$$\lim_{x \to c} f(x) = L \;\;\;\;\; \text{if and only if} \;\;\;\;\; \lim_{x \to c^+} f(x) = \lim_{x \to c^-} f(x) = L$$
Or, in words, the limit at a point exists and is equal to $L$, if and only if the limit from each side is also equal to $L$. (Note: it need not be true that the limit as $x \to c$ is equal to $f(c)$ for existence to occur.)
For example, consider the step function:
$$u(x) = \left\{ \begin{matrix}
0 & x \le 0 \\
1 & x > 0
\end{matrix} \right.$$
What is the limit as $x \to 0$? If you approach from the left, it's $0$, and if you approach from the right it's $1$. Therefore, the limit as $x \to 0$ for $u(x)$ doesn't exist, since you get a different limit of approach on each side, i.e.
$$\lim_{x \to 0} u(x) \; \text{doesn't exist, because} \; 0 = \lim_{x \to 0^-} u(x) \ne \lim_{x \to 0^+} u(x) = 1$$
This can be seen graphically: see that "jump" at $x = 0$?
In your scenario, you're correct: the limit as $x \to 1$ from each side is different for your $f$. Therefore, you rightfully can conclude that the limit does not exist.
You can read more on the existence of limits on Brilliant. |
Right triangle : Find $\angle{BPQ}$. | Let's call $\angle BPQ =x$ then at the triangle $PQB$ we get
$$\tan x=\frac{QB}{PB}$$
but, at the triangle $CQB$ we get:
$$\tan 23°=\frac{QB}{CB}\to QB=CB\cdot \tan 23°$$
and from the triangle $ABP$ we have:
$$\tan 7°=\frac{PB}{AB}\to PB=AB\cdot \tan 7°$$
so,
$$\tan x=\frac{CB}{AB}\cdot \frac{\tan 23°}{\tan 7°}$$
Now from the triangle $ABC$ we have
$$\tan 21°=\frac{CB}{AB}$$
and then
$$\tan x=\frac{\tan 21°\cdot\tan 23°}{\tan 7°}\to x=\arctan \left(\frac{\tan 21°\cdot\tan 23°}{\tan 7°}\right)= 53°$$ |
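A quick numerical check of the final value (my own addition):

```python
import math

# x = arctan(tan 21° · tan 23° / tan 7°), converted back to degrees.
t = lambda deg: math.tan(math.radians(deg))
x = math.degrees(math.atan(t(21) * t(23) / t(7)))
print(round(x, 2))  # 53.0
```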
Isomorphism of rings (with parameters) | HINT: Whatever $a$ and $b$, the ring will be a $\Bbb F_p$ vector-space of dimension $2$. As ring we have the following three possibilities for the quotient:
A ring with a unique maximal ideal $\frak m$ such that $\frak m^2=(0)$.
The ring $\Bbb F_p\times\Bbb F_p$.
A field with $p^2$ elements.
These situations should be linked to the behaviour of the roots of the polynomial $X^2+aX+b$ to complete the answer.
Note that there's only one finite field of given cardinality up to isomorphism. |
Three different definitions of Modular Tensor Categories | In his paper Non-degeneracy conditions for braided finite tensor categories, Shimizu shows the equivalence of these definitions in the non-semisimple case.
Of course, the $s$-matrix doesn't work there, but it is not hard to show (and I believe he does that) that in the semisimple setting, non-degeneracy of the $s$-matrix is equivalent to non-degeneracy of the Hopf pairing of Lyubashenko's coend. |
The real numbers $x,y$ and $z$ are such that $x-7y+8z=4$ and $8x+4y-z=7$. What is the maximum value $x^2-y^2+z^2?$ | Solving the system
$$x-7y+8z=4$$
$$8x+4y-z=7$$
we get
$$y=\frac{12}{5}-\frac{13}{5}x$$
$$z=\frac{13}{5}-\frac{12}{5}x$$
and we get
$$x^2-y^2+z^2=1$$
Very NICE!
Below please find an image of the surface $x^2-y^2+z^2=1$ together with the (thick red) line of intersection of the two given planes. |
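A quick numerical check (my addition) that the identity holds along the whole line of intersection:

```python
# For several x, take y and z from the solved system and verify both plane
# equations and the claimed identity x^2 - y^2 + z^2 = 1.
for x in (-2.0, 0.0, 1.0, 3.5):
    y = 12/5 - 13/5 * x
    z = 13/5 - 12/5 * x
    assert abs((x - 7*y + 8*z) - 4) < 1e-9
    assert abs((8*x + 4*y - z) - 7) < 1e-9
    print(round(x*x - y*y + z*z, 9))  # 1.0 every time
```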
Prove Set is disjoint | The definition of $B_n$ does not make sense. If we interpret it as $B_n=\{b \in \mathbb Z: n\mid(b+1)\}$ then the statement is correct: Suppose $m \in A_n \cap B_n$. Then $m=an+1$ and $m=kn-1$ for some integers $a$ and $k$. But then $(k-a)n=2$, which is a contradiction, since $n$ divides the left-hand side but not the right-hand side. |
Group isomorphic to all its proper quotients, and not simple | $\newcommand{\C}{\mathbb{C}}$$\newcommand{\Set}[1]{\left\{ #1 \right\}}$The Prüfer $p$-group is non-simple, and satisfies your hypothesis.
Given a prime $p$, you realize it as the subgroup of the multiplicative group $\C^{*}$ given by
\begin{equation*}
G = \Set { z \in \C^{*} : z^{p^{k}} = 1 \ \text{for some $k \ge 0$}}.
\end{equation*} |
Direct/Lagrange to show max/min | Since $x\geqslant 0$, $f(x,y) \leqslant 2x(1-x^2)-2x^2$. Now find $x$ between $0$ and $1$ to maximize this.
Finding the minimum is a lot easier since $2xy^2\geqslant 0$ so we must have $y=0$. Can you do the rest?
===============
We can also use polar coordinates. Let $x=R\cos \alpha, y=R\sin \alpha$, with $0\leqslant R \leqslant 1, -\frac{\pi}{2} \leqslant \alpha \leqslant \frac{\pi}{2}$.
(1) On boundary $S_1 = \left\{ x^2+y^2=1, x\geqslant 0 \right\}$ we have $R^2=1$ so $f(x,y) = 2\cos \alpha \sin^2 \alpha - 2 \cos^2 \alpha = 2\cos \alpha - 2\cos^3 \alpha - 2 \cos^2 \alpha$.
Now $\frac{\partial f}{\partial \alpha}= -2 \sin \alpha-6\cos^2\alpha (-\sin \alpha)-4 \cos \alpha (-\sin \alpha) \\
= 2 \sin \alpha (-1+3 \cos^2\alpha +2\cos \alpha)\\
= 2\sin\alpha (3\cos \alpha -1) (\cos \alpha+1)
$
$\frac{\partial{f}}{\partial \alpha}=0 \Rightarrow \sin \alpha =0 $ or $\cos \alpha = \frac{1}{3}$
When $R=1, \sin \alpha =0, x=1, y=0, f(x,y) = -2$.
When $R=1, \cos \alpha = \frac{1}{3}, x=\frac{1}{3}, y=\pm \frac{2\sqrt{2}}{3}, f(x,y)=\frac{10}{27}.$
(2) On boundary $S_2 = \left\{ x=0, -1\leqslant y \leqslant 1\right\}$ we have $f(x,y)=0$.
Therefore the minimum is $-2$ at $(1,0)$ and the maximum is $\frac{10}{27}$ at $(\frac{1}{3}, \pm \frac{2\sqrt{2}}{3})$ |
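As a sanity check (my addition, assuming the function is $f(x,y)=2xy^2-2x^2$ on the half-disc $x\geqslant 0$, $x^2+y^2\leqslant 1$, as in the boundary computation above), a brute-force grid search:

```python
# Brute-force evaluation over the half-disc x >= 0, x^2 + y^2 <= 1,
# assuming f(x, y) = 2*x*y**2 - 2*x**2 as in the boundary computation.
N = 400
pts = [(i/N, j/N) for i in range(N + 1) for j in range(-N, N + 1)
       if (i/N)**2 + (j/N)**2 <= 1]
vals = [2*x*y*y - 2*x*x for x, y in pts]

print(min(vals))  # -2.0, attained at (1, 0)
print(max(vals))  # close to 10/27 ≈ 0.3704, near (1/3, ±2*sqrt(2)/3)
```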
Computing periodic continued fractions. | Note that the period can be written as:
$$
x = 1+\frac{1}{4+\frac{1}{x}}
$$
At this point solve the quadratic:
$$
4x^2-4x-1 = 0 \Rightarrow x = \frac{1}{2} \pm \frac{\sqrt{2}}{2}
$$
and plug the positive solution into:
$$
1 + \frac{1}{2+\frac{1}{3+\frac{1}{x}}}
$$ |
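A numerical sanity check (my addition): iterate the period map to its fixed point, compare with the closed form, then evaluate the head of the expansion:

```python
import math

# Fixed point of the periodic tail x = 1 + 1/(4 + 1/x),
# which solves 4x^2 - 4x - 1 = 0.
x = 1.0
for _ in range(60):
    x = 1 + 1/(4 + 1/x)

closed = 0.5 + math.sqrt(2)/2
print(x, closed)          # both ≈ 1.2071067811865475

value = 1 + 1/(2 + 1/(3 + 1/x))   # plug into the head of the expansion
print(value)              # ≈ 1.4422424
```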
Questions about impact velocity angle | The impact speed is just the launch speed (by energy conservation), and the angle is $-40^\circ$. Because the $x$ component velocity is the same, and the $y$ component reversed. However, if you define the impact velocity as the angle between the trajectory with the +ve $x$ axis at the moment when it hits the ground, then $140^\circ$ is also ok (if it is from a physics textbook :) ). But I would say $-40^\circ$ (or $320^\circ$) is more correct mathematically. |
is the following complex function is defined at deleted neighborhood of $z=0$ | The function is not defined when $\sinh(1/z)=0$. Since $\sinh w=(e^w-e^{-w})/2$, saying $\sinh w=0$ means $e^{2w}=1$, that is, $2w=2k\pi i$ for integer $k$. So the function is not defined at
$$
z=\frac{1}{k\pi i}
$$
Hence… |
Are there any Combinatoric proofs of Bertrand's postulate? | Interesting idea to use a grid. I doubt it will work for this question, since writing things in that grid format in a sense splits into arithmetic progressions modulo $p$, and usually saying things about the primes in a progression is harder.
However, I cannot help but mention that using a grid like that was applied brilliantly by Maier to prove a counter-intuitive result, (using the Prime Number Theorem for Arithmetic Progessions) and the idea is now called the Maier Matrix Method.
(A small digression, but it is interesting!)
The Maier Matrix Method
There are questions regarding primes in short intervals, and we can ask ourselves what does $$\pi(x+y)-\pi(x)$$ look like? To say anything meaningful, $y$ cannot be too small, but here let's suppose $y=\log^B(x)$ for some $B>2$. Selberg proved that under the Riemann Hypothesis, we have $$\pi(x+y)-\pi(x)\sim \frac{y}{\log x}$$ as $x\rightarrow \infty$ for almost all $x$ (a set with density $\rightarrow 1$). It was then conjectured that this asymptotic must hold for all sufficiently large $x$. (This conjecture was made for several reasons, one of which is that it is true under Cramér's probabilistic model.)
In a surprising turn of events, Maier showed it was false, and that there exists $\delta>0$ and arbitrarily large values of $x_1$ and $x_2$ such that both $$\pi(x_1 +\log^B (x_1))-\pi (x_1)> (1+\delta)\log^{B-1}(x_1)$$and $$\pi(x_2 +\log^B (x_2))-\pi (x_2)< (1-\delta)\log^{B-1}(x_2)$$hold, despite the fact that the asymptotic holds for a set of density $1$.
He proved this using a method which is now called the "Maier Matrix Method." Essentially, it is just drawing a grid which is similar to the one above, and then applying a clever combinatorial argument. The columns are arithmetic progressions, and by PNT4AP, we can easily say things about them to understand the number of primes in the grid. There is a little trick with oscillation of the Dickman Function, but then basically by the pigeon hole principle the question is solved.
I definitely think you might find this expository article by Dr. Andrew Granville to be interesting. (It is quite readable, and gives a more in-depth and very clear explanation) |
Logarithm of an infinite series | The exponential function $e^y$ is defined as
$$ e^y:=\sum_{k = 0}^{\infty} {y^k \over k!}.$$
So your sum is simply (by setting $y=x^{\frac{1}{v}}$)
$$e^{x^{\frac{1}{v}}}.$$
Summary
If you mean to calculate the logarithm of the sum raised to the power of $v$, log(sum^v):
$$\begin{align}
\log \left(\left(\sum_{k = 0}^{\infty} {x^{\frac{k}{v}} \over k!}\right)^v\right)&=v\log \left(\sum_{k = 0}^{\infty} {x^{\frac{k}{v}} \over k!}\right)\\
&=v\log\left(e^{x^{\frac{1}{v}}}\right)\\
&=vx^{\frac{1}{v}}\log e \\
&=vx^{\frac{1}{v}}.\end{align}$$
If you mean to calculate the logarithm of the sum all raised to the power of $v$, (log(sum))^v:
$$\begin{align}
\left(\log \sum_{k = 0}^{\infty} {x^{\frac{k}{v}} \over k!}\right)^v&=\left(\log e^{x^{\frac{1}{v}}}\right)^v\\
&=\left(x^{\frac{1}{v}}\right)^v\\
&=x^{\frac{v}{v}}=x.\end{align}$$ |
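A quick numerical check of both readings (my addition, with arbitrary values $x=2$, $v=3$):

```python
import math

# The series sum_{k>=0} x^(k/v) / k! equals e^(x^(1/v)), so the two
# readings of the question give v*x^(1/v) and x respectively.
x, v = 2.0, 3.0
s = sum(x**(k/v) / math.factorial(k) for k in range(60))

assert abs(s - math.exp(x**(1/v))) < 1e-12
print(v * math.log(s))    # ≈ 3.7798 = 3 * 2^(1/3)
print(math.log(s) ** v)   # ≈ 2.0 = x
```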
Proving a function to be linear: Complex Analysis | Let $g(z) = f(z) - i z$. Then your inequality says $\text{Im}(g(z)) \ge 0$ for all $z$. What can you say about $1/(g(z) + i)$? |
Bound on size of subset of $\{1,2,\ldots,2n\}$ where no member is a multiple of another | Let $P(n)$ be the proposition that given any set of integers $\{a_j\}$ containing $n+1$ distinct $a_j$ all of which are less than or equal to $2n$, then there exists some $j_1 \neq j_2$ such that $a_{j_1}$ divides $a_{j_2}$.
$P(1)$ is trivial to prove: For any set of two integers all of which are less than or equal to 2, either that set contains two identical integers (which divide each other) or it contains $1$, which divides $2$.
Now say that $P(n)$ holds and that $P(n+1)$ is false. Then there is some set $S \equiv \{s_j\}$ containing $n+2$ distinct integers, with all $s_j \leq 2n+2$, such that no $s_j$ divides any $s_k$ for $k \neq j$.
Unless both $2n+1 \in S$ and $2n+2 \in S$, the set $T = S \setminus \{s_j : s_j > 2n\}$ contains at least $n+1$ integers, none of which exceeds $2n$, and by $P(n)$ one of these elements divides another. Since no element of $S$ divides another element of $S$, that cannot be. So if $S$ exists,
$$
2n+1 \in S \text{ and } 2n+2 \in S $$
We then know that $(n+1) \not \in S$ since that divides $(2n+2)$.
Now consider the set $U = S - \{(2n+1), (2n+2)\} + \{(n+1)\}$. $U$ has $n+1$ elements, all of which are less than or equal to $2n$. So by $P(n)$ one of those elements divides another. But if the two elements in question do not include $(n+1)$, then one also divides the other when considered as elements of $S$, which by assumption cannot be the case.
So some element $u\in U$ divides $n+1$ (or $n+1$ divides some other element of $U$, which cannot be, since every element of $U$ is at most $2n < 2(n+1)$).
But $u\in S$ and $u|n+1 \implies u|2n+2$, yet $2n+2 \in S$ so we have a contradiction again.
Therefore if $P(n)$ holds, $P(n+1)$ must be true, thus establishing induction. |
Minimum of a function using framing rather than derivatives | Without derivatives:
$$2+x^2(3-x)=4x^2-x^3+2-x^2=x^2(4-x)+(4-x)(4+x)-14\geq-14.$$
The equality occurs for $x=4$, which says that we got a minimal value. |
Find the posterior density of $\log \theta$ and Bayes estimator under 0-1 loss | reading you posts I think you are a clever guy thus with a sketch I think you can do the rest by yourself.... I am just an amateur, but this is what I would do in this case...
Find the posterior density of $log \theta$.
The model is uniform thus
$$p(\mathbf{x}|\theta)=\frac{1}{\theta^n}\cdot\mathbb{1}_{(0;\theta)}(x_{(n)})=\frac{1}{\theta^n}\cdot\mathbb{1}_{(x_{(n)};+\infty)}(\theta)$$
The Prior is lognormal
$$\pi(\theta)\propto \frac{1}{\theta}e^{-(\log\theta-\mu)^2/(2\sigma^2)}$$
First I find the posterior of $\theta$
$$\pi(\theta|\mathbf{x})\propto \frac{1}{\theta^{n+1}}e^{-(\log\theta-\mu)^2/(2\sigma^2)}\cdot\mathbb{1}_{(x_{(n)};+\infty)}(\theta)$$
Now I transform the posterior getting the density of its log with the standard transformation theorem
$$\lambda=\log\theta$$
$$\theta=e^{\lambda}$$
$$\theta'=e^{\lambda}$$
Thus
$$\pi(\lambda|\mathbf{x})\propto e^{-\lambda n}e^{-(\lambda-\mu)^2/(2\sigma^2)}$$
Let's focus on the exponent
$$-\frac{1}{2\sigma^2}[2\sigma^2 n \lambda+\lambda^2+\mu^2-2\lambda\mu]$$
Discard the term $\mu^2$, since it does not depend on $\lambda$ (it will be absorbed into the normalizing constant); then do some algebraic manipulation completing the square inside the bracket, and you will find the kernel of a certain Gaussian density... truncated to the support
$$(\lambda|\mathbf{x}) >\log x_{(n)}$$ |
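Completing the square explicitly (my addition, filling in the step left to the reader):

```latex
\begin{align*}
-n\lambda-\frac{(\lambda-\mu)^2}{2\sigma^2}
  &= -\frac{1}{2\sigma^2}\left[\lambda^2-2\lambda(\mu-n\sigma^2)+\mu^2\right]\\
  &= -\frac{\bigl(\lambda-(\mu-n\sigma^2)\bigr)^2}{2\sigma^2}
     +\underbrace{\frac{(\mu-n\sigma^2)^2-\mu^2}{2\sigma^2}}_{\text{constant in }\lambda},
\end{align*}
```

so $\pi(\lambda|\mathbf{x})$ is the kernel of a $\mathcal N(\mu-n\sigma^2,\,\sigma^2)$ density truncated to $\lambda>\log x_{(n)}$.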
How does one assign probabilities to events? | You sort of answer your own question: "it could be anything provided it satisfies the axioms". In general, all one needs is to satisfy the criteria with a given function and it then gains the label of "probability function".
Now, if you are asking more about how we do modeling, then it might be instructive to study a few explicit derivations. A simple example would be to determine the probability of drawing a given color marble out of a bag:
There are $b$ blue marbles, $r$ red marbles, and $g$ green marbles in a bag. The probability one will draw a red marble is $r/(r+b+g)$. The probability one will draw a blue marble is $b/(r+b+g)$. The probability one will draw a green marble is $g/(r+b+g)$. In each case the probability to draw a given color marble is given as the amount of that color marble divided by the total number of marbles. The division by the total ensures that the probabilities of all events sum to 1 (i.e. to draw any color marble, the probability is 1/1). The procedure with the bag is important here, as we must randomly sample from the collection which means that each marble has a $1/(r+b+g)$ probability. Only after the fact do we notice the color. This is actually your intuition above: each marble is chosen with equal probability (as we use uniform randomness to make our choice) and then we group the events based on the color feature. So in the case of red marbles, each has a $1/(r+b+g)$ probability, but we have $r$ of them, implying the probability to choose any red is $r/(r+b+g)$.
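To see the empirical frequencies converge to these ratios, here is a tiny simulation (my own sketch; the counts $b=3$, $r=5$, $g=2$ are made up):

```python
import random

# Simulation of the bag model: each marble is equally likely, so the
# probability of a colour is (count of that colour) / (total marbles).
b, r, g = 3, 5, 2
bag = ["blue"]*b + ["red"]*r + ["green"]*g

random.seed(0)
n = 100_000
draws = [random.choice(bag) for _ in range(n)]
freq_red = draws.count("red") / n

print(freq_red)   # close to r/(r+b+g) = 0.5
```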
Typically these types of idealized examples rely on some form of combinatorics to determine probabilities of events. However, idealization is a key feature here, as this example is deceptively simple. Idealized models assume we have perfect knowledge of the scenario/system. If one begins to look at examples in the real world, things begin to become less certain. Here we might consider a dice example:
Consider a 6 sided dice. Ideally, one would say that each side has an equal probability of occurring (i.e. each outcome has probability 1/6), but how do we know that the dice is not slightly off-balance or totally biased? We seem to be making a simplifying assumption based on how we believe dice to behave, when it may very well be the case that the dice is not the same as every other dice due to variations in its physical structure. If we sit and toss the dice N times we might gather some data and assign probabilities based on that. The issue here is that there is no guarantee that we won't get all sixes or something else unintuitive during our data collection. One would need to roll the dice infinitely many times in order to produce the actual probability distribution. Even then it would seem that the dice's state would change due to impact with the surface we rolled it upon. Now we are technically dealing with a different dice (in a different physical state than the previous roll) each time we roll it. One might imagine a world where we could perfectly model the physics of the dice and then determine the various biases based on its exact center-of-mass state at the time of each trial. In this case a very precise physical model would drive the assignment of probabilities to events. So we see here that you could construct a model and then derive event probabilities from that, and/or work from a collection of event sample data to assign probabilities to events.
One other thing you might find interesting is the various approaches to probability/statistics. Try asking Google about "Frequentists vs Bayesians". |
Rational Canonical Form Confusion; Choosing Basis Which Gives the Rational Canonical Form. | I found my mistake.
I am wrong when I say that $$\ker f= (F[x]e_1+\cdots+F[x]e_k)+(F[x]a_1(x)e_{k+1}+\cdots+F[x]a_{n}(x)e_n)$$
There exists a basis $(e_1',\ldots,e_n')$ of $F[x]^n$ for which
$$
\ker f= (F[x]e_1'+\cdots+F[x]e_k')+(F[x]a_1(x)e_{k+1}'+\cdots+F[x]a_{n}(x)e_n')
$$
But that basis need not be $(e_1,\ldots,e_n)$. |
Calculate after how many bounces screen saver logo will hit a corners? | Basic approach. Imagine an infinite grid. An imaginary ball, corresponding to the real ball on your screen, starts in the unit square. It starts moving in some direction and continues moving forever in that direction.
When it passes a line of the form $x = j$, where $j$ is an integer, that corresponds to the real ball bouncing off either the left or the right side of the screen. When it passes a line of the form $y = k$, where $k$ is an integer, that corresponds to the real ball bouncing off either the upper or the lower side of the screen.
The imaginary ball always moves in the same direction, but the real ball, of course, changes direction each time it passes one of these integer lines.
However, the moments at which the imaginary ball encounters a corner of the form $(j, k)$, where $j$ and $k$ are both integers, correspond to those moments when the real ball hits a corner as well. So the question is, given an initial starting point $(x_0, y_0)$ and a velocity vector $(v_x, v_y)$, whether the line
$$
\frac{y-y_0}{v_y} = \frac{x-x_0}{v_x}
$$
has an integer solution. If so, the displacement between the initial point $(x_0, y_0)$ and the solution point $(j, k)$, along with the component velocities $v_x$ and $v_y$ will tell you how long it takes to get there.
I'll try to add more about this problem when I get more time. |
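Here is a small sketch of that test (my own code; the function name and the search cutoff are my inventions, and it assumes rational data with $v_x>0$):

```python
import math
from fractions import Fraction as F

def first_corner_time(x0, y0, vx, vy, search=10_000):
    """Smallest t > 0 at which the un-reflected (imaginary) ball has both
    coordinates integer -- i.e. when the real ball hits a corner.
    Inputs are Fractions; assumes vx > 0 for simplicity."""
    for m in range(1, search + 1):
        j = math.floor(x0) + m               # successive integer x-crossings
        t = (j - x0) / vx
        if (y0 + t * vy).denominator == 1:   # is y also an integer here?
            return t
    return None                              # no corner within the window

t = first_corner_time(F(1, 4), F(1, 2), F(1), F(2))
print(t)  # 3/4 -> first corner hit at (1, 2)
```

If the search window is exhausted (e.g. irrationally related slopes approximated by large denominators), the real ball never hits a corner in that window.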
Product measure: How does this fit together? | The event $x(0)=1$ is a cylinder set. Let $y_k=1$ for all $k\in\Bbb Z$; then the event in question is $\operatorname{cyl}(y_0^0)$. |
Are more conjectures proven true than proven false? | This is rather a philosophical question, and merits an answer of a more or less feuilletonistic nature.
Of course I could program my computer to formulate 1000 conjectures per day, which in due course would all be falsified.
Therefore let's talk about serious conjectures formulated by serious mathematicians.
Some conjectures (Fermat's conjecture, the four color conjecture, conjectures about the nonexistence of certain finite projective planes, etc.) are derived from the available data, and proofs of special cases.
If such a conjecture (tentatively and secretly formulated by a mathematician) is wrong it will be less likely that it will see the light of day nowadays than a hundred years ago, since the available computational powers for producing a counterexample (if there is one) have greatly increased.
If, however, a conjecture is the result of deep insight into, and long contemplation of, a larger theory, then it is lying on the boundary of the established universe of truth, and, as a surface in ${\mathbb R}^3$ splits the neighborhoods of any of its points into two about equal parts, should turn out to be true in $50\%$ of the cases. Only conjectures fulfilling this criterion are worth a full bit $\ldots$ |
Number of boys in school | Your equations are correct, so you apparently just made a mistake in solving the system.
I’d multiply the second equation by $10$ to get $200=b+0.2g$ and then subtract that from the first equation to get $200=0.8g$. Then $g=250$, so $b=400-250=150$. |
Is the representation of a linear affine space unique? | Yes, it is.
The $U$ in the representation $S=s+U$ is the same for all $s\in S$, because it is defined as $U=\{s-t:s,t\in S\}$.
As a consequence you have that $s_U^\perp = t_U^\perp$ for all $s,t\in S$,
because $s-t=(s_U-t_U)+(s_U^\perp-t_U^\perp) \in U$ and $s_U-t_U\in U$ imply $s_U^\perp-t_U^\perp\in U$, so it has to be $0$.
Moreover this vector can be characterized as the projection of $0$ on $S$. |
Direction Variation Question | HINT
So indeed, you are given that $x = ay^3$ and $y = b\sqrt{z}$, which implies $x = a \left(b\sqrt{z}\right)^3 = ab^3 z^{3/2}$.
Now you are given then $z=4 \implies x=1$.
Find $ab^3$ with direct substitution
Now plug in both $ab^3$ and $x=27$, can you find $z$? |
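Numerically (my addition, using the given data $z=4 \Rightarrow x=1$ and then $x=27$):

```python
# x = a*y^3 and y = b*sqrt(z) give x = (a*b^3) * z^(3/2).
# From z = 4 -> x = 1:  1 = ab3 * 4**1.5 = 8*ab3, so ab3 = 1/8.
ab3 = 1 / 4**1.5
print(ab3)        # 0.125

# Then x = 27 gives z^(3/2) = 27/ab3 = 216, so z = 216^(2/3) = 36.
z = (27 / ab3) ** (2/3)
print(round(z))   # 36
```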
Let $G$, $H$ be groups. Prove: $e_G \times H$ is normal in $G \times H$ | $\{e_G\}\times H$ is (obviously?) the kernel of the homomorphism $\pi_1\colon G\times H\to G$, $(g,h)\mapsto g$. |
GM-AM Inequality Proof | Here's where the hint is headed. If the numbers in your list are not all equal, then there exist two numbers, call them $a_1$ and $a_2$, such that $A$ lies strictly between them. Create a new list from the original list by replacing $a_1$ with $A$, and replacing $a_2$ with $a_1+a_2-A$. Then, according to the hint (which you've proved via @dxiv's hint), the new list has the same arithmetic mean as, but larger geometric mean than, the original. What next? You can repeat this procedure until you obtain a list consisting of all $A$. What can you say about the arithmetic mean and geometric mean of this final list, and how do these relate to the original list? |
number of solutions and rank | The null space of an $a \times b$ matrix $A$ has dimension $b - \text{rank}(A)$.
The column space has dimension $\text{rank}(A)$.
If a system $Ax = y$ has infinitely many solutions, the null space must have dimension at least $1$, i.e. $\text{rank}(A) < b$.
If a system $Ax = y$ has exactly one solution, the null space must have dimension $0$,
i.e. $\text{rank}(A) = b$.
If a system $Ax = y$ has no solution, then $y$ is not in the column space, so the column space cannot be all of $\mathbb{R}^a$; in particular its dimension is less than $a$. |
Divergence theorem and singularities | The divergence of $ \overline{f} $ is zero everywhere except at the origin where it is undefined, because of the $ \frac{1}{r^2} $ term.
However, the integral of $ \overline{f} $ over $V$ is not zero, because the origin acts as a "source of divergence". This is analogous to the Dirac delta function, which is zero everywhere but at the origin, and whose integral is 1.
Note that since the origin is the only "source of divergence", the flux of $ \overline{f} $ through the sphere has the same value for every radius $R$ — the value of $ 4\pi $ is correct.
If the domain of integration didn't include the origin, this value would have been zero. |
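A numerical version of that statement (my addition): a midpoint-rule surface integral of $F=\hat r/r^2$ over spheres of two different radii gives the same flux, $4\pi$:

```python
import math

# Midpoint-rule surface integral of F = r_hat / r^2 over a sphere of
# radius R. On the sphere F.n = 1/R^2 and dS = R^2 sin(theta) dtheta dphi.
def flux(R, n=400):
    total = 0.0
    for i in range(n):                       # theta in (0, pi)
        th = (i + 0.5) * math.pi / n
        for j in range(n):                   # phi in (0, 2*pi)
            total += ((1.0 / R**2) * R**2 * math.sin(th)
                      * (math.pi / n) * (2 * math.pi / n))
    return total

for R in (1.0, 3.0):
    print(flux(R))   # ≈ 12.5664 = 4*pi, independent of R
```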
Find Basis of $V$ based on $V/W$ and $W$ | You just need to use the fact that
$$ [T]_{\gamma} = \begin{bmatrix} [T(w_{1})]_{\gamma} & \cdots & [T(w_{m})]_{\gamma} & [T(v_{1})]_{\gamma} & \cdots & [T(v_{n})]_{\gamma} \end{bmatrix} $$
Since $T(W) \subseteq W$, you can express each $T(w_{j})$ in terms of the $w_{i}$. This gives you that for $1 \leq j \leq m$,
$$[T(w_{j})]_{\gamma} = \begin{bmatrix} [T(w_{j})]_{\alpha} \\ 0 \end{bmatrix}$$
The second part is a little tricky: you need to use the fact that for each $v_{j}$, you can write $T(v_{j}) = w+T_{2}(v_{j})$ (where in a slight abuse of notation here I am considering the codomain of $T_{2}$ to simply be the span of the $v_{i}$). Now since $w$ is a linear combination of the $w_{i}$, you have that
$$[T(v_{j})]_{\gamma} = \begin{bmatrix} * \\ [T(v_{j})]_{\beta} \end{bmatrix}$$
for each $1 \leq j \leq n$, where the $*$ represents the coefficients on the $w_{i}$ that are needed to obtain $w$. |
proof by contradiction with the well ordering property | The negation of what we want to prove should read "there exists an integer greater or equal to 2 that cannot be expressed as a product of primes". This is equivalent to the statement that your set $S$ is non-empty. Therefore, upon arriving at a contradiction, this statement has to be false, i.e., $S$ has to be empty. |
Find a plane that passes through a point and is perpendicular to 2 planes | You have two vectors (the normals of the given planes) $$u=(2,1,-2), v=(1,0,3)$$ then $n=u×v$
The plane through the given point $P$ is then $$[(x,y,z)-P]\cdot n=0$$ |
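As a sketch (helper names are mine), the normal $n=u\times v$ for these particular $u,v$ works out to $(3,-8,-1)$, and one can confirm it is orthogonal to both given normals:

```python
def cross(u, v):
    # component formula for the cross product u x v
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

u = (2, 1, -2)   # normal of the first given plane
v = (1, 0, 3)    # normal of the second given plane
n = cross(u, v)
print(n, dot(n, u), dot(n, v))   # (3, -8, -1) 0 0
```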
How to compute $J_{\varepsilon}=\int_{B(0, \varepsilon)^c}\Phi(y)\Delta_yf(x-y)dy$? | For $J_{\epsilon}$, first the Laplacian is converted from $\Delta_x$ to $\Delta_y$: it is easy to see that $\nabla_x = -\nabla_y$ when acting on $f(x-y)$, and for the Laplacian the negatives cancel, so the Laplacian in either variable is equal.
Next, instead of using the divergence theorem, use the following form of Green's first identity:
$\int_{U}Dv\cdot Du\,dx = -\int_{U}u\Delta v\,dx +\int_{\partial{U}}\frac{\partial v}{\partial \nu}u\,dS$
both integrals of $K_\epsilon$ and $J_\epsilon$ follow from this
Hope this helps! |
Can someone explain the particular solution for non homogeneous recurrence relations? | $$\begin{array} {rl}
%
a_{n} &= 5a_{n-1} - 6a_{n-2} + 4^n + 2n + 3 \\
%
a_{n+1} &= 5a_{n} - 6a_{n-1} + 4^{n+1} + 2(n+1) + 3 \\
%
a_{n+2} &= 5a_{n+1} - 6a_{n} + 4^{n+2} + 2(n+2) + 3 \\
%
a_{n+3} &= 5a_{n+2} - 6a_{n+1} + 4^{n+3} + 2(n+3) + 3 \\
%
\end{array}$$
Then let $x = 4^n$, $y = 2n$, $z = 1$, plug that into the above 4 equations to get 4 linear equations that are linear in $a_{*}$ and $x, y, z$. Eliminate $x,y,z$ from the equations to get 1 homogenous equation, note that the roots of the original equation $a_n = 5a_{n-1} - 6a_{n-2}$ will also be roots of the new homogenous equation. |
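Carrying out the elimination gives a fifth-order homogeneous recurrence with characteristic polynomial $(r-2)(r-3)(r-4)(r-1)^2 = r^5-11r^4+45r^3-85r^2+74r-24$: the extra roots $4,1,1$ annihilate the $4^n$ and $2n+3$ terms. A quick check in Python (the initial values $a_0=a_1=0$ are arbitrary choices of mine):

```python
# Generate a_n from the non-homogeneous recurrence and check it satisfies the
# order-5 homogeneous recurrence whose characteristic polynomial is
# (r-2)(r-3)(r-4)(r-1)^2 = r^5 - 11 r^4 + 45 r^3 - 85 r^2 + 74 r - 24.
a = [0, 0]  # a_0, a_1: arbitrary initial conditions
for n in range(2, 20):
    a.append(5*a[n-1] - 6*a[n-2] + 4**n + 2*n + 3)

ok = all(
    a[n+5] == 11*a[n+4] - 45*a[n+3] + 85*a[n+2] - 74*a[n+1] + 24*a[n]
    for n in range(len(a) - 5)
)
print(ok)  # True
```

This is also why the particular solution is guessed with terms $\alpha 4^n$, $\beta n + \gamma$: those correspond exactly to the extra characteristic roots.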
Clues on how to solve these types of problems within 2-3 minutes for competitive exams | Hint:
By the product rule you have the following result:
$$\dfrac{\mathrm d}{\mathrm dx}\prod_{k=1}^{100}(x-k)=\left(\prod_{k=1}^{100}(x-k)\right)\left(\sum_{k=1}^{100}\dfrac{1}{(x-k)}\right)$$
Integrate both sides from $0$ to $102$, use the Fundamental Theorem of Calculus and you'll be done in no time. |
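If you want to convince yourself of the displayed identity (it is just the product rule written via the logarithmic derivative), here is a numeric spot check with a smaller product, $N=5$, chosen by me to keep the floating-point magnitudes tame:

```python
import math

N = 5          # small stand-in for 100, to keep the numbers manageable
x0 = 0.5       # any point that is not one of the roots 1..N

P = lambda x: math.prod(x - k for k in range(1, N + 1))

# right-hand side of the identity: P(x) * sum 1/(x-k)
rhs = P(x0) * sum(1.0 / (x0 - k) for k in range(1, N + 1))

# left-hand side via a central finite difference
h = 1e-6
lhs = (P(x0 + h) - P(x0 - h)) / (2 * h)

print(lhs, rhs)   # the two agree to several digits
```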
Asymptotic behavior of $a_{n+1}=a_n + f(a_n)$ | As a first cut, I would write
$a_{n+1}-a_n \approx a'(x)$
so my guess would be
$a'(x) \approx f(a(x))$. |
Rolle's theorem proof in Apostol: meaningfulness of interior | The requirement that $f'(a)$ and $f'(b)$ also exist makes the premise unnecessarily strong. Allowing $c=a$ or $c=b$ makes the conclusion unnecessarily weak. Hence both your changes weaken the theorem. |
Evaluate $\int_0^1\frac{\tan^{-1}ax}{x\sqrt{1-x^2}}\,dx$ | Let $$I(a) = \int_0^{1}\frac{\arctan {ax}}{x\sqrt{1-x^2}}dx$$
Differentiating the integral with respect to $a$,
$$I'(a) = \int_0^{1}\frac{dx}{\sqrt{1-x^2}(1+a^2x^2)}$$
Let $$u^2 = \frac{1-x^2}{1+a^2x^2} \implies x^2=\frac{1-u^2}{1+a^2u^2} \therefore xdx=-\frac{u(1+a^2)}{(1+a^2u^2)^2}du$$
Therefore, the integral reduces to,
$$I'(a)=\frac{1}{\sqrt{1+a^2}}\int_0^{1}\frac{du}{\sqrt{1-u^2}}$$
$$\therefore I'(a)=\frac{\pi}{2\sqrt{1+a^2}} \implies I(a)=\frac{\pi}{2}\ln|a+\sqrt{1+a^2}|+C$$ where C is an undetermined constant that can be found by putting $a=0$ in the original integral which evaluates to $0$. Therefore, $C=0$ and the integral is
$$I(a)=\frac{\pi}{2}\ln|a+\sqrt{1+a^2}|$$ |
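A numerical sanity check of the closed form (sketch; the substitution $x=\sin t$ removes the endpoint singularity, and note $\frac{\pi}{2}\ln|a+\sqrt{1+a^2}| = \frac{\pi}{2}\operatorname{asinh} a$):

```python
import math

def I_numeric(a, n=20000):
    # substitute x = sin(t):  I(a) = integral_0^{pi/2} arctan(a sin t)/sin t dt,
    # which has no singularity (the integrand tends to a as t -> 0)
    total = 0.0
    h = (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * h
        total += math.atan(a * math.sin(t)) / math.sin(t) * h
    return total

def I_closed(a):
    return (math.pi / 2) * math.asinh(a)

for a in (0.5, 1.0, 3.0):
    print(a, I_numeric(a), I_closed(a))   # the two columns agree
```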
Show that this function is not continuously differentiable | If $f'$ is continuous on $[0,1]$ it is in particular continuous at $0$, but $$f'(x) = ax^{a-1}\sin\frac{1}{x} - x^{a-2}\cos\frac{1}{x}$$ for $x>0$, and because $$\lim_{x \searrow 0}\; f'(x)$$ doesn't exist for $a \in (1,2]$, the derivative $f'$ is not continuous at $0$, hence not continuous on $[0,1]$. |
Check whether $f(x,y)$ is the density of a gaussian vector | If it is really a Gaussian distribution, there should exist a positive definite matrix ${\bf A} \in \mathbb{R}^{2 \times 2}$ and a vector ${\bf m} \in \mathbb{R}^2$ such that:
$$f({\bf u}) = \frac{1}{2\pi \sqrt{|\det{{\bf A}}|^{-1}}}e^{-\frac{1}{2}{(\bf u - \bf m)}^{\top} {\bf A} {(\bf u - \bf m)}},$$
where ${\bf u} = \begin{bmatrix}x\\y\end{bmatrix}.$
Let $${\bf A} = \begin{bmatrix}a & b\\b & d\end{bmatrix} ~\text{and}~ {\bf m} = \begin{bmatrix}m_x \\ m_y\end{bmatrix}.$$
Then:
$$({\bf u} - {\bf m})^{\top} {\bf A} ({\bf u} - {\bf m}) = ax^2 + dy^2 -2 x(a m_x+bm_y) - 2y(bm_x +d m_y) + 2bxy + (am_x^2 + 2bm_x m_y + dm_y^2) = x^2 - 2xy + 9y^2.$$
As a direct consequence, $a = 1$ ($ax^2 = x^2$), $b = -1$ ($2bxy = -2xy$) and $d = 9$ ($dy^2 = 9y^2$).
Notice that the eigenvalues of
$${\bf A} = \begin{bmatrix}a & b\\b & d\end{bmatrix}$$
are both positive since both $\text{tr}({\bf A}) = a + d = 10$ and $\det{\bf A} = ad-b^2 = 8$ are positive. Hence, ${\bf A}$ is positive definite.
Now, we need to check that $|\det{{\bf A}}|^{-1} = 1.$
Recall that $\det({\bf A}) = ad - b^2$, and hence:
$$|\det{{\bf A}}|^{-1} = \frac{1}{|ad - b^2|} = \frac{1}{|9 - 1|} = \frac{1}{8} \neq 1.$$
In conclusion, the function you wrote is not a Gaussian distribution. |
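The coefficient matching and the determinant value can be double-checked mechanically (sketch; the random sample points are my choice):

```python
import random

a, b, d = 1.0, -1.0, 9.0   # entries of A obtained by matching coefficients
# (matching the linear and constant terms forces m = 0, since A is invertible)

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]
max_dev = max(
    abs((a*x*x + 2*b*x*y + d*y*y) - (x*x - 2*x*y + 9*y*y))
    for x, y in pts
)
det_A = a*d - b*b
print(max_dev, det_A, 1.0 / det_A)   # ~0, 8.0, 0.125 (so |det A|^{-1} = 1/8)
```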
Exponential Generating Function for the recurrence relation $a_n=\sum_{k=1}^n {n\choose k}a_{n-k}$ | As pointed out in comments, at $n=0$ the "note" fails; you need to separate out this case beforehand.
$$A(x)=\sum_{n=0}^\infty a_n\frac{x^n}{n!}=1+\sum_{n=1}^\infty a_n\frac{x^n}{n!}=1+\sum_{n=1}^\infty\sum_{k=1}^n \binom nka_{n-k}\frac{x^n}{n!}$$
$$=1+\sum_{n=1}^\infty\sum_{k=0}^n \binom nka_{n-k}\frac{x^n}{n!} - \sum_{n=1}^\infty a_n\frac{x^n}{n!}=2+\sum_{n=1}^\infty\sum_{k=0}^n \binom nka_{n-k}\frac{x^n}{n!} - A(x)$$
$$=1+\sum_{n=0}^\infty\sum_{k=0}^n \binom nka_{n-k}\frac{x^n}{n!} - A(x)=\cdots=1+A(x)e^x-A(x)$$ |
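Solving $A(x)=1+A(x)e^x-A(x)$ gives $A(x)=\frac{1}{2-e^x}$, whose coefficients are the ordered Bell (Fubini) numbers. A quick consistency check of the recurrence against the expansion $\frac{1}{2-e^x}=\sum_{k\ge0}e^{kx}/2^{k+1}$, i.e. $a_n=\sum_{k\ge0}k^n/2^{k+1}$ (truncation at $k=200$ is my choice):

```python
from math import comb

a = [1]                       # a_0 = 1
for n in range(1, 8):
    a.append(sum(comb(n, k) * a[n - k] for k in range(1, n + 1)))
print(a)   # [1, 1, 3, 13, 75, 541, 4683, 47293], the ordered Bell numbers

# A(x) = 1/(2 - e^x) implies a_n = sum_{k>=0} k^n / 2^(k+1); compare numerically
approx = [sum(k**n / 2**(k + 1) for k in range(200)) for n in range(8)]
```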
Properly discontinuous action and Fundamental Domain | Proper discontinuity does imply the existence of a fundamental domain, at least in the case of an action by isometries. You can then take, as a fundamental domain, the Dirichlet domain defined as follows:
$$D := \{ y \in \mathbb{R}^n | \forall g \in G, d(y, x) \leq d(y, gx) \}$$
where $x$ is some fixed element of $\mathbb{R}^n$ (for example $0$).
(In the general case this statement is also true, but you have to use a different definition for proper discontinuity and possibly impose some additional conditions. The Dirichlet domain construction no longer works, so the proof is much harder.)
On the other hand, the assumption of having a compact fundamental domain is stronger than proper discontinuity. For example consider the action of $\mathbb{Z}$ on $\mathbb{R}^2$ by translations (by integer multiples of, say, the first basis vector $e_1$). This action is properly discontinuous; but its fundamental domains are vertical strips of width $1$ and infinite height, so they are not compact. |
How to find the locus of a midpoint from a known point and a moving point? | $$C: x^2+y^2-12x+8y+20=0$$
$$(x-6)^2+(y+4)^2=32$$
In polar form, let the point on the circle be $$P\equiv(4\sqrt2\cos\theta+6,4\sqrt2\sin\theta-4)$$
By mid-point formula,
$$Q\equiv(2\sqrt2\cos\theta+3,2\sqrt2\sin\theta-2)$$
To get the locus of $Q$,
$$(x-3)=2\sqrt2\cos\theta$$
$$(y+2)=2\sqrt2\sin\theta$$
Squaring and adding,
$$(x-3)^2+(y+2)^2=8$$ |
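A quick numeric check (sketch; the fixed point $(0,0)$ is the known point implied by the midpoint step above):

```python
import math

def residual(theta):
    px = 4 * math.sqrt(2) * math.cos(theta) + 6   # P on (x-6)^2+(y+4)^2 = 32
    py = 4 * math.sqrt(2) * math.sin(theta) - 4
    qx, qy = px / 2.0, py / 2.0                   # midpoint of P and (0, 0)
    return (qx - 3) ** 2 + (qy + 2) ** 2 - 8      # should vanish on the locus

worst = max(abs(residual(0.1 * i)) for i in range(63))
print(worst)   # ~1e-14
```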
Adjoining elements to a ring | In general, given a commutative ring $R$ and an element $\alpha$ lying in some ring $E$ where $R \subseteq E$ we have the subring $\langle R \cup \{\alpha\}\rangle = \{f(\alpha): f \in R[x]\}$.
This is forced upon us by the closure axioms of a ring: polynomial rings over a commutative ring can be thought of as "universal adjunction rings". The above statement is usually framed this way:
There is a unique surjective ring-homomorphism $R[x] \to R[\alpha]$ that sends $x \to \alpha$ (sometimes this is phrased in terms of a similar homomorphism $R[x] \to E$). The astute reader will note this defines a universal mapping property which essentially turns the rings $R[x]$ into "free objects" of some kind.
One can think of this as "polynomial expressions in $\alpha$", but beware: it can happen that $f(\alpha) = 0$ in $E$, so these expressions aren't always unique. For example, when $R = \Bbb Z$, and $\alpha = \frac{p}{q} \in \Bbb Q$ with $\gcd(p,q) = 1$ then we can write $k = rp + sq \in \Bbb Z$, and the polynomial expressions induced by $f(x) = k$ and $g(x) = rqx + sq$ are, in fact, the same element of $\Bbb Q$.
This is one of the reasons why polynomial rings are so important, they serve as templates for understanding (simple) extension rings. |
Isolatedness of Lefschetz map - almost there | An alternative solution, directly continuing Jellyfish's beginning of a solution to G&P's Problem 5-10, Chapter 1.
Since $df_x$ has no eigenvalues equal $1$, by Problem 5-9 of G&P
$$\text{graph}(df_x) \pitchfork \Delta(T_x(X)\times T_x(X)).$$
It's an easy exercise to check that $\Delta(T_x(X)\times T_x(X)) = T_{x,x}(\Delta(X\times X)),$ so we now have
$$\text{graph}(f) \pitchfork \Delta(X\times X).$$
The overlap of these two sets is $W := \{(x,f(x))\colon x\in X, x=f(x)\},$ in bijective (in fact, diffeomorphic) correspondence with the set of fixed points of $f$. Now, by the transversality theorem from G&P, $W$ is a submanifold of $X\times X$. Furthermore, $\text{codim}(W) = \text{codim}(\text{graph}(f)) + \text{codim}(\Delta(X\times X)) = 2n.$ But this shows that $W$ is in fact a manifold of zero dimension, that is a set of isolated points. Thus the set of fixed points of $f$ is also an isolated set. As a subset of compact $X$ it must thus be finite. |
find all four solutions $x^2 \equiv 133 \pmod {143}$ | You could benefit from some preamble before you jump into the separate modular solutions.
Since $143=11\cdot 13$, with $11$ and $13$ prime, we can find the solutions, if any, by considering the problem of $x^2\equiv 133 \bmod 143$ to each prime modulus and then combining those results with the Chinese Remainder theorem.
So then ( as you already have):
$ \begin{align}
x^2 &\equiv 133 \pmod {11}\\
x^2 &\equiv 1 \pmod {11}\\
x &\equiv \pm 1 \pmod {11} \end{align}$
$\qquad \qquad \qquad$
$\begin{align}
x^2 &\equiv 133 \pmod {13}\\
x^2 \equiv 3 &\equiv 16 \pmod {13}\\
x &\equiv \pm 4 \pmod {13}\end{align}
$
Finding a number which squares to a given value under a particular modulus may not be simple. For small modulus values like this it's easy enough to spot squares just by adding the modulus to look for natural squares though.
Given two values for each of the contributing prime moduli, the Chinese remainder theorem will give you four answers in range, but note that two of them will simply be the negation of the other two. So we can pick $x\equiv 1 \bmod 11$ for example and combine with both the answers on the $\bmod 13$ side:
As a preliminary, $11^{-1}\equiv (-2)^{-1} \equiv -7\equiv 6 \bmod 13$
$\left .
\begin{array}{r}
x\equiv 1 \bmod 11 \\
x\equiv 4 \bmod 13 \end{array}
\right\}\space
1+11k \equiv 4 \implies k\equiv 11^{-1}\cdot 3 \equiv 6\cdot 3 \equiv 5 \bmod 13 \\\implies x\equiv 1+11\cdot 5 \equiv 56 \bmod 143$
$\left .
\begin{array}{r}
x\equiv 1 \bmod 11 \\
x\equiv -4\equiv 9 \bmod 13 \end{array}
\right\}\space
1+11k \equiv 9 \implies k\equiv 11^{-1}\cdot 8 \equiv 6\cdot 8 \equiv 9 \bmod 13 \\\implies x\equiv 1+11\cdot 9 \equiv 100 \bmod 143$
Then taking $x\equiv -1\bmod 11$ will give the negation of these values, $-56\equiv 87$ and $-100\equiv 43\bmod 143$ (which you can work through to check if you are not sure about this) so $x\equiv \{43,56,87,100\} \bmod 143$ |
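Since the modulus is small, a brute-force check confirms these are exactly the four square roots:

```python
# find all x with x^2 = 133 (mod 143) by exhaustive search
roots = [x for x in range(143) if (x * x) % 143 == 133]
print(roots)   # [43, 56, 87, 100]
```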
There are 10 sticks of length 1,..,10. How many triangles can be formed | A first approach is to calculate small values and check OEIS. Assuming you are asking the number of selections without replacement of three sticks that make a nondegenerate triangle I get $1,3,7,13,22,\dots$ It is easiest to start from $n=4$, for which there is only one triangle. Then for $n=5$ you just need to count the triangles that include $5$, and so on. Putting that into OEIS finds A002623, which the first comment says is just what you are after. For $n=10$, there are $50$. In the formula section we see $a(n) = \sum_{k=0}^n (-1)^{n-k}{k+3\choose 3}$ and $a(n) = \sum_{k=0}^n \lfloor(k+2)^2/4\rfloor$ |
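The brute-force count is immediate to reproduce (sketch):

```python
from itertools import combinations

triangles = [
    (a, b, c)
    for a, b, c in combinations(range(1, 11), 3)
    if a + b > c   # with a < b < c this is the only binding triangle condition
]
print(len(triangles))   # 50
```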
questions about how to show sequence of functions are uniform convergent | The theorem is just saying this: Suppose that $f_n \rightarrow f$ and $f'_n \rightarrow g$ uniformly. Then $f$ must be nice enough that we can take its derivative, and we also have $$\frac{d}{dx} f = \frac{d}{dx} \lim f_n = \lim \frac{d}{dx} f_n = g.$$
In other words, we can swap the order of taking limits and taking derivatives whenever the derivatives converge uniformly to something ($g$ in this case). |
Functional derivative or operator (of $f$) giving $\frac {f''}{f}$? | If $h$ is a function such that $$\frac{\partial }{\partial t}\left\{h(f(t))\right\} = \frac{f''(t)}{f(t)}$$ holds for all $f$ then take $f(t)=t$ to see that $$\frac{\partial }{\partial t}\left\{h(t))\right\} =0$$ which means that $h$ is a constant. Hence there is no such function. |
Embedding partially ordered spaces in a product of chains | Just to give closure to the question, the most relevant notion for my question that I was able to find, is the one used in defining the Order dimension of a poset. |
Questions regarding the proof of quantifier elimination of DLO | Re: your first question, the issue is that different models of a theory might behave differently. Remember that "$\Gamma\models\chi$" means "every model of $\Gamma$ satisfies $\chi$" - this is an easy point to get tripped up on. Basically, the issue is that a specific case doesn't tell you too much about the general case.
Knowing that $\mathbb{Q}\models$ DLO and $\mathbb{Q}\models \phi$ just tells you that $\phi$ is consistent with DLO; other models of DLO might satisfy $\neg\phi$. For example, letting $G$ be your favorite abelian group and Grp be the group axioms, we have that $G\models$ Grp and $G\models x_1*x_2=x_2*x_1$, but we know that $x_1*x_2=x_2*x_1$ isn't a logical consequence of Grp (since there are non-abelian groups).
However, DLO is complete, so any sentence true in some model of DLO is true in every model of DLO. Put another way, anything true in some model of DLO - like $\mathbb{Q}$ - is a logical consequence of DLO; there's no "possible-but-not-necessary" going on here.
Now for your second question, look back at the definition of "equivalent" - two sentences $\psi,\theta$ are equivalent over a theory $T$ if $T$ entails $\psi\leftrightarrow\theta$.
One easy way this could happen is if $T$ entails each sentence separately; that is, if $T$ entails $\psi$ and $T$ entails $\theta$. This is because (think back to truth tables) $\psi\leftrightarrow\theta$ is true in a structure iff either both $\psi$ and $\theta$ are true in that structure or both $\psi$ and $\theta$ are false in that structure; if $T\models\psi$ and $T\models\theta$ then both $\psi$ and $\theta$ are true in every model of $T$, so $\psi\leftrightarrow\theta$ is true in every model of $T$. (And of course we'd get the same behavior if $T$ entails the negation of each sentence separately.) That's what's going on in this case:
$\mathbb{Q}$ is a model of DLO and satisfies $\phi$.
Since DLO is complete, this means every model of DLO satisfies $\phi$; that is, DLO$\models \phi$.
And of course DLO $\models x_1=x_1$, since in fact $\emptyset\models x_1=x_1$ (that is, "$x_1=x_1$" is true with respect to every variable assignment in every structure).
So DLO $\models\phi\leftrightarrow x_1=x_1$. |
Show that the triangle inequality remains for the following norm in $W^{m,p}(\Omega)$ | Answer to the original question:
You're just being overcautious. For each multi-index $\alpha$, by Minkowski's inequality, we have
\begin{align}
\left(\int_{\Omega}|D^{\alpha} (u+v)|^p\textrm{d}x\right)^{1/p}&=\left(\int_{\Omega}|D^{\alpha} u+D^{\alpha}v|^p\textrm{d}x\right)^{1/p}\\
&\leq \left(\int_{\Omega}|D^{\alpha} u|^p\textrm{d}x\right)^{1/p}+\left(\int_{\Omega}|D^{\alpha} v|^p\textrm{d}x\right)^{1/p}
\end{align}
Use this for each term in the sum, and you're done.
In general, the sum of semi-norms is, again, a semi-norm.
Answer to the current question:
Now you have a composition of norms, but it isn't actually much of a problem:
It's probably easier to write
$$
||u||_{m,p}=\left(\sum_{|\alpha|\leq m} ||D^{\alpha}u||_p^p\right)^{1/p}
$$
Thus, applying Minkowski
$$
||u+v||_{m,p}=\left(\sum_{|\alpha|\leq m} ||D^{\alpha}u+D^{\alpha}v||_p^p\right)^{1/p}\leq \left(\sum_{|\alpha|\leq m}(||D^{\alpha}u||_p+||D^{\alpha}v||_p)^p\right)^{1/p}
$$
Now, for each $n,$ $(\sum_{j=1}^n |x_j|^p)^{1/p}$ defines a norm on $\mathbb{R}^n$ (it's $L^p$ of the counting measure on $\mathbb{R}^n$ if you will).
Thus, applying the triangle inequality of this norm, we arrive at
$$
||u+v||_{m,p}\leq \left(\sum_{|\alpha|\leq m}||D^{\alpha}u||_p^p\right)^{1/p}+ \left(\sum_{|\alpha|\leq m}||D^{\alpha}v||_p^p\right)^{1/p}=||u||_{m,p}+||v||_{m,p},
$$
which was what we wanted. |
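The same two-step argument can be illustrated on a finite-dimensional toy analog, where each $\|D^\alpha u\|_p$ is replaced by the $p$-norm of one block of a vector (the blocks, $p$, and sample data below are all my own choices):

```python
# Toy finite-dimensional analog: N(u) = (sum over blocks of ||block||_p^p)^(1/p),
# mirroring the Sobolev norm with "multi-indices" replaced by blocks.
p = 3.0

def block_norm(u):
    # u is a list of blocks of numbers
    return sum(sum(abs(x) ** p for x in blk) for blk in u) ** (1 / p)

u = [[1.0, -2.0, 0.5], [0.3, 4.0]]
v = [[-1.5, 0.7, 2.0], [1.1, -0.4]]
uv = [[x + y for x, y in zip(bu, bv)] for bu, bv in zip(u, v)]

lhs = block_norm(uv)
rhs = block_norm(u) + block_norm(v)
print(lhs, rhs)   # lhs <= rhs, the triangle inequality
```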
Convolution and laplace transform | Using convolution, for $f(t) = e^{t},~ g(t) = \sin t$, we have
$$(f * g)(t) = \displaystyle \int_0^t e^{\tau} ~\sin(t - \tau)~d \tau = \dfrac{1}{2}(e^{t} - \sin t - \cos t)$$
Calculate the Laplace Transform of that convolution result.
Next, note that (compare to the previous result)
$$\mathcal{L}(f * g) = \mathcal{L}(f)~\mathcal{L}(g) = F(s)~G(s) = \frac{1}{s-1} ~ \frac{1}{s^2+1}$$
Update
Lastly
$$\mathcal{L} (e^t \sin t) = \dfrac{1}{(s-1)^2+1}$$
We derive this using entirely different methods than above, for example, using the definition of the Laplace Transform, we have
$$\mathcal{L}(e^t \sin t) = \displaystyle \int_0^{\infty} e^{-s t} f(t) dt = \int_0^{\infty} e^{-s t} (e^t \sin t)~dt = \dfrac{1}{s^2-2 s+2}$$
We can also use item $19.$ from the Laplace Table, we have
$$\mathcal{L} (e^{at} \sin (b t)) = \dfrac{b}{(s-a)^2 + b^2 } = \dfrac {1}{(s-1)^2 + 1^2 } = \dfrac {1 }{(s-1)^2 + 1 } = \dfrac{1}{s^2-2 s+2}$$
Do you now understand why you are having issues? In one case, we are finding the Laplace Transform of the convolution of two functions and in the other, we are finding the Laplace Transform of the product of two functions and these are different things. |
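A numerical check of $\mathcal{L}(e^t \sin t)$ at a couple of points (sketch; the truncation $T=40$ and grid size are my choices):

```python
import math

def laplace_et_sint(s, T=40.0, n=200000):
    # midpoint rule for integral_0^T e^{-st} e^t sin(t) dt (needs s > 1)
    h = T / n
    return sum(
        math.exp((1 - s) * ((i + 0.5) * h)) * math.sin((i + 0.5) * h) * h
        for i in range(n)
    )

for s in (2.0, 3.0):
    print(s, laplace_et_sint(s), 1.0 / ((s - 1) ** 2 + 1))   # columns agree
```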
Why integer GCDs are positive? [unit normalization of GCDs] | The text is using the $\color{#c00}{\rm universal}$ definition of a gcd, namely
$$\ c\mid a,b \!\!\color{#c00}{\overset{\rm u\!\!}\iff}\! c\mid \gcd(a,b)$$
Direction $(\Leftarrow)$ implies that gcd is a common divisor of $a,b\,$ (by choosing $ c = \gcd(a,b))$ and the reverse direction $(\Rightarrow)$ implies that the gcd is "greatest" w.r.t. divisibility order, i.e. divisible by all other common divisors $c$ of $a,b\,$ (so "greater" magnitude in $\,\Bbb Z,\,$ and greater degree in $K[x])$
Generally a gcd is not unique: if $\,d,d'$ are both gcds of $\,a,b\,$ then $\, c\mid d\!\!\color{#c00}{\overset{\rm u\!\!}\iff}\! c\mid a,b\!\!\color{#c00}{\overset{\rm u\!\!}\iff}\! c\mid d'\,$ so specializing $\,c =d\,$ and $\,c = d'\,$ shows $\,d\mid d'\mid d,\,$ i.e. $\,d\sim d'\,$ are associate (divide each other). The converse also holds true: if $\,d=\gcd(a,b)\,$ is associate to $\, d'\,$ then $\,d\mid d'\mid d,\,$ so $\,c\mid d\!\iff\! c\mid d',\,$ so $\,d'$ is also a gcd of $\,a,b.\,$ In an integral domain $\,a\,$ is associate to $\,b\!\iff\!$ they differ by a unit multiple, i.e. $\,a = ub\,$ where $\,u\,$ is a unit (invertible). Thus gcds are preserved by unit scalings.
In some rings with simple unit group structure we can choose canonical representatives of associate classes, which allows us to choose normal forms for gcds, e.g. in $\,\Bbb Z\,$ (with units $\pm 1)$ we normalize gcds $\ge 0,\,$ and in a polynomial ring $\,K[x]\,$ over a field (units = constants $0\neq c\in K) $ we normalize polynomial gcds to be monic (lead coeff $\,c_n = 1),\,$ by scaling the polynomial by $\,c_n^{-1}\,$ if need be (so a constant gcd $\,c_0\neq 0$ normalizes to $1).\,$ Hence in both cases we can say that two elements are coprime $\!\iff\!$ their gcd $= 1$ (vs. a unit). Such normalizations are sometimes called unit normal representatives in the literature. |
Covariance matrix and projection | Your original data matrix containing the points (in this case, 2-dimension) is
$$ D = \begin{bmatrix}
-1 & -1\\
1 & 2
\end{bmatrix} $$
A shift of origin can be performed to center the points (-1,1) and (-1,2).
The mean of the x coordinates = -1.
The mean of the y coordinates = 3/2.
This will transform the data matrix to
$$ D = \begin{bmatrix}
-1-(-1) & -1-(-1)\\
1-\frac{3}{2} & 2-\frac{3}{2}
\end{bmatrix} =
\begin{bmatrix}
0 & 0\\
-\frac{1}{2} & \frac{1}{2}
\end{bmatrix}$$
The covariance matrix is given by $DD^T$
$$ DD^T = \Sigma =
\begin{bmatrix}
0 & 0\\
0 & \frac{1}{2}
\end{bmatrix}
$$
The projection (in this case, into 1-dimension) of the points in $D$ on a vector $v$ is given by $v^TD$.
$v^TD$ can now be thought of as your 'Projected Data Matrix (P)' whose components give the coordinates of the points in the projected space (in this case, along the vector $v$).
Noting that $DD^T=\Sigma$, the covariance (variance, if 1-dimension) of $P$ is thus given by $PP^T$,
$$PP^T=v^TD(v^TD)^T=v^TDD^Tv=v^T\Sigma v$$ |
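The chain $PP^T = v^T\Sigma v$ can be checked concretely on this two-point example (sketch; the unit vector $v$ is an arbitrary choice of mine):

```python
import math

# centered data matrix D (columns are the centered points)
D = [[0.0, 0.0],
     [-0.5, 0.5]]

# covariance Sigma = D D^T
Sigma = [[sum(D[i][k] * D[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(Sigma)          # [[0.0, 0.0], [0.0, 0.5]]

# project onto a unit vector v and compare the variance of P = v^T D
# with the quadratic form v^T Sigma v
v = [1 / math.sqrt(2), 1 / math.sqrt(2)]
P = [v[0] * D[0][k] + v[1] * D[1][k] for k in range(2)]
var_P = sum(p * p for p in P)    # P P^T (the data is already centered)
quad = sum(v[i] * Sigma[i][j] * v[j] for i in range(2) for j in range(2))
print(var_P, quad)    # both 0.25
```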
Showing two conditions are equivalent for a non-empty convex set $K$ of a Hilbert Space $X$. | You will find the proof (with even a figure !) in Bourbaki Topological Vector Spaces Ch V § 5 Theorem 1.
Note: you don't need $K$ to be closed for the equivalence; it is only when $K$ is closed that the existence of that closest point $y$ is guaranteed. |
Polynomial nth derivative | Recall the structure of the Taylor series centered at $x=a$:
$$f(a)+f'(a)(x-a)+\cdot\cdot\cdot+\dfrac{f^{(n)}(a)}{n!}(x-a)^n+\cdot\cdot\cdot$$
While of course the function is just a polynomial of degree $n$ and not a full Taylor series, the convenience of this form is that $f^{(n)}(a)$ can be found easily.
Note: For a polynomial of degree $n$, we can take a generic polynomial
$f(x)=a_n(x-a)^n+\cdot\cdot\cdot+a_1(x-a)+a_0 \quad a_i\in \mathbb{R},\quad\!\! i\in\{0,1,\cdot\cdot\cdot,n\}$
$f(a)=0!\,a_0=b_0$
$f'(a)=1!\,a_1=b_1$
$\quad\vdots$
$f^{(n)}(a)=n!\,a_n=b_n$
This reveals something special about the statement "show that there exists a polynomial of degree at most $n$ such that conditions are true." |
How to add this combinations? PLEASE HELP | Hint
The expression is the same as
$$\binom {1000}{950}+\binom {999}{950}+\binom {998}{950}\cdots +\binom {950}{950}$$
And now use the Hockey stick identity to get the answer as $\binom {1001}{951}$ |
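With `math.comb` this is easy to confirm:

```python
from math import comb

# hockey stick: sum_{k=r}^{n} C(k, r) = C(n+1, r+1), here with r = 950, n = 1000
lhs = sum(comb(k, 950) for k in range(950, 1001))
print(lhs == comb(1001, 951))   # True
```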
Why does the commutator subgroup of a unipotent algebraic group have smaller dimension? | If $U$ is a linear algebraic group, then you can embed it at a subgroup of $\operatorname{GL}_n$. Furthermore, if $U$ is unipotent, then $U$ can be embedded as a subgroup of $U_n\subseteq\operatorname{GL}_n$, where $U_n$ denotes the group of all upper triangular unipotent matrices, i.e. upper triangular matrices with $1$ on the diagonal.
Since $U_n$ is solvable, so is any subgroup of it. In particular, any unipotent linear algebraic group is solvable, and therefore $[U,U]$ must be a proper subgroup of $U$.
From now on, we are in characteristic zero. I do not know if the following holds otherwise.
Since $U$ is connected (see below), $U/[U,U]$ is also connected. Therefore, $U/[U,U]$ can not be a finite group, otherwise $[U,U]$ would not be a proper subgroup of $U$. This implies that $[U,U]$ actually has strictly smaller dimension than $U$.
Connectedness: Let $u\in U$; then there is a nilpotent matrix $x$ with $u=\exp(x)$.
For $t\in\Bbbk$, you have $\exp(tx)=\exp(x)^t=u^t\in U$. Since $\Bbb Z$ is dense in $\Bbbk$, this implies that $\exp(tx)\in U$ holds for all $t\in\Bbbk$. This gives you a path connecting $u$ to the identity matrix. |
how to solve the inequality? | We have
$$ 2(1 + y) - y(1-y)^2 = 2 + 2y - y + 2y^2 - y^3 = -y^3 + 2y^2 + y + 2$$
and hence
$$ f_2(y) = \frac{1-y}{-y^3 + 2y^2 + y + 2} $$
Now let $y$ be given; we will find the $x \in \mathbb R$ with $f_1(x) < f_2(y)$. We have
\begin{align*}
f_1(x) &< f_2(y)\\
\iff x^2 - 4x + 3 &< 3f_2(y)\\
\iff (x-2)^2 - 1 &< 3f_2(y)\\
\iff (x-2)^2 &< 3f_2(y) + 1\\
\end{align*}
We have
\begin{align*}
3f_2(y) + 1 &= \frac{-y^3+2y^2-2y + 5}{-y^3 + 2y^2 + y + 2}\\
&= \frac{y^3 -2y^2 + 2y - 5}{y^3 - 2y^2 - y -2}
\end{align*}
As we want $(x-2)^2 < 3f_2(y) + 1$, we want this to be positive, Wolfram|Alpha tells us (I'm sure, one can do this by hand using Cardano's formulas to find the points where numerator and denominator change sign) that this is the case for
$\def\ytwo{\frac13\left(2 + \sqrt[3]{44-3\sqrt{177}}+\sqrt[3]{44+3\sqrt{177}}\right)}\def\yone{\frac 13\left(2 - 2 \sqrt[3]{\frac 2{155 + 3\sqrt{1473}}}+\sqrt[3]{\frac12\left(155 + 3\sqrt{1473}\right)}\right)}$
\[ y \not\in \left[\yone, \ytwo\right] \]
Then we can continue
$$ (x-2)^2 < 3f_2(y) + 1 \iff |x-2| < \sqrt{3f_2(y) + 1} $$
that gives $\def\ftwoy{\frac{y^3 -2y^2 + 2y - 5}{y^3 - 2y^2 - y -2}}$
$$ x \in \left(2-\sqrt{\ftwoy}, 2+\sqrt{\ftwoy}\right). $$ |
Symmetric matrix as a sum | Yes, it is true [provided you meant: real symmetric] and is just a rephrasing of the diagonalization theorem for such matrices.
Since $A$ is symmetric, it is diagonalizable with real spectrum $\{\lambda_i\}$ in an orthonormal basis $\{e_i\}$.
This means that, as an operator, $A$ acts on $\mathbb{R}^n$ as follows:
$$
Ax=\sum_{i=1}^n \lambda_i \langle e_i,x\rangle e_i
$$
Now the matrix of each operator $x\longmapsto \langle e_i,x\rangle e_i$ is precisely $e_ie_i^T$. |
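Here is the decomposition written out for a concrete $2\times 2$ example (my choice of matrix): $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ has eigenpairs $\lambda=3$ with $e=(1,1)/\sqrt2$ and $\lambda=1$ with $e=(1,-1)/\sqrt2$, and $\sum_i\lambda_i e_ie_i^T$ reassembles $A$:

```python
import math

A = [[2.0, 1.0],
     [1.0, 2.0]]
s = 1 / math.sqrt(2)
eigs = [(3.0, [s, s]), (1.0, [s, -s])]     # (eigenvalue, unit eigenvector)

# reconstruct A as sum_i lambda_i e_i e_i^T
recon = [[sum(lam * e[i] * e[j] for lam, e in eigs) for j in range(2)]
         for i in range(2)]
print(recon)   # ~[[2.0, 1.0], [1.0, 2.0]]
```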
Automorphisms of $D_{2n}$ | Yes, that's correct. When $n>2$, the cyclic rotation group is characteristic, so all automorphisms have that form. The automorphisms that map $r$ to $r$ form a normal cyclic subgroup $N$ of order $n$. The subgroup that maps $s$ to $s$ forms a non-normal subgroup $U$ of order $\phi(n)$ isomorphic to the automorphism group of $N$, The complete automorphism group of $D_{2n}$ is the semidirect product $N \rtimes U$ (also known as the holomorph of $N$). |
About infinite conditional variance and infinite unconditional variance | We usually only define the variance of a random variable which has finite second moment, but if you wanted to extend the definition to those random variables with only finite first moment then you would just get infinite variance. So in this case the inequality is trivial as both sides are equal to infinity. |
Solving a Diophantine equation involving fourth powers and squares | Consider a positive integer solution, with $y$ minimal, of
$$ x^4-3y^4=z^2\tag{1}$$
We can suppose that $x,y,z$ are pairwise coprime since a common factor of any pair of $x,y,z$ would be a factor of all and cancellation could occur. Then, considered modulo $8$, we see that $y$ is even, say $y=2t$. We rewrite the equation as
$$\left (\frac{x^2-z}{2}\right )\left (\frac{x^2+z}{2}\right)=12t^4.$$
Since the two bracketed factors, $L$ and $M$ say, differ by the odd integer $z$ and have integer product, they are both integers. Furthermore, if $q$ is a prime common factor of $L$ and $M$, then $q$ would be a factor of both $x$ and $z$, a contradiction.
Therefore $\{L,M\}=\{au^4,cv^4\}$, where $ac=12$ and $t=uv$, with $u$ and $v$ coprime, and $$au^4+cv^4=x^2.$$ Since $x$ is odd the only possibility is
$u^4+12v^4=x^2.$ (Note that $4u^4+3v^4=x^2$ is not possible modulo $4$). Then
$$\left (\frac{u^2-x}{2}\right )\left (\frac{u^2+x}{2}\right)=-3v^4.$$
Since the two bracketed factors, $L$ and $M$ say, differ by the integer $x$ and have integer product, they are both integers. Furthermore, if $q$ is a prime common factor of $L$ and $M$, then $q$ would be a factor of both $x$ and $u$, a contradiction.
Therefore $\{L,M\}=\{aU^4,cV^4\}$, where $ac=-3$ and $v=UV$, with $U$ and $V$ coprime, and $$aU^4+cV^4=u^2.$$ Modulo $3$ the only possibility is $\{a,c\}=\{1,-3\}.$ From a solution $(x,y,z)$ of equation (1) we have therefore deduced a second solution $(X,Y,Z)$, where $$Y=\frac{y}{2UX}<y,$$ a contradiction.
NOTE
This is the method used on
Does the equation $y^2=3x^4-3x^2+1$ have an elementary solution? |
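As a (non-proof) sanity check of the descent conclusion, a small brute-force search finds no positive solutions in a modest range (the bound $60$ is an arbitrary choice of mine):

```python
import math

solutions = []
for x in range(1, 60):
    for y in range(1, 60):
        w = x**4 - 3 * y**4
        if w > 0:
            z = math.isqrt(w)
            if z * z == w:
                solutions.append((x, y, z))
print(solutions)   # [], consistent with the descent argument
```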
Problem on Straight lines | The mirrors can be either of the two angle bisectors of the lines $$3x+4y = 5, \qquad 5x - 12y = 10.$$ These are given by $$\frac{3x+4y - 5}{5} = \pm \frac{5x-12y - 10}{13}\tag 1$$ which simplifies to $$14x+112y-15 = 0, \qquad 64x-8y-115=0 \tag 2$$ and these are equivalent to $$ \frac{14}{15}x+\frac{112}{15}y = 1, \qquad \frac{64}{115}x-\frac{8}{115}y=1 \tag 3$$
therefore, there are two choices for the pair $(a,b)$:$$ a = \frac{14}{15}, b = \frac{112}{15} \text{ or } a = \frac{64}{115}, b = -\frac{8}{115}.$$ |
Proof of Riemann_Lebesgue Lemma(Spivak Calculus 15-26) | The inequality is not valid since $\sin(\lambda x)$ changes sign (many times in the limit!). Explicitly, consider the interval $[a,b]=[0,2\pi]$ and function
$$f(x)=\begin{cases}0 & \text{if }x < \pi \\ \frac{x}{\pi}-1 & \text{if }x\ge\pi\end{cases}$$
Let $s_1(x)=0$ and $s_2(x)=\chi_{[\pi,2\pi]}$. Note that $s_1\leq f\leq s_2$. Let $\lambda=2$ and we see that for any $x\in [\pi,2\pi]$ that $s_1(x)\sin(\lambda x)=0$, but $s_2(x)\sin(\lambda x)=\sin(\lambda x)$ takes on both positive and negative values on the interval $[\pi,2\pi]$. |
Direct sums of modules versus products | In general, if $R$ is a ring and $M_i$, $i\in I$, are $R$-modules, then the product $M=\prod_{i\in I}M_i$ is the $R$-module of tuples $\{ (m_i)\mid m_i\in M_i\}$ with componentwise addition and "diagonal" multiplication $r.(m_i)_{i\in I}$. The direct sum $\oplus_{i\in I}M_i$ of $R$-modules $M_i$ is defined as the submodule of the direct product consisting of all $(m_i)$ satisfying $m_i=0$ for almost all $i\in I$. In general, an infinite product is not isomorphic to an infinite sum. For example, the module $M=\prod_{i\ge 0}\mathbb{Z}$ is not free-abelian, but $N=\oplus_{i\ge 0}\mathbb{Z}$ is free-abelian. |
Funny problem. How to average over periodic numbers | This has come up many times on this site in the context of averaging angles. One response is here. About the best you can do is find a central date for each event and average the days from then. For a summer event, maybe the central date is August 1, and you count days to the closest of those. Maybe the central date has to vary by country. |
In least squares estimation, why are the residuals constrained to lie within the space defined by the following equations? | You find $\hat{a}$ and $\hat{b}$ by looking for minima of the function (in $a$ and $b$):
$$\sum_{i=1}^n e_i^2=\sum_{i=1}^n(y_i-a-bx_i)^2$$
so taking partial derivatives in $a$ and $b$ and making them equal to $0$ yields:
$$0=\left[\frac{\partial}{\partial a}\sum_{i=1}^n(y_i-a-bx_i)^2\right]_{a=\hat{a},b=\hat{b}}=-2\sum_{i=1}^n(y_i-\hat{a}-\hat{b}x_i)=-2\sum_{i=1}^n \hat{e_i}$$
$$0=\left[\frac{\partial}{\partial b}\sum_{i=1}^n(y_i-a-bx_i)^2\right]_{a=\hat{a},b=\hat{b}}=-2\sum_{i=1}^n(y_i-\hat{a}-\hat{b}x_i)x_i=-2\sum_{i=1}^n \hat{e_i}x_i$$
which gives you the desired properties. |
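These two constraints are easy to see on concrete data (sketch; the sample points are mine):

```python
# Fit y = a + b x by least squares on sample data and check both
# residual identities: sum(e_i) = 0 and sum(e_i * x_i) = 0.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.1]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
b_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
a_hat = ybar - b_hat * xbar

res = [y - a_hat - b_hat * x for x, y in zip(xs, ys)]
print(sum(res), sum(e * x for e, x in zip(res, xs)))   # both ~0
```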
Increasing the tail of exponential function | I gather you want an exponential-like distribution with a constant tail in $x$, which is zero outside the interval $0<x<1$. Such a PDF can be written
$$ P(x) = A \ \text{max}\big[C,e^{-\lambda x}\big]\big[\theta(x)-\theta(x-1)\big]$$
where $C<1$ and $0< 1/\lambda < 1$. $C$ is your constant tail, and the turnover from exponential-like to constant happens at $x = -\lambda^{-1}\log C$ (a positive number between 0 and 1). The normalization constant $A$ is determined by
$$1 = \int_0^1 P(x) dx = A \Big[ \int_0^{-\lambda^{-1}\log C} e^{-\lambda x} dx + \int_{-\lambda^{-1}\log C}^{1} C dx\Big],$$
Evaluating the two pieces (the turnover point is $x_0=-\lambda^{-1}\log C$, and $\int_0^{x_0} e^{-\lambda x}\,dx = (1-C)/\lambda$), this gives
$$ A = \frac{\lambda}{1 + C\,(\lambda - 1 + \log C)}$$
so
$$ P(x) = \frac{\lambda}{1 + C(\lambda - 1 + \log C)} \ \text{max}\big[C,e^{-\lambda x}\big]\big[\theta(x)-\theta(x-1)\big].$$
If you want to draw random variables from this distribution, probably the best way is to sample from an exponential distribution and throw away values that don't work (A Monte Carlo method). |
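Since $A$ is just the reciprocal of $\int_0^1 \text{max}[C, e^{-\lambda x}]\,dx$, one can sanity-check the normalization by evaluating that integral both piecewise in closed form and numerically (sketch; $C$ and $\lambda$ here are sample values satisfying the stated constraints):

```python
import math

C, lam = 0.4, 3.0                  # sample values with C < 1 and 1/lam < 1
x0 = -math.log(C) / lam            # turnover point, here inside (0, 1)

# piecewise closed-form value of integral_0^1 max(C, e^{-lam x}) dx
closed = (1 - C) / lam + C * (1 - x0)

# numeric check by midpoint rule
n = 100000
h = 1.0 / n
numeric = sum(max(C, math.exp(-lam * (i + 0.5) * h)) * h for i in range(n))
print(closed, numeric)   # the two agree; A is 1 over this value
```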
Relations in Group Presentation | This won't address your specific question here, but more of my general feeling about presentations.
Key idea: Presentations make it easy to communicate the particular group you're working with, but are generally hard to come up with, or work with!
For example, there are lots and lots of groups of order $96$ -- 231 of them, to be precise. But if you've found an interesting one (say, this guy), how in the world would you describe it to someone, especially if it doesn't belong to a fairly well-known family, or have a nice description as (semi)direct products?
That's where a presentation comes into play. Supposing you have such a presentation, you just write it down, tell your friend, and that's that. Your job is done!
This is ignoring the fact that it's really nontrivial to determine a set of relations that pins down your group. I've never even thought of doing this, but I'd wager it's not a pleasant task. Why would I be willing to wager that?
Let's go back to your friend, when she receives the compact presentation you sent earlier. She has her work cut out for her! See this answer of mine for an idea of the kind of work required just to list elements, for a group of order only $8$. Long story short, it's completely nontrivial to actually unpack a presentation, in general. This is without even mentioning the word problem, which in a sense makes precise how difficult it is.
So in summation, group presentations are nice as exactly that -- presentations. If you have any other description of the group to work with, chances are, it'll be easier than working with the presentation. |
Finding the inverse of $f(x)=x(x+1)$. | You will first have to define a domain to make the function injective, before finding its inverse.
e.g., if you take the domain of $f(x)$ to be $\left[-\frac{1}{2}, \infty\right)$:
$$y=x(x+1)$$
$$y=[x^2+(2)\frac{1}{2}x +(\frac{1}{2})^2 ]-\frac{1}{4}$$
$$y=(x+\frac{1}{2})^2-\frac{1}{4}$$
$$x=-\frac{1}{2} \pm \sqrt{y+\frac{1}{4}}$$
Now, since the domain is restricted to $x \ge -\frac{1}{2}$, we are down to one branch of the $\pm$, specifically the $+$ one.
So,
$$f^{-1}(x)=-\frac{1}{2} +\sqrt{x+\frac{1}{4}}$$
Similarly, if the domain was $(-\infty,-\frac{1}{2}]$, the $-$ branch would be selected.
Here, $$f^{-1}(x)=-\frac{1}{2}-\sqrt{x+\frac{1}{4}}$$
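A quick numeric sanity check (mine) that each branch really inverts $f$ on its restricted domain:

```python
import math

def f(x):
    return x * (x + 1)

def f_inv_plus(y):    # inverse for the domain [-1/2, infinity)
    return -0.5 + math.sqrt(y + 0.25)

def f_inv_minus(y):   # inverse for the domain (-infinity, -1/2]
    return -0.5 - math.sqrt(y + 0.25)

print(f_inv_plus(f(2.0)))     # 2.0
print(f_inv_minus(f(-3.0)))   # -3.0
```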
How to calculate this limit without L'Hopital rule? | Note that for $x\log(x)<1$,
$$e^{x\log(x)}\le \frac1{1-x\log(x)}$$
whereby we see that for $0 < x \le 1$
$$\begin{align}
\frac{e^{x\log(x)}-1}{x}&\le \frac{\log(x)}{1-x\log(x)}\\\\
&\le \frac{e}{e+1}\,\log(x)
\end{align}$$
Inasmuch as $\log(x)\to -\infty$, we find that
$$\lim_{x\to 0^+}\frac{e^{x\log(x)}-1}{x}=-\infty$$ |
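A numeric illustration (my own) of the divergence: the quotient tracks $\log(x)$ for small $x$ and decreases without bound.

```python
import math

# g(x) = (e^{x log x} - 1)/x, evaluated along x = 10^{-k} for k = 2..6
def g(x):
    return (math.exp(x * math.log(x)) - 1.0) / x

values = [g(10.0 ** -k) for k in range(2, 7)]
print(values)   # strictly decreasing, roughly log(x) at each scale
```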
Proof that field trace in $GF(2^k)$ maps half of the elements to 0 and the other half to 1. | The trace is a linear form, so it is a map
$$Tr: GF(2^k)\to GF(2).$$
But then linear algebra tells us (rank-nullity) that the null space of this surjective (see below) map is isomorphic to $GF(2^{k-1})$, because it is a vector subspace of $GF(2^k)$ of dimension $k-1$. By definition the null space is things that map to $0$, and the cardinality is clearly half of all the elements, so the rest must go to $1$.
This is surjective because $Tr$ is the sum of the Galois conjugates, and you know the Galois group is cyclic and generated by $x\mapsto x^2$, so anything of trace $0$ satisfies $x+x^2+x^4+x^8+\ldots +x^{2^{k-1}}=0$. But only $2^{k-1}$ things can possibly satisfy this, so something has trace $1$. |
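This split can be seen concretely; the sketch below (my own construction, not from the answer) represents $GF(2^3)$ as polynomials modulo the irreducible $x^3+x+1$, computes $\mathrm{Tr}(x) = x + x^2 + x^4$, and counts the fibers.

```python
K, MOD = 3, 0b1011  # modulus x^3 + x + 1, an irreducible polynomial over GF(2)

def gf_mul(a, b):
    """Carry-less multiplication of bit-packed polynomials, reduced mod MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << K):
            a ^= MOD
    return r

def trace(x):
    """Tr(x) = x + x^2 + ... + x^(2^(K-1)), via repeated Frobenius p -> p^2."""
    t, p = 0, x
    for _ in range(K):
        t ^= p
        p = gf_mul(p, p)
    return t

counts = {0: 0, 1: 0}
for x in range(2 ** K):
    counts[trace(x)] += 1
print(counts)   # {0: 4, 1: 4} -- half the elements in each fiber
```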
How do I prove it for a general case? | That doesn't seem to be true. Consider the hyperbola defined by
$$|d(A,X) -d(B,X)| = c.$$
Here we need to find the maximum $c$ for which some point $X$ on the given line $l$ satisfies this. My hypothesis is that $l$ would need to be tangent to the curve; using this, we can find $c$.
Laplace transform of a Cauchy's problem | $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
$\ds{\mrm{u}'\pars{t} + \int_{0}^{t}\expo{t - s}
\bracks{\int_{0}^{s}\mrm{u}\pars{r}\,\dd r}\dd s = 0\,,\quad t > 0.
\qquad\mrm{u}\pars{0} = 1}$.
\begin{align}
0 & =
\mrm{u}'\pars{t} + \expo{t}\int_{0}^{\infty}\expo{-s}\bracks{s < t}
\int_{0}^{\infty}\mrm{u}\pars{r}\bracks{r < s}\dd r\,\dd s
\\[5mm] & =
\mrm{u}'\pars{t} + \expo{t}\int_{0}^{\infty}\mrm{u}\pars{r}
\int_{0}^{\infty}\expo{-s}\bracks{r < s < t}\dd s\,\dd r
\\[5mm] &=
\mrm{u}'\pars{t} + \expo{t}\int_{0}^{\infty}\mrm{u}\pars{r}\bracks{r < t}
\pars{\expo{-r} - \expo{-t}}\,\dd r
\\[5mm] & =
\mrm{u}'\pars{t} + \expo{t}\int_{0}^{t}\mrm{u}\pars{r}\expo{-r}\,\dd r -
\int_{0}^{t}\mrm{u}\pars{r}\,\dd r
\end{align}
Multiply both members by $\ds{\expo{-st}}$ and integrate over $\ds{t > 0}$
$\ds{\pars{~\mbox{note that}\ \hat{\mrm{f}}\pars{s} \equiv \int_{0}^{\infty}\mrm{f}\pars{t}\expo{-st}\,\dd t ~}}$:
\begin{align}
0 & =
-\ \overbrace{\mrm{u}\pars{0}}^{\ds{=\ 1}}\ + s\,\hat{\mrm{u}}\pars{s} +
\int_{0}^{\infty}\expo{-st}\expo{t}\int_{0}^{t}\mrm{u}\pars{r}\expo{-r}
\,\dd r\,\dd t -
\int_{0}^{\infty}\expo{-st}\int_{0}^{t}\mrm{u}\pars{r}\,\dd r\,\dd t
\\[5mm] & =
-1 + s\,\hat{\mrm{u}}\pars{s} +
\int_{0}^{\infty}\mrm{u}\pars{r}\expo{-r}
\int_{r}^{\infty}\expo{-\pars{s - 1}t}\dd t\,\dd r - \int_{0}^{\infty}\mrm{u}\pars{r}\int_{r}^{\infty}\expo{-st}\dd t\,\dd r
\\[5mm] & =
-1 + s\,\hat{\mrm{u}}\pars{s} + {1 \over s - 1}\int_{0}^{\infty}\mrm{u}\pars{r}\expo{-sr}\dd r -
{1 \over s}\int_{0}^{\infty}\mrm{u}\pars{r}\expo{-sr}\dd r
\\[5mm] & =
-1 + s\,\hat{\mrm{u}}\pars{s} + \pars{{1 \over s - 1} -
{1 \over s}}\hat{\mrm{u}}\pars{s} \implies
\bbx{\ds{\,\hat{\mrm{u}}\pars{s} = {s^{2} - s \over s^{3} - s^{2} + 1}}}
\end{align}
$\ds{s^{3} - s^{2} + 1 = 0}$ has one negative real root
$\ds{s_{1} \approx -0.7549}$ and two complex roots
$\ds{a \pm b\ic\approx 0.8774 \pm 0.7449\ic}$.
With $\ds{c > a}$, $\ds{\mrm{u}\pars{t}}$ is given by:
\begin{align}
\mrm{u}\pars{t} & =
\int_{c - \infty\ic}^{c + \infty\ic}{s^{2} - s \over s^{3} - s^{2} + 1}\,
\expo{st}\,{\dd s \over 2\pi\ic} =
{s_{1} - 1 \over 3s_{1} - 2}\,\expo{-\verts{s_{1}}t} +
2\,\Re\pars{{a + b\ic - 1 \over 3\bracks{a + b\ic} - 2}\,
\expo{\bracks{a + b\ic}t}}
\end{align} |
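As a numeric sanity check (mine), one can recompute the roots and residues and confirm the initial condition $u(0)=1$ (the residues sum to $1$ because $\hat{\mathrm{u}}(s)\sim 1/s$ for large $s$):

```python
import numpy as np

roots = np.roots([1.0, -1.0, 0.0, 1.0])         # zeros of s^3 - s^2 + 1
residues = (roots - 1.0) / (3.0 * roots - 2.0)  # residue of û(s)e^{st} at each root

def u(t):
    # u(t) as the sum of residue contributions; imaginary parts cancel
    return float(np.real(np.sum(residues * np.exp(roots * t))))

print(u(0.0))   # ~ 1.0, matching u(0) = 1
```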
An inequality about hypergeometric function(1F2). | This can be brute-forced by using the same approach as here.
The left-hand side of the inequality will have to be expanded to order 16, and the right-hand side to order 34; the approximation errors will be bounded by the absolute values of the $x^{18}$ and the $x^{36}$ terms respectively.
Then, since everything is polynomial, a Sturm sequence can be computed to prove that the difference of the lower bound for the left-hand side and the upper bound for the right-hand side does not have zeros on $(0,3]$. |
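For readers unfamiliar with the technique, here is a toy version (entirely my own, on a much simpler polynomial than the one in question): build the Sturm chain of $p(x)=x^3-2x$ and count its real roots in $(0,3]$ by comparing sign changes of the chain at the two endpoints.

```python
from fractions import Fraction

# Polynomials are lists of Fractions, lowest degree first.
def poly_rem(a, b):
    """Remainder of a divided by b."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        if a[-1] == 0:
            a.pop()
            continue
        q = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[shift + i] -= q * c
        a.pop()
    return a or [Fraction(0)]

def deriv(a):
    return [i * c for i, c in enumerate(a)][1:]

def sturm_chain(p):
    """Chain p0 = p, p1 = p', p_{k+1} = -rem(p_{k-1}, p_k)."""
    chain = [p, deriv(p)]
    while True:
        r = poly_rem(chain[-2], chain[-1])
        if len(r) == 1 and r[0] == 0:   # remainder is zero: chain complete
            return chain
        chain.append([-c for c in r])

def sign_changes(chain, t):
    vals = [sum(c * t ** i for i, c in enumerate(q)) for q in chain]
    vals = [v for v in vals if v != 0]
    return sum((a > 0) != (b > 0) for a, b in zip(vals, vals[1:]))

p = [Fraction(0), Fraction(-2), Fraction(0), Fraction(1)]  # x^3 - 2x
chain = sturm_chain(p)
print(sign_changes(chain, 0) - sign_changes(chain, 3))     # roots in (0, 3]: just sqrt(2)
```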
Show $\sin(\frac{x}{k})$ is uniformly convergent | For $|x| \leq R$, $|\sin (\frac x k)| \leq \frac {|x|} k \leq \frac R k$, which shows that $\sin (\frac x k) \to 0$ uniformly on $[-R, R]$.
[By MVT $|\sin t|=|\sin t-\sin 0|=|t| |\cos s|$ for some $s$ so $|\sin t| \leq |t|$]. |
A question about a free abelian finitely generated group. | You have to use the fact that $(m, n) = 1$. This means there exists integers $a, b \in \mathbb Z$ such that $an + bm = 1$. One way to think of bases is they are the columns of matrices in $\mathrm{GL}_2(\mathbb Z)$. So you look for a way to write down a $2 \times 2$ matrix with determinant $\pm1$ using the integers $a, b, n, m$. My first guess was
$$\begin{bmatrix} a & b \\ m & -n \end{bmatrix}$$
and this corresponds to the basis $\{af_1 + mf_2, bf_1 - nf_2\}$. I'll leave you to check that letting these be $g_1$ and $g_2$ works.
I wish I could say something more insightful about how to come up with this answer. Having a limited amount of information means there are a very limited number of things you can actually try, so you just follow your nose and start trying them. |
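Following your nose can at least be automated; this sketch (mine) picks a sample coprime pair, finds $a, b$ with $an+bm=1$ by the extended Euclidean algorithm, and checks the determinant of the guessed matrix:

```python
def ext_gcd(p, q):
    """Return (g, u, v) with u*p + v*q = g = gcd(p, q)."""
    if q == 0:
        return p, 1, 0
    g, u, v = ext_gcd(q, p % q)
    return g, v, u - (p // q) * v

m, n = 4, 9                # sample coprime pair, gcd(m, n) = 1
g, a, b = ext_gcd(n, m)    # a*n + b*m = g = 1
det = a * (-n) - b * m     # determinant of [[a, b], [m, -n]]
print(a, b, det)           # det == -1, so the columns form a basis
```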
Formally proving that a function is $O(x^n)$ | HINT $\quad\rm ax^3 + bx^2 + cx + d\ \le \ (|a|+|b|+|c|+|d|)\ x^3 \ $ for $\rm\ x > 1$ |
Explain why graph of $f(x)=(x+1)\sin(3x)$ lies below the $x$-axis in interval $[4\pi/9,5\pi/9]$ | First find the critical points ($x$ at which $f(x)= 0$ or is undefined). Luckily your function $f$ is defined everywhere. Because $f$ is already factored, finding the zeros is made easier. The critical numbers are given by $$(x+1)\sin 3x = 0 \to x+1 = 0 \text{ or } \sin 3x = 0 \to x = -1 \text{ or } 3x = 0, \pi, 2\pi, 3\pi, \cdots$$ dividing by $3,$ the critical numbers of $f$ are
$$\cdots, -2\pi/3, -1, -\pi/3, 0, \pi/3, 2\pi/3, \cdots $$
now pick a test point in each of these intervals; the sign of $f$ at that point determines the sign of $f$ on the whole interval. In particular, $[4\pi/9, 5\pi/9] \subset (\pi/3, 2\pi/3)$, and the test point $x = \pi/2$ gives $f(\pi/2) = (\pi/2+1)\sin(3\pi/2) = -(\pi/2+1) < 0$.
Show that there are infinitely many primitive Pythagorean triples satisfying $z=x+1$ | Hint for the first part:
$$x^2+y^2=(x+1)^2=x^2+2x+1$$
$$\implies y^2-1=2x $$
take an odd $y$ and get $x=(y^2-1)/2$ and $z=x+1$.
for example,
$$y=3\implies x=8/2=4\implies z=5$$ |
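The recipe is easy to run mechanically (script mine):

```python
# For odd y, x = (y^2 - 1)/2 and z = x + 1 give x^2 + y^2 = z^2;
# since z = x + 1, gcd(x, z) = 1 and the triple is primitive.
triples = []
for y in range(3, 14, 2):          # odd y >= 3
    x = (y * y - 1) // 2
    z = x + 1
    assert x * x + y * y == z * z
    triples.append((x, y, z))
print(triples)   # [(4, 3, 5), (12, 5, 13), (24, 7, 25), (40, 9, 41), (60, 11, 61), (84, 13, 85)]
```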
In how many different ways can we make a flag of $5$ stripes if we have $2$ red stripes, $2$ white stripes and $3$ green stripes? | Hint. Since only $2+2=4$ red and white stripes are available, every flag uses at least one green stripe G, so we distinguish 3 cases:
If we have one G then the other stripes are RRWW. In this case, the number of different flags is $\frac{5!}{1!2!2!}$.
If we have two Gs then the other stripes are RWW or RRW. In this case, the number of different flags is ??
If we have three Gs then the other stripes are RR or WW or RW. In this case, the number of different flags is ?? + ??
Can you take it from here? |
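If you want to cross-check your case-by-case totals once you have filled them in, a brute-force count (my own script) over all $3^5$ stripe sequences gives the same answer:

```python
from itertools import product

# Count length-5 stripe sequences using at most 2 R's, 2 W's and 3 G's.
count = 0
for flag in product("RWG", repeat=5):
    if flag.count("R") <= 2 and flag.count("W") <= 2 and flag.count("G") <= 3:
        count += 1
print(count)   # total number of valid flags
```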
Evaluate Derivative $\lim_{x \to 1}\frac{10x-1.86x^2 - 8.14}{x - 1}$ | The solutions of $-1.86x^2+10x-8.14=0$ are $x=1$ and $x=4.37634$.
So, the limit is equal to this one:
$$\lim_{x \to 1} \frac{-1.86(x-1)(x-4.37634)}{x-1}=\lim_{x \to 1} \big(-1.86(x-4.37634)\big) = -1.86(1-4.37634) \approx 6.28$$
Calculate Probability of a Range of Dice Rolls given their Distribution | Your vector has $P(n) = P(x \ge n)$,
so $P(3) = P(x \in \{3,4,5,6\})$,
and $P(x \in \{4,5\}) = P(4) - P(6)$.
In general, $P( a \le x \le b) = P(a)-P(b+1)$,
so for an $n$-sided die you need to include $P(n+1)=0$ (I see you have done that for your first vector).
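A tiny worked example (values mine) for a fair six-sided die:

```python
# Tail vector P[n] = P(X >= n) for a fair die, with the sentinel P[7] = 0,
# and P(a <= X <= b) = P[a] - P[b+1].
P = {n: (7 - n) / 6 for n in range(1, 7)}
P[7] = 0.0

def prob_range(a, b):
    return P[a] - P[b + 1]

print(prob_range(4, 5))   # P(X in {4, 5}) = 2/6
```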
Show that a set of homotopy classes has a single element | Your proof is correct!
Just one comment, though: perhaps you can mention why the maps $F$ and $G$ are continuous, for the sake of more clarity. |
Contour integral of inverse square root. | Idea:
Note that for large $R$ we have that $\sqrt{1+z^2} \approx z$. This result is analytic without any branch cut. So it is useful for integration.
Formal:
We have two branch points of $\sqrt{1+z^2}$, at $z=\pm i$. We may choose the branch cut between these branch points. With this choice, we have that (for $|z| > 1$)
$$ \sqrt{ 1+z^2} = z \sqrt{1+1/z^2}$$
where on the right hand side the branch cut does not play a role [as $\operatorname{Re}(1+1/z^2)>0$].
With that we have for the upper semi-circle ($z= R e^{i\theta}$)
$$\int \frac{dz}{\sqrt{1+z^2}} = \int \frac{dz}{z \sqrt{1+1/z^2}} = i \int_{0}^{\pi} d\theta + O(1/R) \to \pi i \quad (R\to \infty).$$
To this, we add the contribution of the integral along the real line. For this we obtain
$$\int_{-\infty}^{\infty}\frac{dx}{\pm\sqrt{1+x^2}}$$
where the sign $+$ has to be chosen for $x>0$ and $-$ for $x<0$ (given that the branch cut is between $+i$ and $-i$). Due to the fact that we integrate an odd function, this part of the contour vanishes and we have that
$$\oint \frac{dz}{\sqrt{1+z^2}} = i \pi\;.$$ |
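The semicircle computation is easy to confirm numerically (sketch mine), using the representation $z\sqrt{1+1/z^2}$ so that NumPy's principal square root picks the right branch:

```python
import numpy as np

R = 1.0e4                                   # large radius
theta = np.linspace(0.0, np.pi, 20_001)     # upper semicircle
z = R * np.exp(1j * theta)
f = 1.0 / (z * np.sqrt(1.0 + 1.0 / z**2))   # Re(1 + 1/z^2) > 0, so no cut issues
g = f * (1j * z)                            # integrand times dz/dtheta

# trapezoidal rule in theta
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(theta))
print(integral)   # close to pi * 1j
```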
Product in category TOP(2) | Tom Dieck is using the notation $(X,A)\times (Y,B)$ to mean
$$\left(X\times Y, (X\times B)\cup(A\times Y)\right)$$
(see page 32), although as he says there it's not the categorical product (which your interpretation is).
I think he's also identifying a space $X$ with the pair $(X,\emptyset)$, although I can't see where he says so explicitly. At least, the Proposition you mention and the diagram illustrating it make sense if you take $D^n\times (I,0)$ to mean $(D^n,\emptyset)\times (I,0)$, which is just $(D^n\times I, D^n\times 0)$ according to his meaning for the product of pairs. |