If $P(n)$ divides $P(P(n)-2015)$, prove that $P(-2015)=0$ | $\bmod \color{#c00}{P(n)}\!:\ 0\equiv P(\color{#c00}{\overbrace{P(n)}^{\large \equiv\ 0}}-2015)\equiv P(-2015)\ $ by the Polynomial Congruence Rule.
Thus $\, P(n)\mid P(-2015)\, $ for all $n$ so $\,P(-2015) = 0\,$ since $P$ is nonconstant so unbounded.
Remark $ $ Without congruences, put $\,x = P(n),\, a = -2015\,$ below
$\quad$ if $\,x\,$ divides $\,P(x\!+\!a)\,$ then $\,\underbrace{x\ {\rm divides}\ P(x\!+\!a)-P(a)}_{\rm\large Factor\ Theorem}\ $ so $x$ divides $P(a) = $ their difference |
Dirichlet theorem and expansion of Fourier series | \begin{align}
&\frac{1}{a}\int_{-a}^{a}f(t)\sin(n\pi t/a)dt\sin(n\pi x/a)+\frac{1}{a}\int_{-a}^{a}f(t)\cos(n\pi t/a)dt\cos(n\pi x/a) \\
& = \frac{1}{a}\int_{-a}^{a}f(t)\cos(n\pi(x-t)/a)dt \\
& = \frac{1}{a}\int_{-a}^{a}f(t)\frac{e^{in\pi(x-t)/a}+e^{-in\pi(x-t)/a}}{2}dt \\
& = \frac{1}{2a}\int_{-a}^{a}f(t)e^{-in\pi t/a}dt e^{in\pi x/a}
+ \frac{1}{2a}\int_{-a}^{a}f(t)e^{in\pi t/a}dt e^{-in\pi x/a}.
\end{align} |
Prove that $a$ commutes with each of its conjugates in $G$ iff $a$ belongs to an abelian normal subgroup of $G$. | Let $H$ be generated by all conjugates of $a$. Then $H\lhd G$ because it is conjugation-invariant. Use the given condition to check that $H$ is abelian. |
$\frac{\sin(t)}{t} = \prod_{m=1}^{+\infty} \cos(\frac{t}{2^m})$ | Using $\sin 2x=2\sin x\cos x$ repeatedly, $$P=\prod_{m=1}^{n} \cos\left(\frac{t}{2^m}\right)=\cos\left(\frac{t}{2}\right)\cos\left(\frac{t}{2^2}\right)\cdots\cos\left(\frac{t}{2^n}\right)\cdot\frac{\sin(\frac{t}{2^n})}{\sin(\frac{t}{2^n})}=\frac{\sin t}{2^n\sin(\frac{t}{2^n})}=\frac{\sin t}{t}\cdot\frac{\frac{t}{2^n}}{\sin(\frac{t}{2^n})}$$
Thus:
$$\lim\limits_{n\to+\infty}P=\frac{\sin t}{t}.$$ |
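As a quick numerical sanity check of the identity (my addition, not part of the original answer):

```python
import math

def cos_product(t, n):
    """Partial product prod_{m=1}^{n} cos(t / 2**m)."""
    p = 1.0
    for m in range(1, n + 1):
        p *= math.cos(t / 2 ** m)
    return p

# For moderately large n the partial product is already very close to sin(t)/t.
t = 1.3
approx = cos_product(t, 40)
exact = math.sin(t) / t
```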
Find the expected frequency of some state in a state sequence of length N given a transition matrix M | The probability of state $i$ after $n$ jumps can only be defined relative to an initial probability vector. Suppose you let $u$ be your initial vector; it is simply the probability distribution at time $0$, where you choose randomly. To find the probability distribution after $n$ steps, you simply multiply $u$ by your transition matrix raised to the $n$th power. I.e.:
Suppose $$u= (.5,.5)$$ corresponding to the state space $$(H,T)$$
The distribution after $n$ steps is the row vector $$u\,M^n.$$
Note that this is a one-by-two row vector (because $M$ is a $2\times 2$ matrix), which is a probability distribution over the state space $(H,T)$.
Basically, you need to know the initial distribution. |
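To make this concrete, here is a small pure-Python sketch (the transition matrix `M` below is a made-up example, not from the question):

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def distribution_after(u, M, n):
    """Row vector u times M**n: the distribution after n steps."""
    P = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 identity
    for _ in range(n):
        P = mat_mul(P, M)
    return [sum(u[k] * P[k][j] for k in range(2)) for j in range(2)]

u = [0.5, 0.5]                # initial distribution over the state space (H, T)
M = [[0.9, 0.1], [0.2, 0.8]]  # hypothetical transition matrix
dist = distribution_after(u, M, 10)
```

For this particular `M` the stationary distribution is $(2/3, 1/3)$, and `dist` is already close to it after ten steps.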
Complete subspace of connected space | If $B$ is complete then $B$ has to be closed. But that is true for $A\setminus B$ too. So $B$ and $A\setminus B$ are clopen. |
How many different values for $5$ different numbers multiplying? | Generically, we get $2^{|S|} - |S| - 1$ different piles of more than one number to multiply together. Unfortunately, it's possible to get the same product many ways. For instance, in the given example, $2\times6 = 3\times4$ and even better $2\times3 = 6$. This is hard to deal with automatically; in fact, there's a somewhat similar problem called the subset sum problem that is known to be very computationally complex. In this particular case, we get $32-5-1=26$ subsets, but only 21 distinct results: $12=2\times6=3\times4$, $24=2\times3\times4=4\times6$, $30=2\times3\times5=5\times6$, $60=2\times5\times6=3\times4\times5$, and $120=2\times3\times4\times5=4\times5\times6$ all appear twice. |
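The count for the example set $\{2,3,4,5,6\}$ can be verified by brute force (my addition):

```python
from itertools import combinations

def pile_products(nums):
    """Products over all subsets with more than one element."""
    prods = []
    for r in range(2, len(nums) + 1):
        for combo in combinations(nums, r):
            p = 1
            for x in combo:
                p *= x
            prods.append(p)
    return prods

prods = pile_products([2, 3, 4, 5, 6])
n_piles = len(prods)          # 2**5 - 5 - 1 = 26 subsets
n_distinct = len(set(prods))  # 21 distinct products
```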
Proving simple sequence using natural deduction | Here is a proof via natural deduction:
$(s \vee t) \wedge \neg(s \wedge t)$ premise
$\neg (k \wedge f)$ premise
$t \rightarrow k$ premise
$\neg k \vee \neg f$ DeMorgan's Rule 2
$k \rightarrow \neg f$ implication 4
$t \rightarrow \neg f$ hypothetical syllogism 3,5
$s \vee t$ conjunction elimination (aka simplification) 1
$\neg \neg s \vee t$ double negation 7
$\neg s \rightarrow t$ implication 8
$\neg s \rightarrow \neg f$ hypothetical syllogism 6,9
$f \rightarrow s$ contrapositive law 10
$\therefore f \rightarrow s$
You were on the right track, you just needed $s \vee t$ to deduce your conclusion. Remember that conjunction elimination permits you to split propositions joined by conjunctions onto new lines. |
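One can confirm the entailment semantically with a brute-force truth table (a check I am adding, not part of the original proof):

```python
from itertools import product

def entails(premises, conclusion, names):
    """True iff every assignment satisfying all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

premises = [
    lambda e: (e['s'] or e['t']) and not (e['s'] and e['t']),  # (s v t) ^ ~(s ^ t)
    lambda e: not (e['k'] and e['f']),                         # ~(k ^ f)
    lambda e: (not e['t']) or e['k'],                          # t -> k
]
conclusion = lambda e: (not e['f']) or e['s']                  # f -> s
ok = entails(premises, conclusion, ['s', 't', 'k', 'f'])
```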
Finding the general solution to a linear system | Using the Gaussian elimination method you can solve this linear system. See these examples; they are really simple, and I hope you can work through them by yourself. If you have any confusion let me know. |
Equivalent definitions of a Hodge structure | Fortunately for you, I've had a headache because of the conflict between these exact references before.
Wikipedia is correct, Peters and Steenbrink are not.
Question 1. Is it equivalent? No.
Here's an example: Let $k=0$, let $H \cong \mathbb Z$. Let
$$
0= F_0 = F_1=\cdots ;\quad H \otimes \mathbb C= F_{-1}=F_{-2}=\cdots
$$
You can check that the condition in Peters and Steenbrink is satisfied, but this is not a Hodge structure of weight $0$, as all the graded pieces will vanish.
In general, take any Hodge structure of weight $k$. According to the condition in Peters and Steenbrink, it would automatically be a Hodge structure of every weight $k'\ge k$ as well, since the Hodge filtration is decreasing; this is never the case under the usual definition. |
Writing higher order derivatives using the limit definition of derivative? | You are right, but you could do with an easier expression in one limit:
$$
f''(x) = \lim_{h \to 0} \frac{f(x+2h) - 2f(x+h) + f(x)}{h^2}
$$ |
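A small numerical check of this formula at finite $h$ (my addition; the one-sided second difference has $O(h)$ error, so $h$ should be small but not tiny):

```python
import math

def second_difference(f, x, h=1e-4):
    """The quotient from the displayed limit, at a small finite h."""
    return (f(x + 2 * h) - 2 * f(x + h) + f(x)) / h ** 2

# (sin)''(x) = -sin(x)
approx = second_difference(math.sin, 1.0)
exact = -math.sin(1.0)
```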
Probability Question Finding Salary | Assume that the population distribution is a normal distribution with mean $\mu = \$ 60,000$ and standard deviation $\sigma = \$6,500$. Let $\{x_1, \ldots, x_m \}$ be a sample of size $m$, where each $x_i$ is i.i.d. from $\mathcal{N}(\mu, \sigma)$. You are asked to compute
$$
\mathbb{P}\left( \frac{1}{m} \left( x_1 + \cdots + x_m \right) \le z \right)
$$
But the sum of normal variables has a normal distribution. To determine it, one needs to compute its mean and variance:
$$
\mu_m = \mathbb{E}\left( \frac{1}{m} \left( x_1 + \cdots + x_m \right) \right) = \frac{1}{m} \left( \underbrace{\mu + \cdots + \mu}_{\text{m times}} \right) = \mu
$$
$$
\sigma_m^2 = \mathbb{Var} \left( \frac{1}{m} \left( x_1 + \cdots + x_m \right) \right) = \frac{1}{m^2} \left( \underbrace{\sigma^2 + \cdots + \sigma^2}_{\text{m times}} \right) = \frac{\sigma^2}{m}
$$
Thus
$$ \begin{eqnarray}
\mathbb{P}\left( \frac{1}{m} \left( x_1 + \cdots + x_m \right) \le z \right) &=& \Phi\left( \frac{z - \mu_m }{\sigma_m} \right) = \Phi\left( \frac{z-\mu}{\sigma} \sqrt{m} \right) \\ &=&
\Phi\left( \frac{57,000 -60,000}{6,500} \sqrt{35} \right) = \Phi(-2.73) = 0.0032
\end{eqnarray}
$$ |
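The final number can be reproduced with the standard normal CDF written via the error function (my addition):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, sigma, m, z = 60000.0, 6500.0, 35, 57000.0
p = Phi((z - mu) / sigma * math.sqrt(m))  # about 0.0032
```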
Problem with limits when using polar coordinates: | $$\lim _{ r\rightarrow 0 }{ \frac { 3{ r }^{ 2 } }{ \sqrt { { r }^{ 2 }+4 } -2 } } =\lim _{ r\rightarrow 0 }{ \frac { 3{ r }^{ 2 }\left( \sqrt { { r }^{ 2 }+4 } +2 \right) }{ { r }^{ 2 } } } =12$$ |
Homotopy equivalence in terms of strong deformation retract | It is wrong. You ask whether the following two conditions are equivalent for two maps $f:X\to Y$ and $g:Y\to X$:
$f$ and $g$ are homotopy equivalences.
There is an $A\subset X$ such that $A$ is a strong deformation retract of $X$ and $f(A)$ is a strong deformation retract of $Y$.
First note that $g$ does not play any role in 2.
Now let $f : S^1 \to *$ be the constant map, where $*$ is a one-point space, and $g : * \to S^1$ be any map. $f$ is not a homotopy equivalence. Now take $A = S^1$. Then you see that 2. is satisfied.
Update for Q2:
It is wrong. For $n \le 0$ let $C_n \subset \mathbb R^2$ be the circle with radius $1/3$ and center $(n,0)$, for $n > 0$ let $C_n = \{(n,0)\}$. Define
$$X = Y = \bigcup_{n \in \mathbb Z} C_n ,$$
$$f : X \to Y, f(z) = \begin{cases} z + (1,0) & z \in C_n, n \ne 0 \\ (1,0) & z \in C_0 \end{cases}$$
and $g = f$. This map translates $C_n$ to $C_{n+1}$ if $n \ne 0$ and collapses the circle $C_0$ to the point $C_1$. $f$ is not a homotopy equivalence. Let $A = X$. Then $f(A) = X$ and your condition on $f$ is satisfied. Since $g = f$, also the condition on $g$ is satisfied. |
Integral of two iid variables | Rewrite $S_n$ as the sum of $X_k, X_{k+1},\dots,X_n, X_1,\dots,X_{k-1}$, in this order, with the obvious changes if $k=n$ or $k=1$. Writing the sum this way, it is clear that the joint distribution of $X_1$ and $S_n$ is identical to the joint distribution of $X_k$ and $S_n$.
Therefore
$E [ f(X_1,S_n) ]= E [f (X_k,S_n)]$, and in particular, letting $f (x,y) = x {\bf 1}_B(y)$ for a Borel set $B$, we have $E[X_1 {\bf 1}_{\{S_n \in B\}}] = E [X_k {\bf 1}_{\{S_n \in B\}}]$.
Finally ${\cal G} = \{S_n \in B: B \mbox{ Borel in }\mathbb R\}$ is a $\sigma$-algebra. Since every $\sigma$-algebra with respect to which $S_n$ is measurable must contain ${\cal G}$, it follows that ${\cal G} = \sigma(S_n)$. |
Similarity transform of a matrix preserves the determinant? | Recall that scalar matrices commute with all other matrices. Therefore
$$\lambda I=\lambda IZ^{-1}Z=Z^{-1}\lambda IZ$$
It then follows that
$$ Z^{-1}AZ-\lambda I=Z^{-1}AZ-Z^{-1}\lambda IZ=Z^{-1}(A-\lambda I)Z$$
Taking determinants and using $\det(Z^{-1})\det(Z)=1$ then gives $\det(Z^{-1}AZ-\lambda I)=\det(Z^{-1})\det(A-\lambda I)\det(Z)=\det(A-\lambda I)$. |
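An exact integer check that similarity preserves the determinant and the values $\det(M-\lambda I)$ (my addition; $Z$ is chosen with $\det Z = 1$ so its inverse is an integer matrix):

```python
def mat_mul(A, B):
    """Product of two 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def char_value(M, lam):
    """det(M - lam*I)."""
    N = [[M[i][j] - lam * (i == j) for j in range(2)] for i in range(2)]
    return det2(N)

A = [[2, 1], [1, 3]]
Z = [[1, 1], [0, 1]]
Z_inv = [[1, -1], [0, 1]]          # inverse of Z, exact since det(Z) = 1
S = mat_mul(mat_mul(Z_inv, A), Z)  # the similar matrix Z^{-1} A Z
```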
Limiting distribution of $n^2T_n$ where $T_n$ is the minimum of $n$ independent $\chi^2(1)$-random variables. | $$
\begin{align*}
P(n^2T_n>y)&=P(T_n>\frac{y}{n^2})\\
&=\prod_{i=1}^n P(X_i^2>\frac{y}{n^2})\\
&=\prod_{i=1}^n(2P(X_i<-\frac{\sqrt{y}}{n}))\\
&=(2\Phi(-\frac{\sqrt{y}}{n}))^n\\
&=(1-\frac{2\phi(0)\sqrt{y}}{n}+o(\frac{1}{n}))^n\end{align*}
$$
$$
\lim_{n\rightarrow \infty}P(n^2T_n>y)=e^{-2\phi(0)\sqrt{y}}
$$ |
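The convergence can be checked numerically, comparing the exact probability $(2\Phi(-\sqrt y/n))^n$ with the limit $e^{-2\phi(0)\sqrt y}$ (my addition):

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exact_tail(y, n):
    """P(n^2 T_n > y) = (2 * Phi(-sqrt(y)/n)) ** n."""
    return (2.0 * Phi(-math.sqrt(y) / n)) ** n

y = 1.0
phi0 = 1.0 / math.sqrt(2.0 * math.pi)   # standard normal density at 0
limit = math.exp(-2.0 * phi0 * math.sqrt(y))
approx = exact_tail(y, 10 ** 6)
```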
How do I find a functor from the category $\mathbf{Ab}$ to $\mathbf{Rng}$? | There are many ways to turn an abelian group into a ring, but the simplest and most natural one is to take the zero multiplication. That is, given an abelian group $A$, define multiplication on it as $$\forall x,y \in A: \, x\cdot y = 0$$
This makes an rng from an abelian group. To make this into an actual functor, we should specify how it acts on morphisms. Let's just send every homomorphism in $\operatorname{Ab}$ to the same underlying function, which is then an rng homomorphism, thanks to the zero multiplication:
$$f(x\cdot y)=f(0)=0=f(x)\cdot f(y)$$ |
How to calculate percentage of value in arbitrary range | To calculate percentages, we need a zero based range.
In the other question, there were positive values, minimum 174 and maximum 424.
The values were shifted by -174 to achieve the zero-based range.
This is your case, when the values can be both negative and positive:
Minimum value: -0.094
Maximum value: 0.078
How can we calculate a percentage in that range, then? Well, we can shift the numbers to the zero-based range by adding the opposite of the initial minimum value to all figures. Min and max would then be:
Minimum value: -0.094+0.094 = 0
Maximum value: 0.078+0.094 = 0.172
Now we have the zero-based range, which we can use to calculate percentages as usual. Just remember to also add 0.094 to the value whose percentage you want to compute. |
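The recipe above, as a small function (my addition; the numbers are the ones from the question):

```python
def to_percentage(value, lo, hi):
    """Shift [lo, hi] to the zero-based range [0, hi - lo], then take percent."""
    return 100.0 * (value - lo) / (hi - lo)

lo, hi = -0.094, 0.078
p_min = to_percentage(-0.094, lo, hi)  # 0.0
p_max = to_percentage(0.078, lo, hi)   # 100.0
p_mid = to_percentage(-0.008, lo, hi)  # about 50.0
```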
2-norm of upper non-singular $n \times n$ submatrix as an upper bound of the 2-norm of the entire $m\times n$ matrix | Consider a QR factorization of $A$ in the form
$$
A=\begin{bmatrix}
A_1\\A_2
\end{bmatrix}
=
\begin{bmatrix}
Q_1\\Q_2
\end{bmatrix}
R=:QR
$$
where $Q$ has orthonormal columns and is partitioned in the same way as $A$. The matrix $A$ has full column rank, so $R$ is nonsingular.
We have $A^+=R^{-1}Q^*$ and hence $\|A^+\|_2=\|R^{-1}\|_2$. We have to show that $\|R^{-1}\|_2\leq\|A_1^{-1}\|_2$. From $A_1=Q_1R$, we get $R^{-1}=A_1^{-1}Q_1$ and
$$
\|R^{-1}\|_2\leq\|Q_1\|_2\|A_1^{-1}\|_2.
$$
But $Q_1$ is a submatrix of $Q$ so $\|Q_1\|_2\leq\|Q\|_2=1$. |
Euler's Formula To Show 2E=3V | The last sentence in the original problem is the only one that matters. From each vertex you have 3 edges, so counting edge-ends vertex by vertex gives $3V$. Since each edge is connected to two vertices, every edge is counted twice, and the number of edges is $E=\frac{3V}2$.
To check that this is true in other cases, see a tetrahedron, with $V=4$, $E=6$, or a triangular prism $V=6$, $E=9$. |
Let $f:[0,\frac{\pi}{2}] \to \mathbb R$ is defined as $f(x):=\text{max}\left\{x^2,\cos(x)\right\}$ for all $x \in [0,\frac{\pi}{2}] $ | Hint
The blue inequality immediately follows from the fact that $\cos x$ is decreasing on $[0, \frac{\pi}{2}]$.
Also the map $g(x) = \cos x - x^2$ is the difference of a decreasing map and an increasing map on $[0, \frac{\pi}{2}]$. Therefore it is decreasing on that interval. The fact that $g(0)=1$ and $g(\pi/2) = -\pi^2/4$ provides the answer to your second question. |
Showing two matrix blocks are similar | Using Jordan form: there exists matrices $P,Q,R$ such that $P^{-1}AP = J_A$, $Q^{-1}BQ = J_B$, and $RCR^{-1} = J_C$ are all in Jordan normal form.
We then note that
$$
\pmatrix{P\\&Q}^{-1}\pmatrix{A\\&B} \pmatrix{P\\&Q} = \pmatrix{J_A\\& J_B}\\
\pmatrix{P\\&R}^{-1}\pmatrix{A\\&C} \pmatrix{P\\&R} = \pmatrix{J_A\\& J_C}
$$
By the uniqueness of Jordan form (up to permutations of blocks), the two matrices on the right can only be similar if $J_B$ is similar to $J_C$, which is to say that $B$ is similar to $C$ as desired. |
A trigonometry exercise $2\sin \frac{\pi}{14}+2\sin \frac{5\pi}{14}-2\sin \frac{3\pi}{14}=1$ | Using the formula
we need $$S=2\cos3\pi/7+2\cos\pi/7-2\cos2\pi/7$$
Since $2\pi/7+5\pi/7=\pi$ and $\cos(\pi-x)=-\cos x$, we have $-\cos(2\pi/7)=\cos(5\pi/7)$, so
$$S=2\sum_{r=0}^2\cos(2r+1)\pi/7$$
Use How can we sum up $\sin$ and $\cos$ series when the angles are in arithmetic progression? |
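A numerical check of the identity in the title (my addition):

```python
import math

s = (2 * math.sin(math.pi / 14)
     + 2 * math.sin(5 * math.pi / 14)
     - 2 * math.sin(3 * math.pi / 14))
# s should equal 1
```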
Calculate cardinality of a set | Every element of $B$ belongs to exactly $k$ sets out of the $A_i$. So if we count all the elements of the $A_i$, we have counted each element of $B$ exactly $k$ times. This means that
$$|A_1|+|A_2|+\cdots+|A_n|=k|B|.$$ |
Method for perfect square | There is an $O(\ln N)$ running-time algorithm to check whether an integer is a square number or not.
Given $N$ :
Let $x_0 = N$ and $x_{k+1} = \lfloor \frac{1}{2} \left( x_k + \lfloor \frac{N}{x_k} \rfloor \right) \rfloor$ and stop when $x_{k+1} \geq x_k$.
For $N=2284$
So $x_0 =2284$ , $x_1 = 1142$ and $x_2 = 572$ , $x_3 =287$, $x_4 = 147$ ,$x_5 = 81$, $x_6 = 54$, $x_7 =48 $ , $x_8 = 47$ , $x_9 = 47$ , stop iterations.
And because $2284-47^2 \not =0$ so $2284$ is not a square integer. |
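The iteration above, written out in Python (my addition; for $N \ge 1$ it computes the integer square root and then compares squares exactly):

```python
def is_square(N):
    """Newton-style integer square root, then an exact comparison."""
    if N < 0:
        return False
    if N == 0:
        return True
    x = N
    while True:
        x_next = (x + N // x) // 2
        if x_next >= x:      # stop when the iterates no longer decrease
            break
        x = x_next
    return x * x == N
```

For $N=2284$ the iterates are exactly the ones listed above, ending at $47$, and $47^2 = 2209 \ne 2284$.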
Which analogy between polynomials and differential equations did Rota have in mind in his TEN LESSONS? | This might be a partial answer, but I will add later if need be.
There is quite a bit of theory underlying a systematic change of variables. The theory underlying it all is Noether's Theorem, which states that for every symmetry there is a corresponding invariant. This invariant is then a useful substitution, as it will effectively reduce the order of a differential equation by 1; and if the order is already 1, the substitution makes the equation separable.
The standard example of this is with first order homogeneous equations. In this case, we can see that the differential equation has symmetry under the transformation $x \rightarrow \lambda x$ and $y \rightarrow \lambda y$. An invariant corresponding to the symmetry is some quantity that does not change under the transformation; thus, $y/x$ is an invariant.
There are many other examples of such a change of variable. For example, take the differential equation
$$x^{3/2}y''+\sqrt{x}y'+\frac{y^2}{x}=1$$
In this case, you can check that the differential equation is symmetric with respect to the transformation $x \rightarrow \lambda^2 x$, $y \rightarrow \lambda y$. With this transformation, $y^2/x$ is an invariant of the transformation, and therefore the order of the equation can be reduced by 1 by substituting $u=y^2/x$ (specifically, it changes the equation into a differential equation that contains $u''$ and $u'$, so a substitution can be made to reduce the order).
There are, however, many other symmetries and corresponding invariants beyond these "scaling" symmetries. So the question then becomes how to find such substitutions. The most complete and systematic method is to calculate the symmetries via Lie groups. Some references for doing so are here and here. The most comprehensive resource on this question is here. You will notice that these resources are generally graduate level, however; I am not aware of systematic treatments of calculating these symmetries (beyond scaling symmetries) at a lower level. |
Order Properties on Open Sets | In point set topology, one can define the open sets $\mathcal{T}$ of a topological space $(X,\mathcal{T})$ however one wants as long as the elements of $\mathcal{T}$ have the following three properties: First, both $\varnothing$ and $X$ must be open. Second, the intersection of two open sets must also be open. Third, the union of any collection of open sets is open.
One can translate these conditions into the language of ordered sets. Clearly, the subset order on the open sets of a topological space has a maximum, has a minimum, has finite infima, and has arbitrary suprema. In other words, such an ordered set is a bounded lattice that happens to be closed under arbitrary suprema ('joins').
Conversely, any lattice $L$ of this sort is actually the set of open sets of a topological space $(L, \mathcal{L})$, as follows: let $\mathcal{L}$ consist of the subsets of $L$ of the form $\{k \in L : k \leq l\}$ for some element $l \in L$. This class of sets is closed under arbitrary intersection, arbitrary union, and contains both $\varnothing$ and $L$, and thus forms the desired topology.
Going back to your question about total suborders of the open set order: any total order $T = (|T|, \leq_T)$ is a suborder of another total order with both a 'bottom' element (like $\varnothing$) and a 'top' element (like the whole space). This order is, in turn, a supremum-closed lattice which is isomorphic to some 'open sets in a topology' order by the paragraph above.
A related question might be which (total?) orders show up in the open subset order of a metric space. Or, for that matter, any restricted class of topological spaces. I believe one can say this: a total order is not a suborder of such an open-set order unless it is countable. |
$y=(x+1)/(x-1)$ rearrange to make $x$ the subject | Let's do the motions:
$$
y = \frac{x+1}{x-1}\\
(x-1)y = x+1\\
xy-y = x+1\\
xy-x = 1+y\\
x(y-1) = 1+y\\
x = \frac{1+y}{y-1}
$$
I don't know which step you did which introduced the sign error (it could be basically anything, including copying the original problem wrong), but the answer key was right. |
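A quick check of the rearrangement (my addition); incidentally, the map turns out to be its own inverse:

```python
def f(x):
    """y = (x + 1) / (x - 1)."""
    return (x + 1.0) / (x - 1.0)

def f_inv(y):
    """x = (1 + y) / (y - 1), as derived above."""
    return (1.0 + y) / (y - 1.0)

# f and f_inv have the same formula, so f is an involution: f(f(x)) = x.
checks = [(y, f(f_inv(y))) for y in (2.0, -3.0, 0.5)]
```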
Proof explanation: convexity of the numerical range of an operator (Toeplitz-Hausdorff Theorem) | Note first that, for $x$ with $\|x\|=1$, $$ \langle (\alpha I+\beta T)x,x\rangle =\alpha+\beta\langle Tx,x\rangle.$$ Since $\eta=t\lambda+(1-t)\mu$,
$$
t=\frac{\eta-\mu}{\lambda-\mu}.
$$
So, if $\mu\in W(T)$, then $\mu=\langle Ty,y\rangle$ for some $y$ with $\|y\|=1$, and then
$$
t=\frac{\eta-\mu}{\lambda-\mu}=\alpha+\beta\mu=\alpha+\beta\langle Ty,y\rangle=\langle (\alpha I+\beta T)y,y\rangle \in W(\alpha I+\beta T).
$$
Conversely, if $t\in W(\alpha I+\beta T)$, then $t=\alpha+\beta\langle Tz,z\rangle$ for some $z$ with $\|z\|=1$, and
\begin{align}
\eta=t\lambda+(1-t)\mu&=\lambda\alpha+\lambda\beta\langle Tz,z\rangle+\mu-\mu\alpha-\mu\beta\langle Tz,z\rangle\\ \ \
&=(\lambda-\mu)(\alpha+\beta\langle Tz,z\rangle)+\mu\\ \ \\
&=-\mu+\langle Tz,z\rangle+\mu\\ \ \\
&=\langle Tz,z\rangle\in W(T).
\end{align}
The fact that $g(\theta_0)$ is real is used in the "straightforward calculation" that shows that $f$ is real.
The trick in the proof is to change the question of whether $\eta\in W(T)$ into $t\in W(S)$. At the end of the proof the function $f$ is used to show that all of $[0,1]\subset W(S)$, so in particular the $t$ from the beginning satisfies $t\in W(S)$, which in turns implies $\eta\in W(T)$. |
Is $AB$ a covariance matrix? | Since $AB=BA$, the (symmetric) matrices $A$ and $B$ can be simultaneously diagonalized by some matrix $U$. Hence it follows that
\begin{align}
AB = UD_1U^{-1}UD_2U^{-1} = UD_1D_2U^{-1}.
\end{align}
Thus, the eigenvalues of $AB$ are products of eigenvalues of $A$ and $B$, which are nonnegative, so $AB$ is also positive semi-definite. |
How many solutions does $2^z=1$ have, where $z$ is a non-zero complex number. | You can write $2$ as $e^{\ln 2}$, so that $2^z = e^{z\ln 2}$. Then $2^z = 1 \Rightarrow e^{z\ln 2} = 1$. Does that help?
As $z = x + iy$, $e^{z\ln 2} = e^{x\ln 2 + iy\ln2} = e^{x\ln 2}[\cos (y\ln 2) + i\sin (y\ln 2)]$, by using Euler's formula $e^{i\theta} = \cos \theta + i \sin \theta$.
Therefore, $e^{x\ln 2}[\cos (y\ln 2) + i\sin (y\ln 2)] = 1$. As the imaginary part must be $0$, we have $\sin(y\ln 2) = 0 \Rightarrow y\ln 2 = n\pi \Rightarrow y = n\dfrac{\pi}{\ln2}$, and $\cos(y\ln 2) = \cos n\pi = (-1)^n$.
Thus, $(-1)^n e^{x\ln 2} = 1 \Rightarrow x = 0$ and $n$ is even.
So, $z = \dfrac{2k\pi i}{\ln 2}, k \in \mathbb{Z}$. |
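One can verify these solutions numerically with complex exponentials (my addition):

```python
import cmath
import math

def two_to(z):
    """2**z for complex z, defined as exp(z * ln 2)."""
    return cmath.exp(z * math.log(2.0))

solutions = [2j * math.pi * k / math.log(2.0) for k in (-2, -1, 1, 2)]
values = [two_to(z) for z in solutions]
```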
Weak Law of Large Numbers | You are supposed to generate $n = 10000$ random variables in R. You could
obtain them by typing
x<-runif(10000)
This generates $x_1, \ldots, x_n$ from a uniform $U \left( 0, 1 \right)$ distribution.
To get an estimate of $\log \left( 2 \right)$, you compute the sum
$$ \frac{1}{n}\sum_{i = 1}^n \frac{1}{x_i + 1} $$
which converges by the law of large numbers to $E \left[ \frac{1}{1 + x}
\right] = \int_0^1 \frac{dx}{1+x} = \log \left( 2 \right)$. |
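The same experiment in Python (my addition; `random.random()` plays the role of `runif`, and the seed is fixed so the run is reproducible):

```python
import math
import random

random.seed(12345)  # fixed seed: reproducible run
n = 10000
xs = [random.random() for _ in range(n)]          # like runif(10000)
estimate = sum(1.0 / (1.0 + x) for x in xs) / n   # Monte Carlo estimate of log(2)
error = abs(estimate - math.log(2.0))
```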
Why are variables in integration by substitution so counter intuitive? | Since forever we have studied functions with the idea of the variable $x$, writing $y=f(x)$ for the graph of $f$ in the coordinate plane. The next most common letter for a dummy variable is $t$, possibly from physics, for time.
Similarly, when studying integration of a function, it's common to write the integral as $\int f(x)dx$. And most formulas/exercises are given in this form (power rule, exponential rule, ...). It's up to the writer's taste to choose the notation.
In this case my guess is that when he wrote the Substitution Law, he had the picture of the final integral in mind. That is, the last integral that the students/readers would encounter in the computation process.
I would like to add that many other books choose different notations. For example
$$\int^b_a f(u(x))u'(x)dx= \int^{u(b)}_{u(a)}f(u)du.$$
This choice puts the original integral first which in some cases helps the students recognize the substitution patterns faster. Again, it's totally personal taste and I wouldn't take that too seriously. |
Extending a Field Monomorphism | Take a look at A3.4:
The fact that $L$ is algebraically closed implies that the number of distinct roots of the polynomial $f\in F[x]$ in an algebraic closure of $F$ is equal to the number of distinct roots of $\sigma f\in L[x]$ in $L$. |
A birational map from $\mathbb{P}^1$ to an irreducible plane projective curve | As you noticed, one can try the function $$F:(x:y)\in P^1\longmapsto \bigl(x:y:-\tfrac{g(x,y)}{f(x,y)}\bigr)\in P^2.$$ This has image contained in the curve $C$. It is important that this is well-defined: the point $(x:y)$ is equal to $(\lambda x:\lambda y)$, and the points $(x:y:-\tfrac{g(x,y)}{f(x,y)}\bigr)$ and $(\lambda x:\lambda y:-\tfrac{g(\lambda x,\lambda y)}{f(\lambda x,\lambda y)}\bigr)$ are also equal, precisely because $f$ and $g$ are homogeneous of the degrees they have.
Notice that the map is defined only at the points $(x:y)$ of $P^1$ where $f(x,y)\neq0$; this is not a big problem: there are finitely many such points. The domain of our function is the open set which is complementary to the zero set of $f$.
On the other hand, we have a function $$G:(x:y:z)\in C\longmapsto (x:y)\in P^1.$$ You should find exactly where this is defined and check that it is actually well-defined. Finally, you should check that $F$ and $G$ are inverse in the appropriate sense. |
How do I find the maximum volume of an A4 piece of paper using the isoperimetric inequality? | The isoperimetric inequality says that the largest volume enclosed by a surface of given area is the volume of the sphere of this area. In the linked problem we do not want to "enclose" fluid, but just "hold" it. One then has to prove that the largest volume that can be "held" in a surface of given area is the volume of a half ball whose spherical boundary part has the given area. Given that, we can argue as follows:
The A4 sheet has area $A=210\cdot297$ mm$^2$. If we cut up this sheet into tiny strips and glue the strips together into a half sphere of radius $r$, we obtain $2\pi r^2<A$, or $r<99.63$ mm. The volume of the resulting half ball then is ${2\pi\over3}r^3<2.071$ liter. |
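The arithmetic, spelled out (my addition):

```python
import math

A = 210 * 297                        # area of an A4 sheet in mm^2
r = math.sqrt(A / (2 * math.pi))     # half sphere with 2*pi*r^2 = A
volume_litres = (2 * math.pi / 3) * r ** 3 / 1e6  # mm^3 -> litres
```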
The continuity of the function $F(y)=\int_0^1 \frac{yf(x)}{x^2+y^2}dx$ | To show continuity of $F$, it is enough to study continuity of $G(y)=\int^1_0\frac{f(x)}{x^2+y^2}\,dx$, for then $F(y)=yG(y)$ and $y\mapsto y$ is continuous.
Here we only prove continuity at any $y_0\neq0$. For all $y\in B(y_0;|y_0|)$
\begin{align}
|G(y)-G(y_0)|&\leq\int^1_0|f(x)|\Big|\frac{1}{x^2+y^2}-\frac{1}{x^2+y^2_0}\Big|\,dx\\
&\leq \|f\|_\infty\int^1_0\frac{|y^2-y^2_0|}{(x^2+y^2)(x^2+y^2_0)}\,dx\\
&\leq \|f\|_\infty|y^2-y^2_0|\frac{1}{y^2y^2_0}\xrightarrow{y\rightarrow y_0}0
\end{align}
Continuity at $y=0$ requires conditions on the integrability of $\int^1_0x^{-2}f(x)\,dx$. |
Computing $\mathrm{Hom}(\mathbb Z_n,\mathbb Z_m)$ as $\mathbb Z$-module | Let $C$ be a cyclic group with generator $\sigma$, and let $A$ be any abelian group. Then any homomorphism $f: C \rightarrow A$ is determined by $f(\sigma)$.
If $C$ is infinite cyclic -- let's call it $Z$ -- then there are no restrictions on $f(\sigma)$ and thus $\operatorname{Hom}(Z,A) = A$. In particular $\operatorname{Hom}(Z,Z) = Z$.
If $C$ is finite of order $n$ -- let's call it $Z_n$ -- then $f(\sigma)$ must have order dividing $n$ in $A$, and this is the only restriction. Thus $\operatorname{Hom}(Z_n,A) = A[n]$, the set of elements of order dividing $n$ in $A$.
Since $Z$ has no nonzero elements of finite order, $\operatorname{Hom}(Z_n,Z) = 0$.
Finally $\operatorname{Hom}(Z_n,Z_m) = Z_m[n]$, i.e., the subgroup of elements of order dividing $n$ in a finite cyclic group of order $m$. I leave it to you to identify this subgroup explicitly. Hint: Such an element has order dividing $m$ and order dividing $n$, so it has order dividing... |
Properties of Independent event in Probability | Hint Use the identity
$$
A_1\cup A_2\cup\cdots\cup A_n=(A_1^c\cap A_2^c\cap\cdots\cap A_n^c)^c
$$
to write
$$
P(A_1\cup A_2\cup\cdots\cup A_n)=1-P(A_1^c\cap A_2^c\cap\cdots\cap A_n^c)
$$
and proceed from there. |
Finding a permutation | Basically, you need to count the arrangements of the middle digits in 4abcdef1. The remaining digits, with their multiplicities, are $\{1\mapsto 1;\ 2\mapsto 1;\ 3\mapsto 2;\ 4\mapsto 2\}$.
So if you put the rest of the 4's, that would be C(6,4). Then, put all the 3's, that's C(4,2). Then, put the lone 2, that would be C(2,1).
So the total number of possible permutations is C(6,4) * C(4,2) * C(2,1) = 15 * 6 * 2. |
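As a check (my addition), note $\binom{6}{4}=\binom{6}{2}=15$, and the product agrees with the multinomial coefficient for arranging the multiset $\{1,2,3,3,4,4\}$ in the six middle positions:

```python
from math import comb, factorial

by_choices = comb(6, 2) * comb(4, 2) * comb(2, 1)  # slots for the 4s, the 3s, the 2
by_multinomial = factorial(6) // (factorial(2) * factorial(2))  # 6!/(2! 2! 1! 1!)
```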
Integral of product of two inverse polynomials | $\DeclareMathOperator{\argth}{argth}\DeclareMathOperator{\argcoth}{argcoth}$
To come back to the idea of integrating in $\argth$: whenever you have a quadratic, you may try to symmetrize it by shifting the variable to the average of the two roots.
Here $r(r+b)$ has middle point $\frac b2$ so substitute $u=r+\frac b2$
$$\int\dfrac{\mathop{du}}{a(u-\frac b2)(u+\frac b2)}=\int\dfrac{\mathop{du}}{a(u^2-\frac {b^2}4)}$$
And now you substitute $t=\dfrac{2u}b$ to get a $(1-t^2)$ factor.
For $|t|<1$ or $r\in]-b,0[$ you have $\displaystyle \int \dfrac{-2\mathop{dt}}{ab(1-t^2)}=C-\dfrac{2}{ab}\,\argth(t)$
For $|t|>1$ or $r\in]-\infty,-b[\cup\mathbb R^+$ you have $\displaystyle \int \dfrac{-2\mathop{dt}}{ab(1-t^2)}=C-\dfrac{2}{ab}\,\argcoth(t)$
With $t=1+\dfrac {2r}b$.
Note: $\argth/\argcoth(x)=\dfrac 12\ln\left|\dfrac{1+x}{1-x}\right|$, let's verify it is equal to the logarithmic solution found in the other answers.
$f(x)=-\dfrac {2}{ab}\dfrac 12\ln\left|\dfrac{2+\frac{2r}b}{-\frac{2r}b}\right|=-\dfrac {1}{ab}\ln\left|\dfrac{2b+2r}{2r}\right|=-\dfrac {1}{ab}\ln\left|\dfrac{b+r}{r}\right|=\dfrac{\ln|r|-\ln|b+r|}{ab}$
Remark: in Botond's answer he did not take care of the domain of definition, but he should have put absolute values; imranfat did, and has a similar result if we simplify the inside of the logarithm instead of expanding it. |
Question on a symmetric inequality | let $$a=\cos{A},b=\cos{B},c=\cos{C},A+B+C=\pi$$
because it is known
$$\cos^2{A}+\cos^2{B}+\cos^2{C}+2\cos{A}\cos{B}\cos{C}=1$$
then $$a+b+c=\cos{A}+\cos{B}+\cos{C}\le\dfrac{3}{2}$$ |
How to compute $\sum\limits_{n=3}^{\infty}\frac{(n-3)!}{(n+2)!}$ | Hint: There exist constants $c_k$, independent of $n$, such that
$$
\frac1{(n-2)(n-1)n(n+1)(n+2)}=\sum_{k=-2}^2\frac{c_k}{n+k}.
$$
To find $c_k$, multiply both sides by $n+k$ and evaluate the result at $n=-k$. For example,
$$
c_{-2}=\left.\frac1{(n-1)n(n+1)(n+2)}\right|_{n=2}=\frac1{24}.
$$
Sanity check: $\sum\limits_{k=-2}^2c_k=0$ and every $c_k$ should be a multiple of $\frac1{24}$ with the sign of $(-1)^k$ and depending only on $|k|$.
Once this is done, note that the value $S$ of the series you are looking for is
$$
S=c_{-2}\cdot\left(\frac11+\frac12\right)+c_{-1}\cdot\left(\frac12\right)+c_{1}\cdot\left(-\frac13\right)+c_{2}\cdot\left(-\frac13-\frac14\right),
$$
which yields the value you got thanks to W|A, namely, $S=\dfrac1{96}$. |
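The value $1/96$ can be confirmed with exact rational arithmetic (my addition):

```python
from fractions import Fraction
from math import factorial

def term(n):
    """(n-3)!/(n+2)! = 1/((n-2)(n-1)n(n+1)(n+2))."""
    return Fraction(factorial(n - 3), factorial(n + 2))

partial = sum(term(n) for n in range(3, 200))  # partial sum; the tail is O(1/N^4)
```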
Prove that ((∩))/ is a subgroup of /. Then, Deduce ((∩))/ is Abelian. | Hint Use the canonical projection: $\pi:G\to G/H$ by $\pi (x)=xH$, where $H\trianglelefteq G$.
Also, use the facts that a normal subgroup is also normal in any subgroup that contains it, and, of course, that any subgroup of an abelian group is abelian. |
Find an equation in rectangular coordinates for the surface represented by the cylindrical equation | To complete the square you want to write
$$
x^2+y^2-6y+9 - 9 = x^2+(y-3)^2 - 9
$$
since $(y-3)^2=y^2-6y+9$. |
Symmetry of point about a line in 3d | Let $\frac{x+1}{4}=\frac{y+1}{-3}=\frac{z-15}{16}=t$ and $(x,y,z)$ be the needed point.
Hence, $$\frac{x+t_x}{2}=-1+4t,$$
$$\frac{y+t_y}{2}=-1-3t$$ and
$$\frac{z+t_z}{2}=15+16t,$$ which gives,
$$x=-2+8t-t_x,$$
$$y=-2-6t-t_y$$ and
$$z=30+32t-t_z.$$
On the other hand, $$4(t_x-x)-3(t_y-y)+16(t_z-z)=0,$$ and after substitution of the values of $x$, $y$ and $z$ in this equation we can find the value of $t$,
and from here we can get values of $x$, $y$ and $z$.
I got $$t=\frac{4t_x-3t_y+16t_z-239}{281}$$ |
Fraction ring contains another implies prime contains another | $as=t \notin n$ implies $s \notin n$ since $n$ is an ideal.
In general, if $A$ is a commutative ring and $S,T \subseteq A$ are multiplicative subsets, then there is a homomorphism of $A$-algebras $A_S \to A_T$ (and this is unique) if and only if $A \to A_T$ sends the elements of $S$ to units, if and only if for all $s \in S$ there is some $a \in A$ and $t \in T$ such that $\frac{s}{1} \cdot \frac{a}{t}=1$ in $A_T$, i.e. $\frac{sa}{t}=1$, i.e. there is some $t' \in T$ such that $t'(t-sa)=0$. Hence, $A_S \to A_T$ exists iff for all $s \in S$ there is some $a \in A$ such that $sa \in T$.
In the special case $S=A \setminus \mathfrak{p}$, $T = A \setminus \mathfrak{q}$ for two prime ideals $\mathfrak{p},\mathfrak{q}$, we see that there is a homomorphism of $A$-algebras $A_\mathfrak{p} \to A_\mathfrak{q}$ iff $A \setminus \mathfrak{p} \subseteq A \setminus \mathfrak{q}$ iff $\mathfrak{q} \subseteq \mathfrak{p}$. |
Show that $\lvert \mathbb{Q}(\sqrt2 , \sqrt{1+i}) : \mathbb{Q} \rvert = 8$ | Use multiplicativity of degrees in the tower $$\lvert \mathbb{Q}(\sqrt2 , \sqrt{1+i}) : \mathbb{Q} \rvert=[\Bbb Q(\sqrt2,\sqrt{1+i}):\Bbb Q(\sqrt2)]\cdot[\Bbb Q(\sqrt2):\Bbb Q]$$
Let $x=\sqrt{1+i}$ and find its minimal polynomial over $\Bbb Q(\sqrt2)$
$$x^2=1+i\notin\Bbb Q(\sqrt2),$$ so the minimal polynomial of $x$ over $\Bbb Q(\sqrt2)$ has degree greater than $2$; but $(x^2-1)^2=-1$, i.e. $x^4-2x^2+2=0$ with $x^4-2x^2+2\in\Bbb Q[x]\subset\Bbb Q(\sqrt2)[x]$, and one checks this quartic is irreducible over $\Bbb Q(\sqrt2)$, so the degree
$[\Bbb Q(\sqrt2,\sqrt{1+i}):\Bbb Q(\sqrt2)]=4$, hence the proposed degree $8$. |
Do we really need to specify a basis to describe a tuple? | More or less by definition, tuples are elements of $\mathbb{R}^n$. What's being glossed over is the following.
Selecting an ordered basis $\mathcal{B} = (u,v)$ for a two-dimensional vector space $V$ amounts to the same thing as choosing a linear isomorphism $T : \mathbb{R}^2 \to V$, defined by
$$ T(a,b) = a \cdot u + b \cdot v $$
When you say "the element of $V$ represented by $(1,2)$", what you really mean is "the vector $T(1,2)$".
Another notation people sometimes write is, for $x \in V$, to use the notation $[x]_\mathcal{B}$ to mean the value $T^{-1}(x)$, which is a tuple. This notation is meant to be read as "the coordinates of $x$ relative to $\mathcal{B}$". So the statement you make in the OP would be written
$$ [x]_\mathcal{B} = (1,2) $$
There's a similar notation for linear transformations; given a choice of bases for the input and output spaces $[A]_{\mathcal{B}'}^{\mathcal{B}}$ means the matrix whose entries are the coordinates of $A$ relative to the two choices of bases.
So, for example, using matrix arithmetic to compute linear transformations boils down to the identity
$$ [A]^\mathcal{B}_{\mathcal{B}'} \cdot [x]_\mathcal{B} = [A(x)]_{\mathcal{B}'}$$
As an aside, when interpreting vectors of $\mathbb{R}^n$ as matrices, there are reasons why it's most natural to consider them as $n \times 1$ matrices (i.e. "column vectors") rather than as $1 \times n$ matrices (i.e. "row vectors"). |
given polynomial has a root in $Z_p$... | The polynomial $f(x)$ has a double root $x=-1$ over $\mathbb{F}_3$, so it is not difficult to find its factorization by division, namely
$$
f(x)=(x^2+1)(x+1)^2
$$
Over $\mathbb{F}_7$, it has a root $x=1$, and we see that $f(x)=(x^3+5)(x-1)$, and $x^3+5$ has no root over $\mathbb{F}_7$, and hence is irreducible. Hence $(c)$ is correct. As for $(d)$, over the integers, the polynomial is irreducible. It has no root, and the equation $f(x)=(x^2+ax+b)(x^2+cx+d)$ gives a contradiction. |
Is $\langle x^2 + 1\rangle$ a maximal ideal in $Q[x]$ or is it just a prime ideal? | The statement in your question is incorrect: every maximal ideal is a prime ideal, but the converse is not necessarily true.
However, because $\mathbb{Q}[x]$ is a principal ideal domain, every non-zero prime ideal in $\mathbb{Q}[x]$ is maximal. In particular, $(x^2+1)$ is a maximal ideal because $x^2+1$ is an irreducible polynomial over $\mathbb{Q}$.
It might help to contrast this with the case of $\mathbb{Z}[x]$: the ideal $(x^2+1)$ is still prime in $\mathbb{Z}[x]$, but it is not maximal because it is contained in the proper ideal $(3,x^2+1)$. |
Convergence in $L^1_{loc}$ implies convergence almost everywhere | Use Cantor's diagonalization argument. |
Proving det(B) = -det(A) theorem - am I on the right track? | I would work from a different definition of the determinant, either the view as alternating multi-linear form:
$$
\det : V^n \to F \\
\det(A) = \det(a_1, \dotsc, a_n), A = (a_1, \dotsc, a_n)
$$
or use
$$
\DeclareMathOperator{sgn}{sgn}
\det(A) = \sum_{\pi \in S_n} \sgn(\pi) \, a_{1\pi(1)} \dotsb a_{n\pi(n)}
$$ |
Probability of an odd number of heads when flipping a fair coin 3 times. | The coins are tossed onto a glass-topped coffee table. I am under the table, looking up (don't ask).
The person tossing sees an odd number of heads if and only if I see an even number of heads. But by symmetry odd number of heads is just as likely as odd number of tails.
Thus odd number of heads and even number of heads are equally likely, and the probability of an odd number of heads is $\frac{1}{2}$. |
Gradient and Hessian of $x x^T$ w.r.t. $x$, where $x \in \mathbb{R}^{n \times 1}$,? | Gradient
$$\frac{\partial \mathbf{Y}}{\partial x_i} =
\begin{bmatrix}
\frac{\partial y_{11}}{\partial x_i} & \frac{\partial y_{12}}{\partial x_i} & \cdots & \frac{\partial y_{1n}}{\partial x_i}\\
\frac{\partial y_{21}}{\partial x_i} & \frac{\partial y_{22}}{\partial x_i} & \cdots & \frac{\partial y_{2n}}{\partial x_i}\\
\vdots & \vdots & \ddots & \vdots\\
\frac{\partial y_{m1}}{\partial x_i} & \frac{\partial y_{m2}}{\partial x_i} & \cdots & \frac{\partial y_{mn}}{\partial x_i}\\
\end{bmatrix}.$$
Let $$\mathbf{Y} = \mathbf{xx^T} = \begin{bmatrix} x_1^2 & x_1x_2 & \ldots & x_1x_n \\
x_1x_2 & x_2^2 & \ldots & x_2x_n \\
\ldots &\ldots & \ldots & \ldots \\
x_nx_1 & x_nx_2 & \ldots & x_n^2 \\
\end{bmatrix}$$
So $$\frac{\partial \mathbf{xx^T}}{\partial x_i} = \mathbf{Z}_i + \mathbf{Z}_i^T \qquad i \in \lbrace 1, \ldots, n \rbrace$$
where $\mathbf{Z}_i$ is an all zero matrix except vector $x$ in its $i^{th}$ column.
Hessian
The derivative of $\frac{\partial \mathbf{Z}_i}{\partial x_j}$ is an all zero matrix except $1$ at its $(j,i)$ position. By symmetry, the derivative of $\frac{\partial \mathbf{Z}_i^T}{\partial x_j}$ is an all zero matrix except $1$ at its $(i,j)$ position. This means that
$$\frac{\partial \mathbf{xx^T}}{\partial x_i \partial x_j} = \mathbf{K}_{i,j} \qquad (i,j) \in \lbrace 1, \ldots, n \rbrace^2$$
where $\mathbf{K}_{i,j}$ is an $n \times n$ matrix which is all-zero except at positions $(i,j)$ and $(j,i)$. Note that if $i = j$, we get a $2$ in the $i^{th}$ (or $j^{th}$) element. |
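Here is a quick pure-Python sanity check (no external libraries; the helper names are ad hoc) comparing the formula $\partial(\mathbf{xx^T})/\partial x_i = \mathbf{Z}_i + \mathbf{Z}_i^T$ against central finite differences:

```python
def outer(x):
    n = len(x)
    return [[x[r] * x[c] for c in range(n)] for r in range(n)]

def analytic_partial(x, i):
    # Z_i + Z_i^T: the vector x placed in column i, plus x placed in row i.
    n = len(x)
    d = [[0.0] * n for _ in range(n)]
    for r in range(n):
        d[r][i] += x[r]   # Z_i contribution (column i)
        d[i][r] += x[r]   # Z_i^T contribution (row i)
    return d

def numeric_partial(x, i, h=1e-6):
    # Central finite difference of Y = x x^T with respect to x_i.
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    Yp, Ym = outer(xp), outer(xm)
    n = len(x)
    return [[(Yp[r][c] - Ym[r][c]) / (2 * h) for c in range(n)] for r in range(n)]

x = [1.0, -2.0, 0.5]
for i in range(3):
    A = analytic_partial(x, i)
    N = numeric_partial(x, i)
    assert all(abs(A[r][c] - N[r][c]) < 1e-6 for r in range(3) for c in range(3))
```

Note the diagonal case: at position $(i,i)$ the two contributions add up to $2x_i$, matching $\frac{d}{dx_i}x_i^2$.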
if $d|a$ then $|d| \leq |a|$. | Your statement is "if $a$ divides $d$", but your proof starts "let $d$ divide $a$". Your proof should start "let $a\mid d$", and then each of the remaining steps should have $d$ and $a$ swapped around.
Checking work Taylor Series | It is better to have a single term.
Use
$$\cos(x)=\frac{1}{2} \left(e^{i x}+e^{-i x}\right)\qquad \text{and} \qquad \sin(x)=-\frac{i}{2} \left(e^{i x}-e^{-i x}\right)$$ Now, use the series expansion of the exponentials to get
$$\cos(x)+\sin(x)=\sum_{n=0}^\infty \frac{\left(\frac{1}{2}+\frac{i}{2}\right) \left((-i)^n-i i^n\right)}{n!}x^n=\sum_{n=0}^\infty \frac{\sin \left(\frac{\pi n}{2}\right)+\cos \left(\frac{\pi n}{2}\right)}{n!} x^n$$
You could prefer
$$\cos(x)+\sin(x)=\sum_{n=0}^\infty \frac{i^{(n-1) n}}{n!}x^n$$ |
Trigonometry - Calculate 3D position of objects by their offset | x = x + offset_x * cos_ry * cos_rz - offset_x * sin_rx * sin_ry * sin_rz - offset_y * cos_rx * sin_rz + offset_z * sin_ry * cos_rz + offset_z * sin_rx * cos_ry * sin_rz;
y = y + offset_x * cos_ry * sin_rz + offset_x * sin_rx * sin_ry * cos_rz + offset_y * cos_rx * cos_rz + offset_z * sin_ry * sin_rz - offset_z * sin_rx * cos_ry * cos_rz;
z = z - offset_x * cos_rx * sin_ry + offset_y * sin_rx + offset_z * cos_rx * cos_ry; |
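The three assignments above apply a fixed linear map to the offset vector. Assuming the trigonometric shorthands mean what their names suggest (`cos_ry` $=\cos r_y$, etc.), a quick Python check confirms that the matrix of coefficients is a proper rotation (orthogonal with determinant $1$):

```python
import math

def rotation_matrix(rx, ry, rz):
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    # Rows = coefficients of (offset_x, offset_y, offset_z) read off the formulas.
    return [
        [cy * cz - sx * sy * sz, -cx * sz, sy * cz + sx * cy * sz],
        [cy * sz + sx * sy * cz,  cx * cz, sy * sz - sx * cy * cz],
        [-cx * sy,                sx,      cx * cy],
    ]

def is_rotation(M, tol=1e-9):
    # A real 3x3 matrix is a rotation iff M M^T = I and det M = 1.
    for r in range(3):
        for c in range(3):
            dot = sum(M[r][k] * M[c][k] for k in range(3))
            if abs(dot - (1.0 if r == c else 0.0)) > tol:
                return False
    det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
         - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
         + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return abs(det - 1.0) < tol

assert is_rotation(rotation_matrix(0.3, -1.1, 2.4))
```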
Determine with a proof the largest number which can be written as a product of natural numbers which have sum 2012 | Let the maximum product be $p$.
From AM-GM, we have
$$\dfrac{\sum_{k=1}^n a_k}n \geq \left(\prod_{k=1}^n a_k \right)^{1/n}$$
Hence, we get that
$$\dfrac{2012}n \geq p^{1/n}$$
Hence, we have that
$$p \leq \left(\dfrac{2012}n \right)^n$$
Now study the behavior of the function $f(x) = \left(\dfrac{2012}x \right)^x$ or equivalently the behavior of the function $g(x) = \log(f(x)) = x \log (2012) - x \log(x)$. We then have
$$g'(x) = \log(2012) - 1 - \log(x)$$
$$g'(x) = 0 \implies x = \dfrac{2012}e$$ Hence, $n \approx \dfrac{2012}e$ and to maximize the product all number must be more or less equal to $e$. Since, all the numbers are natural numbers, they must be either $2$ or $3$. Now make use of the fact that $2 + 2 + 2 = 3 + 3$ and $2^3 < 3^2$ to conclude that we need six hundred and seventy $3's$ and a $2$ to maximize the product. |
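The conclusion can be sanity-checked by dynamic programming over small sums; the Python sketch below is an illustration (the cutoff $40$ is arbitrary), not part of the proof:

```python
def best_product(n):
    # best[m] = maximum product of natural numbers summing to m
    # (best[0] = 1, the empty product), choosing the first part k.
    best = [1] * (n + 1)
    for m in range(1, n + 1):
        best[m] = max(k * best[m - k] for k in range(1, m + 1))
    return best[n]

def rule(n):
    # "Use 3's, patching the remainder with 2's (and never a lone 1)."
    if n % 3 == 0:
        return 3 ** (n // 3)
    if n % 3 == 1:
        return 4 * 3 ** ((n - 4) // 3)
    return 2 * 3 ** ((n - 2) // 3)

for n in range(2, 41):
    assert best_product(n) == rule(n)

# 2012 = 3*670 + 2, so the maximum is 2 * 3**670, matching the argument above.
assert rule(2012) == 2 * 3 ** 670
```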
Prove $\{a_n\}^r$ is a Cauchy sequence. | A Cauchy sequence is just a sequence where the values eventually get arbitrarily close to each other. Formally, we say that for an arbitrarily small value $\varepsilon > 0$, there is a “threshold” $N \in \mathbb{N}$ such that for all indices $n, m \geq N$, the values $a_n$ and $a_m$ differ by no more than $\varepsilon$. This definition may seem daunting, but if you break it down, it should make sense. Following from this definition, here's how you would solve your problem:
1. Start by picking some arbitrary value of $\varepsilon$. Here, the word “arbitrary” indicates that $\varepsilon$ can be anything—notice that we aren't picking a specific value for $\varepsilon$ (as you did when you said “Can we take $\varepsilon = 0$ because the difference is zero?”).
2. Next, identify your “threshold” $N$. All values of your sequence past this threshold must be within a distance of $\varepsilon$ of each other. Usually, the tricky part is choosing the right value for $N$, but it shouldn't be too difficult for your problem.
3. Prove that for all indices $n, m \geq N$, the distance $|a_n - a_m| < \varepsilon$. This is to just verify that the threshold you picked in (2) is indeed valid.
And that's it! We have shown that for arbitrarily small values of $\varepsilon$ (1), there is a threshold $N$ (2) such that all values past this threshold are $\varepsilon$-close to each other (3). Almost every time you want to prove a sequence is Cauchy, you will follow these steps. |
Is every homeomorphism of $\mathbb{Q}$ monotone? | No, homeomorphisms of $\mathbb{Q}$ need not be monotone. For an irrational $c > 0$, let
$$h_c(x) = \begin{cases}x &, \lvert x\rvert < c\\ -x &, \lvert x\rvert > c. \end{cases}$$
Then $h_c$ is a non-monotonic homeomorphism of $\mathbb{Q}$. |
How to sample from unknown distribution | You apply the mathematical inverse of the cumulative distribution function to numbers randomly sampled from a uniform distribution on the interval $[0,1]$.
Suppose for example you want to sample numbers from the exponential distribution which has a probability density function,
$$ f_X(x) = \frac{1}{\tau} e^{-x/\tau}\qquad (0\leq x ),$$
the cumulative distribution function is defined as,
$$ F(z) = P( X \le z)$$
$$= \int_0^z f_X(x)\, dx $$
$$= \frac{1}{\tau} \int_0^z e^{-x/\tau}\, dx $$
$$= 1 - e^{-z/\tau} $$
Now we have that,
$$F(z) = 1 - e^{-z/\tau},$$
the mathematical inverse of this function is,
$$F^{-1}(z) =-\tau \log(1-z).$$
======
Now we will apply the method I described at the beginning of this answer to get numbers sampled from the exponential distribution.
First I need a source of uniformly random numbers on the interval from $[0,1]$. I will use random.org to generate these numbers.
https://www.random.org/decimal-fractions/
======
I generated 100 random numbers sampled from the uniform distribution on $[0,1]$ using the random.org link above. The histogram of these numbers follows.
Then I applied $F^{-1}(z)$ to each of these numbers (I chose $\tau=1$). The resulting list of numbers obtained from this process obeys an exponential distribution. Their histogram is shown below.
You can see that the histogram has changed to have a shape consistent with an exponential distribution. |
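The whole procedure can be sketched in a few lines of Python, using a seeded `random.Random` as a stand-in for random.org (the seed and sample size are arbitrary choices):

```python
import math
import random

def sample_exponential(tau, n, rng):
    # Inverse-transform sampling: u ~ Uniform[0, 1), then F^{-1}(u) = -tau*log(1 - u).
    return [-tau * math.log(1.0 - rng.random()) for _ in range(n)]

rng = random.Random(0)        # seeded stand-in for random.org
tau = 1.0
xs = sample_exponential(tau, 100_000, rng)

# The exponential distribution has mean tau and median tau*ln(2);
# the sample should agree closely with both.
mean = sum(xs) / len(xs)
frac_below_median = sum(x < tau * math.log(2.0) for x in xs) / len(xs)
assert abs(mean - tau) < 0.02
assert abs(frac_below_median - 0.5) < 0.01
```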
Finite index subgroup $G$ of $\mathbb{Z}_p$ is open. | Just a remark - this does hold in much greater generality.
According to a theorem of Nikolay Nikolov and Dan Segal, in any topologically finitely-generated profinite group (that is, a profinite group that has a dense finitely-generated subgroup) the subgroups of finite index are open. In particular, this holds for the $p$-adic integers. |
count of matrix of 0,1 in which each row and each column have at least one 1. | Let's say $S_{t,u}$ is the number of binary $n$ by $n$ matrices with $t$ rows of zeroes and $u$ columns of zeroes. The rows can be chosen in $C(n,t)$ ways and the columns can be chosen in $C(n,u)$ ways. The remaining portion of the matrix has $(n-t)(n-u)$ binary elements. So
$$S_{t,u} = C(n,t)C(n,u)2^{(n-t)(n-u)}$$
(Note that this formula works even when $t=u=0$.)
By inclusion/exclusion, the number of matrices with no row of zeroes and no column of zeroes is
$$\begin{align}
N_0 &= \sum_{t=0}^n \sum_{u=0}^n (-1)^{t+u} S_{t,u} \\
&= \sum_{t=0}^n \sum_{u=0}^n (-1)^{t+u} C(n,t)C(n,u)2^{(n-t)(n-u)}
\end{align}$$
Now make a change of indices to $r=n-t$ and $s=n-u$. The result is
$$\begin{align}
N_0 &= \sum_{r=0}^n \sum_{s=0}^n (-1)^{2n-r-s} C(n,n-r)C(n,n-s) 2^{rs} \\
&= \sum_{r=0}^n \sum_{s=0}^n (-1)^{r+s} C(n,r)C(n,s) 2^{rs}
\end{align}$$ |
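For small $n$ the formula can be checked against brute-force enumeration (Python; feasible for $n \le 3$, since there are only $2^{n^2}$ matrices):

```python
from itertools import product
from math import comb

def count_formula(n):
    # N_0 from the inclusion/exclusion sum above.
    return sum((-1) ** (r + s) * comb(n, r) * comb(n, s) * 2 ** (r * s)
               for r in range(n + 1) for s in range(n + 1))

def count_brute(n):
    # Enumerate all 2^(n*n) binary matrices; keep those with no zero row/column.
    total = 0
    for bits in product((0, 1), repeat=n * n):
        M = [bits[i * n:(i + 1) * n] for i in range(n)]
        if all(any(row) for row in M) and \
           all(any(M[i][j] for i in range(n)) for j in range(n)):
            total += 1
    return total

for n in (1, 2, 3):
    assert count_formula(n) == count_brute(n)
```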
Can these two random variables be independent? | Consider the random vector $\vec v=(X,Y)$, which is a normal random vector whose covariance matrix is $\sigma^2 I$. The amazing property of the (multivariate) Gaussian distribution is that it is rotationally symmetric whenever the covariance matrix is isotropic, as it is in this case. Rotational symmetry implies that the direction that the vector $\vec v$ points in is uniformly distributed on the circle, regardless of what the magnitude of $\vec v$ is. More precisely, it means that the conditional distribution of the angle is independent of the magnitude, which is equivalent to saying that they are independent.
At this point, the burning question is: why is the distribution of $\vec v$ rotationally symmetric? One way to see it is by changing to polar coordinates. Indeed, by definition of the normal distribution, we have that for any measurable subset $A$ of the plane,
$$
\mathbb P(\vec v\in A)=\frac{1}{2\pi}\int_{A}e^{-x^2/2\sigma^2}e^{-y^2/2\sigma^2}\ dx\ dy
$$
$$
=\frac{1}{2\pi}\int_{A}e^{-r^2/2\sigma^2}\ (r\ dr\ d\theta).
$$
Here, we see that the integrand does not depend on $\theta$, which implies the rotational symmetry.
Finally, we can answer the question: $U=\|\vec v\|^2$ is a function of the magnitude and $V=\cos\angle \vec v$ is a function of the angle, so by the first paragraph these quantities are independent. |
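A quick simulation lends numerical support to the independence of $U$ and $V$ (this is evidence, not a proof; Python, with an arbitrary seed and sample size):

```python
import math
import random

rng = random.Random(42)
n = 200_000
us, vs = [], []
for _ in range(n):
    x = rng.gauss(0.0, 1.0)
    y = rng.gauss(0.0, 1.0)
    r2 = x * x + y * y
    us.append(r2)                  # U = |v|^2, a function of the magnitude only
    vs.append(x / math.sqrt(r2))   # V = cos(angle), a function of the angle only

def corr(a, b):
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / m
    va = sum((ai - ma) ** 2 for ai in a) / m
    vb = sum((bi - mb) ** 2 for bi in b) / m
    return cov / math.sqrt(va * vb)

# Independence implies zero correlation; with 200k samples the
# empirical correlation should be tiny.
assert abs(corr(us, vs)) < 0.02
```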
How to simplify a given expression? | Note
$$|1+z|^2=(1+z)(1+\overline{z})=1+(z+\overline{z})+|z|^2$$
If $z$ is purely imaginary, then $z=-\overline{z}$ and the above reduces to $1+|z|^2$. I claim that's the situation we have here, so I'll reduce the problem to showing that, defining $m:=c/a$ and $t_i := \tan\theta_i$,
$$\frac{1+\phi+(\phi-1)\exp(-2i\theta_2)}{1+\phi-(\phi-1)\exp(-2i\theta_2)}=
i\;\frac{t_2+mt_1}{1-mt_1t_2} \tag1$$
(where I'm assuming all constants are real). First, we observe that
$$\phi = imt_1 \qquad \exp(-2i\theta_2) = \frac{i+t_2}{i-t_2} \tag{2}$$
so that we can write the left-hand side of $(1)$ as
$$\frac{(1+imt_1)(i-t_2)+(imt_1-1)(i+t_2)}{(1+imt_1)(i-t_2)-(imt_1-1)(i+t_2)} \tag3$$
which readily simplifies to the right-hand side of $(1)$. $\square$ |
Is there a general method to find if ideal is maximal | Let $R$ be the $\mathbb Q$ vector space generated by all ordinals of cardinality less than $c$ together with the ordinal $c$. Let multiplication be given by intersection. Let $I$ be the ideal generated by all countable ordinals. Since the continuum hypothesis is undecidable, it is undecidable whether $I$ is a maximal ideal. |
$f: X \to Y$ order preserving implies $Ord(X) \leq Ord(Y)$ | If $\beta<\alpha$, you can view $f$ as a map from $\alpha$ into $\alpha$. $\{\xi\in\alpha:f(\xi)<\xi\}\ne\varnothing$, since clearly $f(\beta)<\beta$. Let $\eta=\inf\{\xi\in\alpha:f(\xi)<\xi\}$, and derive a contradiction with the assumption that $f$ is strictly order-preserving. |
Recurrence relations: cashier has no change | Define $a_k$ to be the change in the amount of 10\$ bills in the cashier after customer $k$ pays. Now define $S_k$ to be the amount of 10\$ bills in the cashier after customer $k$; that means $S_k=\sum_{j=1}^ka_j$ (because the cashier starts with no cash).
By the details provided, we can conclude:
- $S_k \geq 0$ always.
- $S_{2n} = 0$.
- $a_1 = (+1)$ and $a_{2n} = (-1)$.
So this problem is equivalent to finding the number of Dyck words of length $2n$.
Let $A_n$ be the number of options for the described scenario with $2n$ customers. Trivially $A_0=A_1=1$. Now, suppose we know the values of $A_0,\dots,A_{n-1}$ and we want to know the value of $A_n$. Note that if we fix some integer $1 \leq j_0 \leq n$ and constrain ourselves only to the cases where: $$\begin{cases}S_j \geq 1, \quad \text{if} \ \ 0<j<2j_0 \\ S_{2j_0}=0\end{cases}$$
we can use $A_0,\dots,A_{n-1}$ to get the number of options. Let us write $F_{j_0}$ for the number of options under this constraint. We know already that $a_1 = (+1)$, and thanks to the constraint we also know that $a_{2j_0} = (-1)$. So we can divide the problem under the constraint into two distinct and independent sequences of "customers" (words, sequences, etc.). One is customers $a_2,\dots,a_{2j_0-1}$ (whose partial sums stay nonnegative, which is equivalent to considering a full scenario of length $2(j_0-1)$), which contributes $A_{j_0-1}$ options. The second is $a_{2j_0+1},\dots,a_{2n}$, a full scenario of length $2(n-j_0)$, which contributes $A_{n-j_0}$ options.
So we have now: $$F_{j_0}=A_{j_0-1}\cdot A_{n-j_0}$$
Consider the sum $\sum_{j=1}^nF_j$. Because $j$ is (equivalently) defined to be the first index at which the cashier returns to $0$ 10\$ bills, different $F_j$'s don't double-count cases. Thus: $$A_n = \sum_{j=1}^nF_j = \sum_{j=1}^nA_{j-1} A_{n-j} $$
By the way: $A_n$ is known as the $n$-th Catalan number, and it turns out to come up in several places in combinatorics.
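The recurrence can be checked against brute-force enumeration of the $\pm 1$ sequences for small $n$ (a Python sketch, not part of the argument):

```python
from itertools import product

def catalan_recurrence(n_max):
    A = [1] * (n_max + 1)
    for n in range(1, n_max + 1):
        A[n] = sum(A[j - 1] * A[n - j] for j in range(1, n + 1))
    return A

def brute(n):
    # Count +-1 sequences of length 2n with all prefix sums >= 0 and total 0.
    count = 0
    for seq in product((1, -1), repeat=2 * n):
        s, ok = 0, True
        for a in seq:
            s += a
            if s < 0:
                ok = False
                break
        if ok and s == 0:
            count += 1
    return count

A = catalan_recurrence(6)
for n in range(7):
    assert A[n] == brute(n)
assert A[:7] == [1, 1, 2, 5, 14, 42, 132]   # the Catalan numbers
```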
Discriminant of a Conic Section | There are two ways to prove this:
Formal:
You can show, through a bunch of ugly computation, that the expression $B^2-4AC$ is invariant under rotation. So, consider when $B=0$ (in other words, when the conic section's axes are parallel to the coordinate axes). It is easy to see that for a hyperbola $-4AC$ is positive, for an ellipse $-4AC$ is negative, and for a parabola $-4AC$ is $0$.
Very Informal But Intuitive:
Take the equation $Ax^2+Bxy+Cy^2+Dx+Ey+F=0$. Imagine if $x$ and $y$ were very large numbers. We can forget about $Dx+Ey+F$ because it becomes insignificant when compared to the quadratic part, leaving $Ax^2+Bxy+Cy^2=0$. Now, divide by $x^2$:
$$A+B\left(\frac{y}{x}\right)+C\left(\frac{y}{x}\right)^2=0$$
We notice that we now have a quadratic in $\frac{y}{x}$.
This next part is a little hard to explain in words (and my English is sort of bad) but I will try my best.
The number of solutions to this equation represents the number of ways in which the graph of the equation "zooms off towards infinity." Imagine zooming out really far from a graph of a hyperbola. You would only see an "X" formed by two lines (these lines are the asymptotes of the hyperbola). If you solve for $\frac{y}{x}$ in the above equation, you would be solving for the slopes of those lines. Imagine zooming out really far from a graph of a parabola. You would only see one line (the axis of symmetry for the parabola). If you solve for $\frac{y}{x}$ in the above equation, you would be solving for the slope of that line. If you zoomed out really far from a graph of an ellipse, you would see a point.
So, if $A+B\left(\frac{y}{x}\right)+C\left(\frac{y}{x}\right)^2=0$ has two solutions for $\frac{y}{x}$, the equation is a hyperbola. One solution means parabola. Zero solutions means ellipse or circle. The number of solutions corresponds to the sign of $B^2-4AC$.
I sort of like this informal proof because it explains why the discriminant of a conic looks like that of a quadratic. |
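The sign rule itself is a one-liner; here it is checked on a few standard conics (Python; degenerate conics, which the discriminant alone cannot detect, are ignored here):

```python
def conic_type(A, B, C):
    # Sign of the discriminant of C*m^2 + B*m + A in m = y/x.
    d = B * B - 4 * A * C
    if d > 0:
        return "hyperbola"
    if d < 0:
        return "ellipse"
    return "parabola"

assert conic_type(1, 0, -1) == "hyperbola"   # x^2 - y^2 = 1
assert conic_type(1, 0, 2) == "ellipse"      # x^2 + 2y^2 = 1
assert conic_type(1, 0, 0) == "parabola"     # y = x^2, i.e. x^2 - y = 0
assert conic_type(0, 1, 0) == "hyperbola"    # xy = 1, a rotated hyperbola
```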
Number of homomorphisms $ \phi: S_4 \to D_4 $ | Let $a = \phi(12)$. As you have noted, this is either the identity, or one of the five order-2 elements of $D_4$.
Consider $\phi(23)$. Since $(12)(23)$ has order $3$, the element $\phi((12)(23))$ must have either order $3$ (impossible), or be the identity. Thus $\phi(23) = \phi(12)^{-1} = a$. A similar argument shows that all the other four transpositions must also be mapped to $a$.
How many candidates for homomorphisms does this leave us with? Are all of these actually homomorphisms? |
Show that the polynomial $x^8 -x^7+x^2-x+15$ has no real root | Note that $x^8-x^7$ and $x^2 - x$ look very similar. We can use this to factor some of the terms:
$$
x^8 - x^7 + x^2 - x + 15\\
= x^7(x-1) + x(x-1) + 15\\
= (x^7+x)(x-1) + 15\\
= x(x^6+1)(x-1) + 15
$$
If the entire expression is to be zero, then at the very least, we must have $x(x^6+1)(x-1)< 0$, which happens for $x\in (0, 1)$. But for $x\in (0, 1)$, we have
$$
|x|<1\\
|x^6+1|<2\\
|x-1|<1
$$
which implies $|x(x^6+1)(x-1)|<2$, which in turn implies that $x(x^6+1)(x-1) + 15$ is always larger than $13$.
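A numerical grid scan agrees with the bound (Python; not a proof, and the scan range $[-3,3]$ is an arbitrary choice, outside of which the $x^8$ term clearly dominates):

```python
def f(x):
    return x**8 - x**7 + x**2 - x + 15

# Per the argument, values below 15 can only occur on (0, 1),
# and even there f stays above 13.
samples = [i / 1000.0 for i in range(-3000, 3001)]
assert min(f(x) for x in samples) > 13
assert min(f(i / 10000.0) for i in range(1, 10000)) > 13   # finer scan of (0, 1)
```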
A metric space is connected if $\overline{E}\cap\overline{X\setminus E}\neq \emptyset$ | It seems that you have doubts whether $x \in \overline A \cap \overline B$ is contained in $X$.
The closure of a subset $C \subset X$ is the set
$$\overline C = \{ x \in X \mid \text{Each open neighborhood $U$ of $x$ in $X$ has non-empty intersection with $C$} \} .$$
Therefore by definition $\overline C \subset X$. It wouldn't make any sense to add points outside of $X$ to $\overline C$.
Thus $\overline A \subset X$ and $\overline B \subset X$. Hence $\overline A \cap \overline B \subset X$, in particular $x \in X$. |
How to prove "square arrangement product" converges? | Look at the partial sums:
$$
\sum_{i=1}^Nd_i=\Bigl(\sum_{i=1}^Na_i\Bigr)\Bigl(\sum_{i=1}^Nb_i\Bigr).
$$ |
Equivalent Sequential Definitions of Continuity | Assume the second one; we can prove the first one. For a convergent $\{x_{n}\}$, say $x_{n}\rightarrow x$, let $y_{n}=x$, and $z_{2n}=x_{n}$, $z_{2n+1}=y_{n}=x$; then $z_{n}\rightarrow x$, and by the assumption $f(z_{n})\rightarrow L$, so $f(z_{2n+1})\rightarrow L$. But $f(z_{2n+1})=f(x)$, so $L=f(x)$. On the other hand, we also have $f(z_{2n})=f(x_{n})\rightarrow L=f(x)$.
Does $\ell^3$ norm preserving linear transforms exist? | Given a linear transformation $\,(x,y)\to (X,Y) := (a x+b y,c x+d y)\,$
then the condition for preserving $\,|x|^3+|y|^3\,$ splits
into cases depending on the signs of $\,x\,$ and $\,y,\,$
and also of $\,X\,$ and $\,Y.$
For the case $\,x>0,\,y>0,\,X>0,\,Y>0$ we must have
$$ a^3+c^3=b^3+d^3=1, \quad a\,b^2+c\,d^2=a^2 b+c^2 d=0.$$
The two real solutions of this system of equations are
$\,(X,Y) = (x,y)\,$ and $\,(X,Y) = (y,x).$
The other cases are similar and allow one to negate $\,X\,$
or $\,Y.$
Proving two functions are monotonically related | There are a few questions in this post, so I'll just try to answer one of them. Hopefully I've interpreted it correctly. The question: say $f$ and $g$ are monotonically related. Then there is a monotonic function $h$ such that $f = h∘g$, where $∘$ is function composition.
First, define monotonically related. The function $f$ is monotonically related to $g$ if $f(x) < f(y)$ implies $g(x) < g(y)$. The definition as I've given it may be strengthened to an equivalence instead of an implication, but it's all we need for now. Note that the order of $f$ and $g$ matters according to this definition.
Now, it is easy to see that $g(x) = g(y)$ implies $f(x) = f(y)$. Otherwise, $f(x) < f(y)$ or $f(y) < f(x)$, and in either case by our definition of monotonic relatedness then $g(x) \neq g(y)$. Call this result (1).
OK, now for any function we can always define a set valued "inverse". In other words $g^{-1}(y) = \{x | g(x) = y\}$.
Now, we can also define a function $f_s$ as follows. The domain of $f_s$ is the set of all sets whose image is the same under $f$. That is, $f_s$ takes as its argument a set $S_y = \{x | f(x) = y\}$. The value of $f_s(S_y) = y$.
Now, we can define our function $h$. Define $h = f_s∘g^{-1}$. To show that this definition is always possible, consider the result (1). This says that whenever $g(x) = g(y)$ then $f(x) = f(y)$. Now, for all values $u, v$ in the set $g^{-1}(y)$, we know by definition that $g(u) = g(v)$ and so by (1) $f(u) = f(v)$, so the set is a valid input argument for $f_s$ and the definition of $h$ makes sense.
Now, we can easily show that $h∘g = f$. We know by definition of $h$ that $h∘g(x) = f_s∘g^{-1}∘g(x)$. Now, we know that $x$ must be in the set $g^{-1}∘g(x)$, and we know that the value of $f_s$ is the value of $f$ for any value in that set, so after applying $f_s$ we get $f(x)$, as required.
Now that we've defined such a $h$, we need to show that it's monotonic. Well for this we need to strengthen the definition of monotonically related from the earlier one to an equivalence. So now we have also that $g(x) < g(y)$ implies $f(x) < f(y)$. I have a feeling this is the standard definition anyway, because the word "related" has no hint of direction in it.
Now, let's show the monotonicity of $h$. We note that the domain of $h$ is the range of $g$. So to show $h$ is monotonic we just need to show for some $g(x) < g(y)$ that $h(g(x)) < h(g(y))$. But by our definition of $h$, we know that this amounts to $f(x) < f(y)$, which follows directly from monotonic relatedness.
I'm not sure how useful this is. It looks like you're trying to show something specific, involving the particular functions you've posted. Hopefully it helped in some way.
EDIT: We can also prove the converse of the above. That is, given a monotonic $h$ such that $f = h∘g$ then $f$ and $g$ are monotonically related. To prove this, we need to show two things: that $f(x) < f(y)$ implies $g(x) < g(y)$ and vice versa.
So let's prove $f(x) < f(y)$ implies $g(x) < g(y)$. Well from our assumptions $f = h∘g$ and $f(x) < f(y)$, then we get $h(g(x)) < h(g(y))$. Now we know that $g(x) \neq g(y)$, otherwise $h(g(x)) < h(g(x))$, which is clearly false. So, assuming a total ordering, we have either $g(x) < g(y)$ or $g(x) > g(y)$. Well, by monotonicity of $h$, $g(x) > g(y)$ implies $h(g(x)) > h(g(y))$ which contradicts our earlier $h(g(x)) < h(g(y))$. So we're left to conclude $g(x) < g(y)$, finishing this case.
The second case is to show $g(x) < g(y)$ implies $f(x) < f(y)$. This is simpler. We want to show $f(x) < f(y)$, which by our earlier assumption is that same as $h(g(x)) < h(g(y))$. Now this follows directly from $g(x) < g(y)$ and the monotonicity of $h$. |
Non linear BVP using Second order finite difference method. | In practice, a second-order differential equation is recast into a first-order system. By setting $Y=(y_1,y_2)^\top$ where $y_1 = y$ and $y_2=y'$, we have
$$
Y' = (y_2, -3 y_1 y_2)^\top = F(Y) \, ,
$$
which is an autonomous first-order ODE system.
An initial value problem of the type $Y(x_0) = Y_0$ can be solved numerically using for instance Runge-Kutta methods. In the second case,
$$
Y' = (y_2, -3 x y_1 y_2)^\top = F(x,Y)
$$
is non-autonomous, and Runge-Kutta methods still apply. The resolution of a boundary value problem with such conditions as $y(x_0) = y_0$ and $y(x_1) = y_1$ can be achieved using the shooting method. |
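Here is a minimal shooting-method sketch for the first system, $y'' = -3yy'$, with hypothetical boundary conditions $y(0)=0$, $y(1)=1$ chosen purely for illustration: classical RK4 for the initial value problem, plus bisection on the unknown slope $y'(0)$ (the bracket $[0,5]$ is an assumption):

```python
def F(Y):
    # Y = (y, y'); the system y'' = -3 y y'.
    y1, y2 = Y
    return (y2, -3.0 * y1 * y2)

def integrate(s, steps=400):
    # Classical RK4 on [0, 1] starting from y(0) = 0, y'(0) = s; returns y(1).
    h = 1.0 / steps
    y = (0.0, s)
    for _ in range(steps):
        k1 = F(y)
        k2 = F((y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = F((y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = F((y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return y[0]

def shoot(target=1.0, lo=0.0, hi=5.0):
    # Bisection on G(s) = y(1; s) - target; the bracket [lo, hi] is assumed valid.
    for _ in range(60):
        mid = (lo + hi) / 2
        if integrate(mid) - target > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

s = shoot()
assert abs(integrate(s) - 1.0) < 1e-8
```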
Find the limit of $f(n)$ where $f(n)=(2-f(n+1))^2$ | It is easy to see
$$ f(n+1)=2-\sqrt{f(n)} $$
and hence
$$ |f(n+1)-1|=|1-\sqrt{f(n)}|=\frac{|f(n)-1|}{1+\sqrt{f(n)}}. $$
It is not hard to see $2-\sqrt2\le f(n)\le 2$ and so
$$ |f(n+1)-1|=\frac{|f(n)-1|}{1+\sqrt{f(n)}}\le\frac{1}{3-\sqrt2}|f(n)-1|. $$
Thus
$$ |f(n)-1|\le \frac{1}{(3-\sqrt2)^{n-1}}|f(1)-1|. $$
Letting $n\to\infty$ gives
$$ \lim_{n\to\infty}f(n)=1. $$ |
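Iterating $f(n+1)=2-\sqrt{f(n)}$ numerically from a few starting values illustrates the contraction (Python; the starting values are arbitrary):

```python
import math

def iterate(x0, steps=200):
    x = x0
    for _ in range(steps):
        x = 2.0 - math.sqrt(x)
    return x

# The contraction factor near the fixed point is 1/(1 + sqrt(f)) ~ 1/2,
# so 200 steps drive every start to 1 within floating-point precision.
for x0 in (2.0, 2.0 - math.sqrt(2.0), 0.25, 3.5):
    assert abs(iterate(x0) - 1.0) < 1e-12
```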
Condition on Inequalities | This is not true.
For instance, you want $Y<1$, but you can take $p = 0.9$, $q=0.01$, $a=0.001$, and $b=0.01$, and then
$Y \approx 73$ (as a quick computation, e.g. with Wolfram Alpha, confirms).
Modular equations system | The principle is essentially the Chinese Remainder Theorem. Because the moduli are pairwise coprime, there is a solution for any value of $a$.
But suppose you didn't know the value of $a$. One way to proceed is the following (which is often a faster way of solving such systems rather than the constructive proof of the Chinese Remainder Theorem):
If $x\equiv 1\pmod{2}$, then you must have $x = 1+2r$ for some integer $r$. Plugging that into the second congruence, you have $1+2r\equiv 2\pmod{3}$, or $2r\equiv 1 \pmod{3}$. Multiplying through by $2$ we get $r\equiv 2 \pmod{3}$, so $r$ must be of the $r=2+3s$. Plugging that into $x$, we get that $x$ must be of the form $x = 1+2r = 1+2(2+3s) = 1+4+6s = 5+6s$.
Finally, we plug that into the final congruence. It gives $5+6s\equiv a\pmod{5}$, which is equivalent to $s\equiv a\pmod{5}$. This can always be solved, $s=a+5k$, so the system always has a solution. The solution(s) is $x=5+6s = 5+6(a+5k) = 5+6a + 30k$. that is, $x\equiv 5+6a\pmod{30}$. |
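The conclusion $x\equiv 5+6a\pmod{30}$ is easy to confirm by direct search (Python):

```python
def solve(a):
    # All x in [0, 30) with x = 1 (mod 2), x = 2 (mod 3), x = a (mod 5).
    return [x for x in range(30) if x % 2 == 1 and x % 3 == 2 and x % 5 == a % 5]

# For every residue a mod 5 there is exactly one solution mod 30,
# and it is 5 + 6a.
for a in range(5):
    assert solve(a) == [(5 + 6 * a) % 30]
```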
If $f(z)^3$ is analytic then $f(z)$ is analytic. Is it true? If yes, prove it; otherwise give a counterexample. | Since there is no continuity requirement for $f(z)$, just pick your favourite non-zero analytic function $g(z)$ and solve $f(z)^3=g(z)$. At almost all points you have three choices, and you can easily make it non-analytic.
E.g. Let $1, \omega, \omega^2$ be the three roots of unity. Define $f$ to be sometimes $1$, sometimes $\omega$... Then $f$ cannot be analytic, but $f^3$ is constant.... |
Solve $\lim_{x \rightarrow +\infty}\frac{\sqrt{x}+x}{x+\sqrt[3]{x}}$ | Hint:
$$\frac{\sqrt{x}+x}{x+\sqrt[3]{x}}=\frac{\frac1{\sqrt x}+1}{1+\frac1{\sqrt[3]{x^2}}}$$
It's that simple! |
Quaternion Projective Space $\mathbb HP^n$ and Octonionic Projective Space $\mathbb OP^n$ | The CW structure of quaternionic projective spaces is usually explained in books dealing with such things. In particular, the $n$-dimensional space is built with one $4k$-cell for each $k$ from $0$ to $n$.
With the octonions, it is more complicated: the projective line works as usual, but already for the plane one has to work quite a bit to even define what one means (see the book Octonions by Conway) because of the lack of associativity: the octonions are an alternative algebra, and that is enough to get by. But there are no higher dimensional octonionic projective spaces. |
Definite integral question with negative variable in integral | Yes, indeed. Your work is correct, until you compute $F''(0)$
Check again, and you should find that $F''(0) = 1$.
$$F''(x)=\dfrac{-x^2+2x+1}{(x^2+1)^2} \implies F''(0) = \dfrac{-0 + 0 + 1}{(0 + 1)^2} = \frac{1}{1} = 1$$ |
Box Plot Log Scaled? | The main thing is that the ends and the middle band of the box are preserved by a logarithmic transformation, because they are the middle three quartiles, which depend only on order. So it sounds like a reasonable thing to do. The only thing can change is which data points are considered outliers if you use a 1.5 IQR criterion for the ends of the whiskers — but depending on your data this may actually work better in the log scale: see a two-dimensional example in Figure 3 of Rousseeuw et al.'s "The Bagplot: A Bivariate Boxplot". (Disclaimer: I am not a statistician.) |
Sufficient Condition for $\lim_{n\rightarrow \infty} \frac{a_n}{b_n}=1$ | If $x_n = b_n/a_n \rightarrow L \neq 0$, then $x_n^{-1} = a_n/b_n \rightarrow L^{-1}.$
We have
$$|x_n^{-1}- L^{-1}| = \frac{|x_n - L|}{|x_n||L|}.$$
If $(x_n)$ converges then it is bounded, and $|x_n| > K > 0$ for $n$ sufficiently large if the limit is non-zero.
Note that $||x_n| - |L|| \leq |x_n - L|< |L|/2$ for $n$ sufficiently large and we can take $K = |L|/2$.
Hence
$$|x_n^{-1}- L^{-1}| < \epsilon$$ when
$$|x_n - L| < K|L|\epsilon.$$
which will be true for all $n$ sufficiently large.
In this case $L = 1$. |
How to design a closed rectangular box of minimum cost using Lagrange Multipliers | $$f(x,y,z)= 2bxy+2cyz+2dxz$$
is the (cost) function to be minimized
The constraint is
$$ g(x,y,z)=xyz=V_0 $$
now solve $\vec \nabla f = \lambda \vec \nabla g$ |
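Multiplying the three components of $\vec\nabla f = \lambda\vec\nabla g$ by $x$, $y$, $z$ respectively gives $b\,xy = c\,yz = d\,xz$, and combining with $xyz=V_0$ yields $z=(b^2V_0/(cd))^{1/3}$, $x=cz/b$, $y=dz/b$. A numerical spot check in Python (the sample coefficients are arbitrary, and the grid search is only a sanity check, not a proof):

```python
def candidate(b, c, d, V0):
    # Stationary point from b*x*y = c*y*z = d*x*z together with x*y*z = V0.
    z = (b * b * V0 / (c * d)) ** (1.0 / 3.0)
    return (c * z / b, d * z / b, z)

def cost(b, c, d, x, y, z):
    return 2 * b * x * y + 2 * c * y * z + 2 * d * x * z

b, c, d, V0 = 1.0, 2.0, 3.0, 4.0
x0, y0, z0 = candidate(b, c, d, V0)
assert abs(x0 * y0 * z0 - V0) < 1e-9          # the constraint holds

# Coarse grid over the constraint surface (z eliminated via z = V0/(x*y)):
# no grid point should beat the candidate.
f0 = cost(b, c, d, x0, y0, z0)
best = min(cost(b, c, d, 0.05 * i, 0.05 * j, V0 / (0.05 * i * 0.05 * j))
           for i in range(1, 200) for j in range(1, 200))
assert f0 <= best + 1e-6
```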
Find context free grammar and prove its correctness | HINTS:
(a) This is the union of the languages $\left\{a^ib^ja^k:i<j\right\}$, $\left\{a^ib^ja^k:i>j\right\}$, $\left\{a^ib^ja^k:j<k\right\}$, and $\left\{a^ib^ja^k:j>k\right\}$, and it’s not too hard to write context-free grammars for these languages.
(b) Finding a proof that your grammar generates the language $L=\{w\in\{a,b\}^*:|w|_a=2|w|_b\}$ seems a bit difficult, but I can do it for a somewhat different grammar. Say that $w\in L$ is irreducible if it is non-empty and not of the form $uv$ for any non-empty $u,v\in L$. Prove that if $w$ is irreducible, then there is a $u\in L$ such that $w\in\{aaub,abua,buaa\}$. Then consider the grammar with these productions:
$$\begin{align*}
&S\to TS\mid\varepsilon\\
&T\to aaSb\mid abSa\mid bSaa
\end{align*}$$ |
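Note that each $T$-production contributes two $a$'s and one $b$, so every word this grammar generates satisfies $|w|_a = 2|w|_b$. A small Python script can verify this soundness property by exhaustively deriving all words up to a length bound (completeness is not checked here):

```python
from collections import deque

PRODUCTIONS = {
    "S": ["TS", ""],
    "T": ["aaSb", "abSa", "bSaa"],
}

def derive(max_len=9):
    # Breadth-first leftmost expansion of sentential forms.  A form is pruned
    # once its guaranteed terminal yield (terminals so far + 3 per pending T)
    # exceeds max_len; this yield never decreases along a derivation, so every
    # generated word of length <= max_len is found.
    terminals, seen, queue = set(), set(), deque(["S"])
    while queue:
        form = queue.popleft()
        i = next((k for k, ch in enumerate(form) if ch.isupper()), None)
        if i is None:
            terminals.add(form)
            continue
        for rhs in PRODUCTIONS[form[i]]:
            nxt = form[:i] + rhs + form[i + 1:]
            yield_bound = sum(ch.islower() for ch in nxt) + 3 * nxt.count("T")
            if yield_bound <= max_len and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return terminals

words = derive()
assert "" in words and "aab" in words and "aba" in words and "baa" in words
for w in words:
    assert w.count("a") == 2 * w.count("b")
```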
There exist $x_{1},x_{2},\cdots,x_{k}$ such two inequality $|x_{1}+x_{2}+\cdots+x_{k}|\ge 1$ | Let $x_i=\frac{1}{k}$ for $i=1$ to $k$. Then both inequalities hold. |
Deriving separable differential equation when velocity is function of position | I'll continue based on my comments. I assume that the task at hand is that you're given the acceleration as a function of position, and you want to figure out what the velocity is as a function of position?
And I guess you've seen an argument something like the following:
\begin{align}
a &= \dfrac{dv}{dt} \\
&= \dfrac{dv}{dx} \cdot \dfrac{dx}{dt} \tag{chain rule}\\
&= \dfrac{dv}{dx}v \\
&= \dfrac{d}{dx} \left( \dfrac{v(x)^2}{2}\right)
\end{align}
Hence,
\begin{align}
v(x) &= \pm \sqrt{2 \int a(x)\, dx + C}
\end{align}
where the $\pm$ is to be decided based on the sign of the velocity for the given problem at hand, and the arbitrary constant $C$ is to be determined based on initial conditions
While the above argument is very quick, it completely mixes up the roles played by different functions by hiding it all inside Leibniz's notation and avoiding writing out the compositions involved.
A more "behind the scenes" calculation might proceed along the following lines. We assume that we're given a function $\alpha: x \mapsto \alpha(x)$, which we interpret as the acceleration as a function of position. Now, the key step which is left implicit in the above discussion is that we actually have an invertible function $t\mapsto \gamma(t)$, which we interpret as giving, for each time $t$, the position at time $t$. Also, for each position $x$, we interpret $\gamma^{-1}(x)$ as the time at which the position is $x$.
Note that it is crucial that $\gamma$ be invertible for this all to make sense; it is precisely because this function is invertible that it is "acceptable" to be so imprecise as to whether we think of velocity/acceleration as functions of time or position. Let me now make a list of all the functions with everything made explicit:
$\gamma$ is position as a function of time (which like I said, means that for each time $t$, $\gamma(t)$ is the position at time $t$)
$\gamma^{-1}$ is time as a function of position
$v := \gamma'$ is the velocity as a function of time
$\nu := v \circ \gamma^{-1}$ is velocity as a function of position
$\alpha$ as I defined above is the acceleration as a function of position (which we assume is given)
Lastly, $a:= \alpha \circ \gamma$ is the acceleration as a function of time. But also (by definition) we have $a = v' = \gamma''$
So, let's now carry out (almost) the same computation as above. In every equal sign that follows, we have an actual equality of functions (you might have to refer to the above list, and compose with $\gamma$ or $\gamma^{-1}$ where appropriate to get from one equal sign to the next):
\begin{align}
\alpha &= a \circ \gamma^{-1} \\
&= v' \circ \gamma^{-1} \\
&= (\nu \circ \gamma)' \circ \gamma^{-1} \\
&= [(\nu' \circ \gamma) \cdot \gamma'] \circ \gamma^{-1} \tag{chain rule} \\
&= (\nu' \circ \gamma \circ \gamma^{-1}) \cdot (\gamma' \circ \gamma^{-1}) \\
&=\nu' \cdot (v \circ \gamma^{-1}) \\
&= \nu' \cdot \nu \\
&= \left( \dfrac{\nu^2}{2}\right)'
\end{align}
Hopefully, now you can try to pattern-match each equality in these two derivations, and see where exactly the abuse of notation is going on (and how to fix it for yourself in future examples).
From here, we would have to integrate both sides, and solve for $\nu$ in terms of an integral of $\alpha$.
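As a concrete check, take the hypothetical case of constant acceleration $\alpha(x) = g$ starting from rest, where the formula gives $\nu(x) = \sqrt{2gx}$; a direct time-domain integration agrees:

```python
import math

# Sanity check of v(x) = sqrt(2 * integral of a dx + C) for the assumed
# case a(x) = g (free fall from rest), where the formula gives
# v(x) = sqrt(2 g x).  Integrate the motion in time and compare at x = 1.
g, dt = 9.8, 1e-5
x = v = 0.0
while x < 1.0:              # step until the position passes x* = 1
    v += g * dt             # symplectic Euler: update velocity first,
    x += v * dt             # then position
predicted = math.sqrt(2 * g * 1.0)
print(abs(v - predicted) < 1e-2)   # True: both routes give the same speed
```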
One final remark: very often in physics, people would use the notation $x(t)$ instead of $\gamma(t)$ to describe the position as a function of time. Most of the time, I would have absolutely no issue with such notation. So, they are considering the curve as $t\mapsto x(t)$ for what I wrote as $t \mapsto \gamma(t)$. But the trouble in this example is that we also have to consider the inverse function $\gamma^{-1}$, which we like to think of as a function of position. So, we like to use $x$ as the input, and write $\gamma^{-1}(x)$ as the output (the time elapsed for position $x$).
Clearly, there will be an issue if we choose to write $t \mapsto x(t)$ for the name of the curve, because then the inverse function would be $x^{-1}(\cdot)$, which people might refer to as $t(\cdot)$. But now what letter do we use for the arguments? $x$ again? so that we write $t(x)$? Clearly this would be very confusing because we're using the letters $t,x$ to mean both a function and also points in the domain. Thus, in this specific case, I chose to introduce a new letter $\gamma$ to keep the two concepts separate, so that we can free up the letters $t,x$ to simply mean points in the domain (of $\gamma$ and $\gamma^{-1}$ respectively). |
Given that the roots of the equation $x^3-9x^2+bx-216=0$ are consecutive terms in a geometric sequence, find the value of b. | Let $\alpha$, $\beta$ and $\gamma$ be our roots.
Thus, $$\beta^2=\alpha\gamma,$$
$$\alpha+\beta+\gamma=9$$ and $$\alpha\beta\gamma=216.$$
Thus, from the first and the third we obtain:
$$\beta^3=216$$ or
$$\beta=6.$$
Thus, $$\alpha+\gamma=3$$ and
$$\alpha\gamma=36.$$
Id est,
$$b=\alpha\beta+\alpha\gamma+\beta\gamma=6(\alpha+\gamma)+\alpha\gamma=6\cdot3+36=54,$$
which says that you are right and your book is wrong.
It's interesting that for $b=54$ we have
$$(x-6)(x^2-3x+36)=0,$$
which does not have three real roots, even though its roots do form a geometric sequence.
Thus, either the given should only state that $b$ is a real number, or we also need to work with the other two cases:
$$\beta=6(\cos120^{\circ}+i\sin120^{\circ});$$
$$\beta=6(\cos240^{\circ}+i\sin240^{\circ}).$$ |
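A quick numeric verification (not in the original argument) that $b=54$ satisfies both Vieta's formulas and the geometric-sequence condition, using the complex roots of $x^2-3x+36$:

```python
import cmath

# Roots of x^3 - 9x^2 + 54x - 216 = (x - 6)(x^2 - 3x + 36): beta = 6
# and the two complex roots of the quadratic factor.
beta = 6
disc = cmath.sqrt(3**2 - 4 * 36)          # discriminant is -135 < 0
alpha, gamma = (3 + disc) / 2, (3 - disc) / 2

assert abs(alpha + beta + gamma - 9) < 1e-9        # sum of roots = 9
assert abs(alpha * beta * gamma - 216) < 1e-9      # product of roots = 216
assert abs(alpha*beta + alpha*gamma + beta*gamma - 54) < 1e-9  # b = 54
assert abs(beta**2 - alpha * gamma) < 1e-9         # geometric: beta^2 = alpha*gamma
print("all Vieta and geometric-sequence checks pass")
```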
Expanding the precision or range of a random number generator | For convenience, shift the ranges to [0,9] and [0,29].
Draw a first number A. Draw a second one, B; if B = 9, draw again until B ≠ 9.
Compute 3·A + B/3, using integer division.
The reason to reject 9 is to ensure that B/3 is uniform on {0, 1, 2}, each with probability 1/3.
If I am right, you can also keep the remainder from one drawing to the next, add it to B (giving a uniform number in [0,11]) and divide by 4 instead of 3.
Initialize R
Loop
Draw A
Draw B
Output 3.A + (B+R) / 4
Keep R = (B+R) % 4 |
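Since every accepted pair (A, B) is equally likely, exhaustively enumerating all 10 × 9 accepted pairs proves that the first scheme's output is exactly uniform on [0, 29]:

```python
from collections import Counter

# Exhaustive check of the rejection scheme: A uniform on [0, 9],
# B uniform on [0, 8] after rejecting 9, output 3*A + B//3.
counts = Counter(3*A + B // 3 for A in range(10) for B in range(9))
print(sorted(counts) == list(range(30)))   # every value 0..29 occurs
print(set(counts.values()) == {3})         # each exactly 3 times: uniform
```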
When minimax theorem does not hold: counterexample of absence of convexity | Take a set $X$ with just two points. On $X\times X$ define $f(x,y)=1$ if $x=y$ and $0$ otherwise. This is example where min and max cannot be switched but $f$ is continuous. |
On the validity of a simplification made on a differential equation | You can treat the case $y=0$ separately. Look at the equation: $y=0$ is clearly a solution.
When $y\neq 0$, you define $v=\frac{1}{y^2}$ and what you've done is correct. |
Formula for expectation of Bivariate Data | In the case of this particular question, the means of $X$ and $Y$ will be just the geometric center of the parallelogram as it is a uniform distribution.
Intuitive understanding
Think of the parallelogram as a dartboard, with one corner at the origin and its base on the X-axis. Think of the uniform distribution as a uniform force attracting darts, all across the parallelogram. Just consider the X-axis. When you throw darts at this parallelogram, since the force is uniform across the X-axis, the mean of the darts' positions will be the midpoint of the parallelogram's extent along the X-axis. A similar logic shows that the mean along the Y-axis will be the midpoint of the height.
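A Monte Carlo sketch of the dartboard picture, using a hypothetical parallelogram spanned by vectors $u$ and $v$ with one corner at the origin: the sample mean lands at the geometric center $(u+v)/2$.

```python
import random

# Uniform samples from the parallelogram {s*u + t*v : s, t in [0, 1]}.
# The sample mean should approach the geometric center (u + v)/2.
random.seed(0)
u, v = (4.0, 0.0), (1.0, 2.0)     # base on the x-axis, slanted side (assumed)
n = 200_000
sx = sy = 0.0
for _ in range(n):
    s, t = random.random(), random.random()
    sx += s*u[0] + t*v[0]
    sy += s*u[1] + t*v[1]

center = ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)   # (2.5, 1.0)
print(abs(sx/n - center[0]) < 0.02, abs(sy/n - center[1]) < 0.02)
```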
Confusion about properties of linear transformation. | If $a\not=0$, then $T(0)\not=0$. The transformation is what we call affine, but not strictly linear. |
Expected codelength for Huffman-algorithm with probabilities | HINT: The induction is on $n$. If $n=2$, the only possibility is that $k_1=k_2=1$: no other combination gives you $p_1+p_2=1$. You can easily verify that in that case $h(c_1)=h(c_2)=1$. For the induction step, assume the result for some $n\ge 2$, and prove it for $n+1$. Note that after you combine the the two least probable characters in the first step of the Huffman algorithm, you have in effect an alphabet of $n$ characters, one corresponding to the union of the two least probable characters, so you can apply the induction hypothesis. |
Is there a good software that helps with the outline of a proof? | Such a software, in the sense that you have in mind, doesn't exist. There are formal proof checkers but 'formal proofs' look nothing like the proofs you would write and like to be checked.
Also, I don't think it's a good idea to use such tools for practice purposes, if they existed . Instead of relying on external sources, carefully think through every step of your proof. This not only allows you to self-verify your work but in addition builds a certain mathematical maturity that is crucial for any kind of mathematical work. |