title | upvoted_answer
---|---|
What are some easy-to-understand ways of defining an order among a group of people | In John Barth's novel The End of the Road, whenever "the Doctor" cannot make a decision between alternatives, he resorts to three principles:
Sinistrality ("if the alternatives are side by side, choose the one on the left")
Antecedence ("if they're consecutive in time, choose the earlier")
Alphabetic Priority ("choose the alternative whose name begins with the earlier letter of the alphabet")
In the current case, one can generalize these to:
sinistrality in birthplace (western birthplaces first, eastern birthplaces last)
antecedence in birthdate (older people first, younger people last)
alphabetical priority by last name (ordered by last name)
Others include:
alphabetizing by birthplace
height
alphabetizing by father's (or mother's or ...) first names
ordering by driver's license number
ordering by middle digits of social security number
etc. |
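In code, such tie-breaking rules are just lexicographic comparisons of key tuples: the first key decides, and each later key is consulted only to break ties. A hypothetical Python sketch (all names and data invented for illustration):

```python
# Each person is (first name, last name, birthplace longitude, birth year).
# Smaller longitude = further west ("sinistrality"), smaller year = older
# ("antecedence"), last name breaks remaining ties ("alphabetic priority").
people = [
    ("Emmy", "Noether",  11.1, 1882),
    ("Ada",  "Lovelace", -0.1, 1815),
    ("Carl", "Gauss",    10.5, 1777),
]
ordered = sorted(people, key=lambda p: (p[2], p[3], p[1]))
```

Python compares the key tuples element by element, which is exactly the cascading tie-break the Doctor's principles describe.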
Isomorphism class of vector bundle over $\mathbb S^1$. | $\require{AMScd}$Consider the map $f:t\in[0,1]\mapsto\exp(2\pi it)\in S^1$. If $E$ is a vector bundle on $S^1$, then $f^*(E)$ is a vector bundle on $[0,1]$. It is not difficult to show that all vector bundles on $[0,1]$ are trivial. You can see from this that there is a continuous bundle map $F$ such that the diagram
\begin{CD}
[0,1]\times\mathbb R^n @>F>> E \\
@Vp_2VV @V\xi VV \\
[0,1] @>f>> S^1
\end{CD}
commutes and $F$ is an isomorphism on fibers. In particular, we can construct a unique linear isomorphism $\phi_f:\mathbb R^n\to\mathbb R^n$ such that $F(1,\phi_f(v))=F(0,v)$ for all $v\in\mathbb R^n$.
Deduce from this that all vector bundles on $S^1$ are obtained from a trivial vector bundle $[0,1]\times\mathbb R^n\to[0,1]$ by identifying $(0,v)$ with $(1,\phi(v))$ for a fixed automorphism $\phi:\mathbb R^n\to\mathbb R^n$.
Next, show that if $\phi$ and $\psi$ are two automorphisms of $\mathbb R^n$ in the same path component of $GL(n,\mathbb R)$, then the corresponding bundles are isomorphic. |
Solve integral $\int \frac{\sqrt{x+1}+2}{(x+1)^2 - \sqrt{x+1}}dx$ | $$\int \frac{\sqrt{x+1}+2}{(x+1)^2 - \sqrt{x+1}}dx$$
Set $s=\sqrt{x+1}$ and $ds=\frac{dx}{2\sqrt{1+x}}$
$$\int\frac{(s+2)2s}{s^4-s}ds=\int\frac{2s^2+4s}{s^4-s}ds=2\int\frac{s+2}{s^3-1}ds\overset{\text{partial fractions}}{=}2\int\frac{-s-1}{s^2+s+1}ds+2\int\frac{ds}{s-1}$$
$$=-\int\frac{2s+1}{s^2+s+1}ds-\int\frac{ds}{s^2+s+1}+2\int\frac{ds}{s-1}$$
set $p=s^2+s+1$ and $dp=(2s+1)ds$
$$=-\int \frac{dp}{p}-\int\frac{ds}{s^2+s+1}+2\int\frac{ds}{s-1}$$
$$=-\ln|p|-\int\frac{ds}{\left(s+1/2\right)^2+3/4}+2\int\frac{ds}{s-1}$$
Set $w=s+1/2$ and $dw=ds$
$$=-\ln|p|-\frac 4 3\int\frac{dw}{\frac{4w^2}{3}+1}+2\int\frac{ds}{s-1}$$
$$=-\ln|p|-\frac{2\arctan(\frac{2w}{\sqrt 3})}{\sqrt 3}+2\ln|s-1|+\mathcal C$$
$$=\color{red}{2\ln|1-\sqrt{x+1}|-\ln|x+\sqrt{x+1}+2|-\frac{2\arctan\left(\frac{2\sqrt{x+1}+1}{\sqrt 3}\right)}{\sqrt 3}+\mathcal C}$$ |
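As a sanity check (my own addition, not part of the answer), one can differentiate the boxed antiderivative numerically and compare it with the integrand:

```python
import math

def integrand(x):
    s = math.sqrt(x + 1)
    return (s + 2) / ((x + 1)**2 - s)

def F(x):
    # antiderivative found above (constant dropped); note x + sqrt(x+1) + 2
    # equals s^2 + s + 1 with s = sqrt(x+1)
    s = math.sqrt(x + 1)
    r3 = math.sqrt(3)
    return (2 * math.log(abs(1 - s))
            - math.log(abs(x + s + 2))
            - 2 * math.atan((2 * s + 1) / r3) / r3)

# central-difference derivative F'(x) should match integrand(x)
x, h = 1.0, 1e-6
approx = (F(x + h) - F(x - h)) / (2 * h)
```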
PDE continuous dependence on data definition | See http://www.ann.jussieu.fr/~frey/cours/UdC/ma691/ma691_ch3.pdf , Page 7 (Definition 3.4 and text below) for a Definition. Basically, it means the latter of your proposals, although we need not have a Lipschitz-continuity. |
How many positive integers between 0 and 1,000,000 contain the digit 9? | You have six slots. In each slot you can place any digit between 0 and 9 (ten possibilities). This generates all $10^6$ possible numbers from 0 to $999,999$ (ignoring leading zeros as necessary). To generate a number that doesn't contain a nine, there are only nine possibilities for each slot (namely, $0$ through $8$), yielding $9^6$. So the total number of integers between zero and $999,999$ that do contain the digit nine is $10^6-9^6$. This is the same as the number of positive integers between 0 and a million that do so. |
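The same argument gives $10^k - 9^k$ for any number of digits $k$, which is easy to verify by brute force for small $k$; a quick Python check (my own, not in the original answer):

```python
def count_containing_9(upper):
    """Count integers in [0, upper) whose decimal form contains the digit 9.
    Brute force, for checking the 10^k - 9^k counting argument."""
    return sum('9' in str(n) for n in range(upper))
```

For instance, for two-digit ranges the formula predicts $10^2 - 9^2 = 19$ (the numbers $9, 19, \ldots, 89$ and $90$ through $99$).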
Summation formula for this? | This is an application of the formula $\sum_{i=0}^n x^i ={1-x^{n+1}\over 1-x}$. Here you have $x=1/2$. |
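A quick numerical check of the formula at $x = 1/2$ (plain Python, my own addition):

```python
import math

# check sum_{i=0}^{n} x^i == (1 - x^(n+1)) / (1 - x) for x = 1/2
x, n = 0.5, 20
lhs = sum(x**i for i in range(n + 1))
rhs = (1 - x**(n + 1)) / (1 - x)
```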
Let $f(z)=(z^3-1)^{1/2}$, find a branch of the logarithm that makes $f(z)$ holomorphic inside the unit disc and satisfies $f(0)=i$ | The ideal situation is to find a branch defined on $\mathbb{C}\backslash S$, where $S$ is some ray from the origin to infinity. Let's first decompose $f$ as a composition of simpler functions. $f = (c\circ b\circ a)(z)$ where:
$$
\begin{align}
a(z) &= z^3 \\
b(z) &= z-1 \\
c(z) &= z^\frac{1}{2},\,\text{whenever it makes sense taking the square root.}
\end{align}
$$
First, $a$ sends the unit disk to itself, $b$ translates the image of $a$ to the left by $1$. So $b\circ a$ maps the unit disk ($D_1(0)$) to $D_1(-1)$.
Remember that, giving a branch of the logarithm is equivalent to giving a branch of the argument, and that
$$
\log z = \log|z| + i\arg z.
$$
So,
$$
\begin{align}
z^\frac{1}{2} :&=\exp\left(\frac{1}{2}\log z\right)\\
&= \exp\left(\frac{1}{2}\log|z|\right)\exp\left(\frac{i}{2}\arg z\right) \\
&=|z|^\frac{1}{2} \exp\left(\frac{i}{2}\arg z\right)
\end{align}
$$
Considering $\arg z \in (0,2\pi)$, for $f(0)=(-1)^\frac{1}{2} = i$ we must have:
$$
\frac{\arg (-1)}{2} = \frac{\pi}{2} \iff \arg(-1) = \pi.\qquad (\bigstar)
$$
So one of the rays you're looking for is the non-negative real axis, i.e, the branch of the logarithm defined in $\mathbb{C}\backslash [0,\infty)$.
$(\bigstar)$ In fact, we could've chosen any branch of the argument as long as the ray didn't intersect with $D_1(-1)$. |
Why does $|e^{ix}| = 1$ when $x$ is a real number | Recall that if $z = x + iy$ is a complex number, with $x$ and $y$ real, then $|z| = \sqrt{x^2 + y^2}$.
Recall also that $e^{ix} = \cos x + i \sin x$.
Therefore:
$$ |e^{ix}| = |\cos x + i\sin x| = \sqrt{\cos^2x + \sin^2x} = \sqrt1 = 1$$ |
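A numerical spot-check using Python's standard cmath module (my own addition):

```python
import cmath
import math

# |e^{ix}| = |cos x + i sin x| should be 1 for every real x
moduli = [abs(cmath.exp(1j * x)) for x in (0.0, 1.0, math.pi, 12.34)]
```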
Why did Gauss think the reciprocity law so important in number theory? | It's the first non-trivial result in elementary number theory, and to find out whether a second-degree equation has a solution modulo some number, basically comes down to knowing quadratic reciprocity. The Disquisitiones starts with the basic definition of congruences, then solutions of first-degree congruence equations, and then second-degree, where one naturally bumps up against this problem.
Here is from the book itself:
Since almost everything that can be said about quadratic residues
depends on this theorem, the term fundamental theorem which we will
use from now on should be acceptable.
Later on he speaks of how Euler and Legendre came close to finding a proof, but despite their best efforts, could not. Clearly it was also a matter of pride for him.
Given how much of modern number theory revolves around generalizations of quadratic reciprocity, such as Artin reciprocity and the reciprocity conjecture of Langlands, it's easy to consider Gauss's intuition as justified. |
Propositional logic question for designing new proof system | I prefer to write the rules with the symbol : $\vdash$ for derivability always explicit.
Thus, the above rule is :
$$\frac{\Gamma, \varphi \vdash \psi }{\Gamma \vdash \varphi \rightarrow \psi} \quad (\rightarrow-i)$$
They are the rules of Natural Deduction in "sequent calculus-style", and you can find them, for example, in Jan von Plato, Elements of Logical Reasoning (2013), page 65:
$$\frac{\Gamma \vdash \varphi }{\Gamma \vdash \varphi \lor \psi} \quad (\lor-i_1)$$
$$\frac{\Gamma \vdash \psi }{\Gamma \vdash \varphi \lor \psi} \quad (\lor-i_2)$$
$$\frac{\Gamma \vdash \varphi \quad \Gamma \vdash \psi}{\Gamma \vdash \varphi \land \psi} \quad (\land-i)$$
$$\frac{\Gamma, \varphi \vdash \psi \quad \Gamma, \varphi \vdash \lnot \psi}{\Gamma \vdash \lnot \varphi} \quad (\lnot-i)$$
and the corresponding elimination rules :
$$\frac{\Gamma \vdash \varphi \rightarrow \psi }{\Gamma, \varphi \vdash \psi} \quad (\rightarrow-e)$$
$$\frac{\Gamma, \varphi \vdash \tau \quad \Gamma, \psi \vdash \tau \quad \Gamma \vdash \varphi \lor \psi}{\Gamma \vdash \tau} \quad (\lor-e)$$
$$\frac{\Gamma \vdash \varphi \land \psi }{\Gamma \vdash \varphi} \quad (\land-e_1)$$
$$\frac{\Gamma \vdash \varphi \land \psi }{\Gamma \vdash \psi} \quad (\land-e_2)$$
$$\frac{\Gamma \vdash \lnot \lnot \varphi }{\Gamma \vdash \varphi} \quad (\lnot-e)$$ |
discrete random variable conditional expectation | Hint: It looks like a familiar type of distribution; one with an interesting property.
Geometric. Memoryless. |
Complex analysis, showing a function is constant | Let $\delta $ be a positive real number and $\tilde{f}(z)=f(z+\delta e^{i\alpha })$. It is easy to see that
$\tilde{f}(z)\in H(\Pi)$, $|\tilde{f}(z)|<1$ and
$$
\lim_{r\rightarrow\infty}\frac{\log|\tilde{f}(re^{i\alpha})|}{r}=-\infty.
$$
Of course $\tilde{f}(z) \in C(\bar\Pi)$.
Therefore Eclipse Sun's argument may apply to $\tilde{g}_n(z)=\tilde{f}(z)e^{nz}$ and we have $\tilde{f}=0$, which implies $f(z)=0$ for $\operatorname{Re}\, z>\delta $. Since we can take $\delta>0$ arbitrarily (or by the coincidence theorem) we see $f=0$ for all $z\in \Pi$. |
Functional Equations Methodology | There's a nice survey by Kuczma (1964) at http://pefmath2.etf.rs/files/60/130.pdf, and the book by Aczel and Dhombres gives an excellent and accessible treatment. There's also the interesting website http://eqworld.ipmnet.ru, though it seems to be down at present (!?). |
Exact sequence construction | You always may write an $R$-module $M$ as a quotient of a free $R$-module $L$, for instance, if $S$ is any generating set for $M$, the free $R$-module $L=R^{(S)}$, with basis $(e_s)_{s\in S}$, where each $e_s$ is defined as
$$e_s[i]=\begin{cases}1&\text{if }i=s,\\ 0&\text{otherwise}.\end{cases}$$
The surjective homomorphism from $L$ to $M$ is defined as
\begin{align}
f:L&\longrightarrow M \\
\sum_{s\in S}\lambda_s e_s&\longmapsto \sum_{s\in S}\lambda_s\, s&&\text{(the sums are actually finite, by definition of $R^{(S)})$.}
\end{align}
If you denote by $K$ the kernel of this linear map, you obtain the short exact sequence
$$0\longrightarrow K\longrightarrow L\longrightarrow M\longrightarrow 0.$$
However, in general, $K$ (a.k.a. the module of linear relations between the generators in $S$) has no reason to be a free module. |
Algorithm for getting consecutive line segment edge points from midpoints | We have cell centers $c_i$ (along one coordinate axis), with $c_i \lt c_{i+1}$. For $N$ cells, there are $N+1$ cell edges, so one cell edge (or the width of a specific cell) needs to be known in order for us to find the coordinates for the rest of the cell edges.
Let's label the left edge of cell $i$ as $b_{i-1}$, and right edge $b_{i}$, $1 \le i \le N$.
Because $b_{i-1}$ is the left edge of cell $i$, and $b_{i}$ is the right edge, and $c_{i}$ is at the center, we know that the half-width of cell $i$ is $c_i - b_{i-1} = b_i - c_i$ (the left side being the lower half, the right side the upper half). The full width is twice the half-width, $2(c_i - b_{i-1}) = 2(b_i - c_i) = b_{i} - b_{i-1}$, as required.
Because the left edge is half-width less than its center, the left edge of cell $i$ is
$$b_{i-1} = c_{i} - (b_{i} - c_{i}) = 2 c_{i} - b_{i}$$
Similarly, because the right edge of a cell is half-width more than its center, the right edge of cell $i$ is
$$b_{i} = c_{i} + (c_i - b_{i-1}) = 2 c_{i} - b_{i-1}$$
If we know that one cell edge should be at $x$, we can find the index $k$ for which $b_k = x$:
$$\begin{cases}
k = 0, & x \lt c_1 \\
k = N, & c_N \lt x \\
1 \le k \le N-1, & c_{k} \lt x \lt c_{k+1} \end{cases}$$
Fixing the cell $k$ width to $w$ defines the edges of that cell:
$$\begin{align}
b_{k-1} &= c_k - \frac{w}{2} \\
b_{k} &= c_k + \frac{w}{2}\end{align}$$
although only the latter is actually used (similar to the one known edge case). When the one edge is known, we can directly define the rest of the edge coordinates recursively:
$$\begin{array}{rcll}
b_{i-1} &=& 2 c_{i} - b_{i}, & 1 \le i \le k \\
b_{i} &=& 2 c_{i} - b_{i-1}, & k+1 \le i \le N
\end{array}$$
Note that in a computer program, you only need two loops, one from $k$ to $1$ (setting $b_{k-1}$ to $b_0$), and another from $k+1$ to $N$ (setting $b_{k+1}$ to $b_N$).
In pseudocode:
function cell_bounds_edge(N, c, x):
# - N is the number of cells
# - c[1..N] is the center of each cell
# - x is the coordinate for one desired cell boundary
# Return value is an array b[0..N],
# cell i edges being b[i-1] and b[i], b[i] > b[i-1]
# if successful, FAILED otherwise.
# Determine the index k for which b[k] = x.
if (x < c[1]) then:
k = 0
else if (x > c[N]) then:
k = N
else:
k = -1
for i = 1 to N-1:
if (c[i] < x) and (c[i+1] > x) then:
k = i
break
end if
end for
if (k == -1) then:
return FAILED
end if
end if
return do_cell_bounds(N, c, k, x)
function cell_bounds_width(N, c, k, w):
# - N is the number of cells
# - c[1..N] is the center of each cell
# - k is a cell number, 1 to N, inclusive
# - w is the width of cell k
# Return value is an array b[0..N],
# cell i edges being b[i-1] and b[i], b[i] > b[i-1]
# if successful, FAILED otherwise.
if (k < 1) or (k > N) then:
return FAILED
end if
return do_cell_bounds(N, c, k, c[k] + w/2)
function do_cell_bounds(N, c, k, x):
# N, k not valid?
if (N < 1) or (k < 0) or (k > N) then:
return FAILED
end if
# x not valid?
if (k > 0) and (x <= c[k]) then:
return FAILED
end if
if (k < N) and (x >= c[k+1]) then:
return FAILED
end if
# Verify all cell centers are in increasing order
for i = 2 to N:
if (c[i-1] >= c[i]) then:
return FAILED
end if
end for
# Assign the known edge
b[k] = x
# Fill in edges below k
for i = k down to 1:
b[i-1] = 2*c[i] - b[i]
if (b[i-1] >= b[i]) then:
return FAILED
end if
end for
# Fill in edges above k
for i = k+1 to N:
b[i] = 2*c[i] - b[i-1]
if (b[i-1] >= b[i]) then:
return FAILED
end if
end for
return b
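For concreteness, here is a compact executable Python rendering of the same recurrences (my own function name; Python lists are 0-based, so c[i-1] plays the role of the text's $c_i$, while b keeps the indices $0$ through $N$):

```python
def cell_bounds(c, k, x):
    """Given strictly increasing cell centers c[0..N-1], the index k of one
    known edge and its coordinate x (= b_k), return all N+1 edges b[0..N]
    using b_{i-1} = 2 c_i - b_i below k and b_i = 2 c_i - b_{i-1} above k."""
    n = len(c)
    b = [0.0] * (n + 1)
    b[k] = x
    for i in range(k, 0, -1):        # fill edges below k
        b[i - 1] = 2 * c[i - 1] - b[i]
    for i in range(k + 1, n + 1):    # fill edges above k
        b[i] = 2 * c[i - 1] - b[i - 1]
    if any(lo >= hi for lo, hi in zip(b, b[1:])):
        raise ValueError("negative-width cell")
    return b
```

For example, with centers 1, 2, 3, 4 and the leftmost edge fixed at 0.5, the edges come out as 0.5, 1.5, 2.5, 3.5, 4.5.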
I have not verified whether negative-width cells actually occur, but there is nothing in the algorithm that would reject them. (Mathematically there is nothing wrong, as the centers are at the midpoint of the two edges. It is only the physical/logical interpretation of the edges that makes them impossible.) Fortunately, such a case is easy to detect, and that is why I have the above pseudocode detect it.
The above (edited) pseudocode caters for the two possible use cases: cell_bounds_edge() takes the coordinate for one edge, and cell_bounds_width() takes a cell number and the width for that cell. Both functions determine the index $k$ and the (upper) edge for that cell, $b_k$, and call the helper function do_cell_bounds(). The helper function implements sanity and validity checks, and fills in the cell edge array, based on the centers of the cells and that one known (upper) cell edge.
I would not try to "guess" or automate the choice of the one edge to be fixed! (That includes choosing a cell and its width.)
If I absolutely had to "guess" a good $k$ and $w$, I'd first complain, loudly and with invective, and probably rant quite a while before I'd actually do it. It would be a lot of work. It would have to be a heuristic based on the distribution of cell widths for each possible value of $k$, with varying $0 \lt w \lt c_{k+1}-c_{k-1}$ for each. That is a lot of calculation, especially for larger $N$. Even then, a user could choose better. There may be -- and I bet usually are -- other rules that cannot be sanely conveyed to the program; for example, some kind of symmetries desired, or maybe piecewise monotonically increasing/decreasing cell sizes, and so on.
The best distribution of cell widths may require a few tries from the user, so it helps to have a simple utility that shows the result with little effort, so that the user can make a few choices, according to their desires/unconveyed restrictions, and pick the one that works best for them.
The OP saw the results of their implicit choice: they defined the width of the first cell, assuming it would not affect the results. That assumption was wrong. The initial choice, the choice of where one of the cell edges is, is an initial value, and affects all the results, because all the rest of the cell edges depend on it (nearest ones directly, further ones indirectly). Oscillation between two values when the centers are at a constant distance is a typical result.
(I'm not certain if this problem can be classified as an ill-posed problem in the Hadamard sense, or an ill-conditioned one, but definitely the selection of the one boundary affects all the results, in a way that is probably unintuitive.) |
Suppose that $X$ has a uniform distribution on $\left [ 0, \frac{1}{2} \right ]$. Then find pdf of $X^2$ | No. $X^{2} \leq y$ means $X \leq \sqrt y$, not $X \leq \sqrt y /2$.
$P(X^{2} \leq y)=P(X \leq \sqrt y)=2\sqrt y$ and the density is $\frac 1 {\sqrt y}$ on $(0,\frac 1 4)$. |
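A quick numerical check (my own, standard library only): differentiating the CDF $2\sqrt y$ should recover the claimed density $1/\sqrt y$.

```python
import math

def cdf(y):
    # P(X^2 <= y) = P(X <= sqrt(y)) = 2*sqrt(y) for 0 < y < 1/4
    return 2 * math.sqrt(y)

# central-difference derivative of the CDF at a test point inside (0, 1/4)
y, h = 0.1, 1e-6
density = (cdf(y + h) - cdf(y - h)) / (2 * h)
```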
Prove that for a real number $a > 1$, the set $\{ a^n \}$ is unbounded from above. | From $a^n \geqslant 2$ you obtain $a^{n\cdot m} \geqslant 2^m$, which is unbounded from above since $2^m > m$.
Another way to see the unboundedness is Bernoulli's inequality: choosing $n$ with $a \geqslant 1 + \frac1n$ (possible since $a > 1$), it shows
$$a^m \geqslant \left(1 + \frac1n\right)^m \geqslant 1 + \frac{m}{n}.$$ |
Solving a system of ordinary differential equations | The matrix of coefficients is $$A=\begin {bmatrix}-1/2&1\\1/4&-1/2\end {bmatrix}$$ with eigenvalues of $-1$ and $0$
The eigenvectors are $\begin {pmatrix}2\\-1\end {pmatrix}$ and $\begin {pmatrix}2\\1\end {pmatrix}$
Thus your solutions are $$\begin {pmatrix}x\\y\end {pmatrix}= c_1e^{-t} \begin {pmatrix}2\\-1\end {pmatrix}
+c_2\begin {pmatrix}2\\1\end {pmatrix}$$
You find the constants from initial values. |
Does sum of real and imaginary part being bounded imply constant | Yes, $f$ is constant.
With Liouville's theorem, you can prove that if $f$ is not constant, then $f(\mathbb{C})$ is dense in $\mathbb{C}$. Suppose to the contrary that there exists $c \in \mathbb{C}$ such that no sequence in $f(\mathbb{C})$ converges to $c$. Then there exists $\epsilon > 0$ such that $B(c, \epsilon) \cap f(\mathbb{C}) = \emptyset$. But then the entire function $g(z) = \frac{1}{f(z) - c}$ is bounded, which is not possible by Liouville's theorem. Therefore, for all $n \in \mathbb{N}$, there is a sequence in $f(\mathbb{C})$ converging to $n + in$. Thus, the sum of the real and the imaginary parts cannot be bounded. In fact, we can generalize this and state that for any continuous unbounded function $g : \mathbb{C} \to \mathbb{C}$ and any non-constant entire function $f$, $g \circ f$ is not bounded. |
Applying Dominated Convergence and/or Monotone Convergence for $|Z|^{1/n}$ | a) $X=\mathbf 1_{Z\ne0}$ (consider separately the cases $Z(\omega)=0$ and $Z(\omega)\ne0$ and proceed).
b) Since $X_n\leqslant1+|Z|$ for every $n\geqslant1$ (consider separately the cases $|Z(\omega)|\lt1$ and $|Z(\omega)|\geqslant1$ and proceed), Lebesgue dominated convergence theorem implies that $\mathrm E(X_n)\to\mathrm E(X)=\mathrm P(Z\ne0)=1$.
Note: One can also apply Lebesgue monotone convergence theorem twice, separately to the random variables $U_n=|Z|^{1/n}\,\mathbf 1_{|Z|\leqslant1}$ and $V_n=|Z|^{1/n}\,\mathbf 1_{|Z|\gt1}$. To see this, note that $X_n=U_n+V_n$ for every $n\geqslant1$, the sequence $(U_n)$ is nondecreasing, the sequence $(V_n)$ is nonincreasing, $U_n\to U=\mathbf 1_{0\lt|Z|\leqslant1}$ and $V_n\to V=\mathbf 1_{|Z|\gt1}$. Hence $\mathrm E(X_n)\to\mathrm E(U)+\mathrm E(V)=1$. |
Need help with permutations and combinations problems | Inviting 3 out of 6 distinct friends: there are ${6 \choose 3} = 20$ possible ways of doing this. Then we want to find all the possible permutations of size $5$ of this set of size $20$: this is called a k-permutation, where $k=5$. Thus, there are $$\frac{20!}{(20-5)!}$$ ways of doing this. This is also equal to $20\cdot 19 \cdot \ldots \cdot 16$, so your answer was actually correct. |
Intuitive way of knowing why pivot positions matter? | Pivot positions (or pivot columns) are important for various reasons. One of the most fundamental reasons to why they are important is because they tell you whether your system of linear equations has no solution, exactly one solution, or infinitely many solutions.
For example, consider the system:
$$\begin{align}
x_{1} - x_{2} + 2x_{3} &= 0 \\
2x_{1} - x_{2} + x_{3} &= 0 \\
x_{1} - x_{2} + 2x_{3} &= 0
\end{align}$$
Suppose we wanted to find the solutions for $x_{1} , x_{2} , x_{3}$ such that Ax = 0.
Throwing this system into an augmented matrix [A|0] and row reducing produces:
\begin{bmatrix}
1 & 0 & -1 & 0 \\[0.3em]
0 & 1 & -3 & 0 \\[0.3em]
0 & 0 & 0 & 0 \\[0.3em]
\end{bmatrix}
Notice that we only have two pivot columns. However, we have a system of 3 equations and 3 unknowns. What does this tell us? It tells us that there are infinitely many solutions to this system of linear equations. However, if we had a pivot in each column, i.e. if the rank $r$ equaled the number of unknowns $n$, then we would know for a fact that the only solution to our system is the trivial one. Although this illustration may seem trivial, I think it will give you a better idea of why pivot columns are important and what they tell us about the solutions to a system of linear equations. |
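As a cross-check, here is a small self-contained row reducer over the rationals (my own helper, not from the original answer); it reports the pivot columns along with the reduced matrix:

```python
from fractions import Fraction

def rref(rows):
    """Row-reduce a matrix (list of lists) over the rationals.
    Returns (reduced matrix, list of pivot column indices)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        # find a row with a nonzero entry in column c, at or below row r
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # scale pivot to 1
        for i in range(len(m)):                     # clear the column
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return m, pivots

# the coefficient matrix of the example system
R, P = rref([[1, -1, 2], [2, -1, 1], [1, -1, 2]])
```

The two pivot columns (rank 2 with 3 unknowns) confirm the infinitely-many-solutions conclusion.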
Plane of intersection of two spheres | Let $(x_1,y_1,z_1)$ be a point that belong to both spheres, then it satisfies both equations and any linear combination of them, in particular the linear combination
\begin{align}
x_1^2+y_1^2+z_1^2+2x_1+2y_1+2z_1+2-\left(\color{blue}{x_1^2+y_1^2+z_1^2+x_1+y_1+z_1-\frac{1}{4}}\right)&=0\\
x_1+y_1+z_1+\frac{9}{4}&=0
\end{align}
So the point belongs to the plane $4x+4y+4z+9=0$. |
Simplifying a Multi-Variate Fraction | it's just like adding fractions; find a common denominator and add. here $8b$ is a common denominator. we get
$$
-\frac{4}{4}\frac{3b-5y}{2b}+\frac{3b+6y}{8b}+\frac{8b}{8b}\frac{2}{1}
$$
$$
=\frac{-4(3b-5y)+(3b+6y)+8b(2)}{8b}
$$
$$
=\frac{-12b+20y+3b+6y+16b}{8b}
$$
$$
=\frac{7b+26y}{8b}
$$ |
Prove whether or not the following is an inner product? | Symmetry: Let $f, g \in \mathcal{P}2$ then
$$\langle f,g \rangle = f(1)g(1) + f(2)g(2) = g(1)f(1) + g(2)f(2) = \langle g,f \rangle.$$
Linearity: Let $f,g \in \mathcal{P}_2$ and $\alpha \in \mathbb{R}$; then
$$\langle \alpha f,g \rangle = \alpha f(1) g(1) + \alpha f(2) g(2) = \alpha (f(1)g(1) + f(2)g(2)) = \alpha \langle f,g \rangle.$$
Positive semi-definite: Let $f \in \mathcal{P}_2$; then
$$\langle f,f \rangle = f(1)^2 + f(2)^2 \geq 0.$$ It is not positive-definite because the polynomial $f=(X-1)(X-2)$ is nonzero, has degree $2$, but $\langle f,f \rangle =0$. Positive definite means that $\langle f,f \rangle > 0$ for all $0 \neq f \in \mathcal{P}_2$ (and $\langle 0,0 \rangle=0$). So what you're describing is not an inner product.
You can also show that for any $f,g,h \in \mathcal{P}_2$
$$\langle f+g,h \rangle = \langle f,h \rangle + \langle g,h \rangle$$
which shows that this is a semi-inner-product. |
Analogue of Cartan theorem A in algebraic geometry | Over an affine scheme any quasi-coherent sheaf is generated by its global sections, so I guess Theorem A is always true. |
How do I find a tangent plane without a specified point? | Let $f(x,y,z)=3x^2-4y^2-z$. Then your surface is $\bigl\{(x,y,z)\in\mathbb{R}^3\,|\,f(x,y,z)=0\bigr\}$. You are after the points $(x,y,z)$ in that surface such that $\nabla f(x,y,z)$ is a multiple of $(3,2,2)$. So, solve the system$$\left\{\begin{array}{l}6x=3\lambda\\-8y=2\lambda\\-1=2\lambda\\3x^2-4y^2-z=0.\end{array}\right.$$ |
A specific improper integral | $$\int_{-\infty}^{0} \frac{e^{3u}-e^{u}}{u} du=\int_{-\infty}^{0} \int_1^3 e^{xu}\ dx\ du=\int_1^3 \int_{-\infty}^{0} e^{xu}\ du\ dx=\int_1^3 \dfrac1x\ dx=\ln3$$ |
Showing that the matrix $A+I$ is invertible | The equation $A^3=kA$ tells you that $f(A)=0$ for the polynomial $f(x)=x^3-kx=x(x^2-k)$, so the minimal polynomial (a divisor of $f$) can only have the zeroes $0$ and $\pm \sqrt{k}$ (provided $k\ge 0$). The characteristic polynomial has the same zero set as the minimal polynomial, so $-1$ is not an eigenvalue of $A$. Hence $A+I$ is invertible. |
Quadratics Word Problem | This is probably what they mean:
A parabola is characterized by 3 coefficients, so you need 3 pieces of information to determine a parabola. Two of the pieces of information are given directly, as $f(0) = 1$ and $f(12) = 0$.
For the third, it seems they are attempting to say that a unit parabola was shifted so that the leading coefficient was scaled from $A=1$ to $A=-1/6$. So you have:
$$f(t) = At^2 + Bt + C \tag{1}$$
$$\begin{cases} A=-\frac 16 \\ f(0) = 1 \\ f(12) = 0\end{cases}$$
Can you take it from here?
You know that $A = -1/6$, so (1) becomes:
$$f(t) = -\frac16t^2 + Bt + C \tag{2}$$
Now you know that $f(0) = 1$, so (2) becomes
$$f(0) = -\frac16\cdot 0^2 + B\cdot 0 + C$$
$$1 = 0 + 0 + C$$
$$1 = C$$
So
$$f(t) = -\frac16t^2 + Bt + 1\tag{3}$$
Now you just have to find out the value of $B$, so use $f(12) = 0$ in (3):
$$f(12) = -\frac16\cdot 12^2 + B\cdot12 + 1$$
$$0 = -\frac{144}{6} + 12 B + 1$$
$$23 = 12 B$$
$$\frac{23}{12} = B$$
So
$$f(t) = -\frac16 t^2 + \frac{23}{12} t + 1$$
Now you know the equation of the height of the football. Then the question becomes, does it ever reach a height of 8m? So set:
$$8 = -\frac16 t^2 + \frac{23}{12} t + 1$$
and solve for $t$ using the quadratic equation. You want to check if there is a positive real number that solves the equation. |
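Carrying that last step out (a short check of my own, not in the original answer): setting $f(t)=8$ and multiplying through by $-12$,
$$-\frac16 t^2 + \frac{23}{12} t + 1 = 8 \iff 2t^2 - 23t + 84 = 0,$$
and the discriminant is $23^2 - 4\cdot 2\cdot 84 = 529 - 672 = -143 < 0$, so there is no real solution: the football never reaches a height of $8$ m.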
Why can we resolve indeterminate forms? | So you're looking at something of the form
$$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g(x)}{h(x)} $$
and if this limit exists, say the limit is $L$, then it doesn't matter how we rewrite $f(x)$. However, it's possible to write $f(x)$ in different ways; e.g. as the quotient of different functions:
$$f(x) = \frac{g_1(x)}{h_1(x)} = \frac{g_2(x)}{h_2(x)}$$
The limit of $f$ either exists or not, but it's possible that the individual limits in the numerator and denominator exist, or not. More specifically, it's possible that
$$\lim_{x \to +\infty} g_1(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_1(x)$$ do not exist, while $$\lim_{x \to +\infty} g_2(x) \quad\mbox{and}\quad \lim_{x \to +\infty} h_2(x)$$do exist. What you did by dividing numerator and denominator by $x$, is writing $f(x)$ as another quotient of functions but in such a way that the individual limits in the numerator and denominator now do exist, which allows the use of the rule in blue ("limit of a quotient, is the quotient of the limits; if these two limits exist"):
$$\lim_{x \to +\infty} f(x) = \lim_{x \to +\infty}\frac{g_1(x)}{h_1(x)} = \color{blue}{ \lim_{x \to +\infty}\frac{g_2(x)}{h_2(x)} = \frac{\displaystyle \lim_{x \to +\infty} g_2(x)}{\displaystyle \lim_{x \to +\infty} h_2(x)}} = \cdots$$and in this way, also find $\lim_{x \to +\infty} f(x)$.
When you try to apply that rule but the individual limits do not exist, you "go back" and try something else, such as rewriting/simplifying $f(x)$; this is precisely what happens:
$$\begin{align}
\lim_{x \rightarrow +\infty} f(x) & = \lim_{x \rightarrow +\infty} \frac{x+1}{x-1} \color{red}{\ne} \frac{\displaystyle \lim_{x \rightarrow +\infty} (x+1)}{\displaystyle \lim_{x \rightarrow +\infty} (x-1)}= \frac{+\infty}{+\infty} = \; ? \\[7pt]
& = \lim_{x \rightarrow +\infty} \frac{1+\tfrac{1}{x}}{1-\tfrac{1}{x}} \color{green}{=} \frac{\displaystyle \lim_{x \rightarrow +\infty} (1+\tfrac{1}{x})}{\displaystyle \lim_{x \rightarrow +\infty} (1-\tfrac{1}{x})} = \frac{1+0}{1-0} = 1 \\
\end{align}$$ |
multivariable maxima and minima | There are two answers because it looks like you need the maximum and the minimum. (6, -8) does not work because is violates your condition $x^2+y^2<=25$. I assume you got this by calculating the derivative and setting it equal to zero. If you investigate the function in Wolfram Mathematica you will see our points then must lie on the circle. Clearly the absolute minimum which you calculated to be in the center of the dark red circle on the contour plots does not meet your constraint. In this case the green dot is your min and the red dot is your max.
To get to the solution mathematically we need to write the equation as:
$$L = x^2+y^2-12x+16y+\lambda(x^2+y^2-25)$$
Next we formulate the conditions:
$$x^2+y^2\le25$$
$$\frac{\partial L}{\partial x} = 2x-12+2\lambda x=0$$
$$\frac{\partial L}{\partial y} =2y+16+2\lambda y=0$$
$$\lambda (x^2+y^2-25)=0$$
$$\lambda \ge 0$$
Solving this system will yield the points you want at $(-3, 4)$ and $(3, -4)$. For example, say we take $\lambda = 0$; then we will get the point $(6, -8)$, but this violates the first condition, so $\lambda > 0$. With $\lambda$ greater than zero we can reduce the system to:
$$x^2+y^2=25$$ (on the circle boundary)
$$2x-12+2 \lambda x = 0$$
$$2y+16+2 \lambda y = 0$$
The solution of this will yield your answers. |
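Carrying the boundary case out explicitly (a plain-Python sketch with my own names; the algebra follows from the stationarity equations above):

```python
# Stationarity: 2x - 12 + 2*l*x = 0  =>  x =  6/(1+l)
#               2y + 16 + 2*l*y = 0  =>  y = -8/(1+l)
# Boundary:     x^2 + y^2 = 25       =>  (36+64)/(1+l)^2 = 25  =>  (1+l)^2 = 4
candidates = []
for one_plus_l in (2.0, -2.0):
    x, y = 6 / one_plus_l, -8 / one_plus_l
    f = x * x + y * y - 12 * x + 16 * y   # objective value at the candidate
    candidates.append(((x, y), f))
```

This yields $f(3,-4) = -75$ (the minimum) and $f(-3,4) = 125$ (the maximum).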
Is UMVUE unique? Is the best unbiased estimator unique? | Suppose $\theta$ is the unknown quantity of interest. A necessary and sufficient condition for an unbiased estimator (assuming one exists) of some parameteric function $g(\theta)$ to be UMVUE is that it must be uncorrelated with every unbiased estimator of zero (assuming of course the unbiased estimator has finite second moment). We can use this result to prove uniqueness of UMVUE whenever it exists.
If possible, suppose $T_1$ and $T_2$ are both UMVUEs of $g(\theta)$.
Then $T_1-T_2$ is an unbiased estimator of zero, so that by the result above we have
$$\operatorname{Cov}_{\theta}(T_1,T_1-T_2)=0\quad,\,\forall\,\theta$$
Or, $$\operatorname{Var}_{\theta}(T_1)=\operatorname{Cov}_{\theta}(T_1,T_2)\quad,\,\forall\,\theta$$
Therefore, $$\operatorname{Corr}_{\theta}(T_1,T_2)=\frac{\operatorname{Cov}_{\theta}(T_1,T_2)}{\sqrt{\operatorname{Var}_{\theta}(T_1)}\sqrt{\operatorname{Var}_{\theta}(T_2)}}=\sqrt\frac{\operatorname{Var}_{\theta}(T_1)}{\operatorname{Var}_{\theta}(T_2)}\quad,\,\forall\,\theta$$
Since $T_1$ and $T_2$ have the same variance by assumption, correlation between $T_1$ and $T_2$ is exactly $1$. In other words, $T_1$ and $T_2$ are linearly related, i.e. for some $a,b(\ne 0)$, $$T_1=a+bT_2 \quad,\text{ a.e. }$$
Taking variance on both sides of the above equation gives $b^2=1$, or $b=1$ ($b=-1$ is invalid because that leads to $T_1=2g(\theta)-T_2$ a.e. on taking expectation, which cannot be true as $T_1,T_2$ do not depend on $\theta$). So $T_1=a+T_2$ a.e. and that leads to $a=0$ on taking expectation. Thus, $$T_1=T_2\quad,\text{ a.e. }$$ |
How do we determine if $f '(0)$ exists | I am afraid that the theorem that you quoted will be of no use. Nevertheless what you can do is as follows.
First of all, it should be $\lim_{x\to0^{+}}f'(x) = 3$ instead of $\lim_{x\to0}f'(x) = 3$, as $f'(x)$ exists only for $x>0$, and in that case $f'(0)$ may not exist; for example, consider the function $f(x) = |3x|$.
And if you are assuming that $\lim_{x\to0}f'(x) = 3$, then you are assuming implicitly that $f'(x)$ exists for negative values of $x$ sufficiently close to $0$. In that case $f'(0)$ exists and in fact $f'(0) = 3$. Because showing that $f'(0) = 3$ is equivalent to showing that given $\epsilon >0$ there exists $\delta >0$ such that
\begin{equation}
\biggl{|}\frac{f(x)-f(0)}{x} - 3\biggr{|} < \epsilon
\end{equation}
whenever $|x|<\delta$. Since $\lim_{x\to0}f'(x) = 3$, it follows that given $\epsilon>0$, there exists a $\delta_1>0$ such that $|f'(c)-3|<\epsilon$ whenever $|c|<\delta_1$, $c \neq 0$ obviously. Now choose $\delta = \delta_1$. Now for any $x$ with $|x|<\delta = \delta_1$, we can apply the Mean-Value Theorem to $f$ on the interval $I_x$ whose end points are $0$ and $x$. Applying the Mean-Value Theorem will give us that there exists $b\in I_x$ such that
\begin{equation}
\frac{f(x)-f(0)}{x} = f'(b)
\end{equation}
In particular,
\begin{equation}
\biggl{|}\frac{f(x)-f(0)}{x} - 3\biggr{|} = |f'(b) - 3| < \epsilon
\end{equation}
whence it follows that $f'(0)$ exists and in fact $f'(0) = 3$.
Note: If $\lim_{x\to0^{+}}f'(x) = 3$ is the condition, then by the same argument what you can actually show is that $f'(0^{+})$ exists and $f'(0^{+})=3$, where $f'(0^{+})$ denotes the right hand derivative of $f$ at $0$. Just apply the Mean-Value Theorem to the interval $[0,x]$ for $x$ sufficiently close to $0$. |
Legendre Symbol $\left(\frac{83}{127}\right)$ | You start having problems once you get to $\left(\frac{6}{11}\right)$ and want to turn it into $\left(\frac{11}{6}\right)$.
First of all, if you're sticking to the Legendre symbol, then $\left(\frac{11}{6}\right)$ is not even defined, because $6$ is not prime. You should continue by factoring $\left(\frac{6}{11}\right) = \left(\frac{2}{11}\right) \left(\frac{3}{11}\right)$ and then computing each of the factors. (The first is $-1$, the second is $1$.)
If you decide you want to work with the Jacobi symbol, many steps like this are alright; for example, you could turn $\left(\frac{9}{11}\right)$ into $(-1)^{\frac{9-1}{2} \cdot \frac{11-1}{2}}\left(\frac{11}{9}\right) = \left(\frac{11}{9}\right) = \left(\frac{2}{9}\right)$ and that would be perfectly okay. However, even for the Jacobi symbol, $\left(\frac{11}{6}\right)$ is still not defined, because the lower argument must always remain odd. You must still proceed by factoring $\left(\frac{6}{11}\right) = \left(\frac{2}{11}\right) \left(\frac{3}{11}\right)$.
(In general, working with the Jacobi symbol, you still have to factor out any powers of $2$, because $2$ is a special case of quadratic reciprocity; however, you don't have to factor the upper argument in other cases, which makes it more computationally efficient for very large numbers.) |
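Steps like these are easy to double-check numerically with Euler's criterion, $\left(\frac{a}{p}\right)\equiv a^{(p-1)/2}\pmod p$ for an odd prime $p$; here is a small sketch (the function name `legendre` is mine):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)  # three-argument pow does modular exponentiation
    return -1 if r == p - 1 else r  # r is 0, 1, or p-1

# The factorization step from the text: (6/11) = (2/11)(3/11)
print(legendre(2, 11), legendre(3, 11), legendre(6, 11))  # -1 1 -1
print(legendre(83, 127))
```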
Why $\sum_{k\leq n}\sum_{d\mid k}=\sum_{d\leq n}\sum_{k\leq \lfloor n/d\rfloor }$? | $$\sum_{k\le n}\sum_{d\mid k}a_{k,d}$$
is the sum of $a_{k,d}$ where $k$ and $d$ vary through
all pairs of numbers between $1$ and $n$ inclusive with $k$ a multiple
of $d$. For a $d$ between $1$ and $n$ the $a_{k,d}$ that occur
in this sum are $a_{d,d},a_{2d,d},\ldots,a_{rd,d}$ where $r=\lfloor n/d\rfloor$. Therefore
$$\sum_{k\le n}\sum_{d\mid k}a_{k,d}=\sum_{d\le n}\sum_{j\le\lfloor n/d\rfloor}
a_{jd,d}.$$ |
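A small numerical check of this reindexing, with an arbitrary summand $a_{k,d}$:

```python
def a(k, d):
    return 10 * k + d  # arbitrary summand a_{k,d}

n = 30
# left side: k runs to n, d runs over divisors of k
lhs = sum(a(k, d) for k in range(1, n + 1)
          for d in range(1, k + 1) if k % d == 0)
# right side: d runs to n, the multiples of d are jd with j <= floor(n/d)
rhs = sum(a(j * d, d) for d in range(1, n + 1)
          for j in range(1, n // d + 1))
print(lhs == rhs)  # True
```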
Show that the dual map is well-defined | Before showing that $f^t$ is linear, you have to show that, for $g\in W^*$, $f^t(g)\colon V\to F$ ($F$ is the base field) is linear. Otherwise $f^t$ wouldn't be a map $W^*\to V^*$.
The map $f^t(g)$ is defined by $v\mapsto g(f(v))$. Since it is the composition of linear maps, it is indeed linear. |
Convergence of a sequence of inverse operators | There are some details missing regarding type of convergence. Consider norm-convergence: If you assume
$T_n$ invertible and $\|T^{-1}_n\|\leq M$ then for $\|T-T_n\|<1/M$,
$T$ is also invertible and you get
a Neumann series by expanding
$$ T^{-1}= (T_n + (T-T_n))^{-1} = (I + T_n^{-1} (T-T_n))^{-1} T_n^{-1}$$
Sometimes you may use the identity
$$ T_n^{-1} - T^{-1} = T_n^{-1} (T - T_n) T^{-1} $$
to get better estimates. If you don't have any bounds on $T_n^{-1}$ nor $T^{-1}$ then it becomes more difficult... |
Can I use control theory when the output is stochastic and has delayed reactions to the controller? | Model
Let $A$ be the number of clicks on button A, and $B$ the number of clicks on button B. Define $e:=A-B$; the goal is to drive $e$ to zero.
At each step we have
$$
e(t+1) = \begin{cases}e(t)+1 \text{ with the probability }p(A), \\e(t)-1 \text{ with the probability }1-p(A). \\\end{cases}
$$
I assume that the next control action is taken after $N$ clicks are collected, and thus I consider expectations. Then we obtain $e(t+1) = e(t) + N(2p(A)-1) = e(t)+Ng(u(t))$,
where $g(u) \in (-1,1)$ is given by
$$
g(u(t)) = 2\frac{1}{1+e^{-u(t)-v}}-1,
$$
$u$ is the control signal, and $v$ is a constant.
Control
Note that $g(u)$ is monotonic. Without any deep analysis, I would suggest trying a simple PI controller
$$
\begin{aligned}
z(t) &= z(t-1)+\alpha e(t),\\
u(t) &= -z(t) - \beta e(t).
\end{aligned}
$$
Here the gains $\alpha$ and $\beta$ are to be tuned. |
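A minimal deterministic sketch of this loop, simulating the expected-value dynamics $e(t+1)=e(t)+N\,g(u(t))$; the values of $N$, $v$ and the gains below are made up for illustration, not tuned recommendations:

```python
import math

N, v = 100, 0.3             # clicks per control period and offset (assumed values)
alpha, beta = 0.001, 0.005  # PI gains (assumed, untuned)

def g(u):
    # g(u) = 2/(1 + exp(-u - v)) - 1, monotonic with values in (-1, 1)
    return 2.0 / (1.0 + math.exp(-u - v)) - 1.0

e, z = 10.0, 0.0  # initial imbalance and integrator state
for t in range(500):
    z = z + alpha * e       # z(t) = z(t-1) + alpha * e(t)
    u = -z - beta * e       # u(t) = -z(t) - beta * e(t)
    e = e + N * g(u)        # expected-value update of the imbalance

print(abs(e))  # decays to (numerically) zero; the integral term cancels the offset v
```

With only the proportional term, the nonzero offset $v$ would leave a steady-state error; the integrator drives $z$ toward $v$ so that $e\to 0$.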
Finding the Gradient of a Vector Function by its Components | If $\vec{g}=\left[\begin{array}{c}f_1\\\vdots\\f_m\end{array}\right]$ then the derivative of $\vec{g}$ is the matrix
$$J\vec{g}=\left[\begin{array}{c}\nabla f_1\\\vdots\\\nabla f_m\end{array}\right],$$
which is an $m\times n$ rectangular array.
In components, you would see it as
$$J\vec{g}=\left[\dfrac{\partial f_i}{\partial x_j}\right],$$
where $i$ is for rows and $j$ is for columns, and where $x_1,...,x_n$ are the standard coordinate functions of $\Bbb R^n$. |
Calculus and Newton's Law of Cooling help please. | Since it is cooling, $\frac{dT}{dt} = -k(T-T_{\text{ambient}})$ with $k = \frac{1}{6}$ (the constant must be positive so that $T$ decays toward the ambient temperature). Then $12 = (T_{\text{initial}} -24)e^{-20/6}$. If you rearrange, you get $T_{\text{initial}} = 12e^{\frac{10}{3}} +24$.
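Plugging the result back in confirms the rearrangement; this sketch assumes the (unstated) data implied by the answer: ambient temperature $24$, $k=1/6$, and $T-24=12$ at $t=20$:

```python
import math

T_ambient, k, t = 24.0, 1.0 / 6.0, 20.0
T_initial = 12.0 * math.exp(10.0 / 3.0) + 24.0  # the rearranged answer

# plug back into T(t) = T_ambient + (T_initial - T_ambient) * exp(-k t)
T_at_20 = T_ambient + (T_initial - T_ambient) * math.exp(-k * t)
print(T_at_20 - T_ambient)  # ~12, recovering the given reading
```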
limit is also bounded if sequence is bounded and converge? | Assume the limit lies outside of $[c,d]$. Say the limit is $L$ and assume without loss of generality that $L>d$. Let $\varepsilon$ be such that $\varepsilon < L-d$. Therefore: $$\exists n_0\in \mathbb N \mid \forall n\geq n_0 \ |L-a_n|<\varepsilon <L-d$$ $$\Rightarrow L-a_n<L-d\Rightarrow a_n > d$$
Which is a contradiction with the hypothesis of every $a_n$ being in $[c,d]$. The argument is analogous if $L<c$. |
Difference equation formula $\sum a^t = \frac{a^t}{a-1}$. | Take $\Delta$ of the function appearing on the right-hand side and check that you get $a^t$; here $C(t)$ denotes any function with $\Delta C(t)=0$ (e.g. a constant), so that term drops out.
$$\Delta \left( \frac{a^t}{a-1} + C(t) \right) = \frac{a^{t+1}}{a-1} - \frac{a^t}{a-1} + \Delta C(t) = \frac{a^{t+1} - a^t}{a-1} = \frac{a^t(a-1)}{a-1} = a^t.$$ |
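A quick numeric spot-check of this identity, with $\Delta F(t):=F(t+1)-F(t)$ and an arbitrary base $a=3$:

```python
def F(t, a):
    # the claimed antidifference a^t / (a - 1)
    return a**t / (a - 1)

a = 3.0
for t in range(5):
    delta = F(t + 1, a) - F(t, a)
    print(delta, a**t)  # the two columns agree
```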
Imaginary fraction square root? | Good question! You have discovered that it's not possible to define a square root function in the complex numbers that obeys the rule $\sqrt{ab}=\sqrt{a}\,\sqrt{b}$ (or the equivalent $\sqrt{a/b}=\sqrt{a}/\sqrt{b}$, with $b\ne0$).
You get the same dilemma, in an easier way, by considering
$$
i=\sqrt{-1}=\sqrt{\frac{1}{-1}}=\frac{\sqrt{1}}{\sqrt{-1}}=\frac{1}{i}=-i
$$
Note that this is clearly wrong, which doesn't tell us that mathematics is contradictory, but that we have used an unproved (and unprovable) property, namely that we can define a square root function satisfying the rule above.
Note that the false argument produces both complex numbers whose square is $-1$, the same happens in your argument.
A suggestion: never use the symbol $\sqrt{-1}$, because it suggests the possibility to apply the wrong property. Neither use $\sqrt{z}$, for the same reason, unless $z$ is a real number with $z\ge0$. |
Contractible manifolds of dimension $n\le 2$ | Forget the one-point compactification, it's a dead end. By the time you prove that a neighborhood of infinity looks like $[0,\infty) \times S^1$ (which is what you'd need to see the 1-pt compactification is a manifold) you're more or less already done.
Theorem: If $\Sigma$ is an open surface with $H_1(\Sigma; \Bbb F_2) = 0$, then $\Sigma$ is homeomorphic to $\Bbb R^2$.
Sketch of proof.
(1) First prove that $\Sigma$ has exactly one end; if $K \subset \Sigma$ is a compact subset, then $\Sigma \setminus \text{int}(K)$ may have many connected components, but only one of them is noncompact. The proof will be by contrapositive: if $\Sigma$ has two ends, show that $H_1(\Sigma; \Bbb F_2) \neq 0$. You will need to know that the inclusion $\partial \Sigma \to \Sigma$ is nonzero on first homology whenever $\Sigma$ is a noncompact surface with boundary.
(2) Use that $\Sigma$ has a compact exhaustion --- $\Sigma = \bigcup \Sigma_n$, where $\Sigma_n$ is a compact surface and $\Sigma_n \subset \text{int}(\Sigma_{n+1})$, so that $\Sigma_{n+1} = \Sigma_n \cup S_n$, where $S_n$ is also a compact surface. This can be justified using the fact that $\Sigma$ has a proper smooth function to $\Bbb R$, together with Sard's theorem.
(3) Using (1) and (2) together, observe that $\Sigma \setminus \text{int}(\Sigma_n)$ only has one noncompact piece. Modify your compact exhaustion so that $\Sigma \setminus \Sigma_n$ is connected and so that $S_n$ is connected.
(4) Prove that $\partial \Sigma_n$ is a single circle, as otherwise $\Sigma$ has positive genus (you'll glue on a pair of pants as $S_n$, because $S_n$ is connected), which would imply $H_1(\Sigma;\Bbb F_2) \neq 0$; again this involves some Mayer-Vietoris work.
(5) Prove that each $\Sigma_1$ is a disc and each $S_n$ is a cylinder. From here it follows that $\Sigma \cong \Bbb R^2$.
The details are not completely trivial, in particular on (3). I will not see or respond to comments, so please feel free to edit this answer as you desire. This gives a rough strategy for the general classification of noncompact surfaces without boundary, the ideas simplify when $\Sigma$ is contractible like this; similarly you can show that if $H_1(\Sigma)$ is finite, then $\Sigma$ is obtained by deleting finitely many points from a closed surface. |
Algebraic and Geometric multiplicity of Eigenvalues of Symmetric matrices | Generally, this is true for all diagonalizable matrices. Symmetric matrices being diagonalizable by the spectral theorem, the result follows for them. |
Definition: product $\alpha \cdot f $ with $ f \in \operatorname{Hom}_K(E,F)$ , $\alpha \in K$ | If $f:E\to F$ is a linear transformation with $E,F$ vector space, then $\alpha\cdot f$ is the linear transformation obtained by mapping each $x\in E$ to $\alpha\cdot f(x)\in F$, where $\cdot$ is the action of $K$ on $F$, or scalar multiplication in $F$. Remember for each $x\in E$; $f(x)=y$ is a vector in $F$, so it makes sense to talk about the scalar product $\alpha y$.
In particular, for any pair of $K$-vector spaces $E$ and $F$ we can make ${\rm Hom}_K(E,F)$ into a vector space by defining the sum of two linear transformations and the scalar product of $\alpha \in K$ and a linear transformation as follows:
$$\begin{align}(1)\hspace{1cm} (f+g)(x):=&f(x)+g(x)\\
(2)\hspace{1cm} (\alpha\cdot f)(x):=&\alpha \cdot f(x)\end{align}$$
where $+$ and $\cdot$ on the left are the new sum and product, and $+$, $\cdot$ on the right are the usual operations in $F$.
You can check all axioms with these definitions to see that ${\rm Hom}_K(E,F)$ will obtain a $K$-vector space structure. Moreover, if $E$ and $F$ are of finite dimension, $n$ and $m$ respectively, this space ends up being isomorphic to $K^{m\times n}$. |
Two measures are equivalent iff their image measures have same support? | Untrue. Consider $\Omega=\mathbb R^d$, $\mathcal F=\mathcal B(\mathbb R^d)$, $X$ the identity, and $\mathbb P$ and $\mathbb Q$ some discrete probability measures with positive weights at the elements of some countable sets $S$ and $T$, respectively.
Then the supports of $\mathbb P=\mu_\mathbb P$ and $\mathbb Q=\mu_\mathbb Q$ are the closures of $S$ and $T$. These may coincide while $S\cap T$ is empty, that is, one can have $\mathrm{supp}(\mu_\mathbb P)=\mathrm{supp}(\mu_\mathbb Q)$ while $\mathbb P$ and $\mathbb Q$ are mutually singular.
Example: if $d=1$, one can choose $S$ the set of rational numbers whose reduced form has an even denominator and $T$ the set of rational numbers whose reduced form has an odd denominator. |
Number of placements | If balls of the same colour are indistinguishable, use Stars and Bars to count the number of ways of placing the whites. Use Stars and Bars to find the number of ways of placing the blacks. Multiply. |
Can ∂x and ∂y in a derivate be seen as ∂ times x or ∂ times y? | No, $\partial x$ cannot be understood as a product. If it could be, then you would get
$$\frac{\partial y}{\partial x} = \frac{y}{x}$$
which obviously is not true in general.
The expression $\frac{\partial}{\partial x}$ is also known as derivation operator. This can be understood as follows:
Given a function of several variables, say $f(x,y)$, partial derivation with respect to $x$ can be seen as a map that maps the function $f$ to the function $\partial f/\partial x$. If you do so, you notice that this is a linear map, since the functions form a vector space, and
$$\frac{\partial(\alpha f + \beta g)}{\partial x} = \alpha \frac{\partial f}{\partial x} + \beta \frac{\partial g}{\partial x}$$
Such linear functions on vector spaces are also known as operators. Now traditionally, the application of an operator to a vector is written like a product; this is because the prototype of this operation is multiplying a matrix with a vector.
So if you want to write down the derivation operator, you need a notation for it. And one common notation is to write the derivation operator as $\partial/\partial x$. The reason is probably exactly because it looks like application of a normal multiplication rule, so that
$$\frac{\partial}{\partial x}f = \frac{\partial f}{\partial x}$$
and thus the rule is easy to remember and apply. On the other hand, as your question shows, it can easily mislead.
Another common notation for the partial derivative operator is $\partial_x$. This notation is usually preferred in the context of differential geometry. It has the advantage that you are not as easily misled about its meaning (and that it is less to write/type; for reasons that are outside the scope of this answer it's also a very natural notation in differential geometry), but has the slight disadvantage that you're less likely to figure out the meaning of $\partial_x f$ than of $(\partial/\partial x)f$ if you are not familiar with it.
If the angle between $\vec a$ and $\vec b$ is $60^\circ $... | Let $\bar{\theta}$ be the angle between $2\vec a$ and $-2\vec b$. So $$\cos \bar{\theta} =\dfrac {(2\vec a)\cdot (-2\vec b)}{ |2\vec a|\,|-2\vec b|} = \dfrac {-4(\vec a\cdot \vec b)}{ 4\,|\vec a|\,|\vec b|} = -\dfrac {\vec a\cdot \vec b}{ |\vec a|\,|\vec b|} = -\cos(\theta) = -\frac{1}{2},$$
which implies
$$\bar{\theta}=\arccos(-1/2)=120^\circ.$$ |
Find the ratio between two segments of a similar triangle. | Divide the figure into regions with colors as shown.
Note (1) the grey coded parts are equal in area; (2) [green + blue] = .... = [yellow + pink]
$\dfrac {[green]}{[green + blue]} = \dfrac {3k}{20}$
Setup a similar ratio for the yellow and pink.
Perform a suitable operation to those two ratios such that the [green + blue], [yellow + pink], and 'k' terms can be eliminated altogether.
The required is equal to $\dfrac {[green]}{[pink]}$. |
Inverse of Multivariable Functions on Manifolds | Let $\phi_N((x,y))=z=\frac{x}{1-y}$
Then $x=z(1-y)$
Substituting this into $x^2+y^2=1$ we have $z^2{(1-y)}^2+y^2=1$
Then, solving this for $y$, (I used the quadratic formula) $$(1+z^2)y^2-2z^2y+z^2-1=0$$
$$y=\frac{2z^2\pm\sqrt{4z^4-4(1+z^2)(z^2-1)}}{2(1+z^2)}$$
$$y=\frac{2z^2\pm2\sqrt{z^4-(z^4-1)}}{2(1+z^2)}$$
$$y=\frac{z^2\pm1}{1+z^2}$$
Now, if we had $y=\frac{z^2+1}{z^2+1}=1$, then this is not actually in the domain of $\phi_N$, since $(x,1)\notin$ $U_N$ for any x.
So we must have $$y=\frac{z^2-1}{z^2+1}$$
Then, from $x=z(1-y)$, we can just substitute our expression for y in to get
$$x=\frac{2z}{z^2+1}$$
Thus, $${\phi_N}^{-1}(z)=(\frac{2z}{z^2+1}, \frac{z^2-1}{z^2+1})$$
Similar procedure for finding ${\phi_S}^{-1}$ |
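A numerical sanity check that the formula really inverts $\phi_N(x,y)=x/(1-y)$ and that its image lies on the circle (the helper name is mine):

```python
def phi_N_inv(z):
    # the inverse derived above: z -> (2z/(z^2+1), (z^2-1)/(z^2+1))
    return (2 * z / (z**2 + 1), (z**2 - 1) / (z**2 + 1))

for z in [-3.0, -0.5, 0.0, 1.0, 7.5]:
    x, y = phi_N_inv(z)
    assert abs(x**2 + y**2 - 1) < 1e-12   # the point lies on the unit circle
    assert abs(x / (1 - y) - z) < 1e-12   # phi_N(phi_N_inv(z)) = z
print("ok")
```

Note $y=(z^2-1)/(z^2+1)<1$ for every real $z$, so the point never hits the excluded north pole.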
Limit of definite integral with parameter | $$\lim_{\alpha\to +\infty}\alpha \int_{-\alpha}^{1+\alpha}\frac{dx}{1+x^2+\alpha}
=\lim_{\alpha\to +\infty}\frac{\alpha}{\sqrt{1+\alpha}} \arctan \frac{x}{\sqrt{1+\alpha}}\Bigg|_{-\alpha}^{1+\alpha}$$
$$=\lim_{\alpha\to +\infty}\frac{\alpha}{\sqrt{1+\alpha}} ( \arctan \frac{1+\alpha}{\sqrt{1+\alpha}} -\arctan \frac{-\alpha}{\sqrt{1+\alpha}})= \lim_{\alpha\to +\infty}\frac{\alpha}{\sqrt{1+\alpha}} ( \arctan \sqrt{1+\alpha} +\arctan \frac{\alpha}{\sqrt{1+\alpha}})$$
$$=\lim_{\alpha\to +\infty}\frac{\alpha}{\sqrt{1+\alpha}} \left( \frac{\pi}{2} +\frac{\pi}{2}\right)=+\infty$$
Given x is an exponential random variable, find median & probability | a) For the median, I believe your computation is correct, my only doubt is because of the lack of TeX. We want $m$ such that
$$\int_0^m \lambda e^{-\lambda x}\,dx=\frac{1}{2}.$$
Integrate. We want
$$1-e^{-\lambda m}=\frac{1}{2},$$
or equivalently
$$e^{-\lambda m}=\frac{1}{2}.$$
Take the logarithm of both sides. We get
$$-m\lambda=\ln(1/2)=-\ln 2,$$
and therefore $m=\frac{\ln 2}{\lambda}$.
b) The probability that $X_1=X_2$ is $0$. (It is the integral over the line $x_1=x_2$ of the joint density function.)
By symmetry, $\Pr(X_1\gt X_2)=\Pr(X_2\gt X_1)$. By symmetry each is equal to $\frac{1}{2}$.
Or else we could integrate. The joint density function of $X_1$ and $X_2$ is $\lambda^2 e^{-\lambda x_1}e^{-\lambda x_2}$ for $x_1\gt 0$, $x_2\gt 0$, and $0$ elsewhere. Thus
$$\Pr(X_2\gt X_1)=\int_{x_1=0}^\infty \left(\int_{x_2=x_1}^\infty \lambda^2 e^{-\lambda x_1}e^{-\lambda x_2}\,dx_2 \right)\,dx_1.$$
After some calculation we get $\frac{1}{2}$. Exploiting symmetry is a lot easier!
c) The probability that $Y\gt 3$ is equal to the probability both $X_1$ and $X_2$ are $\gt 3$. The probability that $X_1$ is greater than $3$ is $\int_3^\infty \lambda e^{-\lambda x_1}\,dx_1$. This is $e^{-3\lambda}$. For the probability that $Y\gt 3$, square.
d) I do not know what tools you are expected to use, possibly the memorylessness of the exponential. Then we want the probability that an exponential with parameter $\lambda$ is $\gt 8$ given that it is $\ge 3$. This is the probability that an exponential with parameter $\lambda$ is $\gt 5$. |
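A seeded simulation supporting parts (a) and (c), with the made-up choice $\lambda=1$:

```python
import math, random

random.seed(0)
lam = 1.0
n = 200_000
x1 = [random.expovariate(lam) for _ in range(n)]
x2 = [random.expovariate(lam) for _ in range(n)]

# (a) the sample median should be close to ln(2)/lambda
median_est = sorted(x1)[n // 2]
print(median_est, math.log(2) / lam)

# (c) P(Y > 3) = P(min(X1, X2) > 3) should be close to e^{-2 * 3 * lambda}
p_y_gt_3 = sum(min(a, b) > 3 for a, b in zip(x1, x2)) / n
print(p_y_gt_3, math.exp(-6 * lam))
```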
Computing the Laurent series of $\frac{1}{z^2}$ around the point $z = 3$ | Hint: consider the series for $1/z$, then differentiate it.
Find summation definition of a certain function | We build multiplication tables of $x_j\ast x_k$ for small $n$ and check for a pattern of the signs. Since the pattern differs slightly for $n$ even and $n$ odd we consider $n=6$ and $n=7$.
Case $n=6$:
We have in this case
\begin{align*}
&x_1x_2-x_1x_3+x_1x_4-x_1x_5+x_1x_6\\
&\qquad-x_2x_3+x_2x_4+x_2x_5-x_2x_6\\
&\qquad\qquad\quad\ +x_3x_4-x_3x_5-x_3x_6\\
&\qquad\qquad\qquad\qquad\ \ -x_4x_5+x_4x_6\\
&\qquad\qquad\qquad\qquad\qquad\qquad+x_5x_6
\end{align*}
The tables below give the signs of the terms $x_j\ast x_k$ and the values $j+k \mod 4$ which show a correlation to the plus and minus signs.
$$
\begin{array}{c|cccccc|cccccc}
\ast&x_2&x_3&x_4&x_5&x_6&\qquad\qquad\qquad\ast&x_2&x_3&x_4&x_5&x_6\\
\hline
x_1&+&-&-&+&+&\qquad\qquad\qquad x_1&3&0&1&2&3&\\
x_2&&-&+&+&-&\qquad\qquad\qquad x_2&&1&2&3&0&\\
x_3&&&+&-&-&\qquad\qquad\qquad x_3&&&3&0&1&\\
x_4&&&&-&+&\qquad\qquad\qquad x_4&&&&1&2&\\
x_5&&&&&+&\qquad\qquad\qquad x_5&&&&&3&
\end{array}
$$
From the tables above we obtain for even $n$:
\begin{align*}
\sum_{{1\leq j<k\leq n}\atop{{j+k \equiv 2\ \mathrm{mod}\,(4)}\atop{j+k \equiv 3\ \mathrm{mod}\,(4)}}}x_jx_k
-\sum_{{1\leq j<k\leq n}\atop{{j+k \equiv 0\ \mathrm{mod}\,(4)}\atop{j+k \equiv 1\ \mathrm{mod}\,(4)}}}x_jx_k
=\color{blue}{\sum_{1\leq j<k\leq n}(-1)^{\left\lfloor\frac{1}{2}\left(j+k+2-4\left\lfloor\frac{j+k+2}{4}\right\rfloor\right)\right\rfloor}x_jx_k}
\end{align*}
Case $n=7$:
We have in this case
\begin{align*}
&x_1x_2+x_1x_3-x_1x_4-x_1x_5+x_1x_6+x_1x_7\\
&\qquad-x_2x_3-x_2x_4+x_2x_5+x_2x_6-x_2x_7\\
&\qquad\qquad\quad\ +x_3x_4+x_3x_5-x_3x_6-x_3x_7\\
&\qquad\qquad\qquad\qquad\ \ -x_4x_5-x_4x_6+x_4x_7\\
&\qquad\qquad\qquad\qquad\qquad\qquad+x_5x_6+x_5x_7\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\ -x_6x_7\\
\end{align*}
The tables below give the signs of the terms $x_j\ast x_k$ and the values $j+k \mod 4$ which show a correlation to the plus and minus signs.
$$
\begin{array}{c|ccccccc|ccccccc}
\ast&x_2&x_3&x_4&x_5&x_6&x_7&\qquad\qquad\qquad\ast&x_2&x_3&x_4&x_5&x_6&x_7\\
\hline
x_1&+&+&-&-&+&+&\qquad\qquad\qquad x_1&3&0&1&2&3&0&\\
x_2&&-&-&+&+&-&\qquad\qquad\qquad x_2&&1&2&3&0&1&\\
x_3&&&+&+&-&-&\qquad\qquad\qquad x_3&&&3&0&1&2&\\
x_4&&&&-&-&+&\qquad\qquad\qquad x_4&&&&1&2&3&\\
x_5&&&&&+&+&\qquad\qquad\qquad x_5&&&&&3&0&\\
x_6&&&&&&-&\qquad\qquad\qquad x_6&&&&&&1&
\end{array}
$$
From the tables above we obtain for odd $n$:
\begin{align*}
\sum_{{1\leq j<k\leq n}\atop{{j+k \equiv 0\ \mathrm{mod}\,(4)}\atop{j+k \equiv 3\ \mathrm{mod}\,(4)}}}x_jx_k
-\sum_{{1\leq j<k\leq n}\atop{{j+k \equiv 1\ \mathrm{mod}\,(4)}\atop{j+k \equiv 2\ \mathrm{mod}\,(4)}}}x_jx_k
=\color{blue}{\sum_{1\leq j<k\leq n}(-1)^{\left\lfloor\frac{1}{2}\left(j+k+1-4\left\lfloor\frac{j+k+1}{4}\right\rfloor\right)\right\rfloor}x_jx_k}
\end{align*} |
Two small Divergence Theorem questions | The divergence theorem applies to a body $B\subset{\mathbb R}^3$ and its outward oriented boundary $\partial B=:S$. It says that for any field ${\bf F}$ defined in a neighborhood of $B$ one has
$$\int_S {\bf F}\cdot d\vec \omega=\int_B{\rm div}({\bf F})\ dV\ .$$
(i) Computing the divergence we obtain
$${\rm div}({\bf F})(x,y,z)=z-2\ .$$
Here $S=\partial B$ and therefore
$$\int_S {\bf F}\cdot d\vec \omega=\int_B{\rm div}({\bf F})\ dV=\left({3\over2}-2\right){\rm vol}(B)=-{3\pi\over2}\ .$$
(ii) The boundary $\partial B$ of $B$ consists of three pieces: The bottom surface $S_1$, the top surface $S_2$, and the cylindrical surface $S_3$. We are told to compute the flow integral $\int_S {\bf F}\cdot d\vec \omega$ for $S:=S_3$.
According to the divergence theorem
$$\int_{S_3} {\bf F}\cdot d\vec \omega= \int_B{\rm div}({\bf F})\ dV-\int_{S_1} {\bf F}\cdot d\vec \omega-\int_{S_2} {\bf F}\cdot d\vec \omega\ .$$
On the bottom surface $S_1$ the outward normal is $(0,0,-1)$, and the scalar surface element is ${\rm d}\omega={\rm d}(x,y)$, i.e., the area element on the unit disk $D$. Therefore
$$\int_{S_1}{\bf F}\cdot d\vec \omega=\int_D(0,3x,0)\cdot(0,0,-1)\ {\rm d}(x,y)=0\ .$$
On the top surface $S_2$ the outward normal is $(0,0,1)$, Therefore
$$\int_{S_2}{\bf F}\cdot d\vec\omega=\int_D(3x,3x,-6)\cdot(0,0,1)\ {\rm d}(x,y)=-6 \int_D {\rm d}(x,y)=-6\pi\ .$$
It follows that
$$\int_{S_3} {\bf F}\cdot d\vec \omega=-{3\pi\over2}-0-(-6\pi)={9\pi\over2}\ .$$ |
Convergence of relative sum of iid random variables | $$
Z_n = \frac{\sum_1^n (X_i -E(X))}{\sum_1^n X_i}
= 1 - \frac{\mu}{\sum_1^n X_i/n},
$$ where $\mu = E(X)$.
Then,
$$ Z = \lim_\limits{n \rightarrow \infty} Z_n
=_d 1 - \frac{\mu}{G_n },
$$
where the limit is in distribution, with r.v. $G_n =_d N( \mu, \sigma^2/n)$
This convergence in distribution implies directly:
$$
P(|G_n - \mu| >\delta ) = 2\Phi( -\frac{\delta \sqrt{n}}{\sigma})
$$
Convergence in distribution to a constant implies convergence in probability (to the same constant), that is, $G_n \rightarrow_P \mu$.
Then, $Z_n$ converges to zero with rate $1/n$ for a fixed error (as can be seen from the simple Wiki proof of weak LLN using Chebyshev inequality). |
number of solutions of $f(f(f(f(x))))$ | Hint :
$$
f(x) = x^2+10x+20 = (x+5)^2 - 5
$$
so
$$
f(f(x)) = ((x+5)^2 - 5 + 5)^2 - 5 = (x+5)^4 - 5 $$
$$f(f(f(x))) = (x+5)^8 - 5
$$
... and it should be easier to analyse $f^{(4)}(x)$ now.
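By induction $f^{(m)}(x)=(x+5)^{2^m}-5$, so $f^{(4)}(x)=(x+5)^{16}-5$; this is easy to confirm exactly on integers:

```python
def f(x):
    return x * x + 10 * x + 20  # (x + 5)^2 - 5

for x in [-7, -5, -4, 0, 2]:
    iterate = f(f(f(f(x))))
    closed_form = (x + 5) ** 16 - 5
    assert iterate == closed_form  # exact integer arithmetic
print("ok")
```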
To show the equality | Taking the beginning of wuestenfux:
$$[(A\cap X)\cup (B\cap X^c)]^c = (A\cap X)^c \cap (B\cap X^c)^c \\
= (A^c\cup X^c) \cap (B^c\cup X) = (A^c\cap B^c)\cup (A^c\cap X)\cup (X^c\cap B^c)\cup (X^c\cap X)\\
= (A^c\cap B^c)\cup (A^c\cap X)\cup (X^c\cap B^c)\\
= (A^c\cap B^c \cap X)\cup (A^c\cap B^c \cap X^c) \cup (A^c\cap X)\cup (X^c\cap B^c)\\
= (A^c\cap X)\cup (X^c\cap B^c).$$
The last equality is because of $(A^c\cap B^c \cap X) \subseteq (A^c\cap X)$ and $(A^c\cap B^c \cap X^c) \subseteq (B^c \cap X^c)$. |
Integral of rational functions. | 1.for the integrand $\dfrac{1}{x^2+2px+q}$ complete the square to get
$\frac{1}{\dfrac{1}{4}(4q-4p)^2+(p+x)^2}$
2.then substitute $u=p+x$
3.factor out $\dfrac{1}{4}(4q-4p)^2$
The integral reduced to simple trigonometric function |
Operator into the dual space is compact | You have a continuous linear map $T:X\to Y'$, $w\mapsto A(w,\cdot)$ (the difference from $A$ is the range $Y'$ instead of $X'$). A theorem of Schauder says that the transpose $J^t: Y'\to X'$ of the compact inclusion $J:X\hookrightarrow Y$ is again compact ($J^t$ is the restriction).
Hence $A=J^t\circ T$ is compact. |
union of ordinal number | HINT for $\bigcup x\in x$ and $x\in\bigcup x$: Is $\bigcup\varnothing\in\varnothing$? Is $\varnothing\in\bigcup\varnothing$?
$x\in\bigcup\varnothing$ iff there is a $y\in\varnothing$ such that $x\in y$, so ... ?
Remember, $\varnothing$ is also the ordinal $0$, so your answer to this question applies to both parts of your problem.
Added: Not much point in a hint now, so I’ll just point out that if $\alpha$ is an ordinal, $$\bigcup\alpha=\sup\alpha=\sup\{\beta:\beta<\alpha\}=\begin{cases}\alpha,&\text{if }\alpha\text{ is a limit ordinal or }0\\
\beta,&\text{if }\alpha=\beta+1\;.
\end{cases}$$ |
Help with Probability distributions arithmetics | One way to show this is to use a change of variable.
Call $Y = \sqrt 2 X$. Then $X = Y/\sqrt 2$, and
\begin{align*}f_Y(y) &= \frac{f_X(y/\sqrt 2)}{\left|\frac{dy}{dx}\right|_{y/\sqrt 2}} \\
&= \frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(\frac{y/\sqrt 2 -0}{1}\right)^2\right\}\cdot \frac{1}{\sqrt 2}\\
&=\frac{1}{\sqrt{2\pi}\sqrt 2} \exp\left\{-\frac{1}{2}\left(\frac{y -0}{\sqrt{2}}\right)^2\right\}.
\end{align*}
This is the density for a $N(0, 2)$. In general, this is not applicable to all distributions. You will have to become familiar with each one. |
Exponentiation with negative base and properties | Note that $4^{\frac {1}{4}}$ has four values: $\sqrt {2},\,-\sqrt {2},\,i\sqrt {2},\,-i\sqrt {2}$.
When you square a number or an equation, you may introduce extraneous solutions.
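The four values can be listed explicitly as $\sqrt 2\,e^{ik\pi/2}$, $k=0,1,2,3$; a quick check with `cmath`:

```python
import cmath

# the four fourth roots of 4: sqrt(2) * exp(i k pi / 2)
roots = [cmath.rect(2 ** 0.5, k * cmath.pi / 2) for k in range(4)]
for r in roots:
    print(r, r ** 4)  # each fourth power is (approximately) 4
```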
A marketing report concerning personal computers/Inclusion–Exclusion | $A:\;$ The set of all owners who buy a printer. $\;|A| = 650,000$
$B:\;$ The set of all owners who buy at least one software package. $\;|B| = 1,250,000$
$A \cup B:\;\;$ The set of all owners who buy a printer OR at least one software package.
$\qquad\qquad|A \cup B| = 1,450,000$
$A \cap B: $ The set of all owners who buy a printer AND at least one software package.
$$|A\cap B| = |A| + |B| - |A\cup B|$$ |
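Plugging in the given numbers:

```python
A = 650_000          # buy a printer
B = 1_250_000        # buy at least one software package
A_union_B = 1_450_000

# inclusion-exclusion: |A ∩ B| = |A| + |B| - |A ∪ B|
A_intersect_B = A + B - A_union_B
print(A_intersect_B)  # 450000
```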
Trivia math question | HINT: Interpret this situation as a graph.
Have a vertex for each person, and each edge is a handshake.
Consider the degree of each vertex. In particular the relation between the degree of someone and the degree of their partner. |
Why is the number of subgroups of a finite group G of order a fixed p-power congruent to 1 modulo p? | I agree with Derek's remark, that is a very neat way to prove it. Another way to prove it is the following, and it teaches you more about counting principles in $p$-groups. I assume that you have proved Exercise 1C.8 of Isaacs' book, which says that you can reduce the problem to $p$-groups.
Lemma 1. Let $H$ be a proper subgroup of a finite $p$-group $G$ and suppose $|H|=p^b$. Then the number of subgroups of order $p^{b+1}$ which contain $H$, is congruent 1 mod $p$.Proof. Fix a subgroup $H$ of order $p^b$ of the $p$-group $G$. Since $H$ is normal and proper in $N_G(H)$, one can find a subgroup $K$ with $|K:H|=p$ (look at the center of $N_G(H)/H$, which is non-trivial, pick an element of order $p$, say $\overline{x}$ and put $K=\langle x \rangle$). On the other hand, if $K$ is a subgroup of $G$ with $H \subset K$ and $|K:H|=p$, then $H \lhd K$ (because $p$ is the smallest prime dividing $|K|$), and hence $K \subseteq N_G(H)$. We conclude that |{$K \leq G: H \subset K$ and $|K:H|=p$}| = |{$\overline{K} \leq N_G(H)/H: |\overline{K}|=p$}|. The lemma now follows from the fact that in the group $N_G(H)/H$ the number of subgroups of order $p$ is congruent to $1$ mod $p$ (in any group, which order is divisible by the prime $p$, this is true and follows easily from the McKay proof of Cauchy’s Theorem). $\square$
Lemma 2. The number of subgroups of order $p^{b-1}$ of a $p$-group $G$ of order $p^b$ is congruent 1 mod $p$.
Proof. Let $G$ be a non-trivial group of order $p^b$. Let $\mathcal{S}$ be the set of all subgroups of $G$ of order $p^{b-1}$. Observe that this set is non-empty and fix an $H \in \mathcal{S}$. We are going to do some counting on the set $\mathcal{S}$, by defining an equivalence relation $\sim$ as follows: $K\sim L$ iff $H \cap K=H \cap L$ for $K, L \in \mathcal{S}$. Observe that for $K, L \in \mathcal{S}$ with $K \neq L$, $G=KL$, $K \cap L \lhd G$ and $|K \cap L|=p^{b-2}$. It is easy to see that the equivalence class $[H]$ is a singleton. In addition, $H=K$ iff $H \in [K]$ for $K\in \mathcal{S}$. Now fix a $K \in \mathcal{S}$, $K \neq H$. Counting orders one can see that if $L \in \mathcal{S}$, then $H \cap K\subset L$ iff $L \in [K]$. Hence the number of elements in $[K]$ is exactly the number of subgroups of order $p^{b-1}$ containing $H \cap K$, minus 1, namely $H$. Owing to the previous lemma, we conclude that $|[K]| \equiv 0$ mod $p$. The lemma now follows. $\square$.
Using these two lemma's you now can prove the statement by cleverly counting the number of subgroup pairs $H$ and $K$ of $G$ having order $p^b$ and $p^{b+1}$ respectively. Now back to your original question
Theorem. Let $G$ be a group of order $p^a$, $p$ prime. Let $0 \leq b \leq a$ and $n_b=$|{$H \leq G: |H|=p^b$}|. Then $n_b \equiv 1$ mod $p$.
Proof. Let $H$, $K \leq G$, with $|H|=p^b$ and $|K|=p^{b+1}$. Define a function $f$ as follows: $f(H,K)=1$ if $H \subset K$ and $f(H,K)=0$ otherwise. Let us compute $\sum_{H} \sum_{K} f(H,K)$ in two different ways: $\sum_{H}$$\sum_{K} f(H,K) = \sum_{H} \sum_{H \subset K}1$ $\equiv \sum_{H} 1$ mod $p$, according to Lemma 1 above. Similarly, by reversing the order of summation of $H$ and $K$, the sum equals $\sum_{K} 1$ mod $p$, using Lemma 2. In other words, for all $b$, the number of subgroups of $G$ of order $p^b$ is congruent mod $p$ to the number of subgroups of $G$ of order $p^{b+1}$. The theorem now follows from the fact that the number of subgroups of order $p^a$ equals 1, namely $G$ itself. $\square$.
Remark. The theorem counts all subgroups of fixed order $p^b$. If we restrict ourselves to the normal subgroups of order $p^b$ the same holds: |{$H \unlhd G: |H|=p^b$}|$\equiv 1$ mod $p$. Sketch of proof: let $G$ act by conjugation on the set of all subgroups $H$ of order ${p^b}$. The fixed point are exactly the normal subgroups. Now apply the Theorem above.
Finally, also see this StackExchange entry. |
Finding the expected value in a Negative Binomial Problem. | Edited for details: You are charged \$100 for a particular day precisely when the company does not show up that day. So, if the company does not show up on day one but does show up on day 2, you are charged just the \$100 for the day they did not show, as per your assumptions. The probability of this event is $(0.2)(0.8)$. Similarly, if they miss the first two days and show on the third, you are looking at \$200 with probability $(0.2)(0.2)(0.8)$.
The fee will be 100 with probability $0.2(0.8)$.
The fee will be 200 with probability $0.2^2(0.8)$.
The fee will be 300 with probability $0.2^3(0.8)$.
etc.
Therefore the expected value will be
$$\sum_{n=1}^{\infty} 100n\cdot 0.2^n(0.8) = 80\sum_{n=1}^{\infty} n\cdot 0.2^n$$
Hence the given formula. |
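Truncating the series numerically confirms the closed form: since $\sum_{n\ge1} n x^n = \frac{x}{(1-x)^2}$, the expected fee is $80\cdot\frac{0.2}{0.64}=25$:

```python
# partial sum of 100 n (0.2)^n (0.8); the tail beyond n = 200 is negligible
expected_fee = sum(100 * n * 0.2**n * 0.8 for n in range(1, 200))
print(expected_fee)  # close to 25.0 = 80 * 0.2 / (1 - 0.2)**2
```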
Integrating $\log(-ix)\exp(-ix)/x^2$ | Assuming $\epsilon>0$, the given integral, after the substitution $-ix=z$, is $$i\int_{\epsilon-i\infty}^{\epsilon+i\infty}z^{-2}e^z\log z\,dz=-if'(2),\qquad f(s)=\int_{\epsilon-i\infty}^{\epsilon+i\infty}z^{-s}e^z\,dz=\frac{2\pi i}{\Gamma(s)}$$ (the last equality is basically Hankel's integral). The final result is then $\color{blue}{-2\pi(1-\gamma)}$. |
Determine the number of points $z \in \mathbb{C}$ such that $(z-2)^4=(1+i)^4$ | HINT
We have that
$w=(1+i)^4=\left((1+i)^2\right)^2=(2i)^2=-4,$
then evaluate
$$(z-2)^4=w.$$
On uniform Structure induced by pseudo metric | Suppose $f: X \to (Y, \mathcal{U}_d)$ is given for some pseudometric $d$ on $Y$.
So a base of the induced uniformity on $X$ will be all sets of the form
$$(f \times f)^{-1}[U_d(r)] \text{, where } U_d(r)=\{(y,y') \in Y^2: d(y,y') < r\}$$
with $r>0$. Instead, we can define a pseudometric on $X$ by $d_X(x,x')=d(f(x), f(x'))$ and we have $d_X = d \circ (f \times f)$ as maps and
$$(f \times f)^{-1}[U_d(r)] = U_{d_X}(r)$$ by definition and so we can see this induced uniformity also as the one induced by the pseudometric $d_X$. |
Group theoretic construction for permutation algorithm | Assuming you compose permutations right to left, it may be helpful to observe that $(x_2,x_k,x_{k-1},\dots,x_3)(x_1,x_2,\dots,x_k)=(x_1,x_k)$ |
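A concrete check with $k=5$, composing right to left as stated (the dict-based helpers are mine):

```python
def cycle(*elts):
    """A cycle as a dict: each listed element maps to the next, wrapping around."""
    return {a: b for a, b in zip(elts, elts[1:] + elts[:1])}

def compose(f, g):
    """(f o g)(x) = f(g(x)): apply g first, then f; identity off the listed keys."""
    out = {}
    for x in set(f) | set(g):
        y = g.get(x, x)
        out[x] = f.get(y, y)
    return out

c1 = cycle(1, 2, 3, 4, 5)   # (x1, x2, ..., xk) with k = 5
c2 = cycle(2, 5, 4, 3)      # (x2, xk, x_{k-1}, ..., x3)
prod = compose(c2, c1)
print({x: y for x, y in prod.items() if x != y})  # {1: 5, 5: 1}, the transposition (x1, xk)
```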
value of $x(\sqrt 2)$ | Let $$y(t):=2+\int_0^t{x(s)}ds\geq 2$$
Then from the integral inequality we have
$$\dot{y}=x(t)\leq \sqrt{y}$$
or equivalently
$$\frac{d}{dt}\left(2{y^{1/2}(t)}-t\right)\leq 0$$
which yields
$$2y^{1/2}(t)-t\leq 2y^{1/2}(0)=2\sqrt{2}$$
i.e.
$$y^{1/2}(t)\leq \frac{t}{2}+\sqrt{2}$$
From the initial inequality we have
$$x(t)\leq \left(2+\int_0^t{x(s)ds}\right)^{1/2}=y^{1/2}(t)\leq \frac{t}{2}+\sqrt{2} $$
For $t=\sqrt{2}$ we obtain from above
$$x(\sqrt{2})\leq \frac{3}{\sqrt{2}}$$ |
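The bound is attained when the inequality is an equality throughout, i.e. $\dot y=\sqrt{y}$, $y(0)=2$. A quick Euler integration (my own check, not part of the original argument) confirms that $x(\sqrt{2})=\sqrt{y(\sqrt{2})}$ then equals the bound $3/\sqrt{2}$:

```python
import math

# Euler's method for the extremal case y' = sqrt(y), y(0) = 2.
t, y, dt = 0.0, 2.0, 1e-5
while t < math.sqrt(2):
    y += dt * math.sqrt(y)
    t += dt

x_at_sqrt2 = math.sqrt(y)
print(x_at_sqrt2, 3 / math.sqrt(2))  # both ≈ 2.12132
```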
Find the value of the polynomial specified at x=n+1 | Look at the polynomial $(x+1)p(x)-x$: it is of degree $n+1$ and it has roots $0,1,2,\ldots,n$. Hence
$$(x+1)p(x)-x=ax(x-1)(x-2)\cdots (x-n)$$
Put $x=-1$ to get
$$1=a(-1)(-2)\cdots (-n-1) = a(n+1)!(-1)^{n+1} \implies a= \frac{(-1)^{n+1}}{(n+1)!}$$
Thus if $x=n+1$
$$(n+2)p(n+1)-(n+1)=a(n+1)!$$
Hence
$$p(n+1)=\frac{(n+1)+a(n+1)!}{n+2}=\frac{(n+1)+(-1)^{n+1}}{n+2}$$ |
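The formula can be verified exactly with rational arithmetic. The sketch below (the helper `p_at` is my own, not from the answer) interpolates the degree-$n$ polynomial through the points $(k, k/(k+1))$ and evaluates it at $n+1$:

```python
from fractions import Fraction

def p_at(n, x):
    """Lagrange interpolation of the degree-n polynomial with p(k) = k/(k+1), evaluated at x."""
    pts = [(Fraction(k), Fraction(k, k + 1)) for k in range(n + 1)]
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = yi
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term *= (Fraction(x) - xj) / (xi - xj)
        total += term
    return total

for n in range(1, 8):
    assert p_at(n, n + 1) == Fraction((n + 1) + (-1) ** (n + 1), n + 2)
print("formula verified for n = 1..7")
```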
Show that $p(x)=2x^6+12x^5+30x^4+60x^3+8x^2+30x+45$ has no real roots | The problem is unfortunately false; $p(-1) = -17$, but $p(0) = 45$, so there must be a root in $(-1,0)$. |
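A two-line check of the claimed values:

```python
def p(x):
    return 2*x**6 + 12*x**5 + 30*x**4 + 60*x**3 + 8*x**2 + 30*x + 45

print(p(-1), p(0))  # -17 45, so by the intermediate value theorem a root lies in (-1, 0)
```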
Remainder of the polynomial: What is wrong with this approach? | I think the problem is, with P(1), P(-1) and P(0), you are actually getting the value of r(1), r(-1) and r(0), instead of r(x).
r(x) is linear in this case, so you are able to get the correct answer by factoring P(x) by x |
How to find range of a rational function? | Hint: write it as $\,(x^2+1)+\cfrac{1}{x^2+1}-1\,$ and use $\,a+\cfrac{1}{a}\ge 2\,$ for $\,a \ge 1\,$. |
What is the limit value of $a$ that $\sum\limits_{k = 1 }^ \infty \frac{ \ln(k)}{k^a}$ converges | It converges for $a\gt 1$. Try the integral test. |
Show that the integer $Q_n = n! + 1$ where n is a positive integer, has a prime divisor greater than n. | Hint: Can it have any prime divisors $\leq n$? |
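Behind the hint: every prime $p \le n$ divides $n!$, so it cannot divide $n!+1$. A quick spot-check (mine, not part of the hint):

```python
import math

for n in range(2, 15):
    q = math.factorial(n) + 1
    # No integer in 2..n (in particular, no prime <= n) divides n! + 1.
    assert all(q % k != 0 for k in range(2, n + 1))
print("n! + 1 has no divisor in 2..n for n = 2..14")
```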
Z-transform of white noise | The noise should be defined as a random process $N[n]$ instead of a random variable.
If the mean of the noise is constant in time and its autocorrelation can be expressed in terms of the time difference alone, the random process is wide-sense stationary (W.S.S.).
Moreover, $N[i]$ is independent of $N[j]$ for $i\neq j$.
In this case, the autocorrelation of noise random process can be defined as:
$R_{n}[\tau=n_{1}-n_{2}]=\sigma^2\delta[\tau]$ and $m_{n}=0$
And the power spectral density (PSD) of $N[n]$ is Z-transform of $R_{n}[\tau]$:
$S_{n}(z)=\sigma^2$ |
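As an empirical illustration (not part of the original answer), the sample autocorrelation of a seeded Gaussian white-noise sequence with $\sigma=1$ is close to $1$ at lag $0$ and close to $0$ at nonzero lags, matching $R_{n}[\tau]=\sigma^2\delta[\tau]$:

```python
import random

random.seed(0)
N = 200_000
x = [random.gauss(0, 1) for _ in range(N)]

def autocorr(lag):
    # Sample autocorrelation at the given lag.
    return sum(x[i] * x[i + lag] for i in range(N - lag)) / (N - lag)

print(autocorr(0), autocorr(1), autocorr(5))  # ≈ 1, ≈ 0, ≈ 0
```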
Prove that there exist $a,b\in A$ such that $d(a,b)=\sup{\{d(x,y)\mid x,y\in A\}}$ if $A$ is compact | The map $d:A\times A\rightarrow \mathbb{R}$, $(x,y)\mapsto d(x,y)$, is continuous. Since $A\times A$ is compact (Tychonoff), $d$ attains its maximum on it.
How to find the limit $ \lim_{x \to 4}{2x-8\sqrt x+8 \over \sqrt x-2}$? | hints:
$$\frac{2x-8\sqrt x+8}{\sqrt x-2}=2\frac{x-4\sqrt x+4}{\sqrt x-2}=2\frac{(\sqrt x-2)^2}{\sqrt x-2} =\ldots$$ |
Elementary proof of the Prime Number Theorem - Need? | As KCd explains in a comment, the proof of the PNT in Hardy's time seemed to be intimately connected to the complex analytic theory of the $\zeta$-function. In fact, it was known to
be equivalent to the statement that $\zeta(s)$ has no zeroes on the half-plane $\Re s \geq 1$.
Although this equivalence may seem strange to someone unfamiliar with the subject, it is in
fact more or less straightforward.
In other words, the equivalence of PNT and the zero-freeness of $\zeta(s)$ in the region
$\Re s \geq 1$ does not lie as deep as the fact that these results are actually true.
The possibility that one could then prove PNT in a direct way, avoiding complex analysis, seemed unlikely to Hardy, since then one would also be giving a proof, avoiding complex
analysis, of the fact that the complex analytic function $\zeta(s)$ has no zero in the region $\Re s \geq 1$, which would be a peculiar state of affairs.
What added to
the air of mystery surrounding the idea of an elementary proof was the possibility of accessing the Riemann hypothesis this way. After all, if one could prove in an elementary way that
$\zeta(s)$ was zero free when $\Re s \geq 1$, perhaps the insights gained that way
might lead to a proof that $\zeta(s)$ is zero free in the region $\Re s > 1/2$ (i.e.
the Riemann hypothesis), a statement which had resisted (and continues to resist)
attack from the complex analytic perspective.
In fact, when the elementary proof of PNT was finally found, it didn't have the ramifications that Hardy anticipated (as KCd pointed out in his comment).
For a modern perspective on the elementary proof, and a comparison with the complex analytic proof, I strongly recommend Terry Tao's exposition of the prime number theorem. In this exposition,
Tao is not really concerned with elementary vs. complex analytic techniques as such, but rather with
explaining what the mathematical content of the two arguments is, in a way which makes
it easy to compare them. Studying Tao's article should help you develop a deeper sense of the mathematical content of the two arguments, and their similarities and differences.
As Tao's note explains, both arguments involve a kind of "harmonic analysis" of the primes,
but the elementary proof works primarily in "physical space" (i.e. one works directly with the prime counting function and its relatives), while the complex analytic proof works much more in "Fourier space" (i.e. one works much more with the Fourier transforms of the prime counting function and its relatives).
My understanding (derived in part from Tao's note) is that Bombieri's sieve is (at least in part) an outgrowth of the elementary proof, and it is in sieve methods that one can look to find modern versions of the type of arguments that appear in the elementary proof. (As one example, see Kevin Ford's paper On Bombieri's asymptotic sieve, which in its first two pages includes a
discussion of the relationship between certain sieving problems and the elementary proof.)
But I should note that modern analytic number theorists don't pursue sieve methods out of some desire for having "elementary" proofs. Rather, some results can be proved by $\zeta$- or $L$-function methods, and others by sieving methods; each has their strengths and weaknesses. They can be combined, or played off, one against the other. (The
Bombieri--Vinogradov theorem is an example
of a result proved by sieve methods which, as far as I understand, is stronger than what
could be proved by current $L$-function methods; indeed, it is an averaged form of the
Generalized Riemann Hypothesis.)
To see how this mixing of methods is possible, I again recommend Tao's note. Looking at it should give you a sense of how, in modern number theory, the methods of the two proofs of PNT (elementary and complex analytic) are not living in different, unrelated worlds, but are just two different, but related, methods for approaching the "harmonic analysis of the primes". |
Prove that a square matrix over an algebraic closed field is nilpotent if and only if all their eigenvalues are zero. | Hint: If all the eigenvalues are zero, then what is the characteristic polynomial of the matrix? Apply Cayley-Hamilton Theorem now.
Conversely, if $A^k = 0$ for some $k> 0, k \in \mathbb{Z}$, then suppose $ \lambda$ is an eigenvalue with eigenvector $v \neq 0 $. So, $ Av = \lambda v$. Applying $A$ repeatedly to both sides of the equation gives $ A^k v = \lambda^k v$. Conclude.
set algebra having union and minus | $A \cup (B\setminus C) = A\cup(B\cap C^c)=(A\cup B)\cap(A\cup C^c)=(A\cup B)\setminus(A\cup C^c)^c = (A\cup B)\setminus(A^c \cap C)=(A \cup B)\setminus (C\setminus A)$
Here $A^c=U\setminus A$ denotes the complement of $A$ in the universe $U$.
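Since membership of each element depends only on which of $A$, $B$, $C$ contain it, a brute-force check over all subsets of a small universe verifies the identity (this check is my addition):

```python
U = range(3)
subsets = [{i for i in U if m >> i & 1} for m in range(8)]

# Python's set operators: | is union, - is set difference.
for A in subsets:
    for B in subsets:
        for C in subsets:
            assert A | (B - C) == (A | B) - (C - A)
print("identity holds for all subsets of a 3-element universe")
```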
Given a solid sphere of radius R, remove a cylinder whose central axis goes through the center of the sphere. | It takes me a long time to draw a picture with software, so you will have to do it for me. Without your picture, the solution below will have no meaning.
Please draw the top half of the circle with center the origin and radius $R$. This circle has equation $x^2+y^2=R^2$. I am a little allergic to fractions, so temporarily let $h=2k$.
What does our sphere with a hole in it look like? Draw the vertical line that goes straight up from the point $A(-k,0)$ until it meets the circle at a point we will call $P$. Make $k$ fairly large, say at least $\frac{3}{4}R$. That helps with the visualization later. Also draw the vertical line that goes straight up from $B(k,0)$ until it meets the circle at a point we will call $Q$. Join the two points $P$ and $Q$ by a horizontal line.
Now for the hard part! Let $\mathcal{W}$ be the region above the line $PQ$ but below the curve $y=\sqrt{R^2-x^2}$. The hole was drilled horizontally, along the $x$-axis. All that's left of the original sphere is the solid that we obtain by rotating the region $\mathcal{W}$ about the $x$-axis. This solid is sometimes called a napkin ring. Note that this solid has height $2k$. The radius of the hole is the length of the line segment $AP$. So this radius is $\sqrt{R^2-k^2}$. It is kind of funny to talk about height, since this "height," and the drilling, is along the $x$-direction. Too late to fix.
We first find the volume obtained by rotating the region below $y=\sqrt{R^2-x^2}$, above the $x$-axis, from $x=-k$ to $x=k$. It is standard solid of revolution stuff that the volume is
$$\int_{-k}^k \pi (R^2-x^2)\,dx.$$
Evaluate. It is easier to integrate from $0$ to $k$ and double. We get
$$2\pi R^2k -\frac{2\pi k^3}{3}.\qquad\qquad(\ast)$$
The hole is simply a cylinder of height $2k$, and radius $AP$, which is $\sqrt{R^2-k^2}$. So integration is unnecessary. The volume of the hole is
$$\pi(R^2-k^2)(2k).\qquad(\ast\ast) $$
To find the volume of what's left, subtract $(\ast\ast)$ from $(\ast)$. The $\pi R^2 k$ terms cancel, and after some algebra we get $\dfrac{4}{3}\pi k^3$.
Recall that $k=\frac{h}{2}$ and substitute. We end up with
$$\frac{\pi h^3}{6}.$$
Note that the answer turned out to be independent of the radius $R$ of the sphere! |
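A numerical cross-check of this independence (not part of the original solution; the helper name is mine): integrating $(\ast)$ by a midpoint Riemann sum and subtracting $(\ast\ast)$ gives $\pi h^3/6$ regardless of $R$.

```python
import math

def napkin_ring_volume(R, h, steps=100_000):
    k = h / 2
    dx = 2 * k / steps
    # Midpoint-rule integral of pi*(R^2 - x^2) over [-k, k], i.e. formula (*).
    sphere_part = sum(math.pi * (R**2 - (-k + (i + 0.5) * dx) ** 2) * dx
                      for i in range(steps))
    hole = math.pi * (R**2 - k**2) * 2 * k  # formula (**)
    return sphere_part - hole

h = 1.0
print(napkin_ring_volume(2.0, h), napkin_ring_volume(10.0, h), math.pi * h**3 / 6)
```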
Given T:R^n -> R^n , T(vn)=v1 and for $ n-1\ge i \ge 1$ $T(v_i)=v_{i+1}$ T is diagonalizable ? if yes ,than show [T]E Diagonal | Put $v_{n+1}=v_1$. Let $\zeta$ be any of $n$ complex roots of $1$ and $v=\sum_{k=1}^n \zeta^{-k} v_k$. Then
$$Tv=\sum_{k=1}^n \zeta^{-k} v_{k+1}=\zeta\sum_{k=1}^n \zeta^{-k-1}v_{k+1}=\zeta v.$$
So $\zeta$ is an eigenvalue of $T$. Since $T$ has $n$ distinct eigenvalues, its matrix is diagonalizable over $\Bbb C$. |
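In coordinates, $T$ is the cyclic shift of the basis vectors, and the computation above can be checked numerically (a sketch of mine, not part of the answer):

```python
import cmath

n = 5
for m in range(n):
    zeta = cmath.exp(2j * cmath.pi * m / n)
    # v = sum_k zeta^{-k} v_k: coordinate at v_{k+1} (index k) is zeta^{-(k+1)}.
    v = [zeta ** (-(k + 1)) for k in range(n)]
    # T sends v_k to v_{k+1} cyclically, so Tv's coordinate at v_{k+1} is v's at v_k.
    Tv = [v[(k - 1) % n] for k in range(n)]
    assert all(abs(Tv[k] - zeta * v[k]) < 1e-9 for k in range(n))
print("each 5th root of unity is an eigenvalue of the cyclic shift")
```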
Does graph theory have application in pure mathematics? | Graph Theory should be Pure Mathematics because we can study its elements in the way we would study any other geometric object in Geometry. Things get out of control when people call Inequalities Applied Mathematics or Analysis of Convex Functions Convex Analysis, but they do that... In my point of view, Inequalities and Convex Analysis are a horrendous mistake. An inequality involving Real Analysis should be studied inside of Real Analysis: if we call that inequalities, then it is a branch of Analysis, not Applied Mathematics. If we study a real convex function, that could be inside of Real Analysis. In the same way, Graph Theory should be inside of Geometry if what we are doing there is dealing with geometric objects in the traditional way: studying the shape, producing reasoning that leads to calculations, and so on. |
Why is it that large linear SVM coefficients denote the most important features? | My colleague Doron convinced me in the following explanation:
Suppose we are trying to classify documents to the "is_fashion" target variable - are they dealing with fashion or not? We have only 2 features x is TFIDF score of the word "dress", and y is the TFIDF score of the word "new", and each document can be represented as a point in this 2D space.
Suppose that the y feature isn't relevant at all to the target, while the x feature is a great predictor, so that every document with an x score above 3 deals with fashion, and every document with a score below 3 does not. The SVM separating line would look like 1x + 0y - 3 = 0, or x = 3.
The classification of a new document would be 1*x + y*0 - 3, which is nothing but the projection (dot product) of the document on the perpendicular to the separating line. This example clearly shows how a feature with a 0 coefficient does not affect the classification.
Let's complicate a bit. What if the word "new" is a bit relevant to fashion, but not as relevant as "dress"? The separating line (black in the image) would now be 5*x + 1*y - 3 = 0, or y = -5x + 3. The perpendicular (red in the image) would be y = x/5 + 3
Classification would now be a projection on the red line. The point <2,2> would get a classification score of 5*2 + 1*2 - 3 = 9. If our important feature x is decreased by 1, we would need to move 5 steps higher in the y dimension to get the same score, to the point <1,7>, which also gets a score of 5*1 + 1*7 - 3 = 9.
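The two scores in the example can be checked directly (the coefficients are the hypothetical ones from the example above):

```python
def score(x, y):
    # Decision function of the separating line 5x + 1y - 3 = 0.
    return 5 * x + 1 * y - 3

print(score(2, 2), score(1, 7))  # 9 9: losing 1 unit of x is offset by 5 units of y
```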
How to find sum of powers from 1 to r | As I said in the comment, it is called geometric series:
$$a^0+a^1+a^2+\ldots + a^n = \sum_{k=0}^n a^k = \frac{a^{n+1}-1}{a-1}$$
So in your case we do not begin with the exponent $0$ but with $1$, so we just subtract $a^0=1$:
$$a^1 + a^2+a^3 + \ldots + a^n = \frac{a^{n+1}-1}{a-1} - 1$$
In your concrete case $a=3$ and $n=3$:
$$3^1+3^2+3^3 = \frac{3^{4}-1}{3-1} -1 = 39$$
You can derive it as follows:
Let $$S = a^0 + a^1 + \ldots + a^n.$$ Therefore
$$ a\cdot S = a^1 + a^2 + \ldots + a^{n+1}.$$
So $$(a-1)S = aS-S = a^{n+1}-a^0 = a^{n+1} -1,$$ which after dividing by $(a-1)$ results in:
$$S = \frac{a^{n+1}-1}{a-1}$$ |
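A quick confirmation of both the concrete case and the general formula (the helper name is mine):

```python
def geom_sum_from_1(a, n):
    # a^1 + a^2 + ... + a^n = (a^(n+1) - 1) / (a - 1) - 1; exact integer division.
    return (a ** (n + 1) - 1) // (a - 1) - 1

assert geom_sum_from_1(3, 3) == 3 + 9 + 27 == 39
for a in (2, 3, 5):
    for n in range(1, 10):
        assert geom_sum_from_1(a, n) == sum(a ** k for k in range(1, n + 1))
print("formula verified")
```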
For nonempty set of reals bounded above, show its supremum is in the closure. | You can give a constructive proof.
Let $\epsilon \gt 0$ be given. Then since $s = $ sup $B$, $s - \epsilon$ is not an upper bound of $B$. Hence for any $\epsilon \gt 0$, we can find an $x \in B$ with $s - \epsilon \lt x \le s$, else $s- \epsilon$ would be an upper bound. This says precisely that every neighborhood of $s$ contains some $x \in B$. Thus $s$ is a limit point of $B$, so s is in the closure of $B$. |
Conditional Expectation and Radon-Nikodym | I actually think that I have it. $\frac{d\nu'}{d\mathbb{P}'}$ is measurable wrt $\mathcal{F}$. If it is also simple then for $B \in \mathcal{F}$, $\int_B \frac{d\nu'}{d\mathbb{P}'} d\mathbb{P}'$ = $\int_B \frac{d\nu'}{d\mathbb{P}'} d\mathbb{P}$ by the fact that $\mathbb{P}'$ is the restriction of $\mathbb{P}$ to $\mathcal{F}$. Even if $\frac{d\nu'}{d\mathbb{P}'}$ is not simple it can be approximated from below by simple functions in $\mathcal{F}$ (since $\frac{d\nu'}{d\mathbb{P}'}$ is a.s. positive). The result follows by two applications of monotone convergence.
Show that a sequence has a uniformly convergent subsequence. | You already proved that $\{Kf_n\}$ is equicontinuous and equibounded. Recall that these are precisely the conditions which should be met in order to conclude the existence of a uniformly convergent subsequence.
I think that you confused the Ascoli-Arzela theorem with the following corollary of it:
If $X$ is a compact and $S \subset C(X)$, then $S$ is compact $\iff$ $S$ is equibounded, equicontinuous and closed. |
Integral of bounded function with limit zero at $\pm \infty$ | Imagine a simple example in 1 dimension $$f(x)=\left\{\begin{array}{c}1/|x|,\quad x>1\\
0,\quad \mbox{otherwise}\end{array}\right.$$
This function is bounded, and its limit at infinity is zero. However, $\int_{-\infty}^{\infty}f(x)dx=\infty$. The condition $\lim_{x\to\pm\infty}f(x)=0$ is a necessary condition for the convergence of the integral, but it is not sufficient.
In order to get convergence, you need an extra asymptotic condition on $f(x)$: it has to go to zero faster than $1/x$. For example, the function
$$f(x)=\left\{\begin{array}{c}1/|x|^p,\quad x>1\\
0,\quad \mbox{otherwise}\end{array}\right.$$
which converges for all $p\in(1,\infty)$.
Partial derivative of dz/dx wrt theta | I assume you can exchange partial derivatives, so that:
$\frac{\partial}{\partial \theta} \frac{\partial z}{\partial x} = \frac{\partial}{\partial x}\frac{\partial z}{\partial \theta}$
Then you can use your answer from part (i) to plug into $\frac{\partial z}{\partial \theta}$, and take the partial of that with respect to $x$.
$ f $ is open if and only if $f^{-1}(\mathrm{Int}(B))\supset \mathrm{Int}(f^{-1}(B))$ | Let $O$ be an open set; you want to prove that $f(O)$ is open. Suppose otherwise. Then there is some $x\in O$ such that $f(x)\notin f(O)^\circ$. In other words, $x\notin f^{-1}\bigl(f(O)^\circ\bigr)$. But $x\in O\subset f^{-1}\bigl(f(O)\bigr)$. Since $O$ is open $x\in\bigl(f^{-1}\bigl(f(O)\bigr)\bigr)^\circ$. So, taking $B=f(O)$ this proves that $f^{-1}(B^\circ)\not\supset\bigl(f^{-1}(B)\bigr)^\circ$. |
Trying to prove that the set of global minimisers of a convex function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ is a convex set | If $\gamma$ is the infimum of the function then $\gamma \leq f(x)$ is satisfied for any $x$. So $x \in S$ iff $f(x)=\gamma$, which means $f$ is minimized at $x$. Now if $x,y\in S$ and $\lambda\in[0,1]$, convexity gives $f(\lambda x+(1-\lambda)y)\le \lambda f(x)+(1-\lambda)f(y)=\gamma$, so equality holds and $\lambda x+(1-\lambda)y\in S$; hence $S$ is convex.
Does the sum of $\sum_{n=1}^{\infty}{\frac{2}{n^2+2n}}$ depend at which $n$ the series starts? | Consider a telescopic series, (This one is easier to understand.)
$$\sum_{n=m}^{\infty}{\frac{1}{n}-\frac{1}{1+n}}$$
$$=\frac{1}{m}$$
Wouldn't it depend on what $m$ is?
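Numerically (my check, not part of the answer), the partial sums indeed settle at $1/m$ for each starting index $m$:

```python
# Partial sums telescope: sum_{n=m}^{N} (1/n - 1/(n+1)) = 1/m - 1/(N+1) -> 1/m.
for m in (1, 2, 5):
    partial = sum(1 / n - 1 / (n + 1) for n in range(m, 100_000))
    print(m, partial)  # ≈ 1/m
```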