title | upvoted_answer |
---|---|
Pointwise convergence implies uniform convergence | I think your argument works; the only thing that isn't completely justified is why you can conclude that since $\hat{f}_n$ converges uniformly to $f$, so does $f_n$. And where do you use that there are only a finite number of points of discontinuity? (The example below shows that this assumption is necessary.) You can complete the argument as follows: given $\epsilon>0$ there exists $\hat{N}$ such that
$$ n>\hat{N} \ \Rightarrow \ \forall x \ : \ |\hat{f}_n(x)-f(x)|<\epsilon$$
Label the points of discontinuity $\{y_1,\ldots,y_k\}$. By pointwise convergence we know that for each point of discontinuity $y_j$, there exists an $N_j$ such that
$$ n>N_j \ \Rightarrow \ |f_n(x)-\hat{f}_n(x)|<\epsilon$$
But now taking $N=\max(\hat{N},N_1,\ldots,N_k)$ should give
$$ n>N \ \Rightarrow \ |f_n(x)-f(x)|<2\epsilon$$
EDIT: The following answers a misunderstood version of the question, but since reference to the example is made above, I will leave it up. I missed the assumption that the points of discontinuity are independent of $n$ and finite in number.
I am not convinced that this is necessarily true. How about the sequence of functions
$$ f_n(x) = \begin{cases} 0 & \text{ if } x\in ]-\infty,-1/n[\cup [0;\infty[\\
1/2+1/n & \text{ if }x\in [-1/n;0[ \\
\end{cases}$$
It seems to me that this sequence of functions will converge pointwise to $0$ and be strictly decreasing, but it is clearly not uniformly convergent, since
$$ \forall n : \ \max(\{|f_n(\delta) - f_n(0)| \ \big| \ \delta \in [-1;0]\}) >1/2$$ |
Deriving formula for rolling a run of q consecutive identical rolls on a k-sided die within n rolls | Let me just give you some pointers to get started. I haven't worked this out, so I don't know if you'll find a "nice" formula, but I can tell you that there's a better way than what you've been doing.
Precisely because of the difficulties with inclusion and exclusion that you've encountered, it's easier to count the sequences that do not have a run of $q$ identical outcomes. Call a sequence "good" if no run of $q$ identical outcomes occurs in it. Let $S_j^{(n)}$ be the number of good sequences of length $n$ that end in a maximal run of $j$ identical outcomes. For example, $S_3^{(10)}$ is the number of sequences of length $10$ that end in a run of $3$ identical outcomes, but not $4$ identical outcomes. Define $$S^{(n)}=\sum_{j=1}^{q-1}S_j^{(n)}$$ so that it is $S^{(n)}$ that we want to compute.
We have $$
\begin{align}
S_1^{(1)}&=k\\
S_j^{(1)}&=0,\ j>1\\
S_j^{(n)}&=S_{j-1}^{(n-1)}, 1<j<q\\
S_1^{(n)}&=(k-1)S^{(n-1)}
\end{align}
$$
The first two equations are clear.
For the third equation, note that a good sequence of length $n$ that ends in a run of $j>1$ can only arise from a good sequence of length $n-1$ that ends in a run of $j-1;$ of course, the next outcome must be the same as the one in the run, so it can only arise in one way.
For the fourth equation, a good sequence of length $n$ ending in a run of exactly $1$ arises from any good sequence of length $n-1$ followed by any of the $k-1$ outcomes that are different from the last outcome of that sequence.
For given values of $n,k,q$ this allows you to work out the value of $S^{(n)}$ quickly, especially if you use a computer. Then you just need to divide by $k^n,$ and subtract from $1.$
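For what it's worth, here is a small sketch of that computation (my own illustration, not part of the original answer; the values of $k$, $q$ and $n$ below are placeholders):

public class RunCount {
    public static void main(String[] args) {
        int k = 6, q = 3, n = 10;               // placeholder example values
        double[] s = new double[q];             // s[j] holds S_j^{(m)} for j = 1..q-1
        s[1] = k;                               // S_1^{(1)} = k, and S_j^{(1)} = 0 for j > 1
        for (int m = 2; m <= n; m++) {
            double total = 0;                   // S^{(m-1)} = sum of the S_j^{(m-1)}
            for (int j = 1; j < q; j++) total += s[j];
            double[] next = new double[q];
            next[1] = (k - 1) * total;          // S_1^{(m)} = (k-1) S^{(m-1)}
            for (int j = 2; j < q; j++) next[j] = s[j - 1];   // S_j^{(m)} = S_{j-1}^{(m-1)}
            s = next;
        }
        double good = 0;                        // S^{(n)}: good sequences, no run of q
        for (int j = 1; j < q; j++) good += s[j];
        System.out.println(1.0 - good / Math.pow(k, n));      // P(some run of q within n rolls)
    }
}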
I suggest you try working out examples with $q=2$ and small values of $k$ to begin with, and see if you find any patterns.
Good luck.
EDIT
Corrected a typo. There are $k^n$ possible sequences of length $n$ of course, not $q^n$. |
Is it possible in some ring $1=0$? | The only ring in which $0=1$ is the trivial ring, the ring with 1 element. Proof: Let $x\in R$. Then $x=1x=0x$, and $0x=(0+0)x=0x+0x$, hence $0x=0$, so $x=0$ for all $x\in R$.
Some authors don't allow this to be a ring, but most do. |
Making $f$ differentiable | $e^h = 1 + h + o(h)$, so your last equation implies $\lim\limits_{h \to 0} \left(\frac{a - b}{h} + 1 + o(1)\right) = 2b$.
The second and the third terms have limits, so for the limit of the sum to exist, the first term should also have a limit. But it has a limit only if $a - b = 0$, so we get $a = b$. Substituting, we get that the limit is equal to $1$, thus $2b = 1$.
Finishing, we have $a = b = \frac{1}{2}$. |
Extending $\sum_{n=0}^\infty s^{n^2}$ beyond its natural boundary | Well, I'm a month late, but I wanted to go further with something Daniele mentioned in the comments. I've been considering something similar for a while, and I think the Laurent strategy Daniele mentioned is close to what I view as the correct answer.
First, let's look at how something similar to the Laurent series can be used to analytically continue some of the more commonly seen functions. For instance, if we wished to continue
$$f(x) = \sum_{n=0}^\infty nx^n$$
We could simply use the fact that $\sum_{n=0}^\infty nx^n = \sum_{n=0}^\infty n^{-(-1)}x^n = Li_{-1}(x)$, and Li is defined for all x values. We can extend this to finite sums of powers of n, so we could see that
$$f(x) = \sum_{n=1}^\infty (n+3n^2-n^3)x^n = Li_{-1}(x) + 3Li_{-2}(x)-Li_{-3}(x)$$
But why have I introduced polylogarithms in relatively simple problems? Polylogarithms happen to have an incredibly fortunate method for continuation, which is that, for $|x|>1$,
$$Li_{-k}(x)=(-1)^{1+k}\sum_{n=1}^\infty n^kx^{-n}$$
This tells us that for the previous problems, we could have done
$$\sum_{n=1}^\infty (n+3n^2-n^3)x^n = \sum_{n=1}^\infty nx^n+3\sum_{n=1}^\infty n^2x^n-\sum_{n=1}^\infty n^3x^n$$
can be continued to
$$\sum_{n=1}^\infty nx^{-n}-3\sum_{n=1}^\infty n^2x^{-n}-\sum_{n=1}^\infty n^3x^{-n}=\sum_{n=1}^{\infty}\left(n-3n^{2}-n^{3}\right)x^{-n}$$
when $|x|>1$, and when $|x|<1$ we can use the regular sum. This fact gives us a powerful and easy way to extend functions of the form $\sum_{n=1}^\infty f(n)x^n$ where $f(n)$ has a power series.
$$\sum_{n=1}^\infty f(n)x^n = \sum_{n=1}^\infty \sum_{k=0}^\infty x^nb_kn^k $$
if one allows blindly interchanging the summations
$$\sum_{k=1}^\infty b_k\sum_{n=0}^\infty x^nn^k = \sum_{k=1}^\infty b_k(-1)^{1+k}\sum_{n=0}^\infty x^{-n}n^k = -f(0)+\sum_{n=1}^\infty x^{-n}\sum_{k=1}^\infty b_k(-n)^k = -f(0)+\sum_{n=1}^\infty x^{-n}f(-n)$$
This gives us a powerful method to continue functions which have a 'natural' power series, but it struggles with the fact that $f(n)$ doesn't carry enough information to uniquely define a power series.
In particular, for this question, one possible valid definition for $f(n)$ is
$$f(x) = \frac{\sin\left(\pi\left(x\right)\right)}{\pi\left(x\right)}+\sum_{k=1}^{\infty}\left(\frac{\sin\left(\pi\left(x-k^{2}\right)\right)}{\pi\left(x-k^{2}\right)}+\frac{\sin\left(\pi\left(x+k^{2}\right)\right)}{\pi\left(x+k^{2}\right)}\right)$$
which corresponds to what Daniele got, since $f(n) = f(-n)$, and so this is one reason why Daniele's method happened to get something that works well.
I think there are at least a few reasons to prefer something like this for continuing this gap series. For one, it sort of generalizes square numbers. The difference table
$$\begin{array}{ccccccccccccc} & & 2 & & 2 & & 2 & & 2 & & 2 & & \\ & -5 & & -3 & & -1 & & 1 & & 3 & & 5 & \\ 9 & & 4 & & 1 & & 0 & & 1 & & 4 & & 9 \end{array}$$
seems to suggest that an extension of squared numbers should be even, whereas doing the same process for $n^3$ gives negatives on one side and positives on the other, which suggests it should be odd. Indeed, choosing $f(n)$ even when $n^k$ is even, and $f(n)$ odd when $n^k$ is odd, produces continuous extensions, while doing it the opposite way does not.
Further, one can show that the two functions, $$\sum_{n=0}^\infty x^{(n^2)} \text{ and } \sum_{n=0}^\infty x^{(-n^2)}$$ have derivatives that are all zero and agree at $-1$, by applying some regularizations to allow the functions to converge all the way up to $-1$. The functions also just seem to mesh together in a natural way.
I think this is only a first step, though, since the uniqueness problem is not addressed in any rigorous fashion, but I think there are some good reasons to think this is a good option for the correct continuation. Let me know if you guys have any ideas on this. |
Solving a nonhomogeneous recurrence relation | I would treat it as the real part of
$$u_{k+1} = u_k + u_{k-1} + a e^{i\omega k} = u_k + u_{k-1} + a c^k$$
(where $c = e^{i \omega}$), solve this, and then take the real part of the solution. |
Is the fiber product of the connected component of a group scheme connected? | Thanks to some outside help I figured out the answer to the question. Actually everything needed was here http://stacks.math.columbia.edu/download/varieties.pdf.
Since the identity is a $k$-rational point contained in $G^0$, and $G^0$ is connected, we can apply Lemma 5.14. Then $G^0$ is geometrically connected, which allows us to conclude using Lemma 5.4. |
Determinant of matrices with independent indeterminates entries | Both statements are immediate consequences of the fact that the determinant $D_n$ is an irreducible polynomial, as is proved in this question. |
How to show the optimal condition of $f(\alpha) = \frac{R^2+G^2\sum_{i=1}^k \alpha_i^2}{2\sum_{i=1}^k \alpha_i}$ | Let $A=(\alpha_1,\alpha_2,\dots,\alpha_k)$ be any optimal solution, and suppose that the values aren't equal.
Now consider all possible permutations of $A$: they must all obtain the same optimal value by symmetry.
Now consider the mean of those permutations. This mean will have the same value for each $\alpha_i$. The mean is a convex combination, and the function is convex, so the value at this new point must be less than or equal to the value at $A$. You've just recovered an optimal solution with all equal entries.
This does not prove that an optimal solution exists. Nor does it prove that the only optimal solutions have equal entries. It just proves this: that if there is an optimal solution, there must be at least one with all equal entries. However, now that you've reduced it to a univariate problem, you shouldn't have difficulty establishing the optimal value, probably analytically.
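(For what it's worth, a quick sketch of that univariate computation, assuming $R,G>0$ and all $\alpha_i=\alpha>0$: the objective becomes $f(\alpha)=\frac{R^2+G^2k\alpha^2}{2k\alpha}=\frac{R^2}{2k\alpha}+\frac{G^2\alpha}{2}$, which is minimized at $\alpha=\frac{R}{G\sqrt{k}}$, with optimal value $\frac{RG}{\sqrt{k}}$.)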
If you wish to prove that solutions with unequal $\alpha_i$ do not exist, you'll want to show that the function is strictly convex. I'm pretty sure that's the case at least for positive $R$ and $G$. |
Products of a polynomial and its conjugate | Yes, it is true. The $k$th coefficient is$$a_0\overline{a_k}+a_1\overline{a_{k-1}}+\cdots+a_k\overline{a_0},$$which is real because it is equal to its own conjugate. |
Mixing binomial distributions | As others have stated, your problem is not very clearly expressed, and my answer might not answer the problem you thought you wanted to ask.
Assuming you are interested in a random variable $S$ whose distribution is the $(.43, .36, .21)$ mixture of the three indicated binomial distributions,
with $$P(S\in A)= .43P(X_1\in A) + .36P(X_2\in A) + .21P(X_3\in A)$$ for any set $A$, you can work out the exact value of $P(S\le2)$ by working out $P(X_i\le2)$ for each $i$ (the case $i=1$ is trivial, and the others not difficult), multiplying the probabilities by the coefficients, and adding.
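If useful, here is a small code sketch of that exact computation; the three binomial parameters are not given above, so the sizes and success probabilities below are placeholders:

public class MixtureCdf {
    // P(X = x) for X ~ Binomial(n, p)
    static double binomPmf(int n, double p, int x) {
        double c = 1.0;                            // binomial coefficient C(n, x)
        for (int i = 0; i < x; i++) c = c * (n - i) / (i + 1);
        return c * Math.pow(p, x) * Math.pow(1 - p, n - x);
    }
    // P(X <= x) for X ~ Binomial(n, p)
    static double binomCdf(int n, double p, int x) {
        double s = 0;
        for (int k = 0; k <= x; k++) s += binomPmf(n, p, k);
        return s;
    }
    public static void main(String[] args) {
        int[] n = {1, 5, 10};                      // placeholder sizes
        double[] p = {0.5, 0.5, 0.5};              // placeholder success probabilities
        double[] w = {0.43, 0.36, 0.21};           // mixture weights from the problem
        double prob = 0;
        for (int i = 0; i < 3; i++) prob += w[i] * binomCdf(n[i], p[i], 2);
        System.out.println("P(S <= 2) = " + prob); // exact mixture probability
    }
}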
You also have two methods of approximating $P(S\le2),$ namely, mix the normal approximations, or use a single normal approximation as in your problem statement.
It is not clear to me which would be more accurate, nor whether either is cheaper than figuring out your answer exactly. |
Spectral decomposition and equivalence | Use Parseval's identity and the fact that
$$
\Bigl|\frac{\lambda_n^2}{\lambda_n^2+1}\,a_n\Bigr|\le|a_n|.
$$ |
Right Triangle Trig: Why are the Angles the Same? | It must be noted that they are corresponding angles. Under the "Corresponding Angles Axiom", we state that
"When a transversal (here, the line $t$) cuts two parallel lines, then the corresponding angles ($\alpha$ and $\alpha_1$) are equal."
This is an axiom, and can be visually shown by superposition.
In your case, the line through $K$ at $HF$ is parallel to the line $GF$, and line $GH$ is the transversal. Hence, the angles under consideration are equal. |
Fourier Series: going from $a_n$ and $b_n$ to $c_n$ | We need consider only one $n > 0$ (the case $n = 0$ is easily verified). In the exponential form, we consider the two terms with indices $n$ and $-n$,
$$\begin{align}
c_n e^{2\pi inx/P} + c_{-n} e^{-2\pi inx/P} &= c_n (\cos (2\pi nx/P) + i\sin (2\pi nx/P)) + c_{-n}(\cos (2\pi nx/P) - i \sin (2\pi nx/P))\\
&= (c_n + c_{-n})\cos (2\pi nx/P) + i(c_n - c_{-n})\sin (2\pi nx/P),
\end{align}$$
so $a_n = c_n + c_{-n}$, and $b_n = i(c_n - c_{-n})$. In the trigonometric form, we consider
$$\begin{align}
a_n \cos (2\pi nx/P) + b_n\sin (2\pi nx/P) &= \frac{a_n}{2}\left(e^{2\pi inx/P} + e^{-2\pi inx/P}\right) + \frac{b_n}{2i}\left(e^{2\pi inx/P} - e^{-2\pi inx/P}\right)\\
&= \frac{a_n - ib_n}{2}e^{2\pi inx/P} + \frac{a_n + ib_n}{2}e^{-2\pi inx/P}
\end{align},$$
so we get $c_n = \frac{a_n-ib_n}{2}$ and $c_{-n} = \frac{a_n + ib_n}{2}$. |
Using only logical symbols and "$+, \cdot$" translate into a first-order logic: "$c$ is not the greatest common factor of $a$ and $b$". | That looks pretty much correct, except that you've used the symbol $\le$, which is not a logical symbol and is not $+$ or $\cdot$. However, you can express $d \le c$ using only $+$, $\cdot$ and logical symbols by observing that
$$d \le c \quad \Leftrightarrow \quad (\exists k)(c+1=d+k)$$ |
Value of $\sum\limits^{\infty}_{n=1}\frac{\ln n}{n^{1/2}\cdot 2^n}$ | Consider
$$f(s):=\sum_{n=1}^\infty \frac {\left(\frac 12\right)^n}{n^s}=\operatorname{Li}_s\left(\frac 12\right)$$
with $\operatorname{Li}$ the polylogarithm. Then (since $\,n^{-s}=e^{-s\ln(n)}$):
$$f'(s)=\frac d{ds}\operatorname{Li}_s\left(\frac 12\right)=-\sum_{n=1}^\infty \frac {\ln(n)}{n^s}\left(\frac 12\right)^n$$
giving minus your answer for $s=\frac 12$.
You may use the integrals defining the polylogarithm to get alternative formulations, but don't hope for much simpler expressions... |
Sketching the graph of a trig function | There's a much easier way of sketching such a curve.
My claim is that we can write $\cos x + \sin x$ as $\sqrt{2}\cos\left(x-\frac{\pi}{4}\right)$.
The graph is a shifted cosine curve, moving between $-\sqrt{2}$ and $\sqrt{2}$. The shift is to the right by $\frac{\pi}{4}$.
The aim is to write it in the form $R\cos(x - \alpha)$ for suitable $R$ and $\alpha$. First apply the formula:
$$R\cos(x-\alpha) \equiv R\cos x \cos \alpha + R \sin x \sin \alpha$$
We want $\cos x + \sin x \equiv R\cos(x-\alpha)$ and so we need to find an $R$ and an $\alpha$ for which $R\cos\alpha = 1$ and $R\sin\alpha = 1$. We can find $R$ and $\alpha$ by using Pythagoras' Theorem and some trigonometry. First,
divide both of the terms:
$$\frac{R\sin\alpha}{R\cos\alpha} \equiv \tan \alpha = \frac{1}{1} = 1$$
It follows that $\alpha = \frac{\pi}{4}$. To find $R$, we can square both terms. First notice that
$$(R\cos\alpha)^2 + (R\sin\alpha)^2 = 1^2 + 1^2$$
Expanding and then using the identity $\cos^2\alpha + \sin^2\alpha \equiv 1$ gives $R^2 = 2$, hence $R = \sqrt{2}$. We have:
$$\cos x + \sin x \equiv \sqrt{2}\cos\left(x-\frac{\pi}{4}\right)$$ |
Is $dv$ only approximate of $(dv/dx)dx$? | The answer you cite is using the symbols $dV$ and $dx$ in two senses. What it means is something more like
$$\Delta V \approx \frac{dV}{dx} \Delta x$$
where $\Delta V$ is the change in $V$ as an ordinary real number. By definition
$$\frac{dV}{dx} = \lim_{\Delta x \to 0} \frac{\Delta V}{\Delta x}$$
and hence it must be the case that we can make $\displaystyle\frac{\Delta V}{\Delta x}$ as close as we like to $\displaystyle\frac{dV}{dx}$ by making $\Delta x$ sufficiently small.
Now, the percentage changes are $\displaystyle \frac{\Delta x}{x}$ and $\displaystyle \frac{\Delta V}{V}$. For the problem at hand it is useful to write the relation $V = 2x^3$ as
$$\ln V = \ln(2x^3) = \ln 2 + 3\ln x$$
Useful, because when we now differentiate both sides with respect to $x$,
$$\frac 1V \frac{dV}{dx} = \frac 3x$$
or
$$\frac 1V \frac{\Delta V}{\Delta x} \approx \frac 3x \quad\text{ by the logic above }$$
and thus
$$\frac{\Delta V}{V} \approx 3\frac{\Delta x}{x} \, .$$
Hence a 2% change in $x$ is (approximately) equivalent to a 6% change in $V$. |
Dense sets must be open in irreducible space? | Consider $I\subset \mathbb{C}$ an infinite countable subset, for example $\mathbb{N}$. Then $\mathbb{C}\setminus I$ is dense in $\mathbb{C}$, but it is not open in the Zariski topology, since $I$ cannot be the zero set of finitely many polynomials. |
Relation between steps and turns in a simple symmetric random walk | The expectation does not exist (i.e., $E|2\sigma-\tau|=+\infty$). To see it, condition upon $\tau=n$ (an event of probability about $n^{-3/2}$). Fix now a very small $\alpha>0$ and consider the sequence $S_{4k}$, $k<n/4$.
Claim Typically there are at least $\alpha n$ values of $k$ with $S_{4k}=S_{4(k+1)}$ ("level" intervals of length $4$).
Proof The total number of admissible paths is about $2^nn^{-3/2}$. Consider all paths in which the condition in the claim is violated. Then we have at least $\frac n4-\alpha n$ pieces of length $4$ that cannot be "level", so the total number of such paths is at most ${n/4\choose \alpha n}10^{n/4-\alpha n}16^{\alpha n}$, which gives a $2^{-cn}$ reduction over the trivial bound $2^n$ if $\alpha>0$ is small enough.
Now consider the "good part" of the probability space and condition upon the values of $S_{4k}$. Then pick up $\frac\alpha 2 n$ separated "level" intervals of length $4$ and condition upon all values $S_m$ except the ones inside those intervals. Then the contributions of those intervals to the total number of turns become independent integer-valued bounded non-constant random variables, so their sum has a constant probability to deviate from any given number by $c\sqrt{\alpha n}$, whence $E[1_{\tau=n}|2\sigma-\tau|]\ge c/n$, so the series diverges. |
Convergence of $(1+f(n))^{g(n)}$ when $f(n) \to 0$ and $g(n) \to +\infty$ | We have that
$$(1+f(n))^{g(n)}=e^{g(n)\log(1+f(n))}$$
and since
$$g(n)\log(1+f(n))=g(n)\cdot f(n)\cdot \frac{\log(1+f(n))}{f(n)}$$
with
$$\frac{\log(1+f(n))}{f(n)}\to 1$$
it all boils down to $g(n)f(n)$. |
Is there a description of the underlying space of an embedded submanifold? | (Embedded) submanifolds are locally closed. The constant rank theorem tells us that if $E$ is an $n-k$ dimensional submanifold of a manifold $X$, then for every $p \in E$, there exists a chart $(U,\phi)$ of $X$ containing $p$, such that
$$\phi(U \cap E) = \phi(U) \cap \mathbb{R}^{n-k}$$
where $\mathbb{R}^{n-k}$ is identified with the subspace $(x_1, ... , x_{n-k},0, ... , 0)$ of $\mathbb{R}^n$. Since $\phi(U) \cap \mathbb{R}^{n-k}$ is closed in $\phi(U)$, $U \cap E$ is closed in $U$.
So $E$ is covered by open sets $U$ of $X$ such that for each $U$, $E \cap U$ is closed in $U$. This means that $E$ is locally closed in $X$. |
Time derivative of inverse of flow diffeomorphisms | You don't have the group property $\varphi_{t+s} = \varphi_t\circ \varphi_s$ if $V$ is time-dependent. This holds only if $V$ is autonomous. The only thing you have is that $\varphi_0 = {\rm Id}$. It's also good to note that $({\rm d}/{\rm d}t)(\varphi_t) = V \circ \varphi_t$ by definition. But in any case, this is indeed an instance of the chain rule, as follows: fixed $p \in M$, we have the equality $$\varphi_t(\varphi_t^{-1}(p)) = p.$$Of course we want to take the derivative of both sides with respect to $t$. For the right side life is great and one gets zero. For the left side, the classical trick applies: let $f(t,s) = \varphi_t^{-1}(\varphi_s(p))$ and note that what we want is $$\frac{{\rm d}}{{\rm d}t} f(t,t) = \frac{\partial f}{\partial t}(t,t) + \frac{\partial f}{\partial s}(t,t).$$Now, we have that $$\frac{\partial f}{\partial t}(t,s) = \left(\frac{{\rm d}}{{\rm d}t}\varphi_t^{-1}\right)\Bigg|_{\varphi_t^{-1}(\varphi_s(p))} \implies \frac{\partial f}{\partial t}(t,t) = \left(\frac{{\rm d}}{{\rm d}t}\varphi_t^{-1}\right)\Bigg|_p,$$and also that $$\frac{\partial f}{\partial s}(t,s) = (\varphi_t^{-1})_\ast\left(\left(\frac{{\rm d}}{{\rm d}s}\varphi_s\right)\Bigg|_p\right) \implies \frac{\partial f}{\partial s}(t,t) = (\varphi_t^{-1})_\ast\left(\left(\frac{{\rm d}}{{\rm d}t}\varphi_t\right)\Bigg|_p\right)$$Omitting $p$, we have that $$\frac{{\rm d}}{{\rm d}t}\varphi_t^{-1} = -(\varphi_t^{-1})_\ast \left(\frac{{\rm d}}{{\rm d}t}\varphi_t\right)$$as wanted. |
How to select a minimum subgraph of a graph containing any k nodes | I believe this is called the $k$-size cut problem. In general, it is NP hard though approximable since it is submodular. There was a discussion in the CS theory SE which may be useful to you, found here.
Of course, note that the problem there is the same problem except that $w \mapsto -w$, since we want to flip maximization/minimization, etc, etc. |
Riemann Integral, show $f(x) \in R(x)$ on $ [0,2]$ | The original partition may not contain $1$. You'll note that they originally take an arbitrary partition of $[0,2]$. For example, it could have been $P =\{0,2\}$ which doesn't contain $1$.
We want to show that given $\epsilon>0$, there is a partition $P$ of $[0,2]$ so that $U(P,f)-L(P,f) < \epsilon$. If we can show that there is a partition $P$ such that $U(P,f) - L(P,f) = 0$, then we will have certainly achieved our goal. The proof is being succinct by not stating the goal of the proof explicitly. |
Using matrices to transform a graph | You are correct. You have to substitute something for $x$ and $y$ to transform the equation, so you need to express these old variables in terms of the new variables $x'$ and $y'$. Doing that entails inverting $A$. |
How does mutliplication work? | It's using Horner's method and the distributive property on the expansion of the first factor in base 2.
In your example,
$11 = 1011_2 = 2^3+2^1+2^0 = (((1)\cdot2+0)\cdot2+1)\cdot2+1$
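In code, the same shift-and-add idea looks roughly like this (my own sketch, not part of the original answer):

public class HornerMultiply {
    static long multiply(long a, long b) {
        long result = 0;
        // scan the bits of a from most significant to least significant
        for (int i = 63; i >= 0; i--) {
            result = result * 2;               // shift the partial result
            if (((a >> i) & 1) == 1) {
                result = result + b;           // add b when the current bit of a is 1
            }
        }
        return result;
    }
    public static void main(String[] args) {
        System.out.println(multiply(11, 13));  // prints 143
    }
}

For example, multiply(11, 13) returns $143$. |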
Confusion over probability distribution | For a fixed vector $a$ of $1$'s and $0$'s, $\sum_{i=1}^n a_i M_i$ is a vector of independent binomial$(\sum_{i=1}^n a_i, 1/2)$ random variables (here $M_i$ is the $i^{th}$ column of $M$). Let $S$ be the set of all such vectors $a$, of which there are $2^n$, and the probability that $v$ is equal to any given one is $\frac{1}{2^n}$.
Thus for any vector $x$ of non-negative integers not exceeding $n$,
\begin{align}
P[u=x]
&= \frac{1}{2^n}\sum_{a \in S} P[u = x \mid v = a] \\
&= \frac{1}{2^n}\sum_{a \in S} P\left[\sum_{i=1}^n a_i M_i = x \right] \\
&= \frac{1}{2^n}\sum_{a \in S} \prod_{i=1}^n \binom{\sum_{j=1}^n a_j}{x_i}
\left(\frac{1}{2}\right)^{\sum_{j=1}^n a_j}
\chi\left\{x_i \leq \sum_{j=1}^n a_j \right\}
\end{align} |
Algorithm for optimal assignment of tasks to a team of people | This problem is $NP$-complete, even in the case of two people and where both people take the same amount of time for each task (i.e.: $n = 2$ and $a_i = b_i$ for all $i=1,\ldots,m$) since this is essentially the PARTITION problem. As such, it is unlikely that you will find a polynomial-time algorithm that solves the problem exactly. However, it is possible that suitable approximations and/or super-polynomial-time solutions might exist for your purposes. |
Image of the Zero matrix | Let $A$ be a real $m\times n$ matrix. Then $A$ defines a linear mapping $\mathbb{R}^{n}\rightarrow \mathbb{R}^{m}$, where the domain is $\mathbb{R}^{n}$ and the codomain is $\mathbb{R}^{m}$.
The image of $A$ is the set of all vectors in $\mathbb{R}^{m}$ that we get when we input any vector in $\mathbb{R}^{n}$ into the mapping, i.e.
$$\mathrm{im} (A)=\left\{A\mathbf{x}\ \vert\ \mathbf{x}\in \mathbb{R}^{n}\right\}.$$
In the case of the $m\times n$ zero matrix $\mathbf{0}$, we have
$$\mathrm{im} (\mathbf{0})=\left\{\mathbf{0}\mathbf{x}\ \vert\ \mathbf{x}\in \mathbb{R}^{n}\right\}.$$
Since $\mathbf{0}\mathbf{x}=\mathbf{0}\in \mathbb{R}^{m}$ for any $\mathbf{x}\in \mathbb{R}^{n}$, then the image of the zero map is indeed the trivial subspace $\left\{\mathbf{0}\right\}.$ |
Finding two sets that satisfy certain conditions | For the first two sets, consider the set of all even numbers and the set of all odd numbers. Given any even/odd number, there is always an odd/even number that's greater than that. Also, the two sets don't intersect.
For the second set, the first condition restricts the set to be $(-\infty, 1)$, and the second condition restricts it to be $(-1, \infty)$. Thus the set we want should be the intersection of the two. |
Proof of absolute convergence | Hint:
Use the series expansion of the exponential function: $$e^{a_n} -1 = a_n + \frac{1}{2}a_n ^2 +...$$ |
value of $f'(0)$ in composite function | This is a partial answer.
If $x$ is an integer, say $x = n$, the last relation in the OP is a linear difference equation that can be solved to obtain $f(n) = c + n$. Constant $c$ can be computed so that $f(f(n)) = 1 + n$, yielding $f(n) = n + \frac 12$. So, any such function $f$ must be linear, at least over the integers. Naturally, $f(x)=x + \frac 12$ is one possibility. Are there others? The answer is: apparently yes (see here).
It is easy to see that $f$ must be injective and, in fact, increasing. If we assume that $x_1 \ne x_2$ and $f(x_1)=f(x_2)=\alpha$ we obtain a contradiction ($f(\alpha)$ would have to be simultaneously $1+x_1$ and $1+x_2$).
Thinking about how such a function would look (periodic, increasing, coinciding with $x+\frac 12$ over the integers), I'm inclined to conclude that either $f$ is not differentiable at integer points or, being differentiable, its graph must be tangent to $x+\frac 12$.
My conjecture is that if $f$ is differentiable at $x=0$, the derivative is $f'(0)=1$. |
Formula for number of apple pieces to number of slices made? | If you want a single line, I think people generally do this by an indicator function or using the Kronecker delta:
$$
\delta_{ij} = \begin{cases} 1 & \text{ if } i=j \\ 0 & \text{ otherwise } \end{cases}
$$
Then, if you accept $0^0=1$, you can write:
$$ f(n) = (2n)^{1-\delta_{0n}} $$
Or, more simply:
$$ f(n) = \delta_{0n} + 2n(1-\delta_{0n}) $$ |
The link between the infinity norm of the difference between matrices and the infinity norm of the difference between these matrices inverted. | No. Suppose that $A_n=\frac1n\operatorname{Id}$ and that $B=\operatorname{Id}$. Then $(\forall n\in\Bbb N):\|A_n-B\|_\infty\leqslant1$. However,$$\left\|A_n^{-1}-B^{-1}\right\|_\infty=\|n\operatorname{Id}-\operatorname{Id}\|_\infty,$$which can be arbitrarily large. |
Expected maximum number of unpaired socks | I did some Monte Carlo with this interesting problem and came to some interesting conclusions. If you have $N$ pairs of socks the expected maximum arm length is slightly above $N/2$.
First, I made 1,000,000 experiments with 100 pairs of socks and recorded maximum arm length reached in each one. For example, maximum arm length of 54 was reached about 90,000 times. And it all looks like a normal distribution to me. The average value of maximum arm length was 53.91, confirmed several times in a row.
Nothing changed with 100 pairs of socks and 10,000,000 experiments. Average value remained the same. So it looks like you need about a million runs to draw up a meaningful conclusion.
Here is what I got when I doubled the number of socks to 200 pairs. Maximum arm length on average was 105.12, still above 50%. I got the same value in several repeated experiments ($\pm0.01$).
Finally, I decided to check expected maximum arm length for different number of sock pairs, from 10 to 250. Each number of pairs was tested 2,000,000 times before the average value was calculated. Here are the results:
$$
\begin{array}{c|rr}
\textbf{Pairs} & \textbf{Arm Length} & \textbf{Increment} \\
\hline
10 & 6.49 & \\
20 & 12.03 & 5.54 \\
30 & 17.41 & 5.38 \\
40 & 22.71 & 5.30 \\
50 & 27.97 & 5.26 \\
60 & 33.20 & 5.23 \\
70 & 38.40 & 5.20 \\
80 & 43.59 & 5.19 \\
90 & 48.75 & 5.16 \\
100 & 53.91 & 5.16 \\
110 & 59.07 & 5.16 \\
120 & 64.20 & 5.13 \\
130 & 69.33 & 5.13 \\
140 & 74.46 & 5.13 \\
150 & 79.58 & 5.12 \\
160 & 84.69 & 5.11 \\
170 & 89.80 & 5.11 \\
180 & 94.91 & 5.11 \\
190 & 100.02 & 5.11 \\
200 & 105.11 & 5.09 \\
210 & 110.20 & 5.09 \\
220 & 115.29 & 5.09 \\
230 & 120.38 & 5.09 \\
240 & 125.47 & 5.09 \\
250 & 130.56 & 5.09
\end{array}
$$
It looks like a straight line but it's actually an arc, slightly bent downwards (take a look at the increment column).
Finally, here is the Java code that I used for my experiments.
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Basket {

    public static final int PAIRS = 250;
    public static final int NUM_EXPERIMENTS = 2_000_000;

    int n;
    List<Integer> basket;
    Set<Integer> arm;

    public Basket(int n) {
        // basket size
        this.n = n;
        // socks are here
        this.basket = new ArrayList<Integer>();
        // arm is just a set of different socks
        this.arm = new HashSet<Integer>();
        // add a pair of same socks to the basket
        for (int i = 0; i < n; i++) {
            basket.add(i);
            basket.add(i);
        }
        // shuffle the basket
        Collections.shuffle(basket);
    }

    // returns maximum arm length
    int hangSocks() {
        // maximum arm length
        int maxArmLength = 0;
        // we have to hang all socks
        for (int i = 0; i < 2 * n; i++) {
            // take one sock from the basket
            int sock = basket.get(i);
            // if the sock of the same color is already on your arm...
            if (arm.contains(sock)) {
                // ...remove sock from your arm and put the pair over the hot pipe
                arm.remove(sock);
            } else {
                // put the sock on your arm
                arm.add(sock);
                // update maximum arm length
                maxArmLength = Math.max(maxArmLength, arm.size());
            }
        }
        return maxArmLength;
    }

    public static void main(String[] args) {
        // results of our experiments will be stored here
        int[] results = new int[PAIRS + 1];
        // run millions of experiments
        for (int i = 0; i < NUM_EXPERIMENTS; i++) {
            Basket b = new Basket(PAIRS);
            // arm length in a single experiment
            int length = b.hangSocks();
            // remember how often this result appeared
            results[length]++;
        }
        // print results in CSV format so that we can plot them in Excel
        for (int i = 0; i < results.length; i++) {
            System.out.println(i + "," + results[i]);
        }
        // find average arm length
        int sum = 0;
        for (int i = 0; i < results.length; i++) {
            sum += i * results[i];
        }
        double average = (double) sum / (double) NUM_EXPERIMENTS;
        System.out.println(String.format("Average arm length is %.2f", average));
    }
}
EDIT: For N=500, the average value of maximum arm length after 2,000,000 tests is 257.19. For N=1000, the result is 509.23.
It seems that for $N\to\infty$, the result goes down to $N/2$. I don't know how to prove this. |
Graph Theory Question - length of path vs. independent sets | For a (presumably connected) graph $G$ with $n > 10$ nodes, you ask whether the maximal size of an independent set being at most 10 implies that between any two nodes there's a path of length at least 10. The answer is no. Consider the star with 10 leaves, $K_{1,10}$; the maximal size of an independent set is 10, but the maximal length of a path is 2. If you meant to say the maximal size of an independent set was less than 10, the answer is still no: just add any edge between two leaves of $K_{1,10}$. |
Finding the constant for the particular solution to $y''(x) + y(x) = 2^x$ | The "test solution" should be $a \cdot 2^x$, not $A^x$. You want the same exponential multiplied by a constant coefficient. |
Is the following set closed in $\mathbb R^2$? | No it is not, as $(0,1)$ is a limit point of $S$ and $(0,1)\not\in S$.
It is a limit point since
$$
\left(\frac{1}{n},\frac{\sin\big(\frac{1}{n}\big)}{\frac{1}{n}}\right)\in S,
$$
and
$$
\frac{\sin\big(\frac{1}{n}\big)}{\frac{1}{n}}=n\sin\big(\tfrac{1}{n}\big)\to 1,
$$
as $n\to\infty$. |
Linear Algebra, eigenvalues and eigenvectors | I calculated the same polynomial and I got
$$P(X)= X^2 (X-1)^3 (X-4) \,.$$
Note that $tr(A)=7$ has to be the sum of eigenvalues.
Just to get you started:
$$\det(A-tI)= \det \pmatrix{2-t & 1 & 1 & 1 & 1 & 1 \\ 1 & 1-t & 0 & 1 & 0 & 1 \\ 1 & 0 & 1-t & 0 & 0 & 1 \\ 1 & 0 & 0 & 1-t & 0 & 0 \\ 1 & 0 & 0 & 0 & 1-t & 0 \\ 1 & 0 & 0 & 0 & 0 & 1-t}$$
Subtract the 6th row from 4th and 5th:
$$\det(A-tI)= \det \pmatrix{2-t & 1 & 1 & 1 & 1 & 1 \\ 1 & 1-t & 0 & 1 & 0 & 1 \\ 1 & 0 & 1-t & 0 & 0 & 1 \\ 0 & 0 & 0 & 1-t & 0 & t-1 \\ 0 & 0 & 0 & 0 & 1-t & t-1 \\ 1 & 0 & 0 & 0 & 0 & 1-t}$$
Now, $(1-t)$ common factor on rows 4 and 5.
$$\det(A-tI)= (t-1)^2\det \pmatrix{2-t & 1 & 1 & 1 & 1 & 1 \\ 1 & 1-t & 0 & 1 & 0 & 1 \\ 1 & 0 & 1-t & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 0 & 1 & -1 \\ 1 & 0 & 0 & 0 & 0 & 1-t}$$
Next add Column 4 and column 5 to Column 6, and you can get a smaller $4 \times 4$ determinant.... |
Variation of Bin Packing/Knapsack problem | Suppose we have $n$ bins and fruits $f_1,\ldots,f_m$. Let the upper bound on the number of fruits of type $f_k$ in bin $s$ be $b(f_k,s)$. A rephrase of the problem is as follows:
Given any term in the expansion of the product below, can you find which terms of the factors multiply together to form it?
$$\prod\limits_{i=1}^n\left(f_1^{b(f_1,i)}+f_2^{b(f_2,i)}+\ldots+f_m^{b(f_m,i)}\right)$$
The reason why this works is that each multiplicand in the product represents a bucket, and we pick one fruit in each bucket.
You can use generating functionology from here. |
Dimension of normalizer of closed connected subgroup | Let $L_G, L_H,L_N$ be the respective Lie algebras of $G, H$ and $N_G(H)$. Consider the action of $H$ on $L_G/L_H$ by $u_x(y)=p([x,y])$ where $p:L_G\rightarrow L_G/L_H$ is the quotient map, the theorem of Engel implies the existence of $x_0\neq 0\in L_G$ such that $u_x(x_0)=0$ for every $x\in L_H$, we deduce that $x_0\in L_N$ and $dim(N_G(H))>dim(H)$. |
A bilinear matrix inequality | To make the problem more concise, I substitute $y_i = K_ix$, then I obtain the following.
$$
\begin{array}{cl}
\min_{x,P} &x^\top Q x + b^\top x\\
&P\succeq \mathbb{I}\\
&a_i^\top PK_ix \le -1, \quad \forall i=1,\ldots,k
\end{array}
$$
Maybe, there is some other elegant solution. I think the following way is one way to make the problem nicer.
Reformulation with an Augmented Lagrangian
Substituting $\theta_i = PK_ix$ and constructing the augmented Lagrangian (the ADMM way [1]), we have
$$
\begin{array}{cl}
\min_{x,P,\theta_1,\ldots,\theta_k}\max_{\mu_1,\ldots,\mu_k} &x^\top Q x + b^\top x + \sum_{i=1}^k\mu_i^T(\theta_i - PK_ix) + \frac{\rho}{2}\sum_{i=1}^k\|\theta_i - PK_ix\|_2^2\\
&P\succeq \mathbb{I}\\
&a_i^\top \theta_i \le -1, \quad \forall i=1,\ldots,k\,,
\end{array}
$$
where $\mu_i,\theta_i \in \mathbb{R}^m$ for all $i \in [k]$ and $\rho>0$. Note that $\rho$ is chosen before the optimization starts, and a larger $\rho$ should work well.
Now, using alternating updates (ADMM), it is straightforward to handle the constraints, as all the involved constraints are closed convex sets. You may also refer to [2].
Tip for large $k$: the $\theta_i$ variables do not depend on each other, hence the $\theta_i$ updates can be done in parallel. The same also applies to the $\mu_i$.
References:
[1] https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf
[2] https://web.stanford.edu/~boyd/papers/pdf/admm_slides.pdf |
Time transform of Brownian motion | $B_t \sim N(0,t)$ for all $t >0$ so it is legitimate to replace $t$ by $e^{2t}$ and conclude that $B_{e^{2t}} \sim N(0,e^{2t})$. There is no probability theory involved in this implication.
You are right in saying that $(e^{-t}B_{e^{2t}})$ is not a standard BM. |
A fundamental question about relations between axioms | Once a set of axioms has been determined, one system has been established. For example, based on Hilbert's axioms, we develop Euclidean geometry in a rigorous way. However, each set of axioms can be derived from a more basic/fundamental set of axioms, by defining their terminology properly.
So you say that the axioms of any theory $A$ can always be derived from the axioms of some "more basic" theory $B$. Of course that means $B$ can be derived from some more basic theory $C$, which derives from $D$, which derives from ... I hope you see the problem here.
What you are talking about is actually not deriving $A$ from theory $B$, but rather modeling theory $A$ within theory $B$. Exactly because you cannot keep defining things in terms of previously defined things, but must start somewhere, a mathematical theory starts with a collection of "undefined" primitive terms, and a collection of statements ("axioms") describing how those terms relate to each other. By this, the axioms take the place of definition, in a sense "defining" the primitives in terms of each other. And together, the axioms and primitives define the mathematical theory. They determine what the theory is about, and what is true or false within it. One has to be careful in choosing the axioms, because a poor choice can result in a contradiction, and anything can be proven from a contradiction. So in such a theory, every statement is both true and false. Such a theory is inconsistent. Theories without a contradiction are consistent.
Now, within a different theory $B$, we can sometimes define the primitives of $A$ in terms of the primitives of $B$, and use those definitions to prove that the axioms of $A$ are all true in $B$. This is not deriving $A$ from $B$, but rather modelling $A$ in $B$. The definitions used are our choice, not something that "must be". For example, you can model the Peano axioms in set theory by defining the successor function as either $s(n) = \{n\}$ or as $s(n) = n \cup \{n\}$. The sets acting as natural numbers are radically different, but either approach works and allows you to prove the Peano axioms from the definitions.
So modelling doesn't derive the theory. What it does is tell you that if $B$ is consistent, then $A$ has to be as well, because a contradiction in $A$ would then, via the model, also be a contradiction in $B$.
Here, my curiosity extends to physics. In physics, there are also different systems, such as Newtonian mechanics, quantum mechanics, relativity and so on. In each system, there is a set of laws, which Newton called axioms and he even considered geometry as part of mechanics.
It has only been in the last ~150 to 200 years that mathematics and physics have been recognized as two separate fields. Before then, it was not that physical laws were considered "axioms", but rather that axioms were considered to be "laws" - statements about nature that we believe are true, but can only substantiate empirically. The description I gave above about axioms being statements we use to define a theory was not part of their thinking. One could no more claim that there was more than one line through a point parallel to a given line, than one could claim that objects dropped on Earth do not fall.
And of course, that was the problem. First, some mathematicians working to prove the parallel postulate found that you apparently got a consistent theory if you assumed it was false. Then Weierstrass had to go off and show that a model for a part of this theory could be built within Euclidean geometry. And Poincaré one-upped him by modelling the entirety of hyperbolic geometry within Euclidean geometry. Suddenly accepting that the parallel postulate was true also meant accepting that it can be false instead. This was a matter of choice, not something that just "is". And if it is the case for one axiom of mathematics, why should it not be the case for the rest? And so Mathematics and Physics got an amicable divorce and went their separate ways.
Let's take just Newtonian mechanics. There are three laws. Are these laws considered to be axioms in a mathematical sense? (To be honest, it doesn't seem to me that mathematics and physics are separate. I don't think even the great mathematicians/physicists such as Galileo, Newton, Einstein, didn't think that way, too.)
Galileo, Newton, and many others lived before this separation. Einstein lived after, and as part of his development of General Relativity studied geometries quite deeply. He was well aware of why Math and Physics broke up.
So, could the laws in Newtonian mechanics, for example, the famous $F = ma$, be deduced from set theory by defining the proper terminology such as Force, mass, velocity and so on, as we can do it with Peano axioms and Hilbert's axioms?
You can take the laws of physics as axioms of a mathematical theory. Effectively, this is exactly how physicists (or any other scientist) applies mathematics to their field. The physics terminologies such as "force, mass, distance, time" are primitives and the laws are the axioms that establish the theory. So "physics" becomes a mathematical theory. But if that is what you do, then ALL you have is a mathematical theory - a bunch of results about abstract concepts without any "real-world" meaning. It doesn't become actual physics until you step out of that theory and decide that "force" is what happens when you press on something, what pushes against the ground, and what the ground applies to hold you up.
In "Surely You Are Joking, Mr. Feynman", Richard Feynman discussed his trouble teaching some students who had the theory of optics down pat. But when he tried to get them to apply the very results just discussed to a real world situation, they would give the wrong answer, because they had apparently learned the theory as if it were only mathematics, not physics, and when thinking of the real world, they never thought to apply it, but went with their misguided intuition instead.
I am a mathematician (at least, at heart). I love mathematics. I love the freedom to build "castles in the air" as I please. Anything I build is, and always will be, mathematics. Physics is different. A theoretical physicist may spend years working in a particular theory, only eventually to have experimental results disagree. All the work done ceases to be physics at that point. It is still mathematics, and still may be useful in other areas, but it is no longer physics.
Lastly, even though they are not deduced by set theory, does set theory or a rigorous set of mathematical axioms have to be with them if they are dealt with rigor? I am not sure how to view the laws or principles in physics in a rigorous mathematical point of view. Any insights into or help with this would be very much appreciated! Thank you very much.
Even if you build a mathematical theory of physics within the set theory or elsewhere, it does absolutely nothing to establish the truth or falsity of the physics. Having no logical contradictions is important for a physical theory, but what establishes a physical theory is its usefulness in describing the real world - in particular, in making predictions about that world that turn out to be true. Engineers do this constantly.
The difference between mathematics and any science is that mathematics establishes its truths deductively - logically proving results from the axioms. The limitation to mathematics is that it does not establish that $1 + 1 = 2$. Instead, it establishes that "under the Peano axioms, $1 + 1 = 2$". On the other hand, science establishes its truths empirically - by making predictions over and over again, and showing that the statements have always held true every time they've been tried and the results carefully established. The limitation to this approach is that you can never be completely sure that the next time will also be true.
You might have an extraordinarily successful theory about mechanics that just works everywhere and has everyone lauding your name as the greatest ever. Then some guy plays with some equations and up pops a fixed speed for electromagnetic radiation, contrary to your theory's prediction that all speeds are relative. And while everyone is working to explain why your theory must still be right, some other doofus says "well maybe it is constant", and then someone tests it, and next thing you know, that doofus is declared right and goes on poking more holes in your theory, and everyone is proclaiming him to be the greatest. |
How to solve system of equation, $\sqrt{x-1}+\sqrt{y-1}=4\sqrt 3$, $\sqrt{y-4}+\sqrt{z-4}=4\sqrt3$ and $ \sqrt{x-9}+\sqrt{z-9}=4\sqrt3$ . | So here are the 3 equations:
$$\begin{cases}\sqrt{x-1}+\sqrt{y-1}=4\sqrt 3\\\sqrt{y-4}+\sqrt{z-4}=4\sqrt3\\\sqrt{x-9}+\sqrt{z-9}=4\sqrt3\end{cases}$$
As suggested by transcenmental,
$\sqrt{x-1}-\sqrt{y-1}=4\sqrt 3 - 2 \sqrt{y-1}$ and multiplying with the first eq. gives
$$
x - y = 4\sqrt 3 (4\sqrt 3 - 2 \sqrt{y-1})
$$
For the second eq., use
$$-\sqrt{y-4}+\sqrt{z-4}=4\sqrt 3 - 2 \sqrt{y-4}$$ and multiplying with the second eq.
$$
z - y = 4\sqrt 3 (4\sqrt 3 - 2 \sqrt{y-4})
$$
Plugging into the last one gives an eq. in y:
$$
\sqrt{y + 4\sqrt 3 (4\sqrt 3 - 2 \sqrt{y-1})-9}+\sqrt{ y + 4\sqrt 3 (4\sqrt 3 - 2 \sqrt{y-4})-9}=4\sqrt3
$$
This is pretty awkward, but $y = 28/3$ is a solution (by computer). From here the others follow, namely
$$
x = 52/3$$
and
$$
z = 76/3$$
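(A quick check, added for convenience: with $y=28/3$ one gets $\sqrt{x-1}=\sqrt{49/3}=\tfrac{7}{\sqrt3}$, $\sqrt{y-1}=\tfrac{5}{\sqrt3}$, $\sqrt{y-4}=\tfrac{4}{\sqrt3}$, $\sqrt{z-4}=\tfrac{8}{\sqrt3}$, $\sqrt{x-9}=\tfrac{5}{\sqrt3}$ and $\sqrt{z-9}=\tfrac{7}{\sqrt3}$, so each of the three left-hand sides equals $\tfrac{12}{\sqrt3}=4\sqrt3$.)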
EDIT:
with a little bit of hindsight and a little bit of psychology, you could argue
as follows (with a twinkling of an eye):
suppose the person asking the question prefers a reasonably nice-looking solution (psychology 1). Then all variables should either be multiples of 3 or of $1/3$ to get rid of the $\sqrt 3$ on the RHS. Let's try $1/3$ (hindsight 1). So let $x = x' / 3$ etc. Now assume further that the numerators of the variables should also be nice, e.g. no roots etc. (psychology 2). Then we should have that $x'-3\cdot 1$ and $x'-3\cdot 9$ are "nice" squares (likewise with the other variables). If we want it even nicer, they should be squares of integers (hindsight 2).
So $x' = 3 + n^2$ and $x' = 27 + m^2$. Now start playing. "Nice" integers n and m will be reasonably small (psychology 3). $x' = 52$ does it nicely, with $n=7$ and $m=5$. A small number of trials, to match all three variables, will then give the solution.... |
Why is $\frac{d^2}{dx^2} = \frac{1}{\delta^2}\frac{d^2}{dX^2}$ after the substitution $X = \frac{x}{\delta}$? | What this actually means is that $X(x) = \frac x\delta$ is to be composed with a function $f(X)$ of $X$, so that we obtain a function of $x$, namely $g(x)=f(X(x))$; hence
$$f(X(x)) = f(\frac x\delta)$$
and
$$\frac {d^2}{dx^2} f(X(x)) = \frac{d^2}{dx^2} f(\frac x\delta) = \frac d{dx} \Big( f'(\frac x\delta) \cdot \underbrace{\frac d{dx} X(x)}_{=\frac1\delta} \Big) \\
= \frac1\delta \frac d{dx} f'(\frac x\delta) = \frac1\delta f''(\frac x\delta) \cdot \underbrace{\frac d{dx} X(x)}_{=\frac1\delta} = \frac1{\delta^2} f''(\frac x\delta) = \frac1{\delta^2} f''(X)$$
Where $f'' = \frac {d^2}{dX^2} f$, so we can say that
$$\frac {d^2}{dx^2} = \frac1{\delta^2} \frac{d^2}{dX^2}$$
This is all a bit unclean because you think of $\frac d{dX}$ as a function where it is in fact an operator (It operates on functions). |
Linear Algebra determinant equal 0 | Proof by contradiction:
Suppose there exists an $A$ such that $\det A \ne 0$, $A\ne I$, and $A^2 = A.$
$\det A \ne 0$ implies $A$ has an inverse, so
$$A^{-1}(A^2) = A^{-1}A \implies A = I.$$
But that contradicts our premise. |
Differential equation for a matrix-valued function | The correct conclusion from $\frac{d}{dt}(\text{something})=0$ is that $\text{(something)}$ does not depend on $t$. This is all one can get, because any function independent of $t$ has zero derivative with respect to $t$.
But if it is also known that $Y(t_0)$ is orthogonal for some $t_0$, then from $Y^T(t_0)Y(t_0)=I$ it indeed follows that $Y^TY\equiv I$. |
PMF for total on a random number of dice? | Hint:
The mean can be found using $$\mathbb E(Y)=\sum_{i=1}^6\mathbb E(Y\mid N=i)P(N=i)$$ |
The number of ways one can predict the outcome of 11 matches such that exactly 6 turn out to be true | Strictly speaking there is no probability/randomness mentioned in the problem, but you have the right idea, assuming predictions are made uniformly at random and independently for each match.
Another way to think about it is to just count the number of ways to correctly predict: there are ${11\choose 6}$ ways to choose which matches are correctly predicted. For each of those, there is no choice of outcome - it must be the correct outcome. For the other $11-6=5$ matches, you must predict incorrectly, so there are $2$ possible outcomes for each. Thus ${11\choose 6}2^5$.
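(For reference, $\binom{11}{6}\,2^5 = 462 \cdot 32 = 14784$.) |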
In the polynomial ring $\mathbb Z_3[x]$, the ideal generated by $x^6+1$ is a prime ideal. | We know ideal $I$ is prime iff $\frac{R}{I}$ is integral domain
Here $I= <x^6+1> $
We have.
$a= (x^2+1)\neq0 $ and$b=(x^4-x^2+1) \neq 0$ in $\frac{Z_3[x]}{<x^6+1>}$
But $ab =(x^2+1)(x^4-x^2+1) =0$
$\implies \frac{Z_3[x]}{<x^6+1>}$ is not an integral domain
$\implies <x^6+1> $is not prime |
Sufficient statistics when mean is known | Define $T=\sum X_i^2$ and $U=\frac{1}{n-1}\sum (X_i-\bar{X})^2=\frac{\sum X_i^2 -n \bar{X}^2}{n-1}$.
If you want to show a statistic is not a sufficient statistic, you can compare it with a minimal sufficient statistic. Use the fact that
a minimal sufficient statistic is a function of any sufficient statistic.
It is obvious that $T=\sum X_i^2$ is a minimal sufficient statistic for $\sigma^2$. Since $T$ is a minimal sufficient statistic, it is a function of any sufficient statistic. It is enough to show that $T$ is not a function of $U$.
$T$ is a function of $U$ if $U(a_1)=U(a_2)$ $\Rightarrow T(a_1)=T(a_2)$. So it is enough to find two points such that $U(a_1)= U(a_2)$ but $T(a_1)\neq T(a_2)$; then $T$ is not a function of $U$ and hence $U$ is not a sufficient statistic.
$a_1=(x_1=1,x_2=1, \cdots ,x_n=1)$
$a_2=(x_1=0,x_2=0, \cdots ,x_n=0)$
So $0=U(a_1)=U(a_2)$ but $n=T(a_1)\neq 0=T(a_2)$ |
Generalisation of an inequality to infinite dimensional normed linear spaces | One argument would be based on Dvoretzky's theorem, i.e. given Banach space $(X,||.||)$ then there are constants $c_{1}$ and $c_{2}$ such that for any $n$ one can find a a subspace $X_{n}$ of $X$ such that
$$c_{1}\Big(\sum a_{k}^{2}\Big)^{1/2} \le \Big\|\sum a_{k}x_{k}\Big\| \le c_{2}\Big(\sum a_{k}^{2}\Big)^{1/2}$$
for $x_{k} \in X$ and $a_{k}$ scalars.
In words, $X$ uniformly admits copies of $l_{n}^{2}$.
Then one would use your argument for Hilbert space above.
Now, this is too complex an argument and there must be a simpler one, but I do not see it... |
Easy way to show $\frac{2x}{x-1}>x$ | You can not divide by $x$ because $x$ can be positive or $x$ can be negative.
It's $$\frac{2x}{x-1}-x>0$$ or
$$\frac{x(3-x)}{x-1}>0,$$ which gives the answer:
$$(-\infty,0)\cup(1,3).$$
I used the intervals method.
We need to draw the $x$ axis and to put there points $0$, $1$ and $3$.
Now, on $(3,+\infty)$ the expression $\frac{x(3-x)}{x-1}$ is negative and if we pass a point $3$ then the sign changes.
Thus, the sign on $(1,3)$ is $+$.
If we pass a point $1$ then the sign changes again.
Thus, the sign on $(0,1)$ is $-$.
And if we pass a point $0$ then the sign still changes.
Thus, the sign on $(-\infty,0)$ is $+$.
Now, we can write the answer. |
Why is there no possibility of no solution in homogeneous systems | Your example means $5x=0$, which gives you $x=0$. Remember the left side is the coefficient matrix. The numbers are coefficients of linear equations.
For a homogeneous system, zero is always a solution. So there is always this solution, which is the so called trivial solution. |
Finding $k$ such that a given matrix has a real eigenvalue of algebraic multiplicity $2$ | For us to have a double root (i.e. an algebraic multiplicity of two), the discriminant of the characteristic equation must be zero. This holds for quadratics in general: if $f(x) = ax^2 + bx + c$ has a root of multiplicity two - a double root - then $b^2 - 4ac = 0$.
The characteristic equation in this case is
$$\lambda^2 + 9\lambda + (18-2k) = 0$$
and the discriminant is the expression under the root in the quadratic formula, which here is
$$9^2 - 4(1)(18-2k) = 81 - (72-8k) = 9 + 8k$$
So, for the discriminant to be zero, we require
$$9+8k = 0$$
Only one $k$ satisfies this equation, and I imagine you can find it pretty easily. |
Probability least number of tosses of a pair of dice | Let me help you to answer your own question. I will not give you the full answer but I'm sure you'll be able to compute it afterwards.
If we throw once, the probability of throwing $11$ is $\frac{2}{36}=\frac{1}{18}$. So if we throw $n$ times, what is the probability that we have not yet thrown $11$? That's right:
$P_n=(\frac{17}{18})^n$
We are interested in the $n$ such that $1-P_n$ exceeds a) 0.5 and b) 0.95.
Can you pick it up from here? Try first finding equality and then looking at which side of the equality your amount of tosses should lie. Good luck! |
Null Space and Orthogonal Complement | For the first equality,
$$\begin{align}
v \in N(A) &\iff Av = 0 \\
&\iff \forall w \, \langle Av,w\rangle = 0 \tag{*}\\
&\iff \forall w \, \langle v,A^Tw \rangle = 0 \\
&\iff v \in R(A^T)^\perp.
\end{align}$$
The only possibly tricky step is going from (*) to the preceding line, which requires the lemma that, if $\langle x,y \rangle = 0$ for all $y$, then $x=0$.
The proof for the other equality is similar.
These equalities are special cases of a broader result: If $T:V\to W$ is a linear map and $T^*: W^*\to V^*$ its adjoint, then the image of $T^*$ annihilates the kernel of $T$, and the kernel of $T^*$ annihilates the image of $T$. |
How can I represent the rotation of a point in $\mathbb{R}^2$ graphically? | For any linear transformation $T: V_1 \rightarrow V_2$, where $V_1$ and $V_2$ are vector spaces, its matrix representation consists of the images under $T$ of the basis vectors of $V_1$. Since $V_1 = \mathbb{R}^2$, we have that the matrix for a counterclockwise rotation of $\theta$ is
$$\left[\begin{array}{ll} \cos \theta & - \sin \theta \\
\sin \theta & \cos \theta \end{array}\right]$$
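As a quick concrete check: for $\theta = \frac{\pi}{2}$ this matrix sends $(1,0)$ to $(0,1)$ and $(0,1)$ to $(-1,0)$, so graphically it turns the standard basis vectors a quarter turn counterclockwise. |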
Definition of "point at infinity" | (Sorry if what follows is too elementary and you already knew it! If so, skip to below the horizontal line)
Because the equations are homogeneous, if $(a,b,c)$ is any solution, then so is $(\lambda a,\lambda b,\lambda c)$ for any nonzero constant $\lambda$. Moving to rational solutions, it makes sense then to consider all solutions of the form $(\lambda a,\lambda b,\lambda c)$, $\lambda\neq 0$, to be "equivalent" in a sense (much like with Pythagorean triples, we don't really distinguish between then $(3,4,5)$ triangle and the $(6,8,10)$ triangle when trying to describe all solutions, focusing then on the primitive ones).
This naturally leads you to projective 2-space over the rationals, which is precisely the quotient of the nonzero rational triples, $(\mathbb{Q}^3-\{(0,0,0)\})$, modulo the equivalence relation $\sim$ that is $(a,b,c)\sim(r,s,t)$ if and only if there exists $\lambda\neq 0$ such that $(\lambda a,\lambda b,\lambda c)=(r,s,t)$. (I'm doing this over $\mathbb{Q}$ because we are looking at diophantine problems, but this can be done over any field, and in algebraic geometry you would usually do it over an algebraically closed field, e.g., $\mathbb{C}$).
(An informal way of thinking about projective $2$-space can be found in this answer).
We denote the equivalence class of the point $(a,b,c)$ in Projective $2$-space by $[a:b:c]$.
Projective $2$-space contains copies of the usual ("affine") $2$-space over $\mathbb{Q}$: we can identify $\mathbb{Q}^2$ with all the points of the form $[a:b:1]$. (There are two other copies of the rational plane: the points of the form $[a:1:c]$ and the points of the form $[1:b:c]$). The points that have $c=0$ form the "line at infinity" of projective $2$-space (all representatives of the equivalence class of $[a:b:0]$ will have third coordinate equal to $0$); points on the line at infinity are called "points at infinity".
In the Fermat equation, $x^n + y^n = z^n$, viewed as a projective equation (which we can do because it is homogeneous), the "points at infinity" are the solutions to the equation with $z=0$. In the projective plane, for $n$ odd, the only such point is $[1:-1:0]$. Similarly with $x^3+y^3=60z^3$; these solutions lie on the "line at infinity", so they are "points at infinity" of the curve (which in fact only has one such point).
In the Fermat equation you run into the slight complication that if $n$ is even, then there are no solutions at the "line at infinity" $z=0$, because when $n$ is even the three variables don't play symmetric roles. One can instead look at a different copy of the affine plane, e.g. the $y\neq 0$ subset of the projective plane, which would lead to the line $y=0$ being the "line at infinity", and the only solution that lies at this line at infinity is $[1:0:1]$. If you switch to the $x=0$ line at infinity, you would have $[0:1:1]$ as the "point at infinity".
So given a projective variety, you can pick a copy of the corresponding affine space; the "points at infinity" of the variety will be those that lie in the complement of that copy of the affine space. You can switch viewpoints and select a different affine hyperplane to get a different set of "points at infinity" (that is, the notion of 'points at infinity' depends on the particular affine hyperplane you have decided on, and is not intrinsic to the curve). |
Is this partial derivative evaluated correctly? | My solution is:
$$\frac{\partial w}{\partial k} \enspace = \enspace 3\frac{\partial w}{\partial x} + (h^2+2)\frac{\partial w}{\partial y} $$
And therefore,
$$ \frac{\partial^2 w}{\partial h \, \partial k} \enspace = \enspace 6hk \, \frac{\partial^2 w}{\partial x \,\partial y} + 2h \frac{\partial w}{\partial y} + 2kh(h^2+2) \frac{\partial^2 w}{\partial y^2}$$
Note that the derivative of $w$ with respect to $x$, i.e. $\tfrac{\partial w}{\partial x}$, can still be dependent on $y$. In other words $\tfrac{\partial w}{\partial x} = \tfrac{\partial w}{\partial x}(x,y)$ and thus $\tfrac{\partial}{\partial y} \tfrac{\partial w}{\partial x} = 0$ is NOT true in general. Note furthermore, that in general
$$ \frac{\partial w}{\partial y} \frac{\partial w}{\partial y} \enspace = \enspace \Big( \frac{\partial w}{\partial y} \Big)^2 \enspace \neq \enspace \frac{\partial^2 w}{\partial y^2}$$
One can always check whether one's solution is correct by calculating the derivatives in the reverse order, i.e. calculating $\tfrac{\partial^2 w}{\partial k \, \partial h}$ instead of $\tfrac{\partial^2 w}{\partial h \, \partial k}$. The solutions should coincide. |
Is this construction already a contradiction | Any bounded linear operator from $l^2$ to $l^p$ is compact, and a compact operator cannot be an isomorphism, so no isomorphism can exist between them; hence they cannot be isomorphic.
Let us try to make one, let $L:U\to l^p$ defined by $L_x(y)=\sum_nx_ny_n$, for $x\in U,y\in l^p$, here $L_x(y)=\langle x,y\rangle_{l^2}$, thus $|L_x(y)|=|\langle x,y\rangle_{l^2}|\leq\|x\|_{l^2}\|y\|_{l^p}$, since $x$ is fixed for each $L_x$, we see that the operator is bounded, and takes values in $\mathbb{R}$ (or $\mathbb{C}$), which is finite dimensional, so the operator is of finite rank and thus is compact. |
How does this conclusion work (number theory) | $ n\equiv 2^k\!\pmod{\!2^{k+1}}\!\!\iff\! 2^{k+1}\!\mid n\!-\!2^k\Rightarrow \color{#0a0}{2^k\mid n},\ \color{#c00}{2^{k+1}\nmid n}\ $ so $\,\overbrace{\color{#0a0}{2^k}\!\cdot\color{#c00}{\rm odd}}^{\large n}= 2^k(1\!+\!2j) = 2^k\! +\! 2^{k+1}j $ |
Scale rectangles so they have same height and don't exceed a total width? | First, scale them all to the same height (e.g. the height of the tallest one, the height of the first one, or some fixed constant).
Second, add the (new) widths and divide the sum by the desired total width.
Third, scale them all by the inverse of this factor. |
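A minimal sketch of these three steps in Python (the function name and the representation of rectangles as `(width, height)` pairs are my own choices; the factor is capped at 1 so rectangles are never enlarged, which matches the goal of not exceeding the total width):

```python
def fit_rectangles(rects, target_height, max_total_width):
    """rects: list of (width, height) pairs; returns the rescaled pairs."""
    # First: scale every rectangle to the same height.
    same_height = [(w * target_height / h, target_height) for w, h in rects]
    # Second: compare the summed widths with the allowed total width.
    total_width = sum(w for w, _ in same_height)
    factor = min(1.0, max_total_width / total_width)
    # Third: scale everything uniformly by that factor.
    return [(w * factor, h * factor) for w, h in same_height]

print(fit_rectangles([(4, 2), (3, 3), (5, 1)], target_height=2, max_total_width=10))
```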
Most effective algorithms for each step of the basic RSA-Algorithm | I'll try to address your steps. Let $N=pq$ have bitlength $n$.
Choose two large primes $p\neq q$ (We can use random number generators with the help of primality tests)
You want to choose large pseudoprimes which are not too close together in value, but of comparable size (say, within 10 bits of each other in bitlength). You can pick a random odd integer with bitlength $n/2$ in $O(n)$ steps, and if you test roughly $\log N=n$ such numbers you will hit a prime.
These steps have overall complexity $O(n^2)=O(\log^2 N).$ But there is the primality testing, which has complexity something like $O(\log^3 N)$ for Miller-Rabin, say.
Step 1 ends up taking $O(k \log^4 N),$ since we repeat Miller-Rabin $\log N$ times and do $k$ iterations to lower the probability of error to at most $2^{-2k}.$
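A sketch of this step in Python (the helper names are mine, and $k=40$ is just an illustrative default; Python's built-in `pow` already performs fast modular exponentiation):

```python
import secrets

def miller_rabin(n, k=40):
    """Probabilistic primality test; error probability at most 2**(-2k)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(k):
        a = secrets.randbelow(n - 3) + 2        # random base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    """Draw random odd integers of the given bitlength until one passes the test."""
    while True:
        candidate = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if miller_rabin(candidate):
            return candidate
```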
Compute $N=pq$ and $\varphi = (p-1)(q-1)$
$O((\frac{n}{2})^{1.58})=O(n^{1.58})=O(\log^{1.58}N)$ by the Karatsuba algorithm. The Harvey-van der Hoeven algorithm does not seem to be practical, as in the comment by Peter Kosinar.
Chose $e\in\mathbb{N}$ so that $\texttt{gcd}(e,\varphi(n))=1$ and $1< e <\varphi (N)$
Choose $e$ randomly (complexity $O(\log N)$) and check GCD. Success after a constant number of trials. Since you use extended Euclidean, complexity is $O(\log N).$
Compute $d=e^{-1} \bmod \varphi(N)$ (Ext. Euclidean Algorithm)
You can use CRT: compute $e^{-1} \bmod{(p-1)}$ and $e^{-1} \bmod{(q-1)}$ with the extended Euclidean algorithm and then combine the two results. This is a real saving in practice but still $O(\log N).$
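A sketch of a modular inverse via the extended Euclidean algorithm (the function names are mine; Python 3.8+ can also do this directly with `pow(e, -1, m)`):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(e, m):
    """Compute e^{-1} mod m, assuming gcd(e, m) = 1."""
    g, x, _ = ext_gcd(e, m)
    if g != 1:
        raise ValueError("e is not invertible modulo m")
    return x % m
```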
Make $(e,n)$ public and keep $(d,p,q)$ secret. (prob. not a real step/operation)
Constant complexity.
Encryption of message $M$ with $C:=M^e \bmod N$ (Square-And-Multiply?)
Yes, but now without the factorisation of $N$ available to the sender. So $O(\log N)$.
Decryption of ciphertext $C$ with $C^d \bmod N$ (Square-And-Multiply?)
Yes, but with the factorisation available to recipient via CRT. Again $O(\log N).$ |
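For concreteness, here is a square-and-multiply sketch together with a toy RSA round trip (the parameters $p=61$, $q=53$, $e=17$, $d=2753$ form a standard small textbook example and are far too small for real use; Python's `pow(M, e, N)` does the same thing natively):

```python
def power_mod(base, exp, mod):
    """Square-and-multiply: O(log exp) modular multiplications."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                  # multiply step for each set bit of the exponent
            result = result * base % mod
        base = base * base % mod     # squaring step
        exp >>= 1
    return result

N, e, d = 3233, 17, 2753             # p = 61, q = 53, phi = 3120
M = 65                               # message
C = power_mod(M, e, N)               # encryption: C = M^e mod N
assert power_mod(C, d, N) == M       # decryption: C^d mod N recovers M
```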
Calculation mistake in variation of length functional? | Thanks to user98130's comment. Came back later and it seemed obvious. Anyway, because there is no dependence on $\gamma$ (which I'll call $f$ for simplicity) or $v$, the Euler-Lagrange equation is "homogeneous":
\begin{align*}
\int_0^1 g(\dot{f},\dot{v}) dt &= \int_0^1 g_{ij}\dot{f}^i\dot{v}^j dt \\
&= g_{ij}\dot{f}^iv(t)\bigg|_{t=0}^{t=1} - \int_0^1 \frac{d}{dt}(g_{ij}\dot{f}^i)v dt \\
\frac{d}{dt}(g_{ij}\dot{f}^i) &= \frac{\partial g_{ij}}{\partial f^k}\dot{f}^k\dot{f}^i + g_{ij}\ddot{f}^i = 0,
\end{align*}
which are the geodesic equations when $g(\dot{f},\dot{f}) = 0$. |
Show that $X_n/n$ does not converge almost surely | Define $$A_n := \left\{ \left| \frac{X_n}{n} \right| \geq 1 \right\}.$$ The set $$A := \limsup_{n \to \infty} A_n$$ satisfies, by the Borel-Cantelli lemma, $\mathbb{P}(A)>0$ and $\frac{X_n(\omega)}{n}$ does not converge to $0$ for each $\omega \in A$. |
Combinations and odds | This solution assumes that the balls are chosen without replacement. In other words, I'll assume that the second ball is chosen from a pool of $79$ possibilities once the first ball is removed.
Suppose you've picked the first ball already.
There are $79$ remaining balls. Of these, $15$ are the same color as the first, and the rest are not. This means that the probability of the balls being the same color is $$\boxed{\frac{15}{79}\,}$$ |
endomorphism as sum of two endomorphisms (nilpotent and diagonalizable) | Hint: You can make use of the fact that a Jordan form matrix is a block matrix.
Just show the claim for a single Jordan block, and then argue via block matrices that it holds for the whole thing.
It may be helpful to note that the diagonal part of a Jordan block is a scalar multiple of the unit matrix. |
solving $f(x)=\frac x{1-x^2}+2(1+x)f(x^2)$ without power series expansion | Even simpler than my other answer.
Let $f(x) = \frac{x}{(1-x)^2} + \frac{h(x)}{1-x}$. Then $h$ satisfies $h(0)=0$ and
$$
h(x) = 2h(x^2).
$$
Then
$$
h(x) = \lim_{k\to\infty} 2^k h\bigl(x^{2^k}\bigr) = \frac{1}{\log(x)}\lim_{y\to0} h(y)\log(y)=0
$$
as soon as we suppose that $h'(0)$ exists (substituting $y=x^{2^k}$, so that $2^k=\log y/\log x$, and using $h(0)=0$). |
Residue with transcendental function | $\lim_{z\to i\pi} \frac{(e^z + 1)(z-i\pi)}{\sin^2(iz)} = \lim_{z\to i\pi} \frac{e^z(z-i\pi) + e^z +1}{i\sin(2iz)}= \lim_{z\to i\pi} \frac{e^z(z-i\pi) + 2e^z}{-2\cos(2iz)} = 1$ by two applications of L'Hopital. |
How to evaluate $\int_{0}^{\infty}{\ln(x)\over x}\left({1+x+e^x\over e^x+e^{2x}}-{1\over \sqrt{1+ x^2}}\right)\mathrm dx?$ | $$\int\limits_0^\infty \frac{\ln x}{x}\left(\frac{1+x+e^x}{e^x+e^{2x}}-\frac{1}{\sqrt{1+x^2}}\right)dx=\int\limits_0^\infty \frac{\ln x}{e^x+e^{2x}}dx - \int\limits_0^\infty \frac{\ln x}{x}\left(\frac{1}{\sqrt{1+x^2}}-\frac{1}{e^x}\right)dx$$
with
$$\int\limits_0^\infty \frac{\ln x}{e^x+e^{2x}}dx=\int\limits_0^\infty \frac{\ln x}{e^x}dx-\int\limits_0^\infty \frac{\ln x}{1+e^x}dx=\frac{(\ln 2)^2-2\gamma}{2}$$
and
$$\int\limits_0^\infty \frac{\ln x}{x}\left(\frac{1}{\sqrt{1+x^2}}-\frac{1}{e^x}\right)dx=\frac{(\ln 2)^2-\gamma^2}{2}$$ .
Therefore the result is $\enspace\displaystyle -\frac{\gamma}{2}(2-\gamma)$ . $\enspace\gamma$ is here the Euler-Mascheroni constant.
For the second integral you can use the integration by parts with:
$$\frac{d}{dx}(\ln x)^2=2\frac{\ln x}{x}$$
$$\int\limits_0^\infty \frac{(\ln x)^2}{e^x}dx=\gamma^2+\frac{\pi^2}{6}$$
$$\int\limits_0^\infty \frac{x(\ln x)^2}{(1+x^2)^{3/2}}dx=\frac{\pi^2}{6}+(\ln 2)^2$$
Based on the comment of Lucian, we now calculate $\enspace\displaystyle \int\limits_0^\infty \frac{\ln t}{t}(\frac{1}{\sqrt{1+t^2}}-\frac{1}{e^t}) dt\,$ in a different way than above. Let $\,F(x):=x\Gamma(x)\,$. We have
$\displaystyle \int\limits_0^\infty \frac{dt}{t^x e^t}=\Gamma(1-x)=F(-x)\enspace$ , $\enspace\displaystyle \int\limits_0^\infty \frac{dt}{t^x \sqrt{1+t^2}}= \frac{\Gamma(\frac{1-x}{2})\Gamma(\frac{x}{2})}{2\sqrt{\pi}} =\frac{2 F(\frac{1-x}{2})F(\frac{x}{2})}{\sqrt{\pi}x(1-x)} \,$ ,
$\enspace\displaystyle F(-\frac{x}{2})F(\frac{1-x}{2})= \frac{\sqrt{\pi}}{2^{1-x}} F(1-x)\,$ , $\,\displaystyle (\ln F(x))' = \lim\limits_{n\to\infty}(\ln n - \sum\limits_{k=1}^n\frac{1}{k+x})\,$ , $\,\displaystyle (\ln F(x))'' = \sum\limits_{k=1}^\infty\frac{1}{(k+x)^2}\,$ .
$\enspace\displaystyle \int\limits_0^\infty \frac{1}{t^{1-x}} ( \frac{1}{\sqrt{1+t^2}} -\frac{1}{e^t})dt= \frac{1}{x}(\frac{2 F(\frac{1-x}{2}) F(\frac{x}{2})}{\sqrt{\pi}(1-x)}-F(x)) $$\displaystyle =\frac{1}{x}(\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2}) }-F(x))$
$\enspace\displaystyle \int\limits_0^\infty \frac{\ln t}{t^{1-x}} ( \frac{1}{\sqrt{1+t^2}} -\frac{1}{e^t})dt= \frac{d}{dx} \left( \frac{1}{x} ( \frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2}) } -F(x))\right)= \frac{1}{x^2}\left(- \frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} +F(x) + x (\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} -F(x))' \right) $
$\enspace$ L'Hôpital's rule two times:
$\enspace\displaystyle \int\limits_0^\infty \frac{\ln t}{t} ( \frac{1}{\sqrt{1+t^2}} -\frac{1}{e^t})dt=\lim\limits_{x\to 0} \int\limits_0^\infty \frac{\ln t}{t^{1-x}} ( \frac{1}{\sqrt{1+t^2}} -\frac{1}{e^t})dt =$$\displaystyle =\lim\limits_{x\to \pm 0} \frac{1}{x^2} \left(- \frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} +F(x) + x (\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} -F(x))' \right) = \frac{1}{2} \left(- \frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} +F(x) + x (\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} -F(x))' \right)''|_{x=0}$
$\displaystyle = \frac{1}{2}\left(\left(\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})}\right)'' -F''(x) + x \left(\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} -F(x) \right)'''\right)|_{x=0}$
$\displaystyle \left(\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})}\right)''=$$\displaystyle = \frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} ((\ln 2+ ( \ln F(-x) )'+( \ln F(\frac{x}{2}) )'-(\ln F(-\frac{x}{2}))')^2 $$\hspace{3.5cm}\displaystyle + ( \ln F(-x) )''+ (\ln F(\frac{x}{2}) )''-(\ln F(-\frac{x}{2}))'')$
$\hspace{8mm}$ => $\enspace\displaystyle \left(\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})}\right)''|_{x=0}=$
$\hspace{2cm}\displaystyle =(\ln 2 + \gamma -\frac{\gamma}{2}-\frac{\gamma}{2} )^2+(\frac{\pi^2}{6}+\frac{\pi^2}{24}-\frac{\pi^2}{24})=(\ln 2)^2+ \frac{\pi^2}{6} $
$\displaystyle F''(x)=F(x)((\ln F(x))'^2+(\ln F(x))'') \enspace$ => $\enspace\displaystyle F''(0)=\gamma^2 + \frac{\pi^2}{6}$
$\enspace\displaystyle \left(x \left(\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} -F(x))' \right)'''\right)|_{x=0} = 0\enspace$ since $\enspace\displaystyle \left(\frac{2^x F(-x) F(\frac{x}{2}) }{ F(-\frac{x}{2})} -F(x))' \right)'''|_{x=0} \enspace$ is bounded
$\enspace\displaystyle \int\limits_0^\infty \frac{\ln t}{t}(\frac{1}{\sqrt{1+t^2}}-\frac{1}{e^t}) dt = \frac{1}{2}\left (((\ln 2)^2+\frac{\pi^2}{6})-(\gamma^2+\frac{\pi^2}{6})+0\right)=\frac{(\ln 2)^2-\gamma^2}{2}$ |
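As a quick numerical sanity check of the final value $-\frac{\gamma}{2}(2-\gamma)\approx-0.41063$ (a sketch using scipy; the upper limit $100$ replaces $\infty$ since the tail is negligible):

```python
import numpy as np
from scipy.integrate import quad

gamma = 0.57721566490153286          # Euler-Mascheroni constant

def integrand(x):
    # (1+x+e^x)/(e^x+e^{2x}) rewritten to stay numerically stable for larger x
    first = ((1 + x) * np.exp(-x) + 1) / (1 + np.exp(x))
    return np.log(x) / x * (first - 1 / np.sqrt(1 + x**2))

value, _ = quad(integrand, 0, 100, limit=200)
print(value, -gamma / 2 * (2 - gamma))   # both are approximately -0.41063
```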
Is $(0,1]$ open and closed in the set $A := (0,1] \cup \{2\}$ where $A \subset \mathbb{R}$? | Openness is something relative. For example, $(0,1)$ is open in $\mathbb{R}$ but neither closed nor open in $\mathbb{R}^2$. Another example: every space is both closed and open with respect to itself.
One needs to know several definitions to understand what is meant by open and closed sets,
open ball
limit point of a set
closed set
interior point
open set
For your question, every point in $(0,1]$ has an open ball whose intersection with $A$ lies completely in $(0,1]$, even if you take the point $x = 1$; hence $(0,1]$ is open in $A$. By the way, the set $A$ is also not connected. |
What is $\lim_{n\to\infty}P\left(\sum^n_{i=1}X_i\ge\sum^n_{i=1}Y_i\right)?$ | Represent the limit probability equivalently as $$\lim_{n \to \infty} P\left(\frac{1}{n} \sum_{i=1}^n \left( X_i - Y_i \right) > 0 \right)$$
Then the average of $n$ samples sampled from the difference of these two distributions would converge to the expected value of the difference of these two distributions by the weak law of large numbers. In this case, the expected value would be equal to $\frac{1}{2}-1=-\frac{1}{2}$, so the limit would be equal to $0$ since $-\frac{1}{2} \not > 0$. |
A geometry problem - proving collinearity | Let $M$ be the origin and consider the three vertices $A$, $B$, $C$ as vectors $a$, $b$, $c$. You get the vector $a'=(1-t)b+t c\>$ representing $A_0$ by solving
$$\bigl((1-t)b+tc\bigr)\cdot a=0$$
for $t\in{\mathbb R}$ and obtain
$$a'={(a\cdot b)c-(a\cdot c)b\over a\cdot(b-c)}\ .$$
Using analogy for $b'$ and $c'$ it follows that
$$\bigl(a\cdot(b-c)\bigr) a'+\bigl(b\cdot(c-a)\bigr) b'+\bigl(c\cdot(a-b)\bigr) c'=0\ .$$
But $\lambda a'+\mu b'+\nu c'=0$ with $\lambda+\mu+\nu=0$ means that $a'$, $b'$, and $c'$ are collinear. |
Difference between $\sin^{-1}(x)$ and $\frac1{\sin(x)}$? | They are two totally different functions with different domains and ranges and different definitions.
The notation is confusing and it takes a while for students to master the concepts.
Note that the arcsine function or $\sin ^{-1} x$ as it is common to write is the inverse function under composition not under multiplication.
That is $$\sin ( \sin ^{-1} x )=x$$ is true on the domain of $ \sin ^{-1} x$
While for $\csc x$ the story is different because it is the multiplicative inverse of $\sin x$ which is $$ ( \csc x)\times (\sin x) =1$$ |
Determine the Dimension of Given subspace | The answer is $2$,
because $x_3$ through $x_{10}$ can all be written as linear combinations of $x_1$ and $x_2$. |
What is the infimum? | Theorem: Let $a>0$ be irrational. Then the sequence of natural numbers is dense in $\mathbb R/(a\mathbb Z)$.
Proof:
The sequence is injective: Assume $n$ and $m$ with $n\ne m$ are equivalent in $\mathbb R/(a\mathbb Z)$. That means $n-m=az$ for a non-zero integer $z$. So $a=(n-m)/z \in\mathbb Q$. Contradiction.
Since $\mathbb R/(a\mathbb Z)$ is compact and the sequence infinite (implied by injectivity) it has a cluster point. In particular we have $n,m$ with $n<m$ which are arbitrarily close together. Let $b\in \mathbb R/(a\mathbb Z)$ and $\varepsilon>0$. Choose $n,m$ with distance less than $\varepsilon$. Then by adding multiples of $(m-n)$ to an arbitrary element of $\mathbb R/(a\mathbb Z)$ we get into an $\varepsilon$-neighborhood of $b$.
QED.
Answer of the question:
Because $\pi$ is irrational the sequence of natural numbers is dense in $\mathbb R/(\pi\mathbb Z)$ (by our Theorem). So there is an increasing sequence of natural numbers $(a_n)_{n\in\mathbb N}$ converging to the equivalence class of $0$. Since $\sin(x)^2:\mathbb R\to \mathbb R$ is periodic of period $\pi$ it factors through $\mathbb R/(\pi\mathbb Z)$. Continuity of $\sin(x)^2$ gives you
$$\lim_{n\to\infty }\sin(a_n)^2 = 0.$$
Since we have $\sin(x)^2\ge 0$ for all $x\in\mathbb R$ we have shown
$$\inf_{n\in\mathbb N} \sin(n)^2=0.$$ |
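A small numerical illustration of this (a sketch; the checkpoints are arbitrary): the running minimum of $\sin(n)^2$ keeps shrinking toward $0$ as $n$ grows.

```python
import math

record = float('inf')
for n in range(1, 10**6 + 1):
    record = min(record, math.sin(n) ** 2)
    if n in (10, 100, 1000, 10**4, 10**5, 10**6):
        print(n, record)     # the record minimum decreases toward 0
```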
Convergence test for Harmonic (ish) sum | Dirichlet's test: $\{w^n\}_{n\geq 1}$ is a sequence with bounded partial sums and $\left\{\frac{1}{n^{1/3}}\right\}_{n\geq 1}$ is a decreasing sequence that converges to zero. The series converges to the value of a Dirichlet L-function, $L\left(\chi,\frac{1}{3}\right)$, where $\chi$ is a non-principal character $\!\!\pmod{3}$. |
Show that $\bigcap\limits_{\alpha \in J} A_{\alpha}$ contains $\bigcap\limits_{\alpha \in I} A_{\alpha}$ if $J \subset I$ | Let $B=\bigcap\limits_{\alpha\in J}A_\alpha$, $C=\bigcap\limits_{\alpha\in I\setminus J}A_\alpha$ and $D=\bigcap\limits_{\alpha\in I}A_\alpha$. The hypothesis is that $J\subseteq I$. This is equivalent to $I=J\cup(I\setminus J)$, which implies that $D=B\cap C$ by definition. Hence $D\subseteq B$. |
The coefficient of $x^{n}$ in the expansion of $(2-3 x) /(1-3 x+$ $\left.2 x^{2}\right)$ is | You're on the right track. Consider splitting the fraction up using partial fractions.
\begin{align*}
\frac{2-3x}{1-3x+2x^2} &= \frac{2-3x}{(1-2x)(1-x)}\\ &=\frac{1}{1-2x} + \frac{1}{1-x}\\&=\sum_{n=0}^\infty 2^nx^n + \sum_{n=0}^\infty x^n\\&=\sum_{n=0}^\infty \left (2^n+1\right )x^n
\end{align*}
Hence, the coefficient of $x^n$ is $2^n+1$. |
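A quick check of the coefficients with sympy (a sketch; the expansion order $6$ is arbitrary):

```python
from sympy import symbols, series

x = symbols('x')
expr = (2 - 3*x) / (1 - 3*x + 2*x**2)
# Expect 2 + 3x + 5x^2 + 9x^3 + 17x^4 + 33x^5 + O(x^6), i.e. coefficients 2^n + 1.
print(series(expr, x, 0, 6))
```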
Limit of an expression to $e$ | Observe that
$$\frac{\sin\frac1{n^2}}{\cos\frac1n}n^2=\frac{\sin\frac1{n^2}}{\frac1{n^2}}\frac1{\cos\frac1n}\xrightarrow[n\to\infty]{}1\cdot\frac11=1$$
Remember also that
$$\text{if}\;\;\lim_{n\to\infty}f(n)=\infty\;,\;\;\text{then}\;\;\lim_{n\to\infty}\left(1+\frac1{f(n)}\right)^{f(n)}=e$$
so now check with
$$\frac1{f(n)}=\frac{\sin\frac1{n^2}}{\cos\frac1n}=\frac1{\frac{\cos\frac1n}{\sin\frac1{n^2}}}\;\ldots$$ |
Help regarding odd/even functions and limits. | $f(x)$ is defined on all $\mathbb{R}$ and so is $g(x)$.
$g(-x)=f(-x)-f(x)=-g(x)$ thus $g(x)$ is an odd function so $g(x)\to 0$ as $x\to -\infty$
This means that $g(x)$ cannot be monotonic on all of $\mathbb{R}$, so there is at least one point $x_0$ where $g'(x_0)=0$
$g'(x_0)=f'(x_0)+f'(-x_0)=0\to f'(x_0)=-f'(-x_0)$
As $f(x)$ is differentiable, $f'(x)$ has the intermediate value property (Darboux's theorem), and since $f'(x)$ takes opposite values at $x_0$ and $-x_0$, there exists at least one $x_1\in(-x_0,x_0)$ such that $f'(x_1)=0$ |
Solution of an exponential equation | we can write $\frac{1}{n}=(1-a)^t$ taking the logarithm of both sides we get $$-\ln(n)=t\ln(1-a)$$ and thus we get $$t=-\frac{\ln(n)}{\ln(1-a)}$$ |
Bernoulli distribution vs the probability mass function | I think your question is a little confusing.
A probability mass function is a function for a discrete random variable which returns probabilities. We denote this by $Pr(X=x)$ which means the probability of the random variable $X$ being $x$. Some examples:
$$
\begin{align}
P(X=x)&=\frac{e^{-\lambda}\lambda^x}{x!}, \quad x=0, 1, 2, \dots \tag{Poisson}\\
P(X=x)&={n \choose x}\theta^x(1-\theta)^{n-x}, \quad x=0, 1, \dots, n \tag{Binomial}\\
P(X=x)&=\frac{1}{10}, \quad x=1, 2, \dots, 10 \tag{Uniform}\\
P(X=x)&=\theta^x(1-\theta)^{1-x}, \quad x=0, 1 \tag{Bernoulli}
\end{align}
$$
The last one is the probability mass function of the Bernoulli distribution, but all of these are probability mass functions. So your question "Are [the Bernoulli distribution and the probability mass function] the same thing?" does not really make sense.
As you point out, if you let $n=1$ in the binomial then you have a Bernoulli distribution. Furthermore, with $\theta=1/2$ you get your example of coin flips. But $\theta$ can be anything between 0 and 1. |
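To make the last point concrete, here is a tiny sketch (the function names are mine) showing that the binomial pmf with $n=1$ gives exactly the Bernoulli pmf:

```python
from math import comb

def binomial_pmf(x, n, theta):
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

def bernoulli_pmf(x, theta):
    return theta**x * (1 - theta)**(1 - x)

theta = 0.3
for x in (0, 1):
    print(binomial_pmf(x, 1, theta), bernoulli_pmf(x, theta))   # identical values
```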
If a symmetric matrix $A$ has $m$ identical rows show that $0$ is an eigen value of $A$ whose geometric multiplicity is atleast $m-1$. | Direct method.
By definition, the geometric multiplicity of $0$ is just the dimension of the space of solutions of $\boldsymbol {Ax}= \boldsymbol 0$. By symmetry, $\boldsymbol A$ has $m$ identical columns. Suppose these columns are the $(k_j)_{j =1}^m$-th columns of $\boldsymbol A$. Let $\boldsymbol e_j$ be the $j$-th standard basis vector, then clearly at least
$$
\boldsymbol e_{k_1} - \boldsymbol e_{k_j} \quad [j =2, \ldots, m]
$$
are solutions of $\boldsymbol {Ax=0}$, thus $\dim(\mathrm{Ker} \boldsymbol A) \geqslant m-1$ as we desire. |
Discretise differential equation | Normally if you would have some non-linear time-invariant multidimensional differential equation with input vector $\vec{u}(t)$ of the form,
$$
\frac{d}{dt}\vec{x}(t) = \vec{f}(\vec{x}(t), \vec{u}(t)), \tag{1}
$$
with,
$$
\vec{x}(t) = \begin{bmatrix}
x_1(t) & x_2(t) & \cdots & x_n(t)
\end{bmatrix}^T, \tag{2}
$$
$$
\vec{u}(t) = \begin{bmatrix}
u_1(t) & u_2(t) & \cdots & u_m(t)
\end{bmatrix}^T, \tag{3}
$$
$$
\vec{f}(\vec{x}, \vec{u}) = \begin{bmatrix}
f_1(\vec{x}, \vec{u}) & f_2(\vec{x}, \vec{u}) & \cdots & f_n(\vec{x}, \vec{u})
\end{bmatrix}^T, \tag{4}
$$
you could linearize it around an equilibrium point $\vec{x}_{eq}$ (such that $\vec{f}(\vec{x}_{eq}, \vec{0})=\vec{0}$). Such a linearization would yield a differential equation of the following form,
$$
\vec{z}(t) = \vec{x}(t) - \vec{x}_{eq}, \tag{5}
$$
$$
\frac{d}{dt}\vec{z}(t) = A\,\vec{z}(t) + B\,u(t), \tag{6}
$$
where $A$ is a $n$ by $n$ matrix and $B$ a $n$ by $m$ matrix. Matrix $A$ will be equal to the Jacobian evaluated at the linearization point, which is equivalent to your first order Taylor expansion, which can be found with,
$$
A = \left.\begin{bmatrix}
\frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n}
\end{bmatrix}\right|_{(\vec{x},\vec{u})=(\vec{x}_{eq},\vec{0})}. \tag{7}
$$
Matrix $B$ can be found in a similar way,
$$
B = \left.\begin{bmatrix}
\frac{\partial f_1}{\partial u_1} & \cdots & \frac{\partial f_1}{\partial u_m} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial u_1} & \cdots & \frac{\partial f_n}{\partial u_m}
\end{bmatrix}\right|_{(\vec{x},\vec{u})=(\vec{x}_{eq},\vec{0})}. \tag{8}
$$
The solution for $\vec{z}(t)$ subjected to equation $(6)$ and initial condition $\vec{z}(t_0)=\vec{z}_0$ for $t\geq t_0$ can be found to be,
$$
\vec{z}(t) = e^{A\,(t-t_0)}\,\vec{z}_0 + \int_{t_0}^t e^{A\,(t - \tau)}\,B\,\vec{u}(\tau)\,d\tau. \tag{9}
$$
Now if you would like to simulate this system in steps of $T$ and assume that,
$$
\vec{u}(t) = \vec{u}(k\,T), \quad k\,T \leq t < (k+1)\,T, \quad \forall\ \ k\in\mathbb{Z}, \tag{10}
$$
then for $t_0=k\,T$ and $t=(k+1)\,T$, equation $(9)$ can be written as,
$$
\vec{z}((k+1)\,T) = \underbrace{e^{A\,T}}_{A_d}\,\vec{z}(k\,T) + \underbrace{\int_{k\,T}^{(k+1)\,T}\mkern-18mu e^{A\,((k+1)\,T - \tau)}\,d\tau\,B}_{B_d = A^{-1}\left(A_d-I\right)B}\,\vec{u}(k\,T). \tag{11}
$$
This is equivalent to the discrete representation,
$$
\vec{z}[k+1] = A_d\,\vec{z}[k] + B_d\,\vec{u}[k]. \tag{12}
$$
For your system you only gave $f_1(\vec{x},\vec{u})$, so you would also need $f_2(\vec{x},\vec{u})$ in order to complete the total linearization and subsequently the discretization.
You could also use your approach, but it will be less accurate. Namely this would only do a first order approximation of the matrix exponentiation and will only yield approximately the same results for very small $T$,
$$
e^{A\,T} = I + A\,T + \frac12 A^2\,T^2 + \frac{1}{3!} A^3\,T^3 + \cdots. \tag{13}
$$ |
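A minimal numerical sketch of equations $(11)$ through $(13)$ (the example matrices $A$, $B$ and the sampling period are arbitrary choices, not the system from the question):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # arbitrary invertible example
B = np.array([[0.0],
              [1.0]])
T = 0.1                               # sampling period

Ad = expm(A * T)                                  # A_d = e^{AT}, equation (11)
Bd = np.linalg.solve(A, (Ad - np.eye(2)) @ B)     # B_d = A^{-1}(A_d - I)B

# Forward-Euler approximation, i.e. the first-order truncation of equation (13):
Ad_euler = np.eye(2) + A * T
Bd_euler = B * T

print(Ad, Bd, sep="\n")
```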
Why does the functional have a local minimum at $0$? | If $y_0$ is a local minimum, then there is $\delta>0$ so that $J(y) \ge J(y_0)$ whenever $||y-y_0||<\delta$.
Now fix $h\neq 0$ and consider the function $\mathcal{J}: (- \epsilon, \epsilon) \to \mathbb{R}$, $\mathcal{J}(t) = J(y_0 + th)$. So if $\epsilon <\frac{\delta}{||h||}$, we have (for all $|t|<\epsilon$)
$$||y_0 + th - y_0|| = |t|\cdot ||h|| <\delta \Rightarrow J(y_0 +th) \ge J(y_0)$$
which is the same as $\mathcal J(t) \ge \mathcal J(0)$. Thus $\mathcal J$ has a local minimum at $t=0$. |
Equiprobable model combinations | Since we are selecting a subset of four balls from a set of balls numbered from $1$ to $10$, the sample space is the set of all four-element subsets of that ten-element set, which has cardinality
$$|S| = \binom{10}{4}$$
Event $E$ is the event that at least one of the numbers on the balls in the sample is less than $4$. To find this probability, we subtract the probability that event $E$ does not occur from $1$. The probability that event $E$ does not occur is the probability that all four balls are selected from the seven balls numbered at least $4$, which is
$$P(\overline{E}) = \frac{|\overline{E}|}{|S|} = \frac{\binom{7}{4}}{\binom{10}{4}}$$
Therefore, the probability that event $E$ occurs is
$$P(E) = 1 - P(\overline{E}) = 1 - \frac{\binom{7}{4}}{\binom{10}{4}}$$
Similarly, the probability, $P(F)$, that at least one ball is even is found by subtracting the probability that all the selected balls have odd numbers. The probability that all the balls have odd numbers is
$$P(\overline{F}) = \frac{|\overline{F}|}{|S|} = \frac{\binom{5}{4}}{\binom{10}{4}}$$
Hence, the probability that at least one of the balls is even is
$$P(F) = 1 - P(\overline{F}) = 1 - \frac{\binom{5}{4}}{\binom{10}{4}}$$
Trying to compute $P(E \cap F)$ directly is difficult. Observe, however, that $E \cap F = S \backslash (\overline{E} \cup \overline{F})$. The event
$\overline{E} \cup \overline{F}$ consists of all subsets of four balls with numbers that are at least $4$ or that contain only balls with odd numbers. In general,
$$|A \cup B| = |A| + |B| - |A \cap B|$$
Observe that since the only odd numbers that are at least $4$ are $5$, $7$, and $9$, it is not possible to select four balls with odd numbers that are at least four. Hence, $\overline{E} \cap \overline{F} = \emptyset$. Thus,
$$|\overline{E} \cup \overline{F}| = |\overline{E}| + |\overline{F}| - |\overline{E} \cap \overline{F}| = |\overline{E}| + |\overline{F}|$$
Hence, the probability that event $E \cap F$ occurs is
\begin{align*}
P(E \cap F) & = P(S \backslash (\overline{E} \cup \overline{F}))\\
& = \frac{|S| - |\overline{E} \cup \overline{F}|}{|S|}\\
& = \frac{|S| - (|\overline{E}| + |\overline{F}|)}{|S|}\\
& = \frac{|S| - |\overline{E}| - |\overline{F}|}{|S|}\\
& = 1 - P(\overline{E}) - P(\overline{F})
\end{align*}
which is
$$P(E \cap F) = 1 - P(\overline{E}) - P(\overline{F}) = 1 - \frac{\binom{7}{4}}{\binom{10}{4}} - \frac{\binom{5}{4}}{\binom{10}{4}}$$ |
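A quick computational check of this value (a sketch; the brute-force enumeration over all $\binom{10}{4}$ subsets confirms the counting argument):

```python
from math import comb
from itertools import combinations

S = comb(10, 4)
P_EF = 1 - comb(7, 4) / S - comb(5, 4) / S

# Brute force: subsets with at least one number below 4 and at least one even number.
count = sum(1 for c in combinations(range(1, 11), 4)
            if any(b < 4 for b in c) and any(b % 2 == 0 for b in c))
print(P_EF, count / S)    # both equal 170/210
```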
Evaluate $\int_{0}^{\infty} (-1)^{\lfloor x\rfloor}\cdot e^{-x} dx $ | This could work if I didn't make some mistake
$$\int_0^\infty (-1)^{[x]}e^{-x}dx=\sum_{n=0}^\infty\int_n^{n+1}(-1)^ne^{-x}dx=\sum_{n=0}^\infty(-1)^n\left(-e^{-n-1}+e^{-n}\right)=$$
$$=-\frac1e\sum_{n=0}^\infty(-e^{-1})^n+\sum_{n=0}^\infty\left(-e^{-1}\right)^n=\left(1-\frac1e\right)\sum_{n=0}^\infty\left(-e^{-1}\right)^n=$$
$$\frac{e-1}e\cdot\frac1{1+e^{-1}}=\frac{e-1}{e+1}$$ |
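A quick numerical check of the value $\frac{e-1}{e+1}\approx 0.4621$, mirroring the interval-by-interval sum above (truncating at $200$ terms is more than enough):

```python
import math

total = sum((-1) ** n * (math.exp(-n) - math.exp(-(n + 1))) for n in range(200))
print(total, (math.e - 1) / (math.e + 1))   # both are approximately 0.46212
```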
Symbol for rational/irrational part of a number | You are talking in the realm of e.g. quadratic rings like $\mathbb{Q}(\sqrt{d})$. Often $d$ is negative (Gaussian integers, for instance), and (even when it isn't) you might as well use the notations $\Re(z)$ and $\Im(z)$. But make double sure your audience knows what you are talking about.
Note that if you want to talk about cubic or higher rings, you get more "basis vectors," and you'd need to extend the notation somehow. |
How do you find the intercepts of 2 some functions | So, you have two functions $$f(x)=600 \sin \left(\frac{2\pi}{3} \left(x-\frac{1}{4}\right)\right)+1000$$ $$g(x)=320 \sin \left(\frac{2 \pi }{7}x\right)+500$$ Plotting the functions, you see that they cross each other an infinite number of times, and you are able to locate the crossings more or less accurately.
So, now, you are looking for the zeros of $$h(x)=600 \sin \left(\frac{2\pi}{3} \left(x-\frac{1}{4}\right)\right)-320 \sin \left(\frac{2 \pi }{7}x\right)+500$$
Only numerical methods will allow you to solve the problem. Probably the simplest would be Newton's method which, starting from a "reasonable" guess $x_0$, updates it according to $$x_{n+1}=x_n-\frac{h(x_n)}{h'(x_n)}$$ For this problem $$h'(x)=400 \pi \cos \left(\frac{2\pi}{3} \left(x-\frac{1}{4}\right)\right)-\frac{640}{7} \pi \cos \left(\frac{2 \pi }{7}x\right)$$
Let us apply the method to the first positive root using $x_0=2$; the method generates the iterates $\{1.890686453,1.897805832,1.897828411\}$, and we obtain the solution to ten significant figures in three iterations.
Let us repeat for the second root using $x_0=3$; the method generates the iterates $\{2.954599701,2.953387580,2.953386630\}$; again, three iterations for ten significant figures. |
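A short sketch of this Newton iteration in Python (the tolerance and iteration cap are arbitrary choices):

```python
import numpy as np

def h(x):
    return 600*np.sin(2*np.pi/3*(x - 0.25)) - 320*np.sin(2*np.pi/7*x) + 500

def dh(x):
    return 400*np.pi*np.cos(2*np.pi/3*(x - 0.25)) - (640/7)*np.pi*np.cos(2*np.pi/7*x)

def newton(x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = h(x) / dh(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton(2.0))   # first positive root, about 1.8978...
print(newton(3.0))   # second root, about 2.9534...
```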
Path of diffusion process with discontinuous drift | Applying Girsanov's theorem is overkill. Note that, by definition,
$$X_t = X_0 + \int_0^t \mu_s \, ds+ B_t, \qquad t \geq 0.$$
We know that $t \mapsto B_t$ is continuous (almost surely); moreover, it is well-known that mappings of the form
$$t \mapsto I(t) := \int_0^t \mu_s \, ds$$
are continuous whenever the integral is well-defined. (Just consider e.g. $\mu(s) = 1_{[1,2]}(s)$; draw a picture to see that $t \mapsto I(t)$ is continuous, it doesn't have any jumps.)
This means that $t \mapsto X_t$ is continuous almost surely since it is the sum of two continuous functions. |
Finding the Discriminant of $f(x)=x^n+ax+b$ Using Differentiation | Undoubtedly you have seen the formula
$$
\operatorname{disc}(\alpha)=(-1)^{n(n-1)/2}\prod_{i=1}^nf'(\alpha_i),
$$
where $\alpha_i$, $i=1,2,\ldots,n,$ are the conjugates of $\alpha$.
The relations arising from the factorization
$$
f(x)=\prod_{i=1}^n(x-\alpha_i)\qquad(*)
$$
will also play a role.
The norm calculation that your professor talked about is equivalent to my use of $f(q)$ below. Basically it means that for a rational number $q$ we have $f(q)=N(q-\alpha)$. You may have done related tricks in class, so I cannot tell how familiar you are with this technique.
I first describe how I would do this (this sounds familiar actually - I'm fairly sure I have done this exercise at some point). Then at the bottom I make a few comments about the material you posted. I'm afraid I'm not sure that I will answer your questions.
The relations
$$
f'(\alpha_i)=\frac{-n(a\alpha_i+b)+a\alpha_i}{\alpha_i}
$$
that hold for all $i$ are the key. We get
$$
\prod_{i=1}^nf'(\alpha_i)=\prod_{i=1}^n\frac{a(1-n)\alpha_i-nb}{\alpha_i}.\qquad(**)
$$
Here the product of the denominators is $\prod_{i=1}^n\alpha_i=(-1)^nb$, because that product emerges as the constant term of the minimal polynomial $f$ (= the norm of $\alpha$ up to a sign). In the numerator let's write
$$
a(1-n)\alpha_i-nb=-a(1-n)\left(\frac{nb}{a(1-n)}-\alpha_i\right)
$$
Here the fraction $q=nb/(a(1-n))$ is independent of $i$. Thus the factorization $(*)$ tells us that
$$
\begin{aligned}
\prod_{i=1}^n(a(1-n)\alpha_i-nb)&=(-1)^n(a(1-n))^n\prod_{i=1}^n(q-\alpha_i)\\
&=(-1)^na^n(1-n)^nf(q).
\end{aligned}
$$
Combining this with $(**)$ tells us that
$$
\prod_{i=1}^nf'(\alpha_i)=\frac{a^n(1-n)^n}{b}f(q).
$$
I'm sure you can take it from here.
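A quick symbolic check of the resulting formula $\operatorname{disc}=(-1)^{n(n-1)/2}\,\frac{a^n(1-n)^n}{b}\,f(q)$ for a small degree (a sketch with sympy; $n=5$ is an arbitrary choice):

```python
from sympy import symbols, discriminant, simplify

x, a, b = symbols('x a b')
n = 5
f = x**n + a*x + b
q = n*b / (a*(1 - n))
prod_fprime = a**n * (1 - n)**n / b * f.subs(x, q)          # product of f'(alpha_i)
disc_formula = (-1)**(n*(n - 1)//2) * prod_fprime
print(simplify(discriminant(f, x) - disc_formula))          # prints 0
```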
You can also use the fact that you mentioned:
$$g(x):=\left( \frac{x-nb}{(n-1)a}\right)^n + a\left( \frac{x-nb}{(n-1)a}\right)+b$$
is the minimal polynomial of $(n-1)a\alpha +nb=a(1-n)(q-\alpha)$. You need to first scale $g(x)$ so that it becomes monic. Then you need to expand and find the constant term of that scaled $g$. That can be used much the same way as I used $f(q)$ (that I didn't bother to calculate!). If you pick the terms that do not contain $\beta$ from the left hand side of the equation on the third line from the bottom, you do get this. I'm not sure why you used that $\beta$ though. |
How do we find the distance of a point on oblate (spheroid) from its two foci ? Can anyone give example? | This is the fourth time you have asked the same question, but this time you provide an important extra piece of information. Since the two "minor axes" are equal the ellipsoid is a solid of revolution about the $x$ axis and the sum of the distances from the foci to any point on the ellipse is in fact the same $2a$ as in the $x$-$y$ plane. |
Why do you draw the triangle as in the picture below and not in any other way? | Set the top vertex at the origin. "In the closed area limited by the graph" sounds a little sloppily formulated, because the graph itself does not limit any closed area. I guess they mean "limited by the graph and the $x$-axis". Then the base should be above the $x$-axis and below the line $y=4$. "Parallel to $x$-axis" means just that. "Inscribed" means the two other vertices lie on the parabola. Otherwise, there are no more restrictions. The base in the picture goes through $y=3$, but that is just a coincidence. You can draw it any other way; it is simply an example of a possible triangle (not likely the one that maximizes the area). When you vary the base in all possible ways between the $x$-axis and the line $y=4$, you'll get different triangles, and you are to find the one with the largest area. |
Prove $f$, satisfying $\left|f(x)-f(y)\right|\le K\left|x-y\right|^{\alpha}$, is constant. Proof strategy. | Let $y \in [0,1] \Rightarrow \displaystyle \lim_{x\to y} \left|\dfrac{f(x)-f(y)}{x-y}\right| \leq \displaystyle \lim_{x\to y}K|x-y|^{\alpha-1}\Rightarrow |f'(y)| = 0 \Rightarrow f'(y) = 0 \Rightarrow f = C$ |
Convergent series implication question. | This is a consequence of the fact that convergent sequences are Cauchy sequences, applied to the sequence of partial sums of the series:
For any $\varepsilon>0$, there exists $N$ such that
$$ \Big|\sum_{k=n}^mx_k\Big|<\varepsilon $$
for all $m\geq n\geq N$. Fixing $n$ and taking $m\to\infty$ shows that
$$ \Big|\sum_{k=n}^{\infty}x_k\Big|\leq \varepsilon $$
for all $n\geq N$, and since $\varepsilon$ is an arbitrary positive number it follows that $\sum_{k=n}^{\infty}x_k\to 0$ as $n\to\infty$. |
If $A$ is an $m\times n$ matrix and $B$ is an $n\times m$ matrix such that $AB=I$, prove that rank$(B)=m$ | We can see the matrix $B$ as a linear transformation
$$B:\Bbb R^m\to \Bbb R^n,\quad x\mapsto Bx$$
and recall a famous and simple result from the basic set theory:
If $f\circ g$ is injective then $g$ is injective
so from the hypothesis $AB=I$ we get that $B$ is injective and then by the rank-nullity theorem
$$\operatorname{rank}(B)=\dim\Bbb R^m=m$$ |