title | upvoted_answer
---|---|
How prove this $ \sum_{i\neq j}|x_{i}-y_{i}|\ge\sum|x_{i}-x_{j}|+\sum|y_{i}-y_{j}|$ | Let $n=2$ and choose $x_1=y_2=0$ and $x_2=y_1=1$. Then the left side is $0$ since the two possible nonzero contributions $|x_1-y_1|$ and $|x_2-y_2|$ are ruled out by the summation restriction $i \ne j.$ But the right side is positive, since it contains at least the term $|x_1-x_2|=1.$
I believe the inequality might hold if the $\ge$ is replaced by $\le$, by use of, for $i \ne j$
$$|x_i-y_j|-|x_i-y_i| \le |y_i-y_j|, \\
|x_i-y_j|-|x_j-y_j| \le |x_i-x_j|,$$
each of these obtained by a rearranged triangle inequality. The extra subtracted terms on the left are of the type excluded in the left sum of the proposed inequality of the OP (with the inequality sense reversed). Looking at some small $n$ however, I couldn't get an actual proof using the $\le$ direction either.
Added: The inequality doesn't hold in the $\le$ version either, since one can take all the $x_i$ equal to $a$ and all the $y_i$ equal to $b$, which makes the right side $0$ but each term on the left side is $|a-b|$. |
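Both counterexamples are easy to check by brute force; a quick sketch (the helper names are mine, not from the question):

```python
def cross_sum(x, y):
    # sum over i != j of |x_i - y_j| (the restricted left-hand side)
    return sum(abs(xi - yj) for i, xi in enumerate(x)
               for j, yj in enumerate(y) if i != j)

def pair_sum(v):
    # sum over i < j of |v_i - v_j|
    return sum(abs(v[i] - v[j]) for i in range(len(v))
               for j in range(i + 1, len(v)))

# counterexample to ">=": x = (0, 1), y = (1, 0)
assert cross_sum([0, 1], [1, 0]) == 0
assert pair_sum([0, 1]) + pair_sum([1, 0]) == 2

# counterexample to "<=": all x_i = 0, all y_i = 1
assert cross_sum([0, 0], [1, 1]) == 2
assert pair_sum([0, 0]) + pair_sum([1, 1]) == 0
```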
Problem with central limit theory | The reason that the theorem is called the "central limit theorem" is that it describes the functional form of the center (middle) of the distribution. It really has nothing to say about the tails (something that is frequently misunderstood). |
Haar basis on $L^2(0,1)$ - proof? | There is actually a very nice way to prove completeness, which I learned in my stochastic calculus course. Just to simplify notation, let $J_{j,n}:=((j-1)2^{-n},j2^{-n}]$ for $j=1,\dots,2^n$. Then you can write $f_{n,j}(t):=2^{\frac{n}{2}}(\mathbf1_{J_{2j-1,n+1}}(t)-\mathbf1_{J_{2j,n+1}}(t))$, where $\mathbf1_{J_{2j,n+1}}(t)$ stands for the characteristic function of the set $J_{2j,n+1}$. Moreover, let
$$\mathcal{B}_n:=\sigma(J_{j,n+1};j=1,\dots,2^{n+1});$$
then the $\sigma$-algebras $\mathcal{B}_n$ increase to $\mathcal{B}[0,1]$. Let $f\in L^2[0,1]$; by the standard example of conditional expectation we have:
$$E[f|\mathcal{B}_n]=\sum_{j=1}^{2^{n+1}}2^{n+1}(\int f\mathbf1_{J_{j,n+1}}d\lambda)\mathbf1_{J_{j,n+1}}\tag{1}$$
where $\lambda$ denotes the Lebesgue measure. Now suppose $(f,f_{n,j})=0$ for all $n,j$; then one obtains by induction that $\int f\mathbf1_{J_{j,n+1}}d\lambda=0$ for all $n,j$. Hence the RHS in $(1)$ vanishes. But by the martingale convergence theorem, the LHS of $(1)$ converges to $f$ $\lambda$-a.s. on $[0,1]$. So we conclude that if $f$ is orthogonal in $L^2[0,1]$ to all the $f_{n,j}$, then $f\equiv 0$. |
Existence of a bounded sequence for a family of continuous linear functionals over a Banach space | By assumption, $T:X\to\ell_1$, $x\mapsto (\varphi_n(x))_{n\in\mathbb N}$ is a well-defined map which is linear and has closed graph (due to the continuity of all $\varphi_n$). For $\varphi\in X^*$ define a functional $\psi: T(X) \to \mathbb K$ by $T(x) \mapsto \varphi(x)$. Then check that $\psi$ is well-defined and continuous with respect to the $\ell_1$-norm. By Hahn-Banach, you can extend $\psi$ to an element $\Psi \in \ell_1^*$ which has a representation $\Psi(y)=\sum\limits_{j=1}^\infty a_j y_j$ for a bounded sequence $a$. Then you get $$\varphi(x)=\psi(T(x))=\Psi(T(x))=\sum_{j=1}^\infty a_j\varphi_j(x).$$ |
Doubt about whether repunits are square | An odd number can be a square; for example $121=11^2$.
But if you try squaring even ($2k$) and odd ($2k+1$) numbers, you will see that squares must leave remainder $0$ or $1$ when divided by $4$.
The remainder when a number is divided by $4$ is the same as the remainder when the number comprising its last two digits is divided by $4$.
Therefore, the remainder when the repunit number $111\dots11$ is divided by $4$ is the same as the remainder when $11$ is divided by $4$, which is $3$, so $111\dots11$ cannot be a square. |
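The mod-$4$ argument is quick to verify directly; a small sketch:

```python
from math import isqrt

for k in range(2, 15):
    rep = int("1" * k)             # the k-digit repunit 11...1
    assert rep % 100 == 11         # its last two digits are 11
    assert rep % 4 == 3            # so rep = 3 (mod 4)
    assert rep % 4 not in (0, 1)   # but squares are 0 or 1 (mod 4)
    assert isqrt(rep) ** 2 != rep  # hence rep is not a perfect square
```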
Recurrence relations for annuities | $\ddot a_{\overline{n}\rceil i}$ represents the present value of an annuity-due of $1$ paid at the beginning of each period with periodic effective interest rate $i$, for $n$ periods (payments). As such, it is easy to see that the present value of the $k^{\rm th}$ payment is $v^{k-1}$, where $v = 1/(1+i)$ is the periodic present value discount factor. The sum of the present values of each payment is therefore $$\ddot a_{\overline{n}\rceil i} = 1 + v + v^2 + \cdots + v^{n-1}.$$
We may also observe that for such an annuity-due, if there is an additional payment at the beginning of period $n+1$, the present value of this payment is simply $v^n$; therefore, the total present value is the sum of the present values of the annuity-due for the first $n$ periods, plus the present value of the $(n+1)^{\rm th}$ payment; i.e. $$\ddot a_{\overline{n+1}\rceil i} = (1 + v + v^2 + \cdots + v^{n-1}) + v^n = \ddot a_{\overline{n}\rceil i} + v^n.$$ This is one such recursion relation.
Another possible recursion relation that we may write is found by considering that by deferring an annuity-due of $n$ payments by one period, we get the equivalent of the cash flow for payments $2, 3, \ldots, n+1$ on an annuity-due with $n+1$ payments, hence $$\ddot a_{\overline{n+1}\rceil i} = 1 + v(1 + v + v^2 + \cdots + v^{n-1}) = 1 + v \ddot a_{\overline{n}\rceil i}.$$
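Both recursion relations are easy to check numerically; a sketch with an arbitrary illustrative rate of $i = 5\%$ (my choice, not from the problem):

```python
i = 0.05          # illustrative periodic effective rate
v = 1 / (1 + i)   # present value discount factor

def annuity_due(n):
    # present value of an annuity-due of 1 for n periods: 1 + v + ... + v^(n-1)
    return sum(v ** k for k in range(n))

for n in range(1, 20):
    # first recursion: extra payment at the beginning of period n+1
    assert abs(annuity_due(n + 1) - (annuity_due(n) + v ** n)) < 1e-12
    # second recursion: defer an n-payment annuity-due by one period
    assert abs(annuity_due(n + 1) - (1 + v * annuity_due(n))) < 1e-12
```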
Part (b) is trivial and is left as an exercise.
Part (c) is unclear. Is the question asking for the recurrence for $a_{\overline{n}\rceil i}$, an annuity-immediate for $n$ periods, or for $\bar{a}_{\overline{n}\rceil i}$, a continuous annuity over $n$ periods with effective periodic interest rate $i$?
Part (d) requires more consideration than you have provided in your work. We have $$(I\ddot a)_{\overline{n}\rceil i} = 1 + 2v + 3v^2 + \cdots + nv^{n-1}.$$ This is the definition along the lines of what we have reasoned for the level annuity-due above. We wish to establish a recursion relation for this. Of course, we can simply write $$(I\ddot a)_{\overline{n+1}\rceil i} = (1 + 2v + 3v^2 + \cdots + nv^{n-1}) + (n+1)v^n = (I \ddot a)_{\overline{n}\rceil i} + (n+1)v^n,$$ or we can write
$$\begin{align*}
(I \ddot a)_{\overline{n+1}\rceil i} &= 1 + 2v + 3v^2 + \cdots + (n+1)v^n \\
&= (1 + v + v^2 + \cdots + v^n) + (v + 2v^2 + 3v^3 + \cdots + nv^n) \\
&= \ddot a_{\overline{n+1}\rceil i} + v(1 + 2v + 3v^2 + \cdots + nv^{n-1}) \\
&= \ddot a_{\overline{n+1}\rceil i} + v (I\ddot a)_{\overline{n}\rceil i}
\end{align*}$$
The problem you are having is that you are not reasoning from first principles, instead electing to use the derived formulas for these annuities, when you have not in fact demonstrated them to be true. Moreover, these formulas are not particularly useful for writing out recurrence relations, since the original summation much more readily yields a number of possible recurrences. |
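The recursions for the increasing annuity-due can be checked the same way (again with an illustrative rate of my choosing). Note that the split $1 + 2v + \cdots + (n+1)v^n = (1 + v + \cdots + v^n) + v(1 + 2v + \cdots + nv^{n-1})$ pairs $(I\ddot a)_{\overline{n+1}\rceil i}$ with the $(n+1)$-period level annuity-due:

```python
i = 0.05
v = 1 / (1 + i)
adue = lambda n: sum(v ** k for k in range(n))             # 1 + v + ... + v^(n-1)
Iadue = lambda n: sum((k + 1) * v ** k for k in range(n))  # 1 + 2v + ... + n v^(n-1)

for n in range(1, 20):
    # recursion via the extra (n+1)-st payment
    assert abs(Iadue(n + 1) - (Iadue(n) + (n + 1) * v ** n)) < 1e-12
    # recursion via the split above: level (n+1)-annuity plus deferred increasing annuity
    assert abs(Iadue(n + 1) - (adue(n + 1) + v * Iadue(n))) < 1e-12
```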
How to define characteristic function of the coalitional game based on the situation? | Your starting point is a general bankruptcy problem that is defined by an ordered pair $(E,\mathbf{d})$, where $E \in \mathbb{R}$ and $\mathbf{d}:=(d_{1},\ldots,d_{n}) \in \mathbb{R}^{n}$ s.t. $d_{i} \ge 0$ for all $1 \le i \le n$ and $0 \le E \le \sum_{i=1}^{n}\,d_{i}$ is given. From this generalized bankruptcy problem a bankruptcy game $(N,v_{E;\mathbf{d}})$ in characteristic function form can be derived while defining
$v_{E;\mathbf{d}}(S):= \max\big(0,E-\sum_{j \in N\backslash S}\,d_{j}\big)$
for all $S \subseteq N$. If $E_{1}=100$ and $\mathbf{d}=(100,200,300)$, then for the associated bankruptcy game we get $v_{E_{1};\mathbf{d}}(N)=100$ and $v_{E_{1};\mathbf{d}}(S)=0$ otherwise, so the Shapley value distributes $100/3$ to each player. For $E_{2}=200$, we get $shv(v_{E_{2};\mathbf{d}})=(100,250,250)/3$, and finally for $E_{3}=300$, we obtain $shv(v_{E_{3};\mathbf{d}})=(50,100,150)$.
These values differ from the allocations assigned by the generalized contested garment principle for the bankruptcy problem, which coincide with the nucleoli of the corresponding bankruptcy games. I leave this as an exercise for you. |
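The stated Shapley values can be reproduced by brute force from the definition of $v_{E;\mathbf{d}}$, averaging marginal contributions over all player orderings (taking $E_3 = 300$, the value the pattern of the example suggests):

```python
from itertools import permutations
from math import factorial

def v(S, E, d):
    # bankruptcy game: v(S) = max(0, E - sum of claims outside S)
    return max(0, E - sum(dj for j, dj in enumerate(d) if j not in S))

def shapley(E, d):
    n = len(d)
    phi = [0.0] * n
    for order in permutations(range(n)):
        S = set()
        for i in order:  # add player i, record its marginal contribution
            phi[i] += v(S | {i}, E, d) - v(S, E, d)
            S.add(i)
    return [p / factorial(n) for p in phi]

d = (100, 200, 300)
assert shapley(100, d) == [100 / 3] * 3
assert shapley(200, d) == [100 / 3, 250 / 3, 250 / 3]
assert shapley(300, d) == [50.0, 100.0, 150.0]
```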
The radical of the commutator subgroup | Yes, because the commutator subgroup is a characteristic subgroup of $G$. Then the claim follows from this duplicate:
Prove that if $H$ is a characteristic subgroup of $K$, and $K$ is a normal subgroup of $G$, then $H$ is a normal subgroup of $G$ |
Approximate a Sum using Integration by Parts | You could do a change of variables $t \to t/\mu - 1$. Then the integral takes the form of the exponential integral function with a prefactor and a different argument:
$$ \int_0^n \frac{e^{\mu t}}{1 + t}\, dt = e^{-\mu} \int_{\mu}^{\mu(n+1)} \frac{e^t}{t}~ dt = e^{-\mu} \Big( \text{Ei}[\mu(n+1)] - \text{Ei}[\mu] \Big) $$
Does that help?
Alternatively:
We know that the exponential function is increasing very rapidly. So rapidly, in fact, that the integral over all values up to $x$ is about as large as $e^x$ itself. Maybe this holds for the sum as well:
$$\int_0^x e^{\mu t} dt \approx \mu^{-1} e^{\mu x} \overset{?}{\rightarrow} \sum_{i=0}^n e^{i \mu} \approx \mu^{-1} e^{\mu (n+1)}.$$
If so, then the $1/(1+i)$ should not change much:
$$\sum_{i=0}^n \frac{e^{\mu i}}{i+1} \approx \mu^{-1} \frac{e^{\mu (n+1)}}{n+2}. $$
I tried a few examples and it seems to give at least the right order of magnitude for $n\mu$ up to 100. That means about $50\%$ error, but such deviations already appear when comparing $\int e^t dt$ and $\sum_i e^i$. So a better approximation can probably not be achieved through the integral, but only by looking at the sum directly. |
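A quick numerical comparison of the sum against the proposed approximation (the parameter values below are my own illustrative choices):

```python
import math

def ratio(mu, n):
    # compare sum_{i=0}^n e^{mu i}/(i+1) with e^{mu(n+1)} / (mu (n+2))
    s = sum(math.exp(mu * i) / (i + 1) for i in range(n + 1))
    approx = math.exp(mu * (n + 1)) / (mu * (n + 2))
    return approx / s

for mu, n in [(0.5, 40), (1.0, 20), (1.0, 100)]:
    r = ratio(mu, n)
    # right order of magnitude, with errors of some tens of percent
    assert 0.5 < r < 2.5
```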
An observation about $x^n$ when $n$ is a positive integral power of $5$ | In base 10, in order for $x^n$ to have the same final digit as $x$ (what I assume you meant to type), we gather requirements for $n$ by looking at each possible final digit.
0 and 1 place no requirements on $n$. (That is, if $x$ ends in 0 or 1, then so does $x^n$ regardless of $n$)
2 adds the requirement that $n$ be one more than a multiple of $4$. (since if $x$ ends in 2, then $x^n$ ends in 2, 4, 8, 6, 2, ...)
3 adds the same requirement (that $n$ be one more than a multiple of $4$).
4 adds the requirement that $n$ be odd.
5 and 6 place no requirement on $n$.
7 requires that $n$ be one more than a multiple of $4$. So does 8.
9 requires that $n$ be odd.
So in fact, any $n$ that is one more than a multiple of 4 should work, and indeed:
1^13 = 1
2^13 = 8192
3^13 = 1594323
4^13 = 67108864
5^13 = 1220703125
6^13 = 13060694016
7^13 = 96889010407
8^13 = 549755813888
9^13 = 2541865828329 |
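The pattern is quick to confirm exhaustively (a sketch):

```python
# n = 1 (mod 4) preserves the last digit for every x; other odd n need not
for n in (1, 5, 9, 13, 17):
    assert all((x ** n) % 10 == x % 10 for x in range(100))
assert (2 ** 3) % 10 != 2 % 10   # e.g. 2^3 = 8 ends in 8, not 2
```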
(UPDATED): Find the minimum-variance unbiased estimator of a given function | Yes, this solution is almost correct, and it's a nice solution. The only slight technical error is that you should argue that $T$ is a complete sufficient statistic. If a sufficient statistic isn't complete, different unbiased estimators based on it may have different variance. |
Prove that the sequence $a_{n+1}=0.5(a_n+\frac{1}{a_n})$ with $a_1=2$ is decreasing | $a_{n+1} \geq 1$ by AM-GM inequality. Hence $a_n \geq 1$ for all $n$. Now $a_{n+1} \leq a_n$ reduces to $a_n+\frac 1 {a_n} \leq 2a_n$ or $a_n \geq \frac 1{a_n}$ which is true. |
Decomposing interval in $\mathbb{R^n}$ | I will use $d$ for the dimension of the ambient space $\Bbb R^d$. I understand that each $I'_n$ is an interval. Fix an integer $M$ and decompose $I'_n$ into $M^d$ subintervals, where the length of each side of a subinterval is the length of the parallel side of $I'_n$ divided by $M$. The diameter of each subinterval is the diameter of $I'_n$ divided by $M$. You can make it as small as you wish by taking $M$ large enough. |
Bayesian inference from normal distribution | In the step that you are referencing, the square is to be completed on $\theta$: the LHS is not complete because the exponent is a sum of squares, and it needs to be rewritten as a normal likelihood with respect to the posterior distribution of $\theta$.
How does the algebra actually proceed? Consider just the summation in the LHS expression:
$$\begin{align*} \sum_{i=1}^n \frac{(x_i - \theta)^2}{2\sigma_i^2}
&= \frac{1}{2} \sum_{i=1}^n \left(\frac{\theta^2}{\sigma_i^2} - \frac{2x_i}{\sigma_i^2} \theta + \frac{x_i^2}{\sigma_i^2}\right) \\
&= \frac{1}{2} \left( \frac{\theta^2}{v} - 2 \frac{m}{v} \theta + \sum_{i=1}^n \frac{x_i^2}{\sigma_i^2} \right) \\
&= \frac{1}{2v} \left( (\theta^2 - 2m \theta + m^2) - m^2 + v \sum_{i=1}^n \frac{x_i^2}{\sigma_i^2} \right) \\
&= \frac{(\theta-m)^2}{2v} + C,
\end{align*}$$
where $C$ is a constant that does not depend on $\theta$. Then, when taking the exponential, $\exp(-C)$ turns into the multiplicative constant $d$ used by the text. |
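A numerical sanity check that the leftover term really is constant in $\theta$, assuming (as the notation above suggests) $1/v=\sum_i 1/\sigma_i^2$ and $m/v=\sum_i x_i/\sigma_i^2$; the data and variances below are arbitrary illustrative values:

```python
xs = [1.0, 2.5, -0.3]
sigmas = [0.5, 1.2, 2.0]
v = 1 / sum(1 / s ** 2 for s in sigmas)
m = v * sum(x / s ** 2 for x, s in zip(xs, sigmas))

def leftover(theta):
    # C = sum (x_i - theta)^2 / (2 sigma_i^2)  -  (theta - m)^2 / (2v)
    return (sum((x - theta) ** 2 / (2 * s ** 2) for x, s in zip(xs, sigmas))
            - (theta - m) ** 2 / (2 * v))

vals = [leftover(t) for t in (-3.0, 0.0, 1.7, 10.0)]
assert max(vals) - min(vals) < 1e-9   # constant, up to rounding
```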
Suppose that $V ⊂\mathbb R^n$ is a subspace. How to show that $V$ is a normal subgroup of $\mathbb R^n$? | It's a subspace, which means it has a group law under which it is closed (the law comes from the "sub" part, and closure is the "space" part). It's a normal subgroup because $\mathbb{R}^n$ is abelian, and any subgroup of an abelian group is normal.
Alternately, you could directly verify the group law and abelian nature of the subspace. |
Proving nested interval theorem from least upper bound property | He is saying that the nested interval property works even if the set of nested intervals is indexed by the empty set, $\emptyset$. The intersection over the empty set is always the space you are working in because the elements of an intersection are those elements that appear in every set being intersected. As there are no sets for the elements to not appear in, they appear in every set. |
Finding Lagrange Error Bound | The Lagrange error bound of a Taylor polynomial gives the worst-case scenario error of the Taylor approximation on some interval. It leverages the fact that a Taylor approximation of order $n$ has an error term of order $n+1$. More precisely,
$$(\forall x, x_0 \in I)\,(\forall n \in \mathbb{Z}_{+})\,(\exists \xi \text{ between } x_0 \text{ and } x) \text{ s.t. } f(x) - T_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} (x-x_0)^{n+1}$$
This error term can be bounded from above as follows:
$$\bigg| \frac{f^{(n+1)}(\xi)}{(n+1)!} \bigg| \le \max_{t \in I} \bigg|\frac{f^{(n+1)}(t)}{(n+1)!} \bigg| = M_n,$$
so that $|f(x) - T_n(x)| \le M_n\,|x-x_0|^{n+1}$; this is the Lagrange error bound for the $n$-th order Taylor polynomial. |
How to show for a simple regression with an intercept and one independent variable $R^2 = r ^2$ , where $r$ is the ordinary correlation coefficient. | Consider:
The Pearson Product Moment Correlation Coefficient $r$ is an estimate of $\rho$, the population correlation coefficient, which measures the strength of a linear relationship between the two variables $x$ and $y$ ($x$ independent and $y$ dependent):
$r$ $=$ $\dfrac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2 \cdot \sum_{i=1}^{n}(y_i-\bar{y})^2 }}$
where the bar in $\bar{x}$ and $\bar{y}$ represents the mean value of $x$ and $y$. $R^2$ is the percentage of variance accounted for by the regression model and is defined as indicated above and more fully as:
$R^2$ $=$ $\dfrac{\text{Sum of Squares Regression}}{\text{Sum of Squares Total}}$ $=$ $\dfrac{\sum_{i=1}^{n}(\hat{y_i}-\bar{y})^2}{\sum_{i=1}^{n}(y_i-\bar{y})^2}$
where the hat in $\hat{y_i}$ represents the estimated $i$th $y$ value using the estimated regression line. Note that $\hat{Y}$ represents the estimated $Y$ value based on the estimated regression line, $\bar{Y}$ represents the mean of the dependent variable $Y$, and $Y$ represents the dependent variable.
Post Script: $\hat{y_i}$ $=$ $b_0 + b_1 \cdot x_i$ where $b_0$ is the estimate of the intercept and $b_1$ is the estimate of the slope. |
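The identity $R^2 = r^2$ follows because $\hat{y}_i - \bar{y} = b_1(x_i - \bar{x})$, so the regression sum of squares equals $S_{xy}^2/S_{xx}$. A quick numerical confirmation on synthetic data of my own:

```python
import random
random.seed(0)

x = [random.gauss(0, 1) for _ in range(50)]
y = [2 * xi + 1 + random.gauss(0, 1) for xi in x]   # arbitrary linear model
xbar, ybar = sum(x) / len(x), sum(y) / len(y)

sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
sxx = sum((xi - xbar) ** 2 for xi in x)
syy = sum((yi - ybar) ** 2 for yi in y)

b1 = sxy / sxx                  # slope estimate
b0 = ybar - b1 * xbar           # intercept estimate
yhat = [b0 + b1 * xi for xi in x]

R2 = sum((yh - ybar) ** 2 for yh in yhat) / syy     # SSR / SST
r = sxy / (sxx * syy) ** 0.5                        # Pearson r
assert abs(R2 - r ** 2) < 1e-12
```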
If $A$ is $k\times l$ and $B$ is $l\times k$, $l\geq k$ and full-rank, must $AB$ have full-rank? | A counterexample is
$$\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0\end{bmatrix} \begin{bmatrix}0 & 0 \\ 1 & 0 \\ 0 & 1\end{bmatrix} = \begin{bmatrix}0 & 0 \\ 1 & 0\end{bmatrix}.$$
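This product is quick to check numerically (a sketch using NumPy):

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 1, 0]])
B = np.array([[0, 0],
              [1, 0],
              [0, 1]])

assert np.linalg.matrix_rank(B) == 2            # B has full rank
assert np.array_equal(A @ B, [[0, 0], [1, 0]])  # the product above
assert np.linalg.matrix_rank(A @ B) == 1        # but AB is not full rank
```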
In general, if $k < \ell$, then $A$ will have a nontrivial kernel, so there will be at least one vector $x \in \mathbb R^{\ell}$ such that $Ax=0$. The easiest way to make sure $AB$ is not invertible is to make $x$ one of the columns of $B$: in that case, $x$ is definitely in the image of $B$, so there will be a vector $y\in\mathbb R^k$ such that $x = By$ and therefore $ABy=0$.
Also, in the case $k=1$, the quoted statement is just saying "the dot product of two vectors in $\mathbb R^\ell$ is never zero", which is very false. |
A Countable Powerset -- Where am I wrong? | You hit some sort of circularity with separation.
While there are only countably many formulas (and that countable collection is not really countable from the standpoint of set theory. It's nonexistent, since it's a part of the meta-theory); these formulas are allowed to have parameters.
All you really show here is that a lot of the subsets of $\Bbb N$ will be definable using separation axioms of the form $\varphi(x,p):= x\in p$ where $p$ is a parameter, and $A=\{x\in\Bbb N\mid\varphi(x,A)\}$. It looks a bit circular, but it isn't if you really think about it.
But there is another issue here. Separation axioms don't rely on "previously added sets" in the hierarchical order of things. Note that $\{(n,k)\mid 2^{\aleph_n}=\aleph_k\}$ is a definable subset of $\Bbb N^2$ (and so by encoding it defines a subset of $\Bbb N$), but it's not really added after adding $\Bbb N$. It will only be added after we've added a lot more, namely all the $\aleph_n$'s and all their power sets.
And we can add all sorts of crazy definitions like that. The freedom to use parameters, and to use formulas which address the entire set-theoretic universe and not just $\Bbb N$, shows that there are uncountably many subsets. The power set axiom just ensures that there are only "set many" of them.
What's even more important is to understand that the axioms don't "define" the structure. Instead, the structure is given and we check that it satisfies the axioms. Does your argument mean that any model of a countable language is countable, just because? Not really. It just shows that a countable language can define only countably many elements in a model (unless you allow parameters).
Let me point out a similarity with the von Neumann construction of $V$ (or really, any other hierarchical construction). The sets come from somewhere. They already exist. We just show that we can write the universe as a limit of a hierarchy of sets. Similarly in your question, the sets already exist; the axioms don't bring them to life, they just allow us to show that particular sets exist. |
What does the symbol "`\`" mean in the context of set operation? | The operator $\setminus$ is the set difference defined by
$$A\setminus B=\{a\in A:a\not\in B\}$$ |
Is the tensor algebra on a differential ring a differential ring, naturally? | Maybe the arXiv preprint 1106.1856
J. Terilla, T. Tradler, S. O. Wilson: Homotopy DG algebras induce homotopy BV algebras (2011)
is close to what you are looking for. In fact, the abstract says
Let $TA$ denote the space underlying the tensor algebra of a vector space $A$. In this short note, we show that if $A$ is a differential graded algebra, then $TA$ is a differential Batalin-Vilkovisky algebra. Moreover, if $A$ is an $A$-infinity algebra, then $TA$ is a commutative BV-infinity algebra. |
Question on notation in Eisenbud's Commutative Algebra With a View Toward Algebraic Geometry Exercise 2.19(b) | It means either of the following things, which are all canonically isomorphic (here, canonically = uniquely if you require that it respects the map coming from $M$):
$M[(f_if_j)^{-1}]$, $M[f_i^{-1}][f_j^{-1}]$, $M[\{f_i,f_j\}^{-1}]$
In particular, you have a morphism $M[f_i^{-1}]\to M[f_i^{-1}][f_j^{-1}]$, and in particular $m_i$ (and $m_j$) is sent to some element of the latter.
If you want to view it as $M[(f_if_j)^{-1}]$, then indeed, you will have that the image of $m_i = \frac{n_i}{f_i^{k_i}}$ is $\frac{n_if_j^{k_i}}{(f_if_j)^{k_i}}$
But you don't have to view it that way: you can also simply say it's $\frac{m_i}{1}$ for instance (that's the $M[f_i^{-1}][f_j^{-1}]$ point of view) or just $\frac{n_i}{f_i^{k_i}}$ (that's the $M[\{f_i,f_j\}^{-1}]$ point of view). |
Proof that a function constant on 2 disjoint closed subsets of $\Bbb R$ with standard topology can be extended to a continuous function on $\Bbb R$. | If $A$ is closed and $x$ a point, then the distance of $x$ from $A$ can be defined as $$d(x,A):=\inf\{\,|x-a|:a\in A\,\}$$
and is positive iff $x\notin A$. (It can be defined also if $A$ is not closed, but then it may be $0$ even for some $x\notin A$). Note that $d$ is continuous in $x$.
Now define
$$ f(x):=\frac{d(x,B)c_A+d(x,A)c_B}{d(x,A)+d(x,B)}.$$
Since $A,B$ are closed and disjoint, it is always guaranteed that the denominator is positive. |
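For concreteness, a small sketch with $A=[0,1]$, $B=[2,3]$ and illustrative constants $c_A=5$, $c_B=7$ (my own example values):

```python
def dist(x, interval):
    # distance from x to a closed interval [lo, hi]
    lo, hi = interval
    return max(lo - x, 0.0, x - hi)

A, B = (0.0, 1.0), (2.0, 3.0)
cA, cB = 5.0, 7.0

def f(x):
    # the interpolation formula from the answer
    return (dist(x, B) * cA + dist(x, A) * cB) / (dist(x, A) + dist(x, B))

assert all(f(x) == cA for x in (0.0, 0.5, 1.0))  # f = c_A on A
assert all(f(x) == cB for x in (2.0, 2.5, 3.0))  # f = c_B on B
assert cA < f(1.5) < cB                          # interpolates in between
```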
How to use Lagrange Multiplier for same degree? | Lagrange Multipliers are typically used in constrained optimization, so if you have one constraint, for example, you will enforce 3 constraints tied to the partial derivatives and the actual constraint, yielding you 4 equations for 4 unknowns... |
Eigenvalues of $A+v_1d^T$ where $Av_1 = \lambda_1 v_1$ (shift of first eigenvalue) | Sketch: Use Jordan decomposition. WLOG, assume
\begin{align}
A = UJU^{-1}
\end{align}
where
\begin{align}
J =
\begin{pmatrix}
\lambda& 1 & 0 & \dots & \dots & 0\\
0 & \lambda & 1 & 0 & \dots & \vdots\\
\vdots & 0 & \ddots & \ddots & \dots &\vdots\\
\vdots & \vdots & \ddots& \ddots & 1 & \vdots\\
\vdots & \dots& \dots & 0 & \lambda & 1\\
0 & \dots & \dots & \dots & 0 & \lambda
\end{pmatrix}
\end{align}
Since $Ue_1 = v_1$ then we see that
\begin{align}
A+v_1d^T =&\ U(J+U^{-1}v_1d^TU)U^{-1}\\
=&\ U(J+e_1(U^Td)^T)U^{-1}.
\end{align}
Note that
\begin{align}
e_1(U^Td)^T =
\begin{pmatrix}
1 \\
0 \\
\vdots\\
0
\end{pmatrix}
\begin{pmatrix}
v_1^Td & * &\dots & *
\end{pmatrix}
=
\begin{pmatrix}
v_1^Td & * &\dots & *\\
0 & 0 & \dots & 0\\
\vdots & \vdots & \dots & \vdots\\
0 & 0 & \dots & 0
\end{pmatrix}.
\end{align} |
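A numerical illustration of the conclusion for a diagonalizable $A$ (the matrices are my own example): the rank-one update $v_1d^T$ shifts $\lambda_1$ to $\lambda_1 + d^Tv_1$ and leaves the remaining eigenvalues unchanged.

```python
import numpy as np

V = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
A = V @ np.diag([1.0, 2.0, 3.0]) @ np.linalg.inv(V)  # eigenvalues 1, 2, 3
v1 = V[:, 0]                                         # eigenvector for lambda_1 = 1
d = np.array([0.5, -1.0, 2.0])

shifted = np.linalg.eigvals(A + np.outer(v1, d))
expected = np.array([1.0 + d @ v1, 2.0, 3.0])        # only lambda_1 moves
assert np.allclose(np.sort(shifted.real), np.sort(expected))
```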
Identifying left- and right-Riemann sums of $\int_9^{14}e^{-x^4}\ dx$ | No matter what $n$ and $m$ are, $R_n<L_m$ based on your knowledge that $R_n<A$ and $A<L_n$. So $R_{1200}$ should be the smallest of the three: $0.33575$
Now both $L_{20}$ and $L_{1200}$ overestimate the value of $A$. Informally, $L_{1200}$ is closer to $A$, because $A=\lim_{n\to\infty}L_{n}$. The function is not particularly weird enough for $L_{1200}$ to break the downward trend of $L_n$ towards $A$ as $n\to\infty$. So this much understanding suggests $L_{1200}$ is the smaller of the remaining two numbers.
A little more formally, $L_{20}$ is the total area of a certain $20$ rectangles, and $L_{1200}$ is the total area of a certain $1200$ rectangles. Since $20$ divides $1200$, we can in fact place sets of $60$ of the rectangles from $L_{1200}$ inside each rectangle from $L_{20}$. Since the function is decreasing, the $60$ rectangles will fit inside the one rectangle with room to spare. So again, $L_{1200}$ should be less than $L_{20}$. |
If a curve $c$ can be reparametrized by arc length, then is $c$ regular? | I think one of the issues is that there is no explicit definition of what it means for a curve to be "parametrized by arc length". The implicit definition used in the proof of 2.4 is that a curve is parametrized by arc length if it is given as $\gamma(s)=c(t(s))$ for some regular curve $c$, with $t(s)$ the inverse of the arc length function $s(t)$ of $c$. Since Prop 2.3 guarantees that $t(s)$ exists only when $c$ is regular, this definition a priori only makes sense under the assumption that $c$ is regular (and of course then $\gamma$ can be proved to be regular as in the second part of Prop 2.4; in fact $|\gamma'(s)|=1$, so $\gamma'(s)\neq 0$).
Now your question can be interpreted as "can $t(s)$ exist for non-regular $c$"?
The answer is that if you require $t$ to be differentiable (which is part of saying that $s$ is a diffeomorphism), then it follows that $c$ is regular: since $s(t(s))=s$, differentiating gives $s'(t)t'(s)=1$, and by the fundamental theorem of calculus $s'(t)=|c'(t)|$, so $|c'(t)||t'(s)|=1$ and hence $c'(t)\neq 0$. (On the other hand, if you only require $t$ to be a continuous inverse for $s$, then you can find plenty of examples where $c$ is non-regular, and they can then be "reparametrized by a continuous change of parameter"; however, the resulting $\gamma(s)$ has no reason to be differentiable, much less smooth, though this can be arranged in some cases.) In summary, if $c$ is regular, all the definitions make sense and the resulting $\gamma$ is smooth. If $c$ is not regular, $t(s)$ cannot be smooth, but it can sometimes still exist; however, other things will usually break. |
Find the area of the sector | What you need is $$A =\left(\dfrac \theta{2\pi}\right)\cdot (\pi r^2 ) = \dfrac{r^2 \theta}{2}$$ for $\theta$ measured in radians, so the formula is correct. You are correct that with $\theta = 0.5$, the resultant area is $25$. |
Entire function such that $|f(z)| = 1$ on the real line | You solved this earlier than you think.
Since $f$ is holomorphic on $U$, $1/\overline{f(\overline{z})} = f(z)$ for all $z \in U$. Thus $1/\overline{f(\overline{z})}$ extends to an entire function.
... a nonvanishing entire function, since the numerator is nonzero (the denominator is continuous, so it won't go to infinity anywhere). And this function is $f$, as you already saw. |
Understanding difference between reduction methods | You are right; these are two different methods.
Number 1 is a Turing reduction. In a Turing reduction from $A$ to $B$ you show an algorithm that can solve $A$ if it has access to an oracle for $B$ -- that is, at any time the algorithm for $A$ can decide to construct an instance of problem $B$ and instantaneously get a correct answer from the oracle. The algorithm is allowed to ask as many questions of the oracle as it wants to, and it can use the answers for anything, including to use them for deciding what its next question is going to be.
In your example the algorithm asks only one question. The example is not worded using the "oracle" terminology, but says "assume we have a machine that decides $B$" instead of "assume we have an oracle for $B$". For most purposes this makes no difference, but theoretical work often favors the "oracle" wording, such as to exclude arguments that go
Assume we have a Turing machine for $B$. But we know (by a different technique) that $B$ is undecidable, so this is a contradiction. Therefore the moon is made of green cheese, and thus ...
(These arguments are valid, if a bit pointless, as far as proving undecidability of $B$ goes, but they are not really Turing reductions, and in some theoretical contexts we're interested in whether two problems that are both already known to be undecidable are Turing reducible to each other).
Number 2 is a many-one reduction. In a many-one reduction from $A$ to $B$ you show an algorithm for converting any instance of $A$ to an instance of $B$ that has the same answer as the input $A$ instance.
Of course, whenever you have a many-one reduction you can easily get a Turing reduction -- just replace "return X" in the many-one-reduction with "ask the oracle about X and return its answer".
However, a Turing reduction can't necessarily be rewritten as a many-one reduction. For example let DIVERGE${}_{\mathit{TM}}$ be the set of all Turing machines that don't halt. There are easy Turing reductions from HALT to DIVERGE and vice versa (just ask the oracle about your input and invert the answer), but there cannot be a many-one reduction from HALT to DIVERGE (or vice versa) because that would make HALT both r.e. and co-r.e., which would make it decidable.
Many-one reductions are thus a weaker technique than Turing reductions for showing undecidability. But this also means that the existence of a many-one reduction gives more fine-grained information about the relation between problems. You can use many-one reductions to show that a problem is or isn't recursively enumerable (which Turing reductions show nothing about). And in complexity theory, NP-completeness and several other complexity classes are explicitly defined via many-one reductions. |
Fourier Transform Inverse of 1 / (jw - a) | In general, we have ${\cal F}(t \mapsto f(-t))(\omega) = ({\cal F}f) (-\omega)$.
If $f(t) = e^{-at} u(t)$, with $a>0$, we have $({\cal F}f)(\omega) = { 1 \over i \omega +a}$.
If we let $g(t) = -f(-t)$, we have $({\cal F}g)(\omega) = -({\cal F}f)(-\omega) = {1 \over i \omega -a}$.
Hence the inverse is $t \mapsto - e^{at} u(-t)$. |
About linear combinations of primes | This is known as the Frobenius coin problem. As you state, the solution for one and two numbers is known. For three (or more) only partial results are available. |
Cardinality of k-bijections | Choice lets us well-order $k$ and choose its elements' images under $f$ in ascending order, with $k$ unused values available at each point. So there are $k^k$ permutations. But $2^k\le k^k\le (2^k)^k=2^{k^2}=2^k$, completing the proof (note $k^2=k$ follows from choice too). |
Evaluate $\int_\gamma \frac{dz}{z-1}$, where $\gamma$ is the unit circle. | $$
\mbox{Note that}\quad
\overbrace{\oint_{\left\vert{z}\right\vert\ =\ \color{red}{\large 1^{+}}}
{\mathrm{d}z \over z - 1}}^{\displaystyle =\ 2\pi\mathrm{i}}\ -\
\overbrace{\oint_{\left\vert{z}\right\vert\ =\ \color{red}{\large 1^{-}}}
{\mathrm{d}z \over z - 1}}^{\displaystyle =\ 0}\ =\
\bbox[10px,border:1px groove navy]{2\pi\mathrm{i}}
$$ |
CDF and PDF of absolute difference of two exponential random variables. | Sadly, your answer cannot be correct because the random variable $Y$ remains in your CDF and PDF. The CDF of $Z$ should be only a function of $z$ and the parameters.
Your initial approach is sound, however. Let's see how to complete it.
$$\Pr[Z \le z] = \Pr[X \le Y + z] - \Pr[X \le Y - z]$$ is correct. Now, let's see how to attack each term on the right. We have
$$\Pr[X \le Y + z] = \int_{y=0}^\infty \Pr[X \le y + z] f_Y(y) \, dy,$$ by the law of total probability. This in turn yields $$\Pr[X \le Y + z] = \int_{y=0}^\infty (1 - e^{-(y+z)}) e^{-y} \, dy = 1 - e^{-z}/2.$$ Similarly, $$\Pr[X \le Y - z] = \int_{y=0}^\infty \Pr[X \le y - z]f_Y(y) \, dy.$$ However, you must be careful here. Whereas in the first case we had no issues with $y + z < 0$, since $y$ and $z$ must both be nonnegative, in this situation we could have $0 \le y < z$, in which case $\Pr[X \le y-z] = 0$, because $X$ cannot be negative. Consequently, we can write this as $$\Pr[X \le Y - z] = \int_{y=z}^\infty \Pr[X \le y-z] f_Y(y) \, dy,$$ noting that the lower limit of integration can begin at $y = z$, since on the interval $y \in [0,z)$, the integrand is zero.
I have left the remaining calculations as an exercise. What is your result? Is it surprising? If so, how might you go about verifying it? |
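One way to verify intermediate results such as $\Pr[X \le Y + z] = 1 - e^{-z}/2$ is a quick Monte Carlo simulation (unit-rate exponentials, as assumed in the derivation above):

```python
import math
import random

random.seed(1)
N, z = 200_000, 0.7
hits = sum(random.expovariate(1.0) <= random.expovariate(1.0) + z
           for _ in range(N))
estimate = hits / N
exact = 1 - math.exp(-z) / 2          # the closed form derived above
assert abs(estimate - exact) < 0.01   # agrees to within Monte Carlo noise
```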
Taking a const out of a series | Yes, this is true, since a series $\displaystyle \sum_n a_n$ is nothing but the sequence of partial sums $\displaystyle\left(\sum_{k=0}^n a_k\right)_n$, and we know (and can prove from the definition) that for a sequence $(u_n)_n$ we have
$$\lim_{n\to\infty}\lambda u_n=\lambda\lim_{n\to\infty} u_n$$ |
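A minimal numerical illustration with a geometric series (my own example):

```python
lam = 3.0
# partial sums of a geometric series and of its scaled version
partial = [sum(1 / 2 ** k for k in range(n + 1)) for n in range(60)]
scaled = [sum(lam / 2 ** k for k in range(n + 1)) for n in range(60)]

# term by term, each scaled partial sum is lambda times the original one,
# so the limits agree in the same way: lim = lam * 2 = 6
assert all(abs(s - lam * p) < 1e-12 for s, p in zip(scaled, partial))
assert abs(scaled[-1] - lam * 2.0) < 1e-6
```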
can a real number be added to a complex number | The real numbers inherit complex addition. Indeed, the inclusion map $i : \mathbb{R} \to \mathbb{C}$ by $x \mapsto x + 0i$ allows you to view any real number as a complex number in a canonical way. That is, when you add a real to a complex, you are effectively using complex addition.
So, to answer the question directly: if you are being really mathematically precise (or pedantic), it doesn't make sense to add a real to a complex, any more than it makes sense to add an apple to a banana. However, in any context where you don't have to be totally precise about these things, it does make sense, because we implicitly identify each real number with its associated complex number.
For example: $1_{\mathbb{R}} + (1+i)_{\mathbb{C}}$ very strictly doesn't make sense. However, we identify $1_{\mathbb{R}}$ with $(1+0i)_{\mathbb{C}}$, and so we may identify the original sum with $(1+0i)_{\mathbb{C}} +_{\mathbb{C}} (1+i)_{\mathbb{C}} = (2+i)_{\mathbb{C}}$. There are very few instances where a mathematician would not do this "view a real as a complex" automatically and without thinking about what they were doing.
All this is to say that the reals are isomorphic to a subfield of the complexes, and we freely identify elements of the field $\mathbb{R}$ with their corresponding elements of the field $\mathbb{C}$ (where we pick the correspondence in the obvious way).
The exponential function comes up in lots and lots of different contexts, and is very much worth asking about in a different question (if the answer isn't already on Math.SE).
Vector valued function in $\mathbb{R}^2$ with non-finite arc length | I'm convinced the examples using a sine curve with decreasing amplitude work out fine, but I think calculating or bounding the arc length might be tedious. Let me suggest something more elementary.
Consider the curve $\gamma:[0,1]\to\mathbb R^2$ given by
$$
t\mapsto \left(t, \frac{1}{\lfloor 1/t\rfloor} - \left| 2t \lceil 1/t\rceil-2-\frac{1}{\lfloor 1/t\rfloor}\right|\right). \tag{1}\label{gamma}
$$
Figure: Plot of the curve $\gamma:[0,1]\to\mathbb R^2$ given by \eqref{gamma}.
All this complicated expression does is put a tent of height $\frac 1 k$ above the interval $\left[\frac 1{k+1}, \frac 1 k\right]$. It's obvious that the arc length along one tent is at least $\frac 2 k$. Thus, the length of the overall curve is at least $\sum_{k=1}^\infty \frac 2 k=\infty$. |
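A quick numerical sanity check of this lower bound (the helper name `tent_length` and the cutoff `K` are mine): the exact polyline length of the $k$-th tent is $2\sqrt{(w/2)^2+h^2}$ with width $w=\frac1k-\frac1{k+1}$ and height $h=\frac1k$, which dominates $2h=\frac2k$.

```python
import math

def tent_length(k):
    # the k-th tent sits over [1/(k+1), 1/k] and has height 1/k;
    # it consists of two straight segments of equal length
    w = 1 / k - 1 / (k + 1)
    h = 1 / k
    return 2 * math.hypot(w / 2, h)

K = 10_000
total = sum(tent_length(k) for k in range(1, K + 1))
harmonic_bound = sum(2 / k for k in range(1, K + 1))
assert total >= harmonic_bound > 19   # the bound diverges like 2 ln K
```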
Is there an example where for a sequence $\{z_n\}_{n \geq 0}$ which converge to $z \not= 0$, but $\{arg (z_n)\}_{n \geq 0}$ diverge? | I assume you define your $\arg$ with value in $]-\pi,\pi].$
Then take a discrete convergent spiral with center on the line $]-\infty,0]$.
Edit: An easier idea. Consider the sequence $z_n = -1 + (-1)^n \frac{i}{n}.$ This sequence converges to $-1$, but $\arg(z_n)$ does not converge.
Convolution of measures and stochastic dominance | Let $f$ be a non-decreasing measurable function. Then
\begin{align}
&\int f(z)\,(\mu\ast\mu)(\mathrm{d}z)=\mathsf{E}[f(X+Y)]\quad \text{where $X,Y$ are i.i.d. and $X\overset{d}{=}\mu$}\\
&\quad=\int \Biggl[\int f(x+y)\,\mu(\mathrm{d}x)\Biggr]\mu(\mathrm{d}y)
\qquad \text{For fixed $ y $, $ f(x+y) $ is non decreasing in $ x $ }\\
&\quad\ge \int \Biggl[\int f(x+y)\,\nu(\mathrm{d}x)\Biggr]\mu(\mathrm{d}y)
\qquad\text{ $ g(y)=\int f(x+y)\,\nu(\mathrm{d}x) $ is non decreasing in $ y $ }\\
&\quad\ge \int \Biggl[\int f(x+y)\,\nu(\mathrm{d}x)\Biggr]\nu(\mathrm{d}y)\\
&\quad=\int f(z)\,(\nu\ast\nu)(\mathrm{d}z)
\end{align} |
Domain of function when open circles are included in graph | To determine the domain, you need to figure out which values of $x$ have corresponding values of $y$ (that is, for which $x$ is $f(x)$ defined. In this case, clearly $[-4,-2]$ is in the domain, and also $-1$ is not. The point you are probably asking about is $x=1$. At $x=1$, even though there is an open circle at the point $(1,2)$, the function is defined at $x=1$, with $f(1) = 1$. So the interval $(-1,4]$ is in the domain of $f$. |
Function with divergence, curl and normal trace on boundary equals zero is zero | For $u\in H^1(\Omega)$, you can write:
\begin{equation}
||u||_{H^1(\Omega)}\leq C \{ ||u||_{L^2(\Omega)}+ ||div\ u||_{L^2(\Omega)} + ||curl\ u||_{L^2(\Omega)} + ||u\cdot n||_{H^{\frac{1}{2}}(\partial\Omega)}\}
\end{equation}
or,
\begin{equation}
||u||_{H^1(\Omega)}\leq C \{ ||u||_{L^2(\Omega)}+ ||div\ u||_{L^2(\Omega)} + ||curl\ u||_{L^2(\Omega)} + ||u\times n||_{H^{\frac{1}{2}}(\partial\Omega)}\}.
\end{equation}
Now if $u\cdot n =0$ on $\partial\Omega$, then you have stronger estimate to write
\begin{equation}
||u||_{H^1(\Omega)}\leq C \{ ||div\ u||_{L^2(\Omega)} + ||curl\ u||_{L^2(\Omega)}\}
\end{equation}
or, if $u\times n=0$ on $\partial\Omega$ then also
\begin{equation}
||u||_{H^1(\Omega)}\leq C \{ ||div\ u||_{L^2(\Omega)} + ||curl\ u||_{L^2(\Omega)}\}.
\end{equation}
Now, if $div\ u=curl\ u =0$ in $\Omega$ then it is straightforward to conclude $u = 0$ in $\Omega$.
Remark: The above statement holds for $u\in W^{1,p}(\Omega)$ but not necessarily for $u\in L^p(\Omega)$.
Source: (1) "On the Stokes equations with the Navier type boundary conditions": Cherif Amrouche and Nour Seloula. |
If $(x+\sqrt{x^2 + 1})(y+\sqrt{y^2 + 1})=p$, find $x+y$ | This is a bad question, since:
The identity $(x+\sqrt{x^2 + 1})(y+\sqrt{y^2 + 1})=p$ does not determine $x+y$.
Example: Let $p=4$; then $(x,y)=\left(0,\frac{15}8\right)$ and $(x,y)=\left(\frac34,\frac34\right)$ are both solutions, giving $x+y=\frac{15}8$ and $x+y=\frac32$ respectively.
If one wishes to add the constraint that $$x=y$$ as the OP seems to be willing to do, then one should ask at the onset to solve $$x+\sqrt{x^2 + 1}=\sqrt{p}$$ Then $$(x+\sqrt{x^2+1})\cdot(x-\sqrt{x^2+1})=-1$$ hence every solution $x$ is such that $$2x=(x+\sqrt{x^2+1})+(x-\sqrt{x^2+1})=\sqrt{p}-\frac1{\sqrt{p}}$$ that is, since once again one assumed $x=y$, $$x+y=2x=\frac{p-1}{\sqrt{p}}$$ |
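For the skeptical reader, here is a short numerical check of both claims: the two sample pairs above for $p=4$, and the closed form $x+y=(p-1)/\sqrt p$ in the symmetric case.

```python
import math

def lhs(x, y):
    # (x + sqrt(x^2+1)) * (y + sqrt(y^2+1))
    return (x + math.hypot(x, 1)) * (y + math.hypot(y, 1))

# both pairs satisfy the identity with p = 4, yet x + y differs
assert abs(lhs(0, 15/8) - 4) < 1e-12 and 0 + 15/8 == 1.875
assert abs(lhs(3/4, 3/4) - 4) < 1e-12 and 3/4 + 3/4 == 1.5

# the symmetric solution matches (p - 1) / sqrt(p)
p = 4
assert abs((p - 1) / math.sqrt(p) - 1.5) < 1e-12
```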
What is the difference between the line integrals $\oint_b\,ds$, $\oint_b\,dx$, and $\oint_b\,x\,ds$? | The first integral is the arclength of the curve. (Since your curve is a circle of radius $b$, this is just $2\pi b$.)
The second integral is the line integral of the vector field $\langle x,y\rangle=\langle 1,0\rangle$ along the curve. By symmetry, this is zero.
The third integral is the "mass" of the curve if the "mass" density per unit length is $x$. (Of course, $x$ won't really be a mass density, because it takes negative values. The point is that, in general, we interpret the integral $\int_c f\,ds$ as the integration of the density of some quantity per unit length.) Again by symmetry, the integral is zero. |
Confusion about the functoriality of $H_A:\mathscr A^{op}\to \mathbf{Set}$ | The confusion is that you defined $H_A$ on $\mathscr A$, whereas it is defined on the opposite category.
So if $g: B\to B'$ is an arrow in $\mathscr A^{op}$ (it corresponds to $g:B'\to B$ in $\mathscr A$), you get a map $H_A(B)\to H_A(B')$
Similarly for composition, if $g:B\to B'$, $f:B'\to B''$ in $\mathscr A^{op}$, (they correspond to your $g:B'\to B, f: B''\to B'$ in $\mathscr A$), everything makes sense
It could help to look up the definition of contravariant functor (which isn't necessary as it is subsumed by the definition of functor, by just taking opposite categories, but if you're not at ease with them yet, it can be interesting) |
Can the decimal point values of fractions be predicted? | Consider $p/q$ with $q > 1$ and $0 < p < q$.
If the denominator $q$ is relatively prime to both $2$ and $5$, then $mq = 10^n-1$ for some natural numbers $n$ and $m$. The decimal form of the fraction will then be $mp$ repeated every $n$ digits.
If the denominator $q$ has only factors of $2$ and $5$, then the decimal terminates.
Otherwise, the fraction can be split into two fractions ($p/q = a/b + c/d$), where $b$ is relatively prime to both $2$ and $5$, and $d$ has only factors of $2$ and $5$.
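A sketch of the first case in code (the helper name is mine): find the least $n$ with $10^n\equiv 1 \pmod q$, set $m=(10^n-1)/q$, and the repeating block of $p/q$ is $mp$ padded to $n$ digits.

```python
def repeating_block(p, q):
    # assumes gcd(q, 10) == 1 and 0 < p < q
    n, r = 1, 10 % q
    while r != 1:                  # least n with 10^n ≡ 1 (mod q)
        n, r = n + 1, (10 * r) % q
    m = (10**n - 1) // q
    return str(m * p).zfill(n)     # the block of n digits that repeats

assert repeating_block(1, 7) == "142857"   # 1/7 = 0.142857142857...
assert repeating_block(2, 7) == "285714"   # 2/7 = 0.285714285714...
assert repeating_block(1, 3) == "3"        # 1/3 = 0.333...
```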
Cycle permutation problem, no women sit next to each other | 1st put the men around the chair in $5!$ ways now there are $6$ places in between and so we've to put the women between this positions which we've to choose $5$ places out of this $6$ places which can be done in $6$ ways and $5$ women can be seated in 5! ways hence total number is $6.5!.5!$ |
Finding paths in a graph with n vertices | The graph you are describing is a Wheel graph: http://en.wikipedia.org/wiki/Wheel_graph
To get the number of $P_{3}$ (a path of length $2$, which has $3$ vertices) in the graph, you consider the paths along the exterior of the graph. There are $n$ such paths, where $n = |V|$. Then you look at the interior paths (only interior edges are used) through the center vertex, which forms an arithmetic progression $\sum_{i=1}^{n-1} i$. Finally, you count paths using an exterior and an interior edge. You again get $n$ such paths.
To count cycles, you have $C_{i}$, for $i \in \{3, ..., n\}$, for $n \geq 3$. |
Convergence or divergence of $\sum \frac{3^n + n^2}{2^n + n^3}$ | $$3^n+n^2\sim_\infty 3^n,\quad2^n+n^3\sim_\infty2^n,\enspace\text{hence}\quad \frac{3^n+n^2}{2^n+n^3}\sim_\infty\Bigl(\frac32\Bigr)^n,$$
which doesn't even tend to $0$. |
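Numerically the terms visibly blow up like $(3/2)^n$, so the series fails the term test:

```python
def term(n):
    return (3**n + n**2) / (2**n + n**3)

# the terms grow without bound instead of tending to 0
assert term(10) > 1
assert term(30) > term(20) > term(10)
assert term(50) > 1e8                  # roughly (3/2)^50 ≈ 6.4e8
```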
Finding the distance from a point to a curve. | Hints: Consider these two functions of the three coordinates.
$$f=\frac{x^2}{4}+y^2+\frac{z^2}{4}-1$$
$$g=x+y+z-1$$
Then the tangent $\hat{t}$ to the curve $C$ must be perpendicular to both $\nabla f$ and $\nabla g$ (why?). In other words
$$\hat{t} \propto \nabla f \times \nabla g$$
At the point $p$ on the curve C that is closest to the origin, you get that this tangent vector must be perpendicular to the vector $(x,y,z)$ (why?)
Solve this simultaneously with $f=g=0$ and you get two points on the ellipsoid where these are all met. One is the nearest to the origin and one is the farthest. Pick the smaller distance of these (smallest norm). [Edited, thanks to comments below] |
Maximum triangle area | If the lengths of two medians of a triangle are $m_1$ and $m_2$ and the angle formed by these two medians is $\theta$, then the area of the triangle is $$K_\triangle=\frac{2}{3}m_1m_2\sin\theta.$$ Since the maximum value of $\sin\theta$ is $1$, the maximum area of your triangle is $\frac{2}{3}\cdot3\cdot8\cdot1=16$.
edit The formula above is probably not obvious. Suppose we have $\triangle ADE$ with $B$ and $C$ being the midpoints of $\overline{AE}$ and $\overline{AD}$, respectively (more because that's what I happened to draw than anything else).
The area of any quadrilateral with diagonals $d_1$ and $d_2$ and angle between then $\theta$ is $\frac{1}{2}d_1d_2\sin\theta$ (to derive this, the diagonals split the quadrilateral into 4 triangles, each with sides that are parts of the diagonals and included angles $\theta$ or $\pi-\theta$, the area of a triangle with sides $x$ and $y$ and included angle $\phi$ is $\frac{1}{2}xy\sin\phi$, and do some algebra). This gives the area of quadrialteral (trapezoid) $BCDE$ as $\frac{1}{2}m_1m_2\sin\theta$.
Now, $\triangle ABC$ is a dilation image of $\triangle AED$ by a factor of $\frac{1}{2}$ centered at $A$ (because of the midpoints, etc.), so it has $\frac{1}{4}$ of the area of the larger triangle. That is, $K_{\triangle ABC}=\frac{1}{4}K_{\triangle ADE}$ and $$K_{\text{quad }BCDE}=\frac{3}{4}K_{\triangle ADE},$$ so $$K_{\triangle ADE}=\frac{4}{3}\frac{1}{2}m_1m_2\sin\theta=\frac{2}{3}m_1m_2\sin\theta.$$
edit 2 Here is a picture of a triangle with medians with lengths in the ratio $8:3$ that are perpendicular: |
Does a sequence unbounded above diverge? | What about $a_{1}=1$, $a_{2}=2$, $a_{3}=1$, $a_{4}=4$, $a_{5}=1$, $a_{6}=6$, ...?
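To spell out why this counterexample works: the sequence is unbounded above, yet the constant subsequence $a_1, a_3, a_5, \dots$ keeps it from diverging to $\infty$. A tiny sketch:

```python
def a(n):
    # 1, 2, 1, 4, 1, 6, ...  (1-indexed)
    return n if n % 2 == 0 else 1

terms = [a(n) for n in range(1, 201)]
assert max(terms) == 200                          # unbounded above as n grows
assert all(a(n) == 1 for n in range(1, 201, 2))   # a constant subsequence
```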
Is there an action of $S_2$ on $\{1,2,3,4,5\}$ that has exactly two orbits? | Am I thinking about this incorrectly?
You are on the right track, I think. Let's clarify things.
If $G$ is a group acting on a set $X$ and $x\in X$ then there is a bijection (even $G$-isomorphism) between the orbit $Gx$ and the collection of all cosets $G/G_x$ where $G_x$ is the stabilizer subgroup for $x$. This is also known as the orbit-stabilizer theorem. In particular this shows that in the finite case $|Gx|$ divides $|G|$, as a consequence of Lagrange's theorem.
And so in our case $G=S_2$ (which has exactly $2$ elements) it means that any orbit (of any action on any set) has either $1$ or $2$ elements. The case when an orbit has one element is trivial: $S_2$ acting on any set $Y$ via $(g,y)\mapsto y$. The case when an orbit has two elements is for example when $S_2$ acts on itself (or a disjoint union of copies of itself) via $(g,h)\mapsto gh$.
And since the set $X=\{1,2,3,4,5\}$ has $5$ elements then it has to have at least $3$ orbits regardless of how $S_2$ acts on it. That's because orbits form a partition of any set. |
Why mathematicians use natural language? | Bill Thurston's "On Proof and Progress" in Mathematics really helped me clarify my thinking about what it is we are doing in math. Thurston makes the quite strong, but I think defensible claim that the point of doing mathematics is not "proving theorems," but rather "furthering human understanding of mathematics". One powerful reason for using language in written mathematics is that the point is not to lay out a formal deduction to put a theorem in the bag, but rather to communicate what it is we have understood in the hopes that others will learn from it. |
Fractional part of integer multiples | If $x=\frac{p}{q}$, then, for $k,n\in\mathbb Z$:
$$(k+nq)x-\lfloor (k+nq)x\rfloor=kx+np-\lfloor kx+np\rfloor=kx+np-\lfloor kx\rfloor - np = kx-\lfloor kx\rfloor$$
thus the values $kx-\lfloor kx\rfloor$ are never all distinct if $x$ is rational, and they repeat with the period at most $q$. |
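A check with exact rational arithmetic (choosing $x=3/7$ as an illustration, so the period is at most $q=7$):

```python
from fractions import Fraction

x = Fraction(3, 7)

def frac_part(t):
    # t - floor(t), valid for t >= 0
    return t - int(t)

vals = [frac_part(k * x) for k in range(70)]
# the values repeat with period (at most) q = 7
assert all(vals[k] == vals[k + 7] for k in range(63))
# and at most q distinct values ever occur
assert len(set(vals)) <= 7
```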
Existence of infinitely many iid RVs given sets of iid RVs of every finite size | The answer to your question is yes, and it follows from this technical lemma:
Lemma. Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space such that there exists a sequence of partitions $\{\mathcal P_n\}_{n\in\mathbb N}$ where each $\mathcal P_n$ consists of countably many disjoint measurable sets whose union is $\Omega$. Suppose that $$ \lim_{n\to\infty}\sup \bigl\{\mathbb P(A)\colon A\in \mathcal P_n\bigr\}=0.\qquad (\star) $$ Then there exists a random variable $Y\colon \Omega\to [0,1]$ whose distribution is uniform on $[0,1]$.
(Terminology note: A set is countable if it injects into $\mathbb N$ - in particular finite sets are countable.)
Before proving the lemma, I'll explain why it answers your question. First, let's
exclude the case of deterministic $X$ (i.e. $\mathbb P(X=x)=1$ for some $x\in\mathbb R$), when the answer is apparent. Next, note that even if we start with an arbitrary random variable, we can always reduce to the case of countable range by replacing $X$ with $X'=\lfloor kX\rfloor$ for some deterministic $k$ sufficiently large such that $X'$ is not deterministic. Then, by letting $v\in \mathbb R^n$ range over the countable set of values attained by the random vector $(X_1^n,\ldots,X_n^n)$, we obtain a countable measurable partition of $\Omega$ given by
$$
\mathcal P_n=\Bigl(\bigl\{\omega\in\Omega\colon (X_1^n,\ldots,X_n^n)=v\bigr\}\Bigr)_v.
$$
It satisfies the condition $(\star)$, since $\alpha:=\sup_{x\in\mathbb R}\mathbb P(X=x)<1$ by assumption and we have the bound $\mathbb P(A)\leq \alpha^n$ for all $A\in\mathcal P_n$ (by independence). Thus the lemma applies, and we obtain a random variable defined on $\Omega$ which is uniformly distributed on $[0,1]$. Finally, since any iid sequence of random variables can be constructed on the probability space $[0,1]$ (a fundamental and well-known fact - e.g. Theorem 2.19 in Kallenberg's textbook, or prove it yourself using binary expansion) we can compose this standard construction with the measurable function $Y\colon \Omega\to [0,1]$ to obtain the desired iid sequence on $\Omega$.
Proof of the lemma. Let $\mathcal Q_n$ be the partition whose elements are all intersections of elements of $\mathcal P_1,\ldots,\mathcal P_n$. Then $\mathcal Q_n$ is also a countable measurable partition of $\Omega$ and $(\star)$ continues to hold after replacing $\mathcal P_n$ with $\mathcal Q_n$. Moreover, the partitions $\mathcal Q_n$ are nested: for all $m\leq n$, each set in $\mathcal Q_n$ belongs to a unique set in $\mathcal Q_m$. Due to this nesting property, the sets in each partition can be ordered as $\mathcal Q_n=(A_i^n)_{i\in\mathcal I}$ for all $n\in \mathbb N$, such that for all integers $m\leq n$ and all pairs of indices $i\leq j$, if $A_{i'}^m$ contains $A_i^n$ and $A_{j'}^m$ contains $A_j^n$ then $i'\leq j'$.
Using this ordering we construct a sequence of random variables $\{Y_n\}_{n\in\mathbb N}$ as follows. For all $\omega\in\Omega,$ let $i_n(\omega)$ be the index such that $\omega\in A_{i_n(\omega)}^n$, and let
$$
Y_n(\omega)=\sum_{j\leq i_n(\omega)}\mathbb P(A_j^n).
$$
Observe that if $0\leq a\leq b\leq 1$ and $a,b$ can be written as partial sums of $\mathbb P(A_j^n)$, then
$$
\mathbb P(a<Y_n\leq b)=b-a.\qquad (\star\star)
$$
Now let $Y(\omega)=\inf_{n\in\mathbb N}Y_n(\omega)$ and observe (by the nesting property of the partitions) that $(\star\star)$ holds with $Y$ in place of $Y_n$ as well. Sending $n\to\infty$ and using $(\star)$ implies that $a,b$ can be chosen to lie in any non-empty open subinterval of $[0,1]$, and thus $Y$ is uniformly distributed on $[0,1]$. $\square$
Find $\lim_{n\to \infty} \frac {(\log(n))^x}{n^y}$ with $x,y \in \Bbb{R}+$ | One can use derivative of the function $\varphi(x)=(\log x)/x^{\alpha}$ to show that for large $x$, one has $\log x\leq C_{\alpha}x^{\alpha}$, here $\alpha>0$. Return to your question, if we put $\alpha=y/(2x)$, then $(\log n)^{x}\leq C_{\alpha}n^{y/2}$ for large $n$, then $(\log n)^{x}/n^{y}\leq\dfrac{C_{\alpha}}{n^{y/2}}$, now taking $n\rightarrow\infty$ and use Squeeze Theorem to show the limit is zero. |
Poincare dual of a point | Let $M$ be an oriented manifold. Recall that the closed Poincare dual of a $k$-dimensional submanifold of $M$ is an element in $H^{n-k}(M)$. In our case, as a point is $0$-dimensional and $H^n(\mathbb{R}^n)=0$, there is nothing to compute.
The compact dual should be some $\omega\in H_c^n(\mathbb{R}^n)$, such that for every closed $0$-form $\tau$ one has $$\int_{\mathbb{R}^n}\omega\wedge\tau=\int_p\tau,$$where $p$ denotes our point. Since a closed $0$-form is in fact a constant function, this equality holds for $$\omega=fdx_1\wedge\ldots\wedge dx_n,$$where $f$ is a function with compact support whose integral is equal to $1$. |
Motivation behind formula for solution in Chinese Remainder Theorem | For $u$, it's set to $n$ times the value of $n^{-1} \pmod m$ (here $n^{-1}$ means the multiplicative inverse of $n$ modulo $m$, i.e., the modulus value such that $n \times n^{-1} \equiv 1 \pmod m$, and since $\gcd(m,n) = 1$, this value always exists). Thus, you have $u \equiv 0 \pmod n$ as it's a multiple of $n$, while $u \equiv 1 \pmod m$ since $n(n^{-1}) \equiv 1 \pmod m$. It similarly sets up $v$ so $v \equiv 1 \pmod n$ and $v \equiv 0 \pmod m$. They did this so $u$ and $v$ would have the appropriate properties to give a solution, in particular, with $u$ having $(0,1)$, and $v$ having $(1,0)$, as the values modulo $n$ and $m$ is roughly analogous to the $\vec i$ and $\vec j$ basis vectors in $2$-dimensional Cartesian coordinate systems. Then linearity is used, as described in detail in Brian Moehring's and Bill Dubuque's answers.
As for how to know that $x = au + bv$ solves $x \equiv a \pmod m$ and $x \equiv b \pmod n$, as they show, when you check modulo $m$ and $n$, you get the required results of $a$ and $b$, respectively. For example, with mod $m$, since $v \equiv 0 \pmod m$, then $bv \equiv 0 \pmod m$. Also, as $u \equiv 1 \pmod m$, then $au \equiv a \pmod m$. Summing the $2$, you get that $x = au + bv \equiv 0 + a \equiv a \pmod m$. It then likewise shows checking mod $n$ to confirm $x \equiv b \pmod n$. |
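A minimal sketch of the construction in Python, using the built-in `pow(n, -1, m)` (available since Python 3.8) for the modular inverse:

```python
def crt(a, m, b, n):
    # assumes gcd(m, n) == 1
    u = n * pow(n, -1, m)   # u ≡ 1 (mod m), u ≡ 0 (mod n)
    v = m * pow(m, -1, n)   # v ≡ 0 (mod m), v ≡ 1 (mod n)
    return (a * u + b * v) % (m * n)

x = crt(2, 3, 3, 5)         # want x ≡ 2 (mod 3) and x ≡ 3 (mod 5)
assert x == 8 and x % 3 == 2 and x % 5 == 3
```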
Uniform convergence of $f(x) = \sum_{n=0}^\infty x^n(1-x)^n$ in $(0,1)$ | Hint: Note that
$$
0\leq\sum_{n=0}^m x^n(1-x)^n\le \sum_{n=0}^m \frac{1}{4^n}
$$
and use the fact that $\sum_0^\infty 4^{-n}$ converges. Now let $m\to \infty$. |
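The hint rests on $x(1-x)\le\frac14$ on $[0,1]$, so the partial sums are dominated by the geometric series $\sum_{n\ge0} 4^{-n}=\frac43$; a quick numerical check:

```python
# x(1-x) peaks at x = 1/2 with value 1/4
assert all(x * (1 - x) <= 0.25 for x in (i / 1000 for i in range(1001)))

m = 40
for x in (0.1, 0.3, 0.5, 0.9):
    partial = sum((x * (1 - x))**n for n in range(m + 1))
    geo = sum(0.25**n for n in range(m + 1))
    assert partial <= geo <= 4 / 3   # dominated by the geometric series
```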
Translation of logic terms into set unions and intersections (e.g. uniform and pointwise convergence) | You're rendering these incorrectly. $\forall, \exists$ correspond to $\bigcap, \bigcup$ respectively. This should be clear from thinking about what the symbols mean.
For pointwise convergence, the set of $x$ such that $f_n(x)\to f(x)$ pointwise is
$$
\bigcap_{m\in \Bbb N} \bigcup_{n_0\in \Bbb N}\bigcap_{n>n_0} E_{n,m}
$$
For uniform convergence, there really isn't a corresponding set:
it doesn't make sense to speak of "the $x$ at which $f_n(x)\to f(x)$ uniformly", because uniform convergence is a property of sets of $x$s and not of individual points $x$.
How, in words, would you describe the set you're trying to define for uniformly convergent $f_n\to f$? |
A sequence converges: is my answer correct? | Your basic approach is right, but you made a small error when you added up the series. It telescopes to $$
(\ln(n^5+n^4+1)-\ln(n^5+1))n=n\ln\left(1+\frac{n^4}{n^5+1}\right)$$
When you made your error you got $$n\ln\left(1+\frac1n\right)$$ instead, so you were able to use the fact that $$\left(1+\frac1n\right)^n\rightarrow e\tag1$$
That was a good idea, but you have to do some more work in reality. The easiest way I've found is to write it as $$\lim_{n\to\infty}\frac{\ln\left(1+\frac1{n+n^{-4}}\right)}{1/n}$$ and use L'Hôpital's rule. One application of the rule gives the limit $1$, so the sequence converges to $e$.
We can do this without L'Hôpital's rule. When $x>0$ we have $\log(1+x)<x.$ (To see this, let $f(x)=\log(1+x)-x$ and note that $f(0)=0,\ f'(x)<0$ for $x>0$.) Therefore,$$
\lim_{n\to\infty}n\ln\left(1+\frac{n^4}{n^5+1}\right)\leq
\lim_{n\to\infty}\frac{n^5}{n^5+1}=1$$
In a similar way, one can see that $\log(1+x)>x-\frac{x^2}2$ for $x>0$ so that $$
\lim_{n\to\infty}n\ln\left(1+\frac{n^4}{n^5+1}\right)\geq
\lim_{n\to\infty}\frac{n^5}{n^5+1}-\frac n2\left(\frac{n^4}{n^5+1}\right)^2=1$$ |
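Assuming the original sequence was $a_n=\left(\frac{n^5+n^4+1}{n^5+1}\right)^n$ (the question itself isn't shown here; its logarithm is the expression analyzed above), a numerical check of the limit $e$:

```python
import math

def a(n):
    return ((n**5 + n**4 + 1) / (n**5 + 1)) ** n

assert abs(a(10**6) - math.e) < 1e-3
# the approximation improves as n grows
assert abs(a(10**3) - math.e) > abs(a(10**6) - math.e)
```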
Where is inverse tangent positive? | Recall that the principal value for the argument $\operatorname{Arg}(z)$ is defined in the interval $(-\pi,\pi]$, therefore since $0.4-0.2j$ lies in the fourth quadrant, its polar form should be $re^{i\theta}$ with $\theta \in (-\pi/2,0)$. |
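Python's `cmath.phase` returns exactly this principal value, so the quadrant reasoning can be confirmed directly:

```python
import cmath
import math

z = 0.4 - 0.2j
theta = cmath.phase(z)               # principal argument, in (-pi, pi]
assert -math.pi / 2 < theta < 0      # fourth quadrant: negative angle
assert math.isclose(theta, math.atan2(-0.2, 0.4))
```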
Boolean Algebra fundamentals | It's really all the same thing.
$$\begin{array}{c|c|cc}
A & B & A\vee B \\ \hline
1 & 1 & 1 \\
1 & 0 & 1 \\
0 & 1 & 1 \\
0 & 0 & 0 & \star\end{array}
\qquad
\begin{array}{c|c|c|cc}
A & B & C & A\vee B\vee C \\ \hline
1 & 1 & 1 & 1 \\
1 & 1 & 0 & 1 \\
1 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 \\
0 & 1 & 1 & 1 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & \star\end{array}$$
etc. The truth value of $A\vee B\vee C$ is $1$ whenever any of the $A$, $B$ or $C$ is $1$. It is only zero if all of them are zero (in the rows marked $\star$). Replace $1$ with $T$ and $0$ with $F$ to your liking. |
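The same tables can be generated mechanically for any arity; a small sketch (the names here are mine):

```python
from itertools import product

def or_table(k):
    # rows of (inputs..., output) for a k-ary disjunction,
    # listed with 1s first to mirror the tables above
    return [row + (int(any(row)),)
            for row in product((1, 0), repeat=k)]

for k in (2, 3):
    table = or_table(k)
    # the output is 0 on exactly one row: the all-zero row
    zero_rows = [row[:-1] for row in table if row[-1] == 0]
    assert zero_rows == [(0,) * k]
```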
Absolute value of two numbers | $|a| > |b| \Rightarrow a \not= \pm b \Rightarrow a - b \not= 0 \Rightarrow |a - b| > 0$ |
bijection between the Cantor set and $R$ | The Cantor set is the subset of $[0,1]$ with open middle thirds removed. So every element of the Cantor set can be written as a ternary (base $3$) fraction with the digits $0$ and $2$, while the removed values require the digit $1$ in the ternary expression
So you might think we can get from the Cantor set to $[0,1]$ by taking the ternary expression, replacing all the $2$s by $1$s and reading the expression as a binary expression. We might then think of a suitable invertible expression to take the next step to get to $\mathbb R$ such as the log-odds or logit function $f(x) = \log\left(\frac{x}{1-x}\right)$
We still have a couple of problems:
one is that $0$ and $1$ are in the original Cantor set but the second step will take them to $-\infty$ and $+\infty$, neither of which are in $\mathbb R$
the other is that some pairs of values in the original Cantor set in fact get sent to the same value in the first step. As an example $\frac19$ in the Cantor set has ternary expression $0.0022222\ldots_3$ (or $0.01_3$, but that fails the requirement not to have $1$s) so gets sent to $0.0011111\ldots_2$, i.e. $\frac14$; meanwhile $\frac29$ in the Cantor set has ternary expression $0.02_3$ (or $0.0122222\ldots_3$, but that too fails the requirement) so gets sent to $0.01_2$, i.e. also $\frac14$
Fortunately the number of points in the Cantor set which are affected by either of these are countable and can be ordered, for example as $0,1,\frac13,\frac23,\frac19,\frac29,\frac79,\frac89,\frac1{27},\ldots$, and the points in $\mathbb R$ which might receive dual inputs are also countable and can be ordered, for example as $\log(\frac{1}{1}),\log\left(\frac{1}{3}\right),\log\left(\frac{3}{1}\right),\log\left(\frac{1}{7}\right),\log\left(\frac{3}{5}\right),\log\left(\frac{5}{3}\right),\log\left(\frac{7}{1}\right),\log\left(\frac{1}{15}\right),\log\left(\frac{3}{13}\right),\ldots$
So this now gives us a satisfactory approach:
for elements of the Cantor set which are not $0$ or $1$ or other rationals with a lowest-terms denominator of a power of $3$: write them as a ternary fraction, replace the $2$s with $1$s, read as a binary fraction, and then apply the log-odds function to them
for elements of the Cantor set which are $0$ or $1$ or other rationals with a lowest-terms denominator of a power of $3$: find their position in the list of such values ordered by lowest-terms denominator then lowest-terms numerator and chose the corresponding value from the list of positive rationals where lowest-terms numerator and denominator add to a power of $2$ ordered by this sum then lowest-terms numerator, and take the logarithm of the result
Both of these are invertible and so you have a bijection |
Does the mathematician use the prefix “non” always strictly mean “not necessarily”? | The prefix 'non' is not always used inclusively. For example 'let $x$ be nonzero' definitely does not allow the case that $x$ is $0$, and 'let $x$ be nonnegative' means that $x$ is either positive or $0$. |
Region closer to one given point than to any other given point | $P_5$ is too far away, so it doesn't count.
You need to draw the perpendicular bisectors between $P_0$ and each $P_i$, and you will get a trapezoid with perimeter equal to $6(1+1+\sqrt2) - (1+1-\sqrt2) = 10+7\sqrt2$
EDIT :
I was wrong so I edited
user21820 is right (if I'm not wrong) :p |
Derivative of polynomial in GF(9) | Looks like $\alpha$ is a primitive element (= a generator of the multiplicative group), so it is of order $9-1=8$. Therefore $\alpha^4$ is of order $2$. In other words $\alpha^4=-1=2.$ Thus $2\alpha=\alpha^4\cdot\alpha=\alpha^5$ in your table.
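The table from the question isn't reproduced here, so as an illustration take the concrete model $\mathrm{GF}(9)=\mathbb F_3[\alpha]/(\alpha^2+\alpha+2)$ (one possible choice of irreducible modulus; the question's table may use another). In this model $\alpha$ is primitive, and the identities above can be checked directly:

```python
def mul(a, b):
    # elements are pairs (c0, c1) meaning c0 + c1*alpha over F_3,
    # with the reduction alpha^2 = 2*alpha + 1 (from alpha^2 + alpha + 2 = 0)
    (a0, a1), (b0, b1) = a, b
    c0, c1, c2 = a0 * b0, a0 * b1 + a1 * b0, a1 * b1
    return ((c0 + c2) % 3, (c1 + 2 * c2) % 3)

alpha = (0, 1)
powers = {1: alpha}
for k in range(2, 9):
    powers[k] = mul(powers[k - 1], alpha)

assert powers[4] == (2, 0)              # alpha^4 = 2 = -1
assert powers[8] == (1, 0)              # alpha has order 8: primitive
assert mul((2, 0), alpha) == powers[5]  # 2*alpha = alpha^5
```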
Possible number of combinations from a subset | Take the binomial coefficient $\binom nk$ which gives the number of combinations of $k$ elements of $n$ objects. Thus you are looking for $$\sum_{k=1}^4 \binom 6k$$ Since $\sum_{k=0}^6 \binom 6k = 2^6$ we have
$$\sum_{k=1}^4 \binom 6k=2^6-\binom 60-\binom 65-\binom 66=2^6-1-6-1=2^6-8$$ different combinations. |
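A one-line confirmation with Python's `math.comb`:

```python
import math

n = 6
# sum over subsets of size 1 through 4
direct = sum(math.comb(n, k) for k in range(1, 5))
assert direct == 2**n - 8 == 56
```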
Why does zero correlation not imply independence? | Consider the following betting game.
Flip a fair coin to determine the amount of your bet: if heads, you bet \$1, if tails you bet \$2. Then flip again: if heads, you win the amount of your bet, if tails, you lose it. (For example, if you flip heads and then tails, you lose \$1; if you flip tails and then heads you win \$2.) Let $X$ be the amount you bet, and let $Y$ be your net winnings (negative if you lost).
$X$ and $Y$ have zero correlation. You can compute this explicitly, but it's basically the fact that you are playing a fair game no matter how much you bet. But they are not independent; indeed, if you know $Y$, then you know $X$ (if $Y = -2$, for instance, then $X$ has to be 2.) Explicitly, the probability that $Y=-2$ is $1/4$, and the probability that $X=2$ is $1/2$, but the probability that both occur is $1/4$, not $1/8$. (Indeed, in this game, there is no event with probability $1/8$.) |
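With only four equally likely outcomes, the claims can be verified by exhaustive enumeration:

```python
from itertools import product

# (bet, net winnings) over the four equally likely outcomes
outcomes = [(bet, sign * bet) for bet, sign in product((1, 2), (1, -1))]

E = lambda f: sum(f(x, y) for x, y in outcomes) / len(outcomes)
cov = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)
assert cov == 0                                   # zero correlation

# not independent: P(X=2, Y=-2) = 1/4, but P(X=2) P(Y=-2) = 1/8
p = lambda pred: sum(pred(x, y) for x, y in outcomes) / len(outcomes)
assert p(lambda x, y: x == 2 and y == -2) == 0.25
assert p(lambda x, y: x == 2) * p(lambda x, y: y == -2) == 0.125
```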
Permutation question. | 'All urns do not contain at least 1 ball' $\not = $ 'Not all urns contain at least 1 ball' |
Is there a group that can act as a foundation of mathematics? | EDIT: The followup question below/above was asked and answered affirmatively at mathoverflow.
In classical logic, implication doesn't need to be included as a primitive operation: we can define it in terms of the operations of negation and disjunction, which are respectively unary and associative, as $$A\rightarrow B\quad:\equiv\quad (\neg A)\vee B.$$ Indeed, the set of Boolean operations $\{\neg,\vee\}$ is functionally complete. So we can rewrite (say) all the $\mathsf{ZF}$ axioms using only disjunction and negation.
(You may also be interested in cylindrical algebras, which provide an algebraic semantics for first-order logic whose basic operations are all either unary or associative.)
Of course, that doesn't actually solve the notation problem you mention: we still need parentheses to distinguish between e.g. $(\neg A)\vee B$ and $\neg(A\vee B)$. Basically, the problem is that associative operations can be combined to produce non-associative operations. If what we actually want is to avoid parentheses, then instead of changing our basic logical connectives we should (as Asaf Karagila comments) adopt something like Polish notation. For instance, the parentheses-laden expression $$p\rightarrow\neg(q\rightarrow r)$$ would be written as $$CpNCqr.$$ This, though, has the drawback of (at least in my experience) generally being less readable for long expressions, especially when quantifiers enter the picture.
You separately ask, however, a question about structures, e.g. whether there is
a certain infinite group that has enough structure to correspond to the standard kinds of things discussed in mathematics.
It's unclear to me exactly what this means, but there are certainly some observations we can make which seem relevant. Most obviously, we have on the one hand the set of first-order formulas modulo $\mathsf{ZFC}$-provable equivalence, and on the other hand - given a computably presentable group $G$ - the set of words on the generators of $G$ modulo equality given the presenting relations of $G$, and we could ask for an appropriate correspondence. The coarsest question is:
Is there a finitely generated computably presentable group $G$ whose word problem is as complicated as the problem of telling whether two sentences are $\mathsf{ZFC}$-provably equivalent?
The answer is yes: we can in fact find a finitely presentable group $G$ such that the halting problem is many-one reducible to the word problem for $G$, and $\mathsf{ZFC}$-provable equivalence is many-one equivalent to the halting problem.
On a more algebraic note, we could ask:
Is there a finitely generated computably presentable group $G$ on generator set $A$ and a computable function $f$ from first-order formulas to words on $A$ such that $\mathsf{ZFC}\vdash\sigma\leftrightarrow\tau$ iff $f(\sigma)$ and $f(\tau)$ represent the same element in $G$?
I believe that the answer is yes - and in fact that we can use the same $G$ as before - but I don't know.
I added the "finitely generated" bit since without it the question is trivial: take a generator $a_\sigma$ for each formula $\sigma$ and consider an appropriate set of relations making $a_\sigma=a_\tau$ iff $\mathsf{ZFC}\vdash\sigma\leftrightarrow\tau$. |
What simple coupling argument might be meant here? | Pick $\delta_L$ satisfying case (1) and $\delta_L'$ satisfying case (2). Since by definition $\delta_L' >\delta_L$ for large enough $L$, $p'(0):= 1-2\delta_L' < p(0) : = 1-2\delta_L$ and $p'(1)=p'(2) : = \delta_L' > \delta_L = p(1)=p(2)$ for large enough $L$. Thus we can find a monotone coupling as follows.
Define a Uniform $(0,1)$ variable $U(v)$ on each vertex $v$ independently of each other. For case (1), define $X(v)$ to be $1$ if $U(v) \in (0,\delta_L)$, $X(v)$ to be $2$ if $U(v) \in (1-\delta_L,1)$ and $X(v)$ to be $0$ otherwise.
For case (2), define $X'(v)$ to be $1$ if $U(v) \in (0,\delta_L')$, $X'(v)$ to be $2$ if $U(v) \in (1-\delta_L',1)$ and $X'(v)$ to be $0$ otherwise.
To be more precise, for every vertex $v$ we have the measure space $([0,1]^2, \mathcal B [0,1]^2, \mu)$ where $\mu$ is the measure supported on the diagonal of $[0,1]^2$ with $\mu (\text{segment joining $(x,x)$ and $(y,y)$}) = |x-y|$. Write $s(x,y)$ for the segment joining $(x,x)$ and $(y,y)$. Define $(X(v),X'(v))$ on the diagonal of $[0,1]^2$ (elsewhere it doesn't matter what you define) as follows. $(X(v),X'(v)) = (1,1)$ on $s(0,\delta_L)$, $(X(v),X'(v)) = (0,1)$ on $s(\delta_L,\delta_L')$, $(X(v),X'(v)) = (2,2)$ on $s(1-\delta_L,1)$, $(X(v),X'(v)) = (0,2)$ on $s(1-\delta_L',1-\delta_L)$, $(X(v),X'(v)) = (0,0)$ otherwise.
Clearly $X$ and $X'$ have the right distribution. Further for every realization, if we have an adjacent $1,2$ pair for $X$ then we must have an adjacent $1,2$ pair for $X'$ for large enough $L$ by the coupling. So we are done. |
Are these spaces homotopy equivalent? | They are definitely homotopy equivalent: retract the connecting line down to a point.
They are definitely not homeomorphic. If they were, any homeo would preserve cut-points (points whose removal results in a disconnected space). But the figure 8 has only one cut-point, the circles-with-line has many. |
Spectral sequence for Ext | Well, a useless thing to say is that in general you have the Grothendieck spectral sequence (cf answer by Matt Emerton Contravariant Grothendieck Spectral Sequence), (I'm thinking of A,B there as your $Rf_*F, Rf_*G$).
But as your morphism is affine then this does not help one bit.
I guess it's impossible a priori to give any comparison: essentially because $f_*$ and $\underline{Hom}$ aren't compatible (which, via $E^\vee \otimes F = \underline{Hom}(E,F)$ is another way of saying that $f_*$ and $\otimes$ aren't compatible).
But I would very much like to be proven wrong, as I'm similarly stuck!
Is "Strong Induction" not actually stronger than normal induction? | I believe the crux of Noble's question, as presented in his recent comment, is:
[B]ut I can't see how assuming it's true for more than one value is more powerful.
In logical terms, we say that a statement $A$ is stronger than a statement $B$ if
$A \implies B$. It is clear that -- forgive me for writing $\wedge$ for and when discussing logical statements --
$A \wedge A' \implies A$,
and more generally
$A_1 \wedge A_2 \wedge \ldots \wedge A_n \implies A_n$.
In other words, assuming a set of things is stronger than assuming a subset of things.
This is the sense in which strong induction is "stronger" than conventional induction: for your predicate $P$ indexed by the positive integers, assuming $P(1) \wedge \ldots \wedge P(n)$ is stronger than just assuming $P(n)$. In more practical terms, the more hypotheses you assume, the more you have to work with and it can only get easier to construct a proof.
Now let me supplement with further comments:
Nevertheless the principle of mathematical induction implies (and, more obviously, is implied by) the principle of strong induction, via the simple trick of switching from the predicate $P(n)$ to the predicate $Q(n) = P(1) \wedge \ldots \wedge P(n)$.
Here is a further possible source of confusion in the terminology. Suppose I have a theorem of the form $A \wedge B \implies C$. Someone else comes along and proves
the theorem $A \implies C$. Now their theorem is stronger than mine: i.e., it implies my theorem. Thus when you weaken the hypotheses of an implication you strengthen the implication. (While we're here, let's mention that if you strengthen the conclusion of an implication, you strengthen the implication.) This apparent reversal may be the locus of the OP's confusion. |
What is the motivation behind the definition of adjoint of a linear operator? | A slightly different, but essentially equivalent, way of thinking of it is that the adjoint is the linear operator on the dual space $V^*$ induced by $T$.
Consider the dual space $V^*$, consisting of linear maps $V \to \Bbb{R}$. Then there is a map $T^*$ on $V^*$ defined as follows: for $\lambda \in V^*$, $T^*(\lambda)$ is the linear functional given by:
$$ \Big( T^*(\lambda) \Big)(v) = \lambda \Big( T(v) \Big) $$
If you choose a basis for $V$, for which $A$ is the matrix of $T$ in that basis, then the adjoint matrix (the transpose) is the matrix of $T^*$ in the dual basis.
The connection with your explanation is this: if you have a nondegenerate inner product on $V$ and $V$ is finite-dimensional, then there is an isomorphism between $V$ and $V^*$ using the inner product. For a vector $v$, define a linear functional $\lambda_v$ by the formula $\lambda_v(w) = \left<v,w\right>$. If you use this to identify $V$ and $V^*$, (and if the basis is orthonormal), then the basis and the dual basis are the same, and you can think of $T^*$ as an operator on $V$. |
Calculate the quotient graph | Your answer for b) is correct but incomplete. Note that a path can contain many edges.
$\leftrightarrow$ partitions the graph into the following sets: $\{a,b\}$, $\{c\}$ and $\{d,e,f,g,h\}$.
Two vertices of the graph are 'strongly connected' iff they are in the same set of this partition.
Your quotient graph will thus have $3$ vertices, given by $\{a,b\}$, $\{c\}$ and $\{d,e,f,g,h\}$. Can you figure out the edges between them? |
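If you want to check such an exercise mechanically, here is a small sketch that computes the strongly connected classes and the quotient edges by brute-force reachability. The edge set below is hypothetical (the exercise's actual graph isn't shown here); it is simply chosen to produce the partition above.

```python
def reachable(adj, s):
    """Vertices reachable from s by a directed path (including s itself)."""
    seen, stack = {s}, [s]
    while stack:
        v = stack.pop()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def quotient(adj):
    """Strongly connected classes and the quotient edges between them."""
    verts = set(adj) | {w for ws in adj.values() for w in ws}
    reach = {v: reachable(adj, v) for v in verts}
    classes = {v: frozenset(w for w in verts
                            if v in reach[w] and w in reach[v])
               for v in verts}
    edges = {(classes[v], classes[w])
             for v in verts for w in adj.get(v, []) if classes[v] != classes[w]}
    return set(classes.values()), edges

# Hypothetical edge set consistent with the partition {a,b}, {c}, {d,e,f,g,h}:
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['d'],
       'd': ['e'], 'e': ['f'], 'f': ['g'], 'g': ['h'], 'h': ['d']}
classes, edges = quotient(adj)
assert classes == {frozenset('ab'), frozenset('c'), frozenset('defgh')}
```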
Help with the chain rule $h(t)=f(t, X(t))$ | If $X(t) = x(t)$ then $h(t) = f(t,x(t))$ and
$$
\frac{df}{dt} = \frac{\partial f}{\partial t}
+ \frac{\partial f}{\partial x}\frac{dx}{dt}
$$
Can you proceed from here? |
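If it helps, here is a numerical sanity check of this formula on a made-up example, $f(t,x)=t\,x^2$ with $x(t)=\sin t$ (these choices are mine, not from the question): the chain-rule expression matches a finite-difference derivative of $h(t)=f(t,x(t))$.

```python
import math

# Made-up example: f(t, x) = t * x**2 with x(t) = sin(t).
f = lambda t, x: t * x**2
h = lambda t: f(t, math.sin(t))

t = 0.7
# Chain rule: dh/dt = df/dt + df/dx * dx/dt = x**2 + 2*t*x*cos(t).
chain_rule = math.sin(t)**2 + 2 * t * math.sin(t) * math.cos(t)
# Independent check via a central finite difference of h.
eps = 1e-6
numeric = (h(t + eps) - h(t - eps)) / (2 * eps)
assert abs(chain_rule - numeric) < 1e-6
```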
Area of circles inside an equilateral triangle compared to the area of the triangle itself. | Here's the kicker: an equilateral triangle can be split into an equilateral hexagon and three congruent equilateral triangles. The incircle of the large equilateral triangle will be inscribed in the hexagon, and the area of one of the small triangles is $\frac16$ of that of the hexagon.
From this, we can show that each of the three "second-generation" circles has an are that is $\frac19$ of the "first generation" circle's area. Likewise, each "third generation" circle has an area $\frac19$ that of a "second generation" circle, and so on. |
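Assuming the question asks for the total area of all generations of inscribed circles, the $\frac19$ ratio turns the computation into a geometric series; a few lines of arithmetic confirm that the circles fill $\frac{\pi}{2\sqrt3}$ of the triangle.

```python
import math

s = 1.0                                            # side of the big triangle
triangle = math.sqrt(3) / 4 * s**2                 # triangle area
incircle = math.pi * (s / (2 * math.sqrt(3)))**2   # first-generation circle

# Generation n consists of 3**n circles, each (1/9)**n of the incircle's area.
total = sum(3**n * incircle / 9**n for n in range(60))

# Geometric series: total = incircle / (1 - 1/3), so the circles
# fill pi / (2*sqrt(3)) of the triangle.
assert abs(total / triangle - math.pi / (2 * math.sqrt(3))) < 1e-12
```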
Conditional expectation bound | What we know is that for each positive $\varepsilon$, there exists a set $\Omega_\varepsilon $ of probability $1$ such that for all $\omega\in\Omega_\varepsilon$,
$$\lvert\mathsf{E}[g(X)\mid\mathcal{F} ]\left(\omega\right) \rvert\leqslant C\varepsilon+\mathsf{E}[\lvert X\rvert\mid\mathcal{F}]\left(\omega\right)\varepsilon^{-1}.$$
In order to safely replace $\varepsilon$ by something random, we should have the previous inequality on a set of probability one independent of $\varepsilon$.
To this aim, let $\Omega':=\bigcap_{\substack{ \varepsilon\in \mathbb Q\\ \varepsilon\gt 0}}\Omega_\varepsilon$. Then $\Omega'$ has probability one (as a countable intersection of sets of probability one) and for all $\omega\in\Omega'$ and every positive rational number $\varepsilon$,
$$\tag{*} \lvert\mathsf{E}[g(X)\mid\mathcal{F} ]\left(\omega\right) \rvert\leqslant C\varepsilon+\mathsf{E}[\lvert X\rvert\mid\mathcal{F}]\left(\omega\right)\varepsilon^{-1}.$$
Now, if $\varepsilon$ is a positive real number, there exists a sequence of positive rational numbers $\left(\varepsilon_n\right)_{n\geqslant 1}$ which converges to $\varepsilon$, hence applying (*) to $\varepsilon_n$ and letting $n$ go to infinity gives what we want.
Determining limit of recursive sequence | We have
$$a_{n+2} - a_{n+1} = \dfrac{a_{n+1}+(2n-1)a_{n}}{2n} - a_{n+1} = \dfrac{(2n-1)a_{n}-(2n-1)a_{n+1}}{2n} = -\dfrac{2n-1}{2n} \left(a_{n+1}-a_{n}\right)$$
Let $b_n = a_{n+1}-a_{n}$. We then have
$$b_{n+1} = - \dfrac{2n-1}{2n}b_n$$
with $b_1 = l-k$. We then have
$$b_{n+1}=b_1(-1)^{n} \prod_{k=1}^n \dfrac{2k-1}{2k} = b_1(-1)^n \dfrac{(2n)!}{4^n(n!)^2} = b_1\left(-\dfrac14\right)^n \dbinom{2n}n$$
We have
$$a_{n+1} -a_1= \sum_{k=1}^n \left(a_{k+1}-a_k\right) = \sum_{k=1}^nb_k = b_1 \sum_{k=0}^{n-1} \left(-\dfrac14\right)^k \dbinom{2k}k$$
Recall that
$$\sum_{k=0}^{\infty} \dbinom{2k}k x^k = \dfrac{1}{\sqrt{1-4x}}$$for $\vert x \vert < 1/4$ (and, by Abel's theorem, also at $x=-1/4$, where the series converges). Plugging in $x=-1/4$, we obtain
$$\lim_{n \to \infty} a_{n+1} = a_1 + \dfrac{b_1}{\sqrt2}=k+ \dfrac{l-k}{\sqrt2}$$ |
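A quick numerical check of the result, iterating the recursion for $b_n$ directly (convergence is slow, of order $1/\sqrt n$, since the terms decay like $\binom{2n}{n}4^{-n}\sim 1/\sqrt{\pi n}$):

```python
import math

def partial_limit(k, l, n):
    """a_1 plus the first n terms b_j, with b_{j+1} = -((2j-1)/(2j)) b_j."""
    a, b = k, l - k
    for j in range(1, n + 1):
        a += b
        b *= -(2 * j - 1) / (2 * j)
    return a

k, l = 2.0, 5.0
approx = partial_limit(k, l, 200000)
exact = k + (l - k) / math.sqrt(2)
# The alternating series converges slowly, so the agreement is loose.
assert abs(approx - exact) < 0.01
```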
Basic probability question on expectation value | The simple though tedious approach is to list all $600$ possible results of the rolls, add up the results, and divide by $600$. This would not be hard in a spreadsheet with copy down and copy right. Any other approach is just a simpler way to get the same result.
An approach that is less work is to note that if I roll at least $21$ I have a guaranteed win. These rolls contribute $20(\sum_{i=21}^{30}i)=20\cdot 10 \cdot \frac{51}2=5100$ to the sum. The cases where we tie contribute $\sum_{i=1}^{20}-i=-210$ to the sum. The cases where I roll $20$ or less and we don't tie contribute a net of $0$ because of the symmetry, so the expected value to me is $\frac {5100-210}{600}=8.15$ |
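The game's rules aren't restated above, but the payoffs that reproduce these numbers are: you roll a d30 against an opponent's d20, winning your own roll if you roll strictly higher and paying the opponent's roll otherwise (this reconstruction is my reading of the computation). The "simple though tedious" brute force is then a few lines:

```python
# Brute force over all 30 * 20 = 600 equally likely outcomes.
total = 0
for i in range(1, 31):        # my d30 roll
    for j in range(1, 21):    # opponent's d20 roll
        total += i if i > j else -j
expected = total / 600
assert abs(expected - 8.15) < 1e-9
```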
With stochastic variable $X$ with density $2xe^{-x^2}$ and $Y=X^2$. Calculate $EY^n$ | Hint: $EY^{n}=EX^{2n}=\int_0^{\infty} 2x^{2n+1} e^{-x^{2}}dx$. Put $y=x^{2}$ and the integral becomes $\int_0^{\infty} y^{n} e^{-y}dy$. I will leave it to you to evaluate this integral. |
The boundary of this set is smooth? | The intersection of your sets might be any convex set. In particular it might be a square, which is not smooth.
To get a square consider
$$
\Omega_i = \{(x,y)\in \mathbb R^2 \colon f_i(x,y) < 0\}
$$
where
$$
f_i(x,y) = (|x|^i + |y|^i)^{\frac 1 i} - 1,
$$
so that each $\Omega_i$ is the open unit ball of the $\ell^i$-norm; since these balls grow with $i$, the intersection is the $\ell^1$ ball $\{|x|+|y|<1\}$, whose boundary is a square.
Simple question on factoring derivatives with "e" | Starting with $e^{-x}tx^{t-1}-e^{-x}x^t$
you probably proceeded to $e^{-x}(tx^{t-1}-x^t)$
and then maybe you are overlooking that $x^t=x\cdot x^{t-1}$ so that you recognize both terms hold a factor of $x^{t-1}$.
I've seen this kind of blindness before when students struggle to factor things like $x^{1/2}+x^{3/2}$. They sometimes don't immediately see that $x^{1/2}$ is a common factor since $x^{3/2}=x\cdot x^{1/2}$. It's a good thing to be aware of! |
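If you ever doubt a factorization like this, a numerical spot-check settles it instantly (the sample points below are arbitrary):

```python
import math

# Spot-check e^(-x)*t*x^(t-1) - e^(-x)*x^t == e^(-x)*x^(t-1)*(t - x)
# at a few arbitrary sample points.
for x in (0.5, 1.3, 2.7):
    for t in (2.0, 3.5):
        original = math.exp(-x) * t * x**(t - 1) - math.exp(-x) * x**t
        factored = math.exp(-x) * x**(t - 1) * (t - x)
        assert math.isclose(original, factored, rel_tol=1e-12, abs_tol=1e-12)
```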
The most general solution of a tensor equation | Let's consider the matrix
$$\left(\begin{array}{cccc} \partial_1 f & \partial_2 f &\cdots & \partial_n f\\ \partial_1 C & \partial_2 C &\cdots & \partial_n C\end{array}\right).$$ If the conditions given in the question are satisfied, the matrix must have, at most, rank one. This means that at any point $\nabla f\| \nabla C.$
Assuming $\nabla C\ne 0,$ the condition $\nabla f\| \nabla C$ implies that $f$ is constant on the level hypersurfaces of $C$ (at least on their connected components). So, at least locally, $f$ and $C$ share their level hypersurfaces. This means it is $f=f(C)$ or $C=C(f),$ at least locally.
utility function question from my textbook | The consumer is solving
$$\max x_1^{1/3}x_2^{1/2}$$
subject to
$$2 x_1 + 5 x_2 \leq 40.$$
It should be clear that the constraint must bind ($2x_1 +5x_2=40$), since the objective is increasing in both $x_1$ and $x_2$. Using the method of Lagrange multipliers, any optimal consumption plan must satisfy the following FOCs:
$$1/3 x_1^{-2/3} x_2^{1/2}=2 \lambda,$$
$$1/2 x_1^{1/3} x_2^{-1/2}=5\lambda.$$
Rearranging terms gives us
$$x_1=\frac{5}{3} x_2.$$
Finally, plugging this back into the constraint, we get
$$2x_1+3 x_1=40$$
So $x_1=8$, $x_2=24/5$. |
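As a sanity check, the Cobb-Douglas closed form (spend the fraction $a/(a+b)$ of income on good 1, where $a,b$ are the exponents) reproduces these numbers, and so does a brute-force search along the budget line:

```python
# Cobb-Douglas shortcut: with u = x1**a * x2**b and budget p1*x1 + p2*x2 = m,
# the optimum spends the fraction a/(a+b) of income on good 1.
a, b, p1, p2, m = 1/3, 1/2, 2, 5, 40
x1 = (a / (a + b)) * m / p1
x2 = (b / (a + b)) * m / p2
assert abs(x1 - 8) < 1e-9 and abs(x2 - 24/5) < 1e-9

# Brute-force check: maximize utility along the budget line on a fine grid.
best_x1 = max((i / 1000 for i in range(1, 20000)),
              key=lambda t: t**a * ((m - p1 * t) / p2)**b)
assert abs(best_x1 - 8) < 1e-2
```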
Finding derivative of function of $x$ and $y$ | While taking partial derivatives you should think of the given function as a function with one parameter depending on a single variable, in other words:
$$f_x(y):=f(x,y)=:f_y(x)$$
where in $f_x$ you regard $x$ as a parameter, and in $f_y$ you do the same for $y$.
[It might make this more understandable or not, it's up to you to decide]
Now that we have functions in just one variable you can apply the definition of derivative for functions in one variable, so:
$$\frac{\partial f(x,y)}{\partial x}=\frac{df_y(x)}{dx}$$
And analogously
$$\frac{\partial f(x,y)}{\partial y}=\frac{df_x(y)}{dy}$$
Now for you specific example:
$$f(x,y)= \frac{(7y + x^2)}{(1+y^2)}$$
So
$$\frac{\partial f(x,y)}{\partial x}=\frac{2x}{1+y^2}$$
(because the derivative of the "parameter" $y$ with respect to $x$ is zero)
Can you manage to do the partial derivative with respect to $y$?
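You can always double-check a partial derivative against a finite difference; here is the $x$-derivative of this example (the sample point is arbitrary):

```python
# f and its claimed x-derivative 2x / (1 + y^2).
f = lambda x, y: (7 * y + x**2) / (1 + y**2)
fx = lambda x, y: 2 * x / (1 + y**2)

x0, y0, eps = 1.7, -0.4, 1e-6
# Central finite difference in x, holding y fixed at y0.
numeric = (f(x0 + eps, y0) - f(x0 - eps, y0)) / (2 * eps)
assert abs(numeric - fx(x0, y0)) < 1e-6
```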
What does it mean when you differentiate and get a constant value? | No, it means $y$ changes as much as $4$ times of changes in $x$.
Derivative captures how much the changes of $y$ in accordance with changes in $x$, locally. A constant derivative means the change-relationship are constant in all places of $x$.
If you have a graph of $y=4x$, you'll see this pretty clearly.
Also, even if you do not take the derivative of $y=4x$, any change $\delta x$ in $x$ will result in a change of $4\delta x$ in $y$.
Prove that $\int_0^1 \frac{1}{x^\delta} \,dx$ convergent $\Leftrightarrow \ \delta <1$ | For $\delta \neq 1$, $$\int_a^1 \frac 1{x^\delta} \mathrm d x = \left[\frac {x^{1-\delta}}{1-\delta}\right]_a^1 = \frac 1{1-\delta} \left(1-a^{1-\delta}\right)$$ This quantity has a finite limit when $a\to 0$ if and only if $1-\delta > 0$ so $\delta < 1$.
Now if $\delta = 1$, $$\int_a^1 \frac 1{x^\delta} \mathrm d x = \left[\ln x\right]_a^1 = -\ln a \to_{a \to 0} + \infty$$ |
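Numerically, the closed form makes the dichotomy visible: for $\delta<1$ the values $\int_a^1 x^{-\delta}\,dx = \frac{1-a^{1-\delta}}{1-\delta}$ settle to $\frac1{1-\delta}$ as $a\to0$, while for $\delta>1$ they blow up.

```python
def F(a, d):
    """Closed form of the integral from a to 1 of x**(-d), for d != 1."""
    return (1 - a**(1 - d)) / (1 - d)

# delta = 1/2 < 1: the values settle to 1/(1 - delta) = 2 as a -> 0.
assert abs(F(1e-12, 0.5) - 2.0) < 1e-5
# delta = 3/2 > 1: the values grow without bound as a shrinks.
assert F(1e-4, 1.5) > F(1e-2, 1.5) > 10
```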
Prove that if a path exists between two vertices $u$ and $v$ in a directed or undirected graph, a simple path exists between $u$ and $v$. | if you look for kind of a formal proof, then you can try something like that: let $P$ be the path between $u$ to $v$, as $u,v\in V$, and P has the form $\{u,...,v\}$. if P is simple then we finish, otherwise, there is a sequence in P s.t $\{v_i,...,v_i\}$ for some $v_i \in V$ (there is a cycle that start and finish in $v_i$. we can look at $P'$ as the sequence in P until the first $v_i$, concatenated with the sequence in P from the second $v_i$.we shall do this for every cycle in P, and we shall get P' a path that is simple, because it is still a path (we passed through a valid edges), and we don't have any repeated node. |
Sturm Liouville problem with additional term. | For each $\lambda \in\mathbb{C} $, there are two linearly independent solutions of $f''+(A+B)f=\lambda f$ on $[a,b]$, provided $A$ and $B$ are well-behaved on $[a,b]$. So it makes no sense to talk about the $n$-th eigenvalue $\lambda_{n}$ or the n-th eigenfunction.
By the way, most people prefer to write $-\frac{d^{2}}{dx^{2}}f+(A+B)f=\lambda_{n}f$ so that the eigenvalues may be arranged as $\lambda_{1} < \lambda_{2} < \lambda_{3} < \cdots$, at least for separated endpoint conditions. This is better correlated with Quantum Mechanics, too. For large $n$, the eigenvalues behave asymptotically the same as for the case where $A+B=0$ (you can see this because $\lambda$ overwhelms $A+B$ in $-f''+(A+B-\lambda)f=0$.) The corresponding eigenfunctions are also asymptotic to the classical trigonometric solutions of $-f''-\lambda f=0$. This is so strongly true that the Fourier series for the perturbed problem converges at a point $x$ iff the classical series converges at the same point, and the rate of convergence is comparable as well. So $A$ and $B$ don't make much difference for the large eigenvalues. It is only the lowest eigenvalues and eigenfunctions which are strongly affected by the presence of the perturbation $B$. A Rayleigh-Ritz type of method where you look at $(f''+(A+B)f,f)/(f,f)$ for functions $f$ constrained to satisfy the desired endpoint conditions reveals a definite dependency on $B$ in this case, and it also helps you estimate the change in eigenvalues. There is more stability for regular problems on finite intervals than one might expect, at least for separated endpoint conditions and smooth, bounded coefficients $A$, $B$.
What's an example of a sequence that isn't bounded and whose only limit point is 0? | What about
$$
a_n = \begin{cases}
0 &\text{ if } n \text{ is even}\\
n &\text{ otherwise}
\end{cases}
$$
? |
How to compute integrals using Riemann sums | Factor out what can be factored:
$$\sum_{i=1}^n \frac{s^2i^2}{n^2}·\frac{s}n=\frac{s^3}{n^3}\sum_{i=1}^{n}i^2=\frac{s^3}{n^3}\frac{n(n+1)(2n+1)}6$$
for the upper Riemann sum, and almost the same for the lower one ($i=0$ to $n-1$). |
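A short script confirms both the closed form and the limit $s^3/3$ (taking $s=2$ as an arbitrary example):

```python
def upper_riemann(s, n):
    """Upper Riemann sum of x**2 on [0, s] with n equal subintervals."""
    return sum((s * i / n)**2 * (s / n) for i in range(1, n + 1))

s, n = 2.0, 100000
closed_form = s**3 / n**3 * n * (n + 1) * (2 * n + 1) / 6
assert abs(upper_riemann(s, n) - closed_form) < 1e-9
# As n grows, the sum approaches the integral s**3 / 3.
assert abs(upper_riemann(s, n) - s**3 / 3) < 1e-3
```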
How can I create an equation for a Gaussian distribution based on a sum of a series? | Not sure I understand your question, but seems to me you could let $$A=50(e^{-1}+e^{-9}+e^{-25}+\cdots+e^{-729})^{-1}$$ and then use $f(x)=Ae^{-(2x-29)^2}$. The exponents in the expression for $A$ are the squares of the odd numbers from $1$ to $27$, which are also the exponents that will show up in the formula for $f(x)$ as $x$ goes $1,2,3,\dots,28$.
EDIT: It occurs to me that the binomial is a good approximation to the bell curve. The 28 numbers $27\choose0$, $27\choose1$,..., $27\choose{27}$ add up to $2^{27}$. Let me abbreviate the number $100/2^{27}$ by $B$. Then you could give out ${27\choose0}B$ the first day, ${27\choose1}B$ the second day, etc., to ${27\choose27}B$ the last day.
FURTHER EDIT: In response to the comment that the first gives a curve that's too steep (too concentrated at the center), there are ways to fiddle that. You could take the numbers $(100/B)e^{-(2x-29)^2/30}$ for $x=1,2,\dots,28$, where $B=\sum_1^{28}e^{-(2n-29)^2/30}$. If you don't like that shape, you could replace the $30$ with something bigger (to make the numbers flatter) or smaller (to make the numbers steeper).
Similarly, you can fiddle with the binomials; pick some number $n$, let $$A={2n+27\choose n}+{2n+27\choose n+1}+\cdots+{2n+27\choose n+27}$$ and then let your gifts be $${100\over A}{2n+27\choose n},{100\over A}{2n+27\choose n+1},\dots,{100\over A}{2n+27\choose n+27}$$ Again, the bigger $n$ you choose, the flatter the numbers you get. |
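For the binomial suggestion, the 28 daily amounts are easy to generate and do total 100 (here I just scale row 27 of Pascal's triangle):

```python
from math import comb

# Day k (k = 0, ..., 27) gets 100 * C(27, k) / 2**27.
amounts = [100 * comb(27, k) / 2**27 for k in range(28)]
assert abs(sum(amounts) - 100) < 1e-9
# The amounts rise to a peak mid-month and fall back symmetrically.
assert amounts == amounts[::-1] and max(amounts) == amounts[13]
```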
Normal and geodesic curvatures of intersection curve of two given surfaces | Uhm, if the surfaces are smooth and if at the origin you can express both of them as $z=S_i(x,y)$, then, using implicit differentiation, you can compute the tangent planes in the origin, and find the direction of the tangent line to the intersection line simply intersecting those planes. Furthermore, the gradients of the surfaces give you the basis in the tangent space, so that you should be able to build the first and second fundamental forms at the origin and proceed from there. |
How do I solve logarithmic expressions that have different bases | You have incorrectly changed the base.
A useful logarithm rule is the 'change of base rule'. It states:
$$\log_ab=\frac{\log_nb}{\log_na}$$
This lets you change any base to any other base as the $n$ only appears on the right hand side.
In questions like the one you teacher has given simplification can occur when you have a base which is a rational power of another base. In your problem we see that $25$ is a power of $5$ so we can simplify the second terms as follows:
$$\log_{25}{5}=\frac{\log_55}{\log_525}$$
$$=\frac{1}{\log_55^2}$$
$$=\frac{1}{2\log_55}$$
$$=\frac{1}{2}$$
You can then add it to the first term which you should be able to simplify to 3 by basic definitions.
So this would give you an answer of $$3+\frac{1}{2}-\log_{10}64$$
Additional:
Your textbook asked for it as a single logarithm. This is doable but won't look pretty so I'm questioning if that is actually what the book meant.
$$3+\frac{1}{2}-\log_{10}64=\frac{7}{2}-\log_{10}64$$
$$=\frac{7}{2}\log_{10}{10}-\frac{2}{2}\log_{10}64$$
$$=\frac{1}{2}\left(7\log_{10}10-2\log_{10}64\right)$$
$$=\frac{1}{2}\left(\log_{10}10^7-\log_{10}64^2\right)$$
$$=\frac{1}{2}\left(\log_{10}\frac{10^7}{4096}\right)$$
$$=\frac{1}{2}\left(\log_{10}\frac{78125}{32}\right)$$
$$=\log_{10}\sqrt{\frac{78125}{32}}$$ |
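A quick numerical check of both the change-of-base step and the final single-logarithm form:

```python
import math

log = math.log10
# Change of base: log_25(5) = log(5) / log(25) = 1/2.
assert math.isclose(log(5) / log(25), 0.5)
# The simplified value and the single-logarithm form agree.
assert math.isclose(3 + 0.5 - log(64), log(math.sqrt(78125 / 32)))
```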