title | upvoted_answer
---|---
GCSE: The volume of frustum | I will, as an A Level student, try to show you exactly what you'd need to understand to get full marks in a similar GCSE question. Firstly, before I begin, you need to realise that you have not actually found the volume of the solid.
You have found the cross-sectional area of this solid when looked at perpendicular to its base. Indeed, this area is equal to $1.05\text{m}^2$. But this is not relevant to the question. We are looking for volume, not area or surface area.
Now to begin:
Throughout this solution I will assume that the bases of this truncated pyramid are square, which would be a more normal GCSE question.
Think of the solid as a larger pyramid with its top, a smaller pyramid, truncated. I will avoid trigonometry in case you have not covered it yet.
Let the height of the larger pyramid (which currently has its top chopped off) be $h$. Then the height of the smaller pyramid (which is chopped off) is $h-1.5$. Using similar triangles, we see that
$$\frac{h}{0.5}=\frac{h-1.5}{0.2}\implies0.2h=0.5h-0.75\implies h=\frac{0.75}{0.3}=2.5$$
So the height of the smaller pyramid is $2.5-1.5=1$. Now, you should know that the volume of a square based pyramid with base length $a$ and perpendicular height $h$ is
$$\frac{1}{3}a^2h$$
Hence, the volume, $V_1$, of the smaller pyramid is $\frac{1}{3}(0.4^2)(1)=\frac{4}{75}$. Similarly, the volume of the larger pyramid, $V_2$, is $\frac{1}{3}(1^2)(2.5)=\frac{5}{6}$. Hence, the volume of the solid we are dealing with is
$$ V_2-V_1=\frac{5}{6}-\frac{4}{75}=0.78\text{m}^3$$
Using $$\text{density}=\frac{\text{mass}}{\text{volume}}$$
we can rearrange to find the required mass:
$$\text{mass}=\text{density}\times\text{volume}=0.78\times2750=2145\text{kg}$$
As $2145\text{kg}$ is larger than $2$ tonnes, we conclude that the crane cannot lift the pedestal.
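If you want to double-check the arithmetic, here is a short Python sketch using the dimensions from the working above (base $1\text{m}$, top $0.4\text{m}$, height $1.5\text{m}$, density $2750\text{kg/m}^3$); exact fractions avoid any rounding doubt:

```python
from fractions import Fraction

# Similar triangles give the apex height h of the completed pyramid:
#   h / 0.5 = (h - 1.5) / 0.2  =>  h = 0.75 / 0.3 = 2.5
h = Fraction(3, 4) / Fraction(3, 10)          # 2.5 m
h_small = h - Fraction(3, 2)                  # 1 m, height of the removed top

def pyramid_volume(a, h):
    """Volume of a square-based pyramid with base length a and height h."""
    return Fraction(1, 3) * a**2 * h

V = pyramid_volume(1, h) - pyramid_volume(Fraction(2, 5), h_small)
mass = V * 2750                               # density * volume

print(V, float(mass))                         # 39/50 2145.0
```

The volume is exactly $39/50 = 0.78\text{m}^3$, so the mass of $2145\text{kg}$ above is exact, not rounded.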
I hope I have helped you. If you have any further queries, please don't hesitate to ask. I wish you luck in whatever assessments your school give you in place of GCSEs.
Please feel free to email me at [email protected] if you have any other problems. I will be glad to help you. |
how to calculate remainder of large numbers? (no calculator) | You could use the Chinese remainder theorem.
$x\equiv30^{29}\bmod 51\implies x\equiv0\bmod3$ and $x\equiv13^{13}\bmod 17$. |
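This reduction is easy to confirm with Python's built-in three-argument `pow`, which does modular exponentiation directly:

```python
# Verify the CRT reduction: 51 = 3 * 17, 30 ≡ 0 (mod 3), and
# 30 ≡ 13 (mod 17) with 13^29 ≡ 13^13 (mod 17) by Fermat (29 ≡ 13 mod 16).
x = pow(30, 29, 51)        # modular exponentiation without big intermediates

assert x % 3 == 0
assert x % 17 == pow(13, 13, 17)
print(x)                   # 30
```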
distance along the cubic curve $y=x^3$ | Your estimate is correct based on the question. It represents the total length of the two green lines
We also have a method to calculate the actual arc length. It requires integration. You may see the method described in the 2nd part of the problem in this answer.
So, the arc length would be given by the definite integral:
$$\int_{0}^{2}\sqrt{9x^{4}+1}\cdot dx$$
Which comes out to be around $8.63032922223$
And your estimate is approximately $8.485281$
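The integral is easiest to evaluate numerically; a simple composite Simpson's rule (a self-contained sketch, not tied to any particular library) reproduces both numbers:

```python
import math

def arclength_integrand(x):
    # Arc length element for y = x^3: sqrt(1 + (dy/dx)^2) = sqrt(1 + 9x^4)
    return math.sqrt(9 * x**4 + 1)

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

arc = simpson(arclength_integrand, 0, 2)
chords = math.hypot(1, 1) + math.hypot(1, 7)   # the two chords (0,0)->(1,1)->(2,8)
print(arc, chords)   # roughly 8.6303 and 8.4853
```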
Now that depends upon your error tolerance if you consider it as a good estimate or not. Cheers :) |
Is the Natural numbers definition (recursively using empty set) Circular? | I think you are confusing the logic you are building your naturals with, with the meta-logical level of the human who constructs naturals.
When you are constructing the naturals, you have no concept of finite cardinality of a set. You can say that, if there is a bijection between two sets, then they have the same cardinality, but, still, you don't have the concept of numbers at all because you have not built them yet.
It is true that, when constructing natural numbers recursively, you will find that the definition of the number 3 (i.e. $\{\emptyset, \{\emptyset\}, \{\emptyset,\{\emptyset\}\}\}$) contains three elements, but you are saying that it contains three elements because you know what three is (and of course the definition of equinumerosity and cardinality of sets). Before defining numbers, you don't have numbers; therefore, you cannot say anything about things measured by, or in any way related to, numbers.
You can count the number of objects inside a natural number because you, as a human, know natural numbers and know how to operate with them. When you say that the definition is circular, you are saying that, starting from a concept inside your theory, you get to an equivalent one, staying inside your theory. But it's not the case: you have the concept of naturals (outside your theory) and you managed to represent it inside your theory. This is not circular: this is the expression of an idea inside a theory. That is: you have the concept of the number three in your head. If somebody asks you what is three, what would your answer be?
If you have ever programmed, in whatever programming language, you surely had to define some new data type, right? There is a form, in Computer Science, which is for defining new data types (Backus–Naur form). Naturals could be expressed in this form with the syntax.
$$\mathbb{N} ::= 0 | S\,\mathbb{N}$$
Which is similar to the set-theoretical definition of natural numbers. Why have we defined it? Because computers cannot work with data types that are not defined. I think that's the same in set theory: before, you had only sets, axioms that speak about sets and, maybe, relations. Then you need to express the idea of natural number to be able to count. You don't have numbers, so you create them. The definition is not circular because this would imply that you already know, in your own theory, what is a natural number, what is counting, what is, for instance, a set of three elements. But you don't! You know them at a "meta-theory level", and, in order to use them in your theory, you need to construct them with the tools you have.
Counting is not a tool because, before counting the elements of a set, you need some way to say how many there are. And for that, you need naturals.
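To make the analogy concrete, here is a minimal Python sketch of the BNF grammar above as a data type; the names `Zero`, `Succ` and `to_int` are my own choices for illustration. Note that `to_int` is exactly the meta-level step: it uses the integers we already have (in our heads, and in Python) to read off a constructed natural.

```python
# A sketch of the BNF grammar  N ::= 0 | S N  as a Python data type.
from dataclasses import dataclass

class Nat:
    pass

@dataclass
class Zero(Nat):
    pass

@dataclass
class Succ(Nat):
    pred: Nat

def to_int(n: Nat) -> int:
    """Interpret a constructed natural at the meta-level (ordinary ints)."""
    count = 0
    while isinstance(n, Succ):
        count += 1
        n = n.pred
    return count

three = Succ(Succ(Succ(Zero())))
print(to_int(three))   # 3
```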
I hope I was clear, the answer is long, but this is a delicate topic: Herbert Enderton's Elements of Set Theory, has a paragraph dedicated to this problem. |
How can we explicitly write the distribution of this function of a discrete random variable? | You are over-complicating this: there are two random variables involved in a chain. The first, namely $X_1$, has a Bernoulli distribution with, say, probability $p$ of getting a crit; the second, namely $X_2$, is uniformly distributed in the set $\{1,\ldots ,100\}$, with the relations
$$\Pr [X_2=k|X_1=1]=\frac1{100},\quad\Pr [X_2=k|X_1=0]=0,\quad\text{ when }k\in\{1,\ldots,100\}$$
Therefore
$$
\Pr [X_2=k]=\Pr [X_2=k|X_1=1]\Pr [X_1=1]+\Pr [X_2=k|X_1=0]\Pr [X_1=0]\\[1em]
=\begin{cases}
\frac{p}{100},&\text{ when }k\in \{1,\ldots, 100\}\\
1-p,&\text{ when }k=0\\
0,&\text{ otherwise }
\end{cases}
$$
Above, $k=0$ represents the case when you don't get a crit. You can change this part, but the distribution will be the same as long as the function that maps values to percentages is consistent with the game. For example, we can set the map $f$ by
$$
\begin{align*}
&x\mapsto 100,\quad \text{ when }x=0\\
&x\mapsto 140,\quad\text{ for }x\in\{1,\ldots ,45\}\\
&x\mapsto 200,\quad \text{ for }x\in \{46,\ldots ,85\}\\
&\ldots\ldots \ldots \ldots \\
&x\mapsto 400,\quad \text{ when }x\in \{151,\ldots \}
\end{align*}
$$
All other factors are not random, so to evaluate "what is the probability to get $350\%$ crit factor or more" we do
$$
\Pr [Y\geqslant 350]=\Pr [X_2 \in \{x-\lfloor m_1 \rfloor:x\in f^{-1}([350-m_2,\infty))\}]
$$
where $m_2$ is the percentage modifier added from your player, $m_1$ is the modifier added to the base crit modifier, $f$ is the map described above, and $\lfloor \cdot \rfloor$ is the floor function (you can change it by the nearest integer function if you consider its best suited). |
How to find the points where the slope of the tangent is $-1$? | The derivative is
$$f'(x)=3x^2-4$$
The $x$-coordinates of the points where the slope is $-1$ can be found by solving
$$3x^2-4=-1 \Leftrightarrow x^2=1$$ |
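As a quick numerical sanity check, assuming a function consistent with the derivative above, say $f(x)=x^3-4x$ (any constant term would not change the slopes; the choice of $f$ here is my assumption):

```python
# Assumed: f(x) = x^3 - 4x, consistent with f'(x) = 3x^2 - 4.
def f(x):
    return x**3 - 4*x

def slope(x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-1, 1):          # the solutions of 3x^2 - 4 = -1
    print(x, slope(x))     # the slope is (numerically) -1 at both points
```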
Find the radius of convergence for the Taylor series of $f(z)$ at $z=0$. | $$\lim_{x\to 0}\frac{x}{\sin(x+x^2)}=\lim_{x\to 0}\frac{x}{x+x^2}\cdot\frac{x+x^2}{\sin(x+x^2)}=1$$
and the radius of convergence of the Taylor series in $x=0$ is given by the distance from the origin of the closest singularity. $\sin(z)$ vanishes only for $z\in \pi\mathbb{Z}$, hence:
$$ \rho = \frac{-1+\sqrt{1+4\pi}}{2}=1.34162771851\ldots$$ |
Consequences from Bott Periodicity | Consider the map of fibrations from the universal principal $U$-bundle over $BU$ to the path-loop fibration on $BU$:
$\require{AMScd}$
\begin{CD}
U @>>> EU @>>> BU \\
@VVV @VVV @| \\
\Omega BU @>>> PBU @>>> BU
\end{CD}
One can write down a map $EU \to PBU$; thus there is a map $U \to \Omega BU$. Looking at the long exact sequence in homotopy and using the five-lemma, one deduces that the induced map $U \to \Omega BU$ is a weak homotopy equivalence because $EU \simeq PBU \simeq *$. (Since both source and target can be equipped with a CW structure, this can be promoted to a homotopy equivalence.)
In general, this shows $\Omega BG \simeq G$ for a topological group $G$. See this answer for more details.
For your second question, there is a short exact sequence of topological groups $$1 \to SU \to U \xrightarrow{\det} S^1 \to 1$$ which is split, so $U \cong SU \rtimes S^1$ as topological groups. If we ignore the group structure and only care about the topology, then we have a homeomorphism $U \cong SU \times S^1$. Taking loops on both sides, we get $$\Omega U \cong \Omega SU \times \Omega S^1 \simeq \Omega SU \times \mathbb{Z}$$ as loop spaces, hence as $H$-spaces. |
Finding the smallest $x$ such that $ax\equiv b\mod m$ | $2x\equiv 2\pmod{6}$ if and only if $1x\equiv 1\pmod{3}$, where we divide by $\gcd(a,m)$ throughout. There will then be $2$ solutions (the gcd) modulo $6$ (namely $1$ and $4$), but since you want the smallest (presumably ordered as positive integers) this isn't a problem. |
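A brute-force check of the congruence above, as a small Python sketch:

```python
def solve_congruence(a, b, m):
    """Brute-force the solutions of a*x ≡ b (mod m) in {0, ..., m-1}."""
    return [x for x in range(m) if (a * x - b) % m == 0]

sols = solve_congruence(2, 2, 6)
print(sols, min(sols))    # [1, 4] 1
```

The number of solutions, $2$, is indeed $\gcd(2,6)$.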
Condition for conjugate subgroups | Does "equality" mean "coincidence"? If so, then $K/K \cap Z$ equals $H/H \cap Z$ iff $K=H$ (since both $K$ and $H$ contain $K \cap Z$ and their factor-groups coincide). |
Is there a name for this curve? Or, how should I describe the behavior of this graph (in words)? | I would say something like "an upside down bell curve". Or Gaussian function. |
Enumerate vertices of convex hull defined by set of linear programming problems | If I understand you correctly, you're taking the set $S$ of points
$(b,z) \in \mathbb R^{m+1}$ where $z$ is the maximal value of the linear
programming problem, over all $b$ for which the problem is feasible.
Note that if $x$ is a feasible solution for $b$ and $x'$ is a feasible solution for
$b'$, and $0 \le \lambda \le 1$, then $\lambda x + (1-\lambda) x'$ is
feasible for $\lambda b + (1-\lambda) b'$, and of course
$c \cdot(\lambda x + (1-\lambda) x') = \lambda (c \cdot x) + (1-\lambda) (c \cdot x')$, so $z$ is a concave function of $b$. I'll assume the feasible set is nonempty and the problem is dual-feasible, so not unbounded.
One may consider basic solutions corresponding to a nonsingular submatrix $B$ of the augmented matrix, where the basic variable values are $B^{-1} b$, the nonbasic variable values are all $0$, and the objective value is $c_B B^{-1} b$. This is feasible in the convex set where all basic decision variable values are nonnegative and all basic slack variable values are $0$. The objective value is always equal to the value in one of the basic solutions. Each extreme point of $S$ should be an extreme point of the region of feasibility of one of the basic solutions. |
Particular inequality equivalence | Let $r_i=\frac{1}{e}>0$ and $r'_i=2>0$.
Then, as $\log \frac{1}{e} = -1$, the first sum would always be negative and equal $-M(2-\frac{1}{e})<0$.
On the other hand, the second sum would be
$$\sum^M \frac{2-\frac{1}{e}}{\frac{1}{e}}=Me \left ( 2-\frac{1}{e} \right ) >0 $$ |
Semigroups isomorphism | Note that in $\mathbb Z_{256}$ we only have two elements $x$ so that
$$x^2=x$$
Indeed, $x^2=x$ means $x(x-1)\equiv 0$; if $x$ is odd, it is invertible, so $x=1$; otherwise $x-1$ is invertible $\bmod{256}$, so $x=0$.
In $S(4)$ there are many functions $f$ so that $f \circ f =f$, for example, all functions with only one element in the image.
So the answer is no.
Second solution The invertible elements in $S(4)$ are the permutations, thus $S(4)$ has $4!=24$ invertible elements. The invertible elements in $\mathbb Z_{256}$ are the numbers relatively prime to $256$ (i.e. odd numbers). Thus $\mathbb Z_{256}$ has $128$ invertible elements. |
Find if there exists an element which is sum of squares | Are you fine with algebraic number theory? The ring of gaussian integers $\mathbb{Z}[i]$ is a euclidean domain, hence a unique factorization domain. $n=a^2+b^2$ is equivalent to $n= z\cdot\overline{z}$ with $z=(a+ib)\in\mathbb{Z}[i]$, hence the problem of understanding which positive integers can be represented as a sum of two squares boils down to understanding which integer primes can be represented as a sum of two squares. Lagrange's identity
$$(a^2+b^2)(c^2+d^2) = (ad-bc)^2 + (ac+bd)^2 $$
is a consequence of $(a-ib)(c+id) = (ac+bd)+i(ad-bc)$ and grants that the set of integers that can be represented as a sum of two squares is a semigroup.
If $p\in\mathbb{Z}^+$ is an odd prime and $a^2+b^2=p$, we have that both $a$ and $b$ are invertible elements in $\mathbb{Z}/(p\mathbb{Z})$ and the square of $a^{-1} b$ equals $-1$. In particular, if $p$ is an odd prime that can be represented as a sum of two squares, $-1$ is a quadratic residue in $\mathbb{Z}/(p\mathbb{Z})^*$. By Legendre's symbol
$$ 1 = \left(\frac{-1}{p}\right) \equiv (-1)^{\frac{p-1}{2}} \pmod{p}$$
we get that $-1$ is a quadratic residue in $\mathbb{Z}/(p\mathbb{Z})^*$ iff $\frac{p-1}{2}$ is even, i.e. iff $p=4k+1$.
Corollary 1. Every prime of the form $4k+1$ can be represented as the sum of two squares and no prime of the form $4k+3$ can be represented as the sum of two squares.
Corollary 2. Let $n\in\mathbb{Z}^+$ and let
$$ n = 2^{k} p_1^{\alpha_1}\cdots p_{l}^{\alpha_l}\cdot q_1^{\beta_1}\cdots q_m^{\beta_m} $$
be the factorization of $n$, with $p_i\equiv 1\pmod{4}$ and $q_j\equiv 3\pmod{4}$. $n$ can be represented as a sum of two squares iff $\beta_1\equiv \ldots\equiv \beta_m\equiv 0\pmod{2}$.
Corollary 3. Every positive integer of the form $4k+3$ has a prime divisor $p$ of the same form, such that $\nu_p(n)$ is odd. In particular, no positive integer of the form $4k+3$ can be represented as the sum of two squares.
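Corollary 2 is easy to test against brute force with a short script; `is_sum_of_two_squares_by_factoring` below implements the parity condition on prime factors $q\equiv 3\pmod 4$ by trial division (a sketch, allowing $0$ as one of the squares):

```python
from math import isqrt

def is_sum_of_two_squares_bruteforce(n):
    # Search a with a^2 <= n and check whether n - a^2 is a perfect square.
    return any(isqrt(n - a*a)**2 == n - a*a for a in range(isqrt(n) + 1))

def is_sum_of_two_squares_by_factoring(n):
    """Corollary 2: every prime factor q ≡ 3 (mod 4) must occur to an even power."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            if d % 4 == 3 and e % 2 == 1:
                return False
        d += 1
    # leftover prime factor (exponent 1), if any
    return n % 4 != 3

for n in range(1, 500):
    assert is_sum_of_two_squares_bruteforce(n) == is_sum_of_two_squares_by_factoring(n)
print("criterion matches brute force up to 500")
```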
Isn't this the most epic twist on a trivial argument you have ever seen? :D |
The Nature of Differentials and Infinitesimals | You seem to be adding some context to the notation that was never meant to be there.
$\frac{\mathrm{d}}{\mathrm{d}x}$ is just an operator. $\frac{\mathrm{d}^2}{\mathrm{d}x^2}$ is just the notation used when operating twice.
In the language of infinitesimals, you can consider
$\frac{\mathrm{d}y}{\mathrm{d}x}$ to be the infinitesimal change in $y$ and how it compares to $x$ (in a ratio).
I took a course in my second year of University which addressed the differential, differential forms (one-forms, two-forms, etc) and wedge products. It's a slippery slope, but you might want to look at a textbook from a Calculus III course. I can't share mine, as my Professor was the author (I'm not sure if I'm able to distribute this book). |
Simulate unfair coin with a fair random generator | Run your random number generator until it produces $1$. The probability that you see no $0$s or an even number of $0$s before the first $1$ is $\frac 2 3$. The probability that you see an odd number of $0$s before the first $1$ is $\frac 1 3$. |
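In Python this looks as follows (a sketch; the fair generator is modelled by fair bits from `random`, and the exact geometric-series value is checked alongside a simulation):

```python
import random

def biased_third(rng):
    """Return 1 with probability 1/3, using only fair bits from rng."""
    zeros = 0
    while True:
        if rng.randint(0, 1) == 1:
            return 1 if zeros % 2 == 1 else 0   # odd number of 0s -> probability 1/3
        zeros += 1

# Exact check: P(odd # of zeros before first 1) = sum over odd k of (1/2)^(k+1) = 1/3
p_exact = sum(0.5 ** (k + 1) for k in range(1, 60, 2))
print(p_exact)

rng = random.Random(0)
trials = 100_000
freq = sum(biased_third(rng) for _ in range(trials)) / trials
print(freq)      # close to 1/3
```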
Variance of $X$ if $X_i\sim \operatorname{Ber}((\frac{r-1}{r})^m)$ | Is $X = \sum_{i=1}^r X_i$? Then
$$ X^2 = \sum_{i=1}^r \sum_{j=1}^r X_i X_j = \sum_{i=1}^r X_i^2 + 2 \sum_{i=1}^r \sum_{j = i+1}^r X_i X_j$$
and expected value of a sum is sum of the expected values. |
Converse of mean value theorem | It is not true. Let $f(t)=t^3$. Then $f'(0)=0$, but $\frac{f(y)-f(x)}{y-x}$ is never $0$. |
solve a system of equations by matrices | You can write this system as a matrix (assuming $p$ is known - if not, it will not be a system of linear equations):
$$\underbrace{\begin{bmatrix}2 & 3 & -1 \\ p & p+1 & -p \\ 1 & 1 & p \end{bmatrix}}_A \underbrace{\begin{bmatrix}x \\ y \\ z \end{bmatrix}}_v = \underbrace{\begin{bmatrix}p \\ 1 \\ 2\end{bmatrix}}_b$$
So the solution is $$v = A^{-1} b,$$ so all you have to do is invert $A$ and multiply the inverse of $A$ with $b$ in order to get $v$. This can be done most easily via the Gaussian algorithm if you have to do it by hand with no other means. |
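As a sketch in Python, via Cramer's rule rather than inversion or elimination, and taking $p=1$ for concreteness (my own choice; the system is only linear once $p$ is fixed):

```python
from fractions import Fraction

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(A, b):
    """Solve A v = b by Cramer's rule (requires det(A) != 0)."""
    d = det3(A)
    sol = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]       # replace column j by the right-hand side
        sol.append(Fraction(det3(M), d))
    return tuple(sol)

p = 1   # chosen for illustration
A = [[2, 3, -1], [p, p + 1, -p], [1, 1, p]]
b = [p, 1, 2]
print(solve3(A, b))   # (Fraction(-3, 1), Fraction(3, 1), Fraction(2, 1))
```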
On Negative Lengths And Positive Hypotenuses In Trigonometry | The trigonometric functions are defined for angles between 0 and 90° using triangles. I assume you understood this idea.
For multiple reasons one wanted to extend the functions to other angles, so one has to come up with a creative idea.
If you plot the triangles on the unit circle, you see that the trigonometric functions have something to do with the x and y coordinates of the points on the unit circle.
More concretely: for all angles between 0 and 90° we have that
$\sin(\alpha) = y$, where $x$ and $y$ are the coordinates of the point on the unit circle.
Then people just generalized this rule to every point on the unit circle. For degrees above $90$ the definition no longer has a direct connection with the triangles; it's just an extension.
Hence we don't need any triangles with negative lengths. |
Limit at infinity of $\frac{1}{x^{n-1}}$? | Is your question about $\displaystyle\lim_{x\to +\infty} \frac x {x^n}$ being zero, or about $\displaystyle\frac \infty \infty$ being zero? The first one is true (for $n > 1$), the other one is indeterminate. Because for instance $\displaystyle\lim_{x\to +\infty} \frac {x^{2n}} {x^n} = +\infty$ is also of the form $\displaystyle\frac \infty \infty$.
The point is that $\displaystyle\frac \infty \infty$ is just a notation for a type of limits and not an equality with the previous term. |
How many 6 character passwords are possible if letters cannot be repeated but digits CAN be? | Edited following Joffan's input
Here is how I would approach this; I am also pretty new to combinatorics.
For the first space you have 26 options, since the password is NOT case sensitive.
(26 options)(..)(..)(..)(..)(..)
Now for the remaining five slots you would have six possible cases.
$$ \begin{array}{|c|c|} \hline
\text{# Letters (after first)} & 5 & 4 & 3 & 2 & 1 & 0 \\ \hline
\text{# Numbers} & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline
\end{array}$$
There is another layer to think about: say, in the case of three letters and two numbers, you now have five slots and choose three of them to place letters, so you have ${5 \choose 3}$ ways of picking positions for the letters. For each such choice you would have 25 options for the first letter slot (since one letter is taken by the first character), 24 options for the second and 23 options for the third; as for the two number slots, you would have $10\cdot 10$ options.
I would then take a guess that the final number would be
$26 \cdot \left({5 \choose 5}
\cdot \frac{25!}{(25-5)!} \cdot 10^0 + {5 \choose 4}
\cdot\frac{25!}{(25-4)!} \cdot 10^1 + {5 \choose 3}
\cdot\frac{25!}{(25-3)!} \cdot 10^2 +{5 \choose 2}
\cdot\frac{25!}{(25-2)!} \cdot 10^3 +{5 \choose 1}
\cdot\frac{25!}{(25-1)!} \cdot 10^4 +{5 \choose 0}
\cdot\frac{25!}{(25)!} \cdot 10^5\right)$
or rather
$26 \cdot \sum_{k=0}^{5}\left({{5 \choose k}
\cdot\frac{25!}{(25-k)!}} \cdot 10^{5-k}\right)$.
And as a final summary, since the multiplicative factor of 26 distributes to all summands, we end up with
$$\sum_{k=0}^{5}\left({{5 \choose k}
\cdot\frac{26!}{(25-k)!}} \cdot 10^{5-k}\right)$$
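The final sum is quick to evaluate with Python's `math.comb` and `math.perm`, which also confirms that distributing the factor of 26 changes nothing:

```python
from math import comb, factorial, perm

# Direct form: 26 * sum over k letters (after the first) and 5-k digits
total = 26 * sum(comb(5, k) * perm(25, k) * 10 ** (5 - k) for k in range(6))

# Simplified form after distributing the 26: 26 * 25!/(25-k)! = 26!/(25-k)!
total_simplified = sum(comb(5, k) * factorial(26) // factorial(25 - k) * 10 ** (5 - k)
                       for k in range(6))

assert total == total_simplified
print(total)   # 1110345600
```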
This is my first time posting so I know my answer is poorly formatted, I cannot fully verify my answer but this is what I would turn in if I had to give it an attempt. I would love to get some feedback on this as I recently started self learning combinatorics. |
Coefficients of power series $\sin(x e^x)$ | Using the power series for $e^x$
\begin{eqnarray*}
xe^x = x+x^2+ \frac{x^3}{2} + \cdots
\end{eqnarray*}
The first couple of terms in the power series for $\sin x$ are
\begin{eqnarray*}
\sin x = x- \frac{x^3}{6} + \cdots
\end{eqnarray*}
Now substitute ...
\begin{eqnarray*}
\sin ( x e^{x})= x+x^2+ \frac{x^3}{2} + \cdots- \frac{1}{6} \left(x+x^2+ \frac{x^3}{2} + \cdots \right)^3+ \cdots
\end{eqnarray*}
We already have the first two terms ... but we shall calculate the third ... to be on the safe side
\begin{eqnarray*}
\sin ( x e^{x})= x+x^2+ \left( \frac{1}{2}-\frac{1}{6}\right) x^3+ \cdots
\end{eqnarray*}
So the coefficients of the first two (non zero) terms in the expansion are $\color{red}{1}$ & $\color{red}{1}$ ... & $\color{green}{\frac{1}{3}}$ if you want the third. |
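You can mechanize the substitution by composing truncated power series with exact rational coefficients; a short self-contained sketch:

```python
from fractions import Fraction

def mul(p, q, order):
    """Multiply two truncated power series (coefficient lists) mod x^order."""
    r = [Fraction(0)] * order
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < order:
                r[i + j] += a * b
    return r

ORDER = 4
# x * e^x = x + x^2 + x^3/2 + ...  (coefficients of x^0..x^3)
u = [Fraction(0), Fraction(1), Fraction(1), Fraction(1, 2)]

# sin(u) = u - u^3/6 + ...; u^5 and beyond contribute nothing below x^4
u3 = mul(mul(u, u, ORDER), u, ORDER)
sin_u = [a - b / 6 for a, b in zip(u, u3)]

print(sin_u)   # coefficients 0, 1, 1, 1/3
```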
Paley-Zygmund inequality. | $X1_{X <\lambda \mathbb E X} \leq \lambda (\mathbb EX) 1_{X <\lambda \mathbb EX}$. Now take expectation on both sides and use the fact that $P(X <\lambda \mathbb E X) \leq 1$
Note that the following is also correct:
$X1_{X \leq \lambda \mathbb E X} \leq \lambda (\mathbb EX) 1_{X \leq \lambda \mathbb EX}$; take expectation on both sides and use the fact that $P(X \leq \lambda \mathbb E X) \leq 1$ to get $\mathbb E X1_{X \leq \lambda \mathbb E X} \leq \lambda (\mathbb EX)$. |
What are the roots of the polynomial? | It is true. Let $D=\operatorname{diag}(a_{k+1},\ldots,a_n)$ and $A=\pmatrix{P&R\\ R^T&S}$ where $S$ has the same size as $D$. Since both $A$ and $\operatorname{diag}(a_1,\ldots,a_n)-A$ are positive definite, $P$ and $D-S$ are positive definite. Now, by using Schur complement, we obtain
\begin{aligned}
q(x)
&=\det\left((xI_k\oplus D)-A\right)\\
&=\det\pmatrix{xI-P&-R\\ -R^T&D-S}\\
&=\det(D-S)\det\left(xI-P-R(D-S)^{-1}R^T\right).
\end{aligned}
Therefore $q(x)=0$ if and only if $x$ is an eigenvalue of the positive definite matrix $P+R(D-S)^{-1}R^T$. It follows that all roots of $q(x)=0$ are positive. |
What does $|mK_V|=|M|+F$ mean for linear system $|mK_V|,|M|$? | To answer the questions in a different order:
2) You're right that in general the base locus of a linear system could be higher-codimensional; it's not hard to come up with examples. The point here is that Reid has written: "Let V′→V be a resolution of the base locus of $|mK_V|$... I can replace V by V′...". That is, he has blown up any components of the base locus that aren't divisors. Then since $K_{V'} = \pi^* K_V + aE$ for some nonnegative $a$ (assuming $V$ is canonical), the base locus of $|mK_{V'}|$ will be a divisor.
1) One way to say this is that every effective divisor in the linear system $|mK_V|$ can be written uniquely in the form $D+F$, where $D \in |M|$. Another way to say it is that we have an equality of line bundles $mK_V = M \otimes O_V(F)$; since the latter bundle has a unique global section (this is exactly what fixed part means) we get an isomorphism between sections of $mK_V$ and those of $M$, by tensoring with the unique section of $O_V(F)$.
3) Whenever you have a free linear system $|M|$ it defines a map $\phi$ to projective space such that $\phi^* O(1) = M$. Now just plug this into the expression $mK_V = M \otimes O_V(F)$, and (confusingly) switch to additive notation! |
Let $A$ a set and suppose for each $x\in A$ exists a subset $G_x$ of $A$ such that $x\in G_x$. Prove $A=\bigcup_{x\in A}G_x$ | Let $x\in A$. By hypothesis $x\in G_{x}$ and $G_{x}\subseteq A$, so in particular $x\in G_{x}\subseteq\displaystyle\bigcup_{x\in A}G_{x}$, and hence $x\in\displaystyle\bigcup_{x\in A}G_{x}$. This is true for all $x\in A$, and hence $A\subseteq\displaystyle\bigcup_{x\in A}G_{x}$.
Conversely, let $a\in\displaystyle\bigcup_{x\in A}G_{x}$. Then $a\in G_{x}$ for some $G_{x}$ with $G_{x}\subseteq A$, so $a\in A$; this shows $\displaystyle\bigcup_{x\in A}G_{x}\subseteq A$. |
Disprove uniform convergence of sequence of functions $f_n: \mathbb{R} \to \mathbb{R}^2$, $f_n(x) = (\sin\frac{x}{n}, \cos\frac{x}{n})$ | $$\lim_{n\to\infty}\left\lVert f_n - f \right\rVert = \lim_{n\to\infty}\left\lVert (\sin\frac{x}{n}, \cos\frac{x}{n}) - (0, 1) \right\rVert$$
This is incorrect. The right hand-side is the pointwise limit, evaluated at some particular $x \in \mathbb{R}$, which we already know is $0$. But the usual definition for $||f_n - f||$ is $$\sup_{x \in \mathbb{R}} |f_n(x) - f(x)|,$$ the supremum over all possible $x$.
Your mission is to find an $\epsilon$ so that, given an arbitrarily large $n$, there is a point $x \in \mathbb{R}$ so that $|f_n(x) - f(x)| > \epsilon$. (Here $f$ is the constant function $f(x) = (0,1)$ for every $x$.) This shows that the supremum is greater than $\epsilon$. This should be easy, since the image of each $f_n$ is the unit circle. If convergence were uniform, the images would be sets that are shrinking closer and closer to $(0,1)$. |
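A small script illustrates why the sup norm cannot shrink: at $x = n\pi$ the point $f_n(x)=(\sin\pi,\cos\pi)=(0,-1)$ is always at distance $2$ from $(0,1)$, no matter how large $n$ is:

```python
import math

def f_n(n, x):
    return (math.sin(x / n), math.cos(x / n))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

limit = (0.0, 1.0)
# At x = n*pi, f_n(x) = (sin pi, cos pi) = (0, -1), at distance 2 from (0, 1),
# so sup_x |f_n(x) - f(x)| does not tend to 0.
for n in (1, 10, 1000):
    x = n * math.pi
    print(n, dist(f_n(n, x), limit))   # always (about) 2.0
```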
Find the determinant of a symmetric matrix | A hint, a quick computer algebra calculation gives
for the determinant of the matrix $A_n$
\begin{align}
\text{det}A_1 &= +x_1y_1\\
\text{det}A_2 &= -x_1y_2(x_1y_2-x_2y_1)\\
\text{det}A_3 &= +x_1y_3(x_1y_2-x_2y_1)(x_2y_3-x_3y_2)\\
\text{det}A_4 &= -x_1y_4(x_1y_2-x_2y_1)(x_2y_3-x_3y_2)(x_3y_4-x_4y_3)\\
\text{det}A_5 &= +x_1y_5(x_1y_2-x_2y_1)(x_2y_3-x_3y_2)(x_3y_4-x_4y_3)(x_4y_5-x_5y_4)
\end{align}
So this suggests to show first the formula for $\text{det}A_2$, and then to somehow show the recursion
\begin{equation}
y_n \text{det}A_{n+1} = -y_{n+1}(x_ny_{n+1}-x_{n+1}y_n) \text{det}A_n,
\quad\text{with}\quad n > 2.
\end{equation}
Then one has to see what kind of coefficients $x_k$ and $y_k$ one has. |
Convergence or divergence of $ \sum \frac{(2n)!}{2^{2n}(n!)^2}$ using Raabe's Test | This does not use any test to show divergence.
Consider $$a_n=\frac{(2n)!}{2^{2n}(n!)^2}$$ Take logarithms $$\log(a_n)=\log((2n)!)-2n\log(2)-2\log(n!)$$ and use Stirling approximation for large values of $p$ $$\log(p!)=p (\log (p)-1)+\frac{1}{2} \left(\log (2 \pi )+\log
\left({p}\right)\right)+O\left({\frac{1}{p}}\right)$$ which makes $$\log(a_n)=-\frac{1}{2} \left(\log \left({n}\right)+\log (\pi
)\right)+O\left(\frac{1}{n}\right)$$ Continuing with Taylor $$a_n=e^{\log(a_n)}=\frac 1{\sqrt{\pi n}}+O\left(\frac{1}{n^{3/2}}\right)$$ By comparison with $p$-series, then $\sum a_n$ diverges.
You also could consider $$y=\sum_{n=0}^\infty \frac{(2n)!}{2^{2n}(n!)^2} x^n$$ and recognize that this is the infinite Taylor expansion of $y=\frac{1}{\sqrt{1-x}}$ and think about what is going on when $x\to 1$.
Edit
You could notice that the problem comes from the $\color{red}{2}$. Considering
$$b_n=\frac{(2n)!}{k^{2n}(n!)^2}$$ we should get $$\frac{b_{n+1}}{b_n}=\frac{2(2n+1)}{k^2 (n+1)}$$ which is $<1$ as soon as $k >2$.
Eventually, you could recognize that
$$y=\sum_{n=0}^\infty \frac{(2n)!}{k^{2n}(n!)^2} x^n=\frac{k}{\sqrt{k^2-4 x}}$$ making $$\sum_{n=0}^\infty \frac{(2n)!}{k^{2n}(n!)^2}=\frac{k}{\sqrt{k^2-4 }}$$ |
When do two real polynomials define the same real hypersurface? | Upon emailing an expert, I now realize that there is a real nullstellensatz, and that the relevant notion to answer both questions is to use the notion of a real radical, see for instance: https://en.wikipedia.org/wiki/Real_radical. |
Notation for a sum with restricted set of indices | $\sum\limits_{\substack{i = 1 \\ i \ne j}}^n a_i$ |
Markov chain, probability of n people being infected after t periods. | Let $p_k$ denote the probability that $k$ people are affected at time $T$. Clearly then, the probability that at least $k$ people are affected is given by $\sum_{i = k}^n p_i$, so knowing the $p_i$ is sufficient for both questions.
Let's fix some $1 \le k \le n$ and compute $p_k$. We need to propagate the virus exactly $k-1$ times over $T$ periods, so
- there are $\binom{T}{k-1}$ ways to choose which steps will be the ones propagating,
- propagating steps must each carry the extra weight of $p$, i.e. $p^{k-1}$ total weight,
- all steps must carry the weight of $1-q$ to make sure there is no vaccine, i.e. a total weight of $(1-q)^T$.
Thus the total amount is $$p_k = \binom{T}{k-1} p^{k-1} (1-q)^T.$$
All that is assuming the event of creating a vaccine is a separate event from the disease propagation, i.e. at every step,
- a vaccine is invented with probability $q$,
- the disease propagates with probability $(1-q)p$,
- the disease does not propagate with probability $(1-q)(1-p)$.
If instead, the propagation probability is $p$ and not propagation is $1-p-q$, the numbers would be different. |
Proving that the sum of delta functions is a measure on the Borel $\sigma$-algebra | In the second statement you proved only one direction.
Assume now that $\lim_n|x_n|=+\infty$.
If there were a bounded interval $I$ such that $\mu(I)=+\infty$, then there would exist a subsequence $y_n:=x_{k_n} \in I$.
So $y_n$ is bounded and, by Bolzano–Weierstrass, it has a convergent subsequence $y_{n_l} \to s$, so that $|y_{n_l}| \to |s|<+\infty$ as $l \to +\infty$.
From this you have a contradiction, since $|x_n| \to +\infty$.
For the first statement you can just interchange the summation as mentioned in the comments. |
Given the following condition, show that $|\operatorname{trcl}$(y)$| < \kappa$ | As $y \subseteq H(\kappa)$, $\vert \operatorname{trcl}(x) \vert < \kappa$ for every element $x \in y$. $\operatorname{trcl}(y) = y \cup \bigcup_{x \in y} \operatorname{trcl}(x)$, thus it follows by regularity of $\kappa$ that $\vert \operatorname{trcl}(y) \vert = \vert y \vert + \sup_{x \in y} \vert \operatorname{trcl}(x) \vert < \kappa$. |
How to simplify a multiplication of several summations? | The product can be written in closed form, because the first two sums are geometric series
$$\sum_{n=0}^\infty \left(\frac{w}{2i}\right)^n = \frac{1}{1-(w/2i)}=\frac{2}{2+iw}$$
$$\sum_{n=0}^\infty \left(\frac{w}{i}\right)^n=\frac{1}{1+iw}$$
and the third sum is related to $\cos w$:
$$\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} w^{2n-1}=
\frac{1}{w}\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!} w^{2n}=\frac{\cos w}{w}$$
Therefore the result is
$$-\frac{2\sin i\cos w}{(2+iw)(1+iw)w} = \left(-\frac{1}{w} + \frac{3i}{2} + \frac{9w}{4}+O(w^2)\right)\sin i$$ |
Decryption in the Merkle-Hellman cryptosystem | You don't have the complete secret key, because the superincreasing sequence cannot be directly computed from the public key, even with known $b$ (multiplier) and $m$ (modulus); in our case $b^{-1} \pmod{m} = 6479$. But the Shamir paper (see here) does give some ideas about that.
How I solved the message is by completely ignoring the private key components and by building, using the public key only, a table (with a programme, of course) that computes the encryption of all 3-letter combinations; a programme can do that in under a second, and then match the observed cipher values.
There is no randomisation, so we just have a public key block cipher with a very small blocksize of 15 bits. This yielded FOR MUL A S TOL EN! as the five plain blocks BTW. |
Proof regarding Squarefree numbers | The computer programmer's answer: set a counter to zero; for each $a$ from 1 to 201 do the following: for each $m$ with $m\ge2$ and $m^2\le a$, see whether $a$ is divisible by $m^2$. If $a$ isn't divisible by any of those numbers $m^2$, add 1 to the counter. When you've gone through all the values of $a$, the counter holds the answer.
It's not pretty, there's plenty of room for optimization, and it won't teach you any Number Theory, but it will get you the answer. |
What is the area of the triangle having $z_1$, $z_2$ and $z_3$ as vertices in Argand plane? | Let $z_j = x_j + iy_j$, $j = 1, 2, 3$. The area of the triangle is given by
\begin{align*}
\frac{1}{2} \begin{vmatrix}
1 & x_1 & y_1 \\
1 & x_2 & y_2 \\
1 & x_3 & y_3
\end{vmatrix}&= \frac{1}{2} \begin{vmatrix}
1 & x_1+iy_1 & y_1 \\
1 & x_2+iy_2 & y_2 \\
1 & x_3+iy_3 & y_3
\end{vmatrix}\\
&= \frac{1}{4i} \begin{vmatrix}
1 & z_1 & z_1-z_1^*\\
1 & z_2 & z_2-z_2^* \\
1 & z_3 & z_3-z_3^*
\end{vmatrix}\\
\end{align*}
Now expand via first column to get the required expression. |
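As a numerical cross-check of the kind of formula the expansion yields (the cyclic sum $\frac{1}{2}\bigl|\operatorname{Im}\sum_j \bar z_j z_{j+1}\bigr|$, which is one common way to write the result), compared against the real shoelace formula; the sample triangle is my own choice:

```python
# Verify that the complex-number route agrees with the shoelace formula,
# using Im(conj(z_j) * z_{j+1}) = x_j y_{j+1} - y_j x_{j+1}.
z = [1 + 1j, 4 + 2j, 2 + 5j]   # sample vertices

def shoelace(pts):
    n = len(pts)
    s = sum(pts[i].real * pts[(i + 1) % n].imag
            - pts[(i + 1) % n].real * pts[i].imag for i in range(n))
    return abs(s) / 2

def complex_area(zs):
    n = len(zs)
    s = sum((zs[i].conjugate() * zs[(i + 1) % n]).imag for i in range(n))
    return abs(s) / 2

print(shoelace(z), complex_area(z))   # both give 5.5
```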
Sets and the principle of inclusion and exclusion | Claim: There is one element that every set has in common.
Proof by contradiction. Suppose there isn't such an element.
Fix a set $A_1$.
For each element $a_{1,i} \in A_1$, let $ A_{1,i}$ denote the sets (not including $A_1$) which contain $a_{1,i}$.
The $A_{1,i}$ are disjoint from each other, so $\sum |A_{1,i}| = 1985 - 1$.
Fix an element $a_{1,i} \in A_1 $.
By the assumption, $|A_{1,i} | < 1984$, and so there is another $j\neq i$ such that $ a_{1,j} \in A_1$ and $|A_{1,j}| > 0 $.
Let $B_k \in A_{1,j}$, where $B_k$ is one of the original sets with 45 elements.
We will prove by contradiction that $|A_{1,i}| \leq 44$.
Suppose not, so $|A_{1,i}| \geq 45$. Then $B_k \backslash \{ a_{1,j}\} $ has 44 elements, and doesn't contain $a_{1,i}$.
So $B_k$ cannot intersect the 45+ sets in $A_{1,i}$, which are distinct sets after excluding $a_{1,i}$, which is a contradiction.
This shows that $ |A_{1,i} | \leq 44$.
Coming back to the original claim, we have
$$1984 = \sum_{i=1}^{45} |A_{1,i} | \leq 45 \times 44 = 1980,$$
which is a contradiction. |
Does $\operatorname{div}\left(\nabla G +xG\right)=0\Longleftrightarrow \nabla G +xG=0$? | I'm going to use an argument along the lines of what @hOff proposed. In addition to the smoothness of $G$ I am going to suppose that it decays rapidly at infinity so that all of the following calculations are justified. For example, we could assume that $G$ is Schwartz class.
As the first order of business we expand your equation as
$$
\Delta G(x) + x \cdot \nabla G(x) + d G(x) =0.
$$
To attack this problem we're going to use the Fourier transform, which we define as
$$
\hat{f}(\xi) = \int_{\mathbb{R}^d} f(x) e^{-2\pi i x\cdot \xi} dx.
$$
We have the following useful identities:
$$
\widehat{\Delta f}(\xi) = -4 \pi^2 |\xi|^2 \hat{f}(\xi) \\
\widehat{x \cdot \nabla f}(\xi) = -\xi \cdot \nabla \hat{f}(\xi) - d \hat{f}(\xi).
$$
These both follow by integrating by parts in the formula for the Fourier transform. For example,
$$
\widehat{x \cdot \nabla f}(\xi) = \sum_{j=1}^d \int_{\mathbb{R}^d} x_j \partial_j f(x) e^{-2\pi i x\cdot \xi} dx \\
= -\sum_{j=1}^d \int_{\mathbb{R}^d} f(x) \partial_j( x_j e^{-2\pi i x\cdot \xi}) dx \\
= -\sum_{j=1}^d \int_{\mathbb{R}^d} f(x) [x_j(-2\pi i \xi_j) + 1] e^{-2\pi i x\cdot \xi} dx \\
= - d \hat{f}(\xi) - \sum_{j=1}^d \xi_j \int_{\mathbb{R}^d} f(x)(-2\pi i x_j)e^{-2\pi i x\cdot \xi} dx \\
= - d \hat{f}(\xi) - \sum_{j=1}^d \xi_j \partial_{\xi_j} \int_{\mathbb{R}^d} f(x)e^{-2\pi i x\cdot \xi} dx \\
= - d \hat{f}(\xi) - \xi \cdot \nabla \hat{f}(\xi).
$$
Now we apply the Fourier transform to our equation for $G$ to find that
$$
0= - 4\pi^2 |\xi|^2 \hat{G}(\xi) -\xi \cdot \nabla \hat{G}(\xi) - d \hat{G}(\xi) + d \hat{G}(\xi) \\
= - 4\pi^2 |\xi|^2 \hat{G}(\xi) -\xi \cdot \nabla \hat{G}(\xi).
$$
This is a first order PDE that we can solve using the method of characteristics. Suppose for the moment that $\xi_0 \in \mathbb{R}^d$ is such that $|\xi_0|=1$. Consider the function $g(t) = \hat{G}(e^t \xi_0)$. We compute
$$
\frac{d}{dt} g(t) = \nabla \hat{G}(e^t \xi_0) \cdot e^t \xi_0 = -4\pi^2 |e^t \xi_0|^2 \hat{G}(e^t \xi_0) = -4\pi^2 e^{2t} g(t),
$$
which implies that (using standard ODE theory)
$$
g(t) = g(0) \exp\left(-2\pi^2(e^{2t} -1) \right)
$$
and hence that
$$
\hat{G}(e^t \xi_0) = \hat{G}(\xi_0) \exp\left(-2\pi^2(e^{2t} -1) \right).
$$
Now, for any $\xi \neq 0$ we write $\xi = e^t \xi_0$ for $\xi_0 = \xi/|\xi|$ and $|\xi| = e^t$. Plugging in above then shows that
$$
\hat{G}(\xi) = \hat{G}\left(\frac{\xi}{|\xi|} \right)\exp\left(-2\pi^2(|\xi|^2 -1) \right).
$$
Since we have assumed that $G$ is integrable, we know that $\hat{G}$ is continuous, and so we have derived the most general form for $\hat{G}$, namely
$$
\hat{G}(\xi) = K\left(\frac{\xi}{|\xi|} \right) e^{-2\pi^2 |\xi|^2}
$$
for some continuous function $K$. However, the exponential term is unity at $\xi=0$, so the continuity of $\hat{G}$ requires that actually $K$ is constant.
Hence $\hat{G}(\xi) = Ke^{-2\pi^2 |\xi|^2}$, which then implies that
$$
G(x) = \frac{K}{(2\pi)^{d/2}} e^{-|x|^2/2}.
$$
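As a quick numerical sanity check (not part of the proof), one can verify by finite differences that this Gaussian, here with $K=1$ and $d=2$, satisfies the expanded equation $\Delta G + x\cdot\nabla G + dG = 0$:

```python
import math

def G(x, y):
    # the claimed solution in d = 2, with the constant K = 1
    return math.exp(-(x * x + y * y) / 2)

def residual(x, y, h=1e-4):
    """Central-difference approximation of Delta G + x.grad G + d G at (x, y)."""
    lap = (G(x + h, y) + G(x - h, y) + G(x, y + h) + G(x, y - h) - 4 * G(x, y)) / h**2
    gx = (G(x + h, y) - G(x - h, y)) / (2 * h)
    gy = (G(x, y + h) - G(x, y - h)) / (2 * h)
    return lap + x * gx + y * gy + 2 * G(x, y)

print(abs(residual(0.7, -1.3)))   # small: zero up to discretisation error
```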
Let's now go back to your question of what assumptions have to be made on $G$. To make the above analysis valid we need that $G$, $\Delta G$, and $x\cdot \nabla G$ all decay fast enough to justify applying the Fourier transform. For example, we can assume that
$$
G, \Delta G, x \cdot \nabla G \in L^1(\mathbb{R}^d).
$$
A last remark: You should also be able to prove that $G$ is Gaussian if and only if $G$ is radial. One direction is trivial. For the other you assume that $G$ is radial and then rewrite the PDE as a second order ODE in $r$ and the resulting solution should be a Gaussian. |
Finding marginal PDF fy(y). Can someone check my work? | The domain can be equally represented as
$$ \{0 \le y \le 1, \ x \le y\} \cup \{y > 1,\ x \le 1 \} $$
You can prove this on your own by graphing the region.
Then we have
$$ \int_0^1\int_x^{\infty} f_{X,Y}(x,y) \ dy\ dx = \int_0^1 \int_0^y f_{X,Y}(x,y)\ dx\ dy + \int_1^{\infty}\int_0^1 f_{X,Y}(x,y)\ dx\ dy = 1 $$
We also know that
$$ \int_0^\infty f_Y(y)\ dy = 1 $$
Therefore
$$ f_Y(y) = \left\{ \begin{aligned} \int_0^y f_{X,Y}(x,y)\ dx &,\quad 0 \le y \le 1 \\ \int_0^1 f_{X,Y}(x,y)\ dx &,\quad y > 1 \end{aligned}\right. $$
As you probably realize, the final result should not have any dependence on $x$. |
The cancellation property for finite abelian groups | I think I've got it.
Let each $p_i$, $q_i \in \mathbb{N}$ be a power of a prime, not neccesarily distinct. $A$, $B$, $C$ are finite and Abelian which means that $A\cong \mathbb{Z}_{p_1}\oplus \mathbb{Z}_{p_2}\oplus\cdots \oplus \mathbb{Z}_{p_k}$, $B\cong \mathbb{Z}_{p_{k+1}}\oplus \mathbb{Z}_{p_{k+2}}\oplus\cdots \oplus \mathbb{Z}_{p_{k+n}}$, and $C\cong \mathbb{Z}_{q_{1}}\oplus \mathbb{Z}_{q_{2}}\oplus\cdots \oplus \mathbb{Z}_{q_{m}}$ for some $k$, $n$, $m\in \mathbb{N}$. So,
$$A\oplus B\cong \mathbb{Z}_{p_1}\oplus\cdots\oplus\mathbb{Z}_{p_k}\oplus\mathbb{Z}_{p_{k+1}}\oplus \cdots\oplus \mathbb{Z}_{p_{k+n}}$$
$$\mathbb{Z}_{p_1}\oplus\cdots\oplus\mathbb{Z}_{p_k}\oplus\mathbb{Z}_{p_{k+1}}\oplus \cdots\oplus \mathbb{Z}_{p_{k+n}}\cong A\oplus C$$
$$\mathbb{Z}_{p_1}\oplus\cdots\oplus\mathbb{Z}_{p_k} \oplus\mathbb{Z}_{p_{k+1}}\oplus \cdots\oplus \mathbb{Z}_{p_{k+n}}\cong \mathbb{Z}_{p_1}\oplus\cdots\oplus\mathbb{Z}_{p_k} \oplus \mathbb{Z}_{q_{1}}\oplus \cdots \oplus \mathbb{Z}_{q_{m}}$$
Since the number of terms and orders of each of the terms in the products on each side of the isomorphism are unique, it must be true that $n=m$ and it is possible to re-arrange the terms on the right so that $q_j=p_{k+j}$ for $1\leq j\leq n$, $j\in \mathbb{N}$. That means,
$$C\cong \mathbb{Z}_{p_{k+1}}\oplus \cdots\oplus \mathbb{Z}_{p_{k+n}}\cong B$$ and we are done. |
Maximum determinant of a non-negative matrix, given the sum of all entries | Here is an alternative solution:
We have Hadamard's inequality, $| \det M | \le \prod_k \|M e_k\|$.
We see that $\|Me_k\|= \sqrt{\sum_i M_{ik}^2 } $ and we can maximise $f(M)=\prod_k \sum_i M_{ik}^2$ subject to $\sum_i \sum_j M_{ij} = A$ and $M_{ij} \ge 0$. Let $M$ be a maximiser (which exists since the feasible set
is compact).
It is clear that by taking diagonal entries with the value ${A \over n}$ that
the $\max$ is $>0$. In particular, all columns of the maximising $M$ are
non zero.
Note that each column contains exactly one non zero entry. To see this,
suppose there are two non zero entries $M_{i_1j}, M_{i_2j}$, then notice
that if we let $\phi(t) = (M_{i_1j}+t)^2 + (M_{i_2j}-t)^2 = \phi(0)+ 2t(M_{i_1j} - M_{i_2j}) + 2 t^2$, then there is always some $t$ such that
$M_{i_1j}+t \ge 0, M_{i_2j}-t \ge 0$ and $\phi(t) > \phi(0)$, which contradicts
the maximality of $M$.
Hence the problem reduces to maximising $g(x) = x_1^2 \cdots x_n^2$ subject
to $\sum_k x_k = A$ and $x_k \ge 0$. Then at a solution $x$ we have
$x_1=\cdots =x_n$. To see this, suppose $x_1 >x_2$ and let
$\eta(t) = (x_1-t)^2(x_2+t)^2 = (x_1 x_2+ t(x_1-x_2) - t^2)^2$. Then
$\eta'(0) = 2 x_1 x_2 (x_1-x_2) >0$ and so $g$ can be increased, which
contradicts maximality. Hence we obtain $x_k ={A \over n}$.
Hence we have $|\det M| \le ({A \over n})^n$, and by choosing a diagonal
matrix with entries ${A \over n}$, we see that we have equality. |
Dual Space Isomorphism | You're on the right track. Let's say $\beta = \{ v_1, \dots , v_n \}$ is a basis for $V$. Then we define $f_j: V \rightarrow \mathbb{F}$ by $f_j(v_i) = \delta_{ij}$ extended linearly, and set $\beta^* = \{f_1, \dots, f_n\}$. Now, show that $\beta^*$ forms a basis for $V^*$ and you're done since that shows the dimensions of $V$ and $V^*$ are the same (which shows $V$ and $V^*$ are isomorphic). So, what do you need to show $\beta^*$ is a basis ?
linear independence (LI)
spanning
I can show you both, but I stop here for now. Ok, so for LI consider scalars $c_1, \dots , c_n \in \mathbb{F}$ for which:
$$ c_1f_1+ \cdots + c_nf_n = 0$$
this is a sum of $\mathbb{F}$-valued functions on $V$. In particular, we can evaluate at $v_i$ to find:
$$ c_1f_1(v_i)+ \cdots +c_jf_j(v_i)+ \cdots + c_nf_n(v_i) = 0(v_i) = 0$$
which yields $c_i=0$ for arbitrary $i$ since only the $i=j$ term survives. But, this shows that $c_1 = 0, \dots , c_n=0$ hence $\beta^*$ is LI. To prove spanning consider $\alpha: V \rightarrow \mathbb{F}$ and $v \in V$ where $v = x_1v_1+ \cdots + x_nv_n$,
$$ \alpha(v) = \alpha(x_1v_1+ \cdots + x_nv_n) = x_1 \alpha(v_1) + \cdots + x_n \alpha(v_n).$$
Now, here's the neat thing: $x_j = f_j(v)$ (prove it for yourself if you doubt me) thus,
$$ \alpha(v) = \alpha(x_1v_1+ \cdots + x_nv_n) = \alpha(v_1)f_1(v) + \cdots + \alpha(v_n)f_n(v) = \left( \alpha(v_1)f_1 + \cdots + \alpha(v_n)f_n \right)(v).$$
Therefore, $\alpha \in \text{span}(\beta^*)$ and this completes the proof that $\beta^*$ is a basis for $V^*$.
Let me just add a bit more on this construction. Notice, our construction depends on the choice of $\beta$. Fortunately, we can show the number of vectors in a basis is independent of the choice (that is, dimension is well-defined). Hence, this choice is harmless. That said, if we could construct an isomorphism which did not rely on such a choice then that isomorphism would be a canonical isomorphism. In finite dimensions, $V$ and $V^{**}$ are canonically isomorphic by the rule $\alpha(v) = \tilde{v}(\alpha)$ where $\tilde{v} \in V^{**}$ and $\alpha \in V^*$ and the isomorphism is $v \mapsto \tilde{v}$. In contrast, for $V$ and $V^*$ we have $G(v_i)=f_i$ extended linearly gives the isomorphism, but, this construction depends on our choice of $\beta$. However, if we have an inner-product $g: V \times V \rightarrow \mathbb{F}$ then $g$ allows the construction of an isomorphism of $V$ and $V^*$ independent of basis choice; $\Psi(v)(w) = g(v,w)$ for all $v,w \in V$ defined $\Psi(v) \in V^*$ and you can check $\Psi: V \rightarrow V^*$ is an isomorphism. |
Confidence Interval and Variance of Coefficient of Variation | Estimating CVs. The coefficient of variation (CV) $\kappa = \sigma/\mu.$ It can be
estimated by $\hat \kappa = K = S/\bar X,$ where $\bar X$ and $S$
are the sample mean and SD, respectively. For small $n,$ this estimate
is biased on the low side, but for moderate and large samples
the bias is small. Methods of finding confidence intervals (CIs)
for the CV depend on the nature of the underlying distribution.
Because the type of population distribution may be unknown, it may
be useful to use a nonparametric bootstrap CI for the $\kappa.$
Because the population may be skewed (especially right-skewed) in
practice, the bootstrap must anticipate skewness.
Because I found the literature on CIs for the CV to be partly
hidden behind dollar barriers, and partly poorly explained, I'm
wondering if bootstrap CIs may be the best solution for your application. I gave
two examples of bootstrap CIs below, one using a sample from a
normal population and one using a sample from a gamma population.
At least, you can compare these results with results from formulas
you may find in your Internet searches.
Bootstrap CIs. If we knew the distribution of $V = K - \kappa,$ we could find
bounds $L$ and $U$ cutting 2.5% from its lower and upper tails,
respectively to get $P(L < K - \kappa < U) = 0.95,$ from which
we would obtain the 95% CI $(K - U, K - L)$ for $\kappa.$
Not knowing the distribution of $V,$ we re-sample from our data
$X = (X_1, X_2, \dots, X_n).$ Iteratively we find re-samples
of size $n$ with replacement from $X,$ find $K^* = S^*/\bar X^*$
and then $V^* = K^* - \kappa^*$ for each re-sample, where
the observed CV $K_{obs}$ from the original sample $X$ is used
for $\kappa^*.$ Finally, we get $L^*$ and $U^*$ by cutting 2.5%
from each tail of the $V^*$'s, the 'bootstrapped' values of $V$,
and use these estimated bounds to get a 95% bootstrap CI.
Examples of Bootstrap CIs. As a demonstration, I use a sample $X$ of $n = 100$ from
$\mathsf{Norm}(\mu = 200, \sigma=25)$ with $\kappa = 0.125.$
In the outline above of the bootstrap procedure, $*$'s represented
quantities based on re-sampling. In the R program below we use .re
for the same purpose.
Note: It is important to understand that re-sampling does not
create additional information. Re-sampling exploits information in existing
data to do statistical analysis.
Normal. For the particular normal sample we used $K_{obs} = 0.118$, and
the 95% nonparametric bootstrap CI obtained is $(0.102, 0.135).$
Because bootstrap procedures involve random re-sampling, each run
of the program may give a slightly different CI, but not much
different with as many as $B = 10^5 = 100,000$ iterations.
x = rnorm(100, 200, 25)
k.obs = sd(x)/mean(x); k.obs
## 0.1180088
B = 10^5; v.re = numeric(B)
for(i in 1:B) {
x.re = sample(x, 100, repl=T)
k.re = sd(x.re)/mean(x.re)
v.re[i] = k.re - k.obs }
UL = quantile(v.re, c(.975,.025))
k.obs - UL
## 97.5% 2.5%
## 0.1018754 0.1350186
Gamma. This bootstrap procedure is called 'nonparametric' because it does
not assume any particular type of distribution for the data. A
second sample of size $n = 100$ was taken from the distribution
$\mathsf{Gamma}(shape=\alpha = 4, rate=\lambda=.1)$ with
$\kappa = \sqrt{\alpha}/\alpha = 1/2.$ This sample has $K = 0.507$
and the 95% nonparametric bootstrap CI is $(0.442, 0.579).$
A second run of the bootstrap program with the same data gave
the CI $(0.442, 0.580).$ |
Matrix algebra properties square of $Ax-b$ | We have
$$(Ax-b)^T\,(Ax-b) = \sum_i (\sum_j A_{ij}x_j - b_i)^2\,,
$$
so taking the derivative of the above w.r.t. $x_k$ and setting equal to $0$ we have
$$
0 = \sum_i 2(\sum_j A_{ij}x_j - b_i)(\sum_j A_{ij}\delta_{jk}) = \sum_i 2(\sum_j A_{ij}x_j-b_i)A_{ik} = 2(A^TAx-A^Tb)_k\,,
$$
i.e.
$$
x = (A^TA)^{-1}A^T b\,.
$$ |
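For illustration, a short NumPy sketch (random data of my own; it assumes $A$ has full column rank so that $A^TA$ is invertible) confirming that the gradient $2(A^TAx - A^Tb)$ vanishes at this $x$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

# solve the normal equations A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)

print(np.linalg.norm(A.T @ (A @ x - b)))   # ~ 0: the gradient vanishes
```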
trying to graph a function with x and e(constant ?) | On $[0,e)$, it is a line segment $$y=1-\frac{x}{e},$$
It has slope $-\frac1e$ and intercept $1$.
Also, this is an even function. On $(-e,0]$, it is a line segment $$y=1+\frac{x}{e},$$
It has slope $\frac1e$ and intercept $1$.
It is zero everywhere else.
Remark: $e$ is perhaps not a good choice of notation here, since it usually denotes Euler's number. |
Evaluating $\int_C e^{-z^2} dz$ as radius goes to infinity | Let $\phi=2\theta+\pi/2$. Then
\begin{align}
\left|\int_{C_R} e^{-z^2} dz\right|&=\left|\int_{-\pi/4}^0e^{-R^2\cos{2\theta}}e^{-iR^2\sin{2\theta}}iRe^{i\theta}d\theta\right|
\\
&=\frac1{2}\left|\int_{0}^{\pi/2}e^{-R^2\sin{\phi}}e^{iR^2\cos{\phi}}iRe^{\frac1{2}i(\phi-\pi/2)}d\phi\right|
\\
&\leqslant\frac{R}{2}\int_{0}^{\pi/2}e^{-R^2\sin{\phi}}d\phi
\\
&\leqslant\frac{R}{2}\int_{0}^{\pi/2}e^{-R^2\frac{2}{\pi}\phi}d\phi\tag1
\\
&=-\frac{\pi}{4R}e^{-R^2\frac{2}{\pi}\phi}\bigg|_{0}^{\pi/2}
\\
&=\frac{\pi}{4R}(1-e^{-R^2})
\\
&\to0 \quad\text{as }\:R\to\infty
\end{align}
$(1)$: by Jordan's inequality, $\:\sin x\geq\frac{2}{\pi}x\:$ for $x\in[0,\pi/2]$.
Hence
$$
\lim_{R \rightarrow \infty}\int_{C_R} e^{-z^2}dz=0
$$ |
For positive integers $n,k$ with $k\ge3$ prove $k^{n+k}>(n+k)^k$ | Hint.
From
$$
k^n k^k > (n+k)^k\to k^n > \left(\frac nk+1\right)^k
$$
and, since $e < 3 \le k$,
$$
\left(\frac nk+1\right)^k < e^n \le k^n.
$$ |
Finding an explicit bijection between two open intervals | Rather than finding a map taking $20$ to $21$ and $17$ to $76$,
you should find a map taking $20$ to $17$ and $21$ to $76$.
The scale factor is $\dfrac{76-17}{21-20}=59$, so the answer is $f(x)=59(x-20)+17$,
which I leave to you to simplify. |
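A two-line check of the finished map (with the simplification still left to you):

```python
f = lambda x: 59 * (x - 20) + 17

print(f(20), f(21))   # 17 76: the endpoints map to the endpoints
print(f(20.5))        # 46.5: an interior point stays interior
```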
Minimize sum of Non-Negative Functions Vrs Minimize sum of Square of Non-Negative Functions | Let $f_1(x) = \frac{1}{2} (x-10)^2$ and $f_2(x) = (x - 100)^2$. The solution to the first problem is
$$0 = x-10 + 2x - 200\implies x=70$$
meanwhile for the second problem we have
$$0 = (x-10)^2(x-10) + 2(x-100)^2(2x-200)\implies x = 82 - 36 2^{1/3} + 18\times 2^{2/3}$$ |
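A quick numerical confirmation of both stationarity conditions (pure Python, nothing assumed beyond the two functions above):

```python
# first problem: the derivative of f1 + f2 is (x-10) + 2(x-100), zero at x = 70
assert (70 - 10) + 2 * (70 - 100) == 0

# second problem: the derivative of f1^2 + f2^2 is (x-10)^3 + 4(x-100)^3;
# it vanishes at the stated closed form
x = 82 - 36 * 2 ** (1 / 3) + 18 * 2 ** (2 / 3)
print(x)                                    # ~ 65.216
print((x - 10) ** 3 + 4 * (x - 100) ** 3)   # ~ 0 (up to rounding)
```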
Values of $\left (1+\frac{1}{n}\right )^n$ on a calculator | Your theory is the only one that actually happens in the real world, where all machines have finite precision and cannot represent numbers sufficiently close to $1$, like $1+1/n$. There is no other plausible explanation for why the computed expression suddenly drops to $1$. |
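You can see the collapse directly in double precision (Python here, but any IEEE-754 calculator behaves analogously):

```python
n = 10 ** 15
print((1 + 1 / n) ** n)   # noticeably off from e = 2.718..., since 1 + 1/n is already rounded

n = 10 ** 16
print(1 + 1 / n == 1.0)   # True: 1 + 1/n rounds to exactly 1
print((1 + 1 / n) ** n)   # 1.0
```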
If a irreducible, stochastic, aperiodic matrix is not diagonalizable, can it converge on power method?? | The power method can be used to analyze matrices that are not diagonalizable. The way to do it is using a similar transformation, called Jordan decomposition (see here), where unlike the eigenvalue decomposition, Jordan decomposition always exists. |
Clarify why $\int_\eta^y \frac 1 {x-1} - \frac 1 {x+1} dx = \ln(|1-y|)-\ln(|1+y|)- \ln(|1-\eta|)+\ln(|1+\eta|)$ for $y \in (-1,1)$ and ?? | It seems that you are unfamiliar with derivatives of logarithms, namely that
$\int \frac{1}{x} \ dx = \log(x)$, or equivalently, $\frac{d}{dx} \log(x) = \frac{1}{x}$. (All my logs here are base $e$ and I am omitting constants of integration). So I will try to prove this result to you:
I assume you know about exponentials, for example $\frac{d}{dy} \left(-e^{-y}\right) = e^{-y}. (1)$
Now let $y = \log(x)$, so my objective is to prove $\frac{dy}{dx} = \frac{1}{x}$. Using the chain rule, the LHS of (1) is:
$\frac{d}{dy} \left(-e^{-y}\right) = \frac{dx}{dy} \frac{d}{dx} \left(-e^{-\log(x)}\right) = \frac{dx}{dy} \frac{d}{dx} \left(\frac{-1}{x}\right) = \frac{dx}{dy} \frac{1}{x^{2}}$
The RHS of (1) is: $e^{-y} = e^{-\log(x)} = \frac{1}{x}$. Comparing these expressions, we see that $\frac{dy}{dx} = \frac{1}{x}$ QED.
This is equivalent to $\int \frac{1}{x} \ dx = \log(x)$, or by making a substitution $x \mapsto x - a$, $\int \frac{1}{x-a} \ dx = \log(x-a)$ now this is the same form as the integrals in your question so hopefully you understand where the logarithms come from. The only non-continuous ($\Rightarrow$ non-differentiable) real point of $\log(x)$ is at $x = 0$ ($\log(x)$ isn't defined for $x<0$ anyway), hence in your question $y$ is taken to be in the open interval $(-1,1)$ to avoid the discontinuous point $1-1 = 0 = -1 + 1$.
(Important) Remark: Above I have used the fact that $\frac{dy}{dx} \equiv \frac{1}{\left(\frac{dx}{dy}\right)}$, this is non-trivial and in fact only holds for first order derivatives. |
Find the pedal equation of the ellipse $\frac {x^2}{a^2} + \frac {y^2}{b^2} = 1$ | The tangent to the given ellipse $~\dfrac {x^2}{a^2} + \dfrac {y^2}{b^2} = 1~,\tag1$ at a point $~(x,y)~$ on it, written in running coordinates $~(X,Y)~$, is $~\dfrac {xX}{a^2} + \dfrac {yY}{b^2} = 1~.\tag2$
Comparing the equation of the tangent of the ellipse with $~AX + BY + C = 0 ~,$ we have $~A=\frac {x}{a^2},~B=\frac {y}{b^2}~$ and $~C=-1~.$
Now the perpendicular distance of the tangent of the ellipse from $~(0,0)~$ is
$$p=\left|\frac {C}{\sqrt{A^2+B^2}}\right|=\left|\frac {-1}{\sqrt{\left(\frac {x}{a^2}\right)^2+\left(\frac {y}{b^2}\right)^2}}\right|\implies \frac 1{p^2}=\frac {x^2}{a^4} + \frac {y^2}{b^4}~.\tag3$$
We know that $$~r^2=x^2+y^2\implies r^2-b^2=x^2+(y^2-b^2)$$
$$\implies\frac{r^2-b^2}{a^2-b^2}=\frac{x^2+(y^2-b^2)}{a^2-b^2}=\frac{x^2-\frac{b^2}{a^2}x^2}{a^2-b^2}=\dfrac {x^2}{a^2}~.\qquad[\text{using equation $(1)$}]$$
Hence $~\dfrac {x^2}{a^2}=\dfrac{r^2-b^2}{a^2-b^2}~$.
Similarly, $~\dfrac {y^2}{b^2}=\dfrac{a^2-r^2}{a^2-b^2}~$.
Putting these two values in equation $(3)$, we have
$$\frac 1{p^2}=\frac {1}{a^2}\left(\frac{r^2-b^2}{a^2-b^2}\right) + \frac {1}{b^2}\left(\frac{a^2-r^2}{a^2-b^2}\right)$$
$$\implies \frac {a^2b^2}{p^2}=\frac {(r^2b^2-b^4)+(a^4-a^2r^2)}{a^2-b^2}=(a^2+b^2-r^2)$$
Hence the pedal equation of the ellipse $(1)$ is $$ \frac {a^2b^2}{p^2}=(a^2+b^2-r^2)~.$$ |
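A numerical spot-check of the final pedal equation (sample values $a=3$, $b=2$ and an arbitrary point on the ellipse, my own choice):

```python
from math import cos, sin, sqrt

a, b, t = 3.0, 2.0, 0.8
x, y = a * cos(t), b * sin(t)                      # a point on the ellipse

p = 1 / sqrt((x / a**2) ** 2 + (y / b**2) ** 2)    # distance of the tangent from the centre
r2 = x * x + y * y

print(a**2 * b**2 / p**2, a**2 + b**2 - r2)        # the two sides agree
```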
How to find the area of the trapezium | Let $DK$ be an altitude of $\Delta ADB$.
Thus, since $$DB=\sqrt{25^2-15^2}=20,$$ we obtain by calculating twice of the area of $\Delta ADB$:
$$DK\cdot 25=15\cdot20,$$ which gives $DK=12.$
Now, $$AK=\sqrt{15^2-12^2}=9.$$
Thus, $$DC=25-2\cdot9=7$$ and
$$S_{ABCD}=\frac{(25+7)12}{2}=192.$$ |
For what values of $n$ does every tangent line to the graph $y=x^n$ intersect the graph exactly once? | This was my answer before I noticed the restriction to positive integer $n$. This answer can be restricted to that situation.
If $n$ is a positive even integer, $y=x^n$ is concave up, and tangent lines only touch the curve at the point of tangency.
If $n$ is a negative odd integer, $y=x^n$ has two disjoint arms. The right arm is concave up, and tangent lines to the right arm only touch the curve at the point of tangency. Such lines cannot touch the other arm because they never enter the third quadrant. Likewise for tangent lines to the left arm.
If $n$ is a negative even integer, it's clear that the tangent line at $(1,1)$ will also cross the left arm, since it has negative slope and the left arm climbs to $\infty$.
If $n$ is a positive odd integer, then the tangent line at $(1,1)$ will also cross the curve in the third quadrant because the curve is concave down in that quadrant, and the line is below the curve at the $y$-axis.
If $n$ is $0$, of course tangent lines are the curve itself.
If $n$ is not an integer, then $x^n$ is only defined for $x\geq0$ and the curve is either concave up or concave down in this region (but doesn't toggle). So tangent lines only touch the curve at the point of tangency.
In summary, the collection of such $n$ is $$\left(1-2\mathbb{N}\right)\cup(\mathbb{R}\setminus\mathbb{Z})\cup(2\mathbb{N})$$ (where I'm using the convention $\mathbb{N}=\{1,2,\ldots\}$) |
Finding significance of equalities involved in proving $\gcd(a,b) = \gcd(a{'}, b{'})$. | You have
$$ \begin{pmatrix} a' \\ b'\end{pmatrix} = \begin{pmatrix} p & q \\ r & s \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} $$
with the condition
$$ \left| \det \begin{pmatrix} p & q \\ r & s \end{pmatrix} \right| = |ps-qr| = 1 \text{.} $$
With some work you can convince yourself that the condition on the determinant makes the linear map a bijection (in this setting, from and to $\mathbb{Z}^2$). |
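A small check (with one sample unimodular matrix of my own choosing) that such maps preserve the gcd:

```python
from math import gcd

p, q, r, s = 2, 5, 1, 3          # ps - qr = 1, so the map is invertible over the integers
assert abs(p * s - q * r) == 1

for a, b in [(12, 18), (35, 21), (8, 13)]:
    a2, b2 = p * a + q * b, r * a + s * b
    print((a, b), "->", (a2, b2), gcd(a, b) == gcd(a2, b2))   # True each time
```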
solve pairs of two variable simultaneous linear modular equations | Here's a full solution. You're solving $\begin{cases}80x\equiv 206y\pmod{3^5}\\ 19x\equiv 21y \pmod{2^5}\end{cases}$
Now find the Modular Multiplicative Inverse of $80$ mod $3^5$ and of $19$ mod $2^5$. You can apply the Extended Euclidean Algorithm (EEA).
We have $\gcd\left(80,3^5\right)=\gcd\left(19,2^5\right)=1$. By Bézout's lemma this means there exist $r,s,t,u\in\mathbb Z$ such that $80r+3^5s=1$ and $19t+2^5u=1$. We can find the $r,s,t,u$ using EEA.
Here's how you can use EEA: starting from the first two lines, each new line is the line two above it minus a suitable multiple of the line directly above it: $$243=80(0)+243(1)\\80=80(1)+243(0)\\3=80(-3)+243(1)\\2=80(79)+243(-26)\\1=80(-82)+243(27)$$
$$32=19(0)+32(1)\\19=19(1)+32(0)\\13=19(-1)+32(1)\\6=19(2)+32(-1)\\1=19(-5)+32(3)$$
Therefore $80(-82)\equiv 1\pmod{243}$, so $80^{-1}\equiv -82\pmod{243}$.
Also $19(-5)\equiv 1\pmod{32}$, so $19^{-1}\equiv -5\pmod{32}$.
Therefore your system of linear modular equations is equivalent to:
$$\begin{cases}x\equiv (206)(-82)y\equiv 118y\equiv -1097y\pmod{3^5}\\x\equiv (21)(-5)y\equiv 23y\equiv -1097y\pmod{2^5}\end{cases}$$
$$\iff \begin{cases}3^5\mid x-(-1097y)\\2^5\mid x-(-1097y)\end{cases}$$
$$\iff \text{lcm}\left(3^5,2^5\right)=2^5\cdot 3^5\mid x-(-1097y)$$
$$\iff x\equiv -1097y\pmod{2^5\cdot 3^5=7776}$$
I used the property that the system $a\mid c$, $b\mid c$ is equivalent to $\text{lcm}(a,b)\mid c$. Some people might call this property "the universal $\text{lcm}$ property" (see, e.g., here) but it is quite trivial.
You could've also used the Chinese Remainder Theorem instead of this property.
Answer: $(x,y)\equiv (-1097k,k)\pmod{7776}$ for any $k\in\mathbb Z$. |
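For completeness, the whole computation fits in a few lines of Python (the `egcd` helper is my own, not part of the original solution):

```python
def egcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

# the two inverses found above
for a, m in [(80, 243), (19, 32)]:
    g, x, _ = egcd(a, m)
    assert g == 1 and (a * x) % m == 1
    print(a, "inverse mod", m, "is", x % m)

# x = -1097 y solves both congruences, for every y
assert (80 * -1097 - 206) % 243 == 0   # 80 x = 206 y (mod 3^5)
assert (19 * -1097 - 21) % 32 == 0     # 19 x = 21 y (mod 2^5)
```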
Every simple group of order $60$ is isomorphic to $A_5$ - proof by contradiction | You are almost there. If any two distinct $2$-Sylows intersect trivially, then you are done. If $P$ and $Q$ are different $2$-Sylows which intersect non-trivially, we have $|P \cap Q| = 2$.
Observe that, since $|P| = |Q| = 4$, we obtain that $P$ and $Q$ are Abelian groups. Hence $P$ and $Q$ are both contained in the centralizer $C$ of $P \cap Q$,
$$C = \{g \in G \ | \ gh = hg \ \ \forall h \in P \cap Q\}.$$
Hence, $C$ contains the subgroup $H$ generated by $P$ and $Q$. Note that $H \neq G$, since otherwise the group $P \cap Q$ of order $2$ would be contained in the center of $G$, a contradiction to the simplicity.
So $4$ divides the order of $H$, and $H \neq G$. Moreover, since $P$ and $Q$ are distinct, $|H| > 4$. Hence the index $(G:H)$ is either $3$ or $5$. |
Graphing social connections in a middle school. | I use Python and the NetworkX package: https://networkx.github.io/
Granted, there's some learning to be done in using it, but it's very straightforward and richly featured. And of course using Python makes the automation part simple too.
Take a look. I've used it in a board game setting (machine learning) and finance.
From their website:
Features
Python language data structures for graphs, digraphs, and multigraphs.
Many standard graph algorithms
Network structure and analysis measures
Generators for classic graphs, random graphs, and synthetic networks
Nodes can be "anything" (e.g. text, images, XML records)
Edges can hold arbitrary data (e.g. weights, time-series)
Open source BSD license
Well tested: more than 1800 unit tests, >90% code coverage
Additional benefits from Python: fast prototyping, easy to teach, multi-platform |
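A minimal sketch of what this could look like for your setting (the survey data here is made up, and it assumes networkx is installed):

```python
import networkx as nx

# a directed edge u -> v means "student u named student v as a friend"
G = nx.DiGraph()
G.add_edges_from([("Ana", "Ben"), ("Ben", "Ana"), ("Ben", "Cho"),
                  ("Cho", "Ana"), ("Dee", "Ana")])

print(G.number_of_nodes(), G.number_of_edges())   # 4 5
print(max(G.in_degree(), key=lambda kv: kv[1]))   # ('Ana', 3): named most often
# nx.draw(G, with_labels=True) renders the picture (needs matplotlib)
```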
Is it true that if $\gcd(a,b) = 1$ and $\gcd(a,c) = 1$ then $\gcd(ac,b) = 1$? | Let $a=2, b=c=3$. Then $\gcd(a,b)=\gcd(a,c)=1$ and $\gcd(ac,b)=3$.
You can prove that $\gcd(a,bc)=1$, however. |
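Both points in one short check:

```python
from math import gcd

a, b, c = 2, 3, 3
print(gcd(a, b), gcd(a, c))   # 1 1
print(gcd(a * c, b))          # 3: the proposed statement fails

print(gcd(a, b * c))          # 1: the version with the product on the other side holds
```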
Where is $\Bbb R$ in the von Neumann hierarchy and the constructible hierarchy? | The question of where $\mathbb{R}$ lies in the cumulative hierarchy is sensitive to the exact definitions we use; it will always be in some $V_{\omega+n}$ for $n$ finite and small, but the precise value of $n$ may vary between reasonable choices of definition. At a glance I think your definition gives $n=6$ (Henning's calculation looks right to me), but I could be missing something.
However, the situation with regards to the constructible hierarchy is much more canonical and interesting:
First of all, it is consistent that $\mathbb{R}\not\in L$: this will happen exactly when there is a non-constructible real.
So really you want to ask about $L$'s version of the real numbers, $\mathbb{R}^L$. It turns out that this set shows up first in $L_{\omega_1^L+1}$ (where "$\omega_1^L$" is the least ordinal which $L$ thinks is uncountable - we may of course have $\omega_1^L<\omega_1$, just as we may have $\mathbb{R}^L\subsetneq\mathbb{R}$). The proof goes roughly:
First, by the condensation lemma every real in $L$ is in $L_{\omega_1^L}$.
Since being a real is a definable property, this means that $\mathbb{R}^L$ is a definable subset of $L_{\omega_1^L}$ - that is, $\mathbb{R}^L\in L_{\omega_1^L+1}$.
Finally, it can be shown (see the discussion here) that for every $\alpha<\omega_1^L$ there is a real $r$ in $L$ but not in $L_\alpha$. This shows that $\mathbb{R}^L\not\in L_\alpha$ for any $\alpha<\omega_1^L$, and so $\mathbb{R}^L\not\in L_{\omega_1^L}$ either.
Combining the previous two bullet points, we have that $\mathbb{R}^L$ appears in the $L$-hierarchy exactly at stage $\omega_1^L+1$. |
What does the logical condition "implies" actually mean? | In propositional calculus the "implies" connector means something a little more technical than implication in the cause and effect sense. It is usually the case that $A\implies B$ is shorthand for $\neg A \lor B$. |
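You can see the technical meaning by tabulating $\neg A \lor B$ over all truth assignments; in particular, a false $A$ makes the implication true regardless of $B$:

```python
# truth table of "A implies B", i.e. (not A) or B
for A in (False, True):
    for B in (False, True):
        print(A, B, (not A) or B)
# only the row A=True, B=False comes out False
```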
Find a formula for the binomial coefficients of the Macluarin series for $\frac{1}{(1+x)^{1/2}}$ | The coefficient of $x^r$ is $$(-1)^r\frac {(2r)!}{2^{2r}(r!)^2}$$ |
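A short exact check of this closed form against the defining product for the generalized binomial coefficient $\binom{-1/2}{r}$ (my own verification, not part of the answer):

```python
from fractions import Fraction
from math import factorial

def gen_binom_minus_half(r):
    # C(-1/2, r) = (-1/2)(-1/2 - 1)...(-1/2 - r + 1) / r!
    out = Fraction(1)
    for i in range(r):
        out *= (Fraction(-1, 2) - i) / (i + 1)
    return out

for r in range(8):
    closed = Fraction((-1) ** r * factorial(2 * r), 2 ** (2 * r) * factorial(r) ** 2)
    assert gen_binom_minus_half(r) == closed

print([str(gen_binom_minus_half(r)) for r in range(4)])   # ['1', '-1/2', '3/8', '-5/16']
```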
Has category theory solved major math problems? | Emily Riehl's wonderful book Category Theory in Context is a book-length answer to your question. You can skip straight to the epilogue at the very end.
But, in particular, I think that the proofs of the Weil conjectures were driven by the category theoretical structure that Grothendieck and his colleagues worked out. This included a proof of a version of the Riemann hypothesis that is relevant for number theory. |
Number of connected components of a real variety | You should be careful when applying Bezout's theorem as you have (even the higher dimensional version). That bound works (as stated in the theorem) when the intersection is a collection of finite points.
Having said that, there are some further generalizations which you might find useful. In Fulton's book on intersection theory (volume 2), it is mentioned that if we have affine varieties (in general, schemes) $X_1,...,X_r$ in $\mathbb{P}^n$ whose codimensions add to at most $n$, then
$$\sum \deg(Y_i) \leq \prod \deg(X_j)$$
where the $Y_i$ are the components of the intersection of the $X_j$. You can read more about this here:
https://mathoverflow.net/questions/42127/generalization-of-bezouts-theorem
If you want a better bound, then the question is more difficult. You can read about one such result here:
http://arxiv.org/pdf/0806.4077.pdf |
A,B and C can do a piece of | On the first day, $\frac{1}{10} + \frac{1}{20} = \frac{3}{20} = \frac{9}{60}$ work gets done. On the second day, $\frac{1}{10} + \frac{1}{30} = \frac{4}{30} = \frac{8}{60}$ more work gets done, making $\frac{17}{60}$ in total. Total work continues to increase alternatingly by $\frac{9}{60}$ and $\frac{8}{60}$, and on the third and following days we have $\frac{26}{60},~ \frac{34}{60},~ \frac{43}{60},~ \frac{51}{60},$ and $ \frac{60}{60}$ work done in total, respectively. The work is therefore completed after $7$ days. |
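The day-by-day tally is easy to confirm exactly with fractions:

```python
from fractions import Fraction

ab = Fraction(1, 10) + Fraction(1, 20)   # odd days
ac = Fraction(1, 10) + Fraction(1, 30)   # even days

done, day = Fraction(0), 0
while done < 1:
    done += ab if day % 2 == 0 else ac
    day += 1

print(day, done)   # 7 1: finished exactly at the end of day 7
```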
How do we prove the continuity of the exponential function restricted to $\mathbb{Q}$? | For $M\ge 2$ and fixed rational $s$, suppose $r_n \to s^+.$ Write (for later) $M=1+a$ so that $a=M-1\ge 1.$ Then $M^{r_n}-M^s=M^s(M^{r_n-s}-1).$ So we want to show that $M^{r_n-s}-1 \to 0$ as $n \to \infty.$ Since $r_n-s \to 0$ from above, for any $k$ it is eventually less than $1/k.$ So if we show $M^{1/k}-1 \to 0$ we are finished (for the $r_n \to s$ from above case), because we have shown $M^{r_n}-M^s \to 0.$
Now note the inequality
$$M^{1/k}=(1+a)^{1/k}<1+\frac ak, \tag{1}$$
which follows from the power map $f(u)=u^k$ being increasing and the fact that $1+a <(1+\frac ak)^k$ since the first few terms of the latter are $1+a+(k\cdot(k-1)/2)(a/k)^2,$ and remaining terms are positive. Thus since the upper bound $1+\frac a k \to 1,$ and since $g(x)=M^x$ is increasing, we arrive at $M^{1/k}\to 1,$ that is, $M^{1/k}-1 \to 0.$
If $r_n \to s$ from below a similar argument works, applying the reciprocal inequality of $(1),$ namely
$$(1+a)^{-1/k} >(1+a/k)^{-1}.$$
Note: that $M^x$ is increasing is easily shown; details if anyone wants them can be included. |
Need help in solving parametric equation problem. | The formula for slope with parametric equations is
$${dy\over dx}= {{dy\over dt}\over{dx\over dt}}$$
If you can get the derivatives ${dy\over dt}$and${dx\over dt}$, you will have ${dy\over dx}$ in terms of $t$ and you would plug in $t=2$ to get the slope of the tangent line. The equation would then be found using the point-slope formula. |
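For instance (on a hypothetical curve, since the original problem isn't quoted): with $x = t^2$, $y = t^3$ we get $\frac{dy}{dx} = \frac{3t^2}{2t}$, and at $t=2$:

```python
dxdt = lambda t: 2 * t        # x = t^2
dydt = lambda t: 3 * t ** 2   # y = t^3

t = 2
slope = dydt(t) / dxdt(t)
x0, y0 = t ** 2, t ** 3

print(slope)                            # 3.0
print(f"y - {y0} = {slope}(x - {x0})")  # point-slope form of the tangent line
```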
distributive law of vector subspace | Sure. Let $k$ be the field that you are working with and take $L=k^2$, $V=k\times\{0\}$, $W=\{0\}\times k$, and $F=\{(x,x)\mid x\in k\}$. Then $F\cap V=F\cap W=0_L$. |
Let $X$ be any set. Prove there exist a vector space $V$ such that $X$ is a basis for $V$ | For any countable set $X$ with members $x_i$ we can define the formal sums $\sum \lambda_i x_i$ where the coefficients $\lambda_i$ are members of a field such as $\mathbb{Q}$, $\mathbb{R}$ or $\mathbb{C}$. The set of formal sums becomes a vector space if we define addition and scalar multiplication in the natural way:
$(\sum \lambda_i x_i) + (\sum \kappa_i x_i) = \sum (\lambda_i + \kappa_i)x_i$
$\kappa (\sum \lambda_i x_i) = \sum (\kappa \lambda_i) x_i$ |
How to decompose $X_p = X_r +X_n$? | I'm not sure where your doubt lies: why one can always do that decomposition, or how one can do it?
First. Given a real matrix $A^{n\times m}$, let $S \subset \mathbb{R}^m $ be its row space (equivalently, in your notation, $S=C(A^t)$) and let $T \subset \mathbb{R}^m $ be the null space of $A$.
By the Rank–nullity theorem, $\dim(S)+\dim(T)=m$.
Also, it's easy to see that the spaces are orthogonal: ${\bf s} \in S, {\bf t} \in T \implies {\bf s} \perp {\bf t} $.
Then, $S$ and $T$ together span $\mathbb{R}^m$.
Second. Given two orthogonal spaces $S,T$ that together span $\mathbb{R}^m$ , and given some ${\bf u} \in \mathbb{R}^m$, how can we decompose ${\bf u}={\bf s}+{\bf t}$ for some ${\bf s}\in S, {\bf t}\in T$ ?
This should be easy. Find some base for $S$ : $\{{\bf s_1},{\bf s_2} \cdots\}$ and one for $T$ $\{{\bf t_1},{\bf t_2} \cdots\}$; together they form a base for $\mathbb{R}^m$.
Then decompose ${\bf u}$ in that base
${\bf u}=\alpha_1 {\bf s_1} + \alpha_2 {\bf s_2} + \cdots + \beta_1 {\bf t_1} + \beta_2 {\bf t_2} + \cdots$.
Then simply take ${\bf s}=\alpha_1 {\bf s_1} + \alpha_2 {\bf s_2} + \cdots $ , ${\bf t}=\beta_1 {\bf t_1} + \beta_2 {\bf t_2} + \cdots$ and you're done. |
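Here is a small Python sketch of that last step, under the extra simplifying assumption that the chosen basis vectors are mutually orthogonal (then each coefficient is a normalized dot product). The spaces $S$, $T$ below are a made-up example, not the OP's:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_onto_span(u, basis):
    # basis assumed orthogonal: coefficient of u along b is (u.b)/(b.b)
    out = [0.0] * len(u)
    for b in basis:
        c = dot(u, b) / dot(b, b)
        out = [o + c * bi for o, bi in zip(out, b)]
    return out

# S spanned by (1,1,0); its orthogonal complement T spanned by (1,-1,0), (0,0,1)
u = [3.0, 1.0, 5.0]
s = project_onto_span(u, [[1.0, 1.0, 0.0]])
t = project_onto_span(u, [[1.0, -1.0, 0.0], [0.0, 0.0, 1.0]])
# s + t recovers u
```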
What's the name of this fractal? | Your figure can be constructed by a graph directed iterated function system. An iterated function system typically has no restrictions on which transforms may follow each other. A graph-directed IFS has restrictions like the one you have imposed: there is a directed graph in which the transformations correspond to edges. In your example the nodes of this directed graph would correspond to the corners of your figure, like this:
For a reference, with fractal dimension calculations, see:
"Hausdorff dimension in graph directed constructions" R. Daniel Mauldin and S. C. Williams (Trans. Amer. Math. Soc. 309 (1988), 811-829) http://www.ams.org/journals/tran/1988-309-02/S0002-9947-1988-0961615-4/
We introduce the notion of geometric constructions in $R^m$ governed by a directed graph $G$ and by similarity ratios which are labelled with the edges of this graph. |
Find a field extension F such that the entries of an eigenvector of a fixed matrix are in F | The polynomial $\lambda^3+\lambda^2-8\lambda+7$ is irreducible over $\Bbb Q$ (if not, then being cubic, it would have a root in $\Bbb Q$; but then the root must be in $\Bbb Z$, and an argument mod $2$ gives a contradiction).
Thus the field you are looking for, as an abstract field, is simply $F = \Bbb Q[X]/(X^3 + X^2 - 8X + 7)$.
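The two root checks in the argument are easy to verify mechanically (a sketch): a rational root of this monic integer polynomial would be an integer dividing $7$, and mod $2$ the polynomial becomes $\lambda^3+\lambda^2+1$, which has no root in $\{0,1\}$:

```python
def p(x):
    return x**3 + x**2 - 8*x + 7

# a rational root of a monic integer polynomial is an integer dividing 7
has_integer_root = any(p(c) == 0 for c in [1, -1, 7, -7])

# mod 2 the polynomial is x^3 + x^2 + 1; check for roots in {0, 1}
has_root_mod_2 = any(p(c) % 2 == 0 for c in [0, 1])
```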
How to find the first term of a geometric series, given the sum to n is $8 - 2^{3 - 2n}$. | You need to put in values, starting from 1,2,3 and so on.
Since it asks for first term, use $n=1$.
Also for finding the sum to infinity, you need to use another formula:
$$S_\infty = \frac{a}{1-r}$$
where $a$ is the first term of the geometric progression
and $r$ is the common ratio.
Or, like @Rob Arthan mentioned, you can also calculate the sum to infinity by using :
$\lim_{n\to\infty} 8 - 2^{3-2n}$
This would be more useful as you wouldn't have to calculate the common ratio at all. |
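A quick numerical check of both routes (sketch): $a=S_1=6$, the second term is $S_2-S_1=\frac32$, so $r=\frac14$, and both $\frac{a}{1-r}$ and $\lim_{n\to\infty}S_n$ give $8$:

```python
def S(n):
    return 8 - 2**(3 - 2*n)

a = S(1)                  # first term
r = (S(2) - S(1)) / a     # common ratio = second term / first term
sum_to_infinity = a / (1 - r)
```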
Why is $|x|$ defined as $\sqrt{x^2}$ instead of $(\sqrt{x})^2$? | No, $\sqrt{x^2}$ does, in fact, force positivity, because $\sqrt{a}$, where $a$ is a positive number, is defined to be the positive square root. Yes, $a$ has two square roots, but the positive one is $\sqrt{a}$, and the negative one is $-\sqrt{a}$.
If $x$ is negative, we normally don't write $\sqrt{x}$, because things just got pushed into the complex plane, and roots get crazy there.
Proving that a minimum is unique | Although I'm not entirely sure I understand what you are asking, I'm going to have to say no: being bounded and having a global local minima is not sufficient to show that the derivative will only be 0 when $x=c$.
For example take the dirty function that I just whipped up in Desmos to show a counterexample
$$
f(x) = \dfrac{x^5-x^4+x^3+x}{20e^{|x|}}+4
$$
This function is bounded both above and below, and the local minimum you find at $c=-4.756...$ is also the global minimum, satisfying your condition that $f(x) > f(c)\ \forall x\in \mathbb{R}$, $x \ne c$. However, there is also a local maximum at $c'=5.134...$. By definition the derivative is zero at both of these points, which runs counter to your original claim that $f'(x)=0$ only when $x=c$.
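A rough numerical scan supports this (a sketch; the critical points are only located approximately): the finite-difference derivative of $f$ changes sign at least twice on $[-10,10]$, and $f$ both dips below $4$ (near the minimum) and rises above it (near the maximum):

```python
import math

def f(x):
    return (x**5 - x**4 + x**3 + x) / (20 * math.exp(abs(x))) + 4

def fprime(x, h=1e-5):
    # central-difference estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

xs = [-10 + 0.01 * k for k in range(2001)]
signs = [fprime(x) > 0 for x in xs]
sign_changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```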
Showing two things are homotopic to each other | You won't be able to show that $I^2$ (the unit square, where $I=[0,1]$) is homotopy equivalent to $S^1$ because that's not true.
To show that $\mathbb C-\{0\}$ is homotopy equivalent to $S^1$, it suffices to find a deformation retraction of $\mathbb C-\{0\}$ onto $S^1$. Here's one way to do that: for each point $z\in\mathbb C-\{0\}$, the point on $S^1$ which is closest to $z$ is $z/|z|$. Make the deformation retraction by sending $z$ to $z/|z|$ at constant speed along the line connecting them. (Draw a picture if you don't understand!) Points that are farther away will have to move more quickly. Can you find the formula for this deformation retraction?
Edit: The border of the unit square (call it $X$) is homotopy equivalent to $S^1$. You can show it by making a straight-line homotopy (along the lines of the other part of this question) from $1_X$ to $1_{S^1}$. That is, connect each point of $X$ with the corresponding point on the unit circle (what do I mean by corresponding point?) and move it along the line connecting them. |
Is there any actual "real" example of a group $G$ having a cardinality of $2$? | You are correct that, in a group with cardinality $2$, the non-identity element is its own inverse.
Furthermore, if $e$ is the identity element, then we must have $e \circ s = s \circ e = s$.
Thus, the group table is
$$
\begin{array}{c|lcr}
\circ & e & s \\
\hline
e & e & s \\
s & s & e
\end{array}
$$
This is isomorphic to the group of integers modulo $2$ under addition, which is
$$
\begin{array}{c|lcr}
+ & 0 & 1 \\
\hline
0 & 0 & 1 \\
1 & 1 & 0
\end{array}
$$
and the group of units in $\mathbb Z$ under multiplication, which is
$$
\begin{array}{c|lcr}
\times & 1 & -1 \\
\hline
1 & 1 & -1 \\
-1 & -1 & 1
\end{array}
.$$ |
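These isomorphisms are easy to verify by brute force over all four products (a sketch): map $e\mapsto 0,\ s\mapsto 1$ for the first table and $e\mapsto 1,\ s\mapsto -1$ for the second:

```python
# the abstract two-element group as a multiplication table
circ = {('e', 'e'): 'e', ('e', 's'): 's', ('s', 'e'): 's', ('s', 's'): 'e'}

phi = {'e': 0, 's': 1}    # to integers mod 2 under addition
psi = {'e': 1, 's': -1}   # to units of Z under multiplication

iso_to_Z2 = all(phi[circ[a, b]] == (phi[a] + phi[b]) % 2
                for a in 'es' for b in 'es')
iso_to_units = all(psi[circ[a, b]] == psi[a] * psi[b]
                   for a in 'es' for b in 'es')
```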
Show that $H \leq C_G(C_G(H))$. | Pick $h\in H$. We wish to show that $hg=gh$ for any $g\in C_G(H)$. But this is tautological from the definition of $C_G(H)$, so $H\subset C_G(C_G(H))$. |
Uniqueness of hyperplane | Your approach should work. Consider the following. I will use your notation for $P$ and $P'$.
Note that $P-x_0$ has dimension $n-1$ and it is spanned by $x_1-x_0, ..., x_{n-1}-x_0$. Thus $P-x_0 = span\{x_1-x_0, ..., x_{n-1}-x_0\}$.
Similarly, $P'-x_0$ has dimension $n-1$ and it is spanned by $x_1-x_0, ..., x_{n-1}-x_0$. Thus $P'-x_0 = span\{x_1-x_0, ..., x_{n-1}-x_0\}$.
This gives us $P - x_0 = P'-x_0$. Hence $P=P'$. |
Let $Z_t=\int{W_s }ds $. Show that $Z_t=\int (t-s) dW_s$ | Note that
\begin{align*}
d(sW_s) = sdW_s + W_s ds.
\end{align*}
Then you can integrate on both sides. |
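The resulting identity $\int_0^t W_s\,ds = tW_t - \int_0^t s\,dW_s = \int_0^t (t-s)\,dW_s$ is pathwise integration by parts, so it can be illustrated on a smooth stand-in path, say $w(s)=s^2$ on $[0,1]$ (my own toy example, not a Brownian path): both $\int_0^1 w(s)\,ds$ and $\int_0^1 (1-s)\,w'(s)\,ds$ equal $\frac13$:

```python
def riemann(g, a, b, n=100000):
    # left-endpoint Riemann sum of g on [a, b]
    h = (b - a) / n
    return sum(g(a + k * h) for k in range(n)) * h

lhs = riemann(lambda s: s**2, 0.0, 1.0)                  # integral of w(s) ds
rhs = riemann(lambda s: (1.0 - s) * 2.0 * s, 0.0, 1.0)   # integral of (1-s) w'(s) ds
```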
Convergent successions is a closed set of bounded successions with the supremum distance | To avoid the problem you are facing, show that $(z_n)$ is a Cauchy sequence. That is enough to say that it is a convergent sequence.
Use the inequality $|z_n-z_m| \leq |z_{k,n}-z_{k,m}|+|z_{k,n}-z_n|+|z_m-z_{k,m}|$. Can you finish? |
Filling the Unit Disk With Non-overlapping Rectangles | Hint: Find an infinite set of points, such that any 3 cannot be covered by a rectangle contained within the unit disc. |
Universal cover of complete hyperbolic surfaces and torsion-free, discrete groups of isometries of $\mathbb{H}^2$ | If $M$ is a closed, complete hyperbolic manifold, then the hyperbolic structure lifts to its universal cover, $\widetilde{M}$. This means that $\Gamma = \pi_1(M)$ acts on $\widetilde{M}$ by isometries, and $M = \widetilde{M}/\Gamma$. Since $\widetilde{M}$ is isometric to $\mathbb{H}^n$, we have $M = \mathbb{H}^n/\Gamma$. If $\Gamma$ had torsion, then the action would not be fixed-point free and the quotient would have a cone point and not be a manifold, so we can conclude that $\Gamma$ has no torsion.
(Here's a short discussion of bigger picture because it's interesting! Study of hyperbolic surfaces is especially relevant for dimensions $2$ and $3$, where "most" manifolds are hyperbolic. For orientable closed surfaces, the Euler characteristic and Gauss-Bonnet imply that only $S^2$ and $T^2$ are not hyperbolic. Geometrization (Thurston-Perelman are the two biggest names here) states that if a closed $3$-manifold $M$ has: $\pi_1M$ infinite (i.e., $M$ is not covered by $S^3$), $\pi_1(M)$ not a free product (i.e., no incompressible spheres - $M$ is prime), and $\pi_1(M)$ contains no $\mathbb{Z}^2$ (i.e., no incompressible tori), then $M$ is hyperbolic. In three dimensions, once a space is hyperbolic, its geometric invariants become topological invariants by Mostow rigidity: If two hyperbolic manifolds are homotopy-equivalent, then that homotopy-equivalence was induced by an isometry.) |
Picking points some distance $r$ from a line segment? | Well, assuming you are working in $\mathbb R^3$, WLOG (using translation and rotation), let $P_1 = (0, 0, 0)$ and $P_2 = (0, 0, p)$.
Then the line segment is given by $(0, 0, tp)$ for $t \in [0, 1]$.
The cylinder would be given by $(r \cos \theta, r \sin \theta, tp)$ for $\theta \in [0, 2\pi)$
and the "caps" at both ends by the two sets of equations:
$ [x^2 + y^2 + z^2 = r^2] \wedge [z < 0] \quad $ and
$ \quad [x^2 + y^2 + (z-p)^2 = r^2] \wedge [z > p]$.
Not sure if this is exactly what you wanted, but this may get you started. |
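Equivalently, a point lies on this surface exactly when its distance to the segment is $r$. A standard point-to-segment distance routine (a sketch, with my own sample points rather than the OP's) makes that test direct:

```python
import math

def dist_point_segment(q, p1, p2):
    # distance from point q to the segment from p1 to p2 in R^3
    d = [b - a for a, b in zip(p1, p2)]          # direction p2 - p1
    w = [b - a for a, b in zip(p1, q)]           # offset q - p1
    dd = sum(x * x for x in d)
    # clamp the projection parameter to [0, 1] to stay on the segment
    t = 0.0 if dd == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(w, d)) / dd))
    closest = [a + t * x for a, x in zip(p1, d)]
    return math.dist(q, closest)

# sample: P1 = (0,0,0), P2 = (0,0,2); a side point and a cap point, both at distance 1
side = dist_point_segment([1, 0, 1], [0, 0, 0], [0, 0, 2])
cap = dist_point_segment([0, 0, 3], [0, 0, 0], [0, 0, 2])
```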
A question on the conjugacy class in infinite group | Equivalently, you want $G$ to be the union of the conjugates of $H$ in $G$. This is equivalent to saying that, in the permutation action of $G$ by multiplication on the (left or right, whichever you prefer) cosets of $H$, no elements of $G$ act fixed-point-freely.
So we are looking for a transitive group $G$ of permutations on an infinite set with no fixed-point-free elements. We can then take $H$ to be the stabilizer of a point. How about taking all permutations of an infinite set that move only finitely many points? |
$A$ and $B$ are positive self-adjoint matrix such that $AB$ is self-adjoint then $AB$ is positive | Let us start with the positive operators $A$, $B$ as in the OP. Then $AB$ self-adjoint implies
$$
AB = (AB)^*=B^*A^*=BA\ ,
$$
so that $A$ and $B$ commute. Using continuous functional calculus w.r.t. the function $f(x)=\sqrt x$ defined on $[0,\infty)$ (and this domain contains the spectrum of $A$, and the spectrum of $B$) we obtain square roots $S=f(A)$, $T=f(B)$, i.e. $A=S^2$ and $B=T^2$.
Explicitly, for $A$ only, let $p_n$ be a sequence of polynomials such that $p_n\to f$ uniformly on a compact interval (for instance $[0,\|A\|]$) containing the spectrum of $A$. Such a sequence is ensured by Stone–Weierstraß. Then $$p_n(A)\to f(A)=:S\ .$$
Here, $f(A)$ is defined as $\lim p_n(A)$; the limit exists because $(p_n(A))$ is a Cauchy sequence. (Functional calculus of bounded operators shows this does not depend on $(p_n)$, but we do not need this.)
We denote by $S$ this value.
Then
$$
\begin{aligned}
S^2
&=(f(A))^2=(\lim p_n(A))^2=\lim (p_n(A))^2
=\lim p_n^2(A)
\\
&=(\lim p_n^2)(A)
=(\lim p_n)^2(A)
\\
&=f^2(A)=\operatorname{id}(A)=A\ .
\\[3mm]
SB &=(\lim p_n(A))B=\lim p_n(A)B\\
&=\lim B p_n(A)=B(\lim p_n(A))=BS\ .
\\[3mm]
(S^*x, y)
&=(x,Sy)=(x,(\lim p_n(A))y)=(x,\lim p_n(A)y)=\lim (x,p_n(A)y)\\
&=\lim (p_n(A)x,y)=(\lim p_n(A)x,y)=((\lim p_n(A))x,y)\\
&=(Sx,y)\ ,\qquad\text{ for all $x,y$ in the given Hilbert space.}
\end{aligned}
$$
(We have used $AB=BA$. These are basic properties of the functional calculus. Starting from $AB=BA$ we get $f(A)B=Bf(A)$, so $SB=BS$. Similarly, starting with $SB=BS$ we obtain $Sf(B)=f(B)S$. So $ST=TS$: the two operators $S,T$ also commute.)
Let now $x$ be $\ne 0$. We have in a row:
$$
(ABx,x) =(SSTTx,x)=(TSSTx,x)=(STx,STx)=\|STx\|^2>0\ .
$$
(We have $(STx,STx)=\|STx\|^2\ge 0$, and in case of an equality, from $(STx,STx)=0$ we get first $Tx=0$, since $S>0$, then from $(Tx,Tx)=0$ also $x=0$, since $T>0$. Contradiction, since we started with an $x\ne 0$.)
Alternatively, we could have introduced only $T$, and run the same argument with $SS$ replaced by $A$, e.g. $(ABx,x)=(ATTx,x)=(TATx,x)=(ATx,Tx)>0$ since $Tx\ne 0$, because $(Tx,Tx)=(TTx,x)=(Bx,x)>0$.
Note: In the finite-dimensional case, things are simple. The two commuting self-adjoint operators can be diagonalized simultaneously w.r.t. some orthonormal basis, and if $A=\operatorname{diag}(a_1,\dots,a_n)>0$, then $S=\sqrt A:=\operatorname{diag}(\sqrt{a_1},\dots,\sqrt{a_n})>0$ is the explicit square root of $A$ (which is positive), it is diagonal w.r.t. the same basis, et cetera.
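That finite-dimensional remark is easy to spot-check numerically on a hypothetical commuting pair (my example, not from the question): $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ and $B=\begin{pmatrix}3&1\\1&3\end{pmatrix}$ are symmetric positive definite and commute, and indeed $AB$ comes out symmetric with positive quadratic form:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def quad_form(M, x):
    # computes (Mx, x)
    Mx = [sum(M[i][k] * x[k] for k in range(2)) for i in range(2)]
    return sum(Mx[i] * x[i] for i in range(2))

A = [[2, 1], [1, 2]]
B = [[3, 1], [1, 3]]
AB = matmul(A, B)

commute = (AB == matmul(B, A))
symmetric = (AB[0][1] == AB[1][0])
positive = all(quad_form(AB, x) > 0 for x in [[1, 0], [0, 1], [1, -1], [2, 3]])
```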
Note: See also
Functional calculus for bounded operators using continuous functions
Functional calculus for bounded operators using Borel functions |
Language decidability and Post's theorem | If you already know that "Post's theorem" implies decidable = $\Delta^0_1$, then the whole proof becomes a 1-liner: The RHS conditions say precisely that $L$ is both $\Sigma^0_1$ and $\Pi^0_1$, i.e. that $L$ is $\Delta^0_1$. In the event you haven't proved that, the following shows why it's true.
($\Longleftarrow$) Given decidable $A, B$, suppose $L$ is such that
$$\begin{align}
L &= \{x\mid(\exists y)[\langle x, y\rangle\in A]\} \tag{$\Sigma$}\\
&= \{x\mid(\forall y)[\langle x, y\rangle\in B]\} \tag{$\Pi$}.
\end{align}$$
The identity ($\Sigma$) shows that $L$ is $\Sigma^0_1$ (alias, semi-decidable, alias recursively enumerable). The identity ($\Pi$) shows that $L$ is $\Pi^0_1$ (hence its complement is semi-decidable, alias recursively enumerable). Therefore $L$ is $\Delta^0_1$, which is the same thing as decidable (alias recursive).
To see more explicitly why this is so, we can write $L^c$, the complement of $L$, in terms of $B$ in a $\Sigma^0_1$ way:
$$\begin{align}
L^c &= \{x\mid \neg(\forall y)[\langle x, y\rangle\in B]\} \\
&= \{x\mid (\exists y)[\langle x, y\rangle\notin B]\} \\
&= \{x\mid (\exists y)[\langle x, y\rangle\in B^c]\} \\
\end{align}$$
Because $B$ is decidable, so is $B^c$. Thus, just like $L$, the complement $L^c$ of $L$ is the projection of a decidable relation. To enumerate $x$ into $L^c$ involves an unbounded search through all $(x,y)$ until one is found that's in $B^c$.
An algorithm for deciding membership in $L$ is as follows. There are, in principle, procedures decide_A(x,y) and decide_not_B(x,y) which decide membership of $\langle x,y \rangle$ in $A$ and $B^c$ respectively. So here's a procedure for deciding membership in $L$:
def decide_L(x):
    y = 0
    while True:
        if decide_A(x, y):
            return True    # x is in L
        if decide_not_B(x, y):
            return False   # x is in complement of L
        y = y + 1
This procedure will always terminate, because for each $x$, some $y$ will be found such that one of the two procedures it calls will return True with arguments $x,y$.
Note: Actually, these "procedures" are all "functions", in the programming language sense, as they return values.
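As a toy runnable instance (my own hypothetical example, not from the question), take $L$ = the even naturals, with $A=\{\langle x,y\rangle : x = 2y\}$ and $B=\{\langle x,y\rangle : x \text{ even}\}$; then the procedure terminates on every input:

```python
def decide_A(x, y):
    # <x, y> in A  iff  x == 2y
    return x == 2 * y

def decide_not_B(x, y):
    # <x, y> in complement of B  iff  x is odd (B ignores y)
    return x % 2 == 1

def decide_L(x):
    y = 0
    while True:
        if decide_A(x, y):
            return True    # x is in L
        if decide_not_B(x, y):
            return False   # x is in complement of L
        y = y + 1

evens = [x for x in range(10) if decide_L(x)]
```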
($\Longrightarrow$) If $L$ is decidable, then both $L$ and $L^c$ are semi-decidable, so there are decidable $A, C$ such that:
$$\begin{align}
L &= \{x\mid(\exists y)[\langle x, y\rangle\in A]\} \\
L^c &= \{x\mid(\exists y)[\langle x, y\rangle\in C]\}.
\end{align}$$
Then $B = C^c$ is decidable, and
$$\begin{align}
L^c &= \{x\mid(\exists y)[\langle x, y\rangle\in C]\} \\
&= \{x\mid(\exists y)[\langle x, y\rangle\notin B]\} \\
&= \{x\mid(\exists y)\neg[\langle x, y\rangle\in B]\} \\
&= \{x\mid \neg(\forall y)[\langle x, y\rangle\in B]\}.
\end{align}$$
Therefore, $L = \{x\mid (\forall y)[\langle x, y\rangle\in B]\}$. |
On periodic and nonperiodic functions | Your conjecture is not true. Define the function $f$ by letting $f(x) = 0$ if $x$ is transcendental and $f(x) = 1$ if $x$ is algebraic. Then $f$ is periodic (any positive algebraic number is a period), and for each positive integer $n$ the function $g_n$ defined by $g_n(x) = f(x^n)$ is also periodic (any positive algebraic number is a period). |
The set of real values of $x$ satisfying the equation $\left[\frac{3}{x}\right]+\left[\frac{4}{x}\right]=5$ | Here are some hints.
Note that as $x$ increases, the sum can get no larger: it is non-increasing, though not strictly decreasing, because it is sometimes constant. Also, with $x=1$ the sum equals $7$, so we must have $x \gt 1$ for a sum as low as $5$.
Now if $x\gt 1$ we have immediately that $\left[\frac{3}{x}\right]+\left[\frac{4}{x}\right]\le 2+3=5$ so we must have equality with $\left[\frac{3}{x}\right]=2$ and $\left[\frac{4}{x}\right]=3$
You should be able to complete things from there. |
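A numeric scan can confirm where this leads (a sketch; brute force only suggests the answer, it doesn't replace the argument above): collecting the grid values with $\left[\frac3x\right]+\left[\frac4x\right]=5$ fills out an interval just above $1$:

```python
import math

def lhs(x):
    return math.floor(3 / x) + math.floor(4 / x)

# scan x = 1.001, 1.002, ..., 2.000 and keep the solutions
sols = [round(1 + 0.001 * k, 3) for k in range(1, 1001) if lhs(1 + 0.001 * k) == 5]
```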
Is the following equivalent to the axiom of choice? | We don't know.
This is known as the partition principle (note that the other implication is trivial in $\sf ZF$). The problem whether or not this is equivalent to the axiom of choice has been open for over a century now.
There isn't much to say about it, really. We know it implies some basic choice principles such as "Every infinite set is Dedekind infinite", but we don't know a lot more. Every time I think about the problem I run into the same obstacle: we don't have enough tools to manage - or even understand - the structure of cardinals in arbitrary models of $\sf ZF$, not even under $\leq$, let alone under $\leq^*$.
Is the product of two vectors of null sum, a vector with null sum? | Sadly, it's false. For $n=2$, take $a=(-1,1)$, $b=(1,-1)=-a$. |
How do I interpret the formating of these given structures and prove they are not isomorphic? | To say that two structures are or are not isomorphic, you need to specify what you are saying they are(n't) isomorphic as. In this case, asking about isomorphism of monoids would be reasonable, since both $\mathbb{Z}^+$ and $\mathbb{Q}^+$ are monoids under multiplication.
Fortunately in this case, any reasonable notion of homomorphism $(\mathbb{Q}^+, {\cdot}) \to (\mathbb{Z}^+, {\cdot})$ must satisfy $f(ab) = f(a)f(b)$ for all $a,b \in \mathbb{Q}^+$. We can prove that no such homomorphism is an isomorphism (be it of monoids or otherwise), since it is not even bijective.
Indeed, first note that we must have
$$f(1) = f(1 \cdot 1) = f(1) \cdot f(1) \quad \Rightarrow \quad f(1) = 1$$
We were able to cancel the $f(1)$ since $f(1) > 0$. But then we also have
$$1 = f(1) = f\left( \frac{1}{2} \cdot 2 \right) = f\left(\frac{1}{2}\right) \cdot f(2)$$
and hence $f(2) = 1$ since that is the only positive integral factor of $1$.
But then $f(1)=f(2)$, so $f$ is not a bijection, so $f$ is not an isomorphism.
Footnote: what we actually proved is that $\mathbb{Z}^+$ and $\mathbb{Q}^+$ are not isomorphic as magmas under multiplication. Since every monoid, or even semigroup, has an underlying magma, it follows that they are not isomorphic as semigroups or as magmas. |
let $x\in\emptyset\subseteq\mathbb{R}$ why is $x\leq r,\forall r \in\mathbb{R}$ true | Three explanations. (All three rely on the fact that, since there is no $x \in \emptyset$, the claim $x \le r$ can never be falsified for any $r \in \mathbb R$.)
....
Either $x\in \emptyset \implies x \le r :\forall r \in \mathbb R$ is true or it is false.
If it is false then that implies that there is an $x \in \emptyset$ and an $r \in \mathbb R$ so that $x > r$. That is impossible as there are no $x \in \emptyset$.
If it is true then for every $x \in \emptyset$ then ... something. We can never test this because we can never find an $x \in \emptyset$.
However, we can do logic: the statement $P \implies Q$ is true if $P$ and $Q$ are both true, or if $P$ is false. It is only false if $P$ is true and $Q$ is false. If $P := x \in \emptyset$ and $Q := x \le r\ \forall r \in \mathbb R$, then $P$ is always false. And $Q$ can never be true for any actual $x \in \mathbb R$. So $P \implies Q$ is true.
Also, a statement is equivalent to its contrapositive. The contrapositive of $x\in \emptyset \implies x \le r\ \forall r \in \mathbb R$ is:
$\exists r\in \mathbb R: r < x \implies x\not \in \emptyset$. Well that is certainly true! If $r < x$ then $x$ must exist. And if $x$ exists then $x \not \in \emptyset$!
......
If $x\in \emptyset$ then $x$ is a green cheese eating alien who is the reincarnation of Elvis Presley is true because logically a false premise implies all conclusions.
So if $x \in \emptyset$ then $x \le r$ for all $r \in \mathbb R$. (It's also true that $x > r$ for all $r \in \mathbb R$ and that $x = r$ for all $r \in \mathbb R$, and so on... as $x$ does not exist, we can say anything we want to be true about it.)
....
Or looking at it another way: If $A_r = (-\infty, r)$ then $\emptyset \subset A_r$, because the empty set is a subset of every set. (The empty set has no elements, so it has no elements that fail to be in any given set.) So $x \in \emptyset \implies x \in A_r$.
That is true for all $A_r: r\in \mathbb R$. So $x \in (-\infty, r)$ for all $r\in \mathbb R$ so $x \le r$ for all $r \in \mathbb R$.
You might say "there's got to be a trick in there somewhere; nothing can be in all of those intervals" and you'd be right. The trick is that no such $x\in \emptyset$ does exist. But if such an $x$ DID exist, it would have to be in every set including every interval including those intervals. |
Do we have to take a second subgroup to use the property of $G$? - Show that the groups are nilpotent | Let's start with 1. Suppose that $|H|$ and $|G:H|$ are not coprime, so there is a prime $p$ dividing both. Let $P \in {\rm Syl}_p(H)$. Then there exists $Q \in {\rm Syl}_p(G)$ with $P < Q$, and the containment is strict because $p$ divides $|G:H|$. So $Q \cap H = P$, which is not equal to $Q$ and not equal to $1$. So $Q \cap H = H$ and hence $H=P$. |
Finding $\lim_{n\to \infty} \sum_{r=1}^n \frac{6n}{9n^2-r^2}$ | Riemann Sum Approach
$$
\begin{align}
\lim_{n\to\infty}\sum_{r=1}^n\frac{6n}{9n^2-r^2}
&=\lim_{n\to\infty}\sum_{r=1}^n\frac{2}{1-\frac{r^2}{9n^2}}\frac1{3n}\tag1\\
&=\int_0^{1/3}\frac2{1-x^2}\,\mathrm{d}x\tag2\\
&=\int_0^{1/3}\left(\frac1{1-x}+\frac1{1+x}\right)\mathrm{d}x\tag3\\
&=\left.\log\left(\frac{1+x}{1-x}\right)\,\right|_0^{1/3}\tag4\\[6pt]
&=\log(2)\tag5
\end{align}
$$
Explanation:
$(1)$: divide numerator and denominator by $9n^2$
$(2)$: Riemann Sum with $x=\frac{r}{3n}$
$(3)$: partial fractions
$(4)$: integrate
$(5)$: evaluate |
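A direct numeric check of the limit (sketch): the partial sum for large $n$ is already close to $\log(2)\approx 0.6931$:

```python
import math

def partial_sum(n):
    return sum(6 * n / (9 * n**2 - r**2) for r in range(1, n + 1))

approx = partial_sum(100000)
```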