title | upvoted_answer
---|---
Farkas lemma variations | Add to your unknowns $x_1,\ldots,x_n$ a new one $x_{n+1}=c\cdot x$. |
Sequence of independent random variables with same expected value such that the weak law doesn't hold | Let $X_0, X_1, X_2, \dots$ be independent with $X_n = \pm 4^n$ each with probability $1/2$. Then $E[X_n] = \mu = 0$ for all $n$. Letting $S_k = X_0 + \dots + X_k$, we have
$$|S_{k-1}| \le \sum_{n=0}^{k-1} 4^n = \frac{1}{3} (4^k - 1)$$
so $$|S_k| \ge ||X_k| - |S_{k-1}|| \ge 4^k - \frac{1}{3} (4^k - 1) \ge \frac{2}{3} 4^k.$$
In particular, $|S_k/k| \to \infty$ almost surely.
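A quick simulation illustrating the blow-up (a minimal sketch, assuming `numpy`; the function name is my own):

```python
import numpy as np

def abs_S_over_k(kmax, rng):
    # X_n = +/- 4^n with probability 1/2 each; return |S_k| / k for k = 1..kmax.
    X = rng.choice([-1.0, 1.0], size=kmax + 1) * 4.0 ** np.arange(kmax + 1)
    S = np.cumsum(X)
    return np.abs(S[1:]) / np.arange(1, kmax + 1)

print(abs_S_over_k(10, np.random.default_rng(0)))
```

Every run shows $|S_k|/k$ growing like $4^k/k$, consistent with the bound $|S_k| \ge \frac{2}{3} 4^k$ above. |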
A lower bound for the condition number matrix | The statement is false. Consider the entrywise max norm. The counterexample:
$A := \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is invertible and $B := \begin{pmatrix} 1/4 & 1/2 \\ 1/2 & 1 \end{pmatrix}$ is singular.
$$\|A^{-1}\| \cdot \|A-B\| = 1 \cdot \left\|\begin{pmatrix} 3/4 & -1/2 \\ -1/2 & 0 \end{pmatrix} \right\| = \frac{3}{4} < 1$$
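A quick numerical check of this counterexample (a minimal sketch, assuming `numpy`):

```python
import numpy as np

A = np.eye(2)
B = np.array([[0.25, 0.5],
              [0.5,  1.0]])

max_norm = lambda M: np.abs(M).max()   # entrywise max norm
print(np.linalg.det(B))                # 0.0, so B is singular
print(max_norm(np.linalg.inv(A)) * max_norm(A - B))   # 0.75 < 1
```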
The problem is indeed the absence of the sub-multiplicativity property. But if a matrix norm has this property, then the statement becomes true, by the following lemma.
Lemma: Let $\|\cdot\|$ be a matrix norm on $\mathbb{R}^{n \times n}$. If $\|\cdot\|$ is sub-multiplicative, then there is a vector norm $\|\cdot\|_*$ on $\mathbb{R}^n$ such that both norms are compatible.
Proof: Fix $\boldsymbol{0} \neq \boldsymbol{y} \in \mathbb{R}^n$. Define $\|\cdot\|_* : \mathbb{R}^n \to [0,\infty)$ such that $\|\boldsymbol{x}\|_* := \|\boldsymbol{x}\boldsymbol{y}^T\|$ for every $\boldsymbol{x} \in \mathbb{R}^n$.
It's easy to check that $\|\cdot\|_*$ is a well-defined vector norm, because $\boldsymbol{y} \neq \boldsymbol{0}$ and because $\|\cdot\|$ is, by hypothesis, a sub-multiplicative matrix norm. So let's just check compatibility.
Let $A \in \mathbb{R}^{n \times n}$ and $\boldsymbol{x} \in \mathbb{R}^n$. Then by sub-multiplicative property of $\|\cdot\|$ we have
$$\|A\boldsymbol{x}\|_* = \|(A\boldsymbol{x})\boldsymbol{y}^T\| = \|A(\boldsymbol{x}\boldsymbol{y}^T)\| \leq \|A\| \cdot \|\boldsymbol{x}\boldsymbol{y}^T\| = \|A\| \cdot \|\boldsymbol{x}\|_*$$
Hence $\|\cdot\|$ is compatible with $\|\cdot\|_*$. $\blacksquare$
Then we are done since in the edited part of my post I had already demonstrated the result when we have sub-multiplicative and compatibility properties. |
Does the sequence $ (2\mathbb{N}+1 )^2 +4 $ find primes particularly well? | Using $x^2 + x + 41$ for $0 \leq x \leq 999,$ we find 581 primes. The smaller numbers in the output below are not the $x$ values; they are just the running count of primes. When the count $c$ satisfies $1 \leq c \leq 40$ we do have $x = c - 1,$ but not for larger $c.$
Mon Jul 2 19:50:42 PDT 2018
1 41 2 43 3 47 4 53 5 61 6 71 7 83 8 97
9 113 10 131 11 151 12 173 13 197 14 223 15 251 16 281
17 313 18 347 19 383 20 421 21 461 22 503 23 547 24 593
25 641 26 691 27 743 28 797 29 853 30 911 31 971 32 1033
33 1097 34 1163 35 1231 36 1301 37 1373 38 1447 39 1523 40 1601
41 1847 42 1933 43 2111 44 2203 45 2297 46 2393 47 2591 48 2693
49 2797 50 2903 51 3011 52 3121 53 3347 54 3463 55 3581 56 3701
57 3823 58 3947 59 4073 60 4201 61 4463 62 4597 63 4733 64 4871
65 5011 66 5153 67 5297 68 5443 69 5591 70 5741 71 6047 72 6203
73 6361 74 6521 75 7013 76 7351 77 7523 78 7873 79 8231 80 8597
81 8783 82 8971 83 9161 84 9547 85 9743 86 9941 87 10141 88 10343
89 10753 90 11171 91 11383 92 11597 93 11813 94 12251 95 12473 96 12697
97 12923 98 13151 99 13381 100 13613 101 14083 102 14321 103 14561 104 15541
105 15791 106 16553 107 16811 108 17333 109 17597 110 17863 111 18131 112 18401
113 18947 114 19501 115 20063 116 20347 117 20921 118 21211 119 21503 120 22093
121 22391 122 22691 123 22993 124 23297 125 23603 126 23911 127 24533 128 24847
129 25163 130 25801 131 27431 132 27763 133 28097 134 28433 135 28771 136 29453
137 30491 138 30841 139 31193 140 31547 141 32261 142 32621 143 32983 144 33347
145 33713 146 35573 147 35951 148 36713 149 37097 150 37483 151 37871 152 38261
153 38653 154 39047 155 39443 156 39841 157 40241 158 41047 159 41453 160 42683
161 44351 162 44773 163 45197 164 46051 165 48221 166 48661 167 49103 168 49547
169 49993 170 50441 171 50891 172 51343 173 51797 174 52253 175 52711 176 53171
177 53633 178 54563 179 55501 180 56923 181 57881 182 58363 183 59333 184 61297
185 62791 186 64303 187 64811 188 66347 189 66863 190 67901 191 68947 192 69473
193 70001 194 71597 195 72671 196 74297 197 74843 198 75391 199 75941 200 76493
201 77047 202 78721 203 79283 204 79847 205 81551 206 83273 207 84431 208 85597
209 86183 210 86771 211 88547 212 92153 213 92761 214 93371 215 93983 216 94597
217 95213 218 96451 219 97073 220 98323 221 99581 222 100213 223 100847 224 101483
225 102121 226 102761 227 104047 228 104693 229 105341 230 110597 231 111263 232 112601
233 113947 234 115301 235 115981 236 116663 237 118033 238 120103 239 121493 240 122891
241 123593 242 124297 243 125003 244 125711 245 126421 246 127133 247 128563 248 129281
249 131447 250 132173 251 133631 252 134363 253 138053 254 138797 255 141041 256 141793
257 142547 258 144061 259 146347 260 147881 261 149423 262 150197 263 152531 264 153313
265 154097 266 154883 267 155671 268 157253 269 158047 270 158843 271 160441 272 162853
273 163661 274 164471 275 169373 276 170197 277 171023 278 171851 279 172681 280 174347
281 176021 282 179393 283 180241 284 181943 285 184511 286 185371 287 187963 288 188831
289 189701 290 190573 291 191447 292 192323 293 193201 294 194963 295 197621 296 199403
297 200297 298 201193 299 204797 300 205703 301 207521 302 208433 303 209347 304 210263
305 213023 306 213947 307 215801 308 216731 309 219533 310 220471 311 221411 312 226141
313 227093 314 229003 315 229961 316 232847 317 234781 318 235751 319 236723 320 238673
321 240631 322 243583 323 245561 324 247547 325 248543 326 249541 327 251543 328 253553
329 255571 330 258613 331 259631 332 260651 333 261673 334 262697 335 263723 336 265781
337 268883 338 270961 339 272003 340 273047 341 274093 342 276191 343 279353 344 280411
345 285731 346 286801 347 287873 348 288947 349 290023 350 291101 351 292181 352 293263
353 294347 354 295433 355 300893 356 301991 357 303091 358 304193 359 305297 360 307511
361 308621 362 311963 363 313081 364 318701 365 319831 366 322097 367 323233 368 327797
369 331241 370 332393 371 337021 372 338183 373 341681 374 346373 375 348731 376 349913
377 351097 378 353471 379 354661 380 355853 381 357047 382 358243 383 359441 384 361843
385 363047 386 365461 387 367883 388 369097 389 372751 390 381347 391 382583 392 383821
393 386303 394 388793 395 391291 396 392543 397 393797 398 396311 399 398833 400 402631
401 403901 402 406447 403 407723 404 410281 405 411563 406 419297 407 420593 408 421891
409 423191 410 424493 411 427103 412 428411 413 433663 414 434981 415 445597 416 446933
417 452297 418 453643 419 454991 420 459047 421 460403 422 464483 423 467213 424 468581
425 472697 426 474073 427 476831 428 478213 429 482371 430 483761 431 487943 432 490741
433 497771 434 499183 435 502013 436 503431 437 504851 438 507697 439 509123 440 510551
441 514847 442 516283 443 517721 444 519161 445 522047 446 523493 447 524941 448 526391
449 527843 450 530753 451 533671 452 535133 453 541001 454 549863 455 551347 456 552833
457 557303 458 560293 459 564793 460 570821 461 572333 462 573847 463 576881 464 578401
465 581447 466 582973 467 587563 468 593711 469 595253 470 599891 471 604547 472 609221
473 610783 474 617051 475 620197 476 623351 477 628097 478 629683 479 631271 480 644047
481 647261 482 648871 483 650483 484 653713 485 655331 486 656951 487 658573 488 660197
489 661823 490 668347 491 674903 492 683143 493 686453 494 688111 495 689771 496 691433
497 693097 498 694763 499 701447 500 703123 501 704801 502 706481 503 708163 504 709847
505 714911 506 723391 507 726797 508 731921 509 738781 510 743947 511 745673 512 747401
513 750863 514 754333 515 757811 516 759553 517 761297 518 763043 519 766541 520 770047
521 773561 522 778847 523 780613 524 782381 525 785923 526 787697 527 789473 528 791251
529 798383 530 800171 531 809141 532 810941 533 816353 534 825413 535 827231 536 830873
537 834523 538 836351 539 847361 540 849203 541 852893 542 860297 543 864011 544 865871
545 867733 546 869597 547 871463 548 873331 549 875201 550 877073 551 880823 552 882701
553 886463 554 894011 555 895903 556 899693 557 901591 558 907297 559 909203 560 911111
561 918763 562 922601 563 924523 564 930301 565 932231 566 936097 567 938033 568 939971
569 941911 570 947743 571 949691 572 951641 573 953593 574 959461 575 971251 576 983113
577 985097 578 987083 579 989071 580 993053 581 997043
Mon Jul 2 19:50:42 PDT 2018
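The count is easy to reproduce (a minimal sketch, assuming `sympy` for primality testing):

```python
from sympy import isprime

primes = [x*x + x + 41 for x in range(1000) if isprime(x*x + x + 41)]
print(len(primes), primes[:5])   # 581 [41, 43, 47, 53, 61]
```

which confirms the 581 primes counted above. |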
Power series: If $|z-z_0|< R$ the series converges absolutely | The question was answered in Daniel Fischer's comment: the series $ \sum_{n \geq N} \left( \frac{|z|}{r} \right)^n $ is geometric, with ratio $|z|/r<1$. Thus it converges.
We even know its sum:
$$ \sum_{n \geq N} \left( \frac{|z|}{r} \right)^n = \frac{|z|^{N}/r^N}{1-|z|/r} < \frac{1}{1-|z|/r} \tag{1}$$
I'll squeeze a little extra out of (1), under the additional assumption that there exists $N$ such that $|a_n|^{1/n}\le R^{-1}$ for all $n\ge N$. In this case we can use $r=R$ and keep $N$ the same for all $z$ in the disk. Then (1) implies the growth estimate
$$|f(z)|\le \frac{M}{R-|z|},\quad |z|<R\tag{2}$$
for some constant $M>0$.
The assumption $|a_n|\le R^{-n}$ can be replaced with $|a_n|R^n $ being bounded, because $f$ can be multiplied by an appropriate constant to make $|a_n|R^n \le 1$. Thus, we get a nice corollary:
If $f(z)=\sum_{n=0}^\infty a_n z^n$ where the coefficients are bounded, then $(1-|z|)\,|f(z)| $ is bounded in the unit disk.
This sort of boundedness property comes up in complex analysis: see Bloch space |
Probability of drawing at least 2 aces out of a deck of 32 | The probability of getting at least 2 aces in 3 draws is
P(exactly 2 aces) + P(3 aces)
= P(AAN) + P(NAA) + P(ANA) + P(AAA)
= 3 ∗ P(AAN) + P(AAA)
= 3 ∗ (4/32 ∗ 3/31 ∗ 28/30) + (4/32 ∗ 3/31 ∗ 2/30)
= 42/1240 + 1/1240 = 43/1240.
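A brute-force check over all ordered 3-card draws (a sketch; the 32-card deck with 4 aces is encoded as 0/1 flags):

```python
from fractions import Fraction
from itertools import permutations

deck = [1]*4 + [0]*28                       # 1 = ace, 0 = non-ace
hands = list(permutations(range(32), 3))    # all ordered draws without replacement
hits = sum(1 for h in hands if sum(deck[i] for i in h) >= 2)
print(Fraction(hits, len(hands)))           # 43/1240
```

which agrees with the computation above. |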
How can the real numbers be a field if $0$ has no inverse? | Zero is always excluded from having a reciprocal.
(The axioms should say that every nonzero element of the field has a reciprocal.) |
What is the average radius of the circle containing three random points in a square? | The expected value of the radius is infinite. In fact, a stronger result can be proved.
Let $S$ be a nonempty open subset of the plane with finite measure.
For any three noncollinear points $x, y, z \in S$, let $R(x, y, z)$
denote the radius of the unique circle containing $x$, $y$, and $z$.
Since the probability is zero that three random points are
collinear, we can consider $R$ as a random variable on $S^3$. We will
show that the expected value of $R$ is infinite.
Choose $\epsilon > 0$ so that $S$ contains a ball of radius $3\epsilon$.
Let $V$ be the set of all triples $(x, y, z) \in S^3$ satisfying the
following conditions:
$\operatorname{dist}(y, \partial S) > 2\epsilon$,
$\epsilon < \operatorname{dist}(x, y) < 2\epsilon$, and
$\epsilon < \operatorname{dist}(y, z) < 2\epsilon$.
These conditions imply that $x$ and $z$ lie inside an annulus centered at $y$, with inner radius $\epsilon$ and outer radius $2\epsilon$, entirely contained in $S$. See Figure 1.
By continuity of the distance function, $V$ is a nonempty
open subset of $S^3$.
By the law of total expectation,
$$
\operatorname{E}[R] = \operatorname{E}[R\,|\,V] \operatorname{P}(V) + \operatorname{E}[R\,|\,V^c] \operatorname{P}(V^c) \ge \operatorname{E}[R\,|\,V] \operatorname{P}(V),
$$
where $\operatorname{P}(V)$ is the probability that a random element of $S^3$ belongs to $V$.
This probability is equal to $\mu(V)/\mu(S^3)$, where $\mu$ is the Lebesgue measure
on $\mathbb{R}^6$.
Since $\operatorname{P}(V)$ is positive, it suffices to prove that $\operatorname{E}[R\,|\,V]$ is infinite.
Observe that $V$ has continuous rotational symmetry.
The point $z$ can be rotated about the point $y$, keeping both $x$ and $y$ fixed.
In symbols, the map $r_{\theta}\colon V\to V$ is an isometry for each
$\theta \in [0, 2\pi)$, where
$$
r_{\theta}(x, y, z) = (x, y, y + (z - y) e^{i\theta}).
$$
This implies that the measure of the angle $xyz$ is uniformly distributed on
$[0, \pi]$ when $(x, y, z)$ is drawn from $V$.
Let $(x, y, z) \in V$, and let $\theta = \pi - m(\angle xyz)$, as shown in
Figure 2. We may assume without loss of generality that $\operatorname{dist}(x, y) \le \operatorname{dist}(y, z)$.
Since $\alpha \ge \beta$, $\theta + \alpha + \beta = \pi$, and $\alpha + \gamma = \pi/2$,
it follows that $\gamma \le \theta/2$. Therefore,
$$
R = \frac{\operatorname{dist}(x, y)}{2\sin \gamma} > \frac{\epsilon}{2 \sin(\theta/2)} > \frac{\epsilon}{\theta}.
$$
But $\theta$ is uniformly distributed on $[0, \pi]$. Therefore,
$$
\operatorname{E}[R\,|\,V] \ge \frac{1}{\pi} \int_0^{\pi} \frac{\epsilon}{\theta}\, d\theta = \infty.
$$
Since $V$ has positive measure, it follows that
$\operatorname{E}[R] = \infty$ as well. |
For every integer a, b, c, if 3c is not divisible by a, then b is not divisible by a or 3c is not divisible by b | First figure out how to say this by contrapositive.
The statement is of the form $\lnot P \to \lnot Q \lor \lnot R$.
Now $\lnot Q \lor \lnot R \iff \lnot (Q \land R)$, so the statement that you are trying to prove is equivalent to
$\lnot P \to \lnot(Q \land R)$ and that is equivalent to the contrapositive:
$(Q\land R) \to P$.
So it is equivalent to prove the following:
If $b$ is divisible by $a$ and $3c$ is divisible by $b$ then $3c$ is divisible by $a$.
That is almost too easy to deserve discussion:
Pf: $a|b$ so there exists an integer $k$ such that $b = ka$. And $b|3c$ so there exists an integer $j$ such that $3c = jb$. So $3c =j(ka) = (jk)a$. Since $jk$ is an integer, $a|3c$. QED.
To show that is equivalent to the title statement:
Pf: Note: If $a|b$ and $b|3c$ then we would have $a|3c$. But we were given that $a \not \mid 3c$. So we can't have both $a|b$ and $b|3c$. So we must have either $a\not \mid b$ or $b\not \mid 3c$.
That's it. We're done. Let's go home and eat lunch. |
Finding a stochastic matrix $M$ such that $M^{7} = I$ | Take the $7 \times 7$ linear transformation that maps $e_1$ to $e_2$, $e_2$ to $e_3$, $\cdots$, $e_6$ to $e_7$, and $e_7$ to $e_1$. The matrix of this linear transformation has the desired property. Here the $e_j$ are the standard basis vectors of $\mathbb R^{7}$.
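A sketch verifying this with `numpy` (the cyclic permutation matrix is built with `np.roll`):

```python
import numpy as np

M = np.roll(np.eye(7), 1, axis=0)   # sends e_1 -> e_2, ..., e_7 -> e_1
print(M.sum(axis=0))                # every column sums to 1: M is stochastic
print(np.array_equal(np.linalg.matrix_power(M, 7), np.eye(7)))   # True
```

A permutation matrix has a single $1$ in every row and column, so it is in fact doubly stochastic. |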
Simplifying the joint distribution | In a Markov chain with a DAG of $A\to B\to C$, the events $A$ and $C$ are conditionally independent given $B$. This is the structure of dependencies you have described. Therefore $\mathsf P(A,C\mid B)=\mathsf P(A\mid B)~\mathsf P(C\mid B)$, so...
$$\begin{align}\mathsf P(A,B,C)&=\mathsf P(B)~\mathsf P(A,C\mid B)\\[1ex]&=\mathsf P(B)~\mathsf P(A\mid B)~\mathsf P(C\mid B)\\[1ex]&=\mathsf P(A)~\mathsf P(B\mid A)~\mathsf P(C\mid B)\end{align}$$ |
Why are hyperbolas defined by two branches? | First, I just want to mention an experiment you can try yourself. If you take a flashlight, the beam of light which comes out of the end is roughly conical, and you can cast the beam onto a wall or other surface. See what conditions are necessary to get a circle, ellipse, parabola, and hyperbola. The next part of the experiment is a bit of a stretch, but imagine that all of the light that leaves your flashlight has been traveling in a straight line for all time, even before the light left the flashlight (it may help to think of the light traveling backwards in time for this). Circles, ellipses, and parabolas are what you get when that light didn't come into contact with the surface in the past, and a hyperbola is when this backwards beam did.
One way to think about conic sections is by projective geometry, and one way to think about projective geometry is by imagining a point light source casting light in all directions with a sphere around it which blocks light in a certain pattern. The rule for this sphere is that if it lets light out at some point, it must let light out at the exact opposite point, and vice versa. We interact with this contraption by putting a screen near the light and seeing the patterns cast on the screen, and where you place this screen is arbitrary. If you want to cast a line on the screen, you want there to be a great circle on the sphere which lets light through, and great circles always cast lines (so long as any light from the circle meets the screen, but one can always move the screen to see some light). This is the reason for the two-sides rule: if it looks like an infinitely long line from some perspective it had better look like an infinitely long line from all perspectives.
What casts a circle? A non-great circle on the sphere does. What does the light pattern look like which radiates from the sphere? A double cone. What does one get when after moving the screen? Circles, ellipses, parabolae, and hyperbolae. A parabola is a circle reprojected so one point is infinitely far away. A hyperbola is a circle reprojected so two points are infinitely far away, the two branches being the two halves of the circle. |
Find shortest path in graph that visits 2 nodes from a certain node set | Create a new graph, which consists of 3 "copies" of $G$. Each node $(a, n)$ of this new graph represents a situation "I've got to node $a$ of original graph and have visited nodes from $U$ exactly $n$ times". $n$ can be 0, 1 or 2.
Now you need to prepare the edges. Edges are directed, and the rules are quite obvious: an original edge between nodes $a$ and $b$ corresponds to several edges in the new graph: $(a, m) \to (b, m + x)$, where $x$ is $1$ if $b$ belongs to $U$ and $0$ otherwise.
And now you only need to find a shortest path from $(s, 0)$ to $(t, 2)$.
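A compact sketch of this construction (the adjacency-list format, nonnegative edge weights, and the function name are my own assumptions):

```python
import heapq

def shortest_visiting_two(adj, U, s, t):
    # State (node, c): we are at `node` having visited nodes of U c times (capped at 2).
    dist = {(s, 0): 0}
    pq = [(0, s, 0)]
    while pq:
        d, a, c = heapq.heappop(pq)
        if (a, c) == (t, 2):
            return d
        if d > dist.get((a, c), float('inf')):
            continue
        for b, w in adj[a]:
            nc = min(2, c + (1 if b in U else 0))
            if d + w < dist.get((b, nc), float('inf')):
                dist[(b, nc)] = d + w
                heapq.heappush(pq, (d + w, b, nc))
    return None
```

Capping the counter at $2$ keeps "at least two visits" reachable even when a path meets more than two nodes of $U$. |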
Continuous functions with values in separable Banach space dense in $L^{2}$? | Yes. Ignoring questions of measurability: Suppose $f:[0,1]\to X$ and $\int_0^1||f(t)||_X^2\,dt<\infty$.
Say $X_n$ is an increasing sequence of finite-dimensional subspaces of $X$ so that $\bigcup X_n$ is dense in $X$. For each $n$ choose $f_n:[0,1]\to X_n$ such that, say, $$||f_n(t)-f(t)||_X\le 2 d(f(t),X_n)$$for all $t\in [0,1]$. Note that $||f_n(t)||_X\le 3||f(t)||_X$. Since $||f_n(t)-f(t)||_X\to0$ for all $t$, dominated convergence shows that $$\int_0^1||f_n(t)-f(t)||_X^2\,dt\to0.$$
Since $X_n$ is finite-dimensional there certainly exists a continuous function $g_n:[0,1]\to X_n$ with $$\int_0^1||g_n(t)-f_n(t)||_X^2\,dt<\frac1n;$$this is immediate from the scalar-valued case. QED. |
Faster way to calculate the amount of odd coefficients of $(x+1)^{1000}$ | Use recurrence relations for the binomial coefficients $\binom{n}{r} = \frac{n!}{(n-r)!\,r!}$.
There are quite a few; work from smaller $n$ to larger $n$.
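One way to act on this hint (a sketch: build the rows of Pascal's triangle modulo $2$ via $\binom{n}{r} = \binom{n-1}{r-1} + \binom{n-1}{r}$):

```python
def count_odd_coefficients(n):
    row = [1]                      # row 0 of Pascal's triangle mod 2
    for _ in range(n):
        row = [(a + b) % 2 for a, b in zip([0] + row, row + [0])]
    return sum(row)

print(count_odd_coefficients(1000))   # 64
```

Working modulo $2$ from smaller $n$ to larger $n$ avoids the huge factorials entirely. |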
Find the least upper and greatest lower bound of these sets | A way to approach the first problem is in two cases, $q=0$ and $q\ne0$. If $q=0$ then $\frac{p-q}{p+q} = 1$. If $q\ne0$ then $0 \lt \frac{p-q}{p+q} \lt 1$.
A way to see that $\frac{p-q}{p+q}$ gets arbitrarily close to $0$ is to consider when $p$ and $q$ are the closest they can be, $1$ apart, i.e. $p = q +1$. Then $\frac{p-q}{p+q} = \frac{1}{2q+1} $ and $\lim_{q \to \infty} \frac{1}{2q+1} = 0 $
So an upper bound would be $1$ and a lower bound $0$. The set contains $1$, $1$ is also the upper bound, so $1$ is the largest element of the set. The lower bound is $0$, but $\frac{p-q}{p+q}$ can never be $0$, so the set does not have a smallest element.
note: If you don't take $0 \in \mathbb{N}$ then there is no largest element, because $\frac{p-q}{p+q}\ne 1$. |
Why does $\frac{X}{b}=d_n b^{n-1}+\ldots+ d_1+ \frac{d_0}{b}$ imply $d_0$ is the remainder? | If $D,d\in\mathbb N$ then, by definition, the quotient and the remainder of the division of $D$ by $d$ are the only numbers $q,r\in\mathbb{Z}^+$ such that $D=q\times d+r$ and $0\leqslant r<d$.
So, since$$X=b\times(d_nb^{n-1}+d_{n-1}b^{n-2}+\cdots+d_1)+d_0$$and since $0\leqslant d_0<b$, $d_0$ is the remainder of the division of $X$ by $b$, by the definition of remainder. |
Solution of the simple form of the Schrodinger equation | Here's a formal solution:
$$\psi(t,x)=\sum_{k=0}^{+\infty}\left(i\frac{\alpha t}2\right)^k\frac1{k!}\psi_0^{(2k)}(x).$$
It is obtained directly from the formal solution:
$$\psi(t,x)=\mathrm{e}^{\mathrm{i}\alpha t\Delta/2}\psi_0(x),$$
where $\Delta$ is the (spatial) Laplacian, i.e., the “second derivative with respect to $x$”. |
Let N represent the set of natural numbers $\{1,2,3, \ldots, 14, 15\}$ | Yes, the answers are correct, provided I understand your notation correctly, that is, you are trying to express "less than or equal to" and "greater than or equal to" for $X$ and $Y$.
$X = \{ n \in N | n \ge 5\} = \{ 5, 6 , \ldots, 15\}$.
$Y = \{ n \in N | n \le 10\} = \{ 1, 2 , \ldots, 10\}$. |
Complex numbers in fraction | First, we have $8/i=-8i$. The overline signifies complex conjugation; $a+bi$ becomes $a-bi$. So we get the final result as $8i$ or $0+8i$. |
Is there a way to compute $\sum_{n=0}^\infty 1/(1+n!)$? | For a reasonable approximation
$$\sum_{n=0}^\infty\frac{1}{1+n!}\sim\sum_{n=0}^p\frac{1}{1+n!}+\sum_{n=p+1}^\infty\frac{1}{n!}-\sum_{n=p+1}^\infty\frac{1}{(n!)^2}+\cdots$$
$$\sum_{n=p+1}^\infty\frac{1}{n!}=e\left(1-\frac{ \Gamma (p+1,1)}{\Gamma (p+1)}\right)$$
$$\sum_{n=p+1}^\infty\frac{1}{(n!)^2}=I_0(2)-a_p$$ where the $a_p$ make the sequence
$$\left\{0,1,2,\frac{9}{4},\frac{41}{18},\frac{1313}{576},\frac{5471}{2400},\frac{1181737}{518400},\frac{28952557}{12700800},\frac{1235309099}{541900800},\frac{150090055529}{65840947200}\right\}$$
Using $p=9$
$$\frac{10373124947763317933}{6797289565413518325}+e-\frac{98641}{36288}+\frac{150090055529}{65840947200}-I_0(2)$$ which gives
$$1.5260681344733308247571$$ while the "exact" value is
$$1.5260681344733308247780$$
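The series can also be summed directly to high precision (a minimal sketch, assuming `mpmath`):

```python
from mpmath import mp, factorial

mp.dps = 25
s = sum(1 / (1 + factorial(n)) for n in range(60))   # terms decay superexponentially
print(s)
```

The output agrees with the "exact" value quoted above. |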
Gaining an intuitive understanding of the variance of a point estimator | There are some minor errors in your question. What I suspect you actually did was minimise $a^2 \sigma^2_1 +(1-a)^2\sigma^2_2$ by taking the derivative and setting $2a \sigma^2_1 -2(1-a)\sigma^2_2=0$ to find $a = \frac{\sigma_{2}^2}{\sigma_{1}^2+\sigma_{2}^2}$, giving a minimum value of $\frac{\sigma_{1}^2\sigma_{2}^2}{\sigma_{1}^2+\sigma_{2}^2}$
Without calculus, you could "complete the square" by saying $$a^2 \sigma^2_1 +(1-a)^2\sigma^2_2 \\ = a^2 (\sigma^2_1 + \sigma^2_2) -2a\sigma^2_2 + \sigma^2_2 \\ = (\sigma^2_1 + \sigma^2_2) \left( a^2 -2\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}a + \left(\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}\right)^2\right) + \sigma^2_2-(\sigma^2_1 + \sigma^2_2) \left(\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}\right)^2 \\ = (\sigma^2_1 + \sigma^2_2) \left( a -\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}\right)^2 + \frac{\sigma^2_1 \sigma^2_2}{\sigma^2_1 + \sigma^2_2} $$
so the left part of this $(\sigma^2_1 + \sigma^2_2) \left( a -\frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}\right)^2$ is non-negative and is zero when $a = \frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}$
while the right part of this $\frac{\sigma^2_1 \sigma^2_2}{\sigma^2_1 + \sigma^2_2}$ does not vary with $a$
implying that $a^2 \sigma^2_1 +(1-a)^2\sigma^2_2 \ge \frac{\sigma^2_1 \sigma^2_2}{\sigma^2_1 + \sigma^2_2}$ with equality if and only if $a = \frac{\sigma^2_2}{\sigma^2_1 + \sigma^2_2}$
A hand-waving intuition could be that the combined variance is minimised when each part is effectively contributing the same amount to the combined variance. But this may not be immediately obvious. |
Game Theory: Prisoner Dilemma calculating the odds | I think you are assuming that a mixed strategy exists, that is, some probability that the Row player is indifferent between Confess and Deny. (That is what the equation in your question is assuming.) Let $q$ be that probability.
then the row player seeks to choose $q$ to minimize (the payoffs here apparently being years in prison, so smaller is better):
$$U=q[3p+(1-p)1]+(1-q)[10p+2(1-p)].$$
$$=-q(6p+1)+8p+2.$$
Minimize by taking the derivative:
$$\frac{dU}{dq} =-(6p+1).$$
But, as you point out, unless $p$ has the impossible value $-1/6$, the derivative does not equal zero. So the optimum is a corner solution where, since the derivative is negative, $q$ takes on its maximum value, which, for a probability, is $1$.
That is the famous solution to Prisoner's Dilemma, that whatever the other player does, your best (Dominant) strategy is to confess. |
Basis for Nullspace of matrix Null(A) | Both answers are correct. That space has infinitely many bases, of course. |
Drawing straight line graph with large scale | To graph the function, go to the "Window" screen. Set the $y$ limits to $y_{min}=-200{,}000$, $y_{max}=200{,}000$. Also, set the $x$ limits to $x_{min}=0$, $x_{max}=60$. Then your graph will appear as a straight line crossing the $y$-axis at $150{,}000$. |
Help me understand the formal logic notation? | Express each of the following predicates and propositions in formal logic notation. The domain of discourse is the nonnegative integers, $\Bbb N$.
In addition to the propositional operators, variables and quantifiers, you may define predicates using addition, multiplication, and equality symbols, but no constants (like 0, 1, . . . ).
(b) x > 1.
Solution. The straightforward approach is to define $x = 1$ as $\forall y.\ xy = y$ and then express $x > 1$ as $\exists y.\ (y = 1) \wedge (x > y)$.
There is no symbol $1$ in the language. That means we need to come up with a symbol to represent it.
After defining the property "is equal to $1$" by the formula with one free variable $(\forall x)(x \times \cdot =x)$, we can only ever refer to $1$ by this formula. So we make a shorthand for it: the literal string "$z\ \underline{=1}$" is defined to be shorthand for $(\forall x)(x \times z = x)$, whatever $z$ is. I've used an underline to emphasise that this is not an expression in the original language, but a shorthand to be expanded.
Finally, to express $n>1$ without using the forbidden symbol $1$, it's enough to define $1$, assert that it exists at all, and then say that $n$ is bigger than it:
$$(\exists y)(y\ \underline{=1} \wedge n>y)$$
See also the solution to (a) as to how you define "greater than". |
Series using comparison test | Don't be too hasty! For large $n$, $\tan(1/n) \sim 1/n$, where $\sim$ means that the two are asymptotic as $n$ approaches $\infty$. As $n$ increases, your whole summand is asymptotic to $1/n^{3/2}$. I think your series actually converges.
This is a heuristic argument, though it could probably pass in a Calc 2 course. |
The functional equation $ f \left( x ^ 2 y + f ( y ) \right) = x ^ 2 f ( y ) + f \big( f ( y ) \big) $ | In fact, it's not true that linear functions are the only solutions. For any $ a , b \in \mathbb R _ { 0 + } $, the function $ f : \mathbb R \to \mathbb R $ defined with
$$ f ( x ) =
\begin {cases}
a x & x \ge 0 \\
b x & x < 0
\end {cases} $$
is another solution. This can be verified easily, using the fact that $ y $, $ a y $, $ b y $, $ \left( x ^ 2 + a \right) y $ and $ \left( x ^ 2 + b \right) y $ will always have the same sign (in the nonstrict sense), because $ a , b \ge 0 $. Considering these solutions together with linear functions, one can show that there is no other solution, which we will prove.
We continue the argument put forward by the OP, except that we only prove $ g ( y ) = 0 $ for $ y > 0 $, as it may not be true for $ y < 0 $ in the case of the above solutions. Fixing $ y > 0 $, we can choose $ x $ large enough, namely $ x \ge \sqrt { \frac { | a - a y - g ( y ) | } y } $, so that we have $ \left( x ^ 2 + a \right) y + g ( y ) \ge a $, and hence $ g \Big( \left( x ^ 2 + a \right) y + g ( y ) \Big) = 0 $. Then we can use the functional equation for $ g $ to get $ x ^ 2 g ( y ) = - g \big( a y + g ( y ) \big) $, which shows that $ g ( y ) = 0 $, as this is true for all large enough $ x $.
For the negative points, we can take a similar path. Letting $ b = - f ( - 1 ) $ and defining $ h : \mathbb R \to \mathbb R $ with $ h ( x ) = f ( x ) - b x $ for all $ x \in \mathbb R $, we have $ h ( 0 ) = h ( - 1 ) = 0 $ and the functional equation
$$ h \Big( \left( x ^ 2 + b \right) y + h ( y ) \Big) = x ^ 2 h ( y ) + h \big( b y + h ( y ) \big) $$
for all $ x , y \in \mathbb R $. Setting $ y = - 1 $, we get $ h ( x ) = h ( - b ) $ for all $ x \le - b $. In case $ b \le 1 $, we have $ - 1 \le - b $ and since $ h ( - 1 ) = 0 $, $ h ( - b ) = 0 $. In case $ b > 1 $, letting $ y = - b $ and choosing $ x $ large enough so that $ \left( x ^ 2 + b \right) y + h ( y ) \le - b $, we get $ x ^ 2 h ( - b ) = h ( - b ) - h \big( - b ^ 2 + h ( - b ) \big) $, which gives $ h ( - b ) = 0 $ since it's true for all large enough $ x $. So in any case, we have $ h ( x ) = 0 $ for all $ x \le - b $. Again, for any $ y < 0 $, choosing $ x $ large enough so that $ \left( x ^ 2 + b \right) y + h ( y ) \le - b $, we get $ x ^ 2 h ( y ) = - h \big( b y + h ( y ) \big) $, which shows that $ h ( y ) = 0 $, as that is true for all large enough $ x $.
So we know that $ g ( x ) = 0 $ for all $ x \ge \min ( a , 0 ) $ and that $ h ( x ) = 0 $ for all $ x \le \max ( - b , 0 ) $. Note that by definition of $ g $ and $ h $ we have $ g ( x ) - h ( x ) = ( a - b ) x $ for all $ x \in \mathbb R $. This implies that if $ a < 0 $, we can choose $ x $ so that $ a < x < 0 $, and thus have $ g ( x ) = h ( x ) = 0 $, which gives $ a = b $. Similarly, if $ b < 0 $, choosing $ x $ with $ 0 < x < - b $ we get $ a = b $. Therefore, if $ a $ is not equal to $ b $ then $ a , b \ge 0 $.
By definition of $ g $ and $ h $ and the previous results, we have $ f ( x ) = a x $ for $ x \ge 0 $ and $ f ( x ) = b x $ for $ x < 0 $. If $ f $ is not linear, then $ a $ is not equal to $ b $, and thus we must have $ a , b \ge 0 $. We already know that whatever the value of $ a $ and $ b $, we'll get a solution in such a case. Thus there is no solution other than linear ones and those of this form, and we're done. |
Integral of $f(x,y) \in W^{1,1}(B)$ | If $f$ is in $W^{1,1}(B)$, it has a unique trace on the segment $x=0$. This means that as you approach $x=0$ from the left and from the right, you should get the same value.
Hence,
$$1+y^2=a(y-1)^2 +by$$
for all $-1/2<y<1/2$.
This implies that $a=1$ and $b=2$. You can now compute your integrals using trig formulas and integration by parts. |
Artinian local rings which are not algebras over a field | For 2. this is alright. Clearly we have $\mathbb{Z}\subset A$ and all non-zero elements of $\mathbb{Z}$ are units in $A$, no artinian assumption is necessary. Thus, for any local ring with $k=A/\mathfrak{m}$, $\mathbb{Q}\subset A$, with no artinian assumptions.
If $k=A/\mathfrak{m}$ is a field of characteristic zero, then Hensel's lemma would ensure that $k\subset A$.
Finally, in the case where $k$ as above has characteristic $p>0$, the theory of Witt vectors says that $A$ may not be a $k$-algebra. |
Find whether a point is closer to any part of a line than other points | What I think you are missing is that for any point $p$, there is a unique point, $q(p)$, on the line segment $\overline{AB}$ that is nearest to $p$. In general, there is a well-defined notion of distance between two arbitrary non-empty subsets of the plane $X$ and $Y$, say, defined by:
$$
d(X, Y) = \inf \{d(x, y) \mid x \in X, y \in Y\}
$$
(where $\inf$ gives the greatest lower bound of a bounded-below non-empty set of real numbers).
In your case, take $X = \overline{AB}$ and $Y=P$.
Because $P$ is finite there will be at least one $p \in P$ such that $ d(q(p), p) = d(\overline{AB}, \{p\}) = d(\overline{AB}, P)$ and it is the point or points with that property that you are trying to find.
To find $q(p)$ for any given $p$, you first of all find the
orthogonal projection, $p_o$ say, of $p$ onto the infinite extension of $\overline{AB}$ (which you can
do with a bit of vector arithmetic involving the dot product operation). $p_o$ is the closest point to $p$ on the infinitely extended line. If $p_o$ lies on $\overline{AB}$, then $q(p) = p_o$; otherwise $q(p)$ is whichever of $A$ and $B$ is nearer to $p$. (Note that $p$ cannot be equidistant from $A$ and $B$ if $p_o$ does not lie on $\overline{AB}$.)
To find the $p \in P$ and the corresponding point $q(p)$ that you are looking for, you just repeat the above construction for every $p \in P$ and then choose a $p$ that minimises $d(q(p), p)$ (there may be more than one such $p$ in general).
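A sketch of the projection step with `numpy` (the function names are mine):

```python
import numpy as np

def closest_point_on_segment(A, B, p):
    # Orthogonal projection of p onto line AB, clamped to the segment.
    AB = B - A
    t = np.dot(p - A, AB) / np.dot(AB, AB)
    return A + np.clip(t, 0.0, 1.0) * AB

def nearest_in_P(A, B, P):
    # Return (p, q(p)) for the p in P nearest to the segment AB.
    pairs = [(p, closest_point_on_segment(A, B, p)) for p in P]
    return min(pairs, key=lambda pq: np.linalg.norm(pq[1] - pq[0]))
```

Clamping $t$ to $[0,1]$ implements the rule of falling back to the nearer of $A$ and $B$ when $p_o$ lies off the segment. |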
Let $(x_n)$ and $(y_n)$ be sequences such that $ x_n$ in $[0, 1]$ and $y_n$ in $[1, 2]$ | The Bolzano–Weierstrass theorem holds not only for bounded sequences in $\Bbb R$, but also for bounded sequences in the finite-dimensional space $\Bbb R^d$. This can be seen by repeatedly picking convergent subsequences in each coordinate.
In your case ($d=2$) that would work as follows:
First apply Bolzano-Weierstrass to the (bounded) sequence $(x_n)_n$, that gives a convergent subsequence $(x_{n_k})_k$.
Then apply Bolzano-Weierstrass to the (bounded) subsequence $(y_{n_k})_k$ of $(y_n)_n$, that gives a convergent subsequence $(y_{n_{k_j}})_j$. Note that $(x_{n_{k_j}})_j$ is also convergent as a subsequence of $(x_{n_k})_k$.
Therefore, with $m_j = n_{k_j}$, $(x_{m_j})_j$ and $(y_{m_j})_j$ are “common” convergent subsequences of $(x_n)_n$ and $(y_n)_n$ respectively. |
Is this matrix definite positive? | It depends. $A$ is positive semidefinite for all choices of $y_i$, as for any $x \in \mathbb R^k$, we have
\begin{align*}
x^tAx &= \sum_{i=1}^n x^ty_i \cdot y_i^tx\\
&= \sum_{i=1}^n (x^ty_i)^2\\
& \ge 0
\end{align*}
If the $(y_i)$ form a spanning set for $\mathbb R^k$, then $A$ is definite, as then given $x \ne 0$, there is an $i$ with $x^t y_i \ne 0$, hence $(x^t y_i)^2> 0$, giving $> 0$ in the last line above.
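A quick illustration (a sketch with `numpy`; the random vectors are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
ys = rng.normal(size=(5, 3))     # five vectors y_i in R^3: generically a spanning set
A = sum(np.outer(y, y) for y in ys)
print(np.linalg.eigvalsh(A))     # all eigenvalues positive: A is definite
```

If the $y_i$ lie in a proper subspace (e.g. there are fewer than $k$ of them), a zero eigenvalue appears and $A$ is only semidefinite. |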
Prove that a ring R having the property that every finitely generated R-module is free is either a field or the zero ring. | Let $I \subseteq R$ be an ideal. Then $R/I$ is an $R$-module. Moreover, it is generated by the element $\overline{1} \in R/I$. So $R/I$ is a finitely-generated $R$-module.
Then by assumption $R/I$ is free. Assume for contradiction that $I$ is not equal to $0$ or $R$. First, since $R/I \neq 0$, $R/I$ is not spanned by the empty set. So it must have a basis $\{\overline{r_1}, \ldots \overline{r_k}\}$ which is nonempty. But since $I \neq 0$, we can choose some $i \in I\setminus \{0\}$, and then $i\cdot \overline{r_1} = 0$. Thus, the "basis" is actually linearly dependent.
This is a contradiction, and thus we conclude that $I$ must have been $0$ or $R$, i.e. $0$ and $R$ are the only ideals of $R$. This implies that $R$ is either the zero ring or a field. |
Straightedge-and-compass construction of the "kissing circles" for three given circles | If a non-euclidean construction is allowed:
The locus of centers of circles touching two given externally touching circles is a hyperbola.
The centers of the given circles are the foci; their common point is a point on the hyperbola.
This fits perfectly with GeoGebra's options.
For a similar problem with a detailed explanation, refer to this one.
EDIT
A straightedge-and-compass construction by Eric Eppstein is available here. |
Why is the expected number coin tosses to get $HTH$ is $10$? | Let the expected number of tosses until we get the pattern HTH be $a$.
Let the expected additional waiting time given that we have just tossed an H (and are not finished) be $b$.
Let the expected additional waiting time given that our last two tosses have been HT be $c$.
By conditioning, we have the following equations:
$$a=1+\frac{1}{2}b+\frac{1}{2}a;$$
This is because on our first toss, we have used up a toss. If we got an H, our expected additional time is $b$. If we got a T, we have made no progress, and our additional expected time is $a$.
$$b=1+\frac{1}{2}b+\frac{1}{2}c.$$
$$c=1+\frac{1}{2}a.$$
Since $a$, $b$, and $c$ are clearly finite, we can find them by solving the above system of three linear equations.
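Solving that $3\times3$ system numerically (a minimal sketch with `numpy`):

```python
import numpy as np

# a/2 - b/2 = 1,  b/2 - c/2 = 1,  -a/2 + c = 1   (the three equations rearranged)
M = np.array([[ 0.5, -0.5,  0.0],
              [ 0.0,  0.5, -0.5],
              [-0.5,  0.0,  1.0]])
print(np.linalg.solve(M, np.ones(3)))   # [10.  8.  6.]
```

so $a = 10$, $b = 8$, $c = 6$: the expected number of tosses is $10$. |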
analysis double series sums | The matrix looks like this: $$\pmatrix{-1&0&0&0&\dots\cr1/2&-1&0&0&\dots\cr1/4&1/2&-1&0&\dots\cr1/8&1/4&1/2&-1&\dots\cr\vdots&\vdots&\vdots&\vdots&\vdots\cr}$$ Look at any column. Ignoring any leading zeros, the sum of the entries is $-1+(1/2)+(1/4)+(1/8)+\cdots=0$, since $(1/2)+(1/4)+(1/8)+\cdots=1$ as the sum of a geometric series. So this says that for every $j$, $\sum_ia_{ij}=0$ --- the sum just adds up all the numbers in column $j$. Then $\sum_j\sum_ia_{ij}=\sum_j0=0$.
Now, look at the rows. The terms in the first row sum to $-1$; the second row, $-1/2$; the third row, $-1/4$; the fourth row, $-1/8$; in general, the $i$th row, $-1/2^{i-1}$ (to be rigorous, you'd have to insert a proof by induction here, but I'm sure you can handle that). So, $\sum_i\sum_ja_{ij}=\sum_i(-1/2^{i-1})$, and that's a geometric series with sum $-2$. Done.
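A truncated numerical illustration of the two iterated sums (a sketch with `numpy`):

```python
import numpy as np

N = 30
i, j = np.indices((N, N))
A = np.where(i > j, 0.5 ** np.abs(i - j), 0.0) - np.eye(N)
print(A.sum(axis=0)[:5])    # column sums: essentially 0
print(A.sum(axis=1).sum())  # total of the row sums: essentially -2
```

In the truncation each column sum is $-1/2^{N-1-j}$, which vanishes as $N$ grows, while the row totals approach $-2$, matching the argument above. |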
5 points determine a conic uniquely | You will need the condition that $P_i$'s are in general position, which means an open subset (in the Zariski sense) of all the possible configurations $\{(P_1,\ldots,P_5)\}$.
Precisely, you need to show that the condition that $\{F(P_i)\}$ be linearly independent is an open condition. Indeed, by basic linear algebra, the dependence of $\{F(P_i)\}$ is equivalent to the vanishing of the determinants $D_i=D_i(P_1, \ldots, P_5)$ $(i=1,\ldots,6)$ of all the $5\times 5$ submatrices of
$\begin{bmatrix}F(P_1)\\\vdots\\F(P_5)\end{bmatrix}$ are zero, which is a closed condition. |
Find the inverse of a polynomial in a quotient ring | The usual approach to find the inverse of a polynomial $f$ when working modulo $g$ is to use the extended Euclidean algorithm to find polynomials $u$ and $v$ such that
$$ uf + vg = \gcd(f,g) = 1 $$
from which it immediately follows
$$ uf \equiv 1 \pmod g $$
If $\gcd(f,g)$ is not a unit, then $f$ does not have an inverse modulo $g$. The contrapositive is easy to see: if you have a polynomial $u$ such that
$$ uf \equiv 1 \pmod g $$
then there must exist a polynomial $v$ such that
$$ uf -1 = vg $$
and $uf - vg = 1$ implies $\gcd(f,g)$ divides $1$.
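A worked example (a sketch with `sympy`; the specific polynomials over $\mathrm{GF}(3)$ are my own choice):

```python
from sympy import Poly, symbols

x = symbols('x')
f = Poly(x**2 + 1, x, modulus=3)
g = Poly(x**3 + 2*x + 1, x, modulus=3)

u, v, d = f.gcdex(g)    # extended Euclid: u*f + v*g == d
print(d)                # the gcd; a unit iff the inverse exists
print((u * f) % g)      # 1 here, so u is the inverse of f modulo g
```

Here `d` comes out as $1$, so `u` is the inverse of `f` modulo `g`. |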
Range of $f\in C_c(X)$ is compact subset of complex plane | Your proof is correct but verbose. Since it's either $f(X) = f(K)$ or $f(X) = f(K) \cup \{0\}$, you only need to show that both $f(K)$ and $f(K)\cup \{0\}$ are compact.
You seem to have asked quite a few questions on proof verification. It might be a good practice for you to take time verifying your own proof, and explain the part you feel dubious about. |
Orthogonal trajectories: eliminating parameter | I'll give it a try. Implicit differentiation gives $y' = - \frac{x(a^2+t)}{ya^2}$. For orthogonal trajectories, take the negative reciprocal to get $y' = \frac{ya^2}{x(a^2+t)}$. By separation of variables, $\frac{y'}{ya^2}=\frac{1}{x(a^2+t)}$. Integrating both sides gives $\frac{\ln|y|}{a^2} = \frac{\ln|x|}{a^2+t}+C$, from which it becomes algebra to solve for $y$ if you wish. Hope this is what you are looking for. |
Numerical method for system of linear and quadratic equations in several variables | So if I understood your question correctly, you are trying to solve a nonlinear system of equations of the form
$$ Ax + x^{T}Bx = b .$$
Restating it as
$$ ( A + x^{T}B ) x = b $$
$$ \Leftrightarrow x = (A + x^{T}B)^{-1}b = f(x) $$
$$ \Leftrightarrow x - f(x) = 0 $$
we arrive at a fixed-point formulation. Any root-finding method can now be used. For example, the simplest approach would be a fixed-point iteration of the form
$$ x_{i+1} = (A + x_{i}^{T}B)^{-1}b ,$$
but alternatives include Newton's or any other higher order method of the Housholder family.
Note 1: In $x_{i+1} = (A + x_{i}^{T}B)^{-1}b$, the matrix $(A + x_{i}^{T}B)^{-1}$ should not be interpreted as computing the inverse, but as solving the linear system of equations $(A + x_{i}^{T}B)x_{i+1} = b$. This system of equations will be solved multiple times, until some criterion is met (e.g. the residual $r = \lvert ( A + x^{T}B ) x - b \rvert $ being smaller than some prescribed tolerance $\epsilon_{tol}$).
Note 2: this works assuming a unique solution exists. The closer your initial guess $x_0$ is to the "exact" solution, the fewer iterations you will need. As such, a first step towards improving your method is to get a better initial guess. Another possibility is to use a method with a larger convergence radius first (e.g. a fixed-point iteration), and then switch to a method with better convergence properties when some residual criterion is met (e.g. switch to Newton's method if $r<10^{-2}$).
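A minimal sketch of the fixed-point iteration with `numpy` (here I interpret $x^{T}B$ as the matrix $M(x)_{ij}=\sum_k x_k B_{kij}$ built from a third-order tensor $B$; the example data are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = 5*np.eye(n) + 0.1*rng.normal(size=(n, n))   # dominant linear part
B = 0.05*rng.normal(size=(n, n, n))             # small quadratic part
b = rng.normal(size=n)

x = np.linalg.solve(A, b)                       # initial guess from the linear part
for it in range(100):
    M = np.einsum('k,kij->ij', x, B)            # M(x) = x^T B
    x_new = np.linalg.solve(A + M, b)           # one fixed-point step
    if np.linalg.norm(x_new - x) < 1e-12:
        x = x_new
        break
    x = x_new

print(it, np.linalg.norm(A @ x + np.einsum('k,kij->ij', x, B) @ x - b))
```

With the quadratic part small relative to $A$, the iteration contracts and the residual drops to machine precision within a few steps. |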
Block inverse of symmetric matrices | Certainly, one can use the bordering method for this (a special case of the usual formula for block inversion):
$$\begin{pmatrix}\mathbf A&\mathbf \delta\\\mathbf \delta^\top&Z\end{pmatrix}^{-1}=\begin{pmatrix}\mathbf A^{-1}+\frac{\mathbf A^{-1}\mathbf \delta\mathbf \delta^\top\mathbf A^{-1}}{\mu}&-\frac{\mathbf A^{-1}\mathbf \delta}{\mu}\\-\frac{\mathbf \delta^\top\mathbf A^{-1}}{\mu}&\frac1{\mu}\end{pmatrix}$$
where $\mathbf \delta^\top=(X\quad Y)$ and $\mu=Z-\mathbf \delta^\top\mathbf A^{-1}\mathbf \delta$.
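A numerical sanity check of the bordering formula (a sketch with `numpy`, using a generic border vector):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n)); A = A @ A.T + n*np.eye(n)   # invertible block
d = rng.normal(size=(n, 1))                              # the border delta
Z = 5.0

Ainv = np.linalg.inv(A)
mu = Z - (d.T @ Ainv @ d).item()
block_inv = np.block([[Ainv + (Ainv @ d @ d.T @ Ainv)/mu, -Ainv @ d/mu],
                      [-d.T @ Ainv/mu,                    np.array([[1/mu]])]])
M = np.block([[A, d], [d.T, np.array([[Z]])]])
print(np.allclose(block_inv, np.linalg.inv(M)))   # True
```

which confirms the formula componentwise. |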
Limit integral switch for nondecreasing functions | We can show, in this case, that the convergence is uniform and switching the limit and integral is permissible.
Since $f$ is uniformly continuous on $[a,b]$, for any $\epsilon > 0$ there is a partition where on each subinterval we have
$$\tag{*}-\epsilon/2 < f(x_j) - f(x_{j-1}) < \epsilon/2$$
Any $x \in [a,b]$ belongs to some subinterval $[x_{j-1},x_j]$. By pointwise convergence and monotonicity of $f$ (since $f_n \to f$ and $f_n$ is nondecreasing), there exists $N$, depending on the finitely many partition points and on $\epsilon$ but not on $x$, such that for $n > N$
$$f(x_{j-1}) - f(x_j) - \epsilon/2 < f_n(x_{j-1}) - f(x_j) \\< f_n(x) - f(x) \\< f_n(x_j) - f(x_{j-1}) < f(x_j) - f(x_{j-1}) + \epsilon/2.$$
Using (*) it follows that $|f_n(x) - f(x)| < \epsilon$ for all $x \in [a,b],$ and convergence is uniform. |
Finding moment generating functions for a dice roll | Let $X$ be the outcome of a fair dice roll. Thus, we have $\mathbb{P}(X = i) = 1/6, ~ i = 1,\ldots,6$.
The probability-generating function of $X$ is defined as
\begin{align}
P_X(z) := \mathbb{E}[z^X] = \sum_{x = 1}^6 \mathbb{P}(X = x) z^x = 1/6 \sum_{x = 1}^6 z^x = \frac{1}{6} \frac{z(z^6-1)}{z-1},
\end{align}
with $|z| \le 1$.
The moment-generating function of $X$ is defined as
\begin{align}
M_X(t) := \mathbb{E}[e^{tX}] = \sum_{x = 1}^6 \mathbb{P}(X = x) e^{tx} = 1/6 \sum_{x = 1}^6 e^{tx} = \frac{1}{6} \frac{e^t(e^{6t} - 1)}{e^t - 1},
\end{align}
with $t \in \mathbb{R}$.
The relation between the two generating functions is $P_X(e^t) = \mathbb{E}[e^{tX}] = M_X(t)$. Using the probability-generating function, one can compute the probability mass function and so-called $k$th factorial moments, see the wiki. The moment-generating function allows us to compute the moments of a random variable, see the wiki.
Answering the question in the comment
Computing the sums (in my opinion) is all about knowing a few tricks. I will show you how to do it for the two simplifications of the sum above. We assume that $|a| < 1$.
\begin{align}
I &= \sum_{i = 1}^n a^i = a \sum_{i = 0}^{n-1} a^i = a(1 + a + a^2 + \ldots + a^{n-1}) \\
&= a(1 + a + a^2 + \ldots + a^{n-1} + a^n - a^n ) \\
&= a(1 + I - a^n) = aI + a(1 - a^n) \\
\Rightarrow I(1-a) &= a(1-a^n) \\
\Rightarrow I &= \frac{a(1-a^n)}{(1-a)} = \frac{a(a^n-1)}{a-1}
\end{align}
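A numerical cross-check of the closed form (a minimal sketch with `numpy`):

```python
import numpy as np

t = 0.3
faces = np.arange(1, 7)
direct = np.mean(np.exp(t * faces))                            # E[e^{tX}] term by term
closed = np.exp(t) * (np.exp(6*t) - 1) / (6 * (np.exp(t) - 1))
print(np.isclose(direct, closed))                              # True
```

The same check works for the probability-generating function by putting $z = e^t$. |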
create a number using 4 different digits | Let $A_{k,n}$ be the set of codes of length $n$ written using only the digit $k$ (for $k\in\{4,5,6\}$), $B_{k,l,n}$ the set of codes of length $n$ written with at most the digits $k$ and $l$ ($k\ne l$), and $C_{4,5,6,n}$ the set of codes of length $n$ written with at most the digits $4$, $5$ and $6$.
What you want to compute is the cardinality of
$$C_{4,5,6,n}\setminus(B_{4,5,n}\cup B_{4,6,n}\cup B_{5,6,n})$$
But, as $B_{4,5,n}\cap B_{4,6,n}=A_{4,n}$, we have :
\begin{align}&{\rm card}(B_{4,5,n}\cup B_{4,6,n}\cup B_{5,6,n}) \\ =& {\rm card}(B_{4,5,n}) + {\rm card}(B_{4,6,n})+ {\rm card}(B_{5,6,n}) - {\rm card}(A_{4,n}) - {\rm card}(A_{5,n}) - {\rm card}(A_{6,n}) \\
=&2^n+2^n+2^n-1^n-1^n-1^n \\ =&3(2^n-1)
\end{align}
So the searched number is
$$3^n-3\cdot 2^n+3$$
For $n=4$, you find $3^4-3\times2^4+3=36$, which is also what you find by choosing the repeated letter: $3\times\binom42\times2!$. And for $n=5$, you find $150$, as mentioned above by @vks.
This calculation uses the principle of inclusion-exclusion; I think you can generalize it quite easily.
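A brute-force confirmation of the count (a minimal sketch):

```python
from itertools import product

def count(n):
    # codes of length n over {4,5,6} that use all three digits
    return sum(1 for c in product('456', repeat=n) if set(c) == set('456'))

print(count(4), 3**4 - 3*2**4 + 3)   # 36 36
print(count(5), 3**5 - 3*2**5 + 3)   # 150 150
```

matching the formula for $n=4$ and $n=5$. |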
Does a non-zero wedge product make a coordinate system? | If you work in a local coordinate chart $(U,(x_1,\ldots,x_n))$ then you have that $$df_1\wedge\ldots\wedge df_n=\det\left(\frac{\partial f}{\partial x}\right)dx_1\wedge\ldots\wedge dx_n$$
where $\frac{\partial f}{\partial x}$ is the Jacobian matrix for $f(x)=(f_1(x),\ldots, f_n(x))$.
So $df_1\wedge\ldots \wedge df_n(p)\neq 0$ if and only if the Jacobian $\frac{\partial f}{\partial x}(p)$ is invertible; and by the inverse function theorem this last condition means that $x\to f(x)$ is a diffeomorphism around $p.$ |
Question regarding gcd in polynomial ring over a field | Yes. And you can find it inductively: first you find the gcd of the first two, then you use that gcd with the third polynomial to find the gcd for the first three and so on. You can then substitute backwards to find the $a_i$. |
How to solve this multivariable recursion? | For the recursive multivariate function $a(m,n,k)$, we can define the generating function:
$$\Phi(x,y,z)=\sum_{m,n,k}a(m,n,k) \cdot x^m y^n z^k$$
Using the initial values above, we can define $a(m,n,k)$ as follows:
$$a(m, n, k) = 2a(m-1, n-1, k-1) + a(m-1, n-1, k) + a(m-1, n, k-1) + a(m, n-1, k-1) + [m=n=k=0] + [m=n=0 \wedge k=1] + [m=k=0 \wedge n=1] + [n=k=0 \wedge m=1]$$
Actually, I believe there are still some initial conditions missing, since for example $a(0,1,1)$ is not well defined. Computing its value will result in negative arguments: $a(0,1,1) = 2a(-1, 0, 0) + a(-1, 0, 1) + a(-1, 1, 0) + a(0, 0, 0) = 2a(-1, 0, 0) + 2a(-1, 0, 1) + 1$.
Adding the extra condition that $a(m,n,k)=0$ for any negative argument(s) solves the issue.
Let's move forward and substitute the definition of $a(m,n,k)$ into the generating function:
\begin{align}
\Phi(x,y,z)
&=\sum_{m,n,k}a(m,n,k) \cdot x^m y^n z^k \\
&= 2\sum_{m,n,k}a(m,n,k) \cdot x^{m+1} y^{n+1} z^{k+1} + \sum_{m,n,k}a(m,n,k) \cdot x^{m+1} y^{n+1} z^k + \sum_{m,n,k}a(m,n,k) \cdot x^{m+1} y^n z^{k+1}+\sum_{m,n,k}a(m,n,k) \cdot x^m y^{n+1} z^{k+1} + 1 + x + y + z \\
&= 2\Phi(x,y,z) \cdot x y z + \Phi(x,y,z)\cdot x y + \Phi(x,y,z) \cdot x z + \Phi(x,y,z) \cdot y z + 1 + x + y + z \\
&= \Phi(x,y,z)\left(2 x y z + x y + x z + y z\right) + 1 + x + y + z
\end{align}
After a trivial transformation we can get the generating function for $a(m,n,k)$:
$$ \Phi(x,y,z) = \frac{1 + x + y + z}{1-2 x y z - x y - x z - y z}$$
for the next steps we will use the following well-known formulas:
\begin{align}
\frac{1}{1-\rho} &= \sum_i \rho^i \\
(x_1+x_2+x_3+x_4)^N &= \sum_{k_1+k_2+k_3+k_4=N}\binom{N} {k_1,k_2,k_3,k_4}x_1^{k_1}x_2^{k_2}x_3^{k_3}x_4^{k_4} \\
\binom{N}{k_1,k_2,k_3,k_4}&=\frac{N!}{k_1!k_2!k_3!k_4!}
\end{align}
Applying them, we get:
\begin{align}
\Phi(x,y,z)
&= \frac{1 + x + y + z}{1-2 x y z - x y - x z - y z} \\
&= (1 + x + y + z) \sum_{N}(2 x y z + x y + x z + y z)^N \\
&= (1 + x + y + z) \sum_{N \ge 0} \sum_{k_1+k_2+k_3+k_4=N} \binom{N} {k_1,k_2,k_3,k_4} (2 x y z)^{k_1} \cdot (x y)^{k_2} \cdot (x z)^{k_3} \cdot (y z)^{k_4} \\
&= (1 + x + y + z) \sum_{N \ge 0} \sum_{k_1+k_2+k_3+k_4=N} \binom{N} {k_1,k_2,k_3,k_4} 2^{k_1} x^{k_1+k_2+k_3} y^{k_1+k_2+k_4} z^{k_1+k_3+k_4} \\
&= (1 + x + y + z) \sum_{N \ge 0} \sum_{k_1+k_2+k_3\leq N} \binom{N} {k_1,k_2,k_3,N-k_1-k_2-k_3} 2^{k_1} x^{k_1+k_2+k_3} y^{N-k_3} z^{N-k_2}
\end{align}
From the last expression, it is not hard to get the formula for $a(m,n,k)$, which by the definition of $\Phi(x,y,z)$, equals to the coefficient for $x^m y^n z^k$:
\begin{align}
a(m,n,k)
&= \sum_{\max(m,n,k)\leq N\leq \frac{1}{2}(m+n+k)} \binom{N}{m+n+k-2N,N-m,N-n,N-k} 2^{m+n+k-2N} \\
&+ \sum_{\max(m-1,n,k)\leq N\leq \frac{1}{2}(m+n+k-1)} \binom{N}{m+n+k-2N-1,N-m+1,N-n,N-k} 2^{m+n+k-2N-1} \\
&+ \sum_{\max(m,n-1,k)\leq N\leq \frac{1}{2}(m+n+k-1)} \binom{N}{m+n+k-2N-1,N-m,N-n+1,N-k} 2^{m+n+k-2N-1} \\
&+ \sum_{\max(m,n,k-1)\leq N\leq \frac{1}{2}(m+n+k-1)} \binom{N}{m+n+k-2N-1,N-m,N-n,N-k+1} 2^{m+n+k-2N-1}
\end{align}
Probably, there is a way to simplify the last expression.
The closed formula can be computed programmatically faster (and with less memory consumption) than the original recursive function.
The recursion for $a(m,n,k)$ requires $\Theta(m\cdot n \cdot k)$ space and time if Dynamic Programming technique is used.
For the closed formula, we can precompute factorials and then perform iterations over $N$. In this approach the space and time complexity will be $\Theta(n+m+k)$.
Note: The performance analysis is not very accurate, since we ignored the time complexity for the arithmetic operations.
Let's just write a simple Python-3 program that computes $a(m,n,k)$ using two different expressions:
```python
import functools
import math

@functools.lru_cache(maxsize=None)
def a(m, n, k):
    if min(m, n, k) < 0:
        return 0
    if m + n + k == 0 or m + n + k == 1:
        return 1
    if m + n == 0 or m + k == 0 or n + k == 0:
        return 0
    return 2*a(m-1, n-1, k-1) + a(m-1, n-1, k) + a(m-1, n, k-1) + a(m, n-1, k-1)

@functools.lru_cache(maxsize=None)
def binom4(N, m, n, k):
    m, n, k, r = sorted([m, n, k, N-m-n-k])
    assert m >= 0
    return math.factorial(N) // math.factorial(r) // math.factorial(k) // math.factorial(n) // math.factorial(m)

def a1(m, n, k):
    if min(m, n, k) < 0:
        return 0
    s = 0
    for N in range(max(m, n, k), (m + n + k) // 2 + 1):
        s += binom4(N, N-m, N-n, N-k) * 2**(n+m+k-2*N)
    for N in range(max(m-1, n, k), (m + n + k-1) // 2 + 1):
        s += binom4(N, N-m+1, N-n, N-k) * 2**(n+m+k-2*N-1)
    for N in range(max(m, n-1, k), (m + n + k-1) // 2 + 1):
        s += binom4(N, N-m, N-n+1, N-k) * 2**(n+m+k-2*N-1)
    for N in range(max(m, n, k-1), (m + n + k-1) // 2 + 1):
        s += binom4(N, N-m, N-n, N-k+1) * 2**(n+m+k-2*N-1)
    return s

r = a(100, 200, 210)
r1 = a1(100, 200, 210)
print("recursive equation:", r)
print("closed formula:", r1)
print(r == r1)
```
Note: We did not use the property of symmetry; it can improve the computation, but I do not see where it can simplify the closed expression. |
Compensation Martingale | The idea in your answer is good, but you have to be more explicit.
For (a), you want a sequence $(a_n)$ such that, defining $Z_n:=Y_n+a_n$, the sequence $(Z_n,\mathcal F_n)$ is a martingale. And indeed, after having computed $\mathbb E\left[Z_{n+1}\mid\mathcal F_n\right]-Z_n$, we see that we need $a_{n+1}-a_n=c_n$.
Same thing for (b), except that this time, $Z_n:=b_nY_n$.
In both cases, the $a_n$ and $b_n$ can be written explicitly in terms of the $c_n$'s. |
[solved]Compound interest problem | I get \$363 as the first payment $P$, solving
\begin{align*}
P + 2P v^3 & = 500 \times 1.05^3 v^3 + 500 \times (1+\frac{6\%}{2})^{(2\times4)} v^4, \\
\mbox{where }v &= 1/1.07. \\
\mbox{I.e.} P & = 500 \frac{ 1.05^3 \times 1.07^{-3} + 1.03^8 \times 1.07^{-4} }{1 + 2 \times 1.07^{-3}} \\
& = 363.
\end{align*}
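Checking the arithmetic (a minimal sketch; the variable names are mine):

```python
v = 1 / 1.07
P = 500 * (1.05**3 * v**3 + 1.03**8 * v**4) / (1 + 2 * v**3)
print(round(P, 2))   # 363.02
```

which rounds to the \$363 quoted above. |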
Lie Groups map question | Yes, I came across the same question. It would make sense for it to be $B \mapsto A \circ B$. From this, as $L_A$ is linear, we would get $$TL_A|_{T_B(GL(V))} : T_B(GL(V)) \simeq \{B\} \times L(V,V) \to T_{L_A(B)}(GL(V)) \simeq \{A \circ B\} \times L(V,V)$$ given by
$$\begin{align}TL_A|_{T_B(GL(V))} (B,X) &= (A \circ B, DL_A(X)) \\&= (A \circ B, L_A(X))\\&= (A \circ B, A \circ X) \end{align}$$ |
Existence of a ball passing through points in the interior of a given ball . | This is not necessarily true.
Consider the three-dimensional case with the unit sphere and two points inside the sphere at $(\frac{1}{4},0,0)$ and $(\frac{1}{2},0,0)$, together with a third point at $(\frac{3}{4},0,0)$.
Essentially, the points that must be on the boundary of $B$ have many more constraints than they would in the two-dimensional case. Furthermore, the points must be equidistant from some point $p$, where $p$ is the center of our ball $B$. However, I believe that if they all lie on a line inside this ball, then there is no ball $B$ that contains all of the $n$ points on its boundary. |
Consider $\sum_{k=1}^{\infty}{1\over 3^k}\cdot{\sinh(a/3^k)+\sinh(2a/3^k)\over 2\cosh(a/3^k)+\cosh(2a/3^k)+1.5}=S_a$ | Since you already have a conjectural $S_a$ that is related with $\coth\left(\frac{a}{2}\right)$, and the given series is screaming for creative telescoping, the most reasonable thing is to check how we may simplify
$$ 3a\coth(3a/2)-a\coth(a/2) $$
through the (hyperbolic) cotangent triplication formulas. If that gives (essentially) the general term of the series, we are fine. And indeed, it does:
$$ 3\coth(3x)-\coth(x) = \frac{8 \cosh(x)\sinh(x)}{1+2 \cosh(x)^2+2 \sinh(x)^2}.$$
At last, we just need to exploit the fact that $\lim_{x\to 0^+}x\coth(x)=1$.
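The triplication identity is easy to sanity-check numerically (a minimal sketch with `mpmath`):

```python
from mpmath import mpf, coth, cosh, sinh

x = mpf('0.7')
lhs = 3*coth(3*x) - coth(x)
rhs = 8*cosh(x)*sinh(x) / (1 + 2*cosh(x)**2 + 2*sinh(x)**2)
print(lhs - rhs)   # ~0
```

The difference is zero to working precision. |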
Algebraic multiplicity = geometric multiplicity? | Sure, I can give a simple example:
$$
A=\begin{pmatrix}
1 & 1\\
0 & 1
\end{pmatrix}.
$$
The characteristic polynomial is $(\lambda - 1)^2$, so the algebraic multiplicity is $2$; however, the geometric multiplicity is $1$: indeed, $\dim \ker(A-I)=1$.
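The two multiplicities can be read off numerically (a minimal sketch with `numpy`):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.linalg.eigvals(A))                       # [1. 1.]: algebraic multiplicity 2
print(2 - np.linalg.matrix_rank(A - np.eye(2)))   # 1: geometric multiplicity dim ker(A - I)
```

confirming the gap between the two multiplicities. |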
What is the expected number of children until having at least a girl and a boy? | Your reasoning is incorrect because even though you're assuming that the firstborn is a girl, asking for "the number of girls before the first boy" includes cases where that number is zero (in which case your assumption from before doesn't hold).
Alternatively, you've forgotten to count the boy that's eventually born (after $1+X$ girls). |
Elliptic PDEs in Banach space | $\bullet$ For a bounded $\Omega\subset\mathbb{R}^d$ with $\partial\Omega\in C^1$, the dual space $(W_0^{1,p})'$ will be isometric to $W_0^{1,p'}$ if $a\in C(\overline{\Omega})$ with the $L_p$-theory being rather trivial, since an elliptic operator $L={\rm div}(a\nabla\cdot)$ with $a\in C(\overline{\Omega})$ in this context is equivalent to the Laplacian. When $a$ is not continuous, the $L_p$-theory becomes rather nontrivial, while the isometry generally fails outside certain neighbourhood of $p=2$. For details see "Elliptic and Parabolic Equations with Discontinuous Coefficients" by A.Maugeri, D.K.Palagachev, L.G. Softova and references therein. For instance, in the simplest case of $d=2$ every angular point on discontinuity line of $a$ might be a singular point of solution with RHS in $C_0^{\infty}(\Omega)$, as well as every intersection point of $\partial\Omega$ with a smooth line of discontinuity of $a$.
$\bullet$ For a bounded domain $\Omega\subset\mathbb{R}^d$ with $\partial\Omega\in C^1$, if $a$ is Lipschitz, the answer coincides with that for the Laplacian in case $\Omega=\mathbb{R}^d$, i.e., the solution $u\in W^{s,p}$ where $0<s-1<1-\frac{d}{p'}$, which is readily established using the standard PDE $L_p$-theory techniques. The problem with these techniques is that they still largely stay within mathematical folklore, i.e., they are already widely known, but not yet to be found in textbooks. |
Proof that the characteristic function of a bounded open set is in $H^{\alpha}$ iff $\alpha < \frac{1}{2}$ | As I mentioned in comments, that $\alpha <1/2$ implies the result is already on MSE: To what fractional Sobolev spaces does the step function belong? (Sobolev-Slobodeckij norm of step function). A more general result can be found in this paper. I wrote out the computation slowly in Lemma 6.1 of this preprint.
For the negative result in the case $\alpha = 1/2$ (and therefore $\alpha \ge 1/2$), we lower bound the square of the Gagliardo seminorm, which, for indicators $\chi_D$, is the following double integral:
$$[ \chi_D]_{H^{1/2}}^2 = \int_D\int_{D^c}\frac{1}{| x-y|^{1+n}} \, \mathrm{d}x \, \mathrm{d}y\text{.}$$
It is standard (see e.g. the Hitchhiker's guide) that this is equivalent to the squared $L^2(\mathbb R^n)$ norm of $(-\Delta)^{1/2} \chi_D$.
The result is false even without assumptions on the boundary, but it seems that the proof is harder. Other than the above, the only 'technical' tools we use below is a diffeomorphism and some change of variables.
Reduction to local piece with flat boundary
Without loss of generality, $0\in \partial D$. $n=1$ is easy, so suppose $n>1$. As $\partial D\in C^2$ at $0$, there are open neighbourhoods $U,V$ of $0$ and a $C^2$ diffeomorphism $\Phi:U\to V$ with inverse $\Psi$ such that
$$ \Phi(D\cap U)=V\cap \{Y\in\mathbb R^n : Y_n > 0\},
\\ \Phi(D^c\cap U)=V\cap \{X\in\mathbb R^n : X_n \le 0\}.$$
Performing a change of variables $x=\Psi(X),\ y=\Psi(Y)$, with $J_\Psi:=|\det\nabla\Psi|$,
\begin{align}
[ \chi_D]_{H^{1/2}}^2
&\ge \int_{D\cap U}\int_{D^c\cap U}\frac{1}{| x-y|^{1+n}} \, \mathrm{d}x \, \mathrm{d}y
\\
&=\int_{V\cap (Y_n>0)}\int_{V\cap (X_n\le0)} J_\Psi(X)J_\Psi(Y)\frac{1}{|\Psi(X)-\Psi(Y)|^{1+n}}\,\mathrm{d}X \, \mathrm{d}Y
\\
&=\int_{V\cap (Y_n>0)}\int_{V\cap (X_n\le0)} J_\Psi(X)J_\Psi(Y)\frac{|X-Y|^{1+n}}{|\Psi(X)-\Psi(Y)|^{1+n}} \frac1{|X-Y|^{1+n}} \,\mathrm{d}X \, \mathrm{d}Y
\\
&\ge C \int_{V\cap (Y_n>0)}\int_{V\cap (X_n\le0)} \frac1{|X-Y|^{1+n}} \mathrm{d}X \, \mathrm{d}Y,
\end{align}
where $C = \inf_{X,Y\in V} J_\Psi(X)J_\Psi(Y)\frac{|X-Y|^{1+n}}{|\Psi(X)-\Psi(Y)|^{1+n}} \in(0,\infty)$. As $V$ is an open neighbourhood of $0$, we can further shrink $V$ to some open box $(-r,r)^n$. At the cost of a multiplicative constant depending on $r$, which we absorb into $C$, we may change variables $(X,Y)=(r\tilde X,r\tilde Y)$ to set $V=(-1,1)^n$. We return to writing $x,y$ for our integration variables. We thus have, setting $x=(x',x_n),y=(y',y_n)$,
\begin{align}
[\chi_D]_{H^{1/2}}^2
&\ge C \int_{x'\in[-1,1]^{n-1}}\int_{y'\in[-1,1]^{n-1}}\int_{y_n\in[0,1]}\int_{x_n\in[-1,0]}\frac{\mathrm{d}x_n \,\mathrm{d}y_n \,\mathrm{d}y' \,\mathrm{d}x'}{(|x'-y'|^2+(x_n-y_n)^2)^{(1+n)/2}}
\\
&=C\iint_{x',y'\in[-1,1]^{n-1}}\iint_{x_n,y_n\in[0,1]}\frac{1}{(|x'+y'|^2+(x_n+y_n)^2)^{(1+n)/2}}\,\mathrm{d}x_n \,\mathrm{d}y_n \,\mathrm{d}y' \,\mathrm{d}x'.
\end{align}
Inner two integrals
Define
$$J(r) := \iint_{[0,1]^2} \frac{\,\mathrm{d}a \,\mathrm{d}b}{(r^2 + (a+b)^2)^{(n+1)/2}}.$$
Instead of integrating on the square $[0,1]^2$, we lower bound by integrating on the triangle bounded by the axes and the line $a+b=1$. Changing coordinates $u=a+b,v=a-b$ we obtain
\begin{align}
J(r)
&\ge \frac14 \cdot 2\int_{u=0}^1 \int_{v=0}^u \frac{\,\mathrm{d} v\,\mathrm{d} u}{(r^2+u^2)^{(n+1)/2}}
\\
&= \frac14\int_{u=0}^1\frac{2u \,\mathrm{d} u}{(r^2+u^2)^{(n+1)/2}}
\\
&= \frac14\int_{u=0}^1\frac{\,\mathrm{d}(u^2)}{(r^2+u^2)^{(n+1)/2}}
\\
&= \frac14\left(\frac{-1}{(\frac{n+1}2-1)(r^2+1)^{(n+1)/2-1}} + \frac{1}{(\frac{n+1}2-1)r^{n-1}} \right)
\end{align}
Divergence
The first term is bounded on $[-1,1]^{2n-2}$, say with integral $\frac{C'}{C}$ where $|C'|<\infty$, and it doesn't affect the following calculations;
plugging in our lower bound for $J(|x'+y'|)$ and absorbing all constants into $C$, we see
$$[\chi_D]_{H^{1/2}}^2\ge C'+C \iint_{x',y'\in[-1,1]^{n-1}} \frac{dx'dy'}{|x'+y'|^{n-1}}$$
Using a similar change of variables as before, $u'=x'+y'$, $v'=x'-y'$, and restricting to the region bounded by $|x_i\pm y_i|= 1$ ($i=1,\dots,n-1$), we get
$$[\chi_D]_{H^{1/2}}^2\ge C'+C \int_{v'\in [-1,1]^{n-1}}\,\mathrm{d} v'\int_{u'\in [-1,1]^{n-1}}\frac1{|u'|^{n-1}} \,\mathrm{d} u'.$$
Since $\frac1{|u'|^{n-1}}\notin L^1([-1,1]^{n-1},\,\mathrm{d} u')$, we conclude that $[\chi_D]_{H^{1/2}}^2=\infty$, so $\chi_D\notin H^{1/2}$. |
Maximum cardinality affinely independent subset of $\mathbb{R}^n$ | Let $\{s_0,s_1,\ldots,s_r\}\subset S$, then the family $\{\vec{s_0s_1},\ldots,\vec{s_0s_r}\}$ is linearly independent in the vector space $\Bbb R^n$, hence $r\leq n$.
So $S$ contains at most $n + 1$ elements. |
Maximise the value of two items with two inputs | Here's a hint which should help.
This is the same thing as starting at $(a,b)$, and making as many "knight's moves" as possible, i.e. adding $(-1, -2)$ or $(-2,-1)$ as many times as we can.
Since the problem is clearly symmetric, assume that $a \geq b$ without loss of generality.
If $a \geq 2b-1$, then show that the optimal path involves only $(-2, -1)$ moves (this is not too hard). So we can easily calculate how many moves we can make in this case.
If $b \leq a < 2b-1$, show that any optimal path contains moves of both types. Once you've done this, note that since the order of the moves is unimportant, we can simply take one move of each type first, and reduce it to the $(a-3, b-3)$ case, and repeat the process until we reach a state $(x,y)$ where $x \geq 2y-1$, covered in the previous part. |
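If it helps, here is a small brute-force sketch (a hypothetical `max_moves` helper, not part of the original problem) you can use to check the case analysis:
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def max_moves(a, b):
    """Most (-1,-2)/(-2,-1) moves keeping both coordinates non-negative."""
    best = 0
    if a >= 2 and b >= 1:
        best = max(best, 1 + max_moves(a - 2, b - 1))
    if a >= 1 and b >= 2:
        best = max(best, 1 + max_moves(a - 1, b - 2))
    return best

print(max_moves(7, 3))  # 3: here a >= 2b - 1 and only (-2,-1) moves are needed
print(max_moves(5, 5))  # 3: the mixed case, reducible to (a-3, b-3)
```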
What is the closed formula for the following summation? | There is no closed form as such. However, you can use the Abel summation technique from here to derive the asymptotic. We have
\begin{align}
S_n & = \sum_{k=2}^n \dfrac1{\log(k)} = \int_{2^-}^{n^+} \dfrac{d \lfloor t \rfloor}{\log(t)} = \dfrac{n}{\log(n)} - \dfrac2{\log(2)} + \int_2^{n} \dfrac{dt}{\log^2(t)}\\
& =\dfrac{n}{\log(n)} - \dfrac2{\log(2)} + \int_2^{3} \dfrac{dt}{\log^2(t)} + \int_3^{n} \dfrac{dt}{\log^2(t)}\\
&\leq \dfrac{n}{\log(n)} \overbrace{- \dfrac2{\log(2)} + \int_2^{3} \dfrac{dt}{\log^2(t)}}^{\text{constant}} + \dfrac1{\log(3)}\int_3^{n} \underbrace{\dfrac{dt}{\log(t)}}_{\leq S_n}\\
\end{align}
We then get that
$$S_n \leq \dfrac{n}{\log(n)} + \text{constant} + \dfrac{S_n}{\log(3)} \implies S_n \leq \dfrac{\log(3)}{\log(3)-1} \left(\dfrac{n}{\log(n)} + \text{constant}\right) \tag{$\star$}$$
With a bit more effort you can show that
$$S_n \sim \dfrac{n}{\log n}$$ |
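As a quick numerical illustration (a sketch), the ratio $S_n/(n/\log n)$ decreases slowly toward $1$:
```python
import math

for n in [10**3, 10**4, 10**5, 10**6]:
    s = sum(1 / math.log(k) for k in range(2, n + 1))
    print(n, s / (n / math.log(n)))  # ratio slowly decreasing toward 1
```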
Is there any way to express one exact concave form using max, min, etc functions? | Ok, let's translate everything to be at the origin. Let $f(x)=2\ln(x/6 + 5/3)$ and $g(x) = \log(x+1)$. Define the following functions,
$$\tilde{f}(x) = f(x+8)$$
and
$$\tilde{g}(x) = g(x+8).$$
Notice that $\tilde{f}(0)=\tilde{g}(0)$ since $f(8)=g(8)$.
Furthermore define
$$\hat{f}(x) = \tilde{f}(x)-\tilde{f}(0)$$
and
$$\hat{g}(x) = \tilde{g}(x) - \tilde{f}(0).$$
Then we can almost write the function we want as,
$$\hat{h}(x) = \max(\hat{g}(x),0) + \min(\hat{f}(x),0),$$
except that this has been translated so we must translate back,
$$h(x) = \hat{h}(x-8) + \tilde{f}(0).$$
So our final function is simply $h(x)$.
I leave it to you to understand why it's still concave and to compute explicitly all the functions.
Edit: made notation more consistent by switching $\hat{h}(x)$ and $h(x)$. |
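For a concrete check, here is a short numerical sketch (using the $f$ and $g$ above): the pieces agree at $x=8$, and the successive slopes of $h$ on a grid are non-increasing, as concavity requires.
```python
import math

f = lambda x: 2 * math.log(x / 6 + 5 / 3)
g = lambda x: math.log(x + 1)
print(f(8), g(8))                     # both equal 2*log(3), so the pieces meet

def h(x):
    fhat = f(x) - f(8)                # hat f, vanishing at the junction
    ghat = g(x) - f(8)                # hat g, vanishing at the junction
    return max(ghat, 0) + min(fhat, 0) + f(8)

xs = [i / 10 for i in range(161)]     # grid on [0, 16]
slopes = [h(b) - h(a) for a, b in zip(xs, xs[1:])]
print(all(t <= s + 1e-12 for s, t in zip(slopes, slopes[1:])))  # True: concave
```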
Hartshorne Proposition 9.5 | @poorna The answer to your question follows from the universal property of the fibre product:
We have the fibre product
$$\begin{array}{ccc}
X' & \longrightarrow & X \\
\downarrow{f'} & & \downarrow{f} \\
Spec(\mathscr O_{Y,y}) & \stackrel{j_y}{\longrightarrow} & Y
\end{array}
$$
Consider the local ring $A = \mathscr O_{X,x}$. Due to the universal property of the fibre product, the commutative square defining (in a topological sense) the points $x$ and $y=f(x)$
$$\begin{array}{ccc}
Spec \ A & \stackrel{j_X}{\longrightarrow} & X \\
\downarrow{} & & \downarrow{f} \\
Spec(\mathscr O_{Y,y}) & \stackrel{j_y}{\longrightarrow} & Y
\end{array}
$$
maps into the fibre product via a unique morphism
$$Spec \ A \stackrel{j_{x'}}{\longrightarrow} X'$$
Referring to my previous comment, the latter morphism is a point of $X'$. It lifts the 'point' $x$, i.e. $Spec \ A \stackrel{j_X}{\longrightarrow} X$, to $X'$. |
Set of functions is linear subspace of $C(I_n)$ | Let $G, H \in S$. We may write $$G(x) = \sum_{j=1}^N\alpha_j \sigma(y_j^Tx +\theta_j) \textrm{ , } H(x) = \sum_{k=1}^M \beta_k \sigma(z_k^Tx + \zeta_k)$$ Now set $\alpha_{N+k} = \beta_k$, $y_{N+k} = z_k$, $\theta_{N+k} = \zeta_k$ for $k=1,\dots,M$, and we get $$(G+H)(x) = \sum_{j=1}^{N+M}\alpha_j \sigma(y_j^Tx +\theta_j)$$ which lies in $S$. I'm sure you can verify the other property. |
How can I find the value of this Maclaurin Series. | The right term here is "approximation".
After you have found the four non-zero terms,
$$2x\cos^2(x) \approx \sum_{i=1}^7 a_i x^i$$
Integrate it term by term.
$$\int_0^12x\cos^2(x)\, dx \approx \sum_{i=1}^7 a_i \int_0^1x^i\, dx$$ |
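As a verification sketch with sympy (assuming truncation at degree $7$, i.e. after the four non-zero terms):
```python
import sympy as sp

x = sp.symbols('x')
poly = sp.series(2 * x * sp.cos(x) ** 2, x, 0, 8).removeO()
print(poly)                            # -4*x**7/45 + 2*x**5/3 - 2*x**3 + 2*x
print(sp.integrate(poly, (x, 0, 1)))   # 3/5
print(sp.integrate(2 * x * sp.cos(x) ** 2, (x, 0, 1)).evalf())  # ~0.6006, exact value for comparison
```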
Is there an unbiased estimator of the reciprocal of the slope in linear regression? | As you say, a second-order Taylor expansion for $g(\hat{\beta})=1/\hat{\beta}$ around $\beta$ gives:
\begin{equation}
\frac 1 {\hat{\beta}} \approx \frac 1 \beta - \frac{\hat{\beta}-\beta}{\beta^2} + \frac{(\hat{\beta}-\beta)^2}{\beta^3}
\end{equation}
Hence, $\operatorname E(1/\hat{\beta})\approx(1/\beta)+\operatorname{Var}(\hat{\beta})/\beta^3$, so the naive estimator is biased upward. So $(1/\hat{\beta})- \operatorname{Var}(\hat{\beta})/\hat{\beta}^3$ should, in principle, give a better estimate for $1/\beta$. Probably this is what you had thought of already and rejected. So apologies if so. |
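For illustration, a small simulation sketch (modelling $\hat\beta$ directly as $\mathrm{Normal}(\beta, s)$ rather than fitting an actual regression):
```python
import numpy as np

rng = np.random.default_rng(0)
beta, s = 2.0, 0.3
bhat = rng.normal(beta, s, size=10**6)   # stand-in for the slope estimate
naive = 1 / bhat
corrected = naive - s**2 / bhat**3       # plug-in bias correction
print(1 / beta, naive.mean(), corrected.mean())
# theory: E[1/bhat] ~ 1/beta + Var/beta^3 = 0.5 + 0.09/8 ~ 0.511
```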
Categorical description of matrix similarity | $\DeclareMathOperator\rank{rank}$If you allow $C\neq D$, an isomorphism $A\cong B$ in the category you describe is equivalent to $\rank(A)=\rank(B)$. You can of course define a category where as morphisms you only allow commutative squares with $C=D$, then an isomorphism $A\cong B$ is indeed equivalent to $A$ and $B$ being similar.
Another way you might want to look at this is in the setting of $K[X]$-modules, where $K$ is the underlying field. A $K[X]$-module amounts to a $K$-vector space $V$ together with a $K$-linear map $f\colon V\to V$ that describes the action of $X$, that is $f(v) = X\cdot v$. In this setting, matrices $A$ and $B$ yield $K[X]$-modules and those are isomorphic if and only if $A$ and $B$ are similar. |
Finding parametric equations given a point, orthogonal to another vector, and contained in a plane | The parametric equations will be of the form
$$x=0+at $$
$$y=0+bt $$
$$z=0+ct$$ with
$$3a+4b+2c=0$$
and
$$at-2bt+ct=0$$
thus
$$a=2b-c $$ and
$$6b-3c+4b+2c=0$$ which gives
$$c=10b $$ and
$$a=-8b $$
finally, we get
$$x=-8t $$
$$y=t $$
$$z=10t $$ |
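As a quick check (assuming, from the coefficients above, that the plane has normal $(3,4,2)$ and the given vector is $(1,-2,1)$), the direction $(-8,1,10)$ is orthogonal to both:
```python
d = (-8, 1, 10)                      # direction found above (t = 1)
for u in [(3, 4, 2), (1, -2, 1)]:
    print(sum(di * ui for di, ui in zip(d, u)))  # 0 and 0
```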
Draw a convex hull in 2D | It's a triangle, which is convex, so its convex hull is itself... |
Show that two subspaces are equal with a given assumption | Following your second argument, suppose there exists a $y \in B$ but $y \notin C$.
If $y \in A$, then $y \in A \cap B$ but $y \notin A\cap C$, so $A \cap B \neq A \cap C$, contradiction. Therefore, $y \notin A$. But then $y \notin A+C$, contradicting the fact that $y \in A+B$ and $A+B = A+C$. Therefore $y \in C$ and we are done. |
Literature on bounds of Fubini's numbers | QING ZOU, "THE LOG-CONVEXITY OF THE FUBINI NUMBERS", http://toc.ui.ac.ir/article_21835_684378fec55e5c66c7fccd4321a84637.pdf
gives the bounds on $f_n$, the $n^{\text{th}}$ Fubini number:
$$ 2^n < f_n < \frac{n!}{(\ln 2)^{n+1}} < (n+1)^n \text{.} $$
The lower bound holds for $n \geq 3$ and the upper for $n \geq 1$.
(Zou cites Barthelemy, "AN ASYMPTOTIC EQUIVALENT FOR THE NUMBER OF TOTAL PREORDERS ON A FINITE SET ", from which we determine the intended base of the logarithm.) |
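A numerical sketch of the bounds, generating the Fubini numbers from the standard recurrence $f_n=\sum_{k=1}^{n}\binom{n}{k}f_{n-k}$ with $f_0=1$:
```python
import math

f = [1]  # f_0 = 1
for n in range(1, 16):
    f.append(sum(math.comb(n, k) * f[n - k] for k in range(1, n + 1)))
print(f[:8])  # [1, 1, 3, 13, 75, 541, 4683, 47293]
for n in range(3, 16):
    assert 2**n < f[n] < math.factorial(n) / math.log(2) ** (n + 1) < (n + 1) ** n
```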
Algebra on a Louvre tablet | I think the intended solution was as follows:
$$
\left(\frac{x-y}{2}\right)^2=\left(\frac{x+y}{2}\right)^2-xy=\left(\frac{a}{2}\right)^2-1.
$$
Thus $x-y$ can be solved for:
$$
x-y=\pm2\sqrt{\left(\frac{a}{2}\right)^2-1}=\pm\sqrt{a^2-4}
$$ Adding this to and subtracting it from $x+y=a$ gives us:
$$
x=\frac{(x+y)+(x-y)}{2}=\frac{a\pm \sqrt{a^2-4}}{2},
$$
and likewise for $y$.
Note that this is a re-derivation of the quadratic formula; in particular, it defeats the purpose of this approach to use it! |
Proof of $O_P(1) o_P(1) = o_P(1)$ | It seems that you reversed the $\leq$ and $\gt$: it should be
$$
P(|U_nX_n|>\varepsilon, |U_n| \color{red}{\gt} M) \leq P(|U_n| \color{red}{\gt} M) < \varepsilon
$$
and
$$P(|U_nX_n|>\varepsilon, |U_n|\color{red}{\leq} M) \leq P(|MX_n|> \varepsilon) = P\left(|X_n|> \frac \varepsilon M\right) \to 0.$$ |
Method of proof confusing between Vector space and Linear Transformation. | This is because linear transformations are precisely the maps that respect the vector space structure. For instance, if $V$ is an $n$-dimensional vector space, then $V$ is isomorphic to $\mathbb R^n$. Such an isomorphism corresponds to giving a basis for $V$. In other words, $V$ is completely determined by a basis, because every vector is a linear combination of the basis vectors.
Even if you aren't familiar with bases, the point is that linear structure is essentially determined by how we can add vectors together. If we want a map $T: V_1 \to V_2$ to preserve this structure, we would naturally require
$$
T(av + bw) = aT(v) + bT(w).
$$ |
For which $n$ does $\binom{2n}{n}$ divide $\binom{4n}{n}$? | Claim:
If $n > 24$, then ${\large{\binom{2n}{n}}}$ does not divide ${\large{\binom{4n}{n}}}$.
Proof:
It's easily verified that the claim holds for $24 < n < 38$.
Suppose $n\ge 38$.
Let $p$ be the least prime such that $p > {\large{\frac{4}{3}}}n$.
Then for $n \ge 2010760$, we have $p \le {\large{\frac{3}{2}}}n$ by Lowell Schoenfeld's generalization of Bertrand's Postulate
$\qquad$https://en.wikipedia.org/wiki/Bertrand%27s_postulate#Better_results
and for $38\le n < 2010760$, we have $p \le {\large{\frac{3}{2}}}n$ by direct evaluation via a Maple test program.
Then from ${\large{\frac{4}{3}}}n < p \le {\large{\frac{3}{2}}}n$, we get
\begin{align*}
&\bullet\;p{\,\not\mid\,}n!\\[4pt]
&\bullet\;p{\,\mid\,}(2n)!\\[4pt]
&\bullet\;p^2{\,\mid\,}(3n)!\\[4pt]
&\bullet\;p^3{\,\not\mid\,}(4n)!\\[4pt]
\end{align*}
hence
$${\large{\frac{\binom{4n}{n}}{\binom{2n}{n}}}}$$
is not an integer since identically we have
$${\large{\frac{\binom{4n}{n}}{\binom{2n}{n}}}}=\frac{n!(4n)!}{(2n)!(3n)!}$$
and for the fraction on the right, $p^3$ divides the denominator, but doesn't divide the numerator.
This completes the proof. |
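A quick computational sketch is consistent with the claim: searching a range of $n$, every $n$ for which the divisibility holds is at most $24$.
```python
from math import comb

good = [n for n in range(1, 501) if comb(4 * n, n) % comb(2 * n, n) == 0]
print(good)  # a short list, all entries at most 24
```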
Proof of existence of probability distribution: Why $P((-\infty,a])=P^{*}(X^{-1}((-\infty,a]))$? | So we have a random variable $X: \Omega \to \mathbb{R}$ (with $P^*$ being the probability measure on $\Omega$). By definition, its probability distribution is a measure $P$ on $\mathbb{R}$ such that $P(A) = P^*(\{t \mid X(t) \in A\}) = P^*(X^{-1}(A))$. |
How can the following inequality hold for any real $x$? | You can write $2 = (1+x) + (1-x)$ and use the triangle inequality. |
Probability vs Confidence | Your question is a natural one and the answer is controversial, lying at heart
of a decades-long debate between frequentist and Bayesian statisticians. Statistical
inference is not mathematical deduction. Philosophical issues arise when
one takes a bit of information in a sample and tries to make a helpful
statement about the population from which the sample was chosen. Here is my
attempt at an elementary explanation of these issues as they arise in your
question. Others may have different views and post different explanations.
Suppose you have a random sample $X_1, X_2, \dots X_n$ from $Norm(\mu, \sigma)$
with $\sigma$ known and $\mu$ to be estimated. Then
$\bar X \sim Norm(\mu, \sigma/\sqrt{n})$ and we have
$$P\left(-1.96 \le \frac{\bar X - \mu}{\sigma/\sqrt{n}} \le 1.96\right) = 0.95.$$
After some elementary manipulation, this becomes
$$P(\bar X - 1.96\sigma/\sqrt{n} \le \mu \le \bar X + 1.96\sigma/\sqrt{n}) = 0.95.$$
According to the frequentist interpretation of probability, the two displayed
equations mean the same thing: Over the long run, the event inside parentheses will be true 95% of the time. This interpretation holds as long as $\bar X$ is viewed as a random variable based on a random sample of size $n$ from the normal population specified at the start. Notice that the second equation needs to
be interpreted as meaning that the random interval
$\bar X \pm 1.96\sigma/\sqrt{n}$ happens to include the unknown mean $\mu.$
However, when we have a particular sample and the numerical value of an
observed mean $\bar X,$ the frequentist "long run" approach to probability
is in potential conflict with a naive interpretation of the interval. In this
particular case $\bar X$ is a fixed observed number and $\mu$ is a fixed
unknown number. Either $\mu$ lies in the interval or it doesn't. There is no "probability" about it. The process
by which the interval is derived leads to coverage in 95% of cases over the
long run. As shorthand for the previous part of this paragraph, it is customary to use the word
confidence instead of probability.
There is really no difference between the two words. It is just that the
proper frequentist use of the word probability becomes awkward, and people have
decided to use confidence instead.
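A small coverage simulation (a sketch) makes the frequentist reading concrete: the procedure captures $\mu$ in about 95% of repetitions, even though any one realized interval either contains $\mu$ or does not.
```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 10.0, 2.0, 25, 10**5
xbar = rng.normal(mu, sigma / np.sqrt(n), size=reps)  # sampling distribution of the mean
half = 1.96 * sigma / np.sqrt(n)
covered = (xbar - half <= mu) & (mu <= xbar + half)
print(covered.mean())  # ~0.95
```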
In a Bayesian approach to estimation, one establishes a probability framework
for the experiment at hand from the start by choosing a "prior distribution." Then a Bayesian probability
interval (sometimes called a credible interval) is based on a melding of the prior
distribution and the data. A difficulty Bayesian statisticians may have in helping
nonstatisticians understand their interval estimates is
to explain the origin and influence of the prior distribution. |
For $A$ positive semidefinite and $\theta_1 ^{T} \theta_2 = 0$ is it true that $\theta_1^{T} A \theta_1 + 2 \theta_1^{T} A \theta_2 \geq 0$? | This would be true if both vectors were orthogonal eigenvectors, since then $v_1^TAv_2 = \lambda_2 v_1^Tv_2=0$.
Here is a counterexample: Let
$$
A=\pmatrix{1&0\\0&0}, v = \pmatrix{1\\1}, w = \pmatrix{-1\\1}
$$
then
$$
0=(v+w)^TA(v+w) = v^TAv + 2 v^TAw + w^TAw,
$$
since $w^TAw = 1 > 0$, it follows $v^TAv + 2 v^TAw<0$. |
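Here is a quick numerical confirmation of the counterexample (a sketch):
```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])  # positive semidefinite
v = np.array([1.0, 1.0])
w = np.array([-1.0, 1.0])
print(v @ w)                      # 0.0: the vectors are orthogonal
print(v @ A @ v + 2 * v @ A @ w)  # -1.0 < 0
```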
Evaluate $\sum\limits_{k=0}^n(-1)^{n+k}{n\choose k}{ {n+k}\choose n} \frac{1}{k+2}$ | There is a neat trick. The shifted Legendre polynomials fulfill
$$ Q_n(x)=P_n(2x-1)=\sum_{k=0}^{n}(-1)^{n+k}\binom{n}{k}\binom{n+k}{k}x^k \tag{1}$$
hence our sum is given by
$$ S_n=\sum_{k=0}^{n}(-1)^{n+k}\binom{n}{k}\binom{n+k}{k}\frac{1}{k+2}=\int_{0}^{1}x\, Q_n(x)\,dx \tag{2}$$
and
$$\boxed{\quad S_0=\frac{1}{2},\qquad S_1=\frac{1}{6},\qquad S_{n\geq 2}=0\quad}\tag{3} $$
follow from $x=\frac{Q_0(x)+Q_1(x)}{2}$ and the orthogonality relations for Legendre polynomials. |
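A direct check of $(3)$ in exact arithmetic (a sketch):
```python
from fractions import Fraction
from math import comb

def S(n):
    return sum(Fraction((-1) ** (n + k) * comb(n, k) * comb(n + k, n), k + 2)
               for k in range(n + 1))

print([S(n) for n in range(6)])  # Fraction(1, 2), Fraction(1, 6), then zeros
```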
Get the mass of a plate with density function | Just use Cartesian coordinates supposing the square is centered at $0$. You will obtain the integral $$\int_{-1/2}^{1/2}\int_{-1/2}^{1/2} (16+(x^2+y^2)^2)dxdy,$$
which is easy to calculate. |
Having matrices $A$ and $T$, find $S$ such that $A=ST$. But what if $\det T=0$? | The matrix $S$ may exist, but it is also possible that it doesn't exist. If, for instance, $\det A\neq0$, then, since $\det T=0$, you can be sure that it doesn't exist. |
L'Hospital for $ \infty- \infty$ | An indeterminate form $\infty - \infty$ might be something like $\lim_{x \to +\infty} (a(x)-b(x))$. But here you have two different limits: $\lim_{x \to \infty} a(x) - \lim_{t \to 0} b(t)$, with no connection between the two variables $x$ and $t$ (you called them both $x$, but they are really different). So there's no reason to assign any particular value to this expression. |
A closet contains 10 pairs of shoes. If 8 shoes are randomly selected, what is the probability that there will be exactly 1 complete pair? | It looks fine to me. In general there are $\binom{10}k\binom{10-k}{8-2k}2^{8-2k}$ ways to choose the $8$ shoes so as to get exactly $k$ pairs, your calculation being the case $k=1$, and a quick numerical check confirms that
$$\sum_{k=0}^4\binom{10}k\binom{10-k}{8-2k}2^{8-2k}=\binom{20}8\;.$$ |
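Here is that numerical check as a one-liner sketch:
```python
from math import comb

total = sum(comb(10, k) * comb(10 - k, 8 - 2 * k) * 2 ** (8 - 2 * k) for k in range(5))
print(total == comb(20, 8))  # True
```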
If $L$ is a Dedekind cut, is the set $\{ x^2:x \in L \}$ a Dedekind cut? | Brian Tung's comment has the answer to both:
(a) Condition (III) prevents $\{x^{2} \mid x \in L\}$ being a Dedekind cut for any Dedekind cut $L$.
Every Dedekind cut is an open interval $(-\infty,b)$ for some $b$. In particular this interval definitely contains some negative rationals. For example, if $b > 0$, we must have, by condition (III), $-b \in L$ as $-b < b$, and if $b \leq 0$, we can similarly apply (III) and state $b-1 \in L$.
However $\{x^{2} \mid x \in L\}$, regardless of which cut $L$ is, contains no negative numbers.
(b) Again, we can give a cut for which (III) fails (and again, Brian Tung's suggestion is probably the clearest). Let $L = \{x \in \mathbb{Q} \mid x < 1\}$. In particular consider what happens to the rationals in the interval $(0,1)$ when we invert them: for every $y \in (0,1)$ we have $\frac{1}{y} > 1$. Then the set $L' = \{\frac{1}{x} \mid x\in L\} \cup \{0\}$ is $(-\infty,0] \cup (1,\infty)$. That is, there's a gap between $0$ and $1$, so we again violate (III). Pick any $y \in L'$ where $y >1$. Then by (III) we must have that $\frac{1}{2} \in L'$ as $\frac{1}{2} < y$, giving a contradiction. |
On branches of Lambert W function | Obviously, $x\ge 0$ for the square root. Take the square root on both sides,
$$
e^{\sqrt x/2}\ge 2\sqrt2(\sqrt x/2)\iff -\frac1{2\sqrt2}\le(-\sqrt x/2)e^{-\sqrt x/2}
$$
Now the function $ue^u$ has
a negative minimum at $u=-1$ with value $-e^{-1}$,
value $ue^u=0$ at $u=0$, and
$\lim_{u\to-\infty}ue^u=0$.
Thus for $-e^{-1}<v<0$ the equation $v=ue^u$ has two solutions $W_{-1}(v)<-1$ and $-1<W_0(v)<0$, which are on the two branches of the Lambert-W function.
$W_{-1}$ is the inverse of the monotonically falling segment and thus also monotonically falling from $-1$ to $-\infty$,
for the same reasoning $W_0$ is monotonically increasing from $-1$ to $0$.
As $2\sqrt2>e$ (if only barely), we have $-\frac1{2\sqrt2}>-e^{-1}$, so you get the two interval end points on the branches, and all $x$ to the left and right of them solve the inequality:
$$
-\sqrt x/2\ge W_0\left(-\frac1{2\sqrt2}\right)\text{ or }W_{-1}\left(-\frac1{2\sqrt2}\right)\ge -\sqrt x/2
$$ |
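Numerically (a sketch, using scipy's convention that $k=0$ is the principal branch and $k=-1$ the lower branch):
```python
import numpy as np
from scipy.special import lambertw

v = -1 / (2 * np.sqrt(2))
w0 = lambertw(v, 0).real     # ~ -0.744
wm1 = lambertw(v, -1).real   # ~ -1.31
print((2 * w0) ** 2, (2 * wm1) ** 2)  # ~2.21 and ~6.85; solutions are x outside this interval
```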
Show that V is a vector space. | Well, the zero function is the zero vector.
Furthermore, if $p,q\in V$, then $p(x)=p(x+1)$ and $q(x)=q(x+1)$ for each $x$.
Thus $(p+q)(x) = p(x)+q(x) = p(x+1)+q(x+1) =(p+q)(x+1)$. Hence $p+q\in V$.
Moreover, let $a$ be a real number and $p\in V$. Then $(a\cdot p)(x) = ap(x) = ap(x+1) = (a\cdot p)(x+1)$ and so $a\cdot p\in V$. |
Finding domain of a function | Your solution is correct.
For the second, it goes almost the same way, but we can't have $x=1$ or $x=-3$ to avoid division by zero.
For the third, notice we can't have $x=1$. Further, we want $x+1$ and $x-1$ to be both positive or both negative. This gives $x \leq -1$ or $x \geq 1$, but we can't have $x=1$. So the third has domain:
$$x \leq -1 \vee x > 1$$
in the notation with $x$ or in the interval notation:
$$(-\infty,-1] \cup (1,\infty) $$ |
Asymptotics of an expression of the root of a polynomial | All $O$-notations and $o$-notations work for $n\to\infty$.
First we have $2^{n+1}\ge n!x_0$, hence $x_0\le2^{n+1}/n!=O(2^n/n!)$. Take logarithm, we derive that
\begin{equation}
\begin{split}
(n+1)\ln(2-x_0)&=\ln x_0+\sum_{k=1}^n\ln(x_0+k)\\
&=\ln n!+\ln x_0+\sum_{k=1}^n\ln\left(1+\frac{x_0}k\right)
\end{split}
\end{equation}
therefore
\begin{gather}
\ln(2-x_0)=\ln2+\ln(1-x_0/2)=\ln2+O(x_0)\\
\sum_{k=1}^n\ln(1+x_0/k)=\sum_{k=1}^nO(x_0/k)=O(x_0H_n)
\end{gather}
\begin{equation}
\begin{split}
\ln x_0&=(n+1)\ln(2-x_0)-\sum_{k=1}^n\ln\left(1+\frac{x_0}k\right)-\ln n!\\
&=(n+1)(\ln 2+O(x_0))-\ln n!-O(x_0H_n)\\
&=-\ln n!+(n+1)\ln 2+O(nx_0)
\end{split}
\end{equation}
\begin{equation}
x_0=\frac{2^{n+1}(1+O(nx_0))}{n!}
\end{equation}
Notice that $nx_0=O(2^n/(n-1)!)=o(1)$, so we have $x_0\sim2^{n+1}/n!$, thus
\begin{equation}
x_0=\frac{2^{n+1}}{n!}+O(nx_0^2)
\end{equation}
Now we can observe $x_0$ more closely
\begin{equation}
\begin{split}
\ln(2-x_0)&=\ln2+\ln(1-x_0/2)=\ln2-x_0/2+O(x_0^2)\\
&=\ln2-2^n/n!+O(nx_0^2)
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\sum_{k=1}^n\ln(1+x_0/k)
&=\sum_{k=1}^n(x_0/k+O(x_0/k)^2)\\
&=H_nx_0+O\left(x_0\sum_{k\ge1}1/k^2\right)\\
&=H_n(2^{n+1}/n!+O(nx_0^2))+O(x_0^2)\\
&=\frac{2^{n+1}H_n}{n!}+O(x_0^2\cdot n\log n)
\end{split}
\end{equation}
\begin{equation}
\ln x_0=-\ln n!+(n+1)\ln2-\frac{2^n(n+2H_n+1)}{n!}+O(n^2x_0^2)
\end{equation}
\begin{equation}
\begin{split}
x_0&=\frac{2^{n+1}}{n!}\exp\left(-\frac{2^n(n+2H_n+1)}{n!}\right)(1+O(n^2x_0^2))\\
&=\frac{2^{n+1}}{n!}\left(1-\frac{2^n(n+2H_n+1)}{n!}\right)(1+O(n^2x_0^2))\\
&=\frac{2^{n+1}}{n!}\left(1-\frac{2^n(n+2H_n+1)}{n!}\right)+O(n^2x_0^3)
\end{split}
\end{equation}
Next, we compute $\ln M$
\begin{equation}
\begin{split}
\ln M&=\sum_{k=0}^n(k+2)(\ln(k+2)-\ln(k+x_0))\\
&=\sum_{k=1}^{n+2}k\ln k-\sum_{k=0}^n(k+2)\ln(k+x_0)
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
\sum_{k=0}^n(k+2)\ln(k+x_0)&=2\ln x_0+\sum_{k=1}^n(k+2)(\ln k+\ln(1+x_0/k))\\
&=2\ln x_0+\sum_{k=1}^n(k+2)\ln k+\sum_{k=1}^n(k+2)\ln(1+x_0/k)\\
&=2\left((n+1)\ln(2-x_0)-\sum_{k=1}^n\ln(1+x_0/k)-\ln n!\right)\\
&\qquad+\sum_{k=1}^n(k+2)\ln k+\sum_{k=1}^n(k+2)\ln(1+x_0/k)\\
&=2(n+1)\ln(2-x_0)+\sum_{k=1}^nk\ln k+\sum_{k=1}^nk\ln(1+x_0/k)
\end{split}
\end{equation}
thus
\begin{multline}
\ln M=(n+1)\ln(n+1)+(n+2)\ln(n+2)\\
-2(n+1)\ln(2-x_0)-\sum_{k=1}^nk\ln(1+x_0/k)
\end{multline}
Squaring our earlier estimate of $x_0$,
\begin{equation}
x_0^2=\frac{4^{n+1}}{n!^2}(1+O(nx_0))^2=\frac{4^{n+1}}{n!^2}+O(nx_0^3)
\end{equation}
\begin{equation}
\begin{split}
\ln(2-x_0)&=\ln2+\ln(1-x_0/2)=\ln2-x_0/2-x_0^2/8+O(x_0^3)\\
&=\ln2-\frac{2^n}{n!}\left(1-\frac{2^n(n+2H_n+1)}{n!}\right)-\frac18\frac{4^{n+1}}{n!^2}+O(n^2x_0^3)\\
&=\ln2-\frac{2^n}{n!}+\frac{4^n}{n!^2}\left(n+2H_n+\frac12\right)+O(n^2x_0^3)
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\sum_{k=1}^nk\ln(1+x_0/k)&=\sum_{k=1}^nk(x_0/k-x_0^2/2k^2+O(x_0/k)^3)\\
&=nx_0-\frac12H_nx_0^2+O\left(x_0^3\sum_{k\ge1}1/k^2\right)\\
&=n\frac{2^{n+1}}{n!}\left(1-\frac{2^n(n+2H_n+1)}{n!}\right)-\frac12H_n\frac{4^{n+1}}{n!^2}+O(n^3x_0^3)\\
&=2n\frac{2^n}{n!}-\frac{4^n}{n!^2}(2n^2+4nH_n+2n+2H_n)+O(n^3x_0^3)
\end{split}
\end{equation}
We have enough stuff to estimate $\ln M$ now.
\begin{multline}
\ln M=(n+1)\ln(n+1)+(n+2)\ln(n+2)-2(n+1)\ln2\\
+\frac{2^{n+1}}{n!}-\frac{4^n}{n!^2}(n+2H_n+1)+O(n^3x_0^3)
\end{multline}
thus
\begin{equation}
\begin{split}
M&=\frac{(n+1)^{n+1}(n+2)^{n+2}}{4^{n+1}}\\
&\qquad\qquad\exp\left(\frac{2^{n+1}}{n!}\right)\exp\left(-\frac{4^n}{n!^2}(n+2H_n+1)\right)\\
&\qquad\qquad(1+O(n^3x_0^3))
\end{split}
\end{equation}
Finally, we have
\begin{gather}
\exp\left(\frac{2^{n+1}}{n!}\right)=1+\frac{2^{n+1}}{n!}+\frac{2\cdot4^n}{n!^2}+O(x_0^3)\\
\exp\left(-\frac{4^n}{n!^2}(n+2H_n+1)\right)=1-\frac{4^n}{n!^2}(n+2H_n+1)+O(n^2x_0^4)
\end{gather}
and
\begin{equation}
\begin{split}
M&=\frac{(n+1)^{n+1}(n+2)^{n+2}}{4^{n+1}}\\
&\qquad\qquad\left(1+\frac{2^{n+1}}{n!}-\frac{4^n}{n!^2}(n+2H_n-1)\right)\\
&\qquad\qquad(1+O(n^3x_0^3))
\end{split}
\end{equation}
Notice that the absolute error
\begin{equation}
O\left(\frac{n^3(n+1)^{n+1}(n+2)^{n+2}}{4^{n+1}}x_0^3\right)
\end{equation}
approaches $0$ when $n\to\infty$. |
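As a numerical sketch (assuming, from the logarithmic identity above, that $x_0$ is the root in $(0,2)$ of $(2-x)^{n+1}=x(x+1)\cdots(x+n)$), the leading term $2^{n+1}/n!$ checks out:
```python
import math
from scipy.optimize import brentq

def F(x, n):
    # log of (2 - x)^(n+1) minus log of x(x+1)...(x+n)
    return ((n + 1) * math.log(2 - x) - math.log(x)
            - sum(math.log(x + k) for k in range(1, n + 1)))

for n in [8, 10, 12, 14]:
    x0 = brentq(lambda t: F(t, n), 1e-12, 1.9)
    print(n, x0 / (2 ** (n + 1) / math.factorial(n)))  # ratio approaches 1
```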
Circumsphere of a tetrahedron undefined? | If this was StackOverflow they would say "show me your code." It looks like you've at least done something with the minus sign in calculating Dy. Here's my code where the four vertices of the tetrahedron are stored in the 4-by-3 matrix xyz:
w = ones(4,1);                            % column of ones
s = sum(xyz.^2,2);                        % squared norms of the four vertices
Dx = det([s xyz(:,2:3) w])                % cofactor determinants used for the center
Dy = -det([s xyz(:,[1 3]) w])             % note the leading minus sign on Dy
Dz = det([s xyz(:,1:2) w])
a = det([xyz w])
c = det([s xyz])
r = 0.5*sqrt(Dx^2+Dy^2+Dz^2-4*a*c)/abs(c) % circumradius; no semicolons, so values print
The resulting radius is a positive real value: about 8.72e-6. Yes, the negative Dy goes away when it gets squared (and could be omitted in your case) – did you maybe put the negative inside of the square root?
I can't answer your more mathematical question, but I'd hazard to guess that this formula is fine. I'd think that numerical issues would potentially be a bigger issue. There's potential for overflow in your case if the numbers get bigger. |
Automorphism of $\mathbb{Q}$ | Here is a way that you can follow: let $f:\Bbb Q\to \Bbb Q$ be an automorphism.
Show that $f(nx)=nf(x)$ for all $n\in \Bbb N$ and $x\in \Bbb Q$,
Show that $f(-x)=-f(x)$ for all $x\in \Bbb Q$ ,
Show that $f(nx)=nf(x)$ for all $n\in \Bbb Z$ and $x\in \Bbb Q$,
Let $r=\dfrac{p}{q}\in \Bbb Q$, with $(p,q)\in \Bbb N\times \Bbb Z^*$, show that $qf(r)=pf(1)$.
Now your turn. |
Complex Analysis Question from Stein | I will assume we can write
$$f(z) = \frac{c}{z_0-z} + \sum_{n=0}^{\infty} b_n z^n$$
for some value of $c \ne 0$, and $\lim_{n \to \infty} b_n = 0$. Then
$$f(z) = \sum_{n=0}^{\infty} a_n z^n$$
where
$$a_n = b_n + \frac{c}{z_0^{n+1}}$$
Then
$$\begin{align}\lim_{n \to \infty} \frac{a_n}{a_{n+1}} &= \lim_{n \to \infty} \frac{\displaystyle b_n + \frac{c}{z_0^{n+1}}}{\displaystyle b_{n+1}+ \frac{c}{z_0^{n+2}}}\\ &= \lim_{n \to \infty} \frac{\displaystyle \frac{c}{z_0^{n+1}}}{\displaystyle \frac{c}{z_0^{n+2}}}\\ &= z_0\end{align}$$
as was to be shown. Note that the second step above is valid because $z_0$ is on the unit circle.
For a nonsimple pole, we may write
$$f(z) = \frac{c}{(z_0-z)^m} + \sum_{n=0}^{\infty} b_n z^n$$
for $m \in \mathbb{N}$. Recall that
$$(1-w)^{-m} = \sum_{n=0}^{\infty} \binom{m-1+n}{m-1} w^n$$
Then
$$\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = \lim_{n \to \infty} \frac{n+1}{n+m} z_0$$
EDIT
@TCL observed that we can simply require that $b_n z_0^n$ goes to zero as $n \to \infty$. Then for a simple pole
$$\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = \lim_{n \to \infty} \frac{\displaystyle b_n z_0^n + \frac{c}{z_0}}{\displaystyle b_{n+1} z_0^n + \frac{c}{z_0^2}}$$
which you can see goes to $z_0$. |
A group of order $n^2$ with $n+1$ subgroups of order $n$ with trivial intersection is abelian | Let $H,K$ be two distinct subgroups of order $n$. Now $\mid HK\mid=\frac{\mid H\mid \mid K\mid}{\mid H\cap K\mid}=\mid H\mid \mid K\mid=n^2=\mid G\mid \Rightarrow G=HK$
For normality of $H$, we have to show that $gHg^{-1}=H$ for all $g\in G$. Suppose on the contrary that there is some $g\in G$ such that $gHg^{-1}\ne H$. Since $\mid gHg^{-1}\mid =n$, from the first part we get that $gHg^{-1}H=G$. So there are some $h_1,h_2\in H$ such that $g=gh_1g^{-1}h_2\Rightarrow e=h_1g^{-1}h_2\Rightarrow g=h_2h_1\in H\Rightarrow gHg^{-1}=H$, which contradicts our assumption. So $H\trianglelefteq G$
Now let $H_1,H_2,\dots,H_{n+1}$ be the list of all subgroups of order $n$ in $G$. Then $\mid H_1\cup H_2\cup \dots\cup H_{n+1}\mid =(n-1)(n+1)+1=n^2\Rightarrow G=H_1\cup H_2\cup \dots\cup H_{n+1}$. Now let $g_1\in G$. So there is some subgroup $H_i$ of order $n$ such that $g_1\in H_i$. Consider any $g_2\in H_j$, $H_i\ne H_j$. Now $g_1g_2g^{-1}_1g^{-1}_2\in H_i\cap H_j$, using the normality of $H_i,H_j\Rightarrow g_1g_2g^{-1}_1g^{-1}_2=e_G\Rightarrow g_1g_2=g_2g_1$. So $\mid C(g_1)\mid \ge n^2-(n-1)$. But as $n\ge 2$, this forces $C(g_1)=G$, since the order of $C(g_1)$ divides $n^2$ and exceeds $n^2/2$. As $g_1$ was chosen arbitrarily, we can say $G$ is commutative. |
$\mathrm{Ext}^1$ of ideal in $k[[x,y]]$ | Any such ideal can be written as $I=fJ$, where $J$ has depth 2 and $f$ is the gcd of all elements in $I$ (which makes sense in $R$, a UFD). Since $I$ is isomorphic to $J$, we have that $\mathrm{Ext}^1(I,R)$ is isomorphic to $\mathrm{Ext}^1(J,R)$, and thus you are done. |
Expected number of draws to draw 3 of the same balls out of an urn with replacement | Here is an approximate approach. If you make $n$ draws we will first consider the chance you have gotten at least three $1$s. We use a Poisson distribution to compute the probability with parameter $\lambda=\frac n{12}$. This is probably close, though $\frac 1{12}$ is not really small. The chance that you have exactly three $1$s is $\frac {\lambda ^3 e^{-\lambda}}{3!}$. Now we consider the chance of getting three of one number independent of the chance of getting three of another. If you know you did or didn't get three of one number that doesn't change the number of tries to get three of another very much. Intuitively, we want the chance of getting three of a specific number to be about $\frac 1{12}$. If we make twelve draws with a chance of $\frac 1{12}$ for each one, we have $\frac 1e \approx 0.368$ of failing on all of them, so about $0.632$ chance of getting at least one. We can solve for $\lambda$ in $$\frac {\lambda ^3 e^{-\lambda}}{3!}=\frac 1{12}\\\lambda \approx 1.17374\\n\approx 14$$ which seems amazingly small to me. I wrote a little program to simulate this using Python's random number generator and the average $n$ came out just below $11$, supporting the calculation. I was surprised how small it was. |
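Here is a simulation sketch along the lines described (drawing with replacement from 12 equally likely values until some value appears three times):
```python
import random

def draws_until_triple():
    counts = [0] * 12
    n = 0
    while max(counts) < 3:
        counts[random.randrange(12)] += 1
        n += 1
    return n

trials = 10**5
print(sum(draws_until_triple() for _ in range(trials)) / trials)  # ~11, matching the answer
```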
Relate $\sum{a_n}$ and $\sum{n a_n}$ | If $\lim_{n \to \infty} \sqrt[n]{|a_n|} \lt 1$, then the function:
$$
f(z) = \sum_{n=0}^{\infty} a_nz^n
$$
has a radius of convergence $\gt1$. Consequently it is holomorphic on the unit disc. Its derivative $\frac{df}{dz} = g(z)$ has the same radius of convergence as $f(z)$, and:
$$
f(1) = x \\
g(1) = y
$$ |
Show $\sqrt{1+\sqrt{2}}$ is algebraic over $\mathbb{Q}$ with degree $4$ | As pointed out by Bernard in the comments, $\alpha$ is a root of $f(x) = x^4 - 2x^2 \color{red}{-} 1$, not of $x^4 - 2x^2 + 1 = (x^2 - 1)^2$. Now observe that $f(\pm1) = -2$, so $f(x)$ doesn't have any rational root. This means that $\alpha \notin \Bbb{Q}$ has either degree $2$ or $4$.
On the other hand, if $\alpha$ had degree $2$ we would have
$$
\Bbb{Q} = \Bbb{Q}(\alpha^2) = \Bbb{Q}(1 + \sqrt{2}) = \Bbb{Q}(\sqrt{2})
$$
which is absurd, thus $\alpha$ must have degree $4$. In particular, this means that $f(x)$ must be irreducible. |
Factoring a joint conditional probability into a specific form. | By definition,
$$Pr(A, B, C|D)=\frac{Pr(A, B, C, D)}{Pr(D)}=\frac{Pr(A, B, C, D)}{Pr(C, D)}\frac{Pr(C, D)}{Pr(D)}=Pr(A, B|C, D)Pr(C|D).$$ |
Determine the number of pairs $(a,b) \in [(1,4)]$ which satisfy $|a|,|b|≤40$ | Hint: Let $c=1$ and $d=4$. The pair $(a,b)$ is in our equivalence class precisely if $4a=b$ (and $b\ne 0)$. Make a list, and count. Don't forget to include $(1,4)$!
And don't forget about the negative numbers. Note for example that $(-2,-8)$ is in our equivalence class. But there is a useful shortcut. If you have made a careful count of the pairs $(a,b)$ where $a$ and $b$ are positive, double that to get the full count. |
What is the expected value of the score? | Assuming that a court card has a value of $10$ then the expected value of the score of one card is simply the average of the scores of one suit's cards:
$$E(X) = \frac{1+2+3+4+5+6+7+8+9+10+10+10+10}{13} = 85/13.$$ |