http://mathoverflow.net/questions/152811/null-space-of-random-0-1-binary-matrix | Null space of random $(0,1)$ binary matrix [closed]
What can be said about the null space of random $(0,1)$ rectangular binary matrices? In particular, I am interested in the probability that there is any non-zero vector with only integer coordinates in the null space. Is this a known problem and/or is there a known approach for tackling it?
-
closed as off-topic by Will Jagy, Boris Bukh, Daniel Moskovich, Ricardo Andrade, Stefan Kohl Dec 26 '13 at 11:26
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question does not appear to be about research level mathematics within the scope defined in the help center." – Will Jagy, Boris Bukh, Daniel Moskovich, Ricardo Andrade, Stefan Kohl
If this question can be reworded to fit the rules in the help center, please edit the question.
I would comment this but I don't yet have enough reputation. Your question is equivalent to asking for the probability that the rank of a $(0,1)$ matrix is full. If your matrix has more columns than rows then you are certain to have nonzero vectors in the null space. If you have more rows than columns then you can zero out some rows and reduce to the square case.
For the square matrix case there is an excellent answer here in which they answer both for the case where you are asking over $\mathbb{F}_2$ and over $\mathbb{Q}$.
Pulling from that answer, the rank of the matrix is the dimension of the column space, and the number of columns minus the rank is the dimension of the null space. So in the case of $\mathbb{Q}$ the rank tends towards full, which means your probability tends to 0. In the case of $\mathbb{F}_2$ the odds of a non-trivial null space tend to one; specifically, the odds that an $n \times n$ $(0,1)$ matrix has full rank are $\prod_{1 \leq k \leq n} (1 - 2^{-k})$ (thus your probability is $1 - \prod_{1 \leq k \leq n} (1 - 2^{-k})$).
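As a quick sanity check of that product formula (my own sketch, not part of the original thread), one can estimate the full-rank probability over $\mathbb{F}_2$ by Monte Carlo, doing Gaussian elimination on rows stored as bitmasks:

```python
import random

def rank_mod2(rows, n):
    # Gaussian elimination over F_2; each row is an int used as a bitmask.
    rank = 0
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]  # clear this column in every other row
        rank += 1
    return rank

def full_rank_prob_exact(n):
    # prod_{k=1}^{n} (1 - 2^{-k})
    p = 1.0
    for k in range(1, n + 1):
        p *= 1.0 - 2.0 ** (-k)
    return p

random.seed(0)
n, trials = 8, 20000
hits = sum(rank_mod2([random.getrandbits(n) for _ in range(n)], n) == n
           for _ in range(trials))
print(hits / trials, full_rank_prob_exact(n))  # both close to 0.2899
```

Already at $n=8$ the product is within a percent of its limit $\prod_{k\ge1}(1-2^{-k})\approx 0.2888$, so the full-rank probability stabilizes quickly.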
I am not sure it is the same. I am asking if there is a non-zero vector with only integer coordinates in the null space. In my case all operations are over $\mathbb{Z}$ and I don't see the mapping to your examples. – user117230 Dec 25 '13 at 19:31
That's very interesting, thank you. Just for my interest, which non-zero integer vector is in the null space of $M = \begin{pmatrix} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \end{pmatrix}$? – user117230 Dec 25 '13 at 19:46
Lattices are more restrictive than vector spaces (fewer scalars to work with); further, $\mathbb{Q}^n$ contains $\mathbb{Z}^n$, so if there is no answer in $\mathbb{Q}^n$ then there is no answer in $\mathbb{Z}^n$. If there is an answer in $\mathbb{Q}^n$ then there is an answer in $\mathbb{Z}^n$ (clear the denominators by scaling). – Andy Novocin Dec 25 '13 at 19:51
https://www.physicsforums.com/threads/multivariable-calculus-work-in-a-line-segment.901145/ | # Homework Help: Multivariable calculus: work in a line segment
1. Jan 22, 2017
### Granger
1. The problem statement, all variables and given/known data
Compute the work of the vector field $F(x,y)=(\frac{y}{x^2+y^2},\frac{-x}{x^2+y^2})$
in the line segment that goes from (0,1) to (1,0).
2. Relevant equations
3. The attempt at a solution
My attempt (please let me know if there is an easier way to do this)
I applied Green's theorem in the region between the square of vertices (1,0), (0,1), (-1,0), (0,-1), and the circumference centered in the origin with radius 1/2, both clockwise.
Since both curves are clockwise, and because $F$ is a vector field of class $C^1$ in the region between them, then
$\int_C F = \int_S F$ (C circumference and S square).
C is then described by the path $\gamma=(\frac{\cos t}{2},\frac{-\sin t}{2}) t\in]0,2\pi[$
We have $F(\gamma (t)) \cdot \gamma'(t)=1$ so $\int_C F = 2\pi = \int_S F$
Now because we want only the work in the line segment that goes from (0,1) to (1,0) we divide our result by 4 and obtain $\frac{\pi}{2}$
My doubt here is whether this is correct, especially the final step... I also wonder if there was an easier way to approach the problem. I first thought of applying the fundamental theorem of calculus, but we can't because $F$ is not conservative. Then I tried the definition, but we end up with a hard integral to compute. So I ended up with this...
Thanks for the help.
Last edited by a moderator: Jan 22, 2017
2. Jan 22, 2017
### Ray Vickson
Show us the actual integral you get when you apply the definition. I would say that the only correct way to do the problem is by applying the definition; the other things you did have no relation at all to the problem as originally posed.
3. Jan 22, 2017
### pasmith
You can set $x = r(\theta)\cos\theta$, $y = r (\theta)\sin\theta$ and you don't actually need to know that $r(\theta) = (\cos \theta + \sin \theta)^{-1}$ or what $r'(\theta)$ is, because after multiplying it all out and collecting terms it will simplify considerably.
4. Jan 22, 2017
### LCKurtz
You describe the line segment from $(0,1)$ to $(1,0)$. Presumably a straight line. What does that have to do with the circle and square you describe?
Also, you might edit your post and use double instead of single \$'s to display your tex. Or use \$'s for inline.

5. Jan 22, 2017

### Granger

Thanks for all the replies. The integral I obtained by definition was $\int \frac{1}{2t^2-2t+1} dt$. Any suggestions on how to solve this integral the simplest way?

6. Jan 22, 2017

### LCKurtz

You haven't answered this question: I would complete the square in the denominator. But you need to explain to us what integral you are actually calculating and show your steps so we know what you are talking about.

7. Jan 23, 2017

### Granger

My apologies, the line segment can be described by $\gamma (t) = (t,1-t)$, $t$ from 0 to 1. Then I apply the definition $\int F(\gamma (t)) \cdot \gamma ' (t) dt$.

8. Jan 23, 2017

### Ray Vickson

That definition looks wrong. For example, why could I not take $\vec{\gamma}(u) = (u^2, 1-u^2)$ and then have $\int_0^1 \vec{F}(\vec{\gamma}(u)) \cdot \vec{\gamma}'(u) \, du$?
9. Jan 23, 2017
### LCKurtz
OK, that's what I thought you might be doing. That's a dot product in there, which I have inserted.
I think you could Ray. Isn't that just a different parameterization?
Anyway @Granger my suggestion earlier still stands. Complete the square and find an appropriate trig substitution and it will all work out.
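For what it's worth, a quick numerical check (my own sketch, not posted in the thread) confirms that completing the square gives the antiderivative $\arctan(2t-1)$, so $\int_0^1 \frac{dt}{2t^2-2t+1} = \arctan(1)-\arctan(-1) = \frac{\pi}{2}$:

```python
import math

def integrand(t):
    # F(gamma(t)) . gamma'(t) with gamma(t) = (t, 1 - t) reduces to
    # ((1 - t) + t) / (t^2 + (1 - t)^2) = 1 / (2t^2 - 2t + 1)
    return 1.0 / (2.0 * t * t - 2.0 * t + 1.0)

# Composite Simpson's rule on [0, 1]
n = 1000  # number of subintervals (even)
h = 1.0 / n
s = integrand(0.0) + integrand(1.0)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(i * h)
approx = s * h / 3.0

print(approx, math.pi / 2)  # both ~ 1.5707963
```

This agrees with the OP's Green's theorem computation, $2\pi$ divided by 4 by the symmetry of the square.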
10. Jan 23, 2017
### Ray Vickson
Yes, it is a different parametrization, but the question I was aiming at was whether that could alter the outcome. I think I know the answer, but I was hoping the OP would ponder the issue.
11. Jan 24, 2017
### Granger
Looks wrong? What do you mean? Well, I know that with another path the work is still the same (unless the direction is opposite, in which case we would have the opposite sign).
I'm going to try then, thanks.
12. Jan 24, 2017
### Ray Vickson
What I mean is nothing: I was thinking of something else and made a stupid mistake. Your expression is OK, or would be if you could "vectorize" it.
https://zbmath.org/?q=an:0578.22012 |
## Divergent trajectories of flows on homogeneous spaces and Diophantine approximation. (English) Zbl 0578.22012
Let $$G$$ be a connected Lie group and $$\Gamma$$ be a lattice in $$G$$; that is, $$\Gamma$$ is a discrete subgroup of $$G$$ such that $$G/\Gamma$$ admits a finite $$G$$-invariant measure. Let $$\{g_t\}$$ be a one-parameter subgroup of $$G$$. The action of $$\{g_t\}$$ on $$G/\Gamma$$ (on the left) is studied. At the present time the behavior of “typical” trajectories is satisfactorily understood. In general, however, it is very difficult to describe the behavior of exceptional trajectories.
The author assumes $$G/\Gamma$$ to be non-compact and investigates a special class of such exceptional trajectories: “divergent” trajectories. A trajectory is said to be divergent if eventually it leaves every compact subset of $$G/\Gamma$$. The author explains how the divergence of trajectories is related to a question involving Diophantine approximation for certain systems of linear forms. In particular, using some results of number theory, the author proves the following assertion. Let $$G=\mathrm{SL}(n,\mathbb R)$$ and $$\Gamma=\mathrm{SL}(n,\mathbb Z)$$, and let $$g_t$$ be a one-parameter subgroup of the form $$\text{diag}(e^{-t},\ldots, e^{-t},e^{\lambda t},\ldots, e^{\lambda t})$$. Then the set of points on bounded trajectories has full Hausdorff dimension (equal to that of $$G/\Gamma$$).
### MSC:
22E40 Discrete subgroups of Lie groups
43A85 Harmonic analysis on homogeneous spaces
37C10 Dynamics induced by flows and semiflows
11J99 Diophantine approximation, transcendental number theory
https://socratic.org/questions/at-target-a-24-oz-bottle-of-ketchup-sells-for-2-19-and-a-36-oz-bottle-of-ketchup |
# At Target a 24 oz bottle of ketchup sells for $2.19, and a 36 oz bottle of ketchup sells for $2.79. What is the linear equation that models the price of ketchup? What is the price of a 44 oz bottle?
The model equation is $P = 0.05 O + 0.99$

The price of the 44 oz bottle is $3.19

#### Explanation:

Let the linear equation be $24x+c=2.19 \ (1)$ and $36x+c=2.79 \ (2)$.

Subtracting (1) from (2) we get $12x = 0.60$, so $x = 0.05$, and therefore $c = 2.19 - \left(24 \cdot 0.05\right) = 2.19 - 1.20 = 0.99$.

So the linear equation that models the price of ketchup is $P = 0.05 O + 0.99$, where $P$ is the price and $O$ is the number of oz contained in the bottle.

Hence the price of the 44 oz bottle of ketchup is $P = 44 \cdot 0.05 + 0.99 = 2.20 + 0.99 = 3.19$ dollars. [Ans]
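The arithmetic above can be checked in a few lines (a sketch; the variable names are my own):

```python
# Two price points: (ounces, price in dollars)
x1, p1 = 24, 2.19
x2, p2 = 36, 2.79

m = (p2 - p1) / (x2 - x1)  # slope: dollars per oz
c = p1 - m * x1            # intercept
price_44 = m * 44 + c

print(round(m, 2), round(c, 2), round(price_44, 2))  # 0.05 0.99 3.19
```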
http://mathhelpforum.com/advanced-statistics/213461-derivative-two-parameter-gamma-distribution.html | # Thread: Derivative of a Two-Parameter Gamma Distribution
1. ## Derivative of a Two-Parameter Gamma Distribution
Hello Math Forums,
This is my first time posting here so I apologize for not knowing all the functions of the board, but I just have a stats question that I am unable to solve.
I am trying to determine the derivative of a two parameter Gamma distribution, dg(x;α,β)/d(x), using the gamma distribution shown below.
I would greatly appreciate any help on this.
Thank you.
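The attached density does not survive in this copy of the thread, but assuming the usual shape–scale parameterization $g(x;\alpha,\beta)=\frac{x^{\alpha-1}e^{-x/\beta}}{\Gamma(\alpha)\beta^{\alpha}}$, the product and chain rules give $\frac{dg}{dx} = g(x)\left(\frac{\alpha-1}{x}-\frac{1}{\beta}\right)$. A finite-difference sanity check of that formula (my own sketch, not from the thread):

```python
import math

def gamma_pdf(x, alpha, beta):
    # Two-parameter gamma density, shape alpha > 0, scale beta > 0 (assumed form)
    return x ** (alpha - 1) * math.exp(-x / beta) / (math.gamma(alpha) * beta ** alpha)

def gamma_pdf_deriv(x, alpha, beta):
    # d/dx of the density via the product and chain rules
    return gamma_pdf(x, alpha, beta) * ((alpha - 1) / x - 1 / beta)

x, alpha, beta, h = 1.7, 2.5, 1.3, 1e-6
fd = (gamma_pdf(x + h, alpha, beta) - gamma_pdf(x - h, alpha, beta)) / (2 * h)
print(abs(fd - gamma_pdf_deriv(x, alpha, beta)) < 1e-6)  # True
```

As expected, the derivative vanishes at the mode $x=(\alpha-1)\beta$ when $\alpha>1$.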
2. ## Re: Derivative of a Two-Parameter Gamma Distribution
Hey leew0112.
http://mathforum.org/kb/thread.jspa?threadID=2357807 |
Topic: Fifteen papers published by Geometry & Topology Publications
Replies: 0
Geometry and Topology
Fifteen papers published by Geometry & Topology Publications
Posted: Mar 20, 2012 12:00 PM
Thirteen papers have been published by Algebraic & Geometric Topology
(1) Algebraic & Geometric Topology 12 (2012) 131-153
Indecomposable PD_3-complexes
by Jonathan A Hillman
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p008.xhtml
DOI: 10.2140/agt.2012.12.131
(2) Algebraic & Geometric Topology 12 (2012) 155-213
Locally symmetric spaces and K-theory of number fields
by Thilo Kuessner
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p009.xhtml
DOI: 10.2140/agt.2012.12.155
(3) Algebraic & Geometric Topology 12 (2012) 215-233
On volumes of hyperbolic orbifolds
by Ilesanmi Adeboye and Guofang Wei
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p010.xhtml
DOI: 10.2140/agt.2012.12.215
(4) Algebraic & Geometric Topology 12 (2012) 235-265
Generalized Mom-structures and ideal triangulations of 3-manifolds with nonspherical boundary
by Ekaterina Pervova
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p011.xhtml
DOI: 10.2140/agt.2012.12.235
(5) Algebraic & Geometric Topology 12 (2012) 267-291
Lagrangian mapping class groups from a group homological point of view
by Takuya Sakasai
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p012.xhtml
DOI: 10.2140/agt.2012.12.267
(6) Algebraic & Geometric Topology 12 (2012) 293-305
A note on Gornik's perturbation of Khovanov-Rozansky homology
by Andrew Lobb
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p013.xhtml
DOI: 10.2140/agt.2012.12.293
(7) Algebraic & Geometric Topology 12 (2012) 307-342
Spectra associated to symmetric monoidal bicategories
by Angelica M Osorno
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p014.xhtml
DOI: 10.2140/agt.2012.12.307
(8) Algebraic & Geometric Topology 12 (2012) 343-413
Higher cohomologies of modules
by Maria Calvo, Antonio M Cegarra and Nguyen T Quang
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p015.xhtml
DOI: 10.2140/agt.2012.12.343
(9) Algebraic & Geometric Topology 12 (2012) 415-420
Noninjectivity of the "hair" map
by Bertrand Patureau-Mirand
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p016.xhtml
DOI: 10.2140/agt.2012.12.415
(10) Algebraic & Geometric Topology 12 (2012) 421-433
Bounded orbits and global fixed points for groups acting on the plane
by Kathryn Mann
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p017.xhtml
DOI: 10.2140/agt.2012.12.421
(11) Algebraic & Geometric Topology 12 (2012) 435-448
Lusternik-Schnirelmann category and the connectivity of X
by Nicholas A Scoville
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p018.xhtml
DOI: 10.2140/agt.2012.12.435
(12) Algebraic & Geometric Topology 12 (2012) 449-467
Some bounds for the knot Floer tau-invariant of satellite knots
by Lawrence P Roberts
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p019.xhtml
DOI: 10.2140/agt.2012.12.449
(13) Algebraic & Geometric Topology 12 (2012) 469-492
Associahedra and weak monoidal structures on categories
by Zbigniew Fiedorowicz, Steven Gubkin and Rainer M Vogt
URL: http://www.msp.warwick.ac.uk/agt/2012/12-01/p020.xhtml
DOI: 10.2140/agt.2012.12.469
Two papers have been published by Geometry & Topology
(14) Geometry & Topology 16 (2012) 391-432
A cohomological characterisation of Yu's property A for metric spaces
by Jacek Brodzki, Graham A Niblo and Nick Wright
URL: http://www.msp.warwick.ac.uk/gt/2012/16-01/p007.xhtml
DOI: 10.2140/gt.2012.16.391
(15) Geometry & Topology 16 (2012) 433-473
Chow rings and decomposition theorems for families of K3 surfaces and Calabi-Yau hypersurfaces
by Claire Voisin
URL: http://www.msp.warwick.ac.uk/gt/2012/16-01/p008.xhtml
DOI: 10.2140/gt.2012.16.433
Abstracts follow
(1) Indecomposable PD_3-complexes
by Jonathan A Hillman
We show that if X is an indecomposable PD_3-complex and pi_1(X) is the
fundamental group of a reduced finite graph of finite groups but is
neither Z nor Z+Z/2Z then X is orientable, the underlying graph is a
tree, the vertex groups have cohomological period dividing 4 and all
but at most one of the edge groups is Z/2Z. If there are no
exceptions then all but at most one of the vertex groups is dihedral
of order 2m with m odd. Every such group is realized by some
PD_3-complex. Otherwise, one edge group may be Z/6Z. We do not know
of any such examples.
We also ask whether every PD_3-complex has a finite covering space
which is homotopy equivalent to a closed orientable 3-manifold, and we
propose a strategy for tackling this question.
(2) Locally symmetric spaces and K-theory of number fields
by Thilo Kuessner
For a closed locally symmetric space M=Gamma \ G/K and a
representation rho from G to GL(N,C) we consider the pushforward of
the fundamental class in H_*(BGL(overline{Q})) and a related invariant
in K_*(overline{Q}) otimes Q. We discuss the nontriviality of this
invariant and we generalize the construction to cusped locally
symmetric spaces of R-rank one.
(3) On volumes of hyperbolic orbifolds
by Ilesanmi Adeboye and Guofang Wei
We use HC Wang's bound on the radius of a ball embedded in the
fundamental domain of a lattice of a semisimple Lie group to construct an
explicit lower bound for the volume of a hyperbolic n-orbifold.
(4) Generalized Mom-structures and ideal triangulations of 3-manifolds with nonspherical boundary
by Ekaterina Pervova
The so-called Mom-structures on hyperbolic cusped 3-manifolds without
boundary were introduced by Gabai, Meyerhoff, and Milley, and used by
them to identify the smallest closed hyperbolic manifold. In this work
we extend the notion of a Mom-structure to include the case of
3-manifolds with nonempty boundary that does not have spherical
components. We then describe a certain relation between such
generalized Mom-structures, called protoMom-structures, internal on a
fixed 3-manifold N, and ideal triangulations of N; in addition, in the
case of nonclosed hyperbolic manifolds without annular cusps, we
describe how an internal geometric protoMom-structure can be
constructed starting from the Epstein--Penner or Kojima
decomposition. Finally, we exhibit a set of combinatorial moves that
relate any two internal protoMom-structures on a fixed N to each
other.
(5) Lagrangian mapping class groups from a group homological point of view
by Takuya Sakasai
We focus on two kinds of infinite index subgroups of the mapping class
group of a surface associated with a Lagrangian submodule of the first
homology of a surface. These subgroups, called Lagrangian mapping
class groups, are known to play important roles in the interaction
between the mapping class group and finite-type invariants of
3-manifolds. In this paper, we discuss these groups from a group
(co)homological point of view. The results include the determination
of their abelianizations, lower bounds of the second homology and
remarks on the (co)homology of higher degrees. As a byproduct of this
investigation, we determine the second homology of the mapping class
group of a surface of genus 3.
(6) A note on Gornik's perturbation of Khovanov-Rozansky homology
by Andrew Lobb
We show that the information contained in the associated graded vector
space to Gornik's version of Khovanov--Rozansky knot homology is
equivalent to a single even integer s_n(K). Furthermore we show that
s_n is a homomorphism from the smooth knot concordance group to the
integers. This is in analogy with Rasmussen's invariant coming from a
perturbation of Khovanov homology.
(7) Spectra associated to symmetric monoidal bicategories
by Angelica M Osorno
We show how to construct a Gamma-bicategory from a symmetric monoidal
bicategory and use that to show that the classifying space is an
infinite loop space upon group completion. We also show a way to
relate this construction to the classic Gamma-category construction
for a permutative category. As an example, we use this machinery to
construct a delooping of the K-theory of a rig category as defined by
Baas, Dundas and Rognes [London Math. Soc. Lecture Note Ser. 308,
Cambridge Univ. Press (2004) 18--45].
(8) Higher cohomologies of modules
by Maria Calvo, Antonio M Cegarra and Nguyen T Quang
If C is a small category, then a right C-module is a contravariant
functor from C into abelian groups. The abelian category Mod_C of
right C-modules has enough projective and injective objects, and the
groups Ext^n_Mod_C(B,A) provide the basic cohomology theory for
C-modules. In this paper we introduce, for each integer r>0, an
approach for a level-r cohomology theory for C-modules by defining
dedicated to. Applications to the homotopy classification of braided
and symmetric C-fibred categorical groups and their homomorphisms are
given.
(9) Noninjectivity of the "hair" map
by Bertrand Patureau-Mirand
Kricker constructed a knot invariant Z^rat valued in a space of
Feynman diagrams with beads. When composed with the "hair" map H, it
gives the Kontsevich integral of the knot. We introduce a new grading
on diagrams with beads and use it to show that a nontrivial element
constructed from Vogel's zero divisor in the algebra Lambda is in the
kernel of H. This shows that H is not injective.
(10) Bounded orbits and global fixed points for groups acting on the plane
by Kathryn Mann
Let G be a group acting on the plane by orientation-preserving
homeomorphisms. We show that a tight bound on orbits implies a global
fixed point. Precisely, if for some k>0 there is a ball of radius r >
(1/sqrt{3})k such that each point x in the ball satisfies ||g(x) -
h(x)||<=k for all g, h in G, and the action of G satisfies a
nonwandering hypothesis, then the action has a global fixed point. In
particular any group of measure-preserving, orientation-preserving
homeomorphisms of the plane with uniformly bounded orbits has a global
fixed point. The constant (1/sqrt{3})k is sharp.
As an application, we also show that a group acting on the plane by
diffeomorphisms with orbits bounded as above is left orderable.
(11) Lusternik-Schnirelmann category and the connectivity of X
by Nicholas A Scoville
We define and study a homotopy invariant called the connectivity
weight to compute the weighted length between spaces X and Y. This is
an invariant based on the connectivity of A_i, where A_i is a space
attached in a mapping cone sequence from X to Y. We use the
Lusternik-Schnirelmann category to prove a theorem concerning the
connectivity of all spaces attached in any decomposition from X to Y.
This theorem is used to prove that for any positive rational number q,
there is a space X such that q=cl^{omega}(X), the connectivity
weighted cone-length of X. We compute cl^{omega}(X) and kl^{omega}(X)
for many spaces and give several examples.
(12) Some bounds for the knot Floer tau-invariant of satellite knots
by Lawrence P Roberts
This paper uses four dimensional handlebody theory to compute upper and lower
bounds for the Heegaard Floer tau-invariant of almost all satellite knots in
terms of the tau-invariants of the pattern and the companion.
(13) Associahedra and weak monoidal structures on categories
by Zbigniew Fiedorowicz, Steven Gubkin and Rainer M Vogt
This paper answers the following question: what algebraic structure on
a category corresponds to an A_n structure (in the sense of Stasheff)
on the geometric realization of its nerve?
(14) A cohomological characterisation of Yu's property A for metric spaces
by Jacek Brodzki, Graham A Niblo and Nick Wright
We develop a new framework for cohomology of discrete metric spaces
and groups which simultaneously generalises group cohomology, Roe's
coarse cohomology, Gersten's \ell^\infty-cohomology and Johnson's
bounded cohomology. In this framework we give an answer to Higson's
question concerning the existence of a cohomological characterisation
of Yu's property A, analogous to Johnson's characterisation of
amenability. In particular, we introduce an analogue of invariant
mean for metric spaces with property A. As an application we extend
Guentner's result that box spaces of a finitely generated group have
property A if and only if the group is amenable. This provides an
alternative proof of Nowak's result that the infinite dimensional cube
does not have property A.
(15) Chow rings and decomposition theorems for families of K3 surfaces and Calabi-Yau hypersurfaces
by Claire Voisin
The decomposition theorem for smooth projective morphisms pi: X->B
says that Rpi_*Q decomposes as a direct sum of R^i pi_*Q[-i]. We
describe simple examples where it is not possible to have such a
decomposition compatible with cup product, even after restriction to
Zariski dense open sets of B. We prove however that this is always
possible for families of K3 surfaces (after shrinking the base), and
show how this result relates to a result by Beauville and the author
[J. Algebraic Geom. 13 (2004) 417--426] on the Chow ring of a K3
surface S. We give two proofs of this result, the first one involving
K-autocorrespondences of K3 surfaces, seen as analogues of isogenies
of abelian varieties, the second one involving a certain decomposition
of the small diagonal in S^3 obtained by Beauville and the author. We
also prove an analogue of such a decomposition of the small diagonal
in X^3 for Calabi--Yau hypersurfaces X in P^n, which in turn provides
strong restrictions on their Chow ring.
Geometry & Topology Publications is an imprint of
Mathematical Sciences Publishers
https://hxypqr.wordpress.com/ | A crash introduction to BSD conjecture
We begin with the Weierstrass form of elliptic equation, i.e. look it as an embedding cubic curve in ${\mathop{\mathbb P}^2}$.
Definition 1 (Weierstrass form) ${E \hookrightarrow \mathop{\mathbb P}^2 }$, In general the form is given by,
$\displaystyle E: y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6 \ \ \ \ \ (1)$
If ${char F \neq 2,3}$, then, we have a much more simper form,
$\displaystyle y^2=x^3+ax+b, \Delta:=4a^3+27b^2\neq 0. \ \ \ \ \ (2)$
Remark 1
$\displaystyle \Delta(E)=\prod_{1\leq i\neq j\leq 3}(z_i-z_j)$
Where ${z_i^3+az_i+b=0}$ for ${1\leq i\leq 3}$.
We have two ways to classify the elliptic curves ${E}$ living over a fixed field ${F}$. \paragraph{j-invariant} The first one is by isomorphism over ${\bar F}$, i.e. we say two elliptic curves ${E_1,E_2}$ are equivalent iff
$\displaystyle \exists \rho:\bar F\rightarrow \bar F$
is an isomorphism such that ${\rho(E_1)=E_2}$.
Definition 2 (j-invariant) For a elliptic curve ${E}$, we have a j-invariant of ${E}$, given by,
$\displaystyle j(E)=1728\frac{4a^3}{4a^3+27b^2} \ \ \ \ \ (3)$
Why is the j-invariant important? Because it depends only on the equivalence class of ${E}$ under the classification by isomorphism over ${\bar F}$. But within one equivalence class there also exists a further structure, called a twist.
Definition 3 (Twist) For an elliptic curve ${E:y^2=x^3+ax+b}$, the elliptic curves twisted from ${E}$ are given by,
$\displaystyle E^{(d)}:y^2=x^3+ad^2x+bd^3 \ \ \ \ \ (4)$
So the set of twists of a given elliptic curve ${E}$ is classified by:
$\displaystyle H^1(Gal(\bar F/ F), Aut(E_{\bar F})) \ \ \ \ \ (5)$
Remark 2 Of course an elliptic curve ${E:y^2=x^3+ax+b}$ is the same as ${y^2=x^3+ad^4x+bd^6}$, via the isomorphism ${\mathop{\mathbb P}^2\rightarrow \mathop{\mathbb P}^2, (x,y,1)\rightarrow (d^2x,d^3y,1)}$.
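One can verify in exact rational arithmetic that a twist ${E^{(d)}}$ has the same $j$-invariant as ${E}$, since replacing $(a,b)$ by $(ad^2,bd^3)$ scales both $4a^3$ and $27b^2$ by $d^6$ (a quick sketch, not from the original post):

```python
from fractions import Fraction

def j_invariant(a, b):
    # j(E) = 1728 * 4a^3 / (4a^3 + 27b^2) for E: y^2 = x^3 + ax + b
    num = 4 * a ** 3
    return Fraction(1728) * num / (num + 27 * b ** 2)

a, b, d = Fraction(1), Fraction(2), Fraction(5)
# quadratic twist E^(d): y^2 = x^3 + a d^2 x + b d^3
print(j_invariant(a, b), j_invariant(a * d ** 2, b * d ** 3))  # 432/7 432/7
```

So twists are not distinguished by $j$; they are distinguished by the Galois cohomology class in (5).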
But this moduli space induced by isomorphism over ${F}$ is not good, morally speaking because it lacks a universal property, see \cite{zhang}. \paragraph{Level ${N}$ structure} We need an extension of the elliptic curve ${E}$; this is given by the integral model.
Definition 4 (Integral model) ${s:=Spec(\mathcal{O}_F)}$, ${E\rightarrow E_s}$. ${E_s}$ is regular and minimal; the construction of ${E_s}$ is as follows: we first construct ${\widetilde{E_s} }$ and then blow up. ${\widetilde E_s}$ is given by the Weierstrass equation with coefficients in ${\mathcal{O}_F}$.
Remark 3 The existence of integral model need Zorn’s lemma.
Definition 5 (Semistable) the singularities of the minimal model of ${E}$ are ordinary double points.
Remark 4 Semistable is a crucial property, related to Szpiro’s conjecture.
Definition 6 (Level ${n}$ structure)
$\displaystyle \phi: ({\mathbb Z}/N{\mathbb Z})_s^2\longrightarrow E[N] \ \ \ \ \ (6)$
${P=\phi(1,0), Q=\phi(0,1)}$. The Weil pairing of ${P,Q}$ is given by a root of unity in a cyclotomic field, i.e. ${\langle P,Q\rangle=\zeta_N\in \mu_{N}(s)}$
What happens if ${k={\mathbb C}}$? In this case we have an analytic isomorphism:
$\displaystyle E({\mathbb C})\simeq {\mathbb C}/\Lambda \ \ \ \ \ (7)$
Given by,
$\displaystyle {\mathbb C}/\Lambda \longrightarrow \mathop{\mathbb P}^2 \ \ \ \ \ (8)$
$\displaystyle z\longrightarrow (\mathfrak{P}(z), \mathfrak{P}'(z), 1 ) \ \ \ \ \ (9)$
Where ${\mathfrak{P}(z)=\frac{1}{z^2}+\sum_{\lambda\in \Lambda,\lambda\neq 0}(\frac{1}{(z-\lambda)^2}-\frac{1}{\lambda^2})}$, and the Weierstrass equation of ${E}$ is given by ${y^2=4x^3-60G_4(\Lambda)x-140G_6(\Lambda)}$. The full level ${N}$ structure is given by the lattice ${\Lambda={\mathbb Z}+{\mathbb Z}\tau}$ and the values of ${P,Q}$, i.e.
$\displaystyle P=\frac{1}{N}, Q=\frac{\tau}{N} \ \ \ \ \ (10)$
Where ${\tau}$ is well defined up to the action of
$\displaystyle \Gamma(N):=ker(SL_2({\mathbb Z})\rightarrow SL_2({\mathbb Z}/n{\mathbb Z})) \ \ \ \ \ (11)$
The key point is following:
Theorem 7 ${k={\mathbb C}}$, the moduli of elliptic curves with full level ${N}$ structure is identified with
$\displaystyle \mu_N^*\times H/\Gamma(N) \ \ \ \ \ (12)$
Now we discuss the Mordell-Weil theorem.
Theorem 8 (Mordell-Weil theorem)
$\displaystyle E(F)\simeq {\mathbb Z}^r\oplus E(F)_{tor}$
The proof of the theorem divide into two part:
1. Weak Mordell-Weil theorem, i.e. ${\forall m\in {\mathbb N}}$, ${E(F)/mE(F)}$ is finite.
2. There is a quadratic function,
$\displaystyle \|\cdot\|: E(F)\longrightarrow {\mathbb R} \ \ \ \ \ (13)$
${\forall c\in {\mathbb R}}$, ${E(F)_c=\{P\in E(F): \|P\|\leq c\}}$ is finite.
Remark 5 The proof follows the idea of infinite descent first found by Fermat. The height is called the Faltings height, introduced by Faltings. On the other hand, I point out that for an elliptic curve ${E}$ there is a naive height coming from the coefficients of the Weierstrass representation, i.e. ${\max\{|4a^3|,|27b^2|\}}$.
While the torsion part has a very clear understanding, thanks to the work of Mazur, the rank part of ${E({\mathbb Q})}$ is still very unclear; we have the BSD conjecture, which is far from fully understood until now.
But to understand the meaning of the conjecture, we need first to construct the L-function of the elliptic curve, ${L(s,E)}$.
\paragraph{Local points} We consider a local field ${F_v}$ and the completion map ${F\rightarrow F_{\nu}}$; then we have the short exact sequence,
$\displaystyle 0\longrightarrow E^0(F_{\nu})\longrightarrow E(F_{\nu})=E_s(\mathcal{O}_{F_{\nu}})\longrightarrow E_s(k_{\nu})\longrightarrow 0 \ \ \ \ \ (14)$
Topologically, ${E(F_{\nu})}$ is a union of discs indexed by ${E_s(k_{\nu})}$, and

$\displaystyle |E_s(k_{\nu})| \sim q_{\nu}+1=\# \mathop{\mathbb P}^1(k_{\nu}).$

Define ${a_{\nu}=\# \mathop{\mathbb P}^1(k_{\nu})-|E_s(k_{\nu})|}$; then we have:
Theorem 9 (Hasse bound)
$\displaystyle |a_{\nu}|\leq 2\sqrt{q_{\nu}} \ \ \ \ \ (15)$
Remark 6 I need to point out that the Hasse bound, in my opinion, is just an uncertainty-principle type of result; there should be a partial differential equation underlying the mystery.
So counting the points in ${E(F)}$ reduces to counting points in ${H^1(F_{\nu},E[m])}$, which reduces to counting the Selmer group ${S(E)[m]}$. We have a short exact sequence to explain the issue.

$\displaystyle 0\longrightarrow E(F)/mE(F) \longrightarrow S(E)[m] \longrightarrow Sha(E)[m] \longrightarrow 0 \ \ \ \ \ (16)$
I mention the Goldfeld-Szpiro conjecture here: ${\forall \epsilon>0}$, there ${\exists C_{\epsilon}(E)}$ such that:

$\displaystyle \# Sha(E)\leq C_{\epsilon}(E)N_{E/{\mathbb Q}}^{\frac{1}{2}+\epsilon} \ \ \ \ \ (17)$
\paragraph{L-series} Now I focus on the construction of ${L(s,E)}$, there are two different way to construct the L-series, one approach is the Euler product.
$\displaystyle L(s,E)=\prod_{\nu: bad}(1-a_{\nu}q_{\nu}^{-s})^{-1}\cdot \prod_{\nu:good}(1-a_{\nu}q_{\nu}^{-s}+q_{\nu}^{1-2s})^{-1} \ \ \ \ \ (18)$
Where ${a_{\nu}=0,1}$ or ${-1}$ when ${E_s}$ has bad reduction at ${\nu}$.
The second approach is via the Galois representation; one advantage is avoiding the integral model. Given a fixed prime ${l}$, we can consider the Tate module:
$\displaystyle T_l(E):=\varprojlim_{l^n} E[l^n] \ \ \ \ \ (19)$
Then, transporting along the different embeddings ${F\hookrightarrow \bar F}$, we get an action of ${Gal(\bar F/F)}$ on ${T_{l}(E)}$, so we can define ${D_{\nu}}$, the decomposition group of ${w}$ (an extension of ${\nu}$ to ${\bar F}$). We define ${I_{\nu}}$ to be the inertia group of ${D_{\nu}}$.
Then ${D_{\nu}/I_{\nu}}$ is generated by some Frobenius elements
$\displaystyle Frob_{\nu}x\equiv x^{q_{\nu}} \ (mod\ w),\forall x\in \mathcal{O}_{\bar F} \ \ \ \ \ (20)$
So we can define
$\displaystyle L_{\nu}(s,E)=\det(1-q_{\nu}^{-s}Frob_{\nu}|T_{l}(E)^{I_{\nu}})^{-1} \ \ \ \ \ (21)$
And then ${L(s,E)=\prod_{\nu}L_{\nu}(s,E)}$.
Faltings proved that ${L_{\nu}(s,E)}$ is an invariant of the isogeny class in the following sense:

Theorem 10 (Faltings) ${L_{\nu}(s,E)}$ is an isogeny invariant, i.e. ${E_1}$ is isogenous to ${E_2}$ iff for almost every ${\nu}$, ${L_{\nu}(s,E_1)=L_{\nu}(s,E_2)}$.
$\displaystyle L(s,E)=L(s-\frac{1}{2},\pi ) \ \ \ \ \ (22)$
Where ${\pi}$ comes from an automorphic representation of ${GL_2(A_F)}$. Now we give the statement of the BSD conjecture. ${R}$ is the regulator of ${E}$, i.e. the volume of the free part of ${E(F)}$ with respect to the Neron-Tate height pairing. ${\Omega}$ is the volume of ${\prod_{v|\infty}E(F_v)}$. Then we have,
1. ${ord_{s=1}L(s,E)=rank E(F)}$.
2. ${|Sha(E)|<\infty}$.
3. ${\lim_{s\rightarrow 1}L(s,E)(s-1)^{-rank(E)}=c\cdot \Omega(E)\cdot R(E)\cdot |Sha(E)|\cdot |E(F)_{tor}|^{-2}}$
Here ${c}$ is an explicitly given positive integer depending only on ${E_{\nu}}$ for ${\nu}$ dividing ${N}$.
SL_2(Z) and its congruence subgroups
We know we can always do the following thing:
$\displaystyle R\ commutative\ ring \longrightarrow \ "general\ linear\ group" \ GL_2(R) \ \ \ \ \ (1)$
Where
$\displaystyle GL_2(R):=\{\begin{pmatrix} a & b \\ c & d \end{pmatrix}: det \begin{pmatrix} a & b \\ c & d \end{pmatrix}\in R^*, a,b,c,d\in R\} \ \ \ \ \ (2)$
Remark 1 Why ${det\in R^*}$ and not ${det=1}$? We want ${GL_2(R)}$ to be exactly the invertible matrices, and a matrix over ${R}$ is invertible iff its determinant is a unit of ${R}$, not necessarily ${1}$.
Now we consider the subgroup ${SL_2(R)\subset GL_2(R)}$.
$\displaystyle SL_2(R)=\{\begin{pmatrix} a & b\\ c & d \end{pmatrix}: det \begin{pmatrix} a & b\\ c & d \end{pmatrix}=1, a,b,c,d\in R \} \ \ \ \ \ (3)$
We are most interested in the cases ${R={\mathbb Z}, {\mathbb Z}/N{\mathbb Z}}$. So how to investigate ${SL_2({\mathbb R})}$? We can look at its action on something; in particular, we look at its action on the Riemann sphere ${\hat {\mathbb C}={\mathbb C}\cup \{\infty\}}$ given by fractional linear maps:
$\displaystyle g(z):=\frac{az+b}{cz+d}, g(\infty)=\frac{a}{c} \ \ \ \ \ (4)$
Remark 2 What is a fractional linear map? This action carries much more information than the action on vectors, thanks to the existence of multiplication in ${{\mathbb C}}$ and the primitive element theorem. I always view a fractional linear map as something induced by the permutation of the roots of a polynomial of degree 2; this is true at least for fixed points, and admits a natural extension. So what about the higher dimensional generalization? Consider the transform of ${k-1}$ tuples induced by a polynomial of degree ${k}$?
Remark 3
1. ${SL_2({\mathbb R})/\pm I:=PSL_2({\mathbb R})}$, then ${PSL_2({\mathbb R}) }$ acts faithfully on ${\hat {\mathbb C}}$, i.e. except for the identity, every action is nontrivial. This is easy to prove; observe,
$\displaystyle \frac{az+b}{cz+d}=z,\forall z\in \hat{\mathbb C}\Longrightarrow \begin{pmatrix} a & b\\ c & d \end{pmatrix}=\begin{pmatrix} 1 & 0\\ 0&1 \end{pmatrix} or \begin{pmatrix} -1&0\\ 0&-1 \end{pmatrix} \ \ \ \ \ (5)$
2. The upper half plane ${H}$ is invariant under the action of ${PSL_2({\mathbb R})}$, i.e. ${\forall g\in PSL_2({\mathbb R})}$, ${gH=H}$. The proof is the following,
3. $\displaystyle \begin{array}{rcl} Im(\frac{az+b}{cz+d}) & = & Im(\frac{(az+b)(c\bar z+d)}{|cz+d|^2})\\ & = & \frac{(ad-bc)Im(z)}{|cz+d|^2}\\ & > & 0,\ due\ to\ ad-bc=1. \end{array}$
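The computation above can be sanity-checked numerically; in this sketch (the helper name `mobius` is mine) we sample random real matrices with ${ad-bc=1}$ and verify both positivity of the imaginary part and the identity ${Im(gz)=Im(z)/|cz+d|^2}$:

```python
import random

def mobius(g, z):
    """Fractional linear action of g = (a, b, c, d) on z."""
    a, b, c, d = g
    return (a * z + b) / (c * z + d)

random.seed(0)
for _ in range(100):
    a = random.uniform(0.5, 2.0)
    b = random.uniform(-2.0, 2.0)
    c = random.uniform(-2.0, 2.0)
    d = (1 + b * c) / a          # forces ad - bc = 1
    z = complex(random.uniform(-3, 3), random.uniform(0.1, 3.0))
    w = mobius((a, b, c, d), z)
    assert w.imag > 0                                    # gH = H
    assert abs(w.imag - z.imag / abs(c * z + d) ** 2) < 1e-9
```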
Now we focus on ${SL_2({\mathbb Z})}$ or, equivalently, ${PSL_2({\mathbb Z})}$. All the arguments for ${SL_2({\mathbb R})}$ make sense for
$\displaystyle \Gamma:= SL_2({\mathbb Z}), \bar \Gamma:=SL_2({\mathbb Z})/\pm I \ \ \ \ \ (6)$
Fix ${N\in {\mathbb N}}$, define,
$\displaystyle \Gamma(N):=\{\begin{pmatrix} a &b\\ c&d \end{pmatrix}, a,d \equiv 1(mod N), b,c\equiv 0(mod N)\} \ \ \ \ \ (7)$
Then ${\Gamma(N)}$ is the kernel of the map ${SL_2({\mathbb Z})\rightarrow SL_2({\mathbb Z}/N{\mathbb Z})}$, i.e. we have the short exact sequence,

$\displaystyle 1\longrightarrow \Gamma(N)\longrightarrow \Gamma\longrightarrow SL_2({\mathbb Z}/N{\mathbb Z})\longrightarrow 1 \ \ \ \ \ (8)$
Remark 4 The relationship of ${\Gamma(N)\subset \Gamma}$ is just like ${N{\mathbb Z}+1\subset {\mathbb Z}}$.
Definition 1 (Congruence group) A subgroup ${G}$ of ${\Gamma}$ is called a congruence group iff ${\exists N\in {\mathbb N}}$ with ${\Gamma(N)\subset G}$.
Example 1 We give two examples of congruence subgroups here.
1. $\displaystyle \Gamma_1(N)=\{\begin{pmatrix} 1& *\\ 0 &1 \end{pmatrix} mod N\} \ \ \ \ \ (9)$
2. $\displaystyle \Gamma_0(N)=\{\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}mod N\} \ \ \ \ \ (10)$
Definition 2 (Fundamental domain)
$\displaystyle F=\{z\in H:-\frac{1}{2}\leq Re(z)\leq \frac{1}{2}\ and |z|\geq 1\} \ \ \ \ \ (11)$
Now here is a theorem characterizing the fundamental domain.
Theorem 3 This domain ${F}$ is a fundamental domain for the action of ${\Gamma}$ on ${H}$.
Proof: The key point is that ${SL_2({\mathbb Z})}$ has two generators,

1. ${\tau_a: z\rightarrow z+a, \forall a\in {\mathbb Z}}$.

2. ${s: z\rightarrow -\frac{1}{z}}$.
Thanks to these two generators, which exactly divide the action of ${\Gamma}$ on ${H}$ into many scales, the fact that ${F}$ is a fundamental domain is an easy corollary. $\Box$
Remark 5 This is not rigorous; ${H}$ needs to be replaced by ${\hat H}$, but it is natural to modify this into a correct proof.
Remark 6 ${z_1,z_2\in \partial F}$ are ${\Gamma}$ equivalent iff ${Re(z_1)=\pm \frac{1}{2}}$ and ${z_2=z_1\mp 1}$, or ${z_1}$ is on the unit circle and ${z_2=-\frac{1}{z_1}}$.
Remark 7 If ${z\in F}$, then ${\Gamma_z=\pm I}$ except in the following three cases:

1. ${\Gamma_z=\pm \{I,s\}}$ if ${z=i}$.

2. ${\Gamma_z=\pm\{ I,s\tau, (s\tau)^2\}}$ if ${z=w=-\frac{1}{2}+\frac{\sqrt{-3}}{2}}$.

3. ${\Gamma_z=\pm\{I,\tau s, (\tau s)^2\}}$ if ${z=-\bar w=\frac{1}{2}+\frac{\sqrt{-3}}{2}}$.
Where ${\tau=\tau_1}$.
Remark 8 The group ${\bar \Gamma=SL_2({\mathbb Z})/\pm I}$ is generated by the two elements ${s}$, ${\tau}$. In other words, any fractional linear transform is a "word" in ${s,\tau,s^{-1},\tau^{-1}}$. But it is not a free group: in ${SL_2({\mathbb Z})}$ we have the relations ${s^2=-I,(s\tau)^3=-I}$.
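The generators ${s,\tau}$ give an explicit algorithm for moving any point of ${H}$ into ${F}$: translate until ${|Re(z)|\leq \frac{1}{2}}$, apply ${s}$ if ${|z|<1}$, and repeat. A minimal sketch (the name `reduce_to_F` is mine):

```python
def reduce_to_F(z, max_iter=1000):
    """Map z in the upper half plane into the fundamental domain F
    using tau: z -> z + 1 and s: z -> -1/z."""
    for _ in range(max_iter):
        z -= round(z.real)           # now -1/2 <= Re(z) <= 1/2
        if abs(z) >= 1 - 1e-12:
            return z                 # landed in F (up to rounding)
        z = -1 / z                   # apply s and repeat
    return z

w = reduce_to_F(complex(3.7, 0.1))
assert -0.5 - 1e-9 <= w.real <= 0.5 + 1e-9 and abs(w) >= 1 - 1e-9
```

Each application of ${s}$ strictly increases the imaginary part when ${|z|<1}$, which is why the loop terminates.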
The natural function space on ${F}$ consists of the meromorphic functions; under the map ${H\rightarrow D-\{0\}}$, such a function has a ${q}$-expansion,
$\displaystyle f(q)=\sum_{k\in {\mathbb Z}}a_kq^k \ \ \ \ \ (12)$
And there are only finitely many negative ${k}$ such that ${a_k\neq 0}$.
Dirichlet hyperbola method
1. Introduction
Theorem 1
$\displaystyle \sum_{1\leq n\leq x}d(n)=\sum_{1\leq n\leq x}[\frac{x}{n}]=xlogx+(2\gamma-1) x+O(\sqrt{x}) \ \ \ \ \ (1)$
Remark 1 I first thought about this problem 5 years ago; it cost me several days to find an answer. I did get something without the Dirichlet hyperbola method, weaker but morally comparable with the result obtained by the Dirichlet hyperbola method.
Remark 2 How to get the formula:
$\displaystyle \sum_{1\leq n\leq x}d(n)=\sum_{1\leq n\leq x}[\frac{x}{n}]? \ \ \ \ \ (2)$
In fact,
$\displaystyle \sum_{1\leq n\leq x}d(n)=\sum_{1\leq ab\leq x}1=\sum_{1\leq n\leq x}[\frac{x}{n}]\ \ \ \ \ (3)$
Which is the integer lattices under or lying on the hyperbola ${\{(a,b)|ab=x\}}$.
Remark 3 By trivial argument, we can bound the quantity as following way,
$\displaystyle \begin{array}{rcl} \sum_{1\leq n\leq x}[\frac{x}{n}] & = & \sum_{1\leq ab\leq x}1\\ & = & x\sum_{i=1}^x\frac{1}{i}-\sum_{i=1}^x\{\frac{x}{i}\}\\ & = &xlnx+\gamma x+O(x) \end{array}$
The error term is ${O(x)}$, which is too big. But fortunately we can use the symmetry of hyperbola to improve the error term.
Proof:
$\displaystyle \begin{array}{rcl} \sum_{1\leq n\leq x}d(n) & = & \sum_{ab\leq x}1\\ & = & \sum_{a\leq \sqrt{x}}[\frac{x}{a}]+\sum_{b\leq \sqrt{x}}[\frac{x}{b}]-\sum_{1\leq a,b\leq \sqrt{x}}1\\ & = & xlogx+(2\gamma-1)x+O(\sqrt{x}) \end{array}$
$\Box$
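The hyperbola identity in the proof can be checked numerically; this sketch (names and the hard-coded ${\gamma}$ are mine) computes ${\sum_{n\leq x}d(n)}$ via ${2\sum_{a\leq\sqrt x}[x/a]-[\sqrt x]^2}$ and compares with the main term:

```python
from math import isqrt, log

EULER_GAMMA = 0.5772156649015329

def divisor_sum(x):
    """sum_{n<=x} d(n) computed by the hyperbola identity
    sum_{ab<=x} 1 = 2*sum_{a<=sqrt(x)} [x/a] - [sqrt(x)]^2."""
    r = isqrt(x)
    return 2 * sum(x // a for a in range(1, r + 1)) - r * r

x = 10 ** 6
main_term = x * log(x) + (2 * EULER_GAMMA - 1) * x
error = divisor_sum(x) - main_term
# the error should be of size O(sqrt(x)), i.e. roughly 10^3 here
assert abs(error) < 10 * x ** 0.5
```

The identity reduces the work from ${O(x)}$ floor evaluations to ${O(\sqrt x)}$, which is the computational payoff of the symmetry.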
Theorem 2
Given a natural number ${k}$, use the hyperbola method together with induction and partial summation to show that

$\displaystyle \sum_{n\leq x}d_k(n) = xP_k(log x) + O(x^{1-\frac{1}{k}+\epsilon}) \ \ \ \ \ (4)$
where ${P_k(t)}$ denotes a polynomial of degree ${k-1}$ with leading term ${\frac{t^{k-1}}{(k-1)!}}$.
Remark 4 ${P_k(x)}$ is the residue of ${\zeta(s)^kx^ss^{-1}}$ at ${s=1}$.
Proof:
We can establish the dimension 3 case directly, which is the following asymptotic formula,

$\displaystyle \sum_{1\leq xy\leq n}[\frac{n}{xy}]=nP_3(log n)+O(n^{1-\frac{1}{3}+\epsilon}) \ \ \ \ \ (5)$
The approach is following, we first observe that
$\displaystyle \sum_{1\leq xy\leq n}[\frac{n}{xy}]=\sum_{xyz\leq n}1 \ \ \ \ \ (6)$
The problem transforms to getting an asymptotic formula for the lattice points under the 3 dimensional hyperbola. The first key point is that, morally, ${([n^{\frac{1}{3}}],[n^{\frac{1}{3}}],[n^{\frac{1}{3}}])}$ is the central point under the hyperbola.

Then we can divide the range into 3 parts, try to get an asymptotic formula for each part, and add them together. Assume we have:
1. ${A_x=\sum_{1\leq r\leq [n^{\frac{1}{3}}]}\sum_{1\leq yz\leq [n^{\frac{2}{3}}]}[\frac{r}{yz}]}$.

2. ${A_y=\sum_{1\leq r\leq [n^{\frac{1}{3}}]}\sum_{1\leq xz\leq [n^{\frac{2}{3}}]}[\frac{r}{xz}]}$.

3. ${A_z=\sum_{1\leq r\leq [n^{\frac{1}{3}}]}\sum_{1\leq xy\leq [n^{\frac{2}{3}}]}[\frac{r}{xy}]}$.
Then the task transforms to getting an asymptotic formula,
$\displaystyle A_x=A_y=A_z=xQ_2(logx)+O(x^{1-\frac{1}{3}+\epsilon}) \ \ \ \ \ (7)$
But we can do the same thing for ${\sum_{1\leq yz\leq [n^{\frac{2}{3}}]}[\frac{r}{yz}]}$ and then sum it up. This ends the proof. For general ${k\in {\mathbb N}}$ the story is the same, by induction: induct on ${k}$ and use the Fubini theorem to calculate ${\sum_{x_1...x_r\leq n}\frac{n}{x_1...x_r},\forall 1\leq r\leq k}$. $\Box$
There is a major unsolved problem called Dirichlet divisor problem.
$\displaystyle \sum_{n\leq x}d(n) \ \ \ \ \ (8)$
What is the error term? The conjecture is that the error term is ${O(x^{\theta}), \forall \theta>\frac{1}{4}}$; it is known that ${\theta=\frac{1}{4}}$ itself is not attainable.
Remark 5 To beat this problem, one needs some tools from algebraic geometry.
2. Several problems
${\forall k\in {\mathbb N}}$, is there an asymptotic formula for ${\sum_{t=1}^n\{\frac{kn}{t}\}}$ ?

${\forall k\in {\mathbb N}}$, ${f(n)}$ a polynomial of degree ${k}$: is there an asymptotic formula for ${\sum_{t=1}^n\{\frac{f(n)}{t}\}}$ ?

${\forall k\in {\mathbb N}}$, ${g(n)}$ a polynomial of degree ${k}$: is there an asymptotic formula for ${\sum_{t=1}^n\{\frac{n}{g(t)}\}}$ ?
Theorem 3 ${k\in {\mathbb N}}$, then we have
$\displaystyle \lim_{n\rightarrow \infty}\frac{\{\frac{kn}{1}\}+\{\frac{kn}{2}\}+...+\{\frac{kn}{n}\}}{n}=k(\sum_{i=1}^k\frac{1}{i}-lnk-\gamma) \ \ \ \ \ (9)$
Proof:
$\displaystyle \begin{array}{rcl} \frac{\{\frac{kn}{1}\}+\{\frac{kn}{2}\}+...+\{\frac{kn}{n}\}}{n} & = & \frac{\sum_{i=1}^n\frac{kn}{i}-\sum_{i=1}^n[\frac{kn}{i}]}{n}\\ & = & k(lnn+\gamma +\epsilon_n)-\frac{\sum_{i=1}^{kn}[\frac{kn}{i}]-\sum_{i=n+1}^{kn}[\frac{kn}{i}]}{n} \end{array}$
$\Box$
Now we try to estimate
$\displaystyle S_k(n)=\sum_{i=1}^{kn}[\frac{kn}{i}]-\sum_{i=n+1}^{kn}[\frac{kn}{i}] \ \ \ \ \ (10)$
In fact, we have,
$\displaystyle \begin{array}{rcl} S_k(n) & = & (2\sum_{i=1}^{[\sqrt{kn}]}[\frac{kn}{i}]-[\sqrt{kn}]^2)-(\sum_{i=1}^k[\frac{kn}{i}]-kn)\\ & = & 2\sum_{i=1}^{[\sqrt{kn}]}\frac{kn}{i}-\sum_{i=1}^k\frac{kn}{i}+2\{\sqrt{kn}\}[\sqrt{kn}]+\{\sqrt{kn}\}^2-2\sum_{i=1}^{[\sqrt{kn}]}\{\frac{kn}{i}\}+\sum_{i=1}^k\{\frac{kn}{i}\}\\ & = & 2kn(ln[\sqrt{kn}]+\gamma+\epsilon_{[\sqrt{kn}]})-kn\sum_{i=1}^k\frac{1}{i}+r(n)\\ & = & knln(kn)+kn(2\gamma-\sum_{i=1}^k\frac{1}{i})+r'(n)\\ & = & knln n+kn(2\gamma+lnk-\sum_{i=1}^k\frac{1}{i})+r'(n) \end{array}$
Where ${-3\sqrt{n}<r(n)<3\sqrt{n}}$ and ${-3\sqrt{n}<r'(n)<3\sqrt{n}}$.
So by 1 we know,
$\displaystyle \begin{array}{rcl} \frac{\{\frac{kn}{1}\}+...+\{\frac{kn}{n}\}}{n} & = & k(lnn+\gamma+\epsilon_n)-klnn-k(2\gamma+lnk-\sum_{i=1}^k\frac{1}{i})+\frac{r'(n)}{n}\\ & = & k(\sum_{i=1}^k\frac{1}{i}-lnk-\gamma)+\frac{r'(n)}{n}+k\epsilon_n \end{array}$
So we have,
$\displaystyle \lim_{n\rightarrow \infty}\frac{\{\frac{kn}{1}\}+...+\{\frac{kn}{n}\}}{n} =k(\sum_{i=1}^k\frac{1}{i}-lnk-\gamma)=k\epsilon_k \ \ \ \ \ (11)$
Remark 6 In fact we can get ${0<k\epsilon_k<1}$ by combining the theorem 3 and 1.
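The limit in theorem 3 is easy to test numerically; the sketch below (names and the hard-coded ${\gamma}$ are mine) compares the empirical mean of ${\{kn/t\}}$ with ${k(\sum_{i=1}^k\frac{1}{i}-\ln k-\gamma)}$:

```python
from math import log

EULER_GAMMA = 0.5772156649015329

def frac_mean(k, n):
    """({kn/1} + {kn/2} + ... + {kn/n}) / n"""
    kn = k * n
    return sum(kn / t - kn // t for t in range(1, n + 1)) / n

def predicted_limit(k):
    """k * (H_k - ln k - gamma), the claimed limit."""
    harmonic_k = sum(1 / i for i in range(1, k + 1))
    return k * (harmonic_k - log(k) - EULER_GAMMA)

# the error in the proof is O(1/sqrt(n)), so n = 200000 gives ~3 digits
for k in (1, 2, 3):
    assert abs(frac_mean(k, 200000) - predicted_limit(k)) < 0.02
```

For ${k=1}$ the predicted limit is the classical ${1-\gamma\approx 0.4228}$.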
3. Lattice points in ball
Gauss, by packing the circle with unit squares, got a rough estimate,
$\displaystyle \sum_{n\leq x}r_2(n)=\pi x+O(\sqrt{x}) \ \ \ \ \ (12)$
In the same way one can obtain,
$\displaystyle \sum_{n\leq x}r_k(n)=\rho_kx^{\frac{k}{2}}+O(x^{\frac{k-1}{2}}) \ \ \ \ \ (13)$
Remark 7 Where ${\rho_k=\frac{\pi^{\frac{k}{2}}}{\Gamma(\frac{k}{2}+1)}}$ is the volume of the unit ball in ${k}$ dimension.
Dirichlet's hyperbola method works nicely for the lattice points in a ball of dimension ${k\geq 4}$. Lagrange proved that every natural number can be represented as the sum of four squares, i.e. ${r_4(n)>0}$, and Jacobi established the exact formula for the number of representations
$\displaystyle r_4(n)=8(2+(-1)^n)\sum_{d|n,d\ odd}d. \ \ \ \ \ (14)$
Hence we derive,
$\displaystyle \begin{array}{rcl} \sum_{n\leq x}r_4(n) & = & 8\sum_{m\leq x}(2+(-1)^m)\sum_{dm\leq x, d\ odd}d\\ & = & 8\sum_{m\leq x}(2+(-1)^m)(\frac{x^2}{4m^2}+O(\frac{x}{m}))\\ & = & 2x^2\sum_1^{\infty}(2+(-1)^m)m^{-2}+O(xlogx)\\ & = & 3\zeta(2)x^2+O(xlogx) = \frac{1}{2}(\pi x)^2+O(xlogx) \end{array}$
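Jacobi's formula can be verified against brute-force counting for small ${n}$; a sketch (function names are mine):

```python
def r4_brute(n):
    """Number of (a, b, c, d) in Z^4 with a^2 + b^2 + c^2 + d^2 = n."""
    m = int(n ** 0.5)
    return sum(1 for a in range(-m, m + 1) for b in range(-m, m + 1)
                 for c in range(-m, m + 1) for d in range(-m, m + 1)
                 if a * a + b * b + c * c + d * d == n)

def r4_jacobi(n):
    """Jacobi: r_4(n) = 8 * (2 + (-1)^n) * (sum of odd divisors of n)."""
    odd_div = sum(d for d in range(1, n + 1, 2) if n % d == 0)
    return 8 * (2 + (-1) ** n) * odd_div

# signs and order count, e.g. r_4(1) = 8 from (+-1, 0, 0, 0) in 4 slots
for n in range(1, 25):
    assert r4_brute(n) == r4_jacobi(n)
```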
This result extend easily for any ${k\geq 4}$, write ${r_k}$ as the additive convolution of ${r_4}$ and ${r_{k-4}}$, i.e.
$\displaystyle r_k(n)=\sum_{0\leq t\leq n}r_4(t)r_{k-4}(n-t) \ \ \ \ \ (15)$
Apply the above result for ${r_4}$ and execute the summation over the remaining ${k-4}$ squares by integration.
$\displaystyle \sum_{n\leq x}r_k(n)=\frac{(\pi x)^{\frac{k}{2}}}{\Gamma(\frac{k}{2}+1)}+O(x^{\frac{k}{2}-1}logx) \ \ \ \ \ (16)$
Remark 8 Notice that this improve the formula 12 which was obtained by the method of packing with a unit square. The exponent ${\frac{k}{2}-1}$ in 16 is the best possible because the individual terms of summation can be as large as the error term (apart from ${logx}$), indeed for ${k=4}$ we have ${r_4(n)\geq 16n}$ if ${n}$ is odd by the Jacobi formula. The only case of the lattice point problem for a ball which is not yet solved (i.e. the best possible error terms are not yet established) are for the circle(${k=2}$) and the sphere (${k=3}$).
Theorem 4
$\displaystyle \sum_{n\leq x}\tau(n^2+1)=\frac{3}{\pi}xlogx+O(x) \ \ \ \ \ (17)$
4. Application in finite fields
Suppose ${f(x)\in {\mathbb Z}[x]}$ is an irreducible polynomial. For each prime ${p}$, let
$\displaystyle \rho_f(p)=\# \ of \ solutions\ of f(x)\equiv 0(mod\ p) \ \ \ \ \ (18)$
By Lagrange's theorem we know ${\rho_f(p)\leq deg(f)}$. Is there an asymptotic formula for
$\displaystyle \sum_{p\leq x}\rho_f(p)? \ \ \ \ \ (19)$
A more general version: we can naturally generalize it to an algebraic variety.
$\displaystyle \rho_{f_1,...,f_k}(p)=\#\ of \ solutions\ of f_i(x)\equiv 0(mod\ p) ,\ \forall 1\leq i\leq k \ \ \ \ \ (20)$
Is there an asymptotic formula for
$\displaystyle \sum_{p\leq x}\rho_{f_1,...,f_k}(p)? \ \ \ \ \ (21)$
Example 1 We give an example to observe what is involved. ${f(x)=x^2+1}$. We know ${x^2+1\equiv 0 (mod \ p)}$ is solvable iff ${p\equiv 1 (mod\ 4)}$ or ${p=2}$. One side is easy, just by Fermat's little theorem; the other side needs Fermat's descent procedure, which of course can be done with Wilson's theorem. In this case,
$\displaystyle \sum_{p\leq n}\rho_f(p)=\# \ of \{primes \ of \ type\ 4k+1 \ in \ 1,2,...,n\} \ \ \ \ \ (22)$
Which is a special case of Dirichlet prime theorem.
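The dichotomy ${\rho_f(p)=2}$ for ${p\equiv 1\ (mod\ 4)}$, ${\rho_f(p)=0}$ for ${p\equiv 3\ (mod\ 4)}$, and ${\rho_f(2)=1}$ can be checked directly; a small sketch (names are mine):

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def rho(p):
    """rho_f(p) for f(x) = x^2 + 1: number of roots mod p."""
    return sum(1 for x in range(p) if (x * x + 1) % p == 0)

for p in (x for x in range(2, 60) if is_prime(x)):
    if p == 2:
        assert rho(p) == 1
    elif p % 4 == 1:
        assert rho(p) == 2   # two square roots of -1 mod p
    else:
        assert rho(p) == 0   # -1 is not a square mod p
```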
Let ${K}$ be an algebraic number field, i.e. the finite field extension of rational numbers, let
$\displaystyle \mathcal{O}_K=\{\alpha\in K: \alpha \ satisfies \ a\ monic \ polynomial\ in\ {\mathbb Z}[x]\} \ \ \ \ \ (23)$
Dedekind proved that,
Theorem 5
1. ${\mathcal{O}_K}$ is a ring, we call it the ring of integer of ${K}$.
2. He showed further that every non-zero ideal of ${\mathcal{O}_K}$ can be written as a product of prime ideals in ${\mathcal{O}_K}$ uniquely.
3. the index of every non-zero ideal ${I}$ in ${\mathcal{O}_K}$ is finite, i.e. ${[\mathcal{O}_K:I]<\infty}$, and we can define the norm induce by index.
$\displaystyle N(I):=[\mathcal{O}_K:I] \ \ \ \ \ (24)$
Then the norm is multiplicative on ideals, i.e. ${N(IJ)=N(I)N(J)}$ for all non-zero ideals ${I,J}$ of ${\mathcal{O}_K}$.
4. Now he construct the Dedekind Riemann zeta function,
$\displaystyle \zeta_K(s)=\sum_{N(I)\neq 0}\frac{1}{N(I)^s}=\prod_{J\ prime \ ideal\ }\frac{1}{1-\frac{1}{N(J)^s}},\ \forall Re(s)>1 \ \ \ \ \ (25)$
Now we consider the analog of the prime number theorem. Let ${\pi_K(x)=\#\{I \ prime\ ideal: N(I)\leq x\}}$; does there exist an asymptotic formula,
$\displaystyle \pi_K(x)\sim \frac{x}{ln x}\ as\ x\rightarrow \infty? \ \ \ \ \ (26)$
Given a prime ${p}$, we may consider the factorization of the ideal

$\displaystyle p\mathcal{O}_K=\mathfrak{P}_1^{e_1}\mathfrak{P}_2^{e_2}...\mathfrak{P}_k^{e_k} \ \ \ \ \ (27)$
Where the ${\mathfrak{P}_i }$ are distinct prime ideals in ${\mathcal{O}_K}$. But the question is how to find these ${\mathfrak{P}_i}$? For this question, there is a satisfying answer.
Lemma 6 (existence of primitive element) There always exists a primitive element in ${K}$, such that,

$\displaystyle K={\mathbb Q}(\theta) \ \ \ \ \ (28)$

Where ${\theta}$ is some algebraic number, whose minimal polynomial is ${f(x)\in {\mathbb Z}[x]}$.
Theorem 7 (Dedekind recipe) Take the polynomial ${f(x)}$ and factorize it in the polynomial ring ${{\mathbb F}_p[x]}$,
$\displaystyle f(x)\equiv f_1(x)^{e_1}...f_{r}(x)^{e_r}(mod \ p) \ \ \ \ \ (29)$
Consider ${\mathfrak{P}_i=(p, f_i(\theta)) \subset \mathcal{O}_K}$. Then apart from finite many primes, we have,
$\displaystyle p\mathcal{O}_K=\mathfrak{P}_1^{e_1}\mathfrak{P}_2^{e_2}...\mathfrak{P}_k^{e_k} \ \ \ \ \ (30)$
Where ${N(\mathfrak{P}_i)=p^{deg{f_i}}}$.
Remark 9 The excluded primes are those dividing the discriminant.
Now we can argue that 4 is morally the same as counting the ideals whose norm is divisible by ${p}$ in a certain ring of integers.

And we have the following, which is just the algebraic number field version of 2.
Theorem 8 (Weber) The ${\#}$ of ideals of ${\mathcal{O}_K}$ with norm ${\leq x}$ equals

$\displaystyle \rho_K x+O(x^{1-\frac{1}{d}}), \ where \ d=[K:{\mathbb Q}] \ \ \ \ \ (31)$
Diophantine approximation
I explain some general ideas in the theory of diophantine approximation, some of them my own, beginning with a toy model, then considering the application to the folklore Wirsing-Schmidt conjecture.
1. Dirichlet theorem, the toy model
The very basic theorem in the theory of Diophantine approximation is the well known Dirichlet approximation theorem, the statement is following.
Theorem 1 (Dirichlet theorem) For every irrational number ${\alpha}$, there are infinitely many rational numbers ${\frac{q}{p}}$ such that:
$\displaystyle |\alpha-\frac{q}{p}|<\frac{1}{p^2} \ \ \ \ \ (1)$
Remark 1 It is easy to see that the condition of irrationality is crucial. There is a best constant version of it: instead of ${1}$, the best constant in the suitable sense for theorem 1 is ${\frac{1}{\sqrt{5}}}$, attained at ${\frac{\sqrt{5}+1}{2}}$. The strategy of the proof of the best constant version involves the Farey sequences.
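The approximations of theorem 1 can be produced explicitly by continued-fraction convergents ${p/q}$, which all satisfy ${|x-p/q|<\frac{1}{q^2}}$. A floating-point sketch (the name `convergents` is mine; precision limits how many convergents are reliable):

```python
from fractions import Fraction
from math import floor, sqrt

def convergents(x, n):
    """First n continued-fraction convergents p/q of a real number x."""
    out = []
    h0, k0 = 1, 0                 # p_{-1}, q_{-1}
    a = floor(x)
    h1, k1 = a, 1                 # p_0, q_0
    out.append(Fraction(h1, k1))
    y = x - a
    for _ in range(n - 1):
        if y == 0:
            break
        y = 1 / y
        a = floor(y)
        y -= a
        # standard recurrence p_k = a p_{k-1} + p_{k-2}, same for q
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append(Fraction(h1, k1))
    return out

alpha = sqrt(2)
for c in convergents(alpha, 8):
    # every convergent p/q satisfies |alpha - p/q| < 1/q^2
    assert abs(alpha - c) < 1 / c.denominator ** 2
```

For ${\sqrt 2=[1;2,2,2,\dots]}$ this produces ${1, \frac{3}{2}, \frac{7}{5}, \frac{17}{12},\dots}$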
Now we begin to explain the strategies to attack the problem.
\paragraph{Argument 1, boxes principle} We begin with the easiest one, i.e. the argument via the box (pigeonhole) principle, which is the following,

Theorem 2 (Boxes principle) Given ${n\in {\mathbb N}}$ and two finite sets ${A=\{a_1,a_2,...,a_n,a_{n+1}\}}$ and ${B=\{b_1,...,b_{n}\}}$, if we have a map:
$\displaystyle f:A\longrightarrow B \ \ \ \ \ (2)$
Then there exists an element ${b_k\in B}$ such that there exist at least two elements ${a_i,a_j\in A}$ with ${f(a_i)=f(a_j)=b_k}$.
Proof: The proof is trivial. $\Box$
Now consider, ${\forall N\in {\mathbb N}}$, the sequence ${x,2x,...,Nx}$; then ${\{ix\}\in [0,1], \forall i\in \{1,2,...,N\}}$. Divide ${[0,1]}$ evenly into ${N}$ parts: ${[\frac{k-1}{N},\frac{k}{N}]}$. Then the linear structure is involved (which, in fact, plays a crucial role in the approach). And the key point is to look at ${\{nx\}}$ and integers.
\paragraph{Argument 2, continued fractions} We know that an irrational number ${x}$ has an infinite continued fraction:
$\displaystyle x=q_0+\frac{1}{q_1+\frac{1}{q_2+\frac{1}{q_3+....+\frac{1}{q_k+...}}}} \ \ \ \ \ (3)$
Then
$\displaystyle |x-q_0+\frac{1}{q_1+\frac{1}{q_2+\frac{1}{q_3+....+\frac{1}{q_k}}}}|\sim \frac{1}{(q_1q_2...q_{k-1})^2q_k} \ \ \ \ \ (4)$
And we have,
$\displaystyle \frac{1}{q_1+\frac{1}{q_2+\frac{1}{q_3+....+\frac{1}{q_k}}}}=\frac{a_n}{b_n}, (a_n,b_n)=1 \ \ \ \ \ (5)$
Then ${b_n=O(q_1...q_k)}$.
\paragraph{Argument 3, Bohr set argument} We begin with some kind of Bohr set:
$\displaystyle B_p=I-\cup_{q\in \{0,1,...,p-1\}}(\frac{q}{p}-\frac{1}{p^2},\frac{q}{p}+\frac{1}{p^2}) \ \ \ \ \ (6)$
The key point is that the shift of the Bohr sets along the vertical line is very slow, i.e. ${|B_p\cap B_{p+1}|}$ is small, which can be explained by
$\displaystyle \frac{k}{p+1}+\frac{1}{(p+1)^2}>\frac{k}{p}-\frac{1}{p^2} \ \ \ \ \ (7)$
So:
$\displaystyle \frac{1}{p^2}+\frac{1}{(p+1)^2}>\frac{k}{p(p+1)} \ \ \ \ \ (8)$
in ${|B_p \cap B_{p+1}|\sim \frac{1}{p(p+1)}}$. But in fact they are not really independent; as the number of Bohr sets increases, one can calculate the correlation. Thanks to the harmonic series increasing very slowly, we can get something nontrivial by this argument, but it seems not enough to cover the whole theorem 1.
\paragraph{Argument 4, mountain bootstrap argument} This argument is more clever than argument 3; although both arguments try to gain the property we want in theorem 1 by investigating the whole space ${[0,1]}$ rather than ${x}$ itself, this one is more clever.
Now I explain the main argument, it is nothing but sphere packing, with the set of balls
$\displaystyle \Omega=\{B_{p,q}:=(\frac{q}{p}-\frac{1}{p^2},\frac{q}{p}+\frac{1}{p^2})| \forall p\in {\mathbb N}, 1\leq q\leq p-1 \} \ \ \ \ \ (9)$
and define its subset
$\displaystyle \Omega_l=\{B_{p,q}:=(\frac{q}{p}-\frac{1}{p^2},\frac{q}{p}+\frac{1}{p^2})| \forall 1\leq p\leq l, 1\leq q\leq p-1 \} \ \ \ \ \ (10)$
Then ${\Omega_l\subset \Omega}$, and ${\Omega =\cup_{l\in {\mathbb N}}\Omega_l}$. If we can proof,
Lemma 3 For all ${l\in {\mathbb N}}$, there is a subset ${A_l}$ of ${\Omega-\Omega_l}$ such that ${\cup_{B\in A_l}B=[0,1]}$.
Remark 2 If we can proof 3, it is easy to see the theorem 1 follows.
Proof: The proof is very standard in analysis, maybe complex analysis? The key point is that we start with a ball ${B_{p,q}}$; which one it is is not important. The important thing is that we can take some ball ${B_{p',q'}}$ with its center in ${B_{p,q}}$, then consider ${B_{p',q'}\cup B_{p,q}}$ to extend ${B_{p,q}}$; then the boundary is larger, so we can extend again, step by step, just like a mountain bootstrap argument. So we end up in one of two possible endings,

1. The extension process extends ${B_{p,q}}$ to the whole space.

2. We cannot use the extension argument to extend to the whole space.

If we are in the first situation, then we are safe; there is nothing to prove. If we are in the second case, we anyway take a ball ${B_{p,q}=(\frac{q}{p}-\frac{1}{p^2},\frac{q}{p}+\frac{1}{p^2})}$ and try to find a good ball ${B_{p',q'}}$ to approximate ${B_{p,q}}$, but this is difficult… $\Box$
Remark 3 Argument 1 is too clever to generalize, argument 2 is standard, by the power of renormalization; arguments 3 and 4 have gaps… I remember I got a proof similar to argument 4 many years ago, but I forgot how to do it…
2. Schimidt conjecture
The Schmidt conjecture can be viewed as the generalization of the Dirichlet approximation theorem 1 to algebraic numbers; to do this, we need to define the height of an algebraic number.
Definition 4 We say a number ${\alpha\in {\mathbb C}}$ is a ${k}$-th order algebraic number if and only if the minimal polynomial of ${\alpha}$, ${f(x)=a_nx^n+...+a_1x+a_0, a_n\neq 0}$, ${f\in {\mathbb Z}[x]}$, has degree ${deg(f)=n=k}$.
Definition 5 (Height) Now we define the height of a ${k}$-th order algebraic number as ${H(\alpha):=\max\{\|a_n\|_h,\|a_{n-1}\|_h,...,\|a_0\|_h\}}$, where
$\displaystyle h(a_i)=\|a_i\|_{\infty} \ \ \ \ \ (11)$
Now we state the conjecture:
Theorem 6 (Wirsing-Schmidt conjecture) For every transcendental number ${x\in {\mathbb C}}$, there are infinitely many ${k}$-th order algebraic numbers ${\alpha}$ such that:
$\displaystyle |x-\alpha|<\frac{c_k}{H(\alpha)^{k+1}} \ \ \ \ \ (12)$
Where ${c_k}$ is a constant only related to ${k}$ but not ${x}$.
I point out that the conjecture is very related to the map:
$\displaystyle F:(x_1,...,x_n) \longrightarrow (\sigma_1(x_1,...,x_n),\sigma_2(x_1,...,x_n),...,\sigma_n(x_1,...,x_n)) \ \ \ \ \ (13)$
Where ${\sigma_k(x_1,...,x_n)=\sum_{1\leq i_1<...<i_k\leq n}x_{i_1}...x_{i_k}}$ is the ${k}$-th symmetric sum.
Remark 4 ${F}$ is a map ${{\mathbb C}^n\rightarrow {\mathbb C}^n}$; what we consider is its inverse ${G=F^{-1}}$, but ${G}$ is not smooth: it has singularities when ${x_i=x_j}$ for some ${i\neq j}$. And, as we know, the singularity depends on the quantity ${\Pi_{1\leq i< j\leq n}(x_i-x_j)}$.
Remark 5 I then say something about the geometric behaviour of the map ${G}$. What we have in mind is to consider the map ${G}$ as a distortion ${{\mathbb C}^n\rightarrow {\mathbb C}^n}$; then ${H(\alpha)}$ is (morally) just the pullback of the canonical metric on ${{\mathbb C}^n}$ to ${{\mathbb C}}$.
Discrete harmonic function in Z^n
There is some gap; in fact I can improve half of the argument of Discrete harmonic function (the pdf version is Discrete harmonic function in Z^n), but I still have a gap in dealing with the remaining half…
1. The statement of result
First of all, we give the definition of discrete harmonic function.
Definition 1 (Discrete harmonic function) We say a function ${f: {\mathbb Z}^n \rightarrow {\mathbb R}}$ is a discrete harmonic function on ${{\mathbb Z}^n}$ if and only if for any ${(x_1,...,x_n)\in {\mathbb Z}^n}$, we have:
$\displaystyle f(x_1,...,x_n)=\frac{1}{2^n}\sum_{(\delta_1,...,\delta_n )\in \{-1,1\}^n}f(x_1+\delta_1,...,x_n+\delta_n ) \ \ \ \ \ (1)$
In dimension 2, the definition reduces to:
Definition 2 (Discrete harmonic function in ${{\mathbb R}^2}$) We say a function ${f: {\mathbb Z}^2 \rightarrow {\mathbb R}}$ is a discrete harmonic function on ${{\mathbb Z}^2}$ if and only if for any ${(x_1,x_2)\in {\mathbb Z}^2}$, we have:
$\displaystyle f(x_1,x_2)=\frac{1}{4}\sum_{(\delta_1,\delta_2)\in \{-1,1\}^2}f(x_1+\delta_1,x_2+\delta_2 ) \ \ \ \ \ (2)$
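A quick sanity check on Definition 2: any affine function satisfies the four-corner mean-value property, since the ${\pm 1}$ shifts cancel by linearity. The following sketch (the particular function ${f(x,y)=3x-2y+1}$ is my own arbitrary choice, not taken from the text) verifies this on a small box:

```haskell
-- Verify the four-corner mean-value property (Definition 2) for the
-- affine function f(x,y) = 3x - 2y + 1 (an arbitrary illustrative choice).
f :: Int -> Int -> Rational
f x y = 3 * fromIntegral x - 2 * fromIntegral y + 1

-- Average of f over the four diagonal neighbours of (x, y).
cornerAvg :: Int -> Int -> Rational
cornerAvg x y = sum [f (x + dx) (y + dy) | dx <- [-1, 1], dy <- [-1, 1]] / 4

main :: IO ()
main = print (and [cornerAvg x y == f x y | x <- [-5 .. 5], y <- [-5 .. 5]])
-- prints True
```

The same check also passes for some genuinely nonlinear functions such as ${f(x,y)=x^2-y^2}$, since the ${+1}$ corrections to the two squared terms cancel.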
The result established in \cite{paper} is the following:
Theorem 3 (Liouville theorem for discrete harmonic functions in ${{\mathbb Z}^2}$) Given ${c>0}$, there exists a constant ${\epsilon>0}$ depending on ${c}$ such that if a discrete harmonic function ${f}$ on ${{\mathbb Z}^2}$ satisfies, for every ball ${B_R(x_0)}$ with radius ${R>R_0}$, that at least a ${1-\epsilon}$ portion of the points ${x\in B_R(x_0)}$ have ${|f(x)|\leq c}$, then ${f}$ is a constant function on ${{\mathbb Z}^2}$.
Remark 1 This type of result runs against intuition; at least, there is no such result in ${{\mathbb C}}$. For example, the existence of the Poisson kernel and the example given in \cite{paper} illustrate the issue.
Remark 2 There are reasons why such a result can hold in ${{\mathbb Z}^2}$ but not in ${{\mathbb C}}$:
1. The first reason is that for every radius ${R}$ there are only ${O(R^2)}$ lattice points in ${B_R(x)\cap{\mathbb Z}^2}$, so mass cannot concentrate very much in this setting.
2. The second is that there are no infinitely small scales in ${{\mathbb Z}^2}$, unlike in ${{\mathbb C}}$.
3. The third is that a function on ${{\mathbb Z}^2}$ is automatically locally integrable.
The generalization is the following:
Theorem 4 (Liouville theorem for discrete harmonic functions in ${{\mathbb Z}^n}$) Given ${c>0,n\in {\mathbb N}}$, there exists a constant ${\epsilon>0}$ depending on ${n,c}$ such that if a discrete harmonic function ${f}$ on ${{\mathbb Z}^n}$ satisfies, for every ball ${B_R(x_0)}$ with radius ${R>R_0}$, that at least a ${1-\epsilon}$ portion of the points ${x\in B_R(x_0)}$ have ${|f(x)|\leq c}$, then ${f}$ is a constant function on ${{\mathbb Z}^n}$.
In this note I give a proof of Theorem 4 and explicitly calculate a constant ${\epsilon_n>0}$ satisfying the condition in Theorem 3; the same method also yields a constant ${\epsilon_n}$ for Theorem 4. I point out that the constant calculated this way is not optimal, either in high dimensions or in dimension 2.
2. Some elementary properties of discrete harmonic functions
We warm up with some naive properties of discrete harmonic functions. The behaviour of bad points can be controlled; just by the isoperimetric inequality and the maximum principle we have the following result.
Definition 5 (Bad points) We divide the points of ${{\mathbb Z}^n}$ into a good part and a bad part: the good part ${I}$ consists of all points ${x}$ such that ${|f(x)|\leq c}$, and ${J}$ is the rest, so ${I\amalg J={\mathbb Z}^n}$.
For all ${B_R(0)}$ we define ${J_R:=J\cap B_R(0), I_R=I\cap B_R(0)}$ for convenience.
Theorem 6 (The distribution of bad points) The bad points ${J_R}$ in ${B_R(0)}$ decompose into several connected components, i.e.
$\displaystyle J_R=\amalg_{i\in S_R}A_i \ \ \ \ \ (3)$
and every component ${A_i}$ satisfies ${A_i\cap \partial B_R(0)\neq \emptyset}$.
Remark 3 We say ${A}$ is connected in ${{\mathbb Z}^n}$ iff for all ${x,y\in A}$ there is a path in ${A}$ connecting ${x}$ to ${y}$.
Remark 4 The meaning is that every bad component reaches the boundary ${\partial B_R(0)}$, so the bad points behave just like the tree structure shown in the graph.
Proof: A very naive observation is that for every connected compact domain ${\Omega\subset {\mathbb Z}^n}$ there is a function
$\displaystyle \lambda_{\Omega}: \mathring{\Omega}\times\partial\Omega \longrightarrow {\mathbb R} \ \ \ \ \ (4)$
such that ${\lambda_{\Omega}(x,y)\geq 0, \forall (x,y)\in \mathring{\Omega} \times \partial\Omega}$. And we have:
$\displaystyle f(x)=\sum_{y\in\partial \Omega}\lambda_{\Omega}(x,y)f(y) \ \ \ \ \ (5)$
This can be proved by induction on the diameter of ${\Omega}$. Now suppose there is a connected component contradicting Theorem 6; for simplicity assume the connected component is just ${\Omega}$. Then using formula 5 we know
$\displaystyle \begin{array}{rcl} \sup_{x\in \Omega}|f(x)| & = & \sup_{x\in \Omega}|\sum_{y\in\partial \Omega}\lambda_{\Omega}(x,y)f(y)| \\ & \leq & \sup_{y\in \partial \Omega}|f(y)| \\ & \leq & c \end{array}$
The last line comes from considering the points around ${\partial \Omega}$. But this leads to ${\forall x\in \Omega, |f(x)|\leq c}$, which contradicts the definition of ${\Omega}$. So we get the proof. $\Box$
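The maximum principle behind this step can also be observed numerically: iterating the four-corner averaging with prescribed boundary data on a small box keeps the interior sup below the boundary sup at every step. A minimal sketch (the grid size and the boundary data ${g(x,y)=x^2-y}$ are arbitrary illustrative choices of mine, not from the text):

```haskell
import qualified Data.Map as M

-- Jacobi iteration for the discrete Dirichlet problem on [0..n] x [0..n],
-- using the four-corner averaging of Definition 2.
n :: Int
n = 6

boundary, interior :: [(Int, Int)]
boundary = [(x, y) | x <- [0 .. n], y <- [0 .. n], x == 0 || y == 0 || x == n || y == n]
interior = [(x, y) | x <- [1 .. n - 1], y <- [1 .. n - 1]]

g :: (Int, Int) -> Double
g (x, y) = fromIntegral (x * x - y)  -- arbitrary boundary data

-- One sweep: keep boundary values, replace each interior value by the
-- average over its four diagonal neighbours.
step :: M.Map (Int, Int) Double -> M.Map (Int, Int) Double
step u = M.fromList $
  [(p, g p) | p <- boundary] ++
  [ ((x, y), sum [u M.! (x + dx, y + dy) | dx <- [-1, 1], dy <- [-1, 1]] / 4)
  | (x, y) <- interior ]

main :: IO ()
main = do
  let u0 = M.fromList ([(p, g p) | p <- boundary] ++ [(p, 0) | p <- interior])
      u  = iterate step u0 !! 500
  -- Maximum principle: the sup over the interior never exceeds the sup
  -- over the boundary.
  print (maximum [abs (u M.! p) | p <- interior]
           <= maximum [abs (g p) | p <- boundary])
-- prints True
```

Since each interior value at every sweep is a convex combination of boundary values and earlier interior values, the check holds at every iteration count, not just at the (approximate) fixed point.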
Now we make another observation, namely that the freedom in extending a discrete harmonic function on ${{\mathbb Z}^n}$ is limited.
Theorem 7 We can say something about the structure of the space of discrete harmonic functions on ${{\mathbb Z}^n}$: as you will see, once you add one value you get every value, i.e. we know a generating set for the harmonic functions on ${{\mathbb Z}^n}$.
Proof: For the two dimensional case the proof is read off directly from the graph. The ${n}$-dimensional case is similar. $\Box$
Remark 5 The generating set is well controlled; in the ${n}$-dimensional case it is just like ${n}$ orthogonal lines.
3. Sketch of the proof of Theorem 4
The proof runs as follows: we establish the following two lemmas in two different ways, and derive a contradiction.
\paragraph{First lemma}
Lemma 8 (Discrete Poisson kernel) We point out that there is a discrete Poisson kernel in ${{\mathbb Z}^n}$, given by:
$\displaystyle f(x)=\sum_{y\in \partial B_R(z)}\lambda_{B_R(z)}(x,y)f(y) \ \ \ \ \ (6)$
And the following properties are true:
1. ${\lambda_{B_R(z+h)}(x+h,y+h)=\lambda_{B_R(z)}(x,y)}$ , ${\forall x\in \Omega, h\in {\mathbb Z}^n}$.
2. $\displaystyle \lambda_{B_R(z)}(x,y)\rightarrow \rho_R(x,y) \ \ \ \ \ (7)$
Remark 6 The proof can be established via the central limit theorem and Brownian motion; see the material in the book of Stein \cite{stein}. The key point why Lemma 8 is useful for the proof is that this identity is always true ${\forall x\in B_R(0)}$, so we gain a lot of identities; these identities carry information which is contradicted by another argument.
\paragraph{Second lemma} The exponential decrease of mass.
Lemma 9 The mass decreases at least at an exponential rate.
Remark 7 The proof reduces to a random walk result and a careful look at level sets, reducing to the worst case by the Brunn–Minkowski inequality or the isoperimetric inequality.
\paragraph{Final argument} Combining Lemma 8 and Lemma 9 we get a contradiction in the following way: on one hand, the value of ${f}$ on ${\partial B_R(0)}$ increases too fast (exponentially, by Lemma 9); on the other hand, it lies in the integral expression involving the Poisson kernel, and the perturbation of the Poisson kernel is slow (polynomial rate, in fact)…
\newpage
{99} \bibitem{paper} L. Buhovsky, A. Logunov, E. Malinnikova, M. Sodin, A discrete harmonic function bounded on a large portion of ${\mathbb Z}^2$ is constant.
\bibitem{stein} E. M. Stein and R. Shakarchi, Functional Analysis.
Log average sarnak conjecture
This is a note concentrating on the log average Sarnak conjecture, after the work of Matomäki and Radziwiłł on estimates of multiplicative functions in short intervals. It gives an overview of the tools and methods used to deal with this conjecture.
1. Introduction
The Sarnak conjecture \cite{Sarnak} asserts that for any observable ${\{f(T^n(x_0))\}_{n=1}^{\infty}}$ coming from a deterministic system ${(T,X)}$, ${T:X\rightarrow X}$, where ${h(T)=0}$, ${x_0\in X, f\in C(X)}$, the correlation with the Liouville function is 0, i.e. they are orthogonal to each other. More precisely,
$\displaystyle \sum_{n\leq N}f(T^n(x_0))\lambda(n)=o(N) \ \ \ \ \ (1)$
This is a very naturally raised conjecture: the Liouville function is a representative of the primes, since we believe the distribution of the primes in ${\mathbb N}$ should be random.
It has been known, as observed by Landau \cite{Laudau}, that the simplest case,
$\displaystyle \sum_{n\leq N}\lambda(n)=o(N)$
is already equivalent to the prime number theorem. It is not difficult to deduce, by a similar argument, that the special case of the Sarnak conjecture in which the observable in (1) comes from a finite dynamical system is equivalent to the prime number theorem in arithmetic progressions. Besides these two classical results, maybe the first new result was established by Davenport,
Theorem 1 Let ${T:S_1\rightarrow S_1, T(x)=x+\alpha}$, where ${\alpha}$ is irrational; then the observables coming from ${(T,S_1)}$ are orthogonal to the Möbius function. Since ${\{e^{2\pi ikx}\}_{k\in \mathbb Z}}$ is a basis of ${C(S_1)}$, it suffices to prove,
$\displaystyle \sum_{n\leq N}\mu(n)e^{2\pi ikn\alpha}=o(N), \ \forall k\in {\mathbb Z}$
A lot of special cases of Sarnak's conjecture have been established. The parts I mainly care about are the following:
1. Interval exchange maps.
2. Skew product flows.
3. Observables coming from one dimensional zero entropy flows.
4. Nilsequences.
But in this note I do not want to explain the techniques and tools used to establish these results; instead I consider a conjecture equivalent to the Sarnak conjecture, named the Chowla conjecture, and explain the underlying insight of a suitable weak statement of it, i.e. the log average Chowla conjecture.
The note is organized in the following way. In the next section (2) we give a self-contained introduction to the tool called the Bourgain–Sarnak–Ziegler criterion, explain the relationship between this criterion and the sum-product phenomenon, and also give some more general criteria along the philosophy used to establish the Bourgain–Sarnak–Ziegler criterion, which may be useful in further developments combined with some other tools. The key point is to transform the sum from a linear sum into a bilinear sum and to decompose the bilinear sum into a diagonal part and an off-diagonal part; the assumption in the criterion shows the off-diagonal part is small, while the diagonal part is also small, by the trivial estimate together with the fact that the volume of the diagonal is small. This is very similar to a suitable Calderón–Zygmund decomposition.
In section (4) I try to give a sketch of the proof of the result of Matomäki and Radziwiłł, which is also a key tool for understanding the Sarnak conjecture, or equivalently the Chowla conjecture. The key points of the proof are the following:
1. Find a suitable Fourier identity.
2. Construct a multiplicatively-additively dense subset ${S}$, and prove that for the MR theorem to hold we need only prove it for ${S\cap [1,2,...,n]}$ instead of ${[1,2,...,n]}$.
3. Invoke the power of the Euler product formula: divide the whole interval into a lot of small intervals with smaller and smaller scales, plus a residual part. We view the part coming from each small scale as a main term and the residual part as a minor term.
4. Deal with the main term at every scale, by a combinatorial identity and the second moment method.
5. Find a sufficient decay estimate from each scale to the next smaller scale.
6. Deal with the minor term by the H… lemma.
Since the theorem of MR does not exhaust the method they developed, one can try to obtain further results with their method: Tao and Matomäki obtained the averaged version of the Chowla conjecture this way, and combining this argument with the entropy decrement argument they established that the 2-point pattern of the log average Chowla conjecture is true. Very recently Tao and his collaborators proved that the odd-pattern cases of the log average Chowla conjecture are true, combining an argument via the Furstenberg correspondence principle with the entropy decrement argument. But the even patterns of length greater than 2 seem much more difficult, and it seems something new is needed, to be combined with the method of MR, the entropy decrement argument and the Furstenberg correspondence principle, to make progress.
So, in section (5), we give a self-contained introduction to the entropy decrement argument of Tao, combined with the Furstenberg correspondence principle.
In the last section (6), I state some results, methods and philosophy I have obtained on nilsequences, and I wish to combine them with the previous methods to make some progress on the even-pattern case of the log average Chowla conjecture.
\newpage
2. The Bourgain–Sarnak–Ziegler criterion
We begin with the easiest one. This is the main result established in \cite{BSZ}; I try to give the main idea behind the proof. A non-quantitative version is the following,
Theorem 2 (Bourgain–Sarnak–Ziegler criterion, non-quantitative version) If for all distinct primes ${p,q>>1}$ we have:
$\displaystyle \sum_{n=1}^Nf(T^{pn}(x))\overline{f(T^{qn}(x))}=o(N) \ \ \ \ \ (2)$
Then for a multiplicative function ${g(n)}$ we have
$\displaystyle \sum_{n=1}^Ng(n)\overline{ f(T^n(x))}=o(N) \ \ \ \ \ (3)$
Remark 1 For simplicity we identify ${f(T^n(x)):=F(n)}$.
Remark 2
The idea is the following: break the sum into a bilinear one, so, of course, we multiply it by itself, i.e. we consider controlling,
$\displaystyle |\sum_{i=1}^Ng(n)\overline{ F(n)}|^2=\sum_{n=1}^N\sum_{m=1}^Ng(n)g(m)\overline{F(n)F(m)} \ \ \ \ \ (4)$
To control (4) we need to exploit the multiplicative property of ${g(n)}$: we have ${g(mn)=g(n)g(m),\forall\ m,n\in {\mathbb N}}$. We cannot get a good estimate for every term,
$\displaystyle g(n)g(m)\overline{F(n)F(m)} \ \ \ \ \ (5)$
The condition in our hands is the following,
$\displaystyle \sum_{n=1}^NF(pn)\overline{F(qn)}=o(N), \forall \ p,q\in \mathop{\mathbb P} \ \ \ \ \ (6)$
So, just as in the situation of the Cotlar–Stein lemma \cite{Cotlar-Stein lemma}, we wish to estimate as follows:
$\displaystyle \begin{array}{rcl} |\sum_{p\in W}\sum_{n\in V}F(pn)g(pn)| & \leq & \sum_{n\in V}|g(n)|\cdot |\sum_{p\in W}F(pn)g(p)| \\ & \leq &\sum_{n\in V}|\sum_{p \in W}F(pn)g(p)|\\ & \overset{Cauchy-Schwarz}\leq & |V|^{\frac{1}{2}}[\sum_{n\in V}|\sum_{p\in W}F(pn)g(p)|^2]^{\frac{1}{2}}\\ & = & |V|^{\frac{1}{2}}[\sum_{p_1,p_2\in W}\sum_{n\in V}F(p_1n)\overline{F(p_2n)}g(p_1)\overline{g(p_2)}]^{\frac{1}{2}}\\ \end{array}$
Then we consider divide the sum into diagonal part and non-diagonal part, as following,
$\displaystyle |V|^{\frac{1}{2}}[\sum_{p_1\neq p_2\in W}\sum_{n\in V}F(p_1n)\overline{F(p_2n)}g(p_1)\overline{g(p_2)}]^{\frac{1}{2}}+|V|^{\frac{1}{2}}[\sum_{p\in W}\sum_{n\in V}|F(pn)|^2]^{\frac{1}{2}} \ \ \ \ \ (7)$
But the first part is small, i.e.
$\displaystyle |V|^{\frac{1}{2}}[\sum_{p_1\neq p_2\in W}\sum_{n\in V}F(p_1n)\overline{F(p_2n)}g(p_1)\overline{g(p_2)}]^{\frac{1}{2}} =o(|W||V|) \ \ \ \ \ (8)$
Because of
$\displaystyle \sum_{n\in V}F(p_1n)\overline{F(p_2n)}=o(V), \forall p_1\neq p_2\in W \ \ \ \ \ (9)$
and the second part is small, i.e.
$\displaystyle |V|^{\frac{1}{2}}[\sum_{p\in W}\sum_{n\in V}|F(pn)|^2]^{\frac{1}{2}}=o(|W||V|) \ \ \ \ \ (10)$
Because the diagonal part is small in ${W\times W}$, and by the trivial inequality
$\displaystyle \sqrt{\sum_{n\in V}|F(pn)|^2}\leq |V|^{\frac{1}{2}} \ \ \ \ \ (11)$
But the method in Remark 2 does not make sense in every situation; we need to construct two suitable sets ${W,V}$ and then break up ${\{1,2,...,N\}}$ into ${W\times V}$, which means,
$\displaystyle \{1,2,...,N\}\sim W\times V+o(N) \ \ \ \ \ (12)$
But such ${W,V}$ can be constructed in this situation, thanks to the prime number theorem,
Theorem 3 (Prime number theorem)
$\displaystyle \pi(n)\sim \frac{n}{ln(n)} \ \ \ \ \ (13)$
Morally speaking, this is the statement that the primes, which are the generators of multiplicative functions, are not very sparse.
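As a hedged numerical illustration (not part of the original note), one can compare ${\pi(n)}$ with ${n/\ln n}$ at a small cutoff; the ratio tends to ${1}$, but only slowly, which the check below makes visible (the cutoff ${10^4}$ and the bracket ${(1,1.3)}$ are my arbitrary choices):

```haskell
-- Compare π(n) with n / ln n at a small cutoff.
primesUpTo :: Int -> [Int]
primesUpTo m = sieve [2 .. m]
  where
    sieve []       = []
    sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

main :: IO ()
main = do
  let n     = 10000 :: Int
      piN   = fromIntegral (length (primesUpTo n)) :: Double
      guess = fromIntegral n / log (fromIntegral n)
      ratio = piN / guess
  print (ratio > 1 && ratio < 1.3)  -- ratio is about 1.13 at this cutoff
-- prints True
```

Here ${\pi(10^4)=1229}$ while ${10^4/\ln 10^4\approx 1085.7}$, consistent with the well-known slow convergence in the prime number theorem.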
3. The van der Corput trick
Here is the statement of the van der Corput theorem:
Theorem 4 (van der Corput trick) Given a sequence ${ \{x_n\}_{n=1}^{\infty}}$ in ${ S_1}$, if for every ${ k\in N^*}$ the sequence ${ \{x_{n+k}-x_n\}}$ is uniformly distributed, then ${ \{x_n\}_{n=1}^{\infty}}$ is uniformly distributed.
I do not know how to establish this theorem without extra conditions, but the result is true at least for polynomial flows. \newpage Proof:
$\displaystyle \begin{array}{rcl} |\sum_{n=1}^Ne^{2\pi imQ(n)}|& = &\sqrt{(\sum_{n=1}^Ne^{2\pi imQ(n)})(\overline{\sum_{n=1}^Ne^{2\pi imQ(n)}})}\\ & = &\sqrt{\sum_{h_1=1}^N\sum_{n=1}^{N-h_1}e^{2\pi im(Q(n+h_1)-Q(n))}}\\ & = &\sqrt{\sum_{h_1=1}^N\sum_{n=1}^{N-h_1}e^{2\pi im \partial^1_{h_1}Q(n)}}\\ & \leq & \sqrt{\sum_{h_1=1}^N|\sum_{n=1}^{N-h_1}e^{2\pi im\partial^1_{h_1}Q(n)}|}\\ & \leq &\sqrt{\sum_{h_1=1}^N\sqrt{ \sum_{h_2=1}^N|\sum_{n=1}^{N-h_1}e^{2\pi im\partial^1_{h_2} \partial^1_{h_1}Q(n)} |}}\\ & \leq & \cdots \\ & \leq & \sqrt{\sum_{h_1=1}^N\sqrt{ \sum_{h_2=1}^N \sqrt{\cdots\sqrt{\sum_{h_{k-1}=1}^{N-h_{k-2}}|\sum_{n=1}^{N-h_{k-1}}e^{2\pi im\partial^1_{h_{k-1}}\cdots\partial^1_{h_1}Q(n)}|}}}} =o(1) \end{array}$
$\Box$
This type of trick can also establish the following result, which can be understood as a discretization of the Vinogradov lemma.
Remark 3
Uniform distribution result in ${ F_p}$: given ${Q(n)=a_kn^k+...+a_1n+a_0}$, the values ${\{Q(0),Q(1),...,Q(p-1)\}}$ converge to the uniform distribution on ${\{0,1,...,p-1\}}$ as ${p \rightarrow \infty}$.
Remark 4 But I definitely do not know how to establish a similar result when ${Q(n)=n^{-1}}$.
Remark 5
This trick can also help to establish estimates for the correlation of low complexity sequences with multiplicative functions, such as the result:
$\displaystyle S(x)=\sum_{n\le x}\left(\frac{n}{p}\right)\mu(n)=o(x)$
maybe with the help of the B–S–Z theorem.
\newpage
4. Matomäki and Radziwiłł's work
In this section we explain the main idea underlying the paper \cite{KAISA MATOMA 虉KI AND MAKSYM RADZIWILL}, but playing with a toy model, i.e. the corresponding corollary of the original result for Liouville's function.
Definition 5 (Liouville's function)
$\displaystyle \lambda(n)=(-1)^{\alpha_1+\alpha_2+...+\alpha_k}, \forall \ n=p_1^{\alpha_1}...p_k^{\alpha_k}. \ \ \ \ \ (14)$
Remark 6
$\displaystyle |\sum_{X\leq n\leq 2X}\lambda(n)|=o(X) \ \ \ \ \ (15)$
is equivalent to the prime number theorem 3.
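To make Definition 5 concrete, here is a small implementation (mine, not from the paper), together with two sanity checks: complete multiplicativity at a sample pair, and the smallness of the normalized partial sum ${\frac{1}{x}\sum_{n\leq x}\lambda(n)}$ at ${x=10^4}$; the threshold ${0.1}$ is a loose, arbitrary bound:

```haskell
-- Liouville's function λ(n) = (-1)^Ω(n), where Ω(n) counts prime factors
-- with multiplicity, computed by trial division.
bigOmega :: Int -> Int
bigOmega = go 2
  where
    go _ 1 = 0
    go d m
      | d * d > m      = 1            -- remaining m is prime
      | m `mod` d == 0 = 1 + go d (m `div` d)
      | otherwise      = go (d + 1) m

liouville :: Int -> Int
liouville n = if even (bigOmega n) then 1 else -1

main :: IO ()
main = do
  let x = 10000
      l = sum (map liouville [1 .. x])
  print (liouville (6 * 35) == liouville 6 * liouville 35)         -- complete multiplicativity
  print (abs (fromIntegral l / fromIntegral x) < (0.1 :: Double))  -- partial sums are small
-- prints True twice
```

The second check is only an illustration of the cancellation; of course it proves nothing about the ${o(X)}$ statement above.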
The most important recent breakthrough in analytic number theory is the new understanding of multiplicative functions in short intervals, established by Kaisa Matomäki and Maksym Radziwiłł, two very young and intelligent superstars.
The main theorem in their article is:
Theorem 6 (Matomaki,Radziwill) As soon as ${H\rightarrow \infty}$ when ${x\rightarrow \infty}$, one has:
$\displaystyle \sum_{x\leq n\leq x+H}\lambda(n)= o(H) \ \ \ \ \ (16)$
for almost all ${1\leq x\leq X}$ .
In my understanding of the result, the main strategy is:
1. Parseval identity: transform to Dirichlet polynomials.
2. Exploit the multiplicative property: spectral decomposition.
3. From linear to bilinear: Cauchy–Schwarz inequality.
4. Main term estimate.
5. Estimate the contribution of the region which is not covered.
4.1. Parseval identity: transform to Dirichlet polynomials
We wish to establish the estimate,
$\displaystyle \frac{1}{X}\int_{X}^{2X}|\sum_{x\leq n\leq x+H}\lambda(n)|dx=o(H) \ \ \ \ \ (17)$
This is the ${L^1}$ norm; by the Chebyshev inequality it can be controlled by the ${L^2}$ norm, so we only need to establish the following,
$\displaystyle \frac{1}{X}\int_X^{2 X}|\sum_{x\leq n\leq x+H}\lambda(n)|^2dx=o(H^2) \ \ \ \ \ (18)$
We wish to pass from the discrete sum to a continuous sum, that is,
$\displaystyle \int_{{\mathbb R}}|\sum_{xe^{-\frac{1}{T}}\leq n\leq xe^{\frac{1}{T}}}\lambda(n)1_{X\leq n\leq 2X}|^2\frac{dx}{x} \ \ \ \ \ (19)$
Remark 7 There are two points to understanding why (19) and (18) are the same.
1. ${[xe^{-\frac{1}{T}},xe^{\frac{1}{T}}]\sim [x-H,x+H]}$ when ${T=\frac{X}{H}}$ and ${x=O(X)}$.
2. The factors ${1_{X\leq n\leq 2X}}$ and ${\frac{1}{x}}$ ensure that ${x=O(X)}$.
So the magnitudes of (18) and (19) are the same, i.e.
$\displaystyle \int_{{\mathbb R}}|\sum_{xe^{-\frac{1}{T}}\leq n\leq xe^{\frac{1}{T}}}\lambda(n)1_{X\leq n\leq 2X}|^2\frac{dx}{x}\sim \frac{1}{X}\int_X^{2 X}|\sum_{x\leq n\leq x+H}\lambda(n)|^2dx \ \ \ \ \ (20)$
Now we try to transform (19) by the Parseval identity; this concerns the ${L^2}$ norm of the quantity we wish to control. It is just trying to understand (19), a quantity in physical space, through a more tractable quantity in frequency space. Write,
$\displaystyle \int_{{\mathbb R}}|\sum_{xe^{-\frac{1}{T}}\leq n\leq xe^{\frac{1}{T}}}\lambda(n)1_{X\leq n\leq 2X}|^2\frac{dx}{x}:=\int_{{\mathbb R}}|f_X(x)|^2dx \ \ \ \ \ (21)$
Then ${f_X(x)=\sum_{xe^{-\frac{1}{T}}\leq n\leq xe^{\frac{1}{T}}}\lambda(n)1_{X\leq n\leq 2X}}$. Note that,
$\displaystyle \begin{array}{rcl} \widehat{f_X(\xi)} & = & \int_{{\mathbb R}}f_X(x)e^{2\pi ix\xi}dx\\ & = & \sum_{X\leq n\leq 2X}\lambda(n)\int_{\log n-\frac{1}{T}}^{\log n+\frac{1}{T}}e^{2\pi ix\xi}dx, \ T=\frac{X}{H}\\ & = & \sum_{X\leq n\leq 2X}\lambda(n)e^{2\pi i\log(n)\cdot \xi}\cdot\frac{e^{2\pi i\frac{\xi}{T}}-e^{-2\pi i\frac{\xi}{T}}}{2\pi i\xi}\\ \end{array}$
So by Parseval identity, we have,
$\displaystyle \begin{array}{rcl} \int_{{\mathbb R}}|f_X(x)|^2dx & = & \int_{{\mathbb R}}|\widehat{f_X(\xi)}|^2d\xi \\ & = & \int_{{\mathbb R}}|\sum_{X\leq n\leq 2X}\lambda(n)\cdot n^{2\pi i\xi}|^2(\frac{e^{2\pi i\frac{\xi}{T}}-e^{-2\pi i\frac{\xi}{T}}}{2\pi i\xi})^2d\xi\\ & \sim & \int_{{\mathbb R}}|\sum_{X\leq n\leq 2X}\lambda(n)\cdot n^{2\pi i\xi}|^2\frac{1}{T^2}1_{|\xi|\leq T}d\xi\\ \end{array}$
Remark 8 We know the Fejer kernel satisfied,
$\displaystyle (\frac{e^{2\pi i\frac{\xi}{T}}-e^{2\pi i\frac{-\xi}{T}}}{2\pi i\xi})^2\sim \frac{1}{T^2}1_{|\xi|\leq T} \ \ \ \ \ (22)$
So morally speaking, we get the following identity.
$\displaystyle \frac{1}{X}\int_{X}^{2X}|\sum_{x\leq n\leq x+H}\lambda(n)|^2dx\sim \frac{1}{(X/H)^2}\int_{0}^{\frac{X}{H}}|\sum_{X\leq n\leq 2X}\lambda(n)n^{2\pi i\xi}|^2d\xi \ \ \ \ \ (23)$
In fact we do a cutoff; the quantity we really consider is just:
$\displaystyle \frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\leq X}\lambda(n)n^{it}|^2dt \ \ \ \ \ (24)$
and establish the comparison inequality:
Theorem 7 (Paserval type identity)
$\displaystyle \frac{1}{X}\int_{X}^{2X}|\frac{1}{H}\sum_{x\leq n\leq x+H}\lambda(n)|^2dx \sim \frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\leq X}\lambda(n)n^{it}|^2dt \ \ \ \ \ (25)$
Remark 9
In my understanding, this is a change of perspective on the quantity: since the quantity is a multiplicative function summed over a domain ${ \mathbb N^*}$ with additive structure, it can be viewed as a superposition of waves with periods given by the primes, so we can do an orthogonal decomposition in frequency space, prove the cutoff contributes only an error term, and obtain such a comparison inequality.
Once we have the comparison inequality, we can view it as a compactification process which still carries most of the information, leading to the inequality.
Something similar seems to occur in the attack on the moment estimates of the zeta function by the second author. It can also be viewed as something like a spectral decomposition with a basis coming from the multiplicative generators, i.e. the primes.
4.2. Exploit the multiplicative property: spectral decomposition
I call it "spectral decomposition", but this is not very exact. Anyway, the thing I want to say is that for the (completely) multiplicative function ${\lambda(n)}$ we have the Euler product formula:
$\displaystyle \Pi_{p,prime}(\frac{1}{1-\frac{\lambda(p)}{p^s}})=\sum_{n=1}^{\infty} \frac{\lambda(n)}{n^s} \ \ \ \ \ (26)$
But anyway, we do not use the whole power of multiplicativity, just its consequence at primes, i.e. ${\lambda(pn)=\lambda(p)\lambda(n)}$, which leads to the following result:
$\displaystyle \lambda(n)=\sum_{n=pm,p\in I}\frac{\lambda(p)\lambda(m)}{\# \{p|m, p\in I\}+1}+\lambda(n)1_{p|n;p\notin I} \ \ \ \ \ (27)$
This is an identity for the function ${\lambda(n)}$. The point is that it does not just use multiplicativity at a single point, i.e. ${\lambda(mn)=\lambda(m)\lambda(n)}$, but takes an average over a region which is naturally generated and compatible with multiplication; this identity carries a lot of information about the multiplicative property, which is crucial for getting a good estimate for the quantity we consider.
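Formula (26) can be spot-checked numerically: since ${\lambda}$ is completely multiplicative with ${\lambda(p)=-1}$, at ${s=2}$ the Euler product gives ${\sum_{n\geq 1}\lambda(n)/n^2=\zeta(4)/\zeta(2)=\pi^2/15}$, a standard consequence. A small sketch (the cutoff ${10^4}$ and the tolerance are arbitrary; absolute convergence bounds the tail by roughly ${1/N}$):

```haskell
-- Spot-check the Euler product at s = 2:  Σ λ(n)/n² = ζ(4)/ζ(2) = π²/15.
bigOmega :: Int -> Int
bigOmega = go 2
  where
    go _ 1 = 0
    go d m
      | d * d > m      = 1
      | m `mod` d == 0 = 1 + go d (m `div` d)
      | otherwise      = go (d + 1) m

liouville :: Int -> Int
liouville n = if even (bigOmega n) then 1 else -1

main :: IO ()
main = do
  let partial = sum [ fromIntegral (liouville k) / fromIntegral (k * k)
                    | k <- [1 .. 10000] ] :: Double
  print (abs (partial - pi * pi / 15) < 1e-3)
-- prints True
```

This only verifies the shape of the identity at one point of absolute convergence; the interesting range for the argument above is of course near the critical line.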
4.3. From linear to bilinear: Cauchy–Schwarz
Now we do not use one set ${I}$, but several sets ${I_1,...,I_n}$ which are carefully chosen. And we no longer consider ${[X,2X]}$ with its linear structure; instead we consider the decomposition:
${[X,2X]=\amalg_{i=1}^n (I_i\times J_i) \amalg U}$
Every ${I_i\times J_i}$ is equipped with a bilinear structure, and ${U}$ is a very small set, ${|U|=o(X)}$, which in fact admits a much better estimate.
${\frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\leq X}\lambda(n)n^{it}|^2dt =\sum_{i=1}^n\frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\in I_i\times J_i}\lambda(n)n^{it}|^2dt +\frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\in U}\lambda(n)n^{it}|^2dt}$
Now we just use Cauchy–Schwarz on each bilinear piece of
${\sum_{i=1}^n\frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\in I_i\times J_i}\lambda(n)n^{it}|^2dt +\frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\in U}\lambda(n)n^{it}|^2dt}$
4.4. Main term estimate
The main term is
${\sum_{i=1}^n\frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\in I_i\times J_i}\lambda(n)n^{it}|^2dt}$
and the residual term is
${\frac{1}{X^2}\int_{|log(X)|^{100}}^{\frac{X}{H}}|\sum_{n\in U}\lambda(n)n^{it}|^2dt}$
4.5. Estimate of the contribution of the region which is not covered
\newpage
5. Entropy decrement argument
\newpage
6. Correlation with nilsequences
I wish to establish the following estimate: with ${\lambda(n)}$ the Liouville function, we wish the following to be true.
$\displaystyle \int_{0\leq x\leq X}\sup_{f\in \Omega^m}|\sum_{x\leq n\leq x+H}\lambda(n)e^{2\pi if(n)}|dx =o(XH). \ \ \ \ \ (28)$
Where ${ H\rightarrow \infty}$ as ${ x\rightarrow \infty}$, and
$\displaystyle \Omega^m=\{a_mx^m+a_{m-1}x^{m-1}+...+a_1x+a_0 | a_m,...,a_1,a_0\in [0,1]\}$
is a compact space.
I do not know how to prove this, but the result is valuable to consider, because via a Fourier identity we could transform the difficulty of the (log average) Chowla conjecture into this type of result.
There are some clues suggesting this type of result could be true. The first one is the result established by Matomäki and Radziwiłł in 2015:
Theorem 8 (multiplicative functions in short intervals)
${f(n): \mathbb N\rightarrow \mathbb C}$ is a multiplicative function, i.e. ${ f(mn)=f(n)f(m), \forall m,n\in \mathbb N}$, and ${H\rightarrow \infty}$ as ${x\rightarrow \infty}$; then we have the following result,
$\displaystyle \int_{1\leq x\leq X}|\sum_{x\leq n\leq x+H}f(n)|dx=o(XH). \ \ \ \ \ (29)$
There also exists a result which can be established by the Vinogradov estimate and the B–S–Z criterion:
Theorem 9 (correlation of multiplicative functions with nilsequences in long intervals)
${f(n): \mathbb N\rightarrow \mathbb C}$ is a multiplicative function, i.e. ${ f(mn)=f(n)f(m), \forall m,n\in \mathbb N}$, and ${g(n)=a_mn^m+...+a_1n+a_0}$ is a polynomial; then we have the following result,
$\displaystyle \sum_{1\leq n \leq X}f(n)e^{2\pi i g(n)}=o(X) \ \ \ \ \ (30)$
\newpage {9} \bibitem{Sarnak} Peter Sarnak, Möbius Randomness and Dynamics.
\texttt{https://publications.ias.edu/sites/default/files/Mahler }. \bibitem{Laudau} János Pintz (Budapest), Landau's problems on primes.
\texttt{https://users.renyi.hu/~pintz/pjapr.pdf} \bibitem{BSZ} J. Bourgain, P. Sarnak, T. Ziegler, Disjointness of Möbius from horocycle flows.
\bibitem{Cotlar-Stein lemma} Almost orthogonality
\texttt{https://hxypqr.wordpress.com/2017/12/18/almost-orthogonality/}
\bibitem{KAISA MATOMA 虉KI AND MAKSYM RADZIWILL} Kaisa Matomäki and Maksym Radziwiłł, Multiplicative functions in short intervals.
\texttt{https://arxiv.org/abs/1501.04585v4/}. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 603, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9972342252731323, "perplexity": 3316.7336681581232}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348523564.99/warc/CC-MAIN-20200607044626-20200607074626-00561.warc.gz"} |
https://plainmath.net/93252/can-a-neutron-act-as-a-wave-can-the-wav | # Can a neutron act as a wave? Can the wave of light superpose with the wave of neutrons?
Garrett Valenzuela
Yes. Neutrons can be easily manipulated into circumstances in which they exhibit wavelengths, and can be diffracted.
Photons interact most strongly with charged particles, but will also interact with neutral particles which possess a magnetic moment, which the neutron does. This effect appears at high energies; most of the time the photon scatters off the neutron. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425559401512146, "perplexity": 1402.6290258612503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710734.75/warc/CC-MAIN-20221130092453-20221130122453-00488.warc.gz"} |
http://stackoverflow.com/questions/14969010/how-would-i-write-cycle-as-a-lambda-function/14969081 | # How would I write cycle as a lambda function?
Just for fun, here is my own version of `cycle`:
``````myCycle :: [a] -> [a]
myCycle xs = xs ++ myCycle xs
``````
The right-hand side refers to both the function name `myCycle` and the parameter `xs`.
Is it possible to implement `myCycle` without mentioning `myCycle` or `xs` on the right-hand side?
``````myCycle = magicLambdaFunction
``````
`myCycle = \xs -> let ys = xs ++ ys in ys`. `fix` inlined. – Daniel Fischer Feb 19 '13 at 23:44
Is it possible to implement `myCycle` without mentioning `myCycle` or `xs` on the right-hand side?
The answer is yes and no (not necessarily in that order).
Other people have mentioned the fixed point combinator. If you have a fixed-point combinator `fix :: (a -> a) -> a`, then as you mention in a comment to Pubby's answer, you can write `myCycle = fix . (++)`.
But the standard definition of `fix` is this:
``````fix :: (a -> a) -> a
fix f = let r = f r in r
-- or alternatively, but less efficient:
fix' f = f (fix' f)
``````
Note that the definition of `fix` involves mentioning a left-hand-side variable on the right hand side of its definition (`r` in the first definition, `fix'` in the second one). So what we've really done so far is push the problem down to just `fix`.
The interesting thing to note is that Haskell is based on a typed lambda calculus, and for good technical reason most typed lambda calculi are designed so that they cannot "natively" express the fixed point combinator. These languages only become Turing-complete if you add some extra feature "on top" of the base calculus that allows for computing fixed points. For example, any of these will do:
1. Add `fix` as a primitive to the calculus.
2. Add recursive data types (which Haskell has; this is another way of defining `fix` in Haskell).
3. Allow the definitions to refer to the left-hand side identifier being defined (which Haskell also has).
This is a useful type of modularity for many reasons—one being that a lambda calculus without fixed points is also a consistent proof system for logic, another that `fix`-less programs in many such systems can be proven to terminate.
EDIT: Here's `fix` written with recursive types. Now the definition of `fix` itself is not recursive, but the definition of the `Rec` type is:
``````-- | The 'Rec' type is an isomorphism between @Rec a@ and @Rec a -> a@:
--
-- > In :: (Rec a -> a) -> Rec a
-- > out :: Rec a -> (Rec a -> a)
--
-- In simpler words:
--
-- 1. Haskell's type system doesn't allow a function to be applied to itself.
--
-- 2. @Rec a@ is the type of things that can be turned into a function that
-- takes @Rec a@ arguments.
--
-- 3. If you have @foo :: Rec a@, you can apply @foo@ to itself by doing
-- @out foo foo :: a@. And if you have @bar :: Rec a -> a@, you can do
-- @bar (In bar)@.
--
newtype Rec a = In { out :: Rec a -> a }
-- | This version of 'fix' is just the Y combinator, but using the 'Rec'
-- type to get around Haskell's prohibition on self-application (see the
-- expression @out x x@, which is @x@ applied to itself):
fix :: (a -> a) -> a
fix f = (\x -> f (out x x)) (In (\x -> f (out x x)))
``````
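As an aside (not from the original thread): the same self-application trick that `Rec` enables in typed Haskell can be seen directly in an untyped language. Here is a sketch in Python; because Python is strict, the plain Y combinator would loop forever, so the eta-expanded variant (the Z combinator) is used instead.

```python
# Z combinator: a strict-language fixed-point combinator. Python is untyped,
# so the self-application x(x) needs no Rec wrapper, but it is strict, so the
# inner self-application is hidden behind a lambda to delay evaluation.
fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A recursive function written without mentioning its own name on the right:
factorial = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(factorial(5))  # 120
```

Applied to a functional like `(xs ++)`, the same `fix` idea is exactly what `myCycle = fix . (++)` does back in Haskell.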
-
I think this works:
``````myCycle = \xs -> fix (xs ++)
``````
http://en.wikipedia.org/wiki/Fixed-point_combinator
In programming languages that support anonymous functions, fixed-point combinators allow the definition and use of anonymous recursive functions, i.e. without having to bind such functions to identifiers. In this setting, the use of fixed-point combinators is sometimes called anonymous recursion.
-
And according to lambdabot, this can be simplified to `fix . (++)` :) – FredOverflow Feb 19 '13 at 22:51
@FredOverflow I don't know if I would call that simplified! – Pubby Feb 19 '13 at 22:52
According to lambdabot's definition of simplified :) – FredOverflow Feb 19 '13 at 22:53
Haskell's `fix` encapsulates the very concept of "mentioning the name on the RHS" since it's defined as `fix f = f (fix f)`, so it's perhaps a little bit unfair. The best answer is probably the Y combinator. Take note, though, that to get a well-typed Y combinator that post needs to use a recursive data type, which uses the name of the datatype on the RHS. I don't know if it's possible to get a Y combinator to type-check without using a recursive type. – J. Abrahamson Feb 19 '13 at 23:23
@tel: It's not possible without using a recursive something, but there are other ways. As usual, Oleg has a few interesting ideas. – C. A. McCann Feb 20 '13 at 14:18
For fun, here are two other variants:
``````let f = foldr (++) [] . repeat
``````
or
``````let f = foldr1 (++) . repeat
``````
-
Or just `concat . repeat` – luqui Feb 21 '13 at 18:31
No one pointed out the "obvious" version of the fix solution yet. The idea is that you transform the named recursive call into a parameter.
``````let realMyCycle = fix (\myCycle xs -> xs ++ myCycle xs)
``````
This "recursive name" introducing trick is pretty much what `let in` does in Haskell. The only difference is that using the built-in construct is more straightforward and probably nicer for the implementation.
- | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8490305542945862, "perplexity": 2302.6301093219695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009777.87/warc/CC-MAIN-20141125155649-00170-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://www.omnicalculator.com/construction/material-removal-rate | Material Removal Rate Calculator
Created by Rahul Dhari
Reviewed by Steven Wooding
Last updated: Feb 02, 2023
Our material removal rate calculator will help you determine the volume of material removed during the machining process.
Regardless of the material in use, the manufacturing of any equipment entails machining. It could be grooving to create threads (see thread calculator) or turning to create a taper (see taper calculator). During the machining process, a block of material is chipped away with a series of operations to give an object its final shape.
Such machining processes include turning, facing, milling, finishing, knurling, grooving, etc. The final shape of the object dictates the amount of material that requires removal. In turn, this parameter is a function of the speed of machining and feed rate (check out our speeds and feeds calculator to know more).
Every machining process has a different formula for material removal rate. This calculator supports the calculation of the material removal rate for turning, drilling, milling, grinding, and grooving. You can begin the process either by punching in some numbers or reading on to learn more about how to calculate the material removal rate.
What is material removal rate?
The material removal rate (MRR) refers to the volume of material removal per unit time during the machining process. The units of material removal rate are $\text{mm}^3\text{/s}$, $\text{cm}^3\text{/min}$, or $\text{in}^3\text{/min}$. The general calculation of material removal rate is:
$\text{MRR} = \frac{W_\text{Chip} \times A_\text{Chip}}{T},$
where $W_\text{Chip}$ is the width of the chip, $A_\text{Chip}$ is the cross-sectional area of the chip, and $T$ is the time of the operation.
Material removal rate formulae
You can use the generalized material removal rate formula to establish formulae for different operations. Our material removal rate calculator supports 5 machining operations:
1. Turning;
2. Milling;
3. Grooving;
4. Drilling; and
5. Grinding.
We will go through them individually.
Material removal rate formula for turning
The process of reducing the diameter of a workpiece is known as the turning operation. For a cylindrical workpiece rotating along its long axis under the turning process, the material removal process happens along two axes — the longitudinal axis and its perpendicular direction.
The speed of the cutting or removal process along the longitudinal axis is feed rate or feed per revolution $(F_\mathrm{r})$. Its counterpart is the cutting speed or perpendicular speed $(V_\mathrm{c})$. The feed rate has units of distance per revolution, whereas the cutting speed has units of distance per time. It is also a function of the depth of cut $(D_\mathrm{p})$. The material removal rate (MRR) for a turning operation is:
$\mathrm{MRR} = D_\mathrm{p} F_\mathrm{r} V_\mathrm{c}.$
Material removal rate formula for milling
During a milling operation, a rotating tool removes material from a workpiece. This operation is pretty handy for manufacturing gears. The gear teeth are obtained after removing material using the milling machine. It is a function of the depth of cut (in the axial $(D_\mathrm{p})$ and radial $(D_\mathrm{r})$ directions) and the feed velocity $(V_\mathrm{f})$, such that the material removal rate formula for milling is:
$\mathrm{MRR} = D_\mathrm{p} D_\mathrm{r} V_\mathrm{f}.$
Material removal rate formula for grooving
During this operation, the material is removed to obtain a narrow cut in a workpiece. It is a type of turning operation, however, where only a small part of the workpiece is machined. A typical example of a groove is the narrow channel or canal found on parts to insert O-rings. For a groove of width $(W)$, cutting speed $(V_\mathrm{c})$, and feed rate $(F)$, the material removal rate is:
$\mathrm{MRR} = W F V_\mathrm{c}.$
Material removal rate formula for drilling
A drilling operation entails cutting or removing material with a rotary cutting tool, leaving a circular-shaped hole in the material.
The material removal rate for drilling is a function of drill diameter $(D)$, cutting speed $(V_\mathrm{c})$, and feed rate $(F_\mathrm{r})$. The material removal rate formula for drilling is:
$\mathrm{MRR} = \frac{D F_\mathrm{r} V_\mathrm{c}}{4}.$
Material removal rate formula for grinding
The grinding operation consists of using a tool with embedded abrasive particles to remove material. The particles interact with the material's surface and remove material by cutting away small chips. The grinding process results in a very smooth surface finish, although it is also helpful to roughen up surfaces. The material removal rate formula for grinding is:
$\mathrm{MRR} = W D_\mathrm{c} V,$
where:
• $W$ – Width of surface;
• $D_\mathrm{c}$ – Depth of cut; and
• $V$ – Work velocity.
Using the material removal rate calculator
Now that you know how to calculate the material removal rate, let's use this knowledge to find the material removal rate for a turning operation.
Consider a turning operation on a cylindrical workpiece with a depth of cut of $1 \ \mathrm{mm}$, a feed rate of $3 \ \mathrm{mm/rev}$, and a cutting speed of $4 \ \mathrm{mm/min}$.
To calculate the material removal rate for turning:
1. Select the machining operation as Turning.
2. Enter the depth of cut as $1 \ \mathrm{ mm}$.
3. Insert the feed rate as $3 \ \mathrm{ mm/rev}$.
4. Fill in the cutting speed as $4 \ \mathrm{ mm/min}$.
5. Using the material removal rate calculator for turning:
\begin{align*} \qquad \mathrm{MRR} &= D_\mathrm{p} F_\mathrm{r} V_\mathrm{c} \\ \qquad &= 1 \times 3 \times 4 \\ \qquad &= 12 \ \mathrm{mm^3 / min} \end{align*}
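The five formulas above translate directly into code. The helper below is our own sketch (the function names and argument order are not from the calculator); it assumes you pass quantities in consistent units, so the result comes out in the matching volume-per-time unit.

```python
def mrr_turning(depth_of_cut, feed_rate, cutting_speed):
    """Turning: MRR = Dp * Fr * Vc."""
    return depth_of_cut * feed_rate * cutting_speed

def mrr_milling(axial_depth, radial_depth, feed_velocity):
    """Milling: MRR = Dp * Dr * Vf."""
    return axial_depth * radial_depth * feed_velocity

def mrr_grooving(width, feed_rate, cutting_speed):
    """Grooving: MRR = W * F * Vc."""
    return width * feed_rate * cutting_speed

def mrr_drilling(diameter, feed_rate, cutting_speed):
    """Drilling: MRR = D * Fr * Vc / 4."""
    return diameter * feed_rate * cutting_speed / 4

def mrr_grinding(width, depth_of_cut, work_velocity):
    """Grinding: MRR = W * Dc * V."""
    return width * depth_of_cut * work_velocity

# The worked turning example above: Dp = 1 mm, Fr = 3 mm/rev, Vc = 4 mm/min
print(mrr_turning(1, 3, 4))  # 12 (mm^3/min)
```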
FAQ
What is material removal rate for drilling with 1 mm drill cutting at 12 mm/min and having feed rate of 5 mm/rev?
The material removal rate is 15 mm³/min. Considering a 1 mm drill diameter with a cutting speed of 12 mm/min and a feed rate of 5 mm/rev, the material removal rate is:
MRR = 1 × 12 × 5 / 4 = 15 mm³/min.
What is the material removal rate formula for milling?
The material removal rate formula for milling is Dp × Dr × Vf, where Dp and Dr are the depths of cut in the axial and radial directions, respectively, and Vf is the feed velocity measured in mm/min.
How do I calculate material removal for turning?
To calculate material removal for turning:
1. Find the feed rate, cutting speed, and depth of cut.
2. Multiply the feed rate in mm/revolution by cutting speed in mm/min.
3. Multiply the product with depth of cut in mm to obtain the material removal rate.
How do I calculate material removal for drilling?
To calculate material removal for drilling:
1. Find the diameter of drill bit, cutting speed, and feed rate.
2. Multiply the feed rate in mm/revolution by the cutting speed in mm/min.
3. Multiply the product with the diameter of the drill bit in mm.
4. Divide the product by 4 to obtain the material removal rate.
https://mathinsight.org/definition/state_space | # Math Insight
### State space definition
The state space of a dynamical system is the set of all possible states of the system. Each coordinate of the state space is a state variable, and the values of all the state variables completely describe the state of the system. In other words, each point in the state space corresponds to a different state of the system.
An intuitive introduction to the state space is given in the idea of a dynamical system. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8971019387245178, "perplexity": 96.4643091883954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593586.54/warc/CC-MAIN-20180722194125-20180722214125-00405.warc.gz"} |
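As a tiny illustration (ours, not from Math Insight): a frictionless pendulum's state is the pair (angle, angular velocity), so its state space is a subset of the plane, and the dynamics move a point through that space.

```python
import math

def pendulum_step(theta, omega, dt=0.01, g=9.8, length=1.0):
    """One explicit-Euler step through the pendulum's state space (theta, omega)."""
    return theta + omega * dt, omega - (g / length) * math.sin(theta) * dt

state = (math.pi / 4, 0.0)    # a single point in the state space
state = pendulum_step(*state) # the dynamics carry it to a nearby point
```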
http://mathhelpforum.com/advanced-algebra/177193-prime-maximal-ideals.html | # Math Help - Prime/Maximal Ideals
1. ## Prime/Maximal Ideals
I'm working on prime and maximal ideals. My partner and I are studying for our final exam and got conflicting answers.
The question was to find all of the prime and maximal ideals of $\mathbb Z_7$. My answer was that because a finite integral domain is a field, the prime and maximal ideals coincide, and that the only prime (and maximal) ideal of $\mathbb Z_7$ is the zero ideal, since a field has no ideals other than $\{0\}$ and itself.
As for $\mathbb Z_3 \times \mathbb Z_5$ , what are the prime and maximal ideals, and more importantly, how in the world do we know that we have found them all?
2. Originally Posted by DanielThrice
I'm working on prime and maximal ideals. My partner and I are studying for our final exam and got conflicting answers.
The question was to find all of the prime and maximal ideals of $\mathbb Z_7$. My answer was that because a finite integral domain is a field, the prime and maximal ideals coincide, and that the only prime (and maximal) ideal of $\mathbb Z_7$ is the zero ideal, since a field has no ideals other than $\{0\}$ and itself.
Right, since $\mathbb{Z}_7$ is a field all is easy.
As for $\mathbb Z_3 \times \mathbb Z_5$ , what are the prime and maximal ideals, and more importantly, how in the world do we know that we have found them all?
You may be overthinking this. You can check directly that, for example, $\mathbb{Z}_3\times\{0\}$ and $\{0\}\times\mathbb{Z}_5$ are ideals. Are they prime? Are they maximal? Well, suppose that $\mathbb{Z}_3\times\{0\}\subset I\subseteq \mathbb{Z}_3\times\mathbb{Z}_5$; then check that $\pi_2\left(I\right)$ is an ideal, and since $\mathbb{Z}_5$ is a field.....
etc.
to sharpen it even further, suppose p,q are distinct primes. what can an ideal of Zp x Zq possibly be? remember Zp x Zq is isomorphic to Zpq, so an ideal has to be a subgroup of the additive group. but (Zpq,+) is cyclic, so any proper subgroup has to be generated by some element n of Zpq that doesn't generate the whole group, that is, either n = kp, or n = rq. since k < q, kp is co-prime to q (and similarly r is co-prime to p), so the only non-trivial proper ideals of Zpq are (p) and (q), so the only non-trivial proper ideals of Zp x Zq are Zp x {0} and {0} x Zq.
it's actually enlightening to see what happens in Z2 x Z3: it's easy to see that (1,1) is a generator, and we can make the assignment:
(0,0) --> 0
(1,1) --> 1
(0,2) --> 2
(1,0) --> 3
(0,1) --> 4
(1,2) --> 5, by considering multiples of (1,1). the inverse map sends k --> (k mod 2, k mod 3).
play around with this ring a bit.
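Taking up the suggestion to play around, here is a quick computational check (ours, in Python) of the correspondence k ↦ (k mod 2, k mod 3) and of the claim that Zpq has only the ideals {0}, (p), (q), and the whole ring, here with p = 3, q = 5.

```python
# The isomorphism Z6 ≅ Z2 x Z3: k -> (k mod 2, k mod 3), as in the table above.
iso = {k: (k % 2, k % 3) for k in range(6)}
print(iso[5])  # (1, 2), matching the table

def principal_ideal(n, a):
    """The ideal generated by a in Z_n; every ideal of Z_n arises this way."""
    return frozenset(a * k % n for k in range(n))

# Z15 ≅ Z3 x Z5: collect the distinct ideals generated by each element.
ideals = {principal_ideal(15, a) for a in range(15)}
print(len(ideals))  # 4: the zero ideal, (3), (5), and all of Z15
```

Under k ↦ (k mod 3, k mod 5), the ideal (3) corresponds to {0} × Z5 and (5) to Z3 × {0}, matching the discussion above.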
4. What if I look at it in a brute force kinda way, is this an ok way of thinking about it?
Every element in $\mathbb Z_3\times \mathbb Z_5$ generates a principal ideal:
- (0,0) generates the zero ideal
- (1,0) generates the same ideal as (2,0)
- (0,1) generates the same ideal as (0,2), (0,3) and (0,4)
- all the other elements are invertible and generate the entire ring.
So we can look at three principal ideals: <(0,0)>, <(1,0)> and <(0,1)>. I don't think <(0,0)> is prime (since (1,0)·(0,1) = (0,0) with neither factor zero), but <(1,0)> and <(0,1)> are. Furthermore, can we say that these last two ideals are maximal?
Can we say more generally that for the two fields F and K, {0} X K and F X {0} are the only prime and maximal ideals of F X K? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8749059438705444, "perplexity": 482.0558033931497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447558417.25/warc/CC-MAIN-20141224185918-00053-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://testbook.com/question-answer/if-rmz-rmx-rmiy-left-f--5f0877231d62530bdac71170 | # If $${\rm{z}} = {\rm{x}} + {\rm{iy}} = {\left( {\frac{1}{{\sqrt 2 }} - \frac{{\rm{i}}}{{\sqrt 2 }}} \right)^{ - 25}}$$, where $${\rm{i}} = \sqrt { - 1}$$, then what is the fundamental amplitude of $$\frac{{{\rm{z}} - \sqrt 2 }}{{{\rm{z}} - {\rm{i}}\sqrt 2 }}?$$
This question was previously asked in
NDA (Held On: 17 April 2016) Maths Previous Year paper
1. π
2. $$\frac{{\rm{\pi }}}{2}$$
3. $$\frac{{\rm{\pi }}}{3}$$
4. $$\frac{{\rm{\pi }}}{4}$$
## Answer (Detailed Solution Below)
Option 1 : π
## Detailed Solution
Concept:
• Using Euler form of complex number: $${\rm{cos\;\theta \;}} + {\rm{\;i\;sin\;\theta \;}} = {\rm{\;}}{{\rm{e}}^{{\rm{i\theta }}}}$$
• For a complex number z = a + ib , the amplitude θ is given by:$${\rm{\theta }} = {\tan ^{ - 1}}\frac{{\rm{b}}}{{\rm{a}}}$$
Calculation:
$${\rm{z}} = {\rm{x}} + {\rm{iy}} = {\left( {\frac{1}{{\sqrt 2 }} - \frac{{\rm{i}}}{{\sqrt 2 }}} \right)^{ - 25}} = {\left( {\cos \left( {\frac{{\rm{\pi }}}{4}} \right) - {\rm{i\;sin}}\left( {\frac{{\rm{\pi }}}{4}} \right)} \right)^{ - 25}} = {\left( {\cos \left( { - \frac{{\rm{\pi }}}{4}} \right) + {\rm{i\;sin}}\left( { - \frac{{\rm{\pi }}}{4}} \right)} \right)^{ - 25}}$$
$$\Rightarrow {\rm{z}} = {\rm{\;}}{\left( {{{\rm{e}}^{ - {\rm{i}}\frac{{\rm{\pi }}}{4}}}} \right)^{ - 25}} = {{\rm{e}}^{{\rm{i}}\frac{{25{\rm{\pi }}}}{4}}} = \cos \left( {\frac{{25{\rm{\pi }}}}{4}} \right) + {\rm{i\;sin}}\left( {\frac{{25{\rm{\pi }}}}{4}} \right) = \frac{1}{{\sqrt 2 }} + \frac{{\rm{i}}}{{\sqrt 2 }}$$
Calculating for next part:
$$\frac{{{\rm{z}} - \sqrt 2 }}{{{\rm{z}} - {\rm{i}}\sqrt 2 }} = \frac{{\frac{1}{{\sqrt 2 }} + \frac{{\rm{i}}}{{\sqrt 2 }} - \sqrt 2 }}{{\frac{1}{{\sqrt 2 }} + \frac{{\rm{i}}}{{\sqrt 2 }} - {\rm{i}}\sqrt 2 }} = {\rm{\;}}\frac{{\frac{{\sqrt 2 }}{2} + \frac{{{\rm{i}}\sqrt 2 }}{2} - \sqrt 2 }}{{\frac{{\sqrt 2 }}{2} + \frac{{{\rm{i}}\sqrt 2 }}{2} - {\rm{i}}\sqrt 2 }} = \frac{{ - \frac{{\sqrt 2 }}{2} + \frac{{{\rm{i}}\sqrt 2 }}{2}}}{{\frac{{\sqrt 2 }}{2} - \frac{{{\rm{i}}\sqrt 2 }}{2}}} = \frac{{ - 1 + {\rm{i}}}}{{{\rm{\;}}1 - {\rm{i\;}}}}$$
$$\Rightarrow \frac{{{\rm{z}} - \sqrt 2 }}{{{\rm{z}} - {\rm{i}}\sqrt 2 }} = \frac{{ - \left( {1 - {\rm{i}}} \right)}}{{{\rm{\;}}1 - {\rm{i\;}}}} = - 1 = - 1 + 0{\rm{i}}$$
$${\rm{Amplitude}}\left( {\rm{\theta }} \right) = {\tan ^{ - 1}}\frac{0}{{ - 1}} = {\tan ^{ - 1}}0 = 0{\rm{\;or\;\pi }}$$
Here x is negative and y = 0, so the amplitude (θ) is equal to π.
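The chain of simplifications above is easy to confirm numerically. A quick check (not part of the original solution), using Python's `cmath` for complex arithmetic:

```python
import cmath

# z = (1/sqrt(2) - i/sqrt(2))^(-25)
z = (1 / cmath.sqrt(2) - 1j / cmath.sqrt(2)) ** (-25)
print(z)  # approximately 0.7071 + 0.7071j, i.e. 1/sqrt(2) + i/sqrt(2)

# The quotient whose amplitude is asked for:
w = (z - cmath.sqrt(2)) / (z - 1j * cmath.sqrt(2))
print(w)                    # approximately -1 + 0j
print(abs(cmath.phase(w)))  # approximately pi, the fundamental amplitude
```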
https://www.physicsforums.com/threads/combined-mass.687981/ | # Combined Mass
1. Apr 25, 2013
### Joran Verlaeck
Dear Forum Users,
I am studying physics on my own, i am new to this forum, i am working every example in the book and i have a question. I hope that you are so kind to give me an explanation so that i can continue my chapter exercises.
The question of the book is : What horizontal force must be applied to a large block of mass M shown in Figure P5.67 so that the tan blocks remain stationary relative to M ? Assume all surfaces and the pulley are frictionless. (Notice that the force exerted by the string accelerates M1)
See attachment for the picture.
The calculation goes as follows.
M1*g - T = 0 => T = M1*g
T = M2 * a
=> M1*g = M2 * a => a = (M1*g)/(M2)
F = TotalMass * a => in the book and on the internet the result is
F = (M + M1 + M2) * (M1*g) / (M2)
What I was wondering is: if M1 were not connected by the string to M2, and the surface is frictionless, only block M would slide together with M1, and M2 would remain stationary and eventually fall off. The tension in the rope is provided by the gravitational force: no gravity, no tension. So in the end F only needs to accelerate M and M1 and not M2, because M2 is accelerated by gravity (equation a = M1*g/M2).
So why is the result (M + M1 + M2) * (M1*g) / (M2) and not (M + M1) * (M1*g) / (M2)? Because F is not responsible for giving M2 that acceleration, just for keeping M2 at rest relative to M.
So they both need the same acceleration.
Thx
Jöran
Attached: Picture.png (15.7 KB)
2. Apr 25, 2013
### tiny-tim
Welcome to PF!
Hi Jöran! Welcome to PF!
I see what you mean.
But no, for two reasons:
i] suppose the three masses were fixed to each other …
the result would be the same, wouldn't it?
ii] this is an exercise in applying Newton's second law …
you must always apply it to all the external forces on a particular body (or bodies)
if you apply it to M and M1 combined, then you must include the external force (on M and M1) from the horizontal part of the rope …
(M + M1)a = F - T = F - M2a,
ie (M + M1 + M2)a = F
3. Apr 25, 2013
### SammyS
Staff Emeritus
Hello Joran Verlaeck. Welcome to PF !
If m1 and m2 are stationary relative M, then they all have the same acceleration, call it a .
So the net force, F, on the system must be such that F = (m1 + m2 + M)a . It's simply the application of Newton's 2ND Law of Motion.
Last edited: Apr 25, 2013
4. Apr 25, 2013
### haruspex
T = m1g = m2a. At the pulley, the string exerts a force T vertically and horizontally on M. The horizontal forces on M are therefore F, m1a from contact with m1 and m2a from the pulley.
5. Apr 26, 2013
### Joran Verlaeck
All of you guys,
Thanks for presenting the reasoning for why this is the case.
After I read the responses I realised that I had made a mistake in my reasoning.
I was thinking of the same physics situation, only this time m1 rests on the floor; the outcome is, as tiny-tim suggested, the same. In that case m1 feels the normal force and there is really no tension in the rope. But once the object starts moving, if we want to make sure that m2 has no velocity relative to M, the tension in the rope becomes m2*a, and if a is exactly m1*g/m2 the normal force on m1 is zero. In this case you can see that F must be responsible for the motion of m2, and all the equations remain the same.
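To summarize the thread numerically (our own sketch; the function name is not from the textbook): the string gives T = m1*g, Newton's second law on the top block gives a = m1*g/m2, and F must supply that acceleration to all three masses.

```python
def applied_force(M, m1, m2, g=9.8):
    """Horizontal force so the tan blocks stay at rest relative to M.

    From T = m1*g (hanging block) and T = m2*a (top block): a = m1*g/m2,
    and the whole system of mass M + m1 + m2 must share that acceleration.
    """
    a = m1 * g / m2
    return (M + m1 + m2) * a

print(applied_force(M=10.0, m1=1.0, m2=2.0))  # 13 * 4.9 = 63.7 N
```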
http://evasiondigital.com/standard-error/the-standard-error-of-measurement-can-be-defined-as.php | The Standard Error Of Measurement Can Be Defined As
# The Standard Error Of Measurement Can Be Defined As
The standard error of the mean (SEM) (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all possible samples drawn from the population. Repeating the sampling procedure as for the Cherry Blossom runners, take 20,000 samples of size n=16 from the age at first marriage population. For the sake of simplicity, we are assuming there is no partial knowledge of any of the answers, and that for a given question a student either knows the answer or guesses. For example, if a test with 50 items has a reliability of .70, then the reliability of a test that is 1.5 times longer (75 items) would be calculated with the Spearman-Brown formula as 1.5(.70) / (1 + 0.5(.70)) ≈ .78.
Compare the true standard error of the mean to the standard error estimated using this sample. He can be about 99% (or ±3 SEMs) certain that his true score falls between 19 and 31. Sixty-eight percent of the time the true score would be between plus one SEM and minus one SEM.
## Standard Error Of Measurement Formula
This formula may be derived from what we know about the variance of a sum of independent random variables.[5] If X 1 , X 2 , … , X n {\displaystyle The sample mean x ¯ {\displaystyle {\bar {x}}} = 37.25 is greater than the true population mean μ {\displaystyle \mu } = 33.88 years. Student B has an observed score of 109. This approximate formula is for moderate to large sample sizes; the reference gives the exact formulas for any sample size, and can be applied to heavily autocorrelated time series like Wall
Standard errors combine when we’re trying to measure an individual over time. For example, if I want to know how much growth has taken place for a student over time, and First, the middle number tells us that a RIT score of 188 is the best estimate of this student’s current achievement level. It also tells us that the SEM associated with this student’s score is approximately 3 RIT—this is why the range around the student’s RIT score extends from 185 (188 - 3) Standard Error Of Measurement Interpretation The difference between the observed score and the true score is called the error score.
Michael Dahlin 9Dr. This is usually the case even with finite populations, because most of the time, people are primarily interested in managing the processes that created the existing finite population; this is called The standard error estimated using the sample standard deviation is 2.56. https://www.nwea.org/blog/2013/measurement-standard-error/ Correction for finite population The formula given above for the standard error assumes that the sample size is much smaller than the population size, so that the population can be considered
in Developmental Psychology from Pennsylvania State University, a M.S. Standard Error Of Measurement Spss To ensure an accurate estimate of student achievement, it’s important to use a sound assessment, administer assessments under conditions conducive to high test performance, and have students ready and motivated to Retrieved 17 July 2014. All achievement tests contain some amount of measurement error. But because MAP adapts to a student’s current achievement level, MAP scores are as precise as they can be, and far more
## Standard Error Of Measurement Example
The mean age was 33.88 years. Using a sample to estimate the standard error In the examples so far, the population standard deviation σ was assumed to be known. Standard Error Of Measurement Formula These assumptions may be approximately met when the population from which samples are taken is normally distributed, or when the sample size is sufficiently large to rely on the Central Limit Standard Error Of Measurement Calculator For the purpose of this example, the 9,732 runners who completed the 2012 run are the entire population of interest.
The True score is hypothetical and could only be estimated by having the person take the test multiple times and take an average of the scores, i.e., out of 100 times http://evasiondigital.com/standard-error/the-standard-error-of-measurement-is-especially-useful-for.php Vul, E., Harris, C., Winkielman, P., & Paschler, H. (2009) Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition. The following expressions can be used to calculate the upper and lower 95% confidence limits, where x ¯ {\displaystyle {\bar {x}}} is equal to the sample mean, S E {\displaystyle SE} Think about the following situation. Standard Error Of Measurement And Confidence Interval
The standard deviation of the age for the 16 runners is 10.23. That is, does the test "on its face" appear to measure what it is supposed to be measuring. The greater the SEM or the less the reliability, the more variancein observed scores can be attributed to poor test design rather, than atest-taker's ability. http://evasiondigital.com/standard-error/the-standard-error-of-measurement-allows-us-to.php After all, how could a test correlate with something else as high as it correlates with a parallel form of itself?
The smaller standard deviation for age at first marriage will result in a smaller standard error of the mean. Standard Error Of Measurement Vs Standard Deviation Sometimes the item is confusing or ambiguous. Moreover, this formula works for positive and negative ρ alike.[10] See also unbiased estimation of standard deviation for more discussion.
## His true score is 107 so the error score would be -2.
On MAP assessments, student RIT scores are always reported with an associated SEM, with the SEM often presented as a range of scores around a student’s observed RIT score. If you could add all of the error scores and divide by the number of students, you would have the average amount of error in the test. Reliability The notion of reliability revolves around whether you would get at least approximately the same result if you measure something twice with the same measurement instrument. Standard Error Of Measurement Vs Standard Error Of Mean His professional affiliations include the American Psychological Association, the American Psychological Society, Society for Research in Child Development, the American Educational Research Association, and the National Council on Measurement in Education.
Of course, some constructs may overlap so the establishment of convergent and divergent validity can be complex. This pattern is fairly common on fixed-form assessments, with the end result being that it is very difficult to measure changes in performance for those students at the low and high Standard error of the mean Further information: Variance §Sum of uncorrelated variables (Bienaymé formula) The standard error of the mean (SEM) is the standard deviation of the sample-mean's estimate of a http://evasiondigital.com/standard-error/the-standard-error-of-measurement-allows.php Whether we’re trying to measure weight with a bathroom scale, height with a tape measure, or academic achievement using the MAP assessments, there is always some wiggle room in our measurements
Despite the small difference in equations for the standard deviation and the standard error, this small difference changes the meaning of what is being reported from a description of the variation Using the formula: {SEM = So x Sqroot(1-r)} where So is the Observed Standard Deviation and r is the Reliability the result is the Standard Error of Measurement(SEM). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9443082213401794, "perplexity": 704.7129522828805}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864191.74/warc/CC-MAIN-20180621153153-20180621173153-00224.warc.gz"} |
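The SEM formula above is a one-liner in code; the sketch below is our own illustration, and the numbers in it are made up rather than taken from the text:

```python
import math

# SEM = S_o * sqrt(1 - r): the observed standard deviation S_o and the test
# reliability r give the standard error of measurement.
def sem(observed_sd, reliability):
    return observed_sd * math.sqrt(1 - reliability)

s = sem(observed_sd=10.0, reliability=0.91)   # ≈ 3.0
```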
http://www.tug.org/twg/mfg/mail-html/1997-09/msg00217.html | # Re: revised draft of EuroTeX paper
• To: Ulrik Vieth <[email protected]>
• Subject: Re: revised draft of EuroTeX paper
• From: Chris Rowley <[email protected]>
• Date: Fri, 12 Dec 1997 00:10:13 GMT
• Cc: [email protected]
Two more points; only the first is directly related to the paper (but
it has two parts).
chris
-----------------------------------------------------------------------
The description of .mfd files does not make it clear to which of these
they will set up a mapping:
the internal LaTeX font selection scheme `name' of the (virtual)
font;
the external font file name.
I assume it should be the former, since the .fd files do the rest of
the job.
Also, when referring, as in the .fd paragraph, in the present to the
font selection scheme, please use the term `\LaTeX{} font selection
scheme' (since it is no longer new and there should not be any others
around, it should no longer be called NFSS or NFSS2). Thanks.
I do not think we should agonise too much about whether or not to
include in a math encoding some particular symbol that is already
available in some reasonably widely available font: eg the Icelandic
letters.
Typically, in my experience, any given document will require at most
one or two such glyphs. And it is very unlikely that one math
document will use up more than 12 math families. So a better approach
would be to provide an easy way to set up in a document preamble
math-mode access to a particular slot in a particular encoding and a
font that is set up by the system to be available in LaTeX (possibly
after loading a .fd file). The user is assumed to know what slot in a
VF is required and either the internal LaTeX specification of the font
or its font-file name. This is probably not very useful for the lone
user but at a reasonably LaTeX-aware site it is quite realistic for
such information about fonts for use with TeX to be accessible but
that not all are set up for immediate math use in the basic LaTeX
system.
One way to do this is to provide a nice interface to something like:
\nfss@text {\usefont ... \char ...}
or, eg,
\mathrel{\nfss@text { ... } }
This has the following features:
-- it does not use up math families;
-- it does not change size unless amsmath is in use (this could be
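To make the proposal concrete, a preamble-level setup along the suggested lines might look as follows (this is our own rough sketch; the encoding name, family name, and slot number are invented for illustration):

```latex
\makeatletter
% Hypothetical example: make slot "41 of a font known to the LaTeX font
% selection scheme under encoding U and family xyglyphs available as a
% math relation, without using up a math family.
\DeclareRobustCommand\myspecialrel{%
  \mathrel{\nfss@text{\usefont{U}{xyglyphs}{m}{n}\char"41}}}
\makeatother
```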
https://cs.stackexchange.com/questions/96977/confusion-about-exp-subseteq-pexpcom-claim-from-arora-and-barak | # Confusion about $EXP \subseteq P^{EXPCOM}$ claim from Arora and Barak
In Computational Complexity -- A Modern Approach, by Arora and Barak, they have the following claim (Example 3.6).
Let EXPCOM be the following language $$\{ \langle M, x, 1^n\rangle \mid M \text{ outputs } 1 \text{ on } x \text{ within } 2^n \text{ steps} \}$$ Then $\mathbf{P}^{\mathrm{EXPCOM}} = \dots = \mathbf{EXP}$. [...]
Clearly, an oracle to EXPCOM allows one to perform an exponential-time computation at the cost of one call, and so $\mathbf{EXP} \subseteq \mathbf{P}^{\mathrm{EXPCOM}}$. [...]
I believe their reasoning is as follows:
1. Suppose we have a language $L \in \mathbf{EXP}$.
2. Thus there is a TM $M$ that decides it in time $2^{n^c}$.
3. We want to create a poly-time oracle-TM $T^{\mathrm{EXPCOM}}$ that decides $L$.
4. $T$ works as follows on input a string $x \in \{ 0, 1 \}^{*}$. It queries its EXPCOM oracle on input $\langle M, x, 1^{n^c} \rangle$, and outputs the same answer as the oracle.
5. Clearly $T$ runs in polynomial time, and it decides the same language as $M$. QED.
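The steps above can be sketched as a toy model (ours, not from the book): a "machine" is modelled as a Python function returning an output together with the number of steps it used, and the key point is that $M$ and $c$ are *hardcoded* into $T$, so $T$ never has to find them.

```python
# Toy model of the reduction EXP ⊆ P^EXPCOM.  Here M(x) -> (output, steps_used).

def expcom_oracle(M, x, n):
    """Decide <M, x, 1^n>: does M output 1 on x within 2**n steps?"""
    out, steps = M(x)
    return out == 1 and steps <= 2 ** n

def make_T(M, c):
    """Build the oracle machine T of step 4, with M and c hardcoded:
    one oracle call on <M, x, 1^(n^c)>, then output the oracle's answer."""
    def T(x):
        return expcom_oracle(M, x, len(x) ** c)
    return T

# A toy "exponential-time" M deciding "even number of 1s", pretending to
# use 2**n steps on inputs of length n.
def M(x):
    return (1 if x.count("1") % 2 == 0 else 0, 2 ** len(x))

T = make_T(M, c=1)
```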
Here is my confusion. For $T$ to call its EXPCOM oracle, it needs to know which $M$ that decides $L$ (in exponential time). However, 2. only promises the existence of a machine $M$; it doesn't actually tell you how to find it! (this problem also applies to the running time $2^{n^c}$ of $M$).
So clearly I'm misunderstanding something, but what? Is it my understanding of the language EXPCOM? Is it my description of $T$? Or have I misunderstood oracle-TMs altogether? (Or maybe all three?)
## 1 Answer
You are correct that $T$ needs to know which Turing machine accepts $L$. This Turing machine is $M$, and you can hardcode it into $T$. There is absolutely no problem with that.
Here is a similar example. Suppose that there is a proof that $P \neq NP$. Then there is a Turing machine that prints a proof of $P \neq NP$.
Proof: According to the assumption, there is a proof $\pi$ that $P \neq NP$. Construct a Turing machine $T$ that prints $\pi$. Then $T$ prints a proof of $P \neq NP$. $\quad\square$
What seems to worry you is that you think of $T$ as accepting $L$ as an input, from which it is supposed to come up with the Turing machine $M$. But this is not the case – all we have to do is to show that for each $L$ there exists an appropriate Turing machine $T$. Moreover, it is not clear how $T$ would accept $L$ as input – a language is, in general, an infinite object.
Sometimes we do need to be worried about the issue that you raised. Here is an example. Let $L_1,L_2,\dots$ be an infinite sequence of decidable languages. For each $L_i$, there is a Turing machine $T_i$ that decides $L_i$. But is there a Turing machine $T$ that accepts an index $i$ and a word $w$ and returns whether $w \in L_i$? Not necessarily (I'll let you come up with a counterexample). When such a machine $T$ does exist, we say that $L_1,L_2,\dots$ are uniformly decidable.
No such uniformity condition appears in your question. We could impose such a condition artificially by providing $L$ as an input via a Turing machine, not necessarily running in exponential time, that accepts $L$. In this case, your criticism would be valid – given a Turing machine that accepts a language in $\mathsf{EXP}$, it is not clear how to find a Turing machine accepting the same language and running in exponential time.
• I didn't mean that $T$ takes $L$ as input no. Regarding your counter-example question, I guess the issue is that it gives you the ability to solve undecidable problems like the halting problem? (e.g., let $L$ be the unary language where $1^i \in L$ iff $M$ halts on input $x$ where $i$ encodes TM $M$ and input $x$. $L$ can be accepted by non-uniform TMs of course) – panto Sep 16 '18 at 18:02
• My counterexample asks for a sequence of decidable languages $L_1,L_2,\ldots$ such that the language $\{(i,x) : x \in L_i\}$ is not decidable. – Yuval Filmus Sep 16 '18 at 23:02
http://crypto.stackexchange.com/questions/6460/strong-rsa-problem-in-mathbb-z-n2?answertab=active | Strong RSA problem in $\mathbb Z^*_{n^2}$
Comparing to this question, assume $C, M \in \mathbb Z^*_{n^2}$ and $e \ge 3$. Is it hard to compute an $M$ satisfying $C=M^e \bmod n^2$ when $C$ and $(n, e)$ are given?
-
Hint: $C=M^e \bmod n^2\implies C\equiv M^e \pmod n$. – fgrieu Feb 25 '13 at 7:19
@fgrieu Thank you for the hint! – phan Feb 26 '13 at 4:09
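The hint can be checked numerically: reducing modulo $n$ turns the problem into a plain RSA instance. The toy numbers below are our own, chosen only so that $\gcd(M, n^2)=1$:

```python
# C = M^e mod n^2 implies C ≡ M^e (mod n), so any solver for the n^2 problem
# also solves the standard RSA problem modulo n.
n, e = 15, 3
M = 52                      # an element of Z*_{n^2}; gcd(52, 225) == 1
C = pow(M, e, n * n)        # C = M^e mod n^2
assert C % n == pow(M % n, e, n)
```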
https://banditalgs.com/2016/11/13/ellipsoidal-confidence-bounds-for-least-squares-estimators/?replytocom=141 | # Ellipsoidal Confidence Sets for Least-Squares Estimators
Continuing the previous post, here we give a construction for confidence bounds based on ellipsoidal confidence sets. We also put things together and show a bound on the regret of the UCB strategy that uses the constructed confidence bounds.
# Constructing the confidence bounds
To construct the confidence bounds we will construct appropriate confidence sets $\cC_t$, which will be based on least-squares, more precisely penalized least-squares, estimates. In a later post we will show a different construction that improves the regret when the parameter vector is sparse. But first things first, let's see how to construct these confidence bounds in the absence of additional knowledge.
Assume that we are at the end of stage $t$ when a bandit algorithm has chosen $A_1,\dots,A_t\in \R^d$ and received the respective payoffs $X_1,\dots,X_t$. The penalized least-squares, also known as the ridge-regression estimate of $\theta_*$, is defined as the minimizer of the penalized squared empirical loss,
\begin{align*}
L_{t}(\theta) = \sum_{s=1}^{t} (X_s - \ip{A_s,\theta})^2 + \lambda \norm{\theta}_2^2\,,
\end{align*}
where $\lambda\ge 0$ is the “penalty factor”. Choosing $\lambda>0$ helps because it ensures that the loss function has a unique minimizer even when $A_1,\dots,A_t$ do not span $\R^d$, which simplifies the math. Solving $L_t'(\theta)=0$ for $\theta$ shows that the minimizer $\hat \theta_t \doteq \argmin_{\theta\in \R^d} L_t(\theta)$ of $L_t$ satisfies
\begin{align*}
\hat \theta_t = V_t(\lambda)^{-1} \sum_{s=1}^t X_s A_s\,,
\end{align*}
where
\begin{align*}
V_t(\lambda) = \lambda I + \sum_{s=1}^t A_s A_s^\top\,.
\end{align*}
The matrix $V_t(0)$ is known as the Grammian underlying $\{A_s\}_{s\le t}$, and we will also refer to $V_t(\lambda)$ as the Grammian. Just from the definition of the least-squares estimate, a confidence set is very easy to get in the case of a fixed sequence $\{A_s\}_s$ and independent Gaussian noise. To build some intuition, this is exactly what we will do first.
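In code, the estimate is a single linear solve. The sketch below is our own (using NumPy, not code from the post) and includes a sanity check: with noiseless payoffs and $\lambda=0$, the estimate recovers $\theta_*$ exactly.

```python
import numpy as np

# Ridge-regression estimate: hat_theta_t = V_t(lam)^{-1} sum_s X_s A_s,
# with Grammian V_t(lam) = lam*I + sum_s A_s A_s^T.
def ridge_estimate(A, X, lam):
    """A: (t, d) array whose rows are the actions A_s; X: (t,) payoffs."""
    t, d = A.shape
    V = lam * np.eye(d) + A.T @ A        # the Grammian V_t(lambda)
    return np.linalg.solve(V, A.T @ X)   # V^{-1} sum_s X_s A_s

rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(100, 3))
X = A @ theta_star                       # noiseless payoffs for the check
theta_hat = ridge_estimate(A, X, lam=0.0)
```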
## Building intuition: Fixed design regression and independent Gaussian noise
To get a sense of what a confidence set $\cC_{t+1}$ should look like we start with a simplified setting, where we make the following extra assumptions.
• Gaussian noise: $(\eta_s)_s$ is an i.i.d. sequence and in particular $\eta_s\sim \mathcal N(0,1)$;
• Nonsingular Grammian: $V \doteq V_t(0)$ is invertible.
• “Fixed design”: $A_1,\dots,A_t$ are deterministically chosen without the knowledge of $X_1,\dots,X_t$.
The distributional assumption on $(\eta_s)_s$ and the second assumption are for convenience. In particular, the second assumption lets us set $\lambda=0$, which we will indeed use. The independence of $(\eta_s)_s$ and the third assumption, on the other hand, are anything but innocent. In their absence we will be forced to use more specialized techniques.
A bit about notation: To emphasize that $A_1,\dots,A_t$ are chosen deterministically, we will use $a_s$ in place of $A_s$ (recall our convention that lowercase letters denote nonrandom, deterministic values). With this, we have $V = \sum_s a_s a_s^\top$ and $\hat \theta_t = V^{-1} \sum_s X_s a_s$.
Plugging in $X_s = \ip{a_s,\theta_*}+\eta_s$, $s=1,\dots,n$, into the expression of $\hat \theta_t$, we get
\begin{align}
\hat \theta_t -\theta_*
= V^{-1} Z\,,
\label{eq:lserror0}
\end{align}
where
\begin{align*}
Z = \sum_{s=1}^t \eta_s a_s\,.
\end{align*}
Noting that a linear combination of Gaussian random variables is also Gaussian, we see that $Z$ is normally distributed. In particular, from $\EE{Z}= 0$ and $\EE{ Z Z^\top } = V$ we immediately see that $Z \sim \mathcal N(0, V )$ (a Gaussian distribution is fully determined by its mean and covariance). From this it follows that
\begin{align}
V^{1/2} (\hat \theta_t -\theta_*) = V^{-1/2} Z \sim \mathcal N(0,I)\,,
\label{eq:standardnormal}
\end{align}
where $V^{1/2}$ is a square root of the symmetric matrix $V$ (i.e., $V^{1/2} V^{1/2} = V$).
To get a confidence set for $\theta_*$ we can then choose any $S\subset \R^d$ such that
\begin{align}
\frac{1}{\sqrt{(2\pi)^d}}\int_S \exp\left(-\frac{1}{2}\norm{x}_2^2\right) \,dx \ge 1-\delta\,.
\label{eq:lbanditsregion}
\end{align}
Indeed, for such a subset $S$, defining $\cC_{t+1} = \{\theta\in \R^d\,:\, V^{1/2} (\hat \theta_t -\theta) \in S \}$, we see that $\cC_{t+1}$ is a $(1-\delta)$-level confidence set:
\begin{align*}
\Prob{\theta_*\in \cC_{t+1}} = \Prob{ V^{1/2} (\hat \theta_t -\theta_*) \in S } \ge 1-\delta\,.
\end{align*}
(In particular, if \eqref{eq:lbanditsregion} holds with an equality, we also have an equality in the last display.)
How should the set $S$ be chosen? One natural choice is based on constraining the $2$-norm of $V^{1/2} (\hat \theta_t -\theta)$. This has the appeal that $S$ will be a Euclidean ball, which makes $\cC_{t+1}$ an ellipsoid. The details are as follows: Recalling that the distribution of the sum of the squares of $d$ independent $\mathcal N(0,1)$ random variables is the $\chi^2$ distribution with $d$ degrees of freedom (in short, $\chi^2_d$), from \eqref{eq:standardnormal} we get
\begin{align}
\norm{\hat \theta_t - \theta_*}_{V}^2 = \norm{ Z }_{V^{-1}}^2 \sim \chi^2_d\,.
\label{eq:lschi2}
\end{align}
Now, if $F(u)$ denotes the tail probability of the $\chi^2_d$ distribution, $F(u) = \Prob{ U> u}$ for $U\sim \chi_d^2$, it is easy to verify that
\begin{align*}
\cC_{t+1} = \left\{ \theta\in \R^d \,:\, \norm{\hat \theta_t - \theta}_{V}^2 \le u \right\}
\end{align*}
contains $\theta_*$ with probability $1-F(u)$. Hence, $\cC_{t+1}$ is a $(1-F(u))$-level confidence set for $\theta_*$. To find the value of $u$ for which $F(u) = \delta$, we can either resort to numerical calculation, or use Chernoff's method. After some calculation, the latter approach gives $u \le d + 2 \sqrt{ d \log(1/\delta) } + 2 \log(1/\delta)$, which implies that
\begin{align}
\cC_{t+1} = \left\{ \theta\in \R^d\,:\, \norm{ \hat \theta_t-\theta }_{V}^2 \le d + 2 \sqrt{ d \log(1/\delta) } + 2 \log(1/\delta) \right\}\,
\label{eq:confchi2}
\end{align}
is a $(1-\delta)$-level confidence set for $\theta_*$ (see Lemma 1 on page 1355 of a paper by Laurent and Massart).
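The quality of this Chernoff-based radius can be probed by simulation. The quick Monte-Carlo check below is our own sketch; it estimates the actual tail probability of a $\chi^2_d$ variable at the radius and confirms it is (comfortably) below $\delta$:

```python
import numpy as np

# For U ~ chi^2_d, check empirically that
# P( U > d + 2 sqrt(d log(1/delta)) + 2 log(1/delta) ) <= delta.
rng = np.random.default_rng(1)
d, delta = 5, 0.05
radius = d + 2 * np.sqrt(d * np.log(1 / delta)) + 2 * np.log(1 / delta)
samples = (rng.normal(size=(200_000, d)) ** 2).sum(axis=1)  # chi^2_d draws
tail = (samples > radius).mean()    # empirical tail probability at the radius
```

The bound is conservative: the empirical tail probability comes out well below $\delta$, reflecting the slack in Chernoff's method.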
## Martingale noise and Laplace’s method
We now start working towards gradually removing the extra assumptions. In particular, we first ask what happens when we only know that $\eta_1,\dots,\eta_t$ are conditionally $1$-subgaussian:
\begin{align}
\EE{ \exp(\lambda \eta_s) | \eta_1,\dots,\eta_{s-1} } \le \exp( \frac{\lambda^2}{2} )\,, \qquad s = 1,\dots, t\,.
\label{eq:condsgnoise}
\end{align}
Can we still get a confidence set, say, of the form \eqref{eq:confchi2}? Recall that previously to get this confidence set all we had to do was to upper bound the tail probabilities of the “normalized error” $\norm{Z}_{V^{-1}}^2$ (cf. \eqref{eq:lschi2}). How can we get this when we only know that $(\eta_s)_s$ are conditionally $1$-subgaussian?
Before diving into this let us briefly mention that \eqref{eq:condsgnoise} implies that $(\eta_s)_s$ is a $(\sigma(\eta_1,\dots,\eta_s))_s$-adapted martingale difference process:
Definition (martingale difference process): Let $(\cF_s)_s$ be a filtration over a probability space $(\Omega,\cF,\PP)$ (i.e., $\cF_{s} \subset \cF_{s+1}$ for all $s$ and also $\cF_s \subset \cF$). The sequence of random variables $(U_s)_s$ is an $(\cF_s)_s$-adapted martingale difference process if for all $s$, $\EE{U_s}$ exists and $U_s$ is $\cF_s$-measurable and $\EE{ U_s|\cF_{s-1}}=0$.
A collection of random variables is in general called a “random process”. Somewhat informally, a martingale difference process is also called “martingale noise”. We see that in the linear bandit model, the noise process $(\eta_s)_s$ is necessarily martingale noise with the filtration given by $\cF_s = \sigma(A_1,X_1,\dots,A_{s-1},X_{s-1},A_s)$. Note the inclusion of $A_s$ in the definition of $\cF_s$. The martingale noise assumption allows the noise $\eta_s$ impacting the feedback in round $s$ to depend on past choices, including the most recent action. This is actually essential if we have, for example, Bernoulli payoffs. If $(U_s)_s$ is an $(\cF_s)_s$ martingale difference process, the partial sums $M_t = \sum_{s=1}^t U_s$ define an $(\cF_s)_s$-adapted martingale. When the filtration is clear from the context, the reference to it is often dropped.
Let us return to the construction of confidence sets. Since we want exponentially decaying tail probabilities, one is tempted to try Chernoff's method, which, for a random variable $U$, yields $\Prob{U\ge u}\le \EE{\exp(\lambda U)}\exp(-\lambda u)$ for any $\lambda\ge 0$. When $U$ is $1$-subgaussian, $\EE{\exp(\lambda U)}$ can be conveniently bounded by $\exp(\lambda^2/2)$, after which we can choose $\lambda$ to get the tightest bound, minimizing the quadratic $\lambda^2/2-\lambda u$ over nonnegative values of $\lambda$.
To make this work with $U= \norm{Z}_{V^{-1}}^2$, we need to bound $\EE{\exp(\lambda \norm{Z}_{V^{-1}}^2 )}$. Unfortunately, this turns out to be a daunting task! Can we still somehow use Chernoff's method? Let us start with what we know: we know that there are (conditionally) $1$-subgaussian random variables that make up $Z$, namely $\eta_1,\dots,\eta_t$. Hence, we may try to see how $\EE{ \exp( \lambda^\top Z ) }$ behaves for some $\lambda\in \R^d$. Note that we had to switch to a vector $\lambda$ since $Z$ is vector-valued. An easy calculation (using \eqref{eq:condsgnoise} first with $s=t$, then with $s=t-1$, and so on) gives
\begin{align*}
\EE{ \exp(\lambda^\top Z) } = \EE{ \exp( \sum_{s=1}^t (\lambda^\top a_s) \eta_s ) } \le \exp( \frac12 \sum_{s=1}^t (\lambda^\top a_s)^2) = \exp( \frac12 \lambda^\top V \lambda )\,.
\end{align*}
How convenient that $V$ appears on the right-hand side of this inequality! But does this have anything to do with $U=\norm{Z}_{V^{-1}}^2$?
Rewriting the above inequality as
\begin{align*}
\EE{ \exp(\lambda^\top Z -\frac12 \lambda^\top V \lambda) } \le 1\,,
\end{align*}
we may notice that
\begin{align*}
\max_\lambda \exp(\lambda^\top Z -\frac12 \lambda^\top V \lambda)
= \exp( \max_\lambda \lambda^\top Z -\frac12 \lambda^\top V \lambda ) = \exp(\frac12 \norm{Z}_{V^{-1}}^2)\,,
\end{align*}
where the last equality comes from solving $f'(\lambda)=0$ for $\lambda$ where $f(\lambda)=\lambda^\top Z -\frac12 \lambda^\top V \lambda$. It will be useful to explicitly write the expression of the optimal $\lambda$:
\begin{align*}
\lambda_* = V^{-1} Z\,.
\end{align*}
It is worthwhile to point out that this argument uses a “linearization trick” for ratios which can be applied in all kinds of situations. Abstractly, the trick is to write a ratio as an expression that depends linearly on the square root of the numerator and on the denominator: for $a\in \R$ and $b > 0$, $\max_{x\in \R} ax-\frac12 bx^2 = \frac{a^2}{2b}$.
Let us summarize what we have so far. For this, introduce
\begin{align*}
M_\lambda = \exp(\lambda^\top Z -\frac12 \lambda^\top V \lambda)\,.
\end{align*}
Then, on the one hand, we have that $\EE{ M_{\lambda} }\le 1$, while on the other hand we have that $\max_{\lambda} M_{\lambda} = \exp(\frac12 \norm{Z}_{V^{-1}}^2)$. Combining this with Chernoff’s method we get
\begin{align*}
\Prob{ \frac12 \norm{Z}_{V^{-1}}^2 > u } = \Prob{ \max_\lambda M_{\lambda} > \exp(u) } \le \exp(-u) \EE{ \max_\lambda M_{\lambda} }\,.
\end{align*}
Thus, we are left with bounding $\EE{ \max_\lambda M_\lambda}$. Unfortunately, $\EE{ \max_\lambda M_{\lambda} }>\max_{\lambda}\EE{ M_{\lambda} }$, so the knowledge that $\EE{ M_\lambda } \le 1$ holds for any fixed $\lambda$ is not useful on its own. We need to somehow argue that the expectation of the maximum is still not too large.
There are at least two possibilities, both having their own virtues. The first one is to replace the continuous maximum with a maximum over an appropriately selected finite subset of $\R^d$ and argue that the error introduced this way is small. This is known as the “covering argument” as we need to cover the “parameter space” sufficiently finely to argue that the approximation error is small. An alternative, perhaps lesser known but quite powerful approach, is based on Laplace’s method of approximating the maximum value of a function using an integral. The power of this is that it removes the need for bounding $\EE{ \max_\lambda M_\lambda }$. We will base our construction on this approach.
(Figure: Illustration of Laplace’s method with $f(x) = \sin(x)/x$.)
To understand how the integral approximation of a maximum works, let us briefly review Laplace’s method. It is best to do this in a simple case. Assume that we are given a smooth function $f:[a,b]\to \R$ with a unique maximum at $x_0\in (a,b)$. Laplace’s method approximates $f(x_0)$ by computing the integral
\begin{align*}
I_s \doteq \int_a^b \exp( s f(x) ) dx
\end{align*}
for some large value of $s>0$. The idea is that this integral behaves like a Gaussian integral. Indeed, writing $f(x) = f(x_0) + f'(x_0)(x-x_0) + \frac12 f''(x_0) (x-x_0)^2 + R_3(x)$, since $x_0$ is a maximizer of $f$ we have $f'(x_0)=0$ and $-q\doteq f''(x_0)<0$. Under appropriate technical assumptions,
\begin{align*}
I_s \sim \int_a^b \exp( s f(x_0) ) \exp\left( -\frac{(x-x_0)^2}{2/(sq)} \right) \,dx
\end{align*}
as $s\to \infty$. Now, as $s$ gets large, $\int_a^b \exp( -\frac{(x-x_0)^2}{2/(sq)} ) \,dx \sim \int_{-\infty}^{\infty} \exp( -\frac{(x-x_0)^2}{2/(sq)} ) \,dx = \sqrt{ \frac{2\pi}{sq} }$, and hence
\begin{align*}
I_s \sim \exp( s f(x_0) ) \sqrt{ \frac{2\pi}{sq} }\,.
\end{align*}
Intuitively, the dominant term in the integral $I_s$ is $\exp( s f(x_0) )$. It should also be clear that the fact that we integrate with respect to the Lebesgue measure does not matter much: we could have integrated with respect to any other measure, as long as that measure puts positive mass on a neighborhood of the maximizer. The method is illustrated in the figure above. The take-home message is that if we integrate the exponential of a function with a pronounced maximum, we can expect the integral to be close to the exponential of the maximum.

Since $M_\lambda$ is already the exponential of the expression to be maximized, this suggests replacing $\max_\lambda M_\lambda$ with $\bar M = \int M_\lambda h(\lambda)\, d\lambda$, where $h$ will be conveniently chosen so that the integral can be calculated in closed form (this is not really a requirement of the method; it just makes the argument shorter). The main benefit of replacing the maximum with an integral is of course that (by Fubini's theorem) we easily get
\begin{align}
\EE{ \bar M } = \int \EE{ M_\lambda } h(\lambda)\, d\lambda \le 1
\label{eq:barmintegral}
\end{align}
and thus
\begin{align}
\Prob{ \log \bar M \ge u } \le e^{-u}\,.
\label{eq:barmbound}
\end{align}
Thus, it remains to choose $h$ and calculate $\bar M$. When choosing $h$ we want two things: $h$ should put a large mass at the maximizer of $M_\lambda$ (which is attained at $\lambda_* = V^{-1}Z$), and either $\bar M$ should be available in closed form (with $\bar M \approx \max_\lambda M_\lambda$ in some sense), or an easily obtained lower bound on $\bar M$ should still be close to $\max_\lambda M_\lambda$. Recalling the form of $M_\lambda$, we realize that if we choose $h$ to be a Gaussian density, the calculation of $\bar M$ reduces to a Gaussian integral, a convenient outcome since Gaussian integrals can be evaluated in closed form. In particular, setting $h$ to be the density of $\mathcal N(0,H^{-1})$, we find that
\begin{align*}
\bar M = \frac{1}{\sqrt{(2\pi)^d \det H^{-1}}} \int \exp\left( \lambda^\top Z - \frac12 \norm{\lambda}_{V}^2 - \frac12 \norm{\lambda}_{H}^2 \right) d\lambda\,.
\end{align*}
Completing the square, we get
\begin{align*}
\lambda^\top Z - \frac12 \norm{\lambda}_{V}^2 - \frac12 \norm{\lambda}_{H}^2
= \frac12 \norm{Z}_{(H+V)^{-1}}^2 -\frac12 \norm{\lambda-(H+V)^{-1}Z}_{H+V}^2\,.
\end{align*}
Hence, a short calculation gives
\begin{align*}
\bar M = \left(\frac{\det H}{\det (H+V)}\right)^{1/2} \exp\left( \frac12 \norm{Z}_{(H+V)^{-1}}^2 \right)\,,
\end{align*}
which, combined with \eqref{eq:barmbound}, gives
\begin{align}
\label{eq:selfnormalizedbound}
\Prob{ \frac12 \norm{Z}_{(H+V)^{-1}}^2 \ge u+ \frac12 \log \frac{\det (H+V)}{\det H} } \le e^{-u}\,.
\end{align}
Now choosing $H=V$, we have $\det(H+V) = 2^d \det V$ and $\norm{Z}_{(H+V)^{-1}}^2 = Z^\top (2V)^{-1} Z = \frac12 \norm{Z}_{V^{-1}}^2$, giving
\begin{align*}
\Prob{ \norm{Z}_{V^{-1}}^2 \ge 2\log(2)d+ 4u } \le e^{-u}\,.
\end{align*}
Using the identity $\norm{\hat \theta_t - \theta_*}_V^2 = \norm{Z}_{V^{-1}}^2$, we get that
\begin{align*}
\cC_{t+1} = \left\{ \theta\in \R^d \,:\, \norm{\hat\theta_t-\theta}_V^2 \le 2\log(2)d+4 \log(\tfrac1\delta) \right\}
\end{align*}
is a $(1-\delta)$-level confidence set for $\theta_*$. Compared with \eqref{eq:confchi2} (the confidence set based on approximating the tail of the chi-square distribution using Chernoff's method), the two radii scale similarly as functions of $d$ and $\delta$, with the new confidence set losing a bit (though only by a constant factor) as $\delta\to 0$. In general, the radii are incomparable. This is quite remarkable given the generality gained.
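As a quick sanity check on this tail inequality, here is a simulation of our own. It only probes the fixed-design case with standard Gaussian (hence $1$-subgaussian) noise, but that is exactly the regime where the bound can be compared against known distributions:

```python
import numpy as np

# Empirical check of P( ||Z||^2_{V^{-1}} >= 2 log(2) d + 4u ) <= e^{-u}
# for a fixed design and i.i.d. standard Gaussian noise.
rng = np.random.default_rng(2)
d, t, u, trials = 3, 50, 2.0, 100_000
A = rng.normal(size=(t, d))                   # fixed actions a_1, ..., a_t
Vinv = np.linalg.inv(A.T @ A)                 # V^{-1} with lambda = 0
eta = rng.normal(size=(trials, t))            # independent noise paths
Z = eta @ A                                   # each row is Z = sum_s eta_s a_s
norms = np.einsum('id,de,ie->i', Z, Vinv, Z)  # ||Z||^2_{V^{-1}} per trial
rate = (norms >= 2 * np.log(2) * d + 4 * u).mean()
```

With $u=2$ the bound allows a failure rate up to $e^{-2}\approx 0.135$; the empirical rate comes out far smaller, again reflecting the slack paid for generality.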
## Confidence sets for sequential designs
The approach just described generalizes almost without change to the case when $A_1,\dots,A_t$ are not fixed but are chosen sequentially, as done by UCB (this is what is known as a “sequential design” in statistics). The main difference is that in this case it is not possible to choose $H=V$ in the last step. We will also drop the assumption that $V_t(0)$ is invertible and hence use $V_t(\lambda)$ with $\lambda>0$ in place of $V = V_t(0)$. Because of this, we need to replace the identity \eqref{eq:lserror0} with
\begin{align}
\hat \theta_t -\theta_* = V_t^{-1}(\lambda) Z - \lambda V_t^{-1}(\lambda)\theta_*\,,
\label{eq:hthdeviation}
\end{align}
and thus
\begin{align*}
V_t^{1/2}(\lambda) (\hat \theta_t -\theta_*) = V_t^{-1/2}(\lambda) Z - \lambda V_t^{-1/2}(\lambda)\theta_*\,.
\end{align*}
Inequality \eqref{eq:selfnormalizedbound} still holds, though, as already noted, in this case the choice $H=V$ is not available because this would make the density $h$ random, which would undermine the equality in \eqref{eq:barmintegral} (one may try to condition on $V$ to bound $\EE{\bar M}$, but this introduces other problems). Hence, we will simply set $H=\lambda I$, which gives a high-probability bound on $\norm{ Z }_{V_t^{-1}(\lambda)}^2$ and eventually gives rise to the following theorem:
Theorem: Assuming that $(\eta_s)_s$ are conditionally $1$-subgaussian, for any $u\ge 0$,
\begin{align}
\Prob{ \norm{\hat \theta_t - \theta_*}_{V_t(\lambda)} \ge \sqrt{\lambda} \norm{\theta_*} + \sqrt{ 2 u + \log \frac{\det V_t(\lambda)}{\det (\lambda I)} } } \le e^{-u}\,
\label{eq:ellipsoidbasic}
\end{align}
and in particular, assuming $\norm{\theta_*}\le S$,
\begin{align}
C_{t+1} = \left\{ \theta\in \R^d\,:\,
\norm{\hat \theta_t - \theta}_{V_t(\lambda)} \le \sqrt{\lambda} S + \sqrt{ 2 \log(\frac1\delta) + \log \frac{\det V_t(\lambda)}{\det (\lambda I)} } \right\}
\label{eq:ellipsoidconfset}
\end{align}
is a $(1-\delta)$-level confidence set: $\Prob{\theta_*\in C_{t+1}}\ge 1-\delta$.
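The radius in this confidence set is straightforward to evaluate on data. A minimal sketch (the actions and all constants below are made-up illustrations; the log-determinant is computed via `slogdet` to avoid overflow):

```python
import numpy as np

rng = np.random.default_rng(0)
d, t, lam, S, delta = 5, 200, 1.0, 1.0, 0.05

A = rng.normal(size=(t, d))              # hypothetical actions A_1..A_t
V = lam * np.eye(d) + A.T @ A            # V_t(lambda)

# radius = sqrt(lam)*S + sqrt(2 log(1/delta) + log(det V_t(lam)/det(lam I)))
logdet_ratio = np.linalg.slogdet(V)[1] - d * np.log(lam)
radius = np.sqrt(lam) * S + np.sqrt(2 * np.log(1 / delta) + logdet_ratio)
print(radius)
```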
## Avoiding union bounds
For our results we need to ensure that $\theta_*$ is included in all of $C_1,C_2,\dots$. To ensure this one can use the union bound: In particular, we can replace $\delta$ used in the definition of $C_{t+1}$ by $\delta/(t(t+1))$. Then, the probability of none of $C_1,C_2,\dots$ containing $\theta_*$ is at most $\delta \sum_{t=1}^\infty \frac{1}{t(t+1)} = \delta$. The effect of this is that the radius of the confidence ellipsoid used in round $t$ is increased by a factor of $O(\log(t))$, which results in looser bounds and a larger regret. Fortunately, this is actually easy to avoid by resorting to a stopping time argument due to Freedman.
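The constant in the union bound comes from the telescoping sum $\sum_{t\ge 1} \frac{1}{t(t+1)} = \sum_{t\ge 1}(\frac1t-\frac1{t+1}) = 1$, which is easy to confirm numerically:

```python
from fractions import Fraction

# Partial sums telescope: sum_{t=1}^N 1/(t(t+1)) = 1 - 1/(N+1) -> 1.
N = 10_000
partial = sum(Fraction(1, t * (t + 1)) for t in range(1, N + 1))
print(float(partial))   # 0.99990000...
```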
For developing the argument fix some positive integer $n$. To explain the technique we need to make the time dependence of $Z$ explicit. Thus, for $t\in [n]$ we let $Z_t = \sum_{s=1}^t \eta_s A_s$. Define also $M_\lambda(t)\doteq\exp( \lambda^\top Z_t - \frac12 \lambda^\top V_t(0) \lambda )$. In constructing $C_{t+1}$, the core inequality in the previous proof was that $\EE{ M_\lambda(t) } \le 1$, which allowed us to conclude the same for $\bar M(t) \doteq \int h(\lambda) M_\lambda(t) d\lambda$ and thus via Chernoff’s method led to $\Prob{\bar M(t) \ge e^{u}}\le e^{-u}$. As was briefly mentioned earlier, the proof of $\EE{ M_\lambda(t) } \le 1$ is based on chaining the inequalities
\begin{align}
\EE{ M_\lambda(s) \,|\, \cF_{s-1} } \le M_\lambda(s-1)
\label{eq:supermartingale}
\end{align}
for $s=t,t-1,\dots,1$, where we can define $M_\lambda(0)=1$. That $(M_\lambda(s))_s$ satisfies \eqref{eq:supermartingale} makes this sequence what is called a supermartingale adapted to the filtration $(\cF_s)_s \doteq (\sigma(A_1,X_1,\dots,A_s,X_s))_s$:
Definition (supermartingale): Let $(\cF_s)_{s\ge 0}$ be a filtration. A sequence of random variables, $(X_s)_{s\ge 0}$, is called an $(\cF_s)_s$-adapted supermartingale if $(X_s)_s$ is $(\cF_s)_s$ adapted (i.e., $X_s$ is $\cF_s$-measurable), the expectation of all the random variables is defined, and $\EE{X_s|\cF_{s-1}}\le X_{s-1}$ holds for $s=1,2,\dots$.
Integrating the above inequalities in \eqref{eq:supermartingale} with respect to $h(\lambda) d\lambda$ we immediately see that $(\bar M(s))_s$ is also an $(\cF_s)_s$ supermartingale with the filtration $(\cF_s)_s$ as before. A supermartingale process $(X_s)_s$ has the advantageous property that if we “stop it” at a random time $\tau\in [n]$ “without peeking into the future” then its mean still cannot increase: $\EE{X_\tau} \le \EE{X_1}$. When $\tau$ is a random time with this property, it is called a stopping time:
Definition (stopping time): Let $(\cF_s)_{s\in [n]}$ be a filtration. Then a random variable $\tau$ with values in $[n]$ is a stopping time given $(\cF_s)_s$ if for any $s\in [n]$, $\{\tau=s\}$ is an event of $\cF_s$.
Stopping times are often also defined when $n=\infty$ but we will not need this generality here.
Let $\tau$ thus be an arbitrary stopping time given the filtration $(\cF_s)_s \doteq (\sigma(A_1,X_1,\dots,A_s,X_s))_s$. In accordance with our discussion, $\EE{ \bar M(\tau) } \le \EE{ \bar M(0) } = 1$. From this it immediately follows that \eqref{eq:ellipsoidbasic} holds even when $t$ is replaced by $\tau$.
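The stopped-supermartingale property is easy to check by simulation. In the sketch below (all constants illustrative), $M_s = \exp(\lambda W_s - \lambda^2 s/2)$ is the exponential martingale of a Gaussian random walk, and $\tau$ is the first time $M_s$ crosses a threshold $c$ (or $n$ if it never does); the mean of $M_\tau$ cannot exceed $\EE{M_0}=1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, c, reps = 100, 0.5, 2.0, 20_000

W = np.cumsum(rng.normal(size=(reps, n)), axis=1)   # reps random walks
s = np.arange(1, n + 1)
M = np.exp(lam * W - 0.5 * lam**2 * s)              # M_s along each path

hit = M >= c                                        # threshold crossings
tau = np.where(hit.any(axis=1), hit.argmax(axis=1), n - 1)
M_tau = M[np.arange(reps), tau]                     # stopped values
print(M_tau.mean())   # close to 1, not significantly above it
```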
To see how this can be used to our advantage it will be convenient to introduce the events
\begin{align*}
\cE_t = \left\{ \norm{\hat \theta_t - \theta_*}_{V_t(\lambda)} \ge \sqrt{\lambda} S + \sqrt{ 2u + \log \frac{\det V_t(\lambda)}{\det (\lambda I)} } \right\}\,, \qquad t=1,\dots,n\,.
\end{align*}
With this, \eqref{eq:ellipsoidbasic} takes the form $\Prob{\cE_t}\le e^{-u}$ and by our discussion, for any random index $\tau\in [n]$ which is a stopping time with respect to $(\cF_t)_t$, we also have $\Prob{\cE_{\tau}} \le e^{-u}$. Now, choose $\tau$ to be the smallest round index $t\in [n]$ such that $\cE_t$ holds, or $n$ when none of $\cE_1,\dots,\cE_n$ hold. Formally, if the probability space holding the random variables is $(\Omega,\cF,\PP)$, we define $\tau(\omega) = t$ if $\omega\not \in \cE_1,\dots,\cE_{t-1}$ and $\omega \in \cE_t$ for some $t\in [n]$ and we let $\tau(\omega)=n$ otherwise. Since $\cE_1,\dots,\cE_t$ are $\cF_t$ measurable, $\{\tau=t\}$ is also $\cF_t$ measurable. Thus $\tau$ is a stopping time with respect to $(\cF_t)_t$. Now, consider the event
\begin{align*}
\cE= \left\{ \exists t\in [n] \text{ s.t. } \norm{\hat \theta_t - \theta_*}_{V_t(\lambda)} \ge \sqrt{\lambda} S + \sqrt{ 2u + \log \frac{\det V_t(\lambda)}{\det (\lambda I)} } \right\}\,.
\end{align*}
Clearly, if $\omega\in \cE$ then $\omega \in \cE_{\tau}$. Hence, $\cE\subset \cE_{\tau}$ and $\Prob{\cE}\le \Prob{\cE_{\tau}}\le e^{-u}$. Finally, since $n$ was arbitrary, the upper limit on $t$ in the definition of $\cE$ can also be removed. This shows that the probability of the bad event that any of the confidence sets $C_{t+1}$ of the previous theorem fails to contain the parameter vector $\theta_*$ is also bounded by $\delta$:
Corollary (Uniform bound):
\begin{align*}
\Prob{ \exists t\ge 0 \text{ such that } \theta_*\not\in C_{t+1} } \le \delta\,.
\end{align*}
Recalling \eqref{eq:hthdeviation}, for future reference we now restate the conclusion of our calculations in a form concerning the tail of the process $(Z_t)_t \doteq (\sum_{s=1}^t \eta_s A_s)_t$:
Corollary (Uniform self-normalized tail bound on $(Z_t)_t$): For any $u\ge 0$,
\begin{align*}
\Prob{ \exists t\ge 0 \text{ such that }
\norm{Z_t}_{V_t^{-1}(\lambda)} \ge \sqrt{ 2u + \log \frac{\det V_t(\lambda)}{\det (\lambda I)} }
} \le e^{-u}\,.
\end{align*}
## Putting things together: The regret of Ellipsoidal-UCB
We will call the version of UCB that uses the confidence set of the previous section the “Ellipsoidal-UCB”. To state a bound on the regret of Ellipsoidal-UCB, let us summarize the conditions we will need: Recall that $\cF_t = \sigma(A_1,X_1,\dots,A_{t-1},X_{t-1},A_t)$, $X_t = \ip{A_t,\theta_*}+\eta_t$ and $\cD_t\subset \R^d$ is the action set available at the beginning of round $t$. We will assume that the following hold true:
• $1$-subgaussian martingale noise: $\forall \lambda\in \R$, $s\in \N$, $\EE{\exp(\lambda \eta_s)|\cF_{s} } \le \exp(\frac{\lambda^2}{2})$.
• Bounded parameter vector: $\norm{\theta_*}\le S$
• Bounded actions: $\sup_{t} \sup_{a\in \cD_t} \norm{a}_2\le L$
• Bounded mean reward: $|\ip{a,\theta_*}|\le 1$ for any $a\in \cup_t \cD_t$
Combining our previous results gives the following corollary:
Theorem (Regret of Ellipsoidal-UCB): Assume that the conditions listed above hold. Let $\hat R_n = \sum_{t=1}^n \left( \max_{a\in \cD_t} \ip{a,\theta_*} - \ip{A_t,\theta_*} \right)$ be the pseudo-regret of the Ellipsoidal-UCB algorithm that uses the confidence set \eqref{eq:ellipsoidconfset} in round $t+1$. With probability $1-\delta$, simultaneously for all $n\ge 1$,
\begin{align*}
\hat R_n
\le \sqrt{ 8 d n \beta_{n-1} \, \log \frac{d\lambda+n L^2}{ d\lambda } }\,,
\end{align*}
where
\begin{align*}
\beta_{n-1}^{1/2}
& = \sqrt{\lambda} S + \sqrt{ 2\log(\frac1\delta) + \log \frac{\det V_{n-1}(\lambda)}{\det (\lambda I)}} \\
& \le \sqrt{\lambda} S + \sqrt{ 2\log(\frac1\delta) \,+\,\frac{d}{2}\ \log\left( 1+n \frac{L^2}{ d\lambda }\right)} \,.
\end{align*}
Choosing $\delta = 1/n$, $\lambda = \mathrm{const}$ we get that $\beta_n^{1/2} = O(d^{1/2} \log^{1/2}(n/d))$ and thus the expected regret of Ellipsoidal-UCB, as a function of $d$ and $n$ satisfies
\begin{align*}
R_n
& = O( \beta_n^{1/2} \sqrt{ dn \log(n/d) } )
= O( d \log(n/d) \sqrt{ n } )\,.
\end{align*}
Note that this holds simultaneously for all $n\in \N$. We also see that (apart from logarithmic factors) the regret scales linearly with the dimension $d$, while it is completely free of the cardinality of the action set $\cD_t$. Later we will see that this is indeed unavoidable in general.
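To make the algorithm concrete, here is a minimal sketch of Ellipsoidal-UCB (this style of algorithm is often called LinUCB or OFUL in the literature) on a toy problem with a fixed finite action set. The problem instance and all constants are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, lam, S, delta = 2, 2000, 1.0, 1.0, 0.01
actions = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
theta_star = np.array([0.8, 0.5])
best = max(a @ theta_star for a in actions)

V = lam * np.eye(d)
b = np.zeros(d)                      # running sum of X_t * A_t
regret = 0.0
for t in range(n):
    theta_hat = np.linalg.solve(V, b)
    # sqrt(beta_t): sqrt(lam)*S + sqrt(2 log(1/delta) + log det ratio)
    beta_sqrt = (np.sqrt(lam) * S
                 + np.sqrt(2 * np.log(1 / delta)
                           + np.linalg.slogdet(V)[1] - d * np.log(lam)))
    ucb = [a @ theta_hat + beta_sqrt * np.sqrt(a @ np.linalg.solve(V, a))
           for a in actions]
    A_t = actions[int(np.argmax(ucb))]
    X_t = A_t @ theta_star + rng.normal()     # reward with Gaussian noise
    V += np.outer(A_t, A_t)
    b += X_t * A_t
    regret += best - A_t @ theta_star
print(regret)   # sublinear in n, in line with the theorem
```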
## Fixed action set
When the action set is fixed and small, it is better to construct the upper confidence bounds for the payoffs of the actions directly. This is best illustrated for the fixed-design case when $A_1,\dots,A_t$ are deterministic, the noise is i.i.d. standard normal and the Grammian is invertible even with $\lambda=0$. In this case, the confidence set for $\theta_*$ was given by \eqref{eq:confchi2}, i.e., the radius is $\beta_t = 2d+8\log(1/\delta)$. By our earlier observation (see here),
\begin{align*}
\UCB_{t+1}(a) = \ip{a,\hat \theta_t} + (2d+8\log(1/\delta))^{1/2} \norm{a}_{V_t^{-1}}\,.
\end{align*}
Notice that the “radius” $\beta_t$ scales linearly with $d$, which then propagates into the UCB values. By our main theorem, this then propagates into the regret bound, making the regret scale linearly with $d$. It is easy to see that the linear dependence of $\beta_t$ on $d$ is an unavoidable consequence of using the confidence set construction which relied on the properties of the chi-square distribution with $d$ degrees of freedom. Unfortunately, this means that even when $\cD_t = \{e_1,\dots,e_d\}$, corresponding to the standard finite-action stochastic bandit case, the regret will scale linearly with $d$ (the number of actions), whereas we have seen earlier that the optimal scaling is $\sqrt{d}$. To get this scaling we thus need to avoid a confidence set construction based on ellipsoids.
A simple construction which avoids this problem is as follows: Staying with the fixed design and independent Gaussian noise, recall that $\hat \theta_t - \theta_* = V^{-1} Z \sim \mathcal N(0,V^{-1})$ (cf. \eqref{eq:standardnormal}). Fix $a\in \R^d$. Then, $\ip{a, \hat \theta_t - \theta_*} = a^\top V^{-1} Z \sim \mathcal N(0, \norm{a}_{V^{-1}}^2)$. Hence, by the subgaussian property of Gaussians, defining
\begin{align}\label{eq:lingaussianperarmucb}
\mathrm{UCB}_{t+1}(a) = \ip{a,\hat \theta_t} + \sqrt{ 2 \log(1/\delta) }\, \norm{a}_{V_t^{-1}} \,
\end{align}
we see that $\Prob{ \ip{a,\theta_*}>\mathrm{UCB}_{t+1}(a) } \le \delta$. Note that this bound indeed removed the extra $\sqrt{d}$ factor.
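A quick simulation confirms the per-action coverage in the fixed-design Gaussian setting (the design matrix, $\theta_*$, and the direction $a$ below are arbitrary choices; the observed failure rate is in fact well below $\delta$, since the subgaussian tail bound is conservative):

```python
import numpy as np

rng = np.random.default_rng(4)
d, t, delta, reps = 3, 50, 0.05, 20_000
A = rng.normal(size=(t, d))              # fixed design
V = A.T @ A
theta_star = np.ones(d)
a = np.array([1.0, -1.0, 0.5])
width = np.sqrt(2 * np.log(1 / delta) * (a @ np.linalg.solve(V, a)))

X = A @ theta_star + rng.normal(size=(reps, t))   # reps independent data sets
theta_hat = np.linalg.solve(V, (X @ A).T).T       # least squares, per data set
exceed = theta_star @ a > theta_hat @ a + width   # did UCB(a) fail?
print(exceed.mean(), delta)   # empirical failure rate vs delta
```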
Unfortunately, the generalization of this method to the sequential design case is not obvious. The difficulty comes from controlling $\ip{a, V^{-1} Z}$ without relying on the exact distributional properties of $Z$.
# Notes
An alternative to the $2$-norm based construction is to use $1$-norms. In the fixed design setting, under the independent Gaussian noise assumption, using Chernoff’s method this leads to
\begin{align}
\cC_{t+1} = \left\{ \theta\in \R^d\,:\, \norm{ V^{1/2}(\hat \theta_t-\theta) }_1 \le
\sqrt{2 \log(2) d^2 +2d \log(1/\delta) }
\right\}\,.
\label{eq:confl1}
\end{align}
Illustration of 2-norm and 1-norm based confidence sets in 2 dimensions with $\delta=0.1$, $\hat \theta_t=0$, $V=I$.
This set, together with the one based on the $2$-norm (cf. \eqref{eq:confchi2}), is illustrated on the figure on the side, which shows the $1-\delta=0.9$-level confidence sets. For low dimensions, and not too small values of $\delta$, the $1$-norm based confidence set is fully included in the $2$-norm based confidence set, as shown here. This happens of course due to the approximations used in deriving the radii of these sets. In general, the two confidence sets are incomparable.
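The claimed inclusion can be checked directly: the $1$-norm ball $\{x:\|x\|_1\le r_1\}$ lies inside the $2$-norm ball of radius $r_2$ exactly when $r_1\le r_2$, because the farthest points of the $1$-norm ball from the origin (in $2$-norm) are its vertices, at distance $r_1$. Using the radii from \eqref{eq:confl1} and the chi-square-based radius quoted above:

```python
import math

d, delta = 2, 0.1
r1 = math.sqrt(2 * math.log(2) * d**2 + 2 * d * math.log(1 / delta))  # 1-norm radius
r2 = math.sqrt(2 * d + 8 * math.log(1 / delta))                       # 2-norm radius
print(r1, r2, r1 <= r2)   # 3.84..., 4.73..., True: inclusion holds here
```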
# References
As mentioned in the previous post, the first paper to consider UCB-style algorithms is by Peter Auer:
The paper considers the setting where in every round the number of actions is bounded by a constant $K>0$. For this setting an algorithm (SupLinRel) is given and it is shown that its expected regret is at most $O(\sqrt{ d n \log^3(K n \log(n) ) })$, which is better by a factor of $\sqrt{d}$ than the bound derived here, but it also depends on $K$ (although only logarithmically) and is slightly worse than the bound shown here in its dependence on $n$, too. The dependence on $K$ can be traded for a dependence on $d$ by considering an $L/\sqrt{n}$-cover of the ball $\{ a\,:\, \norm{a}_2 \le L \}$, which gives $K = \Theta( (\sqrt{n}/L)^d )$ and a regret of $O(d^2 \sqrt{n \log^3(n)})$ for $n$ large, which is larger than the regret given here. Note that SupLinRel can also be run without actually discretizing the action set; just its confidence intervals have to be set based on the cardinality of the discretization (in particular, inflated by a factor of $\sqrt{d}$).
SupLinRel builds on LinRel, which, as we noted, is UCB with a specific upper confidence value. LinRel uses confidence bounds of the form \eqref{eq:lingaussianperarmucb} with a confidence parameter roughly of the size $\delta = 1/(n \log(n) K)$. This is possible because SupLinRel uses LinRel in a nontrivial way, “filtering” the data that LinRel works on.
In particular, SupLinRel uses a version of the “doubling trick”. The main idea is to keep at most $\log_2(n^{1/2})$ lists that hold mutually exclusive data. In every round SupLinRel starts with the list of index $s=1$ and feeds the data of this list to LinRel, which calculates UCB values and confidence widths based on the data it received. If all the calculated widths are small (below $n^{-1/2}$) then the action with the highest UCB value is selected and the data generated is thrown away. Otherwise, if any width is above $2^{-s}$ then the corresponding action is chosen and the data observed is added to the list with index $s+1$. If all the widths are below $2^{-s}$ then all actions which, based on the current confidence intervals calculated by LinRel, cannot be optimal are eliminated, $s$ is incremented and the process is repeated until an action is chosen. Overall the effect of this is that the lists grow, lists with smaller index growing first until they are sufficiently rich for their desired target accuracy. Furthermore, the contents of a list are determined not by the data of the list itself but by the data in lists with smaller index. Because of this, the fixed-design confidence interval construction as described here can be used, which ultimately saves the $O(\sqrt{d})$ factor. While apart from log-factors SupLinRel is unimprovable, in practice it is excessively wasteful.
A confidence ellipsoid construction based on covering arguments can be found in the paper by Varsha Dani, Thomas Hayes and Sham Kakade:
An analogous construction is given by
The confidence ellipsoid construction described in this post is based on
Laplace’s method is also called the “method of mixtures” in the probability literature and its use goes back to the work of Robbins and Siegmund around 1970. In practice, the improvement that results from using Laplace’s method as compared to the earlier ellipsoidal constructions based on covering arguments is quite enormous.
As mentioned earlier, a variant of SupLinRel that is based on ridge regression (as opposed to LinRel, which is based on truncating the smallest eigenvalue of the Grammian) is described in
The algorithm, which is called SupLinUCB, uses the same construction as SupLinRel and enjoys the same regret.
For a fixed action set (i.e., when $\cD_t=\cD$), one can use an elimination based algorithm, which in every phase collects data by using a “spanning set” of the remaining actions. At the end of the phase, since the data collected in the phase only depends on data collected in previous phases, one can use Hoeffding’s bounds to construct UCB values for the actions. This is the idea underlying the “SpectralEliminator” algorithm in the paper
## 8 thoughts on “Ellipsoidal Confidence Sets for Least-Squares Estimators”
1. Shuai Li says:
Thanks for sharing the posts. I have gained a lot of insights! Hope to finish all soon:) Just to point out there is repeat in Notes session.
2. Hairi says:
Thank you Dr. Lattimore. I have a question about the derivation of the inequality after (6), where in the bracket, you said first let s=n, and then s=n-1,etc. But what is n? is it s=t instead?
1. Hi!
Thanks for the comment. You are right: $n$ should have been $t$ here. Oh, and I edited the page to reflect this.
– Csaba
3. Claire says:
Hi,
Thanks a lot for your great blog and book!
I was reading Exercise 20.12 in the book on the sequential likelihood ratio confidence set extracted from Lemma 2 of Lai and Robbins (1985). This construction seems to be for the iid bandit. Can it be generalized to linear bandit as well?
1. Tor Lattimore says:
Hi Claire,
Yes, this result holds very generally. Only a martingale structure is being used and even the estimates can be any appropriately measurable function.
1. Claire says:
Could you kindly point out a reference for the general setting, or briefly mention what the corresponding martingale is in the general setting?
To derive equation (1), do we need some additional assumptions on the choice of $a_s$ (e.g. they form a basis)?
Yes. It is assumed that V is non-singular, which is possible only when $(a_s)$ span $\R^d$.
http://www.perimeterinstitute.ca/videos/analytic-approaches-tensor-networks-field-theories
Analytic approaches to tensor networks for field theories
Recording Details
PIRSA Number:
17040036
Abstract
I will discuss analytic approaches to constructing tensor network representations of quantum field theories, more specifically conformal field theories in 1+1 dimensions. A key insight is that we should understand how well the tensor network can reproduce the correlation functions of the quantum field theory. Based on this measure of closeness, I will present rigorous results allowing for explicit error bounds which show that both matrix product states (MPS) and the multiscale entanglement renormalization ansatz (MERA) do approximate conformal field theories. In particular, I will discuss the case of Wess-Zumino-Witten models.
based on joint work with Robert Koenig (MPS), Brian Swingle and Michael Walter (MERA)
http://mathhelpforum.com/advanced-statistics/160409-empirical-bayes.html | ## Empirical Bayes
So, I'm taking an inference course. Working through some (optional) homework, the theme of the current problems seems to be (1) find a Bayes estimator of theta and then (2) show that "blah" is an empirical Bayes estimator of theta. The issue is that, unless I completely zoned out in class, we didn't really give a good definition of empirical Bayes. The idea seems to be that you come up with some estimate of the hyperparameters using the marginal distribution of your data and then plug those into your Bayes estimate, but I feel like I'm doing something wrong because everything I'm doing feels incredibly ad-hoc. Sparing the specifics of the problem, an example is
(a) Show that the Bayes estimator of $\theta$ is $mX/(m + 1)$.
(b) Show that an empirical Bayes estimator of $\theta$ is $[1 - \frac{p\sigma^2}{\|X\|^2}]X$
and my solution to part (b) is showing that $E\|X\|^2 = p\sigma^2 (m + 1)$ so that we can use $\frac{p \sigma^2}{\|X\|^2}$ to estimate $1/(m + 1)$.
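For what it's worth, that moment identity is easy to confirm by simulation. I'm assuming the usual setup for this kind of exercise: $X\mid\theta \sim N(\theta,\sigma^2 I_p)$ with prior $\theta \sim N(0, m\sigma^2 I_p)$, so that marginally $X \sim N(0,(m+1)\sigma^2 I_p)$:

```python
import numpy as np

rng = np.random.default_rng(5)
p, m, sigma, reps = 4, 3.0, 2.0, 200_000

theta = rng.normal(scale=np.sqrt(m) * sigma, size=(reps, p))  # draws from the prior
X = theta + rng.normal(scale=sigma, size=(reps, p))           # draws from the likelihood

# Monte Carlo estimate of E||X||^2 vs the claimed p * sigma^2 * (m + 1)
print(np.mean(np.sum(X**2, axis=1)), p * sigma**2 * (m + 1))
```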
This just seems so ad-hoc that it feels wrong. I guess my question is if this really is what empirical Bayes estimation entails in these sorts of problems.
https://en.wikibooks.org/wiki/Real_Analysis/Connected_Sets | # Real Analysis/Connected Sets
Intuitively, the concept of connectedness is a way to describe whether sets are "all in one piece" or composed of "separate pieces". For motivation of the definition, any interval in ${\displaystyle \mathbb {R} }$ should be connected, but a set ${\displaystyle A}$ consisting of two disjoint closed intervals ${\displaystyle [a,b]}$ and ${\displaystyle [c,d]}$ should not be connected.
Definition A set ${\displaystyle A}$ in ${\displaystyle \mathbb {R} ^{n}}$ is connected if it is not a subset of the disjoint union of two open sets, both of which it intersects.
Alternative Definition A set ${\displaystyle X}$ is called disconnected if there exists a continuous surjection ${\displaystyle f:X\to \{0,1\}}$; such a function is called a disconnection. If no such function exists then we say ${\displaystyle X}$ is connected.
Examples The set ${\displaystyle [0,2]}$ cannot be covered by two disjoint open sets that both intersect it; for example, the open sets ${\displaystyle (-1,1)}$ and ${\displaystyle (1,3)}$ do not cover ${\displaystyle [0,2]}$ because the point ${\displaystyle x=1}$ is not in their union. Thus ${\displaystyle [0,2]}$ is connected.
However, the set ${\displaystyle \{0,2\}}$ can be covered by the union of ${\displaystyle (-1,1)}$ and ${\displaystyle (1,3)}$, so ${\displaystyle \{0,2\}}$ is not connected.
## Path-Connected
A similar concept is path-connectedness.
Definition A set is path-connected if any two points can be connected with a path without exiting the set.
A useful example is ${\displaystyle \mathbb {R} ^{2}\setminus \{(0,0)\}}$. Any two points a and b can be connected by simply drawing a path that goes around the origin instead of right through it; thus this set is path-connected. However, ${\displaystyle \mathbb {R} \setminus \{0\}}$ is not path-connected, because for ${\displaystyle a=-3}$ and ${\displaystyle b=3}$, there is no path to connect a and b without going through ${\displaystyle x=0}$.
As should be obvious at this point, in the real line regular connectedness and path-connectedness are equivalent; however, this does not hold true for ${\displaystyle \mathbb {R} ^{n}}$ with ${\displaystyle n>1}$. When this does not hold, path-connectivity implies connectivity; that is, every path-connected set is connected.
## Simply Connected
Another important topic related to connectedness is that of a simply connected set. This is an even stronger condition that path-connected.
Definition A set ${\displaystyle A}$ is simply-connected if any loop completely contained in ${\displaystyle A}$ can be shrunk down to a point without leaving ${\displaystyle A}$.
An example of a simply-connected set is any open ball in ${\displaystyle \mathbb {R} ^{n}}$. However, the previous path-connected set ${\displaystyle \mathbb {R} ^{2}\setminus \{(0,0)\}}$ is not simply connected, because for any loop p around the origin, if we shrink p down to a single point we have to leave the set at ${\displaystyle (0,0)}$.
http://wikimechanics.org/internal-energy | Internal Energy
Internal Energy of Quarks
| $\zeta$ | $z$ | $\overline{z}$ | $U$ |
| --- | --- | --- | --- |
| 1 | u | $\overline{\textrm{u}}$ | 242.9 (MeV) |
| 2 | d | $\overline{\textrm{d}}$ | 0 |
| 3 | e | $\overline{\textrm{e}}$ | -32.97 (MeV) |
| 4 | g | $\overline{\textrm{g}}$ | 298.3 (MeV) |
| 5 | m | $\overline{\textrm{m}}$ | 1,186 (MeV) |
| 6 | a | $\overline{\textrm{a}}$ | 3.122 (MeV) |
| 7 | t | $\overline{\textrm{t}}$ | 149.6 (MeV) |
| 8 | b | $\overline{\textrm{b}}$ | -85.01 (MeV) |
| 9 | s | $\overline{\textrm{s}}$ | 50.12 (MeV) |
| 10 | c | $\overline{\textrm{c}}$ | -53.06 (MeV) |
| 11 | $\textbf{a}$ | $\overline{\textbf{a}}$ | -2.22 (eV) |
| 12 | $\textbf{b}$ | $\overline{\textbf{b}}$ | -1.80 (eV) |
| 13 | $\textbf{i}$ | $\overline{\textbf{i}}$ | -2.11 (eV) |
| 14 | $\textbf{w}$ | $\overline{\textbf{w}}$ | -2.55 (eV) |
| 15 | $\textbf{d}$ | $\overline{\textbf{d}}$ | -0.029 (eV) |
| 16 | $\textbf{l}$ | $\overline{\textbf{l}}$ | -0.050 (eV) |
Consider a generic particle P characterized by some repetitive chain of events noted as
$\Psi ^{\sf{P}} = \left( \sf{\Omega}_{1}, \sf{\Omega}_{2}, \sf{\Omega}_{3} \ \ldots \ \right)$
where each orbital cycle is a bundle of $N$ seeds
$\sf{\Omega} = \left\{ \sf{Z}_{1}, \sf{Z}_{2} \ \ldots \ \sf{Z}_{\it{i}} \ \ldots \ \sf{Z}_{\it{N}} \right\}$
Let each seed be described by its audibility $\varepsilon$ and its specific energy $\hat{E}$. We characterize $\sf{P}$ using a sum over all of these component seeds
\begin{align} U \equiv \sum_{i \, \sf{=1}}^{N} \varepsilon_{\it{i}} \hat{E}_{\it{i}} \end{align}
Definition: the number U is called the internal energy of P. The internal energy may be positive, negative or zero depending on a particle's composition and some choice for the calorimetric reference sensation.
To establish numerical values for the internal energy consider a down-quark defined by the pair of seeds
$\sf{d} \equiv \{ \sf{D}, \sf{O} \}$
Applying the foregoing definition of internal energy gives
$U^{\sf{d}} = \hat{E} \left( \sf{D} \right) - \hat{E} \left( \sf{O} \right)$
If a down-seed has the same specific energy as an ordinary conjugate-seed, then
$\hat{E} \left( \sf{D} \right) = \hat{E} \left( \sf{O} \right)$ and $U^{\sf{d}} =0$
Let us require experimental practice to obtain this consistently; for example, by using the down quark as a reference particle to set the null value when measuring internal energy. Down quarks are objectified from black sensations, so this requirement could be interpreted as closing any shutters and using insulation so that a measuring instrument is completely isolated and in the dark when indicating zero. The other numbers shown in the accompanying table are obtained by juggling quark coefficients and laboratory observations1 of nuclear particles. The conventional unit used for reporting these measurements is the electronvolt, abbreviated as (eV).

Theorem: an ordinary quark and its associated anti-quark have the same internal energy. Consider the generic quarks
$\sf{z} = \{ \sf{Z}, \sf{O} \}$ and $\bar{\sf{z}} = \{ \sf{Z}, \overline{\sf{O}} \}$
By the foregoing definition, the internal energies for these particles are given by
$U^{\sf{z}} = \hat{E} \left( \sf{Z} \right) - \hat{E} \left( \sf{O} \right)$ and $U^{ \sf{ \bar{z}}} = \hat{E} \left( \sf{Z} \right) - \hat{E} \left( \sf{\overline{O}} \right)$
But the hypothesis of conjugate symmetry asserts that $\hat{E} ( {\sf{O}} ) = \hat{E} ( \overline{\sf{O}} )$. So both quarks have the same internal energy and we can unambiguously use the quark index $\zeta$ to refer to either quark
$U^{\sf{z}} = U^{\sf{\bar{z}}} = U^{\zeta}$
## Conservation of Internal Energy
Consider that each orbital cycle of P may also be described as a bundle of N quarks
$\sf{\Omega} = \left\{ \sf{q}_{1}, \sf{q}_{2} \ \ldots \ \sf{q}_{\it{i}} \ \ldots \ \sf{q}_{\it{N}} \right\}$
By definition each quark is composed from a pair of seeds $\sf{q} = \left\{ \sf{Z} , \sf{Z}^{\prime} \right\}$ and characterized by its internal energy
$U ^{ \sf{q}} = \varepsilon \hat{E} + \varepsilon^{\prime} \hat{E}^{\prime}$
where $\hat{E}$ is the specific energy and $\varepsilon$ is the audibility of each seed. Then by the definition of internal energy as a sum over all seeds
\begin{align} U ^{ \sf{P}} = \sum_{i\sf{=1}}^{N} \left( \varepsilon _{\it{i}} \hat{E}_{\it{i}} + \varepsilon^{\prime} _{\it{i}} \hat{E}^{\prime}_{\it{i} } \right) = \sum_{i\sf{=1}} ^{N} U_{\it{i}}^{\sf{q}} \end{align}
and so the internal energy of a compound quark is just a sum over the internal energies of its component quarks. Quarks are indestructible and the internal energy of each quark has a specific fixed value, so whenever some generic compound quarks $\mathbb{X}$, $\mathbb{Y}$ and $\mathbb{Z}$ interact, if
$\mathbb{ X} + \mathbb{ Y} \leftrightarrow \mathbb{ Z}$ then $U ^{ \mathbb{X} } + U ^{ \mathbb{Y} } = U ^{ \mathbb{Z} }$
We say that internal energy is conserved when particles are combined or decomposed. Also by the hypothesis of conjugate symmetry an ordinary quark and its anti-quark have the same internal energy. Swapping ordinary quarks with anti-quarks does not change the total number of quarks of a given type. So particles have the same internal energy as their associated anti-particles
$U \left( \sf{P} \right) = U \left( \overline{\sf{P}} \right)$
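The additivity and conservation rules above can be sketched in a few lines. The internal-energy values below are hypothetical placeholders, not the measured entries from the table referenced in the text; only the down quark's zero is fixed by the convention stated earlier.

```python
# HYPOTHETICAL internal-energy values in eV; the real numbers come from
# the table referenced in the text (down quark set to zero by convention).
quark_U = {"d": 0.0, "u": 2.2, "s": 95.0}

def internal_energy(bundle):
    # U of a compound quark is the sum over its component quarks
    return sum(quark_U[q] for q in bundle)

X = ["u", "d"]
Y = ["s"]
Z = X + Y        # the interaction X + Y <-> Z

# Internal energy is conserved: U^X + U^Y = U^Z, because both sides
# are sums over exactly the same indestructible quarks.
assert internal_energy(X) + internal_energy(Y) == internal_energy(Z)
```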
Summary
Adjective: Internal Energy
Definition: \begin{align} U \equiv \sum_{i \, \sf{=1}}^{N} \varepsilon_{\it{i}} \hat{E}_{\it{i}} \end{align}
Equation: 4-7
page revision: 903, last edited: 23 Oct 2018 19:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9675394296646118, "perplexity": 1796.9911219774194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998605.33/warc/CC-MAIN-20190618023245-20190618045245-00189.warc.gz"} |
http://mathhelpforum.com/statistics/190159-binomial-formula-probability-problem.html | 1. binomial formula, probability problem.
a factory is known to make 15% defectives.
Most of the products are found and fixed or thrown out. Suppose two products are randomly selected for inspection.
1. What is the probability that both pieces are defect free?
2. Suppose at least one of the pieces has a flaw. What is the probability that both are defective?
I know the binomial theorem can be useful here.
P(X = k) = C(n, k) p^k (1-p)^(n-k), where C(n, k) is the binomial coefficient
also I did know that if neither piece is defect free:
C(2, 0) · 0.85^0 · (1 − 0.85)^2
= 1 · (0.15)^2
= 0.0225
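Both questions can be checked numerically with the binomial formula. A short sketch (`math.comb` requires Python 3.8+):

```python
from math import comb

# p = probability a piece is defective, n = pieces inspected
p, n = 0.15, 2

def binom(k):
    # P(X = k) = C(n, k) * p^k * (1 - p)^(n - k), k = number of defectives
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

p_both_ok = binom(0)                       # 0.85^2 ≈ 0.7225
p_both_bad = binom(2)                      # 0.15^2 ≈ 0.0225
p_at_least_one_bad = 1 - p_both_ok         # complement of "both defect-free"
p_cond = p_both_bad / p_at_least_one_bad   # ≈ 0.0225 / 0.2775 ≈ 0.081
```

So the answer to question 1 is about 0.7225, and the conditional probability in question 2 is about 0.081.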
http://www.ck12.org/book/Peoples-Physics-Book-Version-3-with-Videos/section/23.0/ | <meta http-equiv="refresh" content="1; url=/nojavascript/"> Feynman Diagrams | CK-12 Foundation
# Chapter 23: Feynman Diagrams
Created by: CK-12
## The Big Idea
The interaction of subatomic particles through the four fundamental forces is the basic foundation of all the physics we have studied so far. There’s a relatively simple way to calculate the probability of collisions, annihilations, or decays of particles, invented by physicist Richard Feynman, called Feynman diagrams. Drawing Feynman diagrams is the first step in visualizing and predicting the subatomic world. If a process does not violate a known conservation law, then that process must exist with some probability. All the Standard Model rules of the previous chapter are used here. You are now entering the exciting world of particle physics.
Feb 23, 2012 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8607890605926514, "perplexity": 1107.965960681566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115900160.86/warc/CC-MAIN-20150124161140-00010-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://m-phi.blogspot.com/2013/08/101-dalmatians.html | ## Monday, 12 August 2013
### 101 Dalmatians
Here is an example of applied mathematical reasoning, similar to a couple of examples in a paper "Some More Curious Inferences" (Analysis, 2005), about the phenomenon (discovered by Gödel 1936) of proof speed-up. It's a modification of the kind of arithmetic examples given in the work of Putnam and Field (Science Without Numbers, 1980).
(1) There are exactly 100 food bowls
(2) There are exactly 101 dalmatians
(3) For each dalmatian $d$, there is exactly one food bowl $b$ that $d$ uses.
--------------------------------------
(C) So, there are at least two dalmatians $d_1, d_2$ that use the same food bowl.
To apply mathematics, we apply Comprehension and Hume's Principle, which allow us to reformulate the premises (1)-(3) and conclusion (C), referring to two mixed sets, $B$ and $D$, and one mixed function $u$:
(1)* $|B| = 100$
(2)* $|D| = 101$
(3)* $u : D \to B$.
--------------------------------------
(C)* So, there are $d_1, d_2 \in D$, with $d_1 \neq d_2$ such that $u(d_1) = u(d_2)$.
To show that this is correct, note first that $100 < 101$. So, $|B| < |D|$. Next, note that the Pigeonhole Principle implies that if $u : D \to B$ and $|B| < |D|$, then $u$ is not injective. So, $u$ is not injective. So, there are distinct $d_1, d_2 \in D$ such that $u(d_1) = u(d_2)$. This is the required conclusion (C)*, which takes us back to (C).
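The Pigeonhole step used above can be checked by brute force for small sizes: every function from a $(b+1)$-element set to a $b$-element set fails to be injective. A quick exhaustive sketch:

```python
from itertools import product

# Every function u from a (b+1)-element set D to a b-element set B
# fails to be injective: exhaustive check for small b.
for b in range(1, 5):
    D, B = range(b + 1), range(b)
    for u in product(B, repeat=len(D)):   # u encoded as the tuple of values u(d)
        assert len(set(u)) < len(D)       # some value repeats, so u is not injective
```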
1. Hi Jeff, this is nice. A couple of observations, I'd be curious to know what you think:
1. The argument is valid even if u is not a function. You can weaken the third premise to say that each Dalmatian uses a food bowl, and it is still true that there must be distinct Dalmatians using the same bowl.
2. For fixed |B| and |D| the argument does not require anything as strong as PHP, or FOL for that matter. It can be carried out in propositional logic, by writing down 101 premises, each one of which is a disjunction of 100 atoms Uxy (intuitively saying that Dalmatian x uses bowl y). It will validly follow that two Dalmatians use the same bowl (this will also be some very long disjunction). The proof will then be ginormous, but it will not employ any high-power mathematics.
3. For variable |B| and |D|, you do need some mathematical machinery. PHP is, in fact, equivalent to the principle of induction (over some weak theory, although as far as I can tell it still requires some weak form of comprehension). So you already have full PA, for all practical purposes. But this does not seem to invalidate any of your arguments, does it?
2. Hi Aldo, many thanks.
On points 2 & 3, yes, right - that's more or less the point of the 2005 paper, "Some More Curious Inferences", that I linked to. That is, the "101 Dalmatians Inference" is valid in logic alone; but the length of proof is fairly horrible: if I recall right, my calculation was that the proof length in symbols goes like $O(n^3 \log n)$ (the $\log n$ comes from variable subscripts being coded in binary) for a tableau; so, the purely logical proof would fill a 200 page book! But, ascending, using Comprehension, HP, and PHP, we get a much quicker proof, and then instantiate with decimal numerals. We might even say that PHP *explains* why the 101 Dalmatians Inference is valid ... (that is, we invert logicism: we use mathematics to explain certain validities)
The paper was influenced by an earlier 1987 paper "A Curious Inference" by Boolos. At the end of the paper, I asked how the nominalist would justify this reasoning. I planned to follow up the paper with a longer one on speed-up, but didn't bother in the end. But Richard Pettigrew has written a very nice and interesting reply,
"Indispensability Arguments and Instrumental Nominalism" (RSL 2011) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822448253631592, "perplexity": 927.5150463036044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267162385.83/warc/CC-MAIN-20180925182856-20180925203256-00094.warc.gz"} |
https://www.physicsforums.com/threads/average-velocity.152417/ | Average Velocity
1. Jan 21, 2007
soulja101
1. The problem statement, all variables and given/known data
A person runs 1.00 × 10^2 m [N], 1.50 × 10^2 m [S] and finally 5.0 × 10^1 m [E]. If the average speed is 40 m/s, what is the average velocity?
2. Relevant equations
3. The attempt at a solution
1.00 × 10^2 m [N]
1.50 × 10^2 m [S]
5.0 × 10^1 m [E]
You start out by adding the north and south vectors, which gives you 50 m [S]. I don't know what to do after that.
2. Jan 21, 2007
Integral
Staff Emeritus
Add the last displacement vector. Then find the magnitude, this is the total displacement.
Using the average speed you can find the time spend on each leg. Now use the total time and the final displacement to get an average velocity.
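A numeric sketch of this procedure, using the displacements from the problem statement (east/north components in metres):

```python
from math import hypot

# Displacements as (east, north) components in metres
legs = [(0.0, 100.0), (0.0, -150.0), (50.0, 0.0)]

total_distance = sum(hypot(e, n) for e, n in legs)   # 300.0 m
east = sum(e for e, _ in legs)                       # 50 m east
north = sum(n for _, n in legs)                      # -50 m, i.e. 50 m south

displacement = hypot(east, north)                    # ~70.7 m, directed [SE]
time = total_distance / 40.0                         # 7.5 s at 40 m/s
avg_velocity = displacement / time                   # ~9.4 m/s [SE]
```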
3. Jan 21, 2007
soulja101
magnitude?
How do I find the magnitude?
https://biomch-l.isbweb.org/threads/30292-Foot-stiffness-and-footwear?s=9c89496eb5eba96e5e0dabe607cb16f2&p=34325 | Re: Foot stiffness and footwear
Bullet points -
1. The transverse arch of the foot contributes more to arch stiffness than the longitudinal arches (see PDF below).
2. If the transverse arch did not extend into the forefoot then, since systems tend to fail at their weakest point, the forces acting on the foot during gait would tend to flex, distort and perhaps fracture the metatarsals.
3. The metatarsal parabola gives rise to, or makes more pronounced, the distal transverse arch during gait, in the manner laid out in post #1.
4. The tensioned transverse ligament/metatarsal/tarsal arch component of the system means that the metatarsals are subject to compressive and stretching forces during gait, and not flexion, which they are less able to tolerate. This is explained in post #5.
5. One of the roles of the intrinsic foot muscles is to help regulate the transverse and longitudinal arches to ensure a proper distribution of forces throughout the foot. They must be strong enough to do this or foot pathologies are likely to develop.
PDF: "Curvature of the transverse arch governs stiffness of the human foot"
http://mathoverflow.net/questions/88512/fubini-study-metric-and-einstein-constant | # Fubini Study Metric and Einstein constant
Hi all,
it is well known that the complex projective space with the fubini study metric is Einstein, but what is the explicit value, i.e. for which $\mu$ does $Ric=\mu g$ hold?
Moreover, I would like to know how to calculate the sectional cuvature explicitly, because I would like to calculate the number $\sqrt{\sum K_{ij}}$ explicitly for a given orthonormal basis. ($K_{ij}$ is the sectional curvature of the plane spanned by $e_i$ and $e_j$)
-
Isn't this available in many different places, including Griffiths-Harris and wikipedia? – Deane Yang Feb 15 '12 at 11:58
$$\mu=2n+2$$ ($\mathbb C\mathrm P^n$ is isometric to the quotient $\mathbb S^{2n+1}/\mathbb S^1$. You can use O'Neill's formula to calculate sectional curvature; it is $=4$ in complex directions and $=1$ in real directions.) – Anton Petrunin Feb 15 '12 at 14:29
As suggested by Anton, you can use the O'Neill formulas in the Riemannian submersion $\mathbb S^{2n+1}\to \mathbb{C} P^n$ that defines the Fubini-Study metric on $\mathbb C P^n$. This gives the following: suppose $X,Y$ are orthonormal tangent vectors at some point in $\mathbb C P^n$, and denote by $\overline X,\overline Y$ their horizontal lifts to $\mathbb S^{2n+1}$ (which are also orthonormal). Then $$sec(X,Y)=1+\tfrac34\|[\overline X,\overline Y]^v\|^2=1+3|\overline g(\overline Y,J\overline X)|^2,$$ where $\overline g$ is the canonical Euclidean metric on $\mathbb C^{n+1}$, $()^v$ denotes the vertical component wrt the submersion and $J$ is the complex structure, i.e., multiplication by $\sqrt{-1}$. Note that this immediately implies that $\mathbb CP^n$ is $\tfrac14$-pinched.
With the above formula, you can easily compute the Einstein constant of $\mathbb C P^n$ to be equal to $\mu=2n+2$, see e.g. Petersen's book "Riemannian Geometry", chapter 3.
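For $n=1$ this can be sanity-checked numerically: in an affine chart the Fubini-Study metric on $\mathbb{CP}^1$ is the conformal metric $(dx^2+dy^2)/(1+x^2+y^2)^2 = e^{2\varphi}(dx^2+dy^2)$ with $\varphi=-\log(1+x^2+y^2)$, whose Gaussian curvature $K=-e^{-2\varphi}\Delta\varphi$ should be the constant $4$, so that $Ric = Kg = (2n+2)g$ for $n=1$. A quick finite-difference sketch:

```python
import math

def phi(x, y):
    # Conformal potential: g_FS = exp(2*phi) * (dx^2 + dy^2) on the chart
    return -math.log(1 + x * x + y * y)

def gaussian_curvature(x, y, h=1e-4):
    # K = -exp(-2*phi) * (phi_xx + phi_yy), Laplacian by central differences
    lap = (phi(x + h, y) + phi(x - h, y) + phi(x, y + h) + phi(x, y - h)
           - 4 * phi(x, y)) / h ** 2
    return -math.exp(-2 * phi(x, y)) * lap

for pt in [(0.0, 0.0), (0.3, -0.7), (1.5, 2.0)]:
    assert abs(gaussian_curvature(*pt) - 4.0) < 1e-3   # constant curvature 4
```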
Another possible way of doing it is using that this is a Kahler manifold. The Fubini-Study metric can be thought of as $\omega_{FS}=\sqrt{-1}\partial\overline\partial\log\|z\|^2$, where $\|z\|^2$ is the square norm of a local non vanishing holomorphic section (it is independent of the choice of section by the $\partial\overline\partial$-lemma). You can then compute in local normal (holomorphic) coordinates the coefficients $g_{i\bar j}$ and use that the Ricci form is given by $Ric(\omega)=-\sqrt{-1}\partial\overline\partial\log\det(g_{i\bar{j}})$. This will obviously give you the same result, but in the form $Ric(\omega_{FS})=(n+1)\omega_{FS}$. As pointed out in the comments below, the reason for the missing factor $2$ in this computation is that we have to change from real orthonormal frames to complex unitary frames.
Your last sentence is not correct, the missing factor of $2$ come up when changing from real orthonormal frames to complex unitary frames. – YangMills Feb 15 '12 at 15:30
typo: the metric should be $g_{i\bar{j}}$ – John B Feb 16 '12 at 17:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9863813519477844, "perplexity": 132.1895968211241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657868.2/warc/CC-MAIN-20150417045737-00046-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://math.libretexts.org/Bookshelves/Algebra/Map%3A_College_Algebra_(OpenStax)/01%3A_Prerequisites/1.05%3A_Polynomials | $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
# 1.5: Polynomials
Learning Objectives
In this section students will:
• Identify the degree and leading coefficient of polynomials.
• Multiply polynomials.
• Use FOIL to multiply binomials.
• Perform operations with polynomials of several variables.
Earl is building a doghouse, whose front is in the shape of a square topped with a triangle. There will be a rectangular door through which the dog can enter and exit the house. Earl wants to find the area of the front of the doghouse so that he can purchase the correct amount of paint. Using the measurements of the front of the house, shown in Figure $$\PageIndex{1}$$, we can create an expression that combines several variable terms, allowing us to solve this problem and others like it.
• First find the area of the square in square feet.
\begin{align*} A &= s^2\\ &= {(2x)}^2\\ &= 4x^2 \end{align*}
• Then find the area of the triangle in square feet.
\begin{align*} A &= \dfrac{1}{2}bh\\ &= \dfrac{1}{2}(2x)\left (\dfrac{3}{2} \right )\\ &= \dfrac{3}{2}x \end{align*}
• Next find the area of the rectangular door in square feet.
\begin{align*} A &= lw\\ &= x\times1\\ &= x \end{align*}
The area of the front of the doghouse can be found by adding the areas of the square and the triangle, and then subtracting the area of the rectangle. When we do this, we get
$$4x^2+\dfrac{3}{2}x-x$$ $$ft^2$$
or
$$4x^2+\dfrac{1}{2}x$$ $$ft^2$$
In this section, we will examine expressions such as this one, which combine several variable terms.
## Identifying the Degree and Leading Coefficient of Polynomials
The formula just found is an example of a polynomial, which is a sum of or difference of terms, each consisting of a variable raised to a nonnegative integer power. A number multiplied by a variable raised to an exponent, such as $$384\pi$$, is known as a coefficient. Coefficients can be positive, negative, or zero, and can be whole numbers, decimals, or fractions. Each product $$a_ix^i$$, such as $$384\pi w$$, is a term of a polynomial. If a term does not contain a variable, it is called a constant.
A polynomial containing only one term, such as $$5x^4$$, is called a monomial. A polynomial containing two terms, such as $$2x−9$$, is called a binomial. A polynomial containing three terms, such as $$−3x^2+8x−7$$, is called a trinomial.
We can find the degree of a polynomial by identifying the highest power of the variable that occurs in the polynomial. The term with the highest degree is called the leading term because it is usually written first. The coefficient of the leading term is called the leading coefficient. When a polynomial is written so that the powers are descending, we say that it is in standard form.
Polynomials
A polynomial is an expression that can be written in the form
$a_nx^n+...+a_2x^2+a_1x+a_0$
Each real number $$a_i$$ is called a coefficient. The number $$a_0$$ that is not multiplied by a variable is called a constant. Each product $$a_ix^i$$ is a term of a polynomial. The highest power of the variable that occurs in the polynomial is called the degree of a polynomial. The leading term is the term with the highest power, and its coefficient is called the leading coefficient.
How to: Given a polynomial expression, identify the degree and leading coefficient.
1. Find the highest power of x to determine the degree.
2. Identify the term containing the highest power of x to find the leading term.
3. Identify the coefficient of the leading term.
Example $$\PageIndex{1}$$: Identifying the Degree and Leading Coefficient of a Polynomial
For the following polynomials, identify the degree, the leading term, and the leading coefficient.
1. $$3+2x^2−4x^3$$
2. $$5t^5−2t^3+7t$$
3. $$6p−p^3−2$$
Solution
1. The highest power of $$x$$ is $$3$$, so the degree is $$3$$. The leading term is the term containing that degree, $$−4x^3$$. The leading coefficient is the coefficient of that term, $$−4$$.
2. The highest power of $$t$$ is $$5$$, so the degree is $$5$$. The leading term is the term containing that degree, $$5t^5$$. The leading coefficient is the coefficient of that term, $$5$$.
3. The highest power of $$p$$ is $$3$$, so the degree is $$3$$. The leading term is the term containing that degree, $$−p^3$$, The leading coefficient is the coefficient of that term, −1.
Exercise $$\PageIndex{1}$$
Identify the degree, leading term, and leading coefficient of the polynomial $$4x^2−x^6+2x−6$$.
The degree is $$6$$, the leading term is $$−x^6$$, and the leading coefficient is $$−1$$.
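One way to make these definitions concrete is to represent a polynomial as a dictionary of `{power: coefficient}` pairs. A minimal Python sketch using the Try It polynomial:

```python
# The polynomial 4x^2 - x^6 + 2x - 6 as {power: coefficient}
poly = {2: 4, 6: -1, 1: 2, 0: -6}

# Degree = highest power with a nonzero coefficient
degree = max(power for power, coeff in poly.items() if coeff != 0)
# Leading coefficient = coefficient of the term with that degree
leading_coefficient = poly[degree]

assert degree == 6
assert leading_coefficient == -1   # leading term is -x^6
```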
We can add and subtract polynomials by combining like terms, which are terms that contain the same variables raised to the same exponents. For example, $$5x^2$$ and $$−2x^2$$ are like terms, and can be added to get $$3x^2$$, but $$3x$$ and $$3x^2$$ are not like terms, and therefore cannot be added.
How to: Given multiple polynomials, add or subtract them to simplify the expressions.
1. Combine like terms.
2. Simplify and write in standard form.
Example $$\PageIndex{2}$$: Adding Polynomials
Find the sum.
$$(12x^2+9x−21)+(4x^3+8x^2−5x+20)$$
Solution
\begin{align*} &4x^3+(12x^2+8x^2)+(9x-5x)+(-21+20)\qquad \text{Combine like terms} \\ &4x^3+20x^2+4x-1\qquad \qquad \qquad \qquad \qquad \qquad \; \; \; \text{Simplify} \end{align*}
Analysis
We can check our answers to these types of problems using a graphing calculator. To check, graph the problem as given along with the simplified answer. The two graphs should be equivalent. Be sure to use the same window to compare the graphs. Using different windows can make the expressions seem equivalent when they are not.
Exercise $$\PageIndex{2}$$
Find the sum.
$$(2x^3+5x^2−x+1)+(2x^2−3x−4)$$
$$2x^3+7x^2−4x−3$$
Example $$\PageIndex{3}$$: Subtracting Polynomials
Find the difference.
$$(7x^4−x^2+6x+1)−(5x^3−2x^2+3x+2)$$
Solution
$$7x^4−5x^3+(−x^2+2x^2)+(6x−3x)+(1−2)$$ Combine like terms
$$7x^4−5x^3+x^2+3x−1$$ Simplify
Analysis
Note that finding the difference between two polynomials is the same as adding the opposite of the second polynomial to the first.
Exercise $$\PageIndex{3}$$
Find the difference.
$$(−7x^3−7x^2+6x−2)−(4x^3−6x^2−x+7)$$
$$−11x^3−x^2+7x−9$$
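Combining like terms is just adding (or subtracting) coefficients power by power. A short Python sketch, reproducing the subtraction from Example $$\PageIndex{3}$$ with the same `{power: coefficient}` representation:

```python
def add(p, q, sign=1):
    # Combine like terms: add (sign=1) or subtract (sign=-1) coefficients
    out = dict(p)
    for power, coeff in q.items():
        out[power] = out.get(power, 0) + sign * coeff
    return {power: c for power, c in out.items() if c != 0}

# (7x^4 - x^2 + 6x + 1) - (5x^3 - 2x^2 + 3x + 2)
p = {4: 7, 2: -1, 1: 6, 0: 1}
q = {3: 5, 2: -2, 1: 3, 0: 2}
difference = add(p, q, sign=-1)

assert difference == {4: 7, 3: -5, 2: 1, 1: 3, 0: -1}  # 7x^4 - 5x^3 + x^2 + 3x - 1
```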
## Multiplying Polynomials
Multiplying polynomials is a bit more challenging than adding and subtracting polynomials. We must use the distributive property to multiply each term in the first polynomial by each term in the second polynomial. We then combine like terms. We can also use a shortcut called the FOIL method when multiplying binomials. Certain special products follow patterns that we can memorize and use instead of multiplying the polynomials by hand each time. We will look at a variety of ways to multiply polynomials.
### Multiplying Polynomials Using the Distributive Property
To multiply a number by a polynomial, we use the distributive property. The number must be distributed to each term of the polynomial. We can distribute the $$2$$ in $$2(x+7)$$ to obtain the equivalent expression $$2x+14$$. When multiplying polynomials, the distributive property allows us to multiply each term of the first polynomial by each term of the second. We then add the products together and combine like terms to simplify.
How to: Given the multiplication of two polynomials, use the distributive property to simplify the expression.
1. Multiply each term of the first polynomial by each term of the second.
2. Combine like terms.
3. Simplify.
Example $$\PageIndex{4}$$: Multiplying Polynomials Using the Distributive Property
Find the product.
$$(2x+1)(3x^2−x+4)$$
Solution
\begin{align*} &2x(3x^2-x+4)+1(3x^2-x+4)\qquad \text{ Use the distributive property }\\ &(6x^3-2x^2+8x)+(3x^2-x+4)\qquad \text{ Multiply }\\ &6x^3+(-2x^2+3x^2)+(8x-x)+4\qquad \text{ Combine like terms } \\ &6x^3+x^2+7x+4\qquad \text{ Simplify } \end{align*}
Analysis
We can use a table to keep track of our work, as shown in Table $$\PageIndex{1}$$. Write one polynomial across the top and the other down the side. For each box in the table, multiply the term for that row by the term for that column. Then add all of the terms together, combine like terms, and simplify.
| | $$3x^2$$ | $$−x$$ | $$+4$$ |
| --- | --- | --- | --- |
| $$2x$$ | $$6x^3$$ | $$−2x^2$$ | $$8x$$ |
| $$+1$$ | $$3x^2$$ | $$−x$$ | $$4$$ |
Exercise $$\PageIndex{4}$$
Find the product.
$$(3x+2)(x^3−4x^2+7)$$
$$3x^4−10x^3−8x^2+21x+14$$
### Using FOIL to Multiply Binomials
A shortcut called FOIL is sometimes used to find the product of two binomials. It is called FOIL because we multiply the first terms, the outer terms, the inner terms, and then the last terms of each binomial.
The FOIL method arises out of the distributive property. We are simply multiplying each term of the first binomial by each term of the second binomial, and then combining like terms.
FOIL to simplify expression
Given two binomials, use FOIL to simplify the expression.
1. Multiply the first terms of each binomial.
2. Multiply the outer terms of the binomials.
3. Multiply the inner terms of the binomials.
4. Multiply the last terms of each binomial.
5. Add the products, then combine like terms and simplify.
Example $$\PageIndex{5}$$: Using FOIL to Multiply Binomials
Use FOIL to find the product.
$$(2x−18)(3x+3)$$
Solution
Find the product of the first terms.
Find the product of the outer terms.
Find the product of the inner terms.
Find the product of the last terms.
\begin{align*} &6x^2+6x-54x-54\qquad \text{Add the products}\\ &6x^2+(6x-54x)-54\qquad \text{Combine like terms} \\ &6x^2-48x-54\qquad \qquad \qquad \text{Simplify} \end{align*}
Exercise $$\PageIndex{5}$$
Use FOIL to find the product.
$$(x+7)(3x−5)$$
$$3x^2+16x−35$$
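FOIL is just the two-term case of distributing every term over every term: powers add and coefficients multiply, then like terms combine. A Python sketch checking the Try It product:

```python
def multiply(p, q):
    # Distribute every term of p over every term of q; powers add,
    # coefficients multiply, and like terms combine in the dict.
    out = {}
    for pw1, c1 in p.items():
        for pw2, c2 in q.items():
            out[pw1 + pw2] = out.get(pw1 + pw2, 0) + c1 * c2
    return out

# (x + 7)(3x - 5): First x*3x, Outer x*(-5), Inner 7*3x, Last 7*(-5)
product = multiply({1: 1, 0: 7}, {1: 3, 0: -5})

assert product == {2: 3, 1: 16, 0: -35}   # 3x^2 + 16x - 35
```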
### Perfect Square Trinomials
Certain binomial products have special forms. When a binomial is squared, the result is called a perfect square trinomial. We can find the square by multiplying the binomial by itself. However, there is a special form that each of these perfect square trinomials takes, and memorizing the form makes squaring binomials much easier and faster. Let’s look at a few perfect square trinomials to familiarize ourselves with the form.
$${(x+5)}^2=x^2+10x+25$$
$${(x-3)}^2=x^2-6x+9$$
$${(4x-1)}^2=16x^2-8x+1$$
Notice that the first term of each trinomial is the square of the first term of the binomial and, similarly, the last term of each trinomial is the square of the last term of the binomial. The middle term is double the product of the two terms. Lastly, we see that the first sign of the trinomial is the same as the sign of the binomial.
Perfect Square Trinomials
When a binomial is squared, the result is the first term squared added to double the product of both terms and the last term squared.
${(x+a)}^2=(x+a)(x+a)=x^2+2ax+a^2$
How to: Given a binomial, square it using the formula for perfect square trinomials.
1. Square the first term of the binomial.
2. Square the last term of the binomial.
3. For the middle term of the trinomial, double the product of the two terms.
Example $$\PageIndex{6}$$: Expanding Perfect Squares
Expand $$(3x−8)^2$$.
Solution
Begin by squaring the first term and the last term. For the middle term of the trinomial, double the product of the two terms.
\begin{align*} &{(3x)}^2-2(3x)(8)+{(-8)}^2 \\ &9x^2-48x+64\qquad \qquad \; \; \; \; \text{Simplify} \end{align*}
Exercise $$\PageIndex{6}$$
Expand $${(4x−1)}^2$$.
$$16x^2−8x+1$$
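The perfect-square pattern can be spot-checked the same way. This short pure-Python sketch (not part of the original text) verifies the general identity and the exercise answer at sample integer values:

```python
# Spot-check the perfect-square pattern (x + a)^2 = x^2 + 2ax + a^2.
def perfect_square(x, a):
    return x**2 + 2*a*x + a**2

for a in range(-5, 6):
    for x in range(-5, 6):
        assert (x + a)**2 == perfect_square(x, a)

# Exercise check: (4x - 1)^2 = 16x^2 - 8x + 1
for x in range(-5, 6):
    assert (4*x - 1)**2 == 16*x**2 - 8*x + 1
print("perfect-square pattern verified")
```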
### Difference of Squares
Another special product is called the difference of squares, which occurs when we multiply a binomial by another binomial with the same terms but the opposite sign. Let’s see what happens when we multiply $$(x+1)(x−1)$$ using the FOIL method.
\begin{align*} (x+1)(x-1) &= x^2-x+x-1\\ &= x^2-1 \end{align*}
The middle term drops out, resulting in a difference of squares. Just as we did with the perfect squares, let’s look at a few examples.
$$(x+5)(x-5)=x^2-25$$
$$(x+11)(x-11)=x^2-121$$
$$(2x+3)(2x-3)=4x^2-9$$
Because the sign changes in the second binomial, the outer and inner terms cancel each other out, and we are left only with the square of the first term minus the square of the last term.
Q&A
Is there a special form for the sum of squares?
No. The difference of squares occurs because the opposite signs of the binomials cause the middle terms to disappear. There are no two binomials that multiply to equal a sum of squares.
Difference of Squares
When a binomial is multiplied by a binomial with the same terms separated by the opposite sign, the result is the square of the first term minus the square of the last term.
$(a+b)(a−b)=a^2−b^2$
How to: Given a binomial multiplied by a binomial with the same terms but the opposite sign, find the difference of squares.
1. Square the first term of the binomials.
2. Square the last term of the binomials.
3. Subtract the square of the last term from the square of the first term.
Example $$\PageIndex{7}$$: Multiplying Binomials Resulting in a Difference of Squares
Multiply $$(9x+4)(9x−4)$$.
Solution
Square the first term to get $${(9x)}^2=81x^2$$. Square the last term to get $$4^2=16$$. Subtract the square of the last term from the square of the first term to find the product of $$81x^2−16$$.
Exercise $$\PageIndex{7}$$
Multiply $$(2x+7)(2x−7)$$.
$$4x^2−49$$
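As with the other special products, the difference-of-squares pattern is easy to verify numerically. The pure-Python sketch below (an illustration, not part of the original text) checks the general pattern together with the example and the exercise:

```python
# Spot-check the difference-of-squares pattern (a + b)(a - b) = a^2 - b^2.
def difference_of_squares(a, b):
    return a**2 - b**2

for a in range(-6, 7):
    for b in range(-6, 7):
        assert (a + b) * (a - b) == difference_of_squares(a, b)

# Example and exercise checks
for x in range(-5, 6):
    assert (9*x + 4) * (9*x - 4) == 81*x**2 - 16
    assert (2*x + 7) * (2*x - 7) == 4*x**2 - 49
print("difference-of-squares pattern verified")
```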
## Performing Operations with Polynomials of Several Variables
We have looked at polynomials containing only one variable. However, a polynomial can contain several variables. All of the same rules apply when working with polynomials containing several variables. Consider an example:
\begin{align*} (a+2b)(4a-b-c) &= a(4a-b-c)+2b(4a-b-c) &&\text{Use the distributive property}\\ &= 4a^2-ab-ac+8ab-2b^2-2bc &&\text{Multiply}\\ &= 4a^2+(-ab+8ab)-ac-2b^2-2bc &&\text{Combine like terms}\\ &= 4a^2+7ab-ac-2b^2-2bc &&\text{Simplify} \end{align*}
Example $$\PageIndex{8}$$: Multiplying Polynomials Containing Several Variables
Multiply $$(x+4)(3x−2y+5)$$.
Solution
\begin{align*} &x(3x-2y+5)+4(3x-2y+5)\qquad \text{ Use the distributive property }\\ &3x^2-2xy+5x+12x-8y+20\qquad \text{ Multiply }\\ &3x^2-2xy+(5x+12x)-8y+20\qquad \text{ Combine like terms } \\ &3x^2-2xy+17x-8y+20\qquad \qquad\text{ Simplify } \end{align*}
Exercise $$\PageIndex{8}$$
Multiply $$(3x−1)(2x+7y−9)$$.
$$6x^2+21xy−29x−7y+9$$
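The distributive multiplication shown above can also be carried out mechanically. The sketch below (pure Python, not part of the original text) represents a polynomial in x and y as a map from exponent pairs to coefficients and multiplies term by term, which is exactly the distributive property in code:

```python
from collections import defaultdict

def poly_mul(p, q):
    """Multiply two polynomials in x and y, each stored as a map
    {(i, j): coeff} meaning coeff * x**i * y**j.  Each term of p is
    distributed over each term of q -- the distributive property."""
    out = defaultdict(int)
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            out[(i1 + i2, j1 + j2)] += c1 * c2
    return {e: c for e, c in out.items() if c != 0}   # drop cancelled terms

# (3x - 1)(2x + 7y - 9)
p = {(1, 0): 3, (0, 0): -1}
q = {(1, 0): 2, (0, 1): 7, (0, 0): -9}
product = poly_mul(p, q)
# 6x^2 + 21xy - 29x - 7y + 9
assert product == {(2, 0): 6, (1, 1): 21, (1, 0): -29, (0, 1): -7, (0, 0): 9}
```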
## Key Equations
perfect square trinomial: $${(x+a)}^2=(x+a)(x+a)=x^2+2ax+a^2$$

difference of squares: $$(a+b)(a−b)=a^2−b^2$$
## Key Concepts
• A polynomial is a sum of terms each consisting of a variable raised to a non-negative integer power. The degree is the highest power of the variable that occurs in the polynomial. The leading term is the term containing the highest degree, and the leading coefficient is the coefficient of that term. See Example.
• We can add and subtract polynomials by combining like terms. See Example and Example.
• To multiply polynomials, use the distributive property to multiply each term in the first polynomial by each term in the second. Then add the products. See Example.
• FOIL (First, Outer, Inner, Last) is a shortcut that can be used to multiply binomials. See Example.
• Perfect square trinomials and difference of squares are special products. See Example and Example.
• Follow the same rules to work with polynomials containing several variables. See Example.
http://math.stackexchange.com/questions/781489/linear-ordinary-differential-equation-solution

# linear ordinary differential equation solution
How can I solve this linear ODE: $$y''+\dfrac{4x}{x^2-1}y'+\dfrac{x^2+1}{x^2-1}y=0$$ I tried a few changes of variable but did not get any result. Thanks.
Note that \begin{align} (x^2-1)y''+4x y'+(x^2+1)y &=(x^2-1)y''+2(x^2-1)'y'+(x^2-1)''y+(x^2-1)y\\ &=[(x^2-1)y]''+(x^2-1)y. \end{align} Therefore if we write $\displaystyle y(x)=\frac{f(x)}{x^2-1}$, the equation becomes $$f''+f=0,$$ whose general solution is $f(x)=A\cos x+B\sin x$, so $$y(x)=\frac{A\cos x+B\sin x}{x^2-1}.$$
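As a quick numeric sanity check (not part of the original answer), the pure-Python snippet below uses central finite differences to verify that one member of the resulting family, y(x) = cos x / (x² − 1), satisfies the ODE away from the singular points x = ±1:

```python
import math

def y(x):
    # one member of the family f(x)/(x^2 - 1) with f'' + f = 0
    return math.cos(x) / (x**2 - 1)

def residual(x, h=1e-5):
    # central finite differences for y' and y''
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return d2 + 4*x / (x**2 - 1) * d1 + (x**2 + 1) / (x**2 - 1) * y(x)

for x in (2.0, 3.0, 0.5, -0.3):          # stay away from x = +-1
    assert abs(residual(x)) < 1e-4
print("ODE residual vanishes at the sample points")
```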
http://spmchemistry.onlinetuition.com.my/2012/12/heating-curve.html

# Heating Curve
A: Naphthalene is in the solid state at any temperature below its melting point. The particles are very closely packed together in an orderly manner, the forces between the particles are very strong, and the particles can only vibrate about fixed positions.

A-B: As the naphthalene is heated, heat energy is converted to kinetic energy. The kinetic energy increases, so the molecules vibrate faster about their fixed positions and the temperature increases.

B: Naphthalene is still in the solid state, but its molecules have received enough energy to overcome the forces of attraction between them. Some of the particles that gain enough energy begin to move freely, and the naphthalene starts to melt into a liquid.

B-C: Naphthalene exists in both the solid and liquid states. The temperature remains constant because the heat supplied is used to overcome the forces of attraction that hold the particles together. This constant temperature is called the melting point, and the heat energy absorbed to overcome the intermolecular forces is called the latent heat of fusion.

C: All the naphthalene has completely melted; the solid has turned into liquid.

C-D: Naphthalene is in the liquid state. As the liquid naphthalene is heated, the molecules gain more heat energy and the temperature continues to increase. The particles move faster and faster because their kinetic energy is increasing.

D: Naphthalene is still in the liquid state, but its molecules have received enough energy to overcome the forces of attraction between the particles in the liquid. Some of the molecules start to move freely, and the liquid naphthalene begins to change into a gas.

D-E: Naphthalene exists in both the liquid and gaseous states. The temperature remains unchanged because the heat energy absorbed is used to overcome the intermolecular forces between the particles of the liquid rather than to increase the temperature. This constant temperature is the boiling point.

E: All the naphthalene has turned into gas.

E-F: The gas particles continue to absorb more energy and move faster. The temperature increases as heating continues.
### Recommended Videos
How to Read a Heating Curve
http://aas.org/archives/BAAS/v31n5/aas195/68.htm

AAS 195th Meeting, January 2000
Session 33. SNRs and Other Stellar Ejecta
Oral, Wednesday, January 12, 2000, 2:00-3:30pm, Regency V
## [33.02] The Nucleation and Growth of Dust Grains in Nova Shells
D. A. Joiner (Shodor Education Foundation), C. M. Leung (Rensselaer Polytechnic Institute)
Novae vary in their visible luminosity by orders of magnitude over a period of months to a few years, and often show the occurrence of a large dip in their visible light, associated with the growth of carbon dust grains. The purpose of this thesis is to study the effects of hydrogenation and density inhomogeneities on the efficiency of nucleation of dust grains in novae, using a time dependent kinetic model.
The hydrogenation study shows that a model of grain growth limited by photodissociation of small hydrogenated carbon molecules is effective at reproducing the grain sizes, optical depths, and grain formation temperatures observed to occur in novae. Grain sizes on the order of or less than 1 micron are predicted for weakly hydrogenated models. Optical depth is found to vary widely depending on the initial density, the luminosity of the central source, and the degree of hydrogenation of the building block molecules C2 - C8.
The study of the effect of clumps in the nova outflow shows that a hydrogenation-limited nucleation model of grain growth in novae can, with the inclusion of clumps, explain the occurrence of excess infrared reradiation in models with no observable visible obscuration, provided a density enhancement of ~10, in accord with spectral analysis of Nova Cyg 1992. Such a model does have limitations in reproducing the observed relation between visible obscuration (τ_V) and infrared reradiation (τ_IR = L_IR/L_UV), particularly for optically thick novae.
This work has been supported primarily by NASA grants NAG-3144 and NAG5-3339, and in part by the Air Force Office of Scientific Research and the Shodor Education Foundation (National Computational Science Alliance, ACI-9619019 Subaward 769).
https://brilliant.org/problems/arc-length-in-a-rectilinear-metric/

# Arc Length In A Rectilinear Metric?
Calculus Level 4
The Minkowski plane is defined by a rectilinear metric as follows:
For points $$(a, b)$$ and $$(m, n)$$, the distance between these points is
$d((a,b);(m,n))=|m-a|+|n-b|$
What is the length of the arc of $$f(x)=x^2$$ from $$x=-1$$ to $$x=1$$ in this metric?
Notation: $$| \cdot |$$ denotes the absolute value function.
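One way to explore the problem numerically (a pure-Python sketch, not part of the problem statement) is to approximate the curve by a fine polygonal path and sum the taxicab lengths of its segments; as the partition is refined, this converges to $$\int_{-1}^{1} \left(1+|f'(x)|\right)\,dx.$$

```python
# Approximate the taxicab (L1) length of f(x) = x^2 on [-1, 1] by
# summing |dx| + |dy| over a fine partition; as the partition is
# refined this tends to the integral of 1 + |f'(x)|.
def taxicab_arc_length(f, a, b, n=100000):
    xs = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(abs(x1 - x0) + abs(f(x1) - f(x0))
               for x0, x1 in zip(xs, xs[1:]))

approx = taxicab_arc_length(lambda x: x * x, -1.0, 1.0)
print(approx)   # close to the exact value of the integral above
```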
https://www.flexiprep.com/Worksheets/Maths/Class-8/Grade-8-the-Division-of-Algebraic-Expressions.html

Grade 8: The Division of Algebraic Expressions Worksheet (For CBSE, ICSE, IAS, NET, NRA 2022)
(a)
(a)
(a)
(a)
(a)
(6) Use the Short Division Method to Evaluate. Then Write Out the Quotient and the Remainder in the Boxes Given
(a)
(7) Find the Value of n if is factor of
(8) Find the value of k so that be a factor of
(a)
(10) in below Question, Find the Quotient by Factorizing the Numerator
(a)
• To get the answer using the long division method, first arrange the terms in descending order of their indices.
• Now, divide by to get the first term of quotient.
• Multiply the divisor by
• Bring Down the next term (-28)
• Divide by to get second term of quotient,
• Multiply the divisor by 7
• Therefore, Quotient and Remainder
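The long-division procedure described in these steps can be written out in code. The sketch below is pure Python with a made-up example polynomial (the worksheet's own expressions did not survive extraction), dividing x² + 5x − 28 by x − 3:

```python
def polydiv(num, den):
    """Polynomial long division on coefficient lists (highest power
    first).  Returns (quotient, remainder) as coefficient lists."""
    num = list(num)
    quotient = []
    while len(num) >= len(den):
        coeff = num[0] / den[0]              # divide the leading terms
        quotient.append(coeff)
        for i, d in enumerate(den):          # subtract coeff * divisor
            num[i] -= coeff * d
        num.pop(0)                           # leading term is now zero
    return quotient, num

# (x^2 + 5x - 28) / (x - 3)
quot, rem = polydiv([1, 5, -28], [1, -3])
print(quot, rem)   # [1.0, 8.0] [-4.0]  ->  quotient x + 8, remainder -4
```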
• To get the answer using the long division method, first arrange the terms in descending order of their indices.
• Divide by to get first term of quotient.
• Now, multiply the divisor by 3x,
• Bring Down the next term (6)
• Divide by to get second term of quotient.
• Now, multiply the divisor by 2,
• Therefore, Quotient and Remainder
• To get the answer using the long division method, first arrange the terms in descending order of their indices.
• Divide by to get first term of quotient.
• Now, multiply the divisor by ,
• Bring Down the next term
• Divide by to get second term of quotient.
• Now, multiply the divisor by ,
• Divide by to get third term of quotient.
• Now, multiply the divisor by ,
• So, Quotient and Remainder
• To get the answer using the long division method, first arrange the terms in descending order of their indices.
• Divide by to get first term of quotient.
• Now, multiply the divisor by ,
• Divide by to get Second term of quotient.
• Now, multiply the divisor by ,
• Now bring down the next two terms, that is:
• Hence,
• Divide by to get third term of quotient.
• Now, multiply the divisor by ,
• So, Quotient and Remainder
• To get the answer using the long division method, first arrange the terms in descending order of their indices.
• Divide by to get first term of quotient.
• Now, multiply the divisor by ,
• Divide by to get Second term of quotient.
• Now, multiply the divisor by ,
• So, Quotient and Remainder
• Arrange the terms in descending order of their indices:
• Now factorize considering divisor as a factor
• Therefore, Quotient and Remainder
• If is a factor of the given equation, then
• substituting the corresponding value of the variable makes the value of the equation zero.
• If is a factor of the given equation, then
• substituting the corresponding value of the variable makes the value of the equation zero.
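The factor condition used in these steps can be sketched in code: (x − a) is a factor of p(x) exactly when p(a) = 0. The polynomial below is hypothetical, since the worksheet's originals were lost in extraction:

```python
# Factor theorem: (x - a) is a factor of p(x) exactly when p(a) = 0.
# Hypothetical example: find k so that (x - 2) is a factor of
# p(x) = x^2 + k*x - 10.  Then p(2) = 4 + 2k - 10 = 0, so k = 3.
def solve_k(a, constant=-10):
    # from a^2 + k*a + constant = 0
    return -(a**2 + constant) / a

k = solve_k(2)
assert k == 3.0
assert 2**2 + k*2 - 10 == 0     # p(2) is indeed zero
```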
• Factorize the numerator,
• Now For,
• We know identities that
• Compare this identities with our equation
• So,
• So,
• From Equation-1 and equation-2;
• So, is factor of
• So, Quotient is
• Factorize the numerator,
• Now, to factorize the given equation, we split the middle term into two parts whose product equals the product of the coefficient of the squared term and the constant term.
• Hence
• Multiplication of coefficient
• Now,
• So, is factor of numerator
• So,
• Therefore, Quotient is
https://blog.seonwoolee.com/how-i-created-a-race-condition-in-latex/

LaTeX is a powerful typesetting language. It's generally used for documents, particularly those with complicated mathematical symbols and expressions, as LaTeX can render them with ease. However, it is capable of producing more than just documents; you can create PowerPoint-style presentations as well.
# Presentations in LaTeX: Beamer
The beamer package can create slides for presentations in PDF form. If you have animation steps in a slide, such as revealing one bullet at a time, then beamer will output a PDF with multiple pages per slide, with each slide adding another bullet. This is known as presentation mode.
Additionally, beamer can render the presentation in handout mode, where the animation steps are removed, and there's only one PDF page per slide. This is generally intended for handing out copies of the presentation. While I don't generally hand out copies of my presentations, I do use the handout mode because it is much faster to scroll through on the computer than presentation mode.
# The Race Condition
The standard compiler of choice for rendering PDF files of LaTeX code is pdflatex. One of the disadvantages of writing in LaTeX is that you do not get to see the changes to your LaTeX code in real time; the document must be recompiled after any change.
To solve this problem, there is a perl script called latexmk which has a continuous mode: it automatically recompiles the document whenever it detects a change to the file (or set of files that affects the final document).
I wanted to compile both presentation and handout modes of my slides simultaneously without having two copies of the same slides. So I created three separate files.
Presentation.tex:

```latex
\documentclass[aspectratio=1610,14pt,t]{beamer}
\setlength{\leftmargini}{14pt}
\input{Slides.tex}
```
Handout.tex:

```latex
\documentclass[aspectratio=1610,14pt,handout,t]{beamer}
\setlength{\leftmargini}{14pt}
\input{Slides.tex}
```
Slides.tex:

```latex
<slides content>
```
This way I can edit just one file, Slides.tex, while compiling Presentation.tex in presentation mode and Handout.tex in handout mode into two separate files, Presentation.pdf and Handout.pdf.
However, if you do this with two instances of latexmk, one for Presentation.tex and one for Handout.tex, you create a race condition with the aux files that get generated.
LaTeX was designed in the early days of computing when far less RAM was available than today. Hence, during the compilation process, instead of holding temporary information in RAM, many temporary files (including aux files) get written and then read. If it were redesigned today, these aux files wouldn't be necessary.
latexmk will not only recompile because of a change in the tex files, but also the aux files, which is problematic.
Suppose you have the three files set up as above, and two instances of latexmk running in continuous compilation mode: one for Presentation.tex and one for Handout.tex. Once you make a change to Slides.tex, both instances of latexmk will start recompiling. Let's suppose without loss of generality that the latexmk instance compiling Presentation.tex finishes first. It writes a change to Slides.aux. Unfortunately, now the latexmk instance for Handout.tex detects this change in Slides.aux, and then recompiles and writes a change to Slides.aux. This then causes the latexmk instance for Presentation.tex to recompile, and so on.
# A Solution
One way to solve this is just create two separate copies of Slides.tex. However, this would make editing a pain.
A better solution is the use of symlinks. Symlinks are essentially file pointers; they redirect any program that wants to write to the symlink to actually write to the file the symlink points to.
In reality, my Slides.tex is

```latex
% \IfSubStr is provided by the xstring package
\usepackage{xstring}

% Change settings based on handout vs not handout mode
\makeatletter
\IfSubStr{\@classoptionslist}{handout}
% Handout mode
{
\newcommand{\mysuf}{-handout}
}
% Not Handout mode
{
\newcommand{\mysuf}{}
}
\makeatother

\begin{document}
\include{Title\mysuf}
\include{Introduction\mysuf}
....
\include{Conclusion\mysuf}
\end{document}
```
This code defines the command \mysuf as -handout if the document being compiled is in handout mode, and as an empty string if it is not. Then instead of putting all my content in Slides.tex, I separate out the content into multiple files such as Title.tex, Introduction.tex, etc. And crucially, I create the symlinks Title-handout.tex, Introduction-handout.tex, etc. that all point to their non-handout counterparts.
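A small script can generate those symlinks for you. The sketch below is Python rather than shell, and the chapter names are illustrative; on a POSIX system `os.symlink` has the same effect as `ln -s`:

```python
import os
import tempfile

def make_handout_links(chapters, directory):
    """Create Name-handout.tex -> Name.tex symlinks for each chapter."""
    for name in chapters:
        link = os.path.join(directory, f"{name}-handout.tex")
        if not os.path.lexists(link):        # don't clobber an existing link
            os.symlink(f"{name}.tex", link)  # like `ln -s Name.tex Name-handout.tex`

# demo in a scratch directory
demo = tempfile.mkdtemp()
make_handout_links(["Title", "Introduction", "Conclusion"], demo)
print(sorted(os.listdir(demo)))
# ['Conclusion-handout.tex', 'Introduction-handout.tex', 'Title-handout.tex']
```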
By doing so, the latexmk instances stop writing over each other; the latexmk instance compiling Presentation.tex generates Title.aux, Introduction.aux, etc. while the latexmk instance compiling Handout.tex generates Title-handout.aux, Introduction-handout.aux, etc.
https://kullabs.com/classes/subjects/units/lessons/notes/note-detail/493

## Note on Renewable Sources of Energy
### Solar energy:
The main source of energy is the thermonuclear fusion reaction that continuously takes place in the sun. The sun is a huge mass of hydrogen gas and temperature in the sun is very high. The sun is considered as a big thermonuclear furnace where hydrogen atoms are continuously being fused into helium. In this process, some mass (0.7%) is lost during the reaction, which in turn liberates an enormous amount of energy. Thus, the sun, which gives us heat and light energy, derives its energy from the nuclear fusion reaction. This process is going on inside it, all the time.
#### Condition required in sun for nuclear fusion:
• In the sun, there is a sufficient amount of hydrogen gas.
• The presence of the necessary temperature in the sun for the formation of free protons.
• The presence of the necessary pressure for the combination of free protons.
### Biogas:
Biogas is a mixture of methane, carbon dioxide, hydrogen and hydrogen sulphide. However, the major constituent of biogas is methane.
Biogas is produced by the anaerobic decomposition of animal wastes (like dung) or plant wastes in the presence of water. Anaerobic bacteria decompose these waste materials in the presence of water and convert them into methane and some other gases. In this way, biogas is produced in biogas plants.
#### Advantages of biogas:
• Biogas burns without smoke and hence does not cause air pollution.
• Biogas produces more heat while burning.
• Biogas is cheaper and can be produced easily.
• Biogas can be used to generate electricity.
### Tidal energy:
The energy obtained from the tides of the sea is called tidal energy.

The method of generating electricity from tides is given below:

Big dams are constructed at the seashore. When tides surge up in the sea, water crosses the dam and is trapped behind it. When the water moves back towards the sea, the trapped water is released through pipes and used to rotate turbines.
### Geothermal energy:
Geothermal (geo = earth, thermal = heat) energy is the heat energy obtained from rocks present inside the earth.
Geothermal energy is obtained by following methods:
• The extremely hot rocks present below the surface of the earth heat the underground water and turn it into steam. In such places, two holes are drilled into the earth and metal pipes are put into them. Cold water is pumped in through one of the pipes; it turns into steam due to the heat of the rocks, and the steam comes out through the other pipe. The steam thus obtained is used to boil water and to produce electricity.
• In some places, where large cracks are present in underground rocks, steam and extremely hot water from the hot spots come out of the ground on their own. The steam and hot water thus obtained are used to rotate turbines for generating electricity, and also for cooking food, steaming, bathing, etc.
#### Energy crisis:
The future scarcity of energy due to industrialization, urbanization and overpopulation is called an energy crisis.
#### Measures to solve energy crisis:
• Less use of non-renewable source of energy.
• Use of alternative sources of energy.
• Public awareness.
• Sources of energy that can never be exhausted, or that are replaced quickly once used, are called renewable sources of energy. For example: hydropower or hydroelectricity, biogas, wind energy, solar energy, tidal energy, geothermal energy, etc.
• The future scarcity of energy due to industrialization, urbanization and overpopulation is called energy crisis.
• Less use of non-renewable source of energy, public awareness, use of alternative sources of energy, etc. are the measures to solve energy crisis.
### Very Short Questions
The sources of energy that are continuously replaced by nature within a short period of time are called renewable sources of energy. They are inexhaustible as they are constantly replenished, for example wood, wind, solar and hydropower.
The energy, mostly heat and light, produced by the sun is called solar energy.
The main source of our energy is the sun, because we get all the energy required for our survival from it. Our planet is warm because of the heat we receive from the sun, and we can see because of the light coming from the sun. We get food because of the sun, as plants use light energy from the sun to prepare their food, and the whole biosphere is sustained by organisms eating plants or other animals that eat plants. If there were no sun, we could not imagine life as it is on the Earth.
The latitude and longitude of the place, its height above sea level, the weather and the time of day are the factors that affect the solar energy received at a particular place.
There are many ways that nature uses solar energy:
1. Heating of the atmosphere
2. Continuation of water cycle
3. Flowing of wind
4. Preparation of food by plants
Nuclear energy is produced when an atom undergoes a nuclear reaction. These reactions can be of two types: nuclear fusion, where two atoms of the same or different light elements combine to form a single heavier atom, or nuclear fission, where a single heavy atom is broken into two or more lighter atoms of different elements. On Earth, nuclear power plants produce nuclear energy using controlled nuclear fission. We have not been able to use nuclear fusion to generate usable energy on Earth; sustained fusion operates only in stars, and uncontrolled fusion is used in bombs.
Deuterium is an isotope of the hydrogen atom. Normal hydrogen has one proton and no neutron, but deuterium, also called heavy hydrogen, has one proton and one neutron.
The necessary conditions required for a nuclear reaction to take place are:
1. There must be an abundant amount of combining atoms to start the process of fusion.
2. The temperature must be high enough for the combining atoms to collide with each other.
3. The pressure must be right for the atoms to come together for the reaction.
In the sun there is a huge number of hydrogen atoms, and the temperature and pressure are high enough to make them combine to form helium nuclei. This reaction is a nuclear fusion reaction and it releases a huge amount of energy. Since the number of combining hydrogen atoms is high, the energy released is very high. Because the energy released in the sun is due to the combining of hydrogen atoms, the source of the sun's energy is nuclear fusion.
This equation was given by Albert Einstein as part of his theory of relativity. It gives the relation between energy and mass and is used to calculate the energy released when a certain mass is converted into energy.
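As a rough numerical illustration of this relation (the 1-gram mass below is an arbitrary example, not a value from the text):

```python
# Energy released when mass m is fully converted to energy: E = m * c**2.
# The mass value is an illustrative assumption (1 gram).
c = 2.998e8          # speed of light in m/s
m = 0.001            # mass in kg (1 gram)
E = m * c**2         # energy in joules
print(f"{E:.3e} J")  # roughly 9.0e13 J
```

Even one gram of converted mass corresponds to an enormous amount of energy, which is why nuclear reactions release so much more energy than chemical ones.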
When the efficiency of a vehicle engine is increased, it will provide the same output with lower fuel consumption, i.e. with less energy input we get more output than we do now. Lower consumption of energy helps in energy conservation.
The most useful source of renewable energy is hydropower, because we have many rivers with a high volume of water flowing down mountainous terrain. These conditions help to produce large quantities of electricity from hydropower. (Nepal has a potential of 83 GW of electricity.)
It is hard to develop the hydropower resource because we lack capital (money) and skilled manpower.
Energy sources that are different from conventional energy sources, and are more durable and productive than them, are called 'alternative sources of energy'. For example: hydropower, solar energy, wind energy, biofuel, tidal energy etc.
The core of the earth is very hot. In places where the earth's crust is very thin, heat from the core can escape and warm the crust, and we can use that heat energy to generate electricity, to heat or cool our homes, and much more. This heating of the earth's crust can also occur in places with many active volcanoes, where hot magma comes close to the top of the crust and heats it. If there are water bodies present in these areas, they can serve as hot springs and geysers, as they are continuously heated by the earth's core.
The raw materials for biogas plants are dead plants and the waste that comes from the farm. Farmers can use the waste products (which they used to throw away) from their farm to produce biogas, and the slurry left over from the biogas plant, which is a great fertilizer (as it is easily absorbed by the ground and has most of the nutrients required by plants), can be used on their farm.
The main reasons we are going to face an energy crisis are:
1. The population of the Earth is increasing ever faster. This increases the use of energy, as there are more people to provide for.
2. Our major source of energy is fossil fuel, which is a non-renewable resource whose stock will soon be exhausted.
3. We do not yet have a proper alternative source of energy that can replace our conventional sources.
The only way to avert the energy crisis is to develop a proper alternative energy source that can replace fossil fuels. The alternative source must be renewable and must have the capacity to supply energy according to demand.
• ### Petroleum products are obtained from ______.
nuclear energy
Fossil fuel
biomass
geo thermal energy
• ### The major content of biogas is ______.
oxygen
methane
carbon dioxide
water
• ### What kind of energy does a wind turbine use?
kinetic energy
potential energy
mechanical energy
chemical energy
• ### Which of the following is not a condition required in the sun for nuclear fusion?
sufficient water
temperature
presence of hydrogen
pressure
• ### Which of the following is the cheapest form of energy?
geo thermal energy
bio gas
tidal
coal
• ### In what form can solar energy be used?
Thermal energy
Mechanical Energy
Electrical energy
All of above
• ### Which type of dryer can be used to dry fruits and vegetables using renewable energy?
Oil furnace
Solar dryer
Coal furnace
Wood-based furnace
• ### What kind of energy does a wind turbine use?
Thermal energy
Kinetic energy
Potential energy
Chemical Energy
• ### Gasification of biomass is a ______.
biological conversion process
biochemical conversion process
chemical conversion process
thermochemical conversion process
##### Manushi
Define thermonuclear fusion.
##### Rishab
Why is geothermal energy called renewable source of energy?
##### Aakash
Write the molecular formula of the nuclear fusion reaction of the sun.
http://tex.stackexchange.com/questions/113681/uppercase-sections-and-subsections-on-toc | # Uppercase sections and subsections on ToC
How can I insert \MakeUppercase inside the table of contents for sections and subsection?
\renewcommand{\l@section}[2]%
{\@dottedtocline{1}{.5em}{1.3em}%
{{\bfseries\selectfont#1}}{#2}}
Are you using tocloft? Also, what \documentclass are you using? Your redefinition of \l@section doesn't really help much since it doesn't provide any context. – Werner May 11 at 5:58
Welcome to TeX.SX! Please add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – Marco Daniel May 11 at 7:54
This is a fairly general approach for changing the appearance of the entries in the table of contents.
The patch with \xpatchcmd* is just a way for avoiding copying the definition from latex.ltx and modifying it. The two places where #7 appears in \addtocontents are replaced by \@nameuse{format#1}{#7} so we can define \formatsection and so on to do what we want to the title in the TOC.
\documentclass{article}
%%% Patching the kernel \@sect command
\usepackage{regexpatch}
\makeatletter
\xpatchcmd*{\@sect}{\fi#7}{\fi\@nameuse{format#1}{#7}}{}{}
%%% for sections and subsections we want uppercase
\protected\def\formatsection{\MakeUppercase}
\protected\def\formatsubsection{\MakeUppercase}
%%% the other titles are left unchanged
\let\formatsubsubsection\@firstofone
\let\formatparagraph\@firstofone
\let\formatsubparagraph\@firstofone
%%% the following is necessary only if hyperref is used
\AtBeginDocument{%
\pdfstringdefDisableCommands{%
\let\formatsection\@firstofone
\let\formatsubsection\@firstofone
}%
}
\makeatother
\usepackage{hyperref}
\setcounter{secnumdepth}{3}
\begin{document}
\tableofcontents
\section{This is a section}
\subsection{This is a subsection}
\subsubsection{This is a subsubsection}
\end{document}
This works great, but how would you do it for \section*? – Paulius K. May 26 at 21:10
@PauliusK. \section* doesn't go to the TOC by default. Just add \MakeUppercase in the manually added \addcontentsline. – egreg May 26 at 21:15
Well this is embarrassing... Nevertheless, your answer is the cleanest solution I've found for uppercasing sections in the ToC. Doing it is way too difficult in LaTeX. – Paulius K. May 26 at 21:26
@PauliusK. Maybe classes such as KOMA-Script ones (scrartcl, scrreprt or scrbook) or memoir provide "native" methods (although I'm not sure). Titles in all capitals are heavy and, in my opinion, should be avoided. – egreg May 26 at 21:29
http://physics.stackexchange.com/questions/12031/capacitance-between-widely-separated-parallel-plates?answertab=votes | # capacitance between widely-separated parallel plates
There's a caveat, which is often ignored, to the "easy" equation for parallel plate capacitors C = epsilon * A / d, namely that d must be much smaller than the dimensions of the parallel plate.
Is there an equation that works for large d? I tried finding one and could not. (These two papers talk about fringing fields for disc-shape plates but don't seem to have a valid equation for d -> infinity: http://www.santarosa.edu/~yataiiya/UNDER_GRAD_RESEARCH/Fringe%20Field%20of%20Parallel%20Plate%20Capacitor.pdf and http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.167.3361&rep=rep1&type=pdf)
My hand-waving intuition is that as d -> infinity, C should decrease to a constant value (which is the case for two spheres separated by a very large distance, where C = 4*pi*e0/(1/R1 + 1/R2) ), because at large distances from each plate, the electric field goes as 1/R, so the voltage line integral from one plate to the other will be a fixed constant proportional to charge Q.
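For what it's worth, the distant-spheres limit quoted above gives a concrete number (the radii below are arbitrary example values, not from the question):

```python
import math

# Large-separation capacitance of two spheres, as quoted in the question:
# C = 4*pi*eps0 / (1/R1 + 1/R2).  Radii are assumed example values.
eps0 = 8.854e-12          # vacuum permittivity, F/m
R1 = R2 = 0.1             # sphere radii in metres (illustrative choice)
C = 4 * math.pi * eps0 / (1 / R1 + 1 / R2)
print(C)                  # a few picofarads, independent of the separation d
```

This illustrates the intuition in the question: for widely separated conductors the capacitance approaches a constant set by the geometry of each conductor alone.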
https://bjlkeng.github.io/posts/optimal-betting-and-the-kelly-criterion/ | # Optimal Betting Strategies and The Kelly Criterion
My last post was about some common mistakes when betting or gambling, even with a basic understanding of probability. This post is going to talk about the other side: optimal betting strategies using some very interesting results from some very famous mathematicians in the 50s and 60s. I'll spend a bit of time introducing some new concepts (at least to me), setting up the problem and digging into some of the math. We'll be looking at it from the lens of our simplest probability problem: the coin flip. A note: I will not be covering the part that shows you how to make a fortune -- that's an exercise best left to the reader.
#### Background
##### History
There is an incredibly fascinating history surrounding the mathematics of gambling and optimal betting strategies. The optimal betting strategy, more commonly known as the Kelly Criterion, was developed in the 50s by J. L. Kelly , a scientist working at Bell Labs on data compression schemes at the time. In 1956, he made an ingenious connection between his colleague's (Shannon) work on information theory, gambling, and a television game show publishing his new findings in a paper titled A New Interpretation of Information Rate (whose original title was Information Theory and Gambling).
The paper remained unnoticed until the 1960s when an MIT student named Ed Thorp told Shannon about his card-counting scheme to beat blackjack. Kelly's paper was referred to him, and Thorp started using it to amass a small fortune using Kelly's optimal betting strategy along with his card-counting system. Thorp and his colleagues later went on to use the Kelly Criterion in other varied gambling applications such as horse racing, sports betting, and even the stock market. Thorp's hedge fund outperformed many of his peers and it was this success that made Wall Street take notice of the Kelly Criterion. There is a great book called Fortune's Formula 1 that details the stories and adventures surrounding these brilliant minds.
##### Surely, Almost Surely
In probability theory, there are two terms that distinguish very similar conditions: "sure" and "almost sure". If an event is sure, then it always happens. That is, it is not possible for any other outcome to occur. If an event is almost sure then it occurs with probability 1. That is, theoretically there might be an outcome not belonging to this event that can occur, but the probability is so small that it's smaller than any fixed positive probability, and therefore must be 0. This is kind of abstract, so let's take a look at an example (from Wikipedia).
Imagine we have a unit square where we're randomly throwing point-sized darts that will land inside the square with a uniform distribution. For the entire square (light blue), it's easy to see that it makes up the entire sample space, so we would say that the dart will surely land within the unit square because there is no other possible outcome.
Further, the probability of landing in any given region is the ratio of its area to the area of the total unit square, which simplifies to just the area of the region. For example, taking the top left corner (dark blue), which is 0.5 units x 0.5 units, we could conclude that $P(\text{dart lands in dark blue region}) = (0.5)(0.5) = 0.25$.
Now here's the interesting part, notice that there is a small red dot in the upper left corner. Imagine this is just a single point at the upper left corner on this unit square. What is the probability that the dart lands on the red dot? Since the red dot has an area of $0$, $P(\text{dart lands on red dot}) = 0$. So we could say that the dart almost surely does not land on the red dot. That is, theoretically it could, but the probability of doing so is $0$. The same argument can be made for every point in the square.
The dart actually does land on a single point of the square though, so even though the probability of landing on that point is $0$, it still does occur. For these situations, it's not sure that we won't hit that specific point but it's almost sure. A subtle difference but quite important one.
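A small simulation makes the area-as-probability idea concrete (the sample count and seed are arbitrary choices):

```python
import random

# Monte Carlo check of the dart example: uniform points in the unit square,
# counting how often they land in the top-left 0.5 x 0.5 corner (expected 0.25).
random.seed(0)            # fixed seed so the run is repeatable
n = 100_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()
    if x < 0.5 and y > 0.5:       # top-left quadrant
        hits += 1
estimate = hits / n
print(estimate)           # close to 0.25
```

The same simulation also shows why hitting any single exact point has probability 0: no finite sample will ever land on a prespecified point, even though every dart lands on *some* point.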
#### Optimal Betting 2
##### Optimal Betting with Coin Tossing
Imagine playing a game with an infinitely wealthy opponent who will always take an even bet made on repeated independent tosses of a biased coin. Further, let the probability of winning be $p > \frac{1}{2}$ and losing be $q = 1 - p$ 3, so we have a positive overall expected value for the game 4. You start with $X_0$ of initial capital. Question: How much should we bet each time?
Example 1:
This can be made a bit more concrete by putting some numbers to it. Let's say our coin lands on heads with a chance of $p=0.53$, which means tails must be $q=1-p=0.47$. Our initial bankroll is $X_0=100,000$. How much of this $100,000$ should we bet on the first play?
Let's formalize the problem using some mathematics. Denote our remaining capital after the k'th toss as $X_k$ and on the k'th toss we can bet $0 \leq B_k \leq X_{k-1}$. Let's use a variable $T_k = 1$ if the k'th trial is a win, and $T_k=-1$ for a loss. Then for the n'th toss, we have:
\begin{align*} X_n &= X_{n-1} + T_nB_n \\ &= X_{n-2} + T_{n-1}B_{n-1} + T_nB_n \\ &= \ldots \\ &= X_0 + \sum_{k=1}^{n} T_kB_k \tag{1} \end{align*}
One possible strategy we could use is to maximize the expected value of $X_n$. Let's take a look at that:
\begin{align*} E(X_n) &= E(X_0 + \sum_{k=1}^{n} T_kB_k) \\ &= X_0 + \sum_{k=1}^{n} E(B_kT_k) \\ &= X_0 + \sum_{k=1}^{n} (p - q) E(B_k) \tag{2} \end{align*}
Since $p - q > 0$ this will have a positive expected payoff. To maximize $E(X_n)$, we should maximize $E(B_k)$ (this is the only variable we can play with), which translates to betting our entire bankroll at each toss. For example, on the first toss bet $B_1 = X_0$, on the second toss (if we won the first one) bet $B_2 = 2X_0$ and so on. It doesn't take a mathematician to know that is not a good strategy. Why? Ruin is almost sure (ruin occurs when $X_k = 0$ on the k'th toss).
If we're betting our entire bankroll, then we only need one loss to lose all our money. The probability of ruin is then $1 - p^n$ for $n$ tosses (every outcome except winning on every toss). Taking the limit as $n$ approaches infinity:
\begin{equation*} lim_{n \rightarrow \infty} (1 - p^n) = 1 \tag{3} \end{equation*}
So we can see that this aggressive strategy is almost surely 5 going to result in ruin.
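This limit is easy to see numerically, using the $p=0.53$ of Example 1:

```python
# Probability of ruin for the "bet everything" strategy after n tosses is
# 1 - p**n (every outcome except winning all n tosses).  p = 0.53 as in Example 1.
p = 0.53
for n in (10, 100, 1000):
    print(n, 1 - p**n)   # climbs rapidly toward 1
ruin_1000 = 1 - p**1000
```

Even with a favorable coin, the all-in bettor is essentially certain to be ruined within a few dozen tosses.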
Another strategy might be to try and minimize ruin. You can probably already intuit that this strategy involves making the minimum bet. From Equation 2, this is not desirable because it will also minimize our expected return. This suggests that we want a strategy that is in between the minimum bet and betting everything (duh!). The result is the Kelly Criterion.
##### The Kelly Criterion
Since our maximum bet is limited by our current bankroll, it seems plausible that the optimal strategy will always bet relative to our current bankroll. To simplify the math, we assume that the money is infinitely divisible. However, it should be noted that this limitation doesn't really matter too much when our capital is relatively large compared to the minimum divisible unit (think millions vs. cents).
If on every toss, we bet a fraction of our bankroll (known as "fixed fraction" betting), $B_k = fX_{k-1}$, where $0 \leq f \leq 1$, we can derive an equation for our bankroll after $S$ successes and $F$ failures in $S+F=n$ trials:
\begin{equation*} X_n = X_0(1+f)^S(1-f)^F \tag{4} \end{equation*}
Notice that we can't technically ever get to $0$ but practically there is a minimum bet and if we go below it, we are basically ruined. We can just re-interpret ruin in this manner. That is, ruin for a certain strategy is when we will almost surely go below some small positive number $\epsilon$ as the number of trials $n$ grows i.e., $lim_{n\rightarrow \infty}P(X_n \leq \epsilon) = 1$.
Now let's setup what we're trying to maximize. We saw that trying to maximize the expected return leads us to almost surely ruin. Instead, Kelly chose to maximize the expected exponential growth rate. Let's see what that means by first looking at the ratio of current bankroll to our starting bankroll:
\begin{align*} \frac{X_n}{X_0} &= e^{\log(\frac{X_n}{X_0})} \\ &= e^{n \log(\frac{X_n}{X_0})^{1/n}} \\ &= e^{n G(f)} \tag{5} \end{align*}
So $G(f)$ represents the exponent (base $e$) on how fast our bankroll is growing. Substituting Equation 4 into $G(f)$:
\begin{align*} G(f) &= \log(\frac{X_n}{X_0})^{1/n} \\ &= \log((1+f)^S(1-f)^F)^{1/n} \\ &= \frac{1}{n}\log((1+f)^S(1-f)^F) \\ &= \frac{S}{n}\log(1+f) + \frac{F}{n}\log(1-f) \tag{6} \end{align*}
Now since $G(f)$ is a random variable, we want to maximize the expected value of it (which we denote as $g(f)$):
\begin{align*} g(f) &= E[G(f)] \\ &= E[\frac{S}{n}\log(1+f) + \frac{F}{n}\log(1-f)] \\ &= E[\frac{S}{n}]\log(1+f) + E[\frac{F}{n}]\log(1-f) \\ &= p\log(1+f) + q\log(1-f) \tag{7} \end{align*}
The last line simplifies because the expected proportion of successes and failures is just their probabilities 6. Now all we have to do is a simple exercise in calculus to find the optimal value $f^*$ that maximizes $g(f)$:
\begin{align*} g'(f) = \frac{p}{1+f} - \frac{q}{1-f} &= 0 \\ \frac{p(1-f) - q(1+f)}{(1+f)(1-f)} &= 0 \\ \frac{(p-q) - (p+q)f}{(1+f)(1-f)} &= 0 \\ (p-q) - f &= 0 && \text{since } p+q=1 \\ f = f^* &= p - q \tag{8} \end{align*}
So we now have our optimal betting criterion (for even bets), fractional bets with $f^*=p-q$.
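A quick numerical sanity check of this result, using a plain grid search with the $p=0.53$ of the examples:

```python
import math

# Numerical check of the Kelly result: maximize
# g(f) = p*ln(1+f) + q*ln(1-f) over a grid and compare the argmax to p - q.
p, q = 0.53, 0.47

def g(f):
    return p * math.log(1 + f) + q * math.log(1 - f)

fs = [i / 100_000 for i in range(1, 100_000)]   # f in (0, 1)
f_best = max(fs, key=g)
print(f_best)   # very close to p - q = 0.06
```

The argmax lands on $p - q$ to within the grid spacing, matching the calculus above.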
Another interesting behavior of varying our fractional bets can be gleaned by graphing $G(f)$ 7:
We can see that our $f^*$ maximizes the growth rate. However, there is a point $f_c$ where our growth rate becomes negative. This implies that if we over-bet $f > f_c$, we will almost surely reach ruin (because we have a negative growth rate). The following (summarized) theorem from Thorp's paper states this more precisely:
Theorem 1
1. If $g(f) > 0$, then $lim_{n\rightarrow \infty}X_n = \infty$ almost surely.
2. If $g(f) < 0$, then $lim_{n\rightarrow \infty}X_n = 0$ almost surely.
3. Given a strategy $\Theta^*$ and any other "essentially different strategy" $\Theta$, we have $lim_{n\rightarrow \infty}\frac{X_n(\Theta^*)}{X_n(\Theta)} = \infty$ almost surely.
From this theorem, we can see that if we pick a fraction such that $g(f) > 0$, then we'll almost surely tend towards an increasing bankroll. Conversely, if we pick a fraction $g(f)<0$, then we will almost surely result in ruin. This matches up with our intuition that over-betting is counter-productive.
Example 2:
(Continued from Example 1) Suppose we have our even-bet coin toss game and the probability of heads is $p=0.53$ and probability of tails is $q=0.47$. Our initial bankroll is $100,000$ (big enough that the minimum bet isn't really significant). Applying our optimal betting criteria, on our first play we should bet $f=p-q=0.53-0.47=0.06$ or $6\%$ of our bankroll, translating to $100,000 * 6\% = 6,000$. Assuming we win the first play, we should bet $106,000 * 6\% = 6,360$ and so on.
If we bet less than $6\%$, we will still be increasing our bankroll but not at the optimal rate. We can also bet more than $6\%$ up to the theoretical point $f_c$ such that $g(f_c)=0$ with the same result. We can numerically determine this turning point, which in this case is $f_c \approx 0.11973$. So betting more than roughly 11.9% will almost surely cause us ruin.
We can also compute the expected exponential growth rate using our optimal $f^*= 0.06$:
\begin{align*} g(f^*) = g(0.06) &= 0.53\log(1+0.06) + 0.47\log(1-0.06) \\ &\approx 0.001801 \tag{9} \end{align*}
So after $n$ plays, a player can expect his bankroll to be $e^{0.001801n}$ times larger. A doubling time can be computed by setting $e^{0.001801n}=2$, resulting in $n\approx 385$ plays.
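The numbers in Example 2 can be reproduced with a short script (the bisection bracket assumes $g$ changes sign between 0.06 and 0.5, which holds for these parameters):

```python
import math

# Reproducing Example 2: growth rate at f* = 0.06, the break-even
# fraction f_c where g(f_c) = 0, and the doubling time in plays.
p, q = 0.53, 0.47

def g(f):
    return p * math.log(1 + f) + q * math.log(1 - f)

growth = g(0.06)                      # about 0.001801

# Bisection for f_c: g(0.06) > 0 and g(0.5) < 0 for these parameters.
lo, hi = 0.06, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
f_c = (lo + hi) / 2                   # about 0.1197

doubling = math.log(2) / growth       # about 385 plays
print(growth, f_c, doubling)
```

All three values match the ones derived in the text.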
##### Betting with Uneven Payoffs and Other Variations
We've so far only looked at games with even payoffs. We can generalize this result. If for each unit wagered, you can win $b$ units, we can derive a modified version of Equation 7:
\begin{equation*} g(f) = E[\log(\frac{X_n}{X_0})^{1/n}] = p\log(1 +bf) + q\log(1-f) \tag{10} \end{equation*}
Solving for the optimum yields $f^*=\frac{bp-q}{b}$.
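As a sketch (the 2-to-1 example values are made up for illustration):

```python
# General Kelly fraction for a bet paying b units per unit wagered:
# f* = (b*p - q) / b.  Example inputs below are illustrative assumptions.
def kelly(p, b):
    q = 1 - p
    return (b * p - q) / b

print(kelly(0.53, 1))   # even-money game from the text: f* = 0.06
print(kelly(0.40, 2))   # 2-to-1 payoff: f* = (0.8 - 0.6) / 2 = 0.1
```

Note that with $b=1$ this reduces to the even-bet result $f^*=p-q$ derived earlier, and that a game can be worth betting on even with $p<\frac{1}{2}$ if the payoff $b$ is large enough.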
Another variation is when you can make multiple simultaneous bets such as when multiple players share a single bankroll. Going through a similar exercise, we can derive values for $f_1^*, f_2^*, \ldots$ assuming the games played are independent. When two players are playing the same game (e.g. same table for Blackjack), the bets are correlated and adjustments must be made. Additionally, we can analyze more complex situations such as continuous (or nearly continuous) outcomes like the stock market which require a more thorough analysis using more complex math. See Thorp's paper for more details.
#### Conclusion
Kelly's optimal betting criterion is an incredibly interesting mathematical result. However, perhaps what is more interesting is that this theoretical result was put into practice by some of the very mathematicians that worked on it! Thorp has had wild success applying it in various situations such as sports betting, Blackjack and the stock market. Of course by itself the criterion isn't much use, it is only once you've found a game that has a positive expected value that you can put it to use. I would go into how to do that but I think I've written enough for one day and as I said, it's best left as an exercise to the reader.
#### References and Further Reading
• The Kelly Criterion in Blackjack Sports Betting, and the Stock Market by Edward O. Thorp.
• Optimal Gambling Systems for Favorable Games, E. O. Thorp, Review of the International Statistical Institute Vol. 37, No. 3 (1969), pp. 273-293.
• William Poundstone, Fortune's Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street. 2005. ISBN 978-0809045990. See also a brief biography of Kelly on William Poundstone's web page.
1
William Poundstone, Fortune's Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street. 2005. ISBN 978-0809045990. See also a brief biography of Kelly on William Poundstone's web page.
2
This whole section just basically summarizes (with a bit more step-by-step for the math) the paper "The Kelly Criterion in Blackjack Sports Betting, and the Stock Market". So if you're really interested, it's probably best to check it out directly.
3
It doesn't really matter if the bias is heads or tails. The point is that you get to pick the winning side!
4
The expected value of winning for bet $B$ is $Bp-Bq = B(p-q) > 0$ since $p > q$.
5
Almost surely here because it's theoretically possible that you can keep winning forever but it's such a small possibility that it basically can't happen. This is analogous to the red dot in the unit square.
6
The expected value of a binomial distribution (e.g. coin tossing) is just $np$. So $np/n = p$.
7
Image from "The Kelly Criterion in Blackjack Sports Betting, and the Stock Market".
I'm Brian Keng, a former academic, current data scientist and engineer. This is the place where I write about all things technical.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Advanced_Theoretical_Chemistry_(Simons)/03%3A_Characteristics_of_Energy_Surfaces/3.01%3A_Strategies_for_Geometry_Optimization_and_Finding_Transition_States | 3.1: Strategies for Geometry Optimization and Finding Transition States
The extension of the harmonic and Morse vibrational models to polyatomic molecules requires that the multidimensional energy surface be analyzed in a manner that allows one to approximate the molecule’s motions in terms of many nearly independent vibrations. In this Section, we will explore the tools that one uses to carry out such an analysis of the surface, but first it is important to describe how one locates the minimum-energy and transition-state geometries on such surfaces.
Finding Local Minima
Many strategies that attempt to locate minima on molecular potential energy landscapes begin by approximating the potential energy $$V$$ for geometries (collectively denoted in terms of $$3N$$ Cartesian coordinates $$\{q_j\}$$) in a Taylor series expansion about some “starting point” geometry (i.e., the current molecular geometry in an iterative process or a geometry that you guessed as a reasonable one for the minimum or transition state that you are seeking):
$V (q_k) = V(0) + \sum_k \left(\dfrac{\partial V}{\partial q_k}\right) q_k + \dfrac{1}{2} \sum_{j,k} q_j H_{j,k} q_k \, + \, ... \label{3.1.1}$
Here,
• $$V(0)$$ is the energy at the current geometry,
• $$\dfrac{\partial{V}}{\partial{q_k}} = g_k$$ is the gradient of the energy along the $$q_k$$ coordinate,
• $$H_{j,k} = \dfrac{\partial^2{V}}{\partial{q_j}\partial{q_k}}$$ is the second-derivative or Hessian matrix, and
• $$q_k$$ is the length of the “step” to be taken along this Cartesian direction.
An example of an energy surface in only two dimensions is given in the Figure 3.1 where various special aspects are illustrated. For example, minima corresponding to stable molecular structures, transition states (first order saddle points) connecting such minima, and higher order saddle points are displayed.
If the only knowledge that is available is $$V(0)$$ and the gradient components (e.g., computation of the second derivatives is usually much more computationally taxing than is evaluation of the gradient, so one is often forced to work without knowing the Hessian matrix elements), the linear approximation
$V(q_k) = V(0) + \sum_k g_k \,q_k \label{3.1.2}$
suggests that one should choose “step” elements $$q_k$$ that are opposite in sign from that of the corresponding gradient elements $$g_k = \dfrac{\partial{V}}{\partial{q_k}}$$ if one wishes to move “downhill” toward a minimum. The magnitude of the step elements is usually kept small in order to remain within the “trust radius” within which the linear approximation to $$V$$ is valid to some predetermined desired precision (i.e., one wants to assure that $$\sum_k g_k q_k$$ is not too large).
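A minimal sketch of this gradient-only, fixed-small-step strategy on a made-up two-dimensional model surface (the surface, step size and starting point are all assumptions chosen for illustration, not from the text):

```python
# Steepest descent on V(q1, q2) = q1**2 + 2*q2**2: step opposite the
# gradient with a small fixed step, as described above.
def grad(q):
    q1, q2 = q
    return (2 * q1, 4 * q2)          # analytic gradient of the model surface

q = (1.0, -0.8)                      # starting "geometry" (assumed)
step = 0.1                           # small step to stay inside the trust radius
for _ in range(200):
    g1, g2 = grad(q)
    q = (q[0] - step * g1, q[1] - step * g2)   # move downhill: q_k opposite g_k

print(q)                             # converges toward the minimum at (0, 0)
```

With only gradient information the approach is slow but robust; the Hessian-based strategies described next converge much faster near a minimum.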
When second derivative data is available, there are different approaches to predicting what step {$$q_k$$} to take in search of a minimum, and it is within such Hessian-based strategies that the concept of stepping along $$3N-6$$ independent modes arises. We first write the quadratic Taylor expansion
$V (q_k) = V(0) + \sum_k g_k q_k + \dfrac{1}{2} \sum_{j,k} q_j H_{j,k} q_k\label{3.1.3}$
in matrix-vector notation
$V(\textbf{q}) = V(0) + \textbf{q}^{\textbf{T}} \cdot \textbf{g} + \dfrac{1}{2} \textbf{q}^{\textbf{T}} \cdot \textbf{H} \cdot \textbf{q} \label{3.1.4}$
with the elements $$\{q_k\}$$ collected into the column vector $$\textbf{q}$$ whose transpose is denoted $$\textbf{q}^{\textbf{T}}$$.
Introducing the unitary matrix $$\textbf{U}$$ that diagonalizes the symmetric $$\textbf{H}$$ matrix, the above equation becomes
$V(\textbf{q}) = V(0) + \textbf{q}^{\textbf{T}} \textbf{U} \, \textbf{U}^{\textbf{T}} \textbf{g} + \dfrac{1}{2} \textbf{q}^{\textbf{T}} \textbf{U} \, \textbf{U}^{\textbf{T}} \textbf{H} \textbf{U}\, \textbf{U}^{\textbf{T}} \textbf{q}. \label{3.1.5}$
Because $$\textbf{U}^{\textbf{T}}\textbf{H}\textbf{U}$$ is diagonal, we have
$(\textbf{U}^{\textbf{T}}\textbf{H}\textbf{U})_{k,l} = \delta_{k,l} \lambda_k \label{3.1.6}$
where $$\lambda_k$$ are the eigenvalues of the Hessian matrix. For non-linear molecules, $$3N-6$$ of these eigenvalues will be non-zero; for linear molecules, $$3N-5$$ will be non-zero. The 5 or 6 zero eigenvalues of $$\textbf{H}$$ have eigenvectors that describe translation and rotation of the entire molecule; they are zero because the energy surface $$V$$ does not change if the molecule is rotated or translated. It can be difficult to properly identify the 5 or 6 translation and rotation eigenvalues of the Hessian because numerical precision issues often cause them to occur as very small positive or negative eigenvalues. If the molecule being studied actually does possess internal (i.e., vibrational) eigenvalues that are very small (e.g., the torsional motion of the methyl group in ethane has a very small energy barrier as a result of which the energy is very weakly dependent on this coordinate), one has to be careful to properly identify the translation-rotation and internal eigenvalues. By examining the eigenvectors corresponding to all of the low Hessian eigenvalues, one can identify and thus separate the former from the latter. In the remainder of this discussion, I will assume that the rotations and translations have been properly identified and the strategies I discuss will refer to utilizing the remaining $$3N-5$$ or $$3N-6$$ eigenvalues and eigenvectors to carry out a series of geometry “steps” designed to locate energy minima and transition states.
The eigenvectors of $$\textbf{H}$$ form the columns of the array $$U$$ that brings $$H$$ to diagonal form:
$\sum_{l} H_{k,l} U_{l,m} = \lambda_m U_{k,m} \label{3.1.7}$
Therefore, if we define
$Q_m = \sum_k U^T_{m,k} q_k \label{3.1.8a}$
and
$G_m = \sum_k U^T_{m,k} g_k \label{3.1.8b}$
to be the components of the step $$\{q_k\}$$ and of the gradient along the $$m^{th}$$ eigenvector of $$H$$, the quadratic expansion of $$V$$ can be written in terms of steps along the $$3N-5$$ or $$3N-6$$ directions $$\{Q_m\}$$ that correspond to non-zero Hessian eigenvalues:
$V(\textbf{q}) = V(0) + \sum_m G^T_m Q_m + \dfrac{1}{2} \sum_m Q_m \lambda_m Q_m.\label{3.1.9}$
The advantage to transforming the gradient, step, and Hessian to the eigenmode basis is that each such mode (labeled m) appears in an independent uncoupled form in the expansion of $$V$$. This allows us to take steps along each of the $$Q_m$$ directions in an independent manner with each step designed to lower the potential energy when we are searching for minima (strategies for finding a transition state will be discussed below).
For each eigenmode direction, one can ask for what size step $$Q$$ would the quantity $$GQ + \dfrac{1}{2} \lambda Q^2$$ be a minimum. Differentiating this quadratic form with respect to $$Q$$ and setting the result equal to zero gives
$Q_m = - \dfrac{G_m}{\lambda_m} \label{3.1.10}$
that is, one should take a step opposite the gradient but with a magnitude given by the gradient divided by the eigenvalue of the Hessian matrix. If the current molecular geometry is one that has all positive $$\lambda_m$$ values, this indicates that one may be “close” to a minimum on the energy surface (because all $$\lambda_m$$ are positive at minima). In such a case, the step $$Q_m = - G_m/\lambda_m$$ is opposed to the gradient along all $$3N-5$$ or $$3N-6$$ directions, much like the gradient-based strategy discussed earlier suggested. The energy change that is expected to occur if the step $$\{Q_m\}$$ is taken can be computed by substituting $$Q_m = - G_m/\lambda_m$$ into the quadratic equation for $$V$$:
$V(\text{after step}) = V(0) + \sum_m G^T_m \bigg(- \dfrac{G_m}{\lambda_m}\bigg) + \dfrac{1}{2} \sum_m \lambda_m \bigg(- \dfrac{G_m}{\lambda_m}\bigg)^2 \label{3.1.11a}$
$= V(0) - \dfrac{1}{2} \sum_m \lambda_m \bigg(- \dfrac{G_m}{\lambda_m}\bigg)^2. \label{3.1.11b}$
This clearly suggests that the step will lead “downhill” in energy along each eigenmode as long as all of the $$\lambda_m$$ values are positive. For example, if one were to begin with a good estimate for the equilibrium geometries of ethylene and propene, one could place these two molecules at a distance $$R_0$$ longer than the expected inter-fragment equilibrium distance $$R_{\rm vdW}$$ in the van der Waals complex formed when they interact. Because both fragments are near their own equilibrium geometries and at a distance $$R_0$$ at which long-range attractive forces will act to draw them together, a strategy such as outlined above could be employed to locate the van der Waals minimum on their energy surface. This minimum is depicted qualitatively in Figure 3.1a.
Beginning at $$R_0$$, one would find that $$3N-6 = 39$$ of the eigenvalues of the Hessian matrix are non-zero, where $$N = 15$$ is the total number of atoms in the ethylene-propene complex. Of these 39 non-zero eigenvalues, three will have eigenvectors describing radial and angular displacements of the two fragments relative to one another; the remaining 36 will describe internal vibrations of the complex. The eigenvalues belonging to the inter-fragment radial and angular displacements may be positive or negative (because you made no special attempt to orient the molecules at optimal angles and you may not have guessed very well at the optimal equilibrium inter-fragment distance), so it would probably be wisest to begin the energy-minimization process by using gradient information to step downhill in energy until one reaches a geometry $$R_1$$ at which all 39 of the Hessian matrix eigenvalues are positive. From that point on, steps determined by both the gradient and Hessian (i.e., $$Q_m = - G_m/\lambda_m$$) can be used unless one encounters a geometry at which one of the eigenvalues $$\lambda_m$$ is very small, in which case the step $$Q_m = - G_m/\lambda_m$$ along this eigenmode could be unrealistically large. In this case, it would be better not to take $$Q_m = - G_m/\lambda_m$$ for the step along this particular direction but to take a small step in the direction opposite to the gradient to improve chances of moving downhill. Such small-eigenvalue issues could arise, for example, if the torsion angle of propene’s methyl group happened, during the sequence of geometry steps, to move into a region where eclipsed rather than staggered geometries are accessed. Near eclipsed geometries, the Hessian eigenvalue describing twisting of the methyl group is negative; near staggered geometries, it is positive.
Whenever one or more of the $$\lambda_m$$ are negative at the current geometry, one is in a region of the energy surface that is not sufficiently close to a minimum to blindly follow the prescription $$Q_m = - G_m/\lambda_m$$ along all modes. If only one $$\lambda_m$$ is negative, one anticipates being near a transition state (at which all gradient components vanish and all but one $$\lambda_m$$ are positive with one $$\lambda_n$$ negative). In such a case, the above analysis suggests taking a step $$Q_m = - G_m/\lambda_m$$ along all of the modes having positive $$\lambda_m$$, but taking a step of opposite direction (e.g., $$Q_n = + G_n/\lambda_n$$ unless $$\lambda_n$$ is very small, in which case a small step opposite $$G_n$$ is best) along the direction having negative $$\lambda_n$$ if one is attempting to move toward a minimum. This is what I recommended in the preceding paragraph when an eclipsed geometry (which is a transition state for rotation of the methyl group) is encountered if one is seeking an energy minimum.
Finding Transition States
On the other hand, if one is in a region where one Hessian eigenvalue is negative (and the rest are positive) and if one is seeking to find a transition state, then taking steps $$Q_m = - G_m/\lambda_m$$ along all of the modes having positive eigenvalues and taking $$Q_n = - G_n/\lambda_n$$ along the mode having the negative eigenvalue is appropriate. The steps $$Q_m = - G_m/\lambda_m$$ will act to keep the energy near its minimum along all but one direction, and the step $$Q_n = - G_n/\lambda_n$$ will move the system uphill in energy along the direction having negative curvature, exactly as one desires when “walking” uphill in a streambed toward a mountain pass.
However, even the procedure just outlined for finding a transition state can produce misleading results unless some degree of chemical intuition is used. Let me give an example to illustrate this point. Let’s assume that one wants to begin near the geometry of the van der Waals complex involving ethylene and propene and to then locate the transition state on the reaction path leading to the [2+2] cyclo-addition product methyl-cyclobutane as also shown in Figure 3.1a. Consider employing either of two strategies to begin the “walk” leading from the van der Waals complex to the desired transition state (TS):
1. One could find the lowest (non-translation or non-rotation) Hessian eigenvalue and take a small step uphill along this direction to begin a streambed walk that might lead to the TS. Using the smallest Hessian eigenvalue to identify a direction to explore makes sense because it is along this direction that the energy surface rises least abruptly (at least near the geometry of the reactants).
2. One could move the ethylene radially a bit (say 0.2 Å) closer to the propene to generate an initial geometry to begin the TS search. This makes sense because one knows the reaction must lead to inter-fragment carbon-carbon distances that are much shorter in the methyl-cyclobutane products than in the van der Waals complex.
The first strategy suggested above will likely fail because the series of steps generated by walking uphill along the lowest Hessian eigenmode will produce a path leading from eclipsed to staggered orientation of propene’s methyl group. Indeed, this path leads to a TS, but it is not the [2+2] cyclo-addition TS that we want. The take-home lesson here is that uphill streambed walks beginning at a minimum on the reactants’ potential energy surface may or may not lead to the desired TS. Such walks are not foolish to attempt, but one should examine the nature of the eigenmode being followed to judge whether displacements along this mode make chemical sense. Clearly, only rotating the methyl group is not a good way to move from ethylene and propene to methyl-cyclobutane.
The second strategy suggested above might succeed, but it would probably still need to be refined. For example, if the displacement of the ethylene toward the propene were too small, one would not have distorted the system enough to move it into a region where the energy surface has negative curvature along the reaction path, as it must have as one approaches the TS. So, if the Hessian eigenvalues whose eigenvectors possess substantial inter-fragment radial displacements are all positive, one has probably not moved the two fragments close enough together. Probably the best way to proceed would then be to move the two fragments even closer (or to move them along a linear synchronous path[1] connecting the reactants and products) until one finds a geometry at which a negative Hessian eigenvalue’s eigenmode has substantial components along what appears to be reasonable for the desired reaction path (i.e., substantial displacements leading to shorter inter-fragment carbon-carbon distances). Once one has found such a geometry, one can use the strategies detailed earlier (e.g., $$Q_m = - G_m/\lambda_m$$) to walk uphill along one mode while minimizing along the other modes to move toward the TS. If successful, such a process will lead to the TS at which all components of the gradient vanish and all but one eigenvalue of the Hessian are positive. The take-home lesson of this example is that it is wise to first find a geometry close enough to the TS to cause the Hessian to have a negative eigenvalue whose eigenvector has substantial character along directions that make chemical sense for the reaction path.
In either a series of steps toward a minimum or toward a TS, once a step has been suggested within the eigenmode basis, one needs to express that step in terms of the original Cartesian coordinates $$q_k$$ so that these Cartesian values can be altered within the software program to effect the predicted step. Given values for the $$3N-5$$ or $$3N-6$$ step components $$Q_m$$ (n.b., the step components $$Q_m$$ along the 5 or 6 modes having zero Hessian eigenvalues can be taken to be zero because they would simply translate or rotate the molecule), one must compute the {$$q_k$$}. To do so, we use the relationship
$Q_m = \sum_k U^T_{m,k} q_k\label{3.1.12}$
and write its inverse (using the unitary nature of the $$\textbf{U}$$ matrix):
$q_k = \sum_m U_{k,m} Q_m \label{3.1.13}$
to compute the desired Cartesian step components.
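The full eigenmode-based stepping procedure of Eqs. (3.1.8)–(3.1.13) can be sketched in a few lines of numpy. (This is a minimal illustration on a hypothetical quadratic surface; the step-capping value and the small-eigenvalue fallback step are my own assumptions, standing in for the trust-radius control discussed in the text.)

```python
import numpy as np

def eigenmode_step(g, H, max_step=0.3, eig_floor=1e-6):
    """Newton-like step: Q_m = -G_m / lambda_m in the Hessian eigenmode basis.

    g: Cartesian gradient (assumed already free of translation/rotation
    components); H: Cartesian Hessian. Modes with |lambda| < eig_floor fall
    back to a small step opposite the gradient, as the text recommends.
    """
    lam, U = np.linalg.eigh(H)               # H = U diag(lam) U^T
    G = U.T @ g                              # gradient in the eigenmode basis
    Q = np.empty_like(G)
    for m, (lm, Gm) in enumerate(zip(lam, G)):
        if abs(lm) > eig_floor:
            Q[m] = -Gm / lm                  # Eq. (3.1.10)
        else:
            Q[m] = -np.sign(Gm) * 0.01       # small step opposite the gradient
    Q = np.clip(Q, -max_step, max_step)      # crude trust-radius control
    return U @ Q                             # back to Cartesian steps (Eq. 3.1.13)

# Quadratic test surface V = 1/2 q^T H q, gradient H q, minimum at the origin.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
grad = lambda q: H @ q
q = np.array([1.0, -1.0])
for _ in range(50):
    q = q + eigenmode_step(grad(q), H)
```

Because the test surface is exactly quadratic with all-positive eigenvalues, the capped Newton steps drive each eigenmode coordinate to zero within a handful of iterations.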
In using the Hessian-based approaches outlined above, one has to take special care when one or more of the Hessian eigenvalues is small. This often happens when
1. one has a molecule containing “soft modes” (i.e., degrees of freedom along which the energy varies little), or
2. as one moves from a region of negative curvature into a region of positive curvature (or vice versa); in such cases, the curvature must pass through or near zero.
For these situations, the expression $$Q_m = - G_m/\lambda_m$$ can produce a very large step along the mode having small curvature. Care must be taken to not allow such incorrect artificially large steps to be taken.
Energy Surface Intersections
I should note that there are other important regions of potential energy surfaces that one must be able to locate and characterize. Above, we focused on local minima and transition states. Later in this Chapter, and again in Chapter 8, we will discuss how to follow so-called reaction paths that connect these two kinds of stationary points using the type of gradient and Hessian information that we introduced earlier in this Chapter.
It is sometimes important to find geometries at which two Born-Oppenheimer energy surfaces $$V_1(\text{q})$$ and $$V_2(\text{q})$$ intersect because such regions often serve as efficient funnels for trajectories or wave packets evolving on one surface to undergo so-called non-adiabatic transitions to the other surface. Let’s spend a few minutes thinking about under what circumstances such surfaces can indeed intersect, because students often hear that surfaces do not intersect but, instead, undergo avoided crossings. To understand the issue, let us assume that we have two wave functions $$\Phi_1$$ and $$\Phi_2$$ both of which depend on $$3N-6$$ coordinates $$\{q\}$$. These two functions are not assumed to be exact eigenfunctions of the Hamiltonian $$H$$, but likely are chosen to approximate such eigenfunctions. To find the improved functions $$\Psi_1$$ and $$\Psi_2$$ that more accurately represent the eigenstates, one usually forms linear combinations of $$\Phi_1$$ and $$\Phi_2$$,
$\Psi_K = C_{K,1} \Phi_1 + C_{K,2} \Phi_2 \label{3.1.14}$
from which a $$2\times 2$$ matrix eigenvalue problem arises:
$\left|\begin{array}{cc} H_{1,1}-E & H_{1,2}\\ H_{2,1} & H_{2,2}-E \end{array}\right|=0$
This quadratic equation has two solutions
$2E_\pm = (H_{1,1} + H_{2,2}) \pm \sqrt{(H_{1,1}-H_{2,2})^2 + 4H_{1,2}^2}$
These two solutions can be equal (i.e., the two state energies can cross) only if the square root factor vanishes. Because this factor is a sum of two squares (each thus being a non-negative quantity), this can only happen if two identities hold simultaneously (i.e., at the same geometry):
$H_{1,1} = H_{2,2} \label{3.1.15a}$
and
$H_{1,2} = 0. \label{3.1.15b}$
The main point then is that in the $$3N-6$$ dimensional space, the two states will generally not have equal energy. However, in a space of two lower dimensions (because there are two conditions that must simultaneously be obeyed: $$H_{1,1} = H_{2,2}$$ and $$H_{1,2} = 0$$), their energies may be equal. They do not have to be equal, but it is possible that they are. It is based upon such an analysis that one usually says that potential energy surfaces in $$3N-6$$ dimensions may undergo intersections in spaces of dimension $$3N-8$$. If the two states are of different symmetry (e.g., one is a singlet and the other a triplet), the off-diagonal element $$H_{1,2}$$ vanishes automatically, so only one other condition is needed to realize crossing. So, we say that two states of different symmetry can cross in a space of dimension $$3N-7$$. For a triatomic molecule with $$3N-6 = 3$$ internal degrees of freedom, this means that surfaces of the same symmetry can cross in a space of dimension 1 (i.e., along a line) while those of different symmetry can cross in a space of dimension 2 (i.e., in a plane). An example of such a surface intersection is shown in Figure 3.1c.
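The two-condition crossing rule can be checked numerically on the $$2\times 2$$ model. (The helper name and test values below are mine; the gap is $$\sqrt{(H_{1,1}-H_{2,2})^2 + 4H_{1,2}^2}$$, which vanishes only when both conditions hold at once.)

```python
import numpy as np

def splitting(h11, h22, h12):
    """Energy gap E+ - E- of the 2x2 Hamiltonian [[h11, h12], [h12, h22]]."""
    e = np.linalg.eigvalsh(np.array([[h11, h12], [h12, h22]]))
    return e[1] - e[0]

print(splitting(1.0, 1.0, 0.0))   # both conditions hold: true crossing, gap 0
print(splitting(1.0, 1.0, 0.2))   # H12 != 0: avoided crossing, gap 0.4
print(splitting(1.2, 1.0, 0.0))   # H11 != H22: gap 0.2
```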
First considering the reaction of Al ($$3s^2 3p^1$$; $$^2P$$) with $$H_2$$ ($$\sigma_g^2$$; $$^1\Sigma_g^+$$) to form $$AlH_2$$ ($$^2A_1$$) as if it were to occur in $$C_{2v}$$ symmetry, the $$Al$$ atom’s occupied 3p orbital can be directed in any of three ways.
1. If it is directed toward the midpoint of the H-H bond, it produces an electronic state of $$^2A_1$$ symmetry.
2. If it is directed out of the plane of the $$AlH_2$$, it gives a state of $$^2B_1$$ symmetry, and
3. if it is directed parallel to the H-H bond, it generates a state of $$^2B_2$$ symmetry.
The $$^2A_1$$ state is, as shown in the upper left of Figure 3.1c, repulsive as the Al atom’s 3s and 3p orbitals begin to overlap with the hydrogen molecule’s $$\sigma_g$$ orbital at large $$R$$-values. The $$^2B_2$$ state, in which the occupied 3p orbital is directed sideways parallel to the H-H bond, leads to a shallow van der Waals well at long-R but also moves to higher energy at shorter $$R$$-values.
The ground state of the $$AlH_2$$ molecule has its five valence orbitals occupied as follows:
1. two electrons occupy a bonding Al-H orbital of $$a_1$$ symmetry,
2. two electrons occupy a bonding Al-H orbital of $$b_2$$ symmetry, and
3. the remaining electron occupies a non-bonding orbital of $$sp^2$$ character localized on the Al atom and having $$a_1$$ symmetry.
This $$a_1^2 b_2^2 a_1^1$$ orbital occupancy of the $$AlH_2$$ molecule’s ground state does not correlate directly with any of the three degenerate configurations of the ground state of $$Al + H_2$$, which are $$a_1^2 a_1^2 a_1^1, a_1^2 a_1^2 b_1^1$$, and $$a_1^2 a_1^2 b_2^1$$ as explained earlier. It is this lack of direct configuration correlation that generates the reaction barrier shown in Figure 3.1c.
Let us now return to the issue of finding the lower-dimensional ($$3N-8$$ or $$3N-7$$) space in which two surfaces cross, assuming one has available information about the gradients and Hessians of both of these energy surfaces $$V_1$$ and $$V_2$$. There are two components of characterizing the intersection space within which $$V_1$$ = $$V_2$$:
1. One has to first locate one geometry $$\textbf{q}_0$$ lying within this space and then,
2. one has to sample nearby geometries (e.g., that might have lower total energy) lying within this subspace where $$V_1 = V_2$$.
To locate a geometry at which the difference function $$F = [V_1 - V_2]^2$$ passes through zero, one can employ conventional functional minimization methods, such as those detailed earlier when discussing how to find energy minima, to locate where $$F = 0$$; the function on which one is now seeking a minimum is the squared potential energy surface difference.
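A minimal sketch of locating such a geometry by minimizing the difference function follows. (The one-dimensional model surfaces and the crude grid scan are illustrative assumptions of mine, standing in for the gradient-based minimization of $$F$$ that the text describes.)

```python
import numpy as np

# Hypothetical 1-D model surfaces: V1 and V2 cross where F = (V1 - V2)^2
# reaches zero.  Here V1 - V2 = q^2 - 2.5 q, so the crossings are q = 0, 2.5.
V1 = lambda q: (q - 1.0) ** 2
V2 = lambda q: 0.5 * q + 1.0
F = lambda q: (V1(q) - V2(q)) ** 2

# Crude scan for the minimum of F over a grid of candidate geometries.
grid = np.linspace(-3.0, 4.0, 100001)
q0 = grid[np.argmin(F(grid))]   # lands very close to one of the crossings
```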
Once one such geometry $$\textbf{q}_0$$ has been located, one subsequently tries to follow the seam (i.e., for a triatomic molecule, this is the one-dimensional line of crossing; for larger molecules, it is a $$3N-8$$ dimensional space) within which the function $$F$$ remains zero. Professor David Yarkony has developed efficient routines for characterizing such subspaces (D. R. Yarkony, Acc. Chem. Res. 31, 511-518 (1998)). The basic idea is to parameterize steps away from $$\textbf{q}_0$$ in a manner that constrains such steps to have no component along either the gradient of ($$H_{1,1} - H_{2,2}$$) or along the gradient of $$H_{1,2}$$. Because $$V_1 = V_2$$ requires having both $$H_{1,1} = H_{2,2}$$ and $$H_{1,2} = 0$$, taking steps obeying these two constraints allows one to remain within the subspace where $$H_{1,1} = H_{2,2}$$ and $$H_{1,2} = 0$$ are simultaneously obeyed. Of course, it is a formidable task to map out the entire $$3N-8$$ or $$3N-7$$ dimensional space within which the two surfaces intersect, and this is essentially never done. Instead, it is common to try to find, for example, the point within this subspace at which the two surfaces have their lowest energy. An example of such a point is labeled RMECP in Figure 3.1c, and would be of special interest when studying reactions taking place on the lower-energy surface that have to access the surface-crossing seam to evolve onto the upper surface. The energy at RMECP reflects the lowest energy needed to access this surface crossing.
Such intersection seam location procedures are becoming more commonly employed, but are still under very active development, so I will refer the reader to Prof. Yarkony’s paper cited above for further guidance. For now, it should suffice to say that locating such surface intersections is an important ingredient when one is interested in studying, for example, photochemical reactions in which the reactants and products may move from one electronic surface to another, or thermal reactions that require the system to evolve onto an excited state through a surface crossing.
Endnotes
1. This is a series of geometries $$R_x$$ defined through a linear interpolation (using a parameter $$0 < x < 1$$) between the $$3N$$ Cartesian coordinates $$R_{\rm reactants}$$ belonging to the equilibrium geometry of the reactants and the corresponding coordinates $$R_{\rm products}$$ of the products: $$R_x = R_{\rm reactants} x + (1-x) R_{\rm products}$$
http://mathoverflow.net/questions/33478/geometric-interpretation-of-characteristic-polynomial | # Geometric interpretation of characteristic polynomial
The coefficients of lowest and next-highest degree of a linear operator's characteristic polynomial are its determinant and trace. These have well-known geometric interpretations. But what about its intermediate coefficients?
For a linear operator $f : V \to V$, we have the beautiful formula
$$\chi(f) = det(f - t) = \sum_{i=0}^n (-1)^i\ tr(\wedge^{n-i}(f))\ t^i,$$
where $\wedge^{p}(f)$ is the map induced by $f$ on grade $p$ of $V$'s exterior algebra.
While this formula is rarely mentioned (at least I haven't seen it in any of the standard textbooks), it is not too surprising if you have a good grasp of exterior algebra. It presents $\chi(f)$ as a generating function for the exterior traces of $f$.
My question is whether these traces have a simple geometric interpretation on par with $tr$ and $det$.
This formula is certainly mentioned in more advanced books that take coordinate-free point of view. – Victor Protsak Jul 27 '10 at 6:49
You seem very confident that the trace has a geometric interpretation; in fact, this was the subject of a previous MO question mathoverflow.net/questions/13526/…. But if you are truly happy with the geometricity of the trace, it seems that your question comes down to asking for a geometric interpretation of "intermediate" exterior powers of a linear operator. I'm sure that some people here would be happy to speak to that... – Pete L. Clark Jul 27 '10 at 7:02
Pete: The exterior powers of the operator aren't mysterious to me. But if you apply the interpretation of the trace directly to this question, it will give you an answer in terms of the various $\wedge^p(V)$ vector spaces rather than $V$ itself. Anyway, I think I got it: If we take $R^3$ with a diagonal matrix $diag(a_1, a_2, a_3)$ as an example, the bivector trace is $a_1 a_2 + a_2 a_3 + a_1 a_3$. This is the second-order volume differential contributed by the edges of the unit cube, much as the vector trace $a_1 + a_2 + a_3$ is the first-order volume differential contributed by the faces. – Per Vognsen Jul 27 '10 at 7:20
@PV: What you say both agrees with what I said (or meant to say, at least) and goes on to give a geometric interpretation of the sort I vaguely had in mind. Cheers. – Pete L. Clark Jul 27 '10 at 7:31
A rather simple response is to differentiate the characteristic polynomial and use your interpretation of the determinant.
$$det(I-tf) = {t^n}det(\frac{1}{t}I-f) = (-t)^ndet(f-\frac{1}{t}I)= {(-t)^n}\chi(f)(1/t)$$
So if we let $\chi(f)(t) = \Sigma_{i=0}^n a_it^i$, then ${(-t)^n}\chi(f)(1/t) = (-1)^n\Sigma_{i=0}^n a_it^{n-i}$
But $I-tf$ is a path through the identity matrix (at $t=0$), and $det(A)$ measures the volume distortion of the linear transformation $A$.
$$det(I-tf)^{(k)}(t=0) = (-1)^nk!a_{n-k}$$
and a change of variables ($t\longmapsto -t$) gives (superscript $(k)$ indicates the $k$-th derivative)
$$det(I+tf)^{(k)}(t=0) = (-1)^{n+k}k!a_{n-k}$$
So the coefficients of the characteristic polynomial are measuring the various derivatives of the volume distortion, as you perturb the identity transformation in the direction of $f$.
$$a_k = (-1)^k\,\frac{det(I+tf)^{(n-k)}(t=0)}{(n-k)!}$$
That sounds good. In my comment to Pete, I briefly outlined something similar. To put what you said in those terms, it's measuring the differential contributions to the volume distortion from the various facets of the volume element: in the R^3 case, solids, faces, edges and vertices, in that order. The alternating signs come from inclusion-exclusion counting of overlapping contributions. – Per Vognsen Jul 27 '10 at 7:34
+1. – Pete L. Clark Jul 27 '10 at 7:35
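The generating-function formula in the question can be verified numerically, using the fact that the trace of $\wedge^p(f)$ equals the sum of the $p \times p$ principal minors of the matrix of $f$. (A small sketch; the helper name and test matrix are mine.)

```python
import itertools
import numpy as np

def exterior_trace(A, p):
    """tr of the map induced on the p-th exterior power = sum of p x p principal minors."""
    n = A.shape[0]
    total = 0.0
    for rows in itertools.combinations(range(n), p):
        sub = A[np.ix_(rows, rows)]
        total += np.linalg.det(sub) if p else 1.0
    return total

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])
n = A.shape[0]

# Coefficients of chi(t) = det(A - t I), lowest degree first, per the formula
# chi(f) = sum_i (-1)^i tr(wedge^{n-i} f) t^i.
coeffs = [(-1.0) ** i * exterior_trace(A, n - i) for i in range(n + 1)]

# Compare against numpy's characteristic polynomial of A.
ref = np.poly(A)                 # coefficients of det(t I - A), highest degree first
ref = ref[::-1] * (-1.0) ** n    # convert to det(A - t I), lowest degree first
```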
https://www.zbmath.org/?q=an%3A0077.12201 | ×
# zbMATH — the first resource for mathematics
An introduction to probability theory and its applications. 2nd ed. (English) Zbl 0077.12201
A Wiley Publication in Mathematical Statistics. New York: John Wiley & Sons, Inc., London: Chapman & Hall, Ltd. xv, 461 p. (1957).
##### MSC:
60-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to probability theory
##### Keywords:
probability theory
https://www.varsitytutors.com/algebra_1-help/how-to-use-the-quadratic-function | # Algebra 1 : How to use the quadratic function
## Example Questions
### Example Question #1 : Finding Roots
Solve the equation:
Explanation:
To solve the quadratic equation, , we set the equation equal to zero and then factor the quadratic, . Because these expressions multiply to equal 0, then it must be that at least one of the expressions equals 0. So we set up the corresponding equations and to obtain the answers and
### Example Question #2 : Understand Functions: Ccss.Math.Content.8.F.A.1
Solve for :
The solution is undefined.
Explanation:
To factor this equation, first find two numbers that multiply to 35 and sum to 12. These numbers are 5 and 7. Split up 12x using these two coefficients:
### Example Question #1 : How To Use The Quadratic Function
Given , find .
Explanation:
Plug in a for x:
Next plug in (a + h) for x:
Therefore f(a+h) - f(a) = .
### Example Question #2 : How To Use The Quadratic Function
Which of the following is the correct solution when is solved using the quadratic equation?
Explanation:
### Example Question #3 : How To Use The Quadratic Function
Give the minimum value of the function .
This function does not have a minimum.
Explanation:
This is a quadratic function. The -coordinate of the vertex of the parabola can be determined using the formula , setting :
Now evaluate the function at :
### Example Question #4 : How To Use The Quadratic Function
Quadratic equations may be written in the following format:
In the equation , what is the value of ?
Explanation:
For the given equation below:
The values of each coefficient are:
### Example Question #5 : How To Use The Quadratic Function
Solve for x.
Explanation:
The quadratic formula is as follows:
We will start by finding the values of the coefficients of the given equation, but first we must simplify.
Move all the terms to one side and set the equation equal to .
Rearrange.
We can then find the values of the coefficients of the equation:
Quadratic equations may be written in the following format:
In our case, the values of the coefficients are:
Substitute the coefficient values into the quadratic equation:
After simplifying we are left with:
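The quadratic-formula recipe used in these examples can be sketched as a small helper. (This is a generic function of my own, not part of the original page; the example coefficients come from the factoring discussion above, $x^2 + 12x + 35 = (x+5)(x+7)$.)

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0 (assumes a != 0)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                      # no real roots
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

print(solve_quadratic(1, 12, 35))      # → [-7.0, -5.0]
```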
### Example Question #3 : Understand Functions: Ccss.Math.Content.8.F.A.1
Solve for :
Explanation:
To find , we must factor the quadratic function:
Solve for :
https://www.physicsforums.com/threads/lines-and-plane.104238/ | # Lines and plane
1. Dec 15, 2005
### cmab
How can I prove that this line is perpendicular to this plane?
Line
x=-2-4t, y=3-2t, z=1+2t
Plane
2x+y-z=5
I don't care that much about the answer, I want the procedure.
2. Dec 15, 2005
### asdf60
Not sure exactly what you mean, but a line is perpendicular to a plane if it is parallel to the normal of the plane.....
3. Dec 15, 2005
### mathwonk
i think he means this: the direction vector of your line is the vector of coefficients of the letter t, so (-4,-2,2).
The "normal" vector (perpendicular) to your plane is the vector of coefficients of the letters, x,y,z, so (2,1,-1). these two vectors are proportional, by a scale factor of -2, so the answer is yes.
4. Dec 15, 2005
### cmab
Thanks, it is appreciated.
5. Dec 15, 2005
### cmab
How do we find the parametric equations for the line of intersection of the given planes
7x-2y+3z = -2 and -3x+y+2z+5=0
6. Dec 15, 2005
### d_leet
add 2 to both sides of the first equation.
so we have 7x -2y + 3z + 2 = 0
now we can set the equations equal to each other and that should give you the line of intersection. I'm not entirely sure of this though so I'm sorry if I'm wrong.
7. Dec 15, 2005
### d_leet
Sorry that was wrong, I just remembered how to do this. Find the normal vectors to each plane, and take their cross product, and you know that this vector will be parallel to the line in which they intersect so you can just find one point of intersection and write the parametric equations for the line knowing the parallel vector.
8. Dec 16, 2005
### HallsofIvy
Staff Emeritus
No, that would work perfectly well. From 7x-2y+3z = -2 you get
7x- 2y+ 3z+ 2= 0. Since we want -3x+y+2z+5=0 on the line of intersection also: 7x- 2y+ 3z+ 2= -3x+ y+ 2z+ 5, one equation in three unknowns. You can use that together with either of the original equations to have 2 equations in 3 unknowns. Solve for two of x,y,z in terms of the other and use the third as "parameter".
Of course, you could just as easily use the two given equations to solve for two of the unknowns in terms of the other. I don't see any reason to take a cross-product of two vectors.
Here's how I would do the problem:
Since -3x+y+2z+5=0 and 7x-2y+3z = -2, multiply the first equation by 2 to get -6x+ 2y+ 4z+ 10= 0 and write the second equation as 7x- 2y+ 3z+ 2= 0. Now add those two equations: x+ 7z+ 12= 0. x= -7z- 12.
Put that back into the first equation: -3(-7z-12)+ y+ 2z+ 5= 0, that is, 21z+ 36+ y+ 2z+ 5= 0, which gives y= -23z- 41. (As a check, 7(-7z- 12)- 2(-23z- 41)+ 3z= -49z- 84+ 46z+ 82+ 3z= -2, as required.)
Parametric equations for the line of intersection are:
x= -7t- 12
y= -23t- 41
z= t.
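A quick way to guard against arithmetic slips here is to substitute the parametrization back into both plane equations for several values of the parameter. A small Python sketch; redoing the algebra gives -41 as the constant term in y:

```python
def on_plane(coeffs, point):
    """True if a*x + b*y + c*z + d == 0 for coeffs (a, b, c, d)."""
    a, b, c, d = coeffs
    x, y, z = point
    return a * x + b * y + c * z + d == 0

# Every point of the line x = -7t - 12, y = -23t - 41, z = t must satisfy
# 7x - 2y + 3z + 2 = 0 and -3x + y + 2z + 5 = 0.
for t in (-2, 0, 1, 5):
    p = (-7 * t - 12, -23 * t - 41, t)
    assert on_plane((7, -2, 3, 2), p)
    assert on_plane((-3, 1, 2, 5), p)
```

All four sample points satisfy both equations, so the line lies in both planes.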
http://mathhelpforum.com/calculus/224305-derivative-x-print.html | # Derivative of a^x.
• November 15th 2013, 02:40 PM
sepoto
1 Attachment(s)
Derivative of a^x.
Attachment 29739
$M(a)=\lim_{\Delta x\rightarrow 0}\frac{a^{\Delta x}-1}{\Delta x}$
-----------
$M(a)=\left.\frac{d}{dx}a^x\right|_{x=0}$
"2. Geometrically, M(a) is the slope of the graph y = ax at x = 0."
My question is: what about the points other than 0 (where a^x equals 1)? Right now I think that the slope at x = 2 is M(a)·a^2, however the documents seem to me to be saying that M(a) alone is the slope at the other points such as x = 2. I'm trying to clear this up.
• November 15th 2013, 03:09 PM
Plato
Re: Derivative of a^x.
Quote:
Originally Posted by sepoto
$M(a)=\lim_{\Delta x\rightarrow 0}\frac{a^{\Delta x}-1}{\Delta x}$
-----------
$M(a)=\left.\frac{d}{dx}a^x\right|_{x=0}$
${\lim _{h \to 0}}\frac{{{a^{x + h}} - {a^x}}}{h} = {a^x}{\lim _{h \to 0}}\frac{{{a^h} - 1}}{h}$
You need to know ${\lim _{h \to 0}}\frac{{{a^h} - 1}}{h} = \log (a)$
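Plato's limit is easy to check numerically with a small h; a quick sketch (note that `math.log` is the natural logarithm):

```python
import math

def slope_at_zero(a, h=1e-8):
    """Difference quotient (a**h - 1) / h, a numerical estimate of M(a)."""
    return (a ** h - 1.0) / h

# For several bases the estimate lands on log(a); e.g. M(2) is about 0.6931.
for a in (2.0, math.e, 10.0):
    assert abs(slope_at_zero(a) - math.log(a)) < 1e-5
```

With a = 2 the estimate reproduces the 0.6931 figure discussed below.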
• November 15th 2013, 08:16 PM
sepoto
Re: Derivative of a^x.
I see that is correct, and if I use my calculator's ln function, ln(2)·2^3 gives me the derivative of 2^x at the point x = 3.
What I'm still trying to understand: if the limit is taken at the point 0, it looks like it would result in a divide-by-zero error if the point x = 0 itself is entered:
$\frac{2^0-1}{0}$
which does not seem to be coming out to:
0.6931
which is the actual derivative of 2^x at x=0.
P.S.
O.K. Since it's a limit, I would have to plug in a value as close to zero as I can without actually being at zero.
• November 16th 2013, 07:15 AM
HallsofIvy
Re: Derivative of a^x.
You seem to be missing the whole point of a "limit". The only time you can find $\lim_{x\to a} f(x)$ by calculating f(a) is if the function, f, is "continuous at x= a". Because we define "f is continuous at a" by "$\lim_{x\to a} f(x)= f(a)$", that would be circular reasoning except that we have other ways of determining if functions are continuous.
However, even in this case we do NOT take a limit by "plug in a value as close to zero as I can without actually being on zero". There is NO non-zero number closest to 0. We do NOT find limits by "plugging in" values. I don't have time to give you a course in limits here so I suggest you review them in your text book. They are crucial to Calculus and not as trivial as you seem to think.
http://www.cecilesconsignment.com/article/2018/5 | 2019ÄêJOSÈëÑ¡¡°Öйú¿Æ¼¼ÆÚ¿¯×¿Ô½Ðж¯¼Æ»®¡±
Issue Browser
Volume 39, Issue 5, May 2018
# SEMICONDUCTOR PHYSICS
• ## The oscillations in ESR spectra of Hg0.76Cd0.24Te implanted by Ag+ at the X and Q-bands
J. Semicond. 2018, 39(5): 052001
doi: 10.1088/1674-4926/39/5/052001
The objects of the investigation were uniformly Ag+ doped Hg0.76Cd0.24Te mercury chalcogenide monocrystals obtained by ion implantation with subsequent thermal annealing over 20 days. After implantation and annealing the conductivity was inverted from n-type with a carrier concentration of 10¹⁶ cm⁻³ to p-type with a carrier concentration of ≈ 3.9 × 10¹⁵ cm⁻³. Investigations of the microwave absorption derivative (dP/dH) showed the existence of strong oscillations in the magnetic field for Ag:Hg0.76Cd0.24Te in the temperature range 4.2–12 K. The concentration and effective mass of the charge carriers were determined from the oscillation period and the temperature dependence of the oscillation amplitude. We suppose that this phenomenon is similar to the de Haas–van Alphen effect in a weakly correlated electron system with an imperfect nesting vector.
• # SEMICONDUCTOR MATERIALS
• ## On correction of model of stabilization of distribution of concentration of radiation defects in a multilayer structure with account experiment data
J. Semicond. 2018, 39(5): 053001
doi: 10.1088/1674-4926/39/5/053001
We introduce a model of the redistribution of point radiation defects, their interaction between themselves, and the redistribution of their simplest complexes (divacancies and diinterstitials) in a multilayer structure. The model makes it possible to describe qualitatively the nonmonotonicity of the distributions of concentrations of radiation defects at interfaces between layers of the multilayer structure. This nonmonotonicity was recently found experimentally. To take it into account, we modify the model recently used in the literature for analysis of the distribution of the concentration of radiation defects. To analyze the model we used an approach of solving boundary problems which can be used without crosslinking of solutions at interfaces between layers of the considered multilayer structures.
• ## The structure and magnetic properties of β-(Ga0.96Mn0.04)2O3 thin film
J. Semicond. 2018, 39(5): 053002
doi: 10.1088/1674-4926/39/5/053002
High quality epitaxial single phase (Ga0.96Mn0.04)2O3 and Ga2O3 thin films have been prepared on sapphire substrates by using laser molecular beam epitaxy (L-MBE). X-ray diffraction results indicate that the thin films have the monoclinic structure with a $(\bar 201)$ preferable orientation. Room temperature (RT) ferromagnetism appears and the magnetic properties of the β-(Ga0.96Mn0.04)2O3 thin film are enhanced compared with our previous works. Experiments as well as the first principle method are used to explain the role of the Mn dopant on the structure and magnetic properties of the thin films. The ferromagnetic properties are explained based on the concentration of the transition element and the defects in the thin films.
• ## Lateral polarity control of III-nitride thin film and application in GaN Schottky barrier diode
J. Semicond. 2018, 39(5): 053003
doi: 10.1088/1674-4926/39/5/053003
N-polar and III-polar GaN and AlN epitaxial thin films grown side by side on a single sapphire substrate are reported. Surface morphology, wet etching susceptibility and bi-axial strain conditions were investigated, and the polarity control scheme was utilized in the fabrication of a Schottky barrier diode where the ohmic contact and Schottky contact were deposited on N-polar domains and Ga-polar domains, respectively. The influence of N-polarity on the on-state resistivity and I–V characteristic is discussed, demonstrating that lateral polarity structures of GaN and AlN can be widely used in new designs of optoelectronic and electronic devices.
• ## Growth and characteristics of p-type doped GaAs nanowire
J. Semicond. 2018, 39(5): 053004
doi: 10.1088/1674-4926/39/5/053004
The growth of p-type GaAs nanowires (NWs) on GaAs (111)B substrates by metal-organic chemical vapor deposition (MOCVD) has been systematically investigated as a function of diethyl zinc (DEZn) flow. The growth rate of the GaAs NWs was slightly improved by Zn-doping, and kinks are observed under high DEZn flow. In addition, the I–V curves of the GaAs NWs were measured, and the p-type doping concentration under II/III ratios of 0.013 and 0.038 was approximately 10¹⁹–10²⁰ cm⁻³.
• # SEMICONDUCTOR DEVICES
• ## Investigation and statistical modeling of InAs-based double gate tunnel FETs for RF performance enhancement
J. Semicond. 2018, 39(5): 054001
doi: 10.1088/1674-4926/39/5/054001
In this paper, RF performance analysis of InAs-based double gate (DG) tunnel field effect transistors (TFETs) is investigated in both qualitative and quantitative fashion. This investigation is carried out by varying the geometrical and doping parameters of the TFETs to extract various RF parameters: unity gain cut-off frequency (ft), maximum oscillation frequency (fmax), intrinsic gain and admittance (Y) parameters. An asymmetric gate oxide is introduced in the gate-drain overlap and compared with that of DG TFETs. A higher ON-current (ION) of about 0.2 mA and a lower leakage current (IOFF) of 29 fA is achieved for the DG TFET with gate-drain overlap. Due to the increase in transconductance (gm), higher ft and intrinsic gain are attained for the DG TFET with gate-drain overlap. A higher fmax of 985 GHz is obtained for a drain doping of 5 × 10¹⁷ cm⁻³ because of the reduced gate-drain capacitance (Cgd) with the DG TFET with gate-drain overlap. In terms of Y-parameters, gate oxide thickness variation offers better performance due to the reduced values of Cgd. A second order numerical polynomial model is generated for all the RF responses as a function of the geometrical and doping parameters. The simulation results are compared with this numerical model, where the predicted values match the simulated values.
• ## Self-assembled patches in PtSi/n-Si (111) diodes
J. Semicond. 2018, 39(5): 054002
doi: 10.1088/1674-4926/39/5/054002
Using the effect of temperature on the capacitance–voltage (C–V) and conductance–voltage (G/ω–V) characteristics of PtSi/n-Si (111) Schottky diodes, the profile of apparent doping concentration (NDapp), the potential difference between the Fermi energy level and the bottom of the conduction band (Vn), apparent barrier height (ΦBapp), series resistance (Rs) and the interface state density Nss have been investigated. From the temperature dependence of (C–V) it was found that these parameters change non-uniformly with increasing temperature over a wide temperature range of 79–360 K. The voltage and temperature dependences of the apparent carrier distribution we attributed to the existence of self-assembled patches, similar to quantum wells, which formed due to the process of PtSi formation on the semiconductor and the presence of hexagonal voids of Si (111).
• ## An improved large signal model of InP HEMTs
J. Semicond. 2018, 39(5): 054003
doi: 10.1088/1674-4926/39/5/054003
An improved large signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. Both the channel current and gate charge model equations are continuous and differentiable to high order, and the proposed gate charge model satisfies charge conservation. To account for the strong leakage-induced barrier reduction effect of InP HEMTs, the Angelov current model equations are improved, so that the channel current model can fit the DC performance of the devices. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, in which the model accurately predicts the DC I–V, C–V and bias-dependent S parameters.
• ## An investigation of the DC and RF performance of InP DHBTs transferred to RF CMOS wafer substrate
J. Semicond. 2018, 39(5): 054004
doi: 10.1088/1674-4926/39/5/054004
This paper investigated the DC and RF performance of InP double heterojunction bipolar transistors (DHBTs) transferred to an RF CMOS wafer substrate. The measurement results show that the maximum DC current gain of a substrate-transferred device with one emitter finger, 0.8 μm in width and 5 μm in length, changes only slightly, while the cut-off frequency and the maximum oscillation frequency decrease from 220 to 171 GHz and from 204 to 154 GHz, respectively. In order to gain a detailed insight into the degradation of the RF performance, small-signal models for the InP DHBT before and after substrate transfer are presented and their parameters comparably extracted. The extracted results show that the degradation of the RF performance of the device transferred to the RF CMOS wafer substrate is mainly caused by the additionally introduced substrate parasitics and the increase of the capacitive parasitics induced by the substrate transfer process itself.
• ## Asymmetric anode and cathode extraction structure fast recovery diode
J. Semicond. 2018, 39(5): 054005
doi: 10.1088/1674-4926/39/5/054005
This paper presents an asymmetric-anode, cathode-extraction fast and soft recovery diode. The device anode is partially heavily doped and partially lightly doped, and a P+ region is introduced into the cathode. Firstly, the characteristics of the diode are simulated and analyzed. Secondly, the diode was fabricated and its characteristics were tested. The experimental results are in good agreement with the simulation results. The results show that, compared with the P–i–N diode, although the forward conduction characteristic of the diode declines, the reverse recovery peak current is reduced by 47%, the reverse recovery time is shortened by 20% and the softness factor is doubled. In addition, the breakdown voltage is increased by 10%.
• # SEMICONDUCTOR INTEGRATED CIRCUITS
• ## Frequency equation for the submicron CMOS ring oscillator using the first order characterization
J. Semicond. 2018, 39(5): 055001
doi: 10.1088/1674-4926/39/5/055001
By utilizing the first order behavior of the device, an equation for the frequency of operation of the submicron CMOS ring oscillator is presented. A 5-stage ring oscillator is utilized as the initial design, with different Beta ratios, for the computation of the operating frequency. Later on, the circuit simulation is performed from 5 stages to 23 stages, with the corresponding oscillating frequencies being 3.0817 and 0.6705 GHz respectively. It is noted that the output frequency is inversely proportional to the square of the device length, and when the value of the Beta ratio is 2.3, a difference of 3.64% is observed on average between the computed and the simulated values of frequency. As an outcome, the derived equation can be utilized, with the inclusion of an empirical constant in general, for arriving at the ring oscillator circuit's output frequency.
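For orientation (this is the classical first-order relation, not the equation derived in the paper, which includes an empirical constant and the Beta ratio): an N-stage ring oscillator oscillates at f = 1/(2·N·t_d), where t_d is the per-stage delay, and the two quoted frequencies are consistent with that relation:

```python
def ring_osc_freq(stages, stage_delay):
    """Classical first-order ring-oscillator frequency: f = 1 / (2 * N * t_d)."""
    return 1.0 / (2.0 * stages * stage_delay)

# Back out the per-stage delay from the quoted 5-stage figure (3.0817 GHz)...
t_d = 1.0 / (2.0 * 5 * 3.0817e9)   # about 32.4 ps

# ...and predict the 23-stage frequency; the paper quotes 0.6705 GHz.
f23 = ring_osc_freq(23, t_d)       # about 0.67 GHz
```

The prediction agrees with the quoted 23-stage value to about 0.1%, suggesting the per-stage delay is nearly independent of stage count in the simulated range.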
• ## Design and implementation of quadrature bandpass sigma–delta modulator used in low-IF RF receiver
J. Semicond. 2018, 39(5): 055002
doi: 10.1088/1674-4926/39/5/055002
This paper presents the design and implementation of a quadrature bandpass sigma–delta modulator. A pole movement method for transforming a real sigma–delta modulator into a quadrature one is proposed through a detailed study of the relationship between the noise-shaping center frequency and the integrator pole position in the sigma–delta modulator. The proposed modulator uses a sampling-capacitor-sharing switched capacitor integrator and achieves a very small feedback coefficient by a series capacitor network; these two techniques can dramatically reduce capacitor area. A quantizer-output-dependent dummy capacitor load for the reference voltage buffer can compensate the signal-dependent noise that is caused by load variation. This paper designs a quadrature bandpass sigma–delta modulator for 2.4 GHz low-IF receivers that achieves 69 dB SNDR at 1 MHz BW and −1 MHz IF with a 48 MHz clock. The chip is fabricated in SMIC 0.18 μm CMOS technology; it draws a total current of 2.1 mA, and the chip area is 0.48 mm².
• ## An advanced SEU tolerant latch based on error detection
J. Semicond. 2018, 39(5): 055003
doi: 10.1088/1674-4926/39/5/055003
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle striking the latch or the error detection circuit may cause a faulty logic state of the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. The upset node in the error detection circuit can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEU effectively.
http://mathhelpforum.com/geometry/146889-classical-trisection-angle.html | # Math Help - classical trisection of an angle
1. ## classical trisection of an angle
Has anyone heard of the lost theorem and its relevance to trisection of an angle using classical method?
I have a 10 min video on it as well as a trisection method, although not using the lost theorem.....But I am looking for some feedback from anyone knowledgeable in this area.....
Thank you.... Jeremy
2. Originally Posted by linelites
Has anyone heard of the lost theorem and its relevance to trisection of an angle using classical method?
I have a 10 min video on it as well as a trisection method, although not using the lost theorem.....But I am looking for some feedback from anyone knowledgeable in this area.....
Thank you.... Jeremy
Did you watch this video? You cannot trisect a 180 degree angle with the method he was using. Pi = 3.14159 not 3. If you take the compass you cannot construct a true hexagon with this method. It is only an approximation and is not accurate.
3. Originally Posted by oldguynewstudent
Did you watch this video? You cannot trisect a 180 degree angle with the method he was using. Pi = 3.14159 not 3. If you take the compass you cannot construct a true hexagon with this method. It is only an approximation and is not accurate.
But the arc of a half circle is divided into thirds by the radius....creating three equilateral triangles, each angle at the vertex, (center of the circle) is equal to 60 degrees.
4. Well, this guy was more honest than most! He constructs two arcs and a straight line, for which he can trisect (the arcs correspond to 180 and 90 degrees) and uses those three points to construct a circular arc. He then says that, since the circular arc trisects those three arcs it seems "reasonable" that it would trisect any arc between them, "would it not be a steady progression" but then ends up saying "I don't know".
No, that circular arc does NOT trisect any arc.
5. Hi, ....I take it you do agree they trisect the 180 and the 90 degree arc. Can you give an explanation why the arc of a circle center thusly would not trisect other arcs as postulated? I don't doubt it so much as I am just wondering. Also, can you at least give a guess as to what kind of arc between these points would perform this task. Or is there no arc? But this seems unlikely, I would think there would be some kind of progressive arc that would trisect the ever decreasing arc of the ever decreasing angle. Could it be the arc of a circle that is centered somewhere else than where I chose? And how might that center be determined?
This is the crux of the problem....there should be a steady progression from G to I to P and from H, J,and Q that trisects any arc from 180 to 0. Can we find the arc that defines that progression?
6. Originally Posted by linelites
But the arc of a half circle is divided into thirds by the radius....creating three equilateral triangles, each angle at the vertex, (center of the circle) is equal to 60 degrees.
I apologize. I was totally out of line with my reply. You are correct about the 180 degrees. But there is no way to trisect an angle in general.
You did not deserve the reply I gave you. I should not take my own frustrations out like that.
7. thanks for the apology, but I didn't take it personally, I just thought you made a mistake and would realize it upon reflection. No problem. Kind of you to write. (besides, I've gotten much worse from some who wouldn't even look at the construction)
And I realize there is not supposed to be a classical solution to trisecting an angle, but I think this construction offers an interesting mental exercise, as I describe in my last post reply to HallsofIvy. And I have updated the video with a caption as well, that examines this question. It is thus.....
Shouldn't some kind of curve (2 actually) define all the trisecting points as the angle transfers from 180 to 0 degrees on this template? What kind of curve?
Any thoughts?
8. Hi linelites, really nice to see your video. I was unaware of the problem so I fiddled some with it.
I took your idea and used the fact that a square can be divided into three equal parts, and I'm sticking the square to the base of the bisected angle. Could you have a look at this diagram? What am I doing wrong or not understanding, since I have no problem trisecting any angle with this?
Attached Thumbnails
9. I see you are using the Brunnes Starcut....are you familiar with Malcolm Stewart's new book Patterns of Eternity? Really good.
I think you are making the mistake of thinking that trisecting the chord of an angle will trisect the arc. Is this what you are thinking?
J
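The point is easy to see numerically (a small sketch, not part of the thread): for a 120° central angle on the unit circle, the ray through the point one-third of the way along the chord makes a 30° angle with the base, not the 40° that would trisect the arc.

```python
import math

def chord_third_angle(theta):
    """For a unit-circle sector of angle theta (radians), return the central
    angle of the ray through the point 1/3 of the way along the chord."""
    ax, ay = 1.0, 0.0                                    # chord endpoint at angle 0
    bx, by = math.cos(theta), math.sin(theta)            # chord endpoint at angle theta
    px, py = ax + (bx - ax) / 3.0, ay + (by - ay) / 3.0  # 1/3 along the chord
    return math.atan2(py, px)

theta = math.radians(120)
chord_angle = chord_third_angle(theta)   # 30 degrees
true_angle = theta / 3.0                 # 40 degrees
```

So trisecting the chord under-trisects the arc, and the gap grows with the angle.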
10. ah, yes I am, now I see it. tricky problem isn't it.
I'll see if I can find anything on 'the Brunnes Starcut' or Malcolm Stewart's book. Thanks for the tip and help.
11. Originally Posted by linelites
Has anyone heard of the lost theorem and its relevance to trisection of an angle using classical method?
I have a 10 min video on it as well as a trisection method, although not using the lost theorem.....But I am looking for some feedback from anyone knowledgeable in this area.....
Thank you.... Jeremy
Hello Jeremy,
Thank you for pointing towards this lost theorem. I played a little with it and tried to place the triangle in a sine/cosine pattern. This gives some interesting insight on the relation of the theorem with the circle, like a right triangle inscribed in a circle gives insight in the Pythagorean theorem. I posted some drawings on Lost theorem about angular proportions.
Arjen Dijksman
12. Hi Arjen,
I am glad to see you develop this theorem. I am not anywhere near your level of expertise, but I'll try to follow your post. I have tried to reach Mr. Romain, but emails I send always come back.
I suppose you already thought about the special case of this triangle, involving the pentagram, with degrees 72, 36 and 72. I wonder what you might find there.
I must ask: did you look at my construction? I don't suspect it succeeds but I think it poses an interesting question....."there must be some kind of arc that includes points GIP (and HJQ) that defines all the trisecting points of angle arcs as they progress from 180 to 0 degrees on this template....what kind of arc would that be?"
Have you any idea, Arjen, if that arc could be defined, let alone classically constructed?
Jeremy
13. Hi Jeremy,
Yes, I thought about the pentagram (and any other regular polygon). The sine chord pattern then shows supplementary symmetries.
I also looked at your trisecting construction. I need some time to figure out what kind of curve defines all those trisecting points: is it a circular curve or is it something else? I'm not really an expert. Just an amateur with practice in circle drawings... :-) I'll post an answer about it, if I find something interesting.
Arjen
14. Originally Posted by linelites
... can you at least give a guess as to what kind of arc between these points would perform this task. Or is there no arc? But this seems unlikely, I would think there would be some kind of progressive arc that would trisect the ever decreasing arc of the ever decreasing angle. Could it be the arc of a circle that is centered somewhere else than where I chose? And how might that center be determined?
This is the crux of the problem....there should be a steady progression from G to I to P and from H, J,and Q that trisects any arc from 180 to 0. Can we find the arc that defines that progression?
I guess the progressive arc that trisects the circle arc is a hyperbolic arc, because when you increase the angle above 180°, the progressive arc takes the direction of an asymptote near to 60°. But I would have to investigate it more in order to characterize that hyperbola.
Arjen
Thanks Arjen, I get the gist of that. I have a book on straightedge and compass construction of hyperbolic curves called Patterns in Space by Robert Stanley Beard, so maybe there's hope (of course, such words are heresy, so I retract them).
BTW, I don't know how valid the approach, but Romain also hints at a solution to the heptagon using the lost theorem, in a triangle with angles of x, 2x and 4x, totaling 7x, which thus gives a 1/7 of 180 degree angle.
http://clay6.com/qa/20393/integrate-i-int-limits-1-3-sqrt-dx | # Integrate : $I= \int \limits_1^3 \sqrt {3+x^3}\,dx$
$\begin {array} {ll} (a)\;I > 2 \sqrt {30} \\ (b)\;I < 2 \sqrt {30} \\ (c)\;I > 12 \\ (d)\;I < 12 \end {array}$
The integral has no elementary closed form, so we bound it by the maximum–minimum method: if $m \le f(x) \le M$ on $[a,b]$, then $m(b-a) \le \int_a^b f(x)\,dx \le M(b-a)$.
$f(x) =\sqrt {3+x^3}$
$f'(x) =\large\frac{3x^2}{2 \sqrt {3+x^3}}$
$f'(x) \geq 0$ on $[1,3]$, so $f$ is increasing there; its minimum and maximum occur at the endpoints $x=1$ and $x=3$.
$f(1)= \sqrt 4 = 2 = m$ (minimum)
$f(3)=\sqrt {30} = M$ (maximum)
$2 \cdot (3-1) < \int\limits _1^3 \sqrt {3+x^3}\; dx < \sqrt {30} \cdot (3-1)$
$4 < \int \limits_1^3 \sqrt {3+x^3} \;dx < 2 \sqrt {30}$
Hence (b) is the correct answer.
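As a sanity check, the integral can be evaluated numerically; a short sketch using composite Simpson's rule (the quadrature choice is mine, not part of the original solution) confirms the value lies inside the bound $(4,\; 2\sqrt{30})$:

```python
import math

# Numerically check the bound 4 < I < 2*sqrt(30) for I = ∫_1^3 sqrt(3 + x^3) dx,
# using composite Simpson's rule over n even subintervals.
def f(x):
    return math.sqrt(3 + x**3)

def simpson(f, a, b, n=1000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

I = simpson(f, 1, 3)
print(I)   # ≈ 6.91, comfortably inside (4, 2*sqrt(30) ≈ 10.95)
```

So the integral is roughly 6.91, which satisfies $4 < I < 2\sqrt{30}$ but not $I > 12$, consistent with choice (b).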
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006399 | Research Article
# Working Memory Cells' Behavior May Be Explained by Cross-Regional Networks with Synaptic Facilitation
• Affiliation: University of Pittsburgh, Department of Mathematics, Pittsburgh, Pennsylvania, United States of America
• [email protected]
Affiliations: University of Pittsburgh, Department of Mathematics, Pittsburgh, Pennsylvania, United States of America, MIND Research Institute, Santa Ana, California, United States of America, Johns Hopkins University, Department of Neurosurgery, Baltimore, Maryland, United States of America
• Affiliation: University of Pittsburgh, Department of Mathematics, Pittsburgh, Pennsylvania, United States of America
• Affiliation: Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles, Los Angeles, California, United States of America
• Affiliations: MIND Research Institute, Santa Ana, California, United States of America, Johns Hopkins University, Department of Neurosurgery, Baltimore, Maryland, United States of America
• Published: August 04, 2009
• DOI: 10.1371/journal.pone.0006399
## Abstract
Neurons in the cortex exhibit a number of patterns that correlate with working memory. Specifically, averaged across trials of working memory tasks, neurons exhibit different firing rate patterns during the delay of those tasks. These patterns include: 1) persistent fixed-frequency elevated rates above baseline, 2) elevated rates that decay throughout the task's memory period, 3) rates that accelerate throughout the delay, and 4) patterns of inhibited firing (below baseline) analogous to each of the preceding excitatory patterns. Persistent elevated rate patterns are believed to be the neural correlate of working memory retention and preparation for execution of behavioral/motor responses as required in working memory tasks. Models have proposed that such activity corresponds to stable attractors in cortical neural networks with fixed synaptic weights. However, the variability in patterned behavior and the firing statistics of real neurons across the entire range of those behaviors, across and within trials of working memory tasks, are typically not reproduced. Here we examine the effect of dynamic synapses and network architectures with multiple cortical areas on the states and dynamics of working memory networks. The analysis indicates that the multiple pattern types exhibited by cells in working memory networks are inherent in networks with dynamic synapses, and that the variability and firing statistics in such networks with distributed architectures agree with that observed in the cortex.
### Introduction
Persistent elevation in firing rates of cortical neurons during retention of memoranda has been suggested to represent the neuronal correlate of working memory [1][3]. This activity in so-called memory cells (as observed in microelectrode recordings of neurons in the cortex of primates during the performance of delay tasks) exhibits a number of different general patterns. One pattern consists of cells whose elevated firing rate persists, on average across trials of the delay task, at the same rate for the entire period during which information of the memorandum is maintained in working memory. This type of dynamics represents the canonical bistable activity which has been a major focus of theoretical and computation modeling. A second elevated firing rate pattern consists of cells whose rate either decreases or increases throughout the memory period of a delay task. In decreasing-rate memory cells, the elevated activity is attuned to the memorandum (cue) of the task, and firing rate decays as the delay progresses towards the response of the task. In increasing-rate or ramping cells, elevated activity is motor- or response-coupled, and firing rate accelerates as the response of a task approaches. These rate-changing pattern cells and their respective networks have been suggested to represent two mutually complementary and interactive representations engaged in the transfer of information of cross-temporal contingencies from memory to action in working memory. Cells exhibiting these pattern types have been found to occur anatomically intermixed in the cortex [4], with cue- and response-coupled cells appearing to be more common than fixed delay rate cells [5]. In addition to these persistently elevated firing rate patterns, neurons which presumably are constituent members of working memory networks exhibit analogous inhibited (below baseline rate) firing patterns. 
Finally, many cells exhibit firing rate changes correlated with different working memory task events such as the presentation of memoranda (cue period) and/or the response of the delay task, but maintain baseline firing rates throughout the delay period during which the memorandum is retained in active short-term memory.
While the mechanism(s) by which the patterns of activity are initiated and maintained in working memory are undetermined, a number of plausible hypotheses have been proposed. With respect to persistent elevated-rate patterns, prevailing ideas which have emerged from computational and theoretical studies are that the activity arises as stable states in recurrent attractor networks [6]–[16] and/or inherent cellular dynamics [17]–[24]. These studies have had success in reproducing general bistable memory behavior. For example, with respect to network studies, successful working memory behavior has been attained as defined by achieving persistent increased firing rates of cue-specific subpopulations of units in networks during the putative memorandum retention period of simulated delay tasks. A difficulty typically encountered, however, is obtaining memory behavior within the specific range of firing rates, statistics, and variability observed in real neuronal populations of cortical working memory networks across the range of different persistent patterned behaviors. Neurons exhibiting each of the different persistent activity pattern types with some overall average frequency do so only as an average across multiple trials of a working memory task. Individual cells exhibit a significant amount of variability, however, both in terms of firing frequency within and between trials of working memory tasks, and may even exhibit different patterned behaviors from trial to trial [5]. Thus while cells exhibit one of the given patterns described above with some overall average firing frequency across many trials (as observed for example in an average peristimulus time histogram), they exhibit different average firing rates and/or pattern behaviors from trial to trial of the working memory task.
A potential source of these and other difficulties [25] is that they are examined within the framework of static synaptic structures. Specifically, the networks have fixed architectures, and are trained such that the strength of the connections between units (the weight matrices) produces the desired memory behavior. Once memory behavior is achieved, the weight matrix is held constant. However, the simplification of fixed synaptic strengths may not be physiologically reasonable in light of the highly dynamic structure of the cortex. Cortical networks, and their constituent neurons, receive constant input from both external and internal sources, with learning and plasticity occurring concurrently with behavior. From a functional standpoint, fixed connection strengths necessarily limit the number of activities a network can perform, which could be undesirable given the plasticity of cortical function. Further, the ubiquity of cortical working memory [26] suggests that its associated activity might not occur in fixed, dedicated networks, but rather may arise from processes present in networks performing a variety of different functions [27].
Functional architecture is a second consideration of potentially fundamental importance to the dynamics of working memory networks. Typically, efforts have focused on studying working memory within the framework of local modules or networks that exist at various specific or general locations in the cortex. However, while working memory and/or working memory-correlated neuronal activity may be maintainable within local networks (or even cellularly), considerable evidence from neurophysiological and imaging studies has shown that working memory involves widely distributed cortical networks across multiple cortical areas [27]. Such a widely distributed architecture, if not fundamentally necessary for producing the firing rate patterns observed in working memory network cells, is probably active in the modulation of that activity. This modulation might entail not only producing the specific range of firing rates, but also the range of pattern types.
Recent work has indicated that working memory networks incorporate dynamic synapses. One study [28] revealed that connections between pyramidal cells in the prefrontal cortex exhibit facilitation, while others have demonstrated that neocortical synapses undergo substantial synaptic plasticity following synaptic activity [29], [30]. In particular, it has been found that cells in certain cortical regions exhibit increased responses to sequences of theta burst stimulation, both from burst to burst within a given burst sequence, as well as across successive sequences. Work by Hempel et al. [31], and Galarreta and Hestrin [32], indicated that cortical synapses can exhibit augmentation (from 15 to 60 percent) that correlates with the frequency and duration of tetanic stimulation, similar to that frequently observed during the presentation of memoranda in working memory tasks.
Several computational efforts have attempted to address various aspects of the issues described above. For example, one study demonstrated that persistent activation with realistic frequencies might be achieved if working memory corresponds to attractor states on the unstable branch, and has proposed mechanisms by which such states might be stabilized [33]. Other work has emphasized the potential role of dynamic synapses in working memory processes, examining the effects of dynamic synaptic augmentation and rapid Hebbian plasticity in a recurrent network framework [25]. This work indicated that synaptic augmentation can reduce the amount of prior structure required for persistent activation to take place, while rapid Hebbian plasticity could enable persistent activity to take place within firing rate ranges observed in real cortical neurons. More recent studies have demonstrated that combinations of synaptic depression and facilitation might extend the attractor neural network framework to represent time-dependent stimuli [34]. Further efforts have indicated that calcium-mediated synaptic facilitation could produce bistable persistent activation with firing rate increases in the range typically observed in real cortical cells [35].
While working memory models have mostly concentrated on bistable persistent activation, some efforts have also addressed the issue of cue- or response-coupled patterns of activity that steadily increase and decrease during delay periods. For example, graded activity in recurrent networks with slow synapses has been modeled [24], while another recent study examined such activity in uniform recurrent networks with stochastic bimodal neurons without NMDA-receptor-mediated slow recurrent synapses [36]. This work has indicated that graded memory activity could be very difficult to produce within a single population or local module. Still other studies have examined the ability of networks to produce ramping behavior by maximizing the time the system's trajectory spends around the saddle node of the system's phase space [37]. Other work, while not necessarily producing working memory cells with the firing rate statistics of real cells, has examined networks that produce the types of general patterns observed in working memory [38], [39]. A distributed network architecture may be crucial in understanding and producing those patterns of activity.
In this work we examine a cortical model of working memory incorporating dynamic synapses, both within a local and a distributed cortical framework. We investigate the mechanism of dynamic synaptic facilitation in the generation of all of the different patterns of persistent activity associated with working memory, and the effect of a distributed cortical architecture on the dynamics of working memory patterns. We first examine a firing rate model incorporating dynamic synapses, representing a working memory network residing locally in a given cortical area. We analyze the statistics and firing-rate patterns of this network during simulated working memory and compare the results with those of real cortical neurons recorded from parietal and prefrontal cortex of monkeys performing working memory tasks. A reduction of this model to a 2-dimensional system enables an analysis to completely characterize the states of the system. We then examine a distributed firing rate model consisting of 2 and 4 locally interconnected networks, analyzing the possible states as a function of different long-range connectivity schemes and strengths. The expansion of the architecture to multiple networks allows the incorporation of possible heterogeneity. We compare the output of these models (local and global architecture) with the activity of the database of real cortical neurons recorded extracellularly from the prefrontal and parietal cortex of primates performing working memory tasks. The model expands on previous work examining the ability of population models with dynamic synapses to produce bistable memory states, or rate-changing states (either cue dependent during the stimulus period—i.e.
Barak and Tsodyks [34]—or exclusively rate changing during the delay (Durstewitz [37]))—to produce all of the different patterns (including inhibitory patterns) recorded during the delay period, and shows that these patterns can change their temporal features to accommodate a continuum of delay periods, as well as possessing relative rate changes and statistics as recorded in real cortical neurons. We also demonstrate that different patterns can occur in a distributed network concomitantly in a complementary fashion as observed in the cortex. From the mean field firing rate model, a spiking network model is obtained whose population's mean firing rate corresponds to that of the firing rate model. This enables direct comparison of the activity with real cortical neurons. We examined the effect on unit activity in this spiking network with a distributed architecture consisting of up to four local networks connected by long-range projections. The patterns and statistics of these spiking networks are analyzed and directly compared with the range of activities and firing statistics observed in the database of real cortical neurons. Finally we quantify the variability in spiking unit activity as observed in real cortical networks, and demonstrate from a nonlinear analysis how this activity arises. The results are compared to that observed in the real cortical cell populations. The results of this work demonstrate that all of the firing patterns correlated with working memory are inherently generated in distributed networks incorporating dynamic synapses, and that these exhibit variability and firing rate statistics in agreement with what is observed in the cortex.
### Methods
We start with a firing rate model of a local network (Figure 1A). While the population might correspond to a network anywhere in the cortex, for convenience of comparison with the real cortical data, we might associate it with a working memory network in prefrontal or parietal cortex. The network equation describing the synaptic activity of the population is given by (1),
where S denotes synaptic activity. The second term in (1) corresponds to the firing rate of the population, with the function F(X) given by (2),
in which τm is the membrane time constant, and b is a parameter inversely proportional to the noise. This form of the firing rate function mimics the firing rate of a class I neuron in the presence of noise (~1/b) [40]. The parameter C in equation (1) is the strength of feedback connections in the population, τs is the decay constant for synaptic activity, w corresponds to the synaptic facilitation, and θ is the threshold. I(t) corresponds to an external current which increases during memorandum (cue) presentation in the simulated working memory task.
Dynamic synaptic facilitation (w) is incorporated in the model according to (3)
where τw is the decay constant, γ is a proportionality constant controlling the amount of facilitation as a function of intra-cellular calcium, and Ca is the calcium concentration. Cao is a reference parameter controlling the level of intracellular calcium at which facilitation begins to increase. The calcium concentration dynamics are given by (4)
where τca is the decay constant, and F(x) is of the form given in equation (2).
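The single-population dynamics of equations (1)–(4) can be sketched numerically. Since the exact rate function and parameter values are not reproduced in this text, the sketch below substitutes a softplus-smoothed square-root rate function (consistent with the stated class I, noise ~1/b shape) and illustrative parameter values; it is an assumption-laden illustration, not the paper's implementation.

```python
import math

# Hedged sketch of the single-population mean-field model (cf. eqs. 1-4).
# F(x) = sqrt(log(1+exp(b*x))/b) is an ASSUMED smooth square-root F-I curve;
# all parameter values below are illustrative, not the paper's.
def F(x, b=5.0):
    return math.sqrt(math.log1p(math.exp(b * x)) / b)

def simulate(T, dt=1e-3, C=0.8, theta=0.2, tau_s=0.1, tau_w=1.0,
             tau_ca=0.5, gamma=0.5, Ca0=0.2, cue=(5.0, 5.3), I_cue=0.5):
    S, w, Ca = 0.1, 1.0, 0.0
    trace = []
    for k in range(int(T / dt)):
        t = k * dt
        I = I_cue if cue[0] <= t < cue[1] else 0.0   # cue-period input current
        r = F(C * w * S + I - theta)                 # population firing rate
        S += dt * (-S + r) / tau_s                   # synaptic activity (cf. eq. 1)
        Ca += dt * (-Ca + r) / tau_ca                # calcium dynamics (cf. eq. 4)
        w += dt * (-(w - 1.0) + gamma * max(Ca - Ca0, 0.0)) / tau_w  # cf. eq. 3
        trace.append((t, r, w))
    return trace

trace = simulate(T=8.0)
r_base = trace[int(4.9 / 1e-3)][1]                   # rate just before the cue
r_cue = max(r for t, r, w in trace if 5.0 <= t < 5.3)
```

In this toy run the cue current raises the firing rate, calcium rises with the rate, and facilitation w rises above its baseline, transiently elevating the delay-period rate, which is the qualitative mechanism the model exploits.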
The above local architecture of the model is expanded to a distributed one, first through the addition of a second population, coupled to the first by recurrent long-range projections (Figure 1B). This allows the introduction of heterogeneity into the network as well as representing the first step towards investigating the effect of a distributed architecture on the dynamics and states of working memory. The system dynamics are described by the coupled network equations for the synaptic activity (5),
where i = 1, 2 corresponding to the two populations.
The two populations can be considered to reside in different cortical areas (i.e. prefrontal and parietal cortex) or as two populations within the same area. For convenience of description we can consider the populations to represent networks in different cortical areas, which for purposes of association with the real cortical data we take as prefrontal cortex (population 1) and parietal cortex (population 2). In these equations then, C12 represents the strength of the projections from the parietal cortex population to the prefrontal cortex population, and C21 is the connection strength from the prefrontal population to the parietal population, while C11 and C22 are the connection strengths within the prefrontal and parietal populations respectively. Synaptic facilitation is given by equation (3), and the calcium dynamics satisfy equations similar to (4), which are (6):
The distributed architecture is further extended to one consisting of four populations (Figure 1C), by recurrently connecting two of the 2-population models above such that every population has projections to every other population. The system dynamics are given by: (7)
where i = 1, 2, 3, 4 corresponding to the 4 populations, and with analogous extensions of equations (6) controlling the calcium dynamics. In this network each pair of populations (i.e. populations 1 and 2, and populations 3 and 4) are more strongly coupled to each other than they are to populations of the other pair. The network can be considered now to represent two local networks consisting of 2 populations each, residing within different cortical areas (i.e. prefrontal and parietal cortex). Thus the effects of heterogeneity may be examined, in addition to the effect of a distributed architecture on working memory dynamics and states. Particularly it allows the examination of the effect of heterogeneity and a distributed architecture on the occurrence of “complementary” working memory behaviors indicated by experiments to be simultaneously present in networks in the cortex.
We begin the analysis first from the single population model. The single population possesses 3-dimensional dynamics in the variables for synaptic activity (S), facilitation (W), and calcium concentration (Ca). A reduction of this model to 2 dimensions is achieved by assuming steady state calcium (dCa/dt = 0) allowing the system to be rigorously analyzed. While assuming steady state calcium does not have an immediate justification from a neurophysiological standpoint, it produces a system with the same attractor structure as the 3-dimensional system and thus allows the rigorous analysis. We carried out analysis of the dynamics and the stability of states of the model using XPPAUT [41]. For the 2-dimensional reduced model we examined the phase portraits (Figure 2), from which the fixed points of the system and their stability were determined. Through this analysis, a range of biologically plausible parameters were determined which generate persistent working memory pattern types with statistics in the range typical of real cortical neurons (Table 1). These network parameters were then used in the full 3-dimensional model with dynamic calcium. For the 3-dimensional model, the fixed points of the system were first examined to determine coincidence with the 2-D model. Simulated working memory tasks were then run, varying the magnitude of the facilitation, self connectivity, and the magnitude of the input current. The firing rate patterns and frequencies exhibited by the model were compared to the types of patterns and frequencies observed in the database of parietal and prefrontal neurons recorded from monkeys during performance of working memory tasks in other studies. [42][44]. A simulated trial of a working memory task followed approximately the same generic sequence as that for which the single neuron database was acquired. 
The simulated task consisted of a 20-second baseline period (during which the population was in a baseline firing fixed point), followed by a 300-ms sample cue period corresponding to the period during which a memorandum was presented. After the cue period, a 12 second delay period followed. We did not consider in this study a behavior/motor response period following the delay, but rather restricted our analysis to the network behavior during these first 3 temporal aspects of the working memory task. To analyze the firing rate patterns and firing rate statistics of the model, peristimulus time (PSTH) histograms were generated and analyzed over a range of values of the self connectivity and facilitation. In addition, the phase diagram of different possible pattern states occurring over a range of values of the self connectivity and maximum synaptic facilitation were examined. For the single population model, PSTH histograms and phase diagrams were also generated for different values of dynamic synaptic depression and self connectivity, and the resulting patterns and firing rates were analyzed. Synaptic depression was incorporated by allowing the parameter for maximum facilitation (Wmax) to range over values less than the value of the baseline facilitation (Wmin) in equation (3).
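The PSTH construction used throughout the analysis can be sketched as follows; the bin width and spike times below are illustrative values, not the paper's.

```python
# Hedged sketch of building a trial-averaged peristimulus time histogram (PSTH):
# spike counts are binned over [t0, t1) and converted to firing rate in
# spikes/s, averaged over trials.
def psth(trials, t0, t1, bin_width):
    nbins = int((t1 - t0) / bin_width)
    counts = [0] * nbins
    for spikes in trials:                 # one list of spike times per trial
        for s in spikes:
            if t0 <= s < t1:
                counts[int((s - t0) / bin_width)] += 1
    return [c / (len(trials) * bin_width) for c in counts]

# Three toy trials, 1-second window, 250-ms bins.
trials = [[0.1, 0.5, 0.9], [0.2, 0.6], [0.15, 0.55, 0.95]]
rates = psth(trials, 0.0, 1.0, 0.25)
print(rates)   # → [4.0, 0.0, 4.0, 2.666...] spikes/s
```

Averaging such histograms across trials, as described above, yields the pattern types (fixed, decaying, ramping, inhibited) that are then classified.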
For the distributed 2-population model, the parameters used were within the ranges determined and used in the single population model, and simulated working memory trials were conducted following the same course as that used for the single population model. Firing rate patterns and statistics of both populations were analyzed over a range of the inter-population connectivity values. PSTH histograms were generated to analyze the firing rate patterns and statistics. Phase diagrams of the firing rate patterns occurring in each population were generated as a function of the inter-population connectivity strength. An analysis of the behavior of the entire network was carried out through an examination of the possible pattern types occurring concomitantly in the two populations. This analysis was carried out by examining overlapping patterns in the phase diagrams of the two populations.
For the distributed 4-population model, the parameters for each population were within small ranges of those determined and used in the preceding single- and 2-population models. Simulated working memory trials were conducted following the same previous course as well. Firing rate patterns and statistics occurring in all four populations were analyzed and compared to the activity of the real parietal and prefrontal neurons. Phase diagrams were generated of the different firing rate patterns occurring in the populations as a function of different inter-population connectivities. An analysis of the behavior of the entire network was carried out through an examination of the possible different pattern types occurring concomitantly in the different populations. This analysis was carried out through an examination of overlapping states in the phase diagrams of the four populations. Resulting behaviors were compared with that of the 2-population model.
Having determined the dynamics through the study of the firing rate models, spiking models were generated to make direct comparison with the single unit data. Spiking model versions of the 2- and 4-population firing rate models were generated by replacing the populations' activities first with networks of 200 spiking units exhibiting the same overall mean firing rates. The network consisted of spiking neurons with all-to-all connectivity and random strengths. Connections between populations were both excitatory and inhibitory. Specifically, mean connectivity values were chosen from regions of the phase diagrams of the mean field model in which the range of memory cell pattern types were robustly exhibited. These connectivity values were then used as the values for setting the mean of the connectivity in the spiking model. Distributed spiking networks consisting of two populations of 100 units each, and four populations of 50 units each were generated with the average inter-area connectivity chosen to match the mean field model values within ranges of the standard deviation (Tables 2 and 3).
The spiking activity of the single units was modeled as theta neurons [40], [45]. Unit firing frequency as a function of the injected current (F-I curve) can be obtained analytically in the theta model. This F-I curve is a square root function which provides a correspondence between the firing rate model and the theta model. The F-I curve for the theta model is described by
Whereas the curve for the mean field firing rate model is
thus obtaining the correspondence between the mean field and spiking models (as the parameter b goes to infinity the above expression becomes the equation for a noisy integrate and fire model).
The membrane potential dynamics of a unit in the spiking model is given by the equation (10)
where I(t) is an external current occurring during the presentation of memoranda, and amp is the amplitude of the Wiener noise. The synaptic activity of a unit sj in equation (10) increases with each afferent spike according to (11)
where β corresponds to the increase in synaptic activity from a single afferent spike, tmj is the time of incidence of the mth afferent spike on the jth neuron, and τs is the decay constant. The dynamics of the synaptic facilitation wj in equation (10) is given by (12)
where Ca corresponds to the intracellular calcium concentration, which modulates the change in facilitation and increases with each spike according to (13)
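A minimal sketch of a single theta-model unit can illustrate the square-root F-I relation referred to above. This assumes the canonical Ermentrout–Kopell form dθ/dt = (1 − cos θ) + (1 + cos θ)·I with a spike registered when θ crosses π; the drive values and noise amplitude are illustrative, not the paper's parameters.

```python
import math, random

# Hedged sketch of a theta-model unit (cf. eq. 10), canonical form assumed.
def run_theta(I, T=100.0, dt=1e-3, amp=0.0, seed=0):
    rng = random.Random(seed)
    theta, spikes = -math.pi, 0
    for _ in range(int(T / dt)):
        noise = amp * rng.gauss(0.0, 1.0) * math.sqrt(dt)
        theta += dt * ((1 - math.cos(theta)) + (1 + math.cos(theta)) * I) + noise
        if theta > math.pi:        # phase crosses pi: register a spike and wrap
            theta -= 2 * math.pi
            spikes += 1
    return spikes / T              # mean firing rate

# For constant I > 0 the canonical theta neuron fires at rate sqrt(I)/pi,
# so quadrupling the drive should roughly double the rate.
r_low, r_high = run_theta(0.25), run_theta(1.0)
```

Here r_high/r_low comes out near 2, matching the square-root F-I curve that provides the correspondence between the spiking units and the mean-field rate function.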
The 2-population spiking network consisted of two “local” networks of 100 neurons each with all-to-all connectivity, and with average weaker recurrent connectivity between populations than within the populations. The activation properties of each individual network reflect that of the single populations of the firing rate models.
For the 2- and 4-population spiking networks, working memory task simulations were conducted similarly to those for the firing rate model, and the firing rate patterns and statistics were analyzed. During the baseline period of the simulated working memory task, facilitation in the models was kept low such that the firing rate of the populations was near the baseline fixed-point attractor state inherent in the model (as determined from the phaseplane analysis of the firing rate model). After 20 seconds, the baseline period ended and an external current I(t) was applied for 300 ms. The external current raises the firing rate of many units in the populations, simulating the activity observed during presentation of the memorandum in working memory tasks. The current input and increased firing rate triggers dynamic facilitation through equations (11–13). After the cue period, the delay period begins. For the spiking model simulations, unit activity was analyzed over an 11-second delay period which is proportional to the delay period of the working memory tasks during which the parietal and prefrontal cells of the database were recorded. PSTH histograms of units were generated to analyze the patterns and firing rate statistics of the units. Average PSTH histograms were generated for each unit over 10 simulated working memory task trials. Pattern types appearing in the average PSTH histograms were determined and the distribution of patterns in the network were compared to the distribution of patterns observed in the parietal and prefrontal neuron populations of the database. Variability in working memory patterns occurring across trials for each unit was analyzed and compared between the 2- and 4-population networks and the neuronal populations.
To examine the effect of network size on patterns exhibited in the networks across trials and their variability, we generated 2- and 4-population networks consisting of 2000 spiking units. For these networks the distribution of pattern types exhibited on each of 20 simulated working memory task trials was obtained and the average distribution across all 20 trials was determined. These distributions were compared to the distributions obtained with the 200 unit networks as well as that observed in the parietal and prefrontal neuron populations of the database. Variability in firing rate within trials was determined through an analysis of the coefficient of variation (CV) of the ISI's during the baseline and delay periods. Variability in working memory patterns occurring across trials for each unit was analyzed and compared to that observed in the 2- and 4-population networks of 200 units.
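The ISI coefficient-of-variation analysis used above can be sketched as follows; the spike trains below are synthetic examples (a periodic train and a Poisson train), not recorded data.

```python
import statistics, random

# Hedged sketch of the ISI coefficient of variation: CV = std(ISI)/mean(ISI).
# A clock-like train gives CV near 0; a Poisson train gives CV near 1.
def isi_cv(spike_times):
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return statistics.stdev(isis) / statistics.mean(isis)

regular = [0.1 * k for k in range(1, 51)]      # perfectly periodic train
rng = random.Random(1)
t, poisson = 0.0, []
for _ in range(500):
    t += rng.expovariate(10.0)                 # Poisson train, ~10 spikes/s
    poisson.append(t)
```

Comparing such CV values between baseline and delay periods, and between model units and recorded neurons, quantifies within-trial firing variability.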
The database with which the different models' activity is compared consists of 812 neurons recorded extracellularly from the parietal cortex (Brodmann areas 2, 3, 5, 7) and prefrontal cortex (areas 6, 8, 9 and 46) of monkeys performing working memory tasks. In parietal cortex, 521 cells were recorded from monkeys during performance of a haptic delayed matching-to-sample task [42], and in prefrontal cortex, 291 neurons were recorded from monkeys during the performance of a cross-modal audiovisual delayed-response task [43][44]. The analysis of this database and the compilation of its statistics in terms of firing rates, patterns and statistics have been presented elsewhere [5].
### Results
#### Single Population Model
The reduced single-population model is 2-dimensional (in the variables S and W for synaptic activity and facilitation respectively) and thus phaseplane and rigorous mathematical analysis was carried out. The nullclines of the system (Figure 2) correspond to the curves along which the synaptic activity and facilitation are constant (dS/dt = dW/dt = 0). The steady states of the system are defined by the points at which these 2 curves intersect. For sufficiently low self connectivity strengths (or low maximum facilitation), only one such point is present, corresponding to the baseline firing rate of the population (Figure 2A). Stability analysis reveals that this is an attracting fixed point. Thus transient perturbations from the external current during the sample period (resulting in increased synaptic activity, firing rates and facilitation) ultimately relax back to this state. As the self connection strength (or amount of facilitation for a given input current) is increased, the W-nullcline intersects the S-nullcline at 3 points (Figure 2B). Stability analysis reveals that 2 of these are attracting fixed points, and one is a saddle point. The presence of a stable state corresponding to baseline, and a second stable state corresponding to an above baseline firing rate, enables bistable behavior, although the difference in firing rates associated with these states is much larger than typically observed in cortical data over much of the parameter space. For example, in the subpopulation of memory cells recorded from the parietal cortex, 90.1% of the cells exhibited increases from baseline to delay of less than 10 Hz, and 69.9% of frequency changes were less than 5 Hz. In the subpopulation of memory cells recorded from prefrontal cortex, 100% exhibited increases of less than 10 Hz, and 95.2% were less than 4 Hz.
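The fixed-point analysis described above can be sketched numerically. The paper's actual S/W equations are not reproduced in this section, so the dynamics below (sigmoid rate function, a linear facilitation nullcline, and all parameter values) are stand-in assumptions; only the procedure mirrors the analysis: restrict to the W-nullcline, locate intersections with the S-nullcline by sign changes and bisection, and classify each fixed point by the eigenvalues of a numerical Jacobian.

```python
import numpy as np

# Illustrative stand-in parameters, NOT the paper's (Table 1) values.
TAU_S, TAU_W = 0.05, 1.5
J, W0, WMAX, THETA = 8.0, 1.0, 1.6, 3.0

def f(x):
    """Assumed sigmoidal rate nonlinearity with threshold THETA."""
    return 1.0 / (1.0 + np.exp(-(x - THETA)))

def rhs(s, w):
    ds = (-s + f(J * w * s)) / TAU_S                 # synaptic activity
    dw = (W0 + (WMAX - W0) * s - w) / TAU_W          # assumed facilitation rule
    return np.array([ds, dw])

def g(s):
    """dS/dt restricted to the W-nullcline w = W0 + (WMAX - W0) * s."""
    return -s + f(J * (W0 + (WMAX - W0) * s) * s)

def fixed_points(n=4000):
    s = np.linspace(0.0, 1.0, n)
    roots = []
    for i in np.nonzero(np.diff(np.sign(g(s))))[0]:  # bracketed sign changes
        lo, hi = s[i], s[i + 1]
        for _ in range(60):                          # bisection refinement
            mid = 0.5 * (lo + hi)
            if np.sign(g(mid)) == np.sign(g(lo)):
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots

def classify(s, w, eps=1e-6):
    """Stability from the real parts of the numerical Jacobian's eigenvalues."""
    jac = np.column_stack([
        (rhs(s + eps, w) - rhs(s - eps, w)) / (2 * eps),
        (rhs(s, w + eps) - rhs(s, w - eps)) / (2 * eps),
    ])
    ev = np.linalg.eigvals(jac)
    return "attractor" if np.all(ev.real < 0) else "saddle/unstable"
```

Scanning a connection-strength or facilitation parameter with this routine is how a bifurcation diagram like Figure 2E–F is assembled.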
Persistent elevated firing rates within the ranges typically observed in cortical data arise inherently in the model, without incorporating many of the mechanisms previously proposed to produce that behavior (see for example Latham and Nirenberg [33]; Barak and Tsodyks [34]; Mongillo et al. [35]). An essential feature of the model allowing this behavior is the presence of a bottleneck, which appears in the phaseplane where the S and W nullclines approach each other, corresponding to regions of greatly diminished rates of change for the dynamic variables. The bottleneck is present over a broad range of the parameter space, and its presence does not depend on fine tuning of parameters. It comes about because the right-hand sides of the system's equations are continuous and approach zero where the nullclines lie close together in phase space. Thus, as shown in Figure 2, the values of dS/dt and dW/dt are reduced in those regions. Two factors govern the decay rate: the bottleneck and the values of the time constants. While the presence of the bottleneck is not a result of the difference in time constants (but rather of the shape of the nullclines), in the present system the shape of the nullclines depends on the time constants; thus the slower rate of change in W than in S contributes both to the bottleneck's existence, via the nullcline shape, and to slowing the decay directly. Because of the bottleneck, however, the “effective time constant”, or rate of decay, is much slower than would be predicted from the actual time constants. When the firing rate is elevated above baseline by an external current during the cue period, facilitation also increases. The trajectory in the phaseplane is such that, in passing through the bottleneck, the return of the system to the baseline stable state (or procession to the higher firing rate attractor state) is “impeded”.
Thus while not in a stable state, the system remains in a state of elevated (above baseline) firing frequency for an extended period of time, which can be virtually indefinite. Over a wide range of values of the parameter space, the decay to one of the stable states of the system is sufficiently slow such that no significant change in elevated firing rate is observed for the duration of the putative memory period. From the frame of reference of the memory task, this activity appears as bistable. In contrast to actual bistability of the model, however, the difference in firing rates between baseline and delay periods for this apparent bistability can adopt a continuum of values within the range typically observed in real cortical cells (i.e. differences between baseline and delay rate less than 100% of baseline, or typically <5 Hz). This is also true of both the decaying memory cell and ramping cell behavior. The range of facilitation values over which the different firing rate behaviors occur is affected by the value of the maximum facilitation parameter. This parameter (and the threshold θ) largely determines the fixed points of the system (where the facilitation nullcline intersects the synaptic activity nullcline). While this parameter is varied over a large percentile range (i.e. several hundred percent), the resulting change in actual facilitation realized by the network is within the range of 10% to 60%, within the range of reported increases (for example see Hempel et al. [31]).
We next analyze the stability of the attractor states of the network as a function of the threshold parameter (Figure 2E and 2F). The bifurcation diagram reveals that the 3 fixed points of the system (two stable nodes and one saddle point) are present over a wide range of this parameter. For parameter values where the trajectory remains below the stable manifold (within the basin of attraction of the baseline node), the system ultimately returns to that attractor state. For parameter values in which the trajectory travels above the stable manifold (into the basin of attraction of the stable node corresponding to the higher firing rate), the system approaches that second stable state.
Having determined the states of the 2-dimensional system, we next use the parameters of this 2-dimensional network (Table 1) in the 3-dimensional model with dynamic calcium. An analysis of the attractor states reveals that the 3-dimensional system retains the same attractors as the 2-dimensional system. Figure 3 shows PSTH histograms of the model during the simulated working memory task. These histograms show that the pattern activities observed in the trajectories of the phase portraits of the 2-dimensional model are present. In particular, this analysis shows that the network inherently exhibits the range of excitatory and inhibitory patterns correlated with working memory for different values of facilitation and self-connectivity. For values of facilitation (or input strength and/or duration) such that the trajectory of the system stays below the separatrix in the basin of attraction of the baseline attractor state, the population can exhibit persistent activity which decays towards baseline throughout the delay (Figure 3A). The achievable increases in firing rate from baseline to delay can take on low values (i.e. <5 Hz) and can adopt any rate within a continuum. The rate of decay of the persistent activation towards baseline is also variable along a continuum. For a range of trajectories and parameter values, the decay in firing rate during the delay can become slower and slower, to the point that the population approaches, for all intents and purposes, bistable behavior (Figure 3B). Once again, the increase in firing frequency during the delay can occur along a continuum. For sufficiently low values of facilitation, the population exhibits a non-responsive pattern (Figure 3C). That is, the trajectory returns to baseline firing rates immediately following the cue. Thus the population responds to working memory events (i.e. the cue), but exhibits baseline rates throughout the delay.
As facilitation increases (or the input strength/duration increases) such that the trajectory proceeds beyond the separatrix, the pattern becomes that of a ramping increase of firing rate during the delay period (Figure 3D). For sufficiently large values of facilitation, the system quickly adopts the higher firing rate attractor state, thus exhibiting bistability (with differences between baseline and delay period >10 Hz for most parameter values). Dynamic synaptic depression can be introduced into the network rather than facilitation for values of maximum facilitation (Wmax) that are less than the background value (Wmin) in equation (3). For sufficiently small values of synaptic depression, as was the case for facilitation, the network exhibits the non-responsive pattern. As synaptic depression is increased the network exhibits the inhibitory pattern which is the mirror image of decaying memory cells (Figure 3E). As was the case for the excitatory memory cells, the decay back to the baseline state of the inhibitory pattern can be slowed to the point where the population exhibits an apparent fixed rate inhibition pattern (Figure 3F).
Figure 4 shows the phase diagram of the firing rate patterns of this model as a function of facilitation strength and self connectivity strength. It can be seen that the different excitatory patterns observed in the cortical data are produced (decaying memory, bistable memory, and ramping cells) over a wide range of parameters. As is the case in the cortical data, the non-responsive pattern behavior is the most prominent pattern type across the parameter space. For the case of synaptic depression, the decaying inhibition pattern occurs prominently over a range of parameters in addition to the non-responsive pattern.
The mechanism for the behaviors illustrated can be understood from the stability analysis and examination of the phaseplane (Figure 2). In all cases the firing rate begins at the lower attractor state with low values of facilitation. The input of current increases the synaptic activity S, and therefore firing rate and subsequently the facilitation W increases. If the self-connectivity, facilitation or magnitude of the external current is sufficiently low such that the trajectory of the system does not cross the saddle separatrix, the trajectory is such that S quickly decreases until it approaches its nullcline. Here the trajectory proceeds such that it approaches the baseline stable attractor along the path of that nullcline. However, the bottleneck through which the trajectory proceeds slows the rate of return to the baseline state. The bottleneck can slow that rate such that the trajectory is impeded to the point that firing rate appears bistable with respect to the duration of the memory period of the task. As self-connection strength, synaptic facilitation or external current magnitude is raised beyond a critical point such that the system's trajectory goes beyond the saddle separatrix in the phaseplane, the system approaches the second stable state which corresponds to an above baseline firing rate—resulting in ramping or response-coupled cell pattern behavior. Once again the rate of this increase is affected by the bottleneck, and may be arbitrarily slowed such that the firing rate appears bistable with respect to duration of the memory period of the task. This phenomenon exists for a broad range of parameter values. Thus the inherent bistability in these cases is critical in modulating patterned memory behavior, but does not in many cases in and of itself represent the memory states. Rather the activation of the network itself could represent active working memory.
#### 2-Population Firing Rate Model
We next analyze the behavior of a 2-population network, recurrently connecting two of the single populations. This model is 6-dimensional and thus cannot be easily reduced and analyzed as was the case for the single population model. We analyze the patterns and statistics through the PSTH histograms (Figure 5) and the phase diagrams (Figure 6) of pattern types as a function of the strength and sign (i.e. excitatory or inhibitory) of the net effect of the inter-population projections. The phase diagrams enable the examination of possible concomitant activities in the different networks.
Inhibition, in addition to excitation, is incorporated in the mean field 2-population model via the inter-area projections between populations. While long-range projections in the cortex are excitatory, inhibition is examined as well according to the assumption that the majority of the long-range projections may terminate on either inhibitory or excitatory interneurons. Thus the net effect of these projections can be excitatory or inhibitory. We analyze the behavior of the network for different possible inter-population connectivity schemes (i.e. excitatory-inhibitory (E-I) and inhibitory-inhibitory (I-I)). Slightly different values for the self-feedback connection strengths within the two populations were chosen.
As was the case for the single-population model the PSTH histograms reveal that the 2-population model exhibits the excitatory patterns of memory and decaying-rate or ramping cells with a continuum of rate differences. The inclusion of inhibitory connections results in the presence of parameter ranges in which all of the inhibitory patterns (mirroring the excitatory ones) occur. These inhibitory patterns can occur purely as a function of inhibitory inter-population connectivity, without incorporating dynamic synaptic depression as was the case for the single population. In addition, the inhibitory pattern of increasing inhibition throughout the delay (mirroring the excitatory ramping cells), which was absent in the single population model, can now occur (Figure 5F).
In the phase diagrams of the 2-population model (Figure 6A) it can be seen that all of the patterns of memory behavior occur over broad ranges of the parameters, and thus without fine tuning, in both populations. As in the single population model, the non-responsive type is the most prominently occurring pattern across the parameters, followed by decaying memory cells and ramping cells. Less commonly occurring types are fixed rate memory cells and the inhibitory mirror images of the excitatory patterns. While all the patterns occur over a broad range of parameters, the specific patterns present over given ranges varied considerably between populations. Thus many specific complementary patterned activities occurred simultaneously in both populations only over small ranges of the parameters, and thus some degree of fine tuning is necessary to achieve particular overall network behaviors. For example, as can be seen in the overlapping phase diagrams (Figure 6B), attaining memory cell behavior simultaneously in both cortical locations, or attaining complementary cue-coupled/response-coupled behavior, requires the connectivity of the network to be restricted to relatively small specific ranges of inter-population connectivity values.
#### 4-Population Firing Rate Model
We next consider the effect on the states of the network when the model is extended to 4 populations. In the 4-population model all of the patterned activities continue to be present over a continuum range of increases and decreases in firing frequencies. However the distributed architecture results in a “specialization” of pattern activity within specific populations. As can be seen in the phase diagrams (Figures 6C and 6D) each local network (populations 1–4) exhibits the non-responsive pattern and almost exclusively either the excitatory or inhibitory memory patterns across the range of connectivity strengths. A result of this specialization or partitioning of pattern types between the local networks is that, in contrast to the 2-population model, simultaneous complementary pattern behaviors occur far more robustly across wide parameter ranges. Thus for example attaining memory cell behavior simultaneously in multiple cortical areas, or attaining complementary cue-coupled/response-coupled patterned behavior does not require fine tuning to a small restricted range of connectivity values (Figure 6E).
#### Spiking Unit Network: 2-Population Model
We next examine the statistics and dynamics of the spiking version of the distributed mean-field models. In the spiking network version of the 2-population mean field model we first replace the populations with two networks of 100 spiking units each, whose activity averaged across units approaches the activity of the populations in the mean field model (Figure 7). We first analyze the range of memory pattern types in the spiking networks during simulated working memory tasks. Average PSTH histograms over 20 simulated trials of a working memory task were generated and examined for each unit in the network (Figure 8). The pattern types and statistics in these units can be directly compared to those occurring in the database of real parietal and prefrontal cells. The results show that within the populations, the range of excitatory and inhibitory patterns occur, in addition to the non-responsive pattern. The average baseline frequencies, delay frequencies and deltas (changes in frequency from baseline to delay period) exhibited by the units for each pattern fall within ranges observed in the real parietal and prefrontal cells (Table 4). Figure 9A (left) shows the distribution of pattern types exhibited by all 200 units of the 2-population spiking network. The most commonly occurring pattern was the non-responsive pattern, followed by the excitatory patterns, and finally the inhibitory patterns. This relative distribution of pattern types is consistent with what is observed in both parietal and prefrontal cell populations (Figures 9B left and right). This distribution also correlates with the areas of the parameter space over which each of the patterns occurred in the phase diagrams of the 2-population model.
As is the case in real cortical cells, the specific pattern exhibited by a unit in the spiking network in any given trial can vary from the predominant pattern observed in the average PSTH histogram [5]. That is, the pattern that a unit (or real cortical cell) is classified as exhibiting, as determined from the average PSTH pattern, might not be exhibited on some subset of trials. This includes exhibiting different excitatory patterns from trial to trial in delay activated pattern cells, different inhibitory patterns from trial to trial in delay inhibited pattern cells, or even pattern types contrary to the average pattern. For example, the parietal delay activated cells exhibited delay inhibited patterns on 13.2% of the trials, and parietal delay inhibited cells exhibited delay activated patterns on 15.6% of the trials. Prefrontal delay activated cells exhibited delay inhibited firing patterns on 16.1% of the trials, and prefrontal delay inhibited cells exhibited delay activated patterns on 20.1% of the trials. Figure 9C (left) indicates for each unit in the 2-population network the percentage of the total number of simulated trials in which its pattern behavior differed from its dominant pattern type appearing in its average PSTH histogram (different excitatory or inhibitory pattern and/or contrary delay activity). We see that units in the network, on average, exhibit pattern activity different from their classified pattern type on approximately 52.5% of the trials, with that variability being approximately the same in both local networks. Individual units exhibited different patterns over a range from 20% to 70% of the trials. This variability is comparable to that observed in many real neurons during working memory.
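The across-trial variability measure used above can be sketched as follows. The classification rule here (delay-minus-baseline rate change against a ±1 Hz threshold) is an illustrative assumption, not the paper's actual classifier; the measure itself — the fraction of trials on which a unit's pattern differs from its dominant type — follows the text.

```python
import numpy as np

THRESH = 1.0  # Hz; assumed threshold separating responsive from non-responsive

def trial_pattern(baseline_hz, delay_hz, thresh=THRESH):
    """Classify one trial by its delay-minus-baseline rate change."""
    d = delay_hz - baseline_hz
    if d > thresh:
        return "excitatory"
    if d < -thresh:
        return "inhibitory"
    return "non-responsive"

def pattern_variability(baseline_rates, delay_rates):
    """Dominant pattern across trials and fraction of trials deviating from it."""
    pats = [trial_pattern(b, d) for b, d in zip(baseline_rates, delay_rates)]
    dominant = max(set(pats), key=pats.count)
    frac_other = sum(p != dominant for p in pats) / len(pats)
    return dominant, frac_other
```

For example, `pattern_variability([5, 5, 5], [8, 8, 5.2])` classifies two trials as excitatory and one as non-responsive, so the dominant pattern is excitatory with a deviation fraction of 1/3.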
#### Spiking Unit Network: 4-Population Model
We next examine the range of statistics and memory pattern types occurring in the activity of units in a spiking network with four populations of 50 neurons each. In the spiking network we replace the four populations of the mean field model with four networks of 50 spiking units each whose activity averaged across units is the same as the activity of the populations of the mean field model. We first analyze the range of memory pattern types in the spiking networks during simulated working memory tasks. Average PSTH histograms over 20 simulated trials of a working memory task were generated and examined for each unit in the network. As was the case in the 2-network spiking model, the range of excitatory and inhibitory patterns, in addition to the non-responsive pattern are exhibited by the units in the network. The specific baseline frequencies, delay frequencies and deltas (changes in frequency from baseline to delay period) exhibited by the units for each pattern fall within the ranges observed in the real parietal and prefrontal cells (Table 4). Figure 9A (right) shows the distribution of pattern types exhibited by all 200 units of the 4-population spiking network. The relative prominence of different pattern types is similar to that of the 2-population spiking model, with the most commonly occurring pattern being the non-responsive pattern, followed by the excitatory patterns, and finally the inhibitory patterns. Once again this is consistent with the relative percentages of each pattern type observed for the real parietal and prefrontal neurons.
As was the case in the 2-population spiking model, the specific pattern exhibited by a unit in the 4-population spiking network in any given trial of the simulated working memory task can vary from the predominant pattern observed in the average PSTH histogram (Figure 10). Figure 9C (right) indicates for each unit in the network the percentage of the total number of trials in which its pattern behavior differed from its dominant pattern type. We see that units in the network, on average, exhibit pattern activity different from their average classified pattern type on approximately 54% of the trials, with individual units exhibiting different patterns over a range of 25% to 70% of trials. This variability is very similar to that observed in the 2-population spiking model, and once again is within the range observed in the real cortical neurons. However, in contrast to the 2-population network, variability is not uniformly distributed across populations. In the 4-population network there is a greater degree of partitioning of the activity of the networks into those primarily exhibiting non-responsive and excitatory patterns, and non-responsive and inhibitory patterns. Populations of units of primarily excitatory or the non-responsive memory pattern types exhibit their predominant pattern much more reliably than those populations of units of primarily inhibitory and non-responsive pattern types. Thus the more distributed architecture resulted in increased reliability in persistently active populations.
#### Spiking Unit Network's Variability Scaling With Population Size
We next examine the dependence of pattern type, firing rate statistics and variability as a function of population size. To do this we produced a 2- and 4-population spiking model as above consisting of 2000 units. We first analyze the range of memory pattern types in the spiking networks during simulated working memory tasks. Average PSTH histograms over 20 simulated trials of a working memory task were generated and examined for each unit in the network. As was the case in the 2- and 4-population spiking networks consisting of 200 units, the range of excitatory and inhibitory patterns, in addition to the non-responsive pattern are exhibited by the units in the network (Figure 11). The specific baseline frequencies, delay frequencies and deltas (changes in frequency from baseline to delay period) exhibited by the units for each pattern fall within the ranges observed in the real parietal and prefrontal cells. Figure 12 shows the distribution of pattern types exhibited by all 2000 units of the 2- and 4-population spiking networks. The relative percentage of excitatory and inhibitory patterns is similar to that observed in the 200-unit networks with excitatory patterns being slightly more prominent than inhibitory patterns.
The firing rate model produces a trajectory in the phase plane which corresponds to a specific pattern type. Depending on the connections and other parameters, the stimulus causes the trajectory to remain above or below the separatrix of the phase space. In terms of the spiking model, the firing rate model trajectory corresponds to the mean trajectory of all units. Depending on how close to the separatrix that mean trajectory is after the stimulus, fluctuations about the mean from various sources of stochasticity in the spiking model will result in a probability that units will make transitions to trajectories corresponding to pattern types different from that of the mean trajectory. The resulting pattern types will have a distribution reflecting this. Conversely, the closer the system is to one of the stable attractors of the system, the less probable it is, for a given level of noise, that the system trajectory will depart from the pattern of the mean trajectory.
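The separatrix argument above can be illustrated with a minimal noisy bistable system. The double-well form dx/dt = x − x³ and the noise level below are illustrative assumptions, not the paper's model; they show the same qualitative point: trials started near the separatrix (x = 0) end in either basin with substantial probability, while trials started deep in a basin almost always stay there.

```python
import numpy as np

DT, T, SIGMA = 0.01, 5.0, 0.3   # step, trial length, noise (all assumed)

def final_basin(x0, rng):
    """Euler–Maruyama for dx = (x - x^3) dt + SIGMA dW; report basin reached."""
    x = x0
    for _ in range(int(T / DT)):
        x += DT * (x - x ** 3) + SIGMA * np.sqrt(DT) * rng.standard_normal()
    return 1 if x > 0.0 else -1  # separatrix of this toy system is x = 0

rng = np.random.default_rng(4)
# Fraction of trials ending in the upper basin, near vs. far from separatrix.
p_near = np.mean([final_basin(0.05, rng) == 1 for _ in range(400)])
p_far = np.mean([final_basin(0.9, rng) == 1 for _ in range(400)])
```

Starting near x = 0 yields a mixed distribution of outcomes across trials, while starting near the attractor at x = 1 yields nearly deterministic outcomes, mirroring the text's account of trial-to-trial pattern transitions.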
There are 3 primary sources of stochasticity in the spiking model networks, not present in the mean field model, that produce fluctuations causing unit behavior to depart from the single pattern type of the mean trajectory: 1) heterogeneity in the connections between units, 2) heterogeneity in the maximum facilitation, and 3) the noise present in all the units' activity. Increasing population size reduces the source of noise resulting from heterogeneous connections, and thus reduces the overall amplitude of fluctuations. Figure 13 shows the reliability with which units in the 2000-unit 2- and 4-population spiking models exhibit their dominant patterns. It can be seen that neurons exhibit their dominant pattern more reliably than in the 200-unit network. However, increasing population size cannot eliminate type variability across trials, particularly when the system is near the separatrix. It can be seen from Figure 13 that the average reliability of units expressing a single pattern type across the overall network is greater for the 2000-unit networks than the 200-unit networks (75% vs. 47% respectively in the 4-population model). However, individual units still exhibit high variability in the pattern type exhibited from trial to trial, ranging from exhibiting the dominant pattern type on 100% of trials down to approximately 45% of trials. Thus while reliability in firing can be achieved by increasing population size and averaging across units of a population, this does not eliminate transitions by units from the mean pattern type or from their dominant pattern type from trial to trial.
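The population-size effect described above — averaging over more units shrinks the fluctuations contributed by heterogeneous connections roughly as 1/√N, without ever eliminating them — can be checked with a short sketch. The Gaussian weight distribution and its parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_weight_std(n_units, n_nets=2000, sigma_w=0.2):
    """Std of the population-mean connection weight across many random draws."""
    w = rng.normal(1.0, sigma_w, size=(n_nets, n_units))
    return w.mean(axis=1).std()

std_200 = mean_weight_std(200)     # roughly sigma_w / sqrt(200)
std_2000 = mean_weight_std(2000)   # roughly sigma_w / sqrt(2000)
```

Going from 200 to 2000 units shrinks this fluctuation by about √10 ≈ 3.2, consistent with the increased (but not perfect) pattern reliability reported for the larger networks.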
A reduction in the source of variability due to noise present in all neurons during simulation (i.e. the Wiener noise) can be achieved by averaging across trials. Figures 12A and 12B (bottom) show the distribution of the average histograms obtained for the units in the 200- and 2000-unit population models across 20 trials. It can be seen that this averaging produces distributions which primarily consist of patterns corresponding to the canonical bistable persistent activity (activation and inhibition). While this type of averaging is not physiologically relevant in the sense that populations carry out working memory on each trial, and not as an average across trials, it does represent the typical averaging carried out to characterize cell behavior in studies of working memory (i.e. average PSTH histograms across trials determine cell pattern type).
An analysis of the intra-trial variance of firing rate in the model units revealed high variability in the distribution of ISIs during both baseline and delay periods of the model (Figure 14a). The CV of ISIs in the majority of units ranged in the baseline across all pattern types between 0.4 and 1 with a mode of approximately 0.6. During the delay period the distribution of intra-trial ISI CVs was bimodal with peaks at approximately 0.45 and 0.75, with most units falling within the range of 0.4 to 1 as in the baseline. These ranges of the CV overlap significantly with those observed in the real prefrontal and parietal cell populations, although their overall means are lower. Focusing on the stable excitation and inhibitory patterned activity, units exhibited decreasing average CV from baseline to the delay period in stable excitatory pattern units, and increasing average CV from baseline to the delay period in stable inhibitory pattern units (Figure 14b). In the real parietal and prefrontal cell populations, stable excitatory and inhibitory cells exhibit high CV in their ISIs during both baseline and delay, with the CV decreasing from baseline to delay in stable excitation cells and increasing in stable inhibitory cells. In parietal cortex, the CV of the ISIs in cells exhibiting stable persistent excitation significantly decreased (p<0.001, paired t-test) from an average of 1.17 during the baseline to 1.02 in the delay. In prefrontal cortex the CV in those cells decreased insignificantly from an average of 1.03 to 1.0. In cells exhibiting stable persistent inhibition in parietal cortex, the CV of the ISIs increased insignificantly from 1.19 to 1.2, while in prefrontal cortex the CV increased insignificantly from 1.02 to 1.03 on average.
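The CV measure used throughout this analysis is the standard deviation of the inter-spike intervals divided by their mean. A minimal sketch, with synthetic spike trains standing in for model output: a Poisson train has exponential ISIs and hence CV near 1 (the regime the baseline data occupy), while a clock-like train has CV near 0.

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation (std/mean) of inter-spike intervals."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isis.std() / isis.mean()

rng = np.random.default_rng(3)
# Illustrative trains: 10 Hz Poisson (CV ~ 1) vs. perfectly regular (CV ~ 0).
poisson_train = np.cumsum(rng.exponential(scale=0.1, size=5000))
regular_train = np.arange(5000) * 0.1
```

Applied per trial and per epoch (baseline vs. delay), this yields the CV distributions and baseline-to-delay CV changes reported above.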
### Discussion
The results of this study demonstrated that recurrent networks with dynamic synapses inherently produce the different persistent firing rate patterns observed in real cortical neurons during working memory. The persistent patterns produced are robust with respect to variations of the parameters in the network. That is, the different patterns occur over a wide range of values of the parameter space, and given patterns do not occur only for a very narrow set of parameter values. Further, the statistics of those patterns fall within the ranges of variation observed in firing rate pattern behavior of real cortical neurons. For example the changes in firing rate from baseline to the delay period can take values along an apparent continuum with absolute changes in firing rate of less than 100% of the baseline rate. For the majority of persistently activated cells recorded from parietal and prefrontal cortex of primates during working memory this corresponds to changes in firing rate of less than 10 Hz. The present network demonstrates a mechanism beyond previous solutions for achieving these realistic low delay firing rates [33]–[35], [46]–[49]. While the expression of any particular delay frequency or rate of ramping or decay of firing rate of the units can be dependent on the particular parameters, the occurrence of any of the working memory patterns takes place across wide continuous ranges of network parameters and inputs, and thus does not involve fine tuning and is stable with respect to noise in the input.
Bistable firing rates are one of the possible activities of the model. However, the present work has focused on the range of working memory-correlated patterns of firing rate and their simultaneous, complementary occurrences in the working memory network as opposed to only fixed states that the networks or their neuronal constituents may adopt. The spiking networks exhibited all of the general patterns correlated with working memory that are observed in the database of microelectrode recordings of parietal and prefrontal cortical neurons. In addition, the statistics and firing rates of the units fall within the ranges observed in real cells, with the occurrence of the different pattern types similar in proportion to that observed in the cortical populations. In terms of the behavior of individual neurons, bistable activity is typically only observed as an average over many trials of a working memory task. Across trials, cells exhibit different average frequencies, and even within individual trials, cells exhibit significant variability in firing rather than a single stable rate [5], [49]–[51]. This is indicated by a high coefficient of variation in both baseline and delay periods in units exhibiting stable delay excitation and inhibition patterns. This is in agreement with the database of real parietal and prefrontal stable delay units as well as previous neurophysiological studies in which CVs of within-trial ISIs were around 1.0. Changes in CV from baseline to delay period for the model units further agreed with those observed in the parietal and prefrontal database, with the CV decreasing for stable excitatory pattern cells, and increasing for stable inhibitory pattern cells. High variability in ISIs has been observed in previous neurophysiological studies [49]–[51], although in some cases the change from baseline to delay observed has been different than that of the present cell populations.
This may result from the frequencies or types of persistent patterned activity observed in those studies during the delays (e.g. bursting behavior). In addition to variability within trials of stable persistent activity cells, from trial to trial, neuron activity may adopt specific memory correlated patterns different from the most prominent one that emerges in the average across many trials. This not only includes changing between the different persistent excitatory patterns from trial to trial, but even changing between persistent activation and inhibited patterns. Thus while a population of cells may exhibit a particular pattern with consistency, individual cells of that population do not. In the 2-population spiking model with 200 units, variability in firing pattern across trials was the same for both populations, with the majority of units exhibiting changes from their most prominent pattern type (including changing between persistent excitation and inhibited patterns) in 40% to 60% of the trials. In the 4-population spiking model, while the overall variability was essentially the same as in the 2-population model, the variability in pattern across trials depended on the types of patterns prominently exhibited by the particular populations. In populations exhibiting excitatory patterns the majority of units displayed a different pattern on 35% to 45% of the trials, and in populations exhibiting inhibited patterns, the majority of units displayed different patterns on 40% to 60% of the trials. Thus a more distributed architecture resulted in a more reliable occurrence of excitatory memory pattern types within units, in addition to a more stable concomitant occurrence of complementary pattern types. Working memory therefore appeared to be more stable in the more widely distributed network. The reliability of exhibiting a given pattern across trials increases only partially with the size of the network.
Looking at the 4-population network with an order of magnitude more units results in a network that still exhibits the range of pattern types with relative proportions similar to those seen in the smaller network and in real cortical neurons. Although overall the average percent of trials on which units exhibit their most dominant pattern type increases (i.e. 75% compared to approximately 47%), many units continue to exhibit their dominant pattern on less than a majority of the trials. The reason for this is that increasing population size decreases fluctuations due primarily to stochasticity in the connections between units, which of course does not drop to zero. In addition, other sources of stochasticity remain, such as noise in unit activity and stochasticity in the facilitation. Thus, given a particular stimulus, units' trajectories in the phase plane will pass with some proximity to the boundary (separatrix) between the fixed bistable states of the system and, given the closeness to the separatrix and the amount of stochasticity, will have a significant chance on any given trial of crossing over to a trajectory corresponding to a different pattern than the mean trajectory of the population, or that which occurs most often for a particular unit. As a result, there is a relatively invariant distribution of pattern occurrence that changes modestly with increasing population size. It is interesting to note that artificially reducing the other sources of stochasticity, for example by averaging across trials, produces pattern distributions exhibiting essentially only bistable patterns. That is, if we look at the average PSTH activity of the networks across trials, the units tend to be either stable activation cells or stable inhibition cells.
While this type of reduction of stochasticity is not physiologically meaningful, since working memory takes place trial to trial and cannot require averaging over many trials, data from unit recording experiments typically report unit activity as PSTH histograms averaged across trials. Thus the prevalence of bistability may be overestimated. Rather, real cortical data exhibit trial-to-trial variability comparable to that seen in the model.
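The within-trial coefficient of variation (CV) of inter-spike intervals referenced above is a simple statistic; as a minimal sketch (the spike times here are invented for illustration, not data from the study):

```python
# CV of inter-spike intervals (ISIs): std(ISI) / mean(ISI).
# A Poisson (maximally irregular) spike train has CV near 1.0;
# a perfectly regular train has CV = 0.
from statistics import mean, pstdev

def isi_cv(spike_times):
    """Coefficient of variation of ISIs for a sorted list of spike times (ms)."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    return pstdev(isis) / mean(isis)

print(isi_cv([0, 10, 20, 30, 40]))   # regular train → 0.0
```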
The dynamic synaptic facilitation is the component of this model which creates the bottleneck in the phase plane, and gives it its unique characteristics. Specifically it is facilitation which determines the amount of persistent activation, which, since it can adopt a continuous range of values, enables the change in firing rate from baseline to memory period to fall along a continuum. The bottleneck determines the rate at which the firing rates decay towards the baseline attractor (or increase towards the higher firing rate attractor) to adopt the continuum of firing rate values. The decay rate can be sufficiently slow such that no decay or acceleration of firing is observed for the duration of a memory period. Thus the result is an apparent or virtual bistability, which for all intents and purposes can be extended for as long as working memory is defined by the parameters of a working memory task. The fact that the rate at which persistent activation waxes or wanes is highly adjustable is consistent with the behavior of cells in the cortex during working memory. It has been observed in working memory experiments [27], [52][53] that the rate of decay and/or the rate of acceleration of persistent activation adjust to the duration of the memory period. The dynamic synapses make this phenomenon easy to incorporate. Adjusting the maximum of facilitation or other parameters changes the bottleneck so that the rate of decay (or ramping) can become longer or shorter along a continuum.
Another prediction from the dynamics of the model is that the rate of persistent activation correlates with baseline rate. In the majority of delay activated cells, the magnitude of firing rate increases is less than 100% of baseline, with the magnitude of the delay period firing rate change increasing nonmonotonically with baseline rate increases. The largest magnitude increases in delay period frequency are in those cells with the largest baseline firing rates, while the largest percentage changes are in those with low baseline rates. This is naturally incorporated in the present model. The range of rates over which the population can exhibit memory cell behavior is bounded by the saddle separatrix. Once facilitation pushes the system's trajectory beyond the separatrix, further increasing facilitation (or judiciously adjusting other parameters) does not result in further continuous increases in persistent activation delay rates, but rather a change in the activation pattern itself. The parameters of the model can, however, be adjusted to raise the frequency of the baseline state and increment the entire range of frequencies within its basin of attraction. Thus both baseline and delay rates increase in a correlated fashion, and due to the nonlinearity of the nullcline of the synaptic activity, the proportional increase in frequency is nonmonotonic.
The specific simultaneous patterns which may be exhibited in the populations depend on the relative strength of the inter-population connections, the strength of the intra-population connections, and whether the inter-population connectivities are mutually net inhibitory or a combination of excitatory and inhibitory. The phase diagram of the 2-population firing rate model reveals a number of behavioral trends. For an excitatory-inhibitory connectivity between populations, the networks can exhibit a range of concomitant activities which includes memory cell activity in both populations and simultaneous cue-coupled/response-coupled behavior. In contrast, with a mutually inhibitory connectivity between populations these particular behaviors are absent, and simultaneously occurring fixed-rate-memory/response-coupled behavior is present over only an extremely narrow range of the parameters. Thus memory being maintained simultaneously in both cortical areas occurs in the 2-population model only within the E-I connectivity scheme. During working memory, the simultaneous presence of memory cells in prefrontal cortex and another cortical area important to the sensory modality of the memorandum has been indicated by numerous studies. In addition to prefrontal cortex, memory cells have been observed, for example, in posterior association cortex including inferotemporal cortex [54][55] and posterior parietal cortex [56][57], and their simultaneous presence in multiple cortical areas has been indicated in imaging studies [55], [58][60]. The overlapping presence of cue-coupled and response-coupled cells has also been confirmed and has been implicated in working memory networks [61]. It is suggested that these populations would cooperate and be engaged in the transfer of information from a perceptual network to a motor network.
The two populations would cooperate to enable the processing of information from one network to the other with translation from perception into action.
As the network becomes more distributed, increasing to four populations, simultaneous memory cell behavior and cue-coupled/response-coupled behaviors become more robust, with these concomitant behaviors occurring over a wide continuous range of the parameters, as can be observed in the increased areas of those respective behaviors over larger continuous ranges of the parameters in the phase diagrams (Figure 6). While the specific connectivity between populations in different cortical areas in working memory networks is unknown, it is suggestive to consider the possible effects on each population's activity in the model when another is shut down, as in reversible lesion studies. For example, in the 4-population model, termination of the activity of two populations changes the phase diagrams to those of the 2-population model. Depending on the specific local parameters (i.e. the connectivity between populations), the effect can be a net increase in persistent activity, a decrease, or elimination of such activity to a non-responsive pattern. Studies of reversible lesions, in which one cortical area is cooled while recording cell activity in another, have shown that some cells increase their firing rate during the delay, while others show decreases or become non-responsive [58], [62][63]. A question is whether the net effect of one cortical area on the other is excitatory or inhibitory, as might be determined by whether more cells increase persistent activation or decrease it. While no definitive results exist, the data from these studies indicate that more neurons increase their activity in prefrontal cortex as a result of cooling posterior association cortices, while more neurons decrease their activity in posterior association cortex as a result of cooling prefrontal cortex. This could be indicative of an effective E-I coupling between cortical areas.
Looking at the potential changes in the phase diagrams of the models going from the 4-population model (with connectivity of both the E-I and I-I type) to 2 populations, such behavior is the general trend over the majority of the parameter space. Further lesion studies or microstimulation studies may elucidate the functional connectivity of the global network in light of the model.
In addition to a distributed architecture affecting the stability of memory pattern behavior and modulating activity enabling the occurrence of complementary patterned behaviors, certain working memory pattern behaviors apparently are exclusively a function of a distributed architecture rather than the facilitation mechanism alone. In particular, the ramping delay inhibition pattern observed in the cortical data was present only in the distributed versions of the model. Another phenomenon is the existence of large regions of the parameter space in which one population exhibits the non-responsive pattern, while the other population exhibits memory cell behavior (fixed-rate, decaying, or ramping). In the database of real cells, the majority of neurons from parietal and prefrontal cortex exhibit the non-responsive pattern of behavior. Interspersed within these populations of non-responsive cells are neurons that exhibit the other patterns. From the models we see that the non-responsive pattern is a common part of a working memory network, coexisting with the other patterned behaviors. Studies of patterns in the spike sequences of such cells [64][66] have indicated that while not exhibiting significant differences between baseline and delay firing rates, such cells can exhibit differences in the patterning of the spike sequence in these periods, indicating participation in the working memory network. The fact that the non-responsive pattern arises as a prominent one in the models, overlapping with the range of memory pattern behaviors, suggests that these populations may play a role in the dynamics of working memory networks.
It should be noted that the present analysis supplements the general attractor picture rather than replacing or invalidating it. Cells with apparent bistable activity at high firing rates above baseline, while apparently rare in the cortex [5], may still be a fundamental neuronal substrate of working memory. In the present model not only is this activity present, but the myriad other patterns, with firing statistics and variability similar to those which constitute much of the activity correlated with working memory, are accounted for as well.
### Author Contributions
Conceived and designed the experiments: MB BE. Analyzed the data: SVF BE. Wrote the paper: SVF MB BE JF YZ.
### References
1. Fuster JM, Alexander GE (1971) Neuron activity related to short-term memory. Science 173: 652–654.
2. Fuster JM (1973) Unit activity in prefrontal cortex during delayed-response performance: neuronal correlates of transient memory. J Neurophysiol 36: 61–78.
3. Funahashi S, Bruce CJ, Goldman-Rakic PS (1989) Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. J Neurophysiol 61: 331–349.
4. Quintana J, Fuster JM (1999) From perception to action: temporal integrative functions of prefrontal and parietal neurons. Cereb Cortex 9: 213–221.
5. Shafi M, Zhou Y, Quintana J, Chow C, Fuster J, Bodner M (2007) Variability in neuronal activity in primate cortex during working memory tasks. Neuroscience 146: 1082–1106.
6. Amit DJ (1989) Modeling brain function: The world of attractor neural networks. Cambridge: Cambridge University Press. 504 p.
7. Amit DJ (1995) The Hebbian paradigm reintegrated: local reverberations as internal representation. Behav Brain Sci 18: 617–626.
8. Amit DJ, Tsodyks MV (1991a) Quantitative study of attractor neural network retrieving at low spike rates. 1. Substrate spikes rates and neuronal gain. Network 2: 259–273.
9. Amit DJ, Tsodyks MV (1991b) Quantitative study of attractor neural network retrieving at low spike rates. 2. Low-rate retrieval in symmetrical networks. Network 2: 275–294.
10. Amit DJ, Brunel N (1997a) Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb Cortex 7: 237–252.
11. Amit DJ, Brunel N (1997b) Dynamics of a recurrent network of spiking neurons before and following learning. Network 8: 373–404.
12. Wang XJ (1999) Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J Neurosci 19: 9587–9603.
13. Brunel N, Wang X-J (2001) Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J Comput Neurosci 11: 63–85.
14. Compte A, Brunel N, Goldman-Rakic PS, Wang X-J (2000) Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb Cortex 10: 910–923.
15. Laing CR, Chow CC (2001) Stationary bumps in networks of spiking neurons. Neural Comput 13: 1473–1494.
16. Durstewitz D, Seamans JK, Sejnowski TJ (2000) Dopamine-mediated stabilization of delay-period activity in a network model of prefrontal cortex. J Neurophysiol 83: 1733–1750.
17. Marder E, Abbott LF, Turrigiano GG, Liu Z, Golowasch J (1996) Memory from the dynamics of intrinsic membrane currents. Proc Natl Acad Sci USA 93: 13481–13486.
18. Delord B, Klaassen AJ, Burnod Y, Costalat R, Guigon E (1997) Bistable behaviour in a neocortical neurone model. Neuroreport 8: 1019–1023.
19. Camperi M, Wang XJ (1998) A model of visuospatial working memory in prefrontal cortex: recurrent network and cellular bistability. J Comput Neurosci 5: 383–405.
20. Lisman JE, Fellous JM, Wang XJ (1998) A role for NMDA-receptor channels in working memory. Nat Neurosci 1: 273–275.
21. Delord B, Baraduc P, Costalat R, Burnod Y, Guigon E (2000) A model study of cellular short-term memory produced by slowly inactivating potassium conductances. J Comput Neurosci 8: 251–273.
22. Fransen E, Alonso AA, Hasselmo ME (2002) Simulations of the role of the muscarinic-activated calcium-sensitive nonspecific cation current I-NCM in entorhinal neuronal activity during delayed matching tasks. J Neurosci 22: 1081–1097.
23. Egorov AV, Hamam BN, Fransen E, Hasselmo ME, Alonso AA (2002) Graded persistent activity in entorhinal cortex neurons. Nature 420: 173–178.
24. Wang XJ (2002) Probabilistic decision making by slow reverberation in cortical circuits. Neuron 36: 955–968.
25. Shafi M (2005) Working memory: patterns, variability and computational modeling. UCLA Ph.D. Dissertation.
26. Fuster JM (1995) Memory in the cerebral cortex. Cambridge: MIT Press. 372 p.
27. Fuster JM (2003) Cortex and Mind: Unifying Cognition. Oxford University Press. 314 p.
28. Wang Y, Markram H, Goodman PH, Berger TK, Ma J (2006) Heterogeneity in pyramidal network of the medial prefrontal cortex. Nat Neurosci 9: 534–542.
29. Castro-Alamancos MA, Donoghue JP, Connors BW (1995) Different forms of synaptic plasticity in somatosensory and motor areas of the neocortex. J Neurosci 15: 5324–5333.
30. Castro-Alamancos MA, Connors BW (1996) Short-term synaptic enhancement and long-term potentiation in neocortex. Proc Natl Acad Sci USA 93: 1335–1339.
31. Hempel CM, Kenichi HH, Wang X-J, Turrigiano GG, Nelson SB (2000) Multiple forms of short-term plasticity at excitatory synapses in rat medial prefrontal cortex. J Neurophysiol 83: 3031–3041.
32. Galarreta M, Hestrin S (2000) Burst firing induces a rebound of synaptic strength at unitary neocortical synapses. J Neurophysiol 83: 621–624.
33. Latham PE, Nirenberg S (2004) Computing and stability in cortical networks. Neural Comput 16: 1385–1412.
34. Barak O, Tsodyks M (2007) Persistent activity in neural networks with dynamic synapses. PLoS Comput Biol 3(2): e35.
35. Mongillo G, Barak O, Tsodyks M (2008) Synaptic theory of working memory. Science 319: 1543–1546.
36. Okamoto H, Isomura Y, Takada M, Fukai T (2007) Temporal integration by stochastic recurrent network dynamics with bimodal neurons. J Neurophysiol 97: 3859–3867.
37. Durstewitz D (2003) Self-organizing neural integrator predicts interval times through climbing activity. J Neurosci 23: 5342–5353.
38. Zipser D, Kehoe B, Littlewort G, Fuster JM (1993) A spiking network model of short-term active memory. J Neurosci 13: 3406–3420.
39. Singh R, Eliasmith C (2006) Higher-dimensional neurons explain the tuning and dynamics of working memory cells. J Neurosci 26: 3667–3678.
40. Ermentrout B (1996) Type I membranes, phase resetting curves, and synchrony. Neural Comput 8: 979–1001.
41. Ermentrout B (2002) Simulating, analyzing and animating dynamical systems: A guide to XPPAUT for researchers and students, 1st edition. New York: SIAM. 290 p.
42. Zhou YD, Fuster JM (1996) Mnemonic neuronal activity in somatosensory cortex. Proc Natl Acad Sci USA 93: 10533–10537.
43. Bodner M, Kroger J, Fuster JM (1996) Auditory memory cells in dorsolateral prefrontal cortex. Neuroreport 7: 1905–1908.
44. Fuster JM, Bodner M, Kroger JK (2000) Cross-modal and cross-temporal association in neurons of frontal cortex. Nature 405: 347–351.
45. Hoppensteadt F, Izhikevich EM (2002) Canonical neural models. In: Arbib MA, editor. Brain theory and neural networks. Cambridge: MIT Press.
46. Golomb D, Rubin N, Sompolinsky H (1990) Willshaw model: Associative memory with sparse coding and low firing rates. Phys Rev A 41: 1843–1854.
47. Rubin N, Sompolinsky H (1989) Neural networks with low local firing rates. Europhys Lett 10: 465–470.
48. Treves A, Amit DJ (1989) Low firing rates: an effective Hamiltonian for excitatory neurons. J Phys A: Math Gen 22: 2205–2226.
49. Roudi Y, Latham PE (2007) A balanced memory network. PLoS Comput Biol 3: e141.
50. Renart A, Moreno-Bote R, Wang X-J, Parga N (2007) Mean-driven and fluctuation-driven persistent activity in recurrent networks. Neural Comput 19: 1–46.
51. Barbieri F, Brunel N (2007) Irregular persistent activity induced by synaptic excitatory feedback. Front Comput Neurosci 1: 5.
52. Fuster JM (1997) The Prefrontal Cortex (3rd Edition). Philadelphia: Lippincott-Raven. 333 p.
53. Kass AL, van Mier H, Goebel R (2007) The neural correlates of human working memory for haptically explored object orientations. Cereb Cortex 17: 1637–1649.
54. Fuster JM, Jervey JP (1982) Neuronal firing in the inferotemporal cortex of the monkey in a visual memory task. J Neurosci 2: 361–375.
55. Miller EK, Li L, Desimone R (1993) Activity of neurons in anterior inferior temporal cortex during a short-term memory task. J Neurosci 16: 5154–5167.
56. Gnadt JW, Andersen RA (1988) Memory related motor planning in posterior parietal cortex of macaque. Exp Brain Res 70: 216–220.
57. Koch KW, Fuster JM (1989) Unit activity in monkey parietal cortex related to haptic perception and temporary memory. Exp Brain Res 76: 292–306.
58. Chafee MV, Goldman-Rakic PS (2000) Inactivation of parietal and prefrontal cortex reveals interdependence of neural activity during memory-guided saccades. J Neurophysiol 83: 1550–1566.
59. Stoeckel MC, Weder B, Binkofski F, Buccino G, Shah NJ, et al. (2003) A fronto-parietal circuit for tactile object discrimination: an event-related fMRI study. Neuroimage 19: 1103–1114.
60. Ku Y, Ohara S, Wang L, Lenz FA, Hsiao SS, et al. (2007) Prefrontal cortex and somatosensory cortex in tactile crossmodal association: an independent component analysis of ERP recordings. PLoS ONE 2(8): e771. doi:10.1371/journal.pone.0000771.
61. Quintana J, Fuster JM (1999) From perception to action: temporal integrative functions of prefrontal and parietal neurons. Cereb Cortex 9: 213–221.
62. Fuster JM, Bauer RH (1974) Visual short-term memory deficit from hypothermia of frontal cortex. Brain Res 81: 393–400.
63. Fuster JM, Bauer RH, Jervey JP (1985) Functional interactions between inferotemporal and prefrontal cortex in a cognitive task. Brain Res 330: 299–307.
64. Bodner M, Zhou YD, Fuster JM (1997) Binary mapping of cortical spike trains during short-term memory. J Neurophysiol 77: 2219–2222.
65. Bodner M, Zhou YD, Fuster JM (1998) High-frequency transitions in cortical spike trains related to short-term memory. Neuroscience 86: 1083–1087.
66. Bodner M, Shafi M, Zhou YD, Fuster JM (2005) Patterned firing of parietal cells in a haptic working memory task. Eur J Neurosci 21: 2538–2546.
https://cs.stackexchange.com/questions/52176/primitive-recursion-in-the-lambda-calculus | # primitive recursion in the lambda calculus
I am having trouble finding out what a primitive recursive subset of the lambda calculus would look like. I am referring to primitive recursion as described here: https://en.wikipedia.org/wiki/Primitive_recursive_function.
These are the three base axioms:
• Constant function: The 0-ary constant function 0 is primitive recursive.
• Successor function: The 1-ary successor function S, which returns the successor of its argument (see Peano postulates), is primitive recursive. That is, S(k) = k + 1.
• Projection function: For every n≥1 and each i with 1≤i≤n, the n-ary projection function P_i^n, which returns its i-th argument, is primitive recursive.
Here is my confusion. In the LC, zero is represented as (λfx. x). Next, the successor function is defined as (λnfx. f (n f x)). Because they are both axioms, both of these functions can be classified as primitive. But when I apply the function suc to zero I get the encoding of the number one, which is represented as the function (λf.(λx.(f x))). Now this function is neither zero nor the suc function but the result of an application. As such I do not see how this result (the value 1) fits into the rule set. But very clearly a program containing the number 1 is still primitive recursive. What am I not understanding here? While 1 is the successor of zero, once suc is applied the result is neither suc nor zero.
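Here is a sketch in Python of what I mean by the encodings (the `to_int` helper is just for inspection, not part of the calculus):

```python
# Church numerals as Python lambdas.
zero = lambda f: lambda x: x                     # λf.λx. x
suc  = lambda n: lambda f: lambda x: f(n(f)(x))  # λn.λf.λx. f (n f x)

# Inspection helper: apply the numeral to "+1" starting from 0.
to_int = lambda n: n(lambda k: k + 1)(0)

one = suc(zero)            # behaves exactly like λf.λx. f x
print(to_int(one))         # → 1
print(to_int(suc(one)))    # → 2
```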
• There are also "derivation rule", namely composition and recursion. – Yuval Filmus Jan 23 '16 at 13:44
• Of course @YuvalFilmus but as far as I could tell those did not affect the problem. Unless I have misunderstood in which case how do they help? – 44701 Jan 23 '16 at 13:46
## 1 Answer
As you mention, $1$ is primitive recursive according to the following proof:
• $0$ is primitive recursive (axiom)
• $\operatorname{suc}$ is primitive recursive (axiom)
• $1=\operatorname{suc}(0)$ is primitive recursive (composition)
Using this you can show that $1$ is admissible, that is if you add $1$ as an axiom, the resulting set of functions is still the primitive recursive functions.
Note that lambda calculus has absolutely nothing to do with the answer.
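To make the composition step concrete, here is a small Python sketch (the function names and the 0-ary/unary restriction are mine, purely for illustration):

```python
# The two axioms as ordinary functions.
def zero():          # 0-ary constant function 0
    return 0

def suc(k):          # successor function S(k) = k + 1
    return k + 1

# Composition rule, specialized to a unary function after a 0-ary one.
def compose(f, g):
    return lambda: f(g())

one = compose(suc, zero)   # 1 = suc(0): primitive recursive by composition
two = compose(suc, one)    # and so on for every numeral
print(one(), two())        # → 1 2
```

Each application of `compose` stays inside the class of primitive recursive functions, which is exactly why adding `1` as an axiom changes nothing.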
• So just to clarify if I apply suc to zero, then the function that is the result is also primitive recursive? It will then not be possible for that function to introduce Turing complete behavior? – 44701 Jan 23 '16 at 13:54
• That's right – you can use composition and primitive recursion in defining primitive recursive functions. It says so clearly in the definition. Even using these rules, you can't obtain all computable functions. – Yuval Filmus Jan 23 '16 at 13:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9454973936080933, "perplexity": 660.3453627304064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00058.warc.gz"} |
http://swmath.org/software/15554 | # LaplaceDeconv
R package LaplaceDeconv: Laplace Deconvolution with Noisy Discrete Non-Equally Spaced Observations on a Finite Time Interval. Solves the problem of Laplace deconvolution with noisy discrete non-equally spaced observations on a finite time interval based on expansions of the convolution kernel, the unknown function and the observed signal over the Laguerre functions basis. It implements the methodology proposed in the paper "Laplace deconvolution on the basis of time domain data and its application to Dynamic Contrast Enhanced imaging" by F. Comte, C-A. Cuenod, M. Pensky and Y. Rozenholc in ArXiv (http://arxiv.org/abs/1405.7107).
https://byjus.com/maths/angle-bisector-theorem/ | # Angle Bisector Theorem
As per the Angle Bisector theorem, the angle bisector of a triangle bisects the opposite side in such a way that the ratio of the two line-segments is proportional to the ratio of the other two sides. Thus the relative lengths of the opposite side (divided by angle bisector) are equated to the lengths of the other two sides of the triangle. Angle bisector theorem is applicable to all types of triangles.
Class 10 students can read the concept of the angle bisector theorem here along with the proof. Apart from the angle bisector theorem, we will also discuss here the external angle theorem, perpendicular bisector theorem, and the converse of the angle bisector theorem.
## What is Angle Bisector Theorem?
An angle bisector is a straight line drawn from the vertex of a triangle to its opposite side in such a way that it divides the angle into two equal or congruent angles. Now let us see what the angle bisector theorem states.
According to the angle bisector theorem, the angle bisector of a triangle divides the opposite side into two parts that are proportional to the other two sides of the triangle.
## Interior Angle Bisector Theorem
In the triangle ABC, the angle bisector intersects side BC at point D. See the figure below.
As per the Angle bisector theorem, the ratio of the line segment BD to DC equals the ratio of the length of the side AB to AC.
$$\frac{\left | BD \right |}{\left | DC \right |}=\frac{\left | AB \right |}{\left | AC \right |}$$
Conversely, when a point D on the side BC divides BC in a ratio similar to the sides AC and AB, then the angle bisector of ∠ A is AD. Hence, according to the theorem, if D lies on the side BC, then,
$$\frac{\left | BD \right |}{\left | DC \right |}=\frac{\left | AB \right |Sin\angle DAB}{\left | AC \right |Sin\angle DAC}$$
If D is external to the side BC, directed angles and directed line segments are required to be applied in the calculation.
Angle bisector theorem is applied when side lengths and angle bisectors are known.
### Proof of Angle bisector theorem
We can easily prove the angle bisector theorem, by using trigonometry here. In triangles ABD and ACD (in the above figure) using law of sines, we can write;
$$\frac{AB}{BD}=\frac{sin\angle BDA}{sin\angle BAD}$$ ….(1)
$$\frac{AC}{DC}=\frac{sin\angle ADC}{sin\angle DAC}$$ ….(2)
The angles ∠ ADC and ∠ BDA form a linear pair and are hence supplementary.
Since the sines of supplementary angles are equal, therefore,
Sin ∠ BDA = Sin ∠ ADC …..(3)
Also, AD bisects ∠ BAC, so ∠ BAD = ∠ DAC, and thus,
Sin ∠ BAD = Sin ∠ DAC …(4)
Hence, from equations 3 and 4, the RHS of equations 1 and 2 are equal; therefore the LHS are also equal, giving
$$\frac{\left | BD \right |}{\left | DC \right |}=\frac{\left | AB \right |}{\left | AC \right |}$$
Hence, angle bisector theorem is proved.
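The proof above can also be sanity-checked numerically. The following sketch (the triangle coordinates are arbitrary, not from the article) places D on BC in the ratio AB : AC and verifies that AD then makes equal angles with AB and AC:

```python
# Numeric check of the angle bisector theorem on an arbitrary triangle.
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)   # arbitrary triangle

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def unit(p, q):
    n = dist(p, q)
    return ((q[0] - p[0]) / n, (q[1] - p[1]) / n)

AB, AC = dist(A, B), dist(A, C)

# Place D on BC so that BD/DC = AB/AC (the theorem's ratio) ...
t = AB / (AB + AC)
D = (B[0] + t * (C[0] - B[0]), B[1] + t * (C[1] - B[1]))
ratio_sides = AB / AC
ratio_segments = dist(B, D) / dist(D, C)

# ... and verify that AD really bisects angle BAC: the unit vectors
# along AB and AC make equal angles with AD.
uAB, uAC, uAD = unit(A, B), unit(A, C), unit(A, D)
cos1 = uAB[0] * uAD[0] + uAB[1] * uAD[1]
cos2 = uAC[0] * uAD[0] + uAC[1] * uAD[1]
```

The same check passes for any non-degenerate choice of A, B and C.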
Condition:
If the angles ∠ DAC and ∠ BAD are not equal, the equation 1 and equation 2 can be written as:
$$\frac{\left | AB \right |}{\left | BD \right |}\sin\angle BAD = \sin\angle BDA$$
$$\frac{\left | AC \right |}{\left | DC \right |}\sin\angle DAC = \sin\angle ADC$$
Angles ∠ ADC and ∠ BDA are supplementary, hence the RHS of the equations are still equal. Hence, we get
$$\frac{\left | AB \right |}{\left | BD \right |}\sin\angle BAD = \frac{\left | AC \right |}{\left | DC \right |}\sin\angle DAC$$
This rearranges to the generalized form of the theorem.
## Converse of Angle Bisector Theorem
In a triangle, if an interior point is equidistant from the two sides of an angle, then that point lies on the bisector of the angle formed by the two sides.
## Triangle Angle Bisector Theorem
Draw the line through B parallel to AD and extend the side CA to meet it at a point E, so that BE ∥ AD.
Now we can write,
CD/DB = CA/AE (since AD//BE) —-(1)
∠4 = ∠1 [corresponding angles]
∠1 = ∠2 [AD bisects angle CAB]
∠2 = ∠3 [Alternate interior angles]
∠3 = ∠4 [By transitive property]
ΔABE is an isosceles triangle with AE=AB
Now if we replace AE by AB in equation 1, we get;
CD/DB = CA/AB
Hence proved.
## Perpendicular Bisector Theorem
According to this theorem, if a point is equidistant from the endpoints of a line segment in a triangle, then it is on the perpendicular bisector of the line segment.
Alternatively, we can say that the perpendicular bisector divides the given line segment into two equal parts and is perpendicular to it. In a triangle, if the perpendicular drawn from a vertex bisects the opposite side (as in an isosceles triangle), it divides that side into two congruent segments.
In the above figure, the line segment SI is the perpendicular bisector of WM.
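As a numerical sketch of the perpendicular bisector theorem (the coordinates of W and M are arbitrary, and SI is modelled as the line through the midpoint of WM perpendicular to it):

```python
# Every point on the perpendicular bisector of WM is equidistant from W and M.
import math

W, M = (0.0, 0.0), (6.0, 2.0)
mid = ((W[0] + M[0]) / 2, (W[1] + M[1]) / 2)
d = (M[0] - W[0], M[1] - W[1])
perp = (-d[1], d[0])                      # direction of the bisector SI

pts = [(mid[0] + t * perp[0], mid[1] + t * perp[1]) for t in (-1.0, 0.5, 2.0)]
dists = [(math.hypot(p[0] - W[0], p[1] - W[1]),
          math.hypot(p[0] - M[0], p[1] - M[1])) for p in pts]
```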
## External Angle Bisector Theorem
The external angle bisector of a triangle divides the opposite side externally in the ratio of the sides containing the angle. This condition occurs usually in non-equilateral triangles.
### Proof
Given : In ΔABC, AD is the external bisector of ∠BAC and intersects BC produced at D.
To prove : BD/DC = AB/AC
Construction: Draw CE ∥ DA meeting AB at E
Since, CE ∥ DA and AC is a transversal, therefore,
∠ECA = ∠CAD (alternate angles) ……(1)
Again, CE ∥ DA and BP is a transversal, therefore,
∠CEA = ∠DAP (corresponding angles) —–(2)
But AD is the bisector of ∠CAP, therefore,
∠CAD = ∠DAP ……(3)
From (1), (2) and (3), we get ∠CEA = ∠ECA.
As we know, sides opposite to equal angles are equal, therefore,
AE = AC
BD/DC = BA/AE [By the basic proportionality (Thales) theorem]
Substituting AE = AC,
BD/DC = BA/AC
Hence, proved.
## Solved Examples on Angle Bisector Theorem
Go through the following examples to understand the concept of the angle bisector theorem.
Example 1:
Find the value of x for the given triangle using the angle bisector theorem.
Solution:
Given that,
AD = 12, AC = 18, BC = 24, DB = x
According to the angle bisector theorem,
AD/AC = DB/BC
Now substitute the values; we get
12/18 = x/24
x = (2/3) × 24
x = 16
Hence, the value of x is 16.
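The arithmetic of Example 1 can be checked in two lines (a sketch using the example's labels):

```python
# Example 1 check: AD/AC = x/BC  =>  x = AD * BC / AC
AD, AC, BC = 12, 18, 24
x = AD * BC / AC      # 288 / 18 = 16.0
```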
Example 2:
ABCD is a quadrilateral in which the bisectors of angle B and angle D intersects on AC at point E. Show that AB/BC = AD/DC
Solution:
From the given figure, the segment DE is the angle bisector of angle D and BE is the internal angle bisector of angle B.
Hence, using the internal angle bisector theorem, we get
AE/EC = AD/DC ….(1)
Similarly,
AE/EC = AB/BC ….(2)
From equations (1) and (2), we get
AD/DC = AB/BC
Hence, AB/BC = AD/DC is proved.
Example 3.
In a triangle ABC, AE is the bisector of the exterior angle ∠CAD and meets BC produced at E. If AB = 10 cm, AC = 6 cm and BC = 12 cm, find the value of CE.
Solution:
Given : AB = 10 cm, AC = 6 cm and BC = 12 cm
Let CE is equal to x.
By exterior angle bisector theorem, we know that,
BE / CE = AB / AC
(12 + x) / x = 10 / 6
6( 12 + x ) = 10 x [ by cross multiplication]
72 + 6x = 10x
72 = 10x – 6x
72 = 4x
x = 72/4
x = 18
CE = 18 cm
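Example 3's equation can be cross-checked by solving it in closed form (a sketch; the rearrangement is mine):

```python
# (BC + x)/x = AB/AC  =>  AC*(BC + x) = AB*x  =>  x = AC*BC/(AB - AC)
AB, AC, BC = 10, 6, 12
x = AC * BC / (AB - AC)   # 72 / 4 = 18.0
```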
## Frequently Asked Questions on Angle Bisector Theorem
### What does the angle bisector theorem state?
According to angle bisector theorem, an angle bisector of an angle of a triangle divides the opposite side into two parts that are proportional to the other two sides of the triangle.
### What is the formula of angle bisector?
In the triangle ABC, the angle bisector intersects side BC at point D. Thus,
BD/DC = AB/AC
### The angle bisector of vertex angle of an isosceles triangle bisects the opposite side. True or False.
True. An isosceles triangle has two pairs of equal sides with a common vertex. If the angle bisector of vertex angle is drawn, then it divides the opposite side into equal parts.
### How to find the angle bisector of an angle?
Draw an angle, say ∠ABC, with vertex B. Using a compass with B as centre and any radius, draw an arc intersecting BA at P and BC at Q. Now, taking P and Q as centres and keeping the same radius, draw two arcs intersecting each other at R. Join the vertex B to the point R and draw the ray BR. Thus, BR is the angle bisector of ∠ABC.
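The compass construction in the last answer corresponds to a simple vector fact: the bisector ray BR points along the sum of the unit vectors of BA and BC. A sketch with arbitrary rays (my own coordinates):

```python
# The sum of unit vectors along BA and BC gives the bisector direction BR.
import math

B, A, C = (0.0, 0.0), (5.0, 0.0), (3.0, 4.0)   # arbitrary rays BA and BC

def unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

u = unit((A[0] - B[0], A[1] - B[1]))           # along BA
w = unit((C[0] - B[0], C[1] - B[1]))           # along BC
r = unit((u[0] + w[0], u[1] + w[1]))           # direction of the ray BR

cos_a = u[0] * r[0] + u[1] * r[1]
cos_c = w[0] * r[0] + w[1] * r[1]              # BR makes equal angles with both rays
```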
https://www.physicsforums.com/threads/integral-from-distribution-function.728544/ | # Integral from distribution function
1. Dec 15, 2013
### _Matt87_
1. The problem statement, all variables and given/known data
hi, so I've got this distribution function:
$$f(z,p,t)=\frac{1}{2\pi\partial z\partial p}exp(-\frac{[z-v(p)t]^2}{2\partial z^2})exp(-\frac{[p-p_0]^2}{2\partial p^2})$$
where:
$$v(p)=v_0+\alpha(p-p_0)$$
$$v_0=\frac{p_0}{m\gamma_0}$$
$$\alpha=\frac{1}{m\gamma_0^3}$$
I have to calculate the mean position as a function of time, $$\langle z \rangle$$
2. Relevant equations
I've got a hint too which doesn't help me at all to be honest ;] :
All the integrals over z required to calculate the averages can be expressed in terms of $$I_\nu =\int_{-\infty}^{+\infty}\,ds \ s^\nu exp(-s^2)$$ with $I_0=\sqrt{\pi}, I_1=0,\ and \ I_2=\sqrt{\pi}/2$, by substitution and a suitable choice of the order of integration.
3. The attempt at a solution
The presented function is, I think, the distribution of particles in a beam in longitudinal phase space, and in that case
$\int \,d^3p\ f(z,p,t)=n(z,t)$, which is the particle density, and $\int \,d^3z\ n(z,t)=N$, which is the number of particles.
So I think that mean position should look like this:
$$\langle z \rangle=\frac{\int\,d^3z\int\,d^3p \ z\ f(z,p,t)}{N}$$
so
$$\langle z \rangle=\frac{\int\,d^3z\int\,d^3p \ z \ f(z,p,t)}{\int\,d^3z\int\,d^3p\ f(z,p,t)}$$
am I right? if yes, how to start solving this kind of integral. I mean something like even this :
$$\int\,d^3z\frac{1}{2\pi\partial z\partial p} ...$$ those partials go before the integral .. or what?
btw. I attached two files with the actual assignment, and a slide from lecture that is supposed to tell me everything ;)
2. Dec 15, 2013
### vela
Staff Emeritus
$\delta$ doesn't mean $\partial$. $\delta z$ and $\delta p$ are just constants.
3. Dec 15, 2013
### _Matt87_
oh... right :) thanks.
4. Dec 15, 2013
### Ray Vickson
Besides what Vela has pointed out to you, there is also the important issue of whether $\delta z^2$ means $\delta(z^2)$ (an "error" parameter for $z^2$) or whether it means $(\delta z)^2$ (the square of the "error" parameter for $z$). Using simpler symbols like $a$ and $b$ instead of $\delta z$ and $\delta p$ would be a very great help, both to you and to us. (As a general rule, choosing simpler notation helps when dealing with lengthy and complicated calculations; often, it is a good idea to replace the problem's notation by your own before starting on the problem. Then, in the final answer you can put back the original notation. Years of experience have taught me the value of doing that!)
5. Dec 17, 2013
### _Matt87_
I'm not quite sure what $\delta z$ or $\delta p$ are. It doesn't say, although I think that you guys are right: if $f(z,p,t)$ represents a distribution of particles in a beam, then $\delta z$ and $\delta p$ could be the errors. Let's suppose that it's actually $(\delta z)^2$ and $(\delta p)^2$ and that they're $A$ and $B$.
I'm still not sure that for example in this integral $$\int\,d^3z\int\,d^3p\ f(z,p,t)$$
we've got $\,d^3z$ and $\,d^3p$ and if I assume that the beam goes in one direction (z) then can the integral be actually written like this? $$\int\,dz\int\,dp\ f(z,p,t)$$
if yes then is that right? :
$$\int\,dz\int\,dp\ exp(\frac{-z^2+2zV(p)t-V(p)^2t^2}{2A})exp(\frac{-p^2+2pp_0-(p_0)^2}{2B})=\\ exp(-\frac{(p_0)^2}{2B})\int\,dz\int\,dp \exp(-\frac{z^2}{2A})exp(\frac{zV(p)t}{A})exp(-\frac{V(p)^2t^2}{2A})exp(-\frac{p^2}{2B})exp(\frac{pp_0}{B})$$
what about this second exp? it depends on both z and p? (others I can integrate over either z or p)
6. Dec 17, 2013
### Ray Vickson
I suspect that p and z are one-dimensional, not 3-D (because you speak of a beam). If they were 3-dimensional your problem would be an uncomputable mess, because (at least for z) you would need geometric boundaries perpendicular to the beam's axis, etc.
To make further progress you need to actually use the given form of v(p), in order to reduce everything to a couple of computable gaussian-type integrations. If you start by first doing the p-integration, you need to combine all the terms having p in them, whether they come from the first factor or the second one. In other words, your p-integration may also contain z as a "parameter", and that can modify the form of the later z-integration (assuming you integrate first over p and then over z).
Last edited: Dec 17, 2013
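Editorial aside, not part of the thread: since v(p) is linear in p and the p-factor is a Gaussian centred at p_0, one expects ⟨z⟩ = v_0 t. A crude numerical sketch with made-up parameter values (m, γ_0, p_0, t, δz, δp are all illustrative, not from the assignment):

```python
# Midpoint-rule check that <z> = v0 * t for the given distribution.
import math

m, gamma0, p0, t = 1.0, 1.0, 2.0, 1.5      # illustrative values
dz, dp = 0.5, 0.2                          # the constants delta-z, delta-p
v0 = p0 / (m * gamma0)
alpha = 1.0 / (m * gamma0 ** 3)

def f(z, p):
    v = v0 + alpha * (p - p0)
    return (math.exp(-(z - v * t) ** 2 / (2 * dz ** 2))
            * math.exp(-(p - p0) ** 2 / (2 * dp ** 2)))

# Wide symmetric grid; the 1/(2*pi*dz*dp) normalisation cancels in the
# ratio <z> = (integral of z*f) / (integral of f).
N = 300
zs = [v0 * t - 6.0 + (k + 0.5) * 12.0 / N for k in range(N)]
ps = [p0 - 3.0 + (k + 0.5) * 6.0 / N for k in range(N)]
num = 0.0
den = 0.0
for z in zs:
    for p in ps:
        w = f(z, p)
        num += z * w
        den += w
mean_z = num / den
```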
http://mathhelpforum.com/geometry/140863-angle-bisector-pair-straight-lines.html | # Math Help - Angle bisector of a pair of straight lines
1. ## Angle bisector of a pair of straight lines
hello everyone
This is a question on coordinate geometry.
One bisector of the angle between the lines given by a(x-1)^2 + 2h(x-1)y + by^2 = 0 is 2x + y – 2 = 0. We need to find the other bisector.
Any help would be appreciated. Thanks in advance.
2. Hello!
We have $a(x-1)^2 + 2h(x-1)y + by^2 = 0$.
Dividing throughout by $(x - 1)^2$, and solving the quadratic in $\frac{y}{x - 1}$, we get:
$\frac{y}{x - 1} = \frac{-h \pm \sqrt{h^2 - ab}}{b}$, which gives the equations of the pair of straight lines.
The joint equation of bisectors is given by:
$\frac{(x - 1)^2 - y^2}{a - b} = \frac{(x - 1)y}{h}$
(here x in the standard equation found in books is replaced by $(x - 1)$ since the equation of our pair of straight lines is in $(x - 1)$)
On expanding, we get:
$hx^2 + xy(b - a) - hy^2 - 2hx + y(a - b) + h = 0$ --- (1)
One of the lines represented by the above equation is $2x + y - 2 = 0$.
We know that the bisectors of the angles between the lines are perpendicular to each other (Make sketches to convince yourself. Moreover in the joint equation, Coefficient of $x^2$ + Coefficient of $y^2$ = 0 which tells you this fact immediately).
Thus, equation of the other bisector is $x - 2y + k = 0$, where k is the constant to be determined.
(We get the above equation by applying condition of perpendicularity: $m_{1}m_{2} = -1$, where $m_{1}$ and $m_{2}$ are the respective slopes. In our case $m_{1} = -2$ which gives $m_{2} = \frac{1}{2}$, thus giving the equation of the other bisector)
Multiplying these two equations together,
$(2x + y – 2)(x - 2y + k) = 0$
$\Rightarrow 2x^2 - 3xy - 2y^2 + x(2k - 2) + y(4 + k) - 2k = 0$
Comparing with (1),
1) h = 2
2) $2k - 2 = -2h$ $\Rightarrow k = -1$
So, the other bisector is:
$\boxed{x - 2y - 1 = 0}$
3. thanks fardeen, i couldnt have asked for more.
4. Originally Posted by watsmath
hello everyone
This is a question on coordinate geometry.
One bisector of the angle between the lines given by a(x-1)^2 + 2h(x-1)y + by^2 = 0 is 2x + y – 2 = 0. We need to find the other bisector.
Any help would be appreciated. Thanks in advance.
Alternatively,
we can find the point of intersection of the angle bisector 2x+y-2 with the lines
and find the equation of the perpendicular to the bisector that goes through
that same point of intersection.
Hence, y=2-2x is substituted
$a(x-1)^2+2h(x-1)(2-2x)+b(2-2x)^2=0$
$2-2x=2(1-x)=-2(x-1)$
$(2-2x)^2=2(1-x)2(1-x)=-2(x-1)(-2)(x-1)=4(x-1)^2$
then
$a(x-1)^2-4h(x-1)^2+4b(x-1)^2=0$
$(x-1)^2\left[a-4h+4b\right]=0$
x=1
$2x+y-2=0\ \Rightarrow\ 2+y-2=0,\ y=0$
The slope of $2x+y-2=0$ is -2, as $y=-2x+2$
Any perpendicular line has slope $\frac{1}{2}$
The perpendicular bisector contains the point (1,0)
hence it's equation is
$y-0=\frac{1}{2}(x-1)$
$2y=x-1$
$x-2y-1=0$
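A quick numerical cross-check of the final answer (a sketch; h = 2 and b - a = -3 follow from comparing coefficients in the first reply): every point of both bisectors should satisfy the joint bisector equation.

```python
# Joint bisector equation (1) with h = 2 and b - a = -3:
#   2x^2 - 3xy - 2y^2 - 4x + 3y + 2 = 0
def joint(x, y):
    return 2*x*x - 3*x*y - 2*y*y - 4*x + 3*y + 2

on_first = [(0.0, 2.0), (1.0, 0.0), (2.0, -2.0)]    # points with 2x + y - 2 = 0
on_second = [(1.0, 0.0), (3.0, 1.0), (-1.0, -1.0)]  # points with x - 2y - 1 = 0
residuals = [joint(x, y) for (x, y) in on_first + on_second]
```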
https://www.universetoday.com/72412/podcast-planetary-rings/ | # Astronomy Cast Ep. 195: Planetary Rings
Saturn is best known for its rings. This huge and beautiful ring system is easy to spot in even the smallest backyard telescope, so you can imagine they were a surprise when Galileo first noticed them. But astronomers have gone on to find rings around the other gas giant worlds in the Solar System – the differences are surprising.
Or subscribe to: astronomycast.com/podcast.xml with your podcatching software.
## 6 Replies to “Astronomy Cast Ep. 195: Planetary Rings”
1. Aqua says:
A fascinating podcast! Thanks for the time and effort you put into sharing your creations! It is fascinating that our outer Gas Giants all have rings and multitudes of icy satellites!
I would like to add something I found later…. this excerpt from SATURN: MAGNETIC FIELD AND MAGNETOSPHERE
C. T. RUSSELL AND J. G. LUHMANN
Originally published in
Encyclopedia of Planetary Sciences, edited by J. H. Shirley and R. W. Fainbridge,
718-719, Chapman and Hall, New York, 1997.
Magnetosphere
Saturn also has an immense magnetosphere whose linear dimension is about one-fifth that of the Jovian magnetosphere. This magnetosphere is more similar to the terrestrial magnetospheres than that of Jupiter. The magnetosphere traps radiation belt particles, and these particles reach levels similar to those of the terrestrial magnetosphere. On their inner edge the radiation belts are terminated by the main (A, B and C) rings of Saturn, which absorb any particles that encounter them. The radiation belt particles also are absorbed if they collide with one of the moons. Hence there are local minima in the energetic particle fluxes at each of the moons. Unlike Jupiter, but like the Earth, there is no internal energy and mass source deep in the Saturnian magnetosphere. However Titan, which orbits just inside the average location of the magnetopause, in the far reaches of the magnetosphere, has an interesting interaction.
Titan (q.v.) is the most gas-rich moon in the solar system, having an atmospheric mass per unit area much greater than even that of the Earth. At its upper levels this atmosphere becomes ionized through charge exchange, impact ionization and photoionization. This newly created plasma adds mass to the magnetospheric plasma, which attempts to circulate in the Saturnian magnetosphere at a velocity similar to that needed to remain stationary with respect to the rotating planet. Since this velocity is much faster than the orbital velocity of Titan, the added mass slows the ‘corotating’ magnetospheric plasma. The magnetic field of the planet that is effectively frozen to the magnetospheric plasma is then stretched and draped about the planet, forming a slingshot which accelerates the added mass up to corotational speeds. Thus the interaction between the Saturn magnetosphere and the Titan atmosphere resembles the interaction of the solar wind with comets and with Venus (Kivelson and Russell, 1983).
The Saturn magnetosphere, like the other planetary magnetospheres, is an efficient deflector of the solar wind. The solar wind at Saturn flows more rapidly with respect to the velocity of compressional waves than at Jupiter and the terrestrial planets. Thus the shock that forms at Saturn is very intense. Ironically this strength may weaken at least one form of coupling of the solar wind with the magnetosphere, that due to reconnection. However, some aspects of the interaction of the solar wind plasma should be much stronger than at Jupiter or at Earth because of the increased strength of the shock and the scale size of the interaction, which can accelerate charged particles to very high levels.
Saturn is also expected (like Jupiter) to have a very large tail, possibly one that could be dynamic like that of the Earth. However, observations of the tail are quite limited and we must wait until the Cassini mission (q.v.) in the early 21st century for further studies of the magnetic field, magnetosphere and magnetotail, and the answers to many of the questions that the Pioneer and Voyager data have generated.
2. Aqua says:
I like!
“New Recipe For Oxygen On Icy Moons” http://www.spacedaily.com/reports/New_Recipe_For_Oxygen_On_Icy_Moons.html
“Lightning Holds Fingerprint of Antimatter”
http://news.discovery.com/space/gamma-rays-lightning-antimatter.html
“Graphene under stress creates gigantic pseudo-magnetic fields”
http://www.geojunk.com/geographic-topics/other-sciences/139-physics/8852-graphene-under-strain-creates-gigantic-pseudo-magnetic-fields
Could it be that Saturn is constantly making its own rings and icy satellites with a combination of these processes? Have we ‘overlooked’ something?
3. Aqua says:
The second link was added because it refers to a possible high energy source for fusion processes.
The third link about Graphene’s unusual properties begs the question of whether or not similar elemental magnetic properties may occur deep in the atmosphere of Saturn?
The MOST unusual comment in the first post refers to: “On their inner edge the radiation belts are terminated by the main (A, B and C) rings of Saturn, which absorb any particles that encounter them.”
So the rings themselves absorb hot particles… Hmmm….
4. Aqua says:
A recent discovery by the Herschel infrared space observatory has discovered that ultraviolet starlight is the key ingredient for making water in space.
Could not this same process be at work at Saturn?
http://mathhelpforum.com/calculus/127029-finding-one-sided-limit-no-graph.html | # Math Help - finding one sided limit[no graph]
1. ## finding one sided limit[no graph]
sorry, I'm at work atm, but I was wondering if anyone could explain how to find a one-sided limit of a function whose one-sided limit is not positive or negative infinity... I know you can use the sign rule (not sure if that's what it's called) to find whether the limit heads toward infinity or negative infinity without drawing the graph (when there's a vertical asymptote)... sorry if I'm not clear :O -thx
2. Can you post an example of your question?
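No example was posted in the thread, so here is a generic sketch of estimating a finite one-sided limit numerically (my own example, f(x) = |x|/x, which has different one-sided limits at 0):

```python
# Approach 0 from each side along a shrinking sequence of x values.
def f(x):
    return abs(x) / x          # -1 for x < 0, +1 for x > 0

right = [f(10.0 ** -k) for k in range(3, 9)]      # x -> 0+
left = [f(-(10.0 ** -k)) for k in range(3, 9)]    # x -> 0-
```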
https://www.ncbi.nlm.nih.gov/pubmed/%209134019 | Int J Radiat Biol. 1997 Mar;71(3):293-9.
# Chromosome aberrations in human lymphocytes analysed by fluorescence in situ hybridization after in vitro irradiation, and in radiation workers, 11 years after an accidental radiation exposure.
### Abstract
Fluorescence in situ hybridization of metaphase chromosomes was used to determine the yield of symmetric and asymmetric exchange aberrations after in vitro exposure of peripheral lymphocytes to 250 kV X-rays (0-3.0 Gy). For the aberration analyses, chromosomes 2, 4 and 8 and all centromeres were painted. Centric rings amounted to about 8% of the dicentric yield. The proportion of inversions and insertions was about 5% of the total translocations. Regarding the spontaneous levels, the frequency of total induced translocations was higher by a factor of 1.13 than that of dicentrics. The involvement of chromosomes 2, 4 and 8 in translocations is significantly different from the expected ratio concerning physical length (p < 0.01). Furthermore, the frequency of translocations was evaluated in three radiation workers who received an accidental radiation exposure 11 years ago. About 75% of the translocations were identified as complete, in comparison with 79% in the in vitro experiments. In the radiation workers, chromosome 2 again showed an under-representation in translocations, whereas chromosome 4 was over-represented, as in the in vitro experiments. The summarized results for the radiation workers showed a mean genomic translocation frequency of 13.4 per 1000 cells. This frequency is not significantly different from the mean frequency of dicentrics which was determined by conventional FPG staining after detection of the accidental radiation exposure about 11 years ago (8.6 dic/1000 cells). There were, however, some differences between individuals affecting this comparison. The distribution patterns of dicentrics showed over-dispersion, whereas the translocations occurred singly in cells.
PMID:
9134019
[Indexed for MEDLINE]
http://www.chegg.com/homework-help/questions-and-answers/ammonium-bisulfide-forms-ammonia-hydrogen-sulfide-reaction-nh4hs-s-nh3-g-h2s-g-reaction-va-q4174805
http://www.chegg.com/homework-help/questions-and-answers/ammonium-bisulfide-forms-ammonia-hydrogen-sulfide-reaction-nh4hs-s-nh3-g-h2s-g-reaction-va-q4174805 | Ammonium bisulfide, , forms ammonia, , and hydrogen sulfide, , through the reaction
NH4HS(s)----NH3(g)+H2S(g)
This reaction has a value of kp 0.120 at 25 degree celcius .A 5.00-L flask is charged with 0.400 g of pure H2S, at 25 degree celcius,What are the partial pressures of NH3 and H2S at equilibrium, that is, what are the values of pNH3 and pH2S, respectively?
1) What is the mole fraction, χ, of H2S in the gas mixture at equilibrium?
2) What is the minimum mass of NH4HS that must be added to the 5.00-L flask, charged with the 0.400 g of pure H2S(g) at 25 °C, to achieve equilibrium?
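A sketch of the equilibrium calculation (the gas constant, the molar mass of H2S and the algebra below are my own working, not an official solution):

```python
# NH4HS(s) <=> NH3(g) + H2S(g); let x = P_NH3 at equilibrium, so that
# Kp = x * (P0 + x), i.e. x^2 + P0*x - Kp = 0.
import math

Kp = 0.120                # atm^2 at 25 C (given)
T = 298.15                # K
R = 0.08206               # L*atm/(mol*K)  (assumed value)
V = 5.00                  # L
M_H2S = 34.08             # g/mol (assumed molar mass of H2S)

n0 = 0.400 / M_H2S        # initial moles of H2S
P0 = n0 * R * T / V       # initial partial pressure of H2S, atm

x = (-P0 + math.sqrt(P0 * P0 + 4 * Kp)) / 2
P_NH3 = x
P_H2S = P0 + x
chi_H2S = P_H2S / (P_NH3 + P_H2S)   # mole fraction of H2S (question 1)
```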
https://www.transtutors.com/questions/what-is-energy-and-environment-security-briefly-explain-it--2849481.htm | # What is Energy and environment security briefly explain it?
## 1 Approved Answer
Sourabh P
The global energy demand depends on the economic growth and the population dynamics of the countries. It has also been identified that global energy depends on the supplies of fossil fuels; however, these resources are unevenly distributed across the global regions. Presently, the industrialised countries are generating a higher demand for primary energy. On the other hand, the climate change, energy insecurity and the...
https://hackage.haskell.org/package/cyclotomic-0.4.4.1
https://hackage.haskell.org/package/cyclotomic-0.4.4.1 | cyclotomic: A subfield of the complex numbers for exact calculation.
[ gpl, library, math ]
The cyclotomic numbers are a subset of the complex numbers that are represented exactly, enabling exact computations and equality comparisons. They contain the Gaussian rationals (complex numbers of the form p + q i with p and q rational), as well as all complex roots of unity. The cyclotomic numbers contain the square roots of all rational numbers. They contain the sine and cosine of all rational multiples of pi. The cyclotomic numbers form a field, being closed under addition, subtraction, multiplication, and division.
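The Haskell package itself is not shown on this page; as a language-agnostic illustration of what "exact calculation" means, here is a minimal Python sketch of just the Gaussian rationals (the sub-field p + q·i mentioned above — the package works in the much larger cyclotomic field). Using exact fractions makes equality comparisons exact, with no floating-point rounding:

```python
from fractions import Fraction

class GaussianRational:
    """Exact numbers p + q*i with p, q rational (a subfield of C)."""
    def __init__(self, re, im=0):
        self.re, self.im = Fraction(re), Fraction(im)

    def __add__(self, other):
        return GaussianRational(self.re + other.re, self.im + other.im)

    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return GaussianRational(self.re * other.re - self.im * other.im,
                                self.re * other.im + self.im * other.re)

    def __eq__(self, other):
        # exact comparison -- no floating-point rounding involved
        return self.re == other.re and self.im == other.im

i = GaussianRational(0, 1)
one_third = GaussianRational(Fraction(1, 3))

assert i * i == GaussianRational(-1)                    # i^2 = -1, exactly
assert one_third + one_third + one_third == GaussianRational(1)
```

Note that the closing assertions would fail with floats (1/3 is not exactly representable); exactness is the whole point of such a field.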
https://cuasan.wordpress.com/2010/09/09/building-direct-show-filters-in-visual-studio-2008/ | # Building direct show filters in Visual Studio 2008
First off, make sure you have the latest Windows SDK.
The DirectShow base classes are in there now, not in the DirectX SDK.
The required headers are in the DirectShow base classes, in the samples directory of the SDK.
Thus, you need to reference the header files in
C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\multimedia\directshow\baseclasses
You will also need to build that solution and link the lib file:
1. Drop down the Tools menu, and select Options [ from http://www.lavishsoft.com/wiki/index.php/Visual_Studio_Paths ]
2. In the box on the left is a list of option categories. Select “Projects and Solutions” and then the sub-category “VC++ Directories”
3. In the upper right hand corner is a drop-down box that selects a particular set of default directories, including “Executable files”, “Include files”, “Reference files”, “Library files”, and “Source files”. Generally, you only want to add to the “Include files” or “Library files” lists. Select “Include files”
4. In the middle of the right hand side of the window is a list of directories.
1. Add the include path by pressing the “New Line” button above the window, or by pressing “Ctrl-Insert” or clicking under the last entry.
2. A blank entry appears for you to either type the path or navigate by clicking the “…” button.
3. Generally the final path you want will end with a folder called “include”. Enter the path now (e.g., c:\program files\isxdk\include)
1. For direct show base classes, enter C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\multimedia\directshow\baseclasses
5. Select “Library files” in the drop-down box
6. In the same fashion as done for the include file path, add the path to the library files. (e.g., c:\program files\isxdk\lib\vs80)
1. Add the direct show path C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\multimedia\directshow\baseclasses\release_mbcs
7. You’re done, click OK
These includes need to be before the Windows SDK includes, or you will have issues with definitions in refclock.h, as there is another header of the same name in the SDK.
I ran into this too. I found that you need to have the baseclasses directory (samples/multimedia/directshow) *before* the sdk include directory, since they both have a schedule.h file and refclock.h uses <> not ” for the include. I was slightly surprised to see that no-one else had mentioned this. [ http://social.msdn.microsoft.com/Forums/en/windowsdirectshowdevelopment/thread/5da8f0b8-d2a9-4caf-81e1-7f5788fa1c00 ]
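To make the ordering concrete, here is a hypothetical compile line (the file name myfilter.cpp and the exact paths are illustrative, not from the post):

```bat
REM Hypothetical command line -- adjust paths to your install.
REM The baseclasses include path must come BEFORE the SDK's own
REM include path, or the wrong schedule.h gets picked up.
cl /c /D "WIN32" ^
   /I "C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\multimedia\directshow\baseclasses" ^
   /I "C:\Program Files\Microsoft SDKs\Windows\v7.0\Include" ^
   myfilter.cpp
```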
The exact error reads as follows:
WIN32
c1xx : fatal error C1083: Cannot open source file: ‘WIN32’: No such file or directory
After I get this error once, I can recompile again, and since the changed file has now been compiled, it goes directly to linking and doesn’t spit this error out again.
I also found reference to other potential causes for this error online:
• Make sure the compiler sees /D “WIN32”, without /D it will try to compile WIN32. Project + properties, C/C++, Command line.
• /I “” is your problem, it swallows the next /D. I tried it and got the same error message. Not sure how you got it, there’s probably something wrong with Project + properties, C/C++, Additional include directories.
• I’d added an environment variable which had now been removed, thus creating an empty /I “”.
And that’s about all I took note of. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9218555092811584, "perplexity": 4762.353218211755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189490.1/warc/CC-MAIN-20170322212949-00163-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/11930-more-proofs-analysis.html | # Math Help - more proofs in analysis
1. ## more proofs in analysis
Problem 1:
Let S be a non-empty bounded subset of the real numbers. Prove that sup S is unique.
Problem 2:
Let S and T be non-empty bounded subsets of real numbers were S is a subset of T. Prove that
inf T <= inf S <= sup S <= sup T
Problem 3:
A) prove: if x and y are real numbers with x < y, then there are infinitely many rational numbers in the interval [x,y]
B) repeat part A for irrational numbers
2. Here is some help. On #1.
Suppose that A=sup(S) and B=sup(S). If A is not B then one is less than the other: say that B<A. Then by definition of supremum, there is an element, x, in S such that B<x<=A. Do you see the contradiction?
On #3(a). We know that between any two real numbers there is a rational number. Then there is a rational number between x & y, r_1. There is a rational number between x & r_1, r_2. How do we know that r_1 & r_2 are distinct? For each positive integer n>=3, there is a rational number r_n between x & r_{n-1}. How does this prove the statement?
Thank you for your help! I will look into the insight you gave me. These problems were from a test; these are the problems that I got wrong. The professor is willing to give us a fourth of the points missed back if we can come up with the right solution by Monday. So how would you complete the problem?
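Spelling out the contradiction in post 2's hint (a standard completion, added here for reference and not part of the original thread):

```latex
% Suppose A = \sup S and B = \sup S with B < A.  By the hint there is
% an element x of S with
\[
  B < x \le A .
\]
% But B = \sup S makes B an upper bound of S, so x \le B for every
% x \in S --- contradicting B < x.  Hence A = B: the supremum is unique.
```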
4. Originally Posted by luckyc1423
Problem 1:
Let S be a non-empty bounded subset of the real numbers. Prove that sup S is unique.
Problem 2:
Let S and T be non-empty bounded subsets of real numbers were S is a subset of T. Prove that
inf T <= inf S <= sup S <= sup T
Problem 3:
A) prove: if x and y are real numbers with x < y, then there are infinitely many rational numbers in the interval [x,y]
B) repeat part A for irrational numbers
Problem 1: I think Plato handled that one well.
Problem 2:
Let S and T be non-empty bounded subsets of real numbers were S is a subset of T. We have two cases: (1) S=T, and (2) S is a proper subset of T. We show that in either case, inf T <= inf S <= sup S <= sup T.
case 1: If S=T, then it follows immediately that inf T <= inf S <= sup S <= sup T.
case 2: S is a proper subset of T.
Then for all s in S, we have s in T.
By definition: infT<= supT and infS<=supS. Thus it is sufficient to show that (i) infT<=infS and (ii) supS<=supT.
(i) By definition, infT<=t for all t in T, and so infT<=s for all s in S (since for all s in S, we have s in T). Thus infT is a lower bound for S; and since infS is the greatest lower bound of S, we have infT<=infS.
(ii) The proof of supS<=supT is similar to the part (i), we just change "inf" to "sup", "<=" to ">=", and "min" to "max"
Problem 3:
(A) Let x and y be real numbers with x < y. We will show that there are infinitely many rationals between x and y, that is, x<(m/n)<y, for m,n integers and n>0.
x<(m/n)<y so we have nx<m<ny. Since x<y, we have y - x>0. By the Archimedean property, we have n(y - x)>1 for some n in N. Since ny - nx>1, there is at least one integer strictly between nx and ny, so nx<m<ny can hold. Now we exhibit such an m explicitly. By the Archimedean property we have, for some k>max{|nx|,|ny|},
-k < nx < ny < k
So the set {a in Z: -k<a<=k and nx<a} is finite and nonempty. So we can let our m = min{a in Z: -k<a<=k and nx<a}.
Then xn < m, and m - 1 <= xn
so m = (m - 1) + 1 <= xn + 1 < xn + (yn - xn) = yn
so nx < m < ny holds.
(B) Let I be the set of irrational numbers. We show that for x,y in R, x<y, we have i in I, such that x < i < y
lemma: the set {i : i = r + sqrt(2), r in Q} is a subset of I.
Assume to the contrary that r + sqrt(2) is not a member of I. Then r + sqrt(2) = m/n for some m,n integers, n>0. So sqrt(2) = m/n - r, a difference of two rational numbers, and hence itself rational, which is clearly a contradiction. Thus, it must be the case that r + sqrt(2) is in I.
Now we show x < i < y.
Let x, y be real numbers. Then x-sqrt(2) and y-sqrt(2) are also real. By the denseness of Q property, proven in (A) above, we have some r in Q, such that x-sqrt(2) < r < y-sqrt(2). Adding sqrt(2) throughout the system, we obtain:
x < r + sqrt(2) < y. But r + sqrt(2) represents an irrational number i, so x < i < y.
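The constructions in this post can be sketched numerically (an illustration only, not part of the thread; it uses floating-point Archimedean bounds, and the specific x, y and variable names are my own):

```python
from fractions import Fraction
from math import ceil, floor, sqrt

def rational_between(x, y):
    """Archimedean construction from the proof (x < y assumed):
    pick n with n*(y - x) > 1, then the least integer m above n*x
    satisfies n*x < m < n*y, i.e. x < m/n < y."""
    n = ceil(1.0 / (y - x)) + 1        # guarantees n*(y - x) > 1
    m = floor(n * x) + 1               # least integer greater than n*x
    return Fraction(m, n)

x, y = 1.41, 1.42
r = rational_between(x, y)             # a rational strictly between x and y

# Iterating with y replaced by r keeps producing new, smaller rationals,
# which is how one-rational-between-any-two-reals yields infinitely many.
r_smaller = rational_between(x, float(r))

# An irrational in (x, y): shift by sqrt(2), exactly as in part (b).
irr = float(rational_between(x - sqrt(2), y - sqrt(2))) + sqrt(2)
```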
5. Originally Posted by luckyc1423
Problem 1:
Let S be a non-empty bounded subset of the real numbers. Prove that sup S is unique.
Problem 2:
Let S and T be non-empty bounded subsets of real numbers were S is a subset of T. Prove that
inf T <= inf S <= sup S <= sup T
Problem 3:
A) prove: if x and y are real numbers with x < y, then there are infinitely many rational numbers in the interval [x,y]
B) repeat part A for irrational numbers
Here is, perhaps, a better proof for problem 2:
Let S and T be non-empty bounded subsets of real numbers were S is a subset of T.
By definition, infT<=supT and infS<=supS. Thus we have only to prove infT<=infS and supS<=supT.
Let s = infS, and t = infT. Then, by definition of infimum, if m is a lower bound for S, then s>=m. Now let y be in S. Then y is in T, since S is a subset of T. Thus, t <= y for every y in S, since t = infT. So t is a lower bound for S. But s >= t by the definition of infimum. So we have infS>=infT.
Showing supS<=supT is similar to the proof above.
Thanks a lot for that insight, this helps me out a ton! Have a good day!
7. Originally Posted by luckyc1423
Thanks a lot for that insight, this helps me out a ton! Have a good day!
So you followed everything right? What i did for parts 2 and 3 were not just insights, they were actual proofs. You should be able to hand them in verbatim and get full credit. I'd probably hand in the last proof i gave for problem 2. Of course where i wrote "the proof of <blank> is similar to <blank>", i'd actually write out the proof.
Okay, is the first problem a complete proof?
I am only asking this stuff if I can manage to get a fourth of the points back on the problems I missed. I didn't do well on the test; matter of fact, I got a D, but I talked to the prof and he said considering how everyone else sucked on the test too, that would be a low B or a C.
I am usually very good in math but proofs are not my thing. I am praying and hoping I can pull a C out of the class because I am scheduled to graduate this coming May, and I already have a job lined up starting June first, so I have a lot riding on this class.
I usually have homework once a week that is somewhat as complicated as this, so I am always needing help in this course.
Did you see my other post about the binomial coefficient?
9. Jhevon, how does what you did in part 3(a) prove that there are infinitely many rationals between x & y?
10. Originally Posted by Plato
Jhevon, how does what you did in part 3(a) prove that there are infinitely many rationals between x & y?
in 3(a) i showed that i can find integers m,n so that x< m/n < y. the set of integers is infinite, so there are infinite numbers of the form m/n. and of course, anything of the form m/n is rational
in 3 (b) since r is any element of Q, and Q is infinite, it follows there are infinite numbers of the form r + srqt(2). I showed that this number was irrational and that it was between x and y, so there are infinite irrationals of this form between x and y
11. Originally Posted by Jhevon
in 3(a) i showed that i can find integers m,n so that x< m/n < y. the set of integers is infinite, so there are infinite numbers of the form m/n
No that is no proof! You have shown that there is at least one rational.
You have not shown that there are infinitely many.
12. For any a<b real numbers.
Define now new numbers
a-sqrt(2)<b-sqrt(2)
We know there exists a rational "r" thus,
a-sqrt(2)<r<b-sqrt(2)
Thus,
a<r+sqrt(2)<b
Q.E.D.
13. Originally Posted by Plato
No that is no proof! You have shown that there is at least one rational.
You have not shown that there are infinitely many.
Read the proof again, i showed there is at least one m that satisfies nx < m < ny. But there are infinitely many n's that can work. Thus, for infinitely many n's i can find at least one m for each of them, so i end up with infinitely many m/n numbers
14. Originally Posted by Jhevon
Read the proof again, i showed there is at least one m that satisfies nx < m < ny. But there are infinitely many n's that can work. Thus, for infinitely many n's i can find at least one m for each of them, so i end up with infinitely many m/n numbers
The way I proved it for homework, is by contradiction. I assumed that there are only finitely many rational numbers. And I let the set S represent all the rationals, which is finite. Then I show that leads to a contradiction.
15. Originally Posted by luckyc1423
Problem 1:
Let S be a non-empty bounded subset of the real numbers. Prove that sup S is unique.
I do not see how that is a problem. Since "sup S" is used, does it not mean the definition is "well-defined", i.e. it is unique? Hence there is nothing to prove.
https://socratic.org/questions/why-does-vapor-pressure-increase-with-temperature | Chemistry
Topics
Why does vapor pressure increase with temperature?
Jun 10, 2014
As temperature increases, the molecular activity at the surface of the water increases. This means that more molecules of water transition to gas. With more gas molecules there is an increase in the vapor pressure, assuming the volume of the container remains constant.
An increase in temperature would increase the vapor pressure.
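To put rough numbers on this (a textbook sketch that is not part of the original answer; it assumes a constant heat of vaporization for water of about 40.7 kJ/mol, which overestimates low-temperature values somewhat), the Clausius–Clapeyron relation quantifies the increase:

```python
from math import exp

R = 8.314          # gas constant, J/(mol K)
DH_VAP = 40700.0   # approximate heat of vaporization of water, J/mol

def vapor_pressure_kpa(T):
    """Clausius-Clapeyron estimate of water's vapor pressure at T kelvin,
    anchored at the normal boiling point: 101.325 kPa at 373.15 K."""
    return 101.325 * exp(-DH_VAP / R * (1.0 / T - 1.0 / 373.15))

p25 = vapor_pressure_kpa(298.15)   # roughly 3-4 kPa at room temperature
p50 = vapor_pressure_kpa(323.15)
p100 = vapor_pressure_kpa(373.15)  # 101.325 kPa by construction
```

The estimates rise steeply with temperature, matching the qualitative argument above.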
https://arxiv.org/abs/1512.03298 | Title:Analytical determination of orbital elements using Fourier analysis. I. The radial velocity case
Abstract: We describe an analytical method for computing the orbital parameters of a planet from the periodogram of a radial velocity signal. The method is very efficient and provides a good approximation of the orbital parameters. The accuracy is mainly limited by the accuracy of the computation of the Fourier decomposition of the signal which is sensitive to sampling and noise. Our method is complementary with more accurate (and more expensive in computer time) numerical algorithms (e.g. Levenberg-Marquardt, Markov chain Monte Carlo, genetic algorithms). Indeed, the analytical approximation can be used as an initial condition to accelerate the convergence of these numerical methods. Our method can be applied iteratively to search for multiple planets in the same system.
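As a toy illustration of the idea (not the authors' algorithm — just a bare DFT on an idealised, uniformly sampled, noise-free, circular-orbit signal, whereas the paper handles eccentricity, uneven sampling and noise), the orbital period can be read off the strongest Fourier peak:

```python
from math import sin, pi
import cmath

# Synthetic radial-velocity series: one planet on a circular orbit,
# sampled once per day for 200 days (hypothetical numbers).
P_TRUE, K, N = 25.0, 10.0, 200         # period [d], semi-amplitude [m/s]
rv = [K * sin(2 * pi * t / P_TRUE) for t in range(N)]

def dft_amplitude(signal, k):
    """Amplitude of the Fourier component at frequency k/N cycles/day."""
    n = len(signal)
    s = sum(signal[t] * cmath.exp(-2j * pi * k * t / n) for t in range(n))
    return abs(s) / n

amplitudes = [dft_amplitude(rv, k) for k in range(1, N // 2)]  # skip DC
k_peak = max(range(len(amplitudes)), key=amplitudes.__getitem__) + 1
P_est = N / k_peak                      # period of the strongest peak
```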
Comments: accepted to A&A
Subjects: Earth and Planetary Astrophysics (astro-ph.EP)
Journal reference: A&A 590, A134 (2016)
DOI: 10.1051/0004-6361/201527944
Cite as: arXiv:1512.03298 [astro-ph.EP] (or arXiv:1512.03298v2 [astro-ph.EP] for this version)
Submission history
From: Jean-Baptiste Delisle [view email]
[v1] Thu, 10 Dec 2015 16:09:13 UTC (1,088 KB)
[v2] Fri, 8 Apr 2016 12:39:51 UTC (1,560 KB)
http://studyadda.com/notes/4th-class/mathematics/multiplication-and-division/multiplication/8037 | # 4th Class Mathematics Multiplication and Division Multiplication
## Multiplication
Category : 4th Class
### Multiplication
When a quantity is added to itself number of times, use operation of multiplication to find the resulting quantity. Look at the following problems:
• If one bag contains 50 kg cement, what would be the amount of cement in such 45 bags?
• If 1500 books can be placed in one almirah, how many books can be placed in 5 almirah?
• If one row has 20 plants, how many plants are there in 132 rows?
To answer the first question we need to add 50 kg 45 times:
50 kg + 50 kg + 50 kg + 50 kg ........... 45 times
It is a tedious job and the chance of error is high. Therefore, a short-cut method called multiplication is used to solve such problems:
50 kg + 50 kg + 50 kg + 50 kg ........... 45 times = 50 kg x 45
Here the quantity 50 kg is to be added 45 times. Therefore, 50 kg is multiplied by 45.
If one box contains 20 pencils, how many pencils are there in 154 boxes?
Solution:
To answer the question we will have to add the number of pencils contained by all 154 boxes.
Thus 20 + 20 + 20 + 20 ...... 154 times
Or 20 x 154 = 3080
There are 3080 pencils in 154 boxes.
Find the correct option for $7+7+7+7+7+7+7+7+7$.
(a) $7 \times 7$
(b) $7 \times 9$
(c) $7 \times 8$
(d) $9 \times 9$
(e) None of these
Answer: (b). Explanation: $7+7+7+7+7+7+7+7+7 = 7 \times 9$
Terms Related to Multiplication
In the multiplication, the number which is multiplied is known as MULTIPLICAND, the number by which the multiplicand is multiplied is known as MULTIPLIER and the answer or the result of multiplication is known as PRODUCT.
$45 \times 5 = 225$. Here 45 is the multiplicand, 5 is the multiplier and 225 is the product.
Multiplication of a Number by Power of 10
Power of 10 = 10, 100, 1000, 10000 ......
Step 1: Write the required number and the power of 10 in multiplication form, like $4568 \times 100$.
Step 2: Put as many zeroes at the extreme right of the number as are contained in the power of 10. For example, $4568 \times 100 = 456800$
Multiply 89456 by 1000
Solution: The power of 10 contains three zeroes. Therefore, put three zeroes at the extreme right of the number 89456.
Thus $89456 \times 1000 = 89456000$
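The append-the-zeroes rule can be written as a tiny sketch (a hypothetical helper, not part of the original notes):

```python
def multiply_by_power_of_ten(n, power):
    """power must be 10, 100, 1000, ...: append that many zeroes to n."""
    zeros = len(str(power)) - 1          # 1000 contains three zeroes
    return int(str(n) + "0" * zeros)     # e.g. 89456 -> 89456000
```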
Operation on Multiplication
Step 1: Write the multiplier below the multiplicand and put a sign of multiplication.
Like: $$\begin{array}{r} 456608 \\ \times\ 546 \\ \hline \end{array}$$
Step 2: Now multiply the multiplicand with the first digit of multiplier from right and write the result below.
$$\begin{array}{r} 456608 \\ \times\ 546 \\ \hline 2739648 \end{array}$$
Step 3: Now multiply the multiplicand with the second digit of multiplier from right and write the result below leaving unit place
$$\begin{array}{r} 456608 \\ \times\ 546 \\ \hline 2739648 \\ 1826432\phantom{0} \end{array}$$
Step 4: Now multiply the multiplicand with the third digit of multiplier from right and write the result below leaving unit and tens places
$$\begin{array}{r} 456608 \\ \times\ 546 \\ \hline 2739648 \\ 1826432\phantom{0} \\ 2283040\phantom{00} \end{array}$$
Continue the process if more number of digits multiplier contains
Step 5: Now add the results obtained after multiplication
$$\begin{array}{r} 456608 \\ \times\ 546 \\ \hline 2739648 \\ 1826432\phantom{0} \\ 2283040\phantom{00} \\ \hline 249307968 \end{array}$$
Find the product of 908731 and 392
Solution:
$$\begin{array}{r} 908731 \\ \times\ 392 \\ \hline 1817462 \\ 8178579\phantom{0} \\ 2726193\phantom{00} \\ \hline 356222552 \end{array}$$
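The five steps above amount to the schoolbook algorithm: one partial product per digit of the multiplier, each shifted one more place to the left, then summed. A minimal sketch (not part of the original notes):

```python
def long_multiply(a, b):
    """Schoolbook multiplication, following Steps 1-5 above."""
    total = 0
    for shift, digit in enumerate(reversed(str(b))):
        partial = a * int(digit)          # single-digit partial product
        total += partial * 10 ** shift    # shift left, then accumulate
    return total
```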
Word Problem Based on Multiplication
In a park, there are 45672 trees in each row. Find the total number of trees in the park if there are 352 rows in the park.
Solution:
Number of trees in one row = 45672
Thus, number of trees in 352 rows $= 45672 \times 352 = 16076544$
There are 16076544 trees in the park.
B is 489875 times greater than A. If A = 458, find the value of B.
Solution:
B is 489875 times greater than A and A = 458. Thus $B = 458 \times 489875 = 224362750$
http://math.stackexchange.com/questions/37536/are-vector-spaces-and-their-double-duals-in-fact-equal | # Are vector spaces and their double duals in fact equal?
Excuse me, in a course of linear algebra, our assistant stated that, if $\mathbb{V}$ is a finite-dimensional vector space, and $\mathbb{W}$ its double dual, $\mathbb{V}$ and $\mathbb{W}$ are actually equal to each other; I am wondering if this has anything to do with the viewpoint in algebraic number theory that realizes elements, in algebraic number fields, as functions?
In any case, thank you very much.
Regarding your second question, it is true in some informal sense that when we view elements of a commutative ring $R$ as functions on $\text{Spec } R$, we are also viewing the points $\text{Spec } R$ as functions on $R$; in fact they are precisely the morphisms $R \to k$ where $k$ is a field, up to a certain equivalence relation. So I would say that this is not completely unrelated to double duals of vector spaces, although there isn't a direct formal connection since in this case the dual of an object is a different kind of object. This is sometimes summarized in the slogan "algebra is dual to geometry."
This is a great answer, thanks very much. – awllower May 7 '11 at 3:26
Now I don't know which answer I am supposed to accept; they all are excellent. – awllower May 7 '11 at 8:46
Should anyone give me some advice in which one to accept, I will take it. I really do not know how to choose, but I want to do so, so as not to spare all your efforts. – awllower May 8 '11 at 2:12
It really doesn't matter that much. For the sake of not having this question get bumped, just pick one. – Qiaochu Yuan May 8 '11 at 2:35
Firstly, a vector space $V$ and its double dual are never equal. They may or may not be isomorphic, depending on whether $V$ is finite dimensional. Also, the answer to your original question is no; this is not related to viewing elements of a number field $K$ as rational functions on $\text{Spec}(\mathcal{O}_K)$.
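For reference (a standard sketch added here, not part of the original answer), the basis-free evaluation map behind these statements is:

```latex
% The evaluation map: no choice of basis is involved.
\[
  \operatorname{ev} \colon V \longrightarrow V^{**}, \qquad
  \operatorname{ev}(v)(\varphi) = \varphi(v)
  \quad \text{for all } \varphi \in V^{*} .
\]
% ev is linear and injective.  When \dim V < \infty,
% \dim V^{**} = \dim V^{*} = \dim V, so ev is an isomorphism ---
% canonical, but still not literal equality: an element of V^{**}
% is a functional on V^{*}, not a vector of V.
```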
@Zev Chonoles: Yes, that is exactly my point, when I talked to the assistant, but he insisted that they are actually equal; in fact he announced that some books do not explicitly prove that they are equal, but they are actually so; moreover, he said that a subspace of $\mathbb{V}$ and its annihilator are equal! Besides, I am sorry about the dimension of the vector space; they all should be finite. – awllower May 7 '11 at 1:40
And thank you for such an elaborate explanation. – awllower May 7 '11 at 1:44
I think "equal" was a bit of an overstatement. But nonetheless, a vector space is always canonically embedded in its bidual by seeing a vector $v$ as the function $|v \rangle : l \in V^* \mapsto l(v) \in K$. In the finite dimensional case, the dimension being equal, you get a canonical isomorphism (roughly, no arbitrary choice involved in the definition). In a categorical point of view, that's as close as you can get to being equal. – Joel Cohen May 7 '11 at 1:49
@awllower: I'm not sure, but it could be that there was a basic misunderstanding: You can define the annihilator $U^{\perp} \subset V^{\ast}$ of a subspace $U \subset V$ (the elements in $V^{\ast}$ sending $U$ to zero) and you can define the annihilator $W_{\perp} \subset V$ of a subspace $W \subset V^{\ast}$ (the elements of $V$ that are sent to zero by all elements of $W$). Then in fact you always have equality $(V^{\perp})_{\perp} = V$, because $V^{\perp} = \{0\} \subset V^{\ast}$. – t.b. May 7 '11 at 6:58
@awllower: I see. You have canonical identifications $U \cong U^{\ast\ast} \cong (U^{\perp})^{\perp} \subset V^{\ast\ast}$, so Zev's answer and some of the comments explain why the first and the second isomorphisms aren't equalities. However, in the notation of my previous comment, if $U$ is a finite dimensional subspace of $V$ then $(U^{\perp})_{\perp} = U$ (really equality, this time). I fully second Zev's and Joel's comments. – t.b. May 7 '11 at 8:52
https://www.wikihow.com/Calculate-Mass-Percent
# wikiHow to Calculate Mass Percent
Mass percent tells you the percentage of each element that makes up a chemical compound.[1] Finding the mass percent requires the molar mass of the elements in the compound in grams/mole or the number of grams used to make a solution.[2] It is simply calculated using a basic formula dividing the mass of the element (or solute) by the mass of the compound (or solution).
### Method 1 Solving for Mass Percent When Given Masses
1. Define the equation for mass percent of a compound. The basic formula for mass percent of a compound is: mass percent = (mass of chemical/total mass of compound) x 100. You must multiply by 100 at the end to express the value as a percentage.[3]
• Write the equation at the beginning of every problem: mass percent = (mass of chemical/total mass of compound) x 100.
• The mass of the chemical you’re interested in is the mass given in the problem. If the mass isn’t given, refer to the following section about solving for mass percent when the mass is not given.
• The total mass of the compound is calculated by summing the masses of all of the chemicals used to make the compound or solution.
2. Calculate the total mass of the compound. When you know the masses of all the elements or compounds being added together, you simply need to add them together to calculate the total mass of the final compound or solution. This will be the denominator in the mass percent calculation.[4]
• Example 1: What is the percent mass of 5g of sodium hydroxide dissolved in 100g of water?
• The total mass of the compound is the amount of sodium hydroxide plus the amount of water: 100g + 5g for a total mass of 105g.
• Example 2: What masses of sodium chloride and water are needed to make 175 g of a 15% solution?
• In this example, you are given the total mass and the percentage you want, but are asked to find the amount of solute to add to the solution. The total mass is 175 g.
3. Identify the mass of the chemical-in-question. When asked to find the "mass percent", you are being asked to find the mass of a particular chemical (the chemical-in-question) as a percentage of the total mass of all elements. Write down the mass of the chemical-in-question. This mass will be the numerator in the mass percent calculation.[5]
• Example 1: The mass of the chemical-in-question is 5g of sodium hydroxide.
• Example 2: For this example, the mass of the chemical-in-question is the unknown you are trying to calculate.
4. Plug the variables into the mass percent equation. Once you have determined the values for each variable, plug them into the equation.
• Example 1: mass percent = (mass of chemical/total mass of compound) x 100 = (5 g/105 g) x 100.
• Example 2: We want to rearrange the mass percent equation to solve for the unknown mass of the chemical: mass of the chemical = (mass percent*total mass of the compound)/100 = (15*175)/100.
5. Calculate the mass percent. Now that the equation is filled in, simply solve to calculate the mass percent. Divide the mass of the chemical by the total mass of the compound and multiply by 100. This will give you the mass percent of the chemical.
• Example 1: (5/105) x 100 = 0.0476 x 100 = 4.76%. Thus, the mass percent of 5g of sodium hydroxide dissolved in 100g of water is 4.76%.
• Example 2: The rearranged equation to solve for mass of the chemical is (mass percent*total mass of the compound)/100: (15*175)/100 = (2625)/100 = 26.25 grams sodium chloride.
• The amount of water to be added is simply the total mass minus the mass of the chemical: 175 – 26.25 = 148.75 grams water.
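Both worked examples reduce to the same one-line formula; here is a minimal Python sketch of the computation (the function and variable names are my own, not from the article):

```python
def mass_percent(mass_chemical, mass_rest_of_compound):
    """mass percent = (mass of chemical / total mass of compound) x 100"""
    total_mass = mass_chemical + mass_rest_of_compound
    return mass_chemical / total_mass * 100

# Example 1: 5 g of sodium hydroxide dissolved in 100 g of water
print(round(mass_percent(5, 100), 2))  # 4.76

# Example 2: rearranged to find the solute needed for 175 g of a 15% solution
total_mass, percent = 175, 15
solute = percent * total_mass / 100  # 26.25 g sodium chloride
water = total_mass - solute          # 148.75 g water
```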
### Method 2 Solving for Mass Percent When Not Given Masses
1. Define the equation for mass percent of a compound. The basic formula for mass percent of a compound is: mass percent = (molar mass of element/total molecular mass of compound) x 100. The molar mass of an element is the mass of one mole of that element, while the molecular mass is the mass of one mole of the entire compound.[6] You must multiply by 100 at the end to express the value as a percentage.[7]
• Write out the equation at the beginning of every problem: mass percent = (molar mass of element/total molecular mass of compound) x 100.
• Both values have units of grams per mole (g/mol).
• When you aren’t given masses, you can find the mass percent of an element within a compound using molar mass.
• Example 1: Find the mass percent of Hydrogen in a water molecule.
• Example 2: Find the mass percent of carbon in a glucose molecule.
2. Write out the chemical formula. If you are not given the chemical formulas for each compound, you will need to write them out. If you are given the chemical formulas you may skip this step, and proceed to the "Find the mass of each element" step.
• Example 1: Write out the chemical formula for water, H2O.
• Example 2: Write out the chemical formula for glucose, C6H12O6.
3. Find the mass of each element in the compound. Look up the atomic weight of each element in your chemical formulas on the periodic table. The mass of an element can usually be found underneath the chemical symbol. Write down the masses of every element in the compound.[8]
• Example 1: Look up the atomic weight of Oxygen, 15.9994; and the atomic weight of Hydrogen, 1.00794.[9]
• Example 2: Look up the atomic weight of Carbon, 12.0107; Oxygen, 15.9994; and Hydrogen, 1.00794.
4. Multiply the masses by the mole ratio. Identify how many moles (mole ratio) of each element are in your chemical compounds. The mole ratio is given by the subscript number in the compound. Multiply the atomic mass of each element by this mole ratio.[10]
• Example 1: Hydrogen has a subscript of 2 while oxygen has a subscript of 1. Therefore, multiply the atomic mass of Hydrogen by 2, 1.00794 x 2 = 2.01588; and leave the atomic mass of Oxygen as is, 15.9994 (multiplied by one).
• Example 2: Carbon has a subscript of 6, hydrogen, 12, and oxygen, 6. Multiplying each element by its subscript gives you:
• Carbon (12.0107*6) = 72.0642
• Hydrogen (1.00794*12) = 12.09528
• Oxygen (15.9994*6) = 95.9964
5. Calculate the total mass of the compound. Add up the total mass of all the elements in your compounds. Using the masses calculated using the mole ratio, you can calculate the total mass of the compound. This number will be the denominator of the mass percent equation.[11]
• Example 1: Add 2.01588 g/mol (the mass of two moles of Hydrogen atoms) with 15.9994 g/mol (the mass of a single mole of Oxygen atoms) and get 18.01528 g/mol.
• Example 2: Add all of the calculated molar masses together: Carbon + Hydrogen + Oxygen = 72.0642 + 12.09528 + 95.9964 = 180.156 g/mol.
6. Identify the mass of the element-in-question. When asked to find the "mass percent", you are being asked to find the mass of a particular element in a compound, as a percentage of the total mass of all elements. Identify the mass of the element-in-question and write it down. The mass is the mass calculated using the mole ratio. This number is the numerator in the mass percent equation.[12]
• Example 1: The mass of hydrogen in the compound is 2.01588 g/mol (the mass of two moles of hydrogen atoms).
• Example 2: The mass of carbon in the compound is 72.0642 g/mol (the mass of six moles of carbon atoms).
7. Plug the variables into the mass percent equation. Once you have determined the values for each variable, plug them into the equation defined in the first step: mass percent = (molar mass of the element/total molecular mass of compound) x 100.
• Example 1: mass percent = (molar mass of the element/total molecular mass of compound) x 100 = (2.01588/18.01528) x 100.
• Example 2: mass percent = (molar mass of the element/total molecular mass of compound) x 100 = (72.0642/180.156) x 100.
8. Calculate the mass percent. Now that the equation is filled in, simply solve to calculate the mass percent. Divide the mass of the element by the total mass of the compound and multiply by 100. This will give you the mass percent of the element.
• Example 1: mass percent = (2.01588/18.01528) x 100 = 0.1119 x 100 = 11.19%. Thus, the mass percent of Hydrogen atoms in a water molecule is 11.19%.
• Example 2: mass percent = (molar mass of the element/total molecular mass of compound) x 100 = (72.0642/180.156) x 100 = 0.4000 x 100 = 40.00%. Thus, the mass percent of carbon atoms in a molecule of glucose is 40.00%.
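The same arithmetic can be automated once the atomic weights are tabulated; a short Python sketch (the dictionary holds only the three elements used in the examples, and all names are my own):

```python
# Atomic weights from the periodic table, in g/mol
ATOMIC_WEIGHT = {"H": 1.00794, "C": 12.0107, "O": 15.9994}

def mass_percent_of(element, formula):
    """formula maps each element to its subscript, e.g. {"H": 2, "O": 1} for water."""
    molecular_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())
    return ATOMIC_WEIGHT[element] * formula[element] / molecular_mass * 100

water = {"H": 2, "O": 1}             # H2O
glucose = {"C": 6, "H": 12, "O": 6}  # C6H12O6
print(round(mass_percent_of("H", water), 2))    # 11.19
print(round(mass_percent_of("C", glucose), 2))  # 40.0
```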
## Community Q&A
• How do I determine the mass percentage of salt (NaCl) in a water-salt solution?
wikiHow Contributor
In order to calculate the mass percentage, you must know how much salt was added to a certain amount of water. For instance, if you added 50 grams of NaCl to 1000 grams of water, that mass percent would be 50/1050 x 100 = 4.76%.
• How would I calculate the mass percent of a MgBr2 solution that contains 5.80g of MgBr2 in 45.0g of solution?
wikiHow Contributor
Similar to the example given above, simply plug the numbers into the mass percent equation. Calculate total mass of the compound: 5.8 + 45 = 50.8. Divide mass of the chemical by mass of the compound and multiply by 100: 5.8/50.8 x 100 = 11.42%.
• How do I find the mass percent of a compound of a mixture?
wikiHow Contributor
You divide the mass of the element or compound you are looking for by the total mass of the compound. Don't forget to have the same units of mass for both numbers.
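Each answered question above is the same division in different clothing; a quick numeric check of the two worked answers (plain Python, names my own):

```python
def mass_percent(mass_solute, total_mass):
    return mass_solute / total_mass * 100

# 50 g NaCl added to 1000 g water: 50/1050 x 100
print(round(mass_percent(50, 50 + 1000), 2))    # 4.76

# 5.80 g MgBr2 with 45.0 g of solvent, 50.8 g total: 5.8/50.8 x 100
print(round(mass_percent(5.8, 5.8 + 45.0), 2))  # 11.42
```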
https://paperswithcode.com/task/temporal-relation-classification | # Temporal Relation Classification
4 papers with code • 0 benchmarks • 2 datasets
Temporal Relation Classification is the task of classifying the temporal relation between a pair of temporal entities (typically events and temporal expressions). Initial approaches aimed to classify the temporal relation into the thirteen relation types defined by James Allen in his seminal work "Maintaining Knowledge about Temporal Intervals". However, due to ambiguity in the annotation, recent corpora have limited themselves to a subset of those relations.
Notice that although Temporal Relation Classification can be thought of as a subtask of Temporal Relation Extraction, the two tasks can be merged if one adds a label that indicates the absence of a temporal relation between the entities (e.g. "no_relation" or "vague") to Temporal Relation Classification.
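For reference, the thirteen relations from Allen's interval algebra mentioned above can be listed out; label spellings vary across corpora, so treat these as illustrative (six converse pairs plus equality):

```python
# The 13 interval relations from Allen (1983); names here are one common spelling
ALLEN_RELATIONS = [
    "before", "after",
    "meets", "met_by",
    "overlaps", "overlapped_by",
    "starts", "started_by",
    "during", "contains",
    "finishes", "finished_by",
    "equal",
]
assert len(ALLEN_RELATIONS) == 13

# Recent corpora often restrict to a coarser label set, for example:
COARSE_LABELS = ["before", "after", "simultaneous", "vague"]
```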
# Word-Level Loss Extensions for Neural Temporal Relation Classification
In this work, we extend our classification model's task loss with an unsupervised auxiliary loss on the word-embedding level of the model.
# Extracting Temporal Event Relation with Syntax-guided Graph Transformer
Extracting temporal relations (e.g., before, after, and simultaneous) among events is crucial to natural language understanding.
https://georgiastatedar.org/uhdth2gp/e5eaae-equivalent-expressions-with-exponents | Free Exponents Calculator - Simplify exponential expressions using algebraic rules step-by-step.

The pdf exercises include finding the value of expressions with one or more exponential notations in place, comparing two expressions with exponents, matching equivalent expressions, and finding the missing term of an expression. The negative exponents property holds that a negative exponent in the numerator should be moved to the denominator and should become positive. Write whether the expressions are equal or unequal in Part A, and whether one expression is <, >, or = to the other in Part B.
For example, 3 2 × 3-5 = 3-3 = 1/3 3 = 1/27. Get the unbiased info you need to find the right school. flashcard set{{course.flashcardSetCoun > 1 ? integer exponents to generate equivalent numerical expressions. When you multiply monomials with the same base, you add the exponents. Videos, examples, solutions, and lessons to help Grade 8 students know and apply the properties of integer exponents to generate equivalent numerical expressions. Displaying top 8 worksheets found for - 8th Grade Equivalent Expressions Using Exponents. In other words, it means that the base and exponent need to be on the other side of the fraction line and the exponent needs to be positive. This property also works with negative exponents. study Equivalent Expressions Memory Cards. So, x-a = 1/xa. The cards have 2 and 3 step expressions without variables. As you can see, we keep the base the same and add the exponents together. and career path that can help you find the school that's right for you. These lessons cover the entire NUMBER … This is called a power, where 87 is the base and 4 is the exponent. Start studying Equivalent Expressions - exponents, coefficients, and constants. Evaluate the expression. Get Free Negative Exponents Lesson 5 now and use Negative Exponents Lesson 5 immediately to get % off or $off or free shipping. CCSS.Math: HSA.SSE.A.2, HSN.RN.A.2, HSN.RN.A. CCSS.Math.Content.8.EE.A.1 Know and apply the properties of integer exponents to generate equivalent numerical expressions. For example, let's say that we have: We would keep the base the same and add the exponents: The quotients of powers property says that when we divide powers with the same base, we simply subtract the exponents. Start studying Equivalent Expressions - exponents, coefficients, and constants. Warns against confusing "minus" signs on numbers and "minus" signs in exponents. Now you have all the properties of exponents available to help you to simplify the expression: x 1/2 (x 2/3 – x 4/3). 
You may select the problems to contain only positive, negative or a mixture of different exponents. Move all the terms containing a logarithm to the left side of the equation. You might need: Calculator. Practice to your heart's content with expressions containing single exponential notations with whole numbers up to 5 as powers and up to 10 as bases. Determine if two expressions are equivalent in Part A. Learn vocabulary, terms, and more with flashcards, games, and other study tools. The number a is known as base and m is said to be exponent and a m is said to be the exponent form of the number.. Exponents are also called powers or indices.. We read a m as a raised to the power m or just a to the power m. Provides worked examples, showing how the same exercise can be correctly worked in more than one way. flashcard sets, {{courseNav.course.topics.length}} chapters | When multiplying powers with different bases, we need to make the bases match. Equivalent expressions result in the same value. For instance, when we multiply 87 by itself four times (87 * 87 * 87 * 87), we get 57,289,761. Open up to a multitude of arithmetic expressions containing exponents with our free printable evaluating expressions involving exponents worksheets. You are here: Home → Worksheets → Exponents Exponents Worksheets. 9^1=3^2. Purplemath. Know and apply the properties of integer exponents to generate equivalent numerical expressions. | {{course.flashcardSetCount}} Writing algebraic expressions … A power is made up of a base and an exponent. When simplifying expressions with exponents we use the rules for multiplying and dividing exponents, and negative and zero exponents. Expand by moving outside the logarithm. Powers Complex Examples. 
An exponent tells you how many times the base is used as a factor. For instance, multiplying 87 by itself four times (87 * 87 * 87 * 87) gives 57,289,761; with exponents we can write this as 87^4, a power whose base is 87 and whose exponent is 4. Likewise, "five to the third power" is written 5^3. Two expressions are equivalent when they are equal to each other, and the properties of integer exponents let us generate equivalent expressions:

- Product of powers: keep the base and add the exponents, x^a * x^b = x^(a+b). For example, x^2 * x^3 = x^5.
- Quotient of powers: keep the base and subtract the exponents, (a^m)/(a^n) = a^(m-n).
- Power of a power: keep the base and multiply the exponents, (x^a)^b = x^(ab).
- Negative exponents: move the base to the other side of the fraction line and make the exponent positive, x^(-a) = 1/x^a.
- Zero exponent: any nonzero base raised to the zero power equals 1.

When simplifying an expression with exponents, first simplify the powers in parentheses, then carry out the remaining multiplication or division.
https://en.wikipedia.org/wiki/Nijenhuis%E2%80%93Richardson_bracket | # Nijenhuis–Richardson bracket
In mathematics, the algebraic bracket or Nijenhuis–Richardson bracket is a graded Lie algebra structure on the space of alternating multilinear forms of a vector space to itself, introduced by A. Nijenhuis and R. W. Richardson, Jr (1966, 1967). It is related to but not the same as the Frölicher–Nijenhuis bracket and the Schouten–Nijenhuis bracket.
## Definition
The primary motivation for introducing the bracket was to develop a uniform framework for discussing all possible Lie algebra structures on a vector space, and subsequently the deformations of these structures. If V is a vector space and p ≥ −1 is an integer, let
${\displaystyle \operatorname {Alt} ^{p}(V)=({\bigwedge }^{p+1}V^{*})\otimes V}$
be the space of all skew-symmetric (p + 1)-multilinear mappings of V to itself. The direct sum Alt(V) is a graded vector space. A Lie algebra structure on V is determined by a skew-symmetric bilinear map μ : V × VV. That is to say, μ is an element of Alt1(V). Furthermore, μ must obey the Jacobi identity. The Nijenhuis–Richardson bracket supplies a systematic manner for expressing this identity in the form [μ, μ] = 0.
In detail, the bracket is a bilinear bracket operation defined on Alt(V) as follows. On homogeneous elements P ∈ Altp(V) and Q ∈ Altq(V), the Nijenhuis–Richardson bracket [P, Q] ∈ Altp+q(V) is given by
${\displaystyle [P,Q]^{\land }=i_{P}Q-(-1)^{pq}i_{Q}P.}$
Here the interior product iP is defined by
${\displaystyle (i_{P}Q)(X_{0},X_{1},\ldots ,X_{p+q})=\sum _{\sigma \in Sh_{q+1,p}}\mathrm {sgn} (\sigma )P(Q(X_{\sigma (0)},X_{\sigma (1)},\ldots ,X_{\sigma (q)}),X_{\sigma (q+1)},\ldots ,X_{\sigma (p+q)})}$
where the sum is over all (q+1, p)-shuffles of the indices, i.e. permutations ${\displaystyle \sigma }$ of ${\displaystyle \{0,\ldots ,p+q\}}$ such that ${\displaystyle \sigma (0)<\cdots <\sigma (q)}$ and ${\displaystyle \sigma (q+1)<\cdots <\sigma (p+q)}$.
On non-homogeneous elements, the bracket is extended by bilinearity.
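The shuffle-sum formula can be checked numerically on a small example. The sketch below (helper names are my own) implements $i_PQ$ literally and takes $\mu$ to be the cross product on $\mathbb{R}^3$, i.e. the Lie bracket of $\mathfrak{so}(3)$; for a genuine Lie bracket the Jacobi identity forces $[\mu,\mu]=0$:

```python
import itertools
import numpy as np

def perm_sign(perm):
    """Sign of a permutation given as a tuple of distinct integers."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def interior(P, p, Q, q):
    """i_P Q for P in Alt^p(V), Q in Alt^q(V), via the (q+1, p)-shuffle sum."""
    def R(*X):  # X holds the p + q + 1 vector arguments
        total = 0.0
        for first in itertools.combinations(range(p + q + 1), q + 1):
            rest = tuple(i for i in range(p + q + 1) if i not in first)
            inner = Q(*(X[i] for i in first))   # Q eats the first q + 1 slots
            total = total + perm_sign(first + rest) * P(inner, *(X[i] for i in rest))
        return total
    return R

def nr_bracket(P, p, Q, q):
    """Nijenhuis-Richardson bracket [P, Q] of P in Alt^p, Q in Alt^q."""
    iPQ, iQP = interior(P, p, Q, q), interior(Q, q, P, p)
    return lambda *X: iPQ(*X) - (-1) ** (p * q) * iQP(*X)

mu = lambda x, y: np.cross(x, y)   # a genuine Lie bracket on R^3
jac = nr_bracket(mu, 1, mu, 1)     # [mu, mu] should vanish identically

rng = np.random.default_rng(1)
x, y, z = rng.standard_normal((3, 3))
print(np.linalg.norm(jac(x, y, z)))   # ~ 0, up to float rounding
```

For $p = q = 1$ the bracket reduces to twice the Jacobiator $\mu(\mu(x_0,x_1),x_2)-\mu(\mu(x_0,x_2),x_1)+\mu(\mu(x_1,x_2),x_0)$, so the vanishing above is exactly the Jacobi identity for the cross product.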
## Derivations of the ring of forms
The Nijenhuis–Richardson bracket can be defined on the vector valued forms Ω*(M, T(M)) on a smooth manifold M in a similar way. Vector valued forms act as derivations on the supercommutative ring Ω*(M) of forms on M by taking K to the derivation iK, and the Nijenhuis–Richardson bracket then corresponds to the commutator of two derivations. This identifies Ω*(M, T(M)) with the algebra of derivations that vanish on smooth functions. Not all derivations are of this form; for the structure of the full ring of all derivations see the article Frölicher–Nijenhuis bracket.
The Nijenhuis–Richardson bracket and the Frölicher–Nijenhuis bracket both make Ω*(M, T(M)) into a graded superalgebra, but have different degrees.
## References
• Pierre Lecomte, Peter W. Michor, Hubert Schicketanz, The multigraded Nijenhuis–Richardson algebra, its universal property and application J. Pure Appl. Algebra, 77 (1992) 87–102
• P. W. Michor (2001) [1994], "Frölicher–Nijenhuis bracket", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer Science+Business Media B.V. / Kluwer Academic Publishers, ISBN 978-1-55608-010-4
• P. W. Michor, H. Schicketanz, A cohomology for vector valued differential forms Ann. Global Anal. Geom. 7 (1989), 163–169
• A. Nijenhuis, R. Richardson, Cohomology and deformations in graded Lie algebras Bull. Amer. Math. Soc., 72 (1966) pp. 1–29
• A. Nijenhuis, R. Richardson, Deformation of Lie algebra structures, J. Math. Mech. 17 (1967), 89–105.
https://www.physicsforums.com/threads/ever-increasing-amplitude-of-the-induced-impulse.878775/ | # A Ever-increasing amplitude of the induced impulse?
1. Jul 13, 2016
### Ronnu
Hey
I have a little thought experiment which is a bit confusing to me. Maybe somebody with knowledge of signal transmission, or just good knowledge of physics, can help me out.

So let's say there are two parallel conductors whose length is infinite and whose resistance is zero (no losses, and voltage and current are in phase with each other). One of the conductors is connected to a signal generator which creates a single impulse that then travels along the conductor. The impulse length is a lot shorter than the distance between the parallel conductors.
The impulse that travels along one of the conductors (let's say conductor nr. 1) can be viewed as a moving charge. We know that a moving charge generates a magnetic field around it and that this field propagates away from the conductor at the speed of light. So after some time it will reach the other conductor (nr. 2) which lies next to conductor nr. 1. According to the law of induction it will generate a voltage there, in other words the same impulse (although with a smaller amplitude) that was flowing in conductor nr. 1.

This induced impulse will also start to travel along conductor nr. 2 at the speed of light. So it seems the induced impulse should always be in phase with the magnetic field that is generated by conductor nr. 1 and that reaches conductor nr. 2. From that it should follow that the amplitude of the induced impulse in conductor nr. 2 keeps increasing as it travels down the conductor (the magnetic field passing through the conductor keeps inducing voltage, and there are no losses). This is where I get lost, because if the conductors' length is infinite then the amplitude would increase endlessly, and that cannot be. So what am I missing?
2. Jul 14, 2016
### tech99
The pulse on wire 2 will gradually increase until it is almost equal to the pulse on wire 1. The pulse on wire 2 is of opposite phase to that on wire 1. Wire 1 will have donated energy to the new pulse, so that the two are nearly equal but of opposite phase.
3. Aug 5, 2016
### Ronnu
I don't quite understand why it's in an opposite phase. And why wouldn't the impulse on wire 2 grow any further? Also, wouldn't those two impulses be physically apart from each other (not adjacent to each other), since one "lags" a little bit?
4. Aug 5, 2016
### tech99
For close spaced wires, when a voltage is applied to line 1, a current starts to flow and a back EMF is created, by Lenz's Law, in Wire 2. The current impulse on Wire 2 must be in a direction so that its magnetic field opposes that in Wire 1.
As it grows, the impulse on Wire 2 will transfer energy back to Wire 1, until such time as equilibrium is obtained, so it will not grow for ever.
If the spacing is large compared to the wavelength, then I agree there can be a time delay (phase shift). This will correspond to the eventual mode being a mixture of the ordinary two-wire TEM and the "single wire" TM modes.
May I comment that, so far as I am aware, a magnetic field is not known to travel at the speed of light. Only an EM wave can do that. But please correct me.
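tech99's Lenz's-law point can be illustrated with a toy calculation: for a pulse of current $I_1(t)$ on wire 1 and mutual inductance $M$ (both values below are arbitrary), the EMF induced in wire 2 is $-M\,dI_1/dt$, so the induced signal starts with the opposite polarity while $I_1$ is rising:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2001)
I1 = np.exp(-((t - 5.0) ** 2))   # Gaussian current pulse on wire 1 (arbitrary units)
M = 0.3                          # arbitrary mutual inductance

emf2 = -M * np.gradient(I1, t)   # EMF induced in wire 2 by Faraday/Lenz

# While I1 rises, emf2 is negative: the induced current opposes the change.
print(emf2.min() < 0 < emf2.max())   # True: a bipolar induced pulse
```

The negative lobe of `emf2` comes first in time, which is the "opposite polarity" behaviour described above.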
5. Aug 13, 2016
### Ronnu
Thanks for the reply, I understand a little better now how the electric field vector would act on the induced impulse on wire 2. But if the distance between the two wires were the same as the length of the impulse, would the eventual mode still be a mixture of TEM and TM?

I think you misunderstood me about the magnetic field traveling at the speed of light. I know that a magnetic field itself cannot "travel"; it can only exist statically, as a changing magnetic field always involves a changing electric field. So when I said the magnetic field travels at c, I was really referring to the magnetic field that is part of the EM radiation that would propagate from the wire.
Last edited: Aug 13, 2016
6. Aug 13, 2016
### tech99
If the wires are widely spaced, in the order of a wavelength apart for this case, then some E-field lines of force originating from a position on one wire will terminate on the other wire, giving a TEM mode, but others will terminate further along the same wire, giving a single wire TM mode.
https://www.physicsforums.com/threads/please-help-me.317442/

1. Jun 1, 2009
### brittney1993
1. The problem statement, all variables and given/known data
Staff and students at a school have been complaining about the location of the water fountains. You must decide if one or both fountains need to be relocated.
Calculate the measure of angle θ, formed by the "line of sight" from the middle of the double doorway to the water fountains on the opposite wall.
2. Relevant equations
http://img261.imageshack.us/img261/1636/fountain1.png [Broken]
3. The attempt at a solution
am I supposed to use the sine law or the cosine law or something like that? or the sohcahtoa ratios?

please explain to me how to solve this step-by-step so I can understand it and learn from it for my quiz tomorrow, thanks!
Last edited by a moderator: May 4, 2017
2. Jun 1, 2009
### rock.freak667
So you have a triangle, with three sides of which you know the lengths.
You can only use the ratios of sine, cosine and tangent when you have a right angle. So can you use the soh cah toa ratios?
If you can, then use it. If not consider the alternative sine and cosine laws.
Write out the formulas for these and see which one is most useful for this question by checking the variables you know.
3. Jun 1, 2009
### brittney1993
it must be the cosine law, right? how do i go about finding the angle using the cosine law?
4. Jun 1, 2009
### rl.bhat
If opposite side of the angle θ is R and other two sides are P and Q, then
R^2 = P^2 + Q^2 - 2PQ*cosθ.
5. Jun 1, 2009
### Ouabache
Are you supposed to infer from your description where the present location of the water fountains are?
What is the cosine law?
6. Jun 1, 2009
### brittney1993
i got the cosine law for finding angles, it's:

cos A = (b^2 + c^2 - a^2) / (2bc)
7. Jun 1, 2009
### symbolipoint
The answer is represented in this response:
Just identify the corresponding parts.
8. Jun 1, 2009
### brittney1993
but I'm not finding the length, am I? I'm supposed to find the angle. I thought that equation above was for finding lengths.

the opposite side of angle θ would be c, and the other two sides would be a and b
9. Jun 1, 2009
### symbolipoint
Again, what are the corresponding parts? Substitute the corresponding parts into the formula. What quantities are known, and what quantities are unknown? The rest is simple algebra and a small amount of basic Trigonometry.
10. Jun 1, 2009
### brittney1993
R^2 = P^2 + Q^2 - 2PQ*cosθ.
465^2 = P^302 + Q^237 - 2(302)(237)*cosθ
is it like that?
can I also use this formula:
11. Jun 2, 2009
### rl.bhat
465^2 = P^302 + Q^237 - 2(302)(237)*cosθ
It should be
465^2 = 302^2 + 237^2 - 2(302)(237)*cosθ.
You can use the other formula also.
12. Jun 2, 2009
### HallsofIvy
Staff Emeritus
It would be better to use "^" to indicate powers and put things in parentheses:
cos A= (b^2+ c^2- a^2)/(2bc), but yes, that is what everyone has been trying to tell you: start from the standard form of the cosine law and solve for cos A. Or put the numbers given into a^2= b^2+ c^2- 2bc cos(A) first and solve for cos A.
As rl.bhat told you, it would be 465^2= 302^2+ 237^2- 2(302)(237)cos(A), not your
"465^2= P^302+ Q^237- 2(302)(237)cos(A)". I assume that was just "temporary insanity"! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9132845401763916, "perplexity": 2018.2392277148322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806720.32/warc/CC-MAIN-20171123031247-20171123051247-00473.warc.gz"} |
http://math.stackexchange.com/questions/83281/for-rv-x-such-that-x-le-1-a-s-show-existence-of-y-with-values-in-1 | # For rv $X$ such that $|X|\le 1$ a.s, show existence of $Y$ with values in $\{-1,1\}$ and $E(Y|X)=X$
This question comes from "Probability Theory" by Achim Klenke, pg. 198. Any hints are appreciated.
So, you know the value of $X$, say $X=x$ for a given real number $x$ such that $|x|\leqslant1$, and you want to define $Y$ such that $Y=1$ with probability $\frac12(1+x)$ and $Y=-1$ with probability $\frac12(1-x)$.
Assume that you draw an independent random variable $U$ uniform on $(-1,1)$ and that you define $Y$ as $Y=+1$ if $U\in B_x$ and $Y=-1$ if $U\notin B_x$ for a given Borel set $B_x\subseteq(-1,1)$. Then $$\mathrm E(Y\mid X=x)=(+1)\cdot\tfrac12|B_x|+(-1)\cdot\tfrac12|(-1,1)\setminus B_x|=|B_x|-1,$$ hence this construction yields a solution if $|B_x|=1+x$ for every $x$. Can you finish?
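A quick Monte-Carlo sanity check of this construction, taking $B_x=(-1,x]$ so that $|B_x|=1+x$ (the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Y(x, n):
    """Draw n copies of Y given X = x, using U ~ Uniform(-1, 1) and B_x = (-1, x]."""
    U = rng.uniform(-1.0, 1.0, size=n)
    return np.where(U <= x, 1.0, -1.0)

x = 0.3
Y = sample_Y(x, 200_000)
print(Y.mean())   # close to 0.3 = E(Y | X = x)
```

Indeed $E[Y\mid X=x] = P(U\le x)-P(U>x) = \tfrac{1+x}{2}-\tfrac{1-x}{2} = x$, which the sample mean reproduces.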
I went back to make the proof more rigorous. The required random variable is $Y=1_{\{U \in B_X\}} - 1_{\{U \notin B_X\}}$. The tricky part is to formally prove that $P(U \in B_X | X=x) = P(U \in B_x)$ – kbell Nov 20 '11 at 4:57
Proof of the above equality comes as a consequence of the following result: Let $X$ and $Y$ be independent and $\varphi: \mathbb{R}\times\mathbb{R}\rightarrow [0,\infty]$ be measurable. Let $E[\varphi(\cdot,Y)]$ denote the map $x\mapsto E[\varphi(x,Y)]$. Then $E[\varphi(\cdot,Y)]\circ X$ is a version of $E[\varphi(X,Y)\mid X]$ (exercise in "Probability Theory", Heinz Bauer, pg. 128). – kbell Nov 20 '11 at 7:20
http://photonics101.com/relativistic-electrodynamics/lorentz-invariance-four-potential/ | ## Lorentz Invariance of the Four-Potential
Lorentz invariance is at the core of any (special) relativistic theory. Sometimes, however, we encounter expressions that seem to transform in a wrong way. This is especially true for the four-potential calculated from the retarded four-current. Find out why the four-potential is indeed Lorentz invariant!
## Problem Statement
The well-known retarded integrations over the charge density $$\rho\left(x^{\nu}\right)$$ and current density $$\mathbf{j}\left(x^{\nu}\right)$$ give rise to the scalar and vector potential:$\begin{eqnarray*} \phi\left(x^{\nu}\right)&=&\frac{1}{4\pi\varepsilon_{0}}\int\frac{1}{\left|\mathbf{r}-\mathbf{r}^{\prime}\right|}\rho\left(\mathbf{r}^{\prime},t-\left|\mathbf{r}-\mathbf{r}^{\prime}\right|/c\right)dV^{\prime}\ \text{and}\\\mathbf{A}\left(x^{\nu}\right)&=&\frac{\mu_{0}}{4\pi}\int\frac{1}{\left|\mathbf{r}-\mathbf{r}^{\prime}\right|}\mathbf{j}\left(\mathbf{r}^{\prime},t-\left|\mathbf{r}-\mathbf{r}^{\prime}\right|/c\right)dV^{\prime}\ . \end{eqnarray*}$At first glance, these equations do not seem to be Lorentz invariant. Show that the four-potential $$A^{\mu}\left(x^{\nu}\right)=\left(\phi\left(x^{\nu}\right)/c,\,\mathbf{A}\left(x^{\nu}\right)\right)$$ is indeed Lorentz invariant:
• express $$A^{\mu}$$ in terms of the four-current $$j^{\mu}$$,
• re-formulate the integral to go over space and time, $$dV^{\prime}\rightarrow dV^{\prime}dt^{\prime}$$ using the $$\delta$$-distribution and finally,
• $\text{use}\quad\delta\left[f\left(x\right)\right] = \sum_{\text{roots }x_{i}}\frac{\delta\left(x-x_{i}\right)}{\left|f^{\prime}\left(x_{i}\right)\right|}$ to find a formulation of the integral without the $$1/\left|\mathbf{r}-\mathbf{r}^{\prime}\right|$$.
## Hints
Remember that we use SI units in which $$j^{\mu}\left(x^{\nu}\right)=\left(c\rho\left(x^{\nu}\right),\mathbf{j}\left(x^{\nu}\right)\right)$$.
For the last part of the problem: choose the argument of the $$\delta$$-distribution (a function!) such that its derivative is $$\propto\left|\mathbf{r}-\mathbf{r}^{\prime}\right|$$.
## Four-Potential and Four-Current
Let us first write down how we calculate the four-potential for a given coordinate system $$x^{\mu}=\left(ct,\mathbf{r}\right)$$ with respect to some charges and currents. In the necessary retarded formulation we have $A^{\mu}\left(x^{\nu}\right)=\left(\begin{array}{c}\phi\left(x^{\nu}\right)/c\\\mathbf{A}\left(x^{\nu}\right) \end{array}\right) = \frac{1}{4\pi}\int\frac{1}{\left|\mathbf{r}-\mathbf{r}^{\prime}\right|}\left(\begin{array}{c}\rho\left(\mathbf{r}^{\prime},t-\left|\mathbf{r}-\mathbf{r}^{\prime}\right|/c\right)/\varepsilon_{0}c\\\mu_{0}\mathbf{j}\left(\mathbf{r}^{\prime},t-\left|\mathbf{r}-\mathbf{r}^{\prime}\right|/c\right)\end{array}\right)dV^{\prime}\ .$Now we can remember that $$j^{\mu}\left(x^{\nu}\right)=\left(c\rho\left(x^{\nu}\right),\mathbf{j}\left(x^{\nu}\right)\right)$$ and conclude: $A^{\mu}\left(x^{\nu}\right) = \frac{\mu_{0}}{4\pi}\int\frac{j^{\mu}\left(\mathbf{r}^{\prime},t-\left|\mathbf{r}-\mathbf{r}^{\prime}\right|/c\right)}{\left|\mathbf{r}-\mathbf{r}^{\prime}\right|}dV^{\prime}\ ,$since $$\varepsilon_{0}\mu_{0}\equiv1/c^{2}$$ and so $$1/\varepsilon_{0}c=\mu_{0}c$$. This result is of course nice, but we probably knew it already as the solution to the Lorenz-gauge wave equation $$\Box A^{\mu}=\mu_{0}j^{\mu}$$. Nevertheless, the integral does not seem to be Lorentz invariant at first sight. So, what can we do to show it?
## Reformulation of the Integral
We can try to bring the integral into a form in which every ingredient is manifestly invariant, i.e. with no explicit retardation. Ok, we may simply write$A^{\mu}\left(x^{\nu}\right) = \frac{\mu_{0}}{4\pi}\int\frac{j^{\mu}\left(\mathbf{r}^{\prime},t^{\prime}\right)}{\left|\mathbf{r}-\mathbf{r}^{\prime}\right|}\delta\left(c\left(t-t^{\prime}\right)-\left|\mathbf{r}-\mathbf{r}^{\prime}\right|\right)\theta\left(t-t^{\prime}\right)dV^{\prime}d\left(ct^{\prime}\right)$and keep in mind that $$t>t^{\prime}$$ since we only care about the retarded interaction, not the advanced one. This fact is reflected in the Heaviside theta and will be important later on.

Now we want to get rid of anything looking non-Lorentz-invariant. The only chance we have is to try something with the $$\delta$$-distribution - maybe we can get rid of the distance $$\left|\mathbf{r}-\mathbf{r}^{\prime}\right|$$!
## $$\delta$$-Gymnastics and Lorentz-Invariance
Ok, let us use$\delta\left[f\left(x\right)\right] = \sum_{\text{roots }x_{i}}\frac{\delta\left(x-x_{i}\right)}{\left|f^{\prime}\left(x_{i}\right)\right|}$to re-formulate the expression inside the $$\delta$$-distribution, regarding it at a fixed spatial distance $$\left|\mathbf{r}-\mathbf{r}^{\prime}\right|$$ and with variable $$\tau\equiv c\left(t-t^{\prime}\right)$$:$\begin{eqnarray*} \delta\left[\left(x^{\nu}-x^{\nu\prime}\right)^{2}\right]&=&\delta\left[-\tau^{2}+\left|\mathbf{r}-\mathbf{r}^{\prime}\right|^{2}\right]\\&=&\delta\left[\left(-\tau+\left|\mathbf{r}-\mathbf{r}^{\prime}\right|\right)\left(\tau+\left|\mathbf{r}-\mathbf{r}^{\prime}\right|\right)\right]\\&=&\frac{1}{2\left|\mathbf{r}-\mathbf{r}^{\prime}\right|}\left[\delta\left(-\tau+\left|\mathbf{r}-\mathbf{r}^{\prime}\right|\right)+\delta\left(\tau+\left|\mathbf{r}-\mathbf{r}^{\prime}\right|\right)\right]\ . \end{eqnarray*}$Then, we end up with$A^{\mu}\left(x^{\nu}\right) = \frac{\mu_{0}}{2\pi}\int j^{\mu}\left(x^{\nu\prime}\right)\delta\left(\left(x^{\nu}-x^{\nu\prime}\right)^{2}\right)\theta\left(t-t^{\prime}\right)dV^{\prime}d\left(ct^{\prime}\right)\ .$
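Spelling out the middle step: working at fixed $\left|\mathbf{r}-\mathbf{r}^{\prime}\right|$ with $f(\tau)=-\tau^{2}+\left|\mathbf{r}-\mathbf{r}^{\prime}\right|^{2}$, the roots and derivative needed in the $\delta$-identity are

```latex
f(\tau) = -\tau^{2} + \left|\mathbf{r}-\mathbf{r}^{\prime}\right|^{2},
\qquad
\tau_{\pm} = \pm\left|\mathbf{r}-\mathbf{r}^{\prime}\right|,
\qquad
\left|f^{\prime}(\tau_{\pm})\right| = \left|-2\tau_{\pm}\right| = 2\left|\mathbf{r}-\mathbf{r}^{\prime}\right| ,
```

which produces exactly the prefactor $1/2\left|\mathbf{r}-\mathbf{r}^{\prime}\right|$ in front of the two $\delta$-terms.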
Note that the decomposition of the $$\delta$$-distribution resulted in two contributions: retarded, with "$$-\tau$$", and advanced, with "$$+\tau$$". Luckily, the $$\theta$$-distribution fixes the issue. The result is striking, since we end up with just an integration over the four-current, which may look unfamiliar. The given integral is manifestly invariant since every quantity involved is Lorentz-invariant: the integration measure $$dV^{\prime}d\left(ct^{\prime}\right)=d^{4}x^{\prime}$$, the four-current, and $$\left(x^{\nu}-x^{\nu\prime}\right)^{2}=\eta_{\mu\nu}\left(x^{\mu}-x^{\mu\prime}\right)\left(x^{\nu}-x^{\nu\prime}\right)$$.
https://www.mysciencework.com/publication/show/magnetic-impurities-gapless-fermi-systems-7c9283fa | # On Magnetic Impurities in Gapless Fermi Systems
Authors
Type
Published Article
Publication Date
Jun 11, 1996
Submission Date
Jun 11, 1996
Identifiers
arXiv ID: cond-mat/9606075
Source
arXiv
In ordinary metals, antiferromagnetic exchange between conduction electrons and a magnetic impurity leads to screening of the impurity spin below the Kondo temperature, $T_K$. In systems such as semimetals, small-gap semiconductors and unconventional superconductors, a reduction in available conduction states near the chemical potential can greatly depress $T_K$. The behavior of an Anderson impurity in a model with a power-law density of states, $N(\epsilon) \sim |\epsilon|^r$, $r>0$, for $|\epsilon| < \Delta$, where $\Delta \ll D$, is studied using the non-crossing approximation. The transition from the Kondo singlet to the magnetic ground state can be seen in the behavior of the impurity magnetic susceptibility $\chi$. The product $T\chi$ saturates at a finite value at low temperature for coupling smaller than the critical one. For sufficiently large coupling $T\chi \to 0$, as $T \to 0$, indicating complete screening of the impurity spin.
http://tullo.ch/articles/elements-of-statistical-learning/ | # Elements of Statistical Learning - Chapter 2 Solutions
The Stanford textbook Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is an excellent (and freely available) graduate-level text in data mining and machine learning. I'm currently working through it, and I'm putting my (partial) exercise solutions up for anyone who might find them useful. The first set of solutions is for Chapter 2, An Overview of Supervised Learning, introducing least squares and k-nearest-neighbour techniques.
### Exercise Solutions
See the solutions in PDF format (source) for a more pleasant reading experience. This webpage was created from the LaTeX source using the LaTeX2Markdown utility - check it out on GitHub.
## Overview of Supervised Learning
#### Exercise 2.1
Suppose that each of the $K$ classes has an associated target $t_k$, which is a vector of all zeroes, except a one in the $k$-th position. Show that classifying to the largest element of $\hat y$ amounts to choosing the closest target, $\min_k \| t_k - \hat y \|$, if the elements of $\hat y$ sum to one.
#### Proof
The assertion is equivalent to showing that $$\text{argmax}_i \hat y_i = \text{argmin}_k \| t_k - \hat y \| = \text{argmin}_k \|\hat y - t_k \|^2$$ by monotonicity of $x \mapsto x^2$ and symmetry of the norm.
WLOG, let $\| \cdot \|$ be the Euclidean norm $\| \cdot \|_2$. Let $k = \text{argmax}_i \hat y_i$, so that $\hat y_k = \max_i \hat y_i$. Note that then $\hat y_k \geq \frac{1}{K}$, since $\sum_i \hat y_i = 1$.
Then for any $k' \neq k$ (note that $y_{k'} \leq y_k$), we have \begin{align} \| y - t_{k'} \|_2^2 - \| y - t_k \|_2^2 &= y_k^2 + \left(y_{k'} - 1 \right)^2 - \left( y_{k'}^2 + \left(y_k - 1 \right)^2 \right) \\ &= 2 \left(y_k - y_{k'}\right) \\ &\geq 0 \end{align} since $y_{k'} \leq y_k$ by assumption.
Thus we must have
$$\label{eq:6} \text{argmin}_k \| t_k - \hat y \| = \text{argmax}_i \hat y_i$$
as required.
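A quick numeric check of this equivalence (random $\hat y$ drawn on the probability simplex, targets $t_k$ taken as the standard basis vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
y_hat = rng.dirichlet(np.ones(K))   # entries are nonnegative and sum to one
targets = np.eye(K)                 # row k is the target t_k

largest = np.argmax(y_hat)
closest = np.argmin(np.linalg.norm(targets - y_hat, axis=1))
print(largest == closest)           # True
```

The agreement holds for every draw, since the proof above only uses $\sum_i \hat y_i = 1$.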
#### Exercise 2.2
Show how to compute the Bayes decision boundary for the simulation example in Figure 2.5.
#### Proof
The Bayes classifier is $$\label{eq:2} \hat G(X) = \text{argmax}_{g \in \mathcal G} P(g | X = x ).$$
In our two-class example $\textbf{orange}$ and $\textbf{blue}$, the decision boundary is the set where
$$\label{eq:5} P(g=\textbf{blue} | X = x) = P(g =\textbf{orange} | X = x) = \frac{1}{2}.$$
By the Bayes rule, this is equivalent to the set of points where
$$\label{eq:4} P(X = x | g = \textbf{blue}) P(g = \textbf{blue}) = P(X = x | g = \textbf{orange}) P(g = \textbf{orange})$$
As we know $P(g)$ and $P(X=x|g)$, the decision boundary can be calculated.
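The mixture densities used for Figure 2.5 are not reproduced here, but the computation is easy to sketch with stand-in class-conditionals. Below, each class is a single 2-D unit-covariance Gaussian with equal priors (my own substitute for the book's Gaussian mixtures); the decision boundary is the set where the posterior equals $1/2$:

```python
import numpy as np

def gauss_pdf(x, mean):
    """Density of N(mean, I_2) at x."""
    d = np.asarray(x, dtype=float) - mean
    return np.exp(-0.5 * d @ d) / (2.0 * np.pi)

mu_blue, mu_orange = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
p_blue = 0.5                        # equal priors

def posterior_blue(x):
    num = gauss_pdf(x, mu_blue) * p_blue
    return num / (num + gauss_pdf(x, mu_orange) * (1.0 - p_blue))

print(posterior_blue([0.0, 0.0]))        # 0.5: on the decision boundary
print(posterior_blue([1.0, 0.0]) > 0.5)  # True: classified blue
```

With the actual mixture densities, the same posterior computation traces out the wiggly Bayes boundary of the figure.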
#### Exercise 2.3
Derive equation (2.24)
TODO
#### Exercise 2.4
Consider $N$ data points uniformly distributed in a $p$-dimensional unit ball centered at the origin. Show that the median distance from the origin to the closest data point is given by $$\label{eq:7} d(p, N) = \left(1-\left(\frac{1}{2}\right)^{1/N}\right)^{1/p}$$
#### Proof
Let $r$ be the median distance from the origin to the closest data point. Then $$\label{eq:8} P(\text{All N points are further than r from the origin}) = \frac{1}{2}$$ by definition of the median.
Since the points $x_i$ are independently distributed, this implies that $$\label{eq:9} \frac{1}{2} = \prod_{i=1}^N P(\|x_i\| > r)$$ and as the points $x_i$ are uniformly distributed in the unit ball of volume $K$, we have that \begin{align} P(\| x_i \| > r) &= 1 - P(\| x_i \| \leq r) \\ &= 1 - \frac{Kr^p}{K} \\ &= 1 - r^p. \end{align}
Putting these together, we obtain that $$\label{eq:10} \frac{1}{2} = \left(1-r^p \right)^{N}$$
and solving for $r$, we have $$\label{eq:11} r = \left(1-\left(\frac{1}{2}\right)^{1/N}\right)^{1/p}$$
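A Monte Carlo check of the formula (an illustrative sketch; the choices of $p$, $N$, and the trial count are arbitrary): sample points uniformly in the unit $p$-ball via a random direction scaled by $U^{1/p}$, and compare the empirical median of the closest distance against $d(p, N)$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, N, trials = 3, 20, 2000

def closest_distance():
    # N points uniform in the unit p-ball: random direction times U^(1/p)
    g = rng.normal(size=(N, p))
    dirs = g / np.linalg.norm(g, axis=1, keepdims=True)
    radii = rng.uniform(size=(N, 1)) ** (1.0 / p)
    return np.linalg.norm(dirs * radii, axis=1).min()

empirical = np.median([closest_distance() for _ in range(trials)])
formula = (1 - 0.5 ** (1 / N)) ** (1 / p)  # d(p, N) from the exercise
```

The two numbers agree to within Monte Carlo error.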
#### Exercise 2.5
Consider inputs drawn from a spherical multivariate-normal distribution $X \sim N(0,\mathbf{1}_p)$. The squared distance from any sample point to the origin has a $\chi^2_p$ distribution with mean $p$. Consider a prediction point $x_0$ drawn from this distribution, and let $a = \frac{x_0}{\| x_0\|}$ be an associated unit vector. Let $z_i = a^T x_i$ be the projection of each of the training points on this direction.
Show that the $z_i$ are distributed $N(0,1)$ with expected squared distance from the origin 1, while the target point has expected squared distance $p$ from the origin.
Hence for $p = 10$, a randomly drawn test point is about 3.1 standard deviations from the origin, while all the training points are on average one standard deviation along direction a. So most prediction points see themselves as lying on the edge of the training set.
#### Proof
Let $z_i = a^T x_i = \frac{x_0^T}{\| x_0 \|} x_i$. Then $z_i$ is a linear combination of $N(0,1)$ random variables, and hence normal, with expectation zero and variance
$$\label{eq:12} \text{Var}(z_i) = a^T \text{Var}(x_i)\, a = \|a\|^2 = 1$$ as the vector $a$ has unit length and $\text{Var}(x_i) = \mathbf{1}_p$.
The squared distance of the target point $x_0$ from the origin is $\|x_0\|^2 = \sum_{j=1}^p x_{0j}^2$, a sum of $p$ independent squared standard normals, and hence follows a $\chi^2_p$ distribution with mean $p$, as required.
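The claims are straightforward to confirm by simulation (illustrative, with $p = 10$ and a hypothetical sample size):

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 10, 5000
x0 = rng.normal(size=p)              # the prediction (target) point
a = x0 / np.linalg.norm(x0)          # associated unit vector
X = rng.normal(size=(N, p))          # training points ~ N(0, 1_p)
z = X @ a                            # projections z_i = a^T x_i

mean_z = z.mean()                         # ~ 0
mean_z2 = np.mean(z ** 2)                 # ~ 1: one unit along a
mean_d2 = np.mean((X ** 2).sum(axis=1))   # ~ p: chi^2_p has mean p
```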
#### Exercise 2.6
1. Derive equation (2.27) in the notes.
2. Derive equation (2.28) in the notes.
#### Proof
1. We have \begin{align} EPE(x_0) &= E_{y_0 | x_0} E_{\mathcal{T}}(y_0 - \hat y_0)^2 \\ &= \text{Var}(y_0|x_0) + E_{\mathcal T}[\hat y_0 - E_{\mathcal T} \hat y_0]^2 + [E_{\mathcal T} \hat y_0 - x_0^T \beta]^2 \\ &= \text{Var}(y_0 | x_0) + \text{Var}_\mathcal{T}(\hat y_0) + \text{Bias}^2(\hat y_0). \end{align} We now treat each term individually. Since the least squares estimator is unbiased, the third term is zero. Since $y_0 = x_0^T \beta + \epsilon$ with $\epsilon$ an $N(0,\sigma^2)$ random variable, we must have $\text{Var}(y_0|x_0) = \sigma^2$. The middle term is more difficult. First, note that \begin{align} \text{Var}_{\mathcal T}(\hat y_0) &= \text{Var}_{\mathcal T}(x_0^T \hat \beta) \\ &= x_0^T \text{Var}_{\mathcal T}(\hat \beta) x_0 \\ &= E_{\mathcal T}\, x_0^T \sigma^2 (\mathbf{X}^T \mathbf{X})^{-1} x_0 \end{align} by conditioning (3.8) on $\mathcal T$.
2. TODO
#### Exercise 2.7
Consider a regression problem with inputs $x_i$ and outputs $y_i$, and a parameterized model $f_\theta(x)$ to be fit with least squares. Show that if there are observations with tied or identical values of $x$, then the fit can be obtained from a reduced weighted least squares problem.
#### Proof
This is relatively simple. WLOG, assume that the first two observations are identical, $x_1 = x_2$ and $y_1 = y_2$, and that all other observations are unique. Then our RSS function in the general least-squares estimation is
$$\label{eq:13} RSS(\theta) = \sum_{i=1}^N \left(y_i - f_\theta(x_i) \right)^2 = \sum_{i=2}^N w_i \left(y_i - f_\theta(x_i) \right)^2$$
where $$\label{eq:14} w_i = \begin{cases} 2 & i = 2 \\ 1 & \text{otherwise} \end{cases}$$
Thus we have converted our least squares estimation into a reduced weighted least squares estimation. If a tied $x$ carries differing responses $y_1 \neq y_2$, the same reduction holds after replacing them by their average, since $(y_1 - f)^2 + (y_2 - f)^2 = 2(\bar y - f)^2 + \text{const}$. This minimal example is easily generalised.
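The reduction can be verified numerically: ordinary least squares with a duplicated observation yields the same coefficients as the reduced weighted problem with weight 2 on the tie and the averaged response (a sketch; the data are random):

```python
import numpy as np

rng = np.random.default_rng(0)
# duplicated observation: rows 0 and 1 share the same x
X = rng.normal(size=(6, 2))
X[1] = X[0]
y = rng.normal(size=6)

# ordinary least squares on the full data
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# reduced weighted least squares: keep the tied x once with weight 2,
# replacing its two responses by their average
Xr = X[1:]                       # rows 1..5; row 1 equals row 0
yr = y[1:].copy()
yr[0] = 0.5 * (y[0] + y[1])      # average response at the tied x
W = np.diag([2.0, 1.0, 1.0, 1.0, 1.0])
beta_red = np.linalg.solve(Xr.T @ W @ Xr, Xr.T @ W @ yr)
```

The normal equations of the two problems coincide term by term, so `beta_full` and `beta_red` agree up to floating point.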
#### Exercise 2.8
Suppose that we have a sample of $N$ pairs $(x_i, y_i)$, drawn i.i.d. from a distribution such that \begin{align} x_i \sim h(x), \\ y_i = f(x_i) + \epsilon_i, \\ E(\epsilon_i) = 0, \\ \text{Var}(\epsilon_i) = \sigma^2. \end{align} We construct an estimator for $f$ linear in the $y_i$, $$\label{eq:16} \hat f(x_0) = \sum_{i=1}^N \ell_i(x_0; \mathcal X) y_i$$ where the weights $\ell_i(x_0; \mathcal X)$ do not depend on the $y_i$, but do depend on the training sequence of $x_i$, denoted by $\mathcal X$.
1. Show that the linear regression and $k$-nearest-neighbour regression are members of this class of estimators. Describe explicitly the weights $\ell_i(x_0; \mathcal X)$ in each of these cases.
2. Decompose the conditional mean-squared error $$\label{eq:17} E_{\mathcal Y | \mathcal X} \left( f(x_0) - \hat f(x_0) \right)^2$$ into a conditional squared bias and a conditional variance component. $\mathcal Y$ represents the entire training sequence of $y_i$.
3. Decompose the (unconditional) MSE $$\label{eq:18} E_{\mathcal Y, \mathcal X}\left(f(x_0) - \hat f(x_0) \right)^2$$ into a squared bias and a variance component.
4. Establish a relationship between the square biases and variances in the above two cases.
#### Proof
1. Recall that the estimator for $f$ in the linear regression case is given by $$\label{eq:19} \hat f(x_0) = x_0^T \beta$$ where $\beta = (X^T X)^{-1} X^T y$. Then we can simply write $$\label{eq:20} \hat f(x_0) = \sum_{i=1}^N \left( x_0^T (X^T X)^{-1} X^T \right)_i y_i.$$ Hence $$\label{eq:21} \ell_i(x_0; \mathcal X) = \left( x_0^T (X^T X)^{-1} X^T \right)_i.$$ In the $k$-nearest-neighbour representation, we have $$\label{eq:22} \hat f(x_0) = \sum_{i=1}^N \frac{y_i}{k} \mathbf{1}_{x_i \in N_k(x_0)}$$ where $N_k(x_0)$ represents the set of $k$-nearest-neighbours of $x_0$. Clearly, $$\label{eq:23} \ell_i(x_0; \mathcal X) = \frac{1}{k} \mathbf{1}_{x_i \in N_k(x_0)}$$
2. TODO
3. TODO
4. TODO
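The two weight formulas derived in part 1 can be checked directly on random data (a sketch assuming a full-rank design matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, k = 30, 3, 5
X = rng.normal(size=(N, p))
y = rng.normal(size=N)
x0 = rng.normal(size=p)

# linear regression weights: l_i = (x0^T (X^T X)^{-1} X^T)_i
l_ols = x0 @ np.linalg.inv(X.T @ X) @ X.T
pred_ols = l_ols @ y                 # equals x0^T beta_hat

# k-NN weights: 1/k on the k nearest training points, 0 elsewhere
nearest = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
l_knn = np.zeros(N)
l_knn[nearest] = 1.0 / k
pred_knn = l_knn @ y                 # equals the k-NN average
```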
#### Exercise 2.9
Compare the classification performance of linear regression and $k$-nearest neighbour classification on the zipcode data. In particular, consider only the 2's and 3's, and $k = 1, 3, 5, 7, 15$. Show both the training and test error for each choice.
#### Proof
Our implementation in R and graphs are attached.
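The attached implementation is in R. As an illustration of the same procedure, here is a Python sketch on synthetic two-class data (the zipcode digits are not bundled here, so the dimensions, class means, and sample sizes below are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for the 2's-vs-3's task: two Gaussian classes
# in 5 dimensions, labels coded 0/1 so linear regression can be fit
n, p = 200, 5
Xtr = np.vstack([rng.normal(0, 1, (n, p)), rng.normal(1, 1, (n, p))])
ytr = np.r_[np.zeros(n), np.ones(n)]
Xte = np.vstack([rng.normal(0, 1, (n, p)), rng.normal(1, 1, (n, p))])
yte = np.r_[np.zeros(n), np.ones(n)]

def knn_err(k, Xa, ya, Xb, yb):
    # classification error of k-NN trained on (Xa, ya), tested on (Xb, yb)
    d = np.linalg.norm(Xb[:, None, :] - Xa[None, :, :], axis=2)
    votes = ya[np.argsort(d, axis=1)[:, :k]].mean(axis=1)
    return np.mean((votes > 0.5) != yb)

# linear regression classifier: threshold fitted values at 1/2
beta = np.linalg.lstsq(np.c_[np.ones(2 * n), Xtr], ytr, rcond=None)[0]
lin_te = np.mean(((np.c_[np.ones(2 * n), Xte] @ beta) > 0.5) != yte)
knn_te = {k: knn_err(k, Xtr, ytr, Xte, yte) for k in (1, 3, 5, 7, 15)}
```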
#### Exercise 2.10
Consider a linear regression model with $p$ parameters, fitted by OLS to a set of training data $(x_i, y_i)_{1 \leq i \leq N}$ drawn at random from a population. Let $\hat \beta$ be the least squares estimate. Suppose we have some test data $(\tilde x_i, \tilde y_i)_{1 \leq i \leq M}$ drawn at random from the same population as the training data. If $R_{tr}(\beta) = \frac{1}{N} \sum_{i=1}^N \left(y_i - \beta^T x_i \right)^2$ and $R_{te}(\beta) = \frac{1}{M} \sum_{i=1}^M \left( \tilde y_i - \beta^T \tilde x_i \right)^2$, prove that $$\label{eq:15} E(R_{tr}(\hat \beta)) \leq E(R_{te}(\hat \beta))$$ where the expectation is over all that is random in each expression.
http://physics.stackexchange.com/questions/57980/concentration-of-proteins?answertab=active | # Concentration of Proteins
You release a billion protein molecules at position $x = 0$ in the middle of a narrow capillary test tube. The molecules’ diffusion constant is $10^{−6} \ cm^2 s^{−1}$. An electric field pulls the molecules to the right (larger x) with a drift velocity of $1\ μms^{−1}$. Nevertheless, after $80 \ s$ you see that a few protein molecules are actually to the left of where you released them. How could this happen? What is the ending concentration exactly at $x = 0$?
[Note: This is a one-dimensional problem, so you should express your answer in terms of the concentration integrated over the cross-sectional area of the tube, a quantity with dimensions $L ^{−1}.$]
In this problem I tried to use the advection-diffusion equation $$\frac{\partial C}{\partial t} = D \frac{\partial ^2 C}{\partial x^2} - v\frac{\partial C}{\partial x}$$ where $v$ is the drift velocity (the sign of the drift term is chosen so that the molecules drift toward larger $x$).
Firstly, does this equation help me, or am I in the wrong woods? Secondly, I have trouble finding the initial conditions, since we have two parameters.
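Some molecules end up at $x < 0$ because the diffusive spread $\sigma = \sqrt{2Dt}$ exceeds the drift displacement $vt$ here. A numerical sketch, assuming the standard drifting-Gaussian point-source solution $C(x,t) = \frac{N_0}{\sqrt{4\pi Dt}}\, e^{-(x - vt)^2/4Dt}$ of the advection-diffusion equation (my reconstruction, not part of the question):

```python
import math

# Parameters in CGS units, as given in the problem
D = 1e-6        # diffusion constant, cm^2/s
v = 1e-4        # drift velocity: 1 um/s = 1e-4 cm/s
t = 80.0        # elapsed time, s
N0 = 1e9        # molecules released at x = 0

def C(x):
    # drifting Gaussian: line concentration, units cm^-1
    return N0 / math.sqrt(4 * math.pi * D * t) \
        * math.exp(-(x - v * t) ** 2 / (4 * D * t))

sigma = math.sqrt(2 * D * t)   # ~0.013 cm spread vs. drift v*t = 0.008 cm
c0 = C(0.0)                    # concentration at the release point
```

Since $\sigma > vt$, a sizable tail of the Gaussian still sits at $x < 0$ after 80 s, and `c0` comes out around $2.6 \times 10^{10}\ \mathrm{cm}^{-1}$.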
https://arxiv.org/abs/gr-qc/0302088 | gr-qc
(what is this?)
(what is this?)
# Title: The derivation of the coupling constant in the new Self Creation Cosmology
Abstract: It has been shown that the new Self Creation Cosmology theory predicts a universe with a total density parameter of one third, yet spatially flat, which would appear to accelerate in its expansion. Although requiring a moderate amount of 'cold dark matter', the theory does not have to invoke the hypotheses of inflation, 'dark energy', 'quintessence' or a cosmological constant (dynamical or otherwise) to explain observed cosmological features. The theory also offers an explanation for the observed anomalous Pioneer spacecraft acceleration, an observed spin-up of the Earth, and a problematic variation of G observed from analysis of the evolution of planetary longitudes. It predicts results identical to those of General Relativity in standard experimental tests, but three definitive experiments do exist to falsify the theory. In order to match the predictions of General Relativity, and observations in the standard tests, the new theory requires the Brans-Dicke omega parameter that couples the scalar field to matter to be -3/2. Here it is shown how this value of the coupling parameter is determined by the theory's basic assumptions, and that it is therefore an inherent property of the principles upon which the theory is based.
Comments: LaTeX, 29 pages, no figures. Gravity Probe B predicted geodetic precession corrected for Thomas precession to 2/3 that of GR, or 4.4096 arcsec/yr.
Subjects: General Relativity and Quantum Cosmology (gr-qc); Astrophysics (astro-ph); High Energy Physics - Theory (hep-th)
Cite as: arXiv:gr-qc/0302088 (or arXiv:gr-qc/0302088v3 for this version)
## Submission history
From: Garth Antony Barber
[v1] Fri, 21 Feb 2003 17:19:00 GMT (18kb)
[v2] Sat, 22 Feb 2003 07:51:13 GMT (18kb)
[v3] Thu, 15 Dec 2005 19:34:04 GMT (18kb) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8455954194068909, "perplexity": 1876.3753235738855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171281.53/warc/CC-MAIN-20170219104611-00474-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://www.groundai.com/project/effectively-stable-dark-matter/ | Effectively Stable Dark Matter
# Effectively Stable Dark Matter
Clifford Cheung and David Sanford
Walter Burke Institute for Theoretical Physics,
California Institute of Technology, Pasadena, CA 91125
July 17, 2019
###### Abstract
We study dark matter (DM) which is cosmologically long-lived because of standard model (SM) symmetries. In these models an approximate stabilizing symmetry emerges accidentally, in analogy with baryon and lepton number in the renormalizable SM. Adopting an effective theory approach, we classify DM models according to representations of $SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_B \times U(1)_L$, allowing for all operators permitted by symmetry, with weak scale DM and a cutoff at or below the Planck scale. We identify representations containing a neutral long-lived state, thus excluding dimension four and five operators that mediate dangerously prompt DM decay into SM particles. The DM relic abundance is obtained via thermal freeze-out or, since effectively stable DM often carries baryon or lepton number, asymmetry sharing through the very operators that induce eventual DM decay. We also incorporate baryon and lepton number violation with a spurion that parameterizes hard breaking by arbitrary units $(\Delta B, \Delta L)$. However, since proton stability precludes certain spurions, a residual symmetry persists, maintaining the cosmological stability of certain DM representations. Finally, we survey the phenomenology of effectively stable DM as manifested in probes of direct detection, indirect detection, and proton decay.
preprint: CALT-2014-034
## I Introduction
Dark matter (DM) is elegantly accounted for by a neutral, cosmologically long-lived particle beyond the standard model (SM). In most circumstances, however, DM stability is ensured by a symmetry that is simply imposed by fiat. While as much can be expected from DM effective theories, similarly ad hoc choices are often needed for their ultraviolet completions, for instance in theories of supersymmetry or extra dimensions where $R$-parity or $KK$-parity are assumed.
In contrast, the SM implements stability with less contrivance: charge stabilizes the electron, while angular momentum stabilizes the neutrino. The proton is not guaranteed to be stable, but like DM it is cosmologically long-lived, with current limits bounding its lifetime to be vastly longer than the age of the universe. Famously, the SM gauge symmetry explicitly forbids baryon and lepton number violation at the renormalizable level, suggesting an argument for proton stability from effective theory. This mechanism, whereby an approximate symmetry arises as the byproduct of existing symmetries, is sometimes referred to as an accidental symmetry.
In this paper we argue that DM, like the proton, can be cosmologically stable as an accident of SM symmetries. For our analysis we consider the symmetry group
$$SU(3)_C \times SU(2)_L \times U(1)_Y \times U(1)_B \times U(1)_L, \qquad (1)$$
which is exactly preserved in the renormalizable SM to all orders in perturbation theory. Beyond the renormalizable level, $U(1)_B$ and $U(1)_L$ may be approximate or exact, depending on whether they are gauged in the ultraviolet. While SM quark and lepton flavor are approximate symmetries, we will not consider them here.
In our analysis, we enumerate models according to the quantum numbers of DM under the SM symmetry group. We take the stance of effective theory throughout: all interactions, renormalizable and non-renormalizable, are to be included with order one coefficients. We assume a DM mass near the weak scale and an effective theory cutoff at or below the Planck scale.
To start, we discard all representations without a component neutral under color and electromagnetism. We then discard all representations that permit DM decay into SM particles via dimension four or five operators. In such cases even a Planck scale cutoff is insufficient to prevent cosmologically prompt DM decay. On the other hand, decays induced by dimension six operators hover right at the boundary of current bounds from indirect detection, assuming a cutoff near the scale of grand unification (GUT) Arvanitaki et al. (2009).
Applying these criteria, we enumerate models of effectively stable DM, focusing on all possible fermionic and scalar DM candidates whose leading decays occur at dimension six or seven. These results are summarized in Tabs. 1 and 2 for dimension six and Tabs. 3 and 4 for dimension seven decays, respectively.
Our approach resonates with that of minimal DM Cirelli et al. (2006, 2007) but with the crucial difference that we incorporate both $U(1)_B$ and $U(1)_L$ and faithfully assume the presence of all interactions not forbidden by symmetry. Without $U(1)_B$ and $U(1)_L$, DM in representations smaller than the quintuplet of $SU(2)_L$ will promptly decay to SM particles via renormalizable interactions Cirelli et al. (2006, 2007); Di Luzio et al. (2015). This is consistent with our own finding that effectively stable DM in small representations of $SU(2)_L$ must carry $B$ or $L$. For exactly this reason many of these models may be generated through the mechanism of asymmetric DM Kaplan et al. (2009); Zhao and Zurek (2014). Previous authors have also considered accidental stabilization of DM by flavor symmetries Lavoura et al. (2013) or new gauge symmetries Antipin et al. (2015).
Finally, we consider the possibility that $U(1)_B$ and $U(1)_L$ are merely approximate. For this analysis we introduce a spurion parameterizing the hard breaking of baryon and lepton number by arbitrary units $\Delta B$ and $\Delta L$, respectively. In many models of effectively stable DM, this induces prompt decays. However, not all values of $(\Delta B, \Delta L)$ are phenomenologically safe: the non-observation of proton decay suggests that dimension six operators of the form $qqq\ell$ should be forbidden, thus precluding hard breaking by units of $(\Delta B, \Delta L) = (1,1)$. Remarkably, given arbitrary breaking by any allowed $(\Delta B, \Delta L)$, we still maintain a handful of viable candidates for effectively stable DM. In these models, DM is long-lived because of the SM gauge symmetry together with the stability of the proton.
The remainder of this paper is as follows. In Sec. II we enumerate viable models of effectively stable DM using the criteria described above. In Sec. III we study experimental constraints on these theories from direct detection, indirect detection, and proton decay. Finally, we summarize our results and discuss future directions in Sec. IV.
## Ii Classification of Models
In this section we enumerate representations of the SM symmetry group with a DM component that is neutral, cosmologically long-lived, and generated with the observed relic abundance. We adopt an effective theory perspective in which any operator allowed by symmetry is present with a strength set by a sub-Planckian cutoff.
### ii.1 Neutrality
The quantum numbers of DM are parameterized by a discrete choice of representations for $SU(3)_C$ and $SU(2)_L$ together with a continuous choice of charges for $U(1)_Y$, $U(1)_B$, and $U(1)_L$. For simplicity, we focus on pure gauge eigenstates, assuming that DM is either a complex scalar or a Dirac fermion.
To begin, we restrict to representations that have a neutral component under the unbroken SM gauge symmetry. Thus, DM is a color singlet. For DM that is an $n$-plet under $SU(2)_L$, we require that $|Y| \leq (n-1)/2$, where the hypercharge $Y$ is quantized to an integer or half-integer value if $n$ is odd or even, respectively.
Charge neutrality places no constraints on the $B$ or $L$ charges of DM. While irrational values of $B$ and $L$ are allowed a priori, this is literally equivalent to enforcing DM number as an exact symmetry of the Lagrangian. Furthermore, exact global symmetries are known to conflict with black hole no-hair theorems Banks and Seiberg (2011). For these reasons we restrict to rational values of $B$ and $L$.
While $U(1)_B$ and $U(1)_L$ are symmetries of the renormalizable SM, they may of course be spontaneously or explicitly broken in the full theory. To parameterize these effects conservatively, we introduce effective hard breaking of $B$ and $L$ into the low energy theory with a dimensionless spurion for symmetry breaking by units of $(\Delta B, \Delta L)$. For the remainder of this section we assume that $B$ and $L$ are exact, but return to the issue of explicit breaking later on in Sec. III.
### ii.2 Stability
We now determine the leading operators that are allowed by symmetry and mediate DM decay. An operator that induces DM decay takes the form
$$\mathcal{O}_{\rm DM} = X \, \mathcal{O}_{\rm SM}, \qquad (2)$$
where $X$ is the fermion or scalar field that contains DM and $\mathcal{O}_{\rm SM}$ is an operator composed entirely of SM fields. For later convenience, we define
$$N = [\mathcal{O}_{\rm DM}], \qquad (3)$$
to be the dimension of the DM decay operator. The quantum numbers of $\mathcal{O}_{\rm SM}$ are equal and opposite to those of $X$, so to enumerate all decay operators it suffices to determine all operators of a given charge and operator dimension.
We define a fiducial decay rate into SM particles,
$$\Gamma(X \to \mathrm{SM}) \sim \frac{M}{4\pi}\left(\frac{M}{\Lambda}\right)^{2(N-4)}, \qquad (4)$$
corresponding to two-body decay via $\mathcal{O}_{\rm DM}$. Depending on the precise operator, this decay may be three-body or higher. Moreover, decays into SM fermions will involve flavor structures that may further suppress the width. In any case, the fiducial decay rate in Eq. (4) should be taken as an overestimate.
By definition, a cosmologically stable DM particle has a lifetime of order the age of the universe,
$$\tau(X \to \mathrm{SM}) \gtrsim 10^{18}\ \mathrm{sec} \quad (\text{age of universe}). \qquad (5)$$
However, this bound is weaker than experimental constraints on cosmic ray production from DM decay into positrons Ibarra et al. (2014), gamma rays Ando and Ishiwata (2015), antiprotons Giesen et al. (2015), and neutrinos Rott et al. (2014). These limits all place similar constraints on the DM lifetime, of order
$$\tau(X \to \mathrm{SM}) \gtrsim 10^{26}\ \mathrm{sec} \quad (\text{indirect detection}), \qquad (6)$$
for $M$ of order the weak scale. While CMB bounds are also stronger than the one from the age of the universe, they are still weaker than indirect search bounds by orders of magnitude Cline and Scott (2013).
Comparing Eq. (6) to Eq. (4), we obtain upper bounds on the cutoff of higher dimension operators. For dimension five, six, and seven operators, this implies a schematic lower bound on the cutoff,
$$\Lambda \gtrsim \begin{cases} \left(\dfrac{M}{1\ \mathrm{TeV}}\right)^{3/2} 10^{29}\ \mathrm{GeV}, & N=5 \\[4pt] \left(\dfrac{M}{1\ \mathrm{TeV}}\right)^{5/4} 10^{16}\ \mathrm{GeV}, & N=6 \\[4pt] \left(\dfrac{M}{1\ \mathrm{TeV}}\right)^{7/6} 10^{12}\ \mathrm{GeV}, & N=7. \end{cases} \qquad (7)$$
Since Eq. (4) is an overestimate, this bound on the cutoff is conservative.
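The bounds in Eq. (7) follow from inverting the fiducial rate against the lifetime limit; a back-of-envelope numerical check (a sketch, using $\hbar \approx 6.58 \times 10^{-25}$ GeV·s and a 1 TeV benchmark mass of our choosing):

```python
import math

hbar = 6.582e-25    # GeV * s
M = 1000.0          # DM mass in GeV (weak scale benchmark)
tau_min = 1e26      # s, indirect-detection bound on the DM lifetime

def lambda_min(N):
    # invert Gamma ~ (M / 4 pi) (M / Lambda)^(2(N-4)) against
    # tau = hbar / Gamma >= tau_min to get the minimum cutoff
    return M * (M * tau_min / (4 * math.pi * hbar)) ** (1.0 / (2 * (N - 4)))

# N = 5 demands a super-Planckian cutoff; N = 6 lands near the GUT scale
bounds = {N: lambda_min(N) for N in (5, 6, 7)}
```

For these inputs the minimum cutoffs come out near $10^{29}$, $10^{16}$, and $10^{12}$ GeV for $N = 5, 6, 7$, matching Eq. (7).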
For dimension five operators, Eq. (7) requires a cutoff far above the Planck scale for weak scale DM, so hereafter we consider only DM decay via dimension six and seven operators. Notably, dimension six operators are a particularly intriguing portal for weak scale DM because GUT-suppressed dimension six operators induce decays that lie just at the boundary of current experimental limits. For dimension seven operators, the bound on $\Lambda$ is even smaller. Fig. 1 shows the lower bound on $\Lambda$ as a function of $M$ for dimension six and seven decay operators. The bounds for dimension five operators are not shown because they require a cutoff above the Planck scale. Also shown are the natural scale for DM mass near the weak scale and a cutoff at the GUT scale, as well as regions excluded by experimental limits from LEP and direct detection, to be discussed later.
To summarize, a sub-Planckian cutoff implies that cosmologically stable, weak scale DM forbids all dimension four or five decay operators. This is a stringent constraint on the DM representation. For example, since a pure singlet scalar can couple to any dimension four operator in the SM, the associated DM will promptly decay at dimension five. At the opposite extreme is scalar or fermionic DM with extremely large charges. In this case lower dimensional operators are forbidden simply because so many SM fields have to be included in the decay operator just to preserve the symmetry. As noted in Cirelli et al. (2006), this occurs for $n$-plets of $SU(2)_L$ with large values of $n$.
All of our fermionic DM candidates and most of our scalar DM candidates carry non-zero $B$ or $L$. The reason is that most representations with zero $B$ and $L$ typically admit very low dimension decay operators. For example, whenever $\mathcal{O}_{\rm SM}$ contains a SM fermion bilinear that is $B$ and $L$ neutral, it is always possible to construct a lower dimension operator by replacing that bilinear with fewer SM fields of the same quantum numbers. As a result, viable DM tends to carry $B$ or $L$, or reside in a large representation of $SU(2)_L$ so that gauge invariance requires many Higgs fields in the decay operator.
Large hypercharge can serve the same role in terms of stability, but as noted earlier, a neutral DM component requires that $|Y| \leq (n-1)/2$ for an $n$-plet under $SU(2)_L$. This in turn bounds the net charges of the SM fields that couple to $X$. In particular, it largely excludes decay operators involving $e^c$, since this SM field has a sizable hypercharge and no $SU(2)_L$ charge.
Tabs. 1 and 2 list all fermionic and scalar DM representations, respectively, whose leading decay is mediated by a dimension six operator. Every representation is color neutral, and every representation carries $B$ or $L$ except the scalar sextet. For the fermionic DM we forbid both members of the Dirac pair from decaying via dimension five or lower operators. Also shown are various attributes of each model regarding direct detection and explicit $B$ and $L$ breaking, to be discussed later. Tabs. 3 and 4 are the same as Tabs. 1 and 2 except they apply to DM representations whose leading decay is mediated by a dimension seven operator.
### ii.3 Relic Abundance
Let us comment briefly on the origin of the DM relic abundance. DM with non-trivial SM gauge charges will be equilibrated with the thermal plasma in the early universe, in which case DM annihilations are mediated by gauge interactions whose strength is fixed by the charges of $X$. In general there may be additional interactions between the DM and SM fields, either through the Higgs boson or through direct couplings to quarks and leptons. However, the DM charges are chosen by construction to eliminate renormalizable decays, so the latter are highly suppressed. On the other hand, the former are allowed for scalar DM, and from an effective theory perspective should be included if they are permitted by the symmetries of the theory.
If gauge interactions dominate DM annihilations, then the relic abundance is fixed by thermal freeze-out Cirelli et al. (2006, 2007). Since the strength of the gauge interactions is known, there is only one free parameter available to tune the relic abundance: the DM mass. As discussed in Cirelli et al. (2006, 2007), to obtain DM of the observed relic abundance today requires $M$ of order the weak scale or slightly above, depending on the DM spin and quantum numbers. Of course, direct couplings to the Higgs Cheung and Sanford (2014) can strongly affect the relic abundance. However, annihilations mediated by the Higgs generally add to the DM annihilation cross-section, depleting the DM abundance and thus requiring a larger DM mass.
Another approach to DM generation is asymmetric DM, which fits quite naturally into the effective theory picture presented here. In models of asymmetric DM there is an approximate DM number that is conserved along with $B$ or $L$. In the early universe, some set of interactions, for example higher dimension operators or sphalerons, break some combination of $B$, $L$, and DM number, thus sharing particle asymmetries among the quarks, leptons, and DM. As we have shown, the majority of effectively stable DM candidates carry non-zero $B$ or $L$, which implies that the very same operators that mediate DM decay can also share the asymmetries in the early universe. A similar observation was noted about asymmetric DM models in Zhao and Zurek (2014). Alternatively, these kinds of higher dimension operators can also be applied in the reverse direction to produce the baryon asymmetry Cheung and Ishiwata (2013).
If the decay operator is in chemical equilibrium in the early universe, then the DM and or asymmetries will be shared according to their charges Kaplan et al. (2009). As is well-known, for efficient sharing this typically requires light asymmetric DM of order the GeV scale rather than the weak scale. For DM that is a gauge singlet, such as the first entry in Tab. 1, this offers a viable model of asymmetric DM, provided additional annihilation modes to deplete the symmetric abundance.
On the other hand, models with DM carrying SM gauge charges are excluded by LEP Heister et al. (2002); Abdallah et al. (2003) for GeV scale masses. In this case, asymmetric DM is possible only if asymmetry sharing through the decay operator is inefficient, so the abundance of DM is less than the amount prescribed by chemical equilibrium, thus requiring a larger DM mass.
## Iii Experimental Constraints
In this section we survey the bounds on effectively stable DM from laboratory and telescope experiments. As we will see, bounds from direct detection and proton decay have an interesting connection to explicit breaking of and .
### iii.1 Direct Detection
Experimental bounds on spin-independent DM-nucleon scattering are extremely stringent. In particular, Dirac or complex scalar DM with a spin-independent coupling to the $Z$ boson is excluded by many orders of magnitude. Such an interaction arises when $Y \neq 0$, so direct detection constraints are trivially evaded by DM with vanishing hypercharge.
In Tabs. 1 and 2 and Tabs. 3 and 4, one column indicates whether a model has zero hypercharge and is thus automatically safe from direct detection. For the remaining models, hypercharged DM can evade these direct detection constraints if we allow for additional structures, which we now discuss.
In particular, limits on spin-independent scattering via the $Z$ boson are null if there is even a tiny mass splitting between the components of the Dirac fermion or complex scalar Hall et al. (1998); Tucker-Smith and Weiner (2001). In this case, scattering through the $Z$ boson is inelastic, requiring additional energy to excite the incoming DM particle into a neighboring mass eigenvalue. In particular, scattering is kinematically forbidden provided the mass splitting exceeds
$$\delta \geq \frac{\beta^2 \mu}{2}, \qquad (8)$$
where $\beta \sim 10^{-3}$ (roughly 300 km/sec) is the DM velocity and $\mu$ is the reduced mass of the DM-nucleus system. For the typical atomic weights of targets in experiments like CDMS Agnese et al. (2013), XENON100 Aprile et al. (2012), and LUX Akerib et al. (2014), for weak scale $M$ this translates into a bound on $\delta$ of order tens of keV. For very small $M$, the reduced mass will decrease and so too will the lower bound on $\delta$, but these regions of light DM are excluded by LEP for DM carrying electroweak charges Heister et al. (2002); Abdallah et al. (2003).
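As a rough numerical illustration of Eq. (8) (the xenon nuclear mass and the 1 TeV DM mass below are benchmark choices, not values fixed by the text):

```python
# Minimal mass splitting for inelastic Z-mediated scattering
# to be kinematically forbidden: delta >= beta^2 * mu / 2
beta = 1e-3          # DM velocity, v/c ~ 300 km/s
m_xe = 122.0         # GeV, approximate xenon nuclear mass
M = 1000.0           # GeV, weak-scale DM benchmark

mu = M * m_xe / (M + m_xe)           # DM-nucleus reduced mass, GeV
delta_min = 0.5 * beta ** 2 * mu     # minimal splitting, GeV
delta_min_keV = delta_min * 1e6      # same, in keV
```

For these inputs the required splitting lands in the tens of keV.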
However, inducing the requisite mass splitting requires new operators that explicitly break the DM particle number associated with Dirac fermions or complex scalars. For DM candidates that carry $B$ or $L$, this implies explicit breaking of $U(1)_B$ or $U(1)_L$. However, the spurion responsible for explicit breaking enters with twice the $B$ and $L$ charge of the DM. Consequently, even with these splitting operators, there is still an unbroken discrete subgroup of $U(1)_B$ or $U(1)_L$ that maintains DM stability.
For example, consider a fermionic DM particle that is a doublet of hypercharge $Y = 1/2$. The leading operator that can split its components is the dimension five operator $(XH)^2/\Lambda$. A mass splitting sufficient to evade direct detection requires
$$\Lambda \lesssim \frac{v^2}{\delta} \sim 10^{9}\ \mathrm{GeV}, \qquad (9)$$
which is a low cutoff in the context of DM stability. For even larger hypercharges, the level splitting operator involves more Higgs fields, and the requisite cutoff is even lower, dropping to roughly 30 TeV for $Y = 1$.
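These cutoff estimates are easy to reproduce numerically (a sketch assuming the splitting scalings $\delta \sim v^2/\Lambda$ for a hypercharge-$1/2$ doublet and $\delta \sim v^4/\Lambda^3$ for $Y = 1$, the latter being our assumption for an operator with two Higgses per $X$, with a 100 keV target splitting):

```python
# Cutoff needed for the splitting operator to generate delta ~ 100 keV
v = 246.0            # GeV, Higgs vev
delta = 1e-4         # GeV (100 keV target splitting)

# Y = 1/2 doublet: (X H)^2 / Lambda gives delta ~ v^2 / Lambda
lam_half = v ** 2 / delta
# Y = 1 (assumed scaling): delta ~ v^4 / Lambda^3
lam_one = (v ** 4 / delta) ** (1.0 / 3.0)
```

The first cutoff comes out near $10^9$ GeV and the second near 30 TeV, in line with the text.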
According to the philosophy of effective theory, the bound in Eq. (9) defines a cutoff scale at which all higher dimension operators allowed by symmetry should be present—including those which can mediate DM decay. For dimension five, six, and seven decay operators, such a low cutoff is inconsistent with cosmological limits on weak scale DM. This is depicted in Fig. 1, where the blue shaded region shows where the cutoff is too high to induce a sufficient mass splitting to evade direct detection, and is thus forbidden. Conversely, effectively stable DM with non-zero hypercharge can only evade direct detection if there is a low cutoff, in which case cosmological stability requires the DM decay operator be dimension eight or higher.
In general, if the mass splitting requires a higher dimension operator, then evading direct detection is inconsistent with the criterion of cosmologically stable DM. On the other hand, there is no issue if the splitting operator is renormalizable. For example, this is possible for complex scalar DM with hypercharge 1/2, which permits a renormalizable quartic operator involving two Higgs fields; the required splitting is then easily achieved for order one couplings. If the DM carries B or L, this mass splitting operator explicitly breaks B or L down to a discrete baryon or lepton parity, however still maintaining DM stability. Alternatively, this interaction is symmetry preserving if the DM does not carry B or L, as is the case for large multiplets of SU(2) which couple only to Higgs bosons in the decay operator.
Even for models which evade bounds on Z-mediated scattering, direct detection may still impose constraints via Higgs boson exchange. For example, scalar DM-nucleon scattering can occur via the Higgs portal interaction $\lambda |H|^2 |\phi|^2$ Schabinger and Wells (2005). For fermionic DM, the analogous interactions are higher dimension. Non-singlet fermionic and scalar candidates can also scatter with nucleons at loop level via multiple gauge boson exchange, which may be observable in the next generation of direct detection experiments Cline et al. (2013).
### III.2 Indirect Detection
Since these DM candidates eventually decay on cosmological time scales, they are naturally probed by cosmic ray telescopes. Conveniently, the authors of Zhao and Zurek (2014) studied indirect detection constraints on DM decay via high dimension operators of the very type considered in this paper, albeit with the underlying motivation of asymmetric DM. In particular, they considered bounds from FERMI, PAMELA, AMS-02, and HESS on high energy gamma rays and charged particle cosmic rays from electrons, protons, and anti-protons, obtaining a limit on the DM lifetime which we have taken as a loose input for Eq. (6). We refer the reader to Zhao and Zurek (2014) for precise numerical bounds, but we summarize the salient takeaways below.
In general, DM carrying B will decay to quarks, yielding anti-protons, while DM carrying L will decay to leptons, yielding positrons and neutrinos. All of the DM decays considered here will produce high energy gamma rays from charged particle bremsstrahlung, hadronic decays, and inverse Compton scattering of CMB photons and starlight. However, due to the large B and L charges of most of our DM candidates, gamma ray lines are not typically expected among the theories considered here. An exception occurs for DM particles which decay through operators involving only the Higgs boson, in which case mixing together with a loop of SM particles will induce two-body decays of DM to photons. Another possibility occurs for DM with unit lepton number, which can decay to a photon plus a neutrino.
### III.3 Proton Decay
The non-observation of proton decay offers a strong motivation for at least approximate B and L conservation. Current limits on p → e⁺π⁰ and related decay modes from the Super-Kamiokande experiment require a lifetime of at least ∼10³⁴ years Abe et al. (2014a); Takhistov et al. (2014); Abe et al. (2014b); Regis et al. (2012), which already places significant constraints on the simplest GUTs. From an effective theory viewpoint, proton decay is mediated by dimension six operators of the form
$$\frac{Q^3 L}{\Lambda^2}, \qquad (10)$$
which breaks B and L but preserves B−L. Current limits on proton decay imply a lower bound on the cutoff of order 10¹⁵–10¹⁶ GeV Nath and Fileviez Perez (2007), so B and L are very well-preserved symmetries.
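To see why the bound on Λ is so strong, consider the naive dimensional estimate τ_p ∼ Λ⁴/m_p⁵ (a sketch that ignores the O(1) to O(100) hadronic matrix element and phase-space factors); the steep Λ⁴ scaling means each decade in Λ buys four decades of proton lifetime:

```python
# Naive dimensional estimate of the proton lifetime from the Q^3 L / Lambda^2
# operator: tau_p ~ Lambda^4 / m_p^5 (order-one hadronic factors ignored).
m_p = 0.938                  # proton mass in GeV
GEV_INV_IN_SEC = 6.582e-25   # hbar in GeV * s
SEC_PER_YEAR = 3.156e7

def tau_years(cutoff_gev):
    """Proton lifetime in years for a given cutoff, by dimensional analysis."""
    tau_gev_inv = cutoff_gev**4 / m_p**5
    return tau_gev_inv * GEV_INV_IN_SEC / SEC_PER_YEAR

for cutoff in (1e15, 1e16):
    print(f"Lambda = {cutoff:.0e} GeV -> tau_p ~ {tau_years(cutoff):.1e} yr")
```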
As noted earlier, we can parameterize B and L violation with a dimensionless spurion characterizing hard breaking by some number of units of B and L. However, the spurion carrying (ΔB, ΔL) = (1, 1) is of particular note because it permits proton decay via the operator in Eq. (10). As a result, if we wish to forbid this then we should avoid breaking by this combination of units. Remarkably, even with such hard breaking there are still viable models of effectively stable DM. In these models there are low dimension DM decay operators allowed by the SM gauge symmetry, but these operators require explicit breaking by the (1, 1) spurion, which would also decay the proton.
For example, this happens in the model described by the first row of Tab. 1. Since the DM is a gauge singlet, a low dimension gauge invariant decay operator, if present, would induce catastrophically prompt DM decay. However, the existence of that operator would require symmetry breaking which would in turn induce dimension six proton decay. Conversely, if B and L are explicitly broken while forbidding proton decay, an accidental symmetry remains which effectively stabilizes DM.
Note that proton decay is also mediated by operators beyond dimension six requiring B and L violation by one unit and an odd number of units, respectively. For example, such an operator induces proton decay with ΔL = 3 breaking. Forbidding this larger class of spurions would allow for even more viable candidates for cosmologically stable DM, but we do not consider this possibility here.
In Tabs. 1 and 2 and Tabs. 3 and 4, the corresponding column carries a ✓ if DM is still effectively stable—that is, cannot decay at dimension five or less—even after including any hard breaking spurion consistent with proton stability, and a ✗ otherwise. The entries in Tabs. 3 and 4 indicate models with DM that, while stable at dimension five or less, still decays at dimension six. For these models, DM is still cosmologically stable if the cutoff is higher than was required for dimension seven decays.
## IV Summary and Outlook
DM phenomenology often hinges on the assumption of a stabilizing symmetry. Naturally, this leads one to wonder about the underlying reason for cosmological stability. In this paper we present an alternative hypothesis whereby DM is long-lived as an accident of the SM symmetry group. We have classified all models in which DM decay is cosmologically slow and induced by dimension six or seven operators. In such cases a sub-Planckian cutoff is sufficient to prevent cosmologically prompt DM decays which are excluded by indirect detection. All candidates for effectively stable DM either carry B or L, or reside in a high representation of SU(2). We have identified those models which are consistent with stringent bounds on spin-independent DM-nucleon scattering. Finally, we have accounted for the possibility of explicit breaking of B and L by arbitrary units. As long as the symmetry breaking spurion still forbids proton decay, models of effectively stable DM persist.
Our analysis leaves a number of avenues for future work. First, there is the question of DM stabilization by SM quark and lepton flavor symmetries, which are well-preserved for the light generations. Second, there is the question of building these models in a supersymmetric context. Lastly, it would be interesting to see if any of the models presented here arise explicitly in GUT constructions.
## Acknowledgments
CC is supported by a DOE Early Career Award DE-SC0010255 and a Sloan Research Fellowship. DS is supported in part by U.S. Department of Energy grant DE–FG02–92ER40701 and by the Gordon and Betty Moore Foundation through Grant No. 776 to the Caltech Moore Center for Theoretical Cosmology and Physics.
## References
https://tex.stackexchange.com/questions/310465/cvdoublecolumn-producing-an-error-while-making-references-using-the-package-mode?noredirect=1 | # cvdoublecolumn producing an error while making references using the package modern CV in latex [closed]
I am making my CV using the TeX package moderncv. I wish to enter references in my CV. I am getting an error when I use the command for double column. The error is
undefined control sequence \listdoublecolumnwidth
Following is my code. Many thanks in advance.
\documentclass[11pt,a4paper]{moderncv}
\moderncvtheme[green]{classic}
\usepackage[utf8]{inputenc}
\usepackage{hyperref}
\firstname{Ridhima}
\familyname{Gupta}
\begin{document}
\maketitle
\newcommand{\cvdoublecolumn}[2]{%
\cvitem[0.75em]{}{%
\begin{minipage}[t]{\listdoubleitemcolumnwidth}#1\end{minipage}%
\hfill%
\begin{minipage}[t]{\listdoubleitemcolumnwidth}#2\end{minipage}%
}%
}
\newcommand{\cvreference}[7]{%
\textbf{#1}\newline% Name
\ifthenelse{\equal{#3}{}}{}{#3\newline}%
\ifthenelse{\equal{#4}{}}{}{#4\newline}%
\ifthenelse{\equal{#5}{}}{}{#5\newline}%
\ifthenelse{\equal{#6}{}}{}{\emailsymbol~\texttt{#6}\newline}%
\ifthenelse{\equal{#7}{}}{}{\phonesymbol~#7}}
\cvdoublecolumn{\cvreference{E. Somanathan}
{Professor}
{Economics and Planning Unit}
{Indian Statistical Institute}
{Delhi-110016}
{[email protected]}
{+91-11-41493939}
}
## closed as unclear what you're asking by Heiko Oberdiek, marmot, Sebastiano, Mico, BobyandbobMay 27 at 6:05
Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
Well, you have several errors in your code.
You defined two commands with different number of parameters:
\cvdoublecolumn{col1}{col2}
Then you have to use them as follows in your code (please see that I moved the definitions before the line \begin{document}):
\documentclass[11pt,a4paper]{moderncv}
\moderncvtheme[green]{classic}
\usepackage[utf8]{inputenc}
%\usepackage{hyperref}
\newcommand{\cvdoublecolumn}[2]{%
\cvitem[0.75em]{}{%
\begin{minipage}[t]{\listdoubleitemcolumnwidth}#1\end{minipage}%
\hfill%
\begin{minipage}[t]{\listdoubleitemcolumnwidth}#2\end{minipage}%
}%
}
\newcommand{\cvreference}[7]{%
\textbf{#1}\newline% Name
\ifthenelse{\equal{#3}{}}{}{#3\newline}%
\ifthenelse{\equal{#4}{}}{}{#4\newline}%
\ifthenelse{\equal{#5}{}}{}{#5\newline}%
\ifthenelse{\equal{#6}{}}{}{\emailsymbol~\texttt{#6}\newline}%
\ifthenelse{\equal{#7}{}}{}{\phonesymbol~#7}%
}
\firstname{Ridhima}
\familyname{Gupta}
\begin{document}
\maketitle
\cvdoublecolumn{%
\cvreference{E. Somanathan}%
{Professor}%
{Economics and Planning Unit}%
{Indian Statistical Institute}%
{Delhi-110016}%
{[email protected]}%
{+91-11-41493939}%
}%
{} % second column left empty here; \cvdoublecolumn requires two arguments

\end{document}
• Add \listfiles as first line to the MWE, compile 3 times and check the log file at the end for a list of used packages and versions. Add this list to your question. I'm sure you are using an outdated version of moderncv ... – Kurt May 19 '16 at 16:05
Your moderncv.cls (and perhaps some other .sty files) must be out of date. Update it (them), from https://github.com/xdanaux/moderncv.
http://mathhelpforum.com/advanced-statistics/41579-probability-random-samples.html | # Thread: probability and random samples
1. ## probability and random samples
Hi I have some uni problems in my exam revision i can't sort out.
m=215mg
sd=15mg
normal distribution
215mg of nicotine in a standard cigarette
a) find the probability a single cigarette has a nicotine content more than 220mg
b) if a random sample of 25 cigarettes is selected, what is the probability that the mean is more than 220mg
I tried using Z=y-u/sd but the z score from the table was 0.629 which can be correct right?
Also, i have no idea about part b! HELP!
2. Originally Posted by kate204
Hi I have some uni problems in my exam revision i can't sort out.
m=215mg
sd=15mg
normal distribution
215mg of nicotine in a standard cigarette
a) find the probability a single cigarette has a nicotine content more than 220mg
b) if a random sample of 25 cigarettes is selected, what is the probability that the mean is more than 220mg
I tried using Z=y-u/sd but the z score from the table was 0.629 which can be correct right?
Mr F says: Obviously NOT since $\displaystyle {\color{red}220 > \mu = 215}$.
Also, i have no idea about part b! HELP!
The model you're using is plainly the following:
X ~ Normal($\displaystyle \mu = 215, ~ \sigma = 15$) where X is the random variable nicotine content of a single cigarette.
(a) $\displaystyle \Pr(X > 220) = \Pr \left( Z > \frac{220 - 215}{15} \right)$.
(b) You need to know the distribution of the sample mean:
$\displaystyle \overline{X}$ ~ Normal$\displaystyle \left( \mu = 215, ~ \sigma = \frac{15}{\sqrt{25}}\right)$
assuming an 'infinite' population of cigarettes. You must have a theorem in your textbook or class notes that shows where this has come from.
Now find $\displaystyle \Pr (\overline{X} > 220) = \Pr(Z > .......)$.
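The two probabilities set up above can be verified numerically with the standard-normal tail $\Pr(Z > z) = \mathrm{erfc}(z/\sqrt{2})/2$; here is a quick sketch using only the Python standard library (not part of the original thread):

```python
from math import erfc, sqrt

def normal_tail(z):
    """P(Z > z) for a standard normal Z."""
    return 0.5 * erfc(z / sqrt(2))

mu, sigma = 215.0, 15.0

# (a) single cigarette: z = (220 - 215) / 15
p_a = normal_tail((220 - mu) / sigma)

# (b) mean of n = 25 cigarettes: the sample mean has sd sigma / sqrt(n) = 3
n = 25
p_b = normal_tail((220 - mu) / (sigma / sqrt(n)))

print(f"(a) P(X > 220)    = {p_a:.4f}")
print(f"(b) P(Xbar > 220) = {p_b:.4f}")
```

Part (a) comes out near 0.37 rather than 0.63, in line with the correction later in the thread that 0.631 is the table value of $\Pr(Z<0.3333)$, not the requested tail.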
3. Originally Posted by kate204
Hi I have some uni problems in my exam revision i can't sort out.
m=215mg
sd=15mg
normal distribution
215mg of nicotine in a standard cigarette
a) find the probability a single cigarette has a nicotine content more than 220mg
b) if a random sample of 25 cigarettes is selected, what is the probability that the mean is more than 220mg
I tried using Z=y-u/sd but the z score from the table was 0.629 which can be correct right?
Also, i have no idea about part b! HELP!
(a) $\displaystyle z=\frac{220-215}{15}=0.3333..$
Now look this up to find $\displaystyle P(Z<0.3333)\approx 0.631$, so to within the limits of accuracy of your table your answer looks OK.
(b) for the second part you need to use the result that the SD of a sample mean is the SD of an individual divided by the square root of the sample size.
RonL
4. Originally Posted by CaptainBlack
(a) $\displaystyle z=\frac{220-215}{15}=0.3333..$
Now look this up to find $\displaystyle P(Z<0.3333)\approx 0.631$, so to within the limits of accuracy of your table your answer looks OK.
[snip]
Sorry to be disagreeable but the answer is not OK. Not as a final answer anyway, which is how it was presented ....
The question asks "find the probability a single cigarette has a nicotine content MORE than 220mg" so the answer is 1 - 0.631 ......
5. sorry im still a little confused,
$\displaystyle \Pr (Z < 0.3333) \approx 0.631$. But the question is not asking this.
It should be clear (and my posts make this abundantly clear) that the question wants $\displaystyle \Pr(Z > 0.3333)$ .....
https://math.stackexchange.com/questions/1199530/why-is-anti-symmetry-a-desirable-quality-in-determinants | # Why is anti-symmetry a desirable quality in determinants?
I hear the determinant of matrix can be defined using 3 facts. 1. It is multilinear. 2. It is anti-symmetric. 3. It is scaled so the determinant of the identity is 1.
But, I don't understand why anti-symmetric is there? Why do people want determinants to be anti-symmetric?
• Anti-symmetry means (given multi-linearity) that the determinant of a matrix with equal (or more generally linearly dependent) columns is zero. Would you want to use a "determinant" that lacks this property? Mar 22, 2015 at 6:04
My answer is pretty much taken from Winitzki's Linear Algebra via Exterior Products, a very good book available legitimately for free online.
Here's the idea. Let's say we have two vectors, $\mathbf{a}, \mathbf{b} \in \mathbb{R}^2$. It's not hard to show that the area of the parallelogram with vertices $\mathbf{0}, \mathbf{a}, \mathbf{b}$ and $\mathbf{a} + \mathbf{b}$ is $|\mathbf{a}| \cdot |\mathbf{b}| \sin\theta$, where the angle between our two vectors is $\theta$.
Let's give this function a name; $Ar(\mathbf{a}, \mathbf{b})$. As we demand linearity, then it must be the case that
\begin{align*}Ar(\mathbf{a + b},\mathbf{a + b}) &= Ar(\mathbf{a} , \mathbf{a}) +Ar(\mathbf{a} , \mathbf{b}) +Ar(\mathbf{b} , \mathbf{a}) + Ar(\mathbf{b} , \mathbf{b})\\ &= 0 + Ar(\mathbf{a} , \mathbf{b}) + Ar(\mathbf{b} , \mathbf{a}) + 0\\ &= 0,\end{align*}
where all the zeros come from the fact that the area $Ar(\mathbf{x} , \mathbf{x})$ can only sensibly be $0$ for all vectors $\mathbf{x}$.
Thus, we're forced to set $Ar(\mathbf{b} , \mathbf{a}) = -Ar(\mathbf{a} , \mathbf{b})$, in order to get the sane result that $Ar(\mathbf{a+b} , \mathbf{a+b}) = 0$.
This isn't the whole story, of course, but that's the idea: vanishing on a linearly dependent input list and (multi)linearity force us to use antisymmetry. In order to get sensible results in terms of volumes of parallelepipeds, we need oriented volume.
That, in my opinion, is the best motivator for the determinant: thinking in terms of signed volumes. It has a geometric intuition and even 'detects' linear dependence, like we saw above. It provides a great way to see why antisymmetry is essentially required.
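The signed-area argument can be made concrete with a few lines of illustrative Python, taking $Ar(\mathbf{a}, \mathbf{b}) = a_1 b_2 - a_2 b_1$ as the 2D signed area:

```python
def signed_area(a, b):
    """Signed area Ar(a, b) of the parallelogram spanned by 2D vectors a and b."""
    return a[0] * b[1] - a[1] * b[0]

a, b, c = (3.0, 1.0), (1.0, 2.0), (2.0, 5.0)

antisymmetric = signed_area(a, b) == -signed_area(b, a)  # swapping flips the sign
degenerate = signed_area(a, a) == 0.0                    # equal inputs give zero area

# bilinearity in the first slot: Ar(a + b, c) = Ar(a, c) + Ar(b, c)
a_plus_b = (a[0] + b[0], a[1] + b[1])
bilinear = signed_area(a_plus_b, c) == signed_area(a, c) + signed_area(b, c)

print(signed_area(a, b), antisymmetric, degenerate, bilinear)
```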
• i think demanding that the determinant be zero, if two arguments or even two neighboring arguments, is the one forces the determinant to be antisymmetric; not the multi linearity.
– abel
Mar 21, 2015 at 14:40
• @abel I'm not exactly sure what you meant there (I think you're missing some statement about the two arguments). Perhaps it's a combination, of what you mean, together with multilinearity. Without multilinearity, I'm not sure how we can manipulate determinants very much. Mar 21, 2015 at 15:06
• i meant not the multi linearity alone but that and the fact that if neighbors are equal, then set is zero forces det to be antisymmetric. i was reacting to then fact that you seemed to suggest that multi linearity alone is sufficient to make the det antisymmetric.
– abel
Mar 21, 2015 at 15:09
• @abel Ah, I see. In that case, I fully agree with you. I'll try and remember to make it more clear that multilinearity matters, in conjunction with vanishing at linearly dependent sets. Mar 21, 2015 at 15:34
• @Danu Thanks for catching that! Mar 21, 2015 at 19:45
Determinants are useful beasts. For many reasons: they characterize invertibility of matrices, allow us to solve linear equations, they show up in the formula for change of variables in multiple integrals, and what not. And, mind you, it is not that we put them there: they show up there of their own volition: that is how (mathematical) nature is.
Once we have observed that determinants are useful, we have the problem of describing what the determinant is. Indeed, as we notice that there is this thing that shows up all over the place, we might just as well be precise about what it is that we are finding in all these different places! There are many ways to do this. We can provide the huge ugly formula with the sum over permutations, for example, among others.
How do we choose which description of determinants is best? Well, we want it to be as concise as possible, simple, conceptual, flexible. We want to be able to prove things about the determinant, and the description we pick should make this easy, not hard. And so on.
One possible description of determinants is the one you mention. It is very succesful in all those respects.
In other words, it is not that we decide that determinants should be antisymmetric: they are antisymmetric independently of our wishes. And it turns out that we can take advantage of that to describe what a determinant is.
There are two approaches to defining something.
• First, you can define an object by construction, just as one does when one says «the determinant of a matrix is the scalar one gets when one does this ugly computation». This is a good approach at times, but then one is left with proving that the object we have explicitly constructed has all the properties we want it to have, and this can be more or less difficult, depending on the circumstances.
• Alternatively, if we study the object in depth, it might be the case that we come up with the observation that it has properties X, Y and Z and that in fact it isthe only object which has those three properties. This allows us to define the object as «The only object which has properties X, Y, and Z». Now, this definition has a problem: we have to check that an object having those three properties indeed exists and is unique.
Historically, most objects get defined initially in the first way, and then, as our knowledge of their properties increases, we redefine them in the second style.
• one of the nicest exposition i have seen of the determinant is in artin's galois theory. a small book of his lectures at norte dame university.
– abel
Mar 21, 2015 at 20:11
• The sum over permutations is certainly huge (if you allow $n>3$), but given that, I don't find it ugly; to the contrary it is the most beautiful such sum one could imagine! Mar 22, 2015 at 6:07
Other answers discuss the background better, but I'll just go for the question of why anti-symmetry is required in that description of determinants. The answer is simply that without it uniqueness of the form described would fail, and it would fail miserably. (By the way, this nice description of determinants has one aspect that you, and often many others, overlooked: mentioning multi-linearity and anti-symmetry (alternating property) only makes sense if you mention that this is when regarding the determinant as a function of the columns of the matrix; an alternating multilinear form of all $n^2$ entries would be something radically different, and impossible.)
A bilinear form $B$ on an $n$-dimensional space is determined by giving the $n^2$ values $B(e_i,e_j)$ where $e_i,e_j$ independently run through a chosen basis $e_1,\ldots,e_n$, and (without symmetry condition) these values can be chosen arbitrarily. Similarly a multilinear form$~M$ of $k$ arguments that are vectors in dimension$~n$ is determined by giving the $n^k$ values $M(e_{i_1},e_{i_2},\ldots,e_{i_k})$ where each argument independently runs through the chosen basis, and these values can again be chosen arbitrarily. Thus the space of such $k$-linear forms has dimension $n^k$; in particular the dimension of the space of all $n$-linear forms is a whopping $n^n$. It is clear that one cannot single out the determinant as element of this space by merely imposing its value at the identity matrix (condition 3).
By additionally imposing symmetry conditions, subspaces of smaller dimension can be defined. For bilinear forms, imposing symmetry pairs up most of the values $B(e_i,e_j)$, and leaves $\frac{n^2+n}2=\binom{n+1}2$ of them to be chosen freely. Instead imposing anti-symmetry would relate the same pairs (but differently), and in addition force the "diagonal" values $B(e_i,e_i)$ to be zero, leaving a dimension of $\frac{n^2-n}2=\binom n2$. Similarly, for multilinear forms of $k$ arguments imposing full symmetry defines a subspace of dimension $\binom{n+k-1}k$, while instead imposing full anti-symmetry defines a subspace of dimension$~\binom nk$. And for $k=n$ the latter means something miraculous: the subspace has dimension $\binom nn=1$ (remember that this is out of an original dimension$~n^n$). (By contrast the space of fully symmetric $n$-linear alternating forms still has dimension $\binom{2n-1}n$; one would still need some very strong additional restriction to single out one special such form.) So the alternating condition has succeeded in eliminating almost all the freedom in choosing a form, leaving just enough freedom to avoid being left with the zero form only. The condition 3 is now precisely what is needed to single out our single very special form, the determinant.
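The dimension counts above are easy to tabulate with math.comb; for forms taking $k = n$ vector arguments, the output shows how the alternating condition collapses the $n^n$-dimensional space down to dimension $\binom nn = 1$ (illustrative Python):

```python
from math import comb

def form_dims(n, k):
    """(all, symmetric, alternating) dimensions of k-linear forms on an n-dim space."""
    return n**k, comb(n + k - 1, k), comb(n, k)

for n in (2, 3, 4):
    total, sym, alt = form_dims(n, n)   # forms taking k = n vector arguments
    print(f"n = {n}: all = {total:>3}, symmetric = {sym:>2}, alternating = {alt}")
```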
I don't think that this property was ever "desired". The determinants naturally come out of linear algebra theory when you want to solve linear systems of equations and test linear dependency, so they are somehow invariant to (rank-preserving) linear combinations.
Indeed, a determinant is zero when its rows (columns) are linearly dependent, and in particular when they are equal. This implies antisymmetry. (Conversely, if $f(x,y)=-f(y,x)$, then necessarily $f(x,x)=0$.)
http://mathhelpforum.com/calculus/172679-related-rates.html | 1. ## Related Rates
I am having a problem understanding how the chain rule was used on $\frac{d(x^2)}{dt}$ to arrive at $2x \cdot \frac{dx}{dt}$ in the question below (sections highlighted in red)
Question: The edge of an expanding square is changing at the rate of 2 cm/s. Determine the rate of change of its area at the instant when its edge is 6cm long.
Here is the working:
$A = x^2$ (state the problem mathematically)
$\frac{dA}{dt}=\frac{d(x^2)}{dt}$ (differentiate with respect to time)
$\frac{dA}{dt}=2x \cdot \frac{dx}{dt}$ (chain rule - I don't understand how this was arrived at $2x \cdot \frac{dx}{dt}$ )
Use x = 6cm and $\frac{dx}{dt}$ =2cm/s (and I don't understand why dx/dt =2. In other words, from reading the question above, how would I know that dx/dt=2?)
$\frac{dA}{dt}=2(6 cm) \cdot (2cm/s)$
$\frac{dA}{dt}=24 \, cm^2/s$
Answer: The area of the square is changing at the rate of $24 \, cm^2/s$ at the instant when its side is 6 cm long
2. x represents the side length of the square ... the problem states that the edge (side) is expanding at a rate of 2 cm/sec ... x is changing over time and $\dfrac{dx}{dt}$ represents that rate of change ... its symbology can be read as "the rate that x changes w/r to time".
since x is an implicit function of time, then $\dfrac{d}{dt}(x) = \dfrac{dx}{dt}$, and because of the chain rule $\dfrac{d}{dt}(x^2) = 2(x)^1 \cdot \dfrac{dx}{dt}$
I recommend you review your previous lessons in finding implicit derivatives. The mechanics of taking derivatives are the same in this situation.
3. Thank you Skeeter
4. $A = x^2$
$\frac{dA}{dt}=\frac{dA}{dx}\times\frac{dx}{dt}$ Using the chain rule. Can you see why?
And, we can work out that $\frac{dA}{dx}=2x$
So $\frac{dA}{dt}=2x\times\frac{dx}{dt}$
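As an illustrative numerical cross-check (not part of the thread): let the side grow as x(t) = 6 + 2t cm, so that t = 0 is exactly the instant when x = 6 and dx/dt = 2; a central difference on A(t) = x(t)² then reproduces dA/dt = 24 cm²/s:

```python
def area(t):
    """Area of the square at time t, with side x(t) = 6 + 2t (cm, seconds)."""
    x = 6.0 + 2.0 * t
    return x * x

h = 1e-6
dA_dt = (area(h) - area(-h)) / (2 * h)  # central finite difference at t = 0

print(f"dA/dt at x = 6 cm: {dA_dt:.6f} cm^2/s")
```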
Edit: This post was moved from a duplicate thread by a moderator.
https://www.physicsoverflow.org/19419/proving-one-field-equation-leads-to-the-other | # Proving one field equation leads to the other
+ 5 like - 0 dislike
307 views
Assume that the universe is homogeneous and isotropic, and the following equation holds:
$$R_{00}-\frac{1}{2}g_{00}R=8\pi GT_{00}; \space \space \nabla_{\mu}T^{\mu 0}=0.$$
How do I prove that the following equations are identically satisfied provided that the above two are satisfied?
$$R_{0i}-\frac{1}{2}g_{0i}R=8\pi GT_{0i}; \space \space R_{ij}-\frac{1}{2}g_{ij}R=8\pi GT_{ij}; \space \space \nabla_{\mu}T^{\mu i}=0.$$
My approach was to write $g_{00}=1$ and $g_{ij}=-a^2\gamma_{ij}$ and evaluate the Ricci tensors and so on, but I know this is not the way to do it. Can anyone suggest me the way?
This post imported from StackExchange Physics at 2014-06-21 08:52 (UCT), posted by SE-user titanium
+ 3 like - 0 dislike
Here is a simple approach that might work. Start by defining
$$F_{\mu\nu} \equiv G_{\mu\nu} - T_{\mu\nu}$$
where $G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R$ is the Einstein tensor. Now from what you know we have $$F_{00} = 0$$
$$\nabla^{\mu}F_{\mu0} = 0$$
You must show that $F_{\mu\nu} = 0$. Writing out the last equation gives
$$0 = \partial_{i} F^{i 0} + \Gamma^{\mu}_{\mu \alpha}F^{\alpha 0} + \Gamma^{0}_{\mu \alpha}F^{\mu \alpha}$$
Homogeneity and isotropy imply that spatial gradients vanish and that $F^{11}=F^{22}=F^{33}$, so
$$0 = a^2 H \delta_{ij} F^{i j}$$
This shows that $F_{00} = F_{11} = F_{22} = F_{33} = 0$. Now to show $F_{ij} = 0$ for $i\not =j$ you might need some additional assumptions on the energy momentum tensor $T^{\mu\nu}$, for example $T^{ij} = 0$.
This post imported from StackExchange Physics at 2014-06-21 08:52 (UCT), posted by SE-user Winther
answered Jun 20, 2014 by (30 points)
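The Christoffel symbols invoked above (in particular $\Gamma^0_{ij} = a\dot a\,\delta_{ij} = a^2 H\,\delta_{ij}$, which produces the $a^2 H$ factor) can be checked by finite differences. The scale factor $a(t) = t^{2/3}$ below is only an illustrative assumption (matter domination), and the metric is flat FRW with signature $(+,-,-,-)$.

```python
# Finite-difference check of two FRW Christoffel symbols for
# g = diag(1, -a^2, -a^2, -a^2), with the toy choice a(t) = t**(2/3).
def a(t):
    return t ** (2.0 / 3.0)

def g11(t):
    return -a(t) ** 2          # g_{11} = -a(t)^2

t0, h = 2.0, 1e-6
dg11 = (g11(t0 + h) - g11(t0 - h)) / (2 * h)     # d/dt g_{11}
adot = (a(t0 + h) - a(t0 - h)) / (2 * h)         # d/dt a

gamma_0_11 = -0.5 * dg11            # Gamma^0_{11} = a*adot   (g^{00} = 1)
gamma_1_01 = 0.5 * dg11 / g11(t0)   # Gamma^1_{01} = adot/a = H  (g^{11} = 1/g_{11})

print(gamma_0_11, a(t0) * adot)     # agree
print(gamma_1_01, adot / a(t0))     # agree; this is the Hubble rate H
```

For $a(t)=t^{2/3}$ the Hubble rate is $H = \dot a/a = 2/(3t)$, which the second printed pair reproduces.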
+ 1 like - 0 dislike
First what we know: $G_{\mu \nu} = R_{\mu \nu} - \frac{1}{2}g_{\mu \nu} R$ and $T_{\mu \nu}$ are tensors in that they transform properly under coordinate transformations ($G_{\mu \nu}$ by construction and $T_{\mu \nu}$ because of the EFEs), so it doesn't matter which frame we do our measurements in, this tensor equation will always hold.
Suppose that a comoving observer takes careful measurements in his frame and finds the first equation to be true. This is a special case of how we determine what an observer with an arbitrary four-velocity will measure, which is the contraction with that four-velocity $$G_{\mu \nu} u^{\mu} u^{\nu} = 8\pi T_{\mu \nu} u^{\mu} u^{\nu}$$ Then imagine other observers with four velocities of the form $u_i^{\alpha} = Ae_0^{\alpha} + B e_i^{\alpha}$, where $e_0$ denotes the unit vector in the time direction, $e_1$ denotes the unit vector in the $1/x/r/$whathaveyou direction, etc., and $A$ and $B$ are normalization factors. The above equation is an invariant scalar equation and from this fact and a plethora of observers we can build up the rest of the relations. The same procedure can be applied to the energy conservation equation, only now we are contracting $u_{\nu}\nabla_{\mu} T^{\mu \nu}$
This post imported from StackExchange Physics at 2014-06-21 08:52 (UCT), posted by SE-user Jordan
answered Jun 20, 2014 by (15 points)
https://www.physicsforums.com/threads/fluids-floating-balloon-problem.84203/ | Fluids - floating balloon problem
1. Aug 4, 2005
Neuronic
Hi - I know this must be a basic question, but I'm doing this physics problem and I can't get the exact answer!
Here it is and my work along with it - can someone please tell me what I'm missing in my equations?
The problem is taken from the James Walker physics book - Chapter 15. Number 29.
A 0.12 kg balloon is filled with helium (density = 0.179 kg/m^3).
The balloon is a sphere with radius of 5.2 m. What is the maximum
weight the balloon can lift?
I keep getting something close to 6.4 kN, whereas the book says 5.7 kN.
My work:
volume of balloon (which is actually the volume of helium) = (4/3) pi r^3 = 588.9 m^3
The buoyant force lifts the balloon upward, while the weight of the
balloon, the weight of the helium, and the unknown weight
(e.g. a block) counteract the buoyant force.
Buoyant force = (density of air)(g)(V) = (1.29)(9.81)(588.9) =
7453 N (upward force)
Total weight = weight of balloon material + (density of helium)(g)(V)
+ unknown weight = (0.12)(9.81) + (0.179)(9.81)(588.9) + unknown weight
= buoyant force
This makes the unknown weight approximately 6.4 kN,
Not 5.7 kN
Please someone help? Thanks a bunch!
2. Aug 4, 2005
Staff: Mentor
Your work looks OK to me. Double-check the value you are using for the density of air, since that depends on the assumed temperature and pressure.
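For reference, a quick script reproducing the arithmetic above. The densities are taken as stated in the problem; as noted, the air density is the quantity that depends on assumed temperature and pressure.

```python
import math

rho_air = 1.29      # kg/m^3 -- depends on assumed temperature and pressure
rho_he = 0.179      # kg/m^3
g = 9.81            # m/s^2
r = 5.2             # m
m_balloon = 0.12    # kg

V = 4.0 / 3.0 * math.pi * r ** 3
# Net upward force available to lift an extra weight:
lift = (rho_air - rho_he) * g * V - m_balloon * g
print(V, lift)      # ~ 589 m^3, ~ 6.4e3 N
```

This reproduces the poster's ~6.4 kN, confirming the discrepancy with the book's 5.7 kN must come from the assumed input values rather than the method.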
3. Aug 5, 2005
Neuronic
thanks a bunch!
https://math.stackexchange.com/questions/1570094/a-conditional-asymptotic-for-sum-textp-p2-twin-primesp-alpha-when | # A conditional asymptotic for $\sum_{\text{$p,p+2$twin primes}}p^{\alpha}$, when $\alpha>-1$
I have been following notes that show how to obtain a similar asymptotic via the Abel summation formula. In my case I take $a_n=\chi(n)$, the characteristic function equal to 1 when $n$ is a prime that is the smaller member of a twin prime pair (to be careful, I define $\chi(p+2)=0$), and $f(x)=x^{\alpha}$ with $\alpha>-1$. Where the notes use the Prime Number Theorem, I assume the Twin Prime conjecture. The author is careful to justify each application of L'Hôpital's rule, and he claims those computations agree with the more rigorous route of taking an $\epsilon$ and computing the limit of the main term via the limit superior. Applied to my case, $$\sum_{\text{p,p+2 twin primes}}p^{\alpha}$$ is asymptotic to $$2C_2\frac{x^{\alpha+1}}{\log^2 x},$$ multiplied by a constant defined precisely by $$\lim_{x\to\infty}1-\alpha\frac{\int_2^{x}\left(\frac{2C_2t}{\log ^2 t}+o\left(\frac{t}{\log ^2 t}\right)\right)t^{\alpha-1}dt}{2C_2\frac{x^{\alpha+1}}{\log^2 x}}=\frac{1}{1+\alpha}.$$
Thus, when I've used his method I compute for $\alpha>-1$
$$\sum_{\text{p,p+2 twin primes}}p^{\alpha}\sim 2C_2\frac{x^{\alpha+1}}{(1+\alpha)\log^2 x},$$ where $C_2$ is the twin prime constant.
Question. Assuming the Twin Prime conjecture, can you rigorously justify an asymptotic for $\sum_{\text{$p,p+2$ twin primes}}p^{\alpha}$ when $\alpha>-1$? Thanks in advance.
I defined the characteristic function above so that the sum $\sum_{\text{$p,p+2$ twin primes}}p^{\alpha}$ only adds the terms $p^{\alpha}$, in order to follow the author's method. I don't know whether it would be better to also add the terms $(p+2)^{\alpha}$.
• I clarify that the notes omit the computation by the second method (the one with epsilon and the limit superior), but the author claims that his careful, justified computations with L'Hôpital are equivalent, because he knows the limit value. – user243301 Dec 10 '15 at 23:53
This can be done using partial summation in a way that is similar to this answer: How does $\sum_{p<x} p^{-s}$ grow asymptotically for $\text{Re}(s) < 1$?
Let $\pi_2(x)=\sum_{\text{twin primes }p,p+2\leq x}1$. Then $$\sum_{\text{twin primes }p,p+2\leq x}p^{\alpha}=\int_1^x t^{\alpha}d\pi_2(t).$$ Assuming that $$\pi_2(x)\sim 2C_2\int_2^x \frac{1}{(\log t)^2}dt,$$ by properly rearranging to control the error term as was done in that previous answer, we find that $$\sum_{\text{twin primes }p,p+2\leq x}p^{\alpha}\sim 2C_2 \int_1^x \frac{t^\alpha}{(\log t)^2}dt,$$ and since $\int_1^x t^{\alpha}(\log t)^{-2}\,dt\sim x^{1+\alpha}/\big((1+\alpha)(\log x)^2\big)$ (integrate by parts, or substitute $t=u^{1/(1+\alpha)}$), we have $$\sum_{\text{twin primes }p,p+2\leq x}p^{\alpha}\sim \frac{2C_2x^{1+\alpha}}{(1+\alpha)(\log x)^2},$$ in agreement with the asymptotic derived in the question.
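As a numerical illustration, one can sieve the twin primes up to $10^6$ and compare against the asymptotic. This is only a sketch: $C_2 \approx 0.66016$ is the twin prime constant, and the ratio approaches 1 very slowly (lower-order terms of size $1/\log x$ are still visible at this range).

```python
import math

def twin_smaller_members(n):
    """Primes p with p + 2 <= n such that p + 2 is also prime (simple sieve)."""
    s = bytearray([1]) * (n + 1)
    s[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if s[i]:
            s[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n - 1) if s[p] and s[p + 2]]

x, alpha = 10 ** 6, 1.0
C2 = 0.6601618158                       # twin prime constant
twins = twin_smaller_members(x)
S = sum(p ** alpha for p in twins)
pred = 2 * C2 * x ** (alpha + 1) / ((alpha + 1) * math.log(x) ** 2)
print(len(twins), S / pred)             # 8169 pairs; ratio ~ 1.1 (slow convergence)
```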
• I have no background in number theory, so could you explain how you know that the measure is $d\pi_2$? – tired Dec 11 '15 at 11:02
• Thank you very much @EricNaslund; my grade for this answer is $A^{A^{+++}}$. I'll take notes and try to do the computations. If you can also answer the question cited above (from the user named tired), even better. Thanks very much. – user243301 Dec 11 '15 at 13:08
https://chemistry.stackexchange.com/questions/31038/how-to-interpret-td-dft-results-for-finding-%CE%BBmax-and-amplitude | # How to interpret TD-DFT results for finding λmax and amplitude
I am trying to understand how to interpret TD-DFT results (I know there is another post about this, but it does not answer my question). Here is part of the output from GAMESS:
I have two related questions:
1. How did it calculate this 1.41 from the amplitude of this state?
2. Does it have anything to do with λmax?
• Which reaction are you talking about? The question is vague. Energy can be transferred through collisions even in liquids. And such collisions can lead to emission of electrons. The Helium-Neon laser operates like this en.wikipedia.org/wiki/Helium%E2%80%93neon_laser – CoffeeIsLife May 9 '15 at 1:40
• that should just be the transitions of the orbitals and your $\lambda_\mathrm{max}$ should be somewhere around $9.3~\mathrm{eV}$ judging from the spectrum. Note that the state you have selected has 0.000 oscillator strength. – Martin - マーチン May 12 '15 at 8:30
• @Martin-マーチン this is the output for Acetone and related to my other question about Acetone behavior. I am confused what is going wrong and how this 1.41 is calculated – Aug May 12 '15 at 13:35
• These are terrible tags! Reaction mechanism? Heat? Transition state theory? They don't make any sense. – Greg May 15 '15 at 11:06
https://infoscience.epfl.ch/record/180356 | Infoscience
Conference paper
# Approximate Feedback Capacity of the Gaussian Multicast Channel
We characterize the capacity region to within log{2(M − 1)} bits/s/Hz for the M-transmitter K-receiver Gaussian multicast channel with feedback, where each receiver wishes to decode every message from the M transmitters. Extending Cover-Leung's achievable scheme intended for (M, K) = (2, 1), we show that this generalized scheme achieves the cutset-based outer bound to within log{2(M − 1)} bits per transmitter for all channel parameters. In contrast to the capacity in the non-feedback case, the feedback capacity improves upon the naive intersection of the feedback capacities of K individual multiple access channels. We find that feedback provides unbounded multiplicative gain at high signal-to-noise ratios, as was shown in the Gaussian interference channel. To complement the results, we establish the exact feedback capacity of the Avestimehr-Diggavi-Tse deterministic model, from which we make the observation that feedback can also be beneficial for function computation.
https://math.stackexchange.com/questions/3105082/confusion-on-vert-vert-varphi-vert-vert-p-where-p-in-1-infty | # Confusion on $\vert\vert \varphi\vert\vert_{p}$ where $p \in [1,\infty[$
Let $$\varphi \in C^{\infty}(\mathbb R^n)$$ and for $$\epsilon > 0$$ define $$\varphi_{\epsilon}(x):=\epsilon^{-n}\varphi(x/\epsilon)$$ such that $$\varphi_{\epsilon} \in C^{\infty}(\mathbb R^n)$$ with compact support.
Determine $$\vert \vert \varphi_{\epsilon}\vert \vert_{p}$$ with $$1 \leq p \leq \infty$$ in dependence on $$\epsilon$$
I understand the case $$p =\infty$$, namely $$\vert \vert \varphi_{\epsilon}\vert \vert_{\infty}=\sup_{x \in \mathbb R^{n}}|\varphi_{\epsilon}(x)|=\sup_{x \in \mathbb R^{n}}|\epsilon^{-n}\varphi(x/\epsilon)|=\epsilon^{-n}\sup_{ x\in \mathbb R^{n}}|\varphi(x/\epsilon)|=\epsilon^{-n}||\varphi||_{\infty}$$
I get confused by the case $$p \in [1, \infty[$$
Surely, by definition:
$$\vert \vert \varphi_{\epsilon}\vert \vert_{p}^{p}=\int_{\mathbb R^{n}}|\epsilon^{-n}\varphi(x/\epsilon)|^{p}dx$$ and then by substitution $$( x/\epsilon = x^{'}\Rightarrow dx/\epsilon^{n}=dx^{'})(*)$$
First question: surely differentiating $$n-$$times has no effect on $$\epsilon$$, so surely $$(*)$$ should be $$dx/\epsilon=dx^{'}$$ rather than $$dx/\epsilon^{n}=dx^{'}$$
In any case, assuming $$(*)$$ holds: $$\int_{\mathbb R^{n}}\epsilon^{-np}|\varphi(x^{'})|^{p}\epsilon^{n}dx^{'}=\epsilon^{-n(p-1)}\int_{\mathbb R^{n}}|\varphi(x^{'})|^{p}dx^{'}\Rightarrow \vert \vert \varphi_{\epsilon}\vert \vert_{p}=\epsilon^{\frac{-n(p-1)}{p}}||\varphi||_{p}$$
Second Question: I have been told that the answer must be $$\vert \vert \varphi_{\epsilon}\vert \vert_{p}=\vert \vert \varphi\vert \vert_{p}$$
But I cannot see what I did wrong.
• Your answer seems fine to me! Maybe I'm missing something too xd. With regards to your first point, you're not differentiating n times but are differentiating n variables each with an $\varepsilon$ scaling. This is what gives you the $\varepsilon^n$! – Drefain Feb 8 at 13:27
• Indeed the $L^p$ norm will in general be different between the two. The situation is obvious for $L^\infty$ (you haven't changed the extrema of $\varphi$), and it is slightly less obvious that it is the same for $L^1$ (essentially you've concentrated the "mass" of $\varphi$ more tightly but also stretched it by exactly the right amount to compensate for the change in "volume"). But for $L^p$ the requisite rescaling is different. – Ian Feb 8 at 13:39
• As for this issue about the change of variable, this is part of why people sometimes use explicit notations like $\vec{x}$ or $\mathbf{x}$ to denote vectors. Here it is true that say $dx_1/\epsilon=dx_1'$ but there is one such $\epsilon$ for every component of $x$. – Ian Feb 8 at 13:40
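The scaling the OP derived, $\|\varphi_\epsilon\|_p = \epsilon^{-n(p-1)/p}\|\varphi\|_p$, can be checked numerically in one dimension. This is a sketch: $\varphi(x)=e^{-x^2}$ stands in for a compactly supported bump (it is not compactly supported, but decays fast enough to be negligible outside the integration interval).

```python
import math

# Midpoint-rule L^p norms on [-20, 20].
def lp_norm(f, p, lo=-20.0, hi=20.0, n=200_000):
    dx = (hi - lo) / n
    s = sum(abs(f(lo + (i + 0.5) * dx)) ** p for i in range(n))
    return (s * dx) ** (1 / p)

phi = lambda x: math.exp(-x * x)
eps = 0.5
phi_eps = lambda x: (1 / eps) * phi(x / eps)   # n = 1 version of the rescaling

p = 3.0
lhs = lp_norm(phi_eps, p)
rhs = eps ** (-(p - 1) / p) * lp_norm(phi, p)  # the OP's formula with n = 1
print(lhs, rhs)                                # agree
print(lp_norm(phi_eps, 1.0), lp_norm(phi, 1.0))  # only p = 1 preserves the norm
```

The last line illustrates the comments above: the rescaling preserves the $L^1$ norm exactly (the "mass"), but not $L^p$ for $p > 1$.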
https://thoughtstreams.io/jtauber/thoughts/5320/ | # Thoughts
I cringe a little when people say that maths education needs to make maths more relevant and explain to students "when they're going to use this in life".
Why not instead foster an attitude that some things are interesting to learn for their own sake?
https://math.stackexchange.com/questions/2479413/subfields-of-a-cyclotomic-field | Subfields of a cyclotomic field
I was doing some study of the programming language GAP and I came to know from here (in the very first line) that "$\mathbb Q(\sqrt{5})$ is a number field that is not cyclotomic but contained in the cyclotomic field $\mathbb {Q}_5 = \mathbb Q(e^{\frac{2\pi i}{5}})$".
So I take this as an example showing that, in general, not all subfields of a cyclotomic field are cyclotomic. But a question came to my mind that I want to ask.
The above example deals with the $5$-th root of unity, and $5$ is an odd prime. But I was wondering: if we take $\mathbb Q(\theta)$ where $\theta$ is a primitive $2^n$-th root of unity for some $n>1$, will the same statement hold? In other words, I want to know:
Are all subfields of $\mathbb Q(\theta)$ cyclotomic where $\theta$ is a primitive $2^n$-th root of unity for some $n>1$ ?
I am not at all good in algebraic number theory, so I am really sorry that I can not show much work from my side. I might be completely wrong or missing something trivial. Sorry again.
I will be really grateful if someone helps me to find an answer to this.
• Just to warn you that you refer to GAP 3.4.4 manual (1997). Unless you use GAP 3.4.4, the most recent version is at gap-system.org/Doc/manuals.html and is also supplied with GAP (and searchable from GAP command line). Oct 19, 2017 at 14:52
• @AlexanderKonovalov Thanks. Oct 20, 2017 at 4:01
It's certainly not the case that a subfield of $\Bbb Q(\theta)$, where $\theta=\exp(2\pi i/2^n)$ is a primitive $2^n$-th root of unity, must be a cyclotomic field. For instance it contains the subfield $\Bbb Q(\cos(2\pi/2^n))$, which is contained in $\Bbb R$.
By the Kronecker-Weber theorem, the subfields of cyclotomic fields are precisely the finite extensions of $\Bbb Q$ whose Galois group is Abelian. In particular, all quadratic fields $\Bbb Q(\sqrt m)$ for $m\in\Bbb Z$ are contained in cyclotomic fields.
• Thanks a lot for the example and link to the theorem. Oct 19, 2017 at 2:42
• (+1) To add a specific instance of your two examples (for @usermath's amusement): $\sqrt{2} = 2 \cos \pi/4 = \zeta + \zeta^{-1}$, where $\zeta$ is the $8$-th primitive root of unity in the first quadrant. Oct 19, 2017 at 2:48
• @peterag Great. Thanks. I was reading the Kronecker-Weber theorem and it seems to me that what you said holds in general. Oct 19, 2017 at 2:51
Let $\zeta_m$ denote a primitive $m$th root of unity. When $n \geq 3$, you can conclude that there exist noncyclotomic subfields of $\mathbb{Q}(\zeta_{2^n})$ without computing any examples.
The Galois group of $\mathbb{Q}(\zeta_{2^n})$ over $\mathbb{Q}$ is isomorphic to $(\mathbb{Z}/2^n\mathbb{Z})^{\ast} \cong \mathbb{Z}/2^{n-2}\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. The number of subgroups of this last group is the same as the number of subfields of $\mathbb{Q}(\zeta_{2^n})$.
How many subfields of $\mathbb{Q}(\zeta_{2^n})$ are cyclotomic? The only roots of unity in $\mathbb{Q}(\zeta_{2^n})$ are $\pm \zeta_{2^n}^j$, $j = 0, \ldots, 2^{n-1}-1$ (see A Classical Introduction to Modern Number Theory, Chapter 14, Section 5, Lemma 1). So the only cyclotomic subfields are $$\mathbb{Q} = \mathbb{Q}(\zeta_2), \mathbb{Q}(\zeta_4) = \mathbb{Q}(i), ... , \mathbb{Q}(\zeta_{2^n})$$
$n$ in all. But there are more than $n$ subgroups of $\mathbb{Z}/2^{n-2}\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. There are $n-1$ subgroups of $\mathbb{Z}/2^{n-2}\mathbb{Z}$, and for each such subgroup $H$, you have two subgroups $H \times \{0\}$ and $H \times \mathbb{Z}/2\mathbb{Z}$ of $\mathbb{Z}/2^{n-2}\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$. So this gives you at least
$$2(n-1) = 2n -2$$
subfields of $\mathbb{Q}(\zeta_{2^n})$. Since $n > 2$, we have $2n-2 > n$.
Let $f(x)$ be a polynomial with integer coefficients. For a root of unity $\zeta$, the subfields $\mathbf{Q}[f(\zeta +\bar\zeta )]\subset \mathbf{Q}[\zeta]$ give, for most polynomials $f$, examples of non-cyclotomic fields (of course two different choices of $f$ may lead to the same subfield). These are all real.
When $\zeta$ is a $p$-th root of unity, for a prime number $p$, using quadratic Gauss sums Gauss showed (generalizing your example $p=5$) that when $p-1$ is a multiple of $4$, the real quadratic field $\mathbf{Q}[\sqrt p]$ is a subfield of the $p$-th cyclotomic field.
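Both the quadratic Gauss sum fact for $p=5$ and the $\sqrt 2 = \zeta_8 + \zeta_8^{-1}$ example from the comments can be verified numerically. This is only a floating-point sketch with complex roots of unity:

```python
import cmath, math

# Quadratic Gauss sum for p = 5: sum of zeta^(a^2) equals sqrt(p) when
# p ≡ 1 (mod 4), exhibiting Q(sqrt 5) inside Q(zeta_5).
zeta5 = cmath.exp(2j * cmath.pi / 5)
g = sum(zeta5 ** (a * a % 5) for a in range(5))
print(g)                      # approximately 2.2360679... + 0j, i.e. sqrt(5)

# The comment's example: sqrt(2) = zeta_8 + zeta_8^{-1}
zeta8 = cmath.exp(2j * cmath.pi / 8)
print(zeta8 + zeta8 ** -1)    # approximately 1.4142135... + 0j, i.e. sqrt(2)
```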
http://math.stackexchange.com/questions/164384/markov-and-independent-random-variables/164414 | # Markov and independent random variables
This is a part of an exercise in Durrett's probability book.
Consider the Markov chain on $\{1,2,\cdots,N\}$ with $p_{ij}=1/(i-1)$ when $j<i, p_{11}=1$ and $p_{ij}=0$ otherwise. Suppose that we start at point $k$. We let $I_j=1$ if $X_n$ visits $j$. Then $I_1,I_2,\cdots,I_{k-1}$ are independent.
I don't find it obvious that $I_1,\cdots,I_{k-1}$ are independent. It is possible to prove the independence by calculating all the probabilities $P\big(\bigcap_{j\in J}\{I_j=1\}\big)$ for $J\subset\{1,\cdots,k-1\}$, but this work is long and tedious. Since the independence was stated as an obvious thing in this exercise, I assume that there is an easier way.
-
Let $A_k$ denote the set of $\mathfrak a=(a_i)_{1\leqslant i\leqslant k}$ such that $a_1=a_k=1$ and $a_i$ is in $\{0,1\}$ for every $2\leqslant i\leqslant k-1$. For every $\mathfrak a$ in $A_k$, let $U(\mathfrak a)=\{2\leqslant i\leqslant k\mid a_i=1\}$. Then $$\mathrm P((I_i)_{1\leqslant i\leqslant k}=\mathfrak a)=\prod_{u\in U(\mathfrak a)}\frac1{u-1}=\prod_{i=2}^k\frac1{(i-1)^{a_i}}.$$ The product form of the RHS ensures that $(I_i)_{1\leqslant i\leqslant k}$ is independent.
Furthermore, for every $1\leqslant i\leqslant k-1$, summing the RHS over every $\mathfrak a=(a_i)_{1\leqslant i\leqslant k}$ in $A_k$ such that $a_i=\alpha$ with $\alpha$ in $\{0,1\}$ shows that $$\mathrm P(I_i=\alpha)=\frac1{k-1}\frac1{(i-1)^{\alpha}}\prod_{2\leqslant j\leqslant k-1}^{j\ne i}\left(1+\frac1{j-1}\right)=\frac1{(i-1)^{\alpha}}\frac{i-1}i,$$ hence $\mathrm P(I_i=1)=\dfrac1i$ and $\mathrm P(I_i=0)=\dfrac{i-1}i$.
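The product formula above, with its consequences $P(I_i=1)=1/i$ and the independence of the indicators, can also be checked by Monte Carlo simulation. This is a sketch; the seed, sample size, and starting point $k=8$ are arbitrary choices.

```python
import random

def visited(k):
    """Run the chain from k; return the set of states visited after the start."""
    x, hit = k, set()
    while x > 1:
        x = random.randrange(1, x)   # p_{ij} = 1/(i-1) for j < i
        hit.add(x)
    return hit

random.seed(1)
N, k = 200_000, 8
freq = [0] * k                        # freq[i] counts runs that visit state i
both = 0
for _ in range(N):
    h = visited(k)
    for i in h:
        freq[i] += 1
    both += (2 in h) and (3 in h)

for i in range(1, k):
    print(i, freq[i] / N, 1 / i)      # empirical P(I_i = 1) vs 1/i
print(both / N, (1 / 2) * (1 / 3))    # joint visit frequency vs product
```

The last line tests independence of $I_2$ and $I_3$: the joint frequency should be close to $(1/2)(1/3)=1/6$.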
-
For any $j$, observe that $X_{3}|X_{2}=j-1,X_{1}=j$ has the same distribution as $X_{2}|X_{2} \neq j-1, X_{1}=j$. Since $X_{2}=j-1$ iff $I_{j-1}=1$, by Markovianity conclude that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=j$.
Let's prove by induction that $I_{j-1}$ independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=k$.
I) $j=k$ follows straight from the first paragraph.
II) Now assume $I_{a-1}$ independent of $(I_{a-2},\ldots,I_{1})$ for all $a \geq j+1$. Thus, $(I_{k-1},\ldots,I_{j})$ is independent of $(I_{j-1},\ldots,I_{1})$. Hence, in order to prove that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ we can condition on $(I_{k-1}=1,\ldots,I_{j}=1)$. This is the same as conditioning on $(X_{2}=k-1,\ldots,X_{k-j+1}=j)$. By Markovianity and temporal homogeneity, $(X_{k-j+2}^{\infty}|X_{k-j+1}=j,\ldots,X_{1}=k)$ is identically distributed to $(X_{2}^{\infty}|X_{1}=j)$. Using the first paragraph, we know that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=j$. Hence, by the equality of distributions, $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=k$.
-
I understand now. I think you mean "Hence $I_{k-1}$ is independent of $(I_{k−2},\cdots,I_1)$". This is a very good answer! thanks. – Wei Jun 29 '12 at 17:08
Which induction? – Did Jun 29 '12 at 21:26
@did I'll try to fix this (it is very sloppy, indeed). – madprob Jun 29 '12 at 21:39
Your induction hypothesis is not clear. (By the way, I am curious to know which parts of your argument the OP got, really.) – Did Jun 30 '12 at 6:13
@did the induction hypothesis is that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ for all $k-n \leq j < k$ – madprob Jun 30 '12 at 6:39
https://admin.clutchprep.com/organic-chemistry/practice-problems/13821/draw-resonance-structures-for-each-of-the-following-8 | 🤓 Based on our data, we think this question is relevant for Professor Pollet's class at GT.
# Solution: Draw resonance structures for each of the following:
###### Problem
Draw resonance structures for each of the following: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.828361451625824, "perplexity": 2071.6002077610583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370511408.40/warc/CC-MAIN-20200410173109-20200410203609-00506.warc.gz"} |
http://math.stackexchange.com/users/48197/anuar | # Anuar
less info
reputation
11
bio website anuars.wordpress.com location Mexico age member for 1 year, 11 months seen Aug 27 at 4:07 profile views 25
Institute of Physics.
Department of Complex Systems.
National Autonomous University of Mexico (UNAM).
$$\mathcal{H}(s_{1},...,s_{N})=-\frac{J}{N}\sum_{1\leq i<j\leq N}s_{i}s_{j}-H\sum_{i=1}^{N}s_{i}$$
# 11 Questions
3 Functional form of a solution to a Differential Equation (Euler-Lagrange) 3 Calculating an almost Gamma integral 1 Heaviside step function squared 1 Commutation between Logarithm and Gaussian Integral. 1 How to interchange a sum and a product?
# 183 Reputation
+10 How to interchange a sum and a product? +5 How to interchange a sum and a product? +5 Commutation between Logarithm and Gaussian Integral. +5 Functional form of a solution to a Differential Equation (Euler-Lagrange)
1 How to interchange a sum and a product? 1 Calculating an almost Gamma integral 1 inner product and adjoint operator 1 Integral of gradient dotted with dr 0 Commutation between Logarithm and Gaussian Integral.
# 21 Tags
1 integration × 4 1 summation × 2 1 proof-strategy × 3 1 products × 2 1 riemann-zeta × 2 1 linear-algebra × 2 1 improper-integrals × 2 1 multivariable-calculus 1 gamma-function × 2 1 adjoint
# 13 Accounts
Physics 482 rep 217 Mathematics 183 rep 11 Mathematica 139 rep 6 Video Production 113 rep 4 Chemistry 102 rep 5 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9405365586280823, "perplexity": 4868.373115121616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119661285.56/warc/CC-MAIN-20141024030101-00280-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://neuronaldynamics-exercises.readthedocs.io/en/latest/exercises/brunel-network.html | # 11. Network of LIF neurons (Brunel)¶
In this exercise we study a well-known network of sparsely connected leaky integrate-and-fire neurons (Brunel, 2000).
Book chapters
The Brunel model is introduced in Chapter 13, Section 4.2. The network structure is shown in Figure 13.6b. Read the section “Synchrony, oscillations, and irregularity” and have a look at Figure 13.7. For this exercise, you can skip the explanations related to the Fokker-Planck equation.
Python classes
The module brunel_model.LIF_spiking_network implements a parametrized network. The figure below shows the simulation result using the default configuration.
Simulation result. Top: raster plot of 150 randomly selected neurons. Three spike trains are visually highlighted. Middle: time evolution of the population activity A(t). Bottom: Membrane voltage of three neurons. The red color in the top and bottom panels identifies the same neuron.
To get started, call the function brunel_model.LIF_spiking_network.getting_started() or copy the following code into a Jupyter notebook.
%matplotlib inline
from neurodynex.brunel_model import LIF_spiking_network
from neurodynex.tools import plot_tools
import brian2 as b2
rate_monitor, spike_monitor, voltage_monitor, monitored_spike_idx = LIF_spiking_network.simulate_brunel_network(sim_time=250. * b2.ms)
plot_tools.plot_network_activity(rate_monitor, spike_monitor, voltage_monitor, spike_train_idx_list=monitored_spike_idx, t_min=0.*b2.ms)
Note that you can change all parameters of the neuron by using the named parameters of the function simulate_brunel_network(). If you do not specify any parameter, the default values are used (see next code block). You can access these variables in your code by prefixing them with the module name (for example LIF_spiking_network.POISSON_INPUT_RATE).
# Default parameters of a single LIF neuron:
V_REST = 0. * b2.mV
V_RESET = +10. * b2.mV
FIRING_THRESHOLD = +20. * b2.mV
MEMBRANE_TIME_SCALE = 20. * b2.ms
ABSOLUTE_REFRACTORY_PERIOD = 2.0 * b2.ms
# Default parameters of the network
SYNAPTIC_WEIGHT_W0 = 0.1 * b2.mV # note: w_ee=w_ie = w0 and = w_ei=w_ii = -g*w0
RELATIVE_INHIBITORY_STRENGTH_G = 4. # balanced
CONNECTION_PROBABILITY_EPSILON = 0.1
SYNAPTIC_DELAY = 1.5 * b2.ms
POISSON_INPUT_RATE = 12. * b2.Hz
N_POISSON_INPUT = 1000
## 11.1. Exercise: model parameters and threshold rate¶
In the first exercise, we get familiar with the model and its parameters. Make sure you have read the book chapter. Then have a look at the documentation of simulate_brunel_network(). Note that in our implementation, the number of excitatory presynaptic Poisson neurons (input from the external population) is a parameter N_extern and thus independent of $$C_E$$.
### 11.1.1. Question:¶
• Run the simulation with the default parameters (see code block above). In that default configuration, what values do the variables $$N_E$$, $$N_I$$, $$C_E$$, $$C_I$$, $$w_{EE}$$, $$w_{EI}$$, $$w_{IE}$$, and $$w_{II}$$ take? The variables are described in the book and in Fig. 13.6.
• What are the units of the weights w?
• The frequency $$\nu_{threshold}$$ is the Poisson rate of the external population that is just sufficient to drive the neurons in the network to the firing threshold. Using Eq. (1), compute $$\nu_{threshold}$$. You can do this in Python, e.g. use LIF_spiking_network.FIRING_THRESHOLD for $$u_{thr}$$, etc.
• Referring to Figure 13.7, left panel, what is the meaning of the value 1 on the y-axis (Input)? What does the horizontal dashed line designate? How is it related to $$u_{thr}$$?
• Run a simulation for 500ms. Set poisson_input_rate to $$\nu_{threshold}$$. Plot the network activity in the time interval [0ms, 500ms]. Is the network quiet (Q)?
• During the simulation time, what is the average firing rate of a single neuron? You can access the total number of spikes from the Brian2 SpikeMonitor (spike_monitor.num_spikes) and the number of neurons in the network from spike_monitor.source.N.
(1)$\nu_{threshold} = \frac{u_{thr}}{N_{extern} w_{0} \tau_m}$
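For reference, Eq. (1) can be evaluated directly with the default values quoted in the code block above. The sketch below uses plain SI floats instead of Brian2 quantities, which is just an illustrative choice; in the exercise itself you would use the LIF_spiking_network constants.

```python
# Evaluating Eq. (1) with the module's default parameter values (quoted above).
u_thr = 20e-3      # FIRING_THRESHOLD = +20 mV, in volts
N_extern = 1000    # N_POISSON_INPUT
w0 = 0.1e-3        # SYNAPTIC_WEIGHT_W0 = 0.1 mV, in volts
tau_m = 20e-3      # MEMBRANE_TIME_SCALE = 20 ms, in seconds

nu_threshold = u_thr / (N_extern * w0 * tau_m)
print(nu_threshold)  # ~10.0 (Hz)
```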
## 11.2. Exercise: Population activity¶
The network of spiking LIF-neurons shows characteristic population activities. In this exercise we investigate the patterns asynchronous irregular (AI), synchronous regular (SR), fast synchronous irregular (SI fast) and slow synchronous irregular (SI slow).
### 11.2.1. Question: Network states¶
• The function simulate_brunel_network() gives you three options to vary the input strength (y-axis in figure 13.7, a). What options do you have?
• Which parameter of the function simulate_brunel_network() lets you change the relative strength of inhibition (the x-axis in figure 13.7, a)?
• Define a network of 6000 excitatory and 1500 inhibitory neurons. Find the appropriate parameters and simulate the network in the regimes AI, SR, SI-fast and SI-slow. For each of the four configurations, plot the network activity and compute the average firing rate. Run each simulation for at least 1000ms and plot two figures for each simulation: one showing the complete simulation time and one showing only the last ~50ms.
• What is the population activity A(t) in each of the four conditions (in Hz, averaged over the last 200ms of your simulation)?
### 11.2.2. Question: Interspike interval (ISI) and Coefficient of Variation (CV)¶
Before answering the questions, make sure you understand the notions ISI and CV. If necessary, read Chapter 7.3.1.
• What is the CV of a Poisson neuron?
• From the four figures plotted in the previous question, qualitatively interpret the spike trains and the population activity in each of the four regimes:
• What is the mean firing rate of a single neuron (only a rough estimate).
• Sketch the ISI histogram. (is it peaked or broad? where’s the maximum?)
• Estimate the CV. (is it <1, <<1, =1, >1 ?)
• Validate your estimates using the functions spike_tools.get_spike_train_stats() and plot_tools.plot_ISI_distribution(). Use the code block provided here.
• Make sure you understand the code block. Why is the function spike_tools.get_spike_train_stats() called with the parameter window_t_min=100.*b2.ms?
%matplotlib inline
from neurodynex.brunel_model import LIF_spiking_network
from neurodynex.tools import plot_tools, spike_tools
import brian2 as b2
poisson_rate = ??? *b2.Hz
g = ???
CE = ???
simtime = ??? *b2.ms
rate_monitor, spike_monitor, voltage_monitor, monitored_spike_idx = LIF_spiking_network.simulate_brunel_network(N_Excit=CE, poisson_input_rate=poisson_rate, g=g, sim_time=simtime)
plot_tools.plot_network_activity(rate_monitor, spike_monitor, voltage_monitor, spike_train_idx_list=monitored_spike_idx, t_min = 0*b2.ms)
plot_tools.plot_network_activity(rate_monitor, spike_monitor, voltage_monitor, spike_train_idx_list=monitored_spike_idx, t_min = simtime - ??? *b2.ms)
spike_stats = spike_tools.get_spike_train_stats(spike_monitor, window_t_min= 100 *b2.ms)
plot_tools.plot_ISI_distribution(spike_stats, hist_nr_bins=100, xlim_max_ISI= ??? *b2.ms)
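The claim that a Poisson neuron has CV close to 1 can be checked with a few lines of pure Python. This is a stand-in for spike_tools, not the course code; the rate and sample count below are arbitrary choices.

```python
import random
import statistics

# A Poisson spike train has exponentially distributed inter-spike intervals,
# so its CV (= std(ISI) / mean(ISI)) should be close to 1.
random.seed(42)
rate = 10.0  # Hz (assumed)
isis = [random.expovariate(rate) for _ in range(20000)]
cv = statistics.stdev(isis) / statistics.mean(isis)
print(cv)  # close to 1 for a Poisson process
```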
• In the Synchronous Regular (SR) state, what is the dominant frequency of the population activity A(t)? Compare this frequency to the firing frequency of a single neuron. You can do this “visually” using the plots created by plot_tools.plot_network_activity() or by solving the bonus exercise below.
## 11.3. Exercise: Emergence of Synchronization¶
The different regimes emerge from the recurrence and the relative strength of inhibition g. In the absence of recurrent feedback from the network, the network would approach a constant mean activity A(t).
### 11.3.1. Question:¶
• Simulate a network of 6000 excitatory and 1500 inhibitory neurons. Set the following parameters: poisson_rate = 14*b2.Hz, g=2.5. In which state is this network?
• What would be the population activity caused by the external input only? We can simulate this. Run a simulation of the same network, but disable the recurrent feedback: simulate_brunel_network(…,w0=0.*b2.mV, w_external = LIF_spiking_network.SYNAPTIC_WEIGHT_W0).
• Explain why the non-recurrent network shows a strong synchronization in the beginning and why this synchronization fades out.
• The non-recurrent network is strongly synchronized in the beginning. Is the connected network simply “locked” to this initial synchronization? You can falsify this hypothesis by initializing each neuron in the network with a random vm. Run the simulation with random_vm_init=True to see how the synchronization emerges over time.
Simulation of a network with random v_m initialization. The synchronization of the neurons is not a residue of shared initial conditions, but emerges over time.
## 11.4. Bonus: Power Spectrum of the Population Activity¶
We can get more insights into the statistics of the network activity by analysing the power spectrum of the spike trains and the population activity. The four regimes (SR, AI, SI fast, SI slow) are characterized by two properties: the regularity/irregularity of individual neuron’s spike trains and the stationary/oscillatory pattern of the population activity A(t). We transform the spike trains and A(t) into the frequency domain to identify regularities.
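The core idea, that a regular spike train concentrates its power at the firing frequency and its harmonics, can be illustrated with a toy discrete Fourier transform, independent of the course modules. The sampling step, duration, and firing rate below are arbitrary choices, and the code is a stand-in for the spike_tools helpers.

```python
import cmath

# A perfectly regular 10 Hz spike train, sampled as a binary vector.
dt = 0.001                                                 # 1 ms -> f_max = 500 Hz
N = 1000                                                   # 1 s  -> delta_f = 1 Hz
spikes = [1.0 if n % 100 == 0 else 0.0 for n in range(N)]  # one spike per 100 ms

def power(k):
    """|DFT|^2 of the spike vector at frequency k * delta_f (here: k Hz)."""
    s = sum(spikes[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
    return abs(s) ** 2

print(power(10), power(7))  # large at the 10 Hz firing rate, ~0 off-harmonic
```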
### 11.4.1. Question: Sampling the Population Activity¶
• When analysing the population activity A(t), what are the lowest and highest frequencies in which we are interested?
The highest frequency $$f_{max}$$ one can resolve from the time series A(t) is determined by $$\Delta t$$. Even if we are not interested in very high frequencies, we should not increase $$\Delta t$$ (too much) because it may affect the accuracy of the simulation.
The lowest frequency $$\Delta f$$ is determined by the signal length $$T_{Simulation}$$. We could therefore decrease the simulation duration if we accept decreasing the resolution in the frequency domain. But there is another option: We still use a “too long” simulation time $$T_{Simulation}$$ but then split the RateMonitor.rate signal into $$k$$ chunks of duration $$T_{Signal}$$. We can then average the power across the $$k$$ repetitions. This is what the function spike_tools.get_population_activity_power_spectrum() does - we just have to get the parameters first:
• Given the values $$\Delta f = 5 Hz, \Delta t = 0.1ms, T_{init}=100ms, k=5$$, compute $$T_{Signal}$$ and $$T_{Simulation}$$.
(2)$\begin{split}\begin{array}{ccll} f_{max} = \frac{f_{Sampling}}{2} = \frac{1}{2 \cdot \Delta t} \\[.2cm] N \cdot \Delta t = T_{Signal} \\[.2cm] 2 \cdot f_{max} = N \cdot \Delta f \\[.2cm] T_{Simulation} = k \cdot T_{Signal} + T_{init}; k \in N \\ \end{array}\end{split}$
$$f_{Sampling}$$: sampling frequency of the signal; $$f_{max}$$: highest frequency component; $$\Delta f$$: frequency resolution in fourier domain = lowest frequency component; $$T_{Signal}$$ length of the signal; $$\Delta t$$: temporal resolution of the signal; $$N$$: Number of samples (same in time- and frequency- domain) $$T_{Simulation}$$: simulation time; $$k$$: k repetitions of the signal; $$T_{init}$$: initial part of the simulation (not used for data analysis);
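The relations in Eq. (2) can be solved mechanically for the values given in the question. This is a small sketch; the variable names are mine, not from spike_tools.

```python
# Solving the sampling relations of Eq. (2) for:
# delta_f = 5 Hz, delta_t = 0.1 ms, T_init = 100 ms, k = 5.
delta_f = 5.0      # Hz
delta_t = 0.1e-3   # s
T_init = 100e-3    # s
k = 5

f_max = 1.0 / (2.0 * delta_t)   # Nyquist frequency: 5000 Hz
N = 2.0 * f_max / delta_f       # 2000 samples
T_signal = N * delta_t          # 0.2 s  = 200 ms
T_sim = k * T_signal + T_init   # 1.1 s  = 1100 ms
print(T_signal, T_sim)
```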
### 11.4.2. Question: Sampling a Single Neuron Spike Train¶
• The sampling of the individual neuron’s spike train is different because in that case, the signal is given as a list of timestamps (SpikeMonitor.spike_trains) and needs to be transformed into a binary vector. This is done inside the function spike_tools.get_averaged_single_neuron_power_spectrum(). Read the doc to learn how to control the sampling rate.
• The firing rate of a single neuron can be very low and very different from one neuron to another. For that reason, we do not split the spike train into k realizations but we analyse the full spike train ($$T_{Simulation}-T_{init}$$). From the simulation, we get many (CE+CI) spike trains and we can average across a subset of neurons. Check the doc of spike_tools.get_averaged_single_neuron_power_spectrum() to learn how to control the number of neurons of this subset.
### 11.4.3. Question: Single Neuron activity vs. Population Activity¶
We can now compute and plot the power spectrum.
%matplotlib inline
from neurodynex.brunel_model import LIF_spiking_network
from neurodynex.tools import plot_tools, spike_tools
import brian2 as b2
# Specify the parameters of the desired network state (e.g. SI fast)
poisson_rate = ??? *b2.Hz
g = ???
CE = ???
# Specify the signal and simulation properties:
delta_t = ??? * b2.ms
delta_f = ??? * b2.Hz
T_init = ??? * b2.ms
k = ???
# compute the remaining values:
f_max = ???
N_samples = ???
T_signal = ???
T_sim = k * T_signal + T_init
# replace the ??? by appropriate values:
print("Start simulation. T_sim={}, T_signal={}, N_samples={}".format(T_sim, T_signal, N_samples))
b2.defaultclock.dt = delta_t
# for technical reason (solves rounding issues), we add a few extra samples:
stime = T_sim + (10 + k) * b2.defaultclock.dt
rate_monitor, spike_monitor, voltage_monitor, monitored_spike_idx = \
LIF_spiking_network.simulate_brunel_network(
N_Excit=CE, poisson_input_rate=poisson_rate, g=g, sim_time=stime)
plot_tools.plot_network_activity(rate_monitor, spike_monitor, voltage_monitor,
spike_train_idx_list=monitored_spike_idx, t_min=0*b2.ms)
plot_tools.plot_network_activity(rate_monitor, spike_monitor, voltage_monitor,
spike_train_idx_list=monitored_spike_idx, t_min=T_sim - ??? *b2.ms)
spike_stats = spike_tools.get_spike_train_stats(spike_monitor, window_t_min= T_init)
plot_tools.plot_ISI_distribution(spike_stats, hist_nr_bins= ???, xlim_max_ISI= ??? *b2.ms)
# Power Spectrum
pop_freqs, pop_ps, average_population_rate = \
spike_tools.get_population_activity_power_spectrum(
rate_monitor, delta_f, k, T_init)
plot_tools.plot_population_activity_power_spectrum(pop_freqs, pop_ps, ??? *b2.Hz, average_population_rate)
freq, mean_ps, all_ps, mean_firing_rate, all_mean_firing_freqs = \
spike_tools.get_averaged_single_neuron_power_spectrum(
spike_monitor, sampling_frequency=1./delta_t, window_t_min= T_init,
window_t_max=T_sim, nr_neurons_average= ??? )
plot_tools.plot_spike_train_power_spectrum(freq, mean_ps, all_ps, max_freq= ??? * b2.Hz,
mean_firing_freqs_per_neuron=all_mean_firing_freqs,
nr_highlighted_neurons=2)
print("done")
The figures below show the type of analysis you can do with this script. The first figure shows the last 80ms of a network simulation, the second figure shows the power spectrum of the population activity A(t), and the third figure shows the power spectrum of single neurons (individual neurons and averaged across neurons). Note the qualitative difference between the spectral density of the population and that of the individual neurons.
Single neurons (red, grey) fire irregularly (I) while the population activity oscillates (S). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238841533660889, "perplexity": 3227.293527752721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221214702.96/warc/CC-MAIN-20180819051423-20180819071423-00063.warc.gz"} |
https://physics.stackexchange.com/questions/267186/a-question-on-the-redshift-of-photons-due-to-cosmic-expansion | # A question on the redshift of photons due to cosmic expansion
Given that the universe is expanding over time, in the sense that the (spatial) metric is changing over time and the physical distance between objects is increasing, naïve intuition leads me to the conclusion that the wavelength of a photon travelling from a distant galaxy (receding from us) will be stretched. Consequently its frequency will decrease, and the energy of the photon will decrease (with this energy simply being lost, since time-translation symmetry is broken by the big bang). The problem with this is: relative to another observer, wouldn't the wavelength be stretched by a different amount, and hence the redshift of the photon be different, corresponding to a different amount of energy being lost by the photon?
This all leaves me feeling confused on the subject. Does the photon actually get redshifted and lose energy due to cosmic expansion, or is it simply an observer effect (akin to the standard Doppler effect)? It does seem a little counterintuitive that a photon would lose energy simply through its propagation through space.
Is the whole point that this is an observer-dependent phenomenon, and that the energy of an object is an observer-dependent quantity (energy is not conserved as one moves between two different frames of reference)?
You need to concentrate on the observer making the measurements. Photons carry energy, but they don't lose energy just because they travel. The "loss" of energy is not the cause of the redshift; only if the photon scatters off something will it lose energy.
However, not all observers will agree that photon has the same amount of energy. Assume you are in a frame in which the photon is observed as green. An observer in a different frame moving relative to yours measures the photon as red.
Because measurements are made in different reference frames, the conservation of energy principle is not violated. Ultimately the energy of a photon is frequency dependent and different observers measure different frequencies.
Analogously, if you toss a coin whilst being driven, the coin has a different velocity relative to you than that measured by a bystander. Keep in mind that energy is conserved within each reference frame. The law of conservation of energy states that, in any given reference frame, the amount of energy doesn't change; it does not dictate how the energy in one frame is related to the energy in another frame.
A good read on this is Preposterous Universe.
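For what it's worth, the frame dependence of photon energy is easy to quantify with the special-relativistic Doppler formula for a receding source; the wavelength and speed below are arbitrary illustrative numbers.

```python
import math

# Doppler shift for a source receding at beta = v/c:
#     f_obs = f_src * sqrt((1 - beta) / (1 + beta)),  and  E = h * f.
h = 6.62607015e-34   # Planck constant (J s)
c = 2.99792458e8     # speed of light (m/s)
lam_src = 550e-9     # a "green" photon in the source frame (m)
beta = 0.1           # recession speed as a fraction of c (assumed)

f_src = c / lam_src
f_obs = f_src * math.sqrt((1 - beta) / (1 + beta))
lam_obs = c / f_obs
ratio = (h * f_obs) / (h * f_src)   # energy ratio between the two frames
print(lam_obs * 1e9, ratio)         # ~608 nm (redder), ~0.905 (less energy)
```

The same photon is assigned a perfectly definite, conserved energy within each frame; only the comparison between frames changes.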
• While this is correct, the beginner is still left with the question how nature can accelerate non-co-located observers and their infinitesimally small inertial frames relative to each other without expending energy. Jul 9, 2016 at 20:54
• @count_to_10 So is it essentially like the ordinary Doppler effect, in that the energy of photon hasn't changed from what it was when it was emitted by the source, but it's energy depends on the reference frame in which it is measured in?! Is the point that it is the 4-momentum that is frame independent, and not energy (or momentum) on its own?! Jul 9, 2016 at 20:56
• Answers from people more experienced than me, by a long way. physics.stackexchange.com/questions/214983/… and the questions on the right hand side.
– user108787
Jul 9, 2016 at 21:02
• @count_to_10 Thanks for the links. Mathematically, the frame dependence of energy quantified by $E'=\frac{\partial x'^{0}}{\partial x^{\nu}}p^{\nu}$, right ($p^{\nu}$ is the 4-momentum with respect to the unprimed frame, and $E':=p^{0}$ is the zeroth component of the 4-momentum with respect to the primed frame)? Jul 9, 2016 at 21:31
• @count_to_10 Ah ok, thanks for taking a look. A mixture really. I've been reading Sean Carroll's lecture notes, and Wald's GR book, in particular. Jul 10, 2016 at 10:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924418032169342, "perplexity": 232.66286396649636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00707.warc.gz"} |
https://farside.ph.utexas.edu/teaching/plasma/lectures/node17.html | Next: Invariance of Magnetic Moment Up: Charged Particle Motion Previous: Guiding Centre Motion
Magnetic Drifts
Equations (70) and (86) can be combined to give

$$\mathbf{v}_{1\perp} = \frac{\mu}{m\,\Omega}\,\mathbf{b}\times\nabla B + \frac{v_\parallel}{\Omega}\,\mathbf{b}\times\frac{d\mathbf{b}}{dt} + \frac{1}{\Omega}\,\mathbf{b}\times\frac{d\mathbf{v}_E}{dt}. \tag{87}$$
The three terms on the right-hand side of the above expression are conventionally called the magnetic, or grad-B, drift, the inertial drift, and the polarization drift, respectively.
The magnetic drift,

$$\mathbf{v}_{\rm mag} = \frac{\mu}{m\,\Omega}\,\mathbf{b}\times\nabla B, \tag{88}$$
is caused by the slight variation of the gyroradius with gyrophase as a charged particle rotates in a non-uniform magnetic field. The gyroradius is reduced on the high-field side of the Larmor orbit, whereas it is increased on the low-field side. The net result is that the orbit does not quite close. In fact, the motion consists of the conventional gyration around the magnetic field combined with a slow drift which is perpendicular to both the local direction of the magnetic field and the local gradient of the field-strength.
Given that

$$\frac{d\mathbf{b}}{dt} = \frac{\partial\mathbf{b}}{\partial t} + (\mathbf{v}_E\cdot\nabla)\,\mathbf{b} + v_\parallel\,(\mathbf{b}\cdot\nabla)\,\mathbf{b}, \tag{89}$$

the inertial drift can be written

$$\mathbf{v}_{\rm int} = \frac{v_\parallel}{\Omega}\,\mathbf{b}\times\left[\frac{\partial\mathbf{b}}{\partial t} + (\mathbf{v}_E\cdot\nabla)\,\mathbf{b} + v_\parallel\,(\mathbf{b}\cdot\nabla)\,\mathbf{b}\right]. \tag{90}$$
In the important limit of stationary magnetic fields and weak electric fields, the above expression is dominated by the final term,

$$\mathbf{v}_{\rm curv} = \frac{v_\parallel^{\,2}}{\Omega}\,\mathbf{b}\times(\mathbf{b}\cdot\nabla)\,\mathbf{b}, \tag{91}$$

which is called the curvature drift. As is easily demonstrated, the quantity $(\mathbf{b}\cdot\nabla)\,\mathbf{b}$ is a vector whose direction is towards the centre of the circle which most closely approximates the magnetic field-line at a given point, and whose magnitude is the inverse of the radius of this circle. Thus, the centripetal acceleration imposed by the curvature of the magnetic field on a charged particle following a field-line gives rise to a slow drift which is perpendicular to both the local direction of the magnetic field and the direction to the local centre of curvature of the field.
The polarization drift,

$$\mathbf{v}_{\rm polz} = \frac{1}{\Omega}\,\mathbf{b}\times\frac{d\mathbf{v}_E}{dt}, \tag{92}$$

reduces to

$$\mathbf{v}_{\rm polz} \simeq \frac{1}{\Omega\,B}\,\frac{\partial \mathbf{E}_\perp}{\partial t} \tag{93}$$
in the limit in which the magnetic field is stationary but the electric field varies in time. This expression can be understood as a polarization drift by considering what happens when we suddenly impose an electric field on a particle at rest. The particle initially accelerates in the direction of the electric field, but is then deflected by the magnetic force. Thereafter, the particle undergoes conventional gyromotion combined with the $\mathbf{E}\times\mathbf{B}$ drift. The time between the switch-on of the field and the magnetic deflection is approximately $\Omega^{-1}$. Note that there is no deflection if the electric field is directed parallel to the magnetic field, so this argument only applies to perpendicular electric fields. The initial displacement of the particle in the direction of the field is of order

$$\delta \sim \frac{E_\perp}{B\,\Omega}. \tag{94}$$
Note that, because $\Omega_i \ll \Omega_e$, the displacement of the ions greatly exceeds that of the electrons. Thus, when an electric field is suddenly switched on in a plasma, there is an initial polarization of the plasma medium caused, predominantly, by a displacement of the ions in the direction of the field. If the electric field, in fact, varies continuously in time, then there is a slow drift due to the constantly changing polarization of the plasma medium. This drift is essentially the time derivative of Eq. (94) [i.e., Eq. (93)].
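The mass dependence of the polarization displacement can be made concrete with a quick numerical comparison. The field values below are arbitrary; only the mass ratio matters, since $\delta \propto 1/\Omega \propto m$.

```python
# Comparing the polarization displacement delta ~ E_perp / (B * Omega),
# with Omega = q B / m, for a proton and an electron in the same fields.
q = 1.602176634e-19      # elementary charge (C)
m_e = 9.1093837015e-31   # electron mass (kg)
m_p = 1.67262192369e-27  # proton mass (kg)
E_perp = 100.0           # V/m (assumed)
B = 0.1                  # T (assumed)

def displacement(mass):
    Omega = q * B / mass          # gyrofrequency
    return E_perp / (B * Omega)   # initial displacement along the field

ratio = displacement(m_p) / displacement(m_e)
print(ratio)  # ~1836: the ion displacement dominates the polarization
```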
Richard Fitzpatrick 2011-03-31 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560632705688477, "perplexity": 320.7254924032137}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361253.38/warc/CC-MAIN-20211202084644-20211202114644-00063.warc.gz"} |
https://www.kth.se/math/kalender/xiao-shen-on-the-number-and-size-of-holes-in-the-growing-ball-of-first-passage-percolation-1.1193532?date=2022-09-27&orgdate=2022-02-26&length=1&orglength=309 | # Xiao Shen: On the number and size of holes in the growing ball of first-passage percolation
Time: Tue 2022-09-27 15.15 - 16.15
Location: Zoom
Video link: Meeting ID: 698 3346 0369
Participating: Xiao Shen (University of Utah)
### Abstract
First-passage percolation is a random growth model defined on Z^d using i.i.d. nonnegative weights (τ_e) on the edges. Letting T(x,y) be the distance between vertices x and y induced by the weights, we study the random ball of radius t centered at the origin, B(t) = {x ∈ Z^d : T(0,x) ≤ t}. It is known that for all such τ_e, the number of vertices (volume) of B(t) is at least of order t^d, and under mild conditions on τ_e, this volume grows like a deterministic constant times t^d. Defining a hole in B(t) to be a bounded component of the complement B(t)^c, we prove that if τ_e is not deterministic, then a.s., for all large t, B(t) has at least ct^{d−1} many holes, and the maximal volume of any hole is at least c log t. Conditionally on the (unproved) uniform curvature assumption, we prove that a.s., for all large t, the number of holes is at most (log t)^C t^{d−1}, and for d = 2, no hole in B(t) has volume larger than (log t)^C. Without curvature, we show that no hole has volume larger than Ct log t. (Joint work with Michael Damron, Julian Gold, Wai-Kit Lam.)
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9065808653831482, "perplexity": 2383.5564163765675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00394.warc.gz"} |
https://www.physicsforums.com/threads/i-cant-understand-a-development-problem-i-need-help-please.729424/ | # I can't understand a development problem, I need help, please!
1. Dec 21, 2013
### superduck
1. The problem statement, all variables and given/known data
I'm learning mathematics in Japanese, so maybe my English is not correct to describe what I want to mention.
I'm struggling with the expansion problem below:
(x + y + 2z)^3 - (y + 2z - x)^3 - (2z + x - y)^3 - (x + y - 2z)^3
2. Relevant equations
I used the cube expansion formula a^3 + b^3 = (a + b)^3 - 3ab(a + b).
3. The attempt at a solution
I solved this problem, but the answer in my reference book uses a different method, and I can't understand that method.
My attempt at a solution is below
Let P = (x + y + 2z)^3 - (y + 2z - x)^3,
    Q = (2z + x - y)^3 + (x + y - 2z)^3.
P = [{(y + 2z) + x} - {(y + 2z) - x}]^3 + 3{(y + 2z) + x}{(y + 2z) - x}{(y + 2z + x) - (y + 2z - x)}
  = (2x)^3 + 3{(y + 2z)^2 - x^2}·2x
  = 8x^3 + 6x(y^2 + 4yz + 4z^2 - x^2)
  = 8x^3 - 6x^3 + 6xy^2 + 24xyz + 24z^2x
  = 2x^3 + 6xy^2 + 24z^2x + 24xyz
Q = [{x - (y - 2z)} + {x + (y - 2z)}]^3 - 3{x - (y - 2z)}{x + (y - 2z)}·{(x - y + 2z) + (x + y - 2z)}
  = (2x)^3 - 3{x^2 - (y - 2z)^2}·2x
  = 8x^3 - 6x(x^2 - y^2 + 4yz - 4z^2)
  = 2x^3 + 6xy^2 + 24z^2x - 24xyz
thus,
P - Q = (2x^3 + 6xy^2 + 24z^2x + 24xyz) - (2x^3 + 6xy^2 + 24z^2x - 24xyz)
      = 48xyz
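As a quick numerical sanity check, one can evaluate the original expression at many integer points and compare it with 48xyz (a small Python sketch; the function name `expr` is just for illustration):

```python
import random

def expr(x, y, z):
    # The original expression:
    # (x+y+2z)^3 - (y+2z-x)^3 - (2z+x-y)^3 - (x+y-2z)^3
    return ((x + y + 2*z)**3 - (y + 2*z - x)**3
            - (2*z + x - y)**3 - (x + y - 2*z)**3)

# Compare against 48xyz at random integer points.
for _ in range(1000):
    x, y, z = (random.randint(-30, 30) for _ in range(3))
    assert expr(x, y, z) == 48*x*y*z
print("expression equals 48xyz at all sampled points")
```

Since both sides are polynomials of degree 3, agreeing on this many points is strong evidence (and an exhaustive check over a small grid would already be conclusive for a cubic).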
I solved the problem as above, but the reference book's solution is different from mine.
The reference book's solution is below:
Let P = (x + y + 2z)^3 - (y + 2z - x)^3,
    Q = (2z + x - y)^3 + (x + y - 2z)^3.
P = {(x + y + 2z) - (y + 2z - x)}{(x + y + 2z)^2 + (x + y + 2z)(y + 2z - x) + (y + 2z - x)^2}
  = 2x{x^2 + 2x(y + 2z) + (y + 2z)^2 + (y + 2z)^2 - x^2 + (y + 2z)^2 - 2x(y + 2z) + x^2}
  = 2x{3(y + 2z)^2 + x^2}
Q = {(2z + x - y) + (x + y - 2z)}{(2z + x - y)^2 - (2z + x - y)(x + y - 2z) + (x + y - 2z)^2}
  = 2x{(2z - y)^2 + 2x(2z - y) + x^2 - x^2 + (2z - y)^2 + (2z - y)^2 - 2x(2z - y) + x^2}
  = 2x{3(2z - y)^2 + x^2}
thus,
P - Q = 2x{3(y + 2z)^2 + x^2 - 3(2z - y)^2 - x^2}
      = 6x{(y + 2z)^2 - (2z - y)^2}
      = 6x·8yz = 48xyz
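The reference book's intermediate factored forms can be checked the same way; assuming P and Q are defined as above, both factored forms and the final difference agree at every sampled integer point:

```python
import random

# Spot-check the reference book's intermediate factorizations of P and Q.
for _ in range(1000):
    x, y, z = (random.randint(-30, 30) for _ in range(3))
    P = (x + y + 2*z)**3 - (y + 2*z - x)**3
    Q = (2*z + x - y)**3 + (x + y - 2*z)**3
    assert P == 2*x*(3*(y + 2*z)**2 + x**2)   # P = 2x{3(y+2z)^2 + x^2}
    assert Q == 2*x*(3*(2*z - y)**2 + x**2)   # Q = 2x{3(2z-y)^2 + x^2}
    assert P - Q == 48*x*y*z
print("reference book factorizations check out")
```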
The answer is the same as my result, but the method of solving is different. I want to understand the reference book's method. Please help me.
Last edited by a moderator: Dec 21, 2013
2. Dec 21, 2013
### Mentallic
The reference book is using the rule
$$a^3+b^3=(a+b)(a^2+ab+b^2)$$
and
$$a^3-b^3=(a-b)(a^2-ab+b^2)$$
3. Dec 23, 2013
### haruspex
You mean
$$a^3+b^3=(a+b)(a^2-ab+b^2)$$
and
$$a^3-b^3=(a-b)(a^2+ab+b^2)$$
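These corrected factorizations are easy to spot-check numerically:

```python
# Verify a^3 + b^3 = (a+b)(a^2 - ab + b^2)
# and    a^3 - b^3 = (a-b)(a^2 + ab + b^2)
# over a grid of integers.
for a in range(-20, 21):
    for b in range(-20, 21):
        assert a**3 + b**3 == (a + b)*(a**2 - a*b + b**2)
        assert a**3 - b**3 == (a - b)*(a**2 + a*b + b**2)
print("both factorizations hold")
```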
4. Dec 23, 2013
### Mentallic
Yeah
5. Dec 24, 2013
### superduck
I noticed that, but I could still find my mistake
Yes, that formula is true. I noticed that the first comment has a small sign mistake, but the comment still helped me find my own mistake. I'm grateful to both members, thank you very much.
I'm poor at English and also at mathematics, so if you advise me when I ask for help, I'll be really grateful.