url | text | metadata
---|---|---
https://www.physicsforums.com/threads/correspond-momentum-of-a-rigid-body-to-rotational-forces.553119/ | # Correspond Momentum of a rigid body to rotational forces
1. Nov 23, 2011
### _chris_198
1. The problem statement, all variables and given/known data:
I have the moment (torque) M acting about the center of mass of a rigid body consisting of 3 beads.
I would like to find the corresponding rotational forces (due to this moment) on the 3 beads.
2.
I derive the general form:
Frot_i = Inertia_i*(RixM) / (Total_Inertia*Ri^2)
where:
Inertia_i=mass_i*(RixM)^2
Total_Inertia=SUM(Inertia_i)
3.
First calculate the vectors
Rix = xi-xc_of_mass
Riy = yi-yc_of_mass
Riz = zi-zc_of_mass
Then calculate the Ri^2:
=Rix^2 + Riy^2 + Riz^2
Then calculate the vectors Vi=(RixM):
Vix=Riy*Mz-Riz*My
Viy=Riz*Mx-Rix*Mz
Viz=Rix*My-Riy*Mx
Then calculate the scalar Si=(RixM)^2
Si=Vix^2 +Viy^2+Viz^2
Then I substitute into the above equation and calculate the rotational force on each of the 3 beads i, in x, y and z components:
Frot_ix
Frot_iy
Frot_iz
When I sum up the x component of the force on the three beads I come out with a non-zero number. Shouldn't it be zero? Same is happening with y and z also.
Frot_1x+Frot_2x+Frot_3x !=0
Is the equation for calculating the rotational force correct?
I would be grateful for any reply.
cheers,
Chris
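One way to sanity-check this numerically: if each bead's force is taken as the tangential force F_i = m_i*(alpha x R_i), with the angular acceleration alpha = I^(-1)*M computed from the inertia tensor I about the center of mass, then the three forces sum to zero identically, because SUM(m_i*R_i) = 0 about the center of mass. A minimal NumPy sketch of that check (the masses, positions and torque below are illustrative, not from the thread):

```python
import numpy as np

# Illustrative data: three bead masses, positions, and a torque M about the COM
m = np.array([1.0, 2.0, 3.0])
r = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.5, -0.5, 1.0]])
r = r - (m[:, None] * r).sum(axis=0) / m.sum()   # make positions COM-relative
M = np.array([0.0, 0.0, 2.0])                    # applied torque (moment)

# Inertia tensor about the COM: I = sum_i m_i (|R_i|^2 E - R_i R_i^T)
I = sum(mi * (ri @ ri * np.eye(3) - np.outer(ri, ri)) for mi, ri in zip(m, r))
alpha = np.linalg.solve(I, M)                    # angular acceleration

F = m[:, None] * np.cross(alpha, r)              # tangential force on each bead
print(F.sum(axis=0))                             # ~[0, 0, 0]: the sum vanishes
```

With this weighting the x, y and z sums cancel exactly (the cross product is linear, and SUM(m_i*R_i) = 0 about the center of mass), so a persistently non-zero sum points at the weighting in the derived formula rather than at round-off.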
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9326179623603821, "perplexity": 1153.2276600538992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805687.20/warc/CC-MAIN-20171119153219-20171119173219-00387.warc.gz"} |
https://math.stackexchange.com/questions/1260038/when-does-one-use-succeeds-and-when-does-one-use-greater-than | # When does one use 'succeeds' and when does one use 'greater than'?
I am reading a text on convex optimisation, and there is a line:
$f_i(\tilde{x})\leq0$ and $h_i(\tilde{x})=0$, and $\lambda \succeq 0$
and I was just wondering why for one term, $\leq$ is used and for the other, $\succeq$ is used.
I have a computer science background and for some reason we never were taught much formal mathematical notation.
$\succeq$ is used typically in the context of matrices and vectors.
• If used in the context of vectors, it typically means that all elements of the vectors are non-negative, i.e., $\vec{\lambda} \succeq 0$, if $\lambda_i \geq 0$ for all $i$.
• If used in the context of matrices, it typically means that the matrix is non-negative definite, i.e., $A \succeq 0$, if $x^TAx \geq 0$ for all $x \in \mathbb{R}^{n}$, where $A \in \mathbb{R}^{n \times n}$. However, on extremely rare occasions, this symbol could also mean that all entries in a matrix are non-negative, i.e., $A \succeq 0$, if $A(i,j) \geq 0$ for all $i,j$.
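Both readings are easy to check numerically; a minimal NumPy sketch (the vector and matrix are illustrative):

```python
import numpy as np

lam = np.array([0.0, 1.5, 2.0])
print(np.all(lam >= 0))               # vector sense: lambda >= 0 elementwise

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
psd = np.all(np.linalg.eigvalsh(A) >= -1e-12)   # matrix sense: A >= 0 iff all
print(psd)                                      # eigenvalues are nonnegative
```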
In a different context, 'succeeds' might be used when there is "discrete" ordering, e.g. for the natural numbers, as opposed to "continuous" ordering, e.g. the reals. For example, $5$ succeeds $4$ (as there are no intermediate natural numbers between $4$ and $5$); it is also said that $5$ is the immediate successor of $4$. On the other hand, there is no such thing as an immediate successor for real numbers: one could say that $4.6>4.2$, but we can always insert more numbers in between, $4.6>4.56>4.48374883748>4.2$. So here one would only say that $4.6$ is greater than $4.2$.
• @guskenny83 Thank you. In yet another context one may use the same notation $\prec$ with a different word and different meaning, that is for (usually open) covers $\mathcal U$ and $\mathcal V$ of a topological space, one says that $\mathcal U$ refines $\mathcal V$, written $\mathcal U\prec \mathcal V$, if for every $U\in\mathcal U$ there is some $V\in\mathcal V$ such that $U\subseteq V$. More generally $\prec$ is sometimes used for pre-orders, where $x\prec y$ and $y\prec x$ together do not necessarily imply that $x=y$. – Mirko May 1 '15 at 2:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9556264281272888, "perplexity": 121.70951147520923}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739182.35/warc/CC-MAIN-20200814070558-20200814100558-00248.warc.gz"} |
http://wbsjosaka.com/expedition/onosato08.htm | [Monthly bird-census page for the Onosato site, apparently from the Wild Bird Society of Japan, Osaka branch. The page carried two count tables, one for the survey dates 7/26, 8/23, 9/27, 10/25, 11/22 and 12/27, and one for 1/26, 2/23, 3/22, 4/26, 5/24 and 6/28, plus field notes for each survey date. The species names and all Japanese prose were destroyed by a character-encoding failure during extraction, so only the dates and raw counts survive; the garbled tables and notes are not reproduced here.] | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.898631751537323, "perplexity": 1657.3719301414526}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524043.56/warc/CC-MAIN-20200404134723-20200404164723-00482.warc.gz"} |
https://www.jobilize.com/course/section/parametric-surfaces-surface-integrals-by-openstax?qcr=www.quizover.com | # 6.6 Surface integrals
Page 1 / 27
• Find the parametric representations of a cylinder, a cone, and a sphere.
• Describe the surface integral of a scalar-valued function over a parametric surface.
• Use a surface integral to calculate the area of a given surface.
• Explain the meaning of an oriented surface, giving an example.
• Describe the surface integral of a vector field.
• Use surface integrals to solve applied problems.
We have seen that a line integral is an integral over a path in a plane or in space. However, if we wish to integrate over a surface (a two-dimensional object) rather than a path (a one-dimensional object) in space, then we need a new kind of integral that can handle integration over objects in higher dimensions. We can extend the concept of a line integral to a surface integral to allow us to perform this integration.
Surface integrals are important for the same reasons that line integrals are important. They have many applications to physics and engineering, and they allow us to develop higher dimensional versions of the Fundamental Theorem of Calculus. In particular, surface integrals allow us to generalize Green’s theorem to higher dimensions, and they appear in some important theorems we discuss in later sections.
## Parametric surfaces
A surface integral is similar to a line integral, except the integration is done over a surface rather than a path. In this sense, surface integrals expand on our study of line integrals. Just as with line integrals, there are two kinds of surface integrals: a surface integral of a scalar-valued function and a surface integral of a vector field.
However, before we can integrate over a surface, we need to consider the surface itself. Recall that to calculate a scalar or vector line integral over curve C , we first need to parameterize C . In a similar way, to calculate a surface integral over surface S , we need to parameterize S . That is, we need a working concept of a parameterized surface (or a parametric surface ), in the same way that we already have a concept of a parameterized curve.
A parameterized surface is given by a description of the form
$\text{r}(u,v)=\langle x(u,v),\,y(u,v),\,z(u,v)\rangle .$
Notice that this parameterization involves two parameters, u and v , because a surface is two-dimensional, and therefore two variables are needed to trace out the surface. The parameters u and v vary over a region called the parameter domain, or parameter space —the set of points in the uv -plane that can be substituted into r . Each choice of u and v in the parameter domain gives a point on the surface, just as each choice of a parameter t gives a point on a parameterized curve. The entire surface is created by making all possible choices of u and v over the parameter domain.
## Definition
Given a parameterization of surface $\text{r}(u,v)=\langle x(u,v),\,y(u,v),\,z(u,v)\rangle ,$ the parameter domain of the parameterization is the set of points in the uv -plane that can be substituted into r .
## Parameterizing a cylinder
Describe surface S parameterized by
$\text{r}(u,v)=\langle \cos u,\,\sin u,\,v\rangle , \quad -\infty < u < \infty ,\ -\infty < v < \infty .$
To get an idea of the shape of the surface, we first plot some points. Since the parameter domain is all of $\mathbb{R}^{2},$ we can choose any value for u and v and plot the corresponding point. If $u=v=0,$ then $\text{r}(0,0)=\langle 1,0,0\rangle ,$ so point (1, 0, 0) is on S . Similarly, points $\text{r}(\pi ,2)=(-1,0,2)$ and $\text{r}(\frac{\pi }{2},4)=(0,1,4)$ are on S .
Although plotting points may give us an idea of the shape of the surface, we usually need quite a few points to see the shape. Since it is time-consuming to plot dozens or hundreds of points, we use another strategy. To visualize S , we visualize two families of curves that lie on S. In the first family of curves we hold u constant; in the second family of curves we hold v constant. This allows us to build a “skeleton” of the surface, thereby getting an idea of its shape.
First, suppose that u is a constant K . Then the curve traced out by the parameterization is $\langle \cos K,\,\sin K,\,v\rangle ,$ which gives a vertical line that goes through point $(\cos K, \sin K, 0)$ in the xy -plane.
Now suppose that v is a constant K. Then the curve traced out by the parameterization is $\langle \cos u,\,\sin u,\,K\rangle ,$ which gives a circle in plane $z=K$ with radius 1 and center (0, 0, K ).
If u is held constant, then we get vertical lines; if v is held constant, then we get circles of radius 1 centered around the vertical line that goes through the origin. Therefore the surface traced out by the parameterization is cylinder ${x}^{2}+{y}^{2}=1$ ( [link] ).
Notice that if $x=\cos u$ and $y=\sin u,$ then ${x}^{2}+{y}^{2}=1,$ so points from S do indeed lie on the cylinder. Conversely, each point on the cylinder is contained in some circle $\langle \cos u,\,\sin u,\,k\rangle$ for some k , and therefore each point on the cylinder is contained in the parameterized surface ( [link] ).
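The points and the two families of curves are easy to generate numerically; here is a minimal NumPy sketch of the parameterization (the grid sizes and sample heights are arbitrary choices):

```python
import numpy as np

def r(u, v):
    """Parameterization of the cylinder x^2 + y^2 = 1."""
    return np.array([np.cos(u), np.sin(u), v])

print(r(0, 0))        # [1, 0, 0], the first point checked above
print(r(np.pi, 2))    # [-1, 0, 2] up to round-off

# Hold v = K: each curve is a unit circle in the plane z = K
u = np.linspace(0, 2 * np.pi, 100)
for K in (0.0, 1.0, 4.0):
    circle = np.array([r(ui, K) for ui in u])
    assert np.allclose(circle[:, 0]**2 + circle[:, 1]**2, 1.0)
```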
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8970869779586792, "perplexity": 953.0127175960356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999817.30/warc/CC-MAIN-20190625092324-20190625114324-00388.warc.gz"} |
http://mathhelpforum.com/pre-calculus/83387-word-problems-help.html | 1. ## Word Problems help
1. The displacement of a spring vibrating in damped harmonic motion is given by the following equation. Find the times when the spring is at its equilibrium position, y = 0.
2. As the moon revolves around the earth, the side that faces the earth is usually just partially illuminated by the sun. The phases of the moon describe how much of the surface appears to be in sunlight. An astronomical measure of phase is given by the fraction F of the lunar disc that is lit. When the angle between the sun, earth, and moon is θ (0 ≤ θ < 360°), then the fraction F is given by the formula below. Determine the angles θ that correspond to the following phases.
3. Consider the equation.
(a) Use an addition or subtraction formula to simplify the equation.
(b) Find all solutions in the interval [0, 2π).
2. did you check the last post you made with the same questions?
http://www.mathhelpforum.com/math-he...solutions.html | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9155239462852478, "perplexity": 374.98154603786367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280730.27/warc/CC-MAIN-20170116095120-00324-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://stacks.math.columbia.edu/tag/04GZ | Lemma 7.30.1. Let $\mathcal{C}$ be a site. Let $\mathcal{F}$ be a sheaf on $\mathcal{C}$. Then the category $\mathop{\mathit{Sh}}\nolimits (\mathcal{C})/\mathcal{F}$ is a topos. There is a canonical morphism of topoi
$j_\mathcal {F} : \mathop{\mathit{Sh}}\nolimits (\mathcal{C})/\mathcal{F} \longrightarrow \mathop{\mathit{Sh}}\nolimits (\mathcal{C})$
which is a localization as in Section 7.25 such that
1. the functor $j_\mathcal {F}^{-1}$ is the functor $\mathcal{H} \mapsto \mathcal{H} \times \mathcal{F}/\mathcal{F}$, and
2. the functor $j_{\mathcal{F}!}$ is the forgetful functor $\mathcal{G}/\mathcal{F} \mapsto \mathcal{G}$.
Proof. Apply Lemma 7.29.5. This means we may assume $\mathcal{C}$ is a site with subcanonical topology, and $\mathcal{F} = h_ U = h_ U^\#$ for some $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$. Hence the material of Section 7.25 applies. In particular, there is an equivalence $\mathop{\mathit{Sh}}\nolimits (\mathcal{C}/U) = \mathop{\mathit{Sh}}\nolimits (\mathcal{C})/h_ U^\#$ such that the composition
$\mathop{\mathit{Sh}}\nolimits (\mathcal{C}/U) \to \mathop{\mathit{Sh}}\nolimits (\mathcal{C})/h_ U^\# \to \mathop{\mathit{Sh}}\nolimits (\mathcal{C})$
is equal to $j_{U!}$, see Lemma 7.25.4. Denote $a : \mathop{\mathit{Sh}}\nolimits (\mathcal{C})/h_ U^\# \to \mathop{\mathit{Sh}}\nolimits (\mathcal{C}/U)$ the inverse functor, so $j_{\mathcal{F}!} = j_{U!} \circ a$, $j_\mathcal {F}^{-1} = a^{-1} \circ j_ U^{-1}$, and $j_{\mathcal{F}, *} = j_{U, *} \circ a$. The description of $j_{\mathcal{F}!}$ follows from the above. The description of $j_\mathcal {F}^{-1}$ follows from Lemma 7.25.7. $\square$
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9994243383407593, "perplexity": 191.31850267179033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578727587.83/warc/CC-MAIN-20190425154024-20190425175111-00046.warc.gz"} |
http://www.physicspages.com/2015/07/01/ideal-gas-relation-of-average-speed-of-molecules-to-temperature/ | # Ideal gas: relation of average speed of molecules to temperature
Reference: Daniel V. Schroeder, An Introduction to Thermal Physics, (Addison-Wesley, 2000) – Problems 1.18 – 1.20.
By using the ideal gas law and an analysis on the molecular scale, we can derive a relation between the speed of the molecules in an ideal gas and the temperature. Schroeder does this in detail in his section 1.2 so I won’t go through the whole derivation again; I’ll just summarize the main ideas.
The model places a single molecule inside a cylinder with a movable piston at one end. The molecule is moving at some speed ${v}$ and whenever it hits a wall or the piston it bounces off elastically. If the axis of the cylinder is the ${x}$ axis, then whenever the molecule hits the piston, the ${x}$ component of its velocity, ${v_{x}}$, is reversed, which means that its ${x}$ momentum changes by ${\Delta p_{x}=-2mv_{x}}$. The rate of change of momentum is the force exerted on the piston, and the force per unit area is the pressure, so we can relate ${v_{x}}$ to the pressure exerted on the piston. Since we’re considering only one molecule, the pressure is felt only at the moments when the molecule collides with the piston, so what we’re really interested in is the time average of the pressure. It turns out that this is
$\displaystyle \bar{P}=\frac{mv_{x}^{2}}{V} \ \ \ \ \ (1)$
where ${m}$ is the mass of the molecule and ${V}$ is the volume of the cylinder.
We can now extend the argument by putting a large number ${N}$ of molecules in the cylinder. In this case, the ${v_{x}^{2}}$ factor is replaced by the average of ${v_{x}^{2}}$ over all the molecules, so we get
$\displaystyle \bar{P}V=Nm\overline{v_{x}^{2}} \ \ \ \ \ (2)$
Comparing with the ideal gas law
$\displaystyle PV=NkT \ \ \ \ \ (3)$
we see that
$\displaystyle m\overline{v_{x}^{2}}=kT \ \ \ \ \ (4)$
or, since ${\frac{1}{2}m\overline{v_{x}^{2}}}$ is the kinetic energy from the ${x}$ motion of the molecules,
$\displaystyle \frac{1}{2}m\overline{v_{x}^{2}}=\frac{1}{2}kT \ \ \ \ \ (5)$
However, since the molecules are moving at random, there’s nothing special about the ${x}$ direction, so we’d expect the same contribution to the kinetic energy from the ${y}$ and ${z}$ directions, giving the relation
$\displaystyle \bar{K}_{trans}=\frac{1}{2}m\overline{v^{2}}=\frac{3}{2}kT \ \ \ \ \ (6)$
where ${\bar{K}_{trans}}$ is the average kinetic energy due to the translational motion of the molecules (if the molecules contain two or more atoms, then we can also have rotational and vibrational kinetic energy, so the total kinetic energy is greater than ${\frac{3}{2}kT}$).
A reasonable estimate of the average speed of molecules is the root mean square speed, defined as
$\displaystyle v_{rms}\equiv\sqrt{\overline{v^{2}}}=\sqrt{\frac{3kT}{m}} \ \ \ \ \ (7)$
Example 1 The molecules in a gas at room temperature are actually moving pretty fast. For example, for a nitrogen molecule (nitrogen makes up around 4/5 of the air) at room temperature (293 K), we have
$\displaystyle m = 28.0134\mbox{ amu}=4.65\times10^{-26}\mbox{ kg} \ \ \ \ \ (8)$
$\displaystyle k = 1.38\times10^{-23}\mbox{ m}^{2}\mbox{kg s}^{-2}\mbox{K}^{-1} \ \ \ \ \ (9)$
$\displaystyle v_{rms} = 510.7\mbox{ m s}^{-1} \ \ \ \ \ (10)$
Example 2 Consider a gas containing hydrogen and oxygen molecules in thermal equilibrium. The ratio of their rms speeds is
$\displaystyle \frac{v_{H}}{v_{O}}=\sqrt{\frac{m_{O}}{m_{H}}}=\sqrt{\frac{31.9988}{2.016}}=3.984 \ \ \ \ \ (11)$
where the masses are in amu. The hydrogen molecules are moving about 4 times faster than the oxygen molecules.
Example 3 To separate the two naturally occurring isotopes of uranium ${^{235}\mbox{U}}$ and ${^{238}\mbox{U}}$, the uranium is combined with fluorine to make uranium hexafluoride gas ${\mbox{UF}_{6}}$. The two isotopes will result in different rms speeds for the two types of molecules. We have
$\displaystyle m_{235} = 235.04+6\times18.998\mbox{ amu}=5.794\times10^{-25}\mbox{ kg} \ \ \ \ \ (12)$
$\displaystyle m_{238} = 238.02891+6\times18.998\mbox{ amu}=5.843\times10^{-25}\mbox{ kg} \ \ \ \ \ (13)$
$\displaystyle v_{235} = \sqrt{\frac{3k\times293}{m_{235}}}=144.692\mbox{ m s}^{-1} \ \ \ \ \ (14)$
$\displaystyle v_{238} = \sqrt{\frac{3k\times293}{m_{238}}}=144.084\mbox{ m s}^{-1} \ \ \ \ \ (15)$
Thus the speed difference is quite small.
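A few lines of Python reproduce all three examples from the constants quoted above (a quick numerical check):

```python
import numpy as np

k = 1.38e-23        # Boltzmann constant, J/K
T = 293.0           # room temperature, K
amu = 1.66054e-27   # kg per atomic mass unit

def v_rms(m_amu):
    """Root mean square speed, sqrt(3kT/m), for a mass given in amu."""
    return np.sqrt(3 * k * T / (m_amu * amu))

print(v_rms(28.0134))                    # ~510.7 m/s (nitrogen, Example 1)
print(v_rms(2.016) / v_rms(31.9988))     # ~3.984 (hydrogen vs oxygen, Example 2)
print(v_rms(235.04 + 6 * 18.998),        # ~144.7 m/s (235-UF6, Example 3)
      v_rms(238.02891 + 6 * 18.998))     # ~144.1 m/s (238-UF6)
```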
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 51, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9430686235427856, "perplexity": 167.12871658357884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125532.90/warc/CC-MAIN-20170423031205-00328-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://www.quantwolf.com/doc/simplestrategies/appa.html | # Appendix AReview of Discrete Probability
This appendix provides a review of the essentials of discrete probability. It is concerned with observations, experiments or actions that have a finite number of unpredictable outcomes. The set of all possible outcomes is called the sample space (standard terminology) and is denoted by the symbol $$\Omega$$. An element of $$\Omega$$ (an individual outcome) will be denoted by $$\omega$$. A coin toss, for example, has two possible outcomes: heads (H) or tails (T). The sample space is $$\Omega=\{H,T\}$$ and $$\omega=H$$ is one of the possible outcomes. Another example is the roll of a die, which has 6 outcomes, so that $$\Omega=\{1,2,3,4,5,6\}$$. A subset of the sample space is called an event and is denoted by a capital letter such as $$A$$ or $$B$$. In the dice example, let $$A$$ be the event that an even number is rolled, then $$A=\{2,4,6\}$$.
Each outcome, $$\omega$$, will have a probability assigned to it, denoted $$P(\omega)$$. The probability is a real number ranging from $$0$$ to $$1$$ that signifies the likelihood that an outcome will occur. If $$P(\omega)=0$$ then $$\omega$$ will never occur and if $$P(\omega)=1$$ then $$\omega$$ will always occur. An intermediate value such as $$P(\omega)=1/2$$ means that $$\omega$$ will occur roughly half the time if the experiment is repeated many times. In general, if you perform the experiment a large number of times, $$N$$, and the number of times that $$\omega$$ occurs is $$n(\omega)$$, then the ratio $$n(\omega)/N$$ should approximately equal the probability of $$\omega$$. It is possible to define $$P(\omega)$$ as the limit of this ratio.
$$\tag{A.1} P(\omega) = \lim_{N \to \infty} \frac{n(\omega)}{N}$$
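The limiting ratio in equation A.1 can be illustrated by simulation; a minimal Python sketch (not drawn from the appendix itself) estimating the probability of rolling a 3 with a fair die:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
rolls = rng.integers(1, 7, size=N)   # N fair die rolls
print((rolls == 3).mean())           # n(omega)/N, close to 1/6 ~ 0.1667
```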
The function $$P(\omega)$$, which assigns probabilities to outcomes, is called a probability distribution. We will now look at some of its defining properties. To begin with, if the probabilities are defined as in equation A.1, then clearly the sum of all the probabilities must equal 1.
$$\tag{A.2} \sum_{\omega \in \Omega} P(\omega) = 1$$
It is often necessary to determine the probability that one of a subset of all the possible outcomes will occur. If $$A$$ is a subset of $$\Omega$$ then $$P(A)$$ is the probability that one of the outcomes contained in $$A$$ will occur. Using the definition in equation A.1 it should be obvious that:
$$\tag{A.3} P(A)=\sum_{\omega \in A} P(\omega)$$
Many other properties can be derived from the algebra of sets. Let $$A + B$$ be the set of all elements in either $$A$$ or $$B$$ (no duplicates) and let $$AB$$ be the set of all elements in both $$A$$ and $$B$$, then:
$$\tag{A.4} P(A + B) = P(A) + P(B) - P(AB)$$
If $$A$$ and $$B$$ have no elements in common then they are exclusive events, i.e. they can not both occur simultaneously. In this case equation A.4 reduces to $$P(A + B) = P(A) + P(B)$$. In general, the probability that any one of a number of exclusive events will occur is just equal to the sum of their individual probabilities.
Conditional probabilities and the closely related concept of independence are very important and useful in probability calculations. Let $$P(A|B)$$ be the probability that $$A$$ has occurred given that we know $$B$$ has occurred. In short, we will refer to this as the probability of $$A$$ given $$B$$ or the probability of $$A$$ conditioned on $$B$$. What $$P(A|B)$$ really represents is the probability of $$A$$ using $$B$$ as the sample space instead of $$\Omega$$. If $$A$$ and $$B$$ have no elements in common then $$P(A|B)=0$$. If they have all elements in common so that $$A=B$$ then obviously $$P(A|B)=1$$. In general we have
$$\tag{A.5} P(A|B) = \frac{P(AB)}{P(B)}$$
Using a single fair dice roll as an example, let $$A=\{1,3\}$$ and $$B=\{3,5\}$$ then $$AB=\{3\}$$, $$P(AB)=1/6$$, $$P(B)=1/3$$ and
$$\tag{A.6} P(A|B) = \frac{1/6}{1/3} = \frac{1}{2}$$
Knowledge that $$B$$ has occurred has increased the probability of $$A$$ from $$P(A)=1/3$$ to $$P(A|B)=1/2$$. The result can also be deduced by simple logic. We know that $$B$$ has occurred therefore the roll was either a 3 or a 5. Half of the $$B$$ events are caused by a 3 and half by a 5 but only the 3 also counts as an $$A$$ event also, therefore $$P(A|B)=1/2$$.
Conditional probabilities are not necessarily symmetric. $$P(B|A)$$ need not be equal to $$P(A|B)$$. Using the definition in equation A.5, you can show that
$$\tag{A.7} P(A|B) P(B) = P(B|A) P(A)$$
so the two conditional probabilities are only equal if $$P(A)=P(B)$$. Another useful thing to keep in mind is that conditional probabilities obey the same properties as non-conditional probabilities. This means for example that if $$A$$ and $$B$$ are exclusive events then $$P(A+B|C) = P(A|C) + P(B|C)$$.
The concept of independence is naturally related to conditional probability. Two events are independent if the occurrence of one has no effect on the probability of the other. In terms of conditional probabilities this means that $$P(A|B)=P(A)$$. Independence is always symmetric, if $$A$$ is independent of $$B$$ then $$B$$ is independent of $$A$$. Using the definition in equation A.5 you can see that independence also implies that
$$\tag{A.8} P(AB) = P(A) P(B)$$
This is often taken as the defining relation for independence.
Another important concept in probability is the law of total probability. Let the sample space $$\Omega$$ be partitioned by the sets $$B_1$$ and $$B_2$$ so that every element in $$\Omega$$ is in one and only one of the two sets and we can write $$\Omega = B_1 + B_2$$. This means that the occurrence of $$A$$ coincides with the occurrence of $$B_1$$ or $$B_2$$ but not both and we can write
$$\tag{A.9} A = A B_1 + A B_2 = A (B_1 + B_2) = A \Omega$$
The probability of $$A$$ is then
$$\tag{A.10} P(A) = P(A B_1) + P(A B_2)$$
This can be extended to any number of sets that partition $$\Omega$$.
To carry out any kind of probabilistic analysis we need random variables. A random variable is a bit like the probability distributions discussed above in that it assigns a number to each of the elements in the sample space. It is therefore really more like a function that maps elements in the sample space to real numbers. A random variable is usually denoted with an upper case letter such as $$X$$ and the values it can assume are given subscripted lower case letters such as $$x_i$$ for $$i=1,2,\ldots,n$$ where $$n$$ is the number of possible values. The mapping from an element $$\omega$$ to a value $$x_i$$ is denoted as $$X(\omega)=x_i$$. Note that it is not necessary that every element be assigned a unique value and the particular value assigned will depend on what you want to analyze.
A simple example is a coin toss betting game. You guess what the result of the toss will be. If your guess is correct you win $1; otherwise you lose $1. The sample space consists of only two elements, a correct guess and an incorrect guess, $$\Omega=\{\mathrm{correct},\mathrm{incorrect}\}$$. If you are interested in analyzing the amounts won and lost by playing several such games then the obvious choice for the random variable is $$X(\mathrm{correct})=1$$, $$X(\mathrm{incorrect})=-1$$. If you are just interested in the number of games won or lost then the random variable $$Y(\mathrm{correct})=1$$, $$Y(\mathrm{incorrect})=0$$ would be better. Often an analysis in terms of one variable can be converted into another variable by finding a relation between them. In the above example $$X = 2Y - 1$$ could be used to convert between the variables.
As another example consider tossing a coin three times. The sample space consists of 8 elements $$\Omega=\{TTT,TTH,THT,THH,HTT,HTH,HHT,HHH\}$$ where $$T$$ indicates the toss was a tail and $$H$$ a head. This time we let $$X$$ be the random variable that counts the number of heads in the three tosses. It can have values 0, 1, 2, or 3 and not every element in the sample space has a unique value. The values are $$X(TTT)=0$$, $$X(TTH)=X(THT)=X(HTT)=1$$, $$X(THH)=X(HTH)=X(HHT)=2$$, $$X(HHH)=3$$.
Probability distributions are most often expressed in terms of the values that a random variable can take. The usual notation is
$$\tag{A.11} P(X=x_i) = p(x_i)$$
The function $$p(x_i)$$ is the probability distribution for the random variable $$X$$. It is often also called the probability mass function. Note that it is not necessarily the same as the probability distribution for the individual elements of the sample space since multiple elements may be mapped to the same value by the random variable. In the three coin toss example, each element in the sample space has a probability of $$1/8$$, assuming a fair coin. The probability distribution for $$X$$ however is $$p(0)=1/8$$, $$p(1)=3/8$$, $$p(2)=3/8$$, $$p(3)=1/8$$. It will always be true that the sum over all the probabilities must equal 1.
$$\tag{A.12} \sum_i p(x_i) = 1$$
The two most important properties of a random variable are its expectation and variance. The expectation is simply the average value of the random variable. In the coin toss betting game, $$X$$ can have a value of +1 or -1 corresponding to winning or losing. In $$N$$ flips of the coin let $$k$$ be the number of wins and $$N-k$$ the number of losses. The total amount won is then
$$\tag{A.13} W = k - (N-k)$$
and the average amount won per flip is
$$\tag{A.14} \frac{W}{N} = \frac{k}{N} - (1-\frac{k}{N})$$
As the number of flips becomes very large the ratio $$k/N$$ will equal $$p(1)$$, the probability of winning, and the equation then becomes equal to expectation of the random variable.
$$\tag{A.15} E[X] = p(1) - p(-1)$$
Where $$p(-1)=1-p(1)$$ is the probability of losing and $$E[X]$$ is the usual notation for the expectation of $$X$$. In this case the expectation is the average amount that you can expect to win per flip if you play the game for a very long time.
In general if $$X$$ can take on $$n$$ values, $$x_i$$, $$i=1,2,\ldots,n$$ with corresponding probabilities $$p(x_i)$$ then the expectation is
$$\tag{A.16} E[X] = \sum_{i=1}^n p(x_i) x_i$$
The expectation gives you the average but in reality large deviations from the average may be possible. The variance of a random variable gives a sense for how large those deviations can be. It measures the average of the squares of the deviations. The equation for the variance is:
$$\tag{A.17} \mathrm{Var}[X] = \sum_{i=1}^n p(x_i) (x_i - E[X])^2$$
The equation simplifies somewhat to
$$\tag{A.18} \mathrm{Var}[X] = E[X^2] - E[X]^2$$
where
$$\tag{A.19} E[X^2] = \sum_{i=1}^n p(x_i) x_i^2$$
is the expectation for the square of the random variable. In general the expectation for any function $$g(X)$$ is:
$$\tag{A.20} E[g(X)] = \sum_{i=1}^n p(x_i) g(x_i)$$
Another useful measure of deviation from the average is called the standard deviation, $$\sigma$$. It is found by taking the square root of the variance.
$$\tag{A.21} \sigma = \sqrt{\mathrm{Var}[X]}$$
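For the three-coin-toss example above, equations A.16, A.18, A.19 and A.21 can be evaluated directly over the sample space; a short Python sketch:

```python
from itertools import product

omega = list(product("HT", repeat=3))        # the 8 outcomes TTT, TTH, ...
p = {w: 1 / 8 for w in omega}                # fair coin
X = {w: w.count("H") for w in omega}         # X counts the heads

EX = sum(p[w] * X[w] for w in omega)         # equation A.16
EX2 = sum(p[w] * X[w] ** 2 for w in omega)   # equation A.19
var = EX2 - EX ** 2                          # equation A.18
print(EX, var, var ** 0.5)                   # 1.5, 0.75, sigma ~ 0.866
```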
As we saw above, a sample space can have more than one random variable defined on it. If we have two variables $$X$$ and $$Y$$ then we can define the probability that $$X=x_i$$ at the same time that $$Y=y_j$$. This is called the joint probability distribution for $$X$$ and $$Y$$.
$$\tag{A.22} P(X=x_i,Y=y_j) = p(x_i,y_j)$$
The individual distributions, $$p(x_i)$$ and $$p(y_j)$$, are recovered by summing the joint distribution over one of the variables. To get $$p(x_i)$$ you sum $$p(x_i,y_j)$$ over all the possible values of $$Y$$.
$$\tag{A.23} p(x_i) = \sum_j p(x_i,y_j)$$
and likewise for $$p(y_j)$$
$$\tag{A.24} p(y_j) = \sum_i p(x_i,y_j)$$
From these last two equations it is obvious that if you sum over both variables of the distribution, the result should equal 1.
$$\tag{A.25} \sum_i \sum_j p(x_i,y_j) = 1$$
It is possible to construct a joint distribution for any number of random variables, not just 2. For example $$p(x_i,y_j,z_k)$$ would be a joint distribution for the variables $$X$$, $$Y$$, and $$Z$$.
With a joint distribution you can calculate the expectation and variance for functions of variables. The expectation for the sum $$X + Y$$ is:
\begin{eqnarray}\tag{A.26} E[X+Y] & = & \sum_i \sum_j p(x_i,y_j)(x_i + y_j)\\ & = & \sum_i x_i \sum_j p(x_i,y_j) + \sum_j y_j \sum_i p(x_i,y_j)\nonumber\\ & = & \sum_i x_i p(x_i) + \sum_j y_j p(y_j)\nonumber\\ & = & E[X] + E[Y]\nonumber \end{eqnarray}
The property that the expectation for a sum of variables is equal to the sum of their expectations is called linearity and it is true for the sum of any number of variables. For three variables for example $$E[X+Y+Z]=E[X]+E[Y]+E[Z]$$. Another easily verifiable consequence of linearity is that for any constants $$a$$ and $$b$$
$$\tag{A.27} E[aX+bY] = aE[X] + bE[Y]$$
In the example of the coin toss game we had two random variables that were related by $$X = 2Y -1$$. The linearity property of the expectation means that $$E[X] = 2E[Y] - 1$$, where we used the fact that the expectation of a constant is just the constant.
The expectation for the product $$XY$$ is
$$\tag{A.28} E[XY] = \sum_i \sum_j p(x_i,y_j) x_i y_j$$
If the variables $$X$$ and $$Y$$ are independent then the joint distribution can be factored into a product of the individual distributions, $$p(x_i,y_j) = p(x_i) p(y_j)$$. In this case you can show that the expectation of the product is the product of the expectations, $$E[XY] = E[X] E[Y]$$.
For the variance of a sum we have
$$\tag{A.29} \mathrm{Var}[X+Y] = E[(X - E[X] + Y - E[Y])^2]$$
after expanding and simplifying this becomes
$$\tag{A.30} \mathrm{Var}[X+Y] = \mathrm{Var}[X] + \mathrm{Var}[Y] + 2\mathrm{Cov}[X,Y]$$
where $$\mathrm{Cov}[X,Y]$$ is called the covariance of $$X$$ and $$Y$$. The covariance is defined as:
$$\tag{A.31} \mathrm{Cov}[X,Y] = E[XY] - E[X]E[Y]$$
For independent variables the covariance is zero. The variance of the sum is then just the sum of the variances.
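A quick simulation illustrates the last point, with sample averages standing in for the expectations in equation A.31 (a minimal Python sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(1, 7, size=1_000_000)       # two independent dice
Y = rng.integers(1, 7, size=1_000_000)

cov = np.mean(X * Y) - X.mean() * Y.mean()   # equation A.31, estimated
print(cov)                                    # ~0 for independent X and Y
```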
This completes the review of discrete probability. You do not need to understand everything in this review in order to understand the contents of this book. The more you do understand, the more likely you will be able to extend the concepts in this book to build even more powerful trading strategies. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 28, "x-ck12": 0, "texerror": 0, "math_score": 0.9723559021949768, "perplexity": 85.90405928005389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867364.94/warc/CC-MAIN-20180625014226-20180625034226-00467.warc.gz"} |
http://mathhelpforum.com/pre-calculus/42409-solve-equation-round-nearest-tenth-if-necessary.html | # Thread: Solve the equation. Round to the nearest tenth, if necessary.
1. ## Solve the equation. Round to the nearest tenth, if necessary.
If an object is projected upward with an initial velocity of 64 feet per second from a height of 336 feet, then its height t seconds after it is projected is defined by the expression h(t) = -16t2 + 64t + 336. How long after it is projected will it hit the ground?
THANK YOU!!!
2. Originally Posted by cechmanek32
If an object is projected upward with an initial velocity of 64 feet per second from a height of 336 feet, then its height t seconds after it is projected is defined by the expression h(t) = -16t2 + 64t + 336. How long after it is projected will it hit the ground?
THANK YOU!!!
Since h(t) is the height of the object we want h(t)=0 that is
$-16t^2+64t+336=0 \iff -16(t^2-4t-21)=0 \iff -16(t-7)(t+3)=0$
So t=7 or t=-3 since we are concerned with only future events we don't use t=-3. So the object hits the ground after 7 seconds.
Good luck.
3. Hello,
Originally Posted by cechmanek32
If an object is projected upward with an initial velocity of 64 feet per second from a height of 336 feet, then its height t seconds after it is projected is defined by the expression h(t) = -16t2 + 64t + 336. How long after it is projected will it hit the ground?
THANK YOU!!!
Find t such that $h(t)=0$.
$h(t)=-16t^2+64t+336=16(-t^2+4t+21)=-16(t-7)(t+3)$
$t=7$ or $t=-3$.
Because it makes no sense to have a negative duration, the solution is 7 seconds...
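The factorization is easy to verify numerically (a small Python check):

```python
import numpy as np

roots = np.roots([-16, 64, 336])   # h(t) = -16t^2 + 64t + 336 = 0
print(roots)                        # 7 and -3 (order may vary); keep t = 7
```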
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8480011224746704, "perplexity": 509.441149812993}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320057.96/warc/CC-MAIN-20170623114917-20170623134917-00545.warc.gz"} |
https://www.mathdoubts.com/quadratic-equations/ | An equation with one variable, which contains two as its highest power of variable, is called as quadratic equation. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8887853026390076, "perplexity": 323.61110636529327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202704.58/warc/CC-MAIN-20190323000443-20190323022443-00531.warc.gz"} |
http://nbviewer.jupyter.org/github/myuuuuun/MirandaFackler/blob/master/ddp_ex_MF_9_7_8_jl.ipynb | # Livestock Feeding¶
Akira Matsushita
Department of Economics, University of Tokyo
From Miranda and Fackler, Applied Computational Economics and Finance, 2002, Section 8.4.8 and 9.7.8
## 8.4.8 Model Formulation and Solution by hand¶
• A livestock producer feeds their stock for $T$ periods and sells it at the beginning of period $T+1$
• Each period $t$, the producer determine the amount of feed: $x_t \geq 0$. The fixed unit cost of grain is $\kappa \geq 0$
• $s_t$, the weight of livestock at time $t$, follows the deterministic process: $s_{t+1} = g(s_t, x_t), s_0 = \bar{s}$. Assume $s_t \geq 0$ for all $t$
• The selling price of the livestock at time $T+1$ is given by $s_{T+1} \times p$
Then the profit maximization problem for the producer is formulated as:
\begin{align*} \underset{ \{x_t \geq 0 \}_{t=1}^T}{\max} \delta^T p s_{T+1} - \sum_{t=1}^{T} \delta^{t-1} \kappa x_t \hspace{1em} \text{s.t.} \hspace{1em} \begin{cases} s_{t+1} = g(s_t, x_t) \\ s_0 = \bar{s} \end{cases} \end{align*}
where $\delta \geq 0$ is the discount factor
The corresponding Bellman equation is:
\begin{align*} \begin{cases} V_t(s_t) = \underset{x_t \geq 0}{\max} \{ -\kappa x_t + \delta V_{t+1}(g(s_t, x_t)) \} \hspace{1em} t=1, \ldots, T \\ V_{T+1}(s_{T+1}) = ps_{T+1} \end{cases} \end{align*}
### Euler Conditions¶
Let's derive the Euler Equilibrium Conditions from the Bellman equation
Assume $g: \mathbb{R}_+ \times \mathbb{R}_+ \rightarrow \mathbb{R}_+$ is differentiable on the entire domain (so $g$ is partially differentiable by both $s$ and $x$)
Define $g_x = \frac{\partial g}{\partial x}$ and $g_s = \frac{\partial g}{\partial s}$
Assume $g_x(s, 0)$ is large enough so that the nonnegative constraint of $x$ will not be binded at an optimum (ensures an interior point solution)
Let $X_t^*$ be the optimal solution correspondence, i.e.,
\begin{align*} X_t^*(s) = \left\{ x_t^* \in [0, \infty) \mid V_t(s) = -\kappa x_t^* + \delta V_{t+1}(g(s_t, x_t^*)) \right\} \end{align*}
and $x_t^*(s) \in X_t^*(s)$ be a selection of it. Assume $x_t^*(s)$ is differentiable for all $t$
Note that by using $x_t^*(s)$ the optimal value function at time 1 is derived as
\begin{align*} V_1(\bar{s}) &= -\kappa x_1^*(\bar{s}) + \delta V_2(g(\bar{s}, x_1^*(\bar{s}))) \\ &= -\kappa \left[ x_1^*(\bar{s}) + \delta x_2^*(g(\bar{s}, x_1^*(\bar{s}))) \right] + \delta^2 V_3(g(g(\bar{s}, x_1^*(\bar{s})), x_2^*(g(\bar{s}, x_1^*(\bar{s}))))) \\ &= \ldots \end{align*}
Here we show that $V_t$ is differentiable by $s_t$ for all $t=1, 2, \ldots, T+1$
• At $t = T+1$, $V_{T+1}(s_{T+1}) = ps_{T+1}$ so this is differentiable by $s_{T+1}$
• As an induction hypothesis, assume $V_{t+1}$ is differentiable by $s_t$ ($1 \leq t \leq T$). Since
\begin{align*} V_t(s_t) = -\kappa x_t^*(s_t) + \delta V_{t+1}(g(s_t, x_t^*(s_t))) \end{align*}
and $x_t^*(s_t)$ and $V_{t+1}(s_t)$ are differentiable by $s_t$ and $g(s_t, x)$ is differentiable by both $s_t$ and $x$, $V_t(s_t)$ is also differentiable. The derivative is
\begin{align*} \frac{\partial}{\partial s_t} V_t(s_t) = -\kappa \frac{\partial x_t^*}{\partial s_t}(s_t) + \delta V_{t+1}'(g(s_t, x_t^*(s_t))) \left\{ g_s(s_t, x_t^*(s_t)) + g_x(s_t, x_t^*(s_t)) \frac{\partial x_t^*}{\partial s_t}(s_t) \right\} \end{align*}
Hence by induction, $V_t$ is differentiable by $s_t$ for all $t$
Next consider the optimality condition of the maximization problem in the Bellman equation: $\underset{x_t \geq 0}{\max} \{ -\kappa x_t + \delta V_{t+1}(g(s_t, x_t)) \}$
Define $\lambda_t(s_t) \equiv V_{t}'(s_t) = \frac{\partial V_t}{\partial s_t}$
Using the chain rule, the FOCs are
\begin{align*} \delta \lambda_{t+1}(g(s_t, x_t^*))g_x(s_t, x_t^*) = \kappa \hspace{1em} t=1, \ldots, T \\ \end{align*}
Then the drivatives of the value functions by $s_t$: $\lambda_t(s_t) = \frac{\partial}{\partial s_t} V_t(s_t)$ become
\begin{align*} \lambda_t(s) &= \underbrace{ \bigl\{ -\kappa + \delta \lambda_{t+1}(g(s_t, x_t^*(s_t))) g_x(s_t, x_t^*(s_t)) \bigr\}}_{=0 \text{ by FOC}} \frac{\partial}{\partial s_t} x_t^*(s_t) + \delta \lambda_{t+1}(g(s_t, x_t^*(s_t))) g_s(s_t, x_t^*(s_t)) \\ &= \delta \lambda_{t+1}(g(s_t, x_t^*(s_t))) g_s(s_t, x_t^*(s_t)) \end{align*}
for $t=1, \ldots, T$ and
\begin{align*} \lambda_{T+1}(s_{T+1}) = p \end{align*}
In summary, the optimal path follows the following equations:
\begin{align*} \begin{cases} \delta \lambda_{t+1}(g(s_t, x_t))g_x(s_t, x_t) = \kappa \hspace{1em} t=1, \ldots, T \\ \lambda_t(s_t) = \delta \lambda_{t+1}(g(s_t, x_t)) g_s(s_t, x_t) \hspace{1em} t=1, \ldots, T \\ \lambda_{T+1}(s_{T+1}) = p \end{cases} \end{align*}
### Solve the equations by hand¶
How to solve the above equations and get the optimal polity $\{x_t\}_{t=1}^{T}$?
• At $t=T$, since $\lambda_{T+1}(s) = p$ regardless of the value of $s$, the 1st equation becomes $$g_x(s_T, x_T) = \frac{\kappa}{\delta p}$$
So we can get the optimal policy at $t=T$ by solving this equation for $x_T$. Then by the 2nd equation, we get $\lambda_{T}(s_T)$
• At $1 \leq t < T$, we can get $x_t$ and $\lambda_t(s_t)$ in the similar way
• Finally we set $s_1 = \bar{s}$ and then we get the concrete values of $\lambda_1(s_1^*), x_1^*, \lambda_2(s_2^*), x_2^*, \ldots$
### Concrete Example¶
Let
• $T = 6$
• $g(s, x) = \alpha s + \sqrt{x}$
• $p = 1$
Then
• $g_x(s, x) = \cfrac{0.5}{\sqrt{x}}$
• $g_s(s, x) = \alpha$
At $t=6$,
• $g_x(s_6, x_6) = \cfrac{0.5}{\sqrt{x_6}} = \cfrac{\kappa}{\delta} \hspace{2em} \therefore x_6(s_6) = \cfrac{\delta^2}{4 \kappa^2}$
• $\lambda_6(s_6) = \delta \alpha$
At $1 \leq t \leq 5$,
• $g_x(s_t, x_t) = \cfrac{0.5}{\sqrt{x_t}} = \cfrac{\kappa}{\delta^{7-t} \alpha^{6-t}} \hspace{2em} \therefore x_t(s_t) = \cfrac{(\delta^{7-t} \alpha^{6-t})^2}{4 \kappa^2}$
• $\lambda_t(s_t) = \delta^{7-t} \alpha^{7-t}$
So the optimal feeding policy is
\begin{align*} x_t^*(s_t) = \cfrac{(\delta^{7-t} \alpha^{6-t})^2}{4 \kappa^2} \hspace{1em} t=1, \ldots, 6 \end{align*}
Note that in this special case the optimal policy $x_1, \ldots, x_6$ does not depend on the initial weight $\bar{s}$
## 9.7.8 Solution by computation¶
In addition to the settings of the example above, assume
• $\alpha = 0.9$
• $\kappa = 0.4$
• $\delta = 0.9$
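As a quick sanity check before building the numerical approximation, the closed-form policy from Section 8.4.8 can be evaluated directly at these parameter values (a minimal sketch, shown in Python for brevity while the notebook itself uses Julia below):

```python
alpha, kappa, delta, T = 0.9, 0.4, 0.9, 6

# x_t* = (delta^(7-t) * alpha^(6-t))^2 / (4 kappa^2),  t = 1, ..., 6
for t in range(1, T + 1):
    x = (delta ** (7 - t) * alpha ** (6 - t)) ** 2 / (4 * kappa ** 2)
    print(t, round(x, 4))   # t = 6 gives delta^2 / (4 kappa^2) = 1.2656
```

The collocation solution developed next should roughly reproduce these policies, up to approximation error.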
In order to calculate the value function $V_t(s_t) = \underset{x_t \geq 0}{\max} \{ -\kappa x_t + \delta V_{t+1}(g(s_t, x_t)) \}$ given $s_t$ computationally, we approximate $V_t(s_t)$ as
\begin{align*} V_t(s_t) \simeq \tilde{V_t}(s_t) = \sum_{i=1}^n c_{it} \phi_i(s_t) \end{align*}
where $\{ \phi_i \}_{i=1}^n$ are the predetermined basis functions and $\{ c_{it} \}_{i=1}^n$ are their coefficients
Also we choose $n$ sample points $\xi_1 < \xi_2 < \ldots < \xi_n$ in the state space
The procedure of calculating optimal policy is as follows:
• At $t=T$
• Using optimizer, solve $V_T(\xi_j) = \underset{x_T \geq 0}{\max} \{ -\kappa x_T + \delta p g(\xi_j, x_T) \}$ for each $\xi_1, \ldots, \xi_n$ and get optimal $V_T(\xi_j)$ and $x_T(\xi_j)$
• Solve $n$ simultaneous linear equations
\begin{align*} \sum_{i=1}^n c_{iT} \phi_i(\xi_j) = V_T(\xi_j) \hspace{1em} j=1, \ldots, n \end{align*}
to determine the $n$ coefficients $\{ c_{iT} \}_{i=1}^n$ and get $\tilde{V_T}(s_T)$
• At $1 \leq t < T$, given $\tilde{V_{t+1}}(\xi_j)$,
• Using optimizer, solve $V_t(\xi_j) = \underset{x_t \geq 0}{\max} \{ -\kappa x_t + \delta \tilde{V_{t+1}}(g(\xi_j, x_t)) \}$ for each $\xi_1, \ldots, \xi_n$ and get optimal $V_t(\xi_j)$ and $x_t(\xi_j)$
• Solve $n$ simultaneous linear equations
\begin{align*} \sum_{i=1}^n c_{it} \phi_i(\xi_j) = V_t(\xi_j) \hspace{1em} j=1, \ldots, n \end{align*}
to determine the $n$ coefficients $\{ c_{it} \}_{i=1}^n$ and get $\tilde{V_t}(s_t)$
• Finally set $s_1 = \bar{s}$ and then we can get all optimal policies
In [1]:
using QuantEcon
using Optim
using BasisMatrices
using Plots
plotlyjs()
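The full implementation is not reproduced above, so here is a minimal self-contained sketch (ours) of the backward collocation recursion just described. It uses a plain monomial basis and a dense grid search in place of BasisMatrices.jl and Optim.jl; the collocation interval and the bounds on $x_t$ are arbitrary assumptions.

function solve_collocation(; T = 6, α = 0.9, κ = 0.4, δ = 0.9, p = 1.0, n = 5)
    g(s, x) = α * s + sqrt(x)
    ξ = range(0.0, 5.0, length = n)            # collocation nodes (assumed state range)
    Φ = [ξ[j]^(i - 1) for j in 1:n, i in 1:n]  # basis matrix: φ_i(s) = s^(i-1)
    xgrid = range(1e-8, 2.0, length = 2001)    # candidate feeding levels (assumed bounds)
    c = Φ \ collect(p .* ξ)                    # fit Ṽ_{T+1}(s) = p s
    policy = zeros(T, n)
    for t in T:-1:1
        Vtilde(s) = sum(c[i] * s^(i - 1) for i in 1:n)   # current approximation Ṽ_{t+1}
        V = zeros(n)
        for j in 1:n
            vals = [-κ * x + δ * Vtilde(g(ξ[j], x)) for x in xgrid]
            V[j], k = findmax(vals)            # Bellman maximization by grid search
            policy[t, j] = xgrid[k]
        end
        c = Φ \ V                              # refit coefficients for Ṽ_t
    end
    return policy
end

solve_collocation()   # each row is nearly constant and matches the closed form above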
https://stats.stackexchange.com/questions/315836/does-statistical-power-matter-if-we-are-not-interested-in-nhst

# Does statistical power matter if we are not interested in NHST?
If we don't care about the probability of finding a real effect if there is one, AKA a statistically significant finding, why care about statistical power and power calculations?
I mean there are obviously benefits to having larger samples such as decreasing our standard errors and increasing precision etc, but would power analyses matter if we don't care about the p-values? What if we were to just focus on effect sizes and other descriptive statistics?
• Power is the probability of rejecting the null hypothesis given that some particular alternative holds. If you're not testing, you can't be rejecting hypotheses, so what do you then mean by power at all? Can you clarify what you're asking (and you should indicate where your premise comes from -- why you think anyone does care. Maybe they do, but what leads you to think they do?) – Glen_b Nov 27 '17 at 9:00
• AKA a statistically significant finding - please elaborate. Just to understand: what is meant by p-value? – Subhash C. Davar Nov 27 '17 at 11:03
http://www.objectivebooks.com/2017/12/network-theorems-electrical-engineering.html

# Practice Test: Question Set - 02
1. Between the branch voltages of a loop, Kirchhoff's voltage law imposes
(A) Nonlinear constraints
(B) Linear constraints
(C) No constraints
(D) None of the above
2. An ideal voltage source has
(A) Zero internal resistance
(B) Open circuit voltage equal to the voltage on full load
(C) Terminal voltage in proportion to current
(D) Terminal voltage in proportion to load
3. A closed path made by several branches of the network is known as
(A) Branch
(B) Loop
(C) Circuit
(D) Junction
4. The superposition theorem requires as many circuits to be solved as there are
(A) Sources, nodes and meshes
(B) Sources and nodes
(C) Sources
(D) Nodes
5. Kirchhoff's current law states that
(A) Net current flow at the junction is positive
(B) Algebraic sum of the currents meeting at the junction is zero
(C) No current can leave the junction without some current entering it
(D) Total sum of currents meeting at the junction is zero
6. An ideal voltage source should have
(A) Large value of e.m.f.
(B) Small value of e.m.f.
(C) Zero source resistance
(D) Infinite source resistance
7. “In any linear bilateral network, if a source of e.m.f. E in any branch produces a current I in any other branch, then the same e.m.f. acting in the second branch would produce the same current I in the first branch”. The above statement is associated with
(A) Compensation theorem
(B) Superposition theorem
(C) Reciprocity theorem
(D) None of the above
8. The resistance LM will be
(A) 6.66 Ω
(B) 12 Ω
(C) 18 Ω
(D) 20 Ω
9. A nonlinear network does not satisfy
(A) Superposition condition
(B) Homogeneity condition
(C) Both homogeneity as well as superposition condition
(D) Homogeneity, superposition and associative condition
10. A terminal where three or more branches meet is known as
(A) Node
(B) Terminus
(C) Combination
(D) Anode
https://tex.stackexchange.com/questions/332962/rounded-box-around-heading

I was wondering if anyone would be willing to provide code to replicate the attached image in TeX. It is simply a blue text box with rounded edges and a dotted (rather than dashed) line between the heading (I don't mind which font this is in) and a paragraph of extra information. The width of the box should be the same as the width of each line of text. Currently I'm working in a document set on A4 paper with left and right margins both at -0.5 cm, and top and bottom margins of 2 cm.
• Welcome to TeX SX! You should take a look at the tcolorbox package documentation. – Bernard Oct 6 '16 at 20:32
\documentclass{book}
\usepackage{xcolor}
\usepackage{tcolorbox}
\begin{document}
\begin{tcolorbox}[colback=white, colframe=blue] % white interior, blue rounded frame
bla % upper part: the heading
\tcblower % draws the separator line between the two parts
blub % lower part: the extra information
\end{tcolorbox}
\end{document}
https://www.physicsforums.com/threads/eigenvalues-of-two-matrices-are-equal.711169/

# Eigenvalues of two matrices are equal
1. Sep 18, 2013
### gopi9
Hi everyone,
I have two matrices A and B,
A=[0 0 1 0; 0 0 0 1; a b a b; c d c d] and B=[0 0 0 0; 0 0 0 0; 0 0 a b; 0 0 c d].
I have to prove theoretically that two of the eigenvalues of A and B are equal and that the remaining two eigenvalues of A are 1, 1.
I tried it by calculating the determinant of A and B and I got close to the result but I am not able to prove it completely.
I got result like this,
sum of roots of determinant of A and B as
p+q+r+s=p1+q1 (p,q,r,s are roots of det of A, p1,q1 are roots of det of B)
Product of roots
p*q*r*s=p1*q1
pqr+qrs+prs+pqs=-2(p1*q1).
Thanks.
2. Sep 18, 2013
### AlephZero
Find the characteristic polynomials of A and B.
Then factorize A - the question tells you two of the factors.
3. Sep 18, 2013
### gopi9
The characteristic equations that I got for A and for B were attached as images (not reproduced here).
I can't factorize the polynomial equation for A, since it does not have a simple 1 or -1 as a root.
4. Sep 18, 2013
### AlephZero
The question says two roots of the A polynomial are equal to 1. So if (p-1)^2 isn't a factor, either you made a mistake somewhere, or the question is wrong.
I agree with you that p-1 is not a factor of the A polynomial, so I think the question in your OP is wrong. Are you missing some minus signs in the A matrix?
5. Sep 18, 2013
### gopi9
MATLAB gives -1 as an eigenvalue, but theoretically I can't prove it. There is no mistake in the theoretical proof; I checked it many times. The signs in the A matrix are also correct.
6. Sep 18, 2013
### gopi9
This is an example of A matrix that I have
0 0 1 0
0 0 0 1
-400000 200000 -400000 200000
66666.67 -133333.33 66666.67 -133333.33
I took a=-400000, b=200000, c=66666.67, d= -133333.33
7. Sep 19, 2013
### AlephZero
Take the simpler example of a = b = c = d = 0.
There is obviously something wrong with the question here.
8. Oct 5, 2013
### brmath
Switching around the rows you have A' = $\begin{pmatrix} a & b & a & b\\ c & d & c & d\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ \end{pmatrix}$ and B = $\begin{pmatrix} 0 & 0 & 0 & 0\\ 0 &0 & 0 & 0\\ 0 & 0 & a & b\\ 0 & 0 & c & d\\ \end{pmatrix}$
Clearly A' has two eigenvalues of 1 -- they are sitting right there on the diagonal. Since A' was obtained by switching rows an even number of times, the eigenvalues of A' are those of A. Clearly also B has two eigenvalues of 0. What are the other eigenvalues of B? If b = c = 0 then they are a and d. If b and c are 0, a and d will also be the eigenvalues of A. You will want to show this, but the computation should be easy with all those zeros in it.
However, b and c don't have to be 0. My guess would be that the 2 eigenvalues of $\begin{pmatrix} a & b\\ c & d\\ \end{pmatrix}$ will also be the other two eigenvalues of A.
Can you show that?
Last edited: Oct 5, 2013
9. Dec 20, 2013
### gopi9
In my case a,b,c,d are not zeros alephzero. Thanks for the reply
10. Dec 20, 2013
### gopi9
Thanks brmath. That helps
11. Jan 13, 2014
### gopi9
Can we obtain relation between eigenvectors of A and B matrices
12. Jan 13, 2014
### brmath
Will try to get back to you later today.
13. Jan 13, 2014
### Office_Shredder
Staff Emeritus
Take brmath's re-arranged matrix A' and consider vectors of the form (x,y,0,0).
14. Jan 13, 2014
### gopi9
Thanks for the reply. I did not understand what you meant by "consider vectors of the form (x,y,0,0)". I already tried using the A' matrix to solve it but could not go any further.
15. Jan 13, 2014
### gopi9
For the example that I took A has eigenvectors
[2.206e-6 6.008e-6 0.6912 -0.3835;
-4.749e-7 9.304e-6 -0.1487 -0.5940;
-0.9776 -0.5424 -0.6912 0.3835;
0.2104 -0.84007 0.1487 0.5940]
and B has
[0 0 1 0;
0 0 0 1;
-0.9776 -0.5424 0 0;
0.2104 -0.84007 0 0].
Eigenvectors of [a b; c d] is
[-0.9776 -0.5424;
0.21043 -0.84007]
16. Jan 13, 2014
### brmath
The eigenvectors present a different kind of problem, and there are a number of different possibilities.
I suggest you start by finding the eigenvectors of B which match the 0 eigenvalues: i.e. Bx = 0. You will get either one or two different x's. I suspect just one.
With the A there are numerous possibilities, which depend on the a, b, c, and d. For example if a = c = 1 and b = d = 0, A will probably have four independent eigenvectors all corresponding to the single eigenvalue 1. If you have a = b = d = 1 and c = 0, A will probably have 3 eigenvectors corresponding to the eigenvalue 1. You will have to work this out.
In either of these cases, B will also have eigenvectors corresponding to 1 - -either two independent ones, or just one.
Whether any of these eigenvectors match up between A and B is something you will have to compute. That is, find the eigenvectors of A which correspond to 1 under the two a,b,c,d scenarios I suggested, and find the eigenvectors of B for those same 1's.
Offhand I see no particular reason to believe they are the same or different -- you'll have to see.
Now the x's that match with the zero eigenvalues of B might or might not be eigenvalues of A. It could be that they are for some values of a,b,c,d and likely not for others. But you should check by multiplying the x's by A.
Once you've gotten through all that, you may have a clue as to whether anything matches up for other values of a,b,c,d.
17. Jan 13, 2014
### Office_Shredder
Staff Emeritus
Calculate A'(x,y,0,0)^T and you should notice that it looks a lot like a 2x2 matrix operating on a two-dimensional vector.
18. Jan 14, 2014
### brmath
It does, but I think the a,b,c,d create a lot of complications.
19. Jan 20, 2014
### wangchong
A and B are block matrices. So are xI-A and xI-B. Does that give you a way to find their characteristic polynomials? If I have to guess:
charpoly(B,x) = x^2((x-a)(x-d) -bc) and charpoly(A,x)=(x-1)(x-1)((x-a)(x-d) -bc)
so both charpoly(A) and charpoly(B) have common (quadratic) factors ((x-a)(x-d) -bc) .
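A quick numerical check of this guess (our addition, not part of the original thread), using the example values from post #6:

a = -400000; b = 200000; c = 66666.67; d = -133333.33;
A = [0 0 1 0; 0 0 0 1; a b a b; c d c d];
B = [0 0 0 0; 0 0 0 0; 0 0 a b; 0 0 c d];
sort(eig(A))   % spectrum of A
sort(eig(B))   % two zeros plus the eigenvalues of [a b; c d]

Comparing the two printouts shows directly whether the nonzero eigenvalues of B reappear in the spectrum of A, and whether 1 (or -1) actually shows up.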
https://gilkalai.wordpress.com/2020/03/22/kelman-kindler-lifshitz-minzer-and-safra-towards-the-entropy-influence-conjecture/

Kelman, Kindler, Lifshitz, Minzer, and Safra: Towards the Entropy-Influence Conjecture
Let me briefly report on a remarkable new paper by Esty Kelman, Guy Kindler, Noam Lifshitz, Dor Minzer, and Muli Safra, Revisiting Bourgain-Kalai and Fourier Entropies. The paper describes substantial progress towards the Entropy-Influence conjecture, posed by Ehud Friedgut and me in 1996. (See this blog post from 2007.)
Abstract: The total influence of a function is a central notion in analysis of Boolean functions, and characterizing functions that have small total influence is one of the most fundamental questions associated with it. The KKL theorem and the Friedgut junta theorem give a strong characterization of such functions whenever the bound on the total influence is $o(\log n)$; however, both results become useless when the total influence of the function is $\omega(\log n)$. The only case in which this logarithmic barrier has been broken for an interesting class of functions was proved by Bourgain and Kalai, who focused on functions that are symmetric under large enough subgroups of $S_n$.
In this paper, we revisit the key ideas of the Bourgain-Kalai paper. We prove a new general inequality that upper bounds the correlation between a Boolean function f and a real-valued, low degree function g in terms of their norms, Fourier coefficients and total influences.
Some corollaries of this inequality are:
1. The Bourgain-Kalai result.
2. A slightly weaker version of the Fourier-Entropy Conjecture. More precisely, we prove that the Fourier entropy of the low-degree part of $f$ is at most $O(I[f]\log^2 I[f])$, where $I[f]$ is the total influence of $f$. In particular, we conclude that the Fourier spectrum of a Boolean function is concentrated on at most $2^{O(I[f]\log^2 I[f])}$ Fourier coefficients.
3. Using well-known learning algorithms of sparse functions, the previous point implies that the class of functions with sub-logarithmic total influence (i.e. at most $O(\log n/(\log \log n)^2)$) is learnable in polynomial time, using membership queries.
Our theorem on influence under symmetry. From my videotaped lecture on Jean Bourgain. See this post about Jean Bourgain.
A few remarks:
A) Given a Boolean function $f:\{-1,1\}^n\to \{-1,1\}$ let $f=\sum_{S \subset \{1,2,\dots ,n\}}\hat f(S)W_S$ be its Fourier-Walsh expansion. (Here $W_S(x_1,x_2,\dots ,x_n)=\prod_{i\in S}x_i$.) The total influence $I(f)$ of $f$ is described by
$I(f)=\sum_{S \subset \{1,2,\dots ,n\}}\hat f^2(S)\,|S|$.
The spectral entropy $E(f)$ of $f$ is defined by
$E(f)=-\sum_{S \subset \{1,2,\dots ,n\}}\hat f^2(S) \log (\hat f^2(S))$.
The entropy-influence conjecture (here called the Fourier-entropy conjecture) asserts that for some universal constant C
$I(f) \ge C E(f)$.
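A toy example for orientation (our addition, not from the original post): if all the Fourier weight of $f$ sits on a single coefficient, i.e. $f=W_S$ for some $S$, then every term in $E(f)$ vanishes, so $E(f)=0$ while $I(f)=|S|$, and the conjectured inequality holds trivially. The conjecture only has bite when the spectrum is spread out: a spectrum spread roughly uniformly over $N$ coefficients has entropy about $\log N$, so the conjecture forces $N$ to be at most exponential in the total influence, which is the flavor of the spectral concentration statement in item 2 of the abstract.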
B) One interesting feature of the conjecture is that the RHS is invariant under arbitrary linear transformations of $\{-1,1\}^n$ (regarded as an $n$-dimensional vector space) while the LHS is invariant only to permutations of the variables.
C) My paper with Jean Bourgain, Influences of variables and threshold intervals under group symmetries, describes lower bounds on total influences for Boolean function invariant under certain groups of permutations (of the variables). Those results are stronger (but more restrictive) than what the Entropy/influence conjecture directly implies. (This was overlooked for a while.) The new paper gives (much hoped for, see below) simpler proofs and stronger results compared to those in my paper with Jean Bourgain.
D) Ryan O’Donnell wrote about Bourgain and some of his contributions to the analysis of Boolean functions:
“I spent close to 5 years understanding one 6-page paper of his (“On the distribution of the Fourier spectrum of Boolean functions”), and also close to 10 years understanding a 10-page paper of his (the k-SAT sharp threshold ‘appendix’). If anyone’s up for a challenge, I’m pretty sure no one on earth fully understands the second half of his paper “Influences of variables and threshold intervals under group symmetries” with Kalai (including Gil 🙂 )”
It looks that by now we have pretty good understanding and even some far-reaching progress regarding all three papers that Ryan mentioned. (It is unclear if we can hope now for an exponential spread of understanding for Bourgain’s proofs.)
1 Response to Kelman, Kindler, Lifshitz, Minzer, and Safra: Towards the Entropy-Influence Conjecture
1. Gil Kalai says:
Here is a link to two videotaped lectures on the new proof by Dor Minzer https://simons.berkeley.edu/events/boolean-1
http://mathhelpforum.com/math-software/140806-m-file-failure.html

1. ## m file failure?
Hey:
I've written this m-file. Basically I want it to have a function and to plot it on a graph, then run another m-file within it to show the graph with small limits using its derivative. But when I run it, I don't get any results, I just get this:
f =
[]
Here's the m-file:
function [ y ] = f( x )
% calculate f(x)=exp(x)
y=exp(x);
plot(x,f(x));
function [ y ] = fd( x )
% calculates fd(x)=exp(x)
y=exp(x);
plot(x,fd(x),'k')
pause
plot(x,(f(x+0.1)-f(x))/0.1,'r')
pause
plot(x,(f(x+0.01)-f(x))/0.01,'g')
pause
plot(x,(f(x+0.001)-f(x))/0.001,'b')
pause
plot(x,(f(x+0.0001)-f(x))/0.0001,'m')
end
end
cheerz
2. Originally Posted by ramdrop
Hey:
I've written this m-file. Basically I want it to have a function and to plot it on a graph, then run another m-file within it to show the graph with small limits using its derivative. But when I run it, I don't get any results, I just get this:
f =
[]
Here's the m-file:
function [ y ] = f( x )
% calculate f(x)=exp(x)
y=exp(x);
plot(x,f(x));
function [ y ] = fd( x )
% calculates fd(x)=exp(x)
y=exp(x);
plot(x,fd(x),'k')
pause
plot(x,(f(x+0.1)-f(x))/0.1,'r')
pause
plot(x,(f(x+0.01)-f(x))/0.01,'g')
pause
plot(x,(f(x+0.001)-f(x))/0.001,'b')
pause
plot(x,(f(x+0.0001)-f(x))/0.0001,'m')
end
end
cheerz
This is a function definition file; the first function in it is executed by invoking it, the second is only executed if invoked from the first (and it is not).
1. Please post the calling statement (what you type on at the command prompt or have in the script file)
2. Consider executing in the debugger.
3. Look at all the red or amber marks to the right in the editor, each of these indicates a syntactical mistake or a warning that the code may not be doing what you think, so check them.
4. You appear to be calling the top level function recursively, I doubt you intend this.
CB
3. Originally Posted by CaptainBlack
This is a function definition file the first function in it is executed by invoking it, the second is only executed if invoked from the first (and it is not).
1. Please post the calling statement (what you type on at the command prompt or have in the script file)
2. Consider executing in the debugger.
3. Look at all the red or amber marks to the right in the editor, each of these indicates a syntactical mistake or a warning that the code may not be doing what you think, so check them.
4. You appear to be calling the top level function recursively, I doubt you intend this.
CB
Basically (I think this is right), what I want it to do is simple: to have a function defined, and then to (through a series of commands) run an m-file, so I think it's called a script m-file or something like that.
1. I don't type anything in the command window, I just right click on it, and run it - that is obviously wrong though.
2. Not sure what you mean by this, but I will have a look.
3. There are no marks in the "m file maker"; it's a green square, so nothing is wrong. Well, something obviously is.
4. Again, not sure what you mean by this, but I'll take a second look at it.
4. Originally Posted by ramdrop
Basically (I think this is rigHT), What I want it to do is simple. To have a function defined, and then to (through a series of commands) run an m-file, so I think its called a Script m file or something like that.
1. I don't type anything in the command window, I just right click on it, and run it - that is obviously wrong though.
How does it find its input data?
2. Not sure what you mean by this, but I will have a look.
3. There are no marks in the "m file maker" its a green square, so nothing is wrong. Well something obviously is.
It is in the Matlab editor (answer for both of the above)
4. Again, not sure what you mean by this, but Il take a second look at it.
This is a function file, not a script; only the top function is visible outside the file.
CB
5. Well, I've had a major play with it; at the moment I'm trying to run the file first to see what happens, except it just doesn't work.
I get in the command window:
??? Input argument "x" is undefined.
Error in ==> f at 3
y=x^2;
This clearly indicates to me that the variable x is undefined and it will not work unless I define it.
I'm unsure how to define it; I know of one command, "syms 'x'", but that doesn't seem to work.
Any ideas?
6. Originally Posted by ramdrop
Well, I've had a major play with it; at the moment I'm trying to run the file first to see what happens, except it just doesn't work.
I get in the command window:
This clearly indicates to me that the variable x is undefined and it will not work unless I define it.
I'm unsure how to define it; I know of one command, "syms 'x'", but that doesn't seem to work.
Any ideas?
Yes, but I have already posted them, have you read them and acted on them?
Please repost the content of the .m file, its name, the data you are trying to pass to it, and an explanation of how you are trying to invoke this function.
Also post a clear statement of what you think you are trying to do (the exact wording of the problem as set)
CB
7. function [ y ] = f( x )
% calculate f(x)=exp(x)
y=exp(x);
plot(x,f(x));
function [ y ] = fd( x )
% calculates fd(x)=exp(x)
y=exp(x);
plot(x,fd(x),'k')
pause
plot(x,(f(x+0.1)-f(x))/0.1,'r')
pause
plot(x,(f(x+0.01)-f(x))/0.01,'g')
pause
plot(x,(f(x+0.001)-f(x))/0.001,'b')
pause
plot(x,(f(x+0.0001)-f(x))/0.0001,'m')
end
end
That is the m-file. What I want it to do is, for values of x, plot the function $\displaystyle e^x$ and show this on a graph.
Then I want it to (after I input some command, I think, probably the actual plot line in the file) plot the derivative of $\displaystyle e^x$, which is obviously the same, but with the small values towards the end of the m-file. We know the actual derivative, and it's been plotted on a graph; the other "plots" get closer to the correct derivative. That's what I want it to do.
8. Originally Posted by ramdrop
That is the m-file. What I want it to do is, for values of x, plot the function $\displaystyle e^x$ and show this on a graph.
Then I want it to (after I input some command, I think, probably the actual plot line in the file) plot the derivative of $\displaystyle e^x$, which is obviously the same, but with the small values towards the end of the m-file. We know the actual derivative, and it's been plotted on a graph; the other "plots" get closer to the correct derivative. That's what I want it to do.
That is an m-file; what happens when you run it?
Also what about the other questions I asked? Most importantly how are you calling this and what argument are you passing to it?
Also, I have already pointed out that you are calling f recursively and you probably do not want to do that. Do you want to do that? Have you done anything to fix that problem?
CB
9. It comes up with the error when I run it:
??? Input argument "x" is undefined.
Error in ==> f at 3
y=x^2
I have tried changing the functions, 1 being f and the other being g.
The file itself is called "f.m"
I'm trying to write something in the command window, like f(5), and it comes up with errors; I just need it to run, basically.
10. Originally Posted by ramdrop
It comes up with the error when I run it:
I have tried changing the functions, 1 being f and the other being g.
The file itself is called "f.m"
I'm trying to write something in the command window, like f(5), and it comes up with errors; I just need it to run, basically.
f requires a vector of points as input something like f([1,2,3,4,5]).
CB
11. I want it for all values of x.
Okay, so to write it, I would write in the m-file on the second line:
f = [-inf,inf] ?
12. Originally Posted by ramdrop
I want it for all values of x.
Okay, so to write it, I would write in the m-file on the second line:
f = [-inf,inf] ?
No you can't do such a plot! You are also not passing it a range to plot but a set of points.
It seems to me that your level of understanding of what you want and what Matlab does is poor. So if we are to make any progress with this you will first have to explain why you are doing what you are trying to do.
CB
13. My MATLAB knowledge is indeed poor because the lectures we have been given are just a sheet; the lecturer is never around to ask questions. Usually these "sheets" are not really informative: they just work through examples, we're given them, and then the lecturer wanders off for the majority of the class.
I'm trying to learn MATLAB by myself because next year I have a lot of MATLAB to do.
Okay, so this is EXACTLY what I want the m-file to do.
I want the m-file (for any function: x², exp(x)) defined as f to be plotted on a graph; probably the best would be a 10 by 10 window. This is represented by:
function [ y ] = f( x )
% calculate f(x)=exp(x)
y=exp(x);
plot(x,f(x));
Then, I want it to differentiate the function f and plot that graph. Then it "approximates" the derivative for step values h = 0.1, 0.01, 0.001, 0.0001. This causes the "curve/line" of the graph to change and become more accurate.
So in basic terms, the smaller the value of h, the more accurate the derivative plot is. This is represented by the latter part of the m-file.
function [ y ] = fd( x )
% calculates fd(x)=exp(x)
y=exp(x);
plot(x,fd(x),'k')
pause
plot(x,(f(x+0.1)-f(x))/0.1,'r')
pause
plot(x,(f(x+0.01)-f(x))/0.01,'g')
pause
plot(x,(f(x+0.001)-f(x))/0.001,'b')
pause
plot(x,(f(x+0.0001)-f(x))/0.0001,'m')
end
end
14. Originally Posted by ramdrop
My MATLAB knowledge is indeed poor because the lectures we have been given are just a sheet; the lecturer is never around to ask questions. Usually these "sheets" are not really informative: they just work through examples, we're given them, and then the lecturer wanders off for the majority of the class.
I'm trying to learn MATLAB by myself because next year I have a lot of MATLAB to do.
Okay, so this is EXACTLY what I want the m-file to do.
I want the m-file (for any function: x², exp(x)) defined as f to be plotted on a graph; probably the best would be a 10 by 10 window. This is represented by:
Then, I want it to differentiate the function f and plot that graph. Then it "approximates" the derivative for step values h = 0.1, 0.01, 0.001, 0.0001. This causes the "curve/line" of the graph to change and become more accurate.
So in basic terms, the smaller the value of h, the more accurate the derivative plot is. This is represented by the latter part of the m-file.
Let's start at the beginning:
[code]
function [ y ] = f( x )
% calculate f(x)=exp(x)
y=exp(x);
plot(x,f(x));
[/code]
1. I have told you before that you have a recursive call to f in this function definition that you almost certainly don't intend.
Create a file f.m which contains:
Code:
function [y]=f(x)
y=exp(x);
Now call this from the console with:
Code:
>>x=[-10:0.5:10]
>>y=f(x)
>>plot(x,y);
Now report what happens.
CB
15. I got a very nice plot, thanks, so I guess I do the same for the next part of the file?
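For completeness, a sketch of the script the thread is converging on (this is our addition, not a post from the thread): once f.m contains only the two-line function above, the finite-difference comparison the OP described can live in a separate script, e.g.

Code:
% fd_demo.m -- plot f(x) = exp(x) and forward-difference approximations
% of its derivative for shrinking step sizes h.
x = -2:0.01:2;
plot(x, f(x), 'k'); hold on          % f.m as defined above
for h = [0.1 0.01 0.001 0.0001]
    plot(x, (f(x + h) - f(x)) / h)   % forward difference; approaches f'(x) = exp(x)
    pause                            % press a key to draw the next curve
end
hold off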
https://www.physicsforums.com/threads/simplifying-this-equation.600615/

# Homework Help: Simplifying this equation
1. Apr 26, 2012
### Cyclopse
1. The problem statement, all variables and given/known data
Is it possible to simplify this even further?
2. Relevant equations
This is a Secant-tangent equation.
3. The attempt at a solution
I cross multiplied to get 18 = 324√3 (AC)
and in the end I got 0 = (18√3)(AC), which makes no sense
so is there a different way to simplify that?
2. Apr 26, 2012
### Staff: Mentor
Are you trying to solve for AC?
When you divide both sides by 18, you will be left with 1 on the left side, not 0.
If you are trying to find AC, divide both sides by 324√3.
3. Apr 26, 2012
### Cyclopse
Yes.
Here is the original question (attached as an image, not reproduced here).
Last edited: Apr 26, 2012
4. Apr 26, 2012
### Staff: Mentor
It doesn't seem to me that you have enough information.
How did you go from AB = 18 and AD = 18/√3 to 18 = 324√3 * AC?
5. Apr 26, 2012
### Cyclopse
I'm really terrible at maths...that's why I'm here.
6. Apr 26, 2012
### Staff: Mentor
Draw a radius from O to B, and call its length r. The radius OB is perpendicular to AB, so that ABO is a right triangle.
Let θ = the angle at A.
cos(θ) = 18/(r + 18/√3), and
sin(θ) = r/(r + 18/√3)
Now you have two equations in two unknowns, so you should be able to solve for both unknowns, although you only need r. Once you know r, it's pretty easy to get AC.
Last edited: Apr 26, 2012
7. Apr 26, 2012
### Staff: Mentor
Start out like Mark44 said by drawing a radius from O to B, and call its length r, but then apply the Pythagorean Theorem to the right triangle ABO to find r.
Chet
8. Apr 27, 2012
### Staff: Mentor
That's simpler than what I suggested. Simple is good!
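For the record, an editorial addition (assuming the figure matches the numbers quoted upthread: AB = 18 is the tangent from A, and AD = 18/√3 is the external segment of the secant through the center O): the tangent-secant (power of a point) relation settles AC directly,
$$AB^2 = AD \cdot AC \quad\Longrightarrow\quad AC = \frac{AB^2}{AD} = \frac{324}{18/\sqrt{3}} = 18\sqrt{3}.$$
This is consistent with the Pythagorean route suggested above: $18^2 + r^2 = (r + 18/\sqrt{3})^2$ gives $r = 6\sqrt{3}$, and then $AC = AD + 2r = 6\sqrt{3} + 12\sqrt{3} = 18\sqrt{3}$.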
https://www.physicsforums.com/threads/inequality-math-troubles.372222/

# Inequality math troubles
1. Jan 24, 2010
### Nikolas15
1. The problem statement, all variables and given/known data
Basically I was reviewing a proof of one of the inequality properties and there was a statement that a <= a, or in other words, for example, 7 <= 7. So my question is: why is that, since 7 really is equal to 7, at least I think so.
thanks.
2. Jan 24, 2010
### jgens
Re: inequality
The sign $\leq$ means less than or equal to. So saying $7 \leq 7$ means that $7$ is less than or equal to $7$. Now, in mathematics (and pretty much everywhere else) the word "or" means that either one or the other holds (and depending on the context, potentially both). Clearly $7 < 7$ is absurd, but $7 = 7$ is true, which means that one of the two clauses holds, thus $7 \leq 7$.
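In other words, $\leq$ is reflexive by definition. For what it's worth (our addition), the same fact is a one-line check in Lean 4, where Nat.le_refl is the core-library lemma stating $a \le a$:

example : 7 ≤ 7 := Nat.le_refl 7    -- the "equal" branch of "less than or equal" does the work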
https://deepai.org/publication/the-quadratic-wasserstein-metric-for-inverse-data-matching

# The quadratic Wasserstein metric for inverse data matching
This work characterizes, analytically and numerically, two major effects of the quadratic Wasserstein (W_2) distance as the measure of data discrepancy in computational solutions of inverse problems. First, we show, in the infinite-dimensional setup, that the W_2 distance has a smoothing effect on the inversion process, making it robust against high-frequency noise in the data but leading to a reduced resolution for the reconstructed objects at a given noise level. Second, we demonstrate that for some finite-dimensional problems, the W_2 distance leads to optimization problems that have better convexity than the classical L^2 and Ḣ^-1 distances, making it a more preferred distance to use when solving such inverse matching problems.
## 1 Introduction
This paper is concerned with inverse problems where we intend to match certain given data to data predicted by a (discrete or continuous) mathematical model, often called the forward model. To set up the problem, we denote by $m$ the input of the mathematical model (a function to be specified later) that we are interested in reconstructing from a given datum $g$. We denote by $f$ the forward operator that maps the unknown quantity $m$ to the datum $g$, that is
$$f(m)=g, \qquad (1)$$
where the operator $f$ is assumed to be nonlinear in general. We denote by $A$ the linearization of the operator $f$ at the background $m_0$. With a bit of abuse of notation, we write $Am=g$ to denote a linear inverse problem of the form (1) where $f(m)=Am$. The space of functions where we take our unknown object $m$, denoted by $\mathcal{M}$, and datum $g$, denoted by $\mathcal{G}$, as well as the exact form of the forward operator $f$, will be given later when we study specific problems.
Inverse problems for (1) are mostly solved computationally due to the lack of analytic inversion formulas. Numerical methods often reformulate the problem as a data matching problem where one takes the solution as the function that minimizes the data discrepancy, measured in a metric $D$, between the model prediction $f(m)$ and the measured datum $g$. That is,
$$m^{*}=\mathop{\mathrm{arg\,min}}_{m\in\mathcal{M}}\ \Phi(m), \quad\text{with}\quad \Phi(m):=\frac{1}{2}\,D^2(f(m),g). \qquad (2)$$
The most popular metric used in the past to measure the prediction-data discrepancy is the $L^2$ metric, due to its mathematical and computational simplicity. Moreover, it is often the case that a regularization term is added to the mismatch functional to impose extra prior knowledge on the unknown (besides the fact that it is an element of $\mathcal{M}$) to be reconstructed.
In recent years, the quadratic Wasserstein metric [1, 27, 32] has been proposed as an alternative to the $L^2$ metric in solving such inverse data matching problems [6, 7, 13, 18, 20, 19, 22, 23, 29, 34]. Numerical experiments suggest that the quadratic Wasserstein metric has attractive properties for some inverse data matching problems that the classical $L^2$ metric does not have [14, 35]. The objective of this work is to understand mathematically these numerical observations reported in the literature. More precisely, we attempt to characterize the numerical inversion of (1) based on the quadratic Wasserstein metric and compare that with the inversion based on the classical $L^2$ metric.
In the rest of the paper, we first briefly review some background materials on the quadratic Wasserstein metric and its connection to inverse data matching problems in Section 2. We then study in Section 3 the Fourier domain behavior of the solutions to (1) in the asymptotic regime of the Wasserstein metric: the regime where the model prediction and the datum are sufficiently close. We show that in the asymptotic regime, the Wasserstein inverse solution tends to be smoother than the $L^2$-based inverse solution. We then show in Section 4 that this smoothing effect of the Wasserstein metric also exists in the non-asymptotic regime, but in a less explicit way. In Section 5, we demonstrate, in perhaps overly simplified settings, some advantages of the Wasserstein metric over the $L^2$ metric in solving some important inverse matching problems: inverse transportation, back-projection from (possibly partial) data, and deconvolution of highly concentrated sources. Numerical simulations are shown in Section 6 to demonstrate the main findings of our study. Concluding remarks are offered in Section 7.
## 2 Background and problem setup
Let $f$ and $g$ be two probability densities on $\mathbb{R}^d$ that have the same total mass. The square of the quadratic Wasserstein distance between $f$ and $g$, denoted as $W_2^2(f,g)$, is defined as
$$W_2^2(f,g):=\inf_{T\in\mathcal{T}}\int_{\mathbb{R}^d}|x-T(x)|^2\,f(x)\,dx, \qquad (3)$$
where $\mathcal{T}$ is the set of measure-preserving maps from $f$ to $g$. The map $T$ that achieves the infimum is called the optimal transport map between $f$ and $g$. In the context of (1), the probability density $f(m)$ depends on the unknown function $m$. Therefore, $W_2^2(f(m),g)$ can be viewed as a mismatch functional of $m$ for solving the inverse problem. Since the quadratic Wasserstein distance is only defined between probability measures of the same total mass, one has to normalize $f(m)$ and $g$ and turn them into probability densities when applying them to solve inverse matching problems where $f(m)$ and $g$ cannot be interpreted as nonnegative probability density functions. This introduces new issues that need to be addressed [15].
It is well-known by now that the quadratic Wasserstein distance between $f$ and $g$ is connected to a weighted $\dot H^{-1}$ distance between them; see [32, Section 7.6] and [21, 25]. For any $s\in\mathbb{R}$, let $H^s(\mathbb{R}^d)$ be the space of functions
$$H^s(\mathbb{R}^d):=\Big\{m(x):\ \|m\|^2_{H^s(\mathbb{R}^d)}:=\int_{\mathbb{R}^d}\langle\xi\rangle^{2s}\,|\hat m(\xi)|^2\,d\xi<+\infty\Big\},$$
where $\hat m(\xi)$ denotes the Fourier transform of $m$ and $\langle\xi\rangle:=(1+|\xi|^2)^{1/2}$. When $s$ is a nonnegative integer, $H^s(\mathbb{R}^d)$ is the usual Hilbert space of functions with $s$ square integrable derivatives, and $H^0(\mathbb{R}^d)=L^2(\mathbb{R}^d)$. The space $H^{-s}(\mathbb{R}^d)$ with $s>0$ is understood as the dual of $H^s(\mathbb{R}^d)$. We also introduce the space $\dot H^s(\mathbb{R}^d)$, $s\in\mathbb{R}$, with the (semi-)norm $\|\cdot\|_{\dot H^s(\mathbb{R}^d)}$ defined through the relation
$$\|m\|^2_{H^s(\mathbb{R}^d)}=\|m\|^2_{L^2(\mathbb{R}^d)}+\|m\|^2_{\dot H^s(\mathbb{R}^d)}.$$
The space $\dot H^{-s}(\mathbb{R}^d)$ is defined as the dual of $\dot H^s(\mathbb{R}^d)$ via the norm
$$\|m\|_{\dot H^{-s}}:=\sup\big\{|\langle w,m\rangle|:\ \|w\|_{\dot H^s}\le 1\big\}. \qquad (4)$$
It was shown [32, Section 7.6] that asymptotically $W_2$ is equivalent to $\dot H^{-1}(d\mu)$, where the subscript $d\mu$ indicates that the space is defined with respect to the reference probability measure $\mu$. To be precise, if $\mu$ is the probability measure and $d\pi$ is an infinitesimal perturbation that has zero total mass, then
$$W_2(\mu,\mu+d\pi)=\|d\pi\|_{\dot H^{-1}(d\mu)}+o(d\pi). \qquad (5)$$
This fact allows one to show that, for two positive probability measures $\mu$ and $\nu$ with densities that are sufficiently regular, we have the following non-asymptotic equivalence between $W_2$ and $\dot H^{-1}(d\mu)$:
$$c_1\,\|\mu-\nu\|_{\dot H^{-1}(d\mu)}\ \le\ W_2(\mu,\nu)\ \le\ c_2\,\|\mu-\nu\|_{\dot H^{-1}(d\mu)}, \qquad (6)$$
for some constants $c_1$ and $c_2$. The second inequality is generic [25, Theorem 1], while the first inequality, proved in [21, Proposition 2.8] and [25, Theorem 5] independently, requires further that the densities of $\mu$ and $\nu$ be bounded from above.
In the rest of this paper, we study numerical solutions to the inverse data matching problem for (1) under three different mismatch functionals:
$$\Phi_{H^s}(m)\equiv\frac12\,\|f(m)-g\|^2_{H^s}:=\frac12\int_{\mathbb{R}^d}\langle\xi\rangle^{2s}\,\big|\hat f(m)(\xi)-\hat g(\xi)\big|^2\,d\xi, \qquad (7)$$
$$\Phi_{H^s(d\mu)}(m)\equiv\frac12\,\|f(m)-g\|^2_{H^s(d\mu)}:=\frac12\int_{\mathbb{R}^d}\Big|\Big(\widehat{\sqrt{\omega}}\ast\big(\langle\cdot\rangle^{s}\,\widehat{(f(m)-g)}\big)\Big)(\xi)\Big|^2\,d\xi, \qquad (8)$$
where $\ast$ denotes the convolution operation and $\omega$ is the weight function discussed in Remark 2.2, so that (8) is the Fourier-domain form of $\frac12\int_{\mathbb{R}^d}\omega\,|(I-\Delta)^{s/2}(f(m)-g)|^2\,dx$, and
$$\Phi_{W_2}(m)\equiv\frac12\,W_2^2(f(m),g):=\frac12\inf_{T\in\mathcal{T}}\int_{\mathbb{R}^d}|x-T(x)|^2\,f(m)(x)\,dx. \qquad (9)$$
Our main goal is to analyze the differences between the Fourier contents of the inverse matching results, a main motivation for us to choose the Fourier domain definition of the norms. These norms allow us to systematically study: (i) the differences between the $L^2$ (i.e. the special case $s=0$ of $H^s$) and the $H^s$, with a positive or negative $s$, inversion results; (ii) the differences between $H^s$ and $H^s(d\mu)$ inversion results caused by the weight $\omega$; and (iii) the similarities and differences between $\dot H^{-1}(d\mu)$ and $W_2$ inversion results. This is our roadmap toward better understandings of the differences between $L^2$-based and $W_2$-based inverse data matching.
###### Remark 2.1.
Note that since the $\dot H^s$ (semi-)norm is only a shift away from the corresponding $H^s$ norm in the Fourier representation, obtained by replacing $\langle\xi\rangle$ with $|\xi|$, we do not introduce extra mismatch functionals for those (semi-)norms. We will, however, discuss $\dot H^s$ inversions when we study the corresponding $H^s$ inversions.
###### Remark 2.2.
In the definition (8) of the objective function, we take the weight function $\omega$ such that $\Phi_{H^{-1}(d\mu)}$ is consistent with the linearization (5) of $W_2$ [32].
We refer interested readers to [32, 21, 25] for technical discussions on the results in (5) and (6) (under more general settings than what we present here) that connect $W_2$ with $\dot H^{-1}(d\mu)$. For our purpose, these results say that: (i) in the asymptotic regime when two signals $f$ and $g$, both being probability density functions, are sufficiently close to each other, their $W_2$ distance can be well approximated by their $\dot H^{-1}(d\mu)$ distance; and (ii) if $W_2(f,g)\to 0$ then $\|f-g\|_{\dot H^{-1}(d\mu)}\to 0$ and vice versa, that is, the exact matching solutions to the model (1), if they exist, are global minimizers to both $\Phi_{W_2}$ and $\Phi_{\dot H^{-1}(d\mu)}$. However, let us emphasize that the non-asymptotic equivalence in (6) does NOT imply that the functionals $\Phi_{W_2}$ and $\Phi_{\dot H^{-1}(d\mu)}$ have exactly the same optimization landscape. In fact, numerical evidence shows that the two functionals have different optimization landscapes that are both quite different from that of the $L^2$ mismatch functional; see for instance Section 5 for analytical and numerical evidences.
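Before moving on, it may help to have a concrete handle on $W_2$ itself. In one space dimension the optimal transport map is monotone and $W_2$ reduces to an $L^2$ distance between quantile functions, $W_2^2(f,g)=\int_0^1|F^{-1}(t)-G^{-1}(t)|^2\,dt$ with $F,G$ the CDFs of $f,g$; the following sketch (ours, not from the paper) evaluates this numerically.

```python
import numpy as np

def w2_1d(f, g, x):
    """W_2 distance between two densities sampled on a uniform grid x,
    computed through the one-dimensional quantile formula."""
    dx = x[1] - x[0]
    f = f / (f.sum() * dx)                           # normalize to unit mass
    g = g / (g.sum() * dx)
    F, G = np.cumsum(f) * dx, np.cumsum(g) * dx      # CDFs
    t = np.linspace(1e-6, 1.0 - 1e-6, 4096)
    Finv = np.interp(t, F, x)                        # quantile functions
    Ginv = np.interp(t, G, x)
    return np.sqrt(np.mean((Finv - Ginv) ** 2))      # mean over t approximates the integral

x = np.linspace(-10.0, 10.0, 2001)
f = np.exp(-0.5 * x**2)                              # ~ N(0, 1), unnormalized
g = np.exp(-0.5 * (x - 2.0) ** 2)                    # ~ N(2, 1)
print(w2_1d(f, g, x))                                # ~ 2.0: a pure shift costs exactly the shift
```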
## 3 Frequency responses in asymptotic regime
We first study the Fourier-domain behavior of the solutions to (1) obtained through minimizing the functionals we introduced in the previous section. At the solution, $f(m)$ and $g$ are sufficiently close to each other. Therefore their $W_2$ distance can be replaced with their $\dot H^{-1}(d\mu)$ distance according to (5). In the leading order, the $W_2$ solution is simply the $\dot H^{-1}(d\mu)$ solution in this regime.
### 3.1 $H^s$-based inverse matching for linear problems
Let us start with a linear inverse matching problem given by the model:
$$Am=g_\delta, \qquad (10)$$
where $g_\delta$ denotes the datum $g$ in (1) polluted by additive noise introduced in the measuring process. The subscript $\delta$ is used to denote the size (in appropriate norms to be specified soon) of the noise, that is, the size of $g_\delta-g$. Besides, we assume that $g_\delta$ is still in the range of the operator $A$. When the model (10) is viewed as the linearization of the nonlinear model (1), $m$ should be regarded as the perturbation of the background $m_0$. The model perturbation is also often denoted as $\delta m$.
$$\hat A(\xi)\sim\langle\xi\rangle^{-\alpha}, \qquad (11)$$
for some $\alpha\in\mathbb{R}$. This assumption is to make some of the calculations more concise but is not essential, as we will comment on later. When the exponent $\alpha>0$, the operator $A$ is "smoothing", in the sense that it maps a given $m$ to an output $Am$ with better regularity than $m$ itself. The inverse matching problem of solving for $m$ in (10), on the other hand, is ill-conditioned (so would be the corresponding nonlinear inverse problem if $A$ is regarded as the linearization of $f$). The size of $\alpha$, to some extent, can describe the degree of ill-conditionedness of the inverse matching problem.
We assume a priori that $m\in H^\beta(\mathbb{R}^d)$ for some $\beta\ge 0$. Therefore, $A$ could be viewed as an operator $A:H^\beta(\mathbb{R}^d)\mapsto H^{\beta+\alpha}(\mathbb{R}^d)$. We now look at the inversion of the problem under the $H^s$ (and $\dot H^s$) framework.
We seek the solution of the inverse problem as the minimizer of the functional $\Phi_{H^s}$, defined as in (7) with $f(m)$ and $g$ replaced with $Am$ and $g_\delta$ respectively. We verify that the Fréchet derivative of $\Phi_{H^s}$ at $m$ in the direction $\delta m$ is given by
$$\Phi'_{H^s}(m)[\delta m]=\int_{\mathbb{R}^d}\hat A^*(\xi)\Big\{\langle\xi\rangle^{2s}\,\overline{\big[\hat A(\xi)\hat m(\xi)-\hat g_\delta(\xi)\big]}\Big\}\,\widehat{\delta m}(\xi)\,d\xi,$$
where we used $\hat A^*$ to denote the adjoint of the operator $\hat A$. The minimizer of $\Phi_{H^s}$ is located at the place where its Fréchet derivative vanishes. Therefore the minimizer solves the following (modified) normal equation at frequency $\xi$:
$$\hat A^*(\xi)\big\{\langle\xi\rangle^{2s}\hat A(\xi)\big\}\,\hat m(\xi)=\hat A^*(\xi)\big\{\langle\xi\rangle^{2s}\,\hat g_\delta(\xi)\big\}. \qquad (12)$$
The solution at frequency $\xi$ is therefore
$$\hat m(\xi)=\Big(\hat A^*(\xi)\big(\langle\xi\rangle^{2s}\hat A(\xi)\big)\Big)^{-1}\,\hat A^*(\xi)\big(\langle\xi\rangle^{2s}\,\hat g_\delta(\xi)\big).$$
We can then perform an inverse Fourier transform to find the solution in the physical space. The result is
$$m=(A^*PA)^{-1}A^*P\,g_\delta,\qquad P:=(I-\Delta)^{s}, \qquad (13)$$
where fractional powers of $(I-\Delta)$ are defined through the relation
$$(I-\Delta)^{s/2}m:=\mathcal{F}^{-1}\big(\langle\xi\rangle^{s}\,\hat m\big),$$
$\mathcal{F}^{-1}$ being the inverse Fourier transform, $\Delta$ being the usual Laplacian operator, and $I$ being the identity operator.
#### Key observations.
Let us first remark that the calculations above can be carried out in the same way if the $H^s$ norm is replaced with the $\dot H^s$ (semi-)norm. The only changes are that $\langle\xi\rangle$ should be replaced with $|\xi|$ and the operator $(I-\Delta)$ in $P$ has to be replaced with $(-\Delta)$.
When $s=0$, assuming that $A^*A$ is invertible, the above solution reduces to the classical least-squares solution $m=(A^*A)^{-1}A^*g_\delta$. Moreover, when $A$ is invertible (so will be $A^*$), the solution can be simplified to $m=A^{-1}g_\delta$, which is simply the true solution to the original problem (10). Therefore, in the same manner as the classical least-squares method, the least-squares method based on the $H^s$ norm does not change the solution to the original inverse problem when it is uniquely solvable. This is, however, not the case for the $\dot H^s$ inversion in general: the symbol $|\xi|^{2s}$ vanishes at $\xi=0$, so the zeroth Fourier mode of the data is not constrained. For instance, $\dot H^1$ inversion only matches the derivative of the predicted data to the measured data.
When $s>0$, $P$ is a differential operator. Applying $P$ to the datum amplifies high-frequency components of the datum. When $s<0$, $P$ is a (smoothing) integral operator. Applying $P$ to the datum suppresses high-frequency components of the datum. Even though the presence of the operator $P$ in $(A^*PA)^{-1}$ will un-do the effect of $P$ on the datum in a perfect world (when $A$ is invertible, and all calculations are done with arbitrary precision), when operated under a given accuracy, inversion with $s<0$ is less sensitive to high-frequency noise in the data while inversion with $s>0$ is more sensitive to high-frequency noise in the data, compared to the case of $s=0$ (that is, the classical least-squares inversion). Therefore, inversion with $s\neq 0$ can be seen as a "preconditioned" (by the operator $P$) least-squares inversion.
### 3.2 Resolution analysis of linear inverse matching
We now perform a simple resolution analysis, following the calculations in [2], on the inverse matching result for the linear model (10).
###### Theorem 3.1.
Let $A$ be given as in (11) and let $R_c$ be an approximation to $A^{-1}$ defined through its symbol:
$$\hat R_c(\xi)\sim\begin{cases}\langle\xi\rangle^{\alpha}, & |\xi|<\xi_c,\\ 0, & |\xi|>\xi_c.\end{cases}$$
Let $\delta$ be the $H^s$ norm of the additive noise $g_\delta-g$. Then the reconstruction error $\|m-m^c_\delta\|_{L^2}$, with $m^c_\delta:=R_c\,g_\delta$, is bounded by
$$\|m-m^c_\delta\|_{L^2}\ \lesssim\ \|m\|_{H^\beta}^{\frac{\alpha-s}{\alpha+\beta-s}}\,\delta^{\frac{\beta}{\alpha+\beta-s}}, \qquad (14)$$
when we select
$$\langle\xi_c\rangle^{-1}\sim\Big(\delta\,\|m\|_{H^\beta}^{-1}\Big)^{\frac{1}{\alpha+\beta-s}}. \qquad (15)$$
###### Proof.
Following classical results in [12], it is straightforward to verify that the difference between the true solution and the approximated noisy solution is
$$\|m-m^c_\delta\|_{L^2}=\|m-R_cg_\delta\|_{L^2}=\|m-R_cg+R_cg-R_cg_\delta\|_{L^2}=\|(I-R_cA)m+R_c(g-g_\delta)\|_{L^2}\le\|(I-R_cA)m\|_{L^2}+\|R_c(g-g_\delta)\|_{L^2}. \qquad (16)$$
We then verify that the operators $R_c$ and $I-R_cA$ have the following norms, respectively:
$$\|R_c\|_{H^s\mapsto L^2}\sim\langle\xi_c\rangle^{\alpha-s}\quad\text{and}\quad\|I-R_cA\|_{H^\beta\mapsto L^2}\sim\langle\xi_c\rangle^{-\beta}.$$
This allows us to conclude that
$$\|m-m^c_\delta\|_{L^2}\le\|R_c\|_{H^s\mapsto L^2}\,\delta+\|I-R_cA\|_{H^\beta\mapsto L^2}\,\|m\|_{H^\beta}\lesssim\langle\xi_c\rangle^{\alpha-s}\,\delta+\langle\xi_c\rangle^{-\beta}\,\|m\|_{H^\beta}.$$
We can now select $\langle\xi_c\rangle$ to balance the two terms, i.e. the relation given in (15), to minimize the error of the reconstruction, which gives the bound in (14). ∎
#### Optimal resolution.
One main message carried by the theorem above is that reconstruction based on the $H^s$ mismatch has a spatial resolution
$$\varepsilon:=\langle\xi_c\rangle^{-1}\sim\delta^{\frac{1}{\alpha+\beta-s}},$$
under the conditions in the theorem. At a fixed noise level $\delta$, for fixed $\alpha$ and $\beta$, the optimal resolution of the inverse matching result degenerates when $s$ gets smaller. The case of $s=0$ corresponds to the usual reconstruction in the $L^2$ framework; the optimal resolution one could get in this case is decided by $\delta^{\frac{1}{\alpha+\beta}}$. When $s>0$, the best resolution one could get is better than the $L^2$ case in a perfect world. When $s<0$, the reconstruction in the $H^s$ framework provides an optimal resolution that is worse than the $L^2$ case. In other words, the reconstructions based on the negative norms appear to be smoother than optimal $L^2$ reconstructions in this case. See Section 6 for numerical examples that illustrate this resolution analysis.
However, we should emphasize that the above simple calculation only provides the best-case scenarios. It does not tell us exactly how to achieve the best results in a general setup (when the symbol of $A$, i.e. the singular value decomposition of $A$ in the discrete case, is not precisely known). Nevertheless, the guiding principle of the analysis is well demonstrated: least-squares with a stronger (than the $L^2$) norm yields higher resolution reconstructions while least-squares with a weaker norm yields lower (again compared to the $L^2$ case) resolution reconstructions in the best case.
### 3.3 Hs(dμ)-based inverse matching for linear problems
Inverse matching with the weighted norm can be analyzed in the same manner to study the impact of the weight on the inverse matching result. The solution to (10) in this case is sought as the minimizer of the functional defined in (8) with and . This means that the weight in our definition of the objective function.
Following the same calculation as in the previous subsection, we find that the minimizer of the functional solves the following normal equation at frequency $\xi$:
(17)
where is the adjoint of the operator defined through the relation .
We first observe that the right-hand side of (17) and that of (12) are different. In (12), the $\xi$-th Fourier mode of the datum is amplified or attenuated, depending on the sign of $s$, by a factor of . While in (17), this mode is further convolved with other modes of the datum after the amplification/attenuation. The convolution induces mixing between different modes of the datum. Therefore, inverse matching with the weighted norm cannot be done mode by mode as we did for the unweighted norm, even when we assume that the forward operator is diagonal. However, the main effect of the norm on the inversion, the smoothing/sharpening effect introduced by the factor (half of which comes from the factor in front of while the other half comes from the factor in ), is the same in both the unweighted and the weighted norms.
The inverse matching solution, in this case, written in physical space, is:
$$m = (A^* P_g A)^{-1} A^* P_g\, g^\delta, \qquad P_g := (I - \Delta)^{s/2}\, \omega\, (I - \Delta)^{s/2}. \tag{18}$$
We can again compare this with the unweighted solution in (13). The only difference is the introduction of the inhomogeneity, which depends on the datum , in the preconditioning operator by replacing it with . When (), and are (local) differential operators. Roughly speaking, compared to , emphasizes places where is small (recall that ) or where the -th order derivative of is large. At those locations, amplifies the same modes of the datum more than does. When , and are non-local operators. The impact of is more global (as we have seen in the previous paragraph in the Fourier domain). It is hard to precisely characterize the impact of without knowing its form explicitly. However, we can still see, for instance, from (17), that the smoother is, the smoother the inverse matching result will be (since has fast decay and the convolution will further smooth out ). If is very rough, say that it behaves like Gaussian noise, then decays very slowly. The convolution, in this case, will not smooth out as much as in the previous case. The main effect of on the inverse matching result in this case mainly comes from the norm, not the weight.
###### Remark 3.2.
Thanks to the asymptotic equivalence between and in (5), the smoothing effect we observe in this section for the inverse matching (and therefore the inverse matching, since is only different from on the zeroth moment in the Fourier domain) is also what we observe in the inverse matching. This observation will be demonstrated in more detail in our numerical simulations in Section 6.
### 3.4 Iterative solution of nonlinear inverse matching
The simple analysis in the previous sections based on the linearized quadratic Wasserstein metric, i.e., a weighted norm, on the inverse matching of linear model (10) does not translate directly to the case of inverse matching with the fully nonlinear model (1). Nevertheless, the analysis does provide us some insights.
Let us consider an iterative matching algorithm for the nonlinear problem, starting with a given initial guess $m_0$, characterized by the following iteration:
$$m_{k+1} = m_k + \ell_k\, \zeta_k, \qquad k \ge 0, \tag{19}$$
where $\zeta_k$ is a chosen descent direction of the objective functional at iteration $k$, and $\ell_k$ is the step length at this iteration. For simplicity, let us take the steepest descent method, where the descent direction is taken as the negative gradient direction. Following the calculations in the previous section, we verify that the Fréchet derivative of the objective functional at the current iterate $m_k$ in the direction $\delta m$ is given by
(20)
assuming that the forward model $f$ is Fréchet differentiable at $m_k$ with derivative $f'(m_k)$. This leads to the following descent direction chosen by a gradient descent method:
$$\zeta_k = -\big(f'(m_k)^* P_g\, f'(m_k)\big)^{-1} f'(m_k)^* P_g\, \big(f(m_k) - g^\delta\big), \tag{21}$$
Let us compare this with the descent direction resulting from the least-square functional:
$$\zeta_k = \big(f'(m_k)^* f'(m_k)\big)^{-1} f'(m_k)^* \big(f(m_k) - g\big). \tag{22}$$
We see that the iterative process of the preconditioned inverse matching can be viewed as a preconditioned version of the corresponding least-square iteration. The preconditioning operator, $P_g$, depends on the datum but is independent of the iteration. When the iteration is stopped after a finite number of steps, the effect we observed for linear problems, that is, the smoothing effect of $P_g$ in the case of $s<0$ or its de-smoothing effect in the case of $s>0$, is carried to the solution of nonlinear problems.
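The preconditioning effect can be seen in a toy computation. Below is a minimal sketch (ours; the smoothing forward map, the noise level, and the step length are illustrative assumptions, not the paper's setup) of the iteration (19) using the gradient of the $H^s$ misfit, with $P_g$ simplified to the unweighted multiplier $\langle\xi\rangle^{2s}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
xi = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)

def multiplier(u, p):
    """Apply the Fourier multiplier (1+|xi|^2)^(p/2) to a periodic signal."""
    return np.fft.ifft((1.0 + xi**2) ** (p / 2.0) * np.fft.fft(u)).real

A = lambda m: multiplier(m, -1.0)   # toy linear forward map: one order of smoothing
m_true = np.sin(2.0 * np.pi * np.linspace(0.0, 1.0, n, endpoint=False))
g_delta = A(m_true) + 0.01 * rng.standard_normal(n)

def descend(s, iters=500, lr=0.5):
    """Steepest descent on 0.5*||(I-Lap)^(s/2)(A m - g)||^2.
    The gradient is A*(I-Lap)^s (A m - g); A is self-adjoint here."""
    m = np.zeros(n)
    for _ in range(iters):
        m -= lr * A(multiplier(A(m) - g_delta, 2.0 * s))
    return m

for s in (0.0, -1.0):
    err = np.sqrt(np.mean((descend(s) - m_true) ** 2))
    print(f"s = {s:+.0f}: RMS reconstruction error = {err:.4f}")
# the s = -1 iteration suppresses high-frequency noise modes in the updates
```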
#### Wasserstein smoothing in the asymptotic regime.
To summarize, when the model predictions and the measured data are sufficiently close to each other, inverse matching with the quadratic Wasserstein metric, or equivalently the metric, can be viewed as a preconditioned -based inverse matching. The preconditioning operator is roughly the inverse Laplacian operator with a coefficient given by the datum. The optimal resolution of the inversion result from the Wasserstein metric, with data at a given noise level is roughly of the order ( being the order of the operator at the optimal solution and ) instead of as given in the case. The shape of the datum distorts the Wasserstein matching result slightly from the inverse matching result with a regular (semi-) norm.
## 4 Wasserstein iterations in non-asymptotic regime
As we have seen from the previous sections, in the asymptotic regime, the smoothing effect of the quadratic Wasserstein metric in solving inverse matching problems can be characterized relatively precisely thanks to the equivalence between and given in (5). The demonstrated smoothing effect makes -based inverse matching very robust against high-frequency noise in the measured data. This phenomenon has been reported in the numerical results published in recent years [6, 13, 14, 34, 35] and is one of the main reasons that is considered a good alternative to -based matching methods. In this section, we argue that the smoothing effect of can also be observed in the non-asymptotic regime, that is, a regime where the signals and are sufficiently far away from each other. The smoothing effect in the non-asymptotic regime implies that the landscape of the objective functional is smoother than that of the classical objective functional.
To see the smoothing effect of in non-asymptotic regime, we analyze the inverse matching procedure described by the iterative scheme (19) for the objective functional , defined in (9). For the sake of being technically correct, we assume that the data we are comparing in this section are sufficiently regular. More precisely, we assume that and for some . We also assume that the map () is Fréchet differentiable at any admissible and denote by the derivative in direction . We can then write down the Fréchet derivative of at the current iteration in the direction following the differentiability result of with respect to [32]. It is,
(23)
where denotes the optimal transport map at iteration (that is, for ), and denotes the derivative of with respect to (not ) in the direction .
Following the optimal transport theorem of Brenier [32], the optimal transport map at the current iteration $k$, denoted $T_k$, is given as $T_k = \nabla u$, where $u$ is the unique (up to a constant) convex solution to the Monge-Ampère equation:
$$\det\big(D^2 u(x)\big) = f(m_k(x))/g(\nabla u(x)), \qquad u \text{ being convex}. \tag{24}$$
Here $D^2$ is the Hessian operator defined through the Hessian matrix.
Let $\varphi$ be the Fréchet derivative of $u$ at the current iterate in the given direction; we then verify that $\varphi$ solves the following second-order elliptic equation to the leading order:
$$\sum_{ij} a_{ij}\, \frac{\partial^2 \varphi}{\partial x_i \partial x_j} + \sum_j b_j\, \frac{\partial \varphi}{\partial x_j} = \delta f, \tag{25}$$
where the coefficients $a_{ij}$ and $b_j$ depend on the dimension. When , () and (). When , we have
$$a_{ij} = g(T_k(x)) \begin{cases} \dfrac{\partial^2 u}{\partial x_k \partial x_i}\, \dfrac{\partial^2 u}{\partial x_j \partial x_k} - \dfrac{\partial^2 u}{\partial x_j \partial x_i}\, \dfrac{\partial^2 u}{\partial x_k \partial x_k}, & i \ne j \ne k, \\[3mm] \dfrac{\partial^2 u}{\partial x_{k'} \partial x_{k'}}\, \dfrac{\partial^2 u}{\partial x_k \partial x_k} - \dfrac{\partial^2 u}{\partial x_{k'} \partial x_k}\, \dfrac{\partial^2 u}{\partial x_k \partial x_{k'}}, & i = j \ne k \ne k'. \end{cases}$$
Let $\psi$ be the solution to the (adjoint) equation:
$$\sum_{ij} a_{ij}\, \frac{\partial^2 \psi}{\partial x_i \partial x_j} - \sum_j b_j\, \frac{\partial \psi}{\partial x_j} = -\nabla \cdot \big((x - T_k(x))\, f(x)\big). \tag{26}$$
It is then straightforward to verify, following standard adjoint-state calculations [33], that the update direction $\zeta_k$ can be written as
$$\zeta_k(x) = f'^*(m_k)\Big[\frac{|x - T_k(x)|^2}{2} + \psi(x)\Big], \tag{27}$$
where $f'^*(m_k)$ denotes the adjoint of the operator $f'(m_k)$.
We first observe that unlike in the classical case where is applied directly to the residual , that is, , the descent direction here depends on the model prediction and the datum only implicitly through the transfer map and its derivative with respect to . Only in the asymptotic regime of being very close to can we make the connection between and the normalized residual. This is where the approximation to comes from.
From Caffarelli's regularity theory (cf. [32, Theorem 4.14]), which states that when and we have that the Monge-Ampère solution , we see that is at least . Therefore the solution to the adjoint problem, $\psi$, is in by standard theory for elliptic partial differential equations when the problem is not degenerate, and in if it is degenerate. Therefore, $\psi$ is one derivative smoother than and (and therefore the residual). This is exactly what the preconditioning operator (with $s = -1$) did to the residual in the asymptotic regime, for instance, as shown in (13). This shows that inverse matching has a smoothing effect even in the non-asymptotic regime.
In the one-dimensional case, we can see the smoothing effect more explicitly, since we are allowed to construct the optimal map explicitly. Let $F$ and $G$ be the cumulative distribution functions of $f$ and $g$ respectively. The optimal transportation theorem in the one-dimensional setting (cf. [32, Theorem 2.18]) then says that the optimal transportation map from $f$ to $g$ is given by $T = G^{-1} \circ F$. This allows us to verify that the gradient of the objective functional at $m_k$ in the direction $\delta m$, given in (23), simplifies to:
$$\Phi'_{W_2}(m_k)[\delta m] = \int_{\mathbb{R}} \Big(\frac{(x - T_k(x))^2}{2} + p_k(+\infty) - p_k(x)\Big)\, f'(m_k)[\delta m]\, dx = \int_{\mathbb{R}} f'^*(m_k)\Big[\frac{(x - T_k(x))^2}{2} - p_k(+\infty) + p_k(x)\Big]\, \delta m(x)\, dx \tag{28}$$
where the function is defined as . Therefore the descent direction (27) simplifies to
$$\zeta_k(x) = f'^*(m_k)\Big[\frac{(x - T_k(x))^2}{2} - p_k(+\infty) + p_k(x)\Big]. \tag{29}$$
It is clear from (29) that the Fréchet derivative at iteration $k$ depends on the signals only through their anti-derivatives, via $T_k$ and $p_k$. Therefore, it is smoother than the corresponding $L^2$ Fréchet derivative in general, whether or not the signals $f(m_k)$ and $g$ are close to each other. This shows why the smoothing effect of the Wasserstein metric exists also in the non-asymptotic regime.
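In 1-D everything in (28)-(29) is computable directly from the cumulative functions. A minimal sketch (ours) of the map $T = G^{-1} \circ F$ on a grid, checked against the closed-form $W_2^2$ of two shifted Gaussians:

```python
import numpy as np

def ot_map_1d(x, f, g):
    """Monotone (optimal) 1-D transport map T = G^{-1} o F between two
    discretized probability densities f and g on the common grid x."""
    dx = x[1] - x[0]
    F = np.cumsum(f) * dx          # CDF of f
    G = np.cumsum(g) * dx          # CDF of g
    return np.interp(F, G, x)      # T(x_i): the point where G equals F(x_i)

x = np.linspace(-8.0, 8.0, 4001)
f = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2.0 * np.pi)   # N(+1, 1)
g = np.exp(-0.5 * (x + 1.0) ** 2) / np.sqrt(2.0 * np.pi)   # N(-1, 1)
T = ot_map_1d(x, f, g)
W2sq = np.trapz((x - T) ** 2 * f, x)
print(W2sq)   # ~4.0, matching (mu_f - mu_g)^2 for equal-variance Gaussians
```

Note that $T$ depends on $f$ and $g$ only through their anti-derivatives $F$ and $G$, which is exactly the smoothing mechanism described above.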
To provide some numerical evidence, we show in Figure 1 and Figure 2 some gradients of the $W_2$ and $L^2$ objective functions, with respect to the absorption coefficient (i.e. ), for the inverse diffusion problem we study in Section 6.2, in one- and two-dimensional domains respectively. The synthetic data, generated by applying the forward operator to the true absorption coefficient and then adding multiplicative random noise, contain roughly of random noise. We intentionally choose initial guesses to be relatively far away from the true coefficient so that the model prediction is far from the data to be matched. We are not interested in a direct quantitative comparison between the gradient of the Wasserstein objective function and that of the $L^2$ objective function since we do not have a good basis for the comparison. However, it is evident from these numerical results that the gradient of the Wasserstein functional is smoother, or contains fewer frequencies to be more precise, compared to the corresponding $L^2$ case.
## 5 Wasserstein inverse matching for transportation and convolution
Its robustness against high-frequency noise in the data, resulting from the smoothing effect we demonstrated in the previous two sections, is not the only reason why $W_2$ is considered better than $L^2$ for many inverse data matching problems. We show in this section another advantage of the $W_2$ distance in studying inverse matching problems: its convexity with respect to translation and dilation of signals.
### 5.1 W2 convexity with respect to affine transformations
For a given probability density function with finite moments and , we define:
$$f(m(x)) := \frac{1}{\sqrt{|\Sigma|}}\, \phi\big(\Sigma^{-1/2}(x - \lambda \bar{x})\big). \tag{30}$$
where $\Sigma$ is symmetric and positive-definite, and . This is simply a translation (by $\lambda \bar{x}$) and a dilation (by $\Sigma^{1/2}$) of the function $\phi$. We verify that .
Let $g$ be generated from $\phi$ in the same manner, with $\Sigma_g$ symmetric and positive-definite. Then we check that the optimal transport map from $f$ to $g$ is given by . In other words, the corresponding function is a convex solution to the Monge-Ampère equation (24) for this pair. This observation allows us to find that,
$$W_2^2(f, g) = |\lambda \bar{x} - \lambda_g \bar{x}_g|^2 + 2(\lambda \bar{x} - \lambda_g \bar{x}_g)^t \big(\Sigma^{1/2} - \Sigma_g^{1/2}\big) M_1 + \int_{\mathbb{R}^d} \big|(\Sigma^{1/2} - \Sigma_g^{1/2})\, x\big|^2\, \phi(x)\, dx. \tag{31}$$
This calculation shows that $W_2^2(f, g)$ is convex with respect to $\lambda \bar{x}$ and $\Sigma^{1/2}$ for a rather general probability density function $\phi$.
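A quick scalar illustration of this convexity (our sketch): for two unit-variance 1-D Gaussians separated by a shift $\mu$, (31) reduces to $W_2^2 = \mu^2$, a convex parabola in the translation parameter, whereas the plain $L^2$ mismatch saturates once the bumps no longer overlap:

```python
import numpy as np

x = np.linspace(-25.0, 25.0, 5001)
g = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)   # target density N(0, 1)

def l2_mismatch(mu):
    f = np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)
    return np.trapz((f - g) ** 2, x)

for mu in (0.0, 2.0, 5.0, 10.0):
    print(f"shift {mu:4.1f}:  W2^2 = {mu**2:6.1f}   L2^2 = {l2_mismatch(mu):.4f}")
# W2^2 keeps growing quadratically; the L2 misfit flattens out,
# which is why W2 gives a better-behaved landscape for large shifts.
```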
https://www.physicsforums.com/threads/relativity-without-the-aether-pseudoscience.90552/ | # Relativity without the aether: pseudoscience?
1. Sep 24, 2005
### Aether
Special relativity (SR) and Lorentz ether theory (LET) are empirically equivalent systems for interpreting local Lorentz symmetry. These two theories are equally valid, but it is not possible (so far) to demonstrate the truth or falseness of the postulates of either theory over the other by experimentation. Still, a superstition persists in the minds of many that somehow "SR is true, and LET is false". Why isn't "relativity without the aether" fairly described by the term "pseudoscience"?
pseudoscience - Refers to any body of knowledge or practice which purports to be scientific or supported by science but which is judged by the mainstream scientific community to fail to comply with the scientific method.
scientific method n - The principles and empirical processes of discovery and demonstration considered characteristic of or necessary for scientific investigation, generally involving the observation of phenomena, the formulation of a hypothesis concerning the phenomena, experimentation to demonstrate the truth or falseness of the hypothesis, and a conclusion that validates or modifies the hypothesis.
su·per·sti·tion n An irrational belief that an object, action, or circumstance not logically related to a course of events influences its outcome.
1) A belief, practice, or rite irrationally maintained by ignorance of the laws of nature or by faith in magic or chance.
2) A fearful or abject state of mind resulting from such ignorance or irrationality.
3) Idolatry.
2. Sep 24, 2005
### Perspicacious
Special relativity is physics according to the definition of physics whereas adherence to the aether is a religious belief that doesn't generate any physics.
3. Sep 24, 2005
### Aether
Lorentz symmetry, $\eta_{\mu \nu}$, is coordinate independent physics. Adherence to either one of SR, $\Lambda_\nu^\mu$, or LET, $T_\nu^\mu$, to the exclusion of the other is, it seems to me, a religious superstition that doesn't generate any physics (so far).
Last edited: Sep 24, 2005
4. Sep 24, 2005
### Perspicacious
Postulating Lorentz symmetry generates a lot of physics. Taking as an axiom the existence of an aether only produces narrow-mindedness and noise on newsgroups and physics forums. If an axiom doesn't generate any quantifiable predictions, it's worthless and needs to be thrown out.
5. Sep 24, 2005
### Aether
I agree, but what I am in doubt about is that one can truly appreciate Lorentz symmetry without the combined perspectives of SR & LET. Can you actually visualize Lorentz symmetry from both of these perspectives yourself?
SR and LET are empirically equivalent. Name one quantifiable prediction that is generated by postulating that the speed of light is a constant in all inertial frames that is not also generated by postulating an aether.
I'm not saying that LET should replace SR (yet). I'm saying that either one without the other is a potentially misleading representation of Lorentz symmetry.
Last edited: Sep 24, 2005
6. Sep 24, 2005
### Tom Mattson
Staff Emeritus
Because relativity is testable, and it passes every test to which it is subjected.
As for the fact that there are other theories that are empirically equivalent to SR: this situation is not unique to SR. There are also the pilot waves of Bohmian mechanics, which is empirically equivalent to QM. Should QM be considered pseudoscience? Of course not, the question is ridiculous. And it's just as ridiculous for SR.
Honestly Aether, how many threads do you need to push your agenda?
7. Sep 24, 2005
### benjamincarson
8. Sep 24, 2005
### Perspicacious
The Santa Reindeer Postulate
So what! I can create an infinite number of new theories empirically equivalent to SR.
Permit me to illustrate.
Start with SR and create a new theory SR* by adjoining to SR the postulate of Santa Claus and flying reindeer. Realize that SR and SR* are empirically equivalent if we wisely stipulate that Santa Claus and flying reindeer are undetectable. Show the physics of this and how it integrates so naturally with the Santa Reindeer postulate. Estimate the speed and distances covered by Santa Claus on Christmas night. Post a lot of hoopla about it and declare that the physics is unassailable. Also note that the theory has been peer-reviewed and is published on the Fermi National Accelerator Laboratory website.
http://www.fnal.gov/pub/ferminews/santa/
9. Sep 24, 2005
### Aether
I'm not trying to push an agenda. You have my blessing to kill this thread if you don't think that the question is a valid one. If there was a sub-forum to discuss "Lorentz symmetry" that might be a better place to raise this question, but I really am looking for answers.
10. Sep 24, 2005
### Staff: Mentor
The fact that they are equivalent, but that one involves assumptions for which there is no evidence and the other doesn't is precisely what makes one science and the other not.
It's the invisible purple elephant postulate: I can postulate that there is an invisible purple elephant in my garage (caveat: I do not have a garage). But that postulate would not affect how we understand gravity, so it would be useless to include it in our theory of gravity.
Aether theorists do, of course, hope that eventually evidence is found that makes SR and LET not empirically equivalent, but until such evidence is found, the postulate is just a superfluous invisible purple elephant. It's a piggyback theory.
It can also not be denied that the box in which the Aether could possibly be found has been shrinking as physics advances. It is not unreasonable to operate on the assumption that the box is empty until physicists perform an experiment that does not fit with predictions.
edit: dang, Perspicacious beat me to it with his "Santa and the flying reindeer" postulate. However, that postulate is compatible with my invisible purple elephant theory. So how 'bout we co-author a paper about a combined "Relativity and Santa and the Flying Reindeer and the Invisible Purple Elephant Theory of Gravity"?
Last edited: Sep 24, 2005
11. Sep 24, 2005
### Aether
I think that both theories involve such an assumption: SR assumes that the one-way speed of light is isotropic, but this can not be proven by experiment; LET assumes absolute simultaneity, but this hasn't been proven either (yet). Why would an impartial observer, not from our culture, prefer either one of these theories (today) over the other?
Some people are under the false impression that the speed of light postulate from SR has been "proven by experiment", but that's not true. How can a preference for SR as opposed to SR+LET be a benign choice when it leads so readily to such a misunderstanding?
The box for violations of the rotation and boost invariance components of Lorentz symmetry is shrinking, but that still does not predict which theory (SR or LET) an impartial observer would choose (I would hope that they would either choose SR+LET, or better yet a completely coordinate independent approach). Can you visualize Lorentz symmetry from the LET perspective as well as the SR perspective? It's like starting with both eyes open, and closing your left eye, and that's one perspective; then open your left eye and close your right eye, and that's the other perspective. They are both equally valid. I'm just saying, now open both eyes.
That's fine, but it would apply equally well to SR alone as it does to LET alone if you were an impartial observer. SR+LET has an advantage over either one alone.
12. Sep 24, 2005
### Perspicacious
Forget about Einstein's tortured derivation. There's no reason to base relativity on Einstein's original presentation. There's no need to fixate on the clumsiness of anyone's baby steps.
Special relativity is best understood without the clutter of unnecessary assumptions. Having to assume anything about the one-way speed of light is absolutely unnecessary. The best derivation of special relativity available anywhere to date in terms of sheer elegance, physical intuitiveness and mathematical simplicity is given here:
http://www.everythingimportant.org/relativity/
http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=AJPIAS000043000005000434000001
http://arxiv.org/PS_cache/physics/pdf/0302/0302045.pdf
Only the mathematically inept believe that Einstein discovered an admirable derivation of special relativity.
13. Sep 24, 2005
### Staff: Mentor
One thing I've never understood is why people say that the 1-way speed of light cannot be verified by experiment. It should be simple: place two clocks a set distance apart and synchronize them (there are lots of ways to do this, but the simplest would probably be to use a 3rd clock halfway between them to set them). Then fire a laser from one to the other and measure how long it takes to get there.
It has been my understanding that physicists who accept Relativity consider such an experiment to be unnecessary, while aether theorists simply choose not to do it - perhaps because they don't want to see the result.
14. Sep 24, 2005
### JesseM
If a given aether theory makes precisely the same predictions as SR, then it is not really an alternate "theory" at all, it's more akin to one of the various "interpretations" of quantum mechanics. This post from sci.physics.relativity has some good reasons for rejecting such an interpretation on the basis of "elegance" and Occam's razor:
Last edited: Sep 24, 2005
15. Sep 24, 2005
### Aether
The experiment really is just that simple, and anyone can do it, but exactly how you synchronize the clocks is what will always determine the outcome. If we synchronize our clocks so that the speed of light is the same in both (all) directions, then that is Einstein synchronization, relative simultaneity, and SR. However, we are also perfectly free to synchronize them to maintain absolute simultaneity with an "ether" frame, and we will then measure a generally different speed of light going in every direction, and that is LET. What we cannot do by experiment (yet) is to say that one synchronization scheme is better than the other.
Such an experiment is unnecessary if you accept relativity on philosophical grounds, but it is not appropriate to then forget exactly how it was that you ever came to accept relativity in the first place and then go on to claim that you have done an experiment and that as a result of that experiment we are all compelled to accept relativity. Aether theorists haven't found a locally preferred frame yet, and that maintains their theory at parity (so far) with special relativity as far as experiments go.
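(A toy computation, not part of the original exchange, makes the synchronization point concrete: give light the Reichenbach-style one-way speeds $c/(1-\kappa)$ and $c/(1+\kappa)$; any round-trip measurement still returns exactly $c$, so it cannot fix $\kappa$.)

```python
c, L = 299792458.0, 1000.0     # speed of light (m/s), one-way track length (m)

for kappa in (0.0, 0.3, 0.9):  # kappa = 0 is Einstein synchronization
    t_out  = L * (1 - kappa) / c   # A -> B at one-way speed c/(1-kappa)
    t_back = L * (1 + kappa) / c   # B -> A at one-way speed c/(1+kappa)
    print(f"kappa={kappa:.1f}: one-way out = {c/(1-kappa):.4e} m/s, "
          f"round-trip = {2*L/(t_out + t_back):.4e} m/s")
# the round-trip speed is exactly c for every kappa
```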
16. Sep 24, 2005
### Perspicacious
No. Actually, they like that method because of the arbitrariness of it. They will quickly point out an explicit assumption in your approach. You're going to end up assuming that light speed from A to B is equal to light speed from B to A. It's better to do an ultraslow clock transport instead.
17. Sep 24, 2005
### Tom Mattson
Staff Emeritus
It's not considered unnecessary at all. In fact it's been done!
I think that the most dramatic example is the following:
Alväger, T., Farley, F.J.M., Kjellman, J., and Wallin, I., Physics Letters 12, 260 (1964).
Alväger et al. measured the speed of gamma rays from decaying pions moving near light speed. The speed of the gamma rays was found to be 'c', modulo a small experimental error.
18. Sep 24, 2005
### JesseM
When you say "yet", are you suggesting you see no reason why some future phenomenon might not respect local Lorentz-invariance? If so, you are underscoring point #6 from the sci.physics.relativity post by Tom Roberts I quoted in my last post:
19. Sep 24, 2005
### Hurkyl
Staff Emeritus
"True" and "false" aren't really (internally) applicable to science. What is true is that each of the postulates of SR are empirically verifiable, whereas the same cannot be said of LET.
No it doesn't.
To even begin to say something like "the one-way speed of light is isotropic", it requires one to specify a coordinate chart.
Coordinate charts are nonphysical choices.
You're thinking about the fact that, among all possible rectilinear coordinate charts we could use, SR chooses to define the orthogonal ones as the inertial reference frames.
This choice is exactly analogous to the fact that we generally like, when studying 3-space, for our x, y, and z axes to be perpendicular.
Special Relativity is generally done in these orthogonal, linear coordinate charts for exactly the same reason that coordinate geometry is generally done with perpendicular axes.
Another good reason to use the orthogonal, linear coordinate charts is that such things can be defined experimentally. (Of course, so can other types of coordinate charts)
Contrast to the choice of charts used by aether theories which invoke some principle of absolute simultaneity which cannot be experimentally determined.
(Since I'm talking about "orthogonal" in the above, that means I'm using some sort of "metric". Of course, I'm using the "metric" of Minkowski 4-space, because that's the one that appears in all the physical laws)
20. Sep 24, 2005
### Aether
Here's the first two lines from a paper from Kostelecky & Mewes for example: http://www.citebase.org/cgi-bin/citations?id=oai:arXiv.org:hep-ph/0111026 "Lorentz violation is a promising candidate signal for Planck-scale physics. For instance, it could arise in string theory and is a basic feature of noncommutative field theories...". So, when I say "yet" I simply mean that I am aware of many physicists who expect to find violations eventually.
I know that there are some good points in Tom Roberts' articles, and I have read them in the past, but he's talking about a whole infinity of ether theories and I'm just talking about one: LET. So, kindly extract the point from the article that you think applies here.
He (and you by quoting it) seems to think that local Lorentz invariance is synonymous with SR to the exclusion of LET:
"Note that all phenomena discovered since 1905 do indeed exhibit the local Lorentz invariance of SR -- what is happenstance in ether theory was directly predicted by SR." | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8343714475631714, "perplexity": 1225.2770185838683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721555.36/warc/CC-MAIN-20161020183841-00347-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://me.gateoverflow.in/1461/gate2019-me-2-1 | In matrix equation $[A] \{X\}=\{R\}$,
$[A] = \begin{bmatrix} 4 & 8 & 4 \\ 8 & 16 & -4 \\ 4 & -4 & 15 \end{bmatrix}, \quad \{X\} = \begin{Bmatrix} 2 \\ 1 \\ 4 \end{Bmatrix}, \text{ and } \{ R \} = \begin{Bmatrix} 32 \\ 16 \\ 64 \end{Bmatrix}$
One of the eigenvalues of matrix $[A]$ is
1. $4$
2. $8$
3. $15$
4. $16$
Let $A$ be a square matrix of order $n$ and let $\lambda$ be one of its eigenvalues. If $X$ is an eigenvector associated with the eigenvalue $\lambda$, then we must have the equation
$AX = \lambda X$ --------- Equation $(1)$
In the question, it is given that $AX=R$ ---------- Equation $(2)$
Now, from Equations $(1)$ and $(2)$, $\lambda X = R$
So, $\lambda \begin{Bmatrix} 2\\ 1\\ 4 \end{Bmatrix} = \begin{Bmatrix} 32\\ 16\\64 \end{Bmatrix}$
$\Rightarrow$ $\lambda \begin{Bmatrix} 2\\ 1\\ 4 \end{Bmatrix} =16 \begin{Bmatrix} 2\\ 1\\4 \end{Bmatrix}$
$\Rightarrow$ $\lambda = 16$
So, the answer is $(D)$.
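A quick numerical check of the above (my addition, using numpy):

```python
import numpy as np

A = np.array([[4, 8, 4], [8, 16, -4], [4, -4, 15]])
X = np.array([2, 1, 4])
R = np.array([32, 16, 64])

assert np.array_equal(A @ X, R)   # A X = R, as given in the question
print(R / X)                      # [16. 16. 16.]  ->  lambda = 16
print(np.linalg.eigvals(A))       # 16 indeed appears among the eigenvalues
```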
https://www.khanacademy.org/math/recreational-math/math-warmup | # Math warmups
Introducing key concepts using physical analogies
## Expectation warmup
The 'problem of points' is a classic problem Fermat and Pascal famously debated in the 17th century. Their solution to this problem formed the basis of modern day probability theory. Now it's your turn to relive this challenge!
## Random sampling warmup
Introduction to random sampling (also known as the weak law of large numbers)
## Independent events warmup
Introduction to independent events and frequency analysis using histograms.
## Distribution warmup
Introduction to probability distributions, center, spread, and overall shape. In this warmup we will discover the binomial distribution!
## Arithmetic warmups
http://mathoverflow.net/questions/91595/pseudoanosov-mapping-torus-and-length-of-curves?sort=newest | # Pseudoanosov mapping torus and length of curves.
Let $M_{\phi}$ be a hyperbolic mapping torus coming from a pseudo-Anosov map $\phi$ in a surface $S$. Is there any way to estimate the length of the geodesic representing a given curve in the surface in terms of the map $\phi$? That is, knowing something like the stable and unstable foliations for the map or something equivalent, can you estimate the length of a given curve? Any references for something like this are really appreciated.
For example, if you take a mapping torus $M_{\phi}$, drill one simple nontrivial curve $\alpha$ in the surface and re-glue by $\sigma^n$, a large Dehn twist about $\alpha$, you are going to get a hyperbolic mapping torus $M_{\phi\sigma^{n}}$. In this manifold $\alpha$ is going to be very short.
Another example: if you take a map $\psi = \phi\sigma^n$ where $\phi$ is pseudo-Anosov in all of $S$ and $\sigma$ is pseudo-Anosov just in a subsurface $X \subset S$, I think the curves in the complement of $X$ have to be very short for $n$ large, right?
Regarding the last paragraph - Suppose $\phi$ is pA in all of $S$, and $\sigma$ is pA in $X$, a strict subsurface. The lengths of the curves of $\partial X$ go to zero in $M_{\phi \sigma^n}$, as $n$ goes to infinity. But for curves in the complement of $X$, yet not in the boundary, their length does not go to zero. – Sam Nead Mar 19 '12 at 11:34
(I edited your post to make the notation in the last paragraph match the previous paragraphs.) – Sam Nead Mar 19 '12 at 11:44
Thanks for that and for your answer. Do you know any example where the curves in the complement of $X$ don't have short length? – shurtados Mar 25 '12 at 19:23
http://chemistry.stackexchange.com/questions/2493/is-combustion-considered-a-redox-reaction | # Is combustion considered a redox reaction?
When carbon combusts with oxygen, is this considered a redox reaction since the oxygen atoms gain electrons and the carbon atoms lose them?
In general, yes. If the reaction involves oxygen going from the oxidation state of zero in $\ce{O_2}$ to an oxidation state of -2, then the oxygen is reduced and the carbon is correspondingly oxidized, so the reaction is a redox reaction.
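Worked out for the carbon case (my addition, standard oxidation-state bookkeeping):

$$\ce{C + O2 -> CO2}: \qquad \mathrm{C}\colon 0 \to +4 \ (\text{oxidized, loses } 4e^-), \qquad \mathrm{O}\colon 0 \to -2 \ (\text{reduced, each O gains } 2e^-).$$

The four electrons lost by the carbon atom balance the two gained by each of the two oxygen atoms, so electrons are transferred and the reaction is a redox reaction.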
https://rdrr.io/cran/lmomco/man/cdftexp.html | cdftexp: Cumulative Distribution Function of the Truncated Exponential... In lmomco: L-Moments, Censored L-Moments, Trimmed L-Moments, L-Comoments, and Many Distributions
Description
This function computes the cumulative probability or nonexceedance probability of the Truncated Exponential distribution given parameters (ψ and α) computed by partexp. The parameter ψ is the right truncation of the distribution and α is a scale parameter. The cumulative distribution function, letting β = 1/α to match nomenclature of Vogel and others (2008), is
F(x) = \frac{1-\mathrm{exp}(-β{x})}{1-\mathrm{exp}(-βψ)}\mbox{,}
where F(x) is the nonexceedance probability for the quantile 0 ≤ x ≤ ψ and ψ > 0 and α > 0. This distribution represents a nonstationary Poisson process.
The distribution is restricted to a narrow range of L-CV (τ_2 = λ_2/λ_1). If τ_2 = 1/3, the process represented is a stationary Poisson for which the cumulative distribution function is simply the uniform distribution and F(x) = x/ψ. If τ_2 = 1/2, then the distribution is represented as the usual exponential distribution with a location parameter of zero and a rate parameter β (scale parameter α = 1/β). These two limiting conditions are supported.
Usage
cdftexp(x, para)
Arguments
x : A real value vector.
para : The parameters from partexp or vec2par.
Value
Nonexceedance probability (F) for x.
Author(s)
W.H. Asquith
References
Vogel, R.M., Hosking, J.R.M., Elphick, C.S., Roberts, D.L., and Reed, J.M., 2008, Goodness of fit of probability distributions for sightings as species approach extinction: Bulletin of Mathematical Biology, DOI 10.1007/s11538-008-9377-3, 19 p.
See Also
pdftexp, quatexp, lmomtexp, partexp
Examples

    cdftexp(50, partexp(vec2lmom(c(40, 0.38), lscale=FALSE)))
    ## Not run:
    F <- seq(0, 1, by=0.001)
    A <- partexp(vec2lmom(c(100, 1/2), lscale=FALSE))
    x <- quatexp(F, A)
    plot(x, cdftexp(x, A), pch=16, type='l')
    by <- 0.01; lcvs <- c(1/3, seq(1/3+by, 1/2-by, by=by), 1/2)
    reds <- (lcvs - 1/3)/max(lcvs - 1/3)
    for(lcv in lcvs) {
      A <- partexp(vec2lmom(c(100, lcv), lscale=FALSE))
      x <- quatexp(F, A)
      lines(x, cdftexp(x, A), pch=16, col=rgb(reds[lcvs == lcv], 0, 0))
    }

    # Vogel and others (2008) example sighting times for the bird
    # Eskimo Curlew; inspection shows that these are fairly uniform.
    # There is a sighting about every year to two.
    T <- c(1946, 1947, 1948, 1950, 1955, 1956, 1959, 1960, 1961, 1962,
           1963, 1964, 1968, 1970, 1972, 1973, 1974, 1976, 1977, 1980,
           1981, 1982, 1982, 1983, 1985)
    R <- 1945 # beginning of record
    S <- T - R
    lmr <- lmoms(S)
    PARcurlew <- partexp(lmr)
    # read the warning message and then force the texp to the
    # stationary process model (min(tau_2) = 1/3).
    lmr$ratios[2]  <- 1/3
    lmr$lambdas[2] <- lmr$lambdas[1]*lmr$ratios[2]
    PARcurlew <- partexp(lmr)
    Xmax <- quatexp(1, PARcurlew)
    X <- seq(0, Xmax, by=.1)
    plot(X, cdftexp(X, PARcurlew), type="l")
    # or use the MVUE estimator
    TE <- max(S)*((length(S)+1)/length(S)) # Time of Extinction
    lines(X, punif(X, min=0, max=TE), col=2)
    ## End(Not run)
http://www.maplesoft.com/support/help/Maple/view.aspx?path=DEtools/checkrank | DEtools - Maple Programming Help
DEtools
checkrank
illustrate ranking to be used for a rifsimp calculation
Calling Sequence
checkrank(system, options)
checkrank(system, vars, options)
Parameters
system - list or set of polynomially nonlinear PDEs or ODEs (may contain inequations)
vars - (optional) list of the dependent variables to solve for
options - (optional) sequence of options to control the behavior of checkrank
Description
• To simplify systems of PDEs or ODEs, a ranking must be defined over all indeterminates present in the system. The ranking allows the algorithm to select an indeterminate for which to solve algebraically when looking at an equation. The checkrank function can be used to understand how the ranking-associated options define a ranking in rifsimp. For more detailed information about rankings, please see rifsimp[ranking].
• The checkrank function takes in the system as input along with the options:
vars : List of dependent variables (see following)
indep=[indep vars] : List of independent variables (see following)
ranking=[...] : Specification of exact ranking (see rifsimp[ranking])
degree=n : Use all derivatives to differential order n.
• The output is a list that contains the derivatives in the system ordered from highest to lowest rank. If degree is given, all possible derivatives of all dependent variables up to the specified degree are used; otherwise, only the derivatives present in the input are used.
• Default Ranking
When simplifying a system of PDEs or ODEs, you may want to eliminate higher order derivatives in favor of lower order derivatives. Do this by using a ranking by differential order, as is the default for rifsimp. Unfortunately, this says nothing about how ties are broken, for example, between two third order derivatives.
The breaking of ties is accomplished by first looking at the differentiations of the derivative with respect to each independent variable in turn. If they are of equal order, then the dependent variable itself is examined. The independent variable differentiations are examined in the order in which they appear in the dependency lists of the dependent variables, and the dependent variables are ordered alphabetically.
So, for example, given an input system containing f(x,y,z),g(x,y,z),h(x,z), the following will hold:
[x,y,z] : Order of independent variables
[f,g,h] : Order of dependent variables
f[x] < g[xx] : By differential order
g[xy] < f[xxz] : By differential order
f[xy] < g[xx] : By differentiation with respect to x (x>y); note: differential order is equal
f[xzz] < g[xyz] : By differentiation with respect to y
g[xx] < f[xx] : By dependent variable; note: differentiations are exactly equal
h[xz] < f[xz] : By dependent variable
Note that, in the above example, the only time the dependent variable comes into play is when all differentiations are equal.
• Changing the Default
To change the default ranking, use the vars, $\mathrm{indep}=[...]$, or $\mathrm{ranking}=[...]$ options. The vars can be specified in two distinct ways:
1. Simple List
If the vars are specified as a simple list, this option overrides the alphabetical order of the dependent variables described in the default ordering section.
2. Nested List
This option gives a solving order for the dependent variables. For example, if vars were specified as [[f],[g,h]], this would tell rifsimp to rank any derivative of f greater than all derivatives of g and h. Then, and when comparing g and h, the solving order would be differential order, then differentiations, and then dependent variable name as specified by the input [g,h]. This would help in obtaining a subset of the system that is independent of f; that is, a smaller PDE system in g and h only.
• The indep=[...] option provides for the specification of the independent variables for the problem, as well as the order in which differentiations are examined. So if the option indep=[x, y] were used, then f[x] would be ranked higher than f[y], but if indep=[y, x] were specified, then the opposite would be true.
Examples
> $\mathrm{with}\left(\mathrm{DEtools}\right):$
The first example uses the default ranking for a simple system.
> $\mathrm{sys}≔\left[\frac{{ⅆ}^{2}}{ⅆ{x}^{2}}g\left(x\right)-g\left(x\right)=0,{\left(\frac{ⅆ}{ⅆx}f\left(x\right)\right)}^{3}-\left(\frac{ⅆ}{ⅆx}g\left(x\right)\right)=0\right]$
${\mathrm{sys}}{:=}\left[\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{x}}^{{2}}}{}{g}{}\left({x}\right){-}{g}{}\left({x}\right){=}{0}{,}{\left(\frac{{ⅆ}}{{ⅆ}{x}}{}{f}{}\left({x}\right)\right)}^{{3}}{-}\left(\frac{{ⅆ}}{{ⅆ}{x}}{}{g}{}\left({x}\right)\right){=}{0}\right]$ (1)
> $\mathrm{checkrank}\left(\mathrm{sys}\right)$
$\left[\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{x}}^{{2}}}{}{g}{}\left({x}\right){,}\frac{{ⅆ}}{{ⅆ}{x}}{}{f}{}\left({x}\right){,}\frac{{ⅆ}}{{ⅆ}{x}}{}{g}{}\left({x}\right){,}{g}{}\left({x}\right)\right]$ (2)
By default, the first equation would be solved for the second order derivative in g(x), while the second equation would be solved for the first order derivative in f(x). Suppose instead that we always want to solve for g(x) before f(x). We can use vars.
> $\mathrm{checkrank}\left(\mathrm{sys},\left[\left[g\right],\left[f\right]\right]\right)$
$\left[\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{x}}^{{2}}}{}{g}{}\left({x}\right){,}\frac{{ⅆ}}{{ⅆ}{x}}{}{g}{}\left({x}\right){,}{g}{}\left({x}\right){,}\frac{{ⅆ}}{{ⅆ}{x}}{}{f}{}\left({x}\right)\right]$ (3)
So here g(x) and all derivatives are ranked higher than f(x).
The next example shows the default for a PDE system in f(x,y), g(x,y), h(y) (where we use the degree=2 option to get all second order derivatives):
> $\mathrm{checkrank}\left(\left[f\left(x,y\right),g\left(x,y\right),h\left(y\right)\right],\mathrm{degree}=2\right)$
$\left[\frac{{{\partial }}^{{2}}}{{\partial }{{x}}^{{2}}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{{x}}^{{2}}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}\frac{{{\partial }}^{{2}}}{{\partial }{y}{}{\partial }{x}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{y}{}{\partial }{x}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}\frac{{{\partial }}^{{2}}}{{\partial }{{y}}^{{2}}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{{y}}^{{2}}}{}{g}{}\left({x}{,}{y}\right){,}\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{y}}^{{2}}}{}{h}{}\left({y}\right){,}\frac{{\partial }}{{\partial }{x}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{\partial }}{{\partial }{x}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}\frac{{\partial }}{{\partial }{y}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{\partial }}{{\partial }{y}}{}{g}{}\left({x}{,}{y}\right){,}\frac{{ⅆ}}{{ⅆ}{y}}{}{h}{}\left({y}\right){,}{f}{}\left({x}{,}{y}\right){,}{g}{}\left({x}{,}{y}\right){,}{h}{}\left({y}\right)\right]$ (4)
All second order derivatives are first (first 7 entries), then the first derivatives with respect to x ahead of the first derivatives with respect to y, and finally $f\left(x,y\right)$, then $g\left(x,y\right)$, then $h\left(y\right)$.
Suppose we want to eliminate higher derivatives involving y before x. We can use indep for this as follows:
> $\mathrm{checkrank}\left(\left[f\left(x,y\right),g\left(x,y\right),h\left(y\right)\right],\mathrm{indep}=\left[y,x\right],\mathrm{degree}=2\right)$
$\left[\frac{{{\partial }}^{{2}}}{{\partial }{{y}}^{{2}}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{{y}}^{{2}}}{}{g}{}\left({x}{,}{y}\right){,}\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{y}}^{{2}}}{}{h}{}\left({y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{y}{}{\partial }{x}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{y}{}{\partial }{x}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}\frac{{{\partial }}^{{2}}}{{\partial }{{x}}^{{2}}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{{x}}^{{2}}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}\frac{{\partial }}{{\partial }{y}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{\partial }}{{\partial }{y}}{}{g}{}\left({x}{,}{y}\right){,}\frac{{ⅆ}}{{ⅆ}{y}}{}{h}{}\left({y}\right){,}\frac{{\partial }}{{\partial }{x}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{\partial }}{{\partial }{x}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}{f}{}\left({x}{,}{y}\right){,}{g}{}\left({x}{,}{y}\right){,}{h}{}\left({y}\right)\right]$ (5)
Now to eliminate f(x,y) and derivatives in terms of $g\left(x,y\right)$ and $h\left(y\right)$, and to rank y derivatives higher than x, we can combine the options to obtain the following.
> $\mathrm{checkrank}\left(\left[f\left(x,y\right),g\left(x,y\right),h\left(y\right)\right],\left[\left[f\right],\left[g,h\right]\right],\mathrm{indep}=\left[y,x\right],\mathrm{degree}=2\right)$
$\left[\frac{{{\partial }}^{{2}}}{{\partial }{{y}}^{{2}}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{y}{}{\partial }{x}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{{x}}^{{2}}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{\partial }}{{\partial }{y}}{}{f}{}\left({x}{,}{y}\right){,}\frac{{\partial }}{{\partial }{x}}{}{f}{}\left({x}{,}{y}\right){,}{f}{}\left({x}{,}{y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{{y}}^{{2}}}{}{g}{}\left({x}{,}{y}\right){,}\frac{{{ⅆ}}^{{2}}}{{ⅆ}{{y}}^{{2}}}{}{h}{}\left({y}\right){,}\frac{{{\partial }}^{{2}}}{{\partial }{y}{}{\partial }{x}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}\frac{{{\partial }}^{{2}}}{{\partial }{{x}}^{{2}}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}\frac{{\partial }}{{\partial }{y}}{}{g}{}\left({x}{,}{y}\right){,}\frac{{ⅆ}}{{ⅆ}{y}}{}{h}{}\left({y}\right){,}\frac{{\partial }}{{\partial }{x}}{}{g}{}\left({x}{,}{y}\right){,}{0}{,}{g}{}\left({x}{,}{y}\right){,}{h}{}\left({y}\right)\right]$ (6) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9343292713165283, "perplexity": 1039.2785069434335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00136-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://www.bbc.com/future/story/20140606-how-do-you-weigh-a-planet | # How do you weigh a planet?
Scales don’t come Earth-sized, so you may think calculating the weight of a planet is a tricky task. But it’s a lot easier to do if you use a nearby moon…
Weight is measured by how much gravity acts on you, which is why you would weigh less on the Moon, which has less gravity than Earth, than you would at home. So scientists talk about mass rather than weight, as mass is the same no matter where you are.
To understand how we are able to calculate the mass of a planet, you have to first start with the principle called the Law of Universal Gravitation, published in 1687 by Sir Isaac Newton. Newton’s work tells us to look at how a planet affects the things around it. First, find a planet with a handy second object nearby like a moon. Second, measure the distance from the moon to the planet. Third, time one complete orbit. This gives you a moon’s speed, and the faster the moon is going the bigger the planet must be.
This only allows you to compare the relative masses of planets. To find out the actual masses of planets we had to wait for Henry Cavendish's experiment in 1797. He set up an experiment with two 150kg lead balls representing planets, and two smaller spheres, representing moons, and he measured the gravitational pull between them. Cavendish's experiment led us to the missing piece of Newton's puzzle, which was the value of G – the number that relates the gravitational force between two bodies to their masses and distance apart. By putting the value of G into Newton's equation Cavendish calculated Earth's mass to be six billion trillion tonnes, which is within 1% of our best guess today.
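Putting numbers to the moon method above (my sketch: it assumes a circular orbit and Newton's relation $GM/r^2 = 4\pi^2 r/T^2$, and strictly returns the combined Earth-Moon mass):

```python
import math

G = 6.674e-11        # Cavendish's constant, m^3 kg^-1 s^-2
r = 3.844e8          # mean Earth-Moon distance, m
T = 27.32 * 86400    # sidereal month, s

M = 4 * math.pi**2 * r**3 / (G * T**2)
print(f"Earth's mass ~ {M:.2e} kg")  # ~6.0e24 kg, about six billion trillion tonnes
```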
http://math.stackexchange.com/questions/5431/formally-proving-that-a-function-is-oxn | # Formally proving that a function is $O(x^n)$
Say I have a function
\begin{equation*} f(x) = ax^3 + bx^2 + cx + d,\text{ where }a > 0. \end{equation*}
It's clear that for a high enough value of $x$, the $x^3$ term will dominate and I can say $f(x) \in O(x^3)$, but this doesn't seem very formal.
The formal definition is $f(x) \in O(g(x))$ if constants $k, x_0 > 0$ exist, such that $0 \le f(x) \le kg(x)$ for all $x > x_0$.
My question is, what are appropriate values for $k$ and $x_0$? It's easy enough to find ones that apply (say $k = a + b + c + d$). By the formal definition, all I have to do is show that these numbers exist, so does it actually matter which numbers I use? For some value of $x$, $k$ could be anywhere from $1$ to $a + b + c + d + ...$. From my understanding, it doesn't matter what numbers I pick as long as they 'work', but is this right? It seems too easy.
Thanks
I changed some of the c's to k's because I think there was a clash there - hope that it was correct to do so. – anon Sep 25 '10 at 16:57
Looks fine to me. You missed one though -- I fixed it :-P – Joel Sep 25 '10 at 17:03
If $a < 0$, then the $0 \le kg(x)$ is violated... Perhaps a different definition? Or can $k < 0$? – Aryabhata Sep 25 '10 at 18:54
@Moron: Yeah, I think we should have $|f(x)|<k|g(x)|$ for $x>x_0$. For instance, $\sin(x)=O(1)$ and the definition by Joel doesn't give this. – alext87 Sep 25 '10 at 18:58
@alex: Yeah, since this seems like homework, just trying to make sure Joel knows the right definition taught in their class :-) – Aryabhata Sep 25 '10 at 19:03
The argument you are getting at, as I understand it, is roughly: $x^n \in O(x^n)$ and thus $kx^n \in O(x^n)$, so $f(x)$ acts like $O(x^3) + O(x^2) + O(x) + O(1)$, which can be reduced to $O(x^3)$.
So the theorem we would like to prove now is that for $n\geq m$: $f \in O(x^n)$ and $g \in O(x^m)$ implies $f + g \in O(x^n)$. Once we have this you just add up the monomials of the polynomial and that proves the result.
Look at what we have, from the definitions:
$$f \in O(x^n) \Rightarrow \exists x_0, k,\;\; \forall x>x_0,\;\; f(x) \leq kx^n$$
$$g \in O(x^m) \Rightarrow \exists x_1, k',\;\; \forall x>x_1,\;\; g(x) \leq k'x^m$$
Let $x_2$ be the maximum of $x_0$, $x_1$, and $1$ (so that $x^m \leq x^n$ for $x > x_2$), let $k''$ be the maximum of $k$ and $k'$, and add these inequalities:
$$\forall x>x_2,\;\; (f+g)(x) \leq kx^n + k'x^m \leq k''(x^n + x^m) \leq 2 k'' x^n$$
Now the pair of values $(x_2,\,2k'')$ proves that $f+g \in O(x^n)$.
Considering abstract functions like this makes the proof very easy, but it is clear that the values we exhibit to prove the existential claim may not be the best, although we still prove the theorem in an effective way. In particular, you could do a very detailed analysis of the functions in specific cases to get very tight bounds; in this lucky case that's not needed, which is why the theorem is easier to prove in the abstract setting.
HINT $\quad ax^3 + bx^2 + cx + d\ \le\ (|a|+|b|+|c|+|d|)\,x^3\,$ for $x > 1$
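A quick numerical spot-check of this hint (the coefficients below are arbitrary, chosen only for illustration):

```python
# Check a*x^3 + b*x^2 + c*x + d <= (|a|+|b|+|c|+|d|) * x^3 at sample points x > 1
a, b, c, d = 2.0, -5.0, 3.0, -7.0                # arbitrary sample coefficients
k = abs(a) + abs(b) + abs(c) + abs(d)
for x in (1.01, 2.0, 10.0, 1e3, 1e6):
    assert a*x**3 + b*x**2 + c*x + d <= k * x**3
print("bound holds at all sampled points")
```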
Suppose you have functions $f(x)$ and $g(x)$.
A really simple method (that works when $f(x)$ and $g(x)$ are polynomials) of determining a constant that works is the following. Consider
$\lim_{x\rightarrow\infty}\frac{f(x)}{g(x)}$
If the limit exists and equals a constant $0\leq C< \infty$, then $C+\epsilon$ for any $\epsilon>0$ is a constant that works. To see this, just apply the definition of the limit: $\forall \epsilon>0$ there exists an $x_0(\epsilon)$ such that $\forall x>x_0(\epsilon)$ we have
$\left|\frac{f(x)}{g(x)}-C\right|<\epsilon$
That is
$\frac{f(x)}{g(x)} < C+\epsilon$
Now you know the constant from calculating the limit, and you know the existence of $x_0$. Since you are always considering asymptotics when using this definition, you are never concerned with the value of $x_0$ (only that it exists).
It does not matter what constants you use and someone could easily use different constants to get $f(x)=O(g(x))$. This notation is to compare the growth rate of two functions.
If $\lim_{x\rightarrow \infty} \frac{f(x)}{g(x)}$ does not exist or is hard to calculate, then as long as you can bound the ratio above you still have $f(x)=O(g(x))$.
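For instance, the limit calculation is a one-liner in a computer algebra system (a sketch using sympy, with an arbitrary sample polynomial):

```python
import sympy as sp

x = sp.symbols('x')
f = 3*x**3 + 2*x**2 + x + 5            # arbitrary sample polynomial
C = sp.limit(f / x**3, x, sp.oo)       # the limit C of f(x)/g(x) with g(x) = x^3
print(C)                               # 3, so any constant k > 3 eventually works
```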
http://umj.imath.kiev.ua/volumes/issues/?lang=en&year=2005&number=10
# Volume 57, № 10, 2005
Article (Russian)
### Majorant estimates for the percolation threshold of a Bernoulli field on a square lattice
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1315–1326
We suggest a method for obtaining a monotonically decreasing sequence of upper bounds for the percolation threshold of a Bernoulli random field on $Z^2$. On the basis of this sequence, we obtain a method for constructing approximations, with a guaranteed accuracy estimate, for the percolation probability. We compute the first term $c_2 = 0.74683$ of the sequence considered.
Article (Russian)
### Some remarks on a Wiener flow with coalescence
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1327–1333
We study properties of a stochastic flow that consists of Brownian particles coalescing at contact time.
Article (Russian)
### Degenerate Nevanlinna-Pick problem
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1334–1343
A general solution of the degenerate Nevanlinna-Pick problem is described in terms of fractional-linear transformations. A resolvent matrix of the problem is obtained in the form of a J-expanding matrix of full rank.
Article (Russian)
### Qualitative investigation of a singular Cauchy problem for a functional differential equation
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1344–1358
We consider the singular Cauchy problem $$t x'(t) = f(t, x(t), x(g(t)), x'(t), x'(h(t))), \qquad x(0) = 0,$$ where $x: (0, τ) → ℝ$, $g: (0, τ) → (0, + ∞)$, $h: (0, τ) → (0, + ∞)$, $g(t) ≤ t$, and $h(t) ≤ t$ for $t ∈ (0, τ)$, for linear, perturbed linear, and nonlinear equations. In each case, we prove that there exists a nonempty set of continuously differentiable solutions $x: (0, ρ] → ℝ$ ($ρ$ is sufficiently small) with the required asymptotic properties.
Article (Russian)
### On the distribution of the time of the first exit from an interval and the value of a jump over the boundary for processes with independent increments and random walks
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1359–1384
For a homogeneous process with independent increments, we determine the integral transforms of the joint distribution of the first-exit time from an interval and the value of the jump of the process over the boundary at exit time, as well as the joint distribution of the supremum, infimum, and value of the process.
Article (Ukrainian)
### On properties of subdifferential mappings in Fréchet spaces
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1385–1394
We present conditions under which the subdifferential of a proper convex lower-semicontinuous functional in a Fréchet space is a bounded upper-semicontinuous mapping. The theorem on the boundedness of a subdifferential is also new for Banach spaces. We prove a generalized Weierstrass theorem in Fréchet spaces and study a variational inequality with a set-valued mapping.
Article (Ukrainian)
### Approximation of classes of analytic functions by Fourier sums in the metric of the space $L_p$
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1395–1408
Asymptotic equalities are established for the upper bounds of approximations by partial Fourier sums in the metric of the spaces $L_p,\quad 1 \leq p \leq \infty$, on classes of Poisson integrals of periodic functions belonging to the unit ball of the space $L_1$. The results obtained are generalized to the classes of $(\psi, \overline{\beta})$-differentiable functions (in the Stepanets sense) that admit analytic extension to a fixed strip of the complex plane.
Article (Ukrainian)
### Exact order of relative widths of classes $W^r_1$ in the space $L_1$
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1409–1417
As $n \rightarrow \infty$, the exact order of the relative widths of the classes $W^r_1$ of periodic functions in the space $L_1$ is found under restrictions on the higher derivatives of the approximating functions.
Anniversaries (Ukrainian)
### Ivan Oleksandrovych Lukovs'kyi (on his 70-th birthday)
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1418-1419
Brief Communications (Russian)
### On domains with regular sections
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1420–1423
We prove the generalized convexity of domains satisfying the condition of acyclicity of their sections by a certain continuously parametrized family of two-dimensional planes.
Brief Communications (Russian)
### On one problem for comonotone approximation
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1424–1429
For comonotone approximation, we prove that an analog of the second Jackson inequality with the generalized Ditzian–Totik modulus of smoothness $\omega^{\varphi}_{k, r}$ is invalid for $(k, r) = (2, 2)$, even if the constant is allowed to depend on the function.
Brief Communications (Russian)
### On one extremal problem for numerical series
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1430–1434
Let $Γ$ be the set of all permutations of the natural series and let $α = \{α_j\}_{j ∈ ℕ}$, $ν = \{ν_j\}_{j ∈ ℕ}$, and $η = \{η_j\}_{j ∈ ℕ}$ be nonnegative number sequences for which $$\left\| \nu (\alpha \eta )_\gamma \right\|_1 := \sum\limits_{j = 1}^\infty \nu_j\, \alpha_{\gamma(j)}\, \eta_{\gamma(j)}$$ is defined for all $γ := \{γ(j)\}_{j ∈ ℕ} ∈ Γ$ and $η ∈ l_p$. We find $\sup_{\eta :\, \left\| \eta \right\|_p = 1}\ \inf_{\gamma \in \Gamma } \left\| \nu (\alpha \eta )_\gamma \right\|_1$ in the case where $1 < p < ∞$.
Brief Communications (Russian)
### Finite-dimensionality and growth of algebras specified by polylinearly interrelated generators
Ukr. Mat. Zh. - 2005. - 57, № 10. - pp. 1435–1440
We investigate the finite-dimensionality and growth of algebras specified by a system of polylinearly interrelated generators. The results obtained are formulated in terms of a function $\rho$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9038186073303223, "perplexity": 694.3002071098599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145774.75/warc/CC-MAIN-20200223123852-20200223153852-00276.warc.gz"} |
https://www.physicsforums.com/threads/propagator-equation.176982/ | # Propagator equation
1. Jul 14, 2007
### ehrenfest
Can someone explain to me why the equation $$\psi(x,t) = \int{U(x,t,x',t')\psi(x',t')dx'}$$, where U is the propagator, has an integral? I thought you could just multiply the propagator by an initial state and get a final state?
2. Jul 14, 2007
### meopemuk
In quantum mechanics, time evolution of wave functions is represented by the time evolution operator. The formula you wrote is the most general linear operator connecting wave functions at times t and t'.
3. Jul 14, 2007
### jostpuur
Perhaps so, if you call the time evolution operator a propagator. You can get the time evolution by multiplying the state vector by a correct operator
$$|\psi(t)\rangle = e^{-iH(t-t')/\hbar}|\psi(t')\rangle$$
This is quite abstract like this. If you use the position representation, then this operator is something more complicated than just a function, and this "multiplication" is not a pointwise multiplication of two functions.
4. Jul 14, 2007
### ehrenfest
OK. So for the position representation (which is the same as the X basis, right?) we have that (for a free particle):
$$U(x,t;x') \equiv \langle x|U(t)|x'\rangle = \int^{\infty}_{-\infty}\langle x|p\rangle\langle p|x'\rangle e^{-ip^2t/2m\hbar}\,dp$$
which can be reduced to $$\left(\frac{m}{2\pi\hbar i t}\right)^{1/2}e^{im(x-x')^2/2\hbar t}$$.
So why is it not $$\psi(x,t) = (\frac{m}{2\pi\hbar i t})^{1/2}e^{im(x-x')^2/2\hbar t} \psi(x',0)$$
or simply $$\psi(x,t) = U(x,t;x') \psi(x',0)$$?
Also, what exactly does the equivalence mean $$U(x,t;x') \equiv <x|U(t)|x'>$$?
Last edited: Jul 14, 2007
5. Jul 14, 2007
### plmokn2
I could well be wrong, and at best I'll give a very restricted view since I probably know less than you, but...
Think about a particle with psi(x,t=0) a delta function. Over time psi spreads out, and psi(x',t)=U(x,t;x') psi(x,t=0).
But then think about the (maybe unphysical, I don't know) situation of a particle starting with two infinitely thin peaks. Then at a later time, at position x', psi will be given by the contribution from the first peak, which has spread out, plus the contribution from the second peak's spread. Generalise for a continuous wavefunction and you get an integral.
6. Jul 14, 2007
### ehrenfest
That makes sense except I think you may have mixed up your primes in the expression for U(t) but still that explanation of the integral really helped.
7. Jul 14, 2007
### meopemuk
This is an amplitude for the particle to move from point x' to point x in time t.
Because the particle can arrive at the point x not only from x', but from any other point in space as well. So, this expression should be integrated over x' in order to get the full amplitude of finding the particle at point x.
I think that explanation given by plmokn2 is a good one.
Eugene.
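To make the integral concrete, here is a minimal numerical sketch (not from the thread; natural units ħ = m = 1 are an assumption) that propagates a Gaussian wavepacket with the free-particle kernel quoted above, summing contributions from every starting point x':

```python
import numpy as np

hbar = m = 1.0                       # natural units (assumption)
t = 1.0
x = np.linspace(-15, 15, 1501)       # same grid for x and x'
dx = x[1] - x[0]

psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)            # normalized Gaussian at t = 0

# Free-particle propagator U(x,t;x') = sqrt(m/(2*pi*i*hbar*t)) * exp(i*m*(x-x')^2/(2*hbar*t))
pref = np.sqrt(m / (2j * np.pi * hbar * t))
U = pref * np.exp(1j * m * (x[:, None] - x[None, :])**2 / (2 * hbar * t))

psi_t = U @ psi0 * dx                # psi(x,t) = integral of U(x,t;x') psi(x',0) dx'
print(np.sum(np.abs(psi_t)**2) * dx) # ~1.0: unitary time evolution preserves the norm
```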
8. Jul 15, 2007
### ehrenfest
So I see how you can think of it as U(t) operating on the ket |x'> to get U(t)|x'>, but how can you think of the relationship between the bra <x| and the operator? What does <x|U(t)|x'> "mean"?
9. Jul 15, 2007
### meopemuk
In the notation U(t;x, x') = < x| U(t) |x'> you can first apply the unitary operator U(t) to the ket vector |x'> on the right and obtain a new ket vector
(which is a result of a time translation applied to |x'>), which I denote by
|x', t> = U(t) |x'>
In the next step you can take an inner product of |x', t> with the bra vector <x|
U(t;x, x') = <x |x', t>
U(t;x, x') is a complex number which can be interpreted as a matrix element of the unitary operator U(t) in the basis provided by vectors |x>.
10. Jul 15, 2007
### jostpuur
ehrenfest, you should do the exercise where Schrödinger's equation is derived from a time evolution defined with a propagator. After it, it becomes easier to believe in propagators, and in how they work. If the sources you are using don't explain it, you can get hints from here.
http://ndl.iitkgp.ac.in/document/VS92cmMxZ2Mrd0diTnVxQUl3Q1dsSk5uMFh0K0ZhNVZxS2luRFdhbEQwcz0 | ### Particle Acceleration and Fractional Transport in Turbulent ReconnectionParticle Acceleration and Fractional Transport in Turbulent Reconnection
Access Restriction
Open
Author Isliker, Heinz ♦ Pisokas, Theophilos ♦ Vlahos, Loukas ♦ Anastasiadis, Anastasios Source United States Department of Energy Office of Scientific and Technical Information Content type Text Language English
Subject Keyword ASTROPHYSICS, COSMOLOGY AND ASTRONOMY ♦ ACCELERATION ♦ DIFFUSION ♦ DISTRIBUTION ♦ ELECTRIC FIELDS ♦ ELECTRONS ♦ ENERGY SPECTRA ♦ FOKKER-PLANCK EQUATION ♦ MAGNETIC RECONNECTION ♦ PARTICLES ♦ PLASMA ♦ RANDOMNESS ♦ REFLECTION ♦ SIMULATION ♦ SPACE ♦ TRANSPORT THEORY ♦ TURBULENCE Abstract We consider a large-scale environment of turbulent reconnection that is fragmented into a number of randomly distributed unstable current sheets (UCSs), and we statistically analyze the acceleration of particles within this environment. We address two important cases of acceleration mechanisms when particles interact with the UCS: (a) electric field acceleration and (b) acceleration by reflection at contracting islands. Electrons and ions are accelerated very efficiently, attaining an energy distribution of power-law shape with an index 1–2, depending on the acceleration mechanism. The transport coefficients in energy space are estimated from test-particle simulation data, and we show that the classical Fokker–Planck (FP) equation fails to reproduce the simulation results when the transport coefficients are inserted into it and it is solved numerically. The cause for this failure is that the particles perform Levy flights in energy space, while the distributions of the energy increments exhibit power-law tails. We then use the fractional transport equation (FTE) derived by Isliker et al., whose parameters and the order of the fractional derivatives are inferred from the simulation data, and solving the FTE numerically, we show that the FTE successfully reproduces the kinetic energy distribution of the test particles. We discuss in detail the analysis of the simulation data and the criteria that allow one to judge the appropriateness of either an FTE or a classical FP equation as a transport model. ISSN 0004637X Educational Use Research Learning Resource Type Article Publisher Date 2017-11-01 Publisher Place United States Journal Astrophysical Journal Volume Number 849 Issue Number 1 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8171371221542358, "perplexity": 3004.7909053324943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655888561.21/warc/CC-MAIN-20200705184325-20200705214325-00246.warc.gz"} |
https://brilliant.org/problems/divisible-by-this-year-part-4-insanity/ | # Divisible by this year??? (Part 4: INSANITY!!!!!!!!)
$$n!$$, or $$n$$-factorial, is the product of all integers from $$1$$ up to $$n$$ $$(n! = 1 \times 2 \times 3 \times \cdots \times n)$$. Let $$n!!$$ denote the product of all factorials from $$1!$$ up to $$n!$$ $$(n!! = 1! \times 2! \times 3! \times \cdots \times n!)$$, and let $$n!!!$$ denote the product of all double factorials from $$1!!$$ up to $$n!!$$ $$(n!!! = 1!! \times 2!! \times 3!! \times \cdots \times n!!)$$. Find the maximum integral value of $$k$$ such that $$2014^k$$ divides $$2014!!!$$.
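A computational sketch of one way to attack this (not part of the original problem; sympy's factorint is used only to factorize 2014): count the exponent of each prime of $$2014$$ in $$2014!!!$$ via Legendre's formula. Since $$n!!! = \prod_{m=1}^{n} m!!$$ and $$m!! = \prod_{j=1}^{m} j!$$, the factorial $$j!$$ appears $$n - j + 1$$ times, giving $$v_p(n!!!) = \sum_{j=1}^{n} (n-j+1)\, v_p(j!)$$.

```python
from sympy import factorint

def vp_factorial(n, p):
    """Exponent of prime p in n! (Legendre's formula)."""
    e, pk = 0, p
    while pk <= n:
        e += n // pk
        pk *= p
    return e

def vp_triple_factorial(n, p):
    # j! appears in m!! for every m >= j, i.e. (n - j + 1) times across n!!!
    return sum((n - j + 1) * vp_factorial(j, p) for j in range(1, n + 1))

n = 2014
k = min(vp_triple_factorial(n, p) // e for p, e in factorint(n).items())
print(k)   # the largest k with 2014^k dividing 2014!!!
```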
You may also try these problems:
Divisible by this year???
Divisible by this year??? (Part 2: Factorials)
This problem is part of the set "Symphony"
http://mathoverflow.net/questions/99663/homotopy-groups-of-on | # homotopy groups of O(n)
Can you give me a reference book where homotopy groups of O(n) are calculated?
There are fibre bundles
$$O(n-1) \to O(n) \to S^{n-1}$$
which allow you to inductively compute the homotopy groups of $O(n)$ in terms of the homotopy of $S^{k}$, for $k < n$. But the latter is one of the main open questions in homotopy theory.
Of course, real Bott periodicity tells you the homotopy groups of $O = \lim_{n\to \infty} O(n)$. By the previous fibre sequence, this is the same as $\pi_k(O(n))$ for $n>k+1$ -- the homotopy groups stabilise at that point -- since $\pi_k(S^{n-1}) = 0$ in that range. But the higher homotopy of $O(n)$ for a fixed $n$ is less tractable. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9459196925163269, "perplexity": 150.54774207022584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869404.51/warc/CC-MAIN-20150124161109-00140-ip-10-180-212-252.ec2.internal.warc.gz"} |
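For reference, the stable groups given by real Bott periodicity (equal to $\pi_k(O(n))$ whenever $n > k+1$, as noted above) repeat with period 8:

$$\begin{array}{c|cccccccc} k \bmod 8 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \pi_k(O) & \mathbb{Z}_2 & \mathbb{Z}_2 & 0 & \mathbb{Z} & 0 & 0 & 0 & \mathbb{Z} \end{array}$$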
http://qualitymathproducts.com/dividing-with-fraction-bars/ | # Dividing with Fraction Bars
The Teacher’s Guides cover this in more depth, this is a good introduction that shows how simple the normally complex “concept” of fraction division can be!
Before showing how to use Fraction Bars to divide 1/4 by 2/3, let's look at dividing 5/6 by 1/3. It looks tricky, but it can be so easy!

Using the idea of fitting one amount into another, we can see that 1/3 "fits into" 5/6 twice, with 1/6 remaining. By comparing the remaining 1/6 to the divisor 1/3, we see 1/6 is half of the divisor 1/3. So 5/6 divided by 1/3 is 2 and 1/2.

This is similar to the reasoning when dividing one whole number by another. For example, 17 divided by 5 is 3 with a remainder of 2, so the quotient is 3 2/5. In this example, we compare the remainder, 2, to the divisor, 5, and obtain the ratio 2/5.

Now let's look at 1/4 divided by 2/3. Since 2/3 is greater than 1/4, it "fits into" 1/4 zero times with a remainder of 1/4. So we compare the remainder 1/4 to the divisor 2/3. To make this comparison, it is convenient to replace the first two bars by bars with parts of the same size. Now if we compare 3 shaded parts to 8 shaded parts, the ratio is 3/8.

1/4 ÷ 2/3 = 3/12 ÷ 8/12 = 3/8

Starting with examples where one shaded amount fits into a second shaded amount a whole number of times, students will be able to see that division of fractions is comparing two amounts, just like division of whole numbers. In this way, division of fractions makes sense. An initial example like the one above, for 5/6 divided by 1/3, where students can see that 1/3 fits into 5/6 two and one-half times, is good. Later, bring in the "invert and multiply" rule to show that it gives the same answers that students can see make sense from a few simple examples with Fraction Bars. So viewing division as comparing two amounts, to see how many times greater one amount is than another, works whether the numbers being used are whole numbers or fractions. And once we obtain bars with parts of the same size (i.e., common denominators), finding the quotient of two fractions is just a matter of finding the quotient of whole numbers of parts of the same size.
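The common-denominator view translates directly into a few lines of code. Here is a small sketch (illustrative only, using Python's fractions module) that divides by rewriting both fractions over a common denominator and then comparing whole-number counts of same-size parts:

```python
from fractions import Fraction

def divide_via_common_denominator(a, b):
    # Rewrite a and b over the common denominator d = a.denominator * b.denominator;
    # then a / b is just the ratio of the two whole-number counts of same-size parts.
    na = a.numerator * b.denominator      # a = na / d
    nb = b.numerator * a.denominator      # b = nb / d
    return Fraction(na, nb)

print(divide_via_common_denominator(Fraction(1, 4), Fraction(2, 3)))  # 3/8, as above
print(divide_via_common_denominator(Fraction(5, 6), Fraction(1, 3)))  # 5/2 = 2 and 1/2
```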
https://www.nature.com/articles/s41467-021-23061-8?error=cookies_not_supported | Introduction
Light is a prominent tool to probe the properties of materials and their electronic structure, as evidenced by the widespread use of light-based spectroscopies across the physical sciences1,2. Among these tools, far-field optical techniques are particularly prevalent, but are constrained by the diffraction limit and the mismatch between optical and electronic length scales to probe the response of materials only at large length scales (or, equivalently, at small momenta). Plasmon polaritons—hybrid excitations of light and free carriers—provide a means to overcome these constraints through their ability to confine electromagnetic radiation to the nanoscale3.
Graphene, in particular, supports gate-tunable plasmons characterized by an unprecedentedly strong confinement of light4,5,6. When placed near a metal, graphene plasmons (GPs) are strongly screened and acquire a nearly linear (acoustic-like) dispersion7,8,9,10 (contrasting with the square-root-type dispersion of conventional GPs). Crucially, such acoustic graphene plasmons (AGPs) in graphene–dielectric–metal (GDM) structures have been shown to exhibit even higher field confinement than conventional GPs with the same frequency, effectively squeezing light into the few-nanometer regime8,9,10,11. Recently, using scanning near-field optical microscopy, these features were exploited to experimentally measure the conductivity of graphene, σ(q,ω), across its frequency (ω) and momentum (q) dependence simultaneously8. The observation of momentum dependence implies a nonlocal response (i.e., response contributions at position $\mathbf{r}$ from perturbations at $\mathbf{r}'$), whose origin is inherently quantum mechanical. Incidentally, traditional optical spectroscopic tools cannot resolve nonlocal response in extended systems due to the intrinsically small momenta k0 ≡ ω/c carried by far-field photons. Acoustic graphene plasmons, on the other hand, can carry large momenta—up to a significant fraction of the electronic Fermi momentum kF and with group velocities asymptotically approaching the electron's Fermi velocity vF—and so can facilitate explorations of nonlocal (i.e., q-dependent) response not only in graphene itself but also, as we detail in this Article, in nearby materials. So far, however, only aspects related to the quantum response of graphene have been addressed8, leaving any quantum nonlocal aspects of the adjacent metal's response unattended, despite their potentially substantial impact at nanometric graphene–metal separations12,13,14,15,16.
Here, we present a theoretical framework that simultaneously incorporates quantum nonlocal effects in the response of both the graphene and the metal substrate for AGPs in GDM heterostructures. Further, our approach establishes a concrete proposal for experimentally measuring the low-frequency nonlocal electrodynamic response of metals. Our model treats graphene at the level of the nonlocal random-phase approximation (RPA)4,9,17,18,19 and describes the quantum aspects of the metal's response—including nonlocality, electronic spill-out/spill-in, and surface-enabled Landau damping—using a set of microscopic surface-response functions known as the Feibelman d-parameters12,13,15,16,20,21. These parameters, $d_\perp$ and $d_\parallel$, measure the frequency-dependent centroids of the induced charge density and of the normal derivative of the tangential current density, respectively (Supplementary Note 1). Using a combination of numerics and perturbation theory, we show that the AGPs are spectrally shifted by the quantum surface-response of the metal: toward the red for $\mathrm{Re}\,d_\perp > 0$ (associated with electronic spill-out of the induced charge density) and toward the blue for $\mathrm{Re}\,d_\perp < 0$ (signaling an inward shift, or "spill-in"). Interestingly, these shifts are not accompanied by a commensurately large quantum broadening nor by a reduction of the AGP's quality factor, thereby providing the theoretical support explaining recent experimental observations11. Finally, we discuss how state-of-the-art measurements of AGPs could be leveraged to map out the low-frequency quantum nonlocal surface response of metals experimentally. Our findings have significant implications for our ability to optimize photonic designs that interface far- and mid-infrared optical excitations—such as AGPs—with metals all the way down to the nanoscale, with pursuant applications in, e.g., ultracompact nanophotonic devices, nanometrology, and in the surface sciences more broadly.
Results
Theory
We consider a GDM heterostructure (see Fig. 1) composed of a graphene sheet with a surface conductivity σ ≡ σ(q,ω) separated from a metal substrate by a thin dielectric slab of thickness t and relative permittivity ϵ2 ≡ ϵ2(ω); finally, the device is covered by a superstrate of relative permittivity ϵ1 ≡ ϵ1(ω). While the metal substrate may, in principle, be represented by a nonlocal and spatially non-uniform (near the interface) dielectric function, here we abstract its contributions into two parts: a bulk, local contribution via $\epsilon_{\mathrm{m}} \equiv \epsilon_{\mathrm{m}}(\omega) = \epsilon_{\infty}(\omega) - \omega_{\mathrm{p}}^2/(\omega^2 + \mathrm{i}\omega\gamma_{\mathrm{m}})$, and a surface, quantum contribution included through the d-parameters. These parameters are quantum-mechanical surface-response functions, defined by the first moments of the microscopic induced charge ($d_\perp$) and of the normal derivative of the tangential current ($d_\parallel$); see Fig. 1 (Supplementary Note 1 gives a concise introduction). They allow the leading-order corrections to classicality to be conveniently incorporated via a surface dipole density ($\propto d_\perp$) and a surface current density ($\propto d_\parallel$)9,15,16, and can be obtained either by first-principles computation20,21, semiclassical models, or experiments15.
The electromagnetic excitations of any system can be obtained by analyzing the poles of the (composite) system's scattering coefficients. For the AGPs of a GDM structure, the relevant coefficient is the p-polarized reflection (or transmission) coefficient, whose poles are given by $1 - r_p^{2|\mathrm{g}|1}\, r_p^{2|\mathrm{m}}\, \mathrm{e}^{\mathrm{i}2k_{z,2}t} = 0$ (ref. 22). Here, $r_p^{2|\mathrm{g}|1}$ and $r_p^{2|\mathrm{m}}$ denote the p-polarized reflection coefficients for the dielectric–graphene–dielectric and the dielectric–metal interface (detailed in Supplementary Note 2), respectively. Each coefficient yields a material-specific contribution to the overall quantum response: $r_p^{2|\mathrm{g}|1}$ incorporates graphene's via σ(q,ω) and $r_p^{2|\mathrm{m}}$ incorporates the metal's via the d-parameters (see Supplementary Note 2). The complex exponential [with $k_{z,2} \equiv (\epsilon_2 k_0^2 - q^2)^{1/2}$, where q denotes the in-plane wavevector] incorporates the effects of multiple reflections within the slab. Thus, using the above-noted reflection coefficients (defined explicitly in Supplementary Note 2), we obtain a quantum-corrected AGP dispersion equation:
$$\left[\frac{\epsilon_1}{\kappa_1}+\frac{\epsilon_2}{\kappa_2}+\frac{\mathrm{i}\sigma}{\omega\epsilon_0}\right]\left[\epsilon_{\mathrm{m}}\kappa_2+\epsilon_2\kappa_{\mathrm{m}}-\left(\epsilon_{\mathrm{m}}-\epsilon_2\right)\left(q^2 d_\perp-\kappa_2\kappa_{\mathrm{m}} d_\parallel\right)\right] \\ =\left[\frac{\epsilon_1}{\kappa_1}-\frac{\epsilon_2}{\kappa_2}+\frac{\mathrm{i}\sigma}{\omega\epsilon_0}\right]\left[\epsilon_{\mathrm{m}}\kappa_2-\epsilon_2\kappa_{\mathrm{m}}+\left(\epsilon_{\mathrm{m}}-\epsilon_2\right)\left(q^2 d_\perp+\kappa_2\kappa_{\mathrm{m}} d_\parallel\right)\right]\mathrm{e}^{-2\kappa_2 t},$$
(1)
for in-plane AGP wavevector $q$ and out-of-plane confinement factors $\kappa_j \equiv (q^2 - \epsilon_j k_0^2)^{1/2}$ for $j \in \{1, 2, \mathrm{m}\}$.
Since AGPs are exceptionally subwavelength (with confinement factors up to almost 300)8,10,11, the nonretarded limit (wherein κj → q) constitutes an excellent approximation. In this regime, and for encapsulated graphene, i.e., where ϵd ≡ ϵ1 = ϵ2, Eq. (1) simplifies to
$$\left[1+\frac{2\epsilon_{\mathrm{d}}}{q}\frac{\omega\epsilon_0}{\mathrm{i}\sigma}\right]\left[\frac{\epsilon_{\mathrm{m}}+\epsilon_{\mathrm{d}}}{\epsilon_{\mathrm{m}}-\epsilon_{\mathrm{d}}}-q\left(d_\perp-d_\parallel\right)\right]=\left[1+q\left(d_\perp+d_\parallel\right)\right]\mathrm{e}^{-2qt}.$$
(2)
For simplicity and concreteness, we will consider a simple jellium treatment of the metal such that $d_\parallel$ vanishes due to charge neutrality21,23, leaving only $d_\perp$ nonzero. Next, we exploit the fact that AGPs typically span frequencies across the terahertz (THz) and mid-infrared (mid-IR) spectral ranges, i.e., well below the plasma frequency $\omega_{\mathrm{p}}$ of the metal. In this low-frequency regime, $\omega \ll \omega_{\mathrm{p}}$, the frequency dependence of $d_\perp$ (and $d_\parallel$) has the universal, asymptotic form
$${d}_{\perp }(\omega )\simeq \zeta +\,\text{i}\frac{\omega }{{\omega }_{\text{p}}}\xi \,\qquad (\text{for}\,\,\omega \ll {\omega }_{\text{p}}),$$
(3)
as shown by Persson et al.24,25 by exploiting Kramers–Kronig relations. Here, ζ is the so-called static image-plane position, i.e., the centroid of the induced charge under a static, external field26, and ξ defines a phase-space coefficient for low-frequency electron–hole pair creation, whose rate is $\propto q\omega\xi$ (ref. 21): both are ground-state quantities. In the jellium approximation of the interacting electron liquid, the constants ζ ≡ ζ(rs) and ξ ≡ ξ(rs) depend solely on the carrier density $n_{\mathrm{e}}$, here parameterized by the Wigner–Seitz radius $r_s a_{\mathrm{B}} \equiv (3/(4\pi n_{\mathrm{e}}))^{1/3}$ (Bohr radius, $a_{\mathrm{B}}$). In the following, we exploit the simple asymptotic relation in Eq. (3) to calculate the dispersion of AGPs with metallic (in addition to graphene's) quantum response included.
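To illustrate how Eqs. (2) and (3) are used in practice, the following sketch (ours, not from the paper) solves Eq. (2) numerically for the AGP wavevector at a fixed frequency in the lossless limit, using a local Drude conductivity for graphene, a Drude metal, $d_\parallel = 0$, and an assumed static value ζ; all parameter values are illustrative, not fits to the experiments cited above.

```python
import numpy as np
from scipy.optimize import brentq

hbar, e, eps0, c = 1.055e-34, 1.602e-19, 8.854e-12, 3e8
EF   = 0.3 * e                  # graphene Fermi level (illustrative)
epsd = 4.0                      # spacer permittivity (illustrative)
t    = 2e-9                     # graphene-metal separation
wp   = 9.0 * e / hbar           # metal plasma frequency (Au-like, illustrative)
zeta = -4e-10                   # assumed static d-parameter asymptote, m
w    = 2 * np.pi * c / 11.28e-6 # operating frequency (lambda_0 = 11.28 um)

sigma_im = e**2 * EF / (np.pi * hbar**2 * w)   # lossless Drude graphene: sigma = i*sigma_im
epsm = 1 - wp**2 / w**2                         # lossless Drude metal

def dispersion(q, d_perp=0.0):
    # Real residue of Eq. (2) with d_par = 0; its roots give the AGP wavevector.
    lhs = (1 - 2 * epsd * w * eps0 / (q * sigma_im)) * \
          ((epsm + epsd) / (epsm - epsd) - q * d_perp)
    return lhs - (1 + q * d_perp) * np.exp(-2 * q * t)

def solve_q(d_perp):
    qs = np.linspace(1e6, 2e9, 20000)
    F = dispersion(qs, d_perp)
    i = np.where(np.sign(F[:-1]) != np.sign(F[1:]))[0][0]   # bracket first sign change
    return brentq(dispersion, qs[i], qs[i + 1], args=(d_perp,))

q0, q1 = solve_q(0.0), solve_q(zeta)
print(f"classical q0 = {q0:.3e} 1/m, quantum-corrected q = {q1:.3e} 1/m")
print(f"relative shift {abs(q1 - q0)/q0:.1%} (cf. perturbative |zeta|/(2t) = {abs(zeta)/(2*t):.1%})")
```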
Quantum corrections in AGPs due to metallic quantum surface-response
The spectrum of AGPs calculated classically and with quantum corrections is shown in Fig. 2. Three models are considered: one, a completely classical, local-response approximation treatment of both the graphene and the metal; and two others, in which graphene's response is treated by the nonlocal RPA4,9,17,18,19 while the metal's response is treated either classically or with quantum surface-response included (via the $d_\perp$-parameter). As noted previously, we adopt a jellium approximation for the $d_\perp$-parameter. Figure 2a shows that—for a fixed wavevector—the AGP's resonance blueshifts upon inclusion of graphene's quantum response, followed by a redshift due to the quantum surface-response of the metal (since $\mathrm{Re}\,d_\perp > 0$ for jellium metals; electronic spill-out)13,15,16,21,27,28. This redshifting due to the metal's quantum surface-response is opposite to that predicted by the semiclassical hydrodynamic model (HDM), where the result is always a blueshift14 (corresponding to $\mathrm{Re}\,d_\perp^{\mathrm{HDM}} < 0$; electronic "spill-in") due to the neglect of spill-out effects29. The imaginary part of the AGP's wavevector (which characterizes the mode's propagation length) is shown in Fig. 2b: the net effect of the inclusion of $d_\perp$ is a small, albeit consistent, increase of this imaginary component. Notwithstanding this, the modification of $\mathrm{Im}\,q$ is not independent of the shift in $\mathrm{Re}\,q$; as a result, an increase in $\mathrm{Im}\,q$ does not necessarily imply the presence of a significant quantum decay channel [e.g., an increase of $\mathrm{Im}\,q$ can simply result from increased classical loss (i.e., arising from local response alone) at the newly shifted $\mathrm{Re}\,q$ position]. Because of this, we inspect the quality factor $Q \equiv \mathrm{Re}\,q/\mathrm{Im}\,q$ (or "inverse damping ratio"30,31) instead32 (Fig. 2c), which provides a complementary perspective that emphasizes the effective (or normalized) propagation length rather than the absolute length. The incorporation of quantum mechanical effects, first in graphene alone, and then in both graphene and metal, reduces the AGP's quality factor. Still, the impact of metal-related quantum losses in the latter is negligible, as evidenced by the nearly overlapping black and red curves in Fig. 2c.
To better understand these observations, we treat the AGP's q-shift due to the metal's quantum surface-response as a perturbation: writing $q = q_0 + q_1$, we find that the quantum correction from the metal is $q_1 \simeq q_0 d_\perp/(2t)$, for a jellium adjacent to vacuum in the $\omega^2/\omega_{\mathrm{p}}^2 \ll q_0 t \ll 1$ limit (Supplementary Note 3). This simple result, together with Eq. (3), provides a near-quantitative account of the AGP dispersion shifts due to metallic quantum surface-response: for $\omega \ll \omega_{\mathrm{p}}$, (i) $\mathrm{Re}\,d_\perp$ tends to a finite value, ζ, which increases (decreases) $\mathrm{Re}\,q$ for ζ > 0 (ζ < 0); and (ii) $\mathrm{Im}\,d_\perp$ is proportional to ω and therefore asymptotically vanishing as $\omega/\omega_{\mathrm{p}} \to 0$, and so only negligibly increases $\mathrm{Im}\,q$. Moreover, the preceding perturbative analysis warrants $\mathrm{Re}\,q_1/\mathrm{Re}\,q_0 \approx \mathrm{Im}\,q_1/\mathrm{Im}\,q_0$ (Supplementary Note 3), which elucidates the reason why the AGP's quality factor remains essentially unaffected by the inclusion of metallic quantum surface-response. Notably, these results explain recent experimental observations that found appreciable spectral shifts but negligible additional broadening due to metallic quantum response10,11.
Next, by considering the separation between graphene and the metallic interface as a renormalizable parameter, we find a complementary and instructive perspective on the impact of metallic quantum surface-response. Specifically, within the spectral range of interest for AGPs (i.e., $\omega \ll \omega_{\mathrm{p}}$), we find that the "bare" graphene–metal separation t is effectively renormalized due to the metal's quantum surface-response from t to $\tilde{t} \equiv t - s$, where $s \simeq \mathrm{Re}\,d_\perp \simeq \zeta$ (see Supplementary Note 4), corresponding to a physical picture where the metal's interface lies at the centroid of its induced density (i.e., at $\mathrm{Re}\,d_\perp$) rather than at its "classical" jellium edge. With this approach, the form of the dispersion equation is unchanged but references the renormalized separation $\tilde{t}$ instead of its bare counterpart t, i.e.:
$$1+\frac{2{\epsilon }_{\text{d}}}{q}\frac{\omega {\epsilon }_{0}}{\text{i}\sigma }=\frac{{\epsilon }_{\text{m}}-{\epsilon }_{\text{d}}}{{\epsilon }_{\text{m}}+{\epsilon }_{\text{d}}}\ {\text{e}}^{-2q\tilde{t}},$$
(4)
This perspective, for instance, has substantial implications for the analysis and understanding of plasmon rulers33,34,35 at nanometric scales.
Furthermore, our findings additionally suggest an interesting experimental opportunity: as all other experimental parameters can be well-characterized by independent means (including the nonlocal conductivity of graphene), high-precision measurements of the AGP's dispersion can enable the characterization of the low-frequency metallic quantum response—a regime that has otherwise been inaccessible in conventional metal-only plasmonics. The underlying idea is illustrated in Fig. 3; depending on the sign of the static asymptote $\zeta \equiv d_\perp(0)$, the AGP's dispersion shifts toward larger q (smaller ω; redshift) for ζ > 0 and toward smaller q (larger ω; blueshift) for ζ < 0. As noted above, the q-shift is $\sim q_0\zeta/(2t)$. Crucially, despite the ångström scale of ζ, this shift can be sizable: the inverse scaling with the spacer thickness t effectively amplifies the attainable shifts in q, reaching up to several μm−1 for few-nanometer t. We stress that these regimes are well within current state-of-the-art experimental capabilities8,10,11, suggesting a new path toward the systematic exploration of the static quantum response of metals.
Probing the quantum surface-response of metals with AGPs
The key parameter that regulates the impact of quantum surface corrections stemming from the metal is the graphene–metal separation, t (analogously to the observations of nonclassical effects in conventional plasmons at narrow metal gaps13,36,37); see Fig. 4. For the experimentally representative parameters indicated in Fig. 4, these come into effect for $t \lesssim 5$ nm, growing rapidly upon decreasing the graphene–metal separation further. Chiefly, ignoring the nonlocal response of the metal leads to a consistent overestimation (underestimation) of the AGP's wavevector (group velocity) for $d_\perp < 0$, and vice versa for $d_\perp > 0$ (Fig. 4a); this behavior is consistent with the effective renormalization of the graphene–metal separation mentioned earlier (Fig. 4b). Finally, we analyze the interplay of both t and $E_{\mathrm{F}}$ and their joint influence on the magnitude of the quantum corrections from the metal (we take $d_\perp = -4$ Å, which is reasonable for the Au substrate used in recent AGP experiments7,8,11); in Fig. 4c we show the relative wavevector quantum shift (excited at $\lambda_0 = 11.28$ μm32). In the few-nanometer regime, the quantum corrections to the AGP wavevector approach 5%, increasing further as t decreases—for instance, in the extreme, one-atom-thick limit ($t \approx 0.7$ nm11, which also approximately coincides with the edge of the validity of the d-parameter framework, i.e., $t \gtrsim 1$ nm15), the AGP's wavevector can change by as much as 10% for moderate graphene doping. The pronounced Fermi-level dependence exhibited in Fig. 4c also suggests a complementary approach for measuring the metal's quantum surface-response even if an experimental parameter is unknown (although, as previously noted, all relevant experimental parameters can in fact be characterized using currently available techniques8,10,11,15): such an unknown variable can be fitted at low $E_{\mathrm{F}}$ using the "classical" theory (i.e., with $d_\perp = d_\parallel = 0$), since the impact of metallic quantum response is negligible in that regime. A parameter-free assessment of the metal's quantum surface-response can then be carried out subsequently by increasing $E_{\mathrm{F}}$ (and with it, the metal-induced quantum shift). We emphasize that this can be accomplished in the same device by doping graphene using standard electrostatic gating8,10,11.
Discussion
In this Article, we have presented a theoretical account that establishes and quantifies the influence of the metal’s quantum response for AGPs in hybrid GDM structures. We have demonstrated that the nanoscale confinement of electromagnetic fields inherent to AGPs can be harnessed to determine the quantum surface-response of metals in the THz and mid-IR spectral ranges (which is typically inaccessible with traditional metal-based plasmonics). Additionally, our findings elucidate and contextualize recent experiments10,11 that have reported the observation of nonclassical spectral shifting of AGPs due to metallic quantum response but without a clear concomitant increase of damping, even for atomically thin graphene–metal separations. Our results also demonstrate that the metal’s quantum surface-response needs to be rigorously accounted for—e.g., using the framework developed here—when searching for signatures of many-body effects in the graphene electron liquid imprinted in the spectrum of AGPs in GDM systems8, since the metal’s quantum-surface response can lead to qualitatively similar dispersion shifts, as shown here. In passing, we emphasize that our framework can be readily generalized to more complex graphene–metal hybrid structures either by semi-analytical approaches (e.g., the Fourier modal method38 for periodically nanopatterned systems) or by direct implementation in commercially available numerical solvers (see refs. 15,39), simply by adopting d-parameter-corrected boundary conditions15,16.
Further, our formalism provides a transparent theoretical foundation for guiding experimental measurements of the quantum surface-response of metals using AGPs. The quantitative knowledge of the metal's low-frequency, static quantum response is of practical utility in a plethora of scenarios, enabling, for instance, the incorporation of leading-order quantum corrections to the classical electrostatic image theory of particle–surface interaction20 as well as to the van der Waals interaction21,25,40 affecting atoms or molecules near metal surfaces. Another prospect suggested by our findings is the experimental determination of $\zeta \equiv d_\perp(0)$ through measurements of the AGP's spectrum. This highlights a new metric for comparing the fidelity of first-principles calculations of different metals (inasmuch as ab initio methods can yield disparate results depending on the chosen scheme or functional)41,42 with explicit measurements.
Our results also highlight that AGPs can be extremely sensitive probes for nanometrology as plasmon rulers, while simultaneously underscoring the importance of incorporating quantum response in the characterization of such rulers at (sub)nanometric scales. Finally, the theory introduced here further suggests additional directions for exploiting AGP’s high-sensitivity, e.g., to explore the physics governing the complex electron dynamics at the surfaces of superconductors43 and other strongly correlated systems. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8989700675010681, "perplexity": 2296.2155799589577}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00305.warc.gz"} |
https://heattransfer.asmedigitalcollection.asme.org/ICONE/proceedings-abstract/ICONE28/85253/1122407 | ## Abstract
In this study, the system thermal-hydraulic code LOCUST is applied to simulate reflood heat transfer experiments conducted in the RBHT facility, with the effect of spacer grids considered. The LOCUST results are compared with experimental data and with calculations from RELAP5 4.0. The results show that both LOCUST and RELAP5 4.0 are capable of predicting reflood behavior at a satisfactory level, and the calculated cladding temperatures and heat transfer coefficients are generally in good agreement with experimental data. When spacer grids are introduced, the peak cladding temperature (PCT) calculated by RELAP5 4.0 and LOCUST is 1178 K and 1201 K, respectively, whereas the calculations without spacer grids give 1184 K and 1206 K, respectively. The comparison reveals that spacer grids lower the PCT by around 5 K.
https://tomatoheart.com/nanowhiz/viral-math-problem-in-japan/ | Sometimes things are not at easy at it might seem. Almost all of us ( the so called literate people ) know the basics of Mathematics very well and the knowledge of addition, subtraction, multiplication and division is kind of four fundamental pillars of Mathematics. but what if I say, you may not able to solve this very simple problem of math which only involves three of them. The question is:
According to the YouTube channel "MindYourDecisions", this problem went viral in Japan after a study found that only 60 percent of people in their twenties could get the correct answer, down from a success rate of 90 percent in a similar study conducted in the 1980s. Let me explain a common mistake and how to get the correct answer by using the order of operations.
So what makes it difficult to get the correct answer? Most probably, the main culprit is the increased use of calculators in our lives. They are on every phone, which makes it easy for students to rely on them to find the answer to a simple math problem like the one above. However, in this case, try entering this simple question into a calculator and, I bet, you will get the wrong answer 🙂
Is this really simple? In fact, yes: this problem is really simple if you know the order of operations, a math skill taught in the 3rd grade. If you don't remember learning the order of operations in school, let's revise it now.
Order of Operations (PEMDAS):

1. Parentheses ( )

2. Exponents x^2

3. Multiplication / Division (left to right)

4. Addition / Subtraction (left to right)
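To see the order of operations in action, here is a tiny sketch; the viral problem itself is widely reported to be 9 − 3 ÷ 1/3 + 1 (an assumption here, since the original image is not reproduced above):

```python
from fractions import Fraction

third = Fraction(1, 3)

correct = 9 - 3 / third + 1      # division first: 3 ÷ (1/3) = 9, so 9 - 9 + 1
mistaken = 9 - 3 / 1 / 3 + 1     # misreading 3 ÷ 1/3 as (3 ÷ 1) ÷ 3 = 1

print(correct)    # 1   (the answer PEMDAS gives)
print(mistaken)   # 9.0 (a common wrong answer)
```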
https://www.physicsforums.com/threads/two-capacitive-spheres-separated-by-dielectric-ratio-of-radii-for-lowest-electric-fi.621540/ | Two capacitive spheres separated by dielectric, ratio of radii for lowest electric fi
1. Jul 17, 2012
Xinthose
1. The problem statement, all variables and given/known data
It's desired to build a capacitor which has two concentric spheres separated by a dielectric of high permittivity, low loss, and high dielectric strength. Calculate the ratio of sphere b's radius to sphere a's radius which produces the lowest electric field between the spheres.
Not sure how to start this one. Thank you for any help.
2. Jul 18, 2012
CWatters
3. Jul 18, 2012
Xinthose
I do know, from Wikipedia, that for concentric spheres, C = 4πε / ((1/a) − (1/b)).
4. Jul 18, 2012
Nicholasc1988
So what I've done on this problem so far is

C = [4π ε₀] (ab)/(b − a)

and

V = (Q/[4π ε₀]) (b − a)/(ab),

and plugging these into Q = CV, I just get Q = Q.

Conceptually, from E = kq/r², as the radius goes up the E field goes down, so the ratio of b to a would approach infinity, or a should be much less than b?
5. Jul 18, 2012
CWatters
The electric field E = V/d where d is the distance between the plates.
6. Jul 18, 2012
Xinthose
Alright, but you eventually get E = Q / (4π ε a b); so how would you get a ratio of b to a from that?
7. Jul 19, 2012
Xinthose
You failed me Physics Forums; here is the scanned answer from my professor's solution set given to us after the test; I hope that this will help someone else out there; Make of it what you will; his handwriting is kind of hard to read
http://i633.photobucket.com/albums/uu57/Xinthose/scan0002.jpg
http://i633.photobucket.com/albums/uu57/Xinthose/scan0003.jpg
or if you prefer to see it on the forum
page 1
page 2
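Since the professor's solution above is only linked as images, here is a hedged sketch of one standard approach (an assumption about the intended reading: fix the outer radius b and the applied voltage V, then minimize the peak field, which occurs at the inner sphere r = a):

```python
import sympy as sp

a, b, V = sp.symbols('a b V', positive=True)

# E(r) = Q / (4*pi*eps*r^2) peaks at r = a; with Q = C*V and the concentric-sphere
# capacitance C = 4*pi*eps*a*b/(b - a), the peak field is E(a) = V*b / (a*(b - a)).
E_peak = V * b / (a * (b - a))

print(sp.solve(sp.Eq(sp.diff(E_peak, a), 0), a))   # [b/2]  ->  ratio b/a = 2
```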
http://mathhelpforum.com/advanced-algebra/121213-proof-involving-inequalities.html

1. ## Proof involving inequalities
Let $a,b \in \mathbb{R}$. If $a \le b_{1}$ for every $b_{1} > b$, then $a \le b$. I'm fairly certain I have to apply some of the ordered field properties but I am not sure how. Thanks!
2. Suppose $a>b$; let $a-b=\epsilon$. Let $b_1=b+\epsilon/2$. Then $b_1 >b$ so by assumption $a \leq b_1$. So $a \leq b+\epsilon/2 =b+\frac{a-b}{2} = \frac{a+b}{2} < a$, which is absurd.
3. So the contradiction you arrived at is $a > a$, if I'm not mistaken. Thanks!
4. That is correct!
There may be a more abstract approach but this is pretty straightforward.
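Incidentally, the argument above is short enough to machine-check. Here is a sketch in Lean 4 with mathlib (untested; the theorem name is my own, and the exact tactic behaviour is an assumption about what mathlib provides):

```lean
import Mathlib

/-- If `a ≤ b₁` for every `b₁ > b`, then `a ≤ b` (the statement of this thread). -/
theorem le_of_forall_gt_imp_le (a b : ℝ) (h : ∀ b₁ : ℝ, b < b₁ → a ≤ b₁) : a ≤ b := by
  by_contra hab
  push_neg at hab                    -- hab : b < a
  -- take b₁ = (a + b) / 2, the midpoint used in post #2
  have hmid : b < (a + b) / 2 := by linarith
  have h1 : a ≤ (a + b) / 2 := h _ hmid
  linarith                           -- a ≤ (a + b)/2 < a, which is absurd
```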
5. Originally Posted by Pinkk
Let $a,b \in \mathbb{R}$. If $a \le b_{1}$ for every $b_{1} > b$, then $a \le b$. I'm fairly certain I have to apply some of the ordered field properties but I am not sure how. Thanks!
http://www.mathhelpforum.com/math-he...her-proof.html. Look familiar? Clearly, if it is true for every $b_1>b$ then it is true for the above post.
https://web2.0calc.com/questions/help_80532
# Help!
The fourth degree polynomial equation $$x^4 - 7x^3 + 4x^2 + 7x - 4 = 0$$ has four real roots, a, b, c, and d. What is the value of the sum $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}$$? Express your answer as a common fraction.
Jun 4, 2019
#1
$$a b c d = c_0 = -4\\ bcd+acd+abd+abc = -(c_1) = -7\\ \dfrac 1 a + \dfrac 1 b+\dfrac 1 c+\dfrac 1 d = \\\dfrac{bcd+acd+abd+abc}{a b c d } = \dfrac{-7}{-4}=\dfrac 7 4$$
Jun 4, 2019
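A gloss on the notation in the answer above (my addition): $$c_0$$ and $$c_1$$ denote the constant and linear coefficients of the monic quartic, and the two identities are Vieta's formulas:

$$x^4 + c_3x^3 + c_2x^2 + c_1x + c_0 = (x-a)(x-b)(x-c)(x-d) \;\Longrightarrow\; abcd = c_0, \qquad abc+abd+acd+bcd = -c_1.$$

Here $$c_1 = 7$$ and $$c_0 = -4$$, which gives the sum of reciprocals $$\frac{-7}{-4} = \frac{7}{4}$$.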
https://statisticshelp.org/confidence_intervals_problems_1.php
Use the margin of error E = $139, confidence level of 99%, and σ = $513 to find the minimum sample size needed to estimate an unknown population mean, μ.

Solution: Again, we know that

n = (z_{α/2} · σ / E)^2, with z_{α/2} = z_{0.005} = 2.576 at the 99% confidence level.

This means that

n = (2.576 · 513 / 139)^2 ≈ 90.4, which we round up to the minimum sample size n = 91.
http://www.physicsforums.com/showthread.php?t=138470

by superpig10000
A particle of mass m is moving under the combined action of the forces -kx, a damping force -2mb (dx/dt), and a driving force Ft. Express the solutions in terms of the initial position x(t=0) and the initial velocity of the particle.

For the complementary solution, use x(t) = e^(-bt) A sin(w1 t + theta), and for the particular solution, use Ct + D, where

w1^2 = w0^2 - b^2
w0^2 = k/m

Here's what I have so far:

m (d^2x/dt^2) + 2mb (dx/dt) + kx = Ft
(d^2x/dt^2) + 2b (dx/dt) + w0^2 x = At   (A = F/m)

The complementary solution is x = e^(-bt) (A1 e^(w1 t) + A2 e^(-w1 t)). I don't know how to convert this to the form above, and I am totally clueless as to how to find the particular solution. Please help!
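No reply was recorded in this thread, so here is a sketch of the two steps being asked about (standard manipulations, writing ω1 and ω0 for the w1 and w0 above; the choice of constants is mine). In the underdamped case the characteristic roots are -b ± iω1, so the two exponentials combine into a real sinusoid:

$$x_c(t) = e^{-bt}\left(A_1e^{i\omega_1 t} + A_2e^{-i\omega_1 t}\right) = e^{-bt}A\sin(\omega_1 t + \theta), \qquad A_1 = \frac{A}{2i}e^{i\theta},\quad A_2 = -\frac{A}{2i}e^{-i\theta}.$$

For the particular solution, substitute x_p = Ct + D into x'' + 2bx' + ω0² x = At and match the coefficients of t and of the constant term:

$$2bC + \omega_0^2(Ct + D) = At \quad\Longrightarrow\quad C = \frac{A}{\omega_0^2}, \qquad D = -\frac{2bA}{\omega_0^4}.$$

The constants A and θ are then fixed by applying x(0) and x'(0) to x = x_c + x_p.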
https://www.physicsforums.com/threads/quick-questions-on-work-and-energy.159991/

# Homework Help: Quick questions on work and energy
1. Mar 9, 2007
### future_vet
Can work be done on a system if there is no motion?
I would say no, no motion = no energy...
Is it possible for a system to have negative potential energy?
I would say yes, since the choice of the zero of potential energy is arbitrary.
A 500-kg elevator is pulled upward with a constant force of 550N for a distance of 50 m. What is the work done by the 550N force?
From what I understand, we multiply 550 N by 50 m and get 27,500 J, which is about 2.75 × 10^4 J.
2. Mar 9, 2007
### Dick
I think you are pretty much correct.
3. Mar 9, 2007
### Staff: Mentor
I would agree with you on all three. But the first question is thought provoking. I wonder if we can think of a case where there is work done, but no motion. Certainly that is true in cases where there is no *net* motion, like spinning a wheel with friction bearings. But no motion at all....hmmm.
Chemical energy conversion...is that considered work? I don't think so, but maybe someone else can think of a creative case.
4. Mar 9, 2007
Thank you!
5. Mar 9, 2007
### AlephZero
You can't do mechanical work without motion, but there are other ways to increase the energy of a system - for example adding heat energy, or storing electrical charge in a capacitor. "Increasing the energy" is the same as "doing work".
Both correct.
http://terrytao.wordpress.com/

This is the third thread for the Polymath8b project to obtain new bounds for the quantity
$\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$
either for small values of ${m}$ (in particular ${m=1,2}$) or asymptotically as ${m \rightarrow \infty}$. The previous thread may be found here. The currently best known bounds on ${H_m}$ are:
• (Maynard) Assuming the Elliott-Halberstam conjecture, ${H_1 \leq 12}$.
• (Polymath8b, tentative) ${H_1 \leq 330}$. Assuming Elliott-Halberstam, ${H_2 \leq 330}$.
• (Polymath8b, tentative) ${H_2 \leq 484{,}126}$. Assuming Elliott-Halberstam, ${H_4 \leq 493{,}408}$.
• (Polymath8b) ${H_m \leq \exp( 3.817 m )}$ for sufficiently large ${m}$. Assuming Elliott-Halberstam, ${H_m \ll e^{2m} \log m}$ for sufficiently large ${m}$.
Much of the current focus of the Polymath8b project is on the quantity
$\displaystyle M_k = M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}$
where ${F}$ ranges over square-integrable functions on the simplex
$\displaystyle {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}$
with ${I_k, J_k^{(m)}}$ being the quadratic forms
$\displaystyle I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k$
and
$\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2\ dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.$
It was shown by Maynard that one has ${H_m \leq H(k)}$ whenever ${M_k > 4m}$, where ${H(k)}$ is the narrowest diameter of an admissible ${k}$-tuple. As discussed in the previous post, we have slight improvements to this implication, but they are currently difficult to implement, due to the need to perform high-dimensional integration. The quantity ${M_k}$ does seem however to be close to the theoretical limit of what the Selberg sieve method can achieve for implications of this type (at the Bombieri-Vinogradov level of distribution, at least); it seems of interest to explore more general sieves, although we have not yet made much progress in this direction.
The best asymptotic bounds for ${M_k}$ we have are
$\displaystyle \log k - \log\log\log k + O(1) \leq M_k \leq \frac{k}{k-1} \log k \ \ \ \ \ (1)$
which we prove below the fold. The upper bound holds for all ${k > 1}$; the lower bound is only valid for sufficiently large ${k}$, and gives the upper bound ${H_m \ll e^{2m} \log m}$ on Elliott-Halberstam.
For small ${k}$, the upper bound is quite competitive, for instance it provides the upper bound in the best values
$\displaystyle 1.845 \leq M_4 \leq 1.848$
and
$\displaystyle 2.001162 \leq M_5 \leq 2.011797$
we have for ${M_4}$ and ${M_5}$. The situation is a little less clear for medium values of ${k}$, for instance we have
$\displaystyle 3.95608 \leq M_{59} \leq 4.148$
and so it is not yet clear whether ${M_{59} > 4}$ (which would imply ${H_1 \leq 300}$). See this wiki page for some further upper and lower bounds on ${M_k}$.
The best lower bounds are not obtained through the asymptotic analysis, but rather through quadratic programming (extending the original method of Maynard). This has given significant numerical improvements to our best bounds (in particular lowering the ${H_1}$ bound from ${600}$ to ${330}$), but we have not yet been able to combine this method with the other potential improvements (enlarging the simplex, using MPZ distributional estimates, and exploiting upper bounds on two-point correlations) due to the computational difficulty involved.
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons institute for the theory of computing at the workshop “Neo-Classical methods in discrete analysis“. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry“‘. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)
Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.
The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:
| (Discrete) | (Continuous) | (Limit method) |
| --- | --- | --- |
| Ramsey theory | Topological dynamics | Compactness |
| Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle |
| Graph/hypergraph regularity | Measure theory | Graph limits |
| Polynomial regularity | Linear algebra | Ultralimits |
| Structural decompositions | Hilbert space geometry | Ultralimits |
| Fourier analysis | Spectral theory | Direct and inverse limits |
| Quantitative algebraic geometry | Algebraic geometry | Schemes |
| Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits |
| Approximate group theory | Topological group theory | Model theory |
As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:
• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology.
• Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms.
• Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.)
The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct.
With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łos’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$, will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”). To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.
Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.
This is the second thread for the Polymath8b project to obtain new bounds for the quantity
$\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$
either for small values of ${m}$ (in particular ${m=1,2}$) or asymptotically as ${m \rightarrow \infty}$. The previous thread may be found here. The currently best known bounds on ${H_m}$ are:
• (Maynard) ${H_1 \leq 600}$.
• (Polymath8b, tentative) ${H_2 \leq 484,276}$.
• (Polymath8b, tentative) ${H_m \leq \exp( 3.817 m )}$ for sufficiently large ${m}$.
• (Maynard) Assuming the Elliott-Halberstam conjecture, ${H_1 \leq 12}$, ${H_2 \leq 600}$, and ${H_m \ll m^3 e^{2m}}$.
Following the strategy of Maynard, the bounds on ${H_m}$ proceed by combining four ingredients:
1. Distribution estimates ${EH[\theta]}$ or ${MPZ[\varpi,\delta]}$ for the primes (or related objects);
2. Bounds for the minimal diameter ${H(k)}$ of an admissible ${k}$-tuple;
3. Lower bounds for the optimal value ${M_k}$ to a certain variational problem;
4. Sieve-theoretic arguments to convert the previous three ingredients into a bound on ${H_m}$.
Accordingly, the most natural routes to improve the bounds on ${H_m}$ are to improve one or more of the above four ingredients.
Ingredient 1 was studied intensively in Polymath8a. The following results are known or conjectured (see the Polymath8a paper for notation and proofs):
• (Bombieri-Vinogradov) ${EH[\theta]}$ is true for all ${0 < \theta < 1/2}$.
• (Polymath8a) ${MPZ[\varpi,\delta]}$ is true for ${\frac{600}{7} \varpi + \frac{180}{7}\delta < 1}$.
• (Polymath8a, tentative) ${MPZ[\varpi,\delta]}$ is true for ${\frac{1080}{13} \varpi + \frac{330}{13} \delta < 1}$.
• (Elliott-Halberstam conjecture) ${EH[\theta]}$ is true for all ${0 < \theta < 1}$.
Ingredient 2 was also studied intensively in Polymath8a, and is more or less a solved problem for the values of ${k}$ of interest (with exact values of ${H(k)}$ for ${k \leq 342}$, and quite good upper bounds for ${H(k)}$ for ${k < 5000}$, available at this page). So the main focus currently is on improving Ingredients 3 and 4.
For Ingredient 3, the basic variational problem is to understand the quantity
$\displaystyle M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}$
for ${F: {\cal R}_k \rightarrow {\bf R}}$ bounded measurable functions, not identically zero, on the simplex
$\displaystyle {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}$
with ${I_k, J_k^{(m)}}$ being the quadratic forms
$\displaystyle I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k$
and
$\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2 dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.$
Equivalently, one has
$\displaystyle M_k({\cal R}_k) := \sup_F \frac{\int_{{\cal R}_k} F {\cal L}_k F}{\int_{{\cal R}_k} F^2}$
where ${{\cal L}_k: L^2({\cal R}_k) \rightarrow L^2({\cal R}_k)}$ is the positive semi-definite bounded self-adjoint operator
$\displaystyle {\cal L}_k F(t_1,\ldots,t_k) = \sum_{m=1}^k \int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_{m-1},s,t_{m+1},\ldots,t_k)\ ds,$
so ${M_k}$ is the operator norm of ${{\cal L}_k}$. Another interpretation of ${M_k({\cal R}_k)}$ is that the probability that a rook moving randomly in the unit cube ${[0,1]^k}$ stays in the simplex ${{\cal R}_k}$ for ${n}$ moves is asymptotically ${(M_k({\cal R}_k)/k + o(1))^n}$.
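This random-walk interpretation is easy to test numerically. Below is a small Monte Carlo sketch (my own illustration, not part of the Polymath8b codebase): each "rook move" resamples one uniformly chosen coordinate, and the ratio of consecutive survival counts should approach ${M_k({\cal R}_k)/k}$; for ${k=4}$, the known bounds ${1.845 \leq M_4 \leq 1.848}$ predict a limiting ratio of about ${0.46}$.

```python
import random

def rook_survival(k, n_steps, n_walks=200_000, seed=1):
    """Monte Carlo: count how many random 'rook walks' in [0,1]^k remain in
    the simplex t_1 + ... + t_k <= 1 after each move, where a move resamples
    one uniformly chosen coordinate uniformly in [0,1]."""
    rng = random.Random(seed)
    alive = [0] * (n_steps + 1)
    for _ in range(n_walks):
        # start from a uniform point of the simplex (rejection sampling)
        while True:
            t = [rng.random() for _ in range(k)]
            if sum(t) <= 1:
                break
        alive[0] += 1
        for step in range(1, n_steps + 1):
            t[rng.randrange(k)] = rng.random()
            if sum(t) > 1:
                break
            alive[step] += 1
    return alive

counts = rook_survival(k=4, n_steps=8)
for n in range(8):
    print(n, counts[n + 1] / counts[n])  # ratios should drift towards M_4/4 ~ 0.46
```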
We now have a fairly good asymptotic understanding of ${M_k({\cal R}_k)}$, with the bounds
$\displaystyle \log k - 2 \log\log k -2 \leq M_k({\cal R}_k) \leq \log k + \log\log k + 2$
holding for sufficiently large ${k}$. There is however still room to tighten the bounds on ${M_k({\cal R}_k)}$ for small ${k}$; I’ll summarise some of the ideas discussed so far below the fold.
For Ingredient 4, the basic tool is this:
Theorem 1 (Maynard) If ${EH[\theta]}$ is true and ${M_k({\cal R}_k) > \frac{2m}{\theta}}$, then ${H_m \leq H(k)}$.
Thus, for instance, it is known that ${M_{105} > 4}$ and ${H(105)=600}$, and this together with the Bombieri-Vinogradov inequality gives ${H_1\leq 600}$. This result is proven in Maynard’s paper and an alternate proof is also given in the previous blog post.
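To spell out the numerology in this example (an elaboration of the sentence above, not in the original): Bombieri-Vinogradov supplies ${EH[\theta]}$ for every ${\theta < 1/2}$, and since ${M_{105} > 4}$ is a strict inequality, one may choose ${\theta}$ slightly below ${1/2}$ with

$\displaystyle M_{105} > \frac{2 \cdot 1}{\theta},$

so that Theorem 1 with ${m=1}$ yields ${H_1 \leq H(105) = 600}$.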
We have a number of ways to relax the hypotheses of this result, which we also summarise below the fold.
For each natural number ${m}$, let ${H_m}$ denote the quantity
$\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),$
where ${p_n}$ denotes the ${n}$th prime. In other words, ${H_m}$ is the least quantity such that there are infinitely many intervals of length ${H_m}$ that contain ${m+1}$ or more primes. Thus, for instance, the twin prime conjecture is equivalent to the assertion that ${H_1 = 2}$, and the prime tuples conjecture would imply that ${H_m}$ is equal to the diameter of the narrowest admissible tuple of cardinality ${m+1}$ (thus we conjecturally have ${H_1 = 2}$, ${H_2 = 6}$, ${H_3 = 8}$, ${H_4 = 12}$, ${H_5 = 16}$, and so forth; see this web page for further continuation of this sequence).
In 2004, Goldston, Pintz, and Yildirim established the bound ${H_1 \leq 16}$ conditional on the Elliott-Halberstam conjecture, which remains unproven. However, no unconditional finiteness of ${H_1}$ was obtained (although they famously obtained the non-trivial bound ${p_{n+1}-p_n = o(\log p_n)}$), and even on the Elliott-Halberstam conjecture no finiteness result on the higher ${H_m}$ was obtained either (although they were able to show ${p_{n+2}-p_n=o(\log p_n)}$ on this conjecture). In the recent breakthrough of Zhang, the unconditional bound ${H_1 \leq 70,000,000}$ was obtained, by establishing a weak partial version of the Elliott-Halberstam conjecture; by refining these methods, the Polymath8 project (which I suppose we could retroactively call the Polymath8a project) then lowered this bound to ${H_1 \leq 4,680}$.
With the very recent preprint of James Maynard, we have the following further substantial improvements:
Theorem 1 (Maynard’s theorem) Unconditionally, we have the following bounds:
• ${H_1 \leq 600}$.
• ${H_m \leq C m^3 e^{4m}}$ for an absolute constant ${C}$ and any ${m \geq 1}$.
If one assumes the Elliott-Halberstam conjecture, we have the following improved bounds:
• ${H_1 \leq 12}$.
• ${H_2 \leq 600}$.
• ${H_m \leq C m^3 e^{2m}}$ for an absolute constant ${C}$ and any ${m \geq 1}$.
The final conclusion ${H_m \leq C m^3 e^{2m}}$ on Elliott-Halberstam is not explicitly stated in Maynard’s paper, but follows easily from his methods, as I will describe below the fold. (At around the same time as Maynard’s work, I had also begun a similar set of calculations concerning ${H_m}$, but was only able to obtain the slightly weaker bound ${H_m \leq C \exp( C m )}$ unconditionally.) In the converse direction, the prime tuples conjecture implies that ${H_m}$ should be comparable to ${m \log m}$. Granville has also obtained the slightly weaker explicit bound ${H_m \leq e^{8m+5}}$ for any ${m \geq 1}$ by a slight modification of Maynard’s argument.
The arguments of Maynard avoid using the difficult partial results on (weakened forms of) the Elliott-Halberstam conjecture that were established by Zhang and then refined by Polymath8; instead, the main input is the classical Bombieri-Vinogradov theorem, combined with a sieve that is closer in spirit to an older sieve of Goldston and Yildirim, than to the sieve used later by Goldston, Pintz, and Yildirim on which almost all subsequent work is based.
The aim of the Polymath8b project is to obtain improved bounds on ${H_1, H_2}$, and higher values of ${H_m}$, either conditional on the Elliott-Halberstam conjecture or unconditional. The likeliest routes for doing this are by optimising Maynard’s arguments and/or combining them with some of the results from the Polymath8a project. This post is intended to be the first research thread for that purpose. To start the ball rolling, I am going to give below a presentation of Maynard’s results, with some minor technical differences (most significantly, I am using the Goldston-Pintz-Yildirim variant of the Selberg sieve, rather than the traditional “elementary Selberg sieve” that is used by Maynard (and also in the Polymath8 project), although it seems that the numerology obtained by both sieves is essentially the same). An alternate exposition of Maynard’s work has just been completed also by Andrew Granville.
It’s time to (somewhat belatedly) roll over the previous thread on writing the first paper from the Polymath8 project, as this thread is overflowing with comments. We are getting near the end of writing this large (173 pages!) paper, establishing a bound of 4,680 on the gap between primes, with only a few sections left to thoroughly proofread (and the last section should probably be removed, with appropriate changes elsewhere, in view of the more recent progress by Maynard). As before, one can access the working copy of the paper at this subdirectory, as well as the rest of the directory, and the plan is to submit the paper to Algebra and Number theory (and the arXiv) once there is consensus to do so. Even before this paper was submitted, it already has had some impact; Andrew Granville’s exposition of the bounded gaps between primes story for the Bulletin of the AMS follows several of the Polymath8 arguments in deriving the result.
After this paper is done, there is interest in continuing onwards with other Polymath8-related topics, and perhaps it is time to start planning for them. First of all, we have an invitation from the Newsletter of the European Mathematical Society to discuss our experiences and impressions with the project. I think it would be interesting to collect some impressions or thoughts (both positive and negative) from people who were highly active in the research and/or writing aspects of the project, as well as from more casual participants who were following the progress more quietly. This project seemed to attract a bit more attention than most other polymath projects (with the possible exception of the very first project, Polymath1). I think there are several reasons for this; the project builds upon a recent breakthrough (Zhang's paper) that attracted an impressive amount of attention and publicity; the objective is quite easy to describe, when compared against other mathematical research objectives; and one could summarise the current state of progress by a single natural number H, which implied by infinite descent that the project was guaranteed to terminate at some point, but also made it possible to set up a "scoreboard" that could be quickly and easily updated. From the research side, another appealing feature of the project was that, in the early stages of the project at least, it was quite easy to grab a new world record by means of making a small observation, which made it fit very well with the polymath spirit (in which the emphasis is on lots of small contributions by many people, rather than a few big contributions by a small number of people). Indeed, when the project first arose spontaneously as a blog post of Scott Morrison over at the Secret Blogging Seminar, I was initially hesitant to get involved, but soon found the "game" of shaving a few thousands or so off of $H$ to be rather fun and addictive, and with a much greater sense of instant gratification than traditional research projects, which often take months before a satisfactory conclusion is reached. Anyway, I would welcome other thoughts or impressions on the project in the comments below (I think that the pace of comments regarding proofreading of the paper has slowed down enough that this post can accommodate both types of comments comfortably.)
Then of course there is the "Polymath 8b" project in which we build upon the recent breakthroughs of James Maynard, which have simplified the route to bounded gaps between primes considerably, bypassing the need for any Elliott-Halberstam type distribution results beyond the Bombieri-Vinogradov theorem. James has kindly shown me an advance copy of the preprint, which should be available on the arXiv in a matter of days; it looks like he has made a modest improvement to the previously announced results, improving $k_0$ a bit to 105 (which then improves H to the nice round number of 600). He also has a companion result on bounding gaps $p_{n+m}-p_n$ between non-consecutive primes for any $m$ (not just $m=1$), with a bound of the shape $H_m := \liminf_{n \to \infty} (p_{n+m}-p_n) \ll m^3 e^{4m}$, which is in fact the first time that the finiteness of this limit inferior has been demonstrated. I plan to discuss these results (from a slightly different perspective than Maynard) in a subsequent blog post kicking off the Polymath8b project, once Maynard's paper has been uploaded. It should be possible to shave the value of $H = H_1$ down further (or to get better bounds for $H_m$ for larger $m$), both unconditionally and under assumptions such as the Elliott-Halberstam conjecture, either by performing more numerical or theoretical optimisation on the variational problem Maynard is faced with, and also by using the improved distributional estimates provided by our existing paper; again, I plan to discuss these issues in a subsequent post. (James, by the way, has expressed interest in participating in this project, which should be very helpful.)
The classical foundations of probability theory (discussed for instance in this previous blog post) is founded on the notion of a probability space ${(\Omega, {\cal E}, {\bf P})}$ – a space ${\Omega}$ (the sample space) equipped with a ${\sigma}$-algebra ${{\cal E}}$ (the event space), together with a countably additive probability measure ${{\bf P}: {\cal E} \rightarrow [0,1]}$ that assigns a real number in the interval ${[0,1]}$ to each event.
One can generalise the concept of a probability space to a finitely additive probability space, in which the event space ${{\cal E}}$ is now only a Boolean algebra rather than a ${\sigma}$-algebra, and the measure ${\mu}$ is now only finitely additive instead of countably additive, thus ${{\bf P}( E \vee F ) = {\bf P}(E) + {\bf P}(F)}$ when ${E,F}$ are disjoint events. By giving up countable additivity, one loses a fair amount of measure and integration theory, and in particular the notion of the expectation of a random variable becomes problematic (unless the random variable takes only finitely many values). Nevertheless, one can still perform a fair amount of probability theory in this weaker setting.
In this post I would like to describe a further weakening of probability theory, which I will call qualitative probability theory, in which one does not assign a precise numerical probability value ${{\bf P}(E)}$ to each event, but instead merely records whether this probability is zero, one, or something in between. Thus ${{\bf P}}$ is now a function from ${{\cal E}}$ to the set ${\{0, I, 1\}}$, where ${I}$ is a new symbol that replaces all the elements of the open interval ${(0,1)}$. In this setting, one can no longer compute quantitative expressions, such as the mean or variance of a random variable; but one can still talk about whether an event holds almost surely, with positive probability, or with zero probability, and there are still usable notions of independence. (I will refer to classical probability theory as quantitative probability theory, to distinguish it from its qualitative counterpart.)
The main reason I want to introduce this weak notion of probability theory is that it becomes suited to talk about random variables living inside algebraic varieties, even if these varieties are defined over fields other than ${{\bf R}}$ or ${{\bf C}}$. In algebraic geometry one often talks about a “generic” element of a variety ${V}$ defined over a field ${k}$, which does not lie in any specified variety of lower dimension defined over ${k}$. Once ${V}$ has positive dimension, such generic elements do not exist as classical, deterministic ${k}$-points ${x}$ in ${V}$, since of course any such point lies in the ${0}$-dimensional subvariety ${\{x\}}$ of ${V}$. There are of course several established ways to deal with this problem. One way (which one might call the “Weil” approach to generic points) is to extend the field ${k}$ to a sufficiently transcendental extension ${\tilde k}$, in order to locate a sufficient number of generic points in ${V(\tilde k)}$. Another approach (which one might dub the “Zariski” approach to generic points) is to work scheme-theoretically, and interpret a generic point in ${V}$ as being associated to the zero ideal in the function ring of ${V}$. However I want to discuss a third perspective, in which one interprets a generic point not as a deterministic object, but rather as a random variable ${{\bf x}}$ taking values in ${V}$, but which lies in any given lower-dimensional subvariety of ${V}$ with probability zero. This interpretation is intuitive, but difficult to implement in classical probability theory (except perhaps when considering varieties over ${{\bf R}}$ or ${{\bf C}}$) due to the lack of a natural probability measure to place on algebraic varieties; however it works just fine in qualitative probability theory. In particular, the algebraic geometry notion of being “generically true” can now be interpreted probabilistically as an assertion that something is “almost surely true”.
It turns out that just as qualitative random variables may be used to interpret the concept of a generic point, they can also be used to interpret the concept of a type in model theory; the type of a random variable ${x}$ is the set of all predicates ${\phi(x)}$ that are almost surely obeyed by ${x}$. In contrast, model theorists often adopt a Weil-type approach to types, in which one works with deterministic representatives of a type, which often do not occur in the original structure of interest, but only in a sufficiently saturated extension of that structure (this is the analogue of working in a sufficiently transcendental extension of the base field). However, it seems that (in some cases at least) one can equivalently view types in terms of (qualitative) random variables on the original structure, avoiding the need to extend that structure. (Instead, one reserves the right to extend the sample space of one’s probability theory whenever necessary, as part of the “probabilistic way of thinking” discussed in this previous blog post.) We illustrate this below the fold with two related theorems that I will interpret through the probabilistic lens: the “group chunk theorem” of Weil (and later developed by Hrushovski), and the “group configuration theorem” of Zilber (and again later developed by Hrushovski). For sake of concreteness we will only consider these theorems in the theory of algebraically closed fields, although the results are quite general and can be applied to many other theories studied in model theory.
One of the basic tools in modern combinatorics is the probabilistic method, introduced by Erdos, in which a deterministic solution to a given problem is shown to exist by constructing a random candidate for a solution, and showing that this candidate solves all the requirements of the problem with positive probability. When the problem requires a real-valued statistic ${X}$ to be suitably large or suitably small, the following trivial observation is often employed:
Proposition 1 (Comparison with mean) Let ${X}$ be a random real-valued variable, whose mean (or first moment) ${\mathop{\bf E} X}$ is finite. Then
$\displaystyle X \leq \mathop{\bf E} X$
with positive probability, and
$\displaystyle X \geq \mathop{\bf E} X$
with positive probability.
This proposition is usually applied in conjunction with a computation of the first moment ${\mathop{\bf E} X}$, in which case this version of the probabilistic method becomes an instance of the first moment method. (For comparison with other moment methods, such as the second moment method, exponential moment method, and zeroth moment method, see Chapter 1 of my book with Van Vu. For a general discussion of the probabilistic method, see the book by Alon and Spencer of the same name.)
As a typical example in random matrix theory, if one wanted to understand how small or how large the operator norm ${\|A\|_{op}}$ of a random matrix ${A}$ could be, one might first try to compute the expected operator norm ${\mathop{\bf E} \|A\|_{op}}$ and then apply Proposition 1; see this previous blog post for examples of this strategy (and related strategies, based on comparing ${\|A\|_{op}}$ with more tractable expressions such as the moments ${\hbox{tr} A^k}$). (In this blog post, all matrices are complex-valued.)
Recently, in their proof of the Kadison-Singer conjecture (and also in their earlier paper on Ramanujan graphs), Marcus, Spielman, and Srivastava introduced a striking new variant of the first moment method, suited in particular for controlling the operator norm ${\|A\|_{op}}$ of a Hermitian positive semi-definite matrix ${A}$. Such matrices have non-negative real eigenvalues, and so ${\|A\|_{op}}$ in this case is just the largest eigenvalue ${\lambda_1(A)}$ of ${A}$. Traditionally, one tries to control the eigenvalues through averaged statistics such as moments ${\hbox{tr} A^k = \sum_i \lambda_i(A)^k}$ or Stieltjes transforms ${\hbox{tr} (A-z)^{-1} = \sum_i (\lambda_i(A)-z)^{-1}}$; again, see this previous blog post. Here we use ${z}$ as short-hand for ${zI_d}$, where ${I_d}$ is the ${d \times d}$ identity matrix. Marcus, Spielman, and Srivastava instead rely on the interpretation of the eigenvalues ${\lambda_i(A)}$ of ${A}$ as the roots of the characteristic polynomial ${p_A(z) := \hbox{det}(z-A)}$ of ${A}$, thus
$\displaystyle \|A\|_{op} = \hbox{maxroot}( p_A ) \ \ \ \ \ (1)$
where ${\hbox{maxroot}(p)}$ is the largest real root of a non-zero polynomial ${p}$. (In our applications, we will only ever apply ${\hbox{maxroot}}$ to polynomials that have at least one real root, but for sake of completeness let us set ${\hbox{maxroot}(p)=-\infty}$ if ${p}$ has no real roots.)
Prior to the work of Marcus, Spielman, and Srivastava, I think it is safe to say that the conventional wisdom in random matrix theory was that the representation (1) of the operator norm ${\|A\|_{op}}$ was not particularly useful, due to the highly non-linear nature of both the characteristic polynomial map ${A \mapsto p_A}$ and the maximum root map ${p \mapsto \hbox{maxroot}(p)}$. (Although, as pointed out to me by Adam Marcus, some related ideas have occurred in graph theory rather than random matrix theory, for instance in the theory of the matching polynomial of a graph.) For instance, a fact as basic as the triangle inequality ${\|A+B\|_{op} \leq \|A\|_{op} + \|B\|_{op}}$ is extremely difficult to establish through (1). Nevertheless, it turns out that for certain special types of random matrices ${A}$ (particularly those in which a typical instance ${A}$ of this ensemble has a simple relationship to “adjacent” matrices in this ensemble), the polynomials ${p_A}$ enjoy an extremely rich structure (in particular, they lie in families of real stable polynomials, and hence enjoy good combinatorial interlacing properties) that can be surprisingly useful. In particular, Marcus, Spielman, and Srivastava established the following nonlinear variant of Proposition 1:
Proposition 2 (Comparison with mean) Let ${m,d \geq 1}$. Let ${A}$ be a random matrix, which is the sum ${A = \sum_{i=1}^m A_i}$ of independent Hermitian rank one ${d \times d}$ matrices ${A_i}$, each taking a finite number of values. Then
$\displaystyle \hbox{maxroot}(p_A) \leq \hbox{maxroot}( \mathop{\bf E} p_A )$
with positive probability, and
$\displaystyle \hbox{maxroot}(p_A) \geq \hbox{maxroot}( \mathop{\bf E} p_A )$
with positive probability.
We prove this proposition below the fold. The hypothesis that each ${A_i}$ only takes finitely many values is technical and can likely be relaxed substantially, but we will not need to do so here. Despite the superficial similarity with Proposition 1, the proof of Proposition 2 is quite nonlinear; in particular, one needs the interlacing properties of real stable polynomials to proceed. Another key ingredient in the proof is the observation that while the determinant ${\hbox{det}(A)}$ of a matrix ${A}$ generally behaves in a nonlinear fashion on the underlying matrix ${A}$, it becomes (affine-)linear when one considers rank one perturbations, and so ${p_A}$ depends in an affine-multilinear fashion on the ${A_1,\ldots,A_m}$. More precisely, we have the following deterministic formula, also proven below the fold:
Proposition 3 (Deterministic multilinearisation formula) Let ${A}$ be the sum of deterministic rank one ${d \times d}$ matrices ${A_1,\ldots,A_m}$. Then we have
$\displaystyle p_A(z) = \mu[A_1,\ldots,A_m](z) \ \ \ \ \ (2)$
for all ${z \in C}$, where the mixed characteristic polynomial ${\mu[A_1,\ldots,A_m](z)}$ of any ${d \times d}$ matrices ${A_1,\ldots,A_m}$ (not necessarily rank one) is given by the formula
$\displaystyle \mu[A_1,\ldots,A_m](z) \ \ \ \ \ (3)$
$\displaystyle = (\prod_{i=1}^m (1 - \frac{\partial}{\partial z_i})) \hbox{det}( z + \sum_{i=1}^m z_i A_i ) |_{z_1=\ldots=z_m=0}.$
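The affine-linearity underlying this formula can be seen concretely from the matrix determinant lemma (a standard identity, recalled here as an aside):

$\displaystyle \hbox{det}(B + uv^*) = \hbox{det}(B) (1 + v^* B^{-1} u) \hbox{ for invertible } B,$

so when each ${A_i = u_i u_i^*}$ has rank one, ${\hbox{det}(z + \sum_{i=1}^m z_i A_i)}$ is affine in each variable ${z_i}$ separately; and on functions affine in ${z_i}$, the operator ${1 - \frac{\partial}{\partial z_i}}$ acts as evaluation at ${z_i = -1}$, so that (3) simply recovers ${\hbox{det}(z - \sum_{i=1}^m A_i) = p_A(z)}$.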
Among other things, this formula gives a useful representation of the mean characteristic polynomial ${\mathop{\bf E} p_A}$:
Corollary 4 (Random multilinearisation formula) Let ${A}$ be the sum of jointly independent rank one ${d \times d}$ matrices ${A_1,\ldots,A_m}$. Then we have
$\displaystyle \mathop{\bf E} p_A(z) = \mu[ \mathop{\bf E} A_1, \ldots, \mathop{\bf E} A_m ](z) \ \ \ \ \ (4)$
for all ${z \in {\bf C}}$.
Proof: For fixed ${z}$, the expression ${\hbox{det}( z + \sum_{i=1}^m z_i A_i )}$ is a polynomial combination of the ${z_i A_i}$, while the differential operator ${(\prod_{i=1}^m (1 - \frac{\partial}{\partial z_i}))}$ is a linear combination of differential operators ${\frac{\partial^j}{\partial z_{i_1} \ldots \partial z_{i_j}}}$ for ${1 \leq i_1 < \ldots < i_j \leq m}$. As a consequence, we may expand (3) as a linear combination of terms, each of which is a multilinear combination of ${A_{i_1},\ldots,A_{i_j}}$ for some ${1 \leq i_1 < \ldots < i_j \leq m}$. Taking expectations of both sides of (2) and using the joint independence of the ${A_i}$, we obtain the claim. $\Box$
In view of Proposition 2, we can now hope to control the operator norm ${\|A\|_{op}}$ of certain special types of random matrices ${A}$ (and specifically, the sum of independent Hermitian positive semi-definite rank one matrices) by first controlling the mean ${\mathop{\bf E} p_A}$ of the random characteristic polynomial ${p_A}$. Pursuing this philosophy, Marcus, Spielman, and Srivastava establish the following result, which they then use to prove the Kadison-Singer conjecture:
Theorem 5 (Marcus-Spielman-Srivastava theorem) Let ${m,d \geq 1}$. Let ${v_1,\ldots,v_m \in {\bf C}^d}$ be jointly independent random vectors in ${{\bf C}^d}$, with each ${v_i}$ taking a finite number of values. Suppose that we have the normalisation
$\displaystyle \mathop{\bf E} \sum_{i=1}^m v_i v_i^* = 1$
where we are using the convention that ${1}$ is the ${d \times d}$ identity matrix ${I_d}$ whenever necessary. Suppose also that we have the smallness condition
$\displaystyle \mathop{\bf E} \|v_i\|^2 \leq \epsilon$
for some ${\epsilon>0}$ and all ${i=1,\ldots,m}$. Then one has
$\displaystyle \| \sum_{i=1}^m v_i v_i^* \|_{op} \leq (1+\sqrt{\epsilon})^2 \ \ \ \ \ (5)$
with positive probability.
Note that the upper bound in (5) must be at least ${1}$ (by taking ${v_i}$ to be deterministic) and also must be at least ${\epsilon}$ (by taking the ${v_i}$ to always have magnitude at least ${\sqrt{\epsilon}}$). Thus the bound in (5) is asymptotically tight both in the regime ${\epsilon\rightarrow 0}$ and in the regime ${\epsilon \rightarrow \infty}$; the latter regime will be particularly useful for applications to Kadison-Singer. It should also be noted that if one uses more traditional random matrix theory methods (based on tools such as Proposition 1, as well as more sophisticated variants of these tools, such as the concentration of measure results of Rudelson and Ahlswede-Winter), one obtains a bound of ${\| \sum_{i=1}^m v_i v_i^* \|_{op} \ll_\epsilon \log d}$ with high probability, which is insufficient for the application to the Kadison-Singer problem; see this article of Tropp. Thus, Theorem 5 obtains a sharper bound, at the cost of trading in “high probability” for “positive probability”.
In the paper of Marcus, Spielman and Srivastava, Theorem 5 is used to deduce a conjecture ${KS_2}$ of Weaver, which was already known to imply the Kadison-Singer conjecture; actually, a slight modification of their argument gives the paving conjecture of Kadison and Singer, from which the original Kadison-Singer conjecture may be readily deduced. We give these implications below the fold. (See also this survey article for some background on the Kadison-Singer problem.)
Let us now summarise how Theorem 5 is proven. In the spirit of semi-definite programming, we rephrase the above theorem in terms of the rank one Hermitian positive semi-definite matrices ${A_i := v_iv_i^*}$:
Theorem 6 (Marcus-Spielman-Srivastava theorem again) Let ${A_1,\ldots,A_m}$ be jointly independent random rank one Hermitian positive semi-definite ${d \times d}$ matrices such that the sum ${A :=\sum_{i=1}^m A_i}$ has mean
$\displaystyle \mathop{\bf E} A = I_d$
and such that
$\displaystyle \mathop{\bf E} \hbox{tr} A_i \leq \epsilon$
for some ${\epsilon>0}$ and all ${i=1,\ldots,m}$. Then one has
$\displaystyle \| A \|_{op} \leq (1+\sqrt{\epsilon})^2$
with positive probability.
In view of (1) and Proposition 2, this theorem follows from the following control on the mean characteristic polynomial:
Theorem 7 (Control of mean characteristic polynomial) Let ${A_1,\ldots,A_m}$ be jointly independent random rank one Hermitian positive semi-definite ${d \times d}$ matrices such that the sum ${A :=\sum_{i=1}^m A_i}$ has mean
$\displaystyle \mathop{\bf E} A = 1$
and such that
$\displaystyle \mathop{\bf E} \hbox{tr} A_i \leq \epsilon$
for some ${\epsilon>0}$ and all ${i=1,\ldots,m}$. Then one has
$\displaystyle \hbox{maxroot}(\mathop{\bf E} p_A) \leq (1 +\sqrt{\epsilon})^2.$
This result is proven using the multilinearisation formula (Corollary 4) and some convexity properties of real stable polynomials; we give the proof below the fold.
Thanks to Adam Marcus, Assaf Naor and Sorin Popa for many useful explanations on various aspects of the Kadison-Singer problem.
I’ve just finished the first draft of my book “Expansion in finite simple groups of Lie type“, which is based in the lecture notes for my graduate course on this topic that were previously posted on this blog. It also contains some newer material, such as the notes on Lie algebras and Lie groups that I posted most recently here.
Let ${F}$ be a field. A definable set over ${F}$ is a set of the form
$\displaystyle \{ x \in F^n | \phi(x) \hbox{ is true} \} \ \ \ \ \ (1)$
where ${n}$ is a natural number, and ${\phi(x)}$ is a predicate involving the ring operations ${+,\times}$ of ${F}$, the equality symbol ${=}$, an arbitrary number of constants and free variables in ${F}$, the quantifiers ${\forall, \exists}$, boolean operators such as ${\vee,\wedge,\neg}$, and parentheses and colons, where the quantifiers are always understood to be over the field ${F}$. Thus, for instance, the set of quadratic residues
$\displaystyle \{ x \in F | \exists y: x = y \times y \}$
is definable over ${F}$, and any algebraic variety over ${F}$ is also a definable set over ${F}$. Henceforth we will abbreviate “definable over ${F}$” simply as “definable”.
If ${F}$ is a finite field, then every subset of ${F^n}$ is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that ${E \subset F^n}$ is a definable set of complexity at most ${M}$ if ${n \leq M}$, and ${E}$ can be written in the form (1) for some predicate ${\phi}$ of length at most ${M}$ (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in ${n}$ dimensions of degree ${d}$ would be a definable set of complexity ${O_{n,d}(1)}$. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large.
In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:
Lemma 1 (Algebraic regularity lemma) Let ${F}$ be a finite field, let ${V,W}$ be definable non-empty sets of complexity at most ${M}$, and let ${E \subset V \times W}$ also be definable with complexity at most ${M}$. Assume that the characteristic of ${F}$ is sufficiently large depending on ${M}$. Then we may partition ${V = V_1 \cup \ldots \cup V_m}$ and ${W = W_1 \cup \ldots \cup W_n}$ with ${m,n = O_M(1)}$, with the following properties:
• (Definability) Each of the ${V_1,\ldots,V_m,W_1,\ldots,W_n}$ are definable of complexity ${O_M(1)}$.
• (Size) We have ${|V_i| \gg_M |V|}$ and ${|W_j| \gg_M |W|}$ for all ${i=1,\ldots,m}$ and ${j=1,\ldots,n}$.
• (Regularity) We have
$\displaystyle |E \cap (A \times B)| = d_{ij} |A| |B| + O_M( |F|^{-1/4} |V| |W| ) \ \ \ \ \ (2)$
for all ${i=1,\ldots,m}$, ${j=1,\ldots,n}$, ${A \subset V_i}$, and ${B\subset W_j}$, where ${d_{ij}}$ is a rational number in ${[0,1]}$ with numerator and denominator ${O_M(1)}$.
My original proof of this lemma was quite complicated, based on an explicit calculation of the “square”
$\displaystyle \mu(w,w') := \{ v \in V: (v,w), (v,w') \in E \}$
of ${E}$ using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)
Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields ${F}$ (although its content is trivial if the field is not sufficiently large depending on the complexity bound ${M}$).
Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:
Proposition 2 Let ${F}$ be a finite field, and let ${M > 0}$.
• (Discretised cardinality) If ${E}$ is a non-empty definable set of complexity at most ${M}$, then one has
$\displaystyle |E| = c |F|^d + O_M( |F|^{d-1/2} ) \ \ \ \ \ (3)$
where ${d = O_M(1)}$ is a natural number, and ${c}$ is a positive rational number with numerator and denominator ${O_M(1)}$. In particular, we have ${|F|^d \ll_M |E| \ll_M |F|^d}$.
• (Definable cardinality) Assume ${|F|}$ is sufficiently large depending on ${M}$. If ${V, W}$, and ${E \subset V \times W}$ are definable sets of complexity at most ${M}$, so that ${E_w := \{ v \in V: (v,w) \in E \}}$ can be viewed as a definable subset of ${V}$ that is definably parameterised by ${w \in W}$, then for each natural number ${d = O_M(1)}$ and each positive rational ${c}$ with numerator and denominator ${O_M(1)}$, the set
$\displaystyle \{ w \in W: |E_w| = c |F|^d + O_M( |F|^{d-1/2} ) \} \ \ \ \ \ (4)$
is definable with complexity ${O_M(1)}$, where the implied constants in the asymptotic notation used to define (4) are the same as those that appearing in (3). (Informally: the “dimension” ${d}$ and “measure” ${c}$ of ${E_w}$ depends definably on ${w}$.)
We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)
The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most ${M}$ and cardinality ${|F|^{1/2}}$, if ${|F|}$ is sufficiently large depending on ${M}$. If ${E \subset V}$ are definable sets of complexity at most ${M}$, it shows that ${|E| = (c+ O_M(|F|^{-1/2})) |V|}$ for some rational ${0\leq c \leq 1}$ with numerator and denominator ${O_M(1)}$; furthermore, if ${c=0}$, we may improve this bound to ${|E| = O_M( |F|^{-1} |V|)}$. In particular, we obtain the following “self-improving” properties:
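To make the discretised cardinality statement concrete, here is a minimal Python sketch (an illustration added here, not from the post): it counts the nonzero quadratic residues mod a few small primes, a definable set of bounded complexity, for which Proposition 2 predicts ${|E| = c|F|^d + O(|F|^{d-1/2})}$ with ${c = 1/2}$ and ${d = 1}$.

```python
# Count the nonzero quadratic residues in F_p, i.e. the definable set
# { x in F : exists y, x = y*y, x != 0 }, for a few small primes.
for p in (101, 1009, 10007):
    residues = {pow(x, 2, p) for x in range(1, p)}  # squares of nonzero elements
    print(p, len(residues), (p - 1) / 2)            # exactly (p-1)/2 residues
```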
• If ${E \subset V}$ are definable of complexity at most ${M}$ and ${|E| \leq \epsilon |V|}$ for some ${\epsilon>0}$, then (if ${\epsilon}$ is sufficiently small depending on ${M}$ and ${F}$ is sufficiently large depending on ${M}$) this forces ${|E| = O_M( |F|^{-1} |V| )}$.
• If ${E \subset V}$ are definable of complexity at most ${M}$ and ${||E| - c |V|| \leq \epsilon |V|}$ for some ${\epsilon>0}$ and positive rational ${c}$, then (if ${\epsilon}$ is sufficiently small depending on ${M,c}$ and ${F}$ is sufficiently large depending on ${M,c}$) this forces ${|E| = c |V| + O_M( |F|^{-1/2} |V| )}$.
It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to ${E}$) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.
I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory“, submitted to the new journal “EMS surveys in the mathematical sciences“. This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”. There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, come into play also.
The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, or Stepanov’s method in giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the nullstellensatz of Alon is also another relatively early use of the polynomial method. More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdos distance problem in the plane, and can be used to give a short proof of the Szemeredi-Trotter theorem. One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.
This is something of a first draft, so comments and suggestions are even more welcome than usual. (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 462, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9399732351303101, "perplexity": 291.0268352494186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164043130/warc/CC-MAIN-20131204133403-00078-ip-10-33-133-15.ec2.internal.warc.gz"} |
http://mathhelpforum.com/trigonometry/151364-funny-double-equation-problem.html | # Math Help - Funny double-equation problem
1. ## Funny double-equation problem
5-7sin θ€ =2cos^2 θ€ , θ€ [-360°, 450°]
I'm just a little lost with the multiple use of Theda and €...
Just thought of something... is θ€ simply another way of writing θ? The € sign has no input on what the problem means...?
2. Originally Posted by wiseguy
5-7sin θ€ =2cos^2 θ€ , θ€ [-360°, 450°]
I'm just a little lost with the multiple use of Theda and €...
Just thought of something... is θ€ simply another way of writing θ? The € sign has no input on what the problem means...?
Solutions are required over the interval [-360 degrees, 450 degrees].
Substitute $\cos^2 \theta = 1 - \sin^2 \theta$ and re-arrange the resulting into a quadratic equation where $\sin \theta$ is the unknown.
If you need more help, please show all your work and say where you get stuck.
Also, please don't put questions in quote tags - it makes it too difficult to quote the question when replying.
3. Here's what I got...
5=2cos^2θ/7sinθ
(7/2)5=cos^2θ/sinθ
17.5=cosθ*cotθ
Can I carry this out like a normal equation? What I use to eliminate the cosx cotx mess?
4. I did an alternative approach to the problem, however I'm not sure if the 1 on the right side works
5=2cos^2θ/7sinθ
(7/2)5=1-sin^2θ/sinθ
17.5=1-sinθ
5. Originally Posted by wiseguy
I did an alternative approach to the problem, however I'm not sure if the 1 on the right side works
5=2cos^2θ/7sinθ
(7/2)5=1-sin^2θ/sinθ
17.5=1-sinθ
Substitute $w = \sin \theta$. Then, following from my earlier reply, you have:
$5 - 7 w = 2(1 - w^2) \Rightarrow 2w^2 - 7w + 3 =0$.
Solve for w. One solution is rejected (why?). The other solution leads to $\sin \theta = \frac{1}{2}$. Solve this equation.
6. Okay, I think I got it:
x=1/2, x=3
sinθ=1/2, sinθ=3
arcsin(1/2)=0.523599, arcsin3=no solution
so there is only one solution, and it is θ=0.523599 ...?
Thank you
7. Originally Posted by wiseguy
Okay, I think I got it:
x=1/2, x=3
sinθ=1/2, sinθ=3
arcsin(1/2)=0.523599, arcsin3=no solution
so there is only one solution, and it is θ=0.523599 ...?
Thank you
1) θ needs to be in degrees, as stated in the original problem, so θ = 30°.
2) In the future, you should express radian measures as something times pi if you can. IOW it's better to say θ = π/6 instead of 0.523599.... sinθ = 1/2 -> θ = π/6 or 30° is one of those things you should really memorize.
8. Originally Posted by wiseguy
Okay, I think I got it:
x=1/2, x=3
sinθ=1/2, sinθ=3
arcsin(1/2)=0.523599, arcsin3=no solution
so there is only one solution, and it is θ=0.523599 ...?
Thank you
Your answer is supposed to be in degrees since the question uses degree.
arcsin (0.5) = 30 degree and this is one of the solutions in the range given.
There are more: 150, 390, -330 , -210
9. Okay, how would I tie the thing where sin is positive in the first and second quadrant to the four solutions of 150, 390, -330 , -210?
10. Originally Posted by wiseguy
Okay, how would I tie the thing where sin is positive in the first and second quadrant to the four solutions of 150, 390, -330 , -210?
The reference angles are 30 (1st quadrant) and 150 (2nd quadrant). Note also that the period of a sin graph is 360. In other words, it repeats itself every 360.
so 30+360=390
How about 150+360=510? Look at the range.
Now go in the clockwise (negative) direction; measured as negative angles, the co-terminal solutions with positive sine are
in the 3rd-quadrant measure, -(180+30) = -210, and in the 4th: -(360-30) = -330.
As an alternative, you can use the general formula for sine.
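If you want to check the full solution set mechanically, here is a small Python sketch (an illustration added to the thread, not from it) that enumerates every angle in [-360°, 450°] with sin θ = 1/2, using the two reference angles plus the 360° period:

```python
solutions = sorted(
    angle
    for base in (30, 150)      # the two reference solutions of sin(theta) = 1/2
    for k in range(-3, 3)      # enough periods to cover the interval
    for angle in [base + 360 * k]
    if -360 <= angle <= 450
)
print(solutions)  # [-330, -210, 30, 150, 390]
```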
11. Got it!
Now I have to remember this stuff... lol | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945685625076294, "perplexity": 1842.5705105190898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826908.63/warc/CC-MAIN-20160723071026-00206-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/newtons-cooling-law.239803/ | # Newton's cooling law
• #1
Ry122
For Newton's cooling law
$$q = hA\,\Delta T$$
q is the rate of energy loss of a body but for what unit time?
For example if q = 3 does the body lose 3 watts of energy in 1 second?
• #2
Gold Member
Whatever units you want, as long as you are consistent (i.e. mixing imperial and SI is a bad idea).
So yes, assuming you are using SI for the constant and the variables, the time will be in seconds.
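To make the units concrete, here is a tiny Python sketch (the numbers are illustrative placeholders, not from the thread):

```python
h = 10.0   # convective coefficient in W/(m^2 K), illustrative value
A = 0.5    # surface area in m^2
dT = 20.0  # temperature difference in K

q = h * A * dT    # Newton's cooling law: rate of heat loss
print(q, "W")     # 100.0 W, i.e. the body loses 100 J every second
```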
• #3
armis
The differential form is more general
$$\partial{Q}/\partial{t} = -k{\oint}\nabla{T}\cdot d\vec{S}$$
$$\partial{Q}/\partial{t}$$ is the amount of heat transferred per unit time, as long as you are using SI: [W], i.e. [J*s^-1]. So it's joules (J) that are transferred in one second, not watts (W)
And you have a minus missing
I may be wrong, feel free to correct me
• #4
ironhill
Newton's law of cooling: If you put milk in your coffee then leave it for a minute it will be warmer than if you leave it for a minute then add milk.
• #5
armis
That's an efficient way of applying the Newton's law of cooling :)
https://socratic.org/questions/i-need-help-with-subscripts-for-empirical-formula-how-do-i-know-which-number-to- | Chemistry
# I need help with Subscripts for empirical formula, how do i know which number to multiply so that I get a whole number?
Mar 10, 2015
When you have to calculate a compound's empirical formula from its percent composition, there are a few tricks to use to help you deal with decimal mole ratios between the atoms that comprise your compound.
Now, I assume you know how to get to this point, so I won't show you the whole approach. Let's assume you have a compound containing $\text{A}$, $\text{B}$, and $\text{C}$, and you determine the mole ratios between these elements to be
$\text{A} : 2.33$
$\text{B} : 1$
$\text{C} : 1.67$
In such cases it is very useful to use mixed fractions. Mixed fractions are a combination of a whole number and a regular (or proper) fraction.
In this case, $2.33$ is equal to 2 and 1/3, or 7/3, and $1.67$ is equal to 1 and 2/3, or 5/3. This makes the ratios equal to
$\text{A": "7/3}$
$\text{B} : 1$
$\text{C": "5/3}$
Now multiply all of them by 3 to get rid of the denominator and you'll get the empirical formula
${A}_{7} {B}_{3} {C}_{5}$
If you get enough practice with empirical formulas you'll be able to "see" the answer faster. For example, if you have a compound comprised of $\text{X}$, $\text{Y}$, and $\text{Z}$, and the mole ratio looks like this
$\text{X} : 1.33$
$\text{Y} : 1$
$\text{Z} : 1$
It will become obvious in time that you have to multiply all of them by 3 to get all-whole numbers and an empirical formula of
${X}_{4} {Y}_{3} {Z}_{3}$
Notice that the mixed fractions method is useful in this case as well, since 1.33 is actually 1 and 1/3, or 4/3.
As a conclusion, it takes a little practice to be able to determine which numbers can be written in a useful way as mixed fractions, so spend some time on getting this skill down.
SIDE NOTE I assume you know how to get around mixed fractions, so I won't detail how I got 7/3 or 4/3.
Mar 10, 2015
After you divide by the smallest amount of moles, if you end up with a number ending in .25 then multiply all numbers by 4. If you end up with a number ending in .33 then multiply all numbers by 3. If you end up with a number ending in .20 then multiply all numbers by 5. If you end up with a number ending in .5 then multiply all numbers by 2.
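If you'd rather automate the mixed-fraction trick, here is a short Python sketch (an illustration added here; the ratios 2.33 : 1 : 1.67 are the ones from the answer above):

```python
from fractions import Fraction
from math import lcm

ratios = [2.33, 1.0, 1.67]   # mole ratios after dividing by the smallest
fracs = [Fraction(r).limit_denominator(10) for r in ratios]   # -> 7/3, 1, 5/3
scale = lcm(*(f.denominator for f in fracs))                  # multiply through by 3
subscripts = [int(f * scale) for f in fracs]
print(subscripts)   # [7, 3, 5]  ->  A7 B3 C5
```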
https://eres.architexturez.net/doc/oai-eres-id-eres2005-201 | This paper examines the distributional characteristics of REITs using the daily NAREIT indices for the period 1997-2004. While previous studies have examined the distributional properties of REITs, they have largely used lower-frequency monthly data. This paper has two primary aims. Firstly, it extends the existing literature on REITs by utilising the approaches proposed by Peiro (1999, 2002) and illustrating that the conventional skewness statistic, which is normally used to test for normality in return distributions, may provide erroneous inferences regarding the distribution as it is based on the normal distribution. We test for non-normality using a variety of alternative tests that make minimal assumptions about the shape of the underlying distribution. Secondly, building on the reported findings, we analyse the implications for risk measurement. We estimate value-at-risk measures on a daily basis for REITs. While VaR has over the last ten years become a standard risk measure, it does suffer from a number of problems, especially concerning the assumptions made regarding normality in the basic estimation of the measure (Hull & White, 1998). We therefore make use of Extreme Value Theory in examining the tail behaviour of REITs and integrate this with the estimation of daily value-at-risk figures.
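As a rough illustration of the kind of daily VaR estimate discussed in the abstract, here is a minimal Python sketch using simulated fat-tailed returns as a stand-in for the NAREIT series (the Student-t parameters and the 99% level are placeholder choices, and this is plain historical simulation rather than the paper's EVT approach):

```python
import numpy as np

rng = np.random.default_rng(0)
# Fat-tailed stand-in for daily REIT returns (Student-t, df=4, ~1% scale)
returns = rng.standard_t(df=4, size=2000) * 0.01

# Historical-simulation 99% one-day VaR: the loss exceeded on 1% of days
var_99 = -np.quantile(returns, 0.01)
print(f"1-day 99% VaR: {var_99:.2%} of portfolio value")
```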
https://quant.stackexchange.com/questions/42930/whats-the-logic-behind-3-10-ust-yield-inversion-predicting-recession/42931 | # What's the logic behind 3-10 UST yield inversion predicting recession?
Is there causality, behavioral or logical explanation behind this indicator or is it just purely an observation based on correlation? My guess is that there are existing derivatives with clauses that force them to take action that results in inverting the curve because it makes no sense to receive less for 10s than 3s.
• the real interest rate measures the rate at which consumption is expected to grow over a given horizon. A high 1-year yield signals that growth is expected to be high over a one-year horizon. A high 10-year yield signals that annual growth is expected, on average, to be high over a ten-year horizon. If the difference in the 10-year and 1-year yield is positive, then growth is expected to accelerate. If the difference is negative--i.e., if the real yield curve inverts--then growth is expected to decelerate. andolfatto.blogspot.com/2018/09/… Dec 6, 2018 at 1:39
http://math.stackexchange.com/questions/52266/the-leap-to-infinite-dimensions/52269 | # The leap to infinite dimensions
Extending this question, page 447 of Gilbert Strang's Algebra book says
What does it mean for a vector to have infinitely many components? There are two different answers, both good:
1) The vector becomes $v = (v_1, v_2, v_3 ... )$
2) The vector becomes a function $f(x)$. It could be $\sin(x)$.
I don't quite see in what sense the function is "infinite dimensional". Is it because a function is continuous, and so represents infinitely many points? The best way I can explain it is:
• 1D space has 1 DOF, so each "vector" takes you on "one trip"
• 2D space has 2 DOF, so by following each component in a 2D (x,y) vector you end up going on "two trips"
• ...
• $\infty$D space has $\infty$ DOF, so each component in an $\infty$D vector takes you on "$\infty$ trips"
How does it ever end then? 3d space has 3 components to travel (x,y,z) to reach a destination point. If we have infinite components to travel on, how do we ever reach a destination point? We should be resolving components against infinite axes and so never reach a final destination point.
Do you know anything about Fourier series? – Matt Calhoun Jul 19 '11 at 0:20
there are different notions of "basis". the algebraic one (sometimes called a hamel basis) is a collection of independent vectors st every vector can be written as a finite linear combination of basis elements. in something like $L^2(S^1)$ you might consider the orthonormal basis $\{\cos(nx), \sin(nx) : n=0,1,2,3,...\}$ where $L^2$ functions can be written as infinite linear combinations (fourier series) of the basis functions. – yoyo Jul 19 '11 at 0:45
@Theo Buehler: Hm? – Christian Blatter Jul 19 '11 at 12:08
@Theo But in $f(x)=\sin(x)$, say $x=1$ (basis=1), then $f(x)=\sin(1)$ which is a value, not a function – bobobobo Jul 19 '11 at 12:59
@Christian Blatter: Oh... that was a major lapse. (How could 4 people agree?) @bobobobo: Sorry about that. GleasSpty and Agustí expand on what I was trying to say, but correctly. – t.b. Jul 19 '11 at 14:20
One thing that might help is thinking about the vector spaces you already know as function spaces instead. Consider $\mathbb{R}^n$. Let $T_{n}=\{1,2,\cdots,n\}$ be a set of size $n$. Then $$\mathbb{R}^{n}\cong\left\{ f:T_{n}\rightarrow\mathbb{R}\right\}$$ where the set on the right hand side is the space of all real valued functions on $T_n$. It has a vector space structure since we can multiply by scalars and add functions. The functions $f_i$ which satisfy $f_i(j)=\delta_{ij}$ will form a basis.
So a finite dimensional vector space is just the space of all functions on a finite set. When we look at the space of functions on an infinite set, we get an infinite dimensional vector space.
Do you mean $T_n$ is an n-tuple? (So ${ a_1, a_2 ... a_n }, a_n \epsilon \mathbb{R}$ )? – bobobobo Jul 19 '11 at 15:39
@bobobobo: No. I mean $T_n$ is a set of size $n$. Any set of size $n$. It could represent the vertices of a graph, in which case we are talking about the vector space of functions on a graph. Or it could be the elements of a group. Above I used the numbers $1$ to $n$ for simplicity. We could have $T=\{\text{cat}, \text{ dog}, \text{ rat} \}$. In this case, the space of all functions from $T$ to $\mathbb{R}$ is a three dimensional vector spaces over $\mathbb{R}$. A basis would be the three delta functions. – Eric Naslund Jul 19 '11 at 15:51
This is a nice answer, but I still haven't found what I'm looking for – bobobobo Jul 19 '11 at 23:16
I would also like to add to Eric's answer (it turned out that this was too long to be just a comment) that in general it's probably not a good idea to think of a vector as defined in terms of its components. Rather, one should probably think of a vector as an element of an abstract vector space, and then, once a basis is chosen, you can represent the vector in that basis by its components with respect to that basis. If the (algebraic) basis is finite, then you can write the coordinates as usual as $(v_1,\ldots ,v_n)$. Similarly, if the (algebraic) basis is countably infinite, the vector can be represented by its components as $(v_1,\ldots ,v_n,\ldots )$. In general, if the (algebraic) basis is indexed by an index set $I$, the components of a vector will be a function $f_v:I\rightarrow F$, where $F$ is the field you're working over.
In the second example you posted above, you can take $V$ to be the set of all bounded functions on $\mathbb{R}$ and you can take $F=\mathbb{R}$. Then, for each $x_0\in \mathbb{R}$, you may define the function $$\delta _{x_0}(x)=\begin{cases}1 & \text{if }x=x_0 \\ 0 &\text{otherwise}\end{cases}$$ It turns out that the collection $\left\{ \delta _{x_0}|\, x_0\in \mathbb{R}\right\}$ forms an algebraic basis for $V$. This collection is naturally indexed by $\mathbb{R}$, and so by choosing this basis you can think of a function in $V$ as represented by a function from $\mathbb{R}$ (the indexing set) to $\mathbb{R}$ (the field). In this case, that function was $\sin (x)$, which, because of how we chose our basis, agrees with the element of $V$ it is trying to represent, namely the original function $\sin$.
Hope that helps!
P.S.: I use the term algebraic basis to distinguish it from a topological basis, which is often more useful in infinite-dimensional settings.
I won't say anything more than Theo and Eric have already said, but...
As Eric says, every $\mathbb{R}^n$ can be seen as a space of functions $f: T_n \longrightarrow \mathbb{R}$.
That is, the vector $v = (8.2 , \pi , 13) \in \mathbb{R}^3$ is the same as the function $v: \left\{ 1,2,3\right\} \longrightarrow \mathbb{R}$ such that $v(1) = 8.2, v(2) = \pi$ and $v(3) = 13$.
So, the coordinates of $v$ are the same as its values on the set $\left\{ 1,2,3\right\}$, aren't they? Indeed, the coordinates of $v$ are the coefficients that appear in the right-hand side of this equality:
$$(8.2, \pi , 13) = v(1) (1,0,0) + v(2) (0,1,0) + v(3) (0,0,1) \ .$$
On the other hand, the coordinates of $v$ are its coordinates in the standard basis of $\mathbb{R}^3$: $e_1 = (1,0,0), e_2 = (0,1,0)$ and $e_3 = (0,0,1)$ and we can look at these vectors of the standard basis as functions too -like all vectors in $\mathbb{R}^3$. They are the following "functions":
$$e_i (j) = \begin{cases} 1 & \text{if}\quad i=j \\ 0 & \text{if}\quad i \neq j \end{cases}$$
This is an odd way to look at old, reliable, $\mathbb{R}^3$ and its standard basis, isn't it?
Well, the point in doing so is to get hold for the following construction: let $X$ be any set (finite or infinite, countable or uncountable) and let's consider the set of all functions $f: X \longrightarrow \mathbb{R}$ (not necessarily continuous: besides, since we didn't ask $X$ to be a topological space, it doesn't make sense to talk about continuity). Call this set
$$\mathbb{R}^X \ .$$
Now, you can make $\mathbb{R}^X$ into a real vector space by defining
$$(f + g)(x) = f(x) + g(x) \qquad \text{and} \qquad (\lambda f)(x) = \lambda f(x)$$
for every $x \in X$, $f, g \in\mathbb{R}^X$ and $\lambda \in \mathbb{R}$.
And you would have a "standard basis" too in $\mathbb{R}^X$ which would be the set of functions $e_x : X \longrightarrow \mathbb{R}$, one for each point $x \in X$:
$$e_x (y) = \begin{cases} 1 & \text{if}\quad x=y \\ 0 & \text{if}\quad x \neq y \end{cases} \ .$$
So, you see $\mathbb{R}^3$ can be seen as a particular example of a space of functions $\mathbb{R}^X$ if you see the number $3$ as the set $\left\{ 1,2,3\right\}$: $\mathbb{R}^3 = \mathbb{R}^{\left\{ 1,2,3\right\}} = \mathbb{R}^{T_3}$ and the "coordinates" of a function $f\in \mathbb{R}^X$ are the same as its values $\left\{ f(x)\right\}_{x \in X}$.
(In fact, a function $f$ is the same as its set of values over all points of $X$, isn't it? -Just in the same way as you identify every vector with its coordinates in a given basis.)
Warning. I've been cheating a little bit here, because, in general, the set $\left\{ e_x\right\}_{x\in X}$ is not a basis for the vector space $\mathbb{R}^X$. If it was, every function $f\in \mathbb{R}^X$ could be written as a finite linear combination of those $e_x$. Indeed you have
$$f = \sum_{x\in X} f(x) e_x \ ,$$
but the sum on the right need not be finite (if $X$ is infinite, for instance).
One way to fix this: instead of $\mathbb{R}^X$, consider the subset $S \subset \mathbb{R}^X$ of functions $f: X \longrightarrow \mathbb{R}$ such that $f(x) \neq 0$ just for a finite number of points $x\in X$. Then it is true that $\left\{ e_x\right\}_{x\in X}$ is a basis for $S$.
(Otherwise said, $\mathbb{R}^X = \prod_{x\in X} \mathbb{R}_x$ and $S = \bigoplus_{x\in X} \mathbb{R}_x$, where $\mathbb{R}_x = \mathbb{R}$ for all $x\in X$.)
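Here is a small Python sketch of this point of view (an illustration added to the thread, not from it): a vector is literally a function on an index set, and the "standard basis" is the family of delta functions $e_x$.

```python
# A vector in R^3, viewed as a function on the index set {1, 2, 3}
v = {1: 8.2, 2: 3.14159, 3: 13.0}

def e(x0):
    """The delta 'basis' function e_{x0}: 1 at x0, 0 elsewhere."""
    return lambda x: 1.0 if x == x0 else 0.0

# Reconstruct v pointwise as a finite sum over x of v(x) * e_x
reconstructed = {x: sum(v[y] * e(y)(x) for y in v) for x in v}
print(reconstructed == v)  # True: coordinates are just values on the index set
```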
I will try to answer your question about in which sense a space is called infinite dimensional, and how, despite this, you can reach any destination point.
It is a theorem that every vector space $V$ has some basis $B\subseteq V$. This means that every vector $v\in V$ can be written as $v=c_1b_1+\cdots+c_nb_n$ for some scalars $c_1,\ldots,c_n\in\mathbb R$ and some basis vectors $b_1,\ldots,b_n\in B$ and some integer $n$. It is very important to note here that only a finite number of basis vectors were used. So even if $V$ is infinite-dimensional, which means that $B$ contains infinitely many basis vectors $b$, we only make a finite number of "trips" from the origin along some basis vectors. We have infinitely many basis vectors to choose from (these are our "degrees of freedom"), but we choose only a finite number $n$ of them, say a hundred, and travel a scalar multiple of $c_i$ along each (where $i=1,\ldots,n$), in order to reach a vector $v$ in our vector space.
Now, it's understandable to be confused about this, because it's difficult to give concrete examples for general infinite dimensional spaces. If we take $V$ as the space of functions $f:\mathbb R\to\mathbb R$, it is tempting to think of $f(x)$ as the coordinate of the vector $f$ at the position $x$, in the same way we think of $v(2)=15$ as the coordinate of the vector $v=(7,15,11)\in\mathbb R^3$ at the position $2$. But this doesn't work: if the values $f(x)$ are our only coordinates, how could we "reach" $f=\sin$? We would need to make a "trip" from $0$ to $\sin(x)$ at every $x$, and this involves infinitely many trips, which we're not allowed to do by the definition of a basis. The problem is that even more coordinates than just the $f(x)$ are needed in order to specify a function $f$, or as Agustí Roig put it: the functions $e_x$ (in the notation from his post) are not a basis! It's difficult to visualize any basis for the vector space of all functions $\mathbb R\to\mathbb R$: in fact, one needs the axiom of choice to prove that there exists a basis, and no concrete example can be given. You will have to look at another space if you want to be able to better visualize the coefficients of the vectors. One example is the space $V_0$ of all functions $f:\mathbb R\to\mathbb R$ such that $f(x)=0$ for all but finitely many $x$. Then, in fact, you can view $f(x)$ as the coordinate of $f$ at the position $x$. To reach any function $f\in V_0$, you need to make only a finite number of "trips".
https://www.physicsforums.com/threads/two-sliders-work-and-energy.308940/ | # Homework Help: Two Sliders, work and energy
1. Apr 21, 2009
### dietwater
1. The problem statement, all variables and given/known data
Each of the sliders A and B has a mass of 2 kg and moves with negligible friction in its
respective guide, with y being in the vertical direction (see Figure 3). A 20 N horizontal force
is applied to the midpoint of the connecting link of negligible mass, and the assembly is
released from rest with θ = 0°. Determine the velocity vA with which slider A strikes the
horizontal guide when θ = 90°.
[vA = 3.44 m/s]
2. Relevant equations
1/2 mv^2
F = ma
Wp = mgh
SUVAT
3. The attempt at a solution
When at 0 degrees
W=0J
At 90
F=20N
W = 20xd = 8J
Work from cart A = 0.5mv^2
Therefore 16 = mv^2
v = 2 rt2
Or...
do i need to add the energy from 20n force and from cart b...
0.5mv^2 (b) + 8J = 0.5mv^2 (A)
with F = ma, 20/10 a = 10 therefore v (b) = 2 rt 2
sub this into above eq.
8 + 8 = 0.5mv^2
v = 4
Help!
I've been going round in circles, clearly I'm wrong lol. Can someone explain how I could work this out please?
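For what it's worth, here is a Python sketch of the energy balance that reproduces the quoted answer. The link length is not given in this excerpt, so b = 0.4 m is an assumption chosen to match Figure 3's geometry; the kinematics assumed are that A rides the vertical guide and B the horizontal one, so as θ goes from 0° to 90° slider A falls b, the link midpoint advances b/2 horizontally, and B is momentarily at rest at θ = 90° (all kinetic energy is in A):

```python
import math

m = 2.0    # mass of each slider in kg (given)
F = 20.0   # horizontal force in N (given)
g = 9.81   # m/s^2
b = 0.4    # link length in m -- ASSUMED, not stated in this excerpt

# Work-energy: U_F + U_gravity = (1/2) m vA^2, since vB = 0 at theta = 90 deg
work = F * (b / 2) + m * g * b
vA = math.sqrt(2 * work / m)
print(f"vA = {vA:.2f} m/s")   # 3.44 m/s, matching the quoted answer
```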
https://www.arxiv-vanity.com/papers/hep-th/0508134/ | # To the Fifth Dimension and Back
Raman Sundrum
Department of Physics and Astronomy
The Johns Hopkins University
3400 North Charles Street
Baltimore, MD 21218, USA
###### Abstract
Introductory lectures on Extra Dimensions delivered at TASI 2004.
## 1 Introduction
There are several significant motivations for studying field theory in higher dimensions: (i) We are explorers of spacetime structure, and extra spatial dimensions give rise to one of the few possible extensions of relativistic spacetime symmetry. (ii) Extra dimensions are required in string theory as the price for taming the bad high energy behavior of quantum gravity within a weakly coupled framework. (iii) Extra dimensions give rise to qualitatively interesting mechanisms within effective field theory that may play key roles in our understanding of Nature. (iv) Extra dimensions can be a type of “emergent” phenomenon, as best illustrated by the famous AdS/CFT correspondence.
These lectures are intended to provide an introduction, not to the many attempts at realistic extra-dimensional model-building, but rather to central qualitative extra-dimensional mechanisms. It is of course hoped that by learning these mechanisms in their simplest and most isolated forms, the reader is well-equipped to work through more realistic incarnations and combinations, in the literature, or better yet, as products of their own invention. (Indeed, to really digest these lectures, the reader must use them to understand some particle physics models and issues. The other TASI lectures are a good place to start.) When any of the examples in the lectures yields a cartoon of the real world, or a cartoon solution to real world problems, I point this out.
The lectures are organized as follows. Section 2 gives the basic language for dealing with field theory in the presence of extra dimensions, “compactified” in order to hide them at low energies. It is also shown how particles of different spins in four dimensions can be unified within a single higher-dimensional field. Section 3 illustrates the “chirality problem” associated with fermions in higher dimensions. Section 4 illustrates the emergence of light scalars from higher dimensional theories without fundamental scalars, computes quantum corrections to the scalar mass (potential), and assesses how natural these light scalars are. Section 5 describes how extra dimensional boundaries (and boundary conditions) can be derived from extra dimensional spaces without boundary, by the procedure of “orbifolding”. It is shown how the chirality problem can thereby be solved. The localization of some fields to the boundaries is illustrated. Section 6 describes the matching of the higher dimensional couplings to the effective four-dimensional long-distance couplings. In Section 7, the issue of non-renormalizability of higher-dimensional field theory is discussed and the scale at which a UV completion is required is identified. Higher-dimensional General Relativity is discussed in Section 8, in particular the emergence of extra gauge fields at low energies as well as scalar “radion” fields (or “moduli”) describing massless fluctuations in the extra-dimensional geometry. Section 9 illustrates how moduli may be stabilized to fix the extra-dimensional geometry at low energies. Section 10 describes the unsolved Cosmological Constant Problem as well as the less problematic issue of having a higher-dimensional cosmological constant. Section 11 shows that a higher dimensional cosmological constant leads to “warped” compactifications, as well as the phenomenon of “gravity localization”. Section 12 shows that strongly warped compactifications naturally lead to hierarchies in the mass scales appearing in the low energy effective four-dimensional description. Section 13 shows that when warped hierarchies are used to generate the Planck/weak-scale hierarchy, the extra-dimensional graviton excitations are much more strongly coupled to matter than the massless graviton of nature, making them observable at colliders. Section 14 shows how flavor hierarchies and flavor protection can arise naturally in warped compactification, following from a study of higher-dimensional fermions. Section 15 studies features of gauge theory, including the emergence of light scalars, in warped compactifications.
The TASI lectures of Ref. [1] and Ref. [2], and the Cargese lectures of Ref. [3], while overlapping with the present lectures, also contain complementary topics and discussion. The central qualitative omissions in the present lectures are supersymmetry, which can combine with extra dimensions in interesting ways (see the TASI lectures of Refs. [1] and [4]), a longer discussion of the connection of extra dimensions to string theory [5] [6], a discussion of fluctuating “branes” (see Refs. [1] and [3]), and the (very illuminating) AdS/CFT correspondence between some warped extra-dimensional theories and some purely four-dimensional theories with strong dynamics [7] [8] [9]. Phenomenologically, there is no discussion of the “Large Extra Dimensions” scenario [10], although these lectures will equip the reader to easily understand it.
The references included are meant to be useful and to act as gateways to the broader literature. They are not intended to be a complete set. I have taken moderate pains to get incidental numbers right in the notes, but I am fallible. I have taken greater pains to ensure that important numbers, such as exponents, are correct.
## 2 Compactification and Spin Unification
Let us start by considering Yang-Mills (YM) theory in five-dimensional (5D) Minkowski spacetime (our metric signature convention throughout these lectures is $\eta_{MN} = \mathrm{diag}(+,-,-,-,-)$), in particular with all dimensions being infinite in size,

$$S = \mathrm{Tr}\int d^4x \int dx_5 \left\{-\frac{1}{4}F_{MN}F^{MN}\right\} = \mathrm{Tr}\int d^4x \int dx_5 \left\{-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}F_{\mu 5}F^{\mu 5}\right\}, \ \ \ \ (2.1)$$

where $M,N = 0,1,2,3,5$ are 5D indices, while $\mu,\nu = 0,1,2,3$ are 4D indices. We use matrix notation for $SU(2)$, so that the gauge field is $A_M \equiv A^a_M\,\sigma^a/2$, where the $\sigma^a$ are the isospin Pauli matrices. We will study this theory in an axial gauge, $A_5 = 0$. To see that this is a legitimate gauge, imagine that $A_M$ is in a general gauge and consider a gauge transformation,
$$A'_M \equiv \frac{i}{g}\,\Omega^{-1}D_M\Omega, \qquad \Omega(x^\mu,x_5)\in SU(2), \ \ \ \ (2.2)$$

where $g$ is the gauge coupling. It is always possible to find $\Omega$, such that $A'_5 = 0$.

Ex. Check that this happens for $\Omega(x^\mu,x_5) = P\,e^{\,ig\int_0^{x_5}dx'_5\,A_5(x^\mu,x'_5)}$, where $P$ represents the path-ordering of the exponential.
Ex. Check that in this gauge,
$$S=\mathrm{Tr}\int d^4x\int dx_5\left\{-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}(\partial_5 A_\mu)^2\right\}. \ \ \ \ (2.3)$$

Let us now compactify the fifth dimension to a circle, so that $x_5 \equiv R\phi$, where $R$ is the radius of the circle and $\phi$ is an angular coordinate, $\phi \equiv \phi + 2\pi$. See Fig. 1.
We can Fourier expand the gauge field in this coordinate,
$$A_\mu(x^\mu,\phi)=A^{(0)}_\mu(x)+\sum_{n=1}^{\infty}\left(A^{(n)}_\mu(x)\,e^{in\phi}+\mathrm{h.c.}\right). \ \ \ \ (2.4)$$

But now we can no longer go to axial gauge; in general our above $\Omega$ will not be $2\pi$-periodic. The best we can do is go to an “almost axial” gauge where $A_5$ is $\phi$-independent, $A_5(x,\phi)\equiv A^{(0)}_5(x)$, where the action can be written

$$S = \mathrm{Tr}\int d^4x\int_{-\pi}^{\pi}d\phi\,R\left\{-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}\big(D_\mu A^{(0)}_5\big)^2+\frac{1}{2}(\partial_5 A_\mu)^2\right\} \ \ \ \ (2.5)$$
$$= 2\pi R\,\mathrm{Tr}\int d^4x\Big\{-\frac{1}{2}\big(\partial_\mu A^{(0)}_\nu-\partial_\nu A^{(0)}_\mu\big)^2+\frac{1}{2}\big(\partial_\mu A^{(0)}_5\big)^2+\sum_{n=1}^{\infty}\Big[-\frac{1}{2}\big|\partial_\mu A^{(n)}_\nu-\partial_\nu A^{(n)}_\mu\big|^2+\frac{n^2}{R^2}\big|A^{(n)}_\mu\big|^2\Big]+O(A^3)\Big\},$$

showing that the 5D theory is equivalent to a 4D theory with an infinite tower of 4D fields, with masses $m_n = n/R$. This rewriting of 5D compactified physics is called the Kaluza-Klein (KK) decomposition.
Ex. Show that if $A_M$ is in a general gauge it can be brought to almost axial gauge via the (periodic) gauge transformation

$$\Omega(x,\phi)\equiv P\,e^{\,ig\int_0^{\phi}d\phi'\,R\,A_5(x,\phi')}\,e^{-igA^{(0)}_5(x)R\phi}. \ \ \ \ (2.6)$$

Note that the sum over $n$ of the fields in any interaction term must be zero, since this is just conservation of fifth dimensional momentum, where for convenience we define the complex conjugate modes, $A^{(-n)}_\mu \equiv A^{(n)\dagger}_\mu$, to be the modes corresponding to negative $n$. In this way a spacetime symmetry and conservation law appears as an internal symmetry in the 4D KK decomposition, with internal charges, $n$.

Since all of the $n \neq 0$ modes have 4D masses of at least $1/R$, we can write a 4D effective theory valid below $1/R$ involving just the light modes. Tree level matching yields
$$S_{\mathrm{eff}}\underset{E\ll 1/R}{\sim}2\pi R\,\mathrm{Tr}\int d^4x\left\{-\frac{1}{4}F^{(0)}_{\mu\nu}F^{(0)\mu\nu}+\frac{1}{2}\big(D_\mu A^{(0)}_5\big)^2\right\}. \ \ \ \ (2.7)$$

The leading (renormalizable) non-linear interactions follow entirely from the 4D gauge invariance which survives almost axial gauge fixing. We have a theory of a 4D gauge field and a gauge-charged 4D scalar, unified in their higher-dimensional origins. This unification is hidden along with the extra dimension at low energies, but for $E \gtrsim 1/R$ the tell-tale “Kaluza-Klein” (KK) excitations are accessible, and the full story can be reconstructed in principle.

Ex. Check that almost axial gauge is preserved by 4D gauge transformations, $\Omega(x^\mu)$ (independent of $\phi$).
Our results are summarized in Fig. 2.
## 3 5D Fermions and the Chirality Problem
To proceed we need a representation of the 5D Clifford algebra, $\{\Gamma^M,\Gamma^N\}=2\eta^{MN}$. This is straightforwardly provided by

$$\Gamma^\mu\equiv\gamma^\mu, \qquad \Gamma^5\equiv-i\gamma_5, \ \ \ \ (3.1)$$

where the $\gamma$’s are the familiar 4D Dirac matrices. Therefore, 5D fermions are necessarily 4-component spinors. We decompose them as

$$\Psi_\alpha(x,\phi)=\sum_{n=-\infty}^{\infty}\Psi^{(n)}_\alpha(x)\,e^{in\phi}. \ \ \ \ (3.2)$$
Plugging this into the 5D Dirac action gives
$$S_\Psi = \int d^4x\int dx_5\,\overline{\Psi}\big(iD_M\Gamma^M-m\big)\Psi \ \ \ \ (3.3)$$
$$= \int d^4x\int dx_5\,\Big[\overline{\Psi}\big(iD_\mu\gamma^\mu-m\big)\Psi-\overline{\Psi}\gamma_5\partial_5\Psi+ig\,\overline{\Psi}A_5\gamma_5\Psi\Big]$$
$$= 2\pi R\int d^4x\sum_{n=-\infty}^{\infty}\overline{\Psi}^{(n)}\Big(i\gamma^\mu\partial_\mu-m-i\frac{n}{R}\gamma_5\Big)\Psi^{(n)}+O(\overline{\Psi}A\Psi).$$

We see that we get a tower of 4D Dirac fermions labelled by integer $n$ (no longer positive), with physical masses,

$$m^2_{\mathrm{phys}}=m^2+\frac{n^2}{R^2}. \ \ \ \ (3.4)$$

For small $mR$, this is illustrated in Fig. 3.
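As a quick numerical illustration of eq. (3.4), here is a Python sketch (added here, not part of the original lectures; the values of $m$ and $1/R$ are arbitrary placeholders):

```python
import numpy as np

R_inv = 1.0              # compactification scale 1/R (placeholder units)
m = 0.2 * R_inv          # small 5D Dirac mass, m << 1/R
n = np.arange(-3, 4)     # a few KK levels

masses = np.sqrt(m**2 + (n * R_inv) ** 2)   # eq. (3.4)
for level, mass in zip(n, masses):
    print(f"n = {level:+d}:  m_phys = {mass:.3f} / R")
```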
These fermions are coupled to the gauge field KK tower, again with all interactions conserving 5D momentum, the sum over $n$ of all 4D fields in an interaction adding up to zero.
At low energies, $E \ll 1/R$, we can again get a 4D effective action for the light modes,

$$S_{\mathrm{eff}}\underset{E,\,m\ll 1/R}{=}2\pi R\int d^4x\left\{\overline{\Psi}^{(0)}\big(i\gamma^\mu D_\mu-m\big)\Psi^{(0)}+ig\,\overline{\Psi}^{(0)}\gamma_5 A^{(0)}_5\Psi^{(0)}\right\}, \ \ \ \ (3.5)$$

where the covariant derivative contains only the gauge field $A^{(0)}_\mu$. Note that we also have a Yukawa coupling to the 4D scalar, $A^{(0)}_5$, of the same strength as the gauge coupling, so-called gauge-Yukawa unification. The idea that the Higgs particle may originate from extra-dimensional components of gauge fields was first discussed in Refs. [11].

An unattractive feature in this cartoon of the real world, emerging below $1/R$, is that the necessity of having Dirac 4-component spinor representations of 5D Lorentz invariance has resulted in having 4-component non-chiral 4D fermion zero-modes. The Standard Model is however famously a theory of chiral Weyl 2-component fermions. Even as a cartoon this looks worrying. This general problem in theories of higher dimensions is called the “chirality problem” and we will return to deal with it later.
## 4 Light Scalar Quantum Corrections
Given that light scalars are unnatural in (non-supersymmetric) quantum field theories, it is rather surprising to see a massless 4D scalar, $A^{(0)}_5$, emerge from higher dimensions. Of course, we should consider quantum corrections to our classical story and see what happens to the scalar mass. From a purely 4D effective field theory viewpoint we would expect there to be large divergent corrections to the scalar mass coming from its gauge and Yukawa couplings, from diagrams such as Fig. 4,

$$\delta m^2_{\mathrm{scalar}}\sim\frac{g_4^2}{16\pi^2}\,\Lambda^2_{UV}, \ \ \ \ (4.1)$$

suggesting that the scalar is naturally very heavy. But from the 5D viewpoint $A^{(0)}_5$ is massless because it is part of a 5D gauge field, whose mass is protected by 5D gauge invariance. So the question is which viewpoint is correct?

To find out let us first compute the 1-fermion-loop effective potential for $A^{(0)}_5$ [12]. For this purpose we treat $A^{(0)}_5$ as completely constant, and define $a \equiv gA^{(0)}_5$. Then,
$$S_\Psi=2\pi R\int d^4x\sum_n\overline{\Psi}^{(n)}(x)\Big[i\slashed{\partial}-m-i\Big(\frac{n}{R}-a\Big)\gamma_5\Big]\Psi^{(n)}(x), \ \ \ \ (4.2)$$

where

$$\slashed{\partial}\equiv\gamma^\mu\partial_\mu. \ \ \ \ (4.3)$$

Since $a$ is constant,

$$S_\Psi=2\pi R\int\frac{d^4p}{(2\pi)^4}\sum_n\overline{\Psi}^{(n)}(p)\Big[\slashed{p}-m-i\Big(\frac{n}{R}-a\Big)\gamma_5\Big]\Psi^{(n)}(p). \ \ \ \ (4.4)$$

After Wick rotating, this gives

$$S^E_\Psi=\sum_n 2\pi R\int\frac{d^4p}{(2\pi)^4}\overline{\Psi}^{(n)}(p)\Big[\slashed{p}+im+\Big(\frac{n}{R}-a\Big)\gamma_5\Big]\Psi^{(n)}(p). \ \ \ \ (4.5)$$

Integrating out the fermions by straightforward Gaussian Grassmann integration,

$$e^{-V_{\mathrm{eff}}}=\prod_{p,n}\det\Big[\slashed{p}+im+\Big(\frac{n}{R}-a\Big)\gamma_5\Big]=\prod_{p,n}\Big[p^2+m^2+\Big(\frac{n}{R}-a\Big)^2\Big]^2. \ \ \ \ (4.6)$$
From now on, I will simplify slightly by considering a $U(1)$ gauge group rather than $SU(2)$. All subtleties will come from finite $R$, so we focus on

$$\frac{\partial V_{\mathrm{eff}}}{\partial R}=-\sum_n\int\frac{d^4p}{(2\pi)^4}\,\mathrm{tr}\Bigg[-\frac{n}{R^2}\gamma_5\,\frac{1}{\slashed{p}+im+\big(\frac{n}{R}-a\big)\gamma_5}\Bigg] \ \ \ \ (4.7)$$
$$=\sum_n\int\frac{d^4p}{(2\pi)^4}\,\frac{n}{R^2}\,\frac{4\big(\frac{n}{R}-a\big)}{p^2+m^2+\big(\frac{n}{R}-a\big)^2}$$
$$=\sum_n\int\frac{d^4p}{(2\pi)^4}\,\frac{4n(n-a)}{p^2+(n-a)^2+m^2},$$

where we have gone to units $R=1$ in the last line.

Naively, this integral and sum over $n$ is quintically divergent! So let us carefully regulate the calculation by adding Pauli-Villars fields, in a 5D gauge-invariant manner. These fields have the same quantum numbers as $\Psi$, but have cut-off size masses $\Lambda_i\sim\Lambda$, some with Bose rather than Fermi statistics. Thereby,

$$\frac{\partial V_{\mathrm{eff}}}{\partial R}=\sum_n\int\frac{d^4p}{(2\pi)^4}\Bigg[\frac{4n(n-a)}{p^2+(n-a)^2+m^2}+\sum_i\eta_i\,\frac{4n(n-a)}{p^2+(n-a)^2+\Lambda_i^2}\Bigg],\qquad\eta_i=\pm 1. \ \ \ \ (4.8)$$

The regulator terms resemble the physical term except for having cutoff size masses and with signs (determined by the statistics of the regulator field) chosen in such a way that the entire expression converges. The big trick for doing the sum on $n$ is to replace it by a contour integral,

$$\frac{\partial V_{\mathrm{eff}}}{\partial R}=\int\frac{d^4p}{(2\pi)^4}\oint_C dz\,\frac{1}{e^{2\pi iz}-1}\left(\frac{4z(z-a)}{p^2+(z-a)^2+m^2}+\mathrm{Reg.}\right), \ \ \ \ (4.9)$$
where the contour is shown in Fig. 5,
following from the simple poles of the factor $1/(e^{2\pi iz}-1)$ at integer $z$ and from the residue theorem. The semi-circles at infinity needed to have a closed contour are irrelevant because the integrand vanishes rapidly enough there, precisely because of the addition of the regulator terms. We can deform the contour to that shown in Fig. 6

without encountering any singularities of the integrand, so that by the residue theorem,

$$\frac{\partial V_{\mathrm{eff}}}{\partial R}=-4\pi i\int\frac{d^4p}{(2\pi)^4}\Bigg[\frac{a+i\sqrt{p^2+m^2}}{e^{2\pi ia}e^{-2\pi\sqrt{p^2+m^2}}-1}+\frac{a-i\sqrt{p^2+m^2}}{e^{2\pi ia}e^{2\pi\sqrt{p^2+m^2}}-1}+\mathrm{Reg.}\Bigg]. \ \ \ \ (4.10)$$
∂Veff∂R = 4π∫d4p(2π)4[√p2+m2−iae2πiae−2π√p2+m2−1−√p2+m2+iae2πiae2π√p2+m2−1 (4.11) +(√p2+m2−ia)⎛⎝e2πiae−2π√p2+m2−1e2πiae−2π√p2+m2−1⎞⎠
where we have just added and subtracted the same quantity in the last two terms (not counting the regulator terms). Note that the overbraced terms cancel out, leaving
∂Veff∂R = 4π∫d4p(2π)4[(−√p2+m2−iae−2πiaRe2πR√p2+m2−1)+c.c. (4.12) −(√p2+m2−ia)]+Reg.
where we have put back $R$ explicitly, by dimensional analysis.
Now let us integrate with respect to $R$,

$$V_{\mathrm{eff}}=\int\frac{d^4p}{(2\pi)^4}\Big\{-4\,\mathrm{Re}\ln\Big(1-e^{-2\pi R\sqrt{p^2+m^2}}e^{2\pi iaR}\Big)-4\pi R\Big(\sqrt{p^2+m^2}-ia\Big)\Big\}+\mathrm{Reg.}+\mathrm{irrelevant\ const.} \ \ \ \ (4.13)$$
In the $R\to\infty$ limit, $V_{\mathrm{eff}}$ must be independent of $a$, since certainly all potential terms for gauge fields vanish by gauge invariance as usual. This yields the identity,

$$V_{\mathrm{eff}}\underset{R\to\infty}{\longrightarrow}-4\pi R\int\frac{d^4p}{(2\pi)^4}\Big(\sqrt{p^2+m^2}-ia\Big)+\mathrm{Reg.}\equiv\Lambda R, \ \ \ \ (4.14)$$

where $\Lambda$ is a constant independent of $R$ and $a$.
Ex. Directly show the cancellation of $a$-dependence in the right hand side of eq. (4.14) by carefully writing out the regulator terms.
Using this identity in eq. (4.13) yields

$$V_{\mathrm{eff}}=\Lambda R-4\int\frac{d^4p}{(2\pi)^4}\,\mathrm{Re}\ln\Big(1-e^{-2\pi R\sqrt{p^2+m^2}}e^{2\pi iaR}\Big)+\mathrm{Reg.} \ \ \ \ (4.15)$$

This formula has some remarkable properties. The first term is indeed highly cutoff dependent, but it does not depend on $a$. The integrand of the second term behaves as $e^{-2\pi R\sqrt{p^2+m^2}}$ for large $p$ and therefore the integrals converge. The regulator terms are suppressed by factors of $e^{-2\pi R\Lambda}$ and can be completely neglected for $\Lambda\gg 1/R$ (or more formally, for $\Lambda\to\infty$). We therefore drop the $\Lambda$-dependent regulator terms from now on.
Finally, combining complex exponentials we arrive at our final result,

$$V_{\mathrm{eff}}=\Lambda R-2\int\frac{d^4p}{(2\pi)^4}\ln\Big(1+e^{-4\pi R\sqrt{p^2+m^2}}-2e^{-2\pi R\sqrt{p^2+m^2}}\cos\big(2\pi RgA^{(0)}_5\big)\Big), \ \ \ \ (4.16)$$
For small , this can be approximated,
Veff ∼ ΛR+∫d4p(2π)4{−4ln(1−e−2πR√p2+m2) −(2πRgA(0)5)2⎡⎢ ⎢ ⎢⎣e−2πR√p2+m2(1−e−2πR√p2+m2)2⎤⎥ ⎥ ⎥⎦ +(2πRgA(0)5)4⎡⎢ ⎢ ⎢⎣e−2πR√p2+m26(1−e−2πR√p2+m2)2+e−4πR√p2+m2(1−e−2πR√p2+m2)4⎤⎥ ⎥ ⎥⎦}.
We see immediately that the vacuum has non-vanishing $\langle A^{(0)}_5\rangle$,
$$\langle A^{(0)}_5\rangle\sim\frac{1}{Rg}, \ \ \ \ (4.18)$$
for $m \lesssim 1/R$.
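One can check eq. (4.16) numerically. The following Python sketch (added here, not part of the original lectures; units $R = 1$, $m = 0$) integrates the $a$-dependent part of the potential, using $\int d^4p/(2\pi)^4 \to \int p^3\,dp/(8\pi^2)$, and scans $\theta \equiv 2\pi RgA^{(0)}_5$; the minimum sits at $\theta = \pi$, i.e. $\langle A^{(0)}_5\rangle = 1/(2gR)$, consistent with eq. (4.18):

```python
import numpy as np
from scipy.integrate import quad

def V(theta, R=1.0, m=0.0):
    """a-dependent part of eq. (4.16); theta = 2*pi*R*g*A5."""
    def integrand(p):
        E = np.sqrt(p**2 + m**2)
        return -2.0 * p**3 / (8 * np.pi**2) * np.log(
            1 + np.exp(-4 * np.pi * R * E)
            - 2 * np.exp(-2 * np.pi * R * E) * np.cos(theta))
    return quad(integrand, 0.0, 10.0 / R)[0]

thetas = np.linspace(0.0, 2 * np.pi, 101)
values = [V(t) for t in thetas]
print(thetas[int(np.argmin(values))] / np.pi)   # -> 1.0, i.e. theta = pi
```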
Let us now return from considering a $U(1)$ gauge group back to $SU(2)$. Nothing much changes as far as the loop contribution we have just considered ($\cos(2\pi RgA^{(0)}_5)$ is just to be replaced by $\mathrm{tr}\cos(2\pi RgA^{(0)}_5)$, where the trace is over gauged isospin) but now there are also diagrams involving gauge loops which contribute to the effective potential. See Fig. 8.
By similar methodology, these give a contribution illustrated in Fig. 9.
We see that there is a competition now between the contribution from gauge loops, which prefers a vacuum at $\langle A^{(0)}_5\rangle=0$, versus the fermion loops, which prefer a vacuum at $\langle A^{(0)}_5\rangle\neq 0$. But clearly if we include sufficiently many identical species of $\Psi$, their contribution must dominate, and $\langle A^{(0)}_5\rangle\neq 0$. Since $A^{(0)}_5$ is an isovector, a non-zero expectation necessarily breaks the gauge group down to $U(1)$. One can think of this as a caricature of electroweak symmetry breaking where the preserved $U(1)$ is electromagnetism and $A^{(0)}_5$ is the Higgs field! We refer to it as “radiative symmetry breaking” (also the “Hosotani mechanism” [12]) because it is a loop effect that sculpted out the symmetry breaking potential.
In this symmetry breaking vacuum or Higgs phase, we can easily estimate the physical mass spectrum,
$$m_{\gamma^{(0)}}=0,\quad m_{W^{\pm(0)}}\sim\frac{1}{R},\quad m_{\Psi^{(0)}}\sim\sqrt{m^2+\frac{1}{R^2}}\underset{m\to 0}{\longrightarrow}\frac{1}{R},\quad m_{KK}\sim\frac{1}{R},\quad m^2_{\text{“Higgs”}}\sim\frac{g^2}{32\pi^3R^3}. \ \ \ \ (4.19)$$

Now this is certainly an interesting story theoretically, but it is surely dangerous to imagine anything like this happening in the real world, because we are predicting $m_{W^\pm}\sim m_{KK}$, and such light KK states should already have been seen. However, there is a simple way to make the KK scale significantly larger than $m_{W^\pm}$, by making

$$\langle A^{(0)}_5\rangle\ll\frac{1}{Rg}. \ \ \ \ (4.20)$$

Note that for small $a \equiv gA^{(0)}_5$ we have

$$V_{\mathrm{eff}}=V^{\Psi\text{-loop}}_{\mathrm{eff}}+V^{\text{gauge-loop}}_{\mathrm{eff}}\underset{\text{small }a}{\sim}\Lambda R+\big[c_1-c_2(m)N\big]\frac{(Ra)^2}{R^4}+\big[c_3+c_4(m)N\big]\frac{(Ra)^4}{R^4}, \ \ \ \ (4.21)$$

where the $c_i$’s are order one and positive, and depend on the 5D fermion mass $m$, and $N$ is the number of species of fermions. Now let us tune to achieve

$$-c_1+c_2(m)N\equiv\varepsilon\ll 1,\qquad c_3+c_4(m)N\sim O(1), \ \ \ \ (4.22)$$
from which it follows that there is a local minimum of the effective potential (a possibly cosmologically stable, false vacuum) with
$$A^{(0)}_5\sim\frac{\sqrt{\varepsilon}}{gR}. \ \ \ \ (4.23)$$
This yields the hierarchy,
$$m_{W^\pm}\sim\frac{\sqrt{\varepsilon}}{R}\sim\sqrt{\varepsilon}\,m_{KK}. \ \ \ \ (4.24)$$
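A one-line numerical check of the tuning in eqs. (4.21)-(4.24) (a Python sketch added here, not from the lectures; the quadratic and quartic coefficients are set to $\varepsilon$ and 1, in units $R = 1$):

```python
from scipy.optimize import minimize_scalar

eps = 0.01                           # tuned coefficient, eq. (4.22)
V = lambda x: -eps * x**2 + x**4     # (Ra)^2 and (Ra)^4 terms of eq. (4.21)
x_min = minimize_scalar(V, bounds=(1e-6, 1.0), method="bounded").x
print(x_min, (eps / 2) ** 0.5)       # Ra ~ sqrt(eps/2), so m_W/m_KK ~ sqrt(eps)
```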
## 5 Orbifolds and Chirality
If we ask whether our results thusfar could be extended to a realistic model of nature, with the standard model as a low energy limit, we encounter some big problems, not just problems of detail:
a) The previously mentioned chirality problem.
b) Yukawa couplings of the standard model vary greatly. Our low energy fermion modes seem to have Yukawa couplings equal to their gauge coupling, a reasonable cartoon of the top quark but not of other real world fermions.
A very simple way of solving (a) is to replace the fifth dimensional circle by an interval. The two spaces can be technically related by realizing the interval as an “orbifold” of the circle. This is illustrated in Fig. 10,
where the points on the two hemispheres of the circle are identified. Mathematically, we identify the points at or with or . In this way the physical interval extends a length , half the circumference of our original circle. This identification is possible if we also assign a “parity” transformation to all the fields, which is respected by the dynamics (i.e. the action). The action we have considered above has such a parity, given by
$$P(x_5)=-x_5,\quad P(A_\mu)=+A_\mu,\quad P(A^{(0)}_5)=-A^{(0)}_5,\quad P(\Psi_L)=+\Psi_L,\quad P(\Psi_R)=-\Psi_R, \ \ \ \ (5.1)$$
precisely when the 5D fermion mass vanishes, $m=0$. We consider this case for now.
Ex. Check that the action is invariant under this parity transformation.
With such a parity transformation we continue to pretend to live on a circle, but with all fields satisfying
$$\Phi(x^\mu,-x_5)=P(\Phi)(x^\mu,x_5). \ \ \ \ (5.2)$$

That is, the degrees of freedom at $-x_5$ are merely a reflection of the degrees of freedom at $x_5$; they have no independent existence. Of course we also require circular periodicity,

$$\Phi(x^\mu,\phi+2\pi)=\Phi(x^\mu,\phi). \ \ \ \ (5.3)$$

These conditions specify “orbifold boundary conditions” on the interval, derived from the circle, which of course has no boundary.
We can write out the mode decompositions (in almost axial gauge) for all the fields subject to orbifold boundary conditions,
$$A_\mu(x,\phi)=\sum_{n=0}^{\infty}A^{(n)}_\mu(x)\cos(n\phi)$$
$$A_5(x,\phi)=0\qquad\leftarrow\text{Lost “Higgs”!}$$
$$\Psi_L(x,\phi)=\sum_{n=0}^{\infty}\Psi^{(n)}_L(x)\cos(n\phi)$$
$$\Psi_R(x,\phi)=\sum_{n=1}^{\infty}\Psi^{(n)}_R(x)\sin(n\phi)\qquad\leftarrow\text{Lost }\Psi^{(0)}_R! \ \ \ \ (5.4)$$
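The mode content in eq. (5.4) can be verified symbolically; here is a small sympy sketch (added here, not from the lectures) checking which Fourier modes survive each parity assignment:

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
n = 2  # any mode number behaves the same way

for field, parity in (("P = +1 (e.g. A_mu, Psi_L)", +1),
                      ("P = -1 (e.g. Psi_R)", -1)):
    for mode in (sp.cos(n * phi), sp.sin(n * phi)):
        allowed = sp.simplify(mode.subs(phi, -phi) - parity * mode) == 0
        print(field, mode, "allowed" if allowed else "projected out")
```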
One unfortunate consequence we see is that $A_5$ has no modes; in particular, orbifolding has eliminated our candidate Higgs! The good consequence is for the chirality problem: the massless right-handed fermion is eliminated, and only the massless left-handed fermion mode is left. The low energy effective theory below $1/R$ is just
$$S_{\rm eff} \;\overset{E \ll 1/R}{=}\; 2\pi R \int d^4x \left\{ -\frac{1}{4}\big(F^{(0)}_{\mu\nu}\big)^2 + \bar\Psi_L^{(0)} i D_\mu \gamma^\mu \Psi_L^{(0)} \right\}. \qquad (5.5)$$
With $SU(2)$ gauge group, if $\Psi$ is an isodoublet (so that $\Psi_L^{(0)}$ is an isodoublet), the only possible gauge invariant mass term for the light mode,
$$\Psi_L^{i\alpha}\Psi_L^{j\beta}\,\epsilon_{ij}\,\epsilon_{\alpha\beta}, \qquad (5.6)$$
vanishes by Fermi statistics (here $i,j$ are isospin and $\alpha,\beta$ spinor indices). Therefore we apparently have a chiral effective gauge theory below $1/R$. Unfortunately this theory is afflicted by a subtle non-perturbative "Witten anomaly", so the theory is really unphysical. However, if we consider $\Psi$ to be in the isospin $1$ representation, we again get a chiral gauge theory, but now not anomalous in any way.
Having seen that the chirality problem is soluble, we need to recover our Higgs field. (For discussion of related mechanisms and further references see the TASI review of Ref. [13].) To do this we must enlarge our starting gauge group, from
$$SU(2) \cong SO(3) \qquad (5.7)$$
to $SO(4)$. Gauge fields are conveniently thought of as anti-symmetric matrices, $A_M^{ij} = -A_M^{ji}$, in the fundamental gauge indices $i,j = 1,\dots,4$. For simplicity we choose fermions in the fundamental representation, $\Psi^i$. The action,
$$S = {\rm tr}\int d^4x \int dx^5 \Big\{ -\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \frac{1}{2}(\partial_5 A_\mu)^2 + \frac{1}{2}\big(D_\mu A_5^{(0)}\big)^2 + \bar\Psi i D_\mu \gamma^\mu \Psi - \bar\Psi \gamma_5 \partial_5 \Psi + ig\, \bar\Psi^i A_5^{ij(0)} \gamma_5 \Psi^j \Big\}, \qquad (5.8)$$
is invariant under the orbifold parity given by
$$P(A_\mu^{\hat i\hat j}) = +A_\mu^{\hat i\hat j}, \quad P(A_5^{\hat i\hat j}) = -A_5^{\hat i\hat j}, \quad P(\Psi_L^{\hat i}) = +\Psi_L^{\hat i}, \quad P(\Psi_R^{\hat i}) = -\Psi_R^{\hat i},$$
$$P(A_\mu^{\hat i 4}) = -A_\mu^{\hat i 4}, \quad P(A_5^{\hat i 4}) = +A_5^{\hat i 4}, \quad P(\Psi_L^{4}) = -\Psi_L^{4}, \quad P(\Psi_R^{4}) = +\Psi_R^{4}, \qquad (5.9)$$
where $\hat i, \hat j = 1, 2, 3$.
Ex. Check by mode decomposition that this leaves 4D massless fields,
$$A_\mu^{\hat i\hat j(0)}, \qquad A_5^{\hat i 4(0)}, \qquad \Psi_L^{\hat i(0)}, \qquad \Psi_R^{4(0)}, \qquad (5.10)$$
that is, a 4D $SO(3)$ gauge field, a 4D Higgs triplet of $SO(3)$, a left-handed fermion triplet of $SO(3)$, and a right-handed singlet of $SO(3)$.
This illustrates how (orbifold) boundary conditions on extra dimensions can break the gauge group of the bulk of the extra dimensions. The low-energy effective theory is given by
$$S_{\rm eff} \;\overset{E \ll 1/R}{=}\; 2\pi R \int d^4x \Big\{ -\frac{1}{4}F^{(0)}_{\mu\nu}F^{\mu\nu(0)} + \frac{1}{2}\big(D_\mu A_5^{\hat i 4(0)}\big)^2 + \bar\Psi_L^{\hat i(0)}\big(iD_\mu\gamma^\mu\Psi_L^{(0)}\big)^{\hat i} + \bar\Psi_R^{4(0)}\, i\partial_\mu\gamma^\mu \Psi_R^{4(0)}$$
$$\qquad\qquad + ig\big(\bar\Psi_L^{\hat i(0)} A_5^{\hat i 4(0)} \Psi_R^{4(0)} + \bar\Psi_R^{4(0)} A_5^{\hat i 4(0)} \Psi_L^{\hat i(0)}\big) \Big\}. \qquad (5.11)$$
This contains a 4D $SO(3)$ gauge theory with two different representations of Weyl fermions Yukawa-coupled to a Higgs field. This again bears some resemblance to the standard model if we think of the fermion as the left- and right-handed "top" quark. But what of the second problem we identified, (b), that the standard model contains some fermions with much smaller Yukawa couplings than gauge coupling? Such fermions can arise by realizing them very differently in the higher-dimensional set-up. The simplest example is illustrated in Fig. 11,
where beyond the fields we have thusfar considered, which live in the "bulk" of the 5D spacetime, there is a 4D Weyl fermion precisely confined to one of the 4D boundaries of the 5D spacetime, say $\phi = \pi$. It can couple to the gauge field evaluated at the boundary if it carries some non-trivial representation, say a triplet of $SO(3)$. This represents a second way in which the chirality problem can be solved: localization to a physical 4D subspace or "3-brane" (a "$p$-brane" has $p$ spatial dimensions plus time), in this case the boundary of our 5D spacetime. The new fermion has action,
$$S_\chi = \int d^4x\, \bar\chi_L^{\hat i}(x)\big[i\partial_\mu\,\delta^{\hat i\hat j} + g A_\mu^{\hat i\hat j}(x,\phi=\pi)\big]\gamma^\mu \chi_L^{\hat j}(x). \qquad (5.12)$$
At low energies, $E \ll 1/R$, this fermion will have the same gauge coupling as the bulk triplet, but it will have no Yukawa coupling, thereby giving a crude representation of a light fermion of the standard model.
Well, there are other tricks that one can add to get closer and closer to the real world. Ref. [14] gives a nice account of many model-building issues and further references. I want to move in a new direction.
## 6 Matching 5D to 4D couplings
Let us study how effective 4D couplings at low energies emerge from the starting 5D couplings. Returning to pure Yang-Mills on an extra-dimensional circle, we get a low-energy 4D theory,
$$S^4_{\rm eff} \;\overset{E \ll 1/R}{\sim}\; 2\pi R \int d^4x \left\{ -\frac{1}{4}F^{(0)}_{\mu\nu}F^{\mu\nu(0)} + \frac{1}{2}\big(D_\mu A_5^{(0)}\big)^2 \right\}. \qquad (6.1)$$
The fields are clearly not canonically normalized, even though the 5D theory we started with was canonically normalized. We can wavefunction renormalize the 4D effective fields to canonical form,
$$\varphi \equiv \sqrt{2\pi R}\, A_5^{(0)}, \qquad \bar A_\mu \equiv \sqrt{2\pi R}\, A_\mu^{(0)}, \qquad (6.2)$$
and see what has happened to the couplings,
$$S^4_{\rm eff} = 2\pi R \int d^4x \Big\{ -\frac{1}{4}\big(\partial_\mu A^{a(0)}_\nu - \partial_\nu A^{a(0)}_\mu - i g_5\,\epsilon^{abc} A^{b(0)}_\mu A^{c(0)}_\nu\big)^2 + \frac{1}{2}\big(\partial_\mu A_5^{(0)} - i g_5 A^{(0)}_\mu A_5^{(0)}\big)^2 \Big\} \qquad (6.3)$$
$$\phantom{S^4_{\rm eff}} = \int d^4x \Big\{ -\frac{1}{4}\Big(\partial_\mu \bar A^{a}_\nu - \partial_\nu \bar A^{a}_\mu - \frac{i g_5}{\sqrt{2\pi R}}\,\epsilon^{abc} \bar A^{b}_\mu \bar A^{c}_\nu\Big)^2 + \frac{1}{2}\Big(\partial_\mu \varphi - \frac{i g_5}{\sqrt{2\pi R}}\,\bar A_\mu \varphi\Big)^2 \Big\}.$$
From this we read off the effective 4D gauge coupling,
$$g_{4\,\rm eff} = \frac{g_5}{\sqrt{2\pi R}}. \qquad (6.4)$$
Ex. Check that this is dimensionally correct: 4D gauge couplings are dimensionless, while 5D gauge couplings have units of $1/\sqrt{\rm mass}$.
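One way the counting goes (a sketch in our own notation): demanding a dimensionless action, the gauge kinetic term in $d$ spacetime dimensions gives
$$[S] = 0 \;\Rightarrow\; \big[(\partial_M A_N)^2\big] = {\rm mass}^{\,d} \;\Rightarrow\; [A_M] = {\rm mass}^{(d-2)/2},$$
and matching the cubic term $g\,(\partial A)A^2$ against the same measure gives $[g] = {\rm mass}^{(4-d)/2}$: dimensionless in $d = 4$ and ${\rm mass}^{-1/2}$ in $d = 5$, consistent with eq. (6.4).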
For experimentally measured gauge couplings, roughly order one, we require
$$g_5 \sim \mathcal{O}\big(\sqrt{2\pi R}\big). \qquad (6.5)$$
## 7 5D Non-renormalizability
Now, having couplings with negative mass dimension is the classic sign of non-renormalizability, and, as you can easily check, it happens rather readily in higher-dimensional quantum field theory. There are various beliefs about non-renormalizable theories:
a) A non-renormalizable quantum field theory is an unmitigated disaster. Throw the theory away at once. Only a few people still hold to this incorrect viewpoint.
b) A non-renormalizable quantum field theory can only be used classically, for example in General Relativity, where Newton's constant $G_N$ has negative mass dimension. All quantum corrections give nonsense. This incorrect view is held by a surprisingly large number of people.
c) The truth (what I believe): Non-renormalizable theories with couplings $\kappa$ of negative mass dimension, $[\kappa] = {\rm mass}^{-|\Delta|}$, can make sense as effective field theories, working perturbatively in powers of the dimensionless small parameter $\kappa E^{|\Delta|}$, where $E$ is the energy scale of the process. To any fixed order in this expansion, one in fact has all the advantages of renormalizable quantum field theory. There are even meaningful finite quantum computations one can perform. In fact we have just done one in computing the quantum effective potential. But of course there is a price: the whole procedure breaks down once the formal small parameter is no longer small, $\kappa E^{|\Delta|} \sim 1$. At higher energies the effective field theory is useless and must be replaced by a more fundamental and better behaved description of the dynamics.
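A standard illustration from four dimensions (our example, not from these lectures): Fermi's four-fermion theory of the weak interactions has coupling $G_F \approx 1.2 \times 10^{-5}~{\rm GeV}^{-2}$, so the formal expansion parameter is roughly $G_F E^2$. Low-energy weak processes are computed reliably, order by order, in powers of $G_F E^2$, but the expansion visibly fails as $E$ approaches a few hundred GeV, where the effective theory must be replaced by the more fundamental electroweak theory with dynamical $W$ and $Z$ bosons.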
Ex. Learn (non-renormalizable) effective field theory at the systematic technical level as well as a way of thinking. A good place to start is the chiral Lagrangian discussion of soft pions in Ref. [15].
In more detail, perturbative expansions in effective field theory will have the expansion parameters above, divided by extra numerical factors such as $2$'s or $\pi$'s. These factors are parametrically order one, but enough of them can be quantitatively significant. These factors can be estimated from considerations of phase space. I will just put these factors in correctly without explanation.
Ex. Learn the art of naive dimensional analysis, including how to estimate the $2$'s and $\pi$'s (for some discussion in the extra-dimensional context see Ref. [16]). Use this in your work on extra dimensions.
Our findings so far are summarized in Fig. 12.
The non-renormalizable effective field theory of 5D gauge theory breaks down when the formal small parameter, $g_5^2 E$, gets large; that is, we can define a maximum cutoff on its validity, $\Lambda_{\rm max} \sim 1/g_5^2$ (up to $2$'s and $\pi$'s). The 5D effective theory cannot hold above this scale and must be replaced by a more fundamental theory. Let us say this happens at $\Lambda \lesssim \Lambda_{\rm max}$. From here down to $1/R$ we have 5D effective field theory, and below $1/R$ we have 4D effective field theory.
We found it interesting that a Higgs-like candidate emerged from 5D gauge fields because it suggested a way of keeping the 4D scalar naturally light, namely by identifying it as part of a higher-dimensional vector field. But given that $\Lambda_{\rm max} \sim 1/g_5^2 = 1/(2\pi R\, g_4^2)$, and 4D gauge couplings are measured to be not much smaller than one, we must ask how well this extra-dimensional picture is doing at addressing the naturalness problem of the Higgs. In Fig. 13
we make the comparison with purely 4D field theory with a UV cutoff imposed. We see that in the purely 4D scenario one naturally predicts a weak scale
## Multiplicity of solutions for non-local elliptic equations driven by the fractional Laplacian
Series:
CDSNS Colloquium
Tuesday, January 7, 2014 - 3:05pm
1 hour (actually 50 minutes)
Location:
Skiles 005
Beijing Normal University
We consider the semi-linear elliptic PDE driven by the fractional Laplacian: \begin{equation*}\left\{\begin{array}{ll} (-\Delta)^s u=f(x,u) & \hbox{in $\Omega$,} \\ u=0 & \hbox{in $\mathbb{R}^n\backslash\Omega$.} \\\end{array}\right.\end{equation*} An $L^{\infty}$ regularity result is given, using the De Giorgi-Stampacchia iteration method. By the Mountain Pass Theorem and some other nonlinear analysis methods, the existence and multiplicity of non-trivial solutions for the above equation are established. The validity of the Palais-Smale condition without the Ambrosetti-Rabinowitz condition for non-local elliptic equations is proved. Two non-trivial solutions are given under some weak hypotheses. Non-local elliptic equations with concave-convex nonlinearities are also studied, and the existence of at least six solutions is obtained. Moreover, a global result of Ambrosetti-Brezis-Cerami type is given, which shows that the effect of the parameter $\lambda$ in the nonlinear term changes considerably the nonexistence, existence and multiplicity of solutions.
Commit a1579b6e by Ralf Jung
### Be explicit about the CMRA on option
parent dccb4153
\section{COFE constructions}

\subsection{Trivial pointwise lifting}
The COFE structure on many types can be easily obtained by pointwise lifting of the structure of the components.
This is what we do for option $\maybe\cofe$, product $(M_i)_{i \in I}$ (with $I$ some finite index set), sum $\cofe + \cofe'$ and finite partial functions $K \fpfn \monoid$ (with $K$ infinite countable).

\subsection{Next (type-level later)}
Given a COFE $\cofe$, we define $\latert\cofe$ as follows (using a datatype-like notation to define the type):

...

The composition and core for $\cinr$ are defined symmetrically.
The remaining cases of the composition and core are all $\bot$.
Above, $\mval'$ refers to the validity of $\monoid_1$, and $\mval''$ to the validity of $\monoid_2$.

The step-indexed equivalence is inductively defined as follows:
\begin{mathpar}
\infer{x \nequiv{n} y}{\cinl(x) \nequiv{n} \cinl(y)}

\infer{x \nequiv{n} y}{\cinr(x) \nequiv{n} \cinr(y)}

\axiom{\bot \nequiv{n} \bot}
\end{mathpar}

We obtain the following frame-preserving updates, as well as their symmetric counterparts:
\begin{mathpar}
\inferH{sum-update}
...
\end{mathpar}
Crucially, the second rule allows us to \emph{swap} the ``side'' of the sum that the CMRA is on if $\mval$ has \emph{no possible frame}.

\subsection{Option}
The definition of the (CM)RA axioms already lifted the composition operation on $\monoid$ to one on $\maybe\monoid$.
We can easily extend this to a full CMRA by defining a suitable core, namely
\begin{align*}
\mcore{\mnocore} \eqdef{}& \mnocore & \\
\mcore{\maybe\melt} \eqdef{}& \mcore\melt & \text{If $\maybe\melt \neq \mnocore$}
\end{align*}
Notice that this core is total, as the result always lies in $\maybe\monoid$ (rather than in $\maybe{\maybe\monoid}$).

\subsection{Finite partial function}
\label{sec:fpfnm}
...
# Probability Seminar
## Topics
October 17, 1996
Y. L. Tong
Georgia Tech
#### Dimension-Reduction Inequalities for Exchangeable Random Variables, With Applications in Statistical Inference
Exchangeable random variables make frequent appearances in probability and statistics, and play a central role in Bayes theory, multiple comparisons, reliability theory, and certain other applications. This talk is concerned with a class of dimension-reduction inequalities for exchangeable random variables with selected applications to statistical inference problems. The proof of the main theorem depends on a moment inequality and de Finetti's theorem, which states that exchangeable random variables are conditionally i.i.d. random variables.
October 24, 1996
Doug Down
Georgia Tech
#### Stability and Monotone Properties of a Tandem Queueing Network under Window Flow Control
In this talk a network under window flow control is studied. The system is modelled as a (tandem) queueing network with two types of sources, one uncontrolled exogenous traffic and the other controlled. Window flow control operates on the following principle: the controlled source cannot send more than K packets without receiving an acknowledgement from the destination. The situation of interest in this work is that of the flow control being active, in which case the system may be modelled as a network in which exogenous traffic traverses the system as before but the controlled source can be replaced by a closed loop of K packets. Service at each of the servers is assumed to be FIFO. The stability for the system is examined with an emphasis on the situation in which the network dynamics may be described by a Markov process. It is found that the system is stable under the usual load condition (service rate greater than arrival rate) on the exogenous traffic, and in particular is independent of the window size K. Monotonicity properties of certain quantities in the system are identified, which may have implications for further analysis. Finally, the case in which the arrival and service processes are simply assumed to be stationary will be examined.
October 31, 1996
Tom Kurtz
University of Wisconsin
#### Martingale problems for partially observed Markov processes
We consider a Markov process $X$ characterized as a solution of a martingale problem with generator $A$. Let $Y(t)=\gamma (X(t))$. Assuming that we observe $Y$ but not $X$, then the fundamental problem of filtering is to characterize the conditional distribution $\pi_t(\Gamma )=P(X(t)\in\Gamma |{\cal F}^Y_t)$. Under very general conditions, the probability measure-valued process $\pi$ can be characterized as a solution of a martingale problem. Applications of the general result include a proof of uniqueness for the Kushner-Stratonovich equation for the conditional distribution of a signal observed in additive white noise, proofs of Burke's output theorem and an analogous theorem of Harrison and Williams for reflecting Brownian motion, conditions under which $Y$ is Markov, and proofs of uniqueness for measure-valued diffusions.
November 1, 1996
Reuven Y. Rubinstein
Technion, Israel
#### Optimization of Computer Simulation Models with Rare Events
Discrete event simulation systems (DESS) are widely used in many diverse areas such as computer-communication networks, flexible manufacturing systems, project evaluation and review techniques (PERT), and flow networks. Because of their complexity, such systems are typically analyzed via Monte Carlo simulation methods. This talk deals with optimization of complex computer simulation models involving rare events. A classic example is to find an optimal (s,S) policy in a multi-item, multicommodity inventory system, when quality standards require the backlog probability to be extremely small. Our approach is based on change of the probability measure techniques, also called likelihood ratio (LR) and importance sampling (IS) methods. Unfortunately, for arbitrary probability measures the LR estimators and the resulting optimal solution often tend to be unstable and may have large variances. Therefore, choice of the corresponding importance sampling distribution -- and in particular of its parameters -- in an optimal way is an important task. We consider the case where the IS distribution belongs to the same parametric family as the original (true) one and use the stochastic counterpart method to handle simulation based optimization models. More specifically, we use a two-stage procedure: at the first stage we identify (estimate) the optimal parameter vector of the IS distribution, and at the second the optimal solution of the underlying constrained optimization problem. Particular emphasis will be placed on estimation of rare events and on integration of the associated performance function into stochastic optimization programs. Supporting numerical results are provided as well.
November 7, 1996
Walter Philipp
University of Illinois Urbana-Champaign
#### Weak Dependence in Probability, Analysis, and Number Theory
In this talk we survey some of the basic facts on weak dependence, some results on sums of lacunary trigonometric series and their application to harmonic analysis and probabilistic number theory. Also, we will mention some new results on the domain of partial attraction of phi-mixing random variables. The talk will be accessible to non-experts and graduate students.
November 14, 1996
Raid Amin
University of West Florida
#### Some Control Charts Based on the Extremes
Howell (1949) introduced a Shewhart-type control chart for the smallest and largest observations. He showed that the proposed chart was useful for monitoring the process mean and process variability, and it allowed specification limits to be placed on the chart. We propose an exponentially weighted moving average (EWMA) control chart which is based on smoothing the smallest and largest observations in each sample. A two-dimensional Markov chain to approximate the Average Run Length is developed. A design procedure for the MaxMin EWMA control chart is given. The proposed MaxMin EWMA chart shows which parameters have changed, and in which direction the change occurred. The MaxMin EWMA can also be viewed as smoothed distribution-free tolerance limits. It is a control procedure that offers excellent graphical guidance for monitoring processes. A modified (two-sided) MaxMin chart is also discussed. Numerical results show that the MaxMin EWMA has very good ARL properties for changes in the mean and/or variability. The MaxMin chart has already been implemented at a local company with success.
November 21, 1996
Serguei Foss
Novosibirsk State University and Colorado State University
#### Coupling and Renovation
In the first part of the talk, we introduce notions of coupling (forward coupling) and strong coupling (backward coupling), and show the use of these notions in the stability study of Markov chains and of stochastically recursive sequences, and, in particular, in a simulation of the stationary distribution of a homogeneous discrete-time Markov Chain.
In the second part of the talk, we consider the following problem. Let $Y \equiv Y_0 \equiv \{ X_n, n \geq 0 \}$ be a sequence of random variables. For $k=1,2, \ldots$, put $Y_k = \{ X_{k+n}, n \geq 0\}$ and denote by $P_k$ the distribution of $Y_k$. When does there exist a probability measure $P$ such that $P_k \to P$ in the total variation norm?
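As a concrete illustration of the backward-coupling idea in the first part of the talk (our sketch, not part of the abstract; all names are ours): the Propp-Wilson "coupling from the past" scheme draws an exact sample from the stationary distribution of a finite ergodic Markov chain by running coupled copies from ever more remote past times until they coalesce by time 0.

```python
import random

def coupling_from_the_past(n_states, step, seed=0):
    """Exact draw from the stationary distribution of a finite ergodic chain.

    `step(state, u)` maps a state and a uniform [0,1) draw to the next state;
    the SAME u is applied to every state, which is the coupling."""
    rng = random.Random(seed)
    us = []    # us[t] is the shared draw used at time -(t+1); reused as T grows
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        states = list(range(n_states))   # one copy started in every state, at time -T
        for t in range(T - 1, -1, -1):   # run forward from time -T up to time 0
            u = us[t]
            states = [step(s, u) for s in states]
        if len(set(states)) == 1:        # coalesced: the common value at time 0 is exact
            return states[0]
        T *= 2                           # not coalesced: restart further in the past

# Example: lazy reflected random walk on {0, 1, 2}.
def walk(s, u):
    if u < 0.25:
        return max(s - 1, 0)
    if u < 0.50:
        return min(s + 1, 2)
    return s

print(coupling_from_the_past(3, walk))
```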
December 5, 1996
Minping Qian
Peking University, Beijing, China
#### An accelerated algorithm of Gibbs sampling
To overcome the difficulty of oscillation when the density of samples is peaky, a reversible calculation scheme is introduced. Theoretical discussion and calculation examples show that it does accelerate the calculations.
January 10, 1997
Andre Dabrowski
University of Ottawa
#### Statistical Analysis of Ion Channel Data
Ion channels are small pores present in the outer membranes of most biological cells. Their use by those cells in generating and transmitting electrical signals has made their study of considerable importance in biology and medicine. A considerable mathematical literature has developed on the analysis of the alternating on/off current signal generated by ion channels in "patch-clamp" experiments.
After a brief description of patch-clamp experiments and their associated data, we will provide an overview of the major approaches to the statistical analysis of current traces. The renewal-theoretic approach of Dabrowski, McDonald and Rosler (1990) will be described in greater detail, and applied to the analysis of data arising from an experiment on stretch-sensitive ion channels.
January 16, 1997
Christian Houdre
Georgia Tech
#### An Interpolation Formula and Its Consequences
We present an interpolation formula for the expectation of functions of Gaussian random vectors. This is then applied to present new correlation inequalities, comparison theorems and tail inequalities for various classes of functions of Gaussian vectors. This approach can be extended to the infinitely divisible and the discrete cube cases.
January 30, 1997
Ming Liao
Auburn University
#### Lévy processes on Lie groups and stability of stochastic flows
We consider stochastic flows generated by stochastic differential equations on compact manifolds which are contained in finite dimensional Lie transformation groups. Using a result for the limiting behavior of Lévy processes on Lie groups, we can decompose such a stochastic flow as a product of the following three transformations:
(1) a random "rotation" which tends to a limit as time goes to infinity;
(2) an asymptotically deterministic flow;
(3) another random "rotation".
Using this decomposition, we may describe the random "sinks" and "sources" of the stochastic flow explicitly. Examples of stochastic flows on spheres will be discussed.
February 6, 1997
Dana Randall
Georgia Tech
#### Testable algorithms for generating self-avoiding walks
We present a polynomial time Monte Carlo algorithm for almost uniformly generating and approximately counting self-avoiding walks in rectangular lattices. These are classical problems that arise, for example, in the study of long polymer chains. While there are a number of Monte Carlo algorithms used to solve these problems in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, our algorithm relies on a single, widely-believed conjecture that is simpler than preceding assumptions, and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. (Joint work with Alistair Sinclair.)
February 13, 1997
Indiana University
#### Models combining group symmetry and conditional independence in a multivariate normal distribution
Three of the most important concepts used in defining a statistical model are independence, conditional distributions, and symmetries. Statistical models given by a combination of two of these concepts, conditional distributions and independence, the so-called conditional independence models, have received increasing attention in recent years. The models are defined in terms of directed graphs, undirected graphs or a combination of the two, the so-called chain graphs.
This paper combines conditional independence (CI) restrictions with group symmetry (GS) restrictions to obtain the group symmetry conditional independence (GS-CI) models. The group symmetry models and the conditional independence models are thus special cases of the GS-CI models. A complete solution to the likelihood inference for the GS-CI models is presented.
Special examples of GS models are Complete Symmetry, Compound Symmetry, Circular Symmetry, Complex Normal Distributions, Multivariate Complete Symmetry, Multivariate Compound Symmetry, and Multivariate Circular Symmetry. When some of these simple GS models are combined with some of the simple CI models, numerous well-behaved GS-CI models can be presented.
February 27, 1997
Dimitris Bertsimas
Sloan School, MIT
#### Optimization of multiclass queueing networks via infinite linear programming and singular perturbation methods
We propose methods for optimization of multiclass queueing networks that model manufacturing systems. We combine ideas from optimization and partial differential equations.
The first approach aims to explore the dynamic character of the problem by considering the fluid model of the queueing network. We propose an algorithm that solves the fluid control problem based on infinite linear programming. Our algorithm is based on nonlinear optimization ideas, and solves large scale problems (50 station problems with several hundred classes) very efficiently.
The second approach aims to shed light on the question of how stochasticity affects the character of optimal policies. We use singular perturbation techniques from the theory of partial differential equations to obtain a series of optimization problems, the first of which is the fluid optimal control problem mentioned in the previous paragraph. The second order problem provides a correction to the optimal fluid solution. This second order problem has strong ties with the optimal control of Brownian multiclass stochastic networks. We solve the problem explicitly in many examples and we see that the singular perturbation approach leads to insightful new qualitative behavior. In particular, we obtain explicit results on how variability in the system affects the character of the optimal policy.
March 5, 1997
Paul Glasserman
#### Importance Sampling for Rare Event Simulation: Good News and Bad News
Precise estimation of rare event probabilities by simulation can be difficult: the computational burden frequently grows exponentially in the rarity of the event. Importance sampling --- based on applying a change of measure to make a rare event less rare --- can improve efficiency by orders of magnitude. But finding the right change of measure can be difficult. Through a variety of examples in queueing and other contexts, a general strategy has emerged: find the most likely path to a rare event and apply a change of measure to follow this path. The most likely path is found through large deviations calculations.
The first part of this talk reviews positive results that support this strategy and examples of its potential for dramatic variance reduction. The second part shows, however, that the same approach can be disastrous even in very simple examples. For each negative example, we propose a simple modification that produces an asymptotically optimal estimator.
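A toy version of the strategy described above (our illustration, not from the talk): estimate $P(X > a)$ for $X \sim N(0,1)$ and large $a$ by sampling from a mean-shifted normal, the large-deviations "most likely path" for this problem, and reweighting by the likelihood ratio.

```python
import math
import random

def p_tail_importance(a, n=100_000, seed=1):
    """Estimate P(X > a) for X ~ N(0,1) by sampling Y ~ N(a,1) and weighting
    each draw by the likelihood ratio phi(y)/phi(y - a) = exp(a^2/2 - a*y)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)            # sample under the tilted measure
        if y > a:                        # indicator of the rare event
            total += math.exp(a * a / 2.0 - a * y)
    return total / n

print(p_tail_importance(4.0))  # ~3.2e-5; naive sampling would almost never hit the event
```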
March 13, 1997
Hayriye Ayhan
Georgia Tech
#### On the Time-Dependent Occupancy and Backlog Distributions for the $GI / G / \infty$ Queue
An examination of sample path dynamics allows a straightforward development of integral equations having solutions that give time-dependent occupancy and backlog distributions (conditioned on the time of the first arrival) for the $GI/G/\infty$ queue. These integral equations are amenable to numerical evaluation and can be generalized to characterize the $GI^X/G/\infty$ queue. Two examples are given that illustrate the results.
April 17, 1997
Andrew Nobel
University of North Carolina, Chapel Hill
#### Adaptive Model Selection Using Empirical Complexities
We propose and analyze an adaptive model selection procedure for multivariate classification, which is based on complexity penalized empirical risk.
The procedure divides the available data into two parts. The first is used to select an empirical cover of each model class. The second is used to select from each cover a candidate rule with the smallest number of misclassifications. The final estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical probability of error.
A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover.
April 24, 1997
University of North Carolina, Chapel Hill
and Technion -- Israel Institute of Technology
#### An Introduction to Superprocesses
A superprocess is a measure valued stochastic process used for modelling, among other things, infinite density systems of particles undergoing random motion and random branching. They can be studied either via the general theory of Markov processes, stochastic partial differential equations, or martingale problems.
In this talk I shall try to provide an introduction to superprocesses for the uninitiated, describing their basic structure, some basic results, and some interesting open questions.
The talk will be followed by a 15 minute movie for anyone who wishes to stay.
May 1, 1997
Alex Koldobsky
University of Texas at San Antonio
#### More on Schoenberg's problem on positive definite functions
In 1938, Schoenberg posed the following problem: for which $p>0$ is the function $\exp(-\|x\|_q^p)$ positive definite? The solution was completed in 1991, and since then there have appeared a few more proofs, all of which were quite technical. We present a new proof which significantly generalizes the solution and, in a certain sense, clears things up. This proof is based on extending the Lévy representation of norms to the case of negative exponents. We also show the connections between Schoenberg's problem and isotropic random vectors, and apply our results to inequalities of correlation type for stable vectors.
May 8, 1997
Bok Sik Yoon
Hong-Ik University, Seoul, Korea & Georgia Tech
#### QN-GPH Method for Sojourn Time Distributions in General Queueing Networks
We introduce the QN-GPH method to compute the sojourn time distributions in non-product-form open queueing networks. QN-GPH is based on GPH semi-Markov chain modelling for the location process of a typical customer. To derive the method, the GPH distribution, the GPH/GPH/1 queue, and GPH semi-Markov chains are briefly explained, and a seemingly efficient method for computing the transition function and first passage time distributions in the GPH semi-Markov chain is developed. Numerical examples in the area of telecommunication are given to demonstrate the accuracy of the method. The QN-GPH method seems to be a computationally affordable tool for delay analysis in various manufacturing systems or computer and communication systems.
May 15, 1997
Jan Rosinski
University of Tennessee, Knoxville
#### Problems of unitary representations arising in the study of stable processes
Study of different classes of stable processes, such as stationary, self-similar, stationary increment, isotropic, etc., leads to the problem of obtaining explicit forms for unitary representations on L^p spaces, which can be used for a classification of stable processes. This approach is necessitated by the lack of a satisfactory spectral theorem when p < 2. The talk will survey some results in this area and present some open problems.
May 22, 1997
Robert Cooper
Florida Atlantic University
#### Polling Models and Vacation Models in Queueing Theory: Some Interesting and Surprising Stuff
A polling model is used to represent a system of multiple queues that are attended by a single server that switches from queue to queue in some prescribed manner. These models have many important applications, such as performance analysis of computer-communication networks and manufacturing systems, and they tend to be quite complicated. A vacation model describes a single-server queue in which the server can be unavailable for work (away on "vacation") even though customers are waiting. Some vacation models exhibit a "decomposition," in which the effects of the vacations can be separated from the effects of the stochastic variability of the arrival times and the service times. In a polling model, the time that the server spends away from any particular queue, serving the other queues or switching among them, can be viewed as a vacation from that queue. Adoption of this viewpoint greatly simplifies the analysis of polling models.
Recently, it has been found that polling models themselves enjoy an analogous decomposition with respect to the server switchover times (or, in the manufacturing context, setup times), but for apparently different reasons. Furthermore, it has recently been discovered that some polling models exhibit counterintuitive behavior: when switchover times increase, waiting times decrease; or, equivalently, in the parlance of manufacturing, WIP (work in process) can be decreased by artificially increasing the setup times.
In this talk we give an overview of polling and vacation models, including some historical context. Also, using decomposition we "explain" the counterintuitive behavior, and identify it as a hidden example of the well-known renewal (length-biasing) paradox.
The talk will emphasize conceptual arguments rather than mathematical detail, and should be of interest to a general audience.
Unless otherwise noted, the seminar meets Thursdays at 3 PM in Skiles, Room 140. For further information, contact Jim Dai ([email protected]) or Richard Serfozo ( [email protected]).
May 29, 1997
Takis Konstantopoulos
University of Texas, Austin
#### Distributional Approximations of Processes, Queues and Networks under Long-Range Dependence Assumptions
In this talk we discuss the issue of modeling and analysis of stochastic systems under the assumption that the inputs possess long-range dependence. The hypothesis is based on experimental observations in high-speed communication networks that have motivated a large body of research in recent years. After briefly reviewing typical experiments, models, and theoretical results on performance, we present a detailed limit theorem for a class of traffic processes possessing a weak regenerative structure with infinite-variance cycle times and "burstiness-constrained" cycles. The distribution of the approximating process is characterized and found to be of Lévy type with a stable marginal distribution whose index is the ratio of a parameter characterizing the tail of cycle times and a parameter representing the asymptotic growth rate of traffic processes. We also discuss queueing analysis for Lévy networks. Finally, we comment on the matching of distributions of both arrivals and queues with those observed in practice.
June 5, 1997
Alan F. Karr
National Institute of Statistical Sciences
#### Does Code Decay?
Developers of large software systems widely believe that these systems _decay_ over time, becoming increasingly hard to change: changes take longer, cost more and are more likely to induce faults.
This talk will describe a large, cross-disciplinary, multi-organization study, now in its first year, meant to define, measure and visualize code decay, to identify its causes (both structural and organizational), to quantify effects, and to devise remedies. Emphasis will be on the code itself and its change history as statistical data, and on tools to describe and visualize changes.
Last updated: May 31, 1997 by J. Hasenbein ([email protected])
## Resonance Production
The simplest way to produce a resonance is by a $2 \to 1$ process. If the decay of the resonance is not considered, the cross-section formula does not depend on the final state, but takes the form
$$\sigma = \int dx_1 \int dx_2\; f_1(x_1, Q^2)\, f_2(x_2, Q^2)\, \hat\sigma(\hat s). \qquad (84)$$
Here the physics is contained in the cross section $\hat\sigma(\hat s)$. The scale is usually taken to be $Q^2 = \hat s$.
In published formulae, cross sections are often given in the zero-width approximation, i.e. $\hat\sigma(\hat s) \propto \delta(\hat s - m_R^2)$, where $m_R$ is the mass of the resonance. Introducing the scaled mass $\tau_R = m_R^2/s$, this corresponds to a delta function $\delta(\tau - \tau_R)$, which can be used to eliminate the integral over $\tau$.
However, what we normally want to do is replace the $\delta$ function by the appropriate Breit-Wigner shape. For a resonance width $\Gamma_R$ this is achieved by the replacement
$$\delta(\hat s - m_R^2) \;\to\; \frac{1}{\pi}\, \frac{m_R \Gamma_R}{(\hat s - m_R^2)^2 + m_R^2 \Gamma_R^2}. \qquad (85)$$
In this formula the resonance width $\Gamma_R$ is a constant.
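As a side note, a fixed-width Breit-Wigner like eq. (85) is convenient for event generation because it can be sampled exactly by inverse transform. The sketch below is ours (a sketch of the standard arctan mapping, not code from the program; names are made up):

```python
import math
import random

def sample_bw_mass2(m_res, gamma, s_min, s_max, rng=random.Random()):
    """Draw s_hat = m^2 from (1/pi) * m*Gamma / ((s - m^2)^2 + m^2 Gamma^2),
    restricted to [s_min, s_max], via the substitution
    s = m^2 + m*Gamma*tan(theta), which flattens the Breit-Wigner."""
    theta_min = math.atan((s_min - m_res**2) / (m_res * gamma))
    theta_max = math.atan((s_max - m_res**2) / (m_res * gamma))
    theta = rng.uniform(theta_min, theta_max)
    return m_res**2 + m_res * gamma * math.tan(theta)

# Example: a Z-like resonance, masses in GeV.
s_hat = sample_bw_mass2(91.19, 2.50, 10.0**2, 200.0**2)
print(math.sqrt(s_hat))
```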
An improved description of resonance shapes is obtained if the width is made $\hat s$-dependent (occasionally also referred to as a mass-dependent width, since $\hat s$ need not equal the on-shell resonance mass squared), see e.g. [Ber89]. To first approximation, this means that the expression $m_R \Gamma_R$ is to be replaced by $\hat s\, \Gamma_R / m_R$, both in the numerator and the denominator. An intermediate step is to perform this replacement only in the numerator. This is convenient when not only $s$-channel resonance production is simulated but also non-resonance $t$- or $u$-channel graphs are involved, since mass-dependent widths in the denominator here may give an imperfect cancellation of divergences. (More about this below.)
To be more precise, in the program the quantity $H_R(\hat s)$ is introduced, and the Breit-Wigner is written as
$$\delta(\hat s - m_R^2) \;\to\; \frac{1}{\pi}\, \frac{H_R(\hat s)}{(\hat s - m_R^2)^2 + H_R^2(\hat s)}. \qquad (86)$$
The factor $H_R(\hat s)$ is evaluated as a sum over all possible final-state channels, $H_R = \sum_f H_R^{(f)}$. Each decay channel may have its own $\hat s$ dependence, as follows.
A decay to a fermion pair, $R \to f\bar f$, gives no contribution below threshold, i.e. for $\hat s < 4m_f^2$. Above threshold, $H_R^{(f)}$ is proportional to $\hat s$, multiplied by a threshold factor $\beta(3-\beta^2)/2$ for the vector part of a spin 1 resonance, by $\beta^3$ for the axial vector part, by $\beta^3$ for a scalar resonance and by $\beta$ for a pseudoscalar one. Here $\beta = \sqrt{1 - 4m_f^2/\hat s}$. For the decay into unequal masses, e.g. of the $W^\pm$, corresponding but more complicated expressions are used.
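In code form, the threshold factors just quoted might look as follows (a sketch under our own naming, with couplings and colour factors deliberately omitted):

```python
import math

def h_fermion_channel(s_hat, m_f, spin_structure):
    """Relative s_hat-dependence of H_R^(f) for R -> f fbar,
    per the threshold factors quoted above (couplings omitted)."""
    if s_hat <= 4.0 * m_f**2:
        return 0.0                       # channel closed below threshold
    beta = math.sqrt(1.0 - 4.0 * m_f**2 / s_hat)
    factor = {
        "vector":       beta * (3.0 - beta**2) / 2.0,
        "axial":        beta**3,
        "scalar":       beta**3,
        "pseudoscalar": beta,
    }[spin_structure]
    return s_hat * factor
```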
For decays into a quark pair, a first-order strong correction factor $1 + \alpha_s(\hat s)/\pi$ is included in $H_R^{(f)}$. This is the correct choice for all spin 1 colourless resonances, but is here used for all resonances where no better knowledge is available. Currently the major exception is top decay, where a factor approximating the known loop corrections [Jez89] is used instead. The second-order corrections are often known, but then are specific to each resonance, and are not included. For some resonances an option exists where threshold effects due to bound-state formation are taken into account in a smeared-out, average sense, see eq. ().
For other decay channels, not into fermion pairs, the $\hat s$ dependence is typically more complicated. An example would be the decay $h^0 \to W^+W^-$, with a nontrivial threshold and a subtle energy dependence above that [Sey95a]. Since a Higgs with $m_h < 2m_W$ could still decay in this channel, it is in fact necessary to perform a two-dimensional integral over the $W^\pm$ Breit-Wigner mass distributions to obtain the correct result (and this has to be done numerically, at least in part). Fortunately, a Higgs particle lighter than $2m_W$ is sufficiently narrow that the integral only needs to be performed once and for all at initialization (whereas most other partial widths are recalculated whenever needed). Channels that proceed via loops, such as $h^0 \to gg$, also display complicated threshold behaviours.
The coupling structure within the electroweak sector is usually (re)expressed in terms of gauge boson masses, $\alpha_{\rm em}$ and $\sin^2\theta_W$, i.e. factors of $\alpha_{\rm em}/\sin^2\theta_W$ are replaced according to
$$\frac{\alpha_{\rm em}}{\sin^2\theta_W} \;\to\; \frac{\sqrt{2}\, G_F\, m_W^2}{\pi}. \qquad (87)$$
Having done that, $\alpha_{\rm em}$ is allowed to run [Kle89], and is evaluated at the $\hat s$ scale. Thereby the relevant electroweak loop correction factors are recovered at the $m_W/m_Z$ scale. However, the option exists to go the other way and eliminate $G_F$ in favour of $\alpha_{\rm em}$. Currently $\sin^2\theta_W$ is not allowed to run.
For Higgs particles and technipions, fermion masses enter not only in the kinematics but also as couplings. The latter kind of quark masses (but not the former, at least not in the program) are running with the scale of the process, i.e. normally the resonance mass. The expression used is [Car96]
$$m_{\rm run}(Q) = m(Q_0) \left(\frac{\alpha_s(Q)}{\alpha_s(Q_0)}\right)^{12/(33 - 2n_f)}. \qquad (88)$$
Here $m(Q_0)$ is the input mass at a reference scale $Q_0$, defined in the $\overline{\rm MS}$ scheme. Typical choices for the reference scale are either $Q_0 = m$ itself or $Q_0 = 2m$; the latter would be relevant if the reference scale is chosen at the pair-production threshold. Both $\alpha_s$ and $n_f$ are as given elsewhere in the program.
In summary, we see that an $\hat s$ dependence may enter several different ways into the expressions from which the total $H_R$ is built up.
When only decays to a specific final state are considered, the $H_R$ in the denominator remains the sum over all allowed decay channels, but the numerator only contains the $H_R^{(f)}$ term of the final state considered.
If the combined production and decay process $i \to R \to f$ is considered, the same $\hat s$ dependence is implicit in the coupling structure of $H^{(i)}$ as one would have had in $H^{(f)}$, i.e. to first approximation there is a symmetry between couplings of a resonance to the initial and to the final state. The cross section $\hat\sigma$ is therefore, in the program, written in the form
(89)
As a simple example, the cross section for the process can be written as
(90)
where
(91)
If the effects of several initial and/or final states are studied, it is straightforward to introduce an appropriate summation in the numerator.
The analogy between $H^{(f)}$ and $H^{(i)}$ cannot be pushed too far, however. The two differ in several important aspects. Firstly, colour factors appear reversed: the decay $R \to q\bar{q}$ contains a colour factor enhancement, while $q\bar{q} \to R$ is instead suppressed by a colour-averaging factor. Secondly, the first-order correction factor for the final state has to be replaced by a more complicated factor for the initial state. This factor is not known usually, or it is known (to first non-trivial order) but too lengthy to be included in the program. Thirdly, incoming partons as a rule are space-like. All the threshold suppression factors of the final-state expressions are therefore irrelevant when production is considered. In sum, the analogy between $H^{(f)}$ and $H^{(i)}$ is mainly useful as a consistency cross-check, while the two usually are calculated separately. Exceptions include the rather messy loop structure involved in $gg \to h^0$ production and $h^0 \to gg$ decay, which is only coded once.
It is of some interest to consider the observable resonance shape when the effects of parton distributions are included. In a hadron collider, to first approximation, parton distributions tend to have a behaviour roughly like $f(x) \propto 1/x$ for small $x$; this is why the integration variables in eq. () are chosen logarithmically. Instead, the basic parton-distribution behaviour is shifted into the phase-space factor of the $\tau$ integration, cf. eq. (). When convoluted with the Breit-Wigner shape, two effects appear. One is that the overall resonance is tilted: the low-mass tail is enhanced and the high-mass one suppressed. The other is that an extremely long tail develops on the low-mass side of the resonance: when $\hat s \to 0$, eq. () with $H_R(\hat s) \propto \hat s$ gives a $\hat\sigma(\hat s)$ that exactly cancels the phase-space factor mentioned above. Naïvely, the integral over $\tau$ therefore gives a net logarithmic divergence of the resonance shape when $\hat s \to 0$. Clearly, it is then necessary to consider the shape of the parton distributions in more detail. At not-too-small $Q^2$, the evolution equations in fact lead to parton distributions more strongly peaked than $1/x$, and therefore to a correspondingly stronger divergence in the cross-section expression. Eventually this divergence is regularized by a closing of the phase space, i.e. that $H_R(\hat s)$ vanishes faster than $\hat s$, and by a less drastic small-$x$ parton-distribution behaviour when $Q^2 \to 0$.
The secondary peak at small $\tau$ may give a rather high cross section, which can even rival that of the ordinary peak around the nominal mass. This is the case, for instance, with $W$ production. Such a peak has never been observed experimentally, but this is not surprising, since the background from other processes is overwhelming at low $\hat s$. Thus a lepton of one or a few GeV of transverse momentum is far more likely to come from the decay of a charm or bottom hadron than from an extremely off-shell $W$ of a mass of a few GeV. When resonance production is studied, it is therefore important to set limits on the mass of the resonance, so as to agree with the experimental definition, at least to first approximation. If not, cross-section information given by the program may be very confusing.
Another problem is that often the matrix elements really are valid only in the resonance region. The reason is that one usually includes only the simplest $s$-channel graph in the calculation. It is this 'signal' graph that has a peak at the position of the resonance, where it (usually) gives much larger cross sections than the other 'background' graphs. Away from the resonance position, 'signal' and 'background' may be of comparable order, or the 'background' may even dominate. There is a quantum mechanical interference when some of the 'signal' and 'background' graphs have the same initial and final state, and this interference may be destructive or constructive. When the interference is non-negligible, it is no longer meaningful to speak of a 'signal' cross section. As an example, consider the scattering of longitudinal $W$'s, $W_L^+ W_L^- \to W_L^+ W_L^-$, where the 'signal' process is $s$-channel exchange of a Higgs. This graph by itself is ill-behaved away from the resonance region. Destructive interference with 'background' graphs such as $t$-channel exchange of a Higgs and $s$- and $t$-channel exchange of a $\gamma/Z$ is required to save unitarity at large energies.
In $e^+e^-$ colliders, the 'parton distribution' of the electron is peaked at $x = 1$ rather than at small $x$. The situation therefore is the opposite, if one considers e.g. $Z^0$ production in a machine running at energies above $m_Z$: the resonance-peak tail towards lower masses is suppressed and the one towards higher masses enhanced, with a sharp secondary peak at around the nominal energy of the machine. Also in this case, an appropriate definition of cross sections therefore is necessary, with additional complications due to the interference between $\gamma^*$ and $Z^0$. When other processes are considered, problems of interference with background appear also here. Numerically the problems may be less pressing, however, since the secondary peak is occurring in a high-mass region, rather than in a more complicated low-mass one. Further, in $e^+e^-$ there is little uncertainty from the shape of the parton distributions.
In processes where a pair of resonances are produced, e.g. $e^+e^- \to W^+W^-$, cross sections are almost always given in the zero-width approximation for the resonances. Here two substitutions of the type
$$\delta\big(m^2 - m_R^2\big) \;\to\; \frac{1}{\pi}\, \frac{m_R \Gamma_R}{(m^2 - m_R^2)^2 + m_R^2 \Gamma_R^2} \qquad (92)$$
are used to introduce mass distributions for the two resonance masses, i.e. $m_1^2$ and $m_2^2$. In the formula, $m_R$ is the nominal mass and $m$ the actually selected one. The phase-space integral over $x_1$, $x_2$ and $\hat t$ in eq. () is then extended to involve also $m_1^2$ and $m_2^2$. The effects of the mass-dependent width are only partly taken into account, by replacing the nominal masses in the matrix-element expression by the actually generated ones (also e.g. in the relation between $\hat t$ and $\cos\hat\theta$), while the widths are evaluated at the nominal masses. This is the equivalent of a simple replacement of $m_R \Gamma_R$ by $\hat s\,\Gamma_R/m_R$ in the numerator of eq. (), but not in the denominator. In addition, the full threshold dependence of the widths, i.e. the velocity-dependent factors, is not reproduced.
There is no particular reason why the full mass-dependence could not be introduced, except for the extra work and time consumption needed for each process. In fact, the matrix elements for several $\gamma^*/Z^0$ and $W^\pm$ production processes do contain the full expressions. On the other hand, the matrix elements given in the literature are often valid only when the resonances are almost on the mass shell, since some graphs have been omitted. As an example, the process $e^+e^- \to \mu^+\mu^-\,\tau^+\tau^-$ is dominated by $Z^0 Z^0$ production when each of the two lepton pairs is close to $m_Z$ in mass, but in general also receives contributions e.g. from $e^+e^- \to Z^0$, followed by $Z^0 \to \mu^+\mu^-\gamma^*$ and $\gamma^* \to \tau^+\tau^-$. The latter contributions are neglected in cross sections given in the zero-width approximation.
Widths may induce gauge invariance problems, in particular when the $s$-channel graph interferes with $t$- or $u$-channel ones. Then there may be an imperfect cancellation of contributions at high energies, leading to an incorrect cross-section behaviour. The underlying reason is that a Breit-Wigner corresponds to a resummation of terms of different orders in coupling constants, and that therefore effectively the $s$-channel contributions are calculated to higher orders than the $t$- or $u$-channel ones, including interference contributions. A specific example is $e^+e^- \to W^+W^-$, where $s$-channel $\gamma^*/Z^0$ exchange interferes with $t$-channel $\nu_e$ exchange. In such cases, a fixed width is used in the denominator. One could also introduce procedures whereby the width is made to vanish completely at high energies, and theoretically this is the cleanest, but the fixed-width approach appears good enough in practice.
Another gauge invariance issue is when two particles of the same kind are produced in a pair, e.g. $t\bar t$, even though in real life the masses $m_1 \neq m_2$. Matrix elements are then often calculated for one common mass. The proper gauge invariant procedure to handle this would be to study the full six-fermion state obtained after the two decays, but that may be overkill if indeed the $t$'s are close to mass shell. Even when only equal-mass matrix elements are available, Breit-Wigners are therefore used to select two separate masses $m_1$ and $m_2$. From these two masses, an average mass $\bar m$ is constructed so that the velocity factor of eq. () is retained,
$$1 - \frac{4\bar m^2}{\hat s} = \left(1 - \frac{(m_1+m_2)^2}{\hat s}\right)\left(1 - \frac{(m_1-m_2)^2}{\hat s}\right). \qquad (93)$$
This choice certainly is not unique, but normally should provide a sensible behaviour, also around threshold. Of course, the differential cross section is no longer guaranteed to be gauge invariant when gauge bosons are involved, or positive definite. The program automatically flags the latter situation as unphysical. The approach may well break down when either or both particles are far away from mass shell. Furthermore, the preliminary choice of scattering angle $\hat\theta$ is also retained. Instead of the correct $\hat t$ and $\hat u$ of eq. (), modified
$$\bar{\hat t},\ \bar{\hat u} = \bar m^2 - \frac{\hat s}{2}\big(1 \mp \beta \cos\hat\theta\big), \qquad \beta = \sqrt{1 - \frac{4\bar m^2}{\hat s}}, \qquad (94)$$
can then be obtained. The $\hat s$, $\bar{\hat t}$ and $\bar{\hat u}$ are now used in the matrix elements to decide whether to retain the event or not.
Processes with one final-state resonance and another ordinary final-state product, e.g. $qg \to W^\pm q'$, are treated in the same spirit as the processes with two resonances, except that only one mass need be selected according to a Breit-Wigner.
# Clinoptilolite Characterization and EDS Analysis
## Definition
Zeolites are materials of biomedical interest, in particular owing to their ability to remove metabolic products such as uremic toxins (i.e., urea, uric acid, creatinine, p-cresol, and indoxyl sulfate); they are used for the regeneration of dialysis solutions and as in vivo membranes for the artificial kidney. Zeolites have further important applications in the biomedical field; in fact, they are used as hemostats (due to their ability to absorb water), antiseptics (when modified with silver or zinc ions), carriers for drugs and genes (adjuvants in vaccines), glucose absorbers, etc. Here, EDS microanalysis in the study of a sample of natural clinoptilolite is reported.
## 1. Determination of the Si/Al molar ratio
A very important characteristic parameter for zeolites is the atomic[1] (or molar) ratio of the silicon and aluminium elements (Si/Al) contained in them. According to Lowenstein's rule[2], this ratio never falls below 1 (the rule says that an AlO4 tetrahedron is unlikely to bind another AlO4 tetrahedron). When the Si/Al ratio is equal to 1, the tetrahedra of Si and those of Al regularly alternate to build an ordered structure. The Si/Al ratio varies from 1 to 7 for natural zeolites and from 1 to infinity for the synthetic ones. In general, zeolites are classified on the basis of the numerical value of this Si/Al atomic ratio and distinguished into highly-siliceous zeolites, when the ratio is greater than 5 (highly-siliceous zeolites are nonpolar and therefore have little affinity with water), and aluminous zeolites, when this ratio is less than 5 (these minerals are polar and very compatible with water). The affinity of zeolites with water depends on the concentration of the hydrophilic sites (cationic and external hydroxyl sites) present in them, and the number of these sites is, to a good approximation, equal to the number of aluminum atoms (the concentration of the external hydroxyls is negligible). Many physico-chemical properties of zeolites depend on this ratio. For example, the electrical conductivity and ion exchange capacity of zeolites are closely related to the Si/Al ratio, and both properties improve as this ratio decreases. The atomic Si/Al ratio is generally determined by elementary chemical analysis of zeolite samples, which is a destructive, generally time-consuming, and laborious procedure. EDS microanalysis is carried out by measuring the energy and intensity distribution of the X-rays generated by the action of the primary electron beam on the sample, using an energy-dispersive detector (a single crystal of silicon doped with lithium). It represents a rapid, non-destructive analytical method to evaluate the atomic Si/Al ratio of a zeolite sample. Thin slices of clinoptilolite were produced by cutting the raw stone with a diamond saw (electric mini-drill). The data generated by the EDS analysis consisted of spectra containing peaks that corresponded to the different elements present in the sample. This technique combines the morphological information offered by the SEM microscope with the qualitative and semi-quantitative compositional information offered by the X-rays acting on the section of the observed samples. The samples were not metallized with Au/Pd alloy, to avoid masking lighter elements, and were observed in low vacuum mode by a SEM microscope (FEI Quanta 200 FEG), equipped with an EDS energy-dispersive spectrometer (Oxford Inca Energy System 250 equipped with INCAx-act LN2-free detector). The EDS analysis was conducted on several samples of natural clinoptilolite, and several points in different areas were analyzed for each of them. The investigated area was about 900 µm² (see Figure 1).
Fig. 1 - Natural clinoptilolite sample analyzed by the EDS technique.
According to the EDS technique, the atomic percentage of silicon in the zeolite was on average equal to 22.90% and the average atomic percentage of aluminum was equal to 4.25%, the ratio of these atomic percentages provides a value for the atomic Si/Al ratio corresponding to 5.39, which exactly matches the Si/Al value of natural clinoptilolite (a highly siliceous zeolite). Table I summarizes the results of the EDS analysis conducted on a single sample of natural clinoptilolite measured in three different points.
at.% at.% at.% Average values Si 23.38 22.75 22.58 22.90 Al 4.27 4.27 4.22 4.25 Si/Al 5.475 5.328 5.351 5.385
Tab. 1 - Atomic percentages of Si and Al and atomic Si/Al ratio for the sample of natural clinoptilolite.
## 2. Determination of the nature and concentrations of extra-framework cations
The crystallochemical variability of zeolites and consequently their technological applications depend not only on the atomic Si/Al ratio but also on the type of cations present in the structure. Usually, these cations are alkali or alkaline-earth metals[3], which are present in the channels of the mineral depending on their radius and charge (for example, clinoptilolite readily accepts Cs+ ions by on exchange mechanism). The type of extra-framework cations and their molar or weight percentage in the mineral can also be obtained quickly and accurately by EDS analysis. As visible in the spectrum given in Figure 2, four different types of cations were present in the natural clinoptilolite sample, namely: potassium, calcium, iron, and magnesium. The intensities of the signals of these ions were quite different and, in particular, calcium and potassium were more abundant, while magnesium and iron were present at trace level. The average values of the percentages for these elements are reported in Tables II and III. On basis of these results, the investigated zeolite sample corresponded to K-type clinoptilolite (generally referred to as: clinoptilolite-K). Iron is a typical impurity that is frequently found in zeolites of natural origin.
Fig. 2 - EDS spectrum of the natural clinoptilolite sample (top) and classification of the three forms of natural clinoptilolite (bottom).
Cation Area 1 Area 2 Area 3 Average value K 1.55 1.49 1.43 1.49 Ca 1.03 1.01 0.97 1.00 Mg 0.34 0.40 0.41 0.38 Fe 0.41 0.38 0.35 0.38
Tab.2 - Atomic/molar percentages of extra-framework cations present in the sample of natural clinoptilolite.
Cation Area 1 Area 2 Area 3 Average value K 3.02 2.92 2.80 2.91 Ca 2.06 2.02 1.95 2.01 Mg 0.41 0.48 0.50 0.46 Fe 1.14 1.05 0.97 1.05
Tab. 3 - Percentages by weight of extra-framework cations present in the sample of natural clinoptilolite.
## 3. Stoichiometric verification of the mineral chemical formula
Obviously, the EDS spectrum is completed by the presence of the oxygen fluorescence signal, which represents the most abundant element contained in the silicoaluminate compound (the average oxygen concentration calculated by EDS was ca. 69.48at.%, 55.55% by weight). This signal was generated both by oxygen bonded to silicon and by oxygen bonded to aluminum. As can be easily calculated by using the data in Table I, due to the presence of crystallization water in the mineral, the O/(Si+Al) atomic ratio was about 2.6. These experimental data can be compared with the theoretical values that can be calculated from the chemical formula of the mineral. According to the chemical formula of a typical clinoptilolite, that is: (Na2,K2,Ca)3Al6Si30O72.24H2O), in the mineral there are 30 silicon atoms, 6 aluminum atoms and 96 oxygen atoms, therefore the O/(Si+Al) ratio corresponds to 96/(15+5) = 2.67 and this value is in perfect agreement with the experimental data obtained by EDS, thus proving that the mineral is clinoptilolite. As shown in Figure 3, a diagram (histogram) of all EDS data also allows an immediate displaying of the compound composition.
Fig. 3 - Pie-diagram built with EDS data. This diagram allows to immediatly display the relative abundance of the elements in the compound.
## 4. Information on zeolites modified by ion exchange
After chemical modification of the zeolites (e.g., ion exchange, treatment with surfactants, etc.), the EDS allows to verify the effectiveness of the performed treatment. For example, when a new type of cation has been inserted into the zeolite crystal lattice by using the ion exchange method, the EDS technique allows to quickly evaluate the obtained result. In the case of K-clinoptilolite, the sodium cation (Na+) is not originally present in the mineral, but after that the mineral has contacted a boiling aqueous solution of sodium chloride (NaCl) for approximately 20min; after repeated washing with hot tap water, the EDS analysis showed the presence of this element (sodium) as well as a greater amount of magnesium (see Figure 4). In particular, the quantity of sodium introduced into the crystal lattice of natural clinoptilolite corresponded to 1.86at.% (2.16% by weight in the first point), while in the second point corresponded to 2.17at.% (2.52% by weight). As can be verified from the overall EDS data reported in Table IV, as a consequence of the ion exchange with a concentrated solution of sodium chloride in tap water, the content of sodium and magnesium ions increased, while the concentration of potassium and calcium decreased. The concentration of the elements belonging to the framework (i.e., silicon, aluminum and oxygen), which are not involved in the ion exchange process, and that of iron, which is a trivalent cation (Fe3+) and therefore it is hardly exchanged by monovalent ions, due to the considerable strength of electrostatic interaction with the negative charges of the framework, remained practically constant.
Fig. 4 - EDS spectra of the sample of natural clinoptilolite treated at 100°C with a concentrated aqueous solution of NaCl (and then hot washed repeatedly) and SEM micrographs of the areas where the EDS analysis was carried out.
Element Before treatment After treatment Na - 1.86 Mg 0.30 1.81 K 1.74 0.75 Ca 1.15 0.39 Fe 0.49 0.37 Si 24.48 21.09 Al 4.37 4.35 O 67.47 69.33
Tab. IV - Comparison between the atomic percentages of the elements before and after the ion exchange treatment.
## 5. Conclusions
Finally, according to the results given in this short technical report, the characterization of clinoptilolite and other zeolites by energy dispersive X-ray microanalysis (EDS) combined with SEM represents an extremely powerful approach, which is also fast and easy to use.
## References
1. 1. Mohau Moshoeshoe; A Review of the Chemistry, Structure, Properties and Applications of Zeolites. American Journal of Materials Science 2017, 7, 196-221, 10.5923/j.materials.20170705.12.
2. Christopher J. Heard; The effect of water on the validity of Löwenstein's rule. Chem. Sci. 2019, 10, 5705-5711, 10.1039/C9SC00725C.
3. D.A.Kennedy; Cation exchange modification of clinoptilolite–Screening analysis forpotential equilibrium and kinetic adsorption separations involving methane,nitrogen, and carbon dioxide. Microporous and Mesoporous Materials 2018, 262, 235-250, https://doi.org/10.1016/j.micromeso.2017.11.054.
More | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8148811459541321, "perplexity": 2523.453500997264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585183.47/warc/CC-MAIN-20211017210244-20211018000244-00321.warc.gz"} |
https://www.arxiv-vanity.com/papers/1608.02881/ | # A badly expanding set on the 2-torus
Rene Rühr
March 14, 2021
###### Abstract.
We give a counterexample to a conjecture stated in [LL06] regarding expansion on under and .
Let be the set containing the linear transformation and its transpose , and let , adding the inverses of and . Using these transformations, Linial and London [LL06] studied an infinite -regular expander graph, showing the following expansion property: For any bounded measurable set of the plane, one has
(1) m⎛⎝A∪⋃σ∈ΣUσ(A)⎞⎠≥2m(A) and m⎛⎝A∪⋃σ∈ΣDσ(A)⎞⎠≥43m(A)
where denotes the Lebesgue measure of a set and the bounds are sharp. Note that and thus its elements also act on , and this action is measure preserving with respect to the induced probability measure on . It was conjectured in [LL06] and in [HLW06][Conjecture 4.5] that there is a constant such that for with the estimate of line (1) with in place of holds. Below we give a simple counterexample to this conjecture. Let denote the natural projection map. Let and define
CU=π({(x,y)∈R2:|x|≤ε or |y|≤ε}).
and
CD=CU∪π({(x,y)∈R2:|x+y|≤ε}).
These sets are of arbitrary small measure as and satisfy
and .
###### Proof.
The following picture depicts the set in red and the image under in blue. We note that the overlapping triangles outside the square are to be seen modulo , thus wrap up and do not amount to additional mass. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975262880325317, "perplexity": 558.8323289963137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00554.warc.gz"} |
https://www.nature.com/articles/s41467-020-16667-x?error=cookies_not_supported | ## Introduction
Chemistry can be broadly defined as the change in valence electronic structure as atoms in a molecule geometrically rearrange. The adiabatic picture that describes this delicate interplay between electrons and nuclei is a central pillar to chemical dynamics and leads to the concept of a potential energy surface within the Born-Oppenheimer approximation1. Consequently, developing experimental probes that are sensitive to both nuclear and electronic evolution in real-time has become a primary goal in chemical physics2,3,4,5,6. Time-resolved photoelectron spectroscopy is one such method as it maps an adiabatically evolving wavepacket onto ionised final states with no strict selection rules7,8. A variant of the method, time-resolved photoelectron imaging, offers an additional dimension through the photoelectron angular distributions (PADs) that are sensitive to the molecular orbital from which the electron is removed9. Adiabatic changes can be tracked through molecular-frame PADs, but such measurements require a connection between the laboratory and molecular frames of reference, either through coincidence measurements2,5 or molecular alignment10. This complexity inhibits the application of such experiments to complex polyatomic molecules for which the methods are ultimately designed. Overcoming these limitations will provide a platform to probe chemical dynamics in complex molecules.
The photoactive yellow protein, PYP, is a blue-light absorbing protein that has been an important testbed for novel structural probes in complex biological environments11,12. The absorption maximum for the S1 ← S0 transition in PYP is at ~450 nm and can be traced to a small chromophore that undergoes a light-activated transcis isomerisation which serves as a mechanical lever and triggers an extensive bio-cycle with numerous intermediates13,14,15. Derivatives of the PYP chromophore are commonly based on para-coumaric acid and have been studied extensively as a prototypical bio-chromophore16,17. Yet, there remains ambiguity about which specific bonds are involved in the initial excited state isomerisation and, hence, there is a desire to develop experimental probes that can distinguish subtly different reaction coordinates. For example, the anionic para-coumaric ketone chromophore (pCK, Fig. 1a), studied by Zewail and coworkers using time-resolved photoelectron spectroscopy, can isomerise about the first (single), the second (double), or both bonds in the para-position; but the photoelectron spectra alone could not discern these differences16. Chemical derivatives in which rotation about specific bonds is inhibited have also been studied, but such modifications diverge further from the native chromophore18. Several computational studies have explored the potential energy surfaces of the S0 and S1 states and considered the dynamics on the S1 state following photoexcitation19,20,21,22,23. These have converged on the position that the initial isomerisation coordinate involves predominantly rotation about the single bond, but have not been clearly linked with experimental data. In the present study, pCK is probed by time-resolved photoelectron imaging combined with electronic structure calculations. For specific photoelectron features, the temporal evolution of the spectra and laboratory-frame PADs, in unison with our calculations, enables the identification of the nuclear and electronic structural changes associated with the single-bond isomerisation coordinate on the excited state, thus demonstrating a direct probe for adiabatic dynamics, i.e. chemistry.
## Results
### Time-resolved photoelectron imaging
Our experiment involves excitation of mass-selected pCK at 2.79 eV (444 nm) with femtosecond pulses to the bright S1 state. The excited state dynamics are subsequently probed at various delays using femtosecond pulses at 1.55 eV (800 nm) at the centre of a velocity map imaging spectrometer, yielding time-resolved photoelectron images. The temporal resolution (full width at half maximum) is 100 ± 10 fs and the spectral resolution ~5% of the electron kinetic energy, ε.
The time-resolved photoelectron spectra are shown in Fig. 1b over the first picosecond following excitation. Each spectrum has had the spectrum at t < 0 removed to leave only the time-evolving signals. Figure 1e displays spectra at a few specific delays. At very early times, the photoelectron spectrum exhibits a peak centred at an electron kinetic energy, ε ~ 1.4 eV. With increasing pump-probe delay, t, this initial peak shifts towards lower ε, leaving a peak centred at ε ~ 0.8 eV at longer times (t >> 1 ps). This peak then slowly decays with a lifetime of ~120 ps with no further spectral evolution (see Supplementary Fig. 1). Based on our photoelectron spectra (Supplementary Note 2 and Supplementary Fig. 2), the electron affinity of pCK is 2.87 ± 0.05 eV. Hence, excitation to the S1 state at 2.79 eV is just below the detachment threshold and ensures we are predominantly probing bound-state dynamics, although the S1 absorption profile is broad and extends into the continuum24.
Fig. 1d presents the integrated photoelectron signal over specific spectral windows that are indicated in Fig. 1b (ε1, ε2, and ε3) and are representative of the different spectral features. The high energy peak (represented by spectral region ε1 in Fig. 1b) shows a rapid initial decay with an apparent oscillation superimposed. The signal in the intermediate spectral range (ε2) rises as the high energy signal decays and similarly oscillates with a commensurate period, but with a π phase shift. These dynamics are also clearly visible in Fig. 1b.
In addition to the evolution of the photoelectron spectra, we also observe an evolution of the PADs. Laboratory-frame PADs are typically quantified by anisotropy parameters, β2n(ε)25. For a two-photon (pump-probe) process, n = 1 and 2, and the PADs are defined through9,26
$$I\left( {\varepsilon ,\theta } \right) = \sigma /4\pi \left[ {1 + \beta _2\left( \varepsilon \right)P_2\left( {{\mathrm{cos}}\theta } \right) + \beta _4\left( \varepsilon \right)P_4\left( {{\mathrm{cos}}\theta } \right)} \right],$$
(1)
where I(ε, θ) is the photoelectron signal as a function of the angle, θ, between the laser polarisation axis and the photoelectron emission velocity vector, σ is the detachment cross-section, and P2(cosθ) and P4(cosθ) are the second- and fourth-order Legendre polynomials. For large polyatomic molecules, only changes in β2 are often significant, which has limiting values +2 and −1 that correspond to electron emission predominantly parallel to and perpendicular to the polarisation axis, respectively. Figure 1c shows the measured β2(ε, t), with a 5-point moving average in ε. Figure 1c is directly comparable to the spectral evolution shown in Fig. 1b. Note that when the overall photoelectron signal is low, the determination of β2(ε, t) has a large uncertainty and we omit data for signal that is less than 0.1 of the normalised signal in Fig. 1b for clarity. The corresponding β4(ε, t) data are given in the Supplementary Note 3 and Supplementary Fig. 3 and have values very close to zero suggesting that β2(ε, t) is a good measure of the overall PADs.
Figure 1f shows the β2(ε) with no moving-average applied at two delays, t = 0 and t = 1 ps, with the corresponding spectra shown in Fig. 1e. To determine a specific anisotropy for a given feature, β2(ε) has been averaged over the spectral features as shown by the shaded regions in Fig. 1e. This yields values of β2 = −0.36 ± 0.09 and β2 = −0.11 ± 0.12 for the initial photoelectron peak at centred at ε ~1.4 eV (ε1) and the lower energy peak centred at ε ~0.8 eV (ε2), respectively.
### Assignment of photoelectron features
There are two dominant pathways discussed for the initial S1 state dynamics of PYP chromophores19,20,22. The ground state of pCK is planar because of the π-conjugation over the para-substituent on the phenolate anion. Upon excitation to the S1 state, an electron populates a molecular orbital with π* character, weakening the corresponding π-conjugation of the molecule and facilitating rotation about the bonds. Following S1 photoexcitation, the molecule first rapidly relaxes to a local planar minimum with a geometry that is very similar to the Franck-Condon geometry (i.e. S0 minimum). From this planar S1 minimum, rotation about either the single bond, φSB, or the double bond, φDB, can occur as shown in Fig. 1a.
Figure 2 shows the relevant potential energy surfaces that have been calculated using high-level multireference methods along two different pathways connecting the S1 planar minimum (PM) with the two minima on the S1 surface. These two minima arise from rotation around φSB or φDB and their geometries are denoted as SB and DB, respectively, as shown inset in Fig. 2a. The calculated pathways connect the different minima via a linear interpolation in internal coordinates (LIIC) and as such account for the geometrical changes along the different points of the S1 potential energy surface. While other nuclear displacements take place, the motion along either pathway is dominated by the rotations φSB or φDB. Motion along the φDB involves a barrier, while that along the φSB coordinate is essentially barrierless. Our calculations are in reasonable agreement with previous theoretical work, which suggested that φSB rotation is more probable and also found a barrier along φDB for related chromophores20,22. The main differences with previous theoretical works arise from the levels of theory used. Here, our main goal is to treat the S1 excited state on the same footing as the D0 final state to offer the most reliable energies to compare with the photoelectron spectra that are measured in the experiment. The vertical excitation energies of the S0, S1 and D0 states are obtained from the same XMCQDPT2 calculation (see Computational Details for more information).
The photoelectron spectrum is determined by the difference in energy between the anionic (S1) and neutral (D0) states, as shown in Fig. 2a. Based on the calculated values, detachment of the Franck-Condon geometry (denoted FC in Fig. 2a) with a hv = 1.55 eV probe will lead to electron signal extending to εFC = 1.40 eV. The rapid initial relaxation to the S1 planar minimum (PM) reduces this limit slightly to εPM = 1.34 eV. Rotation about φSB and φDB leading to the S1 minima SB and DB is expected to lead to photoelectron signal extending to εSB = 0.87 eV and εDB = 0.21 eV. These limits are shown in Fig. 1e. It is important to note that the estimated uncertainty in the calculations is ~0.2 eV and that only the molecular geometry at each critical point was used to determine these energies. Additionally, the maxima of the photoelectron peaks are expected to be shifted to slightly lower energy compared to the predicted maximum kinetic energy because the potential energy at the minima is lower than at the initial excitation energy. For example, the predicted maximum signal for rotation about φSB will occur in the range 0.43 < εSB < 0.87 eV. Hence, the calculated maximal values should be used as a guide only. Nevertheless, based on the potential energy surfaces in Fig. 2a, the agreement of the peak at t = 0 with the expected energy for the Franck-Condon (and S1 planar minimum) geometry is excellent. At the later time of t = 1 ps, the broad peak centred at ε ~ 0.8 eV is consistent with a twisted intermediate that has undergone rotation about φSB. This peak is not consistent with rotation about φDB as the spectral maximum of DB is expected at ε < 0.21 eV. Hence, based solely on energetic arguments, the dynamics involving the peaks at ε1 ~ 1.4 eV and ε2 ~ 0.8 eV correspond to dynamics involving rotation about the single bond.
Rotation about specific bonds also leads to differing electronic structures: adiabatically, a change in nuclear configuration is associated with an instantaneous adaptation of the underlying electronic structure. That is to say, the character of the valence orbitals at a given molecular geometry should be reflected in the laboratory-frame PADs and these may be expected to be different for the two different isomerisation pathways. Such changes can be quantitatively analysed by computing the Dyson orbital, ΨD, for the key structures along the reaction coordinate. The Dyson orbital can be thought of as the one-electron wavefunction describing the electron that is being photodetached. Krylov and coworkers have shown that PADs can be conveniently calculated from ΨD yielding computed β2(ε) trends27,28. We have previously shown that computed β2(ε) are in satisfactory agreement with experimental ones for several molecular anions in their ground state, including para-substituted phenolate anions, which pCK is a derivative of29,30. Moreover, we showed that PADs are also sensitive to subtly differing electronic structure when a short alkyl chain (ethyl) lies either in the plane of the phenolate ring or perpendicular to it30. We have now extended these calculations to predict the β2(ε) for detachment from the S1 excited state of pCK.
Figure 2b shows ΨD for key critical geometries: the Franck-Condon geometry, ΨD(FC), and the two S1 minima associated with a rotation about φSB and φDB: ΨD(SB) and ΨD(DB). Laboratory-frame PADs were calculated based on these ΨD, with the neutral D0 ground state as the final state. The computed β2 values can be directly compared to the measured values (Fig. 1f). The simplest comparison can be done by averaging the computed β2 values over the same energy range as for the experimental results. This yielded computed anisotropy parameters for key geometries of β2 = −0.48 (FC), β2 = −0.40 (PM), β2 = −0.19 (SB) and β2 = +0.04 (DB). These can be directly compared with experimental values of β2 = −0.36 and β2 = −0.11 for the initial peak at t = 0 and the peak at t = 1 ps. Such a comparison suggests that the signal in ε1 arises from FC and PM, while the signal in ε2 arises from SB. A more useful comparison is based on the trends of β2(ε). From Fig. 3a, the measured β2(ε) for the peak at t = 0 is in reasonable quantitative agreement with the β2(ε) computed from ΨD(FC) and ΨD(PM). From Fig. 3b, β2(ε) for the photoelectron peak at t = 1 ps is in reasonable quantitative agreement with β2(ε) computed from ΨD(SB). In contrast, the agreement for this feature with β2(ε) computed from ΨD(DB) is poor and qualitatively has the wrong sign and trend. Other points along the φDB coordinate, including at the barrier, yielded predominantly positive β2(ε) values, similar to that predicted from ΨD(DB), and thus also qualitatively different to the observed experimental trends. We conclude that the signal in the ε2 spectral range is a direct measure of the single bond pathway rather than the double bond one upon photoexcitation to S1 of pCK and that the dynamical changes between ε1 and ε2 reflect adiabatic motion between the FC/PM region to the SB minimum on the S1 excited state. This conclusion is consistent with the energetic arguments made earlier.
Overall, the agreement between predicted and measured β2(ε) is almost quantitative, especially given that these are based on single geometries that do not account for other nuclear motions (either thermal or photoinduced), which will tend to make the PADs more isotropic. Moreover, the calculation of the PADs employs some key approximations. In particular, the outgoing wave is treated as a plane wave and thus assumes no interaction of the photoelectron with the neutral core. In the present case, this may be a poor approximation because the neutral pCK core has a large permanent dipole moment. Despite these limitations, the agreement is very good, especially in terms of the trends of β2 with ε as see in Fig. 3.
Inspection of ΨD in Fig. 2b provides intuitive chemical insight about how the PADs reflect the changes in electronic structure along the isomerisation coordinate. Specifically, we have previously used a simple Hückel model to interpret changes in the excited state energies and character for a series of para-substituted phenolate anions29. As pCK belongs to this family, similar arguments apply here. The S1 and S2 states can be considered as linear combinations of molecular orbitals localised on the phenolate ring and the π-conjugated para-substituent. From Fig. 2b, rotation about φSB leads to a localisation of ΨD(SB) onto the π-conjugated substituent. Locally, ΨD(SB) is therefore associated with a planar π-conjugated system and this is expected to lead to β2 < 0, similar to that predicted for ΨD(FC)26. In contrast, following the rotation about φDB, ΨD(DB) becomes delocalised over a non-planar moiety. Such a molecular orbital is expected to yield β2 ~ 0, as previously seen in the ground electronic state of para-ethylphenolate30. Hence, despite the complex nature of lab-frame PADs, simple arguments provide an intuitive view of the electronic structure changes associated with the isomerisation coordinate. Hence, without the need to perform high-level calculations, the observed PADs can provide qualitative insight into the changes in valence-bonding along the isomerisation coordinate.
## Discussion
Based on the spectral and angular distributions, the photoelectron signal at ε1 is assigned to the signature of FC and PM and that at ε2 to the twisted intermediate following rotation about the single bond, SB. The dynamics associated with this evolution is shown in Fig. 1d. The coherence observed shows a nuclear wavepacket moving on the excited state surface from the S1 planar minimum past the SB minimum and back again with a period of ~400 fs. Note that the vibrational modes that comprise this wavepacket are not necessarily the Franck-Condon active modes. The dominant FC modes are likely to stretch the C–C bonds as the excitation involves a π* ← π transition. These are high frequency modes that lead to very rapid dynamics from the FC towards the S1 minimum. This motion then evolves into the modes that lead to isomerisation. The observed oscillation is in agreement with excited state molecular dynamics simulations of pCK that have predicted a similar oscillation20. Only a single oscillation is observed, presumably as a result of the dephasing to other modes (i.e. internal vibrational energy redistribution).
The time-resolved photoelectron spectroscopy experiment by Zewail and coworkers similarly noted energetic shifts and associated dynamics, following photoexcitation at hv = 3.10 eV16. Excitation at 3.10 eV is above the adiabatic detachment energy and probably also above the barrier to double bond rotation. In their experiments, autodetachment from the S1 state (characterised by electrons at low ε) was a prominent feature, which could swamp any signatures of the dynamics associated with double bond rotation that might have been occurring. We also observe a very small fraction of autodetachment (4%), enabled by the finite temperature (~300 K) and the spectral width of the pump pulse. Additionally, Zewail and coworkers observed an oscillation in the high kinetic energy window, similar to that observed in ε1, but not the out-of-phase oscillation at lower energy (ε2), probably because of contamination by autodetachment16. Dynamics involving isomerisation were also observed in a recent study on a closely related PYP chromophore anion in which the ketone is replaced by an ester group31. These dynamics were in competition with internal conversion to a non-valence state of the anion. Such dynamics are not observed here highlighting that even small chemical changes can have a marked impact on the excited state dynamics.
Finally, Fig. 1b and e shows a peak at very low ε (ε3) in the time-resolved photoelectron spectra. This spectral peak could arise from double bond rotation. The maximum expected energy for photodetachment from DB, εDB = 0.21 eV, which would be consistent with this peak. It is not informative to analyse the PADs for this channel because they are at too low kinetic energy, where the PADs are generally expected to be isotropic. However, a number of observations may suggest a different origin of the ε3 signal. Firstly, the formation of the DB minimum involves motion along the φDB coordinate (Fig. 2a) and should lead to photoelectron signals that evolve continuously from FC/PM to the DB minimum; but this is not observed. Secondly, the oscillation frequency of the integrated signal in ε2 and ε3 is essentially identical (Fig. 1d); one might expect that the period of motion to differ slightly between the two coordinates. Thirdly, if this signal was attributed to DB rotation, then the minimum of the photoelectron signal in ε3 would arise because the probe photon energy was insufficient to access the final neutral state (D0)32. In that case, the oscillation should be observable in the total photoelectron signal, but no such changes are seen (Supplementary Fig. 4). Instead, we suspect that the signal in ε3 comes about because the probe can access the first excited state of the neutral, D1. This excited state can be seen in the photoelectron spectrum at higher photon energies (see Supplementary Fig. 2). According to our calculations, the vertical energy difference between the SB intermediate on S1 and the D1 is 1.3 eV, suggesting that it could be accessed with the 1.55 eV probe. Nevertheless, the assignment of this feature remains somewhat uncertain and we cannot exclude that concomitant dynamics about φDB are taking place on the S1 state over the first picosecond. It would be useful to probe the dynamics with a higher energy photon. However, this comes with added complications of possible excitations from the S1 to higher-lying excited states of the anion.
In summary, we have probed the geometric and electronic structure of a polyatomic molecule using time-resolved photoelectron imaging. In combination with calculations beyond the Franck-Condon region, we can identify specific signals that arise from an isomerisation coordinate involving rotation about the single bond in pCK. The photoelectron signal provides information about changes in the energies of potential energy surfaces along an intramolecular coordinate, while the photoelectron angular distributions capture the changes in electronic structure that arise from such an isomerisation. While we can conclusively identify single-bond rotation, we cannot exclude that double-bond rotation may be occurring also, because its photoelectron signatures are not captured well in the current experiments. To the best of our knowledge, this presents the first study in which lab-frame photoelectron angular distributions have been tracked along a non-dissociative adiabatic coordinate and that have been quantitatively modelled. These methods provide a basis for probing adiabatic dynamics in large molecular systems.
## Methods
### Experimental details
Experiments were performed on an anion photoelectron imaging spectrometer33. Anions were produced by negative-mode electrospray ionization of pCK in methanol at pH ~ 10 and transferred into vacuum where they were stored in a ring-electrode trap, thermalized to ~300 K, and unloaded into a time-of-flight mass spectrometer at 100 Hz. Mass-selected anions were intersected by a pair of delayed femtosecond pulses at the centre of a velocity-map imaging spectrometer, which monitored the velocity vectors of the emitted photoelectrons. Probe pulses used the fundamental of a Ti:Sapph (450 μJ pulse−1) and pump pulses were generated by 4th harmonic generation of idler of an OPA (5 μJ pulse−1) and interacted with the sample unfocussed (beam diameter ~ 3 mm). Pump and probe polarizations were set parallel to the detector. The temporal instrument response is 100 fs and times are accurate to better than ±10 fs. Raw photoelectron images were analysed using polar onion peeling34, which recovers the 3D electron velocity distribution from the 2D projection measured on the position sensitive detector (see Supplementary Methods and Supplementary Fig. 5). This analysis yields photoelectron spectra and PADs that were calibrated using the photoelectron spectrum of iodide.
### Computational details
The energetic minima corresponding to FC(S0), the planar S1 state and SB(S1) and DB(S1) were first located at the SA2-CASSCF(12,11)/6-31G* level of theory (see Supplementary Fig. 6 and Supplementary Table 1)35. Linear interpolation in internal coordinates (LIIC) pathways were obtained to link the different critical points. An LIIC pathway gives the most straightforward path from a given molecular geometry to a different geometry by interpolating a series of geometries in between, using internal (not Cartesian) coordinates (see for example ref. 36). It is important to note that no reoptimisation of the molecular geometries is performed along these pathways, implying that LIIC pathways do not correspond to minimum energy paths, per se. In particular, the barriers observed along LIIC pathways are possibly higher than the actual barriers one would obtain by searching for proper transition states. LIICs, however, offer a clear picture of the possible pathways between critical points of potential energy surfaces and allow to predict photophysical and photochemical processes that a molecule can undergo. The electronic energy of the S1, S2 and D0 states were recalculated at all points along the LIICs using multi-state extended multi-configurational quasi-degenerate perturbation theory (MS-XMCQDPT2)37 to correct for the lack of dynamic correlation at the SA-CASSCF level. The (aug)-cc-pVTZ basis set was used where the augmented function was only affixed to the oxygen atoms38. The D0 was calculated through addition of an orbital characterized by an extremely diffuse p-function (α = 1E–10) in the active space and included in the 6 state averaging procedure to mimic detachment to the continuum39,40,41. A rigid shift was applied to match the S0–D0 energy to the experimentally determined vertical detachment energy of 2.94 ± 0.05 eV at the Franck-Condon geometry. A DFT/PBE0-based one-electron Fock-type matrix was used to obtain energies of MCSCF semi-canonical orbitals used in perturbation theory as done elsewhere39,40,41.
The Dyson orbitals for critical geometries were calculated using EOM-EE/IP-CCSD/6-31+G** 27,28,42,43 and the PADs were modelled using ezDyson v444. EOM-EE-CCSD calculations with the 6-31+G** basis set were also used to determine the vertical excitation energies of the first excited state of the neutral, D1, at the minimum energy geometries on the S1 surface.
The initial SA-CASSCF calculations were performed with Molpro 201245, XMCQDPT2 calculations were carried out using the Firefly quantum chemistry package46 and EOM-EE/IP-CCSD calculations used QChem 547. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8507168889045715, "perplexity": 1769.0408077898712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948632.20/warc/CC-MAIN-20230327123514-20230327153514-00344.warc.gz"} |
http://www.neverendingbooks.org/tag/consani | # Tag: Consani
‘Gabriel’s topos’ (see here) is the conjectural, but still elusive topos from which the validity of the Riemann hypothesis would follow.
It is the latest attempt in Alain Connes’ 20 year long quest to tackle the RH (before, he tried the tools of noncommutative geometry and later those offered by the field with one element).
For the last 5 years he hopes that topos theory might provide the missing ingredient. Together with Katia Consani he introduced and studied the geometry of the Arithmetic site, and later the geometry of the scaling site.
If you look at the points of these toposes you get horribly complicated ‘non-commutative’ spaces, such as the finite adele classes $\mathbb{Q}^*_+ \backslash \mathbb{A}^f_{\mathbb{Q}} / \widehat{\mathbb{Z}}^{\ast}$ (in case of the arithmetic site) and the full adele classes $\mathbb{Q}^*_+ \backslash \mathbb{A}_{\mathbb{Q}} / \widehat{\mathbb{Z}}^{\ast}$ (for the scaling site).
In Vienna, Connes gave a nice introduction to the arithmetic site in two lectures. The first part of the talk below also gives an historic overview of his work on the RH
The second lecture can be watched here.
However, not everyone is as optimistic about the topos-approach as he seems to be. Here’s an insightful answer on MathOverflow by Will Sawin to the question “What is precisely still missing in Connes’ approach to RH?”.
Other interesting MathOverflow threads related to the RH-approach via the field with one element are Approaches to Riemann hypothesis using methods outside number theory and Riemann hypothesis via absolute geometry.
About a month ago, from May 10th till 14th Alain Connes gave a series of lectures at Ohio State University with title “The Riemann-Roch strategy, quantizing the Scaling Site”.
The accompanying paper has now been arXived: The Riemann-Roch strategy, Complex lift of the Scaling Site (joint with K. Consani).
Especially interesting is section 2 “The geometry behind the zeros of $\zeta$” in which they explain how looking at the zeros locus inevitably leads to the space of adele classes and why one has to study this space with the tools from noncommutative geometry.
Perhaps further developments will be disclosed in a few weeks time when Connes is one of the speakers at Toposes in Como.
A couple of weeks ago, Alain Connes and Katia Consani arXived their paper “On the notion of geometry over $\mathbb{F}_1$”. Their subtle definition is phrased entirely in Grothendieck‘s scheme-theoretic language of representable functors and may be somewhat hard to get through if you only had a few years of mathematics.
I’ll try to give the essence of their definition of an affine scheme over $\mathbb{F}_1$ (and illustrate it with an example) in a couple of posts. All you need to know is what a finite Abelian group is (if you know what a cyclic group is that’ll be enough) and what a commutative algebra is. If you already know what a functor and a natural transformation is, that would be great, but we’ll deal with all that abstract nonsense when we’ll need it.
So take two finite Abelian groups A and B, then a group-morphism is just a map $f~:~A \rightarrow B$ preserving the group-data. That is, f sends the unit element of A to that of B and
f sends a product of two elements in A to the product of their images in B. For example, if $A=C_n$ is a cyclic group of order n with generator g and $B=C_m$ is a cyclic group of order m with generator h, then every groupmorphism from A to B is entirely determined by the image of g let’s say that this image is $h^i$. But, as $g^n=1$ and the conditions on a group-morphism we must have that $h^{in} = (h^i)^n = 1$ and therefore m must divide i.n. This gives you all possible group-morphisms from A to B.
They are plenty of finite abelian groups and many group-morphisms between any pair of them and all this stuff we put into one giant sack and label it $\mathbf{abelian}$. There is another, even bigger sack, which is even simpler to describe. It is labeled $\mathbf{sets}$ and contains all sets as well as all maps between two sets.
Right! Now what might be a map $F~:~\mathbf{abelian} \rightarrow \mathbf{sets}$ between these two sacks? Well, F should map any abelian group A to a set F(A) and any group-morphism $f~:~A \rightarrow B$ to a map between the corresponding sets $F(f)~:~F(A) \rightarrow F(B)$ and do all of this nicely. That is, F should send compositions of group-morphisms to compositions of the corresponding maps, and so on. If you take a pen and a piece of paper, you’re bound to come up with the exact definition of a functor (that’s what F is called).
You want an example? Well, lets take F to be the map sending an Abelian group A to its set of elements (also called A) and which sends a groupmorphism $A \rightarrow B$ to the same map from A to B. All F does is ‘forget’ the extra group-conditions on the sets and maps. For this reason F is called the forgetful functor. We will denote this particular functor by $\underline{\mathbb{G}}_m$, merely to show off.
Luckily, there are lots of other and more interesting examples of such functors. Our first class we will call maxi-functors and they are defined using a finitely generated $\mathbb{C}$-algebra R. That is, R can be written as the quotient of a polynomial algebra
$R = \frac{\mathbb{C}[x_1,\ldots,x_d]}{(f_1,\ldots,f_e)}$
by setting all the polynomials $f_i$ to be zero. For example, take R to be the ring of Laurant polynomials
$R = \mathbb{C}[x,x^{-1}] = \frac{\mathbb{C}[x,y]}{(xy-1)}$
Other, and easier, examples of $\mathbb{C}$-algebras is the group-algebra $\mathbb{C} A$ of a finite Abelian group A. This group-algebra is a finite dimensional vectorspace with basis $e_a$, one for each element $a \in A$ with multiplication rule induced by the relations $e_a.e_b = e_{a.b}$ where on the left-hand side the multiplication . is in the group-algebra whereas on the right hand side the multiplication in the index is that of the group A. By choosing a different basis one can show that the group-algebra is really just the direct sum of copies of $\mathbb{C}$ with component-wise addition and multiplication
$\mathbb{C} A = \mathbb{C} \oplus \ldots \oplus \mathbb{C}$
with as many copies as there are elements in the group A. For example, for the cyclic group $C_n$ we have
$\mathbb{C} C_n = \frac{\mathbb{C}[x]}{(x^n-1)} = \frac{\mathbb{C}[x]}{(x-1)} \oplus \frac{\mathbb{C}[x]}{(x-\zeta)} \oplus \frac{\mathbb{C}[x]}{(x-\zeta^2)} \oplus \ldots \oplus \frac{\mathbb{C}[x]}{(x-\zeta^{n-1})} = \mathbb{C} \oplus \mathbb{C} \oplus \mathbb{C} \oplus \ldots \oplus \mathbb{C}$
The maxi-functor asociated to a $\mathbb{C}$-algebra R is the functor
$\mathbf{maxi}(R)~:~\mathbf{abelian} \rightarrow \mathbf{sets}$
which assigns to a finite Abelian group A the set of all algebra-morphism $R \rightarrow \mathbb{C} A$ from R to the group-algebra of A. But wait, you say (i hope), we also needed a functor to do something on groupmorphisms $f~:~A \rightarrow B$. Exactly, so to f we have an algebra-morphism $f’~:~\mathbb{C} A \rightarrow \mathbb{C}B$ so the functor on morphisms is defined via composition
$\mathbf{maxi}(R)(f)~:~\mathbf{maxi}(R)(A) \rightarrow \mathbf{maxi}(R)(B) \qquad \phi~:~R \rightarrow \mathbb{C} A \mapsto f’ \circ \phi~:~R \rightarrow \mathbb{C} A \rightarrow \mathbb{C} B$
So, what is the maxi-functor $\mathbf{maxi}(\mathbb{C}[x,x^{-1}]$? Well, any $\mathbb{C}$-algebra morphism $\mathbb{C}[x,x^{-1}] \rightarrow \mathbb{C} A$ is fully determined by the image of $x$ which must be a unit in $\mathbb{C} A = \mathbb{C} \oplus \ldots \oplus \mathbb{C}$. That is, all components of the image of $x$ must be non-zero complex numbers, that is
$\mathbf{maxi}(\mathbb{C}[x,x^{-1}])(A) = \mathbb{C}^* \oplus \ldots \oplus \mathbb{C}^*$
where there are as many components as there are elements in A. Thus, the sets $\mathbf{maxi}(R)(A)$ are typically huge which is the reason for the maxi-terminology.
Next, let us turn to mini-functors. They are defined similarly but this time using finitely generated $\mathbb{Z}$-algebras such as $S=\mathbb{Z}[x,x^{-1}]$ and the integral group-rings $\mathbb{Z} A$ for finite Abelian groups A. The structure of these inegral group-rings is a lot more delicate than in the complex case. Let’s consider them for the smallest cyclic groups (the ‘isos’ below are only approximations!)
$\mathbb{Z} C_2 = \frac{\mathbb{Z}[x]}{(x^2-1)} = \frac{\mathbb{Z}[x]}{(x-1)} \oplus \frac{\mathbb{Z}[x]}{(x+1)} = \mathbb{Z} \oplus \mathbb{Z}$
$\mathbb{Z} C_3 = \frac{\mathbb{Z}[x]}{(x^3-1)} = \frac{\mathbb{Z}[x]}{(x-1)} \oplus \frac{\mathbb{Z}[x]}{(x^2+x+1)} = \mathbb{Z} \oplus \mathbb{Z}[\rho]$
$\mathbb{Z} C_4 = \frac{\mathbb{Z}[x]}{(x^4-1)} = \frac{\mathbb{Z}[x]}{(x-1)} \oplus \frac{\mathbb{Z}[x]}{(x+1)} \oplus \frac{\mathbb{Z}[x]}{(x^2+1)} = \mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}[i]$
For a $\mathbb{Z}$-algebra S we can define its mini-functor to be the functor
$\mathbf{mini}(S)~:~\mathbf{abelian} \rightarrow \mathbf{sets}$
which assigns to an Abelian group A the set of all $\mathbb{Z}$-algebra morphisms $S \rightarrow \mathbb{Z} A$. For example, for the algebra $\mathbb{Z}[x,x^{-1}]$ we have that
$\mathbf{mini}(\mathbb{Z} [x,x^{-1}]~(A) = (\mathbb{Z} A)^*$
the set of all invertible elements in the integral group-algebra. To study these sets one has to study the units of cyclotomic integers. From the above decompositions it is easy to verify that for the first few cyclic groups, the corresponding sets are $\pm C_2, \pm C_3$ and $\pm C_4$. However, in general this set doesn’t have to be finite. It is a well-known result that the group of units of an integral group-ring of a finite Abelian group is of the form
$(\mathbb{Z} A)^* = \pm A \times \mathbb{Z}^{\oplus r}$
where $r = \frac{1}{2}(o(A) + 1 + n_2 -2c)$ where $o(A)$ is the number of elements of A, $n_2$ is the number of elements of order 2 and c is the number of cyclic subgroups of A. So, these sets can still be infinite but at least they are a lot more manageable, explaining the mini-terminology.
Now, we would love to go one step deeper and define nano-functors by the same procedure, this time using finitely generated algebras over $\mathbb{F}_1$, the field with one element. But as we do not really know what we might mean by this, we simply define a nano-functor to be a subfunctor of a mini-functor, that is, a nano-functor N has an associated mini-functor $\mathbf{mini}(S)$ such that for all finite Abelian groups A we have that $N(A) \subset \mathbf{mini}(S)(A)$.
For example, the forgetful functor at the beginning, which we pompously denoted $\underline{\mathbb{G}}_m$ is a nano-functor as it is a subfunctor of the mini-functor $\mathbf{mini}(\mathbb{Z}[x,x^{-1}])$.
Now we are allmost done : an affine $\mathbb{F}_1$-scheme in the sense of Connes and Consani is a pair consisting of a nano-functor N and a maxi-functor $\mathbf{maxi}(R)$ such that two rather strong conditions are satisfied :
• there is an evaluation ‘map’ of functors $e~:~N \rightarrow \mathbf{maxi}(R)$
• this pair determines uniquely a ‘minimal’ mini-functor $\mathbf{mini}(S)$ of which N is a subfunctor
of course we still have to turn this into proper definitions but that will have to await another post. For now, suffice it to say that the pair $~(\underline{\mathbb{G}}_m,\mathbf{maxi}(\mathbb{C}[x,x^{-1}]))$ is a $\mathbb{F}_1$-scheme with corresponding uniquely determined mini-functor $\mathbf{mini}(\mathbb{Z}[x,x^{-1}])$, called the multiplicative group scheme.
Continued here | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9358854293823242, "perplexity": 328.16526050774587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314130.7/warc/CC-MAIN-20190818205919-20190818231919-00060.warc.gz"} |
https://www.groundai.com/project/mechanical-detection-of-carbon-nanotube-resonator-vibrations/ | Mechanical detection of carbon nanotube resonator vibrations
# Mechanical detection of carbon nanotube resonator vibrations
D. Garcia-Sanchez, A. San Paulo, M.J. Esplandiu, F. Perez-Murano, L. Forró, A. Aguasca, A. Bachtold

ICN, Campus UAB, E-08193 Bellaterra, Spain; CNM-CSIC, Campus UAB, E-08193 Bellaterra, Spain; EPFL, CH-1015 Lausanne, Switzerland; Universitat Politecnica de Catalunya, Barcelona, Spain.
###### Abstract
Bending-mode vibrations of carbon nanotube resonator devices were mechanically detected in air at atmospheric pressure by means of a novel scanning force microscopy method. The fundamental and higher-order bending eigenmodes were imaged at frequencies up to 3.1 GHz with sub-nanometer resolution in vibration amplitude. The resonance frequency and the eigenmode shape of multi-wall nanotubes are consistent with the elastic beam theory for a doubly clamped beam. For single-wall nanotubes, however, resonance frequencies are significantly shifted, which is attributed to distortions generated during fabrication, for example slack. The effect of slack is studied by pulling down the tube with the tip, which drastically reduces the resonance frequency.
###### PACS
85.85.+j, 73.63.Fg, 81.16.Rf, 85.35.Kt
Carbon nanotubes offer unique opportunities as high-frequency mechanical resonators for a number of applications. Nanotubes are ultra light, which is ideal for ultralow mass detection and ultrasensitive force detection [Poncharal, Science 1999; Reulet, PRL 2000]. Nanotubes are also exceptionally stiff, making the resonance frequency very high. This is interesting for experiments that manipulate and entangle mechanical quantum states [Blencowe, Phys. Rep. 2004; LaHaye, Science 2004; Knobel, Nature 2003]. However, mechanical vibrations of nanotubes remain very difficult to detect. Detection has been achieved with transmission or scanning electron microscopy [Poncharal, Science 1999; Babic, Nano Lett. 2003; Meyer; Jensen, PRL 2006], and with field emission [Purcell, PRL 2004]. More recently, a capacitive technique has been reported [Sazonova, Nature 2004; Peng, PRL 2006; Witkamp] that allows detection for nanotubes integrated in a device, and is particularly promising for sensing and quantum electromechanical experiments. A limitation of this capacitive technique is that the measured resonance peaks often cannot be assigned to their eigenmodes. In addition, it is often difficult to discern resonance peaks from artefacts of the electrical circuit. It is thus desirable to develop a method that allows the characterization of these resonances.
In this letter, we demonstrate a novel characterization method for nanotube resonator devices, based on mechanical detection by scanning force microscopy (SFM). This method enables the detection of the resonance frequency $f_0$ in air at atmospheric pressure and the imaging of the mode shape of the first bending eigenmodes. Measurements on single-wall nanotubes (SWNTs) show that the resonance frequency is very device dependent, and that $f_0$ dramatically decreases as slack is introduced. We show that multi-wall nanotube (MWNT) resonators behave differently from SWNT resonators. The resonance properties of MWNTs are much more reproducible, and are consistent with the elastic beam theory for a doubly clamped beam without any internal tension.
An image of one nanotube resonator used in these experiments is shown in Fig. 1(a). The resonator consists of a SWNT grown by chemical-vapour deposition [Kong, Nature 1998] or a MWNT synthesized by arc-discharge evaporation [Bonard, Adv. Mater. 1997]. The nanotube is connected to two Cr/Au electrodes patterned by electron-beam lithography on a high-resistivity Si substrate with a thermal silicon dioxide layer. The nanotube is released from the substrate during a buffered-HF etching step. For SFM measurements, the Si substrate is fixed on a home-made chip carrier with transmission lines.
A schematic of the measurement method is presented in Fig. 1(b). The nanotube motion is electrostatically actuated with an oscillating voltage at frequency $f$ applied on a side gate electrode. As the driving frequency approaches the resonance frequency of the nanotube, the nanotube vibration becomes large. In addition, the amplitude of the resonator vibration is 100% modulated at a low frequency $f_{mod}$, which can be seen as sequentially turning the vibration on and off. The resulting envelope of the vibration amplitude is sensed by the SFM cantilever. Note that the SFM cantilever has a limited bandwidth response, so it cannot follow the rapid vibrations at $f$.
The SFM is operated in tapping mode to minimize the forces applied on the nanotube by the SFM cantilever. The detection of the vibrations is optimized by matching $f_{mod}$ to the resonance frequency of the first eigenmode of the SFM cantilever. As a result, the first cantilever eigenmode is excited with an amplitude proportional to the nanotube amplitude, which is measured with a lock-in amplifier tuned at $f_{mod}$. The second eigenmode of the SFM cantilever is used for topography imaging in order to suppress coupling between the topography and vibration detections (see Fig. 1(c)). Note that in-plane nanotube vibrations can be detected by means of the interaction between the nanotube and the tip side, or asperities at the tip apex.
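As an aside, the detection principle can be sketched numerically: an oscillation at $f$ whose amplitude is fully modulated at $f_{mod}$ carries a slow envelope that survives the cantilever's limited bandwidth and can be demodulated with a lock-in at $f_{mod}$. In the minimal Python sketch below, the drive frequency, modulation frequency, and sampling rate are illustrative assumptions, not the instrument settings used in the experiment.

```python
import numpy as np

f = 500e6       # nanotube drive frequency (Hz), illustrative
f_mod = 60e3    # modulation frequency, matched to the cantilever eigenmode (Hz)
fs = 5e9        # sampling rate (Hz)
t = np.arange(0, 5 / f_mod, 1 / fs)   # five modulation periods

# Tube vibration: fast oscillation at f whose amplitude is 100% modulated
# at f_mod, i.e. the vibration is sequentially turned on and off.
z_tube = 0.5 * (1 + np.cos(2 * np.pi * f_mod * t)) * np.cos(2 * np.pi * f * t)

# The cantilever cannot follow f; it only senses the slow envelope.
envelope = np.abs(z_tube)

# Lock-in detection at f_mod recovers a signal proportional to the
# vibration amplitude of the tube.
X = 2 * np.mean(envelope * np.cos(2 * np.pi * f_mod * t))
Y = 2 * np.mean(envelope * np.sin(2 * np.pi * f_mod * t))
print(f"lock-in magnitude at f_mod: {np.hypot(X, Y):.3f}")
```

The recovered lock-in magnitude is proportional to the tube's vibration amplitude, which is what makes a resonance sweep of $f$ at fixed $f_{mod}$ possible.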
We start by discussing measurements on MWNTs. Suspended MWNTs stay straighter than SWNTs and are thus more suitable for testing the technique. Figures 2(a-e) show the topography and the nanotube vibration images obtained at different actuation frequencies. The different shapes of the vibrations are attributed to different bending eigenmodes. Zero, one, and two nodes correspond to the first-, second-, and third-order bending eigenmodes.
Figure 2(g) shows the resonance peak of the fundamental eigenmode for another MWNT device. The measured resonance frequency is remarkably high; it is higher than the reported resonance frequencies of doubly clamped resonators based on nanotubes or other materials [11, 16]. The quality factors of the tubes that we have studied are low, between 3 and 20.
We now compare these results with the elastic beam theory for a doubly clamped beam. The displacement $z(x,t)$ is given by [17]
$$\rho \pi r^2 \frac{\partial^2 z}{\partial t^2} + EI \frac{\partial^4 z}{\partial x^4} - T \frac{\partial^2 z}{\partial x^2} = 0 \qquad (1)$$
with $\rho$ the density of graphite, $r$ the radius, $E$ the Young modulus, $I$ the moment of inertia, and $T$ the tension in the tube. Assuming that $T = 0$, and that $z = 0$ and $\partial z/\partial x = 0$ at $x = 0$ and $x = L$, the resonance frequencies are [17]
$$f_n = \frac{\beta_n^2}{4\pi} \frac{r}{L^2} \sqrt{\frac{E}{\rho}} \qquad (2)$$
with $\beta_1 = 4.73$, $\beta_2 = 7.85$, $\beta_3 = 11.00$ (the clamped-clamped eigenvalues), and $L$ the length.
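As an illustration (an addition, not part of the letter), Eq. 2 can be evaluated directly. The tube dimensions and material constants below are assumptions chosen only to land in the frequency range discussed here:

```python
# Minimal sketch of Eq. 2 for a doubly clamped beam; the device values are
# hypothetical, not taken from Table 1.
import math

def beam_frequency(n, r, L, E, rho):
    """Resonance frequency f_n (Hz) of a doubly clamped cylindrical beam."""
    betas = {1: 4.730, 2: 7.853, 3: 10.996}   # clamped-clamped eigenvalues
    return betas[n]**2 / (4 * math.pi) * r / L**2 * math.sqrt(E / rho)

r, L = 5e-9, 1e-6                # assumed radius 5 nm, length 1 micron
E, rho = 1e12, 2200.0            # assumed E = 1 TPa, graphite density (kg/m^3)
for n in (1, 2, 3):
    print(f"f_{n} = {beam_frequency(n, r, L, E, rho) / 1e6:.0f} MHz")
```

For these assumed values the fundamental mode lands near 190 MHz, well inside the 51 MHz to 3.1 GHz range reported below.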
Table 1 shows the resonance frequency for all the measured MWNTs [18]. The measured frequencies span two orders of magnitude, between 51 MHz and 3.1 GHz. Eq. 2 describes these measured frequencies rather accurately when the Young modulus is set to an appropriate value, which is consistent with results on similarly prepared MWNT devices [20].
Such a good agreement is remarkable, since rather large deviations from Eq. 2 have been reported for nanoscale resonators made of other materials [17, 21]. These deviations have been attributed to the tension or slack (also called buckling) that can result during fabrication. Our measurements suggest that tension and slack have little effect on the resonances of MWNTs. We attribute this to the high mechanical rigidity of MWNTs, which makes such deformations difficult to occur [22]. This result may be interesting for certain applications, such as radio-frequency signal processing [23], where the resonance frequency has to be predetermined.
We now look at the spatial shape of the vibrations. The maximum displacement is given by the superposition of the eigenmode contributions $\alpha_n z_n(x)$, with $z_n$ the solution of Eq. 1 for $T = 0$ [17]
$$z_n = a_n\left(\cos\frac{\beta_n x}{L} - \cosh\frac{\beta_n x}{L}\right) + b_n\left(\sin\frac{\beta_n x}{L} - \sinh\frac{\beta_n x}{L}\right) \qquad (3)$$
with coefficient ratios fixed by the clamped boundary conditions, numerically $-1.017$, $-0.9992$, and $-1.00003$ for the first three eigenmodes. When damping is described within the context of Zener's model, we have [17]
$$\alpha_n = \frac{1}{4\pi^3 r^2 \rho L^3} \, \frac{1}{f_n^2 - f_{RF}^2 - \mathrm{i} f_n^2/Q_n} \int_0^L z_n(x)\, F_{ext}(x)\, \mathrm{d}x \qquad (4)$$
with $Q_n$ the quality factor measured for each eigenmode, $f_{RF}$ the actuation frequency, and $F_{ext}$ the external force. The force is set by the DC and AC voltages applied on the gate and by the capacitance between the gate and the tube. A precise estimate of $F_{ext}$ is very challenging due to the difficulty of determining this capacitance; the most difficult task is to account for the asymmetric gate and for the screening of the clamping electrodes. As a simplification, we use a uniform force along a certain portion of the tube, and zero otherwise, with the force amplitude and the portion taken as fitting parameters. A third fitting parameter is the linear conversion of the displacement of the tube into the measured displacement of the cantilever [24]. Fig. 2(f) shows the results of the calculations. The model qualitatively reproduces the overall shape of the measured eigenmodes as well as the ratio between the amplitudes of the different eigenmodes. In addition, the model predicts that the displacement at the nodes is different from zero, as seen in the measurements. This is due to the low quality factor, so the first eigenmode contributes to the displacement even at the resonance frequencies of the second or the third eigenmode.
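For reference, a short numerical sketch (again an addition; standard clamped-clamped boundary conditions are assumed) of how the $\beta_n$ of Eq. 2 and the mode shapes $z_n$ of Eq. 3 can be obtained:

```python
import numpy as np
from scipy.optimize import brentq

# The beta_n solve cos(beta)*cosh(beta) = 1 for a doubly clamped beam.
f = lambda b: np.cos(b) * np.cosh(b) - 1.0
betas = [brentq(f, lo, hi) for lo, hi in [(4, 5), (7, 8), (10, 12)]]
print(np.round(betas, 3))                 # [ 4.73   7.853 10.996]

def mode_shape(n, x, L=1.0):
    b = betas[n - 1]
    # Coefficient ratio fixed by the boundary conditions z(L) = z'(L) = 0.
    ratio = -(np.cos(b) - np.cosh(b)) / (np.sin(b) - np.sinh(b))
    s = x / L
    return (np.cos(b*s) - np.cosh(b*s)) + ratio * (np.sin(b*s) - np.sinh(b*s))

x = np.linspace(0.0, 1.0, 201)
print(mode_shape(1, x)[[0, 100, 200]])    # zero at both clamps, extremum mid-span
```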
These calculations allow for an estimate of the tube displacement, which is 0.2 nm for the fundamental eigenmode (Fig. 2(f)). We emphasize that this estimate indicates only the order of magnitude of the actual vibration amplitude, since crude simplifications have been used for $F_{ext}$. The vibration amplitude for the other devices is estimated to be low as well, between 0.1 pm and 0.5 nm. Notice that the estimated amplitude is quite comparable to the measured one (Fig. 2(g,f)). We are pursuing numerical simulations, taking into account the microscopic tube-tip interaction, that support this.
We turn our attention to the quality factor. The low quality factor may be attributed to the disturbance of the SFM tip. Note, however, that the topography feedback is set at the limit of cantilever retraction, for which the tube-tip interaction is minimum. Moreover, we have noticed no change in the quality factor as the amplitude setpoint of the SFM cantilever is reduced by 3-5% from the limit of cantilever retraction, which corresponds to an enhancement of the tube-tip interaction. This suggests that the tip is not the principal source of dissipation.
The low quality factor may instead be attributed to collisions with air molecules. Indeed, previous measurements in vacuum on similarly prepared resonators show quality factors between 10 and 200 [10, 12], larger than the 3-20 we have obtained. In addition, we can estimate the quality factor in the molecular regime from the effective mass of the beam, the velocity of the air molecules, and the pressure [25]. For the tube in Fig. 2(a), this estimate is not too far from the value we have measured. Note that the molecular regime holds when the mean free path of the air molecules is larger than the resonator dimensions; the mean free path is 65 nm at 1 atm, so we are at the limit of the applicability of this regime. Overall, a more systematic study should be carried out to clearly identify the origin of the low quality factor.
Having shown that SFM successfully detects mechanical vibrations of MWNTs, we now look at SWNTs (Fig. 3(a)). Table 1 shows poor agreement between the measured resonance frequencies and the values expected for a doubly clamped beam. We attribute this to tension or slack. When the tube is elongated due to tension, the resonance frequency increases above the value expected for a beam without tension [19]. The measured frequency of the long SWNT in Tab. 1 is 128% larger than what is expected for a beam without tension. This deviation can be accounted for by an elongation of only 0.2 pm, which suggests that even a weak elongation can dramatically shift the resonance frequency. Such an elongation can result, for example, from the bending of the partially suspended Cr/Au electrodes.
Table 1 shows that the resonance frequencies of other SWNTs can be below the one expected for a doubly clamped beam. This may result from the additional mass of contamination adsorbed on the tube [1, 2]. It may also be the consequence of slack, which occurs when the tube is longer than the distance between the electrodes [26].
To further investigate the effect of slack, we have introduced slack in a non-reversible way by pulling down the tube with the SFM cantilever. Figure 3(b) shows that the resonance frequency can be divided by two for a slack below 1%. The slack is defined as $(L - W)/W$, with $L$ the tube length and $W$ the separation between the clamping points.
Taking into account slack, Eq. 1 has been solved analytically only for in-plane vibrations (in the plane of the buckled beam) [27]. Recent numerical calculations have extended this treatment to out-of-plane vibrations [26]. It has been shown that the frequency of the fundamental eigenmode can even go to zero when no force is applied on the beam. The schematic in Fig. 3(b) shows the physics of this effect. For zero slack, the beam motion can be described by a spring whose restoring force results from the tube bending. When slack is introduced, the fundamental eigenmode is called the "jump rope" mode [26]. It is similar to a mass attached to a point through a massless rod. Its frequency does not depend on bending anymore but is set by an external force, which can be the electrostatic force between the tube and the side gate.
We estimate the reduction of the resonance frequency when the slack passes from 0.3 to 0.9% in Fig. 3(b). Assuming that the external force stays constant, and using the scaling of Ref. [26], we expect a reduction by a factor of about 1.3, which is consistent with the experiment, since the measured frequency passes from 142 to a value lower by roughly this factor. More studies should be done, in particular to relate the resonance frequency to the slack, but also to understand the effect of the boundary conditions at the clamping points. The section of the nanotube in contact with the electrodes may be bent, especially after SFM manipulation, so that the clamped boundary conditions $z = 0$ and $\partial z/\partial x = 0$ may not strictly hold at $x = 0$ and $x = L$.
Overall, these results show that SFM, as a tool to visualize the spatial distribution of the vibrations, is very useful for characterizing the eigenmodes of SWNT resonator devices. In addition, SFM detection provides unique information about the physics of nanotube resonators, such as the effect of slack. Further studies will be carried out on slack, for which interesting predictions have been reported [26]; for example, the number of nodes of higher eigenmodes is expected to change as slack is increased. We anticipate that the reported SFM detection will be very useful for studying NEMS devices made of other materials, such as graphene [28] or microfabricated semiconducting [29] resonators.
We thank J. Bokor, A.M. van der Zande, J. Llanos, and S. Purcell for discussions. The research has been supported by an EURYI grant and FP6-IST-021285-2.
## References
• (1) P. Poncharal, et al., Science 283, 1513 (1999)
• (2) B. Reulet, et al., Phys. Rev. Lett. 85, 2829 (2000)
• (3) M. Blencowe, Physics Reports 395, 159 (2004)
• (4) R.G. Knobel, A.N. Cleland, Nature 424, 291 (2003)
• (5) M.D. LaHaye, et al., Science 304, 74 (2004)
• (6) B. Babic, et al., Nano Lett. 3, 1577 (2003)
• (7) J.C. Meyer, M. Paillet, S. Roth, Science 309, 1539 (2005)
• (8) K. Jensen, et al., Phys. Rev. Lett. 96, 215503 (2006)
• (9) S.T. Purcell, et al., Phys. Rev. Lett. 89, 276103 (2002)
• (10) V. Sazonova, et al., Nature 431, 284 (2004)
• (11) H.B. Peng, et al., Phys. Rev. Lett. 97, 087203 (2006)
• (12) B. Witkamp, M. Poot, and H.S.J. van der Zant, Nano Lett. 6, 2904 (2006).
• (13) J. Kong, et al., Nature 395, 878 (1998)
• (14) J.M. Bonard, et al., Adv. Mater. 9, 827 (1997)
• (15) The SFM microscope is a Dimension 3100 from Veeco. The SFM tips from Olympus have a spring constant and a first-mode resonance frequency as specified by the manufacturer. The amplitude setpoint of the topography feedback is set lower than the free amplitude. The time constant of the lock-in is about 10 ms.
• (16) X.M.H. Huang, et al., Nature 421, 496 (2003)
• (17) A.N. Cleland, Foundations of Nanomechanics (Springer, Berlin 2003)
• (18) We did not observe a change of the resonance frequency as the DC gate voltage is varied. This is attributed to the low voltages and the short tubes used. For instance, to see a change for the device in Fig. 2(a), we estimate that the DC gate voltage should be larger than 13 V [19].
• (19) S. Sapmaz, et al., Phys. Rev. B 67, 235414 (2003).
• (20) R. Lefevre, et al., Phys. Rev. Lett. 95, 185504 (2005)
• (21) A. Husain, et al., Appl. Phys. Lett. 83, 1240 (2003).
• (22) T. Hertel, R.E. Walkup, and P. Avouris, Phys. Rev. B 58, 13870 (1998)
• (23) W. Jing, Z. Ren, and C.T.C. Nguyen, IEEE Trans. Ferro. Freq. Control 51, 1607 (2004)
• (24) We observed that the measured cantilever amplitude depends linearly on the actuation voltage. In addition, the tube displacement is expected to be linear in the actuation voltage in the linear regime. This suggests that the cantilever amplitude is linearly proportional to the tube displacement.
• (25) K.L. Ekinci, M.L. Roukes, Rev. Sci. Instrum. 76, 061101 (2005)
• (26) H. Ustunel, D. Roundy, and T.A. Arias, Nano Lett. 5, 523 (2005)
• (27) A.H. Nayfeh, W. Kreider, T.J. Anderson, AIAA J. 33, 1121 (1995).
• (28) J. Scott Bunch, et al., Science 315, 490 (2007)
• (29) B. Ilic, S. Krylov, L.M. Bellan, H.G. Craighead, J. Appl. Phys. 101, 044308 (2007).
https://tex.stackexchange.com/questions/160137/is-there-any-way-to-get-something-like-pmatrix-with-customizable-grid-lines-betw?noredirect=1 | # Is there any way to get something like pmatrix with customizable grid lines between cells? [duplicate]
In the document I have to describe a series of transformations, made with a matrix. Each transformation works only on a 2x2 or 1x1 block, so I want to visually select this block in the matrix, like this:
I can type the matrix using the pmatrix environment, but I don't know how to draw the rectangle. What is the best way to achieve this?
• Perhaps the following is helpful/sufficient/duplicate: Highlight elements in the matrix – Werner Feb 12 '14 at 20:32
• @Werner, the link you provided was indeed very helpful. I managed to edit the code, provided in the question you linked to do what I wanted. For the sake of reference I wrote thus obtained code in the answer below. Thank you. – fiktor Feb 12 '14 at 21:44
My question was indeed close to a duplicate, as hinted by @Werner. For the sake of reference I provide the code which draws what I wanted. The code was created after analyzing the answer linked by @Werner.
\begin{tikzpicture}[baseline=(current bounding box.center)]
  % Typeset the entries as math nodes; the delimiters reproduce pmatrix's ( ).
  \matrix [matrix of math nodes,left delimiter=(,right delimiter=)] (m)
  {
    \!1 & 0 & 0\!\!\! \\
    \!0 & {P_\theta \otimes P} & 0\!\!\! \\
    \!0 & 0 & 0\!\!\! \\
  };
  % Draw the rectangle through the north-west anchors of the corner cells;
  % cells are addressed as (m-<row>-<column>).
  \draw (m-1-1.north west) -- (m-1-3.north west) -- (m-3-3.north west)
        -- (m-3-1.north west) -- (m-1-1.north west);
\end{tikzpicture}
The following code should be included in the preamble.
\usepackage{tikz}
\usetikzlibrary{arrows,matrix,positioning}
This gives the following.
https://papers.neurips.cc/paper/2018/file/9a0ee0a9e7a42d2d69b8f86b3a0756b1-Reviews.html | NIPS 2018
Sun Dec 2nd through Sat the 8th, 2018 at Palais des Congrès de Montréal
Paper ID: 5112 Data-dependent PAC-Bayes priors via differential privacy
### Reviewer 1
**Summary and main remarks**

The manuscript investigates data-dependent PAC-Bayes priors. This is an area of great interest to the learning theory community: in PAC-Bayesian learning, most prior distributions do not rely on data and there has been some effort in leveraging information provided by data to design more efficient / relevant priors. Classical PAC-Bayes bounds hold for any prior and the crux is often to optimize a Kullback-Leibler (KL) divergence term between a pseudo-posterior (a Gibbs potential of the form $\exp(-\lambda R_n(\cdot))\pi(\cdot)$) and the prior. The manuscript starts with a very convincing and clear introduction to the problem, and builds upon the paper Lever, Laviolette and Shawe-Taylor (2013). The intuition defended by the authors is that when using a data-dependent prior which is *robust* to data changes (loosely meaning that the prior is not crudely overfitting the data), PAC-Bayesian bounds using this prior must be tighter than similar bounds with any prior. This is a clever direction, and the use of differential privacy to address this more formally appears very relevant to me. A second contribution of the manuscript is the use of SGLD (Stochastic Gradient Langevin Dynamics) to elicit such data-dependent priors (Section 5). This section closes with an important message, which is that the approximation found by SGLD still yields a valid PAC-Bayesian bound (Corollary 5.4). This is reassuring for practitioners as they benefit from the comforting PAC-Bayesian theory.

**Overall assessment**

I find the paper to be very well-written and with clever contributions to PAC-Bayesian learning, with a significant impact for the NIPS community. I have carefully checked the proofs and found no flaw. Since differential privacy is not my strongest suit, I have lowered my confidence score to 3. I recommend acceptance.

**Specific remarks**

- Some improper capitalization of words such as PAC, Bayesian, etc. in the references.
- Typo: end of page 4, "ogften" -> often.
- With the authors' scheme of proofs, it is unclear to me whether the limitation of having a bounded loss (by 1 for simplicity) is easy to relax. Since some effort has been put in by the PAC-Bayesian community to derive generalization bounds for unbounded / heavy-tailed / non-iid data, perhaps a comment in Section 2 (other related works) would be a nice addition to the manuscript. See for example the references Catoni, 2007 (already cited); Alquier and Guedj, 2018 (Simpler PAC-Bayesian bounds for hostile data, Machine Learning); and references therein.

[Rebuttal acknowledged.]
### Reviewer 2
The authors provide PAC-Bayes bounds using differential privacy for data-dependent priors. They further discuss the approximation of the differentially private priors based on stochastic gradient Langevin dynamics, which has certain convergence properties in 2-Wasserstein distance. They further connect such an approach to studying the generalization bound of neural nets using PAC-Bayes, which seems interesting. However, it is not quite clear to me why this procedure is helpful in measuring the generalization bound (e.g., for neural nets). Empirically, we can measure the empirical generalization gap, and theoretically the differential-privacy-based bound can be looser than the PAC-Bayes bound. Further discussion regarding this aspect would be helpful. After rebuttal: I thank the authors for their efforts to address my queries. The answers are helpful and I think the idea of connecting PAC-Bayes and differential privacy is interesting.
### Reviewer 3
%% Summary %%

This paper develops new PAC-Bayes bounds with data-dependent priors. Distribution-dependent PAC-Bayesian bounds (especially ones with distribution-dependent priors) are by now well-known. However, as far as I am aware, the authors' bound is the first non-trivial PAC-Bayesian bound where the prior is allowed to depend on the data. The key idea to accomplish this is the observation that a differentially private prior can be related to a prior that does not depend on the data, thereby allowing (as the authors do in their first main result) the development of a PAC-Bayesian bound that uses a data-dependent-yet-differentially-private prior based on the standard one that uses a data-independent prior. The proof of the new bound depends on the connection between max information and differential privacy (versions of the latter imply bounds on versions of the former). A second key result of the authors, more relevant from the computational perspective, is that if a differentially private prior is close in 2-Wasserstein distance to some other (not necessarily differentially private) prior, then a PAC-Bayesian bound can again be developed using this latter data-dependent-but-not-differentially-private prior, where we pick up some additional error according to the 2-Wasserstein distance. This result allows the authors to leverage a connection between stochastic gradient Langevin dynamics (which is computationally friendly) and Gibbs distributions (which are differentially private but computational nightmares). In addition to the above bounds, the authors also perform an empirical study. I have to admit that I focused more on the theoretical guarantees, as I believe they already are interesting enough to warrant publication of the paper. Also, the authors did a poor job in writing Section 6, leaving vital details in the appendix, like figures which they constantly refer to. I feel that this was not in the spirit of abiding by the 8 page limit, as the paper was not self-contained as a result. This was also confusing, as the authors clearly had extra space throughout the paper to include these two figures.

%% Reviewer's Expertise %%

I am an expert in PAC-Bayesian inequalities and also well-versed in recent developments related to differential privacy. I am less familiar with stochastic gradient Langevin dynamics but know the basics.

%% Detailed Comments %%

Aside from Section 6, I found this paper to be remarkably well-written and clear. I especially appreciated the key insight in the paragraph immediately after equation (1.1), for why the Lever et al. bound often must be vacuous for large values of $\tau$. I believe that Theorem 4.2 is a very interesting (and not so complicated) result which really makes for a significant and original contribution. It should open up future directions in developing new PAC-Bayesian guarantees, and so this result alone I feel makes a compelling argument for accepting this paper. In addition, Theorem 5.3, which allows us to only require a prior that is close in 2-Wasserstein distance to a differentially private one, further broadens the PAC-Bayesian toolbox in a significant and original way. That said, Theorem 5.3 has a major weakness, which is that it forces us to give up on obtaining "very high probability" bounds, since (5.3) has a term which grows as $1/\delta'$ as the failure probability $\delta'$ decreases. This is a crucial weakness which the authors should mention after Theorem 5.3, including discussion of whether or not this issue is fundamental.
I looked through the proofs of the main results, Theorems 4.2 and 5.3, and I believe the analysis is sound. I do think the authors made a typo in the definition of $g$ in Theorem 5.3; you should remove the negative sign (indeed the logarithm in (5.4) is not well-defined if the negative sign stays!). I would have liked a more useful/detailed version of what is currently Corollary 5.4. It currently glosses over too many details to really say much. Since you have extra space in the paper, I recommend including a more explicit version of this corollary. You should also include around this point a citation to the Raginsky / Rakhlin / Telgarsky (2017) paper that you refer to in the appendix. I also recommend giving, at least in the appendix, a precise citation of which result from their paper you are using.

Minor comments: You never explain the notation for the squiggly arrow in the main paper. Please fix this. It is too important / widely used in the main text to be left to the appendix. On page 4, line 2, you say "the above result". I think you are referring to Definition 3.1, which is of course a definition, not a result. So, you should change the text accordingly. In Theorem 3.2, you have one instance of $n$ which should be replaced by $m$ (see the inline math following the word "Then"). In the proof of Lemma D.1, you should mention that the upper bound of total variation by KL divergence is from Pinsker's inequality. "Standard results" is a bit vague to inform the uninformed reader.

%% UPDATE AFTER AUTHOR'S RESPONSE %%

I've read the authors' rebuttal and their responses are satisfactory. I do hope you will highlight the weakness of the high probability bound involving the $1/\delta'$ dependence.
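For context (an editorial addition, not reviewer text): the classical PAC-Bayes bound with a data-independent prior $P$, which the discussion above generalizes to data-dependent priors, can be stated in Maurer's form as follows.

```latex
% Classical PAC-Bayes bound (Maurer 2004) with a data-independent prior P;
% stated for background, not quoted from the reviewed paper.
\Pr_{S \sim D^m}\!\left[\;\forall Q:\;
  \operatorname{kl}\!\big(\hat{R}_S(Q)\,\big\|\,R_D(Q)\big)
  \;\le\; \frac{\operatorname{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{m}}{\delta}}{m}
\;\right] \;\ge\; 1-\delta .
```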
http://www.nccr-swissmap.ch/research/publications/supersymmetric-affine-yangian | # The supersymmetric affine Yangian
Monday, 20 November, 2017
## Published in:
arXiv:1711.07449
The affine Yangian of $\mathfrak{gl}_1$ is known to be isomorphic to $\mathcal{W}_{1+\infty}$, the $\mathcal{W}$-algebra that characterizes the bosonic higher spin--CFT duality. In this paper we propose defining relations of the Yangian that is relevant for the $N=2$ superconformal version of $\mathcal{W}_{1+\infty}$. Our construction is based on the observation that the $N=2$ superconformal $\mathcal{W}_{1+\infty}$ algebra contains two commuting bosonic $\mathcal{W}_{1+\infty}$ algebras, and that the additional generators transform in bi-minimal representations with respect to these two algebras. The corresponding affine Yangian can therefore be built up from two affine Yangians of $\mathfrak{gl}_1$ by adding in generators that transform appropriately.
## Author(s):
Matthias R. Gaberdiel
Wei Li
Cheng Peng
Hong Zhang
http://math.stackexchange.com/questions/208470/poisson-integral-on-mathbbh-for-boundary-data-which-is-orientation-preservi | # Poisson integral on $\mathbb{H}$ for boundary data which is orientation-preserving homeomorphism of $\mathbb{R}$
Let $f$ be a real-valued function (in my case, an orientation-preserving homeomorphism of $\mathbb{R}$) on the real line $\mathbb{R}$ which is not in any $L^p$-space. Let us take the simplest example $f(t)=t$. Is there a $\textit{direct or indirect}$ way of computing the harmonic extension $H(f)$ to $\mathbb{H}$ of such a function? For the $\textit{direct}$ way, if I try to use the standard Poisson formula with the Poisson kernel $p(z,t)=\frac{y}{(x-t)^2+y^2}, z= x+iy$, then I am getting $H(f)(i)=\infty$ for $f(t)=t$. But all the sources [Evans, PDE, p. 38 or Wikipedia http://en.wikipedia.org/wiki/Poisson_kernel] for the Poisson formula for $\mathbb{H}$ assume that the boundary map $f$ is in some $L^p, 1\leq p \leq \infty$. But then how should I compute $H(f)$ for very nice functions like $f(t)=t$ and make that equal to $H(f)(z)=z$?
For the $\textit{indirect}$ way, I know one solution to the problem could be to pass to the unit disk model $\mathbb{D}$ by a Möbius transformation $\phi$ that sends $\mathbb{H}$ to $\mathbb{D}$, then solve the problem in $\mathbb{D}$, call the solution on the disk $F$, and then take $\phi^{-1}\circ F \circ \phi :\mathbb{H}\to \mathbb{H}$. This does solve the problem for $f(t)=t$, but my concern is that in general $\phi^{-1}\circ F \circ \phi$ may $\textit{not}$ be harmonic, because post-composing a harmonic map with a holomorphic one is not harmonic in general. In that case, how would I solve this problem? An answer or reference would be greatly appreciated!
Is there necessarily always a harmonic extension? Why are you convinced there is? Note btw that homeomorphisms on $\mathbb R$ are not really the "nice" functions in this context... You're not doing topology here, you are trying to solve a PDE with given boundary conditions. So your well-behaved functions are those on which you know some bounds in whatever norm. – Sam Oct 7 '12 at 1:42
Having said this, integration against a kernel really does not seem suitable here. But maybe some variant of the Perron method might still show the existence of a solution. – Sam Oct 7 '12 at 1:50
@ Sam L. : I agree that we are not doing topology here, but there are some good theorems connecting topology and harmonic extensions,for example, harmonic extension of the unit circle homeomorphim is homeomorphism of the closed unit disk(Rado-Kneser-Choquet theorem). In fact, there are techniques in low-dimensional topology which use harmonic extension of circle homeomorphisms. But for calculational simplifications, sometimes it is easier to use $\mathbb{H}$-model, of course, only if the harmonic extension exists ! Thanks though. – Mathmath Oct 7 '12 at 2:41
First of all, the harmonic extension of $f(t)=t$ cannot be $F(z)=z$ because the natural harmonic extension of a real function is real. You have to add $+iy$ "manually", it does not come from the Poisson kernel. (Unless you interpret the boundary data as $f(t)+i\delta_\infty$, which makes some sense but is likely to be more confusing than helpful.)
Anyway, we have a legitimate question: given an orientation-preserving homeomorphism $f\colon \mathbb R\to\mathbb R$, how (and when) can we realize $f$ as boundary values of a real harmonic function $F(x,y)$ that is increasing with respect to $x$ for any fixed $y>0$? (The increasing property is what will make $F(x,y)+iy$ a homeomorphism.) It helps to look at the derivative $f'$, which is sure to exist at least as a positive Radon measure $\mu$. We want to get a positive harmonic function $u$ out of $\mu$; then $F$ will be a harmonic function such that $F_x=u$ (you can get $F$ by completing $u$ to holomorphic $u+i\tilde u$ and taking the antiderivative of that).
Thus, the problem is reduced to getting a positive harmonic function out of a positive measure $\mu$ on the boundary. This is possible if and only if the Poisson integral of $\mu$ converges. That is, if the integral converges at one point, then it converges at all points and gives what we want. If it diverges, there is no such positive function, hence no harmonic homeomorphic extension.
Two examples will illustrate the above. With $f(t)=t$ we have $f'=1$. The Poisson integral of $f'$ converges and gives $u=1$. Complete to holomorphic function (still $1$) and integrate: you get $z$.
But if $f(t)=t^3$, then $f'(t)=3t^2$ and the Poisson integral of $f'$ diverges. There is no harmonic homeomorphic extension of this $f$. It extends to a harmonic (indeed holomorphic) map $f(z)=z^3$ which is not injective.
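A quick numeric check of these two examples (an addition to the answer, truncating the integral at a finite cutoff):

```python
# The Poisson integral of f'(t) = 1 converges to the constant 1, while for
# f'(t) = 3t^2 it grows without bound as the cutoff increases.
import numpy as np
from scipy.integrate import quad

def poisson_integral(dmu, x, y, cutoff):
    kernel = lambda t: (1 / np.pi) * y / ((x - t)**2 + y**2) * dmu(t)
    val, _ = quad(kernel, -cutoff, cutoff, limit=200, points=[x])
    return val

for cutoff in (1e2, 1e3, 1e4):
    u1 = poisson_integral(lambda t: 1.0, 0.3, 1.0, cutoff)
    u2 = poisson_integral(lambda t: 3 * t**2, 0.3, 1.0, cutoff)
    print(f"cutoff={cutoff:.0e}:  f'=1 -> {u1:.6f}   f'=3t^2 -> {u2:.3e}")
```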
http://mathhelpforum.com/statistics/54960-probability-questions.html | 1. ## Probability Questions
Wasn't sure how to categorize these.
3. Calculate the probability of selecting a college student at random and finding out they have an IQ less than 60 if (a) the probability distribution of the college students IQs is N(64,5) and (b) if the probability distribution is uniform with endpoints 50 and 63.
8. Suppose a computer has 15 main components, each works or does not work independent of the others, with a probability of working equal to 0.8 for each. Now suppose the computer will not boot if 4 or more of the components do not work. Calculate the probability that the computer does not boot.
9. A computer has 1,500 switches, each working or not working independent of the others. Each switch has a probability of 0.6 of working. At least 915 switches must work properly or the computer will not boot. Calculate the probability that this computer boots.
Any help at all would be appreciated. Thanks.
2. Originally Posted by AlphaOmegaStrife
Wasn't sure how to categorize these.
3. Calculate the probability of selecting a college student at random and finding out they have an IQ less than 60 if (a) the probability distribution of the college students IQs is N(64,5) and (b) if the probability distribution is uniform with endpoints 50 and 63.
[snip]
(a) $Z = \frac{X - \mu}{\sigma} = \frac{60 - 64}{5} = -0.8$.
Therefore $\Pr(X < 60) = \Pr(Z < -0.8) = \Pr(Z > 0.8) = 1 - \Pr(Z < 0.8)$ by symmetry.
(b) The pdf of X is $f(x) = \frac{1}{13}$ for 50 < x < 63 and zero elsewhere. So $\Pr(X < 60) = \frac{1}{13} \, (60 - 50) = \frac{10}{13}$.
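For a numerical check (an addition to the thread; scipy assumed, and N(64,5) read as mean 64 and standard deviation 5):

```python
from scipy import stats

print(stats.norm(loc=64, scale=5).cdf(60))      # (a) Pr(Z < -0.8) ~ 0.2119
print(stats.uniform(loc=50, scale=13).cdf(60))  # (b) 10/13 ~ 0.7692
```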
3. Originally Posted by AlphaOmegaStrife
[snip]
8. Suppose a computer has 15 main components, each works or does not work independent of the others, with a probability of working equal to 0.8 for each. Now suppose the computer will not boot if 4 or more of the components do not work. Calculate the probability that the computer does not boot.
[snip]
Let X be the random variable number of components that don't work.
X ~ Binomial(n = 15, p = 1 - 0.8 = 0.2).
Calculate $\Pr(X \geq 4) = 1 - \Pr(X \leq 3)$.
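Numerically (an addition):

```python
from scipy import stats

# X ~ Binomial(15, 0.2): Pr(computer does not boot) = Pr(X >= 4)
print(stats.binom(15, 0.2).sf(3))   # = 1 - cdf(3) ~ 0.352
```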
4. Originally Posted by AlphaOmegaStrife
[snip]
9. A computer has 1,500 switches, each working or not working independent of the others. Each switch has a probability of 0.6 of working. At least 915 switches must work properly or the computer will not boot. Calculate the probability that this computer boots.
Any help at all would be appreciated. Thanks.
Let X be the random variable number of switches that work.
X ~ Binomial(n = 1500, p = 0.6)
Calculate $\Pr(X \geq 915)$.
You can use the normal approximation to the binomial distribution.
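A quick comparison (an addition) of the exact binomial answer with the continuity-corrected normal approximation:

```python
from math import sqrt
from scipy import stats

exact = stats.binom(1500, 0.6).sf(914)       # Pr(X >= 915), X ~ Bin(1500, 0.6)
mu, sigma = 1500 * 0.6, sqrt(1500 * 0.6 * 0.4)
approx = stats.norm(mu, sigma).sf(914.5)     # normal approx., continuity corr.
print(exact, approx)                         # both ~ 0.22
```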
https://forum.bebac.at/forum_entry.php?id=18371&category=26&order=time | ## Suitable equipment? [Study Assessment]
Hi ssussu,
» » […] Did you use an infusion pump or an infusion bottle / drip chamber?
»
» Yes, the bubble is extruded by squeezing the tube so that the bubble floats up from the liquid surface in the tube, like this
» [image]
» so, the dose is not lost.
»
» » » » 2. only if you lost nothing […] the Cmax and AUC will equal the one with a constant rate infusion.
» » »
» » » » […] if the dose is not lost, can I say just the AUC will be affected but the Cmax is the same as with the constant rate infusion?
» »
» » Read again what I wrote above.
»
» So, you mean the Cmax and the AUC both will equal the one with a constant rate infusion?
Yes!
» Don't we need to consider the elimination rate?
Why? Elimination is independent from any kind of input.
» If we speed up the infusion rate while the elimination rate does not change, won't the Cmax be greater than with the constant rate infusion?
Again: yes.
In my simulations (checked numerically in the sketch after this list) I assumed:
1. ▬▬ D=1 with a constant infusion rate of 2 h⁻¹ (i.e., infusion time of exactly 30 minutes).
2. ▬▬ D=0.5 with a constant infusion rate of 2 h⁻¹ (i.e., planned infusion time of exactly 30 minutes).
Infusion completely stopped after 15 minutes for five minutes.
Infusion of the remaining D=0.5 resumed at 20 minutes with an accelerated infusion rate of 3 h⁻¹ (i.e., infusion time of exactly ten minutes).
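Not my actual simulation code, but a minimal one-compartment sketch of the two schedules (volume of distribution set to 1; the elimination rate constant is an assumption for illustration only):

```python
import numpy as np

ke, dt = 0.5, 1e-4                    # assumed ke (1/h); Euler time step (h)
t = np.arange(0.0, 4.0, dt)

def simulate(rate_of):                # rate_of(t): infusion rate in dose/h
    c = np.zeros_like(t)
    for i in range(1, len(t)):
        c[i] = c[i-1] + dt * (rate_of(t[i-1]) - ke * c[i-1])
    return c

const  = simulate(lambda x: 2.0 if x < 0.5 else 0.0)          # schedule 1
paused = simulate(lambda x: 2.0 if x < 0.25 else              # schedule 2
                  (0.0 if x < 1/3 else (3.0 if x < 0.5 else 0.0)))

for name, c in (("constant", const), ("stop + accelerated", paused)):
    print(name, "Cmax =", c.max().round(4), "AUC(0-4h) =", (c.sum()*dt).round(4))
# The AUCs match (dose-determined); the Cmax values agree to within ~1% here.
```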
I have some doubts whether with your equipment (drip counter) you managed
1. to have the same infusion rate across all subjects (I guess in some subjects the infusion was completed earlier or later than at 30 minutes) and
2. to adjust an accelerated infusion rate properly.
ElMaestro had valid points as well. With an accelerated infusion rate you may run into safety problems.
Coming back to your very first question:
» Should the Cmax be excluded? What about the AUC? What should the investigator do if the same situation (need to stop temporarily or need to change the infusion speed) occurs next time?
For the next time I strongly recommend infusion pumps (yeah, I know, expensive): Exact infusion rates, no bubbles, no problems.
If you had kept the original infusion rate (after the five-minute stop), i.e., completed the infusion at 35 minutes, you would have seen a Cmax which is 0.35% (!) lower than expected with the planned schedule – if you have an early sampling time point. If not, you will observe a lower Cmax. How much lower depends on the sampling schedule and the elimination rate. Theoretically the AUC is not affected (it depends only on the dose).
PS: Why did it take the nurse five minutes to expel the bubble? In my experience about one minute should be sufficient. Would look like this:
Cheers,
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼
https://www.physicsforums.com/threads/hamiltonian-as-the-generator-of-time-translations.884777/ | # I Hamiltonian as the generator of time translations
1. Sep 9, 2016
### Frank Castle
In the literature I have read it is said that the Hamiltonian $H$ is the generator of time translations. Why is this the case? Where does this statement derive from?
Does it follow from the observation that, for a given function $F(q,p)$, $$\frac{dF}{dt}=\lbrace F,H\rbrace +\frac{\partial F}{\partial t}$$ In particular, if $F$ is not explicitly dependent on time, then $$\frac{dF}{dt}=\lbrace F,H\rbrace$$ Or is there more to it?
2. Sep 10, 2016
### pliep2000
Ah, Poisson brackets!
I think what you are looking for is Noether's theorem.
Susskind's classical mechanics course on YouTube, I think lectures 4 (symmetries) and 8 (Poisson brackets), will help you.
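A small symbolic check (an addition to the thread) of $\frac{dF}{dt}=\lbrace F,H\rbrace$ for the harmonic oscillator:

```python
import sympy as sp

q, p, m, k = sp.symbols('q p m k', positive=True)
H = p**2 / (2*m) + k*q**2 / 2

def poisson(F, G):
    """Poisson bracket {F, G} in the canonical pair (q, p)."""
    return sp.diff(F, q)*sp.diff(G, p) - sp.diff(F, p)*sp.diff(G, q)

print(poisson(q, H))   # p/m   = dq/dt, Hamilton's first equation
print(poisson(p, H))   # -k*q  = dp/dt, Hamilton's second equation
print(poisson(H, H))   # 0     : H generates its own conservation
```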
https://www.physicsforums.com/threads/linear-functionals.132085/ | # Homework Help: Linear functionals
1. Sep 14, 2006
### wurth_skidder_23
Here is the problem I have been asked to solve:
Assume that m < n and l1, l2, . . . , lm are linear functionals on an n-dimensional vector space X.
(a) Prove there exists a non-zero vector x in X such that the scalar product < x, lj > = 0 for 1 <= j <= m. What does this say about the solution of systems of linear equations?
(b) Under what conditions on the scalars b1, . . . , bm is it true that there exists a vector x in X such that the scalar product < x, lj > = bj for 1 <= j <= m? What does this say about the solution of systems of linear equations?
I am having trouble understanding the concept of a linear functional and how it relates to the vector space X.
Last edited: Sep 14, 2006
2. Sep 14, 2006
### AKG
x is a vector in X, and lj is a linear functional on X, so what is < x, lj >? Do you mean lj(x)? Linear functionals are linear functions which map vectors to scalars. By linearity, they are uniquely determined by their behaviour on the basis vectors of the vector space. Let f be a linear functional on a vector space V, and let v1, ..., vn be basis vectors for V. If x is any vector in V, then there exist scalars x1, ..., xn such that
x = x1v1 + ... + xnvn
Then:
f(x)
= f(x1v1 + ... + xnvn)
= f(x1v1) + ... + f(xnvn)
= x1f(v1) + ... + xnf(vn)
= (x1, ..., xn).(f(v1), ..., f(vn))
If l1, ..., lm are linear functionals, consider the row vectors (l1(v1), ..., l1(vn)), ..., (lm(v1), ..., lm(vn)). Do you see how the question is reduced to the following:
If m < n, and {li(vj) | i = 1, ..., m; j = 1, ..., n} is any set of scalars, do there exist scalars x1, ..., xn, not all zero, such that for each i, it holds that:
(x1, ..., xn).(li(v1), ..., li(vn)) = 0?
Consider:
$$(\mbox{Span}\{(l_i(v_1),\, \dots ,\, l_i (v_n))\ :\ 1 \leq i \leq m\})^{\perp}$$
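A numerical illustration of this reduction (an addition, not part of the post): with m < n, the m rows always leave a nontrivial null space.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m, n = 3, 5                       # m < n
A = rng.normal(size=(m, n))       # row i holds (l_i(v_1), ..., l_i(v_n))
x = null_space(A)[:, 0]           # any column is a nonzero solution
print(np.allclose(A @ x, 0))      # True: l_i(x) = 0 for all 1 <= i <= m
```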
3. Sep 14, 2006
### wurth_skidder_23
< x, lj > is the scalar/dot product
4. Sep 14, 2006
### AKG
I know that. Suppose v is a vector and x is a toenail. What is <v,x>?
5. Sep 14, 2006
### wurth_skidder_23
Yeah, I still have no idea what I'm proving for this problem, nor do I understand the final question, though I imagine if I could do the proof, the question would make sense. A hint would be appreciated.
6. Sep 15, 2006
### AKG
The dot product takes two vectors of the same space and gives you a scalar. A linear functional on V is not a vector in V. There is a strong relation between the value of a linear functional at a vector and the dot product of that vector with a particular other vector, but that's still a little off in the distance for us. First, you need to get the basics.
7 is a real number. So is $-\pi$. It makes sense to talk about, say, 7 x $-\pi$. You can multiply numbers. Can you multiply a number by a function? What is 7 x sine? It doesn't really make sense (okay, you can make sense of it, but try to understand the basic idea). Similarly, v and x are both vectors, say. Then it makes sense to take their dot product, v.x. If f is a linear functional, then what sense does it make to take v.f? You can multiply two numbers together, but you can't multiply a number by a toenail, it's just absurd. Likewise, you can take the dot product of a vector with another vector, but you can't take the dot product of a vector with a functional, it's just absurd. It makes as much sense as taking the dot product of a vector with a toenail.
If V is a vector space over a field F, then a linear functional is a function: f : V -> F that is linear. So a linear functional (I'll just call it a functional) is a function, which is just something that maps elements of one set to elements of another set. In addition, it is a special kind of function, it is a linear function. That means f(v+w) = f(v)+f(w), and f(av) = af(v), where v and w are vectors in V, and a is a scalar in F. Do you understand this so far?
Suppose you have an n-dimensional vector space V (over a field F), and m (with m < n) linear functionals f1, ..., fm. You want to prove that there is a non-zero x such that fi(x) = 0 for 1 <= i <= m.
Let me give you a related problem. Let V be an n-dimensional vector space. Let {v1, ..., vm} be any set of m vectors from V, with m < n. Prove that there exists a non-zero x such that <x,vi> = 0 for 1 <= i <= m.
7. Sep 15, 2006
### matt grime
Duals always seem to cause problems, conceptually, and I don't know why.
If V and W are vector spaces you are really happy with what the space of linear maps between V and W is. If we are the kind of person who likes bases then it is the set of dim(W) by dim(V) matrices. Well, all we've done is take a general case you're happy with and looked at one particular example, when W is one dimensional. Surely that is even easier to understand than the general case?
8. Sep 21, 2011
### abhi_kirk
Hi.
I have the same question as wurth_skidder_23 and I've read the reply by AKG. Can anybody give a further hint on how to solve the question? I'm not sure how to prove that x is non-zero.
http://physics.stackexchange.com/questions/37716/what-is-the-angular-distance-between-ptolemaic-perigees-of-mercury | # What is the angular distance between Ptolemaic perigees of Mercury?
In his excellent treatment of the history of the science of astronomical distances and sizes, Albert van Helden says (p.29) that
The complicated [Ptolemaic] model of Mercury has the curious property of producing two perigees, each about 120° removed from the apogee.
But when I try to confirm this using the 88 day period Mercury's epicycle, I get approximately half the expected value.
In a given time (in days), $t$, Mercury will travel through an angle $\varepsilon = 2\pi t/88$ along its epicycle which will traveled through an angle $\delta = 2\pi t/365$ along its deferent. In transitioning from apogee to perigee, it must be the case (since Mercury must go half way round the epicycle, and then an additional $\delta$ to "catch up" with the angle traveled by the center of the epicycle) that $$\varepsilon=\delta+\pi$$ solving which yields $$t=[2\cdot(1/88-1/365)]^{-1}\approx58$$ which corresponds to about 57°, roughly half the number expected.
What is missing from the above reasoning? Is there something about the definitions of epicyclic period I've missed, perhaps?
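For reference, the arithmetic in the question can be reproduced directly (this only restates the computation; it does not resolve the discrepancy):

```python
t = 1 / (2 * (1/88 - 1/365))   # days from apogee to the computed "perigee"
print(t)                        # ~ 57.97 days
print(360 * t / 365)            # ~ 57.2 degrees along the deferent
```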
https://www.physicsforums.com/threads/how-would-this-work.108703/ | # How would this work
1. Jan 30, 2006
### vincerelli
If you were to drill a hole through the center of the earth and line the hole with a material that keeps everything out of it (the bottom line is a clear path from one side of the earth to the other) and you fell down the hole, what would happen? I would think you could not escape gravity's pull on the other side; if that were the case, would you turn into a tightly compacted ball when you finally settled in the core of the earth? INTERESTING!!!
2. Jan 30, 2006
### daveb
Neglecting air drag, you'd (roughly) fall to the opposite side, and oscillate back and forth between the openings. I say roughly because the density of the surrounding material causes slight variations in the gravity field. With air drag, you'd oscillate several times back and forth, but not as far each time, until you settled in at the center of earth's gravity. Of course, this all assumes you drill through the actual center of gravity; otherwise you slam into the side. No matter what, though, you'd end up dead from all the heat and radioactivity down there. Not sure how high the air pressure would be down there either.
Edited to add: I don't think the gravity is strong enough to compact you into a ball, btw.
3. Jan 30, 2006
### HallsofIvy
If you do allow for air resistance, then you would eventually come to a stop at the center of the earth, where the gravitational pull would, in fact, pull on every part of you in all directions and not "compact" you.
4. Jan 30, 2006
### vincerelli
But how would a force like gravity pull you when in fact there would be a force pushing you from both sides? I don't understand how it would pull you. Well, I guess if you were standing up (not really standing), your head and feet would be drawn to your waist.
5. Jan 30, 2006
### Claude Bile
Provided the hole is cylindrically symmetric, all the lateral forces would cancel out, the only remaining forces would be along the axis of the hole. The symmetry effectively reduces the problem to a 1D simple harmonic oscillator. If you consider the presence of air, the system becomes a 1D damped simple harmonic oscillator.
In the case of a damped oscillator you would indeed come to rest at the centre of the Earth, however the gravitational force at the centre of the Earth is exactly zero, because for each chunk of matter pulling on you, there is another chunk of matter pulling in the opposite direction with exactly the same force (This too is a consequence of symmetry).
Claude.
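A quick integration of that damped 1D oscillator (an illustration only: uniform-density Earth, and the drag coefficient is an arbitrary assumed value):

```python
import numpy as np

g, R = 9.81, 6.371e6      # surface gravity (m/s^2), Earth radius (m)
c = 1e-4                  # assumed linear drag coefficient (1/s)
dt = 1.0                  # time step (s)
x, v, xs = R, 0.0, []     # start at rest at the surface
for _ in range(300_000):
    a = -(g / R) * x - c * v      # inside a uniform Earth, g(r) = g * r / R
    v += a * dt
    x += v * dt
    xs.append(x)

print(f"undamped period ~ {2*np.pi*np.sqrt(R/g)/60:.0f} min")   # ~ 84 min
print(f"amplitude near the end: {max(xs[-5000:])/R:.4f} R")     # settled at center
```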
6. Jan 30, 2006
### moose
vincerelli, gravity pulls, not pushes. If there is an equal amount of matter all around you, then gravity will essentially cancel out.
7. Jan 30, 2006
### DaveC426913
Vincerelli, you weigh the most at the *surface* of the Earth. As you descend into the Earth, you'll weigh less. At the bottom of the Mariana Trench (the deepest spot on the Earth's surface) you would actually weigh slightly less than you would at sea level.
Why? Because there is slightly less "Earth" under you, and slightly more above you. It is enough to make a difference (but the geometry is difficult to explain). The upshot is that, by the time you reach the Centre of the Earth, you will feel zero gravitational pull. You will be weightless. The Earth's mass is actually pulling you outward, but it pulls out in all directions equally, and cancels out. (No, you won't feel pulled apart, either)
BTW, in all that falling thing, don't forget that the Earth turns. This ultimately ruins the experiment, since you can't use a straight tunnel. In fact, you can't just simply use a curved tunnel either, because you won't even fall back down the *same* tunnel you rose through. The tunnel you'd have to carve out would have a separate path for every trip from surface through core to surface. The tunnel would look like a spirograph design.
Last edited: Jan 30, 2006
8. Jan 30, 2006
### tony873004
You could drill from pole to pole.
Question. Imagine a hypothetical water world; pure water from surface to core. What would the pressure in the middle be? Gravity is cancelled but it seems to me like there would still be pressure from 4000 miles of water on top of you from all directions.
9. Jan 30, 2006
### nbo10
I think there is a train based on this concept, I can't remember where.
10. Feb 1, 2006
### Staff: Mentor
While your weight at the center of the earth would be zero, the weight of all that water crushing down on you would exert a very high pressure.
11. Feb 1, 2006
### pallidin
In this scenario, one would experience acceleration towards the center of the earth. Likely, that acceleration would carry you past that center point and further along the "tube".
Having reached a certain point due to that acceleration, you would fall back past the center of the earth, though over a smaller distance than you started with.
In effect, as presented earlier, a "damping" scenario would take over, and you would eventually oscillate down to a static equilibrium at the earth's center.
There is nothing special about it. This acceleration/oscillatory/damping effect is readily seen with metal springs.
12. Feb 1, 2006
### pallidin
Agreed. Also, gravity CANNOT be "canceled"; however, its objective influence can be mitigated.
For example, for someone to suggest that being in a hollow sphere in the center of our earth "cancels" gravity is simply not correct (not speaking to you or anyone in particular). Rather, under that scenario, there are external, spherically symmetric equipotential gravitational influences which locally cancel out the influence, but not gravity itself.
13. Feb 1, 2006
### DaveC426913
I was trying to avoid implying that you would be "pulled apart" - e.g. your left arm and right arm pulled in different directions - this is not the case. In fact, every part of your body is pulled in every direction.
As far as I know, except for gravitational tides (i.e. gradients over a distance), there is no way to detect this "pull in all directions", and for all intents and purposes they really do cancel out. I may be wrong about that. Perhaps you can enlighten me.
https://www.physicsforums.com/threads/circles-on-the-complex-plane.4018/ | # Circles on the complex plane
1. Jul 18, 2003
### StephenPrivitera
In general, |z - zo|=r, where z_o is a fixed point and r is a positive number, represents a circle centered at z_o and with radius r. |z - z1|=k|z - z2|, where z_1 and z_2 are fixed points, also apparently represents a circle, except maybe in the case where k=1. Then we have a line, or a circle of infinite radius. So to find the radius of the circle for |z - z1|=k|z - z2|, I could try to rewrite the equation to fit |z - zo|=r. I did this, and I got a frightening answer. I shall attempt to show it here. The work is much too long and too tedious to write here in full form but I'll explain briefly. x is the x component of z, y is the y component of z, x_1 is the x component of z_1, y_1...y component of z_1, etc.
Square both sides, distribute the k^2, collect x's and y's on the left, complete the square to get (x-something)^2+(y-something)^2=some big mess
simplify the right hand side, rewrite in terms of z1 and z2 as much as possible, take the sqrt of each side,
Anyone who feels like trying this problem could verify my result/ show me a better way of doing it?
$$r=(k^2-1)^{-1}\sqrt{(k^4-2k^2+2)\,|z_1|^2+k^2(2k^2-1)\,|z_2|^2-2k^2(x_1x_2+y_1y_2)}$$
2. Jul 18, 2003
### arcnets
Hi StephenPrivitera,
the problem is surely symmetrical wrt. the line Z1 Z2.
So let's look at this line, and the 2 points A, B where the circle intersects it (points=capital, distances=small):
Z1------ka------A----a---Z2-----b-----B
|------------------kb-----------------|
Let |Z1 - Z2| = d.
From the drawing, we see:
I. d = ka + a
II. b + d = kb
III. 2r = a+b.
Three unknowns: a, b, r.
Three equations: Perfect!
Last edited: Jul 18, 2003
3. Jul 19, 2003
### StephenPrivitera
Really there are four unknowns, a,b,r, and d. If you substitute in for d, then there are two equations and three unknowns. You can solve for r in terms of b or a. I did it for a and got,
r=(1/2)a/(k-1)
Also, how do you know that the diameter lies on the line z1z2?
4. Jul 19, 2003
### arcnets
You know the value of d, because you know Z1 and Z2. I defined d = |Z1 - Z2|.
Concerning the symmetry: It's clear that the circle can have only 2 points in common with the line Z1 Z2. Let's assume the center C is not on Z1 Z2. Now take the mirror image C' of C wrt. to the line Z1 Z2. The circle centered at C', and going through A, B is obviously another possible solution. Now, since you stated that the circle is defined by the given equation, this is a contradiction. Thus, C is on Z1 Z2.
5. Jul 20, 2003
### StephenPrivitera
good point, so rather than eliminating d, i should eliminate a and b;
$r=\dfrac{dk}{k^2-1}$
tricky
I'm still upset the other way didn't work... There was a $(k^2-1)^{-1}$ factor there. Maybe the numerator can simplify somehow to $dk$. I'll try it again. Thanks for the help.
Last edited: Jul 20, 2003
6. Jul 20, 2003
Correct.
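As a numeric sanity check on that result (an addition, not part of the original thread): expanding $|z-z_1|^2=k^2|z-z_2|^2$ gives a circle with centre $c=(k^2z_2-z_1)/(k^2-1)$, whose radius can be compared against $r=dk/(k^2-1)$; the values of $z_1$, $z_2$ and $k$ below are arbitrary.

```python
z1, z2, k = (0 + 0j), (3 + 1j), 2.0
d = abs(z2 - z1)

c = (k**2 * z2 - z1) / (k**2 - 1)   # centre, from expanding the equation
r_sq = abs(c)**2 - (abs(z1)**2 - k**2 * abs(z2)**2) / (1 - k**2)

print(r_sq**0.5, d * k / (k**2 - 1))   # both print the same radius, ~2.108
```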
https://128.84.21.199/abs/1609.06649v1 | cs.CL
# Title: Minimally Supervised Written-to-Spoken Text Normalization
Abstract: In speech-applications such as text-to-speech (TTS) or automatic speech recognition (ASR), \emph{text normalization} refers to the task of converting from a \emph{written} representation into a representation of how the text is to be \emph{spoken}. In all real-world speech applications, the text normalization engine is developed---in large part---by hand. For example, a hand-built grammar may be used to enumerate the possible ways of saying a given token in a given language, and a statistical model used to select the most appropriate pronunciation in context. In this study we examine the tradeoffs associated with using more or less language-specific domain knowledge in a text normalization engine. In the most data-rich scenario, we have access to a carefully constructed hand-built normalization grammar that for any given token will produce a set of all possible verbalizations for that token. We also assume a corpus of aligned written-spoken utterances, from which we can train a ranking model that selects the appropriate verbalization for the given context. As a substitute for the carefully constructed grammar, we also consider a scenario with a language-universal normalization \emph{covering grammar}, where the developer merely needs to provide a set of lexical items particular to the language. As a substitute for the aligned corpus, we also consider a scenario where one only has the spoken side, and the corresponding written side is "hallucinated" by composing the spoken side with the inverted normalization grammar. We investigate the accuracy of a text normalization engine under each of these scenarios. We report the results of experiments on English and Russian.
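For readers unfamiliar with the task, a toy sketch of the idea (not the paper's system, which uses hand-built or covering grammars plus a trained ranking model): a token-level "grammar" enumerates candidate verbalizations and a trivial ranker picks one. The dictionary entries here are made up for illustration.

```python
def verbalizations(token):
    # hand-built stand-in for a normalization grammar
    if token == "2016":
        return ["twenty sixteen", "two thousand sixteen", "two zero one six"]
    return [token]

def normalize(sentence, rank=lambda options: options[0]):
    # a real system ranks candidates with a context model trained on
    # aligned written-spoken data
    return " ".join(rank(verbalizations(tok)) for tok in sentence.split())

print(normalize("in 2016 we moved"))  # -> "in twenty sixteen we moved"
```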
Subjects: Computation and Language (cs.CL) Cite as: arXiv:1609.06649 [cs.CL] (or arXiv:1609.06649v1 [cs.CL] for this version)
## Submission history
From: Ke Wu
[v1] Wed, 21 Sep 2016 17:51:11 GMT (1355kb)
http://math.stackexchange.com/questions/676613/finding-the-closed-form-of-a-sum | Finding the closed form of a sum
I would like to find the closed form of the sum $\sum_{n = 4}^{x}(x - n)$. I believe that the derivative is $x - 4$, but when I take the integral of that and graph it, the sum and $\frac{x^2}{2} - 4x$ are certainly not the same. Any help would be appreciated, as I have no idea how to proceed.
$$\sum_{n=4}^x(x-n)$$ is an Arithmetic Series with common difference $-1$,
as the $r$th term $(T_r)$, for $0\le r\le x-4$, is $x-4-r$
So, $T_{r+1}-T_r=-1$
the first term being $x-4$ and the last being $x-x=0$, and the number of terms is $\displaystyle (x-4)-(x-x)+1=x-3$
Now, the sum of $N$ term with the first & the last term being $a,l$ is $$\frac N2(a+l)$$
So the closed form would be $\frac{x}{2}(x-4)$? – recursive recursion Feb 14 at 18:20
@recursiverecursion, it should be $$\frac{x-3}2(x-4+0)$$ – lab bhattacharjee Feb 14 at 18:22
Thank you! great answer, didn't read it correctly the first time. – recursive recursion Feb 14 at 18:24
@recursiverecursion, my pleasure. Hope I could make the idea clear – lab bhattacharjee Feb 14 at 18:25
This was the last part of deriving the formula for the number of diagonals in a polygon. I looked just now, and my formula's right :) – recursive recursion Feb 14 at 18:34
$$\sum_{n=4}^x{(x-n)}=\sum_{n=4}^x{x}-\sum_{n=4}^x{n}$$ $$=(x-3)x-(4+5+6+...+x)$$
$$=(x^2-3x)-\left(\frac{x(x+1)}{2}-6\right)$$ $$=x^2-3x-\frac{x^2}{2}-\frac{x}{2}+6$$ $$=\frac{x^2}{2}-\frac{7x}{2}+6$$
$$=\frac{1}{2}(x^2-7x+12)$$ $$=\frac{1}{2}(x-4)(x-3)$$
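A quick brute-force check of the closed form for small integers (illustrative only):

```python
for x in range(4, 20):
    assert sum(x - n for n in range(4, x + 1)) == (x - 4) * (x - 3) // 2
print("closed form (x-4)(x-3)/2 verified for x = 4..19")
```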
http://www.physicsforums.com/showthread.php?p=3232615 | # Prime Numbers
by EIRE2003
Tags: numbers, prime
P: 3 In this case the number as a whole changes from 11 to 11/3!, but 11 itself doesn't change at all. If it were 25 and I divided it by 120, the 25 would reduce to 5 and not remain the same. Basically I can write a fraction X/(square root of X rounded down)! on a piece of paper or calculator and change it to smaller numbers on both sides of the division line, by either myself or the calculator, if it is not prime.
P: 70 I believe PrimeNumbers wants to say that if $$GCD(N,[\sqrt{N}]!) = 1$$ then N is prime.
HW Helper
P: 805
Quote by atomthick I believe PrimeNumbers wants to say that if $$GCD(N,[\sqrt{N}]!) = 1$$ then N is prime.
This is true, and can be proved using prime decomposition. Is it practical though? I'm not sure. If you want to determine if a humongous number is prime, calculating factorials and then the gcd can be a very excruciating process.
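For small numbers the criterion is easy to try out; a minimal sketch (math.isqrt needs Python 3.8+):

```python
from math import gcd, factorial, isqrt

def is_prime_gcd(n):
    # n > 1 is prime iff it shares no factor with floor(sqrt(n))!
    return n > 1 and gcd(n, factorial(isqrt(n))) == 1

print([n for n in range(2, 50) if is_prime_gcd(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```

The factorial grows so fast that this is hopeless for large n, which is the point made above.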
P: 70 Clearly it's not computationally feasible for large numbers, however there are some interesting results, for example if $$gcd(N, [\sqrt{N}]!) = P, P > 1$$ then P is a factor of N. It could become computationally feasible if someone finds good algorithms for adding, subtracting and finding modulus that work in the factorial base (Cantor discovered that we can write any number in factorial base, for example 15 = 1! + 1*2! + 2*3!). Because we can easily find the representation of N in factorial base, all we would need is fast computational algorithms for this base. P.S. EIRE2003 see how many interesting questions prime numbers are raising? These kinds of questions and their answers have made great improvements all over mathematics! Those numbers look uninteresting until you ask a question about them, try it.
PF Gold
P: 1,951
Quote by PrimeNumbers DIVIDE X by (SQUARE ROOT X)! ! BEING A FUNCTION ON YOUR CALCULATOR THAT SUMS 1 x 2 x 3 etc. IF PRIME THEN X WILL NOT BE DIVISIBLE BY ANY OF THE NUMBERS MULTIPLIED BY EACH OTHER BELOW THE SQUARE ROOT OF X AND SO WILL REMAIN UNCHANGED BY THE DIVISION. IF NOT PRIME THEN ONE OF IT's FACTORS CAN BE FOUND BELOW THE SQUARE ROOT OF IT AND THE TOP HALF OF THE EQUATION NAMELY X WILL BE DIVIDED BY IT AND REDUCED, OTHERWISE IT'S PRIME!
Huh? Take off the caps lock and size changes, please. And explain better. Have a comma: ","
HW Helper P: 1,987 It was surely not for any of the applications that have been mentioned. It was cultivated for centuries before they were dreamt of. And also prime numbers specifically are not really essentially connected with cryptography. It is just that factorisation into prime numbers is one, just one, example of a hard (computationally very long) problem whose inverse (multiplying the factors) is not hard, if I understand. There are other such hard problems ready to take over for cryptography if ever anyone cracks the factorisation problem. I think of it as having a pile of pebbles, can I arrange them in a regularly spaced rectangle? If not I have a prime number of pebbles. Could be tempted to wonder if it is worthy of a grown man's attention. Tempted to believe that it would be if it were simple - could be explained, followed, carried in the head it would be revealing of a structure. But if it is so difficult and complicated that no one understands the solution when it is found, will it be revealing in the same way? I believe this question is discussed about some of today's very difficult proofs. Unless it throws light on other problems whose significance is more apparent. We are told this would be so, but I suggest we do need to be told.
P: 221
Quote by DeaconJohn Now you've got to admit that's incredible. Why should the distribution of the prime numbers have anything to do with an infinite sum of factorials? As far as I know, that is a mystery that has not been completely explained by what we know about mathematics so far. It's is only relatively recent (say 100 years ago) that mathematicians were able to prove that the factorials and the primes are related as described above. So, it's not suprising that there is still some mystery surrounding "the real reason why."
didn't ken ono recently establish something like 'factorials of primes follow a fractal pattern'?
can someone post the proof please?
https://en.wikipedia.org/wiki/Normalized_compression_distance | # Normalized compression distance
Normalized compression distance (NCD) is a way of measuring the similarity between two objects, be it two documents, two letters, two emails, two music scores, two languages, two programs, two pictures, two systems, two genomes, to name a few. Such a measurement should not be application dependent or arbitrary. A reasonable definition for the similarity between two objects is how difficult it is to transform them into each other.
It can be used in information retrieval and data mining for cluster analysis.
## Information distance
We assume that the objects one talks about are finite strings of 0s and 1s; thus we mean string similarity. Every computer file is of this form. One can define the information distance between strings ${\displaystyle x}$ and ${\displaystyle y}$ as the length of the shortest program ${\displaystyle p}$ that computes ${\displaystyle x}$ from ${\displaystyle y}$ and vice versa. This shortest program is in a fixed programming language. For technical reasons one uses the theoretical notion of Turing machines. Moreover, to express the length of ${\displaystyle p}$ one uses the notion of Kolmogorov complexity. Then, it has been shown [1]
${\displaystyle |p|=\max\{K(x\mid y),K(y\mid x)\}}$
up to logarithmic additive terms which can be ignored. This information distance is shown to be a metric (it satisfies the metric inequalities up to a logarithmic additive term) and is universal (it minorizes every computable distance, as computed for example from features, up to a constant additive term).[1]
### Normalized information distance (similarity metric)
The information distance is absolute, but if we want to express similarity, then we are more interested in relative ones. For example, if two strings of length 1,000,000 differ by 1000 bits, then we are inclined to think that those strings are relatively more similar than two strings of 1000 bits that have that distance. Hence we need to normalize to obtain a similarity metric. This way one obtains the normalized information distance (NID),
${\displaystyle NID(x,y)={\frac {\max\{K{(x\mid y)},K{(y\mid x)}\}}{\max\{K(x),K(y)\}}},}$
where ${\displaystyle K(x\mid y)}$ is the algorithmic information of ${\displaystyle x}$ given ${\displaystyle y}$ as input. The NID is called 'the similarity metric', since the function ${\displaystyle NID(x,y)}$ has been shown to satisfy the basic requirements for a metric distance measure.[2][3] However, it is not computable or even semicomputable.[4]
## Normalized compression distance
While the NID metric is not computable, it has an abundance of applications. It can be approximated by simply replacing ${\displaystyle K}$ with real-world compressors, where ${\displaystyle Z(x)}$ is the binary length of the file ${\displaystyle x}$ compressed with compressor Z (for example "gzip", "bzip2", "PPMZ"), making the NID easy to apply.[2] Vitanyi and Cilibrasi rewrote the NID to obtain the Normalized Compression Distance (NCD)
${\displaystyle NCD_{Z}(x,y)={\frac {Z(xy)-\min\{Z(x),Z(y)\}}{\max\{Z(x),Z(y)\}}}.}$[3]
The NCD is actually a family of distances parametrized with the compressor Z. The better Z is, the closer the NCD approaches the NID, and the better the results are.[3]
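A minimal sketch of the NCD with zlib standing in for the compressor Z (any real compressor only approximates Kolmogorov complexity, so the output is a heuristic similarity score, not the NID itself):

```python
import zlib

def Z(data: bytes) -> int:
    return len(zlib.compress(data, 9))   # compressed length in bytes

def ncd(x: bytes, y: bytes) -> float:
    zx, zy, zxy = Z(x), Z(y), Z(x + y)
    return (zxy - min(zx, zy)) / max(zx, zy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b = b"the quick brown fox jumps over the lazy cat" * 20
c = b"colorless green ideas sleep furiously" * 20
print(ncd(a, b), ncd(a, c))   # the similar pair scores lower
```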
### Applications
The normalized compression distance has been used to fully automatically reconstruct language and phylogenetic trees.[2][3] It can also be used for new applications of general clustering and classification of natural data in arbitrary domains,[3] for clustering of heterogeneous data,[3] and for anomaly detection across domains.[5] The NID and NCD have been applied to numerous subjects, including music classification,[3] to analyze network traffic and cluster computer worms and viruses,[6] authorship attribution,[7] gene expression dynamics,[8] predicting useful versus useless stem cells,[9] critical networks,[10] image registration,[11] question-answer systems.[12]
### Performance
Researchers from the datamining community use NCD and variants as "parameter-free, feature-free" data-mining tools.[5] One group has experimentally tested a closely related metric on a large variety of sequence benchmarks. Comparing their compression method with 51 major methods found in 7 major data-mining conferences over the past decade, they established the superiority of the compression method for clustering heterogeneous data and for anomaly detection, and its competitiveness in clustering domain data.
NCD has an advantage of being robust to noise.[13] However, although NCD appears "parameter-free", practical questions include which compressor to use in computing the NCD and other possible problems.[14]
### Comparison with the Normalized Relative Compression (NRC)
In order to measure the information of a string relative to another, one needs to rely on relative semi-distances such as the NRC.[15] These are measures that do not need to respect the symmetry and triangle-inequality properties of a distance. Although the NCD and the NRC seem very similar, they address different questions. The NCD measures how similar both strings are, mostly using the information content, while the NRC indicates the fraction of a target string that cannot be constructed using information from another string. For a comparison, with application to the evolution of primate genomes, see [16].
## Normalized Google distance

Objects can be given literally, like the literal four-letter genome of a mouse, or the literal text of War and Peace by Tolstoy. For simplicity we take it that all meaning of the object is represented by the literal object itself. Objects can also be given by name, like "the four-letter genome of a mouse," or "the text of War and Peace by Tolstoy." There are also objects that cannot be given literally, but only by name, and that acquire their meaning from their contexts in background common knowledge in humankind, like "home" or "red." We are interested in semantic similarity. Using code-word lengths obtained from the page-hit counts returned by Google from the web, we obtain a semantic distance using the NCD formula and viewing Google as a compressor useful for data mining, text comprehension, classification, and translation. The associated NCD, called the normalized Google distance (NGD), can be rewritten as
${\displaystyle NGD(x,y)={\frac {\max\{\log f(x),\log f(y)\}-\log f(x,y)}{\log N-\min\{\log f(x),\log f(y)\}}},}$
where ${\displaystyle f(x)}$ denotes the number of pages containing the search term ${\displaystyle x}$, and ${\displaystyle f(x,y)}$ denotes the number of pages containing both ${\displaystyle x}$ and ${\displaystyle y}$, as returned by Google or any search engine capable of returning an aggregate page count. The number ${\displaystyle N}$ can be set to the number of pages indexed, although it is more proper to count each page according to the number of search terms or phrases it contains. As a rule of thumb one can multiply the number of pages by, say, a thousand...[17]
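An illustrative computation with made-up page counts (real use would query a search engine for $f(x)$, $f(y)$, $f(x,y)$ and an index size $N$):

```python
from math import log

def ngd(fx, fy, fxy, N):
    return ((max(log(fx), log(fy)) - log(fxy))
            / (log(N) - min(log(fx), log(fy))))

# hypothetical counts for two related terms and their co-occurrence
print(ngd(fx=46.7e6, fy=12.3e6, fxy=2.6e6, N=8e12))  # small value = related
```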
## References
1. ^ a b
2. ^ a b c Li, Ming; Chen, Xin; Li, Xin; Ma, Bin; Vitanyi, P. M. B. (2011-09-27). "M. Li, X. Chen, X. Li, B. Ma, P.M.B. Vitanyi, The similarity metric, IEEE Trans. Inform. Th., 50:12(2004), 3250–3264". IEEE Transactions on Information Theory. 50 (12): 3250–3264. doi:10.1109/TIT.2004.838101.
3. Cilibrasi, R.; Vitanyi, P. M. B. (2011-09-27). "R. Cilibrasi, P.M.B. Vitanyi, Clustering by compression, IEEE Trans. Inform. Theory, 51:12(2005), 1523–1545. Also http://xxx.lanl.gov/abs/cs.CV/0312044 (2003)". IEEE Transactions on Information Theory. 51 (4): 1523–1545. arXiv:cs/0312044. doi:10.1109/TIT.2005.844059.
4. ^ Terwijn, Sebastiaan A.; Torenvliet, Leen; Vitányi, Paul M.B. (2011). "Nonapproximability of the normalized information distance". Journal of Computer and System Sciences. 77 (4): 738–742. doi:10.1016/j.jcss.2010.06.018.
5. ^ a b Keogh, Eamonn; Lonardi, Stefano; Ratanamahatana, Chotirat Ann (2004-08-22). "Towards parameter-free data mining". E. Keogh, S. Lonardi, and C.A. Ratanamahatana. "Towards parameter-free data mining." In Conference on Knowledge Discovery in Data: Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, vol. 22, no. 25, pp. 206–215. 2004. Dl.acm.org. p. 206. doi:10.1145/1014052.1014077. ISBN 978-1581138887. Retrieved 2012-11-03.
6. ^ "S. Wehner,Analyzing worms and network traffic using compression, Journal of Computer Security, 15:3(2007), 303–320". Iospress.metapress.com. Retrieved 2012-11-03.
7. ^ Stamatatos, Efstathios (2009). "A survey of modern authorship attribution methods". Journal of the American Society for Information Science and Technology. 60 (3): 538–556. CiteSeerX 10.1.1.207.3310. doi:10.1002/asi.21001.
8. ^ Nykter, M. (2008). "Gene expression dynamics in the macrophage exhibit criticality". Proceedings of the National Academy of Sciences. 105 (6): 1897–1900. doi:10.1073/pnas.0711525105. PMC 2538855. PMID 18250330.
9. ^ Cohen, Andrew R (2010). "Computational prediction of neural progenitor cell fates". Nature Methods. 7 (3): 213–218. doi:10.1038/nmeth.1424. hdl:1866/4484. PMID 20139969.
10. ^ Nykter, Matti; Price, Nathan D.; Larjo, Antti; Aho, Tommi; Kauffman, Stuart A.; Yli-Harja, Olli; Shmulevich, Ilya (2008). "M. Nykter, N.D. Price, A. Larjo, T. Aho, S.A. Kauffman, O. Yli-Harja1, and I. Shmulevich, Critical networks exhibit maximal information diversity in structure-dynamics relationships, Phys. Rev. Lett. 100, 058702 (2008)". Physical Review Letters. 100 (5): 058702. arXiv:0801.3699. doi:10.1103/PhysRevLett.100.058702. PMID 18352443.
11. ^ Bardera, Anton; Feixas, Miquel; Boada, Imma; Sbert, Mateu (July 2006). "Compression-based Image Registration". M. Feixas, I. Boada, M. Sbert, Compression-based Image Registration. Proc. IEEE International Symposium on Information Theory, 2006. 436–440. Ieeexplore.ieee.org. pp. 436–440. doi:10.1109/ISIT.2006.261706. hdl:10256/3052. ISBN 978-1-4244-0505-3. Retrieved 2012-11-03.
12. ^ Zhang, Xian; Hao, Yu; Zhu, Xiaoyan; Li, Ming; Cheriton, David R. (2007). "Information distance from a question to an answer". X Zhang, Y Hao, X Zhu, M Li, Information distance from a question to an answer, Proc. 13th ACM SIGKDD international conference on Knowledge discovery and data mining, 2007, 874–883. Dl.acm.org. p. 874. doi:10.1145/1281192.1281285. ISBN 9781595936097. Retrieved 2012-11-03.
13. ^ Cebrian, M.; Alfonseca, M.; Ortega, A. (2011-09-27). "M. Cebrian, M. Alfonseca, A. Ortega, The normalized compression distance is resistant to noise, IEEE Transactions on Information Theory, 53:5(2007), 1895–1900". IEEE Transactions on Information Theory. 53 (5): 1895–1900. CiteSeerX 10.1.1.158.2463. doi:10.1109/TIT.2007.894669.
14. ^
15. ^ Ziv, J.; Merhav, N. (1993). "A measure of relative entropy between individual sequences with application to universal classification". IEEE Transactions on Information Theory. 39 (4): 1270–1279. doi:10.1109/18.243444.
16. ^ Pratas, Diogo; Silva, Raquel M.; Pinho, Armando J. (2018). "Comparison of Compression-Based Measures with Application to the Evolution of Primate Genomes". Entropy. 20 (6): 393. doi:10.3390/e20060393. Material was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
17. ^ Cilibrasi, R. L.; Vitanyi, P. M. B. (2011-09-27). "R.L. Cilibrasi, P.M.B. Vitanyi, The Google Similarity Distance, IEEE Trans. Knowledge and Data Engineering, 19:3(2007), 370-383". IEEE Transactions on Knowledge and Data Engineering. 19 (3): 370–383. arXiv:cs/0412098. doi:10.1109/TKDE.2007.48.
https://www.encyclopediaofmath.org/index.php?title=Talk:Bernstein_inequality&diff=prev&oldid=27202 | Difference between revisions of "Talk:Bernstein inequality"
• I would invite corrections to the notation used for expectation ($\mathbb{E}$) and probability ($\mathbb{P}$) here (I am no probabilist).
• I am unsure about the equation following the sentence "Some idea of the accuracy of (2) may be obtained by comparing it with the approximate value of the left-hand side of (2) which is obtained by the central limit theorem in the form", should the square-root in the denominator of the RHS include the $t$ or not? It is difficult to tell from the original images and I do not have access to the literature at present.
http://mathhelpforum.com/calculus/212967-absolute-increase-relative-rate-growth.html | # Thread: Absolute increase/relative rate of growth
1. ## Absolute increase/relative rate of growth
In class we are learning about relative rate of growth and here's this question I don't understand how to approach:
The table below shows the cumulative number of AIDS deaths worldwide. Find the absolute increase in AIDS death between 2003 and 2004 and between 2006 and 2007. Find the relative increase between 2006 and 2007
| Year | 2003 | 2004 | 2005 | 2006 | 2007 |
|---|---|---|---|---|---|
| Cases (millions) | 30.2 | 33.3 | 35.5 | 37.6 | 39.6 |
Apparently, the absolute increase is in millions and relative increase is in percent.
2. ## Re: Absolute increase/relative rate of growth
Hey kuppina.
(Hint: Relative increase between something is (b-a)/a).
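Following the hint, a worked sketch of the numbers (values in millions, as noted above):

```python
cases = {2003: 30.2, 2004: 33.3, 2005: 35.5, 2006: 37.6, 2007: 39.6}

abs_03_04 = cases[2004] - cases[2003]        # 3.1 million
abs_06_07 = cases[2007] - cases[2006]        # 2.0 million
rel_06_07 = abs_06_07 / cases[2006] * 100    # about 5.3 percent

print(abs_03_04, abs_06_07, round(rel_06_07, 1))
```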
http://math.stackexchange.com/questions/163946/are-complex-substitutions-legal-in-integration/163948 | # Are Complex Substitutions Legal in Integration?
This question has been irritating me for awhile so I thought I'd ask here.
Are complex substitutions in integration okay? Can the following substitution be used to evaluate the Fresnel integrals:
$$\int_{0}^{\infty} \sin x^2\, dx=\operatorname {Im}\left( \int_0^\infty\cos x^2\, dx + i\int_0^\infty\sin x^2\, dx\right)=\operatorname {Im}\left(\int_0^\infty \exp(ix^2)\, dx\right)$$
Letting $ix^2=-z^2 \implies x=\pm\sqrt{iz^2}=\pm \sqrt{i}z \implies z=\pm \sqrt{-i} x \implies dx = \pm\sqrt{i}\, dz$
Thus the integral becomes
$$\operatorname {Im}\left(\pm \sqrt{i}\int_0^{\pm\sqrt{-i}\infty} \exp(-z^2)\, dz\right)$$
This step requires some justification, and I am hoping someone can help me justify this step as well: $$\pm \sqrt{i}\int_0^{\pm\sqrt{-i}\infty} \exp(-z^2)\, dz=\pm\sqrt{i}\int^\infty_0\exp(-z^2)\, dz=\pm\sqrt{i}\left(\frac{\sqrt{\pi}}{2}\right)$$
Thus
$$\operatorname {Im}\left(\int_0^\infty \exp(ix^2)\, dx\right)=\operatorname {Im}\left(\pm\frac{\sqrt{i\pi}}{2}\right)=\operatorname {Im}\left(\pm\frac{(1+i)\sqrt{\pi}}{2\sqrt{2}}\right)=\pm\frac{1}{2}\sqrt{\frac{\pi}{2}}$$
We find that the correct answer is the positive part (simply prove the integral is positive, perhaps by showing the integral can be written as an alternating sum of integrals).
Can someone help justify this substitution? Is this legal?
A complex substitution is fine. The thing to be worried about is that replacing $x$ with $\pm \sqrt{i} z$ is not a substitution at all! At best, it's an (awkward) way to try and work with two different substitutions simultaneously. At worst, it's a sure-fire recipe for confusion. – Hurkyl Jun 27 '12 at 23:09
Let's consider the legality of doing an actual u-substitution, such as $z = \sqrt{i} x$. Not only must the integrand be rewritten, so must the limits of integration.
In the original definite integral you have $x$ going from $0$ to $\infty$. Of course this then gives a path of integration for $z$, but it's not sufficient to have just limits $0$ to $\infty$ in the complex plane to specify that path. So this would be a gray area where the limitations of your notation could let you down!
In the complex plane there are many paths from $0$ to $\infty$, even many straight such paths.
A correct way to do this might go as follows. Consider the contour integral $$\oint_\Gamma e^{iz^2}\ dz$$ where $\Gamma$ is the positively oriented triangle with vertices $0, R, R+Ri$ for large $R$. Since the integrand is analytic, Cauchy's Theorem says the result is $0$. This can be written as $J_1 + J_2 + J_3=0$, where $J_1, J_2, J_3$ are the integrals over the segments $[0, R]$, $[R, R+Ri]$, and $[R+Ri,0]$ respectively.
Show that as $R \to +\infty$, $J_2 \to 0$ and $J_3 \to -(1+i) \dfrac{\sqrt{2 \pi}}{4}$. Thus $J_1 \to (1+i) \dfrac{\sqrt{2 \pi}}{4}$, which says that $$\int_0^\infty \cos(t^2)\ dt = \int_0^\infty \sin(t^2)\ dt = \dfrac{\sqrt{2 \pi}}{4}$$
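A numeric cross-check of that value (SciPy's convention is $S(z)=\int_0^z \sin(\pi t^2/2)\,dt$, so $\int_0^\infty \sin(t^2)\,dt=\sqrt{\pi/2}\,S(\infty)$):

```python
import numpy as np
from scipy.special import fresnel

S, C = fresnel(1e4)                 # S and C both tend to 1/2
print(np.sqrt(np.pi / 2) * S)       # ~0.6267
print(np.sqrt(2 * np.pi) / 4)       # ~0.6267, matching sqrt(2*pi)/4
```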
https://slideplayer.com/slide/223840/ | # Chapter 11 Vibrations and Waves.
## Presentation on theme: "Chapter 11 Vibrations and Waves."— Presentation transcript:
Chapter 11 Vibrations and Waves
Elasticity When you hang a weight on a spring, the weight applies a force to the spring and it stretches in direct proportion to the applied force. According to Hooke’s law, the amount of stretch (or compression), x, is directly proportional to the applied force F. Double the force and you double the stretch; triple the force and you get three times the stretch, and so on: F ~ ∆x
Elasticity If an elastic material is stretched or compressed more than a certain amount, it will not return to its original state. The distance at which permanent distortion occurs is called the elastic limit. Hooke’s law holds only as long as the force does not stretch or compress the material beyond its elastic limit.
Free Body Diagrams Revisited
Hooke’s Law Spring force = - (spring constant) (displacement) Felastic = -kx
Hooke’s Law If a mass of 0.55 kg attached to a vertical spring stretches the spring 2.0 cm from its original equilibrium, what is the spring constant?
Simple Harmonic Motion
Vibration about an equilibrium position in which a restoring force is proportional to the displacement from equilibrium
Simple Pendulums Amplitude Length Period Frequency
Simple Pendulums
Simple Pendulums At what position in the cycle of a swinging pendulum is the potential energy of the pendulum at a maximum?
Simple Harmonic Motion Calculations
For a simple pendulum in simple harmonic motion, the period is T = 2π √(L/g). For a mass-spring system in simple harmonic motion, T = 2π √(m/k).
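A small sketch evaluating both periods; the values of L, m and k below are made up for illustration, not taken from the slides.

```python
from math import pi, sqrt

g = 9.81                       # m/s^2
L, m, k = 1.0, 0.55, 270.0     # pendulum length, mass, spring constant

T_pendulum = 2 * pi * sqrt(L / g)   # about 2.0 s
T_spring = 2 * pi * sqrt(m / k)     # about 0.28 s
print(T_pendulum, T_spring)
```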
Waves Medium – a physical environment through which a disturbance can travel Mechanical wave – a wave that requires a medium to travel through Examples Non-examples
Transverse Waves Wave motion is perpendicular to equilibrium
Longitudinal Waves Wave motion is parallel to equilibrium
Wave Properties Pulse wave Reflection
Pulse wave – a single travelling disturbance, with no repetition. Reflection – the wave bounces back when it reaches a boundary.
Constructive and Destructive Interference
Constructive interference A superposition of two or more waves in which individual displacements on the same side of the equilibrium position are added together to form the resultant wave
Constructive and Destructive Interference
Destructive interference A superposition of two or more waves in which individual displacements on opposite sides of the equilibrium position are added together to form the resultant wave
speed of a wave = (frequency) (wavelength) v = f λ
1. A piano emits frequencies that range from a low of about 28 Hz to a high of about 4200 Hz. Find the range of wavelengths in air attained by this instrument when the speed of sound in air is 340 m/s. 2. The red light emitted by a He-Ne laser has a wavelength of 633 nm in air and travels at the speed of light (3.00 x 108 m/s). Find the frequency of the laser light.
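A worked sketch for both, using v = fλ: (1) the piano's wavelengths run from λ = 340/4200 ≈ 0.081 m up to λ = 340/28 ≈ 12 m; (2) the laser's frequency is f = v/λ = (3.00 x 10^8 m/s)/(633 x 10^-9 m) ≈ 4.74 x 10^14 Hz.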
De Broglie Waves Louis de Broglie suggested that all matter has wavelike characteristics. where h is Planck’s constant, equal to 6.63 x J·s. This wavelength is too small to notice interference for large objects. This idea becomes important when looking all things at the microscopic level.
De Broglie Waves What is the wavelength of an electron (mass = 9.11 x 10^-31 kg) traveling at 5.31 x 10^6 m/s?
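A worked check: λ = h/(mv) = (6.63 x 10^-34 J·s)/((9.11 x 10^-31 kg)(5.31 x 10^6 m/s)) ≈ 1.37 x 10^-10 m, roughly the size of an atom.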
http://math.stackexchange.com/questions/227827/supremum-infimum | # Supremum, infimum
I came across a nasty task which includes supremum and infimum and I am confused about it. The question is to find the supremum and infimum of a given set: $$A =\{x+y+z:x,y,z>0,\ xyz=1\}$$ I tried to eliminate $z$ and use correlations between arithmetic means and so on. I get $A =\left\{\frac{xy(x+y)+1}{xy}:x,y>0\right\}$, and I know that $\frac{x+y}{2}>\sqrt{xy}>\frac{2}{\frac{1}{x}+\frac{1}{y}}=\frac{2xy}{x+y}$. From this I get $xy<\frac{1}{4}(x+y)^{2}$ and $x+y>2\sqrt{xy}$, which after substitution gives $$\frac{2(xy)^{\frac{3}{2}}+1}{xy}<\frac{xy(x+y) +1}{xy}<\frac{(x+y)^3 +4}{(x+y)^2}$$ but I have no idea how to evaluate it (if it is a good way). Intuition says that the infimum is $0$ and the supremum $\infty$, but how to prove it more formally? Thanks in advance!
For infimum, recall AM-GM. Given $x,y,z > 0$, we have that $$\dfrac{x+y+z}3 \geq \sqrt[3]{xyz}$$ and equality holds when $x=y=z$. Since $xyz = 1$, we get that $x+y+z \geq 3$. Hence, the infimum is $3$.
For supremum, consider $x = n, y = \dfrac1n$ and $z = 1$, where $n$ can be arbitrarily large. Note that $xyz = 1$. $x+y+z = n + \dfrac1n + 1$. Hence, the supremum is $\infty$.
It really is a good idea in this kind of question to try some things out, as this can give you ideas. Try all the numbers equal. Try some other "small" numbers - does the value of $x+y+z$ increase or decrease when the numbers are different. What happens if you try to make one of the numbers very small? What happens if you make one of them very large (how big can you get)? That is very quick to do, and gives some clues as to which way the inequalities go (easy to get them the wrong way round). – Mark Bennet Nov 2 '12 at 21:48
How do you know that the infimum is indeed 3? By substituting $\frac{1}{n}$ and $n$ for $x, y$ we can go as close to 3 as we want to, but does $x*y*z=1$ directly imply that the infimum is 3? Is it possible to prove it with definitions, for example with epsilon? – fdhd Nov 2 '12 at 21:51
@user46034 The AM-GM gives us $x+y+z \geq 3$. Equality holds at $x=y=z = 1$. Hence $3$ is indeed the infimum. And I set $x = n, y = \dfrac1n$ and $z = 1$ to argue that the supremum is infinity by letting $n \to \infty$. – user17762 Nov 2 '12 at 21:53
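A quick numeric poke at the set (illustrative only): the symmetric point gives the minimum 3, and the family x = n, y = 1/n, z = 1 runs off to infinity.

```python
for n in (1, 2, 10, 1000):
    x, y, z = n, 1 / n, 1          # xyz = 1 by construction
    print(n, x + y + z)            # 3.0, 3.5, 11.1, 1001.001
```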
http://cpr-astrophco.blogspot.com/2013/08/13080884-k-thorat-et-al.html | ## Environments of Extended Radio Sources in the ATLBS Survey
K. Thorat, L. Saripalli, R. Subrahmanyan
We present a study of the environments of extended radio sources in the Australia Telescope Low Brightness Survey (ATLBS). The radio sources were selected from the Extended Source Sample (ATLBS-ESS), which is a well defined sample containing the most extended of radio sources in the ATLBS sky survey regions. The environments were analyzed using 4-m CTIO Blanco telescope observations carried out for ATLBS fields in the SDSS ${\rm r}^{\prime}$ band. We have estimated the properties of the environments using smoothed density maps derived from galaxy catalogs constructed using these optical imaging data. The angular distribution of galaxy density relative to the axes of the radio sources has been quantified by defining anisotropy parameters that are estimated using a new method presented here. Examining the anisotropy parameters for a sub-sample of extended double radio sources that includes all sources with pronounced asymmetry in lobe extents, we find good evidence for environmental anisotropy being the dominant cause for lobe asymmetry in that higher galaxy density occurs almost always on the side of the shorter lobe, and this validates the usefulness of the method proposed and adopted here. The environmental anisotropy parameters have been used to examine and compare the environments of FRI and FRII radio sources in two redshift regimes ($z<0.5$ and $z>0.5$). Wide-angle tail sources and Head-tail sources lie in the most overdense environments. The Head-tail source environments (for the HT sources in our sample) display dipolar anisotropy in that higher galaxy density appears to lie in the direction of the tails. Excluding the Head-tail and Wide-angle tail sources, subsamples of FRI and FRII sources from the ATLBS survey appear to lie in similar moderately overdense environments, with no evidence for redshift evolution in the regimes studied herein.
View original: http://arxiv.org/abs/1308.0884
https://mattymatica.com/2016/07/12/newtons-switcheroo/ | # Newton’s Switcheroo
For in much wisdom is much grief, And he who increases knowledge increases sorrow.
(Ecclesiastes 1:18) NKJV
How massive is the sun? That depends on what assumption you make, regarding frame of reference, before you do any math.
• If you assume heliocentricity first, then the value for the mass of the sun is:
• 1.9E+30 kg
• If you assume Geocentrosphericity then you get a different number:
• 1.88E+19 kg
• The difference is a factor of 9.87E-12 (Matty’s Constant)
We all have the same evidence. Our choice of paradigm determines what we think it’s evidence of.
Matty’s Razor
The calculation used to derive a mass for the sun incorporates Kepler’s third law of planetary motion and Newton’s law of universal gravitation as a mathematical expression of the following statement:
How massive is the Sun..
.. if it’s in orbit (of known radius and duration)
.. of the Earth (of known mass)*?
– Matty
Change the a priori assumption: the sun orbits the Earth (known mass) at a known radius and duration and the calculated mass for the sun that is now a fraction of what you get by assuming heliocentricity.
Faith is believing in something that you can’t see, because of evidence.
– Faith, definition
#### Newton’s Law of Universal Gravitation
• F force between masses
• G gravitational constant (6.674 × 10^-11 N·(m/kg)^2)
• m1 first mass
• m2 second mass
• r distance between the centers of the masses
The proportional difference between the mass of the Earth and the mass of the sun is simply reversed. Naturally this affects the masses of all of the other planets in our system, but it doesn’t change any of the observed orbital mechanics. The math is the same, the laws are the same, the view from Earth is indistinguishable. The mass of the sun is only determined by your choice of paradigm, not by any facts. What you believe is a choice.
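For reference, a minimal sketch of the conventional heliocentric calculation that yields the 1.9E+30 kg figure quoted above, combining Kepler's third law with Newton's law as M = 4π²a³/(GT²), where a and T are Earth's orbital radius and period:

```python
from math import pi

G = 6.674e-11          # m^3 kg^-1 s^-2
a = 1.496e11           # Earth-Sun distance, m
T = 365.25 * 86400     # orbital period, s

M_sun = 4 * pi**2 * a**3 / (G * T**2)
print(f"{M_sun:.3g} kg")   # about 1.99e+30 kg
```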
https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-2-multiplying-and-dividing-fractions-review-exercises-page-190/38 | ## Basic College Mathematics (10th Edition)
$2\frac{1}{2}$
$\frac{\frac{15}{18}}{\frac{10}{30}}$ To divide two fractions, multiply the first fraction by the reciprocal of the divisor (the second fraction). $\frac{15}{18}\times\frac{30}{10}$ To multiply, multiply the numerators and multiply the denominators. = $\frac{15\times30}{18\times10}$ = $\frac{5}{2}$ = $\frac{4+1}{2}$ = $2\frac{1}{2}$
https://socratic.org/questions/circle-a-has-a-center-at-2-7-and-an-area-of-81-pi-circle-b-has-a-center-at-4-3-a | Geometry
# Circle A has a center at (2, 7) and an area of 81 pi. Circle B has a center at (4, 3) and an area of 36 pi. Do the circles overlap? If not, what is the shortest distance between them?
Apr 24, 2018
$\textcolor{blue}{\text{Circles intersect}}$
#### Explanation:
First we find the radii of A and B.
Area of a circle is $\pi {r}^{2}$
Circle A:
$\pi {r}^{2} = 81 \pi \implies {r}^{2} = 81 \implies r = 9$
Circle B:
$\pi {r}^{2} = 36 \pi \implies {r}^{2} = 36 \implies r = 6$
Now that we know the radii, we can test whether the circles intersect, touch in one place only, or do not touch.
If the sum of the radii is equal to the distance between the centres, then the circles touch in one place only.
If the sum of the radii is less than the distance between centres, then the circles do not touch
If the sum of the radii is greater than the distance between centres then the circles intersect.
We find the distance between centres using the distance formula.
$d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$
$d = \sqrt{{\left(2 - 4\right)}^{2} + {\left(7 - 3\right)}^{2}} = \sqrt{20} = 2 \sqrt{5}$
$9 + 6 = 15$
$15 > 2 \sqrt{5}$, so the sum of the radii exceeds the distance between the centres and the circles intersect.
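A compact check of the comparison (also guarding against one circle containing the other, a case the rules above leave out; math.dist needs Python 3.8+):

```python
from math import dist

rA, rB = 9, 6
d = dist((2, 7), (4, 3))            # 2*sqrt(5), about 4.47

print(abs(rA - rB) < d < rA + rB)   # True -> the circles intersect
```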
https://www.studyadda.com/notes/6th-class/science/changes-around-us/changes-arorund-us/16131 | # 6th Class Science: Changes Around Us
We observe changes around us all the time. Changes may occur in shape, size, mass, density, colour, position, temperature, structure or in composition of a substance. So we can define a change as:
‘A transformation in one or more physical or chemical properties of a substance is called a change.’
Types of Changes
Types of changes based on whether the change can be reversed to bring back the original substance:
Reversible Change
A change which can be reversed to form the 'original substance' is called reversible change.
For example, melting of ice, freezing of water, dissolution of salt in water, increase in temperature of a metal rod, etc.
Irreversible Changes
A change which cannot be reversed to form the 'original substance' is called an irreversible change.
For example, burning of wood, ripening of fruit, turning milk sour, etc.
Types of changes on the basis of either a new substance is formed or not:
Physical Change
A change in which the molecules of a substance do not undergo any change and no new substance is formed is called a physical change. For example, melting of ice, freezing of water, evaporation of water, dissolution of salt in water.
Chemical Change
A change in which the molecules of a substance undergo change or new substances are formed is called a chemical change. For example, burning of paper, rusting of iron, spoilage of food, etc.
Types of changes on the basis of heat absorbed or evolved:
Exothermic
The change in which heat is released. For example, burning of wood.
Endothermic
The change in which heat is absorbed. For example, melting of ice. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8338438868522644, "perplexity": 2832.2749097082333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371830894.88/warc/CC-MAIN-20200409055849-20200409090349-00529.warc.gz"}
http://www.physicsforums.com/showthread.php?t=179383 | # Does this make sense?
by pivoxa15
Tags: sense
P: 2,268 A,B are sets A + B=AuB + AnB Does it make sense to add sets? I know union and intersections are possible.
HW Helper
P: 3,680
Quote by pivoxa15 A,B are sets A + B=AuB + AnB Does it make sense to add sets? I know union and intersections are possible.
What do you mean by that? Additive number theory has addition of sets like $X+Y=\{x+y:x\in X,y\in Y \}$ (so that {1, 2, 3} + {10, 40} = {11, 12, 13, 41, 42, 43}). Is that what you want?
P: 2,268 No. I am talking about sets in measure theory.
P: 5,891
## Does this make sense?
Quote by pivoxa15 A,B are sets A + B=AuB + AnB Does it make sense to add sets? I know union and intersections are possible.
You can define "+" to mean anything you want. What is the point of your definition?
PF Gold
P: 2,330
Quote by pivoxa15 A,B are sets A + B=AuB + AnB Does it make sense to add sets? I know union and intersections are possible.
I take that to mean that you want an element that is in both A and B to show up twice in the sum of A and B? The sum then could not be a set since there are no duplicates in sets. What kind of object do you want the sum to be, a bag, a.k.a. multiset?
PF Gold P: 1,059 honestrosewater: I take that to mean that you want an element that is in both A and B to show up twice in the sum of A and B? I take it that he wants to say: A +B = A union B-A intersection B.
PF Gold
P: 2,330
Quote by robert Ihnot I take it that he wants to say: A +B = A union B-A intersection B.
Oh. So symmetric difference (more) then?
PF Gold P: 1,059 If we take the sets {1,2,3} + {2,3,4} = {1,2,3,4}= A U B, which for n=1 to 4 is the whole set. Thus $$A\cup B+A\cap B =A\cup B$$ (I don't think measure theory has any effect on that.) However if we thought of these as collections, then we would have: {1,2,3}+{2,3,4} = {1,2,2,3,3,4} (From Wikipedia: When two or more collections are combined into a single collection, the number of objects in the single collection is the sum of the number of objects in the original collections. ) This is easier to follow if we were thinking of collections of furniture like lamps, rugs, etc. So I believe that you are correct about the symmetric difference of sets.
PF Gold
P: 2,330
Quote by robert Ihnot If we take the sets {1,2,3} + {2,3,4} = {1,2,3,4}= A U B, which for n=1 to 4 is the whole set. Thus $$A\cup B+A\cap B =A\cup B$$ (I don't think measure theory has any effect on that.)
Is this what you meant previously? You seem to have just defined this addition to be union. The symmetric difference is "the set of elements belonging to one but not both of two given sets", i.e., "A union B-A intersection B", which I assume you meant as "(A union B) - (A intersection B)", with "-" denoting set difference (A - B = {x | x in A and x not in B}).
The original definition, "A + B=AuB + AnB" appears to be circular since the symbol that it is defining is used in the definition, so who knows. Normally, when you add two things, the result includes, in a loose sense, all of what you started with. For sets, this would seem to simply be union, but I assume the OP had something more than union in mind. You at least don't usually lose, or subtract, things when you add, so I assume the OP was thinking that the sum of two sets should include everything that was in those sets in some way that union doesn't, i.e., by including any duplicates.
Sci Advisor HW Helper P: 3,680 Since you say measure theoretic, perhaps you mean measure(a) + measure(b) = measure(a union b) + measure (a intersect b)? (for finitely additive measures, of course!)
PF Gold P: 1,059 Yes, you are right. Measure is a mathematical concept, so we can use the plus or minus sign. So that in general: $$A\cup B = A+B-A\cap B$$
Math Emeritus Sci Advisor Thanks PF Gold P: 38,705 But that equation doesn't say anything about measure! Do you mean having first defined A + B as $$A\cup B + A\cap B$$? Of course, as has been pointed out, that is just equal to $$A\cup B$$. It WOULD make sense if you would do what people have been asking you to do and write the "measure": $$measure(A\cup B) = measure(A) + measure(B) - measure(A\cap B)$$
P: 1,572
Quote by CRGreathouse Since you say measure theoretic, perhaps you mean measure(a) + measure(b) = measure(a union b) + measure (a intersect b)? (for finitely additive measures, of course!)
this form might be better because it works even if measure(a intersect b) is infinite.
Sci Advisor HW Helper P: 9,398 In what sense is that better? It is clearly wrong.
P: 1,572
Quote by matt grime In what sense is that better? It is clearly wrong. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8449959754943848, "perplexity": 891.8384597106891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654302/warc/CC-MAIN-20140305060734-00023-ip-10-183-142-35.ec2.internal.warc.gz"} |
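The two candidate meanings of "+" debated in this thread can be contrasted concretely. A short illustrative sketch in Python (`set` and `collections.Counter` are standard library; the example sets are arbitrary):

```python
from collections import Counter

A = {1, 2, 3}
B = {2, 3, 4}

# Plain set union loses multiplicity: an element of both A and B appears once.
print(A | B)                    # {1, 2, 3, 4}

# Multiset ("bag") addition keeps duplicates, as honestrosewater suggests.
print(Counter(A) + Counter(B))  # 2 and 3 each get multiplicity 2

# The finitely additive identity behind the thread, for counting measure:
# |A| + |B| == |A union B| + |A intersect B|
assert len(A) + len(B) == len(A | B) + len(A & B)
```

Union discards multiplicity, a `Counter` keeps it, and the final line is the identity CRGreathouse states, specialized to counting measure on finite sets.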
https://www.ti.inf.ethz.ch/ew/mise/mittagssem.html?action=show&what=abstract&id=311070c3cfc5bef23eaaef14d8fc9a7346d6c595 | ## Theory of Combinatorial Algorithms
Prof. Emo Welzl and Prof. Bernd Gärtner
# Mittagsseminar (in cooperation with M. Ghaffari, A. Steger and B. Sudakov)
Mittagsseminar Talk Information
Date and Time: Thursday, November 26, 2015, 12:15 pm
Duration: 30 minutes
Location: CAB G51
Speaker: Dániel Korándi
## Saturation in random graphs
A graph $H$ is $K_s$-saturated if it is a maximal $K_s$-free graph, i.e., $H$ contains no clique on $s$ vertices, but the addition of any missing edge creates one. The minimum number of edges in a $K_s$-saturated graph was determined over 50 years ago by Zykov and independently by Erdős, Hajnal and Moon. In this talk, we consider the random analog of this problem: minimizing the number of edges in a maximal $K_s$-free subgraph of the Erdős-Rényi random graph $G(n,p)$. We give asymptotically tight estimates on this minimum, and also provide exact bounds for the related notion of weak saturation in random graphs.
Joint work with Benny Sudakov. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375290036201477, "perplexity": 1325.3832320950582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578733077.68/warc/CC-MAIN-20190425193912-20190425215912-00364.warc.gz"}
https://math.stackexchange.com/questions/2570316/intersection-of-two-lie-subgroup-is-lie-subgroup | # Intersection of two Lie subgroups is a Lie subgroup?
This question was already asked here, but I can't find a satisfactory answer.
The question (in the title) arises from the following definition (I'm using Lee's Smooth Manifolds, p. 156): If $G$ is a Lie group and $S \subseteq G$, the $\textbf{subgroup generated by } S$ is the smallest subgroup containing $S$ (i.e., the intersection of all subgroups containing $S$).
The definition above implicitly assumes that the intersection of any two Lie subgroups $S_1,S_2 \subset G$ is again a Lie subgroup. I find it difficult to prove this. I can see that the intersection has the group property, but I have no idea how to show that $S_1 \cap S_2$ is an immersed submanifold of $G$.
The answer by @Moishe Cohen in the given link above uses an argument involving Lie algebras. But since the definition in Lee's book is given before he defines Lie algebras, I assume that this problem can be solved without them (probably).
Can anyone help me with this? Thank you.
• Lee is not claiming that the subgroup generated by $S$ is a Lie group. This is irrelevant to his discussion; we just need to know that it is a group. – Spenser Dec 17 '17 at 16:43
There are two issues here:
First, the definition of the "subgroup generated by a set $S$" has nothing to do with Lie groups -- it's purely a group-theoretic concept. So for a general set $S$, there's no assumption, implicit or explicit, that the subgroup generated by $S$ is a Lie subgroup. (And it might not be -- for example, there are dense uncountable subgroups of $\mathbb R$, which cannot be given any topology or smooth structure making them into immersed Lie subgroups, and we can take $S$ to be such a subgroup.)
Second, independently of that, it is true that the intersection of two Lie subgroups is again a Lie subgroup.
Theorem. Suppose $G$ is a Lie group, and $H_1,H_2$ are Lie subgroups of $G$. Let $H$ be the subgroup $H_1\cap H_2$. Then $H$ has a unique topology and smooth structure making it into a Lie subgroup of $G$.
EDIT: In Moishe Cohen's answer to the question you cited, he originally just stated this as a simple exercise, but in reply to my laborious argument below, he's now added a simple proof. The idea is to view the two Lie subgroups $H_1$ and $H_2$ as injective Lie homomorphisms $f_i\colon H_i\to G$, and define $H$ as the subgroup $\{(x_1,x_2)\in H_1\times H_2: f_1(x_1) = f_2(x_2)\}$ of $H_1 \times H_2$. The equivariant rank theorem shows that $H$ is an embedded Lie subgroup of $H_1\times H_2$, and for either $i=1$ or $i=2$, the following composition gives an injective Lie homomorphism of $H$ into $G$: \begin{equation*} H\hookrightarrow H_1\times H_2 \overset{p_i} {\to} H_i \overset{f_i}{\to} G. \end{equation*}
I'll leave my much more laborious argument here, in case anyone's interested.
My proof:
I don't know of any proof that doesn't rely on some nontrivial facts about Lie algebras, exponential maps, and foliations. Here's a quick sketch of a proof. Can't guarantee that I haven't missed some details, but this general idea should work. (Note that I'm using the definitions from my Intro to Smooth Manifolds book -- in particular, smooth manifolds are second-countable and therefore have only countably many components, and a Lie subgroup is a subgroup endowed with a topology and smooth structure making it into a Lie group and an immersed, not necessarily embedded, submanifold.)
Proof: Let's denote the Lie algebras of $G$, $H_1$, and $H_2$ by $\mathfrak g$, $\mathfrak h_1$, and $\mathfrak h_2$, respectively. Since $\mathfrak h_1$ and $\mathfrak h_2$ are canonically identified with Lie subalgebras of $\mathfrak g,$ the set $\mathfrak h = \mathfrak h_1\cap \mathfrak h_2$ is a Lie subalgebra of $\mathfrak g$ too. Thus there is a unique connected Lie subgroup $H_0$ of $G$ whose Lie algebra is $\mathfrak h$. This means $H_0$ has a topology and smooth structure making it into an immersed smooth submanifold of $G$, and the group operations on $H_0$ are smooth with respect to this structure. If $V\subset\mathfrak g$ is a neighborhood of $0$ on which the exponential map of $G$ is a diffeomorphism, $H_0$ is generated (in the group-theoretic sense) by $\exp(V\cap\mathfrak h)$, where $\exp$ denotes the exponential map of $G$. Since $V\cap \mathfrak h\subset \mathfrak h_1\cap \mathfrak h_2$, it follows that $H_0\subset H_1\cap H_2$.
Since $H_0$ is a subgroup (in the algebraic sense) of $H$, it follows that $H$ is the disjoint union of the left cosets of $H_0$ in $H$. We need to verify that there are only countably many such cosets. I think you can prove this based on the fact that $H_1$ and $H_2$ are integral manifolds of left-invariant foliations of $G$; if we choose a flat chart for $H_1$ on some open subset $W\subseteq G$, then $H_1\cap W$ is a union of countably many disjoint slices; then we can take a connected neighborhood $Y$ of the identity in $H_2$ that is embedded in $W$, and $Y\cap H_1$ will consist of countably many connected embedded submanifolds. I haven't worked out the details.
For each $h\in H$, the map $L_h$ (left multiplication by $h$) is a diffeomorphism of $G$ that takes $H_0$ bijectively onto $hH_0$. Thus we can define a smooth manifold structure on $H$ by declaring each such bijection $H_0 \to hH_0$ to be a diffeomorphism, and viewing $H$ as the topological disjoint union of these cosets. (That is, we declare each coset to be open and closed in $H$.)
We already verified that the group operations are smooth on $H_0$. Given any two points $h_1,h_2\in H$, we can choose connected neighborhoods $U_1$ of $h_1$ and $U_2$ of $h_2$ in $H$, and then the multiplication map $m|_{U_1\times U_2}\colon U_1\times U_2\to H$ can be viewed as the following composition: \begin{equation*} m|_{U_1\times U_2} = L_{h_1}\circ R_{h_2} \circ (m|_{H_0\times H_0}) \circ ( L_{h_1^{-1}} \times R_{h_2^{-1}}). \end{equation*} It follows that the multiplication on $H$ is smooth. (Here you have to use the fact that $H$ is an integral manifold of the left-invariant foliation determined by $\mathfrak h$, and therefore it's "weakly embedded," meaning that a smooth map into $G$ that takes its values in $H$ is also smooth into $H$.) A similar argument applies to inversion. The inclusion map $i_H\colon H\hookrightarrow G$ is a smooth immersion, because on each component it can be written as a composition of the form $L_h \circ i_{H_0}\circ L_{h^{-1}}$.
Finally, uniqueness of the topology and smooth structure are left as an exercise. $\square$
• Thank you Prof. Lee. Espescially for the sketch of the proof. – kelvinn aja Dec 18 '17 at 1:42
• Dear Jack: I maintain that there is nothing difficult about this all what is needed is the constant rank theorem (and familiarity with the notion of a fiber product). Take a look at the edit of my answer. – Moishe Kohan Dec 19 '17 at 15:17
• @MoisheCohen: That's a very nice argument. Much simpler. I'll edit my answer to refer to yours. – Jack Lee Dec 19 '17 at 20:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9788221716880798, "perplexity": 109.53829336798681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256040.41/warc/CC-MAIN-20190520142005-20190520164005-00013.warc.gz"} |
http://www.ck12.org/trigonometry/Law-of-Cosines/lesson/Law-of-Cosines/r7/ | # Law of Cosines
While helping your mom bake one day, the two of you get an unusual idea. You want to cut the cake into pieces, and then frost over the surface of each piece. You start by cutting out a slice of the cake, but you don't quite cut the slice correctly. It ends up being an oblique triangle, with a 5 inch side, a 6 inch side, and an angle of $70^\circ$ between the sides you measured. Can you help your mom determine the length of the third side, so she can figure out how much frosting to put out?
By the end of this Concept, you'll know how to find the length of the third side of the triangle in cases like this by using the Law of Cosines.
### Guidance
The Law of Cosines is a fantastic extension of the Pythagorean Theorem to oblique triangles. In this Concept, we show some interesting ways to utilize this formula to analyze real world situations.
#### Example A
In a game of pool, a player must put the eight ball into the bottom left pocket of the table. Currently, the eight ball is 6.8 feet away from the bottom left pocket. However, due to the position of the cue ball, she must bank the shot off of the right side bumper. If the eight ball is 2.1 feet away from the spot on the bumper she needs to hit and forms a $168^\circ$ angle with the pocket and the spot on the bumper, at what angle does the ball need to leave the bumper?
Note: This is actually a trick shot performed by spinning the eight ball, and the eight ball will not actually travel in straight-line trajectories. However, to simplify the problem, assume that it travels in straight lines.
Solution: In the scenario above, we have the SAS case, which means that we need to use the Law of Cosines to begin solving this problem. The Law of Cosines will allow us to find the distance from the spot on the bumper to the pocket $(y)$ . Once we know $y$ , we can use the Law of Sines to find the angle $(X)$ .
$y^2 = 6.8^2 + 2.1^2 - 2(6.8)(2.1) \cos 168^\circ = 78.59 \implies y = 8.86\ \text{feet}$
The distance from the spot on the bumper to the pocket is 8.86 feet. We can now use this distance and the Law of Sines to find angle $X$ . Since we are finding an angle, we are faced with the SSA case, which means we could have no solution, one solution, or two solutions. However, since we know all three sides this problem will yield only one solution.
$\frac{\sin 168^\circ}{8.86} = \frac{\sin X}{6.8} \implies \sin X = \frac{6.8 \sin 168^\circ}{8.86} \approx 0.1596 \implies \angle{X} \approx 9.2^\circ$
In the previous example, we looked at how we can use the Law of Sines and the Law of Cosines together to solve a problem involving the SSA case. In this section, we will look at situations where we can use not only the Law of Sines and the Law of Cosines, but also the Pythagorean Theorem and trigonometric ratios. We will also look at another real-world application involving the SSA case.
#### Example B
Three scientists are out setting up equipment to gather data on a local mountain. Person 1 is 131.5 yards away from Person 2, who is 67.8 yards away from Person 3. Person 1 is 72.6 yards away from the mountain. The mountain forms a $103^\circ$ angle with Person 1 and Person 3, while Person 2 forms a $92.7^\circ$ angle with Person 1 and Person 3. Find the angle formed by Person 3 with Person 1 and the mountain.
Solution: In the triangle formed by the three people, we know two sides and the included angle (SAS). We can use the Law of Cosines to find the remaining side of this triangle, which we will call $x$ . Once we know $x$ , we will two sides and the non-included angle (SSA) in the triangle formed by Person 1, Person 2, and the mountain. We will then be able to use the Law of Sines to calculate the angle formed by Person 3 with Person 1 and the mountain, which we will refer to as $Y$ .
To find $x$ :
$x^2 = 131.5^2 + 67.8^2 - 2(131.5)(67.8) \cos 92.7^\circ = 22729.06 \implies x = 150.8\ \text{yds}$
Now that we know $x = 150.8$ , we can use the Law of Sines to find $Y$ . Since this is the SSA case, we need to check to see if we will have no solution, one solution, or two solutions. Since $150.8 > 72.6$ , we know that we will have only one solution to this problem.
$\frac{\sin 103^\circ}{150.8} = \frac{\sin Y}{72.6} \implies \sin Y = \frac{72.6 \sin 103^\circ}{150.8} \approx 0.4691 \implies \angle{Y} \approx 28.0^\circ$
#### Example C
Katie is constructing a kite shaped like a triangle.
She knows that the lengths of the sides are a = 13 inches, b = 20 inches, and c = 19 inches. What is the measure of the angle between sides "a" and "b"?
Solution: Since she knows the length of each of the sides of the triangle, she can use the Law of Cosines to find the angle desired:
$c^2 = a^2 + b^2 - 2ab\cos C \implies 19^2 = 13^2 + 20^2 - 2(13)(20)\cos C \implies 361 = 169 + 400 - 520\cos C \implies -208 = -520\cos C \implies \cos C = 0.4 \implies C \approx 66.42^\circ$
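When all three sides are known, the computation in Example C can be scripted directly from the rearranged Law of Cosines. A minimal sketch in Python (the function name is illustrative):

```python
import math

def angle_from_sides(a, b, c):
    """Return the angle (in degrees) opposite side c, given sides a, b, c."""
    cos_c = (a**2 + b**2 - c**2) / (2 * a * b)
    return math.degrees(math.acos(cos_c))

# Katie's kite: the angle between sides a = 13 and b = 20 is opposite c = 19.
print(round(angle_from_sides(13, 20, 19), 2))   # 66.42
```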
### Vocabulary
Law of Cosines: The law of cosines is a rule involving the sides of an oblique triangle stating that the square of a side of the triangle is equal to the sum of the squares of the other two sides minus two times the product of the other two sides and the cosine of the angle opposite the side being computed.
### Guided Practice
1. You are cutting a triangle out for school that looks like this:
Find side $c$ (which is the side opposite the $14^\circ$ angle) and $\angle{B}$ (which is the angle opposite the side that has a length of 14).
2. While hiking one day you walk for 2 miles in one direction. You then turn $110^\circ$ to the left and walk for 3 more miles. Your path looks like this:
When you turn to the left again to complete the triangle that is your hiking path for the day, how far will you have to walk to complete the third side? What angle should you turn before you start walking back home?
3. A support at a construction site is being used to hold up a board so that it makes a triangle, like this:
If the angle between the support and the ground is $17^\circ$ , the length of the support is 2.5 meters, and the distance between where the board touches the ground and the bottom of the support is 3 meters, how far along the board is the support touching? What is the angle between the board and the ground?
Solutions:
1. You know that two of the sides have lengths of 11 and 14 inches, and that the angle between them is $14^\circ$ . You can use this to find the length of the third side:
$c^2 = a^2 + b^2 - 2ab\cos \theta = 11^2 + 14^2 - 2(11)(14)(0.970) = 317 - 298.8 = 18.2$, so $c = \sqrt{18.2} \approx 4.27$
And with this you can use the Law of Sines to solve for the unknown angle:
$\frac{\sin 14^\circ}{4.27} = \frac{\sin B}{14} \implies \sin B = \frac{14\sin 14^\circ}{4.27} \approx 0.793$, so $B \approx 52.5^\circ$ or $127.5^\circ$. Since the 14-inch side is the longest side, $\angle B$ must be the largest angle, so $B \approx 127.5^\circ$.
2. Since you know the lengths of two of the legs of the triangle, along with the angle between them, you can use the Law of Cosines to find out how far you'll have to walk along the third leg:
$c^2 = a^2 + b^2 - 2ab\cos 70^\circ = 2^2 + 3^2 - 2(2)(3)(0.342) = 13 - 4.10 = 8.90$, so $c = \sqrt{8.90} \approx 2.98$ miles
Now you have enough information to solve for the interior angle of the triangle that is supplementary to the angle you need to turn:
$\frac{\sin A}{a} = \frac{\sin B}{b} \implies \frac{\sin 70^\circ}{2.98} = \frac{\sin B}{2} \implies \sin B = \frac{2 \sin 70^\circ}{2.98} \approx 0.631 \implies B \approx 39.1^\circ$
The angle $39.1^\circ$ is the interior angle of the triangle at the point where you turn for home, so you should turn $180^\circ - 39.1^\circ = 140.9^\circ$ to the left before starting home.
3. You should use the Law of Cosines first to solve for the distance from the ground to where the support meets the board:
$c^2 = a^2 + b^2 - 2ab\cos 17^\circ = 2.5^2 + 3^2 - 2(2.5)(3)(0.956) = 15.25 - 14.34 = 0.91$, so $c \approx 0.95$ m
And now you can use the Law of Sines:
$\frac{\sin A}{a} = \frac{\sin B}{b} \implies \frac{\sin 17^\circ}{0.95} = \frac{\sin B}{2.5} \implies \sin B = \frac{2.5 \sin 17^\circ}{0.95} \approx 0.769 \implies B \approx 50.3^\circ$ (the obtuse possibility $129.7^\circ$ is ruled out because the 3 m ground side, not the 2.5 m support, is the longest side of the triangle)
### Concept Problem Solution
You can use the Law of Cosines to help your mom find out the length of the third side on the piece of cake:
$c^2 = a^2 + b^2 - 2ab\cos C = 5^2 + 6^2 - 2(5)(6) \cos 70^\circ = 25 + 36 - 60(0.342) = 40.48$, so $c \approx 6.36$
The piece of cake is just under 6.4 inches long.
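The same kind of check works for any SAS triangle. A small Python sketch mirroring the formula used throughout this Concept (the function name is illustrative):

```python
import math

def third_side(a, b, gamma_deg):
    """Law of Cosines: side opposite the included angle gamma (in degrees)."""
    g = math.radians(gamma_deg)
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(g))

print(round(third_side(5, 6, 70), 2))       # 6.36  (the cake slice)
print(round(third_side(6.8, 2.1, 168), 2))  # 8.86  (Example A's bank shot)
```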
### Practice
In $\triangle ABC$ , a=12, b=15, and c=20.
1. Find $m\angle A$ .
2. Find $m\angle B$ .
3. Find $m\angle C$ .
In $\triangle DEF$ , d=25, e=13, and f=16.
1. Find $m\angle D$ .
2. Find $m\angle E$ .
3. Find $m\angle F$ .
In $\triangle KBP$ , k=19, $\angle B=61^\circ$ , and p=12.
1. Find the length of b.
2. Find $m\angle K$ .
3. Find $m\angle P$ .
4. While hiking one day you walk for 5 miles due east, then turn to the left and walk 3 more miles $30^\circ$ west of north. At this point you want to return home. How far are you from home if you were to walk in a straight line?
5. A parallelogram has sides of 20 and 31 ft, and an angle of $46^\circ$ . Find the length of the longer diagonal of the parallelogram.
6. Dirk wants to find the length of a long building from one side (point A) to the other (point B). He stands outside of the building (at point C), where he is 500 ft from point A and 220 ft from point B. The angle at C is $94^\circ$ . Find the length of the building.
Determine whether or not each triangle is possible.
1. a=12, b=15, c=10
2. a=1, b=5, c=4
3. $\angle A=32^\circ$ , a=8, b=10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 51, "texerror": 0, "math_score": 0.8421725034713745, "perplexity": 237.89618946349617}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122276676.89/warc/CC-MAIN-20150124175756-00132-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://math.duke.edu/people/ingrid-daubechies?qt-scholars_publications_mla=5&qt-faculty_profile_tabs=0&page=11 | # Ingrid Daubechies
• James B. Duke Distinguished Professor of Mathematics and Electrical and Computer Engineering
• Professor in the Department of Mathematics
• Professor in the Department of Electrical and Computer Engineering (Joint)
### Research Areas and Keywords
##### Analysis
wavelets, inverse problems
shape space
inverse problems
shape space
##### Mathematical Physics
time-frequency analysis
##### Signals, Images & Data
wavelets, time-frequency analysis, art conservation
##### Education & Training
• Ph.D., Vrije Universiteit Brussel (Belgium) 1980
Daubechies, I., and Y. Huang. “How does truncation of the mask affect a refinable function?” Constructive Approximation, vol. 11, no. 3, Sept. 1995, pp. 365–80. Scopus, doi:10.1007/BF01208560. Full Text
Friedlander, S., et al. “A celebration of women in mathematics.” Notices of the American Mathematical Society, vol. 42, no. 1, Jan. 1995, pp. 32–42.
Daubechies, I., et al. “Gabor Time-Frequency Lattices and the Wexler-Raz Identity.” Journal of Fourier Analysis and Applications, vol. 1, no. 4, Jan. 1994, pp. 437–78. Scopus, doi:10.1007/s00041-001-4018-3. Full Text
Daubechies, I. “Two Recent Results on Wavelets: Wavelet Bases for the Interval, and Biorthogonal Wavelets Diagonalizing the Derivative Operator.” Wavelet Analysis and Its Applications, vol. 3, no. C, Jan. 1994, pp. 237–57. Scopus, doi:10.1016/B978-0-12-632370-2.50013-1. Full Text
Daubechies, I., and Y. Huang. “A decay theorem for refinable functions.” Applied Mathematics Letters, vol. 7, no. 4, Jan. 1994, pp. 1–4. Scopus, doi:10.1016/0893-9659(94)90001-9. Full Text
Cohen, A., et al. “Wavelets on the interval and fast wavelet transforms.” Applied and Computational Harmonic Analysis, vol. 1, no. 1, Jan. 1993, pp. 54–81. Scopus, doi:10.1006/acha.1993.1005. Full Text
Daubechies, I. “Two Theorems on Lattice Expansions.” IEEE Transactions on Information Theory, vol. 39, no. 1, Jan. 1993, pp. 3–6. Scopus, doi:10.1109/18.179336. Full Text
Daubechies, I., and J. C. Lagarias. “Sets of matrices all infinite products of which converge.” Linear Algebra and Its Applications, vol. 161, no. C, Jan. 1992, pp. 227–63. Scopus, doi:10.1016/0024-3795(92)90012-Y. Full Text
Cohen, A., et al. “Biorthogonal bases of compactly supported wavelets.” Communications on Pure and Applied Mathematics, vol. 45, no. 5, Jan. 1992, pp. 485–560. Scopus, doi:10.1002/cpa.3160450502. Full Text
Antonini, M., et al. “Image coding using wavelet transform.” IEEE Transactions on Image Processing: A Publication of the IEEE Signal Processing Society, vol. 1, no. 2, Jan. 1992, pp. 205–20. Epmc, doi:10.1109/83.136597. Full Text
https://chemistry.stackexchange.com/tags/ideal-gas/hot | # Tag Info
13
It's not a coincidence at all! If you do an online search for "derivation of osmotic pressure", you'll see how $R$ enters into the derivation. Indeed, that's one of the beauties of the van 't Hoff equation for osmotic pressure – it reveals that (under the modest simplifying assumptions of the van 't Hoff equation derivation) the osmotic pressure created ...
11
The answer to your question is yes and no. You are correct in supposing that as an ideal gas expands the entropy will increase as it has more space to occupy and so the number of ways the molecules can be placed in the total space has increased. (Try to not think of entropy in terms of randomness but in the number of ways molecules can be positioned in space,...
10
Here's your confusion: You need to consider two different things: The momentum transfer per particle per collision. There, since we assume an instantaneous collision, it doesn't make sense to try to figure out force from acceleration. [I suppose you could do this using limits, and maybe there are applications in which that does make sense, but adding that ...
5
In ideal gases there are no intermolecular forces, and therefore no potential energy. Thus, internal energy is equal to the total kinetic energy (KE) of the system. Consider $N$ monoatomic particles in a cubical box of side $\ell$ (assumption: ideal gases consist of monoatomic point particles). The amount of ideal gas in the box is $\frac{N}{N_A} = n \ \pu{mol}$ where $N_A$ ...
4
His name was Amedeo Avogadro, not Amidio nor Amadeo nor Avegadro nor Avagadro. The full name was Lorenzo Romano Amedeo Carlo Avogadro, Count of Quaregna and Cerreto. Avogadro's law states that equal volumes of all gases, at the same temperature and pressure, have the same number of molecules. More exactly, it also includes single atoms, e.g. for a case of ...
4
This definition is pretty ambiguous, and in my opinion not very helpful. But if you consider an ideal gas, for which we have $$pV = nRT$$ then $nRT$ can be identified with something which has the dimensions of p-V work. The problem is that 'work' is a path function, which must be specified with respect to a process, which goes from an initial state to a ...
4
It may be helpful to look at a related value $k_{B}$, the Boltzmann constant, which is widely used in thermodynamics. These two are related by $R = k_{B}N_{A}$, allowing the ideal gas law to also be written: $$PV = Nk_{B}T$$ where $N$ is the number of particles, as opposed to the number of moles. The units are $\pu{J\cdot K^{-1}}$. It's a proportionality ...
3
Because it gives simpler-to-derive laws which are often very good approximations. Clearly real gases do not always follow the ideal gas laws. They mostly liquefy under some conditions, for example, and, under those conditions they are clearly not ideal. But in practice gas laws are used for things far away from those non-ideal regions. When we are applying ...
3
OP has given a good effort to solve the problem using the correct path. The only loose point was not considering the diatomic nature of the gases, as Safdar Faisal pointed out in a comment. Suppose the amount of $\ce{O2}$ in the gas mixture is $n_1 \ \pu{mol}$ and the amount of $\ce{N2}$ in the gas mixture is $n_2 \ \pu{mol}$ in a volume of $\pu{1.0 L}$ container. Thus,...
3
In general for a perfect or ideal gas, $$C_p=C_V + R'$$ (using your notation) where the heat capacities are molar quantities. It follows that for a perfect gas mixture $(C_p)_\text{mix}=(C_V)_\text{mix} + R'$.
3
When you decrease the temperature of an ideal gas held at constant volume, what you are doing is transferring energy as heat from the gas into the surroundings. You do this by keeping the surroundings at a targeted final temperature and placing the gas in thermal contact with the surroundings, allowing heat to dissipate. When you reach the final state, you ...
2
Internal energy of any gas is given by $$U=U_\text{trans} + U_\text{rotational} + U_\text{vibrational} + U_\text{intermolecular} + U_\text{electronic} + U_\text{relativistic} + U_\text{bonds}$$ The last three aren't affected by ordinary heating. And $U_\text{intermolecular}$ for an ideal gas is zero. That's why for an ideal gas $U$ is only a function of temperature. But ...
2
The temperature and the volume of the inner ear are constant. When your ears pop during descent, air from the cabin goes into the ear, increasing the pressure. The law is the following: $$n / P = const$$ You can derive this from the ideal gas law. It has no special name.
2
To carry out the B transformation, you have to heat a lot. If you don't heat, the volume will of course increase, but the pressure will decrease. B is a process that is difficult to carry out, because it is not easy to heat an expanding gas enough to maintain the inner pressure. Then, at the end of B, the temperature is very high. You block the position ...
2
Your analysis is correct in terms of number density. But let's see how it plays out in terms of molar density. Let n be the number of moles and A be Avogadro's number. Then N=nA. If we substitute this into your first equation, then I get $$P=\frac{n(Ak)T}{V-nAb}-\frac{aA^2n^2}{V^2}$$ But, since Ak=R, we obtain: $$P=\frac{nRT}{V-nb'}-\frac{a'n^2}{V^2}$$ where ...
2
We know from Newton's first law of motion that the net force acting on an object at rest must be zero. The forces acting on the piston are gravity and gas pressure: $$\vec F_g + \vec F_\mathrm{p,down} + \vec F_\mathrm{p,up}=\vec 0 \tag{1}$$ If $V$ is the given bottom gas volume, $V_0$ is the total gas volume, $n$ is the molar amount of each of gases, the gas ...
2
The Gibbs free energy change is zero in the case of reversible processes carried out at constant temperature and pressure, but that isn't the case if these conditions are not observed. As a demonstration consider an isothermal reversible expansion of an ideal gas. Since the temperature is constant, the free energy change is given by $$\Delta G = \Delta H - T ...$$
2
| $T$ in K | $V^2$ in $\pu{m2 s-2}$ |
| --- | --- |
| 300 | 206116 |
| 323 | 230400 |
| 363 | 260100 |
| 403 | 280900 |
| 498 | 348100 |
| 623 | 435600 |
| 698 | 504100 |
| 773 | 547600 |
These are proportional.
2
$B=\pu{0.6226 bar}$ as the water-water vapour system would be in equilibrium. Now, $A+B=\pu{1 bar}$, so $A=\pu{0.3774 bar}$. For the initial volume of the container, $$P_{\ce{Ar}_i}V_i=n_{\ce{Ar}_i}RT \implies V_i=C=\frac{n_{\ce{Ar}_i}RT}{P_{\ce{Ar}_i}}=\pu{\frac{0.1 \times 0.08314 \times 360}{0.3774} L}=\pu{7.93 L}$$ For the initial moles of water vapour, $P_{\ce{...
2
Some notes I had easily at hand may help. You have already calculated that the probability of obtaining a number k of one type of ball (or molecule) out of a total of n is $$\displaystyle p=\frac{n!}{k!(n-k)!}\frac{1}{2^n} \tag{25c}$$ This distribution is a maximum when k=n/2. This can be seen with a straightforward argument. The factorial terms ...
2
Charles' law says that at constant pressure the volume and temperature of an ideal gas are related as $$\frac{V_1}{T_1}=\frac{V_2}{T_2}$$ If $V_2=V_1+dV$ and $T_2=T_1+dT$ then $$\frac{V_1}{T_1}=\frac{V_1+dV}{T_1+dT}=\frac{V_1}{T_1}\left(\frac{1+dV/V_1}{1+dT/T_1}\right)$$ which can be rearranged into $$\frac{dV}{dT}=\frac{V_1}{T_1}$$ But Charles' law says ...
1
Assumptions: Reaction of carbon dioxide with water is neglected. Vapour pressure of water is negligible. Volume of solution does not change on dissolution of carbon dioxide. Moles of carbon dioxide dissolved in water are very small compared to moles of water, and thus $X_{\ce{CO2_{(aq.)}}}≈\frac{n_{\ce{CO2(aq)}}}{n_{\ce{H2O(l)}}}$. Initially no $\ce{CO2}$ ...
1
Your claim that $\Delta H = \Delta U + P_{ext} \Delta V$ is incorrect. It is only correct when the system is always in mechanical equilibrium with the external pressure, or in other words irreversible expansion against constant pressure. The more general statement is $\Delta H = \Delta U + \Delta (PV)$. Here as you can see even when you compare it with ...
1
Cp of an ideal gas doesn't depend on pressure. Cp of any substance is defined as the partial derivative of enthalpy with respect to temperature at constant pressure. But, if the enthalpy of the substance is independent of pressure, then it doesn't matter if the pressure is constant. However, determining Cp by measuring the amount of heat required to change the ...
1
At the interface between the gas and its surroundings, by Newton's 3rd law, the pressure exerted by the gas on its surroundings is equal to the pressure exerted by the surroundings on the gas. So we can use either. For a reversible expansion, the gas pressure is given by the ideal gas law. But the ideal gas law only applies at thermodynamic equilibrium ...
1
When real gases are at high temperature, the kinetic energy prevents any gas particles from interacting via intermolecular forces. With low pressure, the gas particles are separated enough that the intermolecular forces are sparse, therefore giving rise to the ideal behavior, since ideal gases are defined as non-interacting particles. When real gases are at ...
1
The van der Waals equation is $$\left(p+\frac a{V_\mathrm m^2}\right)(V_\mathrm m-b)=RT$$ Here $V_\mathrm m$ is the molar volume.
When pressure is low and temperature is very high, we can qualitatively say that the molar volume will be very large. Due to this the volume occupied by the molecules (given by b) becomes insignificant. The pressure is low but the ...
1
There are at least two fundamental issues you have to address. First, you have to distinguish between the Gibbs energy $G$ and the Gibbs energy of reaction $\Delta_r G$. In your diagram, one is the value on the y-axis (without defined zero point) and the other is the slope of the line, as labeled in your sketch. The second issue is that in the expressions on ...
1
I can't see what you did in your derivation, but, for what it's worth, here's my derivation. My starting equations are $$G=\sum{n_i\mu_i}$$ and, along your contour, at constant temperature and pressure, $$\mathrm dG=\sum{\mu_i\mathrm dn_i}$$ The changes in the number of moles of the various species are given by $$n_i=n_{i0}+c_i n$$ where $c_i$ is the ...
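The proportionality claimed in the $V^2$–$T$ table quoted above is easy to check in a few lines. A quick sketch in Python (the data pairs are copied verbatim from that answer; nothing else is assumed):

```python
# (T in K, V^2 in m^2 s^-2) pairs from the table above
data = [(300, 206116), (323, 230400), (363, 260100), (403, 280900),
        (498, 348100), (623, 435600), (698, 504100), (773, 547600)]

for T, v2 in data:
    print(T, round(v2 / T, 1))   # ratios cluster near ~700, i.e. V^2 is roughly proportional to T
```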
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8887727856636047, "perplexity": 380.6071684218952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00372.warc.gz"} |
https://www.chemicalforums.com/index.php?topic=105577.0;prev_next=prev | May 18, 2021, 06:29:33 PM
### Topic: Photon with greatest energy.
#### IceRiceIce
##### Photon with greatest energy.
« on: October 07, 2020, 06:57:03 PM »
Which of the following transitions would produce a photon with the greatest energy? Why?
A) n = 1 → n = 5
B) n = 4 → n = 3
C) n = 5 → n = 2
D) n = 3 → n = 4
I don't even know where to start with this so if someone could explain this it would be helpful.
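A place to start (this sketch is not from the thread; it assumes the standard hydrogen-like level energies $E_n = -13.6\ \text{eV}/n^2$, and the fact that a photon is emitted only when the electron drops to a lower level):

```python
def E(n):
    """Bohr-model hydrogen energy level in eV (assumed model)."""
    return -13.6 / n**2

transitions = {"A": (1, 5), "B": (4, 3), "C": (5, 2), "D": (3, 4)}
for label, (n_i, n_f) in transitions.items():
    dE = E(n_f) - E(n_i)            # negative dE: energy released as a photon
    kind = "emits" if dE < 0 else "absorbs"
    print(label, kind, round(abs(dE), 3), "eV")
```

Only the downward transitions (B and C) produce photons at all, and C ($n = 5 \to n = 2$) releases the most energy, about 2.86 eV.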
• Mr. pH | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9250604510307312, "perplexity": 2739.4073616605515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00233.warc.gz"} |
https://brilliant.org/problems/promarantid-question/ | # promarantid question
Geometry Level 2
DB is a diagonal of rectangle ABCD. Line L through A and line M through C are perpendicular to DB and divide DB into three equal parts of length 1 meter each. To 2 decimal places, find the area (in m²) of the rectangle. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8164598941802979, "perplexity": 1966.7794756207834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720380.80/warc/CC-MAIN-20161020183840-00359-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/distribution-difference-of-two-independent-random-variables.786525/ | # Distribution Difference of Two Independent Random Variables
1. Dec 8, 2014
### izelkay
1. The problem statement, all variables and given/known data
Z = X - Y and I'm trying to find the PDF of Z.
2. Relevant equations
Convolution
3. The attempt at a solution
Started by finding the CDF:
Fz(z) = P(Z ≤ z)
P(X - Y ≤ z)
So I drew a picture
So then should Fz(z) be $F_Z(z) = \int_{-\infty}^{\infty} \int_{-\infty}^{z+y} f(x,y) \,dx \,dy$,
since, from my graph, it looks as though Y can go from negative infinity to positive infinity, and X can go from negative infinity on the left but is bounded by z+y on the right
then should the pdf be $f_Z(z) = \frac{d}{dz}F_Z(z) = \int_{-\infty}^{\infty} f_X(z+y)\, f_Y(y) \,dy$?
Edit: sorry, I forgot to include the fact that, because fX and fY are independent, f(x,y) = fX(x)fY(y) in the derivation above.
Last edited: Dec 8, 2014
2. Dec 9, 2014
3. Dec 9, 2014
### Orodruin
Staff Emeritus
Your approach looks reasonable. Is there something you are particularly worried about?
4. Dec 9, 2014
### izelkay
Oh yes, sorry I was wondering if what I arrived at for the PDF of the difference of two independent random variables was correct.
I tried Googling but all I could find was the pdf of the sum of two RVs, which I know how to do already.
5. Dec 9, 2014
### Orodruin
Staff Emeritus
Yes, it looks ok. You should note that if you know the pdf of the sum and the pdf of the negative of a distribution (i.e., if you know, given the distribution of $Y$, what the distribution of $-Y$ is), then you can use these two to obtain the distribution of $X-Y$.
6. Dec 9, 2014
### izelkay
I'm going to modify the problem a little bit, can you tell me if I do it right/wrong?
X and Y are two independent random variables, each of which are uniform on (0,1). Find the pdf of Z = X - Y
Using my result from above, the pdf of Z is given by $f_Z(z) = \int_{-\infty}^{\infty} f_X(z+y)\, f_Y(y) \,dy$.
X ~ U(0,1) and Y ~ U(0,1)
The pdfs fX(x) and fY(y) are both 1 on (0,1) and 0 otherwise.
z goes from -1 to 1 because of X - Y right?
For -1 ≤ z ≤ 0
fy is uniform only on (0,1) so my integral limits should be z+1 to z
For 0 ≤ z ≤ 1 I wouldn't need to change the limits here, just from 0 to 1
So then the pdf of X - Y is:
Is this correct?
7. Dec 9, 2014
### Ray Vickson
No, it cannot possibly be correct, because you would have f(z) = -1 for -1 < z < 0 and f(z) = 1 for 0 < z < 1. That kind of f cannot be a probability density function.
You need to go back and evaluate the limits more carefully; in cases like this, drawing an x,y diagram of the integration region would be helpful.
8. Dec 9, 2014
### izelkay
I don't have much practice drawing integration regions of density functions and I wouldn't know where to start with this one besides drawing the box with vertices (0,0), (0,1), (1,0), (1,1).
I think I found another way to look at it though:
If -1 ≤ z ≤ 0,
If I want fx(z+y) to be 1, I need to shift the bounds by +1, and then I'll need z+y+1 ≥ 0 , or y ≥ -1-z
So $f_Z(z) = \int_{-1-z}^{0} dy = 1 + z$
Then on 0 ≤ z ≤ 1,
I need z+y ≤ 1, or y ≤ 1 - z
So $f_Z(z) = \int_{0}^{1-z} dy = 1 - z$
Then all together this is:
1 + z , -1 ≤ z ≤ 0
1 - z , 0 ≤ z ≤ 1
I got the same answer as here: http://www.math.wm.edu/~leemis/chart/UDR/PDFs/StandarduniformStandardtriangular.pdf
but I'm not sure my process is correct because my bounds are a little different than theirs, and I don't know how they computed the cdf
9. Dec 9, 2014
### Ray Vickson
You already drew part of the diagram in Post #1. You just need to add the boundary lines x = 0, x = 1, y = 0, y = 1 and you are done---that's your drawing!
You say you "don't know how they computed the cdf". Well, you claimed before that you know how to compute the cdf of a SUM, and that is all you need to do.
If $Z = X-Y$ then $S = Z+1$ has the form $S = X + (1-Y) = X + Y'$, where $X$ and $Y' = 1-Y$ are independent and $\rm{U}(0,1)$ random variables. Thus, $S$ is the sum of two uniforms, and has a well-known, widely-documented triangular density:
$$f_S(s) = \begin{cases} s, & 0 < s < 1\\ 2-s, & 1 < s < 2\\ 0 & \rm{elsewhere} \end{cases}$$
Now just shift the graph of $f_S(s)$ one unit to the left, and that would be your density of $Z$: $f_Z(z) = f_S(1+z), \; -1 < z < 1$.
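Ray Vickson's triangular answer is easy to sanity-check numerically. A quick Monte Carlo sketch (assuming NumPy is available; the sample size and bin count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.random(1_000_000) - rng.random(1_000_000)   # samples of Z = X - Y

# Compare an empirical histogram with the triangular pdf f_Z(z) = 1 - |z|
hist, edges = np.histogram(z, bins=20, range=(-1, 1), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - (1 - np.abs(centers)))))  # small, e.g. ~1e-2
```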
Last edited: Dec 9, 2014
Similar Discussions: Distribution Difference of Two Independent Random Variables | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216769933700562, "perplexity": 463.1513028208326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106996.2/warc/CC-MAIN-20170820223702-20170821003702-00238.warc.gz"} |
https://www.contextgarden.net/index.php?title=Talk:Command/setupinmargin&diff=prev&oldid=13068&printable=yes | Difference between revisions of "Talk:Command/setupinmargin"
Does first bracket work?
Does the first bracket actually work? I assumed that it defines the location so that \setupinmargin[right] would put all notes using \inmargin into the right hand margin? The following does not work in MKII nor MKIV, i.e. the margin notes are neither on the left nor slanted.
```\setupinmargin[right][style=slanted]
\starttext
Hello World! \inmargin{Goodbye}
\stoptext
```
However, the following does:
```\setupinmargin[location=right,style=slanted]
\starttext
Hello World! \inmargin{Goodbye}
\stoptext
```
Richard Stephens 09:51, 2 June 2011 (CEST) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9743638634681702, "perplexity": 4058.583256761999}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538226.66/warc/CC-MAIN-20210123160717-20210123190717-00200.warc.gz"} |
http://www.heldermann.de/JLT/JLT24/JLT243/jlt24028.htm | Journal of Lie Theory 24 (2014), No. 3, 657--685. Copyright Heldermann Verlag 2014.
The Spherical Transform Associated with the Generalized Gelfand Pair $(U(p,q),H_n)$, $p+q=n$
Silvina Campos, CIEM-FaMAF, Universidad Nacional de Córdoba, Córdoba 5000, Argentina, [email protected]
Linda Saal, CIEM-FaMAF, Universidad Nacional de Córdoba, Córdoba 5000, Argentina, [email protected]
We denote by $H_{n}$ the $2n+1$-dimensional Heisenberg group and study the spherical transform associated with the generalized Gelfand pair $(U(p,q) \rtimes H_{n},U(p,q))$, $p+q=n$, which is defined on the space of Schwartz functions on $H_{n}$, and we characterize its image. In order to do that, since the spectrum associated to this pair can be identified with a subset $\Sigma$ of the plane, we introduce a space ${\cal H}_{n}$ of functions defined on $\mathbb{R}^2$ and we prove that a function defined on $\Sigma$ lies in the image if and only if it can be extended to a function in ${\cal H}_{n}$. In particular, the spherical transform of a Schwartz function $f$ on $H_{n}$ admits a Schwartz extension on the plane if and only if its restriction to the vertical axis lies in ${\cal S}(\mathbb{R})$.
Keywords: Heisenberg group, spherical transform. MSC: 43A80; 22E25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973563313484192, "perplexity": 347.8249964199296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806258.88/warc/CC-MAIN-20171120223020-20171121003020-00527.warc.gz"}
https://web2.0calc.com/questions/decimals_42 | # Decimals
Let $x = (0.\overline{6})(0.\overline{06})$. When x is written out as a decimal, what is the sum of the first 15 digits after the decimal point?
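One way to check this numerically: write the repeating decimals as exact fractions and read the digits off with integer long division. A short Python sketch (`fractions.Fraction` is standard; this uses $0.\overline{6} = 2/3$ and $0.\overline{06} = 6/99 = 2/33$):

```python
from fractions import Fraction

x = Fraction(2, 3) * Fraction(2, 33)   # 0.666... * 0.060606... = 4/99
print(x)                               # 4/99 = 0.040404...

# First 15 digits after the decimal point, via long division
digits, rem = [], x.numerator          # x < 1, so the remainder starts at the numerator
for _ in range(15):
    rem *= 10
    digits.append(rem // x.denominator)
    rem %= x.denominator

print(digits, sum(digits))             # [0, 4, 0, 4, ...], sum = 28
```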
Dec 27, 2021 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9771654605865479, "perplexity": 374.52646667876536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104204514.62/warc/CC-MAIN-20220702192528-20220702222528-00258.warc.gz"} |
http://mathhelpforum.com/calculus/187478-differentiable-function.html | 1. Differentiable function
By using the definition of a differentiable function, show that the following function is differentiable in (2,2):
f(x,y) = e^[x + 2y]
Thx!!
2. Re: Differentiable function
Originally Posted by marqushogas
By using the definition of a differentiable function, show that the following function is differentiable in (2,2):
f(x,y) = e^[x + 2y]
Thx!!
Well, what is the definition of differentiability?
3. Re: Differentiable function
In two variables: If a function f(x,y) is differentiable at (a,b), there exist two numbers, A and B, and a function r(h,k) such that:
f(a+h,b+k) - f(a,b) = A*h + B*k + sqrt(h^2+k^2)*r(h,k) and r(h,k) -> 0 when (h,k) -> (0,0) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9855453968048096, "perplexity": 1888.8205056114466}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280891.90/warc/CC-MAIN-20170116095120-00368-ip-10-171-10-70.ec2.internal.warc.gz"} |
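For the specific function in this thread, the verification can be written out explicitly. A sketch in LaTeX (assuming the standard choices $A = f_x(2,2)$ and $B = f_y(2,2)$, which are not spelled out in the thread):

```latex
% f(x,y) = e^{x+2y}, so at (2,2): f(2,2) = e^{6}, A = f_x(2,2) = e^{6}, B = f_y(2,2) = 2e^{6}.
\[
  f(2+h,\,2+k) - f(2,2) = e^{6}\bigl(e^{h+2k} - 1\bigr)
    = e^{6}h + 2e^{6}k + \sqrt{h^{2}+k^{2}}\; r(h,k),
\]
\[
  \text{where } r(h,k) = e^{6}\,\frac{e^{h+2k} - 1 - (h+2k)}{\sqrt{h^{2}+k^{2}}}.
\]
% Since e^{t} - 1 - t = O(t^{2}) as t -> 0 and |h+2k| <= sqrt{5} * sqrt{h^2+k^2}
% (Cauchy--Schwarz), |r(h,k)| is bounded by a constant times sqrt{h^2+k^2},
% so r(h,k) -> 0 as (h,k) -> (0,0), which is exactly the definition above.
```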