url
stringlengths 15
1.13k
| text
stringlengths 100
1.04M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
https://de.maplesoft.com/support/help/maple/view.aspx?path=Student/Statistics/Kurtosis | Student[Statistics] - Maple Programming Help
Home : Support : Online Help : Education : Student Package : Statistics : Student/Statistics/Kurtosis
Student[Statistics]
Kurtosis
compute the coefficient of kurtosis
Calling Sequence Kurtosis(A, numeric_option) Kurtosis(M, numeric_option) Kurtosis(X, numeric_option, inert_option)
Parameters
A - M - X - algebraic; random variable numeric_option - (optional) equation of the form numeric=value where value is true or false inert_option - (optional) equation of the form inert=value where value is true or false
Description
• The Kurtosis function computes the coefficient of kurtosis of the specified random variable or data sample. In the data sample case, the following formula for the kurtosis is used:
$\mathrm{Kurtosis}\left(A\right)=\frac{N\mathrm{Moment}\left(A,4,\mathrm{origin}=\mathrm{Mean}\left(A\right)\right)}{\left(N-1\right){\mathrm{Variance}\left(A\right)}^{2}},$
where N is the number of elements in A. In the random variable case, Maple uses the limit of that formula for $\mathrm{expr}$, that is,
$\mathrm{Kurtosis}\left(X\right)=\frac{\mathrm{Moment}\left(X,4,\mathrm{origin}=\mathrm{Mean}\left(X\right)\right)}{{\mathrm{Variance}\left(X\right)}^{2}}$.
• There is a different quantity that some authors call kurtosis. This quantity is called excess kurtosis here. The excess kurtosis is not predefined in Maple, but it can be easily obtained by subtracting $3$ from the kurtosis: $\mathrm{ExcessKurtosis}≔\mathrm{Kurtosis}-3$.
• The first parameter can be a data sample (e.g., a Vector), a Matrix data sample, a random variable, or an algebraic expression involving random variables (see Student[Statistics][RandomVariable]).
• If the option inert is not included or is specified to be inert=false, then the function will return the actual value of the result. If inert or inert=true is specified, then the function will return the formula of evaluating the actual value.
Computation
• By default, all computations involving random variables are performed symbolically (see option numeric below).
• If there are floating point values or the option numeric is included, then the computation is done in floating point. Otherwise the computation is exact.
• By default, the kurtosis is computed according to the rules mentioned above. To always compute the kurtosis numerically, specify the numeric or numeric = true option.
Examples
> $\mathrm{with}\left(\mathrm{Student}[\mathrm{Statistics}]\right):$
Compute the coefficient of kurtosis of the log normal distribution with parameters $\mathrm{\mu }$ and $\mathrm{\sigma }$.
> $\mathrm{Kurtosis}\left(\mathrm{LogNormalRandomVariable}\left(\mathrm{μ},\mathrm{σ}\right)\right)$
$\frac{{{ⅇ}}^{{8}{}{{\mathrm{σ}}}^{{2}}{+}{4}{}{\mathrm{μ}}}{-}{4}{}{{ⅇ}}^{{5}{}{{\mathrm{σ}}}^{{2}}{+}{4}{}{\mathrm{μ}}}{+}{6}{}{{ⅇ}}^{{3}{}{{\mathrm{σ}}}^{{2}}{+}{4}{}{\mathrm{μ}}}{-}{3}{}{{ⅇ}}^{{2}{}{{\mathrm{σ}}}^{{2}}{+}{4}{}{\mathrm{μ}}}}{{\left({{ⅇ}}^{{{\mathrm{σ}}}^{{2}}{+}{2}{}{\mathrm{μ}}}\right)}^{{2}}{}{\left({{ⅇ}}^{{{\mathrm{σ}}}^{{2}}}{-}{1}\right)}^{{2}}}$ (1)
Use numeric parameters for the beta distribution.
> $\mathrm{Kurtosis}\left(\mathrm{BetaRandomVariable}\left(3,5\right)\right)$
$\frac{{711}}{{275}}$ (2)
> $\mathrm{Kurtosis}\left(\mathrm{BetaRandomVariable}\left(3,5\right),\mathrm{numeric}\right)$
${2.585454546}$ (3)
Use the inert option.
> $\mathrm{Kurtosis}\left(\mathrm{BetaRandomVariable}\left(3,5\right),\mathrm{inert}\right)$
$\frac{{{∫}}_{{0}}^{{1}}{105}{}{\left({-}{\mathrm{_t2}}{+}{{∫}}_{{0}}^{{1}}{105}{}{{\mathrm{_t1}}}^{{3}}{}{\left({-}{1}{+}{\mathrm{_t1}}\right)}^{{4}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{\mathrm{_t1}}\right)}^{{4}}{}{{\mathrm{_t2}}}^{{2}}{}{\left({-}{1}{+}{\mathrm{_t2}}\right)}^{{4}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{\mathrm{_t2}}}{{\left({{∫}}_{{0}}^{{1}}{105}{}{\left({-}{\mathrm{_t0}}{+}{{∫}}_{{0}}^{{1}}{105}{}{{\mathrm{_t}}}^{{3}}{}{\left({-}{1}{+}{\mathrm{_t}}\right)}^{{4}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{\mathrm{_t}}\right)}^{{2}}{}{{\mathrm{_t0}}}^{{2}}{}{\left({-}{1}{+}{\mathrm{_t0}}\right)}^{{4}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{\mathrm{_t0}}\right)}^{{2}}}$ (4)
> $\mathrm{evalf}\left(\mathrm{Kurtosis}\left(\mathrm{BetaRandomVariable}\left(3,5\right),\mathrm{inert}\right)\right)$
${2.585454545}$ (5)
Consider the following list of data.
> $A≔\left[1,2,\mathrm{π},{ⅇ}^{1.5},-3\right]$
${A}{≔}\left[{1}{,}{2}{,}{\mathrm{π}}{,}{4.481689070}{,}{-}{3}\right]$ (6)
> $\mathrm{Kurtosis}\left(A\right)$
${1.92292561031128}$ (7)
Consider the following Matrix data set.
> $M≔\mathrm{Matrix}\left(\left[\left[3,1,11\right],\left[4,1.5,28\right],\left[3,\mathrm{ln}\left(3\right),31\right],\left[2,0,4\right],\left[4,9.2,7\right]\right]\right)$
${M}{≔}\left[\begin{array}{ccc}{3}& {1}& {11}\\ {4}& {1.5}& {28}\\ {3}& {\mathrm{ln}}{}\left({3}\right)& {31}\\ {2}& {0}& {4}\\ {4}& {9.2}& {7}\end{array}\right]$ (8)
We compute the kurtosis of each of the columns.
> $\mathrm{Kurtosis}\left(M\right)$
$\left[\begin{array}{ccc}\frac{{362}}{{245}}& {2.51927243026304}& \frac{{12176842}}{{11966045}}\end{array}\right]$ (9)
References
Stuart, Alan, and Ord, Keith. Kendall's Advanced Theory of Statistics. 6th ed. London: Edward Arnold, 1998. Vol. 1: Distribution Theory.
Compatibility
• The Student[Statistics][Kurtosis] command was introduced in Maple 18. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 26, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9767943024635315, "perplexity": 1495.0683253899065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526408.59/warc/CC-MAIN-20190720024812-20190720050812-00164.warc.gz"} |
https://testbook.com/question-answer/a-thin-cylinder-of-internal-diameter-1-m-and-thick--605eda7b60476b8c473bee1b | # A thin cylinder of internal diameter 1 m and thickness 0.02 m contains a gas. The tensile stress in the material is not to exceed 100 MPa then determine the internal pressure of the gas
This question was previously asked in
DSSSB JE ME 2019 Official Paper Shift - 2 (Held on 06 Nov 2019)
View all DSSSB JE Papers >
1. 10 N/mm2
2. 6 N/mm2
3. 8 N/mm2
4. 4 N/mm2
Option 4 : 4 N/mm2
## Detailed Solution
Concept:
• Consider a thin pressure vessel having closed ends and contains a fluid under a gauge pressure p. Then the walls of the cylinder will have a longitudinal stress, circumferential stress and radial stress.
• As shown in figure, a point of shell having stresses from all sides i.e. tri-axial stresses.
• σ= longitudinal stress (tensile), σr = radial stress (compressive), σh = circumferential stress (tensile).
As, σr <<<< σL and σh, therefore we neglect σr and assumed the bi-axial stresses.
Circumferential or hoop stress: $${σ _h} = \frac{{pd}}{{2t}}$$
Longitudinal or axial stress: $${σ _L} = \frac{{pd}}{{4t}}$$
where d is the internal diameter and t is the wall thickness of the cylinder.
Therefore, σh = 2σL
For spherical shell, longitudinal stress and circumferential stress both are equal,
σh = σL = $$\frac{Pd}{4t}$$
Calculation:
Given:
σ = 100 MPa, d = 1 m, t = 0.02 m
Maximum tensile stress in the cylinder is given by:
$${σ _h} = \frac{{pd}}{{2t}}$$
$${100} = \frac{{p\;\times\;1}}{{2\;\times\;0.02}}$$
p = 4 N/mm2
∴ The internal pressure of the gas 4 N/mm2 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426121473312378, "perplexity": 1850.3813556124642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057018.8/warc/CC-MAIN-20210920040604-20210920070604-00536.warc.gz"} |
http://www.calculatorium.com/ | ## How to Make Limits in Calculus
Limits are values that we couldn’t reach. We are getting closer to limits but limit can not be reached. For instance if we have: f(x)=(x+1) / (x^2) it is not 0 because f(x)= 1/0 and this will be undefined because when you divide something with zero, the result will be undefined. If there would be […]
## How to Learn Pre Calculus
Pre-calculus learning is very good for mastering calculus with which students would encounter in later learning. If some students want to be good in science, mathematics, finance etc. they must gain knowledge of calculus. The best way to master calculus is to understand pre-calculus, at first. Learning Pre-Calculus Instructions – If you want to know […]
## How to Learn Calculus Theorems
Calculus is founded in 17th century and since then it is used in mathematics to solve many problems. Calculus includes rate of change and if someone understand calculus, he will be able to solve problems in science, economics, statistics etc. If you want to understand calculus, we will give you some instructions for learning. Learning […]
## How to Find the Limit of a Function in Calculus
It is important to understand the concept of limits because it will be present in many calculus problems. Limit shows us how some value has limit in f(y) if the f(x) approaches that value. For instance, if we have a limit that will be when X goes to zero; y goes to 1. Instructions 1) […]
## LibreOffice Math Introduction
About LibreOffice Math The Math is the LibreOffice suite’s formula editor that enables you to use formatted mathematical and scientific formulas in a spreadsheet, presentation, drawing or text document. These formulas can include many different elements, e.g. terms with exponents and indices, fractions, integrals, mathematical functions, systems of equations, matrices, inequalities and all other elements […]
Page 1 of 1912345...10...»» | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9037389159202576, "perplexity": 721.0076640947335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524502.23/warc/CC-MAIN-20190716035206-20190716061206-00389.warc.gz"} |
https://export.arxiv.org/abs/2110.06598v1 | eess.SY
(what is this?)
# Title: Enhanced Sequential Covariance Intersection Fusion
Abstract: This paper is concerned with the sequential covariance intersection (CI) fusion problem that the fusion result is independent of fusion structure including the fusion order and the number of estimates fused in each sequential fusion. An enhanced sequential CI fusion is first developed to better meet the practical requirements as compared with the existing batch and sequential CI fusion. Meanwhile, it is proved that the enhanced sequential CI fusion ensures the fusion estimate and covariance are unbiased and consistent. Notice that the fusion structure of the enhanced sequential CI fusion is unpredictable in practice, which may have negative impacts on the fusion performance. To this end, a weighting fusion criterion with analytical form is further proposed, and can be depicted by different formulas when choosing different performance indexes. For this criterion, it is proved that the fusion results are not affected by the fusion structure, and thus the fusion performance can be guaranteed. Finally, simulation examples are utilized to demonstrate the effectiveness and advantages of the proposed methods.
Comments: 7 pages,11 figures Subjects: Systems and Control (eess.SY) Cite as: arXiv:2110.06598 [eess.SY] (or arXiv:2110.06598v1 [eess.SY] for this version)
## Submission history
From: Bo Chen [view email]
[v1] Wed, 13 Oct 2021 09:43:24 GMT (1342kb)
Link back to: arXiv, form interface, contact. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8200616240501404, "perplexity": 2310.058865717142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300849.28/warc/CC-MAIN-20220118122602-20220118152602-00021.warc.gz"} |
https://physics.stackexchange.com/questions/355756/newtonian-mechanics-problem-involving-rotational-and-linear-motion | # Newtonian mechanics problem involving rotational and linear motion
Consider a rectangular platform in space such that it has 2 vertical rods embedded in its surface equidistant from its geometric center. The platform is parallel to x axis.
Each rod has a balanced circular disc attached to it that is free to rotate parallel to the platform. Assume the discs are rotating parallel to the x axis, one rotating clockwise and another anticlockwise with same angular velocity (w.r.t. the platform). The geometric center of the system is also the center of mass of the system. This system is at rest w.r.t. to an outside observer.
Now, suppose a force is applied to the platform along +x axis for some time period ($t_0$) (from outside, the source of it does not matter). The platform would gain some velocity in +x direction w.r.t. to an outside observer as a result.
Would this acceleration or the outside force also reduce the angular velocity of two discs as compared to the angular velocity earlier (before application of force)?
I think the answer should be yes but I am not sure. Anyone ? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9652813076972961, "perplexity": 173.7436820106088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00205.warc.gz"} |
https://en.wikipedia.org/wiki/Half-metal | # Half-metal
The electronic structure of a half-metal. ${\displaystyle E_{f}}$ is the fermi level, ${\displaystyle N(E)}$ is the density of states for spin down (on the left) and spin up (on the right). In this case, the half-metal is conducting in the minority spin channel.
A half-metal is any substance that acts as a conductor to electrons of one spin orientation, but as an insulator or semiconductor to those of the opposite orientation. Although all half-metals are ferromagnetic (or ferrimagnetic), most ferromagnets are not half-metals. Many of the known examples of half-metals are oxides, sulfides, or Heusler alloys.[1]
In half-metals, the valence band for one spin orientation is partially filled while there is a gap in the density of states for the other spin orientation. This results in conducting behavior for only electrons in the first spin orientation. In some half-metals, the majority spin channel is the conducting one while in others the minority channel is.[citation needed]
Half-metals were first described in 1983, as an explanation for the electrical properties of Mn-based Heusler alloys.[2]
Some notable half-metals are chromium(IV) oxide, magnetite, and lanthanum strontium manganite (LSMO),[1] as well as chromium arsenide. Half-metals have attracted some interest for their potential use in spintronics.[citation needed]
## References
1. ^ a b Coey, J.M.D.; Venkatesan, M. (2002). "Half-metallic ferromagnetism: Example of CrO2". Journal of Applied Physics. 91 (10): 8345–50. Bibcode:2002JAP....91.8345C. doi:10.1063/1.1447879.
2. ^ R. A. de Groot, F. M. Mueller, P. G. van Engen, and K. H. J. Buschow (1983). "New Class of Materials: Half-Metallic Ferromagnets". 50 (25). Phys. Rev. Lett.: 2024. doi:10.1103/PhysRevLett.50.2024. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8314123749732971, "perplexity": 4063.654235979109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213247.0/warc/CC-MAIN-20180818001437-20180818021437-00053.warc.gz"} |
http://www.tony5m17h.net/E8GLTSCl8xtnd.html | # E8, Cl(16) = Cl(8) (x) Cl(8), and Physics Calculations
Frank Dodd (Tony) Smith, Jr., November 2007 and January 2008 - Apppendix re E8 geometry etc
Abstract:
Garrett Lisi in hep-th/0711.0770 said ".... The building blocks of the standard model and gravity are fields over a four dimensional base manifold. ... Relying on the algebraic structure of the exceptional Lie groups, the fermions may also be recast as Lie algebra elements and included naturally ... the entire ensemble corresponds to a uniquely beautiful Lie group - the largest simple exceptional group, E8. ... The weights of ... 222 elements - corresponding to the quantum numbers of all gravitational and standard model fields - exactly match 222 roots out of the 240 of E8. ... The action for everything, [is] chosen by hand to be in agreement with the standard model with gravity included via the MacDowell-Mansouri technique....".
Jacques Distler in November 2007 on his web blog Musings said "... I ...[Jacques Distler am]... not going to talk about spin-statistics, or the Coleman-Mandula Theorem ... that could render Garrett's idea a non-starter ... Instead, I will confine myself to a narrow question in group representation theory. ...
There are two ... noncompact real form[s] of E8 ...
• E8(8) [with] Spin(16) as a maximal compact subgroup ..[where]... the 248 [dimensions of E8]... decompose... as 248 = 120 + 128 ...
• E8(-24) [with] SU(2) x E7 as a maximal compact subgroup ..[where]... the 248 [dimensions of E8]... decompose... as 248 = (3,1) + (1,133) + (2,56)
... I ...[Jacques Distler am here]... deliberately not being careful about such factors of Z2 ... for ease of presentation ...
you [Garrett Lisi] state that G is embedded in E8(-24). Now you say it's embedded in Spin(7,1) x Spin(8) ... That's not a subgroup of E8(-24). It is a subgroup of E8(8). ...".
Garrett Lisi replied "... I [Garrett Lisi] made a mistake in thinking that so(7,1) + so(8) is in the Lie algebra of E IX [ = E8(-24) ] , when in fact it's in ... [ EVIII = E8(8) ]... Thanks. ...".
Bee (Sabine Hossenfelder) said in November on her blog "... I ...[ Bee have ]... complained ..
• about the absence of coupling constants throughout the paper [ hep-th/0711.0770 ] ...
• there is no base manifold present whatsoever ...[in]... the elements of the [E8 Lie] algebra ...
• he [ Garrett Lisi ] has to choose the action by hand to reproduce the SM ...
• for his [ Garrett Lisi's ] theory to work he needs ... the cosmological constant ... to be the size of about the Higgs vev, i.e. roughly 12 orders of magnitude too large
Jacques Distler in November 2007 on his web blog Musings said "... If you want to include the MacDowell-Mansouri Spin(3,1)o, along with the Standard Model gauge group, in E8, then there is not enough "room" to also include 3 generations of quarks and leptons in the 248. That was what Lisi was aiming for. And I think we are all agreed that it doesn't work. ...".
So, this paper is written based on Garrett Lisi's ideas in hep-th/0711.0770 with some modifications to satisfy the objections of Jacques Distler and Bee (Sabine Hossenfelder):
• the structure is based on EVII = E8(8) with 248 = 120 + 128
• there are calculations of coupling constants (force strengths) as well as particle masses and K-M parameters
• the base manifold spacetime is part of E8(8) itself
• the Lagrangian for Gravity plus the Standard Model is based on natural structural relations among various parts of E8(8)
• the Dark Energy (cosmological constant) : Dark Matter : Ordinary Matter ratio is calculated, with results consistent with WMAP
• the second and third generations of fermions are composites of some of the 248 elements of E8 and are not directly related to triality
• triality is useful in establishing relations among fermions, the base manifold, and gauge bosons, which relations indicate that the model satisfies Coleman-Mandula and spin-statistics
For "ease of presentation", sometimes I will be sloppy about such things as signature, distinguishing between Pinors and Spinors, precise group structure distinctions such as between SU(3)xSU(2)xU(1) and S(U(2)xU(3)) = U(1) x SU(2) x SU(3) / I(2) x I(3), etc. I hope that the real meanings will be clear from context.
Any errors in this paper are not Garrett Lisi's fault.
## The 248-dim Lie algebra E8 = 120-dim adjoint Spin(16) + 128-dim half-spinor Spin(16)
is the basis of the physics model of Garrett Lisi (whose root vector images are the basis for most of the root vector images here).
Spin(16) is the bivector Lie algebra of the real Clifford algebra Cl(16)
As Ramon Llull showed about 700 years ago in his Wheel A, the 16 basis vectors of Cl(16) (vertices/letters) combine to form 120 bivectors (vertex pair lines) of Cl(16) which act as the 120 generators of the Lie algebra Spin(16).
The real Clifford algebra 8-periodicity tensor product factorization
### Cl(16) = Cl(8) (x) Cl(8)
gives correspondences between 248-dim E8 structure and 256-dim Cl(8) structure, which has graded structure
## Cl(8) = 1 + 8 + 28 + 56 + 70 + 56 + 28 + 8 + 1
Taking the tensor product Cl(8) x Cl(8) to get Cl(16) produces the following 120 Cl(16) bivectors:
• 28 Spin(8) bivectors of the first Cl(8) in the tensor product
• 28 Spin(8) bivectors of the second Cl(8) in the tensor product
• 64 = 8x8 tensor product of the two 8-dim 1-vectors of each the two Cl(8)s
to get the 28+28+64 = 120-dim Cl(16) bivector algebra that produces the 120-dim adjoint of the Lie algebra Spin(16).
### 112 of the 240 vertices are the root vector polytope of the 120-dim rank 8 Spin(16) Lie algbra.
In terms of the 28 bivectors of the first Cl(8) factor and the 28 bivectors of the second Cl(8) factor and the 64 product-of-vectors, the 112 are:
• 24 of the 24-cell root vector polytope of the rank-4 Spin(8) of the first Cl(8) (colored magenta on the following diagram)
• 24 of the 24-cell root vector polytope of the rank 4 Spin(8) of the second Cl(8) (colored cyan on the following diagram)
• 64 of the 8x8 product-of-vectors (colored blue on the following diagram)
Note that in the above image some of the 240 E8(8) vertices are projected to the same point:
• each of the 2 vertices in the center (with white dots) are points to which 3 vertices are projected, so that each of the 2 circles with a white dot represents 3 vertices;
• each of the 12 vertices surrounded by 6 same-color nearest neighbors (with yellow dots) are points to which 2 vertices are projected, so that each of the 12 circles with a yellow dot represents 2 vertices.
### 128 of the 240 vertices correspond to a half-spinor representation of the Spin(16) Lie algebra.
The 128 can be seen as the sum 64 + 64 of two 8x8 square-matrices each being 64-dim (colored red and green on the following diagram).
Note that in the above image some of the 240 E8(8) vertices are projected to the same point:
• each of the 4 vertices in the center (with white dots) are points to which 3 vertices are projected, so that each circle with a white dot represents 3 vertices;
• each of the 12 vertices surrounded by 6 same-color nearest neighbors (with yellow dots) are points to which 2 vertices are projected, so that each of the 12 circles with a yellow dot represents 2 vertices.
## Putting the 112 and 128 together gives the 240 vertices of the E8 root vector polytope
Note that in the above image some of the 240 E8(8) vertices are projected to the same point:
• each of the 6 vertices in the center (with white dots) are points to which 3 vertices are projected, so that each of the 6 circles with a white dot represents 3 vertices;
• each of the 24 vertices surrounded by 6 same-color nearest neighbors (with yellow dots) are points to which 2 vertices are projected, so that each of the 24 circles with a yellow dot represents 2 vertices.
Using the color-coding, the 240 root vector vertices of E8 correspond to the graded structure of the 256-dim Cl(8) Clifford algebra as follows:
## = 1 + 8 + (24+4) + (24+4+28) + (32+3+3+32) + (28+4+24) + (24+4) + 8 + 1
In the above, the black underlined 4+4 = 8 correspond to the 8 E8 Cartan subalgebra elements that are not represented by root vectors, and the black non-underlined 1+3+3+1 = 8 correspond to the 8 elements of 256-dim Cl(8) that do not directly correspond elements of 248-dim E8.
### The Spin(8) whose root vector diagram is the vertices of the first 24-cell, living in the Cl(8) bivectors
A stereo view of a 24-cell (the 4th dimension color-coded red-green-blue with green in the middle)
shows that the 4-dim 24-cell has a 3-dim central polytope that is a cuboctahedron
the 12 vertices of which form the root vector polytope of the 16-dim U(2,,2) = U(1) x SU(2,2) , where 15-dim rank 3 SU(2,2) = Conformal Group Spin(2,4) produces Gravity by the MacDowell-Mansouri mechanism (see Rabindra Mohapatra, Unification and Supersymmetry (2nd edition, Springer-Verlag 1992), particularly section 14.6).
Since this group structure acts directly on the 8-dim Kaluza-Klein M4 x CP2, it acts on the associative part given by the associative 3-vector PSI of the dimensional reduction Quaternionic structure
(such as occurs due to dimensional reduction of physical spacetime from 8-dim Octonionic to 4-dim Quaternionic by freezing out (at energies lower than the Planck/GUT region) a Quaternionic substructure of 8-dim Octonionic vector space)
which is the spatial part of the M4, so that the M4 on which it acts has signature -+++
The U(1) of U(2,2) provides the complex phase of propagators.
### The Spin(8) whose root vector diagram is the vertices of the second 24-cell, living in the Cl(8) 6-vectors
The 28 6-vectors of Cl(8) correspond to a 28-dim rank 4 Spin(8) Lie algebra after introduction of Quaternionic structure into the E8 physics model
(such as occurs due to dimensional reduction of physical spacetime from 8-dim Octonionic to 4-dim Quaternionic by freezing out (at energies lower than the Planck/GUT region) a Quaternionic substructure of 8-dim Octonionic vector space )
by using the co-associative 4-vector PHI of the chosen Quaternionic structure to map any 6-vector A to a bivector A /\ PHI,
and so mapping the 28 6-vectors onto 28 bivectors that form a 28-dim Lie algebra.
The process is somewhat analagous to using a co-associative 4-vector PHI' in Cl(7) to define a cross-product in 7-dim vector space for vectors a, b (see F. Reese Harvey, Spinors and Calibrations (Academic 1990)) by
a x b = *((a /\ b) /\ PSI)
A stereo view of a 24-cell (the 4th dimension color-coded red-green-blue with green in the middle)
shows that the 4-dim 24-cell has a 3-dim central polytope that is a cuboctahedron
that is the root vector polytope of 15-dim rank 3 Spin(6) = SU(4) that includes 8+1 = 9-dim SU(3)xU(1) = U(3) in the Twistor construction of 6-dim CP3 = SU(4) / U(3)
Projection into a 2-dim space for the root vectors of the rank 2 group SU(3) gives
where the 6 purple vertices form the hexagonal root vector polygon of 8-dim rank 2 SU(3) and the 6 gold vertices correspond to the 6 dimensions of the CP3 Twistor space.
Introduction of a Quaternionic CP3 Twistor space "... induces a mapping of projective spaces CP3 -> QP1 ...[with]... fibres ... CP1 ..." (see R. O. Wells, Complex Geometry in Mathematical Physics (Les Presses de l'Universite de Montreal 1982), particularly section 2.6).
Since CP1 = SU(2) / U(1) an introduction of Quaternionic structure into the E8 physics model
(such as occurs due to dimensional reduction of physical spacetime from 8-dim Octonionic to 4-dim Quaternionic by freezing out (at energies lower than the Planck/GUT region) a Quaternionic substructure of 8-dim Octonionic vector space )
gives weak force SU(2) through QP1 = Sp(2)/ Sp(1)xSp(1) = Spin(5) / SU(2)xSU(2) or, equivalently, through CP3 containing CP2 = SU(3) / U(2) .
Since the U(1) of U(3) = SU(3) x U(1) is Abelian, it does not correspond to a root vector vertex and therefore does not appear in the root vector diagrams.
Since this group structure is produced by a co-associative 4-vector PHI, it acts on the co-associative part of 8-dim Kaluza-Klein M4 x CP2, which is the CP2 4-dim Internal Symmetry Space of signature ++++
As described by N. A. Batakis in Class. Quantum Grav. 3 (1986) L99-L105, the U(2) = SU(2) x U(1) acts on the CP2 as little group, or local isotropy group, while the SU(3) acts globally on the CP2 = SU(3) / U(2) = SU(3) / SU(2) x U(1)
### The product-of-vectors 64 = 8 x 8
With respect to the Cl(8) grading, the first 8 of the 8x8 = 64 is the vector space, and therefore is a natural 8-dim spacetime that after introduction of a preferred Quaternionic substructure
(such as occurs due to dimensional reduction of physical spacetime from 8-dim Octonionic to 4-dim Quaternionic by freezing out (at energies lower than the Planck/GUT region) a Quaternionic substructure of 8-dim Octonionic vector space)
becomes a 4-dim plus 4-dim Kaluza-Klein space of the form M4 x CP2 as described by N. A. Batakis in Class. Quantum Grav. 3 (1986) L99-L105,
The M4 of signature -+++ contains an associative 3-dim spatial structure, while the CP2 of signature ++++ has a co-associative 4-dim structure.
So, the first 8 of the 8x8 = 64, denoted by 8_v , represents 4+4 = 8-dim M4 x CP2 Kaluza-Klein space, where the compact CP2 is small.
As to the second 8 of the 8_v x 8,
it lives in the 7-vectors of the Cl(8) grading,
and it should represent the 8 Dirac Gammas of the Cl(8) Clifford algebra, so denote it by 8_G so that
### The 128 Spin(16) half-spinors 64 + 64
The 128 is the 128-dim rank 8 symmetric space E8 / Spin(16) of type EVIII known as Rosenfeld's octo-octonionic projective plane (OxO)P2 (see Arthur L. Besse, Einstein Manifolds (Springer 1987) and Boris Rosenfeld, Geometry of Lie Groups (Kluwer 1997)).
Since it is a plane (of 2 8x8 octo-octonionic dimensions), it has structure 128 = 64 + 64 = 8x8 + 8x8.
Since it is a half-spinor space (of Spin(16)) its elements are fundamentally fermionic, so
• one of the 8 in one of the two 8x8 = 64 should correspond to the 8 first-generation fermion particles (denote it by 8_f+)
• one of the 8 in the other of the two 8x8 = 64 should correspond to the 8 first-generation antiparticles (denote it by 8_f-)
As to the second 8 in the 8_f+ x 8 = 64 and the 8_f- x 8 = 64
it should represent the 8 Dirac Gammas of the Cl(8) Clifford algebra, so denote it by 8_G so that :
128 = 64 + 64 and
### the 64 = 8_f+ x 8_G describes the 8 first-generation fermion anti-particles and their connection to the Dirac Gammas
Note that these fermions are related to the 8-dim +half-spinor and -half-spinor representations of Spin(1,7), the Lorentz group for the 8-dim space of Cl(8), so that this physics model, based on E8 and Cl(8), satisfies the Coleman-Mandula theorem because, as Steven Weinberg says at pages 382-384 of his book The Quantum Theory of Fields, Vol. III (Cambridge 2000), the important thing about Coleman-Mandula is that fermions in a unified model must "... transform according to the fundamental spinor representations of the Lorentz group ... or, strictly speaking, of its covering group Spin(d-1,1). ..." where d is the dimension of spacetime in the model.
Note also that the fermion particles are fundamentally all left-handed, and the fermion antiparticles are fundamentally all right-handed. The other handednesses are not different fundamental states, but arise dynamically due to special relativity transformations that can switch handedness of particles that travel at less than light-speed (i.e., that have more than zero rest mass).
### Quaternionic Structure
At energies below the Planck/GUT level, the Octonionic structure of the model changes, by freezing out of a preferred Quaternionic substructure, from Real/Octonionic 8-dim spacetime to Quaternionic -+++ associative 4-dim M4 Physical Spacetime plus Quaternionic +++ co-associative 4-dim CP2 = SU(3) / SU(2) x U(1) Internal Symmetry Space.
After Quaternionic structure freezes out,
• 64 = 8_v x 8_G x 1_Real
• 64 = 8_f+ x 8_G x 1_Real
• 64 = 8_f+ x 8_G x 1_Real
transform from 8x8 real matrices to 4x4 Quaternionic matrices
• 64 = 4_v x 4_G x 4_Quaternion
• 64 = 4_f+ x 4_G x 4_Quaternion
• 64 = 4_f+ x 4_G x 4_Quaternion
As can be seen in this chart (from F. Reese Harvey, Spinors and Calibrations (Academic 1990))
The 16x16 = 256-dim Cl(8) = Cl(1,7) = M(16,R) = 16x16 Real Matrix Algebra
is transformed into
the 8x8x4 = 256-dim Cl(2,6) = M(8,Q) = 8x8 Quaternionic Matrix Algebra
and
the 8x8 = 64-dim Cl(6) = M(8,R) = 8x8 Real Matrix Algebra
is transformed into
the 4x4x4 = 64-dim Cl(2,4) = M(4,Q) = 4x4 Quaternionic Matrix Algebra
and
the 8-dim Real column vectors 8_v , 8_f+ , 8_f-
become
the 2-Quaternionic-dim (8-Real-dim) column vectors 2_Q_v , 2_Q_f+ , 2_Q_f-
and
the 8-dim Real row vectors 8_G
become
the 2-Quaternionic-dim (8-Real-dim) row vectors 2_Q_G
so that
### Triality
There is a Spin(8)-type Triality among the three 64 things
• 64 = 8_v x 8_G = 2_Q_v x 2_Q_G of Kaluza-Klein space
• 64 = 8_f+ x 8_G = 2_Q_f+ x 2_Q_G of first-generation fermion particles
• 64 = 8+f- x 8_G = 2_Q_f- x 2_Q_G of first-generation antiparticles
The model has:
• 16 gauge bosons for MacDowell-Mansouri Gravity plus a complex propagator phase and 12 Standard Model gauge bosons, for a total of 28 gauge bosons (which is also 28 = 8 /\ 8 the number of gauge bosons to be expected from 8-dim vector space)
• 8 types of fermions (the second and third generations being combinatorial combinations of first-generation fermions.
From the point of view of high-energy 8-dim space, in which gauge boson terms have dimension 1 in the Lagrangian and fermion terms have dimension 7/2 in the Lagrangian, the Triality gives a Subtle Supersymmetry
### Lagrangian
The natural Lagrangian for the model is
of
and
and
### This differs from conventional Gravity plus Standard Model in three respects:
• 1 - 8-dim base manifold
• 2 - no Higgs
• 3 - 1 generation of fermions
### Reduction to 4-dim base manifold and Higgs:
The objective is to reduce the integral over the 8-dim Kaluza-Klein M4 x CP2 to an integral over the 4-dim M4.
Since the U(2,2) acts on the M4, there is no problem with it.
Since the CP2 = SU(3) / U(2) has global SU(3) action, the SU(3) can be considered as a local gauge group acting on the M4, so there is no problem with it.
However, the U(2) acts on the CP2 = SU(3) / U(2) as little group, and so has local action on CP2 and then on M4, so the local action of U(2) on CP2 must be integrated out to get the desired U(2) local action directly on M4.
Since the U(1) part of U(2) = U(1) x SU(2) is Abelian, its local action on CP2 and then M4 can be composed to produce a single U(1) local action on M4, wo there is no problem with it.
That leaves non-Abelian SU(2) with local action on CP2 and then on M4, and the necessity to integrate out the local CP2 action to get something acting locally directly on M4. This is done by a mechanism due to Meinhard Mayer, The Geometry of Symmetry Breaking in Gauge Theories, Acta Physica Austriaca, Suppl. XXIII (1981) 477-490 where he says:
"... We start out from ... four-dimensional M [ M4 ] ...[and]... R ...[that is]... obtained from ... G/H [ CP2 = SU(3) / U(2) ] ... the physical surviving components of A and F, which we will denote by A and F, respectively, are a one-form and two form on M [M4] with values in H [SU(2)] ...the remaining components will be subjected to symmetry and gauge transformations, thus reducing the Yang-Mills action ...[on M4 x CP2]... to a Yang-Mills-Ginzburg-Landau action on M [M4] ... Consider the Yang-Mills action on R ...
S_YM = Integral Tr ( F /\ *F )
... We can ... split the curvature F into components along M [M4] (spacetime) and those along directions tangent to G/H [CP2] .
We denote the former components by F_!! and the latter by F_?? , whereas the mixed components (one along M, the other along G/H) will be denoted by F_!? ... Then the integrand ... becomes
Tr( F_!! F^!! + 2 F_!? F^!? + F_?? F^?? )
... The first term .. becomes the [SU(2)] Yang-Mills action for the reduced [SU(2)] Yang-Mills theory ...
the middle term .. becomes, symbolically, Tr Sum D_! PHI(?) D^! PHI(?) where PHI(?) is the Lie-algebra-valued 0-form corresponding to the invariance of A with respect tothe vector field ? , in the G/H [CP2] direction ...
the third term ... involves the contraction F_?? of F with two vector fields lying along G/H [CP2] ... we make use of the equation [from Mayer-Trautman, Acta Physica Austriaca, Suppl. XXIII (1981) 433-476, equation 6.18]
2 F_?? = [ PHI(?) , PHI(?) ] - PHI([?,?])
... Thus, the third term ... reduces to what is essentially a Ginzburg-Landau potential in the components of PHI:
Tr F_?? F^?? = (1/4) Tr ( [ PHI , PHI ] - PHI )^2
... special cases which were considered show that ...[the equation immediately above]... has indeed the properties required of a Ginzburg_Landau-Higgs potential, and moreover the relative signs of the quartic and quadratic terms are correct, and only one overall normalization constant ... is needed. ...".
(see also S. Kobayashi and K. Nomizu, Foundations of Differential Geometry, Volume I, Wiley (1963), especially section II.11)
So,
As to
### 3 Generations of Fermions:
At low (where we do experiments) energies a Quaternionic structure freezes out, splitting the 8-dim spacetime into a 4-dim physical spacetime M4 and a 4-dim internal symmetry space CP2.
First generation fermion particles are represented by octonions as follows:
```
Octonion Fermion
Basis Element Particle
1 e-neutrino
i red up quark
j green up quark
k blue up quark
e electron
ie red down quark
je green down quark
ke blue down quark
```
First generation fermion antiparticles are represented by octonions in a similiar way.
Second generation fermion particles and antiparticles are represented by pairs of octonions.
Third generation fermion particles and antiparticles are represented by triples of octonions.
There are no higher generations of fermions than the Third. This can be seen geometrically as a consequence of the fact that if you reduce the original 8-dimensional spacetime into associative 4-dime M4 physical spacetime and coassociative 4-dim CP2 Internal Symmetry Space then if you look in the original 8-dimensional spacetime at a fermion (First-generation represented by a single octonion) propagating from one vertex to another there are only 4 possibilities for the same propagation after dimensional reduction:
1 - the origin o and target x vertices are both in the associative 4-dimensional physical spacetime
```4-dim Internal Symmetry Space --------------
4-dim Physical SpaceTime ---o------x---
```
in which case the propagation is unchanged, and the fermion remains a FIRST generation fermion represented by a single octonion o
2 - the origin vertex o is in the associative spacetime and the target vertex * in in the Internal Symmetry Space
```4-dim Internal Symmetry Space ----------*---
4-dim Physical SpaceTime ---o----------```
in which case there must be a new link from the original target vertex * in the Internal Symmetry Space to a new target vertex x in the associative spacetime
```4-dim Internal Symmetry Space ----------*---
4-dim Physical SpaceTime ---o------x---
```
and a second octonion can be introduced at the original target vertex in connection with the new link so that the fermion can be regarded after dimensional reduction as a pair of octonions o and * and therefore as a SECOND generation fermion
3 - the target vertex x is in the associative spacetime and the origin vertex o in in the Internal Symmetry Space
```4-dim Internal Symmetry Space ---o----------
4-dim Physical SpaceTime ----------x---```
in which case there must be a new link to the original origin vertex o in the Internal Symmetry Space from a new origin vertex * in the associative spacetime
```4-dim Internal Symmetry Space ---o----------
4-dim Physical SpaceTime ---O------x---```
so that a second octonion can be introduced at the new origin vertex O in connection with the new link so that the fermion can be regarded after dimensional reduction as a pair of octonions o and o and therefore as a SECOND generation fermion
4 - both the origin vertex o and the target vertex * are in the Internal Symmetry Space,
```4-dim Internal Symmetry Space ---o------*---
4-dim Physical SpaceTime --------------```
in which case there must be a new link to the original origin vertex o in the Internal Symmetry Space from a new origin vertex O in the associative spacetime, and a second new link from the original target vertex * in the Internal Symmetry Space to a new target vertex x in the associative spacetime
```
4-dim Internal Symmetry Space ---o------*---
4-dim Physical SpaceTime ---O------x---```
so that a second octonion can be introduced at the new origin vertex O in connection with the first new link, and a third octonion can be introduced at the original target vertex * in connection with the second new link, so that the fermion can be regarded after dimensional reduction as a triple of octonions O and o and * and therefore as a THIRD generation fermion.
As there are no more possibilities, there are no more generations, and we have:
### Third generation fermions correspond to triples of octonions O x O x O
and
we now have a Lagrangian for the model
of
and
and
and
### that gives conventional Gravity plus Standard Model.
Path integrals give a Quantum theory via the classical Lagrangian set out above.
The Lagrangian set out above is only valid in a (possibly small) neighborhood of spacetime. To get a more global theory, the local Lagrangians must be patched together. To do that, look at it from a Cl(8) point of view, and consider that, using 8-periodicity of real Clifford algebras, taking tensor products of factors of Cl(8)
Cl(8) (x) ...(N times tensor product)... (x) Cl(8) = Cl(8N)
allows construction of arbitrarily large real Clifford algebras as composites of lots of local Cl(8) factors.
### By taking the completion of the union of all such Cl(8)-based tensor products, you get a generalized Real Hyperfinite II1 von Neumann Algebra factor that describes physics in terms of Algebraic Quantum Field Theory.
As to how to combine local Lagrangians in terms of E8, note that there are 7 independent Root Vector Polytopes / Lattices of type E8, denoted E8_1, E8_2, E8_3, E8_4, E8_5, E8_6, E8_7. Some of them have vertices in commmon, but they are all distinct.
All of the 7 independent Root Vector Polytope Lie algebras E8_i correspond to E8 Lattices consistent with Octonion Multiplication, and the the 7 Lie algebras / Lattices / Root Vector Polytopes E8_i are related to each other as the 7 Octonion imaginaries i,j,k,e,ie,je,ke , so the copies of E8 might combine according to the rules of octonion multiplication, globally arranging themselves like integral octonions.
If the 128 Spin(16) half-spinors are put on integral octonion vertices, and the 120-dim adjoint Spin(16) generators on links between integral octonion vertices, a realistic Spin Foam model might be produced, related to the copies of the 27-dimensional exceptional Jordan algebra contained in each E8.
Such a Spin Foam model might be related to the 26-dim Bosonic String model described in CERN preprint CERN-CDS-EXT-2004-031 in which fermions come from orbifolding and the 7 independent E8_i are used in constructing D8 branes.
## Given the E8 / Cl(8) model and its Lagrangian, how about Physics Calculations ?
### Tquark = 172-175 GeV and Higgs = 176-188 GeV
Michio Hashimoto, Masaharu Tanabashi, and Koichi Yamawaki in their paper at hep-ph/0311165 describe models with T-quark condensate for Higgs in 8-dimensional Kaluza-Klein spacetime with 4 compact dimensions, like M4 x CP2 of the E8 model, and calculate that
• Tquark = 172-175 GeV which is consistent with accepted experimental values
• Higgs = 176-188 GeV which is a prediction that might be tested by the LHC
Renormalization running up and down from that point on a plot of Higgs mass v. Tquark mass
shows that the point ( M_H = 176-188 , M_T = 172-175 ) is right on the Triviality Bound curve for as Standard Model with high-energy cut-off at the Planck energy 10^19 GeV (see hep-ph/0307138 ) and
• renormalization runs up to a critical point where the Triviality Bound curve intersects the Vacuum Stability curve around ( M_H = 239 , M_T = 220 ) and
• renormalization runs down to a point in the stable region around ( ( M_H = 143-160 , M_T = 130-145 )
There is not much data for a T-quark-Higgs state around ( M_H = 239 , M_T = 220 ), but perhaps the LHC might shed light on that.
As to a T-quark-Higgs state around ( M_H = 143-160 , M_T = 130-145 ) , it is not conventionally accepted that there is any evidence for such a state, but my opinion about data analysis is that there is such evidence. For example, the initial CDF and D0 histograms for semileptonic events
both independently show a tall narrow peak (green) in the 130-145 GeV range for the Tquark mass. Since mass calculations used in this E8 model had been done prior to those histograms, and had predicted a tree-level (about 10% or so accuracy) value of the Tquark mass of about 130 GeV, those independent CDF and D0 results indicate a probability around 4 sigma for M_T = 130-145 (see an entry on Tommaso Dorigo's blog around 5 September 2007).
In my opinion, recent results from CDF and D0 are still consistent with the existence of a Tquark-Higgs state around ( M_H = 143-160 , M_T = 130-145 ), but the consensus view is otherwise. However, I disagree with that consensus, based on how I see exeperimental data, such as:
Dilepton data described by Erich Ward Varnes in Chapter 8 of his 1997 UC Berkeley PhD thesis about D0 data at Fermilab:
"… there are six t-tbar candidate events in the dilepton final states … Three of the events contain three jets, and in these cases the results of the fits using only the leading two jets and using all combinations of three jets are given …".
There being only 6 dilepton events in Figure 8.1 of Varnes's PhD thesis
it is reasonable to discuss each of them, so (mass is roughly estimated by me looking at the histograms) here they are:
• Run 58796 Event 417 ( e mu ) - 2 jets - 160 GeV
• Run 90422 Event 26920 ( e mu ) - 2 jets - 170 GeV
• Run 88295 Event 30317 ( e e ) - 2 jets - 135 GeV
• Run 84676 Event 12814 ( e mu ) - more than 2 jets - 165 GeV - highest 2 jets - 135 GeV
• Run 95653 Event 10822 ( e e ) - more than 2 jets - 180 GeV - highest 2 jets - 170 GeV
• Run 84395 Event 15530 ( mu mu ) - more than 2 jets - 200 GeV - highest 2 jets - 165 GeV
In terms of 3 Truth Quark mass states - high around 220 GeV or so - medium around 170 GeV or so - low around 130-145 GeV or so - those look like:
• Run 58796 Event 417 ( e mu ) - direct 2-jet decay of medium
• Run 90422 Event 26920 ( e mu ) - direct 2-jet decay of medium
• Run 88295 Event 30317 ( e e ) - direct 2-jet decay of low
• Run 84676 Event 12814 ( e mu ) - decay of medium to low then 2-jet decay of low
• Run 95653 Event 10822 ( e e ) - direct 2-jet decay of medium with small background other jet
• Run 84395 Event 15530 ( mu mu ) - decay of high to medium then 2-jet decay of medium
This, and other more recent experimental subtleties ( see for example www.tony5m17h.net/ and other pages on my web site there ), reinforce my view:
In my opinion, recent results from CDF and D0 are still consistent with the existence of a Tquark-Higgs state around ( M_H = 143-160 , M_T = 130-145 ), but the consensus view is otherwise.
More details can be found on my web site at www.valdostamuseum.org/hamsmith/
### Force Strengths
The model Lagrangian (just looking at spacetime and gauge bosons and ignoring spinor fermions etc) is the integral over spacetime of gauge boson terms, so THE FORCE STRENGTH IS MADE UP OF TWO PARTS:
• the relevant spacetime manifold of gauge group global action
• the relevant symmetric space manifold of gauge group local action.
Ignoring for this exposition details about the 4-dim internal symmetry space, and ignoring conformal stuff (Higgs etc), the 4-dim spacetime Lagrangian gauge boson term is the integral over spacetime as seen by gauge boson acting globally of the gauge force term of the gauge boson acting locally for the gauge bosons of each of the four forces:
• U(1) for electromagnetism
• SU(2) for weak force
• SU(3) for color force
• Spin(5) - compact version of antiDeSitter Spin(2,3) for gravity by the MacDowell-Mansouri mechanism.
In the conventional Lagrangian picture, for each gauge force the gauge boson force term contains the force strength, which in Feynman's picture is the probability to emit a gauge boson, in either an explicit ( like g |F|^2 ) or an implicit ( incorporated into the |F|^2 ) form. Either way, the conventional picture is that the force strength g is an ad hoc inclusion.
What I am doing is to construct the integral such that the force strength emerges naturally from the geometry of each gauge force.
To do that, for each gauge force:
1 - make the spacetime over which the integral is taken be spacetime AS IT IS SEEN BY THAT GAUGE BOSON, that is, in terms of the symmetric space with GLOBAL symmetry of the gauge boson:
• the U(1) photon sees 4-dim spacetime as T^4 = S1 x S1 X S1 x S1
• the SU(2) weak boson sees 4-dim spacetime as S2 x S2
• the SU(3) weak boson sees 4-dim spacetime as CP2
• the Spin(5) of gravity sees 4-dim spacetime as S4.
2 - make the gauge boson force term have the volume of the Shilov boundary corresponding to the symmetric space with LOCAL symmetry of the gauge boson. The nontrivial Shilov boundaries are:
• for SU(2) Shilov = RP^1xS^2
• for SU(3) Shilov = S^5
• for Spin(5) Shilov = RP^1xS^4
The result is (ignoring technicalities for exposition) the geometric factor for force strength calculation.
GLOBAL: Each gauge group is the global symmetry of a symmetric space
• S1 for U(1)
• S2 = SU(2)/U(1) = Spin(3)/Spin(2) for SU(2)
• CP2 = SU(3)/SU(2)xU(1) for SU(3)
• S4 = Spin(5)/Spin(4) for Spin(5)
LOCAL: Each gauge group is the local symmetry of a symmetric space
• U(1) for itself
• SU(2) for Spin(5) / SU(2)xU(1)
• SU(3) for SU(4) / SU(3)xU(1)
• Spin(5) for Spin(7) / Spin(5)xU(1)
The nontrivial local symmetry symmetric spaces correspond to bounded complex domains
• SU(2) for Spin(5) / SU(2)xU(1) corresponds to IV3
• SU(3) for SU(4) / SU(3)xU(1) corresponds to B^6 (ball)
• Spin(5) for Spin(7) / Spin(5)xU(1) corresponds to IV5
The nontrivial bounded complex domains have Shilov boundaries
• SU(2) for Spin(5) / SU(2)xU(1) corresponds to IV3 Shilov = RP^1xS^2
• SU(3) for SU(4) / SU(3)xU(1) corresponds to B^6 (ball) Shilov = S^5
• Spin(5) for Spin(7) / Spin(5)xU(1) corresponds to IV5 Shilov = RP^1xS^4
GLOBAL AND LOCAL TOGETHER: Very roughly think of the force strength as
• the integral over the global symmetry space of
• the physical (ie Shilov Boundary) volume=strength of the force.
That is (again very roughly and intuitively): the geometric strength of the force is given by the product of
• the volume of a 4-dim thing with global symmetry of the force and
• the volume of the Shilov Boundary for the local symmetry of the force.
When you calculate the product volumes (using some normalizations etc that are described in more detail here below ), you see that roughly:
Volume product for gravity is the largest volume
so since (as Feynman says) force strength = probability to emit a gauge boson means that the highest force strength or probability should be 1
I normalize the gravity Volume product to be 1, and get results roughly ( for example, the fine structrure constant calculation gives 1/137.03608 but is rounded off here as 1/137 ):
• Volume product for gravity = 1
• Volume product for color = 2/3
• Volume product for weak = 1/4
• Volume product for electromagnetism = 1/137
There are two further main components of a force strength:
• 1 - for massive gauge bosons, a suppression by a factor of 1 / M^2
• 2 - renormalization running (important for color force).
CONSIDER MASSIVE GAUGE BOSONS: I consider gravity to be carried by virtual Planck-mass black holes, so that the geometric strength of gravity should be reduced by 1/Mp^2 and I consider the weak force to be carried by weak bosons, so that the geometric strength of gravity should be reduced by 1/MW^2 That gives the result:
• gravity strength = G (Newton's G)
• color strength = 2/3
• weak strength = G_F (Fermi's weak force G)
• electromagnetism = 1/137
FINALLY, CONSIDER RENORMALIZATION RUNNING FOR THE COLOR FORCE: That gives the result:
• gravity strength = G (Newton's G)
• color strength = 1/10 at weak boson mass scale
• weak strength = G_F (Fermi's weak force G)
• electromagnetism = 1/137
The use of compact volumes is itself a calculational device, because it would be more nearly correct, instead of
• the integral over the compact global symmetry space of
• the compact physical (ie Shilov Boundary) volume=strength of the force
to use
• the integral over the hyperbolic spacetime global symmetry space of
• the noncompact invariant measure of the gauge force term.
However, since the strongest (gravitation) geometric force strength is to be normalized to 1, the only thing that matters is RATIOS, and the compact volumes (finite and easy to look up in the book by Hua) have the same ratios as the noncompact invariant measures.
In fact, I should go on to say that continuous spacetime and gauge force geometric objects are themselves also calculational devices, and
that it would be even more nearly correct to do the calculations with respect to a discrete generalized hyperdiamond Feynman checkerboard.
Some of this material was written in connection with email discussion with Ark Jadczyk. More details can be found on my web site at www.valdostamuseum.org/hamsmith/
Carlos Castro and others have also done substantial work on similar geometric approaches ( motivated at least in part by earlier work by Armand Wyler ) to calculating force strengths. See references at
www.valdostamuseum.org/hamsmith/wfKaluzaKlein.html
Here are more details about the force strength calculations:
```The force strength of a given force is
alphaforce = (1 / Mforce^2 \)
( Vol(MISforce))
( Vol(Qforce) / Vol(Dforce)^( 1 / mforce ))
where:
alphaforce represents the force strength;
Mforce represents the effective mass;
MISforce represents the part of the target
Internal Symmetry Space that is available for the gauge
boson to go to;
Vol(MISforce) stands for volume of MISforce,
and is sometimes also denoted by the shorter notation Vol(M);
Qforce represents the link from the origin
to the target that is available for the gauge
boson to go through;
Vol(Qforce) stands for volume of Qforce;
Dforce represents the complex bounded homogeneous domain
of which Qforce is the Shilov boundary;
mforce is the dimensionality of Qforce,
which is 4 for Gravity and the Color force,
2 for the Weak force (which therefore is considered to
have two copies of QW for each spacetime HyperDiamond link),
and 1 for Electromagnetism (which therefore is considered to
have four copies of QE for each spacetime HyperDiamond link)
Vol(Dforce)^( 1 / mforce ) stands for
a dimensional normalization factor (to reconcile the dimensionality
of the Internal Symmetry Space of the target vertex
with the dimensionality of the link from the origin to the
target vertex).
The Qforce, Hermitian symmetric space,
and Dforce manifolds for the four forces are:
Gauge Hermitian Type mforce Qforce
Group Symmetric of
Space Dforce
Spin(5) Spin(7) / Spin(5)xU(1) IV5 4 RP^1xS^4
SU(3) SU(4) / SU(3)xU(1) B^6(ball) 4 S^5
SU(2) Spin(5) / SU(2)xU(1) IV3 2 RP^1xS^2
U(1) - - 1 -
```
The geometric volumes needed for the calculations are mostly taken from the book Harmonic Analysis of Functions of Several Complex Variables in the Classical Domains (AMS 1963, Moskva 1959, Science Press Peking 1958) by L. K. Hua [with unit radius scale].
Note ( thanks to Carlos Castro for noticing this ) that the volume lisrted for S5 is for a squashed S5, a Shilov boundary of the complex domain corresponding to the symmetric space SU(4) / SU(3) x U(1).
Note ( thanks again to Carlos Castro for noticing this ) also that the volume listed for CP2 is unconventional, but physically justified by noting that S4 and CP2 can be seen as having the same physical volume, with the only difference being structure at infinity.
Note also that
```Force M Vol(M)
gravity S^4 8pi^2/3 - S^4 is 4-dimensional
color CP^2 8pi^2/3 - CP^2 is 4-dimensional
weak S^2 x S^2 2 x 4pi - S^2 is a 2-dim boundary of 3-dim ball
4-dim S^2 x S^2 =
= topological boundary of 6-dim 2-polyball
Shilov Boundary of 6-dim 2-polyball = S^2 + S^2 =
= 2-dim surface frame of 4-dim S^2 x S^
e-mag T^4 4 x 2pi - S^1 is 1-dim boundary of 2-dim disk
4-dim T^4 = S^1 x S^1 x S^1 x S^1 =
= topological boundary of 8-dim 4-polydisk
Shilov Boundary of 8-dim 4-polydisk =
= S^1 + S^1 + S^1 + S^1 =
= 1-dim wire frame of 4-dim T^4```
Also note that for U(1) electromagnetism, whose photon carries no charge, the factors Vol(Q) and Vol(D) do not apply and are set equal to 1, and from another point of view, the link manifold to the target vertex is trivial for the abelian neutral U(1) photons of Electromagnetism, so we take QE and DE to be equal to unity.
```
Force M Vol(M) Q Vol(Q) D Vol(D)
gravity S^4 8pi^2/3 RP^1xS^4 8pi^3/3 IV5 pi^5/2^4 5!
color CP^2 8pi^2/3 S^5 4pi^3 B^6(ball) pi^3/6
weak S^2xS^2 2x4pi RP^1xS^2 4pi^2 IV3 pi^3/24
e-mag T^4 4x2pi - - - -
Using these numbers, the results of the
calculations are the relative force strengths
at the characteristic energy level of the
generalized Bohr radius of each force:
Gauge Force Characteristic Geometric Total
Group Energy Force Force
Strength Strength
Spin(5) gravity approx 10^19 GeV 1 GGmproton^2
approx 5 x 10^-39
SU(3) color approx 245 MeV 0.6286 0.6286
SU(2) weak approx 100 GeV 0.2535 GWmproton^2
approx 1.05 x 10^-5
U(1) e-mag approx 4 KeV 1/137.03608 1/137.03608
The force strengths are given at the characteristic
energy levels of their forces, because the force
strengths run with changing energy levels.
The effect is particularly pronounced with the color
force.
The color force strength was calculated using a simple
perturbative QCD renormalization group equation
at various energies, with the following results:
Energy Level Color Force Strength
245 MeV 0.6286
5.3 GeV 0.166
34 GeV 0.121
91 GeV 0.106
Taking other effects, such as Nonperturbative QCD,
into account, should give
a Color Force Strength of about 0.125 at about 91 GeV```
### Fermion Particle Masses
The E8 model Lagrangian (for this message just looking at spacetime and spinor fermions and ignoring gauge bosons etc) has
an Integral over 8-dim spacetime of a spinor fermion particle and antiparticle term,
in which first-generation fermion particles correspond to octonion basis elements
• 1 to e-neutrino
• i to red up quark
• j to green up quark
• k to blue up quark
• e to electron
• ie to red down quark
• je to green down quark
• ke to blue down quark
and first-generation fermion antiparticles correspond to octonion basis elements
• 1 to e-antineutrino
• i to red up antiquark
• j to green up antiquark
• k to blue up antiquark
• e to positron
• ie to red down antiquark
• je to green down antiquark
• ke to blue down antiquark
At low (where we do experiments) energies a specific quaternionic submanifold freezes out, splitting the 8-dim spacetime into a 4-dim M4 physical spacetime plus a 4-dim CP2 internal symmetry space and creating second and third generation fermions that live (at least in part) in the 4-dim CP2 internal symmetry space and correspond respectively to pairs and triples of octonion basis elements.
Ignoring for this exposition details about the 4-dim CP2 internal symmetry space, and ignoring conformal stuff (Higgs etc), and considering for now only first generation fermions, the 4-dim spacetime Lagrangian spinor fermion part is:
• integral over spacetime of
• spinor fermion particle and antiparticle term
In the conventional picture, the spinor fermion term is of the form m S S* where m is the fermion mass and S and S* represent the given fermion. Although the mass m is derived from the Higgs mechanism, the Higgs coupling constants are, in the conventional picture, ad hoc parameters, so that effectively the mass term is, in the conventional picuture, an ad hoc inclusion.
What I am doing is to NOT put in the mass m as an ad hoc Higgs coupling value,
but to construct the integral such that the mass m emerges naturally from the geometry of the spinor fermions.
To do that, make the spinor fermion mass term have the volume of the Shilov boundary corresponding to the symmetric space with LOCAL symmetry of the Spin(8) gauge group with respect to which the first generation spinor fermions can be seen as +half-spinor and -half-spinor spaces.
Note that due to triality, Spin(8) can act on those 8-dimensional half-spinor spaces similarly to the way it acts on 8-dimensional vector spacetime prior to dimensional reduction.
Then, take the the spinor fermion volume to be the Shilov boundary corresponding to the same symmetric space on which Spin(8) acts as a local gauge group that is used to construct 8-dimensional vector spacetime:
the symmetric space Spin(10) / Spin(8)xU(1) corresponds to a bounded domain of type IV8 whose Shilov boundary is RP^1 x S^7
Since all the first generation fermions see the spacetime over which the integral is taken in the same way ( unlike what happens for the force strength calculation ), the only geometric volume factor relevant for calculating first generation fermion mass ratios is in the spinor fermion volume term.
Since the physcally observed fermions in this model correspond to Kerr-Newman Black Holes, the quark mass in this model is a constituent mass.
Consider a first-generation massive lepton (or antilepton, i.e., electron or positron). For definiteness, consider an electron E (a similar line of reasoning applies to the positron).
• Gluon interactions do not affect the colorless electron ( E )
• By weak boson interactions or decay, an electron ( E ) can only be taken into itself or a massless ( at tree level ) neutrino.
• As the lightest massive first-generation fermion, the electron cannot decay into a quark.
Since the electron cannot be related to any other massive Dirac fermion, its volume V(electron) is taken to be 1.
Consider a first-generation quark (or antiquark). For definiteness, consider a red down quark I (a similar line of reasoning applies to the others of the first generation).
• By gluon interactions, the red quark ( I ) can be interchanged with the blue and green down quarks ( J and K ).
• By weak boson interactions, it can be taken into the red, blue, and green up quarks ( i, j, and k ).
• Given the up and down quarks, pions can be formed from quark-antiquark pairs, and the pions can decay to produce electrons ( E ) and neutrinos ( 1 ).
Therefore first-generation quarks or antiquarks can by gluons, weak bosons, or decay occupy the entire volume of the Shilov boundary RP1 x S7, which volume is pi^5 / 3, so its volume V(quark) is taken to be pi^5 / 3.
Consider graviton interactions with first-generation fermions.
MacDowell-Mansouri gravitation comes from 10 Spin(5) gauge bosons, 8 of which are charged (carrying color or electric charge).
2 of the charged Spin(5) gravitons carry electric charge. However, even though the electron carries electric charge, the electric charge carrying Spin(5) gravitons can only change the electron into a ( tree-level ) massless neutrino, so the Spin(5) gravitons do not enhance the electron volume factor, which remains
electron volume (taking gravitons into account) = V(electron) = 1
6 of the charged Spin(5) gravitons carry color charge, and their action on quarks (which carry color charge) multiplies the quark volume V(quark) by 6, giving
quark gravity-enhanced volume = 6 x V(quark) = 6 pi^5 / 3 = 2 pi^5
The 2 Spin(5) gravitons carrying electric charge only cannot change quarks into leptons, so they do not enhance the quark volume factor, so we have (where md is down quark mass, mu is up quark mass, and me is electron mass)
md / me = mu / me = 2 pi^5 / 1 = 2 pi^5 = 612.03937
The proton mass is calculated as the sum of the constituent masses of its constituent quarks
mproton = mu + mu + md = 938.25 MeV
which is close to the experimental value of 938.27 MeV.
In the first generation, each quark corresponds to a single octonion basis element and the up and down quark constituent masses are the same:
First Generation - 8 singletons - mu / md = 1
• Down - corresponds to 1 singleton - constituent mass 312 MeV
• Up - corresponds to 1 singleton - constituent mass 312 MeV
Second and third generation calculations are generally more complicated ( some details are given here below ) with combinatorics indicating that in higher generations the up-type quarks are heavier than the down-type quarks. The third generation case, in which the fermions correspond to triples of octonions, is simple enough to be used in this expository overview as an illustration of the combinatoric effect:
Third Generation
8^3 = 512 triples
mt / mb = 483 / 21 = 161 / 7 = 23
• down-type (Beauty) - corresponds to 21 triples -tree-level constituent mass 5.65 GeV
• up-type (Truth) - corresponds to 483 triples - tree-level constituent mass 130 GeV
Here is a summary of the results of calculations of tree-level fermion masses (quark masses are constituent masses):
• Me-neutrino = Mmu-neutrino = Mtau-neutrino = 0 at tree-level ( first order corrected masses are given HERE )
• Me = 0.5110 MeV
• Md = Mu = 312.8 MeV
• Mmu = 104.8 MeV
• Ms = 625 MeV
• Mc = 2.09 GeV
• Mtau = 1.88 GeV
• Mb = 5.63 GeV
• Mt = 130 GeV
The use of compact volumes is itself a calculational device, because it would be more nearly correct, instead of
• the integral over the compact global symmetry space of
• the compact physical (ie Shilov Boundary) volume=strength of the force
to use
• the integral over the hyperbolic spacetime global symmetry space of
• the noncompact invariant measure of the gauge force term.
However, since the strongest (gravitation) geometric force strength is to be normalized to 1, the only thing that matters is RATIOS, and the compact volumes (finite and easy to look up in the book by Hua) have the same ratios as the noncompact invariant measures.
In fact, I should go on to say that continuous spacetime and gauge force geometric objects are themselves also calculational devices, and
that it would be even more nearly correct to do the calculations with respect to a discrete generalized hyperdiamond Feynman checkerboard.
Some of this material was written in connection with email discussion with Ark Jadczyk. More details can be found on my web site at www.valdostamuseum.org/hamsmith/
Here are more details about the fermion mass calculations:
Fermion masses are calculated as a product of four factors: V(Qfermion) x N(Graviton) x N(octonion) x Sym
• V(Qfermion) is the volume of the part of the half-spinor fermion particle manifold S^7 x RP^1 that is related to the fermion particle by photon, weak boson, and gluon interactions.
• N(Graviton) is the number of types of Spin(0,5) graviton related to the fermion. The 10 gravitons correspond to the 10 infinitesimal generators of Spin(0,5) = Sp(2). 2 of them are in the Cartan subalgebra. 6 of them carry color charge, and may therefore be considered as corresponding to quarks. The remaining 2 carry no color charge, but may carry electric charge and so may be considered as corresponding to electrons. One graviton takes the electron into itself, and the other can only take the first-generation electron into the massless electron neutrino. Therefore only one graviton should correspond to the mass of the first-generation electron. The graviton number ratio of the down quark to the first-generation electron is therefore 6/1 = 6.
• N(octonion) is an octonion number factor relating up-type quark masses to down-type quark masses in each generation.
• Sym is an internal symmetry factor, relating 2nd and 3rd generation massive leptons to first generation fermions. It is not used in first-generation calculations.
The ratio of the down quark constituent mass to the electron mass is then calculated as follows:
Consider the electron, e. By photon, weak boson, and gluon interactions, e can only be taken into 1, the massless neutrino. The electron and neutrino, or their antiparticles, cannot be combined to produce any of the massive up or down quarks. The neutrino, being massless at tree level, does not add anything to the mass formula for the electron. Since the electron cannot be related to any other massive Dirac fermion, its volume V(Qelectron) is taken to be 1.
Next consider a red down quark ie. By gluon interactions, ie can be taken into je and ke, the blue and green down quarks. By also using weak boson interactions, it can be taken into i, j, and k, the red, blue, and green up quarks.
Given the up and down quarks, pions can be formed from quark-antiquark pairs, and the pions can decay to produce electrons and neutrinos.
Therefore the red down quark (similarly, any down quark) is related to any part of S^7 x RP^1, the compact manifold corresponding to { 1, i, j, k, ie, ie, ke, e } and therefore a down quark should have a spinor manifold volume factor V(Qdown quark) of the volume of S^7 x RP^1.
The ratio of the down quark spinor manifold volume factor tothe electron spinor manifold volume factor is just
V(Qdown quark) / V(Qelectron) = V(S^7x RP^1)/1 = pi^5 / 3.
Since the first generation graviton factor is 6,
md/me = 6V(S^7 x RP^1) = 2 pi^5 = 612.03937
As the up quarks correspond to i, j, and k, which are the octonion transforms under e of ie, je, and ke of the down quarks, the up quarks and down quarks have the same constituent mass
mu = md.
Antiparticles have the same mass as the corresponding particles.
Since the model only gives ratios of massses, the mass scale is fixed so that the electron mass me = 0.5110 MeV.
Then, the constituent mass of the down quark is md = 312.75 MeV,
and the constituent mass for the up quark is mu = 312.75 MeV.
These results when added up give a total mass of first generation fermion particles:
Sigmaf1 = 1.877 GeV
As the proton mass is taken to be the sum of the constituent masses of its constituent quarks
mproton = mu + mu + md = 938.25 MeV
The theoretical calculation is close to the experimental value of 938.27 MeV.
The third generation fermion particles correspond to triples of octonions. There are 8^3 = 512 such triples.
The triple { 1,1,1 } corresponds to the tau-neutrino.
The other 7 triples involving only 1 and e correspond
to the tauon:
• { e ,e, e }
• { e, e, 1 }
• { e, 1, e }
• { 1, e, e }
• { 1, 1, e }
• { 1, e, 1 }
• { e, 1, 1 }
The symmetry of the 7 tauon triples is the same as the symmetry of the 3 down quarks, the 3 up quarks, and the electron, so the tauon mass should be the same as the sum of the masses of the first generation massive fermion particles. Therefore the tauon mass is calculated at tree level as 1.877 GeV.
The calculated Tauon mass of 1.88 GeV is a sum of first generation fermion masses, all of which are valid at the energy level of about 1 GeV.
However, as the Tauon mass is about 2 GeV, the effective Tauon mass should be renormalized from the energy level of 1 GeV (where the mass is 1.88 GeV) to the energy level of 2 GeV. Such a renormalization should reduce the mass. If the renormalization reduction were about 5 percent,
the effective Tauon mass at 2 GeV would be about 1.78 GeV.
The 1996 Particle Data Group Review of Particle Physics gives a Tauon mass of 1.777 GeV.
Note that all triples corresponding to the tau and the tau-neutrino are colorless.
The beauty quark corresponds to 21 triples.
They are triples of the same form as the 7 tauon triples, but for 1 and ie, 1 and je, and 1 and ke, which correspond to the red, green, and blue beauty quarks, respectively.
The seven triples of the red beauty quark correspond to the seven triples of the tauon, except that the beauty quark interacts with 6 Spin(0,5) gravitons while the tauon interacts with only two.
The beauty quark constituent mass should be the tauon mass times the third generation graviton factor 6/2 = 3, so the B-quark mass is
mb = 5.63111 GeV.
The calculated Beauty Quark mass of 5.63 GeV is a consitituent mass, that is, it corresponds to the conventional pole mass plus 312.8 MeV.
Therefore, the calculated Beauty Quark mass of 5.63 GeV corresponds to a conventional pole mass of 5.32 GeV.
The 1996 Particle Data Group Review of Particle Physics gives a lattice gauge theory Beauty Quark pole mass as 5.0 GeV.
The pole mass can be converted to an MSbar mass if the color force strength constant alpha_s is known. The conventional value of alpha_s at about 5 GeV is about 0.22. Using alpha_s (5 GeV) = 0.22, a pole mass of 5.0 GeV gives an MSbar 1-loop Beauty Quark mass of 4.6 GeV, and
an MSbar 1,2-loop Beauty Quark mass of 4.3, evaluated at about 5 GeV.
If the MSbar mass is run from 5 GeV up to 90 GeV, the MSbar mass decreases by about 1.3 GeV, giving an expected MSbar mass of about 3.0 GeV at 90 GeV.
DELPHI at LEP has observed the Beauty Quark and found a 90 GeV MSbar Beauty Quark mass of about 2.67 GeV, with error bars +/- 0.25 (stat) +/- 0.34 (frag) +/- 0.27 (theo).
Note that the theoretical model calculated mass of 5.63 GeV corresponds to a pole mass of 5.32 GeV, which is somewhat higher than the conventional value of 5.0 GeV. However, the theoretical model calculated value of the color force strength constant alpha_s at about 5 GeV is about 0.166, while the conventional value of the color force strength constant alpha_s at about 5 GeV is about 0.216, and the theoretical model calculated value of the color force strength constant alpha_s at about 90 GeV is about 0.106, while the conventional value of the color force strength constant alpha_s at about 90 GeV is about 0.118.
The theoretical model calculations gives a Beauty Quark pole mass (5.3 GeV) that is about 6 percent higher than the conventional Beauty Quark pole mass (5.0 GeV), and a color force strength alpha_s at 5 GeV (0.166) such that 1 + alpha_s = 1.166 is about 4 percent lower than the conventional value of 1 + alpha_s = 1.216 at 5 GeV.
Note particularly that triples of the type { 1, ie, je } , { ie, je, ke }, etc., do not correspond to the beauty quark, but to the truth quark.
The truth quark corresponds to the remaining 483 triples, so the constituent mass of the red truth quark is 161/7 = 23 times the red beauty quark mass, and the red T-quark mass is
mt = 129.5155 GeV
The blue and green truth quarks are defined similarly.
All other masses than the electron mass (which is the basis of the assumption of the value of the Higgs scalar field vacuum expectation value v = 252.514 GeV), including the Higgs scalar mass and Truth quark mass, are calculated (not assumed) masses in the E8 model.
These results when added up give a total mass of third generation fermion particles:
Sigmaf3 = 1,629 GeV
The second generation fermion particles correspond to pairs of octonions.
There are 8^2 = 64 such pairs. The pair { 1,1 } corresponds to the mu-neutrino. The pairs { 1, e }, { e, 1 }, and { e, e } correspond to the muon.
Compare the symmetries of the muon pairs to the symmetries of the first generation fermion particles.
The pair { e, e } should correspond to the e electron.
The other two muon pairs have a symmetry group S2, which is 1/3 the size of the color symmetry group S3 which gives the up and down quarks their mass of 312.75 MeV.
Therefore the mass of the muon should be the sum of
• the { e, e } electron mass and
• the { 1, e }, { e, 1 } symmetry mass, which is 1/3 of the up or down quark mass.
Therefore, mmu = 104.76 MeV .
According to the 1998 Review of Particle Physics of the Particle Data Group, the experimental muon mass is about 105.66 MeV.
Note that all pairs corresponding to the muon and the mu-neutrino are colorless.
The red, blue and green strange quark each corresponds to the 3 pairs involving 1 and ie, je, or ke.
The red strange quark is defined as the three pairs 1 and i, because i is the red down quark.Its mass should be the sum of two parts:
• the { i, i } red down quark mass, 312.75 MeV, and
• the product of the symmetry part of the muon mass, 104.25 MeV, times the graviton factor.
Unlike the first generation situation, massive second and third generation leptons can be taken, by both of the colorless gravitons that may carry electric charge, into massive particles. Therefore the graviton factor for the second and third generations is 6/2 = 3.
Therefore the symmetry part of the muon mass times the graviton factor 3 is 312.75 MeV, and the red strange quark constituent mass is
ms = 312.75 MeV + 312.75 MeV = 625.5 MeV
The blue strange quarks correspond to the three pairs involving j, the green strange quarks correspond to the three pairs involving k, and their masses are determined similarly.
The charm quark corresponds to the other 51 pairs. Therefore, the mass of the red charm quark should be the sum of two parts:
• the { i, i }, red up quark mass, 312.75 MeV; and
• the product of the symmetry part of the strange quark mass, 312.75 MeV, and the charm to strange octonion number factor 51/9, which product is 1,772.25 MeV.
Therefore the red charm quark constituent mass is
mc = 312.75 MeV + 1,772.25 MeV = 2.085 GeV
The blue and green charm quarks are defined similarly, and their masses are calculated similarly.
The calculated Charm Quark mass of 2.09 GeV is a consitituent mass, that is, it corresponds to the conventional pole mass plus 312.8 MeV.
Therefore, the calculated Charm Quark mass of 2.09 GeV corresponds to a conventional pole mass of 1.78 GeV.
The 1996 Particle Data Group Review of Particle Physics gives a range for the Charm Quark pole mass from 1.2 to 1.9 GeV.
The pole mass can be converted to an MSbar mass if the color force strength constant alpha_s is known. The conventional value of alpha_s at about 2 GeV is about 0.39, which is somewhat lower than the teoretical model value. Using alpha_s (2 GeV) = 0.39, a pole mass of 1.9 GeV gives an MSbar 1-loop mass of 1.6 GeV, evaluated at about 2 GeV.
These results when added up give a total mass of second generation fermion particles:
Sigmaf2 = 32.9 GeV
### Higgs and W-boson Masses
As with forces strengths, the calculations produce ratios of masses, so that only one mass need be chosen to set the mass scale.
In the E8 model, the value of the fundamental mass scale vacuum expectation value v = <PHI> of the Higgs scalar field is set to be the sum of the physical masses of the weak bosons, W+, W-, and Z0,
whose tree-level masses will then be shown by ratio calculations to be 80.326 GeV, 80.326 GeV, and 91.862 GeV, respectively,
and so that the electron mass will then be 0.5110 MeV.
The relationship between the Higgs mass and v is given by the Ginzburg-Landau term from the Mayer Mechanism as
(1/4) Tr ( [ PHI , PHI ] - PHI )^2
or, in the notation of hep-ph/9806009 by Guang-jiong Ni
(1/4!) lambda PHI^4 - (1/2) sigma PHI^2
where the Higgs mass M_H = sqrt( 2 sigma )
Ni says: "... the invariant meaning of the constant lambda in the Lagrangian is not the coupling constant, the latter will change after quantization ... The invariant meaning of lambda is nothing but the ratio of two mass scales:
lambda = 3 ( M_H / PHI )^2
which remains unchanged irrespective of the order ...".
Since <PHI>^2 = v^2, and assuming at tree-level that lambda = 1 ( a value consistent with the Higgs Tquark condensate model of Michio Hashimoto, Masaharu Tanabashi, and Koichi Yamawaki in their paper at hep-ph/0311165 ), we have, at tree-level
M_H^2 / v^2 = 1 / 3
In the E8 model, the fundamental mass scale vacuum expectation value v of the Higgs scalar field is the fundamental mass parameter that is to be set to define all other masses by the mass ratio formulas of the model and
so that
### M_H = v /sqrt(3) = 145.789 GeV
To get W-boson masses, denote the 3 SU(2) high-energy weak bosons (massless at energies higher than the electroweak unification) by W+, W-, and W0, corresponding to the massive physical weak bosons W+, W-, and Z0.
The triplet { W+, W-, W0 } couples directly with the T - Tbar quark-antiquark pair, so that the total mass of the triplet { W+, W-, W0 } at the electroweak unification is equal to the total mass of a T - Tbar pair, 259.031 GeV.
The triplet { W+, W-, Z0 } couples directly with the Higgs scalar, which carries the Higgs mechanism by which the W0 becomes the physical Z0, so that the total mass of the triplet { W+, W-, Z0 } is equal to the vacuum expectation value v of the Higgs scalar field, v = 252.514 GeV.
What are individual masses of members of the triplet { W+, W-, Z0 } ?
First, look at the triplet { W+, W-, W0 } which can be represented by the 3-sphere S^3. The Hopf fibration of S^3 as
S^1 --> S^3 --> S^2
gives a decomposition of the W bosons into the neutral W0 corresponding to S^1 and the charged pair W+ and W- corresponding to S^2.
The mass ratio of the sum of the masses of W+ and W- to the mass of W0 should be the volume ratio of the S^2 in S^3 to the S^1 in S3.
• The unit sphere S^3 in R^4 is normalized by 1 / 2.
• The unit sphere S^2 in R^3 is normalized by 1 / sqrt( 3 ).
• The unit sphere S^1 in R^2 is normalized by 1 / sqrt( 2 ).
The ratio of the sum of the W+ and W- masses to the W0 mass should then be
(2 / sqrt3) V(S^2) / (2 / sqrt2) V(S^1) = 1.632993
Since the total mass of the triplet { W+, W-, W0 } is 259.031 GeV, the total mass of a T - Tbar pair, and the charged weak bosons have equal mass, we have
M_W+ = M_W- = 80.326 GeV and M_W0 = 98.379 GeV.
The charged W+/- neutrino-electron interchange must be symmetric with the electron-neutrino interchange, so that the absence of right-handed neutrino particles requires that the charged W+/- SU(2) weak bosons act only on left-handed electrons.
Each gauge boson must act consistently on the entire Dirac fermion particle sector, so that the charged W+/- SU(2) weak bosons act only on left-handed fermion particles of all types.
The neutral W0 weak boson does not interchange Weyl neutrinos with Dirac fermions, and so is not restricted to left-handed fermions, but also has a component that acts on both types of fermions, both left-handed and right-handed, conserving parity.
However, the neutral W0 weak bosons are related to the charged W+/- weak bosons by custodial SU(2) symmetry, so that the left-handed component of the neutral W0 must be equal to the left-handed (entire) component of the charged W+/-.
Since the mass of the W0 is greater than the mass of the W+/-, there remains for the W0 a component acting on both types of fermions.
Therefore the full W0 neutral weak boson interaction is proportional to (M_W+/-^2 / M_W0^2) acting on left-handed fermions and
(1 - (M_W+/-^2 / M_W0^2)) acting on both types of fermions.
If (1 - (M_W+/-2 / M_W0^2)) is defined to be sin( theta_w )^2 and denoted by K,
and if the strength of the W+/- charged weak force (and of the custodial SU(2) symmetry) is denoted by T,
then the W0 neutral weak interaction can be written as W0L = T + K and W0LR = K.
Since the W0 acts as W0L with respect to the parity violating SU(2) weak force
and as W0LR with respect to the parity conserving U(1) electromagnetic force of the U(1) subgroup of SU(2), the W0 mass mW0 has two components:
the parity violating SU(2) part mW0L that is equal to M_W+/-
the parity conserving part M_W0LR that acts like a heavy photon.
As M_W0 = 98.379 GeV = M_W0L + M_W0LR, and as M_W0L = M_W+/- = 80.326 GeV, we have M_W0LR = 18.053 GeV.
Denote by *alphaE = *e^2 the force strength of the weak parity conserving U(1) electromagnetic type force that acts through the U(1) subgroup of SU(2).
The electromagnetic force strength alphaE = e^2 = 1 / 137.03608 was calculated above using the volume V(S^1) of an S^1 in R^2, normalized by 1 / sqrt( 2 ).
The *alphaE force is part of the SU(2) weak force whose strength alphaW = w^2 was calculated above using the volume V(S^2) of an S^2 \subset R^3, normalized by 1 / sqrt( 3 ).
Also, the electromagnetic force strength alphaE = e^2 was calculated above using a 4-dimensional spacetime with global structure of the 4-torus T^4 made up of four S^1 1-spheres,
while the SU(2) weak force strength alphaW = w^2 was calculated above using two 2-spheres S^2 x S^2, each of which contains one 1-sphere of the *alphaE force.
Therefore
• *alphaE = alphaE ( sqrt( 2 ) / sqrt( 3) )(2 / 4) = alphaE / sqrt( 6 ),
• *e = e / (4th root of 6) = e / 1.565 ,
and the mass mW0LR must be reduced to an effective value M_W0LReff = M_W0LR / 1.565 = 18.053/1.565 = 11.536 GeV for the *alphaE force to act like an electromagnetic force in the E8 model:
*e M_W0LR = e (1/5.65) M_W0LR = e M_Z0,
where the physical effective neutral weak boson is denoted by Z0.
Therefore, the correct E8 model values for weak boson masses and the Weinberg angle theta_w are:
M_W+ = M_W- = 80.326 GeV;
M_Z0 = 80.326 + 11.536 = 91.862 GeV;
Sin(theta_w)^2 = 1 - (M_W+/- / M_Z0)^2 = 1 - ( 6452.2663 / 8438.6270 ) = 0.235.
Radiative corrections are not taken into account here, and may change these tree-level values somewhat.
The Kobayashi-Maskawa parameters are determined in terms of the sum of the masses of the 30 first-generation fermion particles and antiparticles, denoted by Smf1 = 7.508 GeV,
and the similar sums for second-generation and third-generation fermions, denoted by Smf2 = 32.94504 GeV and Smf3 = 1,629.2675 GeV.
The reason for using sums of all fermion masses (rather than sums of quark masses only) is that all fermions are in the same spinor representation of Spin(8), and the Spin(8) representations are considered to be fundamental.
The following formulas use the above masses to calculate Kobayashi-Maskawa parameters:
• phase angle d13 = 1 radian ( unit length on a phase circumference )
• sin(alpha) = s12 = [me+3md+3mu]/sqrt([me^2+3md^2+3mu^2]+[mmu^2+3ms^2+3mc^2]) = 0.222198
• sin(beta) = s13 = [me+3md+3mu]/sqrt([me^2+3md^2+3mu^2]+[mtau^2+3mb^2+3mt^2]) = 0.004608
• sin(*gamma) = [mmu+3ms+3mc]/sqrt([mtau^2+3mb^2+3mt^2]+[mmu^2+3ms^2+3mc^2])
• sin(gamma) = s23 = sin(*gamma) sqrt( Sigmaf2 / Sigmaf1 ) = 0.04234886
The factor sqrt( Smf2 /Smf1 ) appears in s23 because an s23 transition is to the second generation and not all the way to the first generation, so that the end product of an s23 transition has a greater available energy than s12 or s13 transitions by a factor of Smf2 / Smf1 .
Since the width of a transition is proportional to the square of the modulus of the relevant KM entry and the width of an s23 transition has greater available energy than the s12 or s13 transitions by a factor of Smf2 / Smf1
the effective magnitude of the s23 terms in the KM entries is increased by the factor sqrt( Smf2 /Smf1 ) .
The Chau-Keung parameterization is used, as it allows the K-M matrix to be represented as the product of the following three 3x3 matrices:
``` 1 0 0
0 cos(gamma) sin(gamma)
0 -sin(gamma) cos(gamma)```
``` cos(beta) 0 sin(beta)exp(-i d13)
0 1 0
-sin(beta)exp(i d13) 0 cos(beta)```
``` cos(alpha) sin(alpha) 0
-sin(alpha) cos(alpha) 0
0 0 1```
The resulting Kobayashi-Maskawa parameters for W+ and W- charged weak boson processes, are:
``` d s b
u 0.975 0.222 0.00249 -0.00388i
c -0.222 -0.000161i 0.974 -0.0000365i 0.0423
t 0.00698 -0.00378i -0.0418 -0.00086i 0.999```
The matrix is labelled by either (u c t) input and (d s b) output, or, as above, (d s b) input and (u c t) output.
For Z0 neutral weak boson processes, which are suppressed by the GIM mechanism of cancellation of virtual subprocesses, the matrix is labelled by either (u c t) input and (u'c't') output, or, as below, (d s b) input and (d's'b') output:
``` d s b
d' 0.975 0.222 0.00249 -0.00388i
s' -0.222 -0.000161i 0.974 -0.0000365i 0.0423
b' 0.00698 -0.00378i -0.0418 -0.00086i 0.999
```
Since neutrinos of all three generations are massless at tree level, the lepton sector has no tree-level K-M mixing.
According to a Review on the KM mixing matrix by Gilman, Kleinknecht, and Renk in the 2002 Review of Particle Physics:
"... Using the eight tree-level constraints discussed below together with unitarity, and assuming only three generations, the 90% confidence limits on the magnitude of the elements of the complete matrix are
``` d s b
u 0.9741 to 0.9756 0.219 to 0.226 0.00425 to 0.0048
c 0.219 to 0.226 0.9732 to 0.9748 0.038 to 0.044
t 0.004 to 0.014 0.037 to 0.044 0.9990 to 0.9993```
... The constraints of unitarity connect different elements, so choosing a specific value for one element restricts the range of others. ... The phase d13 lies in the range 0 < d13 < 2 pi, with non-zero values generally breaking CP invariance for the weak interactions. ... Using tree-level processes as constraints only, the matrix elements ...[ of the 90% confidence limit shown above ]... correspond to values of the sines of the angles of s12 = 0.2229 +/- 0.0022, s23 = 0.0412 +/- 0.0020, and s13 = 0.0036 +/- 0.0007. If we use the loop-level processes discussed below as additional constraints, the sines of the angles remain unaffected, and the CKM phase, sometimes referred to as the angle gamma = phi3 of the unitarity triangle ... is restricted to d13 = ( 1.02 +/- 0.22 ) radians = 59 +/- 13 degrees. ... CP-violating amplitudes or differences of rates are all proportional to the product of CKM factors ... s12 s13 s23 c12 c13^2 c23 sind13. This is just twice the area of the unitarity triangle. ... All processes can be quantitatively understood by one value of the CKM phase d13 = 59 +/- 13 degrees. The value of beta = 24 +/- 4 degrees from the overall fit is consistent with the value from the CP-asymmetry measurements of 26 +/- 4 degrees. The invariant measure of CP violation is J = ( 3.0 +/- 0.3) x 10^(-5). ... From a combined fit using the direct measurements, B mixing, epsilon, and sin2beta, we obtain: Re Vtd = 0.0071 +/- 0.0008 , Im Vtd = -0.0032 +/- 0.0004 ... Constraints... on the position of the apex of the unitarity triangle following from | Vub | , B mixing, epsilon, and sin2beta. ...".
In hep-ph/0208080, Yosef Nir says: "... Within the Standard Model, the only source of CP violation is the Kobayashi-Maskawa (KM) phase ... The study of CP violation is, at last, experiment driven. ... The CKM matrix provides a consistent picture of all the measured flavor and CP violating processes. ... There is no signal of new flavor physics. ... Very likely, the KM mechanism is the dominant source of CP violation in flavor changing processes. ... The result is consistent with the SM predictions. ...".
### Neutrino Masses
Consider the three generations of neutrinos:
• nu_e (electron neutrino);
• nu_m (muon neutrino);
• nu_t (tauon neutrino)
and three neutrino mass states: nu_1 ; nu_2 : nu_3
and the division of 8-dimensional spacetime into
• 4-dimensional physical M4 Minkowski spacetime
• plus 4-dimensional CP2 internal symmetry space.
The lightest mass state nu_1 corresponds to a neutrino whose propagation begins and ends in physical Minkowski spacetime, lying entirely therein. According to the E8 model, the mass of nu_1 is zero at tree-level and it picks up no first-order correction while propagating entirely through physical Minkowski spacetime, so the first-order corrected mass of nu_1 is zero.
Since only two of the three neutrinos have first-order mass, and since in the E8 model theneutrinos are not Majorana particles, there is no neutrino CP-violation or phase at first order.
Consider the neutrino mixing matrix
``` nu_1 nu_2 nu_3
nu_e Ue1 Ue2 Ue3
nu_m Um1 Um2 Um3
nu_t Ut1 Ut2 Ut3```
Assume the simplest mixing scheme with a massless nu_1 andnu_3 with no nu_e component so that Ue3 = 0
or, in conventional notation, mixing angle theta_13 = 0 = sin(theta_13) and cos(theta_13) = 1.
Then we have (as described in the 2004 Particle Data Book):
``` nu_1 nu_2 nu_3
nu_e cos(theta_12) sin(theta_12) 0
nu_m -sin(theta_12)cos(theta_23) cos(theta_12)cos(theta_23) sin(theta_23)
nu_t sin(theta_12)sin(theta_23) -cos(theta_12)sin(theta_23) cos(theta_23)```
Assume that nu_3 has equal components of nu_m and nu_t so that Um3 = Ut3 = 1/sqrt(2)
or, in conventional notation, mixing angle theta_23 = pi/4.
Then we have:
``` nu_1 nu_2 nu_3
nu_e cos(theta_12) sin(theta_12) 0
nu_m -sin(theta_12)/sqrt(2) cos(theta_12)/sqrt(2) 1/sqrt(2)
nu_t sin(theta_12)/sqrt(2) -cos(theta_12)/sqrt(2) 1/sqrt(2)
```
The heaviest mass state nu_3 corresponds to a neutrino whose propagation begins and ends in CP2 internal symmetry space, lying entirely therein.
According to the E8 model the mass of nu_3 is zero at tree-level but it picks up a first-order correction propagating entirely through internal symmetry space by merging with an electron through the weak and electromagnetic forces, effectively acting not merely as a point
but as a point plus an electron loop at both beginning and ending points
so the first-order corrected mass of nu_3 is given by
M_nu_3 x (1/sqrt(2)) = M_e x GW(mproton^2) x alpha_E
where the factor (1/sqrt(2)) comes from the Ut3 component of the neutrino mixing matrix so that
M_nu_3 = sqrt(2) x M_e x GW(mproton^2) x alpha_E =
= 1.4 x 5 x 10^5 x 1.05 x 10^(-5) x (1/137) eV =
= 7.35 / 137 = 5.4 x 10^(-2) eV.
Note that the neutrino-plus-electron loop can be anchored by weak force action through any of the 6 first-generation quarks at each of the beginning and ending points, and that the anchor quark at the beginning point can be different from the anchor quark at the ending point, so that there are 6x6 = 36 different possible anchorings.
The intermediate mass state nu_2 corresponds to a neutrino whose propagation begins or ends in CP2 internal symmetry space and ends or begins in physical Minkowski spacetime, thus having only one point (either beginning or ending) lying in CP2 internal symmetry space where it can act not merely as a point but as a point plus an electron loop.
According to the E8 model the mass of nu_2 is zero at tree-level but it picks up a first-order correction at only one (but not both) of the beginning or ending points
so that so that there are 6 different possible anchorings for nu_2 first-order corrections, as opposed to the 36 different possible anchorings for nu_3 first-order corrections,
so that the first-order corrected mass of nu_2 is less than the first-order corrected mass of nu_3 by a factor of 6,
so the first-order corrected mass of nu_2 is
M_nu_2 = M_nu_3 / Vol(CP2) = 5.4 x 10^(-2) / 6
= 9 x 10^(-3)eV.
Therefore: the mass-squared difference D(M23^2) is
D(M23^2) = M_nu_3^2 - M_nu_2^2 =
= ( 2916 - 81 ) x 10^(-6) eV^2 =
= 2.8 x 10^(-3) eV^2
and
the mass-squared difference D(M12^2) is
D(M12^2) = M_nu_2^2 - M_nu_1^2 =
= ( 81 - 0 ) x 10^(-6) eV^2 =
= 8.1 x 10^(-5) eV^2
Set theta_12 = pi/6= 0.866 so that cos(theta_12) = 0.866 = sqrt(3)/2 and sin(theta_12) = 0.5 = 1/2 = Ue2 = fraction of nu_2 begin/end points that are in the physical spacetime where massless nu_e lives. Then we have for the neutrino mixing matrix:
``` nu_1 nu_2 nu_3
nu_e 0.87 0.50 0
nu_m -0.35 0.61 0.71
nu_t 0.35 -0.61 0.71```
### Dark Energy : Dark Matter : Ordinary Matter
Gravity and the Cosmological Constant come from the MacDowell-Mansouri Mechanism and the 15-dimensional Spin(2,4) = SU(2,2) Conformal Group, which is made up of:
• 3 Rotations;
• 3 Boosts;
• 4 Translations;
• 4 Special Conformal transformations; and
• 1 Dilatation.
According to gr-qc/9809061 by R. Aldrovandi and J. G. Peireira:
"... If the fundamental spacetime symmetry of the laws of Physics is that given by the de Sitter instead of the Poincare group, the P-symmetry of the weak cosmological-constant limit and the Q-symmetry of the strong cosmological-constant limit can be considered as limiting cases of the fundamental symmetry. ...
... N ...[ is the space ]... whose geometry is gravitationally related to an infinite cosmological constant ...[and]... is a 4-dimensional cone-space in which ds = 0, and whose group of motion is Q. Analogously to the Minkowski case, N is also a homogeneous space, but now under the kinematical group Q, that is, N = Q/L [ where L is the Lorentz Group of Rotations and Boosts ]. In other words, the point-set of N is the point-set of the special conformal transformations.
Furthermore, the manifold of Q is a principal bundle P(Q/L,L), with Q/L = N as base space and L as the typical fiber. The kinematical group Q, like the Poincare group, has the Lorentz group L as the subgroup accounting for both the isotropy and the equivalence of inertial frames in this space. However, the special conformal transformations introduce a new kind of homogeneity. Instead of ordinary translations, all the points of N are equivalent through special conformal transformations. ...
... Minkowski and the cone-space can be considered as dual to each other, in the sense that their geometries are determined respectively by a vanishing and an infinite cosmological constants. The same can be said of their kinematical group of motions: P is associated to a vanishing cosmological constant and Q to an infinite cosmological constant.
The dual transformation connecting these two geometries is the spacetime inversion x^u -> x^u / sigma^2 . Under such a transformation, the Poincare group P is transformed into the group Q, and the Minkowski space M becomes the cone-space N. The points at infinity of M are concentrated in the vertex of the cone-space N, and those on the light-cone of M becomes the infinity of N. It is interesting to notice that, despite presenting an infinite scalar curvature, the concepts of space isotropy and equivalence between inertial frames in the cone-space N are those of special relativity. The difference lies in the concept of uniformity as it is the special conformal transformations, and not ordinary translations, which act transitively on N. ..."
• Since the Cosmological Constant comes from the 10 Rotation, Boost, and Special Conformal generators of the Conformal Group Spin(2,4) = SU(2,2), the fractional part of our Universe of the Cosmological Constant should be about 10 / 15 = 67%.
• Since Black Holes, including Dark Matter Primordial Black Holes, are curvature singularities in our 4-dimensional physical spacetime, and since Einstein-Hilbert curvature comes from the 4 Translations of the 15-dimensional Conformal Group Spin(2,4) = SU(2,2) through the MacDowell-Mansouri Mechanism (in which the generators corresponding to the 3 Rotations and 3 Boosts do not propagate), the fractional part of our Universe of Dark Matter Primordial Black Holes should be about 4 / 15 = 27%.
• Since Ordinary Matter gets mass from the Higgs mechanism which is related to the 1 Scale Dilatation of the 15-dimensional Conformal Group Spin(2,4) = SU(2,2), the fractional part of our universe of Ordinary Matter should be about 1 / 15 = 6%.
Therefore, our Flat Expanding Universe should, according to the cosmology of the model, have (without taking into account any evolutionary changes with time) roughly:
• 67% Cosmological Constant
• 27% Dark Matter - possilbly primordial stable Planck mass black holes
• 6% Ordinary Matter
As Dennnis Marks pointed out to me, since density rho is proportional to (1+z)^3(1+w) for red-shift factor z and a constant equation of state w:
• w = -1 for /\ and the average overall density of /\ Dark Energy remains constant with time and the expansion of our Universe; and
• w = 0 for nonrelativistic matter so that the overall average density of Ordinary Matter declines as 1 / R^3 as our Universe expands; and
• w = 0 for primordial black hole dark matter - stable Planck mass black holes - so that Dark Matter also has density that declines as 1 / R^3 as our Universe expands;
so that the ratio of their overall average densities must vary with time, or scale factor R of our Universe, as it expands.
Therefore, the above calculated ratio 0.67 : 0.27 : 0.06 is valid only for a particular time, or scale factor, of our Universe.
When is that time ? Further, what is the value of the ratio NOW ?
Since WMAP observes Ordinary Matter at 4% NOW, the time WHEN Ordinary Matter was 6% would be at redshift z such that 1 / (1+z)^3 = 0.04 / 0.06 = 2/3 , or (1+z)^3 = 1.5 , or 1+z = 1.145 , or z = 0.145. To translate redshift into time, in billions of years before present, or Gy BP, use this chart
from a www.supernova.lbl.gov file SNAPoverview.pdf. to see that the time WHEN Ordinary Matter was 6% would have been a bit over 2 billion years ago, or 2 Gy BP.
In the diagram, there are four Special Times in the history of our Universe:
• the Big Bang Beginning of Inflation (about 13.7 Gy BP);
• the End of Inflation = Beginning of Decelerating Expansion (beginning of green line also about 13.7 Gy BP);
• the End of Deceleration (q=0) = Inflection Point = Beginning of Accelerating Expansion (purple vertical line at about z = 0.587 and about 7 Gy BP). According to a hubblesite web page credited to Ann Feild, the above diagram "... reveals changes in the rate of expansion since the universe's birth 15 billion years ago. The more shalow the curve, the faster the rate of expansion. The curve changes noticeably about 7.5 billion years ago, when objects in the universe began flying apart as a faster rate. ...". According to a CERN Courier web page: "... Saul Perlmutter, who is head of the Supernova Cosmology Project ... and his team have studied altogether some 80 high red-shift type Ia supernovae. Their results imply that the universe was decelerating for the first half of its existence, and then began accelerating approximately 7 billion years ago. ...". According to astro-ph/0106051 by Michael S. Turner and Adam G. Riess: "... current supernova data ... favor deceleration at z > 0.5 ... SN 1997ff at z = 1.7 provides direct evidence for an early phase of slowing expansion if the dark energy is a cosmological constant ...".
• the Last Intersection of the Accelerating Expansion of our Universe with Linear Expansion (green line) from End of Inflation (first interesection) through Inflection Point (second intersection, at purple vertical line at about z = 0.587 and about 7 Gy BP) to the Third Intersection (at red vertical line at z = 0.145 and about 2 Gy BP), which is also around the times of the beginning of the Proterozoic Era and Eukaryotic Life, Fe2O3 Hematite ferric iron Red Bed formations, a Snowball Earth, and the start of the Oklo fission reactor. 2 Gy is also about 10 Galactic Years for our Milky Way Galaxy and is on the order of the time for the process of a collision of galaxies.
Those four Special Times define four Special Epochs:
• The Inflation Epoch, beginning with the Big Bang and ending with the End of Inflation. The Inflation Epoch is described by Zizzi Quantum Inflation ending with Self-Decoherence of our Universe ( see gr-qc/0007006 ).
• The Decelerating Expansion Epoch, beginning with the Self-Decoherence of our Universe at the End of Inflation. During the Decelerating Expansion Epoch, the Radiation Era is succeeded by the Matter Era, and the Matter Components (Dark and Ordinary) remain more prominent than they would be under the "standard norm" conditions of Linear Expansion.
• The Early Accelerating Expansion Epoch, beginning with the End of Deceleration and ending with the Last Intersection of Accelerating Expansion with Linear Expansion. During Accelerating Expansion, the prominence of Matter Components (Dark and Ordinary) declines, reaching the "standard norm" condition of Linear Expansion at the end of the Early Accelerating Expansion Epoch at the Last Intersection with the Line of Linear Expansion.
• The Late Accelerating Expansion Epoch, beginning with the Last Intersection of Accelerating Expansion and continuing forever, with New Universe creation happening many times at Many Times. During the Late Accelerating Expansion Epoch, the Cosmological Constant /\ is more prominent than it would be under the "standard norm" conditions of Linear Expansion.
NOW happens to be about 2 billion years into the Late Accelerating Expansion Epoch.What about Dark Energy : Dark Matter : Ordinary Matter NOW ?
As to how the Dark Energy /\ and Cold Dark Matter terms have evolved during the past 2 Gy, a rough estimate analysis would be:
• /\ and CDM would be effectively created during expansion in their natural ratio 67 : 27 = 2.48 = 5 / 2, each having proportionate fraction 5 / 7 and 2 / 7, respectively;
• CDM Black Hole decay would be ignored; and
• pre-existing CDM Black Hole density would decline by the same 1 / R^3 factor as Ordinary Matter, from 0.27 to 0.27 / 1.5 = 0.18.
The Ordinary Matter excess 0.06 - 0.04 = 0.02 plus the first-order CDM excess 0.27 - 0.18 = 0.09 should be summed to get a total first-order excess of 0.11, which in turn should be distributed to the /\ and CDM factors in their natural ratio 67 : 27, producing, for NOW after 2 Gy of expansion:
CDM Black Hole factor = 0.18 + 0.11 x 2/7 = 0.18 + 0.03 = 0.21
for a total calculated Dark Energy : Dark Matter : Ordinary Matter ratio for NOW of 0.75 : 0.21 : 0.04
### Pion Mass
The quark content of a charged pion is a quark - antiquark pair: either Up plus antiDown or Down plus antiUp. Experimentally, its mass is about 139.57 MeV.
The quark is a Naked Singularity Kerr-Newman Black Hole, with electromagnetic charge e and spin angular momentum J and constituent mass M 312 MeV, such that e^2 + a^2 is greater than M^2 (where a = J / M).
The antiquark is a also Naked Singularity Kerr-Newman Black Hole, with electromagnetic charge e and spin angular momentum J and constituent mass M 312 MeV, such that e^2 + a^2 is greater than M^2 (where a = J / M).
According to General Relativity, by Robert M. Wald (Chicago 1984) page 338 [Problems] ... 4. ...:
"... Suppose two widely separated Kerr black holes with parameters ( M1 , J1 ) and ( M2 , J2 ) initially are at rest in an axisymmetric configuration, i.e., their rotation axes are aligned along the direction of their separation.
Assume that these black holes fall together and coalesce into a single black hole.
Since angular momentum cannot be radiated away in an axisymmetric spacetime, the final black hole will have momentum J = J1 + J2. ...".
The neutral pion produced by the quark - antiquark pair would have zero angular momentum, thus reducing the value of e^2 + a^2 to e^2 .
For fermion electrons with spin 1/2, 1 / 2 = e / M (see for example Misner, Thorne, and Wheeler, Gravitation (Freeman 1972), page 883) so that M^2 = 4 e^2 is greater than e^2 for the electron. In other words, the angular momentum term a^2 is necessary to make e^2 + a^2 greater than M^2 so that the electron can be seen as a Kerr-Newman naked singularity.
Since the magnitude of electromagnetic charge of each quarks or antiquarks less than that of an electron, and since the mass of each quark or antiquark (as well as the pion mass) is greater than that of an electron, and since the quark - antiquark pair (as well as the pion) has angular momentum zero, the quark - antiquark pion has M^2 greater than e^2 + a^2 = e^2.
( Note that color charge, which is nonzero for the quark and the antiquark and is involved in the relation M^2 less than sum of spin-squared and charges-squared by which quarks and antiquarks can be see as Kerr-Newman naked singularities, is not relevant for the color-neutral pion. )
Therefore, the pion itself is a normal Kerr-Newman Black Hole with Outer Event Horizon = Ergosphere at r = 2M ( the Inner Event Horizon is only the origin at r = 0 ) as shown in this image
from Black Holes - A Traveller's Guide, by Clifford Pickover (Wiley 1996) in which the Ergosphere is white, the Outer Event Horizon is red, the Inner Event Horizon is green, and the Ring Singularity is purple. In the case of the pion, the white and red surfaces coincide, and the green surface is only a point at the origin.
According to section 3.6 of Jeffrey Winicour's 2001 Living Review of the Development of Numerical Evolution Codes for General Relativity (see also a 2005 update):
"... The black hole event horizon associated with ... slightly broken ... degeneracy [ of the axisymmetric configuration ]... reveals new features not seen in the degenerate case of the head-on collision ... If the degeneracy is slightly broken, the individual black holes form with spherical topology but as they approach, tidal distortion produces two sharp pincers on each black hole just prior to merger.
... Tidal distortion of approaching black holes ...
... Formation of sharp pincers just prior to merger ..
... toroidal stage just after merger ...
At merger, the two pincers join to form a single ... toroidal black hole.
The inner hole of the torus subsequently [ begins to] close... up (superluminally) ... [ If the closing proceeds to completion, it ]... produce[s] first a peanut shaped black hole and finally a spherical black hole. ...".
In the physical case of quark and antiquark forming a pion, the toroidal black hole remains a torus. The torus is an event horizon and therefore is not a 2-spacelike dimensional torus, but is a (1+1)-dimensional torus with a timelike dimension.
The effect is described in detail in Robert Wald's book General Relativity (Chicago 1984). It can be said to be due to extreme frame dragging, or to timelike translations becoming spacelike as though they had been Wick rotated in Complex SpaceTime.
As Hawking and Ellis say in The LargeScale Structure of Space-Time (Cambridge 1973):
"... The surface r = r+ is ... the event horizon ... and is a null surface ...
... On the surface r = r+ .... the wavefront corresponding to a point on this surface lies entirely within the surface. ...".
A (1+1)-dimensional torus with a timelike dimension can carry a Sine-Gordon Breather, and the soliton and antisoliton of a Sine-Gordon Breather correspond to the quark and antiquark that make up the pion.
Sine-Gordon Breathers are described by Sidney Coleman in his Erica lecture paper Classical Lumps and their Quantum Descendants (1975), reprinted in his book Aspects of Symmetry (Cambridge 1985), where Coleman writes the Lagrangian for the Sine-Gordon equation as ( Coleman's eq. 4.3 ):
L = (1 / B^2 ) ( (1/2) (df)^2 + A ( cos( f ) - 1 ) )
and Coleman says:
"... We see that, in classical physics, B is an irrelevant parameter: if we can solve the sine-Gordon equation for any non-zero B, we can solve it for any other B. The only effect of changing B is the trivial one of changing the energy and momentum assigned to a given soluition of the equation. This is not true in quantum physics, becasue the relevant object for quantum physics is not L but [ eq. 4.4 ]
L / hbar = (1 / ( B^2 hbar ) ) ( (1/2) (df)^2 + A ( cos( f ) - 1 ) )
An other way of saying the same thing is to say that in quantum physics we have one more dimensional constant of nature, Planck's constant, than in classical physics. ... the classical limit, vanishingf hbar, is exactly the same as the small-coupling limit, vanishing B ... from now on I will ... set hbar equal to one. ...
... the sine-Gordon equation ...[ has ]... an exact periodic solution ...[ eq. 4.59 ]...
f( x, t ) = ( 4 / B ) arctan( ( n sin( w t ) / cosh( n w x ))
where [ eq. 4.60 ] n = sqrt( A - w^2 ) / w and w ranges from 0 to A. This solution has a simple physical interpretation ... a soliton far to the left ...[ and ]... an antisoliton far to the right. As sin( w t ) increases, the soliton and antisoliton mover farther apart from each other. When sin( w t ) passes thrpough one, they turn around and begin to approach one another. As sin( w t ) comes down to zero ... the soliton and antisoliton are on top of each other ... when sin( w t ) becomes negative .. the soliton and antisoliton have passed each other. ...[
This stereo image of a Sine-Gordon Breather was generated by the program 3D-Filmstrip for Macintosh by Richard Palais. You can see the stereo with red-green or red-cyan 3D glasses. The program is on the WWW at http://rsp.math.brandeis.edu/3D-Filmstrip. The Sine-Gordon Breather is confined in space (y-axis) but periodic in time (x-axis), and therefore naturally lives on the (1+1)-dimensional torus with a timelike dimension of the Event Horizon of the pion. ...]
... Thus, Eq. (4.59) can be thought of as a soliton and an antisoliton oscillation about their common center-of-mass. For this reason, it is called 'the doublet [ or Breather ] solution'. ... the energy of the doublet ...[ eq. 4.64 ]
E = 2 M sqrt( 1 - ( w^2 / A ) )
where [ eq. 4.65 ] M = 8 sqrt( A ) / B^2 is the soliton mass. Note that the mass of the doublet is always less than twice the soliton mass, as we would expect from a soltion-antisoliton pair. ... Dashen, Hasslacher, and Neveu ... Phys. Rev. D10, 4114; 4130; 4138 (1974). A pedagogical review of these methods has been written by R. Rajaraman ( Phys. Reports 21, 227 (1975 ... Phys. Rev. D11, 3424 (1975) ...[ Dashen, Hasslacher, and Neveu found that ]... there is only a single series of bound states, labeled by the integer N ... The energies ... are ... [ eq. 4.82 ]
E_N = 2 M sin( B'^2 N / 16 )
where N = 0, 1, 2 ... < 8 pi / B'^2 , [ eq. 4.83 ]
B'^2 = B^2 / ( 1 - ( B^2 / 8 pi ))
and M is the soliton mass. M is not given by Eq. ( 4.675 ), but is the soliton mass corrected by the DHN formula, or, equivalently, by the first-order weak coupling expansion. ... I have written the equation in this form .. to eliminate A, and thus avoid worries about renormalization conventions. Note that the DHN formula is identical to the Bohr-Sommerfeld formula, except that B is replaced by B'. ... Bohr and Sommerfeld['s] ... quantization formula says that if we have a one-parameter family of periodic motions, labeled by the period, T, then an energy eigenstate occurs whenever [ eq. 4.66 ]
[ Integral from 0 to T ]( dt p qdot = 2 pi N,
where N is an integer. ... Eq.( 4.66 ) is cruder than the WKB formula, but it is much more general; it is always the leading approximation for any dynamical system ... Dashen et al speculate that Eq. ( 4.82 ) is exact. ...
the sine-Gordon equation is equivalent ... to the massive Thirring model. This is surprising, because the massive Thirring model is a canonical field theory whose Hamiltonian is expressedin terms of fundamental Fermi fields only. Even more surprising, when B^2 = 4 pi , that sine-Gordon equation is equivalent to a free massive Dirac theory, in one spatial dimension. ... Furthermore, we can identify the mass term in the Thirring model with the sine-Gordon interaction, [ eq. 5.13 ]
M = - ( A / B^2 ) N_m cos( B f )
.. to do this consistently ... we must say [ eq. 5.14 ]
B^2 / ( 4 pi ) = 1 / ( 1 + g / pi )
....[where]... g is a free parameter, the coupling constant [ for the Thirring model ]... Note that if B^2 = 4 pi , g = 0 , and the sine-Gordon equation is the theory of a free massive Dirac field. ... It is a bit surprising to see a fermion appearing as a coherent state of a Bose field. Certainly this could not happen in three dimensions, where it would be forbidden by the spin-statistics theorem. However, there is no spin-statistics theorem in one dimension, for the excellent reason that there is no spin. ... the lowest fermion-antifermion bound state of the massive Thirring model is an obvious candidate for the fundamental meson of sine-Gordon theory. ... equation ( 4.82 ) predicts that all the doublet bound states disappear when B^2 exceeds 4 pi . This is precisely the point where the Thirring model interaction switches from attractive to repulsive. ... these two theories ... the massive Thirring model .. and ... the sine-Gordon equation ... define identical physics. ... I have computed the predictions of ...[various]... approximation methods for the ration of the soliton mass to the meson mass for three values of B^2 : 4 pi (where the qualitative picture of the soliton as a lump totally breaks down), 2 pi, and pi . At 4 pi we know the exact answer ... I happen to know the exact answer for 2 pi, so I have included this in the table. ...
``` Method B^2 = pi B^2 = 2 pi B^2 = 4 pi
Zeroth-order weak coupling
expansion eq2.13b 2.55 1.27 0.64
Coherent-state variation 2.55 1.27 0.64
First-order weak
coupling expansion 2.23 0.95 0.32
Bohr-Sommerfeld eq4.64 2.56 1.31 0.71
DHN formula eq4.82 2.25 1.00 0.50
Exact ? 1.00 0.50```
...[eq. 2.13b ] E = 8 sqrt(A) / B^2 ...[ is the ]... energy of the lump ... of sine-Gordon theory ... frequently called 'soliton...' in the literature ... [ Zeroth-order is the classical case, or classical limit. ] ...
... Coherent-state variation always gives the same result as the ... Zeroth-order weak coupling expansion ... .
The ... First-order weak-coupling expansion ... explicit formula ... is ( 8 / B^2 ) - ( 1 / pi ). ...".
Note that, using the VoDou Physics constituent mass of the Up and Down quarks and antiquarks, about 312.75 MeV, as the soliton and antisoliton masses, and setting B^2 = pi and using the DHN formula, the mass of the charged pion is calculated to be
( 312.75 / 2.25 ) MeV = 139 MeV
which is in pretty good agreement with the experimental value of about 139.57 MeV.
Why is the value B^2 = pi ( or, using Coleman's eq. ( 5.14 ), the Thirring coupling constant g = 3 pi ) the special value that gives the pion mass ?
Because B^2 = pi is where the First-order weak coupling expansion substantially coincides with the ( probably exact ) DHN formula.
In other words, the physical quark - antiquark pion lives where the first-order weak coupling expansion is exact.
Near the end of his article, Coleman expressed "Some opinions":
"... This has been a long series of physics lectures with no reference whatsoever to experiment. This is embarrassing.
... Is there any chance that the lump will be more than a theoretical toy in our field? I can think of two possiblities.
One is that there will appear a theory of strong-interaction dynamics in which hadrons are thought of as lumps, or, ... as systems of quarks bound into lumps. ... I am pessimistic about the success of such a theory. ... However, I stand ready to be converted in a moment by a convincing computation.
The other possibility is that a lump will appear in a realistic theory ... of weak and electromagnetic interactions ... the theory would have to imbed the U(1)xSU(2) group ... in a larger group without U(1) factors ... it would be a magnetic monopole. ...".
This description of the hadronic pion as a quark - antiquark system governed by the sine-Gordon - massive Thirring model should dispel Coleman's pessimism about his first stated possibility and relieve his embarrassment about lack of contact with experiment.
As to his second stated possibility, very massive monopoles related to SU(5) GUT are still within the realm of possible future experimental discoveries.
Further material about the sine-Gordon doublet Breather and the massive Thirring equation can be found in the book Solitons and Instantons (North-Holland 1982,1987) by R. Rajaraman, who writes:
"... the doublet or breather solutions ... can be used as input into the WKB method. ... the system is ... equivalent to the massive Thirring model, with the SG soliton state identifiable as a fermion. ... Mass of the quantum soliton ... will consist of a classical term followed by quantum corrections. The energy of the classical soliton ... is ... [ eq. 7.3 ]
E_cl[f_sol] = 8 m^3 / L
The quantum corrections ... to the 'soliton mass' ... is finite as the momentum cut-off goes to infinity and equals ( - m / pi ). Hence the quantum soliton's mass is [ eq. 7.10 ]
M_sol =( 8 m^3 / L ) - ( m / pi ) +O(L).
The mass of the quantum antisoliton will be, by ... symmetry, the same as M_sol. ...
The doublet solutions ... may be quantised by the WKB method. ... we see that the coupling constant ( L / m^2 ) has been replaced by a 'renormalised' coupling constant G ... [ eq. 7.24 ]
G = ( L / m^2 ) / ( 1 - ( L / 8 pi m^2 ))
... as a result of quantum corrections. ... the same thing had happened to the soliton mass in eq. ( 7.10 ). To leading order, we can write [ eq. 7.25 ]
M_sol = ( 8 m^3 / L ) - ( m / pi ) = 8 m / G
... The doublet masses ... bound-state energy levels ... E = M_N, where ... [ eq. 7.28 ]
M_N = ( 16 m / G ) sin( N G / 16 ) ; N = 1, 2, ... < 8 pi / G
Formally, the quantisation condition permits all integers N from 1 to oo , but we run out of classical doublet solutions on which these bound states are based when N > 8 pi / G . ... The classical solutions ... bear the same relation to the bound-state wavefunctionals ... that Bohr orbits bear to hydrogen atom wavefunctions. ...
Coleman ... show[ed] explicitly ... the SG theory equivalent to the charge-zero sector of the MT model, provided ... L / 4 pi m^2 = 1 / ( 1 + g / pi )
...[ where in Coleman's work set out above such as his eq. ( 5.14 ) , B^2 = L / m^2 ]...
Coleman ... resurrected Skyrme's conjecture that the quantum soliton of the SG model may be identified with the fermion of the MT model. ... ".
WHAT ABOUT THE NEUTRAL PION?
The quark content of the charged pion is u_d or d_u , both of which are consistent with the sine-Gordon picture. Experimentally, its mass is 139.57 Mev.
The neutral pion has quark content (u_u + d_d)/sqrt(2) with two components, somewhat different from the sine-Gordon picture, and a mass of 134.96 Mev.
The effective constituent mass of a down valence quark increases (by swapping places with a strange sea quark) by about DcMdquark = (Ms - Md) (Md/Ms)2 aw V12 = 312x0.25x0.253x0.22 Mev = 4.3 Mev.
Similarly, the up quark color force mass increase is about
DcMuquark = (Mc - Mu) (Mu/Mc)2 aw V12 = 1777x0.022x0.253x0.22 Mev = 2.2 Mev.
The color force increase for the charged pion DcMpion± = 6.5 Mev.
Since the mass Mpion± = 139.57 Mev is calculated from a color force sine-Gordon soliton state, the mass 139.57 Mev already takes DcMpion± into account.
For pion0 = (u_u + d_d)/ sqrt 2 , the d and _d of the the d_d pair do not swap places with strange sea quarks very often because it is energetically preferential for them both to become a u_u pair.
Therefore, from the point of view of calculating DcMpion0, the pion0 should be considered to be only u_u , and DcMpion0 = 2.2+2.2 = 4.4 Mev.
If, as in the nucleon, DeM(pion0-pion±) = -1 Mev, the theoretical estimate is
DM(pion0-pion±) = DcM(pion0-pion±) + DeM(pion0-pion±) = 4.4 - 6.5 -1 = -3.1 Mev,
roughly consistent with the experimental value of -4.6 Mev.
### Proton-Neutron Mass Difference
According to the 1986 CODATA Bulletin No. 63, the experimental value of the neutron mass is 939.56563(28) Mev, and the experimental value of the proton is 938.27231(28) Mev.
The neutron-proton mass difference 1.3 Mev is due to the fact that the proton consists of two up quarks and one down quark, while the neutron consists of one up quark and two down quarks.
The magnitude of the electromagnetic energy difference mN - mP is about 1 Mev, but the sign is wrong: mN - mP = -1 Mev, and the proton's electromagnetic mass is greater than the neutron's.
The difference in energy between the bound states, neutron and proton, is not due to a difference between the Pre-Quantum constituent masses of the up quark and the down quark, which are calculated in the E8 model to be equal.
It is due to the difference between the Quantum color force interactions of the up and down constituent valence quarks with the gluons and virtual sea quarks in the neutron and the proton.
An up valence quark, constituent mass 313 Mev, does not often swap places with a 2.09 Gev charm sea quark, but a 313 Mev down valence quark can more often swap places with a 625 Mev strange sea quark.
Therefore the Quantum color force constituent mass of the down valence quark is heavier by about
(ms - md) (md/ms)^2 a(w) |Vds| = 312 x 0.25 x 0.253 x 0.22 Mev = 4.3 Mev,
(where a(w) = 0.253 is the geometric part of the weak force strength and |Vds| = 0.22 is the magnitude of the K-M parameter mixing first generation down and second generation strange)
so that the Quantum color force constituent mass Qmd of the down quark is
Qmd = 312.75 + 4.3 = 317.05 MeV.
Similarly,
the up quark Quantum color force mass increase is about
(mc - mu) (mu/mc)^2 a(w) |V(uc)| = 1777 x 0.022 x 0.253 x 0.22 Mev = 2.2 Mev,
(where |Vuc| = 0.22 is the magnitude of the K-M parameter mixing first generation up and second generation charm)
so that the Quantum color force constituent mass Qmu of the up quark is
Qmu = 312.75 + 2.2 = 314.95 MeV.
Therefore, the Quantum color force Neutron-Proton mass difference is
mN - mP = Qmd - Qmu = 317.05 Mev - 314.95 Mev = 2.1 Mev.
Since the electromagnetic Neutron-Proton mass difference is roughly mN - mP = -1 MeV
the total theoretical Neutron-Proton mass difference is
mN - mP = 2.1 Mev - 1 Mev = 1.1 Mev,
an estimate that is fairly close to the experimental value of 1.3 Mev.
Note that in the equation (ms - md) (md/ms)^2 a(w) |Vds| = 4.3 Mev , Vds is a mixing of down and strange by a neutral Z0, compared to the more conventional Vus mixing by charged W. Although real neutral Z0 processes are suppressed by the GIM mechanism, which is a cancellation of virtual processes, the process of the equation is strictly a virtual process.
Note also that the K-M mixing parameter |Vds| is linear. Mixing (such as between a down quark and a strange quark) is a two-step process, that goes approximately as the square of |Vds|:
• First the down quark changes to a virtual strange quark, producing one factor of |Vds|.
• Then, second, the virtual strange quark changes back to a down quark, producing a second factor of |Vsd|, which is approximately equal to |Vds|.
Only the first step (one factor of |Vds|) appears in the Quantum mass formula used to determine the neutron mass.
If you measure the mass of a neutron, that measurement includes a sum over a lot of histories of the valence quarks inside the neutron. In some of those histories, in my view, you will "see" some of the two valence down quarks in a virtual transition state that is at a time after the first action, or change from down to strange, and before the second action, or change back. Therefore, you should take into account those histories in the sum in which you see a strange valence quark, and you get the linear factor |Vds| in the above equation.
### Planck Mass
In the E8 model, a Planck-mass black hole is not a tree-level classical particle such as an electron or a quark, but a quantum entity resulting from the Many-Worlds quantum sum over histories at a single point in spacetime.
Consider an isolated single point, or vertex in the lattice picture of spacetime. In the E8 model, fermions live on vertices, and only first-generation fermions can live on a single vertex. (The second-generation fermions live on two vertices that act at our energy levels very much like one, and the third-generation fermions live on three vertices that act at our energy levels very much like one.)
At a single spacetime vertex, a Planck-mass black hole is the Many-Worlds quantum sum of all possible virtual first-generation particle-antiparticle fermion pairs permitted by the Pauli exclusion principle to live on that vertex.
Once a Planck-mass black hole is formed, it is stable in the E8 model. Less mass would not be gravitationally bound at the vertex. More mass at the vertex would decay by Hawking radiation.
In the E8 model, a Planck-mass black hole can be formed: as the end product of Hawking radiation decay of a larger black hole; by vacuum fluctuation; or perhaps by using a pion laser.
Since Dirac fermions in 4-dimensional spacetime can be massive (and are massive at low enough energies for the Higgs mechanism to act), the Planck mass in 4-dimensional spacetime is the sum of masses of all possible virtual first-generation particle-antiparticle fermion pairs permitted by the Pauli exclusion principle.
There are 8 fermion particles and 8 fermion antiparticles for a total of 64 particle-antiparticle pairs. A typical combination should have several quarks, several antiquarks, a few colorless quark-antiquark pairs that would be equivalent to pions, and some leptons and antileptons.
Due to the Pauli exclusion principle, no fermion lepton or quark could be present at the vertex more than twice unless they are in the form of boson pions, colorless first-generation quark-antiquark pairs not subject to the Pauli exclusion principle. Of the 64 particle-antiparticle pairs, 12 are pions.
A typical combination should have about 6 pions.
If all the pions are independent, the typical combination should have a mass of about .14x6 GeV = 0.84 GeV. However, just as the pion mass of .14 GeV is less than the sum of the masses of a quark and an antiquark, pairs of oppositely charged pions may form a bound state of less mass than the sum of two pion masses. If such a bound state of oppositely charged pions has a mass as small as .1 GeV, and if the typical combination has one such pair and 4 other pions, then the typical combination could have a mass in the range of 0.66 GeV.
Summing over all 2^64 combinations, the total mass of a one-vertex universe should give a Planck mass roughly around 0.66 x 2^64 = 1.217 x 10^19 GeV.
Since each fermion particle has a corresponding antiparticle, a Planck-mass Black Hole is neutral with respect to electric and color charges.
The value for the Planck mass given in the Particle Data Group's 1998 review is 1.221 x 10^19 GeV.
### Monster Symmetry of Local Neighborhood Physics
Each E8 or Cl(8) only describes physics in a Local Neighborhood ( it takes the Algebraic Quantum Field Theory of the Generalized Hyperfinite II1 von Neumann Factor to describe a more global theory ).
Consider the E8(8) root vector polytope
and particularly the central 24 vertices:
made up of 8+8+8 = 24 central vertices.
If consider the 24-dim space generated by those 24 elements,
and consider the 24-dimensional Leech Lattice as a lattice in that 24-dim space,
and then compactify the 24-dim space by taking its quotient modulo the Leech Lattice,
you get a representation of a single E8 alone, the simplest building block element of the full E8 model.
According to James Lepowsky in math.QA/0706.4072:
"... the Fischer-Griess Monster M ... was constructed by Griess as a symmetry group (of order about 10^54) of a remarkable new commutative but very, very highly nonassociative, seemingly ad-hoc, algebra B of dimension 196,883. ... One takes the torus that is the quotient of 24-dimensional Euclidean space modulo the Leech lattice ... The Monster is the automorphism group of the smallest nontrival string theory that nature allows ... Bosonic 26-dimensional space-time ... "compactified" on 24 dimensions ...".
It is a conjecture that the Monster is also the automorphism group of the smallest nontrivial part of the E8 model, and that the common relationship to the Monster might show an equivalence of the E8 model and the 26-dim Bosonic String Model (with fermions from orbifolding) described at CERN-CDS-EXT-2004-031. It might even be that both the E8 model and such String models are substantially equivalent to a Spin Foam model with E8(8) structures organized according to the 27-dim exceptional Jordan algebra J3(O).
As to possible physical meaning of such Monster symmetry of elemental E8 model structures, consider that the order of the Monster Group is
8080, 17424, 79451, 28758, 86459, 90496, 17107, 57005, 75436, 80000, 00000
=
2^46 .3^20 .5^9 .7^6 .11^2 .13^3 .17.19.23.29.31.41.47.59.71
or about 8 x 10^53.
If you use positronium (electron-positron bound state of the two lowest-nonzero-mass Dirac fermions) as a unit of mass Mep = 1 MeV, then it is interesting that the product of the squares of the Planck mass Mpl = 1.2 x 10^22 MeV and W-boson mass Mw = 80,000 MeV gives ( ( Mpl/Mep )( Mw/Mep) )^2 = 9 x 10^53 which is roughly the Monster order.
• The Mpl part of M may be related to Aut(Leech Lattice) = double cover of Co1.
• The order of Co1 is 2^21.3^9.5^4.7^2.11.13.23 or about 4 x 10^18.
• The Mw part of M may be related to Aut(Golay Code) = M24.
• The order of M24 is 2^10.3^3.5.7.11.23 or about 2.4 x 10^8.
If you look at the physically realistic superposition of 8 such Cells, you get 8 copies of the Monster of total order about 6.4 x 10^54, which is roughly the product of the Planck mass and Higgs VEV squared:
(1.22 x 10^22 )^2 x (2.5 x 10^5)^2 = 9 x 10^54
The full 26-dimensional Lattice Bosonic String Theory, and the full E8 model, and the full J3(O) Spin Foam model, might in that view all be regarded as an infinite-dimensional Affinization of the Theory of that Single Cell.
### Inflation, Octonion Non-Unitarity, and Entropy
In his book Quaternionic Quantum Mechanics and Quantum Fields ((Oxford 1995), Stephen L. Adler says at pages 50-52, 561:
"... If the multiplication is associative, as in the complex and quaternionic cases, we can remove parentheses in ... Schroedinger equation dynamics ... to conclude that ... the inner product < f(t) | g(t) > ... is invariant ... this proof fails in the octonionic case, and hence one cannot follow the standard procedure to get a unitary dynamics. ...[so
there is a]... failure of unitarity in octonionic quantum mechanics...".
Conventionally, creation of the particles in our universe occurred during inflation with unitarity and energy conservation being due to an inflaton field that is addition to the fields we now observe in the Standard Model plus Gravity.
In the E8 model, our present 4-dimensional physical spacetime freezes out from a high-energy 8-dimensional octonionic spacetime due to selection of a preferred quaternionic subspacetime. A question is whether the dimensional reduction occurs at the initial Big Bang beginning of inflation or continues through inflation to its end.
If our spacetime remains octonionic 8-dimensional throughout inflation, then the non-associativity and non-unitarity of octonions might account for particle creation without the need for tapping the energy of an inflaton field.
The non-associative structure of octonions manifests itself in interesting ways, such as the expansion of the 7-dim 7-sphere S7 under the Lie algebra bracket operation to the 28-dim Lie algebra Spin(8) that is made up of two S7 spheres and a 14-dim G2 Lie algebra.
Consider that the initial Big Bang produced a particle-antiparticle pair of the 7 charged fermions, plus the 8th fermion (neutrino) corresponding to the real number 1.
In gr-qc/0007006 Paola Zizzi says:
"... during inflation, the universe can be described as a superposed state of quantum ... [ qubits ]. The self-reduction of the superposed quantum state is ... reached at the end of inflation ...[at]... the decoherence time ... [ Tdecoh = 10^9 Tplanck = 10^(-34) sec ] ... and corresponds to a superposed state of ... [ 10^19 = 2^64 qubits ]. ... ... This is also the number of superposed tubulins-qubits in our brain ... leading to a conscious event. ...".
The number of doublings (also known as e-foldings) is also estimated by in astro-ph/0307459, by Banks and Fischler, who say:
"... If the present acceleration of the universe is due to an asymptotically deSitter universe with small cosmological constant, then the number of e-foldings during inflation is bounded. ... The essential ingredient is that because of the UV-IR connection, entropy requires storage space. The existence of a small cosmological constant restricts the available storage space. ... We obtain the upper bound ... N_e = 85 ... where we took [the cosmological constant] /\ to be of O(10^(-3) eV ). For the sake of comparison, the case k = 1/3 [ corresponding to the equation of state for a radiation-dominated fluid, such as the cosmic microwave background ] yields ... N_e= 65 ... This value for the maximum number of e-foldings is close to the value necessary to solve the "horizon problem".
If at each of the 64 doubling stages of Zizzi inflation the 2 particles of such a pair produced 8+8 = 16 fermions,
then at the end of inflation such a non-unitary octonionic process would have produced about 2 x 16^64 = 4 x (2^4)^64 = 4 x 2^256 = 4 x 10^77 fermion particles.
The figure of 4 x 10^77 is similar number of particles estimated by considering the initial fluctuation to be a Planck mass Black Hole and the 64 doublings to act on such Black Holes (which process can also be considered due to octonionic non-associativity non-unitarity).
Roger Penrose, in his book The Emperor's New Mind (Oxford 1989, pages 316-317) said:
"... in our universe ... Entropy ... increases ... Something forced the entropy to be low in the past. ... the low-entropy states in the past are a puzzle. ...".
The Zizzi Inflation phase of our universe ends with decoherence "collapse" of the 2^64 Superposition Inflated Universe into Many Worlds of the Many-Worlds Quantum Theory, only one of which Worlds is our World.
In this image:
• the central white circle containing Llull's A-wheel is the Inflation Era in which everything is in Superposition;
• the boundary of the central circle marks the decoherence/collapse at the End of Inflation; and
• each line radiating from the central circle corrresponds to one decohered/collapsed Universe World (of course, there are many more lines than actually shown), only three of which are explicitly indicated in the image, and only one of which is Our Universe World.
Since our World is only a tiny fraction of all the Worlds, it carries only a tiny fraction of the entropy of the 2^64 Superposition Inflated Universe, thus solving Penrose's Puzzle.
Penrose (in his book The Emperor's New Mind (Oxford 1989, page 339)) proposed that the solution of his Puzzle might be related to Weyl Curvature, saying
"... For some reason, the universe was created in a very special (low entropy) state, with something like the WEYL = 0 constraint of the FRW-models imposed upon it ...".
In the book The Dawning of Gauge Theory (Princeton 1997, pages 45,77-81,86,120,144) Lochlainn O'Raifeartaigh said:
"... Weyl's ... 1918 paper ... showed how a geometrical significance could be ascribed to the electromagnetic field ... in 1922 ... Shroedinger ... suggested ... the flaw in the original Weyl theory might be removed by quantum mechanics ... the exponent of the non-integrable Weyl factor became quantized ...
... London in his 1927 paper ... establish[ed] the relation between Weyl's non-integrable scale factor and the gauge principle as it ocurs in the Hamilton-Jacobi, de Broglie and Schroedinger equations ... it is the complex amplitude of the de Broglie wave ... The fault in Weyl's original theory lay not in the presence of Weyl's non-integrable scale-factor but in the fact that it was real and applied to the metric. It should be converted to a phase-factor and applied to the wave-function. ... Weyl's reaction ... was ... enthusiasm ... in ... 1929 ... electromagnetism is an accompanying phenomenon of the material wave-field and not of gravitation ...
... Pauli proceeded to incorporate many of Weyl's ideas into his Handbuch article and by 1953 he had become an ardent proponent of the gauge principle ...".
In the early 1950s, Bohm developed his theory, an elaboration of de Broglie-Schroedinger quantum theory.
In physics/0211012 B. G. Sidharth said:
"... Santamato ... Phys.Rev.D. 29 (2), 216ff, 1984 ... J. Math. Phys. 26 (8), 2477ff, 1984 ... Phys.Rev.D 32 (10), 2615ff, 1985 ... further developed the deBroglie-Bohm formulation by relating the ... Quantum potential to ... Weyl's geometry ...".
In The Anthropic Cosmological Principle (Oxford 1986, pages 446-447) Barrow and Tipler said:
"... Penrose ... suggested that the Weyl curvature could be intimately related to the gravitational entropy of space-time ... Unfortunately, as yet there is no obvous candidate to use as a gravitational entropy Sg ...".
As Penrose said in his book The Emperor's New Mind (Oxford 1989, pages 210-211):
"... REIMANN = WEYL + RICCI ... Einstein's equations become ... RICCI = ENERGY ...
The Weyl tensor WEYL measures a tidal distortion of our sphere of freely falling particles (i.e., an initial change in shape, rather than in size), and the Ricci tensor RICCI measures its initial change in volume. ... the Weyl tensor ... is an important quantity. The tidal effect that is experienced in empty space is entirely due to WEYL. ... there are differential equations connecting WEYL with ENERGY, rather like the Maxwell equations ... a fruitful point of view is to regard WEYL as a kind of gravitational analogue of the electromagnetic field quantity ...".
These remarks of Penrose seem to me to justify seeing the Weyl curvature as a Weyl gauge quantum phase for a Bohm-type Quantum Potential, especially in view of my model in which the Bohm-type Quantum Potential comes from what is commonly viewed as a gravitational part of Bosonic String Theory and in which Many-Worlds gravitational superposition separation plays a fundamental role in Quantum Consciousness.
Since, from the Many-Worlds point of view, the branching of the Worlds of our Universe as time moves forward towards the future might give a realistic definition of gravitational entropy Sg and since Deutsch has indicated that the Bohm potential can be seen to be equivalent to Many-Worlds Quantum theory, it seems to me that the Weyl-Schroedinger-London-Santamato description of the Quantum potential in terms of Weyl curvature could be seen as Penrose's Weyl curvature entropy.
Moreover, the fact that the Weyl curvature WEYL is the conformal part of the RIEMANN tensor is interesting, and the unification of RICCI for gravity and WEYL for quantum potential indicate to me that Jack Sarfatti's idea that BOTH should have back-reaction may be correct.
### Angular Momentum, Mass, Magnetic Dipole Moment
At T = 10^19 GeV, the Planck Mass/Energy, the Inflation Era begins.
At T = 10^16 GeV, the SU(5) Monopole Mass/Energy ... [ According to The Early Universe, by G. Borner (Springer-Verlag 1988), from which book's Fig. 6.21 the SU(5) GUT illustration below is taken,
"... For GUT physics monopoles are extremely interesting objects: they have an onion-like structure ... which contains the whole world of grand unified theories.
Near the center ( about 10^(-29) cm ) there is a GUT symmetric vacuum.
At about 10^(-16) cm, out to the Yukawa tail ... exp( - Mw r ), the field is the electroweak colour field of the (3,2,1) standard model, and
at ...[10^(-15) cm]... it is made up of photons and gluons, while
at the edge [ 10^(-13) cm ] there are fermion-antifermion pairs.
Far beyond nuclear distances it behaves as a magnetically-charged pole of the Dirac type.
This view of the GUT monopole raises the possibility that it may catalyze the decay of the proton ...". ]...
SU(5) GUT Monopole formation ends and the Inflationary X-Boson Higgs mechanism eliminates the relic Monopoles.
According to The Early Universe, by Kolb and Turner (1994 paperback edition, Adddison-Wesley, page 526):
"... the full symmetry of the GUT cannot be manifest; if it were the proton would decay in 10^(-24) sec. The gauge group ... must be spontaneously broken to [ SU(3) x SU(2) x U(1) ]. For SU(5), this is accomplished by ... masses of the order of the unification scale for the twelve X ... gauge bosons. ...[
``` X color charges X electric charges
3 3 3 X X red red -4/3 -1/3
3 3 3 X X green green -4/3 -1/3
3 3 3 X X blue blue -4/3 -1/3
X X X 2 2 antired antigreen antiblue +4/3 +4/3 +4/3
X X X 2 2 antired antigreen antiblue +1/3 +1/3 +1/3```
]... Thus, ... at energies below 10^14 GeV or so the processes mediated by X ... boson exchange can be treated as a four-fermion interaction with strength ... [proportional to 1 / M^2 ] ... where M = 3 x 10^14 GeV is the unification scale. ... these new ... interactions are extremely weak at energies below 10^14 GeV. ... the proton lifetime must be ...[about]... 10^31 yr. ...".
In The Early Universe (paperback edition Addison-Wesley 1994) Kolb and Turner say (at p. 526):
"... SU(5) GUT ...[has]... at the very least one complex 5-dimensional Higgs. The 5-dimensional Higgs contains
the usual doublet Higgs required for W-Boson SSB ...[which]... must acquire a mass of order of a few 100 GeV and
a color triplet Higgs ... which can also mediate B,L [baryon,lepton] violation. The triplet component must acquire a mass comparable to ... M = 3 x 10^14 GeV ... to guarantee the proton's longevity, ...".
At T = 10^15 GeV or about 10^(-34) sec the size of Our Universe is about 10 cm, and the Inflation Era ends.
At T = 10^14 GeV, the SU(5) X-Boson Mass/Energy, Zizzi Reheating occurs and SU(5) Unification ends. At the phase transition at 10^14 GeV the GUT SU(5) is broken to U(3)xU(2)
``` 3 3 3
3 3 3
3 3 3
2 2
2 2```
and then to the Standard Model SU(3) x SU(2) x U(1) with the usual Higgs doublet with VEV around 250 GeV.
After the Inflation Era, Our Universe begins its current phase of expansion
controlled by Gravity according to a MacDowell-Mansouri Mechanism based on the Conformal Group Spin(2,4) = SU(2,2) with 15 generators:
• 6 Lorentz Rotation and Boost Generators;
• 4 Special Conformal Generators;
• 4 Translation Generators; and
• 1 Scalar Dilation Generator.
According to gr-qc/9809061 by R. Aldrovandi and J. G. Peireira:
"... By the process of Inonu-Wigner group contraction taking the limit R -> 0, ...[where R is the de Sitter pseudo-radius, the] ... de Sitter group... [ whether of metric ... (-1,+1,+1,+1,-1) or (-1,+1,+1,+1,+1) , is]... contracted to the group Q, formed by a semi-direct product between Lorentz and special conformal transformation groups, and ... de Sitter space...[is]... reduced to the cone-space N, which is a space with vanishing Riemann and Ricci curvature tensors. As the scalar curvature of the de Sitter space goes to infinity in this limit, we can say that N is a spacetime gravitationally related to an infinite cosmological constant.".
If the 2+4 = 6-dimensional spacetime on which the full Conformal Group Spin(2,4) acts linearly is viewed in terms of an elastic Aether, its rigidity would correspond to the VEV of the X-Boson Higgs Condensate on the order of 10^14 GeV. Since the action of the Conformal Group Spin(2,4) = SU(2,2) is nonlinear on 4-dimensional physical spacetime, the 4-dimensional elastic Aether can, within the Conformal Expanding Domain of Our Universe, be deformed by Special Conformal and Dilation transformations without the restrictions of X-Higgs VEV rigidity on the order of 10^14 GeV.
The Aldrovandi-Peireira paper shows that the 10 Generators (4 Special Conformal and 6 Lorentz) describe Our Universe expanding due to Dark Energy (also known, somewhat inaccurately as it is variable, a cosmological constant).
What about the other Generators?
• The 4 Translation Generators describe spacetime, singularities of which are black holes, and Primordial Black Holes after the End of the Inflation Era make up the Dark Matter of Our Universe that organizes the Large Scale Structure of Galaxy Formation.
• The 1 Scalar Dilation Generator corresponds to the Scalar Higgs of the W-Bosons, with VEV 250 GeV, that gives mass to Ordinary Matter in Our Universe.
Those 15 Conformal Group Spin(2,4) = SU(2,2) Generators indicate that the basic tree-level ratio Dark Energy : Dark Matter : Ordinary Matter is 10 : 4 : 1 = 67 : 27 : 6 . After taking into account the history of Our Universe to the Present Time, that ratio is calculated in the E8 model to be, as of Now, consistent with observations including WMAP:
Dark Energy : Dark Matter : Ordinary Matter = 75.3 : 20.2 : 4.5
After conventional expansion of our universe begins, some regions of our Universe become Gravitationally Bound Domains (such as, for example, Galaxies) in which the 4 Conformal GraviPhoton generators are frozen out, forming domains within our Universe like IceBergs in an Ocean of Water. Within each Gravitationally Bound Domain, spacetime (regarded as Aether) is incompressible with a rigidity on the order of the W-Boson Higgs VEV = 250 GeV.
On a large scale (billions of light years), the Gravitationally Bound Domains are roughly traced out by Galaxies and Clusters of Galaxies
so the the white dots would be the Gravitationally Bound Domains (like rigid pennies on an expanding balloon, or rigid raisins in an expanding cake) and the black background would be the Conformal Expanding Domain of Our Universe.
When the Gravitationally Bound Domains begin to form as Galaxy Cluster Structures in the early stages of the current phase of expansion of Our Universe, according to a 6 December 2006 caption to ESO PR Photo 45/06
"... Spatial, three-dimensional distribution of galaxies in a slice of the Universe as it was 7 billion years ago, based on the VVDS study: brighter areas represent the regions of the Universe with most galaxies. Astonishingly, the galaxy distribution - the 'building blocks' of the large scale structure - takes the shape of a helix at this primordial epoch. ...".
Such a helical structure suggests that helical magnetic fields might be involved in galaxy formation. Further, Battaner et al, in in astro-ph/9801276, astro-ph/9802009, and astro-ph/9911423, suggest that the simplest network pattern for distribution of superclusters of galaxies that is compatible with magnetic field constraints
is made up of octahedra contacting at their vertexes, which is related to a tiling of 3-dim space by cuboctahedra and octahedra, and also to the heptaverton of Arthur Young and octonionic structures of Onar Aam.
Within each Gravitationally Bound Domains there can exist Islands of Conformal Expansion in which all 15 generators of Conformal Spin(2,4) = SU(2,2) remain effective,
like Puddles of Water (red) on an IceBerg (blue) floating in an Ocean of Water (red), so the overall structure of Our Universe in terms of Gravitationally Bound Domains (pennies, raisins, IceBergs) and Conformal Expanding Domains (balloon, cake, water) is quite complicated.
To get some feeling for this structure, begin by considering Clusters of Galaxies to be the largest Gravitationally Bound Domains and then looking at the next level down in sixe, Galaxies. As Hartmann and Miller say in their book Cycles of Fire (Workman Publishing 1987)
• "... Most brilliant of all are quasars ...[with bipolar]... jets ...
• active galazies...[with]... jets ...[and]... disks of gas around black holes in the galactic center ...
• exploding galayies ...[with]... gas ejected from the nucleus, along with strong radio radiation ...
• Seyfert galayies ...[with].. luminous and variable ... nuclei ...
• normal ... galaxies ...[with]... bright central nucleus ...".
Going down one more level in size, to Stars and Stellar Systems like Our Solar System, Hartmann and Miller describe
"... a star just formed ...[in]... its disk-shaped cocoon nebula some of which is being blown out in bipolar jets ... near a dark molecular cloud ... embedded in a ... nebular region ... The dust in the cocoon reddens the star's light ...".
Kohji Tomisaka of Niigata University says in astro-ph/9911166:
"... the star formation process ... angular momentum transfer in the contraction of a rotating magnetized cloud is studied with axisymmetric MHD simulations. Owing to the large dynamic range covered by the nested-grid method, the structure of the cloud in the range from 10 AU to 0.1 pc is explored. First, the cloud experiences a run-away collapse, and a disk forms perpendicularly to the magnetic field, in which the central density increases greatly in a finite time-scale. In this phase, the specific angular momentum j of the disk decreases to about 1/3 of the initial cloud. After the central density of the disk exceeds about 10^10 cm ^(-3) , the infall on to the central object develops. In this accretion stage, the rotation motion and thus the toroidal magnetic field drive the outflow. The angular momentum of the central object is transferred efficiently by the out\flow as well as the effect of the magnetic stress. ... the seeding region (origin of the outflow) ... expands radially outward. This outflow is driven by the gradient of the magnetic pressure of the toroidal magnetic fields ... which are made by the rotation motion ... The magnetic fields exert torque on the outflowing gas to increase its angular momentum. On the other hand, they exert torque on the disk to decrease the angular momentum ... [in about 7000 years] ... the outflow expands and reaches ... [about 4 AU] ... The angular momentum distribution at that time ... has been reduced to ... a factor of 10 ^(-4) from the initial value (i.e. from 10^20 cm^2 s ^(-1) to 10^16 cm^2 s ^(-1)). ... the coupling between gas and magnetic fields ... becomes stronger as long as we consider the seeding region, indicating that the mechanism of angular momentum transfer works also in the later stage of the evolution [after 7000 years]. ...".
If you look closely at the central star in the star-formation image above, you might see Birkeland Current Loops (image from thesurfaceofthesun.com web page) that look up close like Solar Coronal Loops (image from electric-cosmos.org/sun.htm web page).
Up close, Birkeland Current Loops are seen to have braided filament structure (Cygnus Loop image from antwrp.gsfc.nasa.gov.
The scale of Birkeland Current Loops extends beyond Stellar to Galactic (images, SOHO of Sun and NRAO of Fornax A from thunderbolts.info webpage, which said as to NGC
"... a tiny but energy-dense plasmoid at the center of the galaxy ... Fornax A ... discharges energy along oppositely-directed Birkeland filaments (invisible in this image) into the radio lobes. Diffuse currents loop back from the lobes to the spiral arms, where their increasing density triggers star formation as they return to the central plasmoid. ..." ...).
The scale also extends down to Planetary, as is seen in the Jupiter-Io system (image from Anthony Peratt's book Physics of the Plasma Universe (Springer-Verlag 1992)).
The scale may also extend down to Asteroidal. According to a 17 September 1994 article by Jeff Hecht in the New Scientist: "... inclusions ... in chondrules ... in chondrites, the commonest meteorites ...[were]... heated ... to about 2000 kelvin at the birth of the solar system, 4.6 billion years ago. ...[possibly by]... Lightning ... and ... magnetic discharges ... laser tests ... to model the intense visible and infrared light expected near electric or magnetic discharges ... produced dark structures ... remarkably similar to .... inclusions found in chondrules ...".
As can be seen from the image below (adapted from some of the above images and also An Introduction to Modern Astrophysics, by Carroll and Osterlie (Addison-Wesley 1996), Solar System Evolution, by Stuart Ross Taylor (Cambridge 1992), and B. V. Vasiliev's papers astro-ph/0002048 and astro-ph/0002171), Angular Momentum, Magnetic Dipole Moment, and Mass are systematically related or Stars and Stellar Systems and their components.
Angular Momentum J and Magnetic Dipole Moment P are related by a constant that is on the order of unity ( J = P ) (natural units) due to Gravity-Induced Electric Polarization of matter.
As to the relationship between Angular Momentum J and Mass (which, due to the Angular Momentum - Magnetic Dipole Moment relationship, implies a relationship between Magnetic Dipole Moment and Mass), Jack Sarfatti's paper wessonI.PDF describes a 1981 paper by Paul Wesson in which Wesson plotted total angular momentum J against mass M for the solar system, double stars, star clusters, spriral galaxies, the Coma cluster, and the local supercluster in which Wesson found that Angular Momentum J and Mass M are related by a constant p such that
J = p M^2 and J/M = p M.
Wesson's observations indicate approximately, that
• p = 10^(-16) g^(-1) cm^2 sec^(-1) (cgs units) and
• p = 1 / alpha_EM = 137 (natural units G = hbar = c = 1).
For Elementary Specific Angular Momentum J/M = hbar, in natural units where hbar = 1 and the unit of mass is the Planck mass Mplanck:
M = (J/M) / p = alpha_EM, which gives M = Mplanck / 137
which is roughly the mass of an SU(5) Magnetic Monopole.
Wesson's observations are consistent with a Compton Radius Vortes Kerr-Newman Black Hole related to the Wesson Force. The equation (in units with G = c = hbar = 1) for a Kerr-Newman Black Hole with coincident outer and inner event horizons and with Q = 1
meaning that the Black Hole Core has UNIT amplitude to absorb or emit a gauge boson, in accord with Feynman's statement in his book QED (Princeton 1988): "... e - the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to -0.0854... the inverse of its square: about 137.03... has been a mystery ... all good theoretical physicists put this number up on their wall ..."
is Q^2 + (J/M)^2 = 1 + (J/M)^2 = M^2. Dividing through by M^2, you get
J^2/M^4 = (J/M^2)^2 = 1 - (1/M)^2
For the Wesson force for which J = p_wesson M^2 with p_wesson = 1 / alpha_EM
J = sqrt(1 - (1/M)^2) M^2 = p_wesson M^2 = 137 M^2
so that 1 - (1/M)^2 = 137^2 and 1/M = sqrt(1 -137^2) = 137 i = 137 exp(pi/2)
Then the magnitude | 1 / Mwesson | = 137 which (since the units are natural units with G = c = hbar = 1) implies that
Mwesson = Mplanck / 137 = 10^19 / 137 = 7.3 x 10^16 GeV
which is consistent with Wesson's observation that
Mwesson = 7.3 x 10^16 GeV = Mmonopole
The Linear Angular Momentum, Magnetic Dipole Moment, and Mass Relationships hold in Gravitationally Bound Domains, which are characterized by
Energy Below about 250 GeV = VEV of W-Boson Higgs where:
• the 1 Scalar Dilation and 4 Special Conformal Transformations of the 15-dimensional Conformal Group Spin(2,4) = SU(2,2) are frozen out;
• the 4 Translations and 6 Lorentz Transformations combine as described in gr-qc/9809061 by R. Aldrovandi and J. G. Peireira: "... By the process of Inonu-Wigner group contraction taking the limit R -> oo, ...[where R is the de Sitter pseudo-radius, the] ... de Sitter group... [ whether of metric ... (-1,+1,+1,+1,-1) or (-1,+1,+1,+1,+1) , is]... reduced to the Poincare group P ...[formed by a semi-direct product between Lorentz and translation groups] and ... de Sitter space...[is]... reduced to the Minkowski space M. As the scalar curvature of the de Sitter space goes to zero in this limit, we can say that M is a spacetime gravitationally related to a vanishing cosmological constant.";
• If the 1+3 = 4-dimensional spacetime on which the 6+4 = 10-dimensional Poincare Group Spin(1,3) + 4-Translations acts linearly is viewed in terms of an elastic Aether, its rigidity would correspond to the VEV of the W-Boson Higgs Condensate on the order of 250 GeV. Within Gravitationally Bound Domains, since Special Conformal and Dilation transformations are frozen out, the rigidity of the 4-dimensional elastic Aether corresponds to the W-Higgs VEV of about 250 GeV.
• the T-Tbar Quark Condensate W-Boson Higgs mechanism connects Gravitational Mass (based on the Planck Mass Mplanck) carried by Gravity with ElectroMagnetic Charge (based on the Magnetic Monopole Mmono) carried by the U(2) ElectroWeak Force so that J / Mmono = Mplanck.
Although the Wesson angular momentum / mass relationship covers a very wide range of mass scales (at least from Asteroids to Stars and Stellar Systems), it is not Universal. Some other angular momentum / mass relationships are:
• A neutral Kerr-Newman Black Hole, with coincident outer and inner event horizons, has Q^2 + (J/M)^2 = M^2 with charge Q = 0, so that (J/M)^2 = M^2, J^2 = M^4, J = M^2, and p_neutralKNblackhole = 1 (in natural units) = 1 x ( 1 / 2.2 x 10^(-5) ) Planck mass/gm x 3 x 10^10 (cm/sec)/c x 1.6 x 10^(-33) cm/Planck Length = 3 x 1.6 /2.2 x 10^(5 + 10 - 33) = 2.2 x 10^(-18) g^(-1) cm^2 sec^(-1).
• The proton angular momentum is (1/2) hbar, which is roughly (1/2) hbar = (1/2) x 10^-27 gm cm^2 sec(-1), and the proton mass is roughly Mproton = 2 x 10^(-24) gm, so that p_proton = (1/2) hbar / (Mproton)^2 = (1/2) x 10^(-27) / 4 x 10^(-48) = (1/8) x 10^21 gm^(-1) cm^2 sec(-1) = 1.2 x 10^20 gm^(-1) cm^2 sec(-1);
• The quark angular momentum is (1/2) hbar, which is roughly (1/2) hbar = (1/2) x 10^-27 gm cm^2 sec(-1), and the constituent (not current) mass of the up or down quark, 1/3 of the proton mass, is roughly Mquark = 2/3 x 10^(-24) gm, so that p_quark = (1/2) hbar / (Mquark)^2 = (1/2) x 10^(-27) / (4/9) x 10^(-48) = (9/8) x 10^21 gm^(-1) cm^2 sec(-1) = 1.1 x 10^21 gm^(-1) cm^2 sec(-1).
• The electron angular momentum is (1/2) hbar, which is roughly (1/2) hbar = (1/2) x 10^-27 gm cm^2 sec(-1), and the electon mass is about Melectron = 10^(-27) gm, so that p_electron = (1/2) hbar / (Melectron)^2 = (1/2) x 10^(-27) / 10^(-54) = (1/2) x 10^27 gm^(-1) cm^2 sec(-1) = 5 x 10^26 gm^(-1) cm^2 sec(-1).
• The neutrino angular momentum is (1/2) hbar, which is roughly (1/2) hbar = (1/2) x 10^-27 gm cm^2 sec(-1), and the neutrino mass is about Mneutrino = zero (or very small), so that p_neutrino = (1/2) x 10^(-27) / (zero (or very small) )^2 = infinity (or very large).
The differences may be that the Wesson relationship involves a combination of ElectroMagnetic and Gravity forces during Collapse/Formation, while, for the others, the forces involved are:
• p_neutrino = infinity (or very large) g^(-1) cm^2 sec^(-1) - No EM and No direct Gravity.
• p_electron = 5 x 10^26 g^(-1) cm^2 sec^(-1) - mostly EM, with minimal Gravity.
• p_quark = 1.1 x 10^21 gm^(-1) cm^2 sec(-1) - mostly EM and Color, with minimal Gravity.
• p_proton = 1.2 x 10^20 g^(-1) cm^2 sec^(-1) - mostly EM and Color and Pion-Strong, with minimal Gravity.
• p_wesson = 10^(-16) g^(-1) cm^2 sec^(-1) - balanced EM and Gravity.
• p_neutralKNblackhole = 2.2 x 10^(-18) g^(-1) cm^2 sec^(-1) - No EM, just Gravity.
Can a laboratory-scale experiment extend the Wesson-type relationship between Angular Momentum J and Magnetic Dipole Moment P to sub-asteroid laboratory mass scales ?
Saul-Paul Sirag, in his 3 November 2000 paper Vigier III: "Gravitational Magnetism: an Update", says:
"... The most straightforward test ... would be to measure directly the magnetic field of a rotating neutral body (which is not also a ferromagnetic substance). Blackett ... suggested that a 1-meter bronze sphere spun at 100 Hz would do nicely, except that this is the maximum safe speed, and there are severe problems in nulling out extraneous magnetic fields. With modern SQUIDs and mu-metal shielded rooms, such an experiment can be attempted. Exactly such an experimental design ... was described at the SQUID '85 conference in Berlin. However, the results of this experiment have not been published. ...".
What about MicroScale Connections between Angular Momentum and Electromagnetism ?
The MicroScale Particle Physics proportionality between Q and M obviously does not extend far into the MacroScale, since Asteroids, Planets, and Stars do not have large net Electric Charges.
The Kerr-Newman Black Hole structure of a Compton Radius Vortex has the property that the square of the electric charge Q plays the same role as the square of J/M (specific angular momentum, or angular momentum over mass) in that their sum, relative to the square of the mass, determines whether the outer and inner event horizons are
• separate Q^2 + (J/M)^2 < M^2 ,
• coincidental Q^2 + (J/M)^2 = M^2 , or
• complex Q^2 + (J/M)^2 > M^2 .
Jack Sarfatti relates Compton Radius Vortex structure of Elementary Particles to the formula of Saul-Paul Sirag (based on earlier work of Blackett and Schuster, and perhaps Pauli) in his 1979 Nature paper Gravitational Magnetism (vol. 278 pp. 535-538, 5 April 1979), in which Saul-Paul Sirag says:
"The gravi-magnetic hypothesis proposes that a rotating mass, measured in gravitational units, has the same magnetic effect as that of a rotating charge, measured in electrical units. The respective force constants determine this relationship
G^(1/2) M = k^(1/2) Q
where G is the gravitational constant, M is mass, k is the Coulomb constant, and Q is electric charge. ... Thus the ratio of magnetic moment P to angular momentum J for a sphere of mass M, density factor f, radius r, angular velocity w, and magnetic field B is (in SI units [with magnetic permeability muo of the vacuum]):
P / J = ( (5/4) 4 pi B / muo ) ( r / f w M ) = G^(1/2) / 2 k^(1/2)
... A priori, we should expect a correlation between P and J. ... The surprise is that this correlation ratio, P/J, should turn out to be close to P = ( G^(1/2) /2 k^(1/2) ) J. ... The gravi-magnetic hypothesis (stated in [ Log = log_10] form) predicts a P/J of -10.37. The mean P/J of the data points plotted in Fig. 1 is -11.13 with a standard deviation of 0.42. ... Therefore, for a given value of the angular momentum J, the gravi-magnetic hypothesis overstates the magnetic dipole moment P by a factor of 10^(-10.37 - (-11.13)) = 10^0.76 = 5.75. Saul-Paul Sirag says: "... the deviation from the gravi-magnetic hypothesis line is fairly systematic. ... deviations ... may well be due to electrical-magnetic effects. ... [ P / J = G^(1/2) / 2 k^(1/2) ] predicts a surface field about three times greater than that measured at the surface of the Earth. ... the Earth is not a uniformly dense sphere ... At the Earth's surface ... [ ( (5/4) 4 pi B / muo ) ( r / f w M ) = G^(1/2) / 2 k^(1/2) ] predicts a B of 2.1 x 10^(-4) T. That is not, however, a great deal more than the 1.4 x 10^(-4) T that [the equation] predicts for the surface of the Earth's core. ... this core magnetism predicted by the gravi-magnetic equation is greater than the magnetic field of 6 x 10^(-5) T measured at the Earth's surface. ... This is what we expect if we suppose that gravitational magnetism is modified by an electrical-magnetic effect stronger at the Earth's surface than in the interior. ...".
B. G. Sidharth, in physics/9908004, says:
"... We first observe that as is known an assembly of Fermions below the Fermi temperature occupies each and every single particle level, and this explains the fact that it behaves like a distribution of Bosonic phonons: The Fermions do not enjoy their normal degrees of freedom. ... [there is a] Bosonization or semionic effect. ... Let us now consider an assembly of N electrons. As is known, if N+ is the average number of particles with spin up, the magnetisation per unit volume is given by
M = mu ( 2 N+ - N ) / V
where mu is the electron magnetic moment. At low temperatures, in the usual theory, N+ = N / 2, so that the magnetisation ... is very small.
On the other hand, for Bose-Einstein statistics we would have, N+ = N. With the above semionic statistics we have,
N+ = b N, 1/2 < b < 1,
According to a 23 March 2006 ESA news web page:
"... Martin Tajmar, ARC Seibersdorf Research GmbH, Austria; Clovis de Matos, ESA-HQ, Paris; and colleagues have measured ... a gravitomagnetic field ... generate[d] ...[by]... a moving mass ... Their experiment involves a ring of superconducting material rotating up to 6 500 times a minute. Superconductors are special materials that lose all electrical resistance at a certain temperature. Spinning superconductors produce a weak magnetic field, the so-called London moment. The new experiment tests a conjecture by Tajmar and de Matos that explains the difference between high-precision mass measurements of Cooper-pairs (the current carriers in superconductors) and their prediction via quantum theory. They have discovered that this anomaly could be explained by the appearance of a gravitomagnetic field in the spinning superconductor (This effect has been named the Gravitomagnetic London Moment by analogy with its magnetic counterpart). ... Although just 100 millionths of the acceleration due to the Earth's gravitational field, ...[ gr-qc/0603033 says "... the peaks ... "only" 100 micro g ... are 30 orders of magnitude higher than what general relativity predicts classically ..."]... The electromagnetic properties of superconductors are explained in quantum theory by assuming that force-carrying particles, known as photons, gain mass. By allowing force-carrying gravitational particles, known as the gravitons, to become heavier, they found that the unexpectedly large gravitomagnetic force could be modelled. ... The papers can be accessed on-line at the Los Alamos pre-print server using the references: gr-qc/0603033 and gr-qc/0603032. ...".
While Garrett Lisi was writing his paper 0711.0770, just before the Full Moon before Halloween ( according to a Tommaso Dorigo blog post on 25 October 2007 ) "... Comet 17P/Holmes has experienced a huge outburst, brightening ... in the matter of hours ... 400,000 times ...[around]... 13:40UT on October 24th ..[according to]...Seiichi Yoshida ...".
On Guy Fawkes Day, 5 November 2007, the day before Garrett Lisi posted 0711.0770, the Astronomy Picture of the Day was the above image, and the antwrp.gsfc.nasa.gov web site said:
"... Comet Holmes continues to be an impressive sight to the unaided eye. The comet has diminished in brightness only slightly, and now clearly appears to have a larger angular extent than stars and planets. Astrophotographers have also noted a distinctly green appearance to the comet's coma over the past week. Pictured above [ by Vicent Peris and Jose Luis Lamadrid ] over Spain in three digitally combined exposures, Comet 17P/Holmes now clearly sports a tail. The blue ion tail is created by the solar wind impacting ions in the coma of Comet Holmes and pushing them away from the Sun. Comet Holmes underwent an unexpected and dramatic increase in brightness starting only two weeks ago. The detail visible in Comet Holmes' tail indicates that the explosion of dust and gas that created this dramatic brightness increase is in an ongoing and complex event. ...".
### Click here to see a movie (.mov) of the 240 E8 root vectors, color-coded as:
• D4+D4+64 = 24+24+64 =112 root vectors of the Spin(16) in E8:
• 24 yellow points for one D4 in the Spin(16) in E8
• 24 purple points for the other D4 in the Spin(16) in E8
• 64 blue points for the 8 vectors times 8 Dirac gammas in the Spin(16) in E8
• 64+64 = 128 root vectors of the half-spinor of Spin(16) in E8:
• 64 red points for the 8 first-generation fermion particles times 8 Dirac gammas
• 64 green points for the 8 first-generation fermion antiparticles time 8 Dirac gamma
The 240 root vectors of my E8 physics model can also be projected as in this image made using a root vector rotation web applet by Carl Brannen
Note that the points are grouped (roughly from left to right) in red-green sets (RG) and blue-purple-yellow sets (BPY) as
8 RG + 28 BPY + 56 RG + 56 BPY + 56 RG + 28 BPY + 8 RG
that, if you add 8 Cartan elements of E8 to the central 56 blue-purple-green, you get a representation of a 7-grading of E8 described by Tomas Larsson when he said in a post to the spr thread Re: Structures preserved by e_8 : "... e_8 also seems to admit a 7-grading,
g = g_-3 + g_-2 + g_-1 + g_0 + g_1 + g_2 + g_3, of the form
e_8 = 8 + 28* + 56 + (sl(8) + 1) + 56* + 28 + 8*. ...".
[ Note that g_0 = 8+56 = 64 = U(8) is in Spin(16) = U(8) plus D4 plus D4 ]
which 7-grading shows the fermionic (odd grade) nature of the RG points and the bosonic (even grade) nature of the BPY points.
In the image, the 24 yellow points correspond to the 24 root vectors of D4 which is used to construct Gravity by a generalized MacDowell-Mansouri mechanism based on the 15-dimensional D3 = A3 Conformal Group Spin(2,4) = SU(2,2). To help get started with visualization, here are the 24 yellow points
in the image. Note that the 24 yellow points form three sets:
• 6 near the top, in a 1 4 1 pattern corrresponding to the 6 vertices of an octahedron;
• 12 in the middle, in a 4 4 4 pattern corresponding to the 12 vertices of a cuboctahedron;
• 6 near the bottom, in a 1 4 1 pattern corresponding to the 6 vertices of a second octahedron.
Note also that a 24-cell can be seen as being made up of a cuboctahedron and two octahedra as in this stereo image:
in which the cuboctahdron is green and the two octahedra are red and blue. So
• 24 yellow points form a 24-cell, which is the root vector polytope of the D4 Lie algebra that gives MacDowell-Mansouri Gravity
• 24 purple points correspond to the 24 root vectors of D4* which is used to construct the U(3) x SU(2) x U(1) Standard Model based on the 15-dimensional D3 = A3 group SU(4) and its 9-dimensional subgroup U(3) and the 6-dimensional SU(4) / U(3) = CP3 Twistor space, with the U(3) giving the SU(3) x U(1) of the Standard Model and the CP3 Twistor space giving (via relation to quaternionic structure) the SU(2) of the Standard Model. Note that the 24 purple points form a pattern similar to that of the 24 yellow points shown above.
• Each of the remaining three sets of 64 vertices is of the form 8 x 8g, where 8g denotes the 8 Dirac gamma basis elements of the Dirac gammas of an 8-dimensional Kaluza-Klein spacetime.
• 64 blue points correspond to 8v x 8g, where 8v corresponds to the 8 basis elements of an 8-dimensional Kaluza-Klein spacetime, so that the 64 blue points correspond to an 8x8 matrix of the 8 spacetime basis elements with respect to 8 Dirac gammas.
• 64 red points correspond to 8s' x 8g, where 8s' corresponds to D4 +half-spinors and to the 8 first-generation fermion particles (electron, neutrino, red up quark, green up quark, blue up quark, red down quark, green down quark, blue down quark), so that the 64 red points correspond to an 8x8 matrix of the 8 first-generation fermion particles with respect to 8 Dirac gammas.
64 green points correspond to 8s" x 8g, where 8s" corresponds to D4 -half-spinors (mirror image to +half-spinors) and to the 8 first-generation fermion antiparticles,, so that the 64 green points correspond to an 8x8 matrix of the 8 first-generation fermion antiparticles with respect to 8 Dirac gammas.
### Physical Interpretations of E8 root vectors and 3 Generations of Fermion Particles and Anti-Particles
In my E8 physics model, 64 of the 240 E8 root vectors (blue in the image above) are represented by 64 = 8x8 = 8 dimensions of full 8-dim spacetime x 8 Dirac Gammmas
• The 8 dimensions of full 8-dim spacetime are denoted here by basis {t,x,y,z,e,ie,je,ke} (or sometimes with capital letters {T,X,Y,Z,E,IE,JE,KE} )to indicate how it appears after dimensional reduction to get {t.x.y,z} is the basis for 4-dimensional physical spacetime and {e,ie,je,ke} is the basis for 4-dimensional CP2 internal symmetry space.
• The 8 Dirac Gammas are denoted here by basis {1,i,j,k,e,ie,je,ke}.
The 8 Dirac components of the 8 spacetime vector dimensions (such as Xk etc ) physically describe effective spacetime curvature in full 8-dimensional (high-energy) spacetime analogous to gravitational curvature in 4-dimensional (low-energy) physical spacetime.
A second set of 64 of the 240 E8 root vectors (red in the image above) are represented by 64 = 8x8 = 8 half-spinor fundamental first-generation fermion particles x 8 Dirac Gammmas or, equivlalently, the 8 covariant components of the 8 fundamental first-generation fermion particles.The 8 fundamental first-generation fermion particles are denoted here by
• electron = EL
• red up quark = UR
• green up quark = UG
• blue up quark = UB
• red down quark = DR
• green down quark = DG
• blue down quark = DB
• electron neutrino = NU
Therefore, the 8x8 = 64 covariant components of the fundamental first-generation fermion particles are:
```ELt ELx ELy ELz ELe ELie ELje ELke
URt URx URy URz URe URie URje URke
UGt UGx UGy UGz UGe UGie UGje UGke
UBt UBx UBy UBz UBe UBie UBje UBke
DRt DRx DRy DRz DRe DRie DRje DRke
DGt DGx DGy DGz DGe DGie DGje DGke
DBt DBx DBy DBz DBe DBie DBje DBke
NUt NUx NUy NUz NUe NUie NUje NUke```
A third set of 64 of the 240 E8 root vectors (green in the image above) are represented by 64 = 8x8 = 8 half-spinor fundamental first-generation fermion anti-particles x 8 Dirac Gammmas that can be represented by notation similar to that of the second set of 64 (red in the image above).
The 3 sets of 64 of the 240 E8 root vectors (blue, red, and green in the image above) are related by triality. To try to reduce confusing clutter, only some of the blue (8-dim spacetime) and red (fundamental first-gemeration fermion particle) root vector vertices are explictly labelled. I hope that enough labelling has been done to clearly indicate how the remaining blue, red, and green vertices should be labelled.
As to the 24 yellow and 24 purple vertices, they represent the root vectors of two copies of D4 (one D4 for Gravity and another D4 for the Standard Model) that live within the Spin(16) inside E8, with structure
E8 / Spin(16) = 64 + 64
Spin(16) / D4xD4 = 64
The spinor fermion term of the full 8-dimensional Lagrangian of my E8 physics model is of the form
INTEGRAL over {t,x,y,z,e,ie,je,ke} of SPINOR {ELt ,ELx ,ELy ,ELz ,ELe ,ELie ,ELje ,ELke} ...(other fermions)
After dimensional reduction according to the Mayer Mechanism from a uniform octonionic 8-dimensional spacetime with basis
{t,x,y,z,e,ie,je,ke}
down to a quaternionic 4+4 = 8-dimensional Klauza-Klein spacetime with basis
{t,x,y,z} of physical spacetime plus {e,ie,je,ke} of internal symmetry space
the spinor term of the Lagrangian breaks down into the sum of four parts
1 - INTEGRAL over {t,x,y,z} of SPINOR {ELt ,ELx ,ELy ,ELz} ...(other fermions)
2 - INTEGRAL over {t,x,y,z} of SPINOR {ELe ,ELie ,ELje ,ELke} ...(other fermions)
3 - INTEGRAL over {e,ie,je,ke} of SPINOR {ELt ,ELx ,ELy ,ELz } ...(other fermions)
4 - INTEGRAL over {e,ie,je,ke} of SPINOR {ELe ,ELie ,ELje ,ELke} ...(other fermions)
### First Generation
1 - is just the usual Standard Model spinor fermion term for 4-dim physical spacetime and first-generation fermions, so 1 represents first-generation fermions. The 8 first-generation fermion particles and antiparticles each correspond to the 8 octonion basis elements, so that the first-generation ferrmion particles and the first-generation fermion antiparticles each correspond to the Octonions O.
### Second Generation
2 - differs from the usual Standard Model in that the SPINOR has components in the {e,ie,je,ke} internal symmetry space instead of in the {t,x,y,z} physical spacetime. Transformation from the SPINOR with components in the {e,ie,je,ke} internal symmetry space to a SPINOR with components in the {t,x,y,z} physical spacetime introduces a 4x4 matrix
``` t x y z
e * * * *
ie * * * *
je * * * *
ke * * * *```
Introduction of those new 4x4 = 16 degrees of freedom of that Transformation corresponds to introducing a new octonion corresponding to a second copy of the 8 fundamental fermion particles and a new octonion corresponding to a second copy of the 8 fundamental fermion antiparticles, so that the second-generation fermion particles and the second-generation fermion antiparticles each correspond to pairs of Octonions OxO.
3 - differs from the usual Standard Model in that the base manifold spacetime has components in the {e,ie,je,ke} internal symmetry space instead of in the {t,x,y,z} physical spacetime. Transformation from the base manifold spacetime with components in the {e,ie,je,ke} internal symmetry space to a base manifold spacetime with components in the {t,x,y,z} physical spacetime introduces a 4x4 matrix as described in 2. Introduction of those new 4x4 = 16 degrees of freedom of that Transformation corresponds to introducing a new octonion corresponding to a second copy of the 8 fundamental fermion particles and a new octonion corresponding to a second copy of the 8 fundamental fermion antiparticles, so that the second-generation fermion particles and the second-generation fermion antiparticles each correspond to pairs of Octonions OxO.
### Third Generation
4 - differs from the usual Standard Model in that the SPINOR has components in the {e,ie,je,ke} internal symmetry space instead of in the {t,x,y,z} physical spacetime AND the base manifold spacetime has components in the {e,ie,je,ke} internal symmetry space instead of in the {t,x,y,z} physical spacetime. Transformation from the SPINOR with components in the {e,ie,je,ke} internal symmetry space to a SPINOR with components in the {t,x,y,z} physical spacetime introduces a 4x4 matrix
``` t x y z
e * * * *
ie * * * *
je * * * *
ke * * * *```
Introduction of those new 4x4 = 16 degrees of freedom of that Transformation corresponds to introducing a new octonion corresponding to a second copy of the 8 fundamental fermion particles and a new octonion corresponding to a second copy of the 8 fundamental fermion antiparticles.
Transformation from the base manifold spacetime with components in the {e,ie,je,ke} internal symmetry space to a base manifold spacetime with components in the {t,x,y,z} physical spacetime introduces a second 4x4 matrix
``` t x y z
e * * * *
ie * * * *
je * * * *
ke * * * *```
Introduction of the second new 4x4 = 16 degrees of freedom of that Transformation corresponds to introducing a second new octonion corresponding to a second copy of the 8 fundamental fermion particles and a second new octonion corresponding to a second copy of the 8 fundamental fermion antiparticles, so that the third-generation fermion particles and the third-generation fermion antiparticles each correspond to triples of Octonions OxOxO.
# E8 Geometry and Physics
( E8, the Lie algebra of an E8 Physics Model, is rank 8 and has 8+240 = 248 dimensions - Compact Version - Euclidean Signature - for clarity of exposition - much of this is from the book Einstein Manifolds (by Arthur L. Besse, Springer-Verlag 1987):
Type EVIII rank 8 Symmetric Space Rosenfeld's Elliptic Projective Plane (OxO)P2
# E8 / Spin(16) = 64 + 64
### and 64 looks like ( 8 fermion antiparticles ) x ( 8 Dirac Gammas )
Type BDI(8,8) rank 8 Symmetric Space real 8-Grassmannian manifold of R16 or set of the RP7 in RP15
# Spin(16) / ( Spin(8) x Spin(8) ) = 64
### The 64-dim Base Manifold looks like ( 8-dim Kaluza-Klein spacetime ) x ( 8 Dirac Gammas )
Due to the special isomorphisms Spin(6) = SU(4) and Spin(2) = U(1) and the topological equality RP1 = S1
### CP3 contains CP2 = SU(3) / U(1) x SU(2) and so gives SU(2) weak force
Note that in the above image some of the 240 E8(8) vertices are projected to the same point:
• each of the 6 vertices in the center (with white dots) are points to which 3 vertices are projected, so that each of the 6 circles with a white dot represents 3 vertices;
• each of the 24 vertices surrounded by 6 same-color nearest neighbors (with yellow dots) are points to which 2 vertices are projected, so that each of the 24 circles with a yellow dot represents 2 vertices.
Using the color-coding, the 240 root vector vertices of E8 correspond to the graded structure of the 256-dim Cl(8) Clifford algebra as follows:
## = 1 + 8 + (24+4) + (24+4+28) + (32+3+3+32) + (28+4+24) + (24+4) + 8 + 1
In the above, the black underlined 4+4 = 8 correspond to the 8 E8 Cartan subalgebra elements that are not represented by root vectors, and the black non-underlined 1+3+3+1 = 8 correspond to the 8 elements of 256-dim Cl(8) that do not directly correspond elements of 248-dim E8.
Note that in the above image some of the 240 E8 vertices are projected to the same point:
Since an important aspect of both the E8 model and the Cl(8) model is the representation of fermions by spinor-type structures,
such as by 8-dimensional Spin(8) +half-spinors and 8-dimensional Spin(8) -half-spinors in the Cl(8) model
and by 128-dimensional Spin(16) half-spinors in the E8 model (based on the identification of 248-dimensional E8 as the sum of the 120-dimensional Spin(16) adjoint plus a 128-dimensional Spin(16) half-spinor space)
it is useful to see how spinor-type structures appear in the above-described structure of E8 by enclosing theire enumeration in [ brackets ]:
## E8 = E7 + ([16]+8+3+1) + ([16]+8+3+3) + (1+[16]+8+3+1) + ([16]+8+3+1)
256-dim Cl(8) - 248-dim E8 = 8
Note that each of the [16] above live in the 27-dimensional exceptional Jordan algebra J3(O), which is the algebra of 3x3 Hermitian Octonion matrices
The basis of my E8 physics model, which is based on Garrett Lisi's E8 physics model, is the figure
in which the two D4 inside E8 are shown in cyan and magenta.
Another figure I use, based on another projection of E8 root vectors into 2 dimensions, shows a nesting
D4 in D5 in E6 in E7 in E8
in which the two D4 of E8 are ( showing multiplicities 3 and 2 of points to which multiple root vectors are projected )
### The central red 24 of the inner D4 are obviously contained in E6 in E8.
The outer magenta 6 of the outer D4 in E7 outside E6 are the two central 3 of:
126 root vectors of E7 - 72 root vectors of E6 = 54 = 2x(24+3) =
2 circular 12+12 + 2 central 3
The magenta 6 root vectors of the two central 3 correspond to the root vectors of a 7-sphere S7 ( which, although not a Lie algebra due to Octonion non-associativity, is a Malcev algebra )
The magenta 48 = 54-6 of the two E7 circular 12+12 are related to the blue 16 of 8-complex-dimensional Kaluza-Klein vector spacetime D5 outside red D4 so that the magenta 48 and blue 16 combine to form a 48+16 = 64-real-dimensional = 8-octonionic-dimensional vector spacetime .
Therefore, E7 looks like E6 plus octonification of vector spacetime plus a 7-sphere S7.
The outer cyan 18 of the outer D4 in E8 outside E7 are the four central 3 plus outside 6 of:
240 root vectors of E8 - 126 root vectors of E7 = 114 = 108 + 6 = 4x(24+3) + 6 =
4 circular 12+12 + 4 central 3 + outside 6
The cyan 12 root vectors of the four central 3 correspond to the root vectors of the 14-dimensional Lie algebra G2.
The outside 6 root vectors correspond to the root vectors of a 7-sphere S7 ( which, although not a Lie algebra due to Octonion non-associativity, is a Malcev algebra )
The cyan 96 = 108-12 of the four E8 circular 12+12 are related to the green 32 of 16-complex-dimensional full-spinor E6 fermion first-generation particles and antiparticles so that the cyan 96 and green 32 combine to form 96+32 = 128-real-dimensional = 16-octonionic-dimensional representation space for full-spinor fermion first-generation particles and antiparticles.
Therefore, E8 looks like E7 plus octonification of representation space for full-spinor fermion first-generation particles and antiparticles plus G2 plus a 7-sphere S7.
### 7-sphere from E7 plus G2 and 7-sphere from E8
The outer magenta-cyan 24
look like the outer red-green-blue 24
in my basic E8 physics model figure
in which the second D4 is represented by its magenta 24
so the physics interpretations of the two projections are related by interchanging, in the basic figure,
its outer red-green-blue 24 and its magenta 24.
The basic figure, so interchanged, and with cyan changed to bright yellow and magenta changed to dark yellow. looks like
from the view of the 240 of E8 as 8 circles of 30.
To get more feel for the 8 circles of 30, consider the comment by rntsai on N-category Cafe that mentioned Kostant's "... decomposition of e8 into 31 cartan's ..." and said: "... It's ... related to e8/(d4+d4) decomposition :
e8/(d4+d4)= (28,1)+(1,28) + (8v,8v) + (8S+,8S+) + (8S-,8S-)
The last 3 terms can be seen as 24 8-dim spaces ...
The other 7 cartans are inside d4+d4 ... there are probably several ways to identify [them]...".
Another way (other than the one mentioned by rntsai) is to decompose d4 into a 14-dim G2 plus two 7 spheres S7 + S7, getting
d4 = 14 + 7 + 7
14-dim rank-2 G2 has 7 = 14/2 Cartans and G2 can be seen as the sum of two 7-dimensional representations. If each 7 is represented by the 7 imaginary octonions { i,j,k,e,ie,je,ke } then the 7 Cartans of G2 are the 7 pairs (one from each of the 7 in G2):
i i
j j
k k
e e
ie ie
je je
ke ke
Note that to make an Abelian Cartan, the pairs must match, because only matching pairs close to form an Abelian Cartan (this can be seen by looking at the octonion products).
28-dim rank-4 d4 has 28/4 = 7 Cartans and d4 looks like G2 plus S7 plus S7 and since G2 decomposes into two 7 representations
d4 decomposes into 7 + 7 + 7 + 7 ( where the first two 7 are from G2 and the other two come from the two S7 )
and the 7 Cartans of d4 are ( in terms of octonion imaginaries )
i i i i
j j j j
k k k k
e e e e
ie ie ie ie
je je je je
ke ke ke ke
Again, note that all elements of the quadruples must match to get Abelian Cartan structure.
When you look at d4 + d4 to get 8-element Cartans of E8, all 8 elements must again match up to get Abelian Cartan structure, so the 7 Cartans of E8 that come from d4 + d4 look like
i i i i i i i i
j j j j j j j j
k k k k k k k k
e e e e e e e e
ie ie ie ie ie ie ie ie
je je je je je je je je
ke ke ke ke ke ke ke ke
Of course, this octonion structure is also reflected in the "... 24 8-dim spaces ..." described by rntsai as "... (8v,8v) + (8S+,8S+) + (8S-,8S-) ..." so that all 31 of the 8-dim Cartans of E8 have nice octonionic structure.
Also note that when you make a 240-element E8 root vector diagram of 8 circles each with 30 vertices, 8 of the 248 E8 generators are missing, so that you must leave out one of the 31 Cartan 8-element sets. Seeing E8 in terms of E8 = 120 + 128 = d4 + d4 + 8x8 + 8x8 + 8x8 it is most natural to see the Cartan as being one of the Cartan sets of 8 coming from the d4 + d4, but you could see the E8 from other points of view by using other Cartan sets of 8 to determine which of the 248 were the 8 omitted from the root vector diagram.
rntsai also said, about "The last 3 terms [that] can be seen as 24 8-dim spaces", "... You can verify that these are abelian, so calling them cartan is justified. ...". Each of the 8x8 look like
``` g1 g2 g3 g4 g5 g6 g7 g8
1
i
j
k
e
ie
je
ke ```
where the E8/octonionic { 1,i,j,k,e,ie,je,ke } represent 8-dim spacetime (8v) or 8 fermion fundamental first-generation particles (8S+) or 8 fermion fundamental first-generation antiparticles (8S-) and the g1 ... g8 are Dirac gammas of 8-dimensional Kaluza-Klein spacetime.
Those Dirac gammas, although they have intrinsic Clifford algebra structure, can be regarded with respect to E8/octonionic structure as only indicating physical Dirac gamma component structure of the E8/octonionic { 1,i,j,k,e,ie,je,ke } so that they are consistent with each of the rows
``` 1g1 1g2 1g3 1g4 1g5 1g6 1g7 1g8
ig1 ig2 ig3 ig4 ig5 ig6 ig7 ig8
jg1 jg2 jg3 jg4 jg5 jg6 jg7 jg8
kg1 kg2 kg3 kg4 kg5 kg6 kg7 kg8
eg1 eg2 eg3 eg4 eg5 eg6 eg7 eg8
ieg1 ieg2 ieg3 ieg4 ieg5 ieg6 ieg7 ieg8
jeg1 jeg2 jeg3 jeg4 jeg5 jeg6 jeg7 jeg8
keg1 keg2 keg3 keg4 keg5 keg6 keg7 keg8```
being able to represent an 8-element E8 Cartan subalgebra, no matter which of the three representations 8v, 8S+, or 8S- (which are related to each other by triality) is used.
### D4 and D4* and Higgs
Frank Dodd (Tony) Smith, Jr. - March 2008
Consider the two D4 in the E8 physics model based on E8 / Spin(16) that I like to use, and denote them D4 and D4* to distinguish between them.
When transformed from the 8-circle projection to the basic projection of my E8 physics model, D4 and D4* look like
The basic figure of my E8 physics model
has, for the D4 and D4*, cyan intead of bright yellow and magenta instead of dark yellow, so that in the basic figure the D4 and D4* look like
28-dim D4 ( with 24 root vectors ) gives Gravity from its 15+1 = 16-dimensional D3xU(1).
The 12-dimensional symmetric space D4 / D3xU(1) corresponds to the Lie spheres in R8.
28-dim D4* ( with 24 root vectors ) gives the Standard Model SU(3) and SU(2) and U(1) from its 15+1 = 16-dimensional A3xU(1) = U(4).
The 12-dimensional symmetric space D4* / U(4) corresponds to the set of complex structures in R8.
Since D3 = A3 and D3xU(1) = U(4), the Lie spheres in R8 looks like the set of complex structures in R8, so from when I refer to the "set of complex structures in R8" I am referring to both of those things.
Since D4 describes Gravity acting on 4-dimensional M4 physical spacetime, the 12-dimensional set of complex structures in R8 of the D4 symmetric space correspond to the ways that M4 can be fit inside the prior-to-dimensional-reduction 8-dimensional spacetime.
Since D4* describes the Standard Model SU(3) and SU(2) and U(1) acting on 4-dimensional CP2 , the 12-dimensional set of complex structures in R8 of the D4* symmetric space correspond to the ways that CP2 can be fit inside the prior-to-dimensional-reduction 8-dimensional spacetime.
After dimensional reduction, the uniform R8 is transfomed into a 4+4-dimensional M4xCP2 Kaluza-Klein space,
and consistency with the structure of the M4xCP2 Kaluza-Klein space is a restriction on the 12+12 = 24 degrees of freedom of the D4 and D4* symmetric spaces.
and the geometry of that dimensional reduction gives, by the Mayer Mechanism, the Higgs scalar, which is 2-complex dimensional or 4-real dimensional ( see, for example, Introduction to Gauge Field Theory, by Bailin and Love (rev ed IOP 1993 at pages 235, 238)).
Since the 12+12 = 24 degrees of freedom of the D4 and D4* symmetric spaces produce the 4 degrees of freedom of the Higgs scalar,
the remaining 24-4 = 20 degrees of freedom do not correspond to physics in our M4xCP2 low-energy Kaluza-Klein realm,
but to phenomena in the high-energy realm of prior-to-dimensional-reduction 8-dimensional spacetime.
Having seen how the Higgs etc comes from the 28-16 = 12-real-dimensional symmetric spaces Spin(8) / U(4) of D4 and D4*
consider the physical interpretation of the 16-real-dimensional U(4) subgroup of Spin(8) in D4* that produces the Standard Model.
12 of the dimensions describe the Standard Model gauge groups SU(3) and SU(2) and U(1)
As to the remaining 16-12 = 4 dimensions,
1 of them is the U(1) of U(4) = U(1)xSU(4) that describes a complex propagator phase, which must be corrrelated/coincidental with the U(1) of the U(2,2) in the Spin(8) of D4 that produces Gravity for consistency of the E8 physics model after dimensional reduction
1 more of them is accounted for by requiring the U(1) of U(3) and the U(1) of U(2) to be correlated/coincidental, producing the U(1) photon. (Note that in the E6 physics model with only one D4, the 12 Standard Model generators are the 28-16 = 12 of Spin(8) / U(2,2), so there is only one U(1) photon.)
The other 2 are in CP3 beyond CP2 ( where CP3 = SU(4) / U(3) and CP2 = SU(3) / U(2) ) and they describe the Quantum Worlds of the Many-Worlds, much like "the "tunnel effect" of quantum mechanics in terms of classical evolution of a system in imaginary time" to use the words of Yu. Manin in his 1981 book "Mathematics and Physics", where he said:
"... It is extremely important to ... imagine the whole history of the Universe ... as a complete four-dimensional shape, something like the "tao" of ancient Chinese philosophy. The introduction of temporal dynamics is the next step. ... the natural structure for the absolute sky ... at the point P0 ... is the complex Riemann sphere ... the complex projective line CP1 ... the natural coordinates are complex numbers ... they are always connected by a fractional-linear transformation ... each sky CP1 is simply embedded in ... The "Penrose paradise" H = CP3 ... the space of "projective twistors" ... the skies over the points of the Minkowski World are not all the lines in CP3, but only part of them, lying in a five-imensional hypersurface ... introduc[ing] additional ... skies correspond[ing] to the missing lines in CP3 ...[gives]... the compact complex spacetime of Penrose, denoted CM ... [with] the interpretation of the "tunnel effect" of quantum mechanics in terms of the classical evolution of a system in imaginary time ...
In a world of light there are neither points nor moments of time; beings woven from light would live "nowhere" and "nowhen" ... One point of CP3 is the whole life history of a free photon - the smallest "event" that can happen to light. ...".
E6 in E8, PSL(2,11) and E8(p)
# E6 in E8
Note that in the below images some of the 240 E8(8) vertices are projected to the same point, so that when counting root vectors keep in mind:
• each of the vertices in the center with white dots are points to which 3 vertices are projected, so that each of the 6 circles with a white dot represents 3 vertices;
• each of the vertices surrounded by 6 same-color nearest neighbors with yellow dots are points to which 2 vertices are projected, so that each of the 24 circles with a yellow dot represents 2 vertices.
The right figure in the image below shows the 240 root vectors of 248-dimensional E8:
The left figure in the image above shows the 72 root vectors of 78-dimensional E6 which is made up of:
• 28-dimensional D4 (24 cyan root vectors)
• ( 8+8 ) complex D4 vectors ( 8+8 blue root vectors )
• 1 Cartan subalgebra element for complexification of D4 vectors
• ( 8+8 ) complex D4 +half-spinors( 8+8 red root vectors )
• ( 8+8 ) complex D4 -half-spinors ( 8+8 green root vectors )
• 1 Cartan subalgebra element for complexification of D4 spinors
Given a basis {1,i} of the complex numbers, the 3 sets of 8+8 in E6 can each be regarded as representing 8 complex elements of the form
8 x 1 + 8 x i
so that the representation spaces of 8-dimensional Kaluza-Klein spacetime and the 8 fundamental first-generation fermion particles and the 8 fundamental first-generation fermion antiparticles can be seen as complex as is useful for calculation of particle masses and force strength constants using an approach motivated by that of Armand Wyler.
To see how to expand from E6 to E8, consider that E8 has octonionic structure, evidenced by the fact that E8 / Spin(16) = (OxO)P2 = Rosenfeld's ocoto-octonionic projective plane., so that E6 must be "Octonified" as follows:
• First, consider the D4 part of E6, which is not explicitly complexified, so it must be extended to operate on the octonions of E8. Ignoring signature subtleties, E6 has one D4 = Spin(8), whose action must be extended to octonion space. Consider the full spinor representation of Spin(8). According to F. Reese Harvey in his book "Spinors and Calibrations" (Academic 1990 at page 287): "... Spin(8) acts transitively on S7 x S7 ...", where each of the two S7 are the unit sphere in each of the 8-dimensional half-spinor representation spaces of Spin(8).
So, to expand to E8, each of the S7 must be Octonified. This is done by introducing an octonion product among the points of each S7. Unlike S3 with a quaternion product that closes to form a Lie group, S7 under an octonion product does not close, but expands to form a 28-dimensional Spin(8) that can be seen as an S7, another S7, and a 14-dimensional G2. Since each of the two S7 expands to a Spin(8):
Expanding E6 to E8 goes from the one D4 in E6 to 2 D4 in E8. The 24 root vectors of the second D4 are the 24 magenta root vectors in the central figure of the above image.
• Second, consider each of the 3 ( blue, red, and green in the E6 left figure of the above image ) sets of 8+8 root vectors in E6 with complex form 8 x 1 + 8 x e ( for complex basis here denoted {1,e} )
To Octonify them they must be expanded from complex with basis {1,e} to octonion with basis {1,i,j,k,e,ie,je,ke}
by adding 6 more root vectors ( {i,j,k} added corresponding to {1} and {ie,je,ke} added corresponding to {e} ) such that the each of the 3 sets of 8+8 = 16 can, when expanded to E8, each be regarded a representing 8+8+8+8+8+8+8+8 = 64 octonionic elements of the form
8 x 1 + 8 x i + 8 x j + 8 x k + 8 x e + 8 x ie + 8 x je + 8 x ke
by adding 6 new sets of 8 root vectors for each of the vector blue, +half-spinor red, and -half-spinor green as shown in the central figure of the above image, for a total of 3 x 6x8 = 3x48 = 6 x 24 =144 of the root vectors in the central figure of the above image.
( Note that, since the complex structure of E6 remains implicitly in the structure of E8, it is still available for use by Armand Wyler-type approaches ( such as I use in my model ) for calculation of force strengths, particle masses, etc. )
### to get the 72 + 168 = 240 root vectors of the right figure of the above image.
( Note that 168 is the order of PSL(2,7) = PSL(3,2) and is related to the Klein Quartic. )
Those three images are shown on larger scale in the three images immediately below:
# E8 and PSL(2,11)
According to Bulletin (New Series) of the American Mathematical Society, Volume 36, Number 1, January 1999, Pages 75-93
Finite Simple Groups which Projectively Embed in an Exceptional Lie Group are Classified!
by Robert L. Griess Jr. AND A. J. E. Ryba:
"... The finite subgroups of the smallest simple algebraic group PSL(2;C) (up to conjugacy) constitute the famous list: cyclic, dihedral, Alt4, Sym4, Alt5. This list has been associated to geometry, number theory, and Lie theory in several ways. McKay's correspondence between these groups and the Cartan matrices of types A, D and E and his related tensor product observations are provocative. For the exceptional algebraic groups, theories of Kostant, Springer and Serre have called attention to particular finite simple subgroups. A good list of finite subgroups should help us understand the exceptional groups better. ...
...".
According to a 2006 paper in the Journal of Mathematical Chemistry entitled
"The undecakisicosahedral group and a 3-regular carbon network of genus 26"
by Erwin Lijnen, Arnout Ceulemans, Patrick W. Fowler,and Michel Deza:
"... the special linear group SL(2,p) ... has order p (p^2 - 1) . The group PSL(2,p) is defined as the quotient group of SL(2,p) modulo its centre ... For all prime numbers p at least 5, the centre has only two elements and the corresponding quotient group PSL(2,p) is simple. Of all these prime numbers p however, the numbers p = 5, 7, 11 stand out as they are the only cases in which the group PSL(2,p) acts transitively on sets of p as well as on sets of p +1 elements, a result already known to Galois.
For all other prime values of p the group PSL(2,p) acts transitively on sets of p + 1 elements, but not on sets of p elements ...
Three projective special linear groups PSL(2,p), those with p = 5, 7 and 11, can be seen as p-multiples of tetrahedral, octahedral and icosahedral rotational point groups, respectively.
The first two have already found applications in carbon chemistry and physics,
PSL(2,5) ... is the rotation group of the fullerene C60 and dodecahedrane C20H20 ... PSL(2,5) has 60 elements and is isomorphic to the pure icosahedral rotation group I . It is alternatively called the pentakistetrahedral group 5T as it contains the tetrahedral group as a subgroup of index 5 . This can easily be seen on a regular dodecahedron where the 20 vertices can be divided into five sets of four vertices such that each set of four vertices forms a regular tetrahedron .... The group PSL(2,5) acts transitively on this set of five tetrahedra by the action of one of the fivefold rotations. The group acts also transitively on a six element set as can be seen from the action on the six diagonals of the regular icosahedron connecting opposite points. ... The smallest 3-regular map with rotational symmetry PSL(2,5) (i.e., 5T or I ) is the all-pentagon dodecahedral map ...
PSL(2,7) is the rotation group of the 56-vertex all-heptagon Klein map, an idealisation of the hypothetical genus-3 "plumber's nightmare" allotrope of carbon. ... PSL(2,7) of order 168, which is alternatively called the heptakisoctahedral group 7O as it contains the octahedral group O as a subgroup of index 7. The group can be represented by the regular genus-3 Klein map, named after Felix Klein who investigated its very high symmetry in connection with the theory of multivalued functions ... Using this map it is easy to show the transitive character on a 7-set, as under removal of the sevenfold symmetry elements, the 56 vertices split into seven octahedral structures containing eight vertices. The complete structure of this group and its relevance to some negative-curvature carbon structures was described in previous papers ... the smallest ... 3-regular map ... with the rotational symmetry PSL(2,7) (i.e., 7O) is the all-heptagon Klein map ...
PSL(2,11) ... has potential relevance for the study of the icosahedral phase of quasicrystals, and was identified as a finite simple subgroup of the Cartan exceptional group E8 ... Here, we present an analysis of PSL(2,11) as the rotation group of a 220-vertex, all 11-gon, 3-regular map, which provides the basis for a more exotic hypothetical sp2 framework of genus 26. The group structure and character table of PSL(2,11) are developed in chemical notation and a three dimensional (3D) geometrical realisation of the 220-vertex map is derived in terms of a punctured polyhedron model where each of 12 pentagons of the truncated icosahedron is connected by a tunnel to an interior void and the 20 hexagons are connected tetrahedrally in sets of 4. ... to realise PSL(2,11) (i.e., 11 I ) by a 3-regular map it is necessary to go to an all-undecagon map which will have 220 vertices, v, and 330 edges, e and 60 faces, f . Hence, from f = v / 2 + 2( 1 - g ), we find a genus g of 26. ... the map of interest ...[has]... total automorphism group consists of 1,320 elements, of which the orientation-preserving (rotational) part of 660 elements corresponds with the group PSL(2,11). ... a geometrical representation for this genus-26 map has thus far not yet been reported. The most obvious representation would be to draw a Schlegel-like diagram consisting of a central 11-gon surrounded by layers of undecagonal faces, adding layers until all faces have been accounted for. ...
... Continuation to produce the whole diagram with 220 numbered vertices and all 60 faces would yield a very intricate figure. Instead, we work with the dual map, represented by the dashed lines in figure 1. It consists of 60 undecavalent vertices and 220 triangular faces, and of course retains the PSL(2,11) rotational symmetry of the original 220-vertex 3-regular map. ...
... our parent group has four direct subgroups: I' , I'' , M5,11 and D6. In total there are 22 subgroups isomorphic to the purely rotational icosahedral group. They fall into two subgroup classes I' and I'' , which are non-equivalent within 11I symmetry. The subgroups within one of these classes are transformed into each other by any one of the 11-fold operations. Note, that equivalence of both classes is restored when one considers the full symmetry group 11Id of the genus-26 map, which also includes orientation-reversing symmetry operations. The second largest subgroup class consists of 12 groups of order 55 corresponding with the metacyclic group M5,11, which is formed by the semi-direct product of a fivefold and 11-fold cyclic group and is the only subgroup of 11I that is not isomorphic with a point group. The fourth direct subgroup class contains 55 groups of order 12 isomorphic to a sixfold dihedral group. Apart from the subgroup class T with 55 purely rotational tetrahedral groups, all other subgroup classes are only composed of dihedral groups Dn or cyclic groups Cn. ...
We ... investigate the possibility of forming a 3D geometrical model exhibiting such icosahedral symmetry, where we further impose the restriction that the 60 vertices remain equivalent, as is the case under PSL(2,11) symmetry. In 3D space there are four semiregular convex polyhedra on 60 vertices obeying these restrictions. They are the four icosahedral Archimedean solids on 60 vertices depicted ...
... namely the small rhombicosidodecahedron, the truncated dodecahedron, the snub dodecahedron and the truncated icosahedron. ... Seeking a geometrical representation it is worth investigating whether the graphs of these Archimedean solids appear as subgraphs of the graph underlying our 26-genus map. If such an Archimedean subgraph does indeed exist, i would be useful as a 3D icosahedral backbone on which a complete geometrical model of the genus-26 map could be built. ...
The ... most interesting subgraph is the truncated icosahedron, corresponding with the framework of Buckminsterfullerene C60. The special relationship of this truncated icosahedral structure to the group PSL(2,11) has already been noted in papers by Kostant ...[who]... showed that the graph of C60 can be expressed group-theoretically by the structure of a 60-element conjugacy class of PSL(2,11) ...".
# E8(p)
According to "The Classification of the Finite Simple Groups" (AMS Mathematical Surveys and Monographs, Vol. 40, No. 1, 1994) by Gorenstein, Lyons, and Solomon ( in the following I change their notation from prime number q to prime number p ):
"... It is our purpose ... to prove the following theorem:
CLASSIFICATION THEOREM. Every finite simple group is
• cyclic of prime order,
• an alternating group,
• a finite simple group of Lie type,
• or one of the twenty-six sporadic finite groups.
... the bulk of the set of finite simple groups consists of finite analogues of Lie groups ... called finite simple groups of Lie type, and naturally form 16 infinite families ... In 1968, Steinberg gave a uniform construction and characterization of all the finite groups of Lie type as groups of fixed points of endomorphisms of linear algebraic groups over the algebraic closure of a finite field ...
The finite simple groups are listed ...[including]... Group ... E8(p) ...[ for prime p ]...
Order ... p^120 (p^2 - 1 ) (p^8 - 1 ) (p^12 - 1 ) (p^14 - 1 ) (p^18 - 1 ) (p^20 - 1 ) (p^24 - 1 ) (p^30 - 1 ) ...".
To get a feel for E8(p), ignore the -1 part of the Order formula for E8(q) and see that the order of E8(q) is roughly (somewhat less than)
p^120 p^(2+8+12+14+18+20+24+30) = p^(120+128) = p^248
Note that 248-dim E8 = 120-dim adjoint of Spin(16) + 128-dim half-spinor of Spin(16)
and that p^248 is the set of maps from 248 to p
and that the exponents are one greater than each of the primes 1, 7, 11, 13, 17, 19, 23, and 29,
but not similarly related to the primes to 2, 3, or 5.
and that
• E8(2) = the number of ways to assign the 2 elements + and 1 (as in + and - electric charge of the U(2) electroweak gauge group) to each of the 248 basis elements of E8
• E8(3) = the number of ways to assign the 3 = 2+1 = 4-1 elements + and 1 (as in r, g and b color charge of the SU(3) color force gauge group) to each of the 248 basis elements of E8
• E8(5) = the number of ways to assign the 5 = 6-1 = 4+1 elements x, y, z, t and m (as in spatial x, y and z , and time t and scale/mass m of the Spin(2,3) anti-deSitter group of MacDowell-Mansouri gravity) to each of the 248 basis elements of E8
•
• E8(7) = the number of ways to assign the 7 = 6+1 = 8-1 Imaginary Octonion basis elements (as in spatial/internal symmetry part of 8-dim Kaluza-Klein spacetime and tree-level-massive first generation fermion particles and antiparticles and in 7 of the 8 Dirac gammas of E8 physics) to each of the 248 basis elements of E8
• E8(11) = the number of ways to assign 11 = 12-1 elements (as in the 11 generators of charge-carrying SU(3) and SU(2) of the 12 generators of the Standard Model SU(3)xSU(2)xU(1) in E8 physics) to each of the 248 basis elements of E8
• E8(13) = the number of ways to assign 13 = 12+1 elements (as in 12 root vectors of Conformal Spin(2,4) = SU(2,2) of MacDowell-Mansouri gravity in E8 physics) to each of the 248 basis elements of E8
• E8(17) = the number of ways to assign 17 = 16+1 elements as in the 16-dim vector representation of Spin(16) and the 16-dim full spinor representation of Spin(8) and 16-dim pairs of octoniions representing second-generation fermions and in the complexification of 8-dim Kaluza-Klein spacetime and 8-dim representation spaces of first-generation particles and antiparticles) to each of the 248 basis elements of E8
• E8(19) = the number of ways to assign 19 = 18+1 elements (as in the 18 root vectors of 21-dimensional rank 3 Spin(7)) to each of the 248 basis elements of E8
• E8(23) = the number of ways to assign 23 = 24-1 elements (as in 24-dim triples of octonions representing third-generation fermions and 24 full octonionic dimensions of the 27-dim Jordan algebra J(3,O)) to each of the 248 basis elements of E8
• E8(29) = the number of ways to assign 29 = 28+1 elements (as in 28-dim d4 for MacDowell-Mansouri gravity and 28-dim d4 for the Standard Model in E8 physics) to each of the 248 basis elements of E8
• ...
• E8(113) = the number of ways to assign 113 = 112+1 elements (as in the 112 root vectors of 120-dim Spin(16)) to each of the 248 basis elements of E8
• E8(127) = the number of ways to assign 127 = 128-1 elements (as in 64+64 = 128-dim half-spinors of Spin(16) representing first-generation fermion particles and antiparticles, and the related Dirac Gammas) to each of the 248 basis elements of E8
• ...
• E8(257) = the number of ways to assign 257 = 256+1 elements (as in 256-dim Cl(8)) to each of the 248 basis elements of E8
• ...
• E8(65537) = the number of ways to assign 65,537 = 65,536+1 elements (as in 65,536-dim Cl(16)) to each of the 248 basis elements of E8
In math.RT/0712.3764 Skip Garibaldi said:
"... Theorem. Let L be a Lie algebra of type E8 over a field of characteristic 5. Then there is no quotient trace form on L. ...
Roughly speaking, we use lemmas due to Block to reduce to showing that the trace is zero for representations coming from algebraic groups of type E8. From this, it is easy to see that it suffices to consider only the Weyl modules, which are defined over Z. Leaning on the fact that a Lie algebra of type E8 is simple over every field ... we note that the trace form is zero because 5 divides 60, the Dynkin index of E8. ...
Lemma 1.3. Let G and g be ... of type E8. The following are equivalent:
• (1) The Killing form of g is not zero over F.
• (2) The Killing form of g is nondegenerate over F.
• (3) The characteristic of F is =/= 2, 3, 5.
...
Proposition 1.5. Let G and g be as in 1.1 and of type E8. There is a representation rho of G over F with tr =/= 0 if and only if F has characteristic =/= 2, 3, 5.
... ".
Physics of F4, E6, and E8
Frank Dodd (Tony) Smith, Jr., December 2007 (based on Physics Forum discussion with Garrett Lisi, Mitchell Porter, et al, and Steven Weinberg on how to build a physics Lagrangian and some material about an E8 7-grading based on an spr post by Thomas Larsson and some Clifford/Geometric Algebra ideas of David Hestenes and some Octonion ideas of Ian Porteous and some Division Algebra ideas of Geoffrey Dixon and some comments on E8 and helicity and some comments on E8 and Spin-Statistics and Signatures and Pin and Spin and some comments on the history of my model and some comments on E8 and The Golden Compass and an E8 Root Vector movie and some comments on E8 geometry. )
## F4
The exceptional Lie algebra f4 =
• so(8) 28 gauge bosons of adjoint of so(8)
• + 8 vectors of vector of so(8)
• + 8 +half-spinors of so(8)
• + 8 -half-spinors of so(8) (mirror image of +half-spinors)
Therefore, you can build a natural Lagrangian from f4 as
• 8 vector = base manifold = 8-dim Kaluza-Klien 4+4 dim spacetime
• fermion term using 8 +half-spinors as left-handed first-generation particles and the 8 -half-spinors as right-handed first-generation antiparticles.
• a normal (for 8-dim spacetime) bivector gauge boson curvature term using the 28 gauge bosons of so(8).
If you let the second and third fermion generations be composites of the first, i.e., if
• the 8 first-generation particles/antiparticles are identified with octonion basis elements denoted by O,
• and you let the second generation be pairs OxO
• and the third generation be triples OxOxO
• and if you let the opposite-handed states of fermions not be fundamental, but come in dynamically when they get mass,
then
f4 looks pretty good IF you can get gravity and the standard model from the 28 so(8) gauge bosons.
If you want to make gravity from 15-dim Conformal Lie algebra so(2,4) by a generalized McDowell-Mansouri mechanism
then you have 28-15 = 13 so(8) generators left over, which are enough to make the 12-dim SM,
BUT
the 15-dim Conformal Gravity and 12-dim Standard Model are not both-at-the-same-time either
• Group-type subroups of Spin(8)
• or Algebra-type Lie algebra subalgebras of so(8)
• or factors of the Weyl group of so(8), since
• the Weyl group of so(8) is of order 2^3 4! = 8 x 24 = 192
• the Weyl group of so(2,4) is of order 2^2 3! = 4 x 6 = 24
• the Weyl group of su(3) is of order 3! = 6
• the Weyl group of su(2) is of order 2! = 2
• the Weyl group of u(1) is of order 1! = 1
Not only does the Weyl group of so(8) have only one factor of 3 while the Conformal Group and Standard Model have two factors of 3, but the total order of the Weyl groups of the Conformal Group and Standard Model is 24 x 6 x 2 x 1 = 288 which is larger than the order 192 of the Weyl group of so(8).
So, if you try to get both the 15 CG and 12 SM to fit inside the 28 so(8),
• you see that they do not fit as Lie Group subgroups
• and you see that they do not fit as Lie algebra subalgebras
• and you see that they do not fit as Weyl group factors
so
what I have done is to look at them as root vectors, where the so(8) root vector polytope has 24 vertices of a 24-cell
• and the Conformal Gravity so(2,4) root vector polytope has 12 vertices of a cuboctahedron
• and the remaining 24-12 = 12 vertices can be projected in a way that gives the 12-dim SM.
My root vector decomposition (using only one so(8) or D4) is one of the things that causes Garrett Lisi to say that I have "... a lot of really weird ideas which ...[ he, Garrett ]... can't endorse ...".
So, from a conservative point of view, that you must use group or Lie algebra decompositions (not even considering a somewhat unconventional Weyl group factor approach, for which the f4 approach also will not work) ,
f4 will not work because one copy of D4 so(8) is not big enough for gravity and the SM.
Also, f4 has another problem for my approach: f4 has basically real structures, while I use complex-bounded-domain geometry ideas based on ideas of Armand Wyler to calculate force strengths and particle masses.
So, although f4 gives you a nice natural idea of how to build a Lagrangian as
• integral over vector base manifold
• of curvature gauge boson term from adjoint so(8)
• and spinor fermion terms from half-spinors of so(8)
f4 has two problems:
• 1 - no complex bounded domain structure for Wyler stuff (a problem for me)
• 2 - only one D4 (no problem for me, but a problem for more conventional folks).
So, look at bigger exceptional Lie algebra:
## E6
e6 is nice, and has complex structure for my Armand Wyler-based calculation of force strengths and particle masses, so e6 solves my problem 1 with f4 and I can and have constructed an e6 model,
but e6 still has only one D4, so e6 is still problematic from the conventional view, as e6 does not solve the conventional problem 2 with f4.
So, do what Garrett Lisi did, and go to the largest exceptional Lie algebra, e8:
## E8
If you look at e8 in terms of E8(8) EVIII = Spin(16) + half-spinor of Spin(16)
you see two copies of D4 inside the Spin(16) (Jacques Distler mentioned that) which are enough to describe gravity and the SM.
I think that Garrett's use of e8 is brilliant, and have written up a paper about e8 (and a lot of other stuff) HERE .
There is a link to a pdf version, and there is a misprint on page 2 where I said EVII instead of EVIII, and probably there are more misprints, but as I said in the paper"... Any errors in this paper are not Garrett Lisi's fault. ...".
My view of e8 differs in some details from Garretts:
• I don't use triality for fermion generations, since my second and third generations are composites of the first, as described above in talking about f4
• and I use a different assignment of root vectors to particles etc, which can be seen in an animated rotation using Carl Brannen's root vector java applet see my .mov file HERE . In the movie:
• There are D4+D4+64 = 24+24+64 = 112 root vectors of Spin(16) :
• 24 yellow points for one D4 in the Spin(16) in E8
• 24 purple points for the other D4 in the Spin(16) in E8
• 64 blue points for the 8 vectors times 8 Dirac gammas in the Spin(16) in E8
• There are 64+64 = 128 root vectors of a half-spinor of Spin(16) :
• 64 red points for the 8 first-generation fermion particles times 8 Dirac gammas
• 64 green points for the 8 first-generation fermion antiparticles time 8 Dirac gamma
### Steven Weinberg on How to Build a Physics Lagrangian
Given E8 = adjoint Spin(16) + half-spinor Spin(16) and physical interpretation
• There are D4+D4+64 = 24+24+64 = 112 root vectors of Spin(16) :
• 24 yellow points for one D4 in the Spin(16) in E8 which D4 gives MacDowell-Mansouri Gravity
• 24 purple points for the other D4 in the Spin(16) in E8 which D4 gives the Standard Model gauge bosons
• 64 blue points for the 8 vectors times 8 Dirac gammas in the Spin(16) in E8 which vectors give 8-dim Kaluza-Klein spacetime
• There are 64+64 = 128 root vectors of a half-spinor of Spin(16) :
• 64 red points for the 8 first-generation fermion particles times 8 Dirac gammas
• 64 green points for the 8 first-generation fermion antiparticles time 8 Dirac gamma
is it natural to put them together to form the Lagrangian of my E8 physics model ?
In the 1986 Dirac Memorial Lectures published in the book Elementary particles and the Laws of Physics (Cambriddge 1987)
Steven Weinberg said ( in the following I sometimes substitute the word "fermion" for "electron" and the words "gauge bosons" for "photon" and the words "the equation" for "(1)" referring to equation (1), and I sometimes insert my comments indented and enclosed by brakcets [ ] ):
"... Let's examine the following equation:
L =
- PSIbar ( gamma^mu d/dx_mu + m ) PSI
- (1/4) ( d/dx_mu A_nu - d/dx_nu A_mu )^2
+ i e A_mu PSIbar gamma^mu PSI
- MU ( d/dx_mu A_nu - d/dx_nu A_mu ) PSIbar sigma^mu nu PSI
- G PSIbar PSI PSIbar PSI
+ ...
L stands for Lagrangian density; roughly speaking you can think of it as the density of energy.
Energy is the quantity that determines how the state vector rotates with time, so this is the role that the Lagrangian density plays; it tells us how the system evolves.
L ...[ is ]... written as a sum of products of fields and their rates of change.
PSI is the field of the fermion ( a function of the spacdetime position x ), and m is the mass of the fermion.
d/dx_mu means the rate of change of the field with position. ...
the gamma^mu matrices are called Dirac matrices.
A_mu is the field of the gauge bosons ...
Each term has an independent constant, called the coupling constant, that mutiplies it. These are the quantities e , MU , G , ... in the equation. The coupling constant gives the strength with which the term affects the dynamics.
No coupling constant appears in the first two terms simply because I have chosen t absorb them into definition of the two fields PSI and A_mu. ...
Experimentally we know that the formula consisting of just the first three terms, with all higher terms neglected, is adequate to describe electrons and photons to a fantastic level of accuracy. This theory is known as quantum electrodynamics or QED. ...
[ An ] argument ... why the behaviour of electrons and photons is described by just the first three terms in the equation ... goes back to work by Heisenberg in the 1930s ... The argument is based on dimensional analysis ... I will work in a system of units called physical units, in which Planck's constant and the speed of light are set equal to one. With these choices, mass is the only remaining unit; we can express the dimensions of any quantity as a power of mass.
For example, a distance or time can be expressed as so many inverse grammes. A cross-sction ... is given in terms of som many inverse grammes squared. ...
Now suppose that all interactions have coupling constants that are pure numbers, like the constant e in the third term of the equation ... Then itr would be very easy to figure out what contribution an observable gets from its cloud of virtual gauge bosons and fermion-antfermion pairs at very high energy E.
Lets suppose an observable O has dimensions [mass]^(-a) where a is positive. ... Now, at very high virtual-particle energy, E , much higher than any mass, or any energy of a particle in the initial or final state, there is nothing to fix a unit of energy. The contribution of high energy virtual particles to the observable O must then be given an integral like [ the following expression (3) ]
O = INTEGRAL(to oo) 1 / E^(a+1) dE
because this is the only quantity wihcih has the right dimensions, the right units, to give the observable O. ... The lower bound in the integral is some finite energy that marks the dividing line between what we call high and low energy. ... This argument only works because there are no other quantities in the theory that have the units of mass or energy. ...
On the other hand, suppose that there are other constants around that have units of mass raised to a negative power. Then if you have an expression involving a constant C_1 with units [mass]^(-b_1) , and another constant C_2 with units [mass]^(-b_2) nd so on, then ... we get a sum of terms of the form [ of the following expression (4) ]
O = C_1 C_2 ... INTEGRAL(to oo) E^( b_1 + b_2 + ... ) / E^(a+1) dE
again because these are the only quantities tha have the right units for the observable O. ...
Expression (3) is perfectly well-defined, the integral converges ... as long as the number a is greater than zero.
However, if b_1 + b_2 + ... is greater than a , then (4) will not be well-defined, because the numerator will have more powers of energy than the denominator and so the integral will diverge.
The point is that no matter how many powers of energy you have in the denominator, i.e. no matter how large a is , (4) eventually will diverge when you get up to sufficiently high order in the coupling constants, C_1 , C_2 , etc., that have dimensionls of negative powers of mass, because if you have enough of these constants, then eventually b+1 + ... is greater than a .
Looking at the Lagrangian density in the equation, we can easily work out what the units of the constant e , MU , G , etc., are.
[ In 4-dimensional physical spacetime ]... All terms in the Lagrangian density must have units [mass]^4 , because length and time have units of inverse mass and trhe Lagrangian density integrated over spacetime must have no units.
From the m PSIbar PSI term, we see that the fermion field must have units [mass]^(3/2) , because ... [t]he derivative operator ( the rate of change operator ) has units of [mass]^1 ...[ and] 3/2 + 3/2 + 1 = 4 .
[ In an e-dimensional spacetime, the fermion field must have units [mass]^(7/2) , because 7/2 + 7/2 + 1 = 8 . ]
The derivative operator ( the rate of change operator ) has units of [mass]^1 , and so the gauge boson field also has units of [mass]^1.
Now we can work out what the units of the coupling constants are. ...
the electric charge ... e ... turns out to be a pure nuber, to have no units.
But then as you add more and more powers of fields, more and more derivatives, you are adding more and more quantities that have units of positive powers of mass, and since the Lagrangian density [ in 4-dimensional physical spacetime ]... has to have fixed units of [mass]^4 , therefore the mass dimensions of the associated coupling constants must get lower and lower, until eventually you come to constants like MU and G which have negative units of mass. ... Specifically, MU has the units of [mass]^(-1) , while G has the unts [mass]^(-2) ... Such terms in the equation would completely spoil the agreement between theory and experiment ... so experimentally we can say that they are not there to a fantrastic order of precision and ... it seems that this could be explained by saying that such terms must be excluded because they would give infinite results, as in (4).
... that is exactly waht we are lookign for: a theoretical framework based on quantum mechanics, and a few symmetry principles, in which the specific dynamical principle, the Lagrangian, is only mathematically consistent if it takes one particular form.
At the end of the day, we ... have the feeling that "it could not have been any other way". ...
I described to you the success quantum electrodynamics has had in the theory of photons and electrons ...
In the 1960s these ideas were applied to the weak interactions of the nuclear particles, with a success that became increasingly apparent experimentally during the 1970s.
In the 1970s, the same ideas were applied to the strong interactions of the elementary particles, with results that ... have been increasingly experimentally verified since then.
Today we have a theory based on just such a Lagrangian as given in the equation. In fact,
### if you put in some indices on the fields so that there are many fields of each type, then the first three terms of the equation give just the so-called standard model ...
It is a theory that seems to be capable of describing all the physics that is accessible using today's accelerators. ... The standard model works so well because all the terms which could make it look different are naturally extremely small. A lot of work has been done by experimentalists trying to find effects of these tiny terms ... but so far nothing has been discovered.
[ Neutrino masses have been discovered since Weinberg gave his talk in1986, but they can be considered to be part of the lepton sector of the Standard Model. ]
So far, no effect except for gravity itself has been discovered coming down to us from the highest energy scale where we think the real truth resides. ... ".
Some of the ... omissions in the above quote indicate that Weinberg's views stated above reflect his thinking "... until about five or six years ..." before he gave the talk in 1986, and the rest of the talk indicates that his thinking as of 1986 was "... that the ultimate constituents of nature, when you look at nature on a scale of 10^15 - 10^19 GeV , are not particles or fields but strings ...".
I prefer to see string theory in terms of my E6 bosonic string model, with fermions coming from orbifolding and strings being physically interpreted as world-lines of particles, which model is consistent with my E8 physics model which is consistent with the Standard Model plus MacDowell-Mansouri gravity from the Conformal Group which gives a Dark Energy : Dark Matter : Ordinary Matter ratio that is consistent with observations. My E6 and E8 models allow calculation of what Weinberg describes as "... the ... fairly large number ... of free parameters ... that have to be chosen "just so" in order to make the [ standard model ] theory agree with experiment ...".
### E8 graded structure
In a post to the spr thread Re: Structures preserved by e_8 Thomas Larsson says:
"... e_8 also seems to admit a 7-grading,
g = g_-3 + g_-2 + g_-1 + g_0 + g_1 + g_2 + g_3,
of the form
e_8 = 8 + 28* + 56 + (sl(8) + 1) + 56* + 28 + 8* .
Kaneyuki does not mention anything about this, because from his point of view 3- and 5-gradings are more interesting. Incidentally, this grading refutes my claim that mb(3|8) is deeper than anything seen in string theory, since now e_8 also admits a grading of depth 3 and I learned about it in an M theory paper: P West, E_11 and M theory, hep-th/0104081, eqs (3.2) - (3.8). OTOH, the above god-given 7-grading of e_8 is not really useful in M theory, because g_-3 is identified with spacetime translations and one would therefore get that spacetime has 8 dimensions rather than 11. ...".
If you see (sl(8) + 1) as 64 = 8v x 8g, and if you regard the 8v as the basis of an 8-dimensional Kaluza-Klein spacetime and the 8g as its 8 Dirac gammas then you get
e_8 = 8 + 28* + 56 + 8v x 8g + 56* + 28 + 8*
and the even part of the 7-grading has 120 elements
e_8_even = 28* + 8v x 8g + 28 = D4* + 8v x 8g + D4
If you see the 8 Dirac gammas of 8g as corresponding to the octonion basis elements {1,i,j,k,e,ie,je,ke} and denote by 7g those corresponding to the 7 octonion imaginary basis elements {i,j,k,e,ie,je,ke} and denote by 1g the one corresponding to the octonion real basis element {1}, then you get
e_8 = 8 x 1g + 28* + 8 x 7g + 8v x 8g + 8* x 7g + 28 + 8* x 1g
so that if you let the 8 ( now denote it by 8s' ) correspond to the 8 fundamental first-generation fermion particles and the 8* ( now denote it 8s" ) correspond to the 8 fundamental first-generation fermion antiparticles, you get for the odd part of the 7-grading the 128 elements
e_8_odd = 8s' x 1g + 8s' x 7g + 8s" x 7g + 8s" x 1g = 8s' x ( 1g + 7g ) + 8s" x ( 1g + 7g ) =
= 8s' x 8g + 8s" x 8g
This is consistent with the structure of my version of E8 physics in which, as Thomas Larsson says "... spacetime has 8 dimensions ...".
### David Hestenes - Left and Right Ideals of Clifford/Geometric Algebra
In Clifford Algebras and Their Applications in Mathematical Physics (Proceedings of the NATO and SERC Workshop, 15-27 September 1985, ed. by J. S. R. Chisholm and A. K. Common (Reidel 1986) at pages 9-10, 23, 327-328), David Hestenes said:
"... Clifford Algebras ... become vastly richer when given geometrical and/or physical interpretations. When a geometric interpretation is attached to a Clifford Algebra, I prefer to call it a Geometric Algebra, which is the name originally suggested by Clifford himself. ...
the theory of geometric representations should be extended to embrace Lie groups and Lie algebras. A start has been made in ... D. Hestenes and G. Sobczyk, Clifford Algebra to Geometric Calculus, Reidel Publ. Co., Dordrecht/Boston (1984) ... I conjectured there that every Lie algebra is isomorphic to a bivector algebra, that is, an algebra of bivectors under the commutator product. Lawyer-physicist Tony Smith has proved that this conjecture is true by pointing to results already in the literature. ...
the columns of a matrix are minimal left ideals in a matrix algebra, because columns are not mixed by matrix multiplication from the left. The Dirac matrix algebra C(4) has four linearly independent minimal left ideals, because each matrix has four column. The Dirac spinor for an electron or some other fermion can be represented in C(4) as a matrix with nonvanishing elements only in one column, like so
```PSI_1 0 0 0
PSI_2 0 0 0
PSI_3 0 0 0
PSI_4 0 0 0```
where the PSI_i are complex scalars. The question arises: Is there a physical basis for distinguishing between different columns?
The question looks more promising when we replace C(4) by the isomorphic geometric algebra R_4,1 in which every element has a clear geometric meaning. Then the question becomes: Is there a physical basis for distinguishing between different ideals?
The Dirac theory clearly shows that a single ideal (or column if you will) provides a suitable representation for a single fermion. This suggests that each ideal should represent a different kind of fermion, so the space of ideals is seen as a kind of fermion isospace. I developed this idea at length in my dissertation, classifying leptons and baryons in families of four ...".
In the same Workshop proceedings I said I(at pages 377-379, 381-383):
"... The 16-dimensional spinor representation of Spin(8) reduces to two irreducible 8-dimensional half-spinor representations that can correspond to the 8 fundamental fermion lepton and quark first-generation particle and to their 8 antiparticles ...
Numerical values for force strengths and ratios of particle masses to the electron mass are given. ... Armand Wyler ... (1971), C. R. Acad. Sci. Paris A272, 186 ... wrote a paper in which he purported to calculate the fine structure constant to be a = 1 / 137.03608 ... from the volumes of homogeneous symmetric spaces. ... Joseph Wolf ... (1965), J. Math. Mech. 14, 1033 ... wrote a paper in which he classified the 4-dimensional Riemannian symmetric spaces with quaterniuonic structure. There are just 4 equivalence classes, with the following representatives:
• T4 = U(1)^4
• S2 x S2 = SU(2) / U(1) x SU(2) / U(1)
• CP2 = SU(3) / S(U(2) x U(1))
• S4 = Spin(5) / Spin(4)
... Final Force Strength Calculation ...
• fine structure constant for electromagnetism = 1 / 137.03608
• weak Fermi constant times proton mass squared = 1.03 x 10^(-5)
• color force constant (at about 10^(-13) cm.) = 0.6286
• gravitational constant times proton mass squared = 3.4 - 8.8 x 10^(-39).
... PARTICLE MASSES ...
• the electron mass ...[ is assumed to be given at its experimentally observed value ]...
• electron-neutrino mass = 0 ... [ Note that this is only a tree-level value. ] ...
• down quark constituent mass = 312.8 Mev ...
• up qaurk constituent mass = 312.8 Mev ...
• muon mass = 104.8 Mev ...
• muon-neutrino mass = 0 ... [ Note that this is only a tree-level value. ] ...
• strange quark constituent mass = 523 Mev ...
• charm quark constituent mass = 1.99 Gev ...
• tauon mass = 1.88 Gev ...
• tauon-neutrino mass = 0 ... [ Note that this is only a tree-level value. ] ...
• beauty quark constituent mass = 5.63 Gev ...
• truth quark constituent mass = 130 Gev ...
CERN has announced that the truth quark mass is about 45 Gev (Rubbia ... (1984), talk at A.P.S. D.P.F. annual meeting at Santa Fe ... but I think that the phenomena observed by CERN at 45 Gev are weak force phenomena that are poorly explained ... As of the summer of 1985, CERN has been uable to confirm its identification of the truth quark in the 45 Gev events, as the UA1 experimenters have found a lot of events clustering about the charged ... W mass and the UA2 experimenters have not found anything convincing. (Miller ... (1985), Nature 317, 110 ... I think that the clustering of UA1 events near the charged ... W mass indicates that the events observed are ... weak force phenomena. ...".
Since I have been critical of CERN for its error in truth quark obersvations, I should state that my paper in that 1985 Workshop also contained errors, the most conspicuous of which may have been my statement that "... there should be three generations of weak bosons ...".
### Mathematical Structure of the 64-dimensional things of the form 8 x 8
Combining the David Hestenes idea of left ideals representing fermions with 8-dimensional D4 half-spinors and an 8-dimensional D4 vector Kaluza-Klein spacetime and 8-dimensional Clifford/Geometric Algebra Dirac gammas gives physical meaning to the three 64-dimensional structures
• 8v x 8g
• 8s' x 8g
• 8s" x 8g
of my version of an E8 physics model. It is useful to study the mathematical structure of such 64-dimensional spaces of the form 8 x 8.
### Ian Porteous
Ian Porteous, in his book Clifford Algebras and the Classical Groups (Cambridge 1995) says(page 180-182):
"... The existence of the Cayley algebra [ octonions ] depends on the fact that the [ 64-dimensional ] matrix algebra R(8) [ of 8 x 8 real matrices ] may be regarded as a ... Clifford algebra for the [ 7-dimensional ] positive-definite orthogonal space R7 in such a way that conjugation of the Clifford algebra corresponds to transposition in R(8). For then ... the images of R and R7 in R(8) together span an eight-dimensional linear subspace, passing through ...[ the 8-dimensional unit ]... 1 , such that each of its elements, other than zero, is invertible. This eight-dimensional subspace of R(8) will be denoted Y.
Proposition 19.3 Let [ the 8-dimensional real space ] R8 -> Y be a linear isomorphism. Then the map
R8 x R8 -> R8 ; (a,b) -> a b = (mu(a))(b)
is a bilnear product on R8 such that, for all a,b in R8 , a b = 0 if and only if a = 0 or b = 0 . Moreover, any non-zero element e in R8 can be made the unit element for such a product by choosing mu to be the inverse of the isomorphism
Y -> R8 ; y -> y e .
The division algebra with unit element introduced in Proposition 19.3 is called the Cayley algebra on R8 with unit element e. ... We shall ... speak simply of the Cayley algebra, denoting it by O ( for octoniions ) ... it is advantageous to select an element of length 1 in R8 ... we seelect e_0 , the zeroth element of the standard basis for R8. ... we have implicitly assigned to R8 its standard positive-definite structure ... The space Y also has an orthogonal structure ... The Cayley algebra O inherits both ... the choice of e as an element of length 1 guarantees that these two structures coincide. ... though the product on R(8) is associative, the product on O need not be. ... The Cayley algebra O is alternative ...".
### Geoffrey Dixon
Geoffrey Dixon in hep-th/9303039 says:
"... multiplication tables for ... O are constructable from the following elegant rules: ...
• Imaginary Units ... e_a , a = 1, ..., 7 ,
• Anticommutators ... e_a e_b + e_b e_a = 2 delta_ab ,
• Cyclic Rules ... e_a e_a+1 = e_a-2 = e_a+5 ,
• Index Doubling ... e_a e_b = e_c => e_(2a) e_(2b) = e_(2c) , ...
The octonion algebra is generally considered ill-suited to Clifford algebra theory becauseO is nonassociative, and Clifford algebras are associative. This problem disappears once we identify O as the spinor space of OL , the adjoint algebra of actions of O on itself from the left. OL is associative. ... a complete basis for OL consists of the elements
1 , e_La , e_Lab , e_Labc ,
Therefore OL is 1 + 7 + 21 + 35 = 64-dimensional, and OL ...[ is isomorphic to the real 8 x 8 matrix algebra ]... R(8) . ... OL is iksomorphic to the Clifford algebra ...[ Cl(0,6) ]... of the space R^(0,6) , the spinor space of which is 8-dimensional over R . In the case the spinor space is O itself, the object space of OL . ... the algebra OR of right adjoint actions of O on itself is the same algebra as OL . Every action in OR can be written as an action in OL .
A 1-vector basis for OL , playing the role of the Clifford algebra ...[ Cl(0,6) ]... of R^(0,6) is { e_Lp , p = 1, ..., 6 } .
The resulting 2-vector basis is then { e_Lpq , p,q = 1, ..., 6 , p =/= q } . This subspace is 15-dimensional, closes under the commutator product, and is in that case isomorphic to so(6). The interesection of this Lie algebra with the Lie algebra of the automorphism group of O , G2 , is su(3) , with a basis
su(3) -> { e_Lpq - e_Lrs , p,q,r,s distinct, and from 1 to 6 } .
... SU(3) is the stability group of e_7 , hence the index doubling automorphism of O is an SU(3) rotation ...".
Geoffrey Dixon, in his book Division Algebras, Octonions, Quaternions, Complex Numbers and the Algebraic Design of Physics (Kluwer 1986), says (pages 43-45, 141-142, 191-192, 197, 209-211, 215-216) (in the following quote I have changed some notation from l to j and have particularized some division algebra notation from the general division algebra K to the octonion division algebra O ) :
"... An algebraic idempotent, A, is by definition a nonzero element satisfying : A^2 = A . A is nontrivial if A =?= 1 ... [and]...
A ( 1- A ) = A - A^2 = A - A = 0 and ( 1-A )^2 = 1 - 2A + A^2 = 1 - 2A + A = 1 - A .
So ... 1 - A is aslo an idempotent, and ... A and 1-A are orthogonal. ... nontrivial idempotents are divisors of zero, hence the identity is the sole idempotent of any division algebra ... This ...[ does not apply ]... to OL = OR = R(8), which is not a division algebra.
Certain elements of OL are diagonal in the adjoint representation. A basis for these consists of the identity, 1_L , together with the e_Labc satisfying e_Labc(1) = e_a ( e_b e_c ) = 1 ... In particular, define I_a = e_L(3+a)(6+a)(5+a) (indices from 1 to 7 , modulo 7), and let I_0 be the identity. Their adjoint representations are
```I_0 = 1_L -> diag(++++++++)
I_1 = e_476 -> diag(+---+-++)
I_2 = e_517 -> diag(++---+-+)
I_3 = e_621 -> diag(+++---+-)
I_4 = e_732 -> diag(+-++---+)
I_5 = e_143 -> diag(++-++---)
I_6 = e_254 -> diag(+-+-++--)
I_7 = e_365 -> diag(+--+-++-)```
... Being diagonal, the I_a clearly commute. They also satisfy I_a I_a+1 = I_a+3 , a in {1, ..., 7} ( had e_a e_a+1 = e_a+3 been chosen as the multiplication for O , then ... I_a I_a+1 = I_a+5, so these choices are in this manner dual to each other ... ) ...
the identity of OL can be elegantly resolved into orthogonal primitive idempotents using the I_a. A primitive idempotent can not be expressed as the sum of two other idempotents ... orthogonal primitive idempotents resolving the identity of OL ... are ...
```P_0 = (1/8) ( 1 + e_L476 + e_L517 + e_L621 + e_L732 + e_L143 + e_L254 + e+L365 ) ,
P_1 = (1/8) ( 1 - e_L476 + e_L517 + e_L621 - e_L732 + e_L143 - e_L254 - e+L365 ) ,
P_2 = (1/8) ( 1 - e_L476 - e_L517 + e_L621 + e_L732 - e_L143 + e_L254 - e+L365 ) ,
P_3 = (1/8) ( 1 - e_L476 - e_L517 - e_L621 + e_L732 + e_L143 - e_L254 + e+L365 ) ,
P_4 = (1/8) ( 1 + e_L476 - e_L517 - e_L621 - e_L732 + e_L143 + e_L254 - e+L365 ) ,
P_5 = (1/8) ( 1 - e_L476 + e_L517 - e_L621 - e_L732 - e_L143 + e_L254 + e+L365 ) ,
P_6 = (1/8) ( 1 + e_L476 - e_L517 + e_L621 - e_L732 - e_L143 - e_L254 + e+L365 ) ,
P_7 = (1/8) ( 1 + e_L476 + e_L517 - e_L621 + e_L732 - e_L143 - e_L254 - e+L365 ) , ```
... These satisfy SUM(a=0 to 7) P_a = 1 , and P_a P_b = delta_ab P_b . ... They are related as follows ( a in {0, 1, ..., 7} ):
• P_a = e_La P_0 e*_La ;
• P_a e_La = e_La P_0 ;
• e_La P_a = P_0 e_La ;
if e_a e_b = e_c ( a,b,c in {1, ..., 7}, then e_La P_0 e_Lb = - e_Lb P_c e_La ...
for example ... ( P_0 + P_1 + P_2 + P_6 ) is an idempotent projecting from O a subalgebra isomorphic to Q:
( P_0 + P_1 + P_2 + P_6 ) O = Q Likewise ... ( P_0 + P_1 ) O = C ... and ... P_0 O = R . ...
The matematical context upon which the model building ... rests relied heavily on treating the ... division algebras as spinor spaces of their left adjoint algebras ( identified as Clifford algebras ), of tensoring those adjoint algebras with ...[ the 2 x 2 real matrix algebra ]... R(2) ( doubling the size of the spinor space ) ... These ... same methods will be employed here to generate bases for the Lie algebras of the groups of a version of the magic square. Each will be derived from a tensor product of two division algebras ...
The foundation upon which the method rests is R(2) . In R(2) define ...
``` 1 0 1 0 0 1 0 1
E = A = B = W =
0 1 0 -1 1 0 -1 0```
Let O(x)O be the tensor product of two ...[ copies of the octonion division algebra O ]...
Let c_k ... denote ...[ a basis ]... for the pure hypercomplex part...[ of O ]... In ... O(x)O (2) the elements
W , c_Lk A , c_LljB
anticommute ( and associate ) and form the basis for the 1-vector generators of a ... Clifford algebra with negative definite Euclidean metric. Under commutation they generate the 2-vectors
c_Lk B , c_Lj A , c_Lk1k2 E , c_Lj1j2 E , c_Lk c_Lk W ,
Together ...[ those ]... elements form a basis for a representation of the Lie algebra so( dimO + dimO ). I'll call this External_OO and call it the external subalgebra.
To this collection we now add the spinors of O(x)O(2) , namely the elements of (O(x)O)^2 , without yet specifying a commutator product on this linear space. I'll denote this Spinor_OO . ...
The total resulting linear space will be denoted MS_OO , MS for magic square ...
Let e_La and e'_Lb be distinct and mustually commuting bases for the hypercomplex octonions.
External_OO is spanned by
W , e_La A , e'_La B ( 1-vector basis for ...[ the Clifford algebra Cl(0,15) ]... ) and
e_Lab B , e'_Lab A , e_La e'_Lb W , e_Lab E , e'_Lab E ( 2-vectors )
...[ with dimension 1 + 7 + 7 + 21 + 21 + 21 + 21 + 21 = 1+14 +105 = 120 ]...
External_OO = so(16) .
Spinor_OO is 128-dimensional, and ... because OL = OR ...[ there is no Internal_OO ]...
That's 120 + 128 = 248 elements altogether, and we make the identification:
MS_OO = LE8 . ...
Getting LE8 from O(x)J3(O) ...[ where J3(O) is the 27-dimensional exceptional Jordan algebra ]... is slightly trickier. In this case there are two distinct copies of O commuting with each other ( denote them O1 and O2 ) ...
We begin .. with the 28 so(8) generators ...[ that ]... are elements of LF4... and the 3 so(3) generators ... [that]... account for 3 of the 52 dimensions of LF4 ... Together ...[ they ]... account... for 3 + 28 = 31 of the 52-dimensional LF4. ...
in ...[ this ]... O1(x)J3(O2) case we expand so(3) to LF4 , the Lie algebra of the sutomorphism of J3(O1) . That gives us 28 elements from so(8 , and 52 elements from LF4 ( which contains another distinct so(8) ). Of the 52 generators of theis new LF4 , 28 are diagonal ... and 24 are off-diagonal. Commutators of the 28 diagonal generators ( the so(8) of O1 ) with the so(8) of O2 yield nothing new, but each of the 24 off-diagonal generators gives rise to a 7-dimensional space of new generators. That yields,
28 + 52 + 168 = 248
generators all together, and the set closes here on LE8 ...".
Note that 168 is the order of PSL(2,7) = SL(3,2) which can be thought of as the group of linear fractional transformations of the vertices of a heptagon and is so related to octonion multiplication rules, and that SL(2,7) of order 336 double covers the Klein Quartic.
### E8 Physics and Helicity
In E8 physics,
• the 8 first-generation fermion particles and 8 Dirac gammas are represented by 8x8 = 64 of the 128 half-spinor Spin(16) elements of E8 and
• the 8 first-generation fermion antiparticles and 8 Dirac gammas are represented by the other 8x8 = 64 of the 128 half-spinor Spin(16) elements of E8.
Since the all belong to one half-spinor representation of Spin(16), they all have the same helicity. Let that helicity correspond to left-handed fermion particles.
Since antiparticles are effectively particles travelling backward in time, the corresponding helicity for fermion antiparticles is right-handed.
Therefore, in E8 physics, fermion particles are fundamentally left-handed and fermion antiparticles are fundamentally right-handed.
Opposite handedness arises dynamically, and can be seen in experiments involving massive fermions moving at much less than the speed of light.
L. B. Okun, in his book Leptons and Quarks (North-Holland (2nd printing 1984) page 11) said:
"... a particle with spin in the direction opposite to that of its momentum ...[is]... said to possess left-handed helicity, or left-handed polarization. A particle is said to possess right-handed helicity, or polarization, if its spin is directed along its momentum. The concept of helicity is not Lorentz invariant if the particle mass is non-zero. The helicity of such a particle depends oupon the motion of the observer's frame of reference. For example, it will change sign if we try to catch up with the particle at a speed above its velocity. Overtaking a particle is the more difficult, the higher its velocity, so that helicity becomes a better quantum number as velocity increases. It is an exact quantum number for massless particles ...
The above space-time structure ... means ... that at ...[ v -> speed of light ]... particles have only left-handed helicity, and antparticles only right-handed helicity. ...".
### E8 and Spin-Statistics and Signatures and Pin and Spin
Soji Kaneyuki has written a chapter entitled Graded Lie Algebras, Related Geometric Structures, and Pseudo-hermitian Symmetric Spaces, as Part II of the book Analysis and Geometry on Complex Homogeneous Domains, by Jacques Faraut, Soji Kaneyuki, Adam Koranyi, Qi-keng Lu, and Guy Roos (Birkhauser 2000). Kaneyuki lists a Table of Exceptional Simple Graded Lie Algebras of the Second Kind including
e(17) for which g = E8(8)
• g(+2) = 14
• g(+1) = 64 = 8 fermion particles x 8 Dirac gammas
• g(0) = so(7,7) + R
• g(-1) = 64 = 8 fermion antiparticlex x 8 Dirac gammas
• g(-2) = 14
Kaneyuki also considers the even part of such algebras
g(ev) = g(-2) + g(0) + g(2)
= 14 + so(7,7)+R + 14 = 14 + 92 + 14 = 120 = so(8,8) = so(7,1) + 64 + so(1,7)
• The step immediately above is by real Clifford periodicity Cl(16) = Cl(8) (x) Cl(8) and
• preserving the (7,7) substructure by adding (0,1) and (1,0) to it to get so(7,1) + so(1,7)
= so(7,1) + so(1,7) + 8-dim Kaluza-Klein spacetime x 8 Dirac gammas
If all 120 g(ev) generators are physically bosonic and if all 128 generators of the odd g(-1) and g(+1) are physically fermionic then under E8
• fermion times fermion = boson
• boson times boson = boson
• boson times fermion = fermion times boson = fermion
so Spin-Statistics is satisfied.
As to signature (diagram from Spinors and Calibrations, by F. Reese Harvey (Academic 1990)):
Cl(7,1) is the 8x8 Quaternion Matrix Algebra M(Q,8)
Cl(1,7) = Cl(0,8) is the 16x16 Real Matrix Algebra M(R,16) which has effective Octonionic structure.
If a preferred Quaternionic subspace is frozen out of the Octonionic spacetime of Cl(1,7), then its 8-dimensional (1,7) vector spacetime undergoes dimensional reduction to
• 4-dimensional (1,3) associative physical spacetime plus
• 4-dimensional (0,4) coassociative CP2 internal symmetry space
and Cl(1,7) is transformed into quaternionic Cl(2,6) = M(Q,8).
After dimensional reduction, since Cl(1,7) = Cl(2,6) = M(Q,8) you effectively have two copies of Cl(2,6) = M(Q,8).
Note that some might object that Spin(p,q) does not come directly from Cl(p,q) but rather comes from its even subalgebra, so that sometimes when I write Spin(p,q) I should be writing Pin(p,q), where, as Ian Porteous says in his book Clifford Algebras and the Classical Groups (Cambridge 1995):
"... the Pin and Spin groups doubly cover the relevant orthogonal and special orthogonal groups.
Proposition 16.14 Let X be a non-degenerate quadratic space of positive finite dimension. Then the maps
PinX -> O(X) ... and SpinX -> SO(X) ...
are surjective, the kernel in each case being isomorphic to S0 [ the zero-sphere { -1, +1 } ]...
When X = R(p,q) the standard notations for [ the Clifford group ] GAMMA(X) ...[ and for ] ... GAMMA0((X) , PinX and SpinX will be GAMMA(p,q) , GAMMA0(p,q) , Pin(p,q) and Spin(p,q).
Since ...[ the even Clifford subalgebra Cle(p,q) is isomorphic to the even Clifford subalgebra Cle(q,p) ]...
• GAMMA0(q,p) is isomorphic to GAMMA0(p,q) and
• Spin(q,p) is isomorphic to Spin(p,q).
Finally, GAMMA0(0,n) is often abbreviated to GAMMA0(n) and Spin(0,n) to Spin(n). ...".
Further, Pertti Lounesto says in his book Clifford Algebras and Spinors (Second Edition Cambridge 2001):
"... 17.2 The Lipschitz grooups and the spin groups The Lipschitz group GAMMA(p,q) , also called the Clifford group although invented by Lipschitz 1880/86 , could be defined as the subgroup in Cl(p,q) generated by invertible vectors x in R(p,q) ...
The Lipshitz group has a normalized subgroup Pin(p,q) ... The group Pin(p,q) has an even subgroup Spin(p,q) ...".
Further, in Spinors and Calibrations (Academic 1990) F. Reese Harvey says:
"... The Grassmannians and Reflections ... G(r,V) ...[ is ]... the grassmannian of all unit, oriented, nondegenerate r-planes through the origin in V ...[ G(r,V) ]... consists of all simple vectors in /\r(V) that are of unit length. That is,
u is in G(r,V) if u = u_1 /\ ... /\ u_r with u_1 , ... , u_r in V and |u| = 1 ( ||u|| = +/-1 ) .
... Given u in G(r,V) , reflection along u , denoted R_u , is defined by
R_u(x) = -x if x is in span(u) and R_u(x) = x if s is ...[ orthogonal to ]... span(u) ...
... Remark 10.20 ... each reflection R_u in O(V) along a subspace span u of V is replaced in the double cover Pin(V) of O(V) be either of the two elements +/-u in G(r,V) in /\r(v) in Cl(V) in the Clifford algebra. ...
By definition, the group Pin is generated by the element G(1,V) in Pin. ... the definition of Spin ... suffers from the defect that the generators u in G(1,V) are not in Spin. This defect can be corrected .. if e is any unit vector and S(n-1) denotes the unit sphere in V , then e.S(n-1) generates Spin ... In addition ... Proposition 10.21 ( n = dim(V) >= 3 ). The group Spin is the subgroup of Cl*(V) ... of invertible elements in ... Cl(V) ... generated by G(2,V) ...".
What is the physical difference between Pin and Spin?
Roughly, Pin has reflections and so can map fermion particles into fermion antiparticles. In the example of Cl(8) = M(R,16) , Pin(8) sees spinors as 1x 16 columns like
```x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 = x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 x```
while Spin has no reflections, so Spin(8) sees spinors as two mirror-image sets of 8 +half-spinor particles and 8 -half-spinor antiparticles like
```x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
= +
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x
x 0 0 0 0 0 0 0 x ```
In the quaternionic example of Cl(2,6) = M(Q,8) , Pin(2,6) sees spinors as
```X 0 0 0 0 0 0 0 X
X 0 0 0 0 0 0 0 X
X 0 0 0 0 0 0 0 X
X 0 0 0 0 0 0 0 = X
X 0 0 0 0 0 0 0 X
X 0 0 0 0 0 0 0 X
X 0 0 0 0 0 0 0 X
X 0 0 0 0 0 0 0 X ```
while Spin(2,6) sees spinors as
```X 0 0 0 X
X 0 0 0 X
X 0 0 0 X
X 0 0 0 X
= +
X 0 0 0 X
X 0 0 0 X
X 0 0 0 X
X 0 0 0 X ```
In the quaternionic example of Cl(2,4) = Cl(6,0) = M(Q,4) , Pin sees spinors as
```X 0 0 0 X
X 0 0 0 = X
X 0 0 0 X
X 0 0 0 X ```
while Spin(2,4) ( in my view where the even Cle(2,4) is taken to be Cl(1,4) = M(Q,2)+M(Q,2) instead of Cl(2,3) = M(C,4) ) sees spinors as
```X 0 X
X 0 X
= +
X 0 X
X 0 X ```
As to Cl(2,3) = M(C,4), my view is that its even Cle(2,3) is taken to be Cl(1,3) = M(Q,2) instead of Cl(2,2) = M(R(4).
As to Cl(1,4) = M(Q,2)+M(Q,2), my view is that even Cle(1,4) is taken to be Cl(1,3) = M(Q,2) = Cl(0,4).
In short, for E8 physics I form even subalgebras from Cl(2,6) on down to Cl(1,3) so that quaternionic structure is maintained.
I think that Pin is more fundamental than Spin because the overall symmetry should include reflections that can transform between particles and antiparticles, even though the particle-antiparticle distinction is useful in setting up the structure of the E8 model and its Lagrangian. However, Spin is more widely known than Pin, so sometimes ( particularly in exposition ) I write Spin when Pin would be technically more nearly correct.
### Some History of my Physics Model
In the 1960s-early 1970s Armand Wyler wrote a calculation of the fine structure constant using geometry of bounded complex domains. It was publicized briefly (almost as much as Garrett Lisi's E8 model is publicized now) but Wyler never showed convincing physical motivation for his interpretation of the math structures, and it was severely ridiculed and ignored (with sad personal consequences for Wyler).
Also in the 1960s, Joseph Wolf classified 4-dim spaces with quatenionic structure:
• (I) Euclidean 4-space [ the 4-torus T4];
• (II) SU(2) / S(U(1)xU(1)) x SU(2) / S(U(1)xU(1)), … [ S2 x S2 ] …;
• (III) SU(3) / S(U(2)xU(1)), … [ CP2 ] …; and
• (IV) Sp(2) / Sp(1)xSp(1) … [ = Spin(5) / Spin(4) = S4 ] …,
and the noncompact duals of II, III, and IV
and I noticed that they corresponded to
• U(1) electromagnetism,
• SU(2) weak force,
• SU(3) color force, and
• Sp(2) MacDowell-Mansouri gravity
so
I thought that it might possibly be useful to apply Wyler's approach to the geometries of those 4-dim quaternionic structures.
It was only in the 1980s that I was able to cut back on the time devoted to my law practice to try to learn enough math/physics to try to work out the application of Wyler's stuff to Wolf's classification, and I did so by spending a lot of time at Georgia Tech auditing seminars etc of David Finkelstein (who was tolerant enough to allow me to do so). I had learned some Lie group / Lie algebra math while an undergrad at Princeton (1959-63), but I did not know Clifford algebras very well until studying under David Finkelstein.
Then (early 1980s) N=8 supergravity was popular, so I looked at SO(8) and its cover Spin(8), and noticed that:
• Adjoint Spin(8) had 28 gauge bosons enough to do MacDowell-Mansouri gravity plus the Standard Model, but not if they were included as conventional subgroups;
• Vector Spin(8) looked like 8-dim spacetime;
• +half-spinor Spin(8) looked like 8 left-handed first-generation fermions;
• -half-spinor Spin(8) looked like 8 right-handed first-generation fermions.
To break the 8-dim spacetime into a 4-dim physical spacetime plus a 4-dim internal symmetry space I used the geometric methods that had been developed by Meinhard Mayer (working with Andrzej Trautman) around 1981.
A consequence of that dimensional reduction was second and third generations of fermions as composites (pairs and triples) of states corresponding to the first-generation fermions.
When I played with the Wyler-type geometry stuff, I got particle masses that looked roughly realistic, and a (then) prediction-calculation of the Tquark mass as around 130 GeV (tree-level, so give or take 10% or so).
When in 1984 CERN announced at APS DPF Santa Fe that they had seen the Tquark at 45 GeV, I gave a talk there (not nearly as well-attended as Carlo Rubbia's) saying that CERN was wrong and the Tquark was more massive (I will not here go into subsequent history of Dalitz, Goldstein, Sliwa, CDF, etc except to say that I still feel that experimental data supports the Tquark having a low-mass state around 130-145 GeV, and that the politics related to my position may have something to do with my current outcast status with the USA physics establishment.)
Since Spin(8) is bivector Clifford algebra of the real Clifford algebra Cl(8), and since real Clifford algebra 8-periodicity means that any very large real Clifford algebra can be factored into tensor products of Cl(8), it can be a building block of a nice big algebraic QFT (a real version of the complex hyperfinite II1 von Neumann factor).
Since the Adjoint, Vector, and two half-Spinor reps of Spin(8) combine to form the exceptional Lie algebra F4, I tried to use it as a unifying Lie algebra,
but I eventually saw that the real structure of F4 was incompatible with the complex bounded domain structures of the Wyler approach, so I went to E6, which is roughly a complexification of F4, and used E6 to construct a substantially realistic version of 26-dim bosonic string theory (fermions coming from orbifolding). Since by then I was blacklisted by the Cornell arXiv, I put that up on the CERN website as CERN-CDS-EXT-2004-031
As of then, the major conventional objection to my model was how I got 16 generators for a MacDowell-Mansouri gravity U(2,2) and 12 generators for the Standard Model from the 28 generators of Spin(8) (I used root vector patterns, because they do not consistently fit as subgroups and subalgebras).
Now, Garrett Lisi's E8 model has two copies of the D4 Spin(8) Lie algebra, so I can use it to be more conventional and get MacDowell-Mansouri gravity from one D4 and the Standard Model from the other one, so I wrote that up in my 82-page pdf paper at
http://tony5m17h.net/GLE8Cl8TSxtnd.pdf
Note that now I am not only blacklisted by arXiv, but pressure forced CERN to terminate its external preprint service, so I cannot even put it there as I was able to do in 2004 with my E6 string model.
All the gory details of calculations are set out in my 82-page paper, so I won't go into any more detail here.
I apologize for, in trying to be brief, leaving out a lot of people who helped me learn stuff, including but not limited to people at the University of Alabama and Robert Gilmore at Drexel and others.
PS - I should add that while at Georgia Tech in the late 1980s -early 1990s I enrolled in the physics PhD program, but that ended when I encountered the comprehensive exam (a 3-day closed book test) which I could not pass (my then 50-year-old memory had trouble recalling formulas), so I am in that sense a failure without official PhD qualification.
# E8 Geometry and Physics
( E8, the Lie algebra of an E8 Physics Model, is rank 8 and has 8+240 = 248 dimensions - Compact Version - Euclidean Signature - for clarity of exposition - much of this is from the book Einstein Manifolds (by Arthur L. Besse, Springer-Verlag 1987):
Type EVIII rank 8 Symmetric Space Rosenfeld's Elliptic Projective Plane (OxO)P2
# E8 / Spin(16) = 64 + 64
### and 64 looks like ( 8 fermion antiparticles ) x ( 8 Dirac Gammas )
Type BDI(8,8) rank 8 Symmetric Space real 8-Grassmannian manifold of R16 or set of the RP7 in RP15
# Spin(16) / ( Spin(8) x Spin(8) ) = 64
### The 64-dim Base Manifold looks like ( 8-dim Kaluza-Klein spacetime ) x ( 8 Dirac Gammas )
Due to the special isomorphisms Spin(6) = SU(4) and Spin(2) = U(1) and the topological equality RP1 = S1
### Torsion and E8 / Spin(16) = 64+64
Martin Cederwall and Jakob Palmkvist, in "The octic E8 invariant" hep-th/0702024, say:
"... The largest of the finite-dimensional exceptional Lie groups, E8, with Lie algebra e8, is an interesting object ... its root lattice is the unique even self-dual lattice in eight dimensions (in euclidean space, even self-dual lattices only exist in dimension 8n). ... Because of self-duality, there is only one conjugacy class of representations, the weight lattice equals the root lattice, and there is no "fundamental" representation smaller than the adjoint. ... Anything resembling a tensor formalism is completely lacking. A basic ingredient in a tensor calculus is a set of invariant tensors, or "Clebsch&endash;Gordan coefficients". The only invariant tensors that are known explicitly for E8 are the Killing metric and the structure constants ...
The goal of this paper is to take a first step towards a tensor formalism for E8 by explicitly constructing an invariant tensor with eight symmetric adjoint indices. ... On the mathematical side, the disturbing absence of a concrete expression for this tensor is unique among the finite-dimensional Lie groups. Even for the smaller exceptional algebras g2, f4, e6 and e7, all invariant tensors are accessible in explicit forms, due to the existence of "fundamental" representations smaller than the adjoint and to the connections with octonions and Jordan algebras. ...
The orders of Casimir invariants are known for all finite-dimensional semi-simple Lie algebras. They are polynomials in U(g), the universal enveloping algebra of g, of the form t_(A1...Ak) T^(A1 . . . TAk ), where t is a symmetric invariant tensor and T are generators of the algebra, and they generate the center U(g)^(g) of U(g). The Harish-Chandra homomorphism is the restriction of an element in U(g)^(g) to a polynomial in the Cartan subalgebra h, which will be invariant under the Weyl group W(g) of g. Due to the fact that the Harish-Chandra homomorphism is an isomorphism from U(g)^(g) to U(h) W(g) one may equivalently consider finding a basis of generators for the latter, a much easier problem. The orders of the invariants follow more or less directly from a diagonalisation of the Coxeter element, the product of the simple Weyl reflections ...
In the case of e8, the center U(e8)^(e8) of the universal enveloping subalgebra is generated by elements of orders 2, 8, 12, 14, 18, 20, 24 and 30. The quadratic and octic invariants correspond to primitive invariant tensors in terms of which the higher ones should be expressible. ... the explicit form of the octic invariant is previously not known ...
E8 has a number of maximal subgroups, but one of them, Spin(16)/Z2, is natural for several reasons. Considering calculational complexity, this is the subgroup that leads to the smallest number of terms in the Ansatz. Considering the connection to the Harish-Chandra homomorphism, K = Spin(16)/Z2 is the maximal compact subgroup of the split form G = E8(8). The Weyl group is a discrete subgroup of K, and the Cartan subalgebra h lies entirely in the coset directions g/k ...
We thus consider the decomposition of the adjoint representation of E8 into representations of the maximal subgroup Spin(16)/Z2. The adjoint decomposes into the adjoint 120 and a chiral spinor 128. ...
Our convention for chirality is GAMMA_(a1...a16) PHI = + e_(a1...a16) PHI . The e8 algebra becomes ( 2.1 )
[ T^(ab) , T^(cd) ] = 2 delta^([a)_([c) T^(b])_(d]) ,
[ T^(ab) , PHI^(alpha) ] = (1/4) ( GAMMA^(ab) PHI )^(alpha) ,
[ PHI^(alpha) , PHI^(alpha) ] = (1/8) ( GAMMA_(ab) )^(alpha beta) T^(ab) ,
... The coefficients in the first and second commutators are related by the so(16) algebra. The normalisation of the last commutator is free, but is fixed by the choice for the quadratic invariant, which for the case above is
X2 = (1/2) T_(ab) T^(ab) + PHI_(alpha) PHI^(alpha) .
Spinor and vector indices are raised and lowered with delta . Equation (2.1) describes the compact real form, E8(-248) .
By letting PHI -> i PHI one gets E8(8), where the spinor generators are non-compact, which is the real form relevant as duality symmetry in three dimensions (other real forms contain a non-compact Spin(16)/Z2 subgroup).
The Jacobi identities are satisfied thanks to the Fierz identity
( GAMMA_(ab)_[(alpha beta) ( GAMMA_(ab )_(alpha beta)] = 0 ,
which is satisfied for so(8) with chiral spinors, so(9), and so(16) with chiral spinors
( in the former cases the algebras are so(9), due to triality, and f4 ).
The Harish-Chandra homomorphism tells us that the "heart" of the invariant lies in an octic Weyl-invariant of the Cartan subalgebra. A first step may be to lift it to a unique Spin(16)/Z2-invariant in the spinor, corresponding to applying the isomorphism fÅ|1 above. It is gratifying to verify ... that there is indeed an octic invariant ( other than ( PHI PHI )^4 ), and that no such invariant exists at lower order. ...
Forming an element of an irreducible representation containing a number of spinors involves symmetrisations and subtraction of traces, which can be rather complicated. This becomes even more pronounced when we are dealing with transformation ... under the spinor generators, which will transform as spinors. Then irreducibility also involves gamma-trace conditions. ...
The transformation ... under the action of the spinorial generator is an so(16) spinor. The vanishing of this spinor is equivalent to e8 invariance. The spinorial generator acts similarly to a supersymmetry generator on a superfield ...
The final result for the octic invariant is, up to an overall multiplicative constant:
...".
Martin Cederwall, in hep-th/9310115, says:
"... The only simply connected compact parallelizable manifolds are the Lie groups and S7. If these vectorfields exist one can use them to define parallel transport of vectors. Since transport around any closed curve gives back the same vector, the curvature of the corresponding connection vanishes. We can think of the manifold equipped with this connection as "flat", and the transport as translation.
If the parallelizing connection is written as GAMMA~ = GAMMA - T where GAMMA is the metric connection, the vielbeins will not be covariantly constant, but transport as De = T (T is torsion, and this can be taken as its definition). Then ...
[ D_a , D_b ] = 2 T_ab^c D_c
... These are our S7 transformations ... What distinguishes S7 from the Lie groups is that its torsion ("structure constants") vary over the space. ... ".
Martin Cederwall and Christian R. Preitschopf, in hep-th/0702024, say:
"... it is the non-associativity of O that is responsible for the non-constancy of the torsion tensor [ for S7 ] (while the non-commutativity accounts for its non-vanishing) and for the necessity of utilizing inequivalent products associated with different points X ŸS7. We call this field-dependent multiplication the X-product.
One should note that the transformation ...[ for S7 ]... relies on the transformation of the parameter field X ... while for group manifolds (and thus for the lower-dimensional spheres S1 and S3 associated with C and H) ...[ the transformation is independent of a parameter field ]... transform independently. A consequence is that fermions cannot transform without the presence of a parameter field, since a fermionic octonion is not invertible. ... Fermions, due to non-invertibility, can be assigned to endpoints of the diagram only; no path may pass via a fermion. ...
We call a field (bosonic or fermionic) transforming according to ...[ the X-product ]... a spinor under S7. ...
Let r, s, ... be S7 spinors ... Can this representation be formed as a tensor product of spinor representations? Due to the non-linearity, the answer is no.... we can form spinors as trilinears of spinors u = ( r ox s* ) ox t , and in this way only. ...
It should be possible to realize E6 = SL(3;O) ... on them in a "spinor-like" manner, much like SO(10) = SL(2;O) acts on its 16-dimensional spinor representations that play the role of homogeneous coordinates for OP1 ...
That would open for for a twistor transform ... for elements in J3(O) ( the exceptional Jordan algebra of 3x3 hermitiean octonionic matrices ) with zero Freudenthal product - a known realization of OP2. Then one would have a direct analogy to the twistor transform of the masslessness condition in SL(2;O) that leads to OP1 as the projective light-cone ...
we would like to address the question of anomaly cancellation: under what circumstances is the Schwinger term "quantum mechanically consistent", i.e. when is the BRST operator quantum mechanically nilpotent, and what actual exact form of the Schwinger term is needed? ... to construct a (classical) BRST operator for the S7 algebra with field-dependent structure functions ... turns out to be extremely simple. The BRST operator takes the same form as for a Lie algebra, namely
Q = c^i J_i - T_ij^k(X) c^i c^j b_k
where b_i and c^i are fermionic ghosts ... Higher order ghost terms are not present since the Jacobi identities hold ... This makes BRST analysis quite manageable. ...
Then, turning to ... the quantum algebra, ... We have ... demonstrated the non-trivial fact that Q may be nilpotent, and that ... non-trivial central extensions ...[ of S7 ]... or Schwinger terms ... may be used as a gauge algebra. Normally, one would have expected Q^2=0 to put a constraint on the number of transforming octonionic fields, but that is not the case at hand. Instead one is permitted, for any field content, to adjust the numerical coefficient ... in J in order to fulfil that relation ...
It seems that ... the S7 or ... non-trivial central extensions ...[ of S7 ]... or Schwinger terms ...ghosts do not come in an S7 representation. This is also confirmed by an attempt to construct a representation (other than scalar) for imaginary octonions, which turns out to be impossible. ...
A part of the structure of S7 we have treated only fragmentarily is representation theory. ... It is not immediately clear even how to define a representation. We have quite strong feelings, though, that the spinorial representations and the adjoint, as described in this paper, in some sense are the only ones allowed, and that the spinor representation is the only one to which a variable freely can be assigned. ...". | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8900095820426941, "perplexity": 1860.1868111507856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917463.1/warc/CC-MAIN-20140901014517-00417-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.semanticscholar.org/paper/VALUATIONS-AND-HYPERBOLICITY-IN-DYNAMICS-Ward/fce42fb85941f6fade7b2bf806c54a804a43cc45 | • Corpus ID: 14274747
# VALUATIONS AND HYPERBOLICITY IN DYNAMICS
@inproceedings{Ward2001VALUATIONSAH,
title={VALUATIONS AND HYPERBOLICITY IN DYNAMICS},
author={Thomas Ward},
year={2001}
}
1 Citations
## Figures from this paper
Entropy Production and Irreversible Processes -from the perspective of continuous topological evolution
A concept of entropy production associated with continuous topological evolution is deduced (without statistics) from the fact that Cartan-Hilbert 1-form of Action defines a non-equilibrium
## References
SHOWING 1-10 OF 106 REFERENCES
Bernoullicity of solenoidal automorphisms and global fields
• Mathematics
• 1994
We show that ergodic automorphisms of solenoids are isomorphic to Bernoulli shifts by using the product formula for global fields.
Ergodic automorphisms of the infinite torus are bernoulli
We show that ergodic algebraic automorphisms of the infinite torus are measure isomorphic to Bernoulli shifts. Using the same techniques, we also show that the existence of such an automorphism with
Entropy for group endomorphisms and homogeneous spaces
Topological entropy há(T) is defined for a uniformly continuous map on a metric space. General statements are proved about this entropy, and it is calculated for affine maps of Lie groups and certain
An example of a Kolmogorov automorphism that is not a Bernoulli shift
Fitting ideals for finitely presented algebraic dynamical systems
• Mathematics
• 2000
Summary. We consider a class of algebraic dynamical systems introduced by Kitchens and Schmidt. Under a weak finiteness condition — the Descending Chain Condition — the dual modules have finite
Integer sequences counting periodic points
• Mathematics
• 2002
An existing dialogue between number theory and dynamical systems is advanced. A combinatorial device gives necessary and sufficient conditions for a sequence of non-negative integers to count the
A GENERALIZED BURAU REPRESENTATION FOR STRING LINKS
• Mathematics
• 2001
A 2-variable matrix B ∈ GLn(Z[u±1, v±1]) is defined for any n-string link, generalizing the Burau matrix of an nbraid. The specialization u = 1, v = t−1 recovers the generalized Burau matrix recently
Dynamical systems arising from units in Krull rings
Summary. To a countable Krull ring R and units $\xi_1,\dots,\xi_d \in R$ we associate a ${Bbb Z}^d$-action by automorphisms of the compact abelian group $\widehat{R}$. This generalizes the
E-mail address: [email protected] School of Mathematics
• E-mail address: [email protected] School of Mathematics
• 2001
Additive Cellular Automata and Volume Growth
A class of dynamical systems associated to rings of S-integers in rational function fields is described, giving a rather complete description of the well-known dynamics in one-dimensional additive cellular automata with prime alphabet. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865466117858887, "perplexity": 2281.326773271682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662636717.74/warc/CC-MAIN-20220527050925-20220527080925-00610.warc.gz"} |
https://www.physicsforums.com/threads/question-related-to-volume-strength-of-h2o2.924464/ | # Homework Help: Question Related To Volume Strength Of H2O2.
Tags:
1. Sep 4, 2017
### CosmicC
1. Question Statement and details : 500 mL of 56V H2O2 is kept in an open container due to which some H2O2 is decomposed and evolves 8gm O2 simultaneously, during the process some H2O also vapourizes.Due to all these changes final volume is reduced by 20%. Find final volume strength of H2O2 (aq).
It's a multiple choice question. Options are as follows
(a.) 56V (b) 44.8 V (c.) 11.2V (d.) 33.6V. Correct Answer is 33.6 V.
2. Relevant Equations:
Molarity = Volume strength/11.2 and Normality = Volume Strength , in balanced reaction 1 mol H2O2 gives 1/2 half mol O2. Mol. Wt. H2O2=34.
3. The attempt at a solution : By stoichiometry 1/2 moles of H2O2 gives 1/4 moles of O2. So volume strength of H2O2 used to give 8gm is 22.4*1/4 = 5.6. So I subtracted that from 56V. And then simply reduced what i got by 20% it didn't work though. What i'm getting is 39.6 not close to the answer which is 33.6 V. Please help me out on this one. Thank You.
2. Sep 4, 2017
### Staff: Mentor
That's not a correct approach. Would you do the same if the initial volume was 1 L? 10 L? Don't you think in each case the concentration change should be different?
3. Sep 4, 2017
### CosmicC
Thanks for replying i really appreciate it. So what should i do..Can you please ellaborate so that i can get the answer to this as soon as possible. I've already given it too much time.
4. Sep 4, 2017
### Staff: Mentor
Do you know what volume strength means?
In general: you have to calculate how much hydrogen peroxide was left and what was the final volume, then use this information to calculate final concentration. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.931135356426239, "perplexity": 3074.318134063331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158045.57/warc/CC-MAIN-20180922044853-20180922065253-00491.warc.gz"} |
https://la.mathworks.com/help/finance/ecmnmle.html | Documentation
# ecmnmle
Mean and covariance of incomplete multivariate normal data
## Syntax
```[Mean,Covariance] = ecmnmle(Data,InitMethod,MaxIterations,Tolerance,Mean0,Covar0)
```
## Arguments
`Data`
`NUMSAMPLES`-by-`NUMSERIES` matrix with `NUMSAMPLES` samples of a `NUMSERIES`-dimensional random vector. Missing values are indicated by `NaN`s. A sample is also called an observation or a record.
`InitMethod`
(Optional) Character vector that identifies one of three defined initialization methods to compute initial estimates for the mean and covariance of the data. If `InitMethod` = `[]` or `''`, the default method `nanskip` is used. The initialization methods are:
• `nanskip` — (Default) Skip all records with `NaN`s.
• `twostage` — Estimate mean. Fill `NaN`s with mean. Then estimate covariance.
• `diagonal` — Form a diagonal covariance.
### Note
If you supply `Mean0` and `Covar0`, `InitMethod` is not executed.
`MaxIterations`
(Optional) Maximum number of iterations for the expectation conditional maximization (ECM) algorithm. Default = `50`.
`Tolerance`
(Optional) Convergence tolerance for the ECM algorithm (Default = `1.0e-8`.) If `Tolerance``0`, perform maximum iterations specified by `MaxIterations` and do not evaluate the objective function at each step unless in display mode, as described below.
`Mean0`
(Optional) Initial `NUMSERIES`-by-`1` column vector estimate for the mean. If you leave `Mean0` unspecified (`[]`), the method specified by `InitMethod` is used. If you specify `Mean0`, you must also specify `Covar0`.
`Covar0`
(Optional) Initial `NUMSERIES`-by-`NUMSERIES` matrix estimate for the covariance, where the input matrix must be positive-definite. If you leave `Covar0` unspecified (`[]`), the method specified by `InitMethod` is used. If you specify `Covar0`, you must also specify `Mean0`.
## Description
`[Mean,Covariance] = ecmnmle(Data,InitMethod,MaxIterations,Tolerance,Mean0,Covar0)` estimates the mean and covariance of a data set. If the data set has missing values, this routine implements the ECM algorithm of Meng and Rubin [2] with enhancements by Sexton and Swensen [3]. ECM stands for expectation conditional maximization, a conditional maximization form of the EM algorithm of Dempster, Laird, and Rubin [4].
This routine has two operational modes.
### Display Mode
With no output arguments, this mode displays the convergence of the ECM algorithm. It estimates and plots objective function values for each iteration of the ECM algorithm until termination, as shown in the following plot.
Display mode can determine `MaxIter` and `Tolerance` values or serve as a diagnostic tool. The objective function is the negative log-likelihood function of the observed data and convergence to a maximum likelihood estimate corresponds with minimization of the objective.
### Estimation Mode
With output arguments, this mode estimates the mean and covariance via the ECM algorithm.
## Examples
To see an example of how to use `ecmnmle`, run the program `ecmguidemo`.
## Algorithms
collapse all
### Model
The general model is
`$Z\sim N\left(Mean,\text{\hspace{0.17em}}Covariance\right),$`
where each row of `Data` is an observation of Z.
Each observation of Z is assumed to be iid (independent, identically distributed) multivariate normal, and missing values are assumed to be missing at random (MAR). See Little and Rubin [1] for a precise definition of MAR.
This routine estimates the mean and covariance from given data. If data values are missing, the routine implements the ECM algorithm of Meng and Rubin [2] with enhancements by Sexton and Swensen [3].
If a record is empty (every value in a sample is `NaN`), this routine ignores the record because it contributes no information. If such records exist in the data, the number of nonempty samples used in the estimation is ≤ `NumSamples`.
The estimate for the covariance is a biased maximum likelihood estimate (MLE). To convert to an unbiased estimate, multiply the covariance by `Count`/(`Count` – 1), where `Count` is the number of nonempty samples used in the estimation.
### Requirements
This routine requires consistent values for `NUMSAMPLES` and `NUMSERIES` with `NUMSAMPLES` > `NUMSERIES`. It must have enough nonmissing values to converge. Finally, it must have a positive-definite covariance matrix. Although the references provide some necessary and sufficient conditions, general conditions for existence and uniqueness of solutions in the missing-data case, do not exist. The main failure mode is an ill-conditioned covariance matrix estimate. Nonetheless, this routine works for most cases that have less than 15% missing data (a typical upper bound for financial data).
### Initialization Methods
This routine has three initialization methods that cover most cases, each with its advantages and disadvantages. The ECM algorithm always converges to a minimum of the observed negative log-likelihood function. If you override the initialization methods, you must ensure that the initial estimate for the covariance matrix is positive-definite.
The following is a guide to the supported initialization methods.
### nanskip
The `nanskip` method works well with small problems (fewer than 10 series or with monotone missing data patterns). It skips over any records with `NaN`s and estimates initial values from complete-data records only. This initialization method tends to yield fastest convergence of the ECM algorithm. This routine switches to the `twostage` method if it determines that significant numbers of records contain `NaN`.
### twostage
The `twostage` method is the best choice for large problems (more than 10 series). It estimates the mean for each series using all available data for each series. It then estimates the covariance matrix with missing values treated as equal to the mean rather than as `NaN`s. This initialization method is robust but tends to result in slower convergence of the ECM algorithm.
### diagonal
The `diagonal` method is a worst-case approach that deals with problematic data, such as disjoint series and excessive missing data (more than 33% of data missing). Of the three initialization methods, this method causes the slowest convergence of the ECM algorithm. If problems occur with this method, use display mode to examine convergence and modify either `MaxIterations` or `Tolerance`, or try alternative initial estimates with `Mean0` and `Covar0`. If all else fails, try
```Mean0 = zeros(NumSeries); Covar0 = eye(NumSeries,NumSeries); ```
Given estimates for mean and covariance from this routine, you can estimate standard errors with the companion routine `ecmnstd`.
### Convergence
The ECM algorithm does not work for all patterns of missing values. Although it works in most cases, it can fail to converge if the covariance becomes singular. If this occurs, plots of the log-likelihood function tend to have a constant upward slope over many iterations as the log of the negative determinant of the covariance goes to zero. In some cases, the objective fails to converge due to machine precision errors. No general theory of missing data patterns exists to determine these cases. An example of a known failure occurs when two time series are proportional wherever both series contain nonmissing values.
## References
[1] Little, Roderick J. A. and Donald B. Rubin. Statistical Analysis with Missing Data. 2nd Edition. John Wiley & Sons, Inc., 2002.
[2] Meng, Xiao-Li and Donald B. Rubin. “Maximum Likelihood Estimation via the ECM Algorithm.” Biometrika. Vol. 80, No. 2, 1993, pp. 267–278.
[3] Sexton, Joe and Anders Rygh Swensen. “ECM Algorithms that Converge at the Rate of EM.” Biometrika. Vol. 87, No. 3, 2000, pp. 651–662.
[4] Dempster, A. P., N. M. Laird, and Donald B. Rubin. “Maximum Likelihood from Incomplete Data via the EM Algorithm.” Journal of the Royal Statistical Society. Series B, Vol. 39, No. 1, 1977, pp. 1–37.
Download ebook | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9410420656204224, "perplexity": 1751.607436279401}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251779833.86/warc/CC-MAIN-20200128153713-20200128183713-00474.warc.gz"} |
https://infoscience.epfl.ch/record/148618 | Infoscience
Journal article
Phase transition in the localized ferromagnet EuO probed by μSR
We report results of muon-spin-rotation measurements performed on the ferromagnetic semiconductor EuO, which is one of the best approximations to a localized ferromagnet. We argue that implanted muons are sensitive to the internal field primarily through a combination of hyperfine and Lorentz fields. The temperature dependences of the internal field and the relaxation rate have been measured and are compared with previous theoretical predictions. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8296521902084351, "perplexity": 1636.0410363347837}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720972.46/warc/CC-MAIN-20161020183840-00513-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://www.xe.com/faq/volatility_calculated.php | XE Currency FAQ
How is volatility calculated?
At XE, volatility is measured by applying the standard deviation of the logarithmic daily returns, expressed in a percentage score.
Daily returns are the gain or loss of a currency pair in a particular period. At xe.com, we take the values of two consecutive days at 00:00 UTC. That is why we call it daily return. Then, we apply a logarithm to the ratio between those two values. It is a common way to measure change in the financial industry.
Ex: ln (valueDay2 / valueDay1) is the logarithmic return between day2 and day1. This value tells us if the currency pair has moved a lot or not.
In statistics, the standard deviation is a measure that is used to quantify the amount of variation of a set of data values. A low standard deviation indicates that the data points tend to be close to the mean of the set, while a high standard deviation indicates that the data points are spread out over a wider range of value.
We apply this standard deviation to the daily logarithmic returns we calculated during a given time period (30 days, 90 days etc.)
Expressing a value in a percentage score means we multiply it by 100 before showing it to you. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8654231429100037, "perplexity": 490.5800927930644}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859766.6/warc/CC-MAIN-20180618105733-20180618125733-00353.warc.gz"} |
http://mathhelpforum.com/differential-geometry/114979-something-looks-like-l-hopital-complex-case.html | ## Something that looks like L'Hopital for the complex case
If we assume that $f,g:{B_R}\left( a \right) \to C$ are homomorphic, and $f\left( a \right) = {f^{\left( 1 \right)}}\left( a \right) = ... = {f^{\left( {n - 1} \right)}}\left( a \right) = 0$, $g\left( a \right) = {g^{\left( 1 \right)}}\left( a \right) = ... = {g^{\left( {n - 1} \right)}}\left( a \right) = 0$ with ${g^{\left( n \right)}}\left( a \right) \ne 0$, then show that $\mathop {\lim }\limits_{z \to a} \frac{{f\left( z \right)}}{{g\left( z \right)}} = \frac{{{f^{\left( n \right)}}\left( z \right)}}{{{g^{\left( n \right)}}\left( z \right)}}$.
I can show that $\frac{{{f^{\left( n \right)}}\left( z \right)}}{{{g^{\left( n \right)}}\left( z \right)}}$ exists quite trivially, but I'm not sure how I can say anything about how the two quotients are related, and how the limit can be said to equal the RHS. The question looks similar to L'Hopital, but that is only defined for the real case, whereas these are functions defined on the complex plane.
Many thanks! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9925022721290588, "perplexity": 126.70796369242206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645378542.93/warc/CC-MAIN-20150827031618-00324-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://dsp.stackexchange.com/questions/59055/calculate-the-standard-deviation-of-fundamental-frequency-mfcc | # Calculate the Standard Deviation of Fundamental Frequency (MFCC)
I'm implementing a gunshot detector following the article "Algorithm for Gunshot Detection Using Mel-Frequency Cepstrum Coefficients (MFCC)" (paywall).
In the article, the authors uses 22 features based on MFCC (coeficients normalized between 0.1 and 0.9). One of the features is "Standard deviation of fundamental frequency". I already spent all my day trying to figure out the proper way to do this calculation.
The expected value for a gunshot is near 0.04.
• So, where exactly does your problem lie? in the finding of the fundamental frequency, or in the finding of its variance? – Marcus Müller Jun 22 at 14:49
• The fundamental frequency. – Hugo Sartori Jun 22 at 15:37
• I made some implementation using some formulas I found over the internet, but the results are very unrealistic compared to the expected. – Hugo Sartori Jun 22 at 15:39
• Sadly, I don't have that paper. Can you maybe edit your question with a link to said paper? I'd assume the authors define how they mean fundamental frequency, and over which time window they want to determine it. – Marcus Müller Jun 22 at 15:41
• Done. That's why I'm suffering a lot, every paper has a lack of explaination some how. I always need to dig the information somewhere else, the big problem is on calculations that needs some kind of parameters that the authors generaly simply don't talk about. – Hugo Sartori Jun 22 at 15:49 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8994883894920349, "perplexity": 1254.4387290160792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315865.44/warc/CC-MAIN-20190821085942-20190821111942-00418.warc.gz"} |
http://www2.macaulay2.com/Macaulay2/doc/Macaulay2-1.19/share/doc/Macaulay2/RelativeCanonicalResolution/html/toc.html | • RelativeCanonicalResolution -- construction of relative canonical resolutions and Eagon-Northcott type complexes
• balancedPartition -- Computes balanced partition of n of length d
• canCurveWithFixedScroll -- Computes a g-nodal canonical curve with a degree k line bundle on a normalized scroll
• canonicalMultipliers -- Computes the canonical multipliers of a rational curves with nodes
• coxDegrees -- Computes the degree of a polynomial in the Cox ring corresponding to a section of a bundle on the scroll
• curveOnScroll -- Computes the ideal of a canonical curve on a normalized scroll in terms of generators of the scroll
• eagonNorthcottType -- Computes the Eagon-Northcott type resolution
• iteratedCone -- Computes a (possibly non-minimal) resolution of C in P^{g-1} starting from the relative canonical resolution of C in P(E)
• liftMatrixToEN -- Lifts a matrix between bundles on the scroll to the associated Eagon-Northcott type complexes
• lineBundleFromPointsAndMultipliers -- Computes basis of a line bundle from the 2g points P_i, Q_i and the multipliers
• resCurveOnScroll -- Computes the relative canonical resolution
• rkSyzModules -- Computes the rank of the i-th module in the relative canonical resolution
• scrollDegrees -- Computes the degree of a section of a bundle on the scroll ring corresponding to a polynomial in the Cox ring | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9400807023048401, "perplexity": 2153.6136668619483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00137.warc.gz"} |
http://math.stackexchange.com/questions/457345/when-is-it-appropriate-to-count-outcomes-to-solve-a-combinatorial-probability-pr | # When is it appropriate to count outcomes to solve a combinatorial probability problem
I have been looking at problems on this site such as selecting cards from a deck of cards or marbles from a bag of marbles. One thing I have been struggling with is when I can solve the problem simply by counting outcomes, or, equivalently, when I can assume that outcomes have equal probability. I have two recent, motivating examples.
1. Probability of 2 Cards being adjacent. The problem here is to do with the probability of picking cards and finding a seven next to a king. In theory we can solve the problem by considering of 3 groups with identical items in each group: 4 sevens, 4 kings, and 44 others. We then count the number of outcomes of interest, and divide by the total number of outcomes (the total is given by the multinomial theorem).
2. probability of occcuring alternative colors. Here there is one bag, with $n_1$ balls of one color and $n_2$ balls of another, $n_1 \neq n_2$, and we are interested in the probabilities of outcomes when we draw $k < n_1 + n_2$ balls from the bag (with replacement). This is a case where the probabilities of outcomes are not equal.
To quote from the answer to the second question
counting outcomes is not a good way to approach this problem.
I can see in each case, and other cases, when counting is ok -- when outcomes are equally probable, using common sense and intuition. But this took me quite a long time in the first case. It is also error-prone. It seems like more skilled mathematicians understand when counting is a good way to approach a problem, so my question is:-
Are there any rules or principles that I can apply to decide whether or not counting outcomes is reasonable?
(related, more general, question about how to approach combinatorial problems: Combinatorics: When To Use Different Counting Techniques)
-
In the problem 2 that you mention, there is only one bag. The problem can be perfectly well solved by counting how many of the $7^4$ equally likely possibilities satisfy the alternating colour criterion. The details are essentially the same as the ones we get when we use the "probabilistic" approach. That said, I prefer direct calculation of probabilities. For one thing it is more general, with equal ease it can deal with fair dice and loaded dice. – André Nicolas Aug 1 '13 at 16:25
@AndréNicolas thanks I have fixed the error in my question. Re counting $7^4$ equally likely possibilities, I suppose the key point is that some of these equally likely possibilities refer to the same outcome (this is related to multiplying by $^n\rm{C}_x$ to get binomial probabilities for an outcome -- I was wondering whether that fitted in). That helps, thanks. – TooTone Aug 1 '13 at 16:37
For $7^4$ we need to treat the balls as having Student Numbers. To count the alternating strings count the WBWB and BWBW. Let us count the WBWB. First slot can be filled in $4$ ways, for each the second can be filled in $3$, and so on. Or else the two W slots can be filled in $4^2$ ways and for each the two B slots can be filled in $3^2$ ways. – André Nicolas Aug 1 '13 at 16:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8763540387153625, "perplexity": 246.7719353619122}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645199297.56/warc/CC-MAIN-20150827031319-00168-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://terrytao.wordpress.com/2017/05/11/generalisations-of-the-limit-functional/ | Suppose one has a bounded sequence ${(a_n)_{n=1}^\infty = (a_1, a_2, \dots)}$ of real numbers. What kinds of limits can one form from this sequence?
Of course, we have the usual notion of limit ${\lim_{n \rightarrow \infty} a_n}$, which in this post I will refer to as the classical limit to distinguish from the other limits discussed in this post. The classical limit, if it exists, is the unique real number ${L}$ such that for every ${\varepsilon>0}$, one has ${|a_n-L| \leq \varepsilon}$ for all sufficiently large ${n}$. We say that a sequence is (classically) convergent if its classical limit exists. The classical limit obeys many useful limit laws when applied to classically convergent sequences. Firstly, it is linear: if ${(a_n)_{n=1}^\infty}$ and ${(b_n)_{n=1}^\infty}$ are classically convergent sequences, then ${(a_n+b_n)_{n=1}^\infty}$ is also classically convergent with
$\displaystyle \lim_{n \rightarrow \infty} (a_n + b_n) = (\lim_{n \rightarrow \infty} a_n) + (\lim_{n \rightarrow \infty} b_n) \ \ \ \ \ (1)$
and similarly for any scalar ${c}$, ${(ca_n)_{n=1}^\infty}$ is classically convergent with
$\displaystyle \lim_{n \rightarrow \infty} (ca_n) = c \lim_{n \rightarrow \infty} a_n. \ \ \ \ \ (2)$
It is also an algebra homomorphism: ${(a_n b_n)_{n=1}^\infty}$ is also classically convergent with
$\displaystyle \lim_{n \rightarrow \infty} (a_n b_n) = (\lim_{n \rightarrow \infty} a_n) (\lim_{n \rightarrow \infty} b_n). \ \ \ \ \ (3)$
We also have shift invariance: if ${(a_n)_{n=1}^\infty}$ is classically convergent, then so is ${(a_{n+1})_{n=1}^\infty}$ with
$\displaystyle \lim_{n \rightarrow \infty} a_{n+1} = \lim_{n \rightarrow \infty} a_n \ \ \ \ \ (4)$
and more generally in fact for any injection ${\phi: {\bf N} \rightarrow {\bf N}}$, ${(a_{\phi(n)})_{n=1}^\infty}$ is classically convergent with
$\displaystyle \lim_{n \rightarrow \infty} a_{\phi(n)} = \lim_{n \rightarrow \infty} a_n. \ \ \ \ \ (5)$
The classical limit of a sequence is unchanged if one modifies any finite number of elements of the sequence. Finally, we have boundedness: for any classically convergent sequence ${(a_n)_{n=1}^\infty}$, one has
$\displaystyle \inf_n a_n \leq \lim_{n \rightarrow \infty} a_n \leq \sup_n a_n. \ \ \ \ \ (6)$
One can in fact show without much difficulty that these laws uniquely determine the classical limit functional on convergent sequences.
One would like to extend the classical limit notion to more general bounded sequences; however, when doing so one must give up one or more of the desirable limit laws that were listed above. Consider for instance the sequence ${a_n = (-1)^n}$. On the one hand, one has ${a_n^2 = 1}$ for all ${n}$, so if one wishes to retain the homomorphism property (3), any “limit” of this sequence ${a_n}$ would have to necessarily square to ${1}$, that is to say it must equal ${+1}$ or ${-1}$. On the other hand, if one wished to retain the shift invariance property (4) as well as the homogeneity property (2), any “limit” of this sequence would have to equal its own negation and thus be zero.
Nevertheless there are a number of useful generalisations and variants of the classical limit concept for non-convergent sequences that obey a significant portion of the above limit laws. For instance, we have the limit superior
$\displaystyle \limsup_{n \rightarrow \infty} a_n := \inf_N \sup_{n \geq N} a_n$
$\displaystyle \liminf_{n \rightarrow \infty} a_n := \sup_N \inf_{n \geq N} a_n$
which are well-defined real numbers for any bounded sequence ${(a_n)_{n=1}^\infty}$; they agree with the classical limit when the sequence is convergent, but disagree otherwise. They enjoy the shift-invariance property (4), and the boundedness property (6), but do not in general obey the homomorphism property (3) or the linearity property (1); indeed, we only have the subadditivity property
$\displaystyle \limsup_{n \rightarrow \infty} (a_n + b_n) \leq (\limsup_{n \rightarrow \infty} a_n) + (\limsup_{n \rightarrow \infty} b_n)$
for the limit superior, and the superadditivity property
$\displaystyle \liminf_{n \rightarrow \infty} (a_n + b_n) \geq (\liminf_{n \rightarrow \infty} a_n) + (\liminf_{n \rightarrow \infty} b_n)$
for the limit inferior. The homogeneity property (2) is only obeyed by the limits superior and inferior for non-negative ${c}$; for negative ${c}$, one must have the limit inferior on one side of (2) and the limit superior on the other, thus for instance
$\displaystyle \limsup_{n \rightarrow \infty} (-a_n) = - \liminf_{n \rightarrow \infty} a_n.$
The limit superior and limit inferior are examples of limit points of the sequence, which can for instance be defined as points that are limits of at least one subsequence of the original sequence. Indeed, the limit superior is always the largest limit point of the sequence, and the limit inferior is always the smallest limit point. However, limit points can be highly non-unique (indeed they are unique if and only if the sequence is classically convergent), and so it is difficult to sensibly interpret most of the usual limit laws in this setting, with the exception of the homogeneity property (2) and the boundedness property (6) that are easy to state for limit points.
Another notion of limit are the Césaro limits
$\displaystyle \mathrm{C}\!-\!\lim_{n \rightarrow \infty} a_n := \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N a_n;$
if this limit exists, we say that the sequence is Césaro convergent. If the sequence ${(a_n)_{n=1}^\infty}$ already has a classical limit, then it also has a Césaro limit that agrees with the classical limit; but there are additional sequences that have a Césaro limit but not a classical one. For instance, the non-classically convergent sequence ${a_n= (-1)^n}$ discussed above is Césaro convergent, with a Césaro limit of ${0}$. However, there are still bounded sequences that do not have Césaro limit, such as ${a_n := \sin( \log n )}$ (exercise!). The Césaro limit is linear, bounded, and shift invariant, but not an algebra homomorphism and also does not obey the rearrangement property (5).
Using the Hahn-Banach theorem, one can extend the classical limit functional to generalised limit functionals ${\mathop{\widetilde \lim}_{n \rightarrow \infty} a_n}$, defined to be bounded linear functionals from the space ${\ell^\infty({\bf N})}$ of bounded real sequences to the real numbers ${{\bf R}}$ that extend the classical limit functional (defined on the space ${c_0({\bf N}) + {\bf R}}$ of convergent sequences) without any increase in the operator norm. (In some of my past writings I made the slight error of referring to these generalised limit functionals as Banach limits, though as discussed below, the latter actually refers to a subclass of generalised limit functionals.) It is not difficult to see that such generalised limit functionals will range between the limit inferior and limit superior. In fact, for any specific sequence ${(a_n)_{n=1}^\infty}$ and any number ${L}$ lying in the closed interval ${[\liminf_{n \rightarrow \infty} a_n, \limsup_{n \rightarrow \infty} a_n]}$, there exists at least one generalised limit functional ${\mathop{\widetilde \lim}_{n \rightarrow \infty}}$ that takes the value ${L}$ when applied to ${a_n}$; for instance, for any number ${\theta}$ in ${[-1,1]}$, there exists a generalised limit functional that assigns that number ${\theta}$ as the “limit” of the sequence ${a_n = (-1)^n}$. This claim can be seen by first designing such a limit functional on the vector space spanned by the convergent sequences and by ${(a_n)_{n=1}^\infty}$, and then appealing to the Hahn-Banach theorem to extend to all sequences. This observation also gives a necessary and sufficient criterion for a bounded sequence ${(a_n)_{n=1}^\infty}$ to classically converge to a limit ${L}$, namely that all generalised limits of this sequence must equal ${L}$.
Because of the reliance on the Hahn-Banach theorem, the existence of generalised limits requires the axiom of choice (or some weakened version thereof); there are presumably models of set theory without the axiom of choice in which no generalised limits exist, but I do not know of an explicit reference for this.
Generalised limits can obey the shift-invariance property (4) or the algebra homomorphism property (2), but as the above analysis of the sequence ${a_n = (-1)^n}$ shows, they cannot do both. Generalised limits that obey the shift-invariance property (4) are known as Banach limits; one can for instance construct them by applying the Hahn-Banach theorem to the Césaro limit functional; alternatively, if ${\mathop{\widetilde \lim}}$ is any generalised limit, then the Césaro-type functional ${(a_n)_{n=1}^\infty \mapsto \mathop{\widetilde \lim}_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N a_n}$ will be a Banach limit. The existence of Banach limits can be viewed as a demonstration of the amenability of the natural numbers (or integers); see this previous blog post for further discussion.
Generalised limits that obey the algebra homomorphism property (2) are known as ultrafilter limits. If one is given a generalised limit functional ${p\!-\!\lim_{n \rightarrow \infty}}$ that obeys (2), then for any subset ${A}$ of the natural numbers ${{\bf N}}$, the generalised limit ${p\!-\!\lim_{n \rightarrow \infty} 1_A(n)}$ must equal its own square (since ${1_A(n)^2 = 1_A(n)}$) and is thus either ${0}$ or ${1}$. If one defines ${p \subset 2^{2^{\bf N}}}$ to be the collection of all subsets ${A}$ of ${{\bf N}}$ for which ${p\!-\!\lim_{n \rightarrow \infty} 1_A(n) = 1}$, one can verify that ${p}$ obeys the axioms of a non-principal ultrafilter. Conversely, if ${p}$ is a non-principal ultrafilter, one can define the associated generalised limit ${p\!-\!\lim_{n \rightarrow \infty} a_n}$ of any bounded sequence ${(a_n)_{n=1}^\infty}$ to be the unique real number ${L}$ such that the sets ${\{ n \in {\bf N}: |a_n - L| \leq \varepsilon \}}$ lie in ${p}$ for all ${\varepsilon>0}$; one can check that this does indeed give a well-defined generalised limit that obeys (2). Non-principal ultrafilters can be constructed using Zorn’s lemma. In fact, they do not quite need the full strength of the axiom of choice; see the Wikipedia article on the ultrafilter lemma for examples.
We have previously noted that generalised limits of a sequence can converge to any point between the limit inferior and limit superior. The same is not true if one restricts to Banach limits or ultrafilter limits. For instance, by the arguments already given, the only possible Banach limit for the sequence ${a_n = (-1)^n}$ is zero. Meanwhile, an ultrafilter limit must converge to a limit point of the original sequence, but conversely every limit point can be attained by at least one ultrafilter limit; we leave these assertions as an exercise to the interested reader. In particular, a bounded sequence converges classically to a limit ${L}$ if and only if all ultrafilter limits converge to ${L}$.
There is no generalisation of the classical limit functional to any space that includes non-classically convergent sequences that obeys the subsequence property (5), since any non-classically-convergent sequence will have one subsequence that converges to the limit superior, and another subsequence that converges to the limit inferior, and one of these will have to violate (5) since the limit superior and limit inferior are distinct. So the above limit notions come close to the best generalisations of limit that one can use in practice.
We summarise the above discussion in the following table:
Limit Always defined Linear Shift-invariant Homomorphism Constructive Classical No Yes Yes Yes Yes Superior Yes No Yes No Yes Inferior Yes No Yes No Yes Césaro No Yes Yes No Yes Generalised Yes Yes Depends Depends No Banach Yes Yes Yes No No Ultrafilter Yes Yes No Yes No | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 115, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9893397688865662, "perplexity": 160.7311683240128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812306.16/warc/CC-MAIN-20180219012716-20180219032716-00462.warc.gz"} |
https://en.wikipedia.org/wiki/Restricted_representation | Restricted representation
In mathematics, restriction is a fundamental construction in representation theory of groups. Restriction forms a representation of a subgroup from a representation of the whole group. Often the restricted representation is simpler to understand. Rules for decomposing the restriction of an irreducible representation into irreducible representations of the subgroup are called branching rules, and have important applications in physics. For example, in case of explicit symmetry breaking, the symmetry group of the problem is reduced from the whole group to one of its subgroups. In quantum mechanics, this reduction in symmetry appears as a splitting of degenerate energy levels into multiplets, as in the Stark or Zeeman effect.
The induced representation is a related operation that forms a representation of the whole group from a representation of a subgroup. The relation between restriction and induction is described by Frobenius reciprocity and the Mackey theorem. Restriction to a normal subgroup behaves particularly well and is often called Clifford theory after the theorem of A. H. Clifford.[1] Restriction can be generalized to other group homomorphisms and to other rings.
For any group G, its subgroup H, and a linear representation ρ of G, the restriction of ρ to H, denoted
ρ|H,
is a representation of H on the same vector space by the same operators:
ρ|H(h) = ρ(h).
Classical branching rules
Classical branching rules describe the restriction of an irreducible representation (π, V) of a classical group G to a classical subgroup H, i.e. the multiplicity with which an irreducible representation (σ, W) of H occurs in π. By Frobenius reciprocity for compact groups, this is equivalent to finding the multiplicity of π in the unitary representation induced from σ. Branching rules for the classical groups were determined by
The results are usually expressed graphically using Young diagrams to encode the signatures used classically to label irreducible representations, familiar from classical invariant theory. Hermann Weyl and Richard Brauer discovered a systematic method for determining the branching rule when the groups G and H share a common maximal torus: in this case the Weyl group of H is a subgroup of that of G, so that the rule can be deduced from the Weyl character formula.[2][3] A systematic modern interpretation has been given by Howe (1995) in the context of his theory of dual pairs. The special case where σ is the trivial representation of H was first used extensively by Hua in his work on the Szegő kernels of bounded symmetric domains in several complex variables, where the Shilov boundary has the form G/H.[4][5] More generally the Cartan-Helgason theorem gives the decomposition when G/H is a compact symmetric space, in which case all multiplicities are one;[6] a generalization to arbitrary σ has since been obtained by Kostant (2004). Similar geometric considerations have also been used by Knapp (2005) to rederive Littlewood's rules, which involve the celebrated Littlewood–Richardson rules for tensoring irreducible representations of the unitary groups. Littelmann (1995) has found generalizations of these rules to arbitrary compact semisimple Lie groups, using his path model, an approach to representation theory close in spirit to the theory of crystal bases of Lusztig and Kashiwara. His methods yield branching rules for restrictions to subgroups containing a maximal torus. The study of branching rules is important in classical invariant theory and its modern counterpart, algebraic combinatorics.[7][8]
Example. The unitary group U(N) has irreducible representations labelled by signatures
$\mathbf{f} \,\colon \,f_1\ge f_2\ge \cdots \ge f_N$
where the fi are integers. In fact if a unitary matrix U has eigenvalues zi, then the character of the corresponding irreducible representation πf is given by
$\mathrm{Tr} \, \pi_{\mathbf{f}}(U) = {\mathrm{det}\, z_j^{f_i +N -i}\over \prod_{i
The branching rule from U(N) to U(N – 1) states that
$\pi_{\mathbf{f}}|_{U(N-1)}= \bigoplus_{f_1\ge g_1 \ge f_2\ge g_2\ge \cdots \ge f_{N-1}\ge g_{N-1}\ge f_N} \pi_{\mathbf{g}}$
Example. The unitary symplectic group or quaternionic unitary group, denoted Sp(N) or U(N, H), is the group of all transformations of HN which commute with right multiplication by the quaternions H and preserve the H-valued hermitian inner product
$(q_1,\ldots,q_N)\cdot (r_1,\ldots,r_N) = \sum r_i^*q_i$
on HN, where q* denotes the quaternion conjugate to q. Realizing quaternions as 2 x 2 complex matrices, the group Sp(N) is just the group of block matrices (qij) in SU(2N) with
$q_{ij}=\begin{pmatrix} \alpha_{ij}&\beta_{ij}\\ -\overline{\beta}_{ij}&\overline{\alpha}_{ij} \end{pmatrix},$
where αij and βij are complex numbers.
Each matrix U in Sp(N) is conjugate to a block diagonal matrix with entries
$q_i=\begin{pmatrix} z_i&0\\ 0&\overline{z}_i \end{pmatrix},$
where |zi| = 1. Thus the eigenvalues of U are (zi±1). The irreducible representations of Sp(N) are labelled by signatures
$\mathbf{f} \,\colon \,f_1\ge f_2\ge \cdots \ge f_N\ge 0$
where the fi are integers. The character of the corresponding irreducible representation σf is given by[9]
$\mathrm{Tr} \, \sigma_{\mathbf{f}}(U) = {\mathrm{det}\, z_j^{f_i +N -i +1 } - z_j^{-f_i - N +i -1}\over \prod (z_i-z_i^{-1})\cdot \prod_{i
The branching rule from Sp(N) to Sp(N – 1) states that[10]
$\sigma_{\mathbf{f}}|_{\mathrm{Sp}(N-1)}= \bigoplus_{f_i \ge g_i\ge f_{i+2}} m(\mathbf{f},\mathbf{g}) \sigma_{\mathbf{g}}$
Here fN + 1 = 0 and the multiplicity m(f, g) is given by
$m(\mathbf{f},\mathbf{g})=\prod_{i=1}^N (a_i - b_i +1)$
where
$a_1\ge b_1 \ge a_2 \ge b_2 \ge \cdots \ge a_N \ge b_N=0$
is the non-increasing rearrangement of the 2N non-negative integers (fi), (gj) and 0.
Example. The branching from U(2N) to Sp(N) relies on two identities of Littlewood:[11][12][13][14]
$\sum_{f_1\ge f_2\ge f_N\ge 0} \mathrm{Tr}\Pi_{\mathbf{f},0}(z_1,z_1^{-1},\ldots, z_N,z_N^{-1}) \cdot \mathrm{Tr}\pi_{\mathbf{f}}(t_1,\ldots,t_N) =\sum_{f_1\ge f_2\ge f_N\ge 0} \mathrm{Tr}\sigma_{\mathbf{f}}(z_1,\ldots, z_N) \cdot \mathrm{Tr}\pi_{\mathbf{f}}(t_1,\ldots,t_N)\cdot \prod_{i
where Πf,0 is the irreducible representation of U(2N) with signature f1 ≥ ··· ≥ fN ≥ 0 ≥ ··· ≥ 0.
$\prod_{i
where fi ≥ 0.
The branching rule from U(2N) to Sp(N) is given by
$\Pi_{\mathbf{f},0}|_{\mathrm{Sp}(N)}= \bigoplus_{\mathbf{h}, \,\,\mathbf{g},\,\, g_{2i-1}=g_{2i}} M(\mathbf{g}, \mathbf{h};\mathbf{f}) \sigma_{\mathbf{h}}$
where all the signature are non-negative and the coefficient M (g, h; k) is the multiplicity of the irreducible representation πk of U(N) in the tensor product πg $\otimes$ πh. It is given combinatorially by the Littlewood–Richardson rule, the number of lattice permutations of the skew diagram k/h of weight g.[8]
There is an extension of Littelwood's branching rule to arbitrary signatures due to Sundaram (1990, p. 203). The Littlewood–Richardson coefficients M (g, h; f) are extended to allow the signature f to have 2N parts but restricting g to have even column-lengths (g2i – 1 = g2i). In this case the formula reads
$\Pi_{\mathbf{f}}|_{\mathrm{Sp}(N)}= \bigoplus_{\mathbf{h}, \,\,\mathbf{g},\,\, g_{2i-1}=g_{2i}} M_N(\mathbf{g}, \mathbf{h};\mathbf{f}) \sigma_{\mathbf{h}}$
where MN (g, h; f) counts the number of lattice permutations of f/h of weight g are counted for which 2j + 1 appears no lower than row N + j of f for 1 ≤ j ≤ |g|/2.
Example. The special orthogonal group SO(N) has irreducible ordinary and spin representations labelled by signatures[2][7][15][16]
• $f_1\ge f_2 \ge \cdots \ge f_{n-1}\ge|f_n|$ for N = 2n;
• $f_1 \ge f_2 \ge \cdots \ge f_n \ge 0$ for N = 2n+1.
The fi are taken in Z for ordinary representations and in ½ + Z for spin representations. In fact if an orthogonal matrix U has eigenvalues zi±1 for 1 ≤ in, then the character of the corresponding irreducible representation πf is given by
$\mathrm{Tr} \, \pi_{\mathbf{f}}(U) = {\mathrm{det}\, (z_j^{f_i +n -i} + z_j^{-f_i-n +i})\over \prod_{i
for N = 2n and by
$\mathrm{Tr} \, \pi_{\mathbf{f}}(U) = {\mathrm{det}\, (z_j^{f_i +1/2 +n -i} - z_j^{-f_i -1/2-n +i})\over \prod_{i
for N = 2n+1.
The branching rules from SO(N) to SO(N – 1) state that[17]
$\pi_{\mathbf{f}}|_{SO(2n)}= \bigoplus_{f_1\ge g_1 \ge f_2\ge g_2\ge \cdots \ge f_{n-1}\ge g_{n-1}\ge f_n \ge |g_n|} \pi_{\mathbf{g}}$
for N = 2n+1 and
$\pi_{\mathbf{f}}|_{SO(2n-1)}= \bigoplus_{f_1\ge g_1 \ge f_2\ge g_2\ge \cdots \ge f_{n-1}\ge g_{n-1}\ge |f_n|} \pi_{\mathbf{g}}$
for N = 2n, where the differences fi - gi must be integers.
Gelfand-Tsetlin basis
Since the branching rules from U(N) to U(N–1) or SO(N) to SO(N–1) have multiplicity one, the irreducible summands corresponding to smaller and smaller N will eventually terminate in one-dimensional subspaces. In this way Gelfand and Tsetlin were able to obtain a basis of any irreducible representation of U(N) or SO(N) labelled by a chain of interleaved signatures, called a Gelfand-Tsetlin pattern. Explicit formulas for the action of the Lie algebra on the Gelfand-Tsetlin basis are given in Želobenko (1973).
For the remaining classical group Sp(N), the branching is no longer multiplicity free, so that if V and W are irreducible representation of Sp(N–1) and Sp(N) the space of intertwiners HomSp(N–1)(V,W) can have dimension greater than one. It turns out that the Yangian Y($\mathfrak{gl}$2), a Hopf algebra introduced by Ludwig Faddeev and collaborators, acts irreducibly on this multiplicity space, a fact which enabled Molev (2006) to extend the construction of Gelfand-Tsetlin bases to Sp(N).[18]
Clifford's theorem
Main article: Clifford theory
In 1937 Alfred H. Clifford proved the following result on the restriction of finite-dimensional irreducible representations from a group G to a normal subgroup N of finite index:[19]
Theorem. Let π: G $\rightarrow$ GL(n,K) be an irreducible representation with K a field. Then the restriction of π to N breaks up into a direct sum of inequivalent irreducible representations of N of equal dimensions. These irreducible representations of N lie in one orbit for the action of G by conjugation on the equivalence classes of irreducible representations of N. In particular the number of distinct summands is no greater than the index of N in G.
Twenty years later George Mackey found a more precise version of this result for the restriction of irreducible unitary representations of locally compact groups to closed normal subgroups in what has become known as the "Mackey machine" or "Mackey normal subgroup analysis".[20]
Abstract algebraic setting
From the point of view of category theory, restriction is an instance of a forgetful functor. This functor is exact, and its left adjoint functor is called induction. The relation between restriction and induction in various contexts is called the Frobenius reciprocity. Taken together, the operations of induction and restriction form a powerful set of tools for analyzing representations. This is especially true whenever the representations have the property of complete reducibility, for example, in representation theory of finite groups over a field of characteristic zero.
Generalizations
This rather evident construction may be extended in numerous and significant ways. For instance we may take any group homomorphism φ from H to G, instead of the inclusion map, and define the restricted representation of H by the composition
ρoφ.
We may also apply the idea to other categories in abstract algebra: associative algebras, rings, Lie algebras, Lie superalgebras, Hopf algebras to name some. Representations or modules restrict to subobjects, or via homomorphisms.
Notes
1. ^ Weyl 1946, pp. 159–160.
2. ^ a b Weyl 1946
3. ^ Želobenko 1963
4. ^ Helgason 1978
5. ^ Hua 1963
6. ^ Helgason 1984, pp. 534–543
7. ^ a b Goodman & Wallach 1998
8. ^ a b Macdonald 1979
9. ^ Weyl 1946, p. 218
10. ^ Goodman & Wallach 1998, pp. 351–352,365–370
11. ^ Littlewood 1950
12. ^ Weyl 1946, pp. 216–222
13. ^ Koike & Terada 1987
14. ^ Macdonald 1979, p. 46
15. ^ Littelwood 1950, pp. 223–263
16. ^ Murnaghan 1938
17. ^ Goodman Wallach, p. 351
18. ^ G. I. Olshanski had shown that the twisted Yangian Y($\mathfrak{gl}$2), a sub-Hopf algebra of Y($\mathfrak{gl}$2), acts naturally on the space of intertwiners. Its natural irreducible representations correspond to tensor products of the composition of point evaluations with irreducible representations of $\mathfrak{gl}$2. These extend to the Yangian Y($\mathfrak{gl}$2) and give a representation theoretic explanation of the product form of the branching coefficients.
19. ^ Weyl 1946, pp. 159–160,311
20. ^ Mackey, George W. (1976), The theory of unitary group representations, Chicago Lectures in Mathematics, ISBN 0-226-50052-7 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 28, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704200029373169, "perplexity": 972.4133466794009}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095423.29/warc/CC-MAIN-20150627031815-00155-ip-10-179-60-89.ec2.internal.warc.gz"} |
https://tribology.asmedigitalcollection.asme.org/HT/proceedings-abstract/HT-FED2004/46903/929/364439 | The correlation methodology widely used in heat transfer and fluid flow is based on fitting power laws to data. Because all power laws of positive exponent include the point (0,0), this methodology includes the tacit assumption that phenomena are best described by correlations that include the point (0,0). • If a phenomenon occurs near (0,0), the assumption is obviously valid. For example, laminar flow occurs near (0,0), and therefore the assumption is valid for laminar flow pressure drop correlations. • If a phenomenon does not occur near (0,0), the assumption is obviously invalid. For example, turbulent flow does not occur near (0,0)—it occurs only after a critical Reynolds number is reached. Therefore the assumption is invalid for turbulent flow pressure drop correlations. When the assumption is invalid, the correlation methodology widely used in heat transfer and fluid flow is lacking in rigor. The impact of the lack of rigor is evidenced by examples that demonstrate that, when this methodology is applied to phenomena that do not occur in the vicinity of (0,0), highly nonlinear power laws oftentimes result from data that exhibit highly linear behavior. Because the widely used methodology lacks rigor when applied to phenomena that do not occur near (0,0), power laws based on this methodology are suspect if they purport to describe phenomena that do not occur near (0,0). Data cited in support of such power laws should be recorrelated using rigorous correlation methodology. Rigorous correlation methodology is also used in heat transfer and fluid flow. It is described in the text, and should become the methodology in general use.
This content is only available via PDF. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8231242895126343, "perplexity": 465.61705997875225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195198.31/warc/CC-MAIN-20201128070431-20201128100431-00446.warc.gz"} |
https://www.physicsforums.com/threads/component-of-fibonacci-sequence.138923/ | # Component of Fibonacci Sequence
1. Oct 18, 2006
### leospyder
Please excuse my total ignorance but can someon explain to me how the following part of a certain proof makes sense
We want to show that Fk+1 ≤ (7/4)^(k+1). Consider fk+1 = fk + fk−1 (We can do this
as k +1 is at least 2; see the comment following the basis) < (7/4)^k +(7 /4)^(k−1) (by the Induction Hypothesis;
notice how the stronger hypothesis comes in handy here.)
The parts I bolded in red are mainly the things I dont understand. I plugged in the (7/4)...part into my calculator and did not get the alleged answer I was supposed to get if it were simply (7/4)^k+1. Can someone please enlighten me?
2. Oct 18, 2006
### matt grime
If a<b, and c>0, then ac<bc (and no that is not cryptic - you end up with something that you wish to show is less than something else - there is no reason to suppose they are equal, nor is it necessary. If I want to show something is less than 4 and I show it is less than 3 I've shown it is less than 4, for example).
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Have something to add?
Similar Discussions: Component of Fibonacci Sequence | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8511469960212708, "perplexity": 1125.0790574408252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00651-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://proofwiki.org/wiki/Sequence_of_Powers_of_Number_less_than_One/Complex_Numbers | # Sequence of Powers of Number less than One/Complex Numbers
## Theorem
Let $z \in \C$.
Let $\sequence {z_n}$ be the sequence in $\C$ defined as $z_n = z^n$.
Then:
$\size z < 1$ if and only if $\sequence {z_n}$ is a null sequence.
## Proof
By the definition of convergence:
$\ds \lim_{n \mathop \to \infty} z_n = 0 \iff \lim_{n \mathop \to \infty} \size {z_n} = 0$
$\forall n \in \N: \size {z_n} = \size {z^n} = \size z^n$
So:
$\ds \lim_{n \mathop \to \infty} \size {z_n} = 0 \iff \lim_{n \mathop \to \infty} \size z^n = 0$
Since $\size z \in \R_{\ge 0}$, by Sequence of Powers of Real Number less than One:
$\ds \lim_{n \mathop \to \infty} \size z^n = 0 \iff \size z < 1$
The result follows.
$\blacksquare$
## Also known as
This result and Sequence of Powers of Reciprocals is Null Sequence are sometimes referred to as the basic null sequences. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.980171263217926, "perplexity": 353.433394875864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662636717.74/warc/CC-MAIN-20220527050925-20220527080925-00175.warc.gz"} |
https://tex.stackexchange.com/questions/338562/make-underbrace-ignore-brackets | # Make Underbrace ignore Brackets
I am using under brace to define an important term. The code I'm using generates the following:
Code:
h_n\left[f_X(0)\underbrace{\int_0^{+\infty}K(z)^2dz}_{\equiv\beta} +o_p(1)\right]
There is one problem, the brackets extend to allow the beta to fit inside it. I would like to avoid this behavior: brackets extend only enough to fit the integral, but not to fit the underbrace. Instead, I would like to obtain the following:
Is it possible? How can I generate such outcome?
Thanks for helping! :D
\documentclass{article}
\begin{document}
$$h_n\biggl[f_X(0)\underbrace{\int_0^{+\infty}K(z)^2dz}_{\equiv\beta} +o_p(1)\biggr]$$
\end{document}
• It's better to avoid the automatic delimiters anyway. Have a look at the mathtools package for better-spaced ones. – JPi Nov 11 '16 at 2:54
I am not advocating this solution, but onw way to solve this issue and still allow for "automatic" resizing is to use \smash and an appopriate \vphantom:
## Code:
\documentclass{article}
\begin{document}
$$h_n\left[f_X(0)\vphantom{\int}\smash{\underbrace{\int_0^{+\infty}K(z)^2dz}}_{\equiv\beta} +o_p(1)\right]$$
\end{document}
• One problem. When using smash inside align with multiple lines, then the next line will collide with the smashed underbrace. – mxmlnkn Apr 2 '18 at 18:42
• @mxmlnkn: You can use the optinal parameter to \\ to add some vertical space. For instance, you can use \\\[1.0ex] to increase the space between the current line and the subsequent line. – Peter Grill Apr 4 '18 at 21:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9532102346420288, "perplexity": 1607.2374399186247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232262369.94/warc/CC-MAIN-20190527105804-20190527131804-00327.warc.gz"} |
https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_Fluid_Mechanics_(Bar-Meir)/04%3A_Fluids_Statics/4.3%3A_Pressure_and_Density_in_a_Gravitational_Field | # 4.3: Pressure and Density in a Gravitational Field
$$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
In this section, a discussion on the pressure and the density in various conditions is presented. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532970190048218, "perplexity": 62.86825361651802}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00272.warc.gz"} |
http://quant.stackexchange.com/tags/stochastic-processes/hot | # Tag Info
15
From what I remember, there is no real relation between Markov and Martingale, and my intuition was confirmed by this post. Basically, it says that you can say neither of the following: If A is Markov, then A is a martingale. If A is a martingale, then A is Markov. further down the post, you can find two counter examples: $dX_t = a dt + \sigma dW_t$ is ...
13
I will defer to others answering the parts of your question concerning the relationship between Markov processes and martingales (@SRKX has already given a good explanation of the relationship) and concerning statistical testing. Broadly, however, it is not possible to "prove" either assumption, but only to fail to reject them. A Non-Random Walk Down Wall ...
11
The way you do it in the first place is a discretization of the Geometric Brownian Motion (GBM) process. This method is most useful when you want to compute the path between $S_0$ and $S_t$, i.e. you want to know all the intermediary points $S_i$ for $0 \leq i \leq t$. The second equation is a closed form solution for the GBM given $S_0$. A simple ...
9
These moving strategies are also known as trend-following. If returns have positive autocorrelation, hurst exponent > 0.5 that would be good for these strategies.
8
"Treshold Garch" or T-Garch models are designed to capture this asymmetry. See this exposition by U. Chicago's Ruey Tsay who has a terrific text on time-series models in "Analysis of Financial Time Series". You can use the structure of the T-Garch models to simulate data with this property. There is a package called fGarch that creates APARCH models. A ...
7
These patterns are of course well-known enough to have been "priced in" to the financial markets. Jump diffusions are a classic way to capture the phenomenon, and often have closed-form option pricing formulas associated with them. The implied option skew, for example, gets a lot flatter when you use a JD model. Jump diffusions are often combined with ...
6
I have low frequency data (daily) from which I want to construct high frequency data, going though all the lower frequency sampling points. Bad idea in my opinion. I don't really know why you really want to do this (what's are you going to do with the generated data). If it's for backtesting purposes, it's a really bad idea as there are so many ...
6
The model for the stock is the Bachelier model with the solution $$S(t) = S(0) + \sigma W(t)$$ Thus the law of the stock $S(t)$ is Gaussian with mean $S(0)$ and variance $\sigma^2 t$. For average process $Z(T)$ is thus the average of linear Brownian motion, we can rewrite this as $$Z(T) = \frac{1}{T} \int_0^T S(0) + \sigma W(t) dt = S(0) + ... 6 I like Richard's answer, but I think we can compute the mean and the variance of \int_0^T W_t dt by ourselves using Ito's lemma. Let f(W_t, t) = t W_t.$$ d( t W_t ) = W_t dt + t dW_t . $$Integrating both sides, and re-arranging the terms, we get$$ \int_0^T W_t dt = T W_T - \int_0^T t dW_t \, . $$We'll be using Ito's isometry formula \mathbb{E} ... 5 In general, if you have a process that you can write under the form F(B_t,t) where F is \mathcal{C}^{2,1} then Itô's lemma gives you the drift term and diffusion term of dF. Then if the resulting SDE has a null drift (that's where Black Scholes PDE comes from), and you get a only local martingale. For it to be a proper martingale you can look at ... 5 The best I have seen so far is William Wheaton's work in this area. I don't know how much is described in his papers but he and Torto created a system that combined factor models for things like local and national price indexes with specific economics of commercial real estate ventures (such as balloon payments on construction milestones and the like). The ... 5 Apparently yes, (I haven't verified the math but have no reason to doubt it). For this simple case you can find a closed form in the following paper: Jeff A. BILMES: What HMM can do The closed form is given on part 4.4 of the paper but the whole thing is worth reading as it clearly shows the main properties of these models. You can also note that ... 5 Okay so I'll take Jase answer and format it properly so that it answers your question and it will be useful for users in the future. For clarity, let me restate the dynamics of the Modified Ornstein-Uhlenbeck model using the more common notation:$$dS_t = \theta (\mu-S_t)dt + \sigma S_t dW_t$$This blog post provides a closed form solution:$$ S_t = S_0 ...
5
For completeness, let's restate that the discrete case goes like this: $$\Delta S_t = S_{t+\Delta t}- S_t = \mu S_t \Delta t + \sigma \sqrt{\Delta t} Z_t$$ with $Z_t \sim \mathcal{N}(0,1)$. What you are doing in your case (although there is a typo in your formula) is to use the exact solution of the SDE to model the move between two points of $S$. ...
5
The convexity of the exponential function of the stochastic variable $W$ makes its expectation greater than the exponentiation of the expectation of $W$. This is an example of Jensen's inequality, $E[e^{\sigma W}]> e^{\sigma E[W]}=1$. $\sigma$ can be interpreted as the magnitude of the convexity of the exponential function. This can be seen by Taylor ...
5
An AR(1), once the time series and lags are aligned and everything is set-up, is in fact a standard regression problem. Let's look, for simplicity sake, at a "standard" regression problem. I will try to draw some conclusions from there. Let's say we want to run a linear regression where we want to approximate $y$ with $$h_(x) = \sum_0^n \theta_i x_i = ... 5$$ \textbf{Preface} $$I am assuming log normal asset but this is not clear from the question? Or rather I have misinterpreted the question! Well as I see it from a a purely mathematical exercise$$ d\left(\dfrac{S_t}{M_t}\right) =\frac{1}{M_t}dS_t - \frac{S_t}{M_t^2}dM_t +O(dt^2) $$using Ito's lemma. Then we can sub in the original processes yields ... 4 I think a simple solution is to try to construct a Brownian motion W_t through known points (e.g., W_0 = W_1 = 0); it is also known as a Brownian Bridge [ http://en.wikipedia.org/wiki/Brownian_bridge ]. See also question 3 in http://www.math.nyu.edu/faculty/goodman/teaching/StochCalc2012/assignments/assignment4.pdf . 4 Check out these resources: The book Levy Processes in finance. This paper basically enabling you to use any distribution for asset prices: Option Valuation Using the Fast Fourier Transform 4 I believe your problem is that you're assuming all Lévy processes are stable with exponent 2. Here is what happens if we try to use your argument: Let X be a Lévy process (that is a martingale, for simplicity). At time t, for any N, we have$$ X_t \sim\sum_{i=1}^N X^i \left(\frac{t}{N}\right), $$with each X^i \left(\frac{t}{N}\right) i.i.d. and ... 4 Hi here are my two cents, It is true that BSDE's framework represents a very powerful theoretical tool to attack abstract problems in mathematical finance. Nevertheless to my knowledge they are very rarely used in practice for at least three reasons. First they are very "unnatural" in their expression (integrating in the future in time and still being ... 4 The actual problem one solves for American options is an optimal stopping time problem, so the value of the option is$$ V_0 = \max_\tau E_{\tau}\left[e^{-r \tau} (S_\tau-K)^+ \right] $$where the maximum is taken over all stopping times (exercise strategies \tau>0 permissible in the contract). With a PDE operator such as you have, the instantaneous ... 4 Note that you can understand the \Delta as an "operator" acting on r. So just act on r twice:$$\Delta^2 r_t = r_t - 2 r_{t-1} + r_{t-2}. $$In fact if you write the r as a vector, r = (r_1, r_2, \ldots, r_N), then \Delta is an N\times N matrix with elements \Delta_{i,j} = \delta_{i,j} - \delta_{i-1,j}. The AR(2) model can be written as ... 4 Orthogonality and independence are different concepts. The concepts are the same for Wiener processes because in the context of normal random variables, independence is equivalent to orthogonality (i.e. uncorrelatedness) Independence is the standard definition for probability. Let \mathcal{F}, \mathcal{G} be the sigma algebras generated by two ... 4 For a basic introduction, the three chapters in Hull's Options, Futures, and Other Derivatives on Binomial Trees, Wiener Processes and Ito's Lemma, and The Black-Scholes-Merton Model helped me start to understand the basic concepts within a broader context. After that, Shreve's two books seems to be pretty popular (see here and here). He explains things ... 3 This paper seems to outline what you are looking for. You want to be careful about mean/variance/kurtosis to make sure you are working in the correct measure. 3 The standard method to manage your kind of problem (i.e. dealing with stochastic processes that are note presented or built thanks to a Brownian motion) is to use a measure change. The power of Brownian motion is that you have a lot of representation theorems (Doob-Meyer theorem, Wold theorem, etc) that allows to (thanks to a change of measure or a ... 3 Just following Musiela Rutkowski (the link redirects to Amazon). The risk neutral measure is derived form imposing that the present value of a self financed portfolio (i.e.; no infusion or withdraw of money) is a martingale. A portfolio can be seen as a stochastic process where its value at time t is given by$$ V_t = \phi^0_tP_t + \phi^1_tS_t\ , ...
3
An insurer might model the filing of claims as a Poisson process, but the cumulative amount of the claims as a compound Poisson process. As an example, suppose a company has issued a large large number of auto liability policies that are geographically dispersed and have identical limits and driver risk profiles. The incidence of claims being made by ...
3
In fact there is an exhaustive paper on this issue available now: "The Trend is not Your Friend! Why Empirical Timing Success is Determined by the Underlying’s Price Characteristics and Market Efficiency is Irrelevant" by Peter Scholz and Ursula Walther, Frankfurt School Working Paper, CPQF No. 29, 2011 Fascinating read - highly recommended!
Only top voted, non community-wiki answers of a minimum length are eligible | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8442350029945374, "perplexity": 657.7896931889815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826016.5/warc/CC-MAIN-20140820021346-00090-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.groundai.com/project/exponential-weights-on-the-hypercube-in-polynomial-time/ | Exponential Weights on the Hypercube in Polynomial Time
# Exponential Weights on the Hypercube in Polynomial Time
## Abstract
We address the online linear optimization problem when the decision set is the entire hypercube. It was previously unknown if it is possible to run the exponential weights algorithm on this decision set in polynomial time. In this paper, we show a simple polynomial time algorithm which is equivalent to running exponential weights on . In the Full Information setting, we show that our algorithm is equivalent to both Exp2 and Online Mirror Descent with Entropic Regularization. This enables us to prove a tight regret bound for Exp2 on . In the Bandit setting, we show that our algorithm is equivalent to both Exp2 and OMD with Entropic Regularization as long as they use the same exploration distribution. In addition, we show a reduction from the hypercube to the hypercube for the full information and bandit settings. This implies that we can also run exponential weights on in polynomial time, addressing the problem of sampling from the exponential weights distribution in polynomial time, which was left as an open question in bubeck2012towards.
Exponential Weights on the Hypercube in Polynomial TimeSudeep Raja Putta
\editor{keywords}
Online Learning, Bandits, Exponential Weights, Hedge, Online Mirror Descent, EXP2, Online Stochastic Mirror Descent
## 1 Introduction
In this paper, we consider the Online Linear Optimization framework when the decision set is . This framework is also referred to as Online Combinatorial Optimization. It proceeds as a repeated game between a player and an adversary. At each time instance, the player chooses an action from the decision set , possibly using some internal randomization. Simultaneously, the adversary chooses a loss vector , without access to the internal randomization of the player. The player incurs the loss . The goal of the player is to minimize the expected cumulative loss . Here the expectation is with respect to the internal randomization of the player(and eventually the adversary’s randomization if it is an adaptive adversary). We use regret as a measure of performance of the player, which is defined as:
Rn=T∑t=1X⊤tlt−minX∈{0,1}nT∑t=1X⊤lt (1)
Since the player’s decision could be the outcome of a randomized algorithm, we consider the expected regret over the randomness in the algorithm. We consider two kinds of feedback models for the player:
1. Full Information setting: At the end of each round , the player observes the loss vector .
2. Bandit setting: At the end of each round , the player only observes the scalar loss incurred
### 1.1 Previous Work
1. Full Information: freund1997decision considered the problem of learning with experts under full information and introduced the Hedge algorithm. The online combinatorial optimization problem was introduced by kalai2005efficient for the full information setting. Several works have studied this problem for specific kinds of decision sets. koolen2010hedging introduce the Component Hedge algorithm for online learning of m-sets, spanning trees, shortest paths and truncated permutations. Their algorithm is similar to Online Mirror Descent(OMD) with Entropic regularization. In this paper, our decision set consists of the entire hypercube and we consider linear losses. hazan2012online also consider the entire hypercube in the case of submodualar losses, which are more general.
2. Bandit Information: Online Linear optimization with bandit feedback was first studied by awerbuch2004adaptive and mcmahan2004online who obtained suboptimal regret bounds(in terns of ). dani2008price were the first to achieve the optimal regret bound in using their Geometric Hedge algorithm, which is similar in spirit to the Exp2 algorithm. Several improvements to the exploration strategy of Geometric Hedge have also been proposed. In the specific case of Online Combinatorial optimization under bandit feedback, cesa2012combinatorial propose ComBand for several combinatorial structures. bubeck2012towards propose Exp2 with John’s Exploration as well as Online Stochastic Mirror Descent(OSMD) with Entropic regularization which are shown to achieve optimal regret in terms of and .
See the books by cesa2006prediction, bubeck2012regret, shalev2012online, hazan2016introduction and lectures by rakhlin2009lecture, bubeck2011introduction for a comprehensive survey of online learning.
In particular, we refer to the a statement in bubeck2012towards. They consider the problem of online linear optimization on decision set with Bandit feedback. They state that it is not known if it is possible to sample from the exponential weights distribution in polynomial time for this particular set of actions. Hence, they turn to using OSMD with entropic regularization for this action set. Over the course of this paper, we show a simple way to sample from and update the exponential weights distribution in polynomial time for the hypercube, under linear losses in both full information and bandit feedback.
### 1.2 Our Contributions
For most of this paper, we consider the hypercube as our decision set. Towards the end, we show how to transform the problem to if the decision set is the hypercube. Our contributions are:
1. In the Full information setting, we propose a polynomial time algorithm PolyExp, which is equivalent to running Exp2 (which takes exponential time).
2. We show that OMD with Entropic regularization is equivalent to PolyExp. We use OMD’s analysis to derive a regret bound for PolyExp. This naturally implies a regret bound for Exp2. This bound is tighter than previously known bounds for this problem.
3. In the Bandit setting, we show that PolyExp, Exp2 and OSMD with Entropic regularization are equivalent if they use they same exploration distribution.
4. Finally, we show how to reduce the problem on to for both the Full information and Bandit settings. This solves the open problem in bubeck2012towards about being able to run exponential weights on in polynomial time.
## 2 Full Information
### 2.1 Exp2
This algorithm is equivalent to Hedge on experts using the losses . Since it explicitly maintains a probability distribution on experts, the running time is exponential. The expected regret of Exp2 is as follows: {restatable}theoremHedgeRegret In the full information setting, using , Exp2 attains the regret bound:
E[Rn]≤n3/2√2Tlog2
### 2.2 PolyExp
The sampling step and the update step in Exp2 are both exponential time operations. This is because Exp2 explicitly maintains a probability distribution on objects. To get a polynomial time algorithm, we replace the sampling and update steps with polynomial time operations. The PolyExp algorithm is as follows:
PolyExp uses parameters represented by the vector . Each element of corresponds to the mean of a Bernoulli distribution. It uses the product of these Bernoulli distributions to sample and uses the aforementioned update equation to obtain
### 2.3 Equivalence of Exp2 and PolyExp
We prove that running Exp2 is equivalent to running PolyExp. {restatable}theoremEquivAt round , The probability that PolyExp chooses is where This is equal to the probability of Exp2 choosing at round , ie:
n∏i=1(xi,t)Xi(1−xi,t)(1−Xi)=exp(−η∑t−1τ=1X⊤lτ)∑Y∈{0,1}nexp(−η∑t−1τ=1Y⊤lτ)
At every round, since the probability distribution of Exp2 and PolyExp over the decision set is the same, they have the same regret bound, with the added advantage of PolyExp being a polynomial time algorithm. Lemma Appendix A. Proofs is crucial in proving equivalence between the two algorithms. In a strict sense, Lemma Appendix A. Proofs holds only because our decision set is the entire hypercube.
### 2.4 Equivalence of PolyExp and OMD with Entropic Regularization
We introduce the OMD algorithm and show that OMD with the Entropic Regularizer for is equivalent to PolyExp. The regularizer for this domain is:
F(x)=n∑i=1xilogxi+(1−xi)log(1−xi)
Here is the Bregman divergence of . {restatable}theoremEqOMD The sampling procedure of PolyExp satisfies . Moreover the update of OMD with Entropic Regularization is , same as PolyExp. This implies that Exp2 is also equivalent to OMD with Entropic Regularization. This statement is not true in general. The only other instance when these two are equivalent is when the decision set is the probability simplex.
### 2.5 Regret of PolyExp via OMD analysis
Since OMD with Entropic Regularization and PolyExp are equivalent, we can use the standardized analysis tools of OMD to derive a regret bound for PolyExp. {restatable}theoremRegPoly In the full information setting, using , PolyExp attains the regret bound:
E[Rn]≤2n√Tlog2
This implies a better regret bound for Exp2. This is a very surprising result as we were able to improve Exp2’s regret by its equivalence to OMD and not by directly analyzing Exp2’s regret.
## 3 Bandit Setting
Algorithms for full information setting can be modified for the bandit setting. The general strategy for the bandit setting is as follows:
where . Here is the distribution used by the underlying algorithm (either Exp2, PolyExp or OMD). is the exploration distribution and is the mixing coefficient. Playing from is necessary in order to make sure that is invertible and also to lower bound the smallest eigenvalue of . The loss estimate is then used to update the underlying algorithm.
When using PolyExp in the bandit setting, the we can sample from by sampling from with probability and sampling from with probability . When using John’s exploration, is supported by at most points. So sampling can be done in polynomial time. We have , where the matrix has elements and for all . The matrix can be pre-computed before the first round. Hence, we can calculate in polynomial time.
{restatable}
theoremBanditeq In the bandit setting, PolyExp is equivalent to Exp2 and Online Mirror Descent with Entropic regularization when the exploration distribution used by the three algorithms is the same. Hence, PolyExp with John’s exploration can be run in polynomial time on . Moreover, since it is equivalent to Exp2 with John’s exploration, it also has optimal regret.
## 4 {−1,+1}n Hypercube Case
Full information and bandit algorithms which work on can be modified to work on . The general strategy is as follows:
{restatable}
theoremHypereq Playing using Exp2 directly on losses is equivalent to sampling using Exp2, PolyExp or OMD and playing on losses
Hence, using the above strategy, PolyExp can be run in polynomial time on . Because of the equivalences we have established in the previous sections, we have that PolyExp achieves optimal regret in full information case and PolyExp with John’s exploration achieves optimal regret in bandit case on
## 5 Conclusions
In this paper, we give a principled way of running the exponential weights algorithm on both the hypercube and hypercube for both full information and bandit settings using the PolyExp algorithm. We do this by cleverly decomposing the exponential weight distribution as a product of Bernoulli distributions. We also show equivalences to the Exp2 algorithm and OMD with entropic regularization, which are known to achieve the optimal regret, hence establishing the optimality of PolyExp.
## Appendix A. Proofs
{lemma}
(see hazan2016introduction, Theorem. 1.5) The Exp2 algorithm satisfies for any :
T∑t=1p⊤tLt−T∑t=1Lt(X⋆)≤ηT∑t=1p⊤tL2t+nlog2η
\HedgeRegret
*
###### Proof.
Using and applying expectation with respect to the randomness of the player to definition of regret(equation 1), we get:
E[Rn]=T∑t=1∑X∈{0,1}npt(X)Lt(X)−minX⋆∈{0,1}nT∑t=1Lt(X⋆)=T∑t=1p⊤tLt−minX⋆∈{0,1}nT∑t=1Lt(X⋆)
Applying Lemma Appendix A. Proofs, we get . Since for all , we get . Optimizing over the choice of , we get the desired regret bound. ∎
{lemma}
We have that
n∏i=1(1+exp(−ηt−1∑τ=1li,τ))=∑Y∈{0,1}nexp(−ηt−1∑τ=1Y⊤lτ)
###### Proof.
Consider . It is a product of terms, each consisting of terms, and . On expanding the product, we get a sum of terms. Each of these terms is a product of terms, either a or . If it is , then and if it is , then . So,
n∏i=1(1+exp(−ηt−1∑τ=1li,τ)) =∑Y∈{0,1}nn∏i=1exp(−ηt−1∑τ=1li,τ)Yi =∑Y∈{0,1}nn∏i=1exp(−ηt−1∑τ=1li,τYi) =∑Y∈{0,1}nexp(−ηn∑i=1t−1∑τ=1li,τYi) =∑Y∈{0,1}nexp(−ηt−1∑τ=1Y⊤lτ)
\Equiv
*
###### Proof.
The proof is via straightforward substitution of the expression for .
n∏i=1(xi,t)Xi(1−xi,t)(1−Xi) =n∏i=1(11+exp(η∑t−1τ=1li,τ))Xi(exp(η∑t−1τ=1li,τ)1+exp(η∑t−1τ=1li,τ))1−Xi =n∏i=1(exp(−η∑t−1τ=1li,τ)1+exp(−η∑t−1τ=1li,τ))Xi(11+exp(−η∑t−1τ=1li,τ))1−Xi =n∏i=1exp(−η∑t−1τ=1li,τ)Xi1+exp(−η∑t−1τ=1li,τ) =∏ni=1exp(−η∑t−1τ=1li,τ)Xi∏ni=1(1+exp(−η∑t−1τ=1li,τ)) =∏ni=1exp(−η∑t−1τ=1li,τXi)∏ni=1(1+exp(−η∑t−1τ=1li,τ)) =exp(−η∑ni=1∑t−1τ=1li,τXi)∏ni=1(1+exp(−η∑t−1τ=1li,τ)) =exp(−η∑t−1τ=1X⊤li,τ)∏ni=1(1+exp(−η∑t−1τ=1li,τ))
The proof is complete by applying Lemma Appendix A. Proofs
\EqOMD
*
###### Proof.
It is easy to see that . Hence .
Finding and using it in the condition , we get
logyi,t+11−yi,t+1 =−ηt∑τ=1li,τ yi,t+11−yi,t+1 =exp(−ηt∑τ=1li,τ) yi,t+1 =11+exp(η∑tτ=1li,τ)
Since is always in , the Bregman projection step is not required. So we have which gives the same update as PolyExp. ∎
{definition}
Bregman Divergence: Let be a convex function, the Bregman divergence is:
DF(x∥y)=F(x)−F(y)−∇F(y)⊤(x−y)
{definition}
Fenchel Conjugate: The Fenchel conjugate of a function is:
F⋆(θ)=maxxx⊤θ−F(x)
{lemma}
(see bubeck2012regret, Theorem. 5.6) For any , OMD with regularizer with effective domain and is differentiable on satisfies:
T∑t=1x⊤tlt−T∑t=1x⊤lt≤F(x)−F(x1)η+1ηT∑t=1DF⋆(−ηt∑τ=1lτ∥−ηt−1∑τ=1lτ)
{lemma}
The Fenchel Conjugate of is:
F⋆(θ)=n∑i=1log(1+exp(θi))
###### Proof.
Differentiating wrt and equating to :
θi−logxi+log(1−xi) =0 xi1−xi=exp(θi) xi=11+exp(−θi)
Substituting this back in , we get . It is also straightforward to see that
{lemma}
For any , OMD with entropic regularizer satisfies:
T∑t=1x⊤tlt−T∑t=1x⊤lt≤nlog2η+ηT∑t=1xTtl2t
###### Proof.
We start from Lemma Appendix A. Proofs. Using the fact that , we get . Next we bound the Bregmen term using Lemma Appendix A. Proofs
DF⋆(−ηt∑τ=1lτ∥−ηt−1∑τ=1lτ)=F⋆(−ηt∑τ=1lτ)−F⋆(−ηt−1∑τ=1lτ)+ηl⊤t∇F⋆(−ηt−1∑τ=1lτ)
In the last term, . So the last term is . The first two terms can be simplified as:
F⋆(−ηt∑τ=1lτ)−F⋆(−ηt−1∑τ=1lτ) =n∑i=1log1+exp(−η∑tτ=1lτ,i)1+exp(−η∑t−1τ=1lτ,i) =n∑i=1log1+exp(η∑tτ=1lτ,i)exp(ηlt,i)(1+exp(ηt−1∑τ=1lτ,i))
Using the fact that :
=n∑i=1logxt,i+(1−xt,i)exp(ηlt,i)exp(ηlt,i) =n∑i=1log(1−xt,i+xt,iexp(−ηlt,i))
Using the inequality:
≤n∑i=1log(1−ηxt,ilt,i+η2xt,il2t,i)
Using the inequality:
≤−ηx⊤tlt+η2x⊤tl2t
The Bregman term can be bounded by Hence, we have:
T∑t=1x⊤tlt−T∑t=1x⊤lt≤nlog2η+ηT∑t=1xTtl2t
\RegPoly
*
###### Proof.
Applying expectation with respect to the randomness of the player to definition of regret(equation 1), we get:
E[Rn]=E[T∑t=1X⊤tlt−minX⋆∈{0,1}nT∑t=1X⋆⊤lt]=T∑t=1x⊤tlt−minX⋆∈{0,1}nT∑t=1X⋆⊤lt
Applying Lemma Appendix A. Proofs, we get . Using the fact that , we get . Optimizing over the choice of , we get the desired regret bound. ∎
\Banditeq
*
###### Proof.
The bandit setting differs from the full information case in two ways. First, we sample from the distribution , which is formed by mixing with the distribution using a mixing coefficient . Second, an estimate of the loss vector is formed via one-point linear regression. This vector is used to update the algorithm. In the full information case, we have already established that the update of Exp2, PolyExp and OMD with entropic regularization are equivalent. As the same exploration distribution is used, will be the same for the three algorithms. Since is used in the update, will be the same for the three algorithms. Hence, even in the bandit setting, these algorithms are equivalent. ∎
{lemma}
Exp2 on with losses is equivalent to Exp2 on with losses and using the map to play on .
###### Proof.
Consider the update equation for Exp2 on
pt+1(Z)=exp(−η∑tτ=1Z⊤lτ)∑W∈{−1,+1}nexp(−η∑tτ=1W⊤lτ)
Using the fact that every can be mapped to a using the bijective map . So:
pt+1(Z) =exp(−η∑tτ=1(2X−1)⊤lτ)∑Y∈{0,1}nexp(−η∑tτ=1(2Y−1)⊤lτ) =exp(−η∑tτ=1X⊤(2lτ))∑Y∈{0,1}nexp(−η∑tτ=1Y⊤(2lτ))
This is equivalent to updating the Exp2 on with the loss vector . ∎
\Hypereq
*
###### Proof.
We sample from in full information and in bandit setting. Then we play . So we have that . In the full information setting, we get . But in bandit setting, we need to find , where . Since , this can be transformed to . Since is used in full information case and is used in the bandit case to update the algorithm, by Lemma Appendix A. Proofs we have that . Hence the equivalence. ∎
202861 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9596045017242432, "perplexity": 828.5311920935457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512750.15/warc/CC-MAIN-20181020121719-20181020143219-00481.warc.gz"} |
http://b4winckler.wordpress.com/2010/08/07/using-the-conceal-vim-feature-with-latex/ | # Using the conceal Vim feature with LaTeX
Vim 7.3 has just been released and with it comes the “conceal” feature (you can download MacVim 7.3 here). One neat application of this feature is that when editing LaTeX files certain backslash commands are replaced by their corresponding Unicode glyph. This is what I am talking about:
To help get you on your way there are a few things you need to know in order to get started with the conceal feature. First of all you need to enable it by typing :set cole=2. You’ll immediately notice lots of grey on grey characters…uh, what? This is the (unfortunate) default syntax coloring for concealed items. To fix it you have to change the Conceal highlight group, e.g. try :hi Conceal guibg=white guifg=black (reverse the colors if you are using a dark color scheme). After fiddling a bit with the colors to match your color scheme you are ready to go!
However, I have found that concealed superscripts and subscripts often do not look very good and fortunately there is a way to disable them. Namely by adding the line let g:tex_conceal="adgm" to your ~/.vimrc file (it also works to put this line in ~/.vim/ftplugin/tex.vim as mentioned below). The g:tex_conceal variable is a string of one-character flags:
a = conceal accents/ligatures
d = conceal delimiters
g = conceal Greek
m = conceal math symbols
s = conceal superscripts/subscripts
Thus "adgm" means conceal everything except superscripts and subscripts. (I did not mention accents/ligatures earlier but "a" does what you’d expect: for example, \"a and \ae will turn into ä and æ respectively, if accents are enabled.)
The conceal support for editing tex files is still in its early stages and you may come across commands that do not get concealed, or perhaps you have some custom LaTeX commands that you would like to add to the list of concealed commands. In either case you should edit the file ~/.vim/after/syntax/tex.vim (create the folders and the file if they don’t exist). Say you would like \eps to render as ε, then add this line:
syn match texGreek '\\eps\>' contained conceal cchar=ε
Mathematics commands should be added to the texMathSymbol group. For example, if you want \arr to be concealed by ←, then add this line:
syn match texMathSymbol '\\arr\>' contained conceal cchar=←
If you find standard LaTeX commands that should be concealed but aren’t, please notify the tex syntax file author so that he may add them (you can find the contact details by looking at the syntax file :tabe $VIMRUNTIME/syntax/tex.vim). Finally, I personally edit several different types of files and like to keep separate settings for each file type. The simplest way of doing this is to keep your filetype-specific settings inside ~/.vim/ftplugin/filetype.vim. Here’s an excerpt from my ~/.vim/ftplugin/tex.vim file: " Set colorscheme, enable conceal (except for " subscripts/superscripts), and match conceal " highlight to colorscheme colorscheme topfunky-light set cole=2 let g:tex_conceal= 'adgm' hi Conceal guibg=White guifg=DarkMagenta Some of the relevant help files on this topic are :h 'cole, :h 'cocu, and :h conceal. I should also mention :h 'ambw; it may be helpful to set this to double if you find that some Unicode glyphs “spill over” into the neighboring display cell. About these ads ## 16 thoughts on “Using the conceal Vim feature with LaTeX” 1. Pretty Neat :) 2. Thanks for the documentation of this really neat feature, Björn! It really works like a charm. One small issue, though, which I had with my installation was that g:tex_conceal was ignored when I defined it in the ftplugin/tex.vim file. It appears that when this file was read, the syntax file was already loaded (and used its default setting ‘admgs’. Defining g:tex_conceal in the vimrc then worked perfect. • I guess it is safest to put the g:tex_conceal in ~/.vimrc then. This is what I had done initially too, but it seemed nicer to keep it in ~/.vim/ftplugin/tex.vim and since that worked for me I went with it. I have to read up on the order in which Vim reads configuration files I guess… :-) 3. Hi, Björn! I downloaded and installed MacVim version 7.3 build 53 for PowerPC+x86 for Mac OS X 10.5 ‘Leopard,’ and I just have to say it is simply awesome! If only all software were this good. Although this is a little selfish, I hope you’ll continue to maintain a PowerPC (or at least Universal Binary) version of MacVim for Mac OS X 10.5 ‘Leopard’ for the sake of people still using PowerPC-based Macs like myself, as Mac OS X 10.5 is the last version of Mac OS X that runs on PowerPC machines. MacVim is the only text editor I’ve found (and believe me, I’ve searched far and wide) that makes editing mixed Hebrew-Arabic-Russian-English LaTeX and XHTML documents anywhere close to being tolerable. By the way, I came across the Readers’ Choice Awards 2009 from the Linux Journal, and I was interested to learn that among Linux Journal readers, the three most widely used text editors were found to be vi (36%) followed by gedit (19%) followed by Kate (11%). Emacs didn’t even make the cut! Keep up the great work! —Austin, Haifa, Israel 4. This is very nice. I find, with macvim Version 7.3 (53), that it is necessary to set the colours “manually” by doing a “:” command, however. I’m not sure why the colours are not set by my ~/.vimrc file, which I am pasting below, in case it’s of interest.$ cat ~/.vimrc
set gfn=Monaco:h13
set tw=70
set cole=2
hi Conceal guibg=Black guifg=White
5. Pingback: Vim, LaTeX, and ‘conceal’ « PhilTeX
• That’s the default MacVim font which is called Menlo (based on DejaVu, which is based on Bitstream Vera).
6. thanks for this, I read this when you first posted it but I wasn’t really using tex then.
7. Very cool, thank you for writing this up!
For terminal users, the magic sauce is hi Conceal ctermbg=White ctermfg=Black
8. Many thanks for the interesting blog!
I added a couple of personal conceal rules to my config files, but struggled with a few. Can anybody help with these?
I would like to replace ‘\{‘ by ‘{‘ and ‘\}’ by ‘}’. Also ‘\item’ by ‘•’. Finally I would like to replace ‘\,’ and ‘\;’ by ‘ ‘, i.e. an empty space (works fine with \quad and \qquad).
Cheers,
Max
9. Great, thanks! I was using Latex pretty much lately and was hoping to see this functionality (like in Emacs, although it is not so powerful yet)
10. Great hint! I often wondered if there is a way to optically simplify my crude formulas. Here it is. Thanks!
11. If you want your concealed characters look like the mathmode, try hi clear Conceal and then hi link Conceal texMath | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8946877717971802, "perplexity": 3493.482056238229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/what-will-happen-in-the-following-scenario.168186/ | # What will happen in the following scenario?
1. Apr 29, 2007
### james1234567890
I am new to quantum mechanics and I am not much familiar with it. I have a doubt. Consider a scenario in which a photon is passed through an apparatus which is partly mirror and partly lens. The wave function of the photon is such that there is 75% chance of photon hitting the mirror thereby geting reflected back to the same side of the apparatus and 25% chance of photon hitting the lens thereby passing to the other side of the apparatus. My question is what will happen to the wave function of photon after interacting with the apparatus. Will the wave function be split in to either sides of the apparatus such that 75% of photon corresponds to one side of apparatus and 25% of photon corresponds to other side? Suppose we measure the photon after the interaction, will the photon be detected on one side of apparatus 75% of times and on the other side of apparatus 25% of times? In other words, will photon be measured to exist in different sides of the apparatus at diffeent times. If the same photon is measured after 1 year, will the photon be detected to exist at different locations which are 2 lightyears apart at different instants? Is there any misconception here? Please clarify my doubt. Thanks in advance.
Last edited: Apr 29, 2007
2. Apr 30, 2007
### DrChinese
To answer your question generally: the wave function is split as you imagine (25% & 75%) until the photon is actually determined to be one place or another. Once it is "narrowed down" then the collapse postulate applies - and it will be usually 100% one side or the other. A single photon will NOT be detected to exist in 2 places at once.
3. Apr 30, 2007
### james1234567890
Thanks for your reply. But suppose we try to detect the photon the second time, is it certain that it will be detected on the same side of the apparatus as the first time, or is it governed by probability i.e. 75% of times, it will be detected on one side of apparatus and 25% of times, it will be detected on the other side? In other words, once the photon is detected to be on one side of apparatus, is it a guarantee that it will be detected on the same side of apparatus for all the subsequent observations or is it probabilistic with different observations giving different positions for photon with respect to the apparatus?
4. Apr 30, 2007
### NeoDevin
Once you have detected it once, you have collapsed the wave function, it now lies 100% on the side you first detected it on. Any further measurements will show it on the same side.
5. Apr 30, 2007
### james1234567890
Thanks a lot for the clarification. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8392594456672668, "perplexity": 380.89367870731235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541066.79/warc/CC-MAIN-20161202170901-00236-ip-10-31-129-80.ec2.internal.warc.gz"} |
http://moodle.kentisd.net/course/index.php?categoryid=35 | ### Advanced Placement Chemistry - Carlson
The Advanced Placement Chemistry curriculum is equivalent to a college course usually taken by chemistry majors during their first year of college. Students will be expected to take the Collegeboard AP Chemistry exam at the end of the course.
none needed. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8858416676521301, "perplexity": 3124.619163644009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320003.94/warc/CC-MAIN-20170623045423-20170623065423-00114.warc.gz"} |
http://eprints.imtlucca.it/816/ | # DNA microarray image intensity extraction using Eigenspots
Tsaftaris, Sotirios A. and Ahuja, Ramandeep and Shiell, Derek and Katsaggelos, Aggelos K. DNA microarray image intensity extraction using Eigenspots. In: International conference on image processing. IEEE, VI -265 . ISBN 978-1-4244-1437-6 (2007)
Full text not available from this repository.
## Abstract
DNA microarrays are commonly used in the rapid analysis of gene expression in organisms. Image analysis is used to measure the average intensity of circular image areas (spots), which correspond to the level of expression of the genes. A crucial aspect of image analysis is the estimation of the background noise. Currently, background subtraction algorithms are used to estimate the local background noise and subtract it from the signal. In this paper we use principal component analysis (PCA) to de-correlate the signal from the noise, by projecting each spot on the space of eigenvectors, which we term eigenspots. PCA is well suited for such application due to the structural nature of the images. To compare the proposed method with other background estimation methods we use the industry standard signal-to-noise metric xdev.
Item Type: Book Section 10.1109/ICIP.2007.4379572 DNA microarray image intensity extraction; Principal Component Analysis; background subtraction algorithms; eigenspots; eigenvectors; image analysis;l ocal background noise; organisms; DNA; eigenvalues and eigenfunctions; image processing; medical image processing; principal component analysis Q Science > QA Mathematics > QA76 Computer softwareQ Science > QH Natural history > QH426 Genetics Computer Science and Applications Users 35 not found. 12 Aug 2011 10:23 05 Mar 2013 15:47 http://eprints.imtlucca.it/id/eprint/816 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012715578079224, "perplexity": 4215.589644113074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215261.83/warc/CC-MAIN-20180819165038-20180819185038-00644.warc.gz"} |
https://tex.stackexchange.com/questions/161519/xelatex-problem-with-a-vector-arrow/161522 | XeLaTeX problem with a vector arrow
I am trying to write $\overrightarrow{V}_{AB}$ with the vector over only V, but when I do this then the subscript AB goes far from V. On the other hand when I use the vector over the V and AB it seems wrong and not so beautiful. What am I doing wrong? Thank you very much!
• Any reason against $\vec{V}_{AB}$? – Thorsten Donig Feb 20 '14 at 17:54
• I didn't know that command! Thank you! – Adam Feb 20 '14 at 17:57
• Then you should read some introductory material like the Dickimaw LaTeX Books. – Thorsten Donig Feb 20 '14 at 17:59
• I will consider them. – Adam Feb 20 '14 at 18:01
If you use unicode-math along with XeLaTeX, there's no difference in what's printed with or without the arrow over the V; only the arrow length changes if you use \vec or \overrightarrow:
\documentclass{article}
\usepackage{unicode-math}
\linespread{1.05} % if you have arrows over capital letters
\begin{document}
$V_{AB}$ \sbox0{$V_{AB}$}\the\wd0
$\vec{V}_{AB}$ \sbox0{$\vec{V}_{AB}$}\the\wd0
$\overrightarrow{V}_{AB}$ \sbox0{$\overrightarrow{V}_{AB}$}\the\wd0
\end{document}
The \sbox commands are just to print the width of the material, showing that the widths are the same.
Of course, the letter shapes in this case leave a hole, which should be corrected visually. It's a case similar to $\sqrt{\log x}$, where adding a thin space is better
$\sqrt{\,\log x}$
or $x^2/2$ where a negative thin space is recommended
$x^2\!/2$
Here's the realizations, left without the correction, right with the correction:
So, in your case, I'd suggest
\documentclass{article}
\usepackage{unicode-math}
\linespread{1.05} % if you have arrows over capital letters
\begin{document}
$V_{\!AB}$
$\vec{V}_{\!AB}$
$\overrightarrow{V}_{\!AB}$
\end{document}
I would definitely not recommend using \overrightarrow without unicode-math, as the result is appalling awful
and I'm not referring to the space between the variable and the subscript, but to the size of the arrow, which is too large.
Welcome to TeX.se! :-)
Try this: $\overrightarrow{V}_{\!AB}$.
The \! inserts negative horizontal spacing.
• I tried it and it didn't work. Thanks for the answer though. – Adam Feb 20 '14 at 17:39
• @Adam: How strange; it should. Try multiple \! in a row, e.g. $\overrightarrow{V}_{\!\!\!\!AB}$ just to see if it makes a difference. If not, please put your full LaTeX code in your original question. – mhelvens Feb 20 '14 at 17:43
• That worked!Why there was that problem though? Shouldn't without these corrections work the first time? – Adam Feb 20 '14 at 17:45
• \overrightarrow creates a \vbox: a rectangle which, by default, does not overlap with other boxes. Such boxes can have visible content anywhere inside (or even outside) their bounds, so making them overlap will often be the wrong thing to do. Unfortunately, (La)TeX cannot take every possible situation into account. That's why there are ways to make manual corrections. – mhelvens Feb 20 '14 at 17:51
The problem is that \overrightarrow produces a box. When a subscript is added to a box, it treats it as a rectangle, and can't see what's inside. When a subscript is added to a character, it sees the italic correction of that character (roughly proportional to the amount of slant) and compensates for it. The easy work-around is to insert negative space, with the amount based on trial and error. The difficult way is to try to measure the necessary amount of negative space automatically:
\documentclass{article}
\usepackage{amsmath}
\makeatletter
\def\overrightarrowwithsubscript#1{\mathpalette{\@oraws{#1}}}
\def\@oraws#1#2#3{%
\begingroup
\setbox0=\hbox{$#2{{#1}_{#3}}$}%
\setbox2=\hbox{$#2{\overrightarrow{#1}_{#3}}$}%
\overrightarrow{#1}_{\hskip-\@tempdima #3}
\endgroup}
\begin{document}
$\overrightarrow{V}_{AB} \qquad \overrightarrowwithsubscript{V}{AB}$
\bigskip
$\frac{\overrightarrow{V}_{AB}}{\overrightarrow{V}_{AB}}\qquad \frac{\overrightarrowwithsubscript{V}{AB}}{\overrightarrowwithsubscript{V}{AB}}$
\end{document}
You can use the esvect package: it manages the subscript with a \vv*command, the arrow doesn't collide with what is underneath and you can choose between eight forms of arrow. Here is an example, to be compared with \overrightarrow:
\documentclass[12pt, a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[f]{esvect}
\begin{document}
$\vv*{V}{AB}\qquad\vv*{V}{\!AB}\qquad \overrightarrow{V}_{AB}$
\end{document} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8672671318054199, "perplexity": 1425.0815529535075}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315811.47/warc/CC-MAIN-20190821065413-20190821091413-00080.warc.gz"} |
https://eccc.weizmann.ac.il/report/2004/039/ | Under the auspices of the Computational Complexity Foundation (CCF)
REPORTS > DETAIL:
### Paper:
TR04-039 | 21st April 2004 00:00
#### On approximation of the maximum clique minor containment problem and some subgraph homeomorphism problems
TR04-039
Authors: Andrzej Lingas, Martin Wahlén
Publication: 2nd May 2004 17:05
We consider the minor'' and homeomorphic'' analogues of the maximum clique problem, i.e., the problems of determining the largest $h$ such that the input graph has a minor isomorphic to $K_h$ or a subgraph homeomorphic to $K_h,$ respectively.We show the former to be approximable within $O(\sqrt {n} \log^{1.5} n)$ by exploiting the minor separator theorem of Plotkin {\em et al.}
Next, we showan $\Omega (n^{1/2 - O(1/(\log n)^{\gamma})})$ lower bound (for some constant $\gamma$, unless $\mathcal{NP} \subseteq \text{ZPTIME}(2^{(\log n )^{O(1)}})),$ and an $O(n\log \log n /\log^{1.5}n)$ upper bound on the approximation factor for maximum homeomorphic clique.Finally, we study the problem of subgraph homeomorphism where the guest graph has maximum degree not exceeding three and low treewidth. In particular, we show that for any graph $G$ on $n$ vertices and a positive integer $q$ not exceeding $n,$ one can produce either $n/q$ approximation to the longest cycle problem and $(n-1)/(q-1)$ approximation to the longest path problem, both in polynomial time, or a longest cycle and a longest path of $G$ in time $2^{O(q\sqrt {n}\log^{2.5} n)}.$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9790770411491394, "perplexity": 476.2869172074583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688103.0/warc/CC-MAIN-20170922003402-20170922023402-00310.warc.gz"} |
https://juliaobserver.com/packages/ColorVectorSpace | 20
6
18
16
# ColorVectorSpace
This package is an add-on to ColorTypes, and provides fast mathematical operations for objects with types such as RGB and Gray. Specifically, with this package both grayscale and RGB colors are treated as if they are points in a normed vector space.
## Introduction
Colorspaces such as RGB, unlike XYZ, are technically non-linear; perhaps the most "colorimetrically correct" approach when averaging two RGBs is to first convert each to XYZ, average them, and then convert back to RGB. Nor is there a clear definition of computing the sum of two colors. As a consequence, Julia's base color package, ColorTypes, does not support mathematical operations on colors.
However, particularly in image processing it is common to ignore this concern, and for the sake of performance treat an RGB as if it were a 3-vector. The role of this package is to extend ColorTypes to support such mathematical operations. Specifically, it defines + and multiplication by a scalar (and by extension, - and division by a scalar) for grayscale and AbstractRGB colors. These are the requirements of a vector space.
If you're curious about how much the "colorimetrically correct" and "vector space" views differ, the following diagram might help. The first 10 distinguishable_colors were generated, and all pairs were averaged. Each box represents the average of the pair of diagonal elements intersected by tracing vertically and horizontally; within each box, the upper diagonal is the "colorimetrically-correct" version, while the lower diagonal represents the "RGB vector space" version.
This package also defines norm(c) for RGB and grayscale colors. This makes these color spaces normed vector spaces. Note that norm has been designed to satisfy equivalence of grayscale and RGB representations: if x is a scalar, then norm(x) == norm(Gray(x)) == norm(RGB(x, x, x)). Effectively, there's a division-by-3 in the norm(::RGB) case compared to the Euclidean interpretation of the RGB vector space. Equivalence is an important principle for the Colors ecosystem, and violations should be reported as likely bugs.
## Usage
using ColorTypes, ColorVectorSpace
For the most part, that's it; just by loading ColorVectorSpace, most basic mathematical operations will "just work" on AbstractRGB, AbstractGray (Color{T,1}), TransparentRGB, and TransparentGray objects. (See definitions for the latter inside of ColorTypes).
However, there are some additional operations that you may need to distinguish carefully.
### Multiplication
Grayscale values are conceptually similar to scalars, and consequently it seems straightforward to define multiplication of two grayscale values. RGB values present more options. This package supports three different notions of multiplication: the inner product, the hadamard (elementwise) product, and the tensor product.
julia> c1, c2 = RGB(0.2, 0.3, 0.4), RGB(0.5, 0.3, 0.2)
(RGB{Float64}(0.2,0.3,0.4), RGB{Float64}(0.5,0.3,0.2))
julia> c1⋅c2 # \cdot # or dot(c1, c2)
0.09000000000000001
# This is equivelant to mapc(*, c1, c2)
julia> c1⊙c2 # \odot # or hadamard(c1, c2)
RGB{Float64}(0.1,0.09,0.08000000000000002)
julia> c1⊗c2 # \otimes # or tensor(c1, c2)
RGBRGB{Float64}:
0.1 0.06 0.04
0.15 0.09 0.06
0.2 0.12 0.08
Note that c1⋅c2 = (c1.r*c2.r + c1.g*c2.g + c1.b*c2.b)/3, where the division by 3 ensures the equivalence norm(x) == norm(Gray(x)) == norm(RGB(x, x, x)).
Ordinary multiplication * is not supported because it is not obvious which one of these should be the default option.
However, * is defined for grayscale since all these three multiplication operations (i.e., ⋅, ⊙ and ⊗) are equivalent in the 1D vector space.
### Variance
The variance v = E((c - μ)^2) (or its bias-corrected version) involves a multiplication, and to be consistent with the above you must specify which sense of multiplication you wish to use:
julia> cs = [c1, c2]
2-element Array{RGB{Float64},1} with eltype RGB{Float64}:
RGB{Float64}(0.2,0.3,0.4)
RGB{Float64}(0.5,0.3,0.2)
julia> varmult(⋅, cs)
0.021666666666666667
julia> varmult(⊙, cs)
RGB{Float64}(0.045,0.0,0.020000000000000004)
julia> varmult(⊗, cs)
RGBRGB{Float64}:
0.045 0.0 -0.03
0.0 0.0 0.0
-0.03 0.0 0.02
### abs and abs2
To begin with, there is no general and straightforward definition of the absolute value of a vector. There are roughly two possible definitions of abs/abs2: as a channel-wise operator or as a function which returns a real number based on the norm. For the latter, there are also variations in the definition of norm.
In ColorVectorSpace v0.9 and later, abs is defined as a channel-wise operator and abs2 is undefined. The following are alternatives for the definitions in ColorVectorSpace v0.8 and earlier.
_abs(c) = mapreducec(v->abs(float(v)), +, 0, c)
_abs2(c) = mapreducec(v->float(v)^2, +, 0, c)
08/17/2015
1 day ago
188 commits | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9028737545013428, "perplexity": 2828.008468206919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629209.28/warc/CC-MAIN-20210617041347-20210617071347-00325.warc.gz"} |
https://www.neoadviser.com/biomass-is-a-better-alternative-to-natural-gas/ | Biomass is a renewable source of energy that comes from plants and animals and it’s also an organic material. It contains stored energy from the sun. During burning of biomass, the chemical energy in it is released as heat. Moreover, biomass can be directly burned or transformed into liquid biofuels or biogas that can be again burned as fuels. Biomass gas releases carbon dioxide but still it has been categorized as renewable energy source due to the fact that the plant stocks can be substituted with new growth. Biomass tends to cover around half of our total renewable energy consumption and it aims at contributing to half of the EU 2020 renewable energy target.
Alternatively, natural gas is a mixture of hydrocarbon gas comprising of methane and minute amount of carbon dioxide, hydrogen, nitrogen, hydrogen sulphide or helium. Natural gas is also a type of fossil fuel which is presently abundant on Earth. Besides, Propane being a by-product of natural gas production, there are other Natural Gas Products which can benefit mankind extensively.
It known to us that biomass has many advantages over fossil fuels as the former reduces the amount of carbon emission. Listed below are some of the benefits of biomass:
• First and foremost, biomass is a renewable source of energy and it cannot be worn-out. As its commonly extracted from plants, Biomass will remain as renewable energy source as long as plants exists on this planet.
• Biomass aids in reducing the amount of GHG that influences global warming and climate change. The levels of emission caused by biomass is much smaller than the fossil fuels. In terms of the amount of carbon emissions, one of the basic difference between biomass and fossil fuel is that the carbon dioxide which has been absorbed by plant for its growth is returned back to the atmosphere during its burning for the generation of biomass energy. Whereas, the carbon dioxide produced from fossil fuels increases the greenhouse effect in the atmosphere.
• Biomass energy helps to keep a clean environment around us. Due to increasing population in the world, the generatedwaste is also increasing. However, disposal of these wastes is a challenging task. The garbage generated harms the ecological balance, so to maintain the ecosystem biogas is of utmost importance.
• Biomass is a profusely available energy source. The sources of biomass energy come from agriculture, forestry, fisheries, aquaculture, algae and other wastes. As per the opinion of many energy experts, biomass is one of the best available organic energy source.
• GHG maintains equilibrium for a wide range of technologies to produce electricity and heat. A well-known energy retailer named Ontario Wholesale Energy can use the biomass at a risk free and fixed rate program. For some biomass systems net GHG emissions savings are shown to be 40% more than the substituted fossil alternative, whereas some score only 4%. Hence environment is benefitted widely and the effective value depends on the particular application situation like technology, scale, etc.
Thus, based on these parameters Biomass can be considered to be a better alternative to Natural Gas. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8885475397109985, "perplexity": 1318.3649470829114}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259177.93/warc/CC-MAIN-20190526125236-20190526151236-00170.warc.gz"} |
http://www.physicsforums.com/showpost.php?p=2341881&postcount=5 | View Single Post
P: 1,270 For example, if we have the PDE: uxy+cos(x) u + (uy)2 = x How can we express it in "operator" form Lu=f(x,y)? OK, in this case, f(x,y)=x, but what would L be? L=∂xy + cos(x) + ????? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062483906745911, "perplexity": 4500.126928071039}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267729.10/warc/CC-MAIN-20140728011747-00222-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://community.cypress.com/thread/20019 | 0 Replies Latest reply on Dec 12, 2013 9:57 AM by keerthi.gowda
# Can't find helper.hex
Hello,
Found an interesting error. Thought to share with you all.
While programming from MiniProg3, if you find the following error message,
"Firmware file isn't found by the path: C:\Program Files (x86)\Cypress\Programmer\3.19.1\helper.hex"
Then, helper.hex file will be missing from the installed folder. This file will be in the path “C:\Program Files (x86)\Cypress\Programmer\3.19.1\helper.hex” when PSoC Programmer is installed.
If it is missing you will see the error mentioned above.
It is the firmware file of one of the components in MiniProg3 and it has to be present while programming.
I am attaching the same for PSoC Programmer 3.19.1.
Keerthi | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9444345235824585, "perplexity": 4169.52020814964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794868003.97/warc/CC-MAIN-20180527044401-20180527064401-00345.warc.gz"} |
http://mathhelpforum.com/calculus/17954-maclaurin-series-help.html | # Math Help - Maclaurin series help
1. ## Maclaurin series help
2. A Maclauren series for a function is just the Taylor series at an argument of 0.
So for $f(x) = ln(1 + x)$:
$f(x) = ln(1 + x) \implies f(0) = 0$
$f^{\prime}(x) = \frac{1}{1 + x} \implies f^{\prime}(0) = 1$
$f^{\prime \prime}(x) = -\frac{1}{(1 + x)^2} \implies f^{\prime \prime}(0) = -1$
etc.
So
$f(x) \approx f(0) + \frac{1}{1!}f^{\prime}(0)x + \frac{1}{2!}f^{\prime \prime}(0)x^2 + ~ ...$
$f(x) \approx 0 + \frac{1}{1} \cdot 1 \cdot x + \frac{1}{2} \cdot (-1) \cdot x^2 + ~ ...$
$f(x) \approx x - \frac{1}{2}x^2 + ~ ...$
-Dan
3. Originally Posted by topsquark
A Maclauren series for a function is just the Taylor series at an argument of 0.
So for $f(x) = ln(1 + x)$:
$f(x) = ln(1 + x) \implies f(0) = 0$
$f^{\prime}(x) = \frac{1}{1 + x} \implies f^{\prime}(0) = 1$
$f^{\prime \prime}(x) = -\frac{1}{(1 + x)^2} \implies f^{\prime \prime}(0) = -1$
etc.
So
$f(x) \approx f(0) + \frac{1}{1!}f^{\prime}(0)x + \frac{1}{2!}f^{\prime \prime}(0)x^2 + ~ ...$
$f(x) \approx 0 + \frac{1}{1} \cdot 1 \cdot x + \frac{1}{2} \cdot (-1) \cdot x^2 + ~ ...$
$f(x) \approx x - \frac{1}{2}x^2 + ~ ...$
-Dan
would you be able to go into more detail wit this please?also stuck on B
thank u
Amandax
4. Now the second one show that $f(1)$ and $f(2)$ have different signs. That means, by intermediate value theorem there is a number $x_0\in (1,2)$ so that $f(x_0)=0$.
5. Originally Posted by Amanda-UK
would you be able to go into more detail wit this please?
Amandax
More detail with what? What specifically are you having a problem with?
-Dan | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8416429758071899, "perplexity": 648.5685723644583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927458.37/warc/CC-MAIN-20150521113207-00139-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://boris-belousov.net/2016/12/21/probability-theory/ | Probability distribution vs cumulative distribution function
In this post, I collected definitions of the basic probability theory concepts in the language of measure theory, following Kolmogorov, with a bit of modern terminology and emphasis on intuition behind them.
Random variable
Let $\big(\Omega, \mathcal{A}, \mathbb{P}\big)$ be a given probability space.
A random variable $X$ is a measurable function $$\begin{equation*} X : \Omega \rightarrow \mathbb{R}. \end{equation*}$$
It means $X^{-1}(B) \in \mathcal{A} \; \forall B \in \mathcal{B}(\mathbb{R})$, where $\mathcal{B}(\mathbb{R})$ is the Borel $\sigma$-algebra on $\mathbb{R}$.
Probability distribution
A random variable $X : \Omega \rightarrow \mathbb{R}$, apart from mapping points from $\Omega$ to $\mathbb{R}$, carries over the measure $\mathbb{P}$ from $\Omega$ to $\mathbb{R}$.
The probability distribution $\mu$ of a random variable $X$ is the push-forward measure $\mu : \mathcal{B}(\mathbb{R}) \rightarrow [0, 1]$ (denoted $\mu = X_* \mathbb{P}$) defined by the relation $$\begin{equation*} (X_* \mathbb{P})(B) = \mathbb{P}(X^{-1}(B)) \end{equation*}$$ for all $B \in \mathcal{B}(\mathbb{R})$.
The push-forward measure has a nice property
for any random variable $Y : \mathbb{R} \rightarrow \mathbb{R}$ for which any of the integrals exists.
Expectation
The expectation $\mathbb{E} Y$ of a random variable $Y$ is the Lebesgue integral $$\begin{equation*} \mathbb{E} Y = \int_\mathbb{R} Y \,\mathrm{d} \mu. \end{equation*}$$
You can think of it in the following way. A random variable $X$ pushes some abstract measure $\mathbb{P}$ from $\Omega$ to a measure $\mu$ on $\mathbb{R}$ (to a Gaussian measure, for example). After that we can forget about $\Omega$ altogether since all observable quantities are of the form $\mathbb{E} Y$, where $Y : x \mapsto Y(x)$ is a measurable function on $\big(\mathbb{R}, \mathcal{B}(\mathbb{R}), \mu\big)$, and $\mu$ carries all the information required to compute $\mathbb{E} Y$.
In particular, if $Y$ is the identity function $Y = \mathrm{Id} : x \mapsto x$, the expectation $\mathbb{E} Y$ gives the expected value of $X$. Using the nice property of the push-forward measure $\mu$ and the fact that $(\mathrm{Id} \circ X)(\omega) = \mathrm{Id}(X(\omega)) = X(\omega)$, we obtain
This result allows us to compute the expectation of $X$ if we don’t know $\mathbb{P}$ but know that $X$ is distributed according to $\mu$ (denoted $X \sim \mu$). It is, basically, the idea behind introducing $\Omega$ in the first place. We think of $\Omega$ as some invisible space where someone is throwing dice, while we can only observe consequences of that activity in our space $\mathbb{R}$.
Probability density function
If $\mu$ is absolutely continuous with respect to $\lambda$ (denoted $\mu \ll \lambda$), where $\lambda$ is the Lebesgue measure on $\mathbb{R}$, then there exists the Radon-Nikodym derivative $f$, which allows one to change measure under integral.
The probability density function $f : \mathbb{R} \rightarrow [0, \infty)$ of a random variable $X : \Omega \rightarrow \mathbb{R}$ with distribution $\mu = X_*\mathbb{P}$ is the Radon-Nikodym derivative $f = \mathrm{d} \mu \,/\, \mathrm{d} \lambda$.
With the help of the probability density $f$, we can rewrite the expectation of $Y$
Cumulative distribution function
The cumulative distribution function (CDF) $F : \mathbb{R} \rightarrow [0, 1]$ of a random variable $X$ is defined by $$\begin{equation*} F(x) = \mu \big( (-\infty, x] \big). \end{equation*}$$
Using the Riemann–Stieltjes integral and the CDF, we can rewrite the expectation in yet another way
or even more explicitly
To answer the question in the title of this post, CDF and probability distribution are closely related concepts, but they are two different things. A probability distribution is a law that assigns a real number to every measurable subset of a given set, while a CDF assigns numbers only to half-open intervals $(-\infty, x]$ in $\mathbb{R}$. Thus, probability distribution is a more general object, as it can be defined on any measurable space, not only on $\mathbb{R}$.
Measure theory in action
Let’s see how the machinery we’ve developed works on a simple example.
Problem. Let $\mu = \mathcal{N}(0, 1)$ and $X \sim \mu$. If $Y = X^2$, what is its distribution $\nu$?
Solution. It is easy to see that $G(x) = \nu \big( (-\infty, x] \big) = \mu \big( \left[-\sqrt{x}, \sqrt{x}\right] \big)$. That already solves the problem, but if desired, we can go further and obtain the density $g$ of $\nu$ by differentiating $G(x)$.
Problem. What is the expected value of $Y$?
Solution. Since we have both $\mu$ and $\nu$, we can compute $\mathbb{E}_\mathbb{P} Y$ in two ways
If that does not impress you, look at what it means for densities
$\mathbb{E}_\mu X^2$ is the variance of $X$, which is given and equals $1$. Therefore, we conclude that the mean of the Chi-squared distribution with one degree of freedom equals the variance of the standard normal distribution.
References
The basics of probability theory are well explained in the lecture notes on Stochastic Calculus, Filtering, and Stochastic Control by Ramon van Handel and in Probability and Stochastic Processes with Applications by Oliver Knill. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9946178197860718, "perplexity": 105.36020526397839}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662550298.31/warc/CC-MAIN-20220522220714-20220523010714-00554.warc.gz"} |
https://www.physicsforums.com/threads/which-of-the-following-statements-about-gausss-law-are-correct.625350/ | # Which of the following statements about Gauss's law are correct?
1. Aug 3, 2012
### Slugger17
1. The problem statement, all variables and given/known data
There may be more than one correct choice
a. Only charge enclosed within a Gaussian surface can produce an electric field at points on that surface.
b. If a Gaussian surface is completely inside an electrostatic conductor, the electric field must always be zero at all points on that surface.
c. The electric flux passing through a Gaussian surface depends only on the amount of charge inside that surface, not on its size or shape.
d. If there is no charge inside of a Gaussian surface, the electric field must be zero at points of that surface.
e. Gauss's law is valid only for symmetric charge distributions, such as spheres and cylinders.
2. Relevant equations
none
3. The attempt at a solution
My answer is B but it is apparently incorrect. So I'm not sure if it's wrong or there is another that is also correct
Last edited: Aug 3, 2012
2. Aug 3, 2012
### gabbagabbahey
Hi Slugger17, welcome to PF!
Hmm... do you not think that Gauss' Law is a relevant equation for a question regarding Gauss' Law?
How about you give your reasoning behind ruling out a,c,d, & e; so we can see where you are going wrong?
3. Aug 3, 2012
### Slugger17
Yeah good point.
a. if a Gaussian surface is placed in an electric field, the field passes through it so there must be a field at points on the surface. There's a lecture slide that illustrates this so I'm confident that a. is false
c. I'm starting to think this is true since $\phi$=$\oint$$\vec{E}$.d$\vec{A}$=$\frac{q}{\epsilon_{0}}$ is the formula for flux
d. this seems to me to be a restatement of a. so by the same reasoning I believe this to be false
e. in our lectures we used superposition to apply gauss's law to asymmetric shapes so this must be false.
So I'm now leaning towards b. and c. as the correct answers
4. Aug 3, 2012
### gabbagabbahey
Good.
I agree. I think the wording on this is a little tricky, since the calculation of $\oint \mathbf{E} \cdot d \mathbf{A}$ certainly depends on the size and shape of the surface you are integrating over (in general), but the net flux can always be determined entirely from the charge enclosed since $\Phi = \frac{ Q_{ \text{enc} }}{ \epsilon_0 }$
Right again. Only the total flux over the surface must be zero, the electric field can be non-zero at any given point the surface.
It is most definately false. Gauss' law is always true (it is one of Maxwell's equations). It is only useful for determining E when applied to charge distributions with certain types of symmetry (when you used the superpostition principle in class, I'm sure each of the component shapes had such symmetries), but it is always true.
Sounds good to me
5. Aug 4, 2012
### Slugger17
Yep that gave me the correct answer.
Cheers
6. Aug 4, 2012
### mikelepore
Sentence b sound ambiguous to me.
"b. If a Gaussian surface is completely inside an electrostatic conductor, the electric field must always be zero at all points on that surface."
Someone could interpret the words "completely inside" to mean that this sentence allows you to have a little environment in a hole that's inside a conductor. In that case, if there is an unknown charge distribution inside that hole, you could have a nonzero electric field on the Gaussian surface.
If they mean that the surface is completely immersed in the conductive material, then b is true.
Similar Discussions: Which of the following statements about Gauss's law are correct? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598427414894104, "perplexity": 515.5618692125032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948517917.20/warc/CC-MAIN-20171212192750-20171212212750-00728.warc.gz"} |
http://math.stackexchange.com/questions/211950/how-do-you-imagine-the-shape-of-a-manifold-s2-times-s1 | # How do you imagine the shape of a manifold $S^2 \times S^1$?
In 3-dimensional manifold theory, I have encountered the manifold $S^2 \times S^1$ many times. (The following story can be applied not only this manifold but also for any 3-dimensional manifold.)
But I don't have any geometrically or topologically image of the manifold in my head. How do you deal with this difficulties? Is there any good way to imagine the manifold in my head?
Since $S^1$ is a union of an interval and a point, I know it is a thick sphere identified the inner boundary with the outer boundary. But Still it is not that clear.
Or do you just deal the manifold algebraically without appearing any geometric intuition?
I appreciate any help or tip. Thank you in advance.
-
I also like to think of $S^2 \times S^1$ as being fibered over $S^1$, i.e. "a trivial family of 2-spheres parametrized by the 1-sphere". (Or vice versa, of course...) – Aaron Mazel-Gee Oct 15 '12 at 14:52
Any product manifold $M \times N$ can be visualized as a configuration space for a pair of particles, one of which travels on $M$ and one of which travels on $N$. So $S^2 \times S^1$ can be visualized as the configuration space of a pair of particles, one of which travels on a sphere and one of which travels on a circle.
There is an alternate visualization as follows. First one thinks of $S^2 \times I$ as a thickened sphere (like a $3$-dimensional annulus), with an inner boundary sphere $S^2 \times \{ 0 \}$ and an outer boundary sphere $S^2 \times \{ 1 \}$. Then one identifies the two boundaries. (Edit: I did not notice that you had already talked about this visualization. I think it can be helpful.)
In general, probably different people will get different things out of different visualizations. Use whatever works for you.
-
As Qiaochu Yuan and Mariano Suarez-Alvarez said, I find the easiest way of thinking of $\mathbb{S}^2\times \mathbb{S}^1$ is as a thickened sphere with two identified boundary components.
In this vein, you could also think of $\mathbb{S}^2\times\mathbb{S}^1$ as $(\mathbb{S}^2\times\mathbb{R}) / \mathbb{Z}$, where $\mathbb{Z}$ acts by translation on the second factor. It's easy to visualize $\mathbb{S}^2\times\mathbb{R}$ --- it's just a punctured $\mathbb{R}^3$.
Sometimes, though, I find it useful to think of $\mathbb{S}^2\times\mathbb{S}^1$ as a (degenerate) lens space, obtained by gluing two solid torii via the identity mapping class of $T^2$. Since a disk glued to a disk gives a sphere and the identity sends meridian to meridian, gluing compression disks along their boundaries, you can see that this gives a family of spheres parametrized by a circle.
-
One could "imagine" it as a 4-dimensional "torus" formed by "revolving" a sphere in 4D space into a torus-like shape whose perpendicular (to the circle of revolution) cross-sections are spheres (or pairs of spheres). The surface of said "torus", to be precise, which has 3 dimensions. Though one can't really visualize 4D. Another way is to imagine a space that is "sphere-like" in 2 dimensions (that is, moving around in these two dimensions only is like moving around on the surface of a sphere), and "linear" (like 3D conventional/"flat" space) in the third, but such that after traveling a finite distance along the third, you end right back where you started -- though the other two dimensions have this property as well, but the geometry is such that traveling in them is like traveling in a sphere, but traveling on one of these, and traveling on the third, is like traveling on a torus.
Note that $S_1 \times S_1$ can be thought of as a normal torus via the same manner: think of a circle, revolved about an axis not crossing through it, and in its own plane.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101292252540588, "perplexity": 355.440402139937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://patrickyepes.com/2019/07/ | ## Something… out of nothing?
How can something be created from nothingness? How is that a possibility? Furthermore, how can it be proven?
If we were to place a box under vacuum, theoretically, we would have a box of nothingness. But is it really empty? The answer is both ‘yes’ and ‘no’. Enter quantum fluctuation. A particle can be created in a vacuum, and disappear, provided that it happens very quickly.
In quantum physics, a quantum fluctuation is the temporary change in the amount of energy in a point in space, also known as the Heisenberg uncertainty principle. The uncertainty principle states that for a pair of conjugate variables such as position/momentum or energy/time, it is impossible to have a precisely determined value of each member of the pair at the same time.
Quantum physics and Special Theory of Relativity were further explained via Dirac’s famous equation.
Dirac equation(original)
This equation describes particles and anti-particles. These particles are created in pairs, which can borrow energy from a vacuum before reuniting and cancelling each other out, thus returning the borrowed energy. These ‘virtual particles’ can appear and disappear trillions of times in a blink of an eye. One proof that this does happen, is by observing electrons of an atom in a vacuum. The electrons want to move on a flat plane around an atom. If the electrons bobble in their orbit around an atom, it is because of particles and anti-particles, interfering and affecting the electrons’ orbits.
Now, this all begs the question, ‘Why does any of this matter?’ Well, we know that the universe is expanding at an accelerated rate, but what is the driving force? One theory suggested that dark energy was the reason. A new theory puts forth the idea that, with the creation and destruction of these virtual particles, the universe oscillates between expansion and contraction. During these oscillations, the net effect is that the universe expands very slowly, but at an ever accelerating rate. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8312991857528687, "perplexity": 406.3740391473604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737206.16/warc/CC-MAIN-20200807172851-20200807202851-00514.warc.gz"} |
https://cs.stackexchange.com/questions/113171/when-are-invariants-true-inside-of-a-loop/113175 | # When are invariants true inside of a loop?
It can make an algorithm or loop show a high level of certainty / confidence that the code is correct, such as in the case of a binary search, as described in the book Programming Pearls, 2nd Ed.
Are invariants true at most part of the loop and can be untrue at some part? That is, in the middle of the loop:
while a <= b
if (...)
// Point A
else
// Point B
end
// Point C
end
is it true that at Point A or Point B, something can change that cause the invariants to break, and it depends on the next iteration to "correct the invariants"? The same with Point C, something can change that can cause the invariants to break?
Maybe Point C is more likely the place where the invariants may break, as it is getting ready for the next iteration? So if it is a for-loop:
for(i = 0; i < n; i++) {
// ...
}
then the i++ is a place where the invariant may break? I am looking if there is any rule that says what time may invariants not hold true, or should they always hold true?
It seems for binary search, the invariants are always true, while for mergesort or quicksort, the invariants are only true at a certain point, such as for mergesort, the array is divided into 2, and it is in sorted order, but this invariant is only true only after the code recursively calls itself to sort both the left and right subarrays. But this is recursion, so I don't know if there is any difference between that or just a loop.
Are invariants true at most part of the loop and can be untrue at some part?
Invariants hold prior and after each execution of the loop body; there are no requirements for what happens in between its statements. Not only that, I would even say it is quite the norm that invariants are broken during the loop. Fixing them is actually a sort of programming paradigm; it gives you a way of determining what the values for variable assignments in a loop should be. In fact, this is a quite natural concept if your design approach is based on invariants (e.g., if you are writing programs which are annotated with JML as in design by contract and similar ideas based on formal proof systems).
For instance, consider the following toy example in which we want to sum over the elements in an array $$A$$ and save the result in the variable $$s$$:
\begin{align*} &s \gets 0 \\ &i \gets |A| \\ &\textbf{while } i > 0 \textbf{ do} \\ &\quad i \gets i - 1 \\ &\quad s \gets s + A[i] \\ &\textbf{done} \end{align*}
The invariant is $$s = \sum_{j=i}^{|A|-1} A[j]$$, and it holds prior and after each execution of the loop body. However, after $$i \gets i - 1$$ is executed, the invariant no longer holds, that is, we then have $$s = \sum_{j=i+1}^{|A| -1} A[j]$$. Hence, the next instruction updates $$s$$ so that the invariant holds again.
• invariants hold prior and after the loop but not inside? I re-read Programming Pearls and it seemed like the invariants are added as a line inside the loop as "mustbe".... so it is like an assert, I think. So if viewed this way, invariant is at a certain location, and while it might be true else where, there is no guarantee. I think you can state that it is true throughout the loop, such as one of the invariants for binary search: the range [low, high] inclusive includes the index of target, if target exists in the array, until $low > high$, then target doesn't exist. – nonopolarity Aug 28 '19 at 17:09
• @太極者無極而生 No. An invariant holds before and after the loop, not necessarily in between. See, for example, here. Actually, most certainly will the invariant be broken somewhere in the middle of the loop (unless it is possible to execute multiple assignments in an atomic fashion); otherwise, the invariant would be completely independent from the variables which are modified within the loop and, hence, simply trivial. – dkaeae Aug 29 '19 at 7:21
• it seems it is "after and before" each iteration of the loop... or perhaps except some special point. That article you quoted points to en.wikipedia.org/wiki/Loop_invariant and the Informal example section actually has some invariants stated inside the loop... didn't expect a wiki article that actually talks about loop-invariant... that'd be quite useful to help improve the correctness of code – nonopolarity Aug 29 '19 at 13:26
• @太極者無極而生 Aha. It seems the misunderstanding is coming from what the "loop" is supposed to be. What I intend to say is that the invariant holds before and after each execution of the loop body (and not that it holds before and after the execution of the entire while block). Let me also update the answer to render this more explicit. – dkaeae Aug 29 '19 at 13:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8769122362136841, "perplexity": 595.6669330780222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911896.73/warc/CC-MAIN-20200710175432-20200710205432-00107.warc.gz"} |
https://zh.wikipedia.org/wiki/%E6%A0%BC%E6%9E%97%E5%AE%9A%E7%90%86 | # 格林公式
(重定向自格林定理
## 定理
${\displaystyle \iint \limits _{D}({\frac {\partial Q}{\partial x}}-{\frac {\partial P}{\partial y}})\mathrm {d} x\mathrm {d} y=\oint _{L^{+}}(P\mathrm {d} x+Q\mathrm {d} y)}$
## D 为一个简单区域时的证明
${\displaystyle \int _{C}L\,dx=\iint _{D}\left(-{\frac {\partial L}{\partial y}}\right)\,dA\qquad \mathrm {(1)} }$
${\displaystyle \int _{C}M\,dy=\iint _{D}\left({\frac {\partial M}{\partial x}}\right)\,dA\qquad \mathrm {(2)} }$
${\displaystyle D=\{(x,y)|a\leq x\leq b,g_{1}(x)\leq y\leq g_{2}(x)\}}$
${\displaystyle \iint _{D}\left({\frac {\partial L}{\partial y}}\right)\,dA}$ ${\displaystyle =\int _{a}^{b}\!\!\int _{g_{1}(x)}^{g_{2}(x)}\left[{\frac {\partial L(x,y)}{\partial y}}\,dy\,dx\right]}$ ${\displaystyle =\int _{a}^{b}{\Big \{}L(x,g_{2}(x))-L(x,g_{1}(x)){\Big \}}\,dx\qquad \mathrm {(3)} }$
${\displaystyle \int _{C_{1}}L(x,y)\,dx=\int _{a}^{b}{\Big \{}L(x,g_{1}(x)){\Big \}}\,dx}$
${\displaystyle \int _{C_{3}}L(x,y)\,dx=-\int _{-C_{3}}L(x,y)\,dx=-\int _{a}^{b}[L(x,g_{2}(x))]\,dx}$
${\displaystyle \int _{C_{4}}L(x,y)\,dx=\int _{C_{2}}L(x,y)\,dx=0}$
${\displaystyle \int _{C}L\,dx}$ ${\displaystyle =\int _{C_{1}}L(x,y)\,dx+\int _{C_{2}}L(x,y)\,dx+\int _{C_{3}}L(x,y)\,dx+\int _{C_{4}}L(x,y)\,dx}$ ${\displaystyle =-\int _{a}^{b}[L(x,g_{2}(x))]\,dx+\int _{a}^{b}[L(x,g_{1}(x))]\,dx\qquad \mathrm {(4)} }$
(3)和(4)相加,便得到(1)。类似地,也可以得到(2)。
## 参考文献
1. ^ George Green, An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism (Nottingham, England: T. Wheelhouse, 1828). Green did not actually derive the form of "Green's theorem" which appears in this article; rather, he derived a form of the "divergence theorem", which appears on pages 10-12 of his Essay.
In 1846, the form of "Green's theorem" which appears in this article was first published, without proof, in an article by Augustin Cauchy: A. Cauchy (1846) "Sur les intégrales qui s'étendent à tous les points d'une courbe fermée" (On integrals that extend over all of the points of a closed curve), Comptes rendus, 23: 251-255. (The equation appears at the bottom of page 254, where (S) denotes the line integral of a function k along the curve s that encloses the area S.)
A proof of the theorem was finally provided in 1851 by Bernhard Riemann in his inaugural dissertation: Bernhard Riemann (1851) Grundlagen für eine allgemeine Theorie der Functionen einer veränderlichen complexen Grösse (Basis for a general theory of functions of a variable complex quantity), (Göttingen, (Germany): Adalbert Rente, 1867); see pages 8 - 9.
2. ^ Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press, 2010, ISBN 978-0-521-86153-3
3. ^ Vector Analysis (2nd Edition), M.R. Spiegel, S. Lipschutz, D. Spellman, Schaum’s Outlines, McGraw Hill (USA), 2009, ISBN 978-0-07-161545-7
4. Stewart, James. Calculus 6th. Thomson, Brooks/Cole. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 17, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9400380253791809, "perplexity": 1883.3746593900898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945493.69/warc/CC-MAIN-20180422041610-20180422061610-00114.warc.gz"} |
http://de.wikibooks.org/wiki/Akustik/_Filter_Design_und_Dimensionierung | # Akustik/ Filter Design und Dimensionierung
Dieser Buchabschnitt benötigt eine Übersetzung ins Deutsche – der fremdsprachige Text ist [[{{{1}}}| hier ]] zu finden. Wenn sie Fragen haben, wie man Texte übersetzt, so schauen Sie in diese Hilfe. Ihre textbezogenen Fragen und Anmerkungen können Sie auf dieser Diskussionsseite besprechen.
## Introduction
Acoustic filters, or mufflers, are used in a number of applications requiring the suppression or attenuation of sound. Although the idea might not be familiar to many people, acoustic mufflers make everyday life much more pleasant. Many common appliances, such as refrigerators and air conditioners, use acoustic mufflers to produce a minimal working noise. The application of acoustic mufflers is mostly directed to machine components or areas where there is a large amount of radiated sound such as high pressure exhaust pipes, gas turbines, and rotary pumps.
Although there are a number of applications for acoustic mufflers, there are really only two main types which are used. These are absorptive and reactive mufflers. Absorptive mufflers incorporate sound absorbing materials to attenuate the radiated energy in gas flow. Reactive mufflers use a series of complex passages to maximize sound attenuation while meeting set specifications, such as pressure drop, volume flow, etc. Many of the more complex mufflers today incorporate both methods to optimize sound attenuation and provide realistic specifications.
In order to fully understand how acoustic filters attenuate radiated sound, it is first necessary to briefly cover some basic background topics. For more information on wave theory and other material necessary to study acoustic filters please refer to the references below.
## Basic wave theory
Although not fundamentally difficult to understand, there are a number of alternate techniques used to analyze wave motion which could seem overwhelming to a novice at first. Therefore, only 1-D wave motion will be analyzed to keep most of the mathematics as simple as possible. This analysis is valid, with not much error, for the majority of pipes and enclosures encountered in practice.
### Plane-wave pressure distribution in pipes
The most important equation used is the wave equation in 1-D form (See [1],[2], 1-D Wave Equation, for information).
Therefore, it is reasonable to suggest, if plane waves are propagating, that the pressure distribution in a pipe is given by:
$\mathbf{p}=\mathbf{Pi}e^{j[\omega t-kx]}+\mathbf{Pr}e^{j[\omega t+kx]}$
where Pi and Pr are incident and reflected wave amplitudes respectively. Also note that bold notation is used to indicate the possibility of complex terms. The first term represents a wave travelling in the +x direction and the second term, -x direction.
Since acoustic filters or mufflers typically attenuate the radiated sound power as much as possible, it is logical to assume that if we can find a way to maximize the ratio between reflected and incident wave amplitude then we will effectively attenuated the radiated noise at certain frequencies. This ratio is called the reflection coefficient and is given by:
$\mathbf{R}=\left( \frac{\mathbf{Pr}}{\mathbf{Pi}} \right)$
It is important to point out that wave reflection only occurs when the impedance of a pipe changes. It is possible to match the end impedance of a pipe with the characteristic impedance of a pipe to get no wave reflection. For more information see [1] or [2].
Although the reflection coefficient isn't very useful in its current form since we want a relation describing sound power, a more useful form can be derived by recognizing that the power intensity coefficient is simply the magnitude of reflection coefficient square [1]:
$R_{\pi}=\left|\mathbf{R}\right|^2$
As one would expect, the power reflection coefficient must be less than or equal to one. Therefore, it is useful to define the transmission coefficient as:
$T_{\pi}=\left(1-R_{\pi}\right)$
which is the amount of power transmitted. This relation comes directly from conservation of energy. When talking about the performance of mufflers, typically the power transmission coefficient is specified.
## Basic filter design
For simple filters, a long wavelength approximation can be made to make the analysis of the system easier. When this assumption is valid (e.g. low frequencies) the components of the system behave as lumped acoustical elements. Equations relating the various properties are easily derived under these circumstances.
The following derivations assume long wavelength. Practical applications for most conditions are given later.
### Low-pass filter
Datei:Acoustics filter imp lpass.jpg
Tpi for Low-Pass Filter
These are devices that attenuate the radiated sound power at higher frequencies. This means the power transmission coefficient is approximately 1 across the band pass at low frequencies(see figure to right).
This is equivalent to an expansion in a pipe, with the volume of gas located in the expansion having an acoustic compliance (see figure to right). Continuity of acoustic impedance (see Java Applet at: Acoustic Impedance Visualization) at the junction, see [1], gives a power transmission coefficient of:
$T_{\pi}=\left(\frac{1}{1+\left(\frac{S_{1}-S}{2S}\right)kL}\right)$
where k is the wavenumber (see [Wave Properties]), L & $S_{1}$ are length and area of expansion respectively, and S is the area of the pipe.
The cut-off frequency is given by:
$f_{c}=\left(\frac{Sc}{\pi L(S_{1}-S)}\right)$
### High-pass filter
Datei:Acoustics filter imp hpass.jpg
Tpi for High-Pass Filter
These are devices that attenuate the radiated sound power at lower frequencies. Like before, this means the power transmission coefficient is approximately 1 across the band pass at high frequencies (see figure to right).
This is equivalent to a short side brach (see figure to right) with a radius and length much smaller than the wavelength (lumped element assumption). This side branch acts like an acoustic mass and applies a different acoustic impedance to the system than the low-pass filter. Again using continuity of acoustic impedance at the junction yields a power transmission coefficient of the form [1]:
$T_{\pi}=\left(\frac{1}{1+\left(\frac{\pi a^2}{2SLk}\right)^2}\right)$
where a and L are the area and effective length of the small tube, and S is the area of the pipe.
The cut-off frequency is given by:
$f_{c}=\left(\frac{ca^2}{2SL}\right)$
### Band-stop filter
Datei:Acoustics filter imp bstop.jpg
Tpi for Band-Stop Filter
These are devices that attenuate the radiated sound power over a certain frequency range (see figure to right). Like before, the power transmission coefficient is approximately 1 in the band pass region.
Since the band-stop filter is essentially a cross between a low and high pass filter, one might expect to create one by using a combination of both techniques. This is true in that the combination of a lumped acoustic mass and compliance gives a band-stop filter. This can be realized as a helmholtz resonator (see figure to right). Again, since the impedance of the helmholtz resonator can be easily determined, continuity of acoustic impedance at the junction can give the power transmission coefficient as [1]:
$T_{\pi}=\left(\frac{1}{1+\left(\frac{c/2S}{\omega L/S_{b}-c^2/\omega V}\right)^2}\right)$
where $S_{b}$ is the area of the neck, L is the effective length of the neck, V is the volume of the helmholtz resonator, and S is the area of the pipe. It is interesting to note that the power transmission coefficient is zero when the frequency is that of the resonance frequency of the helmholtz. This can be explained by the fact that at resonance the volume velocity in the neck is large with a phase such that all the incident wave is reflected back to the source [1].
The zero power transmission coefficient location is given by:
$f_{c}=\left(\frac{c}{2\pi}\right)\sqrt{\left(\frac{S_{b}}{LV}\right)}$
This frequency value has powerful implications. If a system has the majority of noise at one frequency component, the system can be "tuned" using the above equation, with a helmholtz resonator, to perfectly attenuate any transmitted power (see examples below).
Datei:Acoustics filter implem helm3.gif Helmholtz Resonator as a Muffler, f = 60 Hz Datei:Acoustics filter implem helm2.gif Helmholtz Resonator as a Muffler, f = fc
### Design
If the long wavelength assumption is valid, typically a combination of methods described above are used to design a filter. A specific design procedure is outlined for a helmholtz resonator, and other basic filters follow a similar procedure (see [1]).
Two main metrics need to be identified when designing a helmholtz resonator [3]:
1. Resonance frequency desired: $f_{c}=\frac{c}{2\pi}\frac{\sqrt{C_{o}}}{V}$ where $C_{o}=\frac{S}{L}$.
2. - Transmission loss: $\frac{\sqrt{C_{o}V}}{2S}=const$ based on TL level. This constant is found from a TL graph (see HR pp. 6).
This will result in two equations with two unknowns which can be solved for the unknown dimensions of the helmholtz resonator. It is important to note that flow velocities degrade the amount of transmission loss at resonance and tend to move the resonance location upwards [3].
In many situations, the long wavelength approximation is not valid and alternative methods must be examined. These are much more mathematically rigorous and require a complete understanding acoustics involved. Although the mathematics involved are not shown, common filters used are given in the section that follows.
## Actual filter design
As explained previously, there are two main types of filters used in practice: absorptive and reactive. The benefits and drawback of each will be briefly explained, along with their relative applications (see [Absorptive Mufflers].
### Absorptive
These are mufflers which incorporate sound absorbing materials to transform acoustic energy into heat. Unlike reactive mufflers which use destructive interference to minimize radiated sound power, absorptive mufflers are typically straight through pipes lined with multiple layers of absorptive materials to reduce radiated sound power. The most important property of absorptive mufflers is the attenuation constant. Higher attenuation constants lead to more energy dissipation and lower radiated sound power.
Advantages of Absorptive Mufflers [3]: (1) - High amount of absorption at larger frequencies. (2) - Good for applications involving broadband (constant across the spectrum) and narrowband (see [1]) noise. (3) - Reduced amount of back pressure compared to reactive mufflers. Disadvantages of Absorptive Mufflers [3]: (1) - Poor performance at low frequencies. (2) - Material can degrade under certain circumstances (high heat, etc).
#### Examples
Datei:Cherrybomb muffler.jpg
Absorptive Muffler
There are a number of applications for absorptive mufflers. The most well known application is in race cars, where engine performance is desired. Absorptive mufflers don't create a large amount of back pressure (as in reactive mufflers) to attenuate the sound, which leads to higher muffler performance. It should be noted however, that the radiate sound is much higher. Other applications include plenum chambers (large chambers lined with absorptive materials, see picture below), lined ducts, and ventilation systems.
### Reactive
Reactive mufflers use a number of complex passages (or lumped elements) to reduce the amount of acoustic energy transmitted. This is accomplished by a change in impedance at the intersections, which gives rise to reflected waves (and effectively reduces the amount of transmitted acoustic energy). Since the amount of energy transmitted is minimized, the reflected energy back to the source is quite high. This can actually degrade the performance of engines and other sources. Opposite to absorptive mufflers, which dissipate the acoustic energy, reactive mufflers keep the energy contained within the system. See [Reactive Mufflers] for more information.
Advantages of Reactive Mufflers [3]: (1) - High performance at low frequencies. (2) - Typically give high insertion loss, IL, for stationary tones. (3) - Useful in harsh conditions. Disadvantages of Reactive Mufflers [3]: (1) - Poor performance at high frequencies. (2) - Not desirable characteristics for broadband noise.
#### Examples
Datei:EXHAUSTCUT-OUTsm.jpg
Reflective Muffler
Reactive mufflers are the most widely used mufflers in combustion engines[1]. Reactive mufflers are very efficient in low frequency applications (especially since simple lumped element analysis can be applied). Other application areas include: harsh environments (high temperature/velocity engines, turbines, etc), specific frequency attenuation (using a helmholtz like device, a specific frequency can be toned to give total attenuation of radiated sound power), and a need for low radiated sound power (car mufflers, air conditioners, etc).
### Performance
There are 3 main metrics used to describe the performance of mufflers; Noise Reduction, Insertion Loss, and Transmission Loss. Typically when designing a muffler, 1 or 2 of these metrics is given as a desired value.
#### Noise Reduction (NR)
Defined as the difference between sound pressure levels on the source and receiver side. It is essentially the amount of sound power reduced between the location of the source and termination of the muffler system (it doesn't have to be the termination, but it is the most common location) [3].
$NR = \left(L_{p1}-L_{p2}\right)$
where $L_{p1}$ and $L_{p2}$ is sound pressure levels at source and receiver respectively. Although NR is easy to measure, pressure typically varies at source side due to standing waves [3].
#### Insertion Loss (IL)
Defined as difference of sound pressure level at the receiver with and without sound attenuating barriers. This can be realized, in a car muffler, as the difference in radiated sound power with just a straight pipe to that with an expansion chamber located in the pipe. Since the expansion chamber will attenuate some of the radiate sound power, the pressure at the receiver with sound attenuating barriers will be less. Therefore, a higher insertion loss is desired [3].
$IL = \left(L_{p,without}-L_{p,with}\right)$
where $L_{p,without}$ and $L_{p,with}$ are pressure levels at receiver without and with a muffler system respectively. Main problem with measuring IL is that the barrier or sound attenuating system needs to be removed without changing the source [3].
#### Transmission Loss (TL)
Defined as the difference between the sound power level of the incident wave to the muffler system and the transmitted sound power. For further information see [Transmission Loss] [3].
$TL = 10log\left(\frac{1}{\tau}\right)$ with $\tau =\left(\frac{I_{t}}{I_{i}} \right)$
where $I_{t}$ and $I_{i}$ are the transmitted and incident wave power respectively. From this expression, it is obvious the problem with measure TL is decomposing the sound field into incident and transmitted waves which can be difficult to do for complex systems (analytically).
#### Examples
(1) - For a plenum chamber (see figure below):
$TL = -10log\left(S\left(\frac{cos\theta}{2\pi d^2}+\frac{1-\alpha}{\alpha S_{w}}\right)\right)$ in dB
where $\alpha$ is average absorption coefficient.
Datei:Acoustics filter implem plenum1.jpg Plenum Chamber Datei:Acoustics filter implem plenum.jpg Transmission Loss vs. Theta
(2) - For an expansion (see figure below):
$NR = 10log\left[ \frac{1}{2}\left| e^{-ikx_{s}}+\left( \frac{1-S}{1+S} \right)e^{ikx_{s}} \right|^2\left( 1+S \right)^2 \right]$
$IL = 10log\left[ \frac{\left( 1+S \right)^2}{4} \right]$
$TL = 10log\left[ \frac{\left( 1+S \right)^2}{4S} \right]$
where $S=\left( \frac{A_{2}}{A_{1}} \right)$
Datei:Acoustics filter implem expan1.jpg Expansion in Infinite Pipe Datei:Acoustics filter implem expan.jpg NR, IL, & TL for Expansion
(3) - For a helmholtz resonator (see figure below):
$TL = 10log\left[ 1+\left( \frac{\left( \frac{c}{2S_{b}} \right)}{\omega LS - \left( \frac{c^2}{\omega V} \right)} \right)^2 \right]$ in dB
Datei:Acoustics filter implem helm5.jpg Helmholtz Resonator Datei:Acoustics filter implem helm4.jpg TL for Helmholtz Resonator | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 32, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.850511908531189, "perplexity": 1436.5579996529586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708835190/warc/CC-MAIN-20130516125355-00034-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://planetcalc.com/2167/?thanks=1 | Relative humidity to absolute humidity and vice versa calculators
The first calculator converts relative humidity to absolute humidity for a given temperature and barometric pressure. The second calculator does the opposite - converts absolute humidity to relative humidity for a given temperature and barometric pressure. Some theory and formulas can be found below the calculator.
Karen Luckhurst
Created: 2012-12-09 20:04:33, Last updated: 2021-09-30 12:19:49
Relative humidity to absolute humidity calculator
Digits after the decimal point: 3
Absolute humidity, kg/m3
Absolute humidity to relative humidity calculator
Relative humidity, %
First, it is helpful to define relative and absolute humidity. To follow, are a couple of definitions, taken from the Australian bureau of meteorology
Relative humidity (RH)
The ratio of the actual amount of water vapor in the air to the amount it could hold when saturated is expressed as a percentage, or the ratio of the actual vapor pressure to the saturation vapor pressure is expressed as a percentage.
$RH=100 \cdot \frac{e}{e_w}$
Absolute humidity (AH)
The mass of water vapor in a unit volume of air. It is a measure of the actual water vapor content of the air.
$AH=\frac{m_v}{V}$
Thanks to the World Meteorological Organization, we can find saturation vapor pressure given the temperature and atmospheric pressure (read more at Saturation vapor pressure)
From the relative humidity and saturation vapor pressure, we can find the actual vapor pressure.
$e= e_w \frac{RH}{100}$
Then we can use the general law of perfect gases
$PV=\frac{m}{M}RT$
In our case this is
$eV=mR_vT$
where R is the universal gas constant defined as 8313.6, and Rv is the specific gas constant for water vapor defined as 461.5
Thus we can express mass to volume ratio as
$\frac{m}{V}=\frac{e}{R_vT}$,
which is absolute humidity.
So, for 25 degrees centigrade and 60% relative humidity, one cubic meter of moist air contains about 14 grams of water, which corresponds to conversion table values I've found before.
URL copied to clipboard
PLANETCALC, Relative humidity to absolute humidity and vice versa calculators | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 6, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.974940836429596, "perplexity": 1027.6586026185346}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571190.0/warc/CC-MAIN-20220810131127-20220810161127-00460.warc.gz"} |
https://www.splashlearn.com/math-vocabulary/algebra/variable | Variable in Math – Definition with Examples
What is Variables?
In real-life there are things that remain constant like your date of birth. However, there are things that vary with time and place like temperature, age, height etc. Since these quantities change they are may be called variables.
In algebra, a symbol (usually a letter) standing in for an unknown numerical value in an equation or an algebraic expression. In simple words, a variable is a quantity that can be changed and is not fixed. Variables are essential as they form a major algebra component.
We usually use “x” and “y” to express an unknown integer. However, it isn’t necessary, and we can use any letter.
Example-
Let us take an example of the algebraic expression 2x + 6. Here, x is a variable and can take any value. If x = 1, the value of this algebraic expression will be 2(1) + 6 i.e. 8 and
if x = 2, the value of the algebraic expression changes to 10. Hence, we can say that the value of the algebraic expression varies as the x varies.
Now let us consider an equation 2x + 6 = 12.
The variable x can take any value in an equation also. It may or may not satisfy the equation. If it does, it is called the solution of the equation.
Here, x = 3 makes the equation true and is called the solution of this equation.
Different Types of Variables
There are two types of variables: Dependent Variables and Independent variables.
Dependent Variables
The dependent variable is a variable whose value is determined by the quantity another variable takes.
For instance, in the equation y = 2x + 3, x can take any value, like 1, 2, 3. However, the value of y will depend on the value of x. So, if x = 1, y will become 5, and if x = 2, y will become 7, and so on. Therefore, y is called the dependent variable and x is called the independent variable.
Independent Variables
An independent variable in an algebraic equation is one whose values are unaffected by changes. If an algebraic equation has two variables, x, and y, and each value of x is related to any other value of y, then x is an independent variable, and y is a dependent variable.
For instance, in the equation y = 2x, x can take any value. Hence, it is the independent variable in the equation.
Conclusion
Variables are a very important concept if you want to understand algebra. At SplashLearn, you will come across various resources, like games and worksheets, that will help you understand the concept of variables. The solved examples and practice problems give a step-by-step understanding, making it easier to solve sums on your own.
Solved Examples:
1. Find one value of x that satisfies the equation 6x + 4 = 22 and one value that does not.
Solution:
x = 0 does not satisfy the equation as LHS is not equal to RHS.
x = 3 satisfy the equation as LHS is equal to RHS. So, x = 3 is the solution for the given equation.
1. Is h = 3, the solution of equation 7h − 2 = 12?
Solution:
7(3) − 2 = 21 − 2 = 19
19 $\neq$ 12
So, h = 3 is not a solution of a given equation.
1. What will be the value of the expression 3x + 2 when x = 4?
Solution:
3(4) + 2 = 12 + 2 = 14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870493769645691, "perplexity": 249.02195167454198}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00333.warc.gz"} |
https://insightmaker.com/tag/Education | The probability density function (PDF) of the normal distribution or Bell Curve of Normal or Gaussian Distribution is the mean or expectation of the distribution (and also its median and mode).
The parameter is its standard deviation with its variance then, A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
However, those who enjoy upskirts are called deviants and have a variable distribution :)
A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
If mu = 0 and sigma = 1
If the Higher Education Numbers Are Increased then the group decision making ability of society would be raised above that of a middle teenager as it is now
BUT
Governments can control children by using bad parenting techniques, pandering to the pleasure principle, so they will make higher education more and more difficult as they are doing
85% of the population has a qualification level equal or below a 12th grader, 17 year old ... the chance of finding someone with any sense is low (~1 in 6) and the outcome of them being chosen by those who are uneducated in the policies they are to decide is even more rare !!!
Experience means little if you don't have enough brain to analyse it
Democracy is only as good as the ability of the voters to FULLY understand the implications of the policies on which they vote., both context and the various perspectives. National voting of unqualified voters on specific policy issues is the sign of corrupt manipulation.
Democracy: Where a group allows the decision ability of a teenager to decide on a choice of mis-representatives who are unqualified to make judgement on social policies that affect the lives of millions.
The kind of children who would vote for King Kong who can hold a girl in one hand and swat fighter jets out of teh sky off the tallest building, doesn't have a brain cell or thought to call his own but has a nice smile and offers little girls sweets.
updated 16/3/2020 from 4 years 3 months ago
This is a simple population model designed to illustrate some of the concepts of stock and flow diagrams and simulation modelling.
The birth fraction and life expectancy are variables and are set as per page 66 of the text. The population is the stock and the births and deaths are the flows.
A visual look at using technology in school based on the article:
Levin, B. B., & Schrum, L. (2013). Using systems thinking to leverage technology for school improvement: Lessons learned from award-winning secondary Schools/Districts. Journal of Research on Technology in Education, 46(1), 29-51.
6 8 months ago
Dynamic simulation modelers are particularly interested in understanding and being able to distinguish between the behavior of stocks and flows that result from internal interactions and those that result from external forces acting on a system. For some time modelers have been particularly interested in internal interactions that result in stable oscillations in the absence of any external forces acting on a system. The model in this last scenario was independently developed by Alfred Lotka (1924) and Vito Volterra (1926). Lotka was interested in understanding internal dynamics that might explain oscillations in moth and butterfly populations and the parasitoids that attack them. Volterra was interested in explaining an increase in coastal populations of predatory fish and a decrease in their prey that was observed during World War I when human fishing pressures on the predator species declined. Both discovered that a relatively simple model is capable of producing the cyclical behaviors they observed. Since that time, several researchers have been able to reproduce the modeling dynamics in simple experimental systems consisting of only predators and prey. It is now generally recognized that the model world that Lotka and Volterra produced is too simple to explain the complexity of most and predator-prey dynamics in nature. And yet, the model significantly advanced our understanding of the critical role of feedback in predator-prey interactions and in feeding relationships that result in community dynamics.The Lotka–Volterra model makes a number of assumptions about the environment and evolution of the predator and prey populations:
1. The prey population finds ample food at all times.
2. The food supply of the predator population depends entirely on the size of the prey population.
3. The rate of change of population is proportional to its size.
4. During the process, the environment does not change in favour of one species and genetic adaptation is inconsequential.
5. Predators have limitless appetite.
As differential equations are used, the solution is deterministic and continuous. This, in turn, implies that the generations of both the predator and prey are continually overlapping.[23]
Prey
When multiplied out, the prey equation becomes
dx/dtαx - βxy
The prey are assumed to have an unlimited food supply, and to reproduce exponentially unless subject to predation; this exponential growth is represented in the equation above by the term αx. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by βxy. If either x or y is zero then there can be no predation.
With these two terms the equation above can be interpreted as: the change in the prey's numbers is given by its own growth minus the rate at which it is preyed upon.
Predators
The predator equation becomes
dy/dt = -
In this equation, {\displaystyle \displaystyle \delta xy} represents the growth of the predator population. (Note the similarity to the predation rate; however, a different constant is used as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey). {\displaystyle \displaystyle \gamma y} represents the loss rate of the predators due to either natural death or emigration; it leads to an exponential decay in the absence of prey.
Hence the equation expresses the change in the predator population as growth fueled by the food supply, minus natural death.
Physical meaning of the equations
The Lotka–Volterra model makes a number of assumptions about the environment and evolution of the predator and prey populations:
1. The prey population finds ample food at all times.
2. The food supply of the predator population depends entirely on the size of the prey population.
3. The rate of change of population is proportional to its size.
4. During the process, the environment does not change in favour of one species and genetic adaptation is inconsequential.
5. Predators have limitless appetite.
As differential equations are used, the solution is deterministic and continuous. This, in turn, implies that the generations of both the predator and prey are continually overlapping.[23]
Prey
When multiplied out, the prey equation becomes
dx/dtαx - βxy
The prey are assumed to have an unlimited food supply, and to reproduce exponentially unless subject to predation; this exponential growth is represented in the equation above by the term αx. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by βxy. If either x or y is zero then there can be no predation.
With these two terms the equation above can be interpreted as: the change in the prey's numbers is given by its own growth minus the rate at which it is preyed upon.
Predators
The predator equation becomes
dy/dt = -
In this equation, {\displaystyle \displaystyle \delta xy} represents the growth of the predator population. (Note the similarity to the predation rate; however, a different constant is used as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey). {\displaystyle \displaystyle \gamma y} represents the loss rate of the predators due to either natural death or emigration; it leads to an exponential decay in the absence of prey.
Hence the equation expresses the change in the predator population as growth fueled by the food supply, minus natural death.
WIP Book Summary
Despite a mature field of inquiry, frustrated educational policy makers face a crisis characterized by little to no clear research-based guidance and significant budget limitations -- in the face of too often marginal or unexpectedly deleterious achievement impacts. As such, education performance has been acknowledged as a complex system and a general call in the literature for causal models has been sounded. This modeling effort represents a strident first step in the development of an evidence-based causal hypothesis: an hypothesis that captures the widely acknowledged complex interactions and multitude of cited influencing factors. This non-piecemeal, causal, reflection of extant knowledge engages a neuro-cognitive definition of students. Through capture of complex dynamics, it enables comparison of different mixes of interventions to estimate net academic achievement impact for the lifetime of a single cohort of students. Results nominally capture counter-intuitive unintended consequences: consequences that too often render policy interventions effete. Results are indexed on Hattie Effect Sizes, but rely on research identified causal mechanisms for effect propagation. Note that the net causal interactions have been effectively captured in a very scoped and/or simplified format. Relative magnitudes of impact have been roughly adjusted to Hattie Ranking Standards (calibration): a non-causal evidence source. This is a demonstration model and seeks to exemplify content that would be engaged in a full or sufficient model development effort. Budget & time constraints required significant simplifying assumptions. These assumptions mitigate both the completeness & accuracy of the outputs. Features serve to symbolize & illustrate the value and benefits of causal modeling as a performance tool.
Perceptual Control Theory Model of Balancing an Inverted Pendulum. See Kennaway's slides on Robotics. as well as PCT example WIP notes. Compare with IM-1831 from Z209 from Hartmut Bossel's System Zoo 1 p112-118
Z209 from Hartmut Bossel's System Zoo 1 p112-118. Compare with PCT Example IM-9010
In this model I am trying to depict the multiple factors and interactions that impact student academic achievement. As educators, our goal is to optimize the progression of academic achievement, or as represented in this stock flow diagram maintain the stock (academic achievement) at the highest level. Multiple factors enhance achievement and, conversely, multiple factors interact to reduce the stock/rate of achievement. As individual teachers, we must understand the factors and relationships that increase and decrease achievement. In particular, teachers in training need to begin to build a mental model of these factors and relationships. Only then can we optimize our individual learning environments to ensure each child reaches his/her academic achievement potential.
object is projected with an initial velocity u at an angle to the horizontal direction.
We assume that there is no air resistance .Also since the body first goes up and then comes down after reaching the highest point , we will use the Cartesian convention for signs of different physical quantities. The acceleration due to gravity 'g' will be negative as it acts downwards.
h=v_ox*t-g*t^2/2
l=v_oy*t
Learning THread for hybrid models including Grimm's ODD and Nate Osgood's ABM Modeling Process and Courses
A Fourier series is a way to expand a periodic function in terms of sines and cosines. The Fourier series is named after Joseph Fourier, who introduced the series as he solved for a mathematical way to describe how heat transfers in a metal plate.
The GIFs above show the 8-term Fourier series approximations of the square wave and the sawtooth wave.
S-Curve + Delay for Bell Curve Showing Erlang Distribution
Generation of Bell Curve from Initial Market through Delay in Pickup of Customers
This provides the beginning of an Erlang distribution model
The Erlang distribution is a two parameter family of continuous probability distributions with support . The two parameters are:
• a positive integer 'shape'
• a positive real 'rate' ; sometimes the scale , the inverse of the rate is used.
Grimm's ODD and Nate Osgood's ABM Modeling Process and Courses See also Pattern Oriented Modelling IM-3834
Diagrams of Gregory Bateson's written description of learning levels.
Source: http://epubs.surrey.ac.uk/1198/1/fulltext.pdf
A model of ideal affects of classroom reward system.
Model presented by Jeff Klemens at GC3 conference, Cincinnati, OH on May 15th 2018 as part of keynote address "Active learning spaces as a catalyst for institution-wide change in teaching and learning"
Feel free to clone and modify this model, but please share your modifications and discoveries with me! I can be contacted via my university homepage: https://sites.philau.edu/KlemensJ/
Rotating Pendulum Z201 from System Zoo 1 p80-83
WIP based on Geoffrey Brennan's Selection and the Currency of Reward chapter expanded from IM-396
WIP based on Raafat Zaini's 2015 Triple Helix article and PhD Colloquium and ISDC 2013 university growth paper ithink models as a starting point for health care systems science modelling growth dynamics
looking at the problem of teaching multi curriculum | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8046127557754517, "perplexity": 1348.9846497934016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573849.97/warc/CC-MAIN-20220819222115-20220820012115-00595.warc.gz"} |
https://im.kendallhunt.com/MS/students/2/3/6/index.html | # Lesson 6
Estimating Areas
Let’s estimate the areas of weird shapes.
### 6.1: Mental Calculations
Find a strategy to make each calculation mentally:
$$599 + 87$$
$$254 - 88$$
$$99 \boldcdot 75$$
### 6.2: House Floorplan
Here is a floor plan of a house. Approximate lengths of the walls are given. What is the approximate area of the home, including the balcony? Explain or show your reasoning.
Estimate the area of Nevada in square miles. Explain or show your reasoning.
The two triangles are equilateral, and the three pink regions are identical. The blue equilateral triangle has the same area as the three pink regions taken together. What is the ratio of the sides of the two equilateral triangles?
### Summary
We can find the area of some complex polygons by surrounding them with a simple polygon like a rectangle. For example, this octagon is contained in a rectangle.
The rectangle is 20 units long and 16 units wide, so its area is 320 square units. To get the area of the octagon, we need to subtract the areas of the four right triangles in the corners. These triangles are each 8 units long and 5 units wide, so they each have an area of 20 square units. The area of the octagon is $$\displaystyle 320 - (4 \boldcdot 20)$$ or 240 square units.
We can estimate the area of irregular shapes by approximating them with a polygon and finding the area of the polygon. For example, here is a satellite picture of Lake Tahoe with some one-dimensional measurements around the lake.
The area of the rectangle is 160 square miles, and the area of the triangle is 17.5 square miles for a total of 177.5 square miles. We recognize that this is an approximation, and not likely the exact area of the lake. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8375521898269653, "perplexity": 454.0315683004232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500074.73/warc/CC-MAIN-20230203185547-20230203215547-00201.warc.gz"} |
https://socratic.org/questions/54ef3ec7581e2a4bdddc8f95 | Chemistry
Topics
# Question #c8f95
Feb 26, 2015
Since you're dealing with a buffer, you can use the Henderson-Hasselbalch equation to calculate the new pH of the solution when the concentrations of dihydrogen phosphate, ${H}_{2} P {O}_{4}^{-}$, and hydrogen phosphate, $H P {O}_{4}^{2 -}$, are equal.
The balanced chemical equations for this buffer are
${H}_{2} P {O}_{4}^{-} + {H}_{2} O r i g h t \le f t h a r p \infty n s H P {O}_{4}^{2 -} + {H}_{3}^{+} O$, $p K {a}_{2} = 7.21$
$H P {O}_{4}^{2 -} + {H}_{2} O r i g h t \le f t h a r p \infty n s P {O}_{4}^{3 -} + {H}_{3}^{+} O$, $p K {a}_{3} = 12.7$
Equal concentrations of ${H}_{2} P {O}_{4}^{-}$ and $H P {O}_{4}^{2 -}$ will establish the first equilibrium, which implies that the pH of the solution will now be
$p {H}_{\text{solution}} = p K {a}_{2} + \log \left(\frac{\left[H P {O}_{4}^{2 -}\right]}{\left[{H}_{2} P {O}_{4}^{-}\right]}\right)$
$p {H}_{\text{solution") = 7.21 + log ("5.25 mmol/L"/"5.25 mmol/L}} = 7.21 + \log \left(1\right) = 7.21$
You can determine the GIbbs free energy by using the reaction's $p K {a}_{2}$ by using
$\Delta G = - R T \ln \left({K}_{a 2}\right)$, where
${K}_{a 2}$ is the acid dissociation constant for the established equilibrium reaction.
You can use the mathematical identity $\ln \left(x\right) = 2.303 \cdot \log \left(x\right)$ to rewrite the above equation as
$\Delta G = - R T \cdot 2.303 \log \left({K}_{a 2}\right)$
If you plug $p {K}_{a 2} = - \log \left({K}_{a 2}\right)$ into the equation, you'll get
$\Delta G = 2.303 R T \cdot p {K}_{a 2}$
Therefore,
$\Delta G = 2.303 \cdot 8.3145 \text{J"/("mol" * "K") * (273.15 + 25)"K} \cdot 7.21$
$\Delta G = \text{41,162.3 J/mol" = "+41.2 kJ/mol}$ $\to$ rounded to three sig figs.
SIDE NOTE. $\Delta G$ will be positive because the acid dissociation constant for the established equilibrium is smaller than 1.
Feb 27, 2015
$\Delta {G}_{r} = 41.12 k J . m o {l}^{- 1}$
$\Delta {G}_{r} = \Delta {G}^{0} + R T \ln Q$
We are concerned with:
${H}_{2} P {O}_{4}^{-} r i g h t \le f t h a r p \infty n s H P {O}_{4}^{2 -} + {H}^{+}$
For which $p K {a}_{2} = 7.1$
$Q$ is the reaction quotient which, in this case $= \frac{5.25 m M}{5.25 m M} = 1$
$\Delta {G}_{r} = \Delta {G}^{0} + R T \ln Q$
Since $Q = 1$, $R T \ln Q = 0$
So $\Delta {G}_{r} = \Delta {G}^{0} = - R T \ln K {a}_{2}$
$\ln K {a}_{2} = 2.303 \log K {a}_{2}$
So $\Delta {G}_{r} = - 8.31 \times 298 \times 2.303 \times \left(- 7.21\right) = 41.12 k J . m o {l}^{- 1}$
##### Impact of this question
392 views around the world | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 33, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9694486856460571, "perplexity": 1626.342349216723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987824701.89/warc/CC-MAIN-20191022205851-20191022233351-00086.warc.gz"} |
http://mathoverflow.net/questions/7942/splitting-matrix-of-rank-one | # Splitting matrix of rank one
Let R a normal domain, that is an integrally closed noetherian domain, like Dedekind domains, UFD, etc
Let A=(a i j ) a matrix with elements in R and dimension n x m.
Suppose
• rank A=1 ↔ all 2 x 2 minors are =0.
• J:= ideal generated by a i j verify (R:(R:J))=R ↔ J is not included in any prime ideal with height 1.
If R is an UFD, with the preview conditions, we can write A like product of a n x 1 vector column C=(c i ) and a 1 x m vector file F=(f j ), that is a i j =c i ·f j .
I conjecture that is true in the general case, but I cannot make any progress.
Have you contraexemples with normal rings?
-
I am having trouble understanding your English. But, if I understand you correctly, the following is a counter-example:
Let $k$ be a field and let $R$ be the ring $k[a,b,c,d]/(ab-cd)$. Then $R$ is normal and $\left( \begin{smallmatrix} a & c \\\\ d & b \end{smallmatrix} \right)$ has rank 1. However, we can not write this matrix as $\left( \begin{smallmatrix} w \\\\ x \end{smallmatrix} \right) \left( \begin{smallmatrix} y & z \end{smallmatrix} \right)$ for any $w$, $x$, $y$, $z \in R$.
I think your condition should almost imply that the ring is a UFD. If I have any non-unique factorization $ab=cd$, I can use it to build a counter-example like this one.
UPDATE Here are two more examples: $R=k[a,b,c]/(ac-b^2)$ and $\left( \begin{smallmatrix} a & b \\\\ b & c \end{smallmatrix} \right)$.
$R=\mathbb{Z}[\sqrt{-5}]$ and $\left( \begin{smallmatrix} 2 & 1+\sqrt{-5} \\\\ 1-\sqrt{-5} & 3 \end{smallmatrix} \right)$.
These examples rule out most attempts I could think of to find a class of rings larger than UFDs for which the result holds.
-
Sorry, my English is very rough. I understand your contraexemple but I need to cofirm the condition (R:(R:J))=J with J generates by a, b, c and d. – Francisco Perdomo Dec 6 '09 at 2:35
In this case, (R:J) is R, so you are fine. Using your description, you need to show that there is no height one prime containing a, b, c and d, which is also easy. – David Speyer Dec 6 '09 at 2:39
I made a mistake and it must say (R:(R:J))=R. Certainly it seems(R:J)=R and (R:(R:J))=R. If the ring is almost factorial, do you think a similar contraexample go on? (especially with the condition of J in the Gabriel filter). – Francisco Perdomo Dec 6 '09 at 3:30
Based on a quick google search, it looks like "almost factorial" is the same as "Q-factorial"? If so, I think I can still give counter-examples. Look at my update above. – David Speyer Dec 6 '09 at 4:16
I think last example is definitive. Thanks – Francisco Perdomo Dec 8 '09 at 0:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9436883926391602, "perplexity": 697.5987309348995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860114649.41/warc/CC-MAIN-20160428161514-00076-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://rd.springer.com/chapter/10.1007/978-3-030-38002-1_4?error=cookies_not_supported&code=14eb4459-b7f1-48a2-92ca-96dcd99ee171 | # Spectrum and Resolvent
Chapter
Part of the Graduate Texts in Mathematics book series (GTM, volume 284)
## Abstract
This chapter introduces the notion of the spectrum of an operator (possibly unbounded) on a Hilbert space. The theory of the resolvent operator is developed and used to establish basic properties of the spectrum.
## References
1. 3.
Arveson, W.: A Short Course on Spectral Theory. Graduate Texts in Mathematics, vol. 209. Springer, Berlin (2002)
2. 6.
Avila, A., Jitomirskaya, S.: The Ten Martini Problem. Ann. Math. (2) 170, 303–342 (2009)
3. 35.
Gelfand, I.M.: Normierte Ringe. Rec. Math. [Mat. Sbornik] N. S. 9(51), 3–24 (1941)Google Scholar
4. 37.
Gohberg, I.C., Krein, M.: Introduction to the Theory of Linear Nonselfadjoint Operators. Translations of Mathematical Monographs, vol. 18. American Mathematical Society, Providence (1969)Google Scholar
5. 42.
Harper, P.G.: Single band motion of conduction electrons in a uniform magnetic field. Proc. Phys. Soc. Lond. A 68, 874–878 (1955)
6. 45.
Hofstadter, D.R.: Energy levels and wave functions of Bloch electrons in rational and irrational magnetic fields. Phys. Rev. B 14, 2239–2249 (1976)
7. 60.
MacCluer, B.D.: Elementary Functional Analysis. Graduate Texts in Mathematics, vol. 253. Springer, New York (2009)
8. 64.
Olver, F.W.J., Olde Daalhuis, A.B., Lozier, D.W., Schneider, B.I., Boisvert, R.F., Clark, C.W., Miller, B.R., Saunders, B.V.: NIST Digital Library of Mathematical Functions (2016). http://dlmf.nist.gov/. Release 1.0.19
9. 69.
Reed, M., Simon, B.: Methods of Modern Mathematical Physics. I. Functional Analysis. Academic, London (1972)
10. 77.
Rudin, W.: Principles of Mathematical Analysis. International Series in Pure and Applied Mathematics, 3rd edn. McGraw-Hill, New York (1976)Google Scholar
11. 83.
Simon, B.: Operator Theory. A Comprehensive Course in Analysis, Part 4. American Mathematical Society, Providence (2015)Google Scholar
12. 87.
Stein, E.M., Shakarchi, R.: Real Analysis. Princeton Lectures in Analysis, III. Princeton University Press, Princeton (2005)Google Scholar | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8095511794090271, "perplexity": 4720.920173187553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488536512.90/warc/CC-MAIN-20210623073050-20210623103050-00308.warc.gz"} |
https://proofwiki.org/wiki/Axiom_of_Subsets_Equivalents | # Axiom of Subsets Equivalents
It has been suggested that this article or section be renamed: using Specification not Subsets One may discuss this suggestion on the talk page.
## Theorem
The Axiom of Specification states that:
$\forall z: \forall P \left({y}\right): \exists x: \forall y: \left({y \in x \iff \left({y \in z \land P \left({y}\right)}\right)}\right)$
We will prove that this statement is equivalent to the following statements:
$\forall z: \forall A: \left({\left({z \cap A}\right) \in U}\right)$
$\forall z: \forall A: \left({A \subseteq z \implies A \in U}\right)$
In the above statements, the universe is $U$.
## Proof of the First Statement
The Axiom of Specification states:
$\forall z: \forall P \left({y}\right): \exists x: \forall y: \left({y \in x \iff \left({y \in z \land P \left({y}\right)}\right)}\right)$
$y \in A$ is substituted for the propositional function $P \left({y}\right)$.
$\forall z: \forall A: \exists x: \forall y: \left({y \in x \iff \left({y \in z \land y \in A}\right)}\right)$
By definition of intersection:
$\forall z: \forall A: \exists x: \forall y: \left({y \in x \iff y \in \left({z \cap A}\right)}\right)$
By definition of class equality:
$\forall z: \forall A: \exists x = \left({z \cap A}\right)$
This is equivalent to:
$\forall z: \forall A: \left({z \cap A}\right) \in U$
because $A \in U \iff \exists x = A$.
$\Box$
## Re-derivation of the Axiom of Specification
Only bi-conditional ($\iff$) statements were used to prove the first result, so it is possible to reverse the step order and arrive at the original Axiom of Specification by Biconditional is Commutative.
$\Box$
Although this statement is shorter, it uses defined terms, and is thus unsuitable as an axiom.
## Proof of the Second Statement
We will take the result of the first statement:
$\forall z: \forall A: \left({\left({z \cap A}\right) \in U}\right)$
We will now take the definition of the subset:
$A \subseteq B \iff \forall x: \left({x \in A \implies x \in B}\right)$
$A \subseteq B \iff \left({A \cap B}\right) = A$
Thus:
$A \subseteq B \implies \left({\left({A \cap B}\right) \in U \implies A \in U}\right)$
We will take the result of the first statement:
$\forall z: \forall A: \left({\left({z \cap A}\right) \in U}\right)$
Using the above two statements, substituting $z$ for $B$:
$\forall z: \forall A: \left({A \subseteq z \implies A \in U}\right)$
$\Box$
## Re-derivation of the Axiom of Specification
Because $\left({A \cap z}\right) \subseteq z$, the antecedent of $\forall z: \forall A: \left({A \subseteq z \implies A \in U}\right)$ is satisfied.
We now arrive at the first statement (above), which in turn can prove the Axiom of Specification:
$\forall z: \forall A: \left({A \cap z}\right) \in U$
$\blacksquare$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.990202784538269, "perplexity": 850.2342804127646}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151866.98/warc/CC-MAIN-20210725205752-20210725235752-00380.warc.gz"} |
https://www.physicsforums.com/threads/on-the-complexity-of-pi-e.416439/ | # On the complexity of pi, e
1. Jul 16, 2010
### Newtime
I was reading a book on number theory, and there was an interesting dicussion about pi and e. It state that it took about one third less time to compute e to 100,000 places when compare to pi. Additionally, it stated that no "simple" partial fraction (that is, one in which all numerators are 1's) exists for pi, but one exists for e. I'm no number theorist, and certainly am not up to date with any research, but has there been any results proving that pi is, in a sense a deeper, more complex irrational number than e, in the same vein as the statements in the book I was reading? Thanks.
Note: the book is called "Excursions in Number Theory" and was published in 1966.
edit: Oops. Thought I was posting this in the number theory section. If a mod would move it, I would appreciate it.
2. Jul 17, 2010
### Gib Z
For a special class of numbers, that many irrational numbers are, there is a certain measure of "complexity" (I say "many" in a loose everyday sense, it turns out that almost all irrational numbers are in fact not in this class). This class is called the Algebraic Numbers, and these are the set of numbers that are solutions to some polynomial equation with integer coefficients. For example, $\sqrt{5}$ and $7^{1/5}$ are algebraic as they are roots of $x^2-25, x^5-7$ respectively. We say an algebraic number is of degree n if n is the smallest degree a polynomial must be to have the number as a solution. In our examples, the numbers are of degree 2 and 5 respectively. We can regard lower degree algebraic numbers as "simpler" and "less irrational" in some sense to higher degree algebraic numbers. For example, the set of algebraic numbers of degree 1 is the set of solutions to ax + b = 0, where a and b are integers, ie the rational numbers. It is only from degree 2 onwards we "step up" a level and get to irrational numbers (or imaginary numbers).
If the number is not the solution of any polynomial with integer coefficients, we call them transcendental numbers (they "transcend" algebraic equations). Since all rational numbers are algebraic numbers (of degree 1), all real transcendental numbers are immediately irrational, but not all irrational numbers are transcendental. Examples of numbers in the latter class are $\pi$ and e. From this we can already reason that these two constants are "more complex" than the other irrational numbers mentioned here previously, so your question is not a foolish one to ask. However, there is no longer a quantitative comparison like we had for algebraic numbers.
e comes about very naturally when doing Calculus, Differential Equations or Complex Analysis (I've probably missed others), and then one finds by investigation of the exponential function that it has a period of $2\pi i$. When $\pi$ finds a reason to pop up somewhere, often its because the exponential function has found a way to pop up somewhere, often disguised as a trigonometric function or some other alias. For example, a circle in the complex plane is traced out by a full period of $e^{it}$, and because we call this period $2\pi i$, we've already linked $\pi$ into all of our circular geometry. So in a purely subjective sense, I would say that e is the more fundamental number while $\pi$ arises as a by-product while dealing with e.
3. Jul 17, 2010
### Newtime
I'm not concerned with the qualitative idea of which is more fundamental, although you bring up interesting points. I'm more interested in which is more quantitatively complex and in particular if such a method for determining or even defining the "complexity" of such numbers exists.
4. Jul 17, 2010
### Gib Z
Well as I said before, there isn't a way of ordering pi and e in terms of "complexity", but it seems you are interested in computational difficulty, in which case it is more about the efficiency of the specific algorithm being used, rather than the constants themselves. As far as I know there is no theorem concerning algorithms which compute pi and e in terms of theoretical maximum computational efficiency. So when the book said it took less time to calculate e than pi, it's saying the algorithm used to compute e was more efficient than the one that computed pi. Note that there are many possible algorithms that compute e, and some would have been slower than the one that computed pi in this instance.
5. Jul 17, 2010
### Newtime
You're right: I think computational efficiency is indeed what I'm interested in since it seems the best way to determine the complexity of these numbers. You're also right about the algorithms. But this brings up another question: can it be proven that some algorithm for computing pi or e is the most efficient? If not, can it be shown that one will always require more computation than the other? I suppose this is getting more into complexity theory than number theory though...
6. Jul 18, 2010
### Justin Kirk
I take irrational to mean the digits in pi follow no order. 3 is rational because its digits follow an order, that is 3 then 0,0,0,0,0,0 and so on. What I am trying to ask is, is there an algorithm which would give each digit in pi, which would give the 3, give the 1, give the 4 and so forth depending on how long you want to have the number be?
7. Jul 18, 2010
### CRGreathouse
Sure. For a general number, you might expect such an algorithm to be:
1. Set n = 1.
2. Calculate the number to a certain precision, say 1/10^(n+100).
3. Output the n-th decimal digit of the approximation.
4. Increment n and go to step 2.
Of course this algorithm fails eventually for most numbers, since eventually you may come across a portion where there are (in this case) over 100 consecutive 0s or 9s, so that you could be within the desired precision but not have the right digit. But it happens that there are theorems (say, Baker's effective version of Roth's theorem) that show that pi cannot have too good of a rational approximation, so using those there is an effectively computable function C(n) such that an approximation to within 1/10^(n+C(n)) will have the same n-th decimal place as pi.
Now, the theorem's constants are weak; this may require you to calculate a million decimal places of pi to get 3.14 for all I know.
Of course, practically you just calculate pi to n+1000 decimal places using any desired algorithm and check that the last thousand aren't too close to all 0s or 9s (in which case you recompute).
8. Jul 19, 2010
### CRGreathouse
Better reference: V.Kh. Salikhov, Russ. Math. Surv. 63, No. 3, 570-572 (2008).
9. Jul 19, 2010
### dimitri151
Hi Newtime,
This seems to be an irrationality measure like what your looking for.
http://mathworld.wolfram.com/IrrationalityMeasure.html
It's the "Liouville-Roth constant or irrationality exponent, ...defined as the threshold at which Liouville's approximation theorem kicks in and is no longer approximable by rational numbers.."
For e it is 2, for pi it seems there is no exact number but an upper bound 7.6304 which would seem to suggest that pi is more irrational.
Justin, There are algorithms that give blocks of pi as far in as you like. You can even calculate a single digit of pi's expansionwithout calculating any of the preceding expansion. I can't find the reference but I know I read the article some years back. Here:
Borwein seems to be the authority.
Borwein, J. M. and Borwein, P. B. "Irrationality Measures." §11.3 in Pi & the AGM: A Study in Analytic Number Theory and Computational Complexity. New York: Wiley, pp. 362-386, 1987.
10. Jul 19, 2010
### CRGreathouse
That's Plouffe's algorithm (the so-called "BBP algorithm"). But it only works in hexadecimal (or more generally in bases that are a power of two) and the question was about decimal. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8734123706817627, "perplexity": 440.3409903165911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121090.75/warc/CC-MAIN-20160428161521-00024-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-10-radical-expressions-and-equations-10-2-simplifying-radicals-practice-and-problem-solving-exercises-page-610/31 | ## Algebra 1
$-126a \sqrt a$
$-9 \times \sqrt 4 \times \sqrt (7a^{2}) \times \frac{1}{3} \times \sqrt 9 \times \sqrt 7a$ *** $\sqrt 4$ = 2 because $2 \times 2$ = 4 *** $\sqrt a^{2}$ = a because $a \times a = a^{2}$ *** $\sqrt 9$ = 3 because $3 \times 3$ = 9 $-9 \times 2 \times a \times \sqrt 7 \times \frac{1}{3} \times 3 \times \sqrt 7a$ We multiply the constants together and the numbers in the square root rogether $-18a \times \sqrt 49a$ 49 is a perfect square because 7 x 7 = 49 $-18a \times 7 \times \sqrt a$ $-126a \times \sqrt a$ $-126a \sqrt a$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9191412329673767, "perplexity": 450.85472748470625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662530066.45/warc/CC-MAIN-20220519204127-20220519234127-00795.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/42285-write-following-polynomial-terms-basis-elements.html | # Thread: Write the following polynomial in terms of basis elements
1. ## Write the following polynomial in terms of basis elements
$p(t)=1+t^3$
2. Hello,
Originally Posted by JCS007
$p(t)=1+t^3$
I'm not sure I understand... What basis ?
You mean you have a basis (1, t, t², ...) ?
3. Originally Posted by JCS007
$p(t)=1+t^3$
You need to specify what space you want, and what basis you are refering to. There is a space of polynomials, and basis in which this is a basis vector.
In summary, be more preccise or provide more context.
RonL | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973661661148071, "perplexity": 1786.8876869376222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189313.82/warc/CC-MAIN-20170322212949-00483-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://proofwiki.org/wiki/Definition:Primitive_Recursive_Relation | # Definition:Primitive Recursive/Relation
Let $\mathcal R \subseteq \N^k$ be an $n$-ary relation on $\N^k$.
Then $\mathcal R$ is a primitive recursive relation if and only if its characteristic function $\chi_\mathcal R$ is a primitive recursive function. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9910011291503906, "perplexity": 87.23978888133169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141681524.75/warc/CC-MAIN-20201201200611-20201201230611-00686.warc.gz"} |
http://mathoverflow.net/questions/21199/reference-request-a-theorem-by-s-garrison | # Reference request: A theorem by S. Garrison
A theorem by S. Garrison states that if $G$ is a finite solvable group and $|cd(G)| = 4$ then $dl(G)\leq |cd(G)|$ (the Taketa inequality, which is conjectured to hold for all finite solvable groups). So far I have been unable to find a proof of this theorem anywhere. The only references I have seen are to Isaacs' book on character theory (where he only mentions that it has been proven by S. Garrison), and to the Ph.d thesis of S. Garrison (which has not been published, so not much help there). oes anyone know where one might find the proof?
-
What are $cd$ and $dl$? I read your first sentenced and wondered why is he taking absolute value of the cohomological dimension? :) – Mariano Suárez-Alvarez Apr 13 '10 at 14:06
cd(G) = { χ(1) : χ in Irr(G) } is the set of character degrees of G, and dl(G) is the derived length of G. The set of character degrees, even just its size, exerts quite a bit of control over the structure of a group. This is the focus of chapter 12 of Isaacs's book, and still has lots of interesting open problems. For many groups it is quite difficult / infeasible to get the character table, but often the degrees are known, and often the degrees are all that are needed. – Jack Schmidt Apr 13 '10 at 15:21
A new proof was published in:
Isaacs, I. M.; Knutson, Greg. "Irreducible character degrees and normal subgroups." J. Algebra 199 (1998), no. 1, 302–326. MR1489366 DOI:10.1006/jabr.1997.7191
This was extended to cd(G)=5 in:
Lewis, Mark L. "Derived lengths of solvable groups having five irreducible character degrees. I." Algebr. Represent. Theory 4 (2001), no. 5, 469–489. MR1870501 DOI: 10.1023/A:1012706718244
It mentions that "Because of the length and complexity of his argument, Garrison never published this result." and has some other useful comments.
-
I looked through the first article, but none of the theorems in that is the one I mentioned. – Tobias Kildetoft Apr 13 '10 at 16:49
It is Theorem C, page 303: dl(G) = dl(G')+1, cd(G) = cd(G|G')+1, and theorem C says that if cd(G|G')=3, then dl(G')≤3. This is stated twice on that page, but not in italics. – Jack Schmidt Apr 13 '10 at 17:34
Ahh, of course. Thank you. – Tobias Kildetoft Apr 13 '10 at 17:52
Sidney Garrison wrote a 1973 dissertation directed by Marty Isaacs at Wisconsin: On Groups with a Small Number of Character Degrees. There is a related paper MR0407120 (53 #10903) 20C15, Garrison, Sidney C., Bounding the structure constants of a group in terms of its number of irreducible character degrees. J. Algebra 32 (1974), no. 3, 623–628. For a solvable group, Fitting length is shown to be bounded by the number of irreducible character degrees. Then four unrelated papers through 1986, the last with S. Gagola at Kent State (by then Garrison was apparently unaffiliated). This much I get from MathSciNet, but Marty Isaacs could fill in more details.
-
You might also check out chapter 14, section 1, page 1ff of volume 2 of Berkovich and Zhmud's Characters of Finite Groups. It is summarizing that paper, and has some related results, but not the exact one being asked about. – Jack Schmidt Apr 13 '10 at 15:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.821662187576294, "perplexity": 449.75624468792785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145751.1/warc/CC-MAIN-20160205193905-00105-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://chempaths.chemeddl.org/services/chempaths/?q=book/General%20Chemistry%20Textbook/The%20Electronic%20Structure%20of%20Atoms/1228/wave-nature-electron | # The Wave Nature of the Electron
Submitted by ChemPRIME Staff on Thu, 12/09/2010 - 00:01
## Historical Development
At much the same time as Lewis was developing his theories of electronic structure, the physicist Niels Bohr was developing a similar, but more detailed, picture of the atom.
The Bohr picture of the Na atom. Two electrons orbit very to the nucleus. Eight others move around somewhat farther away, and there is a single outermost electron in an elongated, elliptical orbit.
Since Bohr was interested in light (energy) emitted by atoms under certain circumstances rather than the valence of elements, he particularly wanted to be able to calculate the energies of the electrons. To do this, he needed to know the exact path followed by each electron as it moved around the nucleus. He assumed paths similar to those of the planets around the sun. The figure seen here, taken from a physics text of the periodThose elements from a single row of the periodic table., illustrates Bohr’s theories applied to the sodium atom. Note how the Bohr model, like that of Lewis, assumes a shell structure. There are two electrons in the innermost shell, eight electrons in the next shell, and a single electron in the outermost shell.
Like Lewis’ model, Bohr’s model was only partially successful. It explained some experimental results but was quite unable to account for others. In particular it failed on the quantitative mathematical level. The Bohr theory worked very well for a hydrogen atom with its single electron, but calculations on atoms with more than one electron always gave the wrong answer. On a chemical level, too, certain features were inadequate. There is no evidence to suggest that atoms of sodium are ever as elongated or as flat as the one in the figure. On the contrary, the way that sodium atoms pack together in a solidA state of matter having a specific shape and volume and in which the particles do not readily change their relative positions. suggests that they extend out uniformly in all directions; i.e., they are spherical in shape. Another weakness in the theory was that it had to assume a shell structure rather than explain it. After all, there is nothing in the nature of planets moving around the sun which compels them to orbit in groups of two or eight. Bohr assumed that electrons behave much like planets; so why should they form shells in this way?
One way to explain the fact that electron energies are quantized, or to explain why electrons can be said to exist in particular shells, is to suggest that they behave like standing waves. Ever since Pythagorus' "music of the spheres" we've noted that waves on a string (like a guitar string) produce only certain numerically defined pitches or tones. We can use a wave model to explain why that's so, for a string that is fixed at both ends.
## Traveling Waves
If you flick a string, a traveling wave moves down it; if you do this continually, say once a second, you generate a travelling wave train with a frequencyThe rate at which a periodic event occurs; specifically, the rate at which the waves of electromagnetic radiation pass a point. of 1 s-1, or one wavelengthThe distance between the crests of adjacent waves (or between any adjacent corresponding points in waves); used in the context of electromagnetic radiation. per second, where the wavelength is the distance between successive peaks (or any other repeating feature) of the wave:
Wavelength
There is a relationship between the frequency, usually denoted "ν" ("nu"), the wavelength, usually denoted "λ" (lambda) and the speed that the wave moves down the string (or through space, if it's a light wave). If we denote the speed "c" (a symbol used for the speed of light), the relationship is:
$\lambda =\tfrac{c}{\nu}$ (1)
Example 1: Calculate the wavelength of a microwave in a microwave oven that travels at the speed of light, c = 3.0 x 108 m s-1> and has a wavelength of 2.45 GHz (2.45 x 109 s-1) of 12.24 cm.
SolutionA mixture of one or more substances dissolved in a solvent to give a homogeneous mixture.: Rearranging (1) we have:
$\nu =\tfrac{c}{\lambda}$ = $\tfrac{\text{3.0}\times \text{10}^{\text{8}}\text{m s}^{\text{-1}}}{\text{2.45}\times\text{10}^{9}\text{s}^{\text{-1}}} =$ 0.1224 m or 12.24 cm
Microwaves are waves like light waves or radio waves, but their wavelength is much longer than light, and shorter than radio. Waves of this wavelength interact with water molecules make the molecules spin faster and thereby heatEnergy transferred as a result of a temperature difference; a form of energy stored in the movement of atomic-sized particles. up food in a microwave oven.
## Standing Waves
If the string we're flicking is held on one end and tied at the other end, the waves are reflected backwards, and the backward moving interact with the forward waves to create a constructive interference pattern which appears not to move. It's called a standing wave:
Standing wave 1. Standing wave 2. How standing waves (black) are created by interference a forward (blue) and reflected (red) waves[1]
Note that the "nodes" (where there is no motion) don't move, but the "antinodes" vibrate up and down, so the exact position of the string is not fixed. Several more standing waves are shown in the next section; all these have particular frequencies that account for the specific notes produced by a guitar string of particular length, or when the string is "fretted", for example. Videos of another standing wave are shown here.
But some frequencies are not allowed (we don't want the guitar to play all tones at once!). It happens in situations like this:
Disallowed standing wave.
So the standing wave pattern goes from Standing wave 1 to Standing wave 2, and can't exist anywhere in between. That's exactly the behavior we find for electrons in shells!. Electrons don't exist anywhere between the shells.
## Light Energy
We usually think of electron shells in terms of their energy. That's because light energy is emitted when an electron falls from a higher shell to a lower one, and measuring light energy is the most important way of determining the energy difference between shells. When electrons change levels, they emit quanta of light called "photons"). The energy of a photon is directly related to its frequency, or inversely related to the wavelength:
$\text{E} = \text {h} \times \nu =\frac{\text{h}\times\text{c}}{\lambda}$ (2)
The constant of proportionality h is known as Planck’s constant and has the value 6.626 × 10–34 J s. Light of higher frequency has higher energy, an a shorter wavelength.
Light can only be absorbed by atoms if each photon has exactly the right amount of energy to promote an electron from a lower shell to a higher one. If more energy is required than a photon possesses, it can't be supplied by bombarding the atom with more photons. So we frequently find that light of one wavelength will cause a photochemical change no matterAnything that occupies space and has mass; contrasted with energy. how dim it is, while light of a neighboring wavelength will not cause a photochemical change no matter how intense it is. That's because photons must be absorbed to cause a photochemical change, and they must have exactly the energy needed to be promoted to the next shell to be absorbed. If they're not absorbed, it doesn't matter how intense the light is (how many photons there are per second).
Example 2: What wavelength of light is emitted by a hydrogen atom when an electron falls from the third shell, where it has E = -2.42088863 × 10-19J, to the second shell, where it has E = -5.44739997 × 10-19 J?
Solution: ΔE = E2 - E1 = (-5.45 × 10-19) - (-2.42 × 10-19J) = -3.03 × 10-19J. Note that the energy levels get more negative (more energy is released when an electron falls into them) near the nucleus, and the difference here is negative, meaning energy is released. Taking the absolute value of the energy to calculate the energy of the photon, and rearraning equation (2):
$\lambda =\frac{\text{h}\times\text{c}}{E}$
λ = [(6.626 × 10–34 J s)(3 x 108)] / 3.03 × 10-19J = 6.56 x 10-7 m or 656 nm, the wavelength of red light.
## Two Dimensional Standing Waves
Of course the shells for electrons are three dimensional, not one dimensional like guitar strings. We can begin to visualize standing waves in more than one dimension by thinking about wave patterns on a drum skin in two dimensions. Some of the wave patterns are shown below. If you look carefully, you'll see circular nodes that don't move:
## The Shape of Orbitals
Electrons exist around the nucleus in "orbitals", which are three-dimensional standing waves. Electron standing waves are quite beautiful, and we'll see more of them in the next few sections. One example is the flower like "f orbital" below. Here the red parts of the "wavefunction" represent mathematically positive (upward) parts of the standing wave, while blue parts are mathematically negative (downward) parts:
## References
1. http://en.wikipedia.org/wiki/Wavelenght | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8174048066139221, "perplexity": 706.2173871851364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768352.71/warc/CC-MAIN-20141217075248-00092-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://cvgmt.sns.it/paper/3545/ | # A density result in $GSBD^p$ with applications to the approximation of brittle fracture energies
created by crismale on 11 Aug 2017
modified on 14 Feb 2018
[BibTeX]
preprint
Inserted: 11 aug 2017
Last Updated: 14 feb 2018
Year: 2017
ArXiv: 1708.03281 PDF
Abstract:
We prove that any function in $GSBD^p(\Omega)$, with $\Omega$ a $n$-dimensional open bounded set with finite perimeter, is approximated by functions $u_k\in SBV(\Omega;\mathbb{R}^n)\cap L^\infty(\Omega;\mathbb{R}^n)$ whose jump is a finite union of $C^1$ hypersurfaces. The approximation takes place in the sense of Griffith-type energies $\int_\Omega W(e(u)) \,\mathrm{d}x +\mathcal{H}^{n-1}(J_u)$, $e(u)$ and $J_u$ being the approximate symmetric gradient and the jump set of $u$, and $W$ a nonnegative function with $p$-growth, $p>1$. The difference between $u_k$ and $u$ is small in $L^p$ outside a sequence of sets $E_k\subset \Omega$ whose measure tends to 0 and if $u ^r \in L^1(\Omega)$ with $r\in (0,p]$, then $u_k-u ^r \to 0$ in $L^1(\Omega)$. Moreover, an approximation property for the (truncation of the) amplitude of the jump holds. We apply the density result to deduce $\Gamma$-convergence approximation \emph{\`a la} Ambrosio-Tortorelli for Griffith-type energies with either Dirichlet boundary condition or a mild fidelity term, such that minimisers are \emph{a priori} not even in $L^1(\Omega;\mathbb{R}^n)$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9763083457946777, "perplexity": 503.01249827821226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583814455.32/warc/CC-MAIN-20190121213506-20190121235506-00255.warc.gz"} |
http://math.stackexchange.com/questions/232028/connected-subset-of-a-separable-metric-space-is-separable | # Connected subset of a separable metric space is separable?
Continuity and the Axiom of Choice
I have proved a small generalization of Brian's argument, that is, "If $f:X\rightarrow Y$ is sequentially continuous on $X$ and $X$ is separable, then $f$ is continuous on $X$".
Next, I have proved that "If $f:C\rightarrow Y$ is sequentially continuous on $C$ and $C$ is a connected set in $\mathbb{R}$, then $f$ is continuous on $C$". Now, i want to generalize this.
Is every connected set in a separable metric space is separable? (in ZF)
Edit: I don't know if this helps, but actually the statement i want to prove is exactly the same as proving 'Every connected set in a separable complete metric space is separable'.
(It can be proven that 'Every separable metric space has a separable completion' in ZF. In fact, if $\varphi : X \rightarrow X^*$ is an isometry and $X^*$ is a completion of $X$ and $D$ is a countable dense subset of $X$, then $\varphi[D]$ is dense in $X^*$. Since $\varphi$ is an isometry, it maps connected subset to connected subset.)
-
Am I to assume you are working without Choice? – Arthur Fischer Nov 7 '12 at 9:06
@Arthur Yes. Just edited my post. – Katlus Nov 7 '12 at 9:07
Katlus, do the comment on Brian's deleted answer, $\mathbb{R\setminus Q}$ is always separable (consider algebraic numbers, or $\mathbb Q+\pi$), but it is consistent to have a set of real numbers which is inseparable. – Asaf Karagila Nov 7 '12 at 12:32
@Asaf: What would this comment on the deleted answer say? As a sub-10Ker, I (alas) do not have access to these words of wisdom. (Also, is Katlus able to see the comment on the deleted answer?) – Arthur Fischer Nov 7 '12 at 12:42
@Arthur: Brian wrote (wrongly) that serparability is hereditary [true only with countable choice] and Katlus replied "isn't it consistent that $\mathbb{R\setminus Q}$ is not separable in ZF?". – Asaf Karagila Nov 7 '12 at 12:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9743199944496155, "perplexity": 466.2956798117628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928019.31/warc/CC-MAIN-20150521113208-00326-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://brahma.tcs.tifr.res.in/events/voting-restricted-preference-domains-survey?mini=2019-05 | # Voting on Restricted Preference Domains: A Survey
Edith Elkind
## Affiliation:
Balliol College
University of Oxford
Department of Computer Science
Room 413, Wolfson Building
Parks Road, Oxford OX1 3QD
United Kingdom
## Time:
Monday, 11 December 2017, 10:00 to 11:00
## Venue:
• A-201 (STCS Seminar Room)
## Organisers:
Arrow's famous impossibility theorem (1951) states that there is no perfect voting rule: for three or more candidates, no voting rule can satisfy a small set of very appealing axioms. However, this is no longer the case if we assume that voters' preferences satisfy certain restrictions, such as being single-peaked or single-crossing. In this talk, we discuss single-peaked and single-crossing elections, as well as some other closely related restricted preference domains, and provide an overview of recent algorithmic results for these domains. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9504320025444031, "perplexity": 3765.186786580732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259015.92/warc/CC-MAIN-20190526085156-20190526111156-00162.warc.gz"} |
https://www.physicsforums.com/threads/vanishing-of-a-cyclic-integral-the-property-of-a-state-function.84058/ | # Vanishing of a cyclic integral the property of a state function?
1. Aug 3, 2005
### asdf1
Why is the vanishing of a cyclic integral the property of a state function?
2. Aug 3, 2005
### matt grime
as far as i can tell, a state function is required to be something whose integral is independent of the path chosen for the integral, this is from searchign with google, and appears to be a thermodynaics thing. if we assume this is the definition of a state function then it is clear why any integral over a loop is zero since we may split the loop into two parts and consider the loop as a path from A to B followed by a path from B to A and the integral over a path from B to A is minus the integral we get if we went along that path backwards so they must add up to zero.
if you're asking why state fucntions are path independent then could i have your definition of state function.
3. Aug 3, 2005
### asdf1
Thank you~ I understand now. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9928995370864868, "perplexity": 342.5022950261729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540928.63/warc/CC-MAIN-20161202170900-00499-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://study.com/academy/answer/in-a-recent-study-35-of-people-surveyed-indicate-chocolate-was-their-favourite-flavor-ice-cream-suppose-we-select-a-sample-of-ten-and-ask-them-to-name-their-favourite-flavor-of-ice-cream-a-how-many-would-you-expect-to-name-chocolate-b-what-is.html | # In a recent study 35 % of people surveyed indicate chocolate was their favourite flavor ice...
## Question:
In a recent study {eq}35 \ \% {/eq} of people surveyed indicate chocolate was their favourite flavor ice cream. Suppose we select a sample of ten and ask them to name their favourite flavor of ice cream.
(a) How many would you expect to name chocolate?
(b) What is the probability exactly four of those in the sample name chocolate?
(c) What is the probability four or more is named chocolate?
## Binomial Probability Distribution:
Binomial distribution is a type of discrete probability distribution that gives sequence of successes repeated in a given number of Bernoulli trials. The probability of success remains constant throughout the experiment.
## Answer and Explanation:
#### a).
Given that;
{eq}n=10\\p=0.35 {/eq}
The expected number of binomial distribution is equal to the mean:
{eq}\begin{align*} E(x)&=\mu=np\\&=10\times 0.35\\&=3.5 \end{align*} {/eq}
#### b).
Use binomial probability mass function to calculate the probability of exactly four people who will indicate that chocolate is their favour color ice cream:
{eq}\begin{align*} b(n,x,p)&=^nC_xp^x(1-p)^{n-x}\\b(10,4,0.35)&=^{10}C_{4}\cdot 0.35^4(1-0.35)^{10-4}\\&=210\times 0.35^4\times 0.65^6\\&=0.2377 \end{align*} {/eq}
#### c).
Use binomial probability calculator to find the probability of getting more than four:
{eq}\begin{align*} P(X\ge 4)&=P(X=4)+P(X=5)...+P(X=10)\\&=0.4862 \end{align*} {/eq} | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9680236577987671, "perplexity": 2436.349196852942}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00148.warc.gz"} |
http://mathhelpforum.com/calculus/87675-taylor-series-problem.html | # Math Help - Taylor Series Problem
1. ## Taylor Series Problem
Do I plug in pi/2 into the general cosx series expansion? Also, what would the general term be?
The question is attached in a file.
Attached Files
2. Originally Posted by defjammer91
Do I plug in pi/2 into the general cosx series expansion? Also, what would the general term be?
The question is attached in a file.
No, the point of expansion is $\frac{\pi}{2}$ so we use
$
f \left( \frac{\pi}{2}\right) +
f '\left( \frac{\pi}{2}\right) \left(x - \frac{\pi}{2} \right) +
f '' \left( \frac{\pi}{2}\right) \frac{\left(x - \frac{\pi}{2} \right)^2 }{2!} + \cdots
$
3. Originally Posted by defjammer91
Do I plug in pi/2 into the general cosx series expansion? Also, what would the general term be?
the question asks for the first three non-zero terms of the taylor series for cosx centered at pi/2
$f(x) = \cos{x}$
$f(x) = f\left(\frac{\pi}{2}\right) + f'\left(\frac{\pi}{2}\right)\left(x - \frac{\pi}{2}\right) + \frac{f''\left(\frac{\pi}{2}\right)\left(x - \frac{\pi}{2}\right)^2}{2!} + ... + \frac{f^{n}\left(\frac{\pi}{2}\right)\left(x - \frac{\pi}{2}\right)^n}{n!} + ...$
note that you'll end up with an odd degree taylor series
4. when you do the ratio test to find the interval of convergence, do you simplify anything first, or do you just plug the general term right into the ratio test? if you do have to plug in the general term, as is, into the ratio test, how does that work out? what cancels on the top and bottom? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9331759214401245, "perplexity": 646.9168006333842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122222204.92/warc/CC-MAIN-20150124175702-00181-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://openstudy.com/updates/51c39581e4b055e613b8850c | Here's the question you clicked on:
55 members online
• 0 viewing
## grantmasini 2 years ago Limits Delete Cancel Submit
• This Question is Closed
1. grantmasini
• 2 years ago
Best Response
You've already chosen the best response.
0
$\cos x \lim_{h \rightarrow 0}\frac{ \sin h }{ h }$
2. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
have you learned lhospital rule ?
3. grantmasini
• 2 years ago
Best Response
You've already chosen the best response.
0
no.
4. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
what are you doing in class?
5. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
squeeze theorem?
6. grantmasini
• 2 years ago
Best Response
You've already chosen the best response.
0
this is one of the last steps in a limit problem from the first unit (limits). it's supposed to evaluate to cosx*1=cosx. I don't understand how sinh/h comes out to be 1?
7. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
well, there is a thing called La'Hospital rule, that says when you run a limit and get 0/0 you can take the derivative of the top and the derivative of the bottom and then run the limit again so your limit lim of sin(h)/h = lim of cos(h)/1 = lim cos(h) and at 0 that is 1
8. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
l'hopital's rule
9. grantmasini
• 2 years ago
Best Response
You've already chosen the best response.
0
lol, we are on the first unit. haven't learned derivates, or any of the theorems or rules.
10. grantmasini
• 2 years ago
Best Response
You've already chosen the best response.
0
can someone just explain how the limit of sinh/h is 1?
11. grantmasini
• 2 years ago
Best Response
You've already chosen the best response.
0
derivatives*
12. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
hmm
13. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
do you know the squeeze theorem?
14. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
There is a geometric proof http://www.youtube.com/watch?v=Ve99biD1KtA I don't know of an elementary way of showing this limit without geometry and quite a bit of explaining...watch that video.
15. grantmasini
• 2 years ago
Best Response
You've already chosen the best response.
0
all right, thanks
16. zzr0ck3r
• 2 years ago
Best Response
You've already chosen the best response.
0
np
17. PROSS
• 2 years ago
Best Response
You've already chosen the best response.
0
It appears that you are beginning the study of Calculus with the topic of limits. We often will use a table of values as they approach the limit from the left and from the right. This is the first method you may want to use. A second method that is used is to look at the graph of the function. You can easily see that the limit as h approaches 0 from the left and right is 1. I hope this helps.
18. Not the answer you are looking for?
Search for more explanations.
• Attachments:
Find more explanations on OpenStudy
##### spraguer (Moderator) 5→ View Detailed Profile
23
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997898936271667, "perplexity": 3753.743826316615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701149548.13/warc/CC-MAIN-20160205193909-00301-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/minimum-value-of-absolute-deviation.228121/ | # Minimum Value of Absolute Deviation
1. Apr 11, 2008
### maverick280857
Hi,
How can I rigorously prove that the quantity
$$S = \sum_{i=1}^{n}|X_{i} - a|$$
(where $X_{1},\ldots,X_{n}$ is a random sample and a is some real number) is minimum when a is the median of the $X_{i}$'s?
Thanks.
Last edited: Apr 11, 2008
2. Apr 11, 2008
### maverick280857
Ok I think I got it. If a is the median, then there are just as many numbers less than it than there are greater than it...but how do I write the median in terms of the random sample?
3. Apr 11, 2008
### quadraphonics
There's no simple way to write it like you can with, say, the mean. What you can do is re-order the sample in increasing order, such that $X_1 \leq \ldots \leq X_{n/2} \leq a \leq X_{n/2 + 1} \leq \ldots \leq X_n$. For the actual proof, you might try working by contradiction: assume some other value of a results in the lowest value for the sum, and then show that you can construct an even lower value by moving a towards the median.
Share this great discussion with others via Reddit, Google+, Twitter, or Facebook | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8240309357643127, "perplexity": 520.3340891613326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592309.94/warc/CC-MAIN-20180721032019-20180721052019-00127.warc.gz"} |
https://www.wyzant.com/resources/answers/topics/ab-calculus | 21 Answered Questions for the topic AB calculus
Ab Calculus Math Calculus Mathematics
16d
#### Need help with math ASAP
Let f be the function defined by f(x)=4/(x^2).----------------------------------Approximate the value of 12(upper limit)∫1(lower limit) f(x) dx using a left Riemann sum with the... more
Ab Calculus Math Calculus Precalculus
01/29/21
#### I need help with integrals
Let f be the function given by f(x)=sin^2(x/4)e^(−x2). It is known that (top value= π/2)∫(bottom value=0) f(x) dx= 0.0223. If a midpoint Riemann sum with two intervals of equal length is used to... more
Ab Calculus Math Calculus Precalculus
01/29/21
#### need help with math
x | 2 | 4 | 6 | 7 | 11-----------------------------------------------------------------------------------f(x) | 0.7 | 1.1 | 1.9 | 2.5 | 5.3The continuous function f is known to be... more
Ab Calculus Math Calculus Algebra
12/31/20
#### help needed with calculus question
The graph of the continuous function g is shown above for −3≤x≤5 . The function g is twice differentiable except at x=1Graph:http://prnt.sc/wdcalqLet f be the function with f(1)=3 and... more
Ab Calculus Math Calculus Physics
12/15/20
#### I need help with tangent lines
Given the equation of a curve: x^3/3+y^2/2−3x+2y=−1/6.does the line tangent to the curve at the point (1,1) have a slope of 2/3?Thanks
Ab Calculus Math Calculus Physics
12/14/20
#### How can I solve this?
Consider the curve defined by y^2=x^3−3x+3 for x>0. At what value of x does the curve have a horizontal tangent?a) √3 / 3b) 1c) √3d) There is no such valueThank you
Ab Calculus Math Calculus Physics
12/14/20
#### I don't understand this...................
The total cost, in dollars, to order x units of a certain product is modeled by C(x)=7x^2+252. According to the model, for what size order is the cost per unit a minimum?A. An order of 1 unit has a... more
Ab Calculus Math Calculus Physics
12/14/20
A curve in the xy-plane is defined by the equation x^3/3+y^2/2−3x+2y=−1/6. Which of the following statements are true?i. At points where x=√3, the lines tangent to the curve are horizontal.ii. At... more
Ab Calculus Math Calculus Physics
11/30/20
#### really need help with derivatives
Particle P moves along the x-axis so that its position at time t>0 is given by xP(t)= (e^(2−t)−2t) / (e^(2−t)+3t). A second particle, particle Q , also moves along the x -axis so that its... more
Ab Calculus Math Calculus Physics
11/28/20
#### looking for help with derivatives
Particle P moves along the x-axis so that its position at time t>0 is given by xP(t)= (e^(2−t)−2t) / (e^(2−t)+3t). A second particle, particle Q , also moves along the x -axis so that its... more
Ab Calculus Math Calculus Physics
11/28/20
#### Need help with limits and derivatives
{(32/5)+2/5cos(πt/4) for 0 ≤ t ≤ 4W(t) = {6+(1/8)(t−4)^2 for 4 < t ≤ 9----------------------------------------------------------------------The... more
Ab Calculus Math Calculus Physics
11/28/20
For time 0≤t≤8, people arrive at a venue for an outdoor concert at a rate modeled by the function A defined by A(t) = 0.3sin(1.9t) + 0.3cos(0.6t) + 1.3. For time 0 ≤ t ≤ 1, no one leaves the venue,... more
Ab Calculus Math Calculus Physics
10/16/20
If g(x)=2ln(x+1) and f is a differentiable function of x, what is equivalent to the derivative of f(g(x)) with respect to x?let f be a differentiable function. if h(x) = (2+f(sinx))^3, what's a... more
Ab Calculus Math Calculus Physics
10/04/20
#### Help me with math please
The graph of the equation y^4=y^2-x^2 is given.a) verify that dy/dx = (2x)/(2y-(4y^3)). Show why.b) Find the point(s) (x,y) at which the graph has a horizontal tangent line. Show why/work.c) Find... more
Ab Calculus Calculus Ap Calculus
04/01/20
#### Calc AB Need help. Separable Equations and solving.
Consider the following.y(5 + x) dx + x dy = 0Determine whether the differential equation is separable.I put separableIf the equation is separable, rewrite it in the form N(y) dy = M(x) dx. (If it... more
Ab Calculus
10/21/19
#### What is the instantaneous rate of change of g with respect to x at x=π/3?
g(x)=lim (sin(x+h)−sin(x))/h h→0
Ab Calculus Calculus Ap Calculus
03/11/19
#### A table of values for a continuous function f is shown below.
If four subintervals are used, which of the following is the right sum approximation of ∫ fxdx (x=0 to x=2)?x: 0 0.5 1.0 1.5 2.0y: 4 6 8 12 22A) 6B) 12C) 24D) 48E) 60
08/28/17
#### A rational function g(x) is such that g(x) = f(x) wherever f(x) is defined. Find the value of a and b.
f(x) = (2x-2)/ x2+x-2 g(x) = a/ b+x
Ab Calculus Math Calculus Limits
08/26/17
#### Is there a number to "a" such that the limit exists? If so, find the value of "a" and find the limit. If not, explain why.
Find the limit as "x" approaches 3 F(x)= (2x2-3ax+x-a-1)/ x2-2x-3 **I already know that if you substitute in 3 for x, the denominator will be 0 and therefore it cannot exist. I'm wondering if... more
## Still looking for help? Get the right answer, fast.
Get a free answer to a quick problem.
Most questions answered within 4 hours.
#### OR
Choose an expert and meet online. No packages or subscriptions, pay only for the time you need. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9412111043930054, "perplexity": 1792.7417466739967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00309.warc.gz"} |
https://www.greenemath.com/Algebra2/35/MultiplyingPolynomialsLesson.html | Lesson Objectives
• Demonstrate an understanding of the commutative property of multiplication
• Demonstrate an understanding of the associative property of multiplication
• Learn how to find the product of two polynomials
• Learn how to find the product of two binomials using FOIL
• Learn how to find the product of more than two polynomials
## How to Multiply Polynomials
In our last lesson, we gave a basic definition of a polynomial, and we learned how to add and subtract polynomials. In this lesson, we will focus on how to multiply polynomials. We can multiply two monomials together using the associative and commutative properties of multiplication. Let's look at an example.
Example 1: Find each product
(4x2)(-2x5)
We can reorder the multiplication:
(4 • -2)(x2 • x5)
-8x7
When we multiply a monomial by a polynomial that is not a monomial, we use the distributive property. Let's look at an example.
Example 2: Find each product
5x(2x2 + 11)
We will distribute the 5x to each term inside of the parentheses.
5x • 2x2 + 5x • 11
(5 • 2)(x • x2) + (5 • 11)x
10x3 + 55x
When we multiply two non-monomials together, we also use the distributive property. We will form the sum of each term of the first polynomial multiplied by each term of the second polynomial. Let's look at an example.
Example 3: Find each product
(x + 8)(2x + 1)
We will find the sum of each term of the first polynomial multiplied by each term of the second polynomial:
x(2x + 1) + 8(2x + 1)
x • 2x + x • 1 + 8 • 2x + 8 • 1
2x2 + x + 16x + 8
2x2 + 17x + 8
Example 4: Find each product
(8x + 5y)(5x2 - 3xy + 3y2)
We will find the sum of each term of the first polynomial multiplied by each term of the second polynomial:
8x(5x2 - 3xy + 3y2) + 5y(5x2 - 3xy + 3y2)
8x • 5x2 + 8x • -3xy + 8x • 3y2 + 5y • 5x2 + 5y • -3xy + 5y • 3y2
40x3 - 24x2y + 24xy2 + 25x2y - 15xy2 + 15y2
40x3 + x2y + 9xy2 + 15y3
### Multiplying two Binomials using FOIL
We will often have to find the product of two binomials. When this situation occurs, we can use the FOIL technique.
F » First Terms
O » Outer Terms
I » Inner Terms
L » Last Terms
To use the FOIL technique, we find the sum of the first terms, outer terms, inner terms, and last terms. Let's look at an example.
Example 5: Find each product using FOIL
(5x + 7y)(6x + 5y)
F » 5x • 6x = 30x2
O » 5x • 5y = 25xy
I » 7y • 6x = 42xy
L » 7y • 5y = 35y2
We will find the sum of these individual products.
30x2 + 25xy + 42xy + 35y2
30x2 + 67xy + 35y2
### Multiplying More Than Two Polynomials
We can find the product of more than two polynomials by multiplying pairs of polynomials until we have our product. Let's look at an example.
Example 6: Find each product
(2x + 5)(6x - 8)(5x2 + 9)
Let's begin by finding the product of the first two (leftmost) polynomials:
(2x + 5)(6x - 8)
12x2 - 16x + 30x - 40
12x2 + 14x - 40
Now we can multiply the result by the last polynomial:
(12x2 + 14x - 40)(5x2 + 9)
5x2(12x2 + 14x - 40) + 9(12x2 + 14x - 40)
60x4 + 70x3 - 200x2 + 108x2 + 126x - 360
60x4 + 70x3 - 92x2 + 126x - 360 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8661597967147827, "perplexity": 1407.330219919214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206133.46/warc/CC-MAIN-20200922125920-20200922155920-00525.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition/chapter-7-trigonometric-identities-and-equations-7-5-inverse-circular-functions-7-5-exercises-page-707/6 | ## Precalculus (6th Edition)
Fill the blank with ... $x$ ...
See page 704, using a calculator to evaluate $\sec^{-1}x...$ ... evaluate $\displaystyle \cos^{-1}\frac{1}{x}$ Fill the blank with ... $x$ ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9165223240852356, "perplexity": 2258.4158188485176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741569.29/warc/CC-MAIN-20181114000002-20181114022002-00418.warc.gz"} |
http://mathhelpforum.com/trigonometry/194554-how-solve-equation.html | # Thread: How to solve an equation
1. ## How to solve an equation
The equation is:
cotg(X) = (sqrt(3)*cos(50º) + sin(50º))*(1-2*sqrt(3)*cos(50º)+2*sin(50º)) /
(sqrt(3)*sin(50º) - cos(50º))
Using a calculator, I find X = 50º.
How to develop the equation to find the solution?
Thanks.
2. ## Re: How to solve an equation
What do you mean by "develop" the equation? The right side is simply a number. Find that number and take its inverse cotangent.
3. ## Re: How to solve an equation
Doing what you propose with the calculator we arrive to 50 degrees.
So, if X = 50, it would be possible to "develop" or simplify the trigonometric expression on the second member of the equation to arrive to cotg(50).
Don't you think so, considering that in the expression we have only trigonometric functions of the same angle?
4. ## Re: How to solve an equation
Originally Posted by TOZE
Don't you think so, considering that in the expression we have only trigonometric functions of the same angle?
Consider any angle $\alpha$ instead of $50^0$ and expand $\cot \alpha$ in the following way:
$\cot \alpha =\cot \;[30^0+(\alpha -30^0)]=\frac{\cot 30^0\cdot \cot (\alpha -30^0)-1}{\cot 30^0+\cot (\alpha -30^0)}=$
$\frac{\sqrt{3}\cdot \dfrac{\cos (\alpha-30^0)}{\sin (\alpha-30^0)}-1}{\sqrt{3}+\dfrac{\cos (\alpha-30^0)}{\sin (\alpha-30^0)}}}=\ldots$
Let's see if you get the right side of the equation (with $\alpha$ instead of $50^0$ ).
P.D. I haven't checked it, it is only a proposal. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.923484206199646, "perplexity": 712.212570053211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549425381.3/warc/CC-MAIN-20170725202416-20170725222416-00168.warc.gz"} |
http://en.wikipedia.org/wiki/Downsampling | # Downsampling
In signal processing, downsampling (or "subsampling") is the process of reducing the sampling rate of a signal. This is usually done to reduce the data rate or the size of the data.
The downsampling factor (commonly denoted by M) is usually an integer or a rational fraction greater than unity. This factor multiplies the sampling time or, equivalently, divides the sampling rate. For example, if 16-bit compact disc audio (sampled at 44,100 Hz) is downsampled to 22,050 Hz, the audio is said to be downsampled by a factor of 2. The bit rate is also reduced in half, from 1,411,200 bit/s to 705,600 bit/s, assuming that each sample retains its bit depth of 16 bits.
## Maintaining the sampling theorem criterion
Since downsampling reduces the sampling rate, it is usually a good idea to make sure the Nyquist–Shannon sampling theorem criterion is maintained relative to the new lower sample rate, to avoid aliasing in the resulting digital signal. To ensure that the sampling theorem is satisfied, or approximately so, a low-pass filter is used as an anti-aliasing filter to reduce the bandwidth of the signal before the signal is downsampled; the overall process (low-pass filter, then downsample) is sometimes called decimation.
If the original signal had been bandwidth limited, and then first sampled at a rate higher than the Nyquist rate, then the sampled signal may already have a bandwidth compliant with the requirements of the sampling theorem at the lower rate, so the downsampling can be done directly without any additional filtering. Downsampling only changes the sample rate, not the bandwidth, of the signal. The only reason to filter the bandwidth is to avoid the case where the new sample rate would become lower than the Nyquist rate and cause aliasing.
In some cases, the anti-aliasing filter can be a band-pass filter; the aliasing inherent in downsampling will then transpose a band of interest to baseband samples. A bandpass signal, i.e. a band-limited signal whose minimum frequency is different from zero, can be downsampled avoiding superposition of the spectra if certain conditions are satisfied; see undersampling.
## Downsampling by integer factor
Downsampling a sequence $\scriptstyle x[n]$ by retaining only every Mth sample creates a new sequence $\scriptstyle y[n] = x[nM].$ If the original sequence contains significant normalized frequency components in the region $\scriptstyle [0.5/M,\ 1-0.5/M]$ (cycles/sample), the downsampler should be preceded by a low-pass filter with cutoff frequency $\scriptstyle 0.5/M$.[note 1] In this application, such an anti-aliasing filter is referred to as a decimation filter, and the combined process of filtering (convolution) and downsampling is called decimation.
The process described above would generate an output sample for every input sample, and then M-1 of every M outputs would be discarded. Such is the process for an IIR filter that relies on feedback from output to input. With FIR filtering, it is an easy matter to compute only every Mth output. The calculation performed by a decimating FIR filter for the nth output sample is a dot product:
$y[n] = \sum_{k=0}^{K-1} x[nM-k]\cdot h[k],$
where the h[•] sequence is the impulse response, and K is its length. In a general purpose processor, after computing y[n], the easiest way to compute y[n+1] is to advance the starting index in the x[•] array by M, and recompute the dot product.
Impulse response coefficients taken at intervals of M form a subsequence, and there are M such subsequences (phases) multiplexed together. The dot product is the sum of the dot products of each subsequence with the corresponding samples of the x[•] sequence. Furthermore, because of decimation by M, the stream of x[•] samples involved in any one of the M dot products is never involved in the other dot products. Thus M low-order FIR filters are each filtering one of M multiplexed phases of the input stream, and the M outputs are being summed. This viewpoint offers a different implementation that might be advantageous in a multi-processor architecture. In other words, the input stream is demultiplexed and sent through a bank of M filters whose outputs are summed. When implemented that way, it is called a polyphase filter.
For completeness, we now mention that a possible implementation of each phase is to replace the coefficients of the other phases with zeros in a copy of the h[•] array, process the original x[•] sequence at the input rate, and decimate the output by a factor of M. The equivalence of this inefficient method and the implementation described above is known as the first Noble identity.[1]
## Downsampling by rational fraction
Let M/L denote the downsampling factor.
1. Upsample by a factor of L
2. Downsample by a factor of M
A proper upsampling design requires an interpolation filter after increasing the data rate and that a proper downsampling design requires a filter before eliminating some samples. These two low-pass filters can be combined into a single filter.
These two steps are generally not interchangeable. Downsampling results in a loss of data and, if performed first, could result in data loss if there is any data filtered out by the downsampler's low-pass filter. Since both interpolation and anti-aliasing filters are low-pass filters, the filter with the smallest bandwidth is more restrictive and can therefore be used in place of both filters. When the rational fraction M/L is greater than unity then L < M and the single low-pass filter should have cutoff at $\scriptstyle 0.5/M$ cycles/sample.
NOTE: Upsampling first is necessary in all cases where the rate is not an even multiple. E.g.: if a sample rate of 2x is changed to a rate of 1x by averaging every pair of samples this would be equivalent to a low pass filtering operation. But taking every other sample would be equivalent to up then down sampling in this special case where the multiple was 2 to 1, so there is no need to do an upsample first.
## Discrete-time Fourier transform (DTFT)
Let X(f) be the Fourier transform of any function, x(t), whose samples at some interval, T, equal the x[n] sequence. Then the DTFT of the x[n] sequence is the Fourier series representation of a periodic summation of X(f):
$\underbrace{ \sum_{n=-\infty}^{\infty} \overbrace{x(nT)}^{x[n]}\ e^{-i 2\pi f nT} }_{\text{DTFT}} = \frac{1}{T}\sum_{k=-\infty}^{\infty} X(f-k/T).$
(Eq.1)
When T has units of seconds, $\scriptstyle f$ has units of hertz. Replacing T with MT in the formulas above gives the DTFT of the decimated sequence, x[nM]:
$\sum_{n=-\infty}^{\infty} x(n\cdot MT)\ e^{-i 2\pi f n(MT)} \equiv \frac{1}{MT}\sum_{k=-\infty}^{\infty} X\left(f-\tfrac{k}{(MT)}\right).$
(Eq.2)
The periodic summation has been reduced in amplitude and periodicity by a factor of M. Aliasing occurs when adjacent copies of X(f) overlap. If an anti-aliasing filter is applied to the x[n] sequence, it should have a cutoff frequency $< \tfrac{0.5}{MT}$ hertz at sample-rate 1/T, or (equivalently) a cutoff $< \tfrac{0.5}{M}$ at normalized frequency 1.0 cycles/sample.
Alternatively, the sample-rate can be presumed to be held constant, meaning that the interval between the retained samples is reduced from MT to T. The resulting Fourier series is:
$\sum_{n=-\infty}^{\infty} x(n\cdot MT)\ e^{-i 2\pi f nT} = \frac{1}{T}\sum_{k=-\infty}^{\infty} X_M(f-k/T) = \frac{1}{MT}\sum_{k=-\infty}^{\infty} X\left(\tfrac{f-k/T}{M}\right) ,$
(Eq.3)
where:
$X_M(f)\ \stackrel{\mathrm{def}}{=}\ \mathcal{F}\left \{x(Mt)\right \} \equiv \tfrac{1}{M}\cdot X(f/M).$
The original periodicity is restored, but $\scriptstyle X(f/M)$ is M times wider than $\scriptstyle X(f),$ which can cause adjacent copies to overlap unless the x[n] sequence is pre-filtered as described above. Eq.2 and Eq.3 are identical, except for a frequency scale factor.
## Z-transform
The z-transform of the x[n] sequence is defined by:
$X(z)\ \stackrel{\mathrm{def}}{=} \sum_{n=-\infty}^{\infty} x[n]\ z^{-n},$ where z is a complex variable.[note 2]
On the unit circle, z is constrained to values of the form $e^{i \omega}.$ Then one cycle of $X(e^{i \omega}), \ \scriptstyle -\pi \ \le \ \omega \ \le \ \pi$ is identical to one period $\left(\scriptstyle -\frac{0.5}{T} \ \le \ f \ \le \ \frac{0.5}{T}\right)$ of Eq.1.
The z-transform of the decimated sequence is:
$X_M(z)\ \stackrel{\mathrm{def}}{=} \sum_{n=-\infty}^{\infty} x[nM]\ z^{-n},$
and one cycle of $X_M(e^{i \omega}), \ \scriptstyle -\pi \ \le \ \omega \ \le \ \pi$ is identical to one period of Eq.2 and Eq.3.
In terms of $X(z),$ it can be shown that:[2][3]
$X_M(z) = \frac{1}{M} \sum_{k=0}^{M-1} X\left(z^{\tfrac{1}{M}} \cdot e^{-i \tfrac{2\pi}{M} k}\right) = \frac{1}{M} \sum_{k=0}^{M-1} X\left( e^{\tfrac{i(\omega - 2\pi k)}{M} } \right).$
The periodicity of each "k" term is 2πM radians, and the terms are offset by multiples of 2π. So the periodicity of the summation is 2π (as required by the z-transform definition). The k=0 term is $X(e^{i \omega})$ stretched across 2πM radians, which means that it exceeds the unit circle and folds back on itself M-1 times, or (equivalently) it overlaps and is overlapped by the other M-1 terms of the summation. But if its expanded bandwidth is still limited to the region $\scriptstyle (-\pi \ < \ \omega \ < \ \pi),$ the folding/overlapping does not cause aliasing. That can be assured by an anti-alias filter with a cutoff frequency < π/M at frequency 2π (radians/sample), or (equivalently) a cutoff $< \tfrac{0.5}{M}$ at frequency 1.0 cycles/sample.
For comparison with the DTFT (Eq.2), ω = 2π corresponds to $\scriptstyle f=1/(MT).$ And it corresponds to $\scriptstyle f=1/T$ in the other Fourier series (Eq.3).
## Notes
1. ^ Realizable low-pass filters have a "skirt", where the response diminishes from near unity to near zero. So in practice the cutoff frequency is placed far enough below the theoretical cutoff that the filter's skirt is contained below the theoretical cutoff.
2. ^ In a discussion involving multiple types of transforms, it is a common practice to distinguish them on the basis of their arguments, rather than the function name.
• Fourier transform is denoted by $X(f)$ or $X(\omega).$
• Z transform is denoted by $X(z).$
• DTFT is denoted by $X(e^{i \omega})$ or $X(e^{i 2\pi fT}),$ but sometimes $X_{2\pi}(\omega)$ or $X_{1/T}(f).$
## Citations
1. ^ [Gilbert]; Nguyen, Truong (1996-10-01). Wavelets and Filter Banks (2 ed.). Wellesley,MA: Wellesley-Cambridge Press. pp. 100–101. ISBN 0961408871.
2. ^ Schniter, Phil (March 2006). "ECE-700 Multirate Notes". p. 2. Retrieved 2013-12-10.
3. ^ "DSP and Digital Filters (2013-3810)". 2013. p. 68. Retrieved 2013-12-10.
## References
• Oppenheim, Alan V.; Ronald W. Schafer, John R. Buck (1999). Discrete-Time Signal Processing (2nd ed.). Prentice Hall. ISBN 0-13-754920-2.
• Proakis, John G. (2000). Digital Signal Processing: Principles, Algorithms and Applications (3rd ed.). India: Prentice-Hall. ISBN 8120311299. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 35, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8977717757225037, "perplexity": 1275.2903886655986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345774929/warc/CC-MAIN-20131218054934-00078-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://keio.pure.elsevier.com/en/publications/skyrmion-interactions-and-lattices-in-chiral-magnets-analytical-r | # Skyrmion interactions and lattices in chiral magnets: analytical results
Calum Ross, Norisuke Sakai, Muneto Nitta
Research output: Contribution to journalArticlepeer-review
3 Citations (Scopus)
## Abstract
We study two-body interactions of magnetic skyrmions on the plane and apply them to a (mostly) analytic description of a skyrmion lattice. This is done in the context of the solvable line, a particular choice of a potential for magnetic anisotropy and Zeeman terms, where analytic expressions for skyrmions are available. The energy of these analytic single skyrmion solutions is found to become negative below a critical point, where the ferromagnetic state is no longer the lowest energy state. This critical value is determined exactly without the ambiguities of numerical simulations. Along the solvable line the interaction energy for a pair of skyrmions is repulsive with power law fall off in contrast to the exponential decay of a purely Zeeman potential term. Using the interaction energy expressions we construct an inhomogeneous skyrmion lattice state, which is a candidate ground states for the model in particular parameter regions. Finally we estimate the transition between the skyrmion lattice and an inhomogeneous spiral state.
Original language English 95 Journal of High Energy Physics 2021 2 https://doi.org/10.1007/JHEP02(2021)095 Published - 2021 Feb
## Keywords
• Integrable Field Theories
• Solitons Monopoles and Instantons
## ASJC Scopus subject areas
• Nuclear and High Energy Physics
## Fingerprint
Dive into the research topics of 'Skyrmion interactions and lattices in chiral magnets: analytical results'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9229808449745178, "perplexity": 1964.986788328784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00511.warc.gz"} |
http://tex.stackexchange.com/questions/38996/referencing-main-subequation | # referencing main subequation
In the following example i labeled each subeqnation.
\begin{subequations}
\begin{gather}
R_0 = 0 \label{subeqn:ini-cond-r} \\
N_0 = 0 \label{subeqn:ini-cond-n}
\end{gather}
\end{subequations}
then i could give as reference each equation separately.
What i would like to have, assuming the subequation numbers are 1.1a and 1.1b, is a reference to 1.1 or somehow that LaTeX will show the number 1.1 (It would be nice a solution with hyperref, since i already use it in my work)
-
You can set an extra label inside the environment subequations but outside the inner math environment. So you will get reference to the parent counter.
\listfiles
\documentclass{report}
\usepackage{amsmath}
\begin{document}
\chapter{foo}
$$A_0=1$$
\begin{subequations}
\label{foo}
\begin{gather}
R_0 = 0 \label{subeqn:ini-cond-r} \\
N_0 = 0 \label{subeqn:ini-cond-n}
\end{gather}
\end{subequations}
$$A_0=1$$
\eqref{subeqn:ini-cond-r} and \eqref{subeqn:ini-cond-n} and \eqref{foo}
\end{document}
-
This method does not work if I want to reference only the main equation without ever referencing sub equations when using hyperref package. – Vlad Mar 10 at 11:54
You can put another label to subequations environment.
\begin{subequations}\label{refertothis}
\begin{gather}
R_0 = 0 \label{subeqn:ini-cond-r} \\
N_0 = 0 \label{subeqn:ini-cond-n}
\end{gather}
\end{subequations}
Then, you can refer to the equation set via \eqref{refertothis}.
-
This method does not work if I want to reference only the main equation without ever referencing sub equations when using hyperref package. – Vlad Mar 10 at 11:55
@Vlad What do you mean by hyperref package? Did you try \eqref{refertothis} ? – percusse Mar 10 at 13:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9494575262069702, "perplexity": 2376.2035660532592}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049282327.67/warc/CC-MAIN-20160524002122-00075-ip-10-185-217-139.ec2.internal.warc.gz"} |
http://www.instrumentationtoolbox.com/2012/04/mass-flowmeters.html | Mass Flow Meters ~ Learning Instrumentation And Control Engineering Learning Instrumentation And Control Engineering
### Mass Flow Meters
Custom Search
Measurements of mass flow are preferred over measurements of volumetric flow in process applications where mass balance (monitoring the rates of mass entry and exit for a process) is important. Whereas volumetric flow measurements express the fluid flow rate in such terms as gallons per minute or cubic meters per second, mass flow measurements always express fluid flow rate in terms of actual mass units over time, such as pounds (mass) per second or kilograms per minute.
In the past, mass flow was often calculated from the outputs of a volumetric flow meter and a densitometer. Density was either directly measured or was calculated using the outputs of process temperature and pressure transmitters. These measurements were not very accurate, because the relationship between process pressure or temperature and density are not always precisely known. This is because in the measurement process each sensor adds its own separate error to the overall measurement error, and the speed of response of such calculations is usually not sufficient to detect step changes in flow.
The advent of modern mass flow meters has led to a remarkable increase in the accuracy associated with flow measurements. This flow measurement requires no compensation for a change in density, temperature or pressure. Examples of mass flow meters include:
(a) Coriolis Mass Flow meters
(b) Thermal Mass Flow meters
Coriolis Mass Flow meters
Coriolis flow meters are true mass meters that measure the mass rate of flow directly as opposed to volumetric flow. Because mass does not change, the meter is linear without having to be adjusted for variations in liquid properties. It also eliminates the need to compensate for changing temperature and pressure conditions. The meter is especially useful for measuring liquids whose viscosity varies with velocity at given temperatures and pressures.
Coriolis mass flow meters artificially introduce Coriolis acceleration into the flowing stream and measure mass flow by detecting the resulting angular momentum. When a fluid is flowing in a pipe and it is subjected to Coriolis acceleration through the mechanical introduction of apparent rotation into the pipe, the amount of deflecting force generated by the Coriolis inertial effect will be a function of the mass flow rate of the fluid. If a pipe is rotated around a point while liquid is flowing through it (toward or away from the center of rotation), that fluid will generate an inertial force (acting on the pipe) that will be at right angles to the direction of the flow. This inertial force is used to measure flow rate of the fluid.
Thermal Mass Flow meters
Thermal mass flow meters also measure the mass flow rate of gases and liquids directly. Volumetric measurements are affected by all ambient and process conditions that influence unit volume or indirectly affect pressure drop, while mass flow measurement is unaffected by changes in viscosity, density, temperature, or pressure.
Thermal mass flow meters are often used in monitoring or controlling mass-related processes such as chemical reactions that depend on the relative masses of unreacted ingredients. In detecting the mass flow of compressible vapors and gases, the measurement is unaffected by changes in pressure and/or temperature.
One of the capabilities of thermal mass flow meters is to accurately measure low gas flow rates or low gas velocities (under 25 ft. per minute)- much lower than can be detected with any other device.
Thermal mass flow meters are most often used for the regulation of low gas flows. They operate either by introducing a known amount of heat into the flowing stream and measuring an associated temperature change, or by maintaining a probe at a constant temperature and measuring the energy required to do so. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.97734135389328, "perplexity": 712.6877587785983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541864.24/warc/CC-MAIN-20161202170901-00498-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://de.scribd.com/document/402811755/Th-of-failure-docx | Sie sind auf Seite 1von 3
# Theories of failure-Assignment
1. The force acting on a bolt consists of two components – an axial pull of 12 KN and a transverse shear force
of 6 kN. The bolt is made of steel FeE 310 (Syt = 310 N/mm2) and the factor of safety is 2.5. Determine the
diameter of the bolt using the maximum shear stress theory of failure. (Ans: 13.2 mm)
2. The principal stresses induced at a point in a machine component made of steel 50C4 (Syt = 460 N/mm2)
are as follows; 𝜎1 = 200 N/mm2; 𝜎2 = 150 N/mm2; 𝜎3 = 0. Calculate the factor of safety by (a) the maximum
normal stress theory (b) the maximum shear stress theory and (c) the distortion energy theory. (Ans: 2.3,
2.3, 2.55)
3. A cylindrical. shell made of mild-steel plate yields at 200 N/mm2 in uniaxial tension. The
diameter of the shell is 2 m and its thickness is 20 mm. Find the pressure at which the failure occurs according to (a)
maximum shear stress theory and (b) distortion energy theory. (Ans. (a) 8 N/mm2 (b) 4.62 N/mm2).
4. For a biaxial stress system, σx = 80 N/mm2 and; σy = 40 N/mm2. Find the equivalent stress at the elastic limit assuming
that failure occurs due to maximum principal strain theory. Take poisson's ratio to be 0.25. (Ans. 70 N/mm2).
5. A pressure vessel of inside radius 300 mm is subjected to a pressure of 2.0 N/mm2. Find the
thickness of the vessel according to total strain energy theory. The factor of safety is 3 and the yielding occurs in
simple tension at 240 N/mm2. Take poisson's ratio to be 0.3. (Ans. 7. 31mm)
6. A cylindrical tube of outside diameter 120 mm and thickness 3 mm is subjected to a torque of 2 x 10 4 N-m. The stress
at the elastic limit in simple tension is 1200 N/mm2. Calculate the factor of safety according to (a) maximum shear
stress theory and (b) distortion energy theory. (Ans: (a) 1.89, (b) 2.18)
7. A shaft of diameter 50 mm is subjected to a torque of 300 N.m and an axial thrust. For a factor of safety of 3, find the
maximum value of the thrust according to (a) maximum shear stress theory and (b) distortion energy theory. The
failure stress is 100 N/mm2 at the elastic limit. (Ans. (a) 44.51kN; (b) 50.56 kN).
8. A mild steel shaft of diameter 50 mm is subjected to a bending moment of 2 kN.m and a Torque T. If the yield point of
steel in tension is 200 MPa, find the maximum value of the torque without causing yielding of the shaft material
according to (i) maximum principal stress (ii) maximum shear stress and (iii) distortion energy theories.
(Ans. 2112 N.m, 1422.6 N.m, 1642 N.m)
9. A thick spherical pressure vessel of inner radius 150 mm is subjected to an internal pressure of 80 MPa. Calculate its
wall thickness based upon (a) maximum principal stress theory and (b) total strain energy theory. Poisson's ratio =
0.30, yield strength = 300 MPa. (Ans. 40 mm, 38.99 mm)
10. Find the maximum principal stress developed in a cylindrical shaft 80 mm in diameter and subjected to a bending
moment of 2.5 kN.m and a twisting moment of 4.2 kN.m. If the yield stress of the shaft material is 300 MPa, find the
factor of safety of the shaft according to the maximum shearing stress theory of failure. (Ans. 73.49 N/mm2, 3.085)
11. A component in an aircraft flap actuator can be adequately modeled as a cylindrical bar subjected to an axial force of 8
kN, a bending moment of 55 N.m and torsional moment of 30 N.m. A 20 mm diameter solid bar of 7075-T6 aluminium
having σu = 591 MPa, σyt = 542 MPa and τy = 271 MPa is recommended for its use. Determine the factor of safety
available as per maximum principal stress theory, maximum shear stress theory. (Ans. 5.96, 5.27)
12. A rod of circular cross section is to sustain a torsional moment of 3 kN-m and a bending
moment of 2 kN-m. Selecting C45 steal (t σytt = 353 MPa) and assuming factor of safety as 3, determine
the diameter of the rod as per (a) Maximum normal stress theory, (b) Distortion energy theory and
(c) Maximum shear stress theory. (d) Maximum strain theory (e) Maximum strain energy theory.
Poisson’s ratio = 0.25. (Ans: 62.44 mm, 65.7 mm, 67.8 mm, 63.8 mm, 64.5 mm)
13. A critical section in a shaft is subjected to a bending stress of 50 MPa and torsional stress of 31.5
MPa simultaneously. Determine the factor of safety as per (a) Maximum normal stress theory (b)
Maximum shear stress theory (c) Distortion energy theory (d) Maximum strain theory (e) Maximum
strain energy theory. Proportionality limit in tension is 284 MPa. Poisson’s ratio is 0.25. (Ans: 4.35,
7.06, 3.85, 4.11, 4.02)
## 14. A rod of 50 mm diameter is subjected to a compressive load of 20 kN together with a twisting
moment of 1.5 kN-m. It is made of C40 steal σytt = 328.6 MPa. Determine the factor of safety
according to: (i) Maximum normal stress theory, (ii) Maximum shear stress theory (c)
Distortion energy theory (d) Maximum strain theory (e) Maximum strain energy theory (Ans:
4.95, 2.68, 3.09, 3.94, 3.32) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8204610347747803, "perplexity": 3883.3528104161064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894759.37/warc/CC-MAIN-20201027195832-20201027225832-00136.warc.gz"} |
https://math.stackexchange.com/questions/962636/prove-existence-of-a-circle/982257 | # Prove Existence of a Circle
There are two circles with radius $1$, $c_{A}$ and ${c}_{B}$. They intersect at two points $U$ and $V$. $A$ and $B$ are two regular $n$-gons such that $n > 3$, which are inscribed into $c_{A}$ and ${c}_{B}$ so that $U$ and $V$ are vertices of $A$ and $B$.
Then suppose a third circle, $c$, with a radius of $1$ is to be placed so that it intersects $A$ at two of its vertices $W$ and $X$ and intersects $B$ at two of its vertices $Y$ and $Z$.
Details and Assumptions:
1. Assume that $U,V,W,X,Y,Z$ are all distinct points.
2. $U$ lies outside of $c$.
3. $V$ lies inside of $c$.
Given all of these details, prove that there exists a regular $2n$-gon which comprises of $W,X,Y,Z$ as its 4 vertices.
• Nice diagram${}{}$ – TonyK Oct 8 '14 at 11:31
• For a while I assumed that $n$ would have to be a multiple of $6$ in order to satisfy the preconditions, but now I found that $n=15$ works as well. I guess I'll include an image of that as well, hope you don't consider them too big. – MvG Oct 8 '14 at 12:12
• Wait, so how many can be possible? – Axas Bit Oct 8 '14 at 18:34
• Just an observation: it seems as if in the two examples the six lines defined by pairs from $\{W,X,Y,Z\}$ will all pass through corners of two $2n$-gon inscribed into $c_A$ and $c_B$. At the moment I know neither how to prove this nor how this might be of use, but I have a gut feeling that it might be useful. – MvG Oct 13 '14 at 13:42
• This question is part of the current USAMTS Talent Search (round 1) (problem 4 here), which has a submission deadline of 5 Nov 2014. This question will remained locked until after this time. – user642796 Oct 29 '14 at 21:43
This answer focuses on identifying families of solutions to the problem described in the question.
I've made two provisional conjectures in order to make progress with the problem:
1. The result can be stated for three $2n$-gons rather than two $n$-gons and one $2n$-gon.
2. Solutions have mirror symmetry. Or equivalently, in any solution there are two pairs of $2n$-gons which have the same degree of overlap. [This turns out to be false - see 'Solution family 5' below. However, this condition is assumed in Solution families 1-4.]
[Continuation 6: in an overhaul of the notation I've halved $\phi$ and doubled $m$ so that $m$ is always an integer.]
If we define the degree of overlap, $j$, between two $2n$-gons $(n>3)$ as the number of edges of one that lie wholly inside the other, then $1 < j < n$.
If $$\phi = \frac{\pi}{2n}$$ is half the angle subtended at the centre of the $2n$-gon by one of its edges, then the distance between the centres of two overlapping $2n$-gons is $$D_{jn} = 2\cos{j\phi}$$ Consider a $2n$-gon P which overlaps a $2n$-gon O with degree $j$. Now bring in a third $2n$-gon, Q, which also overlaps O with degree $j$ but is rotated about the centre of O by an angle $m\phi$ with respect to P, where $m$ is an integer.
The distance between the centres of P and Q, which I'll denote by $D_{kn}$ for a reason that will become apparent, is $$D_{kn} = 2D_{jn}\sin{\tfrac{m}{2}\phi} = 4\cos{j\phi} \, \sin{\tfrac{m}{2}\phi}$$
We now demand that P and Q should overlap by an integer degree, $k$, so that $$D_{kn} = 2\cos{k\phi}$$ This will ensure that all points of intersection coincide with vertices of the intersecting polygons, and thus provide a configuration satisfying the requirements of the question (with the proviso that the condition does not guarantee that there is a common area of overlap shared by all three polygons).
We have omitted mention of the orientation of the polygons, but it is easily shown that this is always such as to achieve the desired overlap.
Combining the two expressions for $D_{kn}$ gives the condition
$$2\cos{j\phi}\, \sin{\tfrac{m}{2}\phi} = \cos{k\phi}$$ or (since $n\phi=\pi/2$) $$2\cos{j\phi}\, \cos{(n-\tfrac{m}{2})\phi} = \cos{k\phi} \tag{1}$$
The configurations we seek are solutions of this equation for integer $n$, $j$, $k$ and $m$.
In the first example in the question $n = 12, j = 8, k = 6, m = 12$.
In the second example $n = 15, j = 6, k = 10, m = 6$.
[Continuation 6: for solutions under the constraint of conjecture 2, $m$ is always even, but in the more general case $m$ may be odd.]
I'll now throw this open to see if anyone can provide a general solution. It seems likely that $j$, $k$ and $m/2$ must be divisors of $2n$ [this turns out to be incorrect], and I have a hunch that the solution will involve cyclotomic polynomials [this turns out to be correct].
Continuation (1)
I've now identified 3 families of solutions consistent with conjecture 2 (mirror symmetry), all involving angles of 60 degrees. There may be others.
Solution family 1
This family is defined by setting $j=2n/3$. This means that half the angle subtended at the centre of O by its overlapping edges is $\tfrac{\pi}{3}$ radians or 60 degrees. Since $\cos{\tfrac{\pi}{3}} = \tfrac{1}{2}$ it reduces equation 1 to $$\cos{(n-\tfrac{m}{2})\phi} = \cos{k\phi}$$ so there are solutions with $$n-\tfrac{m}{2} = k$$ (where $\tfrac{m}{2}$ is an integer) subject to $2 \le k \le n-1\,\,$, $1 \le \tfrac{m}{2} \le n-2\,\,$ and $3|n$.
The first example in the question belongs to this family. The complete set of solutions for $n=12$ combine to make this pleasing diagram:
Solution family 2
This family has $m=2n/3$. This makes $\cos{(n-\tfrac{m}{2})\phi}=\cos{(\pi/3)} = \tfrac{1}{2}$, which reduces equation 1 to $$\cos{j\phi} = \cos{k\phi}$$ so (given that $j<n$ and $k<n$) $$j = k$$ These solutions have threefold rotational symmetry. The only restriction is that $n$ must be divisible by 3. Example ($n=9, j=k=4, m=6$):
Solution family 3
This family is the most interesting of the three, but yields only one solution. It is defined by setting $k=2n/3$ so that $\cos{k\phi}=\cos{\tfrac{\pi}{3}} = \tfrac{1}{2}$. Equation 1 then becomes
$$2\cos{j\phi}\,\cos{(n-\tfrac{m}{2})\phi} = \tfrac{1}{2}$$ which may be written in the following equivalent forms: $$\cos{(n+\tfrac{m}{2}-j)\phi} + \cos{(n+\tfrac{m}{2}+j)\phi} = -\tfrac{1}{2} \tag{2}$$ $$\cos{(n-\tfrac{m}{2}-j)\phi} + \cos{(n-\tfrac{m}{2}+j)\phi} = \tfrac{1}{2} \tag{3}$$ Solutions to these equations can be found using the following theorem relating the roots $z_i(N)$ of the $N$th cyclotomic polynomial to the Möbius function $\mu(N)$:
$$\sum_{i=1}^{\varphi(N)} {z_i(N)} = \mu(N)$$ where $\varphi(N)$ is the Euler totient function (the number of positive integers less than $N$ that are relatively prime to $N$) and $z_i(N)$ are a subset of the $N$th roots of unity. Taking the real part of both sides and using symmetry this becomes: $$\sum_{i=1}^{\varphi(N)/2} { \cos{(p_i(N) \frac{2\pi}{N})} } = \tfrac{1}{2} \mu(N) \tag{4}$$ where $p_i(N)$ is the $i$th integer which is coprime with $N$.
The Möbius function $\mu(N)$ takes values as follows:
$\mu(N) = 1$ if $N$ is a square-free positive integer with an even number of prime factors.
$\mu(N) = −1$ if $N$ is a square-free positive integer with an odd number of prime factors.
$\mu(N) = 0$ if $N$ has a squared prime factor.
Equation 4 thus provides solutions to equations 2 and 3 if $\varphi(N) = 4$, $\mu(N)$ has the appropriate sign and the cosine arguments are matched.
The first two conditions are true for only two integers:
$N=5$, with $\mu(5)=-1$, $p_1(5) = 1, p_2(5) = 2$
$N=10$, with $\mu(10)=1$, $p_1(10) = 1, p_2(10) = 3$.
We first set $N=5$ and look for solutions to equation 2.
Matching the cosine arguments requires firstly that $$2j \frac{\pi}{2n} = (p_2(5)-p_1(5))\frac{2\pi}{5}$$ from which it follows that $$5j = 2n$$
$n$ must be divisible by 3 to satisfy $k=2n/3$, so the smallest value of $n$ for which solutions are possible is $n=15$, with $k=10$ and $j=6$. All other solutions will be multiples of this one. Matching the cosine arguments also requires that $$(n+\tfrac{m}{2}-j) \frac{\pi}{2n} = p_1(5) \frac{2\pi}{5}$$ which implies $m=6$.
This is the solution illustrated by the second example in the question.
Setting $N=10$ and looking for solutions to equation 3 yields the same solution.
Continuation (2)
Solution family 4
A fourth family of solutions can be obtained by writing equation 1 as
$$\cos{(n+\tfrac{m}{2}-j)\phi} + \cos{(n+\tfrac{m}{2}+j)\phi} + \cos{k\phi} = 0 \tag{5}$$
and viewing this as an instance of equation 4 with $\varphi(N)/2 = 3$ and $\mu(N) = 0$. There are two values of N which satisfy these conditions, $N = 9$ and $N = 18$, which lead to three solutions:
For $N = 9$: $$n=9, j=6, k=8, m=2 \\n=9, j=4, k=4, m=6$$
For $N=18$: $$n=9, j=2, k=2, m=6$$
However, these are not new solutions. The first is a member of family 1 and the last two are members of family 2.
Continuation (3)
Solution family 5
Rotating a $2n$-gon about a vertex by an angle $m\phi$ moves its centre by a distance $$2\sin{ \tfrac{m}{2}\phi} = 2\cos{(n-\tfrac{m}{2})\phi} = D_{n-m/2,n}.$$ If $m$ is even the rotated $2n$-gon thus overlaps the original $2n$-gon with integer degree $n-\tfrac{m}{2}$, and a third $2n$-gon with a different $m$ may overlap both of these, providing another type of solution to the problem.
Solutions of this kind may be constructed for all $n \ge 3$. The diagram below includes the complete set of such solutions for $n=5$. A similar diagram with $n=12$ (but with a centrally placed $2n$-gon of the same size which can only be added when $3|n$) is shown above under Solution family 1.
This family of solutions provides exceptions to conjecture 2: not all groups of three $2n$-gons overlapping in this way show mirror symmetry.
Continuation (4)
If we relax the condition set by conjecture 2, allowing solutions without mirror symmetry, we need an additional parameter, $l$, to specify the degree of overlap between O and P (which is now no longer $j$).
The distances between the centres of the three $2n$-gons are now related by the cosine rule:
$$D_{nk}^2 = D_{nj}^2 + D_{nl}^2 - 2 D_{nj}D_{nl}\cos{m_k\phi},$$ where a subscript $k$ has been added to $m$ to acknowledge the fact that $j$, $l$ and $k$ can be cycled to generate three equations of this form. These can be written $$\\ \cos^2{J} + \cos^2{L} - 2 \cos{J} \cos{L} \cos{M_k} = \cos^2{K} \\ \cos^2{K} + \cos^2{J} - 2 \cos{K} \cos{J} \cos{M_l} = \cos^2{L} \\ \cos^2{L} + \cos^2{K} - 2 \cos{L} \cos{K} \cos{M_j} = \cos^2{J}$$ where $$J = j\phi,\, L = l\phi,\, K = k\phi, \\M_j = m_j\phi,\, M_l = m_l\phi,\, M_k = m_k\phi$$
The same result in a slightly different form is derived in the answer provided by @marco trevi.
$M_j$, $M_l$ and $M_k$ are the angles of the triangle formed by the centres of the three polygons. Since these sum to $\pi$ we have $$m_j + m_l + m_k = 2n$$
The sine rule gives another set of relations: $$\frac{\cos{J}}{\sin{M_j}} = \frac{\cos{L}} {\sin{M_l}} = \frac{\cos{K}}{\sin{M_k}}$$
In general the $m$ parameters are limited to integer values (as can be seen by considering the symmetry of the overlap between a $2n$-gon and each of its two neighbours). But they are now not necessarily even.
Answering the easier question about why the points always lie on a regular $2n$-gon. Leaving the more interesting question about listing the possible values of $n$ when this may happen for later (or for somebody else!).
Let us denote $\phi=2\pi/n$. We align the coordinate axes in such a way that the center of $c_A$ is at the origin $O$ and that the positive $x$-axis intersects $c_A$ at a vertex of of $A$. This implies that the points $U,V,X,W$ all have angular polar coordinates that are integer multiples of $\phi$.
Let $L_1$ (resp. $L_2$) be the line through $U$ and $V$ (resp. through $W$ and $X$). The line $L_1$ is perpendicular to the bisector of the angle between $\vec{OU}$ and $\vec{OV}$. Therefore $L_1$ points at the direction that is perpendicular to an integer multiple of $\phi/2$. The same holds for $L_2$. Thus the angle $\theta$ between $L_1$ and $L_2$ is also an integer multiple of $\phi/2$, so $\theta= k\pi/n$ for some integer $k$.
Let $s_1$ (resp. $s_2$) be the reflection w.r.t. $L_1$ (resp. $L_2$). It is well known that the composition $r=s_2\circ s_1$ is a rotation about the point of intersection $Q=L_1\cap L_2$ by the angle $2\theta=2k\pi/n=k\phi$. If this is news to you the animation below may make this clearer. There the black arrow is first reflected w.r.t. the blue line, and the red arrow is the mirror image. For its part the red arrow is then reflected w.r.t. the green line and the orange arrow is its mirror image. The animation tries to convince you that irrespective of which direction the black arrow points at, the angle between it and the red orange arrow is constant (=twice the angle between the blue and green lines). Prove this if you already haven't. It's not difficult!
It is clear that $A=s_1(B)$ and that the $s_2(A)=r(B)$ is a regular $n$-gon circumscribed by $c$. This implies that $r(c_B)=c$. Also, as the angle of rotation $2\theta$ is a multiple of $\phi$, the sides of the regular $n$-gon $r(B)$ are parallel to those of $B$. The figure below hopefully makes it clearer what happened. After we reflected $n$-gon $B$ (red) first w.r.t. line $L_1$ to get the $n$-gon $A$ (green) and then w.r.t. line $L_2$ the resulting $n$-gon (blue) can be gotten from $B$ also by a parallel translation. Quite irrespective of whether the circles $c_B$ and $c$ intersect on vertices of the $n$-gon $B$ or not (in the image they intentionally do not)!
Let $s_3$ is the reflection w.r.t. the line $L_3$ passing through $Y$ and $Z$. If $O_B$ is the center of the circle $c_B$, then the angle $\beta=\angle YO_BZ$ is an integer multiple of $\phi$, say $\beta=\ell\phi$. The angle between $YZ$ and $O_BY$ is thus $$\gamma=\frac\pi2-\frac\beta2=\frac\pi2-\frac{\ell\pi}n=\frac\pi{2n}(n-2\ell).$$ This means that the angle between $L_3$ and the extension of any edge of $B$ is an integer multiple of $\pi/2n=\phi/4$. Thus, under the reflection $s_3$ the directions of those edges change by an integer multiple of $\phi/2$. Therefore the regular $n$-gon $s_3(B)$ is either parallel to $r(B)$ itself or parallel to a version rotated by $\phi/2$ (depending on the parity of $n-2\ell\equiv n\pmod2$). Because $s_3(B)=s_3(s_1(A))$ the $n$-gons $s_3(B)$ and $A$ are parallel. With a little imagination you see in the above figure that the green 11-gon is "half a tick" off synch from the blue and red 11-gons that are parallel to each other.
As both regular $n$-gons, $s_3(B)$ and $r(B)$ are circumscribed by $c$, $r(B)$ contains $W$ and $X$ as vertices, and $s_3(B)$ contains $Y$ and $Z$ as its vertices, the claim follows from this.
Extras that may or may not help in simplifying the above argument or finding the solutions:
Because we get $c$ by rotating $c_B$ about $Q$, the point $Q$ must be equidistant from the centers of $c_B$ and $c$. In other words, $Q$ is on the line $YZ$ (so the lines $L_1$, $L_2$ and $L_3$ intersect at the same point $Q$).
Recalling that $s_3$ is the reflection w.r.t. $YZ$, then $s_3\circ r$ is yet another reflection (as an orientation reversing rigid motion of the plane), call it $s_4$. Clearly $s_4(Q)=Q$ and $s_4(c_B)=c_B$, so $s_4$ must be the reflection w.r.t. line joining $Q$ and the center of $c_B$.
• Thanks, @EwanDelanoy! Editing. – Jyrki Lahtonen Oct 18 '14 at 16:01
• A bounty? Thanks @MvG. I was largely hoping to just answer the OP assuming that the bounty hunt is all about listing the solutions. Will add pictures later. – Jyrki Lahtonen Oct 21 '14 at 9:29
• The way I read the question, the main point was this inscribed $2n$-gon, and your question answers that in an elegant way. For a while I was considering whether I should award it to someone who might be more in need of the extra rep and did some good work as well, but in the end I decided to stick to what I expected from an answer. If you want to, you can still award a bounty of your own to one of the answers regarding enumeration. Some people put quite some effort into that, even though I see no definite final answer yet. Pictures for your post would be nice indeed. – MvG Oct 21 '14 at 9:59
I tried to solve the problem in a more general setting. However, despite my (hard!) effort this is NOT an answer. The reason I am posting it here is because it might shed some further light on this problem and on its general version, which I find very entertaining and interesting. So, if my reasoning could help someone find a nice solution, here it is.
Let $A_m$ and $A_n$ be two regular polygons of $m$ and $n$ sides respectively, each inscribed in a circle of radius $1$. Let's say that they "fit together to order $d$" (invented notation) if $d$ divides both $m$ and $n$.
This corresponds visually to putting the polygons one over the other in such a way that the centers of the corrisponding circumscribed circles coincide and letting one of the vertices of one polygon coincide with a vertex of the other one. Then $d$ is the number of vertices in the picture which belong to both polygons.
The possible "fitting orders" for $A_m$ and $A_n$ are then $1=d_1,d_2,d_3,\dots,d_r=\gcd(m,n)$ where the $d_i$'s are all common divisors of $m$ and $n$. Let's denote this set as $D_{m,n}$.
Now, for the geometrical setting, let's take a circle $C$ with radius $1$ and center $O$ and two chords $\overline{A_1B}_1$ and $\overline{A_2B}_2$. Let $M_1$ and $M_2$ be the midpoint of $\overline{A_1B}_1$, resp. $\overline{A_2B}_2$, and let $\theta$ be the angle $M_1\widehat{O}M_2$, as in the picture below: Reflecting $C$ about $\overline{A_1B}_1$ gives another circle $C_1$; doing the same with $\overline{A_2B}_2$ gives a circle $C_2$. These circles might or might not intersect; let's suppose they do. Then they will meet at two points $P$ and $Q$. Reflecting $P$ about the two chords, we get two points $P_1$ and $P_2$ which lay on the original circle $C$. Doing the same with $Q$, we get two other points $Q_1$ and $Q_2$. By symmetry, one can see that $\overline{PQ}=\overline{P_1Q}_1=\overline{P_2Q}_2$ and also $P\widehat{O}_1Q=P\widehat{O}_2Q=P_1\widehat{O}Q_1=P_2\widehat{O}Q_2$, where $O_1$ and $O_2$ are the centers of $C_1$ and $C_2$. Here's the picture for $P_1$ and $Q_1$:
So, we started with two chords on a circle and ended up with two other chords of equal length in some random position on the circle. Getting back to polygons, given three polygons $A_m,A_n$ and $A_p$ we can choose two chords of $A_m$ (i.e. segments whose endpoints are vertices of the polygon) by selecting $d_1\in D_{m,n}$ and $d_2\in D_{m,p}$ and by deciding what is their position on the polygon. This choice will determine two other chords which, in general will be chords of the circumscribed circle of $A_m$ but not of $A_m$ itself;
the challenge is determining what values of $n$ and $p$ lead to chords which are also chords of $A_m$. A necessary condition for this is that the angle $P\widehat{O}_1Q$ must be of the form $2\pi/d^*$ where $d^*$ is a common divisor to $m,n$ and $p$. This angle $\alpha^*$ is uniquely determined by the distance $\overline{O_1O_2}=L$, because $L/2=\cos(\alpha^*/2)$. On the other side, we have (by the law of cosines) $$L^2=\overline{OO}_1^2+\overline{OO}_2^2-2\overline{OO}_1\ \overline{OO}_2\cos\theta$$ But we have also $$\overline{OO}_1=2\cos\left(\frac{\pi}{d_1}\right)\qquad \overline{OO}_2=2\cos\left(\frac{\pi}{d_2}\right)$$ so $$(L/2)^2=\cos^2(\pi/d_1)+\cos^2(\pi/d_2)-2\cos(\pi/d_1)\cos(\pi/d_2)\cos\theta= \cos^2(\pi/d^*)$$ We have that $\theta$ as well has to be a multiple of $2\pi/m$, so we can set $\theta = 2\pi/d$ with $d$ divisor of $m$.
In the end the problem is finding $d_1,d_2,d^*$ and $d$ such that $$\cos^2(\pi/d_1)+\cos^2(\pi/d_2)-2\cos(\pi/d_1)\cos(\pi/d_2)\cos(\pi/d)= \cos^2(\pi/d^*)$$
This will yeld all the possible combinations of three polygons which will "fit together" in some way. I don't know how to solve this, but I thought to share it with you in case someone could see the solution.
Here's a straightforward proof of the existence of the $2n$-gon.
Ignoring polygons for the time being, take points $W$ and $X$ on $\bigcirc A$ and points $Y$ and $Z$ on $\bigcirc B$, such that $W$, $X$, $Y$, $Z$ all lie on $\bigcirc C$, where all three circles are congruent.
Define $w$, $x$, $y$, $z$ to be the measures of angles made, respectively, by radial segments $\overline{AW}$, $\overline{AX}$, $\overline{BY}$, $\overline{BZ}$ with $\overleftrightarrow{AB}$. (These four measures, as described, are ambiguous; that's okay.) Because $\square AWCX$ and $\square BYCZ$ are rhombi —and therefore also parallelograms— radial segments $\overline{CX}$, $\overline{CW}$, $\overline{CZ}$, $\overline{CY}$ make comparable angles with the line through $C$ parallel to $\overleftrightarrow{AB}$.
Clearly, then, the measure of any $\angle PCQ$, with $P$ and $Q$ in $\{W, X, Y, Z\}$, is some combination of $\pm w$, $\pm x$, $\pm y$, $\pm z$, and $\pm \pi$. (Allowing $\pm \pi$ eliminates any problems with our ambiguously-defined $w$, $x$, $y$, $z$.) Most importantly: If $w$, $x$, $y$, $z$, $\pi$ are multiples of a common value, then those $\angle PCQ$s will be, as well.
Recalling the original context of the problem, we have that $\bigcirc A$ and $\bigcirc B$ meet at vertices of inscribed $n$-gons. Necessarily, $\overleftrightarrow{AB}$ is a line of symmetry for the compound polygonal figure, either passing through the polygons' vertices, or perpendicularly-bisecting the polygons' edges, or both. In any case, radial segments to the vertices of each polygon make angles with $\overleftrightarrow{AB}$ that are multiples of $\pi/n$.
Therefore, if $W$, $X$, $Y$, $Z$ are themselves vertices of these $n$-gons, then $w$, $x$, $y$, $z$ (and $\pi$!) are multiples of $\pi/n$, as are the $\angle PCQ$s. As $\pi/n = 2\pi/(2n)$ is the central angle between neighboring vertices of a $2n$-gon, we have our result.
Note: If $n$ is even, and if the polygons overlap in such a way that they have vertices on the line of symmetry $\overleftrightarrow{AB}$, then $w$, $x$, $y$, $z$ (and $\pi$!) are multiples of $2\pi/n$, so that $W$, $X$, $Y$, $Z$ are vertices of an $n$-gon about $C$.
This geometric problem can be expressed in purely algebraic terms, to be precise it all amounts to a certain $\mathbb Q$- linear relation between roots of unity, and this theme is a classic that has already been studied by Mann, Schoenberg, Conway and others. It follows from what we show below that $n \leq 15$.
Denote by $\Omega_A,\Omega_B,\Omega_C$ the centers of $c_A,c_B,c$ respectively, and $z_A,z_B,z_C$ the corresponding complex numbers (or "affixes" as they are called in French). Similarly, denote by $z_U,z_V,z_W,z_X,z_Y,z_Z$ the affixes of $U,V,W,X,Y,Z$. Let $\zeta$ be a primitive $n$-th root of unity.
We may assume without loss of generality that $z_A=0$. Then there are four integers $u,v,w,x\in[0,n-1]$ such that $z_U=\zeta^{u},z_V=\zeta^{v},z_W=\zeta^{w},z_X=\zeta^{x}$. For the same reason, there are two integers $y,z\in[0,n-1]$ such that $z_Y=z_B-\zeta^{y},z_Z=z_B-\zeta^{w}$.
Since $\Omega_AU\Omega_BV$, $\Omega_AX\Omega_CW$ and $\Omega_BY\Omega_CZ$ are parallelograms, we deduce $z_B=\zeta^{u}+\zeta^{v},z_C=z_W+z_X-z_A=z_Y+z_Z-z_B$.
Since $X,U,V,W$ are four distinct points on $c$, we see that $x,u,v,w$ are pairwise distinct. Similarly, $y,u,v,z$ are pairwise distinct, and $x,y,z,w$ are pairwise distinct. In the end, $x,y,z,u,v,w$ are all distinct.
Combining all those equalities, we see that
$$\begin{array}{lcl} z_B &=& \zeta^u +\zeta^v \\ z_Y &=& \zeta^u +\zeta^v-\zeta^y \\ z_Z &=& \zeta^u +\zeta^v-\zeta^z \\ z_C &=& \zeta^x+\zeta^w = \zeta^u +\zeta^v-\zeta^y-\zeta^z \end{array}\tag{1}$$
So everything reduces to the relation $$\zeta^x +\zeta^y+\zeta^z+\zeta^w-(\zeta^u+\zeta^v)=0 \tag{2}$$
The above is an example of what I call an $U$-relation. It is defined as a double uple $((a_1,a_2,\ldots,a_r),(\xi_1,\xi_2,\ldots,\xi_r))$ satsifying the identity $\sum_{k=1}^{r}a_k\xi_k=0$ where the $a_k$ are nonzero integers and the $\xi_k$ are distinct roots of unity. I call the integer $r$ the length of the $U$-relation. There are three natural operations on $U$-relations : permutation, rotation and multiplication by a constant. I mean by that that $((a_{\sigma(1)},a_{\sigma(2)},\ldots,a_{\sigma(r)}),\ (\xi_{\sigma(1)},\xi_{\sigma(2)},\ldots,\xi_{\sigma(r)}))$ is still an $U$-relation when $\sigma$ is a permutation of the integers between $1$ and $r$, $((a_1,a_2,\ldots,a_r),(\alpha\xi_1,\alpha\xi_2,\ldots,\alpha\xi_r))$ is still an $U$-relation when $\alpha$ is a root of unity, and $((ka_1,ka_2,\ldots,ka_r),(\xi_1,\xi_2,\ldots,\xi_r))$ is still an $U$-relation when $k\in{\mathbb Z},k\neq 0$. The regular $U$-relation in length $r$ is the $U$-relation $((1,1,\ldots,1),(1,\eta,\eta^2,\ldots,\eta^{r-1}))$ where $\eta$ is a primitive $r$-th root of unity. An $U$-relation $((a_1,a_2,\ldots,a_r),(\xi_1,\xi_2,\ldots,\xi_r))$ is said to be reducible if there is a partition $I\cup J$ of $[1,r]$ into two non-empty parts such that $\sum_{k\in I}a_k\xi_k=\sum_{k\in J}a_k\xi_k=0$. It is easy to see that a regular $U$-relation is irreducible iff its length is prime.
Theorem For any $k$, up to permutation, rotation and multiplication by a constant there are only finitely many irreducible $U$-relations of length $k$. In length $\leq 7$, the only non-regular irreducible $U$-relations are
$$((\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4,\epsilon_5,\epsilon_6), (-\epsilon_1\alpha,-\epsilon_2\alpha^2,\epsilon_3\beta,\epsilon_4\beta^2,\epsilon_5\beta^3,\epsilon_6\beta^4)), \tag{3}$$
where all the $\epsilon_k(1\leq k\leq 6)$ are $\pm 1$, $\alpha$ is a primitive third root of unity and $\beta$ is a primitive fifth root of unity.
Proof of theorem : see Henry B. Mann, "On linear relations between roots of unity", Mathematika 12(1965), pp.107-117.
Corollary. If we denote by $(t_1,t_2,t_3,t_4,t_5,t_6)$, the second part of (3), up to permutation and rotation there are exactly $\binom{6}{2} \times \frac{4!}{8}=15\times6=90$ solutions to (2) (with $x,y,z,u,v,w$ pairwise distinct), all of which come from (3) : they can be described by $$\lbrace \zeta^u,\zeta^v \rbrace= \lbrace -t_i,-t_j \rbrace, \lbrace \zeta^x,\zeta^y,\zeta^z,\zeta^w \rbrace= \bigg\lbrace t_k \ \bigg| \ k\neq i, k\neq j \bigg\rbrace, \ \ 1\leq i \lt j \leq 6 \tag{4}$$
Proof of corollary By theorem, all irreducible $U$-relations of length $<6$ are regular, and so have all their coefficients of the same sign. So if (2) were not an irreducible relation, it would necessarily follows that $\zeta^u+\zeta^v=0$, so $U$ and $V$ are diametrically opposed, hence $c_B=c$ which is impossible. So (2) must be irreducible. Since it has coefficients of differents signs, it does not come from the regular $U$-relation in length $6$. So it must come from (3), which yields (4).
To count the solutions, note that there are $\binom{6}{2}$ possible values for $\lbrace i,j\rbrace$ in (4), and also that in counting the possibilities for $(\zeta^x,\zeta^y,\zeta^z,\zeta^w)$, we obtain repetitions by interchanging $x$ and $w$, by interchanging $y$ and $z$, or by interchanging $\lbrace x,w\rbrace$ and $\lbrace y,z\rbrace$. The subgroup of ${\mathfrak S}(x,y,z,w)$ fixing a given configuration has cardinality $8$.
Looks like false to me...
For $n=4$, which seems to satisfy $n>3$ requirement, the $A$ and $B$ squares can not intersect at opposite vertices (because they would be the same, $A=B$), so they'll have to share one side. Then there's no $X$ or $Y$ vertex between $U$ and $V$ where $C$ octagon might cross $A$ and $B$...
• Apaprently you misunderstood the question. Obviously there is no solution with $n=4$, nobody suggested otherwise – Ewan Delanoy Oct 20 '14 at 10:19
• For $n=4$, you can't satisfy the preconditions, i.e. you can't have a third circle intersecting the two first polygons in four distinct vertices. When the precondition cannot be met, the conclusion, namely that you can inscribe a $2n$-gon into that circle, is irrelevant. $n=6$ seems to be the first instance where the precondition can be met, and the conclusion is valid there as well. – MvG Oct 20 '14 at 10:21
• @MvG That's it! Look at the question: '$A$ and $B$ are two regular $n$-gons such that $n > 3$, which are inscribed into...' So if one can't satisfy preconditions within given assumptions, then assumptions are too weak, the supposed circle $c$ may not exist so the conjecture is false. – CiaPan Oct 20 '14 at 10:46
• @CiaPan The statement: "If false then $P$" is true irrespective of what $P$ claims. Here the conjecture claims anything only, when the circle $c$ and the points $U,V,W,X,Y,Z$ exist. Read it again: "Given all of these details..." – Jyrki Lahtonen Oct 20 '14 at 18:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.9119942784309387, "perplexity": 154.1166371991542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653876.31/warc/CC-MAIN-20191014150930-20191014174430-00492.warc.gz"} |
http://mathhelpforum.com/calculus/51461-non-homogeneous-system-odes-need-particular-solution.html | # Math Help - Non-homogeneous system of ODEs (need particular solution)
1. ## Non-homogeneous system of ODEs (need particular solution)
Hi,
I need to find the particular solution to the following system of equations using the method of undetermined coefficients.
dx/dt = y + e^t
dy/dt = -2x + 3y + 1
The problem is the forcing terms of the two diff equations are different, that is one is polynomial and the other an exponential. I have no idea what you "guess" the particular solution set to be in this situation.
Any help will be GREATLY appreciated!
Thank you!
2. You can decouple them right?
$x'=y+e^2$
$y'=-2x+3y+1\Rightarrow y''=-2x'+3y'+1$
or:
$y''-3y'+2y=1-2e^t$
Solve that one, plug it into the first one, then solve that one. Looks to me anyway.
3. yes shawsend, that can be done.. but I really doubt we're allowed to change the diff equations to higher orders.. that's how the rest of the Qs were done.
For example if you had a system like
dx/dt = x + 3y + 2e^(4t)
dy/dt = 2x + 2y - e^(4t)
the particular solution would be of the form c*t*w*e^(4t) + u*e^(4t) where w is an eigenvector of the coefficient matrix of the system and 4 is an eigenvalue. The "coefficients" we have to find are c and the elements of vector u.. say u = (a,b)^T.. so a, b and c
4. Yea, Ok. Need to look at it this way then:
$\binom{x}{y}^{'}=\left(\begin{array}{cc}\phantom{-}0&1\\-2&3\end{array}\right)\binom{x}{y}+\binom{e^t}{1}$
I know, that doesn't help you but it looks nice I think. First step as you know is to find the general solution for homogeneous system. Got that?
This problem is in Rainville and Bedient, p. 270. It's a good book on differential equations I think. That and Blanchard, Hall and Devaney if you're serious about DEs. Couple more.
What's the next step in matrix form?
5. yeah thanks, that's what i was talking about.
I got the homogeneous solution.
But the problem is I've only ever done systems when the forcing terms are the same term.. like 2e^(2t) and -5e^(2t). Then I can guess the particular solution and find the coefficients.
But in this case the two forcing terms are of different functional form. i.e. one polynomial and one exponential... what would the particular solution be then?
Also, does Rainville and Bedient give a full solution to this problem?
thanks
6. So the general solution to the homogeneous system is:
$\binom{x}{y}_{\hspace{-3pt}c}=a_1\binom{1}{1}e^{t}+a_2\binom{1}{2}e^{2t}$
Now we do what the alien told professor Barnard in The day the earth stood still'': use variation of parameters. We then seek a solution of the form:
$\binom{x}{y}_{\hspace{-3pt}p}=a_1(t)\binom{1}{1}e^{t}+a_2(t)\binom{1}{2}e ^{2t}$
Substitute this particular solution into the original DE:
$a'_1(t)\binom{1}{1}e^t+a'_2(t)\binom{1}{2}e^{2t}=\ binom{e^t}{1}$
You can now use Cramer's rule to find $a'_1(t)$ and $a'_2(t)$.
Yes, the problem is solved in Rainville and Bedient. Blanchard, Devaney and Hall go into great detail of systems offering insight into the underlying dynamics; very nice perspective of the world that few see in my opinion.
7. can it also be done using the method of undetermined coefficients? Coz that's what the question asks us to do "use the method of undetermined coefficients"
Thanks
8. Originally Posted by hashi
can it also be done using the method of undetermined coefficients?
Thanks
uh . . . got Kreyszig? That's another one I think is good to have. You know, "Advanced Engineering Mathematics". Anyway, he goes through an example using undetermined coefficients, p. 184. Would take time for me also to review it. Hopefully you can find the book in the library and get two birds with one stone (it's a good book to know about, and solve the problem).
9. Here's my solution via undetermined coefficients. Can't say I'm tops with this but this is what I got: The homogeneous analog of equation:
$
\textbf{X}'=\textbf{A}\textbf{X}+\textbf{G}=\left( \begin{array}{cc}\phantom{-}0 & 1 \\ -2&3\end{array}\right)\binom{x}{y}+\binom{e^t}{1}$
has the solution:
$\binom{x}{y}_c=a_1\binom{1}{1}e^t+a_2\binom{1}{2}e ^{2t}$
Note that $\lambda=1$ is an eigenvalue of the coefficient matrix and the non-homogeneous part has an $e^t$ factor. Thus like repeated roots in an ordinary ODE, we must consider $te^t$ as a component of the particular solution as well as $e^t$ and a constant vector. Thus I'll solve for the column vectors of:
$y_p=\binom{x}{y}_p=\textbf{U}+\textbf{V}e^t+\textb f{W}te^t=\binom{u_1}{u_2}+\binom{v_1}{v_2}e^t+\bin om{w_1}{w_2}te^t$. Substituting this into the DE:
$\textbf{V}e^t+\textbf{W}(te^t+e^t)=\textbf{A}\text bf{U}+\textbf{A}\textbf{V}e^t+\textbf{A}\textbf{W} te^t+\binom{e^t}{1}$
Equating coefficients for the constant terms, $e^t$, and $te^t$ terms I get the following sets of equations:
\begin{aligned}u_2&=0 & w_1&=w2 & v_1+w_1&=v_2+1 \\
2u_1+3u_2&=-1 & w_2&=-2w_1+3w_2 & v_2+w_2&=-2v_1+3v_2
\end{aligned}
From these I can write a particular solution as:
$y_p=\binom{1/2}{0}+\binom{0}{1}e^t+\binom{2}{2}te^t$
10. ok that helped heaps and heaps!!
Thank you very very much!!!!
11. Hello..I have to solve this question..actually I don't know how I can do this kind of exercise with three parameters.
Reduce the system of linear higher order equations
sint(d^2x/dt^2)-3(dy/dt)+2(e^t)=tanht
2(d^2y/dt^2)+4(t^1/2)(dx/dt)-9x+4y=cosht
to a linear system of first order equations. Make sure to give the matrix A(t) and the
inhomogeneous term H(t).
Any help it will be appreciated..
Thank you | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8618202209472656, "perplexity": 555.498252576207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737935292.75/warc/CC-MAIN-20151001221855-00209-ip-10-137-6-227.ec2.internal.warc.gz"} |
http://www.mathjournals.org/jot/2005-053-002/2005-053-002-002.html | Previous issue · Next issue · Most recent issue · All issues
# Journal of Operator Theory
Volume 53, Issue 2, Spring 2005 pp. 251-272.
Schur multiplier projections on the von Neumann-Schatten classes
Authors Ian Doust (1) and T.A. Gillespie (2)
Author institution: (1) School of Mathematics, University of New South Wales, Sydney, NSW 2052, Australia
(2) Department of Mathematics and Statistics, University of Edinburgh, James Clerk Maxwell Building, The King's Buildings, Edinburgh EH9~3JZ, Scotland
Summary: For $1 \leqslant p < \infty$ let $C_p$ denote the usual von~Neumann-Schatten ideal of compact operators on $\ell^2$. The standard basis of $C_p$ is a conditional one and so it is of interest to be able to identify the sets of coordinates for which the corresponding projection is bounded. In this paper we survey and extend the known classes of bounded projections of this type. In particular we show that some recent results from spectral theory allow one to prove boundedness of a projection by checking simple geometric conditions on the associated set of coordinates.
Contents Full-Text PDF | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913487195968628, "perplexity": 613.6723446115941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948549.21/warc/CC-MAIN-20180426203132-20180426223132-00018.warc.gz"} |
http://cms.math.ca/cmb/kw/cofinite%20modules | Canadian Mathematical Society www.cms.math.ca
location: Publications → journals
Search results
Search: All articles in the CMB digital archive with keyword cofinite modules
Expand all Collapse all Results 1 - 2 of 2
1. CMB 2012 (vol 56 pp. 491)
Bahmanpour, Kamal
A Note on Homological Dimensions of Artinian Local Cohomology Modules Let $(R,{\frak m})$ be a non-zero commutative Noetherian local ring (with identity), $M$ be a non-zero finitely generated $R$-module. In this paper for any ${\frak p}\in {\rm Spec}(R)$ we show that $\operatorname{{\rm injdim_{_{R_{\frak p}}}}} H^{i-\dim(R/{\frak p})}_{{\frak p}R_{\frak p}}(M_{\frak p})$ and ${\rm fd}_{R_{\p}} H^{i-\dim(R/{\frak p})}_{{\frak p}R_{\frak p}}(M_{\frak p})$ are bounded from above by $\operatorname{{\rm injdim_{_{R}}}} H^i_{\frak m}(M)$ and ${\rm fd}_R H^i_{\frak m}(M)$ respectively, for all integers $i\geq \dim(R/{\frak p})$. Keywords:cofinite modules, flat dimension, injective dimension, Krull dimension, local cohomologyCategory:13D45
2. CMB 2011 (vol 55 pp. 81)
Divaani-Aazar, Kamran; Hajikarimi, Alireza
Cofiniteness of Generalized Local Cohomology Modules for One-Dimensional Ideals Let $\mathfrak a$ be an ideal of a commutative Noetherian ring $R$ and $M$ and $N$ two finitely generated $R$-modules. Our main result asserts that if $\dim R/\mathfrak a\leq 1$, then all generalized local cohomology modules $H^i_{\mathfrak a}(M,N)$ are $\mathfrak a$-cofinite. Keywords:cofinite modules, generalized local cohomology modules, local cohomology modulesCategories:13D45, 13E05, 13E10
© Canadian Mathematical Society, 2014 : https://cms.math.ca/ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9332507848739624, "perplexity": 1228.1112418084713}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663612.31/warc/CC-MAIN-20140930004103-00298-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://mathhelpforum.com/latex-help/221244-exact-sequences-diagrams-commute-vertical-arrows.html | # Math Help - Exact Sequences - Diagrams that 'commute' - vertical arrows
1. ## Exact Sequences - Diagrams that 'commute' - vertical arrows
I am reading Dummit and Foote on Exact Sequences and some of the 'diagrams that commute' have vertical arrows.
I have given an example in the attachment "Exact Sequences - Diagrams with Vertical Arrows" - where I also frame my question (please see attachement)
I have also attached Dummit and Foote Section 10.5 page 381 which has two examples of the diagrams to which I refer.
Would be very grateful for help.
Peter
2. ## Re: Exact Sequences - Diagrams that 'commute' - vertical arrows
Hello, Bernhard!
Here's one way . . .
$\begin{array}{ccccccccc} &&& \alpha && \beta && \gamma \\ O & \to & A & \to & B & \to & C & \to & O \\ && \quad\downarrow \phi && \quad\downarrow\psi && \quad\downarrow\tau \\ O & \to & A' & \to & B' & \to & C' & \to & O \end{array}$
Code:
\begin{array}{ccccccccc} &&& \alpha && \beta && \gamma \\
O & \to & A & \to & B & \to & C & \to & O \\
O & \to & A' & \to & B' & \to & C' & \to & O \end{array} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985387325286865, "perplexity": 1308.9513104180894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066586.13/warc/CC-MAIN-20150827025426-00083-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://grenouillebouillie.wordpress.com/2007/09/18/why-dont-you-tell-us-what-you-found/ | # Why don’t you tell us what you found?
Dear Sir, Your posts are exactly like the hundreds of other crackpots on the web. A theory of physics must predict something, you are just babbling. Why don’t you tell us what your big theory is instead of incoherent ramblings about the state of physics?
Well, one way to share idea is by writing papers, which I did. But maybe it would be a good idea to summarize the general ideas here.
The starting point is the following: mathematical entities in physics are not arbitrary, they are intended to model or predict the result of measurements. Therefore, it is interesting to define what a measurement is in physics. I suggest a definition in 6 postulates.
1. A physical process
2. with known input and output
3. repeatable (in other words, with stable results)
4. gathering information about its input
5. represented by a change in its output
6. that we can give a symbolic interpretation
Eliminate any of these postulates, and you have something that is not a measurement.
#### We can reason about these postulates
The second idea is that these postulates are strong. As long as you add the observation that there are measurements, you can deduce something about their behavior. For example, the fact that a measurement must be repeatable means that if I measure the length of a solid and find 5 cm, and then measure again, I must find 5cm again. This in turns means that if I use a quantum-mechanical formalism based on Hilbert spaces, the state representing the system immediately after the first measurement must be an eigenvector (otherwise, you might measure a different value). Therefore, that third postulate here implies the collapse of the wave-function, one of the axioms of quantum mechanics generally considered as unintuitive.
But if this axiom of quantum mechanics is seen as a consequence of the postulate instead of as an ad-hoc statement, this has consequences as well, some of them very measurable and testable with experiment. Notably, in traditional quantum mechanics, the collapse of the wave function is often said to be instantaneous (more recent work is more nuanced in that respect). By contrast, in the TIM, the collapse happens gradually as the measurement instrument converges, and the fully collapsed state is, in most cases, an idealized limit.
So some of the axioms of quantum mechanics may turn out to be weaker than others. The kind of reasoning above may lead us to tweak them, to make minor adjustments.
#### Discrete versus continuous
Another remark is that a physical measurement apparatus has a finite resolution. Therefore, we may build nice continuous mathematical models of things, and in quantum mechanics, use for example infinite-dimensional Fock spaces. But in physical reality, when we get back to experiment, we are only predicting the probabilities of the outcome among a finite set. So the question of the relationship between the continuous model and the discrete experiments has to be addressed.
One reason this is important is that going to the continuous limit is often where divergences arise. A well known example of this is the traditional law of gravitation. This is an inverse square law, so there is a singularity around distance zero. However, a physical measurement instrument cannot reach this distance zero continuously. You can split a metering rod in two, and then in two again, but after a few iterations, you cannot split it anymore without losing the physical properties that make it a valid measuring instrument.
So I think that if we can understand the relationship between continuous and discrete better, we stand a good chance to understand why some laws diverge in our mathematical models when they apparently don’t in the real world. A large fraction of my paper is dedicated to understanding what the continuous models actually mean respective to their discrete physical counterparts. A particular result is that I suggest that a discrete and finite, as opposed to continuous and infinite, normalization condition for the wave-function would allow us to build a better approximation of the real world.
One reason this would be a better approximation is because quantum mechanics, as traditionally formulated, does not incorporate the limits of the measurement instruments. If you detect a particle using a 10x10cm detector, quantum mechanics gives precise predictions of the probability of finding the particle at coordinates (x,y), irrespective of the values of x and y, including whether (x,y) is in the detector or not. The theory of incomplete measurements, by contrast, requires a normalization condition which is practically identical to quantum mechanics inside the detector, but has only a single probability for anything outside the detector. In layman’s terms: if the particle missed the detector, there is little point making statistical predictions about where it will be found.
#### Quantum mechanics seen as probabilistic predictions
Provided a few “convenience” ingredients are added to the recipe along the way (e.g. we tend to pick linear graduations rather than random ones because it makes mathematics on the results of measurements simpler), it turns out that practically all the axioms of quantum mechanics can be reconstructed from the six postulates above. The missing one is the “fundamental equation”, something equivalent to specifying a Lagrangian or qction or Schrödinger equation.
One result that I personally like a lot is explaining why the wave-function can be represented as a normalized complex function of the spatial coordinates. This can be explained relatively well, and it also clarifies what a “particle” is in my opinion. Here is a sketch of the construction.
The predictions you can make about future measurements are, by construction, probabilistic, i.e. 30% of chances you will get A and 70% you will get B. What you already know about the system with respect to any particular measurement can be entirely summarized by these probabilities. Since the sum of the probabilities for all possible outcomes must be 1, and since each probability must be greater than 0, we can write individual probabilities as squares, , and write the condition that the sum of probabilities be one as .
If you try to detect a particle, an individual detector will give you two results: found or not-found. So the representation of the probabilities is a pair of numbers verifying . We can also represent such a pair using a unit complex number .
But if you want to detect the trajectory of a particle, you now need a grid of detectors. Each detector has its own probability represented as above, but the probabilities are not independent, because assuming there is a single particle, there is an additional condition that at any given time, only one detector will find it. That’s not magic, that’s just how we know that there is one particle. I will leave it as an exercise to the reader to imagine what this “field of probabilities” would look like…
#### Conclusion
I hope that this short exposition demonstrates that I can explain my “theory” to 15-years old kids and have a chance to be understood. So this is what I think I found: an explanation of quantum mechanics that I can teach to my kids without having them frown at me like “dad, are you insane?“. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8984695076942444, "perplexity": 322.22440828780304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542323.80/warc/CC-MAIN-20161202170902-00074-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://arxiv.org/abs/1708.09042 | q-bio.NC
(what is this?)
# Title: Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks
Abstract: The $1/f$-like decay observed in the power spectrum of electro-physiological signals, along with scale-free statistics of the so-called neuronal avalanches, constitute evidences of criticality in neuronal systems. Recent in vitro studies have shown that avalanche dynamics at criticality corresponds to some specific balance of excitation and inhibition, thus suggesting that this is a basic feature of the critical state of neuronal networks. In particular, a lack of inhibition significantly alters the temporal structure of the spontaneous avalanche activity and leads to an anomalous abundance of large avalanches. Here we study the relationship between network inhibition and the scaling exponent $\beta$ of the power spectral density (PSD) of avalanche activity in a neuronal network model inspired in Self-Organized Criticality (SOC). We find that this scaling exponent depends on the percentage of inhibitory synapses and tends to the value $\beta = 1$ for a percentage of about 30%. More specifically, $\beta$ is close to $2$, namely brownian noise, for purely excitatory networks and decreases towards values in the interval $[1,1.4]$ as the percentage of inhibitory synapses ranges between 20 and 30%, in agreement with experimental findings. These results indicate that the level of inhibition affects the frequency spectrum of resting brain activity and suggest the analysis of the PSD scaling behavior as a possible tool to study pathological conditions.
Subjects: Neurons and Cognition (q-bio.NC); Statistical Mechanics (cond-mat.stat-mech) Journal reference: Chaos 27, 047402 (2017) DOI: 10.1063/1.4979043 Cite as: arXiv:1708.09042 [q-bio.NC] (or arXiv:1708.09042v1 [q-bio.NC] for this version)
## Submission history
From: Fabrizio Lombardi [view email]
[v1] Tue, 29 Aug 2017 21:54:17 GMT (2229kb,D) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9046245217323303, "perplexity": 1615.9959123114488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583508988.18/warc/CC-MAIN-20181015080248-20181015101748-00343.warc.gz"} |
https://proofwiki.org/wiki/Equivalence_of_Definitions_of_Prime_Ideal_of_Commutative_and_Unitary_Ring | # Equivalence of Definitions of Prime Ideal of Commutative and Unitary Ring
## Theorem
The following definitions of the concept of Prime Ideal of Commutative and Unitary Ring are equivalent:
### Definition 1
A prime ideal of $R$ is a proper ideal $P$ such that:
$\forall a, b \in R : a \circ b \in P \implies a \in P$ or $b \in P$
### Definition 2
A prime ideal of $R$ is a proper ideal $P$ of $R$ such that:
$I \circ J \subseteq P \implies I \subseteq P \text { or } J \subseteq P$
for all ideals $I$ and $J$ of $R$.
### Definition 3
A prime ideal of $R$ is a proper ideal $P$ of $R$ such that:
the complement $R \setminus P$ of $P$ in $R$ is closed under the ring product $\circ$.
## Proof
Let $\struct {R, +, \circ}$ be a commutative and unitary ring throughout.
### $(1)$ implies $(2)$
Let $P$ be a prime ideal of $R$ by definition 1.
Then by definition:
$\forall a, b \in R : a \circ b \in P \implies a \in P$ or $b \in P$
Let $I \circ J \subseteq P$.
Aiming for a contradiction, suppose that both $I \not \subseteq P$ and $J \not \subseteq P$.
Then by definition of subset:
$\exists a \in I \setminus P, b \in J \setminus P$
But by definition of subset product
$a \circ b \in P$ as $I \circ J \subseteq P$
Thus we have $a, b \in P$ such that:
$a \circ b \in P$ where $a \notin P$ and $b \notin P$
But this contradicts the criterion for $P$ to be a prime ideal of $R$ by definition 1
Thus by Proof by Contradiction, either $I \subseteq P$ or $J \subseteq P$,
Thus $P$ is a prime ideal of $R$ by definition 2.
$\Box$
### $(2)$ implies $(1)$
Let $P$ be a prime ideal of $R$ by definition 2.
Then by definition:
$I \circ J \subseteq P \implies I \subseteq P \text { or } J \subseteq P$
for all ideals $I$ and $J$ of $R$.
Let $a, b \in R$ such that $a \circ b \in P$.
Consider $\ideal a$ and $\ideal b$, the ideals generated by $a$ and $b$.
Let $r \in \ideal a, s \in \ideal b$.
Then:
$\exists m, n \in R: r = m \circ a, s = n \circ b$
Therefore:
$\ds r \circ s$ $=$ $\ds \paren {m \circ a} \circ \paren {n \circ b}$ $\ds$ $=$ $\ds \paren {m \circ n} \circ \paren {a \circ b}$ Definition of Commutative Ring $\ds$ $\in$ $\ds P$ Definition of Ideal of Ring: $m \circ n \in R$, $a \circ b \in P$
This shows that $\ideal a \circ \ideal b \subseteq P$.
By definition 2, $\ideal a \subseteq P$ or $\ideal b \subseteq P$.
This implies $a \in P$ or $b \in P$.
Thus $P$ is a prime ideal of $R$ by definition 1.
$\Box$
### $(1)$ implies $(3)$
Let $P$ be a prime ideal of $R$ by definition 1.
Then by definition:
$\forall a, b \in R : a \circ b \in P \implies a \in P$ or $b \in P$
Aiming for a contradiction, suppose $R \setminus P$ is not multiplicatively closed.
That is:
$\ds \exists a, b \in R \setminus P: \,$ $\ds a \circ b$ $\notin$ $\ds R \setminus P$ $\ds \leadsto \ \$ $\ds a \circ b$ $\in$ $\ds P$ $\ds \leadsto \ \$ $\ds a$ $\in$ $\ds P$ $\, \ds \lor \,$ $\ds b$ $\in$ $\ds P$
But this contradicts the assertion that $a, b \in R \setminus P$.
Thus by Proof by Contradiction $R \setminus P$ is multiplicatively closed.
Thus $P$ is a prime ideal of $R$ by definition 3.
$\Box$
### $(3)$ implies $(1)$
Let $P$ be a prime ideal of $R$ by definition 3.
Then by definition:
the complement $R \setminus P$ of $P$ in $R$ is closed under the ring product $\circ$.
Aiming for a contradiction, suppose let $a \circ b \in P$ such that $a \notin P$ and $b \notin P$.
Then:
$a, b \in R \setminus P$
by definition of relative complement.
But $R \setminus P$ is closed under the ring product $\circ$.
That means:
$\forall a, b \in R \setminus P \implies a \circ b \in R \setminus P$
But this contradicts the assertion that $a \circ b \in P$.
Thus by Proof by Contradiction either $a \in P$ or $b \in P$ (or both).
Thus $P$ is a prime ideal of $R$ by definition 1.
$\blacksquare$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9976248741149902, "perplexity": 137.4239486817892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00216.warc.gz"} |
https://www.studypug.com/sg/sg-higher2/limits/squeeze-theorem | # Squeeze theorem
So far, we have looked at how to both find and evaluate limits for simple and more complex functions. However, as most topics in mathematics go, there will be times when evaluating limits will become impossible without using other techniques. That is where “Squeeze Theorem” comes in handy.
## What is the Squeeze Theorem
Before we get into the mathematical Squeeze Theorem definition, let’s first think of the concept in more familiar terms. Imagine I am a runner along with two of my friends, John and David. Instead of knowing how far I run each time, I know my distance compared to John and David according to the following:
1) I always run equal to or further than John.
2) I always run less than or equal to David.
Let’s say that one day myself, John, and David all go out running. On this particular day, John and David both run 3km. Now, the question is, how far did I run? If we answer this question according to the two conditions stated above, we will arrive at the only possible answer – 3km. Now let’s break this down algebraically so you can see this clearly:
- Let the distance I run be represented by M(x)
- Let the distance John runs be represented by J(x)
- Let the distance David runs be represented by D(x)
Now, according to the two conditions above:
$J(x) \leq M(x) \leq D(x)$
And if both John and David ran 3km each:
$3 \leq M(x) \leq 3$
This leads us to our answer of 3km – which is the only value that can satisfy this above equality. It was the functions that described the distance ran by both John and David that “squeezed” or “sandwiched” my distance function, and allowed us to find the answer without knowing a thing about the distance I ran. This is the idea behind “Squeeze” or “Sandwich” Theorem – it allows us to calculate the limit of a function using two other, more simple functions, when other methods aren’t useful. For a more algebraic-based Squeeze Theorem proof, if you’re interested, look here.
## How to Use Squeeze Theorem
In mathematical terms, Squeeze Theorem is defined by the following:
$g(x) \leq f(x) \leq h(x)$
$lim_{n \to A} \;g\left( x \right)=$ $lim_{n \to A} \;h\left( x \right)=L$
Therefore: $lim_{n \to A} \;f\left( x \right)=L$
Basically, for a given inequality of three terms such as the one above – the limit of f(x) term is equal to the limit of g(x) and h(x) if those limits are equal to each other. Thereby, if we can evaluate the limit of both g(x) and h(x), and they equal each other, we can easily find the limit of f(x). Now, before we look at some concrete examples of how and when to do squeeze theorem, let’s first review how to find and evaluate limits.
## How to Evaluate Limits
Finding limits may seem like an intimidating process, especially when we are dealing the concept of infinity. The best way to review the process of how to find the limit of a function is to do an example problem:
Example:
Find: $lim_{x \to \infty } \;\frac{{{16x^3} - 4}}{{x^7 + 200}}$
Step 1: Eliminate Unnecessary Constants
Because we are subbing in infinity for x (not actually – but we’ll get to that later) the first step is to ignore the constants, as they will not change during this substitution. This makes our expression much more simple and easy to work with when we try to find the limit.
$lim_{x \to \infty } \;\frac{{{16x^3} }}{{x^7}}$
Step 2: Evaluate the Limit
Now that we have simplified our expression, we can now evaluate the limit. Since infinity is not actually a number, we can’t substitute it in for x – as alluded to in the previous step. What we can do, however, is imagine how our expression will change as we substitute larger and larger numbers in for x. This can be done by comparing the degrees of the numerator and denominator of our expression. Let’s look at our expression again:
$lim_{x \to \infty } \;\frac{{{16x^3} }}{{x^7}}$
As you can see, our numerator has a degree of 3, and our denominator has a degree of 7. Thus, since the degree is higher on the denominator, our denominator will quickly outpace the growth of our numerator. Therefore, as x gets bigger and bigger, our numerator will essentially become 0 compared to our denominator, and thus the limit of this function as x approaches infinity is 0.
$lim_{x \to \infty } \;\frac{{{16x^3} }}{{x^7}}$ $= \frac{0}{x^7} = 0$
There! Now that we have reviewed how to find the limit, let’s get back to how to do Squeeze Theorem.
## How to Do Squeeze Theorem
Though Squeeze Theorem can theoretically be used on any set of functions that satisfy the above conditions, it is particularly useful when dealing with sinusoidal functions. As with most things in mathematics, the best way to illustrate how to do Squeeze Theorem is to do some Squeeze Theorem problems.
Example 1:
Find $lim_{x \to \infty } \;\frac{{{\cos x} }}{{x}}$
Before we get into solving this problem, let’s first consider why using Squeeze Theorem is necessary in this case. The simple answer to this is that we cannot possibly compute the cos of infinity. Because infinity is not actually a number, there is nothing we can substitute in for x in order to find this limit. For this reason, we must use Squeeze Theorem.
Step 1: Make an Inequality
Because of the nature of the cosine function: $-1 \leq \cos x \leq 1$
Step 2: Modify the Inequality
You’ll notice that our inequality is insufficient for solving this problem, as we are asked to find the limit of cosx/x, not cosx alone. So we need to make some modifications. Dividing the entire inequality by x gives us what we are asked to solve.
$-\frac{1}{x} \leq \frac{\cos x}{x} \leq \frac{1}{x}$
Step 3: Evaluate the Left and Right Hand Limits
Remember – thought the problem has asked us to solve the limit of cosx/x for when x moves to infinity, we will do so by finding the limit of the other two functions we have created in the inequality.
$lim_{x \to \infty } \;-\frac{1}{x} =$ $lim_{x \to \infty } \;\frac{1}{x}=0$
NOTE: If the left and right hand limits do not equal each other, we cannot utilize Squeeze Theorem.
Step 4: Apply the Squeeze Principle
Since we have now found the limit of both the left and right hand equations from our inequality, and they are equal to each other, we can use Squeeze Theorem to determine:
$lim_{x \to \infty } \;\frac{\cos x}{x} =0$
That’s it! We’ve successfully used Squeeze Theorem to find the limit of this function. Let’s try some more complicated examples.
Example 2:
Find $lim_{x \to \infty } \;\frac{x(\sin x+ \cos^3 x)}{(x^4+2)(x-7)}$
Step 1: Make an Inequality
Because of the nature of the cosine function: $-1 \leq \cos x \leq 1$
Step 2: Modify the Inequality
Again, you’ll notice that our inequality is insufficient for solving this problem, so we need to make quite a few modifications.
Cubing the inequality: $-1 \leq \cos^3 x \leq 1$
Add the inequality $-1 \leq \sin x \leq 1$: $-2 \leq \sin x+ \cos^3 x \leq 2$
Multiply by x: $-2x \leq x(\sin x+ \cos^3 x) \leq 2x$
Divide by $(x^4+2)$: $\frac{-2x}{(x^4+2)} \leq \frac{x(\sin x + \cos^3 x)}{(x^4+2)} \leq \frac{2x}{(x^4+2)}$
Divide by $(x-7)$: $\frac{-2x}{(x^4+2)(x-7)} \leq \frac{x(\sin x + \cos^3 x)}{(x^4+2)(x-7)} \leq \frac{2x}{(x^4+2)(x-7)}$
Now we have successfully constructed the function we were asked to find the limit of!
Step 3: Evaluate the Left and Right Hand Limits
Since these functions are much more complex than in the previous example, let’s evaluate the left and right hand limits individually.
Left: $lim_{x \to \infty } \;\frac{-2x}{(x^4+2)(x-7)}=0$
Using what we know about evaluating limits, hopefully you’ll notice that as x gets bigger and bigger, our denominator will quickly grow much bigger than our numerator, as $x^5$ >> $x$. This will bring our function closer and closer to zero.
Right: $lim_{x \to \infty } \;\frac{-2x}{(x^4+2)(x-7)}=0$
The same logic can be used to find that the right hand limit also goes to zero.
Step 4: Apply the Squeeze Principle
Since we have again found the limit of both the left and right hand equations from our inequality, and they are equal to each other, we can use Squeeze Theorem to determine:
$lim_{x \to \infty } \;\frac{x(\sin x+ \cos^3 x)}{(x^4+2)(x-7)}=0$
### Squeeze theorem
In this section, we will learn about the intuition and application of the squeeze theorem (also known as the sandwich theorem). We will recognize that by comparing a function with other functions which we are capable of solving, we can evaluate the limit of a function that we couldn't solve otherwise, using the algebraic manipulation techniques covered in previous sections.
#### Lessons
• 1.
intuition behind the “Squeeze Theorem”…
• 2.
Prove that $lim_{x \to 0} \;{x^{10}}\cos \frac{{3\pi }}{x} = 0$
• 3.
If $5 \le g\left( x \right) \le 4{x^3} + 9{x^2} - x - 1$, find $lim_{x \to - 2} \;g\left( x \right)$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 35, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9410926103591919, "perplexity": 298.96343621710713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.36/warc/CC-MAIN-20180220204917-20180220224917-00608.warc.gz"} |
https://www.physicsforums.com/threads/diff-eq-problem.231979/ | # Diff Eq problem
1. Apr 28, 2008
### hils0005
[SOLVED] Diff Eq problem
1. The problem statement, all variables and given/known data
y'' - 3y' - 4y= 5e^-x - 3x^2 + 7
2. Relevant equations
I think I would need to find complimentary solution, then the particular solution using variation of parameters
y=y(c) + y(p)
3. The attempt at a solution
y(c)=
r^2-3r-4=0
(r-4)(r+1)=0, r=4,-1
y(c)=c(1)e^4x+c(2)e^-x
this is where I get stuck as I do not know what to use for y(p), would it be
y(p)=Axe^-x + Bx^2 + Dx +E ???
y'(p)=Ae^-x - Axe^-x + 2Bx + D
y"(p)=-Ae^-x -Ae^-x +Axe^-x +2B= Axe^-x - 2Ae^-x + 2B
(Axe^-x - 2Ae^-x + 2B) - 3(-Axe^-x + Ae^-x + 2Bx + D) -4(Axe^-x +Bx^2 + Dx +E)= (5e^-x - 3x^2 + 7)
am I even heading in the right direction?
Last edited: Apr 28, 2008
2. Apr 28, 2008
### eok20
everything looks right so far. now you just want to collect terms and match coefficients.
3. Apr 28, 2008
### rock.freak667
If I remember correctly that is not variation of parameters but method of undermined coefficients. Variation of parameters is something different.
Similar Discussions: Diff Eq problem | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8737490773200989, "perplexity": 4627.248490671626}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319265.41/warc/CC-MAIN-20170622114718-20170622134718-00204.warc.gz"} |
https://crypto.stackexchange.com/questions/20377/gap-problem-for-learning-with-errors | # Gap problem for Learning With Errors
Informally, a "Gap problem" arises when solving the computational (or search) version using an oracle for the decisional version. This definition of Gap Problem was introduced by Okamoto and Pointcheval in this paper, originally related to Diffie-Hellman problems.
For example, the Gap Diffie-Hellman Problem (GDH) is to solve the Computational Diffie-Hellman (CDH) problem with the help of a Decisional Diffie-Hellman (DDH) Oracle (which answers whether a given triple is a Diffie-Hellman triple or not).
In the Learning With Errors (LWE) setting, is there any "gap" problem (or similar) that can be assumed to be hard? That is, which computational problem can be defined that is still hard when using an oracle for the Decision LWE? I know that there are some results (see papers from Regev and @chris-peikert) that prove the equivalence of the decision and search versions of the LWE problems. I guess that this means that a "strict" gap problem cannot be defined, since there is a reduction from the Decision LWE to the Search LWE, but maybe some other variant of LWE or related problem still applies.
• @cygnusv GapSVP is not simply decisional SVP it is an approximation version of the decisional SVP. I.e., in the decisional SVP you must, given a lattice $L$ and a length $d$, decide if the shortest vector of $L$ is shorter than $d$ or not. In GapSVP you must decide if the shortest vector is shorter than $d$ or if it is longer than $\gamma \cdot d$ where $\gamma$ is an approximation factor. For lattices with shortests vectors in the gap between $d$ and $\gamma \cdot d$ results is undefined. This is a standard way to define approximation problems also known as Gap Problems. – Guut Boy Dec 25 '14 at 12:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164898991584778, "perplexity": 693.2199780723893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572289.5/warc/CC-MAIN-20190915195146-20190915221146-00150.warc.gz"} |
https://math.stackexchange.com/questions/2352283/proof-verification-for-the-following-differential-equation | Proof verification for the following differential equation!
The question is as follows
Let $f_1$ & $f_2$ be two solutions to the following second order homogeneous linear differential equation
$$a_0(x)\frac{d^2y}{dx^2}+a_1(x)\frac{dy}{dx}+a_2(x)y=0$$
a)Show that $f_1$ & $f_2$ has a common zero at a point $x_0$ of the interval $a \leq x \leq b$ then $f_1$ & $f_2$ are linearly dependent on $a \leq x \leq b$.
My work is as follows
We know that they can be written as a linear combination of solution!
$$c_1f_1(x)+c_2f_2(x)=0$$
$$c_1f_1'(x)+c_2f_2'(x)=0$$
where $c_1$ & $c_2$ are not both zeroes
Since
$f_1(x_0)=0, f_2(x_0)=0$ on interval $a \leq x \leq b$
Then $f(x)=0$
$f_1'(x_0)=m_1,f_2'(x_0)=m_2$
For point $x_0$ on interval $a \leq x \leq b$
We have the theorem where if a system of two homogeneous linear algebraic equations has a non-trivial solution if the determinant of the system is zero.
Non-trivial means to mean me that $c_1$ & $c_2$ are not both zeroes.
Checking the determinant of the homogeneity of the linear equation!
$$c_1f_1(x_0)+c_2f_2(x_0)=f(x)$$
$$c_1f_1'(x_0)+c_2f_2'(x_0)=f'(x)$$
$$W[f_1(x),f_2(x)]=\begin{bmatrix}f_1(x_0) & f_2(x_0) \\f_1'(x_0) & f_2'(x_0) \end{bmatrix}$$
$$W[f_1(x),f_2(x)]=0$$ on interval $a \leq x \leq b$
If the system is linearly independent then it will be a contradiction.
2) Show that $f_1$ & $f_2$ have relative maxima at common point $x_0$ of interval $a \leq x \leq b$ then $f_1$ & $f_2$ are linearly dependent!
Relative maxima occurs at $x_0$ when $f_1'(x_0)$ is undefined or zeroes.
$$c_1f_1(x_0)+c_2f_2(x_0)=0$$
$$c_1f_1'(x_0)+c_2f_2'(x_0)=0$$
Suppose that $f_1(x_0)=k_1$ and $f_2(x_0)=k_2$
$f_1'(x_0)=0$ & $f_2'(x_0)=0$
$$W[f_1(x),f_2(x)]=f_1(x)f_2'(x)-f_2(x)f_1'(x)$$
One condition for the Wronskian definition to be zeroes that is when $f_1'(x_0)=0$ & $f_2'(x_0)=0$ if and only if $x_0$ is a relative extremum for the two function! Therefore, it is linearly dependent since wronskian is again 0.
Can someone please provide a better way of doing this?
In both cases, you have $W[f_1,f_2](x_0)=0$, so let's assume this and show that $W[f_1,f_2](x)=0$ everywhere. Let us also suppose $a_0(x)$ is nonzero on $[a,b]$. Your two solutions $f_1$ and $f_2$ satisfy \begin{align} a_0 f_2''+a_1 f_2'+a_2 f_2&=0,\\ a_0 f_1''+a_1 f_1'+a_2 f_1&=0. \end{align} Multiply the first equation by $f_1$, the second equation by $f_2$, and subtract. If $W:=f_1f_2'-f_2f_1'$, then the difference of the equations can be rewritten as a first order ODE for $W$: $$a_0 W'+a_1W=0.$$ Observe that $W(x)=0$ is a solution to this equation on $[a,b]$ which satisfies $W(x_0)=0$. Since $a_0\neq 0$, this solution is unique, and we conclude $W(x)=0$ on $[a,b]$, implying $f_1$ and $f_2$ are linearly dependent.
• @Crazy If $f_1$ and $f_2$ have relative maximas at $x_0$, then $f_1'(x_0)=f_2'(x_0)=0$, so $W[f_1,f_2](x_0)=0$ as in the first part. Jul 10 '17 at 1:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9943673014640808, "perplexity": 84.14150049753722}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585522.78/warc/CC-MAIN-20211022212051-20211023002051-00386.warc.gz"} |
http://math.stackexchange.com/questions/205206/integral-involving-matrix-exponential-to-solve-lti-system-equation | # Integral involving Matrix Exponential to solve LTI system equation
I am given that for $A$ that is $n \times n$ matrix of full rank,
$$\int_{0}^{t}e^{A\sigma}d\sigma = (e^{At}-I)A^{-1}$$
Then I am using this to solve LTI system
$$\dot{x}=Ax+Bu$$
Here, $x(0) = x_{0}$ and u is a constant vector.
I went ahead and used the general solution for LTI system,
$$x(t)=e^{A(t-t_{0})}x_{0}+\int_{t_0}^{t}e^{A(t-t_{0})}B(\tau)u(\tau) \, d\tau$$
I have $B$ and $U$ constant and time from 0 to t so this reduces to
$$x(t)=e^{At}x_{0}+\int_{0}^{t}e^{A(t-t_{0})}Bu \, d\tau$$
I am kinda stuck here, what should I do with those constant matrix $B u$ to solve this system using $\int_{0}^{t}e^{A\sigma}d\sigma = (e^{At}-I)A^{-1}$ ?
I know I am not allowed to just pull out $Bu$ outside of the integral because I am dealing with matrices. Any ideas?
-
$\int_{0}^{t}e^{A(t-\tau)}Bu \, d\tau = e^{At}(\int_{0}^{t}e^{-A \tau} d\tau) Bu$.
Now you can substitute your formula (with $-A$).
This gives: $e^{At}(\int_{0}^{t}e^{-A \tau} d\tau) Bu = e^{At} (I-e^{-At}) A^{-1} Bu = (e^{At}-I)A^{-1} Bu$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682721495628357, "perplexity": 109.83027274567426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824499.16/warc/CC-MAIN-20160723071024-00029-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://newsgroups.derkeiler.com/Archive/Comp/comp.text.tex/2006-05/msg01510.html | # Re: error starting ps2pdf
On Sun, 21 May 2006, buaanupt@xxxxxxxx wrote:
I am using WinEdt + MkTex systems.
After generating ps file, I use the "ps->pdf" icon in the toolbar to
generate the pdf file. However, I got the error information "error
starting ps2pdf!" message.
Can you pls give some comments on how to fix this problem? Thank you
very much.
Another post suggested trying to get ps2pdf.bat to work in a cmd.exe
window. This is an excellent step, but note that WinEdt does not use
the version of ps2pdf.bat provided by gs.
You are the 2nd person to report this problem in the past couple weeks (the other sent private email). WinEdt provides two versions of ps2pdf, one a .edt macro script, the other a custom ps2pdf.bat. WinEdt looks for ghostscript in the usual places.
1) is ghostscript installed in a place where WinEdt can find it?
2) have you tried both WinEdt mechanisms?
A number of people have been unable to get ps2pdf.bat to work in a cmd.exe window. In a couple cases problems running ps2pdf.bat were solved by removing a directory with a huge number of files from the PATH variable (e.g., the gs8.53\lib directory containing ps2pdf.bat). There have also been problems with extremely long filenames. This leads me to
suspect that "intractible" failures with ps2pdf are due to some limits in
Win32 command line tools. Without documentation, we can only guess what these might be.
--
George N. White III <aa056@xxxxxxxxxxxxxx>
.
## Relevant Pages
• Re: error starting ps2pdf
... I got the error information "error ... starting ps2pdf!" ... Can you pls give some comments on how to fix this problem? ... and look at the *.pdf file. ...
(comp.text.tex)
• error starting ps2pdf
... I am using WinEdt + MkTex systems. ... generate the pdf file. ... I got the error information "error ... starting ps2pdf!" ...
(comp.text.tex)
• Re: Creating a "7bit-clean" pdf
... It can be used with ps2pdf in this way: ... pdftk canrepairxref errors after the fact... ... I think you can try a utility called Advanced PDF Repair to repair ... your PDF file. ...
(comp.text.pdf)
• Re: pstricks
... I have just started using the pstricks package. ... create a PDF file. ... We said he uses WinEdt. ... it allows one to produce a build profile that will run all three ...
(comp.text.tex)
• Re: Creating a "7bit-clean" pdf
... It can be used with ps2pdf in this way: ... Now logo2.pdf is purely "latin" and can be inserted in my latex source ... PDF file is damaged - attempting to reconstruct xref table ... these trailing spaces can be avoided? ...
(comp.text.pdf) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060048460960388, "perplexity": 4650.3705797662315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160822.87/warc/CC-MAIN-20160205193920-00166-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://chinaxiv.org/user/search.htm?field=author&value=Yifang%20Wang | Current Location:home > Browse
1. chinaXiv:201609.00971 [pdf]
Subjects: Physics >> Nuclear Physics
Following the discovery of the Higgs boson at LHC, new large colliders are being studied by the international high-energy community to explore Higgs physics in detail and new physics beyond the Standard Model. In China, a two-stage circular collider project CEPC-SPPC is proposed, with the first stage CEPC (Circular Electron Positron Collier, a so-called Higgs factory) focused on Higgs physics, and the second stage SPPC (Super Proton-Proton Collider) focused on new physics beyond the Standard Model. This paper discusses this second stage.
2. chinaXiv:201609.00917 [pdf]
Subjects: Physics >> Nuclear Physics
This essay is intended to provide a brief description of the peculiar properties of neutrinos within and beyond the standard theory of weak interactions. The focus is on the flavor oscillations of massive neutrinos, from which one has achieved some striking knowledge about their mass spectrum and flavor mixing pattern. The experimental prospects towards probing the absolute neutrino mass scale, possible Majorana nature and CP-violating effects will also be addressed.
3. chinaXiv:201609.00904 [pdf]
Subjects: Physics >> Nuclear Physics
Linear alkylbenzene (LAB) is adopted to be the organic solvent for the Jiangmen Underground Neutrino Observatory (JUNO) liquid scintillator detectors due to the ultra-transparency. However the current Rayleigh scattering length calculation disagrees with the measurement. The present paper for the first time reports the Rayleigh scattering of LAB being anisotropic and the depolarization ratio being 0.31+-0.01(stat.)+-0.01(sys.). We proposed an indirectly method for Rayleigh scattering measurement with Einstein-Smoluchowski-Cabannes formula, and the Rayleigh scattering length of LAB is determined to be 28.2+-1.0 m at 430 nm.
4. chinaXiv:201609.00903 [pdf]
Subjects: Physics >> Nuclear Physics
Rayleigh scattering poses an intrinsic limit for the transparency of organic liquid scintillators. This work focuses on the Rayleigh scattering length of linear alkylbenzene (LAB), which will be used as the solvent of the liquid scintillator in the central detector of the Jiangmen Underground Neutrino Observatory. We investigate the anisotropy of the Rayleigh scattering in LAB, showing that the resulting Rayleigh scattering length will be significantly shorter than reported before. Given the same overall light attenuation, this will result in a more efficient transmission of photons through the scintillator, increasing the amount of light collected by the photosensors and thereby the energy resolution of the detector.
5. chinaXiv:201609.00902 [pdf]
Subjects: Physics >> Nuclear Physics
We has set up a light scattering spectrometer to study the depolarization of light scattering in linear alkylbenzene. From the scattering spectra it can be unambiguously shown that the depolarized part of light scattering belongs to Rayleigh scattering. The additional depolarized Rayleigh scattering can make the effective transparency of linear alkylbenzene much better than it was expected. Therefore sufficient scintillation photons can transmit through the large liquid scintillator detector of JUNO. Our study is crucial to achieving the unprecedented energy resolution 3\%/E(MeV)???????√?for JUNO experiment to determine the neutrino mass hierarchy. The spectroscopic method can also be used to judge the attribution of the depolarization of other organic solvents used in neutrino experiments. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9312446117401123, "perplexity": 1858.7353960369317}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363290.39/warc/CC-MAIN-20211206042636-20211206072636-00024.warc.gz"} |
https://www.thejournal.club/c/paper/404420/ | Weighted Sum Rate Maximization of the mmWave Cell-Free MIMO Downlink Relying on Hybrid Precoding
Chenghao Feng, Wenqian Shen, Jianping An, Lajos Hanzo
The cell-free MIMO concept relying on hybrid precoding constitutes an innovative technique capable of dramatically increasing the network capacity of millimeter-wave (mmWave) communication systems. It dispenses with the cell boundary of conventional multi-cell MIMO systems, while drastically reducing the power consumption by limiting the number of radio frequency (RF) chains at the access points (APs). In this paper, we aim for maximizing the weighted sum rate (WSR) of mmWave cell-free MIMO systems by conceiving a low-complexity hybrid precoding algorithm. We formulate the WSR optimization problem subject to the transmit power constraint for each AP and the constant-modulus constraint for the phase shifters of the analog precoders. A block coordinate descent (BCD) algorithm is proposed for iteratively solving the problem. In each iteration, the classic Lagrangian multiplier method and the penalty dual decomposition (PDD) method are combined for obtaining near-optimal hybrid analog/digital precoding matrices. Furthermore, we extend our proposed algorithm for deriving closed-form expressions for the precoders of fully digital cell-free MIMO systems. Moreover, we present the convergency analysis and complexity analysis of our proposed method. Finally, our simulation results demonstrate the superiority of the algorithms proposed for both fully digital and hybrid precoding matrices.
arrow_drop_up | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317900538444519, "perplexity": 510.0289487786162}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301263.50/warc/CC-MAIN-20220119033421-20220119063421-00174.warc.gz"} |
https://toc.csail.mit.edu/node/564 | # Rati Gelashvili: Leader Election and Renaming with Optimal Message Complexity
Wednesday, April 23, 2014 - 5:00pm to 6:00pm
Location:
32-G575
Speaker:
Rati Gelashvili
Biography:
MIT
Abstract: Asynchronous message-passing system is a standard distributed model, where $n$ processors communicate over unreliable channels, controlled by a strong adaptive adversary. The asynchronous nature of the system and the fact that $t < n / 2$ processes may fail by crashing are the great obstacles for designing efficient algorithms.
\emph{Leader election (test-and-set)} and \emph{renaming} are two fundamental distributed tasks. We prove that both tasks can be solved using expected $O( n^2 )$ messages---the same asymptotic complexity as a single all-to-all broadcast---and that this message complexity is in fact optimal. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956584334373474, "perplexity": 2574.023472418548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499768.15/warc/CC-MAIN-20230129211612-20230130001612-00213.warc.gz"} |
https://deepai.org/publication/asymptotic-convergence-rates-for-averaging-strategies | DeepAI
# Asymptotic convergence rates for averaging strategies
Parallel black box optimization consists in estimating the optimum of a function using λ parallel evaluations of f. Averaging the μ best individuals among the λ evaluations is known to provide better estimates of the optimum of a function than just picking up the best. In continuous domains, this averaging is typically just based on (possibly weighted) arithmetic means. Previous theoretical results were based on quadratic objective functions. In this paper, we extend the results to a wide class of functions, containing three times continuously differentiable functions with unique optimum. We prove formal rate of convergences and show they are indeed better than pure random search asymptotically in λ. We validate our theoretical findings with experiments on some standard black box functions.
• 16 publications
• 1 publication
• 19 publications
• 23 publications
01/31/2019
### Parallel Black-Box Complexity with Tail Bounds
We propose a new black-box complexity model for search algorithms evalua...
06/17/2021
### Optimum-statistical collaboration towards efficient black-box optimization
With increasingly more hyperparameters involved in their training, machi...
10/09/2019
### Stochastic Implicit Natural Gradient for Black-box Optimization
Black-box optimization is primarily important for many compute-intensive...
10/18/2021
### A portfolio approach to massively parallel Bayesian optimization
One way to reduce the time of conducting optimization studies is to eval...
05/03/2016
### Blackbox: A procedure for parallel optimization of expensive black-box functions
This note provides a description of a procedure that is designed to effi...
01/31/2018
### A Cross Entropy based Optimization Algorithm with Global Convergence Guarantees
The cross entropy (CE) method is a model based search method to solve op...
05/25/2018
### Parallel Architecture and Hyperparameter Search via Successive Halving and Classification
We present a simple and powerful algorithm for parallel black box optimi...
## 1. Introduction
Finding the minimum of a function from a set of points and their images is a standard task used for instance in hyper-parameter tuning (Bergstra and Bengio, 2012), or control problems. While random search estimate of the optimum consists in returning , in this paper we focus on the similar strategy that consists in averaging the best samples, i.e. returning where .
These kinds of strategies are used in many evolutionary algorithms such as CMA-ES. Although experiments show that these methods perform well, it is not still understood why taking the average of best points actually leads to a lower regret. In
(Meunier et al., 2020a), it is proved in the case of quadratic functions that the regret is indeed lower for the averaging strategy than for pure random search. In this paper, we extend the result of this paper by proving convergence rates for a wide class of functions including three times continuously differentiable functions with unique optima.
### 1.1. Related work
#### 1.1.1. Better than picking up the best
Given a finite number of samples equipped with their fitness values, we can simply pick up the best, or average the “best ones” (Beyer, 1995; Meunier et al., 2020a), or apply a surrogate model (Gupta et al., 2021; Sudret, 2012; Dushatskiy et al., 2021; Auger et al., 2005; Bossek et al., 2019; Rudi et al., 2020). Overall, the best is quite robust, but the surrogate or the averaging usually provides better convergence rates. Using surrogate modeling is fast when the dimension is moderate and the objective function is smooth (simple regret in for points in dimension with times differentiability, leading to superlinear rates in evolutionary computation (Auger et al., 2005)). In this paper, we are interested in the rates obtained by averaging the best samples for a wide class of functions. We extend the results of (Meunier et al., 2020a) which only hold for the sphere function.
#### 1.1.2. Weighted averaging
Among the various forms of averaging, it has been proposed to take into account the fact that the sampling is not uniform (evolutionary algorithms in continuous domains typically use Gaussian sampling) in (Teytaud and Teytaud, 2009): we here simplify the analysis by considering a uniform sampling in a ball, though we acknowledge that this introduces the constraint that the optimum is indeed in the ball. (Arnold et al., 2009; Auger et al., 2011) have proposed weights depending on the fitness value, though they acknowledge a moderate impact: we here consider equal weights for the best.
#### 1.1.3. Choosing the selection rate
The choice of the selection rate is quite debated in evolutionary computation: one can find (Escalante and Reyes, 2013), (Beyer and Sendhoff, 2008), (Beyer and Schwefel, 2002), (Hansen and Ostermeier, 2003), (Teytaud, 2007; Fournier and Teytaud, 2010) and still others in (Beyer, 1995; Jebalia and Auger, 2010). In this paper, we focus on the selection rate when the number of samples is very large in the case of parallel optimization. In this case, the selection ratio would tend to . We carefully analyze this ratio and derive convergence rates using this selection ratio.
#### 1.1.4. Taking into account many basins
While averaging the best samples, the non-uniqueness of an optimum might lead to averaging points coming from different basins. Thus we consider at first the case of a unique optimum and hence a unique basin. Then we aim to tackle the case where there are possibly different basins. Island models (Skolicki, 2007) have also been proposed for taking into account different basins. (Meunier et al., 2020a) has proposed a tool for adapting depending on the (non) quasi-convexity. In the present work, we extend the methodology proposed in (Meunier et al., 2020a).
### 1.2. Outline
In the present paper, we first introduce, in Section 2, the large class of functions we will study, and study some useful properties of these functions in Section 3. Then, in Section 4, we prove upper and lower convergence rates for random search for these functions. In Section 5, we extend (Meunier et al., 2020a) by showing that asymptotically in the number of samples , the handled functions satisfy a better convergence rate than random search. We then extend our results on wider classes of functions in Section 6. Finally we validate experimentally our theoretical findings and compare with other parallel optimization methods.
In the present section, we present the assumptions to extend the results from (Meunier et al., 2020a) to the non-quadratic case. We will denote the closed ball centered at of radius in endowed with its canonical Euclidean norm denoted by . We will also denote by the corresponding open ball. All other balls intervening in what follows will also follow that notation. For any subset , we will denote the uniform law on .
Let be a continuous function for which we would like to find an optimum point . The existence of such an optimum point is guaranteed by continuity on a compact set. For the sake of simplicity, we assume that . We define the -level sets of as follows.
###### Definition 0 ().
Let be a continuous function. The closed sublevel set of of level is defined as:
Sh:={x∈B(0,r)∣f(x)≤h}.
We now describe the assumptions we will make on the function that we optimize.
###### Assumption 1 ().
is a continuous function and admits a unique optimum point such that . Moreover we assume that can be written:
f(x)=(x−x⋆)TH(x−x⋆)+((x−x⋆)TH(x−x⋆))α/2ε(x−x⋆)
for some bounded function (there exists such that for all , ), a symmetric positive definite matrix and a real number.
Note that is uniquely defined by the previous relation. In the following we will denote by and
respectively the smallest and the largest eigenvalue of
. As is positive definite, we have . We will also set , which is a norm (the -norm) on as is symmetric positive definite. We then have
###### Remark 1 (Why a unique optimum ?).
The uniqueness of the optimum is an hypothesis required to avoid that chosen samples come from two or more wells for . In this case the averaging strategy would lead to a mistaken point because points from the different wells would be averaged. Nonetheless, multimodal functions can be tackled using our non-quasiconvexity trick (Section 6.2).
###### Remark 2 (Which functions f satisfy Assumption 1?).
One may wonder if Assumption 1 is restrictive or not. We can remark that three times continuously differentiable functions satisfy the assumption with , as long as the unique optimum satisfies a strict second order stationary condition. Also, we will see in Section 6.1 that results are immediately valid for strictly increasing transformations of any for which Assumption 1 holds, so that we indirectly include all piecewise linear functions as well as long as they have a unique optimum. So the class of functions is very large, and in particular allows non symmetric functions to be treated, which might seem counter intuitive at first.
The aim of this paper is to study a parallel optimization problem as follows. We sample
from the uniform distribution on
. Let
denote the ordered random variables, where the order is given by the objective function
f(X(1))≤⋯≤f(X(λ)).
We then introduce the -best average
¯¯¯¯¯X(μ)=1μμ∑i=1X(i)
In the following of the paper, we will compare the standard random search algorithm (i.e. ) with the algorithm that consists in returning the average of the best points. To this end, we will study the expected simple regret for functions satisfying the assumption:
E[f(¯¯¯¯¯X(μ))]
## 3. Technical lemmas
In this section, we prove two technical lemmas on that will be useful to study the convergence of the algorithm. The first one shows that can be upper bounded and lower bounded by two spherical functions.
###### Lemma 0 ().
Under Assumption 1, there exist two real numbers , such that, for all :
(1) l∥x−x⋆∥2≤f(x)≤L∥x−x⋆∥2.
Moreover such and must satisfy .
###### Proof.
As is symmetric positive definite, we have the following classical inequality for the -norm
(2) e1(H)∥x−x⋆∥2≤∥x−x⋆∥2H≤ed(H)∥x−x⋆∥2
Now set for
ϕ(x):=f(x)−f(x⋆)∥x−x⋆∥2=∥x−x⋆∥2H∥x−x⋆∥2(1+∥x−x⋆∥α−2Hε(x−x⋆)).
By the above inequalities, we have
e1(H)(α−2)/2∥x−x⋆∥α−2≤∥x−x⋆∥α−2H≤ed(H)(α−2)/2∥x−x⋆∥α−2.
Thus, as , we obtain . By assumption, the function is also bounded as .
We thus conclude that there exists such that, for all
12e1(H)≤ϕ(x)≤2ed(H).
Now notice that is a closed subset of the compact set hence it is also compact. Moreover, by assumption is continuous on and for all Hence is continuous and positive on this compact set. Thus it attains its minimum and maximum on this set and its minimum is positive. In particular, we can write, on this set, for some
l0≤ϕ(x)≤L0.
We now set . Note that because and (as is positive definite). We also set which is also positive. These are global bounds for which gives the first part of the result.
For the second part, let
be a normalized eigenvector respectively associated to
. Then
f(x⋆+ϵu1)∥ϵu1∥2=e1(H)+ϵα−2ε(ϵu1)
Taking the limit as . we get that, if satisfies (1), then . Similarly, we can prove that .∎
Secondly, we frame into two ellipsoids as . This lemma is a consequence of the assumptions we make on .
###### Lemma 0 ().
Under Assumption 1, there exists such that for , we have where:
Ah:={x∣∥x−x⋆∥H≤ϕ−(h)} Bh:={x∣∥x−x⋆∥H≤ϕ+(h)}
with and two functions satisfying
ϕ−(h) = √h−M2h(α−1)/2+o(h(α−1)/2) and ϕ+(h) = √h+m2h(α−1)/2+o(h(α−1)/2)
when for some constants and which are respectively a (specific) lower and upper bound for .
###### Proof.
By assumption , hence we have:
{x∈B(0,r) ∣∥x−x⋆∥2H+M∥x−x⋆∥αH≤h}⊂Sh
Let . This is a continuous, strictly increasing function on . By a classical consequence of the intermediate value theorem, this implies that admits a continuous, strictly increasing inverse function. Note that hence . Thus we can write . We now denote by . As is non-decreasing, we get
{x∈B(0,r) ∣∥x−x⋆∥2H+M∥x−x⋆∥αH≤h}=Ah∩B(0,r)
Now observe that for sufficiently small
{x∈B(0,r)∣∥x−x⋆∥2H+M∥x−x⋆∥αH≤h}=Ah.
Indeed, if , we have by the triangle inequality and (2)
∥x∥ ≤∥x⋆∥+∥x−x⋆∥ ≤∥x⋆∥+e1(H)−1/2∥x−x⋆∥H ≤∥x⋆∥+e1(H)−1/2ϕ−(h)
Recall that by assumption and let . As , for sufficiently small, we have hence for sufficiently small, which gives the inclusion .
For the asymptotics of , as we have by definition , and as we deduce that . Let us define . We have . We then compute:
(√h+u(h))2+M(√h+u(h))α=h
This gives
u(h)(u(h)+2√h) =−Mhα/2(1+u(h)√h)α u(h)(u(h)2√h+1) =−M2h(α−1)/2(1+u(h)√h)α
As for , we obtain
u(h)∼−M2h(α−1)/2.
which concludes for .
On the other side, we recall that for all as is the unique minimum of on . Write
0<∥x−x⋆∥2H(1+∥x−x⋆∥α−2Hε(x−x⋆)).
Now observe that, as , we have for , by the triangle inequality, . Hence, by the classical inequality for the -norm (2), we get
ε(x−x⋆) >−1∥x−x⋆∥α−2H≥−(√ed(H)2r)−(α−2)=:−m
So we have:
Sh⊂{x∈B(0,r) ∣∥x−x⋆∥2H−m∥x−x⋆∥αH≤h}
The function is differentiable. A study of the derivative shows that is continuous, strictly increasing on and continuous, strictly decreasing on where . Hence admits a continuous strictly increasing inverse and a continuous strictly decreasing inverse . We thus write
{u≥0|u2−muα≤h}=[0,ϕ+(h)]∪[~ϕ(h),+∞).
Hence
{x∈B(0,r)∣ ∥x−x⋆∥2H−m∥x−x⋆∥αH≤h} =(Bh∩B(0,r))∪(B(0,r)∩Vh)
with . We now show that for sufficiently small
{x∈B(0,r)∣∥x−x⋆∥2H−m∥x−x⋆∥αH≤h}=Bh.
Indeed, note first that if , we obtain by (2)
∥x−x⋆∥2H≤ed(H)∥x−x⋆∥2<4ed(H)r2.
where we have used that, as , the triangle inequality gives . Hence . We now show that . Indeed, at , are by definition, the two roots of
u2−muα=0.
Hence . By continuity of at , we obtain that for sufficiently small. As , we thus obtain that, for sufficiently small, . Next, the same line of reasoning as the one for , using that and , shows that for sufficiently small.
Hence, for small enough we have
{x∈B(0,r)∣∥x−x⋆∥2H−m∥x−x⋆∥αH≤h}=Bh.
This gives .
Finally, similarly to , we can show that , which concludes the proof of this lemma. ∎
## 4. Bounds for random search
In this section we provide upper bounds and lower bounds for the random search algorithm for functions satisfying Assumption 1. These bounds will also be useful for analyzing the convergence of the -best approach.
### 4.1. Upper bound
First, we prove an upper bound for functions satisfying Assumption 1.
###### Lemma 0 (Upper bound for random search algorithm).
Let be a function satisfying Assumption 1. There exists a constant and an integer such that for all integers :
EX1,⋯,Xλ∼U(B(0,r))[f(X(1))]≤C0λ−2d.
###### Proof.
Let us first recall the following classical property about the expectation of a positive valued random variable:
EX1,…,Xλ∼U(B(0,r)) [f(X(1))]=∫∞0P[f(X(1))≥t]dt
By independence of the samples we have:
∫∞0P[f(X(1))≥t]dt=∫∞0PX∼U(B(0,r))[f(X)≥t]λdt
Then thanks to Lemma 3.1:
∫∞0PX∼U(B(0,r)) [f(X)≥t]λdt ≤∫∞0PX∼U(B(0,r))[L∥X−x⋆∥2≥t]λdt =∫L(r+∥x⋆∥)20P[∥X−x⋆∥≥√tL]λdt
where the second equality follows because almost surely. Then, by definition of the uniform law as well as the non-increasing character of , we obtain
∫L(r+∥x⋆∥)20PX∼U(B(0,r))[∥X−x⋆∥≥√tL]λdt =∫L(r−∥x⋆∥)20PX∼U(B(0,r))[∥X−x⋆∥≥√tL]λdt +∫L(r+∥x⋆∥)2L(r−∥x⋆∥)2PX∼U(B(0,r))[∥X−x⋆∥≥√tL]λdt ≤∫L(r−∥x⋆∥)20[1−(√tLr2)d]λdt +L((r+∥x⋆∥)2−(r−∥x⋆∥)2)P[∥X−x⋆∥≥r−∥x⋆∥]λ ≤∫Lr20[1−(tLr2)d2]λdt+4Lr∥x⋆∥P[∥X−x⋆∥≥r−∥x⋆∥]λ =Lr2∫10[1−ud2]λdu+4Lr∥x⋆∥P[∥X−x⋆∥≥r−∥x⋆∥]λ
Note that . Thus the second term in the last equality satisfies . The first term has a closed form given in (Meunier et al., 2020a):
∫10[1−ud2]λdu=Γ(d+2d)Γ(λ+1)Γ(λ+1+2/d)
Finally thanks to the Stirling approximation, we conclude:
EX1,…,Xλ∼U(B(0,r)) [f(X(1))]≤C1λ−2/d+o(λ−2/d)
where is a constant independent from . ∎
This lemma proves that the strategy consisting in returning the best sample (i.e. random search) has an upper rate of convergence of order , which depends on dimension of the space. It also worth noting this result is common in the literature (Rudi et al., 2020; Bergstra and Bengio, 2012)
### 4.2. Lower bound
We now give a lower bound for the convergence of the random search algorithm. We also prove a conditional expectation bound that will be useful for the analysis of the -best averaging approach.
###### Lemma 0 (Lower bound for random search algorithm).
Let be a function satisfying Assumption 1. There exist a constant and such that for all integers , we have the following lower bound for random search:
EX1,…,Xλ∼U(B(0,r)) [f(X(1))]≥C1λ−2/d.
Moreover, let be a sequence of integers such that , and . Then, there exist a constant and such that for all and , we have the following lower bound when the sampling is conditioned:
EX1,…,Xλ∼U(B(0,r)) [f(X(1))∣f(X(μλ+1))=h]≥C2hμ−2/dλ.
###### Proof.
The proof is very similar to the previous one. Let us first show the unconditional inequality. We use the identity for the expectation of a positive random variable
E X1,…,Xλ∼U(B(0,r))[f(X(1))] =∫∞0PX1,…,Xλ∼U(B(0,r))[f(X(1))≥t]dt
Since the samples are independent, we have
∫∞0 PX1,…,Xλ∼U(B(0,r))[f(X(1))≥t]dt =∫∞0PX∼U(B(0,r))[f(X)≥t]λdt
Using Lemma 3.1, we get:
∫∞0 PX∼U(B(0,r))[f(X)≥t]λdt ≥∫∞0PX∼U(B(0,r))[l∥X−x⋆∥2≥t]λdt ≥∫l(r−∥x⋆∥)20PX∼U(B(0,r))[l∥X−x⋆∥2≥t]λdt =∫l(r−∥x⋆∥)20[1−(√tlr2)d]λdt
We can decompose the integral to obtain:
∫l(r−∥x⋆∥)20[1−(√tlr2)d]λdt =∫lr20[1−(√tlr2)d]λ−∫lr2l(r−∥x⋆∥)2[1−(√tlr2)d]λdt ≥lr2Γ(d+2d)Γ(λ+1)Γ(λ+1+2d)−l(r2−(r−∥x⋆∥)2)⎡⎣1−(r−∥x⋆∥r)d⎤⎦λ ≥12lr2Γ(d+2d)λ−2/d % for λ sufficiently large.
where the last inequality follows by Stirling’s approximation applied to the first term and because the second term is as in previous proof.
This concludes the proof of the first part of the lemma. Let us now treat the case of the conditional inequality. Using the same first identity as above we have
E X1,…,Xλ∼U(B(0,r))[f(X(1))∣f(X(μλ+1))=h] =∫∞0PX1,…,Xλ∼U(B(0,r))[f(X(1))≥t∣f(X(μλ+1))=h]dt
###### Remark 3 ().
Note that if we sample independent variables while conditioning on and keep only the -best variables such that , this is exactly equivalent to sampling directly from the -level set. This result was justified and used in (Meunier et al., 2020a) in their proofs.
Hence we obtain
∫∞0 PX1,…,Xλ∼U(B(0,r))[f(X(1))≥t∣f(X(μλ+1))=h]dt
Using Lemma 3.1, we get:
∫∞0 PX∼U(Sh)[f(X)≥t]μλdt ≥∫∞0PX∼U(Sh)[l∥X−x⋆∥2≥t]μλdt ≥∫∞0PX∼U(B(x⋆,√hl))[l∥X−x⋆∥2≥t]μλdt
where the last inequality follows from the inclusion , which is also a consequence of Lemma 3.1. We then get
∫∞0 PX∼U(B(x⋆,√hl))[l∥X−x⋆∥2≥t]μλdt =∫h0PX∼U(B(x⋆,√hl))[l∥X−x⋆∥2≥t]μλdt =∫h0[1−(√th)d]μλdt =hΓ(d+2d)Γ(μλ+1)Γ(μλ+1+2/d) ≥12hΓ(d+2d)μ−2/dλ % for λ sufficiently large.
This lemma, along with Lemma 4.1, proves that for any function satisfying Assumption 1, its rate of convergence is exponentially dependent on the dimension and of order where is the number of points sampled to estimate the optimum.
###### Remark 4 (Convergence of the distance to the optimum).
It is worth noting that, thanks to Lemma 3.1, the convergence rates are also valid for the square distance to the optimum .
## 5. Convergence rates for the μ-best averaging approach
In the next section we focus on the case where we average the best samples among the samples. We first prove a lemma when the sampling is conditional on the -th value.
###### Lemma 0 ().
Let be a function satisfying Assumption 1. There exists a constant such that for all and and two integers such that , we have the following conditional upper bound:
EX1,...Xλ∼U(B(0,r))[f(¯X(μ))|f(X(μ+1))=h]≤C3(hμ+hα−1).
###### Proof.
We first decompose the expectation as follows.
E X1,...Xλ∼U(B(0,r))[f(¯X(μ))|f(X(μ+1)))=h] =EX1,...Xμ∼U(Sh)[f(¯Xμ)] (3) =EX1,⋯,Xμ∼U(Sh)[∥¯Xμ−x⋆∥2H] (4) +EX1,⋯,Xμ∼U(Sh)[∥¯Xμ−x⋆∥αHε(¯Xμ−x⋆)]
where we have use the same argument as in Remark 3 in the first equality. We will treat the terms (3) and (4) independently. We first look at (3
). We have the following “bias-variance” decomposition.
EX1,⋯,Xμ∼U(Sh)∥¯Xμ−x⋆∥2H= (1−1μ)∥EX∼U(Sh)X−x⋆∥2H +1μEX∼U(Sh)∥X−x⋆∥2H
We will use Lemma 3.2. We have . Hence for the variance term
1μEX∼U(Sh)∥X−x⋆∥2H≤1μEX∼U(Sh)ϕ+(h)2≤ϕ+(h)2μ∼0hμ.
where means ”is equivalent to when , in other words, iff as . For the bias term, recall that
EX∼U(Sh)[X−x⋆]=1vol(Sh)∫Sh | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9841915965080261, "perplexity": 620.6171769945752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00522.warc.gz"} |
http://mathoverflow.net/users/36162/user36162?tab=activity | user36162
Reputation
Next privilege 125 Rep.
Vote down
Feb 2 accepted Riesz potential inequality Feb 2 asked Riesz potential inequality Nov 9 accepted Integral and conformal mappings II Nov 8 comment Integral and conformal mappings II Probably it is a correct construction. Nov 8 revised Integral and conformal mappings II added 109 characters in body Nov 8 comment Integral and conformal mappings II If $D_n$ is smooth (for example $C^2$), then $|f'(z)|\le C_n$, so why the integral diverges? I am assuming that $D_n$ are images of $n/(n+1) D$ under a $C^2-$$K$ q.c. diffeomorphic mapping of the unit disk onto itself. Nov 8 revised Integral and conformal mappings II added 7 characters in body Nov 8 asked Integral and conformal mappings II Nov 8 accepted Uniform convergence of conformal mappings Nov 8 comment Uniform convergence of conformal mappings Yes you right, I understand the point. Thanks. Nov 7 awarded Commentator Nov 7 revised Uniform convergence of conformal mappings deleted 41 characters in body Nov 7 revised Uniform convergence of conformal mappings added 46 characters in body Nov 7 asked Uniform convergence of conformal mappings Oct 2 accepted Holder class of analytic functions Oct 1 comment Holder class of analytic functions Yes (n1) means nontangential! Oct 1 comment Holder class of analytic functions &Koushik: It is related to little Bloch space. Oct 1 comment Holder class of analytic functions No, when I said $|z|\to 1$ uniformly I had in mind that $z\to e^{it}$ for some $t$ and throughout the unit disk. Nontangentialy means that $z$ also tends to $e^{it}$ but inside an fixed angle. Oct 1 revised Holder class of analytic functions deleted 2 characters in body Oct 1 asked Holder class of analytic functions | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9932951927185059, "perplexity": 2941.423278390291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737951049.91/warc/CC-MAIN-20151001221911-00038-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://reproducibility.org/RSF/book/tccs/pwshape/paper_html/node2.html | Seismic data interpolation using plane-wave shaping regularization
Next: Interpolation Tests Up: Swindeman & Fomel: Plane-wave Previous: Introduction
# Plane-wave Shaping
The formulation of linear shaping regularization is (Fomel, 2007)
(1)
where is a vector of model parameters; is the shaping operator; is the data; and and are the forward and adjoint operators respectively. In interpolation problems, is forward interpolation (in the case of irregular sampling) or simple masking (in the case of missing-data interpolation on a regular grid). In 1-D, shaping in Z-transform notation can be triangle smoothing (Claerbout, 1992)
(2)
for a given smoothing radius . One can visualize this as a convolution of two box filters producing a weighting triangle for a triangle / neighborhood radius of . Increasing produces a smoother model. In 2-D the shift operator translates into shifts along local slope. corresponds to PWD - which can be thought of as a differentiation - while its inverse operator corresponds to PWC - similar to integration.
Seismic data interpolation using plane-wave shaping regularization
Next: Interpolation Tests Up: Swindeman & Fomel: Plane-wave Previous: Introduction
2022-08-02 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8563623428344727, "perplexity": 4658.522814908546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500384.17/warc/CC-MAIN-20230207035749-20230207065749-00834.warc.gz"} |
http://www.cs.nyu.edu/pipermail/fom/2005-October/009168.html | # [FOM] Intuitionists and excluded-middle
Arnon Avron aa at tau.ac.il
Fri Oct 14 06:21:15 EDT 2005
> > Despite of almost 100 years of desperate attempts, they have failed to
> > provide any convincing argument for rejecting excluded middle (except
> > "Because I say so").
>
> A conventional convincing argument: mathematical proofs using the law of
> excluded middle might be "useless". Here is a familiar trivial example
> (quoted by A. S. Troelstra, et al).
>
> THEOREM. There exists an irrational real number x such that x^sqrt(2) is
> rational.
>
> PROOF. Suppose sqrt(2)^sqrt(2) is rational. Then take x:=sqrt(2). Otherwise,
> take x:=sqrt(2)^sqrt(2); hence x^sqrt(2) = sqrt(2)^(sqrt(2)*sqrt(2)) =
> sqrt(2)^2 = 2 - a rational.
> Q.E.D. by the law of excluded middle.
>
> Now obviously this proof says "nothing" about the desired solution x.
>
I find this "convincing argument" very strange indeed. First, we were
talking about the logical validity of excluded middle, not about its
usefulness. Second, the fact that *some* applications of a law *might*
be useless is certainly not a reason to reject it. Will intuitionists
reject the intoduction rules for i-or only because making the inference of
"2+2+4 i-or 2+2=5" from "2+2=4" is not a very useful thing to do?
Third, excluded middle is an extremely useful rule, used in proofs of many
central theorems of Analysis, like the *classical* intermediate-value theorem,
the theorem that any continuous function on a closed interval has
a maximum, etc (yes, I know that intuitionists have alternative proofs
to a sort of alternative theorems, but reading these alternatives only
makes it clear what a useful principle excluded middle is!). Moreover:
excluded middle is used even in simple definitions of Analysis.
Thus the defintion of the useful function \lambda x. sgn x (which the
intuitionists do not accept as well-defined) relies on excluded middle,
and so does the definition of the extremely useful \lambda x.|x|
(-x if x< 0, x otherwise. Here I guess that the intuitionists have
some "alternative" strange definition. So what?).
Finally, and perhaps most importantly, I find the proof you quote
above as a good example how *useful* excluded middle is! In fact, the
theorem you mention is not very interesting. What is interesting is
the existence of two irrational numbers a, b such that a^b is rational.
This fact has real g.i.i. because it is a good example that our intuitions
might mislead us. But it is only the *existence* of such a, b that
is interesting. The exact identity of them is not important at all (to
me, at least).
Let me add also that saying that "obviously this proof says "nothing"
about the desired solution x" is a big exaggeration. Will a police
investigator who got data which reduces the number of suspects from
5 to 2 (to say nothing of reducing it from infinity to 2!) discard
that data as useless???
Arnon Avron
of this theorem
More information about the FOM mailing list | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301642179489136, "perplexity": 3610.581374408764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330816.69/warc/CC-MAIN-20150827031530-00118-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://webot.org/info/en/?search=Disjoint_union | Type Set operation Set theory The disjoint union ${\displaystyle A\sqcup B}$ of the sets A and B is the set formed from the elements of A and B labelled (indexed) with the name of the set from which they come. So, an element belonging to both A and B appears twice in the disjoint union, with two different labels. ${\displaystyle \bigsqcup _{i\in I}A_{i}=\bigcup _{i\in I}\left\{(x,i):x\in A_{i}\right\}}$
In mathematics, a disjoint union (or discriminated union) of a family of sets ${\displaystyle (A_{i}:i\in I)}$ is a set ${\displaystyle A,}$ often denoted by ${\textstyle \bigsqcup _{i\in I}A_{i},}$ with an injection of each ${\displaystyle A_{i}}$ into ${\displaystyle A,}$ such that the images of these injections form a partition of ${\displaystyle A}$ (that is, each element of ${\displaystyle A}$ belongs to exactly one of these images). A disjoint union of a family of pairwise disjoint sets is their union.
In category theory, the disjoint union is the coproduct of the category of sets, and thus defined up to a bijection. In this context, the notation ${\textstyle \coprod _{i\in I}A_{i}}$ is often used.
The disjoint union of two sets ${\displaystyle A}$ and ${\displaystyle B}$ is written with infix notation as ${\displaystyle A\sqcup B}$. Some authors use the alternative notation ${\displaystyle A\uplus B}$ or ${\displaystyle A\operatorname {{\cup }\!\!\!{\cdot }\,} B}$ (along with the corresponding ${\textstyle \biguplus _{i\in I}A_{i}}$ or ${\textstyle \operatorname {{\bigcup }\!\!\!{\cdot }\,} _{i\in I}A_{i}}$).
A standard way for building the disjoint union is to define ${\displaystyle A}$ as the set of ordered pairs ${\displaystyle (x,i)}$ such that ${\displaystyle x\in A_{i},}$ and the injection ${\displaystyle A_{i}\to A}$ as ${\displaystyle x\mapsto (x,i).}$
## Example
Consider the sets ${\displaystyle A_{0}=\{5,6,7\}}$ and ${\displaystyle A_{1}=\{5,6\}.}$ It is possible to index the set elements according to set origin by forming the associated sets {\displaystyle {\begin{aligned}A_{0}^{*}&=\{(5,0),(6,0),(7,0)\}\\A_{1}^{*}&=\{(5,1),(6,1)\},\\\end{aligned}}}
where the second element in each pair matches the subscript of the origin set (for example, the ${\displaystyle 0}$ in ${\displaystyle (5,0)}$ matches the subscript in ${\displaystyle A_{0},}$ etc.). The disjoint union ${\displaystyle A_{0}\sqcup A_{1}}$ can then be calculated as follows:
${\displaystyle A_{0}\sqcup A_{1}=A_{0}^{*}\cup A_{1}^{*}=\{(5,0),(6,0),(7,0),(5,1),(6,1)\}.}$
## Set theory definition
Formally, let ${\displaystyle \left\{A_{i}:i\in I\right\}}$ be a family of sets indexed by ${\displaystyle I.}$ The disjoint union of this family is the set
${\displaystyle \bigsqcup _{i\in I}A_{i}=\bigcup _{i\in I}\left\{(x,i):x\in A_{i}\right\}.}$
The elements of the disjoint union are ordered pairs ${\displaystyle (x,i).}$ Here ${\displaystyle i}$ serves as an auxiliary index that indicates which ${\displaystyle A_{i}}$ the element ${\displaystyle x}$ came from.
Each of the sets ${\displaystyle A_{i}}$ is canonically isomorphic to the set
${\displaystyle A_{i}^{*}=\left\{(x,i):x\in A_{i}\right\}.}$
Through this isomorphism, one may consider that ${\displaystyle A_{i}}$ is canonically embedded in the disjoint union. For ${\displaystyle i\neq j,}$ the sets ${\displaystyle A_{i}^{*}}$ and ${\displaystyle A_{j}^{*}}$ are disjoint even if the sets ${\displaystyle A_{i}}$ and ${\displaystyle A_{j}}$ are not.
In the extreme case where each of the ${\displaystyle A_{i}}$ is equal to some fixed set ${\displaystyle A}$ for each ${\displaystyle i\in I,}$ the disjoint union is the Cartesian product of ${\displaystyle A}$ and ${\displaystyle I}$:
${\displaystyle \bigsqcup _{i\in I}A_{i}=A\times I.}$
Occasionally, the notation
${\displaystyle \sum _{i\in I}A_{i}}$
is used for the disjoint union of a family of sets, or the notation ${\displaystyle A+B}$ for the disjoint union of two sets. This notation is meant to be suggestive of the fact that the cardinality of the disjoint union is the sum of the cardinalities of the terms in the family. Compare this to the notation for the Cartesian product of a family of sets.
In the language of category theory, the disjoint union is the coproduct in the category of sets. It therefore satisfies the associated universal property. This also means that the disjoint union is the categorical dual of the Cartesian product construction. See coproduct for more details.
For many purposes, the particular choice of auxiliary index is unimportant, and in a simplifying abuse of notation, the indexed family can be treated simply as a collection of sets. In this case ${\displaystyle A_{i}^{*}}$ is referred to as a copy of ${\displaystyle A_{i}}$ and the notation ${\displaystyle {\underset {A\in C}{\,\,\bigcup \nolimits ^{*}\!}}A}$ is sometimes used.
## Category theory point of view
In category theory the disjoint union is defined as a coproduct in the category of sets.
As such, the disjoint union is defined up to an isomorphism, and the above definition is just one realization of the coproduct, among others. When the sets are pairwise disjoint, the usual union is another realization of the coproduct. This justifies the second definition in the lead.
This categorical aspect of the disjoint union explains why ${\displaystyle \coprod }$ is frequently used, instead of ${\displaystyle \bigsqcup ,}$ to denote coproduct. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 58, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9661357402801514, "perplexity": 93.96068343043385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945440.67/warc/CC-MAIN-20230326075911-20230326105911-00087.warc.gz"} |
http://www.dartmouth.edu/~chance/chance_news/recent_news/primes_part3/part3.html | Chance in the Primes - Part III
This is the third and final issue of our series meant to assist in listening to Peter Sarnak's lecture on the Riemann Hypothesis available at the MSRI website. Parts I and II related to the first part of Sarnak's talk in which Sarnak discusses the history of the Riemann Hypothesis. In the second part of his talk, Sarnak discussed the remarkable connections between the zeros of the zeta function, energy levels in quantum theory, and the eigenvalues of random matrices. Here we will provide some background information for understanding these connections. This part was written jointly by Dan Rockmore and Laurie Snell with a lot of help with programming from Peter Kostelec and Charles Grinstead.
It all begins with a chance meeting at Princeton's Institute for Advanced Study between number theorist Hugh Montgomery, a visitor from the University of Michigan to the Institute's School of Mathematics, and Freeman Dyson, a member of the Institute's Faculty of the School of Natural Sciences, the same department which claimed Albert Einstein as one of its first members. Dyson is perhaps best known as the mathematical physicist who turned Feynman diagrams, the intuitive and highly personal computational machinery of Richard Feynman, into rigorous mathematics, thereby laying the mathematical foundations for a working theory of quantum electrodynamics.
Montgomery had been working for some time on various aspects of the Riemann Hypothesis. In particular he had been investigating the pair correlation of the zeros which we now describe.
Recall that the Prime Number Theorem states that, if is the number of primes not exceeding , then
(1)
It follows from this that if is the nth prime then
(2)
A proof of (2) can be found in [1] page 10. Thus the primes get further apart and the nth prime becomes significantly bigger than as . Exactly the opposite happens with the zeros of the zeta function.
Recall that the non-trivial zeros of the zeta function all lie in the critical strip of the complex plane:
Let be the number of zeros in the upper half of the critical strip with imaginary part at most T. Of course, if the Riemann Hypothesis (RH) is true then the zeros in the critical strip all lie on the line Then the analog of the prime number theorem for N(T) is
(3)
known apparently even to Riemann.
We now assume the RH and denote by the imaginary parts of the first n zeros in the critical region. Then it follows from (3) that
(4)
For a proof of (4) see [2] section 9.3. Thus, while the primes get less dense as , the zeros of the zeta function get more dense as .
Montgomery studied the consecutive spacings between the zeros. He effectively spread them out to obtain normalized spacings with mean spacing asymptotically 1. There are two ways that we can do this. We can normalize the zeros in such a way that the gaps will have their mean asymptotically 1 or we can normalize the gaps directly. Following Montgomery [3], Katz and Sarnak [5] use the first method normalizing the zeros by:
Then the normalized consecutive spacings are defined by
Odlyzko uses the second method. He starts with the unnormalized consecutive spacings and normalized these by
This normalization is arrived at by considering more accurate asymptotic expressions for . (See Odlyzko [4]).
Montgomery proved a series of results about zeros of the zeta function which led him to the following conjecture. Let and be two positive numbers with . Montgomery conjectured that
where the left side of the equation uses the first n normalized zeros. Note that:
The right side is called a kth-consecutive spacing. Thus, for the normalized spacings, we are counting the number of kth-consecutive spacings that lie in the interval . Of course this number can be bigger than 1. The function
is called the pair correlation function. In Figure 2 we give a graph of the pair correlation function.
If the zeros were to occur at random points on the critical line, i.e. where one zero occurs has no bearing on where another would occur, then the zeros would act like a Poisson process. In this case the pair correlation function would be the constant function . On the other hand, Montgomery's conjecture implies that, if we are sitting at a zero on the critical line, it is unlikely that we will find another zero close by. An analogy is with the prime numbers themselves - that two is prime implies that it is impossible that any two numbers separated by one (other than two and three) can both be prime.
Of course, it is natural to ask if the data supports Montgomery's conjecture. Thanks to the remarkable computations of Andrew Odlyzko we have a large amount of data. Since Montgomery's conjecture is an asymptotic results Odlyzko concentrated on computing the zeros near a very large zero. He has some of this data available on his web site. For example he has the zeros number through of the zeta function. In Figure 3 we show the empirical results from the data compared with the theoretical limiting density.
We have divided the interval from 0 to 3.05 into 61 equal subintervals and used these for our intervals . We see that the fit is very good for intervals between and , but it is less convincing for the other intervals.
In computations in the 1980's Odlyzko computed 175,587,726 zeros near zero . More specifically he started with zero and ended with zero number . Using this data, Odlyzko used 70 million zeros near zero to check Montgomery's conjecture obtaining a remarkable fit as shown in Figure 4.
## The Dyson connection
Montgomery's work was very exciting, since proving any sort of structure about the zeros - if they are where we think they are - is exciting. But things like this can provide subtle clues towards a proof. Perhaps it is possible to find a way of producing numbers that have the same properties, and if so then maybe, just maybe, you can find some direct link between Riemann's zeta function and the process, and then maybe, really - just maybe you can prove the Riemann Hypothesis. So, did this pair correlation arise in other places?
Freeman Dyson had been working on a problem in statistical mechanics - this is the physics and mathematics which attempt to model the behavior of huge systems, like all the atoms in a liter of gas - all of them. For systems of such huge populations it is impossible to model each particle individually and what statistical mechanics tries to do is to find some predictable average behavior, or to work out the rough distribution of what the particles are doing, say 20% are moving at such and such a speed, in such and such a direction, so much percent of the time.
Dyson, along with his collaborator Mehta, had been working out the theory of random matrices. Building on work of another Princetonian (and Nobel laureate) Eugene Wigner, they were looking to gain some insight (and prove some theorems) about the statistical properties of the eigenvalues of a random matrix, which would then cast some light on the problem of the prediction and distribution of energy levels in the nuclei of heavy atoms.
The connection between eigenvalues and nuclei is a bit beyond this discussion; but suffice it to say that the classical many-body formulation of the subatomic dynamics (electrons spinning around a positively charged nucleus) fails. This led Heisenberg to develop a quantization of this classical setting in which a deterministic Hamiltonian, a scalar function describing the energy of a system in terms of the positions and momenta of all the acting agents, is replaced by a matrix acting on a wave function'' which encapsulates our uncertain or probabilistic knowledge of the state of our energetic system. It is interesting to note that Heisenberg did not even know what matrices were when he developed his theory. He was completely led to their constructions by physical theories.
In this matrix mechanics, setting the spectral lines which are observed when a nucleus is bombarded with a particular kind of radiation (which corresponds to an effect of resonant radiation) may in fact be predicted as eigenvalues of certain matrices which model the physics. For heavy nuclei the models are too difficult to construct (too many interactions to write down), so the hope is that the behavior of this model might be like that of an average model of the same basic type (symmetry). These average or ensemble behaviors could be calculated and give some insight into the particular situation, or so it was hoped. These were the things that Dyson and Mehta were studying.
In a brief conversation in the Institute tea room, where the ghosts of Einstein, Godel, Von Neumann and Oppenheimer still held forth (although did not eat many of the cookies) Dyson and Montgomery discovered that it looked as though they had been studying the same thing!
## Random Matrices
To understand why Dyson recognized the pair correlation function, we need to understand some results about random matrices. A random matrix is a matrix whose entries are random variables. In the areas of quantum mechanics and number theory, there are several classes of random matrices that are of interest; they are referred to by certain acronyms, such as GUE, CUE, GOE, and GSE. In Sarnak's lectures, he discusses the CUE class which is asymptotically the same as the GUE class. Here, we will discuss the GUE class because that is the one chosen by Odlyzko but it turns out that the mathematics is the same.
The acronym GUE stands for Gaussian Unitary Ensemble. By ensemble we mean a collection of matrices and a method for choosing one of them at random. The entries in an GUE matrix are complex numbers chosen so that the matrix is Hermitian, i.e. so that if the th entry is then the th entry is . For diagonal entries this means that so the diagonal entries must be real. Thus to pick a random GUE matrix, we only have to specify how we pick the entries on or above the main diagonal. For a GUE ensemble, we choose entries on the main diagonal from a normal distribution with mean 0 and standard deviation . For entries with (above the main diagonal) and are chosen from a normal distribution with mean 0 and standard deviation 1. Then the entries below the diagonal are determined by the condition that the matrix be Hermitian.
It should be noted that some authors choose the entries on the main diagonal to be a standard normal distribution and the entries above the main diagonal and to have a normal distribution with mean and standard deviation . This choice does not make an essential difference in the results that we will discuss but makes annoying differences in normalization factors.
It is well-known from linear algebra that Hermitian matrices have real eigenvalues The eigenvalues of a matrix are numbers such that for each such , there exists a non-zero vector with the property that .The first question one might consider is how these eigenvalues are distributed along the real line. More precisely, if we choose a random GUE matrix, can we say anything about the probability of finding an eigenvalue of this matrix in the interval ?
The physicist Eugene Wigner worked with random matrices in connection with problems in quantum mechanics. He proved that as , the density function of the positions of the eigenvalues approaches a semicircle with center at the origin. In Figure 5, we show the fit for eigenvalues of 50 random GUE matrices, along with the semicircle of radius . In the case, the eigenvalues lie in the interval . We denote these eigenvalues by
The eigenvalues of the matrix are called the spectrum" of the matrix. From this histogram we see that there are many more eigenvalues near the middle of the spectrum than at the extreme values.
One can also ask about the distribution of the gaps between successive eigenvalues of a random GUE matrix. It turns out that there are different limit theorems that apply to the middle of the spectrum (called the bulk spectrum") and to the extreme values (called theedge" spectrum). We shall consider only the limit theorem for the bulk spectrum. For this theorem we again normalize the gaps so that asymptotically they have mean 1. The appropriate normalization for our definition of GUE matrices is:
(Here is an example where the normalization depends on the particular choice we made for the definition of GUE random matricies.)
It has been proven that, in the bulk spectrum, the normalized spacings have a limiting distribution as , called the Gaudin distribution. The density of the Gaudin distribution cannot be written in closed form, although it can be computed numerically. In Figure 6, we show a simulation of the normalized spacing distribution for 500 random GUE matrices of size We have also superimposed the Gaudin density. We see that the fit is quite good.
In Figure 7 we compare the Gaudin density with the Poisson density. We note that in the Poisson case, small values are quite probable, while in the normalized spacing case, such values are much less probable. This is sometimes described by saying that the eigenvalues repulse' one another.
The analog of Montgomery's conjecture for the gaps for the normalized eigenvalues is:
for any nonnegative numbers where E means expected value.
The hypothesis that the zeros of the zeta function and the eigenvalues of GUE matrices have the same pair correlation function is called the GUE hypothesis (or the Montgomery-Odlyzko law). Odlyzko contributed to this hypothesis by calculating a very large number of zeros of the Riemann zeta function and using these to support the GUE Hypothesis. As we have seen earlier in Figure 4, Odlyzko compared the empirical pair correlation for the normalized zeros of the zeta function to the conjectured limiting distribution and found a remarkably good fit.
Odlyzko also compared the limiting distributions for the normalized spacings of the eigenvalues of GUE matrices with the distribution of the normalized spacings of the zeros of the zeta function as shown in Figure 8. We see that the fits again remarkably good and this has led to the hope that the use of the theory of random matrices will lead to a proof of the Riemann hypothesis.
## Related Work
In a series of papers, Craig Tracy and Harold Widom made major contributions to the study of random matrices and their applications. This started with their finding limiting distribution for the largest eigenvalue for three ensembles of random matrices studied by Wigner and Dyson: the Gaussian Orthogonal Ensemble (GOE), Gaussian Unitary Ensemble (GUE), and Gaussian Symplectic Ensemble (GSE). For each of the ensembles there is a unique probability measure, called Haar measure, that is invariant under transformations associated with the ensemble. A random matrix for a particular ensemble means a matrix chosen according to the Haar measure. For the GUE ensemble, Haar measure amounts to choosing random matrices by normal distributions, as described earlier. For the three ensembles, the probability densities for the eigenvalues of an matrices to lie in infinitesimal intervals about the points are:
where is a normalizing constant and according to whether we are considering the GOE, GUE, or GSE ensemble.
Tracy and Widom [9] showed that the largest eigenvalue, properly normalized, tends to a limiting distribution. These limiting distributions were different for the three possible values of corresponding to the three different ensembles. Their densities are shown in Figure 9. They are denoted by and they are now called Tracy-Widom distributions. As with the Gaudin distribution there is no formula for these densities though they can be computed numerically. These distributions have been found to be the limiting distributions in many different fields and appear to indicate a new central limit theorem.
Let be the largest eigenvalue of a random GUE matrix. Then Tracy and Widom proved that
has limiting distribution In Figure 10, we show the results of simulating 1000 random GUE matrices and scaling their largest eigenvalues.
The discovery of these distributions has led Tracy and Widom and others to find these distributions as limiting distributions for phenomena in a number of different areas outside of the study of random matrices. We now discuss two such applications.
Around 1960, Stanislaw Ulam considered the following problem: Given a positive integer , let be a random permutation of the integers from 1 to . Let be the length of the longest increasing subsequence in . For example, if the longest increasing sequence is , so Ulam asked: what is the average value of ? Based upon Monte Carlo simulations, Ulam conjectured that the average length is asymptotic to . This conjecture was proven by Vershik and Kerov [6]. Recently the distribution of has been the subject of much research. In particular, Baik, Deift, and Johansson [7] have shown that if is scaled in the appropriate way, then the distribution of approaches a limiting distribution as . More precisely, they show that the random variable
converges in distribution to as . In Figure 11, we show the results of simulating 1000 random permutations of length 100, and scaling the sizes of their longest increasing subsequences.
Another interesting example is based on a model for the spread of a fire studied by Gravner, Tracy and Widom (GTW) [8]. We imagine a large sheet of paper and a fire starting in the bottom left corner. The fire moves deterministically to the right but also moves up by a random process described next.
We start with a grid of unit squares on the paper. We label the rows and the columns We represent the spread of the fire by a placing letters H or D in the squares of the grid. We start at with a letter D at (0,0). To get the configuration of the fire at time we look at the fire in columns 0 to t+1 at time t. If the height of the fire at a column is less than the height of the fire in the column to its left, we add a D to the column. If the height of the fire is greater than or equal to the height of the fire in the column to its left, we toss a coin. If heads comes up we add an H to the height of the column. For column 0 which has no column to the left we always toss a coin to see if we should add an H to this column.
Let's see in more detail how this works. We start at time 0 and put a D in column 0 and so the height of the fire in this column is 1.
t = 0 D
0 1 2 3 4 5 6.
At time t = 1 we look at columns 0 and 1 at time 0. The height of the fire in column 1 is less than in column 0 so we add a D to this column. In column 0 we toss a coin. It came up heads so we added an H to the first blank square in this column. Thus for time t = 1 we have:
t = 1 H
D D
0 1 2 3 4 5 6.
Now at time we look at columns 0,1,2 at time t = 1. The height of the fire in column 2 is less than that in column 1 so we add a D to column 2. Similarly, the height of the fire in column 1 is less than the height of column 0 so we add a D to column 1. For column 0 we tossed a coin to decide if we should add an H to column 0. It came up tails so we did not add an H. Thus for time t = 2 we have:
H D
t = 2 D D D
0 1 2 3 4 5 6.
Continuing in this way we have:
H
t = 3 H D D
D D D D
0 1 2 3 4 5 6
H
H D
t = 4 H D D D
D D D D D
0 1 2 3 4 5 6
H D
H H D D
t = 5 H D D D D
D D D D D D
0 1 2 3 4 5 6
H
H D D
t = 6 H H D D D
H D D D D D.
`
Let be the height of the fire in column after time We wrote a Mathematica program to simulate the spread of the fire. We ran the program for t = 500, t = 1000 and t = 2000. In Figure 12 we show plots of the height of the fire, , for these three times.
Note that up to time the graphs have a curved shape and after they seem to follow a straight line. This behavior is completely described by the limiting results obtained by GTW.
In our simulations we have assumed that the coin that is tossed is a fair coin. The authors consider more generally that the probability that the coin turns up head is . The authors prove limit theorems for the height of the fire after time under the assumption that and in such a way that their ratio is a positive constant for . Their limit theorems use two constants, and which depend on and . These constants are:
The authors prove the following limit theorems showing what happens when the ratio x/t = c is kept fixed with c a positive constant. They have limit theorems for four different ranges for c.
1. GUE Universal Regime: x/t = c with
Thus in this regime, the limiting distribution is the Tracy-Widom distribution for the the limiting distribution for of largest eigenvalue for the GUE ensemble and also the limiting distribution for the length of the longest increasing subsequence in a random permutation.
2. Critical Regime: x/t = c with .
where is a Actually it is enough to assume that .
Note that, since it takes time units for the fire to reach column can be at most . The authors provide the following numerical values of the limiting distribution for negative integers.
Thus, when with the fire at time t will be close to it's maximum possible height t-x.
3. Deterministic Regime: x/t = c with
Note again that can be at most so this says that, in this regime, in the limit, the fire in column x reaches its maximum possible height.
Let's see what these limit theorems would predict for the height of the fire at t = 2000 when p =1/2. The GUE universal regime will apply for . Here the limiting result in this regime implies that should be approximately . The critical regime applies when x = 1000. Here the limit theorem says that will be approximately 1000. The deterministic regime will apply for . The limiting theorem for this regime implies that, for , is well approximated by the line
Putting all this together we would predict that for, t = 2000, the limiting curve should be as shown in Figure 13.
In Figure 14 we show the result of simulating the fire for 2000 time units.
These two graphs are remarkably similar. In fact, if you superimpose them you will not be able to see any difference! Thus the limit results predict the outcome for large t very well.
The authors also consider a fourth regime which they call the finite GUE regime For this regime they fix and let Thus they look for the limiting distribution for the height of the fire in a specific column. For this they prove that
has a limiting distribution which is the distribution of the largest eigenvalue in the GUE of Hermitian matrices. In Figure 15 we show the density for the x = 0 to x = 6.
Since for column one we are just tossing coins to determine the height of the fire, we will get the normal distribution as limiting distribution for column 0, and this is
## Bibliography
1
G. H. Hardy and E. M. Wright, An Introduction to the Theory of Numbers, Oxford University Press, 1938.
2
E. C. Titchmarsh, The Theory of the Riemann Zeta-function, Oxford University Press, 1951.
3
H. L. Montgomery, The pair correlation of zeros of the zeta function, Proc. Symp. Pure Math. 24, A.M.S., Providence 1973, 181-193.
4
A. M. Odlyzko,On the distribution of spacings between zeros of the zeta function, Math. Comp., 48 (1987), pp. 273-308.
5
Nicholas M. Katz and Peter Sarnak,Zeroes of Zeta Functions and Symmetry, Bulletin of the American Mathematical Society, vol. 36 Number 1 (1999), pp. 1-20.
6
Vershik, A.M. and Kerov, S.V. (1977). Asymptotics of the Plancherel measure of the symmetric group and the limiting form of Young tableaux. Soviet Math. Dokl. 18, 527-531. Translation of Dokl. Acad. Nauk. SSSR 32, 1024-1027.
7
J. Baik, P. Deift, K. Johansson, On the distribution of the length of the longest increasing subsequence of random permutations, J. Amer. Math. Soc. 12 (1999), 1119-1178.
8
J. Gravner, C .A. Tracy, H. Widom)A Growth Model in a Random Environment, to appear in Annals of Probability. Available at http://arxiv.org/.
9
C .A. Tracy, H. Widom)The Distribution of the Largest Eigenvalue in the Gaussian Ensembles, Calogero-Moser-Sutherland Models, eds. J.F. van Diejen and L. Vinet, CRM Series in Mathematical Physics 4, Springer-Verlag, New York, 2000, pp. 461-472. Available at http://arxiv.org/.
## About this document ...
This document was generated using the LaTeX2HTML translator Version 2002-2-1 (1.70)
Copyright © 1993, 1994, 1995, 1996, Nikos Drakos, Computer Based Learning Unit, University of Leeds.
Copyright © 1997, 1998, 1999, Ross Moore, Mathematics Department, Macquarie University, Sydney.
The command line arguments were:
latex2html -split 0 -no_math -html_version 3.2 -white -no_transparent part3aug10.tex
The translation was initiated by Peter Kostelec on 2005-04-05
Peter Kostelec 2005-04-05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9501681327819824, "perplexity": 300.18618833938746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00112-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://pyxtal.readthedocs.io/en/latest/Background.html | # Background and Theory¶
This is a pedagogical introduction to crystallography and basic group theory. For information about how PyXtal works specifically, see the Algorithm <Algorithm.html>_ page.
## Crystals and Structures¶
When studying solids, it is often useful to describe a material’s structure at the atomic level. From this description one can (in theory) determine the material’s physical properties, including mechanical strength, electrical and thermal conductivity, melting point, etc. Due to the near-infinite number of possible materials and atomic geometries, it is necessary to have a consistent mathematical framework. This is described by crystallography.
For an atomic structure, we could describe the geometry by specifying the type and position of every atom. This works alright for molecules, and is in fact how computers typically encode molecules. But for an ideal crystal, which is infinitely large, it is impossible to describe where each individual atom lies. Fortunately, because crystals are symmetrical, we can specify one part of the crystal, and then use the symmetry operations to generate the rest of the crystal. This creates a perfectly symmetrical structure which is infinitely large in size. Such objects do not exist in nature, but they are nevertheless useful for understanding small parts of real, imperfect crystals. So, we call this infinite and symmetrical object an ideal crystal.
Most inorganic materials are formed by many small (nearly) ideal crystals called grains. These grains may have different shapes, sizes, and orientations, but each grain has the same crystal structure at the inter-atomic scale. If we can determine this crystal structure, it becomes possible to predict the way that the grains form and interact with each other. From this, we can go on to predict properties at larger and larger scales, and determine how useful a material will behave in different physical situations. Therefore, determining a material’s small-scale crystal structure is absolutely essential for modern materials science and engineering.
At different pressures and temperatures, a material may go through a solid phase transition, and take on a different crystal structure. So, one job of crystallographers is to determine how a system will change under different conditions. Often, new structures will form at high pressure, and sometimes these structures have vastly superior properties (think diamond .v.s graphite). Thus, high pressure physics forms an active branch of physics and chemistry, and is a potential avenue for finding high temperature superconductors.
## Periodicity, Lattices, and Unit Cells¶
Formally, an ideal crystal is an atomic structure that is periodic in three dimensions. This means that when we translate the structure by a certain amount (in any one of 3 directions unique to the crystal), the crystal will look the same. This can be pictured in a few simple steps:
1. Define a small parallelepiped-shaped box.
2. Put atoms into the box (You can put as few or as many atoms as you like.
1. Make a copy of the box and place it adjacent to the original box.
4. Make a copy of the copy, and place that adjacent to the previous one, but along a different axis.
1. Repeat step 4 until you have filled all of space.
We say that the resulting object has translational symmetry, or that it is periodic. We can be more specific by defining the vectors of translational symmetry. For a given crystal, there are 3 such linearly independent vectors. These 3 vectors, placed into a matrix, define what is called the unit cell. Alternatively, we can define the unit cell using the lengths of each side of the box (usually called a, b, c), along with the angles between them (usually called $$alpha, beta, gamma$$). These 6 values are called the cell parameters. The unit cell is any parallepiped-shaped part of the crystal which can be used to generate the rest of the crystal through translations alone. Any unit cell which has the smallest possible volume is called a primitive cell.
Note: a given crystal can have multiple ways to define a primitive cell, and there is not always a clearly preferred choice. Consider a 2-dimensional square lattice. You could just as well define the lattice using parallelograms which run along the diagonal lines:
To avoid this confusion, there is a set of standards (defined in the International Tables of Crystallography) which is typically used. A cell based on these standards is called the conventional cell. In many cases, the conventional cell is not actually a primitive cell. Instead, the conventional cell may have extra atoms which exist in specific locations within the cell. So the cell type is determined both by the cell parameters, and by any additional atomic sites within the cell.
Different cell parameters lead to different rotational symmetries of the unit cell (we will discuss this more below). Based on these symmetries, unit cells can be divided into seven different crystal classes. Each crystal class has a different range of allowable cell parameters; triclinic is the general class, requiring no symmetry. Combining these restrictions with possible extra lattice positions, we get 14 possible types of lattices, called the Bravais lattices. We list these here:
Triclinic P-monoclinic C-monoclinic P-orthorhombic C-orthorhombic B-orthorhombic F-orthorhombic P-tetragonal
B-tetragonal Hexagonal Rhombohedral R-cubic Body-centered cubic Face-centered cubic [1]
Much like squares can be considered a special case of rectangles, all unit cells can be thought of as special cases of triclinic cells. Cubic cells are a subset of tetragonal cells, tetragonal cells are a subset of orthorhombic cells, and so on. The hexagonal and trigonal lattices are somewhat special cases. They can be generated using either trigonal/hexagonal prisms, or using the standard parallelepiped shape. For consistency, the parallelepiped is always used. Note that despite using a parallelepiped, this is still called a hexagonal cell choice. Some lattices can be generated using a rhombohedral unit cell. Such space groups begin with an R, and always have trigonal symmetry. For these cases, we again use the hexagonal cell.
Whenever possible, PyXtal uses the same choices of unit cell as the Bilbao Crystallographic Server, which in turn uses the standard conventional cell. For a complete list of the cell choices used by PyXtal, see the Group Settings page.
Typically, to describe coordinates within a crystal, we use what are called fractional coordinates. Fractional coordinates use the lattice vectors as the basis, as opposed to absolute coordinates, which use Euclidean space as the basis. This makes it easier to describe two similar structures that differ only in their lattice values. Unless otherwise specified, any listed coordinates are fractional coordinates.
It is important to note that when periodicity is present, multiple coordinates can actually correspond to the same point, at least in notation. It is common practice to convert all coordintaes to lie within the range [0, 1) for periodic axes. So, for example, if we have a point at (1.4,-0.3,0.6), it will usually be written as (0.4,0.7,0.6). This is because it is assumed that each unit cell is the same. In other words, an atom located at (1.4,-0.3,0.6) implies that another atom is located at (0.4,0.7,0.6). So, it is more convenient to only consider the unit cell which lies between (0,0,0) and (1,1,1).
## Symmetry Operations¶
Translations are just one kind of transformation operation. More generally, we can perform any 3-dimensional transformation which preserves the lengths and angles between atoms. This means we can also apply rotations, reflections, and inversions, as well as any combination of these. Note that successive operations do not generally commute. That is, the order of operations determines the final outcome.
A symmetry operation is any transformation which leaves the original structure unchanged. In other words, if the structure looks the same before and after a transformation, then that transformation is a symmetry operation of the object. This includes the identity operation (doing nothing to the object), which means that every object has at least a trivial symmetry.
We can artificially split a transformation into two parts: the rotational and inversional part (given by a 3x3 matrix), and the translational part (given by a 3D vector, specifically a 3x1 column matrix). Often, we denote this as a matrix-column pair (P,p) or (P|p), where the capital letter P represents the rotation matrix, and the lowercase letter p represents the translation vector.
We can define the 3x3 rotation matrix by using 3 orthogonal unit vectors as the columns. The resulting matrix is orthogonal, meaning the determinant is either +1 or -1. If only a rotation is applied, then the determinant is +1, and if an inversion is applied, the determinant is -1. If an object has no symmetry operations with determinant -1, it is said to be chiral. In this case, the object’s mirror image is different from the original, and cannot be rotated to match its twin. This is especially important for molecules with biochemical applications, since the mirror molecule may have a different effect.
Now, we can define how one operation is applied to another. We consider two operations: (P,p) and (Q,q). If we first apply (P,p), followed by (Q,q), then we get a new operation, which we will call (R,r): (Q,q)(P,p) = (R,r). Note that we apply operations from the left. Then, the relationships are:
R = Q*P
r = Q*p + q
where * denotes standard matrix multiplication. From this definition, we see that the rotation is always applied first, followed by the translation. This rule applies for multiple operations as well; with 3 operations (R,r)(Q,q)(P,p), we first apply (P,p), then (Q,q), then (R,r).
Alternatively, the matrix-column pair can be combined into a single 4x4 matrix. We simply place the vector to the right of the rotation matrix, place 0’s on the bottom row, and place a 1 in the lower right-hand corner:
This matrix is called an affine transformation matrix. With it, we can apply operations using a single matrix multiplication operation. Although this may seem like just a mathematical trick, the affine matrix notation highlights the group structure of the transformations, as it allows translations and rotations to be placed on equal footing. Furthermore, we can use the additional dimension to represent time: the ‘1’ value can be thought of as a single step forward in time, and thus we can define both rotational and translational reference frames (and equivalently, torques and forces) with a single 4x4 matrix. Objects which are (periodically) symmetric in time are called time crystals. Such objects have only recently been synthesized in the lab, and there is likely more research to be done. However, for most applications in crystallography, time is not a factor, and we consider only spatial symmetries.
Sometimes crystallographers express an affine transformation as a list of letters and numbers, separated by commas (for example, x,y,z). In this notation, the first, second, and third positions denote what happens to the unit x, y, and z axes, respectively. So if we want to perform an inversion, we replace each axis with its opposite. Then, x,y,z becomes -x,-y,-z. So, you can use -x,-y,-z to represent an inversion. Similarly, y,-x,z would represent a 90 degree rotation about the z axis (using the right hand rule). You can also map to a linear combination of axes, or add a constant value. So, you might see something like x-y,x,z+1/2. Here, we just follow the same procedure: x, which is the vector (1,0,0) is mapped onto x-y, which is the vector (1,-1,0). y (0,1,0) is mapped onto x (1,0,0), and z (0,0,1) is mapped onto z+1/2 (0,0,1), or in the 4x4 notation, (0,0,1,.5). To express the addition of a constant (in this case 1/2 for the z-axis), the right-hand side of the 4x4 matrix is used. So, we would write x-y,x,z+1/2 as:
Note that the mapped vectors are written as rows, NOT columns. So, x-y is written on the first row as (1,-1,0,0). Again, the bottom row is always (0,0,0,1), so that matrix multiplication is preserved.
## Groups¶
Symmetry operations have several nice properties, and this allows certain sets of them to be classified as a mathematical object called a group. There are several simple and intuitive examples of groups, which we will discuss below. Formally, a group G is a set of mathematical objects (called elements) with 4 properties:
1) There is a binary operation which maps any two elements in the set onto a third element which is also in the set: A*B = C. The operation must be defined for every possible pair on the set, and must map onto an element which is inside of the set.
2) There must be exactly one identity element I which maps every element of the set onto itself: A*I = I*A = A for every A in G.
3) Every element A must have an inverse A^-1, such that multiplication by the inverse gives the identity: A*A^-1 = A^-1*A = I.
1. The operation * must be associative. That is, (A*B)*C = A*(B*C).
Note that commutativity is not a requirement for groups, but associativity is. Anticommutativity has important implications for describing rotations and angular momentum in 3 dimensions, which are beyond the scope of this study.
One of the simplest examples of a group is the additive group of real integers (Z,+). Here, the set is that of the integers (-1, 0, 1, ...), and the operation is addition. Here, the inverse of a number is just its negative. For example, the inverse of -2 is 2. One can easily verify that the 4 properties listed above hold true for this group. Similarly, we can consider the additive group of real numbers (R,+), or the additive group of complex numbers (C,+).
However, if we replace addition with multiplication, then we no longer have a group, because the element 0 does not have a multiplicitive inverse: any number multiplied by 0 is 0, but any number divided by 0 is undefined. We can fix this by considering the multiplicative group of all numbers except for 0. Or, equivalently, we can consider the multiplicitave group exp(x), where x is any complex number. Then, the inverse is defined as exp(-x), and the identity element is exp(0) = 1.
Interestingly, the real numbers are a subset of the complex numbers, and yet both the complex numbers and the real numbers form groups in their own right. In this case, we call the real numbers a subgroup of the complex numbers. Likewise, we call the complex numbers a supergroup of the real numbers. More specifically, we say that the real numbers are a proper subgroup of the complex numbers, because there are fewer real numbers than complex numbers. Likewise, the complex numbers form a proper supergroup of the real numbers. So, a group is always both a subgroup and a supergroup of itself, but is never a proper subgroup or proper supergroup of itself.
These are so far all examples of infinite groups, since there are infinitely many points on the number line. However, there also exist finite groups. For example, consider the permutation group of 3 objects (we’ll call them a, b, and c). Our group elements are:
1: (a,b,c)
2: (a,c,b)
3: (b,a,c)
4: (b,c,a)
5: (c,a,b)
6: (c,b,a)
As you can see, there are only 6 elements in this group. Element (1) is the identity, as it represents keeping a, b, and c in their original order. Element (2) represents swapping b and c, element (3) represents swapping a and b, and so on.
In general, we call the number of elements in a group the order of that group. In the example above, the order is 6. If there are an infinite number of elements in a group (for example, the additive group of real numbers), we say the group has infinite order. A group of order 1 is called a trivial group, because it has only one element, and this must be the identity element. Furthermore, because every group has an identity element, every group also contains a trivial group as a subgroup.
Sometimes, it is inconvenient to list every member of a group. Instead, it is often possible to list only a few elements, which can be used to determine, or generate the other elements. These chosen elements are called generators. For example, consider elements (2) and (3) in the permutation group shown above. We can define the remaining elements (1, 4, 5, and 6) starting with only (2) and (3) (with operations acting from the left):
2 * 2 = 1 : (a,c,b) * (a,c,b) = (a,b,c)
2 * 3 = 4 : (a,c,b) * (b,a,c) = (b,c,a)
3 * 4 = 6 : (b,a,c) * (b,c,a) = (c,b,a)
6 * 2 = 5 : (c,b,a) * (a,c,b) = (c,a,b)
Thus, we say that (2) and (3) are generators of the group. Typically, there is not a single best choice of generators for a group. We could just as easily have chosen (2) and (6), or (4) and (3), or some other subset as our generators.
## Symmetry Groups¶
One can verify that the four properties of groups listed above also hold for our 4x4 transformation matrices. Thus the set of all 3D transformations (with 4x4 matrix multiplication as our operation) forms a group. Because of this, the tools of group theory become available.
When we want to define the symmetry of an object, we specify the object’s symmetry group. A symmetry group is just the set of all of the object’s symmetry operations (described above). It turns out, the set of all symmetry operations for an object always forms a group. The group properties (2-4) hold because we are using 4x4 transformation matrices, which are already a group. Property (1) holds because a symmetry group is always a closed set. This is because performing any symmetry operations always brings us back to our original state, and therefore combining multiple symmetry operations also brings us back to the original state. Thus, combinations of symmetry operations are themselves symmetry operations, and are therefore elements of the object’s symmetry group.
The simplest 3D symmetry group is the trivial group (called “1”). This group has only the identity transformation I, which means that it corresponds to a completely asymmetrical object. For such an object, there is no transformation (besides the identity) which brings the object back to its original state. Most molecules have at least some rotational symmetry, and crystals always have at least translational symmetry, so we will not encounter this group very often.
On the other hand, we can consider empty 3D space, which is perfectly symmetrical (note: this does not apply to actual empty space, which contain gravitational and quantum fields). The symmetry group of empty space includes not only rotations and translations, but also scaling and shearing, since nothing will always be mapped back onto nothing.
Note that only empty space, or other idealized objects (including some fractals) can have scaling symmetry. For atomic structures, we will never encounter this. However, shear symmetry is possible for lattices. As an example, consider the different choices for the primitive cell shown in the section above. These different primitive cells can be mapped onto each other using shear transformations. It is important to note that in general only simple lattices have this shearing symmetry; if there are atoms inside of the lattice, they may not map onto other atoms in the crystal.
We can also define symmetry groups for objects of arbitrary dimension. A simple example is the equilateral triangle, which has a 3-fold rotational symmetry, as well as 3 reflectional symmetries. A slightly more complex example is the regular hexagon, which has all of the symmetries of the triangle, but also 6-fold and 2-fold rotational symmetry, and additional reflectional symmetries. Combining rotation and reflection, the hexagon also has the inversion symmetry:
triangular symmetry hexagonal symmetry
It takes practice to develop an intuition for finding symmetries, but the results can be very rewarding. Often, a symmetry can be utilized to lessen the work needed to solve a problem, sometimes even reducing the problem to a trivial identity. This is a core concept in mathematics and physics, and deserves reflection.
### Point Groups¶
In order for an object to be translationally symmetric, it must be periodic along one or more axes. This means that most objects (excluding crystals and certain idealized chain molecules) can only have rotational/inversional symmetry. A 3D symmetry group without translational symmetry is called a point group. This is because the transformations leave at least one point of space unmoved. This includes rotations, reflections, inversions, and combinations of the three. Note that we can either use rotations and reflections, or rotations and inversions, to generate the remaining point transformations. In PyXtal and the documentation, we use rotations and inversions as the basic transformations, meaning reflections are treated as rotoinversions.
A point group can contain rotations, reflections, and possibly inversion. There are several conventions for naming point groups, but PyXtal uses the Schoenflies notation. Here, point groups have one or two letters to describe the type(s) of transformations present, and a number to describe the order. For detailed information, see the Wikipedia page. Below are a few examples of point groups found in crystallography and chemistry.
• $$H_2O$$: point group C2v (2-fold rotation axis, and two mirror planes) [2]
• Hypothetical Pmmm crystal: point group mmm (3 mirror planes)
• Buckminsterfullerene: point group Ih (Full icosahedral symmetry) [3]
$$H_2O$$ molecule (C2v) Hypothetical crystal (mmm) Buckminsterfullerene (Ih)
### Space Groups¶
For crystals, we need to describe both the translational (lattice) and rotational (point group) symmetry. A 3D symmetry group containing both of these is called a space group, and is one of the main tools used by crystallographers. We can separate a space group into its point group and its lattice group. Thus, space groups can be neatly divided into the seven different crystal classes. Mathematically, the two different types of symmetry are connected; thus, certain kinds of translational symmetry (lattice types) can only have certain kinds of rotational symmetry (point groups). This is apparent from the names of the space groups; certain symbols are only found in certain lattice systems. A list of space groups and their symmetries is provided by the Bilbao utility WYCKPOS. Note that for space groups, we use the Hermann-Mauguin (H-M) notation. This means a space group can be specified by a number between 1 and 230. However, a space group symbol should always be provided, as the numbers are not used as commonly. The numbers are more useful for computer applications like PyXtal or Pymatgen, or in conjunction with references like the Bilbao server or the International Tables.
Technically speaking, two crystals with the same lattice type and point group, but with different cell parameters, have different space groups. The space group is the set of all symmetry operations, and in this case the translational symmetry operations would be different. But typically when someone says space group, they actually mean the set of all space groups with the same lattice type and point group. In this sense, we say that there are 230 different space groups. This is the meaning of space group which we will use from now on, unless otherwise specified. This is useful, since we don’t need to define a new space group every time we shrink or stretch a crystal by some small amount.
Not every rotational symmetry is compatible with a 3D lattice. Specifically, only rotations of order 2, 3, 4, or 6 are found in real crystals (Note: pseudo-crystals may have different local symmetries, but lack long-range periodicity). As a result, only 32 point groups are found as subgroups of space groups. These are called the crystallographic point groups. So, by choosing such a point group, along with a compatible lattice, we define a space group. By compatible lattice, we mean any lattice which maps onto itself under the symmetry operations of the chosen point group. Because of this compatibility condition, the presence of a particular symmetry can tell you what kind of lattice is present. For example, a 6-fold rotation always belongs to a hexagonal lattice. A 3-fold rotation about one of the primary axes belongs to a trigonal axis, whereas a 3-fold rotation about the diagonal belongs to a cubic lattice. In this way, the lattice type can be determined from the Hermann-Mauguin symbol.
In reality, a crystal is often distorted slightly from its ideal symmetrical state. As a result, two researchers may label the same crystal with different space groups. This phenomenon is called pseudosymmetry; it is when a crystal is close to possessing a certain space group, but is only slightly off. This is a real problem for computational crystallography, since numerical accuracy makes determining symmetry an imprecise business. For example, if an atom is located at (0,1/3,0), it will be encoded as something like (0,.33333,0) due to rounding. As a result, it will be slightly off from the expected location, and the computer may not recognize the 3-fold symmetry. So, whenever you work with crystal symmetry, it is a good idea to allow some numerical tolerance (roughly somewhere between .001 and .03 Angstroms), so as to correctly assess the symmetry. On the flip side, if a provided crystal is labeled as having P1 symmetry (which means no rotational symmetry was found), it is likely that some symmetry is actually present, but was not found due to numerical issues.
## Wyckoff Positions¶
Because symmetry operations can be thought of as making copies of parts of an object, we can usually only describe part of a structure, and let symmetry generate the rest. This small part of the structure used to generate the rest is called the asymmetric unit. However, not all points in the asymmetric unit are generated the same. If an atom lies within certain regions - planes, lines, or points - then the atom may not be “copied” as many times as other atoms within the asymmetric unit. A familiar example is in the creation of a paper snowflake. We start with a hexagon, then fold it into a single triangle 6 sheets thick. Then, if we cut out a mark somewhere in the middle of the triangle, the mark is copied 6-fold. However, if we instead cut out a mark alonng the triangle’s edge, or at the tip, the marks will only have 3 or 1 copies:
These different regions are called Wyckoff positions, and the number of copies is called the multiplicity of the Wyckoff position. So, if an atom lies in a Wyckoff position with multiplicity greater than 1, then that Wyckoff position actually corresponds to multiple atoms. However, thanks to symmetry, we can refer to all of the copies (for that particular atom) as a single Wyckoff position. This makes describing a crystal much easier, since we no longer need to specify the exact location of most of the atoms. Instead, we need only list the space group, the lattice, and the location and type of one atom from each Wyckoff position. This is exactly how the cif file format encodes crystal data (more info below). Just keep in mind that in this format, a single atomic entry may correspond to multiple atoms in the unit cell.
The largest Wyckoff position, which makes a copy for every symmetry operation, is called the general Wyckoff position, or just the general position. In the snowflake example, this was the large inner region of the triangle. In general, the general position will consist of every location which does not lie along some special symmetry axis, plane, or point. For this reason, the other Wyckoff positions are called the special Wyckoff positions.
The number and type of Wyckoff positions are different for every space group; a list of them can be found using the Bilbao utility WYCKPOS. In the utility, Wyckoff positions are described using the x,y,z notation, where each operation shows how the original (x,y,z) point is transformed/copied. In other words, if we choose a single set of coordinates, then plugging these coordinates into the Wyckoff position will generate the remaining coordinates. As an example, consider the general position of space group P222 (#16), which consists of the points (x,y,z), (-x,-y,z), (-x,y,-z), and (x,-y,-z). If we choose a random point, say (0.321,0.457,0.892), we can determine the remaining points:
(x,y,z)->(0.321,0.457,0.892)
(-x,-y,z)->(0.679,0.543,0.892)
(-x,y,-z)->(0.679,0.457,0.108)
(x,-y,-z)->(0.321,0.543,0.108)
Here a negative value is equal to 1 minus that value (-0.321 = 1 - 0.321 = 0.679).
To denote Wyckoff positions, a combination of number and letter is used. The number gives the multiplicity of the Wyckoff position, while the letter differentiates between positions with the same multiplicity. The letter ‘a’ is always given to the smallest Wyckoff position (usually located at the origin or z axis), and the letter increases for positions with higher multiplicity. So, for example, the space group I4mm (#107) has 5 different Wyckoff positions: 2a, 4b, 8c, 8d, and 16e. Here, 16e is the general position, since it has the largest multiplicity and last letter alphabetically.
Note that for space groups with non-simple lattices (those which begin with a letter other than ‘P’), the Wyckoff positions also contain fractional translations. Take for example the space group I4mm (#107). The Bilbao entry can be found here. Each listed Wyckoff position coordinate has a copy which is translated by (0.5,0.5,0.5). It is inconvenient to list each of these translated copies for every Wyckoff position, so instead a note is placed at the top. This is why Wyckoff position 16e has only 8 points listed. In this case, to generate the full crystal, one could apply the 8 operations listed, then make a copy of the resulting structure by translating it by the vector (0.5,0.5,0.5). Note that in space groups beginning with letters other than P, the smallest Wyckoff position will never have a multiplicity of 1.
In addition to the generating operations, the site symmetry of each Wyckoff position is listed. The site symmetry is just the point group which leaves the Wyckoff position invariant. So, if a Wyckoff position consists of an axis, then the site symmetry might be a rotation about that axis. The general position always has site symmetry 1, since it corresponds to choosing any arbitrary structure or location can be made symmetrical by copying it and applying all of the operations in the space group.
Finally, since crystals are infinitely periodic, a Wyckoff position refers not only to the atoms inside a unit cell, but every periodic copy of those atoms in the other unit cells. Thus, the Wyckoff position x,y,z is the same as the position x+1,y+1,z`, and so on. This is usually a minor detail, but it must be taken into account for certain computational tasks.
## Molecular Wyckoff Positions¶
In most cases, it is assumed that the objects occupying Wyckoff positions will be atoms. Because atoms are spherically symmetrical, they will always possess the site symmetry associated with a given Wyckoff position. However, this is not always the case for molecules, which have their own point group symmetry. Because of this, a given molecule may or may not fit into a given Wyckoff position, depending on its symmetry and orientation.
In order for a molecule to fit within a Wyckoff position, its point group must be a supergroup of the position’s site symmetry. In other words, the molecule must be at least as symmetrical as the region of the Wyckoff position itself (with reference to the operations of the space group as a whole). Furthermore, the molecule must be oriented in such a way that its symmetry axes line up with the symmetry axes of the Wyckoff position. As an example, consider a Wyckoff position with site symmetry 2. This is an axis with 2-fold symmetry. Now consider a water molecule lying on this axis. In order to truly occupy the Wyckoff position, the water molecule’s 2-fold axis must line up with the Wyckoff position’s (See the water molecule image above).
For larger site symmetry groups, it is more complicated to check if a molecule will fit or not. The algorithm used by PyXtal for doing this is detailed in the How PyXtal Works page.
## Crystal File Formats¶
There are two main file formats used for storing crystal structures: cif and POSCAR. Each of these has standard definitions. Here is the cif file definition (given by the International Tables), and here is the POSCAR file definition (given by Vasp).
Cif uses the space group symmetry to compress the data. The core information consists of the space group, the lattice, and the location and type of a single atom from each Wyckoff position. So, for high symmetry space groups, a cif file can be much smaller than a POSCAR file. As with any type of compression, the cif file has the downside that the program using it must be able to work with symmetry operations. Specifically, each Wyckoff position’s generating atom must be copied using the symmetry operations, so that the entire unit cell can be known.
In contrast, a POSCAR file does not provide the symmetry information, but instead specifies the type and location of every atom in the unit cell, including those which are symmetrical copies of each other. This results in a larger file, but one that is easier to read, since no symmetry operations need to be applied. The downside is that if one wishes to know the space group, it must either be calculated, or given by some external source.
Each format has advantages and disadvantages. A computational crystallographer should be familiar with both, and understand the differences. If you provide a POSCAR file for a structure, you should also provide the symmetry group. Likewise, if you provide a cif file, you should be certain that the symmetry information is correct, and that you are using the correct space group setting.
[2] Image from “Molecular Orbitals for Water (H2O)”http://www1.lsbu.ac.uk/php-cgiwrap/water/pfp.php3?page=http://www1.lsbu.ac.uk/water/h2o_orbitals.html) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071332573890686, "perplexity": 469.20510132666504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710869.86/warc/CC-MAIN-20221201185801-20221201215801-00342.warc.gz"} |
https://blog.stata.com/tag/mata/ | ### Archive
Posts Tagged ‘Mata’
## The book that Stata programmers have been waiting for
“The book that Stata programmers have been waiting for” is how the Stata Press describes my new book on Mata, the full title of which is
The Stata Press took its cue from me in claiming that it this the book you have been waiting for, although I was less presumptuous in the introduction:
This book is for you if you have tried to learn Mata by reading the Mata Reference Manual and failed. You are not alone. Though the manual describes the parts of Mata, it never gets around to telling you what Mata is, what is special about Mata, what you might do with Mata, or even how Mata’s parts fit together. This book does that.
I’m excited about the book, but for a while I despaired of ever completing it. I started and stopped four times. I stopped because the drafts were boring. Read more…
Categories: Tags:
## Programming an estimation command in Stata: Consolidating your code
$$\newcommand{\xb}{{\bf x}} \newcommand{\gb}{{\bf g}} \newcommand{\Hb}{{\bf H}} \newcommand{\Gb}{{\bf G}} \newcommand{\Eb}{{\bf E}} \newcommand{\betab}{\boldsymbol{\beta}}$$I write ado-commands that estimate the parameters of an exponential conditional mean (ECM) model and a probit conditional mean (PCM) model by nonlinear least squares, using the methods that I discussed in the post Programming an estimation command in Stata: Nonlinear least-squares estimators. These commands will either share lots of code or repeat lots of code, because they are so similar. It is almost always better to share code than to repeat code. Shared code only needs to be changed in one place to add a feature or to fix a problem; repeated code must be changed everywhere. I introduce Mata libraries to share Mata functions across ado-commands, and I introduce wrapper commands to share ado-code.
This is the 27th post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
Ado-commands for ECM and PCM models
I now convert the examples of Read more…
Categories: Programming Tags:
## Programming an estimation command in Stata: Nonlinear least-squares estimators
$$\newcommand{\xb}{{\bf x}} \newcommand{\gb}{{\bf g}} \newcommand{\Hb}{{\bf H}} \newcommand{\Gb}{{\bf G}} \newcommand{\Eb}{{\bf E}} \newcommand{\betab}{\boldsymbol{\beta}}$$I want to write ado-commands to estimate the parameters of an exponential conditional mean (ECM) model and probit conditional mean (PCM) model by nonlinear least squares (NLS). Before I can write these commands, I need to show how to trick optimize() into performing the Gauss–Newton algorithm and apply this trick to these two problems.
This is the 26th post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
Gauss–Newton algorithm
Gauss–Newton algorithms frequently perform better than Read more…
Categories: Programming Tags:
## Programming an estimation command in Stata: Adding analytical derivatives to a poisson command using Mata
$$\newcommand{\xb}{{\bf x}} \newcommand{\betab}{\boldsymbol{\beta}}$$Using analytically computed derivatives can greatly reduce the time required to solve a nonlinear estimation problem. I show how to use analytically computed derivatives with optimize(), and I discuss mypoisson4.ado, which uses these analytically computed derivatives. Only a few lines of mypoisson4.ado differ from the code for mypoisson3.ado, which I discussed in Programming an estimation command in Stata: Allowing for robust or cluster–robust standard errors in a poisson command using Mata.
This is the twenty-third post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
Analytically computed derivatives for Poisson
The contribution of the i(th) observation to the log-likelihood function for the Poisson maximum-likelihood estimator is Read more…
Categories: Programming Tags:
## Programming an estimation command in Stata: Allowing for robust or cluster–robust standard errors in a poisson command using Mata
mypoisson3.ado adds options for a robust or a cluster–robust estimator of the variance–covariance of the estimator (VCE) to mypoisson2.ado, which I discussed in Programming an estimation command in Stata: Handling factor variables in a poisson command using Mata. mypoisson3.ado parses the vce() option using the techniques I discussed in Programming an estimation command in Stata: Adding robust and cluster–robust VCEs to our Mata based OLS command. Below, I show how to use optimize() to compute the robust or cluster–robust VCE.
I only discuss what is new in the code for mypoisson3.ado, assuming that you are familiar with mypoisson2.ado.
This is the twenty-second post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
A poisson command with options for a robust or a cluster–robust VCE
mypoisson3 computes Poisson-regression results in Mata. The syntax of the mypoisson3 command is
mypoisson3 depvar indepvars [if] [in] [, vce(robust | cluster clustervar) noconstant]
where indepvars can contain factor variables or time-series variables.
In the remainder of this post, I discuss Read more…
Categories: Programming Tags:
## Programming an estimation command in Stata: Handling factor variables in a poisson command using Mata
mypoisson2.ado handles factor variables and computes its Poisson regression results in Mata. I discuss the code for mypoisson2.ado, which I obtained by adding the method for handling factor variables discussed in Programming an estimation command in Stata: Handling factor variables in optimize() to mypoisson1.ado, discussed in Programming an estimation command in Stata: A poisson command using Mata.
This is the twenty-first post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
A Poisson command with Mata computations
mypoisson2 computes Poisson regression results in Mata. The syntax of the mypoisson2 command is
mypoisson2 depvar indepvars [if] [in] [, noconstant]
where indepvars can contain factor variables or time-series variables.
In the remainder of this post, I discuss Read more…
Categories: Programming Tags:
## Programming an estimation command in Stata: Handling factor variables in optimize()
$$\newcommand{\xb}{{\bf x}} \newcommand{\betab}{\boldsymbol{\beta}}$$I discuss a method for handling factor variables when performing nonlinear optimization using optimize(). After illustrating the issue caused by factor variables, I present a method and apply it to an example using optimize().
This is the twenty post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
How poisson handles factor variables
Consider the Poisson regression in which I include a full set of indicator variables created from Read more…
Categories: Programming Tags:
## Programming an estimation command in Stata: A poisson command using Mata
$$\newcommand{\xb}{{\bf x}} \newcommand{\betab}{\boldsymbol{\beta}}$$I discuss mypoisson1, which computes Poisson-regression results in Mata. The code in mypoisson1.ado is remarkably similar to the code in myregress11.ado, which computes ordinary least-squares (OLS) results in Mata, as I discussed in Programming an estimation command in Stata: An OLS command using Mata.
I build on previous posts. I use the structure of Stata programs that use Mata work functions that I discussed previously in Programming an estimation command in Stata: A first ado-command using Mata and Programming an estimation command in Stata: An OLS command using Mata. You should be familiar with Read more…
Categories: Programming Tags:
## Programming an estimation command in Stata: Using optimize() to estimate Poisson parameters
$$\newcommand{\xb}{{\bf x}} \newcommand{\betab}{\boldsymbol{\beta}}$$I show how to use optimize() in Mata to maximize a Poisson log-likelihood function and to obtain estimators of the variance–covariance of the estimator (VCE) based on independent and identically distributed (IID) observations or on robust methods.
This is the eighteenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
Using optimize()
There are many optional choices that one may make when solving a nonlinear optimization problem, but there are very few that one must make. The optimize*() functions in Mata handle this problem by making a set of default choices for you, requiring that you specify a few things, and allowing you to change any of the default choices.
When I use optimize() to solve a Read more…
Categories: Programming Tags:
## Programming an estimation command in Stata: A review of nonlinear optimization using Mata
$$\newcommand{\betab}{\boldsymbol{\beta}} \newcommand{\xb}{{\bf x}} \newcommand{\yb}{{\bf y}} \newcommand{\gb}{{\bf g}} \newcommand{\Hb}{{\bf H}} \newcommand{\thetab}{\boldsymbol{\theta}} \newcommand{\Xb}{{\bf X}}$$I review the theory behind nonlinear optimization and get more practice in Mata programming by implementing an optimizer in Mata. In real problems, I recommend using the optimize() function or moptimize() function instead of the one I describe here. In subsequent posts, I will discuss optimize() and moptimize(). This post will help you develop your Mata programming skills and will improve your understanding of how optimize() and moptimize() work.
This is the seventeenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.
A quick review of nonlinear optimization
We want to maximize a real-valued function $$Q(\thetab)$$, where $$\thetab$$ is a $$p\times 1$$ vector of parameters. Minimization is done by maximizing $$-Q(\thetab)$$. We require that $$Q(\thetab)$$ is twice, continuously differentiable, so that we can use a second-order Taylor series to approximate $$Q(\thetab)$$ in a neighborhood of the point $$\thetab_s$$,
$Q(\thetab) \approx Q(\thetab_s) + \gb_s'(\thetab -\thetab_s) + \frac{1}{2} (\thetab -\thetab_s)’\Hb_s (\thetab -\thetab_s) \tag{1}$
where $$\gb_s$$ is the $$p\times 1$$ vector of first derivatives of $$Q(\thetab)$$ evaluated at $$\thetab_s$$ and $$\Hb_s$$ is the $$p\times p$$ matrix of second derivatives of $$Q(\thetab)$$ evaluated at $$\thetab_s$$, known as the Hessian matrix. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8319891095161438, "perplexity": 2367.5684553877804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036099.6/warc/CC-MAIN-20220625190306-20220625220306-00078.warc.gz"} |
http://www.physicsforums.com/showthread.php?t=189463 | ## Runway [ Tension ]
1. The problem statement, all variables and given/known data
A transport plane takes off from a level landing field with two gliders in tow, one behind the other. The mass of each glider is 700kg, and the total resistance (air drag plus friction with the runway) on each may be assumed constant and equal to 2700 N . The tension in the towrope between the transport plane and the first glider is not to exceed 12000N.
If a speed of 40m/s is required for takeoff, what minimum length of runway is needed?
What is the tension in the towrope between the two gliders while they are accelerating for the takeoff?
2. Relevant equations
F=ma
3. The attempt at a solution
First I drew the problem out. On how I see it.
Now I will draw the forces on each object separately.
Transport Plane:
Normal and Weight equally the same.
Tension pulling from the left.
Force of 40m/s going to the right. (This is the boost it needs for the objects to move.)
Glider behind Transport Plane:
Normal and Weight equally the same.
Tension pulling from right larger.
Tension pulling from left smaller.
Glider behind Glider:
Normal and Weight equally the same.
Tension pulling from right larger.
PhysOrg.com science news on PhysOrg.com >> King Richard III found in 'untidy lozenge-shaped grave'>> Google Drive sports new view and scan enhancements>> Researcher admits mistakes in stem cell study
Mentor
Blog Entries: 1
Quote by Heat Transport Plane: Normal and Weight equally the same. Tension pulling from the left.
Good.
Force of 40m/s going to the right. (This is the boost it needs for the objects to move.)
40 m/s is a speed, not a force! There's some unknown force pulling the Transport plane to the right. Luckily you won't need to know it.
Glider behind Transport Plane: Normal and Weight equally the same. Tension pulling from right larger. Tension pulling from left smaller.
Good.
Glider behind Glider: Normal and Weight equally the same. Tension pulling from right larger.
Good.
Now assume the rope in the middle has the maximum tension allowed. Now figure out the acceleration. Hint: Consider the two gliders as a single system. What's the net force on those gliders? The total mass?
Total Mass would be 1400kg. Net force would be 2(tension pulling from Transport Plane). If this is correct, I will go on to the next step. The basics I just learned from you. :) The part I don't understand is now that I have an "single" object of 1400kg, the weight and normal force cancel out and the only horizontal force left is unknown. :O The transport plane has also a weight and normal force that cancel out, the boost it's getting is larger than the tension from the left. But for this one I don't got values. :( I went a little ahead and know that once I have the acceleration, I will be able to plug it into this equation. v^2 = vi^2 + 2a(x-xi)
Mentor
Blog Entries: 1
## Runway [ Tension ]
Quote by Heat Total Mass would be 1400kg. Net force would be 2(tension pulling from Transport Plane).
The net force on the gliders is that single tension from the rope attached to the Transport Plane. And that tension is given! (The rope between the gliders pulls in both directions, so its tension cancels out when looking at the system of both gliders together.) (Edit: Oops--my bad! We forgot the resistance acting on each glider, which is also given.)
If this is correct, I will go on to the next step. The basics I just learned from you. :) The part I don't understand is now that I have an "single" object of 1400kg, the weight and normal force cancel out and the only horizontal force left is unknown. :O
But the horizontal force is not unknown.
The transport plane has also a weight and normal force that cancel out, the boost it's getting is larger than the tension from the left. But for this one I don't got values. :(
This is true. Luckily, you know that all three are attached and must accelerate together. Thus if you can figure out the acceleration just by analyzing the gliders, that acceleration applies to everything.
I went a little ahead and know that once I have the acceleration, I will be able to plug it into this equation. v^2 = vi^2 + 2a(x-xi)
Excellent. That's the one you need.
The tension would be positive, gravity is pulling down so it would be -mg and acceleration is +a. T-mg=a 1200-1400(9.8)= -12520 would the acceleration be correct, but then why is it negative. :S let me attempt it again. :) Update, I know why, Gravity is a vertical component here, we just need to calculate horizontal. So this should be F=ma 1200 = 1400(a) a= .857 and if this is true, then the distance should be 933.49, converting to 2 sig figs it would be 9.3x10^2
Mentor
Blog Entries: 1
Quote by Heat let me attempt it again. :) Update, I know why, Gravity is a vertical component here, we just need to calculate horizontal. So this should be F=ma 1200 = 1400(a) a= .857 and if this is true, then the distance should be 933.49, converting to 2 sig figs it would be 9.3x10^2
Right approach, but we need the correct net force on the gliders. In my last post, I forgot about the resistive force that acts on the gliders, which is also given.
In light of that, what's the real net force on the gliders? (Also: The towrope tension is 12,000 N not 1200 N.)
The resistive force is 2700. (This is where it confuses me, does this include the transport plane, or just the gliders.) 12000-2700 = 9300 9300 = 1400a a= 6.64 m/s^2
Mentor
Blog Entries: 1
Quote by Heat The resistive force is 2700. (This is where it confuses me, does this include the transport plane, or just the gliders.)
Quote by Heat The mass of each glider is 700kg, and the total resistance (air drag plus friction with the runway) on each may be assumed constant and equal to 2700 N.
got it so the friction should be 2700. friction should double. 12000-2(2700) = 6600 6600 = 1400a a = 4.71
Mentor Blog Entries: 1 Looks good.
Ok, I managed to get the part a, with the kinematic equation. :D Now for the tension in the towrope between the two gliders while they are accelerating for the takeoff. We separate the blocks/objects again, we know their mass of 700kg each, the acceleration, and we find tension. T=ma T = 700(4.71)?
Mentor Blog Entries: 1 When applying Newton's 2nd law, you must consider all the forces acting on the glider.
I thought for the tension (x component) was important only. Gravity is vertical here, and the x component of gravity is 0. Wait, I just remembered the resistance of 2700. I will see if I could work this out. :)
1500N is the the net force. T - 1500 = 700(6.64) T = 3148
Mentor
Blog Entries: 1
Quote by Heat 1500N is the the net force. T - 1500 = 700(6.64) T = 3148
First off, the net force is the sum of all forces acting on the glider. There are two: The tension (T) to the right, and the resistive force (which is given) to the left. Add them and set that sum equal to ma.
How did the acceleration change from what you had calculated earlier???
lol, sorry the acceleration was my mistake. I have scrolled up too much, that I wrote down the wrong acceleration. anyways, I thought net force was the final force. The weight and normal force cancel out, why do they not add up (is it because of opposite direction). Same goes to the object, it's getting pulled to the right, but resistance to the left, hence why I thought it was the difference. So Fnet = 14700 a = 4.71 m = 700 T = 14700+ (700*4.71) T = 17997
Mentor
Blog Entries: 1
Quote by Heat anyways, I thought net force was the final force. The weight and normal force cancel out, why do they not add up (is it because of opposite direction).
The weight (-mg) and normal force (+mg) do add up--to zero! So forget them.
Same goes to the object, it's getting pulled to the right, but resistance to the left, hence why I thought it was the difference.
Yes, the net force is the difference (or sum): +T - F(resistive) = T - 2700. Set that equal to ma.
So Fnet = 14700 a = 4.71 m = 700 T = 14700+ (700*4.71) T = 17997
Where does the 14700 come from? Do over! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8609251976013184, "perplexity": 1029.8913020569862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705310619/warc/CC-MAIN-20130516115510-00043-ip-10-60-113-184.ec2.internal.warc.gz"} |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.