http://link.springer.com/article/10.1007%2Fs10957-012-0221-4
Volume 158, Issue 1, pp. 130-144. Date: 13 Nov 2012 # Closedness of the Solution Map in Quasivariational Inequalities of Ky Fan Type ## Abstract This paper is mainly concerned with the stability analysis of the set-valued solution mapping for a parametric quasivariational inequality of Ky Fan type. Perturbations are considered both on the bifunction and on the constraint map that define the problem. The bifunction is assumed to be either pseudomonotone or quasimonotone, which leads to four different types of solution: two when the bifunction is pseudomonotone, and two for the quasimonotone case. These solution sets are connected to each other through two Minty-type lemmas, in which a very weak form of continuity of the bifunction is employed. Using these results, we establish sufficient conditions that ensure the closedness and the upper semicontinuity of the maps corresponding to the four solution sets.
https://en.wikibooks.org/wiki/Calculus/Vector_calculus_identities
# Calculus/Vector calculus identities In this chapter, numerous identities related to the gradient (${\displaystyle \nabla f}$), directional derivative (${\displaystyle (\mathbf {V} \cdot \nabla )f}$, ${\displaystyle (\mathbf {V} \cdot \nabla )\mathbf {F} }$), divergence (${\displaystyle \nabla \cdot \mathbf {F} }$), Laplacian (${\displaystyle \nabla ^{2}f}$, ${\displaystyle \nabla ^{2}\mathbf {F} }$), and curl (${\displaystyle \nabla \times \mathbf {F} }$) will be derived. ## Notation To simplify the derivation of various vector identities, the following notation will be utilized: • The coordinates ${\displaystyle x,y,z}$ will instead be denoted with ${\displaystyle x_{1},x_{2},x_{3}}$ respectively. • Given an arbitrary vector ${\displaystyle \mathbf {F} }$, then ${\displaystyle F_{i}}$ will denote the ${\displaystyle i^{\text{th}}}$ entry of ${\displaystyle \mathbf {F} }$ where ${\displaystyle i=1,2,3}$. All vectors will be assumed to be denoted by Cartesian basis vectors (${\displaystyle \mathbf {i} ,\mathbf {j} ,\mathbf {k} }$) unless otherwise specified: ${\displaystyle \mathbf {F} =F_{1}\mathbf {i} +F_{2}\mathbf {j} +F_{3}\mathbf {k} }$. • Given an arbitrary expression ${\displaystyle f:\{1,2,3\}\to \mathbb {R} }$ that assigns a real number to each index ${\displaystyle i=1,2,3}$, then ${\displaystyle (i,f(i))}$ will denote the vector whose entries are determined by ${\displaystyle f}$. For example, ${\displaystyle \mathbf {F} =(i,F_{i})}$. • Given an arbitrary expression ${\displaystyle f:\{1,2,3\}\to \mathbb {R} }$ that assigns a real number to each index ${\displaystyle i=1,2,3}$, then ${\displaystyle \sum _{i}f(i)}$ will denote the sum ${\displaystyle f(1)+f(2)+f(3)}$. For example, ${\displaystyle \nabla \cdot \mathbf {F} =\sum _{i}{\frac {\partial F_{i}}{\partial x_{i}}}}$. 
• Given an index variable ${\displaystyle i\in \{1,2,3\}}$, ${\displaystyle i+1}$ will rotate ${\displaystyle i}$ forwards by 1, and ${\displaystyle i+2}$ will rotate ${\displaystyle i}$ forwards by 2 (the rotation wraps around: ${\displaystyle 3+1=1}$). In essence, ${\displaystyle i+1=\left\{{\begin{array}{cc}i+1&(i=1,2)\\1&(i=3)\end{array}}\right.}$ and ${\displaystyle i+2=\left\{{\begin{array}{cc}i+2&(i=1)\\i-1&(i=2,3)\end{array}}\right.}$. For example, ${\displaystyle \mathbf {F} \times \mathbf {G} =(i,F_{i+1}G_{i+2}-F_{i+2}G_{i+1})}$. As an example of using the above notation, consider the problem of expanding the triple cross product ${\displaystyle \mathbf {F} \times (\mathbf {G} \times \mathbf {H} )}$. ${\displaystyle \mathbf {F} \times (\mathbf {G} \times \mathbf {H} )=\mathbf {F} \times (i,G_{i+1}H_{i+2}-G_{i+2}H_{i+1})}$ ${\displaystyle =(i,F_{i+1}(G_{i}H_{i+1}-G_{i+1}H_{i})-F_{i+2}(G_{i+2}H_{i}-G_{i}H_{i+2}))}$ ${\displaystyle =(i,G_{i}(F_{i+1}H_{i+1}+F_{i+2}H_{i+2})-(F_{i+1}G_{i+1}+F_{i+2}G_{i+2})H_{i})}$ ${\displaystyle =(i,G_{i}(F_{i}H_{i}+F_{i+1}H_{i+1}+F_{i+2}H_{i+2})-(F_{i}G_{i}+F_{i+1}G_{i+1}+F_{i+2}G_{i+2})H_{i})}$ ${\displaystyle =(i,G_{i}(\mathbf {F} \cdot \mathbf {H} )-(\mathbf {F} \cdot \mathbf {G} )H_{i})}$ ${\displaystyle =(\mathbf {F} \cdot \mathbf {H} )\mathbf {G} -(\mathbf {F} \cdot \mathbf {G} )\mathbf {H} }$ Therefore: ${\displaystyle \mathbf {F} \times (\mathbf {G} \times \mathbf {H} )=(\mathbf {F} \cdot \mathbf {H} )\mathbf {G} -(\mathbf {F} \cdot \mathbf {G} )\mathbf {H} }$ As another example of using the above notation, consider the scalar triple product ${\displaystyle \mathbf {F} \cdot (\mathbf {G} \times \mathbf {H} )}$ ${\displaystyle \mathbf {F} \cdot (\mathbf {G} \times \mathbf {H} )=\mathbf {F} \cdot (i,G_{i+1}H_{i+2}-G_{i+2}H_{i+1})}$ ${\displaystyle =\sum _{i}F_{i}(G_{i+1}H_{i+2}-G_{i+2}H_{i+1})}$ ${\displaystyle =(\sum _{i}F_{i}G_{i+1}H_{i+2})-(\sum _{i}F_{i}G_{i+2}H_{i+1})}$ The index ${\displaystyle i}$ in the above summations can be shifted by fixed amounts without 
changing the sum. For example, ${\displaystyle \sum _{i}F_{i}G_{i+1}H_{i+2}=\sum _{i}F_{i+1}G_{i+2}H_{i}=\sum _{i}F_{i+2}G_{i}H_{i+1}}$. This allows: ${\displaystyle (\sum _{i}F_{i}G_{i+1}H_{i+2})-(\sum _{i}F_{i}G_{i+2}H_{i+1})=(\sum _{i}F_{i+2}G_{i}H_{i+1})-(\sum _{i}F_{i+1}G_{i}H_{i+2})=(\sum _{i}F_{i+1}G_{i+2}H_{i})-(\sum _{i}F_{i+2}G_{i+1}H_{i})}$ ${\displaystyle \implies \mathbf {F} \cdot (i,G_{i+1}H_{i+2}-G_{i+2}H_{i+1})=\mathbf {G} \cdot (i,H_{i+1}F_{i+2}-H_{i+2}F_{i+1})=\mathbf {H} \cdot (i,F_{i+1}G_{i+2}-F_{i+2}G_{i+1})}$ ${\displaystyle \implies \mathbf {F} \cdot (\mathbf {G} \times \mathbf {H} )=\mathbf {G} \cdot (\mathbf {H} \times \mathbf {F} )=\mathbf {H} \cdot (\mathbf {F} \times \mathbf {G} )}$ which establishes the cyclical property of the scalar triple product. Given scalar fields, ${\displaystyle f}$ and ${\displaystyle g}$, then ${\displaystyle \nabla (f+g)=(\nabla f)+(\nabla g)}$. Derivation ${\displaystyle \nabla (f+g)=(i,{\frac {\partial }{\partial x_{i}}}(f+g))}$ ${\displaystyle =(i,{\frac {\partial f}{\partial x_{i}}}+{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =(i,{\frac {\partial f}{\partial x_{i}}})+(i,{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =(\nabla f)+(\nabla g)}$ Given scalar fields ${\displaystyle f}$ and ${\displaystyle g}$, then ${\displaystyle \nabla (fg)=(\nabla f)g+f(\nabla g)}$. If ${\displaystyle f}$ is a constant ${\displaystyle c}$, then ${\displaystyle \nabla (cg)=c(\nabla g)}$. 
Derivation ${\displaystyle \nabla (fg)=(i,{\frac {\partial }{\partial x_{i}}}(fg))}$ ${\displaystyle =(i,{\frac {\partial f}{\partial x_{i}}}g+f{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =(i,{\frac {\partial f}{\partial x_{i}}})g+f(i,{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =(\nabla f)g+f(\nabla g)}$ Given vector fields ${\displaystyle \mathbf {F} }$ and ${\displaystyle \mathbf {G} }$, then ${\displaystyle \nabla (\mathbf {F} \cdot \mathbf {G} )=((\mathbf {F} \cdot \nabla )\mathbf {G} +\mathbf {F} \times (\nabla \times \mathbf {G} ))+((\mathbf {G} \cdot \nabla )\mathbf {F} +\mathbf {G} \times (\nabla \times \mathbf {F} ))}$ Derivation ${\displaystyle \nabla (\mathbf {F} \cdot \mathbf {G} )=(i,{\frac {\partial }{\partial x_{i}}}(\mathbf {F} \cdot \mathbf {G} ))}$ ${\displaystyle =(i,{\frac {\partial }{\partial x_{i}}}(\sum _{j}F_{j}G_{j}))}$ ${\displaystyle =(i,\sum _{j}({\frac {\partial F_{j}}{\partial x_{i}}}G_{j}+F_{j}{\frac {\partial G_{j}}{\partial x_{i}}}))}$ ${\displaystyle =(i,\sum _{j}F_{j}{\frac {\partial G_{j}}{\partial x_{i}}})+(i,\sum _{j}G_{j}{\frac {\partial F_{j}}{\partial x_{i}}})}$ ${\displaystyle =(i,F_{i}{\frac {\partial G_{i}}{\partial x_{i}}}+F_{i+1}{\frac {\partial G_{i+1}}{\partial x_{i}}}+F_{i+2}{\frac {\partial G_{i+2}}{\partial x_{i}}})}$ ${\displaystyle +(i,G_{i}{\frac {\partial F_{i}}{\partial x_{i}}}+G_{i+1}{\frac {\partial F_{i+1}}{\partial x_{i}}}+G_{i+2}{\frac {\partial F_{i+2}}{\partial x_{i}}})}$ ${\displaystyle =(i,(F_{i}{\frac {\partial G_{i}}{\partial x_{i}}}+F_{i+1}{\frac {\partial G_{i}}{\partial x_{i+1}}}+F_{i+2}{\frac {\partial G_{i}}{\partial x_{i+2}}})+((F_{i+1}{\frac {\partial G_{i+1}}{\partial x_{i}}}-F_{i+1}{\frac {\partial G_{i}}{\partial x_{i+1}}})+(F_{i+2}{\frac {\partial G_{i+2}}{\partial x_{i}}}-F_{i+2}{\frac {\partial G_{i}}{\partial x_{i+2}}})))}$ ${\displaystyle +(i,(G_{i}{\frac {\partial F_{i}}{\partial x_{i}}}+G_{i+1}{\frac {\partial F_{i}}{\partial x_{i+1}}}+G_{i+2}{\frac {\partial 
F_{i}}{\partial x_{i+2}}})+((G_{i+1}{\frac {\partial F_{i+1}}{\partial x_{i}}}-G_{i+1}{\frac {\partial F_{i}}{\partial x_{i+1}}})+(G_{i+2}{\frac {\partial F_{i+2}}{\partial x_{i}}}-G_{i+2}{\frac {\partial F_{i}}{\partial x_{i+2}}})))}$ ${\displaystyle =(i,\sum _{j}F_{j}{\frac {\partial G_{i}}{\partial x_{j}}})+(i,F_{i+1}({\frac {\partial G_{i+1}}{\partial x_{i}}}-{\frac {\partial G_{i}}{\partial x_{i+1}}})-F_{i+2}({\frac {\partial G_{i}}{\partial x_{i+2}}}-{\frac {\partial G_{i+2}}{\partial x_{i}}}))}$ ${\displaystyle +(i,\sum _{j}G_{j}{\frac {\partial F_{i}}{\partial x_{j}}})+(i,G_{i+1}({\frac {\partial F_{i+1}}{\partial x_{i}}}-{\frac {\partial F_{i}}{\partial x_{i+1}}})-G_{i+2}({\frac {\partial F_{i}}{\partial x_{i+2}}}-{\frac {\partial F_{i+2}}{\partial x_{i}}}))}$ ${\displaystyle =(i,(\mathbf {F} \cdot \nabla )G_{i})+\mathbf {F} \times (i,{\frac {\partial G_{i+2}}{\partial x_{i+1}}}-{\frac {\partial G_{i+1}}{\partial x_{i+2}}})+(i,(\mathbf {G} \cdot \nabla )F_{i})+\mathbf {G} \times (i,{\frac {\partial F_{i+2}}{\partial x_{i+1}}}-{\frac {\partial F_{i+1}}{\partial x_{i+2}}})}$ ${\displaystyle =((\mathbf {F} \cdot \nabla )\mathbf {G} +\mathbf {F} \times (\nabla \times \mathbf {G} ))+((\mathbf {G} \cdot \nabla )\mathbf {F} +\mathbf {G} \times (\nabla \times \mathbf {F} ))}$ Given scalar fields ${\displaystyle f_{1},f_{2},\dots ,f_{n}}$ and an ${\displaystyle n}$ input function ${\displaystyle g(y_{1},y_{2},\dots ,y_{n})}$, then ${\displaystyle \nabla (g(f_{1},f_{2},\dots ,f_{n}))={\frac {\partial g}{\partial y_{1}}}{\bigg |}_{y_{1}=f_{1}}(\nabla f_{1})+{\frac {\partial g}{\partial y_{2}}}{\bigg |}_{y_{2}=f_{2}}(\nabla f_{2})+\dots +{\frac {\partial g}{\partial y_{n}}}{\bigg |}_{y_{n}=f_{n}}(\nabla f_{n})}$. 
Derivation ${\displaystyle \nabla (g(f_{1},f_{2},\dots ,f_{n}))=(i,{\frac {\partial }{\partial x_{i}}}(g(f_{1},f_{2},\dots ,f_{n})))}$ ${\displaystyle =(i,{\frac {\partial g}{\partial y_{1}}}{\bigg |}_{y_{1}=f_{1}}{\frac {\partial f_{1}}{\partial x_{i}}}+{\frac {\partial g}{\partial y_{2}}}{\bigg |}_{y_{2}=f_{2}}{\frac {\partial f_{2}}{\partial x_{i}}}+\dots +{\frac {\partial g}{\partial y_{n}}}{\bigg |}_{y_{n}=f_{n}}{\frac {\partial f_{n}}{\partial x_{i}}})}$ ${\displaystyle ={\frac {\partial g}{\partial y_{1}}}{\bigg |}_{y_{1}=f_{1}}(i,{\frac {\partial f_{1}}{\partial x_{i}}})+{\frac {\partial g}{\partial y_{2}}}{\bigg |}_{y_{2}=f_{2}}(i,{\frac {\partial f_{2}}{\partial x_{i}}})+\dots +{\frac {\partial g}{\partial y_{n}}}{\bigg |}_{y_{n}=f_{n}}(i,{\frac {\partial f_{n}}{\partial x_{i}}})}$ ${\displaystyle ={\frac {\partial g}{\partial y_{1}}}{\bigg |}_{y_{1}=f_{1}}(\nabla f_{1})+{\frac {\partial g}{\partial y_{2}}}{\bigg |}_{y_{2}=f_{2}}(\nabla f_{2})+\dots +{\frac {\partial g}{\partial y_{n}}}{\bigg |}_{y_{n}=f_{n}}(\nabla f_{n})}$ ## Directional Derivative Identities Given vector fields ${\displaystyle \mathbf {V} }$ and ${\displaystyle \mathbf {W} }$, and scalar field ${\displaystyle f}$, then ${\displaystyle ((\mathbf {V} +\mathbf {W} )\cdot \nabla )f=(\mathbf {V} \cdot \nabla )f+(\mathbf {W} \cdot \nabla )f}$. When ${\displaystyle \mathbf {F} }$ is a vector field, it is also the case that: ${\displaystyle ((\mathbf {V} +\mathbf {W} )\cdot \nabla )\mathbf {F} =(\mathbf {V} \cdot \nabla )\mathbf {F} +(\mathbf {W} \cdot \nabla )\mathbf {F} }$. 
Derivation For scalar fields: ${\displaystyle ((\mathbf {V} +\mathbf {W} )\cdot \nabla )f=\sum _{i}((V_{i}+W_{i}){\frac {\partial f}{\partial x_{i}}})}$ ${\displaystyle =\sum _{i}(V_{i}{\frac {\partial f}{\partial x_{i}}}+W_{i}{\frac {\partial f}{\partial x_{i}}})}$ ${\displaystyle =\sum _{i}(V_{i}{\frac {\partial f}{\partial x_{i}}})+\sum _{i}(W_{i}{\frac {\partial f}{\partial x_{i}}})}$ ${\displaystyle =(\mathbf {V} \cdot \nabla )f+(\mathbf {W} \cdot \nabla )f}$ For vector fields: ${\displaystyle ((\mathbf {V} +\mathbf {W} )\cdot \nabla )\mathbf {F} =(i,((\mathbf {V} +\mathbf {W} )\cdot \nabla )F_{i})}$ ${\displaystyle =(i,(\mathbf {V} \cdot \nabla )F_{i}+(\mathbf {W} \cdot \nabla )F_{i})}$ ${\displaystyle =(\mathbf {V} \cdot \nabla )\mathbf {F} +(\mathbf {W} \cdot \nabla )\mathbf {F} }$ Given vector field ${\displaystyle \mathbf {V} }$, and scalar fields ${\displaystyle v}$ and ${\displaystyle f}$, then ${\displaystyle ((v\mathbf {V} )\cdot \nabla )f=v((\mathbf {V} \cdot \nabla )f)}$. When ${\displaystyle \mathbf {F} }$ is a vector field, it is also the case that: ${\displaystyle ((v\mathbf {V} )\cdot \nabla )\mathbf {F} =v((\mathbf {V} \cdot \nabla )\mathbf {F} )}$. Derivation For scalar fields: ${\displaystyle ((v\mathbf {V} )\cdot \nabla )f=\sum _{i}(vV_{i}{\frac {\partial f}{\partial x_{i}}})}$ ${\displaystyle =v\sum _{i}(V_{i}{\frac {\partial f}{\partial x_{i}}})}$ ${\displaystyle =v((\mathbf {V} \cdot \nabla )f)}$ For vector fields: ${\displaystyle ((v\mathbf {V} )\cdot \nabla )\mathbf {F} =(i,((v\mathbf {V} )\cdot \nabla )F_{i})}$ ${\displaystyle =(i,v((\mathbf {V} \cdot \nabla )F_{i}))}$ ${\displaystyle =v((\mathbf {V} \cdot \nabla )\mathbf {F} )}$ Given vector field ${\displaystyle \mathbf {V} }$, and scalar fields ${\displaystyle f}$ and ${\displaystyle g}$, then ${\displaystyle (\mathbf {V} \cdot \nabla )(f+g)=(\mathbf {V} \cdot \nabla )f+(\mathbf {V} \cdot \nabla )g}$. 
When ${\displaystyle \mathbf {F} }$ and ${\displaystyle \mathbf {G} }$ are vector fields, it is also the case that: ${\displaystyle (\mathbf {V} \cdot \nabla )(\mathbf {F} +\mathbf {G} )=(\mathbf {V} \cdot \nabla )\mathbf {F} +(\mathbf {V} \cdot \nabla )\mathbf {G} }$. Derivation For scalar fields: ${\displaystyle (\mathbf {V} \cdot \nabla )(f+g)=\sum _{i}(V_{i}{\frac {\partial }{\partial x_{i}}}(f+g))}$ ${\displaystyle =\sum _{i}(V_{i}{\frac {\partial f}{\partial x_{i}}}+V_{i}{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =\sum _{i}(V_{i}{\frac {\partial f}{\partial x_{i}}})+\sum _{i}(V_{i}{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =(\mathbf {V} \cdot \nabla )f+(\mathbf {V} \cdot \nabla )g}$ For vector fields: ${\displaystyle (\mathbf {V} \cdot \nabla )(\mathbf {F} +\mathbf {G} )=(i,(\mathbf {V} \cdot \nabla )(F_{i}+G_{i}))}$ ${\displaystyle =(i,(\mathbf {V} \cdot \nabla )F_{i}+(\mathbf {V} \cdot \nabla )G_{i})}$ ${\displaystyle =(\mathbf {V} \cdot \nabla )\mathbf {F} +(\mathbf {V} \cdot \nabla )\mathbf {G} }$ Given vector field ${\displaystyle \mathbf {V} }$, and scalar fields ${\displaystyle f}$ and ${\displaystyle g}$, then ${\displaystyle (\mathbf {V} \cdot \nabla )(fg)=((\mathbf {V} \cdot \nabla )f)g+f((\mathbf {V} \cdot \nabla )g)}$ If ${\displaystyle \mathbf {G} }$ is a vector field, it is also the case that: ${\displaystyle (\mathbf {V} \cdot \nabla )(f\mathbf {G} )=((\mathbf {V} \cdot \nabla )f)\mathbf {G} +f((\mathbf {V} \cdot \nabla )\mathbf {G} )}$ Derivation For scalar fields: ${\displaystyle (\mathbf {V} \cdot \nabla )(fg)=\sum _{i}V_{i}{\frac {\partial }{\partial x_{i}}}(fg)}$ ${\displaystyle =\sum _{i}V_{i}({\frac {\partial f}{\partial x_{i}}}g+f{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =(\sum _{i}V_{i}{\frac {\partial f}{\partial x_{i}}})g+f(\sum _{i}V_{i}{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =((\mathbf {V} \cdot \nabla )f)g+f((\mathbf {V} \cdot \nabla )g)}$ For vector fields: ${\displaystyle 
(\mathbf {V} \cdot \nabla )(f\mathbf {G} )=(i,(\mathbf {V} \cdot \nabla )(fG_{i}))}$ ${\displaystyle =(i,((\mathbf {V} \cdot \nabla )f)G_{i}+f((\mathbf {V} \cdot \nabla )G_{i}))}$ ${\displaystyle =((\mathbf {V} \cdot \nabla )f)\mathbf {G} +f((\mathbf {V} \cdot \nabla )\mathbf {G} )}$ Given vector fields ${\displaystyle \mathbf {V} }$, ${\displaystyle \mathbf {F} }$, and ${\displaystyle \mathbf {G} }$, then ${\displaystyle (\mathbf {V} \cdot \nabla )(\mathbf {F} \cdot \mathbf {G} )=((\mathbf {V} \cdot \nabla )\mathbf {F} )\cdot \mathbf {G} +\mathbf {F} \cdot ((\mathbf {V} \cdot \nabla )\mathbf {G} )}$ Derivation ${\displaystyle (\mathbf {V} \cdot \nabla )(\mathbf {F} \cdot \mathbf {G} )=\sum _{i}V_{i}{\frac {\partial }{\partial x_{i}}}(\mathbf {F} \cdot \mathbf {G} )}$ ${\displaystyle =\sum _{i}V_{i}{\frac {\partial }{\partial x_{i}}}\sum _{j}(F_{j}G_{j})}$ ${\displaystyle =\sum _{i}\sum _{j}V_{i}{\frac {\partial }{\partial x_{i}}}(F_{j}G_{j})}$ ${\displaystyle =\sum _{i}\sum _{j}V_{i}({\frac {\partial F_{j}}{\partial x_{i}}}G_{j}+F_{j}{\frac {\partial G_{j}}{\partial x_{i}}})}$ ${\displaystyle =\sum _{j}((\sum _{i}V_{i}{\frac {\partial F_{j}}{\partial x_{i}}})G_{j})+\sum _{j}(F_{j}(\sum _{i}V_{i}{\frac {\partial G_{j}}{\partial x_{i}}}))}$ ${\displaystyle =\sum _{j}(((\mathbf {V} \cdot \nabla )F_{j})G_{j})+\sum _{j}(F_{j}((\mathbf {V} \cdot \nabla )G_{j}))}$ ${\displaystyle =((\mathbf {V} \cdot \nabla )\mathbf {F} )\cdot \mathbf {G} +\mathbf {F} \cdot ((\mathbf {V} \cdot \nabla )\mathbf {G} )}$ Given vector fields ${\displaystyle \mathbf {V} }$, ${\displaystyle \mathbf {F} }$, and ${\displaystyle \mathbf {G} }$, then ${\displaystyle (\mathbf {V} \cdot \nabla )(\mathbf {F} \times \mathbf {G} )=((\mathbf {V} \cdot \nabla )\mathbf {F} )\times \mathbf {G} +\mathbf {F} \times ((\mathbf {V} \cdot \nabla )\mathbf {G} )}$ Derivation ${\displaystyle (\mathbf {V} \cdot \nabla )(\mathbf {F} \times \mathbf {G} )=(i,(\mathbf {V} \cdot \nabla )(F_{i+1}G_{i+2}-F_{i+2}G_{i+1}))}$ 
${\displaystyle =(i,\sum _{j}V_{j}{\frac {\partial }{\partial x_{j}}}(F_{i+1}G_{i+2}-F_{i+2}G_{i+1}))}$ ${\displaystyle =(i,\sum _{j}V_{j}(({\frac {\partial F_{i+1}}{\partial x_{j}}}G_{i+2}+F_{i+1}{\frac {\partial G_{i+2}}{\partial x_{j}}})-({\frac {\partial F_{i+2}}{\partial x_{j}}}G_{i+1}+F_{i+2}{\frac {\partial G_{i+1}}{\partial x_{j}}})))}$ ${\displaystyle =(i,(\sum _{j}V_{j}{\frac {\partial F_{i+1}}{\partial x_{j}}})G_{i+2}-(\sum _{j}V_{j}{\frac {\partial F_{i+2}}{\partial x_{j}}})G_{i+1})+(i,F_{i+1}(\sum _{j}V_{j}{\frac {\partial G_{i+2}}{\partial x_{j}}})-F_{i+2}(\sum _{j}V_{j}{\frac {\partial G_{i+1}}{\partial x_{j}}}))}$ ${\displaystyle =(i,((\mathbf {V} \cdot \nabla )F_{i+1})G_{i+2}-((\mathbf {V} \cdot \nabla )F_{i+2})G_{i+1})+(i,F_{i+1}((\mathbf {V} \cdot \nabla )G_{i+2})-F_{i+2}((\mathbf {V} \cdot \nabla )G_{i+1}))}$ ${\displaystyle =((\mathbf {V} \cdot \nabla )\mathbf {F} )\times \mathbf {G} +\mathbf {F} \times ((\mathbf {V} \cdot \nabla )\mathbf {G} )}$ ## Divergence Identities Given vector fields ${\displaystyle \mathbf {F} }$ and ${\displaystyle \mathbf {G} }$, then ${\displaystyle \nabla \cdot (\mathbf {F} +\mathbf {G} )=(\nabla \cdot \mathbf {F} )+(\nabla \cdot \mathbf {G} )}$. Derivation ${\displaystyle \nabla \cdot (\mathbf {F} +\mathbf {G} )=\sum _{i}({\frac {\partial }{\partial x_{i}}}(F_{i}+G_{i}))}$ ${\displaystyle =(\sum _{i}{\frac {\partial F_{i}}{\partial x_{i}}})+(\sum _{i}{\frac {\partial G_{i}}{\partial x_{i}}})}$ ${\displaystyle =(\nabla \cdot \mathbf {F} )+(\nabla \cdot \mathbf {G} )}$ Given a scalar field ${\displaystyle f}$ and a vector field ${\displaystyle \mathbf {G} }$, then ${\displaystyle \nabla \cdot (f\mathbf {G} )=(\nabla f)\cdot \mathbf {G} +f(\nabla \cdot \mathbf {G} )}$. If ${\displaystyle f}$ is a constant ${\displaystyle c}$, then ${\displaystyle \nabla \cdot (c\mathbf {G} )=c(\nabla \cdot \mathbf {G} )}$. 
If ${\displaystyle \mathbf {G} }$ is a constant ${\displaystyle \mathbf {C} }$, then ${\displaystyle \nabla \cdot (f\mathbf {C} )=(\nabla f)\cdot \mathbf {C} }$. Derivation ${\displaystyle \nabla \cdot (f\mathbf {G} )=\sum _{i}{\frac {\partial }{\partial x_{i}}}(fG_{i})}$ ${\displaystyle =\sum _{i}({\frac {\partial f}{\partial x_{i}}}G_{i}+f{\frac {\partial G_{i}}{\partial x_{i}}})}$ ${\displaystyle =\sum _{i}({\frac {\partial f}{\partial x_{i}}}G_{i})+f\sum _{i}{\frac {\partial G_{i}}{\partial x_{i}}}}$ ${\displaystyle =(\nabla f)\cdot \mathbf {G} +f(\nabla \cdot \mathbf {G} )}$ Given vector fields ${\displaystyle \mathbf {F} }$ and ${\displaystyle \mathbf {G} }$, then ${\displaystyle \nabla \cdot (\mathbf {F} \times \mathbf {G} )=(\nabla \times \mathbf {F} )\cdot \mathbf {G} -\mathbf {F} \cdot (\nabla \times \mathbf {G} )}$. Derivation ${\displaystyle \nabla \cdot (\mathbf {F} \times \mathbf {G} )=\sum _{i}{\frac {\partial }{\partial x_{i}}}(F_{i+1}G_{i+2}-F_{i+2}G_{i+1})}$ ${\displaystyle =\sum _{i}(({\frac {\partial F_{i+1}}{\partial x_{i}}}G_{i+2}+F_{i+1}{\frac {\partial G_{i+2}}{\partial x_{i}}})-({\frac {\partial F_{i+2}}{\partial x_{i}}}G_{i+1}+F_{i+2}{\frac {\partial G_{i+1}}{\partial x_{i}}}))}$ ${\displaystyle =\sum _{i}(({\frac {\partial F_{i+2}}{\partial x_{i+1}}}G_{i}+F_{i}{\frac {\partial G_{i+1}}{\partial x_{i+2}}})-({\frac {\partial F_{i+1}}{\partial x_{i+2}}}G_{i}+F_{i}{\frac {\partial G_{i+2}}{\partial x_{i+1}}}))}$ ${\displaystyle =\sum _{i}(({\frac {\partial F_{i+2}}{\partial x_{i+1}}}-{\frac {\partial F_{i+1}}{\partial x_{i+2}}})G_{i}-F_{i}({\frac {\partial G_{i+2}}{\partial x_{i+1}}}-{\frac {\partial G_{i+1}}{\partial x_{i+2}}}))}$ ${\displaystyle =\sum _{i}(\nabla \times \mathbf {F} )_{i}G_{i}-\sum _{i}F_{i}(\nabla \times \mathbf {G} )_{i}}$ ${\displaystyle =(\nabla \times \mathbf {F} )\cdot \mathbf {G} -\mathbf {F} \cdot (\nabla \times \mathbf {G} )}$ In the above derivation, the third equality is established by cycling the terms inside a 
sum. For example: ${\displaystyle \sum _{i}{\frac {\partial F_{i+1}}{\partial x_{i}}}G_{i+2}=\sum _{i}{\frac {\partial F_{i+2}}{\partial x_{i+1}}}G_{i}}$ by replacing ${\displaystyle i}$ with ${\displaystyle i+1}$. Different terms can be cycled independently: ${\displaystyle \sum _{i}({\frac {\partial F_{i+1}}{\partial x_{i}}}G_{i+2}+F_{i+1}{\frac {\partial G_{i+2}}{\partial x_{i}}})=\sum _{i}({\frac {\partial F_{i+2}}{\partial x_{i+1}}}G_{i}+F_{i}{\frac {\partial G_{i+1}}{\partial x_{i+2}}})}$ The following identity is a very important property regarding vector fields which are the curl of another vector field. A vector field which is the curl of another vector field is divergence free. Given vector field ${\displaystyle \mathbf {F} }$, then ${\displaystyle \nabla \cdot (\nabla \times \mathbf {F} )=0}$ Derivation ${\displaystyle \nabla \cdot (\nabla \times \mathbf {F} )=\nabla \cdot (i,{\frac {\partial F_{i+2}}{\partial x_{i+1}}}-{\frac {\partial F_{i+1}}{\partial x_{i+2}}})}$ ${\displaystyle =\sum _{i}{\frac {\partial }{\partial x_{i}}}({\frac {\partial F_{i+2}}{\partial x_{i+1}}}-{\frac {\partial F_{i+1}}{\partial x_{i+2}}})}$ ${\displaystyle =\sum _{i}({\frac {\partial ^{2}F_{i+2}}{\partial x_{i}\partial x_{i+1}}}-{\frac {\partial ^{2}F_{i+1}}{\partial x_{i}\partial x_{i+2}}})}$ ${\displaystyle =\sum _{i}{\frac {\partial ^{2}F_{i+2}}{\partial x_{i}\partial x_{i+1}}}-\sum _{i}{\frac {\partial ^{2}F_{i+1}}{\partial x_{i+2}\partial x_{i}}}}$ ${\displaystyle =\sum _{i}{\frac {\partial ^{2}F_{i+2}}{\partial x_{i}\partial x_{i+1}}}-\sum _{i}{\frac {\partial ^{2}F_{i+2}}{\partial x_{i}\partial x_{i+1}}}}$ ${\displaystyle =0}$ ## Laplacian Identities Given scalar fields ${\displaystyle f}$ and ${\displaystyle g}$, then ${\displaystyle \nabla ^{2}(f+g)=(\nabla ^{2}f)+(\nabla ^{2}g)}$ When ${\displaystyle \mathbf {F} }$ and ${\displaystyle \mathbf {G} }$ are vector fields, it is also the case that: ${\displaystyle \nabla ^{2}(\mathbf {F} +\mathbf {G} )=(\nabla 
^{2}\mathbf {F} )+(\nabla ^{2}\mathbf {G} )}$ Derivation For scalar fields: ${\displaystyle \nabla ^{2}(f+g)=\sum _{i}{\frac {\partial ^{2}}{\partial x_{i}^{2}}}(f+g)}$ ${\displaystyle =\sum _{i}({\frac {\partial ^{2}f}{\partial x_{i}^{2}}}+{\frac {\partial ^{2}g}{\partial x_{i}^{2}}})}$ ${\displaystyle =(\sum _{i}{\frac {\partial ^{2}f}{\partial x_{i}^{2}}})+(\sum _{i}{\frac {\partial ^{2}g}{\partial x_{i}^{2}}})}$ ${\displaystyle =(\nabla ^{2}f)+(\nabla ^{2}g)}$ For vector fields: ${\displaystyle \nabla ^{2}(\mathbf {F} +\mathbf {G} )=(i,\nabla ^{2}(F_{i}+G_{i}))}$ ${\displaystyle =(i,(\nabla ^{2}F_{i})+(\nabla ^{2}G_{i}))}$ ${\displaystyle =(\nabla ^{2}\mathbf {F} )+(\nabla ^{2}\mathbf {G} )}$ Given scalar fields ${\displaystyle f}$ and ${\displaystyle g}$, then ${\displaystyle \nabla ^{2}(fg)=(\nabla ^{2}f)g+2(\nabla f)\cdot (\nabla g)+f(\nabla ^{2}g)}$ When ${\displaystyle \mathbf {G} }$ is a vector field, it is also the case that ${\displaystyle \nabla ^{2}(f\mathbf {G} )=(\nabla ^{2}f)\mathbf {G} +2((\nabla f)\cdot \nabla )\mathbf {G} +f(\nabla ^{2}\mathbf {G} )}$ Derivation For scalar fields: ${\displaystyle \nabla ^{2}(fg)=\sum _{i}{\frac {\partial ^{2}}{\partial x_{i}^{2}}}(fg)}$ ${\displaystyle =\sum _{i}{\frac {\partial }{\partial x_{i}}}({\frac {\partial f}{\partial x_{i}}}g+f{\frac {\partial g}{\partial x_{i}}})}$ ${\displaystyle =\sum _{i}({\frac {\partial ^{2}f}{\partial x_{i}^{2}}}g+2{\frac {\partial f}{\partial x_{i}}}{\frac {\partial g}{\partial x_{i}}}+f{\frac {\partial ^{2}g}{\partial x_{i}^{2}}})}$ ${\displaystyle =(\sum _{i}{\frac {\partial ^{2}f}{\partial x_{i}^{2}}})g+2\sum _{i}({\frac {\partial f}{\partial x_{i}}}{\frac {\partial g}{\partial x_{i}}})+f(\sum _{i}{\frac {\partial ^{2}g}{\partial x_{i}^{2}}})}$ ${\displaystyle =(\nabla ^{2}f)g+2(\nabla f)\cdot (\nabla g)+f(\nabla ^{2}g)}$ For vector fields: ${\displaystyle \nabla ^{2}(f\mathbf {G} )=(i,\nabla ^{2}(fG_{i}))}$ ${\displaystyle =(i,(\nabla ^{2}f)G_{i}+2(\nabla f)\cdot (\nabla 
G_{i})+f(\nabla ^{2}G_{i}))}$ ${\displaystyle =(i,(\nabla ^{2}f)G_{i})+2(i,((\nabla f)\cdot \nabla )G_{i})+(i,f(\nabla ^{2}G_{i}))}$ ${\displaystyle =(\nabla ^{2}f)\mathbf {G} +2((\nabla f)\cdot \nabla )\mathbf {G} +f(\nabla ^{2}\mathbf {G} )}$ ## Curl Identities Given vector fields ${\displaystyle \mathbf {F} }$ and ${\displaystyle \mathbf {G} }$, then ${\displaystyle \nabla \times (\mathbf {F} +\mathbf {G} )=(\nabla \times \mathbf {F} )+(\nabla \times \mathbf {G} )}$ Derivation ${\displaystyle \nabla \times (\mathbf {F} +\mathbf {G} )=(i,{\frac {\partial }{\partial x_{i+1}}}(F_{i+2}+G_{i+2})-{\frac {\partial }{\partial x_{i+2}}}(F_{i+1}+G_{i+1}))}$ ${\displaystyle =(i,({\frac {\partial F_{i+2}}{\partial x_{i+1}}}+{\frac {\partial G_{i+2}}{\partial x_{i+1}}})-({\frac {\partial F_{i+1}}{\partial x_{i+2}}}+{\frac {\partial G_{i+1}}{\partial x_{i+2}}}))}$ ${\displaystyle =(i,{\frac {\partial F_{i+2}}{\partial x_{i+1}}}-{\frac {\partial F_{i+1}}{\partial x_{i+2}}})+(i,{\frac {\partial G_{i+2}}{\partial x_{i+1}}}-{\frac {\partial G_{i+1}}{\partial x_{i+2}}})}$ ${\displaystyle =(\nabla \times \mathbf {F} )+(\nabla \times \mathbf {G} )}$ Given scalar field ${\displaystyle f}$ and vector field ${\displaystyle \mathbf {G} }$, then ${\displaystyle \nabla \times (f\mathbf {G} )=(\nabla f)\times \mathbf {G} +f(\nabla \times \mathbf {G} )}$. If ${\displaystyle f}$ is a constant ${\displaystyle c}$, then ${\displaystyle \nabla \times (c\mathbf {G} )=c(\nabla \times \mathbf {G} )}$. If ${\displaystyle \mathbf {G} }$ is a constant ${\displaystyle \mathbf {C} }$, then ${\displaystyle \nabla \times (f\mathbf {C} )=(\nabla f)\times \mathbf {C} }$. 
Derivation ${\displaystyle \nabla \times (f\mathbf {G} )=(i,{\frac {\partial }{\partial x_{i+1}}}(fG_{i+2})-{\frac {\partial }{\partial x_{i+2}}}(fG_{i+1}))}$ ${\displaystyle =(i,({\frac {\partial f}{\partial x_{i+1}}}G_{i+2}+f{\frac {\partial G_{i+2}}{\partial x_{i+1}}})-({\frac {\partial f}{\partial x_{i+2}}}G_{i+1}+f{\frac {\partial G_{i+1}}{\partial x_{i+2}}}))}$ ${\displaystyle =(i,{\frac {\partial f}{\partial x_{i+1}}}G_{i+2}-{\frac {\partial f}{\partial x_{i+2}}}G_{i+1})+f(i,{\frac {\partial G_{i+2}}{\partial x_{i+1}}}-{\frac {\partial G_{i+1}}{\partial x_{i+2}}})}$ ${\displaystyle =(\nabla f)\times \mathbf {G} +f(\nabla \times \mathbf {G} )}$ Given vector fields ${\displaystyle \mathbf {F} }$ and ${\displaystyle \mathbf {G} }$, then ${\displaystyle \nabla \times (\mathbf {F} \times \mathbf {G} )=((\nabla \cdot \mathbf {G} )\mathbf {F} +(\mathbf {G} \cdot \nabla )\mathbf {F} )-((\nabla \cdot \mathbf {F} )\mathbf {G} +(\mathbf {F} \cdot \nabla )\mathbf {G} )}$ Derivation ${\displaystyle \nabla \times (\mathbf {F} \times \mathbf {G} )=\nabla \times (i,F_{i+1}G_{i+2}-F_{i+2}G_{i+1})}$ ${\displaystyle =(i,{\frac {\partial }{\partial x_{i+1}}}(F_{i}G_{i+1}-F_{i+1}G_{i})-{\frac {\partial }{\partial x_{i+2}}}(F_{i+2}G_{i}-F_{i}G_{i+2}))}$ ${\displaystyle =(i,(({\frac {\partial F_{i}}{\partial x_{i+1}}}G_{i+1}+F_{i}{\frac {\partial G_{i+1}}{\partial x_{i+1}}})-({\frac {\partial F_{i+1}}{\partial x_{i+1}}}G_{i}+F_{i+1}{\frac {\partial G_{i}}{\partial x_{i+1}}}))-(({\frac {\partial F_{i+2}}{\partial x_{i+2}}}G_{i}+F_{i+2}{\frac {\partial G_{i}}{\partial x_{i+2}}})-({\frac {\partial F_{i}}{\partial x_{i+2}}}G_{i+2}+F_{i}{\frac {\partial G_{i+2}}{\partial x_{i+2}}})))}$ ${\displaystyle =(i,F_{i}({\frac {\partial G_{i+1}}{\partial x_{i+1}}}+{\frac {\partial G_{i+2}}{\partial x_{i+2}}})-({\frac {\partial F_{i+1}}{\partial x_{i+1}}}+{\frac {\partial F_{i+2}}{\partial x_{i+2}}})G_{i}-(F_{i+1}{\frac {\partial G_{i}}{\partial x_{i+1}}}+F_{i+2}{\frac {\partial G_{i}}{\partial 
x_{i+2}}})+({\frac {\partial F_{i}}{\partial x_{i+1}}}G_{i+1}+{\frac {\partial F_{i}}{\partial x_{i+2}}}G_{i+2}))}$ ${\displaystyle =(i,F_{i}({\frac {\partial G_{i}}{\partial x_{i}}}+{\frac {\partial G_{i+1}}{\partial x_{i+1}}}+{\frac {\partial G_{i+2}}{\partial x_{i+2}}})-({\frac {\partial F_{i}}{\partial x_{i}}}+{\frac {\partial F_{i+1}}{\partial x_{i+1}}}+{\frac {\partial F_{i+2}}{\partial x_{i+2}}})G_{i}}$ ${\displaystyle -(F_{i}{\frac {\partial G_{i}}{\partial x_{i}}}+F_{i+1}{\frac {\partial G_{i}}{\partial x_{i+1}}}+F_{i+2}{\frac {\partial G_{i}}{\partial x_{i+2}}})+({\frac {\partial F_{i}}{\partial x_{i}}}G_{i}+{\frac {\partial F_{i}}{\partial x_{i+1}}}G_{i+1}+{\frac {\partial F_{i}}{\partial x_{i+2}}}G_{i+2}))}$ ${\displaystyle =(i,F_{i}(\nabla \cdot \mathbf {G} )-(\nabla \cdot \mathbf {F} )G_{i}-(\mathbf {F} \cdot \nabla )G_{i}+(\mathbf {G} \cdot \nabla )F_{i})}$ ${\displaystyle =(\nabla \cdot \mathbf {G} )\mathbf {F} -(\nabla \cdot \mathbf {F} )\mathbf {G} -(\mathbf {F} \cdot \nabla )\mathbf {G} +(\mathbf {G} \cdot \nabla )\mathbf {F} }$ ${\displaystyle =((\nabla \cdot \mathbf {G} )\mathbf {F} +(\mathbf {G} \cdot \nabla )\mathbf {F} )-((\nabla \cdot \mathbf {F} )\mathbf {G} +(\mathbf {F} \cdot \nabla )\mathbf {G} )}$ The following identity is a very important property of vector fields which are the gradient of a scalar field. A vector field which is the gradient of a scalar field is always irrotational. 
Given scalar field ${\displaystyle f}$, then ${\displaystyle \nabla \times (\nabla f)=\mathbf {0} }$ Derivation ${\displaystyle \nabla \times (\nabla f)=\nabla \times (i,{\frac {\partial f}{\partial x_{i}}})}$ ${\displaystyle =(i,{\frac {\partial }{\partial x_{i+1}}}({\frac {\partial f}{\partial x_{i+2}}})-{\frac {\partial }{\partial x_{i+2}}}({\frac {\partial f}{\partial x_{i+1}}}))}$ ${\displaystyle =(i,{\frac {\partial ^{2}f}{\partial x_{i+1}\partial x_{i+2}}}-{\frac {\partial ^{2}f}{\partial x_{i+2}\partial x_{i+1}}})}$ ${\displaystyle =(i,0)}$ ${\displaystyle =\mathbf {0} }$ The following identity is a complex, yet popular identity used for deriving the Helmholtz decomposition theorem. Given vector field ${\displaystyle \mathbf {F} }$, then ${\displaystyle \nabla \times (\nabla \times \mathbf {F} )=\nabla (\nabla \cdot \mathbf {F} )-\nabla ^{2}\mathbf {F} }$ Derivation ${\displaystyle \nabla \times (\nabla \times \mathbf {F} )=\nabla \times (i,{\frac {\partial F_{i+2}}{\partial x_{i+1}}}-{\frac {\partial F_{i+1}}{\partial x_{i+2}}})}$ ${\displaystyle =(i,{\frac {\partial }{\partial x_{i+1}}}({\frac {\partial F_{i+1}}{\partial x_{i}}}-{\frac {\partial F_{i}}{\partial x_{i+1}}})-{\frac {\partial }{\partial x_{i+2}}}({\frac {\partial F_{i}}{\partial x_{i+2}}}-{\frac {\partial F_{i+2}}{\partial x_{i}}}))}$ ${\displaystyle =(i,({\frac {\partial ^{2}F_{i+1}}{\partial x_{i}\partial x_{i+1}}}+{\frac {\partial ^{2}F_{i+2}}{\partial x_{i}\partial x_{i+2}}})-({\frac {\partial ^{2}F_{i}}{\partial x_{i+1}^{2}}}+{\frac {\partial ^{2}F_{i}}{\partial x_{i+2}^{2}}}))}$ ${\displaystyle =(i,({\frac {\partial ^{2}F_{i}}{\partial x_{i}\partial x_{i}}}+{\frac {\partial ^{2}F_{i+1}}{\partial x_{i}\partial x_{i+1}}}+{\frac {\partial ^{2}F_{i+2}}{\partial x_{i}\partial x_{i+2}}})-({\frac {\partial ^{2}F_{i}}{\partial x_{i}^{2}}}+{\frac {\partial ^{2}F_{i}}{\partial x_{i+1}^{2}}}+{\frac {\partial ^{2}F_{i}}{\partial x_{i+2}^{2}}}))}$ ${\displaystyle =(i,{\frac {\partial }{\partial 
x_{i}}}({\frac {\partial F_{i}}{\partial x_{i}}}+{\frac {\partial F_{i+1}}{\partial x_{i+1}}}+{\frac {\partial F_{i+2}}{\partial x_{i+2}}})-\nabla ^{2}F_{i})}$ ${\displaystyle =(i,{\frac {\partial }{\partial x_{i}}}(\nabla \cdot \mathbf {F} )-\nabla ^{2}F_{i})}$ ${\displaystyle =\nabla (\nabla \cdot \mathbf {F} )-\nabla ^{2}\mathbf {F} }$

## Basis Vector Identities

The Cartesian basis vectors ${\displaystyle \mathbf {i} }$, ${\displaystyle \mathbf {j} }$, and ${\displaystyle \mathbf {k} }$ are the same at all points in space. However, in other coordinate systems like cylindrical coordinates or spherical coordinates, the basis vectors can change with respect to position. In cylindrical coordinates, the unit-length mutually perpendicular basis vectors are ${\displaystyle {\hat {\mathbf {\rho } }}=(\cos \phi )\mathbf {i} +(\sin \phi )\mathbf {j} }$, ${\displaystyle {\hat {\mathbf {\phi } }}=(-\sin \phi )\mathbf {i} +(\cos \phi )\mathbf {j} }$, and ${\displaystyle {\hat {\mathbf {z} }}=\mathbf {k} }$ at position ${\displaystyle (\rho ,\phi ,z)}$ which corresponds to Cartesian coordinates ${\displaystyle (\rho \cos \phi ,\rho \sin \phi ,z)}$. In spherical coordinates, the unit-length mutually perpendicular basis vectors are ${\displaystyle {\hat {\mathbf {r} }}=(\sin \theta \cos \phi )\mathbf {i} +(\sin \theta \sin \phi )\mathbf {j} +(\cos \theta )\mathbf {k} }$, ${\displaystyle {\hat {\mathbf {\theta } }}=(\cos \theta \cos \phi )\mathbf {i} +(\cos \theta \sin \phi )\mathbf {j} +(-\sin \theta )\mathbf {k} }$, and ${\displaystyle {\hat {\mathbf {\phi } }}=(-\sin \phi )\mathbf {i} +(\cos \phi )\mathbf {j} }$ at position ${\displaystyle (r,\theta ,\phi )}$ which corresponds to Cartesian coordinates ${\displaystyle (r\sin \theta \cos \phi ,r\sin \theta \sin \phi ,r\cos \theta )}$. It should be noted that ${\displaystyle {\hat {\mathbf {\phi } }}}$ is the same in both cylindrical and spherical coordinates.
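The basis vectors listed above can be spot-checked numerically. The following sketch (an added check, not part of the original derivations) verifies that each triple is orthonormal at a few sample angles:

```python
import math

def cyl_basis(phi):
    # Cylindrical basis vectors rho_hat, phi_hat, z_hat in Cartesian components
    rho_hat = (math.cos(phi), math.sin(phi), 0.0)
    phi_hat = (-math.sin(phi), math.cos(phi), 0.0)
    z_hat = (0.0, 0.0, 1.0)
    return rho_hat, phi_hat, z_hat

def sph_basis(theta, phi):
    # Spherical basis vectors r_hat, theta_hat, phi_hat in Cartesian components
    r_hat = (math.sin(theta) * math.cos(phi), math.sin(theta) * math.sin(phi), math.cos(theta))
    theta_hat = (math.cos(theta) * math.cos(phi), math.cos(theta) * math.sin(phi), -math.sin(theta))
    phi_hat = (-math.sin(phi), math.cos(phi), 0.0)
    return r_hat, theta_hat, phi_hat

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

for theta in (0.3, 1.1, 2.0):
    for phi in (0.2, 1.7, 4.0):
        for basis in (cyl_basis(phi), sph_basis(theta, phi)):
            a, b, c = basis
            for u in basis:
                assert abs(dot(u, u) - 1) < 1e-12   # unit length
            assert abs(dot(a, b)) < 1e-12           # mutually perpendicular
            assert abs(dot(a, c)) < 1e-12
            assert abs(dot(b, c)) < 1e-12
print("orthonormality checks passed")
```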
This section will compute the directional derivative and Laplacian for the following vectors since these quantities do not immediately follow from the formulas established for the directional derivative and Laplacian for scalar fields in various coordinate systems. ${\displaystyle {\hat {\mathbf {\rho } }}}$ which is the unit length vector that points away from the z-axis and is perpendicular to the z-axis. ${\displaystyle {\hat {\mathbf {\phi } }}}$ which is the unit length vector that points around the z-axis in a counterclockwise direction and is both parallel to the xy-plane and perpendicular to the position vector projected onto the xy-plane. ${\displaystyle {\hat {\mathbf {r} }}}$ which is the unit length vector that points away from the origin. ${\displaystyle {\hat {\mathbf {\theta } }}}$ which is the unit length vector that is perpendicular to the position vector and points "south" on the surface of a sphere that is centered on the origin. The following quantities are also important: ${\displaystyle \rho }$ which is the perpendicular distance from the z-axis. ${\displaystyle \phi }$ which is the azimuth: the counterclockwise angle of the position vector relative to the x-axis after being projected onto the xy-plane. ${\displaystyle r}$ which is the distance from the origin. ${\displaystyle \theta }$ which is the angle of the position vector to the z-axis. ### Vector ${\displaystyle {\hat {\mathbf {\rho } }}}$ ${\displaystyle {\hat {\mathbf {\rho } }}}$ only changes with respect to ${\displaystyle \phi }$: ${\displaystyle {\frac {\partial {\hat {\mathbf {\rho } }}}{\partial \phi }}={\hat {\mathbf {\phi } }}}$. 
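The derivative relation just stated can be confirmed numerically by a central finite difference (an added sanity check, not part of the original text):

```python
import math

def rho_hat(phi):
    # Cartesian components of rho_hat at azimuth phi
    return (math.cos(phi), math.sin(phi), 0.0)

def phi_hat(phi):
    # Cartesian components of phi_hat at azimuth phi
    return (-math.sin(phi), math.cos(phi), 0.0)

h = 1e-6
for phi in (0.1, 0.9, 2.5, 5.0):
    # central-difference approximation of d(rho_hat)/dphi
    d = [(a - b) / (2 * h) for a, b in zip(rho_hat(phi + h), rho_hat(phi - h))]
    for di, pi in zip(d, phi_hat(phi)):
        assert abs(di - pi) < 1e-8
print("d(rho_hat)/dphi == phi_hat (numerically)")
```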
Given vector field ${\displaystyle \mathbf {V} =\mathbf {V} _{\perp }+v_{\phi }{\hat {\mathbf {\phi } }}}$ where ${\displaystyle \mathbf {V} _{\perp }}$ is always orthogonal to ${\displaystyle {\hat {\mathbf {\phi } }}}$, then ${\displaystyle (\mathbf {V} \cdot \nabla ){\hat {\mathbf {\rho } }}={\frac {v_{\phi }}{\rho }}{\hat {\mathbf {\phi } }}}$ Derivation Using cylindrical coordinates, let ${\displaystyle \mathbf {V} _{\perp }=v_{\rho }{\hat {\mathbf {\rho } }}+v_{z}{\hat {\mathbf {z} }}}$ The cylindrical coordinate version of the directional derivative gives: ${\displaystyle (\mathbf {V} \cdot \nabla ){\hat {\mathbf {\rho } }}=((v_{\rho }{\hat {\mathbf {\rho } }}+v_{\phi }{\hat {\mathbf {\phi } }}+v_{z}{\hat {\mathbf {z} }})\cdot \nabla ){\hat {\mathbf {\rho } }}}$ ${\displaystyle =v_{\rho }{\frac {\partial {\hat {\mathbf {\rho } }}}{\partial \rho }}+{\frac {v_{\phi }}{\rho }}{\frac {\partial {\hat {\mathbf {\rho } }}}{\partial \phi }}+v_{z}{\frac {\partial {\hat {\mathbf {\rho } }}}{\partial z}}}$ ${\displaystyle =v_{\rho }\mathbf {0} +{\frac {v_{\phi }}{\rho }}{\hat {\mathbf {\phi } }}+v_{z}\mathbf {0} }$ ${\displaystyle ={\frac {v_{\phi }}{\rho }}{\hat {\mathbf {\phi } }}}$ ${\displaystyle \nabla ^{2}{\hat {\mathbf {\rho } }}=-{\frac {1}{\rho ^{2}}}{\hat {\mathbf {\rho } }}}$ Derivation Using the cylindrical coordinate version of the Laplacian, ${\displaystyle \nabla ^{2}{\hat {\mathbf {\rho } }}={\frac {1}{\rho }}{\frac {\partial }{\partial \rho }}(\rho {\frac {\partial {\hat {\mathbf {\rho } }}}{\partial \rho }})+{\frac {1}{\rho ^{2}}}{\frac {\partial ^{2}{\hat {\mathbf {\rho } }}}{\partial \phi ^{2}}}+{\frac {\partial ^{2}{\hat {\mathbf {\rho } }}}{\partial z^{2}}}}$ ${\displaystyle ={\frac {1}{\rho }}{\frac {\partial }{\partial \rho }}(\rho \mathbf {0} )+{\frac {1}{\rho ^{2}}}{\frac {\partial {\hat {\mathbf {\phi } }}}{\partial \phi }}+{\frac {\partial \mathbf {0} }{\partial z}}}$ ${\displaystyle =-{\frac {1}{\rho ^{2}}}{\hat {\mathbf {\rho } }}}$ ### 
Vector ${\displaystyle {\hat {\mathbf {\phi } }}}$ ${\displaystyle {\hat {\mathbf {\phi } }}}$ only changes with respect to ${\displaystyle \phi }$: ${\displaystyle {\frac {\partial {\hat {\mathbf {\phi } }}}{\partial \phi }}=-{\hat {\mathbf {\rho } }}}$. Given vector field ${\displaystyle \mathbf {V} =\mathbf {V} _{\perp }+v_{\phi }{\hat {\mathbf {\phi } }}}$ where ${\displaystyle \mathbf {V} _{\perp }}$ is always orthogonal to ${\displaystyle {\hat {\mathbf {\phi } }}}$, then ${\displaystyle (\mathbf {V} \cdot \nabla ){\hat {\mathbf {\phi } }}=-{\frac {v_{\phi }}{\rho }}{\hat {\mathbf {\rho } }}}$ Derivation Using cylindrical coordinates, let ${\displaystyle \mathbf {V} _{\perp }=v_{\rho }{\hat {\mathbf {\rho } }}+v_{z}{\hat {\mathbf {z} }}}$ The cylindrical coordinate version of the directional derivative gives: ${\displaystyle (\mathbf {V} \cdot \nabla ){\hat {\mathbf {\phi } }}=((v_{\rho }{\hat {\mathbf {\rho } }}+v_{\phi }{\hat {\mathbf {\phi } }}+v_{z}{\hat {\mathbf {z} }})\cdot \nabla ){\hat {\mathbf {\phi } }}}$ ${\displaystyle =v_{\rho }{\frac {\partial {\hat {\mathbf {\phi } }}}{\partial \rho }}+{\frac {v_{\phi }}{\rho }}{\frac {\partial {\hat {\mathbf {\phi } }}}{\partial \phi }}+v_{z}{\frac {\partial {\hat {\mathbf {\phi } }}}{\partial z}}}$ ${\displaystyle =v_{\rho }\mathbf {0} +{\frac {v_{\phi }}{\rho }}(-{\hat {\mathbf {\rho } }})+v_{z}\mathbf {0} }$ ${\displaystyle =-{\frac {v_{\phi }}{\rho }}{\hat {\mathbf {\rho } }}}$ ${\displaystyle \nabla ^{2}{\hat {\mathbf {\phi } }}=-{\frac {1}{\rho ^{2}}}{\hat {\mathbf {\phi } }}}$ Derivation Using the cylindrical coordinate version of the Laplacian, ${\displaystyle \nabla ^{2}{\hat {\mathbf {\phi } }}={\frac {1}{\rho }}{\frac {\partial }{\partial \rho }}(\rho {\frac {\partial {\hat {\mathbf {\phi } }}}{\partial \rho }})+{\frac {1}{\rho ^{2}}}{\frac {\partial ^{2}{\hat {\mathbf {\phi } }}}{\partial \phi ^{2}}}+{\frac {\partial ^{2}{\hat {\mathbf {\phi } }}}{\partial z^{2}}}}$ ${\displaystyle ={\frac {1}{\rho
}}{\frac {\partial }{\partial \rho }}(\rho \mathbf {0} )-{\frac {1}{\rho ^{2}}}{\frac {\partial {\hat {\mathbf {\rho } }}}{\partial \phi }}+{\frac {\partial \mathbf {0} }{\partial z}}}$ ${\displaystyle =-{\frac {1}{\rho ^{2}}}{\hat {\mathbf {\phi } }}}$ ### Vector ${\displaystyle {\hat {\mathbf {r} }}}$ ${\displaystyle {\hat {\mathbf {r} }}}$ changes with respect to ${\displaystyle \theta }$ and ${\displaystyle \phi }$: ${\displaystyle {\frac {\partial {\hat {\mathbf {r} }}}{\partial \theta }}={\hat {\mathbf {\theta } }}}$ and ${\displaystyle {\frac {\partial {\hat {\mathbf {r} }}}{\partial \phi }}=(\sin \theta ){\hat {\mathbf {\phi } }}}$ Given vector field ${\displaystyle \mathbf {V} =v_{r}{\hat {\mathbf {r} }}+v_{\theta }{\hat {\mathbf {\theta } }}+v_{\phi }{\hat {\mathbf {\phi } }}}$, then ${\displaystyle (\mathbf {V} \cdot \nabla ){\hat {r}}={\frac {1}{r}}(v_{\theta }{\hat {\mathbf {\theta } }}+v_{\phi }{\hat {\mathbf {\phi } }})}$ Derivation The spherical coordinate version of the directional derivative gives: ${\displaystyle (\mathbf {V} \cdot \nabla ){\hat {\mathbf {r} }}=((v_{r}{\hat {\mathbf {r} }}+v_{\theta }{\hat {\mathbf {\theta } }}+v_{\phi }{\hat {\mathbf {\phi } }})\cdot \nabla ){\hat {\mathbf {r} }}}$ ${\displaystyle =v_{r}{\frac {\partial {\hat {\mathbf {r} }}}{\partial r}}+{\frac {v_{\theta }}{r}}{\frac {\partial {\hat {\mathbf {r} }}}{\partial \theta }}+{\frac {v_{\phi }}{r\sin \theta }}{\frac {\partial {\hat {\mathbf {r} }}}{\partial \phi }}}$ ${\displaystyle =v_{r}\mathbf {0} +{\frac {v_{\theta }}{r}}{\hat {\mathbf {\theta } }}+{\frac {v_{\phi }}{r\sin \theta }}(\sin \theta {\hat {\mathbf {\phi } }})}$ ${\displaystyle ={\frac {1}{r}}(v_{\theta }{\hat {\mathbf {\theta } }}+v_{\phi }{\hat {\mathbf {\phi } }})}$ ${\displaystyle \nabla ^{2}{\hat {\mathbf {r} }}=-{\frac {2}{r^{2}}}{\hat {\mathbf {r} }}}$ Derivation The spherical coordinate version of the Laplacian gives: ${\displaystyle \nabla ^{2}{\hat {\mathbf {r} }}={\frac {1}{r^{2}}}{\frac 
{\partial }{\partial r}}(r^{2}{\frac {\partial {\hat {\mathbf {r} }}}{\partial r}})+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}(\sin \theta {\frac {\partial {\hat {\mathbf {r} }}}{\partial \theta }})+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}{\hat {\mathbf {r} }}}{\partial \phi ^{2}}}}$ ${\displaystyle ={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}(r^{2}\mathbf {0} )+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}(\sin \theta {\hat {\mathbf {\theta } }})+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial }{\partial \phi }}(\sin \theta {\hat {\mathbf {\phi } }})}$ ${\displaystyle ={\frac {1}{r^{2}\sin \theta }}(\cos \theta {\hat {\mathbf {\theta } }}+\sin \theta (-{\hat {\mathbf {r} }}))+{\frac {1}{r^{2}\sin \theta }}(-\sin \theta {\hat {\mathbf {r} }}-\cos \theta {\hat {\mathbf {\theta } }})}$ ${\displaystyle =-{\frac {2}{r^{2}}}{\hat {\mathbf {r} }}}$ ### Vector ${\displaystyle {\hat {\mathbf {\theta } }}}$ ${\displaystyle {\hat {\mathbf {\theta } }}}$ changes with respect to ${\displaystyle \theta }$ and ${\displaystyle \phi }$: ${\displaystyle {\frac {\partial {\hat {\mathbf {\theta } }}}{\partial \theta }}=-{\hat {\mathbf {r} }}}$ and ${\displaystyle {\frac {\partial {\hat {\mathbf {\theta } }}}{\partial \phi }}=(\cos \theta ){\hat {\mathbf {\phi } }}}$ Given vector field ${\displaystyle \mathbf {V} =v_{r}{\hat {\mathbf {r} }}+v_{\theta }{\hat {\mathbf {\theta } }}+v_{\phi }{\hat {\mathbf {\phi } }}}$, then ${\displaystyle (\mathbf {V} \cdot \nabla ){\hat {\theta }}={\frac {1}{r}}(-v_{\theta }{\hat {\mathbf {r} }}+\cot \theta v_{\phi }{\hat {\mathbf {\phi } }})}$ Derivation The spherical coordinate version of the directional derivative gives: ${\displaystyle (\mathbf {V} \cdot \nabla ){\hat {\mathbf {\theta } }}=((v_{r}{\hat {\mathbf {r} }}+v_{\theta }{\hat {\mathbf {\theta } }}+v_{\phi }{\hat {\mathbf {\phi } }})\cdot \nabla ){\hat {\mathbf {\theta } }}}$ ${\displaystyle =v_{r}{\frac {\partial {\hat 
{\mathbf {\theta } }}}{\partial r}}+{\frac {v_{\theta }}{r}}{\frac {\partial {\hat {\mathbf {\theta } }}}{\partial \theta }}+{\frac {v_{\phi }}{r\sin \theta }}{\frac {\partial {\hat {\mathbf {\theta } }}}{\partial \phi }}}$ ${\displaystyle =v_{r}\mathbf {0} +{\frac {v_{\theta }}{r}}(-{\hat {\mathbf {r} }})+{\frac {v_{\phi }}{r\sin \theta }}(\cos \theta {\hat {\mathbf {\phi } }})}$ ${\displaystyle ={\frac {1}{r}}(-v_{\theta }{\hat {\mathbf {r} }}+\cot \theta v_{\phi }{\hat {\mathbf {\phi } }})}$ ${\displaystyle \nabla ^{2}{\hat {\mathbf {\theta } }}=-{\frac {1}{r^{2}\sin \theta }}(2\cos \theta {\hat {\mathbf {r} }}+\csc \theta {\hat {\mathbf {\theta } }})=-{\frac {1}{r^{2}\sin ^{2}\theta }}(\sin(2\theta ){\hat {\mathbf {r} }}+{\hat {\mathbf {\theta } }})}$ Derivation The spherical coordinate version of the Laplacian gives: ${\displaystyle \nabla ^{2}{\hat {\mathbf {\theta } }}={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}(r^{2}{\frac {\partial {\hat {\mathbf {\theta } }}}{\partial r}})+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}(\sin \theta {\frac {\partial {\hat {\mathbf {\theta } }}}{\partial \theta }})+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial ^{2}{\hat {\mathbf {\theta } }}}{\partial \phi ^{2}}}}$ ${\displaystyle ={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}(r^{2}\mathbf {0} )+{\frac {1}{r^{2}\sin \theta }}{\frac {\partial }{\partial \theta }}(\sin \theta (-{\hat {\mathbf {r} }}))+{\frac {1}{r^{2}\sin ^{2}\theta }}{\frac {\partial }{\partial \phi }}(\cos \theta {\hat {\mathbf {\phi } }})}$ ${\displaystyle =-{\frac {1}{r^{2}\sin \theta }}(\cos \theta {\hat {\mathbf {r} }}+\sin \theta {\hat {\mathbf {\theta } }})+{\frac {\cos \theta }{r^{2}\sin ^{2}\theta }}(-\sin \theta {\hat {\mathbf {r} }}-\cos \theta {\hat {\mathbf {\theta } }})}$ ${\displaystyle =-{\frac {1}{r^{2}\sin \theta }}(2\cos \theta {\hat {\mathbf {r} }}+(\sin \theta +{\frac {\cos ^{2}\theta }{\sin \theta }}){\hat {\mathbf {\theta } }})}$ ${\displaystyle 
=-{\frac {1}{r^{2}\sin \theta }}(2\cos \theta {\hat {\mathbf {r} }}+\csc \theta {\hat {\mathbf {\theta } }})}$
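The Laplacian results above can be cross-checked, independently of the curvilinear formulas, by applying a brute-force finite-difference Cartesian Laplacian to the components of ${\hat {\mathbf {r} }}=(x,y,z)/r$ and ${\hat {\mathbf {\rho } }}=(x,y,0)/\rho$ (an added numerical check, not part of the original derivations):

```python
import math

def laplacian(f, p, h=1e-4):
    # Finite-difference Laplacian of scalar f at point p = [x, y, z]
    total = 0.0
    for i in range(3):
        q_plus = list(p); q_plus[i] += h
        q_minus = list(p); q_minus[i] -= h
        total += (f(q_plus) - 2 * f(p) + f(q_minus)) / h**2
    return total

p = [0.7, -0.4, 0.5]
r = math.sqrt(sum(c * c for c in p))
rho = math.sqrt(p[0]**2 + p[1]**2)

# Components of r_hat are x_i / r; expect Laplacian == -2/r^2 times the component
for i in range(3):
    f = lambda q, i=i: q[i] / math.sqrt(sum(c * c for c in q))
    assert abs(laplacian(f, p) + (2 / r**2) * (p[i] / r)) < 1e-5

# Components of rho_hat are (x/rho, y/rho, 0); expect -1/rho^2 times the component
for i in range(2):
    g = lambda q, i=i: q[i] / math.sqrt(q[0]**2 + q[1]**2)
    assert abs(laplacian(g, p) + (1 / rho**2) * (p[i] / rho)) < 1e-5

print("Laplacian identities verified numerically")
```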
https://math.stackexchange.com/questions/2825400/if-f-in-r-alpha-and-if-int-a-b-f-d-alpha-0-then-prove-that-alpha
# If $f \in R(\alpha)$ and if $\int_a^b f \, d\alpha = 0$ then prove that $\alpha$ must be constant

Suppose that every monotonic function $f$ on $[a,b]$ satisfies $f \in R(\alpha)$ and $$\int_a^b f \, d\alpha = 0.$$ Prove that $\alpha$ must be constant on $[a,b]$.

Proof: By integration by parts: $\int_a^b f \, d\alpha + \int_a^b \alpha \, df = f(b)\alpha(b) - f(a)\alpha(a)$. Substituting $\int_a^b f \, d\alpha = 0$, we get: $\int_a^b \alpha \, df = f(b)\alpha(b) - f(a)\alpha(a)$. Given any point $c \in [a, b)$, we may choose a monotonic function $f$ defined as follows: $f(x) = \begin{cases} 0 & x \leq c \\ 1 & x > c \end{cases}$

*So, we have $\int_a^b \alpha \, df = \alpha(c) = \alpha(b)$; that is, $\alpha$ is constant on $[a,b]$.

I don't understand why choosing that function yields the result I marked with *

Since $f$ is integrable with respect to $\alpha$, it follows that $\alpha$ is integrable with respect to $f$ -- this is part of the theorem that justifies the integration by parts used in this proof. Thus, for any $\epsilon > 0$ there exists $\delta > 0$ such that for every partition $P = (x_0,x_1, \ldots,x_n)$ with $\|P\| < \delta$ and any Riemann-Stieltjes sum $$S(P,\alpha,f) = \sum_{k=1}^n\alpha(t_k)[f(x_k) - f(x_{k-1})],$$ we have (no matter how the intermediate points $t_k \in [x_{k-1},x_k]$ are chosen) $$\tag{*}\left|S(P,\alpha,f) - \int_a^b \alpha \, df\right| < \epsilon$$ For any such $P$, we can assume that $P$ has $c$ as one of the partition points, say $c = x_j$. Otherwise, add $c$ to the partition and $\|P\| < \delta$ still holds.
Given that $f(x_k) = 0$ for $x_k \leqslant x_j = c$ and $f(x_k) = 1$ for $x_k > x_j$, we have $$S(P,\alpha,f) \\ = \sum_{k \leqslant j}\alpha(t_k)[f(x_k) - f(x_{k-1})] + \alpha(t_{j+1})[f(x_{j+1}) - f(x_j)] + \sum_{k > j+1}\alpha(t_k)[f(x_k) - f(x_{k-1})] \\ = \alpha(t_{j+1})$$ It follows that $\left|\alpha(t_{j+1}) - \int_a^b \alpha \, df \right| < \epsilon$ and since (*) holds for any $t_{j+1} \in [x_j, x_{j+1}] = [c, x_{j+1}]$, we can choose $t_{j+1} = c$ to obtain $\left|\alpha(c) - \int_a^b \alpha \, df \right| < \epsilon$. Since $\epsilon$ can be arbitrarily close to $0$ it follows that $$\alpha(c) = \int_a^b \alpha \, df$$
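For a concrete feel for why only one subinterval contributes, here is a small numeric illustration (my own addition, with an arbitrarily chosen smooth $\alpha$): a Riemann-Stieltjes sum of $\alpha$ against the step function $f$ collapses to a single term near $\alpha(c)$.

```python
a, b, c = 0.0, 1.0, 0.4
alpha = lambda t: t * t + 1.0          # an arbitrary smooth integrator
f = lambda t: 1.0 if t > c else 0.0    # the step function from the proof

n = 100000
xs = [a + (b - a) * k / n for k in range(n + 1)]
# Riemann-Stieltjes sum with left-endpoint tags t_k = x_{k-1}
S = sum(alpha(xs[k - 1]) * (f(xs[k]) - f(xs[k - 1])) for k in range(1, n + 1))

# f(x_k) - f(x_{k-1}) vanishes except on the one subinterval containing c,
# so the whole sum reduces to alpha(t) for a tag t within 1/n of c
assert abs(S - alpha(c)) < 1e-3
print(S, alpha(c))
```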
https://www.clutchprep.com/physics/practice-problems/40887/a-conducting-rod-of-length-0-20-m-makes-slides-without-friction-on-conducting-me-1
Motional EMF

# Problem: A conducting rod of length 0.20 m slides without friction on conducting metal rails, as shown in the sketch. The apparatus is in a uniform magnetic field that has magnitude B = 0.400 T and that is directed into the page. The resistance of the circuit is a constant R = 5.00 Ω. What magnitude and direction (to the left or to the right) of the external force must be applied to the bar to keep it moving to the right at a constant speed of 12.0 m/s?
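One way to work the numbers (a sketch using the standard motional-EMF relations emf = BLv, I = emf/R, F = BIL, and assuming the usual rod-on-rails geometry described in the problem):

```python
B = 0.400   # T, field magnitude, into the page
L = 0.20    # m, rod length
R = 5.00    # ohm, circuit resistance
v = 12.0    # m/s, constant speed to the right

emf = B * L * v      # motional EMF induced across the rod
I = emf / R          # induced current through the circuit
F_mag = B * I * L    # magnetic force on the rod; it opposes the motion (points left)

# At constant speed the net force is zero, so the external force balances
# the magnetic force: equal magnitude, directed to the right.
F_ext = F_mag
print(F_ext)         # about 0.0154 N (15.4 mN), to the right
```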
http://mathoverflow.net/questions/9835/polynomial-representing-prime-numbers
# Polynomial representing prime numbers Along the lines of Polynomial representing all nonnegative integers, but likely well-known question: is there a polynomial $f \in \mathbb Q[x_1, \dots, x_n]$ such that $f(\mathbb Z\times\mathbb Z\times\dots\times\mathbb Z) = P$, the set of primes? - No. Any such polynomial would have the property that any of its restrictions $f(x)$ to one variable consist only of primes, but this is easily seen to be impossible, since if $p(a)$ is prime then $p(k p(a) + a)$ is divisible by $p(a)$. (Even accounting for the coefficients in $\mathbb{Q}$ is straightforward by multiplying by the common denominator and using CRT; in fact, we can show that given an integer polynomial $q(x)$ and a positive integer $n$ there exists $x_n$ such that $q(x_n)$ is divisible by $n$ distinct primes.)
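The divisibility step is easy to see concretely. A small sketch (my own illustration) using Euler's polynomial $q(x) = x^2 + x + 41$: since $q(0) = 41$ is prime, every value $q(41k)$ is divisible by $41$, so the restriction cannot take only prime values.

```python
# Key step: for an integer polynomial p, p(k*p(a) + a) ≡ p(a) ≡ 0 (mod p(a)),
# because p(x + m) ≡ p(x) (mod m) for any integer m.
def p(x):
    return x * x + x + 41   # Euler's prime-generating polynomial

a = 0
pa = p(a)                   # p(0) = 41, which is prime
for k in range(1, 6):
    assert p(k * pa + a) % pa == 0   # each value is divisible by 41
print([p(k * pa) // pa for k in range(1, 6)])
```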
https://www.khanacademy.org/science/physics/discoveries/measure-magnets/a/measure-the-earth
# Measure the Earth's field!

You can use a bar magnet and a compass to get a rough idea of the strength of Earth's magnetic field.

### What happened?

The force on a compass due to the Earth's magnetic field is relatively weak. Think of it as a gentle magnetic breeze. However, the field near the household magnet is dense like a tornado: strong near the magnet and very weak as you move away.

## Challenge 1: How strong is the Earth's magnetic field?

Find (or make) a compass and one household magnet. Can you determine the exact distance at which the household magnet's force equals that due to the Earth's magnetic field? What does this distance tell us?
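One rough way to anticipate the answer (a sketch under stated assumptions: the on-axis far-field dipole formula, an ASSUMED typical bar-magnet moment of 1 A·m², and a nominal 50 µT Earth field):

```python
import math

mu0 = 4e-7 * math.pi     # T*m/A, vacuum permeability
m = 1.0                  # A*m^2, ASSUMED magnetic moment of the household magnet
B_earth = 5e-5           # T, rough magnitude of Earth's field

# On-axis dipole field: B(r) = mu0 * m / (2 * pi * r**3).
# Solve B(r) = B_earth for the crossover distance r.
r = (mu0 * m / (2 * math.pi * B_earth)) ** (1 / 3)
print(f"crossover distance ~ {100 * r:.0f} cm")
```

For these numbers the compass should stop tracking the magnet somewhere around 15 to 20 cm away; measuring the actual crossover distance lets you invert the same formula to estimate either the magnet's moment or the Earth's field.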
http://meta.math.stackexchange.com/questions/6568/abstract-duplicate
# Abstract duplicate? I have noticed two questions here and here which are nearly identical in nature. These questions ask to show that given any $n+1$ integers, there exist at least two integers whose difference is divisible by $n$. Should these questions be regarded as abstract duplicates? - The second question (with $n=2$) is a special case that may be worth keeping separate from the general case, since it only requires working with even and odd numbers. That makes the solution accessible to even elementary school students. Many students in my undergraduate number theory course, even after working with modular arithmetic for a few weeks, easily wrote proofs using even and odd numbers but did not see the straightforward generalization to integers modulo $n$ without prodding.
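For what it's worth, the shared pigeonhole fact behind both questions (among any $n+1$ integers, two have a difference divisible by $n$) is easy to check by brute force:

```python
import itertools
import random

def has_pair(nums, n):
    # True if some pair in nums has a difference divisible by n
    return any((a - b) % n == 0 for a, b in itertools.combinations(nums, 2))

random.seed(0)
for n in range(2, 10):
    for _ in range(100):
        nums = random.sample(range(-1000, 1000), n + 1)
        # n+1 integers give n+1 residues mod n, which must collide (pigeonhole)
        assert has_pair(nums, n)
print("verified for n = 2..9")
```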
https://www.physicsforums.com/threads/higgs-field.104264/
# Higgs Field

1. Dec 15, 2005

### yquantumjumps

Hello, For clarification, because of the ambiguous situation in searching for and concerning the Higgs Field: it is with 95% confidence that the Higgs Field exists, yet it has not been seen as of the end of 2005. I hope LHC/CERN will shed light on the subject. Anyone dealing with or having knowledge of the Higgs Field: do you expect it to be found? Your comments would be greatly appreciated at this point in time. Thanks, BGE

2. Dec 15, 2005

### Hans de Vries

The Higgs field made it possible in 1971 to renormalize the Yang-Mills gauge theory associated with the unified electroweak force ('t Hooft). It was Steven Weinberg's paper from 1967, "A Model of Leptons," in which he proposed the correct electroweak Lagrangian (Phys. Rev. Lett. 19, 1264-1266). He mixed in Goldstone bosons which are later eliminated in the interactions, just leaving a coupling which gives rise to the masses of the intermediate vector bosons W and Z and the leptons. So something should be right here. The theory itself, however, doesn't predict any specific mass, nor does it predict the mass of the Higgs bosons. There's an interesting follow-up paper from Weinberg in 1971, after the so important proof that it was renormalizable: "Physical Processes in a Convergent Theory of the Weak and Electromagnetic Interactions" (Phys. Rev. Lett. 27, 1688-1691). Regards, Hans

Last edited: Dec 15, 2005

3. Dec 15, 2005

### yquantumjumps

Higgs Thank you Hans, I am aware of the papers, it is greatly appreciated. [Drs. R. L. Mills & C. N. Yang / Drs. Glashow, A. Salam & Weinberg in the award pointing the way]. I am in a discussion of the GeV needed; some believe it will be 115 GeV/c^2, I believe more in the 250 GeV/c^2. In its predictive power of the Higgs Field, you know that it would tidy up some questions in the Standard Model, which is needed at this time, and also provide a description of microscopic matter and the fundamental interactions - the origin of all mass.
Would you not agree? It is not taken lightly here that to discover the mechanism for mass would benefit greatly. I apologize for not making myself clear. BGE

4. Dec 18, 2005

### Ernies

Why does the concept "renormalisable" make things acceptable? It is no more than sweeping things under the carpet by saying infinity - infinity = 0. And at least half a dozen prominent physicists have so described it in discussion -- though no doubt in writing they would have been more circumspect. If you know of a real justification I would be glad to hear it. ernie

5. Dec 21, 2005

### yquantumjumps

Good question, ernie. It is hard to read the emotion dealing with correspondence, unless you put it all in upper case. I think you are just asking why? I am not sure whether this is to be proactive in the search or just frustration in what we are dealing with in the Higgs. It is a very hot subject, this you know. You mentioned renormalization; I will go three years later, to 1974 and K. Wilson, when he used the tool to achieve his goal. He called it the 'renormalization group.' You being a mathematician, I believe you understand this only means applying a new normalization or new calibration to the theory and the parameters you are studying at the time - for example, energy. I believe you also know symmetry and symmetry breaking, which help us all to understand how the universe we live in works, like going from an undifferentiated point to the complex structure we view; so we are searching for the Higgs in order to understand how mass relates to the W's and Z while the photon remains in the same state, massless. Your question, "what is the real justification of this approach?" To answer the questions dealing with mass that have perplexed many physicists dealing with the Standard Model: we are about 95% sure it is there, and it fits in theory, yet as of 2005 it has not been verified (you seem very knowledgeable, so I do not feel I need to expound on this). BGE

6.
Dec 21, 2005

### humanino

It is certainly more than just that, for sure! Renormalizability is a very important issue, and everybody agrees that it is not fully understood. But unless you pretend to have a theory valid at every energy scale AND finite, you need renormalization. Apparently Alain Connes has made quite profound progress in understanding the nature of the process of renormalization. For two reasons I cannot talk about it: it is too complicated for me (non-commutative geometry, NCG), and it belongs to another forum. Once again I could quote NCG, but that's already unfair. Whatever people try to do to deal with the origin of mass, a scalar field must enter into the game. People trying to introduce it directly with gravity must include a dilaton, for instance. It is the same in string theory: the scalar field is the dilaton. In Gribov's scenario for confinement, the scalar field is a condensate of fermions. Anyway, most people are certain that they are going to see a scalar field which, at the very least, looks very much like the single Higgs boson in minimal models. In my opinion, there is a problem somewhere else: even if they do not see any signal, there is already a plethora of theoreticians who will be happy. There are models without a Higgs, there are models for which the Higgs is much heavier, etc.

7. Dec 29, 2005

### yquantumjumps

Happy New Year. I understand the concern above with renormalizability. When working on our degrees we are encouraged to take paths that have been proven; this is not said but assumed, and we all know that the paper must go beyond the ordinary for us to receive the degree for which we work so hard. I agree on sure footing, and on tested and productive results which bring application to a research project.
But on the other side of this coin is a paradox of discovery that needs to be researched. With the Higgs, for example, we must think outside the box and search, not leaving out one mathematical tool that could aid us in finding what is so desperately needed to verify some problematic issues in the Standard Model. Mathematics is not the final word, but it is a universal language in which we can communicate across this small globe we call home. We cannot always take the comfortable position in our search. Yes, Non-Commutative Geometry [NCG] is a new approach, but it has promise. My one concern is that we begin to find fault with the process, instead of seeking and encouraging by helping those in the tasks at hand. Best, BGE

8. Dec 29, 2005

### Ernies

9. Dec 29, 2005

### Kea

Yes (along with Marcolli and Kreimer), but this hardly gives us confidence in the SM Higgs mechanism. Au contraire! The delicateness of these new methods, which cannot be applied to the full SM, leads one simply to wring one's hands in eager anticipation of their non-Abelian extensions.

Last edited: Dec 29, 2005

10. Dec 30, 2005

### Careful

**It is certainly more than just that, for sure! Renormalizability is a very important issue, and everybody agrees that it is not fully understood. But unless you pretend to have a theory valid at every energy scale AND finite, you need renormalization.**

Why? Why do you need to have a theory which is valid on ALL energy scales (UV cutoff to infinity)? It seems to me that such a theory would only be possible when nature presents us with a cutoff where the continuum breaks down (i.e. unbreakable units enter the stage). Why is it unnatural that low energy physics depends upon what happens at high energies (I understand of course why this would be desirable, but I do not see why it should be a logical necessity)? Renormalization means, for example: lack of knowledge of the spacetime structure of matter.
It will still leave us with the wrong theory at sufficiently high energy scales (and I wonder when we get to generation Z of the standard model), so why bother about these? Unfortunately, renormalization also seems to tell us (if I remember correctly) that the demand that physical quantities do *not* depend upon the high-energy scale is NECESSARY in order to get the correct results out of the relevant QFT (which makes me doubt the theory at hand in the first place). To repeat myself, it seems to me that renormalization should not be an issue in a satisfactory theory of fundamental interactions...

11. Dec 30, 2005

### Ernies

The fact that it is a 'workaround', and no more, surely means that it IS an issue. Quite apart from Gödel's theorem, it precludes a sensible 'theory of everything' (that is, until a satisfactory justification is given). Otherwise it is like a witch's spell that happens to work. Useful.

ernie

12. Dec 30, 2005

### Careful

What I meant is that it should not appear in a theory of everything (and should be replaced by something better), so I guess we agree. The Gödel argument is correct; however, for me a "theory of everything" is one which provides a consistent set of rules which covers all unbiased experimental data known so far and clearly satisfies Occam's razor. Of course, Gödel's theorem does not necessarily pose a problem here.

Cheers,

Careful

Last edited: Dec 30, 2005

13. Dec 31, 2005

### Haelfix

Renormalization is not just theoretical; it is experimental fact, mathematical certainty, and pretty much self-evident, at least in a few situations. There are models in condensed matter which we can solve exactly (usually in 2+1 dimensions or on lattices), and it's quite apparent what renormalization means in those contexts, and why it's needed in those particular series approximations.
If you want my personal opinion, there is nothing mysterious or bizarre about renormalization in general; when Wilson figured out the renormalization group in the mid-seventies, I think it became pretty apparent what it entailed. You can point to mathematical problems with field theory before renormalization (and indeed straight to some of the core ideas, which are mathematically tenuous), but as is often the case, the end result is usually far better defined than what we started with.

14. Dec 31, 2005

### Careful

**Renormalization is not just theoretical, it is experimental fact, mathematical certainty and pretty much self evident at least in a few situations. **

What is it that makes renormalisation an experimental fact? It is a mathematically well-defined procedure, but that is equivalent to saying that a donkey should have two ears and one tail.

**There are models in condensed matter which we can solve exactly (usually in 2+1 dimensions or on lattices) and its quite apparent what renormalization means in those contexts, and why its needed in those particular series approximation. **

Sure, I did not contest that it is mathematically apparent what happens *perturbatively* at the level of the Feynman series. However, you seem to have entirely missed the *physical* points I have raised. Also, you should explain to me WHY renormalisation is necessary a priori (and do not come up with the universality arguments), since it seems entirely plausible to me that low-energy physics depends upon what happens at high energies. Moreover, you also seem to have missed the point that unless you write out a theory of UNBREAKABLE units, your renormalized theory shall always give the wrong answer at sufficiently high energies. By the way, in 2+1 dimensions, do you manage to renormalize ALL correlation functions? I am sure that finding a Hilbert space representation (which is actually what is needed) is far too ambitious.
**If you want my personal opinion, there is nothing mysterious or bizarre about renormalization in general, when Wilson figured out the renormalization group in the mid seventies I think it became pretty apparent what it entailed. **

Yes, and it was at the same time clear that this procedure cannot be used for a theory which has the ambition of providing a deeper insight into the nature of matter. Ernies is right in pointing out that it is a useful tool to cure sick theories.

**You can point to mathematical problems with field theory before renormalization (and indeed straight to some of the core ideas which are mathematically tenous) but as is often the case, the end result is usually far better defined than what we started with. **

Your beautiful renormalization procedure has unfortunately not yet helped to produce a single well-defined interacting QFT in 3+1 dimensions. One should not be afraid to abandon a tool in those circumstances (unified theory) where it is clear that something better is needed.

Cheers,

Careful

Last edited: Dec 31, 2005

15. Dec 31, 2005

### marlon

Well, just look at the work of these guys. They used renormalization theory to "cure" the divergences in the electroweak theory. I especially recommend their Nobel Lectures.

Is this a rhetorical question? What exactly do you mean by that? The evolution of the coupling constant in terms of the energy scale is described by renormalization, but that's the only connection between low and high energy (as far as I can see). How can you link perturbative and non-perturbative behaviour? What field theory are you talking about? Do you have a suggestion?

regards

marlon

16. Dec 31, 2005

### Careful

**Well, just look at the work of these guys. They used renormalization theory to "cure" the divergences in the electroweak theory. I especially recommend their Nobel Lectures. **

Sigh, what a boring argument (don't you think I know 't Hooft's and Veltman's lectures?).
If you just don't have any intelligent answer, then don't use such argumentation (perhaps you could imagine that even some of these people might agree with my thesis here).

**Is this a raethorical question ? **

No, it isn't! I just excluded one particular answer (which I do not find very convincing) to this question, simply to make the conversation more efficient.

**What exactly do you mean by that ? The evolution of the coupling constant in terms of energyscale is described by renormalization, but that's the only connection between low and high energy (as far as i can see). **

The renormalization equations on the coupling constants express that the resulting theory should be finite (in the sense that N-point functions should be finite) and not depend upon the high-energy (UV) cutoff of the QFT at hand. Why should this be a requirement?

**How can you link perturbative and non-perturbative behaviour ? **

That was not the issue, and it is a far more difficult question to answer.

**What field theory are you talking about ? **

For example: the original version of the standard model (I thought one is playing around with SU(9) or SU(10) theories already now).

**Do you have a suggestion ? **

Yes: study continuum classical models for elementary particles and their stability.

Cheers,

Careful

Last edited: Dec 31, 2005

17. Dec 31, 2005

### marlon

Don't turn things around just because you have no answer, please. You know very well why I posted this remark on the 1999 Physics Nobel Prize winners. I also should add the work of these dudes. Renormalization works. "Point final." Don't make useless speculations to impress people. It doesn't work.

Err, because it works.

No it is not; the answer has been given by these guys when it comes to electroweak interactions and QCD.

No, no, I meant to ask what ESTABLISHED field theories? You are just speculating. Just keep in mind that mindless speculations are not allowed in this forum.
If you make a point that does not correspond to mainstream physics, make sure that you can prove it at any time. Just to be clear, this does not imply that new ideas cannot be discussed here; they CAN. They just have to be discussed in an intelligent manner.

regards

marlon

18. Dec 31, 2005

### Careful

**Don't turn things around just because you have no answer, please. You know very well why i posted this remark on the 1999 Physics Nobel Prize winners. I ilso should add the work of these dudes. Renormalization works. "Point final" Don't make useless speculations to impress people. It doesn't work. Err, because it works **

Sigh... if there is one person who wants to impress people by quoting the names of Nobel Prize winners, it is you. The rest of your comments are just too simplistic. Moreover, you seem to have missed my comment that it is NOT sufficient to be able to calculate the correlation functions: one should dispose of a Hilbert space representation.

**No it is not, the answer has been given by these guys when it comes to electroweak interactions and QCD **

As I said, this is nontrivial (as you should know :grumpy: ). In QCD, something like asymptotic freedom is needed to do that job. As far as the weak interactions go, they are *not* nonperturbatively renormalizable, AFAIK. In gravity, for example, people are even trying to go further: they argue that a theory which is not even perturbatively renormalizable might actually be nonperturbatively renormalizable.

**No, no, i meant to ask what ESTABLISHED field theories ? **

The standard model is pretty established, no? :rofl: And I know for a fact that people in MAINSTREAM physics (which you love so much) are researching unified models with higher gauge groups, so this is no speculation but very up-to-date information.

**Just keep in mind that mindless speculations are not allowed in this forum. If you make a point that does not correspond to mainstream physics, make sure that you can proof it at any time.
Just to be clear, this does not imply that new ideas cannot be discussed here, they CAN. **

This is far from mindless speculation :rofl: :rofl: It is pretty obvious that the construction of realistic matter models (and the study of their stability) is a key step towards banning renormalization. For your reference: the late A. O. Barut (amongst many) uttered the same idea a long time ago and actually did quite some work on it (realistic electron models, for example).

Cheers,

Careful

Last edited: Dec 31, 2005

19. Dec 31, 2005

### marlon

No, because they are a crystal-clear proof of what I am trying to say. Look, you cannot just make a vague statement to prove your point. Where are the formulas? The references? Please react clearly to what I am trying to say to you.

But how do you think this principle was described and proved? You might wanna read some of the references I gave you, since clearly you are not familiar with their content.

Let's be clear: there is NO established field theory for gravity, so you cannot bring this up just to state that "renormalization is not ok". Restrain yourself to mainstream physics.

Sure it is. Now you answer this: what good did renormalization do in the electroweak interaction and QCD, huh?

Unless a theory has passed the required stages (refereeing, experimental backup), it does not belong to mainstream physics. Besides, explain what you mean by the notion "higher gauge groups".

Clearly, you are misinterpreting. Renormalization does not need to be banned, because it has proven its value. We only need to look further at the problems related to renormalization. This is something entirely different.

References please... references to peer-reviewed articles... Otherwise don't make such statements.

marlon

20. Dec 31, 2005

### Careful

**But how do you think this principle was described and proved ? You might wanna read some of the references i gave you, since clearly you are not familiar with their content.
**

Of course through renormalization (I did not deny that) :grumpy: I merely said that the property of asymptotic freedom improves the theory (that is, no Landau poles or divergences of the coupling constants at some high energies, as happens in QED). It is understood that asymptotic freedom in the strong interactions is what makes QCD nonperturbatively renormalizable, while QED is not.

**Let's be clear, there is NO established field theory for gravity so you cannot bring this up just to state that "renormalization is not ok". Restrain yourself to mainstream physics. **

I did not use this as an argument against renormalization (learn to read): I merely used this to counter your false claim that perturbatively renormalizable implies nonperturbatively renormalizable, which you made three posts ago (and which is wrong, as happens in the weak interactions, AFAIK).

**Sure it is. Now you answer this, what good did renormalization do in the electroweak interaction and QCD, hu ?**

Ah, it made the theory sensible. I am not denying that it is a useful tool, but it is a limited one which cannot be used for unification :grumpy: Again, you did not listen.

**Unless a theory has passed the required stages (refereeing, experimental backup) it does not belong to mainstream physics. Besides, explain what you mean by the notion "higher gauge groups". **

Higher-dimensional principal fibre bundle connections! Gee, you should know that. :grumpy:

**Clearly, you are misinterpreting. renormalization does not need to be banned because it has proven it's value. We only need to look further at the problems related to renormalization. This is something entirely different. **

Wrong: we do not have any rigorous interacting QFT in 3+1 dimensions so far, as is WELL KNOWN (for a confirmation of this statement, see the paper of Nicolai, Peeters and Zamaklar on loop quantum gravity), and it is also well accepted that computing correlation functions is NOT enough.
And I did not say it has to be banned: I merely said that it cannot serve for a "theory of everything". Again, you fail to read correctly.

**References please...References to peer reviewed articles...Otherwise don't make such statements. **

If it makes you happy, I shall look them up (but promise to READ them at least), but again, these things are at least seven years old.

Cheers,

Careful

Last edited: Dec 31, 2005
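Careful's mention of Landau poles in QED can be made concrete. Below is a small sketch of the textbook one-loop running of the QED coupling for a single charged fermion; the formula is standard, but the script itself (function names, sample scales) is just an illustration, not anything from the thread.

```python
import math

ALPHA_ME = 1 / 137.036  # fine-structure constant near the electron mass scale

def alpha_qed(log_q_over_me, alpha0=ALPHA_ME):
    """One-loop QED running coupling with a single charged fermion:
    alpha(Q) = alpha0 / (1 - (2*alpha0 / 3*pi) * ln(Q/m_e))."""
    return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * log_q_over_me)

# The coupling grows with energy...
print(alpha_qed(0.0), alpha_qed(10.0))

# ...and the one-loop formula formally diverges (the Landau pole) where the
# denominator vanishes, i.e. at ln(Q/m_e) = 3*pi / (2*alpha):
landau_log = 3 * math.pi / (2 * ALPHA_ME)
print(landau_log)  # roughly 645, i.e. Q ~ e^645 * m_e, far beyond any physical scale
```

This is why, as Careful notes, QED by itself runs into trouble at (absurdly) high energies, while asymptotically free theories like QCD do not.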
https://socratic.org/questions/how-do-you-draw-the-line-with-the-slope-m-1-and-y-intercept-1
Algebra Topics

# How do you draw the line with the slope m=1 and y intercept 1?

Build the line equation, then solve it for a few x-y pairs, or let the computer draw it for you :)

#### Explanation:

The general line formula is $y = m x + n$.

Here the slope is m = +1, so your line is $y = x + n$.

Given that the line intercepts the y-axis at 1, we know that $x = 0 \to y = 1$, so n is: $1 = 0 + n \to n = 1$.

Your line formula becomes: $y = x + 1$

Now solve this equation for x = -3, -2, -1 (x = 0 is already given in the question), +1, +2, +3, mark the (x, y) points on a Cartesian coordinate system, then connect them with a ruler (but don't forget that the line is not limited to the range from -3 to +3; extend it as far as you can)... which is:

graph{x+1 [-10, 10, -5, 5]}
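The tabulation step described above (pick a few x values, compute y = x + 1) can be sketched in a few lines of Python; this is only an illustration of the arithmetic, and the ruler work is still up to you.

```python
# Tabulate points on the line y = m*x + n with slope m = 1 and intercept n = 1,
# for the sample x values suggested in the answer (x from -3 to 3).
def line(x, m=1, n=1):
    return m * x + n

points = [(x, line(x)) for x in range(-3, 4)]
print(points)
# [(-3, -2), (-2, -1), (-1, 0), (0, 1), (1, 2), (2, 3), (3, 4)]
```

Note that the point (0, 1) confirms the y-intercept given in the question.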
http://math.stackexchange.com/questions/61730/what-is-wrong-with-this-proof-that-all-vector-bundles-of-the-same-rank-are-isom
# What is wrong with this proof (that all vector bundles of the same rank are isomorphic)?

Suppose I have two vector bundles $E \rightarrow M, E' \rightarrow M$ of rank $k$ on a smooth manifold $M$. Let $\mathcal{E}(M), \mathcal{E'}(M)$ denote their spaces of smooth sections. We can choose some arbitrary isomorphism $\phi_p: E_p \rightarrow E'_p$ for all $p \in M$, where $E_p, E'_p$ denote the fibers above $p$. Now we use this to define a map $\mathcal{F}: \mathcal{E}(M) \rightarrow \mathcal{E'}(M)$ as follows. For any smooth section $\sigma \in \mathcal{E}(M)$, define the section $\mathcal{F}(\sigma)$ by $\mathcal{F}(\sigma)(p) = \phi_p(\sigma(p))$. Then $\mathcal{F}$ is linear over $C^\infty(M)$, so there is a smooth bundle map $F: E \rightarrow E'$ over $M$ such that $\mathcal{F}(\sigma) = F \circ \sigma$ for all $\sigma$. Defining a map $\mathcal{F}^{-1}: \mathcal{E}'(M) \rightarrow \mathcal{E}(M)$ using $\phi_p^{-1}$, we see by the same reasoning that there is a smooth bundle map $F^{-1}: E' \rightarrow E$ which is the inverse of $F$. So the two bundles are isomorphic.

The problem is that $\mathcal{F}(\sigma)$ is not going to be a smooth section of $E'$ unless the $\phi_p$'s are "coherent". For example, let $E\to M$ and $E'\to M$ both be the trivial line bundle $\mathbb{S}^1\times\mathbb{R}$ on the circle $\mathbb{S}^1$. Let $A=\{e^{2\pi it}\mid t\in\mathbb{Q}\}\subset\mathbb{S}^1$. If we choose our isomorphisms on the fibers to be $$\phi_p=\begin{cases}\;\;\;\;\text{id}_{\{p\}\times\mathbb{R}}\text{ if }p\in A\\ -\text{id}_{\{p\}\times\mathbb{R}}\text{ if }p\notin A\end{cases}$$ then the smooth section $\sigma:\mathbb{S}^1\to E$ given by $\sigma(p)=1$ is sent to $$\mathcal{F}(\sigma)(p)=\begin{cases}\;\;\;\;1\text{ if }p\in A\\ -1\text{ if }p\notin A\end{cases}$$ which is not even continuous.

In addition to the problem Zev pointed out, there's a further problem with topology.
Say you intend to avoid Zev's construction, so you do the following: Since $(E,M,\pi, V)$ is a vector bundle, for every point $p \in M$ there is a neighbourhood $U\ni p$ such that $\pi^{-1}U$ is diffeomorphic to $V\times U$. Okay, so instead of just defining the isomorphism pointwise by $\phi_p$, you additionally require that it extends locally over $U$ to a smooth map between $V\times U$ and itself. If you indeed are able to find such a map, you would have found a bundle isomorphism. But in reality it may not be possible to find such a map.

In the case where $M$ is simply connected, the usual arguments can be used to show that the local definition can be extended to a well-defined global one. But when that is not the case, you can consider the example of the Möbius strip as an $\mathbf{R}$ bundle over $S^1$, and the trivial $\mathbf{R}$ bundle over $S^1$. It is clear to see that any smooth section of the Möbius strip must vanish at some point. On the other hand, there exist non-vanishing sections of the trivial bundle. So any smooth map which commutes with projection from the latter to the former must, at some point in $S^1$, fail to be surjective.
http://mathhelpforum.com/advanced-algebra/217563-isomorphism-direct-product-groups.html
# Math Help - Isomorphism of Direct product of groups

1. ## Isomorphism of Direct product of groups

Find which of the following groups is isomorphic to S3 $\bigoplus$ Z2.

a) Z12
b) A4
c) D6
d) Z6 $\bigoplus$ Z2

I eliminate option a) because Z12 is cyclic whereas S3 $\bigoplus$ Z2 is not, because we know that the external direct product of G and H is cyclic if and only if G and H are cyclic and their orders are relatively prime. Here that is not the case.

Here's my question. Can I eliminate option d) using the following argument? If S3 $\bigoplus$ Z2 were isomorphic to Z6 $\bigoplus$ Z2, then we would have S3 isomorphic to Z6, which is again a contradiction, as Z6 is cyclic whereas S3 is not. Is my argument right?

Also it would be great if I could get a head start with the other options too... Thanks

2. ## Re: Isomorphism of Direct product of groups

An easier argument for d) is that it is abelian, whereas your original group is not. I don't think your original argument is justified.

3. ## Re: Isomorphism of Direct product of groups

Hi Gusbob, I have found justification for my claim, yet I agree with you that your argument using the property of "being abelian" is more convincing and more elegant. Any ideas about the other two options? I just need to eliminate one more to arrive at the answer.

4. ## Re: Isomorphism of Direct product of groups

The most obvious hint is a giveaway, but I can't think of anything else at an elementary level short of writing an explicit isomorphism to the correct answer. $S_3$ is a subgroup of $S_3\times Z_2$. Can you realise $S_3$ as a subgroup of either of your two remaining options?

5. ## Re: Isomorphism of Direct product of groups

$S_{3}\oplus Z_{2}$ is not isomorphic to $A_{4}$ because the element ((123),1) has order 6, while $A_{4}$ doesn't have any element of order 6. Actually, $S_{3}\oplus Z_{2}$ is isomorphic to the dihedral group $D_{12}$.

6. ## Re: Isomorphism of Direct product of groups

xixi, your justification was very elegant.
It took a while for it to strike me why A4 shouldn't have an element of order 6; I realized that the order of any element of A4 has to be the lcm of the lengths of the cycles into which it can be split, which, for partitions of 4 letters, can never exceed 4 (the possible cycle types give orders 1, 2, 3, or 4).

Still, I guess you mean to say that the answer is D6, eh?

7. ## Re: Isomorphism of Direct product of groups

Yes, $S_{3} \oplus Z_{2}$ is isomorphic to $D_{12}$ (the dihedral group of order twelve), which is denoted $D_{6}$ in an alternate convention. In other words, it is the dihedral group of degree six.
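xixi's order argument is easy to check by brute force. The sketch below (my own illustration, not from the thread) enumerates element orders in $S_3 \oplus Z_2$ and in $A_4$:

```python
from itertools import permutations
from math import gcd

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations are stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def order(p):
    # order of a permutation: smallest k with p^k = identity
    identity = tuple(range(len(p)))
    k, q = 1, p
    while q != identity:
        q = compose(p, q)
        k += 1
    return k

def sign(p):
    # parity via inversion count: +1 for even permutations
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

def lcm(a, b):
    return a * b // gcd(a, b)

S3 = list(permutations(range(3)))
A4 = [p for p in permutations(range(4)) if sign(p) == 1]

# the order of (sigma, z) in S3 x Z2 is lcm(order(sigma), order(z))
orders_S3xZ2 = {lcm(order(p), 2 if z else 1) for p in S3 for z in (0, 1)}
orders_A4 = {order(p) for p in A4}

print(sorted(orders_S3xZ2))  # [1, 2, 3, 6] -- there IS an element of order 6
print(sorted(orders_A4))     # [1, 2, 3]    -- no element of order 6
```

Since isomorphisms preserve element orders, the order-6 element rules out $A_4$, exactly as xixi argued.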
http://www.physicsforums.com/showpost.php?p=2582930&postcount=3
I would have no idea how to support my answer using physics. But something tells me that the more kinetic energy the water has, the harder it is to stabilize. So I think the faster water flows, the lower its freezing point.
http://math.stackexchange.com/questions/522359/writing-real-invertible-matrices-as-exponential-of-real-matrices
# Writing real invertible matrices as exponential of real matrices

Every invertible square matrix with complex entries can be written as the exponential of a complex matrix. I wish to ask if it is true that

Every invertible real matrix with positive determinant can be written as the exponential of a real matrix.

(We need the positive-determinant condition because if $A=e^X$ then $\det A=e^{\operatorname{tr}(X)} > 0$.) If not, is there a simple characterization of such real matrices (with positive determinant) which are exponentials of other matrices?

-

As far as I know, there exists an open neighborhood $U$ of $0$ in $T_IGL_n({\bf R})$ such that ${\rm exp}|_U$ is a diffeomorphism. That is, any matrix that is a small perturbation of $I$ can be written as an exponential. –  Hee Kwon Lee Oct 11 '13 at 9:25

No, a real matrix has a real logarithm if and only if it is nonsingular and, in its (complex) Jordan normal form, every Jordan block corresponding to a negative eigenvalue occurs an even number of times. So, it is possible that a matrix with positive determinant is not the matrix exponential of a real matrix. Here are two counterexamples: $\pmatrix{-1&1\\ 0&-1}$ and $\operatorname{diag}(-2,-\frac12,1,\ldots,1)$.

For more details, see Walter J. Culver, On the existence and uniqueness of the real logarithm of a matrix, Proceedings of the American Mathematical Society, 17(5): 1146-1151, 1966.

-

Thanks! This is very useful as the criterion is not hard to check for a given matrix. This is exactly what I was looking for. –  user90041 Oct 11 '13 at 16:16

Another characterization is as follows: $A$ is the exponential of a real matrix iff $A$ is the square of a real invertible matrix. In particular, remark that if $A=e^X$, then $A=(e^{X/2})^2$. Concerning Lee's post, if $A$ is in a neighborhood of $I$, then $A=I+B$ with $||B||<1$ and we can take $X=B-B^2/2+B^3/3-\cdots$.

EDIT: An outline of the proof. Let $A=B^2$ where $B$ is invertible real; we may assume that $B$ is in Jordan form.
For each Jordan block $B_k=\lambda I_k+J_k$ of $B$ such that $\lambda<0$, replace $B_k$ with $-B_k$. In the end you obtain a matrix $C$ such that $C^2=A$. It is not difficult to show that such a matrix $C$, which has no negative eigenvalues, is the exponential of a real matrix.

-

Thanks! But could you please give a reference where this is proved? I am unable to see immediately why $A$ should be the exponential of a real matrix if it is the square of another real matrix. –  user90041 Oct 11 '13 at 15:02
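The positivity constraint quoted at the start of the question, $\det e^X = e^{\operatorname{tr}(X)} > 0$, is easy to check numerically. The sketch below uses a plain Taylor series for the matrix exponential on a 2x2 example; this is my own illustration, adequate for small matrices but not a production-quality `expm`.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def expm(X, terms=40):
    # Taylor series exp(X) = sum_k X^k / k!  (fine for small matrices/entries)
    n = len(X)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = mat_mul(term, X)                      # term is now X^k * k!/...
        term = [[t / k for t in row] for row in term]  # ...divided by k!
        result = mat_add(result, term)
    return result

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

X = [[0.3, -1.2], [0.7, 0.5]]  # arbitrary real 2x2 matrix
A = expm(X)
# det(e^X) agrees with e^{tr X}, so it is forced to be positive:
print(det2(A), math.exp(X[0][0] + X[1][1]))
```

This only confirms the necessity of positive determinant; the counterexamples above show it is not sufficient.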
http://mathhelpforum.com/new-users/199376-help-about-arithmetic-mean.html
Math Help - Help about Arithmetic mean!

1. Help about Arithmetic mean!

OK, it's a word problem and I am struggling so much with it... I know the answer, which is 2/7 or .285..., but I don't know how to do it. Please help.

One adult and 10 children are in an elevator. If the adult's weight is 4 times the average (arithmetic mean) weight of the children, then the adult's weight is what fraction of the total weight of the 11 people in the elevator?

2. Re: Help about Arithmetic mean!

Let "w" be the average weight of the children. So the adult's weight is 4w. The total weight of the 10 children is 10w. So what is the total weight of the adult and children? What is the ratio of the adult's weight to that total weight?

3. Re: Help about Arithmetic mean!

Hello, Mat724!

One adult and 10 children are in an elevator. If the adult's weight is 4 times the average (arithmetic mean) weight of the children, then the adult's weight is what fraction of the total weight of the 11 people in the elevator?

Let $A$ = weight of the adult.
Let $C$ = total weight of the children.
Then $\tfrac{C}{10}$ = the average weight of the children.

We are told: $A \:=\:4\left(\tfrac{C}{10}\right) \quad\Rightarrow\quad A \:=\:\tfrac{2}{5}C$

Then the total weight of the 11 people is: $A + C \:=\:\tfrac{2}{5}C + C \:=\:\tfrac{7}{5}C$

The desired fraction is: $\frac{\text{adult}}{\text{total}} \;=\; \frac{\frac{2}{5}C}{\frac{7}{5}C} \;=\;\frac{2}{7}$
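The computation above can be double-checked with exact rational arithmetic; the snippet below is just a sanity check of the algebra, using an arbitrary unit value for the average child weight.

```python
from fractions import Fraction

w = Fraction(1)            # average child weight (any positive value works)
children_total = 10 * w    # ten children with average weight w
adult = 4 * w              # the adult weighs 4 times the average child weight
fraction = adult / (adult + children_total)
print(fraction)  # 2/7
```

Since both numerator and denominator scale with w, the answer is independent of the actual weights, which is why the problem is well-posed.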
http://www-personal.umich.edu/~asnowden/teaching/2013/679/L19.html
# Lecture 19: J_0(N) mod N

$$\DeclareMathOperator{\Spec}{Spec} \DeclareMathOperator{\Jac}{Jac} \DeclareMathOperator{\Pic}{Pic} \newcommand{\tors}{\mathrm{tors}} \newcommand{\GL}{\mathrm{GL}} \newcommand{\un}{\mathrm{un}} \newcommand{\lbb}{[\![} \newcommand{\rbb}{]\!]} \newcommand{\bP}{\mathbf{P}} \newcommand{\bQ}{\mathbf{Q}} \let\ol\overline \newcommand{\cO}{\mathcal{O}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cJ}{\mathcal{J}} \newcommand{\bZ}{\mathbf{Z}} \newcommand{\fM}{\mathfrak{M}} \newcommand{\bF}{\mathbf{F}} \newcommand{\fm}{\mathfrak{m}} \newcommand{\bG}{\mathbf{G}} \newcommand{\rH}{\mathrm{H}}$$

The purpose of this lecture is to show that $J_0(N)$, and any quotient of it, has completely toric reduction at $N$. We do this in three steps:

1. We analyze the minimal regular model of $X_0(N)$ and show that its special fiber is a nodal curve whose irreducible components are $\bP^1$'s.
2. We recall a theorem of Raynaud, relating the Néron model of the Jacobian of a curve to the Picard scheme of its minimal model.
3. Combining the first two results shows that the special fiber of the Néron model of $J_0(N)$ is the Picard scheme of a nodal curve whose irreducible components are $\bP^1$'s. We explicitly compute this and find that it is a torus.

## This lecture's goal

Let $N \gt 7$ be a prime. Thanks to the previous lecture, our task is to find an abelian variety $A/\bQ$ and a map $f \colon X_0(N) \to A$ satisfying the following four hypotheses:

- $A$ has good reduction away from $N$.
- $A$ has completely toric reduction at $N$.
- The Jordan--Holder constituents of $A[p](\ol{\bQ})$ are trivial or cyclotomic.
- $f(0) \ne f(\infty)$.

We know an example of an abelian variety to which $X_0(N)$ maps: the Jacobian $J_0(N)$ of $X_0(N)$. In fact, this is the universal such abelian variety.
Thus to find $A$ we need only consider quotients of $J_0(N)$. As we have seen, the first condition above ("$A$ has good reduction away from $N$") comes for free for quotients of $J_0(N)$, thanks to the following results:

- $X_0(N)$ admits a smooth model away from $N$.
- The Jacobian of such a curve has good reduction away from $N$.
- Any quotient of an abelian variety having good reduction has good reduction.

Today, we're going to show that the second condition ("$A$ has completely toric reduction at $N$") also comes for free. The proof is similar to the above, but a bit more complicated. We proceed as follows:

- The special fiber of the minimal regular model of $X_0(N)$ at $N$ is a nodal curve whose irreducible components are $\bP^1$'s.
- The Jacobian of such a curve has completely toric reduction.
- Any quotient of an abelian variety having completely toric reduction has completely toric reduction.

The first step will consume most of our time. For the second, we appeal to a theorem of Raynaud relating Néron and minimal regular models. The third step follows easily from the theory of Néron models.

## Completely toric reduction for quotients

Proposition. Let $\cO$ be a DVR with fraction field $K$ and residue field $k$. Let $A/K$ be an abelian variety and let $B$ be a quotient of $A$. Suppose $A$ has completely toric reduction. Then the same is true for $B$.

Proof. Let $f \colon A \to B$ be the quotient map. Since the isogeny category is semi-simple, there exists a map $g \colon B \to A$ such that $fg=[n]$ for some positive integer $n$. The maps $f$ and $g$ extend to maps of the Néron models $\cA$ and $\cB$ by the Néron mapping property. These extended maps still satisfy $fg=[n]$, since it holds generically. It follows that $f$ and $g$ induce maps between $\cA_k$ and $\cB_k$ satisfying $fg=[n]$. In particular, $f \colon \cA_k \to \cB_k$ is surjective, which shows that $\cB_k$ is a torus.
## Raynaud's theorem on the relative Picard functor

Let $f \colon X \to S$ be a proper flat map. We define the relative Picard functor, denoted $\Pic_{X/S}$, to be the sheafification of the functor $S' \mapsto \Pic(X_{S'})$ on the big fppf site of $S$. A lot is known about this functor, but we'll only mention the few results we need. We refer to Raynaud's article (MR0282993) as a reference. To begin with, we have the following result of Murre:

Theorem. If $S$ is a field then $\Pic_{X/S}$ is representable by a group scheme. (Note: nothing about the singularities of $X$ is assumed.)

Thus, when $S$ is a field, we have an identity component $\Pic^0_{X/S}$, which we can think of as a subsheaf of $\Pic_{X/S}$. For a general base $S$, we define $\Pic^0_{X/S}$ to be the subsheaf of $\Pic_{X/S}$ consisting of those sections that restrict into $\Pic^0_{X_s/s}$ for every geometric point $s \to S$.

Suppose now that $S=\Spec(\cO)$ where $\cO$ is a DVR. Let $K$ be the fraction field of $\cO$ and $k$ the residue field. Suppose also that $X$ is a curve (i.e., its fibers are pure of dimension 1). Let $\{X_i\}$ be the irreducible components of the special fiber $X_k$. The local ring of $X_i$ at its generic point is artinian; let $d_i$ be its length. This is the multiplicity of $X_i$ in $X$.

Theorem. Suppose that $X_K$ is smooth over $K$, $X$ is regular, and the gcd of the $d_i$ is 1. Let $\cJ$ be the Néron model of $\Jac(X_K)$ over $\cO$ and let $\cJ^0$ be its identity component. Then $\Pic^0_{X/S}$ is representable by a smooth group scheme over $\cO$, and coincides with $\cJ^0$. In particular, $\cJ^0_k$ is isomorphic to $\Pic^0_{X_k/k}$.

Remark. The functor $\Pic_{X/S}$ is not necessarily representable by a scheme, but it is represented by an algebraic space. Let $E$ be the scheme-theoretic closure of the identity section of $\Pic_{X/S}$. If $\Pic_{X/S}$ were separated, this would simply be the identity section, but $\Pic_{X/S}$ can fail to be separated.
The quotient sheaf $\Pic_{X/S}/E$ is representable by a separated and smooth group scheme over $\cO$. It admits a degree function $\Pic_{X/S}/E \to \bZ$, the kernel of which is the full Néron model $\cJ$ of $\Jac(X_K)$.

## The minimal regular model of X_0(N)

Given Raynaud's theorem, to understand the Néron model of $J_0(N)$ we should first understand the minimal regular model of $X_0(N)$. One might guess that $\ol{M}_0(N)$, the coarse space of $\ol{\fM}_0(N)$, would be the minimal regular model. This is almost the case, but not quite: the automorphism groups in $\ol{\fM}_0(N)$ cause its coarse space to be non-regular. However, the singularities are very mild and easy to resolve. To study $\fM_0(N)$ and its coarse space, we first pass to a finite étale Galois cover which is a scheme, see what goes on there, and then take the quotient to obtain the coarse space.

### The covering space and its structure

We now change notation and use $p$ in place of $N$. We assume that $p$ is a prime $\gt 3$. Let $\ell$ be a prime satisfying the following: (1) $\ell \ne p$; (2) $\ell \gt 2$; and (3) $\ell \ne \pm 1$ modulo $p$. Let $G=\GL_2(\bF_{\ell})$. The order of $G$ is $(\ell^2-1)(\ell^2-\ell)=\ell (\ell-1)^2 (\ell+1)$, and is therefore prime to $p$. We work over $\bZ[1/6\ell]$ in this section (we really only care about what goes on at $p$). We will be concerned with the following moduli spaces:

- $\fM_0(p)$ and its coarse space $M_0(p)$.
- The moduli space $\fM_0(p; \ell)$ of elliptic curves with $\Gamma_0(p)$- and $\Gamma(\ell)$-structure. This is a scheme since $\ell \gt 2$, so we denote it by $M_0(p; \ell)$.
- The moduli space $M(\ell)$ of elliptic curves with $\Gamma(\ell)$-structure, which is smooth over $\bZ[1/\ell]$.

We have a natural map $M_0(p; \ell) \to \fM_0(p)$ which is finite étale and Galois with group $G$. We have a natural identification $\fM_0(p)=[M_0(p; \ell)/G]$ and $M_0(p)=M_0(p; \ell)/G$. Note that $M_0(p; \ell)$ is affine.
If we let $A$ be its coordinate ring then $M_0(p)=\Spec(A^G)$. We need the following result, a proof of which can be found in Katz--Mazur:

Theorem. The scheme $M_0(p; \ell)$ is regular and flat over $\bZ$.

We now have the following result:

Proposition. The scheme $M_0(p; \ell)_{\bF_p}$ is Cohen--Macaulay and reduced. It is smooth away from the supersingular points. Each supersingular point is an ordinary node (i.e., the strict complete local ring is $k \lbb u, v \rbb/(uv)$).

Proof. Write $M_0(p; \ell)=\Spec(A)$, where $A$ is a regular ring, flat over $\bZ[1/\ell]$. Then $M_0(p; \ell)_{\bF_p}=\Spec(B)$ with $B=A/pA$. The ring $B$ is Cohen--Macaulay since it is the quotient of a regular ring by a non-zerodivisor. We have the usual maps $i,j \colon M(\ell) \to M_0(p; \ell)$ (which are closed immersions) and $f,g \colon M_0(p; \ell) \to M(\ell)$. In particular, the ordinary locus of $M_0(p; \ell)_{\bF_p}$ is isomorphic to two copies of the ordinary locus of $M(\ell)_{\bF_p}$, and thus smooth. It follows that $M_0(p; \ell)_{\bF_p}$ is reduced, as it is one-dimensional, Cohen--Macaulay, and generically reduced.

Let $M(\ell)=\Spec(C)$. Let $x \in M(\ell)$ be a supersingular point and $y=i(x)=j(x)$. Consider the map $a \colon A_y \to C_x \times C_x$ on complete local rings given by $(i^*, j^*)$. This map is injective since $A_y$ is reduced and the map hits each component. Let $t \in C_x$ be a uniformizer. Let $u=f^*(t)-g^*(t^p)$ and $v=g^*(t)-f^*(t^p)$. Then $a(u)=(t-t^{p^2}, 0)$ while $a(v)=(0, t-t^{p^2})$. It follows that any element of $\fm_{C,x} \times \fm_{C,x}$ can be expressed as a power series in $a(u)$ and $a(v)$. Thus if $z \in \fm_{A,y}$ then $a(z)=F(a(u),a(v))$ for some $F$, and so $a(z-F(u,v))=0$, and so $z=F(u,v)$. It follows that the map $k \lbb u, v \rbb \to A_y$ is surjective, where $k$ is the residue field. Since $a(uv)=0$, we have $uv=0$, and so we get a surjection $k \lbb u,v \rbb/(uv) \to A_y$. This map must be injective, for otherwise we'd lose a component.
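As an aside, the numerology quoted for the covering group at the start of this section is easy to spot-check: for sample primes, the order $(\ell^2-1)(\ell^2-\ell)$ of $\GL_2(\bF_\ell)$ equals $\ell(\ell-1)^2(\ell+1)$ and is prime to $p$ under the stated conditions on $\ell$. A small Python sketch (the sample primes are chosen arbitrarily):

```python
from math import gcd

def gl2_order(l):
    # number of ordered bases of F_l^2: first column nonzero,
    # second column not a multiple of the first
    return (l**2 - 1) * (l**2 - l)

for p in (5, 7, 11):
    for l in (3, 5, 7, 11, 13):
        if l == p or l % p in (1, p - 1):
            continue  # l violates the conditions imposed in the text
        order = gl2_order(l)
        assert order == l * (l - 1)**2 * (l + 1)  # the claimed factorization
        assert gcd(order, p) == 1                  # order is prime to p
```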
Proposition. The scheme $M_0(p; \ell)$ is smooth over $\bZ[1/\ell]$ away from the supersingular points in characteristic $p$. The strict complete local ring at such a point is of the form $W \lbb u,v \rbb/(uv-p)$, where $W=\bZ_p^{\un}$.

Proof. Let $R$ be the strict complete local ring at a supersingular point. Then $R$ is regular and flat over $W$ of dimension 2. Furthermore, $R/p$ is of the form $k \lbb u,v \rbb/(uv)$. It follows that $R$ is a quotient of $W \lbb u, v \rbb$. Now, $uv \in pR$, and so we have a relation of the form $uv=pw$ for some $w \in R$. Let $R'=W \lbb u,v \rbb/(uv-pw)$. One easily sees that $R'$ is flat. Furthermore, the surjection $R' \to R$ induces an isomorphism mod $p$. It follows that this map must be an isomorphism: killing any non-zero element of $R'$ would either introduce $p$-torsion or kill something in $R'/p$. We thus find $R=W \lbb u, v \rbb/(uv-pw)$.

Now, consider the cotangent space of $R$, the quotient of $\fm=(u,v,p)$ by its square. This is the $k$-vector space spanned by $u$, $v$, and $p$ modulo $pw$. If $w \in \fm$ then $pw \in \fm^2$, and so $\fm/\fm^2$ would have dimension 3. This contradicts the regularity of $R$. Thus $w$ is a unit, and so, replacing $u$ with $u/w$, we may assume $w=1$.

### The structure of M_0(p)

Recall that $M_0(p)=M_0(p; \ell)/G$, and that, if $M_0(p; \ell)=\Spec(A)$ then $M_0(p)=\Spec(A^G)$. Since the order of $G$ is prime to $p$, formation of $G$-invariants commutes with reduction mod $p$. In particular, formation of the coarse space of $\fM_0(p)$ commutes with reduction mod $p$. The element $-1 \in G$ acts trivially on $M_0(p; \ell)$. Let $\ol{G}=G/\{\pm 1\}$. Then $M_0(p)=M_0(p; \ell)/\ol{G}$. Let $x$ be a point of $M_0(p)$ in characteristic $p$ with automorphism group $H$. The group $\ol{G}$ transitively permutes the points of $M_0(p; \ell)$ above $x$, and the stabilizer of any point is a subgroup of $\ol{G}$ isomorphic to $\ol{H}=H/\{\pm 1\}$.
It follows that the strict completion $R$ of the local ring at $x$ is isomorphic to the $\ol{H}$-invariants of the strict completion of the local ring $S$ at any point $y$ over $x$. We have $S=W \lbb u, v \rbb/(uv-p)$ by the previous section, where $W$ is the Witt ring of $\ol{\bF}_p$. Now, since $p \gt 3$, we have the following:

- If $j(x) \ne 0, 1728$ then $\ol{H}$ is trivial.
- If $j(x) = 1728$ then $\ol{H}=\bZ/2\bZ$.
- If $j(x) = 0$ then $\ol{H}=\bZ/3\bZ$.

Thus if $j(x) \ne 0, 1728$ then $R=S$ and $x$ is a regular point. Note that if $j(x)$ is 0 or 1728 then, since $\ol{H}$ does not fix a point in a neighborhood of $x$, it acts non-trivially on $S$. It follows that, for an appropriate choice of $u$, $v$, the generator of $\ol{H}$ acts by $u \mapsto \zeta u$, $v \mapsto \zeta^{-1} v$, where $\zeta$ is a primitive $k$th root of unity and $k=\# \ol{H}$. It follows that $R=S^{\ol{H}}$ is generated by $U=u^k$, $V=v^k$, and $uv=p$. We have $UV=(uv)^k=p^k$, and so $R=W \lbb U, V \rbb/(UV-p^k)$. We have thus shown the following:

Theorem. Let $x$ be a characteristic $p$ point of $M_0(p)$ and let $R$ be the strict complete local ring at $x$.

- If $x$ is not supersingular, then $M_0(p)$ is smooth at $x$.
- If $x$ is supersingular and $j(x) \ne 0, 1728$, then $M_0(p)$ is regular at $x$ and $R=W \lbb u, v \rbb/(uv-p)$.
- If $x$ is supersingular and $j(x)=1728$, then $R=W \lbb u, v \rbb/(uv-p^2)$.
- If $x$ is supersingular and $j(x)=0$, then $R=W \lbb u, v \rbb/(uv-p^3)$.

Remark. It is also true that the cuspidal points of $\ol{M}_0(p)$ are smooth, since they are smooth in the special fiber.

### The minimal regular model

The scheme $\ol{M}_0(p)$ is a flat proper model of its generic fiber which is regular except at possibly two points. The singularities at these points can be resolved with one or two blow-ups.
The result is that an additional $\bP^1$ is added at $j=1728$ if that point is supersingular, and two additional $\bP^1$'s are added in a chain at $j=0$ if that point is supersingular. The resulting model is minimal. We thus have:

Proposition. Let $C$ be the special fiber of the minimal regular model of $\ol{M}_0(N)$. Then $C$ is a reduced curve, all of its components are $\bP^1$'s, and all of its singularities are simple nodes.

## The special fiber of the Néron model

Proposition. Let $C$ be a curve over an algebraically closed field $k$ with the following properties: $C$ is reduced, all of its components are $\bP^1$'s, and all of its singularities are simple nodes. Then $\Pic^0_{C/k}$ is a torus.

Proof. Let $\Gamma$ be the graph corresponding to $C$: its vertices are the irreducible components of $C$, and there is one edge between two components at each point where they touch. Given a line bundle on $C$, we get a line bundle on each component, and an identification of the fibers at the touching points. Every line bundle on $\bP^1$ is of the form $\cO(n)$. Thus if we assign to each vertex of $\Gamma$ an integer and to each edge an element of $\bG_m$ then we can build a line bundle on $C$, and all line bundles are of this form. We have thus produced a surjection from a torus to $\Pic^0_{C/k}$, which proves the proposition.

Actually, we can say a bit more. Suppose we have data as above defining some line bundle. For the bundle to be trivial, it must be trivial on each component, and so the integer at each vertex must be 0. The non-vanishing sections of the bundle on one of the components are given by $\bG_m$. Given sections on each component (i.e., elements of $\bG_m$ at each vertex), they glue if and only if at each edge the quotient of their values is equal to the value of the edge. In other words, the original data defines the trivial bundle if and only if the integers are 0 and the values on the edges are a 1-coboundary.
We thus see that the identity component of $\Pic_{C/k}$ is $\rH^1(\Gamma, \bG_m)$. This is a torus with character lattice $\rH_1(\Gamma, \bZ)$.

Theorem. $J_0(N)$ has completely toric reduction at $N$.

Proof. This follows from the above computation, Raynaud's theorem, and the form established for the minimal regular model of $X_0(N)$ at $N$.

## Injectivity of the reduction map on torsion

To end this lecture, I want to give a proof of the following theorem. This result was crucial to yesterday's lecture, and it was pointed out to me that we had not yet given a proof in all cases.

Theorem. Let $K/\bQ_p$ be a finite extension with ramification index $\lt p-1$. Let $\cO$ be the ring of integers of $K$ and $k$ the residue field of $\cO$. Let $A/K$ be an abelian variety, and let $\cA/\cO$ be its Néron model. Then the reduction map $\cA(\cO)_{\tors} \to \cA(k)_{\tors}$ is injective.

Proof. Let $G_0=A(K)_{\tors}$, regarded as a closed subscheme of $A$. Note that as a group scheme, $G_0$ is constant. Let $G$ be the scheme-theoretic closure of $G_0$ in $\cA$. Then $G$ is a flat group scheme over $\cO$. And it is finite: every field-valued point of $G$ is defined over $K$, and every $K$-point of $G$ extends to an $\cO$-point of $G$ by the Néron mapping property. Thus $G$ is proper, and therefore finite (since we know it to be quasi-finite). Since $G_0$ obviously extends to a constant group scheme over $\cO$, Raynaud's theorem on finite flat group schemes implies that $G$ itself is a constant group scheme. The theorem follows, since the reduction map $G(\cO) \to G(k)$ is clearly injective for constant groups.
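As a concrete footnote to the dual-graph computation of $\Pic^0_{C/k}$ above: the dimension of the torus is the first Betti number of $\Gamma$, namely $\#\{\text{edges}\} - \#\{\text{vertices}\} + \#\{\text{components}\}$. A small Python sketch of this count (the graphs below are toy examples, not the actual dual graph of $X_0(N)$):

```python
def torus_dimension(num_vertices, edges):
    """First Betti number b_1 = #edges - #vertices + #components of a multigraph."""
    parent = list(range(num_vertices))

    def find(x):
        # union-find with path halving, used to count connected components
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)
    components = len({find(i) for i in range(num_vertices)})
    return len(edges) - num_vertices + components

# a tree has b_1 = 0, so Pic^0 of the corresponding curve is trivial
assert torus_dimension(3, [(0, 1), (1, 2)]) == 0
# two P^1's glued at 3 points: b_1 = 3 - 2 + 1 = 2, a 2-dimensional torus
assert torus_dimension(2, [(0, 1), (0, 1), (0, 1)]) == 2
```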
https://congresso.sif.it/talk/539
# Ultrahigh energy cosmic rays

Salamida F.

Invited talk, Session III - Astrophysics. GSSI Ex ISEF Hall - Library - Friday 27, 15:30 - 19:00

In this contribution we report on the most recent progress in the understanding of the data on ultrahigh energy cosmic rays ($E \geq 10^{18}$ eV). After a general survey of the different experiments working in this field, a description of the energy spectrum, mass composition, distribution of arrival directions and search for neutral particles will be given. The last part of the contribution will be devoted to the future projects aiming to tackle the issues still remaining unsolved.
https://www.physicsforums.com/threads/limit-comparison-comparison-test-on-non-rational-functions.416138/
# Homework Help: Limit Comparison/Comparison Test on Non-rational Functions

1. Jul 14, 2010

### chrischoi614

1. The problem statement, all variables and given/known data

Either the Comparison Test or the Limit Comparison Test can be used to determine whether the following series converges or diverges.

[i] Which test would you use (CT or LCT)?
[ii] Which series would you use in the comparison?
[iii] Does the series converge or not?

The series of (root(n^4 + 1) - n^2), n going from 1 to infinity.

2. Relevant equations

The series of 1/n^2? I am not too sure.

3. The attempt at a solution

What I did was pull the n^2 out of the root, so it becomes (n^2)(root(1 + 1/n^4)), and I think I have to compare this with 1/n^2. I know that series converges, but I don't know how to explain it correctly: whether 1/n^2 really is the right series to compare to, or whether I should be using the limit comparison test instead. I am quite lost at the moment. I have tried everything, but since all I can use is CT and LCT, I really don't know how to solve it. I know that root(n^4 + 1) is really close to n^2; it's that (+1) that makes this series happen... Please and thanks :)

2. Jul 14, 2010

### Dick

I would start by multiplying your expression by (root(n^4 + 1) + n^2)/(root(n^4 + 1) + n^2) and doing some algebra in the numerator. Then see what you think.

3. Jul 14, 2010

### chrischoi614

I actually did that before, but I ended up with 2n^4 - (2n^2)(root(n^4 + 1)) + 1 in the numerator. The fact that the square root is there is really making me struggle, because I don't know how to simplify it.

4. Jul 14, 2010

### Dick

Then show us the algebra you did to get that. It's not right.

5. Jul 14, 2010

### chrischoi614

No... I didn't get it wrong :S... I just put the terms together...

6. Jul 14, 2010

### Dick

I'm glad you are so confident, but (root(n^4 + 1) - n^2)*(root(n^4 + 1) + n^2) doesn't have a square root in it if you expand it. Show us how you got 2n^4 - (2n^2)(root(n^4 + 1)) + 1, or we can't help you. Did you not change the sign on the n^2? That's the whole 'conjugate' thing that makes the square root cancel.

Last edited: Jul 14, 2010
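For the record, the conjugate trick gives root(n^4 + 1) - n^2 = 1/(root(n^4 + 1) + n^2), which behaves like 1/(2n^2) for large n, so the limit comparison test against the convergent series 1/n^2 applies. A quick numerical check of that LCT ratio:

```python
import math

def term(n):
    # conjugate form of sqrt(n^4 + 1) - n^2; numerically stable (no cancellation)
    return 1.0 / (math.sqrt(n**4 + 1) + n**2)

# the LCT ratio term(n) / (1/n^2) should tend to 1/2, a finite nonzero limit,
# so the series converges together with sum 1/n^2
for n in (10, 100, 1000):
    ratio = term(n) * n**2
    assert abs(ratio - 0.5) < 1e-3
```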
http://www.sciforums.com/threads/a-theory-of-nothing-%E2%80%93-the-proto-universe.113703/
# A Theory of Nothing – The Proto-Universe

Discussion in 'Pseudoscience Archive' started by conscienta, May 18, 2012.

1. ### conscienta (Registered Member, Messages: 21)

Can the universe be created from nothing? It starts with initial conditions. In Einstein's spacetime, the initial conditions of the universe are described by clumping together positive energy quantum states (particles) and negative gravitational energy into a single point of infinite density but perhaps zero energy. The content of the observable universe may have zero total energy because the gravitational potential energy between massive particles is negative. This is true even in the past, but the argument requires a finite universe and a physics-breaking singularity at the beginning of time. But what if these conditions were reversed?

Let's suppose that before the universe as we know it began, space was infinite and its properties were entirely uniform. In the absence of change, time could not be said to exist. Let's refer to this as the proto-universe. Gravity, electromagnetism, and the nuclear forces would not have existed as we know them today. Rather, the physics of the universe must have been governed by a single, unified interaction whose laws are still unknown but would be characterized as repulsive.

If there is only one fundamental force in the proto-universe, there would probably be only one fundamental particle. The universe began as an infinite sea of these neutral particles. Rather than occupying positive energy quantum states as does normal matter, the proto-particles of the proto-universe occupy all available negative energy states. In the initial conditions, for each negative energy state, there is an equivalent positive energy state. However, only the negative energy states are occupied. The proto-universe is actually empty in the sense that none of the positive energy states are occupied.
In the negative energy sea of the proto-universe, the singularity is replaced by an infinite number of negative energy states (but not an infinite density) with positive potential energy. If the potential energy were gravitational in origin, this would not be possible. Since I am postulating a unified fundamental force, however, nothing prohibits me from both reversing the sign of the potential energy and doing away with the singularity altogether. For particles in the negative energy sea, therefore, the potential energy of the universe may well reverse its direction. The negative energy density created by these particles is offset by the positive potential energy of their mutual interaction. This would allow the uniform, finite negative energy density of the proto-universe to be offset by a positive contribution. As far as we are concerned, the important points are that the initial state of the universe is as simple as possible, and that it has zero total energy. And finally, after a disturbance in this zero-energy vacuum, the negative energy particles begin to decay. Some of the byproducts of this decay are forced into positive energy states, filling the universe with observable matter. This disturbance sets off a chain reaction throughout the vacuum. Rather than a cataclysmic explosion of space and matter, this results in a kind of implosion of the negative energy vacuum that spreads outward and fills regions of spacetime with ordinary matter. This highly energetic transformation produced a hot dense plasma that in the standard theory is the ultimate source of the cosmic background radiation. In this process of cosmic deflation, we might imagine two connected balloons – one with an infinite volume, representing the decaying negative energy states, and the other representing the finite, inflating space of positive energy states. 
This deflation of the negative energy balloon represents not only a change of volume, but the expenditure of its positive potential energy. In this process the decaying volume is shrinking. Inside the deflating balloon things are contracting and outside the balloon things are expanding. The decay process comes to a halt after several phase transitions. After the last phase transition, equilibrium is reached between the collapsing region of decaying negative energy states and the global expansion of the positive energy universe. Once a state of equilibrium is reached, the laws and constants of nature take on their current values and the universe continues expanding more normally. This would not be creation ex nihilo but would be the next best thing – creation from net nothing.
https://infoscience.epfl.ch/record/214387
## Improving simulation predictions of wind around buildings using measurements through system identification techniques

Wind behavior in urban areas is receiving increasing interest from city planners and architects. Computational fluid dynamics (CFD) simulations are often employed to assess wind behavior around buildings. However, the accuracy of CFD simulations is often unknown. Measurements can be used to help understand wind behavior around buildings more accurately. In this paper, a model-based data interpretation framework is presented to integrate information obtained from measurements with simulation results. Multiple model instances are generated from a model class through assigning values to parameters that are not known precisely, including those for inlet wind conditions. The information provided by measurements is used to falsify model instances whose predictions do not match measurements and to estimate the parameter values of the simulation. The information content of measurement data depends on levels of measurement and modeling uncertainties at sensor locations. Modeling uncertainties are those associated with the model class such as effects associated with turbulent fluctuations or thermal processes. The model-based data interpretation framework is applied to the study of the wind behavior around the buildings of the Treelodge@Punggol estate, located in Singapore. The framework incorporates modeling and measurement uncertainties and provides probability-based predictions at unmeasured locations. This paper illustrates the possibility to improve approximations of modeling uncertainties through avoiding falsification of the entire set of model instances. It is concluded that the framework has the potential to infer time-dependent sets of parameter values and to predict time-dependent responses at unmeasured locations.

Published in: Building and Environment, 94, 2, 620–631
Year: 2015
Publisher: Elsevier
ISSN: 0360-1323
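The falsification step described in the abstract can be sketched generically: a candidate model instance is kept only if its prediction at every sensor lies within a combined measurement-plus-modeling uncertainty bound of the observed value. The code below is a schematic toy illustration of that idea, not the authors' implementation; the forward model, the data, and the threshold are all invented for the example.

```python
# Toy sketch of model falsification: each "model instance" is a parameter value
# theta fed to a simple stand-in forward model; instances whose predictions fall
# outside the uncertainty bound at any sensor are falsified.

def forward_model(theta, sensor):
    return theta * sensor          # stand-in for an expensive CFD prediction

measurements = {1: 2.1, 2: 3.9, 3: 6.2}   # sensor id -> observed value (invented)
threshold = 0.6                            # combined uncertainty bound (invented)

candidates = [theta / 10 for theta in range(10, 31)]   # 1.0, 1.1, ..., 3.0
surviving = [
    theta for theta in candidates
    if all(abs(forward_model(theta, s) - m) <= threshold
           for s, m in measurements.items())
]

print(surviving)  # [1.9, 2.0, 2.1, 2.2]
```

The surviving set, rather than a single best-fit value, is what carries forward to prediction; that is the sense in which the framework yields probability-based predictions at unmeasured locations.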
https://web2.0calc.com/questions/asymptotes_2
# A function $f$ has a horizontal asymptote of $y = -4,$ a vertical asymptote of $x = 3,$ and an $x$-intercept at $(1,0).$

A function $f$ has a horizontal asymptote of $y = -4,$ a vertical asymptote of $x = 3,$ and an $x$-intercept at $(1,0).$

Part (a): Let $f$ be of the form $$f(x) = \frac{ax+b}{x+c}.$$ Find an expression for $f(x)$.

Part (b): Let $f$ be of the form $$f(x) = \frac{rx+s}{2x+t}.$$ Find an expression for $f(x)$.

Sep 3, 2018

#1

$$f(x) = \dfrac{a x + b}{x + c} \\ \\ \text{there is a vertical asymptote at }x=3 \text{ so clearly } c=-3 \\ \\ \lim \limits_{x \to \pm \infty} f(x) = a \text{ so }a=-4 \\ \\ f(x) = \dfrac{-4x + b}{x-3} \\ \\ f(1) = \dfrac{-4+b}{1-3} = 0 \\ \\ \text{so clearly } b=4 \text{ and thus }f(x) = \dfrac{-4x+4}{x-3}$$

See if you can work out part (b) using this as a template.

Sep 4, 2018
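The same template carries over to part (b): the vertical asymptote fixes $t$, the horizontal asymptote fixes $r$, and the $x$-intercept fixes $s$. A quick symbolic check with sympy (an addition, not part of the original thread) confirms that both parts describe the same function:

```python
import sympy as sp

x = sp.symbols('x')

# Part (a) answer from the thread.
f_a = (-4*x + 4) / (x - 3)

# Part (b): f(x) = (r x + s)/(2 x + t) with the same three conditions.
r, s, t = sp.symbols('r s t')
f_b = (r*x + s) / (2*x + t)
sol = sp.solve(
    [
        sp.Eq(2*3 + t, 0),   # vertical asymptote at x = 3: denominator zero there
        sp.Eq(r / 2, -4),    # horizontal asymptote y = -4: ratio of leading coefficients
        sp.Eq(r*1 + s, 0),   # x-intercept at (1, 0): numerator zero at x = 1
    ],
    [r, s, t], dict=True)[0]
f_b = f_b.subs(sol)          # (-8x + 8)/(2x - 6)

# Both forms reduce to the same rational function.
same = sp.simplify(f_a - f_b) == 0
```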
https://elo.mastermath.nl/course/info.php?id=255
### Course website with weekly schedule:

http://www.staff.science.uu.nl/~zilte001/mastermath_symplectic_geometry_2018_2019

### What is symplectic geometry?

A symplectic structure is a closed and nondegenerate 2-form. Such a form is similar to a Riemannian metric. However, while a Riemannian metric measures distances and angles, a symplectic structure measures areas. The closedness condition is an analogue of the notion of flatness for a metric.

Symplectic geometry has its roots in the Hamiltonian formulation of classical mechanics. The canonical symplectic form on phase space occurs in Hamilton's equation. Symplectic geometry studies local and global properties of symplectic forms and Hamiltonian systems. A famous conjecture by Arnol'd, for instance, gives a lower bound on the number of periodic orbits of a Hamiltonian system.

Many problems in symplectic geometry are either flexible or rigid. In the flexible case methods from differential topology, such as Gromov's h-principle, can be applied to construct objects. In the rigid case partial differential equations can be used to define symplectic invariants. As an example, holomorphic curves (solutions of the Cauchy-Riemann equations) are used to define the so-called Gromov-Witten invariants.

Apart from classical mechanics, symplectic structures appear in many other settings, for example in:

* Algebraic geometry: Every smooth algebraic subvariety of the complex projective space carries a canonical symplectic form.
* Gauge theory: The moduli space of Yang-Mills instantons over a product of two real surfaces carries a canonical symplectic form.
* Differential topology: Certain invariants of smooth real 4-manifolds (the Seiberg-Witten invariants) are closely related to certain symplectic invariants (the Gromov-Witten invariants).

### Contents of this course

Some highlights of this course will be the following:

* A normal form theorem for a submanifold of a symplectic manifold. A special case of this is Darboux's theorem, which states that locally, all symplectic manifolds look the same.
* Symplectic reduction for a Hamiltonian Lie group action. This corresponds to the reduction of the degrees of freedom of a mechanical system. It gives rise to many examples of symplectic manifolds.
* A construction of symplectic forms on open manifolds, which is based on Gromov's h-principle.

Here is a more complete list of topics that we will cover:

* linear symplectic geometry
* canonical symplectic form on a cotangent bundle
* symplectic manifolds, symplectomorphisms, Hamiltonian diffeomorphisms, Poisson bracket
* Moser's isotopy method
* symplectic, (co-)isotropic and Lagrangian submanifolds of a symplectic manifold
* normal form theorem for a submanifold of a symplectic manifold
* Darboux's theorem
* Weinstein's neighbourhood theorem for a Lagrangian submanifold
* Hamiltonian Lie group actions, momentum maps
* symplectic reduction, Marsden-Weinstein quotient
* Gromov's h-principle and the construction of symplectic forms on open manifolds

We will also explain connections to classical mechanics, such as Noether's theorem and the reduction of degrees of freedom. Furthermore, we will develop the basics of contact geometry, which is a field that is closely related to symplectic geometry.

If time permits, we will also cover one or more of the following topics:

* Delzant's classification of toric symplectic manifolds
* Atiyah-Guillemin-Sternberg convexity theorem for the image of the momentum map

The last lecture will be reserved for a panorama of recent results in the field of symplectic geometry, for instance the existence of symplectic capacities and the Arnol'd conjecture.

### Prerequisites

The notions taught in a first course on differential geometry, such as: manifold, smooth map, immersion, submersion, tangent vector, Lie derivative along a vector field, the flow of a vector field, tangent bundle, differential form, de Rham cohomology. Basic understanding of Lie groups and Lie algebras will also be useful, but not strictly necessary.

A suitable reference for differential geometry is: J. Lee, Introduction to Smooth Manifolds, second edition, Graduate Texts in Mathematics, Springer, 2012. The relevant chapters from this book are: 1–5, 7–12, 14–17, 19, 21. Some of the material covered in these chapters, in particular the material involving Lie groups, will be recalled in our lecture course. Some knowledge of classical mechanics can be useful in understanding the context and some examples.

### Lecturers

F. Ziltener (UU)
A. del Pino (UU)
https://en.wikipedia.org/wiki/Prime_ideal
# Prime ideal

A Hasse diagram of a portion of the lattice of ideals of the integers ${\displaystyle \mathbb {Z} .}$ The purple nodes indicate prime ideals. The purple and green nodes are semiprime ideals, and the purple and blue nodes are primary ideals.

In algebra, a prime ideal is a subset of a ring that shares many important properties of a prime number in the ring of integers.[1][2] The prime ideals for the integers are the sets that contain all the multiples of a given prime number, together with the zero ideal. Primitive ideals are prime, and prime ideals are both primary and semiprime.

## Prime ideals for commutative rings

An ideal P of a commutative ring R is prime if it has the following two properties:

• If a and b are two elements of R such that their product ab is an element of P, then a is in P or b is in P,
• P is not the whole ring R.

This generalizes the following property of prime numbers: if p is a prime number and if p divides a product ab of two integers, then p divides a or p divides b. We can therefore say:

A positive integer n is a prime number if and only if ${\displaystyle n\mathbb {Z} }$ is a prime ideal in ${\displaystyle \mathbb {Z} .}$

### Examples

• A simple example: In the ring ${\displaystyle R=\mathbb {Z} ,}$ the subset of even numbers is a prime ideal.
• Given a unique factorization domain (UFD) ${\displaystyle R}$, any irreducible element ${\displaystyle r\in R}$ generates a prime ideal ${\displaystyle (r)}$. Eisenstein's criterion for integral domains (hence UFDs) is an effective tool for determining whether or not an element in a polynomial ring is irreducible. For example, take an irreducible polynomial ${\displaystyle f(x_{1},\ldots ,x_{n})}$ in a polynomial ring ${\displaystyle \mathbb {F} [x_{1},\ldots ,x_{n}]}$ over some field ${\displaystyle \mathbb {F} }$.
• If R denotes the ring ${\displaystyle \mathbb {C} [X,Y]}$ of polynomials in two variables with complex coefficients, then the ideal generated by the polynomial ${\displaystyle Y^{2}-X^{3}-X-1}$ is a prime ideal (see elliptic curve).
• In the ring ${\displaystyle \mathbb {Z} [X]}$ of all polynomials with integer coefficients, the ideal generated by 2 and X is a prime ideal. It consists of all those polynomials whose constant coefficient is even.
• In any ring R, a maximal ideal is an ideal M that is maximal in the set of all proper ideals of R, i.e. M is contained in exactly two ideals of R, namely M itself and the entire ring R. Every maximal ideal is in fact prime. In a principal ideal domain every nonzero prime ideal is maximal, but this is not true in general. For the UFD ${\displaystyle \mathbb {C} [x_{1},\ldots ,x_{n}]}$, Hilbert's Nullstellensatz states that every maximal ideal is of the form ${\displaystyle (x_{1}-\alpha _{1},\ldots ,x_{n}-\alpha _{n})}$.
• If M is a smooth manifold, R is the ring of smooth real functions on M, and x is a point in M, then the set of all smooth functions f with f(x) = 0 forms a prime ideal (even a maximal ideal) in R.

### Non-Examples

• Consider the composition of the following two quotients ${\displaystyle \mathbb {C} [x,y]\to {\frac {\mathbb {C} [x,y]}{(x^{2}+y^{2}-1)}}\to {\frac {\mathbb {C} [x,y]}{(x^{2}+y^{2}-1,x)}}}$ Although the first two rings are integral domains (in fact the first is a UFD), the last is not an integral domain, since it is isomorphic to ${\displaystyle {\frac {\mathbb {C} [x,y]}{(x^{2}+y^{2}-1,x)}}\cong {\frac {\mathbb {C} [y]}{(y^{2}-1)}}\cong \mathbb {C} \times \mathbb {C} }$ showing that the ideal ${\displaystyle (x^{2}+y^{2}-1,x)\subset \mathbb {C} [x,y]}$ is not prime. (See the first property listed below.)
• Another non-example is the ideal ${\displaystyle (2,x^{2}+5)\subset \mathbb {Z} [x]}$, since we have ${\displaystyle x^{2}+5-2\cdot 3=(x-1)(x+1)\in (2,x^{2}+5)}$ but neither ${\displaystyle x-1}$ nor ${\displaystyle x+1}$ is an element of the ideal.

### Properties

• An ideal I in the ring R (with unity) is prime if and only if the factor ring R/I is an integral domain. In particular, a commutative ring is an integral domain if and only if (0) is a prime ideal.
• An ideal I is prime if and only if its set-theoretic complement is multiplicatively closed.[3]
• Every nonzero ring contains at least one prime ideal (in fact it contains at least one maximal ideal), which is a direct consequence of Krull's theorem.
• More generally, if S is any multiplicatively closed set in R, then a lemma essentially due to Krull shows that there exists an ideal of R maximal with respect to being disjoint from S, and moreover the ideal must be prime. This can be further generalized to noncommutative rings (see below).[4]
• The set of all prime ideals (the spectrum of a ring) contains minimal elements (called minimal primes). Geometrically, these correspond to irreducible components of the spectrum.
• The preimage of a prime ideal under a ring homomorphism is a prime ideal.
• The sum of two prime ideals is not necessarily prime. For an example, consider the ring ${\displaystyle \mathbb {C} [x,y]}$ with prime ideals P = (x² + y² − 1) and Q = (x) (the ideals generated by x² + y² − 1 and x respectively). Their sum P + Q = (x² + y² − 1, x) = (y² − 1, x), however, is not prime: y² − 1 = (y − 1)(y + 1) ∈ P + Q but its two factors are not. Alternatively, the quotient ring has zero divisors, so it is not an integral domain and thus P + Q cannot be prime.
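As an aside (an illustrative addition, not part of the article): the non-primality of P + Q = (x² + y² − 1, x) above can be checked concretely. Over ℂ the zero set of this ideal is the two points (0, 1) and (0, −1), and since the ideal is radical, a polynomial lies in it exactly when it vanishes at both points.

```python
# Zero set of the ideal (x^2 + y^2 - 1, x) in C^2: x = 0 and y^2 = 1.
points = [(0, 1), (0, -1)]

def in_ideal(p):
    """Membership test, valid here because the ideal is radical: a polynomial
    lies in the ideal iff it vanishes on the whole zero set (Nullstellensatz)."""
    return all(p(x, y) == 0 for x, y in points)

product = lambda x, y: y**2 - 1   # (y - 1)(y + 1), lies in the ideal
factor1 = lambda x, y: y - 1      # vanishes at (0, 1) only
factor2 = lambda x, y: y + 1      # vanishes at (0, -1) only

# The product lies in the ideal but neither factor does, so the ideal is not prime.
not_prime = in_ideal(product) and not in_ideal(factor1) and not in_ideal(factor2)
```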
• Being a prime ideal is not equivalent to being unfactorable into a product of two ideals: e.g. ${\displaystyle (x,y^{2})\subset \mathbb {R} [x,y]}$ cannot be factored but is not prime.
• In a commutative ring R with at least two elements, if every proper ideal is prime, then the ring is a field. (If the ideal (0) is prime, then the ring R is an integral domain. If q is any non-zero element of R and the ideal (q²) is prime, then it contains q and then q is invertible.)
• A nonzero principal ideal is prime if and only if it is generated by a prime element. In a UFD, every nonzero prime ideal contains a prime element.

### Uses

One use of prime ideals occurs in algebraic geometry, where varieties are defined as the zero sets of ideals in polynomial rings. It turns out that the irreducible varieties correspond to prime ideals. In the modern abstract approach, one starts with an arbitrary commutative ring and turns the set of its prime ideals, also called its spectrum, into a topological space and can thus define generalizations of varieties called schemes, which find applications not only in geometry, but also in number theory.

The introduction of prime ideals in algebraic number theory was a major step forward: it was realized that the important property of unique factorisation expressed in the fundamental theorem of arithmetic does not hold in every ring of algebraic integers, but a substitute was found when Richard Dedekind replaced elements by ideals and prime elements by prime ideals; see Dedekind domain.

## Prime ideals for noncommutative rings

The notion of a prime ideal can be generalized to noncommutative rings by using the commutative definition "ideal-wise".
Wolfgang Krull advanced this idea in 1928.[5] The following content can be found in texts such as Goodearl's[6] and Lam's.[7] If R is a (possibly noncommutative) ring and P is an ideal in R other than R itself, we say that P is prime if for any two ideals A and B of R:

• If the product of ideals AB is contained in P, then at least one of A and B is contained in P.

It can be shown that this definition is equivalent to the commutative one in commutative rings. It is readily verified that if an ideal of a noncommutative ring R satisfies the commutative definition of prime, then it also satisfies the noncommutative version. An ideal P satisfying the commutative definition of prime is sometimes called a completely prime ideal to distinguish it from other merely prime ideals in the ring. Completely prime ideals are prime ideals, but the converse is not true. For example, the zero ideal in the ring of n × n matrices over a field is a prime ideal, but it is not completely prime.

This is close to the historical point of view of ideals as ideal numbers, as for the ring ${\displaystyle \mathbb {Z} }$ "A is contained in P" is another way of saying "P divides A", and the unit ideal R represents unity.

Equivalent formulations of the ideal P ≠ R being prime include the following properties:

• For all a and b in R, (a)(b) ⊆ P implies a ∈ P or b ∈ P.
• For any two right ideals of R, AB ⊆ P implies A ⊆ P or B ⊆ P.
• For any two left ideals of R, AB ⊆ P implies A ⊆ P or B ⊆ P.
• For any elements a and b of R, if aRb ⊆ P, then a ∈ P or b ∈ P.

Prime ideals in commutative rings are characterized by having multiplicatively closed complements in R, and with slight modification, a similar characterization can be formulated for prime ideals in noncommutative rings. A nonempty subset S ⊆ R is called an m-system if for any a and b in S, there exists r in R such that arb is in S.[8] The following item can then be added to the list of equivalent conditions above:

• The complement R∖P is an m-system.
### Examples

• Any primitive ideal is prime.
• As with commutative rings, maximal ideals are prime, and also prime ideals contain minimal prime ideals.
• A ring is a prime ring if and only if the zero ideal is a prime ideal, and moreover a ring is a domain if and only if the zero ideal is a completely prime ideal.
• Another fact from commutative theory echoed in noncommutative theory is that if A is a nonzero R-module, and P is a maximal element in the poset of annihilator ideals of submodules of A, then P is prime.

## Important facts

• Prime avoidance lemma. If R is a commutative ring, and A is a subring (possibly without unity), and I1, ..., In is a collection of ideals of R with at most two members not prime, then if A is not contained in any Ij, it is also not contained in the union of I1, ..., In.[9] In particular, A could be an ideal of R.
• If S is any m-system in R, then a lemma essentially due to Krull shows that there exists an ideal I of R maximal with respect to being disjoint from S, and moreover the ideal I must be prime. (The primality of I can be proved as follows: if ${\displaystyle a,b\not \in I}$, then there exist elements ${\displaystyle s,t\in S}$ such that ${\displaystyle s\in I+(a),t\in I+(b)}$ by the maximality property of I. We can take ${\displaystyle r\in R}$ with ${\displaystyle srt\in S}$. Now, if ${\displaystyle (a)(b)\subset I}$, then ${\displaystyle srt\in (I+(a))r(I+(b))\subset I+(a)(b)\subset I}$, which is a contradiction.)[4] In the case S = {1}, we have Krull's theorem, and this recovers the maximal ideals of R. Another prototypical m-system is the set {x, x², x³, x⁴, ...} of all positive powers of a non-nilpotent element.
• For a prime ideal P, the complement R∖P has another property beyond being an m-system. If xy is in R∖P, then both x and y must be in R∖P, since P is an ideal. A set that contains the divisors of its elements is called saturated.
• For a commutative ring R, there is a kind of converse for the previous statement: If S is any nonempty saturated and multiplicatively closed subset of R, the complement R∖S is a union of prime ideals of R.[10]
• The intersection of members of a descending chain of prime ideals is a prime ideal, and in a commutative ring the union of members of an ascending chain of prime ideals is a prime ideal. With Zorn's Lemma, these observations imply that the poset of prime ideals of a commutative ring (partially ordered by inclusion) has maximal and minimal elements.

## Connection to maximality

Prime ideals can frequently be produced as maximal elements of certain collections of ideals. For example:

• An ideal maximal with respect to having empty intersection with a fixed m-system is prime.
• An ideal maximal among annihilators of submodules of a fixed R-module M is prime.
• In a commutative ring, an ideal maximal with respect to being non-principal is prime.[11]
• In a commutative ring, an ideal maximal with respect to being not countably generated is prime.[12]

## References

1. Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9.
2. Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 0-387-95385-X.
3. Reid, Miles (1996). Undergraduate Commutative Algebra. Cambridge University Press. ISBN 0-521-45889-7.
4. Lam, First Course in Noncommutative Rings, p. 156.
5. Krull, Wolfgang, Primidealketten in allgemeinen Ringbereichen, Sitzungsberichte Heidelberg. Akad. Wissenschaft (1928), 7. Abhandl., 3–14.
6. Goodearl, An Introduction to Noncommutative Noetherian Rings.
7. Lam, First Course in Noncommutative Rings.
8. Obviously, multiplicatively closed sets are m-systems.
9. Jacobson, Basic Algebra II, p. 390.
10. Kaplansky, Commutative Rings, p. 2.
11. Kaplansky, Commutative Rings, p. 10, Ex. 10.
12. Kaplansky, Commutative Rings, p. 10, Ex. 11.
https://www.physicsforums.com/threads/simple-absolute-value-problem-with-inequalities.85481/
# Simple absolute value problem with inequalities

1. Aug 17, 2005

### complexhuman

"Simple" absolute value problem with inequalities

OK... I'm totally stuck and could use some help :)

Given: for all e > 0 there is some d > 0 such that |x-a| < d implies |f(x) - f(a)| < e, where f(x) = sqrt(x). How do I find d in terms of e?

2. Aug 17, 2005

### SGT

|x - a| = |[f(x) - f(a)][f(x) + f(a)]| < e·|f(x) + f(a)|. I don't think you can get simpler than that.

3. Aug 17, 2005

### EnumaElish

I am going to assume you meant: for all e > 0 there is some d > 0 such that |$x-a$| < d implies |$\sqrt x - \sqrt a$| < e.

($\sqrt x - \sqrt a$)² < e²
$x + a - 2\sqrt{x a}$ < e²
$x - a + 2(a-\sqrt{x a})$ < e²
$x - a$ < e² − $2(a-\sqrt{x a})$

If $x - a$ > 0 then d = e² − $2(a-\sqrt{x a})$. (So d depends on x, and I guess that's okay.) I need to think about the case where $x - a$ < 0.

4. Aug 17, 2005

### arildno

It is simplest to note that:
$$|\sqrt{x}-\sqrt{a}|=\frac{|x-a|}{\sqrt{x}+\sqrt{a}}$$
and proceed from there.

5. Aug 17, 2005

### complexhuman

Well... I end up with something like $$|x-a|<d => |x-a|=e|\sqrt{x}+\sqrt{a}|$$... and that's where I am stuck :( Yeah... d has to be independent of x... it's one of those proving-limit type things. I am just allowed to assume a = 4 first.

6. Aug 17, 2005

### HallsofIvy

If you are attempting to prove that $\sqrt{x}$ is continuous for all positive values of x, then d does not have to be independent of x. That's only true for uniform continuity. If you have $|x-a|<d => |x-a|=e|\sqrt{x}+\sqrt{a}|$ and x is "sufficiently close to a", say, |x-a| < 1/2, so that a − 1/2 < x < a + 1/2, what can you say about $\sqrt{x}+ \sqrt{a}$?

7. Aug 17, 2005

### rsnd

Hehe... how did you get $$|x-a|<d => |x-a|=e|\sqrt{x}+\sqrt{a}|$$??? I think it should be $$|x-a|<d => |x-a|<e|\sqrt{x}+\sqrt{a}|$$
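Following arildno's identity: since $\sqrt{x}+\sqrt{a}\ge\sqrt{a}$ for $x\ge 0$, we get $|\sqrt{x}-\sqrt{a}| = |x-a|/(\sqrt{x}+\sqrt{a}) \le |x-a|/\sqrt{a}$, so the choice $d = e\sqrt{a}$ works (with $a=4$, $d=2e$). A quick numerical sanity check of that choice (my own addition, not from the thread):

```python
import math

a, e = 4.0, 1e-3
d = e * math.sqrt(a)   # = 2e; valid since sqrt(x) + sqrt(a) >= sqrt(a) for x >= 0

# Sample points with |x - a| < d and confirm |sqrt(x) - sqrt(a)| < e.
ok = all(
    abs(math.sqrt(x) - math.sqrt(a)) < e
    for x in [a - 0.999 * d, a - d / 2, a, a + d / 2, a + 0.999 * d]
)
```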
https://stats.stackexchange.com/questions/67533/sum-of-noncentral-chi-square-random-variables
# Sum of noncentral chi-square random variables

I need to find the distribution of the random variable $$Y=\sum_{i=1}^{n}(X_i)^2$$ where $X_i\sim{\cal{N}}(\mu_i,\sigma^2_i)$ and all $X_i$s are independent. I know that it is possible to first find the product of all moment generating functions for the $X_i$s, and then transform back to obtain $Y$'s distribution. However, I wonder whether there is a general form for $Y$ as in the Gaussian case: we know that the sum of independent Gaussians is still Gaussian, so we only need to know the summed mean and summed variance. How about when all $\sigma^2_i=\sigma^2$? Will this condition yield a general solution?

• Looking at the first paragraph under here, clearly the final condition yields a scaled noncentral chi-square (divide through by $\sigma^2$ (the scale factor you take out the front) and make $\sigma_i=1$ in $\sum_{i=1}^k (X_i/\sigma_i)^2$). The more general form you started with looks like a linear combination or scaled-weighted-average, with coefficients $\sigma^2_i$, rather than a plain sum of scaled squares ... and I believe that won't generally have the required distribution. Aug 16 '13 at 3:22
• Depending on what you need it for, in specific cases you may be able to do numerical convolution, or simulation. Aug 16 '13 at 22:25
• This is generalized by the 'weighted sum of log chi-squares to power' distribution. My R package sadists provides approximate 'dpqr' functions for $Y$; c.f. github.com/shabbychef/sadists Mar 21 '15 at 4:37
• @shabbychef thank you for your work! – runr Oct 19 '20 at 3:33

If not, there is a concept of a generalized chi-squared distribution, i.e. $x^T A x$ for $x \sim N(\mu, \Sigma)$ and $A$ fixed. In this case, you have the special case of diagonal $\Sigma$ ($\Sigma_{ii} = \sigma_i^2$), and $A = I$. You can also write it as a linear combination of independent noncentral chi-squared variables $Y = \sum_{i=1}^n \sigma_i^2 \left( \frac{X_i^2}{\sigma_i^2} \right)$, in which case:
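For the equal-variance case raised in the question, a quick Monte Carlo check (an added numpy sketch, not part of the original thread): with all $\sigma_i=\sigma$, $Y/\sigma^2$ is noncentral chi-square with $n$ degrees of freedom and noncentrality $\lambda=\sum_i \mu_i^2/\sigma^2$, so $E[Y]=\sigma^2(n+\lambda)$ and $\mathrm{Var}[Y]=\sigma^4(2n+4\lambda)$. The means and $\sigma$ below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([0.5, -1.0, 2.0])   # example means (chosen for illustration)
sigma = 1.5                        # common standard deviation
n, lam = mu.size, float(np.sum(mu**2)) / sigma**2

# Simulate Y = sum_i X_i^2 with independent X_i ~ N(mu_i, sigma^2).
samples = rng.normal(loc=mu, scale=sigma, size=(200_000, n))
Y = np.sum(samples**2, axis=1)

# Moments of the scaled noncentral chi-square sigma^2 * chi2'_n(lam).
mean_theory = sigma**2 * (n + lam)      # = n*sigma^2 + sum(mu_i^2)
var_theory = sigma**4 * (2*n + 4*lam)
```

The empirical mean and variance of `Y` should agree with the theoretical values to well under a percent at this sample size.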
https://infoscience.epfl.ch/record/178577
Journal article

# Search for CP violation in D⁺ → K⁻K⁺π⁺ decays

A model-independent search for direct CP violation in the Cabibbo-suppressed decay D⁺ → K⁻K⁺π⁺ in a sample of approximately 370,000 decays is carried out. The data were collected by the LHCb experiment in 2010 and correspond to an integrated luminosity of 35 pb⁻¹. The normalized Dalitz plot distributions for D⁺ and D⁻ are compared using four different binning schemes that are sensitive to different manifestations of CP violation. No evidence for CP asymmetry is found.
https://www.physicsforums.com/threads/flow-of-electrons-hit-a-potential-hole.732249/
# Homework Help: Flow of electrons hit a potential hole

1. Jan 11, 2014

### skrat

1. The problem statement, all variables and given/known data

A flow of 500 electrons per second with kinetic energy 3 eV hits, perpendicularly, a 5 eV potential hole 0.3 nm wide. How many electrons per second pass the obstacle?

2. Relevant equations

3. The attempt at a solution

Hmm, I checked my notes, where it is written that the transmission coefficient for electrons passing the obstacle is calculated as

$T=\left(1+\frac{1}{4}\left(\frac{k_1}{\kappa }+\frac{\kappa }{k_1}\right)^2\sinh^2(\kappa a)\right)^{-1}$

where I used the notation $k_1=\sqrt{2mE/\hbar^2}$ and $\kappa =\sqrt{2m(V-E)/\hbar^2}$, and $a$ is the width of the hole.

So $k_1=8.66\ \mathrm{nm}^{-1}$, $\kappa =7.07\ \mathrm{nm}^{-1}$ and $\sinh^2(\kappa a)\approx 16.9$, which gives me $T=0.0538$, and therefore about 27 electrons should pass the obstacle. BUT the result in the book states 408 electrons... Does anybody know what I am doing wrong here?

2. Jan 11, 2014

### TSny

Not sure what potential "hole" means. But if it means a potential "well" of depth 5 eV, then the kinetic energy will still be positive inside the well. So you will have an oscillatory wavefunction inside the well rather than exponential behavior. Instead of a "kappa" $\kappa$, you'll have a $k_2$ wavevector inside the well. What happens to the sinh function in this case?

3. Jan 12, 2014

### skrat

Potential well it is. In direct translation from my language it is a hole. :)

Now here is my question: how can the kinetic energy still be positive inside the well? Before the well it is 3 eV and the well has a depth of 5 eV, so

$k_2 =\sqrt{2m(E-V)/\hbar^2}=i\sqrt{2m(V-E)/\hbar^2}=i\kappa$

In case you are right, which you probably are, but I would like to understand why... the sinh then becomes a sin function.

4. Jan 12, 2014

### TSny

If you take the potential to be 0 outside the well, then inside the well it will be $-5\ \mathrm{eV}$. The kinetic energy is the difference between E and V: $E-V$. This gives a positive value of the KE inside the well. Right, the sinh function becomes a sin function.

5. Jan 12, 2014

### skrat

So $E-V=8\ \mathrm{eV}$. Then the wavevector $k_2$ will be complex only if $V$ is positive, or...?

6. Jan 12, 2014

### TSny

If $V$ is positive and greater than $E$, then the wavevector will be imaginary. So, if you had a potential barrier of height 5 eV with $E = 3$ eV, then the kinetic energy $E-V$ would be negative inside the barrier.

7. Jan 12, 2014

### skrat

How does this differ from my original post (problem)?

8. Jan 12, 2014

### TSny

The original post dealt with a potential well (I think), whereas my last comment was for a potential barrier.

9. Jan 12, 2014

### skrat

For a moment I thought they were the same.
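The two readings discussed above can be checked numerically. The following sketch is not from the thread; it assumes standard CODATA values for the electron mass, the reduced Planck constant and the electron-volt, and evaluates both the barrier formula with sinh (skrat's original attempt, wrong for a well) and the well formula with sin (TSny's reading, which matches the book's 408):

```python
# Transmission of 3 eV electrons through a 0.3 nm wide region,
# under the two readings of "5 eV potential hole":
#   barrier of height +5 eV -> evanescent (sinh) solution inside,
#   well of depth 5 eV (V = -5 eV) -> oscillatory (sin) solution inside.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
eV   = 1.602176634e-19   # electron volt, J

E, a, flux = 3.0 * eV, 0.3e-9, 500   # kinetic energy, width, electrons/s

k1 = math.sqrt(2 * m_e * E) / hbar   # wavevector outside the region

# Reading 1: potential barrier of height V = +5 eV, kappa is real.
kappa = math.sqrt(2 * m_e * (5.0 * eV - E)) / hbar
T_barrier = 1.0 / (1.0 + 0.25 * (k1 / kappa + kappa / k1) ** 2
                   * math.sinh(kappa * a) ** 2)

# Reading 2: potential well of depth 5 eV, so E - V = 8 eV inside
# and kappa -> i*k2 turns the sinh into a sin.
k2 = math.sqrt(2 * m_e * (E + 5.0 * eV)) / hbar
T_well = 1.0 / (1.0 + 0.25 * (k1 / k2 - k2 / k1) ** 2
                * math.sin(k2 * a) ** 2)

print(f"barrier: {flux * T_barrier:.0f} electrons/s")
print(f"well:    {flux * T_well:.0f} electrons/s")
```

With these constants the barrier reading lets through only a couple of dozen electrons per second, while the well reading gives roughly 407, matching the book's 408 up to rounding of the physical constants.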
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8117939233779907, "perplexity": 1034.93928866605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859817.15/warc/CC-MAIN-20180617213237-20180617233237-00216.warc.gz"}
https://hero88.co/d4gpio6/7e3a43-what-is-standard-error-of-the-mean
# what is standard error of the mean

The standard error of the mean (SEM) is the standard deviation of the sampling distribution of the sample mean. It is a statistical index of the probability that a given sample mean is representative of the mean of the population from which the sample was drawn: for example, the standard error of the mean birth weight for a treatment group provides a measure of the precision of the sample mean as an estimate of the population parameter. More generally, a standard error can be attached to any sample statistic: a proportion p, a difference between two sample means (x̄1 − x̄2), or a difference between two proportions (p1 − p2). It is used, among other things, to compare sample means across populations and to construct confidence intervals for the unknown population mean.

## Calculating the SEM

1. Note the number of measurements (n) and determine the sample mean: add all the sample values together and divide the sum total by the number of samples.
2. Calculate each measurement's deviation from the mean by subtracting the mean from the individual value.
3. Square the deviations, average them, and take the square root: this gives the sample standard deviation s.
4. Divide s by the square root of the sample size:

SE = s / √n

where s = sample standard deviation and n = sample size. Mathematically, the variance of the sampling distribution of the sample mean is equal to the variance of the population divided by the sample size, which is where the factor √n comes from; due to this factor, the estimate of a mean becomes more precise as more data are collected. Because the standard deviation σ of the entire population being sampled is seldom known, the standard error is usually estimated by replacing σ with the sample standard deviation s.

## SD versus SEM

In 1893 Karl Pearson coined the notion of standard deviation, which is undoubtedly the most used measure of spread in research studies. The standard deviation (SD) measures the dispersion of the individual data values: how far each observed value lies from the mean (if a value far away from the mean is added to a data set, the SD increases). The SEM, by contrast, quantifies how accurately you know the true mean of the population. Put simply, the SEM is an estimate of how far the sample mean is likely to be from the population mean, whereas the SD measures the degree to which individuals within the sample differ from the sample mean. Since SEM = SD/√(sample size), the SEM gets smaller as samples get larger: you are dividing by a bigger number, and sample means cluster more closely around the population mean as the sample size increases. SD and SEM therefore have different meanings and are not interchangeable, although "SE" and "SEM" are both common abbreviations for the standard error of the mean.

## Confidence intervals

One of the main uses of the SEM is to make confidence intervals for the unknown population mean; 95% and 99% intervals are in general use. An interval estimate gives a range of values within which the parameter is expected to lie. Assuming a normal distribution, 95% of sample means lie within 1.96 standard errors of the population mean, since 1.96 is the two-sided 5% point of the standard normal distribution. Hence:

95% CI = (mean − 1.96 × SE) to (mean + 1.96 × SE)

R. A. Fisher named the limits of the confidence interval "fiduciary limits", and named the confidence placed in the interval the "fiduciary probability".

## Small samples and other corrections

- When the true value of σ is unknown and the sample is small (roughly n < 20), a distribution that takes into account the spread of possible σ's should be used: the Student t-distribution, which has somewhat heavier tails than the Gaussian. The Student distribution is approximated well by the Gaussian once the sample size is over 100. Using s in place of σ tends to underestimate the standard error in small samples, by about 25% for n = 2 and about 5% for n = 6; Gurland and Tripathi (1971) provide a correction and equation for this effect.
- When the sampling fraction is large (approximately 5% or more) in an enumerative study, the estimate of the standard error must be corrected by multiplying by a finite population correction (FPC), which accounts for the added precision gained by sampling a larger fraction of the population; the error becomes zero when the sample size n equals the population size N. The FPC applies when managing an existing finite population that will not change over time; in an analytic study, following W. Edwards Deming, interest lies in the process that created the population, and no correction is made.
- If the values are not statistically independent but have been obtained from known locations in parameter space, an unbiased estimate of the true standard error of the mean may be obtained by multiplying the calculated standard error of the sample by a factor f computed from the sample autocorrelation coefficient ρ (commonly the Prais–Winsten estimate, a quantity between −1 and +1). This formula works for positive and negative ρ alike.
- Survey software may estimate the variance of the mean by linearisation; for example, SAS PROC SURVEYMEANS uses the Taylor series method by default (VARMETHOD=TAYLOR).

In regression analysis, the term "standard error" refers either to the square root of the reduced chi-squared statistic or to the standard error of a particular regression coefficient (as used, say, in confidence intervals). The related standard error of the regression is the average distance that the observed values fall from the regression line; in one example, the observed values fall an average of 4.89 units from the line. A t-test, a statistical method used to see whether two sets of data are significantly different, likewise rests on standard errors of means.

## A worked example

A cluster randomised double blind controlled trial investigated the effects of micronutrient supplements during pregnancy. The setting was 327 villages in two rural counties in northwest China. In total, 5828 pregnant women were recruited and randomised to treatment group by village, stratified by county, with a fixed ratio of treatments. The control treatment was daily folic acid; the two interventions were daily iron with folic acid, and daily multiple micronutrients (the recommended allowance of 15 vitamins and minerals). Outcome measures included birth weight, which was available for analysis for 4421 live births.

Mean birth weight was 3153.7 g (n=1545; 95% confidence interval 3131.5 to 3175.9; SD 444.9; SE 11.32) in the control group, 3173.9 g (n=1470; 3152.2 to 3195.6; 424.4; 11.07) in the iron-folic acid group, and 3197.9 g (n=1406; 3175.0 to 3220.8; 438.0; 11.68) in the multiple micronutrients group. Average birth weight was significantly higher in the multiple micronutrients group than in the control (folic acid) group (difference 42.3 g; P=0.019). Although average birth weight was also higher in the iron-folic acid group than in the control group, that difference was not significant (24.3 g; P=0.169).
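The basic calculation can be illustrated with a short script. As a sketch using only the standard library, it reproduces the SEM and the 95% confidence interval reported above for the control arm of the trial (mean 3153.7 g, SD 444.9 g, n = 1545):

```python
import math

# Control arm of the birth weight trial: mean, sample SD, sample size.
mean, sd, n = 3153.7, 444.9, 1545

sem = sd / math.sqrt(n)   # SEM = SD / sqrt(n)
lo = mean - 1.96 * sem    # lower 95% confidence limit (normal approximation)
hi = mean + 1.96 * sem    # upper 95% confidence limit

print(f"SEM = {sem:.2f} g")                 # 11.32 g, as reported
print(f"95% CI = {lo:.1f} to {hi:.1f} g")   # 3131.5 to 3175.9 g, as reported
```

Note that the SD stays around 440 g in every arm, since it describes the spread of individual birth weights, while the SEM would keep shrinking if n were larger.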
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9658553004264832, "perplexity": 891.6725495132763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304471.99/warc/CC-MAIN-20220124023407-20220124053407-00151.warc.gz"}
http://michaelnielsen.org/polymath1/index.php?title=Fujimura.tex&oldid=2026
Fujimura.tex

\section{Fujimura's problem}\label{fujimura-sec}

Let $\overline{c}^\mu_n$ be the size of the largest subset of the triangular grid
$$\Delta_n := \{(a,b,c)\in {\mathbb Z}^3_+ : a+b+c = n\}$$
which contains no equilateral triangles $(a+r,b,c), (a,b+r,c), (a,b,c+r)$ with $r>0$. These are upward-pointing equilateral triangles. We shall refer to such sets as ``triangle-free''. (Kobon Fujimura is a prolific inventor of puzzles, and in this puzzle asked the related question of eliminating all equilateral triangles.) The following table was formed mostly by computer searches for optimal solutions. We also found human proofs for most of them (see {\tt http://michaelnielsen.org/polymath1/index.php?title=Fujimura's\_problem}).

\begin{figure}
\centerline{
\begin{tabular}{l|llllllllllllll}
$n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13\\
\hline
$\overline{c}^\mu_n$ & 1 & 2 & 4 & 6 & 9 & 12 & 15 & 18 & 22 & 26 & 31 & 35 & 40 & 46
\end{tabular}
}
\caption{Fujimura numbers}
\label{lowFujimura}
\end{figure}

For any equilateral triangle $(a+r,b,c)$, $(a,b+r,c)$, $(a,b,c+r)$, the values of $a+2b$ at the three vertices form an arithmetic progression of length 3, with common difference $r$. A Behrend set is a finite set of integers with no arithmetic progression of length 3 (see {\tt http://arxiv.org/PS\_cache/arxiv/pdf/0811/0811.3057v2.pdf}). By keeping only those triples $(a,b,c)$ with $a+2b$ inside a Behrend set, one can obtain the lower bound $\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))$. It can be shown by the ``corners theorem'' of Ajtai and Szemer\'edi \cite{ajtai} that $\overline{c}^\mu_n = o(n^2)$ as $n \rightarrow \infty$.

An explicit lower bound is $3(n-1)$, made of all points in $\Delta_n$ with exactly one coordinate equal to zero. An explicit upper bound comes from counting the triangles. There are $\binom{n+2}{3}$ triangles, and each point belongs to $n$ of them.
So you must remove at least $(n+2)(n+1)/6$ points to remove all triangles, leaving $(n+2)(n+1)/3$ points as an upper bound for $\overline{c}^\mu_n$.
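The first few entries of the table can be verified by brute force. The following sketch (my own sanity check, not part of the original write-up) searches all subsets of $\Delta_n$ from the largest downward and returns the size of the largest triangle-free one:

```python
from itertools import combinations

# fujimura(n) returns the size of the largest subset of Delta_n containing
# no upward equilateral triangle (a+r,b,c), (a,b+r,c), (a,b,c+r) with r > 0.
def fujimura(n):
    grid = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

    def triangle_free(s):
        # enumerate every upward triangle by its "base point" (a,b,c) with
        # a+b+c = n-r, and check whether all three vertices lie in s
        for r in range(1, n + 1):
            for a in range(n - r + 1):
                for b in range(n - r - a + 1):
                    c = n - r - a - b
                    if (a + r, b, c) in s and (a, b + r, c) in s and (a, b, c + r) in s:
                        return False
        return True

    for size in range(len(grid), 0, -1):
        if any(triangle_free(set(c)) for c in combinations(grid, size)):
            return size
    return 0
```

This exhaustive search is only feasible for small $n$; the table values for larger $n$ came from more careful computer searches.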
http://www.boxin-package.com/info/how-to-make-kraft-paper-bag-23437508.html
How to make kraft paper bag

Jan 25, 2018

Kraft paper bags are familiar to all of us, but do we know how a kraft paper bag is actually made? If your answer is "no" or "not exactly", then read on for an overview of the production methods.

Kraft paper bags are made in different ways depending on their size: small, medium, and large bags each have their own production method.

Small white kraft paper bags are fully formed by a production-line machine, which also attaches the rope handles. Production time is short, efficiency is high, and costs are low, so these bags are widely used and affordable.

Medium-sized paper bags are formed by machine, with the rope handles attached by hand.

Large bags, owing to the limitations of the machinery, can only be made entirely by hand, which to some extent increases production costs and labor while lowering efficiency.

A kraft paper bag may look simple, but producing it is no small task. Treasure every paper bag; each one embodies careful work.
https://support.google.com/adsense/answer/1745239?hl=en&ctx=cb&src=cb&cbid=-p2h6jkbsho62&cbrank=1
# What to consider before blocking Google-certified ad networks

Before you block a Google-certified ad network, consider the following:

#### Ad auctions

The ad auction process and the revenue share are the same for the AdWords program as for Google-certified ad networks. Our system always shows the highest-paying ads, whether they come from AdWords or an ad network. Blocking an ad network can have a negative revenue impact, because the blocked network no longer competes in the auction on your site and therefore no longer drives up the potential earnings for your ad space.

#### Revenue per thousand impressions (RPM)

RPM represents the estimated earnings you'd accrue for every 1000 impressions you receive. We do not recommend blocking ad networks based on RPM. Consider the following example:

| Ad network | Impressions | RPM |
|------------|-------------|--------|
| Network A  | 10000       | \$1    |
| Network B  | 14          | \$3    |
| Network C  | 1000        | \$0.50 |

Network B has the highest RPM and appears to be outperforming the other networks. However, this metric is based on only 14 impressions, and it does not mean that you should expect the same revenue over the next 1000 or 10000 impressions. The value of impressions varies widely, so the RPM for a small number of impressions can be misleading.

Network C has the lowest RPM. However, blocking Network C because of its low RPM might have a negative impact on revenue: Network C has a lower RPM because it is winning the auctions for less valuable impressions. Our system always tries to maximize the value of every impression in an auction. If you block Network C, another network with a lower-paying bid may win the auction.
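The RPM definition above is a simple ratio. A minimal sketch (the earnings figures are invented to reproduce the RPMs in the example table, not real data):

```python
# RPM = estimated earnings per 1000 impressions.
networks = {
    "A": {"impressions": 10000, "earnings": 10.00},  # RPM $1.00
    "B": {"impressions": 14,    "earnings": 0.042},  # RPM $3.00, tiny sample
    "C": {"impressions": 1000,  "earnings": 0.50},   # RPM $0.50
}

def rpm(net):
    return net["earnings"] / net["impressions"] * 1000
```

Network B's \$3 RPM rests on just 14 impressions, so it says very little about what the next 1000 impressions would earn.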
https://assignmentutor.com/%E7%BB%9F%E8%AE%A1%E4%BB%A3%E5%86%99%E8%B4%9D%E5%8F%B6%E6%96%AF%E5%88%86%E6%9E%90%E4%BB%A3%E5%86%99bayesian-analysis%E4%BB%A3%E8%80%83data5711/
## THE USE OF CONJUGATE PRIORS WITH LATENT VARIABLES

Earlier in this section, it was demonstrated that conjugate priors make Bayesian inference tractable when complete data are available. Example $3.1$ demonstrated this by showing how the posterior distribution can easily be identified when a conjugate prior is assumed. Explicit computation of the evidence normalization constant with conjugate priors is often unnecessary, because the product of the likelihood and the prior leads to the algebraic form of a well-known distribution. As mentioned earlier, the calculation of the posterior normalization constant is the main obstacle to performing posterior inference. We can therefore ask: do conjugate priors help when latent variables are present in the model? With latent variables, the normalization constant is more complex, because it involves marginalizing over both the parameters and the latent variables.
Assume a full distribution over the parameters $\theta$, latent variables $z$ and observed variables $x$ (the latter two being discrete), which factorizes as follows: $$p(\theta, z, x \mid \alpha)=p(\theta \mid \alpha) p(z \mid \theta) p(x \mid z, \theta)$$ The posterior over the latent variables and parameters has the form (see Section $2.2.2$ for a more detailed example of such a posterior): $$p(\theta, z \mid x, \alpha)=\frac{p(\theta \mid \alpha) p(z \mid \theta) p(x \mid z, \theta)}{p(x \mid \alpha)}$$ and therefore the normalization constant $p(x \mid \alpha)$ equals: $$p(x \mid \alpha)=\sum_{z}\left(\int_{\theta} p(\theta \mid \alpha) p(z \mid \theta) p(x \mid z, \theta) d \theta\right)=\sum_{z} D(z)$$ where $D(z)$ is defined to be the term inside the sum above. Equation $3.6$ demonstrates that conjugate priors are useful even when the normalization constant requires summing over latent variables. If the prior family is conjugate to the distribution $p(X, Z \mid \theta)$, then the function $D(z)$ is mathematically easy to compute for any $z$. However, it is not true that $\sum_{z} D(z)$ is always tractable, since the form of $D(z)$ can be quite complex.

## MIXTURE OF CONJUGATE PRIORS

Mixture models are a simple way to extend a family of distributions into a more expressive family. If we have a set of distributions $p_{1}(X), \ldots, p_{M}(X)$, then a mixture model over this set of distributions is parametrized by an $M$-dimensional probability vector $(\lambda_{1}, \ldots, \lambda_{M})$ ($\lambda_{i} \geq 0$, $\sum_{i} \lambda_{i}=1$) and defines a distribution over $X$ such that: $$p(X \mid \lambda)=\sum_{i=1}^{M} \lambda_{i} p_{i}(X)$$ Section 1.5.3 gives an example of a mixture-of-Gaussians model. The idea of mixture models can also be used for prior families. Let $p(\theta \mid \alpha)$ be a prior from a prior family with $\alpha \in A$.
Then, it is possible to define a prior of the form: $$p\left(\theta \mid \alpha^{1}, \ldots, \alpha^{M}, \lambda_{1}, \ldots, \lambda_{M}\right)=\sum_{i=1}^{M} \lambda_{i} p\left(\theta \mid \alpha^{i}\right)$$ where $\lambda_{i} \geq 0$ and $\sum_{i=1}^{M} \lambda_{i}=1$ (i.e., $\lambda$ is a point in the $M-1$ dimensional probability simplex). This new prior family, which is hyperparametrized by $\alpha^{i} \in A$ and $\lambda_{i}$ for $i \in \{1, \ldots, M\}$, will actually be conjugate to a likelihood $p(x \mid \theta)$ if the original prior family $p(\theta \mid \alpha)$ for $\alpha \in A$ is also conjugate to this likelihood. To see this, consider that when using a mixture prior, the posterior has the form: $$\begin{aligned} p\left(\theta \mid x, \alpha^{1}, \ldots, \alpha^{M}, \lambda\right) &=\frac{p(x \mid \theta)\, p\left(\theta \mid \alpha^{1}, \ldots, \alpha^{M}, \lambda\right)}{\int_{\theta} p(x \mid \theta)\, p\left(\theta \mid \alpha^{1}, \ldots, \alpha^{M}, \lambda\right) d \theta} \\ &=\frac{\sum_{i=1}^{M} \lambda_{i}\, p(x \mid \theta)\, p\left(\theta \mid \alpha^{i}\right)}{\sum_{i=1}^{M} \lambda_{i} Z_{i}} \end{aligned}$$ where $$Z_{i}=\int_{\theta} p(x \mid \theta) p\left(\theta \mid \alpha^{i}\right) d \theta$$ Therefore, it holds that: $$p\left(\theta \mid x, \alpha^{1}, \ldots, \alpha^{M}, \lambda\right)=\frac{\sum_{i=1}^{M}\left(\lambda_{i} Z_{i}\right) p\left(\theta \mid x, \alpha^{i}\right)}{\sum_{i=1}^{M} \lambda_{i} Z_{i}}$$ because $p(x \mid \theta) p\left(\theta \mid \alpha^{i}\right)=Z_{i} p\left(\theta \mid x, \alpha^{i}\right)$. Because of conjugacy, each $p\left(\theta \mid x, \alpha^{i}\right)$ is equal to $p\left(\theta \mid \beta^{i}\right)$ for some $\beta^{i} \in A$ ($i \in \{1, \ldots, M\}$). The hyperparameters $\beta^{i}$ are the updated hyperparameters following posterior inference.
Therefore, it holds that: $$p\left(\theta \mid x, \alpha^{1}, \ldots, \alpha^{M}, \lambda\right)=\sum_{i=1}^{M} \lambda_{i}^{\prime} p\left(\theta \mid \beta^{i}\right)$$ for $\lambda_{i}^{\prime}=\lambda_{i} Z_{i} /\left(\sum_{j=1}^{M} \lambda_{j} Z_{j}\right)$.

## RENORMALIZED CONJUGATE DISTRIBUTIONS

In the previous section, we saw that one can derive a more expressive prior family by using a basic prior distribution in a mixture model. Renormalizing a conjugate prior is another way to change the properties of a prior family while still retaining conjugacy. Let us assume that a prior $p(\theta \mid \alpha)$ is defined over some parameter space $\Theta$. It is sometimes the case that we want to constrain $\Theta$ further to a smaller subspace, and define $p(\theta \mid \alpha)$ such that its support is some $\Theta_{0} \subset \Theta$. One way to do so is to define the following distribution $p^{\prime}$ over $\Theta_{0}$: $$p^{\prime}(\theta \mid \alpha)=\frac{p(\theta \mid \alpha)}{\int_{\theta^{\prime} \in \Theta_{0}} p\left(\theta^{\prime} \mid \alpha\right) d \theta^{\prime}} .$$ This new distribution retains the same ratios between the probabilities of elements of $\Theta_{0}$ as $p$, but allocates probability 0 to every element of $\Theta \backslash \Theta_{0}$. It can be shown that if $p$ is a family conjugate to some likelihood, then $p^{\prime}$ is conjugate to the same likelihood as well. This example demonstrates that conjugacy, in its pure form, does not guarantee tractability when the conjugate prior is used together with the corresponding likelihood. More specifically, the integral over $\Theta_{0}$ in the denominator of Equation $3.7$ can often be difficult to compute, and approximate inference is then required. The renormalization of conjugate distributions arises when considering probabilistic context-free grammars (PCFGs) with Dirichlet priors on the parameters.
In this case, in order for the prior to allocate zero probability to parameters that define non-tight PCFGs, certain multinomial distributions need to be removed from the prior. Here, tightness refers to a desirable property of a PCFG that the total measure of all finite parse trees generated by the underlying context-free grammar is 1. For a thorough discussion of this issue, see Cohen and Johnson (2013).
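As a concrete illustration of the mixture-of-conjugate-priors update derived earlier, here is a sketch (mine, not from the text) instantiated for a Beta mixture prior with a binomial likelihood of $k$ successes in $n$ trials. Each component $i$ has weight $\lambda_i$ and hyperparameters $(a_i, b_i)$; the posterior is again a Beta mixture with components $(a_i + k,\ b_i + n - k)$ and weights $\lambda_i' \propto \lambda_i Z_i$:

```python
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def posterior_mixture(components, k, n):
    """components: list of (lam, a, b) triples; returns the posterior mixture."""
    updated = []
    for lam, a, b in components:
        # Z_i = B(a + k, b + n - k) / B(a, b), up to the binomial coefficient,
        # which is identical for every component and cancels in lam_i'.
        log_z = log_beta(a + k, b + n - k) - log_beta(a, b)
        updated.append((lam * exp(log_z), a + k, b + n - k))
    total = sum(w for w, _, _ in updated)
    return [(w / total, a, b) for w, a, b in updated]
```

For nine successes in ten trials, the posterior weight shifts toward whichever component's prior favors high success probabilities, exactly as the $\lambda_i' = \lambda_i Z_i / \sum_j \lambda_j Z_j$ formula dictates.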
http://mathonline.wikidot.com/summary-of-equivalent-statements-regarding-continuous-maps-o
Summary of Equiv. Statements Regarding Cts. Maps on Topo. Spaces

# Summary of Equivalent Statements Regarding Continuous Maps on Topological Spaces

We have seen many different definitions/equivalent statements for a map $f : X \to Y$ (where $X$ and $Y$ are topological spaces) to be continuous on all of $X$. We will now summarize all of the equivalent definitions for $f : X \to Y$ to be continuous on all of $X$ with the following diagram.
https://scirate.com/arxiv/0711.0012/scites
# Stringy Generalization of the First Law of Thermodynamics for Rotating BTZ Black Hole with a Cosmological Constant as State Parameter
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_(Inorganic_Chemistry)/Crystallography/Physical_Properties_of_Crystals/Curie_laws
# Curie laws

Curie extended the notion of symmetry to include that of physical phenomena and stated that:

• the symmetry characteristic of a phenomenon is the highest compatible with the existence of the phenomenon;
• the phenomenon may exist in a medium which possesses that symmetry or that of a subgroup of that symmetry.

and concluded that some symmetry elements may coexist with the phenomenon but that their presence is not necessary. On the contrary, what is necessary is the absence of certain symmetry elements: 'asymmetry creates the phenomenon'. Noting that physical phenomena usually express relations between a cause and an effect (an influence and a response), P. Curie restated the two propositions above in the following way, now known as the Curie laws, although they are not, strictly speaking, laws (Curie himself spoke of 'the principle of symmetry'):

• the asymmetry of the effects must pre-exist in the causes;
• the effects may be more symmetric than the causes.

### Applications

Curie applied the above statements to determine the symmetry characteristic of physical quantities such as a polar vector, a force or an electric field (limit group A∞M, i.e. ∞m), and an axial vector or a magnetic field (limit group (A∞/M)C, i.e. ∞/m).

If one now considers a phenomenon resulting from the superposition of several causes in the same medium, one may note that the symmetry of the global cause is the intersection of the symmetry groups of the various causes: the asymmetries add up. This remark can be applied to the determination of the point groups in which physical properties such as pyroelectricity or piezoelectricity are possible.

### History

1. Pierre Curie (1859-1906)'s principle of symmetry is stated in Curie P., 1894, J. Physique, 3, 393-415, Sur la symétrie dans les phénomènes physiques, symétrie d'un champ électrique et d'un champ magnétique.
https://www.physicsforums.com/threads/impulse-of-a-bouncing-ball.261951/
# Impulse of a bouncing ball

1. Oct 5, 2008

### geo321

1. The problem statement, all variables and given/known data

Ok, I am supposed to find the impulse of a .12 kg ball dropped from a height of 1.25 meters. It bounces back from the floor to a height of .6 meters. Not really sure of all the equations needed, I need a lot of help.

2. Relevant equations

3. The attempt at a solution

2. Oct 6, 2008

### Kurdt

Staff Emeritus

Impulse is equal to the change in momentum if that helps any.
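The momentum-change approach in the reply can be sketched as follows (my own numbers and assumptions, not the thread's: g = 9.8 m/s², purely vertical motion, and "impulse" meaning the impulse the floor delivers to the ball):

```python
import math

m, g = 0.12, 9.8          # ball mass (kg), gravitational acceleration (m/s^2)
h_drop, h_bounce = 1.25, 0.6

v_down = math.sqrt(2 * g * h_drop)    # speed just before impact, from kinematics
v_up = math.sqrt(2 * g * h_bounce)    # speed just after impact
# Taking "up" as positive, momentum goes from -m*v_down to +m*v_up,
# so the impulse is the change in momentum:
impulse = m * (v_up + v_down)         # about 1.0 kg*m/s
```

The key point is that the velocities before and after the bounce point in opposite directions, so their magnitudes add in the momentum change.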
https://physics.stackexchange.com/questions/76325/exercise-review-find-the-moment-of-the-electric-dipole
# (Exercise review) Find the moment of the electric dipole

I have solved an exercise, but the result I obtained is wrong and I can't understand why. If you can help me, I'll be so grateful.

Let's consider a charged cylinder of radius $R$ and height $4R$. Its center is at the origin of the Cartesian axes and its axis is parallel to the $z$ axis; we know the volume charge density: $\rho= 2az \epsilon$, where $a, \epsilon$ are constants $>0$. The electric field inside the cylinder is described by: $E_{0x}=0, E_{0y}=0, E_{0z}=az^2$. We want to find the electric potential at a great distance from the origin of the axes.

I have reasoned as follows. Using Maxwell's first equation, I found the volume charge density: $\rho=2\epsilon_0 a z$. At a very large distance from the origin, I can treat the cylinder as an electric dipole, with the positive charge concentrated at the center of the upper half of the cylinder and the negative one at the center of the lower half: so $Q_+$ will be at $(0,0,R)$ and $Q_-$ at $(0,0,-R)$. Now, I need the dipole moment, ${\bf p}=q{\bf a}$, where ${\bf a}$ is the vector from the negative charge to the positive charge. I integrated $\rho$, obtaining $q=\int_0^{2R}\int_0^R\int_0^{2\pi}2az\epsilon r \,d\theta\, dr\, dz$, which gave $q=4\pi a \epsilon_0 R^5$. Then I set $a=2R$, obtaining $p=8\pi a \epsilon_0 R^4$. The correct result is $p=32/3\, \pi a \epsilon_0 R^5$. I can't understand where I went wrong and I would like to know whether I made an error of procedure.

• What exactly do you need to find out? the potential on the $z$ axis, at large $z$? – Emilio Pisanty Sep 5 '13 at 22:20
• Some dimensional analysis auditing would definitely help. Since $\rho\propto \epsilon_0 az$, the units of $\epsilon_0 a$ must be $\text{charge}/\text{length}^4$, which makes $q\propto \epsilon_0 a R^5$ dimensionally incorrect, and similarly for your equation for $p$.
– Emilio Pisanty Sep 5 '13 at 22:25
• @EmilioPisanty I need to find the potential at a very large distance from the origin (not necessarily on the z axis). But the result for p is wrong, so the result for V is also wrong – sunrise Sep 5 '13 at 22:41

You should calculate the dipole moment directly: $$p=\int\rho(\mathbf r)\,z\,\text dV=\int_{-2R}^{2R}\int_0^R\int_0^{2\pi}2a\epsilon_0 {z^2}\, r \,\text d\theta\, \text dr\, \text dz =2\epsilon_0a\int_{-2R}^{2R}z^2 \,\text dz\int_0^R r \,\text dr\int_0^{2\pi} \text d\theta =2\epsilon_0a \left.\frac{z^3}{3}\right|_{-2R}^{2R} \left.\frac{r^2}{2}\right|_{0}^R 2\pi =2\epsilon_0 a\cdot \frac{2\cdot 8R^3}{3} \cdot \frac{R^2}{2}\cdot2\pi=\frac{32\pi}{3}\epsilon_0 aR^5.$$

• @sunrise Well, for one, $q$ is zero, isn't it? – Emilio Pisanty Sep 5 '13 at 22:49
• sure! the total charge is zero. I'm sorry, I meant $Q_+$.. – sunrise Sep 5 '13 at 22:50
• Yes. You are changing your problem for a different one, and you can't be sure that the exact details will carry over. While you can indeed model your cylinder as two charges of $\pm Q$ at $\pm L$, you have two free parameters and you can't be sure that the optimal position for your charges is at $L=2R$. (In fact, it won't be: there's plenty of charge at smaller $z$'s, so the effective position will be closer to the origin.) – Emilio Pisanty Sep 5 '13 at 23:10
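The closed-form result $p = \frac{32\pi}{3}\epsilon_0 a R^5$ can be checked numerically (my own sanity check, not part of the thread) by integrating $\rho\, z$ over the cylinder with a midpoint rule, in normalized units $\epsilon_0 = a = R = 1$:

```python
import math

# rho = 2*eps0*a*z inside a cylinder of radius R with z in [-2R, 2R];
# the dipole moment is p = integral of rho * z over the volume.
eps0 = a = R = 1.0
N = 300                      # grid resolution for the midpoint rule

dz = 4 * R / N
dr = R / N
p = 0.0
for i in range(N):
    z = -2 * R + (i + 0.5) * dz
    for j in range(N):
        r = (j + 0.5) * dr
        # integrand: rho * z * dV, with dV = r dr dtheta dz; the theta
        # integral contributes a factor 2*pi by symmetry
        p += (2 * eps0 * a * z) * z * 2 * math.pi * r * dr * dz

expected = 32 * math.pi / 3 * eps0 * a * R**5
```

The numerical value agrees with $32\pi/3 \approx 33.5$ to well under a percent, confirming the direct integration.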
http://www.chegg.com/homework-help/questions-and-answers/practical-voltage-source-formed-combining-ideal-voltage-source-vs-internal-resistance-r--1-q4244712
The practical voltage source is formed by combining an ideal voltage source (Vs) and an internal resistance (r).

1) Without computing, determine the sign of the power Ps delivered by the voltage source.
2) Compute Ps delivered by the voltage source.
3) Compute the power Pr absorbed/dissipated as heat by the resistor R.
4) Compute the ratio p = Pr/Ps.
5) Find the value of R that maximizes p.
6) Assume r << R and compute the ratio Pr/PR, where Pr is the power dissipated by the resistor r. Use a power balance to determine where the power delivered by the voltage source is absorbed.
7) Find the value of R that maximizes Pr. Compute Pr and p for this value.

I don't so much want straight-up answers to these questions, more direction as to how to tackle them.
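As a direction rather than a worked solution: for the series circuit Vs — r — R, the load power is $P_R = V_s^2 R/(R+r)^2$ and the efficiency is $p = P_R/P_s = R/(R+r)$. The sketch below (Vs and r are assumed example values, not given in the problem) scans R and shows $P_R$ peaking at $R = r$ (maximum power transfer), where the efficiency is 1/2:

```python
Vs, r = 10.0, 2.0   # assumed example values for the source and internal resistance

def load_power(R):
    # power dissipated in the load resistor R of the series circuit
    return Vs**2 * R / (R + r)**2

candidates = [0.01 * k for k in range(1, 1001)]   # R from 0.01 to 10.0 ohms
best_R = max(candidates, key=load_power)          # numerically: best_R = r
efficiency = best_R / (best_R + r)                # 1/2 at the optimum
```

Note the tension the problem is driving at: efficiency $p$ keeps rising with R, but the absolute power delivered to R is largest at R = r, where half the source power is lost in r.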
https://physics.stackexchange.com/questions/277892/similarities-between-light-and-other-frequencies-of-em-waves
# Similarities between light and other frequencies of EM waves

This may be a ridiculous question, but I'll learn something from it! Let's say there's a TV transmitter transmitting at 100 kW. I can receive the station just fine 20 miles away. The antenna is 300 m in the air. If I replicate this with light (put a 100 kW "bulb" 300 m in the air), would I be able to see the light 20 miles away? I understand there are things to take into account, particularly propagation differences at the different frequencies. But is this a meaningful analogy, or totally useless?

• I'll just add something: to be precise, if you call the radiation produced by a light bulb "light", then the radiation your antenna gives off is also called "light"... The difference is that the light bulb's is visible, but they're both light. – QuantumBrick Sep 3 '16 at 0:52
• Some information here en.wikipedia.org/wiki/Radio_wave, see Ground waves. However, this seems to be significant only at very low frequencies, e.g. medium wave and long wave. I don't think the effect matters at the frequencies normally used for TV signals. For example, consider BBC Radio 4: it has a single LW transmitter in Droitwich (central England), but I have received it in Germany. The FM service requires multiple transmitters just to cover the UK. – badjohn Aug 15 '17 at 14:52

Technically speaking, if there is no decay of EM waves of any particular frequency, you can measure the signal in the configuration you are describing. It all depends on how sensitive the measuring instrument (antenna, eye) is to the power under consideration. This of course holds for sufficiently large EM wave intensities, where quantum effects are irrelevant. A 100-watt incandescent bulb puts out about 1600 lumens. The next question is how many lumens one candle puts out. Yes, these are very old-fashioned units. Estimates vary as to what the equivalence is, but I have settled for the most quoted figure.
A standard candle is defined as giving off one candlepower over $4\pi$ steradians, which comes to 12.56636 lumens. So a 100-watt bulb should give out the same light as about 127 candles. From Comparing Candle to Stars, based on work done by researchers testing how dim a light you could expect to see, by comparing a candle at a distance to a star of known magnitude:

The brightest stars, such as Vega, have a magnitude 0. At what distance would a candle flame be comparable to a star like Vega? Some straightforward nighttime experiments with a candle suggested that the distance was 338 meters. “To our eyes the candle flame and Vega appeared of comparable brightness,” they say. To check, the team observed both Vega and the candle flame using the same digital camera (an astronomical SBIG camera with 35 mm aperture and 100 mm focal length). The results were something of a surprise. “The candle flame at 338 m was 2.423 magnitudes brighter than Vega, even though they looked comparable in brightness to our eyes,” say Krisciunas and Carona. That raises the question of how far away the flame should be to appear the same brightness as Vega. That's not a straightforward question to answer, because the camera's CCD is sensitive to photons in a different way than human eyes, and Vega and the candle emit light with different spectra. Nevertheless, Krisciunas and Carona make some calibrating assumptions and say that parity would occur at 392 meters. In other words, a candle flame is the same brightness as a magnitude 0 star at a distance of 392 meters. The faintest stars humans can see unaided have a magnitude 6. Fainter stars can only be seen using a telescope or binoculars. Magnitude 0 stars are 251.2 times brighter than magnitude 6 stars. So, while again taking into account the differences between starlight and candle light, it is possible to work out how far away the candle should be to appear equally bright as a magnitude 6 star.
Krisciunas and Carona say this would occur at a distance of 2,576 meters, or roughly 1.6 miles, and that at 10 miles a candle would appear as bright as a magnitude 9.98 star. “This is far beyond the capabilities of the most sensitive human eyes,” they say. So the farthest distance at which a human eye can detect a candle flame is about 2.58 kilometers. Now the magnitude scale works like this. In 1856 Norman Pogson of Oxford proposed that a logarithmic scale of $\sqrt[5]{100}\approx 2.512$ be adopted between magnitudes, so that five magnitude steps correspond precisely to a factor of 100 in brightness. Every interval of one magnitude equates to a variation in brightness of $100^{1/5}$, or roughly 2.512 times. Consequently, a first-magnitude star is about 2.512 times brighter than a second-magnitude star, $2.512^{2}$ times brighter than a third-magnitude star, $2.512^{3}$ times brighter than a fourth-magnitude star, and so on. Now you have only 127 candles, so no, you should not be able to see it: they just won't bring it down to magnitude 6, the minimum brightness required to see it. (Strictly, these figures are for a 100 W bulb; the 100 kW source of the question would be roughly 1000 times, i.e. 7.5 magnitudes, brighter, which would change the conclusion.) This assumes pitch blackness, like a star in a clear sky.
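The chain of comparisons above can be reduced to two magnitude formulas: $N$ equal sources are $2.5\log_{10}N$ magnitudes brighter than one, and doubling the distance adds $5\log_{10}2$ magnitudes. A quick sketch using the figures quoted above (taking the 9.98-magnitude value at face value and assuming pure inverse-square falloff, no atmospheric extinction):

```python
import math

# One candle at 10 miles looks like a magnitude 9.98 star (quoted above),
# and a 100 W bulb is roughly 127 candles.
m_one_candle_10mi = 9.98
n_candles = 127

# N equal sources are 2.5*log10(N) magnitudes brighter than a single one.
m_bulb_10mi = m_one_candle_10mi - 2.5 * math.log10(n_candles)

# Going from 10 to 20 miles dims the source by 5*log10(2) magnitudes,
# assuming inverse-square dimming only (no atmospheric extinction).
m_bulb_20mi = m_bulb_10mi + 5 * math.log10(2)

print(round(m_bulb_10mi, 2))  # ~4.72: naked-eye visible at 10 miles
print(round(m_bulb_20mi, 2))  # ~6.23: just past the magnitude-6 limit at 20 miles
```

So with these numbers the 100 W case is right on the edge at 20 miles; the "no" is a close call, and any atmospheric extinction pushes it further out of reach.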
https://link.springer.com/article/10.1007/s13222-017-0264-7
## Introduction

The problem of lifting rankings on objects to rankings on sets has been studied from many different viewpoints (see [2] for an excellent survey). Several properties (also called axioms) have been proposed in order to indicate whether the lifted ranking reflects the given order on the elements. Two important axioms are dominance and independence. Roughly speaking, dominance ensures that adding an element which is better (worse) than all elements in a set makes the augmented set better (worse) than the original one. Independence, on the other hand, states that adding an element $$a$$ to sets $$A$$ and $$B$$, where $$A$$ is already known to be preferred over $$B$$, must not make $$B\cup\{a\}$$ preferred over $$A\cup\{a\}$$ (or, in the strict variant, $$A\cup\{a\}$$ should remain strictly preferred over $$B\cup\{a\}$$). These axioms were first considered together in the context of decision making under complete uncertainty [9]. There, sets represent the (mutually exclusive) possible outcomes of an action, and one tries to rank these sets based on a preference ranking on the outcomes. It is assumed that the probability of each outcome is unknown, i. e., it is only known whether an event is a possible outcome or not. This is a very reductive model. Still, “it does succeed in modelling some empirically interesting situations” [5, p. 2]. Especially, “when the number of possible states of the world is large, an agent of bounded rationality may be incapable of undertaking (or unwilling to undertake) the complex calculations which consideration of the entire rows in the outcome matrix will involve.” [15, p. 2]. Such situations often occur for autonomous agents, for example self-driving cars, where “the temporal evolution of situations cannot be predicted without uncertainty because other road users behave stochastically and their goals and plans cannot be measured” [6, p. 1].
Moreover, dominance and independence are also sensible axioms in other contexts, for example for bundles of objects of unknown size. Finally, to mention a very different application, Packard [14] used independence and a version of dominance to define plausibility rankings on theories. However, it is well known that constructing a ranking on the whole power set of objects which jointly satisfies dominance and (strict) independence is, in general, not possible.

### Example 1

Consider the problem of assigning tasks to agents. Let $$X=\{t_{1},\dots,t_{n}\}$$ be a collection of tasks. Furthermore, assume we know for every agent which tasks they prefer to perform. If there are more tasks than agents, some agents have to perform several tasks, so it would be useful to know the preferences over sets of tasks. However, asking for these preferences directly is infeasible even for a reasonably small number of tasks. Therefore, we would like to lift the preferences over tasks to preferences over sets. Furthermore, it seems reasonable that the order on the sets should satisfy dominance and (strict) independence. Unfortunately, for strict independence, this is impossible even for $$n=3$$. Assume $$t_{1}<t_{2}<t_{3}$$. Then, $$\{t_{1}\}\prec\{t_{1},t_{2}\}$$ is implied by dominance, and therefore $$\{t_{1},t_{3}\}\prec\{t_{1},t_{2},t_{3}\}$$ must hold by strict independence. On the other hand, $$\{t_{2},t_{3}\}\prec\{t_{3}\}$$ is also implied by dominance, and therefore $$\{t_{1},t_{2},t_{3}\}\prec\{t_{1},t_{3}\}$$ by strict independence. We thus end up with $$\{t_{1},t_{2},t_{3}\}\prec\{t_{1},t_{3}\}\prec\{t_{1},t_{2},t_{3}\}$$, hence $$\prec$$ is not an order. Because of this, other (weaker) axiomatizations were proposed (see for example [7] or, more recently, [4] and [10], among many others). However, in many applications one does not need to order the entire power set (for example, when some tasks cannot be performed in parallel).
In these cases, it may be possible to construct rankings that jointly satisfy dominance and (strict) independence.

### Example 2

Let $$X$$ be as above. Now assume $$\{t_{1},t_{2},t_{3}\}$$ is not a possible combination of tasks, for example because fulfilling all three tasks at once is not feasible. Then, for example, $$\{t_{1}\}\prec\{t_{1},t_{2}\}\prec\{t_{2}\}\prec\{t_{1},t_{3}\}\prec\{t_{2},t_{3}\}\prec\{t_{3}\}$$ is a total order that satisfies dominance and strict independence (respecting the underlying linear order $$t_{1}<t_{2}<t_{3}$$). In this paper, we investigate exactly this situation, i. e., lifting rankings to specific families of sets of elements. In the literature, this scenario seems to have been rather neglected so far. The only exception we are aware of deals with subsets of a fixed cardinality [3]. In particular, we are interested in the complexity of computing, if possible, rankings on arbitrary subsets of the power set that satisfy dominance and (strict) independence. To do so, we first give a new definition of dominance which appears more suitable in such a setting (for more details, see Section 3). Then, we consider the following problem: Given a ranking on elements and a set $$S$$ of sets of elements, does there exist a strict (partial) order on $$S$$ that satisfies $$D$$ and $$I$$ (where $$D$$ is either standard dominance or our notion of dominance and $$I$$ is independence or strict independence)? We show that the problem is either trivial or easy to solve in the case of partial orders. Our main result is NP-completeness for the case where total orders are required. The remainder of the paper is organized as follows. In the next section, we recall some basic concepts. In Section 3 we discuss why standard dominance can be seen as too weak in our setting and propose an alternative definition. Section 4 contains our main results. We conclude the paper in Section 5 with a summary and pointers to future work. This paper is an extended version of [11].
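The claim in Example 2 can also be verified mechanically. A small sketch (a hypothetical checker, with task $$t_{i}$$ encoded as the integer $$i$$ and the function names ours):

```python
# Verify that the total order from Example 2 satisfies dominance and
# strict independence on the family P({1,2,3}) without {} and {1,2,3}.
X = {1, 2, 3}
order = [frozenset(s) for s in
         [{1}, {1, 2}, {2}, {1, 3}, {2, 3}, {3}]]   # listed worst to best
rank = {A: i for i, A in enumerate(order)}           # A ≺ B iff rank[A] < rank[B]
family = set(order)

def dominance_ok():
    for A in family:
        for x in X - A:
            B = A | {x}
            if B not in family:
                continue                  # the axiom only binds inside the family
            if all(y < x for y in A) and not rank[A] < rank[B]:
                return False              # A ≺ A∪{x} required when x beats all of A
            if all(x < y for y in A) and not rank[B] < rank[A]:
                return False              # A∪{x} ≺ A required when x is worst
    return True

def strict_independence_ok():
    for A in family:
        for B in family:
            if rank[A] < rank[B]:                      # premise A ≺ B
                for x in X - (A | B):
                    Ax, Bx = A | {x}, B | {x}
                    if Ax in family and Bx in family and not rank[Ax] < rank[Bx]:
                        return False
    return True

print(dominance_ok(), strict_independence_ok())  # True True
```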
## Background

The formal framework we consider in the following consists of a finite, nonempty set $$X$$, equipped with a linear order $$<$$, and a subset $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$ of the power set of $$X$$ not containing the empty set. We want to find a binary relation $$\prec$$ on $$\mathcal{X}$$ that satisfies some niceness conditions. We will consider several kinds of relations and recall the relevant definitions.

### Definition 1

A binary relation is called a strict partial order if it is irreflexive and transitive. A strict or linear order is a total strict partial order. A binary relation is called a preorder if it is reflexive and transitive. A (weak) order is a total preorder. If $$\preceq$$ is a weak order or a preorder on a set $$X$$, then for all $$x,y\in X$$ the corresponding strict order $$\prec$$ is defined by $$x\prec y$$ if $$x\preceq y$$ and $$y\not\preceq x$$ hold. Additionally, we need the following notions:

### Definition 2

For a pre- or weak order $$\preceq$$, we write $$x\sim y$$ if $$x\preceq y$$ and $$y\preceq x$$ hold. Let $$A\in\mathcal{X}$$ be a set of elements of $$X$$. Then we write $$\max(A)$$ for the maximal element of $$A$$ with respect to $$<$$ and $$\min(A)$$ for the minimal element of $$A$$ with respect to $$<$$. Furthermore, we say a relation $$R$$ on a set $$\mathcal{X}$$ extends a relation $$S$$ on $$\mathcal{X}$$ if $$xSy$$ implies $$xRy$$ for all $$x,y\in\mathcal{X}$$. Finally, we say a relation $$R$$ on $$\mathcal{X}$$ is the transitive closure of a relation $$S$$ on $$\mathcal{X}$$ if the existence of a sequence $$x_{1}Sx_{2}S\dots Sx_{k}$$ implies $$x_{1}Rx_{k}$$ for all $$x_{1},x_{k}\in\mathcal{X}$$ and $$R$$ is the smallest relation with this property. We write $$\mathit{trcl}(S)$$ for the transitive closure of $$S$$.
Many different axioms a good order should satisfy are discussed in the literature (an overview of the relevant interpretations and the corresponding axioms can be found in the survey [2]). The following axioms “have very plausible intuitive interpretations” [2, p. 11] for decision making under complete uncertainty and belong to the most extensively studied ones. (We added membership conditions of the form $$A\in\mathcal{X}$$ that are not necessary if $$\mathcal{X}=\mathcal{P}(X)\backslash\{\emptyset\}$$ holds.)

### Axiom 1 (Extension Rule)

For all $$x,y\in X$$ such that $$\{x\},\{y\}\in\mathcal{X}$$: $$x<y\text{ implies }\{x\}\prec\{y\}.$$

### Axiom 2 (Dominance)

For all $$A\in\mathcal{X}$$ and all $$x\in X$$ such that $$A\cup\{x\}\in\mathcal{X}$$: \begin{aligned} & y<x\text{ for all }y\in A\text{ implies }A\prec A\cup\{x\};\\ & x<y\text{ for all }y\in A\text{ implies }A\cup\{x\}\prec A.\end{aligned}

### Axiom 3 (Independence)

For all $$A,B\in\mathcal{X}$$ and for all $$x\in X\backslash(A\cup B)$$ such that $$A\cup\{x\},B\cup\{x\}\in\mathcal{X}$$: $$A\prec B\text{ implies }A\cup\{x\}\preceq B\cup\{x\}.$$

### Axiom 4 (Strict Independence)

For all $$A,B\in\mathcal{X}$$ and for all $$x\in X\backslash(A\cup B)$$ such that $$A\cup\{x\},B\cup\{x\}\in\mathcal{X}$$: $$A\prec B\text{ implies }A\cup\{x\}\prec B\cup\{x\}.$$

### Example 3

Take $$X=\{1,2,3,4\}$$ with the usual linear order and $$\mathcal{X}=\{\{3\},\{4\},\{1,3\},\{2,3\},\{1,4\},\{1,2,3\},\{1,3,4\}\}.$$ Then the extension rule implies $$\{3\}\prec\{4\}$$, while dominance implies $$\{1,3\}\prec\{1,3,4\}$$, $$\{1,2,3\}\prec\{2,3\}\prec\{3\}$$, $$\{1,3\}\prec\{3\}$$ and $$\{1,4\}\prec\{4\}$$, but not $$\{3\}\prec\{4\}$$. Furthermore, (strict) independence lifts the preference between $$\{2,3\}$$ and $$\{3\}$$ to $$\{1,2,3\}$$ and $$\{1,3\}$$, i. e., in combination with dominance, independence implies $$\{1,2,3\}\preceq\{1,3\}$$ and strict independence implies $$\{1,2,3\}\prec\{1,3\}$$. Every reasonable order should satisfy the extension rule.
If we assume $$\mathcal{X}=\mathcal{P}(X)\backslash\{\emptyset\}$$, the extension rule is implied by dominance [2]. Therefore, a natural task is to find a total order on $$\mathcal{P}(X)\backslash\{\emptyset\}$$ that satisfies dominance together with (some version of) independence. However, in their seminal paper [9], Kannai and Peleg have shown that this is impossible for regular independence and dominance if $$|X|\geq 6$$ and $$\mathcal{X}=\mathcal{P}(X)\backslash\{\emptyset\}$$ hold. Barberà and Pattanaik [1] showed that for strict independence and dominance this is impossible even for $$|X|\geq 3$$ and $$\mathcal{X}=\mathcal{P}(X)\backslash\{\emptyset\}$$ (see Example 1 for a proof of the statement). If we abandon the condition $$\mathcal{X}=\mathcal{P}(X)\backslash\{\emptyset\}$$, the situation is not as clear. As we have seen in Example 2, there are sets $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$ with $$|X|\geq 3$$ such that there is an order on $$\mathcal{X}$$ satisfying strict independence and dominance.

## A Stronger Form of Dominance

Many results regularly used in the setting of $$\mathcal{X}=\mathcal{P}(X)\backslash\{\emptyset\}$$ are no longer true in the more general case. For example, in contrast to the result stated above, the extension rule is not implied by dominance, as we have seen in Example 3. Furthermore, it could be argued that $$\{1,3\}\prec\{1,4\}$$ should hold in that example; this would be implied by dominance and independence if $$\{3,4\}$$ were in $$\mathcal{X}$$, because $$\{3,4\}\prec\{4\}$$ holds by dominance and so $$\{1,3,4\}\preceq\{1,4\}$$ by independence. Hence, $$\{1,3\}\prec\{1,3,4\}\preceq\{1,4\}$$ implies $$\{1,3\}\prec\{1,4\}$$ by transitivity. Furthermore, for the set $$X$$ from Example 3, dominance does not even imply $$\{1\}\prec\{1,2,3\}$$ if $$\{1,2\}$$ is not in the family. Therefore, it is reasonable to ask for a stronger version of dominance that behaves nicely in the general case.
We observe that $$x<y$$ for all $$y\in A$$ implies $$\max(A\cup\{x\})=\max(A)$$ and $$\min(A\cup\{x\})<\min(A)$$, whereas $$y<x$$ for all $$y\in A$$ implies $$\max(A)<\max(A\cup\{x\})$$ and $$\min(A\cup\{x\})=\min(A)$$. We claim that every dominance-like axiom should satisfy this property. Therefore, we can use this property to define a “maximal” version of dominance, which can be seen as a special case of Pareto dominance [12].

### Axiom 5 (Maximal Dominance)

For all $$A,B\in\mathcal{X}$$, \begin{aligned} & \left(\max(A)\leq\max(B)\land\min(A)<\min(B)\right)\text{ or }\\ & \left(\max(A)<\max(B)\land\min(A)\leq\min(B)\right)\text{ implies }A\prec B.\end{aligned} This axiom trivially implies the extension rule and, of course, dominance. Looking again at the family introduced in Example 3, maximal dominance implies all preferences implied by either dominance or by the extension rule, and additionally $$\{1,3\}\prec\{1,4\}$$. Furthermore, if $$\mathcal{X}$$ is sufficiently large or even $$\mathcal{X}=\mathcal{P}(X)\backslash\{\emptyset\}$$, dominance and independence imply maximal dominance.

### Proposition 1

Let $$\mathcal{X}=\mathcal{P}(X)\backslash\{\emptyset\}$$ . Then every transitive relation that satisfies dominance and independence also satisfies maximal dominance and independence.

### Proof

Let $$\preceq$$ be a transitive relation that satisfies dominance and independence. We show that $$\preceq$$ satisfies maximal dominance using the following observation due to Kannai and Peleg [9]:

Observation: $$A\sim\{\min(A),\max(A)\}$$.

We can assume w.l.o.g. that $$A$$ has more than two elements. We enumerate $$A$$ by $$A=\{a_{1},a_{2},\dots,a_{k}\}$$ such that $$a_{i}<a_{j}$$ holds for all $$i<j$$. Using transitivity and dominance, it is easy to see that $$\{a_{1}\}\prec\{a_{1},a_{2},\dots,a_{k-1}\}$$ holds. This implies, by independence, $$\{a_{1},a_{k}\}\preceq A$$.
Analogously, we get $$\{a_{2},a_{3},\dots,a_{k}\}\prec\{a_{k}\}$$ and $$A\preceq\{a_{1},a_{k}\}$$, and therefore $$A\sim\{a_{1},a_{k}\}=\{\min(A),\max(A)\}$$.◊ Using this observation, we can prove that $$\max(A)=\max(B)$$ and $$\min(A)<\min(B)$$ implies $$A\prec B$$ by the following argument: \begin{aligned} A & \sim\{\min(A),\max(A)\}\sim\{\min(A),\min(B),\max(A)\}\\ & \prec\{\min(B),\max(A)\}=\{\min(B),\max(B)\}\sim B.\end{aligned} The other case is proven analogously; hence $$\preceq$$ satisfies maximal dominance.□ It would be possible to define several other versions of dominance of intermediate strength; we will only consider dominance and maximal dominance. As we will see, our results justify this approach: in particular, both versions yield the same complexity results.

## Main Results

We studied eight problems in total, as defined below. Our results are summarized in Table 1.

### Problem 1 (The Partial (Maximal) Dominance Strict Independence problem)

Given a linearly ordered set $$X$$ and a set $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$, decide if there is a partial order $$\prec$$ on $$\mathcal{X}$$ satisfying (maximal) dominance and strict independence.

### Problem 2 (The Partial (Maximal) Dominance Independence problem)

Given a linearly ordered set $$X$$ and a set $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$, decide if there is a preorder $$\preceq$$ on $$\mathcal{X}$$ satisfying (maximal) dominance and independence.

### Problem 3 (The (Maximal) Dominance Strict Independence problem)

Given a linearly ordered set $$X$$ and a set $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$, decide if there is a strict total order $$\prec$$ on $$\mathcal{X}$$ satisfying (maximal) dominance and strict independence.
### Problem 4 (The (Maximal) Dominance Independence problem)

Given a linearly ordered set $$X$$ and a set $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$, decide if there is a total order $$\preceq$$ on $$\mathcal{X}$$ satisfying (maximal) dominance and independence.

### Partial Orders

First, we consider the Partial (Maximal) Dominance Independence problem. We can define a preorder that satisfies independence and maximal dominance (and therefore also dominance) for every $$\mathcal{X}$$.

### Definition 3

Given a set $$X$$, a linear order $$<$$ on $$X$$ and a family $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$, we define a relation $$\preceq_{m}$$ as $$A\preceq_{m}B$$ iff $$\max(A)\leq\max(B)$$ and $$\min(A)\leq\min(B)$$. Observe that it is obviously possible, given $$X$$, $$<$$ and $$\mathcal{X}$$, to construct $$\preceq_{m}$$ in polynomial time.

### Theorem 4.1

For every linearly ordered $$X$$ and every family $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$ , $$\preceq_{m}$$ is a preorder and satisfies maximal dominance and independence.

### Proof

Obviously, $$\preceq_{m}$$ is reflexive and transitive, because $$\leq$$ is reflexive and transitive. Furthermore, the corresponding strict order $$\prec_{m}$$ satisfies maximal dominance: assume, w.l.o.g., $$\min(A)<\min(B)$$ and $$\max(A)\leq\max(B)$$. Then $$A\preceq_{m}B$$ by definition and $$B\not\preceq_{m}A$$ because $$\min(B)\not\leq\min(A)$$, so $$A\prec_{m}B$$. Finally, assume $$A\prec_{m}B$$ and $$A\cup\{x\},B\cup\{x\}\in\mathcal{X}$$ for $$x\not\in A\cup B$$ and, w.l.o.g., $$\min(A)<\min(B)$$ and $$\max(A)\leq\max(B)$$. If $$\min(A)<x$$, we know $$\min(A\cup\{x\})<\min(B\cup\{x\})$$ and $$\max(A\cup\{x\})\leq\max(B\cup\{x\})$$; if $$x<\min(A)$$, we get $$\min(A\cup\{x\})\leq\min(B\cup\{x\})$$ and $$\max(A\cup\{x\})\leq\max(B\cup\{x\})$$. Hence, $$A\cup\{x\}\preceq_{m}B\cup\{x\}$$.□

### Example 4

Consider once again the family from Example 3.
$$\preceq_{m}$$ consists of the following preferences on that family: \begin{aligned} \{1,3\} & \sim_{m}\{1,2,3\}\prec_{m}\{1,3,4\}\sim_{m}\{1,4\}\prec_{m}\{4\}, \\ & \{1,3\}\sim_{m}\{1,2,3\}\prec_{m}\{2,3\}\prec_{m}\{3\}\prec_{m}\{4\}. \end{aligned} Next, we consider the Partial Dominance Strict Independence problem. As we have seen in Examples 1 and 2, only some families $$\mathcal{X}$$ allow such an order. In order to decide whether a family admits such a partial order, we build a minimal transitive relation satisfying dominance and strict independence. As a first step, we build a minimal transitive relation satisfying dominance alone. It is worth noting that a very similar relation can be defined for maximal dominance; with this relation, all results in this section can be proven for maximal dominance in the same way.

### Definition 4

Given a set $$X$$, a linear order $$<$$ on $$X$$ and a family $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$, we define a relation $$\prec_{d}$$ on $$\mathcal{X}$$ in the following way: If $$A,A\cup\{x\}\in\mathcal{X}$$, then

1. $$A\prec_{d}A\cup\{x\}$$ if $$y<x$$ for all $$y\in A$$.
2. $$A\cup\{x\}\prec_{d}A$$ if $$x<y$$ for all $$y\in A$$.

We define the relation $$\prec_{d}^{t}$$ on $$\mathcal{X}$$ by $$\prec_{d}^{t}:=\mathit{trcl}(\prec_{d})$$. This relation has the following useful property.

### Proposition 2

For every linearly ordered set $$X$$ and every family $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$ , $$\prec_{d}^{t}$$ is a partial order, and a partial order on $$\mathcal{X}$$ satisfies dominance if and only if it extends $$\prec_{d}^{t}$$ .

### Proof

Obviously, $$\prec_{d}^{t}$$ is transitive. Furthermore, $$\prec_{d}^{t}$$ is irreflexive: every $$\prec_{d}$$-step weakly increases both $$\min$$ and $$\max$$ and strictly increases at least one of them, so $$A\prec_{d}^{t}B$$ implies $$\max(A)<\max(B)$$ or $$\min(A)<\min(B)$$, and $$<$$ is irreflexive.
By definition, a relation satisfies dominance if and only if it extends $$\prec_{d}$$, and a transitive relation extending $$\prec_{d}$$ also extends $$\prec_{d}^{t}$$ by the minimality of $$\mathit{trcl}$$.□ We want to extend this relation to a minimal relation for strict independence and dominance.

### Definition 5

Given a set $$X$$, a linear order $$<$$ on $$X$$ and a family $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$, we build a relation $$\prec_{\infty}$$ on $$\mathcal{X}$$ by induction. First, we set $$\prec_{0}^{t}:=\prec_{d}^{t}$$. Now let $$\prec_{n}^{t}$$ be defined. For $$\prec_{n+1}$$ we select sets $$A,B,A\backslash\{x\},B\backslash\{x\}\in\mathcal{X}$$ with $$x\in A\cap B$$ such that $$A\backslash\{x\}\prec_{n}^{t}B\backslash\{x\}$$ but not $$A\prec_{n}^{t}B$$ holds, and set $$C\prec_{n+1}D$$ if $$C\prec_{n}^{t}D$$ or both $$C=A$$ and $$D=B$$ hold. Then, we set $$\prec_{n+1}^{t}:=\mathit{trcl}(\prec_{n+1})$$. In the end, we set $$\prec_{\infty}=\bigcup_{n}\prec^{t}_{n}$$.

### Example 5

Consider the family from Example 3, i. e., $$\mathcal{X}=\{\{3\},\{4\},\{1,3\},\{2,3\},\{1,4\},\{1,2,3\},\{1,3,4\}\}.$$ Then, $$\prec_{\infty}$$ consists of the following preferences: \begin{aligned} & \{1,3\}\prec_{\infty}\{1,3,4\},\qquad\{1,4\}\prec_{\infty}\{4\},\\ & \{1,2,3\}\prec_{\infty}\{2,3\}\prec_{\infty}\{3\},\qquad\{1,3\}\prec_{\infty}\{3\},\\ & \{1,2,3\}\prec_{\infty}\{1,3\},\end{aligned} together with the pairs $$\{1,2,3\}\prec_{\infty}\{3\}$$ and $$\{1,2,3\}\prec_{\infty}\{1,3,4\}$$ implied by transitivity. Here, $$\{1,2,3\}\prec_{\infty}\{1,3\}$$ is obtained from $$\{2,3\}\prec_{\infty}\{3\}$$ by the strict-independence step of the construction; all other basic pairs stem from dominance. In order to prove that this is actually a minimal order for dominance and strict independence, we have to introduce another concept, which we call links.

### Definition 6

A $$\prec_{\infty}$$-link from $$A$$ to $$B$$ in $$\mathcal{X}$$ is a sequence $$A=:C_{0},C_{1},\dots,C_{n}:=B$$ with $$C_{i}\in\mathcal{X}$$ for all $$i\leq n$$ such that, for all $$i<n$$, either $$C_{i}\prec_{d}C_{i+1}$$ holds or there is a $$\prec_{\infty}$$-link from $$C_{i}\backslash\{x\}$$ to $$C_{i+1}\backslash\{x\}$$ for some $$x\in X$$. We show that $$\prec_{\infty}$$-links indeed characterize $$\prec_{\infty}$$.
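Before doing so, note that Definitions 4 and 5 are directly executable. A naive fixpoint sketch (deliberately unoptimized; the function names are ours), which together with an irreflexivity check also yields the decision procedure of Corollary 1:

```python
# Naive fixpoint computation of the relation from Definition 5: start with
# the dominance pairs of Definition 4, then close under transitivity and
# the strict-independence lifting until nothing changes.
def prec_infty(X, family):
    family = {frozenset(A) for A in family}
    rel = set()
    for A in family:                                  # base relation  prec_d
        for x in X - A:
            B = A | {x}
            if B in family:
                if all(y < x for y in A):
                    rel.add((A, B))                   # A  prec_d  A ∪ {x}
                if all(x < y for y in A):
                    rel.add((B, A))                   # A ∪ {x}  prec_d  A
    changed = True
    while changed:
        changed = False
        for A, B in list(rel):                        # transitive closure
            for C, D in list(rel):
                if B == C and (A, D) not in rel:
                    rel.add((A, D)); changed = True
        for A, B in list(rel):                        # strict-independence lifting
            for x in X - (A | B):
                Ax, Bx = A | {x}, B | {x}
                if Ax in family and Bx in family and (Ax, Bx) not in rel:
                    rel.add((Ax, Bx)); changed = True
    return rel

def admits_partial_order(X, family):                  # irreflexivity test
    return all(A != B for A, B in prec_infty(X, family))
```

For the family of Example 3 the computed relation is irreflexive, while for $$\mathcal{P}(\{1,2,3\})\backslash\{\emptyset\}$$ it contains a reflexive pair, matching Examples 2 and 1 respectively.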
### Lemma 1

For $$A,B\in\mathcal{X}$$ , $$A\prec_{\infty}B$$ implies that there is a $$\prec_{\infty}$$ -link from $$A$$ to $$B$$ ; and if there is a $$\prec_{\infty}$$ -link from $$A$$ to $$B$$ , then $$A\prec^{*}B$$ holds for every transitive relation $$\prec^{*}$$ that satisfies dominance and strict independence. In order to prove this result, we need another definition.

### Definition 7

For every pair $$A\prec_{\infty}B$$, there is a minimal $$k$$ such that $$A\prec_{k}^{t}B$$ holds. We call this the $$\prec_{\infty}$$-rank of the pair. Furthermore, we define the rank $$\text{rank}(C_{1},C_{2},\dots,C_{n})$$ of a $$\prec_{\infty}$$-link $$C_{1},C_{2},\dots,C_{n}$$ from $$C_{1}$$ to $$C_{n}$$:

• $$\text{rank}^{*}(C_{i},C_{i+1})=0$$ if $$C_{i}\prec_{d}C_{i+1}$$,
• $$\text{rank}^{*}(C_{i},C_{i+1})=\text{rank}(C_{i}\backslash\{x\},C_{i+1}\backslash\{x\})$$ otherwise, where $$\text{rank}(C_{i}\backslash\{x\},C_{i+1}\backslash\{x\})$$ denotes the minimal rank of a $$\prec_{\infty}$$-link from $$C_{i}\backslash\{x\}$$ to $$C_{i+1}\backslash\{x\}$$,
• $$\text{rank}(C_{1},C_{2},\dots,C_{n})=\max\{\text{rank}^{*}(C_{i},C_{i+1})\mid i<n\}+1$$.

Now we can prove Lemma 1:

### Proof

Assume $$A\prec_{\infty}B$$. We prove that a $$\prec_{\infty}$$-link exists by induction on the $$\prec_{\infty}$$-rank of $$A,B$$. If $$A\prec_{d}^{t}B$$, then there is a sequence $$A=C_{1},C_{2},\dots,C_{n}=B$$ such that $$C_{i}\prec_{d}C_{i+1}$$ holds for all $$i<n$$, hence there is a $$\prec_{\infty}$$-link from $$A$$ to $$B$$. Now assume that the pair $$A,B$$ has $$\prec_{\infty}$$-rank $$k$$ and that for every pair $$C,D$$ with $$\prec_{\infty}$$-rank $$k-1$$ or less there is a $$\prec_{\infty}$$-link from $$C$$ to $$D$$. There is a sequence $$A=C_{0}\prec_{k}C_{1}\dots C_{n-1}\prec_{k}C_{n}=B$$. For every $$i<n$$, either $$C_{i}\prec_{d}C_{i+1}$$ holds, or $$C_{i}\prec_{k-1}^{t}C_{i+1}$$ holds, which yields a $$\prec_{\infty}$$-link from $$C_{i}$$ to $$C_{i+1}$$ by the induction hypothesis, or $$(C_{i},C_{i+1})$$ is the newly selected pair, i. e., $$C_{i}\backslash\{y\}\prec_{k-1}^{t}C_{i+1}\backslash\{y\}$$ holds for some $$y$$, which implies by induction that there is a $$\prec_{\infty}$$-link from $$C_{i}\backslash\{y\}$$ to $$C_{i+1}\backslash\{y\}$$. Concatenating these pieces gives a $$\prec_{\infty}$$-link from $$A$$ to $$B$$.
Now, let $$\prec$$ be a transitive relation that satisfies dominance and strict independence, and assume there is a $$\prec_{\infty}$$-link $$A=C_{1},C_{2},\dots,C_{n}=B$$ from $$A$$ to $$B$$. We prove $$A\prec B$$ by induction on the rank of the $$\prec_{\infty}$$-link. First, assume $$\text{rank}(C_{1},C_{2},\dots,C_{n})=1$$; then $$C_{i}\prec_{d}C_{i+1}$$ holds for all $$i<n$$, hence $$A\prec B$$ holds by dominance and transitivity. Now assume $$\text{rank}(C_{1},C_{2},\dots,C_{n})=k$$ and that for all $$\prec_{\infty}$$-links with $$\text{rank}(C^{*}_{1},C^{*}_{2},\dots,C^{*}_{n})<k$$ we know $$C_{1}^{*}\prec C_{n}^{*}$$. By induction, for every $$i<n$$ either $$C_{i}\prec_{d}C_{i+1}$$ or $$C_{i}\backslash\{x\}\prec C_{i+1}\backslash\{x\}$$ holds. This implies that $$C_{i}\prec C_{i+1}$$ holds for all $$i<n$$, because $$\prec$$ satisfies dominance and strict independence. Therefore $$A\prec B$$ by transitivity.□ Using this lemma, we can now show that $$\prec_{\infty}$$ is indeed a minimal relation for dominance and strict independence.

### Theorem 4.2

Given a set $$X$$ , a linear order $$<$$ on $$X$$ and a family $$\mathcal{X}\subseteq\mathcal{P}(X)\backslash\{\emptyset\}$$ , there is a partial order on $$\mathcal{X}$$ that satisfies dominance and strict independence if and only if $$\prec_{\infty}$$ is irreflexive on $$\mathcal{X}$$ .

### Proof

$$\prec_{\infty}$$ satisfies dominance as it extends $$\prec_{d}^{t}$$. By construction it also satisfies strict independence and transitivity: $$A_{1}\prec_{\infty}A_{2}\prec_{\infty}\dots\prec_{\infty}A_{k}$$ implies $$A_{1}\prec^{t}_{n}A_{2}\prec^{t}_{n}\dots\prec^{t}_{n}A_{k}$$ for some $$n\in\mathbb{N}$$, but then $$A_{1}\prec_{n}^{t}A_{k}$$ holds by the transitivity of $$\prec_{n}^{t}$$ and therefore $$A_{1}\prec_{\infty}A_{k}$$. Now assume $$A\prec_{\infty}B$$, i. e., $$A\prec^{t}_{n}B$$ for some $$n$$, and let $$x\not\in A\cup B$$ with $$A\cup\{x\},B\cup\{x\}\in\mathcal{X}$$ but $$A\cup\{x\}\not\prec_{n}^{t}B\cup\{x\}$$.
Then $$A,B,A\cup\{x\},B\cup\{x\}$$ is picked for some $$l$$ with $$n<l$$ and $$A\cup\{x\}\prec_{l}B\cup\{x\}$$ is set, hence $$A\cup\{x\}\prec_{\infty}B\cup\{x\}$$. Therefore, if $$\prec_{\infty}$$ is irreflexive, it is a partial order satisfying dominance and strict independence. On the other hand, if $$\prec_{\infty}$$ is not irreflexive, no strict partial order can extend it. But every strict partial order on $$\mathcal{X}$$ satisfying dominance and strict independence must be an extension of $$\prec_{\infty}$$. Assume otherwise, i. e., there is a strict partial order $$\prec$$ on $$\mathcal{X}$$ satisfying dominance and strict independence that does not extend $$\prec_{\infty}$$; then there are sets $$A,B\in\mathcal{X}$$ such that $$A\prec_{\infty}B$$ holds but not $$A\prec B$$. By Lemma 1 there is a $$\prec_{\infty}$$-link from $$A$$ to $$B$$. This implies, again by Lemma 1, $$A\prec B$$, because $$\prec$$ is transitive and satisfies dominance and strict independence. Contradiction. Therefore, no partial order on $$\mathcal{X}$$ can satisfy dominance and strict independence if $$\prec_{\infty}$$ is not irreflexive.□ Using this result, we can define a polynomial-time algorithm for the Partial Dominance Strict Independence problem.

### Corollary 1

The Partial Dominance Strict Independence problem is in $$P$$ .

### Proof

Computing $$\prec_{\infty}$$ can obviously be done in polynomial time, because each round of the construction adds at least one of the at most $$|\mathcal{X}\times\mathcal{X}|=|\mathcal{X}|^{2}$$ pairs, so it stops after at most $$|\mathcal{X}|^{2}$$ steps. Then checking whether $$\prec_{\infty}$$ is irreflexive only requires checking whether $$A\prec_{\infty}A$$ holds for some $$A$$.□ Finally, links give us an easy characterization of the sets $$\mathcal{X}$$ for which $$\prec_{\infty}$$ is irreflexive.

### Corollary 2

$$\prec_{\infty}$$ is irreflexive if and only if there is no set $$A\in\mathcal{X}$$ such that there is a $$\prec_{\infty}$$ -link from $$A$$ to $$A$$ .
### Proof

$$\prec_{\infty}$$ is transitive and satisfies dominance and strict independence, hence Lemma 1 tells us that $$A\prec_{\infty}A$$ holds if and only if there is a $$\prec_{\infty}$$-link from $$A$$ to $$A$$.□

### Total Orders

We show that it is, in general, not possible to construct a (strict) total order satisfying both (maximal) dominance and (strict) independence deterministically in polynomial time (unless $$P=NP$$). We do this by a reduction from betweenness.

### Problem 5 (Betweenness)

Given a set $$V=\{v_{1},v_{2},\dots,v_{n}\}$$ and a set of triples $$R\subseteq V^{3}$$, does there exist a strict total order on $$V$$ such that $$a<b<c$$ or $$a>b>c$$ holds for all $$(a,b,c)\in R$$?

Betweenness is known to be NP-hard [13]. We use this result to show NP-hardness for all four versions of the (Maximal) Dominance (Strict) Independence problem. The idea is, roughly, to represent the elements of $$V$$ by sets which are not directly comparable via the axioms of dominance or independence. Hence, in order to find a total order, we need to guess how these sets are ordered. Starting from this guess, we need to “maximize” this initial order in such a way that for each triple $$(a,b,c)$$ both $$a<b>c$$ and $$a>b<c$$ would lead to a cycle in every order satisfying dominance and independence. However, this requires a number of carefully chosen additional sets, as we will detail below.

### Theorem 4.3

The Maximal Dominance Strict Independence problem, the Dominance Strict Independence problem, the Maximal Dominance Independence problem and the Dominance Independence problem are NP-complete.

It is clear that all four problems are in NP. We can guess a binary relation and then check if it has all the properties we want. It is well known that checking for transitivity and (ir)reflexivity can be done in polynomial time. Checking (maximal) dominance only requires an easy check for every pair of sets, and (strict) independence an equally easy check for every quadruple of sets.
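For illustration only — this is our sketch, not part of the proof — the certificate check behind the NP-membership argument can be written out in code. The sketch assumes sets are represented as Python frozensets of integers, the guessed relation as a set of ordered pairs $$(A,B)$$ read as $$A\prec B$$, and it uses one concrete formalization of the axioms: for dominance, adding an element above $$\max(A)$$ (or removing $$\min(A)$$) yields a strictly better set; for strict independence, $$A\prec B$$ forces $$A\cup\{x\}\prec B\cup\{x\}$$ for $$x\not\in A\cup B$$. All names are ours.

```python
from itertools import product

def is_valid_order(X, family, rel):
    """Verify that `rel` (a set of pairs (A, B), read as A < B) is a strict
    partial order on `family` that satisfies dominance and strict
    independence.  Every loop ranges over pairs or quadruples of sets,
    so the whole check runs in time polynomial in |family| and |X|."""
    fam = set(family)
    # irreflexivity
    if any((A, A) in rel for A in fam):
        return False
    # transitivity: every two-step chain must have its shortcut in rel
    for (A, B), (C, D) in product(rel, repeat=2):
        if B == C and (A, D) not in rel:
            return False
    # dominance (assumed form): A < A ∪ {z} for z > max(A), and
    # A ∪ {z} < A for z < min(A), whenever both sets belong to the family
    for A in fam:
        for z in X:
            if z in A:
                continue
            B = A | {z}
            if B in fam:
                if z > max(A) and (A, B) not in rel:
                    return False
                if z < min(A) and (B, A) not in rel:
                    return False
    # strict independence: A < B forces A ∪ {x} < B ∪ {x} for x outside A ∪ B
    for (A, B) in rel:
        for x in X:
            if x not in A and x not in B:
                Ax, Bx = A | {x}, B | {x}
                if Ax in fam and Bx in fam and (Ax, Bx) not in rel:
                    return False
    return True
```

Deciding membership then amounts to guessing `rel` and running this polynomial-time check, which is exactly the argument above.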
It is clear that this can be done in polynomial time. In what follows, we split the proof of the NP-hardness into four parts, one for each problem.

### Proof

Let $$(V,R)$$ be an instance of betweenness with $$V=\{v_{1},v_{2},\ldots,v_{n}\}$$. We construct an instance $$(X,<,\mathcal{X})$$ of the Maximal Dominance Strict Independence problem. We set $$X=\{1,2,\dots,N\}$$, equipped with the usual linear order, where $$N=8n^{3}+2n+2$$. Then, we construct the family $$\mathcal{X}$$ stepwise. The family contains for every $$v_{i}\in V$$ a set $$V_{i}$$ of the following form (see Figure 3): $$V_{i}:=\{1,N\}\cup\{i+1,i+2,\dots,N-i\}.$$ Furthermore, for every triple from $$R$$ we want to enforce $$A\prec B\prec C$$ or $$A\succ B\succ C$$ by adding two families of sets as shown in Figure 1 and Figure 2 with $$q,x,y,z\in X$$. The solid arrows represent preferences that are forced through maximal dominance and strict independence. The family in Figure 1 makes sure that every strict total order satisfying independence that contains $$A\prec B$$ must also contain $$B\prec C$$. Similarly, the family in Figure 2 makes sure that $$A\succ B$$ leads to $$B\succ C$$. We implement this idea for all triples inductively. For every $$1\leq i\leq|R|$$, pick a triple $$(v_{l},v_{j},v_{m})\in R$$ and set $$k=n+1+8i$$. Let $$(A,B,C)=(V_{l},V_{j},V_{m})$$ be the triple of sets coding the triple of elements $$(v_{l},v_{j},v_{m})$$. We add the following sets: \begin{aligned} & A\backslash\{k\},B\backslash\{k\},B\backslash\{k+1\},C\backslash\{k+1\}, \\ & A\backslash\{k+2\},B\backslash\{k+2\},B\backslash\{k+3\},C\backslash\{k+3\}.\end{aligned} These sets correspond to the sets $$A\backslash\{x\},B\backslash\{x\},\dots,C\backslash\{q\}$$ in Figure 1 and Figure 2. Observe that the inductive construction guarantees that every constructed set is unique.
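For concreteness, the base sets $$V_{i}$$ and the eight gadget sets added per triple can be generated as in the following sketch. This is our own illustration with made-up function names; the additional "technical" sets added later in the proof are omitted.

```python
def build_gadget_family(n, triples):
    r"""Build the sets V_i and, for each betweenness triple (l, j, m),
    the eight sets A\{k}, B\{k}, B\{k+1}, C\{k+1}, A\{k+2}, B\{k+2},
    B\{k+3}, C\{k+3}, following the formulas in the text:
    N = 8n^3 + 2n + 2, V_i = {1, N} | {i+1, ..., N-i}, k = n + 1 + 8i."""
    N = 8 * n**3 + 2 * n + 2
    V = {i: frozenset({1, N} | set(range(i + 1, N - i + 1)))
         for i in range(1, n + 1)}
    family = set(V.values())
    for i, (l, j, m) in enumerate(triples, start=1):
        k = n + 1 + 8 * i
        A, B, C = V[l], V[j], V[m]
        # the eight removed-element gadget sets for this triple
        for S, off in ((A, 0), (B, 0), (B, 1), (C, 1),
                       (A, 2), (B, 2), (B, 3), (C, 3)):
            family.add(S - {k + off})
    return family
```

For a single triple this yields the $$n$$ base sets plus eight pairwise distinct gadget sets, matching the uniqueness observation above.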
We now have to force the preferences \begin{aligned} & B\backslash\{k+1\}\prec A\backslash\{k\},\quad B\backslash\{k\}\prec C\backslash\{k+1\}, \\ & A\backslash\{k+2\}\prec B\backslash\{k+3\},\quad C\backslash\{k+3\}\prec B\backslash\{k+2\}.\end{aligned} For technical reasons, we add sets $$A\backslash\{k,k+4\},B\backslash\{k+1,k+4\}$$. Then, observe that, by construction, either $$B\backslash\{1,k+1,k+4\}\prec A\backslash\{1,k,k+4\}$$ or $$B\backslash\{k+1,k+4,N\}\prec A\backslash\{k,k+4,N\}$$ is implied by maximal dominance. We add $$A\backslash\{1,k,k+4\}$$ and $$B\backslash\{1,k+1,k+4\}$$ in the first case and $$A\backslash\{k,k+4,N\}$$ and $$B\backslash\{k+1,k+4,N\}$$ in the second case (see Figure 4). This ensures $$B\backslash\{k+1\}\prec A\backslash\{k\}$$ by strict independence. In the same way, we can force the other preferences using $$k+5,k+6$$ and $$k+7$$ instead of $$k+4$$. We repeat this with a new triple $$(v_{i}^{\prime},v_{j}^{\prime},v_{m}^{\prime})\in R$$ until we have treated all triples in $$R$$. Observe that there are at most $$n^{3}$$ triples; thus, for every triple, the values $$k,\dots,k+7$$ lie between $$n+1$$ and $$N-n$$, hence are elements of every $$V_{i}$$. In total, we add $$24$$ sets per triple. Therefore, $$\mathcal{X}$$ contains at most $$n+24n^{3}$$ sets. It is easy to see that, by construction, for every strict total order on $$\mathcal{X}$$ satisfying maximal dominance and strict independence, we have \begin{aligned} & A\backslash\{k\}\prec B\backslash\{k+1\},\quad C\backslash\{k+1\}\prec B\backslash\{k\}, \\ & B\backslash\{k+3\}\prec A\backslash\{k+2\},\quad B\backslash\{k+2\}\prec C\backslash\{k+3\}.\end{aligned} Now assume there is a strict total order on $$\mathcal{X}$$ satisfying maximal dominance and strict independence. We claim that the relation defined by $$v_{i}<v_{j}$$ iff $$V_{i}\prec V_{j}$$ is a positive witness for $$(V,R)$$. By definition this is a strict total order.
So assume there is a triple $$(a,b,c)$$ such that $$a>b<c$$ or $$a<b>c$$ holds. We treat the first case in detail: $$a>b<c$$ implies $$A\succ B\prec C$$. This implies, by the strictness of $$\prec$$ and strict independence, $$A\backslash\{k\}\succ B\backslash\{k\}$$ and $$B\backslash\{k+1\}\prec C\backslash\{k+1\}$$. However, then \begin{aligned} & A\backslash\{k\}\succ B\backslash\{k\}\succ C\backslash\{k+1\} \\ & \succ B\backslash\{k+1\}\succ A\backslash\{k\}\end{aligned} contradicts the assumption that $$\prec$$ is transitive and irreflexive. Similarly, the second case leads to a contradiction. Now assume that there is a strict total order on $$V$$ satisfying the restrictions from $$R$$. We use this to construct an order on $$\mathcal{X}$$. We set $$V_{i}\prec V_{j}$$ iff $$v_{i}<v_{j}$$ holds. Furthermore, we set $$A\prec B$$ for all $$A,B\in\mathcal{X}$$ if it is implied by dominance. Then, we apply strict independence twice and “reverse” strict independence once, i.e., $$A\prec B$$ implies $$A\backslash\{x\}\prec B\backslash\{x\}$$ for $$A,B,A\backslash\{x\},B\backslash\{x\}\in\mathcal{X}$$. We claim that all possible instances of strict independence are already decided by this order. If $$A=V_{i}$$ for $$i\leq n$$, then there is no set $$A\cup\{x\}$$ in $$\mathcal{X}$$. If $$A=V_{i}\backslash\{x\}$$ for some $$i\leq n$$ and $$x\in X$$, then $$x$$ is the only element of $$X$$ such that $$A\cup\{x\}\in\mathcal{X}$$ holds. But then there can only be one other set $$B$$ with $$B\cup\{x\}\in\mathcal{X}$$ and $$B=V_{j}\backslash\{x\}$$; hence a preference between $$A$$ and $$B$$ was already introduced by reverse strict independence. Analogously, in the cases $$A=V_{i}\backslash\{x,y\}$$ and $$A=V_{i}\backslash\{x,y,z\}$$ for $$i\leq n$$ and pairwise distinct $$x,y,z\in X$$, every possible instance of strict independence is already decided by dominance and two applications of strict independence.
It is easy to see that this construction does not lead to cycles if we start with a positive instance of betweenness: Every set of the form $$A=V_{i}\backslash\{x,y,z\}$$ is only comparable to other sets by maximal dominance. Every set of the form $$A=V_{i}\backslash\{x,y\}$$ is only comparable by maximal dominance or to another set of the same form. The order on sets of this form mirrors the order on sets of the form $$A=V_{i}\backslash\{x,y,z\}$$, which is produced by maximal dominance and hence is cycle-free. Finally, sets of the form $$A=V_{i}\backslash\{x\}$$ or $$A=V_{i}$$ are only comparable to other sets by maximal dominance or if this is intended by the construction. Hence, the order on these sets is cycle-free if we started with a positive instance of betweenness. Finally, we can extend this order to a total order because extensions do not produce new instances of strict independence.□

### Proof

We construct an instance $$(X,<,\mathcal{X})$$ of the Dominance Strict Independence problem in a similar fashion as above. We take the same $$X$$ and $$<$$ and add the same sets to $$\mathcal{X}$$. In order to make the reduction work for the Dominance Strict Independence problem, we have to add more sets. Observe that maximal dominance is only needed in the reduction for the Maximal Dominance Strict Independence problem to introduce preferences like $$A\backslash\{1,k,k+4\}\prec B\backslash\{1,k+1,k+4\}$$. We can enforce these preferences also using strict independence and regular dominance, with a construction as in the proof of Proposition 1.
For every $$k$$ used in the reduction, let $$(A,B,C)$$ be the triple of sets for which $$k$$ appears in the reduction and let $$(X_{k},Y_{k})$$ be one of the following pairs: \begin{aligned} & (B\backslash\{k+1,k+4,z_{1}\},A\backslash\{k,k+4,z_{1}\}),\\ & (B\backslash\{k,k+5,z_{2}\},C\backslash\{k+1,k+5,z_{2}\}),\\ & (A\backslash\{k+2,k+6,z_{3}\},B\backslash\{k+3,k+6,z_{3}\}),\\ & (C\backslash\{k+3,k+7,z_{4}\},B\backslash\{k+2,k+7,z_{4}\})\end{aligned} with $$z_{i}\in\{1,N\}$$ chosen such that $$X_{k},Y_{k}\in\mathcal{X}$$ hold. We want to enforce $$X_{k}\prec Y_{k}$$. By definition, either $$\max(X_{k})=\max(Y_{k})$$ and $$\min(X_{k})<\min(Y_{k})$$, or $$\max(X_{k})<\max(Y_{k})$$ and $$\min(X_{k})=\min(Y_{k})$$. Assume, w.l.o.g., that $$\max(X_{k})=\max(Y_{k})$$ and $$\min(X_{k})<\min(Y_{k})$$, and let $$X_{k}=\{x_{1},x_{2},\dots,x_{l}\}$$ and $$Y_{k}=\{y_{1},y_{2},\dots,y_{m}\}$$ be enumerations of $$X_{k}$$ resp. $$Y_{k}$$ such that $$i<j$$ implies $$x_{i}<x_{j}$$ resp. $$y_{i}<y_{j}$$. We add $$\{x_{l}\},\{x_{l-1},x_{l}\},\dots,\{x_{2},\dots,x_{l}\}$$ and $$\{x_{1},x_{l}\}$$ to $$\mathcal{X}$$. This forces by dominance $$\{x_{2},\dots,x_{l}\}\prec\dots\prec\{x_{l-1},x_{l}\}\prec\{x_{l}\}$$ and hence by transitivity and strict independence $$X_{k}\prec\{x_{1},x_{l}\}$$. Analogously, we can enforce $$\{x_{1},y_{1},y_{m}\}\prec\{x_{1}\}\cup Y_{k}$$ by adding $$\{x_{1},y_{1}\},\{x_{1},y_{1},y_{2}\},\dots,\{x_{1},y_{1},\dots,y_{m-1}\}$$ as well as $$\{x_{1},y_{1},y_{m}\}$$ and $$\{x_{1}\}\cup Y_{k}$$ to $$\mathcal{X}$$. Finally, we add $$\{x_{1},y_{1},y_{m}\}$$, $$\{x_{1}\}$$ and $$\{x_{1},y_{1}\}$$, enforcing $$\{x_{1}\}\prec\{x_{1},y_{1}\}$$ by dominance and hence $$\{x_{1},y_{m}\}\prec\{x_{1},y_{1},y_{m}\}$$ by strict independence.
Then, we have $$X_{k}\prec Y_{k}$$ by $$X_{k}\prec\{x_{1},x_{l}\}\prec\{x_{1},y_{1},x_{l}\}\prec\{x_{1}\}\cup Y_{k}\prec Y_{k}$$ (note that $$x_{l}=y_{m}$$ since $$\max(X_{k})=\max(Y_{k})$$, so $$\{x_{1},y_{1},x_{l}\}=\{x_{1},y_{1},y_{m}\}$$). The process of producing a positive instance of betweenness from a positive instance of the Dominance Strict Independence problem is the same as for the Maximal Dominance Strict Independence case. However, in order to construct a total order on $$\mathcal{X}$$, we have to do a bit more. We take the same steps as in the Maximal Dominance Strict Independence case (including the closure under maximal dominance) but additionally, for $$A,B\in\mathcal{X}$$ with $$\max(A)=\max(B)$$, $$\min(A)=\min(B)$$ and $$|A|,|B|\leq 3$$, we set $$A\prec B$$ if

1. $$\min(A)=1$$ and $$|A|=2$$ and $$|B|=3$$,
2. $$\max(A)=N$$ and $$|A|=3$$ and $$|B|=2$$, or
3. $$|A|=|B|=3$$ and $$A\backslash B<B\backslash A$$.

This order, together with a positive instance of betweenness, maximal dominance and (reverse) strict independence, is cycle-free and decides all possible applications of strict independence. Therefore, we can construct a total order on $$\mathcal{X}$$ satisfying strict independence and dominance.□

### Proof

We have to adapt the reduction for the Maximal Dominance Strict Independence problem above in two places. We have to change the way we enforce the strict preferences in Figure 1 and Figure 2, and we have to make sure that the order restricted to the sets $$V_{1},V_{2},\dots,V_{n}$$ is strict. To enforce, without strict independence, a strict preference between two sets that is not forced by maximal dominance, we define for every pair $$A,B\in\mathcal{X}$$ with $$\min(B)\leq\min(A)$$, $$\max(A)\leq\max(B)$$ and $$2\leq\max(A)-\min(A)$$ a family of sets $$\mathcal{S}(A,B)$$ forcing $$A\prec B$$.
$$\mathcal{S}(A,B)$$ contains the following sets \begin{aligned} & \{x_{AB}\},\{y_{AB}\},\{x_{AB},z_{AB}\},\{y_{AB},z_{AB}^{*}\}, \\ & A\cup\{z_{AB}\},B\cup\{z_{AB}^{*}\} \end{aligned} where $$\min(A)<x_{AB}<y_{AB}<\max(A)$$, $$\max(B)<z_{AB}$$ and $$z_{AB}^{*}<\min(B)$$ hold. Then $$A\cup\{z_{AB}\}\prec\{x_{AB},z_{AB}\}$$ holds by maximal dominance and, therefore, $$\{x_{AB}\}\not\prec A$$ and hence $$A\preceq\{x_{AB}\}$$ holds by “reverse” independence; analogously, $$\{y_{AB}\}\preceq B$$. Therefore, transitivity implies $$A\prec B$$ by $$A\preceq\{x_{AB}\}\prec\{y_{AB}\}\preceq B$$. Using $$\mathcal{S}(A,B)$$, we can adapt the proof above. We want to construct an instance $$(X,<,\mathcal{X})$$ of the Maximal Dominance Independence problem. We take as $$X$$ again a set of the form $$X=\{1,\dots,N\}$$ with the usual linear order; however, $$N$$ has to be larger than in the Maximal Dominance Strict Independence case. Namely, we set $$N=20n^{3}+28n^{2}+2n+14$$. $$\mathcal{X}$$ contains sets $$V_{1},\dots,V_{n}$$ similar to the ones in the reductions above (see Figure 5). However, they do not have a common smallest element or common largest element; the smallest element of $$V_{1}$$ is $$4n^{3}+4n^{2}+1$$ and its largest element is $$16n^{3}+24n^{2}+2n+14$$, i.e., $$V_{i}:=\{4n^{3}+4n^{2}+i,\dots,16n^{3}+24n^{2}+2n+14-i\}.$$ We assume, for all families $$\mathcal{S}(A,B)$$ and $$\mathcal{S}(C,D)$$ occurring in the reduction, $$p_{AB}\neq q_{CD}$$ for $$p,q\in\{x,y,z\}$$ and $$(A,B)\neq(C,D)$$. Moreover, for every family $$\mathcal{S}(A,B)$$, assume $$5n^{3}+13n^{2}+n+7\leq x_{AB}$$ and $$y_{AB}\leq 16n^{3}+17n^{2}+n+7$$.
For a triple $$(a,b,c)$$ in the instance of betweenness, we add the following sets as in the two previous reductions: \begin{aligned} & A\backslash\{k\},B\backslash\{k\},B\backslash\{k+1\},C\backslash\{k+1\}, \\ & A\backslash\{k+2\},B\backslash\{k+2\},B\backslash\{k+3\},C\backslash\{k+3\}\end{aligned} where we start with $$k=4n^{3}+11n^{2}+n+7$$. We force the same preferences as in the Maximal Dominance Strict Independence case by adding the following families: \begin{aligned} & \mathcal{S}(A\backslash\{k\},B\backslash\{k+1\}),\mathcal{S}(C\backslash\{k+1\},B\backslash\{k\}),\\ & \mathcal{S}(B\backslash\{k+3\},A\backslash\{k+2\}),\mathcal{S}(B\backslash\{k+2\},C\backslash\{k+3\}).\end{aligned} We still have to make sure that the order on the sets $$V_{1},\dots,V_{n}$$ is strict. To achieve this, we use the idea shown in Figure 6, that is, we add for every pair $$V_{i},V_{j}$$ sets that lead to a cycle if both $$V_{i}\preceq V_{j}$$ and $$V_{j}\preceq V_{i}$$ hold. Let $$f(l)=(V_{i},V_{j})$$ be an enumeration of all pairs of sets $$V_{1},V_{2},\dots,V_{n}$$. We add sets $$C_{l},D_{l},E_{l}$$ and $$F_{l}$$ that are contained in the “middle parts” of all sets $$V_{i}$$ such that $$C_{l}\subset F_{l^{\prime}}$$ holds for all $$l^{\prime}<l$$. Moreover, we want the following: \begin{aligned} F_{l}& =E_{l}\backslash\{\max(E_{l}),\text{max}^{\prime}(E_{l}),\text{min}^{\prime}(E_{l}),\min(E_{l})\},\\ E_{l}& =D_{l}\backslash\{\max(D_{l}),\text{max}^{\prime}(D_{l}),\text{min}^{\prime}(D_{l}),\min(D_{l})\},\\ D_{l}& =C_{l}\backslash\{\max(C_{l}),\text{max}^{\prime}(C_{l}),\text{min}^{\prime}(C_{l}),\min(C_{l})\}\end{aligned} where $$\max^{\prime}(X)$$ denotes the second-largest element of $$X$$ and $$\min^{\prime}(X)$$ the second-smallest. We can achieve this by taking, for all $$l\leq n^{2}$$, $$C_{l}:=\{4n^{3}+4n^{2}+n+7l,\dots,16n^{3}+24n^{2}+n+14-7l\}$$ and $$D_{l},E_{l}$$ and $$F_{l}$$ accordingly.
Furthermore, we add sets $$F_{l}\backslash\{\min(F_{l})\}$$, $$V_{i}\cup\{z_{l}\}$$ and $$(F_{l}\backslash\{\min(F_{l})\})\cup\{z_{l}\}$$ for a unique $$z_{l}<\min(V_{1})$$. This ensures $$F_{l}\prec F_{l}\backslash\{\min(F_{l})\}\preceq V_{i}$$. In a similar fashion we can enforce $$V_{i}\prec D_{l}$$, $$E_{l}\prec V_{j}$$ and $$V_{j}\prec F_{l}$$. Then, we add sets $$C_{l}\backslash\{x_{l}\}$$, $$D_{l}\backslash\{x_{l}\}$$, $$E_{l}\backslash\{y_{l}\}$$ and $$F_{l}\backslash\{y_{l}\}$$ for $$x_{l}=5n^{3}+11n^{2}+n+7+l$$ and $$y_{l}=5n^{3}+12n^{2}+n+7+l$$. Furthermore, we enforce $$D_{l}\backslash\{x_{l}\}\prec F_{l}\backslash\{y_{l}\}$$ and $$E_{l}\backslash\{y_{l}\}\prec C_{l}\backslash\{x_{l}\}$$ by adding $$\mathcal{S}(D_{l}\backslash\{x_{l}\},F_{l}\backslash\{y_{l}\})$$ and $$\mathcal{S}(E_{l}\backslash\{y_{l}\},C_{l}\backslash\{x_{l}\})$$. This forces a strict preference between $$V_{i}$$ and $$V_{j}$$. Assume otherwise that $$V_{i}\sim V_{j}$$ holds for a total order satisfying maximal dominance and independence. Then, for $$l$$ such that $$f(l)=(V_{i},V_{j})$$ holds, $$F_{l}\prec V_{i}\preceq V_{j}\prec E_{l}$$ implies $$F_{l}\prec E_{l}$$. This implies $$F_{l}\backslash\{y_{l}\}\preceq E_{l}\backslash\{y_{l}\}$$ because $$E_{l}\backslash\{y_{l}\}\prec F_{l}\backslash\{y_{l}\}$$ would imply $$E_{l}\preceq F_{l}$$, a contradiction. Similarly, $$C_{l}\prec V_{j}\preceq V_{i}\prec D_{l}$$ implies $$C_{l}\backslash\{x_{l}\}\preceq D_{l}\backslash\{x_{l}\}$$. However, then $$C_{l}\backslash\{x_{l}\}\preceq D_{l}\backslash\{x_{l}\}\prec F_{l}\backslash\{y_{l}\}\preceq E_{l}\backslash\{y_{l}\}\prec C_{l}\backslash\{x_{l}\}$$ is a cycle in $$\prec$$, contradicting the assumption that $$\preceq$$ is a total order. It is straightforward to check that this construction yields a valid reduction analogously to the proof above. The key step is to observe that independence can only be applied to the new sets in cases where it is used in the proof. This is clear for the sets pictured in Figure 6.
For the one- and two-element sets this holds because the elements are unique and because no three-element sets are contained in $$\mathcal{X}$$. It remains to check that we can actually pick a unique element every time we want to do this in the reduction. The inner part of $$F_{n^{2}}$$ has $$(16n^{3}+24n^{2}+n+14-7n^{2}-7)-(4n^{3}+4n^{2}+n+7n^{2}+7)=12n^{3}+6n^{2}$$ elements. For every triple, we have to pick $$12$$ unique elements contained in this middle part. These are at most $$12n^{3}$$ elements. Furthermore, for every pair $$(V_{i},V_{j})$$ we have to pick $$6$$ unique elements: two for $$x_{l}$$ and $$y_{l}$$, and four to enforce preferences. There are $$n^{2}$$ such pairs, so we need $$6n^{2}$$ elements. Hence we need at most the $$12n^{3}+6n^{2}$$ elements contained in $$F_{n^{2}}$$. Furthermore, we need for every pair $$(V_{i},V_{j})$$ and every triple $$4$$ elements smaller than $$\min(V_{1})$$ and the same number of elements larger than $$\max(V_{1})$$. This is possible as $$\min(V_{1})=4n^{3}+4n^{2}+1$$ and $$N-\max(V_{1})=(20n^{3}+28n^{2}+2n+14)-(16n^{3}+24n^{2}+2n+14)=4n^{3}+4n^{2}$$.□

### Proof

We construct an instance $$(X,<,\mathcal{X})$$ of the Dominance Independence problem in a similar fashion as above. We take the same $$X$$ and $$<$$ and add the same sets to $$\mathcal{X}$$ as in the Maximal Dominance Independence case. In order to make the reduction work for the Dominance Independence problem, we have to add more sets. Observe that maximal dominance is only needed in the reduction for the Maximal Dominance Independence problem to introduce preferences of the form (1) $$F_{l}\prec F_{l}\backslash\{\min(F_{l})\}$$, (2) $$\{x\}\prec\{y\}$$, and (3) $$R\cup\{z\}\prec S\cup\{z\}$$, for sets $$R,S$$ and $$z\in X$$. We can enforce these preferences also using independence and regular dominance. (1) is implied by dominance anyway. (2) can be forced by adding $$\{x,y\}$$ because $$\{x\}\prec\{x,y\}\prec\{y\}$$ holds by dominance.
Finally, we can enforce (3) using the idea from the Dominance Strict Independence reduction. Assume, w.l.o.g., that $$\min(R)<\min(S)<\max(S)<\max(R)<z$$; the other case is similar. Let $$R=\{r_{1},\dots,r_{l}\}$$ and $$S=\{s_{1},\dots,s_{m}\}$$ be enumerations of $$R$$ (resp. $$S$$) such that $$i<j$$ implies $$r_{i}<r_{j}$$ (resp. $$s_{i}<s_{j}$$). We add $$\{z\},\{r_{l},z\},\dots,$$ $$\{r_{2},\dots,r_{l},z\},\{r_{1},z\}$$ to $$\mathcal{X}$$. This forces $$\{r_{2},\dots,r_{l},z\}\prec\{z\}$$ by dominance and hence by one application of independence $$R\cup\{z\}\preceq\{r_{1},z\}$$. Analogously, we enforce $$\{s_{1},z\}\preceq S\cup\{z\}$$ by adding $$\{s_{1}\},\{s_{1},s_{2}\},\dots,\{s_{1},\dots,s_{m}\},\{s_{1},z\}$$ to $$\mathcal{X}$$. Finally, we add $$\{r_{1}\},\{r_{1},s_{1}\}$$ and $$\{r_{1},s_{1},z\}$$, which leads to $$\{r_{1},z\}\preceq\{r_{1},s_{1},z\}$$. Then we have $$R\cup\{z\}\preceq\{r_{1},z\}\preceq\{r_{1},s_{1},z\}\prec\{s_{1},z\}\preceq S\cup\{z\}$$, hence $$R\cup\{z\}\prec S\cup\{z\}$$. Checking the correctness of this reduction is straightforward. The correctness proof for the Maximal Dominance Independence case can be adapted to the Dominance Independence case in the same way as the proof of the correctness of the Maximal Dominance Strict Independence case was adapted to the Dominance Strict Independence case.□

## Conclusion

We have shown that the problem of deciding whether a linear order can be lifted to a ranking of sets of objects satisfying a form of dominance and a form of independence is in P (or trivial) if we do not require the ranking to be total, and NP-complete if we do. In order to prove P-membership or triviality, we constructed such rankings. Rankings of specific sets are useful in several applications, e.g., to eliminate obviously inferior sets of objects from a set of options. In many applications, the family of sets to be ranked is not given explicitly but implicitly.
We expect that a compact representation of the sets increases the computational complexity of the decision problems studied in this paper. As future work we thus want to investigate the complexity blow-up caused by a compact representation. Furthermore, we would like to characterize families that admit an order satisfying (maximal) dominance and (strict) independence. Moreover, it may be possible to find sufficient but not necessary conditions for the existence of such rankings that can be checked in polynomial time. We aim to find strong forms of such conditions. A related goal is to obtain special classes of families where such a characterization is feasible. A promising candidate is the class of families generated via graphs, where the family is given by the sets of vertices that induce connected subgraphs. Another item on our agenda is to investigate whether the logic proposed in [8] can be used for specific sets of objects as well. Finally, it would be interesting to study some of the other axioms that have been considered in the literature and see how they behave when one has to rank proper subsets of the whole power set of elements.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9823074340820312, "perplexity": 247.6394815269226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711637.64/warc/CC-MAIN-20221210005738-20221210035738-00784.warc.gz"}
http://wiki.stat.ucla.edu/socr/index.php?title=AP_Statistics_Curriculum_2007_Normal_Prob&diff=6020&oldid=4110
# AP Statistics Curriculum 2007 Normal Prob (Difference between revisions) Revision as of 18:49, 14 June 2007 (view source)IvoDinov (Talk | contribs)← Older edit Revision as of 20:17, 31 January 2008 (view source)IvoDinov (Talk | contribs) Newer edit → Line 1: Line 1: ==[[AP_Statistics_Curriculum_2007 | General Advance-Placement (AP) Statistics Curriculum]] - Nonstandard Normal Distribution & Experiments: Finding Probabilities== ==[[AP_Statistics_Curriculum_2007 | General Advance-Placement (AP) Statistics Curriculum]] - Nonstandard Normal Distribution & Experiments: Finding Probabilities== - === Nonstandard Normal Distribution & Experiments: Finding Probabilities=== + === General Normal Distribution=== - Example on how to attach images to Wiki documents in included below (this needs to be replaced by an appropriate figure for this section)! + The standard normal distribution is a continuous distribution where the following exact ''areas'' are bound between the Standard Normal Density function and the x-axis on the symmetric intervals around the origin: - [[Image:AP_Statistics_Curriculum_2007_IntroVar_Dinov_061407_Fig1.png|500px]] + * The area: -1 < z < 1 = 0.8413 - 0.1587 = 0.6826 + * The area: -2.0 < z < 2.0 = 0.9772 - 0.0228 = 0.9544 + * The area: -3.0 < z < 3.0 = 0.9987 - 0.0013 = 0.9974 + [[Image:SOCR_EBook_Dinov_RV_Normal_013108_Fig0.jpg|500px]] - ===Approach=== + * Standard Normal density function $f(x)= {e^{-x^2} \over \sqrt{2 \pi}}.$ - Models & strategies for solving the problem, data understanding & inference. + - * TBD + * The Standard Normal distribution is also a special case of the [[AP_Statistics_Curriculum_2007_Normal_Prob | more general normal distribution]] where the mean is set to zero and a variance to one. The Standard Normal distribution is often called the ''bell curve'' because the graph of its probability density resembles a bell. - ===Model Validation=== + ===Experiments=== - Checking/affirming underlying assumptions. 
+ Suppose we decide to test the state of 100 used batteries. To do that, we connect each battery to a volt-meter by randomly attaching the positive (+) and negative (-) battery terminals to the corresponding volt-meter's connections. Electrical current always flows from + to -, i.e., the current goes in the direction of the voltage drop. Depending upon which way the battery is connected to the volt-meter we can observe positive or negative voltage recordings (voltage is just a difference, which forces current to flow from higher to the lower voltage.) Denote $X_i={measured voltage for battery i} - this is random variable 0 and assume the distribution of all [itex]X_i$ is Standard Normal, $X_i \sim N(0,1)$. Use the Normal Distribution (with mean=0 and variance=1) in the [http://socr.ucla.edu/htmls/SOCR_Distributions.html SOCR Distribution applet] to address the following questions. This [[Help_pages_for_SOCR_Distributions | Distributions help-page may be useful in understanding SOCR Distribution Applet]]. How many batteries, from the sample of 100, can we expect to have? - + * Absolute Voltage > 1? P(X>1) = 0.1586, thus we expect 15-16 batteries to have voltage exceeding 1. - * TBD + [[Image:SOCR_EBook_Dinov_RV_Normal_013108_Fig1.jpg|500px]] - + * |Absolute Voltage| > 1? P(|X|>1) = 1- 0.682689=0.3173, thus we expect 31-32 batteries to have absolute voltage exceeding 1. - ===Computational Resources: Internet-based SOCR Tools=== + [[Image:SOCR_EBook_Dinov_RV_Normal_013108_Fig2.jpg|500px]] - * TBD + * Voltage < -2? P(X<-2) = 0.0227, thus we expect 2-3 batteries to have voltage less than -2. - + [[Image:SOCR_EBook_Dinov_RV_Normal_013108_Fig3.jpg|500px]] - ===Examples=== + * Voltage <= -2? P(X<=-2) = 0.0227, thus we expect 2-3 batteries to have voltage less than or equal to -2. - Computer simulations and real observed data. + [[Image:SOCR_EBook_Dinov_RV_Normal_013108_Fig3.jpg|500px]] - + * -1.7537 < Voltage < 0.8465? 
P(-1.7537 < X < 0.8465) = 0.761622, thus we expect 76 batteries to have voltage in this range. - * TBD + [[Image:SOCR_EBook_Dinov_RV_Normal_013108_Fig4.jpg|500px]] - + - ===Hands-on activities=== + - Step-by-step practice problems. + - + - * TBD + ===References=== ===References=== - * TBD ## General Advance-Placement (AP) Statistics Curriculum - Nonstandard Normal Distribution & Experiments: Finding Probabilities ### General Normal Distribution The standard normal distribution is a continuous distribution where the following exact areas are bound between the Standard Normal Density function and the x-axis on the symmetric intervals around the origin: • The area: -1 < z < 1 = 0.8413 - 0.1587 = 0.6826 • The area: -2.0 < z < 2.0 = 0.9772 - 0.0228 = 0.9544 • The area: -3.0 < z < 3.0 = 0.9987 - 0.0013 = 0.9974 • Standard Normal density function $f(x)= {e^{-x^2} \over \sqrt{2 \pi}}.$ • The Standard Normal distribution is also a special case of the more general normal distribution where the mean is set to zero and a variance to one. The Standard Normal distribution is often called the bell curve because the graph of its probability density resembles a bell. ### Experiments Suppose we decide to test the state of 100 used batteries. To do that, we connect each battery to a volt-meter by randomly attaching the positive (+) and negative (-) battery terminals to the corresponding volt-meter's connections. Electrical current always flows from + to -, i.e., the current goes in the direction of the voltage drop. Depending upon which way the battery is connected to the volt-meter we can observe positive or negative voltage recordings (voltage is just a difference, which forces current to flow from higher to the lower voltage.) Denote Xi={measured voltage for battery i} - this is random variable 0 and assume the distribution of all Xi is Standard Normal, $X_i \sim N(0,1)$. 
Use the Normal Distribution (with mean=0 and variance=1) in the SOCR Distribution applet to address the following questions. This Distributions help-page may be useful in understanding SOCR Distribution Applet. How many batteries, from the sample of 100, can we expect to have? • Absolute Voltage > 1? P(X>1) = 0.1586, thus we expect 15-16 batteries to have voltage exceeding 1. • |Absolute Voltage| > 1? P(|X|>1) = 1- 0.682689=0.3173, thus we expect 31-32 batteries to have absolute voltage exceeding 1. • Voltage < -2? P(X<-2) = 0.0227, thus we expect 2-3 batteries to have voltage less than -2. • Voltage <= -2? P(X<=-2) = 0.0227, thus we expect 2-3 batteries to have voltage less than or equal to -2. • -1.7537 < Voltage < 0.8465? P(-1.7537 < X < 0.8465) = 0.761622, thus we expect 76 batteries to have voltage in this range.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9676720499992371, "perplexity": 2040.2432794809436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948581053.56/warc/CC-MAIN-20171216030243-20171216052243-00509.warc.gz"}
http://www.math.psu.edu/calendars/meeting.php?id=5179
# Meeting Details

Title: Sheaf of Modules over $F_1$-schemes

GAP Seminar

Chenghao Chu, Johns Hopkins University

Using Connes and Consani's definition of $F_1$-schemes, we define and study the category of coherent sheaves over an $F_1$-scheme. We show that exact sequences of locally free modules are well defined in the category of coherent sheaves over an $F_1$-scheme. We then apply the Q-construction to define algebraic K-theory of $F_1$-schemes. In particular, we show that the algebraic K-groups of $\mathrm{Spec}(F_1)$ are the stable homotopy groups of the sphere $S^0$, which is generally believed to be true. If time permits, we define algebraic K-theory of not necessarily commutative monoids. In particular, we discuss the homotopy invariance property of algebraic K-theory of monoids and $F_1$-schemes.
https://math.stackexchange.com/questions/2927510/snake-lemma-proof
# Snake lemma proof

I post here because I have a doubt about my proof of the snake lemma. Actually, I have the impression that I use the commutativity of the diagram nowhere. This is the diagram I consider: $$\begin{array}{c} & & M_1 & \xrightarrow{\alpha} & M_2 & \xrightarrow{\beta} & M_3 & \to & 0 \\ & & \downarrow u & & \downarrow v & & \downarrow w \\ 0 & \to & N_1 & \xrightarrow{\alpha'} & N_2 & \xrightarrow{\beta'} & N_3 \end{array}$$ And I have to construct a linear map $$f : \ker(w) \rightarrow coker(u)$$. This is what I did:

Let $$m_3 \in \ker(w)$$. As $$\ker(w) = Im(\beta)$$, let $$m_2 \in M_2$$ such that $$\beta(m_2) = m_3$$. We would like $$m_2$$ to be unique, and for that, I consider $$\overline{m_2} \in M_2/\ker(\beta)$$. Thus, for $$m_2, m'_2$$ such that $$\beta(m_2) = \beta(m'_2) = m_3$$, we have $$m_2 - m'_2 \in \ker{\beta}$$, and then $$\overline{m_2} = \overline{m'_2}$$. This gives us a first well-defined linear map $$\lambda_1 : \ker(w) \rightarrow M_2/\ker(\beta)$$ which sends $$m_3 \in \ker(w)$$ to $$\overline{m_2} = m_2 + \ker(\beta)$$. Now, we have that $$\ker(\beta) = Im(\alpha)$$, so let $$m_3 \in \ker(w)$$ and $$m_2 \in M_2$$ such that $$\beta(m_2) = m_3$$. Then we consider $$v(m_2) \in Im(v)$$. As $$Im(v) = \ker(\beta')$$, we have $$v(m_2) \in \ker(\beta')$$. But we also have $$\ker(\beta') = Im(\alpha')$$. So $$v(m_2) \in Im(\alpha')$$. Thus, let $$n_1 \in N_1$$ such that $$\alpha'(n_1) = v(m_2)$$. We would like $$n_1$$ to be independent of $$m_2$$, and to depend only on $$\overline{m_2}$$. So consider $$m_2, m_2 + \hat{v}$$ with $$m_2 \in M_2, \hat{v} \in \ker(\beta) = Im(\alpha) = \ker(v)$$. Then, let $$n_1, n'_1$$ be such that $$\alpha'(n_1) = v(m_2)$$ and $$\alpha'(n_1') = v(m_2 + \hat{v}) = v(m_2) + 0 = v(m_2)$$. Then $$\alpha'(n_1) = \alpha'(n_1')$$, so $$n_1 - n_1' \in \ker(\alpha')$$. So we consider $$\overline{n_1} = n_1 + \ker(\alpha')$$.
But $$\ker(\alpha') = Im(u)$$, so $$\overline{n_1} = n_1 + Im(u)$$. This gives us a second linear map $$\lambda_2 : M_2/\ker(\beta) \rightarrow N_1/Im(u) = coker(u)$$ which sends $$m_2 + \ker(\beta)$$ to $$n_1 + Im(u)$$ as defined previously. Finally, $$\lambda_2 \circ \lambda_1$$ is the linear map we want.

And my question: I have the impression that my proof is right, but I also have the impression that I have not used the commutativity of the diagram anywhere, so my proof is probably wrong. But I don't see why. Could someone help me? :) Thank you!

• You seem to bend the exact sequences. In general, there is no connection between $\ker w$ and $\mathrm{im\, }\beta$ or between $\ker\beta'$ and $\mathrm{im\, } v$. – Berci Sep 23 '18 at 10:49
• Yes, big big mistake, my bad... Actually, only the rows constitute exact sequences... I thought it was all the paths made by the arrows (I hope I'm clear)... I'm going to start again from the beginning. – ChocoSavour Sep 23 '18 at 10:54

Actually I believe that there is essentially a "unique" way of proving the Snake lemma, and the commutativity is used for the well-definedness of the "boundary" map $$M_3\to N_1$$.

Hint: The right square is used for the pull-back along $$\alpha'$$, and the left square is used for the independence of the choice of $$m_2\in M_2$$.
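For reference, here is a sketch (in the question's notation) of how the standard construction of the connecting map uses each commutative square: one cannot write $\ker(w) = Im(\beta)$ or $Im(v) = \ker(\beta')$ — instead, commutativity of the right square is what shows $v(m_2) \in \ker(\beta')$:

```latex
% m_3 \in \ker(w); choose m_2 with \beta(m_2) = m_3 (exactness: \beta onto).
% Right square (\beta' \circ v = w \circ \beta):
%   \beta'(v(m_2)) = w(\beta(m_2)) = w(m_3) = 0,
% so v(m_2) \in \ker(\beta') = \mathrm{im}(\alpha') (exactness of bottom row),
% hence v(m_2) = \alpha'(n_1) for a unique n_1 (\alpha' injective). Define
\[
  \delta(m_3) \;=\; n_1 + \operatorname{im}(u) \;\in\; \operatorname{coker}(u).
\]
% Left square (v \circ \alpha = \alpha' \circ u): if \beta(m_2) = \beta(m_2'),
% then m_2 - m_2' = \alpha(m_1), so v(m_2) - v(m_2') = \alpha'(u(m_1)),
% hence n_1 changes only by u(m_1) \in \mathrm{im}(u), and \delta is well
% defined.
```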
https://socratic.org/questions/57def4517c01492c2a47609a
# Question 7609a

Sep 19, 2016

Here's what I got.

#### Explanation:

The first thing to do here is to convert the two velocities of the car from miles per hour to meters per second:

$$80\ \frac{\text{mi}}{\text{h}} \times \frac{1\ \text{h}}{3600\ \text{s}} \times \frac{1.61\ \text{km}}{1\ \text{mi}} \times \frac{10^3\ \text{m}}{1\ \text{km}} = 35.8\ \text{m/s}$$

$$30\ \frac{\text{mi}}{\text{h}} \times \frac{1\ \text{h}}{3600\ \text{s}} \times \frac{1.61\ \text{km}}{1\ \text{mi}} \times \frac{10^3\ \text{m}}{1\ \text{km}} = 13.4\ \text{m/s}$$

So, you know that it takes $\text{3 s}$ for the velocity of the car to decrease from $\text{35.8 m/s}$ to $\text{13.4 m/s}$. As you know, acceleration is defined as the rate at which the velocity of an object changes with respect to time. In your case, it takes $\text{3 s}$ for the velocity of the car to decrease by

$$\Delta v = |13.4\ \text{m/s} - 35.8\ \text{m/s}| = 22.4\ \text{m/s}$$

SIDE NOTE: Because you're looking for the magnitude of the acceleration, the change in velocity can be used without the minus sign that accompanies a decrease in velocity.

This means that the magnitude of the acceleration will be

$$a = \frac{\Delta v}{t} = \frac{22.4\ \text{m/s}}{3\ \text{s}} = 7.5\ \text{m/s}^2$$

I'll leave the answer rounded to two sig figs, but don't forget that you only have one sig fig for your values.
To calculate the distance covered by the car during its braking time, use the equation

$$v^2 = v_0^2 - 2 \cdot a \cdot d$$

Here

$v$ - the final velocity of the car
${v}_{0}$ - the initial velocity of the car
$a$ - the acceleration
$d$ - the distance covered

Rearrange the equation to solve for $d$:

$$d = \frac{v_0^2 - v^2}{2 \cdot a}$$

Plug in your values to find

$$d = \frac{(35.8^2 - 13.4^2)\ \text{m}^2/\text{s}^2}{2 \times 7.5\ \text{m/s}^2} = 73\ \text{m}$$

Once again, I'll leave the answer rounded to two sig figs.
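The whole computation can be scripted; here is a minimal Python sketch. It uses the factor 1609.344 m per mile instead of the rounded 1.61 km/mi above, so the intermediate numbers differ slightly from the worked answer:

```python
MI_TO_M = 1609.344  # meters per mile
H_TO_S = 3600.0     # seconds per hour

v0 = 80 * MI_TO_M / H_TO_S  # initial speed, ~35.8 m/s
v = 30 * MI_TO_M / H_TO_S   # final speed, ~13.4 m/s
t = 3.0                     # braking time, s

a = abs(v - v0) / t           # magnitude of the deceleration, ~7.45 m/s^2
d = (v0**2 - v**2) / (2 * a)  # braking distance from v^2 = v0^2 - 2ad, ~74 m

print(f"a = {a:.2f} m/s^2, d = {d:.1f} m")
```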
https://mathproblems123.wordpress.com/2013/11/29/best-approximation-of-a-certain-square-root/
## Best approximation of a certain square root

Let ${\lambda}$ be a real number such that the inequality $\displaystyle 0 < \sqrt{2002}-\frac{a}{b} < \frac{\lambda}{ab}$ holds for an infinity of pairs ${(a,b)}$ of natural numbers. Prove that ${\lambda\geq 5}$.

Solution: We know that ${a^2 <2002 b^2}$. Then there exists a positive integer ${k}$ such that ${a^2=2002b^2-k}$. Let's find the smallest possible value for ${k}$. We factor ${2002}$ and we get ${2002=2\cdot 7\cdot 11\cdot 13}$. We see that ${-k}$ is a quadratic residue modulo ${7,11,13}$. Let's enumerate these quadratic residues:

$\displaystyle a^2 \pmod 7 \in \{0,1,4,2\}$

$\displaystyle a^2 \pmod{11} \in \{0,1,4,9,5,3\}$

$\displaystyle a^2 \pmod{13} \in \{0,1,4,9,3,12,10\}$

Now we can pick the values of ${k}$ one at a time to see which one is the smallest and verifies that ${-k}$ is a quadratic residue modulo ${7,11,13}$.

• ${k=1}$: ${-1}$ is not a quadratic residue modulo ${7}$;
• ${k=2}$: ${-2}$ is not a quadratic residue modulo ${7}$;
• ${k=3}$: ${-3}$ is not a quadratic residue modulo ${11}$;
• ${k=4}$: ${-4}$ is not a quadratic residue modulo ${7}$;
• ${k=5}$: ${-5}$ is not a quadratic residue modulo ${7}$;
• ${k=6}$: ${-6}$ is not a quadratic residue modulo ${13}$;
• ${k=7}$: ${-7}$ is not a quadratic residue modulo ${13}$;
• ${k=8}$: ${-8}$ is not a quadratic residue modulo ${7}$;
• ${k=9}$: ${-9}$ is not a quadratic residue modulo ${11}$;
• ${k=10}$: ${-10}$ has residues ${4,1,3}$ modulo ${7,11,13}$ (respectively) and they are all quadratic residues.

Therefore ${k \geq 10}$ and we find that ${a^2+10\leq 2002b^2}$.
Using the hypothesis and the newly found inequality we find:

$\displaystyle \frac{10}{b\sqrt{2002}+a} \leq b\sqrt{2002}-a<\frac{\lambda}{a},$

and taking the inequality between the first and last term we obtain

$\displaystyle \frac{10}{\frac{b}{a}\sqrt{2002}+1}<\lambda.$

We know that

$\displaystyle 0<\frac{b}{a}\sqrt{2002}-1<\frac{\lambda}{a^2}$

Since ${a,b}$ take infinitely many values, we find that ${a/b}$ approximates the irrational number ${\sqrt{2002}}$, so ${a}$ (and ${b}$) get arbitrarily large in order to do that. We add ${2}$ to the above inequality and we get

$\displaystyle 2<\frac{b}{a}\sqrt{2002}+1<2+\frac{\lambda}{a^2}$

which gives

$\displaystyle \lambda>\frac{10}{\frac{b}{a}\sqrt{2002}+1}>\frac{10}{2+\lambda/a^2}.$

Taking ${a \rightarrow \infty}$ we obtain ${\lambda \geq 5}$.

Note that the estimate for ${\lambda}$ depends only on the value of ${k}$, which here is ${10}$; in general the bound obtained is ${\lambda\geq k/2}$. This problem also works for other numbers with the property that the smallest ${k}$ such that ${-k}$ is a quadratic residue modulo every one of their odd prime factors is at least ${10}$, and this is the case for ${2013=3\cdot 11\cdot 61}$. The first negative quadratic residues modulo ${61}$ are ${-1,-3,-4,-5,-9}$, while for ${11}$ we have ${-2,-6,-7,-8}$. The estimate can even be improved for ${2013}$: the algorithm below gives ${k=39}$, so if we have

$\displaystyle 0 < \sqrt{2013}-\frac{a}{b} < \frac{\lambda}{ab}$

for an infinity of pairs ${(a,b)}$, then ${\lambda \geq 19.5}$. Moreover (what a lucky coincidence), ${2013}$ is the first number for which ${k}$ is ${39}$. The number with the greatest ${k}$ before ${2013}$ was ${1653}$, with ${k=29}$.
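The case-by-case search above, and the $k=39$ claim for $2013$, can be verified with a short brute force; a Python sketch (treating $0$ as a quadratic residue, as the argument requires):

```python
def smallest_k(primes):
    """Smallest k >= 1 with -k a quadratic residue modulo every p in primes."""
    k = 1
    while True:
        # -k is a QR mod p iff some x in 0..p-1 satisfies x^2 + k ≡ 0 (mod p)
        if all(any((x * x + k) % p == 0 for x in range(p)) for p in primes):
            return k
        k += 1

print(smallest_k((7, 11, 13)))  # 10, the value found for 2002 above
print(smallest_k((3, 11, 61)))  # 39, the value claimed for 2013
```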
function res = min_res(n)
% List all k in 1..n-1 such that -k is a quadratic residue
% modulo every prime factor of n.
factors = unique(factor(n));
fac_max = n;
lis = 1:fac_max-1;
for m = factors
    % quadratic residues modulo m
    res_m = sort(mod((0:floor(m/2)).^2, m));
    % values of -x^2 mod m, i.e. admissible values of k mod m
    rest = mod(m - res_m(:), m);
    partial = rest;
    for k = 1:ceil(fac_max/m)-1
        partial = [partial; k*m + rest];
    end
    partial(partial > fac_max) = [];
    lis = intersect(lis, partial);
end
res = lis;
end
https://cstheory.stackexchange.com/tags/halting-problem/hot
# Tag Info

Accepted

### Is there a good notion of non-termination and halting proofs in type theory?

Because one of the principal applications of Type Theory in formalizations has been to study programming languages and computation in general, a lot of thought has gone into ways of representing ...

### "Berman-Hartmanis Conjecture Separates NP From All Super-Poly. DTIME Classes" -- Worthy of arXiv.org?

I'm glad you are interested in complexity but there are some issues in your paper. Your techniques relativize and there is an oracle relative to which the Berman-Hartmanis conjecture is true and NP = ...

Accepted

### How good can a halting detector be?

This isn't a "nice" property, because whether it's true or false depends upon the encoding. See David et al's Asymptotically almost all $\lambda$-terms are strongly normalizing, which proves what it ...

Accepted

### What is the reference for the proof of Gödel's first incompleteness theorem based on the undecidability of the halting problem?

I believe that some version of this connection can be tied back to Turing's seminal paper on computability. Namely, Turing makes the following two claims: "The results of Section 8 have some ...

Accepted

### For a specific unbounded Turing machine, is its Halting problem undecidable?

It depends in which sense you mean "undecidable". If you evaluate $M$ on the empty input, and want only to find a yes/no answer, then the algorithmic problem is trivially decidable, as answered by ...

Accepted

### Can the halting problem be solved probabilistically?

It is well known that any language or function computable by a probabilistic algorithm is also computable deterministically. Here, we require that with probability $>1/2$, the algorithm outputs the ...

Accepted

### Uniform mortality problem for Turing Machines

The mortality problem is undecidable (P.K.
Hooper, The Undecidability of the Turing Machine Immortality Problem (1966)). The undecidability of the uniform mortality problem follows from the following: ...

Accepted

### Program equivalence wherein the programs are known to always halt

As a counter-example to this, consider the Context-Free Equivalence problem: it's undecidable to determine, given two context-free languages, whether they accept the same set. If your problem were ...

### Polynomial-time reductions between undecidable languages

Gödel's incompleteness theorem can be thought of as a reduction from the Halting problem to the language $\langle \varphi \mid \varphi \text{ is a true sentence in number theory}\rangle$, and a ...

### Is there a sensible notion of an approximation algorithm for an undecidable problem?

This is answering the title of the question more than its content, but you can also consider "approximations" of the halting problem as algorithms which will give you a correct answer on "almost all" ...

Accepted

### Are all turing machines paths predictable?

This is another way to prove that not all Turing machines are predictable. First it's easy to note that: all halting machines are predictable; all machines that loop forever on a finite portion of ...

Accepted

### Practical approaches to solving whether programs will halt

Yes, an example of a system that performs this task is T2. It does not solve the halting problem but instead it only attempts to solve certain special cases. An overview is at https://en.wikipedia.org/...

### Program equivalence wherein the programs are known to always halt

Consider programs $e_1$, $e_2$ and numbers of time steps $t$. Let $f_i(t)$ be the output of $e_i$ after $t$ steps, and let $f_i(t)$ output a special message like "none" if there's no output yet. ...

### Are All Turing-Uncomputable Sets Isomorphic to the Halting Problem?
No, there is a whole hierarchy of Turing undecidability: http://en.wikipedia.org/wiki/Turing_degree In particular, the language L_min consisting of all minimal Turing machine encodings is not ...

Accepted

### Is this a weaker or stronger form of the halting problem

The standard proof that the halting problem $L$ is undecidable also gives an efficient algorithm for constructing an instance on which a given Turing machine $H$ fails to solve the halting problem. ...

### Is this a good definition of computability?

First of all, the place for this question is cs.se, not here. But since I've already written an answer, I'll leave it. There is a formal definition of computability: a function $f$ is computable if ...

### Are all turing machines paths predictable?

If I understood your question correctly, the answer is NO. Let $M$ be any TM and $w$ any input string, and define the TM $M'$ as follows: it reserves the leftmost square of the tape as "special" (e.g.,...

### Halting problem for finite tape TM

"Easy to check" is the understatement of the century: can you actually carry out your proposed plan of "just" writing down all the registers/RAM cells, etc? You're right that it takes finite time, but ...

### Automated proving that a program doesn't halt

In contradiction with Gurkenglas' answer, there actually is a community of scientists who work on proving non-termination of programs in various languages and formalisms. An obvious approach would be ...

1 vote

### Automated proving that a program doesn't halt

Since the Halting problem is undecidable, whatever approach I use to answer the question must eventually be unhelpful in the real world. There's a sequence of sets of programs such that each set is ...

1 vote

### Constructive proof of the Halting Problem

I think if we want to answer this problem constructively, then we should be able to propose the problem constructively.
Let the language of arithmetic be $L=\{0,S,+,\cdot \}$ and $\phi(n,x,y)$ be Kleene ...

1 vote

Accepted

### Undecidable Single Programs

One way to look at your question is the Busy Beaver Numbers. What we will do is restrict a Turing Machine so that: The blank symbol is a $0$. The tape alphabet is $\{0, 1\}$. The input to our Turing ...
https://www.physicsforums.com/threads/multiplication-of-fourier-series.410426/
# Multiplication of Fourier series

1. Jun 15, 2010

### mordechai9

Say you have two functions, F(x,y) and G(x,y), and you want to expand them in finite Fourier series. Let their coefficients be designated as F_ij and G_ij. When you multiply the two functions, you get X = FG, and this should also have its own Fourier series; call its components X_mn. What is the relation between F_ij, G_ij, and X_mn? I was hoping for something like X_ij = F_ij G_ij, but I've been looking at this for a little while and it seems you don't have any nice relation like that.

2. Jun 15, 2010

### Cody Palmer

Check out the Cauchy product, which has to do with multiplying series. Wikipedia has a good article on it: http://en.wikipedia.org/wiki/Cauchy_product

3. Jun 16, 2010

### elibj123

In the case of "two dimensional sequences" you'll have:

$$X_{m,n}=\sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}F_{m-j,n-k}G_{j,k}$$

This, by the way, resembles the convolution theorem of the Fourier Transform, and actually the above operation between two sequences (in this special case they are two-dimensional) is a discrete convolution.
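elibj123's convolution formula can be checked numerically in the closely related discrete setting: when the coefficients are computed with the DFT, multiplication of the functions corresponds to a circular convolution of the coefficient arrays, divided by the number of samples. A pure-Python sketch with a naive 2D DFT (grid size and sample functions chosen arbitrarily):

```python
import cmath

def dft2(a):
    """Naive 2D DFT of a list-of-lists of samples."""
    N, M = len(a), len(a[0])
    out = [[0j] * M for _ in range(N)]
    for p in range(N):
        for q in range(M):
            out[p][q] = sum(
                a[x][y] * cmath.exp(-2j * cmath.pi * (p * x / N + q * y / M))
                for x in range(N) for y in range(M))
    return out

N = M = 3
f = [[(x + 2 * y) % 5 for y in range(M)] for x in range(N)]
g = [[(3 * x + y * y) % 7 for y in range(M)] for x in range(N)]
prod = [[f[x][y] * g[x][y] for y in range(M)] for x in range(N)]

F, G, X = dft2(f), dft2(g), dft2(prod)

# X_{mn} = (1/(N*M)) * sum_{j,k} F_{m-j, n-k} G_{j,k}, indices taken mod N, M
for m in range(N):
    for n in range(M):
        s = sum(F[(m - j) % N][(n - k) % M] * G[j][k]
                for j in range(N) for k in range(M))
        assert abs(X[m][n] - s / (N * M)) < 1e-6
print("product of functions <-> convolution of coefficients: verified")
```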
https://www.physicsforums.com/threads/finite-fields-and-ring-homomorphisms-help.369041/
# Homework Help: Finite Fields and ring homomorphisms HELP!

1. Jan 12, 2010

### cheeee

1. The problem statement, all variables and given/known data

Assuming the mapping Z --> F defined by n --> n * 1F = 1F + ... + 1F (n times) is a ring homomorphism, show that its kernel is of the form pZ, for some prime number p. Therefore infer that F contains a copy of the finite field Z/pZ. Also prove now that F is a finite-dimensional vector space over Z/pZ; if this dimension is denoted d, then show that F has exactly p^d elements.

I know that the kernel of a ring homomorphism is defined as ker(f) = {a in Z : f(a) = 0}, but I am still having trouble seeing exactly where to go from there... it appears that the only element of Z s.t. f(a) = 0 is 0, which would map to 0 * 1F = 0. But how is this of the form pZ, for some prime p?? Any help or push in the right direction would be great... thanks.

2. Jan 12, 2010

### ystael

If $$R$$ and $$S$$ are rings, the kernel of a ring homomorphism $$\phi: R \to S$$ is an ideal of $$R$$. What are the ideals of $$\mathbb{Z}$$? Of those ideals, only some of them can be the kernel of a homomorphism $$\mu: \mathbb{Z} \to F$$ given by $$\mu(n) = n \cdot 1_F$$. The others are incompatible with one of your hypotheses.

3. Jan 13, 2010

### cheeee

Okay, I get that all of the ideals of Z are of the form mZ for some integer m, but I'm still not sure how n*1F implies that the kernel must be a prime ideal of Z?

4. Jan 13, 2010

### ystael

The important thing here is that $$F$$ is a field (or at least an integral domain). If the kernel of $$\mu$$ is $$(m) = m\mathbb{Z}$$, and $$m = rs$$ is composite, think about the equation $$\mu(m) = \mu(r)\mu(s)$$.
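To see why ystael's hint rules out composite m: if ker μ = mZ with m = rs (and 1 < r, s < m), then μ(r)μ(s) = μ(rs) = μ(m) = 0 while μ(r), μ(s) ≠ 0, so F would contain zero divisors — impossible in a field. The same phenomenon is easy to see inside Z/mZ itself; a small Python illustration:

```python
def zero_divisors(m):
    """All pairs (r, s) with 0 < r, s < m and r*s ≡ 0 (mod m)."""
    return [(r, s) for r in range(1, m) for s in range(1, m) if (r * s) % m == 0]

print(zero_divisors(6))  # composite modulus: [(2, 3), (3, 2), (3, 4), (4, 3)]
print(zero_divisors(5))  # prime modulus: [] -- Z/5Z is an integral domain
```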
http://clay6.com/qa/318/let-be-defined-as-find-the-function-such-that-
# Let $$f:R \to R$$ be defined as $$f(x)=10x+7.$$ Find the function $$g:R \to R$$ such that $$g\;o\;f = f\;o\;g = I_R.$$

This question has appeared in model paper 2012

Toolbox:

• To check if a function is invertible or not, we see if the function is both one-one and onto.
• A function $f: X \rightarrow Y$ where for every $x_1, x_2 \in X$, $f(x_1) = f(x_2) \Rightarrow x_1 = x_2$ is called a one-one or injective function.
• A function $f : X \rightarrow Y$ is said to be onto or surjective if every element of Y is the image of some element of X under f, i.e., for every $y \in Y$, there exists an element x in X such that $f(x) = y$.
• Given two functions $f:A \to B$ and $g:B \to C$, the composition of $f$ and $g$ is $gof:A \to C$, defined by $gof(x)=g(f(x))$ for all $x \in A$.
• A function $g$ is called the inverse of $f:x \to y$ if there exists $g:y \to x$ such that $gof=I_x$ and $fog=I_y$, where $I_x, I_y$ are identity functions.

Given $f:R \to R$ defined by $f(x)=10x+7$.

To check if a function is invertible or not, we see if the function is both one-one and onto.

$\textbf {Step 1: Checking one-one}$

A function $f: X \rightarrow Y$ where for every $x_1, x_2 \in X$, $f(x_1) = f(x_2) \Rightarrow x_1 = x_2$ is called a one-one or injective function.

Let $f(x)=f(y) \rightarrow 10x+7=10y+7 \rightarrow x = y$. Therefore $f$ is one-one or injective.

$\textbf {Step 2: Checking onto}$

A function $f : X \rightarrow Y$ is said to be onto or surjective if every element of Y is the image of some element of X under f, i.e., for every $y \in Y$, there exists an element x in X such that $f(x) = y$.

Let $y= f(x) = 10x+7 \rightarrow x=\large \frac{y-7}{10}$

$\Rightarrow f\left(\large\frac{y-7}{10}\right)=10\left(\large \frac{y-7}{10}\right)+7=y-7+7=y$

Therefore $f$ is onto.

Therefore $f$ is invertible, since it is both one-one and onto.
$\textbf {Step 3: To calculate } f^{-1}\textbf{, we must first define } g(y)\textbf{:}$

We know that a function $g$ is called the inverse of $f:x \to y$ if there exists $g:y \to x$ such that $gof=I_x$ and $fog=I_y$, where $I_x, I_y$ are identity functions.

Let us define a function $g:R \to R$ such that $g(y)=\large \frac{y-7}{10}$

$\textbf {Step 4: Calculate gof}$

$\Rightarrow (gof)(x)=g(f(x)) =g(10x+7) =\large \frac{(10x+7)-7}{10}=\frac{10x}{10}$ $=x$

$\textbf {Step 5: Calculate fog}$

$\Rightarrow (fog)(y)=f\left(\large \frac{y-7}{10}\right)$ $=10\large \left(\frac{y-7}{10}\right)+7$ $= y$

$\textbf {Step 6: Calculating } f^{-1} \textbf{ from } gof = fog\textbf{:}$

$gof=I_R$ and $fog=I_R$

$\Rightarrow$ The required inverse function is $f^{-1} = g:R \to R$, $g(y)=\large \frac{y-7}{10}$
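Steps 4 and 5 can be sanity-checked numerically; a small Python sketch with a few sample points:

```python
def f(x):
    return 10 * x + 7

def g(y):
    return (y - 7) / 10

# g o f and f o g should both act as the identity:
for x in (-3.5, 0.0, 2.0, 41.0):
    assert abs(g(f(x)) - x) < 1e-12
    assert abs(f(g(x)) - x) < 1e-12
print("g o f = f o g = identity (on the sample points)")
```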
https://caribexams.org/math_topic5
CXC CSEC General Proficiency math topic: MEASUREMENT

CONTENT: Measures of length, area, volume, weight, mass, time, temperature, speed, perimeter of polygons and circles; areas of triangles, rectangles, parallelograms, circles, irregular shapes; surface area and volume of right prisms and pyramids (triangular, rectangular, and circular cross-sections and bases) and spheres.

SPECIFIC OBJECTIVES: The student should be able to:

1. Calculate the perimeter of a polygon, a circle, and a combination of polygon and circle;
2. Calculate the length of an arc of a circle using angles at the centre whose measures are factors of 360° (for example, 15°, 45°, 60°);
3. Calculate the area of the region enclosed by a rectangle, a triangle, a parallelogram, a trapezium, a circle, and any combination of them;
4. Estimate the area of irregularly shaped figures;
5. Calculate the areas of sectors of circles - as in objective 2;
6. Calculate the surface area of a simple right prism, a pyramid and a sphere;
7. Calculate the volume of a simple right prism, a pyramid and a sphere;
8. Convert units of length, area, capacity, time, and speed within the SI system;
9. Use correctly the SI units of measure for area, volume, mass, temperature and time (including the 24-hour clock);
10. Solve simple problems involving time, distance, and speed (for example, timetable extracts such as bus and airline schedules);
11. Estimate the margin of error for a given measurement;
12. Give to a degree of accuracy (appropriate to the margin of error for a given measurement) the results of calculations involving numbers derived from a set of measurements;
13. Make suitable measurements on maps or scale drawings and use them to determine distances and areas and vice versa;
14. Solve problems involving measurements;
15.
Use investigations to make inferences and generalizations utilizing the concepts listed above.

Camille (not verified) 2 May 2009 - 6:53pm
Maths
I am writing CXC math on the 20th of May and I want to know if you all have an idea of what's coming. Can you help me out? I want a grade 1. Thanks.

Mishay (not verified) 21 October 2009 - 4:35pm
Maths
In order to calculate the fraction questions, do we need to follow the order of operations rule? I forgot which to do first: add, divide, or what.
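For a quick self-check on objectives 2 and 5 above (arc lengths and sector areas), the fraction-of-the-circle method can be sketched in a few lines of Python; the radius and angle values below are just examples.

```python
import math

def arc_length(radius, angle_deg):
    """Length of an arc subtending angle_deg at the centre of a circle."""
    return (angle_deg / 360) * 2 * math.pi * radius

def sector_area(radius, angle_deg):
    """Area of the sector subtending angle_deg at the centre of a circle."""
    return (angle_deg / 360) * math.pi * radius ** 2

# A 60-degree arc of a circle of radius 6 is one sixth of the full circle.
print(round(arc_length(6, 60), 2))   # 6.28
print(round(sector_area(6, 60), 2))  # 18.85
```

The same fraction (angle ÷ 360) works for any centre angle, not only the factors of 360° listed in objective 2.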
https://www.physicsforums.com/threads/find-the-change-in-potential-energy-of-the-system.95702/
# Find the change in potential energy of the system

1. Oct 19, 2005

### Lucey12385

I'm having a hard time with this problem because it is using A's and B's instead of real numbers: A single conservative force acting on a particle varies as F=(-Ax+Bx^2)i N, where A and B are constants and x is in meters. a) Calculate the potential energy function U(x) associated with this force, taking U=0 at x=0. b) Find the change in potential energy of the system and the change in kinetic energy of the particle as it moves from x=2.00m to x=3.00m. Any help is greatly appreciated! Thanks!

2. Oct 19, 2005

### whozum

All you need is the relationship between the force function F(x) and the potential energy function U(x), given by $$\Delta U = -\int_{x_1}^{x_2} F(x) \ dx$$ Remember A and B are constants, so treat them as such while integrating. (They won't disappear.)
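For a conservative force the change in potential energy is ΔU = −∫F dx. Integrating F = −Ax + Bx² gives U(x) = Ax²/2 − Bx³/3 with U(0) = 0, so ΔU from 2 m to 3 m is (5/2)A − (19/3)B. That arithmetic can be checked exactly with Python's standard `fractions` module; the sample values of A and B below are arbitrary.

```python
from fractions import Fraction

def U(x, A, B):
    """Potential energy for F(x) = -A*x + B*x**2, taking U(0) = 0:
    U(x) = -integral of F from 0 to x = A*x**2/2 - B*x**3/3."""
    return Fraction(A) * x**2 / 2 - Fraction(B) * x**3 / 3

A, B = Fraction(3), Fraction(2)   # arbitrary sample constants for a check
dU = U(3, A, B) - U(2, A, B)      # change in U as the particle moves 2 m -> 3 m
assert dU == Fraction(5, 2) * A - Fraction(19, 3) * B
print(dU)                         # -31/6 for these sample values
# By conservation of energy, the change in kinetic energy is -dU.
```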
https://crm.sns.it/course/3062/
Mathematical Principles for and Advances in Continuum Mechanics # Stochastic origins of energies and gradient flows: a modeling guide speaker: Marc Peletier (Technische Universiteit Eindhoven) abstract: In equilibrium systems there is a long tradition of modelling systems by postulating an energy and identifying stable states with local or global minimizers of this energy. In recent years, with the discovery of Wasserstein and related gradient flows, there is the potential to do the same for time-evolving systems with overdamped (non-inertial, viscosity-dominated) dynamics. Such a modelling route, however, requires an understanding of which energies (or entropies) drive a given system, which dissipation mechanisms are present, and how these two interact. Especially for the Wasserstein-based dissipations this was unclear until rather recently. In this series of talks I will build an understanding of the modelling arguments that underlie the use of energies, entropies, and the Wasserstein gradient flows. This understanding springs from the common connection between large deviations for stochastic particle processes on one hand, and energies, entropies, and gradient flows on the other. I will explain all these concepts in detail in the lectures. I will assume that the participants have a basic understanding of measure theory, Sobolev spaces, and some of the more common types of partial differential equations. No prior knowledge of optimal transport, Wasserstein gradient flows, or probability is required. timetable: Wed 9 Nov, 11:45 - 12:45, Aula Dini Thu 10 Nov, 10:30 - 11:30, Aula Dini Thu 10 Nov, 17:00 - 18:00, Aula Dini Fri 11 Nov, 17:00 - 18:00, Aula Dini documents: Peletier << Go back
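As a toy illustration of the overdamped, viscosity-dominated dynamics mentioned in the abstract: a gradient flow x′ = −E′(x) can be discretized with explicit Euler steps. The double-well energy below is my own illustrative choice, not an example taken from the lectures.

```python
# Explicit-Euler discretization of the overdamped gradient flow x' = -E'(x),
# here for the double-well energy E(x) = (x**2 - 1)**2 / 4, whose derivative is:
def E_prime(x):
    return x * (x**2 - 1)

x, dt = 0.5, 0.01
for _ in range(10_000):
    x -= dt * E_prime(x)

# With no inertia, the state simply relaxes to a local minimizer of E;
# starting from x0 = 0.5 it settles in the well at x = 1.
print(round(x, 6))  # 1.0
```

Stable states of the evolution are exactly the local minimizers of the energy, which is the link between the equilibrium and time-dependent modelling routes the abstract describes.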
https://www.raymondmarking.com/2019/03/23/principles-and-advantages-of-laser-marking/
Laser direct marking is a process in which a laser beam with an appropriate energy density is focused onto and scanned across the target surface, inducing physical or chemical changes in the material that form a mark. The laser's effect varies with the material and the process parameters. Broadly, the interaction between the laser beam and the material produces the following effects:

Vaporization effect
When the laser beam irradiates the material surface, part of the light is reflected, while the absorbed laser energy is rapidly converted into heat, causing the surface temperature to rise sharply. Once the material's vaporization temperature is reached, the surface is marked by instantaneous vaporization and evaporation; marks of this kind show obvious vaporization.

Etching effect
When a laser beam strikes the surface layer, the material absorbs the light energy and conducts it to the inner layers. Heat conduction from the laser into the material surface causes localized thermal melting. For example, when marking brittle materials such as glass, the etching effect is very pronounced and there is no obvious evaporation.

Photochemical effect
For some organic compound materials, absorbing laser energy changes the chemical properties of the material. When a laser irradiates the surface of colored polyvinyl chloride, depolymerization weakens the color of the irradiated area, creating a color contrast with the parts not irradiated by the laser and so producing the mark.
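The "appropriate energy density" above can be estimated for a pulsed marking laser with the standard per-pulse fluence formula: pulse energy (average power divided by repetition rate) spread over the focused spot area. The power, repetition rate, and spot size below are illustrative values, not recommendations for any particular machine or material.

```python
import math

def pulse_fluence(avg_power_w, rep_rate_hz, spot_diameter_m):
    """Per-pulse energy density (fluence, J/m^2) of a focused pulsed laser:
    pulse energy = average power / repetition rate, spread over the spot area."""
    pulse_energy = avg_power_w / rep_rate_hz
    spot_area = math.pi * (spot_diameter_m / 2) ** 2
    return pulse_energy / spot_area

# Illustrative values: a 20 W source at 20 kHz focused to a 50 micron spot.
f = pulse_fluence(20, 20_000, 50e-6)
print(f"{f / 1e4:.1f} J/cm^2")  # 50.9 J/cm^2
```

Whether a given fluence vaporizes, etches, or photochemically alters a material depends on the absorption and threshold of that material, which is why the same beam parameters give different results on metal, glass, and PVC.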
https://plato.stanford.edu/entries/logic-algebraic-propositional/
# Algebraic Propositional Logic

First published Mon Dec 12, 2016; substantive revision Fri May 20, 2022

George Boole was the first to present logic as a mathematical theory in algebraic style. In his work, and in that of the other algebraists of the algebraic tradition of logic of the nineteenth century, the distinction between a formal language and a mathematically rigorous semantics for it was still not drawn. What the algebraists in this tradition did was to build algebraic theories (of Boolean algebras, and relation algebras) with, among other interpretations, a logical one. The works of Frege and Russell introduced a different perspective on the way to approach logic. In those works, a logic system was given by a formal language and a deductive calculus, namely a set of axioms and a set of inference rules. Let us (for this entry) call such a pair a logical deduction system, and the formulas derivable in the calculus its theorems (nowadays it is common practice in algebraic logic to refer to this type of calculi as Hilbert-style and in proof complexity theory as Frege systems). In Frege and Russell's approach, a formal (mathematical) semantics of whatever kind (algebraic, model-theoretic, etc.) for the formal languages they used was lacking. The only semantics present was of an intuitive, informal kind. The systems introduced by Frege and Russell were systems of classical logic, but soon after systems of non-classical logics were considered by other logicians. The first influential attempts to introduce logics different from classical logic remained within the Frege-Russell tradition of presenting a logical deduction system without any formal semantics. These attempts led to the first modal systems of C.I. Lewis (1918, 1932) and to the axiomatization of intuitionistic logic by Heyting (1930). 
The idea underlying the design of Frege and Russell’s logical deduction systems is that the theorems should be the formulas that correspond (intuitively) to the logical truths or logical validities. The concept of logical consequence was not central to the development, and this was also the case in the many systems of non-classical logics that were to be designed following in the footsteps of the first modal systems of C.I. Lewis. This situation influenced the way in which the research on some non-classical logics has usually been presented and sometimes also its real evolution. However, the concept of logical consequence has been the one that logic has traditionally dealt with. Tarski put it once again into the center of modern logic, both semantically and syntactically. Nowadays, a general theory of the algebraization of logics around the concept of logical consequence has grown from the different algebraic treatments of the different logics obtained during the last century. The concept of logical consequence has proved much more fruitful than those of theorem and of logical validity for the development of such a general theory. The first attempts in the process of building the general theory of the algebraization of logics can be found in the study of the class of implicative logics by Rasiowa (1974) and in the systematic presentation by Wójcicki (1988) of the investigations of a general nature on propositional logics as consequence operations carried out mainly by Polish logicians, following the studies of Tarski, Lindenbaum, Łukasiewicz and others in the first part of the twentieth century. It was only in the 1920s that algebras and logical matrices (an algebra together with a set of designated elements) started to be taken as models of logical deduction systems, that is, as providing a formal semantics for formal languages of logic. 
Moreover, they were also used to define sets of formulas with similar properties to the ones the sets of theorems of the known logical deduction systems have, in particular the property of being closed under substitution instances; soon after, logical matrices were also used to define logics as consequence relations. Algebraic logic can be described in very general terms as the discipline that studies logics by associating with them classes of algebras, classes of logical matrices and other algebra-related mathematical structures, and that relates the properties that the logics may have with properties of the associated algebras (or algebra-related structures), with the purpose that the understanding of these algebras can be used to better understand the logic at hand. From the algebraic study of particular logics, a general theory of the algebraization of logics slowly emerged during the last century with the aim, more or less explicitly stated during the process, of obtaining general and informative results relating the properties a logic may have with the algebraic properties the class of algebras (or algebra-related structures) associated with it might enjoy. Those algebraic studies assumed somehow an implicit conception of the process by which to associate with any given logic a class of algebras as its natural algebraic counterpart. The development of that general theory sped up and consolidated at the beginning of the 1980s with the introduction of the notion of algebraizable logic, and at that time the assumptions about the class of algebras that deserves to be taken as the natural one to associate with a given logic also started to be made more and more explicit. In this entry we concentrate on the general theory of the algebraization of propositional logics taken as consequence relations. This theory has evolved into the field known as Abstract Algebraic Logic (AAL). The entry can be taken as a mild introduction to this field.

## 1. Abstract consequence relations

Tarski's work (1930a, 1930b, 1935, 1936) on the methodology of the deductive sciences of the 1920s and 1930s studies the axiomatic method abstractly and introduces for the first time the abstract concept of consequence operation. Tarski had mainly in mind the different mathematical axiomatic theories. In these theories, the sentences that are proved from the axioms are consequences of them, but (of course) almost all of them are not logical truths; under a suitable formalization of these theories, a logical calculus like Frege's or Russell's can be used to derive the consequences of the axioms. Tarski set the framework to study the most general properties of the operation that assigns to a set of axioms its consequences. Given a logical deduction system $$H$$ and an arbitrary set of formulas $$X$$, a formula $$a$$ is deducible in $$H$$ from $$X$$ if there is a finite sequence of formulas any one of which belongs to $$X$$, or is an axiom of $$H$$, or is obtained from previous formulas in the sequence by one of the inference rules of $$H$$. Such a sequence is a deduction (or proof) in $$H$$ of $$a$$ with premises or hypotheses in $$X$$. Let $$Cn(X)$$ be the set of formulas deducible in $$H$$ from the formulas in $$X$$ taken as premises or hypotheses. This set is called the set of consequences of $$X$$ (relative to the logical deduction system $$H$$). $$Cn$$ is then an operation that is applied to sets of formulas to obtain new sets of formulas. It has the following properties. For every set of formulas $$X$$:

1. $$X \subseteq Cn(X)$$
2. $$Cn(Cn(X)) = Cn(X)$$
3. $$Cn(X) = \bigcup\{Cn(Y): Y \subseteq X, Y \textrm{ finite}\}$$

The third condition stipulates that $$Cn(X)$$ is equal to the union of the sets of formulas derivable from finite subsets of $$X$$. Tarski took these properties to define the notion of consequence operation axiomatically. 
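Tarski's conditions can be checked mechanically for small finite examples. The sketch below (my own illustrative choice of universe and rules, not from the text) computes $$Cn$$ as closure under a finite rule set and verifies conditions (1), (2), and the monotonicity property implied by (3) on every subset of a three-element set.

```python
from itertools import combinations

# A toy finitary consequence operation on A = {'p', 'q', 'r'}, generated by
# two illustrative rules: from p infer q, and from {q, r} infer p.
RULES = [({'p'}, 'q'), ({'q', 'r'}, 'p')]
A = {'p', 'q', 'r'}

def Cn(X):
    """Close X under RULES: the least superset stable under every rule."""
    closed = set(X)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

def subsets(S):
    S = sorted(S)
    return [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]

# Verify Tarski's conditions on every X, Y subset of A.
for X in subsets(A):
    assert X <= Cn(X)              # (1) X is contained in Cn(X)
    assert Cn(Cn(X)) == Cn(X)      # (2) Cn is idempotent
    for Y in subsets(A):
        if X <= Y:
            assert Cn(X) <= Cn(Y)  # monotonicity, implied by (3)
print(sorted(Cn({'p'})))           # ['p', 'q']
```

Because the rule set is finite, this closure is automatically finitary in the sense of condition (3).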
In fact, he added that there is a formula $$x$$ such that $$Cn(\{x\})$$ is the set $$A$$ of all the formulas and that this set must be finite or of the cardinality of the set of the natural numbers. Condition (3) implies the weaker, and important, condition of monotonicity 1. if $$X \subseteq Y \subseteq A$$, then $$Cn(X) \subseteq Cn(Y)$$. To encompass the whole class of logic systems one finds in the literature, a slightly more general definition than Tarski’s is required. We will say that an abstract consequence operation $$C$$ on an arbitrary set $$A$$ is an operation that applied to subsets of $$A$$ gives subsets of $$A$$ and for all $$X, Y \subseteq A$$ satisfies the conditions (1), (2) and (4) above. If in addition $$C$$ satisfies (3) we say that it is a finitary consequence operation. Consequence operations are present not only in logic but in many areas of mathematics. Abstract consequence operations are known as closure operators in universal algebra and lattice theory, for instance. In topology the operation that sends a subset of a topological space to its topological closure is a closure operator. In fact, the topologies on a set $$A$$ can be identified with the closure operators on $$A$$ that satisfy the additional conditions that $$C(\varnothing) = \varnothing$$ and $$C(X \cup Y) = C(X) \cup C(Y)$$ for all $$X, Y \subseteq A$$. Given a consequence operation $$C$$ on a set $$A$$, a subset $$X$$ of $$A$$ is said to be $$C$$-closed, or a closed set of $$C$$, if $$C(X) = X$$. A different, but mathematically equivalent, (formal) approach is to consider consequence relations on a set of formulas instead of consequence operations. A(n) (abstract) consequence relation on the set of formulas of a formal language is a relation $$\vdash$$ between sets of formulas and formulas that satisfies the following conditions: 1. if $$a \in X$$, then $$X \vdash a$$ 2. if $$X \vdash a$$ and $$X \subseteq Y$$, then $$Y \vdash a$$ 3. 
if $$X \vdash a$$ and for every $$b \in X, Y \vdash b$$, then $$Y \vdash a$$. It is finitary if in addition it satisfies 1. if $$X \vdash a$$, then there is a finite set $$Y \subseteq X$$ such that $$Y \vdash a$$. Given a logical deduction system $$H$$, the relation $$\vdash$$ defined by $$X \vdash a$$ if $$a$$ is deducible from $$X$$ in $$H$$ is (according to all we have already seen) a finitary consequence relation. Nonetheless, we are used not only to syntactic definitions of consequence relations but also to semantic definitions. For example, we define classical propositional consequence using truth valuations, first-order consequence relation using structures, intuitionistic consequence relation using Kripke models, etc. Sometimes these model-theoretic definitions of consequence relations define non-finitary consequence relations, for example, the consequence relations for infinitary formal languages and the consequence relation of second-order logic with the so-called standard semantics. In general, an abstract consequence relation on a set $$A$$ (not necessarily the set of formulas of some formal language) is a relation $$\vdash$$ between subsets of $$A$$ and elements of $$A$$ that satisfies conditions (1)–(3) above. If it also satisfies (4) it is said to be finitary. If $$\vdash$$ is an abstract consequence relation and $$X \vdash a$$, then we can say that $$X$$ is a set of premises or hypothesis with conclusion $$a$$ according to $$\vdash$$ and that $$a$$ follows from $$X$$, or is entailed by $$X$$ (according to $$\vdash)$$. The abstract consequence relations correspond to Koslow’s implication structures; see Koslow 1992 for the closely related but different approach to logics (in a broad sense) as consequence relations introduced by that author. The consequence operations on a set $$A$$ are in one-to-one correspondence with the abstract consequence relations on $$A$$. 
The move from a consequence operation $$C$$ to a consequence relation $$\vdash_C$$ and, conversely, from a consequence relation $$\vdash$$ to a consequence operation $$C_{\vdash}$$ is easy and given by the definitions: $X \vdash_C a \txtiff a \in C(X) \hspace{3mm} \textrm{ and } \hspace{3mm} a \in C_{\vdash}(X) \txtiff X \vdash a.$ Moreover, if $$C$$ is finitary, so is $$\vdash_C$$, and if $$\vdash$$ is finitary, so is $$C_{\vdash}$$. For a general discussion on logical consequence see the entry Logical Consequence.

## 2. Logics as consequence relations

In this section we define what propositional logics are and explain the basic concepts relating to them. We will call the propositional logics (as defined below) simply logic systems. One of the main traits of the consequence relations we study in logic is their formal character. This roughly means that if a sentence $$a$$ follows from a set of sentences $$X$$ and we have another sentence $$b$$ and another set of sentences $$Y$$ that share the same form with $$a$$ and $$X$$ respectively, then $$b$$ also follows from $$Y$$. In propositional logics this boils down to saying that if we uniformly replace basic sub-sentences of the sentences in $$X \cup \{a\}$$ by other sentences obtaining $$Y$$ and $$b$$, then $$b$$ follows from $$Y$$. (The reader can find more information on the idea of formality in the entry Logical Consequence.) To turn the idea of the formal character of logics into a rigorous definition we need to introduce the concept of propositional language and the concept of substitution. A propositional language (a language, for short) $$L$$ is a set of connectives, that is, a set of symbols each one of which has an arity $$n$$ that tells us in case that $$n = 0$$ that the symbol is a propositional constant, and in case that $$n \gt 0$$ whether the connective is unary, binary, ternary, etc. 
For example $$\{\wedge , \vee , \rightarrow , \bot , \top \}$$ is (or can be) the language of several logics, like classical and intuitionistic $$(\bot$$ and $$\top$$ are 0-ary and the other connectives are binary), $$\{\neg , \wedge , \vee , \rightarrow , \Box , \Diamond \}$$ is the language of several modal logics $$(\neg , \Box , \Diamond$$ are unary and the other connectives binary), and $$\{ \wedge , \vee , \rightarrow , * , \top , \bot , 1, 0\}$$ is the language of many-valued logics and also of a fragment of linear logic $$(\bot , \top , 1$$, and 0 are propositional constants and the other symbols binary connectives). Given a language $$L$$ and a set of propositional variables $$V$$ (which is disjoint from $$L)$$, the formulas of $$L$$, or $$L$$-formulas, are defined inductively as follows:

1. Every variable is a formula.
2. Every 0-ary symbol is a formula.
3. If $$*$$ is a connective and $$n \gt 0$$ is its arity, then for all formulas $$\phi_1 ,\ldots ,\phi_n$$, $$* \phi_1 \ldots \phi_n$$ is also a formula.

A substitution $$\sigma$$ for $$L$$ is a map from the set of variables $$V$$ to the set of formulas of $$L$$. It tells us which formula must replace which variable when we perform the substitution. If $$p$$ is a variable, then $$\sigma(p)$$ denotes the formula that the substitution $$\sigma$$ assigns to $$p$$. The result of applying a substitution $$\sigma$$ to a formula $$\phi$$ is the formula $$\bsigma(\phi)$$ obtained from $$\phi$$ by simultaneously replacing the variables in $$\phi$$, say $$p_1 , \ldots ,p_k$$, by, respectively, the formulas $$\sigma(p_1), \ldots ,\sigma(p_k)$$. In this way, a substitution $$\sigma$$ gives a unique map $$\bsigma$$ from the set of formulas to itself that satisfies

1. $$\bsigma(p) = \sigma(p)$$, for every variable $$p$$,
2. $$\bsigma(\dagger) = \dagger$$, for every 0-ary connective $$\dagger$$,
3. 
$$\bsigma(* \phi_1 \ldots \phi_n) = * \bsigma(\phi_1)\ldots \bsigma(\phi_n)$$, for every connective $$*$$ of arity $$n \gt 0$$ and formulas $$\phi_1 , \ldots ,\phi_n$$. A formula $$\psi$$ is a substitution instance of a formula $$\phi$$ if there is a substitution $$\sigma$$ such that when applied to $$\phi$$ gives $$\psi$$, that is, if $$\bsigma(\phi) = \psi$$. In order to avoid unnecessary complications we will assume in the sequel that all the logics use the same denumerable set $$V$$ of variables, so that the definition of formula of $$L$$ depends only on $$L$$. A logic system (or logic for short) is given by a language $$L$$ and a consequence relation $$\vdash$$ on the set of formulas of $$L$$ that is formal in the sense that for every substitution $$\sigma$$, every set of formulas $$\Gamma$$ and every formula $$\phi$$, $\textrm{if } \Gamma \vdash \phi, \textrm{ then } \bsigma[\Gamma] \vdash\bsigma(\phi)$ where $$\bsigma[\Gamma]$$ is the set of the formulas obtained by applying the substitution $$\sigma$$ to the formulas in $$\Gamma$$. The consequence relations on the set of formulas of a language that satisfy this property are called structural and also substitution-invariant in the literature. They were considered for the first time in Łoś & Suszko 1958. Tarski only explicitly considered closed sets also closed under substitution instances for some consequence relations; he never considered (at least explicitly) the substitution invariance condition for consequence relations. We will refer to logic systems by the letter $$\bL$$ with possible subindices, and we set $$\bL = \langle L, \vdash_{\bL } \rangle$$ and $$\bL_n = \langle L_n, \vdash_{\bL_n } \rangle$$ with the understanding that $$L \; (L_n)$$ is the language of $$\bL \;(\bL_n)$$ and $$\vdash_{\bL }\; (\vdash_{\bL_n })$$ its consequence relation. A logic system $$\bL$$ is finitary if $$\vdash_{\bL}$$ is a finitary consequence relation. 
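The inductive definition of the map $$\bsigma$$ translates directly into code. Below is a minimal sketch in Python; the encoding of formulas as nested tuples (a connective name followed by its arguments, with 0-ary connectives as 1-tuples) is my own illustrative choice, not fixed by the text.

```python
# Formulas: variables are strings; a compound formula is a tuple
# ('*', arg1, ..., argn); 0-ary connectives are 1-tuples such as ('bot',).
def subst(sigma, phi):
    """Apply the substitution sigma (a dict from variables to formulas) to
    phi, following the three clauses defining the map sigma-bar."""
    if isinstance(phi, str):                        # clause 1: a variable
        return sigma.get(phi, phi)
    if len(phi) == 1:                               # clause 2: a 0-ary connective
        return phi
    head, *args = phi                               # clause 3: * phi_1 ... phi_n
    return (head, *(subst(sigma, a) for a in args))

# sigma(p) = q -> r and sigma(q) = bot; applied to p AND q this yields
# (q -> r) AND bot.
sigma = {'p': ('->', 'q', 'r'), 'q': ('bot',)}
print(subst(sigma, ('and', 'p', 'q')))  # ('and', ('->', 'q', 'r'), ('bot',))
```

Note that the replacement is simultaneous: the variables of $$\sigma(p)$$ are never themselves substituted again, exactly as in the definition.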
The consequence relation of a logic system can be given in several ways, some using proof-theoretic tools, others semantic means. A substitution-invariant consequence relation can be defined using a proof system like a Hilbert-style axiom system, a Gentzen-style sequent calculus or a natural deduction style calculus, etc. One can also define a substitution-invariant consequence relation semantically using a class of mathematical objects (algebras, Kripke models, topological models, etc.) and a satisfaction relation. If $$\bL_1 = \langle L,\vdash_{\bL_1 } \rangle$$ is a logic system with $$\vdash_{\bL_1}$$ defined by a proof-system and $$\bL_2 = \langle L, \vdash_{\bL_2 } \rangle$$ is a logic system over the same language with $$\vdash_{\bL_2}$$ defined semantically, we say that the proof-system used to define $$\vdash_{\bL_1}$$ is sound for the semantics used to define $$\vdash_{\bL_2}$$ if $$\vdash_{\bL_1}$$ is included in $$\vdash_{\bL_2}$$, namely if $$\Gamma \vdash_{\bL_1 } \phi$$ implies $$\Gamma \vdash_{\bL_2 } \phi$$. If the other inclusion holds the proof-system is said to be complete with respect to the semantics that defines $$\vdash_{\bL_2}$$, that is, when $$\Gamma \vdash_{\bL_2 } \phi$$ implies $$\Gamma \vdash_{\bL_1 } \phi$$. A set of $$L$$-formulas $$\Gamma$$ is called a theory of a logic system $$\bL$$, or $$\bL$$-theory, if it is closed under the relation $$\vdash_{\bL}$$, that is, if whenever $$\Gamma \vdash_{\bL } \phi$$ it also holds that $$\phi \in \Gamma$$. In other words, the theories of $$\bL$$ are the closed sets of the consequence operation $$C_{\vdash_{ \bL}}$$ on the set of $$L$$-formulas. In order to simplify the notation we denote this consequence operation by $$C_{\bL}$$. A formula $$\phi$$ is a theorem (or validity) of $$\bL$$ if $$\varnothing \vdash_{\bL } \phi$$. Then $$C_{\bL }(\varnothing)$$ is the set of theorems of $$\bL$$ and is the least theory of $$\bL$$. The set of all theories of $$\bL$$ will be denoted by $$\tTH(\bL)$$. 
Given a logic system $$\bL$$, the consequence operation $$C_{\bL}$$ is substitution-invariant, which means that for every set of $$L$$-formulas $$\Gamma$$ and every substitution $$\sigma$$, $$\bsigma[C_{\bL}(\Gamma)] \subseteq C_{\bL}(\bsigma[\Gamma])$$. Moreover, for every theory $$T$$ of $$\bL$$ we have a new consequence operation $$C_{\bL }^T$$ defined as follows: $C_{\bL }^T (\Gamma) = C_{\bL }(T \cup \Gamma)$ that is, $$C_{\bL }^T (\Gamma)$$ is the set of formulas that follow from $$\Gamma$$ and $$T$$ according to $$\bL$$. It turns out that $$T$$ is closed under substitutions if and only if $$C_{\bL }^T$$ is substitution-invariant. If $$\bL$$ is a logic system and $$\Gamma , \Delta$$ are sets of $$L$$-formulas, we will use the notation $$\Gamma \vdash_{\bL } \Delta$$ to state that for every $$\psi \in \Delta , \Gamma \vdash_{\bL } \psi$$. Thus $$\Gamma \vdash_{\bL } \Delta$$ if and only if $$\Delta \subseteq C_{\bL }(\Gamma)$$. If $$\bL = \langle L, \vdash_{\bL } \rangle$$ and $$\bL' = \langle L', \vdash_{\bL' } \rangle$$ are logic systems whose languages satisfy that $$L'\subseteq L$$ (hence all the $$L'$$-formulas are $$L$$-formulas) and $\Gamma \vdash_{\bL' } \phi \txtiff \Gamma \vdash_{\bL } \phi,$ for every set of $$L'$$-formulas $$\Gamma$$ and every $$L'$$-formula $$\phi$$, we say that $$\bL'$$ is a fragment of $$\bL$$ (in fact, the $$\bL'$$-fragment) and that $$\bL$$ is an expansion of $$\bL'$$.

## 3. Some examples of logics

We present some examples of logic systems that we will refer to in the course of this essay; they are assembled here for the reader's convenience. Whenever possible we refer to the corresponding entries. We use the standard convention of writing $$(\phi * \psi)$$ instead of $$* \phi \psi$$ for binary connectives and omit the external parentheses in formulas. 
### 3.1 Classical propositional logic

We take the language of Classical propositional logic $$\bCPL$$ to be the set $$L_c = \{\wedge , \vee , \rightarrow , \top , \bot \},$$ where $$\wedge , \vee , \rightarrow$$ are binary connectives and $$\top , \bot$$ propositional constants. We assume that the consequence relation is defined by the usual truth-table method $$(\top$$ is interpreted as true and $$\bot$$ as false) as follows: $$\Gamma \vdash_{\bCPL } \phi\txtiff$$ every truth valuation that assigns true to all $$\psi \in \Gamma$$ assigns true to $$\phi$$. The formulas $$\phi$$ such that $$\varnothing \vdash_{\bCPL } \phi$$ are the tautologies. Note that using the language $$L_c$$, the negation of a formula $$\phi$$ is defined as $$\phi \rightarrow \bot$$. For more information, see the entry on classical logic.

### 3.2 Intuitionistic propositional logic

We take the language of Intuitionistic propositional logic to be the same as that of classical propositional logic, namely the set $$\{\wedge , \vee , \rightarrow , \top , \bot \}$$. The consequence relation is defined by the following Hilbert-style calculus.

#### Axioms:

All the formulas of the forms

C0. $$\top$$
C1. $$\phi \rightarrow(\psi \rightarrow \phi)$$
C2. $$\phi \rightarrow(\psi \rightarrow(\phi \wedge \psi))$$
C3. $$(\phi \wedge \psi) \rightarrow \phi$$
C4. $$(\phi \wedge \psi) \rightarrow \psi$$
C5. $$\phi \rightarrow(\phi \vee \psi)$$
C6. $$\psi \rightarrow(\phi \vee \psi)$$
C7. $$(\phi \vee \psi) \rightarrow((\phi \rightarrow \delta) \rightarrow((\psi \rightarrow \delta) \rightarrow \delta))$$
C8. $$(\phi \rightarrow \psi) \rightarrow((\phi \rightarrow(\psi \rightarrow \delta)) \rightarrow(\phi \rightarrow \delta))$$
C9. 
$$\bot \rightarrow \phi$$

#### Rule of inference

$\phi , \phi \rightarrow \psi / \psi \tag{Modus Ponens}$

### 3.3 Local Normal Modal logics

The language of modal logic we consider here is the set $$L_m = \{\wedge , \vee , \rightarrow , \neg , \Box , \top , \bot \}$$ that expands $$L_c$$ by adding the unary connective $$\Box$$. In the standard literature on modal logic a normal modal logic is defined not as a consequence relation but as a set of formulas with certain properties. A normal modal logic is a set $$\Lambda$$ of formulas of $$L_m$$ that contains all the tautologies of the language of classical logic, contains the formulas of the form $\Box(\phi \rightarrow \psi) \rightarrow(\Box \phi \rightarrow \Box \psi)$ and is closed under the rules \begin{align*} \phi , \phi \rightarrow \psi / \psi \tag{Modus Ponens}\\ \phi / \Box \phi \tag{Modal Generalization}\\ \phi/ \bsigma(\phi), \textrm{ for every substitution } \sigma \tag{Substitution}\\ \end{align*} Note that the set $$\Lambda$$ is closed under substitution instances, namely for every substitution $$\sigma$$, if $$\phi \in \Lambda$$, then $$\bsigma(\phi) \in \Lambda$$. The least normal modal logic is called $$K$$ and can be axiomatized by the Hilbert-style calculus whose axioms are the tautologies of classical logic and the formulas $$\Box(\phi \rightarrow \psi) \rightarrow(\Box \phi \rightarrow \Box \psi)$$, and whose rules of inference are Modus Ponens and Modal Generalization. Note that since we use schemas in the presentation of the axioms, the set of derivable formulas is closed under the Substitution rule. With each normal modal logic $$\Lambda$$ is associated the consequence relation defined by the calculus that takes as axioms all the formulas in $$\Lambda$$ and as its only rule of inference Modus Ponens. The logic system given by this consequence relation is called the local consequence of $$\Lambda$$. We denote it by $$\blLambda$$. 
Its theorems are the elements of $$\Lambda$$ and it holds that $$\Gamma \vdash_{\blLambda} \phi\txtiff\phi \in \Lambda$$ or there are $$\phi_1 , \ldots ,\phi_n \in \Gamma$$ such that $$(\phi_1 \wedge \ldots \wedge \phi_n) \rightarrow \phi \in \Lambda$$. ### 3.4 Global Normal Modal logics Another consequence relation is associated naturally with each normal modal logic $$\Lambda$$, defined by the calculus that has as axioms the formulas of $$\Lambda$$ and as rules of inference Modus Ponens and Modal Generalization. The logic system given by this consequence relation is called the global consequence of $$\Lambda$$ and will be denoted by $$\bgLambda$$. It has the same theorems as the local $$\blLambda$$, namely the elements of $$\Lambda$$. The difference between $$\blLambda$$ and $$\bgLambda$$ lies in the consequences they allow one to draw from nonempty sets of premises. For example, we have $$p \vdash_{\bgK} \Box p$$ but $$p \not\vdash_{\blK} \Box p$$. This difference has an enormous effect on their algebraic behavior. For more information on modal logic, see the entry on modal logic. The reader can find specific information on modal logics as consequence relations in Kracht 2006. ### 3.5 Intuitionistic Linear Logic without exponentials We take as the language of Intuitionistic Linear Logic without exponentials the set $$\{\wedge , \vee , \rightarrow , * , 0, 1, \top , \bot \}$$, where $$\wedge , \vee , \rightarrow, *$$ are binary connectives and $$0, 1,\top , \bot$$ propositional constants. We denote the logic by $$\bILL$$. The axioms and rules of inference below provide a Hilbert-style axiomatization of this logic. #### Axioms: L1. 1 L2. $$(\phi \rightarrow \psi) \rightarrow((\psi \rightarrow \delta) \rightarrow(\phi \rightarrow \delta))$$ L3. $$(\phi \rightarrow(\psi \rightarrow \delta)) \rightarrow(\psi \rightarrow(\phi \rightarrow \delta))$$ L4. $$\phi \rightarrow(\psi \rightarrow(\phi * \psi))$$ L5.
$$(\phi \rightarrow(\psi \rightarrow \delta)) \rightarrow((\phi * \psi) \rightarrow \delta)$$ L6. $$1 \rightarrow(\phi \rightarrow \phi)$$ L7. $$(\phi \wedge \psi) \rightarrow \phi$$ L8. $$(\phi \wedge \psi) \rightarrow \psi$$ L9. $$\psi \rightarrow(\phi \vee \psi)$$ L10. $$\phi \rightarrow(\phi \vee \psi)$$ L11. $$((\phi \rightarrow \psi) \wedge(\phi \rightarrow \delta)) \rightarrow(\phi \rightarrow(\psi \wedge \delta))$$ L12. $$((\phi \rightarrow \delta) \wedge(\psi \rightarrow \delta)) \rightarrow((\phi \vee \psi) \rightarrow \delta)$$ L13. $$\phi \rightarrow \top$$ L14. $$\bot \rightarrow \psi$$ #### Rules of inference: \begin{align*} \phi , \phi \rightarrow \psi / \psi \tag{Modus Ponens}\\ \phi , \psi / \phi \wedge \psi \tag{Adjunction}\\ \end{align*} The 0-ary connective 0 is used to define a negation by $$\neg \phi := \phi \rightarrow 0$$. No specific axiom schema deals with 0. ### 3.6 The system $$\bR$$ of Relevance Logic The language we consider is the set $$\{\wedge , \vee , \rightarrow , \neg \}$$, where $$\wedge , \vee , \rightarrow$$ are binary connectives and $$\neg$$ a unary connective. A Hilbert-style axiomatization for $$\bR$$ can be given by the rules of Intuitionistic Linear Logic without exponentials and the axioms L2, L3, L7-L12 of this logic together with the axioms 1. $$(\phi \rightarrow(\phi \rightarrow \psi)) \rightarrow(\phi \rightarrow \psi)$$ 2. $$(\phi \rightarrow \neg \psi) \rightarrow(\psi \rightarrow \neg \phi)$$ 3. $$(\phi \wedge(\psi \vee \delta)) \rightarrow((\phi \wedge \psi) \vee(\phi \wedge \delta))$$ 4. $$\neg \neg \phi \rightarrow \phi$$ ## 4. Algebras The algebraic study of a particular logic must first of all provide its formal language with an algebraic semantics using a class of algebras whose properties are exploited to understand which properties the logic has. In this section, we present how the formal languages of propositional logics are given an algebraic interpretation.
In the next section, we address the question of what is an algebraic semantics for a logic system. We start by describing the first two steps involved in the algebraic study of propositional logics. Both are needed in order to endow propositional languages with algebraic interpretations. To expound them we will assume knowledge of first-order logic (see the entries on classical logic and first-order model theory) and we will call algebraic first-order languages, or simply algebraic languages, the first-order languages with equality and without any relational symbols, so that these languages have only operation symbols (also called function symbols), if any, in the set of their non-logical symbols. The two steps we are about to expound can be summarized in the slogan: Propositional formulas are terms. The first step consists in looking at the formulas of any propositional language $$L$$ as the terms of the algebraic first-order language with $$L$$ as its set of operation symbols. This means that (i) every connective of $$L$$ of arity $$n$$ is taken as an operation symbol of arity $$n$$ (thus every 0-ary symbol of $$L$$ is taken as an individual constant) and that (ii) the propositional formulas of $$L$$ are taken as the terms of this first-order language; in particular the propositional variables are the variables of the first-order language. From this point of view the definition of $$L$$-formula is exactly the definition of $$L$$-term. We will refer to the algebraic language with $$L$$ as its set of operation symbols as the $$L$$-algebraic language. The second step is to interpret the propositional formulas in the same manner in which terms of a first-order language are interpreted in a structure. In this way the concept of $$L$$-algebra comes into play. On a given set $$A$$, an $$n$$-ary connective is interpreted by an $$n$$-ary function on $$A$$ (a map that assigns an element of $$A$$ to every sequence $$\langle a_1 , \ldots ,a_n\rangle$$ of elements of $$A)$$.
This procedure is a generalization of the truth-table interpretations of the languages of logic systems like classical logic and Łukasiewicz and Post’s finite-valued logics. In those cases, given the set of truth-values at play, the function that interprets a connective is given by its truth-table. A way to introduce algebras is as the models of some algebraic first-order language. We follow an equivalent route and give the definition of algebra using the setting of propositional languages. Let $$L$$ be a propositional language. An algebra $$\bA$$ of type $$L$$, or $$L$$-algebra for short, is a set $$A$$, called the carrier or the universe of $$\bA$$, together with a function $$* ^{\bA}$$ on $$A$$ of the arity of $$*$$, for every connective $$*$$ in $$L$$ (if $$*$$ is 0-ary, $$* ^{\bA}$$ is an element of $$A)$$. An algebra $$\bA$$ is trivial if its carrier is a one-element set. A valuation on an $$L$$-algebra $$\bA$$ is a map $$v$$ from the set of variables into its carrier $$A$$. Algebras together with valuations are used to interpret in a compositional way the formulas of $$L$$, assuming that a connective $$*$$ of $$L$$ is interpreted in an $$L$$-algebra $$\bA$$ by the function $$* ^{\bA}$$. Let $$\bA$$ be an algebra of type $$L$$ and $$v$$ a valuation on $$\bA$$. The value of a compound formula $$* \phi_1 \ldots \phi_n$$ is computed by applying the function $$* ^{\bA}$$ that interprets $$*$$ in $$\bA$$ to the previously computed values $$\bv(\phi_1), \ldots,\bv(\phi_n)$$ of the formulas $$\phi_1,\ldots,\phi_n$$. Precisely speaking, the value $$\bv(\phi)$$ of a formula $$\phi$$ is defined inductively as follows: 1. $$\bv(p) = v(p)$$, for each variable $$p$$, 2. $$\bv(\dagger) = \dagger^{\bA}$$, if $$\dagger$$ is a 0-ary connective 3. $$\bv(* \phi_1 \ldots \phi_n) = * ^{\bA }(\bv(\phi_1), \ldots ,\bv(\phi_n))$$, if $$*$$ is an $$n$$-ary $$(n \gt 0)$$ connective.
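The inductive clauses above are straightforward to render in code. The following Python sketch is ours, not part of the entry; the names `BOOL2` and `evaluate` are assumptions of the illustration. It interprets the language $$\{\wedge , \vee , \rightarrow , \top , \bot \}$$ on the two-element Boolean algebra and computes $$\bv(\phi)$$ by recursion on the structure of the formula.

```python
# A formula is a variable name (str) or a tuple (connective, subformula, ...);
# an algebra of type L is modelled as a dict sending each connective of L to
# its interpreting function (a 0-ary connective is sent to an element of A).

# The two-element Boolean algebra on {0, 1}: the truth-table interpretation.
BOOL2 = {
    "and": lambda a, b: min(a, b),
    "or":  lambda a, b: max(a, b),
    "imp": lambda a, b: max(1 - a, b),
    "top": 1,
    "bot": 0,
}

def evaluate(phi, algebra, v):
    """Compute the value of phi under the valuation v, following
    clauses 1-3 of the inductive definition."""
    if isinstance(phi, str):            # clause 1: a propositional variable
        return v[phi]
    if len(phi) == 1:                   # clause 2: a 0-ary connective
        return algebra[phi[0]]
    args = [evaluate(sub, algebra, v) for sub in phi[1:]]
    return algebra[phi[0]](*args)       # clause 3: an n-ary connective, n > 0

# The value of p -> (q -> p) is 1 (true) under all four valuations on BOOL2:
phi = ("imp", "p", ("imp", "q", "p"))
assert all(evaluate(phi, BOOL2, {"p": a, "q": b}) == 1
           for a in (0, 1) for b in (0, 1))
```

With $$\top$$ interpreted as 1 and $$\bot$$ as 0, the same recursion covers the truth-table semantics of classical propositional logic from Section 3.1; swapping in another algebra for `BOOL2` changes the interpretation without touching `evaluate`.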
Note that in this way we have obtained a map $$\bv$$ from the set of $$L$$-formulas to the carrier of $$\bA$$. It is important to notice that the value of a formula under a valuation depends only on the propositional variables that actually appear in the formula. Accordingly, if $$\phi$$ is a formula, then we use the notation $$\phi(p_1 , \ldots ,p_n)$$ to indicate that the variables that appear in $$\phi$$ are in the list $$p_1 , \ldots ,p_n$$, and given elements $$a_1 , \ldots ,a_n$$ of an algebra $$\bA$$ we refer by $$\phi^{\bA }[a_1 , \ldots ,a_n]$$ to the value of $$\phi(p_1 , \ldots ,p_n)$$ under any valuation $$v$$ on $$\bA$$ such that $$v(p_1) = a_1 , \ldots ,v(p_n) = a_n$$. A third and fundamental step in the algebraic study of logics is to turn the set of formulas of a language $$L$$ into an algebra, the algebra of formulas of $$L$$, denoted by $$\bFm_L$$. This algebra has the set of $$L$$-formulas as carrier and the operations are defined as follows. For every $$n$$-ary connective $$*$$ with $$n \gt 0$$, the function $$* ^{\bFm_L}$$ is the map that sends each tuple of formulas $$(\phi_1 , \ldots ,\phi_n)$$ (where $$n$$ is the arity of $$*$$) to the formula $$* \phi_1 \ldots \phi_n$$, and for every 0-ary connective $$\dagger , \dagger^{\bFm_L}$$ is $$\dagger$$. If no confusion is likely we suppress the subindex in $$\bFm_L$$ and write $$\bFm$$ instead. ### 4.1 Some concepts of universal algebra and model theory Algebras are a particular type of structure or model. An $$L$$-algebra is a structure or model for the $$L$$-algebraic first-order language. Therefore the concepts of model theory for the first-order languages apply to them (see the entries on classical logic and first-order model theory). We need some of these concepts. They are also used in universal algebra, a field that to some extent can be considered the model theory of the algebraic languages. We introduce the definitions of the concepts we need. 
Given an algebra $$\bA$$ of type $$L$$, a congruence of $$\bA$$ is an equivalence relation $$\theta$$ on the carrier of $$\bA$$ that satisfies for every $$n$$-ary connective $$* \in L$$ the following compatibility property: for every $$a_1 , \ldots ,a_n, b_1 , \ldots ,b_n \in A$$, $\textrm{if } a_1\theta b_1 , \ldots ,a_n \theta b_n, \textrm{ then } *^{\bA}(a_1 ,\ldots ,a_n)\ \theta *^{\bA}(b_1 ,\ldots ,b_n).$ Given a congruence $$\theta$$ of $$\bA$$ we can reduce the algebra by identifying the elements which are related by $$\theta$$. The algebra obtained is the quotient algebra of $$\bA$$ modulo $$\theta$$. It is denoted by $$\bA/\theta$$, its carrier is the set $$A/\theta$$ of equivalence classes $$[a]$$ of the elements $$a$$ of $$A$$ modulo the equivalence relation $$\theta$$, and the operations are defined as follows: 1. $$\dagger^{\bA/\theta} = [\dagger^{\bA}]$$, for every 0-ary connective $$\dagger$$, 2. $$* ^{\bA/\theta}([a_1], \ldots, [a_n]) = [* ^{\bA }(a_1 ,\ldots ,a_n)]$$, for every connective $$*$$ whose arity is $$n$$ and $$n \gt 0$$. The compatibility property ensures that the definition is sound. Let $$\bA$$ and $$\bB$$ be $$L$$-algebras. A homomorphism $$h$$ from $$\bA$$ to $$\bB$$ is a map $$h$$ from $$A$$ to $$B$$ such that for every 0-ary symbol $$\dagger \in L$$ and every $$n$$-ary connective $$* \in L$$ 1. $$h(\dagger^{\bA }) = \dagger^{\bB}$$ 2. $$h(* ^{\bA }(a_1 ,\ldots ,a_n)) = * ^{\bB }(h(a_1),\ldots ,h(a_n))$$, for all $$a_1 , \ldots ,a_n \in A$$. We say that $$\bB$$ is a homomorphic image of $$\bA$$ if there is a homomorphism from $$\bA$$ to $$\bB$$ which is an onto map from $$A$$ to $$B$$. A homomorphism from $$\bA$$ to $$\bB$$ is an isomorphism if it is a one-to-one and onto map from $$A$$ to $$B$$. If an isomorphism from $$\bA$$ to $$\bB$$ exists, we say that $$\bA$$ and $$\bB$$ are isomorphic and that $$\bB$$ is an isomorphic image (or a copy) of $$\bA$$. Let $$\bA$$ and $$\bB$$ be $$L$$-algebras.
$$\bA$$ is a subalgebra of $$\bB$$ if (1) $$A \subseteq B$$, (2) the interpretations of the 0-ary symbols of $$L$$ in $$\bB$$ belong to $$A$$ and $$A$$ is closed under the functions of $$\bB$$ that interpret the non 0-ary symbols, and (3) the interpretations of the 0-ary symbols in $$\bA$$ coincide with their interpretations in $$\bB$$ and the interpretations on $$\bA$$ of the other symbols in $$L$$ are the restrictions to $$\bA$$ of their interpretations in $$\bB$$. We refer the reader to the entry on first-order model theory for the notions of direct product (called product there) and ultraproduct. ### 4.2 Varieties and quasivarieties The majority of classes of algebras that provide semantics for propositional logics are quasivarieties and in most cases varieties. The theory of varieties and quasivarieties is one of the main subjects of universal algebra. An equational class of $$L$$-algebras is a class of $$L$$-algebras that is definable in a very simple way (by equations) using the $$L$$-algebraic language. An $$L$$-equation is a formula $$\phi \approx \psi$$ where $$\phi$$ and $$\psi$$ are terms of the $$L$$-algebraic language (that is, $$L$$-formulas if we take the propositional logic's point of view) and '$$\approx$$' is the formal symbol for the equality (always to be interpreted as the identity relation). An equation $$\phi \approx \psi$$ is valid in an algebra $$\bA$$, or $$\bA$$ is a model of $$\phi \approx \psi$$, if for every valuation $$v$$ on $$\bA, \bv(\phi) = \bv(\psi)$$. This is exactly the same as saying that the universal closure of $$\phi \approx \psi$$ is a sentence true in $$\bA$$ according to the usual semantics for first-order logic with equality. An equational class of $$L$$-algebras is a class of $$L$$-algebras which is the class of all the models of a given set of $$L$$-equations.
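For a finite algebra, validity of an equation is a decidable, brute-force matter: one simply runs over all valuations of the variables occurring in it. The following Python sketch is ours, not from the entry; the diamond lattice `M3` (bottom, three pairwise incomparable atoms, top) and all names are assumptions of the example. It checks that the distributivity equation $$x \wedge(y \vee z) \approx(x \wedge y) \vee(x \wedge z)$$ fails in $$M_3$$ (so $$M_3$$ is not a distributive lattice), while the absorption equation $$x \wedge(x \vee y) \approx x$$, valid in every lattice, holds in it.

```python
from itertools import product

# Terms of the algebraic language: a variable (str) or (operation, term, ...).
def value(t, alg, v):
    if isinstance(t, str):
        return v[t]
    return alg[t[0]](*(value(s, alg, v) for s in t[1:]))

def variables(t):
    if isinstance(t, str):
        return {t}
    return set().union(*(variables(s) for s in t[1:]))

def is_valid(eq, alg, carrier):
    """An equation (phi, psi) is valid in the algebra iff both sides take
    the same value under every valuation into the carrier."""
    phi, psi = eq
    vs = sorted(variables(phi) | variables(psi))
    return all(value(phi, alg, dict(zip(vs, tup))) ==
               value(psi, alg, dict(zip(vs, tup)))
               for tup in product(carrier, repeat=len(vs)))

# The diamond M3: bottom "0", atoms "a", "b", "c", top "1".
def meet(x, y):
    if x == y: return x
    if "0" in (x, y): return "0"
    if x == "1": return y
    if y == "1": return x
    return "0"        # two distinct atoms: their meet is the bottom

def join(x, y):
    if x == y: return x
    if "1" in (x, y): return "1"
    if x == "0": return y
    if y == "0": return x
    return "1"        # two distinct atoms: their join is the top

M3 = {"meet": meet, "join": join}
CARRIER = ["0", "a", "b", "c", "1"]

distributivity = (("meet", "x", ("join", "y", "z")),
                  ("join", ("meet", "x", "y"), ("meet", "x", "z")))
absorption = (("meet", "x", ("join", "x", "y")), "x")

assert not is_valid(distributivity, M3, CARRIER)  # M3 is not distributive
assert is_valid(absorption, M3, CARRIER)          # but absorption holds
```

The witnessing failure is the valuation $$x \mapsto a, y \mapsto b, z \mapsto c$$: the left side evaluates to $$a$$ and the right side to the bottom element.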
A quasi-equational class of $$L$$-algebras is a class of $$L$$-algebras definable using the $$L$$-algebraic language in a slightly more complex way than in the case of equational classes. A proper $$L$$-quasiequation is a formula of the form $\bigwedge_{i \le n} \phi_i \approx \psi_i \rightarrow \phi \approx \psi.$ An $$L$$-quasiequation is a formula of the above form but possibly with an empty antecedent, in which case it is just the equation $$\phi \approx \psi$$. Hence, the $$L$$-quasiequations are the proper $$L$$-quasiequations and the $$L$$-equations. An $$L$$-quasiequation is valid in an $$L$$-algebra $$\bA$$, or the algebra is a model of it, if the universal closure of the quasiequation is a sentence true in $$\bA$$. A quasi-equational class of $$L$$-algebras is a class of algebras that is the class of the models of a given set of $$L$$-quasiequations. Since equations are quasiequations, every equational class is quasi-equational. The converse is false. Moreover, since in the trivial algebras all the equations and all the quasiequations of the appropriate algebraic language are valid, equational and quasi-equational classes are nonempty. Equational and quasi-equational classes of algebras can be characterized by the closure properties they enjoy. A nonempty class of $$L$$-algebras is a variety if it is closed under subalgebras, direct products, and homomorphic images. It is a quasivariety if it is closed under subalgebras, direct products, ultraproducts, isomorphic images, and contains a trivial algebra. It is easily seen that equational classes are varieties and that quasi-equational classes are quasivarieties. Birkhoff's theorem states that all varieties are equational classes and Malcev's theorem that all quasivarieties are quasi-equational classes. The variety generated by a nonempty class $$\bK$$ of $$L$$-algebras is the least class of $$L$$-algebras that includes $$\bK$$ and is closed under subalgebras, direct products and homomorphic images.
It is also the class of the algebras that are models of the equations valid in $$\bK$$. For example, the variety generated by the algebra of the two truth-values for classical logic is the class of Boolean algebras. If we restrict that algebra to the operations for conjunction and disjunction only, it generates the variety of distributive lattices and if we restrict it to the operations for conjunction and disjunction and the interpretations of $$\top$$ and $$\bot$$, it generates the variety of bounded distributive lattices. The quasivariety generated by a class $$\bK$$ of $$L$$-algebras is the least class of $$L$$-algebras that includes $$\bK$$, the trivial algebras and is closed under subalgebras, direct products, ultraproducts, and isomorphic images. An SP-class of $$L$$-algebras is a class of $$L$$-algebras that contains a trivial algebra and is closed under isomorphic images, subalgebras, and direct products. Thus quasivarieties and varieties are all SP-classes. The SP-class generated by a class $$\bK$$ of $$L$$-algebras is the least class of $$L$$-algebras that includes $$\bK$$, the trivial algebras and is closed under subalgebras, direct products and isomorphic images. ## 5. Algebraic semantics The term ‘algebraic semantics’ was (and many times still is) used in the literature in a loose way. To provide a logic with an algebraic semantics was to interpret its language in a class of algebras, define a notion of satisfaction of a formula (under a valuation) in an algebra of the class and prove a soundness and completeness theorem, usually for the theorems of the logic only. Nowadays there is a precise concept of algebraic semantics for a logic system. It was introduced by Blok and Pigozzi in Blok & Pigozzi 1989. In this concept we find a general way to state in mathematically precise terms what is common to the many cases of purported algebraic semantics for specific logic systems found in the literature. We present the notion in this section.
To motivate the definition we discuss several examples first, stressing the relevant properties that they share. The reader does not need to know about the classes of algebras that provide algebraic semantics we refer to in the examples. Their existence is what is important. The prototypical examples of algebraic semantics for propositional logics are the class BA of Boolean algebras, which is the algebraic semantics for classical logic, and the class HA of Heyting algebras, which is the algebraic semantics for intuitionistic logic. Every Boolean algebra and every Heyting algebra $$\bA$$ has a greatest element according to their natural order; this element is denoted usually by $$1^{\bA}$$ and interprets the propositional constant symbol $$\top$$. It is taken as the distinguished element relative to which the algebraic semantics is given. The algebraic semantics of these two logics works as follows: Let $$\bL$$ be classical or intuitionistic logic and let $$\bK(\bL)$$ be the corresponding class of algebras BA or HA. It holds that $$\Gamma \vdash_{\bL } \phi \txtiff$$ for every $$\bA \in \bK(\bL)$$ and every valuation $$v$$ on $$\bA$$, if $$\bv(\psi) = 1^{\bA}$$ for all $$\psi \in \Gamma$$, then $$\bv(\phi) = 1^{\bA}$$. This is the precise content of the statement that BA and HA are an algebraic semantics for classical logic and for intuitionistic logic, respectively. The implication from left to right in the expression above is an algebraic soundness theorem and the implication from right to left an algebraic completeness theorem. There are logics for which an algebraic semantics is provided in the literature in a slightly different way from the one given by the schema above. Let us consider the example in Section 3.5 of Intuitionistic Linear Logic without exponentials. We denote by $$\bILsubZ$$ the class of IL-algebras with zero defined in Troelstra 1992 (but adapted to the language of $$\bILL)$$.
Each $$\bA \in \bILsubZ$$ is a lattice with extra operations and thus has its lattice order $$\le^{\bA}$$. This lattice order has a greatest element which we take as the interpretation of $$\top$$. On each one of these algebras $$\bA$$ there is a designated element $$1^{\bA}$$ (the interpretation of the constant 1) that may be different from the greatest element. It holds: $$\Gamma \vdash_{\bILL } \phi \txtiff$$ for every $$\bA \in \bILsubZ$$ and every valuation $$v$$ on $$\bA$$, if $$1^{\bA } \le^{\bA } \bv(\psi)$$ for all $$\psi \in \Gamma$$, then $$1^{\bA } \le^{\bA } \bv(\phi)$$. In this case one does not consider only a single designated element in every algebra $$\bA$$ but a set of designated elements, namely the elements of $$\bA$$ greater than or equal to $$1^{\bA}$$, to provide the definition. Let us denote this set by $$\tD (\bA)$$, and notice that $$\tD (\bA) = \{a \in A: 1^{\bA } \wedge^{\bA} a = 1^{\bA }\}$$. Hence, $$\Gamma \vdash_{\bILL } \phi \txtiff$$ for every $$\bA \in \bILsubZ$$ and every valuation $$v$$ on $$\bA$$, if $$\bv[\Gamma] \subseteq \tD (\bA)$$, then $$\bv(\phi) \in \tD (\bA)$$. Still there are even more complex situations. One of them is the system $$\bR$$ of relevance logic. Consider the class of algebras $$\bRal$$ defined in Font & Rodríguez 1990 (see also Font & Rodríguez 1994) and denoted there by ‘$$\bR$$’. Let us consider for every $$\bA \in \bRal$$ the set $\tE(\bA) := \{a \in A: a \wedge^{\bA }(a \rightarrow^{\bA } a) = a \rightarrow^{\bA } a\}.$ Then $$\bRal$$ is said to be an algebraic semantics for $$\bR$$ because the following holds: $$\Gamma \vdash_{\bR } \phi\txtiff$$ for every $$\bA \in \bRal$$ and every valuation $$v$$ on $$\bA$$, if $$\bv[\Gamma] \subseteq \tE (\bA)$$, then $$\bv(\phi) \in \tE (\bA)$$. The common pattern in the examples above is that the algebraic semantics is given by 1. a class of algebras $$\bK$$, 2.
in each algebra in $$\bK$$ a set of designated elements that plays the role $$1^{\bA}$$ (more precisely the set $$\{1^{\bA }\})$$ plays in the cases of classical and intuitionistic logic, and 3. this set of designated elements is definable (in the same manner on every algebra) by an equation in the sense that it is the set of elements of the algebra that satisfy the equation (i.e., its solutions). For BA and HA the equation is $$p \approx \top$$. For $$\bRal$$ it is $$p \wedge(p \rightarrow p) \approx p \rightarrow p$$, and for $$\bILsubZ$$ it is $$1 \wedge p \approx 1$$. The main point in Blok and Pigozzi’s concept of algebraic semantics comes from the realization, mentioned in (3) above, that the set of designated elements considered in the algebraic semantics of known logics is in fact the set of solutions of an equation, and that what practice forced researchers to look for when they tried to obtain algebraic semantics for new logics was in fact, although not explicitly formulated in these terms, an equational way to define uniformly in every algebra a set of designated elements in order to obtain an algebraic soundness and completeness theorem. We are now in a position to state the mathematically precise concept of algebraic semantics. To develop a fruitful and general theory of the algebraization of logics some generalizations beyond the well-known concrete examples have to be made. In the definition of algebraic semantics, one moves from a single equation to a set of equations in the definability condition for the set of designated elements. Before stating Blok and Pigozzi’s definition we need to introduce a notational convention. Given an algebra $$\bA$$ and a set of equations $$\iEq$$ in one variable, we denote by $$\tEq(\bA)$$ the set of elements of $$\bA$$ that satisfy all the equations in $$\iEq$$.
Then a logic $$\bL$$ is said to have an algebraic semantics if there is a class of algebras $$\bK$$ and a set of equations $$\iEq$$ in one variable such that (**) $$\Gamma \vdash_{\bL } \phi \txtiff$$ for every $$\bA \in \bK$$ and every valuation $$v$$ on $$\bA$$, if $$\bv[\Gamma] \subseteq \tEq(\bA)$$, then $$\bv(\phi) \in \tEq(\bA)$$. In this situation we say that the class of algebras $$\bK$$ is an $$\iEq$$-algebraic semantics for $$\bL$$, or that the pair $$(\bK, \iEq)$$ is an algebraic semantics for $$\bL$$. If $$\iEq$$ consists of a single equation $$\delta(p) \approx \varepsilon(p)$$ we will simply say that $$\bK$$ is a $$\delta(p) \approx \varepsilon(p)$$-algebraic semantics for $$\bL$$. In fact, Blok and Pigozzi required that $$\iEq$$ should be finite in their definition of algebraic semantics. But it is better to be more general. The definition clearly encompasses the situations encountered in the examples. If $$\bK$$ is an $$\iEq$$-algebraic semantics for a finitary logic $$\bL$$ and $$\iEq$$ is finite, then the quasivariety generated by $$\bK$$ is also an $$\iEq$$-algebraic semantics. The same does not hold in general if we consider the generated variety. For this reason, it is customary and useful when developing the theory of the algebraization of finitary logics to consider quasivarieties of algebras as algebraic semantics instead of arbitrary subclasses that generate them. Conversely, if a quasivariety is an $$\iEq$$-algebraic semantics for a finitary $$\bL$$ and $$\iEq$$ is finite, then so is any subclass of the quasivariety that generates it. In the best-behaved cases, the typical algebraic semantics of a logic is a variety, for instance in all the examples discussed above. But there are cases in which it is not (see Blok & Pigozzi 1989). A quasivariety can be an $$\iEq$$-algebraic semantics for a logic and an $$\iEq'$$-algebraic semantics for another logic (with $$\iEq$$ and $$\iEq'$$ different).
For example, due to Glivenko’s theorem (see the entry on intuitionistic logic) the class of Heyting algebras is a $$\{\neg \neg p \approx \top\}$$-algebraic semantics for classical logic and it is the standard $$\{p \approx \top\}$$-algebraic semantics for intuitionistic logic. Moreover, different quasivarieties of algebras can be an $$\iEq$$-algebraic semantics for the same logic. It is known that there is a quasivariety that properly includes the variety of Boolean algebras that is also a $$\{p \approx \top\}$$-algebraic semantics for classical propositional logic. It is also known that for some logics with an algebraic semantics (relative to some set of equations), the natural class of algebras that corresponds to the logic is not an algebraic semantics (for any set of equations) of it. One example where this situation holds is the local normal modal logic $$\blK$$. Finally, there are logics that do not have any algebraic semantics. These facts highlight the need for criteria to judge whether a pair $$(\bK, \iEq)$$ provides a natural algebraic semantics for a logic $$\bL$$, when one exists. One such criterion would be that $$\bL$$ is an algebraizable logic with $$(\bK, \iEq)$$ as an algebraic semantics. Another is that $$\bK$$ is the natural class of algebras associated with the logic $$\bL$$. The notion of the natural class of algebras of a logic system will be discussed in Section 8 and the concept of algebraizable logic in Section 9. The interested reader can examine Blok & Rebagliato 2003 for a study devoted to algebraic semantics of logics and Moraschini forthcoming for the most recent results on the topic (in this paper there is a proof of the fact that the natural class of algebras of the local normal modal logic $$\blK$$, namely the class of modal algebras, is not an algebraic semantics (for any set of equations) for it). There is a particular, and important, kind of logics with an algebraic semantics that includes classical and intuitionistic logics.
It is the class of the so-called assertional logics. Let $$\bK$$ be a class of algebras in an algebraic language with a constant term for $$\bK$$, i.e., a formula $$\phi(p_1 , \ldots ,p_n)$$ such that for every algebra $$\bA\in \bK$$ and elements $$a_1 , \ldots ,a_n, b_1, \ldots, b_n$$ of $$\bA$$, $$\phi^{\bA }[a_1 , \ldots ,a_n] = \phi^{\bA }[b_1 , \ldots ,b_n]$$, that is, in every algebra in $$\bK$$, $$\phi$$ takes the same value however we interpret the variables of $$\phi$$ in $$\bA$$. We denote this value by $$\phi^{\bA}$$. Thus $$\phi$$ acts as a constant (relative to the algebras in $$\bK$$) and $$\phi^{\bA}$$ (for $$\bA\in \bK$$) can be taken as a designated element. Given a class of algebras $$\bK$$ in an algebraic language with a constant term $$\phi$$ for $$\bK$$, the assertional logic $$\bL_{\bK}^{\phi}$$ of ($$\bK, \phi$$) is defined by $$\Gamma \vdash_{\bL_{\bK}^{\phi} } \delta \txtiff$$ for every $$\bA \in \bK$$ and every valuation $$v$$ on $$\bA$$, if $$\bv(\psi) = \phi^{\bA}$$ for all $$\psi \in \Gamma$$, then $$\bv(\delta) = \phi^{\bA}$$. A logic system $$\bL$$ is assertional when there exists a class of algebras $$\bK$$ in the algebraic language of $$\bL$$ and a constant term $$\phi$$ for $$\bK$$ such that $$\bL = \bL_{\bK}^{\phi}$$. The most recent study of assertional logics is Albuquerque et al. 2018. We refer the reader to this paper, where the classification of the assertional logics in the Leibniz and Frege hierarchies of logic systems that we present in later sections is carried out and several examples are discussed. ## 6. Logical matrices In the last section, we saw that to provide a logic with an algebraic semantics we need in many cases to consider in every algebra a set of designated elements instead of a single designated one. In the examples we discussed, the set of designated elements was definable in the algebras by one equation. This motivated the definition of algebraic semantics in Section 5.
For many logics, to obtain a semantics similar to an algebraic semantics using the class of algebras naturally associated with them, one needs for every algebra a set of designated elements that cannot be defined by the equations of the algebraic language, or is not even definable in this language at all. As we already mentioned, one example where this happens is the local consequence of the normal modal logic $$K$$. Also, recall that there are logics with no algebraic semantics at all. To endow every logic with a semantics of an algebraic kind one has to consider, at least, algebras together with a set of designated elements, without any requirement about its definability using the corresponding algebraic language. These pairs are the logical matrices. Tarski defined the general concept of logical matrix in the 1920s but the concept was already implicit in previous work by Łukasiewicz, Bernays, Post and others, who used truth-tables, either in independence proofs or to define logics different from classical logic. A logical matrix is a pair $$\langle \bA, D \rangle$$ where $$\bA$$ is an algebra and $$D$$ a subset of the universe $$A$$ of $$\bA$$; the elements of $$D$$ are called the designated elements of the matrix and accordingly $$D$$ is called the set of designated elements (and some authors call it the truth set of the matrix). Logical matrices were first used as models of the theorems of specific logic systems, for instance in the work of McKinsey and Tarski, and also to define sets of formulas with similar properties to those of the set of theorems of a logic system, namely closure under substitution instances. This was the case for the $$n$$-valued logics of Łukasiewicz and of his infinite-valued logic. And it was Tarski who first considered logical matrices as a general tool to define sets of this kind.
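The way a single matrix determines a set of theorems, and more generally a consequence relation, can be made concrete with Łukasiewicz's three-valued matrix. The sketch below is ours, not part of the entry; the value scale $$\{0, 1, 2\}$$ with 2 designated, and all function names, are assumptions of the illustration. It checks that $$p \rightarrow p$$ is valid in the matrix, that the law of excluded middle $$p \vee \neg p$$ is not, and that Modus Ponens is sound for the matrix consequence.

```python
from itertools import product

# Lukasiewicz's three-element matrix: carrier {0, 1, 2} (false, half, true),
# with D = {2} as the set of designated elements.
CARRIER = (0, 1, 2)
D = {2}
OPS = {
    "neg": lambda x: 2 - x,
    "imp": lambda x, y: min(2, 2 - x + y),
    "or":  lambda x, y: max(x, y),
}

def value(phi, v):
    """A formula is a variable (str) or a tuple (connective, subformulas...)."""
    if isinstance(phi, str):
        return v[phi]
    return OPS[phi[0]](*(value(sub, v) for sub in phi[1:]))

def vars_of(phi):
    if isinstance(phi, str):
        return {phi}
    return set().union(*(vars_of(sub) for sub in phi[1:]))

def entails(premises, phi):
    """The consequence defined by the matrix: every valuation that sends
    all the premises into D also sends phi into D."""
    vs = sorted(set().union(*(vars_of(f) for f in premises + [phi])))
    return all(value(phi, dict(zip(vs, tup))) in D
               for tup in product(CARRIER, repeat=len(vs))
               if all(value(g, dict(zip(vs, tup))) in D for g in premises))

assert entails([], ("imp", "p", "p"))               # p -> p is a theorem
assert not entails([], ("or", "p", ("neg", "p")))   # excluded middle is not
assert entails(["p", ("imp", "p", "q")], "q")       # Modus Ponens is sound
```

The failing valuation for excluded middle is $$p \mapsto 1$$ (the intermediate value): both disjuncts then take the undesignated value 1.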
The general theory of logical matrices explained in this entry is due mainly to Polish logicians, starting with Łoś 1949 and continuing in Łoś & Suszko 1958, building on previous work by Lindenbaum. In Łoś and Suszko’s paper matrices are used for the first time both as models of logic systems (in our sense) and to define systems of this kind. In the rest of the section, we present the relevant concepts of the theory of logical matrices using modern terminology. Given a logic $$\bL$$, a logical matrix $$\langle \bA, D \rangle$$ is said to be a model of $$\bL$$ if whenever $$\Gamma \vdash_{\bL } \phi$$, every valuation $$v$$ on $$\bA$$ that maps the elements of $$\Gamma$$ to some designated value (i.e., an element of $$D)$$ also maps $$\phi$$ to a designated value. When $$\langle \bA, D \rangle$$ is a model of $$\bL$$ it is said that $$D$$ is an $$\bL$$-filter of the algebra $$\bA$$. The set of $$\bL$$-filters of an algebra $$\bA$$ plays a crucial role in the theory of the algebraization of logic systems. We will come to this point later. A class $$\bM$$ of logical matrices is said to be a matrix semantics for a logic $$\bL$$ if (*) $$\Gamma \vdash_{\bL } \phi\txtiff$$ for every $$\langle \bA, D \rangle \in \bM$$ and every valuation $$v$$ on $$\bA$$, if $$\bv[\Gamma] \subseteq D$$, then $$\bv(\phi) \in D$$. The implication from left to right says that $$\bL$$ is sound relative to $$\bM$$, and the other implication says that it is complete. In other words, $$\bM$$ is a matrix semantics for $$\bL$$ if and only if every matrix in $$\bM$$ is a model of $$\bL$$ and moreover for every $$\Gamma$$ and $$\phi$$ such that $$\Gamma \not\vdash_{\bL } \phi$$ there is a model $$\langle \bA, D \rangle$$ of $$\bL$$ in $$\bM$$ that witnesses the fact, namely there is a valuation on the model that sends the formulas in $$\Gamma$$ to designated elements and $$\phi$$ to a non-designated one. Logical matrices are also used to define logics semantically.
If $$\cM = \langle \bA, D \rangle$$ is a logical matrix, the relation defined by $$\Gamma \vdash_{\cM } \phi\txtiff$$ for every valuation $$v$$ on $$\bA$$ if $$\bv(\psi) \in D$$ for all $$\psi \in \Gamma$$, then $$\bv(\phi) \in D$$ is a consequence relation which is substitution-invariant; therefore $$\langle L, \vdash_{\cM } \rangle$$ is a logic system. Similarly, we can define the logic of a class of matrices $$\bM$$ by taking condition (*) as a definition of a consequence relation. In the entry on many-valued logic the reader can find several logics defined in this way. Every logic (independently of how it is defined) has a matrix semantics. Moreover, every logic has a matrix semantics whose elements have the property of being reduced in the following sense: A matrix $$\langle \bA, D \rangle$$ is reduced if there are no two different elements of $$A$$ that behave in the same way. We say that $$a, b \in A$$ behave in the same way in $$\langle \bA, D \rangle$$ if for every formula $$\phi (q, p_1 , \ldots ,p_n)$$ and all elements $$d_1 , \ldots ,d_n \in A$$ $\phi^{\bA }[a, d_1 , \ldots ,d_n] \in D \txtiff \phi^{\bA }[b, d_1 , \ldots ,d_n] \in D.$ Thus $$a, b \in A$$ behave differently if there is a formula $$\phi(q, p_1 , \ldots ,p_n)$$ and elements $$d_1 , \ldots ,d_n \in A$$ such that one of $$\phi^{\bA }[a, d_1 , \ldots ,d_n]$$ and $$\phi^{\bA }[b, d_1 , \ldots ,d_n]$$ belongs to $$D$$ but not both. The relation of behaving in the same way in $$\langle \bA, D \rangle$$ is a congruence relation of $$\bA$$. This relation is known after Blok & Pigozzi 1986, 1989 as the Leibniz congruence of the matrix $$\langle \bA, D \rangle$$ and is denoted by $$\bOmega_{\bA }(D)$$. It can be characterized as the greatest congruence relation of $$\bA$$ that is compatible with $$D$$, that is, that does not relate elements in $$D$$ with elements not in $$D$$. 
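Since the Leibniz congruence is the greatest congruence of $$\bA$$ compatible with $$D$$, for a small finite matrix it can be found by brute force. The following Python sketch (an illustration of ours, not a standard algorithm) computes it for the four-element Boolean algebra, with elements coded as bit pairs and $$D$$ the filter generated by the atom $$(1, 0)$$:

```python
from itertools import product

# Brute-force computation of the Leibniz congruence of a small finite
# matrix, using its characterization as the greatest congruence of the
# algebra compatible with the set D of designated elements.
ELEMS = [(0, 0), (0, 1), (1, 0), (1, 1)]          # 0, b, a, 1

def meet(x, y): return (x[0] & y[0], x[1] & y[1])
def join(x, y): return (x[0] | y[0], x[1] | y[1])
def comp(x):    return (1 - x[0], 1 - x[1])

D = {(1, 0), (1, 1)}                               # filter generated by a

def partitions(xs):                                # all set partitions of xs
    if not xs:
        yield []
        return
    head, *rest = xs
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [head]] + part[i + 1:]
        yield part + [[head]]

def as_relation(part):                             # partition -> equivalence relation
    return {(x, y) for block in part for x in block for y in block}

def is_congruence(rel):                            # preserved by meet, join, comp
    pairs = list(rel)
    for (x1, y1), (x2, y2) in product(pairs, pairs):
        if (meet(x1, x2), meet(y1, y2)) not in rel: return False
        if (join(x1, x2), join(y1, y2)) not in rel: return False
    return all((comp(x), comp(y)) in rel for (x, y) in rel)

def compatible(rel):                               # never crosses the boundary of D
    return all((x in D) == (y in D) for (x, y) in rel)

good = [r for p in partitions(ELEMS)
        for r in [as_relation(p)] if is_congruence(r) and compatible(r)]
leibniz = max(good, key=len)                       # the Leibniz congruence of <A, D>

print(((1, 1), (1, 0)) in leibniz)                 # True: 1 and a are identified
print(((1, 1), (0, 0)) in leibniz)                 # False: they differ on D
```

The computed congruence identifies $$(1,1)$$ with $$(1,0)$$ and $$(0,0)$$ with $$(0,1)$$; the quotient is the two-element Boolean algebra, so the reduction of this matrix is the familiar classical matrix.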
The concept of Leibniz congruence plays a fundamental role in the general theory of the algebraization of the logic systems developed during the 1980s by Blok and Pigozzi. The reader is referred to Font, Jansana, & Pigozzi 2003 and Czelakowski 2001 for extensive information on the developments around the concept of Leibniz congruence during this period. Every matrix $$\cM$$ can be turned into a reduced matrix by identifying the elements related by its Leibniz congruence. This matrix is called the reduction of $$\cM$$ and is usually denoted by $$\cM^*$$. A matrix and its reduction are models of the same logic systems, and since reduced matrices have no redundant elements, the classes of reduced matrices that are matrix semantics for logic systems are usually taken as the classes of matrices that deserve study; they are better suited to encoding in algebraic-like terms the properties of the logics that have them as their matrix semantics. The proof that every logic system has a reduced matrix semantics (i.e., a matrix semantics consisting of reduced matrices) is as follows. Let $$\bL$$ be a logic system. Consider the matrices $$\langle \bFm_L, T \rangle$$ over the formula algebra, where $$T$$ is a theory of $$\bL$$. These matrices are known as the Lindenbaum matrices of $$\bL$$. It is not difficult to see that the class of those matrices is a matrix semantics for $$\bL$$. Since a matrix and its reduction are models of the same logics, the reductions of the Lindenbaum matrices of $$\bL$$ constitute a matrix semantics for $$\bL$$ too, and indeed a reduced one. Moreover, any class of reduced matrix models of $$\bL$$ that includes the reduced Lindenbaum matrices of $$\bL$$ is automatically a complete matrix semantics for $$\bL$$. In particular, the class of all reduced matrix models of $$\bL$$ is a complete matrix semantics for $$\bL$$. We denote this class by $$\bRMatr(\bL)$$. 
The above proof can be seen as a generalization of the Lindenbaum-Tarski method for proving algebraic completeness theorems that we will discuss in the next section. The class of the algebras of the matrices in $$\bRMatr(\bL)$$ plays a prominent role in the theory of the algebraization of logics and it is denoted by $$\bAlg^*\bL$$. For a long time it was considered the natural class of algebras to associate with a given logic $$\bL$$ as its algebraic counterpart. For instance, in the examples considered above the classes of algebras that were given as algebraic semantics of the different logics (Boolean algebras, Heyting algebras, etc.) are exactly the class $$\bAlg^*\bL$$ of the corresponding logic $$\bL$$. And in fact, the class $$\bAlg^*\bL$$ coincides with what was taken to be the natural class of algebras for all the logics $$\bL$$ studied up to the 1990s. In the 1990s, due to the knowledge acquired about several logics not previously studied, some authors proposed another way to define the class of algebras that should count as the algebraic counterpart of a given logic $$\bL$$. For many logics $$\bL$$, it leads exactly to the class $$\bAlg^*\bL$$ but for others it gives a class that extends it properly. We will discuss it in Section 8.

## 7. The Lindenbaum-Tarski method for proving algebraic completeness theorems

We now discuss the method that is most commonly used to prove that a class of algebras $$\bK$$ is a $$\delta(p) \approx \varepsilon(p)$$-algebraic semantics for a logic $$\bL$$, namely the Lindenbaum-Tarski method. It is the standard method used to prove that the classes of algebras of the examples mentioned in Section 5 are algebraic semantics for the corresponding logics. The Lindenbaum-Tarski method contributed in two respects to the elaboration of important notions in the theory of the algebraization of logics.
It underlies Blok and Pigozzi’s notion of algebraizable logic, and reflection on it justifies certain ways of defining, for each logic, a natural class of algebras. We will consider this issue in Section 8. The Lindenbaum-Tarski method can be outlined as follows. To prove that a class of algebras $$\bK$$ is a $$\delta(p) \approx \varepsilon(p)$$-algebraic semantics for a logic $$\bL$$, one first shows that $$\bK$$ gives a sound $$\delta(p) \approx \varepsilon(p)$$-semantics for $$\bL$$, namely that if $$\Gamma \vdash_{\bL } \phi$$, then for every $$\bA \in \bK$$ and every valuation $$v$$ in $$\bA$$, if the values of the formulas in $$\Gamma$$ satisfy $$\delta(p) \approx \varepsilon(p)$$, then the value of $$\phi$$ does too. Secondly, the other direction, that is, the completeness part, is proved by what is properly known as the Lindenbaum-Tarski method. This method uses the theories of $$\bL$$ to obtain matrices on the algebra of formulas and then reduces these matrices in order to get for each one a matrix whose algebra is in $$\bK$$ and whose set of designated elements is the set of elements of the algebra that satisfy $$\delta(p) \approx \varepsilon(p)$$. We proceed to describe the method step by step. Let $$\bL$$ be one of the logics discussed in the examples in Section 5. Let $$\bK$$ be the corresponding class of algebras we considered there and let $$\delta(p) \approx \varepsilon(p)$$ be the equation in one variable involved in the soundness and completeness theorem. To prove the completeness theorem one proceeds as follows. Given any set of formulas $$\Gamma$$:

1. The theory $$C_{\bL }(\Gamma) = \{\phi : \Gamma \vdash_{\bL } \phi \}$$ of $$\Gamma$$, which we denote by $$T$$, is considered and the binary relation $$\theta(T)$$ on the set of formulas is defined using the formula $$p \leftrightarrow q$$ as follows: $\langle \phi , \psi \rangle \in \theta(T) \txtiff \phi \leftrightarrow \psi \in T.$
2. It is shown that $$\theta(T)$$ is a congruence relation on $$\bFm_L$$. The set $$[\phi]$$ of the formulas related to the formula $$\phi$$ by $$\theta(T)$$ is called the equivalence class of $$\phi$$.
3. A new matrix $$\langle \bFm/\theta(T), T/\theta(T) \rangle$$ is obtained by identifying the formulas related by $$\theta(T)$$, that is, $$\bFm/\theta(T)$$ is the quotient algebra of $$\bFm$$ modulo $$\theta(T)$$ and $$T/\theta(T)$$ is the set of equivalence classes of the elements of $$T$$. Recall that the algebraic operations of the quotient algebra are defined by: $* ^{\bFm/\theta(T) }([\phi_1],\ldots ,[\phi_n]) = [* \phi_1 \ldots \phi_n ] \;\;\; \text{and} \;\;\; \dagger^{\bFm/\theta(T) } = [\dagger]$
4. It is shown that $$\theta(T)$$ is a relation compatible with $$T$$, i.e., that if $$\langle \phi , \psi \rangle \in \theta(T)$$ and $$\phi \in T$$, then $$\psi \in T$$. This implies that $\phi \in T \txtiff [\phi] \subseteq T \txtiff [\phi] \in T/\theta(T).$
5. It is proved that the matrix $$\langle \bFm/\theta(T), T/\theta(T) \rangle$$ is reduced, that $$\bFm/\theta(T)$$ belongs to $$\bK$$ and that $$T/\theta(T)$$ is the set of elements of $$\bFm/\theta(T)$$ that satisfy the equation $$\delta(p) \approx \varepsilon(p)$$ in $$\bFm/\theta(T)$$.

The proof of the completeness theorem then proceeds as follows. (4) and (5) imply that for every formula $$\psi$$, $$\Gamma \vdash_{\bL } \psi$$ if and only if $$[\psi]$$ satisfies the equation $$\delta(p) \approx \varepsilon(p)$$ in the algebra $$\bFm/\theta(T)$$. Thus, considering the valuation $$id$$ mapping every variable $$p$$ to its equivalence class $$[p]$$, whose extension $$\boldsymbol{id}$$ to the set of all formulas is such that $$\boldsymbol{id}(\phi) = [\phi]$$ for every formula $$\phi$$, we have for every formula $$\psi$$, $$\Gamma \vdash_{\bL } \psi \txtiff\boldsymbol{id}(\psi)$$ satisfies the equation $$\delta(p) \approx \varepsilon(p)$$ in $$\bFm/\theta(T)$$.
Hence, since by (5), $$\bFm/\theta(T) \in \bK$$, it follows that if $$\Gamma \not\vdash_{\bL }\phi$$, then there is an algebra $$\bA \in \bK$$ (namely $$\bFm/\theta(T))$$ and a valuation $$v$$ (namely $$id)$$ such that the elements of $$\bv[\Gamma]$$ satisfy the equation on $$\bA$$ but $$\bv(\phi)$$ does not. The Lindenbaum-Tarski method, when successful, shows that the class of algebras $$\{\bFm/\theta(T): T$$ is a theory of $$\bL\}$$ is a $$\delta(p) \approx \varepsilon(p)$$-algebraic semantics for $$\bL$$. Therefore it also shows that every class of algebras $$\bK$$ which is $$\delta(p) \approx \varepsilon(p)$$-sound for $$\bL$$ and includes the set $$\{\bFm/\theta(T): T$$ is a theory of $$\bL\}$$ is also a $$\delta(p) \approx \varepsilon(p)$$-algebraic semantics for $$\bL$$. Let us make some remarks on the Lindenbaum-Tarski method just described. The first is important for the generalizations leading to the classes of algebras associated with a logic. The others are relevant to the conditions appearing in the definition of the concept of algebraizable logic.

1. Conditions (4) and (5) imply that $$\theta(T)$$ is in fact the Leibniz congruence of $$\langle \bFm_L, T \rangle$$.
2. When the Lindenbaum-Tarski method succeeds, it usually holds that in every algebra $$\bA \in \bK$$, the relation defined by the equation $\delta(p \leftrightarrow q) \approx \varepsilon(p \leftrightarrow q),$ which is the result of replacing in $$\delta(p) \approx \varepsilon(p)$$ the letter $$p$$ by the formula $$p \leftrightarrow q$$ that defines the congruence relation of a theory, is the identity relation on $$A$$.
3. For every formula $$\phi$$, the formulas $$\delta(p/\phi) \leftrightarrow \varepsilon(p/\phi)$$ and $$\phi$$ are interderivable in $$\bL$$ (i.e., $$\phi \vdash_{\bL } \delta(p/\phi) \leftrightarrow \varepsilon(p/\phi)$$ and $$\delta(p/\phi) \leftrightarrow \varepsilon(p/\phi) \vdash_{\bL } \phi)$$.
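As a concrete illustration of steps (1)–(4), the following Python sketch carries them out for classical propositional logic with $$\Gamma = \{p\}$$, using truth tables as an oracle for derivability (legitimate for this illustration, since in classical logic derivability and truth-table consequence coincide); all names are ours:

```python
from itertools import product

# Steps (1)-(4) of the Lindenbaum-Tarski method for classical
# propositional logic with Γ = {p}; truth tables decide membership
# in the theory T = Cn({p}).
VARS = ['p', 'q']

def ev(phi, v):
    if isinstance(phi, str):
        return v[phi]
    op, *args = phi
    a = [ev(x, v) for x in args]
    if op == 'not': return 1 - a[0]
    if op == 'and': return a[0] & a[1]
    if op == 'or':  return a[0] | a[1]
    if op == '<->': return int(a[0] == a[1])
    raise ValueError(op)

GAMMA = ['p']

def in_theory(phi):                   # phi ∈ T = Cn({p})
    return all(ev(phi, dict(zip(VARS, vals)))
               for vals in product((0, 1), repeat=len(VARS))
               if all(ev(g, dict(zip(VARS, vals))) for g in GAMMA))

def related(phi, psi):                # <phi, psi> ∈ θ(T) iff phi <-> psi ∈ T
    return in_theory(('<->', phi, psi))

# Modulo T, the formula q is identified with p ∧ q ...
print(related('q', ('and', 'p', 'q')))              # True
# ... p falls into the class of the tautologies ...
print(related('p', ('or', 'q', ('not', 'q'))))      # True
# ... and θ(T) is compatible with T: it never relates a member of T
# to a non-member (here p ∈ T and ¬q ∉ T):
print(related('p', ('not', 'q')), in_theory('p'))   # False True
```

Modulo the theory of $$\{p\}$$, the formula $$q$$ is identified with $$p \wedge q$$ and $$p$$ falls into the class of the tautologies; the quotient matrix formed from such equivalence classes is the one used in step (5).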
The concept of algebraizable logic introduced by Blok and Pigozzi, which we will discuss in Section 9, can be described roughly by saying that a logic $$\bL$$ is algebraizable if it has an algebraic semantics $$(\bK, \iEq)$$ such that (1) $$\bK$$ is included in the natural class of algebras $$\bAlg^*\bL$$ associated with $$\bL$$ and (2) the fact that $$(\bK, \iEq)$$ is an algebraic semantics can be proved by using the Lindenbaum-Tarski method slightly generalized.

## 8. The natural class of algebras of a logic system

We shall now discuss the two definitions that have been considered as providing natural classes of algebras associated with a logic $$\bL$$. Both definitions can be seen as arising from an abstraction of the Lindenbaum-Tarski method and we follow this path in introducing them. The common feature of these abstractions is that in them the specific way in which the relation $$\theta(T)$$ is defined in the Lindenbaum-Tarski method is disregarded. It has to be remarked that, nonetheless, for many logics both definitions lead to the same class. The classes obtained from both definitions have been taken in the algebraic studies of many particular logics (for some logics one, for others the other) as the natural class that deserves to be studied. We already encountered the first generalization in Section 6 when we showed that every logic has a reduced matrix semantics. It leads to the class of algebras $$\bAlg^*\bL$$. That its definition is a generalization of the Lindenbaum-Tarski method comes from the realization that the relation $$\theta(T)$$, associated with an $$\bL$$-theory, defined in the different completeness proofs in the literature that use the Lindenbaum-Tarski method is in fact the Leibniz congruence of the matrix $$\langle \bFm_L, T \rangle$$ and that therefore the matrix $$\langle \bFm/\theta(T), T/\theta(T) \rangle$$ is its reduction.
As we mentioned in Section 6, for every logic $$\bL$$, every $$\bL$$-sound class of matrices $$\bM$$ that contains all the matrices $$\langle \bFm/\bOmega_{\bFm_L }(T), T/ \bOmega_{\bFm_L }(T) \rangle$$, where $$T$$ is a theory of $$\bL$$, is a complete reduced matrix semantics for $$\bL$$. From this perspective the notion of the Leibniz congruence of a matrix can be taken as a generalization to arbitrary matrices of the idea that comes from the Lindenbaum-Tarski procedure of proving completeness. Following this course of reasoning, the class $$\bAlg^*\bL$$ of the algebras of the reduced matrix models of a logic $$\bL$$ is a very natural class of algebras to associate with $$\bL$$. It is the class $$\{\bA/\bOmega_{\bA }(F): \bA$$ is an $$\bL$$-algebra and $$F$$ is an $$\bL$$-filter of $$\bA\}$$. The second way of generalizing the Lindenbaum-Tarski method uses a different fact, namely that in the examples discussed in Section 3 the relation $$\theta(T)$$ is also the relation $$\bOmega^{\sim}_{\bFm_L }(T)$$ defined by the condition \begin{align*} \langle \phi , \psi \rangle \in \bOmega^{\sim}_{\bFm_L }(T)\txtiff & \forall T' \in \tTH(\bL),\\ & \forall p \in V, \\ &\forall \gamma(p) \in \bFm_L (T \subseteq T' \Rightarrow (\gamma(p/\phi) \in T' \Leftrightarrow \gamma(p/\psi) \in T')). \end{align*} For every logic $$\bL$$ and every $$\bL$$-theory $$T$$ the relation $$\bOmega^{\sim}_{\bFm_L }(T)$$ defined in this way is the greatest congruence compatible with all the $$\bL$$-theories that extend $$T$$. Therefore, it holds that $\bOmega^{\sim}_{\bFm_L }(T) = \bigcap_{T' \in \tTH(\bL)^T} \bOmega_{\bFm_L }(T'),$ where $$\tTH(\bL)^T = \{T' \in \tTH(\bL): T \subseteq T'\}$$. The relation $$\bOmega^{\sim}_{\bFm_L }(T)$$ is known as the Suszko congruence of $$T$$ (w.r.t. $$\bL)$$. Suszko defined it, in an equivalent way, in 1977. For every logic $$\bL$$, the notion of the Suszko congruence can be extended to its matrix models.
The Suszko congruence of a matrix model $$\langle \bA, D \rangle$$ of $$\bL$$ (w.r.t. $$\bL)$$ is the greatest congruence of $$\bA$$ compatible with every $$\bL$$-filter of $$\bA$$ that includes $$D$$, that is, it is the relation given by ${\bOmega^{\sim}_{\bA}}^{\bL}(D) = \bigcap_{D' \in \tFi_{\bL}(\bA)^D} \bOmega_{\bA}(D')$ where $$\tFi_{\bL}(\bA)^D = \{D': D'$$ is an $$\bL$$-filter of $$\bA$$ and $$D \subseteq D'\}$$. Notice that unlike the intrinsic notion of Leibniz congruence, the Suszko congruence of a matrix model of $$\bL$$ is not intrinsic to the matrix: it depends in an essential way on the logic under consideration. The theory of the Suszko congruence of matrices has been developed in Czelakowski 2003 and continued in Albuquerque & Font & Jansana 2016. In the same manner that the concept of Leibniz congruence leads to the concept of reduced matrix, the notion of Suszko congruence leads to the notion of Suszko-reduced matrix. A matrix model of $$\bL$$ is Suszko-reduced if its Suszko congruence is the identity. Then the class of algebras of the Suszko-reduced matrix models of a logic $$\bL$$ is another class of algebras that is taken as a natural class of algebras to associate with $$\bL$$. It is the class $$\bAlg\bL = \{\bA / {\bOmega^{\sim}_{\bA}}^{\bL}(F): \bA$$ is an $$\bL$$-algebra and $$F$$ is an $$\bL$$-filter of $$\bA\}$$. This class is nowadays taken in abstract algebraic logic as the natural class of algebras to be associated with $$\bL$$ and it is called its algebraic counterpart. For an arbitrary logic $$\bL$$, the relation between the classes $$\bAlg\bL$$ and $$\bAlg^*\bL$$ is that $$\bAlg\bL$$ is the closure of $$\bAlg^*\bL$$ under subdirect products, in particular $$\bAlg^*\bL \subseteq \bAlg\bL$$. In general, the two classes may be different.
For example, if $$\bL$$ is the $$(\wedge , \vee)$$-fragment of classical propositional logic, $$\bAlg\bL$$ is the variety of distributive lattices (the class that has always been taken to be the natural class of algebras associated with $$\bL)$$ while $$\bAlg^*\bL$$ is properly included in it (in fact, $$\bAlg^*\bL$$ is not a quasivariety). Nonetheless, for many logics $$\bL$$, in particular for the algebraizable and the protoalgebraic ones to be discussed in the next sections, and also when $$\bAlg^*\bL$$ is a variety, the classes $$\bAlg\bL$$ and $$\bAlg^*\bL$$ are equal. This fact can explain why in the 1980s, before the algebraic study of non-protoalgebraic logics was considered worth pursuing, the conceptual difference between the two definitions was not needed and, accordingly, it was not considered (or even discovered).

## 9. When is a logic algebraizable and what does this mean?

The algebraizable logics are purported to be the logics with the strongest possible link with their algebraic counterpart. This requirement demands that the algebraic counterpart of the logic be an algebraic semantics, but it asks for a more robust connection between the logic and that counterpart than this alone. Such a connection is present in the best behaved particular logics known, and the mathematically precise concept of algebraizable logic characterizes this type of link. Blok and Pigozzi introduced that fundamental concept in Blok & Pigozzi 1989, and its introduction can be considered the starting point of the unification and growth of the field of abstract algebraic logic in the 1980s. Blok and Pigozzi defined the notion of algebraizable logic only for finitary logics. Later, Czelakowski and Herrmann generalized it to arbitrary logics and also weakened some conditions in the definition. We present here the generalized concept.
We said in Section 7 that, roughly speaking, a logic $$\bL$$ is algebraizable when 1) it has an algebraic semantics, i.e., a class of algebras $$\bK$$ and a set of equations $$\iEq(p)$$ such that $$\bK$$ is an $$\iEq$$-algebraic semantics for $$\bL$$, 2) this fact can be proved by using the Lindenbaum-Tarski method slightly generalized and, moreover, 3) $$\bK \subseteq \bAlg^*\bL$$. The generalization of the Lindenbaum-Tarski method (as we described it in Section 7) consists in allowing in step (5) (as already done in the definition of algebraic semantics) a set of equations $$\iEq(p)$$ in one variable instead of a single equation $$\delta(p) \approx \varepsilon(p)$$ and in allowing in a similar manner a set of formulas $$\Delta(p, q)$$ in at most two variables to play the role of the formula $$p \leftrightarrow q$$ in the definition of the congruence of a theory. Then, given a theory $$T$$, the relation $$\theta(T)$$, which has to be the greatest congruence on the formula algebra compatible with $$T$$ (i.e., the Leibniz congruence of $$T)$$, is defined by $\langle \phi , \psi \rangle \in \theta(T) \txtiff \Delta(p/\phi , q/\psi) \subseteq T.$ We need some notational conventions before engaging in the precise definition of algebraizable logic. Given a set of equations $$\iEq(p)$$ in one variable and a formula $$\phi$$, let $$\iEq(\phi)$$ be the set of equations obtained by replacing in all the equations in $$\iEq$$ the variable $$p$$ by $$\phi$$. If $$\Gamma$$ is a set of formulas, let $\iEq(\Gamma) := \bigcup_{\phi \in \Gamma}\iEq(\phi).$ Similarly, given a set of formulas in two variables $$\Delta(p, q)$$ and an equation $$\delta \approx \varepsilon$$, let $$\Delta(\delta , \varepsilon)$$ denote the set of formulas obtained by replacing $$p$$ by $$\delta$$ and $$q$$ by $$\varepsilon$$ in all the formulas in $$\Delta$$.
Moreover, if $$\iEq$$ is a set of equations, let $\Delta(\iEq) = \bigcup_{\delta \approx \varepsilon \in \iEq} \Delta(\delta , \varepsilon).$ Given a set of equations $$\iEq(p, q)$$ in two variables, this set defines on every algebra $$\bA$$ a binary relation, namely the set of pairs $$\langle a, b\rangle$$ of elements of $$A$$ that satisfy in $$\bA$$ all the equations in $$\iEq(p, q)$$. In standard model-theoretic notation, this set is the relation $\{\langle a, b \rangle : a, b \in A \textrm{ and } \bA \vDash \iEq(p, q)[a, b]\}.$ The formal definition of algebraizable logic is as follows. A logic $$\bL$$ is algebraizable if there is a class of algebras $$\bK$$, a set of equations $$\iEq(p)$$ in one variable and a set of formulas $$\Delta(p, q)$$ in two variables such that

1. $$\bK$$ is an $$\iEq$$-algebraic semantics for $$\bL$$, namely $$\Gamma \vdash_{\bL } \phi\txtiff$$ for every $$\bA \in \bK$$ and every valuation $$v$$ on $$\bA$$, if $$\bv[\Gamma] \subseteq \iEq(\bA)$$, then $$\bv(\phi) \in \iEq(\bA)$$.
2. For every $$\bA \in \bK$$, the relation defined by the set of equations in two variables $$\iEq(\Delta(p, q))$$ is the identity relation on $$A$$.

A class of algebras $$\bK$$ for which there are sets $$\iEq(p)$$ and $$\Delta(p, q)$$ with these two properties is said to be an equivalent algebraic semantics for $$\bL$$. The set of formulas $$\Delta$$ is called a set of equivalence formulas and the set of equations $$\iEq$$ a set of defining equations. The conditions of the definition imply:

1. $$p$$ is interderivable in $$\bL$$ with the set of formulas $$\Delta(\iEq)$$, that is $\Delta(\iEq) \vdash_{\bL } p \textrm{ and } p \vdash_{\bL } \Delta(\iEq).$
2. For every $$\bL$$-theory $$T$$, the Leibniz congruence of $$\langle \bFm_L, T\rangle$$ is the relation defined by $$\Delta(p, q)$$, namely $\langle \phi , \psi \rangle \in \bOmega_{\bFm }(T)\txtiff\Delta(p/\phi , q/\psi) \subseteq T.$
3. If $$\Delta$$ and $$\Delta '$$ are two sets of equivalence formulas, $$\Delta \vdash_{\bL } \Delta '$$ and $$\Delta ' \vdash_{\bL } \Delta$$. Similarly, if $$\iEq(p)$$ and $$\iEq'(p)$$ are two sets of defining equations, then for every algebra $$\bA \in \bK, \iEq(\bA) = \iEq'(\bA)$$.
4. The class of algebras $$\bAlg^*\bL$$ also satisfies conditions (1) and (2), and hence it is an equivalent algebraic semantics for $$\bL$$. Moreover, it is an SP-class and includes every other class of algebras that is an equivalent algebraic semantics for $$\bL$$. Accordingly, it is called the greatest equivalent algebraic semantics of $$\bL$$.
5. For every $$\bA \in \bAlg^*\bL$$ there is exactly one $$\bL$$-filter $$F$$ such that the matrix $$\langle \bA, F\rangle$$ is reduced, and this filter is the set $$\iEq(\bA)$$. Or, to put it in other terms, the class of reduced matrix models of $$\bL$$ is $$\{\langle \bA, \iEq(\bA) \rangle : \bA \in \bAlg^*\bL\}$$.

Blok and Pigozzi’s definition of algebraizable logic in Blok & Pigozzi 1989 was given only for finitary logics and, moreover, they imposed that the sets of defining equations and of equivalence formulas should be finite. Today we say that an algebraizable logic is finitely algebraizable if the sets of equivalence formulas $$\Delta$$ and of defining equations $$\iEq$$ can both be taken finite. And we say that a logic is Blok-Pigozzi algebraizable (BP-algebraizable) if it is finitary and finitely algebraizable. If $$\bL$$ is finitary and finitely algebraizable, then $$\bAlg^*\bL$$ is not only an SP-class, but a quasivariety, and it is the quasivariety generated by any class of algebras $$\bK$$ which is an equivalent algebraic semantics for $$\bL$$. We have just seen that in algebraizable logics the class of algebras $$\bAlg^*\bL$$ plays a prominent role.
Moreover, in these logics the classes of algebras obtained by the two ways of generalizing the Lindenbaum-Tarski method coincide, that is, $$\bAlg^*\bL = \bAlg\bL$$; this is due to the fact that for any algebraizable logic $$\bL$$, $$\bAlg^*\bL$$ is closed under subdirect products. Hence, for every algebraizable logic $$\bL$$ its algebraic counterpart $$\bAlg\bL$$ is its greatest equivalent algebraic semantics, whatever perspective is taken on the generalization of the Lindenbaum-Tarski method. Conditions (1) and (2) of the definition of algebraizable logic (instantiated to $$\bAlg^*\bL$$) encode the fact that there is a very strong link between an algebraizable logic $$\bL$$ and its class of algebras $$\bAlg\bL$$, so that this class of algebras reflects the metalogical properties of $$\bL$$ in algebraic properties of $$\bAlg\bL$$ and conversely. The definition of algebraizable logic can be stated, equivalently, in terms of translations between the logic and an equational consequence relation $$\vDash_{\bK}$$ associated with any equivalent algebraic semantics $$\bK$$ for it, which turns out to be the same relation no matter what equivalent algebraic semantics we choose. The equational consequence $$\vDash_{\bK}$$ of a class of algebras $$\bK$$ is defined as follows. $$\{\phi_i \approx \psi_i: i \in I\} \vDash_{\bK } \phi \approx \psi \txtiff$$ for every $$\bA \in \bK$$ and every valuation $$v$$ on $$\bA$$, if $$\bv(\phi_i) = \bv(\psi_i)$$, for all $$i \in I$$, then $$\bv(\phi) = \bv(\psi)$$. The translations needed are given by the set of defining equations and the set of equivalence formulas.
A set of equations $$\iEq(p)$$ in one variable defines a translation from formulas to sets of equations: each formula $$\phi$$ is translated into the set of equations $$\iEq(\phi)$$. Similarly, a set of formulas $$\Delta(p, q)$$ in two variables defines a translation from equations to sets of formulas: each equation $$\phi \approx \psi$$ is translated into the set of formulas $$\Delta(\phi , \psi)$$. Condition (1) in the definition of algebraizable logic can be reformulated as $\Gamma \vdash_{\bL } \phi\txtiff \iEq(\Gamma) \vDash_{\bK } \iEq(\phi)$ and condition (2) as $p \approx q \vDash_{\bK } \iEq(\Delta(p, q)) \textrm{ and } \iEq(\Delta(p, q)) \vDash_{\bK } p \approx q.$ These two conditions imply $\{\phi_i \approx \psi_i : i \in I \} \vDash_{\bK } \phi \approx \psi \txtiff \Delta(\{\phi_i \approx \psi_i : i \in I\}) \vdash_{\bL } \Delta(\phi , \psi),$ and consequence (1) listed above becomes $p \vdash_{\bL } \Delta(\iEq(p)) \textrm{ and } \Delta(\iEq(p)) \vdash_{\bL } p.$ Thus, an algebraizable logic $$\bL$$ is faithfully interpreted in the equational logic of its equivalent algebraic semantics (condition (1)) by means of the translation of formulas into sets of equations given by a set of defining equations, and the equational logic of its equivalent algebraic semantics is faithfully interpreted in the logic $$\bL$$ (the displayed equivalence between $$\vDash_{\bK }$$ and $$\vdash_{\bL }$$ above) by means of the translation of equations into sets of formulas given by a set of equivalence formulas. Moreover, both translations are inverses of each other, modulo logical equivalence (condition (2) together with the interderivability of $$p$$ and $$\Delta(\iEq(p)))$$. In this way we see that the link between $$\bL$$ and its greatest equivalent algebraic semantics is very strong and that the properties of $$\bL$$ should translate into properties of the associated equational consequence relation. The properties that this relation actually has depend, of course, on the properties of the class of algebras $$\bAlg\bL$$.
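For classical propositional logic these conditions can be checked mechanically on the two-element Boolean algebra, with $$\iEq(p) = \{p \approx \top\}$$ and $$\Delta(p, q) = \{p \leftrightarrow q\}$$. Since that algebra generates the quasivariety of Boolean algebras, verifying the relevant quasi-equations there suffices; the sketch below (our own encoding) does so:

```python
from itertools import product

# Algebraizability data for classical logic: Eq(p) = {p ≈ ⊤} and
# Δ(p, q) = {p <-> q}, checked on the two-element Boolean algebra.
TOP = 1
D = {1}                                   # designated elements of <2, {1}>

def iff(a, b):                            # interpretation of <->
    return int(a == b)

# Condition (2): the equations Eq(Δ(p, q)), i.e., (p <-> q) ≈ ⊤,
# define the identity relation on the algebra.
assert all((iff(a, b) == TOP) == (a == b)
           for a, b in product((0, 1), repeat=2))

# The inverse on the formula side: p is interderivable with
# Δ(Eq(p)) = {p <-> ⊤}; semantically, a is designated iff a <-> ⊤ is.
assert all((a in D) == (iff(a, TOP) in D) for a in (0, 1))

print('algebraizability checks pass on the two-element Boolean algebra')
```

The same two checks fail for a logic that is merely equipped with an algebraic semantics but lacks a suitable set of equivalence formulas, which is exactly the gap the definition of algebraizability closes.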
Given an algebraic semantics $$(\bK, \iEq)$$ for a logic $$\bL$$, a way to stress the difference between it being merely an algebraic semantics and being an algebraic semantics that makes $$\bL$$ algebraizable is that in the latter case the translation of formulas into equations given by the set of equations $$\iEq$$ is invertible, in the sense that there is a translation, say $$\Delta$$, of equations into formulas given by a set of formulas in two variables that faithfully interprets the equational consequence of $$\bK$$ in $$\bL$$, and such that $$\iEq$$ and $$\Delta$$ provide mutually inverse translations (i.e., condition (2) and the interderivability of $$p$$ and $$\Delta(\iEq(p))$$ hold). The link between an algebraizable logic $$\bL$$ and its greatest equivalent algebraic semantics given by the set of defining equations and the set of equivalence formulas allows us to prove a series of general theorems that relate properties of $$\bL$$ to properties of $$\bAlg\bL$$. These kinds of theorems are frequently called bridge theorems. We will mention three of them as a sample. The first concerns the deduction theorem. To prove a general theorem relating the existence of a deduction theorem to an algebraic property, one first has to define a concept of deduction theorem applicable to any logic. A logic $$\bL$$ has the deduction-detachment property if there is a finite set of formulas $$\Sigma(p, q)$$ such that for every set of formulas $$\Gamma$$ and all formulas $$\phi , \psi$$ $\Gamma \cup \{\phi \} \vdash_{\bL } \psi\txtiff\Gamma \vdash_{\bL } \Sigma(\phi , \psi).$ Note that this is a generalization of the standard deduction theorem (the direction from left to right in the above expression) and Modus Ponens (equivalent to the implication from right to left) that several logics have for a connective $$\rightarrow$$. In those cases $$\Sigma(p, q) = \{p \rightarrow q\}$$. Theorem 1.
A finitary and finitely algebraizable logic $$\bL$$ has the deduction-detachment property if and only if the principal relative congruences of the algebras in $$\bAlg\bL$$ are equationally definable. The second theorem refers to Craig interpolation. Several notions of interpolation are applicable to arbitrary logics. We consider only one of them. A logic $$\bL$$ has the Craig interpolation property for the consequence relation if whenever $$\Gamma \vdash_{\bL } \phi$$ and the set of variables of $$\phi$$ has nonempty intersection with the set of variables of the formulas in $$\Gamma$$, there is a finite set of formulas $$\Gamma '$$ whose set of variables is included in the set of variables shared by $$\phi$$ and the formulas in $$\Gamma$$ such that $$\Gamma \vdash_{\bL } \Gamma '$$ and $$\Gamma ' \vdash_{\bL } \phi$$. Theorem 2. Let $$\bL$$ be a finitary and finitely algebraizable logic with the deduction-detachment property. Then $$\bL$$ has the Craig interpolation property if and only if $$\bAlg\bL$$ has the amalgamation property. Finally, the third theorem concerns the Beth definability property. The interested reader can find the definition in Font, Jansana & Pigozzi 2003; in the general setting we are in, the property is too involved to state here. Theorem 3. A finitary and finitely algebraizable logic has the Beth property if and only if all the epimorphisms of the category whose objects are the algebras in $$\bAlg\bL$$ and whose morphisms are the algebraic homomorphisms are surjective homomorphisms. Other results relating properties of an algebraizable logic with a property of its natural class of algebras can be found in Raftery 2011, 2013. They concern, respectively, a generalization of the deduction-detachment property and a property that generalizes the inconsistency lemmas of classical and intuitionistic logic.
Also an abstract notion of having a theorem like Glivenko’s theorem relating classical and intuitionistic logic has been proposed and related to an algebraic property in the case of algebraizable logics in Torrens 2008. More recently Raftery 2016 presents bridge theorems related to admissible rules and to structural completeness and Lávička et al. 2021 studies bridge theorems for the property of the weak excluded middle. For several classes of algebras that are the equivalent algebraic semantics of some algebraizable logic it has been known for a long time that for every algebra in the class there is an isomorphism between the lattice of congruences of the algebra and a lattice of subsets of the algebra with important algebraic meaning. For example, in Boolean algebras and Heyting algebras these subsets are the lattice filters and in modal algebras they are the lattice filters that are closed under the operation that interprets $$\Box$$. In all those cases, the sets are exactly the $$\bL$$-filters of the corresponding algebraizable logic $$\bL$$. Algebraizable logics can be characterized by the existence of this kind of isomorphism between congruences and logic filters on the algebras of their algebraic counterpart. To spell out this characterization we need a couple of definitions. Let $$\bL$$ be a logic. The Leibniz operator on an algebra $$\bA$$ (relative to $$\bL)$$ is the map from the $$\bL$$-filters of $$\bA$$ to the set of congruences of $$\bA$$ that sends every $$\bL$$-filter $$D$$ of $$\bA$$ to its Leibniz congruence $$\bOmega_{\bA }(D)$$. We say that the Leibniz operator of a logic $$\bL$$ commutes with the inverses of homomorphisms between algebras in a class $$\bK$$ if for every homomorphism $$h$$ from an algebra $$\bA \in \bK$$ to an algebra $$\bB \in \bK$$ and every $$\bL$$-filter $$D$$ of $$\bB, h^{-1}[\bOmega_{\bB }(D)] = \bOmega_{\bA }(h^{-1}[D]$$). Theorem 4. 
A logic $$\bL$$ is algebraizable if and only if for every algebra $$\bA \in \bAlg\bL$$ the Leibniz operator commutes with the inverses of homomorphisms between algebras in $$\bAlg\bL$$ and is an isomorphism between the set of all $$\bL$$-filters of $$\bA$$, ordered by inclusion, and the set of congruences $$\theta$$ of $$\bA$$ such that $$\bA/\theta \in \bAlg\bL$$, ordered also by inclusion. The theorem provides a logical explanation of the known isomorphisms mentioned above and similar ones for other classes of algebras. For example, the isomorphism between the congruences and the normal subgroups of a group can be explained by the existence of an algebraizable logic $$\bL$$ whose greatest equivalent algebraic semantics is the class of groups and whose $$\bL$$-filters on any group are exactly its normal subgroups. A different but related characterization of algebraizable logics is this: Theorem 5. A logic $$\bL$$ is algebraizable if and only if on the algebra of formulas $$\bFm_L$$, the map that sends every theory $$T$$ to its Leibniz congruence commutes with the inverses of homomorphisms from $$\bFm_L$$ to $$\bFm_L$$ and is an isomorphism between the set $$\tTH(\bL)$$ of theories of $$\bL$$, ordered by inclusion, and the set of congruences $$\theta$$ of $$\bFm_L$$ such that $$\bFm_L /\theta \in \bAlg\bL$$, also ordered by inclusion. ## 10. A classification of logics Unfortunately, not every logic is algebraizable. A typical example of a non-algebraizable logic is the local consequence of the normal modal logic $$K$$. Let us discuss this example. The local modal logic $$\blK$$ and the corresponding global one $$\bgK$$ are not only different, but their metalogical properties differ. For example, $$\blK$$ has the deduction-detachment property for $$\rightarrow$$: $\Gamma \cup \{\phi \} \vdash_{\blK } \psi\txtiff \Gamma \vdash_{\blK } \phi \rightarrow \psi.$ But $$\bgK$$ does not have the deduction-detachment property (at all).
The logic $$\bgK$$ is algebraizable and $$\blK$$ is not. The equivalent algebraic semantics of $$\bgK$$ is the variety $$\bMA$$ of modal algebras, the set of equivalence formulas is the set $$\{p \leftrightarrow q\}$$ and the set of defining equations is $$\{p \approx \top \}$$. Interestingly, $$\blK$$ and $$\bgK$$ have the same algebraic counterpart (i.e., $$\bAlg \blK = \bAlg \bgK)$$, namely, the variety of modal algebras. A lesson to draw from this example is that the algebraic counterpart $$\bAlg\bL$$ of a logic $$\bL$$ does not necessarily fully encode the properties of $$\bL$$. The class of modal algebras encodes the properties of $$\bgK$$ because this logic is algebraizable and therefore the link between $$\bgK$$ and $$\bAlg \bgK$$ is as strong as possible. But $$\bAlg \blK$$, the class of modal algebras, cannot by itself completely encode the properties of $$\blK$$. What causes this difference between $$\bgK$$ and $$\blK$$ is that the class of reduced matrix models of $$\bgK$$ is $\{\langle \bA, \{1^{\bA }\}\rangle : \bA \in \bMA\},$ but the class of reduced matrix models of $$\blK$$ properly includes this class, so that for some algebras $$\bA \in \bMA$$, in addition to $$\{1^{\bA }\}$$ there is some other $$\blK$$-filter $$F$$ with $$\langle \bA, F \rangle$$ reduced. This fact provides a way to show that $$\blK$$ cannot be algebraizable, by showing that the $$\blK$$-filters of the reduced matrices are not equationally definable from the algebras; if they were, then for every $$\bA \in \bAlg \blK$$ there would exist exactly one $$\blK$$-filter $$F$$ of $$\bA$$ such that $$\langle \bA, F \rangle$$ is reduced. Nonetheless, we can perform some of the steps of the Lindenbaum-Tarski method in the logic $$\blK$$. We can define the Leibniz congruence of every $$\blK$$-theory in a uniform way by using formulas in two variables. But in this particular case the set of formulas has to be infinite.
Let $$\Delta(p, q) = \{\Box^n (p \leftrightarrow q): n$$ a natural number$$\}$$, where for every formula $$\phi , \Box^0\phi$$ is $$\phi$$ and $$\Box^n\phi$$ for $$n \gt 0$$ is the formula $$\phi$$ with a sequence of $$n$$ boxes in front $$(\Box \ldots \Box \phi)$$. Then, for every $$\blK$$-theory $$T$$ the relation $$\theta(T)$$ defined by $\langle \phi , \psi \rangle \in \theta(T)\txtiff \{\Box^n (\phi \leftrightarrow \psi): n \textrm{ a natural number}\} \subseteq T$ is the Leibniz congruence of $$T$$. In this case, though, there are two different $$\blK$$-theories with the same Leibniz congruence, something that does not hold for $$\bgK$$. The logics $$\bL$$ with the property that there is a set of formulas (possibly infinite) $$\Delta(p, q)$$ in two variables that defines in every $$\bL$$-theory $$T$$ its Leibniz congruence, that is, such that for all $$L$$-formulas $$\phi , \psi$$ $\langle \phi , \psi \rangle \in \bOmega_{\bFm }(T)\txtiff \Delta(\phi , \psi) \subseteq T,$ are known as the equivalential logics. If $$\Delta(p, q)$$ is finite, the logic is said to be finitely equivalential. A set $$\Delta(p, q)$$ that defines in every $$\bL$$-theory its Leibniz congruence is called a set of equivalence formulas for $$\bL$$. It is clear that every algebraizable logic is equivalential and that every finitely algebraizable logic is finitely equivalential. The logic $$\blK$$ is, according to the definition, equivalential, and it can be shown that it is not finitely equivalential. The local modal logic lS4 is an example of a non-algebraizable logic that is finitely equivalential. A set of equivalence formulas for lS4 is $$\{\Box(p\leftrightarrow q)\}$$.
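The way a set of equivalence formulas defines Leibniz congruences can be made concrete on a finite algebra, where the Leibniz congruence of a filter is the largest congruence compatible with it and can be found by brute force. The following Python sketch is only an illustration under assumed encodings (the four-element Boolean algebra is coded by bitmasks over two atoms, an encoding of ours, not part of the theory); it checks that the congruence found by brute force coincides with the one defined by the classical equivalence formula $$p \leftrightarrow q$$.

```python
from itertools import product

# Elements of the four-element Boolean algebra, encoded as bitmasks over two atoms.
ELEMS = [0b00, 0b01, 0b10, 0b11]
TOP = 0b11

def meet(x, y): return x & y
def join(x, y): return x | y
def neg(x):     return x ^ TOP
def iff(x, y):  return neg(x ^ y)        # x <-> y  is  not (x xor y)

# A lattice filter: the upset of 0b01, i.e. {0b01, 0b11}.
D = {e for e in ELEMS if (e & 0b01) == 0b01}

def partitions(xs):
    """All partitions of the list xs."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i+1:]
        yield [[first]] + p

def pairs(part):
    return {(x, y) for block in part for x in block for y in block}

def is_congruence(part):
    """Check compatibility of the induced relation with all operations."""
    rel = pairs(part)
    for (x1, y1), (x2, y2) in product(rel, rel):
        for op in (meet, join):
            if (op(x1, x2), op(y1, y2)) not in rel:
                return False
    return all((neg(x), neg(y)) in rel for x, y in rel)

def compatible(part):
    """D must be a union of blocks: no block meets both D and its complement."""
    return all(set(block) <= D or not (set(block) & D) for block in part)

# The Leibniz congruence of D: the largest congruence compatible with D.
leibniz = max((pairs(p) for p in partitions(ELEMS)
               if is_congruence(p) and compatible(p)), key=len)

# In Boolean algebras it is definable: x related to y  iff  (x <-> y) in D.
defined = {(x, y) for x in ELEMS for y in ELEMS if iff(x, y) in D}
assert leibniz == defined
```

Replacing the upset of `0b01` by any other lattice filter gives the same agreement, which is an instance of the fact that $$\{p \leftrightarrow q\}$$ is a set of equivalence formulas for classical logic.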
A set of equivalence formulas for a logic $$\bL$$ should be considered as a generalized biconditional, in the sense that collectively the formulas in the set have the relevant properties of the biconditional (in classical logic, for example) that make it suitable for defining the Leibniz congruences of the theories. This comes out very clearly from the following syntactic characterization of the sets of equivalence formulas. Theorem 6. A set $$\Delta(p, q)$$ of $$L$$-formulas is a set of equivalence formulas for a logic $$\bL$$ if and only if $$(\tR_{\Delta})$$ $$\vdash_{\bL } \Delta(p, p)$$ $$(\tMP_{\Delta})$$ $$p, \Delta(p, q) \vdash_{\bL } q$$ $$(\tS_{\Delta})$$ $$\Delta(p, q) \vdash_{\bL } \Delta(q, p)$$ $$(\tT_{\Delta})$$ $$\Delta(p, q) \cup \Delta(q, r) \vdash_{\bL } \Delta(p, r)$$ $$(\tRe_{\Delta})$$ $$\Delta(p_1, q_1) \cup \ldots \cup \Delta(p_n, q_n) \vdash_{\bL } \Delta(* p_1 \ldots p_n, * q_1 \ldots q_n)$$, for every connective $$*$$ of $$L$$ of arity $$n$$ greater than 0. There is some redundancy in the theorem. Conditions $$(\tS_{\Delta})$$ and $$(\tT_{\Delta})$$ follow from $$(\tR_{\Delta})$$, $$(\tMP_{\Delta})$$ and $$(\tRe_{\Delta})$$. Equivalential logics were first considered as a class of logics deserving to be studied in Prucnal & Wroński 1974, and they were studied extensively in Czelakowski 1981; see also Czelakowski 2001. We already mentioned that the algebraizable logics are equivalential. The difference between an equivalential logic and an algebraizable one can be seen in the following syntactic characterization of algebraizable logics: Theorem 7.
A logic $$\bL$$ is algebraizable if and only if there exists a set $$\Delta(p, q)$$ of $$L$$-formulas and a set $$\iEq(p)$$ of $$L$$-equations such that the conditions $$(\tR_{\Delta})$$–$$(\tRe_{\Delta})$$ above hold for $$\Delta(p, q)$$ and $p \vdash_{\bL } \Delta(\iEq(p)) \textrm{ and } \Delta(\iEq(p)) \vdash_{\bL } p.$ The set $$\Delta(p, q)$$ in the theorem is then a set of equivalence formulas for $$\bL$$ and the set $$\iEq(p)$$ a set of defining equations. There are logics that are not equivalential but have a set of formulas $$[p \Rightarrow q]$$ whose members collectively behave, in a very weak sense, as the implication $$\rightarrow$$ does in many logics; namely, a set with the properties $$(\tR_{\Delta})$$ and $$(\tMP_{\Delta})$$ of the syntactic characterization of a set of equivalence formulas, i.e., $$(\tR_{\Rightarrow})$$ $$\vdash_{\bL } [p \Rightarrow p]$$ $$(\tMP_{\Rightarrow})$$ $$p, [p \Rightarrow q] \vdash_{\bL } q$$ If a logic is finitary and has a set of formulas with these properties, there is always a finite subset with the same properties. The logics with a set of formulas (finite or not) with properties $$(\tR_{\Rightarrow})$$ and $$(\tMP_{\Rightarrow})$$ above are called protoalgebraic. Thus, every equivalential logic and every algebraizable logic are protoalgebraic. Protoalgebraic logics were first studied by Czelakowski, who called them non-pathological, and slightly later by Blok and Pigozzi in Blok & Pigozzi 1986. The label ‘protoalgebraic logic’ is due to these last two authors. The class of protoalgebraic logics turned out to be the class of logics for which the theory of logical matrices works really well, in the sense that many results of universal algebra have counterparts for the classes of reduced matrix models of these logics and many methods of universal algebra can be adapted to their study; consequently, the algebraic study of protoalgebraic logics using their matrix semantics has been extensively and very fruitfully pursued.
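For classical logic with $$\Delta(p,q) = \{p \leftrightarrow q\}$$, the conditions $$(\tR_{\Delta})$$–$$(\tRe_{\Delta})$$ of Theorem 6, and with them the two protoalgebraicity conditions, can be checked mechanically by truth tables. A minimal Python sketch (the encoding of formulas as Boolean-valued functions is an implementation choice of ours, not part of the theory); the connective tested in $$(\tRe_{\Delta})$$ is binary conjunction:

```python
from itertools import product

# Classical entailment over finitely many variables, checked by truth tables.
def entails(premises, conclusion, nvars):
    return all(conclusion(*v)
               for v in product([False, True], repeat=nvars)
               if all(p(*v) for p in premises))

iff = lambda p, q: p == q        # the single equivalence formula p <-> q

# (R)  |- p <-> p
assert entails([], lambda p: iff(p, p), 1)
# (MP) p, p <-> q |- q
assert entails([lambda p, q: p, lambda p, q: iff(p, q)], lambda p, q: q, 2)
# (S)  p <-> q |- q <-> p
assert entails([lambda p, q: iff(p, q)], lambda p, q: iff(q, p), 2)
# (T)  p <-> q, q <-> r |- p <-> r
assert entails([lambda p, q, r: iff(p, q), lambda p, q, r: iff(q, r)],
               lambda p, q, r: iff(p, r), 3)
# (Re) p1 <-> q1, p2 <-> q2 |- (p1 & p2) <-> (q1 & q2), for conjunction
assert entails([lambda p1, q1, p2, q2: iff(p1, q1),
                lambda p1, q1, p2, q2: iff(p2, q2)],
               lambda p1, q1, p2, q2: iff(p1 and p2, q1 and q2), 4)
```

The same check for the remaining classical connectives is analogous; of course, a finite check of instances is a verification for this particular logic, not a proof of the general theorem.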
But, as we will see, some interesting logics are not protoalgebraic. An important characterization of protoalgebraic logics is via the behavior of the Leibniz operator. The following conditions are equivalent: 1. $$\bL$$ is protoalgebraic. 2. The Leibniz operator $$\bOmega_{\bFm_L}$$ is monotone on the set of $$\bL$$-theories with respect to the inclusion relation, that is, if $$T \subseteq T'$$ are $$\bL$$-theories, then $$\bOmega_{\bFm_L }(T) \subseteq \bOmega_{\bFm_L }(T')$$. 3. For every algebra $$\bA$$, the Leibniz operator $$\bOmega_{\bA}$$ is monotone on the set of $$\bL$$-filters of $$\bA$$ with respect to the inclusion relation. Due to the monotonicity property of the Leibniz operator, for every protoalgebraic logic $$\bL$$ the class of algebras $$\bAlg^*\bL$$ is closed under subdirect products and therefore it is equal to $$\bAlg\bL$$. Hence, for protoalgebraic logics the two ways we encountered to associate a class of algebras with a logic produce, as we already mentioned, the same result. There are also characterizations of equivalential and finitely equivalential logics by the behavior of the Leibniz operator. The reader is referred to Czelakowski 2001 and Font, Jansana & Pigozzi 2003. In Raftery 2006b, Raftery studies Condition 7 in the list of properties of an algebraizable logic we gave just after the definition. The condition says that the class of reduced matrix models of $$\bL$$ is $$\{\langle \bA, \iEq(\bA) \rangle : \bA \in \bAlg^*\bL\}$$, where $$\iEq(p)$$ is the set of defining equations for $$\bL$$. The logics with a set of equations $$\iEq(p)$$ with this property are called truth-equational, a name introduced in Raftery 2006b. Some truth-equational logics are protoalgebraic but others are not; we will see an example of the latter later on.
The protoalgebraic logics that are truth-equational are in fact the weakly algebraizable logics, studied already in Czelakowski & Jansana 2000. Every algebraizable logic is weakly algebraizable. In fact, the algebraizable logics are the equivalential logics that are truth-equational. But not every weakly algebraizable logic is equivalential. An example is the logic determined by the ortholattices, namely by the class of the matrices $$\langle \bA, \{1\} \rangle$$ where $$\bA$$ is an ortholattice and 1 is its greatest element (see Czelakowski & Jansana 2000 and Malinowski 1990). The classes of logics we have considered so far are the main classes in what has come to be known as the Leibniz hierarchy, because its members are classes of logics that can be characterized by the behavior of the Leibniz operator. We described only the most important classes of logics in the hierarchy. The reader is referred to Czelakowski 2001, Font, Jansana & Pigozzi 2003, Font 2016 and 2022, for more information. In particular, Czelakowski 2001 extensively gathers the information on the different classes of the Leibniz hierarchy known at the time of its publication, and Font 2016 is an introduction to abstract algebraic logic well suited for learning the most important facts about the Leibniz hierarchy and about abstract algebraic logic in general. The relations between the classes of the Leibniz hierarchy considered in this entry are summarized in the following diagram: Recently, the Leibniz hierarchy has been refined in Cintula & Noguera 2010, 2016. The idea is to consider, instead of a set of equivalence formulas $$\Delta$$ (which corresponds to the biconditional), a set of formulas $$[p\Rightarrow q]$$ that has several properties of the usual conditional $$(\rightarrow)$$. Among these properties we have $$(\tR_{\Rightarrow})$$ and $$(\tMP_{\Rightarrow})$$ in the definition of protoalgebraic logic.
The set $$[p\Rightarrow q]$$ should be such that its symmetrization $$[p\Rightarrow q] \cup [q\Rightarrow p]$$ is a set of equivalence formulas. New classes arise when the set $$[p\Rightarrow q]$$ has a single element. Extensive information can be found in the recent book Cintula & Noguera 2021. This book can also be taken as an introduction to abstract algebraic logic written from the perspective of the implication. ## 11. Replacement principles Two classes of logics that are not classes of the Leibniz hierarchy have been extensively studied in abstract algebraic logic. They are defined from a completely different perspective from the one provided by the behavior of the Leibniz operator, namely from the perspective given by the replacement principles a logic might enjoy. The strongest replacement principle that a logic system $$\bL$$ might have, shared for example by classical logic, intuitionistic logic and all its axiomatic extensions, says that for any set of formulas $$\Gamma$$, any formulas $$\phi , \psi , \delta$$ and any variable $$p$$ if $$\Gamma , \phi \vdash_{\bL } \psi$$ and $$\Gamma , \psi \vdash_{\bL } \phi$$, then $$\Gamma , \delta(p/\phi) \vdash_{\bL } \delta(p/\psi)$$ and $$\Gamma , \delta(p/\psi) \vdash_{\bL } \delta(p/\phi)$$, where $$\delta(p/\phi)$$ and $$\delta(p/\psi)$$ are the formulas obtained by substituting respectively $$\phi$$ and $$\psi$$ for $$p$$ in $$\delta$$. This replacement property is taken by some authors as the formal counterpart of Frege’s principle of compositionality for truth. Logics satisfying this strong replacement property are called Fregean in Font & Jansana 1996 and are thoroughly studied in Czelakowski & Pigozzi 2004a, 2004b.
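Particular instances of the strong replacement principle can be tested by truth tables; in classical logic every such instance must hold. A small Python sketch, where the tuple encoding of formulas and the particular choices of $$\Gamma$$, $$\phi$$, $$\psi$$ and $$\delta$$ are ours, made only for illustration:

```python
from itertools import product

# Formulas over the variables 'p', 'q', 'r' as nested tuples.
def ev(phi, w):
    op = phi[0]
    if op == 'var': return w[phi[1]]
    if op == 'not': return not ev(phi[1], w)
    if op == 'and': return ev(phi[1], w) and ev(phi[2], w)
    if op == 'or':  return ev(phi[1], w) or ev(phi[2], w)

def subst(delta, phi):
    """Replace the variable 'p' in delta by the formula phi."""
    if delta[0] == 'var':
        return phi if delta[1] == 'p' else delta
    return (delta[0], *[subst(d, phi) for d in delta[1:]])

def entails(gamma, phi):
    vs = ['p', 'q', 'r']
    return all(ev(phi, dict(zip(vs, w)))
               for w in product([False, True], repeat=3)
               if all(ev(g, dict(zip(vs, w))) for g in gamma))

p, q, r = ('var', 'p'), ('var', 'q'), ('var', 'r')
gamma = [('or', q, r)]
phi = ('and', p, q)                                      # p and q
psi = ('not', ('or', ('not', p), ('not', q)))            # not(not p or not q)
delta = ('or', p, r)

# phi and psi are interderivable over gamma ...
assert entails(gamma + [phi], psi) and entails(gamma + [psi], phi)
# ... so substituting them into delta preserves interderivability over gamma.
assert entails(gamma + [subst(delta, phi)], subst(delta, psi))
assert entails(gamma + [subst(delta, psi)], subst(delta, phi))
```

Again, checking instances only verifies the principle for the chosen formulas; the general principle quantifies over all $$\Gamma$$, $$\phi$$, $$\psi$$ and $$\delta$$.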
Many important logics do not satisfy the strong replacement property, for instance almost all the logics (local or global) of the modal family, but some, like the local consequence relation of a normal modal logic, satisfy a weaker replacement principle: for all formulas $$\phi , \psi , \delta$$, if $$\phi \vdash_{\bL }\psi$$ and $$\psi \vdash_{\bL }\phi$$, then $$\delta(p/\phi) \vdash_{\bL } \delta(p/\psi)$$ and $$\delta(p/\psi) \vdash_{\bL } \delta(p/\phi)$$. A logic satisfying this weaker replacement property is called selfextensional by Wójcicki (e.g., in Wójcicki 1969, 1988) and congruential in Humberstone 2005. We will use the first terminology because it seems more common, at least in the abstract algebraic logic literature. It has to be mentioned that all fragments of a selfextensional logic are selfextensional, and that the analogous fact also holds for Fregean logics. Moreover, the difference between being selfextensional and being Fregean is not only encountered among protoalgebraic logics, like the local consequence relations of normal modal logics just mentioned; it is also encountered among non-protoalgebraic logics. The four-valued logic of Belnap and Dunn (see Font 1997 for information) is selfextensional, non-protoalgebraic, and non-Fregean. Selfextensional logics are very well behaved from several points of view. Their systematic study started in Wójcicki 1969 and has been continued in the context of abstract algebraic logic in Font & Jansana 1996; Jansana 2005, 2006; and Jansana & Palmigiano 2006. There are selfextensional and non-selfextensional logics in any one of the classes of the Leibniz hierarchy and also in the class of non-protoalgebraic logics. These facts show that the perspective that leads to the consideration of the classes in the Leibniz hierarchy and the perspective that leads to the definition of the selfextensional and the Fregean logics as classes of logics worthy of study as a whole are to a large extent different.
Nonetheless, one of the trends of today’s research in abstract algebraic logic is to determine the interplay between the two perspectives and study the classes of logics that arise when crossing both classifications. In fact, there is a connection between the replacement principles and the Suszko congruence (and thus with the Leibniz congruence). A logic $$\bL$$ satisfies the strong replacement principle if and only if for every $$\bL$$-theory $$T$$ its Suszko congruence is the interderivability relation relative to $$T$$, namely the relation $$\{\langle \phi , \psi \rangle : T, \phi \vdash_{\bL } \psi$$ and $$T, \psi \vdash_{\bL } \phi \}$$. And a logic $$\bL$$ satisfies the weak replacement principle if and only if the Suszko congruence of the set of theorems of $$\bL$$ is the interderivability relation $$\{\langle \phi , \psi \rangle : \phi \vdash_{\bL } \psi$$ and $$\psi \vdash_{\bL } \phi \}$$. The study of logic systems from the perspective of the replacement principles leads to the so-called Frege hierarchy, which we expound in Section 14. ## 12. Beyond protoalgebraic logics Not all interesting logics are protoalgebraic. In this section we will briefly discuss four examples of non-protoalgebraic logics: the logic of conjunction and disjunction, positive modal logic, the strict implication fragment of $$\blK$$ and Visser’s subintuitionistic logic. All of them are selfextensional. In the next section, we will expound the semantics of abstract logics and generalized matrices, which serves to develop a truly general theory of the algebraization of logic systems. As we will see, the perspective changes in an important respect from the perspective taken in the model theory of logical matrices. ### 12.1 The logic of conjunction and disjunction This logic is the $$\{\wedge , \vee , \bot , \top \}$$-fragment of Classical Propositional Logic.
Hence its language is the set $$\{\wedge , \vee , \top , \bot \}$$ and its consequence relation is given by $\Gamma \vdash \phi\txtiff\Gamma \vdash_{\bCPL} \phi.$ It turns out that it is also the $$\{\wedge , \vee , \bot , \top \}$$-fragment of Intuitionistic Propositional Logic. Let us denote it by $$\bL^{\{\wedge , \vee \}}$$. The logic $$\bL^{ \{\wedge , \vee \}}$$ is not protoalgebraic but it is Fregean. The class of algebras $$\bAlg\bL^{\{\wedge , \vee \}}$$ is the variety of bounded distributive lattices, which is the class of algebras naturally expected to be associated with $$\bL^{ \{\wedge , \vee \}}$$, but the class $$\bAlg^*\bL^{ \{\wedge , \vee \}}$$ is strictly included in it. In fact, this last class of algebras is not a quasivariety, but it is still good enough to be first-order definable. The logic $$\bL^{\{\wedge , \vee \}}$$ is thus a natural example of a logic where the class of algebras of its reduced matrix models is not the right class of algebras expected to correspond to it (see Font & Verdú 1991, where the logic is studied at length). The properties of this example and its treatment in Font & Verdú 1991 motivated the systematic study in Font & Jansana 1996 of the kind of models for sentential logics considered in Brown & Suszko 1973, namely, abstract logics. ### 12.2 Positive Modal Logic Positive Modal Logic is the $$\{\wedge , \vee , \Box , \Diamond , \bot , \top \}$$-fragment of the local normal modal logic $$\blK$$. We denote it by $$\bPML$$. This logic is of some interest in computer science. The logic $$\bPML$$ is not protoalgebraic, it is not truth-equational, it is selfextensional and it is not Fregean. Its algebraic counterpart $$\bAlg \bPML$$ is the class of positive modal algebras introduced by Dunn in Dunn 1995. The logic is studied in Jansana 2002 from the perspective of abstract algebraic logic. The class of algebras $$\bAlg\bPML$$ is different from $$\bAlg^*\bPML$$.
### 12.3 Visser’s subintuitionistic logic This logic is the logic, in the language of intuitionistic logic, that stands to the least normal modal logic $$K$$ in the same relation in which intuitionistic logic stands to the normal modal logic $$S4$$. It was introduced in Visser 1981 (under the name Basic Propositional Logic) and has been studied by several authors, such as Ardeshir, Alizadeh, and Ruitenburg. It is not protoalgebraic, it is truth-equational and it is Fregean (hence also selfextensional). ### 12.4 The strict implication fragment of the local modal logic lK The strict implication of the language of modal logic is defined using the $$\Box$$ operator and the material implication $$\rightarrow$$. We will use $$\Rightarrow$$ for the strict implication. Its definition is $$\phi \Rightarrow \psi := \Box(\phi \rightarrow \psi)$$. The language of the logic $$\bSilK$$, which we call the strict implication fragment of the local modal logic $$\blK$$, is the language $$L = \{\wedge , \vee , \bot , \top , \Rightarrow \}$$. We can translate the formulas of $$L$$ to formulas of the modal language by systematically replacing in an $$L$$-formula $$\phi$$ every subformula of the form $$\psi \Rightarrow \delta$$ by $$\Box(\psi \rightarrow \delta)$$ and repeating the process until no occurrence of $$\Rightarrow$$ remains. Let us denote by $$\phi^*$$ the translation of $$\phi$$ and by $$\Gamma^*$$ the set of the translations of the formulas in $$\Gamma$$. Then the definition of the consequence relation of $$\bSilK$$ is: $\Gamma \vdash_{\bSilK } \phi\txtiff\Gamma^* \vdash_{\blK } \phi^*.$ The logic $$\bSilK$$ is not protoalgebraic and is not truth-equational. It is selfextensional but it is not Fregean. Its algebraic counterpart $$\bAlg \bSilK$$ is the class of bounded distributive lattices with a binary operation with the properties of the strict implication of $$\blK$$.
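The translation $$\phi \mapsto \phi^*$$ is a straightforward recursive rewriting, and translating subformulas first makes a single pass suffice. A minimal Python sketch, under an assumed tuple encoding of formulas (the encoding is ours, not part of the theory):

```python
# Formulas as nested tuples: ('var','p'), ('and', A, B), ('or', A, B),
# ('imp', A, B), ('box', A), and ('simp', A, B) for the strict implication.
def translate(phi):
    """Rewrite every strict implication  A => B  as  Box(A -> B)."""
    op = phi[0]
    if op == 'var':
        return phi
    if op == 'simp':
        return ('box', ('imp', translate(phi[1]), translate(phi[2])))
    return (op, *[translate(sub) for sub in phi[1:]])

# (p => q) => p  becomes  Box(Box(p -> q) -> p)
phi = ('simp', ('simp', ('var', 'p'), ('var', 'q')), ('var', 'p'))
assert translate(phi) == \
    ('box', ('imp', ('box', ('imp', ('var', 'p'), ('var', 'q'))), ('var', 'p')))
```

Since $$\Rightarrow$$ translates to a single modal compound, it behaves on algebras as one binary operation, which is reflected in the algebraic counterpart: bounded distributive lattices expanded with such an operation.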
The class $$\bAlg \bSilK$$ is introduced and studied in Celani & Jansana 2005, where its members are called weakly Heyting algebras. $$\bAlg \bSilK$$ does not coincide with $$\bAlg^* \bSilK$$. The logic $$\bSilK$$ belongs, like Visser’s logic, to the family of so-called subintuitionistic logics. A reference for information on these logics is Celani & Jansana 2003. ## 13. Abstract logics and generalized matrices The logical matrix models of a given logic can be thought of as algebraic generalizations of its theories, more precisely, of its Lindenbaum matrices. They come from taking a local perspective centered around the theories of the logic considered one by one and their analogs, the logic filters (also taken one by one). But, as we will see, the properties of a logic depend in general on the global behavior of the set of its theories taken together as a whole, or, to put it otherwise, on its consequence relation. The consideration of this global behavior introduces a global perspective on the design of semantics for logic systems. The abstract logics that we are going to define can be seen, in contrast to logical matrices, as algebraic generalizations of the logic itself and its extensions. They are the natural objects to consider when one takes the global perspective seriously. Let $$L$$ be a propositional language. An $$L$$-abstract logic is a pair $$\cA = \langle \bA, C \rangle$$ where $$\bA$$ is an $$L$$-algebra and $$C$$ an abstract consequence operation on $$A$$. Given a logic system $$\bL$$, an $$L$$-abstract logic $$\cA = \langle \bA, C \rangle$$ is a model of $$\bL$$ if for every set of formulas $$\Gamma$$ and every formula $$\phi$$ $$\Gamma \vdash_{\bL } \phi\txtiff$$ for every valuation $$\bv$$ on $$\bA$$, $$\bv(\phi) \in C(\bv[\Gamma])$$.
This definition has an equivalent in terms of the closed sets of $$C$$: an abstract logic $$\cA = \langle \bA, C \rangle$$ is a model of $$\bL$$ if and only if for every $$C$$-closed set $$X$$ the matrix $$\langle \bA, X \rangle$$ is a model of $$\bL$$ (i.e., $$X$$ is an $$\bL$$-filter). This observation leads to another point of view on abstract logics as models of a logic system. It transforms them into a collection of logical matrices (given by the closed sets) over the same algebra, or, to put it more simply, into a pair $$\langle \bA, \cB \rangle$$ where $$\cB$$ is a collection of subsets of $$A$$. A structure of this type is called in the literature a generalized matrix (Wójcicki 1973) and more recently it has been called an atlas in Dunn & Hardegree 2001. It is said to be a model of a logic system $$\bL$$ if for every $$X \in \cB, \langle \bA, X \rangle$$ is a matrix model of $$\bL$$. A logic system $$\bL = \langle L, \vdash_{\bL } \rangle$$ straightforwardly provides us with an equivalent abstract logic $$\langle \bFm_L, C_{\vdash_{ \bL} } \rangle$$ and an equivalent generalized matrix $$\langle \bFm_L,\tTH(\bL) \rangle$$, where $$\tTH(\bL)$$ is the set of $$C_{\vdash_{ \bL}}$$-closed sets of formulas (i.e., the $$\bL$$-theories). We will move freely from one to the other. The generalized matrices $$\langle \bA, \cB \rangle$$ that correspond to abstract logics have the following two properties: $$A \in \cB$$ and $$\cB$$ is closed under intersections of arbitrary nonempty families. A family $$\cB$$ of subsets of a set $$A$$ with these two properties is known as a closed-set system and also as a closure system. There is a dual correspondence between abstract consequence operations on a set $$A$$ and closed-set systems on $$A$$. 
Given an abstract consequence operation $$C$$ on $$A$$, the set $$\cC_C$$ of $$C$$-closed sets is a closed-set system, and given a closed-set system $$\cC$$ the operation $$C_{\cC}$$ defined by $$C_{\cC }(X) = \bigcap \{Y \in \cC: X \subseteq Y\}$$, for every $$X \subseteq A$$, is an abstract consequence operation. In general, every generalized matrix $$\langle \bA, \cB \rangle$$ can be turned into a closed-set system by adding to $$\cB \cup \{A\}$$ the intersections of arbitrary nonempty subfamilies, and therefore into an abstract logic, which we denote by $$\langle \bA, C_{\cB }\rangle$$. In that situation we say that $$\cB$$ is a base for $$C_{\cB}$$. It is obvious that an abstract logic can have more than one base. Any family of closed sets with the property that every closed set is an intersection of elements of the family is a base. The study of bases for the closed-set system of the theories of a logic usually plays an important role in its study. For example, in classical logic an important base for the family of its theories is the family of maximal consistent theories, and in intuitionistic logic the family of prime theories. In a similar way, the systematic study of bases for generalized matrix models of a logic becomes important. In order to make the exposition smooth we will now move from abstract logics to generalized matrices. Let $$\cA = \langle \bA, \cB \rangle$$ be a generalized matrix. There exists the greatest congruence of $$\bA$$ compatible with all the sets in $$\cB$$; it is known as the Tarski congruence of $$\cA$$.
We denote it by $$\bOmega^{\sim}_{\bA }(\cB)$$; it has the following characterization using the Leibniz operator: $\bOmega^{\sim}_{\bA }(\cB) = \bigcap_{X \in \cB} \bOmega_{\bA }(X).$ It can also be characterized by the condition: $$\langle a, b \rangle \in \bOmega^{\sim}_{\bA }(\cB)\txtiff$$ for every $$\phi(p, q_1 , \ldots ,q_n)$$, every $$c_1 , \ldots ,c_n \in A$$ and all $$X \in \cB$$ $\phi^{\bA }[a, c_1 , \ldots ,c_n] \in X \Leftrightarrow \phi^{\bA }[b, c_1 , \ldots ,c_n] \in X$ or equivalently by $$\langle a, b \rangle \in \bOmega^{\sim}_{\bA }(\cB)\txtiff$$ for every $$\phi(p, q_1 , \ldots ,q_n)$$ and every $$c_1 , \ldots ,c_n \in A, C_{\cB }(\phi^{\bA }[a, c_1 , \ldots ,c_n]) = C_{\cB }(\phi^{\bA }[b, c_1 , \ldots ,c_n])$$. A generalized matrix is reduced if its Tarski congruence is the identity. Every generalized matrix $$\langle \bA, \cB \rangle$$ can be turned into an equivalent reduced one by identifying the elements related by its Tarski congruence. The result is the quotient generalized matrix $$\langle \bA / \bOmega^{\sim}_{\bA }(\cB), \cB/\bOmega^{\sim}_{\bA }(\cB) \rangle$$, where $$\cB/\bOmega^{\sim}_{\bA }(\cB) = \{X/\bOmega^{\sim}_{\bA }(\cB): X \in \cB\}$$ and for $$X \in \cB$$, the set $$X/\bOmega^{\sim}_{\bA }(\cB)$$ is the set of the equivalence classes of the elements of $$X$$. The properties of a logic $$\bL$$ depend in general, as we already said, on the global behavior of the family of its theories. In some logics, this behavior is reflected in the behavior of its set of theorems, as in classical and intuitionistic logic due to the deduction-detachment property, but this is by no means the most general situation, as is witnessed by the example of the local and global modal logics of the normal modal logic $$K$$. The two have the same theorems but do not share the same properties. Recall that the local logic has the deduction-detachment property but the global one does not.
In a similar way, the properties of a logic are in general better encoded in an algebraic setting if we consider families of $$\bL$$-filters on the algebras than if we consider a single $$\bL$$-filter, as is done in the model theory of logical matrices. The generalized matrix models that have naturally attracted most of the attention in the research on the algebraization of logics are the generalized matrices of the form $$\langle \bA, \tFi_{\bL }\bA \rangle$$ where $$\tFi_{\bL }\bA$$ is the set of all the $$\bL$$-filters of $$\bA$$. An example of a property of logics encoded in the structure of the lattices of $$\bL$$-filters of the $$L$$-algebras is the following: a finitary protoalgebraic logic $$\bL$$ has the deduction-detachment property if and only if for every algebra $$\bA$$ the join-subsemilattice of the lattice of all $$\bL$$-filters of $$\bA$$ that consists of the finitely generated $$\bL$$-filters is dually residuated; see Czelakowski 2001. The generalized matrices of the form $$\langle \bA, \tFi_{\bL }\bA \rangle$$ are called the basic full g-models of $$\bL$$ (the letter ‘g’ stands for generalized matrix). The interest in these models led to the consideration of the class of generalized matrix models of a logic $$\bL$$ with the property that their quotient by their Tarski congruence is a basic full g-model. These generalized matrices (and their corresponding abstract logics) are called full g-models. The theory of the full g-models of an arbitrary logic is developed in Font & Jansana 1996, where the notions of full g-model and basic full g-model are introduced. We will mention some of the main results obtained there. Let $$\bL$$ be a logic system. 1. $$\bL$$ is protoalgebraic if and only if for every full g-model $$\langle \bA, \cC \rangle$$ there exists an $$\bL$$-filter $$F$$ of $$\bA$$ such that $$\cC = \{G \in \tFi_{\bL }\bA: F \subseteq G\}$$. 2.
If $$\bL$$ is finitary, $$\bL$$ is finitely algebraizable if and only if for every algebra $$\bA$$ and every $$\bL$$-filter $$F$$ of $$\bA$$, the generalized matrix $$\langle \bA, \{G \in \tFi_{\bL }\bA: F \subseteq G\} \rangle$$ is a full g-model and $$\bAlg\bL$$ is a quasivariety. 3. The class $$\bAlg\bL$$ is both the class of algebras of the reduced generalized matrix models of $$\bL$$ and the class $$\{\bA: \langle \bA, \tFi_{\bL }\bA \rangle$$ is reduced$$\}$$. 4. For every algebra $$\bA$$ there is an isomorphism between the family of closed-set systems $$\cC$$ on $$A$$ such that $$\langle\bA, \cC\rangle$$ is a full g-model of $$\bL$$ and the family of congruences $$\theta$$ of $$\bA$$ such that $$\bA/\theta \in \bAlg\bL$$. The isomorphism is given by the Tarski operator that sends a generalized matrix to its Tarski congruence. The isomorphism theorem (4) above is a generalization of the isomorphism theorems we encountered earlier for algebraizable logics. What is interesting here is that the theorem holds for every logic system. Using (2) above, theorem (4) entails the isomorphism theorem for finitary and finitely algebraizable logics. Thus theorem (4) can be seen as the most general formulation of the phenomenon that underlies the isomorphism theorems, mentioned in Section 9, between the congruences of the algebras in a certain class and certain kinds of subsets of them. The use of generalized matrices and abstract logics as models for logic systems has proved very useful for the study of selfextensional logics in general, and more particularly for the study of the selfextensional logics that are not protoalgebraic, such as the logics discussed in Section 12.
In particular, they have proved very useful for the study of the class of finitary selfextensional logics with a conjunction and the class of finitary selfextensional logics with the deduction-detachment property for a single term, say $$p \rightarrow q$$; the logics in this last class are nevertheless protoalgebraic. A logic $$\bL$$ has a conjunction if there is a formula in two variables $$\phi(p, q)$$ such that $\phi(p, q) \vdash_{\bL } p,\;\;\; \phi(p, q)\vdash_{\bL } q, \;\;\; p, q \vdash_{\bL } \phi(p, q).$ The logics in those two classes have the following property: the Tarski relation of every full g-model $$\langle \bA, C \rangle$$ is $$\{\langle a, b \rangle \in A \times A: C(a) = C(b)\}$$. Another way to put this is that for these logics the property that defines selfextensionality, namely that the interderivability condition is a congruence, lifts or transfers to every full g-model. The selfextensional logics with this property are called fully selfextensional. This notion was introduced in Font & Jansana 1996 under the name ‘strongly selfextensional’. All the natural selfextensional logics considered up to 1996 are fully selfextensional, in particular the logics discussed in Section 12, but Babyonyshev (2003) gave an ad hoc example of a selfextensional logic that is not fully selfextensional. A much more natural example, discovered later, of a selfextensional logic that is not fully selfextensional is the fragment of classical logic with only negation and the constant $$\top$$. An interesting result on the finitary logics which are fully selfextensional with a conjunction or with the deduction-detachment property for a single term is that their class of algebras $$\bAlg\bL$$ is always a variety.
It may look surprising that many finitary and finitely algebraizable logics have a variety as their equivalent algebraic semantics, when the theory of algebraizable logics in general only allows one to prove that the equivalent algebraic semantics of a finitary and finitely algebraizable logic is a quasivariety. The result explains this phenomenon for the finitary and finitely algebraizable logics to which it applies. For many other finitary and finitely algebraizable logics, finding a convincing explanation is still an open area of research. Every abstract logic $$\cA = \langle \bA, C \rangle$$ determines a quasi-order (a reflexive and transitive relation) on $$A$$. It is the relation defined by $a \le_{\cA } b\txtiff C(b) \subseteq C(a)\txtiff b \in C(a).$ Thus, $$a \le_{\cA } b$$ if and only if $$b$$ belongs to every $$C$$-closed set to which $$a$$ belongs. For a fully selfextensional logic $$\bL$$, this quasi-order turns into a partial order in the reduced full g-models, which are in fact the reduced basic full g-models, namely, the abstract logics $$\langle \bA, \tFi_{\bL }\bA \rangle$$ with $$\bA \in \bAlg\bL$$. Consequently, in a fully selfextensional logic $$\bL$$ every algebra $$\bA \in \bAlg\bL$$ carries a partial order definable in terms of the family of the $$\bL$$-filters. If the logic is fully selfextensional with a conjunction, this partial order is definable by an equation of the $$L$$-algebraic language, because in this case for every algebra $$\bA \in \bAlg\bL$$ we have: $a \le b\txtiff C(b) \subseteq C(a)\txtiff C(a \wedge^{\bA } b) = C(a)\txtiff a \wedge^{\bA } b = a,$ where $$C$$ is the abstract consequence operation that corresponds to the closed-set system $$\tFi_{\bL }\bA$$, and $$\wedge^{\bA}$$ is the operation defined on $$\bA$$ by the formula that is a conjunction for the logic $$\bL$$.
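The quasi-order $$a \le_{\cA} b$$ iff $$b \in C(a)$$ can be computed directly from a closed-set system. The following Python sketch uses an invented four-element closed-set system (a toy example, not one from the literature): the closure $$C(X)$$ is the intersection of the closed sets containing $$X$$, and the induced relation is checked to be reflexive and transitive.

```python
from itertools import product

# Toy closed-set system on A = {0, 1, 2, 3}: a family of subsets closed
# under intersections and containing A itself (an invented example).
A = frozenset({0, 1, 2, 3})
closed = [A, frozenset({0, 1}), frozenset({0}), frozenset()]

def C(X):
    """Closure of X: the intersection of all closed sets containing X."""
    result = A
    for S in closed:
        if X <= S:
            result = result & S
    return result

def leq(a, b):
    """a <= b  iff  b belongs to every closed set to which a belongs,
    i.e. iff b is in C({a})."""
    return b in C(frozenset({a}))

# The relation is a quasi-order: reflexive and transitive.
assert all(leq(a, a) for a in A)
assert all(not (leq(a, b) and leq(b, c)) or leq(a, c)
           for a, b, c in product(A, repeat=3))
```

Here, for instance, `leq(1, 0)` holds because every closed set containing 1 also contains 0, while `leq(0, 1)` fails.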
A similar situation holds for fully selfextensional logics with the deduction-detachment property for a single term, say $$p \rightarrow q$$, for then for every algebra $$\bA \in \bAlg\bL$$ $a \le b\txtiff C(b) \subseteq C(a)\txtiff C(a \rightarrow^{\bA } b) = C(\varnothing) = C(a \rightarrow^{\bA } a) \txtiff \\ a \rightarrow^{\bA } b = a \rightarrow^{\bA } a.$ These observations lead us to view the finitary fully selfextensional logics $$\bL$$ with a conjunction and those with the deduction-detachment property for a single term as logics definable by an order which is definable in the algebras in $$\bAlg\bL$$ by using an equation of the $$\bL$$-algebraic language. Related to this, the following result is known. Theorem 8. A finitary logic $$\bL$$ with a conjunction is fully selfextensional if and only if there is a class of algebras $$\bK$$ such that for every $$\bA \in \bK$$ the reduct $$\langle A, \wedge^{\bA }\rangle$$ is a meet-semilattice and if $$\le$$ is the order of the semilattice, then $$\phi_1 , \ldots ,\phi_n\vdash_{\bL } \phi\txtiff$$ for all $$\bA \in \bK$$ and every valuation $$v$$ on $$\bA \; \bv(\phi_1) \wedge^{\bA }\ldots \wedge^{\bA } \bv(\phi_n) \le \bv(\phi)$$ and $$\vdash_{\bL } \phi\txtiff$$ for all $$\bA \in \bK$$ and every valuation $$v$$ on $$\bA \; a \le \bv(\phi)$$, for every $$a \in A$$. Moreover, in this case the class of algebras $$\bAlg\bL$$ is the variety generated by $$\bK$$. Similar results can be obtained for the selfextensional logics with the deduction-detachment property for a single term. The reader is referred to Jansana 2006 for a study of the selfextensional logics with conjunction, and to Jansana 2005 for a study of the selfextensional logics with the deduction-detachment property for a single term. The class of selfextensional logics with a conjunction includes the so-called logics preserving degrees of truth studied in the fields of substructural logics and of many-valued logics. The reader can look at Bou et al. 
2009 and the references therein.

## 14. The Frege hierarchy

A hierarchy of logic systems grounded on the replacement principles discussed in Section 11, instead of on the behaviour of the Leibniz congruences, is also considered in abstract algebraic logic. It is known as the Frege hierarchy. Its classes are those of selfextensional logics, fully selfextensional logics, Fregean logics, and the class of fully Fregean logics that we define now. In the same way as the fully selfextensional logics are the selfextensional logic systems with the property that in every one of their full g-models the abstract version of the characteristic property defining selfextensionality holds, the fully Fregean logics are the Fregean logics such that in every one of their full g-models the abstract version of the characteristic property defining being Fregean holds. The following can be taken as the most accessible definition. A logic system $$\bL$$ is fully Fregean when in every one of its basic full g-models $$\langle \bA, \tFi_{\bL }\bA \rangle$$, for every $$F \in \tFi_{\bL }\bA$$, the Suszko congruence $${\bOmega^{\sim}_{\bA}}^{\bL}(F)$$ coincides with the relation of belonging to exactly the same elements of $$\tFi_{\bL }\bA$$ that extend $$F$$. It is easy to see that the fully Fregean logics are Fregean and that they are fully selfextensional. Examples of fully Fregean logics are classical and intuitionistic logic and also the logic of conjunction and disjunction discussed in 12.1. The fragment of classical logic with just negation and a constant for truth, mentioned before, is a Fregean logic that is not fully Fregean. We refer the reader to Chapter 7 of Font 2016a for an introduction to the main facts of the Frege hierarchy and for examples of logic systems in the families of the Frege hierarchy. A discussion of the Frege and Leibniz hierarchies in relation to assertional logics can be found in Albuquerque et al. 2018, where several examples of logic systems are also discussed and classified.
The reader can find a discussion of several natural examples of logics classified in the Leibniz and Frege hierarchies in Albuquerque et al. 2017.

## 15. Extending the setting

The research on logic systems described in the previous sections has been extended to encompass other consequence relations that go beyond propositional logics, like equational logics and the consequence relations between sequents, built from the formulas of a propositional language, definable using sequent calculi. The interested reader can consult the excellent paper Raftery 2006a. This research gave rise to the need for an even more abstract way of developing the theory of consequence relations. It has led to a reformulation (in a category-theoretic setting) of the theory of logic systems as explained in this entry. The work has been done mainly by G. Voutsadakis in a series of papers, e.g., Voutsadakis 2002. Voutsadakis’s approach uses the notion of a pi-institution, introduced by Fiadeiro and Sernadas, as the analog of the logic systems in his category-theoretic setting. Some work in this direction is also found in Gil-Férez 2006. A different approach to a generalization of the studies encompassing the work done for logic systems and for sequent calculi is found in Galatos & Tsinakis 2009; Gil-Férez 2011 is also in this line. The work presented in these two papers originates in Blok & Jónsson 2006. The Galatos-Tsinakis approach has recently been extended, in a way that also encompasses the setting of Voutsadakis, in Galatos & Gil-Férez 2017. Another recent line of research that extends the framework described in this entry develops a theory of algebraization of many-sorted logic systems, using, instead of the equational consequence relation of the natural class of algebras, a many-sorted behavioral equational consequence (a notion coming from computer science), and a weaker concept than algebraizable logic: behaviorally algebraizable logic. See Caleiro, Gonçalves & Martins 2009.
## Bibliography • Albuquerque, Hugo, Josep Maria Font, and Ramon Jansana, 2016, “Compatibility operators in abstract algebraic logic”, The Journal of Symbolic Logic, 81(2): 417–462. doi:10.1017/jsl.2015.39 • –––, 2017, “The strong version of a sentential logic”, Studia Logica, 105: 703–760. doi: 10.1007/s11225-017-9709-0 • Albuquerque, Hugo, Josep Maria Font, Ramon Jansana and Tommaso Moraschini, 2018, “Assertional logics, truth-equational logics and the hierarchies of abstract algebraic logic”, in Don Pigozzi on Abstract Algebraic Logic, Universal Algebra, and Computer Science (Outstanding Contributions to Logic: Volume 16), Janusz Czelakowski (ed.), Dordrecht: Springer: 53–79. doi: 10.1007/978-3-319-74772-9 • Babyonyshev, Sergei V., 2003, “Strongly Fregean logics”, Reports on Mathematical Logic, 37: 59–77. [Babyonyshev 2003 available online] • Blackburn, Patrick, Johan van Benthem, and Frank Wolter (eds.), 2006, Handbook of Modal Logic, Amsterdam: Elsevier. • Blok, W.J. and Eva Hoogland, 2006, “The Beth property in algebraic logic”, Studia Logica (Special Issue in memory of Willem Johannes Blok), 83: 49–90. doi:10.1007/s11225-006-8298-0 • Blok, W.J. and Bjarni Jónsson, 2006, “Equivalence of consequence operations”, Studia Logica, 83: 91–110. doi:10.1007/s11225-006-8299-z • Blok, W.J. and Don Pigozzi, 1986, “Protoalgebraic logics”, Studia Logica, 45(4): 337–369. doi:10.1007/BF00370269 • –––, 1989, Algebraizable logics, (Mem. Amer. Math. Soc., Volume 396), Providence: A.M.S. • –––, 1991, “Local deduction theorems in algebraic logic”, in Algebraic Logic (Colloquia Mathematica Societatis János Bolyai: Volume 54), H. Andréka, J.D. Monk, and I. Németi (eds.), Amsterdam: North Holland, 75–109. • –––, 1992, “Algebraic semantics for universal Horn logic without equality”, in Universal Algebra and Quasigroup Theory, Anna B. Romanowska and Jonathan D.H. Smith (eds.). Berlin: Heldermann, 1–56. • Blok, W.J. 
and Jordi Rebagliato, 2003, “Algebraic semantics for deductive systems, ” Studia Logica, Special Issue on Abstract Algebraic Logic, Part II, 74(5): 153–180. doi:10.1023/A:1024626023417 • Bloom, Stephen L., 1975, “Some theorems on structural consequence operations”, Studia Logica, 34(1): 1–9. doi:10.1007/BF02314419 • Bou, Félix, Francesc Esteva, Josep Maria Font, Àngel J. Gil, Lluís Godo, Antoni Torrens, and Ventura Verdú, 2009, “Logics preserving degrees of truth from varieties of residuated lattices”, Journal of Logic and Computation, 19(6): 1031–1069. doi:10.1093/logcom/exp030 • Brown, Donald J. and Roman Suszko, 1973, “Abstract logics”, Dissertationes Mathematicae: Rozprawy Matematyczne, 102: 9–42. • Caleiro, Carlos, Ricardo Gonçalves, and Manuel Martins, 2009, “Behavioral algebraization of logics”, Studia Logica, 91(1): 63–111. doi:10.1007/s11225-009-9163-8 • Celani, Sergio and Ramon Jansana, 2003, “A closer look at some subintuitionistic logics”, Notre Dame Journal of Formal Logic, 42(4): 225–255. doi:10.1305/ndjfl/1063372244 • –––, 2005, “Bounded distributive lattices with strict implication”, Mathematical Logic Quarterly, 51: 219–246. doi:10.1002/malq.200410022 • Cintula, Petr and Carles Noguera, 2010 “Implicational (semilinear) logics I: a new hierarchy”, Archive for Mathematical Logic, 49(4): 417–446. doi:10.1007/s00153-010-0178-7 • –––, 2016 “Implicational (semilinear) logics II: additional connectives and characterizations of semilinearity”, Archive for Mathematical Logic, 55(3): 353–372. doi:10.1007/s00153-015-0452-9 • –––, 2021 Logic and Implication. An Introduction to the General Algebraic Study of Non-classical Logics (Trends in Logic: Volume 51), Cham: Springer. • Czelakowski, Janusz, 1980, “Reduced products of logical matrices”, Studia Logica, 39(1): 19–43. doi:10.1007/BF00373095 • –––, 1981, “Equivalential logics, I and II”, Studia Logica, 40(3): 227–236 and 40(4): 355–372. 
doi:10.1007/BF02584057 and doi:10.1007/BF00401654 • –––, 2001, Protoalgebraic Logics (Trends in Logic, Studia Logica Library, Volume 10), Dordrecht: Kluwer Academic Publishers. • –––, 2003, “The Suszko operator. Part I”, Studia Logica, 74(1): 181–231. doi:10.1023/A:1024678007488 • Czelakowski, Janusz and Ramon Jansana, 2000, “Weakly algebraizable logics”, The Journal of Symbolic Logic, 65(2): 641–668. doi:10.2307/2586559 • Czelakowski, Janusz and Don Pigozzi, 2004a, “Fregean logics”, Annals of Pure and Applied Logic, 127: 17–76. doi:10.1016/j.apal.2003.11.008 • –––, 2004b, “Fregean logics with the multiterm deduction theorem and their algebraization”, Studia Logica, 78: 171–212. doi:10.1007/s11225-005-1212-3 • Dunn, J. Michael, 1995, “Positive Modal Logic”, Studia Logica, 55(2): 301–317. doi:10.1007/BF01061239 • Dunn, J. Michael and Gary M. Hardegree, 2001, Algebraic methods in philosophical logic (Oxford Logic Guides, Oxford Science Publications, Volume 41), New York: Oxford University Press. • Font, Josep Maria, 1997, “Belnap’s four-valued logic and De Morgan lattices”, Logic Journal of the I.G.P.L, 5: 413–440. • –––, 2016, Abstract Algebraic Logic. An Introductory Textbook, volume 60 of Studies in Logic, London: College Publications. • –––, 2022, “Abstract Algebraic Logic.”, in Hiroakira Ono on Residuated Lattices and Substructural Logics, Nikolaos Galatos and K. Terui (eds), series Outstanding Contributions to Logic 23, Springer. 72pp. doi: 10.1007/978-3-030-76920-8 • Font, Josep Maria and Ramon Jansana, 1996, A general algebraic semantics for sentential logics (Lecture Notes in Logic: Volume 7), Dordrecht: Springer; 2nd revised edition, Cambridge: Cambridge University Press, 2016 (for the Association for Symbolic Logic). • Font, Josep Maria, Ramon Jansana, and Don Pigozzi 2003, “A Survey of Abstract Algebraic Logic”, Studia Logica, 74 (Special Issue on Abstract Algebraic Logic—Part II): 13–97. 
doi:10.1023/A:1024621922509 • Font, Josep Maria and Gonzalo Rodríguez, 1990, “Note on algebraic models for relevance logic”, Mathematical Logic Quarterly, 36(6): 535–540. doi:10.1002/malq.19900360606 • –––, 1994, “Algebraic study of two deductive systems of relevance logic”, Notre Dame Journal of Formal Logic, 35: 369–397. doi:10.1305/ndjfl/1040511344 • Font, Josep Maria and V. Verdú, 1991, “Algebraic logic for classical conjunction and disjunction”, Studia Logica, 65 (Special Issue on Abstract Algebraic Logic): 391–419. doi:10.1007/BF01053070 • Galatos, Nikolaos and Constantine Tsinakis, 2009, “Equivalence of consequence relations: an order-theoretic and categorical perspective”, The Journal of Symbolic Logic, 74(3): 780–810. doi:10.2178/jsl/1245158085 • Galatos, Nikolaos and José Gil-Férez, 2017, “Modules over quantaloids: Applications to the isomorphism problem in algebraic logic and $$\pi$$-institutions”, Journal of Pure and Applied Algebra, 221(1): 1–24. doi:10.1016/j.jpaa.2016.05.012 • Gil-Férez, José, 2006, “Multi-term $$\pi$$-institutions and their equivalence”, Mathematical Logic Quarterly, 52(5): 505–526. doi:10.1002/malq.200610010 • –––, 2011, “Representations of structural closure operators”, Archive for Mathematical Logic, 50: 45–73. doi:10.1007/s00153-010-0201-z • Herrmann, Burghard, 1996, “Equivalential and algebraizable logics”, Studia Logica, 57(2): 419–436. doi:10.1007/BF00370843 • –––, 1997, “Characterizing equivalential and algebraizable logics by the Leibniz operator”, Studia Logica, 58(2): 305–323. doi:10.1023/A:1004979825733 • Heyting, Arend, 1930, “Die formalen Regeln der intuitionistischen Logik” (in 3 parts), Sitzungsberichte der preussischen Akademie von Wissenschaften, 42–56, 57–71, 158–169. • Hoogland, Eva, 2000, “Algebraic characterizations of various Beth definability properties”, Studia Logica, 65 (Special Issue on Abstract Algebraic Logic. Part I): 91–112.
doi:10.1023/A:1005295109904 • Humberstone, Lloyd, 2005, “Logical Discrimination”, in J.-Y. Béziau (ed.), Logica Universalis, Basel: Birkhäuser. doi:10.1007/3-7643-7304-0_12 • Jansana, Ramon, 2002, “Full models for positive modal logic”, Mathematical Logic Quarterly, 48(3): 427–445. doi:10.1002/1521-3870(200204)48:3<427::AID-MALQ427>3.0.CO;2-T • –––, 2005, “Selfextensional logics with implication”, in J.-Y. Béziau (ed.), Logica Universalis, Basel: Birkhäuser. doi:10.1007/3-7643-7304-0_4 • –––, 2006, “Selfextensional logics with conjunction”, Studia Logica, 84(1): 63–104. doi:10.1007/s11225-006-9003-z • Jansana, Ramon and Alessandra Palmigiano, 2006, “Referential algebras: duality and applications”, Reports on Mathematical Logic (Special issue in memory of Willem Blok), 41: 63–93. [Jansana and Palmigiano 2006 available online] • Koslow, Arnold, 1992, A structuralist theory of logic, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511609206 • Kracht, Marcus, 2006, “Modal Consequence Relations”, in Blackburn, van Benthem, and Wolter 2006: 497–549. • Lávička, Tomáš, Tommaso Moraschini and James Raftery, 2021, “The algebraic significance of weak excluded middle laws”, Mathematical Logic Quarterly, 68(1): 79–94. • Lewis, Clarence Irving, 1918, A Survey of Symbolic Logic, Berkeley: University of California Press; second edition, New York Dover Publications, 1960. • Lewis, Clarence Irving and Langford, Cooper H., 1932 Symbolic Logic, second edition, New York: Dover Publications, 1959. • Łoś, Jerzy, 1949, O matrycach logicznych, Ser. B. Prace Wrocławskiego Towarzystwa Naukowege (Travaux de la Société et des Lettres de Wrocław), Volume 19. • Łoś, Jerzy and Roman Suszko, 1958, “Remarks on sentential logics”, Indagationes Mathematicae (Proceedings), 61: 177–183. doi:10.1016/S1385-7258(58)50024-9 • Łukasiewicz, J. 
and Alfred Tarski, 1930, “Untersuchungen über den Aussagenkalkül”, Comptes Rendus des Séances de la Société des Sciences et des Lettres de Varsovie, Cl.III 23: 30–50. English translation in Tarski 1983: “Investigations into the sentential calculus”. • Malinowski, Jacek, 1990, “The deduction theorem for quantum logic, some negative results”, The Journal of Symbolic Logic, 55(2): 615–625. doi:10.2307/2274651 • McKinsey, J.C.C., 1941, “A solution of the decision problem for the Lewis systems S2 and S4, with an application to topology”, The Journal of Symbolic Logic, 6(4): 117–134. doi:10.2307/2267105 • McKinsey, J.C.C. and Alfred Tarski, 1948, “Some theorems about the sentential calculi of Lewis and Heyting”, The Journal of Symbolic Logic, 13(1): 1–15. doi:10.2307/2268135 • Moraschini, T., forthcoming, “On equational completeness theorems ”, The Journal of Symbolic Logic, first online 13 September 2021. doi:10.1017/jsl.2021.67 • Pigozzi, Don, 1991, “Fregean algebraic logic”, in H. Andréka, J.D. Monk, and I. Németi (eds.), Algebraic Logic (Colloq. Math. Soc. János Bolyai, Volume 54), Amsterdam: North-Holland, 473-502. • Prucnal, Tadeusz and Andrzej Wroński, 1974, “An algebraic characterization of the notion of structural completeness”, Bulletin of the Section of Logic, 3(1): 30–33. • Raftery, James G., 2006a, “Correspondence between Gentzen and Hilbert systems”, The Journal of Symbolic Logic, 71(3): 903–957. doi:10.2178/jsl/1154698583 • –––, 2006b, “On the equational definability of truth predicates”, Reports on Mathematical Logic (Special issue in memory of Willem Blok), 41: 95–149. [Raftery 2006b available online] • –––, 2011, “Contextual deduction theorems”, Studia Logica (Special issue in honor of Ryszard Wójcicki), 99: 279–319. doi:10.1007/s11225-011-9353-z • –––, 2013, “Inconsistency lemmas in algebraic logic”, Mathematical Logic Quarterly, 59(6): 393–406. 
doi:10.1002/malq.201200020 • –––, 2016, “Admissible rules and the Leibniz Hierarchy”, Notre Dame Journal of Formal Logic, 57: 569–606. • Rasiowa, H., 1974, An algebraic approach to non-classical logics (Studies in Logic and the Foundations of Mathematics, Volume 78), Amsterdam: North-Holland. • Schroeder-Heister, Peter and Kosta Dośen (eds), 1993, Substructural Logics (Studies in Logic and Computation: Volume 2), Oxford: Oxford University Press. • Suszko, Roman, 1977, “Congruences in sentential calculus”, in A report from the Autumn School of Logic (Miedzygorze, Poland, November 21–29, 1977). Mimeographed notes, edited and compiled by J. Zygmunt and G. Malinowski. Restricted distribution. • Tarski, Alfred, 1930a, “Über einige fundamentale Begriffe der Metamathematik”, C. R. Soc. Sci. Lettr. Varsovie, Cl. III 23: 22–29. English translation in Tarski 1983: “On some fundamental concepts of metamathematics”, 30–37. • –––, 1930b, “Fundamentale Begriffe der Methodologie der deduktiven Wissenschaften I”, Monatfshefte für Mathematik und Physik, 37: 361–404. English translation in Tarski 1983: “Fundamental concepts of the methodology of the deductive sciences”, 60–109. • –––, 1935, “Grundzüge der Systemenkalküls. Erster Teil”, Fundamenta Mathematicae, 25: 503–526, 1935. English translation in Tarski 1983: “Foundations of the calculus of systems”, 342–383. • –––, 1936, “Grundzüge der Systemenkalküls. Zweiter Teil”, Fundamenta Mathematicae, 26: 283–301, 1936. English translation in Tarski 1983: “Foundations of the calculus of systems”, 342–383. • –––, 1983, Logic, Semantics, Metamathematics. Papers from 1923 to 1938, J. Corcoran (ed.), Indianapolis: Hackett, second edition. • Torrens, Antoni, 2008, “An Approach to Glivenko’s Theorems in Algebraizable Logics”, Studia Logica, 88(3): 349–383. doi:10.1007/s11225-008-9109-6 • Troelstra, A.S., 1992, Lectures on Linear Logic (CSLI Lecture Notes 29), Stanford, CA: CSLI Publications. 
• Visser, Albert, 1981, “A Propositional Logic with Explicit Fixed Points”, Studia Logica, 40(2): 155–175. • Voutsadakis, George, 2002, “Categorical Abstract Algebraic Logic: Algebraizable Institutions”, Applied Categorical Structures, 10: 531–568. doi:10.1023/A:1020990419514 • Wójcicki, Ryszard, 1969, “Logical matrices strongly adequate for structural sentential calculi”, Bulletin de l’Académie Polonaise des Sciences, Classe III XVII: 333–335. • –––, 1973, “Matrix approach in the methodology of sentential calculi”, Studia Logica, 32(1): 7–37. doi:10.1007/BF02123806 • –––, 1988, Theory of logical calculi. Basic theory of consequence operations (Synthese Library, Volume 199), Dordrecht: D. Reidel.
https://www.physicsforums.com/threads/solving-transcendental-equation-quantum-mechanics.111894/
# Solving transcendental equation - Quantum Mechanics

1. Feb 23, 2006

### GTdan

I don't know if I am being dumb or not, but I need to solve a transcendental equation numerically and I need to write a program that can do this. The equation is so I can find the ground state energy of a wave function in a semi-infinite well. I was told to use the Newton-Raphson method to do it, and I am thinking a transcendental equation found in the book would be the one I need to solve (if anyone thinks differently, let me know). Here is the equation:

tan(z) = sqrt((z(o)/z)^2 - 1)

where

z(o) = (a/h)*sqrt(2m*V(o))
z = (a/h)*sqrt(2m*(E + V(o)))

and V(o) = -10 eV. This is written in Maple code, btw. I'm thinking I can solve for z by using Newton's method and then solve for E afterwards, since z is a function of E. That's the only idea I have that would need Newton's method, so can someone let me know if I am on the right track or give me some hints?

2. Feb 23, 2006

### arildno

Okay, so you're basically after a programming procedure, since you've already decided upon using Newton-Raphson?

3. Feb 23, 2006

### GTdan

Well, we have to use Newton-Raphson. I would much rather solve it graphically like the book does, but that's not the case. I may have been vague, but my problem is that I have no idea if I am going in the right direction or not. Am I using the right equation to solve for E? If I am, can I get some hints on the programming procedure? Also, unless I can make Maple compile a program or something (if it even does that), I have to write it in C++.
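A minimal Newton-Raphson sketch along these lines, written in Python rather than the thread's C++ or Maple. The value `z0 = 2.0` is an arbitrary illustrative parameter (not the one computed from the 10 eV well), and the derivative is taken numerically, so this is only a template for the approach:

```python
import math

def f(z, z0):
    """Residual of the transcendental equation tan(z) = sqrt((z0/z)**2 - 1);
    a root z of f gives the bound-state condition."""
    return math.tan(z) - math.sqrt((z0 / z) ** 2 - 1.0)

def newton(func, z, tol=1e-12, max_iter=50, h=1e-7):
    """Newton-Raphson iteration using a central-difference derivative."""
    for _ in range(max_iter):
        deriv = (func(z + h) - func(z - h)) / (2.0 * h)
        step = func(z) / deriv
        z -= step
        if abs(step) < tol:
            break
    return z

z0 = 2.0                              # illustrative value of (a/hbar)*sqrt(2*m*V0)
root = newton(lambda z: f(z, z0), 0.8)  # ground-state root lies in (0, pi/2)
# Once z is known, invert z = (a/hbar)*sqrt(2*m*(E + V0)) to recover E.
```

Since f is monotone increasing on the relevant interval, any starting guess inside (0, pi/2) converges to the single ground-state root.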
http://mathhelpforum.com/differential-equations/179938-method-solve-de-print.html
# Method to solve this DE

• May 8th 2011, 05:29 PM Naples

Method to solve this DE $2y'' = 3y^2$ Basically I don't have any idea about what method I should use to solve this DE... I've tried several ways but they don't seem to work.

• May 8th 2011, 05:40 PM TheEmptySet

Quote: Originally Posted by Naples $2y'' = 3y^2$ Basically I don't have any idea about what method I should use to solve this DE... I've tried several ways but they don't seem to work.

Use the substitution $u=y' \implies \frac{du}{dy}\frac{dy}{dx}=y'' \iff u\frac{du}{dy}=y''$ This reduces the ODE to the first order separable ODE $u\frac{du}{dy}=3y^2$ Can you finish from here?

• May 8th 2011, 06:01 PM Naples

So..? $2u\, du = 3y^2\, dy$ $u^2 = y^3 + A$ $u = y^{3/2} + A^{1/2}$ $y' = y^{3/2} + A^{1/2}$ $A=0$ $y' = y^{3/2}$ $y = (2/5)y^{5/2} + B$ $B = 3/5$ $y = (2/5)y^{5/2} + (3/5)$

• May 8th 2011, 06:51 PM TheEmptySet

Quote: Originally Posted by Naples So..? $2u\, du = 3y^2\, dy$ $u^2 = y^3 + A$ $u = y^{3/2} + A^{1/2}$ $y' = y^{3/2} + A^{1/2}$ $A=0$ $y' = y^{3/2}$ $y = (2/5)y^{5/2} + B$ $B = 3/5$ $y = (2/5)y^{5/2} + (3/5)$

Not quite. Unless you have some initial conditions I don't think this will simplify very nicely. We have $2u\frac{du}{dy}=3y^2 \iff \int 2u\,du = \int 3y^2\,dy \implies u^2=y^3+C \implies u=\pm \sqrt{y^3+C}$ But now we get $\frac{dy}{dx}= \pm \sqrt{y^3+C} \implies \int \frac{dy}{\sqrt{y^3+C}}= \pm \int dx \implies x=\pm \int \frac{dy}{\sqrt{y^3+C}}$

• May 8th 2011, 06:56 PM Naples

I was given the initial conditions y(0)=1 and y'(0)=1. Sorry I forgot to put them in the original post.

• May 8th 2011, 07:14 PM TheEmptySet

Quote: Originally Posted by Naples I was given the initial conditions y(0)=1 and y'(0)=1. Sorry I forgot to put them in the original post.

Yes this is good!
Now since $y'=u$, we can use the equation above and the initial conditions to get $u^2= y^3+C \implies 1^2 = 1^3 +C \implies C=0$ Now we just have to solve the ODE $(y')^2=y^3 \implies \frac{dy}{dx} = \pm y^{\frac{3}{2}} \implies y^{-\frac{3}{2}}dy = \pm dx \implies \frac{-2}{\sqrt{y}}= \pm x+D$ Now use the initial conditions and solve for y. The solution to this equation is not unique.

• May 8th 2011, 07:37 PM Naples

• May 8th 2011, 07:46 PM TheEmptySet

Quote: Yes that is correct. You can verify the solution by plugging back into the original ODE.
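Carrying the last equation through the initial conditions on the branch $y' = +y^{3/2}$ gives $D = -2$ and hence $y(x) = 4/(2-x)^2$. A quick finite-difference check in Python (an illustration added here, not part of the thread):

```python
# Check that y(x) = 4/(2 - x)**2 satisfies 2*y'' = 3*y**2 with y(0) = 1 and
# y'(0) = 1.  The closed form follows from the thread's last equation
# -2/sqrt(y) = x + D on the branch y' = +y**(3/2), which forces D = -2.

def y(x):
    return 4.0 / (2.0 - x) ** 2

h = 1e-4
for x in (0.0, 0.5, 1.0, 1.5):
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / h**2   # y'' by central difference
    assert abs(2.0 * ypp - 3.0 * y(x) ** 2) < 1e-3    # ODE holds numerically

assert abs(y(0.0) - 1.0) < 1e-12                      # y(0) = 1
yp0 = (y(h) - y(-h)) / (2.0 * h)                      # y'(0) by central difference
assert abs(yp0 - 1.0) < 1e-6                          # y'(0) = 1
```

Note the solution blows up at x = 2, which is consistent with the remark that the solution of this problem is not globally unique or well behaved.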
https://www.physicsforums.com/threads/muon-energy-reasonable-result.190382/
# Muon energy - Reasonable result?

1. Oct 10, 2007

### December

Hi. I have an assignment where a muon gets caught in a zinc atom (Z=30) at n=2. I'm supposed to calculate the energy of the photon that is emitted in the transition to n=1. I have managed to calculate the energy of this photon, but I'm having a little trouble determining the validity of my result: the result I got was that the emitted photon had an energy of approximately 380 MeV. Even though I expected a high energy/frequency, since the mass of the muon is larger than that of an electron, I still think that this is an extremely high energy. I don't have very much experience with this type of calculation, and maybe the result _is_ within reasonable limits, but I actually don't know.

2. Oct 10, 2007

### Gokul43201 (Staff Emeritus)

The question itself is very poorly written. For one thing, the 2s and 2p states have quite different energies. So, it appears that you are just expected to use the Bohr approximation for zinc! And it doesn't look like you are expected to account for screening in even the most simplistic way. I get a number that is over 2 orders of magnitude smaller than yours. We'll need to see your calculation to be able to tell you what's wrong.

3. Oct 10, 2007

### sparkywowo

Just use Bohr theory for Z = 30 and adjust the reduced mass using the muon.
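A back-of-the-envelope version of that suggestion, sketched in Python. The constants are standard values; screening and finite-nuclear-size effects, which are actually significant for muonic zinc, are deliberately ignored, so this is only an order-of-magnitude check:

```python
# Rough Bohr-model estimate of the n=2 -> n=1 muonic transition in zinc,
# scaling the hydrogen levels by Z**2 and by the muon's reduced mass.
RYDBERG_EV = 13.6057        # hydrogen ground-state binding energy (eV)
M_MU = 206.768              # muon mass in electron masses
M_ZN = 65.38 * 1822.888     # zinc atomic mass in electron masses (~65.38 u)

mu = M_MU * M_ZN / (M_MU + M_ZN)   # reduced mass, in electron masses
Z = 30

def E_n(n):
    """Bohr energy level (eV), scaled by Z**2 and the reduced-mass ratio."""
    return -RYDBERG_EV * Z**2 * mu / n**2

photon_ev = E_n(2) - E_n(1)        # energy released in the 2 -> 1 transition
print(photon_ev / 1e6, "MeV")      # about 1.9 MeV, vs. the poster's 380 MeV
```

The ~1.9 MeV result is indeed over two orders of magnitude below 380 MeV, consistent with Gokul43201's remark.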
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8967700600624084, "perplexity": 382.05286221009624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281450.93/warc/CC-MAIN-20170116095121-00155-ip-10-171-10-70.ec2.internal.warc.gz"}
https://hal.archives-ouvertes.fr/hal-00534439
# Diffeomorphic metric surface mapping in subregion of the superior temporal gyrus

Abstract: This paper describes the application of large deformation diffeomorphic metric mapping to cortical surfaces, based on the shape and geometric properties of subregions of the superior temporal gyrus in the human brain. The anatomical surfaces of the cortex are represented as triangulated meshes. The diffeomorphic matching algorithm is implemented by defining a norm between the triangulated meshes, based on the algorithms of Vaillant and Glaunès. The diffeomorphic correspondence is defined as a flow of the extrinsic three-dimensional coordinates containing the cortical surface that registers the initial and target geometry by minimizing the norm. The methods are demonstrated on 40 high-resolution MRI cortical surfaces of the planum temporale (PT) constructed from subsets of the superior temporal gyrus (STG). The effectiveness of the algorithm is demonstrated via the Euclidean positional distance, the distance of normal vectors, and the curvature before and after surface matching, as well as by comparison with a landmark matching algorithm. The results demonstrate that both the positional and shape variability of the anatomical configurations are represented by the diffeomorphic maps.

Document type: Journal article. NeuroImage, Elsevier, 2007, 34 (3), pp. 1149-59. 〈10.1016/j.neuroimage.2006.08.053〉

Contributor: Joan Alexis Glaunès. Submitted: Tuesday, 9 November 2010 - 16:24:58. Last modified: Thursday, 11 January 2018 - 06:19:44.

### Citation

Marc Vaillant, Anqi Qiu, Joan Alexis Glaunès, Michael I Miller. Diffeomorphic metric surface mapping in subregion of the superior temporal gyrus. NeuroImage, Elsevier, 2007, 34 (3), pp. 1149-59. 〈10.1016/j.neuroimage.2006.08.053〉. 〈hal-00534439〉
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8094244599342346, "perplexity": 4511.127001752884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812873.22/warc/CC-MAIN-20180220030745-20180220050745-00713.warc.gz"}
https://www.lessonplanet.com/teachers/the-scientific-method-science-6th-8th
# The Scientific Method

In this science worksheet, students find the words associated with the steps of the scientific method. The answers are found at the bottom of the page.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8753453493118286, "perplexity": 1458.5740310527212}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814566.44/warc/CC-MAIN-20180223094934-20180223114934-00387.warc.gz"}
https://2021.help.altair.com/2021.1.2/feko/topics/feko/example_guide/radar_cross_section/sphere_dielectric_rcs_intro_feko_t.htm
RCS and Near Field of a Dielectric Sphere

Calculate the radar cross section and the near field inside and outside of a dielectric sphere using the surface equivalence principle (SEP). The bistatic radar cross section for the sphere [1] is computed by:

(1) $\sigma(\text{bistatic}) = \lim_{r \to \infty}\left[4\pi r^{2}\,\frac{|\mathbf{E}^{s}|^{2}}{|\mathbf{E}^{i}|^{2}}\right]$

[1] C. A. Balanis, Advanced Engineering Electromagnetics, Wiley, 1989, p. 655.
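The limit in (1) is in practice evaluated at a large but finite radius. A minimal sketch of the formula itself (plain Python with assumed field magnitudes as inputs — these are not Feko API calls):

```python
import math

def bistatic_rcs(E_scat, E_inc, r):
    """Approximate sigma = 4*pi*r^2 * |E_s|^2 / |E_i|^2 at a finite radius r (meters)."""
    return 4.0 * math.pi * r**2 * (abs(E_scat) / abs(E_inc))**2

# e.g. a scattered field of 1 mV/m sampled at r = 1 km for a 1 V/m incident wave
sigma = bistatic_rcs(1e-3, 1.0, 1000.0)
print(f"sigma = {sigma:.3f} m^2")   # 4*pi*r^2*1e-6 = 4*pi ~ 12.566 m^2
```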
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9860344529151917, "perplexity": 1708.974551712555}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764495001.99/warc/CC-MAIN-20230127164242-20230127194242-00513.warc.gz"}
http://www.mathematics-online.org/kurse/kurs7/seite14.html
Mathematics-Online course: Basic Mathematics - Sets

# Properties of Relations

A (binary) relation R on a set M is called

• reflexive, if each element is related to itself: a R a for all a in M;
• symmetric, if the order of the elements is irrelevant: a R b implies b R a;
• antisymmetric, if symmetry implies the identity of the respective elements: a R b and b R a imply a = b;
• transitive, if the middle element of a chain can be removed: a R b and b R c imply a R c;
• complete, if any two distinct elements are related to each other in at least one direction: a ≠ b implies a R b or b R a.

A reflexive, symmetric and transitive relation is called an equivalence relation, usually symbolized by ∼ instead of R. An equivalence relation divides a set into disjoint subsets (equivalence classes), with any two elements of a particular subset being related (equivalent) to each other, while two elements of distinct subsets are not related to one another.

A reflexive, antisymmetric and transitive relation is called a partial order, symbolized by ≤ instead of R. If a partial order is complete, it is called a (total) order; M is then ordered by ≤. (Authors: Hörner/Abele)

The inclusion of sets is a partial order on the power set of a set M, since it is reflexive (A ⊆ A), antisymmetric (A ⊆ B and B ⊆ A imply A = B), and transitive (A ⊆ B and B ⊆ C imply A ⊆ C). However, if M contains more than one element, then the inclusion is not a (total) order, i.e. it is not complete: two distinct singleton subsets are not comparable.

The relation "has an equal number of elements" is an equivalence relation on the power set of a finite set, since it is reflexive (|A| = |A|), symmetric (|A| = |B| implies |B| = |A|), and transitive (|A| = |B| and |B| = |C| imply |A| = |C|). (Authors: Hörner/Abele)
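These properties can be checked mechanically on a finite set. A short sketch (the divisibility relation on a small set is an illustrative choice, not an example from the course):

```python
# Check the defining properties of a relation R (a set of ordered pairs) on a set S.
def reflexive(R, S):
    return all((a, a) in R for a in S)

def symmetric(R):
    return all((b, a) in R for (a, b) in R)

def antisymmetric(R):
    return all(a == b for (a, b) in R if (b, a) in R)

def transitive(R):
    return all((a, c) in R
               for (a, b) in R for (b2, c) in R if b == b2)

S = {1, 2, 3, 4}
divides = {(a, b) for a in S for b in S if b % a == 0}   # "a divides b", a partial order on S

assert reflexive(divides, S) and antisymmetric(divides) and transitive(divides)
assert not symmetric(divides)   # (1, 2) is in the relation but (2, 1) is not
```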
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9619587063789368, "perplexity": 953.9253325234325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00243-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/question-involving-trigonometric-identities-and-inverse-functions.222306/
# Question involving trigonometric identities and inverse functions

1. Mar 16, 2008

### AussieDave

[SOLVED] Question involving trigonometric identities and inverse functions

1. The problem statement, all variables and given/known data

2. Relevant equations

I've tried to combine the following known equations to come up with a solution:

$$\frac{d}{dx}(\sin^{-1}x) = \frac{1}{\sqrt{1-x^{2}}}$$
$$\sin x = y, \qquad \sin^{-1} y = x$$
$$\frac{d}{dx}(\sin x) = \cos x$$
$$\cos^{2}x + \sin^{2}x = 1$$

3. The attempt at a solution

I've been writing down a whole bunch of different equations on a sheet of paper to try and come up with something to connect the two equations. I feel like I'm kind of shooting in the dark, though, as I'm not sure where to begin and how to use this knowledge of the derivatives (if that's needed) and the relationship between cos and sin. I've tried fiddling around with the Pythagorean identity but I end up with things like:

$$x = \sin^{-1}x\,\sqrt{1-\cos^{2}x}$$

and I'm not sure where to go from there. Your help will be much appreciated. Kind regards, David

Last edited by a moderator: Apr 23, 2017

2. Mar 16, 2008

### Snazzy

Draw a triangle. sin(y) = x = opp/hyp

3. Mar 16, 2008

### AussieDave

I'm still running into trouble here. I have: cos(y) = adj/hyp = sin^-1(opp/hyp) = x, but I can't find a way to move on. Do I have to use hyp = sqrt(opp^2 + adj^2)?

4. Mar 16, 2008

### Snazzy

$$\sin Y = x = \frac{opp}{hyp}$$

Yes? So what's the hypotenuse then? Can you figure out the length of the adjacent side by knowing the opposite side length and the hypotenuse length? Can you then determine the ratio of the adjacent side to the hypotenuse, which gives $$\cos Y$$? Remember that cos relates the angle between the adjacent side and hypotenuse to the ratio of the adjacent side to the hypotenuse.

Last edited: Mar 16, 2008

5. Mar 16, 2008

### AussieDave

Well, Adj^2 + Opp^2 = Hyp^2, so that's the relationship there.
If I rearrange and sub that into the equation you give me, it doesn't seem to make things any less complicated.

6. Mar 16, 2008

### tiny-tim

… turn the equation round! …

Hi David! When you have inverse functions, just turn the equation round: $$y = \sin^{-1}x$$, so $$x = \sin y$$, so $$\cos y = \ldots$$?

7. Mar 16, 2008

### AussieDave

I do understand the basic principles of inverse functions but I'm struggling to relate that to the basic triangle and produce numbers similar to those given as the possible answers to the question.

8. Mar 16, 2008

### tiny-tim

… on a triangle …

Ah! Well, y is one of the angles of the right-angled triangle, and x is the opposite side, and the hypotenuse is 1. So cos y is the adjacent side. So just use good ol' Pythagoras …

9. Mar 16, 2008

### Snazzy

10. Mar 16, 2008

### AussieDave

I never knew that the hypotenuse equalled 1. Can you please tell me why that is the case? Given that, I was able to calculate $$\cos y = \sqrt{1-x^{2}}$$, which is answer (c). Is this correct?

EDIT: Actually, after just looking at Snazzy's diagram, I understand why the hypotenuse = 1: because y = sin^-1(opp/hyp) and y = sin^-1(x) and x = opp, so hyp = 1. Thank you very much for your help. I'm guessing (c) is therefore correct?

Last edited: Mar 16, 2008

11. Mar 16, 2008

### tiny-tim

Yes: always put hypotenuse = 1. It's because "sin = opposite over hypotenuse" - so if you put hypotenuse = 1, then the formula is simply "sin = opposite"! (Same for cos, of course.) Yes!

12. Mar 16, 2008

### Snazzy

$$\sin Y = \frac{opp}{hyp} = \frac{x}{hyp} = x$$
$$\frac{x}{x} = hyp = 1$$

So yes, once you figure out the hypotenuse = 1, you can find the length of the adjacent side using the Pythagorean theorem, which gives $$adj = \sqrt{1-x^2}$$

$$\cos Y = \frac{adj}{hyp} = \frac{\sqrt{1-x^2}}{1} = \sqrt{1-x^2}$$

Last edited: Mar 16, 2008

13. Mar 16, 2008

### AussieDave

Well thank you to both of you. It's good to get that little guy out of the way. Now I have to do that [SOLVED] thing. Hmmm.

14.
Dec 30, 2009

### mohini patil

Re: [SOLVED] Question involving trigonometric identities and inverse functions

To make $$\sin y = x$$, take hypotenuse = 1; therefore $$\cos y = \sqrt{1-x^{2}}$$.

Last edited by a moderator: Apr 24, 2017
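The identity the thread settles on, cos(sin⁻¹x) = √(1−x²), is easy to sanity-check numerically. A quick sketch (not part of the original thread):

```python
import math

# Spot-check cos(asin(x)) == sqrt(1 - x^2) at a few values in (-1, 1).
# The identity holds without a sign ambiguity because asin returns angles
# in [-pi/2, pi/2], where cosine is non-negative.
for x in (-0.9, -0.5, 0.0, 0.3, 0.99):
    lhs = math.cos(math.asin(x))
    rhs = math.sqrt(1.0 - x * x)
    assert math.isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-12)
print("cos(asin(x)) matches sqrt(1 - x^2)")
```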
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369222521781921, "perplexity": 1297.1783136786232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.84/warc/CC-MAIN-20170423031207-00187-ip-10-145-167-34.ec2.internal.warc.gz"}
https://tritonstation.wordpress.com/2016/07/30/missing-baryons/
# Missing Baryons A long standing problem in cosmology is that we do not have a full accounting of all the baryons that we believe to exist. Big Bang Nucleosynthesis (BBN) teaches us that the mass density in normal matter is Ωb ≈ 5%. One can put a more precise number on it, but that’s close enough for our purposes here. Ordinary matter fails to account for the closure density by over an order of magnitude. To make matters worse, if we attempt an accounting of where these baryons are, we again fall short. As well as the dynamical missing mass problem, we also have a missing baryon problem. For a long time, this was also an order of magnitude problem. The stars and gas we could most readily see added up to < 1%, well short of even 5%. More recent work has shown that many, but not all, of the missing baryons are in the intergalactic medium (IGM).  The IGM is incredibly diffuse – a better vacuum than we can make in the laboratory by many orders of magnitude – but it is also very, very, very, well, very big. So all that nothing does add up to a bit of something. A thorough accounting has been made by Shull et al. (2012). A little over half of detected baryons reside in the IGM, in either the Lyman alpha forest (Ly a in the pie chart above) or in the so-called warm-hot intergalactic medium (WHIM). There are also issues of double-counting, which Shull has taken care to avoid. Gravitationally bound objects like galaxies and clusters of galaxies contain a minority of the baryons. Stars and cold (HI) gas in galaxies are small wedges of the pie, hence the large problem we initially had. Gas in the vicinity of galaxies (CGM) and in the intracluster medium of clusters of galaxies (ICM) also matter. Indeed, in the most massive clusters, the ICM outweighs all the stars in the galaxies there. This situation reverses as we look at lower mass groups. Rich clusters dominated by the ICM are rare; objects like our own Local Group are more typical. 
There’s no lack of circum-galactic gas (CGM), but it does not obviously outweigh the stars around L* galaxies. There are of course uncertainties, so one can bicker and argue about the relative size of each slice of the pie. Even so, it remains hard to make their sum add up to 5% of the closure density. It appears that ~30% of the baryons that we believe to exist from BBN are still unaccounted for in the local universe.

The pie diagram only illustrates the integrated totals. For a long time I have been concerned about the baryon budget in individual objects. In essence, each dark matter halo should start with a cosmically equal share of baryons and dark matter. Yet in most objects, the ratio of baryons to total mass falls well short of the cosmic baryon fraction. The value of the cosmic baryon fraction is well constrained by a variety of data, especially the cosmic microwave background. The number we persistently get is

fb = Ωb/Ωm = 0.17

or maybe 0.16, depending on which CMB analysis you consult. But it isn’t 0.14 nor 0.10 nor 0.01. For sticklers, note that this is the fraction of the total gravitating mass in baryons, not the ratio of baryons to dark matter: Ωm includes both. For numerologists, note that within the small formal uncertainties, 1/fb = 2π.

This was known long before the CMB experiments provided constraints that mattered. Indeed, one of the key findings that led us to repudiate standard SCDM in favor of ΛCDM was the recognition that clusters of galaxies had too many baryons for their dynamical mass. We could measure the baryon fraction in clusters. If we believe that these are big enough chunks of the universe to be representative of the whole, and we also believe BBN, then we are forced to conclude that Ωm ≈ 0.3.

Why stop with clusters? One can do this accounting in every gravitationally bound object. The null hypothesis is that every object should share the universal composition, roughly 1 part baryons for every 5 parts dark matter.
This almost works in rich clusters of galaxies. It fails in small clusters and groups of galaxies, and gets worse as you examine progressively smaller systems. So: not only are we missing baryons in the cosmic sum, there are some missing in each individual object.

The figure shows the ratio of detected baryons to those expected in individual systems. I show the data I compiled in McGaugh et al. (2010), omitting the tiniest dwarfs for which the baryon content becomes imperceptible on a linear scale. By detected baryons I mean all those seen to exist in the form of stars or gas in each system (Mb = M* + Mg), such that

fd = Mb/(fb Mvir)

where Mvir is the total mass of each object. This ‘virial’ mass is a rather uncertain quantity, but in this plot it can only slide the data up and down a little bit. The take-away is that not a single, gravitationally bound object appears to contain its fair share of cosmic baryons. There is a missing baryon problem not just globally, but in each and every object.

This halo-by-halo missing baryon problem is least severe in the most massive systems, rich clusters. Indeed, the baryon fraction of clusters is a rising function of radius, so a case could be made that the observations simply don’t reach far enough out to encompass a fair total. This point has been debated at great length in the literature, and I have little to add to it, except to observe that rich clusters are perhaps like horseshoes – close enough. Irrespective of whether we consider the most massive clusters to be close enough to the cosmic baryon fraction or not, no other system comes close to close enough. There is already a clear discrepancy among smaller clusters, and an apparent trend with mass. This trend continues smoothly and continuously over many decades in baryonic mass through groups, then individual L* galaxies, and on to the tiniest dwarfs.
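Plugging assumed round numbers into the detected-baryon-fraction definition makes the bookkeeping concrete (these masses are illustrative stand-ins for a Milky-Way-like galaxy, not figures from the post):

```python
# Detected baryon fraction f_d = M_b / (f_b * M_vir), all masses in solar masses.
# The galaxy masses below are assumed, illustrative round numbers.
f_b = 0.17                     # cosmic baryon fraction
M_star, M_gas = 5e10, 1e10     # assumed stellar and cold-gas mass
M_vir = 1.3e12                 # assumed virial (total) mass of the halo
M_b = M_star + M_gas
f_d = M_b / (f_b * M_vir)
print(f"f_d ~ {f_d:.2f}")      # ~0.27: only about a quarter of the expected baryons detected
```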
A respectably massive galaxy like the Milky Way has many tens of billions of solar masses in the form of stars, and another ten billion or so in the form of cold gas. Yet this huge mass represents only a 1/4 or so of the baryons that should reside in the halo of the Milky Way. As we look at progressively smaller galaxies, the detected baryon fraction decreases further. For a galaxy with a mere few hundred million stars, fd ≈ 6%. It drops below 1% for M* < 10^7 solar masses. That’s a lot of missing baryons.

In the case of the Milky Way, all those stars and cold gas are within a radius of 20 kpc. The dark matter halo extends out to at least 150 kpc. So there is plenty of space in which the missing baryons might lurk in some tenuous form. But they have to remain pretty well hidden. Joel Bregman has spent a fair amount of his career searching for such baryonic reservoirs. While there is certainly some material out there, it does not appear to add up to be enough.

It is still harder to see this working in smaller galaxies. The discrepancy that is a factor of a few in the Milky Way grows to an order of magnitude and more in dwarfs. A common hypothesis is that these baryons do indeed lurk there, probably in a tenuous, hot gas. If so, direct searches have yet to see them. Another common idea is that the baryons get expelled entirely from the small potential wells of dwarf galaxy dark matter halos, driven by winds powered by supernovae. If that were the case, I’d expect to see a break at a critical mass where the potential well was or wasn’t deep enough to prevent this. If there is any indication of this, it is at still lower mass than shown above, and begs the question as to where those baryons are now.

So we don’t have a single missing mass problem in cosmology. We have at least two. One is the need for non-baryonic dark matter. The other is the need for unseen normal matter: dark baryons. This latter problem has at least two flavors.
One is that the global sum of baryons comes up short. The other is that each and every individual gravitationally bound object comes up short in the number of baryons it should have.

An obvious question is whether accounting for the missing baryons in individual objects helps with the global problem. The wedges in the pie chart represent what is seen, not what goes unseen. Or do they? The CGM is the hot gas around galaxies, the favored hiding place for the object-by-object missing baryon problem. Never mind the potential for double counting. Let's amp up the stars wedge by the unseen baryons indicated in red in the figure above. Just take for granted, for the moment, that these baryons are there in some form, associated in the proper ratio. We can then reevaluate the integrated sum and… still come up well short. Low mass galaxies appear to have lots of missing baryons. But they are low mass. Even when we boost their mass in this way, they still contribute little to the integral.

This is a serious problem. Is it hopeless? No. Is it easily solved? No. At a minimum, it means we have at least two flavors of dark matter: non-baryonic [cosmic] dark matter, and dark baryons. Does this confuse things immensely? Oh my yes.

## 8 thoughts on “Missing Baryons”

1. Ron Smith says: Is it possible that the missing baryons are the result of a problem with theory and not a problem with observations? After all, that seems likely with regard to DM.

2. The systematic variation with mass scale is certainly suggestive of such a situation.

3. Ron Smith says: Then, is there a chance that the two theory problems are intertwined in some manner?

4. EuroSpin says: What a great puzzle. I was left with the simple thought that BBN itself is wrong, and nothing is actually missing but new ideas for nucleosynthesis. I’m sure other people have explored this path.

5. That BBN may be wrong is certainly a logical possibility.
I expect the basic picture is correct, but I do wonder if we have the baryonic mass density right. The number I quote comes from CMB analyses. If you look at data obtained before the CMB constraints, the total density in baryons was rather smaller. Indeed, lithium remains so – a widely ignored inconvenience. So it may be that BBN is correct but our current evaluation of the baryon density is an overestimate. This would ease or perhaps even remove the missing baryon problem. See http://arxiv.org/abs/0707.3795 for a more technical discussion.

6. Ron Smith says: I had heard of the missing baryon problem before, but I had no idea it was so large. I especially did not know about the systematics you point out. I am reading your paper that you linked, but will need time to really absorb and understand it. Still, I am struck by what seems like a very strange coincidence, that missing mass is such a prevalent issue in two seemingly unrelated fields, galactic dynamics and BBN. To me it seems like there is a large probability of some common link between them. I have no idea what, however.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8503854274749756, "perplexity": 896.9442106099995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590794.69/warc/CC-MAIN-20180719090301-20180719110301-00201.warc.gz"}
https://www.physicsforums.com/threads/any-one-could.7849/
# Any one could?

1. Oct 27, 2003

### TheDestroyer

Hi Guyz, can anyone here teach me Partial Integration step-by-step? I need to know every atom in the section. I tried to understand the previously written threads but didn't. Thanks ...

2. Oct 27, 2003

### Tom Mattson Staff Emeritus

Well, my first response would be that partial differentiation is no different from ordinary differentiation, with the other variables taken to be literal constants. But I have to ask: how well do you understand ordinary differentiation?

3. Oct 27, 2003

### chroot Staff Emeritus

Or do you mean 'integration by parts?'

- Warren

4. Oct 27, 2003

### Tom Mattson Staff Emeritus

Jeez, I didn't even notice that it said "integration". I saw "partial", and my brain just filled in the rest. In that case, partial integration is usually taken to mean integration of a multivariable function over just one variable, with the others held constant. So, Destroyer, is that what you mean, or do you mean integration by parts?

5. Oct 28, 2003

### PrudensOptimus

The integral of 3x cos x dx is the problem. 3x sin x + 3 cos x + C should be the answer, based on the table.

6. Oct 29, 2003

### TheDestroyer

Yes, ... Yes, I do understand integration and differentiation very well; I know how to get the derivative of anything. But because integration is the opposite of differentiation, I'm getting some problems when trying to integrate some equations, especially since I'm not in the physics section and I need to do some research on my own, hehehe.

7. Oct 29, 2003

### TheDestroyer

I meant ... I just mean I want everything you know about integration, and I would be very thankful for that :)
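For the example raised in the thread, integration by parts with u = 3x and dv = cos x dx gives ∫3x cos x dx = 3x sin x + 3 cos x + C. A quick numerical sketch (not part of the original thread) verifies this by differentiating the antiderivative:

```python
import math

# Verify d/dx [3x sin x + 3 cos x] == 3x cos x via a central difference.
F = lambda x: 3 * x * math.sin(x) + 3 * math.cos(x)   # candidate antiderivative
f = lambda x: 3 * x * math.cos(x)                      # integrand
h = 1e-6
for x in (-2.3, 0.5, 1.7):
    numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric_derivative - f(x)) < 1e-5
print("antiderivative checks out")
```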
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9061574339866638, "perplexity": 1655.3547127524496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247513661.77/warc/CC-MAIN-20190222054002-20190222080002-00324.warc.gz"}
https://physics.stackexchange.com/questions/281642/expansion-of-elementary-particle
# Expansion of elementary particle

If you have a particle that is indivisible (e.g. an electron), we assume the forces holding it together would prevent it from expanding. If the forces holding the indivisible particle together were weaker than the effect of cosmic expansion, wouldn't the particle itself expand in volume as well?

Also, if there were a particle that lacked any internal forces, what exactly would happen? (I realize this may violate the definition of a particle, but I'm trying to understand how everything [since "things" occupy space] should expand as long as it lacks internal forces preventing this expansion.)

Now, by wave-particle duality, there is an associated wave function in position space denoting probabilities of detecting a particle somewhere. Wouldn't cosmic expansion affect (however minimally) the probability associated with detecting the wave at a particular position?

• Hi, I have never read that an elementary particle contains forces holding it together. It may well be true, but I don't think we have any experimental evidence for another type of force affecting experimental results. The size of an electron (or rather the lack of it) makes it very difficult to examine. – user108787 Sep 22, 2016 at 6:06
• Related: physics.stackexchange.com/q/2110/2451 and links therein. Sep 22, 2016 at 6:46
• @CountTo10 Thank you for the response. That makes sense. I was assuming there was some intrinsic internal structure but cannot of course yet prove that for the electron. So if something is a point particle and lacks any internal forces, would not expansion influence the very definition of that point's spatial width (i.e. $x = 0$ itself)? If it was found that point particles have a minimum spatial width, wouldn't the "point particle" expand as well in such a case? Sep 22, 2016 at 13:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368022441864014, "perplexity": 347.5222369946148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545548.56/warc/CC-MAIN-20220522125835-20220522155835-00667.warc.gz"}
http://eprint.iacr.org/2011/283
## Cryptology ePrint Archive: Report 2011/283

The Fault Attack ECDLP Revisited

Mingqiang Wang, Xiaoyun Wang and Tao Zhan

Abstract: Biehl et al. \cite{BMM} proposed a fault-based attack on elliptic curve cryptography. In this paper, we refine their fault attack method. An elliptic curve $E$ is defined over the prime field $\mathbb{F}_p$ with base point $P\in E(\mathbb{F}_p)$. Applying the fault attack to such curves, the discrete logarithm on the curve can be computed in subexponential time $L_p(1/2, 1+o(1))$. The runtime bound relies on a heuristic conjecture about smooth numbers similar to the one used in \cite{Lens}.

Category / Keywords: foundations
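For a sense of scale, the $L$-notation in the abstract, $L_p(1/2, c) = \exp\big((c + o(1))(\ln p)^{1/2}(\ln\ln p)^{1/2}\big)$, can be evaluated numerically. This is a rough sketch that ignores the $o(1)$ term; the choice of a 256-bit prime is my own illustration, not from the paper:

```python
import math

def L(p, alpha, c):
    """Subexponential L-notation: exp(c * (ln p)^alpha * (ln ln p)^(1 - alpha))."""
    lp = math.log(p)
    return math.exp(c * lp ** alpha * math.log(lp) ** (1 - alpha))

# Rough work factor of L_p(1/2, 1) for a 256-bit prime (o(1) term ignored)
p = 2 ** 256
print(math.log2(L(p, 0.5, 1)))  # ≈ 44 bits of work
```

For a 256-bit prime this comes out around $2^{44}$ operations, far below the roughly $2^{128}$ cost of generic discrete-log algorithms at that size, which is what makes a subexponential bound significant.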
http://link.springer.com/article/10.1007%2Fs00373-013-1306-z
, Volume 30, Issue 3, pp 527-547 Date: 26 Mar 2013

# On the Roots of Domination Polynomials

## Abstract

The domination polynomial of a graph G of order n is the polynomial $${D(G, x) = \sum_{i=\gamma(G)}^{n} d(G, i)x^i}$$ where d(G, i) is the number of dominating sets of G of size i, and γ(G) is the domination number of G. We investigate here domination roots, the roots of domination polynomials. We provide an explicit family of graphs whose domination roots lie in the right half-plane. We also determine the limiting curves for the domination roots of complete bipartite graphs. Finally, we prove that the closure of the set of domination roots is the entire complex plane.

This research was partially supported by grants from NSERC.
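To make the definition concrete, the coefficients d(G, i) can be computed by brute force for a small graph. This helper is my own illustration, not from the paper; it enumerates every vertex subset, so it is only usable for tiny graphs:

```python
from itertools import combinations

def domination_counts(n, edges):
    """d(G, i) for i = 0..n: the number of dominating sets of G of each size,
    computed by brute-force enumeration of all vertex subsets."""
    nbrs = {v: {v} for v in range(n)}        # closed neighbourhoods N[v]
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    counts = [0] * (n + 1)
    for i in range(n + 1):
        for S in combinations(range(n), i):
            # S is dominating iff the closed neighbourhoods of S cover all vertices
            if len(set().union(*(nbrs[v] for v in S))) == n:
                counts[i] += 1
    return counts

# Path P3 (vertices 0-1-2): D(P3, x) = x + 3x^2 + x^3
print(domination_counts(3, [(0, 1), (1, 2)]))  # [0, 1, 3, 1]
```

The returned list gives the coefficients of D(G, x) from $x^0$ up to $x^n$; the first nonzero index is the domination number γ(G).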
http://mathhelpforum.com/geometry/186967-converting-barycentric-cartesian.html
## Converting from barycentric to Cartesian

If we have a point represented in barycentric notation, how can we find its Cartesian coordinates if the Cartesian coordinates of the points relative to which the barycentre is calculated are known? E.g., for a triangle: if we know the barycentric coordinates of a point (say P) and also the Cartesian coordinates of all the triangle's vertices, how do we find the Cartesian coordinates of P? Thanks.
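For reference, the conversion is just a weighted average of the vertices: if P has barycentric coordinates $(\alpha, \beta, \gamma)$ with respect to triangle $ABC$, then in Cartesian coordinates $P = (\alpha A + \beta B + \gamma C)/(\alpha + \beta + \gamma)$; the division handles unnormalized weights. A small sketch:

```python
def barycentric_to_cartesian(bary, vertices):
    """Convert barycentric coordinates (one weight per vertex) into Cartesian
    coordinates: P = sum(w_i * V_i) / sum(w_i)."""
    total = sum(bary)
    return tuple(
        sum(w * v[i] for w, v in zip(bary, vertices)) / total
        for i in range(len(vertices[0]))
    )

# The centroid of the triangle (0,0), (3,0), (0,3) has barycentric weights (1,1,1)
print(barycentric_to_cartesian((1, 1, 1), [(0, 0), (3, 0), (0, 3)]))  # (1.0, 1.0)
```

As a sanity check, the weights (1, 0, 0) return the first vertex itself, and normalized weights (summing to 1) are unchanged by the division.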
http://www.qetlab.com/wiki/index.php?title=RandomStateVector&diff=prev&oldid=680
# RandomStateVector

*Generates a random pure state vector. Other toolboxes required: none. Related functions: RandomDensityMatrix, RandomProbabilities, RandomSuperoperator, RandomUnitary. Category: Random things.*

RandomStateVector is a function that generates a random pure state vector, uniformly distributed on the unit hypersphere (sometimes said to be uniformly distributed according to Haar measure).

## Syntax

• V = RandomStateVector(DIM)
• V = RandomStateVector(DIM,RE)
• V = RandomStateVector(DIM,RE,K)

## Argument descriptions

• DIM: The dimension of the Hilbert space in which V lives. If K > 0 (see optional arguments below) then DIM is the local dimension rather than the total dimension. If different local dimensions are desired, DIM should be a 1-by-2 vector containing the desired local dimensions.
• RE (optional, default 0): A flag (either 0 or 1) indicating that V should only have real entries (RE = 1) or that it is allowed to have complex entries (RE = 0).
• K (optional, default 0): If equal to 0 then V will be generated without considering its Schmidt rank. If K > 0 then a random pure state with Schmidt rank ≤ K will be generated (and with probability 1, its Schmidt rank will equal K). Note that when K = 1 the states on the two subsystems are generated uniformly and independently according to Haar measure on those subsystems. When K = DIM, the usual Haar measure on the total space is used. When 1 < K < DIM, a natural measure that interpolates between these two extremes is used (more specifically, the direct sum of the left (similarly, right) Schmidt vectors is chosen according to Haar measure on $\mathbb{C}^K \otimes \mathbb{C}^{DIM}$).
## Examples

### A random qubit

To generate a random qubit, use the following code:

```matlab
>> RandomStateVector(2)

ans =

  -0.1025 - 0.5498i
  -0.5518 + 0.6186i
```

If you want it to only have real entries, set RE = 1:

```matlab
>> RandomStateVector(2,1)

ans =

   -0.4487
    0.8937
```

### Random states with fixed Schmidt rank

To generate a random product qutrit-qutrit state and verify that it is indeed a product state, use the following code:

```matlab
>> v = RandomStateVector(3,0,1)

v =

   0.0400 - 0.3648i
   0.1169 - 0.0666i
   0.0465 + 0.0016i
  -0.1910 + 0.0524i
  -0.0566 - 0.0455i
  -0.0084 - 0.0236i
  -0.4407 + 0.7079i
  -0.3050 + 0.0214i
  -0.0936 - 0.0489i

>> SchmidtRank(v)

ans =

     1
```

You could create a random pure state with Schmidt rank 2 in $\mathbb{C}^3 \otimes \mathbb{C}^4$, and verify its Schmidt rank, using the following lines of code:

```matlab
>> v = RandomStateVector([3,4],0,2)

v =

  -0.2374 + 0.1984i
   0.1643 + 0.0299i
  -0.0499 + 0.0376i
  -0.0689 - 0.0005i
   0.7740 - 0.0448i
  -0.1290 - 0.2224i
  -0.0514 - 0.1565i
   0.2195 + 0.2478i
  -0.1636 + 0.1276i
   0.0581 + 0.0608i
   0.0482 - 0.0178i
  -0.1050 + 0.0014i

>> SchmidtRank(v,[3,4])

ans =

     2
```

## Source code

The MATLAB source code for this function:

```matlab
%% RANDOMSTATEVECTOR Generates a random pure state vector
% This function has one required argument:
%   DIM: the dimension of the Hilbert space that the pure state lives in
%
% V = RandomStateVector(DIM) generates a DIM-dimensional state vector,
% uniformly distributed on the (DIM-1)-sphere. Equivalently, these pure
% states are uniformly distributed according to Haar measure.
%
% This function has two optional input arguments:
%   RE (default 0)
%   K (default 0)
%
% V = RandomStateVector(DIM,RE,K) generates a random pure state vector as
% above. If RE=1 then all coordinates of V will be real. If K=0 then a
% pure state is generated without considering its Schmidt rank at all. If
% K>0 then a random bipartite pure state with Schmidt rank <= K is
% generated (and with probability 1, the Schmidt rank will equal K). If
% K>0 then DIM is no longer the dimension of the space on which V lives,
% but rather is the dimension of the *local* systems on which V lives. If
% these two systems have unequal dimension, you can specify them both by
% making DIM a 1-by-2 vector containing the two dimensions.
%
% URL: http://www.qetlab.com/RandomStateVector

% requires: iden.m, MaxEntangled.m, opt_args.m, PermuteSystems.m, Swap.m
% author: Nathaniel Johnston ([email protected])
% package: QETLAB
% last updated: November 12, 2014

function v = RandomStateVector(dim,varargin)

% set optional argument defaults: re=0, k=0
[re,k] = opt_args({ 0, 0 },varargin{:});

if(k > 0 && k < min(dim)) % Schmidt rank plays a role
    % allow the user to enter a single number for dim
    if(length(dim) == 1)
        dim = [dim,dim];
    end

    % if you start with a separable state on a larger space and multiply
    % the extra k dimensions by a maximally entangled state, you get a
    % Schmidt rank <= k state
    psi = MaxEntangled(k,1,0);
    a = randn(dim(1)*k,1);
    b = randn(dim(2)*k,1);
    if(~re)
        a = a + 1i*randn(dim(1)*k,1);
        b = b + 1i*randn(dim(2)*k,1);
    end
    v = kron(psi',speye(prod(dim)))*Swap(kron(a,b),[2,3],[k,dim(1),k,dim(2)]);
    v = v/norm(v);
else % Schmidt rank is full, so ignore it
    v = randn(dim,1);
    if(~re)
        v = v + 1i*randn(dim,1);
    end
    v = v/norm(v);
end
```
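For readers without MATLAB, the full-Schmidt-rank branch of the function above (draw i.i.d. Gaussian entries, then normalize) is easy to mirror in plain Python. This sketch covers only the DIM and RE arguments, not the Schmidt-rank machinery:

```python
import math
import random

def random_state_vector(dim, re=False, seed=None):
    """Haar-uniform pure state: i.i.d. (complex) Gaussian entries, normalized.
    Mirrors the full-Schmidt-rank branch of QETLAB's RandomStateVector."""
    rnd = random.Random(seed)
    if re:
        v = [rnd.gauss(0, 1) for _ in range(dim)]
    else:
        v = [complex(rnd.gauss(0, 1), rnd.gauss(0, 1)) for _ in range(dim)]
    norm = math.sqrt(sum(abs(z) ** 2 for z in v))
    return [z / norm for z in v]

v = random_state_vector(2)
print(sum(abs(z) ** 2 for z in v))  # 1.0 up to rounding
```

Normalizing a Gaussian vector gives the uniform distribution on the sphere because the Gaussian density is rotation-invariant, which is exactly why the MATLAB code is so short.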
https://www.physicsforums.com/threads/xenos-paradox.3848/
1. Jul 13, 2003

### sage

You have heard this before, perhaps. It's about a runner trying to run d metres. He covers d/2 in t1 seconds, then half of the distance that is left in t2 seconds, then half of the rest in t3 seconds, and so on. As there is always a finite distance left, according to the paradox he can never cover d metres. So how does he do it?

2. Jul 13, 2003

### Hurkyl Staff Emeritus

This infinite sequence of actions can be accomplished in finite time, so he does them all and then keeps going.

3. Jul 13, 2003

### AndersHermansson

We had this over at sciforums recently. The short answer is that the sum of an infinite series can be finite, which is where it might seem confusing: if you add an infinite number of lengths, the total length can still be finite. The original question simply assumes it is not so.

4. Jul 13, 2003

### quartodeciman

"There is always a finite distance left" really means "there is, for any time before d/v (with v being the speed of the runner), a finite distance left".

5. Jul 14, 2003

### sage

Yes, this infinite sequence converges. But the point is that if we go on adding the successive elements of the sequence one by one (as must be done here), we never reach the end of the sequence, precisely because it is infinite. As we cannot reach the end of the sequence, we cannot cover this finite distance in the calculated finite time. Consider the finite time interval between the n-th second and the (n+1)-th second. First half a second passes by, then another 1/4th, then another 1/8th, and so on. Another infinite sequence converging at the limit, but that limit can never be attained. That is the problem.

6. Jul 14, 2003

### Hurkyl Staff Emeritus

But why should one think that sequence of events covers the entire range of motion? Try this transfinite sequence: Cover half the distance. Cover half of what's left. Cover half of what's left. ... (countably infinite repetitions) ... Arrive at the destination.
Each step in the sequence picks up right where you left off if you perform all previous steps, includes the "Zeno sequence", and continues on afterwards to arrive at the destination.

7. Jul 14, 2003

### drnihili

You have to be quite clear on what the question is. If you take Zeno to merely be asking how an infinite sequence can occupy a finite space, then calculus indeed answers the question. However, if you take him to be asking how one can complete an infinite sequence one member at a time, then calculus not only doesn't answer the puzzle but is entirely irrelevant to it. I think the latter question is the better way to understand the point of the paradox. There are a host of related paradoxes which highlight the central issues. Sometimes it helps to look at them instead of just the runner paradox.

8. Jul 14, 2003

### drnihili

Ah, but this sequence can't be right. It presumes that after you've completed all the half distances you still have to do something further to arrive. If your sequence were correct, it would be possible to travel all the distances and yet still fail to arrive. But arriving cannot amount to traversing a distance, or you give up the continuity of the reals. So on your account two runners could traverse precisely the same distance and yet one of them would run d meters and the other wouldn't.

9. Jul 14, 2003

### Hurkyl Staff Emeritus

Covering all of the half distances means covering the interval [0, d). If I run 1 meter per second, I cover all the half distances over the time interval [0, d). You actually have to get to time d to have arrived at distance d. Zeno's paradox is a paradox because it presumes that you can't continue beyond the infinite sequence of covering half distances. By continuity, any possible continuation of motion would have to include being at distance d at time d.

Last edited: Jul 14, 2003

10. Jul 17, 2003

### drnihili

The problem is that the open and closed intervals have the same distance.
Closing the interval does not add any distance. Continuity comes in because the LUB of the two intervals is the same. If the runner really has completed all of the open intervals, he must have arrived at d. Suppose otherwise, i.e. that the runner has completed [0, d) but has not yet arrived at d. Call the runner's position r. r must be between the open interval and d. But this contradicts the fact that d is the least upper bound of the interval. So if r < d, then r must be in the open interval. But if r is in the open interval, then the runner has not yet completed the interval. This is because for every point in the interval there are infinitely many other points beyond it that are still in the interval. So r cannot be in the interval. Thus the earliest point which can be r is d. And the paradox isn't that you can't continue beyond the open interval, it's that you can't complete the interval at all.

Last edited: Jul 17, 2003

11. Jul 17, 2003

### Hurkyl Staff Emeritus

I'm aware the lengths of [0, d) and [0, d] are the same. Anyway, a paradox is typically a contradiction that arises from an unfounded assumption. Paradoxes usually get cleared up once you try to do everything rigorously. So tell me, as precisely as possible, what you think the problem is.

12. Jul 17, 2003

### drnihili

Well, I don't think I agree with your view of what a paradox is, but we'll leave the general theory of paradox for another thread. The paradox in this case is that the runner, Achilles, must accomplish an infinite sequence of tasks. We know that he can complete them; we can even calculate precisely by when he will have completed them. The problem is in explaining how he completes them. Achilles starts out with an infinite number of tasks to do. By the description of the problem, he must complete them one at a time. After he has accomplished his first task, there are an infinite number of tasks left. After he completes his second task, there are an infinite number of tasks left.
In fact, after each task that he completes, there's always an infinite number left. As he moves down his list of tasks, he never gets any closer to the end of it. He always has just as many left to do as he started out with. As long as he is still working on the list, he has infinitely many left. The first point at which he has fewer than infinitely many tasks left is when he is all done, and at that point he has zero. He never decreases his list; he just suddenly finds that it is already done. So how is it that he manages to get to the end? Geometry can predict the point at which Achilles will be done. Calculus can explain how it is that all the decreasing segments have a finite sum. But neither of them explains how it is that Achilles counts through the list, one task at a time - how he manages to complete an endless sequence.

13. Jul 17, 2003

### Hurkyl Staff Emeritus

You still haven't answered the big question: why should an infinite sequence of tasks be impossible? In particular (if I'm predicting your response correctly), why should every task in a sequence of tasks have a previous and a next task? (Except, of course, for the first and last task, should they exist.)

14. Jul 17, 2003

### drnihili

Because there's a function that, given any task in the sequence, returns the next task, and another function that returns the previous one. If you take an ordering that lacks that property, it gets even more difficult. But Zeno's ordering does have the property.

15. Jul 18, 2003

### Hurkyl Staff Emeritus

But why should an infinite series of tasks be impossible? The response I was anticipating was something equivalent to saying that in my sequence of tasks, there is no task previous to "arrive at d". (It is equivalent to say that there is no last task in Zeno's sequence.)

16. Jul 22, 2003

### drnihili

That response doesn't quite get it right. I've tried to explain it a couple of times, but I'll have another go at it.
If Achilles accomplishes an infinite series of tasks, there must be some action of his which counts as completing all the tasks. But none of the tasks can be that action, as each of the tasks leaves an infinite number remaining. So, if Achilles accomplishes all the tasks, then there must be something he does beyond the tasks themselves in virtue of which he can be said to have completed them all. By the description of the problem, there is no such action. If there were such an action, then it would be theoretically possible for Achilles to accomplish each of the tasks and yet still fail to complete all of them. This is absurd. Hence there can be no such action.

17. Jul 22, 2003

### Hurkyl Staff Emeritus

For the problem at hand, there must be some task which counts as the completion of all (previous) tasks, though this isn't always the case. But the question is why must that task be one of the infinite series of tasks? Continuity (and completeness) guarantees that there must be a unique limiting event, but it does not guarantee that the unique limiting event must be one of the members of the infinite sequence. In particular, the limiting task is the "arrive at destination" step I listed.

18. Jul 22, 2003

### drnihili

Obviously it can't be one of the listed tasks. But your proposal is no solution. What exactly does one do to arrive at the destination, and when does one do it? Do you really mean to imply that one might complete each of the tasks and still not arrive at the destination?

19. Jul 22, 2003

### Hurkyl Staff Emeritus

One traverses the position interval [0, d) over the time interval [0, d). That is sufficient to be at position d at time d. (I'm assuming the traversal is in the manner being discussed.) I mean to imply that one does not reach the destination during the time interval in which one is performing Zeno's tasks; in this case, the time interval [0, d). One arrives at the destination at time d, after all of Zeno's tasks have been completed.

20.
Jul 22, 2003

### drnihili

Here you've essentially said that completing all the tasks is sufficient for arrival. But you haven't said how that is accomplished. I agree that it's sufficient; that's not the issue. The issue is saying how it is done. This can't be right. One doesn't first complete the tasks and then arrive. If that were the case then there would have to be a moment in between finishing the tasks and arriving (given infinite divisibility). But that would contradict what you said above about completing the tasks being sufficient for arriving. Arriving can't be separate from completing all the tasks. It can't occur after completing them, nor can it occur before completing them. It has to occur simultaneously with completing them. But this still leaves the problem of saying what it means to complete an endless sequence.
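The convergence that the early replies appeal to is easy to check numerically: summing the half-distances, and the times they take at constant speed, gives partial sums that approach d and d/v, so the infinitely many "tasks" fit into a finite time interval. A quick sketch for d = 1 metre at v = 1 m/s (my own illustration of the arithmetic, not a resolution of the philosophical question debated above):

```python
# Zeno's halving sequence for d = 1 metre at constant speed v = 1 m/s:
# partial sums of the distances d/2 + d/4 + ... and of the times they take.
d, v = 1.0, 1.0
pos = elapsed = 0.0
for n in range(1, 51):
    step = d / 2 ** n        # the n-th half-distance
    pos += step
    elapsed += step / v      # time spent covering that step
print(pos, elapsed)          # both approach 1.0 (within 2**-50 after 50 steps)
```

After n steps the runner is at d(1 - 2^-n) at time (d/v)(1 - 2^-n), which is exactly the "[0, d) over [0, d)" picture in the thread.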
https://www.studyform.com/auburn/MATH1620/finalp1-fa15
#### MATH 1620

##### Final - Practice 1 | Fall '15

1. Find the area bounded by the curve $y = \sqrt{4 - x}$, the $x$-axis and the $y$-axis.
2. Find the area bounded by the $x$-axis and the curve $y = \sin(x)$ on the interval $0 \le x \le \pi$.
3. Use the disc/washer method to find the volume of the solid formed by rotating the region enclosed by the lines $y = 1, x = 0$ and the curve $y = x^3$ around the $x$-axis.
4. Use the shell method to compute the volume of the region formed by rotating the triangle with vertices $(0, 0); (1, 1); (0, 1)$ around the line $x = 1$.
5. Find the volume formed by rotating the region bounded by $y = 2x, x = 1$ and the $x$-axis around the $y$-axis.
6. Find the volume formed by rotating the region bounded by $y = e^x, x = 1$, the $x$-axis and the $y$-axis around the $x$-axis.
7. A $10$ meter chain with mass $100$ kg is suspended vertically from a platform. Use an integral to compute how much work is done lifting the chain onto the platform.
8. A leaky bucket weighs $100$ lb when full of water. Suppose water leaks at a rate of $1$ lb per second, and the bucket is lifted at a rate of $2$ ft per second. Write an integral computing the work required to lift the bucket $50$ ft, assuming it is full to start.
9. If $1$ lb of force extends a spring $3$ inches beyond rest length, how much work would be done extending it $6$ inches beyond rest length? Give your answer in foot-pounds. You must show an appropriate integral and the correct answer for full credit.
10. Find the area of the surface formed by rotating the line $y = 2 - x$, $x = 0$ to $x = 1$, around the $x$-axis.
11. Use integration to find the length of the curve $y = \sqrt{1 - x^2}$, $x = 0$ to $x = \frac{1}{2}$.
12. Use the Pappus Theorem to find the volume of the solid formed by rotating the diamond-shaped region with corners $(1, 0); (0, 1); (-1, 0); (0, -1)$ around the line $x = 2$.
13. Compute $\int x \sec^2(x)\, dx$
14. Compute $\int_0^{\frac{\pi}{4}} \tan^2(x)\, dx$
15. Compute $\int \sqrt{1 - x^2}\, dx$
16. Compute $\int \frac{x^3 + 2x^2 + 1}{x^4 + x^2}\, dx$
17. Compute $\int_0^1 xe^x \, dx$
18. Compute $\int \frac{2x + 1}{x^2 + x}\, dx$
19. Compute $\int x^3 \sqrt{x^2 + 1}\, dx$
20. Find the limit of the sequence $a_n = (1 - \frac{1}{n})^n$. Does the sequence $b_n = (-1)^n a_n$ converge or diverge? Give a reason for your answer.
21. Write the repeating decimal $.\overline{5}$ (this means $.555555...$, no end to the $5$'s) as a geometric series. Use the geometric sum formula to find a rational number equal to this repeating decimal.
22. Determine if the series converges or diverges. If it converges, find its sum. If it diverges, state why. $\sum_{n=0}^{\infty}(-1)^n \frac{3^n}{2^{n+1}}$
23. Apply the integral test to the series $\sum_{n=1}^{\infty} \frac{2}{n(n + 1)}$. The associated improper integral must be written and solved correctly. State the conclusion obtained.
24. Determine if the series converges or diverges. Give reasons for your answer. $\sum_{n=1}^{\infty} \frac{n^2 + n + 3}{2n^3 + 2n - 1}$
25. Determine if the series converges or diverges. Give reasons for your answer. $\sum_{n=1}^{\infty} \frac{1}{\sqrt{n^3}}$
26. Determine if the series converges or diverges. Give reasons for your answer. $\sum_{n=1}^{\infty} \frac{1}{2^n - 1}$
27. Determine if the series $\sum_{n=1}^{\infty} \sqrt[n]{n}$ converges or diverges.
28. Determine if the series $\sum_{n=1}^{\infty} \frac{(-1)^n n^2}{n!}$ converges absolutely, converges conditionally or diverges.
29. Determine if the series $\sum_{n=2}^{\infty} \frac{(-1)^n}{\ln n}$ converges absolutely, converges conditionally or diverges.
30. Determine the radius and interval of convergence of the power series $\sum_{n=1}^{\infty} \frac{3^n(x - 1)^n}{n + 1}$.
31. Determine the radius of convergence of the power series $\sum_{n=1}^{\infty} n^n x^n$.
32. Determine the radius of convergence of the power series $\sum_{n=1}^{\infty} \frac{2^n x^n}{n!}$.
33. Suppose that the power series $\sum_{n=0}^{\infty} a_n x^n$ is convergent when $x = $ and divergent when $x = 6$. Is the series convergent when $x = 3$? When $x = -7$? When $x = 5$? Explain.
34. Use the geometric sum formula to find a power series, with radius of convergence, that converges to $f(x) = \frac{x}{2 + x}$.
35. Find the $2^{nd}$ degree Taylor polynomial of $f(x) = \sqrt[3]{x}$ expanded at $a = 8$.
36. Use the Maclaurin series for $e^x$ to find a power series converging to an antiderivative of $e^{-x^2}$.
37. Let $\mathbf{u} = \mathbf{i} - \mathbf{j} + 2\mathbf{k}$ and $\mathbf{v} = 2\mathbf{i} + \mathbf{j} + \mathbf{k}$. Find the vector projection of $\mathbf{u}$ onto $\mathbf{v}$ and the vector component of $\mathbf{u}$ orthogonal to $\mathbf{v}$.
38. Find a parametric form of the line through the points $P = (1, 2, -1)$ and $Q = (2, 1, 3)$.
39. Find an equation of the plane containing the point $P = (1, 2, -1)$ and the line $l(t) = (1 + t, 1 - 3t, 2 + t), -\infty < t < \infty$.
40. Find the point of intersection of the line $l(t) = (1 + t, 1 - 3t, 2 + t), -\infty < t < \infty$, with the plane $x + y + z = 1$.
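As a sanity check on problem 21: the repeating decimal is the geometric series $\sum_{k\ge 1} 5/10^k$ with first term $a = 5/10$ and ratio $r = 1/10$, so the geometric sum formula gives $a/(1-r) = 5/9$. A quick verification in exact arithmetic:

```python
from fractions import Fraction

# 0.555... = sum_{k >= 1} 5/10^k: a geometric series with first term
# a = 5/10 and ratio r = 1/10, so the sum formula gives a / (1 - r).
a, r = Fraction(5, 10), Fraction(1, 10)
closed_form = a / (1 - r)
partial = sum(Fraction(5, 10 ** k) for k in range(1, 30))
print(closed_form)     # 5/9
print(float(partial))  # 0.555... (the partial sums approach 5/9)
```

The same $a/(1-r)$ formula is what problems 22 and 34 rely on.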
https://iwaponline.com/wst/article-abstract/27/3-4/401/4814/Comparative-Survival-of-E-Coli-F-Bacteriophages?redirectedFrom=fulltext
This study was designed to compare the die-off of E. coli and F+ bacteriophages with that of enteric pathogenic viruses in groundwater and raw wastewater at various temperatures. At low temperatures, the die-off of E. coli was greater than that of HAV and poliovirus 1. Under conditions compatible with bacterial growth, no die-off of E. coli was observed. Under most experimental conditions no die-off was observed for F+ bacteriophages. The survival of HAV and poliovirus 1 was strongly affected by temperature. Regardless of the water type, the highest die-off of viruses was observed at 30°C, whereas at 10°C the titer of HAV and poliovirus 1 was reduced by 1 to 2 log10 after 90 days of incubation. The data presented in this study indicate that E. coli cannot serve as an index for the survival of HAV and poliovirus 1 in ground and wastewater. Since F+ bacteriophages were not affected by the tested conditions, their acceptance as indicators for viral pollution of water sources needs further evaluation.
https://math.stackexchange.com/questions/1598270/dividing-a-unit-square-into-rectangles
# Dividing a unit square into rectangles

A unit square is cut into rectangles. Each of them is coloured either yellow or blue, and inside it a number is written. If the colour of the rectangle is blue, its number is equal to the rectangle's width divided by its height. If the colour is yellow, the number is the rectangle's height divided by its width. Let $x$ be the sum of the numbers in all rectangles. Assuming the blue area is equal to the yellow one, what is the smallest possible $x$?

I came up with the solution below: I simply split the unit square in half and assigned the colours. The reasoning behind that is that I want the blue side to be as tall as possible (to make $x$ as low as possible) and the yellow side as wide as possible (for the same reason). I didn't divide the square into rectangles with infinitely small height or width, because no matter how small they are, they eventually add up and form the two big rectangles that are in my picture. I feel my solution is wrong though, because it is stupidly easy (you have to admit, that often means it's wrong). Is there anything I'm missing here?

• Even if it is the right solution (it may or may not be; I'm not sure), the real challenge is in proving it is right. That's not necessarily "stupidly easy" even if the configuration is simple. – hmakholm left over Monica Jan 3 '16 at 13:06
• Oh, didn't even think about that. I don't have any idea about how I should prove it, but at least now it feels more challenging. – Eugleo Jan 3 '16 at 13:11
• It's easy to see that the sum of the yellow numbers is greater than $1/2$, and the same for the blue numbers; so the total sum will always be greater than $1$. However, I have the feeling that one of the two sums will always be greater than $2$, but I don't know how to prove it yet. – mrprottolo Jan 3 '16 at 20:48
• There's a connection b/w rectangular tilings and current flow on planar electrical networks; the ratio plays the role of conductivity...
– DVD Jan 6 '16 at 1:05
• Just to make sure I understand the problem correctly -- do the "width" and "height" of the rectangles denote their horizontal and vertical size? (I initially interpreted them as the "smaller" and "greater" dimension, irrespective of the rectangle's orientation... which changes the problem quite a bit :-) ). – Peter Košinár Jul 27 '16 at 15:51

Here is the full solution. The answer is, indeed, $$5/2$$. An example was already presented by the OP. Now we need to prove the inequality.

First of all, we notice that for either color (blue or yellow) the sum of height/width (or width/height) ratios is always at least $$1/2$$. Indeed, since all dimensions do not exceed $$1$$, we have (e.g., for the blue color) $$\sum \frac{w_i}{h_i} \geqslant \sum w_i h_i = \frac 12 \,,$$ as the final sum is the total area of all blue rectangles.

Second, we observe that either the blue rectangles connect the left and the right sides of the square, or the yellow rectangles connect the top and the bottom sides. We leave that as an exercise for the readers :) (Actually, as you will see below, it would suffice to show that either the sum of all blue widths or the sum of all yellow heights is at least $$1$$.)

Without loss of generality, assume that the blue rectangles connect the lateral sides of the large square. Then we intend to prove that $$\sum \frac{w_i}{h_i} \geqslant 2 \,,$$ where the summation is done over the blue rectangles. Combining that with the inequality $$\sum h_i/w_i \geqslant 1/2$$ for the yellow rectangles, we will have the required result, namely that the overall sum is always at least $$5/2$$.

Since the projections of the blue rectangles onto the bottom side must cover it completely, we have $$\sum w_i \geqslant 1$$. We also have $$\sum w_ih_i = 1/2$$. Now all we need is the following fact.

Lemma. Two finite sequences of positive numbers $$\{w_i\}$$ and $$\{h_i\}$$, $$i = 1$$, ...
, $$n$$, are such that $$\sum w_i = W, \qquad \sum w_ih_i = S \,.$$ Then $$\sum \frac{w_i}{h_i} \geqslant \frac{W^2}S \,.$$

Proof. We will use the well-known Jensen's inequality (which follows from the convexity of the region above the graph of any convex function) for the function $$f(x) = 1/x$$. That gives us $$\sum \frac{w_i}W f(h_i) \geqslant f \left( \sum \frac{w_i}W h_i \right) \,.$$ In other words, $$\frac1W \sum \frac{w_i}{h_i} \geqslant \frac1{\sum \frac{w_i}W h_i } = \frac{W}{\sum w_i h_i} = \frac WS \,,$$ and the required inequality immediately follows. $$\square$$

Applying this lemma to our case, where $$W \geqslant 1$$ and $$S = 1/2$$, completes our solution.

• Thanks! I almost forgot I asked this question 4 years ago. – Eugleo Sep 18 '20 at 22:13

## Why does this assumption seem to be what the OP had in mind?

Because his "solution" implicitly takes this to be true. If the OP had the $X$ and $Y$ directions in mind, he would have written the same number on both of the rectangles in his solution.

Let the number of blue rectangles be $p$ and the number of yellow rectangles be $q$. The number written inside a blue rectangle will always be greater than or equal to $1$. Hence, $$x \geq p$$

You have already found a solution for which $x=2.5$. Also, this is the best solution for the case $p=1$. Hence we need only examine the case $p=2$, since for $p>2$, $x$ will be greater than $2.5$.

## Case 1: $q$ is $1$

This means the unit square is divided into $3$ rectangles. The only way in which we can divide a square into 3 rectangles is given below, with the three rectangles named accordingly. Notice that there will be two more sub-cases: one where the $q$ rectangle is upright, and another where the $q$ rectangle is either the one named or $III$. $q$ cannot be $II$, since then its area would always be less than $0.5$. Note that $b \leq 0.5$ in the above image.

Case 1.1: the $q$ rectangle is upright. Here it is trivial that $a=0.5$ (by equating the areas).
Hence, $x= 0.5 + \frac{1}{2b} + \frac{1-b}{\frac{1}{2}} = 2.5 + \frac{1}{2b} - 2b$

Hence this case is proved, since $b \leq 0.5$ gives $\frac{1}{2b} \geq 2b$, and so $x \geq 2.5$. In fact, we get another possible solution candidate when $b=0.5$.

Case 2: $q$ is $III$. By equating the area of the blue rectangles to $0.5$, we get $$a+b-ab=0.5$$ or, $$a=\frac{0.5-b}{1-b}$$

The number written on rectangle $I$ will be $\frac{1}{a}$. The number written on rectangle $II$ will depend on whether $1-a$ or $b$ is greater. Keep in mind that $b \leq 0.5$. This graph tells us that $1-a \geq b$. Hence, the number written on rectangle $II$ will be $\frac{1-a}{b}$. The number on rectangle $III$ will again depend on which dimension is greater. This graph tells us that $1-a \geq 1-b$ when $1-b \leq 0.709$, i.e. when $0.2929 < b < 0.5$. In such a case, the number written will be $\frac{1-b}{1-a}$. This graph tells us that in such a case $x \geq 6.518$. Not a problem! When $0 < b < 0.2929$, the number written will be $\frac{1-a}{1-b}$. This graph tells us that in such a case $x \geq 6.775$. Not a problem again!

Hence all the cases when $p=2$ and $q=1$ are proved! I am trying to prove this for higher values of $q$; any ideas, anyone?

• I think your interpretation of width and height is not correct. Width refers to the $x$ direction and height refers to the $y$ direction for all rectangles. – Ross Millikan Jul 27 '17 at 19:04
• The numbers written in the two rectangles are $\frac X Y$ for the blue and $\frac Y X$ for the yellow, as they should be. Your interpretation is incorrect. – Jens Jul 28 '17 at 19:31

We cannot assume that height and width refer to the longer and shorter sides. Here is a counterexample. Let us denote our unit square by $$K$$. Consider a square with dimensions $$\frac1{\sqrt2} \times \frac1{\sqrt2}$$ with its upper left corner coinciding with the upper left corner of $$K$$, and paint it blue.
Split the remaining part of $$K$$ into two yellow rectangles: one of them with width $$1$$ and height $$1-\frac1{\sqrt2}$$, whose bottom side coincides with the bottom side of $$K$$, and the other, sharing its top right corner with the corresponding corner of $$K$$, with width $$1-\frac1{\sqrt2}$$ and height $$\frac1{\sqrt2}$$.

Then the longer/shorter ratio for the blue square is $$1$$, and the sum of the shorter/longer ratios for the yellow rectangles is $$\frac{1-\frac1{\sqrt2}}{1} + \frac{1-\frac1{\sqrt2}}{\frac1{\sqrt2}} = 1-\frac1{\sqrt2} + \sqrt{2}-1 = \frac1{\sqrt2} \approx 0.707...$$

Thus, the overall sum of the numbers would be 1.707..., which is quite noticeably less than $$2.5$$. However, if we follow the given conditions exactly (blue = width/height, yellow = height/width), then we obtain the following sum: $$1 + \left( 1-\frac1{\sqrt2} + \frac{\frac1{\sqrt2}}{1-\frac1{\sqrt2}} \right) = 3 + \frac1{\sqrt2} \approx 3.707... > 2.5$$

• However, the following is true: if we label each blue rectangle with its longer/shorter ratio, and every yellow rectangle with its shorter/longer ratio, then the sum of all numbers is at least $b + 0.5$, where $b$ is the number of blue rectangles. That can be easily proved by considering the areas of the yellow rectangles (the sum of the numbers for the blue rectangles is obviously always at least $b$). – JimT Sep 7 '20 at 22:55
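As a quick numerical sanity check (not part of the original thread), the three configurations discussed above can be scored in a few lines of Python:

```python
import math

# The OP's configuration: a blue top half and a yellow bottom half,
# each of width 1 and height 1/2, stored as (width, height) pairs.
blue = [(1.0, 0.5)]
yellow = [(1.0, 0.5)]
x = sum(w / h for w, h in blue) + sum(h / w for w, h in yellow)
print(x)  # 2.5, matching the claimed optimum 5/2

# The counterexample tiling from the last answer: a blue square of side
# 1/sqrt(2) plus two yellow rectangles, scored first under the
# longer/shorter misinterpretation, then under the actual rules.
s = 1 / math.sqrt(2)
pieces = [(s, s, "blue"), (1.0, 1.0 - s, "yellow"), (1.0 - s, s, "yellow")]

mis_score = sum((max(w, h) / min(w, h)) if c == "blue"
                else (min(w, h) / max(w, h)) for w, h, c in pieces)
print(round(mis_score, 3))  # ~1.707, below 5/2

exact_score = sum((w / h) if c == "blue" else (h / w) for w, h, c in pieces)
print(round(exact_score, 3))  # ~3.707, above 5/2 as expected
```

This confirms numerically that the 5/2 bound only holds under the width/height reading of the problem, exactly as the last answer argues.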
http://www.maths.ox.ac.uk/node/11390
# Equivariant properties of symmetric products

2 June 2014, 15:30
Stefan Schwede

Abstract

The filtration on the infinite symmetric product of spheres by number of factors provides a sequence of spectra between the sphere spectrum and the integral Eilenberg-Mac Lane spectrum. This filtration has received a lot of attention, and the subquotients are interesting stable homotopy types. In this talk I will discuss the equivariant stable homotopy types, for finite groups, obtained from this filtration for the infinite symmetric product of representation spheres. The filtration is more complicated than in the non-equivariant case, and already on the zeroth homotopy groups an interesting filtration of the augmentation ideal of the Burnside rings arises. Our method is "global" homotopy theory, i.e., we study the simultaneous behaviour for all finite groups at once. In this context, the equivariant subquotients are no longer rationally trivial, nor even concentrated in dimension 0.

• Topology Seminar
https://www.ncbi.nlm.nih.gov/pubmed/15086305
J Vis. 2004 Mar 12;4(3):144-55.

# Pattern motion integration in infants.

### Author information

1 Department of Psychology, University of California San Diego, La Jolla, CA, USA. [email protected]

### Abstract

To investigate the development of motion integration in infants, we used an eye movement technique to measure subjects' ability to track leftward versus rightward pattern motion in a stimulus consisting of a field of spatially segregated moving gratings. Each grating moved in one of two oblique directions, with the two directions interleaved across the display. When spatially integrated, pattern motion for these paired component motions was either rightward or leftward. To control for the possibility that horizontal eye movements elicited by this stimulus were due to the horizontal motion vector present in each obliquely moving grating, we also measured responses to a field where every grating moved in the same oblique direction. The difference in performance between the integration stimulus and this control stimulus was taken as a measure of integration. Data from 2-, 3-, 4-, and 5-month-old infants revealed significant motion integration, suggesting that higher order motion areas, such as the middle temporal area (MT) may develop at a relatively early age. In addition, the integration effect decreased consistently and significantly with age (p <.005), suggesting a reduction in the spatial extent of motion integration over the course of development.

PMID: 15086305 DOI: 10.1167/4.3.2 [Indexed for MEDLINE]
https://export.arxiv.org/abs/2005.04371?context=math.DS
math.DS (what is this?)

# Title: Diophantine approximation by negative continued fraction

Authors: Hiroaki Ito

Abstract: We show that the growth rate of the denominator $Q_n$ of the $n$-th convergent of the negative continued fraction expansion of $x$ and the rate of approximation satisfy $$\frac{\log{n}}{n}\log{\left|x-\frac{P_n}{Q_n}\right|}\rightarrow -\frac{\pi^2}{3} \quad \text{in measure}$$ for a.e. $x$. In the course of the proof, we reprove the known, inspiring results that the arithmetic mean of the digits of the negative continued fraction converges to 3 in measure, although the limit inferior is 2 and the limit superior is infinite almost everywhere.

Comments: 8 pages
Subjects: Dynamical Systems (math.DS); Number Theory (math.NT)
Cite as: arXiv:2005.04371 [math.DS] (or arXiv:2005.04371v1 [math.DS] for this version)

## Submission history

From: Hiroaki Ito [view email] [v1] Sat, 9 May 2020 05:39:39 GMT (84kb)

Link back to: arXiv, form interface, contact.
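The negative ("minus", or backwards) continued fraction expansion referred to in the abstract writes $x \in (0,1)$ as $x = 1/(a_1 - 1/(a_2 - \cdots))$ with integer digits $a_i \geq 2$. The sketch below implements the digit map and the convergents $P_n/Q_n$; the recurrence and initial values are the standard ones for minus continued fractions (stated from general theory, not taken from the paper):

```python
import math

def ncf_digits(x, n):
    """First n digits of the negative continued fraction of x in (0, 1):
    a = ceil(1/x), then x <- a - 1/x, repeated (stops early for rationals)."""
    digits = []
    for _ in range(n):
        if x == 0:          # rational input: expansion has terminated
            break
        a = math.ceil(1 / x)
        digits.append(a)
        x = a - 1 / x
    return digits

def ncf_convergents(digits):
    """Convergents P_n/Q_n via P_n = a_n*P_{n-1} - P_{n-2} (same for Q),
    with P_{-1} = -1, P_0 = 0, Q_{-1} = 0, Q_0 = 1."""
    p_prev, p = -1, 0
    q_prev, q = 0, 1
    out = []
    for a in digits:
        p_prev, p = p, a * p - p_prev
        q_prev, q = q, a * q - q_prev
        out.append((p, q))
    return out

x = math.sqrt(2) - 1
ds = ncf_digits(x, 12)
print(ds)                               # every digit is >= 2
for p, q in ncf_convergents(ds)[-3:]:
    print(p, q, abs(x - p / q))        # errors shrink rapidly
```

For example, `ncf_digits(3/7, 10)` terminates with digits `[3, 2, 2]`, since $3/7 = 1/(3 - 1/(2 - 1/2))$.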
https://byjus.com/volume-of-a-rectangular-prism-formula
# Volume of a Rectangular Prism Formula

A prism that has 2 parallel rectangular bases and 4 rectangular faces is a rectangular prism. The mathematical literature refers to any polyhedron like this as a cuboid. It has 6 flat rectangular faces, and all of its angles are right angles. Other names for such a rectangular prism are rectangular hexahedron, rectangular parallelepiped, and right rectangular prism.

The volume of a rectangular prism formula is,

$\large Volume\;of\;a\;Rectangular\;Prism=lbh$

Where,
b – base length of the rectangular prism.
l – base width of the rectangular prism.
h – height of the rectangular prism.

### Volume of a Rectangular Prism Problems

Question: Given the base length, base width and height of a rectangular prism as 5 cm, 8 cm and 16 cm respectively, find the volume of the rectangular prism.

Solution:

Given,
b = 5 cm
l = 8 cm
h = 16 cm

Using the volume of a rectangular prism formula,

$Volume\;of\;a\;Rectangular\;Prism=lbh$
$=5\times8\times16$
$=640\,cm^{3}$
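The worked example above can be reproduced with a tiny Python helper (the function name is my own choice, not part of the page):

```python
def rectangular_prism_volume(l: float, b: float, h: float) -> float:
    """Volume of a rectangular prism (cuboid): V = l * b * h."""
    return l * b * h

# The worked example: base length 5 cm, base width 8 cm, height 16 cm
print(rectangular_prism_volume(l=8, b=5, h=16))  # 640 (cubic cm)
```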
http://www.mathworks.com/help/dsp/ref/chirp.html?nocookie=true
# Chirp

Generate swept-frequency cosine (chirp) signal

Library: Sources (dspsrcs4)

## Description

The Chirp block outputs a swept-frequency cosine (chirp) signal with unity amplitude and continuous phase. To specify the desired output chirp signal, you must define its instantaneous frequency function, also known as the output frequency sweep. The frequency sweep can be linear, quadratic, or logarithmic, and repeats once every Sweep time by default. See other sections of this reference page for more details about the block.

### Variables Used in This Reference Page

| Variable | Meaning |
| --- | --- |
| f0 | Initial frequency parameter (Hz) |
| fi(tg) | Target frequency parameter (Hz) |
| tg | Target time parameter (seconds) |
| Tsw | Sweep time parameter (seconds) |
| $\varphi_0$ | Initial phase parameter (radians) |
| $\psi(t)$ | Phase of the chirp signal (radians) |
| fi(t) | User-specified output instantaneous frequency function (Hz); the user-specified sweep |
| fi(actual)(t) | Actual output instantaneous frequency function (Hz); the actual output sweep |
| ychirp(t) | Output chirp function |

### Setting the Output Frame Status

Use the Samples per frame parameter to set the block's output frame status, as summarized in the following table. The Sample time parameter sets the sample time of both sample- and frame-based outputs.

| Setting of Samples per frame Parameter | Output Frame Status |
| --- | --- |
| 1 | Sample based |
| n (any integer greater than 1) | Frame based, frame size n |

### Shaping the Frequency Sweep by Setting Frequency Sweep and Sweep Mode

The basic shape of the output instantaneous frequency sweep, fi(t), is set by the Frequency sweep and Sweep mode parameters, described in the following table.

| Parameter | Possible Settings | Description |
| --- | --- | --- |
| Frequency sweep | Linear, Quadratic, Logarithmic, Swept cosine | Determines whether the sweep frequencies vary linearly, quadratically, or logarithmically. Linear and swept cosine sweeps both vary linearly. |
| Sweep mode | Unidirectional, Bidirectional | Determines whether the sweep is unidirectional or bidirectional. For details, see Unidirectional and Bidirectional Sweep Modes. |

The following diagram illustrates the possible shapes of the frequency sweep that you can obtain by setting the Frequency sweep and Sweep mode parameters. For information on how to set the frequency values in your sweep, see Setting Instantaneous Frequency Sweep Values.

### Unidirectional and Bidirectional Sweep Modes

The Sweep mode parameter determines whether your sweep is unidirectional or bidirectional, which affects the shape of your output frequency sweep (see Shaping the Frequency Sweep by Setting Frequency Sweep and Sweep Mode). The following table describes the characteristics of unidirectional and bidirectional sweeps.

| Sweep Mode Parameter Setting | Sweep Characteristics |
| --- | --- |
| Unidirectional | Lasts for one Sweep time, Tsw. Repeats once every Tsw. |
| Bidirectional | Lasts for twice the Sweep time, 2*Tsw. Repeats once every 2*Tsw. The first half is identical to its unidirectional counterpart; the second half is a mirror image of the first half. |

The following diagram illustrates a linear sweep in both sweep modes. For information on setting the frequency values in your sweep, see Setting Instantaneous Frequency Sweep Values.

### Setting Instantaneous Frequency Sweep Values

Set the following parameters to tune the frequency values of your output frequency sweep.

• Initial frequency (Hz), f0
• Target frequency (Hz), fi(tg)
• Target time (seconds), tg

The following table summarizes the sweep values at specific times for all Frequency sweep settings. For information on the formulas used to compute sweep values at other times, see Block Computation Methods.
Instantaneous Frequency Sweep Values

| Frequency Sweep | Sweep Value at t = 0 | Sweep Value at t = tg | Time when Sweep Value Is the Target Frequency, fi(tg) |
| --- | --- | --- | --- |
| Linear | f0 | fi(tg) | tg |
| Quadratic | f0 | fi(tg) | tg |
| Logarithmic | f0 | fi(tg) | tg |
| Swept cosine | f0 | 2fi(tg) - f0 | tg/2 |

### Block Computation Methods

The Chirp block uses one of two formulas to compute the block output, depending on the Frequency Sweep parameter setting. For details, see the following sections:

### Equations for Output Computation

The following table shows the equations used by the block to compute the user-specified output frequency sweep, fi(t), the block output, ychirp(t), and the actual output frequency sweep, fi(actual)(t). The only time the user-specified sweep is not the actual output sweep is when the Frequency sweep parameter is set to Swept cosine.

Note: The following equations apply only to unidirectional sweeps in which fi(0) < fi(tg). To derive equations for other cases, you might find it helpful to examine the preceding table and the diagram in Shaping the Frequency Sweep by Setting Frequency Sweep and Sweep Mode.
The table below contains the following variables:

• fi(t): the user-specified frequency sweep
• fi(actual)(t): the actual output frequency sweep, usually equal to fi(t)
• y(t): the Chirp block output
• $\psi(t)$: the phase of the chirp signal, where $\psi(0)=0$ and $2\pi f_i(t)$ is the derivative of the phase, $f_i(t)=\frac{1}{2\pi}\cdot\frac{d\psi(t)}{dt}$
• $\varphi_0$: the Initial phase parameter value, where $y_{chirp}(0)=\cos(\varphi_0)$

Equations Used by the Chirp Block for Unidirectional Positive Sweeps

| Frequency Sweep | Block Output Chirp Signal | User-Specified Frequency Sweep, fi(t) | $\beta$ | Actual Frequency Sweep, fi(actual)(t) |
| --- | --- | --- | --- | --- |
| Linear | $y(t)=\cos(\psi(t)+\varphi_0)$ | $f_i(t)=f_0+\beta t$ | $\beta=\frac{f_i(t_g)-f_0}{t_g}$ | $f_{i(actual)}(t)=f_i(t)$ |
| Quadratic | Same as Linear | $f_i(t)=f_0+\beta t^2$ | $\beta=\frac{f_i(t_g)-f_0}{t_g^2}$ | $f_{i(actual)}(t)=f_i(t)$ |
| Logarithmic | Same as Linear | $f_i(t)=f_0\left(\frac{f_i(t_g)}{f_0}\right)^{t/t_g}$, where $f_i(t_g)>f_0>0$ | N/A | $f_{i(actual)}(t)=f_i(t)$ |
| Swept cosine | $y(t)=\cos(2\pi f_i(t)t+\varphi_0)$ | Same as Linear | Same as Linear | $f_{i(actual)}(t)=f_i(t)+\beta t$ |

### Output Computation Method for Linear, Quadratic, and Logarithmic Frequency Sweeps

The derivative of the phase of a chirp function gives the instantaneous frequency of the chirp function. The Chirp block uses this principle to calculate the chirp output when the Frequency Sweep parameter is set to Linear, Quadratic, or Logarithmic.
Linear, quadratic, or logarithmic chirp signal with phase $\psi(t)$:

$y_{chirp}(t)=\cos(\psi(t)+\varphi_0)$

The phase derivative is the instantaneous frequency:

$f_i(t)=\frac{1}{2\pi}\cdot\frac{d\psi(t)}{dt}$

For instance, if you want a chirp signal with a linear instantaneous frequency sweep, you should set the Frequency Sweep parameter to Linear, and tune the linear sweep values by setting other parameters appropriately. The block outputs a chirp signal whose phase derivative is the specified linear sweep. This ensures that the instantaneous frequency of the output is the linear sweep you desired. For equations describing the linear, quadratic, and logarithmic sweeps, see Equations for Output Computation.

### Output Computation Method for Swept Cosine Frequency Sweep

To generate the swept cosine chirp signal, the block sets the swept cosine chirp output as follows:

$y_{chirp}(t)=\cos(\psi(t)+\varphi_0)=\cos(2\pi f_i(t)t+\varphi_0)$

Note that the instantaneous frequency equation shown above does not hold for the swept cosine chirp, so the user-defined frequency sweep, fi(t), is not the actual output frequency sweep, fi(actual)(t), of the swept cosine chirp. Thus, the swept cosine output might not behave as you expect. To learn more about swept cosine chirp behavior, see Cautions Regarding the Swept Cosine Sweep and Equations for Output Computation.

### Cautions Regarding the Swept Cosine Sweep

When you want a linearly swept chirp signal, we recommend you use a linear frequency sweep. Though a swept cosine frequency sweep also yields a linearly swept chirp signal, the output might have unexpected frequency content. For details, see the following two sections.
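This caution can be illustrated numerically. The following Python sketch (separate from the block itself; the parameter values are arbitrary) differentiates the swept-cosine phase 2π·fi(t)·t and shows that the actual instantaneous frequency is fi(t) + βt rather than fi(t):

```python
import math

f0, f_target, t_g = 0.0, 25.0, 1.0   # arbitrary illustration values
beta = (f_target - f0) / t_g

def swept_cosine_phase(t):
    # Swept cosine mode: y(t) = cos(2*pi*fi(t)*t + phi0), fi(t) = f0 + beta*t
    return 2 * math.pi * (f0 + beta * t) * t

for t in (0.25, 0.5, 1.0):
    dt = 1e-6
    # Central difference of the phase, divided by 2*pi, gives the
    # actual instantaneous frequency at time t.
    f_actual = (swept_cosine_phase(t + dt)
                - swept_cosine_phase(t - dt)) / (2 * math.pi * 2 * dt)
    f_user = f0 + beta * t
    print(t, round(f_user, 3), round(f_actual, 3))  # actual = f_user + beta*t
```

At t = tg the actual frequency comes out as 2fi(tg) - f0, matching the Instantaneous Frequency Sweep Values table.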
### Swept Cosine Instantaneous Output Frequency at the Target Time Is Not the Target Frequency

The swept cosine sweep value at the Target time is not necessarily the Target frequency. This is because the user-specified sweep is not the actual frequency sweep of the swept cosine output, as noted in Output Computation Method for Swept Cosine Frequency Sweep. See the table Instantaneous Frequency Sweep Values for the actual value of the swept cosine sweep at the Target time.

### Swept Cosine Output Frequency Content May Greatly Exceed Frequencies in the Sweep

In Swept cosine mode, you should not set the parameters so that 1/Tsw is very large compared to the values of the Initial frequency and Target frequency parameters. In such cases, the actual frequency content of the swept cosine sweep might be closer to 1/Tsw, far exceeding the Initial frequency and Target frequency parameter values.

## Dialog Box

Frequency sweep
The type of output instantaneous frequency sweep, fi(t): Linear, Logarithmic, Quadratic, or Swept cosine.

Sweep mode
The directionality of the chirp signal: Unidirectional or Bidirectional.

Initial frequency (Hz)
For Linear, Quadratic, and Swept cosine sweeps, the initial frequency, f0, of the output chirp signal. For Logarithmic sweeps, Initial frequency is one less than the actual initial frequency of the sweep. Also, when the sweep is Logarithmic, you must set the Initial frequency to be less than the Target frequency. Tunable.

Target frequency (Hz)
For Linear, Quadratic, and Logarithmic sweeps, the instantaneous frequency, fi(tg), of the output at the Target time, tg. For a Swept cosine sweep, Target frequency is the instantaneous frequency of the output at half the Target time, tg/2. When Frequency sweep is Logarithmic, you must set the Target frequency to be greater than the Initial frequency. Tunable.

Target time (s)
For Linear, Quadratic, and Logarithmic sweeps, the time, tg, at which the Target frequency, fi(tg), is reached by the sweep.
For a Swept cosine sweep, Target time is the time at which the sweep reaches 2fi(tg) - f0. You must set Target time to be no greater than Sweep time, $T_{sw}\ge t_g$. Tunable.

Sweep time (s)
In Unidirectional Sweep mode, the Sweep time, Tsw, is the period of the output frequency sweep. In Bidirectional Sweep mode, the Sweep time is half the period of the output frequency sweep. You must set Sweep time to be no less than Target time, $T_{sw}\ge t_g$. Tunable.

Initial phase (rad)
The phase, $\varphi_0$, of the cosine output at t=0; $y_{chirp}(0)=\cos(\varphi_0)$. Tunable.

Sample time
The sample period, Ts, of the output. The output frame period is Mo*Ts.

Samples per frame
The number of samples, Mo, to buffer into each output frame. When the value of this parameter is 1, the block outputs a sample-based signal.

Output data type
The data type of the output: single-precision or double-precision.

## Examples

The first few examples demonstrate how to use the Chirp block's main parameters, how to view the output in the time domain, and how to view the output spectrogram. Examples 4 and 5 illustrate Chirp block settings that might produce unexpected outputs.

### Example 1: Setting a Final Frequency Value for Unidirectional Sweeps

Often you might want a unidirectional sweep for which you know the initial and final frequency values. You can specify the final frequency of a unidirectional sweep by setting Target time equal to Sweep time, in which case the Target frequency becomes the final frequency in the sweep. The following model demonstrates this method. This technique might not work for swept cosine sweeps; for details, see Cautions Regarding the Swept Cosine Sweep.

Open the Example 1 model by typing ex_chirp_ref at the MATLAB® command line. You can also rebuild the model yourself; see the following list for model parameter settings (leave unlisted parameters in their default states).
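As a cross-check outside Simulink, Example 1's linear sweep (0 to 25 Hz over 1 s at a 400 Hz sample rate, per the parameter list that follows) can be sketched in Python by integrating the sweep fi(t) = f0 + βt into the phase ψ(t) = 2π(f0·t + βt²/2):

```python
import math

f0, f_target, t_g = 0.0, 25.0, 1.0   # Example 1 sweep settings
beta = (f_target - f0) / t_g         # linear sweep slope
fs = 400.0                            # Example 1 sample rate (1/Sample time)
phi0 = 0.0                            # Initial phase

def phase(t):
    # psi(t) = 2*pi * integral of (f0 + beta*u) du from 0 to t
    return 2 * math.pi * (f0 * t + 0.5 * beta * t * t)

# One Sweep time of output samples
y = [math.cos(phase(n / fs) + phi0) for n in range(int(fs * t_g))]

# Recover the instantaneous frequency at t = 0.5 s by differencing the phase
t, dt = 0.5, 1e-6
f_inst = (phase(t + dt) - phase(t - dt)) / (2 * math.pi * 2 * dt)
print(round(f_inst, 3))   # close to f0 + beta*t = 12.5 Hz
```

This mirrors the block's phase-derivative principle: the instantaneous frequency of the generated cosine is exactly the specified linear sweep.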
Since Target time is set to equal Sweep time (1 second), the Target frequency (25 Hz) is the final frequency of the unidirectional sweep. Run your model to see the time domain output. Type the following command to view the chirp output spectrogram:

```spectrogram(dsp_examples_yout,hamming(128),... 110,[0:.01:40],400) ```

Chirp Block Parameters for Example 1

| Parameter | Value |
| --- | --- |
| Frequency sweep | Linear |
| Sweep mode | Unidirectional |
| Initial frequency | 0 |
| Target frequency | 25 |
| Target time | 1 |
| Sweep time | 1 |
| Initial phase | 0 |
| Sample time | 1/400 |
| Samples per frame | 400 |

Vector Scope Block Parameters for Example 1

| Parameter | Value |
| --- | --- |
| Input domain | Time |
| Time display span | 6 |

Signal To Workspace Block Parameters for Example 1

| Parameter | Value |
| --- | --- |
| Variable name | dsp_examples_yout |

Configuration Dialog Parameters for Example 1

| Parameter | Value |
| --- | --- |
| Stop time | 5 |

### Example 2: Bidirectional Sweeps

Change the Sweep mode parameter in the Example 1 model to Bidirectional, and leave all other parameters the same to view the following bidirectional chirp. Note that in the bidirectional sweep, the period of the sweep is twice the Sweep time (2 seconds), whereas it was one Sweep time (1 second) for the unidirectional sweep in Example 1.

Open the Example 2 model by typing ex_chirp_ref2 at the MATLAB command line. Run your model to see the time domain output. Type the following command to view the chirp output spectrogram:

```spectrogram(dsp_examples_yout,hamming(128),... 110,[0:.01:40],400) ```

### Example 3: When Sweep Time Is Greater Than Target Time

Setting Sweep time to 1.5 and leaving the rest of the parameters as in the Example 1 model gives the following output. The sweep still reaches the Target frequency (25 Hz) at the Target time (1 second), but since Sweep time is greater than Target time, the sweep continues on its linear path until one Sweep time (1.5 seconds) is traversed. Unexpected behavior might arise when you set Sweep time greater than Target time; see Example 4: Output Sweep with Negative Frequencies for details.
Open the Example 3 model by typing ex_chirp_ref3 at the MATLAB command line. Run your model to see the time domain output. Type the following command to view the chirp output spectrogram:

```spectrogram(dsp_examples_yout,hamming(128),...
110,[0:.01:40],400)
```

### Example 4: Output Sweep with Negative Frequencies

Modify the Example 1 model by changing Sweep time to 1.5, Initial frequency to 25, and Target frequency to 0. The output chirp of this example might not behave as you expect because the sweep contains negative frequencies between 1 and 1.5 seconds. The sweep reaches the Target frequency of 0 Hz at one second, then continues on its negative slope, taking on negative frequency values until it traverses one Sweep time (1.5 seconds).

Open the Example 4 model by typing ex_chirp_ref4 at the MATLAB command line. Run your model to see the time domain output.

### Example 5: Output Sweep with Frequencies Greater Than Half the Sampling Frequency

Modify the Example 1 model by changing the Target frequency parameter to 275. The output chirp of this model might not behave as you expect because the sweep contains frequencies greater than half the sampling frequency (200 Hz), which causes aliasing. If you unexpectedly get a chirp output with a spectrogram resembling the one following, your chirp's sweep might contain frequencies greater than half the sampling frequency.

Open the Example 5 model by typing ex_chirp_ref5 at the MATLAB command line. Run your model to see the time domain output. Type the following command to view the chirp output spectrogram:

```spectrogram(dsp_examples_yout,hamming(64),...
60,256,400)
```

## Supported Data Types

• Double-precision floating point
• Single-precision floating point
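The sweep geometry in Examples 1, 3, and 4 is easy to sketch outside Simulink. The helper below is an illustration, not MathWorks code: it traces the instantaneous frequency of a Linear, Unidirectional sweep using hypothetical parameter names matching the block's dialog (f0 = Initial frequency, f1 = Target frequency, tg = Target time, tsw = Sweep time).

```python
# Sketch of the Linear, Unidirectional sweep's instantaneous frequency.
# The sweep follows a linear path through Target frequency at Target time
# and repeats every Sweep time.
def linear_sweep_freq(t, f0, f1, tg, tsw):
    """Instantaneous frequency (Hz) at time t (seconds)."""
    t = t % tsw                       # fold t into one sweep period
    return f0 + (f1 - f0) * t / tg    # linear path through Target frequency

# Example 1 settings (f0=0, f1=25, tg=tsw=1): mid-sweep frequency is 12.5 Hz.
print(linear_sweep_freq(0.5, 0, 25, 1, 1))     # -> 12.5
# Example 4 settings (f0=25, f1=0, tg=1, tsw=1.5): negative past t = 1 s.
print(linear_sweep_freq(1.25, 25, 0, 1, 1.5))  # -> -6.25
```

The second call reproduces the Example 4 caution numerically: past the Target time the linear path keeps descending, so the sweep takes on negative frequency values until the Sweep time elapses.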
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=17&t=31142&p=98148
## Quiz #2 Question #2 B

H-Atom ($E_{n}=-\frac{hR}{n^{2}}$)

Jordanmarshall
Posts: 16
Joined: Fri Apr 06, 2018 10:03 am

### Quiz #2 Question #2 B

Which electron transition in a hydrogen atom will cause the emission of a photon of higher frequency? Why is the answer "a transition from n=5 to n=3" correct instead of "a transition from n=4 to n=3"?

KC Navarro_1H
Posts: 33
Joined: Fri Apr 06, 2018 10:04 am

### Re: Quiz #2 Question #2 B

Larger transitions like n=5 to n=3 yield shorter wavelengths with higher energy, and smaller transitions like n=4 to n=3 yield longer wavelengths with lower energy. This is because bigger transitions give off more energy, so when an electron transitions from n=5 to n=3, it emits more energy than from n=4 to n=3. Another example would be the ultraviolet Lyman series, which ends at n=1; those transitions are much larger, so they give off higher energies and have much shorter wavelengths.

Jaquelinne Rodriguez-Lopez 1L
Posts: 38
Joined: Mon Apr 09, 2018 12:38 pm
Been upvoted: 1 time

### Re: Quiz #2 Question #2 B

Going from n=5 to n=3, the electron is moving down 2 energy levels, whereas moving from n=4 to n=3, the electron is only moving down one energy level. Also remember, as an electron goes down energy levels, light is emitted. So it makes sense that more energy in light form is emitted when moving from n=5 to n=3 (2 energy levels) than from n=4 to n=3 (1 energy level). Hope this helps :)

204929947
Posts: 56
Joined: Fri Apr 06, 2018 10:03 am

### Re: Quiz #2 Question #2 B

I believe that moving from n=5 to n=3 gives a higher frequency because you are releasing more energy.

Last edited by 204929947 on Sat May 05, 2018 12:07 am, edited 1 time in total.

Surya Palavali 1D
Posts: 24
Joined: Fri Apr 06, 2018 10:04 am

### Re: Quiz #2 Question #2 B

Jumping down 2 levels (from n=5 to n=3) is a larger jump, which creates a shorter wavelength and thus higher energy, as far as I know.
Bryan Jiang 1F
Posts: 37
Joined: Fri Apr 06, 2018 10:03 am
Contact:

### Re: Quiz #2 Question #2 B

204929947 wrote: I believe that moving from n=5 to n=3 takes up more energy which makes the wavelength smaller, less frequency

When electrons drop from higher energy levels to lower energy levels, energy is RELEASED as a photon, not taken up. Since the drop from n=5 to n=3 is larger than the drop from n=4 to n=3, the light emitted by the former has a shorter wavelength and a HIGHER frequency than the latter.
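The point made in the replies can be checked numerically from $E_{n}=-\frac{hR}{n^{2}}$: the emitted photon's energy is $hR(1/n_f^2 - 1/n_i^2)$, so its frequency is $R(1/n_f^2 - 1/n_i^2)$. A quick sketch (not from the thread):

```python
# Photon frequency for a hydrogen-atom drop n_i -> n_f, using E_n = -hR/n^2,
# so E_photon = hR(1/n_f^2 - 1/n_i^2) and nu = R(1/n_f^2 - 1/n_i^2).
R = 3.29e15  # Rydberg frequency in Hz

def photon_frequency(n_i, n_f):
    return R * (1 / n_f**2 - 1 / n_i**2)

f_5_to_3 = photon_frequency(5, 3)  # two-level drop
f_4_to_3 = photon_frequency(4, 3)  # one-level drop
assert f_5_to_3 > f_4_to_3         # bigger drop -> more energy -> higher frequency
```

Since $1/9 - 1/25 > 1/9 - 1/16$, the n=5 to n=3 transition emits the higher-frequency photon.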
https://www.dummies.com/article/business-careers-money/business/accounting/calculation-analysis/how-to-calculate-the-expected-value-variance-and-standard-deviation-for-a-t-distribution-145933/
Probability distributions, including the t-distribution, have several moments, including the expected value, variance, and standard deviation (a moment is a summary measure of a probability distribution):

• The first moment of a distribution is the expected value, E(X), which represents the mean or average value of the distribution. For the t-distribution with ν degrees of freedom, the mean (or expected value) equals 0 whenever ν > 1. (The Greek letter ν, "nu", commonly designates the number of degrees of freedom of a distribution.)

• The second central moment is the variance, and it measures the spread of the distribution about the expected value. The more spread out a distribution is, the more "stretched out" is the graph of the distribution. In other words, the tails will be further from the mean, and the area near the mean will be smaller. For example, based on the following figures, it can be seen that the t-distribution with 2 degrees of freedom is far more spread out than the t-distribution with 30 degrees of freedom. You use the formula ν/(ν − 2), valid for ν > 2, to calculate the variance of the t-distribution.

The standard normal and t-distribution with two degrees of freedom.

The standard normal and t-distribution with 30 degrees of freedom.

As an example, with 10 degrees of freedom, the variance of the t-distribution is computed by substituting 10 for ν in the variance formula: 10/(10 − 2) = 1.25. With 30 degrees of freedom, the variance of the t-distribution equals 30/(30 − 2) ≈ 1.07. These calculations show that as the degrees of freedom increases, the variance of the t-distribution declines, getting progressively closer to 1.

• The standard deviation is the square root of the variance (it is not a separate moment). For the t-distribution, you find the standard deviation with the formula √(ν/(ν − 2)). For most applications, the standard deviation is a more useful measure than the variance because the standard deviation and expected value are measured in the same units, while the variance is measured in squared units.
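As a quick sketch (not from the article), the variance and standard-deviation formulas for the t-distribution can be written out and evaluated at the two degrees-of-freedom values discussed above:

```python
import math

# Variance and standard deviation of Student's t-distribution with df
# degrees of freedom: variance = df/(df - 2), defined only for df > 2.
def t_variance(df):
    if df <= 2:
        raise ValueError("variance is undefined for df <= 2")
    return df / (df - 2)

def t_stddev(df):
    return math.sqrt(t_variance(df))

print(t_variance(10))            # -> 1.25
print(round(t_variance(30), 4))  # -> 1.0714
```

Both values are above 1, and the df = 30 variance is much closer to 1, matching the observation that the variance declines toward 1 as the degrees of freedom grow.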
For example, suppose you assume that the returns on a portfolio follow the t-distribution. You measure both the expected value of the returns and the standard deviation as a percentage; you measure the variance as a squared percentage, which is a difficult concept to interpret. Alan Anderson, PhD is a teacher of finance, economics, statistics, and math at Fordham and Fairfield universities as well as at Manhattanville and Purchase colleges. Outside of the academic environment he has many years of experience working as an economist, risk manager, and fixed income analyst. Alan received his PhD in economics from Fordham University, and an M.S. in financial engineering from Polytechnic University.
http://www.scientificlib.com/en/Mathematics/LX/DedekindEtaFunction.html
The Dedekind eta function, named after Richard Dedekind, is a function defined on the upper half-plane of complex numbers, where the imaginary part is positive. For any such complex number $$\tau\,,$$ let $$q = e^{2\pi \rm{i} \tau}\,,$$ and define the eta function by

$$\eta(\tau) = e^{\frac{\pi \rm{i} \tau}{12}} \prod_{n=1}^{\infty} (1-q^{n}) .$$

(The notation $$q \equiv e^{2\pi \rm{i} \tau}\,$$ is now standard in number theory, though many older books use q for the nome $$e^{\pi \rm{i} \tau}\,.$$) Note that

$$\Delta=(2\pi)^{12}\eta^{24}(\tau)$$

where $$\Delta$$ is the modular discriminant. The presence of 24 can be understood by connection with other occurrences, such as in the 24-dimensional Leech lattice.

The eta function is holomorphic on the upper half-plane but cannot be continued analytically beyond it.

Modulus of the Euler function on the unit disc, colored so that black = 0, red = 4.

The real part of the modular discriminant as a function of q.

The eta function satisfies the functional equations[1]

$$\eta(\tau+1) =e^{\frac{\pi {\rm{i}}}{12}}\eta(\tau),\,$$

$$\eta(-\tau^{-1}) = \sqrt{-{\rm{i}}\tau} \eta(\tau).\,$$

More generally, suppose $$a, b, c, d \,$$ are integers with $$ad-bc=1 \,$$, so that $$\tau\mapsto\frac{a\tau+b}{c\tau+d}$$ is a transformation belonging to the modular group. We may assume that either $$c>0\,,$$ or $$c=0 \,$$ and $$d=1 \,$$.
Then

$$\eta \left( \frac{a\tau+b}{c\tau+d} \right) = \epsilon (a,b,c,d) (c\tau+d)^{\frac{1}{2}} \eta(\tau),$$

where

$$\epsilon (a,b,c,d)=e^{\frac{b{\rm{i}} \pi}{12}}\quad(c=0,d=1);$$

$$\epsilon (a,b,c,d)=e^{{\rm{i}}\pi [\frac{a+d}{12c} - s(d,c) -\frac{1}{4}]}\quad(c>0).$$

Here $$s(h,k)\,$$ is the Dedekind sum

$$s(h,k)=\sum_{n=1}^{k-1} \frac{n}{k} \left( \frac{hn}{k} - \left\lfloor \frac{hn}{k} \right\rfloor -\frac{1}{2} \right).$$

Because of these functional equations the eta function is a modular form of weight 1/2 and level 1 for a certain character of order 24 of the metaplectic double cover of the modular group, and can be used to define other modular forms. In particular, the modular discriminant of Weierstrass can be defined as

$$\Delta(\tau) = (2 \pi)^{12} \eta(\tau)^{24}\,$$

and is a modular form of weight 12. (Some authors omit the factor of $$(2\pi)^{12}$$, so that the series expansion has integral coefficients.) The Jacobi triple product implies that the eta function is (up to a factor) a Jacobi theta function for special values of the arguments:

$$\eta(z) = \sum_{n=1}^\infty \chi(n) \exp(\tfrac{1}{12} \pi i n^2 z),$$

where $$\chi(n)$$ is the Dirichlet character modulo 12 with $$\chi(\pm1) = 1, \chi(\pm 5)=-1.$$ The Euler function

$$\phi(q) = \prod_{n=1}^{\infty} \left(1-q^n\right),$$

related to $$\eta \,$$ by $$\phi(q)= q^{-1/24} \eta(\tau)\,,$$ has a power series expansion given by the Euler identity:

$$\phi(q)=\sum_{n=-\infty}^\infty (-1)^n q^{(3n^2-n)/2}.$$

Because the eta function is easy to compute numerically from either power series, it is often helpful in computation to express other functions in terms of it when possible, and products and quotients of eta functions, called eta quotients, can be used to express a great variety of modular forms. The picture on this page shows the modulus of the Euler function: the additional factor of $$q^{1/24}$$ between this and eta makes almost no visual difference whatsoever (it only introduces a tiny pinprick at the origin).
Thus, this picture can be taken as a picture of eta as a function of q.

See also

Chowla–Selberg formula
q-series
Weierstrass's elliptic functions
Partition function (number theory)
Kronecker limit formula
Superstring theory

References

^ Siegel, C.L. (1954). "A Simple Proof of $$\eta(-1/\tau) = \eta(\tau)\sqrt{\tau/{\rm{i}}}\,$$". Mathematika 1: 4. doi:10.1112/S0025579300000462.

Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory (2nd ed.), Graduate Texts in Mathematics 41 (1990), Springer-Verlag, ISBN 3-540-97127-0. See chapter 3.

Neal Koblitz, Introduction to Elliptic Curves and Modular Forms (2nd ed.), Graduate Texts in Mathematics 97 (1993), Springer-Verlag, ISBN 3-540-97966-2.
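The Euler identity quoted above is straightforward to check numerically: compare the truncated product for the Euler function with the truncated pentagonal-number series at a sample point inside the unit disc. This sketch is an illustration, not part of the article:

```python
# Numerically check Euler's pentagonal-number identity
#   prod_{n>=1} (1 - q^n) = sum_{n in Z} (-1)^n q^{(3n^2 - n)/2}
# at a real sample point with |q| < 1.
def euler_product(q, terms=200):
    p = 1.0
    for n in range(1, terms + 1):
        p *= 1.0 - q**n
    return p

def pentagonal_series(q, terms=200):
    # (3n^2 - n)/2 is always an integer (the generalized pentagonal numbers)
    return sum((-1)**n * q**((3*n*n - n) // 2) for n in range(-terms, terms + 1))

q = 0.3
assert abs(euler_product(q) - pentagonal_series(q)) < 1e-12
```

The series converges extremely fast, which is why, as the article notes, eta is easy to compute numerically from either expansion.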
https://arxiv.org/abs/1209.4762
astro-ph.CO

# Title: The effect of peculiar velocities on the epoch of reionization (EoR) 21-cm signal

Abstract: We have used semi-numerical simulations of reionization to study the behaviour of the power spectrum of the EoR 21-cm signal in redshift space. We have considered two models of reionization, one with homogeneous recombination (HR) and the other incorporating inhomogeneous recombination (IR). We have estimated the observable quantities, the quadrupole and monopole moments of the HI power spectrum in redshift space, from our simulated data. We find that the magnitude and nature of the ratio between the quadrupole and monopole moments of the power spectrum ($P^s_2 /P^s_0$) can be a possible probe for the epoch of reionization. We observe that this ratio becomes negative at large scales for $x_{HI} \leq 0.7$ irrespective of the reionization model, which is a direct signature of an inside-out reionization at large scales. It is possible to qualitatively interpret the results of the simulations in terms of the fluctuations in the matter distribution and the fluctuations in the neutral fraction, which have power spectra and cross-correlation $P_{\Delta \Delta}(k)$, $P_{xx}(k)$ and $P_{\Delta x}(k)$ respectively. We find that at large scales the fluctuations in matter density and neutral fraction are exactly anti-correlated through all stages of reionization. This provides a simple picture in which we are able to qualitatively interpret the behaviour of the redshift space power spectra at large scales with varying $x_{HI}$ entirely in terms of just two quantities, namely $x_{HI}$ and the ratio $P_{xx}/P_{\Delta \Delta}$. The nature of $P_{\Delta x}$ becomes different for the HR and IR scenarios at intermediate and small scales. We further find that it is possible to distinguish between an inside-out and an outside-in reionization scenario from the nature of the ratio $P^s_2 /P^s_0$ at intermediate length scales.

Comments: 11 pages, 6 figures.
Accepted for publication in MNRAS. Replaced to match the accepted version. Added one appendix to quantify the possible uncertainties in the estimation of the multipole moments of redshift space power spectrum Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) Journal reference: Monthly Notices of the Royal Astronomical Society, Volume 434, Issue 3, p.1978-1988, (2013) DOI: 10.1093/mnras/stt1144 Cite as: arXiv:1209.4762 [astro-ph.CO] (or arXiv:1209.4762v2 [astro-ph.CO] for this version) ## Submission history From: Suman Majumdar [view email] [v1] Fri, 21 Sep 2012 09:41:25 GMT (1353kb) [v2] Fri, 21 Jun 2013 13:53:32 GMT (1342kb)
http://mathhelpforum.com/calculus/112665-senior-maths-challenge-question-print.html
# Senior Maths Challenge question

• November 5th 2009, 03:32 PM
123ohrid

Senior Maths Challenge question

Hello lads. So I did my senior maths challenge today, and there was a question which struck me as being difficult. I even asked my maths teacher, and he couldn't solve it right away. So who can run me through how you would solve the following:

abcd + abc + bcd + acd + abd + ab + bc + cd + ad + ac + bd + a + b + c + d = 2009

Find a+b+c+d

Anyone got any idea how to do this?

• November 5th 2009, 03:57 PM
Jester

Quote:

Originally Posted by 123ohrid
Hello lads. So I did my senior maths challenge today, and there was a question which struck me as being the sort of WTF questions. I even asked my maths teacher and he couldn't solve it as fast as he could. So who can run me through how you would solve the following:

abcd + abc + bcd + acd + abd + ab + bc + cd + ad + ac + bd + a + b + c + d = 2009

Find a+b+c+d

Anyone got any idea how to do this?

Others may prove me wrong, but I don't think there's a unique answer to this question. You'll notice that adding 1 to both sides gives

$(a+1)(b+1)(c+1)(d+1) = 2010.$

Some solutions:

$a = 1, \;b = 1004, \; c = 0, \;d = 0,$
$a = 2, \;b = 669, \; c = 0, \;d = 0,$
$a = 4, \;b = 401, \; c = 0, \;d = 0,$
$a = 5, \;b = 334, \; c = 0, \;d = 0,$
$a = 9, \;b = 200, \; c = 0, \;d = 0.$

We see that $a+b+c+d$ changes in all these cases.

• November 5th 2009, 04:14 PM
Jameson

I bet the question said none can equal 0. If so, 2010 breaks down into 4 prime factors, which furthers this conclusion: 2, 3, 5, 67.

• November 5th 2009, 04:46 PM
Jester

Quote:

Originally Posted by Jameson
I bet the question said none can equal 0. If so, 2010 breaks down into 4 prime factors which furthers this conclusion - 2,3,5,67.

Nice observation! (Clapping)

• November 5th 2009, 11:13 PM
123ohrid

Yeah, it said that a, b, c and d are positive integers. Also this might help: the answer is one of the following: A) 73 B) 75 C) 77 D) 79 E) 81.
So which one would it be? I'm guessing 77 because it's the sum of all your primes? But can you try to explain why?

• November 6th 2009, 12:40 AM
Defunkt

$(a+1)(b+1)(c+1)(d+1) = 2010$

2010 has exactly 4 prime factors: 2, 3, 5, 67 (this means that $2010 = 2\cdot 3 \cdot 5 \cdot 67$; convince yourself that you cannot write 2010 as another product of 4 different factors), so

$(a+1) + (b+1) + (c+1) + (d+1) = 2 + 3 + 5 + 67 \Rightarrow a + b + c + d = 73$
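The factoring trick in the thread is easy to verify by brute force (a sketch, not from the thread): with the prime factorization 2010 = 2 · 3 · 5 · 67, the positive-integer solution is a, b, c, d = 1, 2, 4, 66 up to order, and the original left-hand side is the sum of the products of every nonempty subset of {a, b, c, d}:

```python
from itertools import combinations

# Left-hand side of the problem: the sum of the products of every nonempty
# subset of {a, b, c, d}, which equals (a+1)(b+1)(c+1)(d+1) - 1.
def subset_product_sum(a, b, c, d):
    total = 0
    for r in (1, 2, 3, 4):
        for combo in combinations((a, b, c, d), r):
            prod = 1
            for x in combo:
                prod *= x
            total += prod
    return total

a, b, c, d = 1, 2, 4, 66  # the prime factors 2, 3, 5, 67, each minus one
assert (a + 1) * (b + 1) * (c + 1) * (d + 1) == 2010
assert subset_product_sum(a, b, c, d) == 2009
assert a + b + c + d == 73
```

This confirms answer A) 73.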
http://pyleecan.org/pyleecan.Tests.Methods.Slot.test_SlotW16_meth.html
# test_SlotW16_meth module

Created on Wed Jan 14 13:51:53 2014

@author: pierre_b

class test_SlotW16_meth(methodName='runTest')[source]

Bases: unittest.case.TestCase

unittest for SlotW16 methods

test_comp_angle_opening_1(test_dict)
Check that the computation of the average opening angle is correct

test_comp_angle_wind_eq_1(test_dict)
Check that the computation of the average angle is correct

test_comp_height_1(test_dict)
Check that the computation of the height is correct

test_comp_surface_1(test_dict)
Check that the computation of the surface is correct

test_comp_surface_wind_1(test_dict)
Check that the computation of the winding surface is correct
https://cms.math.ca/cmb/kw/finite-to-one%20maps
Search: All articles in the CMB digital archive with keyword finite-to-one maps

Results 1 - 1 of 1

1. CMB 2005 (vol 48 pp. 614)

Tuncali, H. Murat; Valov, Vesko
On Finite-to-One Maps

Let $f\colon X\to Y$ be a $\sigma$-perfect $k$-dimensional surjective map of metrizable spaces such that $\dim Y\leq m$. It is shown that for every positive integer $p$ with $p\leq m+k+1$ there exists a dense $G_{\delta}$-subset ${\mathcal H}(k,m,p)$ of $C(X,\uin^{k+p})$ with the source limitation topology such that each fiber of $f\triangle g$, $g\in{\mathcal H}(k,m,p)$, contains at most $\max\{k+m-p+2,1\}$ points. This result provides a proof of the following conjectures of S. Bogatyi, V. Fedorchuk and J. van Mill. Let $f\colon X\to Y$ be a $k$-dimensional map between compact metric spaces with $\dim Y\leq m$. Then: \begin{inparaenum}[\rm(1)] \item there exists a map $h\colon X\to\uin^{m+2k}$ such that $f\triangle h\colon X\to Y\times\uin^{m+2k}$ is 2-to-one provided $k\geq 1$; \item there exists a map $h\colon X\to\uin^{m+k+1}$ such that $f\triangle h\colon X\to Y\times\uin^{m+k+1}$ is $(k+1)$-to-one. \end{inparaenum}

Keywords: finite-to-one maps, dimension, set-valued maps
Categories: 54F45, 55M10, 54C65
https://math.stackexchange.com/questions/2806590/a-conjectured-formula-for-the-polylogarithm-of-a-negative-integer-order
# A conjectured formula for the polylogarithm of a negative integer order

I discovered the following formula while working on the sequence A141697 from the OEIS. I have no idea whether it is something trivial or not. I would be very happy to know more about it.

$$\textrm{Li}_\nu\left(z\right)= \frac { 6 (1+z)^{-\nu-1} + \displaystyle\sum_{k=0}^{-\nu-1} \displaystyle\left( -6 \displaystyle{{-\nu-1}\displaystyle\choose k}+7\sum_{j=0}^{k+1}(-1)^j (k-j+1)^{-\nu} {{-\nu+1}\ \choose j} \right) z^k } { 7 (1 - z)^{-\nu+1} }z$$

when $\nu$ is a negative integer. In case I mistyped the formula, here is my code in two different languages:

For Pari-GP:

mypolylog(n, x) =
{
( 6*(x+1)^(-n-1)
+ sum(k=0,-n-1,
(-6*binomial(-n-1,k)
+ 7*sum(j=0,k+1,
(-1)^j * (k-j+1)^(-n) * binomial(-n+1,j)))*x^k)
) * x / (7*(1-x)^(-n+1) )
}

For Mathematica:

mypolylog[n_, x_] := (6*(x+1)^(-n-1) + Sum[(x^k*(-6*Binomial[-n-1, k] + 7*Sum[(-1)^j*(k-j+1)^(-n)*Binomial[-n+1, j], {j, 0, k+1}])), {k, 0, -n-1}]) / ( 7*(1 - x)^(-n+1) ) * x

• How did you get the formula? Have you tried a reformulation of the rational functions using Stirling or Eulerian numbers given in en.wikipedia.org/wiki/Polylogarithm#Particular_values? – gammatester Jun 3 '18 at 13:59
• @gammatester I empirically discovered this formula by using a tool I wrote for discovering identities: github.com/baruchel/oeis which detected A141697(n)=3*A168524(n)-2*A154337(n). The formula above comes from that. – Thomas Baruchel Jun 3 '18 at 14:09
• Nice work. You can use $6=t$ and $7=t+1$ where $t\neq -1$ in your formula instead. – Somos Jun 3 '18 at 16:01
• This could similarly apply to A141696 as well. – Leucippus Jun 4 '18 at 2:31

To simplify formulas define $\, B(n,k) := {-n-1\choose k}, \,$ $\, A(n,k) := \sum_{j=0}^{k+1} (-1)^j (k-j+1)^{-n} {-n+1 \choose j}, \,$ and $\, u := 1-t \,$ where $\, t\neq 0.
\,$ Your formula is the $\, t=7 \,$ case of the lightly simplified equation $$\textrm{Li}_n(z) = \frac{z}t (1-z)^{n-1} (-u (1-z)^{-n+1} + u \sum_{k=0}^{-n-1} z^k B(n,k) + t \sum_{k=0}^{-n-1} z^k A(n,k) )$$ but $\, (1-z)^{-n+1} = \sum_{k=0}^{-n-1} z^k B(n,k) \,$ simplifies it to $\textrm{Li}_n(z) = z (1-z)^{n-1} \sum_{k=0}^{-n-1} z^k A(n,k). \,$ The numbers $\,A(-n,k-1)\,$ are the Eulerian numbers, the triangular OEIS sequence A008292, whose entry has the information "O.g.f. for n-th row: (1-x)^(n+1)*polylog(-n, x)/x".
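The closing claim, that these coefficients are Eulerian numbers whose row polynomial equals $(1-x)^{n+1}\,\textrm{Li}_{-n}(x)/x$, can be checked numerically. This is a sketch in Python rather than Pari or Mathematica, written with a positive order n:

```python
from math import comb

# Eulerian number <n, k> via the standard alternating-sum formula; these are
# the coefficients A(-n, k) from the answer, rewritten with n > 0.
def eulerian(n, k):
    return sum((-1)**j * comb(n + 1, j) * (k + 1 - j)**n for j in range(k + 1))

def polylog_neg(n, x, terms=300):
    """Li_{-n}(x) = sum_{k>=1} k^n x^k, truncated (valid for |x| < 1)."""
    return sum(k**n * x**k for k in range(1, terms + 1))

n, x = 3, 0.2
row_poly = sum(eulerian(n, k) * x**k for k in range(n))  # 1 + 4x + x^2
assert abs(row_poly - (1 - x)**(n + 1) * polylog_neg(n, x) / x) < 1e-12
```

For n = 3 the row is 1, 4, 1, and the identity holds to numerical precision at the sample point, consistent with the OEIS A008292 generating-function remark.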
https://www.math.umass.edu/undergraduate/departmental-graduation-requirements
The department offers seven concentrations as part of the mathematics degree: Actuarial, Applied Math, Individual, Math Computing, Pure Math, Statistics, and Teaching.

### Course Requirements for all Majors

• Differential and integral calculus: Math 131 and 132, with a grade of C or better in Math 132
• Multivariable calculus and linear algebra: Math 233 and 235
• Introduction to abstract mathematics: Math 300 or CompSci 250 (may be waived by the Chief Undergraduate Advisor for exceptionally well-prepared students)
• Computer programming: CompSci 121 or equivalent
• Writing in mathematics: Math 370
• Integrative Experience (IE) course. The following courses satisfy the IE requirement: Math 455, Math 456, Math 475, Stat 525, Stat 494CI. All IE courses count toward either required major courses or upper level major electives. In other words, this requirement does not require additional coursework to be completed.
• Completion of the requirements of one of the following concentrations: Actuarial, Applied, Individual, Mathematical Computing, Pure, Statistics, Teaching.
• Courses from outside the department may be used to satisfy the concentration requirements. A list of accepted University courses is available here. The Chief Undergraduate Advisor must approve all other courses taken outside the department or at another university before the student enrolls in them.
• All courses used to satisfy these requirements must be completed with a passing grade (D or higher) and cannot be taken Pass/Fail.
• The overall GPA of all courses taken to satisfy the requirements for the major (averaged over all such courses taken) must be at least 2.0.
• Students will need to earn a grade of C or better in Math 132 before taking some courses at the 300 level or higher.

## Concentrations and their requirements

Mathematics majors must choose one of the following concentrations.
### Actuarial Concentration

The Actuarial Concentration prepares the student for a career in the actuarial sciences.

Requirements:

1. VEE Requirements: Econ 103 and Econ 104, Stat 525, and Finance 301. Stat 525 is an IE course.
2. Probability and statistics: Stat 515 and Stat 516
3. Exam preparation: Math 437 or Math 536 (formerly Math 438)
4. Mathematics of finance: Math 537
5. Three of the following courses: Math 331/532, Math 425, Math 456, Math 523, Math 545, Math 551, Stat 526, Stat 597A, or an appropriate course outside the department such as Finance 422 or Econ 309. For other substitutions, please consult with the CUA.

Note: Math majors who have declared the Actuarial concentration may contact [email protected] to submit a request to be enrolled in Finance 301. Requests should be made a week before the enrollment period begins. Enrollment in Fin 301 is at the discretion of the Isenberg School of Management.

See the Actuarial Sciences webpage for further details on this concentration.

### Applied Mathematics Concentration

The Applied Mathematics Concentration prepares the student for applied mathematics positions in industry or government.

Requirements:

1. Advanced multivariate calculus: Math 425
2. Differential equations: Math 331
3. Linear algebra for applied mathematics: Math 545
4. Introduction to scientific computing: Math 551
5. One of the following courses: Math 456, Math 532, Math 534, Math 552. Math 456 is the IE course for this concentration.
6. Three additional courses numbered 400 or higher (except Stat 501). With the approval of the Chief Undergraduate Advisor, these may be appropriate courses outside the department (a popular choice is MIE 379).
### Individually Designed Concentration

The Individually Designed Concentration permits students, in consultation with their academic advisor, to design their own concentration so as to explore thoroughly a theme in mathematics or statistics, or to investigate connections between mathematics and/or statistics and another field, such as biology or economics.

An individual concentration must include eight courses numbered 400 or above, of at least three credits each. At least four of these eight courses must be in mathematics or statistics. In consultation with their academic advisor, students propose a plan for the eight courses to be used to fulfill the requirements of the individual concentration. No later than the end of the semester in which students are taking Math 300, or during the second semester of the students' sophomore year, whichever comes first, students will: prepare the plan in writing, secure approval of the plan by their advisor, and submit the written plan for approval to the Chief Undergraduate Advisor. No later than the end of the junior year, students review the plan with their academic advisor. If any changes are proposed to the original plan, students must again secure approval of the revised plan in writing.

### Mathematical Computing Concentration

The Mathematical Computing Concentration prepares the student for careers that require both knowledge of advanced mathematics and knowledge of computer science.

Requirements:

1. Data structures: CompSci 187 or ECE 242 (needed as a prerequisite in the sequence of courses leading to CompSci 311)
2. Abstract algebra: Math 411
3. Probability: Stat 515
4. Numerics: Math 551
5. Algorithms: CompSci 311
6. Either CompSci 501 or CompSci 575
7. Two additional courses from the following list: Math 331, Math 412, Math 456, Math 471, Math 571, Math 545, Math 552, or Stat 516. The IE course on this list is Math 456.
8. CS elective (any 300+ level CS course of 3 credits or more that is not used to satisfy any of the previous requirements).

### Pure Mathematics Concentration

The Pure Mathematics Concentration gives students exposure to the core mathematics subjects and prepares students for graduate study in mathematics.

Requirements:

1. Abstract algebra: Math 411
2. Complex variables: Math 421
3. Advanced multivariate calculus or topics in real analysis: Math 425 or Math 524
4. Analysis: Math 523H
5. Either Math 412 or Math 563H
6. One applied mathematics course, either chosen from the following list or another course with sufficient applied mathematical content approved by the Chief Undergraduate Advisor: Math 331, Math 456, Math 532, Math 534, Math 551, Math 552, Stat 516
7. Two additional courses numbered 400 or higher (except Stat 501). Most students will select one of these to be Math 455 to satisfy the IE requirement. With the approval of the Chief Undergraduate Advisor, these may be appropriate courses outside the department.

### Statistics Concentration

The Statistics Concentration prepares the student for a career as a statistician and for graduate study in statistics.

Requirements:

1. Advanced multivariate calculus: Math 425
2. Linear algebra for applied mathematics (or abstract algebra): Math 545 (or Math 411). Math 545 is strongly recommended.
3. Probability and statistics: Stat 515 and Stat 516
4. One of the following courses: Stat 525 or Stat 526. The IE course on this list is Stat 525.
5. Three additional courses numbered 400 or higher (or Math 331). With the approval of the Chief Undergraduate Advisor, these may be appropriate courses outside the department. Note: Stat 501 cannot be used.
### Teaching Concentration

Teaching Checklist - For entry Fall 2016 and later
Teaching Checklist - For entry prior to Fall 2016

The Teaching Concentration provides the student with the knowledge of mathematics and statistics required by the Commonwealth for teaching mathematics at the 8-12 level. The teaching concentration requirements have been updated for students entering the University in Fall 2016. Current students have the option to meet either the requirements listed on their ARR on SPIRE or these new requirements.

1. Abstract algebra: Math 411
2. Mathematical modeling: Math 331 or Math 456
3. Discrete/finite mathematics: Math 455 (an IE course)
4. Geometry: Math 461
5. Probability and statistics: Stat 501 followed by Stat 515
6. Use of technology: Math 471
7. One additional course numbered 400 or higher. (Math 475, History of Math, is recommended since it is a state requirement for secondary teachers.)
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_Statistical_Thinking_for_the_21st_Century_(Poldrack)/25%3A_Modeling_Continuous_Relationships_in_R/25.03%3A_Robust_Correlations_(24.3.2)
# 25.3: Robust Correlations (24.3.2)

In the previous chapter we also saw that the hate crime data contained one substantial outlier, which appeared to drive the significant correlation. To compute the Spearman correlation, we first need to convert the data into their ranks, which we can do using the `rank()` function (not `order()`, which returns the sorting permutation rather than the ranks):

```r
hateCrimes <- hateCrimes %>%
  mutate(
    hatecrimes_rank = rank(avg_hatecrimes_per_100k_fbi),
    gini_rank = rank(gini_index)
  )
```

We can then compute the Spearman correlation by applying the Pearson correlation to the rank variables:

```r
cor(hateCrimes$hatecrimes_rank, hateCrimes$gini_rank)
```

```
## [1] 0.057
```

We see that this is much smaller than the value obtained using the Pearson correlation on the original data. We can assess its statistical significance using randomization:

```
## [1] 0.0014
```

Here we see that the p-value is substantially larger and far from significance.
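The same rank-then-correlate recipe carries over to any language. A minimal Python sketch (my own illustration, using synthetic data with an injected outlier rather than the hate-crime dataset):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = x + rng.normal(size=50)
y[0] = 100.0  # one large outlier, analogous to the hate-crime example

# Spearman correlation = Pearson correlation of the rank-transformed data.
r_manual = np.corrcoef(stats.rankdata(x), stats.rankdata(y))[0, 1]

# Cross-check against the library implementation.
r_library, _ = stats.spearmanr(x, y)
assert abs(r_manual - r_library) < 1e-8
```

Because ranks cap the influence of any single observation, the outlier shifts the Spearman estimate far less than it shifts the Pearson estimate on the raw data.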
This page titled 25.3: Robust Correlations (24.3.2) is shared under a not declared license and was authored, remixed, and/or curated by Russell A. Poldrack via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
https://michael.jesurum.net/research.html
# Research

## Fourier Restriction to Smooth Enough Curves

For a surface $S \subset \mathbb{R}^{d}$ equipped with some measure $\mu$, the Fourier restriction problem asks for which $p$ and $q$ there is a bounded linear operator $$\mathcal{R} \colon L^{p}(\mathbb{R}^{d}) \rightarrow L^{q}(S, \mu)$$ such that $\mathcal{R}f = \hat{f}\restriction_{S}$ for all Schwartz functions $f$.

In 1985, Drury was the first person to prove an optimal result for a curve in 3 or more dimensions. He found the optimal range of $p$ and $q$ for the moment curve $\gamma(t)=(t, t^{2}, \dots, t^{d})$ with the affine arclength measure. Since then, many other people have contributed to this field. Building on many of those prior results, we prove Fourier restriction estimates for arbitrary compact $C^{N}$ curves for any $N>d$, with $p$ and $q$ in the Drury range, using a power of the affine arclength measure as a mitigating factor. In particular, we make no nondegeneracy assumption on the curve.

I gave a 20-minute talk about this paper at the Ohio River Analysis Meeting in 2022.

## Fourier Restriction and Maximal Operators on the Moment Curve

In the Fourier restriction problem, the pointwise relationship between $\mathcal{R}f$ and $f$ is not always clear when $f$ is not a Schwartz function. Recently, Müller, Ricci, and Wright initiated the study of maximal restriction theorems to analyze this relationship. In this paper, we prove a maximal restriction theorem for certain $r$-maximal restriction operators on the moment curve. A corollary of this theorem is that for $f \in L^{p}(\mathbb{R}^{d})$, with $p$ in the optimal (Drury) range, almost every $x$ on the moment curve with respect to arclength measure is a Lebesgue point for $\hat{f}$, and the regularized value of $\hat{f}$ at $x$ coincides with $\mathcal{R}f(x)$.
http://math.stackexchange.com/questions/24588/is-this-parametric-curve-space-filling-why-or-why-not
# Is this parametric curve space-filling? Why or why not?

Really, the curve in question is the polar plot $r = \cos(K\theta)$, where $K$ is any irrational number (I use $\pi$), but the transformation to a parametric one on $x$ and $y$ with domain $t$ is an unsurprising one.

It would appear that the curve is confined to the unit circle, and also that it never repeats -- that it is aperiodic. Given these three things:

• The domain of the function is any real number
• The range of the function is confined to a finite space
• The function is aperiodic

Does it mean that the unit circle is completely filled for this parametric function from negative infinity to infinity? Can we say that for any given point in the unit circle, there is a value of $t$ where the curve intersects it?

I want to say no. I really do. My intuition says so. But why? Are there any other curves with the three bullet-pointed conditions above that can be shown to be more clearly non-space-filling? What is the appropriate mathematical term for the way this curve acts on the unit circle?

-

When you say "it never overlaps itself/repeats", you should note that it intersects itself at a countably infinite number of points, especially at the origin where it does so a countably infinite number of times. – Henry Mar 2 '11 at 11:12

@Henry - Ah thanks, I forgot to fix that. I did at my bullet point description but not at the normal text one. That is true, but I would figure that the non-intersecting points are also countably infinitely many. – Justin L. Mar 2 '11 at 18:28

No smooth curve is space filling. It can only fill a set which is both of zero measure (en.wikipedia.org/wiki/Null_set#Lebesgue_measure) and meagre (en.wikipedia.org/wiki/Meagre_set). – George Lowther Mar 2 '11 at 20:40

Consider the intersection of the curve with the ray $\theta = 0$. These are points at a distance $r = \cos(2\pi n K)$ from the origin, for all $n \in \mathbb Z$. Note that there are only countably many of them.
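A quick numerical sketch of that observation (my own illustration, not part of the original thread): the crossings of the ray $\theta = 0$ form a countable set of radii that nevertheless come arbitrarily close to any target value in $[-1, 1]$.

```python
import math

K = math.pi  # an irrational multiplier, as in the question

# The curve r = cos(K*theta) crosses the ray theta = 0 only at the
# countably many radii r_n = cos(2*pi*n*K), n an integer.
radii = [math.cos(2 * math.pi * n * K) for n in range(1, 40001)]

# Density sketch: because K is irrational, these radii get arbitrarily
# close to any target in [-1, 1] -- here 0.5 -- without filling the ray.
target = 0.5
closest = min(radii, key=lambda r: abs(r - target))
print(abs(closest - target))  # small, and it shrinks as n grows
```

So the curve meets each ray in only a countable (hence measure-zero) set of points: it can be dense in the disc, as the comments note, but it is far from space-filling.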
Dense means it gets arbitrarily close to all points, but does not require that the set is countable. In particular, $\mathbb{R}$ is dense in $\mathbb{R}$ –  Ross Millikan Mar 2 '11 at 14:54
http://en.wikipedia.org/wiki/Lorentz-violating_neutrino_oscillations
# Lorentz-violating neutrino oscillations

Lorentz-violating neutrino oscillation refers to the quantum phenomenon of neutrino oscillations described in a framework that allows the breakdown of Lorentz invariance. Today, neutrino oscillation (the change of one type of neutrino into another) is an experimentally verified fact; however, the details of the underlying theory responsible for these processes remain an open issue and an active field of study. The conventional model of neutrino oscillations assumes that neutrinos are massive, which provides a successful description of a wide variety of experiments; however, there are a few oscillation signals that cannot be accommodated within this model, which motivates the study of other descriptions. In a theory with Lorentz violation, neutrinos can oscillate both with and without masses, and many other novel effects, described below, appear. Generalizing the theory to incorporate Lorentz violation has been shown to provide alternative scenarios that explain all the established experimental data through the construction of global models.

## Introduction

Conventional Lorentz-preserving descriptions of neutrinos explain the phenomenon of oscillations by endowing these particles with mass. However, if Lorentz violation occurs, oscillations could be due to other mechanisms. The general framework for Lorentz violation is called the Standard-Model Extension (SME).[1][2][3] The neutrino sector of the SME provides a description of how Lorentz and CPT violation would affect neutrino propagation, interactions, and oscillations. This neutrino framework first appeared in 1997[1] as part of the general SME for Lorentz violation in particle physics, which is built from the operators of the Standard Model.
An isotropic limit of the SME, including a discussion of Lorentz-violating neutrino oscillations, was presented in a 1999 publication.[4] Full details of the general formalism for Lorentz and CPT symmetry in the neutrino sector appeared in a 2004 publication.[5] This work presented the minimal SME (mSME) for the neutrino sector, which involves only renormalizable terms. The incorporation of operators of arbitrary dimension in the neutrino sector was presented in 2011.[6]

The Lorentz-violating contributions to the Lagrangian are built as observer Lorentz scalars by contracting standard field operators with controlling quantities called coefficients for Lorentz violation. These coefficients, arising from the spontaneous breaking of Lorentz symmetry, lead to non-standard effects that could be observed in current experiments. Tests of Lorentz symmetry attempt to measure these coefficients. A nonzero result would indicate Lorentz violation.

The construction of the neutrino sector of the SME includes the Lorentz-invariant terms of the standard massive-neutrino model, Lorentz-violating terms that are even under CPT, and ones that are odd under CPT. Since in field theory the breaking of CPT symmetry is accompanied by the breaking of Lorentz symmetry,[7] the CPT-breaking terms are necessarily Lorentz breaking. It is reasonable to expect that Lorentz and CPT violation are suppressed at the Planck scale, so the coefficients for Lorentz violation are likely to be small. The interferometric nature of neutrino oscillation experiments, and also of neutral-meson systems, gives them exceptional sensitivity to such tiny effects. This holds promise for oscillation-based experiments to probe new physics and access regions of the SME coefficient space that are still untested.

## General predictions

Current experimental results indicate that neutrinos do indeed oscillate.
These oscillations have a variety of possible implications, including the existence of neutrino masses and the presence of several types of Lorentz violation. In the following, each category of Lorentz-breaking signal is outlined.[5]

### Spectral anomalies

In the standard Lorentz-invariant description of massive neutrinos, the oscillation phase is proportional to the baseline L and inversely proportional to the neutrino energy E. The mSME introduces dimension-three operators that lead to oscillation phases with no energy dependence. It also introduces dimension-four operators generating oscillation phases proportional to the energy. Standard oscillation amplitudes are controlled by three mixing angles and one phase, all of which are constant. In the SME framework, Lorentz violation can lead to energy-dependent mixing parameters. When the whole SME is considered and nonrenormalizable terms in the theory are not neglected, the energy dependence of the effective hamiltonian takes the form of an infinite series in powers of the neutrino energy. The fast growth of elements in the hamiltonian could produce oscillation signals in short-baseline experiments, as in the puma model.

The unconventional energy dependence in the theory leads to other novel effects, including corrections to the dispersion relations that would make neutrinos move at velocities other than the speed of light. By this mechanism neutrinos could become faster-than-light particles. The most general form of the neutrino sector of the SME has been constructed by including operators of arbitrary dimension.[6] In this formalism, the speed of propagation of neutrinos is obtained. Some of the interesting new features introduced by the violation of Lorentz invariance include the dependence of this velocity on neutrino energy and direction of propagation. Moreover, different neutrino flavors could also have different speeds.
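The three energy dependences described above can be compared side by side in a small sketch (my own illustration, arbitrary units, not from the article): the standard mass term gives a phase scaling as L/E, a dimension-three Lorentz-violating operator gives a phase scaling as L alone, and a dimension-four operator gives a phase scaling as L·E.

```python
def osc_phase(L, E, coeff, power):
    """Oscillation-phase scaling (arbitrary units):
    power = -1: standard mass term (phase ~ L/E)
    power =  0: dimension-3 Lorentz-violating term (phase ~ L)
    power = +1: dimension-4 Lorentz-violating term (phase ~ L*E)"""
    return coeff * L * E ** power

L = 1.0
for E in (0.5, 1.0, 2.0):
    print(E,
          osc_phase(L, E, 1.0, -1),   # falls with energy
          osc_phase(L, E, 1.0, 0),    # energy independent
          osc_phase(L, E, 1.0, +1))   # grows with energy
```

Doubling the energy halves the mass-driven phase, leaves the dimension-three phase unchanged, and doubles the dimension-four phase, which is why the energy dependence of oscillation data serves as a diagnostic for Lorentz violation.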
### L − E conflicts

The L − E conflicts refer to null or positive oscillation signals for values of L and E that are not consistent with the Lorentz-invariant explanation. For example, KamLAND and SNO observations[8][9] require a mass-squared difference $\Delta m^2_\odot\simeq8\times10^{-5}\,\mbox{eV}^2$ to be consistent with the Lorentz-invariant phase proportional to L/E. Similarly, Super-Kamiokande, K2K, and MINOS observations[10][11][12] of atmospheric-neutrino oscillations require a mass-squared difference $\Delta m^2_\text{atm}\simeq2.5\times10^{-3}\,\mbox{eV}^2$. Any neutrino-oscillation experiment must be consistent with either of these two mass-squared differences for Lorentz invariance to hold. To date, this is the only class of signal for which there is positive evidence. The LSND experiment observed[13] oscillations leading to a mass-squared difference that is inconsistent with results from solar- and atmospheric-neutrino observations. The oscillation phase requires $\Delta m^2_\text{LSND}\simeq 1\,\mbox{eV}^2$. This anomaly can be understood in the presence of Lorentz violation.

### Periodic variations

Laboratory experiments follow complicated trajectories as the Earth rotates on its axis and revolves around the Sun. Since the fixed SME background fields are coupled with the particle fields, periodic variations associated with these motions would be one of the signatures of Lorentz violation. There are two categories of periodic variations:

1. Sidereal variations: As the Earth rotates, the source and detector for any neutrino experiment will rotate along with it at a sidereal frequency of $\omega_\oplus\sim2\pi/(23\,\mbox{h}\,56\,\mbox{min})$. Since the 3-momentum of the neutrino beam is coupled to the SME background fields, this can lead to sidereal variations in the observed oscillation probability data. Sidereal variations are among the most commonly sought signals in Lorentz tests in other sectors of the SME.
2. Annual variations: Variations with a period of one year can arise due to the motion of the Earth around the Sun. The mechanism is the same as for sidereal variations, arising because the particle fields couple to the fixed SME background fields. These effects, however, are challenging to resolve because they require the experiment to provide data for a comparable length of time. There are also boost effects that arise because the Earth moves around the Sun at more than 30 kilometers per second. However, this is one ten-thousandth of the speed of light, which means the boost effects are suppressed by four orders of magnitude relative to purely rotational effects.

### Compass asymmetries

The breaking of rotation invariance can also lead to time-independent signals arising in the form of directional asymmetries at the location of the detector. This type of signal can cause differences in observed neutrino properties for neutrinos originating from different directions.

### Neutrino-antineutrino mixing

Some of the mSME coefficients lead to mixing between neutrinos and antineutrinos. These processes violate lepton-number conservation, but can readily be accommodated in the Lorentz-breaking SME framework. The breaking of invariance under rotations leads to the non-conservation of angular momentum, which allows a spin flip of the propagating neutrino that can oscillate into an antineutrino. Because of the loss of rotational symmetry, the coefficients responsible for this type of mixing always introduce direction dependence.

### Classic CPT tests

Since CPT violation implies Lorentz violation,[7] traditional tests of CPT symmetry can also be used to search for deviations from Lorentz invariance. This test seeks evidence of $P_{\nu_a\rightarrow\nu_b}\neq P_{\bar\nu_b\rightarrow\bar\nu_a}$. Some subtle features arise.
For example, although CPT invariance implies $P_{\nu_a\rightarrow\nu_b}=P_{\bar\nu_b\rightarrow\bar\nu_a}$, this relation can be satisfied even in the presence of CPT violation.

## Global models of neutrino oscillations with Lorentz violation

Global models are descriptions of neutrino oscillations that are consistent with all the established experimental data: solar, reactor, accelerator, and atmospheric neutrinos. The general SME theory of Lorentz-violating neutrinos has been shown to be very successful as an alternative description of all observed neutrino data. These global models are based on the SME and exhibit some of the key signals of Lorentz violation described in the previous section.

### Bicycle model

The first phenomenological model using Lorentz-violating neutrinos was proposed by Kostelecky and Mewes in a 2004 paper.[14] This so-called bicycle model exhibits direction dependence and has only two parameters (two non-zero SME coefficients), instead of the six of the conventional massive model. One of the main characteristics of this model is that neutrinos are assumed to be massless. This simple model is compatible with solar, atmospheric, and long-baseline neutrino oscillation data. A novel feature of the bicycle model occurs at high energies, where the two SME coefficients combine to create a direction-dependent pseudomass. This leads to maximal mixing and an oscillation phase proportional to L/E, as in the massive case.

### Generalized bicycle model

The bicycle model is an example of a very simple and realistic model that can accommodate most of the observed data using massless neutrinos in the presence of Lorentz violation. In 2007, Barger, Marfatia, and Whisnant constructed a more general version of this model by including more parameters.[15] In this paper, it is shown that a combined analysis of solar, reactor, and long-baseline experiments excluded the bicycle model and its generalization.
Despite this, the bicycle served as a starting point for more elaborate models.

### Tandem model

The tandem model[16] is an extended version of the bicycle presented in 2006 by Katori, Kostelecky, and Tayloe. It is a hybrid model that includes Lorentz violation and also mass terms for a subset of neutrino flavors. It attempts to construct a realistic model by applying a number of desirable criteria. In particular, acceptable Lorentz-violating neutrino models should:

1. be based on quantum field theory,
2. involve only renormalizable terms,
3. offer an acceptable description of the basic features of neutrino-oscillation data,
4. have a mass scale $\lesssim0.1\,\text{eV}$ for seesaw compatibility,
5. involve fewer parameters than the four used in the standard picture,
6. have coefficients for Lorentz violation consistent with a Planck-scale suppression $\lesssim10^{-17}$, and
7. accommodate the LSND signal.

All these criteria are satisfied by the tandem model, which looks like a simple extension of the bicycle. Nevertheless, it involves isotropic coefficients only, which means that there is no direction dependence. The extra term is a massive term that reproduces the L/E phase at low energies observed by KamLAND.[17] It turns out that the tandem model is consistent with atmospheric, solar, reactor, and short-baseline data, including LSND. Besides the consistency with all experimental data, the most remarkable feature of this model is the prediction of a low-energy excess in MiniBooNE. When the tandem is applied to short-baseline accelerator experiments, it is consistent with the KARMEN null result, due to the very short baseline. For MiniBooNE, the tandem model predicted an oscillation signal at low energy that drops off very quickly. The MiniBooNE results, released a year after the tandem model was published, did indeed show an unexplained excess at low energies.
This excess cannot be understood within the standard massive-neutrino model,[18] and the tandem remains one of the best candidates for its explanation.

### Puma model

The puma model was proposed by Diaz and Kostelecky in 2010 as a three-parameter model[19][20] that exhibits consistency with all the established neutrino data (accelerator, atmospheric, reactor, and solar) and naturally describes the anomalous low-energy excess observed in MiniBooNE that is inconsistent with the conventional massive model. This is a hybrid model that includes Lorentz violation and neutrino masses. One of the main differences between this model and the bicycle and tandem models described above is the incorporation of nonrenormalizable terms in the theory, which lead to powers of the energy greater than one. Nonetheless, all these models share the characteristic of having a mixed energy dependence that leads to energy-dependent mixing angles, a feature absent in the conventional massive model. At low energies, the mass term dominates and the mixing takes the tribimaximal form, a widely used matrix postulated to describe neutrino mixing. This mixing, combined with the 1/E dependence of the mass term, guarantees agreement with solar and KamLAND data. At high energies, Lorentz-violating contributions take over, making the contribution of neutrino masses negligible. A seesaw mechanism is triggered, similar to that in the bicycle model, making one of the eigenvalues proportional to 1/E, the energy dependence usually associated with neutrino masses. This feature lets the model mimic the effects of a mass term at high energies despite the fact that there are only non-negative powers of the energy. The energy dependence of the Lorentz-violating terms produces maximal $\nu_\mu\leftrightarrow\nu_\tau$ mixing, which makes the model consistent with atmospheric and accelerator data.
The oscillation signal in MiniBooNE appears because the oscillation phase responsible for the channel $\nu_\mu\rightarrow\nu_e$ grows rapidly with energy, while the oscillation amplitude is large only for energies below 500 MeV. The combination of these two effects produces an oscillation signal in MiniBooNE at low energies, in agreement with the data. Additionally, since the model includes a term associated with a CPT-odd Lorentz-violating operator, different probabilities appear for neutrinos and antineutrinos. Moreover, since the amplitude for $\nu_\mu\rightarrow\nu_e$ decreases for energies above 500 MeV, long-baseline experiments searching for nonzero $\theta_{13}$ should measure different values depending on the energy; more precisely, according to the puma model the MINOS experiment should measure a smaller value than the T2K experiment, which agrees with current measurements.[21][22]

### Isotropic bicycle model

In 2011, Barger, Liao, Marfatia, and Whisnant studied general bicycle-type models (without neutrino masses) that can be constructed using the minimal SME and that are isotropic (direction independent).[23] The results show that long-baseline accelerator and atmospheric data can be described by these models by virtue of the Lorentz-violating seesaw mechanism; nevertheless, there is a tension between solar and KamLAND data. Given this incompatibility, the authors concluded that renormalizable models with massless neutrinos are excluded by the data.

## Theory

From a general, model-independent point of view, neutrinos oscillate because the effective Hamiltonian describing their propagation is not diagonal in flavor space and has a non-degenerate spectrum; in other words, the eigenstates of the Hamiltonian are linear superpositions of the flavor eigenstates of the weak interaction and there are at least two different eigenvalues.
If we find a transformation $U_{a'a}$ that puts the effective Hamiltonian in the flavor basis, $(h_\text{eff})_{ab}$, into the diagonal form $E_{a'b'}=\mathrm{diag}(\lambda_1,\lambda_2,\lambda_3)$ (where the indices $a, b = e, \mu, \tau$ and $a', b' = 1, 2, 3$ denote the flavor and diagonal bases, respectively), then we can write the oscillation probability from a flavor state $|\nu_b\rangle$ to $|\nu_a\rangle$ as $P_{\nu_b\rightarrow\nu_a}=\left|\left\langle \nu_a|\nu_b(L)\right\rangle \right|^{2}=\left|\sum_{a'}U_{a'a}^{*}U_{a'b}\, e^{ -i \lambda_{a'} L }\right|^{2},$ where $\lambda_{a'}$ are the eigenvalues. For the conventional massive model, $\lambda_{a'}=m^2_{a'}/2E$.

In the SME formalism, the neutrino sector is described by a 6-component vector with three active left-handed neutrinos and three right-handed antineutrinos. The effective Lorentz-violating Hamiltonian is a 6 × 6 matrix that takes the explicit form[6] $h_\text{eff}=\begin{pmatrix} |\vec p|&0\\0&|\vec p|\end{pmatrix} +\frac{1}{2|\vec p|}\begin{pmatrix} (\tilde m^2)&0\\0&(\tilde m^2)^*\end{pmatrix} +\frac{1}{|\vec p|}\begin{pmatrix} \widehat{a}_\text{eff}-\widehat{c}_\text{eff} & -\widehat{g}_\text{eff}+\widehat{H}_\text{eff} \\ -\widehat{g}_\text{eff}^\dagger+\widehat{H}_\text{eff}^\dagger & -\widehat{a}_\text{eff}^T-\widehat{c}_\text{eff}^T \end{pmatrix},$ where flavor indices have been suppressed for simplicity. The hats on the elements of the last term indicate that these effective coefficients for Lorentz violation are associated with operators of arbitrary dimension.[6] These elements are in general functions of the energy, the neutrino direction of propagation, and the coefficients for Lorentz violation. Each block corresponds to a 3 × 3 matrix. The 3 × 3 diagonal blocks describe neutrino–neutrino and antineutrino–antineutrino mixing, respectively. The 3 × 3 off-diagonal blocks lead to neutrino–antineutrino oscillations. This Hamiltonian contains the information on the propagation and oscillations of neutrinos.
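The oscillation probability formula above can be exercised numerically for any Hermitian effective Hamiltonian. The following sketch uses made-up toy values in natural units (the function name and numbers are illustrative, not from the article): it diagonalizes $h_\text{eff}$ and builds the amplitude matrix $S=Ve^{-i\Lambda L}V^\dagger$, whose squared entries are the probabilities.

```python
import numpy as np

def oscillation_probability(h_eff, L):
    """P[a, b] = probability of nu_b -> nu_a after baseline L (natural units)."""
    lam, V = np.linalg.eigh(h_eff)              # h_eff = V diag(lam) V^dagger
    # Amplitude S_ab = sum_{a'} V_{a a'} e^{-i lam_{a'} L} V*_{b a'}
    S = V @ np.diag(np.exp(-1j * lam * L)) @ V.conj().T
    return np.abs(S) ** 2

# Toy 2-flavor Hamiltonian (hypothetical numbers, just to exercise the formula)
h = np.array([[0.0, 0.01],
              [0.01, 0.02]])
P = oscillation_probability(h, 100.0)

# Unitarity: probabilities out of each initial flavor sum to one
assert np.allclose(P.sum(axis=0), 1.0)
```

Because $S$ is unitary, each column of $P$ sums to one regardless of the toy numbers chosen.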
In particular, the speed of propagation relevant for time-of-flight measurements can be written $v^\text{of}=1 - \frac{|m_l|^2}{2|\vec p|^2} + \sum_{djm} (d-3) |\vec p|^{d-4} \, Y_{jm}(\hat p) \big[(a_\text{of}^{(d)})_{jm}-(c_\text{of}^{(d)})_{jm}\big],$ which corresponds to the oscillation-free approximation of the Hamiltonian above. In this expression the neutrino speed has been spherically decomposed using the standard spherical harmonics. This expression shows how the neutrino speed can depend on energy and direction of propagation; in general, it can also depend on neutrino flavor. The index d denotes the dimension of the operator that breaks Lorentz symmetry. This form of the neutrino speed shows that faster-than-light neutrinos can naturally be described by the SME.

During the last decade, studies have mainly focused on the minimal sector of the general theory, in which case the Hamiltonian above takes the explicit form[5] \begin{align} (h_\text{eff})_{AB}&=E\begin{pmatrix} \delta_{ab}&0\\0&\delta_{\bar a\bar b}\end{pmatrix} +\frac{1}{2E}\begin{pmatrix} (\tilde m^2)_{ab}&0\\0&(\tilde m^2)_{\bar a\bar b}^*\end{pmatrix} \\ &\quad+\frac{1}{E}\begin{pmatrix}[(a_L)^\alpha p_\alpha-(c_L)^{\alpha\beta} p_\alpha p_\beta]_{ab}& -i\sqrt2p_\alpha(\epsilon_+)_\beta[(g^{\alpha\beta\gamma}p_\gamma-H^{\alpha\beta})]_{a\bar b}\\ i\sqrt2p_\alpha(\epsilon_+)_\beta^*[(g^{\alpha\beta\gamma}p_\gamma-H^{\alpha\beta})]_{\bar ab}^*& [(a_R)^\alpha p_\alpha-(c_R)^{\alpha\beta} p_\alpha p_\beta]_{\bar a\bar b}\end{pmatrix}. \end{align} The indices of this effective Hamiltonian take the six values $A, B = e, \mu, \tau, \bar e, \bar\mu, \bar\tau$ for neutrinos and antineutrinos. The lowercase indices indicate neutrinos ($a, b = e, \mu, \tau$), and the barred lowercase indices indicate antineutrinos ($\bar a, \bar b = \bar e, \bar\mu, \bar\tau$). Notice that the ultrarelativistic approximation $E\simeq|\vec p|$ has been used.
The first term is diagonal and can be removed because it does not contribute to oscillations; however, it can play an important role in the stability of the theory.[24] The second term is the standard massive-neutrino Hamiltonian. The third term is the Lorentz-violating contribution. It involves four types of coefficients for Lorentz violation. The coefficients $(a_L)^\alpha_{ab}$ and $(c_L)^{\alpha\beta}_{ab}$ are of dimension one and zero, respectively. These coefficients are responsible for the mixing of left-handed neutrinos, leading to Lorentz-violating neutrino–neutrino oscillations. Similarly, the coefficients $(a_R)^\alpha_{\bar a\bar b}$ and $(c_R)^{\alpha\beta}_{\bar a\bar b}$ mix right-handed antineutrinos, leading to Lorentz-violating antineutrino–antineutrino oscillations. Notice that these coefficients are 3 × 3 matrices carrying both spacetime (Greek) and flavor (Roman) indices. The off-diagonal blocks involve the dimension-zero coefficients $g^{\alpha\beta\gamma}_{a\bar b}$ and the dimension-one coefficients $H^{\alpha\beta}_{a\bar b}$; these lead to neutrino–antineutrino oscillations. All spacetime indices are properly contracted, forming observer Lorentz scalars. The four-momentum shows explicitly that the direction of propagation couples to the mSME coefficients, generating the periodic variations and compass asymmetries described in the previous section. Finally, note that coefficients with an odd number of spacetime indices appear in operators that break CPT. It follows that the a- and g-type coefficients are CPT-odd; by similar reasoning, the c- and H-type coefficients are CPT-even.

## Applying the theory to experiments

### Negligible-mass description

For most short-baseline neutrino experiments, the ratio of experimental baseline to neutrino energy, L/E, is small, and neutrino masses can be neglected because they are not responsible for the observed oscillations.
In these cases, the possibility exists of attributing observed oscillations to Lorentz violation, even if the neutrinos are massive. This limit of the theory is sometimes called the short-baseline approximation. Caution is necessary at this point because, in short-baseline experiments, masses can become relevant if the energies are sufficiently low. An analysis of this limit, presenting experimentally accessible coefficients for Lorentz violation, first appeared in a 2004 publication.[25] Neglecting neutrino masses, the neutrino Hamiltonian becomes $(h_\text{eff})_{ab}=\frac{1}{E}[(a_L)^\alpha p_\alpha-(c_L)^{\alpha\beta} p_\alpha p_\beta]_{ab}.$ In appropriate cases, the oscillation amplitude can be expanded in the form $S(L)=e^{-ih_\text{eff}L}\simeq 1-ih_\text{eff}L-\frac{1}{2}h^2_\text{eff}L^2+\cdots.$ This approximation is valid if the baseline L is short compared to the oscillation length set by $h_\text{eff}$. Since $h_\text{eff}$ varies with energy, the term short baseline really depends on both L and E. At leading order, the oscillation probability becomes $P_{\nu_b\rightarrow\nu_a}\simeq L^2|(h_\text{eff})_{ab}|^2,\quad a\neq b.$ Remarkably, this mSME framework for short-baseline neutrino experiments, when applied to the LSND anomaly, leads to values of order $10^{-19}\,\text{GeV}$ for $(a_L)^\alpha_{ab}$ and $10^{-17}$ for $(c_L)^{\alpha\beta}_{ab}$. These numbers are in the range of what one might expect from quantum-gravity effects.[25] Data analyses have been performed using the LSND,[26] MINOS,[27][28] MiniBooNE,[29][30] and IceCube[31] experiments to set limits on the coefficients $(a_L)^\alpha_{ab}$ and $(c_L)^{\alpha\beta}_{ab}$. These results, along with experimental results in other sectors of the SME, are summarized in the Data Tables for Lorentz and CPT violation.[32]

### Perturbative Lorentz-violating description

For experiments where L/E is not small, neutrino masses dominate the oscillation effects.
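The leading-order formula can be checked against the exact evolution in a toy two-flavor setup. The numbers below are hypothetical (an off-diagonal element of order $10^{-19}\,\text{GeV}$ and a 30 m baseline converted to natural units); they only illustrate that when the phase $|h_\text{eff}|L$ is small, $P\simeq L^2|(h_\text{eff})_{ab}|^2$ is an excellent approximation.

```python
import numpy as np

a_L = 1e-19                      # toy off-diagonal element of h_eff, GeV
h = np.array([[0.0, a_L],
              [a_L, 0.0]])       # e-mu block, masses neglected

L = 30.0 / 1.97e-16              # 30 m in GeV^-1 (hbar c ~ 197 MeV fm)

lam, V = np.linalg.eigh(h)
S = V @ np.diag(np.exp(-1j * lam * L)) @ V.conj().T
P_exact = abs(S[0, 1]) ** 2             # full two-flavor probability
P_short = L**2 * abs(h[0, 1]) ** 2      # short-baseline approximation

# The phase h*L ~ 0.015 is small, so the two agree closely
assert abs(P_exact - P_short) / P_short < 1e-3
```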
In these cases, Lorentz violation can be introduced as a perturbative effect in the form $h = h_0+\delta h,$ where $h_0$ is the standard massive-neutrino Hamiltonian and $\delta h$ contains the Lorentz-breaking mSME terms. This limit of the general theory was introduced in a 2009 publication,[33] and includes both neutrinos and antineutrinos in the 6 × 6 Hamiltonian formalism above. In this work, the oscillation probability takes the form $P_{\nu_b\rightarrow\nu_a}=P_{\nu_b\rightarrow\nu_a}^{(0)}+P_{\nu_b\rightarrow\nu_a}^{(1)}+P_{\nu_b\rightarrow\nu_a}^{(2)}+\cdots,$ where $P_{\nu_b\rightarrow\nu_a}^{(0)}$ is the standard expression. One of the results is that, at leading order, neutrino and antineutrino oscillations are decoupled from one another; neutrino–antineutrino oscillations are a second-order effect. In the two-flavor limit, the first-order correction introduced by Lorentz violation for atmospheric neutrinos takes the simple form $P_{\nu_\mu\rightarrow\nu_\tau}^{(1)}=-\mathrm{Re}(\delta h_{\mu\tau})\,L\,\sin{(\Delta m^2_{32}L/2E)}.$ This expression shows how the baseline of the experiment can enhance the effects of the mSME coefficients in $\delta h$. This perturbative framework can be applied to most long-baseline experiments, and is also applicable in some short-baseline experiments with low-energy neutrinos. An analysis has been done for several long-baseline experiments (DUSEL, ICARUS, K2K, MINOS, NOvA, OPERA, T2K, and T2KK),[33] showing high sensitivities to the coefficients for Lorentz violation. Data analysis has been performed using the far detector of the MINOS experiment[34] to set limits on the coefficients $(a_L)^\alpha_{ab}$ and $(c_L)^{\alpha\beta}_{ab}$. These results are summarized in the Data Tables for Lorentz and CPT violation.[32]
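The size of this first-order correction can be verified numerically in a toy two-flavor setup with maximal mixing (dimensionless toy numbers of my own; since the sign of $\delta h$ depends on conventions, the comparison below is on the magnitude only).

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
a   = 1e-3        # plays the role of Delta m^2 / 4E (maximal mixing)
eps = 1e-6        # real part of the Lorentz-violating element delta_h
L   = 500.0

def prob(h):
    lam, V = np.linalg.eigh(h)
    S = V @ np.diag(np.exp(-1j * lam * L)) @ V.conj().T
    return abs(S[1, 0]) ** 2

P0 = prob(a * sigma_x)                    # standard sin^2(Delta m^2 L / 4E)
P  = prob((a + eps) * sigma_x)            # perturbed by delta_h = eps * sigma_x
first_order = eps * L * abs(np.sin(2 * a * L))   # |Re(dh)| L |sin(dm^2 L/2E)|

assert abs(abs(P - P0) - first_order) < 1e-6     # agrees up to second order
```

The residual is of order $(\epsilon L)^2$, confirming that the quoted expression is the first-order term.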
## Wednesday, July 2, 2014

### NetApp, CIFS, vFilers and FTP

I had a moment of sheer stupidity dealing with vFilers and CIFS on our NetApp. I was trying to set up FTP to our CIFS share on a non-default vFiler and was getting nowhere fast. Tech support for NetApp left a little something to be desired too, as they really could not seem to get what I was trying to do, and they failed miserably at calling me back with a proper response. I finally figured it out through trial and error, so I hope this helps someone.

The trick is to understand that when you create another vFiler, you have to run all the commands for that vFiler (the `ftpd` and `cifs` commands) in the context of the newly created vFiler, and all the files that need to be edited are edited under the new vFiler as well.

I logged into the NetApp and typed `vfiler status`, then hit Enter. This gave me the names of the running vFilers:

    TESTSAN1> vfiler status
    vfiler0          running
    vfiler_test      running

I then changed the context to the "test" vFiler. This is where my confusion (and, it seems, NetApp technical support's) came in:

    TESTSAN1> vfiler context vfiler_test

The prompt at this point changes to the new vFiler:

    vfiler_test@TESTSAN1>

At this point you begin to make the changes needed to enable FTP on your vFiler. I typed `options ftpd` to get a listing of all the possible configuration settings for the ftpd service.
    vfiler_test@TESTSAN1> options ftpd
    ftpd.3way.enable              off
    ftpd.anonymous.enable         off
    ftpd.anonymous.home_dir
    ftpd.anonymous.name           anonymous
    ftpd.auth_style               ntlm
    ftpd.bypass_traverse_checking off
    ftpd.dir.override             /vol/TEST_DATAVOL
    ftpd.dir.restriction          off
    ftpd.enable                   off
    ftpd.locking                  none
    ftpd.log.enable               on
    ftpd.log.filesize             512k
    ftpd.log.nfiles               6
    ftpd.tcp_window_size          28960

I enabled the ftpd service first:

    vfiler_test@TESTSAN1> options ftpd.enable on

I then changed the FTP authentication style. For my environment ntlm is what we needed, but you can use unix, ntlm or mixed:

    vfiler_test@TESTSAN1> options ftpd.auth_style ntlm

If you are using ntlm you have to specify the CIFS home directory in the /etc/cifs_homedir.cfg file, which is located in the `etc$` share of the CIFS. In my case the path was `\\TEST\etc$`. I opened the path in Windows Explorer and used a text editor to edit the file. Using the examples provided in the file I was able to set the path and save the file back to the same place in the `etc$` share. Once you have specified the CIFS home directory, run `cifs homedir load`. At this point you can make any other changes that you need, such as `ftpd.locking` or `ftpd.dir.override`. I was now able to successfully connect to the CIFS share, and as long as I have the proper NTFS permissions I can FTP files to the locations I need.
# indecomposable $K[G]$ modules -> get irreducible $K[G]$ modules

If I have found the indecomposable $K[G]$-modules, are there any techniques to obtain from these the irreducible $K[G]$-modules? (e.g. for $K=\mathbb{Z}/p \mathbb{Z}$ and $G=C_p$)

regards, Khanna

-

The specific case you mention is easy, since any irreducible representation of a finite $p$-group over a field of characteristic $p$ is trivial (see for example Gorenstein's Finite Groups) – Tobias Kildetoft Jul 10 '12 at 10:46

I think what you want to do translates into "finding the decomposition matrix of $KG$", and this is in general very hard when $\mathrm{char}\, K$ divides $|G|$; otherwise $KG$ is semisimple, so indecomposable and irreducible coincide. – Aaron Jul 10 '12 at 15:07
# Some (Asymptotic) Bounds on Exponential Sums

Sorry if this is a duplicate, but I can't find it anywhere. I'm learning some additive number theory and am stuck on showing some bounds. Let $$S(\alpha, X) = \displaystyle \sum_{x=1}^{X} e(\alpha x)$$ (where $$e(\beta)$$ just means $$e^{2 \pi i \beta}$$).

First, I need that $$|S(\alpha, X)| \ll (2|| \alpha||)^{-1},$$ where $$|| \alpha|| = |\alpha \bmod 1|$$. Here is what I've gathered: $$S(\alpha, X) = \frac{e(\alpha)-e(\alpha(X+1))}{1-e(\alpha)}$$ by the geometric series formula. Then the squared magnitude is $$|S(\alpha, X)|^2 = \frac{2-e(\alpha X)-e(-\alpha X)}{2-e(\alpha)-e(-\alpha)} = \frac{2-2\cos(2 \pi \alpha X)}{2-2 \cos (2 \pi \alpha)}.$$ I'm not sure where to go from here. I thought I remembered at one point getting an expression with a single $$\sin (2\pi \alpha)$$, which is easy to compare to $$||\alpha||^{-1}$$.

The other bound I'm stuck with is $$\int_{0}^{1} |S(\alpha, X)| \text{ d} \alpha \ll \log (2X).$$ Again, I'm not really sure how to proceed except by fiddling with the cosine expression I got above, and I have no idea how to relate that to something like $$\log (2X)$$.

For the first bound, note that $$2-2 \cos (2 \pi \alpha)=4\sin^2{(\pi \alpha)}$$ and $$4\sin^2{(\pi \alpha)} \ge ||\alpha||^2$$, while the numerator is bounded by $$4.$$

For the second bound, we use the first bound and notice that if $$\frac{1}{X} \le \alpha \le 1-\frac{1}{X}$$ we can use the first bound in the integral, but for $$\alpha$$ small or near $$1$$ it is better to use the trivial bound $$X$$, so the integral is majorized by: $$\int_0^{\frac{1}{X}}X\,du+\int_{\frac{1}{X}}^{\frac{1}{2}}\frac{4}{u}\,du+\int_{\frac{1}{2}}^{1-\frac{1}{X}}\frac{4}{1-u}\,du+\int_{1-\frac{1}{X}}^{1}X\,du=2+8\log X-8\log 2\ll \log X.$$
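The first bound can also be sanity-checked numerically. This quick sketch uses the sharper elementary fact $|\sin(\pi\alpha)|\ge 2\|\alpha\|$, which yields the literal inequality $|S(\alpha,X)|\le (2\|\alpha\|)^{-1}$:

```python
import cmath, math

def S(alpha, X):
    """Partial exponential sum sum_{x=1}^X e^{2 pi i alpha x}."""
    return sum(cmath.exp(2j * math.pi * alpha * x) for x in range(1, X + 1))

def frac_norm(alpha):
    """||alpha|| = distance from alpha to the nearest integer."""
    return abs(alpha - round(alpha))

# Check |S(alpha, X)| <= 1/(2||alpha||) over a grid of non-integer alpha
for X in (10, 100):
    for k in range(1, 50):
        alpha = k / 101.0
        assert abs(S(alpha, X)) <= 1.0 / (2.0 * frac_norm(alpha)) + 1e-9
```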
# Euler's constant: An improved sequence Remember Euler's number, $e=2.71828$… ? One of the Bernoulli boys showed that it's the limit of $(1 + 1/n)^n$ as $n$ goes to infinity. But if $n$ goes to infinity then we should be able to add an arbitrary constant $c$ to the denominator without changing the result. So, more generally, $$e = \lim_{n\rightarrow \infty} \left(1+\frac{1}{n+c}\right)^n.$$ The question that came to my mind then is, what is the “best” constant to choose? It turns out you can show it's $c=-1/2$. In other words, the limit of $(1+1/(n-1/2))^n$ converges to $e$ faster than Bernoulli's formula (or any other $c$). In fact, it's 99% accurate for $n=3$ (versus $n=50$ for Bernoulli). # Derivation Here's how I figured it out. Let's call the $n$-th number in the sequence $E_n$: $$E_n = \left(1+\frac{1}{n+c}\right)^n.$$ Ideally, we want $E_n=e$ for all $n$. But then $c$ is no longer a constant. In fact, we can isolate $c$ in the above equation (with $E_n=e$) to find out how $c$ would depend on $n$: $$c(n) = \left( e^{1/n} - 1 \right)^{-1} - n.$$ Now we want to know if $c$ converges to a constant as $n\rightarrow\infty$. But that's tricky. It becomes much simpler if we take $u=1/n$ and look at what happens as $u\rightarrow 0$. $$c(u) = \left( e^u - 1 \right)^{-1} - \frac{1}{u}.$$ Then we can expand $c$ as a Taylor series around $u=0$ (effectively, a Taylor expansion around $n=\infty$, which is pretty cool!) to get $$c(u) \approx -\frac{1}{2} + \frac{u}{12} + \cdots$$ So the best choice as a constant for large $n$ is $c=-1/2$, which gives the sequence $$E_n^{(1)} = \left(1+\frac{1}{n - 1/2}\right)^n = \left(\frac{2 n + 1}{2 n - 1}\right)^n.$$ Including higher order terms in the approximation allows us to find sequences that converge even faster!
For example, the next order approximation would be $c(n) = -1/2 + 1/(12 n)$, which would give a sequence $$E_n^{(2)} = \left( \frac{12 n^2 + 6 n + 1}{12 n^2 - 6 n + 1} \right)^n.$$ It's not as pretty an expression but it converges very quickly! It's already more than 99.8% accurate for $n=1$! (For $n=1$ the result simplifies to the fraction $E_1^{(2)}=19/7\approx 2.714$.)

# Summary

I found replacing $c=0$ in the sequence $\left(1+\frac{1}{n+c}\right)^n$ with $c=-1/2$ makes it converge to Euler's number much faster as $n\rightarrow \infty$. Does it matter? Probably not. But I sure had a fun afternoon!

Rik Blok 2014
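As a numerical footnote, the accuracies quoted above are easy to verify (a small sketch of the three sequences discussed in the post):

```python
import math

def E(n, c=0.0):
    """The sequence (1 + 1/(n + c))^n."""
    return (1 + 1 / (n + c)) ** n

def E2(n):
    """The next-order sequence from c(n) = -1/2 + 1/(12n)."""
    return ((12 * n * n + 6 * n + 1) / (12 * n * n - 6 * n + 1)) ** n

e = math.e
assert abs(E(3, -0.5) - e) / e < 0.01      # c = -1/2: 99% accurate at n = 3
assert abs(E(3) - e) / e > 0.01            # Bernoulli's c = 0 is not...
assert abs(E(50) - e) / e < 0.01           # ...until about n = 50
assert abs(E2(1) - 19 / 7) < 1e-12         # E_1^(2) = 19/7 exactly
assert abs(E2(1) - e) / e < 0.002          # 99.8% accurate already at n = 1
```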
# Tensor Product in Quantum Computation

I cannot understand the following equality $$\langle ij|(|0\rangle \langle 0|\otimes I)|kl \rangle= \langle i|0\rangle \langle 0|k \rangle \langle j|I|l \rangle?$$ Also, to estimate the phase $\phi$ in the Nielsen & Chuang book, I cannot understand why $(|0 \rangle + e^{2\pi i 2^{t-1}\phi} |1 \rangle)(|0 \rangle + e^{2\pi i2^{t-2}\phi }|1 \rangle)\cdots (|0 \rangle + e^{2\pi i 2^{0}\phi} |1 \rangle)= \displaystyle\sum_{k=0}^{2^t-1}e^{2\pi i \phi k} |k\rangle$. Will you kindly help me?

-

Isn't there an error in the powers? should be $|0 \rangle + e^{2\pi i 2^{t-1}\phi} |1 \rangle$ instead, right? – Ran G. Oct 27 '12 at 6:49

Yes, there was a mistake. – user12290 Oct 30 '12 at 19:53

I changed it, verify that it is correct (you can edit if there's still an error) – Ran G. Oct 31 '12 at 1:33

A tensor product of operations, $I\otimes J$ say, acts on each subsystem separately: if $\phi$ and $\psi$ are states and $I$ and $J$ are operators, then $$(I\otimes J)(\phi\otimes \psi) = (I\phi) \otimes (J\psi).$$ In bra-ket notation the state $\phi\otimes \psi$ can be denoted $|\phi\rangle|\psi\rangle$. In your first equation, the $\langle i|0\rangle\langle 0|k\rangle$ factor and the $\langle j|I|l\rangle$ factor just separate in this way.

The algebra behind the second equation is basically: $$(1+x^{2^0})\dots(1+x^{2^{t-1}})=1+x+x^2+x^3+\dots+x^{2^t-1},$$ except that the "1" is replaced by $| 0\rangle$ and the $x$ by $e^{2\pi i \phi}|1\rangle$ (clash of my notation: $\phi$ is now a number). The only difference is that the multiplication is really a tensor product, with the exponents combining as binary digits of $k$: for example, with $t=2$, $x^{2^1}\cdot x^{2^0}=x^3$ corresponds to $|1\rangle\otimes|1\rangle=|3\rangle$.
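The separation rule in the answer above can be verified directly with `numpy` (a small sketch with random states and operators; all names here are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.standard_normal(2) + 1j * rng.standard_normal(2)   # state of system 1
psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)   # state of system 2
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# (A tensor B)(phi tensor psi) equals (A phi) tensor (B psi)
lhs = np.kron(A, B) @ np.kron(phi, psi)
rhs = np.kron(A @ phi, B @ psi)
assert np.allclose(lhs, rhs)
```

`np.kron` implements the Kronecker (tensor) product with the same ordering convention on operators and flattened state vectors, which is why both sides agree entrywise.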
# Is it possible to 'approximate' compact, convex sets in $\ell^2$ by the Hilbert cube

Define $H=\{(x_n)_n\in\ell^2:|x_n|\le \frac1n, n\in\mathbf N\}\subset\ell^2$. This set is known as the Hilbert cube, and it is well known that $H$ is compact, convex and non-empty. Let $\overline{\mathrm{conv}}(C)$ denote the closure of the convex hull of a subset $C\subset\ell^2$. Suppose $S$ is a non-empty, compact, convex subset of $\ell^2$; is it possible to write $$S=\overline{\mathrm{conv}}\left(\bigcup_{n=1}^\infty[ S\cap(n\cdot H)]\right),$$ where (for $n\in\mathbf N$ fixed) $n\cdot H=\{n\cdot x:x\in H\}$? I think it is possible (since the Hilbert cube keeps getting 'thinner' in each coordinate), but I do not know how to prove it.

-

Take $S:=\{x\}$, where $x_k=\frac{\ln k}k$. Then $S\cap nH$ is empty for all $n$: the only candidate is $x$ itself, and $x\in nH$ would require $\ln k\le n$ for all $k$, which fails as soon as $k>e^n$.
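The counterexample can be probed numerically (a quick sketch: the partial sums of $\sum(\ln k/k)^2$ stabilize, so $x\in\ell^2$, while $\ln k\le n$ fails for $k>e^n$):

```python
import math

# x_k = ln(k)/k is square-summable, so x lies in l^2:
s1 = sum((math.log(k) / k) ** 2 for k in range(1, 10_001))
s2 = sum((math.log(k) / k) ** 2 for k in range(1, 100_001))
assert s2 - s1 < 0.02                 # the tail of the series is tiny

# ...but x is in n*H only if ln k <= n for EVERY k, which fails for k > e^n:
for n in range(1, 10):
    k = math.ceil(math.exp(n)) + 1
    assert math.log(k) > n
```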
# Thread: Number of ways of selecting r letters?

1. ## Number of ways of selecting r letters?

If out of $3n$ letters there are $n\ As,n\ Bs\ \mbox{and}\ n\ Cs$, show that the number of ways of selecting $r$ letters out of these is the same as selecting $(3n - r)$ letters out of them. If $n < r < 2n + 1$, show that the number of ways of selecting $r$ letters is given by $\frac{1}{2}(n + 1)(n + 2) + (r - n)(2n - r)$

2. When you select $r$ letters, you leave behind $(3n-r)$ letters, and this complementation is a bijection between selections. Hence the number of ways of selecting $r$ letters is the same as the number of ways of leaving behind $(3n-r)$ letters, i.e. of selecting $(3n-r)$ letters.

Let $n_{\mathrm A},\,n_{\mathrm B}$ be the number of A's and B's respectively in the selection, and write $|n_{\mathrm B}|$ for the number of allowed values of $n_{\mathrm B}$.

(1) $0\le n_{\mathrm A}\le r-n-1$

When $n_{\mathrm A}=0,$ the minimum value of $n_{\mathrm B}$ is $r-n$ (since there are at most $n$ C's) and the maximum is $n.$ Hence there are $2n-r+1$ values for $n_{\mathrm B}$ when $n_{\mathrm A}=0.$ When $n_{\mathrm A}=1,$ $r-n-1\le n_{\mathrm B}\le n$, so $n_{\mathrm B}$ can take $2n-r+2$ values. Continuing, we have that when $n_{\mathrm A}=2,$ $|n_{\mathrm B}|=2n-r+3;$ ….
When $n_{\mathrm A}=r-n-1,$ $|n_{\mathrm B}|=2n-r+r-n=n.$ Hence the number of selections in which $0\le n_{\mathrm A}\le r-n-1$ is $\sum_{k\,=\,1}^{r-n}(2n-r+k)=(2n-r)(r-n)+\frac12(r-n)(r-n+1).$

(2) $r-n\le n_{\mathrm A}\le n$

When $n_{\mathrm A}=r-n,$ $n_{\mathrm B}$ can range from 0 to $n,$ so $|n_{\mathrm B}|=n+1.$ When $n_{\mathrm A}=r-n+1,$ $0\le n_{\mathrm B}\le n-1$, so $|n_{\mathrm B}|=n.$ Hence: when $n_{\mathrm A}=r-n+2,$ $|n_{\mathrm B}|=n-1,$ …, when $n_{\mathrm A}=n,$ $|n_{\mathrm B}|=r-n+1.$ Hence the number of selections in which $r-n\le n_{\mathrm A}\le n$ is $\sum_{k\,=\,r-n+1}^{n+1}k=\frac12(2n-r+1)(r+2).$

Now if you add the results in (1) and (2), you should find that $\frac12(r-n)(r-n+1)+\frac12(2n-r+1)(r+2)$ simplifies to $\frac12(n+1)(n+2),$ so the total is $\frac12(n+1)(n+2)+(r-n)(2n-r),$ as required.
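Both claims are easy to confirm by brute force, counting the triples $(n_{\mathrm A}, n_{\mathrm B}, n_{\mathrm C})$ directly (a quick sketch; the function name is my own):

```python
def count_selections(n, r):
    """Selections of r letters from n A's, n B's, n C's, i.e. triples
    (n_A, n_B, n_C) with n_A + n_B + n_C = r and each count in [0, n]."""
    return sum(1
               for n_A in range(n + 1)
               for n_B in range(n + 1)
               if 0 <= r - n_A - n_B <= n)

for n in range(1, 8):
    # complement symmetry: choosing r letters = leaving behind 3n - r
    for r in range(0, 3 * n + 1):
        assert count_selections(n, r) == count_selections(n, 3 * n - r)
    # closed form for n < r < 2n + 1
    for r in range(n + 1, 2 * n + 1):
        formula = (n + 1) * (n + 2) // 2 + (r - n) * (2 * n - r)
        assert count_selections(n, r) == formula
```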
The Abbe principle of alignment is named after the German professor Ernst Abbe, who in 1890 proposed a set of rules for taking linear measurements. The principle consists of the following 3 points:

1. For best results, the scale of the instrument should be placed in line with the dimension being measured on the object.
2. In case the above is not possible, the measurement can be taken at a distance, with the scale parallel to the line being measured. The distance separating the object and the scale is known as the Abbe offset. As long as the scale remains parallel, the offset introduces no more than a second-order error, which is usually negligible.
3. If the parallelism between the object and the measuring instrument is not maintained, a first-order error is introduced. This error is a function of both the angle the scale makes with the object and the distance separating the two. It is known as the Abbe error, a first-order sine error (in contrast to the second-order cosine error). It can be calculated using:

$${\epsilon} = d\,{\sin}{\theta}$$

where $d$ is the Abbe offset (the distance between the scale and the measured line) and $\theta$ is the misalignment angle. It is important to note that the error amplifies with both the distance and the angle.

### Practical Implications

By design, the vernier caliper does not conform to Abbe's rule of alignment: its scale lies to the side of the measuring jaws. It is therefore possible to introduce Abbe errors when taking measurements with one. The micrometer, on the other hand, follows the principle, since its spindle and scale are in line with the measured dimension, so no error of this type is introduced when using it.
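As a quick numerical illustration (the offset and angle here are hypothetical values, not from the article): with an Abbe offset of 20 mm and a misalignment of only 0.5°, the formula above already predicts an error of roughly 0.17 mm, and increasing either the offset or the angle increases the error:

```python
import math

def abbe_error(offset, theta_deg):
    """First-order Abbe error e = d * sin(theta) for an offset d
    (in any length unit) and a misalignment angle in degrees."""
    return offset * math.sin(math.radians(theta_deg))

e = abbe_error(20.0, 0.5)    # 20 mm offset, 0.5 degree tilt
print(f"{e:.4f} mm")         # about 0.1745 mm
# the error amplifies with both the distance and the angle
assert abbe_error(40.0, 0.5) > e
assert abbe_error(20.0, 1.0) > e
```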
NAG Library Function Document
nag_shapiro_wilk_test (g01ddc)

1  Purpose

nag_shapiro_wilk_test (g01ddc) calculates Shapiro and Wilk's $W$ statistic and its significance level for testing Normality.

2  Specification

#include <nag.h>
#include <nagg01.h>

void nag_shapiro_wilk_test (Integer n, const double x[], Nag_Boolean calc_wts, double a[], double *w, double *pw, NagError *fail)

3  Description

nag_shapiro_wilk_test (g01ddc) calculates Shapiro and Wilk's $W$ statistic and its significance level for any sample size between $3$ and $5000$. It is an adaptation of the Applied Statistics Algorithm AS R94; see Royston (1995). The full description of the theory behind this algorithm is given in Royston (1992).

Given a set of observations $x_1, x_2, \dots, x_n$ sorted into either ascending or descending order (nag_double_sort (m01cac) may be used to sort the data), this function calculates the value of Shapiro and Wilk's $W$ statistic, defined as:

$W = \frac{\left(\sum_{i=1}^{n} a_i x_i\right)^2}{\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2},$

where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the sample mean and $a_i$, for $i = 1,2,\dots,n$, are a set of ‘weights’ whose values depend only on the sample size $n$.

On exit, the values of $a_i$, for $i = 1,2,\dots,n$, are only of interest should you wish to call the function again to calculate w and its significance level for a different sample of the same size.

It is recommended that the function is used in conjunction with a Normal (Q–Q) plot of the data. Function nag_normal_scores_exact (g01dac) can be used to obtain the required Normal scores.

4  References

Royston J P (1982) Algorithm AS 181: the $W$ test for normality Appl. Statist. 31 176–180

Royston J P (1986) A remark on AS 181: the $W$ test for normality Appl. Statist.
35 232–234

Royston J P (1992) Approximating the Shapiro–Wilk $W$ test for non-normality Statistics and Computing 2 117–119

Royston J P (1995) A remark on AS R94: A remark on Algorithm AS 181: the $W$ test for normality Appl. Statist. 44(4) 547–551

5  Arguments

1: $\mathbf{n}$ – Integer – Input

On entry: $n$, the sample size.
Constraint: $3\le {\mathbf{n}}\le 5000$.

2: $\mathbf{x}[{\mathbf{n}}]$ – const double – Input

On entry: the ordered sample values, $x_i$, for $i=1,2,\dots,n$.

3: $\mathbf{calc\_wts}$ – Nag_Boolean – Input

On entry: must be set to Nag_TRUE if you wish nag_shapiro_wilk_test (g01ddc) to calculate the elements of a. calc_wts should be set to Nag_FALSE if you have saved the values in a from a previous call to nag_shapiro_wilk_test (g01ddc). If in doubt, set calc_wts equal to Nag_TRUE.

4: $\mathbf{a}[{\mathbf{n}}]$ – double – Input/Output

On entry: if calc_wts has been set to Nag_FALSE then before entry a must contain the $n$ weights as calculated in a previous call to nag_shapiro_wilk_test (g01ddc); otherwise a need not be set.
On exit: the $n$ weights required to calculate w.

5: $\mathbf{w}$ – double * – Output

On exit: the value of the statistic, w.

6: $\mathbf{pw}$ – double * – Output

On exit: the significance level of w.

7: $\mathbf{fail}$ – NagError * – Input/Output

The NAG error argument (see Section 2.7 in How to Use the NAG Library and its Documentation).

6  Error Indicators and Warnings

NE_ALL_ELEMENTS_EQUAL
On entry, all elements of x are equal.

NE_ALLOC_FAIL
Dynamic memory allocation failed. See Section 2.3.1.2 in How to Use the NAG Library and its Documentation for further information.

NE_BAD_PARAM
On entry, argument ⟨value⟩ had an illegal value.

NE_INT_ARG_GT
On entry, n = ⟨value⟩. Constraint: n ≤ ⟨value⟩.

NE_INT_ARG_LT
On entry, n = ⟨value⟩. Constraint: n ≥ 3.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
An unexpected error has been triggered by this function. Please contact NAG. See Section 2.7.6 in How to Use the NAG Library and its Documentation for further information.

NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly. See Section 2.7.5 in How to Use the NAG Library and its Documentation for further information.

NE_NON_MONOTONIC
On entry, elements of x not in order. x[⟨value⟩] = ⟨value⟩, x[⟨value⟩] = ⟨value⟩, x[⟨value⟩] = ⟨value⟩.

7  Accuracy

There may be a loss of significant figures for large $n$.

8  Parallelism and Performance

nag_shapiro_wilk_test (g01ddc) is not threaded in any implementation.

9  Further Comments

The time taken by nag_shapiro_wilk_test (g01ddc) depends roughly linearly on the value of $n$.

For very small samples the power of the test may not be very high.

The contents of the array a should not be modified between calls to nag_shapiro_wilk_test (g01ddc) for a given sample size, unless calc_wts is reset to Nag_TRUE before each call of nag_shapiro_wilk_test (g01ddc).

The Shapiro and Wilk $W$ test is very sensitive to ties. If the data have been rounded, the test can be improved by using Sheppard's correction to adjust the sum of squares about the mean. This produces an adjusted value of w,

$W_A = W\,\frac{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2}{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2-\frac{n-1}{12}\,\omega^2},$

where $\omega$ is the rounding width. $W_A$ can be compared with a standard Normal distribution, but a further approximation is given by Royston (1986).

If n > 5000, a value for w and pw is returned, but its accuracy may not be acceptable. See Section 4 for more details.
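The definition of $W$ in Section 3 is simple to reproduce outside the library once the weights are fixed. A minimal Python sketch, not the NAG implementation: the weights for $n = 3$ ($\pm 1/\sqrt{2}$ and $0$) are the exact values from the original Shapiro and Wilk tables rather than the AS R94 approximation, and the function name is mine:

```python
import math

def shapiro_wilk_w(x_sorted, a):
    """W = (sum_i a_i * x_i)^2 / sum_i (x_i - xbar)^2
    for data already sorted into ascending order."""
    n = len(x_sorted)
    xbar = sum(x_sorted) / n
    num = sum(ai * xi for ai, xi in zip(a, x_sorted)) ** 2
    den = sum((xi - xbar) ** 2 for xi in x_sorted)
    return num / den

# exact weights for a sample of size 3 (Shapiro and Wilk, 1965)
a3 = (-1.0 / math.sqrt(2.0), 0.0, 1.0 / math.sqrt(2.0))

# three equally spaced points lie exactly on a straight line in a
# normal Q-Q plot, so W attains its maximum value of 1
w = shapiro_wilk_w([0.0, 1.0, 2.0], a3)
print(round(w, 6))  # 1.0
```

Because the weights sum to zero and have unit sum of squares, $W$ is invariant under shifts and rescalings of the data, which the sketch also exhibits.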
10  Example

This example tests the following two samples (each of size $20$) for Normality.

Sample 1: $0.11$, $7.87$, $4.61$, $10.14$, $7.95$, $3.14$, $0.46$, $4.43$, $0.21$, $4.75$, $0.71$, $1.52$, $3.24$, $0.93$, $0.42$, $4.97$, $9.53$, $4.55$, $0.47$, $6.66$

Sample 2: $1.36$, $1.14$, $2.92$, $2.55$, $1.46$, $1.06$, $5.27$, $-1.11$, $3.48$, $1.10$, $0.88$, $-0.51$, $1.46$, $0.52$, $6.20$, $1.69$, $0.08$, $3.67$, $2.81$, $3.49$

The elements of a are calculated only in the first call of nag_shapiro_wilk_test (g01ddc), and are re-used in the second call.

10.1  Program Text
Program Text (g01ddce.c)

10.2  Program Data
Program Data (g01ddce.d)

10.3  Program Results
Program Results (g01ddce.r)