http://gradestack.com/MCAT-Complete-Tutor/Warm-Up-Drill-I/17293-3351-19769-test-wtbh
# Warm-Up Drill-I

Directions: Find, then underline, the conclusion to each of the following arguments. If an argument does not state the conclusion, complete it with the most natural conclusion.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936651945114136, "perplexity": 1385.0587002111808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189471.55/warc/CC-MAIN-20170322212949-00431-ip-10-233-31-227.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/56532/structure-sheaf-of-a-point
# Structure sheaf of a point

I have the following problem: Let $X$ be a scheme and $x$ a closed point on it. If $F$ is a sheaf on $X$ with nontrivial stalk $F_x$ at $x$, then one has a canonical surjective morphism $F\rightarrow G$, where $G$ is the structure sheaf of the point $x$. I don't understand how to describe $G$, where this morphism comes from, or why it should be surjective. Perhaps someone can explain this in detail for me. Thank you very much.

Dear Descartes, What makes you think such a canonical map exists? (It doesn't.) Also, what do you mean by the structure sheaf of $x$? Are you thinking of $x$ as the underlying topological space of Spec $\kappa(x)$? Regards, –  Matt E Aug 9 '11 at 15:17

Well, I mean $x$ as a closed subscheme. –  Descartes Aug 9 '11 at 16:05

Dear Descartes, Yes, but which scheme structure? The reduced one? Regards, –  Matt E Aug 9 '11 at 16:38

@Descartes: If this is coming from a textbook or some lecture notes, can you give a citation? It may be more helpful to see it in context. If I had to guess, I would say that $G$ is the direct image sheaf under the obvious morphism $\{ x \} \hookrightarrow X$. (This is also known as the skyscraper sheaf.) –  Zhen Lin Aug 9 '11 at 16:46

Well, the proof where it occurs is rather involved, but it comes down to the following: you have a sheaf $F$; take a closed point in its support, and then you have this morphism from $F$ to $k(x)$. I have called $G$ what the author calls $k(x)$, but he doesn't say what he means by that. –  Descartes Aug 9 '11 at 17:00

Answer: The map exists (under some hypotheses --- e.g. $\mathcal F$ is a coherent sheaf of $\mathcal O_X$-modules), but is not canonical. We can form the stalk $\mathcal F_x$, which is a module over $\mathcal O_{X,x}$. We can then form the fibre $\mathcal F_x /\mathfrak m_x \mathcal F_x$, which is a vector space over $\kappa(x)$. If the original sheaf $\mathcal F$ is coherent, and $x$ is in its support, then the fibre will be non-zero. Any non-zero vector space over $\kappa(x)$ admits a surjective map to $\kappa(x)$. (If a vector space $V$ is non-zero, then the same is true of its dual.) Choosing such a non-zero map $\mathcal F_x/\mathfrak m_x \mathcal F_x \to \kappa(x)$, we obtain the map $\mathcal F \to \mathcal G$ that you are asking about.
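For reference (standard material, not from the thread): writing $i \colon \{x\} \hookrightarrow X$ for the inclusion of the closed point, the skyscraper sheaf mentioned in the comments is the direct image $i_*\kappa(x)$, with sections
$$(i_*\kappa(x))(U) = \begin{cases} \kappa(x) & \text{if } x \in U,\\ 0 & \text{otherwise}.\end{cases}$$
Its stalk at $x$ is $\kappa(x)$, and its stalk vanishes at any point that has a neighbourhood missing $x$.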
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984805703163147, "perplexity": 205.9604293789714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256737.1/warc/CC-MAIN-20140728011736-00418-ip-10-146-231-18.ec2.internal.warc.gz"}
http://mathoverflow.net/revisions/60398/list
I'm pretty certain the answer is no provided $n \geq 3$. Let $X$ be the Whitehead manifold: http://en.wikipedia.org/wiki/Whitehead_manifold It's a contractible open 3-manifold which is not homeomorphic to $\mathbb R^3$. $X^2$ I claim is homeomorphic to $\mathbb R^6$. I don't have a slick proof of this. The idea is, ask yourself if you can put a boundary on $X^2$ to make $X^2$ the interior of a compact manifold with boundary. Larry Siebenmann's dissertation says if $X^2$ is simply-connected at infinity, you're okay. But the fundamental group of the end of $X^2$ has a presentation of the form $(\pi_1 X * \pi_1 X) / \langle \pi_1 X^2 \rangle$, where $*$ denotes free product and the angle brackets denote "normal closure". This is the trivial group. Once you have it as the interior of a compact manifold with boundary, the h-cobordism theorem kicks in and tells you this manifold is $D^6$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335166215896606, "perplexity": 486.90896176336815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382450/warc/CC-MAIN-20130516092622-00017-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.r-bloggers.com/2021/03/compositional-scalar-on-function-regression-as-a-tool-not-only-for-geological-data/
Compositional data are characterized by the fact that the relevant information is contained not necessarily in the absolute values but rather in the relative proportions between particular components. As an example, take household expenditures for different purposes (housing, groceries, travel etc.) or the geochemical composition of a certain soil sample. In the latter case, the resulting composition of chemical elements is determined strongly by the particle size distribution (PSD, i.e., the distribution of the size of soil grains). These distributions – although sampled in their discrete form as histogram data – show both relative and functional character and therefore can be described through probability density functions. A valid question to ask is how to modify the common multiple and/or functional regression model for this case of relative (compositional) data.

## Functional regression model

First, take a look at the standard functional data analysis (FDA) approach, which was developed for functions from the $$L^2$$ space. A functional linear regression model with functional predictor is built as $y_{i} = \beta_{0} + \int_{I} \beta_{1}(t)f_{i}(t)dt + \epsilon_{i},\quad i=1,\dots,N,\quad t \in I$ where $$\beta_{0}$$ is the scalar intercept and $$\beta_{1}$$ represents the functional regression parameter. This model can be seen as an extension of multiple regression – therefore, the estimators $$\hat{\beta_{0}}$$ and $$\hat{\beta_{1}}$$ minimize the following sum of squared errors (SSE) $\text{SSE} (\beta_{0},\beta_{1}) = \sum_{i=1}^{N}\left(y_{i}-\beta_{0}-\int_{I}\beta_{1}(t)f_{i}(t)dt\right)^2.$ Unfortunately, it is not common for functional data to be available in continuous form – we are usually left with discrete observations. To represent the sparsely measured data as functions, a proper basis expansion for both the predictors and $$\beta_{1}$$ is necessary. This way, a reduction to a multivariate problem can be achieved. Furthermore, it is useful to apply the results of functional principal component analysis (FPCA) to project the data into a lower-dimensional space.

But how can we use these ideas and adapt them for the situation where the covariate consists of density functions? As each PSD forms a probability density function on the considered support, the specific properties of densities (scale invariance, relative scale, unit integral) prevent standard FDA methods from being applied directly to PSDs. Instead, we use the possibility of representing density functions in the Bayes space $$\mathcal{B^2}$$ with square-integrable log-densities, as they can then be adequately represented in the $$L^2$$ space due to the isomorphism between $$\mathcal{B^2}$$ and $$L^2$$. Frequently, the centered log-ratio (clr) transformation $\text{clr}(f)(t):=f_{c}(t)= \ln f(t) - \frac{1}{\eta}\int_{I}\ln f(t)\, dt$ is applied to the original densities, with $$\eta$$ representing the length of their common (bounded) support $$I$$. It can be shown that the clr transformation of densities enforces the resulting functions to integrate to 0 on $$I$$. To represent the original data in continuous form while fulfilling the zero-integral constraint, the so-called compositional splines were developed (more on this in Machalová et al. (2020)) and used throughout the regression modeling.
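The clr transform is easy to compute for a density sampled on a grid. Below is a minimal sketch (the function name and the example density are ours, not from the post), assuming the density is evaluated on an equally spaced grid over its support:

```python
import numpy as np

def clr(f, t):
    """Centered log-ratio transform of a density f sampled on the grid t."""
    eta = t[-1] - t[0]                 # length of the support I
    log_f = np.log(f)
    # subtract the average of log f over I (trapezoidal quadrature)
    return log_f - np.trapz(log_f, t) / eta

t = np.linspace(0.0, 1.0, 201)
f = 0.5 + t                            # a valid density on [0, 1]
f_c = clr(f, t)
print(np.trapz(f_c, t))                # ~0: clr-transformed densities integrate to 0 on I
```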
In our geological example, 96 soil samples from loesses are examined and the task is to analyze how the geochemistry of the samples is influenced by their PSDs. Cubic polynomials were chosen for the spline basis of the PSDs, together with 16 knots, represented in the graphs by the grey dashed lines. The resulting clr densities are now ready to serve as the predictor in our regression model. For the response, the clr-transformed geochemical compositions of the observed soil samples are taken into consideration. In this case, each composition is characterized by a real vector consisting of the concentrations of 9 elements (Al, Si, K, Ca, Fe, As, Rb, Sr, Zr).

## Compositional regression

As mentioned above, FPCA is a useful technique here to filter out noise which could distort the regression estimates – FPCA allows us to represent the predictor using only a few functional principal components while explaining a substantial percentage of the variability of the original data. In this case, 3 principal components were used, as they explained over 90% of the variability. The regression modeling is then performed on these functional principal components. The resulting functional parameters $$\beta_{1}$$ are shown in the plot below (in their clr form, of course).

## Quality of the model, interpretation

To assess the goodness-of-fit of the regression model, the standard coefficient of determination can be computed, with values close to one indicating a good fit of the model. Another possibility is to use the nonparametric method of bootstrap confidence bands. The idea of bootstrap bands is based on resampling the residuals – for each part of the composition, the residuals can be estimated as $\hat{\epsilon}_{i}=y_i-\hat{y}_i.$ By resampling we are able to compute an arbitrary number of bootstrap samples (see the sketch after this section) $y_{i}^{boot}=\beta_{0}+\int_{I}\beta_{1}(t)\cdot f_{i}(t)dt + \epsilon_{i}^{boot},\quad i=1,\dots,N,$ and the resulting bootstrap estimates of the functional regression parameter then form a band "around" $$\beta_{1}$$. Here, 100 bootstrap functions were plotted together with the estimate of $$\beta_{1}$$.

The bootstrap bands turn out to be very useful for the interpretation of the functional parameters $$\beta_{1}$$, as shown for Al and Ca below. Sticking with the clr form of $$\beta_{1}$$ (further clr($$\beta_{1}$$)) and its zero-integral constraint, the functions have to cross the x-axis, meaning that we are able to split the original support $$I$$ into subdomains where clr($$\beta_{1}$$) is positive or negative, respectively. The same can be said about the clr transformation of the particle size distributions. For interpretation, we look at the positive and negative subdomains individually. For the subdomain where the clr-transformed PSDs are positive ($$I^{+}$$), three situations may occur:

• the estimated parameter clr($$\beta_{1}$$) is positive – in that case, we can expect an increasing relative presence of the given element within the geochemical composition (by considering the interpretation of the clr representation of this element);

• clr($$\beta_{1}$$) is negative, resulting in a decreasing relative presence;

• clr($$\beta_{1}$$) $$\approx 0$$, meaning that the relative presence of the given element within the geochemical composition is not influenced by the respective particle sizes of the PSDs.

The bootstrap confidence bands can be used to define these subdomains. Clearly, the opposite would apply for the subdomain with negative clr-transformed PSDs ($$I^{-}$$).
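Here is a minimal sketch of the residual bootstrap, using a plain linear fit as a stand-in for the scalar-on-function model (all names and data below are ours, not the post's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 96 observations, as in the soil example
x = np.linspace(0.0, 1.0, 96)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)

b1, b0 = np.polyfit(x, y, 1)              # initial fit
y_hat = b0 + b1 * x
resid = y - y_hat                         # estimated residuals

boot_slopes = []
for _ in range(100):                      # 100 bootstrap samples, as in the post
    e_boot = rng.choice(resid, size=resid.size, replace=True)
    boot_slopes.append(np.polyfit(x, y_hat + e_boot, 1)[0])

# The bootstrap estimates form a band around the original slope estimate
print(np.percentile(boot_slopes, [2.5, 97.5]))
```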
In the case of Al, this means that its relative presence in the composition is strongly (positively) influenced by the finest fractions, and there is also a stronger negative effect of the fraction around 10 $$\mu m$$. For Ca, completely opposite effects can be observed. To sum up, the specific properties of compositional data (as multivariate data) and probability density functions (as functional data) require a proper adaptation of standard statistical methods. Here, linear regression was addressed by presenting a compositional scalar-on-function regression model with functional predictor and real response. Hopefully the presented example demonstrated that the compositional approach with the clr transformation not only provides an adequate platform for working with probability densities, but also leads to an easier and more straightforward interpretation of the resulting parameters.

### Based on:

Talská, R., Hron, K., Matys Grygar, T.: Compositional scalar-on-function regression with application to sediment particle size distributions. Mathematical Geosciences, accepted for publication.

Machalová, J., Talská, R., Hron, K., Gába, A.: Compositional splines for representation of density functions. Computational Statistics (2020). https://doi.org/10.1007/s00180-020-01042-7
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9238342642784119, "perplexity": 728.2224593649943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00158.warc.gz"}
https://www.physicsforums.com/threads/are-all-forces-vectors.91267/
# Are all forces vectors?

1. Sep 28, 2005

### Poy

Are all forces vectors? What forces do and don't conserve energy? What is the English unit of force? It's one of those little web search things, and some of those aren't so easy to find...

2. Sep 28, 2005

### Thomas Becker

Hi, Poy. Yes, forces are vectors: they are only completely described by giving them a magnitude and a direction. Which brings us to the question of energy: it depends on the movement or displacement produced by the force. If the body the force acts on 1) is not deformed or moved --- no work is done, no change in energy; 2) moves perpendicular to the direction of the force --- no change in energy; but 3) moves, and not perpendicular to the direction of the force, or is deformed against its inner "springs" --- there is some work done by the force and the energy changes (at the least, it transforms from potential to kinetic energy or temperature, depending on whether the actor of the force belongs to the system). Thomas :-)

3. Sep 28, 2005

### HallsofIvy

Staff Emeritus

Yes, all forces are vectors. If you have a very artificial problem where you know everything will be in a straight line, you can set up your coordinate system so that the x-axis is along that line and ignore the other components, but the force really is still a vector. Forces that depend only on position, and not on such things as the speed with which an object is moving, are conservative. The English unit? You mean the one that the English don't use any more and only us silly Americans do? The pound. The corresponding unit of mass - the mass that on the surface of the earth would weigh 32 pounds - is called a "slug". (In the English system the acceleration due to gravity is about 32 ft/s², so the slug is a "unit" mass.) Unfortunately a lot of people also use "pound" when they really mean a "slug".

4. Sep 28, 2005

### Poy

Lots of thanks!
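A small numerical illustration of cases 2) and 3) above (ours, not from the thread): the work done by a constant force over a displacement is the dot product $W = \vec F \cdot \vec d$.

```python
import numpy as np

F = np.array([3.0, 0.0, 0.0])        # force (N), pointing along x
d_perp = np.array([0.0, 2.0, 0.0])   # displacement perpendicular to F
d_along = np.array([2.0, 1.0, 0.0])  # displacement with a component along F

print(np.dot(F, d_perp))    # 0.0 J: motion perpendicular to the force, no energy change
print(np.dot(F, d_along))   # 6.0 J: work is done, energy changes
```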
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8287439942359924, "perplexity": 783.1645483350053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824471.6/warc/CC-MAIN-20171020230225-20171021010225-00754.warc.gz"}
https://www.physicsforums.com/threads/heat-in-a-resistor-circuit-with-inductor-and-capacitor.768538/
# Heat in a resistor (circuit with inductor and capacitor)

1. Sep 1, 2014

### Rugile

1. The problem statement, all variables and given/known data

A source of ε = 10 V, a capacitor of C = 5 μF, an inductor of L = 15 mH and r = 10 Ω, and a resistor of R = 100 Ω are connected as in the schematic (attached). How much heat will dissipate in the resistor after the switch is opened?

2. Relevant equations

$dQ = I^2 R\, dt$, KVL

3. The attempt at a solution

First of all I was quite confused by the question, so I just assumed that the switch is opened after a long time, when the capacitor and the inductor are fully charged. Then the voltage across the capacitor is U0 = ε and the current in the inductor is I0 = ε / (R+r). We can write the equation (KVL): $L\frac{dI}{dt} + I(R+r) + U_C = 0$, where UC is the voltage across the capacitor. I get stuck here - I suppose I could write a second-order differential equation for the charge q: $L \frac{d^2 q}{dt^2} + (R+r) \frac{dq}{dt} + \frac{q}{C} = 0$, but I'm not sure what to do with that either. Any hints?

2. Sep 1, 2014

### td21

Step 1: After the switch is opened, an LC oscillation occurs, as described by your equation. Step 2: Heat is lost through R and r; your task is to find the heat loss through R. In the end, all the energy is dissipated as heat, so I suggest you find the ratio between the heat loss through R and the heat loss through r. I also suggest you find the initial energy. Step 3: The ratio can be found from the ratio R : r.

3. Sep 1, 2014

### Rugile

Okay, so it is enough to find the initial energy of the system and then calculate the heat loss in R using the ratio? Can I say that the initial energy of the system (after opening the switch) was $E = \frac{CU^2}{2} + \frac{LI^2}{2}$, where $I = \frac{\epsilon}{R+r}$ and $U = \epsilon$ (the EMF)?

4. Sep 2, 2014

What switch?

5. Sep 3, 2014

### Staff: Mentor

The one marked with the letter 'J'.

6. Sep 3, 2014

### Rugile

Yes, exactly. Does anyone have any idea whether the equation for the initial energy written above is correct? I have doubts about the current in the inductor, since when we open the switch it disappears (gradually), doesn't it?

7. Sep 3, 2014

### BvU

O2: Very subtle... Rugile's post #3 seems pretty good to me: keep going! And post #6: yes, it dampens out quickly. But the exercise only wants the total, so there is no need to solve the D.E.

8. Sep 3, 2014

### Staff: Mentor

That looks right. That's the stored energy before the jumper to the battery is removed. After the jumper is removed, the inductor current becomes the capacitor current, the two now being in series.
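Putting the thread's approach together numerically (a sketch under the thread's assumptions: steady state before the switch opens, and the stored energy then splitting between R and r in the ratio R : r, since the same series current flows through both):

```python
eps, C, L, r, R = 10.0, 5e-6, 15e-3, 10.0, 100.0   # V, F, H, ohm, ohm

U0 = eps                 # capacitor voltage before the switch opens (post #3)
I0 = eps / (R + r)       # steady-state inductor current

E0 = C * U0**2 / 2 + L * I0**2 / 2   # total stored energy
Q_R = E0 * R / (R + r)               # share dissipated in R

print(E0, Q_R)           # ~3.12e-4 J total, ~2.84e-4 J in R
```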
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9375650882720947, "perplexity": 1019.780391454565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110792.29/warc/CC-MAIN-20170822143101-20170822163101-00195.warc.gz"}
https://www.physicsforums.com/threads/koide-mass-formula-for-neutrinos.117787/
# Koide Mass Formula for Neutrinos

1. Apr 17, 2006

### CarlB

Abstract: Since 1982 the Koide mass relation has provided an amazingly accurate relation between the masses of the charged leptons. In this note we show how the Koide relation can be expanded to cover the neutrinos, and we use the relation to predict neutrino masses. Full paper attached; this version is slightly abbreviated.

In 1982, Yoshio Koide [ http://www.arxiv.org/abs/hep-ph/0506247 ] discovered a formula relating the masses of the charged leptons: $$\frac{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2} {m_e+m_\mu+m_\tau} = \frac{3}{2}.$$ Written in the above manner, this relation removes one degree of freedom from the three charged lepton masses. In this paper, we will first derive this relation as an eigenvalue equation, then obtain information about the other degrees of freedom, and finally speculatively apply the same techniques to the problem of predicting the neutrino masses.

If we suppose that the leptons are composite particles made up of colorless combinations of colored subparticles or preons, we expect that the three colors must be treated equally, and therefore a natural form for a matrix operator that can cross generation boundaries is the \emph{circulant}: $$\Gamma(A,B,C) = \left( \begin{array}{ccc} A&B&C\\C&A&B\\B&C&A \end{array}\right),$$ where A, B and C are complex constants. Other authors have explored these sorts of matrices in the context of neutrino masses and mixing angles. Such matrices have eigenvectors of the form: $$|n\rangle = \left(\begin{array}{c} 1\\e^{+2in\pi/3}\\e^{-2in\pi/3}\end{array}\right), \;\;\;\;n=1,2,3.$$

If we require that the eigenvalues be real, we obtain that $A$ must be real and that B and C are complex conjugates. This reduces the 6 real degrees of freedom present in the 3 complex constants A, B and C to just 3 real degrees of freedom, the same as the number of eigenvalues of the operator. In order to parameterize these sorts of operators, let us write: $$\Gamma(\mu,\eta,\delta) = \mu\left(\begin{array}{ccc} 1&\eta\exp(+i\delta)&\eta\exp(-i\delta)\\ \eta\exp(-i\delta)&1&\eta\exp(+i\delta)\\ \eta\exp(+i\delta)&\eta\exp(-i\delta)&1 \end{array}\right),$$ where we can assume $\eta$ to be non-negative. Note that while $\eta$ and $\delta$ are pure numbers, $\mu$ scales with the eigenvalues. Then the three eigenvalues are given by: $$\begin{array}{rcl} \Gamma(\mu,\eta,\delta)\;|n\rangle &=& \lambda_n \;|n\rangle,\\ &=&\mu(1+2\eta\cos(\delta+2n\pi/3))\;|n\rangle. \end{array}$$

The sum of the eigenvalues is given by the trace of $\Gamma$: $$\lambda_1+\lambda_2+\lambda_3 = 3\mu,$$ and this allows us to calculate $\mu$ from a set of eigenvalues. The sum of the squares of the eigenvalues is given by the trace of $\Gamma^2$: $$\lambda_1^2+\lambda_2^2+\lambda_3^2 = 3\mu^2(1+2\eta^2),$$ and this gives a formula for $\eta^2$ in terms of the eigenvalues: $$\frac{\lambda_1^2+\lambda_2^2+\lambda_3^2} {(\lambda_1+\lambda_2+\lambda_3)^2} = \frac{1+2\eta^2}{3}.$$ The value of $\delta$ is then easy to calculate.

We have restricted $\Gamma$ to a form where all its eigenvalues are real, but it is still possible that some or all will be negative. For situations where all the eigenvalues are non-negative, it is natural to suppose that our values are eigenvalues not of the matrix $\Gamma$, but instead of $\Gamma^2$.
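As a quick numerical check of this parameterization (a sketch; variable names are ours, not the paper's), one can recover $\mu_1$, $\eta_1^2$ and $\delta_1$ directly from the charged lepton masses quoted below:

```python
import numpy as np

# PDG charged lepton masses in MeV, as quoted in the next post
lam = np.sqrt([0.510998918, 105.6583692, 1776.99])   # eigenvalues: sqrt(mass)

koide = lam.sum()**2 / (lam**2).sum()                # Koide ratio, ~3/2
mu = lam.sum() / 3                                   # from trace(Gamma)
eta2 = (3 * (lam**2).sum() / lam.sum()**2 - 1) / 2   # from trace(Gamma^2)
# the electron is the n = 1 eigenvalue, cosine argument delta + 2*pi/3
delta = np.arccos((lam[0] / mu - 1) / (2 * np.sqrt(eta2))) - 2 * np.pi / 3

print(koide)   # ~1.499997
print(mu)      # ~17.71608 MeV^0.5
print(eta2)    # ~0.5000018
print(delta)   # ~0.2222220
```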
The masses of the charged leptons are positive, so let us compute the square roots of the masses of the charged leptons, and find the values for $\mu_1$, $\eta_1^2$ and $\delta_1$, where the subscript will distinguish the parameters for the masses of the charged leptons from those of the neutral leptons. Given the latest PDG data (MeV): $$\begin{array}{rcccl} m_1 &=& m_e &=& 0.510998918 (44)\\ m_2 &=& m_\mu &=& 105.6583692 (94)\\ m_3 &=& m_\tau &=& 1776.99 +0.29-0.26 \end{array}$$ and ignoring, for the moment, the error bars, and keeping 7 digits of accuracy, we obtain $$\begin{array}{rcl} \mu_1 &=& 17.71608 \;\;{\rm MeV}^{0.5}\\ \eta_1^2 &=& 0.5000018\\ \delta_1 &=& 0.2222220 \end{array}$$

The fact that $\eta_1^2$ is very close to 0.5 was noticed in 1982 by Yoshio Koide, at a time when the tau mass had not been experimentally measured to anywhere near its present accuracy. That $\delta_1$ is close to 2/9 went unnoticed until this author discovered it in 2005 and announced it here on Physics Forums. The present constraints on the electron, muon and tau masses exclude the possibility that $\delta_1$ is exactly 2/9, while on the other hand, $\eta_1^2 = 0.5$ fits the data close to the middle of the error bars. If we interpret $\Gamma$ as a matrix of coupling constants, $\eta^2 = 1/2$ is a probability. Thus if the preons are to be spin-1/2 states, the $P = (1+\cos(\theta))/2$ rule for probabilities implies that the three coupled states are perpendicular, a situation that would be more natural for classical waves than quantum states.

By making the assumption that $\eta_1^2$ is precisely 0.5, one obtains a prediction for the mass of the tau. Since the best measurements for the electron and muon masses are in atomic mass units, we give the predicted tau mass in both those units and in MeV: $$\begin{array}{rcll} m_\tau &=& 1776.968921(158) \;\; & {\rm MeV}\\ &=& 1.907654627(46) \;\; & {\rm AMU}. \end{array}$$ The error bars in the above, and in later calculations in this paper, come from assuming that the electron and muon masses are anywhere inside the measured mass error bars. This gives us the opportunity to fine-tune our estimates for $\mu_1$ and $\delta_1$. Since the electron and muon data are the most exact, we assume the Koide relation and compute the tau mass from them. Then we compute $\mu_1$ and then $\delta_1$ over the range of electron and muon masses, obtaining: $$\begin{array}{rcll} \delta_1 &=&.22222204715(311)\;\;\;&\textrm{from MeV data}\\ &=&.22222204717(48)\;\;\;&\textrm{from AMU data}. \end{array}$$

If $\delta_1$ were zero, the electron and muon would have equal masses, while if $\delta_1$ were $\pi/12$, the electron would be massless. Instead, $\delta_1$ is close to a rational fraction, while the other terms inside the cosine are rational multiples of $\pi$. Using the more accurate AMU data, the difference between $\delta_1$ and 2/9 is: $$2/9 - \delta_1 = 1.7505(48)\;\times10^{-7},$$ and we can hope that a deeper theory will allow this small difference to be computed. This difference could be written as $$1.75\;\times 10^{-7} = \frac{4\pi}{3^{12}}\left(\alpha + \mathcal{O}(\alpha^2)\right)$$ for example. Note that $\delta_1$ is close to a rational number, while the other terms that are added to it inside the cosine are rational multiples of $\pi$. This distinction follows our parameterization of the eigenvalues in that the rational fraction part comes from the operator $\Gamma$, while the $2n\pi/3$ term comes from the eigenvectors.
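The tau prediction can be reproduced by solving the Koide relation for $\sqrt{m_\tau}$ with $\eta_1^2$ set to exactly $1/2$ (a sketch, not the paper's code); the relation rearranges to a quadratic in $s = \sqrt{m_\tau}$:

```python
import numpy as np

m_e, m_mu = 0.510998918, 105.6583692    # MeV
a, b = np.sqrt(m_e), np.sqrt(m_mu)

# (a + b + s)^2 = (3/2)(m_e + m_mu + s^2) rearranges to
#   s^2 - 4(a + b)s + (a^2 + b^2 - 4ab) = 0,
# whose physical (larger) root is:
s = 2 * (a + b) + np.sqrt(3.0) * np.sqrt(a * a + b * b + 4 * a * b)
print(s**2)   # ~1776.97 MeV, the predicted tau mass
```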
Rather than depending on the details of the operator, the $2n\pi/3$ depends only on the fact that the operator has the symmetry of a circulant matrix. We will use $m_1$, $m_2$ and $m_3$ to designate the masses of the neutrinos. The experimental situation with the neutrinos is primitive at the moment. The only accurate measurements are from oscillation experiments, and are for the absolute values of the differences between squares of neutrino masses. Recent $2\sigma$ data are: $$\begin{array}{rcll} |m_2^2-m_1^2| &=& 7.92(1\pm.09) \times 10^{-5}\;\; & {\rm eV}^2\\ |m_3^2-m_2^2| &=& 2.4(1+0.21-0.26) \times 10^{-3}\;\; & {\rm eV}^2 \end{array}$$

Up to this time, attempts to apply the unaltered Koide mass formula to the neutrinos have failed, but these attempts have assumed that the square roots of the neutrino masses must all be positive. Without loss of generality, we will assume that $\mu_0$ and $\eta_0$ are both positive; thus there can be at most one square-root mass that is negative, and it can only be the lowest or central mass. Of these two cases, having the central mass with a negative square root is incompatible with the oscillation data, but we can obtain $\eta_0^2 = 1/2$ with masses around: $$\begin{array}{rcl} m_1 &=& 0.0004\;\; {\rm eV},\\ m_2 &=& 0.009\;\; {\rm eV},\\ m_3 &=& 0.05\;\; {\rm eV}. \end{array}$$ These masses approximately satisfy the squared mass differences above, as well as the Koide relation in the form: $$\frac{m_1+m_2+m_3}{(-\sqrt{m_1}+\sqrt{m_2}+\sqrt{m_3})^2} \;=\;\frac{2}{3}.$$

The above neutrino masses were chosen to fix the value of $\eta_0^2 = 1/2$. As such, the fact that this can be done is of little interest, at least until absolute measurements of the neutrino masses are available. However, given these values, we can now compute the $\mu_0$ and $\delta_0$ values for the neutrinos. The results give that, well within experimental error: $$\begin{array}{rcl} \delta_0 &=& \delta_1\;\; +\;\; \pi/12,\\ \mu_1 / \mu_0 &=& 3^{11}. \end{array}$$

Recalling the split between the components of the cosine in the charged lepton mass formula, the fact that $\pi/12$ is a rational multiple of $\pi$ suggests that it should be related to a symmetry of the eigenvectors rather than the operator. One possible explanation is that in transforming from a right-handed particle to a left-handed particle, the neutrino (or a portion of it) picks up a phase difference that is a fraction of $\pi$. As a result the neutrino requires 12 stages to make the transition from left-handed back to left-handed (with the same phase), while the electron requires only a single stage, and it is natural for the mass hierarchy between the charged and neutral leptons to involve the power $12-1 = 11$. We will therefore assume that the relations given above are exact, and can then use the measured masses of the electron and muon to write down predictions for the absolute masses of the neutrinos. The parameterization of the square roots of the neutrino masses is as follows: $$\sqrt{m_n} = \frac{\mu_1}{3^{11}}(1+\sqrt{2}\cos(\delta_1+\pi/12 + 2n\pi/3)),$$ where $\mu_1$ and $\delta_1$ are from the charged leptons.
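Numerically (a sketch; the factor of $10^3$ converts ${\rm MeV}^{1/2}$ to ${\rm eV}^{1/2}$):

```python
import numpy as np

mu1 = 17.71608                        # MeV^(1/2), from the charged leptons
delta1 = 0.22222204717
mu0 = mu1 / 3**11 * 1e3               # MeV^(1/2) -> eV^(1/2)

n = np.arange(1, 4)
sqrt_m = mu0 * (1 + np.sqrt(2) * np.cos(delta1 + np.pi / 12 + 2 * n * np.pi / 3))
m = sqrt_m**2                         # eV; note sqrt_m[0] comes out negative
print(m)                              # ~[3.83e-4, 8.91e-3, 5.07e-2] eV
print(m[1]**2 - m[0]**2)              # ~7.93e-5 eV^2
print(m[2]**2 - m[1]**2)              # ~2.49e-3 eV^2
```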
Substituting in the measured values for the electron and muon masses, we obtain extremely precise predictions for the neutrino masses: $$\begin{array}{rcll} m_1 &=& 0.000383462480(38) \;\; & {\rm eV}\\ &=&0.4116639106(115)\times 10^{-12} \;\; & {\rm AMU} \end{array}$$ $$\begin{array}{rcll} m_2 &=& 0.00891348724(79)\;\; & {\rm eV}\\ &=&9.569022271(246)\times 10^{-12} \;\; & {\rm AMU} \end{array}$$ $$\begin{array}{rcll} m_3 &=& 0.0507118044(45) \;\; & {\rm eV}\\ &=&54.44136198(131)\times 10^{-12} \;\; & {\rm AMU} \end{array}$$ Similarly, the predictions for the differences of the squares of the neutrino masses are: $$\begin{array}{rcll} m_2^2-m_1^2 &=& 7.930321129(141)\times 10^{-5} \;\; & {\rm eV}^2\\ &=&.913967200(47)\times 10^{-24} \;\; & {\rm AMU}^2 \end{array}$$ $$\begin{array}{rcll} m_3^2-m_2^2 &=& 2.49223685(44)\times 10^{-3} \;\; & {\rm eV}^2\\ &=&2872.295707(138)\times 10^{-24} \;\; & {\rm AMU}^2 \end{array}$$

As with the Koide prediction for the tau mass, these predictions for the squared mass differences are dead in the center of the error bars. We can only hope that the future will show our calculations to be as prescient as Koide's. That the masses of the leptons should have these sorts of relationships is particularly mysterious in the context of the standard model. It is hoped that this paper will stimulate thought among theoreticians. Perhaps the fundamental fermions are bound states of deeper objects. The author would like to thank Yoshio Koide and Alejandro Rivero for their advice, encouragement and references, and Mark Mollo (Liquafaction Corporation) for financial support. Carl

Attached: MASSES.pdf

Last edited by a moderator: Apr 20, 2006

2. Apr 24, 2006

### CarlB

The above LaTeX came out fairly well. The only problem is that the \rangles that should appear after each |n got deleted, likely because I use a macro for these in the original.

Since the above, I've got a new result to add to the pile. There were some mysterious 12s showing up in the above, particularly in the charged / neutral mass ratio. It turns out that the MNS mixing matrix in tribimaximal form (which is generally assumed to be the correct one in most recent reviews of neutrinos) can be put into a form where it is a 24th root of unity. To get it into this form, one multiplies it on the left by nothing other than a 3x3 matrix of eigenvectors of the circulant matrix. Here are the details: $$\frac{1}{\sqrt{3}} \left(\begin{array}{ccc}1&e^{+2i\pi/3}&e^{-2i\pi/3}\\ 1&1&1\\ 1&e^{-2i\pi/3}&e^{+2i\pi/3}\end{array}\right) \left(\begin{array}{ccc}\sqrt{2/3}&\sqrt{1/3}&0\\ -\sqrt{1/6}&\sqrt{1/3}&-\sqrt{1/2}\\ -\sqrt{1/6}&\sqrt{1/3}&\sqrt{1/2}\end{array}\right)$$ $$=\left(\begin{array}{ccc} \sqrt{1/2}&\;\;\;\;0\;\;\;\;&-i\sqrt{1/2}\\ 0&1&0\\ \sqrt{1/2}&0&i\sqrt{1/2}\end{array}\right) = \left(\begin{array}{ccc}1&0&0\\0&1&0\\0&0&1\end{array}\right)^{1/24}$$

In the above, the MNS matrix is on the top right, the eigenvectors are top left. The MNS matrix transforms vectors of $$(\nu_e, \nu_\mu, \nu_\tau)$$ into vectors of $$(\nu_1,\nu_2,\nu_3)$$, where the first are what you get when you arrange a weak interaction with a charged lepton, and the second are the mass eigenstates of the neutrinos. Some notes. First, there is one column of $$\sqrt{1/3}$$ in the MNS matrix and one row of the same in the array of eigenvectors. This implies that the (1,1,1) eigenvector of the operator must correspond to the muon neutrino and the 2nd mass eigenstate.
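This matrix identity is easy to verify numerically (a sketch, not from the thread):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
# rows are the (normalized) circulant eigenvectors |1>, |2>, |3>
F = np.array([[1, w, w.conjugate()],
              [1, 1, 1],
              [1, w.conjugate(), w]]) / np.sqrt(3)
# tribimaximal MNS matrix
MNS = np.array([[ np.sqrt(2/3), np.sqrt(1/3),  0.0],
                [-np.sqrt(1/6), np.sqrt(1/3), -np.sqrt(1/2)],
                [-np.sqrt(1/6), np.sqrt(1/3),  np.sqrt(1/2)]])

M = F @ MNS
print(np.round(M, 6))   # [[sqrt(1/2), 0, -i*sqrt(1/2)], [0, 1, 0], [sqrt(1/2), 0, i*sqrt(1/2)]]
print(np.allclose(np.linalg.matrix_power(M, 24), np.eye(3)))   # True: a 24th root of unity
```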
That the (1,1,1) eigenvector corresponds to the muon neutrino and the 2nd mass eigenstate is very useful information because it allows us to remove almost all the redundancy from the values of $$\delta$$ in the eigenvalue equation. That is, instead of simply finding the smallest value of $$\delta$$ that gives the three desired eigenvalues, we can instead choose the value that makes the (1,1,1) eigenvalue be the muon in the case of the charged leptons, and the middle neutrino in the case of the uncharged leptons. That is, $$\lambda_{(1,1,1)} = \mu(1 + 2\eta\cos(\delta))$$ with no need to guess which value of $n$ to choose in the $$2n\pi/3$$ that otherwise shows up in the cosine.

As a result of making this change, the new $\delta$ value for the electron can be chosen as: $$\delta_1 = 2\pi/3 - .22222204717(48) = \frac{\pi}{180}(120 - 12.73)$$ or $$\delta_1 = 4\pi/3 + .22222204717(48) = \frac{\pi}{180}(240 + 12.73)$$ where the two options correspond to the two solutions of $\cos(x) = y$, and the neutrino delta is: $$\delta_0 = 7\pi/12 - .22222204717(48) =\frac{\pi}{180}(105-12.73)$$ or $$\delta_0 = 17\pi/12 + .22222204717(48) =\frac{\pi}{180}(75+12.73)$$

It turns out that having the angle $\delta_1$ between 60 and 120 degrees is useful in that it implies that we can add a hidden dimension to modify the Pauli algebra's usual relationship between phase and probability so that we can match the above. This is a somewhat mysterious statement that I will expound on later; it's very simple math and not at all difficult. Accordingly, we have to choose the upper of each of the two equations, and for the angle calculation, we can use either 120-12.73 or 105-12.73, with the assumption that the electron is the simple mass-relationship particle, or that the neutrino is, respectively. Carl

Last edited: Apr 24, 2006

3. May 3, 2006

### CarlB

I've updated the paper to include the mixing matrix and a better derivation of the 3^11 power here: http://brannenworks.com/MASSES2.pdf Carl

4. May 11, 2006

### CarlB

My neutrino mass formula is now on arXiv, in that Yoshio Koide has uploaded a paper referencing my work here: Tribimaximal Neutrino Mixing and a Relation Between Neutrino- and Charged Lepton-Mass Spectra, Yoshio Koide: "Brannen has recently pointed out that the observed charged lepton masses satisfy the relation $$m_e +m_\mu +m_\tau = {2/3} (\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau})^2$$, while the observed neutrino masses satisfy the relation $$m_{\nu 1} +m_{\nu 2} +m_{\nu 3} = {2/3} (-\sqrt{m_{\nu 1}}+\sqrt{m_{\nu 2}} +\sqrt{m_{\nu 3}})^2.$$ Stimulated by this fact, a neutrino mass matrix model, which gives a relation between the observed neutrino- and charged lepton-mass spectra, and which yields the so-called tribimaximal mixing in the neutrino sector, is proposed." http://www.arxiv.org/abs/hep-ph/0605074 Carl

5. May 14, 2006

### CarlB

Another reference to my mass formula has appeared in the literature, on page 10 of this pdf file: Koide relation [Y. Koide, Lett. Nuov. Cim., 34 (1982), 201]: $$\frac{m_e+m_\mu+m_\tau}{(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau})^2} = \frac{2}{3}$$ with accuracy $10^{-5}$; $$\tan \theta_c = \left(\frac{\sqrt{m_\mu}-\sqrt{m_e}}{2\sqrt{m_\tau}-\sqrt{m_\mu}-\sqrt{m_e}}\right)^{1/3}$$ Both relations can be reproduced if [C A Brannen], for neutrinos, $$m_i = m_0(Z_i + Z_0)^2$$ $$\sum_i Z_i = 0$$ $$Z_0 = \sqrt{\sum_i Z_i^2/3}$$ Another representation which is closely connected to circulant symmetry: http://users.physik.tu-muenchen.de/lsoft/seminar-talks/AlexeiSmirnov06.pdf Carl
6. Jun 24, 2006

### jhmar

I deal in old-fashioned classical terms and am awaiting judgement on a submission to PF independent research. Meanwhile I would greatly appreciate some clarification. As far as I can understand the work mentioned on this forum, it deals only with mass and does not show any relationship between, say, charge and mass, or charge and wavelength; neither does it explain why particles have their particular mass, but simply predicts a mass number. Would you please confirm my understanding or explain where I am wrong.

7. Jun 24, 2006

### CarlB

You are pretty much correct; the above paper just predicts masses. The one by Yoshio Koide applies a traditional vacuum expectation value Higgs field approach to the problem. The reason I wrote the paper that way is that my papers that gave an explanation were pretty much ignored. Some of that was due to their using new math. What you end up with is a bunch of incomprehensible mathematics, followed by a formula predicting masses. It's not very convincing unless you can follow the mathematics. So I've been concentrating on making the mathematics accessible.

In many ways, what I am promoting are alternatives to doing quantum mechanics. I believe that my alternatives are easier for the student to learn, give results with less effort, and are closer to the underlying reality. To get a hint at how the work is easier for the student, see my write-up on Pauli spinors at the Wikipedia that gives a "trick" on how to find the eigenvectors of a spin operator: http://en.wikipedia.org/wiki/Spinor#Example:_Spinors_of_the_Pauli_Spin_Matrices Basically, the QM I work with involves a generalization of the above "trick". None of this is standard practice in the industry, which is why I wrote the wikipedia section without reference to anything heretical. I'm quite convinced that this is far superior to the usual way of doing QM.

I'm working on a new version of the MASSES2.pdf paper that will add back in the theory, but again without most of the "adult mathematics". It begins with the "density matrix formulation" of quantum mechanics. I plan on submitting it to arXiv and have received a note from them apparently allowing it in the "physics" section. For an amateur, that's about as good as you can get from them.

The density matrix formulation of QM is an alternative to the usual "state vector formulation". It's based on two observations. The first is that any prediction from a state vector description of a quantum object can also be predicted from a density matrix description. Thus the two formulations are on equal grounds as far as supposing which is fundamental. The second observation is that just as it is possible to define a density matrix state from a state vector state, it is also possible to define a state vector state from a density matrix state. In moving from a state vector description of a situation to a density matrix description, one loses the arbitrary complex phase (i.e. the U(1) gauge freedom). Thus in going the reverse direction you must add the arbitrary phase back in, and in doing this you get insight into the nature of the complex numbers used in QM, and into gauge theory. When you translate all that into QFT, you get insight into the nature of the quantum vacuum. My notes for this are mostly written up. I'm hoping to publish them before the end of the month. Until then, for some introduction to density matrix theory, try http://www.DensityMatrix.com , a website I started a couple of weeks ago.
To get a hint as to how one goes about obtaining weak hypercharge and weak isospin from density matrix calculations, my website is http://www.snuark.com , which is devoted to the theoretical underpinnings beneath this mass theory. But these websites are just days old and nowhere near sufficient in explaining these things. Carl

8. Jun 25, 2006

### jhmar

Carl, thanks for the reply. My education did not extend to the level needed to follow your work, but being dissatisfied with the inability of the experts to explain their work to the layman got me interested in interpretation. Eventually I realised there was a need for an explanation of particle structure before quantum physics could be explained in words. With this aim I used the experiments described in The Particle Explosion and books for beginners by Baggot, Veltman, Pais and others, to produce a formula that links mass with field force and charge. A continuation shows a possible connection with wave structure (this wave structure, I show, is an astronomically observable entity). This is the article awaiting approval for entry into the Independent research forum, which I hope we will be able to discuss and compare with your work sometime soon. John

9. Jul 24, 2006

### CarlB

My papers predicting the neutrino masses: Koide Mass Formula for Neutrinos, April 7, 2006, http://brannenworks.com/MASSES.pdf and The Lepton Masses, May 2, 2006, http://brannenworks.com/MASSES2.pdf predicted squared mass differences for the neutrinos of 7.93x10^{-5} eV² and 2.49x10^{-3} eV². At the time I wrote them, the best-fit values were 7.92 and 2.4, which means I was saying that the best-fit experimental numbers were off by 0.13% and 3.6%. Now the latest figures from Minos (released but not yet published, I believe) have shifted the 2.4 value up to 2.5 (see http://www.arxiv.org/abs/hep-ph/0606060 ). So now my formula says that the best-fit experimental numbers are off by 0.13% and 0.4%, a 9x improvement on the second value. Carl

Another paper that uses my neutrino masses, but at a lower accuracy, is: http://www.arxiv.org/abs/hep-ph/0605074

10. Aug 4, 2006

### CarlB

The Koide paper that references my formula for the neutrino masses, Tribimaximal Neutrino Mixing and a Relation Between Neutrino- and Charged Lepton-Mass Spectra, Y. Koide, http://www.arxiv.org/abs/hep-ph/0605074 , now has two papers referencing it: Magic Neutrino Mass Matrix and the Bjorken-Harrison-Scott Parameterization, C. S. Lam, http://www.arxiv.org/abs/hep-ph/0606220 and Real Invariant Matrices and Flavour-Symmetric Mixing Variables with Emphasis on Neutrino Oscillations, P. F. Harrison, W. G. Scott, T. J. Weiler, http://www.arxiv.org/abs/hep-ph/0607335 Also, I've submitted a short abstract for the DPF2006 meeting in Hawaii: http://www.dpf2006.org/ Yoshio Koide will attend the meeting, so I will likely go even if my abstract is rejected (as is likely). Carl

Last edited: Aug 4, 2006

11. Aug 15, 2006

### CarlB

Another paper referencing Koide's paper has come to my notice: Real Invariant Matrices and Flavour-Symmetric Mixing Variables with Emphasis on Neutrino Oscillations, P. F. Harrison, W. G. Scott, T. J. Weiler, http://www.arxiv.org/abs/hep-ph/0607335 Carl

12. Oct 1, 2006

### CarlB

I've started writing up a derivation of the Koide formula from first principles.
The problem is that "first principles" means going back to the foundations of quantum mechanics and rebuilding the standard model from the assumption that density matrices are fundamental and spinors (and Hilbert space) are just mathematical conveniences. The standard model is fairly complicated and deriving it is quite involved. That it has to be done with a new foundation makes it possible, but not easy. Accordingly, I'm writing a description of these new foundations of quantum mechanics in book form, with the target audience first year graduate students in physics. The first two chapters introduce Clifford algebra in as painless and easy way as I can imagine, by analogy with the familiar Pauli spin matrices and the Dirac gamma matrices. The first chapter goes into everything that one can do, from a geometric point of view, with the algebra of the Pauli spin matrices. The chapter is mostly about what complex numbers are in a geometric context. The chapter ends with a formula showing how to reduce products of projection operators in the Pauli algebra, a calculational method that will be used repeatedly in the rest of the book. So far, only the first chapter is complete: http://www.brannenworks.com/dmaa.pdf The second chapter continues the story with the Dirac algebra. The Dirac algebra allows us to investigate how particles appear in Clifford algebras. The particles of the Pauli algebra are trivial, one has spin +1/2 and spin -1/2 with the direction of orientation being the only choice. The Dirac algebra is more general and more interesting. We will discuss the various choices of operators that define the particles, and we will generalize to Clifford algebra. Carl 13. Oct 19, 2006 ### CarlB The Mohaptra and Smirnov review article on the neutrino masses is out: http://staging.arjournals.annualreviews.org/doi/pdf/10.1146/annurev.nucl.56.080805.140534 My neutrino mass formula is mentioned briefly on page 599: The Koide relation can be satisfied for neutrinos provided $$\sqrt{m_1} < 0$$ (76) implies strongly hierarchical mass spectrum, with $$m_1 = 3.9 10^{-4}eV$$ and $$\sqrt{m_1}<0$$ (76; see also Reference 77). 76. Brannen CA. http://brannenworks.com (2006) 77. Koide Y. hep-ph/0605074 This is the first published, peer reviewed mention of the formula. Carl 14. Nov 17, 2006 ### CarlB An article in Japanese gives more detail on the calculations for how the Koide formula is extended to the neutrinos. The various possible sign choices and how they work out with the charged leptons, neutrinos, and quarks are in a table on the bottom of page 23. A note on the coincidence with the Cabibbo angle is on the last page. http://higgs.phys.kyushu-u.ac.jp/ppp06/slide/koide.pdf I presented some slides from this at the 2006 Joint Particle Physics meeting in Hawaii. The slides are here: http://www.brannenworks.com/jpp06.pdf Actually, the above are the slides I was tried to present. I ended up with a slightly older version that had been downloaded earlier. Meanwhile, I'm making steady progress in describing how one obtains the Koide formula, and more, from a modification to the foundations of quantum mechanics. I'm about three chapters away from the Koide mass formulas. I've been keeping the book on the web here: http://www.brannenworks.com/dmaa.pdf Carl 15. Nov 29, 2006 ### jhmar My submission has been rejected as I failed to comply with the submission rules; the length of my work makes compliance somewhat difficult. So I have decided to continue this debate without introducing my model. 
What the Koide "mass" formula does is provide a mathematical shortcut for finding the surface density of an expanding force field with a fixed quantity of matter. This can easily be checked using the following radii for the "masses" already given: V(1) 3755164.695; V(2) 161549; V(3) 28395.06, in arbitrary units. That is to say that each neutrino is a different state (volume) of a single elementary particle. This, of course, is not a complete explanation of reality, but it does explain what the prediction does without telling us why; that requires a different model. John

Last edited: Nov 29, 2006

16. Apr 25, 2007

### CarlB

Some comments on that damned number.

Let the complex Clifford algebra be that generated by the canonical basis vectors $$\hat{x}, \hat{y}, \hat{z}, \hat{s}, \hat{t}$$ where the signature is + + + + -, respectively. There are a bunch of ways of writing primitive idempotents, and one can convert between these. I will use as the base PI the following form: $$(1\pm \widehat{zt})(1\pm \hat{s})(1\pm\widehat{ixyzst})/8$$ Let $$\kappa$$ be a real number. Let Z be an arbitrary element of the Clifford algebra (complex or real) that is built from the subalgebra of the Clifford algebra generated by $$\{\hat{x},\hat{y},\hat{s},\widehat{zt}\}$$. This subalgebra is used so as to define no preferred direction in 3-space. Transform the PIs as follows: $$PI \to \exp(-\kappa Z) PI \exp(+\kappa Z)$$ where PI is any primitive idempotent (or any member of the Clifford algebra, in fact). As shown in dmaa.pdf, the above transformation preserves multiplication and addition, and therefore maps complete sets of primitive idempotents to complete sets of primitive idempotents.

As in dmaa, we apply a potential energy which depends on blade, so that 1 contributes very low energy, vectors much more, bivectors even more, etc. Now $$\widehat{ixyzst}$$ is a fixed point of the above transformation. Therefore the analysis in dmaa.pdf, which showed that the PIs would combine in pairs so as to cancel their $$\widehat{ixyzst}$$ components, still holds. But the other analyses may or may not.

As an example of how these transformations work, let $$Z = \widehat{xyzt}$$. Note that $$\frac{d}{d\kappa}\exp(\kappa Z) = Z\exp(\kappa Z) = \exp(\kappa Z) Z.$$ The action of the transformation on the canonical basis vectors is as follows: $$\begin{array}{rcl} \exp(-\kappa \widehat{xyzt})1\exp(+\kappa \widehat{xyzt}) &\to& 1,\\ \exp(-\kappa \widehat{xyzt})\hat{x}\exp(+\kappa \widehat{xyzt}) &\to& \hat{x}\exp(2\kappa\widehat{xyzt}),\\ \exp(-\kappa \widehat{xyzt})\hat{y}\exp(+\kappa \widehat{xyzt}) &\to& \hat{y}\exp(2\kappa\widehat{xyzt}),\\ \exp(-\kappa \widehat{xyzt})\hat{z}\exp(+\kappa \widehat{xyzt}) &\to& \hat{z}\exp(2\kappa\widehat{xyzt}),\\ \exp(-\kappa \widehat{xyzt})\hat{s}\exp(+\kappa \widehat{xyzt}) &\to& \hat{s},\\ \exp(-\kappa \widehat{xyzt})\hat{t}\exp(+\kappa \widehat{xyzt}) &\to& \hat{t}\exp(2\kappa\widehat{xyzt}), \end{array}$$ that is, the transformation multiplies all the canonical basis vectors on the right by $$\exp(2\kappa \widehat{xyzt})$$ except $$\hat{s}$$. The difference is that $$\hat{s}$$ commutes with $$\widehat{xyzt}$$. Now $$(\widehat{xyzt})^2 = - 1$$, so we can think of $$\widehat{xyzt}$$ as an imaginary unit. The above transformation is therefore multiplication on the right by $$\exp(2 \hat{i}\kappa)$$ for most of the canonical basis vectors. Other choices for Z will give other patterns of multiplication, depending on how they commute with the canonical basis vectors.
The reason for considering transformations of a complete set of primitive idempotents is that, while these transformations preserve multiplication and addition, they do not preserve energy. Therefore, one imagines that if one understood the mass interaction better, one could write an equation for the energy as a function of the variable $$\kappa$$. Then the value of $$\kappa$$ that minimized the energy (after taking into account all possible Z values) would be a minimum of that function, so we would have some sort of equation like: $$\frac{d}{d\kappa} E(\kappa) = 0$$ Following the Koide prescription, the energy might be written as a quadratic function of $$\kappa$$. This would give some sort of linear equation for the minimizing value of $$\kappa$$. Now my speculation is that the minimizing value of $$\kappa$$ will work out to be close to 1/9. Perhaps the proper ratio for energy between the scalar and vector components is 1/3 or 1/9. This will cause the exponential transformation to have a factor $$\exp(2i/9)$$, about the same as that damned number in Koide's equations.

Last edited: Apr 25, 2007

17. Apr 28, 2007

### CarlB

I need some scratch paper to do some calculation. Readers are invited to send me PMs rather than posting long off-topic comments here.

Painleve coordinates: $$ds^2 = -dt^2 + (dr-(2/r)^{0.5} dt)^2 + r^2 d\phi^2.$$ The first step is to convert them to "Cartesian" form: $$\begin{array}{rcl}ds^2 &=& -dt^2 + ((xdx+ydy)r^{-1}-\sqrt{2}\;r^{-0.5} dt)^2 + (y^2dx^2 + x^2dy^2 -2xy\;dx\;dy)r^{-2},\\&=&-dt^2 + dx^2 + dy^2 + 2r^{-1}dt^2 -2\sqrt{2}\;r^{-1.5}(x\;dx\;dt+y\;dy\;dt),\\I = \left(\frac{ds}{dt}\right)^2 &=&\frac{2}{r}-1 + \dot{x}^2 + \dot{y}^2-2\sqrt{2}r^{-1.5}(x\dot{x}+y\dot{y})\end{array}$$ The idea is to apply the Euler-Lagrange equations to the integral over t for s: $$s = \int \sqrt{\left(\frac{ds}{dt}\right)^2}\;dt = \int \sqrt{I}\;dt$$ to obtain the equations of motion in a Cartesian coordinate system. The resulting equations of motion (for x) are: $$\frac{1}{2\sqrt{I}} \frac{\partial I}{\partial x} =\frac{d}{d t} \left[ \frac{1}{2\sqrt{I}} \left( \frac{\partial I}{\partial \dot{x}} \right) \right],$$ or $$\frac{I^{-0.5}}{2} \frac{\partial I}{\partial x} =$$ $$\frac{I^{-0.5}}{2}\left( \frac{\partial^2 I}{\partial\dot{x}\partial x}\dot{x}+ \frac{\partial^2 I}{\partial\dot{x}\partial y}\dot{y}+ \frac{\partial^2 I}{\partial\dot{x}^2}\ddot{x}+ \frac{\partial^2 I}{\partial\dot{x}\partial\dot{y}}\ddot{y}\right)$$ $$-\frac{I^{-1.5}}{4}\left( \frac{\partial I}{\partial x}\frac{\partial I}{\partial\dot{x}}\dot{x}+ \frac{\partial I}{\partial y}\frac{\partial I}{\partial\dot{x}}\dot{y}+ \frac{\partial I}{\partial \dot{x}}\frac{\partial I}{\partial\dot{x}}\ddot{x}+ \frac{\partial I}{\partial \dot{y}}\frac{\partial I}{\partial\dot{x}}\ddot{y} \right)$$ I want to get rid of the $\sqrt{I}$ and the 4s, and rearrange the terms, because we need to solve for the 2nd derivatives in terms of the 1st derivatives and constants.
Moving the 2nd derivatives to the right hand side we get: $$2I\frac{\partial I}{\partial x} + \left(\frac{\partial I}{\partial x}\frac{\partial I}{\partial\dot{x}}- 2I\frac{\partial^2 I}{\partial\dot{x}\partial x}\right)\dot{x} + \left(\frac{\partial I}{\partial y}\frac{\partial I}{\partial\dot{x}}- 2I\frac{\partial^2 I}{\partial\dot{x}\partial y}\right)\dot{y} =$$ $$\left(2I\frac{\partial^2 I}{\partial\dot{x}^2}- \frac{\partial I}{\partial \dot{x}}\frac{\partial I}{\partial\dot{x}} \right)\ddot{x}+ \left(2I\frac{\partial^2 I}{\partial\dot{x}\partial\dot{y}}- \frac{\partial I}{\partial \dot{y}}\frac{\partial I}{\partial\dot{x}} \right)\ddot{y}$$ Similarly, the Euler Lagrange equation in y gives: $$2I\frac{\partial I}{\partial y} + \left(\frac{\partial I}{\partial y}\frac{\partial I}{\partial\dot{y}}- 2I\frac{\partial^2 I}{\partial\dot{y}\partial y}\right)\dot{y} + \left(\frac{\partial I}{\partial x}\frac{\partial I}{\partial\dot{y}}- 2I\frac{\partial^2 I}{\partial\dot{y}\partial x}\right)\dot{x} =$$ $$\left(2I\frac{\partial^2 I}{\partial\dot{y}^2}- \frac{\partial I}{\partial \dot{y}}\frac{\partial I}{\partial\dot{y}} \right)\ddot{y}+ \left(2I\frac{\partial^2 I}{\partial\dot{y}\partial\dot{x}}- \frac{\partial I}{\partial \dot{x}}\frac{\partial I}{\partial\dot{y}} \right)\ddot{x}$$ This matches the result in https://www.physicsforums.com/showpost.php?p=1050082&postcount=25, so okay so far. The above equations mix the 2nd derivatives. To use them, we need to solve for $$\ddot{x}$$ and $$\ddot{y}$$. The equations are linear so they can be written in 2x2 matrix form. To solve the equations requires that we invert the matrix, which can be done provided its determinant is not zero. The determinant is: $$\left(2I\frac{\partial^2 I}{\partial\dot{x}^2}-\frac{\partial I}{\partial \dot{x}}\frac{\partial I}{\partial\dot{x}}\right)\left(2I\frac{\partial^2 I}{\partial\dot{y}^2}-\frac{\partial I}{\partial \dot{y}}\frac{\partial I}{\partial\dot{y}}\right) -\left(2I\frac{\partial^2 I}{\partial\dot{x}\partial\dot{y}}-\frac{\partial I}{\partial \dot{y}}\frac{\partial I}{\partial\dot{x}}\right)\left(2I\frac{\partial^2 I}{\partial\dot{y}\partial\dot{x}}-\frac{\partial I}{\partial \dot{x}}\frac{\partial I}{\partial\dot{y}}\right)$$ Now to calculate the various derivatives for the Painleve coordinates. Last edited: Apr 29, 2007 18. Apr 29, 2007 ### CarlB The first order partial derivatives of I are as follows: $$\frac{\partial I}{\partial x} = -2r^{-3}x - \sqrt{8}r^{-1.5}\dot{x} +\sqrt{18}r^{-3.5}x(x\dot{x}+y\dot{y}),$$ $$\frac{\partial I}{\partial y} = -2r^{-3}y - \sqrt{8}r^{-1.5}\dot{y} +\sqrt{18}r^{-3.5}y(x\dot{x}+y\dot{y}),$$ $$\frac{\partial I}{\partial \dot{x}} = 2\dot{x}-\sqrt{8}r^{-1.5}x,$$ $$\frac{\partial I}{\partial \dot{y}} = 2\dot{y}-\sqrt{8}r^{-1.5}y$$ The required second order partial derivatives are: $$\frac{\partial^2 I}{\partial x\partial\dot{x}} = -\sqrt{8}r^{-1.5} + \sqrt{18}r^{-3.5}x^2$$ $$\frac{\partial^2 I}{\partial x\partial\dot{y}} = \sqrt{18}r^{-3.5}xy$$ $$\frac{\partial^2 I}{\partial y\partial\dot{x}} = \sqrt{18}r^{-3.5}xy$$ $$\frac{\partial^2 I}{\partial y\partial\dot{y}} = -\sqrt{8}r^{-1.5} + \sqrt{18}r^{-3.5}y^2$$ $$\frac{\partial^2 I}{\partial \dot{x}^2} = 2$$ $$\frac{\partial^2 I}{\partial \dot{y}^2} = 2$$ $$\frac{\partial^2 I}{\partial \dot{x}\partial\dot{y}} = 0$$ And the value of I was: $$I =\frac{2}{r}-1 + \dot{x}^2 + \dot{y}^2-\sqrt{8}r^{-1.5}(x\dot{x}+y\dot{y})$$ The 2nd partials are fairly simple.
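Partial derivatives like these are easy to get wrong by hand (the sign correction in the next post shows as much), so here is a short sympy check, added for this write-up rather than taken from the thread. It recomputes ∂I/∂x and ∂I/∂ẋ from the Cartesian form of I just given (the minus-sign version used at this point in the thread).

```python
import sympy as sp

x, y, xd, yd = sp.symbols('x y xdot ydot', real=True)
r = sp.sqrt(x**2 + y**2)

# I as defined above (minus sign on the square-root term, as in this post)
I = 2/r - 1 + xd**2 + yd**2 - sp.sqrt(8) * (x*xd + y*yd) / r**sp.Rational(3, 2)

dIdx  = sp.simplify(sp.diff(I, x))
dIdxd = sp.simplify(sp.diff(I, xd))
# With r = sqrt(x**2 + y**2), these should match
#   dI/dx  = -2 x/r**3 - sqrt(8) xdot/r**(3/2) + sqrt(18) x (x xdot + y ydot)/r**(7/2)
#   dI/dxd = 2 xdot - sqrt(8) x/r**(3/2)
print(dIdx)
print(dIdxd)
```

With the partials confirmed, they can be carried into the determinant.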
Substituting them into the formula for the determinant gives: $$\left(4I-\frac{\partial I}{\partial \dot{x}}\frac{\partial I}{\partial\dot{x}}\right)\left(4I-\frac{\partial I}{\partial \dot{y}}\frac{\partial I}{\partial\dot{y}}\right) - \left(\frac{\partial I}{\partial \dot{y}}\frac{\partial I}{\partial\dot{x}}\right)\left(\frac{\partial I}{\partial \dot{x}}\frac{\partial I}{\partial\dot{y}}\right)$$ The cross terms cancel on expansion and, dividing by 4, this factors to: $$I\left(4I - \left(\frac{\partial I}{\partial \dot{x}}\right)^2 - \left(\frac{\partial I}{\partial \dot{y}}\right)^2\right)$$ which is zero when I=0 or when $$I = \frac{1}{4}\left(\left(\frac{\partial I}{\partial \dot{x}}\right)^2 + \left(\frac{\partial I}{\partial \dot{y}}\right)^2\right)$$ Last edited: Apr 29, 2007 19. Apr 29, 2007 ### CarlB The determinant is evidently nonzero "almost everywhere". The usual coordinates blow up at r=2, so an interesting substitution is to put x=2, y=0. The determinant is then zero when either I=0, which gives: $$I+1 = (\dot{x}-1)^2 + \dot{y}^2 = 1$$ At this point, $$-2 < \dot{x} < 0$$ for an object with mass, so equality can only happen with $$\dot{x}=\dot{y}=0$$. This is a light ray stuck trying to exit the black hole and modeling this does not concern me, so I will not try to avoid this division by zero. The other case for a zero determinant would require: $$I = \dot{x}^2 + \dot{y}^2-2\dot{x} = (\dot{x}-1)^2 + \dot{y}^2$$ which simplifies to $$-1 = 0$$ and so cannot occur here. Last edited: Apr 29, 2007 20. May 12, 2007 ### CarlB Let's see if I can clean this up a little. The signs on the square roots are wrong, and it appears that it is easier if I rewrite things in river coordinates so: $$I =\frac{2}{r}-1 + \dot{x}^2 + \dot{y}^2+\sqrt{8}r^{-1.5}(x\dot{x}+y\dot{y})$$ $$I = v_x^2 + v_y^2 -1$$ $$v_x = x\sqrt{2/r^3} + \dot{x}$$ $$v_y = y\sqrt{2/r^3} + \dot{y}$$ The first order partial derivatives are: $$\frac{\partial I}{\partial x} = 2\sqrt{2}v_x\;r^{-1.5} -3\sqrt{2}(x^2v_x+xyv_y)r^{-3.5},$$ $$\frac{\partial I}{\partial y} = 2\sqrt{2}v_y\;r^{-1.5} -3\sqrt{2}(xyv_x+y^2v_y)r^{-3.5},$$ $$\frac{\partial I}{\partial \dot{x}} = 2v_x,$$ $$\frac{\partial I}{\partial \dot{y}} = 2v_y,$$ The required second order partial derivatives are: $$\frac{\partial^2 I}{\partial x\partial\dot{x}} = 2\sqrt{2}r^{-1.5} - 3\sqrt{2}x^2r^{-3.5},$$ $$\frac{\partial^2 I}{\partial x\partial\dot{y}} = -3\sqrt{2}r^{-3.5}xy$$ $$\frac{\partial^2 I}{\partial y\partial\dot{x}} = -3\sqrt{2}r^{-3.5}xy$$ $$\frac{\partial^2 I}{\partial y\partial\dot{y}} = 2\sqrt{2}r^{-1.5} - 3\sqrt{2}y^2r^{-3.5},$$ $$\frac{\partial^2 I}{\partial \dot{x}^2} = 2$$ $$\frac{\partial^2 I}{\partial \dot{y}^2} = 2$$ $$\frac{\partial^2 I}{\partial \dot{x}\partial\dot{y}} = 0$$ Solving for the equations of motion, we find: $$\begin{array}{rcl} \ddot{x} &=&-2x/r^3,\\ &&-4\dot{y}(x\dot{y}-y\dot{x})/r^3,\\ &&+4x/r^4,\\ &&+6x(x\dot{x}+y\dot{y})^2/r^5,\\ &&-2\sqrt{2}\dot{x}(\dot{x}^2+\dot{y}^2)/r^{1.5},\\ &&+6\sqrt{2}\dot{x}/r^{2.5},\\ &&+3\sqrt{2}\dot{x}(x\dot{x}+y\dot{y})/r^{3.5},\\ &&+4\sqrt{2}y(x\dot{y}-y\dot{x})/r^{4.5} \end{array}$$ $$\begin{array}{rcl} \ddot{y} &=&-2y/r^3,\\ &&+4\dot{x}(x\dot{y}-y\dot{x})/r^3,\\ &&+4y/r^4,\\ &&+6y(x\dot{x}+y\dot{y})^2/r^5,\\ &&-2\sqrt{2}\dot{y}(\dot{x}^2+\dot{y}^2)/r^{1.5},\\ &&+6\sqrt{2}\dot{y}/r^{2.5},\\ &&+3\sqrt{2}\dot{y}(x\dot{x}+y\dot{y})/r^{3.5},\\ &&-4\sqrt{2}x(x\dot{y}-y\dot{x})/r^{4.5} \end{array}$$ Last edited: May 13, 2007
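As a cross-check on those long expressions (my own sketch, not from the thread), sympy can derive the equations of motion directly from the Lagrangian L = √I and solve the resulting linear system for ẍ and ÿ, using the corrected plus-sign form of I from the last post. The simplification step can be slow, but the output can be compared term by term with the arrays above.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
x = sp.Function('x')(t)
y = sp.Function('y')(t)
r = sp.sqrt(x**2 + y**2)
xd, yd = x.diff(t), y.diff(t)

# Corrected (plus-sign) river-coordinate form of I
I = 2/r - 1 + xd**2 + yd**2 + sp.sqrt(8) * (x*xd + y*yd) / r**sp.Rational(3, 2)
L = sp.sqrt(I)

eqs = euler_equations(L, [x, y], t)                       # Euler-Lagrange equations
sol = sp.solve(eqs, [x.diff(t, 2), y.diff(t, 2)], dict=True)[0]
xdd = sp.simplify(sol[x.diff(t, 2)])                      # compare with the x'' array
ydd = sp.simplify(sol[y.diff(t, 2)])                      # compare with the y'' array
```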
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9637028574943542, "perplexity": 817.5372605536919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191986.44/warc/CC-MAIN-20170322212951-00350-ip-10-233-31-227.ec2.internal.warc.gz"}
http://factsforkids.net/electricity-facts-kids/
# Electricity Facts For Kids – The Power that Runs the World

The flow of electric charge is known as electricity. There are two kinds of electric charge: positive charge and negative charge. When a positive charge comes close to another positively charged substance, it repels it. Likewise, negative charges repel other negatively charged substances. A negatively charged substance is, however, attracted to a positively charged substance and vice versa. Now let's have a look at other electricity facts for kids.

## Electricity Facts For Kids

First person to study electricity: Thales of Miletus

Symbol of Electric Charge: 'Q'

SI Unit of Electric Charge: Coulomb (C)

SI Unit of Current: Ampere

SI Unit of Electric Power: Watt

### History of Electricity

1. The word 'electricity' was coined by a British scientist and doctor named William Gilbert in 1600.
2. A Greek philosopher who is thought to be the first to study electricity is Thales of Miletus.
3. Arabs are thought to be the first people who had some idea about electricity, or a natural source of it like lightning.
4. In olden times no one was familiar with the concept of electricity, but people still knew about electric fish like electric eels. They used to treat some patients by touching the patient's body with the fish.
5. Benjamin Franklin, who is known as 'The First American', carried out extensive research on electricity during the 18th century.

### Electric Charge

1. The symbol for electric charge is 'Q'.
2. The SI unit of electric charge is the Coulomb (C).
3. When a substance has surplus electrons in its atomic structure, it becomes a negatively charged object.
4. When a substance becomes electrically charged, it creates a field around itself which is known as an electromagnetic field (EMF).
5. An electromagnetic field is made up of a magnetic field and an electric field.
6. An object that allows the flow of electric charge is called a conductor.
7. An object that does not allow the flow of electric charge is called an insulator.
8. An object that partially allows the flow of electric charge is called a semiconductor.

### Electric Current

1. The flow of electric charge is known as electric current. The SI unit of current is the Ampere.
2. If an object resists electric current and does not allow the electric charge to flow easily, this phenomenon is known as electrical resistivity.
3. The process by which a material allows electric current to pass through it is called electrical conduction. It is exactly the opposite of electrical resistivity.
4. The rate at which an electrical circuit transmits electrical energy is known as electric power. It is the rate of doing work. It is denoted by the symbol 'P'.

### Generator

1. Mechanical energy is converted into electrical energy with the help of a special device known as a generator.
2. Electric power is generated by using electric generators. Electric batteries can also produce electric power.
3. The very first generator for producing electricity was called a dynamo.
4. A machine that generates static electricity is called an electrostatic generator.
5. There are two types of electrostatic generators: friction machines and influence machines.

### Battery

1. A device which converts chemical energy (stored in its cells) into electrical energy with the help of very small electrochemical cells is known as a battery.
2. Around 1800, an Italian physicist named Alessandro Volta became the first scientist to invent a battery.
3.
There are two kinds of batteries: primary batteries, which cannot be used again, and secondary batteries, which can be used again by recharging them.

### Power Station

1. The industrial site where electric power is generated is called a power station. It is also known as a powerhouse.
2. A generator is installed at the heart of every power station in order to convert mechanical energy into electrical energy.
3. In 1868, William George Armstrong designed the first power station in the world, in England.
4. The name of the first ever power station was 'Edison Electric Light Station'. It was built in London in 1882 by Thomas Edison.
5. In a thermal power station (also called a steam power station), a heat engine converts thermal energy into rotational mechanical energy, which drives the generator.

Do you know that wind can also generate electricity? Read more about it at Wind Power Facts For Kids.

### Hydroelectricity

1. When electrical power is produced by using the force of falling water, it is called hydropower. When hydropower generates electricity, it is known as hydroelectricity.
2. The biggest producer of hydroelectricity in the world is China.
3. Almost 16 percent of the electricity generated all over the world is produced by hydropower.

### More Electricity Facts For Kids

1. Michael Faraday was a British scientist. He was the first person to introduce the concept of the 'electric field'. Any charged body creates an electric field around it.
2. Electrical energy is converted into mechanical energy by using a special machine called an electric motor. Thus, it performs exactly the reverse of what is performed by a generator.
3. The world's biggest hydroelectric power station is in China. It is built on the Three Gorges Dam and has the capacity to produce 22,500 MW (megawatts) of electricity.
4. Solar energy is also converted into electricity with the help of photovoltaic (PV) cells. This type of electricity is known as solar power.

Did you find these electricity facts for kids helpful? Are they what you're looking for? Please comment and help us improve this article. Thanks for reading!
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9041004180908203, "perplexity": 1025.5159931484081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887535.40/warc/CC-MAIN-20180118171050-20180118191050-00672.warc.gz"}
https://astarmathsandphysics.com/university-physics-notes/quantum-mechanics/1631-hermitian-operators.html?tmpl=component&print=1&page=
## Hermitian Operators

Quantum mechanical operators are Hermitian. If a quantum mechanical operator is represented by a matrix $A$, then $A = A^{\dagger}$, so that $A$ is equal to the complex conjugate transpose of $A$. For example, the spin Pauli operators may be represented by the matrices

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

Every quantum mechanical operator $A$ is associated with an eigenvalue equation $A\psi = a\psi$, where $a$ is an eigenvalue. If the quantity corresponding to the operator $A$ is measured, the only possible values of the quantity that may be observed are the eigenvalues of the operator.

The eigenvalues of the matrix represent actual observable quantities and must be real numbers, i.e. not complex numbers. This is a feature of Hermitian operators – that the eigenvalues are real.

Proof: If $A\psi = a\psi$, then $\psi^{\dagger}A\psi = a\,\psi^{\dagger}\psi$, since $\psi^{\dagger}\psi$ is a scalar. Hence $$a = \frac{\psi^{\dagger}A\psi}{\psi^{\dagger}\psi}.$$ Now take the complex conjugate transpose of both sides: $$a^* = \frac{\psi^{\dagger}A^{\dagger}\psi}{\psi^{\dagger}\psi}.$$ But $A^{\dagger} = A$, so $a^* = a$ and the eigenvalue $a$ is real.

For the spin Pauli matrices above, the eigenvalues are $\pm 1$, but in general eigenvalues may take any one of a possibly infinite range of values. This is the case for the energy of a harmonic oscillator, the position or momentum of a free particle, or the energy of an electron in an atom. Notice that the spin is quantized, since the eigenvalues are $\pm 1$. This falls quite naturally out of the eigenvalue equations. If a physical quantity is quantized then the set of eigenvalues will form a (possibly infinite) discrete set.
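A quick numerical illustration of these notes, added here for concreteness: each Pauli matrix equals its own conjugate transpose, and its eigenvalues are the real numbers ±1.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

for name, s in [('sigma_x', sx), ('sigma_y', sy), ('sigma_z', sz)]:
    hermitian = np.allclose(s, s.conj().T)   # A == A^dagger
    eigvals = np.linalg.eigvalsh(s)          # eigvalsh is for Hermitian matrices
    print(name, hermitian, eigvals)          # True [-1.  1.] in each case
```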
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9795670509338379, "perplexity": 441.42053885645885}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.36/warc/CC-MAIN-20180220204917-20180220224917-00010.warc.gz"}
https://physics.stackexchange.com/questions/333784/physical-interpretation-of-entanglement-in-qft-vacuum/333815
# Physical interpretation of entanglement in QFT vacuum

The ground state of a relativistic QFT has nonzero correlation between field operators at spatially separated points. A way to interpret this is through entanglement between different spacetime regions. In quantum mechanics, entanglement is between, for example, two qubits, which is the mathematical representation of, say, entanglement between the spins of two electrons. What is the physical interpretation of entanglement in the vacuum of relativistic QFT? I understand that the entanglement is between regions of spacetime, but what are the states of the two regions which are entangled?

Update: I would like to make this question a bit more precise using the Rindler decomposition of Minkowski space. Minkowski space can be decomposed into left and right Rindler wedges. The vacuum state wavefunction can be written as (the derivation can be seen in, for example, Daniel Harlow's lecture notes, section 3.3) $$|\Omega\rangle=\frac{1}{\sqrt{Z}}\sum_i e^{-\pi\omega_i}|i^{*}\rangle_L|i\rangle_R$$ where $|i^{*}\rangle_L$ and $|i\rangle_R$ are $\mathrm{eigenstates^{*}}$ of the restriction of the Lorentz boost operator to the right wedge ($K_r$). In this form, the entanglement between the two pieces of Minkowski space is evident. But is there an operational interpretation for this entanglement? What would be a measurement procedure which would destroy this entanglement?

*Actually, $|i^{*}\rangle_L$ is not an eigenstate of $K_r$. $|i^{*}\rangle_L=\Phi |i\rangle _L$, where $|i\rangle _L$ is an eigenstate of $K_r$ and $\Phi$ is an antiunitary operator that exists in all quantum field theories and is usually called CPT. (Details can be found in the reference given above.)

• The entanglement is attributed to the fluctuating quantum field. Even if the particle number is zero, the quantum field still has a nonzero fluctuation. Consider the ground state of a simple harmonic oscillator: if you act with the number operator $a^\dagger a$ on the ground state, the number is zero, there is no particle, but the ground state wave function $\exp(-x^2)$ is not trivial; the position (as well as momentum) of the oscillator is still fluctuating. This is the vacuum fluctuation, and this is where the vacuum entanglement comes from. – Everett You May 19 '17 at 1:35
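One way to see what the Rindler decomposition is saying is a finite-dimensional toy model (my own illustration, with made-up boost eigenvalues ω_i and the CPT conjugation ignored): write the state above as a matrix of coefficients and trace out the left wedge. The right-wedge reduced density matrix comes out thermal in the boost energy, ρ_R ∝ e^{−2πK_R}; observing that thermal noise while uniformly accelerating (the Unruh effect) is one operational face of this entanglement.

```python
import numpy as np

omega = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # toy boost eigenvalues (assumed)

# |Omega> = (1/sqrt(Z)) sum_i e^{-pi w_i} |i>_L |i>_R as a coefficient matrix
c = np.diag(np.exp(-np.pi * omega))
c /= np.linalg.norm(c)                        # normalize the state

# Reduced density matrix of the right wedge: trace over L gives rho_R = c^dag c
rho_R = c.conj().T @ c

# Compare with the Boltzmann form e^{-2 pi w} / Z
boltz = np.exp(-2 * np.pi * omega)
boltz /= boltz.sum()
print(np.allclose(np.diag(rho_R), boltz))     # True: thermal in the boost energy
```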
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9678401947021484, "perplexity": 151.04270713012681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00386.warc.gz"}
http://www.ck12.org/book/CK-12-Trigonometry---Second-Edition/r6/section/1.2/
# 1.2: Special Right Triangles

## Learning Objectives

• Recognize special right triangles. • Use the special right triangle ratios to solve special right triangles.

## Special Right Triangle #1: Isosceles Right Triangle

An isosceles right triangle is an isosceles triangle and a right triangle. This means that it has two congruent sides and one right angle. Therefore, the two congruent sides must be the legs. Because the two legs are congruent, we will call them both $a$ and the hypotenuse $c$. Plugging both letters into the Pythagorean Theorem, we get: $a^2 + a^2 & = c^2\\2a^2 & = c^2\\\sqrt{2a^2} & = \sqrt{c^2}\\a\sqrt{2} & = c$ From this we can conclude that the hypotenuse length is the length of a leg multiplied by $\sqrt{2}$. Therefore, we only need one of the three lengths to determine the other two lengths of the sides of an isosceles right triangle. The ratio is usually written $x:x:x\sqrt{2}$, where $x$ is the length of the legs and $x\sqrt{2}$ is the length of the hypotenuse. Example 1: Find the lengths of the other two sides of the isosceles right triangles below. a. b. c. d. Solution: a. If a leg has length 8, by the ratio, the other leg is 8 and the hypotenuse is $8\sqrt{2}$. b. If the hypotenuse has length $7\sqrt{2}$, then both legs are 7. c. Because the leg is $10\sqrt{2}$, then so is the other leg. The hypotenuse will be $10\sqrt{2}$ multiplied by an additional $\sqrt{2}$. $10\sqrt{2} \cdot \sqrt{2} = 10 \cdot 2 = 20$ d. In this problem set $x\sqrt{2} = 9\sqrt{6}$ because $x\sqrt{2}$ is the hypotenuse portion of the ratio. $x\sqrt{2} & = 9\sqrt{6}\\x & = \frac{9\sqrt{6}}{\sqrt{2}} \cdot \frac{\sqrt{2}}{\sqrt{2}} = \frac{9\sqrt{12}}{2} = \frac{18\sqrt{3}}{2} = 9\sqrt{3}$ So, the length of each leg is $9\sqrt{3}$. What are the angle measures in an isosceles right triangle? Recall that the sum of the angles in a triangle is $180^\circ$ and there is one $90^\circ$ angle. Therefore, the other two angles add up to $90^\circ$. Because this is an isosceles triangle, these two angles are equal and $45^\circ$ each. Sometimes an isosceles right triangle is also referred to as a $45-45-90$ triangle.

## Special Right Triangle #2: 30-60-90 Triangle

$30-60-90$ refers to each of the angles in this special right triangle. To understand the ratios of the sides, start with an equilateral triangle with an altitude drawn from one vertex. Recall from geometry that an altitude, $h$, cuts the opposite side directly in half. So, we know that one side, the hypotenuse, is $2s$ and the shortest leg is $s$. Also, recall that the altitude is a perpendicular and angle bisector, which is why the angle at the top is split in half. To find the length of the longer leg, use the Pythagorean Theorem: $s^2 + h^2 & = (2s)^2\\s^2 + h^2 & = 4s^2\\h^2 & = 3s^2\\h & = s\sqrt{3}$ From this we can conclude that the length of the longer leg is the length of the short leg multiplied by $\sqrt{3}$ or $s\sqrt{3}$. Just like the isosceles right triangle, we now only need one side in order to determine the other two in a $30-60-90$ triangle. The ratio of the three sides is written $x:x\sqrt{3}:2x$, where $x$ is the shortest leg, $x\sqrt{3}$ is the longer leg and $2x$ is the hypotenuse.
Notice, that the shortest side is always opposite the smallest angle and the longest side is always opposite $90^\circ$. If you look back at the Review Questions from the last section we now recognize #8 as a $30-60-90$ triangle. Example 2: Find the lengths of the two missing sides in the $30-60-90$ triangles. a. b. c. d. Solution: Determine which side in the $30-60-90$ ratio is given and solve for the other two. a. $4\sqrt{3}$ is the longer leg because it is opposite the $60^\circ$. So, in the $x:x\sqrt{3}:2x$ ratio, $4\sqrt{3} = x\sqrt{3}$, therefore $x = 4$ and $2x = 8$. The short leg is 4 and the hypotenuse is 8. b. 17 is the hypotenuse because it is opposite the right angle. In the $x:x\sqrt{3}:2x$ ratio, $17 = 2x$ and so the short leg is $\frac{17}{2}$ and the long leg is $\frac{17\sqrt{3}}{2}$. c. 15 is the long leg because it is opposite the $60^\circ$. Even though 15 does not have a radical after it, we can still set it equal to $x\sqrt{3}$. $x\sqrt{3} & = 15\\x & = \frac{15}{\sqrt{3}} \cdot \frac{\sqrt{3}}{\sqrt{3}} = \frac{15\sqrt{3}}{3} = 5\sqrt{3} \quad \text{So, the short leg is} \ 5\sqrt{3}.$ Multiplying $5\sqrt{3}$ by 2, we get the hypotenuse length, which is $10\sqrt{3}$. d. $24\sqrt{21}$ is the length of the hypotenuse because it is opposite the right angle. Set it equal to $2x$ and solve for $x$ to get the length of the short leg. $2x & = 24\sqrt{21}\\x &= 12\sqrt{21}$ To find the length of the longer leg, we need to multiply $12\sqrt{21}$ by $\sqrt{3}$. $12\sqrt{21} \cdot \sqrt{3} = 12\sqrt{3 \cdot 3 \cdot 7} = 36\sqrt{7}$ The length of the longer leg is $36\sqrt{7}$. Be careful when doing these problems. You can always check your answers by finding the decimal approximations of each side. For example, in 2d, short leg $= 12\sqrt{21} \approx 54.99$, long leg $= 36\sqrt{7} \approx 95.25$ and the hypotenuse $= 24\sqrt{21} \approx 109.98$. This is an easy way to double-check your work and verify that the hypotenuse is the longest side. ## Using Special Right Triangle Ratios Special right triangles are the basis of trigonometry. The angles $30^\circ, \ 45^\circ, \ 60^\circ$ and their multiples have special properties and significance in the unit circle (sections 1.5 and 1.6). Students are usually required to memorize these two ratios because of their importance. First, let’s compare the two ratios, so that we can better distinguish the difference between the two. For a $45-45-90$ triangle the ratio is $x:x:x\sqrt{2}$ and for a $30-60-90$ triangle the ratio is $x:x\sqrt{3}:2x$. An easy way to tell the difference between these two ratios is the isosceles right triangle has two congruent sides, so its ratio has the $\sqrt{2}$, whereas the $30-60-90$ angles are all divisible by 3, so that ratio includes the $\sqrt{3}$. Also, if you are ever in doubt or forget the ratios, you can always use the Pythagorean Theorem. The ratios are considered a short cut. Example 3: Determine if the sets of lengths below represent special right triangles. If so, which one? a. $8\sqrt{3}:24:16\sqrt{3}$ b. $\sqrt{5}:\sqrt{5}:\sqrt{10}$ c. $6\sqrt{7}:6\sqrt{21}:12$ Solution: a. Yes, this is a $30-60-90$ triangle. If the short leg is $x = 8\sqrt{3}$, then the long leg is $8\sqrt{3} \cdot \sqrt{3} = 8 \cdot 3 = 24$ and the hypotenuse is $2 \cdot 8\sqrt{3} = 16\sqrt{3}$. b. Yes, this is a $45-45-90$ triangle. The two legs are equal and $\sqrt{5} \cdot \sqrt{2} = \sqrt{10}$, which would be the length of the hypotenuse. c. No, this is not a special right triangle, nor a right triangle. 
The hypotenuse should be $12\sqrt{7}$ in order to be a $30-60-90$ triangle. ## Points to Consider • What is the difference between Pythagorean triples and special right triangle ratios? • Why are these two ratios considered “special”? ## Review Questions Solve each triangle using the special right triangle ratios. 1. A square window has a diagonal of 6 ft. To the nearest hundredth, what is the height of the window? 2. Pablo has a rectangular yard with dimensions 10 ft by 20 ft. He is decorating the yard for a party and wants to hang lights along both diagonals of his yard. How many feet of lights does he need? Round your answer to the nearest foot. 3. Can $2:2:2\sqrt{3}$ be the sides of a right triangle? If so, is it a special right triangle? 4. Can $\sqrt{5}:\sqrt{15}:2\sqrt{5}$ be the sides of a right triangle? If so, is it a special right triangle? 1. Each leg is $\frac{16}{\sqrt{2}} = \frac{16}{\sqrt{2}} \cdot \frac{\sqrt{2}}{\sqrt{2}} = \frac{16\sqrt{2}}{2} = 8\sqrt{2}$. 2. Short leg is $\frac{\sqrt{6}}{\sqrt{3}} = \sqrt{\frac{6}{3}} = \sqrt{2}$ and hypotenuse is $2\sqrt{2}$. 3. Short leg is $\frac{12}{\sqrt{3}} = \frac{12}{\sqrt{3}} \cdot \frac{\sqrt{3}}{\sqrt{3}} = \frac{12\sqrt{3}}{3} = 4\sqrt{3}$ and hypotenuse is $8\sqrt{3}$. 4. The hypotenuse is $4\sqrt{10} \cdot \sqrt{2} = 4\sqrt{20} = 8\sqrt{5}$. 5. Each leg is $\frac{5\sqrt{2}}{\sqrt{2}} = 5$. 6. The short leg is $\frac{15}{2}$ and the long leg is $\frac{15\sqrt{3}}{2}$. 7. If the diagonal of a square is 6 ft, then each side of the square is $\frac{6}{\sqrt{2}}$ or $3\sqrt{2} \approx 4.24 \ ft$. 8. These are not dimensions for a special right triangle, so to find the diagonal (both are the same length) use the Pythagorean Theorem: $10^2 + 20^2 & = d^2\\100 + 400 & = d^2\\\sqrt{500} & = d\\10\sqrt{5} & = d$ So, if each diagonal is $10\sqrt{5}$, two diagonals would be $20\sqrt{5} \approx 45 \ ft.$ Pablo needs 45 ft of lights for his yard. 9. $2:2:2\sqrt{3}$ does not fit into either ratio, so it is not a special right triangle. To see if it is a right triangle, plug these values into the Pythagorean Theorem: $2^2 + 2^2 & = \left ( 2\sqrt{3} \right )^2\\4 + 4 & = 12\\8 & < 12$ Since $8 < 12$, this is not a right triangle; it is an obtuse triangle. 10. $\sqrt{5}:\sqrt{15}:2\sqrt{5}$ is a $30-60-90$ triangle. The long leg is $\sqrt{5} \cdot \sqrt{3} = \sqrt{15}$ and the hypotenuse is $2\sqrt{5}$.
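As a check on answers like these, here is a small throwaway Python helper (not part of the CK-12 text) that fills in the other two sides of a special right triangle from any one known side, using the exact ratios x : x : x√2 and x : x√3 : 2x.

```python
import math

def solve_45_45_90(leg=None, hyp=None):
    """Return (leg, leg, hypotenuse) for a 45-45-90 triangle."""
    x = leg if leg is not None else hyp / math.sqrt(2)
    return (x, x, x * math.sqrt(2))

def solve_30_60_90(short=None, long=None, hyp=None):
    """Return (short leg, long leg, hypotenuse) for a 30-60-90 triangle."""
    if short is not None:
        x = short
    elif long is not None:
        x = long / math.sqrt(3)
    else:
        x = hyp / 2
    return (x, x * math.sqrt(3), 2 * x)

print(solve_30_60_90(long=4 * math.sqrt(3)))   # (4.0, 6.93..., 8.0) as in Example 2a
print(solve_45_45_90(hyp=7 * math.sqrt(2)))    # (7.0, 7.0, 9.89...) as in Example 1b
```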
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 105, "texerror": 0, "math_score": 0.8993215560913086, "perplexity": 230.65616054658963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462548.53/warc/CC-MAIN-20150226074102-00138-ip-10-28-5-156.ec2.internal.warc.gz"}
https://plainmath.net/26332/compute-laplace-transform-equal-less-then-equal-less-then-less-equal-infty
# Compute the Laplace transform of $f(t)=\begin{cases}8, & 0 \le t < 14\\ 9t, & 14 \le t < \infty\end{cases}$

Usamah Prosser

Step 1 Consider the given information. The Laplace transform is defined as $$F(s)={\int }_{0}^{\infty }{e}^{-st}f(t)\,dt$$ Putting $f$ into the integral, $$L\{f\}(s)={\int }_{0}^{14}8{e}^{-st}\,dt+{\int }_{14}^{\infty }9t{e}^{-st}\,dt =8{\int }_{0}^{14}{e}^{-st}\,dt+9{\int }_{14}^{\infty }t{e}^{-st}\,dt$$

Step 2 For the second integral, integrate by parts with $$u=t,\quad dv={e}^{-st}\,dt,\qquad du=dt,\quad v=\frac{{e}^{-st}}{-s}$$ Putting the values into the function, $$L\{f\}(s)=8{\left[\frac{{e}^{-st}}{-s}\right]}_{0}^{14}+9{\left[\frac{t{e}^{-st}}{-s}\right]}_{14}^{\infty }+\frac{9}{s}{\int }_{14}^{\infty }{e}^{-st}\,dt$$ $$=-\frac{8\left({e}^{-14s}-1\right)}{s}+\frac{126{e}^{-14s}}{s}+\frac{9}{s}{\left[\frac{{e}^{-st}}{-s}\right]}_{14}^{\infty }$$ $$=-\frac{8\left({e}^{-14s}-1\right)}{s}+\frac{\left(126s+9\right){e}^{-14s}}{{s}^{2}}$$ $$=\frac{118{e}^{-14s}s+8s+9{e}^{-14s}}{{s}^{2}}$$
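The closed form can be checked mechanically; here is a short sympy verification, added for this write-up rather than part of the original solution.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.Piecewise((8, t < 14), (9*t, True))   # the given piecewise function

F = sp.integrate(f * sp.exp(-s*t), (t, 0, sp.oo))
print(sp.simplify(F))
# (8*s + 118*s*exp(-14*s) + 9*exp(-14*s))/s**2, up to equivalent rearrangement
```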
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 18, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8082664012908936, "perplexity": 2946.1479824716607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00276.warc.gz"}
https://physics.stackexchange.com/questions/75363/how-is-the-schroedinger-equation-a-wave-equation
# How is the Schroedinger equation a wave equation?

Wave equations take the form: $$\frac{ \partial^2 f} {\partial t^2} = c^2 \nabla ^2f$$ But the Schroedinger equation takes the form: $$i \hbar \frac{ \partial f} {\partial t} = - \frac{\hbar ^2}{2m}\nabla ^2f + U(x) f$$ The partials with respect to time are not the same order. How can Schroedinger's equation be regarded as a wave equation? And why are interference patterns (e.g. in the double-slit experiment) so similar for water waves and quantum wavefunctions?

• For a connection between Schr. eq. and Klein-Gordon eq, see e.g. A. Zee, QFT in a Nutshell, Chap. III.5, and this Phys.SE post plus links therein. Jul 27, 2014 at 17:47
• "In what sense is the Schrödinger equation a wave equation?" In a loose sense. Its solutions are intuitively wave-like. From a mathematical point of view, things are not as easy. Standard classifications of PDE's don't accommodate the Schrödinger equation, which kinda looks parabolic but is not dissipative. It shares many properties with hyperbolic equations, so we can say that it is a wave-equation -- not in the technical sense, but yes in a heuristic sense. May 16, 2017 at 22:21
• I had left a comment on one of the answers below, but then deleted it... I'll post something similar here because it's along the lines of what @AccidentalFourierTransform said. I wouldn't call this equation a wave equation. It's not hyperbolic. Wave-like? Maybe. But I don't think I would try to defend the statement that it's a wave equation. To me, hyperbolic <-> wave equation and anything else is just something else. May 16, 2017 at 22:28
• A variant on this question - why does double-slit interference produce such similar interference patterns for water waves as for the electron wavefunction, if their underlying differential equations are so different? May 16, 2017 at 22:50
• @tparker We see that all the time in, say, fluid dynamics. Linear potential equations can generate very similar solutions as the full Navier-Stokes equations under some circumstances despite the vast differences in their underlying equations. But there are solutions that can't be produced by one or the other. I'm reluctant to say it's all just coincidental, but it's not unheard of that fundamentally different equations can produce similar solutions in a limited number of situations. May 16, 2017 at 23:15

Actually, a wave equation is any equation that admits wave-like solutions, which take the form $f(\vec{x} \pm \vec{v}t)$. The equation $\frac{\partial^2 f}{\partial t^2} = c^2\nabla^2 f$, despite being called "the wave equation," is not the only equation that does this. If you plug the wave solution into the Schroedinger equation for constant potential, using $\xi = x - vt$: \begin{align} i\hbar\frac{\partial}{\partial t}f(\xi) &= \biggl(-\frac{\hbar^2}{2m}\nabla^2 + U\biggr) f(\xi) \\ -i\hbar vf'(\xi) &= -\frac{\hbar^2}{2m}f''(\xi) + Uf(\xi) \\ \end{align} This clearly depends only on $\xi$, not $x$ or $t$ individually, which shows that you can find wave-like solutions. They wind up looking like $e^{ik\xi}$.

• Doesn't any translationally-invariant PDE satisfy this criterion, even if it isn't rotationally invariant or even linear?
$\partial f({\bf \xi}) / \partial x_i = \partial f({\bf \xi}) / \partial \xi_i$ and $\partial f({\bf \xi}) / \partial t = -{\bf v} \cdot {\bf \nabla}_{\bf \xi} f({\bf \xi})$, so if you take any translationally invariant PDE and replace every $\partial / \partial t$ with $-\bf{v} \cdot {\bf \nabla}_{\bf \xi}$, then can't any solution $f({\bf \xi})$ of the resulting 3D PDE be converted into a "wave-like" solution to the original 4D PDE by ... May 8, 2017 at 6:35
• ... letting $f({\bf \xi}) \to f({\bf x} - {\bf v} t)$? May 8, 2017 at 6:35
• I've expanded the comment above into an answer. May 10, 2017 at 8:28
• I disagree. For the wave equation any function f(x-vt) (with correctly fixed v) is a solution. In your Schroedinger example only very special functions fulfill the equation. May 17, 2017 at 5:19
• @lalala ... for non-dispersive waves. However, plenty of other phenomena that you really do want to keep calling 'waves' (like slinky waves, sound in solids, light in glass, or ripples in a pond) no longer support that property: they do have an infinite basis of solutions of the form $e^{i(kx-\omega t)}$, but they no longer sustain $f(x-vt)$ as a solution, exactly in the way that the Schrödinger equation does. "Wave" is a bit of a fluffy term, but if you use that basis to write out the Schrödinger equation, you've got to be prepared to kick out the others. May 25, 2017 at 18:32

Both are types of wave equations because the solutions behave as you expect for "waves". However, mathematically speaking they are partial differential equations (PDE) which are not of the same type (so you expect that the class of solutions, given some boundary conditions, will present different behaviour). The constraints on the eigenvalues of the linear operator are also particular to each of the types of PDE. Generally, a second order partial differential equation in two variables can be written as $$A \partial_x^2 u + B \partial_x \partial_y u + C \partial_y^2 u + \text{lower order terms} = 0$$ The wave equation in one dimension you quote is a simple form for a hyperbolic PDE satisfying $$B^2 - 4AC > 0$$. The Schrödinger equation is a parabolic PDE in which we have $$B^2 - 4AC = 0$$ since $$B=C=0$$. It can be mapped to the heat equation.

• Shouldn't it be $B^2 - 4AC$ or maybe $2B$ in the PDE? May 18, 2017 at 15:21
• Is there an insightful explanation why the Schroedinger equation and the heat equation are so similar (actually identical with $U=0$ except for being complex-valued) yet result in such different behavior? – divB Apr 4, 2020 at 22:40
• @divB I think it's similar to how the ODE f'(x) = -k f(x) gives exponential decay (for real k > 0), but add an i before the k and you instead get oscillating solutions like you get for f''(x) = -k f(x) Jun 6, 2020 at 2:20

In the technical sense, the Schrödinger equation is not a wave equation (it is not a hyperbolic PDE). In a more heuristic sense, though, one may regard it as one because it exhibits some of the characteristics of typical wave-equations. In particular, the most important property shared with wave-equations is the Huygens principle. For example, this principle is behind the double slit experiment. If you want to read about this principle and the Schrödinger equation, see Huygens' principle, the free Schrodinger particle and the quantum anti-centrifugal force and Huygens’ Principle as Universal Model of Propagation. See also this Math.OF post for more details about the HP and hyperbolic PDE's.

• Could you elaborate on this?
I am trying to understand how the DS experiment can be predicted via solutions to the Schrödinger equation; usually it seems to be done just heuristically with wave equation solutions, but these don't typically solve Schrödinger. Maybe you have a source which explains this? Looking through the sources you posted, they don't discuss the DS experiment. Sep 18, 2020 at 14:33

As Joe points out in his answer to a duplicate, the Schrodinger equation for a free particle is a variant on the slowly-varying envelope approximation of the wave equation, but I think his answer misses some subtleties. Take a general solution $f(x)$ to the wave equation $\partial^2 f = 0$ (we use Lorentz-covariant notation and the -+++ sign convention). Imagine decomposing $f$ into a single plane wave modulated by an envelope function $\psi(x)$: $f(x) = \psi(x)\, e^{i k \cdot x}$, where the four-vector $k$ is null. The wave equation then becomes $$(\partial^\mu + 2 i k^\mu) \partial_\mu \psi = ({\bf \nabla} + 2 i\, {\bf k}) \cdot {\bf \nabla} \psi + \frac{1}{c^2} (-\partial_t + 2 i \omega) \partial_t \psi= 0,$$ where $c$ is the wave velocity. If there exists a Lorentz frame in which $|{\bf k} \cdot {\bf \nabla} \psi| \ll |{\bf \nabla} \cdot {\bf \nabla} \psi|$ and $|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$, then in that frame the middle two terms can be neglected, and we are left with $$i \partial_t \psi = -\frac{c^2}{2 \omega} \nabla^2 \psi,$$ which is the Schrodinger equation for a free particle of mass $m = \hbar \omega / c^2$. $|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$ means that the envelope function's time derivative $\dot{\psi}$ is changing much more slowly than the plane wave is oscillating (i.e. many plane wave oscillations occur in the time $|\dot{\psi} / \partial_t \dot{\psi}|$ that it takes for $\dot{\psi}$ to change significantly) - hence the name "slowly-varying envelope approximation." The physical interpretation of $|{\bf k} \cdot {\bf \nabla} \psi| \ll |{\bf \nabla} \cdot {\bf \nabla} \psi|$ is much less clear and I don't have a great intuition for it, but it seems to basically imply that if we take the direction of wave propagation to be $\hat{{\bf z}}$, then $\partial_z \psi$ changes very quickly in space along the direction of wave propagation (i.e. you only need to travel a small fraction of a wavelength $\lambda$ before $\partial_z \psi$ changes significantly). This is a rather strange limit, because clearly it doesn't really make sense to think of $\psi$ as an "envelope" if it changes over a length scale much shorter than the wavelength of the wave that it's supposed to be enveloping. Frankly, I'm not even sure if this limit is compatible with the other limit $|\partial_t \dot{\psi}| \ll \omega |\dot{\psi}|$. I would welcome anyone's thoughts on how to interpret this limit.

As stressed in other answers and comments, the common point between these equations is that their solutions are "waves". It is the reason why the physics they describe (eg interference patterns) is similar. Tentatively, I would define a "wavelike" equation as 1. a linear PDE 2. whose space of spatially bounded* solutions admits a (pseudo-)basis of the form $$e^{i \vec{k}.\vec{x} - i \omega_{\alpha}(\vec{k})t}, \vec{k} \in \mathbb{R}^n, \alpha \in \left\{1,\dots,r\right\}$$ with $\omega_1(\vec{k}),\dots,\omega_r(\vec{k})$ real-valued (aka. dispersion relation).
For example, in 1+1 dimension, these are going to be the PDE of the form $$\sum_{p,q} A_{p,q} \partial_x^p \partial_t^q \psi = 0$$ such that, for all $k \in \mathbb{R}$, the polynomial $$Q_k(\omega) := \sum_{p,q} (i)^{p+q} A_{p,q} k^p \omega^q$$ only admits real roots. In this sense this is reminiscent of the hyperbolic vs parabolic classification detailed in @DaniH's answer, but without giving a special role to 2nd order derivatives. Note that with such a definition the free Schrödinger equation would qualify as wavelike, but not the one with a potential (and rightly so I think, as the physics of, say, the quantum harmonic oscillator is quite different, with bound states etc). Nor would the heat equation $\partial_t \psi -c \partial_x^2 \psi = 0$: the '$i$' in the Schrödinger equation matters! * Such equations will also often admit evanescent wave solutions corresponding to imaginary $\vec{k}$. This answer elaborates on my comment to David Z's answer. I think his definition of a wave equation is excessively broad, because it includes every translationally invariant PDE and every value of $v$. For simplicity, let's specialize to linear PDE's in one spatial dimension. A general order-$N$ such equation takes the form $$\sum_{n=0}^N \sum_{\mu_1, \dots, \mu_n \in \{t, x\}} c_{\mu_1, \dots, \mu_n} \partial_{\mu_1} \dots \partial_{\mu_n}\, f(x, t) = 0.$$ To simplify the notation, we'll let $\{ \mu \}$ denote $\mu_1, \dots, \mu_n$, so that $$\sum_{n=0}^N \sum_{\{\mu\}} c_{\{\mu\}} \partial_{\mu_1} \dots \partial_{\mu_n}\, f(x, t) = 0.$$ Let's make the ansatz that $f$ only depends on $\xi := x - v t$. Then $\partial_x f(\xi) = f'(\xi)$ and $\partial_t f(\xi) = -v\, f'(\xi)$. If we define $a_{\{\mu\}} \in \mathbb{N}$ to simply count the number of indices $\mu_i \in \{\mu\}$ that equal $t$, then the PDE becomes $$\sum_{n=0}^N f^{(n)}(\xi) \sum_{\{\mu\}} c_{\{\mu\}} (-v)^{a_{\{\mu\}}} = 0.$$ Defining $c'_n := \sum \limits_{\{\mu\}} c_{\{\mu\}} (-v)^{a_{\{\mu\}}}$, we get the ordinary differential equation with constant coefficients $$\sum_{n=0}^N c'_n\ f^{(n)}(\xi) = 0.$$ Now as usual, we can make the ansatz $f(\xi) = e^{i z \xi}$ and find that the differential equation is satisfied as long as $z$ is a root of the characteristic polynomial $\sum \limits_{n=0}^N c_n' (iz)^n$. So our completely arbitrary translationally invariant linear PDE will have "wave-like solutions" traveling at every possible velocity!

• Interesting point. I'm not ready to admit that the definition is overly broad; maybe all translationally invariant linear PDEs are "wave equations". But I am wondering whether there's more to the story. For example, do some of these PDEs admit other solutions which cannot be expressed as a linear combination of waves $f(x \pm vt)$? May 10, 2017 at 19:07
• @DavidZ The PDE's don't even have to be linear, as mentioned in my original comment (I just considered the linear case for simplicity). If you allow the phrase "wave equation" to cover general TI nonlinear PDE's, in my opinion it becomes so broad that you might as well just say "translationally invariant PDE." May 10, 2017 at 23:02
The wording I use in the introductory classes is a moving disturbance, where the 'disturbance' is allowed to be in any measurable quantity and simply means that the quantity is seen to vary from its equilibrium value and then return to that value. The surprising thing is not how general that expression is, but that it is necessary to use something that general to cover all the basic cases: waves on strings, surface waves on liquids, sound, and light. And by that definition Schrödinger's equation is used to describe the moving variation of various observables, so it arguably qualifies. There is room to quibble (the wave-function itself is not an observable, and even the distributions of values that can be observed are often statistical in nature) but I've always been comfortable with this approach.

• Yes, I'm starting to regret placing the bounty on this question instead of creating my own. The thing I really want to understand is the much more concrete question of why slit interference patterns look so similar (both qualitatively and quantitatively) for the free-particle Schrödinger equation and for "the" wave equation, even though the differential equations are so mathematically different. May 24, 2017 at 2:15
• I see. That is an interesting question, but not one I've given a lot of thought to before. A line of inquiry which presents itself is considering the TDSE as the Newtonian approximation to the underlying relativistic, quantum, wave-equations, which have the symmetry between time and space that we see in "the" wave-equation. Certainly that fits with the usual heuristic picture in which $H = p^2/2m + V(x)$ plus the time derivative of $\Psi_0\exp(i(kx - \omega t))$ resulting in energy while spatial derivatives result in momentum (to within appropriate constants, of course). May 24, 2017 at 2:40
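A compact way to summarize the whole thread (my own addition, not one of the posted answers): plug the plane wave e^{i(kx−ωt)} into each equation and read off the dispersion relation. The classical wave equation forces ω = ck, which is non-dispersive, so any profile f(x ± ct) survives; the free Schrödinger equation forces ω = ℏk²/2m, which is dispersive, so only the individual plane waves are "wave-like".

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
k, w, c, hbar, m = sp.symbols('k omega c hbar m', positive=True)

psi = sp.exp(sp.I * (k*x - w*t))

# LHS - RHS of each equation, applied to the plane wave
wave_eq = sp.diff(psi, t, 2) - c**2 * sp.diff(psi, x, 2)
schr_eq = sp.I*hbar*sp.diff(psi, t) + hbar**2/(2*m) * sp.diff(psi, x, 2)  # U = 0

print(sp.solve(sp.expand(wave_eq / psi), w))   # [c*k]  (omega > 0 assumed)
print(sp.solve(sp.expand(schr_eq / psi), w))   # [hbar*k**2/(2*m)]
```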
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458803534507751, "perplexity": 356.03021134890344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00586.warc.gz"}
http://www.mathwarehouse.com/geometry/similar/triangles/geometric-mean.php
# Similar Right Triangles side lengths

Video, Interactive Applet and Examples

### Video Tutorial on Similar Right Triangles

The mean proportion (also called the geometric mean) is any value that can be expressed the way 'x' is in a proportion of the form $\frac{a}{x} = \frac{x}{b}$. Here 'x' is the geometric mean; we can solve for x by cross multiplying and going from there (more on that later). For example, in the proportion $\frac{2}{4} = \frac{4}{8}$, '4' is the geometric mean.

#### So what does this have to do with similar right triangles?

It turns out that when you drop an altitude (h in the picture below) from the right angle of a right triangle, the length of the altitude becomes a geometric mean. This occurs because you end up with similar triangles which have proportional sides, and the altitude is the long leg of one triangle (left side) and the short leg of the other similar triangle (right side in the picture below).

To better understand how the altitude of a right triangle acts as a mean proportion in similar triangles, look at the triangle below with sides a, b and c and altitude H.

Below is a picture of the many similar triangles created when you drop the altitude from the right angle of a right triangle, together with the proportions relating the segments AB, BD, BC and AC.

### Example Problem Types

Students usually have to solve two different core types of problems involving the geometric mean. The first is example problem 1 above: solving for an altitude using the parts of the large hypotenuse. The second is example problem 2: it involves the outer triangle's hypotenuse, a leg, and the side of an inner triangle.
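To see the geometric mean relations numerically (an added sketch, not from the original page): in a right triangle with legs a, b and hypotenuse c, the altitude to the hypotenuse is h = ab/c and it splits the hypotenuse into segments p = a²/c and q = b²/c. Then h is the geometric mean of p and q, and each leg is the geometric mean of its adjacent segment and the whole hypotenuse.

```python
import math

a, b = 3.0, 4.0
c = math.hypot(a, b)        # hypotenuse = 5
h = a * b / c               # altitude to the hypotenuse
p, q = a**2 / c, b**2 / c   # segments of the hypotenuse (p + q == c)

print(math.isclose(h, math.sqrt(p * q)))   # altitude is the geometric mean of p, q
print(math.isclose(a, math.sqrt(p * c)))   # each leg is a geometric mean too
print(math.isclose(b, math.sqrt(q * c)))
```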
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9434375762939453, "perplexity": 542.8857869979014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931009084.22/warc/CC-MAIN-20141125155649-00159-ip-10-235-23-156.ec2.internal.warc.gz"}
https://leanprover-community.github.io/mathlib_docs/topology/sheaves/sheaf_condition/equalizer_products.html
# mathlib documentation

topology.sheaves.sheaf_condition.equalizer_products

# The sheaf condition in terms of an equalizer of products #

Here we set up the machinery for the "usual" definition of the sheaf condition, e.g. as in https://stacks.math.columbia.edu/tag/0072 in terms of an equalizer diagram where the two objects are ∏ F.obj (U i) and ∏ F.obj (U i ⊓ U j).

def Top.presheaf.sheaf_condition_equalizer_products.pi_opens {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : C

The product of the sections of a presheaf over a family of open sets.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.pi_inters {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : C

The product of the sections of a presheaf over the pairwise intersections of a family of open sets.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.left_res {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : pi_opens F U ⟶ pi_inters F U

The morphism ∏ F.obj (U i) ⟶ ∏ F.obj (U i ⊓ U j) whose components are given by the restriction maps from U i to U i ⊓ U j.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.right_res {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : pi_opens F U ⟶ pi_inters F U

The morphism ∏ F.obj (U i) ⟶ ∏ F.obj (U i ⊓ U j) whose components are given by the restriction maps from U j to U i ⊓ U j.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.res {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : F.obj (op (supr U)) ⟶ pi_opens F U

The morphism F.obj (supr U) ⟶ ∏ F.obj (U i) whose components are given by the restriction maps from supr U to each U i.

Equations

@[simp] theorem Top.presheaf.sheaf_condition_equalizer_products.res_π {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) (i : ι) : res F U ≫ pi.π _ i = F.map (opens.le_supr U i).op

def Top.presheaf.sheaf_condition_equalizer_products.diagram {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : walking_parallel_pair ⥤ C

The equalizer diagram for the sheaf condition.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.fork {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : fork (left_res F U) (right_res F U)

The restriction map F.obj (supr U) ⟶ ∏ F.obj (U i) gives a cone over the equalizer diagram for the sheaf condition. The sheaf condition asserts this cone is a limit cone.

Equations

@[simp] theorem Top.presheaf.sheaf_condition_equalizer_products.fork_X {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : (fork F U).X = F.obj (op (supr U))

@[simp] theorem Top.presheaf.sheaf_condition_equalizer_products.fork_ι {C : Type u} {X : Top} (F : X.presheaf C) {ι : Type v} (U : ι → opens ↥X) : (fork F U).ι = res F U

def Top.presheaf.sheaf_condition_equalizer_products.pi_opens.iso_of_iso {C : Type u} {X : Top} {F : X.presheaf C} {ι : Type v} (U : ι → opens ↥X) {G : X.presheaf C} (α : F ≅ G) : pi_opens F U ≅ pi_opens G U

Isomorphic presheaves have isomorphic pi_opens for any cover U.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.pi_inters.iso_of_iso {C : Type u} {X : Top} {F : X.presheaf C} {ι : Type v} (U : ι → opens ↥X) {G : X.presheaf C} (α : F ≅ G) : pi_inters F U ≅ pi_inters G U

Isomorphic presheaves have isomorphic pi_inters for any cover U.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.diagram.iso_of_iso {C : Type u} {X : Top} {F : X.presheaf C} {ι : Type v} (U : ι → opens ↥X) {G : X.presheaf C} (α : F ≅ G) : diagram F U ≅ diagram G U

Isomorphic presheaves have isomorphic sheaf condition diagrams.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.fork.iso_of_iso {C : Type u} {X : Top} {F : X.presheaf C} {ι : Type v} (U : ι → opens ↥X) {G : X.presheaf C} (α : F ≅ G) : fork F U ≅ (cones.postcompose (diagram.iso_of_iso U α).inv).obj (fork G U)

If F G : presheaf C X are isomorphic presheaves, then fork F U, the canonical cone of the sheaf condition diagram for F, is isomorphic to fork G U postcomposed with the corresponding isomorphism between sheaf condition diagrams.
Equations

def Top.presheaf.sheaf_condition_equalizer_products.cover.of_open_embedding {X : Top} {ι : Type v} {V : Top} {j : V ⟶ X} (oe : open_embedding ⇑j) (𝒰 : ι → opens ↥V) : ι → opens ↥X

Push forward a cover along an open embedding.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.pi_opens.iso_of_open_embedding {C : Type u} {X : Top} {F : X.presheaf C} {ι : Type v} {V : Top} {j : V ⟶ X} (oe : open_embedding ⇑j) (𝒰 : ι → opens ↥V) : pi_opens (oe.is_open_map.functor.op ⋙ F) 𝒰 ≅ pi_opens F (cover.of_open_embedding oe 𝒰)

The isomorphism between pi_opens corresponding to an open embedding.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.pi_inters.iso_of_open_embedding {C : Type u} {X : Top} {F : X.presheaf C} {ι : Type v} {V : Top} {j : V ⟶ X} (oe : open_embedding ⇑j) (𝒰 : ι → opens ↥V) : pi_inters (oe.is_open_map.functor.op ⋙ F) 𝒰 ≅ pi_inters F (cover.of_open_embedding oe 𝒰)

The isomorphism between pi_inters corresponding to an open embedding.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.diagram.iso_of_open_embedding {C : Type u} {X : Top} {F : X.presheaf C} {ι : Type v} {V : Top} {j : V ⟶ X} (oe : open_embedding ⇑j) (𝒰 : ι → opens ↥V) : diagram (oe.is_open_map.functor.op ⋙ F) 𝒰 ≅ diagram F (cover.of_open_embedding oe 𝒰)

The isomorphism of sheaf condition diagrams corresponding to an open embedding.

Equations

def Top.presheaf.sheaf_condition_equalizer_products.fork.iso_of_open_embedding {C : Type u} {X : Top} {F : X.presheaf C} {ι : Type v} {V : Top} {j : V ⟶ X} (oe : open_embedding ⇑j) (𝒰 : ι → opens ↥V) :

If F : presheaf C X is a presheaf, and oe : U ⟶ X is an open embedding, then the sheaf condition fork for a cover 𝒰 in U for the composition of oe and F is isomorphic to the sheaf condition fork for oe '' 𝒰, precomposed with the isomorphism of indexing diagrams diagram.iso_of_open_embedding. We use this to show that the restriction of a sheaf along an open embedding is still a sheaf.

Equations
http://proceedings.mlr.press/v42/mack14.html
# Weighted Classification Cascades for Optimizing Discovery Significance in the HiggsML Challenge Lester Mackey, Jordan Bryan, Man Yue Mo Proceedings of the NIPS 2014 Workshop on High-energy Physics and Machine Learning, PMLR 42:129-134, 2015. #### Abstract We introduce a minorization-maximization approach to optimizing common measures of discovery significance in high energy physics. The approach alternates between solving a weighted binary classification problem and updating class weights in a simple, closed-form manner. Moreover, an argument based on convex duality shows that an improvement in weighted classification error on any round yields a commensurate improvement in discovery significance. We complement our derivation with experimental results from the 2014 Higgs boson machine learning challenge.
http://en.wikipedia.org/wiki/Generalized_coordinates
# Generalized coordinates

In analytical mechanics, specifically the study of the rigid body dynamics of multibody systems, the term generalized coordinates refers to the parameters that describe the configuration of the system relative to some reference configuration. These parameters must uniquely define the configuration of the system relative to the reference configuration.[1] The generalized velocities are the time derivatives of the generalized coordinates of the system.

An example of a generalized coordinate is the angle that locates a point moving on a circle. The adjective "generalized" distinguishes these parameters from the traditional use of the term coordinate to refer to Cartesian coordinates: for example, describing the location of the point on the circle using x and y coordinates.

Although there may be many choices for generalized coordinates for a physical system, parameters are usually selected which are convenient for the specification of the configuration of the system and which make the solution of its equations of motion easier. If these parameters are independent of one another, the number of independent generalized coordinates equals the number of degrees of freedom of the system.[2][3]

## Constraint equations

Generalized coordinates for one degree of freedom (of a particle moving in a complicated path). Instead of using all three Cartesian coordinates x, y, z (or other standard coordinate systems), only one is needed, and the choice of how to define the position is completely arbitrary. Four possibilities are shown. Top: distances along some fixed line; bottom left: an angle relative to some baseline; bottom right: the arc length of the path the particle takes. All are defined relative to a zero position, itself arbitrarily defined.

Generalized coordinates are usually selected to provide the minimum number of independent coordinates that define the configuration of a system, which simplifies the formulation of Lagrange's equations of motion. However, it can also occur that a useful set of generalized coordinates may be dependent, which means that they are related by one or more constraint equations.

### Holonomic constraints

Constraints that introduce relations between the generalized coordinates q_i, i=1,..., n and time, of the form $f_j(q_1,\ldots, q_n, t) = 0, \quad j=1,\ldots, k,$ are called holonomic.[1] These constraint equations define a manifold in the space of generalized coordinates q_i, i=1,...,n, known as the configuration manifold of the system. The number of degrees of freedom of the system is d = n − k, the number of generalized coordinates minus the number of constraints.[4]:260

It can be advantageous to choose independent generalized coordinates, as is done in Lagrangian mechanics, because this eliminates the need for constraint equations. However, in some situations it is not possible to identify an unconstrained set. For example, when dealing with nonholonomic constraints, or when trying to find the force due to any constraint (holonomic or not), dependent generalized coordinates must be employed. Sometimes independent generalized coordinates are called internal coordinates because they are mutually independent, otherwise unconstrained, and together give the position of the system.

Top: one degree of freedom; bottom: two degrees of freedom; left: an open curve F (parameterized by t) and surface F; right: a closed curve C and closed surface S. The equations shown are the constraint equations.
Generalized coordinates are chosen and defined with respect to these curves (one per degree of freedom), and they simplify the analysis, since even complicated curves are described by the minimum number of coordinates required.

### Non-holonomic constraints

A mechanical system can involve constraints on both the generalized coordinates and their derivatives. Constraints of this type are known as non-holonomic, and have the form $g_j(q_1,\ldots, q_n, \dot{q}_1,\ldots, \dot{q}_n, t) = 0, \quad j=1,\ldots, k.$ An example of a non-holonomic constraint is a rolling wheel or knife-edge that constrains the direction of the velocity vector.

## Example: Simple pendulum

Dynamic model of a simple pendulum.

The relationship between the use of generalized coordinates and Cartesian coordinates to characterize the movement of a mechanical system can be illustrated by considering the constrained dynamics of a simple pendulum.[5][6]

### Coordinates

A simple pendulum consists of a mass m hanging from a pivot point so that it is constrained to move on a circle of radius L. The position of the mass is defined by the coordinate vector r = (x, y), measured in the plane of the circle such that y is in the vertical direction. The coordinates x and y are related by the equation of the circle $f(x, y) = x^2+y^2 - L^2=0,$ which constrains the movement of the mass. This equation also provides a constraint on the velocity components, $\dot{f}(x, y)=2x\dot{x} + 2y\dot{y} = 0.$

Now introduce the parameter θ that defines the angular position of the mass from the vertical direction. It can be used to define the coordinates x and y, such that $\mathbf{r}=(x, y) = (L\sin\theta, -L\cos\theta).$ The use of θ to define the configuration of this system avoids the constraint provided by the equation of the circle.

### Virtual work

Notice that the force of gravity acting on the mass m is formulated in the usual Cartesian coordinates, $\mathbf{F}=(0,-mg),$ where g is the acceleration of gravity. The virtual work of gravity on the mass m as it follows the trajectory r is given by $\delta W = \mathbf{F}\cdot\delta \mathbf{r}.$ The variation δr can be computed in terms of the coordinates x and y, or in terms of the parameter θ: $\delta \mathbf{r} =(\delta x, \delta y) = (L\cos\theta, L\sin\theta)\delta\theta.$ Thus, the virtual work is given by $\delta W = -mg\delta y = -mgL\sin\theta\delta\theta.$ Notice that the coefficient of δy is the y-component of the applied force. In the same way, the coefficient of δθ is known as the generalized force along the generalized coordinate θ, given by $F_{\theta} = -mgL\sin\theta.$

### Kinetic energy

To complete the analysis, consider the kinetic energy T of the mass, using the velocity $\mathbf{v}=(\dot{x}, \dot{y}) = (L\cos\theta, L\sin\theta)\dot{\theta},$ so $T= \frac{1}{2} m\mathbf{v}\cdot\mathbf{v} = \frac{1}{2} m (\dot{x}^2+\dot{y}^2) = \frac{1}{2} m L^2\dot{\theta}^2.$

### Lagrange's equations

Lagrange's equations for the pendulum in terms of the coordinates x and y are given by $\frac{d}{dt}\frac{\partial T}{\partial \dot{x}} - \frac{\partial T}{\partial x} = F_{x} + \lambda \frac{\partial f}{\partial x},\quad \frac{d}{dt}\frac{\partial T}{\partial \dot{y}} - \frac{\partial T}{\partial y} = F_{y} + \lambda \frac{\partial f}{\partial y}.$ This yields the three equations $m\ddot{x} = \lambda(2x),\quad m\ddot{y} = -mg + \lambda(2y),\quad x^2+y^2 - L^2=0,$ in the three unknowns x, y and λ.
Using the parameter θ, Lagrange's equations take the form $\frac{d}{dt}\frac{\partial T}{\partial \dot{\theta}} - \frac{\partial T}{\partial \theta} = F_{\theta},$ which becomes $mL^2\ddot{\theta} = -mgL\sin\theta,$ or $\ddot{\theta} + \frac{g}{L}\sin\theta=0.$ This formulation yields one equation because there is a single parameter and no constraint equation. This shows that the parameter θ is a generalized coordinate that can be used in the same way as the Cartesian coordinates x and y to analyze the pendulum.

## Example: Double pendulum

A double pendulum

The benefits of generalized coordinates become apparent with the analysis of a double pendulum. For the two masses m_i, i=1, 2, let r_i=(x_i, y_i), i=1, 2 define their two trajectories. These vectors satisfy the two constraint equations, $f_1 (x_1, y_1, x_2, y_2) = \mathbf{r}_1\cdot \mathbf{r}_1 - L_1^2 = 0, \quad f_2 (x_1, y_1, x_2, y_2) = (\mathbf{r}_2-\mathbf{r}_1) \cdot (\mathbf{r}_2-\mathbf{r}_1) - L_2^2 = 0.$ The formulation of Lagrange's equations for this system yields six equations in the four Cartesian coordinates x_i, y_i, i=1, 2 and the two Lagrange multipliers λ_i, i=1, 2 that arise from the two constraint equations.

### Coordinates

Now introduce the generalized coordinates θ_i, i=1,2 that define the angular position of each mass of the double pendulum from the vertical direction. In this case, we have $\mathbf{r}_1 = (L_1\sin\theta_1, -L_1\cos\theta_1), \quad \mathbf{r}_2 = (L_1\sin\theta_1, -L_1\cos\theta_1) + (L_2\sin\theta_2, -L_2\cos\theta_2).$ The force of gravity acting on the masses is given by $\mathbf{F}_1=(0,-m_1 g),\quad \mathbf{F}_2=(0,-m_2 g),$ where g is the acceleration of gravity. Therefore, the virtual work of gravity on the two masses as they follow the trajectories r_i, i=1,2 is given by $\delta W = \mathbf{F}_1\cdot\delta \mathbf{r}_1 + \mathbf{F}_2\cdot\delta \mathbf{r}_2.$ The variations δr_i, i=1, 2 can be computed to be $\delta \mathbf{r}_1 = (L_1\cos\theta_1, L_1\sin\theta_1)\delta\theta_1, \quad \delta \mathbf{r}_2 = (L_1\cos\theta_1, L_1\sin\theta_1)\delta\theta_1 +(L_2\cos\theta_2, L_2\sin\theta_2)\delta\theta_2.$

### Virtual work

Thus, the virtual work is given by $\delta W = -(m_1+m_2)gL_1\sin\theta_1\delta\theta_1 - m_2gL_2\sin\theta_2\delta\theta_2,$ and the generalized forces are $F_{\theta_1} = -(m_1+m_2)gL_1\sin\theta_1,\quad F_{\theta_2} = -m_2gL_2\sin\theta_2.$

### Kinetic energy

Compute the kinetic energy of this system to be $T= \frac{1}{2}m_1 \mathbf{v}_1\cdot\mathbf{v}_1 + \frac{1}{2}m_2 \mathbf{v}_2\cdot\mathbf{v}_2 = \frac{1}{2}(m_1+m_2)L_1^2\dot{\theta}_1^2 + \frac{1}{2}m_2L_2^2\dot{\theta}_2^2 + m_2L_1L_2 \cos(\theta_2-\theta_1)\dot{\theta}_1\dot{\theta}_2.$

### Lagrange's equations

Lagrange's equations yield two equations in the unknown generalized coordinates θ_i, i=1, 2, given by[7] $(m_1+m_2)L_1^2\ddot{\theta}_1+m_2L_1L_2\ddot{\theta}_2\cos(\theta_2-\theta_1) - m_2L_1L_2\dot{\theta}_2^2\sin(\theta_2-\theta_1) = -(m_1+m_2)gL_1\sin\theta_1,$ and $m_2L_2^2\ddot{\theta}_2+m_2L_1L_2\ddot{\theta}_1\cos(\theta_2-\theta_1) + m_2L_1L_2\dot{\theta}_1^2\sin(\theta_2-\theta_1)=-m_2gL_2\sin\theta_2.$ The use of the generalized coordinates θ_i, i=1, 2 provides an alternative to the Cartesian formulation of the dynamics of the double pendulum.
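Because the single coordinate θ reduces the constrained simple-pendulum problem to one ordinary differential equation, it can be integrated directly. The following is a minimal Python sketch, assuming arbitrary values for g, L and the initial conditions (all of which are illustrative choices, not taken from the article):

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0  # SI units; example values

def pendulum(t, y):
    theta, omega = y
    # theta'' = -(g/L) sin(theta), written as a first-order system
    return [omega, -(g / L) * np.sin(theta)]

# release from rest at theta = 0.5 rad and integrate for 10 s
sol = solve_ivp(pendulum, (0.0, 10.0), [0.5, 0.0], max_step=0.01)
print(sol.y[0, -1])  # angle at t = 10 s
```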
## Generalized coordinates and virtual work

The principle of virtual work states that if a system is in static equilibrium, the virtual work of the applied forces is zero for all virtual movements of the system from this state, that is, δW=0 for any variation δr.[4] When formulated in terms of generalized coordinates, this is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is F_i=0. Let forces F_j, j=1, ..., m, be applied to points of the system with Cartesian coordinates r_j, j=1,..., m; then the virtual work generated by a virtual displacement from the equilibrium position is given by $\delta W = \sum_{j=1}^m \mathbf{F}_j\cdot \delta\mathbf{r}_j,$ where δr_j, j=1, ..., m denote the virtual displacements of each point in the body. Now assume that each δr_j depends on the generalized coordinates q_i, i=1, ..., n; then $\delta \mathbf{r}_j = \frac{\partial \mathbf{r}_j}{\partial q_1} \delta{q}_1 + \ldots + \frac{\partial \mathbf{r}_j}{\partial q_n} \delta{q}_n,$ and $\delta W = \left(\sum_{j=1}^m \mathbf{F}_j\cdot \frac{\partial \mathbf{r}_j}{\partial q_1}\right) \delta{q}_1 + \ldots + \left(\sum_{j=1}^m \mathbf{F}_j\cdot \frac{\partial \mathbf{r}_j}{\partial q_n}\right) \delta{q}_n.$ The n terms $F_i = \sum_{j=1}^m \mathbf{F}_j\cdot \frac{\partial \mathbf{r}_j}{\partial q_i},\quad i=1,\ldots, n,$ are the generalized forces acting on the system. Kane[8] shows that these generalized forces can also be formulated in terms of the ratio of time derivatives, $F_i = \sum_{j=1}^m \mathbf{F}_j\cdot \frac{\partial \mathbf{v}_j}{\partial \dot{q}_i},\quad i=1,\ldots, n,$ where v_j is the velocity of the point of application of the force F_j. In order for the virtual work to be zero for an arbitrary virtual displacement, each of the generalized forces must be zero, that is $\delta W = 0 \quad \Rightarrow \quad F_i =0, i=1,\ldots, n.$
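The generalized-force formula F_i = F · ∂r/∂q_i is straightforward to evaluate symbolically. As an illustrative sketch (reusing the simple pendulum from the earlier example; the symbol names are mine), the following sympy code recovers F_θ = −mgL sin θ:

```python
import sympy as sp

theta, m, g, L = sp.symbols('theta m g L', positive=True)

r = sp.Matrix([L * sp.sin(theta), -L * sp.cos(theta)])  # bob position
F = sp.Matrix([0, -m * g])                              # gravity, Cartesian form

# generalized force: F_theta = F . dr/dtheta
F_theta = F.dot(r.diff(theta))
print(sp.simplify(F_theta))  # -L*g*m*sin(theta)
```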
http://mathhelpforum.com/business-math/196357-proof-yield-curve-being-nondecreasing.html
## Proof of the yield curve being nondecreasing.

I need to show that the yield curve defined by $\bar{r}(t) = \frac{1}{t}\int_0^t r(s)\,ds$ is a nondecreasing function iff $P(\alpha t) \ge (P(t))^\alpha$ for all $0 \le \alpha \le 1$, $t \ge 0$, where $P(t)$ is defined as $P(t) = \exp\left\{-\int_0^t r(s)\,ds\right\}$ and $r(s)$ is the spot rate function. So basically the yield curve is the average of all the spot rates up to time t.

My attempt: By definition, a function f is nondecreasing if $f(b) \ge f(a)$ for all $b > a$. Now $P(t) = \exp\{-t\,\bar{r}(t)\}$, so $P(\alpha t) = \exp\{-\alpha t\,\bar{r}(\alpha t)\}$. Now I have to show that $P(\alpha t) \ge (P(t))^\alpha$ results in $\bar{r}(t)$ being nondecreasing. Doing some simplification I came up with the following inequality: $\int_0^{\alpha t} r(s)\,ds \ge \alpha \int_0^t r(s)\,ds$, i.e. $\bar{r}(\alpha t) \ge \bar{r}(t)$, so that will be nondecreasing as long as that is satisfied for all $\alpha t > t$. But that means that $\alpha \ge 1$, which is not the case?
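The claimed equivalence can at least be sanity-checked numerically. A minimal Python sketch, assuming an increasing spot-rate function of my own choosing: it checks that $P(\alpha t) \ge P(t)^\alpha$ and $\bar{r}(\alpha t) \le \bar{r}(t)$ hold together. Note that taking logs of $P(\alpha t) \ge P(t)^\alpha$ flips the inequality sign, which may be where the attempt above loses a minus sign.

```python
import numpy as np
from scipy.integrate import quad

r = lambda s: 0.01 + 0.002 * s          # increasing spot rate (assumed example)
I = lambda t: quad(r, 0.0, t)[0]        # integral of r from 0 to t
P = lambda t: np.exp(-I(t))             # discount factor
ybar = lambda t: I(t) / t               # yield curve (average rate)

t = 5.0
for a in (0.25, 0.5, 0.75):
    print(a, P(a * t) >= P(t) ** a, ybar(a * t) <= ybar(t))  # expect True True
```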
https://investsolver.com/implied-volatility-calculator/
# Implied Volatility Calculator

Implied volatility (IV) is a term which is very commonly used in the context of options trading, and it is a very important metric to consider for your trading strategies. If you trade options, IV can give you the market's best guess for volatility. This post walks you through building an Implied Volatility Calculator model in Excel.

The Black-Scholes model can be used to estimate implied volatility. Implied volatility can be estimated from the spot price, strike price, risk-free rate, time to maturity, dividend yield and the observed option price. To achieve this, given an actual option value, you have to iterate to find the volatility solution. There are various techniques available; we will use the Newton-Raphson method for calculating implied volatility in Excel.

### 6 Ways Implied Volatility Helps You Make The Right Trading Decisions

• When to Buy or Sell – Timing is key. When IV is high, you should consider selling an option; buy when IV is low
• Bullish or Bearish – IV decreases in bullish market conditions and rises in bearish market conditions
• Future Pricing – You can get a good directional idea of the future price of an option by computing its IV
• Highs and Lows – By the time an option expires, you may be able to estimate the stock's high and low through IV
• Calls and Puts – Before you trade a call or a put option, it is always better to check the IV for any pitfalls
• Track Fluctuations – IV can help you trace irregular movements in the price of the underlying asset

### Newton-Raphson Method and Implied Volatility

Implied volatility is distinctively different from historical volatility measures. The term comes from the fact that the volatility is implied by (backed out of) the market prices of options. Using the Black-Scholes option pricing model, we can calculate implied volatility by iteration. A great benefit of the Newton-Raphson method is that it converges quickly: the error of the approximation shrinks rapidly with each additional iteration.

The equation to calculate the implied volatility of an option is the root-finding problem $P_t(\sigma) - P_m = 0,$ where $P_m$ is the market price of the option which we are trying to fit, $P_t(\sigma)$ is the option price given by the Black-Scholes equation, and σ is the implied volatility.

Once Black-Scholes is structured, we use an iterative technique to solve for σ. This method works for options where the Black-Scholes model has a closed-form solution.

### How does IV work

An OTM (out of the money) call option has 10 days to expiration. The strike price is 55 and the current stock price is 50. The stock has a daily volatility of 0.03. The risk-free interest rate is assumed to be 0.02.

With t measured in days, σ the daily volatility, and the annual rate r converted with t/365, $d_1 = \frac{\ln(S/K) + r\,t/365 + \sigma^2 t/2}{\sigma\sqrt{t}} \approx -0.95145.$ Alternatively, d2 can be calculated as $d_2 = d_1 - \sigma\sqrt{t} \approx -1.04632.$ So, the European call value can be calculated as $C = S\,N(d_1) - K\,e^{-r\,t/365}\,N(d_2),$ where C = call value, S = spot price of 50, t = expiry period of 10 days, N = the standard normal cumulative distribution (mu = 0 and sigma = 1), K = option strike price of 55, r = risk-free rate of interest, assumed to be 0.02 in this example, and σ = daily stock volatility of 3% or 0.03.

In Excel speak: 50 * NORM.S.DIST(-0.95145,TRUE) - 55 * EXP((-0.02) * 10/365) * NORM.S.DIST(-1.04632,TRUE)

### How to Perform an Implied Volatility Calculation in Excel

The model spreadsheet is easy to use. Just key in the current stock price, strike price, risk-free rate, days to maturity, dividend yield (if any) and option price. The VBA computes implied volatility and back-solves the option price which you have entered.
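The post's solver lives in Excel VBA; as a language-neutral illustration, here is a hedged Python sketch of the same Newton-Raphson idea. The function names, starting guess and tolerance are my own choices, and the unit convention (annual rate r, daily volatility, time in days) follows the worked example above:

```python
import math
from scipy.stats import norm

def bs_call(S, K, r, t_days, sigma_daily):
    """European call via Black-Scholes, with daily volatility and time in days."""
    sqrt_t = math.sqrt(t_days)
    d1 = (math.log(S / K) + r * t_days / 365 + 0.5 * sigma_daily ** 2 * t_days) / (sigma_daily * sqrt_t)
    d2 = d1 - sigma_daily * sqrt_t
    return S * norm.cdf(d1) - K * math.exp(-r * t_days / 365) * norm.cdf(d2)

def implied_vol(price, S, K, r, t_days, sigma=0.05, tol=1e-8, max_iter=100):
    """Newton-Raphson: sigma <- sigma - f(sigma)/vega, with f(sigma) = BS(sigma) - price."""
    for _ in range(max_iter):
        diff = bs_call(S, K, r, t_days, sigma) - price
        if abs(diff) < tol:
            break
        sqrt_t = math.sqrt(t_days)
        d1 = (math.log(S / K) + r * t_days / 365 + 0.5 * sigma ** 2 * t_days) / (sigma * sqrt_t)
        vega = S * norm.pdf(d1) * sqrt_t  # dPrice/dSigma, standard closed form
        sigma -= diff / vega
    return sigma

price = bs_call(50, 55, 0.02, 10, 0.03)                # example values from the text
print(round(price, 4))                                 # ~0.42
print(round(implied_vol(price, 50, 55, 0.02, 10), 6))  # recovers ~0.03
```

A plain bisection on σ would also work here and is more robust when vega is tiny, at the cost of slower convergence.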
As a cross check, the option price calculated using the Black-Scholes equation must equal the option price in the input values.

[VBA listing: a function that prices a European option with the Black-Scholes equation and iterates on volatility.]

Lines 19 through 47 of the listing calculate implied volatility for the input values that you have entered. The method iterates till it finds a solution. Precision is defined by the constant DELTA_VOL, which is the acceptable error of the function's result. Since the function iterates to find the correct volatility, setting a higher value can speed execution in a worksheet containing 100 or more instances of this function; lower values increase the precision.

Example: If the exact implied volatility is 16%, setting "Precision" to .05 will cause the function to return a value between 15.95% and 16.05%.

Closing Notes:

• "European" call options have the same theoretical value as "American" calls. American puts are more difficult to value than European puts.
• The Black-Scholes formula gives you a good approximation of volatility.
• Since the Black-Scholes formula cannot be inverted in closed form to solve for volatility, this model finds the implied volatility iteratively. It is very similar to Excel's "Goal-Seek" function.
• Dividend Yield – the annualized dividend yield of the underlying stock, expressed in continuous-compounding terms.
http://link.springer.com/article/10.1007%2Fs00041-012-9245-2
# Sharp Weak Type Inequalities for the Dyadic Maximal Operator

Journal of Fourier Analysis and Applications, Volume 19, Issue 1, pp 115–139. Date: 17 Oct 2012.

## Abstract

We obtain sharp estimates for the localized distribution function of $\mathcal{M}\phi$, when $\phi$ belongs to $L^{p,\infty}$, where $\mathcal{M}$ is the dyadic maximal operator. We obtain these estimates given the $L^1$ and $L^q$ norms, $q<p$, and certain weak-$L^p$ conditions. In this way we refine the known weak (1,1) type inequality for the dyadic maximal operator. As a consequence we prove that inequality (0.1) is sharp, allowing every possible value for the $L^1$ and the $L^q$ norm for a fixed $q$ such that $1<q<p$, where $\|\cdot\|_{p,\infty}$ is the usual quasi-norm on $L^{p,\infty}$.

Communicated by Loukas Grafakos.
https://www.asknumbers.com/meters-to-cm.aspx
# Meters to Centimeters Conversion

## How many centimeters in a meter?

The meters to centimeters (m to cm) conversion factor is 100. To find out how many centimeters there are in some number of meters, multiply the meter value by 100 or use the converter.

1 Meter = 100 Centimeters

In the metric system, the prefix "centi" means "one hundredth of", which makes it easy to remember how to convert from meters to centimeters: a centimeter is one hundredth of a meter (1 meter = 100 cm).

There are 100 centimeters in a meter because there are 10 decimeters (dm) in a meter and 10 centimeters in each decimeter: 10 dm * 10 (cm in 1 dm) = 100 cm in a meter.

For example, to calculate how many centimeters are in 1/2 meter, multiply the meter value by the conversion factor: 1/2 m * 100 (cm in 1 m) = 50 cm in 1/2 meter.

Meter (metre in SI spelling) is the metric system base length unit, defined as the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second. There are 3.28084 feet or 39.3701 inches in a meter, and 1 kilometer is 1000 meters. The abbreviation for meter is "m".

Centimeter (centimetre in SI spelling) is a commonly used metric length unit. There are 10 millimeters in a centimeter and 2.54 cm in 1 inch. The abbreviation is "cm".

## Conversion Table

| Meter | Centimeter | Meter | Centimeter | Meter | Centimeter | Meter | Centimeter |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 m | 100 cm | 26 m | 2600 cm | 51 m | 5100 cm | 76 m | 7600 cm |
| 2 m | 200 cm | 27 m | 2700 cm | 52 m | 5200 cm | 77 m | 7700 cm |
| 3 m | 300 cm | 28 m | 2800 cm | 53 m | 5300 cm | 78 m | 7800 cm |
| 4 m | 400 cm | 29 m | 2900 cm | 54 m | 5400 cm | 79 m | 7900 cm |
| 5 m | 500 cm | 30 m | 3000 cm | 55 m | 5500 cm | 80 m | 8000 cm |
| 6 m | 600 cm | 31 m | 3100 cm | 56 m | 5600 cm | 81 m | 8100 cm |
| 7 m | 700 cm | 32 m | 3200 cm | 57 m | 5700 cm | 82 m | 8200 cm |
| 8 m | 800 cm | 33 m | 3300 cm | 58 m | 5800 cm | 83 m | 8300 cm |
| 9 m | 900 cm | 34 m | 3400 cm | 59 m | 5900 cm | 84 m | 8400 cm |
| 10 m | 1000 cm | 35 m | 3500 cm | 60 m | 6000 cm | 85 m | 8500 cm |
| 11 m | 1100 cm | 36 m | 3600 cm | 61 m | 6100 cm | 86 m | 8600 cm |
| 12 m | 1200 cm | 37 m | 3700 cm | 62 m | 6200 cm | 87 m | 8700 cm |
| 13 m | 1300 cm | 38 m | 3800 cm | 63 m | 6300 cm | 88 m | 8800 cm |
| 14 m | 1400 cm | 39 m | 3900 cm | 64 m | 6400 cm | 89 m | 8900 cm |
| 15 m | 1500 cm | 40 m | 4000 cm | 65 m | 6500 cm | 90 m | 9000 cm |
| 16 m | 1600 cm | 41 m | 4100 cm | 66 m | 6600 cm | 100 m | 10000 cm |
| 17 m | 1700 cm | 42 m | 4200 cm | 67 m | 6700 cm | 125 m | 12500 cm |
| 18 m | 1800 cm | 43 m | 4300 cm | 68 m | 6800 cm | 150 m | 15000 cm |
| 19 m | 1900 cm | 44 m | 4400 cm | 69 m | 6900 cm | 175 m | 17500 cm |
| 20 m | 2000 cm | 45 m | 4500 cm | 70 m | 7000 cm | 200 m | 20000 cm |
| 21 m | 2100 cm | 46 m | 4600 cm | 71 m | 7100 cm | 250 m | 25000 cm |
| 22 m | 2200 cm | 47 m | 4700 cm | 72 m | 7200 cm | 300 m | 30000 cm |
| 23 m | 2300 cm | 48 m | 4800 cm | 73 m | 7300 cm | 500 m | 50000 cm |
| 24 m | 2400 cm | 49 m | 4900 cm | 74 m | 7400 cm | 750 m | 75000 cm |
| 25 m | 2500 cm | 50 m | 5000 cm | 75 m | 7500 cm | 1000 m | 100000 cm |
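The rule is a single multiplication, so a converter is a one-liner; here is a tiny illustrative Python version (the function name is mine):

```python
def meters_to_cm(meters):
    return meters * 100  # 1 m = 100 cm

print(meters_to_cm(0.5))  # 50.0
print(meters_to_cm(17))   # 1700
```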
https://www.physicsforums.com/threads/maximum-speed.822298/
# Maximum speed

1. Jul 7, 2015

### Mary1910

Prove that the maximum speed (Vmax) of a mass on a spring is given by 2πfA.

I have no idea how to solve this problem. I have not taken calculus yet, and I am really confused on how to come up with a solution for this. Any help would be appreciated. Thanks

2. Jul 7, 2015

### Dr. Courtney

With conservation of energy, you can solve it without Calculus. Compute max potential energy. Set equal to max kinetic energy. Solve for v.

3. Jul 7, 2015

### haruspex

As Dr. Courtney says, but first you need to quote the general equation for the motion.

4. Jul 7, 2015

### Mary1910

Ok I'm still lost. Should I be using E = ½mv^2 + ½kx^2 for the general equation for the motion?

5. Jul 7, 2015

### haruspex

No, that's just conservation of energy in a spring. What's the equation of simple harmonic motion? Or, you can use the above equation if you know the relationship between f, k and m for a spring.

6. Jul 7, 2015

### Mary1910

Hooke's Law: F = -kx

7. Jul 7, 2015

### SammyS (Staff Emeritus)

The general equation of motion should have a sine or cosine in it.
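For reference, the hint about a sine in the equation of motion can be checked symbolically. This Python/sympy sketch assumes the standard SHM form x(t) = A sin(2πft), which is an assumption of the sketch rather than something stated in the thread:

```python
import sympy as sp

t, A, f = sp.symbols('t A f', positive=True)
x = A * sp.sin(2 * sp.pi * f * t)  # assumed SHM displacement
v = sp.diff(x, t)                  # velocity
print(v)                           # 2*pi*A*f*cos(2*pi*f*t)
# since |cos| <= 1, the speed peaks when the cosine equals 1: v_max = 2*pi*f*A
```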
https://mospace.umsystem.edu/xmlui/handle/10355/251/browse?value=2010+Summer&type=semester
• #### Measures on Hilbert spaces and applications to hydrodynamics

(University of Missouri--Columbia, 2010) Homogeneous and isotropic statistical solutions of the Navier-Stokes equations are produced. These are shown to be approximated by Galerkin statistical solutions on finite dimensional subspaces. Homogeneous and isotropic ...

• #### Minimal resolutions for a class of Gorenstein determinantal ideals

(University of Missouri--Columbia, 2010) Let $X = \{x_{ij}\}_{m\times n}$ be a matrix with entries in a noetherian commutative ring R. $I_t(X)$ denotes the determinantal ideal generated by the t × t minors of X. The ideals are called generic if ...

• #### On the periodicity of the first Betti number of the semigroup ring under translations

(University of Missouri--Columbia, 2010) Any curve C in any dimension can be described by a parameterization. In particular in the plane, that is dimension 2, the coordinates x and y are both given as a function of a third variable t, called parameter: x = x(t), ...
https://openreview.net/forum?id=r1gixp4FPH
## Accelerating SGD with momentum for over-parameterized learning

• TL;DR: This work proves the non-acceleration of Nesterov SGD with any hyper-parameters, and proposes a new algorithm which provably accelerates SGD in the over-parameterized setting.
• Abstract: Nesterov SGD is widely used for training modern neural networks and other machine learning models. Yet, its advantages over SGD have not been theoretically clarified. Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleration over ordinary SGD. Furthermore, Nesterov SGD may diverge for step sizes that ensure convergence of ordinary SGD. This is in contrast to the classical results in the deterministic setting, where the same step size ensures accelerated convergence of Nesterov's method over optimal gradient descent. To address the non-acceleration issue, we introduce a compensation term to Nesterov SGD. The resulting algorithm, which we call MaSS, converges for the same step sizes as SGD. We prove that MaSS obtains an accelerated convergence rate over SGD for any mini-batch size in the linear setting. For full batch, the convergence rate of MaSS matches the well-known accelerated rate of Nesterov's method. We also analyze the practically important question of the dependence of the convergence rate and optimal hyper-parameters on the mini-batch size, demonstrating three distinct regimes: linear scaling, diminishing returns and saturation. Experimental evaluation of MaSS for several standard architectures of deep networks, including ResNet and convolutional networks, shows improved performance over SGD, Nesterov SGD and Adam.
• Code: https://github.com/ts66395/MaSS
• Keywords: SGD, acceleration, momentum, stochastic, over-parameterized, Nesterov
https://phys.libretexts.org/TextBooks_and_TextMaps/University_Physics/Book%3A_University_Physics_(OpenStax)/Map%3A_University_Physics_II_-_Thermodynamics%2C_Electricity%2C_and_Magnetism_(OpenStax)/6%3A_Gauss's_Law/6.4%3A_Conductors_in_Electrostatic_Equilibrium
# 6.4: Conductors in Electrostatic Equilibrium

So far, we have generally been working with charges occupying a volume within an insulator. We now study what happens when free charges are placed on a conductor. In the presence of an electric field (generally an external one), the free charge in a conductor redistributes and very quickly reaches electrostatic equilibrium. The resulting charge distribution and its electric field have many interesting properties, which we can investigate with the help of Gauss's law and the concept of electric potential.

# The Electric Field inside a Conductor Vanishes

If an electric field is present inside a conductor, it exerts forces on the free electrons (also called conduction electrons), which are electrons in the material that are not bound to an atom. These free electrons then accelerate. However, moving charges by definition means nonstatic conditions, contrary to our assumption. Therefore, when electrostatic equilibrium is reached, the charge is distributed in such a way that the electric field inside the conductor vanishes.

If you place a piece of a metal near a positive charge, the free electrons in the metal are attracted to the external positive charge and migrate freely toward that region. The region the electrons move to then has an excess of electrons over the protons in the atoms, and the region from which the electrons have migrated has more protons than electrons. Consequently, the metal develops a negative region near the charge and a positive region at the far end (Figure). As we saw in the preceding chapter, this separation of equal magnitude and opposite type of electric charge is called polarization. If you remove the external charge, the electrons migrate back and neutralize the positive region.

Figure $$\PageIndex{1}$$: Polarization of a metallic sphere by an external point charge $$+q$$. The near side of the metal has an opposite surface charge compared to the far side of the metal. The sphere is said to be polarized. When you remove the external charge, the polarization of the metal also disappears.

The polarization of the metal happens only in the presence of external charges. You can think of this in terms of electric fields. The external charge creates an external electric field. When the metal is placed in the region of this electric field, the electrons and protons of the metal experience electric forces due to this external electric field, but only the conduction electrons are free to move in the metal over macroscopic distances. The movement of the conduction electrons leads to the polarization, which creates an induced electric field in addition to the external electric field (Figure). The net electric field is a vector sum of the fields of $$+q$$ and the surface charge densities $$-\sigma_A$$ and $$+\sigma_B$$. This means that the net field inside the conductor is different from the field outside the conductor.

Figure $$\PageIndex{2}$$: In the presence of an external charge q, the charges in a metal redistribute. The electric field at any point has three contributions, from $$+q$$ and the induced charges $$-\sigma_A$$ and $$+\sigma_B$$. Note that the surface charge distribution will not be uniform in this case.
The redistribution of charges is such that the sum of the three contributions at any point P inside the conductor is $\vec{E}_p = \vec{E}_q + \vec{E}_B + \vec{E}_A = \vec{0}.$

Now, thanks to Gauss's law, we know that there is no net charge enclosed by a Gaussian surface that is solely within the volume of the conductor at equilibrium. That is, $$q_{enc} = 0$$ and hence $\vec{E}_{net} = \vec{0} \space (at \space points \space inside \space a \space conductor).$

# Charge on a Conductor

An interesting property of a conductor in static equilibrium is that extra charges on the conductor end up on the outer surface of the conductor, regardless of where they originate. Figure illustrates a system in which we bring an external positive charge inside the cavity of a metal and then touch it to the inside surface. Initially, the inside surface of the cavity is negatively charged and the outside surface of the conductor is positively charged. When we touch the inside surface of the cavity, the induced charge is neutralized, leaving the outside surface and the whole metal charged with a net positive charge.

Figure $$\PageIndex{3}$$: Electric charges on a conductor migrate to the outside surface no matter where you put them initially.

To see why this happens, note that the Gaussian surface in Figure (the dashed line) follows the contour of the actual surface of the conductor and is located an infinitesimal distance within it. Since $$E = 0$$ everywhere inside a conductor, $\oint \vec{E} \cdot \hat{n} dA = 0.$ Thus, from Gauss' law, there is no net charge inside the Gaussian surface. But the Gaussian surface lies just below the actual surface of the conductor; consequently, there is no net charge inside the conductor. Any excess charge must lie on its surface.

Figure $$\PageIndex{4}$$: The dashed line represents a Gaussian surface that is just beneath the actual surface of the conductor.

This particular property of conductors is the basis for an extremely accurate method developed by Plimpton and Lawton in 1936 to verify Gauss's law and, correspondingly, Coulomb's law. A sketch of their apparatus is shown in Figure. Two spherical shells are connected to one another through an electrometer E, a device that can detect a very slight amount of charge flowing from one shell to the other. When switch S is thrown to the left, charge is placed on the outer shell by the battery B. Will charge flow through the electrometer to the inner shell? No. Doing so would mean a violation of Gauss's law. Plimpton and Lawton did not detect any flow and, knowing the sensitivity of their electrometer, concluded that if the radial dependence in Coulomb's law were $$1/r^{2+\delta}$$, $$\delta$$ would be less than $$2 \times 10^{-9}$$ [1]. More recent measurements place $$\delta$$ at less than $$3 \times 10^{-16}$$ [2], a number so small that the validity of Coulomb's law seems indisputable.

Figure $$\PageIndex{5}$$: A representation of the apparatus used by Plimpton and Lawton. Any transfer of charge between the spheres is detected by the electrometer E.

# The Electric Field at the Surface of a Conductor

If the electric field had a component parallel to the surface of a conductor, free charges on the surface would move, a situation contrary to the assumption of electrostatic equilibrium. Therefore, the electric field is always perpendicular to the surface of a conductor.
At any point just above the surface of a conductor, the surface charge density $$\sigma$$ and the magnitude of the electric field E are related by $E = \dfrac{\sigma}{\epsilon_0}.$

To see this, consider an infinitesimally small Gaussian cylinder that surrounds a point on the surface of the conductor, as in Figure. The cylinder has one end face inside and one end face outside the surface. The height and cross-sectional area of the cylinder are $$\delta$$ and $$\Delta A$$, respectively. The cylinder's sides are perpendicular to the surface of the conductor, and its end faces are parallel to the surface. Because the cylinder is infinitesimally small, the charge density $$\sigma$$ is essentially constant over the surface enclosed, so the total charge inside the Gaussian cylinder is $$\sigma\Delta A$$. Now E is perpendicular to the surface of the conductor outside the conductor and vanishes within it, because otherwise the charges would accelerate, and we would not be in equilibrium. Electric flux therefore crosses only the outer end face of the Gaussian surface and may be written as $$E\Delta A$$, since the cylinder is assumed to be small enough that E is approximately constant over that area. From Gauss' law, $E\Delta A = \dfrac{\sigma \Delta A}{\epsilon_0}.$ Thus $E = \dfrac{\sigma}{\epsilon_0}.$

Figure $$\PageIndex{6}$$: An infinitesimally small cylindrical Gaussian surface surrounds point P, which is on the surface of the conductor. The field $$\vec{E}$$ is perpendicular to the surface of the conductor outside the conductor and vanishes within it.

Example $$\PageIndex{1}$$: Electric Field of a Conducting Plate

The infinite conducting plate in Figure has a uniform surface charge density $$\sigma$$. Use Gauss' law to find the electric field outside the plate. Compare this result with that previously calculated directly.

Figure $$\PageIndex{7}$$: A side view of an infinite conducting plate and Gaussian cylinder with cross-sectional area A.

Strategy For this case, we use a cylindrical Gaussian surface, a side view of which is shown.

Solution The flux calculation is similar to that for an infinite sheet of charge from the previous chapter with one major exception: The left face of the Gaussian surface is inside the conductor where $$\vec{E} = \vec{0}$$, so the total flux through the Gaussian surface is EA rather than 2EA. Then from Gauss' law, $EA = \dfrac{\sigma A}{\epsilon_0}$ and the electric field outside the plate is $E = \dfrac{\sigma}{\epsilon_0}.$

Significance This result is in agreement with the result from the previous section, and consistent with the rule stated above.

Example $$\PageIndex{2}$$: Electric Field between Oppositely Charged Parallel Plates

Two large conducting plates carry equal and opposite charges, with a surface charge density $$\sigma$$ of magnitude $$6.81 \times 10^{-7} C/m^2$$, as shown in Figure. The separation between the plates is $$l = 6.50 \space mm$$. What is the electric field between the plates?

Figure $$\PageIndex{8}$$: The electric field between oppositely charged parallel plates. A test charge is released at the positive plate.

Strategy Note that the electric field at the surface of one plate only depends on the charge on that plate. Thus, apply $$E = \sigma /\epsilon_0$$ with the given values.
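As a quick numerical check of this strategy before the worked solution (a sketch, using the rounded constants from the text):

```python
eps0 = 8.85e-12   # vacuum permittivity, C^2/(N m^2)
sigma = 6.81e-7   # surface charge density, C/m^2

E = sigma / eps0
print(E)          # ~7.69e4 N/C, matching the solution below
```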
Solution The electric field is directed from the positive to the negative plate, as shown in the figure, and its magnitude is given by $E = \dfrac{\sigma}{\epsilon_0} = \dfrac{6.81 \times 10^{-7} C/m^2}{8.85 \times 10^{-12} C^2/Nm^2} = 7.69 \times 10^4 N/C.$

Significance This formula is applicable to more than just a plate. Furthermore, two-plate systems will be important later.

Example $$\PageIndex{3}$$: A Conducting Sphere

The isolated conducting sphere (Figure) has a radius R and an excess charge q. What is the electric field both inside and outside the sphere?

Figure $$\PageIndex{9}$$: An isolated conducting sphere.

Strategy The sphere is isolated, so its surface charge distribution and the electric field of that distribution are spherically symmetrical. We can therefore represent the field as $$\vec{E} = E(r) \hat{r}$$. To calculate E(r), we apply Gauss's law over a closed spherical surface S of radius r that is concentric with the conducting sphere.

Solution Since r is constant and $$\hat{n} = \hat{r}$$ on the sphere, $\oint_S \vec{E} \cdot \hat{n} dA = E(r) \oint_S dA = E(r) 4\pi r^2.$ For $$r < R$$, S is within the conductor, so $$q_{enc} = 0$$, and Gauss's law gives $E(r) = 0,$ as expected inside a conductor. If $$r > R$$, S encloses the conductor so $$q_{enc} = q$$. From Gauss's law, $E(r) 4\pi r^2 = \dfrac{q}{\epsilon_0}.$ The electric field of the sphere may therefore be written as $\vec{E} = \vec{0} \space (r < R),$ $\vec{E} = \frac{1}{4\pi \epsilon_0} \frac{q}{r^2} \hat{r} \space (r \geq R).$

Significance Notice that in the region $$r \geq R$$, the electric field due to a charge q placed on an isolated conducting sphere of radius R is identical to the electric field of a point charge q located at the center of the sphere. The difference between the charged metal and a point charge occurs only at the space points inside the conductor. For a point charge placed at the center of the sphere, the electric field is not zero at points of space occupied by the sphere, but a conductor with the same amount of charge has a zero electric field at those points (Figure). However, there is no distinction at the outside points in space where $$r > R$$, and we can replace the isolated charged spherical conductor by a point charge at its center with impunity.

Figure $$\PageIndex{10}$$: Electric field of a positively charged metal sphere. The electric field inside is zero, and the electric field outside is the same as the electric field of a point charge at the center, although the charge on the metal sphere is at the surface.

Exercise $$\PageIndex{1}$$ How will the system above change if there are charged objects external to the sphere?

Solution If there are other charged objects around, then the charges on the surface of the sphere will not necessarily be spherically symmetrical; there will be more in certain directions than in other directions.

For a conductor with a cavity, if we put a charge $$+q$$ inside the cavity, then the charge separation takes place in the conductor, with $$-q$$ amount of charge on the inside surface and a $$+q$$ amount of charge at the outside surface (Figure(a)). For the same conductor with a charge $$+q$$ outside it, there is no excess charge on the inside surface; both the positive and negative induced charges reside on the outside surface (Figure(b)).

Figure $$\PageIndex{11}$$: (a) A charge inside a cavity in a metal.
The distribution of charges at the outer surface does not depend on how the charges are distributed at the inner surface, since the E-field inside the body of the metal is zero. The magnitude of the charge on the outer surface does depend on the magnitude of the charge inside, however. (b) A charge outside a conductor containing an inner cavity. The cavity remains free of charge. The polarization of charges on the conductor happens at the surface.

If a conductor has two cavities, one of them having a charge $$+q_a$$ inside it and the other a charge $$-q_b$$, the polarization of the conductor results in $$-q_a$$ on the inside surface of the cavity a, $$+q_b$$ on the inside surface of the cavity b, and $$q_a - q_b$$ on the outside surface (Figure). The charges on the surfaces may not be uniformly spread out; their spread depends upon the geometry. The only rule obeyed is that when equilibrium has been reached, the charge distribution in a conductor is such that the electric field of the charge distribution in the conductor cancels the electric field of the external charges at all space points inside the body of the conductor.

Figure $$\PageIndex{12}$$: The charges induced by two equal and opposite charges in two separate cavities of a conductor. If the net charge in the cavities is nonzero, the external surface becomes charged by the amount of the net charge.

## Contributors

Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
https://infoscience.epfl.ch/record/166015
# Spectral Properties and Linear Stability of Self-Similar Wave Maps

Journal article. We study co-rotational wave maps from (3+1)-Minkowski space to the three-sphere $S^3$. It is known that there exists a countable family $\{f_n\}$ of self-similar solutions. We investigate their stability under linear perturbations by operator-theoretic methods. To this end we study the spectra of the perturbation operators, prove well-posedness of the corresponding linear Cauchy problem and deduce a growth estimate for solutions. Finally, we study perturbations of the limiting solution which is obtained from $f_n$ by letting $n \to \infty$.
https://www.transtutors.com/questions/establish-the-lower-complexity-bound-stated-in-4-2-3-and-4-2-4-for-stochastic-optimi-6774300.htm
# Establish the lower complexity bound stated in (4.2.3) and (4.2.4) for stochastic optimization...

Establish the lower complexity bound stated in (4.2.3) and (4.2.4) for stochastic optimization methods applied to problem 4.2.1.
https://yutsumura.com/common-eigenvector-of-two-matrices-a-b-is-eigenvector-of-ab-and-ab/
# Common Eigenvector of Two Matrices $A, B$ is Eigenvector of $A+B$ and $AB$.

## Problem 382

Let $\lambda$ be an eigenvalue of $n\times n$ matrices $A$ and $B$ corresponding to the same eigenvector $\mathbf{x}$.

(a) Show that $2\lambda$ is an eigenvalue of $A+B$ corresponding to $\mathbf{x}$.

(b) Show that $\lambda^2$ is an eigenvalue of $AB$ corresponding to $\mathbf{x}$.

(The Ohio State University, Linear Algebra Final Exam Problem)

## Proof.

### (a) Show that $2\lambda$ is an eigenvalue of $A+B$ corresponding to $\mathbf{x}$.

Since $\lambda$ is an eigenvalue of $A$ and $B$, and $\mathbf{x}$ is a corresponding eigenvector, we have $A\mathbf{x}=\lambda \mathbf{x} \text{ and } B\mathbf{x}=\lambda \mathbf{x} \tag{*}.$

Then we compute \begin{align*} (A+B)\mathbf{x}&=A\mathbf{x}+B\mathbf{x}\\ &=\lambda \mathbf{x}+ \lambda \mathbf{x} && \text{by (*)}\\ &=2\lambda \mathbf{x}. \end{align*}

Since $\mathbf{x}$ is an eigenvector, it is a nonzero vector by definition. Hence from the equality $(A+B)\mathbf{x}=2\lambda \mathbf{x},$ we see that $2\lambda$ is an eigenvalue of the matrix $A+B$ and $\mathbf{x}$ is an associated eigenvector.

### (b) Show that $\lambda^2$ is an eigenvalue of $AB$ corresponding to $\mathbf{x}$.

We have \begin{align*} (AB)\mathbf{x}&=A(B\mathbf{x})\\ &=A(\lambda \mathbf{x}) && \text{by (*)}\\ &=\lambda (A\mathbf{x})\\ &=\lambda (\lambda \mathbf{x}) && \text{by (*)}\\ &=\lambda^2 \mathbf{x}. \end{align*}

Since $\mathbf{x}$ is a nonzero vector as it is an eigenvector, it follows from the equality $(AB)\mathbf{x}=\lambda^2 \mathbf{x}$ that $\lambda^2$ is an eigenvalue of the matrix $AB$ and $\mathbf{x}$ is a corresponding eigenvector.
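As a quick numerical sanity check of both parts (not part of the original exam solution), one can take any two matrices that share an eigenvector with a common eigenvalue; the $2\times 2$ matrices below are made-up examples with $\lambda=2$ and $\mathbf{x}=(1,1)^T$:

```python
import numpy as np

# Made-up 2x2 matrices sharing the eigenvector x = (1, 1)^T
# with the common eigenvalue lambda = 2.
A = np.array([[3.0, -1.0], [1.0, 1.0]])
B = np.array([[4.0, -2.0], [3.0, -1.0]])
x = np.array([1.0, 1.0])
lam = 2.0

assert np.allclose(A @ x, lam * x) and np.allclose(B @ x, lam * x)
assert np.allclose((A + B) @ x, 2 * lam * x)  # part (a): A+B has eigenvalue 2*lambda
assert np.allclose((A @ B) @ x, lam**2 * x)   # part (b): AB has eigenvalue lambda^2
print("both claims verified for this example")
```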
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9935261011123657, "perplexity": 222.02951332624977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815034.13/warc/CC-MAIN-20180224013638-20180224033638-00217.warc.gz"}
http://www.thefosterfamily.pwp.blueyonder.co.uk/latex/latex-examples.html
# Examples of the use of eethesis.cls

## Page-styles

Page-styles are the basic definitions of page layout. With page-styles, you can pretty much do anything you want (though it will not always look good!), switching between styles at any point in your documents. LaTeX has a number of styles already defined, including:

• empty – no page number, header or footer
• plain – a centred page number, but no header or footer

eethesis defines some more:

• romanpage – a page number is placed at the bottom right corner, in roman numerals. There is no header or footer
• generalpage – a page number is placed at the bottom right corner, in (normal) arabic numerals. There is no header or footer
• headedpage – a page number is placed at the bottom right corner, in arabic numerals. There is a simple header (the user can define the contents) but no footer
• eeplain – a centred page number, but no header or footer (i.e. the same as plain)

eeplain exists so that the plain style can be redefined as empty or romanpage, depending on whether the front-matter is listed. This is needed particularly when the table of contents etc. run over multiple pages, due to how the respective commands are defined. generalpage (default) or headedpage are for the main text, whilst front- and back-matter can be either empty or romanpage (independent of each other). eeplain is in case someone really wants plain: under NO circumstances should the user attempt to use plain, as we redefine it internally as required.

## Thesis Class Options

There are many options available to override the formatting of documents created using eethesis.cls. These are explained below. More information may also be found in the documentation for eethesis.cls (see the downloads page).

### Formatting Rules

• optimal – this is the default setting and takes the requirements of the University of Birmingham and modifies them according to our own preconceptions so that the document "looks good"(!)
• strict – this is our attempt at applying the requirements exactly. It should be noted that this is only our understanding of the requirements and not endorsed by the University in any way: use this at your own risk! :-)

The main differences are summarised in the following table.

| Optimal                                | Strict                                     |
|----------------------------------------|--------------------------------------------|
| Paragraphs separated by vertical space | Paragraphs not separated by vertical space |
| Paragraphs are not indented            | Paragraphs are indented                    |

The differences are very small and are really a question of personal preference. There are methods of achieving a mixture of the two styles – e.g., the noindent option (see later).

### Line Spacing

There are four options here:

1. optimalline – this is the default for both strict and optimal and is set at 1.3 times normal spacing;
2. double – this is twice the normal spacing;
3. single – this is ‘normal’ spacing – not so hot for embedding equations;
4. linehalf – this is the ‘minimum’ spacing of 1.5 often specified by those who have no passion for typography.

### Type of Formatting

There are three options in this category: phdthesis, fypreport and progressreport. phdthesis is for any thesis. fypreport is intended for dissertation-level reports (e.g. Final Reports of Final Year Projects), although it is appropriate for other reports during projects. progressreport is for the progress reports required at different stages throughout a Ph.D. or project. The main differences are the layout of the title-page and the inclusion of front-matter.

NOTE: one of these options MUST be specified in the \documentclass declaration or an error will be generated.
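For instance, combining the options above, the first line of a thesis might read as follows (the particular combination is just an illustration):

```latex
% A Ph.D. thesis using the optimal formatting rules and double line spacing.
% One of phdthesis/fypreport/progressreport is mandatory.
\documentclass[phdthesis,optimal,double]{eethesis}
```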
### Front- and Back-matter

There are two areas in which the front-matter (i.e. text such as abstracts and listings like the contents) can be controlled. The first, listfrontmatter, allows textual front-matter (e.g. abstract, dedication, acknowledgements) to appear in the table of contents, should you so desire it. The other area determines whether listings such as the contents actually appear in the document. The options hidetitle, hidetoc, hidelof and hidelot remove the title-page, ‘Contents’, ‘List of Figures’ and ‘List of Tables’ respectively from the document, whilst showlol includes the ‘List of Listings’. They are all independent of each other. These are provided for draft versions or for reports where a type of object (e.g. figures) may not be used. Obviously, if you use figures, tables, etc., the final document should include corresponding listings, in addition to the title-page and contents.

The back-matter is much simpler – there is only one option, listbackmatter, which determines whether the back-matter is listed in the contents.

There are also some commands relating to the front- and back-matter (see later).

### Compilation

There are two options in this category: preview is intended for use whilst the document is being written and is therefore faster than finalbuild. It achieves this by removing the front-matter completely, optimising graphics inclusion for speed and changing the formatting of various parts of the document. It is expected that this option will be used for proof-reading during the early stages only.

### Miscellaneous

• footnoteline – this places a line above any footnotes on the page;
• footnotenumbers – this changes the numbering of footnotes from a set of symbols to superscript numbers;
• footnotesperpage – this resets the footnote counter after every chapter instead of every page;
• noindent – this means the start of paragraphs are not indented, even in strict mode;
• nopackages – this prevents the inclusion of the ‘convenience’ packages, such as the AMS packages, and changes the mode of custom_shortcuts.

## Commands and Environments

### Title Page Commands

It was decided that, in the hope that this may be found useful to more people, we would parameterise as much as possible within the code, so that control of certain aspects of the document could be ‘safely’ passed to users (i.e. they would not need to modify the class file in any way). Some of the font selection and formatting was dealt with in this manner. The other main area was the information to be used on the title page, which is dealt with here. All of the commands listed below should appear in the preamble (i.e. before \begin{document}) of your document if used. These commands supplement the standard \author{}, \title{} and \date{} commands. Any of these items can be removed by including the appropriate command in the preamble, or in my_formatting, with no contents (e.g. \reporttype{}), if they are not required.

### Front- and Back-matter Commands

The front-matter commands are:

The \changeeeXXXname commands are best used in my_formatting.
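Before turning to the back-matter commands, here is a small preamble sketch for the title-page commands just described; the title, author and date strings are made-up examples, and \reporttype{} is given an empty argument to remove that item from the title page:

```latex
% In the preamble, i.e. before \begin{document}:
\title{A Study of Something Interesting}
\author{A. Student}
\date{September 2007}
\reporttype{}  % empty contents: this item will not appear on the title page
```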
The back-matter commands are:

• \eethesisbackmatter – this tells eethesis that the back-matter has started (required so that the page-style can be properly formatted);
• \extrabackmatter{heading} – this command can be used as many times as required, in order to insert other things (e.g., a list of mathematical symbols used) into the back-matter, in a similar manner to the \chapter{} command, i.e., followed by the required text (note the difference with the \extrafrontmatter{}{} command);
• \eethesisbib{your commands} – this command allows the user to control the name of the Bibliography and is required for compatibility purposes with babel; it is reflected in how it is listed in the Contents, when used with the listbackmatter option;
• \changeeebibname{new heading} – this allows the author to change the heading of the Bibliography (e.g. to ‘References’). \changeeebibname{} is compatible with the listbackmatter option (i.e. the change in name will appear in the table of contents, if so desired).

Most of the text must come within the following environment:

```latex
\begin{document}
...
\end{document}
```

However, the title, author and date (if required) for the titlepage must be set in the preamble (unless set globally in my_formatting), i.e. between:

```latex
\documentclass[phdthesis,strict]{eethesis}
...
% preamble
...
\begin{document}
```

The same is true for the additional frontmatter sections (i.e. abstract, acknowledgements, dedication and sections introduced with the \extrafrontmatter command). The skeleton.tex file gives an example of this.

The end of the main document will look similar to this:

```latex
% Main document
...
% Start back-matter
\eethesisbackmatter
\extrabackmatter{List of Symbols}
% Symbols listed and formatted as user desires
...
% Bibliography -- BibTeX version
\eethesisbib{\bibliography{myBibFile}}
\end{document}
```

If you do not use BibTeX, preferring to use the LaTeX environment thebibliography directly, the end looks like this:

```latex
...
% Bibliography -- non-BibTeX version
\eethesisbib{%
\begin{thebibliography}
% Some \bibitem entries
...
\end{thebibliography}} % Note second '}'
\end{document}
```

### Miscellaneous Commands

The eethesis class also defines a command to ensure compatibility with the chappg package. This package allows pages to be numbered by chapter. The command, \thesisuseschappg, must be included after the inclusion of the chappg package. The authors suggest the use of my_formatting for this task.

This raises another issue; namely, the use of \part{} divisions. Though the division of a thesis into parts may be rare, this is possible; the design decision with eethesis is that parts are normally included in the table of contents and the pages numbered sequentially. However, if chappg is used, no page number is given, though the part name still appears in the TOC. To achieve this behaviour, the command \eeaddcontentsline{ext}{type}{text} has been defined; it behaves like the standard \addcontentsline{ext}{type}{text} but hides the page number in the TOC listing.

Also included in my_formatting are the commands, provided by eethesis, that allow the user to customise the fonts etc. used in the document. This includes the fonts used in the listings (contents, etc.). It is our strongest recommendation that they are only used from within my_formatting. See later and my_formatting for more information. Of a similar category are the commands \eethesispageheader{} and \eeheaderfont{}.
These are also found in my_formatting and are related to the page style, headedpage, that has been provided in case (rudimentary) headers are desired. \eeheaderfont{} obviously sets the style of the header, whilst the content of the header is set by \eethesispageheader{}. More information can be found in my_formatting. More complex headers can be achieved using the fancyhdr package; however, care should be taken that any new page style meets the requirements imposed and is also consistent with the page styles we enforce for the front- and back-matter.

### A Simple Environment for Chapter-Quotations

Some people like the idea of including pretentious or pithy quotations at the start of each chapter of their document. Although there are already packages available for this task (e.g. epigraph), for various reasons we decided to create a new environment. The usage is as follows:

```latex
\begin{chapterquote}{Author}
The actual quotation goes here.
\end{chapterquote}
```

A simple example would be:

```latex
\chapter{How to Write Good Code\label{chpGoodCode}}
\begin{chapterquote}{Donald Knuth}
Beware of the above code, I have only proved that it works, not tested it.
\end{chapterquote}
```

## The custom_shortcuts Package

### Overview and Options

custom_shortcuts has an interesting relationship with eethesis. In some cases, custom_shortcuts depends on eethesis; in others, it is the opposite way around. For example, eethesis uses some abbreviations declared in custom_shortcuts, whilst some macros defined in custom_shortcuts are dependent on packages (the AMS packages, plus latexsym, mathrsfs and mathscinet) that may or may not be included in eethesis. This relationship resulted in a set of options to control the mode of operation of custom_shortcuts. The default is eethesis, which means that all macros are defined assuming normal (i.e. our default) use with eethesis. The second is standalone, which assumes the user is not using eethesis and therefore includes the relevant packages it expects from eethesis. The final option is noams, which assumes the AMS packages (i.e., all those listed above) are not loaded and therefore defines some commands differently.

### Cross-referencing Commands

LaTeX contains powerful cross-referencing facilities for use within documents, using the \label, \ref and \pageref commands. \label{tag} marks an ‘object’ (e.g. a figure or section of the document) with the label ‘tag’, \ref{tag} returns the number corresponding to ‘tag’ and \pageref{tag} returns the page number corresponding to ‘tag’. We have grouped together commands that make use of the \ref command to produce preformatted references for the various objects within a document that may be referenced. All multiple cross-referencing commands (i.e. those prefixed with m) also have an optional argument for modifying the separating characters. So, for example, we could use \meqnref[--]{eqnRef1}{eqnRef5} or even \meqnref[ and ]{eqnRef1}{eqnRef2} to refer to lists of equations.

### Useful Abbreviations

Abbreviations sometimes have rules as to their appearance; e.g., the spacing following 'e.g.' is not the double space that would follow the full-stop at the end of a sentence.

• \beng – B.Eng. for Bachelor of Engineering;
• \meng – M.Eng. for Master of Engineering;
• \msc – M.Sc. for Master of Science;
• \mphil – M.Phil. for Master of Philosophy;
• \phd – Ph.D. for Doctor of Philosophy.
• \ie – i.e. for any document;
• \eg – e.g. for any document;
• \nb – N.B. for any document;
• \etc – etc. for any document;
• \ca – ca. for any document referring to dates (e.g. ‘ca. 1900’);
• \cf – cf. for the bibliography/references section of any document (‘compare’);
• \etal – et al. for the bibliography/references section of any document (‘and others’);
• \opcit – op. cit. for the bibliography/references section of any document (‘in the work previously quoted’);
• \ibid – ibid. for the bibliography/references section of any document (‘in the same place as the previous reference’);
• \typ – typ. (typically) for any technical document.

### Technical Macros

These are mainly mathematical, but also include notations common in Electronic Engineering. Miscellaneous shortcuts and macros are:

• \hydrogen – for type-setting 1H;
• \carbon – for type-setting 13C;
• \keyword – for quickly type-setting keywords;
• \url – for quickly type-setting web addresses;
• \invivo – the Latin in vivo;
• \invitro – the Latin in vitro;
• \latin{} – for type-setting Latin;
• \tblhead{} – for type-setting row and/or column headers in tables (defaults to slanted in the current font, plus additional space below the line);
• \replaceme{} – for marking-up text for revision in future drafts.

It should be noted that a subset of the mathematical abbreviations and commands can only be used in math-mode, mainly as they are of no use on their own, requiring mathematical expressions to make any sense. These include the vector product commands, the conjugate commands, the differential commands and the logic and set theory commands (except for the \logic{} macro). The units, other maths macros, Big-O notation and miscellaneous items can be used in any mode. In addition, the following commands will differ in result depending on the mode of operation (i.e. whether the AMS packages are used, which is the case for this documentation): the sets of numbers; the real and imaginary operators; the trig. operators; the Fourier, Laplace and Expectation operators; and the \vect{}, \vr{} and \mx{} commands. For more information, see the eethesis documentation.

## Using my_formatting

The my_formatting file is specific to this class. It provides a means to change the formatting, without putting messy preamble in your beautiful content-only document. Some of this formatting can be changed by modifying commands provided by us; other changes may be achieved by the inclusion of other packages (check the list of incompatible packages first). Some packages will provide extra commands for use in the document proper; others have a wider effect on the document format and so any commands used should also (in general) be placed in my_formatting.

The main purpose of my_formatting is to allow the user to change the fonts from the defaults provided by eethesis.cls, thus removing the need for modification of the class by users. Commands are also provided to over-ride the names of the various listings (e.g. change ‘Table of Contents’ to ‘Contents’, or ‘Bibliography’ to ‘References’). There are also commands related to adding a header to the page style (in general, this requires insertion of a command into the main document – see the notes in my_formatting for more details).

To use this package you just need to create the file my_formatting.sty in a place where LaTeX can see it (e.g. your project directory). Our example version comes with the distributions available from this site. It should also be noted that the order of font-changing commands may be important. It is probably best just to copy the example given here and un-comment and change the lines that apply.
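As a further illustration, a my_formatting.sty fragment could rename the Bibliography and set up the headedpage header using commands mentioned earlier; the header text is a made-up example, and the exact argument forms are inferred from the descriptions above rather than taken from the class file:

```latex
% my_formatting.sty (fragment)
\changeeebibname{References}          % 'Bibliography' -> 'References', also in the TOC
\eethesispageheader{My Thesis Draft}  % contents of the header used by headedpage
\eeheaderfont{\small\itshape}         % style of that header text
```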
See the downloads and links pages for other sources of information on fonts in LaTeX.

### An example: changing the style of chapter headings

In the my_formatting.sty file, the commands are all commented out and show the default. For chapter headings, this is:

```latex
%\renewcommand{\chaptertitlefont}[1]{\centering\sffamily\bfseries\LARGE\MakeTextUppercase{#1}}
```

So, if you wanted the chapter headers huge, right-justified and in italic roman characters, without making them upper-case, the line in my_formatting would become:

```latex
\renewcommand{\chaptertitlefont}[1]{\raggedleft\rmfamily\itshape\Huge #1}
```

### my_formatting and chappg

Additional packages may be included in my_formatting for use in the document. It is suggested that my_formatting may act as a central location for including extra packages, keeping the main document uncluttered. One package that is given as an example is chappg. This package allows the page-numbering to be reset at the start of each chapter. The inclusion of the package and the use of the relevant commands can be found in my_formatting. It should be noted that chappg is slightly unusual, in that eethesis defines a command that must be used to ensure compatibility.

Robert Foster and Greg Reynolds. Last updated 2007/11/04.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9049825072288513, "perplexity": 2924.5507577100047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119654793.48/warc/CC-MAIN-20141024030054-00210-ip-10-16-133-185.ec2.internal.warc.gz"}
https://undergroundmathematics.org/thinking-about-geometry/in-betweens/solution
Many ways problem

## Solution

1. Here is a graph showing a straight line. It is not drawn accurately. The points $(2,3)$ and $(8,8)$ lie on the line. Another point, $(4,a)$, also lies on the line. Can you work out the value of $a$? How many different ways can you find to do this?

One way to do this question is to draw some similar triangles on the graph. We can then sometimes find lengths we are interested in by using properties of similar triangles.

#### A similar triangles method

We extend the horizontal dashed line at $y=3$ to create some similar triangles, as shown in the following graph. There are at least three triangles in this graph; we will focus on two of them and redraw them separately. We can see that these triangles are similar, since the gradients of the sides are the same, and they are both right-angled. We can also fill in some of the side lengths we know from the triangles’ coordinates. So since the ratios of corresponding sides in similar triangles are equal, we have $$\frac{a-3}{2}=\frac{5}{6}. \tag{1}$$ Multiplying by $2$ gives $a-3=\frac{5}{3}$ so $a=\frac{14}{3}=4\frac{2}{3}$. And this answer seems reasonable: it’s between $3$ and $8$, and it’s closer to $3$.

There are several variants of this method which use slightly different choices of triangles.

We can think about the gradient of the line $AB$. There are three ways to calculate it: we can find the gradient between the points $A$ and $B$, which is $\frac{\text{change in y}}{\text{change in x}} = \frac{5}{6}.$ Alternatively, we can find the gradient between the points $A$ and $M$, which is $\frac{a-3}{2}.$ Finally, we can find the gradient between the points $M$ and $B$, which gives $\frac{8-a}{4}.$ But all of these gradients are equal, as the three points $A$, $B$ and $M$ all lie on the same line. So if we take any two of them we get an equation which we can solve to find $a$. For example, if we take the first two gradients, we get equation $(1)$ that we had before, so again we find that $a=\frac{14}{3}$.

#### The equation of the line method

Another way of doing this question is to work out the equation of the line $AB$ (see Geometry of Equations for more on finding equations of straight lines). The gradient of the line is $\frac{8-3}{8-2}=\frac{5}{6}.$ So the equation of the line is $y=\frac{5}{6}x+c$. To find $c$, we know that it passes through the point $(2,3)$, so we can substitute in to get $3=\frac{5}{6}\times 2+c$, so $c=\frac{4}{3}$. (Alternatively, we can use the general form of a straight line with gradient $m$ passing through a given point, namely $y-y_1=m(x-x_1)$. This gives us $y-3=\frac{5}{6}(x-2)$, which we can rearrange if we wish to again get $y=\frac{5}{6}x+\frac{4}{3}$.) We can now substitute in $x=4$ to work out the value of $y$, which gives $y=\frac{5}{6}\times 4+\frac{4}{3}=\frac{14}{3}$, so $a=\frac{14}{3}$.

We have described three different approaches to finding the value of $a$. Do you prefer one of these approaches to the others? If so, why?

2. This time, the $x$-coordinate of one of the points is missing—can you work it out? Did all of your approaches from question 1 work, or did some not? Or perhaps they worked with some modification? Can you find any new ways to do this question which are different from your methods for question 1?

The idea here is the same. We will use the similar triangles approach, and draw in the same triangles on the graph as we did in question 1. Again, we can draw out the two triangles separately to get a clearer picture of what's going on.
This time, we won’t label all of the points with letters, as it’s clear now where they come from. Then comparing the ratios of the sides gives the equation $\frac{b-5}{2} = \frac{11}{5}$ so that $b-5=\frac{22}{5}$, which brings us to $b=\frac{47}{5}=9\frac{2}{5}=9.4$. Note that for this question, we wrote the ratios with the $x$-coordinates on the tops of the fractions, as this is where the unknown $b$ is—this just makes our life easier when we go to work out $b$. You had to draw your own sketch and the numbers are nastier! But it’s exactly the same idea still. We’ll show the sketch graphs here, but not the individual triangles. • The point $(7.3, c)$ lies on the straight line joining $(4.1, 37)$ and $(8.9, 63)$. Find $c$. The sketch is: Then comparing the similar triangles gives the equation $\frac{c-37}{7.3-4.1} = \frac{63-37}{8.9-4.1}$ and working out the calculations gives $\frac{c-37}{3.2} = \frac{26}{4.8} = 5.416\ldots,$ so $c=3.2\times5.416\ldots+37=54.3$ to 1 d.p. Note that we did not round $\frac{26}{4.8}$ to $5.417$ in the middle of the calculation; it is better to keep numbers unrounded on your calculator and only round the final answer, as this prevents errors from accumulating. • The point $(d, 47.5)$ lies on the straight line joining $(15.05, 42)$ and $(17.55, 56)$. Find $d$. This time, the sketch is: so the corresponding equation is $\frac{d-15.05}{47.5-42} = \frac{17.55-15.05}{56-42}.$ Working things out as before gives $\frac{d-15.05}{5.5} = \frac{2.5}{14}=0.178\ldots$ so $d=5.5\times 0.178\ldots+15.05=16.0$ to 1 d.p. • The point $(12, e)$ lies on the straight line joining $(8, 20)$ and $(17, 1)$. Find $e$. This time, the sketch is Now the gradient is negative, but the method works in exactly the same way, as long as we take care. We should also double-check that our final answer makes sense! If we compare the lengths of the triangle sides, we have $\frac{20-e}{12-8} = \frac{20-1}{17-8},$ so $\frac{20-e}{4} = \frac{19}{9},$ yielding $20-e=8.444\dots$, so $e=11.6$ to 1 d.p., and this seems sensible, as $11.6$ is between $20$ and $1$, and close to half way between them. This does seem a little bit confusing, though, since we have to think carefully about which way round to subtract each pair of coordinates to ensure we get a positive number – it does lead to a significant potential for mistakes. Thinking about gradients instead might make our life somewhat easier here. Here is an alternative using gradients: equating the gradient between the points $(8,20)$ and $(12,e)$ with the gradient between $(8,20)$ and $(17,1)$ gives $\frac{e-20}{12-8} = \frac{1-20}{17-8},$ so $\frac{e-20}{4} = \frac{-19}{9},$ yielding $e-20=-8.444\dots$, so $e=11.6$ to 1 d.p. This way, the signs took care of themselves, and we could have done the problem without drawing a sketch, had we been so inclined.
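All of these answers can be checked with a couple of helper functions that simply equate gradients, i.e. linearly interpolate; a minimal sketch is below. (The $b$ question is omitted because its two defining points are not restated in this solution, only the resulting ratio equation.)

```python
def y_on_line(p, q, x):
    """y-coordinate at abscissa x on the straight line through p and q."""
    (x1, y1), (x2, y2) = p, q
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def x_on_line(p, q, y):
    """x-coordinate at ordinate y on the straight line through p and q."""
    (x1, y1), (x2, y2) = p, q
    return x1 + (x2 - x1) * (y - y1) / (y2 - y1)

print(y_on_line((2, 3), (8, 8), 4))               # a = 4.666... = 14/3
print(y_on_line((4.1, 37), (8.9, 63), 7.3))       # c = 54.33... (54.3 to 1 d.p.)
print(x_on_line((15.05, 42), (17.55, 56), 47.5))  # d = 16.03... (16.0 to 1 d.p.)
print(y_on_line((8, 20), (17, 1), 12))            # e = 11.55... (11.6 to 1 d.p.)
```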
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8221042156219482, "perplexity": 753.0647899554755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359976.94/warc/CC-MAIN-20211201083001-20211201113001-00007.warc.gz"}
https://infoscience.epfl.ch/record/224269
## Regularization networks: Fast weight calculation via Kalman filtering

Regularization networks are nonparametric estimators obtained from the application of Tychonov regularization or Bayes estimation to the hypersurface reconstruction problem. Their main drawback is that the computation of the weights scales as $O(n^3)$, where $n$ is the number of data. In this paper we show that for a class of monodimensional problems, the complexity can be reduced to $O(n)$ by a suitable algorithm based on spectral factorization and Kalman filtering. Moreover, the procedure also applies to smoothing splines.

Published in: IEEE Trans. Neural Networks, 12, 2, 228-235. Year: 2001.
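For orientation, here is a minimal sketch of the baseline $O(n^3)$ weight calculation that the paper improves on. It assumes the standard regularization-network formulation, in which the weights solve a regularized kernel system; the abstract does not spell this out, so the kernel and the system below are illustrative assumptions, and the paper's $O(n)$ spectral-factorization/Kalman algorithm is not reproduced here.

```python
import numpy as np

def rn_weights_direct(x, y, lam, kernel):
    """Baseline O(n^3) weight calculation for a regularization network.

    Assumes the standard formulation f(t) = sum_i w_i * kernel(t, x_i),
    with weights solving (K + lam * I) w = y.  The dominant cost is the
    dense n-by-n linear solve, hence O(n^3).
    """
    n = len(x)
    K = np.array([[kernel(xi, xj) for xj in x] for xi in x])
    return np.linalg.solve(K + lam * np.eye(n), y)

# Toy monodimensional example with an illustrative Gaussian kernel.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
w = rn_weights_direct(x, y, lam=1e-2,
                      kernel=lambda s, t: np.exp(-(s - t) ** 2 / 0.02))
print(w.shape)  # (50,)
```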
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015344142913818, "perplexity": 463.4168464438629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812938.85/warc/CC-MAIN-20180220110011-20180220130011-00010.warc.gz"}
http://openstudy.com/updates/51fe3fdce4b0cc46c14adb24
## Question

ABC is an isosceles triangle with AB = BC. The circumcircle of ABC has radius 8. Given that AC is a diameter of the circumcircle, what is the area of triangle ABC?

## Answer

Note that angle B will be a right angle (which can be proved using the ‘Angle at the centre’ theorem). We also note that this triangle is isosceles and its base is the diameter of the circle, $\bf AC=16$. The height of the triangle is just the radius of the circle, which is given to us as 8. Hence the area of the triangle is: $\bf Area \ of \ \triangle ABC=\frac{ 16 \times 8 }{ 2 }=64 \ units^2$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986423850059509, "perplexity": 2701.44985603286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00038-ip-10-164-35-72.ec2.internal.warc.gz"}
http://gmstat.com/category/uncategorized/
## Variance: A Measure of Dispersion

Variance is a measure of dispersion of the distribution of a random variable. The term variance was introduced by R. A. Fisher in 1918. The variance of a set of observations (data set) is defined as the mean of the squares of the deviations of all the observations from their mean. When it is computed for an entire population, the variance is called the population variance, usually denoted by $\sigma^2$, while for sample data it is called the sample variance and denoted by $S^2$, in order to distinguish between population variance and sample variance. The variance is also denoted by $Var(X)$ when we speak about the variance of a random variable. The symbolic definitions of population and sample variance are

$\sigma^2=\frac{\sum (X_i - \mu)^2}{N}; \quad \text{for population data}$

$S^2=\frac{\sum (X_i - \overline{X})^2}{n-1}; \quad \text{for sample data}$

It should be noted that the variance is in the square of the units in which the observations are expressed, and the variance is a large number compared to the observations themselves. Because of some of its nice mathematical properties, the variance assumes an extremely important role in statistical theory. The variance can be computed if we have the standard deviation, since the variance is the square of the standard deviation, i.e. $\text{Variance} = (\text{Standard Deviation})^2$. The variance can be used to compare the dispersion in two or more sets of observations. The variance can never be negative, since every term in the variance is a squared quantity, either positive or zero.

To calculate the standard deviation one has to follow these steps (a short code sketch following these steps appears at the end of this article):

1. First find the mean of the data.
2. Take the difference of each observation from the mean of the given data set. The sum of these differences should be zero, or near to zero due to rounding of numbers.
3. Square the values obtained in step 2; each result should be greater than or equal to zero, i.e. a non-negative quantity.
4. Sum all the squared quantities obtained in step 3. We call this the sum of squares of differences.
5. Divide this sum of squares of differences by the total number of observations if we have to calculate the population standard deviation ($\sigma$). For the sample standard deviation ($S$), divide the sum of squares of differences by the total number of observations minus one, i.e. the degrees of freedom.
6. Find the square root of the quantity obtained in step 5. The resultant quantity will be the standard deviation for the given data set.

The major characteristics of the variance are:

a) All of the observations are used in the calculations.
b) The variance is unduly influenced by extreme observations, since the deviations are squared.
c) The variance is not in the same units as the observations; it is in the square of the units in which the observations are expressed.

## Sampling Basics

It is often required to collect information from the data. There are two methods for collecting the required information:

- Complete information
- Sampling

## Complete Information

In this method the required information is collected from each and every individual of the population. This method is used when it is difficult to draw a conclusion (inference) about the population on the basis of sample information. This method is costly and time consuming. This method of collecting data is also called Complete Enumeration or Population Census.

## Sampling

Sampling is the most commonly and widely used method of collecting information. In this method, instead of studying the whole population, only a small part of the population is selected and studied, and the result is applied to the whole population.
For example, a cotton dealer picks up a small quantity of cotton from different bales in order to judge the quality of the cotton.

## Purpose or objective of sampling

The two basic purposes of sampling are:

1. To obtain the maximum information about the population without examining each and every unit of the population.
2. To find the reliability of the estimates derived from the sample, which can be done by computing the standard error of the statistic.

## Advantages of sampling over complete enumeration

1. It is a much cheaper method to collect the required information from a sample as compared to complete enumeration, as fewer units are studied in a sample than in the whole population.
2. From a sample, the data can be collected more quickly, saving a lot of time.
3. Planning for a sample survey can be done more carefully and easily as compared to complete enumeration.
4. Sampling is the only available method of collecting the required information when the objects, subjects or individuals in the population are of a destructive nature.
5. Sampling is the only available method of collecting the required information when the population is infinite or large enough.
6. The most important advantage of sampling is that it provides the reliability of the estimates.
7. Sampling is extensively used to obtain some of the census information.

For further reading visit: Sampling Theory and Reasons to Sample
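As promised above, here is a short sketch that follows the standard-deviation steps literally and also reports both variances; the data set is made up:

```python
import math

data = [4, 8, 6, 5, 3, 7]  # made-up observations

mean = sum(data) / len(data)              # step 1: mean
deviations = [x - mean for x in data]     # step 2: deviations (sum ~ 0)
squares = [d ** 2 for d in deviations]    # step 3: squared deviations
ss = sum(squares)                         # step 4: sum of squares

pop_var = ss / len(data)                  # sigma^2: divide by N
sample_var = ss / (len(data) - 1)         # S^2: divide by n - 1 (degrees of freedom)

pop_sd = math.sqrt(pop_var)               # steps 5-6: standard deviations
sample_sd = math.sqrt(sample_var)
print(pop_var, sample_var, pop_sd, sample_sd)
```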
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 7, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648956060409546, "perplexity": 456.11735649397593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689775.73/warc/CC-MAIN-20170923194310-20170923214310-00381.warc.gz"}
http://eprints.pascal-network.org/archive/00008082/
A Triangle Inequality for p-Resistance

Mark Herbster

In: NIPS Workshop 2010: Networks Across Disciplines: Theory and Applications, December 11, 2010, Whistler, Canada.

## Abstract

The geodesic distance (path length) and effective resistance are both metrics defined on the vertices of a graph. The effective resistance is a more refined measure of connectivity than the geodesic distance. For example, if there are $k$ edge-disjoint paths of geodesic distance $d$ between two vertices, then the effective resistance is no more than $\frac{d}{k}$. Thus, the more paths, the closer the vertices. We continue the study of the recently introduced $p$-effective resistance [HL09]. The main technical contribution of this note is to prove that the $p$-effective resistance is a metric for $p \in (1,2]$ and obeys a strong triangle inequality. Given an efficient method to compute the $p$-effective resistance, an easy consequence of this inequality is that we may efficiently find a $k$-center clustering within a factor of $2^{p-1}$ of the optimal clustering with respect to $p$-effective resistance.
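The "more paths, the closer" bound is easy to check numerically for the ordinary ($p=2$) effective resistance, which equals a quadratic form in the pseudoinverse of the graph Laplacian; note that this sketch does not implement the paper's $p$-effective resistance, only the classical $p=2$ case:

```python
import numpy as np

def effective_resistance(n, edges, i, j):
    """Ordinary (p = 2) effective resistance between vertices i and j,
    computed via the Moore-Penrose pseudoinverse of the graph Laplacian."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    Lp = np.linalg.pinv(L)
    e = np.zeros(n)
    e[i], e[j] = 1.0, -1.0
    return float(e @ Lp @ e)

def k_disjoint_paths(k, d):
    """Graph with k edge-disjoint paths, each of length d, joining vertices 0 and 1."""
    edges, n = [], 2
    for _ in range(k):
        prev = 0
        for _ in range(d - 1):  # d - 1 internal vertices per path
            edges.append((prev, n))
            prev = n
            n += 1
        edges.append((prev, 1))
    return n, edges

for k, d in [(1, 3), (2, 3), (3, 4)]:
    n, edges = k_disjoint_paths(k, d)
    print(k, d, effective_resistance(n, edges, 0, 1))  # equals d/k here, matching the bound
```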
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8170214891433716, "perplexity": 523.1779177289716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707439012/warc/CC-MAIN-20130516123039-00084-ip-10-60-113-184.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/493050/hline-is-drawn-over-vertical-ruling
\hline is drawn over vertical ruling

I'm trying to generate pairs of tables in several different templates. The first table has a "normal" appearance, but the second is a transformation of the first that displays all of the rulings, especially those that are not drawn in the original table. It is important that the original table also includes the horizontal separators because the two images need to be the same size. The second must show the "grid" that shows all independent cells. An example:

```latex
\documentclass[preview]{standalone}
\usepackage{booktabs}
\usepackage{colortbl}
\begin{document}
\begin{table}
\begin{tabular}{l|cr}
\hline
 & Romanization & polverine \\
\arrayrulecolor{black} \hline \arrayrulecolor{black}
gastratrophia & 9 & 77 \\
\arrayrulecolor{white} \hline \arrayrulecolor{black}
Adamitical & 3 & 61 \\
\arrayrulecolor{white} \hline \arrayrulecolor{black}
\end{tabular}
\end{table}

\begin{table}
\setlength{\aboverulesep}{0.1pt}
\setlength{\belowrulesep}{0.1pt}
\color{white}\arrayrulecolor{red}
\begin{tabular}{|l|c|r|}
\hline
 & Romanization & polverine \\
\hline
gastratrophia & 9 & 77 \\
\hline
Adamitical & 3 & 61 \\
\hline
\end{tabular}
\end{table}
\end{document}
```

Which results in the following tables:

I realize that it is really discouraged to use vertical separators in tables, but due to the goal of the table generation I need them. Now as you can see, the white line is drawn over the vertical ruling, which results in a small white gap. I've tried to find ways to make the \hline transparent instead of white, but I cannot find a way to do that. Is there a way to draw the vertical lines after the horizontal lines, such that they will be placed over the white lines? If not, is there a way of replacing the white lines by some extra vertical spacing that is exactly the size of what a \hline would be?

A working solution is to use \\[0.4pt] for the rows that do not have a \hline. Since a \hline has this width, this compensates for the space that a \hline would otherwise produce. It does not interrupt the vertical lines.

• Or \\[\arrayrulewidth] in case it changes. – John Kormylo May 28 '19 at 19:58
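Putting the accepted fix together, the first table of the question might become the following sketch; only the rows that previously carried a white \hline change, using the \\[\arrayrulewidth] variant from the comment so the compensation tracks the current rule width:

```latex
\begin{table}
\begin{tabular}{l|cr}
\hline
 & Romanization & polverine \\
\hline
gastratrophia & 9 & 77 \\[\arrayrulewidth]
Adamitical & 3 & 61 \\[\arrayrulewidth]
\end{tabular}
\end{table}
```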
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992772936820984, "perplexity": 637.2810145410082}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00260.warc.gz"}
https://en.wikipedia.org/wiki/T-norm_fuzzy_logics
# T-norm fuzzy logics T-norm fuzzy logics are a family of non-classical logics, informally delimited by having a semantics which takes the real unit interval [0, 1] for the system of truth values and functions called t-norms for permissible interpretations of conjunction. They are mainly used in applied fuzzy logic and fuzzy set theory as a theoretical basis for approximate reasoning. T-norm fuzzy logics belong in broader classes of fuzzy logics and many-valued logics. In order to generate a well-behaved implication, the t-norms are usually required to be left-continuous; logics of left-continuous t-norms further belong in the class of substructural logics, among which they are marked with the validity of the law of prelinearity, (A → B) ∨ (B → A). Both propositional and first-order (or higher-order) t-norm fuzzy logics, as well as their expansions by modal and other operators, are studied. Logics which restrict the t-norm semantics to a subset of the real unit interval (for example, finitely valued Łukasiewicz logics) are usually included in the class as well. Important examples of t-norm fuzzy logics are monoidal t-norm logic MTL of all left-continuous t-norms, basic logic BL of all continuous t-norms, product fuzzy logic of the product t-norm, or the nilpotent minimum logic of the nilpotent minimum t-norm. Some independently motivated logics belong among t-norm fuzzy logics, too, for example Łukasiewicz logic (which is the logic of the Łukasiewicz t-norm) or Gödel–Dummett logic (which is the logic of the minimum t-norm). ## Motivation As members of the family of fuzzy logics, t-norm fuzzy logics primarily aim at generalizing classical two-valued logic by admitting intermediary truth values between 1 (truth) and 0 (falsity) representing degrees of truth of propositions. The degrees are assumed to be real numbers from the unit interval [0, 1]. In propositional t-norm fuzzy logics, propositional connectives are stipulated to be truth-functional, that is, the truth value of a complex proposition formed by a propositional connective from some constituent propositions is a function (called the truth function of the connective) of the truth values of the constituent propositions. The truth functions operate on the set of truth degrees (in the standard semantics, on the [0, 1] interval); thus the truth function of an n-ary propositional connective c is a function Fc: [0, 1]n → [0, 1]. Truth functions generalize truth tables of propositional connectives known from classical logic to operate on the larger system of truth values. T-norm fuzzy logics impose certain natural constraints on the truth function of conjunction. The truth function $*\colon[0,1]^2\to[0,1]$ of conjunction is assumed to satisfy the following conditions: • Commutativity, that is, $x*y=y*x$ for all x and y in [0, 1]. This expresses the assumption that the order of fuzzy propositions is immaterial in conjunction, even if intermediary truth degrees are admitted. • Associativity, that is, $(x*y)*z = x*(y*z)$ for all x, y, and z in [0, 1]. This expresses the assumption that the order of performing conjunction is immaterial, even if intermediary truth degrees are admitted. • Monotony, that is, if $x \le y$ then $x*z \le y*z$ for all x, y, and z in [0, 1]. This expresses the assumption that increasing the truth degree of a conjunct should not decrease the truth degree of the conjunction. • Neutrality of 1, that is, $1*x = x$ for all x in [0, 1]. 
This assumption corresponds to regarding the truth degree 1 as full truth, conjunction with which does not decrease the truth value of the other conjunct. Together with the previous conditions this condition ensures that also $0*x = 0$ for all x in [0, 1], which corresponds to regarding the truth degree 0 as full falsity, conjunction with which is always fully false. • Continuity of the function $*$ (the previous conditions reduce this requirement to the continuity in either argument). Informally this expresses the assumption that microscopic changes of the truth degrees of conjuncts should not result in a macroscopic change of the truth degree of their conjunction. This condition, among other things, ensures a good behavior of (residual) implication derived from conjunction; to ensure the good behavior, however, left-continuity (in either argument) of the function $*$ is sufficient.[1] In general t-norm fuzzy logics, therefore, only left-continuity of $*$ is required, which expresses the assumption that a microscopic decrease of the truth degree of a conjunct should not macroscopically decrease the truth degree of conjunction. These assumptions make the truth function of conjunction a left-continuous t-norm, which explains the name of the family of fuzzy logics (t-norm based). Particular logics of the family can make further assumptions about the behavior of conjunction (for example, Gödel logic requires its idempotence) or other connectives (for example, the logic IMTL requires the involutiveness of negation). All left-continuous t-norms $*$ have a unique residuum, that is, a binary function $\Rightarrow$ such that for all x, y, and z in [0, 1], $x*y\le z$ if and only if $x\le y\Rightarrow z.$ The residuum of a left-continuous t-norm can explicitly be defined as $(x\Rightarrow y)=\sup\{z\mid z*x\le y\}.$ This ensures that the residuum is the pointwise largest function such that for all x and y, $x*(x\Rightarrow y)\le y.$ The latter can be interpreted as a fuzzy version of the modus ponens rule of inference. The residuum of a left-continuous t-norm thus can be characterized as the weakest function that makes the fuzzy modus ponens valid, which makes it a suitable truth function for implication in fuzzy logic. Left-continuity of the t-norm is the necessary and sufficient condition for this relationship between a t-norm conjunction and its residual implication to hold. Truth functions of further propositional connectives can be defined by means of the t-norm and its residuum, for instance the residual negation $\neg x=(x\Rightarrow 0)$ or bi-residual equivalence $x\Leftrightarrow y = (x\Rightarrow y)*(y\Rightarrow x).$ Truth functions of propositional connectives may also be introduced by additional definitions: the most usual ones are the minimum (which plays a role of another conjunctive connective), the maximum (which plays a role of a disjunctive connective), or the Baaz Delta operator, defined in [0, 1] as $\Delta x = 1$ if $x=1$ and $\Delta x = 0$ otherwise. In this way, a left-continuous t-norm, its residuum, and the truth functions of additional propositional connectives determine the truth values of complex propositional formulae in [0, 1]. Formulae that always evaluate to 1 are called tautologies with respect to the given left-continuous t-norm $*,$ or $*\mbox{-}$tautologies. 
The set of all $*\mbox{-}$tautologies is called the logic of the t-norm $*,$ as these formulae represent the laws of fuzzy logic (determined by the t-norm) which hold (to degree 1) regardless of the truth degrees of atomic formulae. Some formulae are tautologies with respect to a larger class of left-continuous t-norms; the set of such formulae is called the logic of the class. Important t-norm logics are the logics of particular t-norms or classes of t-norms, for example Łukasiewicz logic (the logic of the Łukasiewicz t-norm), Gödel–Dummett logic (the logic of the minimum t-norm), product fuzzy logic (the logic of the product t-norm), the basic logic BL of all continuous t-norms, and the monoidal t-norm logic MTL of all left-continuous t-norms. It turns out that many logics of particular t-norms and classes of t-norms are axiomatizable. The completeness theorem of the axiomatic system with respect to the corresponding t-norm semantics on [0, 1] is then called the standard completeness of the logic. Besides the standard real-valued semantics on [0, 1], the logics are sound and complete with respect to general algebraic semantics, formed by suitable classes of prelinear commutative bounded integral residuated lattices. ## History Some particular t-norm fuzzy logics have been introduced and investigated long before the family was recognized (even before the notions of fuzzy logic or t-norm emerged): most notably Łukasiewicz logic, which goes back to Łukasiewicz's 1920 work on three-valued logic,[2] with its infinite-valued predicate calculus axiomatized by Hay in 1963,[3] and Gödel–Dummett logic, rooted in Gödel's 1932 paper on the intuitionistic propositional calculus[4] and axiomatized by Dummett in 1959.[5] A systematic study of particular t-norm fuzzy logics and their classes began with Hájek's (1998) monograph Metamathematics of Fuzzy Logic, which presented the notion of the logic of a continuous t-norm, the logics of the three basic continuous t-norms (Łukasiewicz, Gödel, and product), and the 'basic' fuzzy logic BL of all continuous t-norms (all of them both propositional and first-order). The book also started the investigation of fuzzy logics as non-classical logics with Hilbert-style calculi, algebraic semantics, and metamathematical properties known from other logics (completeness theorems, deduction theorems, complexity, etc.). Since then, a plethora of t-norm fuzzy logics have been introduced and their metamathematical properties have been investigated. Some of the most important t-norm fuzzy logics were introduced in 2001, by Esteva and Godo (MTL, IMTL, SMTL, NM, WNM),[1] Esteva, Godo, and Montagna (propositional ŁΠ),[6] and Cintula (first-order ŁΠ).[7] ## Logical language The logical vocabulary of propositional t-norm fuzzy logics standardly comprises the following connectives: • Implication $\rightarrow$ (binary). In the context of other than t-norm-based fuzzy logics, the t-norm-based implication is sometimes called residual implication or R-implication, as its standard semantics is the residuum of the t-norm that realizes strong conjunction. • Strong conjunction $\And$ (binary). In the context of substructural logics, the sign $\otimes$ and the names group, intensional, multiplicative, or parallel conjunction are often used for strong conjunction. • Weak conjunction $\wedge$ (binary), also called lattice conjunction (as it is always realized by the lattice operation of meet in algebraic semantics). In the context of substructural logics, the names additive, extensional, or comparative conjunction are sometimes used for lattice conjunction. In the logic BL and its extensions (though not in t-norm logics in general), weak conjunction is definable in terms of implication and strong conjunction, by $A\wedge B \equiv A \mathbin{\And} (A \rightarrow B).$ The presence of two conjunction connectives is a common feature of contraction-free substructural logics.
• Bottom $\bot$ (nullary); $0$ or $\overline{0}$ are common alternative signs and zero a common alternative name for the propositional constant (as the constants bottom and zero of substructural logics coincide in t-norm fuzzy logics). The proposition $\bot$ represents the falsity or absurdum and corresponds to the classical truth value false. • Negation $\neg$ (unary), sometimes called residual negation if other negation connectives are considered, as it is defined from the residual implication by the reductio ad absurdum: $\neg A \equiv A \rightarrow \bot$ • Equivalence $\leftrightarrow$ (binary), defined as $A \leftrightarrow B \equiv (A \rightarrow B) \wedge (B \rightarrow A)$ In t-norm logics, the definition is equivalent to $(A \rightarrow B) \mathbin{\And} (B \rightarrow A).$ • (Weak) disjunction $\vee$ (binary), also called lattice disjunction (as it is always realized by the lattice operation of join in algebraic semantics). In t-norm logics it is definable in terms of other connectives as $A \vee B \equiv ((A \rightarrow B) \rightarrow B) \wedge ((B \rightarrow A) \rightarrow A)$ • Top $\top$ (nullary), also called one and denoted by $1$ or $\overline{1}$ (as the constants top and zero of substructural logics coincide in t-norm fuzzy logics). The proposition $\top$ corresponds to the classical truth value true and can in t-norm logics be defined as $\top \equiv \bot \rightarrow \bot.$ Some propositional t-norm logics add further propositional connectives to the above language, most often the following ones: • The Delta connective $\triangle$ is a unary connective that asserts classical truth of a proposition, as the formulae of the form $\triangle A$ behave as in classical logic. Also called the Baaz Delta, as it was first used by Matthias Baaz for Gödel–Dummett logic.[8] The expansion of a t-norm logic $L$ by the Delta connective is usually denoted by $L_{\triangle}.$ • Truth constants are nullary connectives representing particular truth values between 0 and 1 in the standard real-valued semantics. For the real number $r$, the corresponding truth constant is usually denoted by $\overline{r}.$ Most often, the truth constants for all rational numbers are added. The system of all truth constants in the language is supposed to satisfy the bookkeeping axioms:[9] $\overline{r \mathbin{\And} s} \leftrightarrow (\overline{r} \mathbin{\And} \overline{s}),$ $\overline{r \rightarrow s} \leftrightarrow (\overline{r} \mathbin{\rightarrow} \overline{s}),$ etc. for all propositional connectives and all truth constants definable in the language. • Involutive negation $\sim$ (unary) can be added as an additional negation to t-norm logics whose residual negation is not itself involutive, that is, if it does not obey the law of double negation $\neg\neg A \leftrightarrow A$. A t-norm logic $L$ expanded with involutive negation is usually denoted by $L_{\sim}$ and called $L$ with involution. • Strong disjunction $\oplus$ (binary). In the context of substructural logics it is also called group, intensional, multiplicative, or parallel disjunction. Even though standard in contraction-free substructural logics, in t-norm fuzzy logics it is usually used only in the presence of involutive negation, which makes it definable (and so axiomatizable) by de Morgan's law from strong conjunction: $A \oplus B \equiv \mathrm{\sim}(\mathrm{\sim}A \mathbin{\And} \mathrm{\sim}B).$ • Additional t-norm conjunctions and residual implications. 
Some expressively strong t-norm logics, for instance the logic ŁΠ, have more than one strong conjunction or residual implication in their language. In the standard real-valued semantics, all such strong conjunctions are realized by different t-norms and the residual implications by their residua. Well-formed formulae of propositional t-norm logics are defined from propositional variables (usually countably many) by the above logical connectives, as usual in propositional logics. In order to save parentheses, it is common to use the following order of precedence: • Unary connectives (bind most closely) • Binary connectives other than implication and equivalence • Implication and equivalence (bind most loosely) First-order variants of t-norm logics employ the usual logical language of first-order logic with the above propositional connectives and the following quantifiers: • General quantifier $\forall$ • Existential quantifier $\exists$ The first-order variant of a propositional t-norm logic $L$ is usually denoted by $L\forall.$ ## Semantics Algebraic semantics is predominantly used for propositional t-norm fuzzy logics, with three main classes of algebras with respect to which a t-norm fuzzy logic $L$ is complete: • General semantics, formed of all $L$-algebras — that is, all algebras for which the logic is sound. • Linear semantics, formed of all linear $L$-algebras — that is, all $L$-algebras whose lattice order is linear. • Standard semantics, formed of all standard $L$-algebras — that is, all $L$-algebras whose lattice reduct is the real unit interval [0, 1] with the usual order. In standard $L$-algebras, the interpretation of strong conjunction is a left-continuous t-norm and the interpretation of most propositional connectives is determined by the t-norm (hence the names t-norm-based logics and t-norm $L$-algebras, which is also used for $L$-algebras on the lattice [0, 1]). In t-norm logics with additional connectives, however, the real-valued interpretation of the additional connectives may be restricted by further conditions for the t-norm algebra to be called standard: for example, in standard $L_\sim$-algebras of the logic $L$ with involution, the interpretation of the additional involutive negation $\sim$ is required to be the standard involution $f_\sim(x)=1-x,$ rather than other involutions which can also interpret $\sim$ over t-norm $L_\sim$-algebras.[10] In general, therefore, the definition of standard t-norm algebras has to be explicitly given for t-norm logics with additional connectives. ## Bibliography • Esteva F. & Godo L., 2001, "Monoidal t-norm based logic: Towards a logic of left-continuous t-norms". Fuzzy Sets and Systems 124: 271–288. • Flaminio T. & Marchioni E., 2006, T-norm based logics with an independent involutive negation. Fuzzy Sets and Systems 157: 3125–3144. • Gottwald S. & Hájek P., 2005, Triangular norm based mathematical fuzzy logic. In E.P. Klement & R. Mesiar (eds.), Logical, Algebraic, Analytic and Probabilistic Aspects of Triangular Norms, pp. 275–300. Elsevier, Amsterdam 2005. • Hájek P., 1998, Metamathematics of Fuzzy Logic. Dordrecht: Kluwer. ISBN 0-7923-5238-6. ## References 1. ^ a b Esteva & Godo (2001) 2. ^ Łukasiewicz J., 1920, O logice trojwartosciowej (Polish, On three-valued logic). Ruch filozoficzny 5:170–171. 3. ^ Hay, L.S., 1963, Axiomatization of the infinite-valued predicate calculus. Journal of Symbolic Logic 28:77–86. 4. ^ Gödel K., 1932, Zum intuitionistischen Aussagenkalkül, Anzeiger Akademie der Wissenschaften Wien 69: 65–66. 
5. ^ Dummett M., 1959, Propositional calculus with denumerable matrix, Journal of Symbolic Logic 24: 97–106 6. ^ Esteva F., Godo L., & Montagna F., 2001, The ŁΠ and ŁΠ½ logics: Two complete fuzzy systems joining Łukasiewicz and product logics, Archive for Mathematical Logic 40: 39–67. 7. ^ Cintula P., 2001, The ŁΠ and ŁΠ½ propositional and predicate logics, Fuzzy Sets and Systems 124: 289–302. 8. ^ Baaz M., 1996, Infinite-valued Gödel logic with 0-1-projections and relativisations. In P. Hájek (ed.), Gödel'96: Logical Foundations of Mathematics, Computer Science, and Physics, Springer, Lecture Notes in Logic 6: 23–33 9. ^ Hájek (1998) 10. ^ Flaminio & Marchioni (2006)
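To make the standard [0, 1] semantics described above concrete, here is a minimal sketch (Python, not from the article; the function names are illustrative) of the three basic continuous t-norms and their residua, which interpret strong conjunction and residual implication:

```python
# The three basic continuous t-norms (strong conjunction on [0, 1])
# and a generic residuum (residual implication) computed from the
# defining property: residuum(x, y) = sup{z : tnorm(x, z) <= y}.

def lukasiewicz_tnorm(x, y):
    return max(0.0, x + y - 1.0)

def godel_tnorm(x, y):
    return min(x, y)

def product_tnorm(x, y):
    return x * y

def residuum(tnorm, x, y, grid=1000):
    # Grid approximation of the supremum; closed forms exist for all three
    # (e.g. min(1, 1 - x + y) for the Lukasiewicz t-norm).
    return max(z / grid for z in range(grid + 1) if tnorm(x, z / grid) <= y)

# Strong conjunction and implication at truth degrees 0.7 and 0.4:
for name, t in [("Lukasiewicz", lukasiewicz_tnorm),
                ("Godel", godel_tnorm),
                ("product", product_tnorm)]:
    print(name, t(0.7, 0.4), round(residuum(t, 0.7, 0.4), 3))
```

The printed residua (0.7, 0.4, and about 0.571 respectively) illustrate how the same syntax receives different [0, 1] semantics under different t-norms.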
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 83, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947600245475769, "perplexity": 1654.473528305011}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988065.26/warc/CC-MAIN-20150728002308-00138-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/7862-find-1st-2nd-derivative.html
# Thread: Find 1st and 2nd Derivative. 1. ## Find 1st and 2nd Derivative. Find the first derivative and then the second derivative of $y=\frac{e^x}{x^2}$. Thank you in advance! 2. Originally Posted by Affinity Find the first derivative and then the second derivative of $y=\frac{e^x}{x^2}$. Thank you in advance! $y=\frac{e^x}{x^2}$ $y'=\frac{(e^x)'(x^2)-(e^x)(x^2)'}{(x^2)^2}$ $y'=\frac{x^2e^x-2xe^x}{x^4}=\frac{xe^x(x-2)}{x^4}=\frac{e^x(x-2)}{x^3}$ $y''=\frac{[e^x(x-2)]'(x^3)-[e^x(x-2)](x^3)'}{(x^3)^2}$ $y''=\frac{[(e^x)'(x-2)+e^x(x-2)']x^3-e^x(x-2)(x^3)'}{x^6}$ $y''=\frac{[e^x(x-2)+e^x]x^3-3x^2e^x(x-2)}{x^6}$ $y''=\frac{e^x(x-2)x^3+x^3e^x-3x^2e^x(x-2)}{x^6}=\frac{e^x[(x-2)x^3+x^3-3x^2(x-2)]}{x^6}$ $y''=\frac{e^x(x^4-2x^3+x^3-3x^3+6x^2)}{x^6}=\frac{e^x(x^4-4x^3+6x^2)}{x^6}=$ $\frac{x^2e^x(x^2-4x+6)}{x^6}$ $y''=\frac{e^x(x^2-4x+6)}{x^4}$
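As a quick cross-check of the algebra above, the two derivatives can be verified symbolically; a minimal sketch using the sympy library (not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.exp(x) / x**2

# First and second derivatives, simplified; the results should agree,
# up to rearrangement, with the forms derived by hand above.
y1 = sp.simplify(sp.diff(y, x))
y2 = sp.simplify(sp.diff(y, x, 2))

print(y1)  # expected: exp(x)*(x - 2)/x**3
print(y2)  # expected: exp(x)*(x**2 - 4*x + 6)/x**4
```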
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9857490658760071, "perplexity": 2551.8229697874303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661778.22/warc/CC-MAIN-20160924173741-00190-ip-10-143-35-109.ec2.internal.warc.gz"}
http://www.researchgate.net/publication/2166281_Simulation_of_Beam-Beam_Effects_in_ee-_Storage_Rings
Article # Simulation of Beam-Beam Effects in $e^+e^-$ Storage Rings 05/2001; Source: arXiv ABSTRACT The beam-beam effects of the PEP-II as an asymmetric collider are studied with strong-strong simulations using a newly developed particle-in-cell (PIC) code. The simulated luminosity agrees with the measured one within 10% in a large range of the beam currents. The spectra of coherent dipole oscillation are simulated with and without the transparency symmetry. The simulated tune shift of the coherent $\pi$ mode agrees with the linearized Vlasov theory even at large beam-beam parameters. The Poincaré map of the coherent dipole is used to identify the beam-beam resonances.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9474278688430786, "perplexity": 3198.3795843742228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398454553.89/warc/CC-MAIN-20151124205414-00140-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/finding-intial-momentum.744182/
# Homework Help: Finding initial momentum 1. Mar 19, 2014 ### savaphysics 1. The problem statement, all variables and given/known data You are riding in a car that crashes into a solid wall. The car comes to a complete stop without bouncing back. The car has a mass of 1500 kg and has a speed of 30 m/s before the crash. What is the car’s initial momentum? What is your initial momentum? (Recall that the weight of one kilogram is 2.2 lbs) What is the change in the momentum of the car? What is the change in your momentum? 2. Relevant equations 3. The attempt at a solution CarI= (1500kg) X (30m/s)= 45000kg m/s YouI=(Mass) X (30 m/s) Car change= (1500kg) X (-30m/s)= -45000kg m/s You change= (Mass) X (-30m/s) 2. Mar 19, 2014 ### PhanthomJay Yes...where your mass in kg is your weight in pounds divided by 2.2. I guess that is what the problem is asking.... 3. Mar 19, 2014 ### haruspex That all looks right. Is that all you wanted to know?
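The arithmetic in the attempt can be written out directly; a small sketch (Python, not from the thread; the 154 lb rider weight is hypothetical, used only to illustrate the pound-to-kilogram conversion PhanthomJay mentions):

```python
# Initial momentum p = m*v; the car stops completely, so the change in
# momentum is just the negative of the initial momentum.
car_mass = 1500.0             # kg
speed = 30.0                  # m/s

weight_lb = 154.0             # hypothetical rider weight in pounds
rider_mass = weight_lb / 2.2  # kg, per the hint in the problem (70 kg here)

p_car = car_mass * speed      # 45000 kg*m/s
p_rider = rider_mass * speed  # 2100 kg*m/s for the assumed weight

print(p_car, -p_car)          # initial momentum of the car and its change
print(p_rider, -p_rider)
```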
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.837225615978241, "perplexity": 2512.1187502827947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867309.73/warc/CC-MAIN-20180526033945-20180526053945-00249.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-4th-edition/chapter-9-work-and-kinetic-energy-exercises-and-problems-page-229/37
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (4th Edition) (a) $W = 206~J$ (b) $P = 68.7~watts$ (a) Since the block moves at a steady speed, the applied force $F$ must be equal in magnitude to the force of kinetic friction. We can find the magnitude of the force $F$ which is applied by the person. We can use $\mu_k = 0.70$. Therefore; $F = F_f$ $F = mg~\mu_k$ $F = (10~kg)(9.80~m/s^2)(0.70)$ $F = 68.6~N$ We can find the distance $d$ the block moves in 3.0 seconds. $d = v~t$ $d = (1.0~m/s)(3.0~s)$ $d = 3.0~m$ We can find the work done by the person. $W = F~d$ $W = (68.6~N)(3.0~m)$ $W = 206~J$ (b) We can find the power output. $P = \frac{W}{t}$ $P = \frac{206~J}{3.0~s}$ $P = 68.7~watts$
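The same numbers can be reproduced in a few lines; a minimal sketch (Python, not part of the textbook solution):

```python
# Work done and power output for a block pushed at constant speed
# against kinetic friction.
m, g, mu_k = 10.0, 9.80, 0.70   # kg, m/s^2, coefficient of kinetic friction
v, t = 1.0, 3.0                 # m/s, s

F = m * g * mu_k    # applied force balances friction: 68.6 N
d = v * t           # distance covered: 3.0 m
W = F * d           # work done: 205.8 J, about 206 J
P = W / t           # power: 68.6 W, equivalently F*v
                    # (the book's 68.7 W comes from rounding W to 206 J first)
print(F, d, W, P)
```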
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719878435134888, "perplexity": 124.48727283125936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948047.85/warc/CC-MAIN-20180426012045-20180426032045-00158.warc.gz"}
https://en.wikipedia.org/wiki/Mean_time_between_failures
# Mean time between failures Mean time between failures (MTBF) is the predicted elapsed time between inherent failures of a system during operation.[1] MTBF can be calculated as the arithmetic mean (average) time between failures of a system. The term is used in both plant and equipment maintenance contexts. The definition of MTBF depends on the definition of what is considered a system failure. For complex, repairable systems, failures are considered to be those out of design conditions which place the system out of service and into a state for repair. Failures which occur that can be left or maintained in an unrepaired condition, and do not place the system out of service, are not considered failures under this definition.[2] In addition, units that are taken down for routine scheduled maintenance or inventory control are not considered within the definition of failure.[3] ## Overview Mean time between failures (MTBF) describes the expected time between two failures for a repairable system, while mean time to failure (MTTF) denotes the expected time to failure for a non-repairable system. For example, three identical systems starting to function properly at time 0 are working until all of them fail. The first system failed at 100 hours, the second failed at 120 hours and the third failed at 130 hours. The MTBF of the system is the average of the three failure times, which is 116.667 hours. If the systems are non-repairable, then their MTTF would be 116.667 hours. In general, MTBF is the "up-time" between two failure states of a repairable system during operation, as outlined in the following: For each observation, the "down time" is the instantaneous time it went down, which is after (i.e. greater than) the moment it went up, the "up time". The difference ("down time" minus "up time") is the amount of time it was operating between these two events. Once the MTBF of a system is known, the probability that any one particular system will be operational at time equal to the MTBF can be calculated. This calculation requires that the system is working within its "useful life period", which is characterized by a relatively constant failure rate (the middle part of the "bathtub curve") when only random failures are occurring. MTBF value prediction is an important element in the development of products. It gives an estimation of the lifetime of a component, which reflects the failure rates in the "end-of-life wearout" part. Reliability engineers and design engineers often use reliability software to calculate a product's MTBF according to various methods and standards (MIL-HDBK-217F, Telcordia SR332, Siemens Norm, FIDES, UTE 80-810 (RDF2000), etc.). The Mil-HDBK-217 reliability calculator manual in combination with RelCalc software (or other comparable tool) enables MTBF reliability rates to be predicted based on design. A concept which is closely related to MTBF, and is important in the computations involving MTBF, is the mean down time (MDT). MDT can be defined as the mean time during which the system is down after a failure. Usually, MDT is considered different from MTTR (Mean Time To Repair); in particular, MDT usually includes organizational and logistical factors (such as business days or waiting for components to arrive) while MTTR is usually understood as more narrow and more technical.
## Formal definition of MTBF and MDT By referring to the figure above, the MTBF of a component is the sum of the lengths of the operational periods divided by the number of observed failures: ${\displaystyle {\text{MTBF}}={\frac {\sum {({\text{start of downtime}}-{\text{start of uptime}})}}{\text{number of failures}}}.}$ In a similar manner, MDT can be defined as ${\displaystyle {\text{MDT}}={\frac {\sum {({\text{start of uptime}}-{\text{start of downtime}})}}{\text{number of failures}}}.}$ The MTBF can be alternatively defined in terms of the expected value of the density function ƒ(t) of time until failure, also often referred to as the reliability function: ${\displaystyle {\text{MTBF}}=\int _{0}^{\infty }tf(t)\,dt\;.}$ ## MTBF and MDT for networks of components Two components ${\displaystyle c_{1},c_{2}}$ (for instance hard drives, servers, etc.) may be arranged in a network, in series or in parallel. The terminology is here used by close analogy to electrical circuits, but has a slightly different meaning. We say that the two components are in series if the failure of either causes the failure of the network, and that they are in parallel if only the failure of both causes the network to fail. The MTBF of the resulting two-component network with repairable components can be computed according to the following formulae, assuming that the MTBF of both individual components is known:[4][5] ${\displaystyle {\text{mtbf}}(c_{1};c_{2})={\frac {1}{{\frac {1}{{\text{mtbf}}(c_{1})}}+{\frac {1}{{\text{mtbf}}(c_{2})}}}}={\frac {{\text{mtbf}}(c_{1})\times {\text{mtbf}}(c_{2})}{{\text{mtbf}}(c_{1})+{\text{mtbf}}(c_{2})}}\;,}$ where ${\displaystyle c_{1};c_{2}}$ is the network in which the components are arranged in series. For the network containing parallel repairable components, to find out the MTBF of the whole system, in addition to component MTBFs, it is also necessary to know their respective MDTs. Then, assuming that MDTs are negligible compared to MTBFs (which usually holds in practice), the MTBF for the parallel system consisting of two parallel repairable components can be written as follows:[4][5] {\displaystyle {\begin{aligned}{\text{mtbf}}(c_{1}\parallel c_{2})&={\frac {1}{{\frac {1}{{\text{mtbf}}(c_{1})}}\times {\text{PF}}(c_{2},{\text{mdt}}(c_{1}))+{\frac {1}{{\text{mtbf}}(c_{2})}}\times {\text{PF}}(c_{1},{\text{mdt}}(c_{2}))}}\\[1em]&={\frac {1}{{\frac {1}{{\text{mtbf}}(c_{1})}}\times {\frac {{\text{mdt}}(c_{1})}{{\text{mtbf}}(c_{2})}}+{\frac {1}{{\text{mtbf}}(c_{2})}}\times {\frac {{\text{mdt}}(c_{2})}{{\text{mtbf}}(c_{1})}}}}\\[1em]&={\frac {{\text{mtbf}}(c_{1})\times {\text{mtbf}}(c_{2})}{{\text{mdt}}(c_{1})+{\text{mdt}}(c_{2})}}\;,\end{aligned}}} where ${\displaystyle c_{1}\parallel c_{2}}$ is the network in which the components are arranged in parallel, and ${\displaystyle PF(c,t)}$ is the probability of failure of component ${\displaystyle c}$ during "vulnerability window" ${\displaystyle t}$. Intuitively, both these formulae can be explained from the point of view of failure probabilities. First of all, let's note that the probability of a system failing within a certain timeframe is the inverse of its MTBF. Then, when considering series of components, failure of any component leads to the failure of the whole system, so (assuming that failure probabilities are small, which is usually the case) probability of the failure of the whole system within a given interval can be approximated as a sum of failure probabilities of the components.
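The series and parallel formulae above translate directly into code; a minimal sketch (Python; function names are illustrative, and the parallel case assumes MDT ≪ MTBF as stated above):

```python
# MTBF of two repairable components in series and in parallel,
# following the formulae above.

def mtbf_series(m1, m2):
    # In series, failure rates add: 1/m = 1/m1 + 1/m2.
    return (m1 * m2) / (m1 + m2)

def mtbf_parallel(m1, m2, d1, d2):
    # In parallel, the system fails only if one component fails during
    # the other's repair; the MDTs set the "vulnerability window".
    return (m1 * m2) / (d1 + d2)

def mtbf_series_n(mtbfs):
    # n-component series generalisation: reciprocal of the summed rates.
    return 1.0 / sum(1.0 / m for m in mtbfs)

# Example: two servers with MTBFs of 10,000 h and 20,000 h,
# each repaired within 24 h on average.
print(mtbf_series(10_000, 20_000))            # ~6667 h
print(mtbf_parallel(10_000, 20_000, 24, 24))  # ~4.17 million h
print(mtbf_series_n([10_000, 20_000, 40_000]))
```

The parallel result makes the redundancy payoff explicit: shrinking either MDT directly raises the system MTBF.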
With parallel components the situation is a bit more complicated: the whole system will fail if and only if after one of the components fails, the other component fails while the first component is being repaired; this is where MDT comes into play: the faster the first component is repaired, the less is the "vulnerability window" for the other component to fail. Using similar logic, MDT for a system of two serial components can be calculated as:[4] ${\displaystyle {\text{mdt}}(c_{1};c_{2})={\frac {{\text{mtbf}}(c_{1})\times {\text{mdt}}(c_{2})+{\text{mtbf}}(c_{2})\times {\text{mdt}}(c_{1})}{{\text{mtbf}}(c_{1})+{\text{mtbf}}(c_{2})}}\;,}$ and for a system of two parallel components MDT can be calculated as:[4] ${\displaystyle {\text{mdt}}(c_{1}\parallel c_{2})={\frac {{\text{mdt}}(c_{1})\times {\text{mdt}}(c_{2})}{{\text{mdt}}(c_{1})+{\text{mdt}}(c_{2})}}\;.}$ Through successive application of these four formulae, the MTBF and MDT of any network of repairable components can be computed, provided that the MTBF and MDT is known for each component. In the special but all-important case of several serial components, the MTBF calculation can be easily generalised into ${\displaystyle {\text{mtbf}}(c_{1};\dots ;c_{n})=\left(\sum _{k=1}^{n}{\frac {1}{{\text{mtbf}}(c_{k})}}\right)^{-1}\;,}$ which can be shown by induction,[6] and likewise ${\displaystyle {\text{mdt}}(c_{1}\parallel \dots \parallel c_{n})=\left(\sum _{k=1}^{n}{\frac {1}{{\text{mdt}}(c_{k})}}\right)^{-1}\;,}$ since the formula for the mdt of two components in parallel is identical to that of the mtbf for two components in series. ## Variations of MTBF There are many variations of MTBF, such as mean time between system aborts (MTBSA), mean time between critical failures (MTBCF) or mean time between unscheduled removal (MTBUR). Such nomenclature is used when it is desirable to differentiate among types of failures, such as critical and non-critical failures. For example, in an automobile, the failure of the FM radio does not prevent the primary operation of the vehicle. Mean time to failure (MTTF) is sometimes used instead of MTBF in cases where a system is replaced after a failure, since MTBF denotes time between failures in a system which is repaired. MTTFd is an extension of MTTF, where MTTFd is only concerned about failures which would result in a dangerous condition. ### MTTF and MTTFd calculation {\displaystyle {\begin{aligned}{\text{MTTF}}&\approx {\frac {B_{10}}{0.1n_{\text{op}}}},\\[8pt]{\text{MTTFd}}&\approx {\frac {B_{10d}}{0.1n_{\text{op}}}},\end{aligned}}} where B10 is the number of operations that a device will operate prior to 10% of a sample of those devices would fail, and nop is the number of operations/cycles. B10d is the same calculation, but where 10% of the sample would fail to danger.[7] ## Notes 1. ^ Jones, James V., Integrated Logistics Support Handbook, page 4.2 2. ^ Colombo, A.G., and Sáiz de Bustamante, Amalio: Systems reliability assessment – Proceedings of the Ispra Course held at the Escuela Tecnica Superior de Ingenieros Navales, Madrid, Spain, September 19–23, 1988 in collaboration with Universidad Politecnica de Madrid, 1988 3. ^ "Defining Failure: What Is MTTR, MTTF, and MTBF?". Stephen Foskett, Pack Rat. Retrieved 2016-01-18. 4. ^ a b c d "Reliability Characteristics for Two Subsystems in Series or Parallel or n Subsystems in m_out_of_n Arrangement (by Don L. Lin)". auroraconsultingengineering.com. 5. ^ a b Dr. David J. Smith.
Reliability, Maintainability and Risk (eighth ed.). ISBN 978-0080969022. 6. ^ "MTBF Allocations Analysis1". www.angelfire.com. Retrieved 2016-12-23. 7. ^ "B10d Assessment – Reliability Parameter for Electro-Mechanical Components" (PDF). TUVRheinland. Retrieved 7 July 2015. ## References • Jones, James V., Integrated Logistics Support Handbook, McGraw–Hill Professional, 3rd edition (June 8, 2006), ISBN 0-07-147168-5
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 16, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8869502544403076, "perplexity": 1744.5079311221355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170992.17/warc/CC-MAIN-20170219104610-00060-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.semanticscholar.org/paper/Serrin%E2%80%99s-overdetermined-problem-and-constant-mean-Pino-Pacard/5defedc08507f2a705159930d3d9b6e773245a24
# Serrin’s overdetermined problem and constant mean curvature surfaces @article{Pino2015SerrinsOP, title={Serrin’s overdetermined problem and constant mean curvature surfaces}, author={Manuel del Pino and Frank Pacard and Juncheng Wei}, journal={Duke Mathematical Journal}, year={2015}, volume={164}, pages={2643-2722} } • Published 16 October 2013 • Mathematics • Duke Mathematical Journal For all $N \geq 9$, we find smooth entire epigraphs in $\R^N$, namely smooth domains of the form $\Omega : = \{x\in \R^N\ / \ x_N > F (x_1,\ldots, x_{N-1})\}$, which are not half-spaces and in which a problem of the form $\Delta u + f(u) = 0$ in $\Omega$ has a positive, bounded solution with 0 Dirichlet boundary data and constant Neumann boundary data on $\partial \Omega$. This answers negatively for large dimensions a question by Berestycki, Caffarelli and Nirenberg \cite{bcn2}. In 1971…

Serrin’s overdetermined problem on the sphere • Mathematics • 2016 We study Serrin’s overdetermined boundary value problem $-\Delta_{S^N} u = 1$ in $\Omega$, $u = 0$, $\partial_\eta u = \mathrm{const}$ on $\partial\Omega$ …

Unbounded Periodic Solutions to Serrin’s Overdetermined Boundary Value Problem • Mathematics • 2016 Abstract: We study the existence of nontrivial unbounded domains $\Omega$ in $\mathbb{R}^N$ such that the overdetermined problem $-\Delta u = 1$ in $\Omega$, $u = 0$, …

Solutions to overdetermined elliptic problems in nontrivial exterior domains • Mathematics Journal of the European Mathematical Society • 2019 In this paper we construct nontrivial exterior domains $\Omega \subset \mathbb{R}^N$, for all $N\geq 2$, such that the problem $-\Delta u + u - u^p = 0$, $u > 0$ in $\Omega$, …

Serrin's over-determined problem on Riemannian manifolds • Mathematics • 2014 Abstract: Let $(\mathcal{M}, g)$ be a compact Riemannian manifold of dimension $N$, $N \geq 2$. In this paper, we prove that there exists a family of domains $(\Omega_\varepsilon)_{\varepsilon \in (0, \varepsilon_0)}$ …

Serrin’s overdetermined problem for fully nonlinear nonelliptic equations • Mathematics Analysis & PDE • 2021 Let $u$ denote a solution to a rotationally invariant Hessian equation $F(D^2 u) = 0$ on a bounded simply connected domain $\Omega \subset \mathbb{R}^2$, with constant Dirichlet and Neumann data on $\partial\Omega$ …

Geometric rigidity of constant heat flow • A. Savo • Mathematics Calculus of Variations and Partial Differential Equations • 2018 Let $\Omega$ be a compact Riemannian manifold with smooth boundary and let $u_t$ be the solution of the heat equation on $\Omega$, having constant unit initial data $u_0 = 1$ and …

On Smooth Solutions to One Phase-Free Boundary Problem in $\mathbb{R}^{n}$ • Mathematics International Mathematics Research Notices • 2019 We construct a smooth axially symmetric solution to the classical one phase free boundary problem in $\mathbb{R}^{n}$, $n \geq 3$. Its free boundary is of “catenoid” type. This is a higher …

On one phase free boundary problem in $\mathbb{R}^{N}$ • Mathematics • 2017 We construct a smooth axially symmetric solution to the classical one phase free boundary problem in $\mathbb{R}^{N}$. Its free boundary is of “catenoid” type. This …

A Rigidity Result for Overdetermined Elliptic Problems in the Plane • Mathematics • 2015 Let $f : [0, +\infty) \to \mathbb{R}$ be a (locally) Lipschitz function and $\Omega \subset \mathbb{R}^2$ a $C^{1,\alpha}$ domain whose boundary is unbounded and connected. If there exists a positive bounded solution to the overdetermined elliptic problem …

## References SHOWING 1-10 OF 35 REFERENCES

New extremal domains for the first eigenvalue of the Laplacian in flat tori We prove the existence of nontrivial compact extremal domains for the first eigenvalue of the Laplacian in manifolds $\mathbb{R}^{n} \times \mathbb{R}/T\,\mathbb{Z}$ with flat metric, for some …

Constant mean curvature surfaces with Delaunay ends • Mathematics • 1998 In this paper we shall present a construction of complete surfaces $M$ in $\mathbb{R}^3$ with finitely many ends and finite topology, and with nonzero constant mean curvature (CMC). This construction is parallel …

Stability of Hypersurfaces of Constant Mean Curvature in Riemannian Manifolds. • Mathematics • 1988 Hypersurfaces $M^n$ with constant mean curvature in a Riemannian manifold $\overline{M}^{n+1}$ display many similarities with minimal hypersurfaces of $\overline{M}^{n+1}$. They are both …

On De Giorgi's conjecture in dimension N>9 • Mathematics • 2011 A celebrated conjecture due to De Giorgi states that any bounded solution of the equation $\Delta u + (1 - u^2)u = 0$ in $\mathbb{R}^N$ with $\partial_{x_N} u > 0$ must be such that its level sets $\{u = \lambda\}$ are all hyperplanes, at least …

Entire solutions of semilinear elliptic equations in R^3 and a conjecture of De Giorgi This paper is concerned with the study of bounded solutions of semilinear elliptic equations $\Delta u = F'(u)$ in the whole space $\mathbb{R}^n$, under the assumption that $u$ is monotone in one direction, say $\partial_n u > 0$ in $\mathbb{R}^n$. The goal …

Regularity of flat level sets in phase transitions We consider local minimizers of the Ginzburg-Landau energy functional $\int \frac{1}{2}|\nabla u|^2 + \frac{1}{4}(1 - u^2)^2 \, dx$ and prove that, if the $0$ level set is included in a flat cylinder then, in the interior, it is …

1D symmetry for solutions of semilinear and quasilinear elliptic equations • Mathematics • 2011 Several new $1$D results for solutions of possibly singular or degenerate elliptic equations, inspired by a conjecture of De Giorgi, are provided. In particular, $1$D symmetry is proven under the …

Splitting Theorems, Symmetry Results and Overdetermined Problems for Riemannian Manifolds • Mathematics • 2012 Our work proposes a unified approach to three different topics in a general Riemannian setting: splitting theorems, symmetry results and overdetermined elliptic problems. By the existence of a stable …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9660965800285339, "perplexity": 1460.3111886489623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515501.4/warc/CC-MAIN-20220517031843-20220517061843-00426.warc.gz"}
http://math.stackexchange.com/questions/30906/is-there-a-term-for-the-contradiction-of-the-converse-of-a-logical-relationship/30923
# Is there a term for the contradiction of the converse of a logical relationship? new here (apologies for the formatting). I want to express the relationship between $$A \longrightarrow B$$ and $$B \longrightarrow \neg A.$$ Clearly this is neither converse, inverse, nor contrapositive; does it have a name at all? - If the first one implies the second one, you get a contradiction ($A \rightarrow \neg A$). Same the other way around since the second statement is equivalent to $A \rightarrow \neg B$ which means that $A$ implies both $B$ and $\neg B$. In all, I don't know if there is a specific name for this. – David Kohler Apr 4 '11 at 15:39 I'm not sure you have the notation right in your title - the converse of the inverse (or inverse of the converse) is the contrapositive $\neg B \longrightarrow \neg A$. – Michael Chen Apr 4 '11 at 15:40 @Cheezmeister: What is the "contradiction" of a proposition? If you mean the negation, then note that the negation of the converse of $A\rightarrow B$ is $\neg(B\rightarrow A) = B\land \neg A$, which is not $B\rightarrow \neg A$. – Arturo Magidin Apr 4 '11 at 15:44 B->-A gives the same truth values as -(A and B) – yoyo Apr 4 '11 at 15:44 @David These are independent statements that I'm simply comparing. @Michael Right you are, sir. I've fixed the title. – Cheezmeister Apr 4 '11 at 15:45 Then there's endless ways you can string those steps together, among which are "the converse of the contradiction of the inverse of $A \rightarrow B$", or the shorter "the contradiction of the converse of $A \rightarrow B$", or the equally short "the contrapositive of the contradiction of $A \rightarrow B$".
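A quick enumeration makes Arturo's comment concrete: $B \rightarrow \neg A$ and $\neg(B \rightarrow A) = B \land \neg A$ really are different propositions. A small sketch (Python, not from the thread):

```python
from itertools import product

# Compare B -> not A with not(B -> A), i.e. B and not A, on all valuations.
def implies(p, q):
    return (not p) or q

for A, B in product([False, True], repeat=2):
    lhs = implies(B, not A)    # B -> ¬A
    rhs = not implies(B, A)    # ¬(B -> A), equivalent to B ∧ ¬A
    print(A, B, lhs, rhs)
# They differ whenever B is false: B -> ¬A is then vacuously true,
# while ¬(B -> A) is false.
```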
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9494038224220276, "perplexity": 361.81333366999075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165484.60/warc/CC-MAIN-20160205193925-00027-ip-10-236-182-209.ec2.internal.warc.gz"}
https://cs.stackexchange.com/questions/64448/equivalent-definitions-for-subgraph
# Equivalent definitions for subgraph I know that a graph G' is a subgraph of G if V(G')⊆V(G) and E(G')⊆E(G). Today my professor wrote that G' is a subgraph of G if G' is a graph and V(G')⊆V(G), and then he told us that this definition is equivalent to the one we've seen before (= the first line in this post). However, I don't think that these definitions are equivalent. In fact, if this is G:

```
  2
 / \
1   3
```

and this is G':

```
1 - 3
```

we have that G' is a subgraph of G according to the second definition (wrong), while it is not according to the first one (right). Am I missing something or was the professor wrong?
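The counterexample can be checked mechanically; a minimal sketch (Python, names illustrative) implementing the first definition and testing the two graphs above:

```python
# First definition: G' is a subgraph of G iff V(G') ⊆ V(G) and E(G') ⊆ E(G).
def is_subgraph(Vp, Ep, V, E):
    return Vp <= V and Ep <= E

V  = {1, 2, 3}
E  = {frozenset({1, 2}), frozenset({2, 3})}   # the path 1-2-3

Vp = {1, 3}
Ep = {frozenset({1, 3})}                      # the single edge 1-3

print(is_subgraph(Vp, Ep, V, E))  # False: the edge {1,3} is not in E
print(Vp <= V)                    # True: the professor's definition accepts G'
```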
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8774471879005432, "perplexity": 453.74504786281926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141180636.17/warc/CC-MAIN-20201125012933-20201125042933-00380.warc.gz"}
https://brilliant.org/problems/test-in-algebra-level-2/
# Test In Algebra Level 2 For some natural number 'n', the sum of the first 'n' natural numbers is 240 less than the sum of the first (n+5) natural numbers. Then n itself is the sum of how many natural numbers starting with 1?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060071110725403, "perplexity": 621.0544255686708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889567.48/warc/CC-MAIN-20180120102905-20180120122905-00033.warc.gz"}
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3534
## Features & Development ### Bug: ⊂ mit by Christopher Heckman - Number of replies: 2 I just noticed a bug when checking the answers to a Gateway/Quiz in WeBWorK. Instead of saying [submit], it says [⊂ mit]. I suspect some subroutine (or should I say ⊂routine?) goes into math mode when the answer can be a function. (The attachment shown below shows what I mean.) ### Re: Bug: ⊂ mit by Davide Cervone - The past-answer code uses MathJax to format expressions that are formulas. It does this using the AsciiMath input format (since the student answers are basically in that form, and it doesn't have the TeX form to work from). But AsciiMath interprets a number of words as symbols, with "sub" being one of them. That is why you get the subset symbol, and why the "mit" is in italics (it is treated as mathematics). Is the "[submit]" really part of the student's answer? That seems a bit odd. In any case, I think it would be better to use the TeX version of the answer. Currently that is not being stored, but it could be. Alternatively, the student's answer could be passed to the answer evaluator for parsing and the resulting MathObject could be used to generate the TeX code. The problem is that it might be difficult to get that to occur within the context of the problem (the safe compartment), but I do see that the problem has been evaluated in order to get the associated AnswerHash objects, so it may be possible to still tie into that environment properly. It would take some experimenting. But the results would be better in that the TeX would properly correspond to the context used in the problem, rather than the generic AsciiMath interpretation.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698972702026367, "perplexity": 1007.7526672079382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00198.warc.gz"}
https://www.proofwiki.org/wiki/Abel%27s_Test
# Abel's Test ## Theorem Let $\ds \sum a_n$ be a convergent real series. Let $\sequence {b_n}$ be a decreasing sequence of positive real numbers. Then the series $\ds \sum a_n b_n$ is also convergent. ### Abel's Test for Uniform Convergence Let $\sequence {\map {a_n} z}$ and $\sequence {\map {b_n} z}$ be sequences of complex functions on a compact set $K$. Let $\sequence {\map {a_n} z}$ be such that: $\sequence {\map {a_n} z}$ is bounded in $K$ $\ds \sum \size {\map {a_n} z - \map {a_{n + 1} } z}$ is convergent with a sum which is bounded in $K$ $\ds \sum \map {b_n} z$ is uniformly convergent in $K$. Then $\ds \sum \map {a_n} z \map {b_n} z$ is uniformly convergent on $K$. ## Source of Name This entry was named for Niels Henrik Abel.
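A small worked example (not part of the ProofWiki entry) may help: take $a_n = \dfrac {(-1)^n} n$, so that $\ds \sum a_n$ converges by the alternating series test, and $b_n = 1 + \dfrac 1 n$, a decreasing sequence of positive reals. Abel's Test then guarantees convergence of

$\ds \sum a_n b_n = \sum \frac {(-1)^n} n \left({1 + \frac 1 n}\right) = \sum \frac {(-1)^n} n + \sum \frac {(-1)^n} {n^2}$

which can be confirmed directly, as both series on the right converge.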
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9940463304519653, "perplexity": 239.57680761789095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00397.warc.gz"}
https://www.physicsforums.com/threads/conceptualizing-resonance.642059/
# Conceptualizing resonance. 1. Oct 7, 2012 ### holtto In a mass-spring system that is driven by an external force, its displacement-time graph consists of several components. One of them is Asin(bt), where b is the driving frequency and $$A=\frac{a}{w^{2}-b^{2}}$$ where w is the natural frequency. As b approaches w, A approaches infinity. However, I find the differential equation used to derive the above expression quite counter-intuitive. Is there an easier way to visualize resonance of a mass-spring system? 2. Oct 7, 2012 ### holtto Is there something to do with the oscillation of the system stabilizing itself? 3. Oct 8, 2012 ### vanhees71 We are discussing this issue right now in another thread. I've given the full solution of the equation and discussed the corresponding Green's function. This explains everything on this issue (see the posting, I've just written a minute ago :-)):
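For intuition, one can integrate the undamped driven oscillator numerically and watch the response grow as the driving frequency approaches the natural one; a minimal sketch (Python with numpy and a recent scipy, not from the thread):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' + w^2 x = a sin(b t), starting from rest.
w, a = 1.0, 1.0

def rhs(t, y, b):
    x, v = y
    return [v, a * np.sin(b * t) - w**2 * x]

t = np.linspace(0, 200, 4000)
for b in [1.5, 1.1, 1.01]:            # driving frequency approaching w
    sol = solve_ivp(rhs, (0, 200), [0.0, 0.0], t_eval=t, args=(b,),
                    rtol=1e-8, atol=1e-8)
    print(b, np.max(np.abs(sol.y[0])))  # peak displacement grows as b -> w
```

At exact resonance (b = w) the bounded steady oscillation disappears altogether and the envelope grows secularly, roughly like (a/2w)·t, which is exactly the blow-up the formula A = a/(w² − b²) encodes in the limit.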
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9105402827262878, "perplexity": 1331.030623217977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864039.24/warc/CC-MAIN-20180621055646-20180621075646-00235.warc.gz"}
http://hal.in2p3.fr/in2p3-01188999
# Inclusive production of the $X(4140)$ state in $p \overline p$ collisions at D0 Abstract : We present a study of the inclusive production of the $X(4140)$ with the decay to the $J/\psi \phi$ final state in hadronic collisions. Based on $10.4~\rm{fb^{-1}}$ of $p \overline p$ collision data collected by the D0 experiment at the Fermilab Tevatron collider, we report the first evidence for the prompt production of $X(4140)$ and find the fraction of $X(4140)$ events originating from $b$ hadrons to be $f_b=0.39\pm 0.07 {\rm \thinspace (stat)} \pm 0.10 {\rm \thinspace (syst)}$. The ratio of the non-prompt $X(4140)$ production rate to the $B_s^0$ yield in the same channel is $R=0.19 \pm 0.05 {\rm \thinspace (stat)} \pm 0.07 {\rm \thinspace (syst)}$. The values of the mass $M=4152.5 \pm 1.7 (\rm {stat}) ^{+6.2}_{-5.4} {\rm \thinspace (syst)}$~MeV and width $\Gamma=16.3 \pm 5.6 {\rm \thinspace (stat)} \pm 11.4 {\rm \thinspace (syst)}$~MeV are consistent with previous measurements. 8 pages, 2 figures Document type : Journal articles http://hal.in2p3.fr/in2p3-01188999 Contributor: Sylvie Flores Submitted on : Tuesday, September 1, 2015 - 7:19:11 AM Last modification on : Tuesday, May 14, 2019 - 10:42:52 AM ### Citation V.M. Abazov, U. Bassler, G. Bernardi, M. Besançon, D. Brown, et al.. Inclusive production of the $X(4140)$ state in $p \overline p$ collisions at D0. Physical Review Letters, American Physical Society, 2015, 115, pp.232001. ⟨10.1103/PhysRevLett.115.232001⟩. ⟨in2p3-01188999⟩
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.973459005355835, "perplexity": 3140.758423846573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675598.53/warc/CC-MAIN-20191017172920-20191017200420-00495.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-14-partial-derivatives-section-14-7-extreme-values-and-saddle-points-exercises-14-7-page-843/42
## Thomas' Calculus 13th Edition Published by Pearson # Chapter 14: Partial Derivatives - Section 14.7 - Extreme Values and Saddle Points - Exercises 14.7 - Page 843: 42 #### Answer Local Minimum: $f(\dfrac{1}{2}, 2 )=2-\ln \dfrac{1}{2}=2 +\ln 2$ #### Work Step by Step $$f_x(x,y)=y+2-\dfrac{2}{x}=0 \\ f_y(x,y)=x-\dfrac{1}{y}=0$$ The critical point is: $(\dfrac{1}{2}, 2 )$ Apply the second derivative test at the critical point $(\dfrac{1}{2}, 2 )$. $D(\dfrac{1}{2}, 2 )=f_{xx}f_{yy}-f^2_{xy}=(8)(\dfrac{1}{4})-1=2-1=1 \gt 0$ Since $D \gt 0$ and $f_{xx}=8 \gt 0$, we have the Local Minimum: $f(\dfrac{1}{2}, 2 )=2-\ln \dfrac{1}{2}=2 +\ln 2$
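The critical point and the second derivative test can be double-checked symbolically; a minimal sketch with sympy (the function $f(x,y)=xy+2x-\ln(x^2y)$ is inferred from the partial derivatives and the minimum value above, not quoted from the textbook):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = x*y + 2*x - sp.log(x**2 * y)   # inferred from f_x and f_y above

# Solve f_x = f_y = 0 for the critical point.
crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(crit)                        # [{x: 1/2, y: 2}]

fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(f, x, y)
pt = {x: sp.Rational(1, 2), y: 2}
D = (fxx*fyy - fxy**2).subs(pt)
print(D, fxx.subs(pt))             # 1 and 8: D > 0, f_xx > 0 => local minimum
print(sp.simplify(f.subs(pt)))     # 2 + log(2)
```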
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8457995057106018, "perplexity": 1293.327478395713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00610.warc.gz"}
http://math.stackexchange.com/questions/82441/rough-estimate-for-the-fourier-coefficients-of-a-discrete-bump-function
# Rough estimate for the Fourier coefficients of a discrete bump function Setup: Let $p$ be a large prime number and $F = \{0, \dots, p-1\}$ be the field of order $p$. Let $I$ denote the discrete interval $I = \{1, \dots, M\}$ for some $M < p$. Regard both $I$ and $F$ as subsets of the real line, and let $f : [0,p) \to R$ be a smooth indicator function of $I$ with support contained in $\tilde I = \{0, \dots, M+1\}$. (By smooth indicator function we mean the following: Let $f_1(x) = \exp( ( |x|^2 - 1 )^{-1} )$ for $|x| < 1$ and zero otherwise. Observe $\text{supp} f_1 \subset [-1,1]$. Define $f_2(x) = f_1(2x/(M_1+1)-1)$, so that $\text{supp} f_2 \subset [0,M_1+1]$. Set $c^{-1} = \int_0^{M_1+1} f_2(x) \ dx$ and put $f_3 = c f_2 * \mathbb I_{[0,M_1+1]},$ where $\mathbb I_{[0,M_1+1]}$ is the indicator function of $[0,M_1+1]$. This $f_3$ is manifestly smooth on $\mathbb F_p \simeq \{0,\dots,p-1\}$ and satisfies $f_3|_{[1,M]} \equiv 1$, $\text{supp} f_3 \subset [0,M_1+1]$.) Extend $f$ to be periodic on the line with period $p$ and (abusing notation) call this function $f$ as well. Now sample $f$ at the integers to obtain a discrete function (abusing notation still) called $f$ with period $p$. This last $f$ has a finite Fourier series $$f(x) = \sum_{\xi=0}^{p-1} \hat f(\xi) \exp(2 \pi i \xi x / p)$$ with Fourier coefficients $$\hat f(\xi) = \frac1p \sum_{z=0}^{p-1} f(z) \exp(- 2 \pi i \xi z / p).$$ and since f is smooth its Fourier coefficients decay quickly, $\sum_{\xi=0}^{p-1} \hat f(\xi) = O(1).$ Problem: One should be able to show (1) The coefficients $\hat f(\xi)$ can be dropped for $\xi > p/M$, and (2) For $\xi = 1, \dots, p/M$ the bound $|\hat f(\xi)| \leq M/p$ holds. I think this can be accomplished with an appropriate form of the Poisson summation formula.
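The claimed behaviour can be checked numerically before proving it; a minimal sketch (Python/numpy, not from the question) builds a smooth bump on $\{0,\dots,p-1\}$ and inspects its DFT coefficients. For simplicity it uses a plain $C^\infty$ cutoff supported on $(0, M+1)$ rather than the exact convolution $f_3$ above:

```python
import numpy as np

p, M = 10007, 100                      # a prime and an interval length

x = np.arange(p, dtype=float)
# Smooth bump supported in (0, M+1); the tiny clip margin avoids a
# division by zero at the endpoints before masking.
t = np.clip(2 * x / (M + 1) - 1, -1 + 1e-12, 1 - 1e-12)
f = np.where((x > 0) & (x < M + 1), np.exp(1 / (t**2 - 1)), 0.0)

# np.fft.fft uses exp(-2*pi*i*xi*z/p), matching the definition of hat f.
fhat = np.fft.fft(f) / p
mags = np.abs(fhat)

print(mags[:int(p / M)].max())         # low frequencies: size O(M/p)
print(mags[int(p / M):p // 2].max())   # higher frequencies: rapidly negligible
print(M / p)                           # the claimed scale for comparison
```

The two printed maxima illustrate claims (1) and (2): the coefficients beyond $\xi \approx p/M$ are tiny, while the low-frequency ones sit at the scale $M/p$.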
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980137944221497, "perplexity": 93.50036064470298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835505.87/warc/CC-MAIN-20140820021355-00382-ip-10-180-136-8.ec2.internal.warc.gz"}
https://repository.kaust.edu.sa/handle/10754/663909?show=full
dc.contributor.author Khaled, Ahmed dc.contributor.author Sebbouh, Othmane dc.contributor.author Loizou, Nicolas dc.contributor.author Gower, Robert M. dc.contributor.author Richtarik, Peter dc.date.accessioned 2020-06-29T07:42:35Z dc.date.available 2020-06-29T07:42:35Z dc.date.issued 2020-06-20 dc.identifier.uri http://hdl.handle.net/10754/663909 dc.description.abstract We present a unified theorem for the convergence analysis of stochastic gradient algorithms for minimizing a smooth and convex loss plus a convex regularizer. We do this by extending the unified analysis of Gorbunov, Hanzely & Richtárik (2020) and dropping the requirement that the loss function be strongly convex. Instead, we only rely on convexity of the loss function. Our unified analysis applies to a host of existing algorithms such as proximal SGD, variance reduced methods, quantization and some coordinate descent type methods. For the variance reduced methods, we recover the best known convergence rates as special cases. For proximal SGD, the quantization and coordinate type methods, we uncover new state-of-the-art convergence rates. Our analysis also includes any form of sampling and minibatching. As such, we are able to determine the minibatch size that optimizes the total complexity of variance reduced methods. We showcase this by obtaining a simple formula for the optimal minibatch size of two variance reduced methods (L-SVRG and SAGA). This optimal minibatch size not only improves the theoretical total complexity of the methods but also improves their convergence in practice, as we show in several experiments. dc.publisher arXiv dc.relation.url https://arxiv.org/pdf/2006.11573 dc.rights Archived with thanks to arXiv dc.title Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization dc.type Preprint dc.contributor.department Computer Science Program dc.contributor.department Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division dc.eprint.version Pre-print dc.contributor.institution Cairo University, Giza, Egypt. dc.contributor.institution Mila, Université de Montréal, Montréal, Canada. dc.contributor.institution Facebook AI Research, New York, USA. dc.identifier.arxivid 2006.11573 kaust.person Khaled, Ahmed kaust.person Sebbouh, Othmane kaust.person Richtarik, Peter refterms.dateFOA 2020-06-29T07:43:03Z  Files in this item Name: Preprintfile1.pdf Size: 719.0Kb Format: PDF Description: Pre-print
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9197256565093994, "perplexity": 3967.918638175943}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945248.28/warc/CC-MAIN-20230324051147-20230324081147-00439.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-ph/0212396/
# Prospective Analysis of Spin- and CP-sensitive Variables in $H \to ZZ \to l_1^+l_1^-l_2^+l_2^-$ with Atlas C. P. Buszello, I. Fleck, P. Marquard, J. J. van der Bij Albert-Ludwigs-Universität Fakultät für Mathematik und Physik, Physikalisches Institut, Hermann-Herder-Straße 3, D-79104 Freiburg Germany May 13, 2003 ###### Abstract A possibility to prove spin and CP-eigenvalue of a Standard Model (SM) Higgs boson is presented. We exploit angular correlations in the subsequent decay $H \to ZZ \to 4l$ (muons or electrons) for Higgs masses above 200 GeV. We compare the angular distributions of the leptons originating from the SM Higgs with those resulting from decays of hypothetical particles with differing quantum numbers. We restrict our analysis to the use of the Atlas detector, which is one of two multi-purpose detectors at the upcoming 14 TeV proton-proton collider (LHC) at CERN. By applying a fast simulation of the Atlas detector it can be shown that these correlations will be measured sufficiently well that consistency with the spin-CP hypothesis $0^+$ of the Standard Model can be verified and the $0^-$ and $1^\pm$ hypotheses can be ruled out with an integrated luminosity of 100 fb$^{-1}$. Freiburg-THEP 02/16 ## 1 Introduction Although the standard electroweak gauge theory successfully explains all current electroweak data, the mechanism of spontaneous symmetry breaking has been tested only partially. Since in the Standard Model spontaneous symmetry breaking is due to the Higgs mechanism, the search for a Higgs particle will be one of the main tasks of future colliders. At present, LEP gives a lower limit of 114.4 GeV/c$^2$ [1] for the Higgs boson mass. There is also an indirect upper limit from electroweak precision measurements of 219 GeV/c$^2$ at a 95% confidence level [2] which is valid in the minimal standard model. However, this limit is still preliminary and the quality of the SM fit, when including all EW measurements from both low and high energy experiments, is still an object of discussion amongst experts [3]. Furthermore, a heavier Higgs boson would be consistent with the electroweak precision measurements in models more general than the minimal standard model [4]. In this first analysis we will therefore also consider higher Higgs masses well above this limit, as can be produced at high energy hadron colliders such as the LHC (pp collisions at 14 TeV). A Standard Model Higgs boson lying below the $W$-pair threshold will mainly decay into a $b\bar{b}$ pair. In this case, there is an overwhelming direct QCD background which dominates the signal. Therefore, the Higgs boson is difficult to study in detail in this mass region, even though one can use rare decays as a signal. Rare decays considered in the literature include, for example, $H \to \gamma\gamma$. All of these signals are rather difficult to see, but can eventually be used to establish the existence of the Higgs boson [5]. While these decay modes can be used to discover the Higgs boson, a detailed study of its properties will be difficult. The situation is much better for a heavy Higgs boson ($m_H > 2m_Z$). For such a Higgs boson the main decay products are vector boson pairs, $W^+W^-$ or $ZZ$. For the latter decay mode, a clear signal for the Higgs consists of a peak in the invariant mass spectrum of the produced vector bosons. The double leptonic decay of the $Z$ boson, $Z \to l^+l^-$, leads to a particularly clean signal. In this case, the basic strategy for discovering a Higgs boson in a clean mode is to select events with 4 high $p_T$ leptons that can be combined to form two Z-bosons.
Here, an exposure of 30 fb$^{-1}$ is already sufficient. If one finds such a signal one might be tempted to assume this to be the Standard Model Higgs boson. However, given the fact that the Higgs sector is not fully prescribed, one has to allow for other possibilities. In strongly interacting models, for instance, low-lying (pseudo-)vector resonances are possible [6, 7]. Also, pseudoscalar particles are present in a variety of models [9]. Therefore, the first priority after finding a signal is to establish the nature of the resonance, in particular its spin and CP-eigenvalue. This can be done by studying angular distributions and correlations among the decay leptons. In the following, we will make this study. We will limit ourselves to (pseudo-)vector and (pseudo-)scalar particles. To demonstrate consistency with a spin-0, CP-even hypothesis, we will compare the angular distributions to those produced by different particles, always assuming the production rate of a Standard Model Higgs boson. This is the right assumption to make because, in order to be recognized as a candidate for a Standard-Model-like Higgs, the detected signal must be a resonance with the appropriate width and branching ratios. Since the production mechanism cannot be seen (gluon fusion would already rule out spin-1 particles, due to Yang's theorem [8]), the only way to prove that the spin and CP nature of the new particle is Standard-Model-like is to study the decay angles of the leptons. Theoretical studies of angular distributions have been performed in the literature [9-15]. So far, such studies have been limited to theoretical discussions. However, it was shown in [14] that acceptance and efficiencies of the detector can play a role, since they can generate correlations mimicking physical ones. Therefore, it is necessary to use a detector simulation in order to establish how well one can do in practice. The complete triple differential cross-sections for a Higgs boson decaying into two on-shell Z bosons which subsequently decay into fermion pairs can be calculated at tree level. The angular dependence of this cross section is given in the appendix together with the most important integrated angular distributions. For the definition of the angles see Figure 1. We study essentially two distributions. One is the distribution of the cosine of the polar angle $\theta$ of the decay leptons relative to the $Z$ boson. Because the heavy Higgs decays mainly into longitudinally polarised vector bosons, the cross section should show a maximum around $\cos(\theta) = 0$. The other is the distribution of the angle $\phi$ between the decay planes of the two Z bosons in the rest frame of the Higgs boson. This distribution depends on the details of the Higgs decay mechanism. Within the Standard Model, a behaviour roughly like $\cos(2\phi)$ is expected. This last distribution is flattened in the decay chain $Z \to \ell^+\ell^-$, because of the small vector coupling of the leptons, in contrast to the decay of the Higgs boson into W's or the decay of the Z into quarks. Also, cuts can significantly affect the correlations. Therefore one needs a precise measurement of the momenta of the outgoing leptons. The ATLAS detector should be well suited to measure these distributions, since the muon and electron reconstruction is very precise over a large solid angle. A detector Monte Carlo is however needed in order to determine whether the angular distributions can be measured sufficiently well to determine the quantum numbers of the Higgs particle. The present paper is organized as follows.
In Chapter 2 the generators are described, and in Chapter 3 detector simulation and reconstruction are given. In Chapter 4 we define quantities that can be used to characterize the different distributions. In Chapter 5 we discuss the estimation of the background, and in Chapter 6 we present the results, concluding that the quantum numbers of the Higgs particle can indeed be determined. In the appendix we give formulae for the complete differential and integrated distributions for the decay of the resonance, assuming arbitrary couplings, computed at tree level and in the narrow width approximation.

## 2 The Generators

In order to distinguish between different spins J = 0, 1 and/or CP-eigenvalues one needs to study four different distributions: that resulting from the decay of the Standard Model Higgs boson, and the three distributions that would result from hypothetical particles with spin and CP-eigenvalue combinations (0, -1), (1, +1), (1, -1). The feasibility of using angular correlations in the decay of the Z bosons in order to distinguish between these particles has been evaluated using two different Monte Carlo generators. One was written for the Standard Model Higgs and the irreducible ZZ background [14]. The latter includes contributions from both gluons and quarks to ZZ production ($gg \to ZZ$ and $q\bar{q} \to ZZ$), whereas all Higgs production mechanisms other than gluon fusion are neglected. This generator keeps all polarisations of the Z boson for the quark-initiated as well as for the gluon-initiated processes. This allows for an analysis of the angular distributions of the leptons. The gluonic production of Z-boson pairs is only about 30% of the total background. However, one should not ignore its contribution, since it has different angular distributions from the other backgrounds and its presence can affect measured correlations. The programme contains no K-factors; therefore our conclusions regarding the feasibility of the determination of Higgs quantum numbers are conservative. Indeed, K-factors are expected to be larger for gluon-induced processes (such as the signal) than for quark-induced processes (70% of the background). Since the narrow width approximation is used, the results are only valid for Higgs masses above the ZZ threshold. For the alternative particles, a new generator was written based on an article by C. A. Nelson and J. R. Dell'Aquila [13]. The programmes for the production of background, the Standard Model Higgs and all alternative particles use CTEQ4M structure functions [16] and HDECAY [17] for the branching ratio and width of the Higgs, and all use the narrow width approximation. The background as well as all cross sections for the four simulations are taken from the first generator. Thus, all cases show identical distributions of the invariant mass of the Z pairs and the transverse momentum of Z bosons and leptons, and have the same width and cross sections. The only difference lies in the angular distributions of the leptons. For the alternative particles no special assumption concerning the coupling has to be made; only CP-invariance is assumed. It is worth mentioning that the angular correlations are completely independent of the production mechanism.

## 3 Detector Simulation and Reconstruction

The detector response is simulated using ATLFast [18], a software package for particle-level simulation of the ATLAS detector. It is used for fast event simulation including the most crucial detector aspects.
Starting from a list of particles in the event, it provides a list of reconstructed jets, isolated leptons and photons, and the expected missing transverse energy. It applies momentum and energy smearing to all reconstructed particles. The values of the detector-dependent parameters are chosen to match the expected performance that was evaluated mostly by full simulations using Geant3 [19]. The event selection is modeled exactly after the event selection in the ATLAS Physics TDR [20]. Four leptons (electrons or muons) are required in the pseudorapidity range $|\eta| < 2.5$. Throughout this paper, we use the term signal for distributions where the background has been statistically subtracted. The only background considered is Z pair production. Other possible backgrounds, like top pair production or $Zb\bar{b}$, are negligible for masses of the Higgs boson above 200 GeV/$c^2$. Systematic uncertainties due to the simulation of the background could be studied by comparing distributions from the sidebins of the Higgs signal with the results of the generator. A proper treatment of the background is very important, since the angular distributions of the background itself and correlations introduced by detector effects have a large impact on the shape of the distributions discussed. These effects are detailed below. For high invariant masses, the Z bosons from the background processes are mainly transversely polarised, leading to a polar angle distribution of the form $1 + \cos^2\theta$. This distribution flattens the $\sin^2\theta$ distribution expected for the Higgs decay. Figure 2 shows the polar angle distributions of the signal (left) and the background (right). The dashed line shows the shape of the distribution expected when no cuts are applied and the detector response is not taken into account. It has just been scaled by the overall acceptance of the cuts, so that the shape can be compared. The expected distribution with all cuts and smearing applied is drawn as a solid line. Figures 2 and 3 are produced assuming a Higgs mass of 200 GeV/$c^2$ and Z decaying to muons only. For the decays $ZZ \to eeee$ or $ZZ \to ee\mu\mu$ the graphs look similar. The smearing effects are largely independent of the Higgs mass. The effect of the detector acceptance and isolation cuts on the decay plane angle distribution is shown in Figure 3. Again, the distribution of the signal is shown left and the background right. The dashed histograms are scaled to have the same integral as the solid histograms, and zero is suppressed in order to facilitate the comparison of the shape. The definitions of the line styles are the same as above. For the decay plane angle, the background shows an almost flat distribution before applying selection cuts. But a minor correlation is introduced by detector effects. This has been simulated and taken into account for the analysis of decay plane angle distributions. In conclusion, the isolation cuts lead to a small distortion of the angular distributions as discussed in [14], but these effects are almost negligible for the ATLAS detector. The cut on the lepton transverse momentum enhances the decay plane correlation a little, but the smearing and the isolation requirements reduce this effect. Altogether, there is a small enhancement of the correlation, of almost the same amount for all four particles. A further cut on the transverse momentum of the Z bosons is known to additionally reduce the background, but it also affects the correlation.
Since an optimisation of the signal-to-background ratio is not crucial to this analysis, this cut has not been applied, rendering the analysis less dependent on the details of the production mechanism, like the initial $p_T$ of the Higgs boson.

## 4 Parametrisation of Decay Angle Distributions

The differential cross sections for the different models can be computed directly or can be derived from the formulae given in [13]. The explicit distributions are given in the appendix. From the article [13] we quote the simple distributions of the alternative particles. Table 1 shows the distribution of the polar angle. $\theta_1$ and $\theta_2$ are the polar angles of the leptons originating from the Z bosons $Z_1$ and $Z_2$, respectively. In Table 2, the distribution of the decay plane angle is shown, where the polar angle is integrated over different ranges: F11 gives the distribution for $\cos\theta_1 > 0$ and $\cos\theta_2 > 0$, F22 for $\cos\theta_1 < 0$ and $\cos\theta_2 < 0$, F12 for $\cos\theta_1 > 0$ and $\cos\theta_2 < 0$, and F21 for $\cos\theta_1 < 0$ and $\cos\theta_2 > 0$. The distributions depend on parameters that characterise the decay density matrix; for the decay modes used in this analysis two of these parameters equal $-1/2$, and the remaining ones are fixed by $r$, the ratio of the vector to axial-vector coupling, which for the muons amounts to $r = 1 - 4\sin^2\theta_W$. We used $\sin^2\theta_W = 0.23$. The plane correlation can be parametrised as

$$F(\phi) = 1 + \alpha\cos(\phi) + \beta\cos(2\phi) \qquad (1)$$

In all four cases discussed here, there is no $\sin(\phi)$ or $\sin(2\phi)$ contribution. For the Standard Model Higgs, $\alpha$ and $\beta$ depend on the Higgs mass, while they are constant over the whole mass range in the other cases. The polar angle distribution can be described by

$$G(\theta) = T\,(1 + \cos^2\theta) + L\,\sin^2\theta \qquad (2)$$

reflecting the longitudinal or transverse polarisation of the Z boson. We define the ratio

$$R := \frac{L - T}{L + T} \qquad (3)$$

of longitudinal and transverse polarisation. The dependence of the parameters $\alpha$, $\beta$ and $R$ on the Higgs mass is shown in Figure 4. The pseudoscalar shows the largest deviation from the SM Higgs: it would have $R < 0$ and $\beta < 0$, whereas the scalar always has $R > 0$ and $\beta > 0$. The vector and the axialvector can be excluded through the parameter $R$ for most of the mass range, but for Higgs masses around 200 GeV/$c^2$ the main difference lies in the value of $\beta$, which is zero for the vector and the axialvector and about 0.1 for the scalar. The value of $\alpha$ can only discriminate between the scalar and the axialvector, but the difference is very small.

## 5 Background Estimation

The subtraction of the angular distributions of the background is necessary to obtain and analyse the angular distributions of the signal alone. This bears a risk of introducing systematic errors. Thus, the background distributions as produced by Monte Carlo generators have to be checked against the data. In this chapter we estimate the effects and possible systematic errors introduced by the subtraction. First, the absolute number of background events has to be estimated. This can be done by comparing the sidebands of the signal to a simulated distribution of the background only. This procedure is illustrated in Figure 5. In order to obtain the number of expected events, the number of simulated events in the signal region $N^{\rm MC}_{\rm sig}$ is scaled by the number of events in the sidebands $N_{\rm side}$ divided by the number of simulated events in the sidebands $N^{\rm MC}_{\rm side}$. The error of this estimate follows from the statistical fluctuations of the sideband counts. In the case of a 250 GeV/$c^2$ Higgs boson the estimated number of background events is N = 130, with a systematic error of 4.1, which is well below the statistical error of 11.4. The shape of the background distribution can be checked by using bins below and above the signal region, too. This is demonstrated in Figure 6.
It shows the parameter $R$ derived from a fit, as described in Chapter 4, to the background distribution only (black line) and to the background plus signal (points with error bars). For most of the fitted values a bin width of 20 GeV/$c^2$ was used, except for the last bin, where 100 GeV/$c^2$ was used to compensate for the fact that there are fewer events at higher invariant masses. From the expected errors one finds that the parameter $R$ for the background can be estimated with a precision of about 0.08. This might not seem too good, but the effect of using a slightly wrong background distribution is not so large. To demonstrate this, a fit to the distribution of the polar angle was performed, where a wrong background distribution was subtracted from the signal-plus-background distribution as obtained from the generator. The parameter R of the subtracted distribution was changed to values higher and lower than the value of R of the generated distributions. In Table 3, the difference $\Delta R$ between the subtracted and the generated background distribution and the value of R obtained from the fit to the signal distribution produced by subtracting the wrong background distribution are shown. The shift of the parameter is thus expected to be less than 0.01. Again, this is very small compared to the statistical error on $R$ of 0.053. This error is not considered in the rest of the analysis. Furthermore, the effects will be even smaller when considering K-factors. Any K greater than 1 will give better conditions to check the background distributions. And, since the K-factor of the gluonic Higgs production is higher than the K-factor of the main ZZ production process by quark-antiquark pairs, the signal-to-background ratio will be even higher than predicted here.

## 6 Results

In Chapter 4, the exact results for the signal were given. However, in practice one needs a procedure to separate signal from background, which will lead to uncertainties in the distributions. The expected errors have been calculated by generating a large number of events and scaling the distributions to the expected number of events, since the expected values of the parameters follow a Gaussian distribution. The background was statistically subtracted after applying the same cuts to it as were applied to the signal. The error reflects the statistical error from the number of signal events, the statistical error from the number of background events subtracted, and the error made by the estimation of the number of background events as described in Chapter 5. No error from a possibly different angular distribution of background events has been taken into account, but we have shown that the effect is small. Then the parametrisations for $F(\phi)$ and $G(\theta)$ as described above were fitted to the distributions. Signal and background are summed over muons and electrons. Figure 7 (top) shows the expected values and errors for the parameter R, using an integrated luminosity of 100 fb$^{-1}$. It is clearly visible that for masses above 250 GeV/$c^2$ the measurement of this parameter allows the various hypotheses considered here for the spin and CP-state of the "Higgs boson" to be unambiguously separated. For a Higgs mass of 200 GeV/$c^2$ only the pseudoscalar is excluded. Figure 7 (bottom) shows the expected values and errors for $\alpha$ and $\beta$ for a 200 GeV/$c^2$ Higgs and an integrated luminosity of 100 fb$^{-1}$. The parameter $\alpha$ can be used to distinguish between a spin-1 particle and the SM Higgs, but its use is statistically limited. The same applies to the parameter $\beta$.
Measuring $\beta$, which is zero for spin 1 and nonzero for spin 0 in the SM case, can contribute only very little to the spin measurement, even if the Higgs mass is in the range where $\beta$, in the SM case, is close to its maximum value. Nevertheless, $\beta$ can be useful to rule out a CP-odd spin-0 particle. The values of $\alpha$ get more widely separated when the correlation between the signs of $\cos\theta_1$ and $\cos\theta_2$ for the two Z bosons is exploited. In Figure 8, we plot the parameters separately for $\cos\theta_1\cdot\cos\theta_2 > 0$ (F11 + F22 in Table 2) and $\cos\theta_1\cdot\cos\theta_2 < 0$ (F12 + F21 in Table 2). As can be seen, the difference in $\alpha$ becomes bigger for both classes of events. For higher masses $\alpha$ and $\beta$ of the SM Higgs approach 0; thus only $R$ can be used to measure the spin. But the measurement of R compensates this. Figure 9 shows the significance, i.e. the difference of the expected values divided by the expected error of the SM Higgs. We add up the significances for the two event classes, exploiting the sign correlation, and plot the significance from the polar angle measurement separately. For higher Higgs masses the decay plane angle correlation contributes almost nothing, but the polarisation leads to a good measurement of the parameters spin and CP-eigenvalue. For full luminosity (300 fb$^{-1}$) the significance can simply be multiplied by $\sqrt{3}$. This is especially interesting for a Higgs mass of 200 GeV/$c^2$. The spin-1, CP-even hypothesis can then be ruled out with a significance of 6.4, while for the spin-1, CP-odd case the significance is still only 3.9. In conclusion, for Higgs masses larger than about 230 GeV/$c^2$ a spin-1 hypothesis can be clearly ruled out already with 100 fb$^{-1}$. For $m_H$ around 200 GeV/$c^2$ the distinction is less clear, and one will need the full integrated luminosity of the LHC. A spin-CP hypothesis of $0^-$ can be ruled out with less than 100 fb$^{-1}$ for the whole mass range above and around 200 GeV/$c^2$.

## Acknowledgements

This work has been performed within the ATLAS Collaboration, and we thank collaboration members for helpful discussions. We have made use of the physics analysis framework and tools which are the result of collaboration-wide efforts. This work was supported by the DFG-Forschergruppe "Quantenfeldtheorie, Computeralgebra und Monte-Carlo-Simulation" and by the European Union under contract HPRN-CT-2000-00149.

## Appendix A Formulae for differential angular distributions

The most general coupling of a (pseudo)scalar Higgs boson to two on-shell Z bosons is of the following form:

$$\mathcal{L}_{\rm scalar} = X\,\delta_{\mu\nu} + Y\,k_\mu k_\nu / M_h^2 + iP\,\epsilon_{\mu\nu\alpha\beta}\,p_Z^\alpha q_Z^\beta / M_h^2 \qquad (4)$$

Here the momentum of the first Z boson is $p_Z$, that of the second Z boson is $q_Z$. The momentum of the Higgs boson is $k$, and $\epsilon$ is the totally antisymmetric tensor. Within the Standard Model one has $Y = 0$, $P = 0$. For a pure pseudoscalar particle one has $X = Y = 0$. If both $P$ and one of the other interactions are present, one cannot assign a definite parity to the Higgs boson. The same formula for a (pseudo)vector with momentum $k$ reads:

$$\mathcal{L}_{\rm vector} = X\left(\delta_{\rho\mu}\, p^Z_\nu + \delta_{\rho\nu}\, q^Z_\mu\right) + P\left(i\epsilon_{\mu\nu\rho\alpha}\, p_Z^\alpha - i\epsilon_{\mu\nu\rho\alpha}\, q_Z^\alpha\right) \qquad (5)$$

It is to be noted that the coupling to the vector field actually contains only two parameters and is therefore simpler than that to the scalar. In the following we give the angular dependence of the triple differential cross section for the case of a scalar or vector Higgs decaying into two on-shell Z bosons which subsequently decay into two lepton pairs. The meaning of the angles $\theta_1$, $\theta_2$ and $\phi$ is explained in Figure 1. $|\vec{p}_Z|$ is the absolute value of the momentum of the Z boson, $|\vec{p}_Z| = \sqrt{M_h^2/4 - M_Z^2}$. In the following we use the definitions $x = M_h/M_Z$ and $y = 2|\vec{p}_Z|/M_Z$, so that $y^2 = x^2 - 4$. $c_v$ and $c_a$ are the vector and axial-vector couplings: $c_v = T_3 - 2Q\sin^2\theta_W$, $c_a = T_3$, where $T_3$ is the weak isospin, $Q$ the charge of the fermion and $\theta_W$ the Weinberg angle.
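As a quick plausibility check of these coupling formulae (our own illustration, not part of the paper), one can evaluate $c_v$ and $c_a$ for charged leptons, using the assumed quantum numbers $T_3 = -1/2$ and $Q = -1$:

```python
# Tree-level Z couplings for charged leptons; T3 and Q are the assumed
# lepton quantum numbers, with sin^2(theta_W) = 0.23 as quoted in the text.
T3, Q, sin2thetaW = -0.5, -1.0, 0.23

c_v = T3 - 2 * Q * sin2thetaW   # -0.5 + 0.46 = -0.04
c_a = T3                        # -0.5

print(c_v, c_a)  # -> -0.04 -0.5, close to the paper's -0.0379 and -0.5014
```

The small differences with respect to the values quoted below presumably reflect the precise (effective) value of the mixing angle used in the paper.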
For our case, the values of $c_v$ and $c_a$ are $c_v = -0.0379$ and $c_a = -0.5014$.

### A.1 The general case

#### Scalar Higgs

$$\begin{aligned}
\frac{d\sigma}{d\phi\,d\cos\theta_1\,d\cos\theta_2} \sim\;
&-8XY\,c_a^2c_v^2\,x^2(x^2-4)\cos\phi\sin\theta_1\sin\theta_2\\
&-XY\,(c_v^2+c_a^2)^2\,x^2(x^2-4)\bigl(2\cos\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2+(x^2-2)\sin^2\theta_1\sin^2\theta_2\bigr)\\
&+16XP\,c_a^2c_v^2\,xy(x^2-2)\sin\phi\sin\theta_1\sin\theta_2\\
&+4XP\,(c_v^2+c_a^2)^2\,xy\sin\phi\bigl(2\cos\phi\sin^2\theta_1\sin^2\theta_2+(x^2-2)\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\bigr)\\
&+16X^2\,c_a^2c_v^2\,x^2\bigl(2\cos\theta_1\cos\theta_2+(x^2-2)\cos\phi\sin\theta_1\sin\theta_2\bigr)\\
&+X^2\,(c_v^2+c_a^2)^2\,x^2\Bigl\{4\bigl(1+\cos^2\theta_1\cos^2\theta_2+\cos^2\phi\sin^2\theta_1\sin^2\theta_2+(x^2-2)\cos\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\bigr)\\
&\qquad\qquad+x^2(x^2-4)\sin^2\theta_1\sin^2\theta_2\Bigr\}\\
&-8PY\,c_a^2c_v^2\,xy(x^2-4)\sin\phi\sin\theta_1\sin\theta_2\\
&-2PY\,(c_v^2+c_a^2)^2\,xy(x^2-4)\sin\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\\
&+\tfrac{1}{4}Y^2\,(c_v^2+c_a^2)^2\,x^2(x^2-4)^2\sin^2\theta_1\sin^2\theta_2\\
&+8P^2\,c_a^2c_v^2\,(x^2-4)\cos\theta_1\cos\theta_2\\
&+P^2\,(c_v^2+c_a^2)^2\,(x^2-4)\bigl(1+\cos^2\theta_1\cos^2\theta_2-\cos^2\phi\sin^2\theta_1\sin^2\theta_2\bigr)
\end{aligned}$$

#### Vector Higgs

$$\begin{aligned}
\frac{d\sigma}{d\phi\,d\cos\theta_1\,d\cos\theta_2} \sim\;
&-16XP\,c_a^2c_v^2\,xy\sin\phi\sin\theta_1\sin\theta_2\\
&+4XP\,(c_v^2+c_a^2)^2\,xy\sin\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\\
&+4X^2\,c_a^2c_v^2\,x^2\cos\phi\sin\theta_1\sin\theta_2\\
&+X^2\,(c_v^2+c_a^2)^2\,x^2\bigl(1-\cos^2\theta_1\cos^2\theta_2-\cos\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\bigr)\\
&-4P^2\,c_a^2c_v^2\,(x^2-4)\cos\phi\sin\theta_1\sin\theta_2\\
&+P^2\,(c_v^2+c_a^2)^2\,(x^2-4)\bigl(1-\cos^2\theta_1\cos^2\theta_2+\cos\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\bigr)
\end{aligned}$$

### A.2 The special cases

In this appendix we list the triple differential cross section for pure Higgs spin and CP states. In addition, we also give the differential cross sections where some of the angular variables have been integrated over. F11, F12, F21, F22 refer to the different quadrants as defined in Chapter 4. The spin 0, CP even part only contains the pure SM contribution.

#### Spin 0, CP even

$$\begin{aligned}
\frac{d\sigma}{d\phi\,d\cos\theta_1\,d\cos\theta_2} \sim\;
&+16\,c_a^2c_v^2\bigl(2\cos\theta_1\cos\theta_2+(x^2-2)\cos\phi\sin\theta_1\sin\theta_2\bigr)\\
&+(c_v^2+c_a^2)^2\Bigl\{4\bigl(1+\cos^2\theta_1\cos^2\theta_2+\cos^2\phi\sin^2\theta_1\sin^2\theta_2+(x^2-2)\cos\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\bigr)\\
&\qquad\qquad+x^2(x^2-4)\sin^2\theta_1\sin^2\theta_2\Bigr\}
\end{aligned}$$

$$\frac{d\sigma}{d\cos\theta_1\,d\cos\theta_2} \sim 32\,c_a^2c_v^2\cos\theta_1\cos\theta_2+(c_v^2+c_a^2)^2\Bigl\{4\bigl(1+\cos^2\theta_1\cos^2\theta_2\bigr)+(x^4-4x^2+2)\sin^2\theta_1\sin^2\theta_2\Bigr\}$$

F11 = F22:

$$\frac{d\sigma}{d\phi} \sim c_a^2c_v^2\bigl(8+\pi^2(x^2-2)\cos\phi\bigr)+\tfrac{4}{9}(c_v^2+c_a^2)^2\bigl(x^4-4x^2+10+(x^2-2)\cos\phi+4\cos^2\phi\bigr)$$

F12 = F21:

$$\frac{d\sigma}{d\phi} \sim -c_a^2c_v^2\bigl(8-\pi^2(x^2-2)\cos\phi\bigr)+\tfrac{4}{9}(c_v^2+c_a^2)^2\bigl(x^4-4x^2+10-(x^2-2)\cos\phi+4\cos^2\phi\bigr)$$

#### Spin 0, CP odd

$$\frac{d\sigma}{d\phi\,d\cos\theta_1\,d\cos\theta_2} \sim 8\,c_a^2c_v^2\cos\theta_1\cos\theta_2+(c_v^2+c_a^2)^2\bigl(1+\cos^2\theta_1\cos^2\theta_2-\cos^2\phi\sin^2\theta_1\sin^2\theta_2\bigr)$$

$$\frac{d\sigma}{d\cos\theta_1\,d\cos\theta_2} \sim 16\,c_a^2c_v^2\cos\theta_1\cos\theta_2+(c_v^2+c_a^2)^2\bigl(1+\cos^2\theta_1\bigr)\bigl(1+\cos^2\theta_2\bigr)$$

F11 = F22:

$$\frac{d\sigma}{d\phi} \sim c_a^2c_v^2+\tfrac{1}{9}(c_v^2+c_a^2)^2\bigl(5-2\cos^2\phi\bigr)$$

F12 = F21:

$$\frac{d\sigma}{d\phi} \sim -c_a^2c_v^2+\tfrac{1}{9}(c_v^2+c_a^2)^2\bigl(5-2\cos^2\phi\bigr)$$

#### Spin 1, CP even

$$\frac{d\sigma}{d\phi\,d\cos\theta_1\,d\cos\theta_2} \sim 4\,c_a^2c_v^2\cos\phi\sin\theta_1\sin\theta_2+(c_v^2+c_a^2)^2\bigl(1-\cos^2\theta_1\cos^2\theta_2-\cos\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\bigr)$$

$$\frac{d\sigma}{d\cos\theta_1\,d\cos\theta_2} \sim 1-\cos^2\theta_1\cos^2\theta_2$$

F11 = F22:

$$\frac{d\sigma}{d\phi} \sim \pi^2 c_a^2c_v^2\cos\phi+\tfrac{1}{9}(c_v^2+c_a^2)^2\bigl(32-4\cos\phi\bigr)$$

F12 = F21:

$$\frac{d\sigma}{d\phi} \sim \pi^2 c_a^2c_v^2\cos\phi+\tfrac{1}{9}(c_v^2+c_a^2)^2\bigl(32+4\cos\phi\bigr)$$

#### Spin 1, CP odd

$$\frac{d\sigma}{d\phi\,d\cos\theta_1\,d\cos\theta_2} \sim -4\,c_a^2c_v^2\cos\phi\sin\theta_1\sin\theta_2+(c_v^2+c_a^2)^2\bigl(1-\cos^2\theta_1\cos^2\theta_2+\cos\phi\sin\theta_1\sin\theta_2\cos\theta_1\cos\theta_2\bigr)$$

$$\frac{d\sigma}{d\cos\theta_1\,d\cos\theta_2} \sim 1-\cos^2\theta_1\cos^2\theta_2$$

F11 = F22:

$$\frac{d\sigma}{d\phi} \sim -\pi^2 c_a^2c_v^2\cos\phi+\tfrac{1}{9}(c_v^2+c_a^2)^2\bigl(32+4\cos\phi\bigr)$$

F12 = F21:

$$\frac{d\sigma}{d\phi} \sim -\pi^2 c_a^2c_v^2\cos\phi+\tfrac{1}{9}(c_v^2+c_a^2)^2\bigl(32-4\cos\phi\bigr)$$
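As a cross-check of the reconstructed formulae above (our own verification, not part of the paper), one can integrate the spin-0, CP-odd triple differential distribution over the quadrant $\cos\theta_1 > 0$, $\cos\theta_2 > 0$ and compare with the quoted F11 distribution. A short SymPy sketch:

```python
# Integrate the spin-0, CP-odd triple-differential distribution over one
# quadrant (cos(theta1) > 0, cos(theta2) > 0) and compare with the quoted
# F11 phi-distribution; the two should agree up to an overall constant.
import sympy as sp

phi, c1, c2 = sp.symbols('phi c1 c2', real=True)
ca2, cv2 = sp.symbols('ca2 cv2', positive=True)  # c_a^2 and c_v^2

s1 = sp.sqrt(1 - c1**2)  # sin(theta1), with c1 = cos(theta1)
s2 = sp.sqrt(1 - c2**2)

# Spin 0, CP odd:  8 c_a^2 c_v^2 cos(t1) cos(t2)
#   + (c_v^2+c_a^2)^2 (1 + cos^2(t1) cos^2(t2) - cos^2(phi) sin^2(t1) sin^2(t2))
triple = (8*ca2*cv2*c1*c2
          + (cv2 + ca2)**2 * (1 + c1**2*c2**2 - sp.cos(phi)**2 * s1**2 * s2**2))

F11 = sp.integrate(sp.integrate(triple, (c1, 0, 1)), (c2, 0, 1))
expected = ca2*cv2 + sp.Rational(1, 9)*(cv2 + ca2)**2*(5 - 2*sp.cos(phi)**2)

print(sp.simplify(F11 - 2*expected))  # -> 0: agreement up to a factor of 2
```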
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9742113351821899, "perplexity": 516.8220922026752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00488.warc.gz"}
https://en.wikiversity.org/wiki/Astronomy_college_course/Jupiter/questions
Astronomy college course/Jupiter/questions

AstroJupiter_Study-v1s1

1. The black spot in this image of Jupiter is
___ a) the shadow of a moon
___ b) Two other answers are correct (making this the only true answer).
___ c) a solar eclipse
___ d) an electric storm
___ e) a magnetic storm

2. Which of the following statements is FALSE?
___ a) Jupiter is the largest known planet
___ b) The Great Red Spot is a storm that has raged for over 300 years
___ c) Jupiter has four large moons and many smaller ones
___ d) Jupiter has a system of rings
___ e) Jupiter emits more energy than it receives from the Sun

3. What is the mechanism that heats the interior of Jupiter?
___ a) magnetism
___ b) electricity
___ c) tides
___ d) rain

4. Why is Jupiter an oblate spheroid?
___ a) tides from Jupiter's moons
___ b) tides from other gas planets
___ c) rapid rotation
___ d) revolution around the Sun
___ e) tides from the Sun

5. What statement best describes Wikipedia's explanation of the helium (He) content of Jupiter's upper atmosphere (relative to the hydrogen (H) content)?
___ a) Jupiter's atmosphere has 80% more He because Jupiter's hydrogen fell to the core.
___ b) Jupiter's atmosphere has only 80% as much helium because the He fell to the core.
___ c) Jupiter's atmosphere has only 80% as much helium because the He escaped into space.
___ d) Jupiter's atmosphere has 80% more He because Jupiter's hydrogen escaped into space.
___ e) Jupiter and the Sun have nearly the same ratio of He to H.

6. Where is the Sun-Jupiter barycenter?
___ a) The question remains unresolved
___ b) At the center of Jupiter
___ c) Just above the Sun's surface
___ d) Just above Jupiter's surface
___ e) At the center of the Sun

7. The barycenter of two otherwise isolated celestial bodies is?
___ a) the focal point of two elliptical orbital paths
___ b) a place where two bodies exert equal and opposite gravitational forces
___ c) both of these are true

8. Knowing the barycenter of two stars is useful because it tells us the total mass
___ a) TRUE
___ b) FALSE

9. Knowing the barycenter of two stars is useful because it tells us the ratio of the two masses
___ a) TRUE
___ b) FALSE

10. Although there is some doubt as to who discovered Jupiter's great red spot, it is generally credited to
___ a) Cassini in 1665
___ b) Tycho
___ c) Galileo in 1605
___ d) Newton in 1668
___ e) Messier in 1771

11. The bands in the atmosphere of Jupiter are associated with a pattern of alternating wind velocities that are
___ a) easterly and westerly
___ b) updrafts and downdrafts
___ c) both of these

12. As one descends toward Jupiter's core, the temperature
___ a) increases
___ b) decreases
___ c) stays about the same

Key to AstroJupiter_Study-v1s1

1. The black spot in this image of Jupiter is
- a) the shadow of a moon
+ b) Two other answers are correct (making this the only true answer).
- c) a solar eclipse
- d) an electric storm
- e) a magnetic storm

2. Which of the following statements is FALSE?
+ a) Jupiter is the largest known planet
- b) The Great Red Spot is a storm that has raged for over 300 years
- c) Jupiter has four large moons and many smaller ones
- d) Jupiter has a system of rings
- e) Jupiter emits more energy than it receives from the Sun

3. What is the mechanism that heats the interior of Jupiter?
- a) magnetism
- b) electricity
- c) tides
+ d) rain

4. Why is Jupiter an oblate spheroid?
- a) tides from Jupiter's moons
- b) tides from other gas planets
+ c) rapid rotation
- d) revolution around the Sun
- e) tides from the Sun

5. What statement best describes Wikipedia's explanation of the helium (He) content of Jupiter's upper atmosphere (relative to the hydrogen (H) content)?
- a) Jupiter's atmosphere has 80% more He because Jupiter's hydrogen fell to the core.
+ b) Jupiter's atmosphere has only 80% as much helium because the He fell to the core.
- c) Jupiter's atmosphere has only 80% as much helium because the He escaped into space.
- d) Jupiter's atmosphere has 80% more He because Jupiter's hydrogen escaped into space.
- e) Jupiter and the Sun have nearly the same ratio of He to H.

6. Where is the Sun-Jupiter barycenter?
- a) The question remains unresolved
- b) At the center of Jupiter
+ c) Just above the Sun's surface
- d) Just above Jupiter's surface
- e) At the center of the Sun

7. The barycenter of two otherwise isolated celestial bodies is?
+ a) the focal point of two elliptical orbital paths
- b) a place where two bodies exert equal and opposite gravitational forces
- c) both of these are true

8. Knowing the barycenter of two stars is useful because it tells us the total mass
- a) TRUE
+ b) FALSE

9. Knowing the barycenter of two stars is useful because it tells us the ratio of the two masses
+ a) TRUE
- b) FALSE

10. Although there is some doubt as to who discovered Jupiter's great red spot, it is generally credited to
+ a) Cassini in 1665
- b) Tycho
- c) Galileo in 1605
- d) Newton in 1668
- e) Messier in 1771

11. The bands in the atmosphere of Jupiter are associated with a pattern of alternating wind velocities that are
- a) easterly and westerly
- b) updrafts and downdrafts
+ c) both of these

12. As one descends toward Jupiter's core, the temperature
+ a) increases
- b) decreases
- c) stays about the same
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.84358149766922, "perplexity": 4712.514371018442}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00257.warc.gz"}
http://umj.imath.kiev.ua/article/?lang=en&article=7362
# Distribution of a Random Continued Fraction with Markov Elements

Vynnyshyn Ya. F.

Abstract

We study the structure of the distribution of a random variable such that the elements of its continued fraction form a Markov chain of order $m$. It is proved that an absolutely continuous component is absent from such distributions.

English version (Springer): Ukrainian Mathematical Journal 55 (2003), no. 9, pp. 1522-1531.

Citation Example: Vynnyshyn Ya. F. Distribution of a Random Continued Fraction with Markov Elements // Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1260-1268.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9328790307044983, "perplexity": 839.718998139662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743854.48/warc/CC-MAIN-20181117205946-20181117231946-00501.warc.gz"}
http://math.stackexchange.com/questions/64529/properties-of-fermat-primes
# Properties of Fermat primes

Fermat primes 17 and 257 appear a lot in the prime composition of numbers of the form $a^{2^n}+1$. For example, $11^8+1$ is divisible by 17 and $11^{32}+1$ is divisible by 257. I have verified the following statement for 3, 5, 17 and 257 but can't prove it. I would be grateful if someone could provide a hint.

Let $F = 2^{2^k} + 1$ be a Fermat prime. Then for $n=0, 1, \ldots, 2^k-1$, $x^{2^n} + 1 \equiv 0 \mod F$ has exactly $2^n$ solutions.

For example, when $k=2$ we have:

$(n=0) \hspace{10pt} x^1 + 1 \equiv 0 \mod 17$ has one solution (x=16).

$(n=1) \hspace{10pt} x^2 + 1 \equiv 0 \mod 17$ has two solutions (x=4 and 13).

$(n=2) \hspace{10pt} x^4 + 1 \equiv 0 \mod 17$ has four solutions (x=2, 8, 9 and 15).

$(n=3) \hspace{10pt} x^8 + 1 \equiv 0 \mod 17$ has eight solutions (x=3, 5, 6, 7, 10, 11, 12 and 14).

---

This sort of rephrases Henning's answer. (That one is bottom up, this one is top down, which I like more personally.)

Fermat's little theorem says that $x^{2^{2^k}}-1$ has $2^{2^k}$ solutions mod $p$. Now $x^{2^{2^k}}-1 = \left(x^{2^{2^{k-1}}} - 1 \right)\left(x^{2^{2^{k-1}}} + 1 \right)$, each factor having at most $2^{2^{k-1}}$ solutions because the degree of each factor is $2^{2^{k-1}}$. As their product has $2^{2^k}$ solutions we are forced to conclude that each factor has exactly $2^{2^{k-1}}$ solutions. Now take the factor $x^{2^{2^{k-1}}} - 1$ and repeat the same argument.

+1 I have to agree on top-down being clearer. The bottom-up approach went fine until I noticed that I'd read the equation as $x^{2^n}=1$ instead of $x^{2^n}=-1$, and I had to scramble to recover some sense from my train of thought ... – Henning Makholm Sep 14 '11 at 17:42

---

Obviously $x^1+1\equiv 0 \pmod p$ has exactly one solution. For each $n$ it must hold that $x^{2^{n+1}}+1\equiv 0 \pmod p$ has at most twice as many solutions as $x^{2^n}+1\equiv 0\pmod p$ -- otherwise one of the $2^n$th roots must have more than two square roots in $\mathbb F_p$, which is impossible. Fermat's little theorem ensures that $x^{2^{2^k}}\equiv 1\pmod p$ for all nonzero $x$. But this means that $x^{2^{(2^k-1)}}$ must always be a square root of $1$ modulo $p$, that is, either $1$ or $-1$. An analogous argument to the one above shows that there cannot be more than $2^{(2^k-1)}$ instances of either $1$ or $-1$, so that must be the number of solutions to $x^{2^{(2^k-1)}}+1\equiv 0\pmod p$. But that determines the number of $2^n$th roots of $-1$ uniquely for $1\le n\le 2^k-1$. In general, this reasoning means that $x^{2^n}\equiv a \pmod p$ with $a\not\equiv 0$ always has either exactly $2^n$ solutions or none at all, depending on how many times one can square $a$ modulo $p$ without reaching $1$.

---

[Removed my previous generalization for this] Even more general theorem, with even easier proof: If $p$ is prime and $2d|p-1$, then the number of solutions of $x^d+1=0$ in $\mathbb{F}_p$ is $d$.

Proof: Since $\mathbb{F}_p^\times$ is a cyclic group of order $p-1$, the number of solutions for $x^{2d}-1=0$ is $2d$, and the number of solutions of $x^d-1=0$ must be $d$, so the number of solutions of $x^d+1=0$ must be $d$.

This uses two pieces of knowledge:

Lemma: $\mathbb{F}_p^\times$ is a cyclic group of order $p-1$.

Lemma: If $G$ is a cyclic group of order $n$ and $m|n$, the number of solutions to $x^m=1$ in $G$ is $m$.

Even more generally: If $a\in\mathbb{F}_p^\times$ is of order $n$ and $dn|p-1$, then the number of solutions to $x^d = a$ is $d$. (The previous theorem corresponds to the case $a=-1$ and $n=2$.)
Most generally: If $(C_m,\times)$ is a cyclic group of order $m$, and $a\in C_m$ is of order $n$, and $d\geq 1$, let $d_0=\gcd(m,d)$. Then $x^d=a$ has solutions in $C_m$ if and only if $d_0 n \mid m$, in which case there will be exactly $d_0$ solutions.
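A quick brute-force check of the original claim (added here for illustration; not part of the thread) for the Fermat prime $F = 17$, i.e. $k = 2$:

```python
# Count the solutions of x^(2^n) + 1 = 0 (mod F) for F = 2^(2^k) + 1, k = 2.
k = 2
F = 2**(2**k) + 1  # 17

for n in range(2**k):
    sols = [x for x in range(1, F) if pow(x, 2**n, F) == F - 1]
    print(f"x^(2^{n}) + 1 = 0 mod {F}: {len(sols)} solutions {sols}")
```

This reproduces the solution sets listed in the question: 1, 2, 4 and 8 solutions for n = 0, 1, 2, 3.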
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9649947881698608, "perplexity": 74.17587993584335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166261.11/warc/CC-MAIN-20160205193926-00224-ip-10-236-182-209.ec2.internal.warc.gz"}
http://math.ucr.edu/home/baez/octonions/conway_smith/
On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry
by John H. Conway and Derek A. Smith

review by John C. Baez
Department of Mathematics, University of California, Riverside CA 92521

August 12, 2004

Published in Bull. Amer. Math. Soc. 42 (2005), 229-243.

Conway and Smith's book is a wonderful introduction to the normed division algebras: the real numbers ($\mathbb{R}$), the complex numbers ($\mathbb{C}$), the quaternions ($\mathbb{H}$), and the octonions ($\mathbb{O}$). The first two are well known to every mathematician. In contrast, the quaternions and especially the octonions are sadly neglected, so the authors rightly concentrate on these. They develop these number systems from scratch, explore their connections to geometry, and even study number theory in quaternionic and octonionic versions of the integers.

Conway and Smith warm up by studying two famous subrings of $\mathbb{C}$: the Gaussian integers and the Eisenstein integers. The Gaussian integers are the complex numbers $a + bi$ for which $a$ and $b$ are integers. They form a square lattice. Any Gaussian integer can be uniquely factored into 'prime' Gaussian integers — at least if we count differently ordered factorizations as the same, and ignore the ambiguity introduced by the invertible Gaussian integers, namely $\pm 1$ and $\pm i$. To show this, we can use a straightforward generalization of Euclid's proof for ordinary integers. The key step is to show that given nonzero Gaussian integers $a$ and $b$, we can always write

$$a = qb + r$$

for some Gaussian integers $q$ and $r$, where the 'remainder' has $|r| < |b|$. This is equivalent to showing that we can find a Gaussian integer $q$ with $|a/b - q| < 1$. And this is true, because no point in the complex plane has distance $\geq 1$ to the nearest Gaussian integer.

Similarly, the Eisenstein integers are complex numbers of the form $a + b\omega$, where $a$ and $b$ are integers and $\omega$ is a nontrivial cube root of $1$. These form a lattice with hexagonal symmetry. Again one can prove unique factorization up to reordering and units, using the fact that no point in the complex plane has distance $\geq 1$ from the nearest Eisenstein integer.

To see the importance of this condition, consider the 'Kummer integers': numbers of the form $a + b\sqrt{-5}$, where $a$ and $b$ are integers. If we draw an open ball of radius $1/2$ about each Kummer integer, there is still room for more disjoint open balls of this radius. Thus there exist points in the complex plane with distance $\geq 1$ from the nearest Kummer integer, so Euclid's proof of unique prime factorization fails — and so does unique prime factorization: for example,

$$6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}).$$

In short, there is an interesting relation between number theory and a subject on which Conway is an expert: densely packed lattices [5]. For a lattice in a normed division algebra to be closed under multiplication, all its points must have distance $\geq 1$ from each other: otherwise the smallest element of nonzero norm, say $x$, would have $|x| < 1$ and thus $|x^2| < |x|$ — a contradiction! On the other hand, for the Euclidean algorithm to work, at least in the simple form described here, there must be no point in the plane with distance $\geq 1$ from the nearest lattice point. So for both of these to hold, our lattice must be 'well packed': if we place open balls of radius $1/2$ centered at all the lattice points, they must be disjoint, but they must not leave room for any more disjoint open balls of this radius.

When it comes to subrings of the complex numbers, these ideas are well known.
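Before moving on, here is a small sketch of ours (not from the book) making the division step above concrete in code: rounding the exact quotient $a/b$ to the nearest Gaussian integer guarantees a remainder smaller than the divisor.

```python
# Gaussian-integer division with remainder: round a/b to the nearest
# lattice point q, so that r = a - q*b satisfies |r| < |b|.
def gauss_divmod(a: complex, b: complex):
    exact = a / b
    q = complex(round(exact.real), round(exact.imag))  # nearest Gaussian integer
    r = a - q * b
    # Guaranteed since |exact - q| <= sqrt(2)/2 < 1:
    assert abs(r) < abs(b)
    return q, r

q, r = gauss_divmod(complex(7, 5), complex(2, 1))
print(q, r)  # -> (4+1j) (-0-1j)
```

The analogous rounding bound fails for the Kummer integers, since some points of the plane lie at distance greater than 1 from that lattice — which is exactly the failure of unique factorization described above.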
Back in the 1890s, Minkowski used them to study unique prime factorization (or more generally, ideal class groups) not only for algebraic integers in quadratic number fields, as we have secretly been doing here, but also in other number fields, which require lattices in higher dimensions. He called this subject 'the geometry of numbers' [4,11,16]. Conway and Smith explore a lesser-known aspect of the geometry of numbers by applying it to subrings of the quaternions and octonions. But they cannot resist a little preliminary detour into the geometry of lattices in 2 dimensions — nor should we.

The Gaussian and Eisenstein integers are the most symmetrical lattices in the plane, since they have 4-fold and 6-fold rotational symmetry, respectively. As such, they naturally turn up in the classification of 2-dimensional space groups. A 'space group' is a subgroup of the Euclidean group (the group of transformations of $\mathbb{R}^n$ generated by rotations, reflections and translations) that acts transitively on a lattice. Up to isomorphism, there are 230 space groups in 3 dimensions. These act as symmetries of various kinds of crystals, so they form a useful classification scheme in crystallography -- perhaps the most easily understood application of group theory to physics. In 2 dimensions, there are just 17 isomorphism classes of space groups. These are also called 'wallpaper groups', since they act as symmetries of different wallpaper patterns. Conway and Smith describe all these groups. Two of them act on a lattice with the least amount of symmetry. Seven act on a lattice with rectangular symmetry, or alternatively, on one with rhombic symmetry. Three act on a lattice with square symmetry, and five act on a lattice with hexagonal symmetry.

After this low-dimensional warmup, Conway and Smith's book turns to the quaternions and their applications to geometry. The quaternions were discovered by Sir William Rowan Hamilton in 1843. Fascinated by the applications of complex numbers to 2d geometry, he had been struggling unsuccessfully for many years to invent a bigger algebra that would do something similar for 3d geometry. In modern language, it seems he was looking for a 3-dimensional normed division algebra. Unfortunately, no such thing exists! Finally, on October 16th, while walking with his wife along the Royal Canal to a meeting of the Royal Irish Academy in Dublin, he discovered a 4-dimensional normed division algebra. In his own words, "I then and there felt the galvanic circuit of thought close; and the sparks which fell from it were the fundamental equations between $i$, $j$, $k$; exactly such as I have used them ever since." He was so excited that he carved these equations on the soft stone wall of Brougham Bridge. Hamilton's original inscription has long since been covered by other graffiti, though a plaque remains to commemorate the event.

The quaternions, which in the late 1800s were a mandatory examination topic in Dublin and the only advanced mathematics taught in some American universities, have now sunk into obscurity. The reason is that the geometry and physics which Hamilton and his followers did with quaternions is now mostly done using the dot product and cross product of vectors, invented by Gibbs in the 1880s [7]. Scott Kim's charming sepia-toned cover art for this book nicely captures the 'old-fashioned' flavor of some work on quaternions. But the quaternions are also crucial to some distinctly modern mathematics and physics.
As a vector space, the quaternions are

$$\mathbb{H} = \{a + bi + cj + dk : a, b, c, d \in \mathbb{R}\}.$$

They become an associative algebra with 1 as the multiplicative unit via Hamilton's relations:

$$i^2 = j^2 = k^2 = ijk = -1.$$

Copying what works for complex numbers, we define the 'conjugate' of a quaternion $q = a + bi + cj + dk$ to be $\bar{q} = a - bi - cj - dk$, and define its 'real part' to be $a$. It is then easy to check that

$$q\bar{q} = \bar{q}q = a^2 + b^2 + c^2 + d^2.$$

This suggests defining a norm by $|q|^2 = q\bar{q}$, and it then turns out that the quaternions are a normed division algebra:

$$|qq'| = |q|\,|q'|.$$

In particular, any nonzero quaternion has a two-sided multiplicative inverse given by $q^{-1} = \bar{q}/|q|^2$. It follows that the quaternions of norm 1 form a group under multiplication. This group is usually called $SU(2)$, because people think of its elements as $2 \times 2$ unitary matrices with determinant 1. However, the quaternionic viewpoint is better adapted to seeing how this group describes rotations in 3 and 4 dimensions. The unit quaternions act via conjugation as rotations of the 3d space of 'pure imaginary' quaternions, namely those with real part zero. This gives a homomorphism from $SU(2)$ onto the 3d rotation group $SO(3)$. The kernel of this homomorphism is $\{\pm 1\}$, so we see $SU(2)$ is a double cover of $SO(3)$. The unit quaternions also act via left and right multiplication as rotations of the 4d space of all quaternions. This gives a homomorphism from $SU(2) \times SU(2)$ onto the 4d rotation group $SO(4)$. The kernel of this homomorphism is $\{\pm(1,1)\}$, so we see $SU(2) \times SU(2)$ is a double cover of $SO(4)$. These facts are incredibly important throughout mathematics and physics. With their help, Conway and Smith classify the finite subgroups of the 3d rotation group $SO(3)$, its double cover $SU(2)$, the 3d rotation/reflection group $O(3)$, and the 4d rotation group $SO(4)$. These classifications are all in principle 'well known'. However, they seem hard to find in one place, so Conway and Smith's elegant treatment is very helpful.

Next, Conway and Smith turn to quaternionic number theory. The obvious analogue of the Gaussian integers are the 'Lipschitz integers', namely quaternions of the form $a + bi + cj + dk$ where $a, b, c, d$ are all integers. The Lipschitz integers are a subring of the quaternions, and this has a nice application to ordinary number theory. Applying the formula $|zw| = |z|\,|w|$ to the product of two Gaussian integers gives the famous 'two squares formula'

$$(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2,$$

which shows that the set of integers expressible as the sum of two squares is closed under multiplication. Similarly, taking the norm of the product of two Lipschitz integers gives a 'four squares formula'. This shows the set of integers expressible as the sum of four squares is closed under multiplication. This fact is less impressive than it might at first sound, since one can prove that all integers can be written as the sum of four squares — but the four-square formula reduces the task of proving this to the case of prime numbers.

Alas, the Lipschitz integers are not well packed, so their factorization into Lipschitz primes is far from unique. This was noted already by Lipschitz himself [8]. For example:

$$(1+i)(1-i) = 2 = (1+j)(1-j).$$

However, it is easy to correct this problem. Consider the cubical lattice of all points with integer coordinates in $\mathbb{R}^n$. To make the distance to the nearest lattice point as big as possible, we can go to any point with all half-integer coordinates. The distance to the nearest lattice point is then $\sqrt{n}/2$. This gets arbitrarily large as $n$ increases — so in high dimensions we could pack space with a cubical lattice of steel balls with half-inch radius snugly touching each other, but still leave places light-years away from metal. More to the point, the distance reaches precisely $1$ in dimension $n = 4$ — the case we are interested in.
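As an aside, the multiplicativity of the quaternion norm (the content of the four squares formula) is easy to check numerically; here is a small sketch of ours, not from the book:

```python
# Multiply two quaternions p = (a, b, c, d) ~ a + bi + cj + dk and verify
# that the norm |q|^2 = a^2 + b^2 + c^2 + d^2 is multiplicative.
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g + c*e + d*f - b*h,
            a*h + d*e + b*g - c*f)

def norm2(q):
    return sum(x*x for x in q)

p, q = (1, 2, 3, 4), (5, 6, 7, 8)
assert norm2(qmul(p, q)) == norm2(p) * norm2(q)  # 30 * 174 = 5220
print(qmul(p, q))  # -> (-60, 12, 30, 24)
```

Expanding `norm2(qmul(p, q))` symbolically in the eight coordinates yields exactly Euler's four-square identity.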
So, if we place open balls of radius $1/2$ centered at the Lipschitz integers, there is still room to slip in a translated copy of this lattice of balls, centered at the quaternions with all half-integer coordinates. This gives the 'Hurwitz integers': quaternions $a + bi + cj + dk$ where $a, b, c, d$ are either all integers or all integers plus $1/2$. The Hurwitz integers are a well-packed lattice and also a subring of the quaternions. This lets Conway and Smith prove a version of unique prime factorization for Hurwitz integers. To state this result, they restrict attention to 'primitive' Hurwitz integers, namely those that are not divisible by any natural number. They show that for any primitive Hurwitz integer $Q$ and any factorization of its norm $|Q|^2$ into a product of ordinary prime numbers $p_1 p_2 \cdots p_k$, there is a factorization $Q = P_1 P_2 \cdots P_k$ into a product of Hurwitz primes with $|P_i|^2 = p_i$. Moreover, given any factorization with this property, all the other factorizations with this property are of the form

$$Q = (P_1 u_1)(u_1^{-1} P_2 u_2) \cdots (u_{k-1}^{-1} P_k),$$

where the $u_i$ are Hurwitz integers of norm 1 -- of which there are precisely 24, as we shall soon see. Conway and Smith call this 'uniqueness up to unit-migration'. For example, the two factorizations of $2$ given above are not the same up to unit-migration if we work in the Lipschitz integers, but they become so in the Hurwitz integers, since we have $(1+i)u = 1+j$ where $u$ is a Hurwitz integer of norm 1, namely $u = (1 - i + j - k)/2$.

The Hurwitz integers are so beautiful that we should pause and admire them before following Conway and Smith to higher dimensions. Though it is far from obvious, they give the densest possible lattice packing of balls in 4 dimensions [5]. In this setup, each ball touches 24 others. For example, the ball centered at the origin touches the balls centered at Hurwitz integers of norm 1. There are 8 of these with integer coordinates:

$$\pm 1, \ \pm i, \ \pm j, \ \pm k,$$

and 16 with half-integer coordinates:

$$\frac{\pm 1 \pm i \pm j \pm k}{2}.$$

The 8 with integer coordinates form the vertices of a cross-polytope (the 4d analogue of an octahedron), while the 16 with half-integer coordinates form the vertices of a hypercube (the 4d analogue of a cube). Taken together, they form the vertices of a regular polytope called the '24-cell'. All but one of the regular polytopes in 4 dimensions are analogues of Platonic solids in 3 dimensions; the exception is the 24-cell. The picture above is a bit too cluttered to reveal all the charms of this entity. It is helpful to look at 3-dimensional slices. The thin dashed lines show one of the faces of the 24-cell: though distorted in this picture, it is really a regular octahedron. Since the hypercube is dual to the cross-polytope, the 24-cell is self-dual — so it has 24 of these octahedral faces, and if we draw a dot in the middle of each one, we get the vertices of another 24-cell.

There is much more to say in favor of the 24-cell. It is not only a regular polytope; it is also a group! More precisely, its vertices form a 24-element subgroup of $SU(2)$. This is usually called the 'binary tetrahedral group', since it is a double cover of the rotational symmetry group of the tetrahedron. This group also goes by the name of $SL(2,3)$: $2 \times 2$ matrices with determinant 1 having entries in the integers modulo 3. In this guise it explains some of the mystical importance of the number 24 in bosonic string theory [18]. It would be enjoyable to spend more time delving into these matters, but alas, this review is not the proper place. Instead, we should move on to the octonions. These were first discovered by Hamilton's college friend John Graves. It had been Graves' interest in algebra that got Hamilton thinking about complex numbers and their generalizations in the first place.
The day after discovering the quaternions, Hamilton sent a letter describing them to Graves. The day after Christmas of that same year, Graves wrote to Hamilton describing an 8-dimensional algebra which he called the 'octaves'. He showed that they were a normed division algebra, and used this to express the product of two sums of eight squares as another sum of eight squares: the 'eight squares theorem'. Hamilton offered to publicize Graves' discovery, but kept putting it off, absorbed in work on the quaternions. Eventually Arthur Cayley rediscovered the octonions and published an article announcing their existence in 1845. For this reason they are sometimes called 'Cayley numbers' — but these days, all right-thinking people call them the 'octonions'.

As a vector space, the octonions are

$$\mathbb{O} = \{a + b_1 e_1 + \cdots + b_7 e_7 : a, b_i \in \mathbb{R}\}.$$

We make them into a nonassociative algebra with 1 as the multiplicative unit using a gadget called the 'Fano plane'. There are 7 points and 7 lines in this picture, if we count the circle as an honorary 'line'. Each line contains 3 points, and each of these triples is equipped with a cyclic order as indicated by the arrows. The rule is that if $(e_a, e_b, e_c)$ are cyclically ordered in this way, they satisfy:

$$e_a e_b = e_c, \qquad e_b e_a = -e_c,$$

together with $e_a^2 = -1$. Thus, they give a copy of the quaternions inside the octonions. Copying what worked for the quaternions, we define the 'conjugate' of an octonion $x = a + b_1 e_1 + \cdots + b_7 e_7$ to be $\bar{x} = a - b_1 e_1 - \cdots - b_7 e_7$, and define its 'real part' to be $a$. It is then easy to check that

$$x\bar{x} = \bar{x}x = a^2 + b_1^2 + \cdots + b_7^2,$$

so we can define a norm by $|x|^2 = x\bar{x}$. This makes the octonions into a normed division algebra:

$$|xy| = |x|\,|y|,$$

and any nonzero octonion has a two-sided inverse given by $x^{-1} = \bar{x}/|x|^2$. A brute-force verification of these last facts is unpleasant, in part because the octonions are nonassociative. It is also not very enlightening. Conway and Smith wisely use the 'Cayley-Dickson construction' instead. This is a dimension-doubling procedure that produces $\mathbb{C}$ from $\mathbb{R}$, $\mathbb{H}$ from $\mathbb{C}$, and $\mathbb{O}$ from $\mathbb{H}$, and explains why they are normed division algebras — while also explaining why there are no more.

Conway and Smith then develop the fascinating relationship between the octonions and $\mathrm{Spin}(8)$, the double cover of the rotation group in 8 dimensions. In physics lingo, the octonions can be described not only as the vector representation of $\mathrm{Spin}(8)$, but also as the left-handed spinor representation and the right-handed spinor representation. This fact is called 'triality'. It has many amazing spinoffs, including structures like the exceptional Lie groups and the exceptional Jordan algebra, and the fact that supersymmetric string theory works best in 10-dimensional spacetime — fundamentally because $10 = 8 + 2$. To develop the theory of triality, Conway and Smith make use of Moufang loops and their isotopies — two concepts which never made much sense to me until I saw their lucid treatment. Anyone interested in triality must read this section.

Next, Conway and Smith tackle octonionic number theory. Various lattices in $\mathbb{O}$ present themselves as possible octonionic analogues of the integers, but the best candidate is the least obvious. Starting with the most obvious, the 'Gravesian integers' are octonions of the form $a + b_1 e_1 + \cdots + b_7 e_7$ where all the coefficients are integers. The 'Kleinian integers' are octonions of this form where the coefficients are either all integers or all half-integers. Both of these are lattices closed under multiplication — but alas, neither is well packed. To get a denser lattice, first pick a line in the Fano plane. Then, take all integral linear combinations of: Gravesian integers; octonions of the form $(\pm 1 \pm e_a \pm e_b \pm e_c)/2$, where $e_a$, $e_b$ and $e_c$ lie on this line; and those of the form $(\pm e_d \pm e_e \pm e_f \pm e_g)/2$, where $e_d$, $e_e$, $e_f$ and $e_g$ all lie off this line.
The resulting lattice is called the 'double Hurwitzian integers'. Actually we obtain 7 isomorphic copies of the double Hurwitzian integers this way, one for each line in the Fano plane. The double Hurwitzian integers are closed under multiplication, and it is easy to see that as a lattice, they are the product of two copies of the Hurwitz integers — hence their name. In fact, they can be obtained from the Hurwitz integers using the Cayley-Dickson doubling construction. But unlike the Hurwitz integers, they are not well packed. To see this, note that there are points with distance $1$ from every nearby double Hurwitzian integer. To fix this, we need an even denser lattice closed under multiplication. One natural guess is to take the union of all 7 copies of the double Hurwitzian integers. This gives a well-packed lattice — and in fact, the densest possible lattice packing of balls in 8 dimensions. In this setup, each ball touches 240 others. To see this, just count the lattice vectors of norm 1. First, we have $\pm 1$ and $\pm e_a$ for $a = 1, \ldots, 7$. Second, we have $(\pm 1 \pm e_a \pm e_b \pm e_c)/2$, where $e_a$, $e_b$ and $e_c$ all lie on some line in the Fano plane. And third, we have $(\pm e_d \pm e_e \pm e_f \pm e_g)/2$, where $e_d$, $e_e$, $e_f$ and $e_g$ all lie off some line. There are 16 vectors of the first form, 112 of the second form, and 112 of the third form, for a total of 240.

Curiously, I had just been thinking about this lattice when Conway and Smith's book arrived in my mail. After checking a couple of cases, I had jumped to the conclusion that it is closed under multiplication. I was shocked to read that it is not. But I was comforted to hear that this is a common mistake. Following Coxeter [6], Conway and Smith call it 'Kirmse's mistake', after the first person to make it in public. To rub salt in the wound, they mockingly call this lattice the 'Kirmse integers'. To fix Kirmse's mistake, you need to perform a curious trick. Pick a number $n$ from 1 to 7. Then, take all the Kirmse integers and switch the coefficients of $1$ and $e_n$. As a lattice, the resulting 'Cayley integers' are just a reflected version of the Kirmse integers, so they are still well packed. But bizarrely, they are now closed under multiplication! Since this trick involved an arbitrary choice, there are 7 different copies of the Cayley integers containing the Gravesian integers. And this is as good as it gets: each one is maximal among lattices closed under multiplication. Conway and Smith then study prime factorization in the Cayley integers. This is a fascinating subject, but even trickier than the quaternionic case, since the octonions are nonassociative: one has to worry about different parenthesizations, as well as different orderings. So at this point, I will stop trying to explain their work, and leave it to them.

Instead, I will say a bit about a topic that Conway and Smith skip: how the Hurwitz integers and Cayley integers show up in the theory of Lie groups. Every compact simple Lie group $G$ has a subgroup that is isomorphic to a product of circles and is as big as possible while having this property. Though not unique, this subgroup is unique up to conjugation; it is called a 'maximal torus' and denoted $T$. Since it is abelian, it is much easier to study than $G$ itself. We cannot recover $G$ just from this subgroup $T$. But $G$ has a god-given Riemannian metric on it, which restricts to a metric on $T$. One of the miracles of Lie theory is that knowing the group $T$ together with this metric on it is enough to determine $G$ up to isomorphism, at least when $G$ is connected. We can simplify things even further if we work with the Lie algebra $\mathfrak{t}$ of the torus $T$. Since $T$ is abelian, the bracket on $\mathfrak{t}$ vanishes.
Every compact simple Lie group $G$ has a subgroup that is isomorphic to a product of circles and is as big as possible while having this property. Though not unique, this subgroup is unique up to conjugation; it is called a 'maximal torus' and denoted $T$. Since it is abelian, it is much easier to study than $G$ itself. We cannot recover $G$ just from this subgroup $T$. But $G$ has a god-given Riemannian metric on it, which restricts to a metric on $T$. One of the miracles of Lie theory is that knowing the group $T$ together with this metric on it is enough to determine $G$ up to isomorphism, at least when $G$ is connected. We can simplify things even further if we work with the Lie algebra $\mathfrak{t}$ of the torus $T$. Since $T$ is abelian, the bracket on $\mathfrak{t}$ vanishes.

If that were all, $\mathfrak{t}$ would be a mere vector space. However, $\mathfrak{t}$ also has an inner product coming from the Riemannian metric on $T$. There is also a lattice $\Lambda$ in $\mathfrak{t}$, namely the kernel of the exponential map
$$\exp \colon \mathfrak{t} \to T.$$
So, $\mathfrak{t}$ is really an inner product space with a lattice in it. From the lattice $\Lambda$ one can recover the torus $T = \mathfrak{t}/\Lambda$, and from the inner product on $\mathfrak{t}$ one can recover the metric on this torus. So, we have compressed all the information about $G$ into something very simple: an inner product space containing a lattice.

Not any lattice in an inner product space comes from a compact simple Lie group in this manner. However, we can work out which ones do — and the Hurwitz integers and the Cayley integers do! In this context, the Hurwitz integers are called the '$D_4$ lattice', and the corresponding Lie group is $\mathrm{Spin}(8)$, the double cover of the rotation group in 8 dimensions. I already mentioned that this group is closely tied to the octonions via triality. Now we are seeing its ties to the quaternions! In this context, triality manifests itself as an order-3 symmetry of the lattice of Hurwitz integers.

Similarly, in this context the Cayley integers are called the '$E_8$ lattice'. The corresponding group is also called $E_8$. It is the biggest of the 5 exceptional cases that show up in the classification of compact simple Lie groups. In order of increasing dimension, these are called $G_2$, $F_4$, $E_6$, $E_7$ and $E_8$ — where the subscript gives the dimension of the maximal torus. They are all connected to the octonions, and they all play a role in string theory.

In some ways $E_8$ is the most mysterious, because its smallest nontrivial representation is the 'adjoint representation', in which it acts by conjugation on its own Lie algebra. Since $E_8$ is 248-dimensional, this means that the smallest matrices we can use to describe its elements are of size $248 \times 248$. This is a nuisance, but the real problem is that the best way to understand a group is to see it as the group of symmetries of something. In the adjoint representation, we are only seeing $E_8$ as symmetries of itself! It seems to be pulling itself up into existence by its own bootstraps.

Recently some mathematical physicists have been studying a construction of $E_8$ as the symmetries of a 57-dimensional manifold equipped with extra structure [12,14]. When I heard this, the number 57 instantly intrigued me — and not just because Heinz advertises 57 varieties of ketchup, either. No, the real reason was that the smallest nontrivial representation of $E_8$'s little brother $E_7$ is 56-dimensional. When you study exceptional Lie algebras, you start noticing that strange numbers can serve as clues to hidden relationships... and indeed, there is one here. One can actually find the numbers 56 and 57 lurking in the geometry of the 240 Cayley integers of norm 1.

However, it helps to begin with some general facts about graded Lie algebras. Here I am not referring to $\mathbb{Z}_2$-graded Lie algebras, also known as 'Lie superalgebras'. Instead, I mean Lie algebras $L$ that have been written as a direct sum of subspaces $L_n$, one for each integer $n$, such that $[L_m, L_n] \subseteq L_{m+n}$. If only the middle 3 of these subspaces are nonzero, so that
$$L = L_{-1} \oplus L_0 \oplus L_1,$$
we say that $L$ is '3-graded'. Similarly, if only the middle 5 are nonzero, so that
$$L = L_{-2} \oplus L_{-1} \oplus L_0 \oplus L_1 \oplus L_2,$$
we say $L$ is '5-graded', and so on. In these situations, some nice things happen [12]. First of all, $L_0$ is always a Lie subalgebra of $L$. Second of all, it acts on each other space $L_n$ by means of the bracket.
Third of all, if $L$ is 3-graded, we can give $L_1$ a product by picking any element $k \in L_{-1}$ and defining
$$x \circ y = [[x, k], y].$$
This product automatically satisfies the identities defining a 'Jordan algebra':
$$x \circ y = y \circ x, \qquad x \circ \bigl(y \circ (x \circ x)\bigr) = (x \circ y) \circ (x \circ x),$$
so 3-graded Lie algebras are a great source of Jordan algebras [15].

If $L$ is the Lie algebra of a compact simple Lie group $G$, there is a very nice way to look for gradings of its complexification $\mathbb{C} \otimes L$. This involves some more Lie theory — standard stuff that I will only briefly sketch here [1,10]. Recall that we can pick a maximal torus $T$ for $G$. The Lie algebra $\mathfrak{t}$ of this maximal torus is contained in $L$, and similarly its complexification $\mathbb{C} \otimes \mathfrak{t}$ is contained in $\mathbb{C} \otimes L$. It turns out that $\mathbb{C} \otimes L$ is the direct sum of $\mathbb{C} \otimes \mathfrak{t}$ and a bunch of 1-dimensional complex vector spaces $L_r$, one for each 'root' $r$. Roots are certain special vectors in the 'dual lattice' $\Lambda^*$, meaning the lattice of vectors $v$ such that $\langle v, w \rangle$ is an integer for all $w$ in the original lattice $\Lambda$. It is handy to define $L_0$ to be $\mathbb{C} \otimes \mathfrak{t}$, so that $[L_r, L_s] \subseteq L_{r+s}$ whenever $r$ and $s$ are either roots or zero. So, to put a grading on $\mathbb{C} \otimes L$, we just need to slice the space containing the roots with evenly spaced parallel hyperplanes in such a way that each root, as well as the origin, lies on one of these hyperplanes.

Now let us turn to the case of $E_8$. Let us call the complexification of its Lie algebra — what we have been calling $\mathbb{C} \otimes L$ above — simply $\mathfrak{e}_8$. In this case $\mathfrak{t}$ is the octonions and $\Lambda$ is the Cayley integers. However, it will be simpler to work in a coordinate system where $\Lambda$ is the Kirmse integers, since they have the same geometry as a lattice, and they are easier to describe. This could be called 'Kirmse's revenge'. If we use the inner product on the octonions to identify $\mathfrak{t}$ with its dual, it turns out that $\Lambda^* = \Lambda$: the lattice of Kirmse integers is self-dual. Moreover, the roots are just the Kirmse integers of norm 1. Since there are 240 of these, the dimension of $\mathfrak{e}_8$ is $240 + 8 = 248$.

To put a grading on $\mathfrak{e}_8$, you should imagine these 240 roots as the vertices of a gleaming 8-dimensional diamond. Imagine yourself as a gem cutter, turning around this diamond, looking for nice ways to slice it. You need to slice it with evenly spaced parallel hyperplanes that go through every vertex, as well as the center of the diamond. The easiest way to do this is to let each slice go through all the roots whose real part takes a given value. This value can be $1$, $\tfrac{1}{2}$, $0$, $-\tfrac{1}{2}$ or $-1$, so we obtain a 5-grading of the Lie algebra $\mathfrak{e}_8$. We can count the number of roots in each slice:

• The number of roots with real part $1$ is $1$.
• The number of roots with real part $\tfrac{1}{2}$ is $56$.
• The number of roots with real part $0$ is $126$.
• The number of roots with real part $-\tfrac{1}{2}$ is $56$.
• The number of roots with real part $-1$ is $1$.

The only root with real part $1$ is the octonion $1$. Similarly, the only root with real part $-1$ is the octonion $-1$. We get 56 roots with real part $\tfrac{1}{2}$ by multiplying the number of lines in the Fano plane ($7$) by the number of sign choices ($8$) in
$$\tfrac{1}{2}\,(1 \pm e_i \pm e_j \pm e_k).$$
The number of roots with real part $-\tfrac{1}{2}$ is the same, by symmetry. We get 126 roots with real part 0 by subtracting all the other numbers on the above list from 240.

It follows that there is a 5-grading of $\mathfrak{e}_8$:
$$\mathfrak{e}_8 = \mathfrak{g}_{-2} \oplus \mathfrak{g}_{-1} \oplus \mathfrak{g}_0 \oplus \mathfrak{g}_1 \oplus \mathfrak{g}_2,$$
where the dimensions of the subspaces work as follows:
$$248 = 1 + 56 + 134 + 56 + 1.$$
Here we must remember to include $\mathbb{C} \otimes \mathfrak{t}$ in $\mathfrak{g}_0$, obtaining a Lie subalgebra of dimension $126 + 8 = 134$. This immediately shows how to get $E_8$ to act on a 57-dimensional manifold. Form the complex group $E_8(\mathbb{C})$, and form the subgroup $P$ whose Lie algebra is $\mathfrak{g}_0 \oplus \mathfrak{g}_1 \oplus \mathfrak{g}_2$. The quotient $E_8(\mathbb{C})/P$ is a manifold on which $E_8(\mathbb{C})$ acts. Its tangent spaces all look like $\mathfrak{g}_{-1} \oplus \mathfrak{g}_{-2}$, so they are 57-dimensional!
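The slice counts just given (1, 56, 126, 56, 1) can also be verified mechanically. A Python sketch, repeating the vector construction from the earlier sketch so that it stays self-contained:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

LINES = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7), (5, 6, 1), (6, 7, 2), (7, 1, 3)]

def minimal_vectors():                           # as in the earlier sketch
    V = set()
    for i in range(8):
        for s in (1, -1):
            v = [Fraction(0)] * 8; v[i] = Fraction(s); V.add(tuple(v))
    for line in LINES:
        for idxs in ((0,) + line, tuple(i for i in range(1, 8) if i not in line)):
            for signs in product((1, -1), repeat=4):
                v = [Fraction(0)] * 8
                for i, s in zip(idxs, signs):
                    v[i] = Fraction(s, 2)
                V.add(tuple(v))
    return V

hist = Counter(v[0] for v in minimal_vectors())  # v[0] is the real part
for rp in sorted(hist, reverse=True):
    print(rp, hist[rp])
# prints: 1 -> 1, 1/2 -> 56, 0 -> 126, -1/2 -> 56, -1 -> 1
```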
The tangent spaces of this quotient are complex vector spaces, so we are getting a 57-dimensional complex manifold on which the complexification of $E_8$ acts, but with some extra work we can get certain 'real forms' of $E_8$ to act on 57-dimensional real manifolds. The options have been catalogued by Kaneyuki [13].

In the above grading of $\mathfrak{e}_8$, the 134-dimensional Lie algebra $\mathfrak{g}_0$ is the direct sum of the Lie algebra $\mathfrak{e}_7$ and the 1-dimensional abelian Lie algebra $\mathbb{C}$. This comes as little surprise if one knows that the dimension of $\mathfrak{e}_7$ is 133, but the reason for it is that if we take all the roots of $\mathfrak{e}_8$ that are orthogonal to a given root, we obtain the roots of $\mathfrak{e}_7$. From this point of view, the 5-grading of $\mathfrak{e}_8$ looks like this:
$$\mathfrak{e}_8 = \mathbb{C} \;\oplus\; F \;\oplus\; (\mathfrak{e}_7 \oplus \mathbb{C}) \;\oplus\; F \;\oplus\; \mathbb{C}.$$
Recall that $\mathfrak{g}_0$ acts on all the other spaces $\mathfrak{g}_n$. In particular, $\mathfrak{g}_2$ is the 1-dimensional trivial representation of $\mathfrak{e}_7$, while $\mathfrak{g}_1$ is the 'Freudenthal algebra' $F$: a 56-dimensional representation of $\mathfrak{e}_7$, which happens to be the smallest nontrivial representation of $\mathfrak{e}_7$. This gadget was Freudenthal's way of trying to understand the group $E_7$. It has a symplectic structure and ternary product that are invariant under $E_7$, and he showed that $E_7$ is precisely the group of transformations preserving these structures [2,9]. It is not clear to me how enlightening this is. More interesting is that these three facts:

• $E_8$ is 248-dimensional.
• $E_8$ is the symmetry group of a 57-dimensional manifold equipped with extra geometrical structure.
• $E_7$ is the symmetry group of a 56-dimensional vector space equipped with extra algebraic structure.

turn out to have a common origin: namely, that when we pack 8-dimensional balls in a lattice modelled after the Cayley integers, each ball has 240 nearest neighbors, and when we take any one of these neighbors and count the number of others that touch it, we find that there are 56! (Indeed, two of these unit-norm neighbors $v$ and $w$ touch precisely when $|v - w| = 1$; since $|v - w|^2 = 2 - 2\langle v, w \rangle$, this says $\langle v, w \rangle = \tfrac{1}{2}$, and taking $v = 1$, the $w$ in question are exactly the 56 roots with real part $\tfrac{1}{2}$ counted above.)

There are many more games to play along these lines. For example, we have just seen that the pure imaginary Kirmse integers of norm 1 are the roots of $\mathfrak{e}_7$. These form the vertices of a gemstone in 7 dimensions, and we can repeat our 'gem-cutting' trick to get a 3-grading of $\mathfrak{e}_7$:
$$\mathfrak{e}_7 = \mathfrak{g}_{-1} \oplus \mathfrak{g}_0 \oplus \mathfrak{g}_1,$$
for which the dimensions work as follows:
$$133 = 27 + 79 + 27.$$
Since the dimension of $\mathfrak{e}_6$ is 78, it is not very surprising that $\mathfrak{g}_0$ is the direct sum of $\mathfrak{e}_6$ and the one-dimensional abelian Lie algebra $\mathbb{C}$. Since 3-gradings give Jordan algebras, it is also not surprising that $\mathfrak{g}_1$ is a famous 27-dimensional Jordan algebra.

The 'exceptional Jordan algebra' $\mathfrak{h}_3(\mathbb{O})$ is the space of $3 \times 3$ self-adjoint matrices with octonion entries, equipped with the product $x \circ y = \tfrac{1}{2}(xy + yx)$. This is a Jordan algebra over the real numbers. It is 27-dimensional, since its elements look like this:
$$\begin{pmatrix} a & z & \bar{y} \\ \bar{z} & b & x \\ y & \bar{x} & c \end{pmatrix}, \qquad a, b, c \in \mathbb{R}, \quad x, y, z \in \mathbb{O}.$$
Its complexification $\mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O})$ is none other than $\mathfrak{g}_1$. The space $\mathfrak{g}_{-1}$ is best thought of as the dual of this, so we have:
$$\mathfrak{e}_7 = \bigl(\mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O})\bigr)^* \;\oplus\; (\mathfrak{e}_6 \oplus \mathbb{C}) \;\oplus\; \bigl(\mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O})\bigr).$$
Using our facts about graded Lie algebras, this implies that $\mathfrak{e}_6$ acts on $\mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O})$ and its dual. In fact, $\mathfrak{h}_3(\mathbb{O})$ and its dual are the smallest nontrivial representations of $\mathfrak{e}_6$.

Furthermore, if we use the above inclusion of $\mathfrak{e}_6$ in $\mathfrak{e}_7$, we can take our previous decomposition of $\mathfrak{e}_8$ and decompose everything in sight as irreducible representations of $\mathfrak{e}_6$. When we do this, the Freudenthal algebra decomposes as
$$F = \mathbb{C} \;\oplus\; \bigl(\mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O})\bigr) \;\oplus\; \bigl(\mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O})\bigr)^* \;\oplus\; \mathbb{C},$$
with the dimensions working as follows:
$$56 = 1 + 27 + 27 + 1.$$
Usually people identify $\mathfrak{h}_3(\mathbb{O})$ with its dual here, and use this decomposition to write elements of the Freudenthal algebra as $2 \times 2$ matrices:
$$\begin{pmatrix} a & x \\ y & b \end{pmatrix}, \qquad a, b \in \mathbb{C}, \quad x, y \in \mathbb{C} \otimes \mathfrak{h}_3(\mathbb{O}).$$
This is probably too much 'exceptional mathematics' for most people to enjoy, at least on first exposure, so I shall stop here. The point, however, is that the three largest exceptional Lie groups sit inside each other like nested Russian dolls, in a pattern determined by the geometry of the Cayley integers — or the Kirmse integers, if you prefer.
Furthermore, this pattern explains how the smallest nontrivial representations of these Lie groups can be built from matrices involving octonions. There is a world of strange beauty to be explored here... and Conway and Smith's book provides a lucid and elegant introduction. Indeed, I should emphasize that Conway and Smith's book is remarkably self-contained. It assumes no knowledge of number theory, string theory, Lie theory or lower-case Gothic letters. It scarcely hints at some of the more esoteric delights I have mentioned here. Indeed, I mention these only to show that the quaternions and octonions are part of a fascinating and intricate landscape of structures, which can be toured at greater length elsewhere [3,17]. The place to start is Conway and Smith. ### Acknowledgements I would like to thank Bill Dubuque, José Carlos Santos, Thomas Larsson, Tony Smith, and Tony Sudbery for help with this review.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8638557195663452, "perplexity": 401.8057852764932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447266.73/warc/CC-MAIN-20151124205407-00072-ip-10-71-132-137.ec2.internal.warc.gz"}
https://convexoptimization.com/wikimization/index.php?title=Geometric_Presolver_example&diff=prev&oldid=2990
# Geometric Presolver example (Difference between revisions)

Revision as of 17:15, 11 April 2013 (edit) ← Previous diff | Revision as of 14:06, 12 April 2013 (edit) (undo) Next diff →

Line 13: A lower bound on number of rows of $\,E\in\mathbb{R}^{533\times 2704}\,$ retained is $217$.
- A lower bound on number of columns retained is $1104$.
+ A lower bound on number of columns retained is $1104$. An eliminated column means it is evident that the corresponding entry in solution $x$ must be $0$.
The present exercise is to determine whether any contemporary presolver can meet this lower bound.

## Revision as of 14:06, 12 April 2013

Assume that the following optimization problem is massive: $$\begin{array}{rl}\mbox{find}&x\in\mathbb{R}^n\\ \mbox{subject to}&E\,x=t\\ &x\succeq_{}\mathbf{0}\end{array}$$ The problem is presumed solvable but not computable by any contemporary means. The most logical strategy is to somehow make the problem smaller. Finding a smaller but equivalent problem is called presolving. This Matlab workspace file contains a real $E$ matrix having dimension $533\times 2704$ and a compatible $t$ vector. There exists a cardinality $36$ binary solution $x$. Before attempting to find it, we presume to have no choice but to reduce dimension of the $E$ matrix prior to computing a solution. A lower bound on number of rows of $E\in\mathbb{R}^{533\times 2704}$ retained is $217$. A lower bound on number of columns retained is $1104$. An eliminated column means it is evident that the corresponding entry in solution $x$ must be $0$. The present exercise is to determine whether any contemporary presolver can meet this lower bound.
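For concreteness, here is a minimal Python/NumPy sketch of one classic presolve reduction for this problem class; it is hypothetical illustration, not any production presolver. The rule: a row of $E$ whose right-hand side is zero and whose coefficients all share one sign forces every variable it touches to zero, so those columns can be eliminated, after which rows reduced to $0 = 0$ can be dropped and the pass repeated.

```python
import numpy as np

def presolve_nonneg(E, t, tol=1e-12):
    """Repeatedly shrink  E x = t,  x >= 0  with the one-signed zero-row rule.
    Returns boolean masks of the rows and columns that survive.
    (An empty row with nonzero RHS would mean infeasibility; not handled here.)"""
    rows = np.ones(E.shape[0], dtype=bool)
    cols = np.ones(E.shape[1], dtype=bool)
    changed = True
    while changed:
        changed = False
        A, b = E[np.ix_(rows, cols)], t[rows]
        for i in range(A.shape[0]):
            r = A[i]
            # zero RHS + one-signed row  =>  every touched x_j must be 0
            if abs(b[i]) < tol and (np.all(r > -tol) or np.all(r < tol)):
                hit = np.abs(r) > tol
                if hit.any():
                    cols[np.where(cols)[0][hit]] = False
                    changed = True
        A = E[np.ix_(rows, cols)]
        dead = (~np.any(np.abs(A) > tol, axis=1)) & (np.abs(t[rows]) < tol)
        if dead.any():                           # rows reduced to 0 = 0
            rows[np.where(rows)[0][dead]] = False
            changed = True
    return rows, cols
```

Whether this rule (or any contemporary combination of rules) gets near retaining only 217 rows and 1104 columns on the actual data is exactly the open exercise posed above.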
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9512345194816589, "perplexity": 788.9103149742306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662552994.41/warc/CC-MAIN-20220523011006-20220523041006-00559.warc.gz"}
http://cotpi.com/p/1/
Alice tosses a fair coin $$10$$ times. Bob tosses it $$11$$ times. Charlie tosses it $$12$$ times. Who is the most likely to get more heads than tails? [SOLVED] #### Prunthaban Kanthakumar solved this puzzle: Bob. The probability is $$0.5$$ for Bob using symmetry. For Alice and Charlie, there is an additional case in which the number of heads is equal to the number of tails, the probability of which gets subtracted, so they get less than $$0.5$$. #### Susam Pal from cotpi added: Bob is the most likely to get more heads than tails. Let the probability that a particular person gets more heads than tails, the probability that he or she gets more tails than heads and the probability that he or she gets an equal number of heads and tails be $$P_h(X)$$, $$P_t(X)$$ and $$P_{eq}(X)$$, respectively, where $$X$$ is one of Alice, Bob or Charlie. Thus, $$P_h(X) + P_t(X) + P_{eq}(X) = 1$$. For each person, the probability that he gets more heads than tails is equal to the probability that he gets more tails than heads, due to symmetry. Thus, $$P_h(X) = P_t(X)$$. Since Alice and Charlie each make an even number of tosses, the probability that each gets an equal number of heads and tails is positive. Thus, $$P_{eq}(\text{Alice}) \gt 0$$ and $$P_{eq}(\text{Charlie}) \gt 0$$. However, $$P_{eq}(\text{Bob}) = 0$$ because it is impossible to get an equal number of heads and tails in an odd number of tosses. Thus, \begin{align*} & P_h(\text{Alice}) + P_t(\text{Alice}) + P_{eq}(\text{Alice}) = 1 \\ & \implies P_h(\text{Alice}) + P_t(\text{Alice}) = 1 - P_{eq}(\text{Alice}) \\ & \implies P_h(\text{Alice}) + P_t(\text{Alice}) \lt 1 \\ & \implies P_h(\text{Alice}) + P_h(\text{Alice}) \lt 1 \\ & \implies P_h(\text{Alice}) \lt 0.5. \end{align*} Similarly, $P_h(\text{Charlie}) \lt 0.5$ However, \begin{align*} & P_h(\text{Bob}) + P_t(\text{Bob}) + P_{eq}(\text{Bob}) = 1 \\ & \implies P_h(\text{Bob}) + P_t(\text{Bob}) + 0 = 1 \\ & \implies P_h(\text{Bob}) + P_h(\text{Bob}) = 1 \\ & \implies P_h(\text{Bob}) = 0.5. \end{align*} ### Credit This puzzle is taken from folklore.
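As a sanity check, the three probabilities can be computed exactly; a small Python sketch using the binomial distribution:

```python
from math import comb

def p_more_heads(n):
    """Exact P(#heads > #tails) in n tosses of a fair coin."""
    return sum(comb(n, k) for k in range(n // 2 + 1, n + 1)) / 2 ** n

for name, n in [("Alice", 10), ("Bob", 11), ("Charlie", 12)]:
    print(f"{name}: {p_more_heads(n):.6f}")
# Alice: 0.376953, Bob: 0.500000, Charlie: 0.387207
```

So Bob wins, and Charlie beats Alice because the probability of a tie shrinks as the number of tosses grows.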
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995656609535217, "perplexity": 464.3182508186141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578534596.13/warc/CC-MAIN-20190422035654-20190422061558-00073.warc.gz"}
https://quant.stackexchange.com/questions/31397/pricing-log-contract-with-black-scholes-pde
Pricing log-contract with Black-Scholes PDE I was wondering if someone could help me with a problem regarding the Black-Scholes-Merton PDE. I have an exam soon and this question on an old exam has been bothering me and a friend for quite a while. We simply don't get it. The question goes this way: The payoff of a so-called European log-contract is $g \left( S_T \right) = \ln \left( S_T / K \right)$ where $K$ is the strike price and $S$ is a risky non-dividend-paying asset in the Black-Scholes-Merton model. Find the price $c(s,t)$ of such a contract. Hint: Use the Black-Scholes PDE and take as given that $c$ has the following form: $$c(s,t) = a(t) + b(t) \ln(s/K)$$ Find the functions $a(t)$ and $b(t)$. I've tried to figure out the solution and see if there's anything online, but nothing works. Using the BS PDE alone is not helping. All help and advice would be GREATLY appreciated! The B/S PDE for a contingent claim $V(S, t)$ is $$\frac{\partial V}{\partial t} + r S \frac{\partial V}{\partial S} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} - r V = 0$$ subject to the terminal condition $V(S, T) = \ln(S / K)$. According to the hint, the solution $V(S, t)$ takes the form $$V(S, t) = a(t) + b(t) \ln(S / K).$$ Since the terminal condition has to hold for all $S$, it follows that $a(T) = 0$, $b(T) = 1$. Take the partial derivatives of the guess to get \begin{eqnarray} \frac{\partial V}{\partial t}(S, t) & = & a'(t) + b'(t) \ln(S / K),\\ \frac{\partial V}{\partial S}(S, t) & = & b(t) \frac{1}{S},\\ \frac{\partial^2 V}{\partial S^2}(S, t) & = & -b(t) \frac{1}{S^2}.\\ \end{eqnarray} Substituting back yields $$a'(t) + b'(t) \ln(S / K) + r b(t) - \frac{1}{2} \sigma^2 b(t) - r a(t) - r b(t) \ln(S / K) = 0.$$ Since for each fixed $t$, this solution has to hold for all values of $S$, we collect terms containing $\ln(S / K)$ and get the ODEs \begin{eqnarray} 0 & = & a'(t) + \left( r - \frac{1}{2} \sigma^2 \right) b(t) - r a(t),\\ 0 & = & \left( b'(t) - r b(t) \right) \ln(S / K) \end{eqnarray} The solution to the second ODE that satisfies $b(T) = 1$ is $$b(t) = e^{-r (T - t)}.$$ The solution to the first ODE that satisfies $a(T) = 0$ is $$a(t) = b(t) \left( r - \frac{1}{2} \sigma^2 \right) (T - t).$$ Combining these results yields $$V(S, t) = e^{-r (T - t)} \left[ \left( r - \frac{1}{2} \sigma^2 \right) (T - t) + \ln(S / K) \right].$$
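The result is easy to sanity-check against the risk-neutral expectation, since $\ln S_T$ is Gaussian under the pricing measure. A quick Monte Carlo sketch (illustrative parameters, not from the question):

```python
import numpy as np

S, K, r, sigma, tau = 100.0, 90.0, 0.05, 0.2, 2.0   # tau = T - t
rng = np.random.default_rng(0)
Z = rng.standard_normal(1_000_000)
ST = S * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * Z)

mc = np.exp(-r * tau) * np.log(ST / K).mean()        # discounted E[ln(S_T/K)]
closed = np.exp(-r * tau) * ((r - 0.5 * sigma**2) * tau + np.log(S / K))
print(mc, closed)                                    # agree up to Monte Carlo error
```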
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 7, "x-ck12": 0, "texerror": 0, "math_score": 0.8861914873123169, "perplexity": 510.64615477636477}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00435.warc.gz"}
http://mathhelpforum.com/advanced-statistics/207687-n-p-n-n-3-5-2-5-n-find-e-n.html
# Math Help - N: P(N=n)=(3/5)[(2/5)^n], find E(N) 1. ## N: P(N=n)=(3/5)[(2/5)^n], find E(N) Suppose the number of children $N$ of a random family satisfies P(N=n)=(3/5)[(2/5)^n] where n=0,1,2,... Compute E(N). I went through the math and I come up with 1, which is just the total probability of the sample space. So, I know I'm not going about this the right way. $E(N)=\frac{3}{5}(\frac{2}{5})^0 + \frac{3}{5}(\frac{2}{5})^1 + \frac{3}{5}(\frac{2}{5})^2 + ...$ $=\sum_{n=0}^{\infty}\frac{3}{5}(\frac{2}{5})^n$ $=\frac{3}{5}\sum_{n=0}^{\infty}(\frac{2}{5})^n$ $=\frac{3}{5}\frac{5}{3}$ $=1$ Could someone point out the basic understanding of expected value that I seem to be missing in this problem? 2. ## Re: N: P(N=n)=(3/5)[(2/5)^n], find E(N) I'm going to assume you mean expected value. Here is the definition $\operatorname{E}[N] = n_1p_1 + n_2p_2 + \dotsb + n_kp_k \;.$ What you have is the sum of all possible probabilities, which of course must be equal to 1! However, each of your probabilities $P[N=i]$ should carry a weight $n_i$. 3. ## Re: N: P(N=n)=(3/5)[(2/5)^n], find E(N) $E(N)=0*\frac{3}{5}(\frac{2}{5})^0 + 1*\frac{3}{5}(\frac{2}{5})^1 + 2*\frac{3}{5}(\frac{2}{5})^2 + ...$ $=\sum_{n=0}^{\infty}n*\frac{3}{5}(\frac{2}{5})^n$ $=\frac{2}{3}$ At least I was right about missing a basic understanding. The unfortunate part is that I need to learn how to go from the summation to the answer using paper and pencil; I won't always have access to a calculator that can do the sum for me... 4. ## Re: N: P(N=n)=(3/5)[(2/5)^n], find E(N) I'm trying to remember back to my statistics class. If you were to take the natural log of both sides, would that help? When I see the product of n and (2/5)^n, that's what I want to do. EDIT:// but it is a sum still, so it might not.
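For the record, the sum the thread is circling around has a standard pencil-and-paper evaluation: differentiate the geometric series $\sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$ and multiply by $x$ to get
$$\sum_{n=0}^{\infty} n x^n = \frac{x}{(1-x)^2}, \qquad |x| < 1.$$
With $x = \frac{2}{5}$ this gives
$$E(N) = \frac{3}{5} \sum_{n=0}^{\infty} n \left(\frac{2}{5}\right)^n = \frac{3}{5} \cdot \frac{2/5}{(3/5)^2} = \frac{2}{3},$$
matching the answer above. Taking logs, as suggested in the last post, does not help here; differentiating the geometric series is the standard trick.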
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8806406259536743, "perplexity": 720.9074283402396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418815948154.77/warc/CC-MAIN-20141217113228-00126-ip-10-231-17-201.ec2.internal.warc.gz"}
http://mathhelpforum.com/pre-calculus/102380-functions-question-2-a.html
# Math Help - Functions Question 2 1. ## Functions Question 2 Given $f(x)=\frac{1}{x}$. Find $\frac{f(x+h)+f(x)}{h}$ where h is not equal to zero. Attempt: $\frac{\frac{h}{(x+h)x}}{h}$ How do I simplify this? (Please provide steps.) Thank you 2. Originally Posted by mj.alawami Given $f(x)=\frac{1}{x}$. Find $\frac{f(x+h)+f(x)}{h}$ where h is not equal to zero. Attempt: $\frac{\frac{h}{(x+h)x}}{h}$ How do I simplify this? (Please provide steps.) Thank you $\frac{\frac{1}{x+h}+\frac{1}{x}}{h}$ $\frac{2x+h}{x^2+xh}\cdot \frac{1}{h}$ $ \frac{2x+h}{x^2h+xh^2} $ 3. Originally Posted by mj.alawami Given $f(x)=\frac{1}{x}$. Find $\frac{f(x+h)+f(x)}{h}$ where h is not equal to zero. You sure it's not $\frac{f(x+h)-f(x)}{h}$? 4. Originally Posted by skeeter You sure it's not $\frac{f(x+h)-f(x)}{h}$? Yes, you are correct and I was wrong. So how is it done? 5. Originally Posted by mj.alawami Given $f(x)=\frac{1}{x}$. Find $\frac{f(x+h)+f(x)}{h}$ where h is not equal to zero. Attempt: $\frac{\frac{h}{(x+h)x}}{h}$ How do I simplify this? (Please provide steps.) Thank you $f(x) = \frac{1}{x}$ $f(x + h) = \frac{1}{x + h}$ $f(x + h) - f(x) = \frac{1}{x + h} - \frac{1}{x}$ $= \frac{x}{x(x + h)} - \frac{x + h}{x(x + h)}$ $= -\frac{h}{x(x + h)}$ $\frac{f(x +h) - f(x)}{h} = \frac{-\frac{h}{x(x + h)}}{h}$ $= -\frac{h}{x(x + h)}\cdot\frac{1}{h}$ $= -\frac{1}{x(x + h)}$. 6. Originally Posted by mj.alawami So how is it done? To learn, in general, how to evaluate a function at an expression (rather than a number), try here. Then try working this exercise in pieces, rather than all at once. First write down the formula for f(x). Then find the expression for f(x + h). Then subtract the expression for f(x) from the expression for f(x + h). Then divide by h, cancelling if you can.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.975822925567627, "perplexity": 1542.2780135759174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657141651.17/warc/CC-MAIN-20140914011221-00165-ip-10-234-18-248.ec2.internal.warc.gz"}
http://clay6.com/qa/34761/a-plant-virus-is-found-to-consists-of-uniform-cylindrical-particles-of-150-
# A plant virus is found to consist of uniform cylindrical particles of $150$ Å in diameter and $5000$ Å long. The specific volume of the virus is $0.75\;cm^3/g$. If the virus is considered to be a single particle, find its molar mass. $(a)\;7.09 \times 10^8\;g \\ (b)\;7.09 \times 10^7\;g \\(c)\;6.09 \times 10^7\;g \\(d)\;7.09 \times 10^6 \;g$ Since the virus is a cylinder, the volume of one virus particle is $\pi r^2 l$ $\qquad= \large\frac{22}{7} \times \bigg( \large\frac{150}{2} \times 10^{-8} \bigg)^2 $$\times 5000 \times 10^{-8}\;cm^3$ (using $1$ Å $= 10^{-8}\;cm$) $\qquad= 0.884 \times 10^{-16}\;cm^3$ Weight of one virus $= \large\frac{0.884 \times 10^{-16}}{0.75}$$\;g$ $\qquad= 1.178 \times 10^{-16}\;g$ Hence the gram molecular weight of the virus = weight of $N_0$ (Avogadro's number of) viruses = weight of $6.023 \times 10^{23}$ viruses $\qquad= 1.178 \times 10^{-16} \times 6.023 \times 10^{23}\;g$ $\qquad= 7.09 \times 10^{7}\;g$ Molecular weight of virus $=7.09 \times 10^7\;g$ Hence (b) is the correct answer.
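A quick numerical check of the arithmetic (units: 1 Å $= 10^{-8}$ cm):

```python
import math

r = 75e-8                   # radius = 150 A / 2, in cm
l = 5000e-8                 # length in cm
v = math.pi * r**2 * l      # volume of one cylindrical particle, cm^3
m = v / 0.75                # mass in g, from specific volume 0.75 cm^3/g
print(v, m, m * 6.023e23)   # ~8.84e-17, ~1.18e-16, ~7.1e7
```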
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999053478240967, "perplexity": 1823.5143939858626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945222.55/warc/CC-MAIN-20180421125711-20180421145711-00405.warc.gz"}
http://proceedings.mlr.press/v28/yang13f.html
# Quantile Regression for Large-scale Applications Jiyan Yang, Xiangrui Meng, Michael Mahoney; Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):881-887, 2013. #### Abstract Quantile regression is a method to estimate the quantiles of the conditional distribution of a response variable, and as such it permits a much more accurate portrayal of the relationship between the response variable and observed covariates than methods such as Least-squares or Least Absolute Deviations regression. It can be expressed as a linear program, and interior-point methods can be used to find a solution for moderately large problems. Dealing with very large problems, e.g., involving data up to and beyond the terabyte regime, remains a challenge. Here, we present a randomized algorithm that runs in time that is nearly linear in the size of the input and that, with constant probability, computes a (1+ε) approximate solution to an arbitrary quantile regression problem. Our algorithm computes a low-distortion subspace-preserving embedding with respect to the loss function of quantile regression. Our empirical evaluation illustrates that our algorithm is competitive with the best previous work on small to medium-sized problems, and that it can be implemented in MapReduce-like environments and applied to terabyte-sized problems.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8861227035522461, "perplexity": 394.54203519360675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155702.33/warc/CC-MAIN-20180918205149-20180918225149-00080.warc.gz"}
https://arxiv.org/abs/1409.4491
astro-ph.CO # Title: Genus Topology and Cross-Correlation of BICEP2 and Planck 353 GHz B-Modes: Further Evidence Favoring Gravity Wave Detection Abstract: We have analyzed the genus topology of the BICEP2 B-modes and find them to be Gaussian random phase as expected if they have a cosmological origin. These BICEP2 B-modes can be produced by gravity waves in the early universe, but some question has arisen as to whether these B-modes (for 50 < l < 120) may instead be produced by foreground polarized dust emission. The dust emission at 150 GHz observed by BICEP2 should be less in magnitude but have similar structure to that at 353 GHz. We have therefore calculated and mapped the B-modes in the BICEP2 region from the publicly available Q and U 353 GHz preliminary Planck polarization maps. These have a genus curve that is different from that seen in the BICEP2 observations, with features at different locations from those in the BICEP2 map. The two maps show a positive correlation coefficient of 15.2% +/- 3.9% (1-sigma). This requires the amplitude of the Planck (50 < l < 120) dust modes to be low in the BICEP2 region, and the majority of the Planck 353 GHz signal in the BICEP2 region in these modes to be noise. We can explain the observed correlation coefficient of 15.2% with a BICEP2 gravity wave signal with an rms amplitude equal to 54% of the total BICEP2 rms amplitude. The gravity wave signal corresponds to a tensor-to-scalar ratio r = 0.11 +/- 0.04 (1-sigma). This is consistent with a gravity wave signal having been detected, at a 2.5-sigma level. The Planck and BICEP2 teams have recently engaged in joint analysis of their combined data; it will be interesting to see if that collaboration reaches similar conclusions. Comments: version as accepted by MNRAS Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO) DOI: 10.1093/mnras/stu2547 Cite as: arXiv:1409.4491 [astro-ph.CO] (or arXiv:1409.4491v2 [astro-ph.CO] for this version) ## Submission history From: Wes Colley [view email] [v1] Tue, 16 Sep 2014 03:20:55 GMT (1460kb,D) [v2] Thu, 4 Dec 2014 20:53:11 GMT (1464kb,D)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8791967630386353, "perplexity": 2520.115968004691}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662856.94/warc/CC-MAIN-20160924173742-00020-ip-10-143-35-109.ec2.internal.warc.gz"}
https://par.nsf.gov/search/author:%22Massarotti,%20P.%22
# Search for: All records, Creators/Authors contains: "Massarotti, P." Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval). What is a DOI Number? Some links on this page may take you to non-federal websites. Their policies may differ from this site. 1. Free, publicly-accessible full text available July 1, 2023 2. Abstract: The accurate simulation of additional interactions at the ATLAS experiment for the analysis of proton–proton collisions delivered by the Large Hadron Collider presents a significant challenge to the computing resources. During the LHC Run 2 (2015–2018), there were up to 70 inelastic interactions per bunch crossing, which need to be accounted for in Monte Carlo (MC) production. In this document, a new method to account for these additional interactions in the simulation chain is described. Instead of sampling the inelastic interactions and adding their energy deposits to a hard-scatter interaction one-by-one, the inelastic interactions are presampled, independent of the hard... more » Free, publicly-accessible full text available December 1, 2023 3. Abstract: The ATLAS experiment at the Large Hadron Collider has a broad physics programme ranging from precision measurements to direct searches for new particles and new interactions, requiring ever larger and ever more accurate datasets of simulated Monte Carlo events. Detector simulation with Geant4 is accurate but requires significant CPU resources. Over the past decade, ATLAS has developed and utilized tools that replace the most CPU-intensive component of the simulation—the calorimeter shower simulation—with faster simulation methods. Here, AtlFast3, the next generation of high-accuracy fast simulation in ATLAS, is introduced. AtlFast3 combines parameterized approaches with machine-learning techniques and is deployed to... more » Free, publicly-accessible full text available December 1, 2023 4. Free, publicly-accessible full text available September 1, 2022 5. Abstract: The NA62 experiment reports the branching ratio measurement $\mathrm{BR}(K^{+}\to \pi^{+}\nu\overline{\nu}) = \left(10.6^{+4.0}_{-3.4}\big|_{\mathrm{stat}} \pm 0.9_{\mathrm{syst}}\right)\times 10^{-11}$ at 68% CL, based on the observation of 20 signal candidates with an expected background of 7.0 events from the total data sample collected at the CERN SPS during 2016–2018. This provides evidence for the very rare $K^{+}\to\pi^{+}\nu\overline{\nu}$ decay, observed with a significance of 3.4σ. The experiment... more » 6. Abstract: A search for the $K^{+}\to\pi^{+}X$ decay, where $X$ is a long-lived feebly interacting particle, is performed through an interpretation of the $K^{+}\to\pi^{+}\nu\overline{\nu}$ analysis of data collected in 2017 by the NA62 experiment at CERN. Two ranges of $X$ masses, 0–110 MeV/$c^2$ and 154–260 MeV/$c^2$, and lifetimes above 100 ps are considered. The limits set on the branching ratio, $\mathrm{BR}(K^{+}\to\pi^{+}X)$, are competitive with previously reported searches in the first mass range... more » 7. Abstract: The NA62 experiment at the CERN SPS reports a study of a sample of $4\times 10^{9}$ tagged $\pi^{0}$ mesons from $K^{+}\to\pi^{+}\pi^{0}(\gamma)$, searching for the decay of the $\pi^{0}$ to invisible particles. No signal is observed in excess of the expected background fluctuations. An upper limit of $4.4\times 10^{-9}$ is set on the branching ratio at 90% confidence level, improving on previous results by a factor of 60. This result can also be interpreted as a model-independent upper limit on... more » 8. Abstract: The NA62 experiment reports an investigation of the $K^{+}\to\pi^{+}\nu\overline{\nu}$ mode from a sample of $K^{+}$ decays collected in 2017 at the CERN SPS. The experiment has achieved a single event sensitivity of $(0.389 \pm 0.024)\times 10^{-10}$, corresponding to 2.2 events assuming the Standard Model branching ratio of $(8.4 \pm 1.0)\times 10^{-11}$. Two signal candidates are observed with an expected background of 1.5 events. Combined with the result of a... more » 9. Free, publicly-accessible full text available May 1, 2023
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587860703468323, "perplexity": 1886.7441078194597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00063.warc.gz"}
http://www.physicsforums.com/showpost.php?p=896819&postcount=3
How do I know that 2 seconds is 2 seconds? They're equivalent by definition. However, according to Relativity, time is not invariable (neither is the "second"). So the "second" depends on which "clock" we are measuring. To agree on time, we must define what the standard clock will be. I think what alkammy is trying to say is: how do we know that it was a millionth of a second? It could be more... right?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8484392166137695, "perplexity": 728.2919006191098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010554119/warc/CC-MAIN-20140305090914-00004-ip-10-183-142-35.ec2.internal.warc.gz"}
https://projecteuclid.org/euclid.aoms/1177703744
## The Annals of Mathematical Statistics ### Effect of Truncation on a Test for the Scale Parameter of the Exponential Distribution A. P. Basu #### Abstract In this paper we have investigated the effects on the operating characteristics of a test for the scale parameter of the exponential distribution on the assumption that the sample is from a "complete" exponential population when in reality it is known to have come from a truncated exponential population. The results derived are valid for samples of any size. #### Article information Source Ann. Math. Statist., Volume 35, Number 1 (1964), 209-213. Dates First available in Project Euclid: 27 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aoms/1177703744 Digital Object Identifier doi:10.1214/aoms/1177703744 Mathematical Reviews number (MathSciNet) MR157463 Zentralblatt MATH identifier 0126.14903 JSTOR links.jstor.org #### Citation Basu, A. P. Effect of Truncation on a Test for the Scale Parameter of the Exponential Distribution. Ann. Math. Statist. 35 (1964), no. 1, 209--213. doi:10.1214/aoms/1177703744. https://projecteuclid.org/euclid.aoms/1177703744
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8151826858520508, "perplexity": 769.8851023232585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998084.36/warc/CC-MAIN-20190616082703-20190616104703-00026.warc.gz"}
https://www.semanticscholar.org/paper/Probing-Quantization-Via-Branes-Gaiotto-Witten/621a7fcc191e986726b90a3c009f68697a844cf9
Corpus ID: 236428231 # Probing Quantization Via Branes @inproceedings{Gaiotto2021ProbingQV, title={Probing Quantization Via Branes}, author={Davide Gaiotto and Edward Witten}, year={2021} } • Published 26 July 2021 • Physics, Mathematics We re-examine quantization via branes with the goal of understanding its relation to geometric quantization. If a symplectic manifold $M$ can be quantized in geometric quantization using a polarization ${\mathcal P}$, and in brane quantization using a complexification $Y$, then the two quantizations agree if ${\mathcal P}$ can be analytically continued to a holomorphic polarization of $Y$. We also show, roughly, that the automorphism group of $M$ that is realized as a group of symmetries in... 3 Citations #### Citations of this paper Koszul duality in quantum field theory (Physics, Mathematics, 2021): In this article, we introduce basic aspects of the algebraic notion of Koszul duality for a physics audience. We then review its appearance in the physical problem of coupling QFTs to topological... Microsheaves from Hitchin fibers via Floer theory: Consider a smooth component of the moduli space of stable Higgs bundles on which the Hitchin fibration is proper. We record in this note the following corollaries to existing results on Floer theory... Koszul duality in quantum field theory (arXiv preprint, Nov 2021): In this article, we introduce basic aspects of the algebraic notion of Koszul duality for a physics audience. We then review its appearance in the physical problem of coupling QFTs to topological...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8579422831535339, "perplexity": 889.8231348298764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00265.warc.gz"}
http://mathoverflow.net/questions/116326?sort=oldest
## Why does the Ryll-Nardzewski theorem fail for uncountable theories? The Ryll-Nardzewski theorem states that if $T$ is a countable complete theory, then $T$ is $\aleph_0$-categorical if and only if for every $n<\omega$ there are only finitely many formulas $\varphi(x_1,\ldots,x_n)$ up to equivalence relative to $T$. $T$ is a countable theory if it can be built in a countable language. My question is: Is there a complete uncountable theory which is $\aleph_0$-categorical, but such that for some $n<\omega$ there are infinitely many formulas $\varphi(x_1,\ldots,x_n)$ up to equivalence relative to $T$? Thanks - Welcome to MO! A local custom is that every question should normally bear at least one arXiv-style tag, e.g. "lo.logic" or "ct.category-theory". I've just added one to yours. – Tom Leinster Dec 13 at 22:15 math.stackexchange.com/questions/258015/… – Andres Caicedo Dec 13 at 22:20 Ah yes. Another custom is that you don't ask the same question simultaneously here and at math.stackexchange, as this potentially wastes people's time. Your question here may get closed for this reason. – Tom Leinster Dec 13 at 23:03 Yet another custom is to accompany links with text so you know why you may or may not want to click through. – François G. Dorais Dec 13 at 23:32 Let $T$ be the complete theory of $\mathbb N$, with a binary predicate $<$ for the standard ordering, unary predicates for all subsets of $\mathbb N$, and constants for all the elements of $\mathbb N$. This clearly has uncountably many inequivalent unary formulas. I claim that its standard model is, up to isomorphism, the only countable model. The main point in the proof is that there exist continuum many infinite subsets $A_i$ of $\mathbb N$ any two of which have a finite intersection. If $M$ were a nonstandard model, it would have an element $c$ greater (in the sense of $M$) than all the standard numbers; each $A_i$ would have an element $a_i$ greater than $c$ (because the sentence saying $A_i$ has arbitrarily large elements is in $T$); and these $a_i$'s would all be distinct (because each $A_i\cap A_j$ has a standard upper bound).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9010091423988342, "perplexity": 233.63059600235403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700107557/warc/CC-MAIN-20130516102827-00043-ip-10-60-113-184.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/412909/showing-symmetry-involving-a-matrix-and-its-transposed-matrix
# Showing symmetry involving a matrix and its transposed matrix I'd appreciate it if someone could find a better title for this question, for I'm short of ideas right now. Given a matrix $A \in R^{n\times n}$, show that $$\frac{1}{2}(A + A^t)$$ is symmetric. I see that it's symmetric and it seems obvious, but I don't really know how to show that in particular. - $(B+B^t)^t=B^t+(B^t)^t=B^t+B$ So $B+B^t$ is symmetric $\forall B\in R^{n\times n}$. Take $B=A/2$ and you get the desired result. - Are you familiar with 'entry notation' to represent arbitrary elements in a matrix? I am omitting the scalar, but the idea is to show $ent_{ij}(A+A^T) = ent_{ij}\big((A+A^T)^T\big)$, where $$ent_{ij}(A^T) = ent_{ji}(A).$$ - You should apply the known properties of transposition: 1. $(cX)^t=cX^t$ (where $c$ is a scalar and $X$ is any matrix); 2. $(X+Y)^t=X^t + Y^t$ (where $X$ and $Y$ are matrices and the sum is defined). Moreover, recall that $X+Y=Y+X$. That's all you need: set $$C=\frac{1}{2}(A+A^t)$$ and apply the properties to show that $C^t=C$. -
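A quick numerical illustration of the identity being proved (not a substitute for the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))   # an arbitrary square matrix
S = 0.5 * (A + A.T)
print(np.allclose(S, S.T))        # True: S equals its own transpose
```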
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9342175126075745, "perplexity": 277.72488388177845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098071.98/warc/CC-MAIN-20150627031818-00062-ip-10-179-60-89.ec2.internal.warc.gz"}
http://www.maplesoft.com/support/help/Maple/view.aspx?path=DifferentialGeometry/LieAlgebras/LieAlgebraData/StructureConstants
LieAlgebraData[StructureConstants] - convert an array of structure constants to a Lie algebra data structure - Maple Programming Help Calling Sequence LieAlgebraData(StructureConstants, AlgName) Parameters StructureConstants  - a 3-dimensional array C AlgName             - a name or string, the name to be assigned to the Lie algebra Description • In the LieAlgebras package, the command DGsetup is used to initialize a Lie algebra -- that is, to define the basis elements for the Lie algebra and its dual and to store the structure constants for the Lie algebra in memory.  The first argument for DGsetup is a Lie algebra data structure which contains the structure constants in a standard format used by the LieAlgebras package. • One commonly used format for describing the structure equations of a Lie algebra is to specify the structure constants as an $n \times n \times n$ array $C$, where $n$ is the dimension of the Lie algebra to be created. The structure constants are defined to be the Lie brackets of the basis elements $e_1, \dots, e_n$ of the Lie algebra by $[e_i, e_j] = \sum_{k=1}^{n} C_{ijk}\, e_k$. • The function LieAlgebraData enables one to create a Lie algebra in Maple from an array $C$.  Only the entries ${C}_{ijk}$, where $i < j$, need be specified. • The command LieAlgebraData is part of the DifferentialGeometry:-LieAlgebras package.  It can be used in the form LieAlgebraData(...) only after executing the commands with(DifferentialGeometry) and with(LieAlgebras), but can always be used by executing DifferentialGeometry:-LieAlgebras:-LieAlgebraData(...). Examples > with(DifferentialGeometry): with(LieAlgebras): Example 1. In this example we create a 4-dimensional Lie algebra called Ex1 from an Array of structure constants. First we define an array and initialize all the entries to zero.  Then we specify the non-zero structure constants. > C := Array(1..4, 1..4, 1..4, 0): > C[1, 4, 1] := a: > C[2, 3, 1] := b: > C[3, 4, 2] := c: > C[3, 4, 3] := a: LieAlgebraData gives us a Lie algebra data structure with these structure constants. > L1 := LieAlgebraData(C, Ex1); ${\mathrm{L1}}{:=}\left[\left[{\mathrm{e1}}{,}{\mathrm{e4}}\right]{=}{a}{}{\mathrm{e1}}{,}\left[{\mathrm{e2}}{,}{\mathrm{e3}}\right]{=}{b}{}{\mathrm{e1}}{,}\left[{\mathrm{e3}}{,}{\mathrm{e4}}\right]{=}{c}{}{\mathrm{e2}}{+}{a}{}{\mathrm{e3}}\right]$ (2.1) > DGsetup(L1): Ex1 > Query("Jacobi"); ${\mathrm{true}}$ (2.2) Note that the structure constants in an Array C for an initialized Lie algebra can be obtained using DGinfo. (This command also applies to the structure equations for any frame or manifold and is not restricted in its use to Lie algebras.) Ex1 > K := Array(1..4, 1..4, 1..4, (i, j ,k) -> Tools:-DGinfo([i, j, k], "LieBracketStructureFunction")); ${K}{:=}\left[\begin{array}{c}{\mathrm{1..4 x 1..4 x 1..4}}{\mathrm{Array}}\\ {\mathrm{Data Type:}}{\mathrm{anything}}\\ {\mathrm{Storage:}}{\mathrm{rectangular}}\\ {\mathrm{Order:}}{\mathrm{Fortran_order}}\end{array}\right]$ (2.3) Ex1 > K[1, 4, 1]; ${a}$ (2.4) Ex1 > K[4, 1, 1]; ${-}{a}$ (2.5)
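For readers without Maple, the same Jacobi check from the example can be reproduced in a few lines of Python with SymPy; this is a hypothetical stand-in, not part of the DifferentialGeometry package:

```python
import itertools
import sympy as sp

a, b, c = sp.symbols('a b c')
n = 4
C = [[[sp.Integer(0)] * n for _ in range(n)] for _ in range(n)]

def set_c(i, j, k, val):            # store C[i][j][k] with antisymmetry in (i, j)
    C[i-1][j-1][k-1] = val
    C[j-1][i-1][k-1] = -val

set_c(1, 4, 1, a); set_c(2, 3, 1, b); set_c(3, 4, 2, c); set_c(3, 4, 3, a)

# Jacobi identity in components:
# sum_m ( C_ijm C_mkl + C_jkm C_mil + C_kim C_mjl ) = 0  for all i, j, k, l
for i, j, k, l in itertools.product(range(n), repeat=4):
    expr = sum(C[i][j][m]*C[m][k][l] + C[j][k][m]*C[m][i][l]
               + C[k][i][m]*C[m][j][l] for m in range(n))
    assert sp.expand(expr) == 0
print("Jacobi identity holds")
```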
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8892112374305725, "perplexity": 1196.4879606625134}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120206.98/warc/CC-MAIN-20170423031200-00232-ip-10-145-167-34.ec2.internal.warc.gz"}
http://mathhelpforum.com/statistics/134882-significance-test.html
# Math Help - significance test

1. ## significance test

The p-value of a two-tailed z-test for a population mean is 0.064 when the level of significance is 0.05. Which of the following statements are true?

I. A one-sided test using the same data would indicate a statistically significant result.
II. The mean in the null hypothesis is true.
III. The test has failed to prove that the alternative hypothesis is true.

I and III are correct, but I have a couple of questions.

1) When dealing with two-sided tests, are significance levels (alpha) halved just like p-values?
2) What does "statistically significant result" really mean? I tried googling but couldn't find a good answer.
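No replies are included in this extract; as a numerical illustration of question 1 (a sketch of my own, using SciPy's normal distribution), the one-sided p-value for the same data is half the two-sided one:

```python
from scipy.stats import norm

p_two_sided = 0.064
# Recover the z statistic implied by the two-sided p-value.
z = norm.ppf(1 - p_two_sided / 2)          # ~1.852

# One-sided p-value for the same data: a single tail area,
# i.e., half of the two-sided p-value.
p_one_sided = 1 - norm.cdf(z)              # 0.032

print(p_one_sided)   # 0.032 < 0.05: significant in a one-sided test
print(p_two_sided)   # 0.064 > 0.05: not significant in a two-sided test
```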
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9620083570480347, "perplexity": 891.3644794326322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246637364.20/warc/CC-MAIN-20150417045717-00035-ip-10-235-10-82.ec2.internal.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/group-2-1-a-solenoid-has-a-radius-of-200-cm-and-1-000-turns-per-meter-over-a-certain-time--q3314398
Group 2:

1. A solenoid has a radius of 2.00 cm and 1 000 turns per meter. Over a certain time interval the current varies with time according to the expression I = 3e^(0.2t), where I is in amperes and t is in seconds. Calculate the electric field 5.00 cm from the axis of the solenoid at t = 10.0 s.
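The page shows no solution; here is a sketch of the standard one (my own working, not from the source). For a point outside the solenoid, Faraday's law on a circle of radius $r > R$ gives $E \cdot 2\pi r = \mu_0 n \pi R^2 \, dI/dt$:

```python
import math

mu0 = 4 * math.pi * 1e-7   # T*m/A
n = 1000                   # turns per meter
R = 0.02                   # solenoid radius (m)
r = 0.05                   # field point, outside the solenoid (m)
t = 10.0                   # s

# I(t) = 3 e^{0.2 t}  =>  dI/dt = 0.6 e^{0.2 t}
dI_dt = 0.6 * math.exp(0.2 * t)

# Faraday's law: E * 2*pi*r = mu0 * n * pi * R^2 * dI/dt
E = mu0 * n * R**2 * dI_dt / (2 * r)
print(E)   # ~2.23e-5 V/m
```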
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621237277984619, "perplexity": 369.6477508009615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705195219/warc/CC-MAIN-20130516115315-00097-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/fly-by-of-planets.49919/
# Fly-By of planets

1. Oct 27, 2004
### airbuzz
How does a fly-by work? How is it possible that after a fly-by I have more energy than before? Where does the extra energy come from?

2. Oct 27, 2004
### physicsguy
I thought fly-bys of planets used the gravity of the planet to "sling-shot" the object past. Stuff like the mission to Saturn/Titan used it to accelerate the craft, since the launcher was not powerful enough.

3. Oct 27, 2004
### airbuzz
I know what it does, but I don't understand how it works... When I come closer to a planet I accelerate because some of my potential energy transforms into kinetic energy. The opposite happens when I get far from it. But with a fly-by they obtain a kinetic energy bigger than the initial potential energy. Where does this energy come from? I think from the planet, but how?

4. Oct 27, 2004
### Integral
Staff Emeritus
The passing satellite picks up the ORBITAL velocity (at least a portion of it) of the body it is passing. You are correct that all energy gained from entering the gravity well of the planetary body is lost on exit. But all the time the satellite is in the gravitational influence of the planetary body it is being dragged along with the orbital motion. This is the slingshot velocity.

5. Oct 27, 2004
### airbuzz
Great!!!! I was thinking something like that. Thank you very much!!
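Integral's answer can be made quantitative with the standard one-dimensional "elastic collision" picture (my own illustration, not from the thread): in the planet's frame the probe's speed is unchanged by the encounter, so a probe meeting the planet head-on leaves with its speed increased by twice the planet's orbital speed.

```python
# Head-on gravity assist in 1D (idealized: planet mass >> probe mass).
def slingshot_exit_speed(v_probe, U_planet):
    """Sun-frame exit speed after a head-on flyby.
    In the planet frame the approach speed is v + U; the flyby
    reverses its direction but keeps its magnitude; transforming
    back to the Sun frame gives v + 2U."""
    v_rel_in = v_probe + U_planet   # speed relative to planet
    v_rel_out = v_rel_in            # magnitude unchanged in the planet frame
    return v_rel_out + U_planet     # back to the Sun frame

# Example: probe at 10 km/s meets Jupiter (orbital speed ~13 km/s) head-on.
print(slingshot_exit_speed(10.0, 13.0))   # 36.0 km/s
```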
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8744470477104187, "perplexity": 1353.6239962351108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720945.67/warc/CC-MAIN-20161020183840-00239-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/electron-concentration.303698/
# Electron Concentration

1. Mar 30, 2009
### jg370
To determine the Fermi energy for metals such as sodium, copper, gold, etc., one needs the electron concentration per cubic meter for the metal of interest. I am curious how this quantity can be determined. I referred to my physics textbooks and searched the Internet. However, the information found is mostly related to semiconductor materials and other exotic topics.

Can anyone provide information on how such an electron concentration can be determined, or refer me to some literature that would help?

Thank you for your kind attention

2. Mar 30, 2009
### genneth
Look up the density. Look up the mass of one atom. Look up the number of valence electrons. Put these three numbers together in a way that gives you a number of the right units ;-)

3. Mar 30, 2009
### lbrits
A neutral gold atom has 79 electrons and a mass of 3.27x10^-25 kg. As a metal, gold has a density of 19300 kg/m^3. So there are 5.90x10^28 atoms and 4.66x10^30 electrons in a cubic meter of gold. This is the "first year" answer. Hope that helps.
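genneth's recipe, written out numerically (my own sketch, not from the thread). For the Fermi energy one normally counts only the conduction (valence) electrons, one per gold atom in the free-electron picture, rather than all 79:

```python
rho = 19300.0        # density of gold, kg/m^3
m_atom = 3.27e-25    # mass of one gold atom, kg
Z_total = 79         # all electrons per atom
Z_valence = 1        # conduction electrons per atom (free-electron model)

n_atoms = rho / m_atom
print(n_atoms)                 # ~5.90e28 atoms/m^3
print(n_atoms * Z_total)       # ~4.66e30 electrons/m^3 (lbrits' "first year" count)
print(n_atoms * Z_valence)     # ~5.90e28 m^-3, the value that enters the Fermi energy
```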
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9170895218849182, "perplexity": 1436.2059746109219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814300.52/warc/CC-MAIN-20180222235935-20180223015935-00084.warc.gz"}
https://agtb.wordpress.com/2010/10/15/computation-of-correlated-equilibrium-for-concise-games/
## Computation of Correlated Equilibrium for Concise games

It is well known that a correlated equilibrium can be computed in polynomial time in the size of the game. In a beautiful paper published several years ago, Papadimitriou and Roughgarden proved that this extends even to games that are given concisely. I.e., for many natural ways of succinctly representing "exponential size" games, one can still compute a correlated equilibrium in time that is polynomial in the representation length. This holds for graphical games, anonymous games, polymatrix games, congestion games, scheduling games, local effect games, as well as several generalizations. The logic of the algorithm is as follows: A correlated equilibrium can be expressed as a solution to a linear program (sketched explicitly below, after the comments). In the case of a succinctly represented game the LP has exponentially many variables but only polynomially many constraints, and thus its dual can be solved by the Ellipsoid algorithm — provided that an appropriate separation oracle is available. It turns out that, for many succinct representations, this can indeed be done, using a technique of Hart and Schmeidler. The details are somewhat more delicate as the primal LP is unbounded and thus its dual is known to be infeasible, so "solving the dual" will certainly fail, but the failure will supply enough information as to find a solution to the primal.

In a paper by Stein, Parrilo, and Ozdaglar, recently uploaded to the arXiv, the authors claim to have found a bug in the Papadimitriou and Roughgarden paper (I have only skimmed the paper and have not verified it). They note that the Ellipsoid-based algorithm sketched above returns a correlated equilibrium with three properties:

1. All its coefficients are rational numbers, if so was the input.
2. It is a convex combination of product distributions.
3. It is symmetric, if so was the game.

However, they exhibit a simple 3-player symmetric game that has a unique equilibrium satisfying the last two properties, an equilibrium with irrational coordinates. (The game gives each player a strategy set of {0,1}, and the common utility of all players is $s_1 + s_2 + s_3 \bmod 3$.) They also isolate the problem: when attempting to move from the dual LP to the primal solution, the precision needs to go up, requiring an increase in the bounding Ellipsoid. Finally, they show that the algorithm still finds an approximate correlated equilibrium (in time that is polynomial also in $\log \epsilon^{-1}$). The question of whether one can find an exact correlated equilibrium of succinctly represented games in polynomial time remains open.

### 16 Responses

1. Noam, what does it mean to compute an exact eqm. if its coordinates are (could be) irrational? It seems like approximation within arbitrarily good accuracy is as good as it gets to exact computation, no? Here, I mean approximation of the coefficients describing an exact eqm., rather than an approximate eqm. in the sense of epsilon-best-response. Or perhaps what's shown in this recent arxiv paper is the latter concept (computing approximate eqm.)? Robi.

• Well, there *is* a correlated equilibrium with all-rational coordinates, it just isn't a symmetric convex combination of product distributions. Thus for example some form of exhaustive search algorithm will find exactly an all-rational equilibrium.

2. This blog makes an appearance on an economics discussion board: http://www.econjobrumors.com/topic.php?id=15883

3.
Noam, there is a paper of Hart and Mas-Colell with a simple dynamic process leading to correlated equilibrium. Does this have any computational consequences?

• That gives approximation also, but the running time would be polynomial in $\epsilon^{-1}$ rather than its log.

4. Thanks for the nice summary of our paper. I just want to make a minor clarification on bullet 3: it is important that each of the product distributions from bullet 2 be symmetric (assuming the game was), not just that the resulting convex combination of products be symmetric. For an example to illustrate the difference, consider the 2×2 symmetric probability matrix with zeros on the diagonal and 1/2 on the antidiagonal. This can be written as a convex combination of products: 1/2 times the matrix with a 1 in the upper right plus 1/2 times the matrix with a 1 in the lower left. But these product distributions are not symmetric matrices, and indeed this matrix cannot be written as a convex combination of symmetric products.

5. on October 25, 2010 at 4:22 pm | Reply Christos Papadimitriou

Sorry that I had not seen this posting for some time. Here is my take. First of all, to compute a number efficiently means to compute it approximately in time polynomial in the input and the precision bits (since Turing's paper, recall the title…). So, our theorem is correct, our algorithm does compute a correlated equilibrium efficiently. In fact, it's a little better: In this case, the exact solution can be found by a simple, natural and well known trick (first pointed out in a paper by Karp and myself in the early 1980s): Once the precision is twice that of your instance, round to the closest small rational. If it is not a solution, this means that you are very close to a constraint, and you can set it to equality, and proceed from there. We should have mentioned this aspect in our original paper.

The authors do catch an easily fixable bug in our paper: Our "ellipsoid against hope" technique will yield a modified dual which could have a feasible solution (which must be huge), and we don't want that. In our paper we fix it by enlarging the ellipsoid, and the authors here point out that this does not work. But an easier trick does: explicitly bound the variables. So, the way this new paper is written (and its title) is a result of cross-cultural misunderstanding and unfamiliarity. We have explained this to the authors, and I believe they are about to remove their paper from the arxiv. Thanks, Christos

• Thanks for the clarification, Christos. I suppose that the strange thing that still remains is that it is not clear how to efficiently find a *rational* equilibrium.

• on October 25, 2010 at 7:42 pm Christos Papadimitriou
No, this is an LP, there can be no mystery, look again at my third paragraph. —c

• I assume that you are referring to setting the constraints that you are close to to be equalities. It seems that this can maintain symmetry, so I suppose that the point is that you no longer get a convex combination of products?

6. on October 25, 2010 at 11:28 pm | Reply Christos Papadimitriou

Details need to be worked out, how the repeated projection method works in this context: exponential dimension, implicit variables via product distributions. We hadn't thought about it (because one gets fast approximation directly — as it turns out with the slight modification proposed in this paper) but we should have. I'm pretty sure it should work. And it may violate both symmetry and product, I'll think about it. —c

7.
Stein, Parrilo, and Ozdaglar paper seemingly withdrawn from ArXiv.

8. […] delicate issues of numerical precision raised by Stein, Parrilo, and Ozdaglar described here in a previous blog post. This also makes it completely clear that the algorithm operates also in the communication […]

9. […] only little progress, mostly for some market variants. There was some commotion regarding the complexity of correlated equilibria in concise games, but that was soon […]
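For concreteness, here is the linear program referred to at the top of the post, in one standard form (a sketch in my own notation; the papers discussed may state it slightly differently). A correlated equilibrium is a distribution over strategy profiles $s$, i.e., variables $x_s \ge 0$ with $\sum_s x_s = 1$, satisfying, for every player $i$ and every pair of strategies $a, a'$ of player $i$,

$$\sum_{s \,:\, s_i = a} x_s \left( u_i(s) - u_i(a', s_{-i}) \right) \;\ge\; 0.$$

There are exponentially many variables $x_s$ but only polynomially many such incentive constraints, which is why the "Ellipsoid against hope" approach works with the dual of this program.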
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8356765508651733, "perplexity": 625.3863202080628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989043.35/warc/CC-MAIN-20150728002309-00325-ip-10-236-191-2.ec2.internal.warc.gz"}
https://arxiv.org/abs/1707.07685
hep-ph

# Title: Charged Composite Scalar Dark Matter

Abstract: We consider a composite model where both the Higgs and a complex scalar $\chi$, which is the dark matter (DM) candidate, arise as light pseudo Nambu-Goldstone bosons (pNGBs) from a strongly coupled sector with TeV scale confinement. The global symmetry structure is $SO(7)/SO(6)$, and the DM is charged under an exact $U(1)_{\rm DM} \subset SO(6)$ that ensures its stability. Depending on whether the $\chi$ shift symmetry is respected or broken by the coupling of the top quark to the strong sector, the DM can be much lighter than the Higgs or have a weak-scale mass. Here we focus primarily on the latter possibility. We introduce the lowest-lying composite resonances and impose calculability of the scalar potential via generalized Weinberg sum rules. Compared to previous analyses of pNGB DM, the computation of the relic density is improved by fully accounting for the effects of the fermionic top partners. This plays a crucial role in relaxing the tension with the current DM direct detection constraints. The spectrum of resonances contains exotic top partners charged under the $U(1)_{\rm DM}$, whose LHC phenomenology is analyzed. We identify a region of parameters with $f = 1.4\; \mathrm{TeV}$ and $200\;\mathrm{GeV} \lesssim m_\chi \lesssim 400\;\mathrm{GeV}$ that satisfies all existing bounds. This DM candidate will be tested by XENON1T in the near future.

Comments: 36 pages + appendices and references, 10 figures. v2: minor modifications, references added. Matches version published in JHEP
Subjects: High Energy Physics - Phenomenology (hep-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
Journal reference: JHEP 1711 (2017) 094
DOI: 10.1007/JHEP11(2017)094
Report number: TUM-HEP-1090-17
Cite as: arXiv:1707.07685 [hep-ph] (or arXiv:1707.07685v2 [hep-ph] for this version)

## Submission history

From: Ennio Salvioni [view email]
[v1] Mon, 24 Jul 2017 18:00:01 GMT (1980kb,D)
[v2] Thu, 16 Nov 2017 18:59:29 GMT (1980kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566609025001526, "perplexity": 2442.4659558692165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212598.67/warc/CC-MAIN-20180817143416-20180817163416-00493.warc.gz"}
https://physics.stackexchange.com/questions/521592/back-emf-in-an-inductor-with-an-a-c-supply
# Back emf in an Inductor with an A.C supply

If a pure inductor is supplied with an AC voltage, we will always have an AC current and a time-changing electric field in the circuit. That ever-changing electric field will cause a changing magnetic field, which will induce an emf in the inductor that is always equal and opposite to the changing applied voltage (Lenz's law). So every time the voltage of the source changes, it will face an equal and opposite voltage in the inductor.

My question is: if there is always an equal and opposite voltage in the inductor against the applied voltage, then how can the current flow in such a circuit?
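No answer appears in this extract; for reference, here is the textbook resolution in equations (my own addition, not from the page). An ideal inductor obeys $v = L\,\frac{di}{dt}$, so the back emf matches the applied voltage precisely because the current is changing; "equal and opposite" constrains the slope of the current, not its value. In sinusoidal steady state,

$$v(t) = V_0 \cos(\omega t) = L\,\frac{di}{dt} \quad\Longrightarrow\quad i(t) = \frac{V_0}{\omega L}\,\sin(\omega t),$$

a finite current that lags the voltage by 90°.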
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9095510840415955, "perplexity": 262.41206758959675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371824409.86/warc/CC-MAIN-20200408202012-20200408232512-00543.warc.gz"}
http://math.stackexchange.com/questions/302405/existence-of-a-ring-automorphism
# Existence of a ring automorphism

Let $A$ be a commutative ring, $B$ a commutative ring that is also an $A$-algebra of finite presentation. Let $f_1$ and $f_2$ be two elements of $B$ such that $(f_1) + (f_2) = B$ and as $A$-modules, $$B=A \oplus f_1 B \quad ; \quad B=A \oplus f_2 B.$$ In which cases do you know that there exists an $A$-automorphism of rings $\psi : B \to B$ such that $\psi(f_1) = f_2$ and $\psi(f_2) = f_1$?

- Please clarify the categories with which you work. Does $B = A \oplus f_1 B$ refer to the underlying $A$-modules? Is $\phi$ an automorphism of the $A$-algebra $B$? – Martin Brandenburg Feb 13 '13 at 20:05
- Yes. And I would like to know if such a $\psi$ exists. – Damien L Feb 13 '13 at 20:14
- Please edit the question to include all relevant information. – Mariano Suárez-Alvarez Feb 13 '13 at 20:48
- Are $A,B$ commutative? What about other finiteness assumptions, for example is $A$ noetherian? Are $f_1,f_2$ regular? – Martin Brandenburg Feb 13 '13 at 21:55
- $A$ and $B$ commutative yes, but not noetherian. If you can say something in the case where $f_1$ and $f_2$ are regular, it is nice. – Damien L Feb 13 '13 at 22:04

A special case of this question with an algebro-geometric flavor is the following: Let $k$ be a field and let $X$ be an affine algebraic scheme over $k$. Does $\mathrm{Aut}(X)$ act transitively on the $k$-rational points of $X$ which correspond to principal maximal ideals? But this is far from being true since automorphisms preserve the local rings, which can be regular or not. A simple counterexample is given by $X = \mathrm{Spec}(k) \sqcup \mathrm{Spec}(k[\varepsilon]/\varepsilon^2)$. In purely algebraic terms and with the notation of the question, this means $A=k$, $B=k \times k[\varepsilon]/\varepsilon^2$, $f_1=(0,1)$, $f_2 = (1,\varepsilon)$.
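As a quick check that the counterexample satisfies the hypotheses (my own verification, not part of the original answer): write $B = k \times k[\varepsilon]/\varepsilon^2$ componentwise and embed $A = k$ diagonally. Then

$$f_1 B = 0 \times k[\varepsilon]/\varepsilon^2, \qquad f_2 B = k \times (\varepsilon), \qquad (f_1) + (f_2) = B,$$

and in both cases $\dim_k A + \dim_k f_i B = 1 + 2 = 3 = \dim_k B$ with $A \cap f_i B = 0$, so $B = A \oplus f_i B$. Yet no automorphism $\psi$ can have $\psi(f_1) = f_2$: the two points of $\operatorname{Spec}(B)$ have non-isomorphic local rings ($k$ versus $k[\varepsilon]/\varepsilon^2$), so every automorphism fixes both points, whereas $\psi(f_1) = f_2$ would force the vanishing loci of $f_1$ and $f_2$, which are different points, to be exchanged.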
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.923565149307251, "perplexity": 187.3152198984458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095671.53/warc/CC-MAIN-20150627031815-00035-ip-10-179-60-89.ec2.internal.warc.gz"}
http://sorenknudsen.com/news/2020-10-22/
I am attending IEEEVIS virtually in a few days. I look forward to the many interesting papers being presented, and to discussing some of my own work presented there.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9643612504005432, "perplexity": 444.02742244812964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708046.99/warc/CC-MAIN-20221126180719-20221126210719-00246.warc.gz"}
http://math.stackexchange.com/questions/304033/let-a-b-be-two-subsets-of-a-finite-group-g-if-abg-show-that-g-a/304095
# Let $A,B$ be two subsets of a finite group $G$. If $|A|+|B|>|G|$, show that $G=AB$

Let $A,B$ be two subsets of a finite group $G$. If $|A|+|B|>|G|$, show that $G=AB$.

My attempt is: Since $|A|+|B|>|G|$, there exists one common element in both sets $A$ and $B$, say $g$. Then since $G$ is a group, by closure, $g^2 \in G$, which implies that $G \subset AB$. Let $a \in A$, $b \in B$. Then I get stuck at proving the other inclusion.

- When you say $A$ and $B$ are subsets, do you actually mean subgroups? – anon271828 Feb 14 '13 at 15:12
- Inclusion $AB\subset G$ is obvious. But your proof is wrong, since you proved only that some element of $G$ is in $AB$. – Norbert Feb 14 '13 at 15:12
- That $AB\subseteq G$ is trivial. – Tobias Kildetoft Feb 14 '13 at 15:13
- @anon271828 If $A$ and $B$ were subgroups there'd be nothing to do, since $|A|+|B|>|G|$ implies either $|A|>|G|/2$ or $|B|>|G|/2$, which would force either $A=G$ or $B=G$. Although it's possible the question is that straightforward. – JSchlather Feb 14 '13 at 16:06
- @peoplepower, why should this be a counterexample? Surely $\mathbb Z_5 = A + B$ here. – Andreas Caranti Feb 14 '13 at 16:34

Take any $g \in G$. Let $A^{-1} = \{ a^{-1} : a \in A \}$. Then $\vert A^{-1}g \cap B \vert \gt 0$ by easy counting: $\vert A^{-1}g \vert = \vert A \vert$, and two subsets of $G$ with $\vert A \vert + \vert B \vert > \vert G \vert$ cannot be disjoint. Choose $a \in A$ with $b = a^{-1} g \in B$. Then $ab = a a^{-1} g = g$.

- +1 It's the argument that can be used to show that in a finite field $F$ of odd order $q$ every element is the sum of two squares. In this case $A = B = \{ u^2 : u \in F \}$ has $(q+1)/2 > q/2$ elements. – Andreas Caranti Feb 14 '13 at 16:30
- "Let $b=a^{-1} g, \exists a \in A$" Ok, it's clear what you mean, but this really contradicts all rules of mathematically well-defined formulas. Why not "Choose $a \in A$ with $b=a^{-1} g$"? – Martin Brandenburg Feb 14 '13 at 16:35
- Good proof. Two more corrections: You can say $A^{-1}g\cap B\ne\varnothing$, $\left\vert A^{-1}g \cap B \right\vert \ne 0$, or even $\left\vert A^{-1}g \cap B \right\vert \gt 0$, but not $\left\vert A^{-1}g \cap B \right\vert \ne \varnothing$. And using \phi $(\phi)$ instead of \emptyset $(\emptyset)$ or \varnothing $(\varnothing)$ for the empty set is just bad juju. – Travis Bemrose Apr 10 '14 at 20:42
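The statement is easy to sanity-check by brute force on a small group; the sketch below (mine, not from the thread) exhaustively tests all subset pairs of the additive group $\mathbb{Z}_6$ with $|A|+|B|>6$:

```python
from itertools import combinations

n = 6  # work in Z_6 with addition mod n
G = range(n)

def product_set(A, B):
    # "AB" written additively in Z_n: {a + b mod n}
    return {(a + b) % n for a in A for b in B}

ok = True
for ka in range(1, n + 1):
    for kb in range(n - ka + 1, n + 1):      # ensure ka + kb > n
        for A in combinations(G, ka):
            for B in combinations(G, kb):
                if product_set(A, B) != set(G):
                    ok = False
print(ok)   # True: every such pair satisfies A + B = Z_6
```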
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824796915054321, "perplexity": 339.2426202127425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986022.41/warc/CC-MAIN-20150728002306-00138-ip-10-236-191-2.ec2.internal.warc.gz"}
http://tex.stackexchange.com/questions/38831/right-centered-columns-in-lyx
# Right-centered columns in LyX

This is very near to my topic: How to align integers on the right, but still center them. However, I do not get it to work under LyX. This is the code I have:

\begin{table}[H]
\centering
\begin{threeparttable}\caption{\textbf{\label{tab:uebersicht-alle-mspatienten}Untersuchungsergebnisse der soziodemographischen und neurologischen Parameter aller untersuchten MS-Patienten}}
\begin{tabular}{>{\raggedright}p{0.35\columnwidth}>{\centering}p{0.1\columnwidth}>{\centering}p{0.1\columnwidth}>{\centering}p{0.1\columnwidth}>{\centering}p{0.1\columnwidth}}
\toprule
 & \textbf{Anzahl (n)} & \textbf{Prozent} & \textbf{Mittelwert} & \textbf{SD}\tabularnewline
\midrule
\midrule
\textbf{Patienten} & 180 & & & \tabularnewline
\midrule
\textbf{Alter} (in Jahren) & & & 43,9 & 13,1\tabularnewline
\midrule
\textbf{Geschlecht:} - Weiblich - Männlich & ~ 125 55 & ~ 69,4 30,6 & & \tabularnewline
\midrule
\textbf{Erkrankungsdauer der MS} (in Jahren) & & & 12,3 & 8,8\tabularnewline
\midrule
\textbf{Verlaufsform:} - RRMS - SCP - PPMS - CIS & ~ 86 67 24 3 & ~ 47,8 37,2 13,3 1,7 & & \tabularnewline
\midrule
\end{tabular}
\end{threeparttable}
\end{table}

It produces the following (correct) output:

However, if I change the first line to:

\begin{tabular}{>{\raggedright}p{0.35\columnwidth}S[table-format=3.0]>{\centering}p{0.1\columnwidth}>{\centering}p{0.1\columnwidth}>{\centering}p{0.1\columnwidth}}

it gives the following (incorrect) output:

My version of siunitx is 2.2i (2011/06/15) from TeX Live 2011.

- You need to remove the braces (grouping) around your S[table-format=3.0] column definition. – Werner Dec 19 '11 at 22:07
- Like so? Then I get even more errors (all illegal pream-token): \begin{tabular}{>{\raggedright}p{0.35\columnwidth}S[table-format=3.0]>{\centering}p{0.1\columnwidth}>{\centering}p{0.1\columnwidth}>{\centering}p{0.1\columnwidth}} – Jan Dec 19 '11 at 22:11
- Sorry, I had commented out the usepackage command, now the doc. compiles fine. – Jan Dec 19 '11 at 22:15
- Yes. Did you include the siunitx package? If so, what version do you have (add \listfiles before \documentclass and view the contents of your .log file after **File List**)? The most recent version on CTAN is 2011/12/11 v2.4e. – Werner Dec 19 '11 at 22:15
- For completeness, since it seemed to have solved your problem, I've converted my comment into an answer. – Werner Dec 19 '11 at 22:21

The elements contained within the column specification of a tabular environment should be considered similar to single-letter commands that may take parameters. For example, S[<key-value list>] denotes an S column with parameters set according to the (optional) <key-value list>. Wrapping this in braces

\begin{tabular}{...{S[table-format=3.0]}...}

confuses LaTeX. Removing the braces should fix your problem:

\begin{tabular}{...S[table-format=3.0]...}

Also remember to include the siunitx package, since it defines the S column type.
Here is a minimal example of your input producing the correct layout when using siunitx:

\documentclass{article}
\usepackage{siunitx}% http://ctan.org/pkg/siunitx
\usepackage{booktabs}% http://ctan.org/pkg/booktabs
\usepackage{threeparttable}% http://ctan.org/pkg/threeparttable
\usepackage{float}% needed for the [H] placement specifier
\begin{document}
\begin{table}[H]
\centering
\begin{threeparttable}
\caption{\textbf{\label{tab:uebersicht-alle-mspatienten}Untersuchungsergebnisse der soziodemographischen und neurologischen Parameter aller untersuchten MS-Patienten}}
% \begin{tabular}{>{\raggedright}p{0.35\columnwidth}
%   >{\centering}p{0.1\columnwidth}
%   >{\centering}p{0.1\columnwidth}
%   >{\centering}p{0.1\columnwidth}
%   >{\centering}p{0.1\columnwidth}}
\begin{tabular}{>{\raggedright}p{0.35\columnwidth}
  S[table-format=3.0]
  >{\centering}p{0.1\columnwidth}
  >{\centering}p{0.1\columnwidth}
  >{\centering}p{0.1\columnwidth}}
\toprule
 & \textbf{Anzahl (n)} & \textbf{Prozent} & \textbf{Mittelwert} & \textbf{SD}\tabularnewline
\midrule \midrule
\textbf{Patienten} & 180 & & & \tabularnewline
\midrule
\textbf{Alter} (in Jahren) & & & 43,9 & 13,1\tabularnewline
\midrule
\textbf{Geschlecht:} \tabularnewline
- Weiblich & 125 & 69,4 & & \tabularnewline
- M\"{a}nnlich & 55 & 30,6 & & \tabularnewline
\midrule
\textbf{Erkrankungsdauer der MS} (in Jahren) & & & 12,3 & 8,8 \tabularnewline
\midrule
\textbf{Verlaufsform:} \tabularnewline
- RRMS & 86 & 47,8 \tabularnewline
- SCP & 67 & 37,2 \tabularnewline
- PPMS & 24 & 13,3 \tabularnewline
- CIS & 3 & 1,7 & & \tabularnewline
\midrule
\end{tabular}
\end{threeparttable}
\end{table}
\end{document}

TeX constructs a table (tabular) on a row-by-row basis. It seems like your code (at least what is given in your code snippet) doesn't allow for this row-by-row construction. Specifically, as an example,

\textbf{Verlaufsform:} - RRMS - SCP - PPMS - CIS & ~ 86 67 24 3 & ~ 47,8 37,2 13,3 1,7 & & \tabularnewline

needs to be modified to read

\textbf{Verlaufsform:} \tabularnewline
- RRMS & 86 & 47,8 \tabularnewline
- SCP & 67 & 37,2 \tabularnewline
- PPMS & 24 & 13,3 \tabularnewline
- CIS & 3 & 1,7 & & \tabularnewline

otherwise all the entries that seem like they should be processed in the same column (underneath one another) will end up in the same column on a single row. TeX's output will not match your code layout as input - you need to explicitly say where the columns are separated (using &), just like you explicitly state where the row ends (using \\ or \tabularnewline).

- Unfortunately, it now looks like [second screenshot] instead of [first screenshot]. Is there sth. I'm missing? – Jan Dec 19 '11 at 22:28
- @Jan: Please add the entire code you have that generates the output you included to your original post via an edit, as well as the images. As a new user without image posting privileges simply include the image as normal and remove the ! in front of it to turn it into a link. Someone with editing privileges will embed the link for you. That way we can assess what the problem is. Also include your version of the siunitx package. – Werner Dec 19 '11 at 22:30
- Although I know to say thank you by clicking the arrow, I wanted to do it the old way ;-) Btw LyX produced this code. I inserted the extra lines (like you suggested) and now it works. – Jan Dec 19 '11 at 23:23
- @Jan: You're welcome! I think LyX sometimes removes the intricacies of (La)TeX coding from the inexperienced user in order to make it a little more WYSIWYM (or WYSIWYG). However, this can obscure problems. – Werner Dec 19 '11 at 23:51
- @Jan: How did you do this in LyX?
Did you write the whole table in a raw block, or is it possible to somehow use the table editor in combination with the siunitx package's S column type? – andreas-h Apr 11 '13 at 15:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8092938661575317, "perplexity": 3757.475206563097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446250.47/warc/CC-MAIN-20151124205406-00197-ip-10-71-132-137.ec2.internal.warc.gz"}
https://www.ybstudy.com/2021/04/mcqs-on-resonance-tube-for-neet.html?m=1
# Resonance Tube MCQ for NEET | Pdf

Practicing our resonance tube MCQs for Class 12 with answers is one of the best ways to prepare for the NEET exam. By practicing more Physics MCQs with Answers Pdf Download, students can improve their speed and accuracy, which can help them during the NEET exam.

## MCQs on Resonance Tube for NEET

1. In a resonance tube experiment we use_________
(a) Transverse progressive waves
(b) Longitudinal progressive waves
(c) Longitudinal stationary waves
(d) Transverse stationary waves

2. The term 0.3D, D being the inner diameter of a tube, is called________
(b) Rayleigh's correction
(c) Laplace's correction
(d) Epoch

3. If a resonance tube gives two consecutive resonances at lengths of 15 and 48 cm, then the velocity of sound in air is [frequency of fork = 500 Hz]
(a) 320 m/s
(b) 330 m/s
(c) 340 m/s
(d) 350 m/s

4. An organ pipe (A) closed at one end and vibrating in its first harmonic and another pipe (B) open at both ends, vibrating in its third harmonic, are in resonance with a given tuning fork. The ratio of the length of A to that of B is
(a) 1/2
(b) 1/3
(c) 8/3
(d) 1/6

5. The end correction is_________
(a) the distance between the open end of the tube and the tuning fork
(b) the diameter of the tube
(c) the distance between the open end and the nearest node
(d) the distance between the open end of the tube and the antinode just outside the tube

6. If instead of water, the resonance tube is filled with a liquid of density higher than that of water, then the resonating frequency_______
(a) will increase
(b) will decrease
(c) will not change
(d) may increase or decrease

7. In a resonance tube experiment, a tuning fork resonates with an air column of length 12 cm and again resonates when it is 38 cm long. The end correction will be_______
(a) 0.25 cm
(b) 0.5 cm
(c) 0.75 cm
(d) 1 cm

8. The end correction of a resonance column is 1 cm. If the shortest length resonating with a tuning fork is 15 cm, the next resonating length will be
(a) 35 cm
(b) 40 cm
(c) 47 cm
(d) 64 cm

9. In a resonance tube experiment, resonance occurs with a tuning fork of frequency 420 Hz when the length of the air column is 19 cm. Resonance occurs again when the length is increased to 59 cm. What is the velocity of sound in air?
(a) 300 m/s
(b) 336 m/s
(c) 310 m/s
(d) 320 m/s

10. In a resonance tube experiment, the first and second resonances occur when the water levels in the tube are 25 cm and 80 cm below the open end respectively. What is the inner diameter of the resonance tube?
(a) 10/3 cm
(b) 20/3 cm
(c) 25/3 cm
(d) 40/3 cm
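For the resonance-length questions, the standard closed-tube relations are $v = 2f(L_2 - L_1)$ and end correction $e = (L_2 - 3L_1)/2$. A quick numerical check of Q3, Q7 and Q10 (my own working; the page itself gives no answer key):

```python
# Closed-tube resonance relations (illustrative check, not an official key).
def speed_of_sound(f, L1, L2):
    # consecutive resonance lengths differ by half a wavelength: v = f * 2*(L2 - L1)
    return 2 * f * (L2 - L1)

def end_correction(L1, L2):
    # L1 + e = lambda/4 and L2 + e = 3*lambda/4  =>  e = (L2 - 3*L1) / 2
    return (L2 - 3 * L1) / 2

print(speed_of_sound(500, 0.15, 0.48))    # Q3: 330.0 m/s -> option (b)
print(end_correction(0.12, 0.38) * 100)   # Q7: 1.0 cm -> option (d)
print(end_correction(0.25, 0.80) / 0.3)   # Q10: e = 0.3 D  =>  D ~ 0.0833 m = 25/3 cm -> (c)
```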
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8546571731567383, "perplexity": 1745.1723704575495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060882.17/warc/CC-MAIN-20210928184203-20210928214203-00140.warc.gz"}
http://www.nag.com/numeric/CL/nagdoc_cl23/html/G12/g12zac.html
g12 Chapter Contents
g12 Chapter Introduction
NAG C Library Manual

NAG Library Function Document
nag_surviv_risk_sets (g12zac)

1 Purpose

nag_surviv_risk_sets (g12zac) creates the risk sets associated with the Cox proportional hazards model for fixed covariates.

2 Specification

#include <nag.h>
#include <nagg12.h>

void nag_surviv_risk_sets (Nag_OrderType order, Integer n, Integer m, Integer ns, const double z[], Integer pdz, const Integer isz[], Integer ip, const double t[], const Integer ic[], const Integer isi[], Integer *num, Integer ixs[], Integer *nxs, double x[], Integer mxn, Integer id[], Integer *nd, double tp[], Integer irs[], NagError *fail)

3 Description

The Cox proportional hazards model (see Cox (1972)) relates the time to an event, usually death or failure, to a number of explanatory variables known as covariates. Some of the observations may be right-censored, that is, the exact time to failure is not known, only that it is greater than a known time.

Let $t_i$, for $i=1,2,\dots,n$, be the failure time or censored time for the $i$th observation with the vector of $p$ covariates $z_i$. The covariate matrix $Z$ is constructed so that it contains $n$ rows, with the $i$th row containing the $p$ covariates $z_i$. It is assumed that censoring and failure mechanisms are independent.

The hazard function, $\lambda(t,z)$, is the probability that an individual with covariates $z$ fails at time $t$ given that the individual survived up to time $t$. In the Cox proportional hazards model, $\lambda(t,z)$ is of the form

$$\lambda(t,z) = \lambda_0(t) \exp(z^T \beta),$$

where $\lambda_0$ is the base-line hazard function, an unspecified function of time, and $\beta$ is a vector of unknown parameters. As $\lambda_0$ is unknown, the parameters $\beta$ are estimated using the conditional or marginal likelihood. This involves considering the covariate values of all subjects that are at risk at the time when a failure occurs. The probability that the subject that failed had their observed set of covariate values is computed.

The risk set at a failure time consists of those subjects that fail or are censored at that time and those who survive beyond that time. As risk sets are computed for every distinct failure time, it should be noted that the combined risk sets may be considerably larger than the original data. If the data can be considered as coming from different strata such that $\lambda_0$ varies from strata to strata but $\beta$ remains constant, then nag_surviv_risk_sets (g12zac) will return a factor that indicates to which risk set/strata each member of the risk sets belongs rather than just to which risk set.

Given the risk sets, the Cox proportional hazards model can then be fitted using a Poisson generalized linear model (nag_glm_poisson (g02gcc) with nag_dummy_vars (g04eac) to compute dummy variables) using Breslow's approximation for ties (see Breslow (1974)). This will give the same fit as nag_surviv_cox_model (g12bac). If the exact treatment of ties in discrete time is required, as given by Cox (1972), then the model is fitted as a conditional logistic model using nag_condl_logistic (g11cac).

4 References

Breslow N E (1974) Covariate analysis of censored survival data Biometrics 30 89–99
Cox D R (1972) Regression models in life tables (with discussion) J. Roy. Statist. Soc. Ser.
B 34 187–220
Gross A J and Clark V A (1975) Survival Distributions: Reliability Applications in the Biomedical Sciences Wiley

5 Arguments

1: order – Nag_OrderType – Input
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument.
Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or Nag_ColMajor.

2: n – Integer – Input
On entry: $n$, the number of data points.
Constraint: ${\mathbf{n}}\ge 2$.

3: m – Integer – Input
On entry: the number of covariates in array z.
Constraint: ${\mathbf{m}}\ge 1$.

4: ns – Integer – Input
On entry: the number of strata. If ${\mathbf{ns}}>0$ then the stratum for each observation must be supplied in isi.
Constraint: ${\mathbf{ns}}\ge 0$.

5: z[dim] – const double – Input
Note: the dimension, dim, of the array z must be at least
• $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{pdz}}×{\mathbf{m}}\right)$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}×{\mathbf{pdz}}\right)$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
The $(i,j)$th element of the matrix $Z$ is stored in
• ${\mathbf{z}}\left[\left(j-1\right)×{\mathbf{pdz}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• ${\mathbf{z}}\left[\left(i-1\right)×{\mathbf{pdz}}+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
On entry: must contain the $n$ covariates in column or row major order.

6: pdz – Integer – Input
On entry: the stride separating row or column elements (depending on the value of order) in the array z.
Constraints:
• if ${\mathbf{order}}=\mathrm{Nag_ColMajor}$, ${\mathbf{pdz}}\ge {\mathbf{n}}$;
• if ${\mathbf{order}}=\mathrm{Nag_RowMajor}$, ${\mathbf{pdz}}\ge {\mathbf{m}}$.

7: isz[m] – const Integer – Input
On entry: indicates which subset of covariates are to be included in the model.
${\mathbf{isz}}\left[j-1\right]\ge 1$: the $j$th covariate is included in the model.
${\mathbf{isz}}\left[j-1\right]=0$: the $j$th covariate is excluded from the model and not referenced.
Constraint: ${\mathbf{isz}}\left[j-1\right]\ge 0$ and at least one value must be nonzero.

8: ip – Integer – Input
On entry: $p$, the number of covariates included in the model as indicated by isz.
Constraint: ${\mathbf{ip}}=\text{}$ the number of nonzero values of isz.

9: t[n] – const double – Input
On entry: the vector of $n$ failure censoring times.

10: ic[n] – const Integer – Input
On entry: the status of the individual at time $t$ given in t.
${\mathbf{ic}}\left[i-1\right]=0$: indicates that the $i$th individual has failed at time ${\mathbf{t}}\left[i-1\right]$.
${\mathbf{ic}}\left[i-1\right]=1$: indicates that the $i$th individual has been censored at time ${\mathbf{t}}\left[i-1\right]$.
Constraint: ${\mathbf{ic}}\left[\mathit{i}-1\right]=0$ or $1$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$.

11: isi[dim] – const Integer – Input
Note: the dimension, dim, of the array isi must be at least
• ${\mathbf{n}}$ when ${\mathbf{ns}}>0$;
• $1$ otherwise.
On entry: if ${\mathbf{ns}}>0$, the stratum indicators which also allow data points to be excluded from the analysis. If ${\mathbf{ns}}=0$, isi is not referenced.
${\mathbf{isi}}\left[i-1\right]=k$: indicates that the $i$th data point is in the $k$th stratum, where $k=1,2,\dots ,{\mathbf{ns}}$.
${\mathbf{isi}}\left[i-1\right]=0$: indicates that the $i$th data point is omitted from the analysis.
Constraint: if ${\mathbf{ns}}>0$, $0\le {\mathbf{isi}}\left[\mathit{i}-1\right]\le {\mathbf{ns}}$, for $\mathit{i}=0,1,\dots ,{\mathbf{n}}-1$.

12: num – Integer * – Output
On exit: the number of values in the combined risk sets.

13: ixs[mxn] – Integer – Output
On exit: the factor giving the risk sets/strata for the data in x and id.
If ${\mathbf{ns}}=0$ or $1$, ${\mathbf{ixs}}\left[i-1\right]=l$ for members of the $l$th risk set.
If ${\mathbf{ns}}>1$, ${\mathbf{ixs}}\left[i-1\right]=\left(j-1\right)×{\mathbf{nd}}+l$ for the observations in the $l$th risk set for the $j$th strata.

14: nxs – Integer * – Output
On exit: the number of levels for the risk sets/strata factor given in ixs.

15: x[${\mathbf{mxn}}×{\mathbf{ip}}$] – double – Output
Note: the $(i,j)$th element of the matrix $X$ is stored in
• ${\mathbf{x}}\left[\left(j-1\right)×{\mathbf{mxn}}+i-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_ColMajor}$;
• ${\mathbf{x}}\left[\left(i-1\right)×{\mathbf{ip}}+j-1\right]$ when ${\mathbf{order}}=\mathrm{Nag_RowMajor}$.
On exit: the first num rows contain the values of the covariates for the members of the risk sets.

16: mxn – Integer – Input
On entry: the first dimension of the array x and the dimension of the arrays ixs and id.
Constraint: mxn must be sufficiently large for the arrays to contain the expanded risk sets. The size will depend on the pattern of failure times and censored times. The minimum value will be returned in num unless the function exits with NE_INT.

17: id[mxn] – Integer – Output
On exit: indicates if the member of the risk set given in x failed. ${\mathbf{id}}\left[i-1\right]=1$ if the member of the risk set failed at the time defining the risk set and ${\mathbf{id}}\left[i-1\right]=0$ otherwise.

18: nd – Integer * – Output
On exit: the number of distinct failure times, i.e., the number of risk sets.

19: tp[n] – double – Output
On exit: ${\mathbf{tp}}\left[\mathit{i}-1\right]$ contains the $\mathit{i}$th distinct failure time, for $\mathit{i}=1,2,\dots ,{\mathbf{nd}}$.

20: irs[n] – Integer – Output
On exit: indicates rows in x and elements in ixs and id corresponding to the risk sets. The first risk set, corresponding to failure time ${\mathbf{tp}}\left[0\right]$, is given by rows $1$ to ${\mathbf{irs}}\left[0\right]$. The $\mathit{l}$th risk set is given by rows ${\mathbf{irs}}\left[\mathit{l}-2\right]+1$ to ${\mathbf{irs}}\left[\mathit{l}-1\right]$, for $\mathit{l}=2,3,\dots ,{\mathbf{nd}}$.

21: fail – NagError * – Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).

6 Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.

NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.

NE_INT
On entry, element $〈\mathit{\text{value}}〉$ of ic is not equal to $0$ or $1$.
On entry, element $〈\mathit{\text{value}}〉$ of isi is not valid.
On entry, element $〈\mathit{\text{value}}〉$ of ${\mathbf{isz}}<0$.
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{m}}\ge 1$.
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{n}}\ge 2$.
On entry, ${\mathbf{ns}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{ns}}\ge 0$.
On entry, ${\mathbf{pdz}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{pdz}}>0$.
On entry, ${\mathbf{pdz}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{pdz}}\ge {\mathbf{n}}$.

NE_INT_2
On entry, ${\mathbf{pdz}}=〈\mathit{\text{value}}〉$ and ${\mathbf{m}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pdz}}\ge {\mathbf{m}}$.

NE_INT_ARRAY_ELEM_CONS
mxn is too small: min value $\text{}=〈\mathit{\text{value}}〉$.
On entry, there are not ip values of ${\mathbf{isz}}>0$.

NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

7 Accuracy

Not applicable.

8 Further Comments

When there are strata present, i.e., ${\mathbf{ns}}>1$, not all the nxs groups may be present.

9 Example

The data are the remission times for two groups of leukemia patients (see page 242 of Gross and Clark (1975)). A dummy variable indicates which group they come from. The risk sets are computed using nag_surviv_risk_sets (g12zac) and Cox's proportional hazards model is fitted using nag_condl_logistic (g11cac).

9.1 Program Text
Program Text (g12zace.c)

9.2 Program Data
Program Data (g12zace.d)

9.3 Program Results
Program Results (g12zace.r)
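As an informal illustration of the data structure this function produces, here is a small pure-Python sketch of risk-set construction from failure/censoring data (my own code, mirroring the Description above; it is not NAG's implementation and omits strata and covariates):

```python
def risk_sets(t, ic):
    """Build Cox risk sets from times t and censoring flags ic
    (ic[i] == 0: failure at t[i]; ic[i] == 1: censored at t[i])."""
    failure_times = sorted({ti for ti, ci in zip(t, ic) if ci == 0})
    sets = []
    for tp in failure_times:
        # at risk at tp: everyone who fails or is censored at tp or later
        members = [i for i, ti in enumerate(t) if ti >= tp]
        failed = [i for i in members if t[i] == tp and ic[i] == 0]
        sets.append((tp, members, failed))
    return sets

# Tiny example: 5 subjects, two censored observations.
t  = [3.0, 5.0, 5.0, 8.0, 10.0]
ic = [0,   0,   1,   0,   1]
for tp, members, failed in risk_sets(t, ic):
    print(tp, members, failed)
# 3.0 [0, 1, 2, 3, 4] [0]
# 5.0 [1, 2, 3, 4] [1]
# 8.0 [3, 4] [3]
```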
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 128, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9916108846664429, "perplexity": 2541.764243265133}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246658904.34/warc/CC-MAIN-20150417045738-00287-ip-10-235-10-82.ec2.internal.warc.gz"}
https://iwaponline.com/wqrj/article/40/1/51/39377/UV-Spectrophotometry-as-a-Non-parametric
## Abstract

The composition of water and wastewater, varying temporally and spatially, depends on factors such as environmental context, types of pollution sources, weather conditions leading to dilution or solids transportation, length of the sewer network, etc. Because quantitative parameters are often not well adapted to characterizing the variability of wastewater quality, a non-parametric measurement is proposed, based on comparison of the UV absorption spectra of samples. The presence of isosbestic points, occurring in the set of spectra either directly or indirectly after normalization, allows quantification of the variability of a given water or effluent. A normalization step is used when dilution exists, as in the case of a mixture of water types (discharge or rain). Several examples show how to calculate the variability or to estimate the dilution factor from UV spectral data, even without results for physicochemical parameters.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8562734127044678, "perplexity": 1724.9168937693014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505730.14/warc/CC-MAIN-20200401100029-20200401130029-00435.warc.gz"}
http://mathhelpforum.com/advanced-statistics/15550-finding-value-z-score-print.html
Finding value of z-score.

• Jun 1st 2007, 09:24 AM
ThePerfectHacker

Finding value of z-score.
How do I find the value corresponding to a certain z-score, besides using the table of values? I was assuming that if $z>0$ then I need to compute:
$\frac{1}{ \sqrt{\pi} } \int_0^z e^{-x^2} dx$
But that does not work when I numerically integrate it.

• Jun 1st 2007, 09:38 AM
CaptainBlack

Quote:

Originally Posted by ThePerfectHacker
How do I find the value corresponding to a certain z-score, besides using the table of values? I was assuming that if $z>0$ then I need to compute:
$\frac{1}{ \sqrt{\pi} } \int_0^z e^{-x^2} dx$
But that does not work when I numerically integrate it.

$\frac{1}{ \sqrt{2 \pi} } \int_0^z e^{-x^2/2} dx$

RonL
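CaptainBlack's corrected integral can be evaluated without tables via the error function (an illustration of my own, not from the thread): $\frac{1}{\sqrt{2\pi}}\int_0^z e^{-x^2/2}\,dx = \frac{1}{2}\,\mathrm{erf}(z/\sqrt{2})$.

```python
import math

def phi_0_to_z(z):
    # P(0 < Z < z) for a standard normal, no tables needed
    return 0.5 * math.erf(z / math.sqrt(2))

print(phi_0_to_z(1.96))   # ~0.4750, matching the usual z-table entry
```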
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9194103479385376, "perplexity": 923.2907271505335}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123172.42/warc/CC-MAIN-20170423031203-00552-ip-10-145-167-34.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/104418/boundedness-of-riemann-like-sums-on-unbounded-interval
# Boundedness of Riemann-like sums on unbounded interval Hi, I am trying to find suitable conditions (integrability, growth...) on a function $f:\mathbb{R}\to \mathbb{R}$ such that: $$\sum_{k\in\mathbb{Z}}f(kh)h= \mathcal{O}(1),\qquad h\to 0^+.$$ In other words, I am trying to find conditions on $f$ such that the above sum is bounded for $h$ small enough. Alternatively, can I impose conditions on $f$ that make $\sum_{k\in\mathbb{Z}}f(kh)h \to \int_{\mathbb{R}} f(x)\,dx$ as $h\to 0$? Many thanks. Francesco - This looks too much like a homework problem and therefore inappropriate for this site. You might want to try math.stackexchange.com. But if it really is homework, you really should try to figure this out yourself using the definition of an integral as a limit of Riemann sums, as well as the usual $\epsilon-\delta$ type of arguments. –  Deane Yang Aug 10 '12 at 20:52 It is not a homework problem. And since the interval of integration is not closed and bounded, Riemann sums need not converge to the corresponding integral, even when the latter is finite. –  Francesco Mina Aug 11 '12 at 17:05
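One sufficient condition is worth recording here: if $f$ is integrable and of bounded total variation $V$, then $\left|h\sum_k f(kh) - \int_{\mathbb{R}} f\right| \le Vh$, since on each cell $[kh,(k+1)h]$ the difference $f(kh)-f(x)$ is bounded by the variation of $f$ on that cell. A numerical check of the convergence (the truncation bound and the test function are my choices, not part of the question):

```python
import numpy as np

def riemann_like_sum(f, h, k_max=10**5):
    """h * sum of f(k*h) for |k| <= k_max; the truncation is an
    illustration-only assumption (f must decay fast enough)."""
    k = np.arange(-k_max, k_max + 1)
    return h * np.sum(f(k * h))

f = lambda x: np.exp(-x**2)  # integrable and of bounded variation
for h in (1.0, 0.5, 0.1):
    print(h, riemann_like_sum(f, h), "target:", np.sqrt(np.pi))
```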
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9512061476707458, "perplexity": 250.6318090246222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062782.10/warc/CC-MAIN-20150827025422-00228-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.all-dictionary.com/sentences-with-the-word-eclecticism
# Sentence Examples with the word eclecticism Of Eclecticism in Gk. In the 2nd century B.C. a remarkable tendency toward eclecticism began to manifest itself. A wave of eclecticism passed over all the Greek schools in the 1st century B.C. Platonism and scepticism had left undoubted traces upon the doctrine of such a reformer as Panaetius. (Gr. ἐκλέγω, I select), a term used specially in philosophy and theology for a composite system of thought made up of views borrowed from various other systems. Where the characteristic doctrines of a philosophy are not thus merely adopted, but are the modified products of a blending of the systems from which it takes its rise, the philosophy is not properly eclectic. Eclecticism always tends to spring up after a period of vigorous constructive speculation, especially in the later stages of a controversy between thinkers of pre-eminent ability. At the same time, the essence of eclecticism is the refusal to follow blindly one set of formulae and conventions, coupled with a determination to recognize and select from all sources those elements which are good or true in the abstract, or in practical affairs most useful ad hoc. Theoretically, therefore, eclecticism is a perfectly sound method, and the contemptuous significance which the word has acquired is due partly to the fact that many eclectics have been intellectual trimmers, sceptics or dilettanti, and partly to mere partisanship. On the other hand, eclecticism in the sphere of abstract thought is open to this main objection: that, in so far as every philosophic system is, at least in theory, an integral whole, the combination of principles from hostile theories must result in an incoherent patchwork. Cicero in particular is important as showing the effect of philosophical eclecticism upon Roman cultivation, and as the author, or at least popularizer, of the Latin terminology of philosophy. This assumes that every philosophical truth is already contained somewhere in the existing systems. If, however, as it would surely be rash to deny, there still remains philosophical truth undiscovered, but discoverable by human intelligence, it is evident that eclecticism is not the only philosophy. The cause of eclecticism is the unsatisfying character of the creeds of such science, in conjunction with the familiar law that, in triangular or plusquamtriangular controversies, a common hatred will produce an alliance [4: Sextus Empiricus, Pyrrhon.]. To this work, and that of Owen Jones, can be traced the origin of the eclecticism which has laid all past styles of art under contribution. They become in practice Psychology, Ontology and Eclecticism in history.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8083853721618652, "perplexity": 3183.9364927247907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188891.62/warc/CC-MAIN-20170322212948-00203-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-10-cumulative-review-exercises-page-1127/19
## Precalculus (6th Edition) Blitzer The solution set of the equation $2{{x}^{3}}+3{{x}^{2}}-8x+3=0$ is $\left\{ -3,\frac{1}{2},1 \right\}$ We have to find the values of x satisfying $2{{x}^{3}}+3{{x}^{2}}-8x+3=0$ By trial, $x=1$ is a solution of the equation. So, $x-1$ is a factor of the polynomial on the left-hand side. Then, divide $2{{x}^{3}}+3{{x}^{2}}-8x+3$ by $x-1$: $\frac{\left( 2{{x}^{3}}+3{{x}^{2}}-8x+3 \right)}{x-1}=2{{x}^{2}}+5x-3$ We have obtained a quadratic. Now, obtain the factors of $2{{x}^{2}}+5x-3$ as given below. \begin{align} & 2{{x}^{2}}+5x-3=0 \\ & 2{{x}^{2}}+6x-x-3=0 \\ & 2x\left( x+3 \right)-1\left( x+3 \right)=0 \\ & \left( 2x-1 \right)\left( x+3 \right)=0 \end{align} So, $x=\frac{1}{2}$ and $x=-3$. Hence, the values of x are $\left\{ -3,\frac{1}{2},1 \right\}$.
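As an independent numerical check (not part of the textbook solution), numpy's root finder recovers the same solution set:

```python
import numpy as np

# Coefficients of 2x^3 + 3x^2 - 8x + 3, highest degree first.
roots = np.roots([2, 3, -8, 3])
print(sorted(roots.real))  # approximately [-3.0, 0.5, 1.0]
```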
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000079870224, "perplexity": 391.80620404872315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00000.warc.gz"}
http://math.stackexchange.com/questions/97294/effect-the-zero-vector-has-on-the-dimension-of-affine-hulls-and-linear-hulls
# Effect the zero vector has on the dimension of affine hulls and linear hulls I am currently working through "An Introduction to Convex Polytopes" by Arne Brondsted, and there is a question in the exercises for which I would like a hint or a nudge in the right direction; please, no full solutions (yet)! The question is as follows: For any subset $M$ of $\mathbb{R}^d$, show that $\dim(\text{aff M}) = \dim(\text{span M})$ when $\textbf{0} \in \text{aff M}$, and $\dim(\text{aff M}) = \dim(\text{span M}) - 1$ when $\textbf{0} \notin \text{aff M}$. My general approach thus far has been to try to interpret the dimension of the affine hull aff $M$ in terms of the affine basis of $M$ and the linear hull span $M$ in terms of the linear basis of $M$. I tried to prove a lemma: Let $L=(x_{1},...,x_{n})$ be the linear basis of $M$ and let $A=(x_{1},...,x_{k})$ be the affine basis of $M$. For any $M \subseteq \mathbb{R}^d$, $\dim(L)=\dim(\text{span M})=n$ and $\dim(A)=\dim(\text{aff M})=k-1$. I'm fairly certain that it is true, but I'm having trouble coming up with a rigorous proof. In any case, taking that lemma to be true, I was able to come up with the following attempt at a proof: We are guaranteed that there exists a linearly independent n-family $(x_{1},...,x_{n})$ of vectors from $M$ such that $\text{span M}$ is the set of all linear combinations $\sum_{i=1}^{n} \lambda_{i}x_{i}$; and that there exists an affinely independent k-family $(x_{1},...,x_{k})$ of points from $M$ such that $\text{aff M}$ is the set of all linear combinations $\sum_{i=1}^{k} \lambda_{i} x_{i}$ with $\sum_{i=1}^{k} \lambda_{i} = 1$. This is equivalent to saying that for any $M \subseteq \mathbb{R}^d$, there exists a linear basis $L=(x_{1},...,x_{n})$ of $M$ and there exists an affine basis $A=(x_{1},...,x_{k})$ of $M$. We show that when $\textbf{0} \in \text{aff M}$ we have $\dim(\text{aff M}) = \dim(\text{span M})$, and that when $\textbf{0} \notin \text{aff M}$ we have $\dim(\text{aff M}) = \dim(\text{span M}) - 1$. By the lemma, this is equivalent to saying that when $\textbf{0} \in \text{aff M}$ we have $\dim(A)=\dim(L)$, and that when $\textbf{0} \notin \text{aff M}$ we have $\dim(A)= \dim(L) -1$. Assume that for an arbitrary $M \subseteq \mathbb{R}^d$ we have $\textbf{0} \in \text{aff M}$. We want to prove that $\dim(A)=\dim(L)$. Since $A=(x_{1},...,x_{k})$ is an affine basis of $M$, $A$ has dimension $k-1$, and since $L=(x_{1},...,x_{n})$ is a linear basis of $M$, $L$ has dimension $n$. We show that $k-1=n$. Since $\textbf{0} \in \text{aff M}$, some affine combination from $M$ is equal to the zero vector. So, for $\lambda_{1}+...+\lambda_{k}=1$ we have that $$\sum_{i=1}^{k} \lambda_{i} x_{i} = 0$$ [From here somehow relate this to a property of linear dependence or something else that shows $k-1=n$]. Assume that for an arbitrary $M \subseteq \mathbb{R}^d$ we have $\textbf{0} \notin \text{aff M}$. We want to prove that $\dim(A)=\dim(L)-1$, which is that $k-1 = n-1$, so $k=n$. [Try a similar approach to the first part if it ends up working and show that $k=n$]. Therefore, $\dim(\text{aff M}) = \dim(\text{span M})$ when $\textbf{0} \in \text{aff M}$, and $\dim(\text{aff M}) = \dim(\text{span M}) - 1$ when $\textbf{0} \notin \text{aff M}$. I would appreciate some suggestions that have a fair bit of detail, but nothing like a full solution please. If you happen to be able to rip off a quick proof of the lemma I'd like to see that, and if it is wrong or useless please tell me!
EDIT: I realize my other approach was a bit off. From the suggestion Robert gave, I was able to construct the outline of the proof (I just need to actually show that either $B$ or $B \cup \{\textbf{0}\}$ is an affine basis of aff $M$, depending on the condition) as follows: Let $M$ be an arbitrary subset of $\mathbb{R}^d$. We want to show that if $\textbf{0} \in \text{aff M}$, then $\dim(\text{aff M}) = \dim(\text{span M})$, and that if $\textbf{0} \notin \text{aff M}$, then $\dim(\text{aff M}) = \dim(\text{span M}) - 1$. Since aff $M$ is an affine subspace of $\mathbb{R}^d$, for an affine basis $A=(x_{1},...,x_{n})$ of aff $M$ we have $\dim(A)=\dim(\text{aff M})=n-1$. Similarly, since span $M$ is a linear subspace of $\mathbb{R}^d$, for a linear basis $L=(x_{1},...,x_{n})$ of span $M$ we have $\dim(L)=\dim(\text{span M})=n$. So, we equivalently show for a linear basis $B$ of span $M$ that if $\textbf{0} \in \text{aff M}$, then $B \cup \{\textbf{0}\}$ is an affine basis for aff $M$, and that if $\textbf{0} \notin \text{aff M}$, then $B$ is an affine basis for aff $M$. Assume that $\textbf{0} \in \text{aff M}$ and that $B=(x_{1},...,x_{n})$ is a linear basis of span $M$. We want to prove that $B \cup \{\textbf{0}\}$ is an affine basis of aff $M$, since that would show $\dim(B \cup \{\textbf{0}\})=\dim(\text{aff M})=n$. Since $B$ is a linear basis of span $M$, it would follow that $\dim(B)=\dim(\text{span M})=n$, so we would have $\dim(\text{aff M}) = \dim(\text{span M})$. [Show here that $B \cup \{\textbf{0}\}$ is an affine basis of aff $M$]. Assume that $\textbf{0} \notin \text{aff M}$ and that $B=(x_{1},...,x_{n})$ is a linear basis of span $M$. We want to prove that $B$ is an affine basis of aff $M$, since that would show $\dim(B)=\dim(\text{aff M})=n-1$. Since $B$ is a linear basis of span $M$, it would follow that $\dim(B)=\dim(\text{span M})=n$, so we would have $\dim(\text{aff M})=\dim(\text{span M}) -1$. [Show here that $B$ is an affine basis of aff $M$]. Therefore, for any subset $M$ of $\mathbb{R}^d$, if $\textbf{0} \in \text{aff M}$ then $\dim(\text{aff M}) = \dim(\text{span M})$, and if $\textbf{0} \notin \text{aff M}$ then $\dim(\text{aff M}) = \dim(\text{span M}) - 1$. Given that I can prove the condition for the affine basis in each case, would this be a complete proof? I'm still working on showing the condition, but I want to know if the construction of the proof is correct. Thanks! SECOND EDIT: I have posted an attempted proof as an answer; please comment on it and let me know if it is correct. -
Attempted Proof: Let $M$ be an arbitrary subset of $\mathbb{R}^d$. We want to show that $\dim(\text{aff }M) = \dim(\text{span }M)$ when $\textbf{0} \in \text{aff }M$, and $\dim(\text{aff }M) = \dim(\text{span }M) - 1$ when $\textbf{0} \notin \text{aff }M$. Assume that $\textbf{0} \in \text{aff }M$ and let $A=(x_{1},...,x_{n})$ be an affine basis of aff $M$. Then, there exists an affine combination from $M$ that is equal to the zero vector. So, $\lambda_{1}x_{1} + ... + \lambda_{n}x_{n} = \textbf{0}$ with $\lambda_{1}+...+\lambda_{n}=1$. So, by the preclusion of $\lambda_{1}=...=\lambda_{n}=\textbf{0}$, we then have that $A$ is linearly dependent and so $\dim(\text{span }M) \leq n-1$. Yet, $A$ is an affine basis of aff $M$ and so $\dim(\text{aff }M)=n-1$. By definition, $\dim(\text{aff }M) \leq \dim(\text{span }M)$. Thus, $n-1=\dim(\text{aff }M)\leq \dim(\text{span }M) \leq n-1$. Therefore, $\dim(\text{aff }M)=\dim(\text{span }M)$. Assume that $\textbf{0} \notin \text{aff }M$ and let $L=(x_{1},...,x_{n})$ be a linear basis of span $M$. Then, $L$ is linearly independent and $\dim(\text{span }M)=n$. Since the n-family $(x_{1},...,x_{n})$ of vectors from $L$ is linearly independent, $L$ is also affinely independent because any $(n-1)$-family $(x_{1},...,x_{n-1})$ of vectors from $L$ is linearly independent. Since $\text{aff }M \subseteq \text{span }M$, and $\textbf{0} \notin \text{aff }M$ then $\text{aff }M = \text{aff }L$. Thus, $L$ is an affine basis of aff $M$ and $\dim(\text{aff }M)=n-1$. Therefore, $\dim(\text{aff }M)=\dim(\text{span }M)-1$. Therefore, for any subset $M$ of $\mathbb{R}^d$, we have that $\dim(\text{aff }M) = \dim(\text{span }M)$ when $\textbf{0} \in \text{aff }M$, and $\dim(\text{aff }M) = \dim(\text{span }M) - 1$ when $\textbf{0} \notin \text{aff }M$. QED The one point I feel I may not be justified in saying is, Since $\text{aff }M \subseteq \text{span }M$, and $\textbf{0} \notin \text{aff }M$ then $\text{aff }M = \text{aff }L$. I know that $\text{aff }M = \text{aff }L$ must be true when $\textbf{0} \notin \text{aff }M$, but I may not have given a sufficient explanation. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9819839000701904, "perplexity": 73.35749351438884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278385.1/warc/CC-MAIN-20160524002118-00058-ip-10-185-217-139.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2678613/on-proving-that-a-splitting-field-over-a-field-l-is-also-a-splitting-field-ove
# On proving that a splitting field over a field $L$ is also a splitting field over a subfield of $L$ The argument quoted below comes from my textbook1. In the argument, $L$ is a finite extension of a field $K$. Let $\{z_1, z_2, \dots, z_n\}$ be a basis for $L$ over $K$. Each $z_i$ is algebraic over $K$ 2, with minimum polynomial $m_i$ (say). Let $m = m_1m_2\dots m_n$ and let $N$ be a splitting field for $m$ over $L$. Then $N$ is also a splitting field for $m$ over $K$, since $L$ is generated over $K$ by some of the roots of $m$ in $N$. I am puzzled by the last sentence ("Then $N$ is also..."). To prove that $N$ is a splitting field over $K$, one needs to show that 1. $m$ splits completely over $N$; and 2. $m$ does not split completely over any field $E$ such that $K \subset E \subset N$. Assertion (1) follows from the choice of $N$. I don't see how the observation "since $L$ is generated over $K$ by some of the roots of $m$ in $N$" proves assertion (2), or in any other way helps to prove that $N$ is a splitting field of $m$ over $K$. I would appreciate any additional detail that may help flesh out the argument. 1 Fields and Galois theory, by John M. Howie, p. 107. 2 This follows from theorem 3.12, on p. 59 of the book, and the assumption that $L$ is finite. This theorem asserts simply that "every finite extension is algebraic." • One usual definition of splitting field of a polynomial $f$ over a field $L$ is a field extension $K$ over which $f$ splits completely, and which is generated over $L$ by the roots of the polynomial, essentially. – Pedro Tamaroff Mar 6 '18 at 0:10 • @PedroTamaroff: thanks; it's still fuzzy in my mind, but now at least I see the shape of the argument. – kjo Mar 6 '18 at 0:16 Let $r=\deg m$, and denote by $u_1, \dots, u_r$ the roots of $m$ in the splitting field $N$ of $m$ over $L$. This means that $N=L(u_1, \dots, u_r)$. Now, by hypothesis, $L=K(z_1,\dots, z_n)=K(u_{i_1},\dots, u_{i_n})$, where each $u_{i_j}$ is the root of $m$ equal to $z_j$ (each $z_j$ is a root of its minimum polynomial $m_j$, which divides $m$), so $$N=K(u_{i_1},\dots , u_{i_n})(u_1, \dots, u_r),$$ which of course is the same as $K(u_1, \dots, u_r)$. As $m\in K[X]$, this proves $N$ is a splitting field of $m$ over $K$ as well. • Thank you, but it burned my eyes that the same variable ($N$) was being used for the name of a field extension and the degree of a polynomial. I went ahead and edited your post to eliminate this naming collision. I hope you don't mind. Also, the answer would be more helpful if it said something about the meaning of the indices $i_1, i_2, \dots, i_n$ in the equality $N = K(u_{i_1},\dots,u_{i_n})(u_1,\dots, u_r)$. – kjo Mar 6 '18 at 0:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8973246812820435, "perplexity": 59.854635378622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256546.11/warc/CC-MAIN-20190521182616-20190521204616-00043.warc.gz"}
https://www.khanacademy.org/math/calculus-home/integral-calculus/indefinite-integrals
# Indefinite integrals If f is the derivative of F, then F is an antiderivative of f. We also call F the "indefinite integral" of f. In other words, indefinite integrals and antiderivatives are, essentially, reverse derivatives. Why differentiate in reverse? Good question! Keep going and you'll find out! ### Antiderivatives If f' is the derivative of f, then f is an antiderivative of f'. To find an antiderivative of a function we need to perform some kind of reverse differentiation. Learn about it here. ### Indefinite integrals of common functions Indefinite integrals (or antiderivatives) are really just backward differentiation. Therefore, the indefinite integral of eˣ is eˣ+c, the indefinite integral of 1/x is ln|x|+c, the indefinite integral of sin(x) is -cos(x)+c, and the indefinite integral of cos(x) is sin(x)+c. ### Review: Indefinite integrals & antiderivatives Review your understanding of indefinite integrals and antiderivatives with some challenge problems.
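Each listed antiderivative can be checked by differentiating it back, which is the whole point of "reverse differentiation". A quick symbolic check (sympy is my choice of tool here; it returns log(x), valid for x > 0, and omits the +c):

```python
import sympy as sp

x = sp.symbols('x')
for f in (sp.exp(x), 1 / x, sp.sin(x), sp.cos(x)):
    F = sp.integrate(f, x)                        # the indefinite integral
    assert sp.simplify(sp.diff(F, x) - f) == 0    # differentiates back to f
    print(f, '->', F)
```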
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.964188814163208, "perplexity": 1350.3621883633748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170884.63/warc/CC-MAIN-20170219104610-00621-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/circular-motion-with-static-friction.722460/
# Circular Motion with static friction 1. Nov 13, 2013 ### NickyQT 1. The problem statement, all variables and given/known data A bus passenger has her laptop sitting on the flat seat beside her as the bus, traveling at 10.0 m/s, goes around a turn with a radius of 25.0 m. What minimum coefficient of static friction is necessary to keep the laptop from sliding? Given: V = 10 m/s r = 25.0 m 2. Relevant equations Fc = mv^2 / r Ff = Us x Fn 3. The attempt at a solution mv^2/r = Umg U = v^2/gr U = 10^2/(9.8 x 25) U = 100/245 U = 0.41 *rounded I think I got it right because I checked my lesson and this is the way they did it, but I don't understand what is happening with the forces for me to get this solution. Can someone please clarify what's actually happening and why this is what I'm supposed to do (if it is in fact correct)? Thank you - Nicky 2. Nov 13, 2013 ### PhanthomJay Hi Nicky, welcome to PF! It appears that you got the correct answer by plugging the given values into the equation you found in your lesson, without understanding why you were using that equation. This is definitely not a good thing. You should become familiar with friction, Newton's laws, free body diagrams, and centripetal acceleration. An object moving in a curved path (like a circle) experiences an acceleration, v^2/r, toward the center of the circle (why?) which must be caused by a net force acting toward the center of the circle, per Newton's 2nd law F_net = ma. In this case, the only force acting on the laptop toward the center of the circle is the friction force, uN, where N is found by applying Newton's first law in the vertical direction. Since the friction force is the only force acting toward the center, it is the net force acting toward the center, or the so-called centripetal force. 3. Nov 13, 2013 ### NickyQT Thank you so much, I just needed to picture it in my head, but now I understand how it works. I really appreciate you taking the time to help me out =).
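The thread's whole calculation is the single formula μ_min = v²/(gr), with the mass cancelling. A one-function sketch (g = 9.8 m/s², matching the numbers above):

```python
def min_static_mu(v, r, g=9.8):
    """Minimum coefficient of static friction for an object to follow
    a turn of radius r at speed v: mu*m*g >= m*v^2/r, and m cancels."""
    return v**2 / (g * r)

print(round(min_static_mu(10.0, 25.0), 2))  # 0.41
```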
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8005115985870361, "perplexity": 478.49381394884995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645604.18/warc/CC-MAIN-20180318091225-20180318111225-00469.warc.gz"}
https://math.stackexchange.com/questions/2238830/what-is-foundations-of-mathematics-and-why-isnt-it-taught-more-often
# What are the foundations of mathematics? And why aren't they taught more often? I have just come across "Foundations of Mathematics". Wikipedia describes it as Foundations of mathematics can be conceived as the study of the basic mathematical concepts (number, geometrical figure, set, function, etc.) and how they form hierarchies of more complex structures and concepts, especially the fundamentally important structures that form the language of mathematics (formulas, theories and their models giving a meaning to formulas, definitions, proofs, algorithms, etc.) also called metamathematical concepts, with an eye to the philosophical aspects and the unity of mathematics. Now this seems a bit strange to me. If foundations of mathematics really do deal with the study of basic mathematical concepts, shouldn't they be taught to us as the first thing? Also, in theory, wouldn't this mean that someone who has a good grasp of the foundations of mathematics would have a very good understanding of mathematics as a whole? If that is true, then why aren't the foundations of mathematics taught at an earlier stage in life, and more often? I apologize for the oversimplification, but I am not an ace mathematician. Hence I want some insight into what the foundations of mathematics are. • "Why aren't the foundations of mathematics taught at an earlier stage in life?" Because "basic" math concepts are quite abstract and not intuitive, and thus quite difficult to grasp before having a deep knowledge of math. – Mauro ALLEGRANZA Apr 18 '17 at 15:48 In ancient Greece, Euclid proposed an idea mathematicians have liked (in principle) ever since: have assumptions called "axioms" we take for granted, then prove "theorems" from them. In theory, you could write a list of all the axioms of modern mathematics, then take it from there. Euclid had a few axioms of geometry, as if to show us how it's done. However, until c. 1900 mathematicians rarely took that approach. They were very careful about the way results were proven, but pretty much intuited the starting points. Nowadays, they can say, "Number theory has the Peano axioms" or "All mainstream mathematics has a model in ZFC" (a version of set theory), or something like that; but such axioms were designed to capture what we were already doing, and today when you teach people these axioms it's the familiarity of the implications that makes them palatable. There was actually a teach-foundations-to-children experiment called the New Math, but it went badly. The basic problem was that, even though theoretically you'd think the easiest way to understand the subject is to start from axioms then see where they lead, in practice that's not a particularly intuitive way to do it, if only because you need some quite heavy-handed machinery even to state the starting points when they're done properly. For example, how do you found the idea of real numbers as a continuum? One option is as Dedekind cuts on rational numbers; another is as equivalence classes of Cauchy sequences of rational numbers. Yeah: good luck explaining that in primary school, even if rational numbers have been explained or taken for granted. We've had much more success teaching mathematics a very different way that more closely (though not exactly) parallels the history of ideas' development. First you teach natural numbers, then you introduce fractions, then you get to surds and complex numbers and so on. (I'm skipping topics other than "types of numbers" to simplify the timetable.)
The reason this works, presumably, is that each level of the resulting education provides the right context to understand the problems that motivated the next development in mathematical theory. But to motivate the foundations of mathematics the same way, it turns out you have to natter about some very abstract and complicated ideas like consistency and decidability. • I don't think we know that axiomatics originates with Euclid. Indeed, we know very little about the context of Euclid. We have other examples of axiomatics that are approximately contemporaneous without attribution to Euclid: Archimedes states the axiom we normally name after him as an axiom in On the Sphere and the Cylinder, and he attributes it to Eudoxus, on which Euclid's theory of proportion is definitely based. There are several references to pre-Euclidean "Elements", but we have absolutely nothing about what their structure is. – Chappers Apr 17 '17 at 18:36 • Just because one extremely-poorly-thought-out approach failed does not imply that there is no good approach. Just saying... – user21820 Nov 7 '17 at 10:27 Let me put it this way: if you had to learn precisely how an engine works before you could drive a car, there would be very few drivers on the road. Mathematicians mainly drive cars rather than analyzing the functioning of the engine. For some details on the metaphor of set theory as the automobile, see this article. • I used to think mathematicians were the ones analyzing the engine, while the "non-math" people were just driving the car. – ng.newbie Apr 21 '17 at 13:34 • Well, it depends what you mean by the car. If the car is, say, calculus, then sure, you have plenty of physicists, engineers, and other scientists exploiting it rather than tinkering with the "engine". On the other hand, if the car is set theory, then most practicing mathematicians today are mostly ignorant of it. For some details on the metaphor of set theory as an automobile, see the article linked in my answer. – Mikhail Katz Apr 23 '17 at 8:05
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8229451775550842, "perplexity": 479.84912427733263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255182.37/warc/CC-MAIN-20190519221616-20190520003616-00258.warc.gz"}
https://www.physicsforums.com/threads/some-work-and-elastic-energy-problems.135351/
Some work and elastic energy problems 1. Oct 8, 2006 Byrgg I'm having trouble trying to figure out some of the questions from my homework; here are the problems: 1. A 68.5 kg skier rides a 2.56 m ski lift from the base of a mountain to the top. The lift is at an angle of 13.9 degrees to the horizontal. Determine the skier's gravitational potential energy at the top of the mountain relative to the base of the mountain. Because of the law of conservation of energy that I'm supposed to apply, I figured that the total energy would be the same at both points. But when I tried to figure out the numbers, I got two unknowns, and I wasn't even sure if I had organized the problem properly. Any help here would be greatly appreciated. 2. A child's toy shoots a rubber dart of mass 7.8 g, using a compressed spring with a force constant of 3.5 x 10^2 N/m. The spring is initially compressed 4.5 cm. All the elastic potential energy is converted into kinetic energy of the dart. a) What is the elastic potential energy of the spring? I got this part. b) What is the speed of the dart as it leaves the toy? I was thinking of using $E_k = 1/2mv^2$ to calculate this; is this right? There are more questions, but I'll get to them after these have been solved. 2. Oct 8, 2006 Regarding part 1, you don't need to apply the law of energy conservation, since you're only asked to determine the gravitational potential energy of the skier at the top of the mountain. Regarding part 2, apply the law of energy conservation. 3. Oct 8, 2006 Byrgg Part 2 of which question? 4. Oct 8, 2006 Sorry, I wasn't specific enough - question 2, part b.
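Both parts reduce to single formulas: E_p = mgL·sin θ for the lift (the height gained is L·sin θ), and for the dart, (1/2)kx² = (1/2)mv² gives v = x·sqrt(k/m). A sketch with the thread's numbers in SI units (the printed results are my own evaluation, not the textbook's answers):

```python
from math import sin, radians, sqrt

# Question 1: gravitational PE gained riding the lift (g = 9.8 m/s^2).
m, L, theta, g = 68.5, 2.56, radians(13.9), 9.8
E_p = m * g * L * sin(theta)   # height gained is L*sin(theta)
print(E_p)                     # ~413 J for the lift length as quoted

# Question 2: spring PE, then dart speed from (1/2)kx^2 = (1/2)mv^2.
k, x, m_dart = 3.5e2, 0.045, 7.8e-3
E_spring = 0.5 * k * x**2
v = x * sqrt(k / m_dart)       # equivalently sqrt(2*E_spring/m_dart)
print(E_spring, v)             # ~0.354 J and ~9.5 m/s
```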
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8752168416976929, "perplexity": 383.21204952867834}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542972.9/warc/CC-MAIN-20161202170902-00509-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/mathematical-formulation-of-kaluza-klein-theory.496974/
# Mathematical formulation of Kaluza-Klein theory 1. May 8, 2011 ### rbtqwt Good morning, what is the mathematical formalism used for a rigorous formulation of the Kaluza-Klein theory? Is there any reference? Thanks! 2. May 8, 2011 ### mitchell porter Just about any theory with compact extra dimensions is a Kaluza-Klein theory. So it could be a classical field theory, a quantum field theory, or a string theory. Take a look at the Wikipedia article, especially the references at the end.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.953581690788269, "perplexity": 500.3199764129149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719416.57/warc/CC-MAIN-20161020183839-00037-ip-10-171-6-4.ec2.internal.warc.gz"}
http://entitymodelling.org/theory/overview.html
Entity Modelling www.entitymodelling.org ## The mathematics of dependent types, category theory and entity modelling The mathematical notions of (a) contextual categories and (b) generalised algebraic theories encapsulate the idea of types dependent on contexts which in turn are constructed from types1,2. Now types of things are equally concepts, and (c) entity modelling as described here is about modelling the types of things of interest in a particular situation or as part of a particular endeavour (the chosen perspective); in an entity model, composition relationships model dependencies between types of things. (a), (b) and (c) are therefore closely related. A project of mine has been to try to narrow the gap between (a) and (c) by finding a variation on the algebra of contextual categories which, unlike contextual categories, explicitly includes conjunctive dependencies as we find them in entity modelling (here on this website, at least) and also in the schemas of network databases. A paper I published on this in 19863, called "Formalising the Network and Hierarchical Data Models", I later came to realise was a long way off the mark. Subsequently I have come up with revised definitions, which I have drafted here: DependencyCategories.pdf. In addition, here are some relatively recent notes on the generalised algebraic theory of contextual categories: theGATofCCs.pdf. I wrote these after having learnt of the definition of C-system given by Vladimir Voevodsky. Finally, here is a recent write-up of some work that I did at the time of my thesis; basically it is a variation on contextual categories: in brief, meta-GAT algebras are to a contextual category as clones are to Lawvere theories. I wrote these notes suspecting, as was subsequently confirmed, that Voevodsky was producing a paper describing similar structures to my meta-GAT algebras (he calls them B-systems): MetaGAT and MetaGAT algebras.pdf. 1 Cartmell, John. Generalised algebraic theories and contextual categories. PhD thesis, University of Oxford, 1978. 2 Cartmell, John. Generalised algebraic theories and contextual categories. Annals of Pure and Applied Logic, 32(0):209-243, 1986. 3 Cartmell, John. Formalising the network and hierarchical data models - an application of categorical logic. In Pitt, David and Abramsky, Samson and Poigné, Axel and Rydeheard, David, editors, Category Theory and Computer Programming, volume 240 of Lecture Notes in Computer Science, pages 466-492, Springer Berlin Heidelberg, 1986.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8073902726173401, "perplexity": 1600.6197404192897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219197.89/warc/CC-MAIN-20180821230258-20180822010258-00052.warc.gz"}
http://mathoverflow.net/users/10400/plusepsilon-de?tab=activity&sort=all&page=3
plusepsilon.de Marc Palm, Postdoc in Mathematics, Uni Hamburg (http://www.plusepsilon.de) Apr14 answered Fourier series of functions on compact groups Apr14 comment The sum over zeros in the explicit formula for $\zeta(s)$ If you mean the explicit formula of Weil, the test function should be bounded by $(1+|\Im z|)^{2 + \epsilon}$ and we also have absolute convergence there. So I am not sure if your argument can be made rigorous, because it would imply an explicit formula with more relaxed conditions on the allowed test functions. Apr12 comment What is the current status of representations of $GL_n(F)$ (and other algebraic groups)? I also recommend Bushnell-Henniart, Local Langlands for GL(2), for the Bushnell-Kutzko theory. It is more digestible for a beginner, I think. Apr11 revised What is the current status of representations of $GL_n(F)$ (and other algebraic groups)? added 17 characters in body Apr11 comment What is the current status of representations of $GL_n(F)$ (and other algebraic groups)? Thanks, that's what I meant. Apr11 revised What is the current status of representations of $GL_n(F)$ (and other algebraic groups)? added 874 characters in body Apr11 answered What is the current status of representations of $GL_n(F)$ (and other algebraic groups)? Apr11 comment What is the current status of representations of $GL_n(F)$ (and other algebraic groups)? When $F$ is a local non-archimedean field, ... Apr11 comment What is the intuition behind the definition of cuspidal representations? "some and therefore every" Apr10 comment What is the intuition behind the definition of cuspidal representations? Yes, the unipotent radical of any Borel subgroup (defined over $F$). Note that you are allowed to conjugate by elements of $GL_2(F)$. There are many expositions on how to move between classical and adelic language; see e.g. Bump's book. Apr10 comment leading-order behaviour of riemann zeta function? $\zeta$ gets arbitrarily small in $1/2 \leq \Re s <1$. I am not sure about $0< \Re s <1/2$. Apr10 comment leading-order behaviour of riemann zeta function? I am not getting it? Is your "I'm looking for something...." not stronger than LH, which is certainly not known? Apr10 reviewed Approve suggested edit on multiplication of two ergodic and stationary processes Apr10 answered Looking for paper: Weil's original 1952 "Sur les formules explicites de la théorie des nombres premiers" Apr10 reviewed Approve suggested edit on Does Cauchy continuity imply uniform continuity? [No.] Apr10 comment Reference for Kronecker-Weyl theorem in full generality Sorry, I didn't see the part: "a proof under the assumption that the θj are linearly independent over the rational numbers will not suffice for me." I deleted my answer. Apr8 comment Inequality for a gamma function The logarithmic derivative of the Selberg zeta function grows like $CT^2$ as $\Im z = T \rightarrow \infty$, which can be seen from the Weyl law. More important for its growth is the Barnes G-function. $\Gamma$ contributes at most $T \log(T)$ in the non-compact setting. Apr8 answered What is the intuition behind the definition of cuspidal representations? Apr7 comment What is the logarithmic derivative of an (intertwining) operator? Note that my computations apply only to the highest type. For the computations at the real places and the unramified complex cases, you can have a look at my PhD thesis. 
There is a good reason for working with highest types/smallest weights = irreducible $K$-reps as soon as you have pinned down the local conditions. Apr6 comment Homeomorphisms that admit a decomposition Okay, my mistake ;) I see now that it seems to be more complicated...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9166712164878845, "perplexity": 589.8553880368194}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500834883.60/warc/CC-MAIN-20140820021354-00253-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.varsitytutors.com/high_school_math-help/solid-geometry/geometry/pyramids
# High School Math : Pyramids ## Example Questions ### Example Question #1 : How To Find The Volume Of A Pyramid What is the volume of a pyramid with a height of  and a square base with a side length of ? Explanation: To find the volume of a pyramid we must use the equation V = 1/3 * (Area of Base) * Height. We must first solve for the area of the square base using A = side^2: we plug in the side length and square it. We then plug our answer into the volume equation together with the height and multiply to get our final answer for the volume of the pyramid. ### Example Question #2 : How To Find The Volume Of A Pyramid Find the volume of the following pyramid. Round the answer to the nearest integer. Explanation: The formula for the volume of a pyramid is V = 1/3 * l * w * h, where w is the width of the base, l is the length of the base, and h is the height of the pyramid. In order to determine the height of the pyramid, you will need to use the Pythagorean Theorem to find the slant height. Now we can use the slant height to find the pyramid height, once again using the Pythagorean Theorem. Plugging in our values then gives the volume. ### Example Question #1 : Pyramids Find the volume of the following pyramid. Explanation: The formula for the volume of a pyramid is V = 1/3 * l * w * h, where l is the length of the base, w is the width of the base, and h is the height of the pyramid. Use the Pythagorean Theorem to find the length of the slant height. Now, use the Pythagorean Theorem again to find the length of the height. Plugging in our values then gives the volume. ### Example Question #4 : How To Find The Volume Of A Pyramid What is the sum of the number of vertices, edges, and faces of a square pyramid? Explanation: A square pyramid has one square base and four triangular sides. Vertices (where two or more edges come together): 5. There are four vertices on the base (one at each corner of the square) and a fifth at the top of the pyramid. Edges (where two faces come together): 8. There are four edges on the base (one along each side) and four more along the sides of the triangular faces extending from the corners of the base to the top vertex. Faces (planar surfaces): 5. The base is one face, and there are four triangular faces that form the top of the pyramid. Total: 5 + 8 + 5 = 18. ### Example Question #5 : How To Find The Volume Of A Pyramid An architect wants to make a square pyramid and fill it with 12,000 cubic feet of sand. If the base of the pyramid is 30 feet on each side, how tall does he need to make it? Explanation: Volume of Pyramid = 1/3 * Area of Base * Height, so 12,000 ft^3 = 1/3 * 30 ft * 30 ft * H = 300 ft^2 * H, giving H = 12,000 / 300 = 40. H = 40 ft. ### Example Question #1 : How To Find The Volume Of A Pyramid The volume of a 6-foot-tall square pyramid is 8 cubic feet. How long are the sides of the base? Explanation: The volume of a pyramid is V = 1/3 * (Area of Base) * Height. Thus: 8 = 1/3 * A * 6 = 2A, so the area of the base is 4 square feet. Therefore, each side is 2 feet. ### Example Question #1 : How To Find The Surface Area Of A Pyramid What is the surface area of a pyramid with a square base length of 15 and a slant height (the height from the midpoint of one of the side lengths to the top of the pyramid) of 12? Explanation: To find the surface area of a pyramid we must add together the areas of all five of the shapes creating the pyramid. We have four triangles that all have the same area and a square that supports the pyramid. To find the area of the square we take the side length of 15 and square it: 15 * 15 = 225. The area of the square is 225. 
To find the area of the triangle we must use the equation for the area of a triangle, which is A = 1/2 * base * height. Plug in the slant height 12 as the height of the triangle and use the side length of the square, 15, as the base to get 1/2 * 15 * 12 = 90. The area of each triangle is 90. We then multiply the area of each triangle by 4 to find the area of all four triangles: 4 * 90 = 360. The four triangles have a surface area of 360. We add the surface area of the four triangles to the area of the square to get the answer for the surface area of the pyramid, which is 225 + 360 = 585. ### Example Question #2 : How To Find The Surface Area Of A Pyramid Find the surface area of the following pyramid. Explanation: The formula for the surface area of a pyramid is: Where  is the length of the slant height,  is the width of the base, and  is the length of the base In order to determine the areas of the triangles, you will need to use the Pythagorean Theorem to find the slant height: Plugging in our values, we get: ### Example Question #3 : How To Find The Surface Area Of A Pyramid Find the surface area of the following pyramid. Explanation: The formula for the surface area of a pyramid is: Where  is the length of the base,  is the width of the base, and  is the slant height Use the Pythagorean Theorem to find the length of the slant height: Plugging in our values, we get: ### Example Question #1 : How To Find The Surface Area Of A Pyramid What is the surface area of a square pyramid with a base side equal to 4 and a slant length equal to 6?
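The two formulas this page relies on, V = (1/3) * base area * height and SA = base area + four triangle areas, translate directly into code. A sketch for the square-base case, checked against the architect example and the final question (the evaluations are mine, not the site's posted answers):

```python
def pyramid_volume(side, height):
    """V = (1/3) * base_area * height for a square base."""
    return side**2 * height / 3

def pyramid_surface_area(side, slant):
    """Square base plus four triangles with base `side`, height `slant`."""
    return side**2 + 4 * (0.5 * side * slant)

print(pyramid_volume(30, 40))      # 12,000 cubic feet: the architect example
print(pyramid_surface_area(4, 6))  # 16 + 48 = 64 for the final question
```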
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335422277450562, "perplexity": 320.07669016876264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057913.34/warc/CC-MAIN-20210926175051-20210926205051-00080.warc.gz"}