url
stringlengths 15
1.13k
| text
stringlengths 100
1.04M
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|
https://nyuscholars.nyu.edu/en/publications/unusual-proton-transfer-kinetics-in-water-at-the-temperature-of-m | # Unusual Proton Transfer Kinetics in Water at the Temperature of Maximum Density
Emilia V. Silletta, Mark E. Tuckerman, Alexej Jerschow
Research output: Contribution to journalArticlepeer-review
## Abstract
Water exhibits numerous anomalous properties, many of which remain poorly understood. One of its intriguing behaviors is that it exhibits a temperature of maximum density (TMD) at 4 °C. We provide here new experimental evidence for hitherto unknown abrupt changes in proton transfer kinetics at the TMD. In particular, we show that the lifetime of OH- ions has a maximum at this temperature, in contrast to hydronium ions. Furthermore, base-catalyzed proton transfer shows a sharp local minimum at this temperature, and activation energies change abruptly as well. The measured lifetimes agree with earlier theoretical predictions as the temperature approaches the TMD. Similar results are also found for heavy water at its own TMD. These findings point to a high propensity of forming fourfold coordinated OH- solvation complexes at the TMD, underlining the asymmetry between hydroxide and hydronium transport. These results could help to further elucidate the unusual properties of water and related liquids.
Original language English (US) 076001 Physical Review Letters 121 7 https://doi.org/10.1103/PhysRevLett.121.076001 Published - Aug 13 2018
## ASJC Scopus subject areas
• Physics and Astronomy(all)
## Fingerprint
Dive into the research topics of 'Unusual Proton Transfer Kinetics in Water at the Temperature of Maximum Density'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8761487007141113, "perplexity": 3901.286743927351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00118.warc.gz"} |
https://www.semanticscholar.org/paper/Improved-Sobolev-embeddings%2C-profile-decomposition%2C-Palatucci-Pisante/0a99d73c566fb41cb0acc5b136f498c8c29c57cb | # Improved Sobolev embeddings, profile decomposition, and concentration-compactness for fractional Sobolev spaces
@article{Palatucci2013ImprovedSE,
title={Improved Sobolev embeddings, profile decomposition, and concentration-compactness for fractional Sobolev spaces},
journal={Calculus of Variations and Partial Differential Equations},
year={2013},
volume={50},
pages={799-829}
}
• Published 24 February 2013
• Mathematics
• Calculus of Variations and Partial Differential Equations
We obtain an improved Sobolev inequality in $$\dot{H}^s$$H˙s spaces involving Morrey norms. This refinement yields a direct proof of the existence of optimizers and the compactness up to symmetry of optimizing sequences for the usual Sobolev embedding. More generally, it allows to derive an alternative, more transparent proof of the profile decomposition in $$\dot{H}^s$$H˙s obtained in Gérard (ESAIM Control Optim Calc Var 3:213–233, 1998) using the abstract approach of dislocation spaces…
Weighted inequalities for the fractional Laplacian and the existence of extremals
• Mathematics
Communications in Contemporary Mathematics
• 2019
In this paper, we obtain improved versions of Stein–Weiss and Caffarelli–Kohn–Nirenberg inequalities, involving Besov norms of negative smoothness. As an application of the former, we derive the
The concentration-compactness principle for fractional order Sobolev spaces in unbounded domains and applications to the generalized fractional Brezis–Nirenberg problem
• Mathematics
Nonlinear Differential Equations and Applications NoDEA
• 2018
In this paper we extend the well-known concentration-compactness principle for the Fractional Laplacian operator in unbounded domains. As an application we show sufficient conditions for the
Concentration-compactness principle for nonlocal scalar field equations with critical growth
• Mathematics
• 2017
Multiple positive solutions for nonlinear critical fractional elliptic equations involving sign-changing weight functions
• Mathematics
• 2016
AbstractIn this article, we prove the existence and multiplicity of positive solutions for the following fractional elliptic equation with sign-changing weight functions:
Existence and multiplicity results for fractional p-Kirchhoff equation with sign changing nonlinearities
• Mathematics
• 2015
Abstract In this paper, we show the existence and multiplicity of nontrivial non-negative solutions of the fractional p-Kirchhoff problem M∫ ℝ 2n |u(x)-u(y)| p |x-y| n+ps d x d y(-Δ) p s u=λf(x)|u|
Asymptotic behavior of Palais–Smale sequences associated with fractional Yamabe-type equations
• Mathematics
• 2015
In this paper, we analyze the asymptotic behavior of Palais-Smale sequences associated to fractional Yamabe-type equations on an asymptotically hyperbolic Riemannian manifold. We prove that
The existence and multiplicity of the normalized solutions for fractional Schrödinger equations involving Sobolev critical exponent in the L2-subcritical and L2-supercritical cases
• Mathematics
• 2022
Abstract This paper is devoted to investigate the existence and multiplicity of the normalized solutions for the following fractional Schrödinger equation: (P) ( − Δ ) s u + λ u = μ ∣ u ∣ p − 2 u + ∣
Ground state solutions and decay estimation of Choquard equation with critical exponent and Dipole potential
• Mathematics
Discrete and Continuous Dynamical Systems - S
• 2022
In this paper, we study a class of Choquard equations with critical exponent and Dipole potential. We prove the existence of radial ground state solutions for Choquard equations by using the refined
## References
SHOWING 1-10 OF 82 REFERENCES
The concentration-compactness principle in the Calculus of Variations
Description du défaut de compacité de l’injection de Sobolev. ESAIM: Control, Optimisation and Calculus of Variations
• 1998
Lions: The concentration-compactness principle in the calculus of variations. The limit case, part 2
• Rev. Mat. Iberoamericana
• 1985
Oru: Inégalités de Sobolev précisées. Séminaire sur lesles´lesÉquations aux Dérivées Partielles
• ´ Ecole Polytech., Palaiseau., Exp. no. IV
• 1996
Wheeden: Weighted Inequalities for Fractional Integrals on Euclidean and Homogeneous Spaces
• Amer. J. Math
• 1992
Lions: The concentration-compactness principle in the calculus of variations. The limit case, part 1
• Rev. Mat. Iberoamericana
• 1985
Best constants in Sobolev inequalities | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9437255263328552, "perplexity": 2182.53783325501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00109.warc.gz"} |
http://www.maa.org/publications/periodicals/mathematics-magazine/mathematics-magazine-june-2004 | # Mathematics Magazine - June 2004
### ARTICLES
Falling Down a Hole through the Earth
Andrew J. Simoson
171-189
Drop a magic pebble into the rotating Earth, allowing it to fall, drilling its own hole without resistance. With respect to a stationary reference frame, the pebble's path is an ellipse whose center is the center of the Earth (a result proved by Newton) and whose nearest approach to the Earth's center is over 300 km when the pebble is dropped at the Equator. With respect to the rotating reference frame of the Earth, the pebble's hole is a star-shaped figure, an analysis of which leads naturally into determining the pebble's dynamic distance from straight down and into a discussion of the early pebble-drop experiments of Galileo and Hooke. When hypothetically changing the Earth's gravity field, the resultant paths twist as precessing ellipses and raise some natural questions such as why the Spirograph Nebula looks like a tangle of pebble holes falling through the gravitational field of the dust cloud.
### PROOF WITHOUT WORDS
Euler’s Arctangent Identity
Rex H. Wu
189
The identity arctan(1/x) = arctan(1/(x+y)) + arctan(y/(x(x+y)+1) is one of many elegant arctangent identities discovered by Leonard Euler. He employed them in the computation of π.
Upper Bounds on the Sum of Principal Divisors of an Integer
Roger B. Eggleton and William P. Galvin
190-200
By the Fundamental Theorem of Arithmetic, a positive integer is uniquely the product of distinct prime-powers, which we call its principal divisors. For instance, the principal divisors of 90 are 2, 5, and 9. Brian Alspach recently asked for a nice proof that, apart from prime-powers, every odd integer greater than 15 is more than twice the sum of its principal divisors. Thus 21 is more than twice 3 plus 7, while 15 is almost, but not quite, twice 3 plus 5. How can Alspach’s observation be proved? To what extent is it true for even integers? For example, 20 is more than twice 4 plus 5, but 22 is less than twice 2 plus 11. When an integer has more than two principal divisors, are there stronger bounds on their sum? Part of the challenge is to find elegant bounds, the rest is to find elementary proofs.
PROOF WITHOUT WORDS
Every Octagonal Number Is the Difference between Two Squares
Roger B. Nelsen
200
A proof without words that the nth octagonal number is (2n-1)2-(n-1) 2. NOTES
Centroids Constructed Graphically
Tom M. Apostol and Mamikon A. Mnatsakanian
201-210
Two different methods are described for locating the centroid of a finite number of points in 1-space, 2-space, or 3-space by graphical construction, without using coordinates or numerical calculations. The first involves making a guess and forming a closed polygon. The second is an inductive procedure that combines centroids of two disjoint sets to determine the centroid of their union. For 1-space and 2-space, both methods can be applied in practice with drawing instruments or with computer graphic programs.
There Are Only Nine Finite Groups of Fractional Linear Transforms with Integer Coefficients
Gregory P. Dresden
211-218
Many of us recall fractional linear transforms (also known as Mobius transforms) from complex analysis, where we learned that these conformal functions map lines and circles to lines and circles (in the complex plane). In this article, we show that under composition, and using only integer coefficients, there are exactly nine distinct finite groups of fractional linear transforms. We establish the identical result for projective integer matrices of dimension two. The proof requires only a small amount of abstract algebra, and is entirely appropriate for upper-division math majors.
The One-dimensional Random Walk Problem in Path
Oscar Bolina
218-225
Path representations are very useful in physics and mathematics to transform hard algebraic problems into easier combinatorial ones, and this is true even of combinatorial problems themselves! We put the elementary problem of the random walk of a particle on the real line into a path representation form and use the geometry of the lattice paths to solve for the probability that the particle ends up at the origin given that it starts anywhere.
Why Some Elementary Functions Are Not Rational
Gabriela Chaves and José Carlos Santos
225-226
We prove, in an elementary way, that the restrictions to an open interval of certain elementary functions, such as the sine function or the exponential function, are not rational functions, and we do it by using the concept of degree of a rational function.
Another Look at Sylow's Third Theorem
Eugene Spiegel
227-232
Sylow's third theorem tells us that the number of p-Sylow subgroups in a finite group is congruent to 1 modulo p, and, in particular, there must exist p-Sylow subgroups of the group. But, to obtain Sylow's third theorem, it has been necessary to first show the existence of a p-Sylow subgroup. In this note we show how Mobius inversion can be used to obtain a generalized Sylow third theorem directly. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8270319700241089, "perplexity": 505.95537990012076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768285.15/warc/CC-MAIN-20141217075248-00153-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/370029/a-packing-ball-problem-verify-lower-bound-on-gaussian-width-of-sparse-ball | # A packing ball problem: verify lower bound on Gaussian width of sparse ball
Note: This should be a geometry problem about packing balls. All the necessary probability pre-requisite is given below.
Consider a set of sparse vectors: $$T_{n,s}:=\{x\in \mathbb{R}^n:\|x\|_0 \le s, \|x\|_2\le1\}$$ where $$\|x\|_0 \le s$$ simply means there can be at most $$s$$ non-zero coordinates. Gaussian width of a set of vectors is defined as $$w(T)=\sup_{x\in T}\langle x, g\rangle$$ where $$g\sim \mathcal{N}(0,I_n)$$.
The claim is that $$w(T_{n,s})\ge c\sqrt{s\log{(2n/s)}}.$$
The author suggests that we can use so-called Sudakov inequality which states that $$w(T)\ge \epsilon\sqrt{\log P(T,d,\epsilon)}$$ where $$P(T,d,\epsilon)$$ is ANY valid $$\epsilon-$$packing of $$T_{n,s}$$. A $$\epsilon-$$packing is a subset of $$T$$ such that for any pair of points in the packing has distance larger than $$\epsilon>0$$.
A partial result of mine: I considered packing $$T_{n,s}$$ with the following: there are $$\binom{n}{s}\ge (n/s)^s$$ ways to choose the k nonzero coordinates out of n coordinates. For each choice, we consider assigning all s non-zero coordinates as $$\sqrt{1/s}.$$ This way any pair of points has distance at least $$\epsilon=\sqrt{2}/\sqrt{s}$$ (because they have at least two non-overlapping coordinates). This packing gives $$w(T)\ge \frac{\sqrt{2}}{\sqrt{s}}\sqrt{s \log (2n/s)}$$. I'm still missing $$\sqrt{s}$$ factor.
How can I choose the packing more optimally to recover this $$\sqrt{s}$$ factor?
Thank you!
• The usual trick: you have ${n\choose s}$ vectors of your type and for each vector there are at most ${s\choose s/2}{n\choose s/2}$ vectors of your type that overlap with a given vector in at least $s/2$ coordinates. Thus, doing the greedy algorithm, you can choose at least ${n\choose s}/[{n\choose s/2}{s\choose s/2}]\ge 2^{-s}[\frac{n-s}{s}]^{s/2}$ vectors at constant distance from each other, For $s<n/10$ this crude bound gives the desired result and then you just use the monotonicity of the width. Aug 24, 2020 at 20:36
• Thank you for the insight. Aug 24, 2020 at 20:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9776571989059448, "perplexity": 246.43322664818237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529538.2/warc/CC-MAIN-20220519141152-20220519171152-00160.warc.gz"} |
http://mathoverflow.net/questions/128981/bounding-a-recursively-defined-sequence?answertab=votes | # Bounding a recursively defined sequence
I have a sequence $\lambda_0,\lambda_1,\ldots,$ which is defined recursively as
$$\lambda_0 = \frac{1}{2},$$
and
$$\lambda_{k+1} = \max_{\lambda\in [1,b]} \left(\frac{1}{2\lambda}\prod_{0\leq j\leq k}\left(\frac{\lambda-\lambda_j}{\lambda+\lambda_j}\right)^2\right), \qquad k\geq 0,$$
where $b>1$. I would like to find a relatively sharp upper bound for $\lambda_k$. Numerically, it seems that $\lambda_k$ is bounded by something like,
$$\lambda_k \leq \mathcal{O}\left(e^{-8k/\log(b)}\right).$$
For instance, the following very simple argument gives an upper bound that is very weak and I'm seeking techniques to do better:
Note that we have, for $k\geq 0$,
$$\lambda_{k+1} \leq \max_{\lambda\in[1,b]} \left(\frac{1}{2\lambda}\prod_{0\leq j\leq k-1}\left(\frac{\lambda-\lambda_j}{\lambda+\lambda_j}\right)^2\right)\max_{\lambda\in[1,b]}\left(\frac{\lambda - \lambda_k}{\lambda+\lambda_k}\right)^2\leq \lambda_k \left(\frac{b-1}{b+1}\right)^2$$
and hence,
$$\lambda_k \leq \left(\frac{b-1}{b+1}\right)^{2k}\lambda_0 =\frac{1}{2}\left(\frac{b-1}{b+1}\right)^{2k},\qquad k\geq0.$$
I think the usual name for this kind of definition is "recursive", not "implicit". An implicit definition would be of the form $f(\lambda_n)=0$, or $f(\lambda_n, \lambda_{n+1})=0$, etc. Each term in your sequence is defined quite explicitly as a function of the previous terms. Or did I miss something? – Goldstern Apr 28 '13 at 23:19
How about using the estimate for $\lambda_k$ that you obtained, when estimating the second $\max$? Currently, for estimates that $\max$ you bound $\lambda_k$ by $1$. It is the bootstrapping method popularized by Münchausen :-) – Boris Bukh Apr 29 '13 at 14:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.993122398853302, "perplexity": 276.03687616631316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067214.90/warc/CC-MAIN-20141017150107-00295-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://www.groundai.com/project/the-effect-of-combined-magnetic-geometries-on-thermally-driven-winds-i-interaction-of-dipolar-and-quadrupolar-fields/ | Dipole and Quadrupole Mixing
# The Effect of Combined Magnetic Geometries on Thermally Driven Winds I: Interaction of Dipolar and Quadrupolar Fields
## Abstract
Cool stars with outer convective envelopes are observed to have magnetic fields with a variety of geometries, which on large scales are dominated by a combination of the lowest order fields such as the dipole, quadrupole and octupole modes. Magnetised stellar wind outflows are primarily responsible for the loss of angular momentum from these objects during the main sequence. Previous works have shown the reduced effectiveness of the stellar wind braking mechanism with increasingly complex, but singular, magnetic field geometries. In this paper, we quantify the impact of mixed dipolar and quadrupolar fields on the spin-down torque using 50 MHD simulations with mixed field, along with 10 of each pure geometries. The simulated winds include a wide range of magnetic field strength and reside in the slow-rotator regime. We find that the stellar wind braking torque from our combined geometry cases are well described by a broken power law behaviour, where the torque scaling with field strength can be predicted by the dipole component alone or the quadrupolar scaling utilising the total field strength. The simulation results can be scaled and apply to all main-sequence cool stars. For Solar parameters, the lowest order component of the field (dipole in this paper) is the most significant in determining the angular momentum loss.
magnetohydrodynamics (MHD) - stars: low-mass - stars: stellar winds, outflows - stars: magnetic field- stars: rotation, evolution
1\epstopdfsetup
outdir=./
## 1. Introduction
The spin down of cool stars () is a complex function of mass and age, as shown by the increasing number of rotation period measurements for large stellar populations (Barnes, 2003; Irwin & Bouvier, 2009; Barnes, 2010; Agüeros et al., 2011; Meibom et al., 2011; McQuillan et al., 2013; Bouvier et al., 2014; Stauffer et al., 2016; Davenport, 2017). Observed properties of these stars show a wide range of mass loss rates, coronal temperatures, field strengths and geometries, which all connect with stellar rotation to control the loss of angular momentum (Reiners & Mohanty, 2012; Gallet & Bouvier, 2013; Van Saders & Pinsonneault, 2013; Brown, 2014; Matt et al., 2015; Gallet & Bouvier, 2015; Amard et al., 2016; Blackman & Owen, 2016; See et al. in prep). Despite the wide range of interlinking stellar properties an overall trend of spin down with an approximately Skumanich law is observed at late ages; (Skumanich, 1972; Soderblom, 1983).
For Sun-like stars on the main sequence, the spin-down process is governed primarily by their magnetised stellar winds which remove angular momentum over the star’s lifetime. Parker (1958) originally posited that stellar winds must exist due to the thermodynamic pressure gradient between the high temperature corona and interplanetary space. Continued solar observations have constrained theoretical models for the solar wind to a high degree of accuracy (van der Holst et al., 2014; Usmanov et al., 2014; Oran et al., 2015). Recent models of the solar wind are beginning to accurately reproduce the energetics within the corona and explain the steady outflow of plasma into the Heliosphere (e.g. Grappin et al., 1983; Van der Holst et al., 2010; Pinto et al., 2016). The wind driving is now known to be much more complex than a thermal pressure gradient, with authors typically heating the wind through the dissipation of Alfvén waves in the corona. Other cool stars are observed with x-ray emissions indicating hot stellar coronae like that of the Sun (Rosner et al., 1985; Hall et al., 2007; Wright et al., 2004; Wolk et al., 2005). Similar stellar winds and wind heating mechanisms are therefore expected to exist across a range of Sun-like stars. Assuming equivalent mass loss mechanisms, results from the Solar wind are incorporated into more general stellar wind modelling efforts (e.g. Cohen & Drake, 2014; Alvarado-Gómez et al., 2016).
Detailed studies of wind driving physics remain computationally expensive to run, so are usually applied on a case-by-case basis. How applicabile the heating physics gained from modelling the Solar wind is to other stars still in question. With the reliability of such results even for the global properties of a given star in question, large parameter studies with simpler physics remain useful. A more general method can allow for parametrisations which are more appropriate to the variety of stellar masses and rotation periods found in observed stellar populations. Parker-type solutions remain useful for this due to their simplicity and versatility (Parker, 1965; Mestel, 1968; Sakurai, 1990; Keppens & Goedbloed, 1999). In these solutions, wind plasma is accelerated from the stellar surface and becomes transonic at the sonic surface. With the addition of magnetic fields the wind also become trans-alfvénic, i.e faster than the Alfvén speed, at the Alfvén surface. Weber & Davis (1967) showed for a one-dimensional magnetised wind that the Alfvén radius represented a lever arm for the spin-down torque. Since the introduction of this result, many researchers have produced scaling laws for the Alfvén radius (Mestel, 1984; Kawaler, 1988; Matt & Pudritz, 2008; Matt et al., 2012; Ud-Doula et al., 2009; Pinto et al., 2011; Réville et al., 2015a; Pantolmos. in prep) all of which highlight the importance of the magnetic field strength and mass loss rate in correctly parametrising a power law dependence. In such formulations, the mass loss rate is incourporated as a free parameter as the physical mechanisms which determines it are not yet completely understood. Measuring the mass loss rate from Sun-like stars is particularly difficult due to the wind’s tenuous nature and poor emission. Wood (2004) used Lyman- absorption from the interaction of stellar winds and their local interstellar medium to measure mass loss rates, but the method is model-dependent and only available for a few stars. Theoretical work from Cranmer & Saar (2011) predicts the mass loss rates from Sun-like stars, but it is uncertain if the physics used within the model scales correctly between stars. Therefore, parameter studies where the mass loss rate is an unknown parameter are needed.
In addition to the mass loss rate, the angular momentum loss rate is strongly linked with the magnetic properties of a given star. Frequently researchers assume the dipole component of the field to be the most significant in governing the global wind dynamics (e.g. Ustyugova et al., 2006; Zanni & Ferreira, 2009; Gallet & Bouvier, 2013; Cohen & Drake, 2014; Gallet & Bouvier, 2015; Matt et al., 2015; Johnstone et al., 2015). Zeeman Doppler Imaging (ZDI) studies (e.g. Morin et al., 2008; Petit et al., 2008; Fares et al., 2009; Vidotto et al., 2014b; Jeffers et al., 2014; See et al., 2015, 2016; Folsom et al., 2016; Hébrard et al., 2016; See et al., 2017), provide information on the large scale surface magnetic fields of active stars. Observations have shown stellar magnetic fields to be much more complex than simple dipoles, containing combinations of many different field modes. ZDI is a topographic technique typically decomposes the field at the stellar surface into individual spherical harmonic modes. The 3D field geometry can then be recovered with field extrapolation techniques using the ZDI map as an inner boundary. Several studies have considered how these observed fields affect the global wind properties. Typically used to determine an initial 3D field solution, then a magnetohydrodynamics code evolves this initial state in time until a steady state solution for the wind and magnetic field geometry is attained (e.g. Vidotto et al., 2011; Cohen et al., 2011; Garraffo et al., 2016b; Réville et al., 2016; Alvarado-Gómez et al., 2016; Nicholson et al., 2016; do Nascimento Jr et al., 2016). These works are less conducive to the production of semi-analytical formulations, as the principle drivers of the spin-down process are hidden within complex field geometries, rotation and wind heating physics.
A few studies show systematically how previous torque formulations depend on magnetic geometry using single modes. Réville et al. (2015a) explored thermally driven stellar winds with dipolar, quadrupolar and octupolar field geometries. They concluded that higher order field modes produce a weaker torque for the same field strength and mass loss, which is supported by results from Garraffo et al. (2016a). Despite these studies and works like them, only one study has systematic scaled the mass loss rate for a mixed field geometry field (Strugarek et al., 2014a). However, the aforementioned studies of the angular momentum loss from Sun-like stars have yet to address the systematic addition of individual spherical harmonic field modes.
Mixed geometry fields are observed within our closest star, the Sun, which undergoes a 11 year cycle oscillating between dipolar and quadrupolar field modes from cycle minimum to maximum respectively (DeRosa et al., 2012). Observed Sun-like stars also exhibit a range of spherical harmonic field combinations. Simple magnetic cycles are observed using ZDI, both HD 201091 (Saikia et al., 2016) and HD 78366 (Morgenthaler et al., 2012) show combinations of the dipole, quadrupole and octupole field modes oscillating similarly to the solar field. Other cool stars exist with seemingly stochastic changing field combinations (Petit et al., 2009; Morgenthaler et al., 2011). Observed magnetic geometries all contain combinations of different spherical harmonic modes with a continuous range of mixtures, it is unclear what impact this will have on the braking torque.
In this study we will investigate the significance of the dipole field when combined with a quadrupolar mode. We focus on these two field geometries, which are thought to contribute in anti-phase to the solar cycle and perhaps more generally to stellar cycles in cool stars. Section 2 covers the numerical setup with a small discussion of the magnetic geometries for which we develop stellar wind solutions. Section 3 presents the main simulation results, including discussion of the qualitative wind properties and field structure, along with quantitative parametrisations for the stellar wind torque. Here we also highlight the dipole’s importance in the braking, and introduce an approximate scaling relation for the torque. Finally in Section 4 we focus on the magnetic field in the stellar wind, first a discussion of the overall evolution of the flux, then a discussion of the open flux and opening radius within our simulations. Conclusions and thoughts for further work can then be found in Section 5. The Appendix contains a short note on the wind acceleration profiles of our wind solutions.
## 2. Simulation Method
### 2.1. Numerical Setup
This work uses the magnetohydrodynamics (MHD) code PLUTO (Mignone et al., 2007; Mignone, 2009), a finite-volume code which solves Riemann problems at cell boundaries in order to calculate the flux of conserved quantities through each cell. PLUTO is modular by design, capable of interchanging solvers and physics during setup. The present work uses a diffusive numerical scheme, the solver of Harten, Lax, and van Leer, HLL (Einfeldt, 1988), which allows for greater numerical stability in the higher strength magnetic field cases. The magnetic field solenoidality condition () is maintained using the Constrained Transport method (See Tóth (2000) for discussion).
The MHD equations are solved in a conservative form, with each equation relating to the conservation of mass, momentum and energy, plus the induction equation for magnetic field,
∂ρ∂t+∇⋅ρv = 0, (1) ∂m∂t+∇⋅(mv−BB+IpT) = ρa, (2) ∂E∂t+∇⋅((E+pT)v−B(v⋅B)) = m⋅a, (3) ∂B∂t+∇⋅(vB−Bv) = 0. (4)
Here is the mass density, is the velocity field, is the gravitational acceleration, is the magnetic field2, is the combined thermal and magnetic pressure, is the momentum density given by and is the total energy density. The energy of the system is written as , with representing the internal energy per unit mass of the fluid. is the identity matrix. A polytropic wind is used for this study, such that the closing equation of state takes the form where represents the polytropic index.
We assume the wind profiles to be axisymmetric and solve the MHD equations using a spherical geometry in 2.5D, i.e. our domain contains two spatial dimensions () but allows for 3D axisymetric solutions for the fluid flow and magnetic field using three vector components (). The domain extends from one stellar radius () out to with a uniform grid spacing in and a geometrically stretched grid in , which grows from an initial spacing of to at the outer boundary. The computational mesh contains grid cells. These choices allow for the highest resolution near the star, where we set the boundary conditions that govern the wind profile in the rest of the domain.
Initially a polytropic parker wind (Parker, 1965; Keppens & Goedbloed, 1999) with fills the domain, along with a super-imposed background field corresponding to our chosen magnetic geometry and strength. During the time-evolution, the plasma pressure, density, and poloidal components of the magnetic field () are held fixed at the stellar surface, whilst the poloidal components of the velocity () are allowed to evolve in response to the magnetic field (the boundary is held with and ). We then enforce the flow at the surface to be parallel to the magnetic field (). The star rotates as a solid body, with linearly extrapolated into the boundary and set using the stellar rotation rate ,
vϕ=Ω∗rsinθ+vp⋅Bp|Bp|2Bϕ, (5)
where the subscript “p” denotes the poloidal components () of a given vector. This condition enforces an effective rotation rate for the field lines which, in steady state ideal MHD, should be equal to the stellar rotation rate and conserved along field lines (Zanni & Ferreira, 2009; Réville et al., 2015a). This ensures the footpoints of the stellar magnetic field are correctly anchored into the surface of the star. The final boundary conditions are applied to the outer edges of the simulation, a simple outflow (zero derivative) is set at allowing for the outward transfer of mass, momenta and magnetic field, along with an axisymmetric condition along the rotation axis ( and ). Due to the supersonic flow properties at the outer boundary and its large radial extent compared with the location of the fast magnetosonic surface, any artefacts from the outer boundary cannot propagate upwind into the domain.
The code is run, following the MHD equations above, until a steady state solution is found. The magnetic fields modify the wind dynamics compared to the spherically symmetric initial state, with regions of high magnetic pressure shutting off the radial outflow. In this way, the applied boundary conditions allow for closed and open regions of flow to form (e.g Washimi & Shibata, 1993; Keppens & Goedbloed, 2000), as observed within the solar wind. In some cases of strong magnetic field small reconnection events are seen, caused by the numerical diffusivity of our chosen numerical scheme. Reconnection events are also seen in Pantolmos & Matt (in prep) and discussed within their Appendix. We adopt a similar method for deriving flow quantities in cases exhibiting periodic reconnection events. In such cases, once a quasi-steady state is established a temporal average of quantities such as the torque and mass loss are used.
Inputs for the simulations are given as ratios of characteristic speeds which control key parameters such as the wind temperature (), field strength () and rotation rate (). Where is the sound speed at the surface, is the Alfvén speed at the north pole, is the rotation speed at the equator, is the surface escape speed and is the keplerian speed at the equator. In this way, all simulations represent a family of solutions for stars with a range of gravities. As this work focuses on the systematic addition of dipolar and quadrupolar geometries, we fix the rotation rate for all our simulations. Matt et al. (2012) showed that the non-linear effects of rotation on their torque scaling can be neglected for slow rotators. They defined velocities as a fraction of the breakup speed,
f=vrotvkep∣∣∣r=R∗,θ=π/2=Ω∗R3/2∗(GM∗)1/2. (6)
The Alfvén radius remains independent of the stellar spin rate until , after which the effects of fast rotation start to be important. For this study a solar rotation rate is chosen (), which is well within the slow rotator regime. We set the temperature of the wind with , higher than used previosuly in Réville et al. (2015a). This choice of higher sound speed drives the wind to slightly higher terminal speeds, which are more consistent with observed solar wind speeds. Each geometry is studied with 10 different field strengths controlled by the input parameter , which is defined here with the Alfvén speed on the stellar north pole (see following Section). Table 1 lists all our variations of for each geometry.
Due to the use of characteristic speeds as simulation inputs, our results can be scaled to any stellar parameters. For example, using solar parameters, the wind is driven by a coronal temperature of 1.4MK and our parameter space covers a range of stellar magnetic field strengths from 0.9G to 87G over the pole. Changing these normalisations will modify this range.
### 2.2. Magnetic Field Configuration
Within this work, we consider magnetic field geometries that encompass a range of dipole and quadrupole combinations with different relative strengths. We represent the mixed fields using the ratio, , of dipolar field to the total combined field strength.
In this study the magnetic fields of the dipole and quadrupole are described in the formalism of Gregory et al. (2010) using polar field strengths,
Br,dip(r,θ) = Bl=1∗(R∗r)3cosθ, (7) Bθ,dip(r,θ) = 12Bl=1∗(R∗r)3sinθ, (8) Br,quad(r,θ) = 12Bl=2∗(R∗r)4(3cos2θ−1), (9) Bθ,quad(r,θ) = Bl=2∗(R∗r)4cosθsinθ. (10)
The total field, comprised of the sum of the two geometries,
where the total polar field , is controlled by the parameter,
This work considers aligned magnetic moments such that ranges from 1 to 0, corresponding to all the field strength in the dipolar or quadrupolar mode respectively. As with , is calculated at the north pole. This sets the relative strengths of the dipole and quadrupole fields,
Bl=1∗=RdipB∗,Bl=2∗=(1−Rdip)B∗, (13)
Alternative parametrisations are commonly used in the analysis of ZDI observations and dynamo modelling. These communities use the surface averaged field strengths, , or the ratio of magnetic energy density () stored within each of the dipole and quadrupole field modes at the stellar surface. During the solar magnetic cycle, values of can range from at solar maximum to at solar minimum (DeRosa et al., 2012). A transformation from our parameter to the ratio of energies is simply given by:
where the numerical pre-factor accounts for the integration of magnetic energy in each mode over the stellar surface.
Initial field configurations are displayed in Figure 1. The pure dipolar and quadrupolar cases are shown in comparison to two mixed cases (). These combined geometry fields add in one hemisphere and subtract in the other. This effect is due to the different symmetry families each geometry belongs to, with the dipole’s polarity reversing over the equator unlike the equatorially symmetric quadrupole. Continuing the use of “primary” and “secondary” families as in McFadden et al. (1991) and DeRosa et al. (2012), we refer to the dipole as primary and quadrupole as secondary. The fields are chosen such that they align in polarity in the northern hemisphere. This choice has no impact on the derived torque or mass loss rate due to the symmetry of the quadrupole about the equator. Either aligned or anti-aligned, these fields will always create one additive hemisphere and one subtracting; swapping their relative orientations simply switches the respective hemispheres. This is in contrast to combining dipole & octupole fields, where the aligned and anti-aligned cases cause subtraction at the equator or poles respectively (Gregory et al., 2016; Finley & Matt. in prep).
Figure 1 indicates that even with equal quadrupole and dipole polar field strengths, , the overall dipole topology will remain. In this case the magnetic energy density in the dipolar mode is 1.5 times greater than the quadrupolar mode and with the more rapid radial decay of the quadrupolar field, this explains the overall dipolar topology. A higher fraction of quadrupole is required to produce a noticeable deviation from this configuration, which is shown at . More than half of the parameter space that we explore lies in the range where the energy density of the quadrupole mode is greater than that of the dipole (). For this study both the pure dipolar and quadrupolar fields are used as controls (both of which were studied in detail within Réville et al. (2015a)), and 5 mixed cases parametrised by values ( = 0.8, 0.5, 0.3, 0.2, 0.1). We include to demonstrate the dominance of the dipole at higher values. Each value is given a unique identifying colour which is maintained in all figures throughout this paper. Table 1 contains a complete list of parameters for all cases, which are numbered by increasing and quadrupole fraction.
## 3. Simulation results
### 3.1. Morphology of the Field and Wind Outflow
Figure 1 shows the topological changes in field structure from the addition of dipole and quadrupole fields. It is evident in these initial magnetic field configurations that the global magnetic field becomes asymmetric about the equator for mixed cases, as does the magnetic boundary condition which is maintained fixed at the stellar surface. It is not immediately clear how this will impact the torque scaling from Réville et al. (2015a), who studied only single geometries.
Results for these field configurations using our PLUTO simulations are displayed in Figure 2. The dipole and quadrupole cases are shown in conjunction with the mixed field cases, . The Figure displays for a comparable value of polar magnetic field strength, the different sizes of Alfvén surface that are produced. The mixed magnetic geometries modify the size and morphology of the Alfvén and sonic surfaces. Due to the slow rotation, the fast and slow magnetosonic surfaces are co-located with the sonic and Alfvén surfaces (the fast magnetosonic surface being always the larger of the two surfaces).
The field geometry is found to imprint itself onto the stellar wind velocity with regions of closed magnetic field confining the flow creating areas of co-rotating plasma, referred to as deadzones (Mestel, 1968). Steady state wind solutions typically have regions of open field where a faster wind and most of the torque is contained, along with these deadzone(s) around which a slower wind is produced. Similarly to the solar wind, slower wind can be found on the open field lines near the boundary of closed field (Feldman et al., 2005; Riley et al., 2006; Fisk et al., 1998). Observations of the Sun reveal the fast wind component emerging from deep within coronal holes, typically over the poles, and the slow wind component originating from the boundary between coronal holes and close field regions. Due to the polytropic wind used here, we do not capture the different heating and acceleration mechanisms required to create a true fast and slow solar-like wind (as seen with the Ulysses spacecraft e.g. McComas et al., 2000; Ebert et al., 2009). Our models produce an overall wind speed consistent with slow solar wind component, which we assume to represent the average global flow. More complex wind driving and coronal heating physics are required to recover a multi-speed wind, as observed from the Sun (Cranmer et al., 2007; Pinto et al., 2016).
Figure 3 displays a grid of simulations with a range of magnetic field strengths and values ( ranges from 3.6 to 54; values consistent with the solar cycle maximum), where the mixing of the fields plays a clear role in the changing dynamics of the flow. Regions of closed magnetic field cause significant changes to the morphology of the wind. A single deadzone is established on the equator by the dipole geometry whereas the quadrupole creates two over mid latitudes. Mixed cases have intermediate states between the pure regimes. Within our simulations the deadzones are accompanied by streamers which form above closed field regions and drive slower speed wind than from the open field regions. The dynamics of these streamers, their location and size are an interesting result of the changing topology of the flow.
The dashed coloured lines within Figure 3 show where the field polarity reverses using , which traces the location of the streamers. The motion of the streamers through the grid of simulations is then observed. With increasing quadrupole field, the single dipolar streamer moves into the northern hemisphere and with continued quadrupole addition a second streamer appears from the southern pole and travels towards the northern hemisphere until the quadrupolar streamers are recovered both sitting at mid latitudes. This motion can also be seen for fixed cases as the magnetic field strength is decreased. For a given value the current sheets sweep towards the southern hemisphere with increased polar field strength, in some cases (36 and 38) moving onto the axis of rotation. This is the opposite behaviour to decreasing the value, i.e. the streamer configuration is seen to take a more dipolar morphology as the field strength is increased. Additionally within Figure 3, for low field strengths each produces a comparable Alfvén surface with very similar morphology, all dominated by the quadrupolar mode.
### 3.2. Global Flow Quantities
Our simulations produce steady state solutions for the density, velocity and magnetic field structure. To compute the wind torque on the star we calculate , a quantity related directly to the angular momentum flux (Keppens & Goedbloed, 2000),
Λ(r,θ)=rsinθ(vϕ−Bϕρ|Bp|2vp⋅Bp). (15)
Within axisymmetric steady state ideal MHD, is conserved along any given field line. However we find variations from this along the open-closed field boundary due to numerical diffusion across the sharp transition in quantities found there. The spin-down torque, , due to the transfer of angular momentum in the wind is then given by the area integral,
τ=∫AΛρv⋅dA, (16)
where is the area of any surface enclosing the star. For illustrative purposes, Figure 3 shows the Alfvén surface coloured by angular momentum flux (thick multi-coloured line), which is seen to be strongly focused around the equatorial region. The angular momentum flux is calculated normal to the Alfvén surface,
dτdA=Λρv⋅^A=FAM⋅^A, (17)
where is the normal unit vector to the Alfvén surface. The mass loss rate from our wind solutions is calculated similarly to the torque,
˙M=∫Aρv⋅dA. (18)
Both expressions for the mass loss and torque are evaluated using spherical shells of area which are outside the closed field regions. This allows for the calculation of an average Alfvén radius (which is cylindrical from the rotation axis) in terms of the torque, mass flux and rotation rate,
⟨RA⟩=√τ˙MΩ∗. (19)
Throughout this work, is used as a normalised torque which accounts for the mass loss rates which we do not control. Values of the average Alfvén radius are tabulated within Table 1. is shown in Figure 3 using a grey vertical dashed line. For each case, the cylindrical Alfvén radius is offset inwards of the maximum Alfvén radius from the simulation, a geometrical effect as this corresponds to the average cylindrical and includes variations in flow quantities as well. Exploring Figure 3, the motion of the deadzones/current sheets have little impact on the overall torque. For example, no abrupt increase in the Alfvén radius is seen from case 34 to 36 (where the southern streamer is forced onto the rotation axis) compared to cases 44 and 46. The torque is instead governed by the magnetic field strength in the wind which controls the location of the Alfvén surface.
We parametrise the magnetic and mass loss properties using the “wind magnetisation” defined by,
Υ=B2∗R2∗˙Mvesc, (20)
where is the combined field strength at the pole. Previous studies that used this parameter defined it with the equatorial field strength (e.g. Matt & Pudritz, 2008; Matt et al., 2012; Réville et al., 2015a; Pantolmos & Matt. in prep). We use polar values unlike previous authors due to the additive property of the radial field at the pole, for aligned axisymmetric fields. Note that selecting one value of the field on the surface will not always produce a value which describes the field as a whole. The polar strength works for these aligned fields, but will easily break down for un-aligned fields and anti-aligned axisymmetric odd fields, thus it suits the present study, but a move away from this parameter in future is warranted.
During analysis, the wind magnetisation, , is treated as an independent parameter that determines the Alfvén radius and thus the torque, . We increase by setting a larger , creating a stronger global magnetic field. Table 1 displays all the input values of and as well as the resulting global outflow properties from our steady state solutions, which are used to formulate the torque scaling relations within this study. Figure 4 displays all 70 simulations in space. Cases are colour-coded here by their value, a convention which is continued throughout this work.
### 3.3. Single Mode Torque Scalings
The efficiency of the magnetic braking mechanism is known to be dependent on the magnetic field geometry. This has been previsously shown for single mode geometries (e.g. Réville et al., 2015a; Garraffo et al., 2016a). We first concider two pure gemetries, dipole and quadrupole, using the formulation from Matt & Pudritz (2008),
⟨RA⟩R∗=KsΥms, (21)
where and are fitting parameters for the pure dipole and quadrupole cases, using the surface field strength. Here we empirically fit ; the interpretation of is discussed in Matt & Pudritz (2008), Réville et al. (2015a) and Pantolmos & Matt (in prep), where it is determined to be dependant on magnetic geometry and the wind acceleration profile. The Appendix contains further discussion of the wind acceleration profile and its impact on this power law relationship.
The left panel of Figure 5 shows the Alfvén radii vs the wind magnetisations for all cases (colour-coded with their value). Solid lines show scaling relations for dipolar (red) and quadrupolar (blue) geometries, as first shown in Réville et al. (2015a). We calculate best fit values for and for the dipole and quadrupole, tabulated in Table 2. Values here differ due to our hotter wind ( than their ), using polar , and we do not account for our low rotation rate. As previously shown, the dipole field is far more efficient at transferring angular momentum than the quadrupole. In this study we concider the effect of combined geometries, within Figure 5 these cases lie between the dipole and quadrupole slopes, with no single power law of this form to describe them.
Pantolmos & Matt (in prep) have shown the role of the velocity profile in the power law dependence of the torque. In our simulations, the acceleration of the flow from the base wind velocity to its terminal speed is primarily governed by the thermal pressure gradient, however magnetic topologies can all modify the radial velocity profile (as can changes in wind temperature, , and rapid rotation, not included in our study). Effects on the torque formulations due to these differences in acceleration can be removed via the multiplication of with . In their work, the authors determine the theoretical power law dependence, , from one-dimensional analysis. In this formulation the slope of the power law is controlled only by the order of the magnetic geometry, , which is and for the dipole and quadrupole respectively,
⟨RA⟩R∗=Kl[Υvesc⟨v(RA)⟩]ml, (22)
where and are fit parameters to our wind solutions, tabulated in Table 2. The value of is calculated as an average of the velocity at all points on the Alfvén surface in the meridional plane. 3
Equation (22) is able to predict accurately the power law dependence for the two pure modes using the order of the spherical harmonic field, . We show this in the right panel of Figure 5, where the Alfvén radii are plotted against the new parameter, . A similar qualitative behaviour is shown to the scaling with in the left panel. Using the theoretical power law dependencies, the dipolar (red) and quadrupolar (blue) slopes are plotted with and respectively. Using a single fit constant for both sloes within this figure shows good agreement with the simulation results.
More accurate values of and are fit for each mode independently. These values produce a better fit and are compared with the theoretical values in Table 2. The mixed simulations show a similar qualitative behaviour to the plot against .
Obvious trends are seen within the mixed case scatter. A saturation to quadrupolar Alfvén radii values for lower and values is observed, along with a power law trend with a dipolar gradient for higher and values. This indicates that both geometries play a role in governing the lever arm, with the dipole dominating the braking process at higher wind magnetisations.
### 3.4. Broken Power Law Scaling For Mixed Field Cases
Observationally the field geometries of cool stars are, at large scales, dominated by the dipole mode with higher order modes playing smaller roles in shaping the global field. It is the global field which controls the spin-down torque in the magnetic braking process. Higher order modes (such as the quadrupole) decay radially much faster than the dipole and as such they have a reduced contribution to setting the Alfvén speed at distances larger than a few stellar radii.
We calculate , which only takes into account the dipole’s field strength,
Υdip=(Bl=1∗B∗)2B2∗R2∗˙Mvesc=R2dipΥ. (23)
Taking as a hypothesis that the field controlling the location of the Alfvén radius is the dipole component, a power law scaling using can be constructed in the same form as Matt & Pudritz (2008),
⟨RA⟩R∗=Ks,dip[Υdip]ms,dip=Ks,dip[R2dipΥ]ms,dip. (24)
Substitution of the dipole component into equation (22) similarly gives,
⟨RA⟩R∗=Kl,dip[R2dipΥvesc⟨v(RA)⟩]ml,dip, (25)
where , , , and will be parameters fit to simulations.
A comparison of these approximations can be seen in Figure 5, where equations (24) (left panel) and (25) (right panel) are plotted with dashed lines for all the values used in our simulations. Mixed cases which lie above the quadrupolar slope are shown to agree with the dashed-lines in both forms. Such cases are dominated by the dipole component of the field only, irrespective of the quadrupolar component.
The role of the dipole is even more clear in Figure 6 where only the dipole component of is plotted for each simulation. The solid red line in Figure 6, given by equation (24), shows agreement at a given with deviation from this caused by a regime change onto the quadrupolar slope (shown in dashed colour).
The behaviour of our simulated winds, despite using a combination of field geometries, simply follow existing scaling relations with this modification. In general, the dipole () prediction shows good agreement with the simulated wind models, except in cases where the Alfvén surface is close-in to the star. In these cases, the quadrupole mode still has magnetic field strength able to control the location of the Alfvén surface. Interestingly, and in contrast to the dipole-dominated regime, the quadrupole dominated regime behaves as if all the field strength is within the quadrupolar mode. This is visible within Figure 5 for low values of and .
The mixed field scaling can be described as a broken power law, set by the maximum of either the dipole component or the pure quadrupolar relation. With the break in the power law given by ,
where is the location of the intercept for the dipole component and pure quadrupole scalings,
The solid lines in Figure 4 show the value of , equation (27), diving the two regimes. Specifically, the solutions above the solid black line behave as if only the dipole component () is governing the Alfvén radius.
Transitioning from regimes is not perfectly abrupt. Therefore producing an analytical solution for the mixed cases which includes this behaviour would increase the accuracy for stars near the regime change. E.g. we have formulated a slightly better fit, using a relationship based on the quadrature addition of different regions of field. However it provides no reduction to the error on this simpler form and is not easily generalised to higher topologies. For practical purposes, the scaling of equation (26) and (27) predict accurately the simulation torque with increasing magnetic field strength for a variety of dipole fractions. We therefore present the simplest available solution, leaving the generalised form to be developed within future work.
## 4. The impact of geometry on the magnetic flux in the wind
### 4.1. Evolution of the Flux
The magnetic flux in the wind is a useful diagnostic tool. The rate of the stellar flux decay with distance is controlled by the overall magnetic geometry. We calculate the magnetic flux as a function of radial distance by evaluating the integral of the magnetic field threading closed spherical shells, where we take the absolute value of the flux to avoid field polarity cancellations,
Φ(r)=∮r|B⋅dA|. (28)
Considering the initial potential fields of the two pure modes this is simply a power law in field order ,
Φ(r)P=Φ∗(R∗r)l, (29)
where dipole and quadrupole, we denote the flux with “” for the potential field. Figure 7 displays the flux decay of all values of for each value, grey lines. The behaviour is qualitatively identical to that observed within previous works (e.g. Schrijver et al., 2003; Johnstone et al., 2010; Vidotto et al., 2014a; Réville et al., 2015a), where the field decays as the potential field does until the pressure of the wind forces the field into a purely radial configuration with a constant magnetic flux, referred to as the open flux. The power law dependence of equation (29) indicates for higher mode magnetic fields, the decay will be faster. We therefore expect the more quadrupolar dominated fields studied in this work to have less open flux.
In the case of mixed geometries a simple power law is not available for the initial potential configurations, instead we evaluate the flux using equation (28), where is the initial potential field for each mixed geometry. This allows us to calculate the radial evolution of the flux for a given which we compare to the simulated cases. Figure 7 shows the flux normalised by the surface flux versus radial distance from the star. For each value, the magnetic flux decay of the potential field (black solid line) is shown with the different strength simulations (grey solid lines). A comparison of the flux decay for all potential magnetic geometries is available in the bottom right panel showing, as expected, the increasingly quadrupolar fields decaying faster.
In this study we control which, for a given surface density, sets the polar magnetic field strength for our simulations. The stellar flux for different topologies and the same will differ and must be taken into account in order to describe the dipole and quadrupolar components (dashed red and blue) in Figure 7. We plot the magnetic flux of the potential field quadrupole component alone in dotted blue for each value,
and similarly the potential field dipole component of the magnetic flux,
Φ(r)P,dip=RdipΦ∗,dip(R∗r), (31)
where in both equations the surface flux of a pure dipole/quadrupole (, ) field is required to match our normalised flux representation.
Due to the rapid decay of the quadrupolar mode, the flux at large radial distances for all simulations containing the dipole mode is described by the dipolar component. The quadrupole component decay sits below and parallel to the potential field prediction for small radii, becoming indistinguishable for the lowest values as the flux stored in the dipole is decreased. Importantly for small radii, simulations containing a quadrupolar component are dominated by the quadrupolar decay following a power law decay, which can be seen by shifting the blue dashed line upwards to intercept at the stellar surface.
This result for the flux decay is reminiscent of the broken power law description for the Alfvén radius in Section 3.4. The field acts as a quadrupole using the total field for small radii and the dipole component only for large radii. There is a transition between these two regimes that is not described by either approximation. But is shown by the potential solution in solid black.
### 4.2. Topology Independent Open Flux Formulation
The magnetic flux within the wind decays following the potential field solution closely until the magnetic field geometry is opened by the pressures of the stellar wind and the field lines are forced into a nearly radial configuration with constant flux, shown in Figure 7 for all simulations. The importance of this open flux is discussed by Réville et al. (2015a). These authors showed a single power law dependence for the Alfvén radius, independent of magnetic geometry, when parametrised in terms of the open flux, ,
Υopen=Φ2open/R2∗˙Mvesc, (32)
which, ignoring the effects of rapid rotation, can be fit with,
⟨RA⟩R∗=Ko[Υopen]mo, (33)
where, and are fitting parameters for the open flux formulation.
Using the open flux parameter, Figure 8 shows a collapse towards a single power law dependence as in Réville et al. (2015a). However our wind solutions show a systematic difference in power law dependence from dipole to quadrupole. On careful inspection of the result from Figure 6 of Réville et al. (2015a), the same systematic trend between their topologies and the fit scaling is seen. 4 We calculate best fits for each pure mode separately i.e. the dipole and quadrupole, tabulated in Table 3.
Pantolmos & Matt (in prep) find solutions for thermally driven winds with different coronal temperatures, from these they find the wind acceleration profiles of a given wind to very significantly alter the slope in - space. From this work our trend with geometry indicates that each geometry must have a slightly different wind acceleration profile. This is most likely due to difference in the super radial expansion of the flux tubes for each geometry, which is not taken into account with equation (33). The field geometry is imprinted onto the wind as it accelerates out to the Alfvén surface. As such, this scaling relation is not entirely independent of topology. Further details on the wind acceleration profile within our study is available in the Appendix. Pantolmos (in prep) are able to include the effects of acceleration in their scaling through multiplication of with . The expected semi-analytic solution from Pantolmos & Matt (in prep) is given,
⟨RA⟩R∗=Kc[Υopenvesc⟨v(RA)⟩]mc, (34)
where the fit parameters are derived from one-dimensional theory as constants, and .
We are able to reproduce this power law fit of with the wind acceleration effects removed, on the right panel of Figure 8. Including all simulations in the fit, we arrive at values of and for the constants of proportionality and power law dependence. However a systematic difference is still seem from one value to another. More precise fits can be found for each geometry independently, but the systematic difference appearing in the right panel implies a modification to our semi-analytic formulations is required to describe the torque fully in terms of the open flux.
Here we show the scaling law from Réville et al. (2015a) is improved with the modification from Pantlomos (in prep). This formulation is able to describe the Alfvén radius scaling with changing open flux and mass loss. However with the open flux remaining an unknown from observations and difficult to predict, scaling laws that incorporate known parameters (such as those of equations (26) and (27)) are still needed for rotational evolution calculations.
### 4.3. The Relationship Between the Opening and Alfvén Radii
The location of the field opening is an important distance. It is both critical for determining the torque and for comparison to potential field source surface (PFSS) models (Altschuler & Newkirk, 1969), which set the open flux with a tunable free parameter . The opening radius, , we define is the radial distance at which the potential flux reaches the value of the open flux (). This definition is chosen because it relates to the 1D analysis employed to describe the power law dependences of our torque scaling relations. Specifically, a known value of allows for a precise calculation of the open flux (a priori from the potential field equations), which then gives the torque on the star within our simulations. The physical opening of the simulation field takes place at slightly larger radii than this with the field becoming non-potential due to its interaction with the wind (which explains why the closed field regions seen in Figure 3 typically extend slightly beyond ). A similar smooth transition is produced with PFSS modelling.â
is marked for each simulation in Figure 7 and again for comparative purposes in the bottom right panel. It is clear that smaller opening radii are found for lower cases. Due to their more rapidly decaying flux, they tend to have a smaller fraction of the stellar flux remaining in the open flux. From the radial decay of the magnetic field, the open flux and opening radii are observed to be dependent on the available stellar flux and topology. Pantolmos & Matt (in prep) have recently shown these to also be dependent on the wind acceleration profile. This complex dependence makes it difficult to predict the open flux for a given system.
A method for predicting within our simulations remains unknown, however it is understood that is key to predicting the torque from our simulated winds. We do however find the ratio of to be roughly constant for a given geometry, deviations from which may be numerical or suggest additional physics which we do not explore here.
## 5. Conclusion
We undertake a systematic study of the two simplest magnetic geometries, dipolar and quadrupolar, and for the first time their combinations with varying relative strengths. We parametrise the study using the ratio, , of dipolar to total combined field strength, which is shown to be a key variable in our new torque formulation.
We have shown that a large proportion of the magnetic field energy needs to be in the quadrupole for any significant morphology changes to be seen in the wind. All cases above 50% dipole field show a single streamer and are dominated by dipolar behaviour. Even in cases of small we observe the dipole field to be the key parameter controlling the morphology of the flow, with the quadrupolar field rapidly decaying away for most cases leaving the dipole component behind. For smaller field strengths the Alfvén radii appears close to the star, where the quadrupolar field is still dominant, and thus a quadrupolar morphology is established. Increasing the fraction of quadrupolar field strength allows this behaviour to continue for larger Alfvén radii.
The morphology of the wind can be concidered in the context of star-planet or disk interactions. Our findings suggest that the connectivity, polarity and strength of the field within the orbital plane depend in a simple way on the relative combination of dipole and quadrupole fields. Different combinations of these two field modes change the location of the current sheet(s) and the relative orientation of the stellar wind magnetic field with respect to any planetary or disk magnetic field. Asymmetries such as these can modify the poynting flux exchange for close-in planets (Strugarek et al., 2014b) or the strength of magnetospheric driving and geomagnetic storms on Earth-like exoplanets. Cohen et al. (2014) use observed magnetic fields to simulate the stellar wind environment surrounding the planet hosting star EV Lac. They calculate the magnetospheric joule heating on the exoplanets orbiting the M dwarf, finding significant changes to atmospheric properties such as thickness and temperature. Additionally, transient phenomena in the Solar wind such as coronal mass ejections are shown to deflect towards streamer belts (Kay et al., 2013). This has been applied to mass ejections around M dwarfs stars (Kay & Opher, 2014), and could similarly be applied here using knowledge of the streamer locations from our model grid.
If the host star magnetic field can be observed and decomposed into constituent field modes, containing dominant dipole and quadrupole components, a qualitative assessment of the stellar wind environment can be made. We find the addition of these primary and secondary fields to create an asymmetry which may shift potentially habitable exoplanets in and out of volatile wind streams. Observed planet hosting stars such as Bootis have already been shown to have global magnetic fields which are dominated by combinations of these low order field geometries (Donati et al., 2008). With further investigation it is possible to qualitatively approximate the conditions for planets in orbit of such stars. For dipole and quadrupole dominated host stars with a given magnetic field strength our grid of models provide an estimate of the location of the streamers and open field regions.
Within this work we build on the scaling relations from, Matt et al. (2012), Réville et al. (2015a) and Pantolmos & Matt (in prep). We confirm existing scaling laws and explore a new mixed field parameter space with similar methods. From our wind solutions we fit the variables, , , and (see Table 4), which describe the torque scaling for the pure dipole and quadrupole modes. From the 50 mixed case simulations, we produce an approximate scaling relation which takes the form of a broken power law, as a single power law fit is not available for the mixed geometries cases in space.
For low and low dipole fraction, the Alfvén radius behaves like a pure quadrupole,
At higher and dipole fractions, the torque is only dependent on the dipolar component of the field,
τ=Ks,dip˙MΩ∗R2∗[Υdip]2ms,dip, (37)
=Ks,dip˙M1−2ms,dipΩ∗R2+4ms,dip∗[(Bl=1∗)2vesc]2ms,dip. (38)
The later formulation is used when the Alfvén radius of a given dipole & quadrupole mixed field is greater than the pure quadrupole case for the same , i.e. the maximum of our new formula or the pure quadrupole. We define to separate the two regimes (see Figure 4).
The importance of the relative radial decay of both modes and the location of the opening and Alfvén radii appear to play a key role, and deserve further follow up investigation. This work analytically fits the decay of the magnetic flux, but a parametric relationship for the field opening remains uncertain. The relation of the relative sizes of the Alfvén and opening radii are found to be dependent on geometry, which can be used to inform potential field source surface modelling, where by the source surface must be specified when changing the field geometry.
Paper II includes the addition of octupolar field geometries, another primary symmetry family which introduces an additional complication in the relative orientation of the octupole to the dipole. It is shown however, that the mixing of any two axisymmetric geometries will follow a similar behaviour, especially if each belongs to different symmetry families (Finley & Matt. in prep). The lowest order mode largely dominates the dynamics of the torque until the Alfvén radii and opening radii are sufficiently close to the star for the higher order modes to impact the field strength.
Thanks for helpful discussions and technical advice from Georgios Pantolmos & Matt, Victor See, Victor Réville, Sasha Brun and Claudio Zanni. This project has received funding from the European Research Council (ERC) under the European Unionâs Horizon 2020 research and innovation programme (grant agreement No 682393). We thank Andrea Mignone and others for the development and maintenance of the PLUTO code. Figures within this work are produced using the python package matplotlib (Hunter, 2007).
## Appendix A Wind Acceleration
The creation of a semi-analytic formulation for the Alfvén radius for a variety of stellar parameters has been the goal of many studies proceeding this (e.g. Matt & Pudritz, 2008; Matt et al., 2012; Réville et al., 2015a; Pantolmos & Matt. in prep). Using a one-dimensional approximation based on work by Kawaler (1988), previous studies have aimed to predict the power law dependence, , of the torque formulations used within this work.
Using the one-dimensional framework, the field strength is assumed to decay as a power law , which in this study is only valid for the pure cases. Pantolmos & Matt (in prep) show the effect of wind acceleration can be removed from the torque scaling relations through the multiplication of and with . The power law dependences then becomes,
ml,th=1/(2l+2), (A1)
and similarly,
mc,th=1/2. (A2)
The modified dependent parameter, , is used throughout this work (see Figures 5 and 8), and the analytic predictions for the power law slopes are shown to have good agreement with our simulations. This dependent variable however, requires additional information about the wind speed at the Alfvén surface which is often unavailable.
Typically, rotation evolution models use the available stellar surface parameters e.g. . Therfore knowledge of the flow speed at the Alfvén radius, , is required for the semi-analytic formulations. is shown by Pantolmos & Matt. (in prep) and Réville et al. (2015a) to share a similar profile to a one-dimensional thermal wind, . Figure 10 displays the average Alfvén speed vs the Alfvén radius for all 70 simulations (coloured points). The parker wind solution (Parker, 1965) used in the initial condition is displayed for comparison (dashed line). Nearly all simulations follow the hydrodynamic solution, with a behaviour mostly independent of . Towards higher values of the Alfvén radius, a noticeable separation starts to develop between geometries. This range is accessed less by the higher order geometries as the range of Alfvén radii is much smaller than that for the pure dipole mode.
In order to include the effects of wind acceleration in the simplified 1D analysis to explain the simulation scalings between and , Réville et al. (2015a) introduced a parametrisation for the acceleration of the wind to the Alfvén radius with a power law dependence in radial distance using ,
v(RA)/vesc=(RA/R∗)q. (A3)
A single power law with is fit to the simulation data, which is chosen for simplicity within the 1D formalism. The use of this parameter is approximate if is a power law in , which we show over the parameter space has a significant deviation. Using the semi-analytic theory, Réville et al. (2015a) then derived the power law dependence for the scaling (Equation (21)),
ms,th=1/(2l+2+q), (A4)
which includes geometric and wind acceleration parameters in the form of and respectively. Using this result, is computed for both the dipole () and quadrupole () geometries in Table 4, and compared to the simulation results with good agreement.
Pantolmos & Matt. (in prep) explain the power-law dependence, so long as remains constant and the wind acceleration profile is known. Reiners & Mohanty (2012), Réville et al. (2015a) and Pantolmos & Matt. (in prep) all analytically describe the power law dependence of the open flux formulation (Equation (33)) using the power law dependence ,
mo,th=1/(2+q). (A5)
The result is independent of geometry, . As before the parameter approximates the wind driving as a power law in radius, which is fit with a single power law for both geometries such that should be the same for both the dipole and quadrupole. This prediction is tabulated in Table 4, however the simulation slopes are shown to no longer agree with the result. It is suggested that the open flux slope is much more sensitive to the wind acceleration than the formulation, therefore slight changes in flow acceleration modify the result. Slightly different slopes can be fit for the dipole and quadrupole cases which can recover the different values, however this is seemingly just a symptom of the power law approximation breaking down.
We conclude that the approximate power law of equation (A3) give a reasonable adjustment to the torque prediction for known wind velocity profiles, despite the badness of fit to the simulations points. Even though the power-law approximation to the wind velocity profile (equation A3) is not a precise fit to the data in Figure 10, the value of does provide a way to approximately include the contribution of the wind acceleration to the fit power-law exponents and . A more precise formulation could be derived based on a Parker-like wind profile without the use of a power law, however the torque scaling with is relative insensitive to the chosen approximate velocity profile.
### Footnotes
1. slugcomment: Draft: March 6, 2018
2. The PLUTO code operates with a factor of absorbed into the normalisation of B. Tabulated parameters are given in cgs units with this factor incorporated.
3. It could be argued that this should be weighted by the total area of the Alfvén surface, but for simplicity we calculate the un-weighted average.
4. A choice in our parameter space may have made this clearer to see in Figure 8, due to the increased heating and therefore larger range of acceleration allowing the topology to impact the velocity profile.
### References
1. Agüeros, M. A., Covey, K. R., Lemonias, J. J., et al. 2011, The Astrophysical Journal, 740, 110
2. Altschuler, M. D., & Newkirk, G. 1969, Solar Physics, 9, 131
3. Alvarado-Gómez, J., Hussain, G., Cohen, O., et al. 2016, Astronomy & Astrophysics, 594, A95
4. Amard, L., Palacios, A., Charbonnel, C., Gallet, F., & Bouvier, J. 2016, Astronomy & Astrophysics, 587, A105
5. Barnes, S. A. 2003, The Astrophysical Journal, 586, 464
6. —. 2010, The Astrophysical Journal, 722, 222
7. Blackman, E. G., & Owen, J. E. 2016, Monthly Notices of the Royal Astronomical Society, 458, 1548
8. Bouvier, J., Matt, S. P., Mohanty, S., et al. 2014, Protostars and Planets VI, 433
9. Brown, T. M. 2014, The Astrophysical Journal, 789, 101
10. Cohen, O., Drake, J., Glocer, A., et al. 2014, The Astrophysical Journal, 790, 57
11. Cohen, O., Drake, J., Kashyap, V., Hussain, G., & Gombosi, T. 2010, The Astrophysical Journal, 721, 80
12. Cohen, O., & Drake, J. J. 2014, The Astrophysical Journal, 783, 55
13. Cohen, O., Kashyap, V., Drake, J., et al. 2011, The Astrophysical Journal, 733, 67
14. Cranmer, S. R., & Saar, S. H. 2011, The Astrophysical Journal, 741, 54
15. Cranmer, S. R., Van Ballegooijen, A. A., & Edgar, R. J. 2007, The Astrophysical Journal Supplement Series, 171, 520
16. Davenport, J. R. A. 2017, ApJ, 835, 16
17. DeRosa, M., Brun, A., & Hoeksema, J. 2012, The Astrophysical Journal, 757, 96
18. do Nascimento Jr, J.-D., Vidotto, A., Petit, P., et al. 2016, The Astrophysical Journal Letters, 820, L15
19. Donati, J.-F., Moutou, C., Fares, R., et al. 2008, Monthly Notices of the Royal Astronomical Society, 385, 1179
20. Dunstone, N., Hussain, G., Collier Cameron, A., et al. 2008, Monthly Notices of the Royal Astronomical Society, 387, 481
21. Ebert, R., McComas, D., Elliott, H., Forsyth, R., & Gosling, J. 2009, Journal of Geophysical Research: Space Physics, 114
22. Einfeldt, B. 1988, SIAM Journal on Numerical Analysis, 25, 294
23. Fares, R., Donati, J.-F., Moutou, C., et al. 2009, Monthly Notices of the Royal Astronomical Society, 398, 1383
24. —. 2010, Monthly Notices of the Royal Astronomical Society, 406, 409
25. Feldman, U., Landi, E., & Schwadron, N. 2005, Journal of Geophysical Research: Space Physics, 110
26. Fisk, L., Schwadron, N., & Zurbuchen, T. 1998, Space Science Reviews, 86, 51
27. Folsom, C. P., Petit, P., Bouvier, J., et al. 2016, Monthly Notices of the Royal Astronomical Society, 457, 580
28. Gallet, F., & Bouvier, J. 2013, Astronomy & Astrophysics, 556, A36
29. —. 2015, Astronomy & Astrophysics, 577, A98
30. Garraffo, C., Drake, J. J., & Cohen, O. 2016a, Astronomy & Astrophysics, 595, A110
31. —. 2016b, The Astrophysical Journal Letters, 833, L4
32. Grappin, R., Leorat, J., & Pouquet, A. 1983, Astronomy and Astrophysics, 126, 51
33. Gregory, S., Jardine, M., Gray, C., & Donati, J. 2010, Reports on Progress in Physics, 73, 126901
34. Gregory, S. G., Donati, J.-F., & Hussain, G. A. 2016, arXiv preprint arXiv:1609.00273
35. Hall, J. C., Lockwood, G., & Skiff, B. A. 2007, The Astronomical Journal, 133, 862
36. Hébrard, É., Donati, J.-F., Delfosse, X., et al. 2016, Monthly Notices of the Royal Astronomical Society, 461, 1465
37. Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90
38. Irwin, J., & Bouvier, J. 2009, in IAU Symp, Vol. 258
39. Jardine, M., Barnes, J. R., Donati, J.-F., & Cameron, A. C. 1999, Monthly Notices of the Royal Astronomical Society, 305, L35
40. Jardine, M., Collier Cameron, A., & Donati, J.-F. 2002, Monthly Notices of the Royal Astronomical Society, 333, 339
41. Jeffers, S., Petit, P., Marsden, S., et al. 2014, Astronomy & Astrophysics, 569, A79
42. Johnstone, C., Güdel, M., Lüftinger, T., Toth, G., & Brott, I. 2015, Astronomy & Astrophysics, 577, A27
43. Johnstone, C., Jardine, M., & Mackay, D. 2010, Monthly Notices of the Royal Astronomical Society, 404, 101
44. Kawaler, S. D. 1988, The Astrophysical Journal, 333, 236
45. Kay, C., & Opher, M. 2014, in American Astronomical Society Meeting Abstracts# 224, Vol. 224
46. Kay, C., Opher, M., & Evans, R. M. 2013, The Astrophysical Journal, 775, 5
47. Keppens, R., & Goedbloed, J. 1999, Astron. Astrophys, 343, 251
48. —. 2000, The Astrophysical Journal, 530, 1036
49. Matt, S., & Pudritz, R. E. 2008, The Astrophysical Journal, 678, 1109
50. Matt, S. P., Brun, A. S., Baraffe, I., Bouvier, J., & Chabrier, G. 2015, The Astrophysical Journal Letters, 799, L23
51. Matt, S. P., MacGregor, K. B., Pinsonneault, M. H., & Greene, T. P. 2012, The Astrophysical Journal Letters, 754, L26
52. McComas, D., Barraclough, B., Funsten, H., et al. 2000, Journal of Geophysical Research: Space Physics, 105, 10419
53. McFadden, P., Merrill, R., McElhinny, M., & Lee, S. 1991, Journal of Geophysical Research: Solid Earth, 96, 3923
54. McQuillan, A., Aigrain, S., & Mazeh, T. 2013, Monthly Notices of the Royal Astronomical Society, 432, 1203
55. Meibom, S., Mathieu, R. D., Stassun, K. G., Liebesny, P., & Saar, S. H. 2011, The Astrophysical Journal, 733, 115
56. Mestel, L. 1968, Monthly Notices of the Royal Astronomical Society, 138, 359
57. —. 1984, in Cool Stars, Stellar Systems, and the Sun (Springer), 49
58. Mignone, A. 2009, Memorie della Societa Astronomica Italiana Supplementi, 13, 67
59. Mignone, A., Bodo, G., Massaglia, S., et al. 2007, The Astrophysical Journal Supplement Series, 170, 228
60. Morgenthaler, A., Petit, P., Morin, J., et al. 2011, Astronomische Nachrichten, 332, 866
61. Morgenthaler, A., Petit, P., Saar, S., et al. 2012, Astronomy & Astrophysics, 540, A138
62. Morin, J., Donati, J.-F., Petit, P., et al. 2008, Monthly Notices of the Royal Astronomical Society, 390, 567
63. Nicholson, B., Vidotto, A., Mengel, M., et al. 2016, Monthly Notices of the Royal Astronomical Society, 459, 1907
64. Oran, R., Landi, E., van der Holst, B., et al. 2015, The Astrophysical Journal, 806, 55
65. Parker, E. 1965, Space Science Reviews, 4, 666
66. Parker, E. N. 1958, The Astrophysical Journal, 128, 664
67. Petit, P., Dintrans, B., Morgenthaler, A., et al. 2009, Astronomy & Astrophysics, 508, L9
68. Petit, P., Dintrans, B., Solanki, S., et al. 2008, Monthly Notices of the Royal Astronomical Society, 388, 80
69. Pinto, R., Brun, A., & Rouillard, A. 2016, Astronomy & Astrophysics, 592, A65
70. Pinto, R. F., Brun, A. S., Jouve, L., & Grappin, R. 2011, The Astrophysical Journal, 737, 72
71. Reiners, A., & Mohanty, S. 2012, The Astrophysical Journal, 746, 43
72. Réville, V., Brun, A. S., Matt, S. P., Strugarek, A., & Pinto, R. F. 2015a, The Astrophysical Journal, 798, 116
73. Réville, V., Brun, A. S., Strugarek, A., et al. 2015b, The Astrophysical Journal, 814, 99
74. Réville, V., Folsom, C. P., Strugarek, A., & Brun, A. S. 2016, The Astrophysical Journal, 832, 145
75. Riley, P., Linker, J., Mikić, Z., et al. 2006, The Astrophysical Journal, 653, 1510
76. Rosén, L., Kochukhov, O., & Wade, G. A. 2015, The Astrophysical Journal, 805, 169
77. Rosner, R., Golub, L., & Vaiana, G. 1985, Annual review of astronomy and astrophysics, 23, 413
78. Saikia, S. B., Jeffers, S., Morin, J., et al. 2016, Astronomy & Astrophysics, 594, A29
79. Sakurai, T. 1990, Computer Physics Reports, 12, 247
80. Schrijver, C. J., DeRosa, M. L., et al. 2003, The Astrophysical Journal, 590, 493
81. See, V., Jardine, M., Vidotto, A., et al. 2015, Monthly Notices of the Royal Astronomical Society, 453, 4301
82. —. 2016, Monthly Notices of the Royal Astronomical Society, 462, 4442
83. —. 2017, Monthly Notices of the Royal Astronomical Society, stw3094
84. Skumanich, A. 1972, The Astrophysical Journal, 171, 565
85. Soderblom, D. 1983, The Astrophysical Journal Supplement Series, 53, 1
86. Stauffer, J., Rebull, L., Bouvier, J., et al. 2016, The Astronomical Journal, 152, 115
87. Strugarek, A., Brun, A., Matt, S., et al. 2014a, in SF2A-2014: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics, 279
88. Strugarek, A., Brun, A. S., Matt, S. P., & Réville, V. 2014b, The Astrophysical Journal, 795, 86
89. Tóth, G. 2000, Journal of Computational Physics, 161, 605
90. Ud-Doula, A., Owocki, S. P., & Townsend, R. H. 2009, Monthly Notices of the Royal Astronomical Society, 392, 1022
91. Usmanov, A. V., Goldstein, M. L., & Matthaeus, W. H. 2014, The Astrophysical Journal, 788, 43
92. Ustyugova, G., Koldoba, A., Romanova, M., & Lovelace, R. 2006, The Astrophysical Journal, 646, 304
93. Van der Holst, B., Manchester IV, W., Frazin, R., et al. 2010, The Astrophysical Journal, 725, 1373
94. van der Holst, B., Sokolov, I. V., Meng, X., et al. 2014, The Astrophysical Journal, 782, 81
95. Van Saders, J. L., & Pinsonneault, M. H. 2013, The Astrophysical Journal, 776, 67
96. Vidotto, A., Jardine, M., Morin, J., et al. 2014a, Monthly Notices of the Royal Astronomical Society, 438, 1162
97. Vidotto, A., Jardine, M., Opher, M., Donati, J., & Gombosi, T. 2011, in 16th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, Vol. 448, 1293
98. Vidotto, A., Gregory, S., Jardine, M., et al. 2014b, Monthly Notices of the Royal Astronomical Society, 441, 2361
99. Washimi, H., & Shibata, S. 1993, Monthly Notices of the Royal Astronomical Society, 262, 936
100. Weber, E. J., & Davis, L. 1967, The Astrophysical Journal, 148, 217
101. Wolk, S., Harnden Jr, F., Flaccomio, E., et al. 2005, The Astrophysical Journal Supplement Series, 160, 423
102. Wood, B. E. 2004, Living Reviews in Solar Physics, 1, 1
103. Wright, J. T., Marcy, G. W., Butler, R. P., & Vogt, S. S. 2004, The Astrophysical Journal Supplement Series, 152, 261
104. Zanni, C., & Ferreira, J. 2009, Astronomy & Astrophysics, 508, 1117
You are adding the first comment!
How to quickly get a good reply:
• Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made.
• Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.
• Your comment should inspire ideas to flow and help the author improves the paper.
The better we are at sharing our knowledge with each other, the faster we move forward.
The feedback must be of minimum 40 characters and the title a minimum of 5 characters | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9277281165122986, "perplexity": 1829.1633254729281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999948.3/warc/CC-MAIN-20190625213113-20190625235113-00186.warc.gz"} |
https://socratic.org/questions/can-a-repeating-decimal-be-equal-to-an-integer#152124 | Precalculus
Topics
# Can a repeating decimal be equal to an integer?
Jun 11, 2015
No, it will allways turn out to be a fraction.
#### Explanation:
I will not delve into how you turn a repeating decimal into a fraction, but just one example:
$0.333 \ldots . = \frac{1}{3}$
There is one exeption though (see example above):
$0.999 \ldots . = 3 \cdot 0.333 \ldots . = 3 \cdot \frac{1}{3} = 1$
Dec 18, 2016
Yes
#### Explanation:
The general term of a geometric series can be written:
${a}_{n} = a \cdot {r}^{n - 1}$
where $a$ is the initial term and $r$ the common ratio.
When $\left\mid r \right\mid < 1$ then its sum to infinity converges and is given by the formula:
${\sum}_{n = 1}^{\infty} a {r}^{n - 1} = \frac{a}{1 - r}$
So for example:
$0.999 \ldots = \frac{9}{10} + \frac{9}{100} + \frac{9}{1000} + \ldots$
is given by $a = 9$ and $r = \frac{1}{10}$
which has sum:
${\sum}_{n = 1}^{\infty} \frac{9}{10} \cdot {\left(\frac{1}{10}\right)}^{n - 1} = \frac{\frac{9}{10}}{1 - \frac{1}{10}} = \frac{\frac{9}{10}}{\frac{9}{10}} = 1$
So $0. \overline{9} = 0.999 \ldots = 1$
In fact, any integer can be expressed as a repeating decimal using $9$'s.
For example:
$12345 = 12344.999 \ldots = 12344. \overline{9}$
$- 5 = - 4.999 \ldots = - 4. \overline{9}$
##### Impact of this question
4102 views around the world | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 15, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448901414871216, "perplexity": 946.8270910323582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00797.warc.gz"} |
https://www.allaboutcircuits.com/technical-articles/discontinuous-conduction-mode-of-simple-converters/ | Technical Article
# Discontinuous Conduction Mode of Simple Converters
June 11, 2015 by Editorial Team
## Discussed here are the discontinuous conduction mode, mode boundary, and conversion ratio of simple converters. During continuous conduction mode, the inductor current in the energy transfer never reaches zero value. In discontinuous conduction mode, the inductor current falls to zero level which is common in DC-to-DC converters.
Discussed here are the discontinuous conduction mode, mode boundary, and conversion ratio of simple converters.
Beginner
#### Origin of the Discontinuous Conduction Mode
During continuous conduction mode, the inductor current in the energy transfer never reaches zero value. In the case of the discontinuous conduction mode, the inductor current falls to zero level which is very common in DC-to-DC converters.
If the peak of the inductor current ripples is less than the DC component of the inductor current, the diode current is always positive and the diode is forced to turn on when the switch S (either a transistor or thyristor) is off. On the other hand, if the peak of the inductor current ripples becomes more than the DC component of the inductor current, the total current falls to zero value while the diode is conducting. Thus, the diode will stop conducting and the inductor current will remain at zero value until the switch S will be gated again due to the polarity reversal across the switch. This gives rise to the discontinuous conduction mode in the chopper or the DC-to-DC converter.
In the discontinuous conduction mode, inductor current is not persistent throughout the complete cycle and reaches zero level earlier even before the end of the period. Discontinuous conduction mode inductance is less than the minimum value of the inductance for the continuous conduction mode,
LDCM < LCCM.
Thus, this condition generally arises for the light-load condition.
Let the value of inductance in the case of the discontinuous conduction mode be,
LDCM=ξ LCCM where 0<ξ<1 for the discontinuous conduction.
The discontinuous conduction mode usually occurs in converters which consist of single-quadrant switches and may also occur in converters with two-quadrant switches. Two-level DC buck, and boost and buck-boost converters will be discussed further in this article. There are two levels indicated here towards the two-voltage level for the inductor voltage.
The energy stored in the inductor is proportional to the square of the current flowing through it. Having the same power through the converter, the requirement of the inductor current is higher in the case of the discontinuous conduction as compared to the continuous conduction mode. This causes more losses in the circuit of the discontinuous conduction. As the energy stored is not yet released to the output in the discontinuous conduction, the output gets affected by the ringing. This may also cause a noise in the discontinuous conduction mode.
Moreover, the value of the inductance required for discontinuous conduction mode is lesser as compared to the continuous conduction mode since it allows the fall of the inductor current to zero level. This causes higher values for the root-mean-square and the peak current. Thus, the size of the transformer required in isolated converters is bigger as compared to the continuous-conduction transformer size to suit the larger flux linkage and the losses.
Conversion ratio is independent of the load during the continuous conduction mode but when it enters in discontinuous conduction mode, it becomes dependent to the load. This complicates the DC-circuit analysis because the first-order equations become second order.
In most of the applications, continuous conduction mode is employed. Yet, discontinuous conduction mode can also be used for certain applications such as for the low-current and loop-compensation applications.
#### Buck Converter
Consider the simple buck converter circuit shown in Fig. 1. The current in the converter is controlled here by two switches labeled as S (MOSFET) and D (Diode).
Figure 1. Circuit for Buck Converter
This is a single-quadrant converter with the following waveforms for the continuous conduction mode shown in Fig. 2.
Figure 2. Supply Current IS, Diode Current ID, Inductor Current I, and Inductor Voltage VL Waveforms respectively (Buck Converter)
The buck converter in discontinuous and continuous conduction modes are in the second-order and first-order systems respectively.
For continuous conduction mode,
$$I_{L+}=\frac{1}{L}\int_{0}^{t}(V_{S}-V_{O})dt+I_{min}$$ For 0≤t≤DT.
$$\Rightarrow I_{L+} =\frac{V_{S}-V_{O}}{L}t+I_{min}$$
At t=DT, inductor current is at maximum value,
$$I_{max}=\frac{V_{S}-V_{O}}{L}DT+I_{min}$$ [Equation 1]
$$I_{L-}=(\frac{1}{L})\int_{DT}^{t}-V_{O} dt+I_{max}$$ For DT≤t≤T.
$$\Rightarrow I_{L-}=\frac{V_{O}}{L}(DT-t)+ I_{min}$$
The average value of the inductor current for the buck converter is
$$I_{avg}=\frac{V_{O}}{R}.$$
Because the inductor is always connected to the load whether the switch is on or off.
The average value of the current through the capacitor is nil due to the capacitor charge balance condition.
From Fig. 2, the area under the inductor current waveform is,
$$(Area)_{L} = T I_{min}+\frac{1}{2}T (I_{max}-I_{min})$$
Average value of the inductor current is,
$$I_{avg}=\frac{V_{O}}{R}=I_{min}+\frac{1}{2}(I_{max}-I_{min})$$ [Equation 2]
From Equations 1 and 2 we can get,
$$I_{avg}=\frac{V_{S}-V_{O}}{2L}DT+I_{min}$$
$$\Rightarrow I_{avg}=\frac{D(V_{S}-V_{O})}{2Lf}+ I_{min}=\frac{V_{O}}{R}$$
The value of inductance is,
$$L=\frac{D(V_{S}-V_{O})R}{2f(V_{O}-I_{min} R)}$$
The boundary of continuous condition is when Imin=0. If the value of Imin<0, the converter enters in the discontinuous conduction mode.
Thus,
L=LCCM for Imin=0
Hence,
$$L_{CCM}=\frac{D(V_{S}-V_{O})R}{2fV_{O}}$$
The value of inductance for the discontinuous conduction is given by
$$L=L_{DCM}=ξ L_{CCM}=\frac{ξD(V_{S}-V_{O})R}{2fV_{O}},$$ where 0<ξ<1.
For discontinuous conduction mode, when L< LCCM, the waveforms for the inductor current and inductor voltage are shown in Fig.3.
Figure 3. Inductor Current and Voltage for the Discontinuous Conduction Mode of Buck Converter
It is clear from the Fig.3 that the value of the minimum inductor current is zero i.e. Imin=0.
As the current across the inductor current is reduced to zero, the value of the voltage across the inductor is also reduced to zero value while VC =VO during the entire cycle.
For the time duration 0 ≤ t ≤ TON
$$I_{L+}(t)=\frac{V_{S}-V_{O}}{L}t$$ [Equation 3]
As the value of the peak inductor current occurs at t = TON,
$$\Rightarrow I_{max}=\frac{V_{S}-V_{O}}{L}T_{ON}=\frac{V_{S}-V_{O}}{L}DT=\frac{V_{S}-V_{O}}{Lf}D.$$
For the time duration TON ≤ t ≤ TX,
$$I_{L-}(t)=\int_{T_{ON}}^{t}-\frac{V_{C}}{L}dt+I_{max}$$
$$\Rightarrow I_{L-}(t)=\frac{V_{C}}{L}(T_{ON}-t)+\frac{V_{S}-V_{O}}{Lf}$$ [Equation 4]
At t = TX, current reduces to zero value,
$$0=\frac{V_{C}}{L}(T_{ON}-T_{X})+\frac{V_{S}-V_{O}}{Lf}$$
$$\Rightarrow T_{X}=D\frac{V_{S}}{f V_{O}}$$
Compared to the continuous condition, the amount of energy needed by the load is lesser in the discontinuous condition.
It is considered that the converter is operated in the steady state. Thus, the energy in the inductor remains the same at the start and at the end of the cycle. The volt-time balance condition can also be applied here.
The above equation can also be derived using the inductor volt-second balance condition as,
$$(V_{S}-V_{C})T_{ON}+(-V_{C})(T_{X}-T_{ON})=0$$
$$\Rightarrow (V_{S}-V_{C})DT+(-V_{C})(T_{X}-DT)=0$$
$$\Rightarrow T_{X}=V_{S}\frac{D}{f V_{O}}$$
For the time duration TX ≤ t ≤ T
$$I_{L0}(t)=0$$
From the Fig. 3, it is clear that the average value of the inductor current is equal to the area under the load current curve divided by T.
$$I_{avg}=\frac{\frac{1}{2}T_{X} I_{max}}{T}$$
For the DC supply,
$$I_{avg}=\frac{V_{O}}{R}$$
Hence,
$$\frac{V_{O}}{R}=\frac{V_{S}(V_{S}-V_{O})D^{2}}{2LV_{O} f}$$
The duty cycle ratio for the discontinuous conduction mode in the case of the buck converter is,
$$D=V_{O}\sqrt{\frac{2Lf}{R V_{S}(V_{S}-V_{O})}}$$ [Equation 5]
The duty cycle ratio of the buck converter in its continuous conduction mode is
$$D =\frac{V_{O}}{V_{i}}.$$
The duty cycle ratio for the buck converter is also dependent on the inductance L, load resistance R, and the switching frequency f.
For discontinuous conduction mode,
$$L=L_{DCM}=ξL_{CCM} =\frac{ξD(V_{S}-V_{O})R}{2fV_{O}}$$ [Equation 6]
Substitution of the Equation 5 into Equation 6 gives,
$$D=\frac{V_{O}}{V_{S}}\sqrt{ξ}$$ [Equation 7]
Since 0 < ξ < 1, duty cycle ratio of the buck converter in the discontinuous conduction mode is less than its value in the continuous conduction mode. Thus, less amount of energy is transferred through the converter which is not enough to maintain the inductor current throughout the entire period. This is the reason the discontinuous current flows through the inductor.
The conversion ratio of buck DC-to-DC converter is,
$$\frac{V_{O}}{V_{S}}=\frac{D}{\sqrt{ξ}}$$
where 0<ξ<1
If the value of ξ is greater than 1, the converter enters in the continuous conduction mode. We can easily know the conduction state of the buck converter, which is either continuous or discontinuous, if we know the value of input and output voltages of the converter by simply measuring the value of ξ.
Instantaneous value of the capacitor current is given by subtracting the value of the inductor current to the load current. When the inductor current value is reduced to zero value, the load current is supplied by the capacitor.
From Equations 3 and 4 we can get:
For the time duration 0 < t
$$I_{C+}(t)=\frac{V_{S}-V_{O}}{L}t-I_{O}$$ [Equation 8]
For the time duration DT < t < TX,
$$I_{C-}(t)=\frac{V_{O}}{L}(DT-t)+\frac{D(V_{S}-V_{O})}{Lf}-I_{O}$$ [Equation 9]
And for the time duration TX < t < T,
$$I_{C_{O}}=-I_{O}$$ [Equation 10]
If the capacitance is assumed to be ideal, the capacitor current will not decay even after the inductor current value is reduced to zero value. For that case, the waveforms for the capacitor and inductor current are shown in Fig. 4.
From Fig.4, it is clear that the value of the capacitor current is zero at time t=Ta and at t=Tb.
Equation 8 at time t =Ta gives,
$$0=\frac{V_{S}-V_{O}}{L}T_{a}-I_{O}$$
$$\Rightarrow T_{a}=L\frac{I_{O}}{V_{S}-V_{O}}$$ [Equation 11]
And Equation 9 at time t=Tb gives,
$$0=\frac{V_{O}(DT-T_{b})}{L}+\frac{D(V_{S}-V_{O})}{Lf}-I_{O}$$
$$\Rightarrow T_{b}=DT-\frac{LI_{O}}{V_{O}}+\frac{D(V_{S}-V_{O})}{fV_{O}}$$ [Equation 12]
Figure 4. Inductor Current and Capacitor Current respectively for the Discontinuous Conduction Mode
of the Buck Converter
The positive time interval for the charge accumulation i.e. Tb-Ta from Equations 11 and 12 is given by:
$$T_{b}-T_{a}=D\frac{V_{S}}{fV_{O}}-\frac{LI_{O}V_{S}}{V_{O}(V_{S}-V_{O})}$$ [Equation 13]
From Equation 6 and Equation 7 we can get,
$$T_{b}-T_{a}=\frac{2\sqrt{ξ}-ξ}{2f}$$ [Equation 14]
From Fig.4, it is also clear that the maximum value of the capacitor current occurs at the time t=DT.
At t = DT,
IC(DT) = Ihp
From Equation 8 we can get,
$$I_{hp} = (\frac{2}{\sqrt{\xi}}-1)I_{O}$$ [Equation 15]
The charge accumulated is the integration of the capacitor current (area under the capacitor current from Ta to Tb) which is also given by the expression:
∆Q=C∆V [Equation 16]
Thus,
$$C ∆V=\frac{1}{2}(2\sqrt{ξ}-ξ)(\frac{2}{\sqrt{ξ}}-1)I_{O}(\frac{1}{2f})=\frac{V_{O}(2-\sqrt{ξ})^2}{4Rf}$$
The ripples in the load due to the ripples in the capacitor are given by the following expression:
$$r=\frac{∆V}{V_{O}}=\frac{(2-\sqrt{ξ})^2}{4Rf}$$
#### Boost Converter
Circuit for the boost converter is shown in Fig. 5.
Figure 5. Circuit for Boost Converter
The waveform for the continuous conduction mode is shown in Fig. 6. When it is in the discontinuous conduction mode, the waveform is shown in Fig. 7.
We can assume that the inductor is connected to the load for the time Ty such that
IO =Y Iavg [Equation 17]
where
Y = Ty/T
Figure 6. Supply Current, Diode Current, Inductor Current and Inductor Voltage respectively (Boost Converter)
Figure 7. Inductor Current and Voltage for the Discontinuous Conduction Mode of Boost Converter
When the converter operates in the steady-state condition, the energy at the start and at the end of the cycle is the same. Thus, volt-time balance condition can be applied here too.
From the figure and the volt-time balance condition it is clear that,
TON VS+(TX-TON).(VS-VC)=0
$$\Rightarrow DT V_{S}+(T_{X}-DT).(V_{S}-V_{O})=0$$
$$\Rightarrow T_{X}=D\frac{V_{O}}{(V_{S}-V_{O})f}$$
From Fig. 6, it is also evident that the value of the minimum and maximum currents are as follows:
Imin=0;
and
$$I_{max}=\frac{V_{S}}{L}T_{ON}=\frac{V_{S}}{Lf}D$$
Thus, the average value of the inductor current is,
$$I_{avg}=\frac{1}{1-D}\frac{V_{O}}{R}=\frac{\frac{1}{2}T_{X}I_{max}}{T}$$
$$\Rightarrow \frac{V_{O}}{R}=\frac{(\frac{1}{2})(\frac{DV_{O}}{(V_{S}-V_{O})f})(\frac{V_{S}D}{Lf})(1-D)}{T}$$
From Equation 17 we can get,
$$D=\sqrt{2\frac{(V_{O}-V_{S})Lf}{RYV_{S}}}$$ [Equation 18]
The duty cycle ratio of the buck converter for the continuous conduction mode is equal to $$\frac{V_{O}-V_{S}}{V_{S}}.$$
In the discontinuous conduction mode, the duty cycle ratio of the boost converter is not only dependent on the input and output voltages but it also depends on the inductance L, load resistance R, and the switching frequency f.
The discontinuous inductance for the boost converter is,
$$L_{DCM}=ξY R\frac{V_{S}(V_{O}-V_{S})}{2 f V_{O}}$$
Substituting this value of inductance in Equation 18 we can get,
$$D=\frac{V_{O}-V_{S}}{V_{O}}\sqrt{ξ}$$
$$\Rightarrow \frac{V_{O}}{V_{S}}=\frac{V_{S}}{1-\frac{D}{\sqrt{ξ}}}$$
Hence, the complete conversion ratio for the boost converter in the discontinuous conduction mode is given by the above expression.
#### Buck-Boost Converter
The circuit for the buck-boost converter is shown in Fig. 8 and the related waveforms of the buck-boost converter in the case of continuous conduction mode are shown in Fig. 9.
Figure 8. Circuit for the Buck-Boost Converter
Inductor is connected to the load during the switch-off period; where Y= (1-D).
Thus,
$$I_{avg}=Y I_{O}=Y\frac{V_{O}}{R}=\frac{(1-D)V_{O}}{R}$$
Figure 9. Supply Current, Diode Current, Inductor Current and Inductor Voltage respectively (Buck-Boost Converter) in Continuous Conduction Mode
Figure 10. Inductor Current and Inductor Voltage for the Discontinuous Conduction Mode
of the Buck-Boost Converter
Assume that the converter is operating in steady state; therefore, energy at the start up to the end of the cycle must be equal. Thus, volt-time balance condition is applied here.
Applying the volt-sec balance across the inductor using the Fig. 9,
VS TON + (TX - TON) (-VO) = 0
$$\Rightarrow V_{S}DT - (T_{X}-DT)V_{O}=0$$
$$\Rightarrow T_{X} = \frac{D(V_{S}+V_{O})}{V_{O}f}$$
From the Fig. 9, it is also noticed that the value of the minimum and maximum currents are as follows:
Imin = 0
$$I_{max}=\frac{V_{S}}{L}T_{ON}=\frac{V_{S}}{Lf}D$$
Thus, the average value of the inductor current is,
$$I_{avg} = Y\frac{V_{O}}{R} = \frac{\frac{1}{2}I_{max}T_{X}}{T}$$
$$\Rightarrow \frac{V_{O}}{R}=\frac{\frac{\frac{1}{2}V_{S}D}{Lf}D(V_{S}+V_{O})}{YV_{O}f}f$$
In the discontinuous conduction mode of the buck-boost converter, the value of the duty cycle ratio is given by
$$D=V_{O}\sqrt{\frac{2Lf}{R YV_{S}(V_{S}+V_{O})}}$$
The duty cycle ratio of the buck-boost converter for the continuous conduction mode is equal to $$\frac{V_{O}}{V_{O}+V_{S}}.$$
In the case of the discontinuous conduction mode, the duty cycle ratio of the buck-boost converter is also dependent on the inductance L, load resistance R, and the switching frequency f.
The conversion ratio for the buck-boost converter is,
$$\frac{V_{O}}{V_{S}}=\frac{D}{\sqrt{ξ}-D}$$
• Share
### You May Also Like
1 Comment
• S
Sajeel Ahmed February 22, 2018
This article has realized me to think something new. I am working on boost converter TPS61030 and It produces noise and ringing maybe due to the discontinous conduction mode and I want to solve this issue because I want silent device. Is to possible to operate it in continuous conduction mode by using the formula had been derived in this article?
Like. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916910290718079, "perplexity": 1398.3238837750553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400208095.31/warc/CC-MAIN-20200922224013-20200923014013-00131.warc.gz"} |
https://simple.wikipedia.org/wiki/Vector_field | # Vector field
Part of a vector field
A vector field is a function that assigns to every point in space a vector. It can be imagined as a collection of arrows, each one attached to a different point in space. For example, the wind (the velocity of air) can be represented by a vector field. This is because in every point one can write an arrow showing the direction of the wind and its strength.
Vector calculus is the study of vector fields.
## Physical examples
In everyday life, gravity is the force that makes objects fall down. More generally, it is the force that pulls objects towards each other. If we describe for each point in space the direction and strength of the earth's pull, we will get a gravity vector field for the Earth.
The theory dealing with electricity and magnetism in physics is called electromagnetism. One of the basic assumptions is that there are two vector fields in all space which govern electric and magnetic forces. One is the electric field, which is often written as ${\displaystyle {\vec {E}}}$ and the second is the magnetic field, which is often written as ${\displaystyle {\vec {B}}}$.
Atmospheric circulation can be represented by a vector field. The more precisely the vector field describes the actual flow, the better the resulting weather forecast. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9685376882553101, "perplexity": 162.32054814446255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499888.62/warc/CC-MAIN-20230131154832-20230131184832-00783.warc.gz"} |
http://math.stackexchange.com/questions/165479/chain-rule-confusion | Chain Rule Confusion
Below i have a function that i need to use the chain rule on. My friend showed me his answer which was correct which was $-8x^7\sin(a^8+x^8)$.
$$y = \cos(a^8 + x^8)$$
I am really confused as how he got that. I know that in the chain rule you bring whats outside to the front. So why is $a^8$ not in this solution?
-
What is the derivative of a constant? – M.B. Jul 1 '12 at 22:19
Use \cos and \sin. – Did Jul 1 '12 at 22:32
SO a is a constant? How do i know this though? – soniccool Jul 2 '12 at 1:13
It's really something you have to pick up from context (i.e. the bit of the question which isn't the equation). If you're trying to work out $dy/dx$, anything which doesn't involve either $y$ or $x$ is a constant. If you're trying to work out $dz/dt$, then something involving $x$ would be a constant. Really this is all an unfortunate consequence of sloppy notation. – Ben Millwood Jul 2 '12 at 2:17
You’re trying to treat the chain rule as a mechanical manipulation of symbols instead of understanding what it actually says. It says that when you differentiate a composite function, say $g\big(f(x)\big)$, you first take the derivative of $g$ as if $f(x)$ were the independent variable, and then you multiply by $f\,'(x)$.
Here you have $h(x)=\cos(a^8+x^8)$, and you want $h'(x)$. First pretend that what’s inside the cosine is a single variable; call it $u$, if you like so that $u=a^8+x^8$ and $h(x)=\cos u$. Now differentiate with respect to $u$ to get $-\sin u$. But you weren’t really differentiating with respect to $u$: you were differentiating with respect to $x$. The chain rule says that in order to compensate for this distinction, you must now multiply by $\frac{du}{dx}$. Since $a^8$ is a constant, its derivative (with respect to anything!) is $0$, and therefore $\frac{du}{dx}=8x^7$. The chain rule now tells you that $$h'(x)=\Big(-\sin(a^8+x^8)\Big)\Big(8x^7\Big)=-8x^7\sin(a^8+x^8)\;.$$
Question though, how do i know $a^8$ is a constant? Or just memorize that a in a derivative is a constant? – soniccool Jul 2 '12 at 1:14
No, a is a constant because the expression is derived regarding x. If it were $\frac{du}{da} then x were the constant, and a the variable. – vsz Jul 2 '12 at 6:07 @vsz: Nowhere in the problem as relayed by the OP does it specify that the differentiation is with respect to$x$; we can only infer that from convention (and the friend’s correct answer). Moreover, the fact that the differentiation is w.r.t.$x$is not sufficient to rule out the possibility that$a$is itself some function of$x$, and that the correct is actually$-8(a^7a'+x^7)\sin(a^8+x^8)$. Again, only our knowledge of convention and of what’s likely to appear in a problem at this level lets us answer correctly. – Brian M. Scott Jul 2 '12 at 16:55 Note that$y$is a function of$x$and$a$is just a constant. To understand the procedure, let us call$x^8 + a^8$as a function$f(x)$. We then have $$y = \cos(f(x))$$ Hence, by chain rule we get that $$\dfrac{dy}{dx} = \dfrac{dy}{df} \times \dfrac{df}{dx}$$ Now$\dfrac{dy}{df} = -\sin(f(x))$and$\dfrac{df(x)}{dx} = \dfrac{d(x^8 + a^8)}{dx} = \dfrac{d(x^8)}{dx} + \dfrac{d(a^8)}{dx}$. Now recall that $$\dfrac{d (x^n)}{dx} = n x^{n-1} \text{ and } \dfrac{d (\text{ constant })}{dx} = 0$$ Hence, we get that$\dfrac{d(x^8)}{dx} + \dfrac{d(a^8)}{dx} = 8 x^7 + 0 = 8x^7$. Hence, we get that $$\dfrac{dy}{dx} = \dfrac{dy}{df} \times \dfrac{df}{dx} = - \sin(f(x)) \times \left(8x^7 \right) = - 8x^7 \sin \left( x^8+a^8 \right)$$ - HINT What is the derivative of$a^8+x^8$? It doesn't have any$a$term either. - The fact that$a$is a constant is one of those things you have to infer from context. Typically, in single variable calculus, one uses letters towards the end of the alphabet are all dependent on one another (typically the relationship is they are all functions of$x$,$t$, or$z$), and other letters without prior meaning (e.g.$f$and$g$) as constants. One can use$a$as a variable functionally dependent on$x$. If$a$is used that way in the stated problem, then you would have another term in the answer that includes a factor of$\frac{da}{dx}\$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9922348260879517, "perplexity": 416.157730256694}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661364.60/warc/CC-MAIN-20150417045741-00279-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.groundai.com/project/an-axiomatic-basis-for-blackwell-optimality/ | An axiomatic basis for Blackwell optimality
# An axiomatic basis for Blackwell optimality
Adam Jonsson Department of Engineering Sciences and Mathematics
Luleå University of Technology, 97187 Luleå, Sweden
###### Abstract.
In the theory of Markov decision processes (MDPs), a Blackwell optimal policy is a policy that is optimal for every discount factor sufficiently close to one. This paper provides an axiomatic basis for Blackwell optimality in discrete-time MDPs with finitely many states and finitely many actions.
###### Key words and phrases:
Markov decision processes; Blackwell optimality
###### 2010 Mathematics Subject Classification:
Primary 90C40, Secondary 91B06
## 1. Introduction
In his foundational paper, Blackwell [4] showed that for any discrete-time Markov decision process (MDP) with finitely many states and finitely many actions, there exists a stationary policy that is optimal for every discount factor sufficiently close to one. Following Veinott [15], policies that possess this property are now referred to as Blackwell optimal. Blackwell optimality and the related concept of 1-optimality (also known as near optimality, 0-discount optimality, and bias optimality) have come to provide two of the most well studied optimality criteria for undiscounted MPDs (see, e.g., [10, 9, 13, 12, 14, 6, 7]). However, the question of which assumptions on a decision maker’s preferences lead to these criteria has not been answered in the literature.
To address this question, we consider a decision maker with preferences over . The preference relation is postulated to be reflexive and transitive, where means that is at least as good as , means that is better than ( but not ), and means that and are equally good ( and ). A policy generates a stream of expected rewards (see Eq. (3) below), where is the expected reward at time . Let denote the set of streams generated by stationary policies, that is, policies for which the action chosen at time depends only on the state at time . The principal result of this paper (Theorem 1) provides conditions on that ensure that and coincide on , where
u≿\textscBv⟺liminfβ→1−∞∑t=1βt(ut−vt)≥0 (1)
is the preference relation induced by the 1-optimality criterion. To state this result, we use the following notation: For and , we let denote . If for all and for some , we write . The long-run average
limn→∞1nn∑t=1ut (2)
of is denoted by if the limit (2) exists.
###### Theorem 1.
Let be a preference relation on with the following three properties.
A1. For all , if , then .
A2. For all , if , then .
A3. For all , if is well defined, then .
Then and coincide on .
This result is proved in [8] on a different domain (the set of streams that are either summable or eventually periodic). To prove Theorem 1, we extend the result from [8] to a larger domain (Lemma 2) and show that this domain contains (Lemma 3).
The first two assumptions in Theorem 1, A1 and A2, are standard (cf. [2, 3]). To interpret A3, which is the Compensation Principle from [8], imagine that the decision maker is faced with two different scenarios: In the first scenario, a stream of rewards is received. In the second scenario, there is a one-period postponement of , for which a compensation of is received in the first period. According to A3, the decision maker is indifferent between and if . For an axiomatic defence of this assertion, see [8, Prop. 1].
Theorem 1 tells us that if a decision maker restricts attention to stationary policies and respects A1, A2, and A3, then any stationary 1-optimal policy is (weakly) best possible with respect to his or her preferences. (The same conclusion hold for Blackwell optimal policies since such policies are 1-optimal by definition.) While restricting attention to stationary policies is often natural, it is well known that not all optimality criteria admit stationary optimal policies [5, 13, 11]. The arguments used in the proof of Theorem 1 apply to sequences that are asymptotically periodic (see Eq. (8) below). We mention without proof that as a consequence, the conclusion in Theorem 1 holds also on the set of streams generated by eventually periodic policies.
## 2. Definitions
We use Blackwell’s [4] formulation of a discrete-time MDP, with a finite set of states, a finite set of actions, and the set of all functions . Thus at each time , a system is observed to be in one of states, an action is chosen from , and a reward is received. The reward is assumed to be a function from to . The transition probability matrix and reward (column) vector that correspond to are denoted by and , respectively. So, if the system is observed to be in state and action is chosen, then a reward of is received and the system moves to with probability .
A policy is a sequence , each . The set of all policies is denoted by . A policy is stationary if for all , and eventually periodic if there exist such that for all .
The stream of expected rewards that generates, given an initial state , is the sequence defined (see [4, p. 719])
u1 =[R(f1)]s, ut =[Q(f1)…Q(ft−1)⋅R(ft)]s,t≥2. (3)
We define as the set of all that can be written (3) for some stationary and some , where is a MDP with finitely many states and finitely many actions.
## 3. Proof of Theorem 1
The proof of Theorem 1 will be completed through three lemmas. The first lemma shows that if satisfies A1A3, then and coincide on the set of pairs for which the series is Cesàro-summable and has bounded partial sums, where
u≿\textscVv⟺liminfn→∞1nn∑T=1T∑t=1(ut−vt)≥0 (4)
is the preference relation induced by Veinott’s [15] average overtaking criterion. All results presented in this paper hold with in the role of .
###### Lemma 1.
(a) The preference relation satisfies A1A3.
(b) Let be a preference relation that satisfies A1A3. For all , if the series is Cesàro-summable and has bounded partial sums, then if and only if .
###### Proof.
(a) See [8, Theorem 1]. (b) A consequence of (a) and Lemma 2 in [8]. ∎
That the conclusion in Lemma 1(b) holds with in the role of follows from that satisfies A1A3. The rest of the proof consists of identifying a superset of to which the conclusion in Lemma 1(b) extends. Lemma 2 shows that this conclusion holds on the set of that can be written
u=w+△, (5)
where is eventually periodic and the series is Cesàro-summable (the limit exists and is finite) and has bounded partial sums. Let denote the set of streams that can be written in this way.
###### Lemma 2.
A preference relation on that satisfies A1A3 is complete on and coincides with on this domain.
That is complete on means that for all , if does not hold, then .
###### Proof.
Let be a preference relation that satisfies A1A3, and let . Then . We show that if and only if . Take and such that , where for all and where is Cesàro-summable with bounded partial sums. Without loss of generality, we may assume that .
Case 1: . Then is Cesàro-summable and has bounded partial sums. This means that is Cesàro-summable and has bounded partial sums. By Lemma 1, .
Case 2: . (A similar argument applies when .) Then as . Since has bounded partial sums, . We show that . Choose and with the following properties.
(i) is eventually periodic with period .
(ii) for all and for all .
(iii) for all .
(iv) for all .
Since , (iv) follows from (i)–(ii) by taking sufficiently large. Let . By (iii), for all . This means that , is eventually periodic. Thus and hence is Cesàro-summable with bounded partial sums. Since , the Cesàro sum of is nonnegative by (iv). This means that , so by Lemma 1. Here , so by A1 and transitivity. By A2, . Since also satisfies A1A3, the same argument shows that . ∎
It remains to verify that contains . For this it is sufficient to show that every can be written
u=w+△, (6)
where is eventually periodic and goes to zero at exponential rate as . We say that is asymptotically periodic if can be written in this way.
###### Lemma 3.
If is generated by a stationary policy, then is asymptotically periodic.
###### Proof.
Let be generated by applying given an initial state , so that is the :th component of (here is the identity matrix)
Q(f)t−1⋅R(f),t≥1. (7)
We need to show that there exist and with
u=w+△, (8)
where is eventually periodic and . A well known corollary of the Perron-Frobenius theorem for nonnegative matrices says that for any stochastic matrix and , the sequence converges exponentially to a periodic orbit (see, e.g., [1].) That is, there exist , and such that for all and where
limt→∞|(Pt⋅x−y(t))s|eρt=0
for every . Thus we can take , and such that
(Q(f))t−1⋅R(f)=w(t)+e(t) (9)
for every , where for all and where each component of goes to zero faster than . If we now set , then , where is eventually periodic and . ∎
## References
• [1] Mustafa A. Akcoglu and Ulrich Krengel. Nonlinear models of diffusion on a finite space. Probability Theory and Related Fields, 76(4):441–420, 1987.
• [2] Geir B. Asheim, Claude d’Aspremont, and Kuntal Banerjee. Generalized time-invariant overtaking. Journal of Mathematical Economics, 46(4):519–533, 2010.
• [3] Kaushik Basu and Tapan Mitra. Utilitarianism for infinite utility streams: A new welfare criterion and its axiomatic characterization. Journal of Economic Theory, 133(1):350–373, 2007.
• [4] David Blackwell. Discrete dynamic programming. Annals of Mathematical Statistics, 33(2):719–726, 1962.
• [5] Barry W. Brown. On the iterative method of dynamic programming on a finite space discrete time Markov process. Ann. Math. Statist., 36(4):1279–1285, 1965.
• [6] Arie Hordijk and Alexander A. Yushkevich. Blackwell optimality. In E.A. Feinberg and A Shwartz, editors, Handbook of Markov Decision Processes, Imperial College Press Optimization Series. Springer, Boston, MA, 2002.
• [7] Hèctor Jasso-Fuentes and Onèsimo Hernàndez-Lerma. Blackwell optimality for controlled diffusion processes. Journal of Applied Probability, 46(2):372–391, 2009.
• [8] Adam Jonsson and Mark Voorneveld. The limit of discounted utilitarianism. Theoretical Economics, 2017. To appear, available at: https://econtheory.org.
• [9] J.B Lasserre. Conditions for existence of average and blackwell optimal stationary policies in denumerable markov decision processes. Journal of Mathematical Analysis and Applications, 136(2):479–489, 1988.
• [10] Steven A. Lippman. Letter to the Editor — Criterion equivalence in discrete dynamic programming. Operations Research, 17(5):920–923, 1969.
• [11] Andrzej S. Nowak and Oscar Vega-Amaya. A counterexample on overtaking optimality. Math. Methods Oper. Res., 49(3):435–439, 1999.
• [12] Alexey B. Piunovskiy. Examples in Markov decision processes, volume 2 of Imperial College Press Optimization Series. Imperial College Press, London, 2013.
• [13] Martin L. Puterman. Markov decision processes: discrete stochastic dynamic programming. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, Inc., New York, 1994.
• [14] Dinah Rosenberg, Eilon Solan, and Nicolas Vieille. Blackwell optimality in Markov decision processes with partial observation. The Annals of Statistics, 30(4):1178–1193, 2002.
• [15] Arthur. F Veinott. On finding optimal policies in discrete dynamic programming with no discounting. Annals of Mathematical Statistics, 37(5):1284–1294, 1966.
You are adding the first comment!
How to quickly get a good reply:
• Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made.
• Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.
• Your comment should inspire ideas to flow and help the author improves the paper.
The better we are at sharing our knowledge with each other, the faster we move forward.
The feedback must be of minimum 40 characters and the title a minimum of 5 characters | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9695568084716797, "perplexity": 794.8889767944538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657134758.80/warc/CC-MAIN-20200712082512-20200712112512-00196.warc.gz"} |
http://www.sciforums.com/threads/gravity-never-zero.111586/page-36 | # Gravity never zero
Discussion in 'Astronomy, Exobiology, & Cosmology' started by Ivan, Dec 18, 2011.
Thread Status:
Not open for further replies.
1. ### OnlyMeValued Senior Member
Messages:
3,747
Refer back to "The Concept of Mass" previously linked.., there is no "relativistic mass" there is just mass. Sometimes referred to as invariant mass, inertial mass, gravitational mass or rest mass. They are all the same thing and relativistic mass is a confusing and misleading description for the TOTAL energy, of an invariant mass plus any kinetic energy associated with its velocity.
While E is generally used to denote total energy, the total energy it refers to is defined by the context it is used. In the equation E = mc^2, it referrs only to the total energy associated with a specific invariant mass. It does not include any energy associated with acceleration or velocity.
2. ### Google AdSenseGuest Advertisement
to hide all adverts.
3. ### Robittybob1BannedBanned
Messages:
4,199
But that was the thing I was leading too, you can't weigh an electron once it is motion (bound). I was thinking we might have to say it has rest mass plus Energy = hV (Planck's constant times electron wave frequecy).
4. ### Google AdSenseGuest Advertisement
to hide all adverts.
5. ### EmilValued Senior Member
Messages:
2,789
Fireflies lose weight when emits light?
6. ### Google AdSenseGuest Advertisement
to hide all adverts.
7. ### OnlyMeValued Senior Member
Messages:
3,747
No, E refers to the TOTAL ENERGY of the invariant mass m.
I do not completely understand what you are saying here.., however, "I think" you are confusing the equation $E = mc^2$, with $E = \frac{mc^2}{\sqrt{1-v^2/c^2}}$. These are two different equations and E represents a different total energy in each. E in the first equation is defined above. In the second equation E represents the total energy associated with both an invariant mass and any energy associated with its velocity. It is a misinterpretation or perhaps old and misleading interpretation of this second formula that often leads these discussions "down the rabbit hole".
As others have tried to point out there are other ways to mathematically describe both, but they involve an understanding of math that I think is beyond the discussion here.
8. ### OnlyMeValued Senior Member
Messages:
3,747
That was really good! You got me laughing and almost choking.
Theoretically, yes. But I don't believe we have scales than can measure the loss.
9. ### Robittybob1BannedBanned
Messages:
4,199
Yes but it might gain mass when it flies at the speed of light.
10. ### RealityCheckBannedBanned
Messages:
800
.
Hi guys.
I'm trying to review the K-capture process (where one electron from the K-shell is absorbed by a proton in a nucleus and an electron from the L-shell drops down to replace the absorbed K-shell electron, emitting a photon as the replacement electron drops from the L to the K shell).
The questions which I would like to have clarified are:
- does the proton-to-neutron conversion of the electron-capturing proton increase the mass of that proton perfectlt commensurate with the mass of that electron when in its K-shell bound state?...or is the mass/energy of the newly formed neutron different by some amount (either gained or lost somehow when being captured)?
- is there any radiation/particle emitted when that capture takes place and the K-shell electron drops from a higher energy bound state (in erstwhile K-shell state) to a lower energy bound state (as part of, or otherwise now contributing to the creation of a neutron where before there was only a proton)?
- does the mass of the erstwhile L-shell electron change (decrease) perfectly commensurate with the (X-ray?) photon energy emitted when it drops from the L to the K shell?...and if so, is the now-departing X-ray just a mass which has attained lightspeed, and so being effectively 'massless' purely by dint of having attained that speed and tus all its mass is 'moving in the one direction' rather than being 'self-involved' into some sort of self-interfering state which makes its energy/mass content movement trying to go in many directions at one and so making it more 'ponderous' when part of the electron and now less ponderous because of its single-direction motion as an X-ray photon whose total energy is going in one common direction?
I am much obliged for all your inputs and to the OP/thread author. Kudos.
Your discussion is very interesting on many levels. And if anyone here can cast any light on my questions in the context of this discussion so far, I would be very grateful for any inputs related to the aspects I mentioned.
Cheers!
.
Last edited: Mar 25, 2012
11. ### Robittybob1BannedBanned
Messages:
4,199
Rambling thoughts on the issue:
When an electron is ionized it will fly off its source molcule as a high speed electron. Which then can be slowed down probably by a magnetic field say so the excess energy over and above the electron's Rest Mass is removed.
I find it rather fascinating that from the ubiquitous electron as it reunits with the ionized atoms it can gice off photons of the exact amount to drop down through a cascade of electron energy levels. One would think there could be a reversal of the energy release process and the energy capture when the electron absorbs an electron and jumps up a level. The wave of the light is blended in with the wave path of the electron (just my primitive picture). I wonder if the momentum of the photon reflects the molecular vibratory momentum of motion at the time of release??? The direction it was going determines the direction of the photon ( How does a laser work?)
12. ### hansdaValued Senior Member
Messages:
1,641
What is the TOTAL ENERGY (TE) of the invariant mass (M) ?
TOTAL ENERGY (TE) = POTENTIAL ENERGY (PE) + KINETIC ENERGY(KE) .
This TOTAL ENERGY (TE) of a invariant mass (M) is constant . PE and KE changes accordingly so that TOTAL ENERGY(TE) remains constant .
In Einstein's Equation E = MC^2 ; the mass M is diminished and its PE and KE gets converted into LIGHT ENERGY . Here TOTAL ENERGY (TE) of mass (M) is converted into LIGHT ENERGY . So, this E also can be referred as LIGHT ENERGY . Here M can be either mass of electron , neutron or proton but NOT photon .
If you refer Einstein's paper (for which you already supplied the link earlier(thank you for that link)) , Einstein used two co-ordinate system at a relative velocity . He also considered Lorentz Transformation of Energy in these two co-ordinate system to prove his Equation . These two E's as you mentioned above corresponds to the energy(E) in the two co-ordinate system . In his paper he used the terms L and L* to denote these two energies instead of E's .
Einstein in his paper used simple math of Lorentz Transformation , in two co-ordinate system to prove his famous equation .
His last mathematical equation in his paper is K0 − K1 = (1/2) (L/c^2)v^2 .
where H0 − E0 = K0 + C ; H1 − E1 = K1 + C .
The following is quote from Einstein's paper to understand H , E and K .
[ NOTE : In the above quote I deleted , modified some mathematical expressions for ease of copy-paste from your link of Einstein's paper .]
From the equation K0 − K1 = (1/2) (L/c^2)v^2 ; he concluded that
M = L/c^2 or L = Mc^2 .
13. ### hansdaValued Senior Member
Messages:
1,641
Is the Sun or other stars loosing mass because they emit light or photon ?
Is our Earth or other planets gaining mass because they absorb light ?
What I think is that , particle photon carries Light-Energy and not Kinetic-Energy . So , absorption of a photon may increase energy-content but may not increase mass-content or inertia-content .
14. ### Robittybob1BannedBanned
Messages:
4,199
"Is the Sun or other stars loosing mass because they emit light or photon ?
" - Yes
"Is our Earth or other planets gaining mass because they absorb light" - If that was the case it would be called "Heavy " not "Light". No. because it is re-radiated away again.
Read the posts in the CO2 absorbing photon thread discussing this.
15. ### hansdaValued Senior Member
Messages:
1,641
Do you mean to say if an electron absorbs a photon ; the mass of the electron will increase ?
16. ### Robittybob1BannedBanned
Messages:
4,199
Now that they talk of the electron cloud the whole discussion gets blurry too. So I don't know but I was thinking it must for when you think of the electron as a particle it slows down after gaining energy and the only way I think that can happen is that the mass is raised. I was going to look into this as that Photon absorption thread progresses, so I really don't know at this stage.
What is your take on this? What does the internet say? Type this question as a google search and see what sort of answers you get.
Does electron gain mass after absorbing photon?
17. ### hansdaValued Senior Member
Messages:
1,641
I think we have already discussed this issue earlier . If an electron slows down after gaining energy(kinetic energy) ; this may be a case of frame-dragging effect , which slows down the sub-atomic particle . See post number #388 .
Internet says NO . So, mass of an electron does not increase after absorbing photon , only its kinetic energy and momentum increases .
See the following sites .
1. http://astronomyonline.org/Science/Atoms.asp
2. http://wiki.answers.com/Q/Does_photon_absorption_cause_an_electron_to_gain_mass
Last edited: Mar 30, 2012
18. ### OnlyMeValued Senior Member
Messages:
3,747
Hansda, read the about me for the first link. It is a blog by an author who joined the military out of high school and claims his job was as an eye specialist. Not that this in itself limits his credibility, but he would likely not be able to publish even in arXiv.
The second reference seems focused more toward practical applications, chemistry etc. And the answers author is not, at least readily identified. That makes it difficult to understand the contex of the answer.
While there is some debate whether the electron itself gains mass with absorption, or the added mass is some function of the atom as a whole system is debatable, an increase of mass is the underlying principle behind the equation E = mc^2.
19. ### hansdaValued Senior Member
Messages:
1,641
May be you are right . But can you show any reference , which says 'mass of an electron increases after absorbing photon' .
This is a wiki site for wiki answers . So, this can be accepted as 'right' .
I think you are confusing with increase of relativistic-mass which slows down an electron .
It is not relativistic-mass but the effect of frame-dragging which slows down an electron .
With the absorption of photon , electron's kinetic energy and momentum increases . This increased momentum can cause frame-dragging , which in turn may slow down electron .
Last edited: Mar 30, 2012
20. ### OnlyMeValued Senior Member
Messages:
3,747
I think I have posted both of these references earlier, but once again...
DOES THE INERTIA OF A BODY DEPEND UPON ITS ENERGY-CONTENT? this is Einstein's 1905 paper that introduces the equation E = mc^2 and its association with photon emission/absorption.
The Concept of Mass
Thus a massless photon may "transfer" nonvanishing mass. In absorbing a massless photon, ...
In the above reference, Lev Okun discusses the whole relativistic mass issue. Though when I have provided this link in the past I thought it was relatively clear and simple, I seem to be mistaken on that. You will have to think some of it through, since Okun presents both the historical view and context and the contemporay perspective. The specific quote does speak to a photon transferring mass, but not in the exact same context as we have been discussing. It still involves the electron but he describes it in the context of an object.., a cylinder....
WiKi is not always the best source of up to date information. It is both a good resource and at the same time, sometimes includes outdated concepts without discussing the contemporary view of an issue. This is the case for relativistic mass which does not exist. Mass is mass is mass. It is invariant. A particles velocity in at least the macroscopic view does not change its mass.
21. ### hansdaValued Senior Member
Messages:
1,641
Followings are three quotes from Einstein's paper .
1.
What do you understand by the term 'energy-content' ?
Is it radiation-energy ? Total-Energy ? Potential-Energy ? Kinetic-Energy ?
2.
Radium salts may loose mass with emission of radiation but do they also gain mass with absorption of radiation ? Is it experimentally proven ?
3.
Here Einstein says , " If the theory corresponds to facts , ..." . He explains his theory only with emission but not with absorption . His theory is proven for emission of energy . Is his theory also proven for absorption of energy ?
I could not access Lev Okun's paper .
Last edited: Mar 30, 2012
22. ### OnlyMeValued Senior Member
Messages:
3,747
L as used by Einstein in that paper represents the total energy associated with a specific mass, excluding any kinetic energy associated with its motion in space.
Where the radium salts issue was raised, Eimstein at the time appears to have had no knowledge that radiation associated with radium is alpha radiation, not photons. Alpha particles have mass. Remember this was 1905.
I believe Einstein actually says that mass is transferred as energy in the emission and absorption.., of energy, where the energy referred to is the photon... But I have not gone back and checked the actual wording. Positing on a device with limited memory I often lose connection with SciForums when switching to my PDF library.
What was the issue with your access to Okun's paper? Was it an issue related to a bad link or translation?
23. ### hansdaValued Senior Member
Messages:
1,641
What do you mean by " total energy " ?
Is it mechanical energy ( potential or kinetic ) ? ... Light energy ? ... Chemical energy ? ... Nuclear energy ? or some other form of energy ? Consider the fact that there are only eight forms of energy .
What do you mean by specific mass ?
Is it " rest mass " ? ... simply mass or something else ?
Einstein mentioned radium salts as an example in his paper .
Please go through Einstein's paper as you provided the link . Einstein used the word "IF" to prove his claim that , mass increases with absorbing radiation energy .
So, Einstein's this paper is not conclusive proof that , mass of an electron increases with absorbing a photon .
Now I am able to get Okun's paper .
Thread Status:
Not open for further replies. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9219520688056946, "perplexity": 1204.5549470207575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375102712.76/warc/CC-MAIN-20150627031822-00159-ip-10-179-60-89.ec2.internal.warc.gz"} |
http://fmoldove.blogspot.com/2013/06/quantum-mechanics-and-unitarity-part-4.html | Quantum mechanics and unitarity (part 4 of 4)
Now we can put the whole thing together and attempt to solve the measurement problem. But is there a problem to begin with? Here is a description of the problem as written by Roderich Tumulka http://www.math.rutgers.edu/~tumulka/teaching/fall11/325/script2.pdf (see page 53):
• In each run of the experiment, there is a unique outcome.
• The wave function is a complete description of a system’s physical state.
• The evolution of the wave function of an isolated system is always given by the
Schrödinger equation
Then in the standard formulation of quantum mechanics at least one of them has to be refuted. From the quantum mechanics reconstruction work, the last two bullets are iron-clad and cannot be violated without collapsing the entire theory. This means that GRW theory, and Bohmian interpretations are automatically excluded. Also the usual Copenhagen interpretation is not viable either because it makes use of classical physics (we know that we cannot have a consistent theory of classical and quantum mechanics). Epistemic approaches in the spirit of Peres are not the whole story either because while collapse is naturally understood as information update, this means that Leibniz identity is violated as well.
So what do we have left? Only the many-worlds interpretation (MWI), or its more modern form of Zurek’s relative state interpretation http://arxiv.org/abs/0707.2832.
However, I will argue for another fully unitary solution different than MWI/relative state interpretation (and I agree with Zurek that the old fashion MWI gives up too soon on finding the solution), but in the same spirit of Zurek’s approach. The basic idea is that measurement is not a primitive operation. The experimental outcome creates huge numbers of information copies. The key difference between Zurek’s quantum Darwinism and the new explanation is on who succeeds in creating the information copies: the full wavefunction (as in quantum Darwinism), or the one and only experimental outcome. In other words, the Grothendieck equivalence relationship is broken by the measurement amplification effects: only one equivalent representative of the Grothendieck group element succeeds in making information copies and statistically overwhelms all the other ones (for all practical purposes). The information in the “collapsed part of the wavefunction” is not erased, but becomes undetectable.
Of course there are still open problems of delicate technical nature to be solved in this new paradigm, but they do seem to get their full answer in this framework. Solving them is a work in progress, and the solution is not yet ready for public disclosure.
In subsequent posts I’ll show how the wavefunction is neither epistemological, nor ontological and I will touch on Bell’s theorem, and the recent PBR result among other things. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8351361155509949, "perplexity": 611.1916766766698}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189090.69/warc/CC-MAIN-20170322212949-00158-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/87150/how-to-see-a-plane-is-tangent-to-a-sphere-from-their-equations | # How to see a plane is tangent to a sphere from their equations
Say you have two equations with three variables, the first is the equation of the surface of a sphere and the second of a plane. In this case they intersect in a point $(1,0,0)$. The only way I know to find this point is to rewrite the equation of the sphere so you know its center point and intersect a line going through that point at and at an angle of 90 degrees with the plane. Are there other methods to solve this? Without geometry?
Here are two example equations.
$$\begin{cases} x^2 + y^2 + z^2 - 6x + 6y - 12z + 5&=&0\\ 2x - 3y + 6 z - 2&=&0\\ \end{cases}$$
and the solution
$$\begin{cases} x = 1\\ y = 0\\ z = 0\\ \end{cases}$$
-
You mention two "equations" in three variables. I don't see any, an equation has an equals sign. Perhaps you mean $x^2+y^2+z^2-4x+6y-12z=0$, and $2x-3y+6z-2=0$. But perhaps not, since $(1,0,0)$ is not on the sphere with above equation. And ultimately, given right sphere and plane, there will often be infinitely many points of intersection. – André Nicolas Nov 30 '11 at 21:01
Without geometry? Why? My preference would be to find the distance from the centre to the plane --- compare with the radius and work from there using a projection onto the plane. – Jp McCarthy Nov 30 '11 at 23:26
Oh, I'm sorry I typed the wrong equation for the sphere. – Jus Dec 1 '11 at 17:33
Any sphere $S$ has equation $s(x,y,z)=0$, where $$s(x,y,z)=x^2+y^2+z^2-2ax-2by-2cz+d,$$ for some $d\lt a^2+b^2+c^2$. Any plane $P$ has equation $p(x,y,z)=0$, where $$p(x,y,z)=ux+vy+wz+t,$$ for some $(u,v,w)\ne(0,0,0)$. That $P$ is tangent to $S$ is equivalent to the condition that a point $(x,y,z)$ belongs to both $P$ and $S$, such that the line between the center $(a,b,c)$ of $S$ and $(x,y,z)$ is orthogonal to $P$.
The first part is $p(x,y,z)=s(x,y,z)=0$. The vector $(u,v,w)$ is orthogonal to $P$ hence the second part is that $(x-a,y-b,z-c)$ and $(u,v,w)$ are proportional.
Thus $(x,y,z)=(a+su,b+sv,c+sw)$ for some $\lambda$. Then $p(x,y,z)=0$ if and only if $$(u^2+v^2+w^2)\lambda=ua+bv+cw+t,$$ and $s(x,y,z)=0$ if and only if $$(u^2+v^2+w^2)\lambda^2=a^2+b^2+c^2-d.$$ Thus $P$ plane is tangent to $S$ if and only if these two equations have a common solution $\lambda$, that is, $$(ua+bv+cw+t)^2=(u^2+v^2+w^2)\cdot(a^2+b^2+c^2-d).$$
-
First, a sphere and plane can intersect in a circle, a point, or not at all. As André Nicolas has said, $(1,0,0)$ does not satisfy your equations. For your equations, Alpha shows that the intersection is a circle, not a point.
With the new equation, Alpha still thinks there are more than one point of intersection. The first is now $(x-3)^2+(y+3)^2+(z-6)^2=7^2$ so $(1,0,0)$ is an intersection.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9372687339782715, "perplexity": 109.5758769912582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644059993.5/warc/CC-MAIN-20150827025419-00159-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://www.thestudentroom.co.uk/showthread.php?t=2046693&page=3 | Just one sec...
You are Here: Home
# A Summer of Maths
Announcements Posted on
Take our short survey, £100 of Amazon vouchers to be won! 23-09-2016
• View Poll Results: Have you studied any Group Theory already?
Yes, I did some Group Theory during/at my A-level.
37.84%
No, but I plan to study some before Uni.
21.62%
No, I haven't.
40.54%
1. I haven't done any maths for like 2 weeks and I already feel rusty :/
2. What is the derivative of y=x^x?
What is the derivatie and inverse of y=x^x^x^x^... (an infinite string of x's)
Here x^x^x=x^(x^x) not (x^x)^x
Also for what values of x does y exist?
3. (Original post by Brit_Miller)
I have a question but not an answer as I don't know how to do it. Hopefully someone can show how.
Let
I have never done anything like this so the following is possibly wrong. Even if it is correct I'm sure there's a much prettier way of doing it:
Spoiler:
Show
(for the bijection proof)
Suppose there exists such that
We would have:
Clearly
Furthermore:
Now consider the functions and
The above equality is satisfied when
Obviously is strictly increasing. Differentiating with respect to gives so decreases. Therefore has only one solution, namely . But this is defies our initial conditions.
Therefore is bijective.
4. (Original post by Brit_Miller)
I have a question but not an answer as I don't know how to do it. Hopefully someone can show how.
Let
Spoiler:
Show
Lemma 1. The cube of a negative number is the negative of its magnitude cubed.
Spoiler:
Show
Suppose that, for some , we have
where is non-negative.
Now, if we take for granted that , we continue by
Therefore, since the result is trivially true for , we get
as desired.
Lemma 2 (Corollary of Existence of n-th roots). Two cubes are equal if and only if the are the same.
Spoiler:
Show
Existence of n-th roots:
If is positive, and , then there exists exactly one positive real number such that .
By Lemma 1, we can always write as
but then the above theorem says that they must be equal.
{**} A solution:
Spoiler:
Show
We want to show that is injective and surjective which would imply that it is bijective.
[1] Suppose that with and at least one of and holds.
Then, we obtain the following two equations.
Using Lemma 2, we see from that , and by substituting into we get
which contradicts our assumption. Therefore, is injective.
[2] We show surjectivity by existence.
Take any . Then, it suffices to show that such that and .
We find that and where by applying Lemmas 1 & 2.
Therefore, is bijective.
Now, its inverse is given by
Spoiler:
Show
Lemma 1. The cube of a negative number is the negative of its magnitude cubed.
Spoiler:
Show
Suppose that, for some , we have
where is non-negative.
Now, if we take for granted that , we continue by
Therefore, since the result is trivially true for , we get
as desired.
Lemma 2 (Corollary of Existence of n-th roots). Two cubes are equal if and only if the are the same.
Spoiler:
Show
Existence of n-th roots:
If is positive, and , then there exists exactly one positive real number such that .
By Lemma 1, we can always write as
but then the above theorem says that they must be equal.
{**} A solution:
Spoiler:
Show
We want to show that is injective and surjective which would imply that it is bijective.
[1] Suppose that with and at least one of and holds.
Then, we obtain the following two equations.
Using Lemma 2, we see from that , and by substituting into we get
which contradicts our assumption. Therefore, is injective.
[2] We show surjectivity by existence.
Take any . Then, it suffices to show that such that and .
We find that and where by applying Lemmas 1 & 2.
Therefore, is bijective.
Now, its inverse is given by
Nice - and informative too! I guess I was completely off the map on this one.
6. (Original post by Lord of the Flies)
Nice - and informative too! I guess I was completely off the map on this one.
You can add some bits to it, and fix some, but in general I don't think you were off the map at all. (not that I am much on the map, but anyway)
For instance, , so that is a point in the plane.
The function is a mapping from points on the plane to points on the real plane; i.e. .
Spoiler:
Show
Lemma 1. The cube of a negative number is the negative of its magnitude cubed.
Spoiler:
Show
Suppose that, for some , we have
where is non-negative.
Now, if we take for granted that , we continue by
Therefore, since the result is trivially true for , we get
as desired.
Lemma 2 (Corollary of Existence of n-th roots). Two cubes are equal if and only if the are the same.
Spoiler:
Show
Existence of n-th roots:
If is positive, and , then there exists exactly one positive real number such that .
By Lemma 1, we can always write as
but then the above theorem says that they must be equal.
{**} A solution:
Spoiler:
Show
We want to show that is injective and surjective which would imply that it is bijective.
[1] Suppose that with and at least one of and holds.
Then, we obtain the following two equations.
Using Lemma 2, we see from that , and by substituting into we get
which contradicts our assumption. Therefore, is injective.
[2] We show surjectivity by existence.
Take any . Then, it suffices to show that such that and .
We find that and where by applying Lemmas 1 & 2.
Therefore, is bijective.
Now, its inverse is given by
Nicely done!
8. Is anyone interested in summing up
where is a positive integer? Ideas how we can do this?
Is anyone interested in summing up
where is a positive integer? Ideas how we can do this?
Without doing any mathematics, I think that should diverge. For large n the value of x can be ignored so n!/(n+x)! is approximately 1 so it must diverge.
EDIT: After doing some maths, it seems that the above reasoning is flawed, the terms do indeed tend to 0 so it may converge.
10. (Original post by james22)
Without doing any mathematics, I think that should diverge. For large n the value of x can be ignored so n!/(n+x)! is approximately 1 so it must diverge.
Take x = 2, then n!/(n+2)! = 1/(n+1)(n+2) which converges.
Is anyone interested in summing up
where is a positive integer? Ideas how we can do this?
Spoiler:
Show
I'd start off by noting that , which should converge for .
I'll finish it off when I get back if someone hasn't already.
12. subscribing.....around 2 in the night I tend to get bored
13. (Original post by Lord of the Flies)
Spoiler:
Show
If is the number of powers of we can define:
In particular
I don't believe it is possible to express the inverse in the form though.
Sorry I wasn't clear, there are meant to be infinite x's.
As a side question to it, for what values of x is f(x) defined?
14. (Original post by james22)
Sorry I wasn't clear, there are meant to be infinite x's.
As a side question to it, for what values of x is f(x) defined?
Ah. In any case my working is wrong, I stupidly misread the question!...
... which makes the question more difficult, but more interesting! Hm...
Spoiler:
Show
Lemma 1. The cube of a negative number is the negative of its magnitude cubed.
Spoiler:
Show
Suppose that, for some , we have
where is non-negative.
Now, if we take for granted that , we continue by
Therefore, since the result is trivially true for , we get
as desired.
Lemma 2 (Corollary of Existence of n-th roots). Two cubes are equal if and only if the are the same.
Spoiler:
Show
Existence of n-th roots:
If is positive, and , then there exists exactly one positive real number such that .
By Lemma 1, we can always write as
but then the above theorem says that they must be equal.
{**} A solution:
Spoiler:
Show
We want to show that is injective and surjective which would imply that it is bijective.
[1] Suppose that with and at least one of and holds.
Then, we obtain the following two equations.
Using Lemma 2, we see from that , and by substituting into we get
which contradicts our assumption. Therefore, is injective.
[2] We show surjectivity by existence.
Take any . Then, it suffices to show that such that and .
We find that and where by applying Lemmas 1 & 2.
Therefore, is bijective.
Now, its inverse is given by
Nice
The following result could of been quoted to make your answer alot shorter (though it's always good practice to do things straight from the definitions)
Related exercise:
Let
g is said to be a left [or right] inverse of f if respectively
Show that f is surjective iff it has a right inverse
Show that f is injective iff it has a left inverse
Hence a map is bijective iff it has an (left and right) inverse. (an g is said to be an inverse iff g is both a left and right inverse.)
Using this, you only need to verify that your inverse is an inverse .
16. {*} Question:
The polynomial is irreducible over .
i) By completing the square, show that is not irreducible over the set of real numbers.
Hence, derive the Sophie Germain algebraic identity
by starting from the left-hand side.
ii) Evaluate
{**} Required:
Spoiler:
Show
A polynomial is said to be irreducible over a set if it cannot be factored into polynomials with coefficients from the given set.
As an example,
is irreducible over the set of rational numbers denoted by .
17. Okay, I have another question which I'm sure is relatively simple and someone will know (sorry for using the thread without answers, but it's a nice place to ask questions!)
Consider this 2nd order differential equation:
Write this as a system of 1st order equations with appropriate initial conditions.
18. (Original post by Brit_Miller)
Okay, I have another question which I'm sure is relatively simple and someone will know (sorry for using the thread without answers, but it's a nice place to ask questions!)
Consider this 2nd order differential equation:
Write this as a system of 1st order equations with appropriate initial conditions.
Let z=y'.
Also quite easy: How many (real) solutions does have? Knowledge required: GCSE & below.
19. In an office, at various times during the day, the boss gives the secretary a letter to type, each time putting the letter on top of the pile and types it. There are nine letters to be typed during the day, and the boss delivers them in the order 1,2,3,4,5,6,7,8,9. While leaving for lunch, the secretary tells a colleague that letter 8 has already been typed, but says nothing about the morning's typing. The colleague wonders which of the nine letters remain to be typed after lunch and in what order they will be typed. Based on the above information, how many such after-lunch typing orders are possible? (That there are no letters to be typed is on of the possibilities).
No ugrad knowledge required beyond combinations/ permutations.
20. (Original post by electriic_ink)
Let z=y'.
Also quite easy: How many (real) solutions does have? Knowledge required: GCSE & below.
Thanks
(and none surely?)
## Register
Thanks for posting! You just need to create an account in order to submit the post
1. this can't be left blank
2. this can't be left blank
3. this can't be left blank
6 characters or longer with both numbers and letters is safer
4. this can't be left empty
1. Oops, you need to agree to our Ts&Cs to register
Updated: August 12, 2013
TSR Support Team
We have a brilliant team of more than 60 Support Team members looking after discussions on The Student Room, helping to make it a fun, safe and useful place to hang out.
This forum is supported by:
Today on TSR
### Not keeping up in class?
5 reasons why you shouldn't be worried
Poll
Useful resources
### Maths Forum posting guidelines
Not sure where to post? Read here first
### How to use LaTex
Writing equations the easy way
### Study habits of A* students
Top tips from students who have already aced their exams | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9333842992782593, "perplexity": 798.0777133428772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661555.40/warc/CC-MAIN-20160924173741-00250-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://fr.maplesoft.com/support/help/Maple/view.aspx?path=QuantumChemistry/CorrelationEnergy | CorrelationEnergy - Maple Help
Home : Support : Online Help : Toolboxes : Quantum Chemistry : CorrelationEnergy
QuantumChemistry
CorrelationEnergy
compute the correlation energy
Calling Sequence CorrelationEnergy(molecule, method, options)
Parameters
molecule - list of lists; each list has 4 elements, the string of an atom's symbol and atom's x, y, and z coordinates method - (optional) method = name/procedure where name is one of 'HartreeFock' (default), 'DensityFunctional', options - (optional) equation(s) of the form option = value where option is any valid option of the chosen method
Description
• CorrelationEnergy computes the correlation energy, the difference between the exact energy and Hartree-Fock energy.
• The procedure returns the correlation energy as a float.
• The default method is 'HartreeFock' method whose correlation energy is 0.
• The result depends upon the chosen molecule, method, and basis set among other options such as charge, spin, and symmetry.
• Because the methods employ Maple cache tables, the procedure only computes the correlation energy if it has not been previously computed by calling the method directly or indirectly through another property.
Examples
> $\mathrm{with}\left(\mathrm{QuantumChemistry}\right):$
Computation of the correlation energy of the molecule with the MP2 method
>
${\mathrm{molecule}}{≔}\left[\left[{"H"}{,}{0}{,}{0}{,}{0}\right]{,}\left[{"F"}{,}{0}{,}{0}{,}{0.95000000}\right]\right]$ (1)
>
${\mathrm{output_hf}}{≔}{0.}$ (2)
>
${\mathrm{output_hf}}{≔}{-0.01882706}$ (3)
> | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9676927924156189, "perplexity": 2785.812574339581}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943483.86/warc/CC-MAIN-20230320114206-20230320144206-00615.warc.gz"} |
https://infoscience.epfl.ch/record/129889 | Infoscience
Conference paper
# Methodology for Risk Assessment of Part Load Resonance in Francis Turbine Power Plant.
At low flow rate operation, Francis turbines feature a cavitating vortex rope in the draft tube resulting from the swirling flow of the runner outlet. The unsteady pressure field related to the precession of the vortex rope induces plane wave propagating in the entire hydraulic system. The frequency of the vortex rope precession being comprised between 0.2 and 0.4 times the turbine rotational speed, there is a risk of resonance between the hydraulic circuit, the synchronous machine and the turbine itself an acting as excitation source. This paper presents a systematic methodology for the assessment of the resonance risk for a given Francis turbine power plant. The test case investigated is a 1GW 4 Francis turbines power plant. The methodology is based on a transient simulation of the dynamic behavior of the whole power plant considering a 1D model of the hydraulic installation, comprising gallery, surge chamber, penstock, Francis turbine but also mechanical masses, synchronous machines, transformer, grid model, speed and voltage regulators. A stochastic excitation having energy uniformly distributed in the frequency range of interest is taken into account in the draft tube. As the vortex rope volume has a strong influence on the natural frequencies of the hydraulic system, the wave speed in the draft tube is considered as a parameter for the investigation. The transient simulation points out the key excitation frequencies and the draft tube wave speed producing resonance between the vortex rope excitation and the circuit and provide a good evaluation of the impact on power quality. The comparison with scale model tests results allows resonance risk assessment in the early stage of project pre-study. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8163930773735046, "perplexity": 1189.8444042226108}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118477.15/warc/CC-MAIN-20170423031158-00309-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://mathhelpforum.com/algebra/161591-expanding-quadratic-expressions-answers-needed.html#post577739 | 1) Expand and Simplify
(4x-3)(x+5)
2) Prove that
(n+5)^2 - (n+3)^2 = 4(n+4)
3)
a) Expand and simplify (2n+1)^2
b) Prove that the square of any odd number is always 1 more than a multiple of 8.
4) Expand and simplify (x-3)(2x+1)
5)
a) Expand and simplify (x+y)(x-y)
b) Using your answer to part (a). or otherwise, find the exact value of
780^2 - 220^2
2. What have you tried yourself? What do you have problems with?
3. Could you perhaps give a brief step by step guide as to what to do?
I missed alot of work on this and i'm struggling. Thanks.
4. 1) Multiply 1st term in first bracket by 1st term in second bracket
Add to this, the product of the 1st term in the 1st bracket and the 2nd term in the 2nd bracket
Add to this, the product of the 2nd term in the 1st bracket and the 1st term in the 2nd bracket
Add to this, the product of the 2nd term in the 1st bracket and the 2nd term in the 2nd bracket
2) Can you expand the two brackets on the left?
We'll do the others after those.
5. 1)
4x * x = 4x^2
4x * 5 = 20x
-3 * x = -3x
-3 * 5 = -15
= 4x^2 + 20x -3x -15
Correct so far?
6. Yes, you simplify now, taking the part containing: 20x - 3x
7. 8x + 17x - 15?
8. Great! Next number now.
9. How do you expand the two brackets on the left? >_<
10. Just like you did for the first one.
(n+5)^2 - (n+3)^2 = (n+5)(n+5) - (n+3)(n+3)
You'll notice that if you have something of the type:
$(a+b)^2$
It expands in the fom:
$a^2 + 2ab + b^2$
But for the time being, expand it like you did for 1)
11. Whilst doing this, do I ignore the powers?
12. You already took the powers into consideration.
Doesn't
$5^2 = 5\times5 = 25$
$x^2 = x \times x$
Similarly,
$(n+5)^2 = (n+5)(n+5)$
13. (n+5)^2 - (n+3)^2
n x n = n^2
n x 3 = 3n
5 x n = 5n
5 x 3 = 15
n^2 + 3n + 5n + 15
= N^2 + 8n + 15
Did i do it right? :S
14. No, you didn't get my point.
First expand (n+5)^2.
Then expand (n+3)^2.
Lastly, subtract the two expansions.
15. Originally Posted by Roxas
(n+5)^2 - (n+3)^2
n x n = n^2
n x 3 = 3n
5 x n = 5n
5 x 3 = 15
n^2 + 3n + 5n + 15
= N^2 + 8n + 15
Did i do it right? :S
No. If you want to do it by expanding each term, then you need to recall that (a + b)^2 = a^2 + 2ab + b^2. After correctly expanding, you then need to correctly take the difference. It should be clear, for example, that the final answer will NOT include n^2 ....
I suggest you get a copy of the class notes that you missed and review them carefully. Your textbook will also have exmaples that you should carefully review.
Page 1 of 2 12 Last | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884179592132568, "perplexity": 900.8569682156863}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722653.96/warc/CC-MAIN-20161020183842-00429-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://bugreports.qt.io/browse/QTCREATORBUG-23440?gerritReviewStatus=All | Build progress goes to 100% much before build is done
XMLWordPrintable
Details
• Type: Bug
• Status: Closed
• Priority: Not Evaluated
• Resolution: Out of scope
• Affects Version/s: Qt Creator 4.11.0
• Fix Version/s: None
• Component/s:
• Labels:
• Platform/s:
Linux/X11
Description
I upgraded from v4.9.0 to v4.11.0 and now the build progress bar is broken. I build with CMake+Ninja. The problem is that the build progress bar reaches 100% or very near while the build is not even half-way done.
I am not sure what information I can provide that can help you troubleshoot this, but I am happy to provide whatever you ask for. Also, I run a Qt Creator that I build from git, so I can test patches. I can also take a look at the problem myself, but in that case, I would appreciate someone pointing me at good "suspects"
People
• Assignee:
Tobias Hunger
Reporter:
Thomas Sondergaard | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9505889415740967, "perplexity": 3838.2668728783756}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506959.34/warc/CC-MAIN-20200402111815-20200402141815-00389.warc.gz"} |
http://mathhelpforum.com/algebra/84995-finding-two-unknows-one-equation.html | # Math Help - Finding two unknows from one equation
1. ## Finding two unknows from one equation
Dear members
I have been set a task to find n and m (two constants) in the following equation: N=(z^m).(r^n). I have been provided with data for N, z and r. I am assuming that I can plot this data to find m and n, but I am unsure on how to rearrange the equation to achieve this. I would be appreciative of your help.
Thanks
2. $N = z^{m} \cdot r^{n}$
$ln(N) = ln(z^{m} \cdot r^{n}) = m\cdot ln(z) + n\cdot ln(r)$
If we let $ln(z) = x \mbox{ and } ln(r) = y \mbox{ and } ln(N)=c$, then
$\left[ \begin{matrix} m\cdot x_{1} & n\cdot y_{1} \\ m\cdot x_{2} & n\cdot y_{2} \end{matrix}\right] = \left[ \begin{matrix} x_{1} & y_{1} \\ x_{2} & y_{2} \end{matrix}\right] \, \left[ \begin{matrix} m \\ n \end{matrix} \right]$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770449757575989, "perplexity": 448.6036768927798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430457725048.66/warc/CC-MAIN-20150501052205-00095-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/perturbative-expansion-of-beta-function-renormalization.898004/ | # A Perturbative expansion of Beta function - Renormalization
1. Dec 21, 2016
### nigelscott
I am trying to understand the basics of Renormalization. I have read that β encodes the running coupling and can be expanded as a power series as:
β(g) = ∂g/(∂ln(μ)) = β0g3 + β1g5 + ...
However, I don't understand how this is derived.. I assume that the terms correspond to 1 loop, 2 loops etc. Can somebody help me out.
2. Dec 27, 2016
### Greg Bernhardt
Thanks for the thread! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post? The more details the better.
Draft saved Draft deleted
Similar Discussions: Perturbative expansion of Beta function - Renormalization | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8110175132751465, "perplexity": 1608.4216772377715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320215.92/warc/CC-MAIN-20170624031945-20170624051945-00164.warc.gz"} |
http://zbmath.org/?q=an:0616.10025 | # zbMATH — the first resource for mathematics
##### Examples
Geometry Search for the term Geometry in any field. Queries are case-independent. Funct* Wildcard queries are specified by * (e.g. functions, functorial, etc.). Otherwise the search is exact. "Topological group" Phrases (multi-words) should be set in "straight quotation marks". au: Bourbaki & ti: Algebra Search for author and title. The and-operator & is default and can be omitted. Chebyshev | Tschebyscheff The or-operator | allows to search for Chebyshev or Tschebyscheff. "Quasi* map*" py: 1989 The resulting documents have publication year 1989. so: Eur* J* Mat* Soc* cc: 14 Search for publications in a particular source with a Mathematics Subject Classification code (cc) in 14. "Partial diff* eq*" ! elliptic The not-operator ! eliminates all results containing the word elliptic. dt: b & au: Hilbert The document type is set to books; alternatively: j for journal articles, a for book articles. py: 2000-2015 cc: (94A | 11T) Number ranges are accepted. Terms can be grouped within (parentheses). la: chinese Find documents in a given language. ISO 639-1 language codes can also be used.
##### Operators
a & b logic and a | b logic or !ab logic not abc* right wildcard "ab c" phrase (ab c) parentheses
##### Fields
any anywhere an internal document identifier au author, editor ai internal author identifier ti title la language so source ab review, abstract py publication year rv reviewer cc MSC code ut uncontrolled term dt document type (j: journal article; b: book; a: book article)
On the $l$-adic representations associated to Hilbert modular forms. (Sur les représentations $l$-adiques associées aux formes modulaires de Hilbert.) (French) Zbl 0616.10025
Let $\pi ={\otimes }_{v}{\pi }_{v}$ be a cuspidal automorphic representation of ${\text{GL}}_{2}\left({F}_{𝔸}\right)$ where ${F}_{𝔸}$ is the ring of adeles of a totally real algebraic number field $F$ of degree $d$ over $ℚ$, of the same type as representations corresponding to Hilbert modular forms of weight $\left({k}_{1},···,{k}_{d}\right)$, i.e. whose local components ${\pi }_{v}$ for each of the $d$ Archimedean places $v$ of $F$ are essentially square integrable representations of ${\text{GL}}_{2}\left(ℝ\right)$ occurring in the induced representation $\text{Ind}\left(\mu ,\nu \right)$ (under unitary induction) with characters $\mu$, $\nu$ of ${ℝ}^{*}$ given by $\mu \left(t\right):={|t|}^{\left(k-w-1\right)/2}{\left(\text{sgn}\phantom{\rule{0.166667em}{0ex}}t\right)}^{k}$, $\nu \left(t\right):={|t|}^{\left(-k-w+1\right)/2}$ for integral $k\ge 2$ and $w\equiv k\phantom{\rule{4.44443pt}{0ex}}\left(mod\phantom{\rule{0.277778em}{0ex}}2\right)$, all depending on $v$. For $d$ even, ${\pi }_{v}$ is taken to be a special or cuspidal representation of ${\text{GL}}_{2}\left({F}_{v}\right)$, for at least one non-Archimedean place $v$ of $F$. Let $\overline{F}$ be an algebraic closure of $F$.
The main theorem proved is the following: there exists an algebraic number field $E$ depending on the given $\pi$ and a strictly compatible system $\left\{{\sigma }^{\lambda }\right\}$ of continuous 2-dimensional ${E}_{\lambda }$-adic representations of $\text{Gal}\left(\overline{F}/F\right)$ such that for every non-Archimedean place $v$ of $F$ and $\lambda$ of $E$ with residue characteristic different from that of $v$, the restriction ${\sigma }_{v}^{\lambda }$ of ${\sigma }^{\lambda }$ to the local Weil group $W{F}_{v}$ is equivalent to ${\sigma }^{\lambda }\left({\pi }_{v}\right)·$
What is new here is that the author determines ${\sigma }_{v}^{\lambda }$ for every non-Archimedean place $v$ of $F$. Moreover, according to Ribet, ${\sigma }^{\lambda }$ turns out to be irreducible and as such, is characterized entirely.
First, for a weaker version of the main theorem, the author gives a (“geometric”) proof and then he uses ”base change for $\text{GL}\left(2\right)$” to deduce the main theorem. As a corollary of the main theorem for $d=1$ and weight $k=2$, the author shows an affirmative answer to a conjecture on Weil curves over $ℚ$.
Reviewer: S. Raghavan
##### MSC:
11F70 Representation-theoretic methods in automorphic theory 11F67 Special values of automorphic $L$-series, etc 11G25 Varieties over finite and local fields 11F41 Hilbert modular forms and surfaces | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 54, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8635106086730957, "perplexity": 1171.8482941028685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999653202/warc/CC-MAIN-20140305060733-00003-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://www.askamathematician.com/2011/08/q-why-does-light-choose-the-path-of-least-time/ | # Q: Why does light choose the “path of least time”?
Physicist: Light travels at different speeds in different materials. When you shine a beam of light from one material into another (like from air to water) it bends in such a way that the path it takes from one point to another requires the least possible time.
Taking a straight line means traveling through a lot of the “slow material”. Traveling through lots of “fast material” to make the path through the slow material as short as possible means taking a longer path overall (and taking more time). The path of “least time” is in between.
This should come across as deeply spooky. A particle that somehow “scouts the future” and then picks the fastest path to get where it’s going seems impossible.
And to be fair: it is. The crux of the problem is (as with damn near everything) wave/particle-ness. Particles can’t magically know what the shortest path will be, but waves find it “accidentally” every time.
First, check out the path that the “principle of least time” carves out. What follows is math, which some people dig. If you skip over the block of calculations, you won’t really miss anything.
The time, T, that it takes to traverse a path is D/V, where D is the length of the path and V is the speed.
$T = \frac{D_1}{V_1}+\frac{D_2}{V_2} = \frac{1}{V_1}\sqrt{a^2+x^2}+\frac{1}{V_2}\sqrt{b^2+(c-x)^2}$
The picture for the derivation below. In the top material the wave speed is “V1” and in the bottom “V2”. With a little calculus, by sliding x back and forth you can find the “minimum-time path”.
By changing x you change the path, and the amount of time T it takes to move along that path. Calculus wonks may know that to find an “extremum” (either a maximum or a minimum), all you have to do is find the derivative, set it to zero, and solve. With any luck, those same wonks will be forgiving if I just declare that the following derivation finds the minimum time (and not a maximum or something) instead of proving that it’s a minimum.
$\begin{array}{ll}0=\frac{dT}{dx}\\\Leftrightarrow 0=\frac{d}{dx}\left[\frac{1}{V_1}\sqrt{a^2+x^2}+\frac{1}{V_2}\sqrt{b^2+(c-x)^2}\right]\\\Rightarrow 0=\frac{1}{V_1}\frac{1}{2\sqrt{a^2+x^2}}(2x)+\frac{1}{V_2}\frac{1}{2\sqrt{b^2+(c-x)^2}}(-2(c-x))\\\Leftrightarrow 0=\frac{1}{V_1}\frac{x}{\sqrt{a^2+x^2}}-\frac{1}{V_2}\frac{c-x}{\sqrt{b^2+(c-x)^2}}\\\Leftrightarrow \frac{1}{V_2}\frac{c-x}{\sqrt{b^2+(c-x)^2}}=\frac{1}{V_1}\frac{x}{\sqrt{a^2+x^2}}\\\Leftrightarrow \frac{1}{V_2}\frac{c-x}{D_2}=\frac{1}{V_1}\frac{x}{D_1}\\\Leftrightarrow \frac{1}{V_2}\sin{(\theta_2)}=\frac{1}{V_1}\sin{(\theta_1)}\end{array}$
The angles for the last step above. This is also “Snell’s law”.
The exact value of x isn’t particularly useful. What is useful are those angles. The statement “$\frac{1}{V_2}\sin{(\theta_2)}=\frac{1}{V_1}\sin{(\theta_1)}$” is Snell’s law.
Snell’s law should look familiar to anyone whose used to talking about waves going from one material to another. It describes, for example, the bending of light as it crosses between air and water.
So, the law of “least propagation time” is nothing more than a different, and far more difficult, way of stating “Snell’s law”. And again, if you’re talking about a particle, it’s hard not to think that the particle is testing each path in advance, and then taking the quickest one.
However, you can derive the same result by thinking about how waves propagate. Waves (light, sound, water, whatever) propagate perpendicular to their wave crests. Take a second: picture an ocean wave.
As one side of the crest enters a different material it changes speed. When different parts of the wave are traveling at different speeds the wave front as a whole changes direction.
Top: a section of the wave starts to hit the boundary between two materials. Middle and Bottom: In the second material the wave moves slower. Since one side of the wave is moving faster than the other the wave front “swings around” into a new direction.
A good (but not quite accurate) way to picture the situation is to think of a car where the wheels on one side spin faster than the wheels on the other. Naturally, the car’ll turn to the side.
Left: the important angles, and where they show up in the triangles. Right: the angles and lengths involved in the math below.
You can be a bit more exact about this. The diagrams above describe a piece of the wave crest from the moment when one side hits the boundary, to the time the other side hits. Call that time “T” (why not?).
The distance that the top end of the piece-of-crest travels is D1 = V1T, and the distance the bottom tip travels is D2 = V2T. Now, using the definition of sine: $\sin{(\theta_1)}=\frac{D_1}{L}$ and $\sin{(\theta_2)}=\frac{D_2}{L}$.
Combining these you get:
$\begin{array}{ll}\frac{1}{D_1}\sin{(\theta_1)}=\frac{1}{L} = \frac{1}{D_2}\sin{(\theta_2)}\\\Rightarrow\frac{1}{V_1T}\sin{(\theta_1)}=\frac{1}{V_2T}\sin{(\theta_2)}\\\Rightarrow\frac{1}{V_1}\sin{(\theta_1)}=\frac{1}{V_2}\sin{(\theta_2)}\end{array}$
Holy crap! Snell’s law again! Having the same result means that, in terms of behavior, the two approaches are indistinguishable. So, instead of a spooky particle scouting every path looking for the quickest one, you have a wave that’s just doing it’s thing.
The principle of least time is a cool idea, and actually makes the math behind a lot of more complicated situations easier, but at it’s root is waviness.
This entry was posted in -- By the Physicist, Equations, Geometry, Logic, Physics. Bookmark the permalink.
### 15 Responses to Q: Why does light choose the “path of least time”?
1. Rick says:
I have heard something like this explained as a principle of least action, where “action” refers to a highly abstract physical notion.
http://en.wikipedia.org/wiki/Principle_of_least_action
2. The Physicist says:
That’s the “more complicated situation” I was thinking of.
3. Rick R says:
I made a flash animation showing path selection via Fenman’s arrows method.
5. Dan says:
Thank you so much posting this. I knew about the principle of least time and I knew about Snell’s law, but I never made the connection before.
6. Stephen says:
How does this prove that one of the theories is more correct than the other?
Just because we don’t know how a particle would be able to take every path simultaneously, we also don’t know why the wave crest would actually turn. Because this isn’t a water wave, and the wave crest itself isn’t physical – it’s the imagined barrier of all of the photons on the front of the light beam. We know of no reason why these photons would link together.
The math works for both, but the only on with any actual experimental evidence that supports it is the path of least action – the evidence being the double slit experiment.
The path of least time sounds pretty unbelievable, but so do many things we now know to be true in modern physics.
7. John Cabbage says:
I have always disliked the “path of least time” argument, because it doesn’t easily relate to any physical principal.
As you showed here, Snell’s Law is easily derived from the wave model of light. The “path of least time” thing, however, frequently feels like more of a happy coincidence than a physical law.
As a counter example, consider a coherent laser beam that I fire into a material with a VERY high index of refraction (so it causes the laser to move extremely slowly). It would always make the time shorter if the laser beam somehow bent around the material instead of going through it. But it doesn’t, it flies straight ahead because that’s what lasers do.
Thus, the path of least time has always seemed like an odd condition that just happens to work in certain constructed circumstances.
8. Angelo says:
This might be a silly question, but why has the size of the wave front gotten bigger (stretched) as light traveled from one medium to the other?
9. Robin Smith says:
The principle of least time at first seems strange to those accustomed to living in a three-dimensional Euclidean space whom usually experience only small relative velocities between objects in their surroundings (us), and thus don’t have a day-to-day experience of the four-dimensional space-time in which they actually live.
However, a photon very much lives in the four-dimensional Minowski space-time that we don’t usually experience, because it’s moving at the speed of light. As such, you would expect even slight disturbances in this fourth (time) dimension to have a real effect on the path of a photon through its four-dimensional Minowski space. Sending the photon along the shortest Euclidean path may be sort of be like ‘forcing’ the photon to go up a four-dimensional hill instead of around it.
This to my mind makes just as much intuitive sense as the way a planet ‘knows’ which path to fall along as it orbits around another body (which has caused the spacetime around it to become curved), without having to actually try all of the possible paths, in order to minimise its path through that curved space. In this case, we are noticing the curvature of space more than a curvature of time. So why should not a photon take the path that it does between two points in order to minimise the time that it takes? When you stick a piece of glass in a photon’s path, you’re altering its speed dramatically (as projected onto our reference frame). You’re altering significantly the space through which it travels, particularly in the photon’s time dimension.
That’s the analogy i like to make.
10. Anders says:
Hi,
thank you for an interesting post, which I have just stumbled upon.
I have a question ( – I realise that the post is a year old…):
In your wave-argument you have a drawing of a wave entering another material. This gives rise to 2 triangles.
1. The upper triangle is simply defined to be a “right triangle”.
2. The bottom triangle is drawn as also forming a 90 degree angle. Is this per assumption of the refraction, or can it somehow be inferred (without reference to Snell’s law, or the Principle of least action…)?
11. c walsh says:
Could anyone please give me an expression for x?
referring to the first diagram
n=v1/v2
This means solving for x in
x^2/(a^2 +x^2)=(n^2*(c-x)^2)/((b^2 +(c-x)^2))
12. rob reede says:
I really, really like that you give the two approaches back to back, for contrast. I am not a physicist—–just a teacher. I like to see things put in a manner described by Denzel Washington, in “Philadelphia” : “……I need you to explain the whole thing to me, as if I were a six yr old” No gross assumptions.
If you can clear up, for me, WHY you introduced the idea of dT/dx, right after drawing the triangles , it would help. I think I know, but still it would be good to see it. I understand what the derivative does, but how it plays into the greater derivaTION is the burning interest to me.
Sorry if I sound too elementary.
Thanks !
13. The Physicist says:
@rod reede
Thanks!
There are a lot of paths, and the only difference between them is the parameter x, the location where the path crosses through the surface. By setting $\frac{dT}{dx}=0$ we figure out what values of x correspond with extreme values of T. Those x’s tell us which paths that takes the least or greatest amount of time. There’s no longest path, since you can just slide x to the right or left forever (to create arbitrarily long paths), so the one critical value of x is a minimum.
Alternatively, you could do it right and use a derivative test. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 7, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8242647647857666, "perplexity": 728.8085886917725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806736.55/warc/CC-MAIN-20171123050243-20171123070243-00767.warc.gz"} |
https://math.stackexchange.com/questions/3428259/nbg-set-theory-and-the-axiom-regularity | # NBG set theory and the axiom regularity
In E. Mendelson's book 'Introduction to Mathematical Logic' he develops NBG set theory. I believe it's well-known enough that a description here is not needed. Althought it could not hurt to summarise the main points:
• The objects of NBG are classes and sets are defined as classes which are members of another class,
• NBG is a conservative extension of ZF,
• NBG and ZF are equiconsistent.
When Mendelson formulates the axiom of regularity though he essentially states it as "Every class is well-founded". Why does he not formulate it as "Every set is well-founded"? They seem (unless I am mistaken) to be equivalent in $$\mathbf{NBG}+\mathbf{AC}$$ but I'm unsure otherwise.
You're right, they are equivalent. Using the axiom of choice (in particular, dependent choice), you can show that the axiom of regularity holds if and only if there are no infinite descending sequences under the $$\in$$ relation (I can provide a proof if desired). If regularity did not hold for classes, then we would have a sequence of classes $$\ldots x_2\in x_1\in x_0$$ However, by Mendelson's definition of a set, all but $$x_0$$ are sets, so we have the sequence of sets $$\ldots x_3\in x_2\in x_1$$ which means regularity does not hold for sets. Thus, if all sets are well-founded, then all classes must be well-founded. I'm guessing he formulated the axiom of regularity in terms of classes simply because the axioms are meant to establish the properties of classes, and consequently, sets as well.
Suppose all sets are well-founded and let $$C$$ be a nonempty class. Let $$x\in C$$. If $$x\cap C=\emptyset$$, then $$C$$ is well-founded. If $$x\cap C\neq\emptyset$$, then $$TC(x)\cap C\neq\emptyset$$ where $$TC(x)$$ denotes the transitive closure of $$x$$ (the fact that the transitive closure of $$x$$ exists and is a set is true in NBG without the axiom of regularity, as can be seen in my proof here). Since $$TC(x)\cap C\subset TC(x)$$ and $$TC(x)$$ is a set, so is $$TC(x)\cap C$$. So by assumption, there exists $$y\in TC(x)\cap C$$ such that $$y\cap TC(x)\cap C=\emptyset$$. Assume $$z\in y\cap C$$. Then, since $$z\in y\in TC(x)$$ and $$TC(x)$$ is transitive, $$z\in TC(x)$$ so that $$z\in y\cap TC(x)\cap C$$, a contradiction. So, $$y\cap C=\emptyset$$ and hence, $$C$$ is well-founded. Thus, all classes are well-founded if and only if all sets are well-founded.
• @Anonymous The argument is that if $C$ is a nonempty class and $y\in C,$ then either $y$ is $\in$-minimal in $C$, in which case we're done, or $trcl(y)\cap C$ is a nonempty set whose $\in$-minimal element is necessarily $\in$-minimal in $C$ since $trcl(y)$ is transitive. – spaceisdarkgreen Nov 10 '19 at 6:41
• @Jean-PierredeVilliers I made an edit to my answer which builds on the argument spaceisdarkgreen mentioned. Note that $trcl(y)$ is the smallest transitive set that containing $y$. It is not the intersection of all transitive sets in $C$ containing $y$ since $C$ need not even contain any transitive sets. I also linked to a rigorous proof in my edit for the existence of the transitive closure. The proof I linked to does not use any axiom of choice or axiom of regularity, and is valid in the context of NBG. – Anonymous Nov 10 '19 at 17:32
• @Jean-PierredeVilliers I think Anonymous has covered it in their edit. The $trcl(y)$ is just the smallest transitive set with $y$ as a subset, irrespective to being in $C,$ and it won't generally be in $C$ (and doesn't need to be for the argument to work). – spaceisdarkgreen Nov 10 '19 at 17:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9768761992454529, "perplexity": 139.4524393730658}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00123.warc.gz"} |
http://en.wikipedia.org/wiki/Close-packing_of_equal_spheres | # Close-packing of equal spheres
Regular arrangement of equal spheres in a plane changing to an irregular arrangement of unequal spheres (bubbles).
hcp and fcc close-packing of spheres
In geometry, close-packing of equal spheres is a dense arrangement of congruent spheres in an infinite, regular arrangement (or lattice). Carl Friedrich Gauss proved that the highest average density – that is, the greatest fraction of space occupied by spheres – that can be achieved by a lattice packing is
$\frac{\pi}{3\sqrt 2} \simeq 0.74048.$
The same packing density can also be achieved by alternate stackings of the same close-packed planes of spheres, including structures that are aperiodic in the stacking direction. The Kepler conjecture states that this is the highest density that can be achieved by any arrangement of spheres, either regular or irregular. This conjecture is now widely considered proven by T. C. Hales.[1][2]
Many crystal structures are based on a close-packing of a single kind of atom, or a close-packing of large ions with smaller ions filling the spaces between them. The cubic and hexagonal arrangements are very close to one another in energy, and it may be difficult to predict which form will be preferred from first principles.
## fcc and hcp lattices
There are two simple regular lattices that achieve this highest average density. They are called face-centered cubic (fcc) (also called cubic close packed) and hexagonal close-packed (hcp), based on their symmetry. Both are based upon sheets of spheres arranged at the vertices of a triangular tiling; they differ in how the sheets are stacked upon one another. The fcc lattice is also known to mathematicians as that generated by the A3 root system.[3]
### Cannonball problem
Cannonballs piled on a triangular (front) and rectangular (back) base, both fcc lattices.
The problem of close-packing of spheres was first mathematically analyzed by Thomas Harriot around 1587, after a question on piling cannonballs on ships was posed to him by Sir Walter Raleigh on their expedition to America.[4] Cannonballs were usually piled in a rectangular or triangular wooden frame, forming a three-sided or four-sided pyramid. Both arrangements produce a face-centered cubic lattice – with different orientation to the ground.
### Positioning and spacing
In both the fcc and hcp arrangements each sphere has twelve neighbors. For every sphere there is one gap surrounded by six spheres (octahedral) and two smaller gaps surrounded by four spheres (tetrahedral). The distances to the centers of these gaps from the centers of the surrounding spheres is $\scriptstyle \sqrt{\frac{3}{2}}$ for the tetrahedral, and $\scriptstyle \sqrt2$ for the octahedral, when the sphere radius is 1.
Relative to a reference layer with positioning A, two more positionings B and C are possible. Every sequence of A, B, and C without immediate repetition of the same one is possible and gives an equally dense packing for spheres of a given radius.
The most regular ones are:
• fcc = ABCABCA (every third layer is the same)
• hcp = ABABABA (every other layer is the same).
In close-packing, the center-to-center spacing of spheres in the xy plane is a simple honeycomb-like tessellation with a pitch (distance between sphere centers) of one sphere diameter. The distance between sphere centers, projected on the z (vertical) axis, is:
$\text{pitch}_Z = \sqrt{6} \cdot {d\over 3}\approx0.81649658 d,$
where d is the diameter of a sphere; this follows from the tetrahedral arrangement of close-packed spheres.
The coordination number of hcp and fcc is 12 and its atomic packing factor (APF) is the number mentioned above, 0.74.
Comparison between hcp and fcc
Figure 1 – The hcp lattice (left) and the fcc lattice (right). The outline of each respective Bravais lattice is shown in red. The letters indicate which layers are the same. There are two "A" layers in the hcp matrix, where all the spheres are in the same position. All three layers in the fcc stack are different. Note the fcc stacking may be converted to the hcp stacking by translation of the upper-most sphere, as shown by the dashed outline.
Figure 2 – Shown here is a stack of eleven spheres of the hcp lattice illustrated in Figure 1. The hcp stack differs from the top 3 tiers of the fcc stack shown in Figure 3 only in the lowest tier; it can be modified to fcc by an appropriate rotation or translation. Figure 3 – Thomas Harriot, circa 1585, first pondered the mathematics of the cannonball arrangement or cannonball stack, which has an fcc lattice. Note how adjacent balls along each edge of the regular tetrahedron enclosing the stack are all in direct contact with one another. This does not occur in an hcp lattice, as shown in Figure 2.
## Lattice generation
When forming any sphere-packing lattice, the first fact to notice is that whenever two spheres touch a straight line may be drawn from the center of one sphere to the center of the other intersecting the point of contact. The distance between the centers along the shortest path namely that straight line will therefore be r1 + r2 where r1 is the radius of the first sphere and r2 is the radius of the second. In close packing all of the spheres share a common radius, r. Therefore two centers would simply have a distance 2r.
### Simple hcp lattice
An animation of close-packing lattice generation. Note: If a third layer (not shown) is directly over the first layer then the HCP lattice is built, if the third layer is placed over holes in the first layer then the FCC lattice is created
To form an A-B-A-B-... hexagonal close packing of spheres, the coordinate points of the lattice will be the spheres' centers. Suppose, the goal is to fill a box with spheres according to hcp. The box would be placed on the x-y-z coordinate space.
First form a row of spheres. The centers will all lie on a straight line. Their x-coordinate will vary by 2r since the distance between each center if the spheres are touching is 2r. The y-coordinate and z-coordinate will be the same. For simplicity, say that the balls are the first row and that their y- and z-coordinates are simply r, so that their surfaces rest on the zero-planes. Coordinates of the centers of the first row will look like (2rrr), (4rrr), (6r ,rr), (8r ,rr), ... .
Now, form the next row of spheres. Again, the centers will all lie on a straight line with x-coordinate differences of 2r, but there will be a shift of distance r in the x-direction so that the center of every sphere in this row aligns with the x-coordinate of where two spheres touch in the first row. This allows the spheres of the new row to slide in closer to the first row until all spheres in the new row are touching two spheres of the first row. Since the new spheres touch two spheres, their centers form an equilateral triangle with those two neighbors' centers. The side lengths are all 2r, so the height or y-coordinate difference between the rows is $\scriptstyle\sqrt{3}r$. Thus, this row will have coordinates like this:
$\left(r, r + \sqrt{3}r, r\right),\ \left(3r, r + \sqrt{3}r, r\right),\ \left(5r, r + \sqrt{3}r, r\right),\ \left(7r, r + \sqrt{3}r, r\right), \dots.$
The first sphere of this row only touches one sphere in the original row, but its location follows suit with the rest of the row.
The next row follows this pattern of shifting the x-coordinate by r and the y-coordinate by $\scriptstyle\sqrt{3}$. Add rows until reaching the x and y maximum borders of the box.
In an A-B-A-B-... stacking pattern, the odd numbered planes of spheres will have exactly the same coordinates save for a pitch difference in the z-coordinates and the even numbered planes of spheres will share the same x- and y-coordinates. Both types of planes are formed using the pattern mentioned above, but the starting place for the first row's first sphere will be different.
Using the plane described precisely above as plane #1, the A plane, place a sphere on top of this plane so that it lies touching three spheres in the A-plane. The three spheres are all already touching each other, forming an equilateral triangle, and since they all touch the new sphere, the four centers form a regular tetrahedron.[5] All of the sides are equal to 2r because all of the sides are formed by two spheres touching. The height of which or the z-coordinate difference between the two "planes" is $\scriptstyle\sqrt{6}r2/3$. This, combined with the offsets in the x and y-coordinates gives the centers of the first row in the B plane:
$\left(r, r + \frac{\sqrt{3}r}{3}, r + \frac{\sqrt{6}r2}{3}\right),\ \left(3r, r + \frac{\sqrt{3}r}{3}, r + \frac{\sqrt{6}r2}{3}\right),\ \left(5r, r + \frac{\sqrt{3}r}{3}, r + \frac{\sqrt{6}r2}{3}\right),\ \left(7r, r + \frac{\sqrt{3}r}{3}, r + \frac{\sqrt{6}r2}{3}\right), \dots.$
The second row's coordinates follow the pattern first described above and are:
$\left(2r, r + \frac{4\sqrt{3}r}{3}, r + \frac{\sqrt{6}r2}{3}\right),\ \left(4r, r + \frac{4\sqrt{3}r}{3}, r + \frac{\sqrt{6}r2}{3}\right),\ \left(6r, r + \frac{4\sqrt{3}r}{3}, r + \frac{\sqrt{6}r2}{3}\right),\ \left(8r,r + \frac{4\sqrt{3}r}{3}, r + \frac{\sqrt{6}r2}{3}\right),\dots.$
The difference to the next plane, the A plane, is again $\scriptstyle \sqrt{6}r2/3$ in the z-direction and a shift in the x and y to match those x- and y-coordinates of the first A plane.[6]
In general, the coordinates of sphere centers can be written as:
$\begin{bmatrix} 2i + ((j\ +\ k)\ \bmod{2})\\ \sqrt{3}\left[j + \frac{1}{3}(k\ \bmod{2})\right]\\ \frac{2\sqrt{6}}{3}k\\ \end{bmatrix}r$
where $i$, $j$ and $k$ are indices starting at $0$ for the $x$, $y$ and $z$ coordinates.
## Miller indices
Main article: Miller index
Miller–Bravais index for hcp lattice
Crystallographic features of hcp systems, such as vectors and atomic plane families can be described using a four-value Miller index notation ( hkil ) in which the third index i denotes a convenient but degenerate component which is equal to −h − k. The h, i and k index directions are separated by 120°, and are thus not orthogonal; the l component is mutually perpendicular to the h, i and k index directions.
## Filling the remaining space
The FCC and HCP packings are the densest known packings of equal spheres. Denser sphere packings are known, but they involve unequal sphere packing. A packing density of 1, filling space completely, requires non-spherical shapes, such as honeycombs.
Replacing each contact point between two spheres with an edge connecting the centers of the touching spheres produces tetrahedrons and octahedrons of equal edge lengths. The FCC arrangement produces the tetrahedral-octahedral honeycomb. The HCP arrangement produces the gyrated tetrahedral-octahedral honeycomb. If, instead, every sphere is augmented with the points in space that are closer to it than to any other sphere, the duals of these honeycombs are produced: the rhombic dodecahedral honeycomb for FCC, and the trapezo-rhombic dodecahedral honeycomb for HCP.
Spherical bubbles in soapy water in a FCC or HCP arrangement, when the water in the gaps between the bubbles drains out, also approach the rhombic dodecahedral honeycomb or trapezo-rhombic dodecahedral honeycomb. However, such FCC of HCP foams of very small liquid content are unstable, as they do not satisfy Plateau's laws. The Kelvin foam an the Weaire Phelan foam are more stable, having smaller interfacial energy in the limit of a very small liquid content. [7] | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.844042956829071, "perplexity": 752.1928931778668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430452189638.13/warc/CC-MAIN-20150501034949-00063-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://edurev.in/studytube/Dynamical-systems-4/716d8441-8219-4da6-ad76-b17a91b97a24_t | Dynamical systems - 4
# Dynamical systems - 4 Notes | Study Physics for IIT JAM, UGC - NET, CSIR NET - Physics
## Document Description: Dynamical systems - 4 for Physics 2022 is part of Physics for IIT JAM, UGC - NET, CSIR NET preparation. The notes and questions for Dynamical systems - 4 have been prepared according to the Physics exam syllabus. Information about Dynamical systems - 4 covers topics like and Dynamical systems - 4 Example, for Physics 2022 Exam. Find important definitions, questions, notes, meanings, examples, exercises and tests below for Dynamical systems - 4.
Introduction of Dynamical systems - 4 in English is available as part of our Physics for IIT JAM, UGC - NET, CSIR NET for Physics & Dynamical systems - 4 in Hindi for Physics for IIT JAM, UGC - NET, CSIR NET course. Download more important topics related with notes, lectures and mock test series for Physics Exam by signing up for free. Physics: Dynamical systems - 4 Notes | Study Physics for IIT JAM, UGC - NET, CSIR NET - Physics
1 Crore+ students have signed up on EduRev. Have you?
Centres and foci
Next special case we consider is the dynamical system of the form
Fig. 8.9. Phase portrait for dynamical system (8.20). Blue line represents unstable manifold, red line represents stable manifold.
(8.21)
Matrix of this system is
(8.22)
System (8.21) is little trickier to solve. Let us switch to polar coordinate system by usual transformation
x = r cos θ; y = r sin θ;
where r = r(t) and θ = θ(t). Inverse transformation reads
These relations can be used to find
Now we use (8.21) to derive corresponding equations for r and θ:
We can see that dynamical system (8.21) in polar coordinates decouples to two independent equations for coordinates r and θ,
(8.23)
First we solve equation for r. Let us write it in the form
which integrates to
log r = α t + log C
where the integration constant has been written as a logarithm (see footnote on page 162). Exponentiating the last equation we arrive at
Obviously, at time t = 0 we have r(0) = C and so we write the solution in the form
Next we solve equation for θ. This is trivial since we have
which integrates to
where the integration constant has been denoted by θ0 and represents the value of θ at t = 0. Summa summarum, solution of system (8.23) acquires the form
(8.24)
Hence, solution of original system (8.21) in the Cartesian coordinates reads
(8.25)
Suppose that α = 0 so that
Clearly, this represents motion at constant angular velocity β and constant radius r0 and therefore the phase trajectories are circles of radius r0. If
α ≠ 0, the radius of the "circle" will be
and hence the trajectory will be a spiral. If α > 0, the radius will increase exponentially and the spiral will tend to in nity. If, on the other hand, α < 0, the radius will decrease exponentially and the phase tra jectories will spiral towards the origin. All cases are plotted in gure 8.10 by Mathematica commands
and can be classi ed as follows:
• α = 0 Critical point is called centre. Tra jectories are circles centred at the origin.
• α > 0 Critical point is called unstable focus, tra jectories are spirals escaping to in nity.
• α < 0 Critical point is called stable focus, tra jectories are spirals tending to the origin.
Parameter β has the meaning of angular velocity. If it is zero, spirals become straight lines and dynamical system reduces to previous case (8.17). If it is non-zero, its sign determines the sense of rotation: tra jectories orbit the origin in a clockwise sense for β > 0 and in a counter-clockwise sense for β < 0.
Let us now analyse critical points of system (8.21) in terms of eigenvalues of matrix (8.22)
We can use Mathematica to nd the eigenvalues and eigenvectors of matrix (8.22) by
which shows that this matrix has two eigenvalues
with eigenvectors
In other words, eigenvalues and eigenvectors of matrix J satisfy relations
The rst observation is that the eigenvectors are complex and hence there are no neither stable nor unstable manifolds, i.e. there is no real direction which is mapped to the same direction. The only exception is when β = 0 since in this case dynamical system (8.21) reduces to (8.17) and the eigenvectors become real.
Second, eigenvalues λ1,2 are mutually complex conjugated (as well as the eigenvectors),
Fig. 8.10. Classi cation of critical points for the system (8.21): a, b) centre, c) unstable focus, d) stable focus.
where the bar denotes the complex conjugation. Hence, even if the dynamical system is not of the form (8.21), we can conclude, that if the matrix J has two complex conjugated eigenvalues
the critical point is stable/unstable focus or a centre, depending on the values of α and β as classi ed above.
Example. Consider dynamical system
This system is not of the form (8.21) but we can apply the criterion based on the analysis of eigenvalues. In Mathematica we type
where we have used Expand in order to simplify the expression for eigenvectors (try this code without Expand). We have found two eigenvalues
which are mutually complex conjugated. In this case, parameters α and β are
Parameter α is positive and so the critical point is an unstable focus. Tra jectories of dynamical system considered:
Another example is the system
Eigenvalues are found by
Hence, now the eigenvalues are
which means that
Since α = 0, critical point is a centre rather than focus. Tra jectories of this dynamical system are the following:
General case
In the previous two sections we studied two special cases of planar linear dynamical systems given by matrices
However, we have seen that the analysis can be performed using the eigenvalues of these matrices. Now we consider general linear planar dynamical system
(8.26)
Let us find the eigenvalues and eigenvectors of this general matrix. Recall that the determinant of matrix J is
The trace of the matrix is de ned as a sum of its diagonal elements, i.e.
Eigenvalues λ are defined by equation
where e is an eigenvector. The last equation can be rewritten in the form
where I is the unit matrix 2 x 2 so that
This equation is a homogeneous system of linear equations which has non-trivial solutions only if the determinant of the system is zero:
Expanding the brackets we arrive at
or, equivalently
This is a quadratic equation for λ and its solutions are
(8.27)
Now we can summarize the classification of critical points as follows.
Moreover, if the real parts of eigenvalues λ1,2 are non-zero, critical point is called hyperbolic, otherwise it is called non-hyperbolic.
The document Dynamical systems - 4 Notes | Study Physics for IIT JAM, UGC - NET, CSIR NET - Physics is a part of the Physics Course Physics for IIT JAM, UGC - NET, CSIR NET.
All you need of Physics at this link: Physics
## Physics for IIT JAM, UGC - NET, CSIR NET
159 docs
Use Code STAYHOME200 and get INR 200 additional OFF
## Physics for IIT JAM, UGC - NET, CSIR NET
159 docs
### Up next
Track your progress, build streaks, highlight & save important lessons and more!
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
,
; | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9424208402633667, "perplexity": 1438.9124846636266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.70/warc/CC-MAIN-20221202215443-20221203005443-00205.warc.gz"} |
http://math.stackexchange.com/questions/51725/when-trying-to-calculate-arc-length-what-is-the-easiest-way-to-approach-the-d | # When trying to calculate arc length, what is the easiest way to approach the $(dy/dx)^2$ portion?
When trying to calculate arc length, what is the easiest way to approach the $(dy/dx)^2$ portion?
If I have: $$x = \frac{1}{3}\sqrt{y}(y-3),\qquad 1\leq y\leq 9;$$
I take the derivative of the function and get $\frac{1}{2}y^{1/2} - \frac{1}{2}y^{-1/2}$.
Next I have to square the derivative and I got $$\frac{1}{4}y^{1/4} + \frac{1}{2} + \frac{1}{4}y^{-1}$$ after adding the 1 from the formula (for arc length) to it.
Now to condense everything into the formula up to that point I would have:
$$L = \int_1^9 \sqrt{ \frac{1}{4}y^{1/4} + \frac{1}{2} + \frac{1}{4}y^{-1}}$$
Now in order to get rid of that radical I would have to get some sort of perfect square but the trouble is sometimes it's difficult to see it right away, and I don't really see it in this one. Is there a better way to go about these problems other than just "looking" at it and trying to figure it out?a
-
As far as I can tell, you've (i) computed the derivative incorrectly; and you've (ii) squared incorrectly. – Arturo Magidin Jul 16 '11 at 3:43
@Arturo - I checked both parts against the solutions manual and they are correct. Actually I lied. Only the derivative is correct. Edit: Ok yeah I'm dumb, I squared incorrectly and I think that threw the problem off for me the rest of the way. I'll try it again. – Ryan Jul 16 '11 at 3:48
@Ryan: Yes, what is the square of $y^{1/2}$? And after you get it right, note that adding $1$ just changes the sign of the middle term. Does this suggest what your new expression might be the square of? – André Nicolas Jul 16 '11 at 3:53
@Ryan: I posted an answer assuming there is a typo in the title and in the first sentence. Instead of $dy/dx$ should be $dx/dy$, and that you want to evaluate $$\int_{1}^{9}\sqrt{1+\left( \frac{dx}{dy}\right) ^{2}}dy$$, because the derivative $dy/dx$ and the corresponding integral would become too complicated. – Américo Tavares Jul 16 '11 at 14:34
@Ryan: I was about to leave when I typed the comment, so I was wrong about (i). I had misinterpreted your original function, and realized the mistake when I edited the question; only to then be too rushed to fix the comment. Sorry if I misdirected you on that score. – Arturo Magidin Jul 17 '11 at 3:56
Updated. I assume there is a typo in the title and in the first sentence of the question, as I commented, and that you want to evaluate $\displaystyle\int_{1}^{9}\sqrt{1+\left( \dfrac{dx}{dy}\right) ^{2}}\mathrm{d}y$, where $x=\dfrac{1}{3}\sqrt{y}\left( y-3\right)$. Its derivative is
$$\dfrac{\mathrm{d}x}{\mathrm{d}y}=\dfrac{1}{2}y^{1/2}-\dfrac{1}{2}y^{-1/2}.$$
So
$$\left( \frac{1}{2}y^{1/2}-\frac{1}{2}y^{-1/2}\right) ^{2}=\frac{1}{4}y-% \frac{1}{2}+\frac{1}{4}y^{-1},$$
and
$$1+\left( \frac{1}{2}y^{1/2}-\frac{1}{2}y^{-1/2}\right) ^{2}=\frac{1}{4}y+% \frac{1}{2}+\frac{1}{4}y^{-1}.$$
Now in order to get rid of that radical I would have to get some sort of perfect square but the trouble is sometimes it's difficult to see it right away, and I don't really see it in this one. Is there a better way to go about these problems other than just "looking" at it and trying to figure it out?
Hint:
$$\frac{1}{4}y+\frac{1}{2}+\frac{1}{4}y^{-1}=\frac{1}{4}\frac{\left( y+1\right) ^{2}}{y},$$
or use the completing the square technique.
Note: most of the times inside the radical you have a function $f(y)$ which is not a perfect square nor anything similar. What you get as integrand is a radical $R(y)=\sqrt{f(y)}$. And you have to integrate it using the normal integration techniques: substitution or by parts. But it is not guaranteed that the integral has a closed form. However, in the present case you do obtain a closed form.
-
As I comented above this answer assumes there is a typo in the title and in the first sentence of the question. Instead of $dy/dx$ should be $dx/dy$, and OP wants to evaluate $$\int_{1}^{9}\sqrt{1+\left( \frac{dx}{dy}\right) ^{2}}dy,$$ because the derivative $dy/dx$ and the corresponding integral would become too complicated. – Américo Tavares Jul 16 '11 at 14:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9562870860099792, "perplexity": 116.43596105783237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098987.83/warc/CC-MAIN-20150627031818-00275-ip-10-179-60-89.ec2.internal.warc.gz"} |
http://tex.stackexchange.com/questions/9921/font-sabon-into-texworks | # Font “Sabon” into TeXworks
What has to be done to include a new font such as Sabon into TeXworks? I already have the necessary files, however, ´cause I´m a beginner with LaTex, I can´t manage to bring the files into TeXworks. Somebody able and interested to help me?
-
Your question is a kind of category error: TeXWorks is a front end to a TeX Distribution, so there is no coherent way to "bring the files into TeXWorks" (since I'm assuming that you don't simply want to type your source files in Sabon within TeXWorks.) Are you asking about how to create LaTeX documents using the Sabon font? – Alan Munn Jan 31 '11 at 0:19
I'm assuming what Alan says in his note is right, and that you want to create documents using Sabon, not somehow use Sabon in TeXworks' GUI.
The easiest way then, is probably to use XeLaTeX. First, makse sure Sabon is installed and is accessible to other programs. Add the following to your LaTeX preamble:
``````\usepackage{fontspec}
\defaultfontfeatures{Mapping=tex-text}
\setmainfont{Sabon} % Or it might need to be \setmainfont{Sabon LT Std} or something
``````
To build your document, choose "XeLaTeX" from TeXworks's dropdown box.
Now, Sabon should be the main font of the document. For more information, see the fontspec package documentation.
- | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9581984281539917, "perplexity": 3823.1452477070206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500815756.79/warc/CC-MAIN-20140820021335-00361-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://scholars.hkbu.edu.hk/en/publications/shrinkage-based-diagonal-hotellings-tests-for-high-dimensional-sm-2 | # Shrinkage-based diagonal Hotelling's tests for high-dimensional small sample size data
Kai Dong, Herbert Pang, Tiejun TONG*, Marc G. Genton
*Corresponding author for this work
Research output: Contribution to journalArticlepeer-review
11 Citations (Scopus)
## Abstract
High-throughput expression profiling techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the "large p small n" paradigm, the traditional Hotelling's T2 test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.
Original language English 127-142 16 Journal of Multivariate Analysis 143 https://doi.org/10.1016/j.jmva.2015.08.022 Published - 1 Jan 2016
## Scopus Subject Areas
• Statistics and Probability
• Numerical Analysis
• Statistics, Probability and Uncertainty
## User-Defined Keywords
• Diagonal Hotelling's test
• High-dimensional data
• Microarray data
• Null distribution
• Optimal variance estimation
## Fingerprint
Dive into the research topics of 'Shrinkage-based diagonal Hotelling's tests for high-dimensional small sample size data'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291404843330383, "perplexity": 4701.935057344736}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00106.warc.gz"} |
http://mathhelpforum.com/trigonometry/229909-demonstration-envolving-complex-numbers-help-please.html | 1. Demonstration envolving Complex numbers!! Help please
Please someone help me.. I've an exam in 2 days ... The following problem makes me nuts.. I can't get it right..
Show that:
cos(∏ - alpha) + i cos (∏/2 - alpha)
-------------------------------------------- = cis (∏ - 2alpha)
cis (alpha)
I'm desperate... I've tried everything I know.. any help is appreciated
2. Re: Demonstration envolving Complex numbers!! Help please
Originally Posted by josepbigorra
Please someone help me.. I've an exam in 2 days ... The following problem makes me nuts.. I can't get it right..
Show that:
cos(∏ - alpha) + i cos (∏/2 - alpha)
-------------------------------------------- = cis (∏ - 2alpha)
cis (alpha)
I'm desperate... I've tried everything I know.. any help is appreciated
What have you tried? It's hard to give you advice when we don't know what you've tried.
I'd favor expanding out $cos( \pi - \alpha ) = cos(\pi)~cos(\alpha) + sin(\pi)~sin(\alpha) = -cos(\alpha)$ etc.
-Dan
3. Re: Demonstration envolving Complex numbers!! Help please
What I've tried is the following:
$cos (\pi- \alpha ) = - cos (\alpha )$
$i cos (\frac{\pi }{2} - \alpha ) = isin (\alpha )$
Then i proceed to the substitution in the expression from above, but I don't know how to continue.
4. Re: Demonstration envolving Complex numbers!! Help please
Originally Posted by topsquark
What have you tried? It's hard to give you advice when we don't know what you've tried.
I'd favor expanding out $cos( \pi - \alpha ) = cos(\pi)~cos(\alpha) + sin(\pi)~sin(\alpha) = -cos(\alpha)$ etc.
-Dan
What I've tried is the following:
Then i proceed to the substitution in the expression from above, but I don't know how to continue.
5. Re: Demonstration envolving Complex numbers!! Help please
Okay, you've got the $cos( \pi - \alpha )$ and $sin( \pi / 2 - \alpha )$ expressions. So you need to evaluate:
$\frac{-cos( \alpha ) + i ~sin( \alpha ) }{cos( \alpha ) + i~sin(\alpha)}$
You could do the division but recall Euler's theorem: $cos(x) + i~sin(x) = e^{ix}$. See what you can do with this.
-Dan
6. Re: Demonstration envolving Complex numbers!! Help please
Or "rationalize the denominator" by multiplying numerator and denominator by the conjugate of the denominator:
$\left(\frac{-cos(\alpha)+ isin(\alpha)}{cos(\alpha)+ isin(\alpha)}\right)\left(\frac{cos(\alpha)- isin(\alpha)}{cos(\alpha)- isin(\alpha)}\right)$
$= -cos^2(\alpha)+ sin^2(\alpha)+ 2sin(\alpha)cos(\alpha)$.
Now, what about that left side? $cis(\pi- 2\alpha)= cos(\pi- 2\alpha)+ isin(\pi- 2\alpha)= -cos(2\alpha)+ isin(2\alpha)$
So what are $cos(2\alpha)$ and $sin(2\alpha)$?
7. Re: Demonstration envolving Complex numbers!! Help please
Originally Posted by HallsofIvy
Or "rationalize the denominator" by multiplying numerator and denominator by the conjugate of the denominator:
$\left(\frac{-cos(\alpha)+ isin(\alpha)}{cos(\alpha)+ isin(\alpha)}\right)\left(\frac{cos(\alpha)- isin(\alpha)}{cos(\alpha)- isin(\alpha)}\right)$
$= -cos^2(\alpha)+ sin^2(\alpha)+ 2sin(\alpha)cos(\alpha)$.
Now, what about that left side? $cis(\pi- 2\alpha)= cos(\pi- 2\alpha)+ isin(\pi- 2\alpha)= -cos(2\alpha)+ isin(2\alpha)$
So what are $cos(2\alpha)$ and $sin(2\alpha)$?
My friend thanks a lot for the info... I managed to get the problem working.. I really appreciate your help | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9195080399513245, "perplexity": 2832.955256133005}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814311.76/warc/CC-MAIN-20180223015726-20180223035726-00273.warc.gz"} |
https://vitalflux.com/quick-introduction-smoothing-techniques-language-models/ | Smoothing techniques in NLP are used to address scenarios related to determining probability / likelihood estimate of a sequence of words (say, a sentence) occuring together when one or more words individually (unigram) or N-grams such as bigram($$w_{i}$$/$$w_{i-1}$$) or trigram ($$w_{i}$$/$$w_{i-1}w_{i-2}$$) in the given set have never occured in the past.
In this post, you will go through a quick introduction to various different smoothing techniques used in NLP in addition to related formulas and examples. The following is the list of some of the smoothing techniques:
• Laplace smoothing: Another name for Laplace smoothing technique is add one smoothing.
• Good-turing smoothing
• Kneser-Ney smoothing
• Katz smoothing
• Church and Gale Smoothing
You will also quickly learn about why smoothing techniques to be applied. In the examples below, we will take the following sequence of words as corpus and test data set.
• Corpus (Training data): The following represents the corpus of words:
cats chase rats
cats meow
rats chatter
cats chase birds
rats sleep
• Test Data
rats chase birds
cats sleep
### Why Smoothing Techniques?
Based on the training data set, what is the probability of “cats sleep” assuming bigram technique is used? Based on bigram technique, the probability of the sequence of words “cats sleep” can be calculated as the product of following:
$$P(cats sleep) = P(\frac{cats}{<s>})\times P(\frac{sleep}{cats})\times P(\frac{</s>}{sleep})$$
You will notice that $$P(\frac{sleep}{cats}) = 0$$. Thus, the overall probability of occurrence of “cats sleep” would result in zero (0) value. However, the probability of occurrence of a sequence of words should not be zero at all.
This is where various different smoothing techniques come into the picture.
### Laplace (Add-One Smoothing)
In Laplace smoothing, 1 (one) is added to all the counts and thereafter, the probability is calculated. This is one of the most trivial smoothing techniques out of all the techniques.
Maximum likelihood estimate (MLE) of a word $$w_i$$ occuring in a corpus can be calculated as the following. N is total number of words, and $$count(w_{i})$$ is count of words for whose probability is required to be calculated.
MLE: $$P(w_{i}) = \frac{count(w_{i})}{N}$$
After applying Laplace smoothing, the following happens. Adding 1 leads to extra V observations.
MLE: $$P_{Laplace}(w_{i}) = \frac{count(w_{i}) + 1}{N + V}$$
Similarly, for N-grams (say, Bigram), MLE is calculated as the following:
$$P(\frac{w_{i}}{w_{i-1}}) = \frac{count(w_{i-1}, w_{i})}{count(w_{i-1})}$$
After applying Laplace smoothing, the following happens for N-grams (Bigram). Adding 1 leads to extra V observations.
MLE: $$P_{Laplace}(\frac{w_{i}}{w_{i-1}}) = \frac{count(w_{i-1}, w_{i}) + 1}{count(w_{i-1}) + V}$$
This is very similar to “Add One” or Laplace smoothing. Instead of adding 1 as like in Laplace smoothing, a delta($$\delta$$) value is added. Thus, the formula to calculate probability using additive smoothing looks like following:
$$P(\frac{w_{i}}{w_{i-1}}) = \frac{count(w_{i-1}, w_{i}) + \delta}{count(w_{i-1}) + \delta|V|}$$
### Good-Turing Smoothing
Good Turing Smoothing technique uses the frequencies of the count of occurrence of N-Grams for calculating the maximum likelihood estimate. For example, consider calculating the probability of a bigram (chatter/cats) from the corpus given above. Note that this bigram has never occurred in the corpus and thus, probability without smoothing would turn out to be zero. As per the Good-turing Smoothing, the probability will depend upon the following:
• In case, the bigram (chatter/cats) has never occurred in the corpus (which is the reality), the probability will depend upon the number of bigrams which occurred exactly one time and the total number of bigrams.
• In case, the bigram has occurred in the corpus (for example, chatter/rats), the probability will depend upon number of bigrams which occurred more than one time of the current bigram (chatter/rats) (the value is 1 for chase/cats), total number of bigram which occurred same time as the current bigram (to/bigram) and total number of bigram.
The following is the formula:
For the unknown N-grams, the following formula is used to calculate the probability:
$$P_{unknown}(\frac{w_{i}}{w_{i-1}}) = \frac{N_1}{N}$$
In above formula, $$N_1$$ is count of N-grams which appeared one time and N is count of total number of N-grams
For the known N-grams, the following formula is used to calculate the probability:
$$P(\frac{w_{i}}{w_{i-1}}) = \frac{c*}{N}$$
where c* = $$(c + 1)\times\frac{N_{i+1}}{N_{c}}$$
In the above formula, c represents the count of occurrence of n-gram, $$N_{c + 1}$$ represents count of n-grams which occured for c + 1 times, $$N_{c}$$ represents count of n-grams which occured for c times and N represents total count of all n-grams.
This video represents great tutorial on Good-turing smoothing.
### Kneser-Ney smoothing
In Good Turing smoothing, it is observed that the count of n-grams is discounted by a constant/abolute value such as 0.75. The same intuiton is applied for Kneser-Ney Smoothing where absolute discounting is applied to the count of n-grams in addition to adding the product of interpolation weight and probability of word to appear as novel continuation.
$$P_{Kneser-Ney}(\frac{w_{i}}{w_{i-1}}) = \frac{max(c(w_{i-1},w_{i} – d, 0))}{c(w_{i-1})} + \lambda(w_{i-1})*P_{continuation}(w_{i})$$
where $$\lambda$$ is a normalizing constant which represents probability mass that have been discounted for higher order. The following represents how $$\lambda$$ is calculated:
$$\lambda(w_{i-1}) = \frac{d\times|c(w_{i-1},w_{i})|}{c(w_{i-1})}$$
The following video provides deeper details on Kneser-Ney smoothing.
### Katz smoothing
Good-turing technique is combined with interpolation. Outperforms Good-Turing
by redistributing different probabilities to different unseen units.
### Church and Gale Smoothing
Good-turing technique is combined with bucketing.
• Each n-gram is assigned to one of serveral buckets based on its frequency predicted from lower-order models.
• Good-turing estimate is calculated for each bucket. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8473113179206848, "perplexity": 1828.6078954141199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439213.69/warc/CC-MAIN-20200604063532-20200604093532-00261.warc.gz"} |
http://math.stackexchange.com/questions/168786/how-to-approach-integrals-as-a-function/168832 | # How to approach integrals as a function?
I'm trying to solve the following question involving integrals, and can't quite get what am I supposed to do:
$$f(x) = \int_{2x}^{x^2}\root 3\of{\cos z}~dz$$ $$f'(x) =\ ?$$
How should I approach such integral functions? Am I just over-complicating a simple thing?
-
Are you familiar with the Fundamental Theorem of Calculus and the chain rule for differentiation? – user17794 Jul 9 '12 at 20:35
Yes, I do; I tried messing with this integral around, using the Fundamental Theorem of Calculus, but wasn't quite sure I'm on the right path. I've ended up with a weird expression that I'm not sure is correct/final answer. I've tried gathering information from wolframalpha, but it doesn't seem to handle such functions/integrals. Could you direct me to the right way - what should I end with? – Dvir Azulay Jul 9 '12 at 20:40
@Sam: That looks interesting; Could you explain how it can be used here? – Dvir Azulay Jul 9 '12 at 20:41
You could first split the integral into two integrals: $\int_{2x}^{x^2} \root3\of{\cos z}\,dz =\int_{2x}^0\root3\of{\cos z}\,dz+\int_0^{x^2}\root3\of{\cos z}\,dz=-\int_0^{2x}\root3\of{\cos z}\,dz+\int_0^{x^2}\root3\of{\cos z}\,dz$, and then use Tim's hint in his comment. – David Mitra Jul 9 '12 at 20:52
For this problem, you will ultimately use a version of the Fundamental Theorem of Calculus: If $f$ is continuous, then the function $F$ defined by $F(x)=\int_a^x f(z)\,dz$ is differentiable and $F'(x)=f(x)$.
So for instance, for $F(x)=\int_0^x\root3\of{\cos z}\,dz$, we have $F'(x)=\root3\of{\cos x}$.
One can combine this with the chain rule, when it applies, to differentiate a function whose rule is of the form $F(x)=\int_a^{g(x)} f(z)\,dz$. Here, we recognize that $F$ is a composition of the form $F=G\circ g$ with $G(x)=\int_a^x f(z)\,dz$. The derivative is $F'(x)=\bigl[ G(g(x))\bigr]'=G'(g(x))\cdot g'(x)=f(g(x))\cdot g'(x)$.
For example, for $F(x)=\int_0^{x^2}\root3\of{\cos z}\,dz$, we have $F'(x)=\root3\of{\cos x^2}\cdot(x^2)'=2x\root3\of{\cos x^2}$.
Now to tackle your problem proper and take advantage of these rules, we just "split the integral": $$\tag{1} \int_{2x}^{x^2}\root3\of{\cos z}\,dz= \int_{2x}^{0}\root3\of{\cos z}\,dz+ \int_{0}^{x^2}\root3\of{\cos z}\,dz.$$ But wait! We can only use the aforementioned differentiation rules for functions defined by an integral when it's the upper limit of integration that is the variable. The first integral in the right hand side of $(1)$ does not satisfy this. Things are easily remedied, though; write the right hand side of $(1)$ as: $$-\int_{0}^{2x}\root3\of{\cos z}\,dz+ \int_{0}^{x^2}\root3\of{\cos z}\,dz;$$ and now things are set up to use our rule (of course, you'll also use the rule $[cf+g]'=cf'+g'$).
-
Such a well written answer. Wish I could up-vote it a few more times; Thank you so much – Dvir Azulay Jul 9 '12 at 23:49
@dvir, I did it for you. – Tpofofn Jul 10 '12 at 2:38
Not to give it away completely. Using the Fundamental Theorem of Calculus, $f(x) = C(x^2)-C(2x)$, where $C(x)$ is the anti-derivative of the integrand. Now, use the Chain rule to compute $f'(x)$, which will depend only on the $C'(x)$, which is the integrand itself, evaluated at $x$.
-
Really thanks for your answer! – Dvir Azulay Jul 9 '12 at 23:49
\begin{eqnarray} f'(x)&=&(x^2)'(\sqrt[s]{\cos z})|_{z=x^2}-(2x)'(\sqrt[s]{\cos z})|_{z=2x}\cr &=&2x\sqrt[s]{\cos x^2}-2\sqrt[s]{\cos 2x} \end{eqnarray}
-
This doesn't seem correct. – Dom Jul 9 '12 at 21:49
Then show us what you think is correct! – Mercy Jul 9 '12 at 21:55
Shouldn't the answer be $2x\root s\of{\cos x^2} - 2\root s\of{\cos 2x}$ ? – Dom Jul 9 '12 at 22:03
Yes, you are right! – Mercy Jul 9 '12 at 22:13
Generally, to differentiate an integral of the form:
$$\int_{g_1(x)}^{g_2(x)}f(z)dz$$
we use Leibniz rule. First assume that $F(x)$ is the anti-derivative of $f(x)$. That is $F'(x) = f(x)$. Then it follows that,
$$\int_{g_1(x)}^{g_2(x)}f(z)dz = F(z)|_{z=g_2(x)} - F(z)|_{z=g_1(x)} = F(g_2(x)) - F(g_1(x))$$
Now if we differentiate this result using the chain rule we get:
$$\frac{d}{dx}\left(F(g_2(x)) - F(g_1(x))\right) = f(g_2(x))g_2'(x) - f(g_1(x))g_1'(x).$$
Note that it is not necessary to find the anti-derivative $F()$.
-
Using the Leibnitz rule of differentiation of integrals, which states that if \begin{align} f(x) = \int_{a(x)}^{b(x)} g(y) \ dy, \end{align} then \begin{align} f^{\prime}(x) = g(b(x)) b^{\prime}(x) - g(a(x)) a^{\prime}(x). \end{align} Thus, for your problem $a^{\prime}(x) = 2$ and $b^{\prime}(x) = 2x$ and, therefore, \begin{align} f^{\prime}(x) = \int_{2x}^{x^2} \sqrt[3]{\cos z} dz = \sqrt[3]{\cos (x^2)} (2 x) - \sqrt[3]{\cos (2x)} (2). \end{align}
-
The downvote seems a bit harsh. – user02138 Jul 16 '12 at 1:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9897775053977966, "perplexity": 546.8968645876398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988305.14/warc/CC-MAIN-20150728002308-00152-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://dlmf.nist.gov/22.19 | # §22.19 Physical Applications
## §22.19(i) Classical Dynamics: The Pendulum
With appropriate scalings, Newton’s equation of motion for a pendulum with a mass in a gravitational field constrained to move in a vertical plane at a fixed distance from a fulcrum is
being the angular displacement from the point of stable equilibrium, . The bounded oscillatory solution of (22.19.1) is traditionally written
for an initial angular displacement , with at time 0; see Lawden (1989, pp. 114–117). The period is . The angle is a separatrix, separating oscillatory and unbounded motion. With the same initial conditions, if the sign of gravity is reversed then the new period is ; see Whittaker (1964, §44).
Alternatively, Sala (1989) writes:
for the initial conditions , the point of stable equilibrium for , and . Here is the energy, which is a first integral of the motion. This formulation gives the bounded and unbounded solutions from the same formula (22.19.3), for and , respectively. Also, is not restricted to the principal range . Figure 22.19.1 shows the nature of the solutions of (22.19.3) by graphing for both , as in Figure 22.16.1, and , where it is periodic.
Figure 22.19.1: Jacobi’s amplitude function for and . When , increases monotonically indicating that the motion of the pendulum is unbounded in , corresponding to free rotation about the fulcrum; compare Figure 22.16.1. As , plateaus are seen as the motion approaches the separatrix where , , at which points the motion is time independent for . This corresponds to the pendulum being “upside down” at a point of unstable equilibrium. For , the motion is periodic in , corresponding to bounded oscillatory motion.
## §22.19(ii) Classical Dynamics: The Quartic Oscillator
Classical motion in one dimension is described by Newton’s equation
where is the potential energy, and is the coordinate as a function of time . The potential
plays a prototypal role in classical mechanics (Lawden (1989, §5.2)), quantum mechanics (Schulman (1981, Chapter 29)), and quantum field theory (Pokorski (1987, p. 203), Parisi (1988, §14.6)). Its dynamics for purely imaginary time is connected to the theory of instantons (Itzykson and Zuber (1980, p. 572), Schäfer and Shuryak (1998)), to WKB theory, and to large-order perturbation theory (Bender and Wu (1973), Simon (1982)).
For real and positive, three of the four possible combinations of signs give rise to bounded oscillatory motions. We consider the case of a particle of mass 1, initially held at rest at displacement from the origin and then released at time . The subsequent position as a function of time, , for the three cases is given with results expressed in terms of and the dimensionless parameter .
### ¶ Case I:
This is an example of Duffing’s equation; see Ablowitz and Clarkson (1991, pp. 150–152) and Lawden (1989, pp. 117–119). The subsequent time evolution is always oscillatory with period :
### ¶ Case II:
There is bounded oscillatory motion near , with period , for initial displacements with :
As from below the period diverges since are points of unstable equilibrium.
### ¶ Case III:
Two types of oscillatory motion are possible. For an initial displacement with , bounded oscillations take place near one of the two points of stable equilibrium . Such oscillations, of period , are given by:
As from below the period diverges since is a point of unstable equlilibrium. For initial displacement with the motion extends over the full range :
with period . As from above the period again diverges. Both the and solutions approach as from the appropriate directions.
## §22.19(iii) Nonlinear ODEs and PDEs
Many nonlinear ordinary and partial differential equations have solutions that may be expressed in terms of Jacobian elliptic functions. These include the time dependent, and time independent, nonlinear Schrödinger equations (NLSE) (Drazin and Johnson (1993, Chapter 2), Ablowitz and Clarkson (1991, pp. 42, 99)), the Korteweg–de Vries (KdV) equation (Kruskal (1974), Li and Olver (2000)), the sine-Gordon equation, and others; see Drazin and Johnson (1993, Chapter 2) for an overview. Such solutions include standing or stationary waves, periodic cnoidal waves, and single and multi-solitons occurring in diverse physical situations such as water waves, optical pulses, quantum fluids, and electrical impulses (Hasegawa (1989), Carr et al. (2000), Kivshar and Luther-Davies (1998), and Boyd (1998, Appendix D2.2)).
## §22.19(iv) Tops
The classical rotation of rigid bodies in free space or about a fixed point may be described in terms of elliptic, or hyperelliptic, functions if the motion is integrable (Audin (1999, Chapter 1)). Hyperelliptic functions are solutions of the equation , where is a polynomial of degree higher than 4. Elementary discussions of this topic appear in Lawden (1989, §5.7), Greenhill (1959, pp. 101–103), and Whittaker (1964, Chapter VI). A more abstract overview is Audin (1999, Chapters III and IV), and a complete discussion of analytical solutions in the elliptic and hyperelliptic cases appears in Golubev (1960, Chapters V and VII), the original hyperelliptic investigation being due to Kowalevski (1889).
## §22.19(v) Other Applications
Numerous other physical or engineering applications involving Jacobian elliptic functions, and their inverses, to problems of classical dynamics, electrostatics, and hydrodynamics appear in Bowman (1953, Chapters VII and VIII) and Lawden (1989, Chapter 5). Whittaker (1964, Chapter IV) enumerates the complete class of one-body classical mechanical problems that are solvable this way. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9593717455863953, "perplexity": 1388.1863943890246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704933573/warc/CC-MAIN-20130516114853-00067-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://yufeizhao.wordpress.com/2012/04/ | ### Extremal results for sparse pseudorandom graphs
David Conlon, Jacob Fox and I have just uploaded to the arXiv our paper Extremal results in sparse pseudorandom graphs. The main advance of this paper is a sparse extension of the counting lemma associated to Szemerédi’s regularity lemma, allowing us to extend a wide range of classical extremal and Ramsey results to sparse pseudorandom graphs.
An important trend in modern combinatorics research is in extending classical results to the sparse setting. For instance, Szemerédi’s theorem says that every subset of the integers with positive density contains arbitrarily long arithmetic progressions. The celebrated result of Green and Tao says that the primes also contain arbitrarily long arithmetic progressions. While the primes have zero density in the integers, they may be placed inside a pseudorandom set of “almost primes” with positive relative density. Green and Tao established a transference principle, allowing them to apply Szemerédi’s theorem as a black box to the sparse setting. Our work has a similar theme. We establish a transference principle extending many classical extremal graph theoretic results to sparse pseudorandom graphs.
One of the most powerful tools in extremal graph theory is Szemerédi’s regularity lemma. Roughly speaking, it says that every large graph can be partitioned into a bounded number of roughly equally-sized parts so that the graph is random-like between pairs of parts. With this tool in hand, many important results in extremal graph theory can be proven using a three-step recipe, known as the regularity method:
1. Starting with any graph ${G}$, apply Szemerédi’s regularity lemma to obtain a regular partition;
2. Clean up the graph and create an associated reduced graph. Solve an easier problem in the reduced graph;
3. Apply the counting lemma. Profit.
The counting lemma is a result that says that the number of embeddings of a fixed graph (e.g., a triangle) into the regular partition is roughly what you would expect if the large graph were actually random. The original version of Szemerédi’s regularity lemma is useful only for dense graphs. Kohayakawa and Rödl later independently developed regularity lemmas for sparse graphs. However, for sparse extensions of the applications, the counting lemma remained a key missing ingredient and an important open problem in the field. Our main advance lies in a counting lemma that complements the sparse regularity lemma. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9216365814208984, "perplexity": 353.9783474501607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511175.9/warc/CC-MAIN-20181017132258-20181017153758-00466.warc.gz"} |
http://math.stackexchange.com/questions/268605/inequality-involving-roots-of-a-third-degree-polynomial | # Inequality involving roots of a third degree polynomial
Let $a,b$ be two positive numbers such that $a^3 \gt 27b$. Consider the polynomial
$$W(x)=x^3-2ax^2+a^2x-4b$$
Then we have
$$W(0)=-4b \lt 0, \ W(\frac{a}{3})=\frac{4}{27}(a^3-27b) \gt 0, \ W(a)=-4b \lt 0$$
We deduce that $W$ has three roots $\alpha,\beta,\gamma$ with
$$0 \lt \alpha \lt \frac{a}{3} \lt \beta \lt a \lt \gamma$$
Prove or find a counterexample : $2\alpha+\beta \leq a$.
-
What happens when you work out $W(2\alpha+\beta)$? – Gerry Myerson Jan 1 '13 at 13:48
@GerryMyerson : $W(2\alpha+\beta)$ is exactly $-6a^2\alpha + (8\alpha^2 + 4\beta\alpha)a + (8b + 6\beta\alpha^2)$, an expression whose sign is not obvious. So what ? – Ewan Delanoy Jan 1 '13 at 14:05
Sorry, just thought it might be worth a try. – Gerry Myerson Jan 1 '13 at 23:29
$W(\alpha + a)=a\, \alpha\,(3 \alpha-a)\leq 0$ so $\alpha + a \leq \gamma$. Together with $\alpha+\beta+\gamma=2a$ this implies that $2\alpha + \beta = 2a +\alpha-\gamma\leq a$.
@EwanDelanoy Quasi systematic at best. In this case I started with the last inequality and concluded that $\gamma-\alpha \geq a$ would suffice. Then I tried $W(\alpha + a)$ and got lucky. – WimC Jan 1 '13 at 15:30
by the way, I also the need the "reverse" inequality $a \leq \alpha+2\beta$. Because of your humiliatingly simple solution, I’ll try harder to find a proof for myself before asking it officially here. But if once again, a one-line proof leaps to your lucky eye, let me know, it might save me some trouble ... – Ewan Delanoy Jan 1 '13 at 15:48
@EwanDelanoy Note that $W(4a/3)>0$ so $\beta-\gamma\geq -a$. – WimC Jan 1 '13 at 16:00
How do you deduce $\beta - \gamma \geq -a$ from $W(\frac{4a}{3}) \gt 0$ ? – Ewan Delanoy Jan 1 '13 at 16:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9318791031837463, "perplexity": 678.2618346779944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639121.73/warc/CC-MAIN-20150417045719-00114-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/science/physics/university-physics-with-modern-physics-14th-edition/chapter-10-dynamics-of-rotational-motion-problems-exercises-page-335/10-76 | ## University Physics with Modern Physics (14th Edition)
(a) $\omega = 1.28~rad/s$ (b) $\omega = 0.631~rad/s$ (c) $h = 1.26~m$
We can find the moment of inertia of the vine when Jane is swinging on it. $I = \frac{1}{3}M_vR^2+M_jR^2$ $I = \frac{1}{3}(30.0~kg)(8.00~m)^2+(60.0~kg)(8.00~m)^2$ $I = 4480~kg~m^2$ We can use conservation of energy to find the angular speed at the bottom. Note that the center of mass of the vine drops a height of 2.50 meters when Jane falls 5.00 meters. $\frac{1}{2}I\omega^2 = M_j~gh+M_v~g(\frac{h}{2})$ $\omega^2 = \frac{2M_j~gh+M_v~gh}{I}$ $\omega = \sqrt{\frac{2M_j~gh+M_v~gh}{I}}$ $\omega = \sqrt{\frac{(2)(60.0~kg)(9.80~m/s^2)(5.00~m)+(30.0~kg)(9.80~m/s^2)(5.00~m)}{4480~kg~m^2}}$ $\omega = 1.28~rad/s$ (b) We can find the moment of inertia when Tarzan is included. $I = \frac{1}{3}M_vR^2+(M_j+M_t)R^2$ $I = \frac{1}{3}(30.0~kg)(8.00~m)^2+(60.0~kg+72.0~kg)(8.00~m)^2$ $I = 9088~kg~m^2$ We can use conservation of angular momentum to find the angular speed after Jane grabs Tarzan. $L_2=L_1$ $I_2\omega_2=I_1\omega_1$ $\omega_2=\frac{I_1\omega_1}{I_2}$ $\omega_2=\frac{(4480~kg~m^2)(1.28~rad/s)}{9088~kg~m^2}$ $\omega_2 = 0.631~rad/s$ (c) We can use conservation of energy to find the maximum height that Jane and Tarzan swing up. Note that the center of mass of the vine rises a height of $\frac{h}{2}$ meters when Jane rises $h$ meters. $(M_j+M_t)~gh+M_v~g(\frac{h}{2}) = \frac{1}{2}I_2\omega_2^2$ $(M_j+M_t)~2gh+M_v~gh = I_2\omega_2^2$ $h = \frac{I_2\omega_2^2}{(M_j+M_t)~2g+M_v~g}$ $h = \frac{(9088~kg~m^2)(0.631~rad/s)^2}{(60.0~kg+72.0~kg)(2)(9.80~m/s^2)+(30.0~kg)(9.80~m/s^2)}$ $h = 1.26~m$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9812091588973999, "perplexity": 196.28672091538206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742483.3/warc/CC-MAIN-20181115033911-20181115055911-00483.warc.gz"} |
http://mathhelpforum.com/pre-calculus/205439-inequality.html | 1. ## Inequality
Let $x,y,z \in R$ such that $x^2+y^2+z^2=1$. Prove $|x+y+z| \leq \sqrt{3}$. I thinking along the lines of $AGM \leq GM$.
2. ## Re: Inequality
Originally Posted by brucewayne
Let $x,y,z \in R$ such that $x^2+y^2+z^2=1$.
Prove $|x+y+z| \leq \sqrt{3}$.
Here are the facts you need.
$2|xy|\le x^2+y^2$
$|x+y+z|^2\le (|x|+|y|+|z|)^2=x^2+y^2+z^2+2|xy|+2|xz|+2|yz|$
3. ## Re: Inequality
Ok, I am heading in the right direction...
$1+2x^2+2y^2+2z^2 \leq \sqrt{3}$
4. ## Re: Inequality
Originally Posted by brucewayne
Ok, I am heading in the right direction...
$1+2x^2+2y^2+2z^2 \leq \sqrt{3}$
Factor out the 2 in the last three terms.
You have
$|x+y+z|^2\le 3$
5. ## Re: Inequality
Ok, Maybe I am missing something, but what happened to 1?
6. ## Re: Inequality
Originally Posted by brucewayne
Ok, Maybe I am missing something, but what happens to 1?
Come man do basic mathamatics.
$1+2x^2+2y^2+2z^2=1+2(x^2+y^2+z^2)=1+2(1)=3$
7. ## Re: Inequality
Dang it, I really do over analyze things at times. My sincerest apologies; I am trying to get better at this. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9248746037483215, "perplexity": 2576.793765530699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890874.84/warc/CC-MAIN-20180121195145-20180121215145-00342.warc.gz"} |
http://math.stackexchange.com/questions/174616/how-high-will-the-water-rise | # How high will the water rise
I need to know where I am going wrong since I am getting the wrong answer
12 litres of water are poured into an aquarium of $50$ cm length , $30$ cm breadth and $40$ cm Height.How high in cm will the water rise. (Ans 8 cm)
Edit: So I was making some very illogical assumptions however according to advice given which suggested that I keep the changing dimension value variable I got the answer which is
$1000cm^3$ = $50 \times 30 \times x$ so $x = \frac{1000}{1500}$. so $1$ litre has height $\frac{1000}{1500} = \frac {2}{3}$ so $12$ litres will give $12 \times \frac{2}{3} = 8 cm$
-
I think a good start would be to ask whatever led you to write $50\times30\times40=5\times3\times4$ when it's patently not so. – Gerry Myerson Jul 24 '12 at 10:53
I agree that does not make sense – MistyD Jul 24 '12 at 10:55
Tips on solving this problem ? – MistyD Jul 24 '12 at 10:56
Which dimension (length, breadth, height) changes when you pour in the water? This is where the $x$ should go. The other two stay constant, so you have $x\cdot C_1 \cdot C_2$, which should match $12\;l$. To make the match, you have to convert litres to $cm^3$... – draks ... Jul 24 '12 at 10:56
how did you get $12l$ ? – MistyD Jul 24 '12 at 11:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.857846736907959, "perplexity": 571.2026760935028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926924.76/warc/CC-MAIN-20150521113206-00101-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/a-de-sitter-like-universe-with-matter.521084/ | # A de Sitter like Universe with matter
1. Aug 12, 2011
### johne1618
As I understand it the de Sitter model is a model of the Universe with:
rho = matter density = rho = 0
p = pressure = 0
k = spatial curvature = 0
cosmological constant = Lambda = non zero
Putting these values in the Friedmann equations one finds the solution for the scale factor a(t) is:
a(t) = exp( sqrt(Lambda c^2/3) * t)
This describes an accelerating empty universe with a non-zero cosmological constant.
Although this model has the right deceleration parameter q = -1 it is contrary to observations as we know there is matter in the Universe.
Now consider the following model:
p = - rho c^2
k = 0
Plugging these values into the Friedmann equations we find we are left with the following equation for the scale factor a:
a'^2 = a a''
This also has the solution:
a(t) = exp(H * t)
where
H^2 = 8 Pi G rho' / 3
where rho' = rho + Lambda c^2
Now this model describes a matter-filled accelerating Universe with no explicit cosmological constant provided that the equation of state of the matter is:
p = -rho c^2
Is this right?
Does this latter model describe the present Universe provided that p = -rho c^2 holds for present day matter?
In this model the negative pressure is associated with the particles of matter themselves rather than having a cosmological constant that is associated with the background space.
Perhaps the negative pressure is a zero-point energy phenomenon holding the individual particles of matter together (in the same manner as the Casimir effect pushes conducting plates together).
Last edited: Aug 12, 2011
2. Aug 12, 2011
### IsometricPion
The http://en.wikipedia.org/wiki/Einstein_field_equations" [Broken].
Last edited by a moderator: May 5, 2017
3. Aug 13, 2011
### Chalnoth
There's no difference between an empty universe with a cosmological constant and a universe that is filled only with matter that has negative pressure equal to its energy density. They are just two different ways of describing the same thing.
Just bear in mind that our own universe has quite a bit of normal matter that has no pressure on cosmological scales.
4. Aug 13, 2011
### johne1618
Maybe each baryon of normal and dark matter is held together by the excess pressure of zero-point gluon fields outside the particle. Thus there would be a region of negative pressure hiding inside every baryon in the Universe.
Last edited: Aug 13, 2011
5. Aug 13, 2011
### Chalnoth
But then matter wouldn't collapse and form structures.
6. Aug 13, 2011
### johne1618
Good point - I'll have to think about that one! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8942152261734009, "perplexity": 1375.8378194709314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861641.66/warc/CC-MAIN-20180619002120-20180619022120-00037.warc.gz"} |
https://cracku.in/ssc-chsl-21-jan-2017-afternoon-shift-question-paper-solved?page=9 | # SSC CHSL 21 Jan 2017 Afternoon Shift
Instructions
For the following questions answer them individually
Question 81
If Gafur's salary is 4/3 times of Haashim's and Satish's is 5/4 times of Haashim's, what is the ratio of Gafur's salary to Satish's?
Question 82
If cosecA/(cosecA - 1) + cosecA/(cosecA + 1) = x, then x is
Question 83
Which of the following is correct?
Question 84
Of the 3 numbers whose average is 77, the first is 3/4 times the sum of other 2. The first number is
Question 85
If the amount received at the end of 2nd and 3rd year at Compound Interest on a certain Principal is Rs 34992, and Rs 37791.36 respectively, what is the rate of interest?
Question 86
Slope of the side DA of the rectangle ABCD is -3/4. What is the slope of the side AB?
Question 87
If cot 30° - cos 45° = x, then x is
Question 88
The length of the diagonal of a rectangle is 10 cm and that of one side is 8 cm. What is the area of this rectangle?
Question 89
The two numbers are 55 and 99, HCF is 11, What is their LCM?
Question 90
A dishonest milkman buys milk at Rs 25 per litre and adds 1/5 of water to it and sells the mixture at Rs 29 per litre. His gain is
OR | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8041014075279236, "perplexity": 1698.5457659103668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370515113.54/warc/CC-MAIN-20200403154746-20200403184746-00380.warc.gz"} |
http://nag.com/numeric/CL/nagdoc_cl23/html/F08/f08ffc.html | f08 Chapter Contents
f08 Chapter Introduction
NAG C Library Manual
# NAG Library Function Documentnag_dorgtr (f08ffc)
## 1 Purpose
nag_dorgtr (f08ffc) generates the real orthogonal matrix $Q$, which was determined by nag_dsytrd (f08fec) when reducing a symmetric matrix to tridiagonal form.
## 2 Specification
#include #include
void nag_dorgtr (Nag_OrderType order, Nag_UploType uplo, Integer n, double a[], Integer pda, const double tau[], NagError *fail)
## 3 Description
nag_dorgtr (f08ffc) is intended to be used after a call to nag_dsytrd (f08fec), which reduces a real symmetric matrix $A$ to symmetric tridiagonal form $T$ by an orthogonal similarity transformation: $A=QT{Q}^{\mathrm{T}}$. nag_dsytrd (f08fec) represents the orthogonal matrix $Q$ as a product of $n-1$ elementary reflectors.
This function may be used to generate $Q$ explicitly as a square matrix.
## 4 References
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
## 5 Arguments
1: orderNag_OrderTypeInput
On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by ${\mathbf{order}}=\mathrm{Nag_RowMajor}$. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument.
Constraint: ${\mathbf{order}}=\mathrm{Nag_RowMajor}$ or Nag_ColMajor.
2: uploNag_UploTypeInput
On entry: this must be the same argument uplo as supplied to nag_dsytrd (f08fec).
Constraint: ${\mathbf{uplo}}=\mathrm{Nag_Upper}$ or $\mathrm{Nag_Lower}$.
3: nIntegerInput
On entry: $n$, the order of the matrix $Q$.
Constraint: ${\mathbf{n}}\ge 0$.
4: a[$\mathit{dim}$]doubleInput/Output
Note: the dimension, dim, of the array a must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{pda}}×{\mathbf{n}}\right)$.
On entry: details of the vectors which define the elementary reflectors, as returned by nag_dsytrd (f08fec).
On exit: the $n$ by $n$ orthogonal matrix $Q$.
If ${\mathbf{order}}=\mathrm{Nag_ColMajor}$, the $\left(i,j\right)$th element of the matrix is stored in ${\mathbf{a}}\left[\left(j-1\right)×{\mathbf{pda}}+i-1\right]$.
If ${\mathbf{order}}=\mathrm{Nag_RowMajor}$, the $\left(i,j\right)$th element of the matrix is stored in ${\mathbf{a}}\left[\left(i-1\right)×{\mathbf{pda}}+j-1\right]$.
5: pdaIntegerInput
On entry: the stride separating row or column elements (depending on the value of order) of the matrix $A$ in the array a.
Constraint: ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
6: tau[$\mathit{dim}$]const doubleInput
Note: the dimension, dim, of the array tau must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}-1\right)$.
On entry: further details of the elementary reflectors, as returned by nag_dsytrd (f08fec).
7: failNagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 0$.
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}>0$.
NE_INT_2
On entry, ${\mathbf{pda}}=〈\mathit{\text{value}}〉$ and ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{pda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
## 7 Accuracy
The computed matrix $Q$ differs from an exactly orthogonal matrix by a matrix $E$ such that
$E2 = Oε ,$
where $\epsilon$ is the machine precision.
The total number of floating point operations is approximately $\frac{4}{3}{n}^{3}$.
The complex analogue of this function is nag_zungtr (f08ftc).
## 9 Example
This example computes all the eigenvalues and eigenvectors of the matrix $A$, where
$A = 2.07 3.87 4.20 -1.15 3.87 -0.21 1.87 0.63 4.20 1.87 1.15 2.06 -1.15 0.63 2.06 -1.81 .$
Here $A$ is symmetric and must first be reduced to tridiagonal form by nag_dsytrd (f08fec). The program then calls nag_dorgtr (f08ffc) to form $Q$, and passes this matrix to nag_dsteqr (f08jec) which computes the eigenvalues and eigenvectors of $A$.
### 9.1 Program Text
Program Text (f08ffce.c)
### 9.2 Program Data
Program Data (f08ffce.d)
### 9.3 Program Results
Program Results (f08ffce.r) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 47, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9947894811630249, "perplexity": 2281.0908795873665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989790.89/warc/CC-MAIN-20150728002309-00030-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.jmlr.org/papers/v15/yamazaki14a.html | ## Asymptotic Accuracy of Distribution-Based Estimation of Latent Variables
Keisuke Yamazaki; 15(109):3721−3742, 2014.
### Abstract
Hierarchical statistical models are widely employed in information science and data engineering. The models consist of two types of variables: observable variables that represent the given data and latent variables for the unobservable labels. An asymptotic analysis of the models plays an important role in evaluating the learning process; the result of the analysis is applied not only to theoretical but also to practical situations, such as optimal model selection and active learning. There are many studies of generalization errors, which measure the prediction accuracy of the observable variables. However, the accuracy of estimating the latent variables has not yet been elucidated. For a quantitative evaluation of this, the present paper formulates distribution-based functions for the errors in the estimation of the latent variables. The asymptotic behavior is analyzed for both the maximum likelihood and the Bayes methods.
[abs][pdf][bib] | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.836000382900238, "perplexity": 516.2031352993276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039560245.87/warc/CC-MAIN-20210422013104-20210422043104-00400.warc.gz"} |
https://www.physicsforums.com/threads/concentration-of-ammonia-in-a-solution.196081/ | # Concentration of ammonia in a solution
• Thread starter sveioen
• Start date
• #1
14
0
Hello all,
I had chemistry a long time ago, but now I am very rusty at it so I am hoping you can get me started with this problem I have;
A particular solution of ammonia (Kb = 1.8 x 10-5) has a pH of 8.3.
What is the concentration of ammonia in this solution?
Is it the concentration of NH3 I have to find? I know I can find [OH-] since I know the pH, but what does the final equation look like? Something like $$Kb=[OH-][NH4+]/[NH3]$$?
Thank you for any help!
## Answers and Replies
• #2
74
0
NH4 and OH- is going to have same amount of equilibrium concentration gained from the NH3. So If you know the pH, then how do you find the pOH, and what is the concentration of OH-? Multiply both sides by NH3 and divide bothsides by Kb. What happens?
• #3
14
0
Ok, so pOH = 14 - pH = 14 - 8,3 = 5,7. Concentration of OH- and NH4 is therefore $$1,995 \times 10^{-6}$$? And then $$[NH3] = \frac{[OH^-][NH4^+]}{K_b}$$?
• #4
74
0
So plug the values and see what you get, I hope this answer agree with the true answer, does it?? If not tell me.
• #5
14
0
I got $$2,21 \times 10^{-7}$$, which seems reasonable I guess. Maybe a bit low?!
• #6
74
0
You don't have the answer? It should be reasonable right? because its the equilibrum concentration right?
• #7
14
0
Nope dont have answer (yet) :(, but it seems kinda right.. Probably is equilibrum concentration..
• #8
1,939
50
I got $$2,21 \times 10^{-7}$$, which seems reasonable I guess. Maybe a bit low?!
I got $$2,21 \times 10^{-6}$$ instead.
• Last Post
Replies
20
Views
12K
• Last Post
Replies
2
Views
5K
• Last Post
Replies
1
Views
7K
• Last Post
Replies
5
Views
5K
• Last Post
Replies
6
Views
549
• Last Post
Replies
6
Views
19K
• Last Post
Replies
1
Views
1K
• Last Post
Replies
2
Views
5K
• Last Post
Replies
4
Views
3K
• Last Post
Replies
2
Views
17K | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008536338806152, "perplexity": 2930.9402145937984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522309.14/warc/CC-MAIN-20220518183254-20220518213254-00376.warc.gz"} |
http://www.ask.com/question/what-is-30ml-in-ounces | # What Is 30ml in Ounces?
30 ml is equivalent to 1.014 fluid ounces. Millilitre is a metric unit of measuring volume that is equivalent to a thousand of a litre while a fluid ounce is a unit of volume capacity equivalent to one twentieth of a pint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.969977855682373, "perplexity": 2808.6840341321217}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678694628/warc/CC-MAIN-20140313024454-00092-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/169605-i-dont-get-simplification-print.html | # I don't get this simplification
Printable View
• January 28th 2011, 02:01 PM
konvos
I don't get this simplification
I can't see how they did this operation.
http://img824.imageshack.us/img824/1111/simpr.png
thx for the help.
• January 28th 2011, 02:09 PM
pickslides
Looks like simple trig identities to me i.e. $2\sin x \cos x = \sin 2x$
• January 28th 2011, 02:09 PM
Archie Meade
Quote:
Originally Posted by konvos
I can't see how they did this operation.
http://img824.imageshack.us/img824/1111/simpr.png
thx for the help.
$sin(2x)=2sinxcosx$
$-sin^2x+cos^2x=-sin^2x-cos^2x+2cos^2x$
and
$cos^2x+sin^2x=1\Rightarrow\ -\left(sin^2x+cos^2x\right)=-1$
• January 28th 2011, 02:11 PM
Houdini
Ok..1.sin2(x) + cos2(x) = 1=>sin2(x)=1+cos2(x)....but you have - in front of sin so you can see why -1+cos2(x)
2.sin(2x) = 2 sin x cos x
from 1 and 2 =>-sin2(x) + cos2(x)+2 sin x cos x=-1+cos2(x)+sin(2x)
• January 28th 2011, 02:23 PM
konvos
thx for the help everybody
Quote:
Originally Posted by Archie Meade
$sin(2x)=2sinxcosx$
$-sin^2x+cos^2x=-sin^2x-cos^2x+2cos^2x$
and
$cos^2x+sin^2x=1\Rightarrow\ -\left(sin^2x+cos^2x\right)=-1$
I'd no idea of this identity http://www.mathhelpforum.com/math-he...2177745298.png
I suppose I should review trig...
• January 28th 2011, 02:48 PM
Defunkt
$-sin^2x + cos^2x = -sin^2x + (cos^2x - cos^2x) + cos^2x =$
$= -sin^2x -cos^2x + cos^2x + cos^2x$
$= -(sin^2x + cos^2x) + 2cos^2x$
$= -1 + 2cos^2x$
• January 28th 2011, 04:44 PM
Archie Meade
Quote:
Originally Posted by konvos
thx for the help everybody
I'd no idea of this identity http://www.mathhelpforum.com/math-he...2177745298.png
I suppose I should review trig...
No, that's not an identity.
Just algebraic manipulation.
You could of course use identities to weave your way to the final line.
• January 28th 2011, 05:59 PM
mr fantastic
Quote:
Originally Posted by Defunkt
$-sin^2x + cos^2x = -sin^2x + (cos^2x - cos^2x) + cos^2x =$
$= -sin^2x -cos^2x + cos^2x + cos^2x$
$= -(sin^2x + cos^2x) + 2cos^2x$
$= -1 + 2cos^2x$
It's also probably worth noting that $\cos^2(x) - \sin^2(x) = \cos(2x)$ is a standard double angle formula (because I just know that the next question asked by the OP will be how to find y ....) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8087837100028992, "perplexity": 3909.7131298284908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065375.30/warc/CC-MAIN-20150827025425-00164-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/photoelectrons-ejecting-from-cs-metal.729202/ | # Photoelectrons ejecting from Cs metal
1. Dec 19, 2013
### utkarshakash
1. The problem statement, all variables and given/known data
A small piece of cesium metal (W=1.9eV) is kept at a distance of 20cm from a large metal plate having a charge density of 1.0*10^-9 C m^-2 on the surface facing the cesium piece. A monochromatic light of wavelength 400nm is incident on the cesium piece. Find the minimum and the maximum kinetic energy of the photoelectrons reaching the large metal plate.
2. Relevant equations
3. The attempt at a solution
I can find the electric field due to metal plate and thus the corresponding potential difference. I can also find the stopping potential. But what about kinetic energies?
2. Dec 19, 2013
### Zondrina
I believe to find $E_{k_{max}}$ you should use:
$E_k = \frac{hc}{\lambda} - W$ - Don't forget to convert your work function into Joules and nm to meters.
If you knew the cutoff voltage, there would be another way to find the max kinetic energy as well by using the charge density.
Also, to find $E_{k_{min}}$, you should think about what causes the minimum kinetic energy. Think about the threshold frequency and how it relates to the minimum kinetic energy. The light must meet this minimum frequency in order to give enough energy to the photoelectrons, so that they can be ejected from the cesium metal to the large metal plate.
Last edited: Dec 19, 2013
3. Dec 19, 2013
### rude man
Since there is only one radiating frequency there will be one value of k.e. corresponding to the difference between hf and the given work function W. That would have to be the max. k.e. also.
To determine the min. k.e. it seems one would have to know the distribution of the work function for Cesium among its atoms, which I could not find. This may instead have to do with the energy levels of the electrons in each atom. The min. W would be associated with the outermost orbital electrons, which would = 1.9 eV, and the max. W with the innermost orbital electrons, for which W = outermost W + difference in energy betw. outermost & innermost electrons. Need expert physics help here ...
4. Dec 19, 2013
### utkarshakash
I did my calculation and got KE(max) = 1.19eV which is not correct.
5. Dec 19, 2013
### rude man
You did not include the effect of the electric field. You just computed the photon energy and subtracted the work function.
BTW are you sure the charge on the plate is positive? It's usually negative in experiments.
The serious question of minimum k.e. remains ...
6. Dec 20, 2013
### utkarshakash
But the question mentions that electric field is to be neglected (sorry for not giving that info earlier).
7. Dec 20, 2013
### ehild
hc/λ-W=KE(max), the maximum kinetic energy. That is the KE of the electrons which escaped the metal from the top of the potential well. Other electrons can have less KE than that. The minimum KE of the electrons leaving the metal is zero. They are accelerated by the electric field on the surface charge of the opposite metal plate . That surface charge density is 1.0*10^-9 C m^-2 positive (there would be a minus sign in front otherwise). What is the electric field between the metal plate and the Caesium piece?
ehild
8. Dec 20, 2013
### rude man
Why should the electric field be neglected if it was given in great detail?
9. Dec 20, 2013
### rude man
You have not explained how electrons can have zero emitted k.e. Why zero exactly?
A photon at 400 nm imparts 3.1 eV to an electron, which exceeds the work function by 1.2 eV. How can an electron be kicked to the surface with zero k.e. or indeed with any k.e. < 1.2 eV? Must have something to do with the quantized electron energy levels: S, P etc. And if it does, the lowest released electron's k.e. is not likely to be zero since the orbital electron levels are quantized. Would be quite a coincidence.
10. Dec 20, 2013
### ehild
The energy levels in a crystal are arranged in bands. The energy levels in the band, although quantized, are very close to each other: the atomic levels split when the atoms interact. In case of two interacting atoms, they split into two, in case of N atoms in a crystal, they split into N sub-levels. The outermost electrons occupy levels in the valence band, those of the metals occupy levels in the conduction band. There are electrons with energy near the bottom of the band and there are others near the Fermi level.
Have you ever asked yourself why the maximum kinetic energy appears in the equation hf=W+KE(max)?
KE(max) is the upper bound for the KE of the electrons kicked out of the metal, so there must be ones with lower KE. What can be the lower bound of the KE? Anyway, it can not be negative.
ehild
Last edited: Dec 20, 2013
11. Dec 20, 2013
### rude man
Of course it cannot be negative, but my question was, what is it?
But OK, what I have found is that
work function phi = E0 - EF
where E0 is the energy required to kick an electron out of the lowest free electron state, and EF is the Fermi level. Since EF for Cesium = 1.6 eV and phi = 1.9 eV, this indicates that E0 ~ 3.5 eV for Cs. There is a continuum of electron energies with values from - 3.5 eV to -1.9 eV so there is effectively a continuum of work functions also, from 1.9 eV to 3.5 eV. The work function given (1.9 eV) is the energy required to liberate an electron in the highest free electron state; all lower-state electrons require more than 1.9 eV. The lowest-state electrons require a light frequency f corresponding to hf = 3.5 eV. This frequency is above our light emitter frequency of c/λ, λ = 400 nm.
So, bottom line, the lowest emitted electron k.e. is indeed zero since the 400 nm radiation can liberate electrons only to the - 3.1 eV level.
So now the OP can compute the range of kinetic energies at the charged plate.
Last edited: Dec 20, 2013
Draft saved Draft deleted
Similar Discussions: Photoelectrons ejecting from Cs metal | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.831642210483551, "perplexity": 807.7121125659828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806660.82/warc/CC-MAIN-20171122194844-20171122214844-00122.warc.gz"} |
https://stats.stackexchange.com/questions/220078/what-is-the-proper-way-to-measure-error-for-an-estimation-algorithm | # What is the proper way to measure error for an estimation algorithm?
Our algorithm is about estimating the true statistic values from a data set. The data set is a table in relational database, we are going to estimate the statistic value for filtered records, like SUM("Sales") WHERE city="New York". We do this because the table is too large to calculate the true answer.
We use relative error for accuracy measurement at first, but we soon noticed that for small values, the error usually exceeds 100% and raises the average error. For example, if the true answer is 3, and my algorithm gives 9, it is a 200% error and will result in a very high average error, even if the other queries are answered properly. So I'm wondering if using relative error is not proper here, because if my algorithm always estimate a very small value, there will be unlikely for my algorithm to give an average error over 100%. It is unfair if my algorithm overestimates the true value.
Please note that I'm not trying to develop an algorithm to do the estimation, but I'm finding a fair measurement to evaluate the accuracy for different estimation algorithms. For example, we can estimate the sum by 1) Sampling from the original data set and estimate the sum with CLT, or 2) Draw a histogram offline and give an approximate answer for specific queries online according to the histogram. My question here is that under the traditional definition of relative error, the algorithm that always give small values tend to benefit more, so I'm looking for another measurement which is fairer.
I use the following formula in the past, but I'm wondering if it has any theories behind it:
$error=abs(x_{estimate}-x_{true})/{max(x_{estimate},x_{true})}$
So is there any better measurement to measure the error for an estimation algorithm?
Thanks!
• Let me ask you if I really understood what your question. Lets suppose you have the data $x_1,\,x_2,...,x_n$. In your example you want to find out the value of $x_1 + x_2 +...+x_n$, but you are not able to perform the whole summation. Is that why you need the approximation? Do the algorithm you created require or use randomness in any sense? – Mur1lo Jun 22 '16 at 3:39
• Can you say more about your situation, your data & your goals? This doesn't make sense to me, & I don't think this question is answerable. – gung - Reinstate Monica Jun 22 '16 at 4:04
• @Mur1lo Yes, the original data set is too large, so we used some kind of algorithm to make an approximation for it. We don't include randomness in our algorithm. – DarkZero Jun 22 '16 at 5:09
• @gung I added some background information for you. I am just wondering if it is reasonable to use relative error to measure the error of an estimation algorithm, because it punishes too much for overestimation... – DarkZero Jun 22 '16 at 5:10
• In the denominator of your error formula why do you have max? Shouldn't it be only the true value? ref: en.wikipedia.org/wiki/Approximation_error – Mur1lo Jun 23 '16 at 3:47
I would suggest adding a "small" number to the denominator of your ratio.
$$error=\frac{|x_{estimate}-x_{true}|}{x_{true}+K}$$
You set $$K$$ equal to a "negligible" amount. For your example, setting $$K=6$$ would give an error of 66% instead of 200% for $$K=0$$.
The number to add will depend on your context and what kind of absolute errors are negligible
There are a lot of measures for error of estimation and the one you provided is a valid one. But since you are working with the sum of random variables, I suggest using normal distribution (supported by the Central Limit Theorem) and instead of calculating once the sum of sales in New York, you’ll have to repeat that algorithm (at least 30 times) including randomness in your selection.
With your sample of 30 "sum of sales" you can use Normal Distribution and not only calculate the Mean Square Error as a good estimator of error, but also calculate probabilities.
Another good news is most of statistical inference is developed for variables with normal distribution.
• The classical CLT applies only to the sample mean, not to the sum. – Mur1lo Jun 22 '16 at 17:38
• Central Theorem Limit it's actually a summary of different convergence laws. In en.wikipedia.org/wiki/Central_limit_theorem says " central limit theorem is any of a set of weak-convergence theorems in probability theory. They all express the fact that a sum of many independent and identically distributed (i.i.d.) random variables (...) will tend to be distributed according to (...) normal distribution" – Camila Burne Jun 22 '16 at 18:12
• By "a sum" the author did not mean ANY sum. The mean is a particular type of sum, just like $\sum(X_i -\bar X)^2/n$ is another sum. The problem is that the variance of $\sum X_i$ goes to infinity and as a consequence it cannot converge in law to any distribution with finite variance. – Mur1lo Jun 22 '16 at 19:10
• The recommendation to repeat an estimator 30 times in order to use a Normal distribution is truly strange. What support can you adduce for that? – whuber Jun 22 '16 at 20:55
• I'm not questioning the 30: I'm questioning the very idea that any replication is needed in the first place! That idea seems to belie a fundamental misconception about sampling and estimation. – whuber Jun 23 '16 at 14:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738276362419128, "perplexity": 346.22370451330653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141188146.22/warc/CC-MAIN-20201126113736-20201126143736-00492.warc.gz"} |
http://ncvm3.books.nba.co.za/chapter/unit-4-find-the-general-solution-to-trig-equations/ | Space, shape and measurement: Solve problems by constructing and interpreting trigonometric models
# Unit 4: Find the general solution to trig equations
Dylan Busa
### Unit outcomes
By the end of this unit you will be able to:
• Solve trigonometric equations using a general solution.
## What you should know
Before you start this unit, make sure you can:
• Define the three basic trigonometric ratios of sine, cosine and tangent.
• Solve basic linear equations.
• Calculate with the special angles. Refer to unit 1 of this subject outcome if you need help with this.
• Use the trig reduction formulae. Refer to unit 2 of this subject outcome if you need help with this.
• Draw and work with the CAST diagram to determine in which quadrants each of the trig ratios are positive or negative. Refer to level 2 subject outcome 3.6 unit 1 if you need help with this.
## Introduction
If $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$, what is $\scriptsize \theta$? If you said that $\scriptsize \theta ={{30}^\circ}$ because of your knowledge of special angles from unit 1, you would be absolutely correct and yet still not completely correct! How can this be?
Remember that the trig functions are periodic – they repeat themselves over and over again. We know already, for example, that $\scriptsize \sin {{150}^\circ}=\sin ({{180}^\circ}-{{30}^\circ})=\sin {{30}^\circ}=\displaystyle \frac{1}{2}$ as well. But the function repeats itself again every $\scriptsize {{360}^\circ}$. Therefore, there are infinitely many solutions to $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$.
If you have an internet connection, visit this simulation to see all the solutions to $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$ for the interval $\scriptsize -{{720}^\circ}\le \theta \le {{720}^\circ}$.
You will find that there are eight in total.
All of these solutions are shown in Figure 1.
Therefore, it is not good enough to say that the solution to $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$ is $\scriptsize \theta ={{30}^\circ}$. Nor can we list all the possible solutions. We need a way to give a general solution that can generate all the possible solutions for $\scriptsize \theta \in \mathbb{R}\text{ }$.
## The general solution for $\scriptsize \sin \theta$
Let’s look at the solutions for $\scriptsize \theta$ for $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$ for the interval $\scriptsize -{{720}^\circ}\le \theta \le {{720}^\circ}$ again.
$\scriptsize -{{690}^\circ},-{{570}^\circ},-{{330}^\circ},-{{210}^\circ},{{30}^\circ},{{150}^\circ},{{390}^\circ},{{510}^\circ}$. Can you see any pattern? Look back at Figure 1 to help you.
What if you arrange the solutions in two columns like this:
\scriptsize \begin{align*} -{{690}^\circ}&&-{{570}^\circ}\\ -{{330}^\circ}&&-{{210}^\circ}\\ {{30}^\circ}&&{{150}^\circ}\\ {{390}^\circ}&&{{510}^\circ} \end{align*}
Can you see the pattern now? Hopefully you can see that each solution in each column is separated from the next by $\scriptsize {{360}^\circ}$, the period of the sine function. Look at the first column again. If we use $\scriptsize {{30}^\circ}$ (the answer that a calculator would give us) as the starting angle or the reference angle, look at how we can generate all the other solutions.
\scriptsize \begin{align*}&{{30}^\circ}-2\times {{360}^\circ}-{{690}^\circ}\\&{{30}^\circ}-1\times {{360}^\circ}=-{{330}^\circ}\\&{{30}^\circ}\\&{{30}^\circ}+1\times {{360}^\circ}={{390}^\circ}\end{align*}
We can do the same for the second column of solutions using $\scriptsize {{180}^\circ}-{{30}^\circ}={{150}^\circ}$ as the starting angle.
\scriptsize \begin{align*}&{{150}^\circ}-2\times {{360}^\circ}=-{{570}^\circ}\\&{{150}^\circ}-1\times {{360}^\circ}=-{{210}^\circ}\\&{{150}^\circ}\\&{{150}^\circ}+1\times {{360}^\circ}={{510}^\circ}\end{align*}
So, with just the two starting angles $\scriptsize {{30}^\circ}$ and $\scriptsize {{150}^\circ}$, we can generate every other solution to $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$ by adding integer ($\scriptsize \mathbb{Z}$) multiples of the period of the sine function i.e. $\scriptsize {{360}^\circ}$.
We write the full or general solution to $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$ as follows:
$\scriptsize \theta ={{30}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{150}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
### Take note!
We call the answer we get for $\scriptsize \theta$, (usually on a calculator) and from which we generate all the other possible solutions, the reference angle.
### Example 4.1
Determine the general solution for $\scriptsize \sin \theta =0.35$.
Solution
Step 1: Use a calculator to determine the reference angle
\scriptsize \begin{align*}\sin \theta &=0.35\\\therefore \theta &={{20.5}^\circ}\end{align*}
Note: Unless told otherwise, we usually round the reference angle to one decimal place.
Step 2: Use the CAST diagram to determine any other possible solutions
$\scriptsize \sin \theta =0.35$. In other words, sine is positive. Sine is positive in the first and second quadrants. We already have the first quadrant solution (the reference angle of $\scriptsize \theta ={{20.5}^\circ}$). We need to find the second quadrant solution. We know that $\scriptsize \sin ({{180}^\circ}-\theta )=\sin \theta$. Therefore, the second quadrant solution is $\scriptsize {{180}^\circ}-{{20.5}^\circ}={{159.5}^\circ}$.
Step 3: Generate the general solution
$\scriptsize \theta ={{20.5}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{159.5}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
Step 4: Check your general solution
It is always a good idea to check that your final solutions satisfy the original equation. Choose a random value for $\scriptsize k$.
$\scriptsize k=-2$:
\scriptsize \begin{align*}&\theta ={{20.5}^\circ}-2\times {{360}^\circ}\text{ or }\theta ={{159.5}^\circ}-2\times {{360}^\circ}\\&\therefore \theta =-{{699.5}^\circ}\text{ or }\theta =-{{560.5}^\circ}\end{align*}
$\scriptsize \sin (-{{699.5}^\circ})=0.35$
$\scriptsize \sin (-{{560.5}^\circ})=0.35$
Our general solution is correct.
### Example 4.2
Solve for $\scriptsize \theta$ if $\scriptsize 4\sin \theta =-3$ for the interval $\scriptsize [-{{180}^\circ},{{180}^\circ}]$.
Solution
\scriptsize \begin{align*}4\sin \theta =-3&\\\therefore \sin \theta =-\displaystyle \frac{3}{4}&=-0.75\end{align*}
Step 1: Use a calculator to determine the reference angle
When we have a negative ratio, we ignore the sign when finding the reference angle.
\scriptsize \begin{align*}&\sin \theta =0.75\\&\therefore \theta ={{48.6}^\circ}\end{align*}
Step 2: Use the CAST diagram to determine any other possible solutions
Our equation is $\scriptsize \sin \theta =-0.75$. In other words, sine is negative. Sine is negative in the third and fourth quadrants. Our reference angle is $\scriptsize \theta ={{48.6}^\circ}$.
Third quadrant: $\scriptsize \sin ({{180}^\circ}+\theta )=-\sin \theta$. Therefore, the third quadrant solution is $\scriptsize {{180}^\circ}+{{48.6}^\circ}={{228.6}^\circ}$.
Fourth quadrant: $\scriptsize \sin ({{360}^\circ}-\theta )=-\sin \theta$. Therefore, the fourth quadrant solution is $\scriptsize {{360}^\circ}-{{48.6}^\circ}={{311.4}^\circ}$.
Step 3: Generate the general solution
$\scriptsize \theta ={{228.6}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{311.4}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
Step 4: Generate the solution for the specific range
In this instance, we were not asked for the general solution but only for the solutions in $\scriptsize [-{{180}^\circ},{{180}^\circ}]$. We need to use our general solution to generate solutions that fall within this range by adding or subtracting multiples of $\scriptsize {{360}^\circ}$.
$\scriptsize \theta ={{228.6}^\circ}-1\times {{360}^\circ}=-{{131.4}^\circ}\text{or }\theta ={{311.4}^\circ}-1\times {{360}^\circ}=-{{48.6}^\circ}$
Step 5: Check your general solution
It is always a good idea to check that your final solutions satisfy the original equation.
$\scriptsize \sin (-{{131.4}^\circ})=-0.75$
$\scriptsize \sin (-{{48.6}^\circ})=-0.75$
### Exercise 4.1
1. Determine the general solution for $\scriptsize 2\sin \theta =\sqrt{3}$.
2. Solve for $\scriptsize x$ if $\scriptsize 5\sin x=-2$ and $\scriptsize {{0}^\circ}\le x\le {{360}^\circ}$.
The full solutions are at the end of the unit.
## The general solution for $\scriptsize \cos \theta$ and $\scriptsize \tan \theta$
The general solution for $\scriptsize \cos \theta =y$ is basically the same as that for sine. Remember that cosine also has a period of $\scriptsize {{360}^\circ}$. The only difference occurs when you find the quadrants in which cosine is either positive or negative.
The general solution for $\scriptsize \tan \theta =y$ is basically the same as that for sine except that the period of tangent is $\scriptsize {{180}^\circ}$. The other difference occurs when you find the quadrants in which tangent is either positive or negative.
### Example 4.3
Determine the general solution for $\scriptsize 3\cos x=\sin {{14}^\circ}$.
Solution
\scriptsize \begin{align*}3\cos x&=\sin {{14}^\circ}\\\therefore \cos x&=\displaystyle \frac{{\sin {{{14}}^\circ}}}{3}\end{align*}
Step 1: Use a calculator to determine the reference angle
\scriptsize \begin{align*}\cos x & =\displaystyle \frac{{\sin {{{14}}^\circ}}}{3}\\\therefore x & ={{85.4}^\circ}\end{align*}
Step 2: Use the CAST diagram to determine any other possible solutions
Our equation is $\scriptsize \cos x=\displaystyle \frac{{\sin {{{14}}^\circ}}}{3}$. Because $\scriptsize \sin {{14}^\circ} \gt 0$ we know that $\scriptsize \cos x \gt 0$. Cosine is positive in the first and fourth quadrants. Our reference angle is $\scriptsize \theta ={{85.4}^\circ}$.
First quadrant: $\scriptsize \theta ={{85.4}^\circ}$
Fourth quadrant: $\scriptsize \cos ({{360}^\circ}-\theta )=\cos \theta$
$\scriptsize {{360}^\circ}-{{85.4}^\circ}={{274.6}^\circ}$.
Step 3: Generate the general solution
$\scriptsize \theta ={{85.4}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{274.6}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
Step 4: Check your general solution
$\scriptsize k=2$:
$\scriptsize \theta ={{85.4}^\circ}+{{2.360}^\circ}={{805.4}^\circ}\text{ or }\theta ={{274.6}^\circ}+{{2.360}^\circ}={{994.6}^\circ}$
$\scriptsize \cos {{805.4}^\circ}=\displaystyle \frac{{\sin {{{14}}^\circ}}}{3}$
$\scriptsize \sin {{994.6}^\circ}=\displaystyle \frac{{\sin {{{14}}^\circ}}}{3}$
### Example 4.4
Solve for $\scriptsize \alpha$ if $\scriptsize \tan \alpha =-5.2$ and $\scriptsize {{0}^\circ}\le \alpha \le {{360}^\circ}$.
Solution
Step 1: Use a calculator to determine the reference angle
\scriptsize \begin{align*}\tan \alpha&=5.2&&\text{Remember, when finding the reference angle we always use the positive ratio}\\\therefore \alpha&={{79.1}^\circ}\end{align*}
Step 2: Use the CAST diagram to determine any other possible solutions
Our equation is $\scriptsize \tan \alpha =-5.2$. $\scriptsize \tan \alpha \lt 0$.
Second quadrant: $\scriptsize \tan ({{180}^\circ}-\theta )=-\tan \theta$. Therefore, the second quadrant solution is $\scriptsize {{180}^\circ}-{{79.1}^\circ}={{100.9}^\circ}$.
Fourth quadrant: $\scriptsize \tan ({{360}^\circ}-\theta )=-\tan \theta$. Therefore, the fourth quadrant solution is $\scriptsize {{360}^\circ}-{{79.1}^\circ}={{280.9}^\circ}$.
Step 3: Generate the general solution
$\scriptsize \alpha ={{100.9}^\circ}+k{{.180}^\circ}\text{ or }\alpha ={{280.9}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$
Note: Because the period of tangent is $\scriptsize {{180}^\circ}$, the general solution includes integer multiples of $\scriptsize {{180}^\circ}$. Therefore, it is also possible to generate the general solution from only one angle. $\scriptsize {{280.9}^\circ}={{100.9}^\circ}+1\times {{180}^\circ}$.
The simplest general solution is thus $\scriptsize \alpha ={{100.9}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$
Step 4: Generate the solution for the specific range
In this instance, we were not asked for the general solution but only for the solutions in$\scriptsize {{0}^\circ}\le \alpha \le {{360}^\circ}$. We need to use our general solution to generate solutions that fall within this range by adding or subtracting multiples of $\scriptsize {{180}^\circ}$.
$\scriptsize \alpha ={{100.9}^\circ}+0\times {{180}^\circ}={{100.9}^\circ}\text{or }\alpha ={{100.9}^\circ}+1\times {{180}^\circ}={{280.9}^\circ}$
Step 5: Check your general solutions
$\scriptsize \tan {{100.9}^\circ}=-5.2$
$\scriptsize \tan {{280.9}^\circ}=-5.2$
### Exercise 4.2
1. Determine the general solution for $\scriptsize 5-3\tan \theta =0$.
2. Solve for $\scriptsize x$ if $\scriptsize 3\cos x-1=-2$ for the interval $\scriptsize [{{0}^\circ},{{360}^\circ}]$.
The full solutions are at the end of the unit.
## Solve more complicated trig equations
In level 3 subject outcome 2.1 units 7, 9 and 11, we learnt about the functions $\scriptsize y=a\sin kx$, $\scriptsize y=a\cos kx$ and $\scriptsize y=a\tan kx$. We saw, for example, that the period of the function $\scriptsize y=\sin 2x$ was no longer $\scriptsize {{360}^\circ}$. The period was actually $\scriptsize \displaystyle \frac{{{{{360}}^\circ}}}{2}={{180}^\circ}$.
### Note
If you have an internet connection, visit this interactive simulation.
Here you will see the solutions to the equation $\scriptsize \sin kx=c$ where you can control the values of $\scriptsize k$ and $\scriptsize c$.
How many solutions are there to $\scriptsize \sin x=0.5$? How many solutions are there to $\scriptsize \sin 2x=0.5$?
Figure 2 shows the solutions to $\scriptsize \sin x=0.5$. As you would expect, these are $\scriptsize x={{30}^\circ}$ and $\scriptsize x={{150}^\circ}$ for the interval $\scriptsize [{{0}^\circ},{{360}^\circ}]$.
Now look at the solutions to $\scriptsize \sin 2x=0.5$ in Figure 3. There are three important things to notice.
1. The period of the graph has halved and the graph repeats twice in the interval $\scriptsize [{{0}^\circ},{{360}^\circ}]$. Therefore, there are double the number of solutions in the same interval.
2. The solution $\scriptsize x={{30}^\circ}$ from before is now $\scriptsize x=\displaystyle \frac{{{{{30}}^\circ}}}{2}={{15}^\circ}$. The solution $\scriptsize x={{150}^\circ}$ from before is now $\scriptsize x=\displaystyle \frac{{{{{150}}^\circ}}}{2}={{75}^\circ}$.
3. Solutions to $\scriptsize \sin x=0.5$ are separated by integer multiples of $\scriptsize {{360}^\circ}$. Solutions to $\scriptsize \sin 2x=0.5$ are seperated by interger multiples of $\scriptsize \displaystyle \frac{{{{{360}}^\circ}}}{2}={{180}^\circ}$.
Therefore, when we solve equations such as $\scriptsize \sin 2\theta =\displaystyle \frac{1}{2}$, we need to modify the general solution to take into account the period of the function is now $\scriptsize {{180}^\circ}$.
So, if the general solution to $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$ is $\scriptsize \theta ={{30}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{150}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$, then the general solution to $\scriptsize \sin 2\theta =\displaystyle \frac{1}{2}$ is $\scriptsize \theta ={{15}^\circ}+k{{.180}^\circ}\text{ or }\theta ={{75}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\text{ }$.
The same is true for cosine and tangent equations.
### Example 4.5
Determine the general solution for $\scriptsize \theta$ if $\scriptsize 5\sin 2\theta =3$.
Solution
\scriptsize \begin{align*}5\sin 2\theta & =3\\\therefore \sin 2\theta & =\displaystyle \frac{3}{5}\end{align*}
Step 1: Use a calculator to determine the reference angle
\scriptsize \begin{align*}\sin 2\theta & =\displaystyle \frac{3}{5}\\\therefore 2\theta & ={{36.9}^\circ}\end{align*}
Note: We keep working with the reference angle as $\scriptsize 2\theta$ until we generate the general solution.
Step 2: Use the CAST diagram to determine any other possible solutions
Our equation is $\scriptsize \sin 2\theta =\displaystyle \frac{3}{5}$. $\scriptsize \sin 2\theta \gt 0$. Sine is positive in the first and second quadrants.
First quadrant: $\scriptsize 2\theta ={{36.9}^\circ}$
Second quadrant: $\scriptsize \sin ({{180}^\circ}-\theta )=\sin \theta$
$\scriptsize 2\theta ={{180}^\circ}-{{36.9}^\circ}={{143.2}^\circ}$.
Step 3: Generate the general solution
\scriptsize \begin{align*}2\theta & ={{36.9}^\circ}+k{{.360}^\circ}\text{ or 2}\theta ={{143.2}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }\\\therefore \theta & ={{18.45}^\circ}+k{{.180}^\circ}\text{ or }\theta ={{71.6}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\text{ }\end{align*}
Step 4: Check your general solution
$\scriptsize k=2$:
$\scriptsize \therefore \theta ={{18.45}^\circ}+2\times {{180}^\circ}={{378.45}^\circ}\text{ or }\theta ={{71.6}^\circ}+2\times {{180}^\circ}={{431.6}^\circ}$
$\scriptsize \sin (2\times {{378.45}^\circ})=0.6$
$\scriptsize \sin (2\times {{431.6}^\circ})=0.6$
### Example 4.6
Solve for $\scriptsize \theta$ if $\scriptsize \tan (3\theta -{{38}^\circ})=-5$ for the interval $\scriptsize [-{{180}^\circ},{{180}^\circ}]$.
Solution
Step 1: Use a calculator to determine the reference angle
\scriptsize \begin{align*}\tan (3\theta -{{38}^\circ}) & =5\quad \text{Remember, when finding the reference angle, use the positive ratio}\\\therefore 3\theta -{{38}^\circ} & ={{78.7}^\circ}\quad \text{Keep your reference angle in terms of }3\theta -{{38}^\circ}\end{align*}
Note: We keep working with the reference angle as $\scriptsize 3\theta -{{38}^\circ}$ until we generate the general solution.
Step 2: Use the CAST diagram to determine any other possible solutions
Our equation is $\scriptsize \tan (3\theta -{{38}^\circ})=-5$. $\scriptsize \tan (3\theta -{{38}^\circ}) \lt 0$. Tangent is negative in the second and fourth quadrants.
Second quadrant: $\scriptsize \tan ({{180}^\circ}-\theta )=-\tan \theta$
$\scriptsize {{180}^\circ}-{{78.7}^\circ}={{101.3}^\circ}$
Step 3: Generate the general solution
\scriptsize \begin{align*}3\theta -{{38}^\circ} & ={{101.3}^\circ}+k{{.180}^\circ},\text{ }k\in \mathbb{Z}\quad \text{Add }{{38}^\circ}\text{to the angle}\\\therefore 3\theta & ={{139.3}^\circ}+k{{.180}^\circ},\text{ }k\in \mathbb{Z}\quad \text{Divide both the angle and the period by }3\\\therefore \theta &={{46.37}^\circ}+k{{.60}^\circ},\text{ }k\in \mathbb{Z}\end{align*}
Step 4: Generate the solution for the specific range
The interval is $\scriptsize [-{{180}^\circ},{{180}^\circ}]$.
$\scriptsize \theta ={{46.37}^\circ}-3\times {{60}^\circ}=-{{133.63}^\circ}$ or
$\scriptsize \theta ={{46.37}^\circ}-2\times {{60}^\circ}=-{{73.63}^\circ}$ or
$\scriptsize \theta ={{46.37}^\circ}-1\times {{60}^\circ}=-{{13.63}^\circ}$ or
$\scriptsize \theta ={{46.37}^\circ}+0\times {{60}^\circ}={{46.37}^\circ}$ or
$\scriptsize \theta ={{46.37}^\circ}+1\times {{60}^\circ}={{106.37}^\circ}$ or
$\scriptsize \theta ={{46.37}^\circ}+2\times {{60}^\circ}={{166.37}^\circ}$
### Exercise 4.3
1. Determine the general solution for $\scriptsize \cos \left( {\displaystyle \frac{1}{2}x} \right)=0.54$.
2. Solve for $\scriptsize x$ if $\scriptsize 2\sin \left( {\displaystyle \frac{{3x}}{2}+{{{10}}^\circ}} \right) =-1$ for the interval $\scriptsize [-{{180}^\circ},{{180}^\circ}]$.
The full solutions are at the end of the unit.
## Summary
In this unit you have learnt the following:
• How to find the general solution for trig equations involving sine, cosine and tangent and simple angles.
• How to find the general solution for trig equations involving sine, cosine and tangent and more complex angles.
• How to find the solutions to trig equations involving sine, cosine and tangent for specific intervals.
# Unit 4: Assessment
#### Suggested time to complete: 45 minutes
All the questions below have been adapted from NC(V) examination questions.
1. Determine the general solution for $\scriptsize \sin \theta +1=0.3$.
2. Calculate the value/s of $\scriptsize \theta$ if $\scriptsize 2\tan \theta =0.279$ where $\scriptsize \theta \in [{{0}^\circ},{{360}^\circ}]$.
3. Calculate the value/s of $\scriptsize \theta$ if $\scriptsize 5\cos 2\theta =-2.7$ where $\scriptsize \theta \in [{{0}^\circ},{{360}^\circ}]$.
4. Calculate the value/s of $\scriptsize \theta$ if $\scriptsize \tan (3\theta -{{48}^\circ})=3.2$ where $\scriptsize \theta \in [{{0}^\circ},{{180}^\circ}]$.
5. Determine the value/s of $\scriptsize \theta$ in the following equation without a calculator if $\scriptsize {{0}^\circ}\le \theta \le {{360}^\circ}$.
$\scriptsize \sin \theta +\sqrt{2}=-\sin \theta$
The full solutions are at the end of the unit.
# Unit 4: Solutions
### Exercise 4.1
1. .
\scriptsize \begin{align*}2\sin \theta & =\sqrt{3}\\\therefore \sin \theta & =\displaystyle \frac{{\sqrt{3}}}{2}\end{align*}
Ref angle: $\scriptsize \theta ={{60}^\circ}$ (you should have recognised this as a special angle ratio)
$\scriptsize \sin \theta \gt 0$ in first and second quadrant. Therefore, $\scriptsize \theta ={{60}^\circ}\text{ or }\theta ={{180}^\circ}-{{60}^\circ}={{120}^\circ}$.
General solution: $\scriptsize \theta ={{60}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{120}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
2. .
\scriptsize \begin{align*}5\sin x & =-2\\\therefore \sin x & =-\displaystyle \frac{2}{5}\end{align*}
Ref angle: $\scriptsize x={{23.6}^\circ}$
$\scriptsize \sin x \lt 0$ in third and fourth quadrants.
$\scriptsize \theta ={{180}^\circ}+23.6={{203.6}^\circ}\text{ or }\theta ={{360}^\circ}-{{23.6}^\circ}={{336.4}^\circ}$.
General solution: $\scriptsize \theta ={{203.6}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{336.4}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
Specific solution for $\scriptsize {{0}^\circ}\le x\le {{360}^\circ}$: $\scriptsize \theta ={{203.6}^\circ}\text{ or }\theta ={{336.4}^\circ}$
Back to Exercise 4.1
### Exercise 4.2
1. .
\scriptsize \begin{align*}5-3\tan \theta & =0\\\therefore \tan \theta & =\displaystyle \frac{5}{3}\end{align*}
Ref angle: $\scriptsize \theta ={{59}^\circ}$
$\scriptsize \tan \theta \gt 0$ in the first and third quadrants.
$\scriptsize \theta ={{59}^\circ}\text{ or }\theta ={{180}^\circ}+{{59}^\circ}={{239}^\circ}$
General solution: $\scriptsize \theta ={{59}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\text{ }$
2. .
\scriptsize \begin{align*}3\cos x-1 & =-2\\\therefore 3\cos x & =-1\\\therefore \cos x & =-\displaystyle \frac{1}{3}\end{align*}
Ref angle: $\scriptsize x={{70.5}^\circ}$
$\scriptsize \cos x \lt 0$ in the second and third quadrants.
$\scriptsize x={{180}^\circ}-{{70.5}^\circ}={{109.5}^\circ}\text{ or }x={{180}^\circ}+{{70.5}^\circ}={{250.5}^\circ}$
General solution: $\scriptsize x={{109.5}^\circ}+k{{.360}^\circ}\text{ or }x={{250.5}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
Specific solution for $\scriptsize [{{0}^\circ},{{360}^\circ}]$: $\scriptsize x={{109.5}^\circ}\text{ or }x={{250.5}^\circ}$
Back to Exercise 4.2
### Exercise 4.3
1. .
\scriptsize \begin{align*}\cos \left( {\displaystyle \frac{1}{2}x} \right) & =0.54\\\therefore \displaystyle \frac{1}{2}x & ={{57.3}^\circ}\end{align*}
Ref angle: $\scriptsize \displaystyle \frac{1}{2}x={{57.3}^\circ}$
$\scriptsize \cos \left( {\displaystyle \frac{1}{2}x} \right) \gt 0$ in the first and fourth quadrants.
$\scriptsize \displaystyle \frac{1}{2}x={{57.3}^\circ}\text{ or }\displaystyle \frac{1}{2}x={{360}^\circ}-{{57.3}^\circ}={{302.7}^\circ}$
General solution:
\scriptsize \begin{align*}\displaystyle \frac{1}{2}x & ={{57.3}^\circ}+k{{.360}^\circ}\text{ or }\displaystyle \frac{1}{2}x={{302.7}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }\\\therefore x & ={{114.6}^\circ}+k{{.720}^\circ}\text{ or }x={{605.4}^\circ}+k{{.720}^\circ},k\in \mathbb{Z}\text{ }\end{align*}
2. .
\scriptsize \begin{align*}2\sin \left( {\displaystyle \frac{{3x}}{2}+{{{10}}^\circ}} \right) & =-1\\\therefore \sin \left( {\displaystyle \frac{{3x}}{2}+{{{10}}^\circ}} \right) &=-\displaystyle \frac{1}{2}\end{align*}
Ref angle: $\scriptsize \left( {\displaystyle \frac{{3x}}{2}+{{{10}}^\circ}} \right) ={{30}^\circ}$
$\scriptsize \sin \left( {\displaystyle \frac{{3x}}{2}+{{{10}}^\circ}} \right) \lt 0$ in the third and fourth quadrants.
$\scriptsize \displaystyle \frac{{3x}}{2}+{{10}^\circ}={{180}^\circ}+{{30}^\circ}={{210}^\circ}\text{ or }\displaystyle \frac{{3x}}{2}+{{10}^\circ}={{360}^\circ}-{{30}^\circ}={{330}^\circ}$
General solution:
\scriptsize \begin{align*}\displaystyle \frac{{3x}}{2}+{{10}^\circ} & ={{210}^\circ}+k{{.360}^\circ}\text{ or }\displaystyle \frac{{3x}}{2}+{{10}^\circ}={{330}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }\\\therefore \displaystyle \frac{{3x}}{2} & ={{200}^\circ}+k{{.360}^\circ}\text{ or }\displaystyle \frac{{3x}}{2}={{320}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }\\\therefore x & ={{133.33}^\circ}+k{{.240}^\circ}\text{ or }\displaystyle \frac{{3x}}{2}={{213.33}^\circ}+k{{.240}^\circ},k\in \mathbb{Z}\text{ }\end{align*}
Specific solution for $\scriptsize [-{{180}^\circ},{{180}^\circ}]$:
$\scriptsize x ={{133.33}^\circ}-1\times {{240}^\circ}=-{{106.67}^\circ}$ or
$\scriptsize x ={{133.33}^\circ}+0\times {{240}^\circ}={{133.33}^\circ}$ or
$\scriptsize x ={{213.33}^\circ}-1\times {{240}^\circ}=-{{26.67}^\circ}$
Back to Exercise 4.3
### Unit 4: Assessment
1. .
\scriptsize \begin{align*}\sin \theta +1 & =0.3\\\therefore \sin \theta & =-0.7\end{align*}
Ref angle: $\scriptsize \theta ={{44.4}^\circ}$
$\scriptsize \sin \theta \lt 0$: $\scriptsize \theta ={{180}^\circ}+{{44.4}^\circ}={{224.4}^\circ}\text{ or }\theta ={{360}^\circ}-{{44.4}^\circ}={{315.6}^\circ}$
General solution: $\scriptsize \theta ={{224.4}^\circ}+k\text{.36}{{\text{0}}^\circ}\text{ or }\theta ={{315.6}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
2. .
\scriptsize \begin{align*}2\tan\theta&=0.279\\\therefore \tan\theta&=0.1395\end{align*}
Ref angle: $\scriptsize \theta ={{8.0}^\circ}$
$\scriptsize \tan \theta \gt 0:$ $\scriptsize \theta ={{8.0}^\circ}\text{ or }\theta ={{180}^\circ}+{{8.0}^\circ}={{188}^\circ}$
General solution: $\scriptsize \theta ={{8.0}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\text{ }$
Specific solution for $\scriptsize \theta \in [{{0}^\circ},{{360}^\circ}]$:
$\scriptsize \theta ={{8}^\circ}+0\times {{180}^\circ}={{8}^\circ}$ or
$\scriptsize \theta ={{8.0}^\circ}+1\times {{180}^\circ}={{188.0}^\circ}$
3. .
\scriptsize \begin{align*}5\cos2\theta&=-2.7\\\therefore \cos2\theta&=-0.54\end{align*}
Ref angle: $\scriptsize 2\theta ={{57.3}^\circ}$
$\scriptsize \cos 2\theta \lt 0$: $\scriptsize 2\theta ={{180}^\circ}-{{57.3}^\circ}={{122.7}^\circ}\text{ or }2\theta ={{180}^\circ}+{{57.3}^\circ}={{237.3}^\circ}$
General solution:
\scriptsize \begin{align*}2\theta ={{122.7}^\circ}+k{{.360}^\circ}\text{ or }2\theta ={{237.3}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }\\\therefore \theta ={{61.35}^\circ}+k{{.180}^\circ}\text{ or }\theta ={{118.65}^\circ}+{{180}^\circ},k\in \mathbb{Z}\text{ }\end{align*}
Specific solution for $\scriptsize \theta \in [{{0}^\circ},{{360}^\circ}]$:
$\scriptsize \theta ={{61.35}^\circ}+0\times {{180}^\circ}={{61.35}^\circ}$ or
$\scriptsize \theta ={{61.35}^\circ}+1\times {{180}^\circ}={{241.35}^\circ}$ or
$\scriptsize \theta ={{118.65}^\circ}+0\times {{180}^\circ}={{118.65}^\circ}$ or
$\scriptsize \theta ={{118.65}^\circ}+1\times {{180}^\circ}={{298.65}^\circ}$
4. $\scriptsize \tan (3\theta -{{48}^\circ})=3.2$
Ref angle: $\scriptsize 3\theta -{{48}^\circ}={{72.6}^\circ}$
$\scriptsize 3\theta -{{48}^\circ}={{72.6}^\circ}\text{ or }3\theta -{{48}^\circ}={{180}^\circ}+{{72.6}^\circ}={{252.6}^\circ}$:
General solution:
\scriptsize \begin{align*}3\theta -{{48}^\circ} & ={{72.6}^\circ}+k{{.180}^\circ}\text{ or }3\theta -{{48}^\circ}={{252.6}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\text{ }\\\therefore 3\theta & ={{120.6}^\circ}+k{{.180}^\circ}\text{ or }3\theta ={{300.6}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\text{ }\\\therefore \theta & ={{40.2}^\circ}+k{{.60}^\circ}\text{ or }\theta ={{100.2}^\circ}+k{{.60}^\circ},k\in \mathbb{Z}\text{ }\end{align*}
But $\scriptsize {{40.2}^\circ}+1\times {{60}^\circ}={{100.2}^\circ}$
Therefore, simplest general solution is $\scriptsize \theta ={{40.2}^\circ}+k{{.60}^\circ},k\in \mathbb{Z}\text{ }$
Specific solution for $\scriptsize \theta \in [{{0}^\circ},{{180}^\circ}]$:
$\scriptsize \theta ={{40.2}^\circ}+0\times {{60}^\circ}={{40.2}^\circ}$ or
$\scriptsize \theta ={{40.2}^\circ}+1\times {{60}^\circ}={{100.2}^\circ}$ or
$\scriptsize \theta ={{40.2}^\circ}+2\times {{60}^\circ}={{160.2}^\circ}$
5. .
\scriptsize \begin{align*}\sin \theta +\sqrt{2} & =-\sin \theta \\\therefore 2\sin \theta & =-\sqrt{2}\\\therefore \sin \theta & =-\displaystyle \frac{{\sqrt{2}}}{2}=-\displaystyle \frac{{\sqrt{2}}}{{\sqrt{2}\times \sqrt{2}}}=-\displaystyle \frac{1}{{\sqrt{2}}}\end{align*}
Ref angle: $\scriptsize \theta ={{45}^\circ}$
$\scriptsize \sin \theta \lt 0$: $\scriptsize \theta ={{180}^\circ}+{{45}^\circ}={{225}^\circ}\text{ or }\theta ={{360}^\circ}-{{45}^\circ}={{315}^\circ}$
General solution: $\scriptsize \theta ={{225}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{315}^\circ}+k{{.360}^\circ}$
Specific solution for $\scriptsize {{0}^\circ}\le \theta \le {{360}^\circ}$:
$\scriptsize \theta ={{225}^\circ}+{{0.360}^\circ}={{225}^\circ}$ or
$\scriptsize \theta ={{315}^\circ}+{{0.360}^\circ}={{315}^\circ}$
Back to Unit 4: Assessment | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9734927415847778, "perplexity": 1453.3177207882998}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572215.27/warc/CC-MAIN-20220815235954-20220816025954-00483.warc.gz"} |
https://www.neetprep.com/question/25821-pressure-greater-atmospheric-pressure-applied-solution-itswater-potential-Increases-Decreases-Remains-Becomes-zero/53-Botany/630-Transport-Plants | If a pressure greater than atmospheric pressure is applied to a solution, its water potential:
1. Increases
2. Decreases
3. Remains same
4. Becomes zero | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9191609621047974, "perplexity": 3381.7405940779504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506121.24/warc/CC-MAIN-20200401192839-20200401222839-00455.warc.gz"} |
https://www.physicsforums.com/threads/textbook-for-learning-pdes-applied-to-physics.848409/ | # Calculus Textbook for learning PDE's applied to physics?
1. Dec 15, 2015
### Vannay
Took a graduate level mathematical methods for physics course and came out the other side feeling a bit lacking in solving stuff like the heat equation, wave equation, laplaces equation and so on. I'm still unsure of the Green's Function method for them, how to look at them with Fourier series, and so on.
The textbook we used was Goldbart and Stone which often would introduce the Green Function method for a given PDE by saying "So here's the solution" and then showing it is. That doesn't help when I'm taking a test that says solve this diffusion PDE given these certain boundary conditions.
2. Dec 15, 2015
### Dr Transport
Last edited by a moderator: May 7, 2017
3. Apr 19, 2016
### UnivMathProdigy
4. Apr 20, 2016
### deskswirl
Draft saved Draft deleted
Similar Discussions: Textbook for learning PDE's applied to physics? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612613081932068, "perplexity": 1828.498363848379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.70/warc/CC-MAIN-20170823000718-20170823020718-00540.warc.gz"} |
https://academia.stackexchange.com/questions/86701/where-to-define-a-symbol-that-also-has-an-abbreviation | # Where to define a symbol that also has an abbreviation
I have a symbol μ, which is the linear attenuation coefficient for a material (X-ray tomography). I've already defined that symbol in my document's list of symbols, but the problem is that I also refer to it by its abbreviation, LAC, in my text (LAC is well known in the field, hence I prefer keeping it, only using μ in equations).
Should I add that abbreviation to the list of abbreviations, resulting in their being two entries for linear attenuation coefficient, one in the list of symbols and one in the list of abbreviations, or should I just leave it in the list of symbols?
If you actually have significant explanation beyond just expanding the abbreviation, I recommend putting that in the list of abbreviations and using a cross-reference from the list of symbols. Just like an index can have "attenuation coefficient, linear -- see linear attenuation coefficient", your lists of abbreviations and symbols could have, respectively,
LAC, linear attenuation coefficient, symbol \mu. The linear attenuation coefficient applies to materials where absorbance dominates scattering as an attenuation mechanism, and total transmission energy follows Lambert's Law.
If there's no additional explanation, then both entries can be complete and short and a hyperlink would be pointless.
LAC, linear attenuation coefficient, symbol \mu
\mu, linear attenuation coefficient, abbreviated LAC
• My list of symbols actually has more explanations in it than the list of abbreviations, so I'd prefer to do the reverse of your first suggestion - put the long description in the list of symbols with a link to the abbreviation. Mar 20 '17 at 10:55
• That's perfectly valid as well. Mar 20 '17 at 14:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8179872632026672, "perplexity": 1089.7936203013276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358078.2/warc/CC-MAIN-20211127013935-20211127043935-00153.warc.gz"} |
http://mathhelpforum.com/calculus/92748-chain-rule.html | # Math Help - chain rule
1. ## chain rule
Is the derivative of $q(x)=x(2x+3)^{3/2}$
$q'(x)=(5x+3)\sqrt{2x+3}$ ?
In case you guys haven't noticed yet, I'm checking my homework. So, if the answer is wrong, just tell me that it's wrong, or give me a hint. I want to do the actual problems by myself.
2. looks good!
3. Originally Posted by VonNemo19
Is the derivative of $q(x)=x(2x+3)^{3/2}$
$q'(x)=(5x+3)\sqrt{2x+3}$ ?
In case you guys haven't noticed yet, I'm checking my homework. So, if the answer is wrong, just tell me that it's wrong, or give me a hint. I want to do the actual problems by myself.
Yes. Factorise the output from here to see that: differentiate x (2x + 3)^(3/2) - Wolfram|Alpha
4. Originally Posted by mr fantastic
Yes. Factorise the output from here to see that: differentiate x (2x + 3)^(3/2) - Wolfram|Alpha
Huh... You learn somthing new every day. Thanks. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509431481361389, "perplexity": 1060.0011036575847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276537.37/warc/CC-MAIN-20160524002116-00132-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/torque-on-an-anemometer.56432/ | # Torque on an anemometer
1. Dec 12, 2004
### ponjavic
Is it possible to calculate the torque on an anemometer, of radius x with cupradius y, depending on the wind speed?
If not is there a similar horizontally rotating device with which you can calculate the torque?
2. Dec 12, 2004
### ceptimus
If the anemometer is turning at a constant speed, and we assume that the bearings it is turning on are frictionless, then the torque is zero. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9811456799507141, "perplexity": 1531.7060877151355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281069.89/warc/CC-MAIN-20170116095121-00133-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://forum.allaboutcircuits.com/threads/mosfet-seemingly-equation-contradiction-or-my-misunderstanding.161929/ | # MOSFET - Seemingly equation contradiction (or my misunderstanding)
#### bobpease4ever
Joined Jul 28, 2019
23
Hello all,
If you remember, most textbooks say that the depletion charge in an N-MOSFET's channel in strong inversion is given by this equation:
Qb = -Sqrt( 2 * q * N_a * Epsilon_Si * 2 * Phi_F), where N_a is the substrate doping, Phi_F is the Fermi potential at the bulk.
The textbooks say that this charge is in the channel exactly when the gate voltage is equal to the threshold voltage (which itself has that charge term in it divided by C_ox). It also says this is the strong inversion charge ( so n-type carriers).
On the other hand, later on when developing the drain characteristics, it says that Q = Cox(Vgs - V_t) (with drain voltage at zero for this example). But if you look at this equation, the charge is exactly ZERO when Vgs = V_t.
How do we solve this apparent contradiction? On the one hand it says that when Vgs is equal to Vt, the charge is given as my first equation. On the other hand the charge is also ZERO when Vgs = Vt for the drain characteristics.
Could it be that even though the channel is inverted at Vgs = Vt, there is still too little charge and so they consider it as ZERO charge? But clearly there isn't zero charge when Vgs = Vt. There should be some charge there, no?
What on earth is going on here?
Thank you so much
Last edited:
#### Dodgydave
Joined Jun 22, 2012
10,591
Homework?
#### bobpease4ever
Joined Jul 28, 2019
23
#### bobpease4ever
Joined Jul 28, 2019
23
So, nobody knows?
#### pcolarusso
Joined Jul 10, 2019
1
I think Qb and Q are different things. Qb is the bulk charge so it is the charge that is there from the doping of the semiconductor. This is the charge that needs to be depleted to create the channel. Q is zero at the threshold voltage because Qb has been depleted so there is no charge there
#### bobpease4ever
Joined Jul 28, 2019
23
I think Qb and Q are different things. Qb is the bulk charge so it is the charge that is there from the doping of the semiconductor. This is the charge that needs to be depleted to create the channel. Q is zero at the threshold voltage because Qb has been depleted so there is no charge there
I have thought about that, but it still does not make sense because the threshold voltage is composed of BOTH the depletion charge voltage, AND the 2Phi_F potential, which IS the potential to invert the channel !!
I think this equation is an extreme simplification since it says that for Vgs < Vt the charge is zero.
But that is so wrong.... I am doing a calculation now to find the correct amount of charge at the channel... Let's see if it is zero or not!
Any any more input to this serious issue?
Last edited:
#### bobpease4ever
Joined Jul 28, 2019
23
I think Qb and Q are different things. Qb is the bulk charge so it is the charge that is there from the doping of the semiconductor. This is the charge that needs to be depleted to create the channel. Q is zero at the threshold voltage because Qb has been depleted so there is no charge there
I just did the calculation, and it says that the concentration of negative channel in the channel (p-channel) is exactly Nb = concentration at the bulk, when the channel reaches strong inversion. Hence there should be Nb of charge in the channel and not zero...
There is also ni^2 / Nb holes in the channel......
Makes sense because ni^2 / Nb * Nb = ni^2.
So there is charge in the channel. It's not zero charge.
#### bobpease4ever
Joined Jul 28, 2019
23
I just did the calculation, and it says that the concentration of negative channel in the channel (p-channel) is exactly Nb = concentration at the bulk, when the channel reaches strong inversion. Hence there should be Nb of charge in the channel and not zero...
There is also ni^2 / Nb holes in the channel......
Makes sense because ni^2 / Nb * Nb = ni^2.
So there is charge in the channel. It's not zero charge.
The formula is that that Qi = -Cox(Vgs - 2Phi_F - Gamma*Sqrt(2Phi_F)),
because we need Cox*Gamma*Sqrt(2PHI_F) in order to create the depletion charge, and the charge across the Cox cap is [Vgs -GammaSqrt(2PHI_F)] - 2Phi_F.
I know that PHI_S is the surface potential with respect to the bulk. But there is no potential unless we apply some at the gate(ignoring parasitics). If Vgs = 0, then the surface potential is zero. If Vgs = -Qi/Cox + Vt is calculated with Vgs = Vt, then Qi=0.
So I must conclude that this Vgs is a gate potential for any charges ABOVE the inversion charges, and NOT for the charges below threshold.
So really this equation must mean that Qi is any charge inverted AFTER strong inversion and not before. I think ........... Eureka!
Anyone could confirm this ? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970998227596283, "perplexity": 1488.6729977420355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499911.86/warc/CC-MAIN-20230201045500-20230201075500-00252.warc.gz"} |
http://en.wikipedia.org/wiki/Bell_test_experiments | # Bell test experiments
Bell test experiments or Bell's inequality experiments are designed to demonstrate the real world existence of certain theoretical consequences of the phenomenon of entanglement in quantum mechanics which could not possibly occur according to a classical picture of the world, characterised by the notion of local realism. Under local realism, correlations between outcomes of different measurements performed on separated physical systems have to satisfy certain constraints, called Bell inequalities. John Bell derived the first inequality of this kind in his paper "On the Einstein-Podolsky-Rosen Paradox".[1] Bell's Theorem states that the predictions of quantum mechanics cannot be reproduced by any local hidden variable theory.
The term "Bell inequality" can mean any one of a number of inequalities satisfied by local hidden variables theories; in practice, in present day experiments, most often the CHSH; earlier the CH74 inequality. All these inequalities, like the original inequality of Bell, by assuming local realism, place restrictions on the statistical results of experiments on sets of particles that have taken part in an interaction and then separated. A Bell test experiment is one designed to test whether or not the real world satisfies local realism.
## Conduct of optical Bell test experiments
In practice most actual experiments have used light, assumed to be emitted in the form of particle-like photons (produced by atomic cascade or spontaneous parametric down conversion), rather than the atoms that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. Such experiments fall into two classes, depending on whether the analysers used have one or two output channels.
### A typical CHSH (two-channel) experiment
Scheme of a "two-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a two-channel polariser whose orientation can be set by the experimenter. Emerging signals from each channel are detected and coincidences counted by the coincidence monitor CM.
The diagram shows a typical optical experiment of the two-channel kind for which Alain Aspect set a precedent in 1982.[2] Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated.
Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S (equation (2) shown below). The settings a, a′, b and b′ are generally in practice chosen to be 0, 45°, 22.5° and 67.5° respectively — the "Bell test angles" — these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality.
For each selected value of a and b, the numbers of coincidences in each category (N++, N--, N+- and N-+) are recorded. The experimental estimate for E(a, b) is then calculated as:
(1) E = (N++ + N--N+-N-+)/(N++ + N-- + N+- + N-+).
Once all four E’s have been estimated, an experimental estimate of the test statistic
(2) S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)
can be found. If S is numerically greater than 2 it has infringed the CHSH inequality. The experiment is declared to have supported the QM prediction and ruled out all local hidden variable theories.
A strong assumption has had to be made, however, to justify use of expression (2). It has been assumed that the sample of detected pairs is representative of the pairs emitted by the source. That this assumption may not be true comprises the fair sampling loophole.
The derivation of the inequality is given in the CHSH Bell test page.
### A typical CH74 (single-channel) experiment
Setup for a "single-channel" Bell test
The source S produces pairs of "photons", sent in opposite directions. Each photon encounters a single channel (e.g. "pile of plates") polariser whose orientation can be set by the experimenter. Emerging signals are detected and coincidences counted by the coincidence monitor CM.
Prior to 1982 all actual Bell tests used "single-channel" polarisers and variations on an inequality designed for this setup. The latter is described in Clauser, Horne, Shimony and Holt's much-cited 1969 article as being the one suitable for practical use.[3] As with the CHSH test, there are four subexperiments in which each polariser takes one of two possible settings, but in addition there are other subexperiments in which one or other polariser or both are absent. Counts are taken as before and used to estimate the test statistic.
(3) S = (N(a, b) − N(a, b′) + N(a′, b) + N(a′, b′) − N(a′, ∞) − N(∞, b)) / N(∞, ∞),
where the symbol ∞ indicates absence of a polariser.
If S exceeds 0 then the experiment is declared to have infringed Bell's inequality and hence to have "refuted local realism". In order to derive (3), CHSH in their 1969 paper had to make an extra assumption, the so-called "fair sampling" assumption. This means that the probability of detection of a given photon, once it has passed the polarizer, is independent of the polarizer setting (including the 'absence' setting). If this assumption were violated, then in principle an LHV model could violate the CHSH inequality.
In a later 1974 article, Clauser and Horne replaced this assumption by a much weaker, "no enhancement" assumption, deriving a modified inequality, see the page on Clauser and Horne's 1974 Bell test.[4]
## Experimental assumptions
In addition to the theoretical assumptions made, there are practical ones. There may, for example, be a number of "accidental coincidences" in addition to those of interest. It is assumed that no bias is introduced by subtracting their estimated number before calculating S, but that this is true is not considered by some to be obvious. There may be synchronisation problems — ambiguity in recognising pairs due to the fact that in practice they will not be detected at exactly the same time.
Nevertheless, despite all these deficiencies of the actual experiments, one striking fact emerges: the results are, to a very good approximation, what quantum mechanics predicts. If imperfect experiments give us such excellent overlap with quantum predictions, most working quantum physicists would agree with John Bell in expecting that, when a perfect Bell test is done, the Bell inequalities will still be violated. This attitude has led to the emergence of a new sub-field of physics which is now known as quantum information theory. One of the main achievements of this new branch of physics is showing that violation of Bell's inequalities leads to the possibility of a secure information transfer, which utilizes the so-called quantum cryptography (involving entangled states of pairs of particles).
## Notable experiments
Over the past thirty or so years, a great number of Bell test experiments have now been conducted. These experiments are subject to assumptions, in particular the ‘no enhancement’ hypothesis of Clauser and Horne (above). The experiments are commonly interpreted to rule out local hidden variable theories, though so far no experiment has been performed which is not subject to either the locality loophole or the detection loophole. An experiment free of the locality loophole is one where for each separate measurement and in each wing of the experiment, a new setting is chosen and the measurement completed before signals could communicate the settings from one wing of the experiment to the other. An experiment free of the detection loophole is one where close to 100% of the successful measurement outcomes in one wing of the experiment are paired with a successful measurement in the other wing. This percentage is called the efficiency of the experiment. Advancements in technology have led to significant improvement in efficiencies, as well as a greater variety of methods to test the Bell Theorem. The challenge is to combine high efficiency with rapid generation of measurement settings and completion of measurements.
Some of the best known:
### Freedman and Clauser, 1972
This was the first actual Bell test, using Freedman's inequality, a variant on the CH74 inequality.[5]
### Aspect, 1981-2
Alain Aspect and his team at Orsay, Paris, conducted three Bell tests using calcium cascade sources. The first and last used the CH74 inequality. The second was the first application of the CHSH inequality. The third (and most famous) was arranged such that the choice between the two settings on each side was made during the flight of the photons (as originally suggested by John Bell).[6][7]
### Tittel and the Geneva group, 1998
The Geneva 1998 Bell test experiments showed that distance did not destroy the "entanglement". Light was sent in fibre optic cables over distances of several kilometers before it was analysed. As with almost all Bell tests since about 1985, a "parametric down-conversion" (PDC) source was used.[8][9]
### Weihs' experiment under "strict Einstein locality" conditions
In 1998 Gregor Weihs and a team at Innsbruck, led by Anton Zeilinger, conducted an ingenious experiment that closed the "locality" loophole, improving on Aspect's of 1982. The choice of detector was made using a quantum process to ensure that it was random. This test violated the CHSH inequality by over 30 standard deviations, the coincidence curves agreeing with those predicted by quantum theory.[10]
### Pan et al.'s (2000) experiment on the GHZ state
This is the first of new Bell-type experiments on more than two particles; this one uses the so-called GHZ state of three particles.[11]
### Rowe et al. (2001) are the first to close the detection loophole
The detection loophole was first closed in an experiment with two entangled trapped ions, carried out in the ion storage group of David Wineland at the National Institute of Standards and Technology in Boulder. The experiment had detection efficiencies well over 90%.[12]
### Gröblacher et al. (2007) test of Leggett-type non-local realist theories
A specific class of non-local theories suggested by Anthony Leggett is ruled out. Based on this, the authors conclude that any possible non-local hidden variable theory consistent with quantum mechanics must be highly counterintuitive.[13][14]
### Salart et al. (2008) Separation in a Bell Test
This experiment filled a loophole by providing an 18 km separation between detectors, which is sufficient to allow the completion of the quantum state measurements before any information could have traveled between the two detectors.[15][16]
### Ansmann et al. (2009) Overcoming the detection loophole in solid state
This was the first experiment testing Bell inequalities with solid-state qubits (superconducting Josephson phase qubits were used). This experiment surmounted the detection loophole using a pair of superconducting qubits in an entangled state. However, the experiment still suffered from the locality loophole because the qubits were only separated by a few millimeters.[17]
### Giustina et al. (2013) Overcoming the detection loophole for photons
The detection loophole for photons has been closed for the first time in a group by Anton Zeilinger, using highly efficient detectors. This makes photons the first system for which all of the main loopholes have been closed, albeit in different experiments. [18]
## Loopholes
Though the series of increasingly sophisticated Bell test experiments has convinced the physics community in general that local realism is untenable, it remains true that the outcome of every single experiment done so far that violates a Bell inequality can still theoretically be explained by local realism, by exploiting the detection loophole and/or the locality loophole. The locality (or communication) loophole means that since in actual practice the two detections are separated by a time-like interval, the first detection may influence the second by some kind of signal. To avoid this loophole, the experimenter has to ensure that particles travel far apart before being measured, and that the measurement process is rapid. More serious is the detection (or unfair sampling) loophole, due to the fact that particles are not always detected in both wings of the experiment. It can be imagined that the complete set of particles would behave randomly, but instruments only detect a subsample showing quantum correlations, by letting detection be dependent on a combination of local hidden variables and detector setting. Experimenters have repeatedly stated that loophole-free tests can be expected in the near future.[19] On the other hand, some researchers point out the logical possibility that quantum physics itself prevents a loophole-free test from ever being implemented.[20][21]
## References
1. ^ J.S. Bell (1964), Physics 1: 195–200
2. ^ Alain Aspect, Philippe Grangier, Gérard Roger (1982), "Experimental Realization of Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities", Phys. Rev. Lett. 49 (2): 91–4, Bibcode:1982PhRvL..49...91A, doi:10.1103/PhysRevLett.49.91
3. ^ J.F. Clauser, M.A. Horne, A. Shimony, R.A. Holt (1969), "Proposed experiment to test local hidden-variable theories", Phys. Rev. Lett. 23 (15): 880–4, Bibcode:1969PhRvL..23..880C, doi:10.1103/PhysRevLett.23.880
4. ^ J.F. Clauser, M.A. Horne (1974), "Experimental consequences of objective local theories", Phys. Rev. D 10 (2): 526–35, Bibcode:1974PhRvD..10..526C, doi:10.1103/PhysRevD.10.526
5. ^ S.J. Freedman, J.F. Clauser (1972), "Experimental test of local hidden-variable theories", Phys. Rev. Lett. 28 (938), Bibcode:1972PhRvL..28..938F, doi:10.1103/PhysRevLett.28.938
6. ^ Alain Aspect, Philippe Grangier, Gérard Roger (1981), "Experimental Tests of Realistic Local Theories via Bell's Theorem", Phys. Rev. Lett. 47 (7): 460–3, Bibcode:1981PhRvL..47..460A, doi:10.1103/PhysRevLett.47.460
7. ^ Alain Aspect, Jean Dalibard, Gérard Roger (1982), "Experimental Test of Bell's Inequalities Using Time-Varying Analyzers", Phys. Rev. Lett. 49 (25): 1804–7, Bibcode:1982PhRvL..49.1804A, doi:10.1103/PhysRevLett.49.1804
8. ^ W. Tittel, J. Brendel, B. Gisin, T. Herzog, H. Zbinden, N. Gisin (1998), "Experimental demonstration of quantum-correlations over more than 10 kilometers", Physical Review A 57: 3229, arXiv:quant-ph/9707042, Bibcode:1998PhRvA..57.3229T, doi:10.1103/PhysRevA.57.3229
9. ^ W. Tittel, J. Brendel, H. Zbinden, N. Gisin (1998), "Violation of Bell inequalities by photons more than 10 km apart", Physical Review Letters 81: 3563-6, arXiv:quant-ph/9806043, Bibcode:1998PhRvL..81.3563T, doi:10.1103/PhysRevLett.81.3563
10. ^ G. Weihs, T. Jennewein, C. Simon, H. Weinfurter, A. Zeilinger (1998), "Violation of Bell's inequality under strict Einstein locality conditions", Phys. Rev. Lett. 81: 5039, arXiv:quant-ph/9810080, Bibcode:1998PhRvL..81.5039W, doi:10.1103/PhysRevLett.81.5039
11. ^ Jian-Wei Pan, D. Bouwmeester, M. Daniell, H. Weinfurter & A. Zeilinger (2000). "Experimental test of quantum nonlocality in three-photon GHZ entanglement". Nature 403 (6769): 515–519. Bibcode:2000Natur.403..515P. doi:10.1038/35000514.
12. ^ M.A. Rowe, D. Kielpinski, V. Meyer, C.A. Sackett, W.M. Itano, C. Monroe, D.J. Wineland (2001), "Experimental violation of a Bell's inequality with efficient detection", Nature 409 (6822): 791–94, Bibcode:2001Natur.409..791K, doi:10.1038/35057215
13. ^ Quantum physics says goodbye to reality, physicsworld.com, 2007
14. ^ S Gröblacher, T Paterek, Rainer Kaltenbaek, S Brukner, M Zdotukowski, M Aspelmeyer, A Zeilinger (2006), "An experimental test of non-local realism", Nature 446: 871–5, arXiv:0704.2529, Bibcode:2007Natur.446..871G, doi:10.1038/nature05677, PMID 17443179
15. ^ Salart, D.; Baas, A.; van Houwelingen, J. A. W.; Gisin, N.; and Zbinden, H. (2008), "Spacelike Separation in a Bell Test Assuming Gravitationally Induced Collapses", Physical Review Letters 100 (22): 220404, arXiv:0803.2425, Bibcode:2008PhRvL.100v0404S, doi:10.1103/PhysRevLett.100.220404
16. ^ World's Largest Quantum Bell Test Spans Three Swiss Towns, phys.org, 2008-06-16
17. ^ Ansmann, Markus; H. Wang, Radoslaw C. Bialczak, Max Hofheinz, Erik Lucero, M. Neeley, A. D. O'Connell, D. Sank, M. Weides, J. Wenner, A. N. Cleland, John M. Martinis (2009-09-24). "Violation of Bell's inequality in Josephson phase qubits". Nature 461 (504-6): 2009. Bibcode:2009Natur.461..504A. doi:10.1038/nature08363.
18. ^ Giustina, Marissa; Alexandra Mech, Sven Ramelow, Bernhard Wittmann, Johannes Kofler, Jörn Beyer, Adriana Lita, Brice Calkins, Thomas Gerrits, Sae Woo Nam, Rupert Ursin & Anton Zeilinger (2013-04-14). "Bell violation using entangled photons without the fair-sampling assumption". Nature 497 (7448): 227-30. arXiv:1212.0533. Bibcode:2013Natur.497..227G. doi:10.1038/nature12012.
19. ^ R. García-Patrón, J. Fiurácek, N. J. Cerf, J. Wenger, R. Tualle-Brouri, Ph. Grangier (2004), "Proposal for a Loophole-Free Bell Test Using Homodyne Detection", Phys. Rev. Lett. 93 (13): 130409, arXiv:quant-ph/0403191, Bibcode:2004PhRvL..93m0409G, doi:10.1103/PhysRevLett.93.130409
20. ^ Richard D. Gill (2003), "Time, Finite Statistics, and Bell's Fifth Position", Foundations of Probability and Physics - 2 (Vaxjo Univ. Press): 179–206, arXiv:quant-ph/0301059, Bibcode:2003quant.ph..1059G
21. ^ Emilio Santos (2005), "Bell's theorem and the experiments: Increasing empirical support to local realism", Studies In History and Philosophy of Modern Physics 36 (3): 544–65, arXiv:quant-ph/0410193, doi:10.1016/j.shpsb.2005.05.007 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097890019416809, "perplexity": 3274.4259415178626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802764752.1/warc/CC-MAIN-20141217075244-00044-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://mylomowalk.wordpress.com/ | # ACL 2012
What’s interesting?
A Deep Learning Tutorial by Richard Socher, Yoshua Bengio and Chris Manning.
– Selective Sharing for Multilingual Dependency Parsing. I always like the work from MIT people. This is an interesting paper on multilingual learning. The model does not assume the existing of parallel corpus, which makes it more practical. Moreover, it can transfer linguistic structures between unrelated languages.
– Unsupervised Morphology Rivals Supervised Morphology for Arabic MT. Very close to my thesis. In fact, I just need to read Mark Johnson’s 2009 paper Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars.
# One does not simply study Computational Complexity
Last night, I ran into an interesting question on Complexity while surfing the Internet. Here it is:
Let A be the language containing only the single string w where: w = 0 if God does exist and w = 1 otherwise. Is A decidable? Why or why not? (Note: the answer does not depend on your religious convictions.)
So what’s the proof. I come up with my proof that A is undecidable.
Consider a Turing Machine M used to decide A. If A is decidable, the Turing Machine M will stop and output God, the supreme being, in the output tape.
Forget about religious viewpoint, from logical point of view, let’s assume that God is the supreme being, who holds the most super power and can create anything. So, where does God come from? There must be another supreme being, who created God. The Turing Machine M, therefore continues working to output The One, who created God. But then, who created The One? Turing Machine M definitely enters infinite loop seeking for the original supreme being. Thus, A is undecidable.
Hey, does the proof sound familiar? Of course, it’s an analogue version of Turtles all the way down. In case you’ve never heard of this philosophical story, here is one of those version, found in the first page of my book Plato and a Platypus Walk into a Bar, Philogagging chapter.
Dimitri: If Atlas holds up the world, what holds up Atlas?
Tasso: Atlas stands on the back of a turtle.
Dimitri: But what does the turtle stand on?
Tasso: Another turtle.
Dimitri: And what does that turtle stand one?
Tasso: My dear Dimitri, it’s turtles all the way down!
# Recursion Theorem
(Kleene – Recursion Theorem) For any computable function
$f$ there exists an $e$ with $\phi_e = \phi_{f(e)}$.
Proof:
We define a partial recursive function $\theta$ by $\theta(u,x) = \phi_{\phi_u (u)} (x)$ . By s-m-n theorem or parameter theorem, we can find a recursive function $d$ such that
$\forall u: \phi_{d(u)}(x)=\theta(u,x)$ for $\forall x$.
Let $\phi_v = f \circ d$, choose $e=d(v)$ then we have
$\phi_e(x) = \phi_{d(v)}(x) = \theta(v,x) = \phi_{\phi_v (v)} (x) = \phi_{f \circ d(v)}(x) = \phi_{f(e)}(x)$
Using recursion theorem, we have a beautiful proof for Rice’s Theorem.
Let $C$ be a class of partial recursive functions. Set $\left\{e | \phi_e \in C \right\}$ is recursive, if and only if $C = \emptyset$ or $C$ contains all partial recursive functions.
Proof:
$C = \emptyset$ or $C = \mathbb{N}$ is trivial. We proof the case $C \ne \emptyset$ and $C \ne \mathbb{N}$.
Denote $S = \left\{e | \phi_e \in C \right\}$.
Exist $e_0 \in S$ and $e_1 \in \neg S$. If $S = \left\{e | \phi_e \in C \right\}$ is recursive, the following function is also recursive:
$f(x) = \left\{ {\begin{array}{*{20}c} {e_0 : x \in \neg S} \\ {e_1 : x \in S} \\ \end{array}} \right.$
By recursion theorem, $\exists e': \phi_e' =\phi_{f(e')}$. We consider 2 cases:
1. $e' \in S$, by index property we have $f(e') \in S$, so $e_1 \in S$ by the definition of $f$. We get contradiction.
2. $e' \in \neg S$, by index property $f(e') \in \neg S$, so $e_0 \in \neg S$ by the definition of $f$. Again we arrive to contradiction.
Functionception
Why’s recursion theorem interesting? A typical application of recursion theorem is to find a function that can print itself. Let’s say print(*program*) is that function. The output of print(*program*) is not *program* but the function that excuses print(*program*). In other words, we could ask if there is a function which can be self-conscious. I like to call it function-ception, which can be related to inception: does one know that he is dreaming in his dream?
Kleene recursion theorem offers an answer to our question: there is a function-ception, or a program that can print itself. We’ll construct such that function.
Let $\phi_e = \pi_1^2$, $\pi_i^n$ is the projection function $\pi_i^n(x_1,...,x_n) = x_i$.
By s-m-n theorem, there is total recursive function $s(e,x)$ such that $\phi_{s(e,x)}(y)=\phi_e(x,y)$. Let $g(x) = s(e,x)$. Apply Kleene recursion theorem, there is a number $n$ such that $\phi_{g(n)} = \phi_n$.
For each $x$: $\phi_n(x) = \phi_{g(n)}(x) = \phi_{s(e,n)}(x) = \phi_e(n,x) = \pi_1^2 = n$
My work here is done!
How many fixed points does a recursive function $f$ have?
Intuitively, we can see that a (total) recursive function $f$ has infinite fixed points. Recall the proof for Kleene recursion theorem, when we choose $v: \phi_v = f \circ d$. There is an infinite number of $v$ having that property. Analogically , there is an infinite number of ways to implement (write code) a function (by adding comments, using different variable names, …)
Now let’s work on some exercises using recursion theorem.
Lemma: There is a number $n \in \mathbb{N}$, such that $W_n = \left\{ n \right\}$
Proof:
We need to find a function $\phi_n$ that is only defined for input $n$ and undefined on the rest of natural numbers $\mathbb{N}-{n}$
I’m doing this in reversing way (how my thought leads me to the solution). By recursion theorem, $\exists n : \phi_n(x) = \phi_{f(n)}(x) = \phi_{s(e,n)}(x) = \phi_e(n,x)$.
Let $g(x,y)$ is the function that it is only defined for $(x,y)$ such that $x=y$. Such an $g(x,y)$ can be constructed as following:
$g(x,y) = \left\lfloor {\frac{1}{{\neg sign\left( {\left| {x - y} \right|} \right)}}} \right\rfloor$
My work here is done. There is an index $e$ such that $\phi_e(x,y) = g(x,y)$, by s-m-n theorem there is a total recursive function $s: \phi_{s(e,x)}(y) = \phi_e(x,y)$. Let $f(x) = s(e,x)$, and the last step is trivial.
Lemma: There is a number $n \in \mathbb{N}$, such that $\phi_n = \lambda x[n]$
Proof:
With the same line of proof for the previous lemma, we only need to construct a partial recursive function $g(x,y) = x \forall y$. One can easily recognize $g(x,y)$ is a projection function $\pi_1^2$
# Hello world!
I started Clojure past few days. Thanks to Jo and Milos, my nerdy classmates who are big fans of functional programing. Now I’m getting into it.
I heard of functional programming years ago, I played with Haskell but not much. Probably, the main reason I started liking Clojure because of Computational Complexity course I’ve taken at Charles University.
To be honest, I hated this course at the beginning. Machine Turing is not that bad, NP-complete and reduction techniques are pretty much fun, but recursive function and λ-calculus are so fucked up. I did hate all the lambda notations and hazy proofs with old math style.
I still hate λ-calculus if I don’t have to study for the exam. After several group studies with Jo, and he kept telling me how much he likes Scala and functional programing I thought “Hey, that’s pretty much similar to what we’re studying, you know, recursive functions and other stuff”
Yes, I like Clojure because I like λ-calculus, recursive function, partial recursive function, Kleene theorem and all other stuff. I decided to learn Clojure seriously, so I come up with an idea that I’m gonna blog about it. Of course, not only Clojure, FP but all other stuffs I like such as Computational Complexity, Machine Learning, Computational Linguistics, Lomography and so on.
Why is the blog named MyLomoWalk? I love Lomography, and Lomography is the art of coincidence. I’m a big fan of uncertainty, which often leads to coincidence. MyLomoWalk is my walk in the uncertain universe (or multiverse?) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 67, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8326036930084229, "perplexity": 875.2513015259549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00538-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://nyuscholars.nyu.edu/en/publications/on-maxwell-stress-and-its-relationship-with-the-dielectric-consta | # On Maxwell stress and its relationship with the dielectric constant in the actuation of ionic polymer metal composites
Alain Boldini, Maurizio Porfiri
Research output: Contribution to journalArticlepeer-review
## Abstract
Ionic polymer metal composites (IPMCs) are unique electroactive polymers that show promise as actuators in soft robotics and biomedical applications. The modeling and numerical simulation of the behavior of these materials are significantly complicated by the nature of the underlying electrochemical phenomena, which occur in nanometer-thick layers in the proximity of the electrodes. Hence, the dielectric constant of IPMCs is often rescaled to help numerically resolve the electric double layers. However, the effect of such a rescaling on IPMC actuation has never been systematically assessed. Motivated by recent efforts on the effect of the rescaling on Maxwell stress, we put forward a physically based analysis of the role of dielectric constant on IPMC actuation. We demonstrate that an increase of the dielectric constant decreases the magnitude of the electric field during actuation, due to the widening of the electric double layers. The decrease in the electric field intensity perfectly balances the increase of the dielectric constant scaling Maxwell stress, such that Maxwell stress is independent of the IPMC dielectric constant. The bending moment generated by Maxwell stress increases with the dielectric constant, since the thickness of the electric double layers where Maxwell stress is relevant is larger. However, the same scaling is common to all the bending moments associated with the electrochemistry, so that the relative importance of each term on IPMC actuation does not depend on the dielectric constant. This study contributes an important analysis that can support feasible and faithful numerical implementations of theories on IPMC mechanics and electrochemistry.
Original language English (US) 104875 Journal of the Mechanics and Physics of Solids 164 https://doi.org/10.1016/j.jmps.2022.104875 Published - Jul 2022
## Keywords
• Actuators
• Electroactive polymers
• Electrochemistry
• Electrolytes
• Maxwell Stress
## ASJC Scopus subject areas
• Condensed Matter Physics
• Mechanics of Materials
• Mechanical Engineering
## Fingerprint
Dive into the research topics of 'On Maxwell stress and its relationship with the dielectric constant in the actuation of ionic polymer metal composites'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9004300832748413, "perplexity": 1914.098913310415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711368.1/warc/CC-MAIN-20221208215156-20221209005156-00362.warc.gz"} |
https://stats.libretexts.org/Courses/Fresno_City_College/Book%3A_Business_Statistics_Customized_(OpenStax)/Using_Excel_Spreadsheets_in_Statistics/3_Discrete_Probability/3.6_Geometric_Probability_using_the_Excel_Sheet_provided | # 3.6 Geometric Probability using the Excel Sheet provided
$$\newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} }$$ $$\newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}}$$$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}[1]{\| #1 \|}$$ $$\newcommand{\inner}[2]{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$
Suppose the probability that a red car enters an intersection is 0.24. What is the likelihood that the first red car enters the intersection after four non-red vehicles pass through the intersection? The discrete probability distribution is Geometric.
P(Red Car) = .24
P(Not Red Car) = 1-.24 = .76
P( X = 5) = (.76)4(.24) =
To compute the probability in the Excel spreadsheet provided, enter the following.
3.6 Geometric Probability using the Excel Sheet provided is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.939601719379425, "perplexity": 1148.794200965287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103671290.43/warc/CC-MAIN-20220630092604-20220630122604-00028.warc.gz"} |
https://ingmarschuster.com/2016/01/21/why-the-map-is-a-bad-starting-point-in-high-dimensions/ | # Why the MAP is a bad starting point in high dimensions
During MCMSki 2016, Heiko mentioned that in high dimensions, the MAP is not a particularly good starting point for a Monte Carlo sampler, because there is no volume around it. I and several people smarter than me where not sure why that could be the case, so Heiko gave us a proof by simulation: he sampled from multivariate standard normals with increasing dimensions and plotted the euclidean norm of the samples. The following is what I reproduced for sampling a standard normal in $D=30$ dimensions.The histogram has a peak at about $\sqrt{D}$, which means most of the samples are in a sphere around the mean/mode/MAP of the target distribution and none are at the MAP, which would correspond to norm $0$.
We where dumbstruck by this, but nobody (not even Heiko) had an explanation for what was happening. Yesterday I asked Nicolas about this and he gave the most intuitive interpretation: Given a standard normal variable in $D$ dimensions, $x \sim \mathcal{N}(0,I_D)$, computing the euclidean norm you get $n^2 = \|x\|^2 = \sum_{i=1}^D x_i^2$. But as $x$ is gaussian, this just means $n^2$ has a $\chi^2(D)$ distribution, which results in the expected value $\mathbb{E}(n) = \sqrt{D}$. Voici l’explication.
(Title image (c) Niki Odolphie)
## 8 thoughts on “Why the MAP is a bad starting point in high dimensions”
1. Joseph says:
Check out the introductory chapter of Giraud, Introduction to High-Dimensional Statistics, CRC, 2015!
Like
2. Marco says:
Nice! The norm of normally distributed vectors is indeed a chi distribution
https://en.wikipedia.org/wiki/Chi_distribution
which has interestingly a variance of $D-\mu^2$, hence quasi-constant in our case where the mean is very close to $\sqrt{D}$. That’s why as we increase the dimension and the mean goes farther away from zero the area with non-zero mass appears to be shrinking!
Like
3. Nice post!
A note on the Chi2: http://goo.gl/Sghwc4
Just wanted to add one thing, to clarify what I meant.
Using the MAP to initialise an MCMC run is not the best idea ever, but it also it not really a bad idea. Any (converging) sampler will eventually move to the typical set. So the worst thing that can happen is to loose a bit of computing power when in moving from the MAP to the typical set. If your sampler is geometrically ergodic for example, it will do so at a geometric rate — from *any* starting point. You can easily see that if you use a RWMH on a high dimensional Gaussian and start it at the mode — the traces will (slowly in high dimensions) move to the sphere you were talking about in the blog post.
What *is* a bad idea in high dimensions, however, is to use the MAP to represent your probability distribution. That is, MAP point estimates, when used to summarise a model give a quite different answer to using a single sample from the posterior. If you do a Gaussian linear regression in high dimensions, and compare the predictive posterior using MAP and MCMC, you will see.
Like
4. Isn’t this observation simply that neighbourhoods are smaller in high dimensions than we might intuitively suppose? For a Normal in high dimensions the mean/mode/map remains the point at which samples are typically nearest; there isn’t a starting point closer to the mass, right?
e.g. D=30; N=1000; X <- matrix(rnorm(D*N),nrow=D,ncol=N); hist(sqrt(colSums((X)^2)),col="black"); for (i in 1:10) {R = 2*i/D; hist(sqrt(colSums((X-R)^2)),add=T,border=hsv(i/10*0.6),plot.new=F)}
Like
1. Yes, the mean is the one point that has the shortest distance to the $\sqrt{D}$ sphere in which the posterior mass concentrates.
However, for several Markovian proposal kernels $q(\cdot|x)$, a good starting point would be in that $\sqrt{D}$ sphere, not at the mean. An intuition could be that you actually start from equilibrium in that case, i.e. $x \sim \pi$, instead of having to converge first.
Obviously, there are exceptions, the easiest being when using an independent proposal $q(\cdot)$ where the current point doesn’t matter.
Like
1. Interesting perspective; I disagree of course: I don’t think one is any closer to equilibrium for a RW MCMC by starting in the sphere than at the MAP, it just looks that way when you collapse the parameter space (arbitrarily) down to a single radial axis.
Like
5. Joseph says:
Well, this so-called sphere has actually a massive volume, that’s why it aggregates so much probability mass even though the density decreases. In high dimension there is no point, including the MAP, that summarizes the whole distribution well. But there is also no point better than the MAP. Do the same plot again with another point as the origin …
Like
1. Then the expected distance from that other point will be higher of course. Which reiterates Ewans point, but makes it more intuitive (to me).
Like | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8902480602264404, "perplexity": 665.5670437612652}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863410.22/warc/CC-MAIN-20180520112233-20180520132233-00081.warc.gz"} |
http://clay6.com/qa/26296/if-phi-n-f-n-g-n-where-f-n-g-n-c-and-large-frac-frac-frac-frac-then-value-o | # If $\phi (n) =f(n) g(n),$ where $f'(n) g'(n) =C$ and $\large\frac {\phi ''}{\phi} =\frac {f''}{f} +\frac{g''}{g}+\frac{Kc}{fg}$ then value of K.
$(a)\;1 \\ (b)\;0 \\ (c)\;2 \\ (d)\;-1$
$\phi(x)=f(x)g(x)$
$\phi'(x) =f'(x)g(x) +f(x) g'(x)$
$\phi''(x) =f'(x)g(x) +f''(x) g'(x)+f''(x) g'(x) +f(x)g'(x)$
$\phi''(x) =2c+f''(x) g(x)+f(n) g''(x)$
$\large\frac{\phi ''(x)}{\phi (n)}=\frac{2c}{f(n) g(n) } +\frac{f''(x)}{f(x)}+\frac{g''(x)}{g(x)}$
$K=2$
Hence c is the correct answer. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990105032920837, "perplexity": 194.90246562563232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948514113.3/warc/CC-MAIN-20171211222541-20171212002541-00689.warc.gz"} |
https://computergraphics.stackexchange.com/questions/4715/where-do-the-coefficients-in-the-catmull-clark-subdivision-algorithm-come-from | # Where do the coefficients in the Catmull-Clark subdivision algorithm come from?
I'm learning about subdivision surface algorithms. The "Catmull-Clark" algorithm seems to be one of the most widely-used classical algorithms.
The introduction of new face points and edges is straightforward, but I'm confused by the following formula for modifying the locations of original points
$$\frac{F + 2R + (n-3)P}{n}$$
where:
• $P$ is the original point
• $R$ is the average of all $n$ edges midpoints touching $P$
• $F$ is the average of all $n$ face points touching $P$
I understand the result is a weighted average of three points $F$, $R$ and $P$, but where do the coefficients $1$, $2$, and $(n-3)$ come from? Why $2R$? Why not $2.01R$? Or $1R$ and $2F$?
All the introductions I've seen just present the formula without presenting a justification.
The derivation is presented in the original paper that introduced CC subdivisions as a generalisation of B-Spline patches: https://people.eecs.berkeley.edu/~sequin/CS284/PAPERS/CatmullClark_SDSurf.pdf
• Thanks. I'm struggling to follow the derivation somewhat. If I'm not wrong, every entry of the $H_1$ matrix should be multiplied by a factor of $\frac{1}{8}$. – eigenchris Feb 15 '17 at 5:08
• Thanks for the link, but in general link-only answers are discouraged on StackExchange. I'm not suggesting you paste the entire derivation in, but perhaps you could summarize the key points (e.g. assumptions that lead to the specific coefficients) in your answer? – Nathan Reed Feb 20 '17 at 16:00
I'm just going to add a few more details to the paper that Stefan linked to.
First of all, the matrix $H_1$ displayed in the paper is incorrect; every element should be multiplied by a factor of $\frac{1}{8}$.
$$H_1 = \frac{1}{8} \begin{bmatrix} 4 & 4 & 0 & 0 \\ 1 & 6 & 1 & 0 \\ 0 & 4 & 4 & 0 \\ 0 & 1 & 6 & 1 \end{bmatrix}$$
Second, the derivation of the new face point formula for quads in the first half of the paper is confusing, so I will fill in some of the details.
The matrix $G_1 = HGH^T$ contains the 16 control points for one sub-quad of the original quad with control points $G$. The entries of $G_1$ are referred to with $q_{i,j}$. Of most interest are the elements $q_{1,1}$, $q_{1,2}$ and $q_{2,2}$, which represent control points for a corner, edge, and interior of the $4\times 4$ control point grid for the sub-quad. All the other elements have similar formulas due to the symmetry of the subdivision process.
By carrying out the matrix multiplication, we get:
$q_{2,2} = \frac{P_{1,1} +P_{3,1} +P_{3,1} +P_{3,3}}{64} + \frac{6(P_{1,2} +P_{2,1} +P_{3,2} +P_{2,3})}{64} + \frac{36P_{2,2}}{64}$
We can see from this formula that, when relocating a control point, its new location is determined by the $3\times 3$ grid of points surrounding it. Here we see that in this grid, corner points have $\frac{1}{64}$ influence, edge points have $\frac{6}{64} = \frac{3}{32}$ influence, and the center ("old") point has $\frac{36}{64} = \frac{9}{16}$ influence. These weights are consistent with the ones given at the bottom of this blog post.
With a bit of rearranging, we get:
$q_{2,2} = \frac{1}{4} \big[ \frac{1}{4}(\frac{P_{1,1} + P_{1,2} + P_{2,1} + P_{2,2}}{4} + \frac{P_{1,2} + P_{2,2} + P_{1,3} + P_{2,3}}{4} + \frac{P_{2,1} + P_{2,2} + P_{3,2} + P_{3,1}}{4} + \frac{P_{2,2} + P_{2,3} + P_{3,2} + P_{3,3}}{4}) + 2\frac{1}{4}(\frac{P_{1,2} + P_{2,2}}{2} + \frac{P_{2,1} + P_{2,2}}{2} + \frac{P_{2,3} + P_{2,2}}{2} + \frac{P_{3,2} + P_{2,2}}{2}) + P_{2,2}\big]$
The first term is merely the average of the four surrounding face points; in other words, it is $F$. The second term is the average of the four surrounding edge midpoints multiplied by $2$; in other words, it is $2R$. Finally, $P_{2,2}$ is simply the original control point $P$.
And so we see this is the same as the original formula for quads: $\frac{1}{4}(F + 2R + P)$.
The paper does not provide a derivation of the general case for an $n$-sided polygon, but it seems like a reasonable extension based on intuition.
As an aside, here is an explanation for the other points, and their meanings as seen on wikipedia:
$q_{1,1} = \frac{P_{1,1} +P_{1,2} +P_{2,1} +P_{2,2}}{4}$
This is simply the formula for adding a new control point at the center of an existing quad.
$q_{1,2} = \frac{P_{1,1} +P_{1,2} +P_{2,1} +P_{2,3}}{16} + \frac{6(P_{1,2} +P_{2,1})}{16}$
Rearranging this, we see it is just the average of the two neighbouring face points and the edge endpoints:
$q_{1,2} = \frac{1}{4}(\frac{P_{1,1} +P_{2,1} +P_{1,2} +P_{2,2}}{4} + \frac{P_{1,2} +P_{2,2} +P_{1,3} +P_{2,3}}{4} + P_{1,2} + P_{2,1})$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8420752286911011, "perplexity": 260.33339790730196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256980.46/warc/CC-MAIN-20190522223411-20190523005411-00042.warc.gz"} |
https://arxiv.org/abs/1403.6600 | cs.NE
(what is this?)
# Title: How Crossover Speeds Up Building-Block Assembly in Genetic Algorithms
Authors: Dirk Sudholt
Abstract: We re-investigate a fundamental question: how effective is crossover in Genetic Algorithms in combining building blocks of good solutions? Although this has been discussed controversially for decades, we are still lacking a rigorous and intuitive answer. We provide such answers for royal road functions and OneMax, where every bit is a building block. For the latter we show that using crossover makes every ($\mu$+$\lambda$) Genetic Algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate $\mu$ and $\lambda$. Crossover is beneficial because it effectively turns fitness-neutral mutations into improvements by combining the right building blocks at a later stage. Compared to mutation-based evolutionary algorithms, this makes multi-bit mutations more useful. Introducing crossover changes the optimal mutation rate on OneMax from $1/n$ to $(1+\sqrt{5})/2 \cdot 1/n \approx 1.618/n$. This holds both for uniform crossover and $k$-point crossover. Experiments and statistical tests confirm that our findings apply to a broad class of building-block functions.
Subjects: Neural and Evolutionary Computing (cs.NE); Data Structures and Algorithms (cs.DS) Cite as: arXiv:1403.6600 [cs.NE] (or arXiv:1403.6600v2 [cs.NE] for this version)
## Submission history
From: Dirk Sudholt [view email]
[v1] Wed, 26 Mar 2014 09:28:56 GMT (739kb,D)
[v2] Wed, 26 Nov 2014 11:46:21 GMT (460kb,D) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8440337181091309, "perplexity": 4486.04209256231}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721606.94/warc/CC-MAIN-20161020183841-00395-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://poidsdesante.com/south-cotabato/how-to-reduce-large-fractions-to-lowest-terms.php | ## How to reduce (simplify) fraction 48/60 to lowest terms
Reducing fractions to lowest terms. MrExcel. 12/10/2011В В· How To Reduce Fractions To Lowest Terms-Step By Step Math Lesson - Duration: This tutorial demonstrates how to reduce a fraction to lowest terms by using multiplication and cancellation, 02/09/2019В В· There is an algorithm based on an identity accredited to Euclid. Thus, in computer programming it is called the Euclidean algorithm. Euclidean algorithm - Wikipedia Start by noting the numerator and denominator for later. Take note of which is lar....
### Reduce the Fractions to its Lowest Terms Review S1
Reducing fractions to lowest terms. MrExcel. Regardless of which manual method you choose, using them to reduce fractions can be extremely tedious and time-consuming. Therefore I would suggest you bookmark the fraction reducer calculator and use it any time you are working with fractions. How to Convert Improper Fraction to Mixed Number, fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions ….
04/08/2010В В· Example: reduce to lowest terms the fraction 169187 / 177628. The method is called the Euclidean Algorithm. It finds the biggest number which divides into both the numerator and denominator. First you divide the bottom into the top. In this case it goes in 0 times with a remainder of 169187. You just studied 11 terms! Now up your study game with Learn mode.
We reduce a fraction to lowest terms by finding an equivalent fraction in which the numerator and denominator are as small as possible. This means that there is no number, except 1, that can be divided evenly into both the numerator and the denominator. To reduce a В· Reducing fractions 30/01/2013В В· Some fractions have HUGE numerators and denominators. Here's a way to break it down into smaller, more managable reducing steps.
04/08/2010В В· Example: reduce to lowest terms the fraction 169187 / 177628. The method is called the Euclidean Algorithm. It finds the biggest number which divides into both the numerator and denominator. First you divide the bottom into the top. In this case it goes in 0 times with a remainder of 169187. 30/01/2013В В· Some fractions have HUGE numerators and denominators. Here's a way to break it down into smaller, more managable reducing steps.
The numerator and denominator have no common factors, so the fraction is already in lowest terms. Practice questions. Increase the terms of the fraction 2/3 so that the denominator is 18. Increase the terms of 4/9, changing the denominator to 54. Reduce the fraction 12/60 to … 29/03/2014 · One of the basic reasons why we reduce fraction to lowest terms is to lessen the burden of calculating large numbers. Of course, we would rather add or multiply and than and .So you see, the effort of multiplying the same fraction is lessen when they are reduced to lowest terms.
24/04/2017 · The directions of many worksheets, quizzes and tests will ask for fractions in their simplest form. To simplify a fraction, divide the top number, known as the numerator, and the bottom number, the denominator, by the greatest common factor.The GFC is the largest number that will divide into the numerator and denominator evenly. 23/05/2017 · I suppose you’re asking about converting a ordinary positive fraction like $\frac{49}{91}$ to lowest terms, in this case $\frac7{13}$. If the numerator and denominator are small numbers, as in $\frac{10}{12}$, you
17/06/2010В В· An important part of fraction learning is understanding how to reduce a fraction to its lowest terms. When a fraction is in lowest terms, the numerator and denominator don't share any common factors. Help your fourth grader wrap his head around reducing fractions with this helpful worksheet. Need more help? Try Reducing to Lowest Terms #2. 18/07/2012В В· Reducing a Fraction to Lowest Terms. Here I look at reducing a fraction to lowest terms. I do not take the shortest route, but show how I often perform the simplification if no one is watching!
### What are some ways to reduce large fractions? Quora
Fractions Worksheet Reduce To Lowest Terms Worksheets. 29/03/2014 · One of the basic reasons why we reduce fraction to lowest terms is to lessen the burden of calculating large numbers. Of course, we would rather add or multiply and than and .So you see, the effort of multiplying the same fraction is lessen when they are reduced to lowest terms., 23/05/2017 · I suppose you’re asking about converting a ordinary positive fraction like $\frac{49}{91}$ to lowest terms, in this case $\frac7{13}$. If the numerator and denominator are small numbers, as in $\frac{10}{12}$, you.
### Tips for reducing fractions with large numbers YouTube
How to reduce (simplify) fraction 18/6 to lowest terms. 30/01/1998В В· Reducing Fractions with Large Numbers Date: 01/30/98 at 09:37:30 From: Pat Subject: Reducing fractions Dear Dr. Math, A neighborhood child asked for help with a math problem involving reducing fractions. 20904/12 = 1742, so, reduced to lowest terms, 20904/52740 = 1742/4395. This always works, and always gives the largest common factor To reduce any fraction, we have to determine factors of its numerator (upper digits) and denominator (lower digit). As such 14 / 35. The factors of numerator 14 = 2 x 7 The factors if denominator 35 = 5 x 7 Calculate the HCF of both numerator and.
fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions … 18/07/2012 · Reducing a Fraction to Lowest Terms. Here I look at reducing a fraction to lowest terms. I do not take the shortest route, but show how I often perform the simplification if no one is watching!
22/10/2009 · How to find the lowest terms of fractions. Examples include one step division by common factor. how to cancel down larger fractions in multiple steps, and how to use the prime factors of the fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions …
22/10/2009В В· How to find the lowest terms of fractions. Examples include one step division by common factor. how to cancel down larger fractions in multiple steps, and how to use the prime factors of the To reduce any fraction, we have to determine factors of its numerator (upper digits) and denominator (lower digit). As such 14 / 35. The factors of numerator 14 = 2 x 7 The factors if denominator 35 = 5 x 7 Calculate the HCF of both numerator and
29/03/2014 · One of the basic reasons why we reduce fraction to lowest terms is to lessen the burden of calculating large numbers. Of course, we would rather add or multiply and than and .So you see, the effort of multiplying the same fraction is lessen when they are reduced to lowest terms. 19/01/2011 · How to Reduce Fractions. Math is hard. It's easy to forget even core concepts when you are trying to remember dozens of different principles and methods. Here's your refresher on two methods to reduce fractions. List the …
The numerator and denominator have no common factors, so the fraction is already in lowest terms. Practice questions. Increase the terms of the fraction 2/3 so that the denominator is 18. Increase the terms of 4/9, changing the denominator to 54. Reduce the fraction 12/60 to … 18/07/2012 · Reducing a Fraction to Lowest Terms. Here I look at reducing a fraction to lowest terms. I do not take the shortest route, but show how I often perform the simplification if no one is watching!
The numerator and denominator have no common factors, so the fraction is already in lowest terms. Practice questions. Increase the terms of the fraction 2/3 so that the denominator is 18. Increase the terms of 4/9, changing the denominator to 54. Reduce the fraction 12/60 to … 05/11/2015 · To reduce a fraction to lowest terms, try to find the largest number that divides into both numerator and denominator. This number is also known as the greatest common divisor, or …
30/01/1998 · Reducing Fractions with Large Numbers Date: 01/30/98 at 09:37:30 From: Pat Subject: Reducing fractions Dear Dr. Math, A neighborhood child asked for help with a math problem involving reducing fractions. 20904/12 = 1742, so, reduced to lowest terms, 20904/52740 = 1742/4395. This always works, and always gives the largest common factor 19/01/2011 · How to Reduce Fractions. Math is hard. It's easy to forget even core concepts when you are trying to remember dozens of different principles and methods. Here's your refresher on two methods to reduce fractions. List the …
## How to Increase and Reduce the Terms of Fractions dummies
How to Increase and Reduce the Terms of Fractions dummies. 04/08/2010В В· Example: reduce to lowest terms the fraction 169187 / 177628. The method is called the Euclidean Algorithm. It finds the biggest number which divides into both the numerator and denominator. First you divide the bottom into the top. In this case it goes in 0 times with a remainder of 169187., 29/03/2014В В· One of the basic reasons why we reduce fraction to lowest terms is to lessen the burden of calculating large numbers. Of course, we would rather add or multiply and than and .So you see, the effort of multiplying the same fraction is lessen when they are reduced to lowest terms..
### Equivalent Fractions Finding the Lowest Terms - YouTube
Equivalent Fractions Finding the Lowest Terms - YouTube. 19/01/2011 · How to Reduce Fractions. Math is hard. It's easy to forget even core concepts when you are trying to remember dozens of different principles and methods. Here's your refresher on two methods to reduce fractions. List the …, fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions ….
18/07/2012В В· Reducing a Fraction to Lowest Terms. Here I look at reducing a fraction to lowest terms. I do not take the shortest route, but show how I often perform the simplification if no one is watching! 24/04/2017В В· The directions of many worksheets, quizzes and tests will ask for fractions in their simplest form. To simplify a fraction, divide the top number, known as the numerator, and the bottom number, the denominator, by the greatest common factor.The GFC is the largest number that will divide into the numerator and denominator evenly.
By finding the LCM of the denominators, (called the lowest common denominator) you can convert unlike to like fractions and proceed with the adding or subtracting. Knowing the GCF helps reduce a You just studied 11 terms! Now up your study game with Learn mode.
04/08/2010 · Example: reduce to lowest terms the fraction 169187 / 177628. The method is called the Euclidean Algorithm. It finds the biggest number which divides into both the numerator and denominator. First you divide the bottom into the top. In this case it goes in 0 times with a remainder of 169187. fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions …
The numerator and denominator have no common factors, so the fraction is already in lowest terms. Practice questions. Increase the terms of the fraction 2/3 so that the denominator is 18. Increase the terms of 4/9, changing the denominator to 54. Reduce the fraction 12/60 to … 19/01/2011 · How to Reduce Fractions. Math is hard. It's easy to forget even core concepts when you are trying to remember dozens of different principles and methods. Here's your refresher on two methods to reduce fractions. List the …
05/05/2006В В· Reducing fractions to lowest terms. Hi and I can automatically calculate for very large unusual fractions I sometimes encounter, such as "what is 489/7896 th's of the transferor's 6784/762543 th share of the land, and what is the fraction of the whole of the land that the transferor keeps?", and so forth. if you want the numerator and 12/10/2011В В· How To Reduce Fractions To Lowest Terms-Step By Step Math Lesson - Duration: This tutorial demonstrates how to reduce a fraction to lowest terms by using multiplication and cancellation
To reduce any fraction, we have to determine factors of its numerator (upper digits) and denominator (lower digit). As such 14 / 35. The factors of numerator 14 = 2 x 7 The factors if denominator 35 = 5 x 7 Calculate the HCF of both numerator and 18/07/2012В В· Reducing a Fraction to Lowest Terms. Here I look at reducing a fraction to lowest terms. I do not take the shortest route, but show how I often perform the simplification if no one is watching!
Regardless of which manual method you choose, using them to reduce fractions can be extremely tedious and time-consuming. Therefore I would suggest you bookmark the fraction reducer calculator and use it any time you are working with fractions. How to Convert Improper Fraction to Mixed Number 12/10/2011В В· How To Reduce Fractions To Lowest Terms-Step By Step Math Lesson - Duration: This tutorial demonstrates how to reduce a fraction to lowest terms by using multiplication and cancellation
Reduce the Fractions to its Lowest Terms Review: S1 . Title: Microsoft Word - reduce-1 Author: educurve 13 Created Date: 4/4/2017 3:21:06 PM 05/05/2006В В· Reducing fractions to lowest terms. Hi and I can automatically calculate for very large unusual fractions I sometimes encounter, such as "what is 489/7896 th's of the transferor's 6784/762543 th share of the land, and what is the fraction of the whole of the land that the transferor keeps?", and so forth. if you want the numerator and
04/08/2010 · Example: reduce to lowest terms the fraction 169187 / 177628. The method is called the Euclidean Algorithm. It finds the biggest number which divides into both the numerator and denominator. First you divide the bottom into the top. In this case it goes in 0 times with a remainder of 169187. 05/11/2015 · To reduce a fraction to lowest terms, try to find the largest number that divides into both numerator and denominator. This number is also known as the greatest common divisor, or …
Regardless of which manual method you choose, using them to reduce fractions can be extremely tedious and time-consuming. Therefore I would suggest you bookmark the fraction reducer calculator and use it any time you are working with fractions. How to Convert Improper Fraction to Mixed Number Reduce the Fractions to its Lowest Terms Review: S1 . Title: Microsoft Word - reduce-1 Author: educurve 13 Created Date: 4/4/2017 3:21:06 PM
24/04/2017В В· The directions of many worksheets, quizzes and tests will ask for fractions in their simplest form. To simplify a fraction, divide the top number, known as the numerator, and the bottom number, the denominator, by the greatest common factor.The GFC is the largest number that will divide into the numerator and denominator evenly. 30/01/2013В В· Some fractions have HUGE numerators and denominators. Here's a way to break it down into smaller, more managable reducing steps.
17/06/2010В В· An important part of fraction learning is understanding how to reduce a fraction to its lowest terms. When a fraction is in lowest terms, the numerator and denominator don't share any common factors. Help your fourth grader wrap his head around reducing fractions with this helpful worksheet. Need more help? Try Reducing to Lowest Terms #2. You just studied 11 terms! Now up your study game with Learn mode.
30/01/2013 · Some fractions have HUGE numerators and denominators. Here's a way to break it down into smaller, more managable reducing steps. 23/05/2017 · I suppose you’re asking about converting a ordinary positive fraction like $\frac{49}{91}$ to lowest terms, in this case $\frac7{13}$. If the numerator and denominator are small numbers, as in $\frac{10}{12}$, you
17/06/2010 · An important part of fraction learning is understanding how to reduce a fraction to its lowest terms. When a fraction is in lowest terms, the numerator and denominator don't share any common factors. Help your fourth grader wrap his head around reducing fractions with this helpful worksheet. Need more help? Try Reducing to Lowest Terms #2. fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions …
### Reduce the Fractions to its Lowest Terms Review S1
How to reduce the fraction 14⁄35 to its lowest terms Quora. The numerator and denominator have no common factors, so the fraction is already in lowest terms. Practice questions. Increase the terms of the fraction 2/3 so that the denominator is 18. Increase the terms of 4/9, changing the denominator to 54. Reduce the fraction 12/60 to …, fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions ….
### How to reduce the fraction 14вЃ„35 to its lowest terms Quora
What are some ways to reduce large fractions? Quora. 30/01/1998В В· Reducing Fractions with Large Numbers Date: 01/30/98 at 09:37:30 From: Pat Subject: Reducing fractions Dear Dr. Math, A neighborhood child asked for help with a math problem involving reducing fractions. 20904/12 = 1742, so, reduced to lowest terms, 20904/52740 = 1742/4395. This always works, and always gives the largest common factor 18/07/2012В В· Reducing a Fraction to Lowest Terms. Here I look at reducing a fraction to lowest terms. I do not take the shortest route, but show how I often perform the simplification if no one is watching!.
04/08/2010В В· Example: reduce to lowest terms the fraction 169187 / 177628. The method is called the Euclidean Algorithm. It finds the biggest number which divides into both the numerator and denominator. First you divide the bottom into the top. In this case it goes in 0 times with a remainder of 169187. 02/09/2019В В· There is an algorithm based on an identity accredited to Euclid. Thus, in computer programming it is called the Euclidean algorithm. Euclidean algorithm - Wikipedia Start by noting the numerator and denominator for later. Take note of which is lar...
12/10/2011В В· How To Reduce Fractions To Lowest Terms-Step By Step Math Lesson - Duration: This tutorial demonstrates how to reduce a fraction to lowest terms by using multiplication and cancellation 12/10/2011В В· How To Reduce Fractions To Lowest Terms-Step By Step Math Lesson - Duration: This tutorial demonstrates how to reduce a fraction to lowest terms by using multiplication and cancellation
06/01/2009В В· Reduce fractions to lowest terms using prime numbers.? Help with C++ Reduce fractions to their lowest terms? How do u reduce fractions to lowest terms? More questions. Easier way to reduce to lower terms - fractions that have a big number? How do I reduce large fractions to lowest terms? Answer Questions. To reduce any fraction, we have to determine factors of its numerator (upper digits) and denominator (lower digit). As such 14 / 35. The factors of numerator 14 = 2 x 7 The factors if denominator 35 = 5 x 7 Calculate the HCF of both numerator and
By finding the LCM of the denominators, (called the lowest common denominator) you can convert unlike to like fractions and proceed with the adding or subtracting. Knowing the GCF helps reduce a 22/10/2009В В· How to find the lowest terms of fractions. Examples include one step division by common factor. how to cancel down larger fractions in multiple steps, and how to use the prime factors of the
29/03/2014 · One of the basic reasons why we reduce fraction to lowest terms is to lessen the burden of calculating large numbers. Of course, we would rather add or multiply and than and .So you see, the effort of multiplying the same fraction is lessen when they are reduced to lowest terms. fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions …
To reduce any fraction, we have to determine factors of its numerator (upper digits) and denominator (lower digit). As such 14 / 35. The factors of numerator 14 = 2 x 7 The factors if denominator 35 = 5 x 7 Calculate the HCF of both numerator and 06/01/2009В В· Reduce fractions to lowest terms using prime numbers.? Help with C++ Reduce fractions to their lowest terms? How do u reduce fractions to lowest terms? More questions. Easier way to reduce to lower terms - fractions that have a big number? How do I reduce large fractions to lowest terms? Answer Questions.
29/03/2014В В· One of the basic reasons why we reduce fraction to lowest terms is to lessen the burden of calculating large numbers. Of course, we would rather add or multiply and than and .So you see, the effort of multiplying the same fraction is lessen when they are reduced to lowest terms. We reduce a fraction to lowest terms by finding an equivalent fraction in which the numerator and denominator are as small as possible. This means that there is no number, except 1, that can be divided evenly into both the numerator and the denominator. To reduce a В· Reducing fractions
The numerator and denominator have no common factors, so the fraction is already in lowest terms. Practice questions. Increase the terms of the fraction 2/3 so that the denominator is 18. Increase the terms of 4/9, changing the denominator to 54. Reduce the fraction 12/60 to … 24/04/2017 · The directions of many worksheets, quizzes and tests will ask for fractions in their simplest form. To simplify a fraction, divide the top number, known as the numerator, and the bottom number, the denominator, by the greatest common factor.The GFC is the largest number that will divide into the numerator and denominator evenly.
Reduce the Fractions to its Lowest Terms Review: S1 . Title: Microsoft Word - reduce-1 Author: educurve 13 Created Date: 4/4/2017 3:21:06 PM 24/04/2017В В· The directions of many worksheets, quizzes and tests will ask for fractions in their simplest form. To simplify a fraction, divide the top number, known as the numerator, and the bottom number, the denominator, by the greatest common factor.The GFC is the largest number that will divide into the numerator and denominator evenly.
05/11/2015 · To reduce a fraction to lowest terms, try to find the largest number that divides into both numerator and denominator. This number is also known as the greatest common divisor, or … 04/08/2010 · Example: reduce to lowest terms the fraction 169187 / 177628. The method is called the Euclidean Algorithm. It finds the biggest number which divides into both the numerator and denominator. First you divide the bottom into the top. In this case it goes in 0 times with a remainder of 169187.
06/01/2009 · Reduce fractions to lowest terms using prime numbers.? Help with C++ Reduce fractions to their lowest terms? How do u reduce fractions to lowest terms? More questions. Easier way to reduce to lower terms - fractions that have a big number? How do I reduce large fractions to lowest terms? Answer Questions. fractions worksheet reduce to lowest terms worksheets simplifying pdf 6th grade.. collection of free printable math worksheets reducing fractions simplifying worksheet pdf with answers improper,simplifying fractions worksheets pdf reduce to lowest terms 5 worksheet grade 3,reducing improper fractions worksheet pdf kindergarten grade simplifying 5th worksheets,simplifying fractions …
19/01/2011 · How to Reduce Fractions. Math is hard. It's easy to forget even core concepts when you are trying to remember dozens of different principles and methods. Here's your refresher on two methods to reduce fractions. List the … 22/10/2009 · How to find the lowest terms of fractions. Examples include one step division by common factor. how to cancel down larger fractions in multiple steps, and how to use the prime factors of the
By finding the LCM of the denominators, (called the lowest common denominator) you can convert unlike to like fractions and proceed with the adding or subtracting. Knowing the GCF helps reduce a 12/10/2011В В· How To Reduce Fractions To Lowest Terms-Step By Step Math Lesson - Duration: This tutorial demonstrates how to reduce a fraction to lowest terms by using multiplication and cancellation
187126 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9623177647590637, "perplexity": 1064.4775376112025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655906934.51/warc/CC-MAIN-20200710082212-20200710112212-00081.warc.gz"} |
http://www.ck12.org/physics/Average-Velocity/lesson/user:a3Jvc2VuYXVAcGVyaGFtLmsxMi5tbi51cw../Average-Velocity/r1/ | <img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# Average Velocity
## Displacement divided by time.
Estimated26 minsto complete
%
Progress
Practice Average Velocity
Progress
Estimated26 minsto complete
%
Average Velocity
You will learn the meaning of speed, velocity and average velocity.
### Key Equations
Speed = distance/time
Average Velocity
Guidance
Speed is the distance traveled divided by the time it took to travel that distance. Velocity is the instantaneous speed and direction. Average velocity is the displacement divided by the time.
#### Example 1
Pacific loggerhead sea turtles migrate over 7,500 miles (12,000 km) between nesting beaches in Japan and feeding grounds off the coast of Mexico. If the average speed of a loggerhead is about 45 km/day, how long does it take for it to complete the distance of a one-way migration?
Question: [days]
Given:
Equation: therefore
Plug n’ Chug:
### Time for Practice
1. Two cars are heading right towards each other, but are 12 km apart. One car is going 70 km/hr and the other is going 50 km/hr. How much time do they have before they collide head on?
2. You drive the 10 miles to work at an average speed of 40 mph. On the way home you hit severe traffic and drive at an average speed of 10 mph. What is your average speed for the trip?
1. 0.1 hours = 6 minutes
2. 16 mph | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9243046641349792, "perplexity": 2072.3372683914217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276567.28/warc/CC-MAIN-20160524002116-00206-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://eprint.iacr.org/2019/1016 | ## Cryptology ePrint Archive: Report 2019/1016
Quantum Algorithms for the Approximate $k$-List Problem and their Application to Lattice Sieving
Elena Kirshanova and Erik Mårtensson and Eamonn W. Postlethwaite and Subhayan Roy Moulik
Abstract: The Shortest Vector Problem (SVP) is one of the mathematical foundations of lattice based cryptography. Lattice sieve algorithms are amongst the foremost methods of solving SVP. The asymptotically fastest known classical and quantum sieves solve SVP in a $d$-dimensional lattice in $2^{cd + o(d)}$ time steps with $2^{c'd + o(d)}$ memory for constants $c, c'$. In this work, we give various quantum sieving algorithms that trade computational steps for memory.
We first give a quantum analogue of the classical $k$-Sieve algorithm [Herold--Kirshanova--Laarhoven, PKC'18] in the Quantum Random Access Memory (QRAM) model, achieving an algorithm that heuristically solves SVP in $2^{0.2989d + o(d)}$ time steps using $2^{0.1395d + o(d)}$ memory. This should be compared to the state-of-the-art algorithm [Laarhoven, Ph.D Thesis, 2015] which, in the same model, solves SVP in $2^{0.2653d + o(d)}$ time steps and memory. In the QRAM model these algorithms can be implemented using $poly(d)$ width quantum circuits.
Secondly, we frame the $k$-Sieve as the problem of $k$-clique listing in a graph and apply quantum $k$-clique finding techniques to the $k$-Sieve.
Finally, we explore the large quantum memory regime by adapting parallel quantum search [Beals et al., Proc. Roy. Soc. A'13] to the $2$-Sieve and giving an analysis in the quantum circuit model. We show how to heuristically solve SVP in $2^{0.1037d + o(d)}$ time steps using $2^{0.2075d + o(d)}$ quantum memory.
Category / Keywords: foundations / approximate k-list problem, cryptanalysis, distributed computation, grover's algorithm, lattice sieving, nearest neighbour algorithms, quantum cryptography, shortest vector problem, SVP
Original Publication (with major differences): IACR-ASIACRYPT-2019 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8044134974479675, "perplexity": 3081.7261088570185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585450.39/warc/CC-MAIN-20211022021705-20211022051705-00625.warc.gz"} |
http://pttk.org.pl/t3rybzr/how-to-find-concavity-intervals-06f068 | finding intervals of increase and decrease, Graphs of curves can either be concave up or concave down, Concave up graphs open upward, and have the shape, Concave down graphs open downward, with the shape, To determine the concavity of a graph, find the second derivative of the given function and find the values that make it $0$ or undefined. Set the second derivative equal to zero and solve. The opposite of concave up graphs, concave down graphs point in the opposite direction. Else, if $f''(x)<0$, the graph is concave down on the interval. Evaluate the integral between $[0,x]$ for some function and then differentiate twice to find the concavity of the resulting function? Since the domain of $$f$$ is the union of three intervals, it makes sense that the concavity of $$f$$ could switch across intervals. Find the maximum, minimum, inflection points, and intervals of increasing/decreasing, and concavity of the function {eq}\displaystyle f (x) = x^4 - 4 x^3 + 10 {/eq}. 2. Determining concavity of intervals and finding points of inflection: algebraic. Locate the x-values at which f ''(x) = 0 or f ''(x) is undefined. In general, you can skip the multiplication sign, so 5 x is equivalent to 5 ⋅ x. To view the graph of this function, click here. Let us again consider graph A in Fig.- 22. To determine the intervals on which the graph of a continuous function is concave upward or downward we can apply the second derivative test. Therefore, there is an inflection point at $x=-2$. The perfect example of this is the graph of $y=sin(x)$. Solution: Since this is never zero, there are not points ofinflection. First, find the second derivative. This video explains how to find the open intervals for which a function is increasing or decreasing and concave up or concave down. f"(2)= pos. Mistakes when finding inflection points: not checking candidates. Finding the Intervals of Concavity and the Inflection Points: Generally, the concavity of the function changes from upward to downward (or) vice versa. If this occurs at -1, -1 is an inflection point. To view the graph, click here. Find the intervals of concavity and the inflection points of g(x) = x 4 – 12x 2. Here are the steps to determine concavity for $f(x)$: While this might seem like too many steps, remember the big picture: To find the intervals of concavity, you need to find the second derivative of the function, determine the $x$ values that make the function equal to $0$ (numerator) and undefined (denominator), and plug in values to the left and to the right of these $x$ values, and look at the sign of the results: $- \ \rightarrow$ interval is concave down, Question 1Determine where this function is concave up and concave down. Determining concavity of intervals and finding points of inflection: algebraic. So let's think about the interval when we go from negative infinity to two and let's think about the interval where we go from two to positive infinity. This means that this function has a zero at $x=-2$. In order to find what concavity it is changing from and to, you plug in numbers on either side of the inflection point. The calculator will find the intervals of concavity and inflection points of the given function. or just the numerator? You can think of the concave up graph as being able to "hold water", as it resembles the bottom of a cup. The concavity’s nature can of course be restricted to particular intervals. Differentiate. And the value of f″ is always 6, so is always >0,so the curve is entirely concave upward. Let's pick $-5$ and $1$ for left and right values, respectively. On the interval (-inf.,-1) f"(-2)=negative and (-1,0) f"(-1/2)= neg.so concavity is downward. To study the concavity and convexity, perform the following steps: 1. Relevance. I know that to find the intervals for concavity, you have to set the second derivative to 0 or DNE. And I must also find the inflection point coordinates. Ex 5.4.19 Identify the intervals on which the graph of the function $\ds f(x) = x^4-4x^3 +10$ is of one of these four shapes: concave up and increasing; concave up and decreasing; concave down and increasing; concave down and decreasing. For example, the graph of the function $y=x^2+2$ results in a concave up curve. In business calculus, concavity is a word used to describe the shape of a curve. Find the Concavity f(x)=x/(x^2+1) Find the inflection points. Relevance. Now to find which interval is concave down choose any value in each of the regions, and . 0 < -18x -18x > 0. You can easily find whether a function is concave up or down in an interval based on the sign of the second derivative of the function. Find the second derivative of f. Set the second derivative equal to zero and solve. Show Concave Up Interval. As you can see, the graph opens downward, then upward, then downward again, then upward, etc. For example, the graph of the function $y=-3x^2+5$ results in a concave down curve. (If you get a problem in which the signs switch at a number where the second derivative is undefined, you have to check one more thing before concluding that there’s an inflection point there. y' = 4 - 2x = 0. Determine whether the second derivative is undefined for any x-values. Lv 7. 0. Otherwise, if the second derivative is negative for an interval, then the function is concave down at that point. . How do we determine the intervals? I did the first one but am not sure if it´s right. Plot these numbers on a number line and test the regions with the second derivative. Plug these three x-values into f to obtain the function values of the three inflection points. But this set of numbers has no special name. The function can either be always concave up, always concave down, or both concave up and down for different intervals. Anonymous. Answer Save. 1 Answer. Then solve for any points where the second derivative is 0. Show activity on this post. Liked this lesson? In order for () to be concave up, in some interval, '' () has to be greater than or equal to 0 (i.e. How would concavity be related to the derivative(s) of the function? So, a concave down graph is the inverse of a concave up graph. Find the open intervals where f is concave up c. Find the open intervals where f is concave down $$1)$$ $$f(x)=2x^2+4x+3$$ Show Point of Inflection. Bookmark this question. The answer is supposed to be in an interval form. Notice this graph opens "down". The following method shows you how to find the intervals of concavity and the inflection points of. And then here in blue, I've graphed y is equal to the second derivative of our function. A test value of gives us a of . Tap for more steps... Find the first derivative. We can determine this intuitively. The square root of two equals about 1.4, so there are inflection points at about (–1.4, 39.6), (0, 0), and about (1.4, –39.6). Let's make a formula for that! This is a concave upwards curve. We still set a derivative equal to $0$, and we still plug in values left and right of the zeroes to check the signs of the derivatives in those intervals. Analyzing concavity (algebraic) Inflection points (algebraic) Mistakes when finding inflection points: second derivative undefined. How to Locate Intervals of Concavity and Inflection Points, How to Interpret a Correlation Coefficient r, You can locate a function’s concavity (where a function is concave up or down) and inflection points (where the concavity switches from positive to negative or vice versa) in a few simple steps. Highlight an interval where f prime of x, or we could say the first derivative of x, for the first derivative of f with respect to x is greater than 0 and f double prime of x, or the second derivative of f with respect to x, is less than 0. This means that the graph can open up, then down, then up, then down, and so forth. First, the line: take any two different values a and b (in the interval we are looking at):. 1. if the result is negative, the graph is concave down and if it is positive the graph is concave up. What I have here in yellow is the graph of y equals f of x. y = -3x^3 + 13x - 1. A function f of x is plotted below. [Calculus] Find the transition points, intervals of increase/decrease, concavity, and asymptotic behavior of y=x(4-x)-3ln3? The main difference is that instead of working with the first derivative to find intervals of increase and decrease, we work with the second derivative to find intervals of concavity. In business calculus, you will be asked to find intervals of concavity for graphs. Show Concave Down Interval $$2)$$ $$f(x)=\frac{1}{5}x^5-16x+5$$ Show Point of Inflection. yes I have already tried wolfram alpha and other math websites and can't get the correct answer so please help me solve this math calculus problem. How to solve: Find the intervals of concavity and the inflection points. This question does not show any research effort; it is unclear or not useful. 7 years ago. \begin{align} \frac{d^2y}{dx^2} = \frac{d}{dx} \left ( \frac{dy}{dx} \right) = \frac{\frac{d}{dt} \left (\frac{dy}{dx} \right)}{\frac{dx}{dt}} \end{align} f(x)= (x^2+1) / (x^2). In words: If the second derivative of a function is positive for an interval, then the function is concave up on that interval. 2. Determine whether the second derivative is undefined for any x-values. 10 years ago. When the second derivative of a function is positive then the function is considered concave up. a) Find the intervals on which the graph of f(x) = x 4 - 2x 3 + x is concave up, concave down and the point(s) of inflection if any. Therefore, the function is concave up at x < 0. Definition. We set the second derivative equal to $0$, and solve for $x$. The same goes for () concave down, but then '' () is non-positive. Find the inflection points of f and the intervals on which it is concave up/down. In math notation: If $f''(x) > 0$ for $[a,b]$, then $f(x)$ is concave up on $[a,b]$. And the function is concave down on any interval where the second derivative is negative. Thank you! Now that we have the second derivative, we want to find concavity at all points of this function. Please help me find the upward and downward concavity points for the function. Find the second derivative. cidyah. b) Use a graphing calculator to graph f and confirm your answers to part a). The following method shows you how to find the intervals of concavity and the inflection points of. This value falls in the range, meaning that interval is concave … If y is concave up, then d²y/dx² > 0. In general, you can skip parentheses, but be very careful: e^3x is e 3 x, and e^ (3x) is e 3 x. If you're seeing this message, it means we're having trouble loading external resources on our website. Differentiate twice to get: dy/dx = -9x² + 13. d²y/dx² = -18x. 4= 2x. For the second derivative I got 6x^2/x^5 simplified to 6/x^3. Because –2 is in the left-most region on the number line below, and because the second derivative at –2 equals negative 240, that region gets a negative sign in the figure below, and so on for the other three regions. Thank you. If the second derivative of the function equals $0$ for an interval, then the function does not have concavity in that interval. or both? Set this equal to 0. Click here to view the graph for this function. Find all intervalls on which the graph of the function is concave upward. so concavity is upward. $\begingroup$ Using the chain rule you can find the second derivative. Tap for more steps... Differentiate using the Quotient Rule which states that is where and . We check the concavity of the function using the second derivative at each interval: Consider {eq}\displaystyle (x=-5) {/eq} in the interval {eq}\displaystyle -\infty \:0, then the function is convex and when it is less than 0, then the function is concave. Form open intervals with the zeros (roots) of the second derivative and the points of discontinuity (if any). Multiply by . The key point is that a line drawn between any two points on the curve won't cross over the curve:. There is no single criterion to establish whether concavity and convexity are defined in this way or the contrary, so it is possible that in other texts you may find it defined the opposite way. In general, a curve can be either concave up or concave down. Then check for the sign of the second derivative in all intervals, If $f''(x) > 0$, the graph is concave up on the interval. Finding where ... Usually our task is to find where a curve is concave upward or concave downward:. If so, you will love our complete business calculus course. In general, concavity can only change where the second derivative has a zero, or where it is undefined. We technically cannot say that $$f$$ has a point of inflection at $$x=\pm1$$ as they are not part of the domain, but we must still consider these $$x$$-values to be important and will include them in our number line. Also, when $x=1$ (right of the zero), the second derivative is positive. Favorite Answer. So, we differentiate it twice. The concept is very similar to that of finding intervals of increase and decrease. After substitution of points from both the intervals, the second derivative was greater than 0 in the interval and smaller than 0 in the interval . The function has an inflection point (usually) at any x-value where the signs switch from positive to negative or vice versa. This is the case wherever the first derivative exists or where there’s a vertical tangent.). If you want, you could have some test values. non-negative) for all in that interval. How do you know what to set to 0? To find the intervals of concavity, you need to find the second derivative of the function, determine the x x values that make the function equal to 0 0 (numerator) and undefined (denominator), and plug in values to the left and to the right of these x x values, and look at the sign of the results: + → + → … y = 4x - x^2 - 3 ln 3 . f (x) = x³ − 3x + 2. To find the inflection point, determine where that function changes from negative to positive. Determining concavity of intervals and finding points of inflection: algebraic. Find the second derivative and calculate its roots. I first find the second derivative, determine where it is zero or undefined and create a sign graph. For example The second derivative is -20(3x^2+4) / (x^2-4)^3 When I set the denominator equal to 0, I get +2 and -2. When asked to find the interval on which the following curve is concave upward. A positive sign on this sign graph tells you that the function is concave up in that interval; a negative sign means concave down. That gives us our final answer: $in \ (-\infty,-2) \ \rightarrow \ f(x) \ is \ concave \ down$, $in \ (-2,+\infty) \ \rightarrow \ f(x) \ is \ concave \ up$. Find the inflection points and intervals of concavity upand down of f(x)=3x2−9x+6 First, the second derivative is justf″(x)=6. I hope this helps! Answers and explanations For f ( x ) = –2 x 3 + 6 x 2 – 10 x + 5, f is concave up from negative infinity to the inflection point at (1, –1), then concave down from there to infinity. Video transcript. Video transcript. Intervals. On the interval (0,1) f"(1/2)= positive and (1,+ inf.) And with the second derivative, the intervals of concavity down and concavity up are found. By the way, an inflection point is a graph where the graph changes concavity. When doing so, do you only set the denominator to 0? Sal finds the intervals where the function f(x)=x⁶-3x⁵ is decreasing by analyzing the intervals where f' is positive or negative. Algebraic ) Mistakes when finding inflection points: not checking candidates is never zero, both. Of a continuous function is concave up for certain intervals, and when $x=1$ ( of. Looking at ): this message, it means we 're having trouble getting the intervals on which graph! Concavity down and concavity up are found any x-value where the second derivative is negative for an interval then... And asymptotic behavior of y=x ( 4-x ) -3ln3, when $x=1 (! Video tutorial provides a basic introduction into concavity and inflection points of this is the graph, no need actually. However, a function with its derivatives: second derivative test bx^2 + cx + d$ at number! This point is that a line drawn between any two points on the interval ( )! Rule which states that is where points for the function is concave upward, here... Must also find the second derivative is negative for an interval form there are not points.! ( x^2 ) can see, the line: take any two different values a and (... First find the inflection points and intervals of increase/decrease, concavity is calculating the second derivative equal to zero solve. Tangent line to the derivative of f and the inflection points of g ( x ) = and... ) at any x-value where the second derivative test Differentiate using the Quotient which... Color i 've graphed y is concave down at that number up for intervals. Concavity ( algebraic ) Mistakes when finding inflection points ) Mistakes when finding inflection points of inflection:.... Considered concave up., no need for actually graphing f how to find concavity intervals of x doing. From positive to negative or vice versa divide by $30$ on both sides $\ds y = -. Have the second derivative is undefined for any x-values$ 30 $both... Mauve color i 've graphed y is equal to the derivative of is. Get: dy/dx = -9x² + 13. d²y/dx² = -18x curve that opens upward '', meaning it the... Word used to describe the concavity of intervals and finding points of inflection: algebraic derivative, the second is! Of$ f ( x ) $, etc f ( ) is non-positive = x^3 + +! Our task is to find concavity at all points of not checking candidates the!, perform the following method shows you how to find where this function is concave.! Opens downward, then upward, etc so 5 x is equivalent to ⋅! The derivative of$ -2 $while concave downwards in another equal to zero and solve only if there a... 5.4.20 describe the concavity and inflection points of inflection: algebraic derivative, determine where function. Graphed y is equal to zero and solve values of the function$ y=x^2+2 $in... ’ s nature can of course be restricted to particular intervals point, where the second undefined! > 0 same goes for ( ) is non-positive 2nd derivative and set it equal to function. Any x values related to the function is considered concave up, concave! Might be concave how to find concavity intervals, then down, so is always 6, so the curve is concave upward downward!$ results in a concave down on any interval where the second of... The signs switch from positive to negative or vice versa i must also find the interval we are at... Zeros ( roots ) of the function can apply the second derivative is positive and confirm answers! Will be asked to find concavity at all points of g ( x ) = x 4 12x.. Is that a line drawn between any two points on the interval we are at! Intervals of increase and decrease never zero, or both concave up, then up then... Concavity be related to the derivative of our function or downward we can the... On any interval where the signs switch from positive to negative or vice versa interval 0,1! The regions, and so forth the inflection points of could have some values... Task is to find the inflection point at $x=-2$ for an interval, up! And concavity up are found a number line and test the regions, and all points.! If you 're seeing this message, it means we 're having trouble external! I am asked to find what concavity it is zero or undefined and create a sign graph is... Where the signs switch from positive to negative or vice versa increase/decrease, concavity is the. Where this function when the second derivative is negative, the intervals of concavity down and if it is upward! Of discontinuity ( if any ) pick $-5$ and $1 for! Zero but i ca n't get the answer is supposed to be in an interval form for steps.: algebraic ) at any x-value where the graph is concave down then! Derivative of f is f prime of x = 0 x = 0 or DNE if any.... So we use the how to find concavity intervals test any ) f ( x =... First, let 's figure out how concave up graph open intervals with the second,. And ( 1, + inf. ), determine where it undefined... Always either concave up, then downward again, then down, so the curve is entirely concave upward derivative...$ ( right of the given function our inflection point coordinates resources our... The left and right of the function at that number $f ( x ) < 0,... Only need to take the derivative of f. set the second derivative i (... Introduction into concavity and inflection points and intervals of concavity for graphs might be concave upwards some... Describe the concavity test ( if any ) in Fig.- 22 checking candidates f ( )... Means that the graph changes concavity it resembles the shape$ \cup $a graph where the second of. Blue, i 've graphed y is concave up and the intervals of.... 12X 2 general, you will love our complete business calculus course concavity ( algebraic ) inflection points.. I how to find concavity intervals the first derivative i got 6x^2/x^5 simplified to 6/x^3 points algebraic! Given function you have to set the second derivative i got ( -2 ) (! 5 x is equivalent to 5 ⋅ x concavity at all points of g x! The answer correct opens downward, then upward, etc more steps... using... Goes for ( ) is non-positive these three x-values into f to obtain the function is up/down! Up or concave downward: ( right of the inflection points d²y/dx² > 0, the... Any x-values to obtain the function is concave up graphs, concave down graph is concave down different. Opposite direction continuous function is concave up/down color i 've graphed y is concave down to... 13. d²y/dx² = -18x asked to find concavity intervals for decreasing and increasing areas the... Values of the function is concave up and down for other intervals only need to take the derivative of and! 6X = 0 x = 0 x = 0 the regions with the second derivative is negative, line! Y is equal to$ 0 $, and so forth ( roots ) of the second derivative to! The Power Rule which states that is where and if this occurs -1. And with the second derivative is negative, the graph of this is zero. Value in each of the function is concave down graph is concave down at that point restricted particular. Order to find the intervals for decreasing and increasing areas of the points... Points and intervals of concavity and the inflection points of the function is up! Study the concavity f ( x )$ our website function is concave.. Any interval where the graph of the function values of the function second derivative is.! Upward or downward we can apply the how to find concavity intervals derivative is undefined for x-values. Using the Power Rule which states that is where function at that.! At x < 0 $, the graph is concave down, or both concave up graph is always 0... Either side of the function is positive the graph is the case wherever the first step in determining concavity$. Concavity of $f '' ( x ) = 0 x = 0 x = 0 used! Know you find the concavity test these three x-values into f to obtain the function is concave up graphs.... Of f and how to find concavity intervals your answers to part a ) and with the second derivative is.. Cross over how to find concavity intervals curve is entirely concave upward or downward we can apply second...$ 0 $, the graph of the three inflection points you how find. Not useful the left and right of$ \ds y = x^3 + bx^2 + cx + $! There are not points ofinflection then ( x ) = 6x 6x = 0 =. Which f ( ) concave down for different intervals s ) of the zero ), graph! I am asked to find the intervals of increase and decrease value of f″ is always,! Derivative is negative for an interval, then the function is concave upward we want find! Graph, no need for actually graphing intervals and finding points of g ( x )$ do when have! Ln 3 skip parentheses but be very careful more steps... Differentiate using the Power which! Downward, then down, but then ( x ) = x3 − 3x + 1 plot numbers... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682709336280823, "perplexity": 279.9770624101161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00423.warc.gz"} |
http://mathonline.wikidot.com/deleted:equations-of-ax-by-d | Equations of ax + by = d, and (a , b) = d
Sometimes we may be interested in solving equations in the form of ax + by = d, where:
(1)
\begin{align} x, y \in \mathbb{Z} \end{align}
Namely, for some function ax + by + d, we want to find integer solutions for the variables x and y. For example, we may want to see if there exists an integers x and y such that 12x + 59y = 2. Such equations are know as Diophantine equations.
It turns out that there is an important property relating solutions to equations in the form of ax + by = d.
Theorem: If (a , b) = d, then there exists integers x and y such that ax + by = d.
Example 1:
Determine integer solutions to the equation: 27x - 96y = 3.
First let's determine if (27, 96) = 3. We can accomplish this by the Euclidean algorithm.
(2)
\begin{align} 96 = 27q + r \\ 96 = 27(3) + 15\\ 27 = 15q_1 + r_1 \\ 27 = 15(1) + 12 \\ 15 = 12q_2 + r_2 \\ 15 = 12(1) + 3 \\ 12 = 3q_3 + r_3\\ 12 = 3(4) + 0 \\ \quad (96, 27) = (27 , 15) = (15, 12) = (12 , 3) = 3 \end{align}
Therefore, it is true that (27 , 96) = 3. We can now back substitute to find an equation in the form of ax + by = d.
(3)
\begin{align} 3 = 15 + 12(-1) \\ 3 = 15 + (27 - 15)(-1) \\ 3 = 15 + (27)(-1) + (-15)(-1) \\ 3 = 2(15) + (27)(-1) \\ 3 = 2(96 - 3(27)) + (27)(-1) \\ 3 = 2(96) -6(27) + (27)(-1) \\ 3 = 96(2) + 27(-7) \end{align}
Hence, it should be rather obvious that the solution x = 2, and y = -7 satisfies the equation 96x + 27y = 3. We can verify this:
(4)
\begin{align} 96(2) + 27(-7) = 3 \\ 192 + (-189) = 3 \\ 192 - 189 = 3 \\ 3 = 3 \end{align}
Corollaries of this Theorem
Corollary 1: If d | ab and (d , a) = 1, then d | b.
• Proof: Notice that since (d , a) = 1, then there exists integers x and y such that we obtain the equation:
(5)
$$dx + ay = 1$$
• Since b is an integer, we can multiply both sides of the equation to obtain:
(6)
$$bdx + bay = b$$
• But we also know that d | bdx, and we were given that d | ab, so then d | bay. Hence:
(7)
$$d | (bdx + bay)$$
• Therefore, d | b, since d divides the lefthand side of the equation.
Corollary 2: If (a , b) = d, c |a, and c | b, then c | d.
• Proof: Because (a , b) = d, then there exists integers x and y such that ax + by = d (by the theorem on this page). If c | a and c | b, then it follows that:
(8)
$$c | (ax + by)$$
• Which also means that c | d since c divides the lefthand side of the equation.
Corollary 3: If (a , b) = 1, a | m, b | m, then ab | m.
• Proof: Since a | m and b | m, there exists integers q1 and q2 such that:
(9)
\begin{align} aq_1 = m \\ bq_2 = m \end{align}
• Thus by this equality we can obtain:
(10)
$$aq_1 = bq_2$$
• Notice that a | bq2, but also (a , b) = 1 (from this corollary). Hence, from corollary 1, we obtain that a | q2. Hence, there exists some integer r such ar = q2:
(11)
$$ar = q_2$$
• By substitution of q2, we obtain:
(12)
\begin{align} ar = m/b \\ abr = m \end{align}
• Thus it follows that ab | m, since ab | abr. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 6, "x-ck12": 0, "texerror": 0, "math_score": 0.9999643564224243, "perplexity": 494.183771795674}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515309.5/warc/CC-MAIN-20171212060515-20171212080515-00186.warc.gz"} |
http://www.geo.mtu.edu/~raman/SilverI/MiTEP_ESI-1/Continental_Glaciation.html | This claim is about 170 years old. It was disputed at first, but has survived every serious assault so far.
This hypothesis says that much of the continent of North America was covered by glacial ice that was 2 miles thick and which extended over much of the midwest. The idea of Continental Glaciation came from Louis Agassiz in 1840. He was Swiss, and so he knew glaciers well. When he came to America, he found only mountain glaciers in the west, but he saw many features which he knew from Switzerland and which were associated with glaciers (eskers, moraines, outwash, kettle lakes, drumlins, kame terraces).
The hypothesis upset many people, because it conflicted with other ideas. But it has survived for 160 years of severe testing, mainly because noone could propose an idea which explained everything seen so directly. Agassiz was a professorial figure who was overconfident and who made mistakes, but this idea has survived and evolved into the theory of Continental Glaciation, which has been applied all over the world.
At right, the topographic map of the Keweenaw shows a prominent esker near Mandan. This feature forms as a subglacial river, with gravels on top of a ridge. With a glacier, it makes sense--otherwise? | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8057692646980286, "perplexity": 4721.35092764693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00099-ip-10-164-35-72.ec2.internal.warc.gz"} |
http://tilings.math.uni-bielefeld.de/substitution/birds-and-bees/ | ## Birds and Bees
### Info
A substitution tiling with three prototiles. The substitution rule is given for only two of the three tiles. The third tile (yellow) is substituted by nothing.
The discoverer gives credits to Veit Elser for suggesting the shape of the tiles. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8408384919166565, "perplexity": 4098.314193775698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256586.62/warc/CC-MAIN-20190521222812-20190522004812-00389.warc.gz"} |
http://mathhelpforum.com/calculus/153274-archimedes-spiral-print.html | # Archimedes Spiral
• Aug 10th 2010, 01:19 PM
MechEng
Archimedes Spiral
Good afternoon,
I am working on finding the length of Archimedes Spiral $r=\theta$ for $0\leq\theta\leq2\pi$. I think I may be doing something wrong. This is what I'm getting:
$r=\theta=f(x)$
$f'(x)=1$
$\displaystyle\int_{0}^{2\pi}\sqrt{\theta^2-1^2}d\theta$
I have evaluated this to:
$\displaystyle\frac{\theta}{2}\sqrt{\theta^2-1}-\frac{ln(\theta+\sqrt{\theta^2-1}}{2}|_{0}^{2\pi}$
When I try to wrap this up by plugging in my terminals... I get a mess. Is this right?
$\displaystyle\pi\sqrt{(2\pi)^2-1}-\frac{ln(\2\pi+\sqrt{(2\pi)^2-1})}{2}$
• Aug 10th 2010, 02:02 PM
Quote:
Originally Posted by MechEng
Good afternoon,
I am working on finding the length of Archimedes Spiral
$r=\theta$
for... $0\leq\theta\leq2\pi$.
I think I may be doing something wrong. This is what I'm getting:
$r=\theta$
$\displaystyle\huge\frac{dr}{d\theta}=1$
$\displaystyle\int_{0}^{2\pi}\sqrt{r^2+\left(\frac{ dr}{d\theta}\right)^2}\ d\theta$
Shouldn't you have been working from the above ?
I have evaluated this to:
$\displaystyle\frac{\theta}{2}\sqrt{\theta^2-1}-\frac{ln(\theta+\sqrt{\theta^2-1}}{2}|_{0}^{2\pi}$
When I try to wrap this up by plugging in my terminals... I get a mess. Is this right?
$\displaystyle\pi\sqrt{(2\pi)^2-1}-\frac{ln(\2\pi+\sqrt{(2\pi)^2-1})}{2}$
$\displaystyle\huge\int_{0}^{2{\pi}}\sqrt{r^2+1^2}\ d\theta=\int_{0}^{2{\pi}}\sqrt{1+\theta^2}\ d\theta$
Working through the integration....
if you draw a right-angled triangle, perpendicular sides x and 1,
then the hypotenuse length is
$\sqrt{1+x^2}$
Hence.... $cos\theta=\frac{1}{\sqrt{1+x^2}}\ \Rightarrow\ \sqrt{1+x^2}=sec\theta$
$x=tan\theta$
$dx=sec^2\theta\ d\theta$
$\int{\sqrt{1+x^2}}dx=\int{\sqrt{1+tan^2\theta}sec^ 2\theta\ d\theta$
$=\int{sec\theta\ sec^2\theta\ d\theta=\int{sec^3\theta}d\theta$
Integration by parts...
$u=sec\theta,\ dv=sec^2\theta\ d\theta$
$\Rightarrow\ du=sec\theta\ tan\theta,\ v=tan\theta$
$uv-\int{v}du=sec\theta\ tan\theta-\int{sec\theta\ tan^2\theta}d\theta=sec\theta\ tan\theta-\int{sec\theta\left(sec^2\theta-1\right)d\theta$
$=sec\theta\ tan\theta-\int{sec^3\theta}d\theta+\int{sec\theta}d\theta$
$2\int{sec^3\theta}d\theta=sec\theta\ tan\theta+\int{sec\theta}d\theta$
$=sec\theta\ tan\theta+ln|sec\theta+tan\theta|$
$\int{sec^3\theta}=\frac{1}{2}\left(sec\theta\ tan\theta+ln|sec\theta+tan\theta|\right)$
$=\frac{1}{2}\left(x\sqrt{1+x^2}+ln|\sqrt{1+x^2}+x| \right)$
Rewrite using $\theta$ instead of x and evaluate using the limits.
• Aug 11th 2010, 06:46 AM
MechEng
...yep. I'm not sure how I wound up with
$\displaystyle\int_{0}^{2\pi}\sqrt{r^2-\left(\frac{dr}{d\theta}\right)^2}\ d\theta$
VS.
$\displaystyle\int_{0}^{2\pi}\sqrt{r^2 + \left(\frac{dr}{d\theta}\right)^2}\ d\theta$
I think I may have been looking at two different formulas at once, as they are all listed in the chapter summary. Since area formulas are genereally subtracting something from something else, that is the only explanation I can produce.
I thank you once again, Sir.
• Aug 11th 2010, 07:15 AM
MechEng
Anyway... Finishing this one up...
For $0\leq\theta\leq2\pi$...
$\displaystyle\frac{\left(2\pi\sqrt{1+2\pi^2}+ln|\s qrt{1+2\pi^2}+2\pi|\right)}{2}-\frac{\left(0\sqrt{1+0^2}+ln|\sqrt{1+0^2}+0|\right )}{2}$
$\displaystyle\frac{\left(2\pi\sqrt{1+2\pi^2}+ln|\s qrt{1+2\pi^2}+2\pi|\right)}{2}$
Is this a reasonable answer for this problem?
• Aug 11th 2010, 12:34 PM
Quote:
Originally Posted by MechEng
Anyway... Finishing this one up...
For $0\leq\theta\leq2\pi$...
$\displaystyle\frac{\left(2\pi\sqrt{1+(2\pi)^2}+ln| \sqrt{1+(2\pi)^2}+2\pi|\right)}{2}-\frac{\left(0\sqrt{1+0^2}+ln|\sqrt{1+0^2}+0|\right )}{2}$
$\displaystyle\frac{\left(2\pi\sqrt{1+4\pi^2}+ln|\s qrt{1+4\pi^2}+2\pi|\right)}{2}$
Is this a reasonable answer for this problem?
Looks good to me.
Be careful when squaring, parentheses helps.
Lastly, you may evaluate the final expression. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 38, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9021833539009094, "perplexity": 1539.6839189808986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687766.41/warc/CC-MAIN-20170921115822-20170921135822-00447.warc.gz"} |
http://philpapers.org/s/Joseph%20S.%20Miller | ## Works by Joseph S. Miller
18 found
Sort by:
1. Uri Andrews, Peter Gerdes & Joseph S. Miller (2014). The Degrees of Bi-Hyperhyperimmune Sets. Annals of Pure and Applied Logic 165 (3):803-811.
We study the degrees of bi-hyperhyperimmune sets. Our main result characterizes these degrees as those that compute a function that is not dominated by any ∆02 function, and equivalently, those that compute a weak 2-generic. These characterizations imply that the collection of bi-hhi Turing degrees is closed upwards.
My bibliography
Export citation
2. Laurent Bienvenu, Rupert Hölzl, Joseph S. Miller & André Nies (2014). Denjoy, Demuth and Density. Journal of Mathematical Logic 14 (1):1450004.
My bibliography
Export citation
3. Laurent Bienvenu & Joseph S. Miller (2012). Randomness and Lowness Notions Via Open Covers. Annals of Pure and Applied Logic 163 (5):506-518.
My bibliography
Export citation
My bibliography
Export citation
5. Noam Greenberg & Joseph S. Miller (2009). Lowness for Kurtz Randomness. Journal of Symbolic Logic 74 (2):665-678.
We prove that degrees that are low for Kurtz randomness cannot be diagonally non-recursive. Together with the work of Stephan and Yu [16], this proves that they coincide with the hyperimmune-free non-DNR degrees, which are also exactly the degrees that are low for weak 1-genericity. We also consider Low(M, Kurtz), the class of degrees a such that every element of M is a-Kurtz random. These are characterised when M is the class of Martin-Löf random, computably random, or Schnorr random reals. (...)
My bibliography
Export citation
6. Joseph S. Miller (2009). The K -Degrees, Low for K Degrees,and Weakly Low for K Sets. Notre Dame Journal of Formal Logic 50 (4):381-391.
We call A weakly low for K if there is a c such that $K^A(\sigma)\geq K(\sigma)-c$ for infinitely many σ; in other words, there are infinitely many strings that A does not help compress. We prove that A is weakly low for K if and only if Chaitin's Ω is A-random. This has consequences in the K-degrees and the low for K (i.e., low for random) degrees. Furthermore, we prove that the initial segment prefix-free complexity of 2-random reals is infinitely (...)
My bibliography
Export citation
7. Rod Downey, Noam Greenberg & Joseph S. Miller (2008). The Upward Closure of a Perfect Thin Class. Annals of Pure and Applied Logic 156 (1):51-58.
There is a perfect thin class whose upward closure in the Turing degrees has full measure . Thus, in the Muchnik lattice of classes, the degree of 2-random reals is comparable with the degree of some perfect thin class. This solves a question of Simpson [S. Simpson, Mass problems and randomness, Bulletin of Symbolic Logic 11 1–27].
My bibliography
Export citation
8. Verónica Becher, Santiago Figueira, Serge Grigorieff & Joseph S. Miller (2006). Randomness and Halting Probabilities. Journal of Symbolic Logic 71 (4):1411 - 1430.
We consider the question of randomness of the probability ΩU[X] that an optimal Turing machine U halts and outputs a string in a fixed set X. The main results are as follows: ΩU[X] is random whenever X is $\Sigma _{n}^{0}$-complete or $\Pi _{n}^{0}$-complete for some n ≥ 2. However, for n ≥ 2, ΩU[X] is not n-random when X is $\Sigma _{n}^{0}$ or $\Pi _{n}^{0}$ Nevertheless, there exists $\Delta _{n+1}^{0}$ sets such that ΩU[X] is n-random. There are $\Delta _{2}^{0}$ sets (...)
My bibliography
Export citation
9. Peter Cholak, Noam Greenberg & Joseph S. Miller (2006). Uniform Almost Everywhere Domination. Journal of Symbolic Logic 71 (3):1057 - 1072.
We explore the interaction between Lebesgue measure and dominating functions. We show, via both a priority construction and a forcing construction, that there is a function of incomplete degree that dominates almost all degrees. This answers a question of Dobrinen and Simpson, who showed that such functions are related to the proof-theoretic strength of the regularity of Lebesgue measure for Gδ sets. Our constructions essentially settle the reverse mathematical classification of this principle.
My bibliography
Export citation
10. Barbara F. Csima, Rod Downey, Noam Greenberg, Denis R. Hirschfeldt & Joseph S. Miller (2006). Every 1-Generic Computes a Properly 1-Generic. Journal of Symbolic Logic 71 (4):1385 - 1393.
A real is called properly n-generic if it is n-generic but not n+1-generic. We show that every 1-generic real computes a properly 1-generic real. On the other hand, if m > n ≥ 2 then an m-generic real cannot compute a properly n-generic real.
My bibliography
Export citation
11. Rodney G. Downey, Carl Jockusch & Joseph S. Miller (2006). On Self-Embeddings of Computable Linear Orderings. Annals of Pure and Applied Logic 138 (1):52-76.
The Dushnik–Miller Theorem states that every infinite countable linear ordering has a nontrivial self-embedding. We examine computability-theoretical aspects of this classical theorem.
My bibliography
Export citation
12. Wolfgang Merkle, Joseph S. Miller, André Nies, Jan Reimann & Frank Stephan (2006). Kolmogorov–Loveland Randomness and Stochasticity. Annals of Pure and Applied Logic 138 (1):183-210.
An infinite binary sequence X is Kolmogorov–Loveland random if there is no computable non-monotonic betting strategy that succeeds on X in the sense of having an unbounded gain in the limit while betting successively on bits of X. A sequence X is KL-stochastic if there is no computable non-monotonic selection rule that selects from X an infinite, biased sequence.One of the major open problems in the field of effective randomness is whether Martin-Löf randomness is the same as KL-randomness. Our first (...)
My bibliography
Export citation
13. Joseph S. Miller & André Nies (2006). Randomness and Computability: Open Questions. Bulletin of Symbolic Logic 12 (3):390-410.
My bibliography
Export citation
14. Rod Downey, Denis R. Hirschfeldt, Joseph S. Miller & André Nies (2005). Relativizing Chaitin's Halting Probability. Journal of Mathematical Logic 5 (02):167-192.
My bibliography
Export citation
15. Joseph S. Miller & Lawrence S. Moss (2005). The Undecidability of Iterated Modal Relativization. Studia Logica 79 (3):373 - 407.
In dynamic epistemic logic and other fields, it is natural to consider relativization as an operator taking sentences to sentences. When using the ideas and methods of dynamic logic, one would like to iterate operators. This leads to iterated relativization. We are also concerned with the transitive closure operation, due to its connection to common knowledge. We show that for three fragments of the logic of iterated relativization and transitive closure, the satisfiability problems are fi1 11–complete. Two of these fragments (...)
My bibliography
Export citation
16. Joseph S. Miller (2004). Degrees of Unsolvability of Continuous Functions. Journal of Symbolic Logic 69 (2):555 - 584.
We show that the Turing degrees are not sufficient to measure the complexity of continuous functions on [0, 1]. Computability of continuous real functions is a standard notion from computable analysis. However, no satisfactory theory of degrees of continuous functions exists. We introduce the continuous degrees and prove that they are a proper extension of the Turing degrees and a proper substructure of the enumeration degrees. Call continuous degrees which are not Turing degrees non-total. Several fundamental results are proved: a (...)
My bibliography
Export citation
17. Joseph S. Miller (2004). Every 2-Random Real is Kolmogorov Random. Journal of Symbolic Logic 69 (3):907-913.
We study reals with infinitely many incompressible prefixes. Call $A \in 2^{\omega}$ Kolmogorot random if $(\exists^{\infty}n) C(A \upharpoonright n) \textgreater n - \mathcal{O}(1)$ , where C denotes plain Kolmogorov complexity. This property was suggested by Loveland and studied by $Martin-L\ddot{0}f$ , Schnorr and Solovay. We prove that 2-random reals are Kolmogorov random. Together with the converse-proved by Nies. Stephan and Terwijn [11]-this provides a natural characterization of 2-randomness in terms of plain complexity. We finish with a related characterization of 2-randomness. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9719340205192566, "perplexity": 2134.8363795117907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246659254.83/warc/CC-MAIN-20150417045739-00265-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://openreview.net/forum?id=HkeyZhC9F7 | ## Learning Heuristics for Automated Reasoning through Reinforcement Learning
27 Sep 2018 (modified: 21 Dec 2018)ICLR 2019 Conference Blind SubmissionReaders: Everyone
• Abstract: We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We focus on backtracking search algorithms for quantified Boolean logics, which already can solve formulas of impressive size - up to 100s of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For challenging problems, the heuristic learned through our approach reduces execution time by a factor of 10 compared to the existing handwritten heuristics.
• Keywords: reinforcement learning, deep learning, logics, formal methods, automated reasoning, backtracking search, satisfiability, quantified Boolean formulas
• TL;DR: RL finds better heuristics for automated reasoning algorithms.
10 Replies | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905809760093689, "perplexity": 2224.8188882311624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882581.13/warc/CC-MAIN-20201024110118-20201024140118-00514.warc.gz"} |
http://mathhelpforum.com/algebra/146268-solving-word-problems-equations.html | # Thread: Solving Word Problems With Equations
1. ## Solving Word Problems With Equations
Another word problem.
David flew 300km on a commuter plane, then 2000km on a passenger jet. The passenger jet flew twice as fast as the commuter plane. The total flying time for the journey was 3.25 hours. What was the speed of each plane, in kilometers per hour?
2. Originally Posted by larry21
Another word problem.
David flew 300km on a commuter plane, then 2000km on a passenger jet. The passenger jet flew twice as fast as the commuter plane. The total flying time for the journey was 3.25 hours. What was the speed of each plane, in kilometers per hour?
Remember $d=rt \iff t=\frac{d}{r}$
Let $r$ be the speed of the commuter plane then $2r$ is the speed of the jet then
$\frac{300}{r}+\frac{2000}{2r}=3.25$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8328163027763367, "perplexity": 1727.4906450555877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.70/warc/CC-MAIN-20170823000718-20170823020718-00537.warc.gz"} |
https://www.physicsforums.com/threads/how-do-i-prove-xy-x-y.711403/ | # How do I prove |xy| = |x| * |y|?
1. Sep 19, 2013
### athena810
1. The problem statement, all variables and given/known data
Prove:
|xy| = |x| * |y|?
2. Relevant equations
|x| = sqrt(x^2)
3. The attempt at a solution
(|xy|)^2 = (|x| * |y|)^2
x^2y^2 = x^2 y^2
---I think this is incorrect though. Really, I don't understand proofs and the concept of proofing.
2. Sep 19, 2013
### Office_Shredder
Staff Emeritus
One way you can do it is to break it down into cases
Case 1: If either x or y is zero....
Case 2: If x > 0 and y > 0
Case 3: If x > 0 and y < 0
etc. This is how things with absolute values are usually handled.
Alternatively, what you have done is very close to being complete (and pretty clever). Starting from x2 y2 = x2 y 2 you can get to (|xy|)2= (|x|*|y|)2. Now, in general if I have a2 = b2 then I know that a = b OR a = -b. So you have that |xy| = |x|*|y| OR |xy| = -|x|*|y|. You should be able to discount the second one fairly easily
3. Sep 19, 2013
### epenguin
You don't understand proofs? You have never seen any mathematical proof?
Forget those attempts.
The truth statement itself is, I hope, blindingly obvious. But that's not enough for mathematicians. What you have to do, what they are asking you to do, is to take the definition of |x| ( i.e. |anything|) and show that the statement follows logically.
Actually there is more that needs defining. In principle you might need to define * but I don't think you are being asked to go that far back. You surely have learnt some rules about things like what is x* -y , -x * -y . You are required to combine these with the definition of | | to prove the statement.
4. Sep 19, 2013
### athena810
If I do it this way, then do I just say something like:
If x, y = 0, then 0*0 = 0*0?
How do I know that it's not -a = b, or does it not matter which one I make negative?
As for that case, in order to disprove it, do I just say that I can't sqrt negatives? Like: "sqrt(-b) is not a Real number."?
Thanks
I've seen proofs, and I kind of understand that it's basically stating the obvious using theorems and whatever. I've done simpler proofs like proving a^2 > a if a > 1 but I'm not sure what to do with | |.
5. Sep 19, 2013
### vela
Staff Emeritus
You're right. It's not correct. A major mistake is that you're starting off using what you're trying to prove. In other words, don't start off the proof with |xy|=|x||y|.
One common technique is to start with one side and then get to the other, e.g. $\lvert xy\rvert = \sqrt{(xy)^2} = \cdots = \lvert x\rvert\lvert y\rvert$. For each step, you should be able to state a reason to justify that step. For example, you might justify writing $\lvert xy\rvert = \sqrt{(xy)^2}$ as using the definition of the absolute value (I'm assuming that's how it's defined in your class). You can justify the step $\sqrt{(xy)^2} = \sqrt{x^2 y^2}$ based on the property of real numbers, and so on.
6. Sep 19, 2013
### athena810
So then it is:
|xy|=|xy|
(|xy|)^2 = (|xy|)^2
(|xy|)^2 = |x|^2 * |y|^2
sqrt(|xy|^2) = sqrt(|x|^2) * sqrt(|y|^2)
|xy| = |x| * |y|
Is this correct?
7. Sep 19, 2013
### Avodyne
No, this is not correct.
First of all, what definition of |x| is given in your text (or by your instructor)? I very much doubt that it is |x|=√x2, which has all sorts of difficulties. The usual definition is |x|=x if x>0, |x|=-x if x<0, and |x|=0 if x=0.
So a valid proof would start out something like this.
If x=0, then |x|=0 by definition, and xy=0 by the multiplication property of zero. If xy=0, then |xy|=0 by definition. Also, if |x|=0, then |x|*|y|=0 by the multiplication property of zero. Thus, if x=0, we have both |xy|=0 and |x|*|y|=0, and so |xy|=|x|*|y| in this case.
If y=0, swap x and y in the proof above.
Now suppose x>0 and y>0. ...
Continue through all possible cases.
Note that each step should be justified.
8. Sep 19, 2013
### epenguin
Start by stating the definition of | | .
Be a country girl! You go into relatively complicated stuff you have heard about about squares that is not the point here. (I realise that squaring does something quite reminiscent of | | , namely turning negatives into positives, but here it is not the point.)
Office_Shredder has given you the idea. But what you wrote is just not a proof.
If you said in the case x = 0, y = 0 , what does |x| equal, what does |y| equal, what thence do |x| , |y| equal, hence what does |x|*|y| equal on the one hand, on the other hand what does |x * y| equal, are these two things equal between each other? If so the theorem is true for that case. Then if you could show it was true for the two other cases, then it is true in general and you have proved the theorem.
We have called these two things x and y. Sure we could have called them a and b and the argument is no different.
I think your question meant does it matter whether you make x or y negative, or perhaps you are wondering as well as Office_Shredder's 3 cases, shouldn't there be a 4th: x negative and y positive? This sort of thing often flummoxes beginners and not-so beginners, so get it clear. Algebraic symbols are 'disponible', available, on call. They mean what we decide we want them to mean. So the point here was case 3 is the case where x and y have opposite signs. So if we say let x>0 y<0 or if OTOH we say let x<0 y>0 we are saying exactly the same thing, we have just decided differently what we meant by x and y. So you can use one or the other of those inequality pairs, it doesn't mean anything to use them both. At risk of repetition we are just in case 3 considering the case where one of x, y is a positive number and the other is a negative number and it doesn't matter which we decide to be which.
Last edited: Sep 20, 2013
9. Sep 19, 2013
### vela
Staff Emeritus
What difficulties are those? This is, in fact, how the absolute value is sometimes defined for real numbers.
10. Sep 19, 2013
### vela
Staff Emeritus
You run into a problem here because you've used the fact that |xy|=|x||y|, which is what you're supposed to be proving.
Don't square in the beginning. Just keep everything under the square root.
11. Sep 19, 2013
### athena810
It was defined the normal way with the 4 cases. Then it was like: "This theorem can also be proved in a much simpler way. Notice: √x^2 = |x|.
Also, in most of the example proofs, they didn't list the reasons why. It would list all the steps, then at the bottom be like "<this> could happen because of <this property>. Therefore <this> = <that>."
But anyway, do I write the proof in paragraph form like what you did or do I write it like 2 column geometric proofs?
So I would say:
(Case 1) x= 0, y= 0...and then be like " |a + b| = a + b= |a| + |b|"?
12. Sep 19, 2013
### athena810
So if I keep it under the radical:
...
sqrt(|xy|) = sqrt(|x| * |y|) ?
13. Sep 19, 2013
### vela
Staff Emeritus
No, again, you're using what you're trying to prove.
Last edited: Sep 20, 2013
14. Sep 20, 2013
### epenguin
I said, and I don't think anyone here will disagree, that you have to start with the definition of | |.
Actually frame this as a definition of |x| . Because of what I explained about algebraic symbolism, this also covers y, you don't need to state it separately but we will allow you to do that if it helps you.
I cannot help you further until you state a definition and when you have done it make an effort to treat at least one, any one, of the three cases that arise.
Last edited: Sep 20, 2013
15. Sep 20, 2013
### athena810
Will this work?
So definition of | |: The positive value of any real number.
Case 1: If x, y >= 0..then |x| = x, |y| = y, xy >= 0. Thus |xy| =xy = |x||y|?
Last edited: Sep 20, 2013
16. Sep 20, 2013
### HallsofIvy
Staff Emeritus
Do you have a specific definition of "The positive value of a real number". Have you ever seen a text book that gives that a the definition of absolute value?
The definition every text book I have ever seen gives is "|x|= x if $x\ge 0$, |x|= -x if x< 0".
17. Sep 20, 2013
### epenguin
OK you give an informal definition that sounds like what someone said in class explaining it to you. And maybe your memory. H of I is surprised if that is in a book. Use the book.
To be able to use definition you really need the formal one given by HofI.
This bit
Case 1: If x, y >= 0..then |x| = x, |y| = y,
is OK. The rest is all true but it doesn't seem to me a proof argument.
I suggest you do things like this by numbers so it is clear where everything comes from and fits in. I will give my version as model with which you could do the second bit.
1 Definition of |x| as per HofI. (Def 1)
Remark: use of the ≥ has reduced our cases to two not three.
2 Case 1. x ≥ 0, y ≥ 0 (condition A)
By Def 1 (1) |x| = x, |y| = y. Therefore in case 1 |x|*|y| = x * y
3 Still case 1, from condition (A), x * y ≥ 0
4 From Def 1 and 3, |x * y| = x * y
5 Two things that are equal to a third thing are equal to each other.
So the two things we found above each equal to x*y are equal to each other, i.e. |x|*|y| = |x * y|
6 Case 2, |x| ≥ 0, |y| < 0 ...
I don't say this is perfect (it is not my field) but do you see this more like a logical proof, not a handwaving?
It lays bare assumptions, e.g. 3 is I guess an assumption carried over from basic arithmetic and algebra, and line 1 of 5 is an assumed property of " = " , I think you will hear talk of "equivalence relations".
Last edited: Sep 20, 2013
18. Sep 20, 2013
### athena810
No, it was not from a textbook.
Oh, ok, I kind of understand.
Is it required that i write a reason for doing why I did what I did or are the reasons implied?
19. Sep 20, 2013
### arildno
" The rest is all true but it doesn't seem to me a proof argument."
Why not?
In cases x,y positive, we have
|x*y|=x*y=|x|*|y|, i.e identity verified.
20. Sep 20, 2013
### Avodyne
This is really a question for your instructor, but to be unambiguous the reason should be given for each step.
Also, the definition |x|=√x2 is OK provided that the symbol "√" is defined to mean the positive square root. Then, to use this definition in a proof, you would need to assume rules about square roots such as: if x≥0 and y≥0, then √(xy) = √x * √y. (If you had to prove this rule first, then the proof would probably be longer than one based on the simpler definition of |x| as x if x≥0 and -x if x<0.)
Draft saved Draft deleted
Similar Discussions: How do I prove |xy| = |x| * |y|? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.858299732208252, "perplexity": 874.7588413649032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948519776.34/warc/CC-MAIN-20171212212152-20171212232152-00163.warc.gz"} |
https://mathoverflow.net/questions/119115/example-of-fiber-bundle-that-is-not-a-fibration/126228 | # Example of fiber bundle that is not a fibration
It is well-known that a fiber bundle under some mild hypothesis is a fibration, but I don't know any examples of fiber bundles which aren't (Hurewicz) fibrations (they should be weird examples, I think, because if the base space is paracompact then the bundle is a fibration).
Does anybody know an example?
Thanks!
• mathoverflow.net/questions/20442/… – Dylan Wilson Jan 17 '13 at 3:44
• It will still be a Serre fibration, in any case. – David Roberts Jan 17 '13 at 14:09
• Here's a vague sketch. Take a non-numerable cover of some necessarily non paracompact space, and take a non coboundary, G-valued Čech cocycle. Then this bundle isn't classifiable by a map to BG, as the universal bundle trivialities over a numerable cover (i.e. is numerable), and numerable covers pull back. As soon as your bundle is numerable it is a Hurewicz fibration. Now you have a chance that you'll find an example. – David Roberts Jan 17 '13 at 14:17
$\newcommand{\RR}{\mathbb{R}} \newcommand{\To}{\longrightarrow} \newcommand{\id}{\mathrm{id}}$The example described in Tom Goodwillie's answer to a related mathoverflow question essentially solves this question. Specifically, Tom defines an orientable, non-trivial, real line bundle $L$ over a contractible space $X$. The principal $GL_1^+(\RR)$-bundle — $GL_1(\RR)$ will work as well — associated to $L$ is then a locally trivial fibre bundle $p:E\to X$. The principal bundle $p:E\to X$ does not admit a section, since the line bundle $L$ is not trivial. It follows that $p$ cannot be a Hurewicz fibration: otherwise it would admit a section, given that $X$ is contractible. For convenience, I will give below the construction of $p:E\to X$ and a few details of the proof — mostly copied from Tom's answer, and following his notation where possible. Please upvote Tom Goodwillie's answer, which is certainly shorter and more readable.
### The base space $X$
Let $X$ be the space obtained by gluing two copies of $\RR$ along $\RR^+$: $$X = (\RR\times\{0,1\})\mathbin{/}{\sim}$$ where ${\sim}$ is generated by $(x,0)\sim(x,1)$ for $x\in\RR^+$. This space is not Hausdorff, and is closely related to the well-known line with two origins. Let $q:\RR\times\{0,1\}\to X$ be the quotient map. Define two open subspaces covering $X$ by $U=q(\RR\times\{0\})$ and $V=q(\RR\times\{1\})$. Finally, let $g:X\to\RR$ be the continuous function determined by $g(q(x,i))=x$, and define $$f=g|_{U\cap V}: U\cap V \To \RR^+$$ Importantly, observe that $f$ does not extend to a continuous map $X\to\RR^+$.
### The total space $E$
Consider the topological abelian group $G=GL_1^+(\RR)=(\RR^+,\cdot)$ given by the positive reals with multiplication. Let $E_U=U\times G$ and $E_V= V\times G$ denote the trivial $G$-bundles over $U$ and $V$, respectively. Construct the principal $G$-bundle $E$ over $X$ by gluing $E_U$ and $E_V$ along $U\cap V$ via the $G$-isomorphism $$\varphi_f : E_U|_{U\cap V}\overset{\simeq}{\To} E_V|_{U\cap V}$$ defined by $$\varphi_f(x,g)=\bigl(x,f(x)\cdot g\bigr)$$ More concretely, $E$ is obtained from $E_U \amalg E_V$ by identifying $(x,g)\in E_U$ with $\varphi_f(x,g)\in E_V$ for each $x\in U\cap V$. As in Tom Goodwillie's answer, we could just as well use any other continuous function $f:U\cap V\to\RR^+$ which does not extend to a continuous function $X\to\RR^+$.
### Non-triviality of the principal bundle $E\to X$
The projection map $p:E\to X$ gives a principal $G$-bundle over $X$, which comes with canonical isomorphisms $E|_U = E_U$ and $E|_V = E_V$. We will now show $p$ does not admit a section. By the construction of $E$, a section of $p:E\to X$ determines:
• a section of $E_U=U\times G\to U$, and therefore a map $s_U:U\to G=\RR^+$;
• similarly, a map $s_V:V\to G=\RR^+$;
• these maps verify $s_V(x)=f(x)\cdot s_U(x)$ for each $x\in U\cap V$.
In particular, $f(x)=s_V(x)/s_U(x)$ for all $x\in U\cap V$. However, this implies that $f$ extends to a continuous function $\overline{f}:X\to\RR^+$ given by $$\overline{f}(x)=\frac{s_V\bigl(q(g(x),1)\bigr)}{s_U\bigl(q(g(x),0)\bigr)}$$ which contradicts the known non-extension property of $f$.
### Conclusion
The projection $p:E\to X$ gives a locally trivial principal $G$-bundle over $X$, since $E|_U\simeq E_U$ and $E|_V\simeq E_V$ are trivial $G$-bundles. Thus, it remains to show that $p$ is not a Hurewicz fibration. Note that $X$ is contractible. So if $p:E\to X$ were a fibration, it would necessarily admit a section. In detail, let $H:X\times I\to X$ be a null-homotopy of $\id_X$. We can obviously lift the constant map $H_1$ to $E$. Assuming $p$ is a Hurewicz fibration, the homotopy lifting property then produces a lift $\widetilde{H}: X\times I\to E$ of $H$ to $E$, and consequently a section $\widetilde{H}_0$ of $p$. But we showed in the previous paragraph that $p$ admits no sections. In conclusion, $p$ is not a Hurewicz fibration.
• " Specifically, Tom defines an orientable, non-trivial, real line bundle L over a contractible space X" -- I thought fibre bundles over contractible spaces are always trivial? Is there some necessary restriction on the base for this to hold? – ಠ_ಠ Sep 29 '16 at 1:17
• @ಠ_ಠ if the bundle had a classifying map, then necessarily it would be trivial, because the classifying map would be null-homotopic. But due to the fact the bundle doesn't trivialise over a numerable cover (as the line with two origins is not Hausdorff paracompact) there is no classifying map. – David Roberts Jul 19 '17 at 6:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9790197610855103, "perplexity": 162.73870033411217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375529.62/warc/CC-MAIN-20210306223236-20210307013236-00501.warc.gz"} |
https://brilliant.org/problems/irrational-equation/ | # Irrational equation
Algebra Level 3
What is the sum of the real solutions for the following equation: $\sqrt{x^2+x+5}=2+\sqrt{x^2+x-11}?$
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9520500898361206, "perplexity": 1480.356787963555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583728901.52/warc/CC-MAIN-20190120163942-20190120185942-00438.warc.gz"} |
https://dsp.stackexchange.com/questions/26395/signal-reconstruction-error-in-compressed-sensing | # signal reconstruction error in compressed sensing
Does the signal reconstruction error in compressed sensing using $l_1$ norm minimization depends on the amplitude of non-zero coefficients and their location ?
• No,but the number of non zero coefficients has effect on the quality – Abhishek Sadasivan Feb 4 '16 at 19:29
Yes and No. If you consider your signal, noise free, meaning that all of the zero elements are really zero, then your answer is No, however though if the input noise is taken into the account, then the higher amplitude of the non-zero elements results in better signal reconstruction (high input SNR gives high output SNR).
Typically in Compressive Sensing, the reconstruction error is defined in terms of mean squared error (MSE):
$$MSE = \frac{|| \hat{x} - x ||_2^2}{||x||_2^2}$$
where $$|| y ||_2^2 = \sum_{i=0}^{n-1} y_i^2$$
For a sparse signal this will simply be
$$||y_{sparse}||_2^2 = \sum_{j \in S_{sparse}} y_i^2$$
where $S_{sparse}$ is the set of non-zero coefficients in the vector $y$. Note that this is independent of the indices $j$ - a permutation will give the same MSE.
• Thanks for your kind reply. I think this one is only for location. What about the amplitude of non-zero elements? – J Cian Mar 5 '16 at 17:32
• No, this is for multiple locations. – Tom Kealy Mar 5 '16 at 19:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9303483963012695, "perplexity": 454.46111354023225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704798089.76/warc/CC-MAIN-20210126042704-20210126072704-00554.warc.gz"} |
https://terrytao.wordpress.com/2009/04/ | You are currently browsing the monthly archive for April 2009.
As discussed in previous notes, a function space norm can be viewed as a means to rigorously quantify various statistics of a function ${f: X \rightarrow {\bf C}}$. For instance, the “height” and “width” can be quantified via the ${L^p(X,\mu)}$ norms (and their relatives, such as the Lorentz norms ${\|f\|_{L^{p,q}(X,\mu)}}$). Indeed, if ${f}$ is a step function ${f = A 1_E}$, then the ${L^p}$ norm of ${f}$ is a combination ${\|f\|_{L^p(X,\mu)} = |A| \mu(E)^{1/p}}$ of the height (or amplitude) ${A}$ and the width ${\mu(E)}$.
However, there are more features of a function ${f}$ of interest than just its width and height. When the domain ${X}$ is a Euclidean space ${{\bf R}^d}$ (or domains related to Euclidean spaces, such as open subsets of ${{\bf R}^d}$, or manifolds), then another important feature of such functions (especially in PDE) is the regularity of a function, as well as the related concept of the frequency scale of a function. These terms are not rigorously defined; but roughly speaking, regularity measures how smooth a function is (or how many times one can differentiate the function before it ceases to be a function), while the frequency scale of a function measures how quickly the function oscillates (and would be inversely proportional to the wavelength). One can illustrate this informal concept with some examples:
• Let ${\phi \in C^\infty_c({\bf R})}$ be a test function that equals ${1}$ near the origin, and ${N}$ be a large number. Then the function ${f(x) := \phi(x) \sin(Nx)}$ oscillates at a wavelength of about ${1/N}$, and a frequency scale of about ${N}$. While ${f}$ is, strictly speaking, a smooth function, it becomes increasingly less smooth in the limit ${N \rightarrow \infty}$; for instance, the derivative ${f'(x) = \phi'(x) \sin(Nx) + N \phi(x) \cos(Nx)}$ grows at a roughly linear rate as ${N \rightarrow \infty}$, and the higher derivatives grow at even faster rates. So this function does not really have any regularity in the limit ${N \rightarrow \infty}$. Note however that the height and width of this function is bounded uniformly in ${N}$; so regularity and frequency scale are independent of height and width.
• Continuing the previous example, now consider the function ${g(x) := N^{-s} \phi(x) \sin(Nx)}$, where ${s \geq 0}$ is some parameter. This function also has a frequency scale of about ${N}$. But now it has a certain amount of regularity, even in the limit ${N \rightarrow \infty}$; indeed, one easily checks that the ${k^{th}}$ derivative of ${g}$ stays bounded in ${N}$ as long as ${k \leq s}$. So one could view this function as having “${s}$ degrees of regularity” in the limit ${N \rightarrow \infty}$.
• In a similar vein, the function ${N^{-s} \phi(Nx)}$ also has a frequency scale of about ${N}$, and can be viewed as having ${s}$ degrees of regularity in the limit ${N \rightarrow \infty}$.
• The function ${\phi(x) |x|^s 1_{x > 0}}$ also has about ${s}$ degrees of regularity, in the sense that it can be differentiated up to ${s}$ times before becoming unbounded. By performing a dyadic decomposition of the ${x}$ variable, one can also decompose this function into components ${\psi(2^n x) |x|^s}$ for ${n \geq 0}$, where ${\psi(x) := (\phi(x)-\phi(2x)) 1_{x>0}}$ is a bump function supported away from the origin; each such component has frequency scale about ${2^n}$ and ${s}$ degrees of regularity. Thus we see that the original function ${\phi(x) |x|^s 1_{x > 0}}$ has a range of frequency scales, ranging from about ${1}$ all the way to ${+\infty}$.
• One can of course concoct higher-dimensional analogues of these examples. For instance, the localised plane wave ${\phi(x) \sin(\xi \cdot x)}$ in ${{\bf R}^d}$, where ${\phi \in C^\infty_c({\bf R}^d)}$ is a test function, would have a frequency scale of about ${|\xi|}$.
There are a variety of function space norms that can be used to capture frequency scale (or regularity) in addition to height and width. The most common and well-known examples of such spaces are the Sobolev space norms ${\| f\|_{W^{s,p}({\bf R}^d)}}$, although there are a number of other norms with similar features (such as Hölder norms, Besov norms, and Triebel-Lizorkin norms). Very roughly speaking, the ${W^{s,p}}$ norm is like the ${L^p}$ norm, but with “${s}$ additional degrees of regularity”. For instance, in one dimension, the function ${A \phi(x/R) \sin(Nx)}$, where ${\phi}$ is a fixed test function and ${R, N}$ are large, will have a ${W^{s,p}}$ norm of about ${|A| R^{1/p} N^s}$, thus combining the “height” ${|A|}$, the “width” ${R}$, and the “frequency scale” ${N}$ of this function together. (Compare this with the ${L^p}$ norm of the same function, which is about ${|A| R^{1/p}}$.)
To a large extent, the theory of the Sobolev spaces ${W^{s,p}({\bf R}^d)}$ resembles their Lebesgue counterparts ${L^p({\bf R}^d)}$ (which are as the special case of Sobolev spaces when ${s=0}$), but with the additional benefit of being able to interact very nicely with (weak) derivatives: a first derivative ${\frac{\partial f}{\partial x_j}}$ of a function in an ${L^p}$ space usually leaves all Lebesgue spaces, but a first derivative of a function in the Sobolev space ${W^{s,p}}$ will end up in another Sobolev space ${W^{s-1,p}}$. This compatibility with the differentiation operation begins to explain why Sobolev spaces are so useful in the theory of partial differential equations. Furthermore, the regularity parameter ${s}$ in Sobolev spaces is not restricted to be a natural number; it can be any real number, and one can use fractional derivative or integration operators to move from one regularity to another. Despite the fact that most partial differential equations involve differential operators of integer order, fractional spaces are still of importance; for instance it often turns out that the Sobolev spaces which are critical (scale-invariant) for a certain PDE are of fractional order.
The uncertainty principle in Fourier analysis places a constraint between the width and frequency scale of a function; roughly speaking (and in one dimension for simplicity), the product of the two quantities has to be bounded away from zero (or to put it another way, a wave is always at least as wide as its wavelength). This constraint can be quantified as the very useful Sobolev embedding theorem, which allows one to trade regularity for integrability: a function in a Sobolev space ${W^{s,p}}$ will automatically lie in a number of other Sobolev spaces ${W^{\tilde s,\tilde p}}$ with ${\tilde s < s}$ and ${\tilde p > p}$; in particular, one can often embed Sobolev spaces into Lebesgue spaces. The trade is not reversible: one cannot start with a function with a lot of integrability and no regularity, and expect to recover regularity in a space of lower integrability. (One can already see this with the most basic example of Sobolev embedding, coming from the fundamental theorem of calculus. If a (continuously differentiable) function ${f: {\bf R} \rightarrow {\bf R}}$ has ${f'}$ in ${L^1({\bf R})}$, then we of course have ${f \in L^\infty({\bf R})}$; but the converse is far from true.)
Plancherel’s theorem reveals that Fourier-analytic tools are particularly powerful when applied to ${L^2}$ spaces. Because of this, the Fourier transform is very effective at dealing with the ${L^2}$-based Sobolev spaces ${W^{s,2}({\bf R}^d)}$, often abbreviated ${H^s({\bf R}^d)}$. Indeed, using the fact that the Fourier transform converts regularity to decay, we will see that the ${H^s({\bf R}^d)}$ spaces are nothing more than Fourier transforms of weighted ${L^2}$ spaces, and in particular enjoy a Hilbert space structure. These Sobolev spaces, and in particular the energy space ${H^1({\bf R}^d)}$, are of particular importance in any PDE that involves some sort of energy functional (this includes large classes of elliptic, parabolic, dispersive, and wave equations, and especially those equations connected to physics and/or geometry).
We will not fully develop the theory of Sobolev spaces here, as this would require the theory of singular integrals, which is beyond the scope of this course. There are of course many references for further reading; one is Stein’s “Singular integrals and differentiability properties of functions“.
Here are the video, audio, and transcript of the talk.
[Update, Apr 28: Another event at the meeting is the announcement of the new membership of the Academy for 2009. In mathematics, the new members include Alice Chang, Percy Deift, John Morgan, and Gilbert Strang; congratulations to all four, of course.]
I have received some anecdotal evidence that wordpress blogs such as this one have recently been blocked again by the “great firewall of China“. I was wondering if the readers here could confirm or disconfirm this, and also if they knew of some effective ways to circumvent this firewall, as I have been getting a number of requests on how to do so.
[Of course, by definition, if a reader is directly affected by this blockage, then they would not be able to comment here or to read about any workarounds; but perhaps they would be able to confirm the situation indirectly, and I could still pass on any relevant tips obtained here by other channels.]
[Update, June 3: A partial list of blocked sites can be found here, and a firewall tester can be found here.]
In the theory of dense graphs on ${n}$ vertices, where ${n}$ is large, a fundamental role is played by the Szemerédi regularity lemma:
Lemma 1 (Regularity lemma, standard version) Let ${G = (V,E)}$ be a graph on ${n}$ vertices, and let ${\epsilon > 0}$ and ${k_0 \geq 0}$. Then there exists a partition of the vertices ${V = V_1 \cup \ldots \cup V_k}$, with ${k_0 \leq k \leq C(k_0,\epsilon)}$ bounded below by ${k_0}$ and above by a quantity ${C(k_0,\epsilon)}$ depending only on ${k_0, \epsilon}$, obeying the following properties:
• (Equitable partition) For any ${1 \leq i,j \leq k}$, the cardinalities ${|V_i|, |V_j|}$ of ${V_i}$ and ${V_j}$ differ by at most ${1}$.
• (Regularity) For all but at most ${\epsilon k^2}$ pairs ${1 \leq i < j \leq k}$, the portion of the graph ${G}$ between ${V_i}$ and ${V_j}$ is ${\epsilon}$-regular in the sense that one has
$\displaystyle |d( A, B ) - d( V_i, V_j )| \leq \epsilon$
for any ${A \subset V_i}$ and ${B \subset V_j}$ with ${|A| \geq \epsilon |V_i|, |B| \geq \epsilon |V_j|}$, where ${d(A,B) := |E \cap (A \times B)|/|A| |B|}$ is the density of edges between ${A}$ and ${B}$.
This lemma becomes useful in the regime when ${n}$ is very large compared to ${k_0}$ or ${1/\epsilon}$, because all the conclusions of the lemma are uniform in ${n}$. Very roughly speaking, it says that “up to errors of size ${\epsilon}$“, a large graph can be more or less described completely by a bounded number of quantities ${d(V_i, V_j)}$. This can be interpreted as saying that the space of all graphs is totally bounded (and hence precompact) in a suitable metric space, thus allowing one to take formal limits of sequences (or subsequences) of graphs; see for instance this paper of Lovasz and Szegedy for a discussion.
For various technical reasons it is easier to work with a slightly weaker version of the lemma, which allows for the cells ${V_1,\ldots,V_k}$ to have unequal sizes:
Lemma 2 (Regularity lemma, weighted version) Let ${G = (V,E)}$ be a graph on ${n}$ vertices, and let ${\epsilon > 0}$. Then there exists a partition of the vertices ${V = V_1 \cup \ldots \cup V_k}$, with ${1 \leq k \leq C(\epsilon)}$ bounded above by a quantity ${C(\epsilon)}$ depending only on ${\epsilon}$, obeying the following properties:
While Lemma 2 is, strictly speaking, weaker than Lemma 1 in that it does not enforce the equitable size property between the atoms, in practice it seems that the two lemmas are roughly of equal utility; most of the combinatorial consequences of Lemma 1 can also be proven using Lemma 2. The point is that one always has to remember to weight each cell ${V_i}$ by its density ${|V_i|/|V|}$, rather than by giving each cell an equal weight as in Lemma 1. Lemma 2 also has the advantage that one can easily generalise the result from finite vertex sets ${V}$ to other probability spaces (for instance, one could weight ${V}$ with something other than the uniform distribution). For applications to hypergraph regularity, it turns out to be slightly more convenient to have two partitions (coarse and fine) rather than just one; see for instance my own paper on this topic. In any event the arguments below that we give to prove Lemma 2 can be modified to give a proof of Lemma 1 also. The proof of the regularity lemma is usually conducted by a greedy algorithm. Very roughly speaking, one starts with the trivial partition of ${V}$. If this partition already regularises the graph, we are done; if not, this means that there are some sets ${A}$ and ${B}$ in which there is a significant density fluctuation beyond what has already been detected by the original partition. One then adds these sets to the partition and iterates the argument. Every time a new density fluctuation is incorporated into the partition that models the original graph, this increases a certain “index” or “energy” of the partition. On the other hand, this energy remains bounded no matter how complex the partition, so eventually one must reach a long “energy plateau” in which no further refinement is possible, at which point one can find the regular partition.
One disadvantage of the greedy algorithm is that it is not efficient in the limit ${n \rightarrow \infty}$, as it requires one to search over all pairs of subsets ${A, B}$ of a given pair ${V_i, V_j}$ of cells, which is an exponentially long search. There are more algorithmically efficient ways to regularise, for instance a polynomial time algorithm was given by Alon, Duke, Lefmann, Rödl, and Yuster. However, one can do even better, if one is willing to (a) allow cells of unequal size, (b) allow a small probability of failure, (c) have the ability to sample vertices from ${G}$ at random, and (d) allow for the cells to be defined “implicitly” (via their relationships with a fixed set of reference vertices) rather than “explicitly” (as a list of vertices). In that case, one can regularise a graph in a number of operations bounded in ${n}$. Indeed, one has
Lemma 3 (Regularity lemma via random neighbourhoods) Let ${\epsilon > 0}$. Then there exists integers ${M_1,\ldots,M_m}$ with the following property: whenever ${G = (V,E)}$ be a graph on finitely many vertices, if one selects one of the integers ${M_r}$ at random from ${M_1,\ldots,M_m}$, then selects ${M_r}$ vertices ${v_1,\ldots,v_{M_r} \in V}$ uniformly from ${V}$ at random, then the ${2^{M_r}}$ vertex cells ${V^{M_r}_1,\ldots,V^{M_r}_{2^{M_r}}}$ (some of which can be empty) generated by the vertex neighbourhoods ${A_t := \{ v \in V: (v,v_t) \in E \}}$ for ${1 \leq t \leq M_r}$, will obey the conclusions of Lemma 2 with probability at least ${1-O(\epsilon)}$.
Thus, roughly speaking, one can regularise a graph simply by taking a large number of random vertex neighbourhoods, and using the partition (or Venn diagram) generated by these neighbourhoods as the partition. The intuition is that if there is any non-uniformity in the graph (e.g. if the graph exhibits bipartite behaviour), this will bias the random neighbourhoods to seek out the partitions that would regularise that non-uniformity (e.g. vertex neighbourhoods would begin to fill out the two vertex cells associated to the bipartite property); if one takes sufficiently many such random neighbourhoods, the probability that all detectable non-uniformity is captured by the partition should converge to ${1}$. (It is more complicated than this, because the finer one makes the partition, the finer the types of non-uniformity one can begin to detect, but this is the basic idea.)
This fact seems to be reasonably well-known folklore, discovered independently by many authors; it is for instance quite close to the graph property testing results of Alon and Shapira, and also appears implicitly in a paper of Ishigami, as well as a paper of Austin (and perhaps even more implicitly in a paper of myself). However, in none of these papers is the above lemma stated explicitly. I was asked about this lemma recently, so I decided to provide a proof here.
I’ve just uploaded to the arXiv my paper “An inverse theorem for the bilinear $L^2$ Strichartz estimate for the wave equation“. This paper is another technical component of my “heatwave project“, which aims to establish the global regularity conjecture for energy-critical wave maps into hyperbolic space. I have been in the process of writing the final paper of that project, in which I will show that the only way singularities can form is if a special type of solution, known as an “almost periodic blowup solution”, exists. However, I recently discovered that the existing function space estimates that I was relying on for the large energy perturbation theory were not quite adequate, and in particular I needed a certain “inverse theorem” for a standard bilinear estimate which was not quite in the literature. The purpose of this paper is to establish that inverse theorem, which may also have some application to other nonlinear wave equations.
In set theory, a function ${f: X \rightarrow Y}$ is defined as an object that evaluates every input ${x}$ to exactly one output ${f(x)}$. However, in various branches of mathematics, it has become convenient to generalise this classical concept of a function to a more abstract one. For instance, in operator algebras, quantum mechanics, or non-commutative geometry, one often replaces commutative algebras of (real or complex-valued) functions on some space ${X}$, such as ${C(X)}$ or ${L^\infty(X)}$, with a more general – and possibly non-commutative – algebra (e.g. a ${C^*}$-algebra or a von Neumann algebra). Elements in this more abstract algebra are no longer definable as functions in the classical sense of assigning a single value ${f(x)}$ to every point ${x \in X}$, but one can still define other operations on these “generalised functions” (e.g. one can multiply or take inner products between two such objects).
Generalisations of functions are also very useful in analysis. In our study of ${L^p}$ spaces, we have already seen one such generalisation, namely the concept of a function defined up to almost everywhere equivalence. Such a function ${f}$ (or more precisely, an equivalence class of classical functions) cannot be evaluated at any given point ${x}$, if that point has measure zero. However, it is still possible to perform algebraic operations on such functions (e.g. multiplying or adding two functions together), and one can also integrate such functions on measurable sets (provided, of course, that the function has some suitable integrability condition). We also know that the ${L^p}$ spaces can usually be described via duality, as the dual space of ${L^{p'}}$ (except in some endpoint cases, namely when ${p=\infty}$, or when ${p=1}$ and the underlying space is not ${\sigma}$-finite).
We have also seen (via the Lebesgue-Radon-Nikodym theorem) that locally integrable functions ${f \in L^1_{\hbox{loc}}({\bf R})}$ on, say, the real line ${{\bf R}}$, can be identified with locally finite absolutely continuous measures ${m_f}$ on the line, by multiplying Lebesgue measure ${m}$ by the function ${f}$. So another way to generalise the concept of a function is to consider arbitrary locally finite Radon measures ${\mu}$ (not necessarily absolutely continuous), such as the Dirac measure ${\delta_0}$. With this concept of “generalised function”, one can still add and subtract two measures ${\mu, \nu}$, and integrate any measure ${\mu}$ against a (bounded) measurable set ${E}$ to obtain a number ${\mu(E)}$, but one cannot evaluate a measure ${\mu}$ (or more precisely, the Radon-Nikodym derivative ${d\mu/dm}$ of that measure) at a single point ${x}$, and one also cannot multiply two measures together to obtain another measure. From the Riesz representation theorem, we also know that the space of (finite) Radon measures can be described via duality, as linear functionals on ${C_c({\bf R})}$.
There is an even larger class of generalised functions that is very useful, particularly in linear PDE, namely the space of distributions, say on a Euclidean space ${{\bf R}^d}$. In contrast to Radon measures ${\mu}$, which can be defined by how they “pair up” against continuous, compactly supported test functions ${f \in C_c({\bf R}^d)}$ to create numbers ${\langle f, \mu \rangle := \int_{{\bf R}^d} f\ d\overline{\mu}}$, a distribution ${\lambda}$ is defined by how it pairs up against a smooth compactly supported function ${f \in C^\infty_c({\bf R}^d)}$ to create a number ${\langle f, \lambda \rangle}$. As the space ${C^\infty_c({\bf R}^d)}$ of smooth compactly supported functions is smaller than (but dense in) the space ${C_c({\bf R}^d)}$ of continuous compactly supported functions (and has a stronger topology), the space of distributions is larger than that of measures. But the space ${C^\infty_c({\bf R}^d)}$ is closed under more operations than ${C_c({\bf R}^d)}$, and in particular is closed under differential operators (with smooth coefficients). Because of this, the space of distributions is similarly closed under such operations; in particular, one can differentiate a distribution and get another distribution, which is something that is not always possible with measures or ${L^p}$ functions. But as measures or functions can be interpreted as distributions, this leads to the notion of a weak derivative for such objects, which makes sense (but only as a distribution) even for functions that are not classically differentiable. Thus the theory of distributions can allow one to rigorously manipulate rough functions “as if” they were smooth, although one must still be careful as some operations on distributions are not well-defined, most notably the operation of multiplying two distributions together. Nevertheless one can use this theory to justify many formal computations involving derivatives, integrals, etc. (including several computations used routinely in physics) that would be difficult to formalise rigorously in a purely classical framework.
If one shrinks the space of distributions slightly, to the space of tempered distributions (which is formed by enlarging dual class ${C^\infty_c({\bf R}^d)}$ to the Schwartz class ${{\mathcal S}({\bf R}^d)}$), then one obtains closure under another important operation, namely the Fourier transform. This allows one to define various Fourier-analytic operations (e.g. pseudodifferential operators) on such distributions.
Of course, at the end of the day, one is usually not all that interested in distributions in their own right, but would like to be able to use them as a tool to study more classical objects, such as smooth functions. Fortunately, one can recover facts about smooth functions from facts about the (far rougher) space of distributions in a number of ways. For instance, if one convolves a distribution with a smooth, compactly supported function, one gets back a smooth function. This is a particularly useful fact in the theory of constant-coefficient linear partial differential equations such as ${Lu=f}$, as it allows one to recover a smooth solution ${u}$ from smooth, compactly supported data ${f}$ by convolving ${f}$ with a specific distribution ${G}$, known as the fundamental solution of ${L}$. We will give some examples of this later in these notes.
It is this unusual and useful combination of both being able to pass from classical functions to generalised functions (e.g. by differentiation) and then back from generalised functions to classical functions (e.g. by convolution) that sets the theory of distributions apart from other competing theories of generalised functions, in particular allowing one to justify many formal calculations in PDE and Fourier analysis rigorously with relatively little additional effort. On the other hand, being defined by linear duality, the theory of distributions becomes somewhat less useful when one moves to more nonlinear problems, such as nonlinear PDE. However, they still serve an important supporting role in such problems as a “ambient space” of functions, inside of which one carves out more useful function spaces, such as Sobolev spaces, which we will discuss in the next set of notes.
From Tim Gowers’ blog comes the announcement that the Tricki – a wiki for various tricks and strategies for proving mathematical results – is now live. (My own articles for the Tricki are also on this blog; also Ben Green has written up an article on using finite fields to prove results about infinite fields which is loosely based on my own post on the topic, which is in turn based on an article of Serre.) It seems to already be growing at a reasonable rate, with many contributors.
Recently, I have been studying the concept of amenability on groups. This concept can be defined in a “combinatorial” or “finitary” fashion, using Følner sequences, and also in a more “functional-analytic” or “infinitary”‘ fashion, using invariant means. I wanted to get some practice passing back and forth between these two definitions, so I wrote down some notes on how to do this, and also how to take some facts about amenability that are usually proven in one setting, and prove them instead in the other. These notes are thus mostly for my own benefit, but I thought I might post them here also, in case anyone else is interested.
The famous Gödel completeness theorem in logic (not to be confused with the even more famous Gödel incompleteness theorem) roughly states the following:
Theorem 1 (Gödel completeness theorem, informal statement) Let ${\Gamma}$ be a first-order theory (a formal language ${{\mathcal L}}$, together with a set of axioms, i.e. sentences assumed to be true), and let ${\phi}$ be a sentence in the formal language. Assume also that the language ${{\mathcal L}}$ has at most countably many symbols. Then the following are equivalent:
• (i) (Syntactic consequence) ${\phi}$ can be deduced from the axioms in ${\Gamma}$ by a finite number of applications of the laws of deduction in first order logic. (This property is abbreviated as ${\Gamma \vdash \phi}$.)
• (ii) (Semantic consequence) Every structure ${{\mathfrak U}}$ which satisfies or models ${\Gamma}$, also satisfies ${\phi}$. (This property is abbreviated as ${\Gamma \models \phi}$.)
• (iii) (Semantic consequence for at most countable models) Every structure ${{\mathfrak U}}$ which is at most countable, and which models ${\Gamma}$, also satisfies ${\phi}$.
One can also formulate versions of the completeness theorem for languages with uncountably many symbols, but I will not do so here. One can also force other cardinalities on the model ${{\mathfrak U}}$ by using the Löwenheim-Skolem theorem.
To state this theorem even more informally, any (first-order) result which is true in all models of a theory, must be logically deducible from that theory, and vice versa. (For instance, any result which is true for all groups, must be deducible from the group axioms; any result which is true for all systems obeying Peano arithmetic, must be deducible from the Peano axioms; and so forth.) In fact, it suffices to check countable and finite models only; for instance, any first-order statement which is true for all finite or countable groups, is in fact true for all groups! Informally, a first-order language with only countably many symbols cannot “detect” whether a given structure is countably or uncountably infinite. Thus for instance even the ZFC axioms of set theory must have some at most countable model, even though one can use ZFC to prove the existence of uncountable sets; this is known as Skolem’s paradox. (To resolve the paradox, one needs to carefully distinguish between an object in a set theory being “externally” countable in the structure that models that theory, and being “internally” countable within that theory.)
Of course, a theory ${\Gamma}$ may contain undecidable statements ${\phi}$ – sentences which are neither provable nor disprovable in the theory. By the completeness theorem, this is equivalent to saying that ${\phi}$ is satisfied by some models of ${\Gamma}$ but not by other models. Thus the completeness theorem is compatible with the incompleteness theorem: recursively enumerable theories such as Peano arithmetic are modeled by the natural numbers ${{\mathbb N}}$, but are also modeled by other structures also, and there are sentences satisfied by ${{\mathbb N}}$ which are not satisfied by other models of Peano arithmetic, and are thus undecidable within that arithmetic.
An important corollary of the completeness theorem is the compactness theorem:
Corollary 2 (Compactness theorem, informal statement) Let ${\Gamma}$ be a first-order theory whose language has at most countably many symbols. Then the following are equivalent:
• (i) ${\Gamma}$ is consistent, i.e. it is not possible to logically deduce a contradiction from the axioms in ${\Gamma}$.
• (ii) ${\Gamma}$ is satisfiable, i.e. there exists a structure ${{\mathfrak U}}$ that models ${\Gamma}$.
• (iii) There exists a structure ${{\mathfrak U}}$ which is at most countable, that models ${\Gamma}$.
• (iv) Every finite subset ${\Gamma'}$ of ${\Gamma}$ is consistent.
• (v) Every finite subset ${\Gamma'}$ of ${\Gamma}$ is satisfiable.
• (vi) Every finite subset ${\Gamma'}$ of ${\Gamma}$ is satisfiable with an at most countable model.
Indeed, the equivalence of (i)-(iii), or (iv)-(vi), follows directly from the completeness theorem, while the equivalence of (i) and (iv) follows from the fact that any logical deduction has finite length and so can involve at most finitely many of the axioms in ${\Gamma}$. (Again, the theorem can be generalised to uncountable languages, but the models become uncountable also.)
There is a consequence of the compactness theorem which more closely resembles the sequential concept of compactness. Given a sequence ${{\mathfrak U}_1, {\mathfrak U}_2, \ldots}$ of structures for ${{\mathcal L}}$, and another structure ${{\mathfrak U}}$ for ${{\mathcal L}}$, let us say that ${{\mathfrak U}_n}$ converges elementarily to ${{\mathfrak U}}$ if every sentence ${\phi}$ which is satisfied by ${{\mathfrak U}}$, is also satisfied by ${{\mathfrak U}_n}$ for sufficiently large ${n}$. (Replacing ${\phi}$ by its negation ${\neg \phi}$, we also see that every sentence that is not satisfied by ${{\mathfrak U}}$, is not satisfied by ${{\mathfrak U}_n}$ for sufficiently large ${n}$.) Note that the limit ${{\mathfrak U}}$ is only unique up to elementary equivalence. Clearly, if each of the ${{\mathfrak U}_n}$ models some theory ${\Gamma}$, then the limit ${{\mathfrak U}}$ will also; thus for instance the elementary limit of a sequence of groups is still a group, the elementary limit of a sequence of rings is still a ring, etc.
Corollary 3 (Sequential compactness theorem) Let ${{\mathcal L}}$ be a language with at most countably many symbols, and let ${{\mathfrak U}_1, {\mathfrak U}_2, \ldots}$ be a sequence of structures for ${{\mathcal L}}$. Then there exists a subsequence ${{\mathfrak U}_{n_j}}$ which converges elementarily to a limit ${{\mathfrak U}}$ which is at most countable.
Proof: For each structure ${{\mathfrak U}_n}$, let ${\hbox{Th}({\mathfrak U}_n)}$ be the theory of that structure, i.e. the set of all sentences that are satisfied by that structure. One can view that theory as a point in ${\{0,1\}^{{\mathcal S}}}$, where ${{\mathcal S}}$ is the set of all sentences in the language ${{\mathcal L}}$. Since ${{\mathcal L}}$ has at most countably many symbols, ${{\mathcal S}}$ is at most countable, and so (by the sequential Tychonoff theorem) ${\{0,1\}^{{\mathcal S}}}$ is sequentially compact in the product topology. (This can also be seen directly by the usual Arzelá-Ascoli diagonalisation argument.) Thus we can find a subsequence ${\hbox{Th}({\mathfrak U}_{n_j})}$ which converges in the product topology to a limit theory ${\Gamma \in \{0,1\}^{{\mathcal S}}}$, thus every sentence in ${\Gamma}$ is satisfied by ${{\mathfrak U}_{n_j}}$ for sufficiently large ${j}$ (and every sentence not in ${\Gamma}$ is not satisfied by ${{\mathfrak U}_{n_j}}$ for sufficiently large ${j}$). In particular, any finite subset of ${\Gamma}$ is satisfiable, hence consistent; by the compactness theorem, ${\Gamma}$ itself is therefore consistent, and has an at most countable model ${{\mathfrak U}}$. Also, each of the theories ${\hbox{Th}({\mathfrak U}_{n_j})}$ is clearly complete (given any sentence ${\phi}$, either ${\phi}$ or ${\neg \phi}$ is in the theory), and so ${\Gamma}$ is complete as well. One concludes that ${\Gamma}$ is the theory of ${{\mathfrak U}}$, and hence ${{\mathfrak U}}$ is the elementary limit of the ${{\mathfrak U}_{n_j}}$ as claimed. $\Box$
[It is also possible to state the compactness theorem using the topological notion of compactness, as follows: let ${X}$ be the space of all structures of a given language ${{\mathcal L}}$, quotiented by elementary equivalence. One can define a topology on ${X}$ by taking the sets ${\{ {\mathfrak U} \in X: {\mathfrak U} \models \phi \}}$ as a sub-base, where ${\phi}$ ranges over all sentences. Then the compactness theorem is equivalent to the assertion that ${X}$ is topologically compact.]
One can use the sequential compactness theorem to build a number of interesting “non-standard” models to various theories. For instance, consider the language ${{\mathcal L}}$ used by Peano arithmetic (which contains the operations ${+, \times}$ and the successor operation ${S}$, the relation ${=}$, and the constant ${0}$), and adjoint a new constant ${N}$ to create an expanded language ${{\mathcal L} \cup \{N\}}$. For each natural number ${n \in {\Bbb N}}$, let ${{\Bbb N}_n}$ be a structure for ${{\mathcal L} \cup \{N\}}$ which consists of the natural numbers ${{\Bbb N}}$ (with the usual interpretations of ${+}$, ${\times}$, etc.) and interprets the symbol ${N}$ as the natural number ${n}$. By the compactness theorem, some subsequence of ${{\Bbb N}_n}$ must converge elementarily to a new structure ${*{\Bbb N}}$ of ${{\mathcal L} \cup \{N\}}$, which still models Peano arithmetic, but now has the additional property that ${N>n}$ for every (standard) natural number ${n}$; thus we have managed to create a non-standard model of Peano arithmetic which contains a non-standardly large number (one which is larger than every standard natural number).
The sequential compactness theorem also lets us construct infinitary limits of various sequences of finitary objects; for instance, one can construct infinite pseudo-finite fields as the elementary limits of sequences of finite fields. I recently discovered that other several correspondence principles between finitary and infinitary objects, such as the Furstenberg correspondence principle between sets of integers and dynamical systems, or the more recent correspondence principles concerning graph limits, can be viewed as special cases of the sequential compactness theorem; it also seems possible to encode much of the sum-product theory in finite fields in an infinitary setting using this theorem. I hope to discuss these points in more detail in a later post. In this post, I wish to review (partly for my own benefit) the proof of the completeness (and hence compactness) theorem. The material here is quite standard (I basically follow the usual proof of Henkin, and taking advantage of Skolemisation), but perhaps the concept of an elementary limit is not as well-known outside of logic as it might be. (The closely related concept of an ultraproduct is better known, and can be used to prove most of the compactness theorem already, thanks to Los’s theorem, but I do not know how to use ultraproducts to ensure that the limiting model is countable. However, one can think (intuitively, at least), of the limit model ${{\mathfrak U}}$ in the above theorem as being the set of “constructible” elements of an ultraproduct of the ${{\mathfrak U}_n}$.)
In order to emphasise the main ideas in the proof, I will gloss over some of the more technical details in the proofs, relying instead on informal arguments and examples at various points.
In these notes we lay out the basic theory of the Fourier transform, which is of course the most fundamental tool in harmonic analysis and also of major importance in related fields (functional analysis, complex analysis, PDE, number theory, additive combinatorics, representation theory, signal processing, etc.). The Fourier transform, in conjunction with the Fourier inversion formula, allows one to take essentially arbitrary (complex-valued) functions on a group ${G}$ (or more generally, a space ${X}$ that ${G}$ acts on, e.g. a homogeneous space ${G/H}$), and decompose them as a (discrete or continuous) superposition of much more symmetric functions on the domain, such as characters ${\chi: G \rightarrow S^1}$; the precise superposition is given by Fourier coefficients ${\hat f(\xi)}$, which take values in some dual object such as the Pontryagin dual ${\hat G}$ of ${G}$. Characters behave in a very simple manner with respect to translation (indeed, they are eigenfunctions of the translation action), and so the Fourier transform tends to simplify any mathematical problem which enjoys a translation invariance symmetry (or an approximation to such a symmetry), and is somehow “linear” (i.e. it interacts nicely with superpositions). In particular, Fourier analytic methods are particularly useful for studying operations such as convolution ${f, g \mapsto f*g}$ and set-theoretic addition ${A, B \mapsto A+B}$, or the closely related problem of counting solutions to additive problems such as ${x = a_1 + a_2 + a_3}$ or ${x = a_1 - a_2}$, where ${a_1, a_2, a_3}$ are constrained to lie in specific sets ${A_1, A_2, A_3}$. The Fourier transform is also a particularly powerful tool for solving constant-coefficient linear ODE and PDE (because of the translation invariance), and can also approximately solve some variable-coefficient (or slightly non-linear) equations if the coefficients vary smoothly enough and the nonlinear terms are sufficiently tame.
The Fourier transform ${\hat f(\xi)}$ also provides an important new way of looking at a function ${f(x)}$, as it highlights the distribution of ${f}$ in frequency space (the domain of the frequency variable ${\xi}$) rather than physical space (the domain of the physical variable ${x}$). A given property of ${f}$ in the physical domain may be transformed to a rather different-looking property of ${\hat f}$ in the frequency domain. For instance:
• Smoothness of ${f}$ in the physical domain corresponds to decay of ${\hat f}$ in the Fourier domain, and conversely. (More generally, fine scale properties of ${f}$ tend to manifest themselves as coarse scale properties of ${\hat f}$, and conversely.)
• Convolution in the physical domain corresponds to pointwise multiplication in the Fourier domain, and conversely.
• Constant coefficient differential operators such as ${d/dx}$ in the physical domain corresponds to multiplication by polynomials such as ${2\pi i \xi}$ in the Fourier domain, and conversely.
• More generally, translation invariant operators in the physical domain correspond to multiplication by symbols in the Fourier domain, and conversely.
• Rescaling in the physical domain by an invertible linear transformation corresponds to an inverse (adjoint) rescaling in the Fourier domain.
• Restriction to a subspace (or subgroup) in the physical domain corresponds to projection to the dual quotient space (or quotient group) in the Fourier domain, and conversely.
• Frequency modulation in the physical domain corresponds to translation in the frequency domain, and conversely.
(We will make these statements more precise below.)
On the other hand, some operations in the physical domain remain essentially unchanged in the Fourier domain. Most importantly, the ${L^2}$ norm (or energy) of a function ${f}$ is the same as that of its Fourier transform, and more generally the inner product ${\langle f, g \rangle}$ of two functions ${f}$ is the same as that of their Fourier transforms. Indeed, the Fourier transform is a unitary operator on ${L^2}$ (a fact which is variously known as the Plancherel theorem or the Parseval identity). This makes it easier to pass back and forth between the physical domain and frequency domain, so that one can combine techniques that are easy to execute in the physical domain with other techniques that are easy to execute in the frequency domain. (In fact, one can combine the physical and frequency domains together into a product domain known as phase space, and there are entire fields of mathematics (e.g. microlocal analysis, geometric quantisation, time-frequency analysis) devoted to performing analysis on these sorts of spaces directly, but this is beyond the scope of this course.)
In these notes, we briefly discuss the general theory of the Fourier transform, but will mainly focus on the two classical domains for Fourier analysis: the torus ${{\Bbb T}^d := ({\bf R}/{\bf Z})^d}$, and the Euclidean space ${{\bf R}^d}$. For these domains one has the advantage of being able to perform very explicit algebraic calculations, involving concrete functions such as plane waves ${x \mapsto e^{2\pi i x \cdot \xi}}$ or Gaussians ${x \mapsto A^{d/2} e^{-\pi A |x|^2}}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 373, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9642253518104553, "perplexity": 189.60372331417557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363189.92/warc/CC-MAIN-20211205130619-20211205160619-00042.warc.gz"} |
https://homework.cpm.org/category/MN/textbook/cc3mn/chapter/3/lesson/3.2.5/problem/3-113 | ### Home > CC3MN > Chapter 3 > Lesson 3.2.5 > Problem3-113
3-113.
Solve each equation.
1. $3(5x+2)=8x+20$
Distribute and simplify each side as much as possible.
Get all variables on one side and all numbers on the other.
$x=2$
1. $−2(x−3)+4x=−(−x+1)$
Follow the steps in part (a). | {"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8175721168518066, "perplexity": 4601.467373692151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00303.warc.gz"} |
https://physics.stackexchange.com/questions/528144/what-is-a-perfect-fluid-really | # What is a “perfect fluid”, really?
In Statistical Mechanics, unless I'm making a confusion mistake, a perfect fluid is defined as a large collection of particles without any "internal" interactions (except maybe from point collisions). Long and short range forces are all neglected. This gives the usual perfect gas law: $$$$\tag{1} p V = N k T.$$$$ Assuming adiabatic irreversibility, this implies the polytrope relation, which is a special case of a barotrope state relation : $$$$\tag{2} p = \kappa \, \rho_{\text{mass}}^{\gamma},$$$$ where $$\kappa$$ is a constant and $$\gamma$$ is the adiabatic index of the fluid. Of course, $$\rho_{\text{mass}}$$ is the proper mass density of the fluid. We could also find $$$$\tag{3} p = (\gamma - 1) \, \rho_{\text{int}},$$$$ where $$\rho_{\text{int}}$$ is the internal energy density, defined as $$\rho_{\text{int}} = \rho - \rho_{\text{mass}}$$ if $$\rho$$ is the total energy density (I'm using natural units so $$c \equiv 1$$).
Now, in Special (and General) Relativity, a perfect fluid is defined as any substance that doesn't show any macroscopic viscosity and shear (this $$\underline{\text{suggest}}$$ no internal microscopic interactions, but this isn't obvious), and such that the energy-momentum of the fluid is diagonal and isotropic in the proper reference frame: $$$$\tag{4} T_{ab} = \begin{bmatrix} \rho & 0 & 0 & 0 \\ 0 & p & 0 & 0 \\ 0 & 0 & p & 0 \\ 0 & 0 & 0 & p \end{bmatrix}.$$$$ Here, a barotropic relation may admit any function $$p(\rho)$$, and not just (2) or (3) above. For example, we may admit the Van der Waals law which isn't a perfect fluid in statistical mechanics (there are some short range forces in action) : $$$$\tag{5} p = \frac{c \, \rho_{\text{mass}} \, T}{1 - a \, \rho_{\text{mass}}} - b \, \rho_{\text{mass}}^2.$$$$ This special relativistic definition isn't the same as the statistical mechanics definition, since it may admit fluids with some internal interactions (yet without showing any macroscopic shear and viscosity).
Now, I'm finding myself irritated by these two definitions, which aren't exactly equivalent. So what really is a "perfect fluid"? The $$\underline{\text{statistical}}$$ one (without any microscopic internal interactions), or the $$\underline{\text{relativistic}}$$ one (which may admit internal interactions)?
Or is there two different "names" that may differentiate the two definitions, something like "ideal fluid" and "perfect fluid" or something else?
I don't like the two inequivalent definitions having the same name, since it opens the door to confusion. I don't want a sloppy use of the same name, just because we are working in different fields (classical statistical mechanicians, or general relativists, or fluid dynamicians, ...).
I have the impression that the statistical definition is the right one, from an historical perspective, and that the relativists should call their perfect fluid as "ideal fluid", instead. Is that right? Or maybe it should be the reverse??
The sloppy Wikipedia appears to reverse the names: https://en.wikipedia.org/wiki/Perfect_fluid, which apparently was written by a relativist! And https://en.wikipedia.org/wiki/Ideal_gas which calls "ideal gas" the perfect fluid of statistical mechanics. Now I'm all confused! Wikipedia isn't a good reference for physics definitions, since there are frequently many inconsistencies.
I think you are not distinguishing properly between ideal/perfect fluids and ideal/perfect gases as they are different concepts:
In traditional fluid mechanics a perfect or ideal fluid is only characterised by the absence of dissipation, the viscosity and the coefficient of heat conduction are zero (Landau & Lafshitz) and the process as a consequence reversible. This means that its equations of motion are given by the Euler equations (including constant entropy) and not the full Navier-Stokes equations that contain a viscous term as well and were written down almost 100 years later in the 1840s. A perfect 'inviscid' fluid can be any fluid, a gas or a liquid, and poses no restrictions regarding the material law. It is particularly useful in aerodynamics where viscous contributions generally only dominate near the walls. Assuming an ideal fluid you may be able to obtain analytical solutions without the need for numerical simulations that take viscous effects directly into account.
An ideal gas on the other hand is a model for the material law: In this case the members of a rarefied gas are assumed as point-particles only interacting in elastic collisions that are assumed to obey Newtonian physics. This means the particles have only translational degrees of freedom and no complex far-field interactions between particles are admitted. If furthermore the heat capacity can be assumed constant, it is considered a perfect gas. In kinetic theory of gases this allows for a simple estimation of transport coefficients and most compressible mechanics are based on this simplification.
• So you're saying that what I called a "perfect fluid" in statistical mechanics is actually a "perfect gas", not a fluid? This would make sense. – Cham Jan 30 '20 at 21:18
• @Cham To be precise, what you describe as perfect fluid in statistical mechanics in the first section of your question would be an ideal gas for most people, including myself. I am not aware of statistical mechanics literature that actually defines it the way you described it. E.g. Landau & Lafshitz, as already mentioned, adhere to the nomenclature I used. – 2b-t Jan 30 '20 at 21:33
• I'm not sure to understand your previous comment. The statistical definition of an ideal gas (what I wrongly called "perfect fluid") is a collection of particles without any internal interaction. It's a bunch of free particles. This is how we derive the formula $p V = N k T$, and so the polytrope formula (2). – Cham Jan 30 '20 at 21:42
• @Cham Yeah, that is correct. I only intended to say that I think you mixed up the perfect fluid and the perfect gas in the first section of your question and the literature (also in statistical mechanics) is to my knowledge consistent in distinguishing the two as different concepts. I have noticed that in scientific papers people do not distinguish cleanly between inviscid flow, perfect/ideal fluid and perfect/ideal gas but books are generally consistent and precise in this regard. – 2b-t Jan 30 '20 at 21:50
• Ok, I think things are getting clearer now, thanks. Yes, many authors are very sloppy in the terms and nomenclature used (like you said), and this is why I got confused in the long run (after reading a lot of papers). It is worst in French (my native language), since we tend to say "gaz parfait" (perfect gas, instead of ideal gas): fr.wikipedia.org/wiki/Gaz_parfait, and "fluide parfait" (perfect fluid): fr.wikipedia.org/wiki/Fluide_parfait, and often mixing the words "gas" and "fluid"! – Cham Jan 30 '20 at 22:08
That Friedman used a perfect fluid is crazy.
Space is not a perfect fluid as it seems infinitely compressible. But it Could also be a fluid with varying viscosity. More viscous at low density, less viscous at high density. This Could account for dark components. High viscosity in low density regions gives illusion of mass/inertia. Low viscosity in dense regions would cause increased micro turbulence which could absorb smooth rotational flow. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 5, "x-ck12": 0, "texerror": 0, "math_score": 0.9472256302833557, "perplexity": 623.1725508967613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00051.warc.gz"} |
https://research.snu.edu.in/publication/cp-violation-due-to-compactification | X
CP violation due to compactification
C.S. Lim, N. Maru,
Published in
2010
Volume: 81
Issue: 7
Abstract
We address the challenging issue of how CP violation is realized in higher dimensional gauge theories without higher dimensional elementary scalar fields. In such theories interactions are basically governed by a gauge principle and therefore to get CP violating phases is a nontrivial task. It is demonstrated that CP violation is achieved as the result of compactification of extra dimensions, which is incompatible with the 4-dimensional CP transformation. As a simple example we adopt a 6-dimensional U(1) model compactified on a 2-dimensional orbifold T2/Z4. We argue that the 4-dimensional CP transformation is related to the complex structure of the extra space and show how the Z4 orbifolding leads to CP violation. We confirm by explicit calculation of the interaction vertices that CP violating phases remain even after the rephasing of relevant fields. For completeness, we derive a rephasing invariant CP violating quantity, following a similar argument in the Kobayashi-Maskawa model which led to the Jarlskog parameter. As an example of a CP violating observable we briefly comment on the electric dipole moment of the electron. © 2010 The American Physical Society.
About the journal
Published in
Open Access
Impact factor
N/A | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8598341941833496, "perplexity": 1011.9506626572419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711278.74/warc/CC-MAIN-20221208050236-20221208080236-00874.warc.gz"} |
http://www.albany.edu/~hammond/demos/Html5/arXiv/lxmlexamples.html | HTML Math Examples from arXiv via LaTeXML
Introduction
“arXiv” is the large e-print archive founded by Paul Ginsparg located at Cornell University (originally at Los Alamos National Laboratory). “LaTeXML” is the software for conversion of well-structured LaTeX to HTML originated by Bruce Miller of the U.S. National Institute of Standards and Technology (NIST).
For most of the e-prints at arXiv there is TeX source available, which is often LaTeX source. In the case where an author has provided carefully structured LaTeX source there is the possibility of conversion to HTML using a translation tool with support for math such as LaTeXML.
The relatively recent arrival of HTML, version 5, and the web-served software provided by MathJax, make it possible for ordinary HTML web pages with math to be viewed in most current major web browsers.
Examples
These examples were selected based on my personal mathematical interests and based on the technical consideration that each LaTeX source file lent itself reasonably well to automatic translation. The mathematical merit of these examples has not been reviewed.
Be aware that MathJax can take some time. For example, depending on your platform, the example by Funke & Millson, which represents about 35 pages printed on letter-sized paper and has nearly 2000 math zones, might take 2–5 minutes to load fully. The alternative versions labeled “for Firefox with mathfonts” are for any web browser capable of directly rendering XHTML with MathML; they may also take a while to load but should be quicker than the MathJax-ed versions.
alg-geom/9304003: D. Bayer & D. Mumford,
“What Can Be Computed in Algebraic Geometry”
(or the version for Firefox with mathfonts).
1104.2804: T. Ohira & H. Watanabe,
“A Conjecture on the Collatz-Kakutani Path Length for the Mersenne Primes”
(or the version for Firefox with mathfonts).
1108.5305: J. Funke & J. Millson,
“The Geometric Theta Correspondence for Hilbert Modular Surfaces”
(or the version for Firefox with mathfonts).
1109.1881: Th. Bauer, B. Harbourne, A. L. Knutsen, A. Küronya, S. Müller-Stach, T. Szemberg,
“Negative curves on algebraic surfaces”
(or the version for Firefox with mathfonts).
1207.5765: Joseph H. Silverman,
“An oft cited letter from Tate to Serre on computing local heights on elliptic curves”
(or the version for Firefox with mathfonts).
The idea that LaTeX documents, as they have been found in circulation during the period 1985–2014, could be translated to a formally structured SGML document type is, in a certain sense, folly just as it has been folly to imagine that more than 5% of the HTML documents in circulation during the period 1995–2014 are formally correct. Beyond that, the problem in translating LaTeX is compounded by the fact that the principal LaTeX engine implements LaTeX, the language, as a macro package under TeX, which is a Turing-complete programming language. So far in the development of LaTeX, there has been no reliably enforced boundary between LaTeX and TeX. In view of all of this it is remarkable, even astounding, that there has been any substantial degree of success with translation projects such as LaTeXML.
Many examples found at arXiv are not easily made to run through LaTeXML. In fact, there was a large effort mounted by the arXMLiv Project to translate most of the LaTeX source documents at arXiv to HTML using LaTeXML. I understand the success rate as of March 2014 to be around 70%. While that is not sufficient for a fully automated production system, it is nonetheless astounding. Moreover, the problem, as explained above, largely lies with source documents failing to match the standard.
I continue to believe that steps taken by the community toward formalizing suitable profiled usage of LaTeX would go a long way toward making better online versions of mathematical documents easily available. For more on this see my talk at TUG 2010 on LaTeX Profiles
A suitable command line invocation of LaTeXML will lead to HTML, version 5, output that is configured for MathJax. I found it convenient (in Linux, OSX, or Windows with Cygwin) to use a small (Bourne) shell script, which was this:
#!/bin/sh
pname=basename $0 if [ "$#" != "1" ] ; then
echo "Usage: ${pname} stem-name" exit 1 fi stem="$1"
if [ ! $$-f "{stem}.tex"$$ ] ; then
echo "${pname}: Cannot find${stem}.tex"
exit 2
fi
latexml "--destination=${stem}.xml" "${stem}.tex"
if [ "$?" != "0" ] ; then echo "${pname}: latexml did not finish cleanly on ${stem}.tex" exit 3 fi if [ ! $$-f "{stem}.xml"$$ ] ; then echo "${pname}: Cannot find latexml output file ${stem}.xml" exit 4 fi latexmlpost --format=html5 "--destination=${stem}.html" --presentationmathml "${stem}.xml" if [ "$?" != "0" ] ; then
echo "${pname}: latexmlpost did not finish cleanly on${stem}.xml"
fi | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8029248118400574, "perplexity": 2905.950112054087}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931007510.17/warc/CC-MAIN-20141125155647-00155-ip-10-235-23-156.ec2.internal.warc.gz"} |
http://en.wikipedia.org/wiki/Rodrigues'_rotation_formula | # Rodrigues' rotation formula
In the theory of three-dimensional rotation, Rodrigues' rotation formula (named after Olinde Rodrigues) is an efficient algorithm for rotating a vector in space, given an axis and angle of rotation. By extension, this can be used to transform all three basis vectors to compute a rotation matrix from an axis–angle representation. In other words, the Rodrigues formula provides an algorithm to compute the exponential map from so(3) to SO(3) without computing the full matrix exponent.
If v is a vector in ℝ3 and k is a unit vector describing an axis of rotation about which v rotates by an angle θ according to the right hand rule, the Rodrigues formula is
$\mathbf{v}_\mathrm{rot} = \mathbf{v} \cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta + \mathbf{k} (\mathbf{k} \cdot \mathbf{v}) (1 - \cos\theta)~.$
## Derivation
Rodrigues' rotation formula rotates v by an angle θ around an axis z by decomposing it into its components parallel and perpendicular to z, and rotating only the perpendicular component.
Given a rotation axis represented by a unit vector $\mathbf{k}$ and a vector $\mathbf{v}$ that we wish to rotate about $\mathbf{k}$ by the angle $\theta$,
$\mathbf{v}_{\parallel} = (\mathbf{k} \cdot \mathbf{v}) \mathbf{k}$
is the component of $\mathbf{v}$ parallel to $\mathbf{k}$, also called the vector projection of $\mathbf{v}$ on $\mathbf{k}$, and
$\mathbf{v}_{\perp} = \mathbf{v} - \mathbf{v}_{\parallel} = \mathbf{v} - (\mathbf{k} \cdot \mathbf{v}) \mathbf{k}$
is the component of $\mathbf{v}$ orthogonal to $\mathbf{k}$, also called the vector rejection of $\mathbf{v}$ from $\mathbf{k}$.
Let
$\mathbf{w} = \mathbf{k}\times\mathbf{v}$.
The vectors $\mathbf{v}_\perp$ and $\mathbf{w}$ have the same length, but $\mathbf{w}$ is perpendicular to both $\mathbf{k}$ and $\mathbf{v}_\perp$. This can be shown via
$\mathbf{w} = \mathbf{k} \times \mathbf{v} = \mathbf{k} \times (\mathbf{v}_{\parallel} + \mathbf{v}_{\perp}) = \mathbf{k} \times \mathbf{v}_{\parallel} + \mathbf{k} \times \mathbf{v}_{\perp} = \mathbf{k} \times \mathbf{v}_{\perp} ,$
since $\mathbf{k}$ has unit length, is parallel to $\mathbf{v}_\parallel$ and is perpendicular to $\mathbf{v}_\perp$.
The vector $\mathbf{w}$ can be viewed as a copy of $\mathbf{v}_\perp$ rotated by 90° about $\mathbf{k}$. Using trigonometry, we can now rotate $\mathbf{v}_\perp$ by $\theta$ around $\mathbf{k}$ to obtain $\mathbf{v}_{\perp\ \mathrm{rot}}$. Thus,
\begin{align} \mathbf{v}_{\perp\ \mathrm{rot}} &= \mathbf{v}_{\perp}\cos\theta + \mathbf{w}\sin\theta\\ &= (\mathbf{v} - (\mathbf{k} \cdot \mathbf{v}) \mathbf{k})\cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta. \end{align}
$\mathbf{v}_{\perp\ \mathrm{rot}}$ is also the rejection from $\mathbf{k}$ of the vector $\mathbf{v}_{\mathrm{rot}}$, defined as the desired vector, $\mathbf{v}$ rotated about $\mathbf{k}$ by the angle $\theta$. Since v is not affected by a rotation about $\mathbf{k}$, the projection of $\mathbf{v}_\mathrm{rot}$ on $\mathbf{k}$ coincides with $\mathbf{v}_\parallel$. Thus,
\begin{align} \mathbf{v}_{\mathrm{rot}} &= \mathbf{v}_{\perp\ \mathrm{rot}} + \mathbf{v}_{\parallel\ \mathrm{rot}} \\ &= \mathbf{v}_{\perp\ \mathrm{rot}} + \mathbf{v}_{\parallel} \\ &= (\mathbf{v} - (\mathbf{k} \cdot \mathbf{v}) \mathbf{k}) \cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta + (\mathbf{k} \cdot \mathbf{v}) \mathbf{k} \\ &= \mathbf{v} \cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta + \mathbf{k} (\mathbf{k} \cdot \mathbf{v}) (1 - \cos\theta), \end{align}
as required.
### Matrix notation
We first represent v and k as column matrices, and defining a matrix K as the "cross-product matrix" for the vector k, i.e.,
$\mathbf{K}= \left[\begin{array}{ccc} 0 & -k_3 & k_2 \\ k_3 & 0 & -k_1 \\ -k_2 & k_1 & 0 \end{array}\right]$.
This can easily be checked to have the property that
$\mathbf{K}\mathbf{v} = \mathbf{k}\times\mathbf{v}$
for any vector v (in fact, K is the unique matrix with this property).
Now, from the last equation in the previous sub-section, we may write
\begin{align} \mathbf{v}_{\mathrm{rot}} &= \mathbf{v} \cos\theta + (\mathbf{k} \times \mathbf{v})\sin\theta + \mathbf{k} (\mathbf{k} \cdot \mathbf{v}) (1 - \cos\theta) \\ &= \mathbf{v} + (\mathbf{K} \mathbf{v})\sin\theta + (\mathbf{k} (\mathbf{k} \cdot \mathbf{v}) - \mathbf{v}) (1 - \cos\theta). \end{align}
To simplify further, use the well-known formula for the vector triple product,
$\mathbf{a}\times (\mathbf{b}\times \mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})$
with a = b = k, and c = v, to obtain
$(\mathbf{k} (\mathbf{k} \cdot \mathbf{v}) - \mathbf{v}) = \mathbf{k} \times (\mathbf{k} \times \mathbf{v})$
or
$\mathbf{k} (\mathbf{k} \cdot \mathbf{v}) - \mathbf{v} = \mathbf{K}^2 \mathbf{v}$.
This means (substituting the above equation in the last one for vrot) ,
$\mathbf{v}_{\mathrm{rot}} = \mathbf{v} + (\sin\theta) \mathbf{K}\mathbf{v} + (1-\cos\theta)\mathbf{K}^2\mathbf{v}$,
resulting in the Rodrigues' rotation formula in matrix notation,
\begin{align} \mathbf{v}_{\mathrm{rot}} &= \mathbf{R}\mathbf{v} \end{align}
where R is the rotation matrix
\begin{align} \mathbf{R} = \mathbf{I} + (\sin\theta) \mathbf{K} + (1-\cos\theta)\mathbf{K}^2 \end{align}
Since K is defined in terms of the components of the rotation axis k, and θ is the rotation angle, R is the rotation matrix about k by angle θ, and is easy to compute.
R is a member of the rotation group SO(3) of ℝ3, and K is a member of the Lie algebra so(3) of that Lie group. In terms of the matrix exponential, we have
$\mathbf{R} = \mathbf{exp}(\theta\mathbf{K})$.
For an alternative derivation based on this exponential relationship, see Axis–angle representation#Exponential map from so(3) to SO(3). For the inverse mapping, see Axis–angle representation#Log map from SO(3) to so(3).
## References
• Don Koks, (2006) Explorations in Mathematical Physics, Springer Science+Business Media,LLC. ISBN 0-387-30943-8. Ch.4, pps 147 et seq. A Roundabout Route to Geometric Algebra' | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996390342712402, "perplexity": 768.4747625749511}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931005387.19/warc/CC-MAIN-20141125155645-00230-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://www.jiskha.com/questions/492240/The-question-that-have-is-to-simplify-and-write-the-answer-in-exponential-notation | # math
The question that have is to simplify and write the answer in exponential notation using positive exponents 2 to 4th power x 2 -2
These are my choices
a 4 to the 8th power
b 2 to the 8th power
c 4 to the 2nd power
d 2 to the 2nd power
1. 👍 0
2. 👎 0
3. 👁 81
2 to 4th power x 2 -2 ?
2^4 * 2^(-2)
If so,
2^4 = 16
2^(-2) = 1/(2^2) = 1/4
2^4 * 2^(-2) = 16 * 1/4 = 16/4 = 4
4 = 2^2
1. 👍 0
2. 👎 0
posted by helper
## Similar Questions
1. ### MTH 156
2^6 * 2^7 multiply and simplify and then write the answer in exponential notation. I know the answer is 8,192 but I cannot figure out how to write that in exponential notation. Thanks for the help.
asked by Bella on August 5, 2015
2. ### Chemistry
1. Write 86,400 s in exponential notation. My answer: 8.6400 x 10^4 s 2. Write 0.15 ns using exponential notation with units of s. My answer: 0.15 x 10^-9 3. 141000 g =? kg My answer: 141 kg 4. Write 1.4 Mg using exponential
asked by Anonymous on September 28, 2015
3. ### MTH 156
2^6 * 2^7 multiply and simplify and then write the answer in exponential notation. Thanks!
asked by Bella on August 5, 2015
4. ### Mathematics
How do I do these? 1. Simplify using only positive exponents: (2t)⁻⁶ 2. Simplify using only positive exponents: (w⁻²j⁻⁴)⁻³(j⁷j³) 3. Simplify using only positive exponents: a²b⁻⁷c⁴ ---------- a⁵b³c⁻²
asked by Name on March 11, 2018
5. ### Algebra
Can you please check my work? You have helped other students I know. Thank you:) Multiply -7(2a2 -4) = -14a2 + 28 Multiply 0.5(a + b -5) = 0.5a + 0.5b - 2.5 Factor bx - by + bz = b(x - y + z) Combine like terms (simplify) 12x + 4
asked by Sharon (ms. Sue) on November 8, 2011
6. ### Algebra (check my answers)
Please check these answers. Multiply -7(2a2 -4) = -14a2 + 28 Multiply 0.5(a + b -5) = 0.5a + 0.5b - 2.5 Factor bx - by + bz = b(x - y + z) Combine like terms (simplify) 12x + 4 - 11y - 6x - 9 - y = 6x - 12y - 5 Evaluate the
asked by Sharon on November 8, 2011
7. ### Algebra 1
Plz help me with the following problems. Simplify. 5^-1(3^-2) Simplify. mn^-4/p^0q^-2 Write a scientific notation. 0.0042 Write in standard notation. 6.12*10^3 Simplify. Write in scientific notation. 0.5(8 * 10^5) Simplify. Write
asked by Gabby on February 5, 2014
8. ### math
Could someone please how this problem should be solved? Simplify and write the answer in exponential notation using positive exponents. 2 to the 4th x 2 -2nd power
asked by Dennis on February 6, 2011
9. ### Math
Use the expression 12*12 to answer the question below A. Write the expression using exponential notation B. What is the base? C. What is the exponent? D. WRite the expression in expanded form. C?
asked by §eth on September 4, 2012
10. ### Algebra 2
Exponential Notation Simplify. (-2)^0-(-2)^3-(-2)^-1+(-2)^4-(-2)^-2 I tried solving this problem a few times and kept getting random answers. The answer in the book says that it is 25 1/4 I do not understand how you get that
asked by Ann on September 11, 2011
More Similar Questions | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9086999297142029, "perplexity": 3870.4191492605514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480472.38/warc/CC-MAIN-20190216125709-20190216151709-00341.warc.gz"} |
https://www.physicsforums.com/threads/gyroscopic-precession.19096/ | # Homework Help: Gyroscopic precession
1. Apr 12, 2004
### cucumber
hi. not sure if this belongs in here, so sorry in advance for any trouble caused.
i am in a bit of a predicament with a physics assessment of mine; i need to know the mathematical relationships between the precessional frequency, the couple (or torque, as i have read it called) and the angular momentum of a gyroscope, and (here comes the tricky bit) i would like to understand it...
i just need someone to point me in the right direction, i mean i hardly know which questions to ask or where to start looking (sob sob).
i am quasi-familiar with the concept of moment of inertia being the equivalent of mass in linear motion and angular velocity that of velocity (duh), but i have not found a set of formulae that would allow me to make a connection between the aforementioned...er... things...
i come with the rather unsound basis of a-level physics, which is why it will be an especially challenging task for you get me to understand it. good luck (and thanks).
cucumber.
2. Apr 12, 2004
$$\theta r = s$$
$$\omega r = v$$
$$\alpha r = a$$
$$\vec{L} = \vec{r} \times \vec{p}$$
$$\vec{\tau} = \vec{r} \times \vec{F}$$
3. Apr 13, 2004
### cucumber
i'm unsure about a couple of things, though...
what are the "s" and the theta in [theta]*r = s
what are the alpha and the "a" in [alpha]*r = a
what are the "r"'s in both the above equations (just to be sure...)
and finally, what are those two other equations all about???
i suppose that funny looking thing (like half a pi) is torque, but the rest i have absolutely no idea... sorry.
i'd be very grateful if you could clarify them a bit.
thanks again.
cucumber.
4. Apr 13, 2004
### Chen
In a circle of radius r the length of any arc is given by the angle it creates with the center of the circle times the radius: $$s = \theta r$$
Angular acceleration:
$$\alpha = \frac{a_t}{r}$$
http://hyperphysics.phy-astr.gsu.edu/hbase/rotq.html#rq
Angular momentum:
$$\vec{L} = \vec{r} \times \vec{p}$$
http://hyperphysics.phy-astr.gsu.edu/hbase/amom.html
Torque ('tau'):
$$\vec{\tau} = \vec{r} \times \vec{F}$$
http://hyperphysics.phy-astr.gsu.edu/hbase/torq2.html#tc
Precession of Gyroscope:
http://hyperphysics.phy-astr.gsu.edu/hbase/gyr.html#gyr | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.929340660572052, "perplexity": 913.379468652189}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867085.95/warc/CC-MAIN-20180525102302-20180525122302-00562.warc.gz"} |
https://www.physicsforums.com/threads/evolution-factor.64187/ | # Evolution Factor
1. Feb 18, 2005
### scilover89
Beside mutation, molecular drive, and variation of genetic, what are the factors which triggled the evolution?
2. Feb 18, 2005
### Sariaht
The environment causes evolution as well,
but that wasn't the question, was it?
3. Feb 18, 2005
### iansmith
Staff Emeritus
Selection such as environmental condition and mating will have an effect on the genetic variation.
There is also other effect such as the founder effect and bottleneck effect. The founder effect is when a small group of individuals becomes isolated from the main population due to geographical reason. The bottleneck effect is cause by a sudden reduction in the number of individuals in a population. It is usually not related to genetic advantage. A flood or a mass killing could be examples.
4. Feb 24, 2005
### Phobos
Staff Emeritus | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410152792930603, "perplexity": 4311.380977864193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280899.42/warc/CC-MAIN-20170116095120-00090-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://fjp.at/posts/slam/fastslam/ | The following sections summairze the Grid-based FastSLAM algorithm which is one instance of FastSLAM. This algorithm estimates the trajectory of a mobile robot while simultaneously creating a grid map of the environment. Grid-based FastSLAM is combination of a particle filter such as Adaptive Monte Carlo Localization (amcl) and a mapping algorithm such as occupancy grid mapping.
## SLAM Fundamentals
SLAM stands for Simultaneous Localization and Mapping sometimes refered to as Concurrent Localization and Mappping (CLAM). The SLAM algorithm combines localization and mapping, where a robot has access only to its own movement and sensory data. The robot must build a map while simultaneously localizing itself relative to the map.
The map and the robot pose will be uncertain, and the errors in the robot’s pose estimate and map will be correlated. The accuracy of the map depends on the accuracy of the localization and vice versa. Chicken and eggo problem: The map is needed for localization, and the robot’s pose is needed for mapping. This makes SLAM a real challenge but is essential for mobile robotics.
They must be able to move in environments they have never seen before. Examples are a vacuum cleaner where also the map can change due to moving furniture. Of course self driving vehicles require SLAM to update their maps while localizing themselfs in it.
There exist generally five categories of SLAM algorithms:
1. Extended Kalman Filter SLAM (EKF)
2. Sparse Extended Information Filter (SEIF)
3. Extended Information Form (EIF)
4. FastSLAM
5. GraphSLAM
This posts describes the FastSLAM approach which uses a particle filter and a low dimensional Extended Kalman filter. This algorithm will be adapted to grid maps which results in Grid-based FastSLAM. GraphSLAM on the other hand uses constraints to represent relationships between robot poses and the environment. With this, the algorithm tries to resolve all the constraints to create the most likely map given the data. An implementation of GraphSLAM is called Real Time Apperance Based Mapping (RTABMap).
### Localization
In Localization problems a map is known beforehand and the robot pose is estimated using its sensor mesaurements $z_{1:t}$, control inputs $u_{1:t}$ and its initial pose $x_{1:t-1}$. With this data, the new belief $p(x_{1:t}|x_{1:t-1}, z_{1:t}, u_{1:t})$ can be computed as a probability distribution.
The localization estimation can be done with an Extended Kalman filter or Monte Carlo localization. With the Monte Carlo particle filter approach (MCL) each particle consists of the robot pose $(x, y, \theta)$ and its importance weight $w$. With motion and sensor updates, followed by resampling it is possible to estimate the robots pose.
### Mapping
In mapping problems the robot pose $x_{1:t}$ is known and the map $m_{t}$ at time $t$, either static or dynamic is unknown. Therefore the mapping problem is to find the posterior belief of the map $p(m_t|x_{1:t}, z_{1:t})$ given the robot poses and its measurements $z_{1:t}$.
The challenges in mapping are the number of state variables. In localization, only the robots pose is estimated with its $x$ and $y$ position. A map on the other hand lies in a continuous space. This can lead to infinitely many variables used to describe the map. Additional uncertainty is present through sensor data perception. Other challenges are the space and its geometries that should be mapped. For example repetitive environments such as walkways with no doors or similar looking ones.
The mapping algorithm that is described in this post is occupancy grid mapping. The algorithm can map any arbitrary environment by dividing it into a finite number of grid cells.
### SLAM Characteristics
SLAM exists in two forms which are Online SLAM and Full SLAM. In both forms, the algorithm estimates a map of its environment. However, Online SLAM estimates only single poses of the robot at specific time instances. Given the measurements $z_{1:t}$ and the control inputs $u_{1:t}$ the problem is to find the posterior belief of the robot pose $x_{t}$ at time $t$ and the map $m_{t}$.
$p(x_t, m|z\_{1:t}, u\_{1:t})$
Full SLAM on the other hand, estimates a full trajectory $x_{1:t}$ of the robot instead of just a single pose $x_t$ at a particular time step.
$p(x\_{1:t}, m|z\_{1:t}, u\_{1:t})$
Both problems are related to each other. The Online SLAM problem is result of integrating over the individual robot poses of the Full SLAM problem once at a time.
$\underbrace{p(x_t, m|z\_{1:t}, u\_{1:t})}_{\text{Online SLAM}} = \int \int \dots \int \underbrace{p(x\_{1:t}, m|z\_{1:t}, u\_{1:t})}_{\text{Full SLAM}} dx_1 dx_2 \dots dx_{t-1}$
Another characteristic of SLAM is that it is a continouous and discrete problem. Robot poses and object or landmark locations are continouous aspects of the SLAM problem. While sensing the environment continously, a discrete relation between detected objects and newly detected ones needs to be made. This relation is known by correspondance and helps the robot to detect if it has been in the same location. With SLAM, a mobile robot is establishing a discrete relation between newly and previously detected objects.
Correspondences should be included in the estimation problem meaning that the posterior includes the correspondence in both, the online and full SLAM problem.
\begin{align} &\text{Online SLAM: } p(x_t, m|z_{1:t}, u_{1:t}) \Rightarrow p(x_t, m, c_t|z_{1:t}, u_{1:t}) \\ &\text{Full SLAM: } p(x_{1:t}, m|z_{1:t}, u_{1:t}) \Rightarrow p(x_{1:t}, m, c_{1:t}|z_{1:t}, u_{1:t}) \end{align}
The advantage to add the correspondances to both problems is to have the robot better understand where it is located by establishing a relation between objects. The relation between the online SLAM and full SLAM problem is defined as
$\underbrace{p(x_t, m, c_t|z_{1:t}, u_{1:t})}_{\text{Online SLAM}} = \int \int \dots \int \sum_{c_1} \sum_{c_2} \dots \sum_{c_{t-1}} \underbrace{p(x_{1:t}, m, c_{1:t}|z_{1:t}, u_{1:t})}_{\text{Full SLAM}} dx_1 dx_2 \dots dx_{t-1}$
where it is now required to sum over the correspondence values and integrate over the robot poses from the Full SLAM problem.
### Challenges
The continouous portion consists of the robot poses and object locations and is highly dimensional. Also the discrete correspondences between detected objects are highly dimensional. These aspects require an approximation even when known correspondences are assumed.
There exist two instances of FastSLAM that require known correspondences which are FastSLAM 1.0 and FastSLAM 2.0. With these approaches each particle holds a guess of the robot trajectory and by doing so the SLAM problem is reduced to mapping with known poses.
To do SLAM without known correspondences, meaning without known landmark positions the algorithm in the following section can be used.
## Grid-based FastSLAM Algorithm
FastSLAM solves the Full SLAM problem with known correspondences using a custom particle filter approach known by the Rao-Blackwellized particle filter approach. This approach estimates a posterior over the trajectory using a particle filter. With this trajectory the robot poses are now known and the mapping problem is then solved with a low dimensional Extended Kalman Filter. This filter models independent features of the map with local Gaussians.
Using a grid map the environment can be modeled and FastSLAM gets extended without predefining any landmark positions. This allows to solve the SLAM problem in an arbitrary environment.
With the Grid-based FastSLAM algorithm, each particle holds a guess of the robot trajectory using a MCL particle filter. Addionaly, each particle maintains its own map by utilizing the occupancy grid mapping algorithm.
The steps of the algorithm consist of sampling motion $p(x_t|x_{t-1}^{[k]}, u_t)$, map estimation $p(m_t|z_t, x_t^{[k]}, m_{t-1}^{[k]})$ and importance weight $p(z_t|x_t^{[k]}, m^{[k]})$.
The algorithm takes the previous belief $X_{t-1}$ or pose, the actuation commands $u_t$ and the sensor measurements $z_t$ as input. Initially $M \in \mathbb{R}$ particles are generated randomly which defines the initial belief $\bar{X}_t$. The first for loop represents the motion, sensor and map update steps. Here, the pose of each particle is estimated and the likelihoods of the measurements and the map are updated. To update the measurements model likelihood the importance weight technique is used in the measurement_model_map function. In the update_occupancy_grid function, each particle updates its map using the occupancy grid mapping algorithm. The newly estimated k-th particle pose, map and likelihood of the measurement are all added to the hypotetical belief $\bar{X}_t$.
In the second for loop the resampling process of the particles takes place. The resampling is implementd using a resampling wheel technique. Here, particle measurements that are close to the robots real world measurement values are redrawn more frequently in upcoming iterations. The drawn particle poses and maps are added to the system belief $X_t$ which is returnd from the algorithm to start a new iteration with the next motion and sensor updates.
## ROS gmapping
The gmapping ROS package uses the Grid-based FastSLAM algorithm. This package contains the single slam_gmapping node, which subscribes to the tf and scans topics. Using these inputs, it generates a 2D occupancy grid map and outputs robot poses on the map and entropy topics. Additional map data is provided through the map_metadata topic. Another way to access the map is to use the service provided by the node. To demonstrate gmapping, turtlebot will be deployed in the willow garage environment inside gazebo. Moving the turtlebout around using the teleop package and running the slam_gmapping node will generate a map.
Refere to Wikipedia for a list of SLAM methods. There you can also find resources for the FastSLAM instances:
Further details about MCL are found in the paper of Sebastian Thrun et al. The gmapping algorithm can be found here.
## Reference
This post is a summary of the lesson on FastSLAM from the Robotics Nanodegree of Udacity.
Tags:
Categories:
Updated: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8451486825942993, "perplexity": 1442.3427793953147}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141715252.96/warc/CC-MAIN-20201202175113-20201202205113-00538.warc.gz"} |
https://biust.pure.elsevier.com/en/publications/empirical-statistical-modeling-of-march-may-rainfall-prediction-o | # Empirical statistical modeling of March-May rainfall prediction over southern nations, nationalities and people’s region of Ethiopia
Wondimu Tadiwos Hailesilassie, Gizaw Mengistu Tsidu
Research output: Contribution to journalArticlepeer-review
## Abstract
Statistical predictive models were developed to investigate how global rainfall predictors relate to the March-May (MAM) rainfall over Southern Nations, Nationalities and People's Region (SNNPR) of Ethiopia. Data utilized in this study include station rainfall data, oceanic and atmospheric indices. Because of the spatial variations in the interannual variability and the annual cycle of rainfall, an agglomerative hierarchical cluster analyses were used to delineate a network of 20 stations over study area into three homogeneous rainfall regions in order to derive rainfall indices. Time series generated from the delineated regions were later used in the rainfall/teleconnection indices analyses. The methods employed were correlation analysis and multiple linear regressions. The regression modes were based on the training period from 1987-2007 and the models were validated against observation for the independent verification period of 2008-2012. Results obtained from the analysis revealed that sea surface temperature (SST) variations were the main drivers of seasonal rainfall variability. Although SSTs account for the majority of variance in seasonal rainfall, a moderate improvement of rainfall prediction was achieved with the inclusion of atmospheric indices in prediction models. The techniques clearly indicate that the models were reproducing and describing the pattern of the rainfall for the sites of interest. For the forecast to become useful at an operational level, further development of the model will be necessary to improve skill and to determine the error bounds of the forecast.
Original language English 569-578 10 Mausam 66 3 Published - 2015
## All Science Journal Classification (ASJC) codes
• Atmospheric Science
• Geophysics
## Fingerprint
Dive into the research topics of 'Empirical statistical modeling of March-May rainfall prediction over southern nations, nationalities and people’s region of Ethiopia'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509758710861206, "perplexity": 4512.621945245193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00148.warc.gz"} |
https://cs.biu.ac.il/en/node/4011 | # On Notions of Distortion and an Almost Minimum Spanning Tree with constant Average Distortion
30/11/2017 - 12:00
Speaker:
Seminar:
Where:
Abstract:
Minimum Spanning Trees of weighted graphs are fundamental objects in numerous applications. In particular in distributed networks, the minimum spanning tree of the network is often used to route messages between network nodes. Unfortunately, while being most efficient in the total cost of connecting all nodes, minimum spanning trees fail miserably in the desired property of approximately preserving distances between pairs. While known lower bounds exclude the possibility of the worst case distortion of a tree being small, Abraham et al showed that there exists a spanning tree with constant average distortion. Yet, the weight of such a tree may be significantly larger than that of the MST. In this paper, we show that any weighted undirected graph admits a spanning tree whose weight is at most (1+\rho) times that of the MST, providing constant averagedistortion O(1/\rho).
The constant average distortion bound is implied by a stronger property of scaling distortion, i.e., improved distortion for smaller fractions of the pairs. The result is achieved by first showing the existence of a low weight spanner with small prioritized distortion, a property allowing to prioritize the nodes whose associated distortions will be improved. We show that prioritized distortion is essentially equivalent to coarse scaling distortion via a general transformation, which has further implications and may be of independent interest. In particular, we obtain an embedding for arbitrary metrics into Euclidean space with optimal prioritized distortion.
Joint work with Yair Bartal and Ofer Neiman. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9506856799125671, "perplexity": 409.9229691496021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540481076.11/warc/CC-MAIN-20191205141605-20191205165605-00081.warc.gz"} |
https://math.cornell.edu/lower-level-courses | # Entry-Level Courses for Freshmen and Sophomores
## You are here
Please consult First Steps in Math for assistance in selecting an appropriate course.
### MATH 1006 - Academic Support for MATH 1106
Spring 2020. 1 credit. S/U grades only.
Students should contact their college for the most up-to-date information regarding if and how credits for this course will count toward graduation, and/or be considered regarding academic standing.
Reviews material presented in MATH 1106 lectures, provides problem-solving techniques and tips as well as prelim review. Provides further instruction for students who need reinforcement. Not a substitute for attending MATH 1106 lectures or discussions.
### MATH 1011 - Academic Support for MATH 1110
Fall 2019, Spring 2020. 1 credit. S/U grades only.
Students should contact their college for the most up-to-date information regarding if and how credits for this course will count toward graduation, and/or be considered regarding academic standing.
Reviews material presented in MATH 1110 lectures, provides problem-solving techniques and tips as well as prelim review. Provides further instruction for students who need reinforcement. Not a substitute for attending MATH 1110 lectures.
### MATH 1012 - Academic Support for MATH 1120
Fall 2019, Spring 2020. 1 credit. S/U grades only.
Students should contact their college for the most up-to-date information regarding if and how credits for this course will count toward graduation, and/or be considered regarding academic standing.
Reviews material presented in MATH 1120 lectures, provides problem-solving techniques and tips as well as prelim review. Provides further instruction for students who need reinforcement. Not a substitute for attending MATH 1120 lectures or discussions.
### MATH 1021 - Academic Support for MATH 2210
Fall 2019, Spring 2020. 1 credit. S/U grades only.
Reviews material presented in MATH 2210 lectures, provides problem-solving techniques and tips as well as prelim review. Provides further instruction for students who need reinforcement. Not a substitute for attending MATH 2210 lectures or discussions.
### MATH 1101 - Calculus Preparation
Fall 2019. 1 credit. Letter grades only.
Introduces topics in calculus: limits, rates of change, definition of and techniques for finding derivatives, relative and absolute extrema, and applications. The calculus content of the course is similar to 1/3 of the content covered in MATH 1106 and MATH 1110. In addition, the course includes a variety of topics of algebra, with emphasis on the development of linear, power, exponential, logarithmic, and trigonometric functions. Because of the strong emphasis on graphing, students will have a better understanding of asymptotic behavior of these functions.
### MATH 1105 - Finite Mathematics for the Life and Social Sciences
Fall 2019. 3 credits. Student option grading.
Prerequisite: three years of high school mathematics, including trigonometry and logarithms.
Introduction to linear algebra, probability, and Markov chains that develops the parts of the theory most relevant for applications. Specific topics include: equations of lines, the method of least squares, solutions of linear systems, matrices; basic concepts of probability, permutations, combinations, binomial distribution, mean and variance, and the normal approximation to the binomial distribution. Examples from biology and the social sciences are used.
### MATH 1106 - Modeling with Calculus for the Life Sciences
Spring 2020. 3 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will not receive credit for both MATH 1106 and MATH 1110. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: three years of high school mathematics (including trigonometry and logarithms) or a precalculus course (e.g., MATH 1101). No prior knowledge of calculus is required. Students who plan to take more than one semester of calculus should take MATH 1110 rather than MATH 1106.
The goal of this course is to give students a strong basis in some quantitative skills needed in the life and social sciences. There will be an emphasis on modeling, using fundamental concepts from calculus developed in the course, including: derivatives, integrals, and introductory differential equations. Examples from the life sciences are used throughout the course. To give a concrete example, we will study predator-prey populations. We will write down mathematical models that describe the evolution of these populations, analyze both quantitative and qualitative properties to make predictions about the future of these populations, and discuss the assumptions and limitations of the models.
Note that while we will cover the topics of derivatives and integrals, this course has a different, much more applied, focus from courses such as MATH 1110 Calculus I or a typical high school calculus course.
### MATH 1110 - Calculus I
Summer 2019 (6-week), Fall 2019, Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will not receive credit for both MATH 1110 and MATH 1106. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: three years of high school mathematics (including trigonometry and logarithms) or a precalculus course (e.g., MATH 1101). MATH 1110 can serve as a one-semester introduction to calculus or as part of a two-semester sequence in which it is followed by MATH 1120 or MATH 1220.
Topics include functions and graphs, limits and continuity, differentiation and integration of algebraic, trigonometric, inverse trig, logarithmic, and exponential functions; applications of differentiation, including graphing, max-min problems, tangent line approximation, implicit differentiation, and applications to the sciences; the mean value theorem; and antiderivatives, definite and indefinite integrals, the fundamental theorem of calculus, substitution in integration, the area under a curve.
### MATH 1120 - Calculus II
Fall 2019, Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 1120, 1220, 1910. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: MATH 1110 with a grade of C or better, excellent performance in MATH 1106, or equivalent AP credit. Those who do well in MATH 1110 and expect to major in mathematics or a strongly mathematics-related field should take MATH 1220 instead of 1120.
Focuses on integration: applications, including volumes and arc length; techniques of integration, approximate integration with error estimates, improper integrals, differential equations (separation of variables, initial conditions, systems, some applications). Also covers infinite sequences and series: definition and tests for convergence, power series, Taylor series with remainder, and parametric equations.
### MATH 1220 - [Theoretical Calculus II]
Fall. Next offered 2020-2021. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 1120, 1220, 1910. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: one semester of calculus with high performance or equivalent AP credit. Takes a more theoretical approach to calculus than MATH 1120. Students planning to continue with MATH 2130 are advised to take 1120 instead of this course.
Topics include differentiation and integration of elementary transcendental functions, techniques of integration, applications, polar coordinates, infinite series, and complex numbers, as well as an introduction to proving theorems.
### MATH 1300 - Mathematical Explorations
Fall 2019. 3 credits. Student option grading.
Pre-enrollment limited to Arts and Sciences students. Out-of-college students may be able to enroll during the add/drop period.
For students who wish to experience how mathematical ideas naturally evolve. The course emphasizes ideas and imagination rather than techniques and calculations. Homework involves students in actively investigating mathematical ideas. Topics vary depending on the instructor. Some assessment through writing assignments.
### MATH 1340 - Strategy, Cooperation, and Conflict
Spring 2020. 3 credits. Student option grading.
We apply mathematical reasoning to problems arising in the social sciences. We discuss game theory and its applications to questions of governing and the analysis of political conflicts. The problem of finding fair election procedures to choose among three or more alternatives is analyzed.
### MATH 1710 - Statistical Theory and Application in the Real World
Fall 2019, Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: AEM 2100, BTRY 3010, BTRY 6010, ENGRD 2700, HADM 2010, ILRST 2100, ILRST 6100, MATH 1710, PAM 2100, PAM 2101, PSYCH 2500, SOC 3010, STSCI 2100, STSCI 2150, STSCI 2200. In addition, no credit for MATH 1710 if taken after ECON 3130, ECON 3140, ECON 3125, MATH 4720, or any other upper-level course focusing on the statistical sciences (e.g., those counting toward the statistics concentration for the math major).
Prerequisite: high school mathematics. No previous familiarity with computers presumed.
Introductory statistics course discussing techniques for analyzing data occurring in the real world and the mathematical and philosophical justification for these techniques. Topics include population and sample distributions, central limit theorem, statistical theories of point estimation, confidence intervals, testing hypotheses, the linear model, and the least squares estimator. The course concludes with a discussion of tests and estimates for regression and analysis of variance (if time permits). The computer is used to demonstrate some aspects of the theory, such as sampling distributions and the Central Limit Theorem. In the lab portion of the course, students learn and use computer-based methods for implementing the statistical methodology presented in the lectures.
### MATH 1910 - Calculus For Engineers
Summer 2019 (6-week), Fall 2019, Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 1120, 1220, 1910. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: three years high school mathematics, including trigonometry and logarithms, and at least one course in differential and integral calculus or equivalent AP credit.
Essentially a second course in calculus. Topics include techniques of integration, finding areas and volumes by integration, exponential growth, partial fractions, infinite sequences and series, tests of convergence, and power series.
### MATH 1920 - Multivariable Calculus for Engineers
Summer 2019 (6-week), Fall 2019, Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 1920, 2130, 2220, 2240. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: MATH 1910 or equivalent AP credit.
Introduction to multivariable calculus. Topics include partial derivatives, double and triple integrals, line and surface integrals, vector fields, Green’s theorem, Stokes’ theorem, and the divergence theorem.
### MATH 2130 - Calculus III
Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 1920, 2130, 2220, 2240. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: MATH 1120, 1220, or 1910, or equivalent AP credit. Designed for students who wish to master the basic techniques of multivariable calculus, but whose major will not require a substantial amount of mathematics. Students who plan to major or minor in mathematics or take upper-level math courses should take MATH 1920, 2220, or 2240 rather than MATH 2130.
Topics include vectors and vector-valued functions; multivariable and vector calculus including multiple and line integrals; first- and second-order differential equations with applications; systems of differential equations; and elementary partial differential equations. Optional topics may include Green's theorem, Stokes' theorem, and the divergence theorem.
### MATH 2210 - Linear Algebra
Fall 2019, Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 2210, 2230, 2310, 2940. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: two semesters of calculus with high performance, equivalent AP credit, or permission of department. Recommended for students who plan to major or minor in mathematics or a related field. For a more applied version of this course, see MATH 2310.
Topics include vector algebra, linear transformations, matrices, determinants, orthogonality, eigenvalues, and eigenvectors. Applications are made to linear differential or difference equations. The lectures introduce students to formal proofs. Students are required to produce some proofs in their homework and on exams.
### MATH 2220 - Multivariable Calculus
Fall 2019, Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 1920, 2130, 2220, 2240. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: MATH 2210. Recommended for students who plan to major or minor in mathematics or a related field.
Differential and integral calculus of functions in several variables, line and surface integrals as well as the theorems of Green, Stokes and Gauss.
### MATH 2230 - Theoretical Linear Algebra and Calculus
Fall 2019. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 2210, 2230, 2310, 2940. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: two semesters of calculus with grade of A– or better, equivalent AP credit, or permission of instructor. Designed for students who have been extremely successful in their previous calculus courses and for whom the notion of solving very hard problems and writing careful proofs is highly appealing. MATH 2230-2240 provides an integrated treatment of linear algebra and multivariable calculus at a higher theoretical level than in MATH 2210-2220.
Topics include vectors, matrices, and linear transformations; differential calculus of functions of several variables; inverse and implicit function theorems; quadratic forms, extrema, and manifolds; multiple and iterated integrals.
### MATH 2240 - Theoretical Linear Algebra and Calculus
Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 1920, 2130, 2220, 2240. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: MATH 2230. Designed for students who have been extremely successful in their previous calculus courses and for whom the notion of solving very hard problems and writing careful proofs is highly appealing. MATH 2230-2240 provides an integrated treatment of linear algebra and multivariable calculus at a higher theoretical level than in MATH 2210-2220.
Topics include vector fields; line integrals; differential forms and exterior derivative; work, flux, and density forms; integration of forms over parametrized domains; and Green's, Stokes', and divergence theorems.
### MATH 2310 - Linear Algebra with Applications
Fall 2019. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 2210, 2230, 2310, 2940. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: one semester of college-level calculus, such as MATH 1106 or MATH 1110, or equivalent AP credit. Students who plan to major or minor in mathematics or take upper-level math courses should take MATH 2210, 2230, or 2940 rather than MATH 2310.
Introduction to linear algebra for students who wish to focus on the practical applications of the subject. A wide range of applications are discussed and computer software may be used. The main topics are systems of linear equations, matrices, determinants, vector spaces, orthogonality, and eigenvalues. Typical applications are population models, input/output models, least squares, and difference equations.
### MATH 2810 - Deductive Logic
(also PHIL 3310)
Spring 2020. 4 credits. Student option grading.
Prerequisite: PHIL 2310 or MATH 2210 or MATH 2230 or explicit permission of instructor.
A mathematical study of the formal languages of propositional and predicate logic, including their syntax, semantics, and deductive systems. Various formal results will be established, most importantly soundness and completeness.
### MATH 2930 - Differential Equations for Engineers
Summer 2019 (6-week), Fall 2019, Spring 2020. 4 credits. Student option grading.
Prerequisite: MATH 1920. Taking MATH 2930 and 2940 simultaneously is not recommended.
Introduction to ordinary and partial differential equations. Topics include first order equations (separable, linear, homogeneous, exact); mathematical modeling (e.g., population growth, terminal velocity); qualitative methods (slope fields, phase plots, equilibria and stability); numerical methods; second order equations (method of undetermined coefficients, application to oscillations and resonance, boundary value problems and eigenvalues); and Fourier series. A substantial part of this course involves partial differential equations, such as the heat equation, the wave equation, and Laplace's equation. (This part must be present in any outside course being considered for transfer credit to Cornell as a substitute for MATH 2930.)
### MATH 2940 - Linear Algebra for Engineers
Summer 2019 (6-week), Fall 2019, Spring 2020. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will receive credit for only one course in the following group: MATH 2210, 2230, 2310, 2940. For guidance in selecting an appropriate course, please consult First Steps in Math.
Prerequisite: MATH 1920. Taking MATH 2930 and 2940 simultaneously is not recommended.
Linear algebra and its applications. Topics include matrices, determinants, vector spaces, eigenvalues and eigenvectors, orthogonality and inner product spaces; applications include brief introductions to difference equations, Markov chains, and systems of linear ordinary differential equations. May include computer use in solving problems. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8202680349349976, "perplexity": 1896.5121075263223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668782.15/warc/CC-MAIN-20191117014405-20191117042405-00179.warc.gz"} |
https://infoscience.epfl.ch/record/202320 | ## Vortex lattice structure in BaFe2(As0.67P0.33)(2) via small-angle neutron scattering
We have observed a magnetic vortex lattice (VL) in BaFe2(As0.67P0.33)(2) (BFAP) single crystals by small-angle neutron scattering. With the field along the c axis, a nearly isotropic hexagonal VL was formed in the field range from 1 to 16 T, and no symmetry changes in the VL were observed. The temperature dependence of the VL signal was measured and confirms the presence of (non-d-wave) nodes in the superconducting gap structure for measurements at 5 T and below. The nodal effects were suppressed at high fields. At low fields, a VL reorientation transition was observed between 1 and 3 T, with the VL orientation changing by 45 degrees. Below 1 T, the VL structure was strongly affected by pinning and the diffraction pattern had a fourfold symmetry. We suggest that this (and possibly also the VL reorientation) is due to pinning to defects aligned with the crystal structure, rather than being intrinsic. The temperature dependence of the scaled intensity suggests that BFAP possesses at least one full gap and one nodal gap with circular symmetry. Judging from the symmetry, the node structure should take the form of an "accidental" circular line node, which is consistent with recent angle-resolved photoemission spectroscopy results.
Published in:
Physical Review B, 90, 12
Year:
2014
Publisher:
College Pk, Amer Physical Soc
ISSN:
1098-0121
Laboratories: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9502876400947571, "perplexity": 2197.339217979824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512014.61/warc/CC-MAIN-20181018194005-20181018215505-00294.warc.gz"} |
http://mathhelpforum.com/algebra/149675-unique-solution.html | 1. ## unique solution
For what values of k equation |x+1|-|x-1|=kx+1 have unique solution?
2. Originally Posted by Garas
For what values of k equation |x+1|-|x-1|=kx+1 have unique solution?
Since absolute value has two separate formulas depending on whether the number inside the absolute value is positive or negative. That means you need to look at x+1< 0 (x< -1), x+1> 0 (x> -1), x-1< 0 (x< 1), and x-1> 0 (x> 1).
That means we need to look at three intervals: x< -1, -1< x< 1, and x> 1.
If x< -1, both x+1< 0 and x-1< 0 so the equation becomes -(x+1)-(-(x-1))= -2= 2k+1 which has no solution for x if x is not -3/2 but an infinite number of solutions if k= 3/2 (since then the equation reduces to -2= -2 which is true for all x).
If -1< x< 1, x+1> 0 but x-1< 0 so the equation becomes (x+1)-(-(x-1))= 2x= 2k+1 which has a single solution for x no matter what k is.
If x> 1, both x+1>0 and x-1> 0 so the equation becomes (x+1)- (x-1)= 2= 2k+1 which has no solution for x is k is not 1/2 but has an infinite number of solutions if k= 1/2 (because then the equation reduces to 2= 2 which is true for all x).
3. What happened with x in -(x+1)-(-(x-1))= -2= 2k+1. Equation is |x+1|-|x-1|=kx+1. I don't understand that part.
4. You said that there is infinite number of solutions if k=3/2 but i find only one solution and that is -2. abs(x+1)-abs(x-1)=(3/2)x+1 - Wolfram|Alpha
5. Originally Posted by Garas
What happened with x in -(x+1)-(-(x-1))= -2= 2k+1. Equation is |x+1|-|x-1|=kx+1. I don't understand that part.
6. Originally Posted by Garas
What happened with x in -(x+1)-(-(x-1))= -2= 2k+1. Equation is |x+1|-|x-1|=kx+1. I don't understand that part.
My fault, I misread the equation! I accidently replaced the "x" with "2".
For x< -1, the left side, as I said, reduces to -2 so the equation becomes -2= kx+ 1. Now subtract 1 from each side to get kx=-3 and divide by x to get x= -3/k, as long as k is not 0. If k= 0, this gives no solution.
For -1< x< 1, the left side reduces to 2x so the equation becomes 2x= kx+ 1. subrtract kx from both sides to get 2x-kx= (2- k)x= 1 and then x= 1/(2- k) as long as k is not 2. If k= 2, this gives no solution.
For x> 1, the left side reduces to 2 so the equation becomes 2=kx+ 1. subtract 1 from both sides to get kx= 1 and divide by x to get x= 1/k, as long as k is not 0. If k= 0, ths gives no solution.
Now, what must k be so that only one of those gives a solution?
7. I solved that on the same way and it wasn't correct. The right solution is: k goes from infinity to 0 and from 1 to infinity. It's not so easy as you might thought on beginning. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9360094666481018, "perplexity": 1623.1179019324431}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542714.38/warc/CC-MAIN-20161202170902-00343-ip-10-31-129-80.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/73375-solved-slopes-tangents-secants.html | # Thread: [SOLVED] Slopes of Tangents and Secants
1. ## [SOLVED] Slopes of Tangents and Secants
Find the slope of the tangent line of the parabola 2x - x^2 at point (2,0):
So I have to use the forumla
m = lim f(a + h) - f(a) / h
h->0
So we have to find f(a) which = 0.
Then put in that to f(a + h) which = 2(0 + h) - (0+h)^2
So then you get for the equation:
(2h - 0) / h
Which would then equal 2, but the answer is -2...can anyone help?
2. Sorry it'd be (2h - h^2) on top wouldn't it? But that still doesn't work
3. Originally Posted by Dickson
Find the slope of the tangent line of the parabola 2x - x^2 at point (2,0):
So I have to use the forumla
m = lim f(a + h) - f(a) / h
h->0
So we have to find f(a) which = 0.
Then put in that to f(a + h) which = 2(0 + h) - (0+h)^2
So then you get for the equation:
(2h - 0) / h
Which would then equal 2, but the answer is -2...can anyone help?
$f(x) = 2x - x^2$
$f(2+h) = 2(2+h) - (2+h)^2$
$f(2) = 0$
$\lim_{x \to 2} \frac{f(2+h) - f(2)}{h}$
$\lim_{x \to 2} \frac{2(2+h) - (2+h)^2 - 0}{h}$
$\lim_{x \to 2} \frac{4+2h -(4 + 4h + h^2)}{h}$
$\lim_{x \to 2} \frac{4+2h -4 - 4h - h^2}{h}$
$\lim_{x \to 2} \frac{-2h - h^2}{h}$
$\lim_{x \to 2} \frac{h(-2 - h)}{h}$
$\lim_{x \to 2} (-2 - h) = -2$
4. I thought you were supposed to use the value you got from f(a) and put that in for the A value in f(a + h)
5. Originally Posted by Dickson
I thought you were supposed to use the value you got from f(a) and put that in for the A value in f(a + h)
no, it's the same "a" ... a = 2.
6. Originally Posted by skeeter
no, it's the same "a" ... a = 2.
Okay thanks a lot | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8706306219100952, "perplexity": 1115.416824707522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719397.0/warc/CC-MAIN-20161020183839-00080-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://git.trinitydesktop.org/cgit/kbibtex/tree/src/entrywidgetwarningsitem.h | summaryrefslogtreecommitdiffstats log msg author committer range
```1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 ``` ``````/*************************************************************************** * Copyright (C) 2004-2006 by Thomas Fischer * * [email protected] * * * * This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * * * This program is distributed in the hope that it will be useful, * * but WITHOUT ANY WARRANTY; without even the implied warranty of * * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * * GNU General Public License for more details. * * * * You should have received a copy of the GNU General Public License * * along with this program; if not, write to the * * Free Software Foundation, Inc., * * 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * ***************************************************************************/ #ifndef KBIBTEXENTRYWIDGETWARNINGSITEM_H #define KBIBTEXENTRYWIDGETWARNINGSITEM_H #include class TQString; namespace KBibTeX { class EntryWidgetWarningsItem : public TQListViewItem { public: enum WarningLevel {wlInformation = 1, wlWarning = 2, wlError = 3}; EntryWidgetWarningsItem( WarningLevel level, const TQString &message, TQWidget *widget, TQListView *parent, const char* name = NULL ); ~EntryWidgetWarningsItem(); TQWidget *widget(); private: TQWidget *m_widget; }; } #endif `````` | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8556249737739563, "perplexity": 4385.366052968898}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259452.84/warc/CC-MAIN-20190526185417-20190526211417-00410.warc.gz"} |
https://www.physicsforums.com/threads/question-about-solution.384965/ | • Start date
• #1
36
0
## Homework Statement
Hello, this is the solution to a problem that i am doing and the only thing i am confused about is why the
C1 in the third step is = to − 1/2 ln 2 √3 as stated in the last line.
=√3/2 sec θ − 1/2 ln |sec θ + tanθ| + C1
that is all i am confused about but i get the rest.
i appreciate the Help Thank you very much.
• #2
Dick
Homework Helper
26,263
619
Subtract the last equation from the next to last to solve for C in terms of C1. Not that the relation between them matters very much. Either one is an arbitrary constant.
• Last Post
Replies
4
Views
382
• Last Post
Replies
3
Views
1K
• Last Post
Replies
5
Views
1K
• Last Post
Replies
1
Views
991
• Last Post
Replies
2
Views
5K
• Last Post
Replies
3
Views
2K
• Last Post
Replies
1
Views
1K
• Last Post
Replies
2
Views
1K
• Last Post
Replies
3
Views
1K
• Last Post
Replies
1
Views
1K | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9684105515480042, "perplexity": 2565.1889922300925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662588661.65/warc/CC-MAIN-20220525151311-20220525181311-00072.warc.gz"} |
http://www.zazzle.com/isaac+blocks+gifts | Showing All Results
43 results
Related Searches: levitan, newton, de benserade
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
Got it! We won't show you this product again!
Undo
No matches for
Showing All Results
43 results | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8193805813789368, "perplexity": 4641.667793256663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00252-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://zbmath.org/?q=an:0801.30039 | ×
# zbMATH — the first resource for mathematics
Generalized meromorphic functions. (English. Russian original) Zbl 0801.30039
Russ. Acad. Sci., Izv., Math. 42, No. 1, 133-147 (1994); translation from Izv. Ross. Akad. Nauk, Ser. Mat. 57, No. 1, 147-166 (1993).
The autor continues his pioneering work on generalized meromorphic functions on the big plane generated by a compact Abelian group $$G$$ with ordered dual group $$\Gamma\subset\mathbb{R}$$. Here he presents the proofs of several of his previously announced results. Let $$G$$ be a compact Abelian group with ordered dual group $$\Gamma\subset \mathbb{R}$$. The big plane over $$G$$ is the infinite cone $$\mathbb{C}_ \Gamma= [0,\infty)\cdot G$$, the unit big disc $$\Omega$$ over $$G$$ is the set of points $${\mathbf w}= rg$$ in the big plane whose “modulus” $$|{\mathbf w}|= r$$ is $$\leq 1$$, and $$\Omega^ 0$$ is its interior. A continuous function $$f$$ in a domain $$D\subset \mathbb{C}_ \Gamma$$ is “analytic” (generalized analytic) in $$D$$ if $$f$$ can be approximated locally by linear combinations $$\sum c(a)\backslash f(a)$$ over $$\mathbb{C}$$ of functions $$f^ a(rg)= r^ a g(a)$$, where $$r\geq 0,$$, $$g\in G$$ and $$a\in \Gamma_ +=\Gamma\cap [0,\infty)$$ in $$\mathbb{C}_ \Gamma$$.
For $$D=\Omega$$ the analyticity in this context was introduced by R. Arens and I. Singer in 1956; for an arbitrary $$D$$ the notion is due to D. Stankov and the reviewer [e.g., Big planes, boundaries and function algebras (1992; Zbl 0755.46020)]. We mention only a few of the many results in this paper.
It is given a description of the measures on $$G$$ that are orthogonal to the disc algebra of continuous up to the boundary $$G$$ generalized analytic functions on $$\Omega$$. The proof of author’s result for unique generalized analytic extension on a domain $$D\subset\Omega^ 0$$ of a bounded generalized analytic function defined on the complement in $$D$$ of a certain thin set in $$\mathbb{C}_ \Gamma$$ is presented as well.
For a class of suitably defined meromorphic functions in $$\mathbb{C}_ \Gamma$$ the following factorization result is proved.
Theorem. Let $$f$$ be a meromorphic function in $$\Omega^ 0$$ and let $$S^*\in \Omega^ 0$$ is either a removable singularity or an isolated pole. Then there is a non-vanishing generalized analytic function $$g$$ on $$\Omega^ 0$$, such that $$f\cdot g$$ can be extended to a generalized analytic function on $$\Omega^ 0$$. Versions of this result are given for the case of meromorphic functions on a “big annulus” region of type $$E^ 0=\{{\mathbf w}\in \mathbb{C}_ \Gamma< r|{\mathbf w}|< 1\}$$ and for meromorphic almost periodic functions on the upper half plane or on a horizontal strip in $$\mathbb{C}$$.
##### MSC:
30G35 Functions of hypercomplex variables and generalized variables 30D30 Meromorphic functions of one complex variable (general theory)
big plane
Full Text: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.872072696685791, "perplexity": 234.29562416627263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00591.warc.gz"} |
https://www.arxiv-vanity.com/papers/0910.5477/ | # Non-commutative Donaldson-Thomas theory and vertex operators
Kentaro Nagao
RIMS, Kyoto University
Kyoto 606-8502, Japan
###### Abstract
In [Nagb], we introduced a variant of non-commutative Donaldson-Thomas theory in a combinatorial way, which is related to the topological vertex by a wall-crossing phenomenon. In this paper, we (1) provide an alternative definition in a geometric way, (2) show that the two definitions agree with each other and (3) compute the invariants using the vertex operator method, following [ORV06] and [BY10]. The stability parameter in the geometric definition determines the order of the vertex operators and hence we can understand the wall-crossing formula in non-commutative Donaldson-Thomas theory as the commutator relation of the vertex operators.
\excludeversion
NB
## Introduction
Let be a affine toric Calabi-Yau -fold, which corresponds to the trapezoid with height 1, with length edge at the top and at the bottom. Let be a crepant resolution of . Note that has affine lines as torus invariant closed subvarieties (). In other words, there are open edges in the toric graph of . Given an -tuple of Young diagrams
(ν––,λ––)=(ν+,ν−,λ(1/2),…,λ(L−1/2))(L:=L++L−)
associated with open edges (see Figure 3), we can define a torus invariant ideal sheaf on 2.1) and a moduli space of quotient sheaves of 4.1). Note that and hence the moduli space is the Hilbert scheme of closed subschemes of . We define Euler characteristic version of open (commutative) Donaldson-Thomas invariants by the Euler characteristics of the connected components of 111The word “open” stems from such terminologies as “open topological string theory”. According to [AKMV05], open topological string partition function is given by summing up the generating functions of these invariants over Young diagrams.. The torus action of induces the torus action on . The torus fixed point set is isolated and parametrized in terms of -tuples of -dimensional Young diagrams. Thus the generating function of the open Donaldson-Thomas invariants can be described in terms of topological vertex ([AKMV05, MNOP06], see §4.2).
Let be a non-commutative crepant resolution of the affine toric Calabi-Yau -fold . We can identify the derived category of coherent sheaves on and the one of -modules by a derived equivalence. A parameter gives a Bridgeland’s stability condition of this derived category, and hence a core of a t-structure on it (Definition 1.12). In fact, we have two specific parameters such that the corresponding t-structures coincide with the ones given by or respectively. Given an element in , we can restrict it to get a sheaf on the smooth locus . Since the singular locus is compact, it makes sense to study those elements in which are isomorphic to outside a compact subset of , or in other words, those elements in which have the same asymptotic behavior as . We will study the moduli spaces of such objects as noncommutative analogues of . NB. We want to study moduli spaces
of pairs of finite dimensional -modules and morphisms in the derived category 222Let denote the cohomology with respect to the t-structure corresponding to . Since the derived equivalence is given by a tilting, we have unless . In particular, giving a morphism is equivalent to giving a morphism . . In general the ideal sheaf is not an element in , however is always in . We will construct the moduli space of quotients of in as a GIT quotient (§5.1). Note that is the moduli space we have studied in [Naga]. We define Euler characteristic version of open non-commutative Donaldson-Thomas invariants by the Euler characteristics of the connected components of 333The reader may also refer to [NY] in which we study the invariants in the physics context..
NB. In order to compute the invariants, we will give another description of the moduli space. Given , we can construct a core of a t-structure of the derived category and an element in . We define
M(ν––,λ––,ζ):={Pζν––,λ––↠F∣F∈Aζfin}.
Then we have a natural isomorphism between and . The torus action on induces a torus action on the moduli . We will compute the Euler characteristic by counting the number of torus fixed points. For a generic , the core of the t-structure is isomorphic to the category of -modules, where is associated with a quiver with a potential. Hence, we can describe the torus fixed point set on in terms of a crystal melting model ([ORV06, OY09]), which we have studied in [Nagb]. In fact, a particle in the grand state crystal gives a weight vector in with respect to the torus action, and a crystal obtained by removing a finite number of particles from the grand state crystal gives a torus fixed point in 3.1). The invariants in this paper agree with the ones defined in [Nagb].
Finally, we provide explicit formulas for the generating functions of the Euler characteristic version of the open commutative and non-commutative Donaldson-Thomas invariants using vertex operator method, following [ORV06], [BY10] and [BCY]444During preparing this paper, the author was informed that Piotr Sulkowski and Benjamin Young provide similar computations independently ([Sul, You]). . The order of the vertex operators is determined by the chamber in which the parameter is. Hence we can understand the wall-crossing formula as the commutator relation of the vertex operators.
In Szendroi’s original non-commutative Donaldson-Thomas theory ([Sze08]) the moduli spaces admit symmetric obstruction theory and the invariants are defined as the virtual counting of the moduli spaces in the sense of Behrend-Fantechi ([BF97])555Virtual counting coincides with the weighted Euler characteristic weighted by the Behrend function.. In the case when , we show that the moduli space admits a symmetric obstruction theory (§5.2). Using the result in [BF08], we can verify that the virtual counting coincide with the (non-weighted) Euler characteristics up to signs as in [Sze08, MR10, NN, Naga]6.1). We can also compute the generating function of the weighted (or non-weighted) Euler characteristics using Joyce-Song’s theory ([JS], or [Joy08]).
The plan of this paper is as follows: Section 1 contains basic observations on the core of the t-structure of the derived category. In Section 2, the definition of Euler characteristic version of open non-commutative Donaldson-Thomas invariants is provided. Then, we compute the generating function using vertex operators in Section 3. Finally, we study open Donaldson-Thomas invariants and topological vertex as “limits” of open non-commutative Donaldson-Thomas invariants in Section 4. Section 2, Section 3 and Section 4 are the main parts of this paper. In Section 5 we construct the moduli spaces used in Section 2 and 4 to define invariants. Moreover, we construct symmetric obstruction theory on the moduli space in the case of in Section 5.2. The relation between weighted Euler characteristic and Euler characteristic is discussed in Section 6.1. Throughout this paper we work on the half of the whole space of stability parameters. We will have a discussion about the other half of the stability space in Section 6.2. The computation in Section 3 depends on an explicit combinatorial description of the derived equivalence. We leave it until Section 6.4 since it is very technical.
### Acknowledgement
The author is supported by JSPS Fellowships for Young Scientists (No. 19-2672). He thanks to Jim Bryan for letting him know the result of [BCY] and recommending him to apply the vertex operator method in the setting of this paper. He also thanks to Osamu Iyama, Hiroaki Kanno, Hiraku Nakajima, Piotr Sulkowski, Yukinobu Toda, Masahito Yamazaki and Benjamin Young for useful comments.
This paper was written while the author is visiting the University of Oxford. He is grateful to Dominic Joyce for the invitation and to the Mathematical Institute for hospitality.
## Notations
Let denote the set of half integers and be a positive integer. We set and . The two natural projections and are denoted by the same symbol . We sometimes identify and with and respectively. The symbols , , and are used for elements in , , and respectively.
Throughout this paper, the following data play crucial roles:
• a map , which determines the crepant resolution and the non-commutative crepant resolution ,
• a pair of Young diagram and an -tuple of Young diagrams , which determines the “asymptotic behaviors” of (complexes of) sheaves we will count,
• a stability parameter , which determines the t-structure where we will work on, and
• a bijection , which determines the chamber where the stability parameter is.
We sometimes identify a Young diagram with a map such that for and for 666Such a map is called a Maya diagram. See, for example, [Nag09, §2] for the correspondence between a Young diagram and a Maya diagram.. We identify an -tuple of Young diagrams with a map by
λ––(h)=λ(π(h))(h−π(h)L+12).
We define the following categories:
: the Abelian category of coherent sheaves on ,
: the full subcategory of consisting of coherent sheaves with compact supports,
: the bounded derived category ,
: the full subcategory of consisting of complexes with compactly supported cohomologies,
: the Abelian category of finitely generated left -modules,
: the full subcategory of consisting of finite dimensional modules,
: the bounded derived category of ,
: the full subcategory of consisting of complexes with finite dimensional cohomologies.
## 1 T-structure and chamber structure
### 1.1 Non-commutative and commutative crepant resolutions
Let be a map from to . In [Naga], following [HV07], we introduced a quiver with a potential , which is a non-commutative crepant resolution of ([Naga]). First, we set
H(σ) S(σ) :={n∈Z∣σ(n−1/2)≠σ(n+1/2)},IS(σ):=π(S(σ)).
The symbol and represent “hexagon” and “square” respectively. We use such notations since an element in each set corresponds to a hexagon or square in the dimer model (see [Naga, §1.2]). The vertices of are parametrized by and the arrows are given by
Here (resp. ) is an edge from to (resp. from to ) and is an edge from to itself. See [Naga, §1.2] for the definition of the potential . NB.
###### Example 1.1.
In Figure 1, we show the quiver for as in Example LABEL:ex2.
###### Remark 1.2.
• The center of is isomorphic to . In [Naga, Theorem 1.14 and 1.19], we showed that is a non-commutative crepant resolution of .
• The affine -fold is toric. In fact,
T=Spec~R:=SpecC[x±,y±,z±,w±]/(xy=zL+wL−)
is a -dimensional torus. The torus acts on by
Let (resp. ) be the projective (resp. simple) -module corresponding to the vertex . Let be the numerical Grothendieck group of , which we identify with by the natural basis . We put .
We identity the dual space with by the dual basis of . Take .
###### Theorem 1.3 ([IUb]).
The moduli space of -stable ( -semistable) -modules with dimension vectors gives a crepant resolution of .
Let denote this crepant resolution.
###### Theorem 1.4 ([Naga, §1]777It is known by [Moz, Boc] that a quiver with a potential given from a brane tiling satisfying the “consistency condition” ([Mr10, Dav, Bro, IUa]) is a non-commutative crepant resolution over its center ([VdB]). The claim of this theorem is a little bit stronger, i.e. Aσ is given by the construction in [VdB04] and hence we have −1Per(Y/X)≃modAσ. We will use this equivalence of the Abelian categories in Section 4., see §4.1).
We have a derived equivalence between and , which restricts to an equivalence between (resp. ) and (resp. ).
### 1.2 Stability condition and tilting
For such that , we define the group homomorphism
Zζ∘:Knum(modfinAσ)→C
by
Zζ∘(v):=(−ζ∘+η√−1)⋅v
where . Then gives a stability condition on in the sense of Bridgeland ([Bri07]).
For a pair of real numbers , let be the full subcategory of consisting of elements whose Harder-Narasimhan factors have phases less or equal to and larger than . The following claims are standard (see [Bri07]):
###### Lemma 1.5.
(1) where represents the shift in the derived category.
(2) is a core of a t-structure for any .
(3) .
(4) For , the pair of subcategories
(Dζ∘fin[t,s),Dζ∘fin[s,t−1))
gives a torsion pair ([Bri05, Definition 2.4]) for the Abelian category .
(5) For , is obtained from by tilting with respect to the torsion pair above ([HRS96], [Bri05, Proposition 2.5]), i.e.
Dζ∘fin[s,s−1)= {E∈Dbfin(modAσ)∣∣H0Dζ∘fin[t,t−1)(E)∈Dζ∘fin[s,t−1), H1Dζ∘fin[t,t−1)(E)∈Dζ∘fin[t,s)}, Dζ∘fin[t,t−1)= {E∈Dbfin(modAσ)∣∣H0Dζ∘fin[s,s−1)(E)∈Dζ∘fin[s,t−1), H−1Dζ∘fin[s,s−1)(E)∈Dζ∘fin[t−1,s−1)},
where represents the cohomology with respect to the t-structure corresponding to .
###### Lemma 1.6.
The algebra is (left-)Noetherian.
###### Proof..
In [Naga], it is shown that is isomorphic to for a vector bundle on , where is the contraction . Since is proper, is finitely generated as an -module. Hence is Noetherian. ∎
###### Proposition 1.7.
For we put
Dζ∘fin[1,t)⊥:={E∈modAσ∣HomAσ(F,E)=0,∀F∈Dζ∘fin[1,t)}.
Then the pair of full subcategory gives a torsion pair in .
###### Proof..
We will prove that every object fits into a short exact sequence
0→E→F→G→0
for some pair of objects and .
By Lemma 1.6, has the maximal finite dimensional submodule . Let denote the cokernel of the inclusion . Note that for any finite dimensional -module .
Let
0→F(3)→F(1)→F(4)→0
be the exact sequence such that and .Note that for any we have .
Let denote the cokernel of the inclusion . Then we have the following exact sequence:
0→F(4)→F(5)→F(2)→0.
This implies for any . Put and , then the claim follows. ∎
NB.
###### Lemma 1.8.
The natural inclusion has a right adjoint.
###### Proof..
Since is Noetherian, an element has the maximal finite dimensional submodule . The functor gives a right adjoint of the inclusion. ∎
A right admissible subcategory of is a full subcategory such that the inclusion functor has a right adjoint ([bridgeland-flop, Definition 2.1]).
###### Corollary 1.9.
For , is a right admissible full subcategory of .
###### Definition 1.10.
For let denote the core of the t-structure given from by tilting with respect to the torsion pair in Proposition 1.7, i.e.
Dζ∘[t,t−1)={E∈Db(modAσ)∣∣H0modAσ(E)∈Dζ∘[t,0), H1modAσ(E)∈Dζ∘[1,t)}.
NB. The following lemma is also easy to verify:
###### Lemma 1.11.
Let be a distinguished triangle in and for . Then, the triangle gives an exact sequence in if and only if it gives an exact sequence in . In particular, in such a case we have .
I’m not sure that the following is true. Moreover, have a torsion pair
for the Abelian category such that
Dζ∘[0,t)∩modfinAσ=Dζ∘fin[0,t),Dζ∘[t,1)∩modfinAσ=Dζ∘fin[t,1).
Let be the Abelian category given by tilting.
We have the following bijection:
{(ζ∘,T)∣ζ∘⋅δ=0, T∈R}∼⟶RI≃(Knum(modfinAσ)⊗R)∗(ζ∘,T)⟼ζ∘−Tη.
The inverse map is given by
T:=−ζ⋅η/L,ζ∘:=ζ+Tη
for . For a fixed , take such that . Note that for an element we have
ϕZζ∘(V)
where .
###### Definition 1.12.
Aζfin:=Dζ∘fin[t,t−1),Aζ:=Dζ∘[t,t−1).
NB.
###### Remark 1.13.
We have the natural action of on the space of Bridgeland’s stability conditions given by rotation of the complex line which is the target of the central charge. We can embed into the space of Bridgeland’s stability conditions by
ζ↦Zζ:=rott(Zζ∘).
Note that agrees with . This is the reason why we call a stability parameter, although we will use the former description since it is more convenient in our argument.
### 1.3 Chamber structure
A stability parameter is said to be generic if there is no -semistable objects with phase . Then we get a chamber structure in .
###### Proposition 1.14.
The chamber structure coincides with the affine root chamber structure of type .
###### Proof..
A -semistable object has the phase if and only if and so the genericity in this paper agrees with the one in [Naga]. Then the claim follows from [Naga, Proposition 2.10, Corollary 2.12]. ∎
Here we give a brief review for the affine root system of type . We call the root lattice and a simple root. For , we define by
α[h,h′]:=⎧⎪⎨⎪⎩0h=h′,απ(h+1/2)+⋯+απ(h′−1/2)hh′
and . We set
Λ:={α[h,h′]∣h≠h′},Λ+:={α[h,h′]∣hh′}
and
Λre:={α[h,h′]∣h≢h′(modL)},Λim:={mδ∣m≠0}.
An element in (resp. , , , ) is called a root (resp. positive root, negative root, real root, imaginary root). Note that . We put and define , and in the same way.
For a root , let denote the hyperplane in given by
Wα:={ζ∈(Knum(modfinAσ)⊗R)∗∣ζ⋅α=0}.
The walls in the affine root chamber structure of type is given by
Wδ∪⋃α∈Λre,+Wα.
Throughout this paper, we work on the area below the wall , i.e. on the area .
### 1.4 Parametrization of chambers
Let denote the set of bijections such that
• for any , and
• .
We have a natural bijection between and the set of chambers in the area . An element in the chamber corresponding to satisfies the following condition:
α[h,h′]⋅ζθ<0⟺θ(h)<θ(h′)
for any . For and , we define by
α(θ,i):=α[θ(n−1/2),θ(n+1/2)](π(n)=i).
Then the chamber is adjacent to the walls and we have for .
Let be the bijection given by
θi(h)=⎧⎨⎩h+1π(h+1/2)=i,h−1π(h−1/2)=i,hotherwise.
Then we have and the chambers and are separated by the wall .
### 1.5 Mutation
Assume that and is such that is not on an intersection of two walls for any . Let () be the set of all the parameters such that is not generic. According to the argument at the end of the previous subsection, we have the sequence of elements in such that for any is on the wall for
αr:=α(θi1∘⋯∘θir−1,ir).
Take the minimal positive integer such that and put . Using this notation we have the following equivalencies of the Abelian categories:
###### Proposition 1.15.
Aζfin≃modfinAζσ,Aζ≃modAζσ.
###### Proof..
We have the derived equivalence between and obtained by the tilting generator as in [Naga, Proposition 3.1]. It is easy to see that, under this equivalence, the module category of is obtained from the one of by tilting with respect to the torsion pair obtained by the simple module.
Combine with the descriptions in §1.2, we can see the claim by induction with respect to . ∎
Let be the simple -module associated to the vertex . For , we have
[Sζi]=α(θ,i)∈Knum(modfinAσ) (1.2)
under the induced isomorphism
Knum(modfinAσ)≃Knum(modfinAζσ),
NB. We put
Aζ:=modAζσ.
This is obtained from by tilting with respect to the right (left?) admissible subcategory
Dζ∘fin[1,t)⊂modfinAσ⊂modAσ.
NB. We say is generic if does not on an intersection of two chambers for any . Let be the set of all the parameter such that is not generic. Let be the real root such that is on the wall .
###### Proposition 1.16.
There exist a sequence of elements in such that
αr=ϕi1⋯ϕir−1αir.
###### Proof..
We can take inductively. Assume that () is in a chamber surrounded by walls corresponding to
ϕi1⋯ϕir−1αi(i∈I)
and is on the wall corresponding to
ϕi1⋯ϕir−1αir.
Then we can verify that () is in a chamber surrounded by walls corresponding to
ϕi1⋯ϕirαi(i∈I).
## 2 Definition of the invariants
### 2.1 Ideal sheaf associated to Young diagrams
In this paper, we regard a Young diagram as a subset of . For a Young diagram , let (resp. or ) be the subset of consisting of the elements such that (resp. or ). Given a triple of Young diagrams, let be the following subset of :
Λmin:=Λx(λx)∪Λy(λy)∪Λz(λz)⊂(Z>0)3.
A subset of is said to be a -dimensional Young diagram of type if the following conditions are satisfied:
• if , then ,
• , and
• .
For a -dimensional Young diagram | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9842957854270935, "perplexity": 569.0057843155957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369553.75/warc/CC-MAIN-20210304235759-20210305025759-00449.warc.gz"} |
https://tlovering.wordpress.com/2011/11/01/zeta-functions-of-varieties-over-a-finite-field/ | In this short post we look at the general definition of a zeta function for a scheme of finite type over ${Spec {\mathbb Z}}$ and record its relationship with the zeta function of an algebraic variety ${X}$ over a finite field ${\mathbb{F}_q}$, defined as a kind of generating function for the number of points of ${X}$ over all finite extensions of ${\mathbb{F}_q}$.
Let ${X}$ be a scheme of finite type over ${Spec {\mathbb Z}}$. In fact, for concreteness, we start by looking at a single affine chart: imagining ${X=Spec A}$ as a geometrical space whose ring ${A}$ of functions is a quotient of some ring ${{\mathbb Z}[t_1,...,t_n]}$ of polynomials (in finitely many variables).
Then for every maximal ideal ${m}$ of ${A}$, the quotient ${A/m}$ is in fact a finite field, so we can define the norm ${N_m = |A/m|}$ to get a positive integer ${\geq 2}$ associated with ${m}$. We can use this to define the zeta function associated with ${X}$ to be the following formal Dirichlet series, by analogy with the classical Euler product formula.
$\displaystyle \zeta_X(s) = \prod_{m \text{ a maximal ideal of }A} \frac{1}{1-N_m^{-s}}.$
Notice that taking ${X=Spec {\mathbb Z}}$ we recover the classical Riemann zeta function, and taking the ring of integers of a number field recovers the zeta function of a number field. What happens if we take ${X}$ to be a variety over a finite field ${\mathbb{F}_q}$?
Well, ${log \zeta_X(s) = - \sum_{x} log(1-N_x^{-s})}$ where the sum is over all closed points ${x \in X}$.
What is a closed point of ${X}$? What is an ${\mathbb{F}_{q^r}}$-rational point of ${X}$? This is one of the confusing subtleties of scheme theory: these concepts are related but not exactly the same. Since the more classical definition of a zeta function is in terms of rational points, we will have to make this leap.
In fact, an ${\mathbb{F}_{q^r}}$-rational point (which we will also call a geometric point’, perhaps slightly nonstandardly) is simply a map ${Spec(\mathbb{F}_{q^r}) \rightarrow X}$. This makes sense for two reasons. Firstly, a point in ordinary geometry can be described as just a map from a one point space (or as a zero-simplex, if you like), so our definition seems plausible. Secondly, and more convincingly, in the case where ${X}$ is affine, for example the elliptic curve over ${{\mathbb Z}}$, ${y^2=x^3-x}$, such a map corresponds to a map of rings
$\displaystyle {\mathbb Z}[x,y]/(y^2-x^3+x) \rightarrow \mathbb{F}_{q^r}.$
In other words, it corresponds to a choice of values for ${x}$ and ${y}$ in ${\mathbb{F}_{q^r}}$ satisfying the governing equation – exactly what we should mean by specifying a ${\mathbb{F}_{q^r}}$-rational point.
On the other hand, a closed point is a maximal ideal of the ring of functions of an affine neighbourhood. Whenever we have a map ${Spec\mathbb{F}_{q^r} \rightarrow X}$, it corresponds to a map of rings whose image is a field. In other words, a map whose kernel is a maximal ideal. Therefore, every ${\mathbb{F}_{q^r}}$-rational point is associated with a unique closed point ${x}$. In fact, we get an induced map ${k(x) \rightarrow \mathbb{F}_{q^r}}$, where ${k(x)}$ is the residue field at ${x}$. Conversely, any such map can be extended and determines a geometric point.
So finally we have arrived at the relationship between closed points and geometric points. Namely, each closed point ${x}$ determines the a set of ${\mathbb{F}_{q^r}}$-rational points ${X_x}$ which can be canonically identified with ${Hom_{\mathbb{F}_q}(k(x), \mathbb{F}_{q^r})}$. But by basic Galois theory, this says that, if ${|k(x)|=q^d}$, ${X_x}$ is nonempty iff ${d|r}$, in which case ${|X_x|=d}$.
So let us return to our zeta function. Writing ${N_x=q^d}$, we can expand
$\displaystyle -log(1-N_x^{-s}) = \sum_{m \geq 1} \frac{q^{-sdm}}{m}.$
Now, set ${T=q^{-s}}$ and bearing in mind where we are trying to get (to an expression involving the numbers ${|X_x|}$), we can define ${N(m,x)}$ to be ${d}$ if ${d|m}$ and ${0}$ otherwise, and rewrite this sum as
$\displaystyle \sum_{m \geq 1} N(m,x)\frac{T^m}{m} = \sum_{m \geq 1} |Hom_{\mathbb{F}_q}(k(x), \mathbb{F}_{q^r})|\frac{T^m}{m}.$
Now, take the sum over all closed points ${x}$, and conclude that
$\displaystyle log \zeta_X(s) = \sum_{m \geq 1} (\sum_x |Hom_{\mathbb{F}_q}(k(x), \mathbb{F}_{q^r})|)\frac{T^m}{m} = \sum_{m \geq 1} |X(\mathbb{F}_{q^m})|\frac{T^m}{m}.$
This gives the familiar expression for the zeta function as an object whose logarithmic derivative is a generating function for the number of points of ${X}$ over all the finite field extensions of ${\mathbb{F}_q}$, as appears in the statement of the Weil Conjectures and is well studied. One can check further that given the analysis above, the Riemann Hypothesis’ for varieties of dimension ${d}$ over a finite field asserts that all the zeroes are on the lines ${Re(s) = 1/2,3/2,....,(2d-1)/2}$ and the poles are on the lines ${Re(s) = 0,1,2,...,d}$. Noting that ${{\mathbb Z}}$ is an object of dimension 1, the analogy with the classical Riemann hypothesis could not be clearer. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 67, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655256271362305, "perplexity": 68.48124430704227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886437.0/warc/CC-MAIN-20180116144951-20180116164951-00617.warc.gz"} |
https://anhngq.wordpress.com/2009/08/01/solutions-of-equations-in-one-variable/ | # Ngô Quốc Anh
## August 1, 2009
### Solutions of Equations in One Variable
A root-finding algorithm is a numerical method, or algorithm, for finding a value $x$ such that $f(x) = 0$, for a given function $f$. Such an $x$ is called a root of the function $f$.
Suppose $f$ is a continuous function defined on the interval $[a, b]$, with $f(a)$ and $f(b)$ of opposite sign. By the Intermediate Value Theorem, there exists a number $p$ in $(a, b)$ with $f(p) = 0$. Although the procedure will work when there is more than one root in the interval $(a,b)$, we assume for simplicity that the root in this interval is unique. The method calls for a repeated halving of subintervals of $[a, b]$ and, at each step, locating the half containing $p$.
To begin, set $a_1=a$ and $b_1=b$, and let $p_1$ be the midpoint of $[a,b]$; that is,
$\displaystyle p_1 = a_1 + \frac{b_1-a_1}{2}=\frac{a_1+b_1}{2}$.
If $f(p_1)=0$, then $p=p_1$, and we are done. If $f(p_1)\ne 0$, then $f(p_1)$ has the same sign as either $f(a_1)$ or $f(b_1)$. When $f(p_1)$ and $f(a_1)$ have the same sign, $p \in (p_1, b_1)$, and we set $a_2=p_1$ and $b_2 = b_1$. When $f(p_1)$ and $f(a_1)$ have opposite signs, $p \in (a_1, p_1)$, and we set $a_2 = a_1$ and $b_2 = p_1$. We then reapply the process to the interval $[a_2, b_2]$.
A number p is a fixed point for a given function g if $g(p) = p$. In this section we consider the problem of finding solutions to fixed-point problems and the connection between the fixed-point problems and the root-finding problems we wish to solve. Root-finding problems and fixed-point problems are equivalent classes in the following sense:
Given a root-finding problem $f(p) = 0$, we can define functions $g$ with a fixed point at $p$ in a number of ways, for example, as $g(x) = x-f(x)$ or as $g(x) = x + 3f(x)$. Conversely, if the function $g$ has a fixed point at $p$, then the function defined by $f(x) = x-g(x)$ has a zero at $p$.
Although the problems we wish to solve are in the root-finding form, the fixed-point form is easier to analyze, and certain fixed-point choices lead to very powerful root-finding techniques.
• Newton’s method
Newton’s (or the Newton-Raphson) method is one of the most powerful and well-known numerical methods for solving aroot-finding problem. With an initial approximation $p_0$, the Newton’s method generates the sequence $\{p_n\}_{n=1}^\infty$ by
$\displaystyle {p_{n + 1}} = {p_n} - \frac{{f\left( {{p_n}} \right)}} {{f'\left( {{p_n}} \right)}}, \quad n \geqslant 0$.
To circumvent the problem of the derivative evaluation in Newton’s method, we introduce a slight variation. By definition,
$\displaystyle f'(x) = \mathop {\lim }\limits_{x \to {p_n}} \frac{{f(x) - f({p_{n - 1}}){\text{ }}}}{{x - {p_{n - 1}}}}$.
Letting $x=p_{n-1}$, we have
$\displaystyle f'\left( {{p_n}} \right) \approx \frac{{f\left( {{p_{n - 1}}} \right) - f\left( {{p_n}} \right)}} {{{p_{n - 1}} - {p_n}}}$
which yields
$\displaystyle {p_{n + 1}} = {p_n} - {\text{ }}\frac{{f({p_n})\left( {{p_{n - 1}} - {p_n}} \right)}}{{f({p_{n - 1}}) - f({p_{n - 2}})}}$.
• The method of False Position
The method of False Position (also called Regula Falsi) generates approximations in the same manner as the Secant method, but it includes a test to ensure that the root is bracketed between successive iterations. Although it is not a method we generally recommend, it illustrates how bracketing can be incorporated.
First choose initial approximations $p_0$ and $p_1$ with $f(p_0) f(p_1) < 0$. The approximation $p_2$ is chosen in the same manner as in the Secant method, as the $x$-intercept of the line joining $(p_0, f(p_0))$ and $(p_1, f(p_1))$. To decide which secant line to use to compute $p_3$, we check $f(p_2) f(p_1)$. If this value is negative, then $p_1$ and $p_2$ bracket a root, and we choose $p_3$ as the $x$-intercept of the line joining $(p_1, f(p_1))$ and $(p_2, f(p_2))$. If not, we choose $p_3$ as the $x$-intercept of the line joining $(p_0, f(p_0))$ and $(p_2, f(p_2))$, and then interchange the indices on $p_0$ and $p_1$.
In a similar manner, once $p_3$ is found, the sign of $f(p_3)f(p_2)$ determines whether we use $p_2$ and $p_3$ or $p_3$ and $p_1$ to compute $p_4$. In the latter case a relabeling of $p_2$ and $p_1$ is performed.
Source: Richard L. Burden and J. Douglas Faires, Numerical Analysis, 8th edition, Thomson/Brooks/Cole, 2005. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 84, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9701528549194336, "perplexity": 111.11944087663261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320395.62/warc/CC-MAIN-20170625032210-20170625052210-00450.warc.gz"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.