http://mathoverflow.net/questions/116786/grauerts-theorem-for-infinite-dimensional-frechet-lie-groups?answertab=active
Grauert's theorem for infinite dimensional Fréchet Lie groups Stein manifolds are complex analytic submanifolds of some $\mathbb{C}^N$. (A version of) Grauert's theorem states that on a Stein manifold $X$ every continuous map $g\colon X\to G$ to a complex Lie group $G$ is homotopic to a holomorphic map, see Gromov "Oka's principle for holomorphic sections of elliptic bundles". This theorem was generalized to the case where $G$ is an infinite dimensional complex Banach Lie group, see Bungart "On analytic fiber bundles. I. Holomorphic fiber bundles with infinite dimensional fibers". I would like to know if the theorem still holds when $G$ is replaced by an infinite dimensional complex Fréchet Lie group, or at least in the case of $G=C^\infty(M,\mathrm{SL}(2,\mathbb{C}))$ for a compact manifold $M$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.986371636390686, "perplexity": 232.97821239015306}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929899.62/warc/CC-MAIN-20150521113209-00286-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.isoul.org/author/rg/page/2/
iSoul In the beginning is reality. # Home is the horizon As there is an inverse or harmonic algebra, so there is an inverse geometry, an inverse space. Home, the origin, is the horizon, the ends of the earth, and beyond that, the celestial equator, the heavens. We may attempt to journey to the centre of the earth with Jules Verne, but we’ll never make it because it is infinitely far away. We cannot plumb the ultimate depths within, the deep well of the heart. At the centre of it all is a bottomless pit, the hell of eternal darkness. The geometric inverse is with respect to a circle or sphere: P′ is the inverse of P with respect to the circle (figure after Krishnavedala). The inverse of the centre is the point at infinity. The order of events in this geometry is their distance from the horizon, not the centre. The return to home is the end of events, the final event. The later the event, the better, since it is closer to the end, to home. The destination is where we’ve come from and where we return. It is a round trip, a circuit, a cycle of life and change. What we call the beginning is often the end And to make an end is to make a beginning. The end is where we start from. T. S. Eliot, Little Gidding # Lorentz factor from light clocks Space and time are inverse perspectives on motion. Space is three dimensions of length. Time is three dimensions of duration. Space is measured by a rigid rod at rest, whereas time is measured by a clock that is always in motion relative to itself. This is illustrated by deriving the Lorentz factor for time dilation and length contraction from light clocks. The first derivation is in space with a time parameter and the second is in time with a space parameter (placepoint). The first figure above shows frame S with a light clock in space as a beam of light reflected back and forth between two mirrored surfaces. Call the height between the surfaces that the light beam travels distance h. Let one time cycle Δt = 2h/c or h = cΔt/2, with speed of light c, which is the maximum speed. The second figure shows frame S′ with the same light clock as observed by someone moving with velocity v relative to S. Call the length of each half-cycle d, and call the length of the base of one cycle in space b. # Proper and improper rates The independent quantity in a proper rate is the denominator. The independent quantity in an improper rate is the numerator. If a rate is multiplied by a quantity with the units of the independent quantity and the result has the units of the dependent quantity, it is proper. Otherwise, it is improper. A proper rate becomes improper if the proper rate is inverted. An improper rate becomes proper if the improper rate is inverted. If two or more improper rates are added, each must first be inverted. The result of adding proper rates must be inverted again to return to the original improper rate.
This is harmonic addition: $\frac{b}{a_{1}}+\frac{b}{a_{2}} \Rightarrow \left(\frac{a_{1}}{b}+\frac{a_{2}}{b}\right)^{-1}$ If the addition of improper rates is divided by the number of addends so that it is the average or arithmetic mean of the inverted rates, then the result inverted is the harmonic mean: $\frac{1}{2}\left(\frac{b}{a_{1}}+\frac{b}{a_{2}}\right) \Rightarrow \left(\frac{1}{2}\left(\frac{a_{1}}{b}+\frac{a_{2}}{b}\right)\right)^{-1}$ Time speed is the speed of a body measured by the distance traversed in a known time, which is a proper rate because the independent quantity, time, is the denominator. Space speed is the speed of a body measured by the time it takes to traverse a known distance, which is an improper rate because the independent quantity, distance, is the numerator. Space speeds are averaged by the harmonic mean and called the space mean speed. The time mean speed is the arithmetic mean of time speeds. Velocity normalized by the speed of light is proper because the invariant speed of light is independent. The speed of light divided by a velocity is improper and must be added harmonically. Lenticity normalized by a hypothesized maximum pace is proper, but if the lenticity is divided by the pace of light, it is improper. # Space with time and their dual For the first post in this series see here. Space with time (3+1) Space is that which is measured by length; time is that which is measured by duration. There are three dimensions of length and one dimension (or parameter) of duration. Direction in space is measured by an angle, which is part of a circle. Spatial rates are dependent on another variable, usually interval of time (distime). Time is that which is measured by duration. Events are ordered by time. Time as an independent variable decreases from the past to the present and increases from the present to the future. Temporal rates are dependent on another variable, usually the interval of space (stance). Dual: time with space (1+3) The dual of space with time is time with space. The dual of space is time and the dual of time is space. Space corresponds to time and time corresponds to space. Time is that which is measured by duration; space is that which is measured by length. There are three dimensions of duration and one dimension (or parameter) of length. Direction in time is measured by a turn, which is part of a rotation. Temporal rates are dependent on another variable, usually interval of space (stance). Space is that which is measured by length. Events are ordered by length (stance). Space as an independent variable decreases from a past there to here and increases from here to a future there. Spatial rates are dependent on another variable, usually the interval of time (distime). # Number and algebra and their dual For the first post in this series, see here. (1) Set theory and logic, (2) number and algebra, and (3) space and time are three foundational topics that have dual approaches. Let us begin with the standard approaches to these three topics, and then define duals to each of them. In some ways, the original and the dual may be used together. (2) Number and algebra The concept of counting and number is as universal as language, though the full definition of number did not occur until the 19th century.
Algebra came to the West from India and Arabia in the Middle Ages but its formal definition did not occur until the 19th century. Abstract algebra also began in the 19th century. The basic rules of algebra are as follows: addition and multiplication are commutative and associative; multiplication distributes over addition; addition and multiplication have identities and inverses with one exception: there is no multiplicative inverse for zero. An idea of infinity comes from taking the limit of a number as its value approaches zero: ∞ ∼ 1/x as x → 0. Infinity can be partially incorporated via limits. Dual: harmonic numbers An additive dual can be defined by negating every number. A more interesting dual comes from taking the multiplicative dual of every number. This latter case can be called harmonic numbers and harmonic algebra because of its relation to the harmonic mean. The harmonic isomorphism relates every number x to its harmonic dual by H(x) := 1/x. The dual of zero is ∞. For harmonic algebra: see here. Harmonic algebra is the multiplicative inverse of ordinary algebra. There is a sense in which harmonic algebra counts down rather than up. Zero in harmonic numbers is like infinity in ordinary numbers. Larger harmonic numbers correspond to smaller ordinary numbers. Smaller harmonic numbers correspond to larger ordinary numbers. # Set theory and logic and their dual (1) Set theory and logic, (2) number and algebra, and (3) space and time are three foundational topics that each have duals. Let us begin with the standard approaches to these three topics, and then define duals to each of them. To some extent, the original and the dual may be used together. (1) Set theory and logic A set is defined by its elements or members. Its properties may also be known or specified, but what is essential to a set is its members, not its properties. The notation for “x is an element of set S” is “x ∈ S”. A subset is a set whose members are all within another set: “s is a subset of S” is “s ⊆ S”. If subset s does not (or cannot) equal S, then it is a proper subset: “s ⊂ S”. The null set (∅) is a unique set defined as having no members. That is paradoxical but not contradictory. A universal set (Ω) is defined as having all members within a particular universe. An unrestricted universal set is not defined because it would lead to contradictions. The complement of a set (c) is the set of all elements within a particular universe that are not in the set. A union (∪) of sets is the set containing all members of the referenced sets. An intersection (∩) of sets is defined as the set whose members are contained in every referenced set. Set theory has a well-known correspondence with logic: negation (¬) corresponds to complement, disjunction (OR, ∨) corresponds to union, and conjunction (AND, ∧) corresponds to intersection. Material implication (→) corresponds to “is a subset of”. Contradiction corresponds to the null set, and tautology corresponds to the universal set. # Harmonic conversion of space and time As noted here, there are two kinds of mean rates: the time mean and the space mean. If the denominators have a common time interval, the time mean is the arithmetic mean and the space mean is the harmonic mean. If the denominators have a common space interval (stance), the space mean is the arithmetic mean and the time mean is the harmonic mean. For example, light reflected back from a mirror at known distance forms two successive trips whose mean rate is the space mean pace.
Several measurements with the same apparatus have a time mean pace. The mean speed is the inverse of the mean pace. The general principle is that quantities with independent time (such as velocity) and a common time interval use ordinary algebra but such quantities with a common space interval use harmonic algebra. Alternately, quantities with independent space (stance) such as lenticity and a common space interval use ordinary algebra but such quantities with a common time interval use harmonic algebra. In other words, quantities over the same interval with independent variables use ordinary algebra but quantities with different independent variables use harmonic algebra to convert between them. For example, addition of velocities with a common time interval uses ordinary vector addition (e.g., u + v) but addition of velocities with a common space interval uses harmonic vector addition (e.g., ((1/u) + (1/v))⁻¹ ≡ ((u + v)/(u·v))⁻¹ with u, v, u·v, u + v ≠ 0). The relativity parameter γ is based on a (3+1) spatial frame. The parameter γ in a temporal frame with a common time interval (k ≡ 1/c and ℓ ≡ 1/v) is: γ² = (1 − v·v/c²)⁻¹ ⇒ (1 − ℓ·ℓ/k²)⁻¹. The parameter γ in a temporal frame with a common space interval (stance) is: γ² = (1 − v·v/c²)⁻¹ ⇒ H(1 − ℓ·ℓ/k²)⁻¹ = ((1 − k²/ℓ·ℓ)⁻¹)⁻¹ = (1 − k²/ℓ·ℓ) ≡ (1 − v·v/c²) = 1/γ². # Interchanging space and time The space-time exchange invariance, as stated by J. H. Field (see here), has an implicit second part. In addition to (1) the exchange (or interchange) of space and time coordinates, there is (2): the exchange of linear and harmonic algebra for ratios. Harmonic algebra is described here. This is seen in the different averaging methods for velocities that differ spatially vs. velocities that differ temporally. If two vehicles take the same route, their average velocity is their arithmetic mean (u + v)/2, but if one vehicle has velocity u going and velocity v returning, then the average velocity is their harmonic mean 2/(1/u + 1/v). However, if one vehicle has pace u going and pace v returning, then the average pace is their arithmetic mean, but if two vehicles take the same route, their average pace is their harmonic mean. Space and time are related to each other as covariant and contravariant components. If space is covariant, then time is contravariant, and if time is covariant, then space is contravariant. The equations of space-time (3+1) and time-space (1+3) physics are symmetric to one another with the interchange of space and time dimensions. The equations of spacetime (4D) physics are self-symmetric. The interchange of space and time dimensions produces equivalent 4D equations. To interchange the space and time coordinates, take these steps: For the equations of classical physics, (1) ensure either space or time is a parameter, (2) interchange one dimension with the parameter, and (3) expand the single dimension into three dimensions. For the equations of relativistic physics, (1) ensure there is a symmetry between space and time dimensions, (2) interchange one space and time dimension but leave dimensionless quantities unchanged, and (3) expand the single dimension into three dimensions. These steps reflect the difference between Galileo’s and Einstein’s relativity. Galileo transforms one frame into another frame but does not combine frames as Einstein’s does. For example, Einstein requires all frames to have the same orientation, but Galileo accepts frame-specific orientations such as the right-hand rule.
The Galilean transformation represents the addition and subtraction of velocities as vectors. The dual Galilean transformation represents the addition and subtraction of lenticities as vectors. The Lorentz transformation represents the combination of Galilean and dual-Galilean transformations, as previously shown. # One and two-way transformations The transformation of Galileo is a one-way transformation, i.e., it uses only the one-way speed of light, which for simplicity is assumed to be instantaneous. The transformation of Lorentz is a two-way transformation, which uses the universal two-way speed of light. The following approach defines two different one-way transformations, which combine to equal the two-way Lorentz transformation. Note that β = v/c; 1/γ² = 1 − β²; and γ = 1/γ + β²γ. Galilean transformation: $x' \mapsto x-vt;\;\; t' \mapsto t.$ Dual Galilean transformation: $x' \mapsto x;\;\; t' \mapsto t-wx.$ These could be combined with a selection factor ε of zero or one: $x' \mapsto x - \epsilon vt;\;\; t' \mapsto t-(1-\epsilon)wx.$ Lorentz transformation (boost): $x' \mapsto \gamma(x-vt);\;\; t' \mapsto \gamma(t-vx/c^{2})$. General Lorentz boost (see here): $x' \mapsto \gamma(x-vt);\;\; t' \mapsto \gamma(t-k^{2}vx)$ with $\gamma = \left(1-\frac{v^{2}}{c^{2}}\right)^{-1/2}$ and k = 1/c for the Lorentz boost. General dual Lorentz boost: $x' \mapsto \gamma_{2}(x-kwt);\;\; t' \mapsto \gamma_{2}(t-wx)$ with $\gamma_{2} = \left(1-\frac{w^{2}}{k^{2}}\right)^{-1/2}$ and k = 1/c. # Invariant intervals Let’s begin with the space-time invariant interval r′² − c²t′² = r² − c²t². Then let us solve the equations r′ = Ar + Bt and t′ = Cr + Dt. Setting r′ = 0 = Ar + Bt gives r = −tB/A = vt, where v = −B/A, or B = −Av, so that r′ = Ar + Bt = A(r − vt). Then A²(r − vt)² − c²(Cr + Dt)² = r² − c²t², i.e., A²r² − 2A²vrt + A²v²t² − C²c²r² − 2CDc²rt − D²c²t² = r² − c²t². Matching coefficients: (A² − C²c²)r² = r², or A² − c²C² = 1; (A²v² − D²c²)t² = −c²t², or D²c² − A²v² = c²; and (2A²v + 2CDc²)rt = 0, or CDc² = −A²v.
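The invariance claimed in the derivation above is easy to check by machine. Below is a minimal Python sketch, added here for illustration and not part of the original post, which applies the standard Lorentz boost with γ = (1 − v²/c²)^(−1/2) to a few events and confirms that r² − c²t² is unchanged; the sample values of v, r and t are arbitrary choices.

```python
import math

c = 3.0e8  # speed of light (m/s)

def boost(r, t, v, c=c):
    """Standard Lorentz boost of an event (r, t) along one spatial axis."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    r_prime = gamma * (r - v * t)
    t_prime = gamma * (t - v * r / c ** 2)
    return r_prime, t_prime

def interval(r, t, c=c):
    """Quadratic invariant r^2 - c^2 t^2."""
    return r ** 2 - (c * t) ** 2

# Arbitrary sample events (r in metres, t in seconds) and boost speed
events = [(1.0, 2.0e-9), (4.0e2, 1.0e-6), (-7.5, 3.0e-8)]
v = 0.6 * c

for r, t in events:
    rp, tp = boost(r, t, v)
    assert math.isclose(interval(r, t), interval(rp, tp), rel_tol=1e-9)
    print(f"r'^2 - c^2 t'^2 = {interval(rp, tp):.6e}  (unchanged)")
```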
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9560272097587585, "perplexity": 1021.6660499123225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131986.91/warc/CC-MAIN-20201001174918-20201001204918-00387.warc.gz"}
https://www.physicsforums.com/threads/relative-angle-between-two-balls-with-equal-mass-after-collision.780337/
# Relative angle between two balls with equal mass after collision 1. Nov 6, 2014 ### Phunee I'm working on a physics hand-in and found the relative angle between the two balls after collision (where one is stationary before the collision) to be exactly 90 degrees; is this random or is it always so? It's worth mentioning that total momentum and kinetic energy are conserved. The hand-in is attached and the problem I'm referring to is problem 3. 2. Nov 6, 2014 3. Nov 6, 2014 ### Staff: Mentor A.T. has provided a link to the previous discussion of this problem. We can close the thread, and if there are any more questions start a new thread in the physics homework subforum.
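The question (is the 90° angle a coincidence?) can be checked directly: for equal masses in an elastic collision with one ball at rest, the impulse acts along the line of centres, so the incoming velocity splits into the component along that line (transferred to the target) and the perpendicular remainder (kept by the incident ball), and the two outgoing velocities are orthogonal. The sketch below is my added numerical illustration, not part of the thread; the incoming speed and contact direction are arbitrary.

```python
import numpy as np

def elastic_equal_mass(v1, n):
    """Elastic collision of equal masses, ball 2 initially at rest.
    v1: incoming velocity of ball 1; n: unit vector along the line of centres."""
    n = n / np.linalg.norm(n)
    v_along = np.dot(v1, n) * n   # component transferred to ball 2
    v1_after = v1 - v_along       # ball 1 keeps the perpendicular part
    v2_after = v_along
    return v1_after, v2_after

v1 = np.array([3.0, 0.0])                    # arbitrary incoming velocity
n = np.array([np.cos(0.4), np.sin(0.4)])     # arbitrary (non-head-on) contact direction

u1, u2 = elastic_equal_mass(v1, n)

print("momentum conserved:", np.allclose(v1, u1 + u2))
print("KE conserved:      ", np.isclose(v1 @ v1, u1 @ u1 + u2 @ u2))
print("angle between outgoing balls (deg):",
      np.degrees(np.arccos((u1 @ u2) / (np.linalg.norm(u1) * np.linalg.norm(u2)))))
```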
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865686655044556, "perplexity": 1276.9888558947625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647692.51/warc/CC-MAIN-20180321195830-20180321215830-00375.warc.gz"}
http://math.stackexchange.com/questions/37044/explain-iint-mathrm-dx-mathrm-dy-iint-r-mathrm-d-alpha-mathrm-dr/37156
# Explain $\iint \mathrm dx\,\mathrm dy = \iint r \,\mathrm d\alpha\,\mathrm dr$ It is changing the coordinate from one coordinate to another. There is an angle and radius on the right side. What is it? And why? I got: $2\,\mathrm dy\,\mathrm dx = r(\cos^2\alpha-\sin^2\alpha)\,\mathrm d\alpha \,\mathrm dr$, where $x = r \cos(\alpha)$ and $y = r \sin(\alpha)$. but cannot understand and get the right side. The problem emerged when trying to integrate $\displaystyle \int_0^\infty e^{\frac{-x^2}{n^2}}\,\mathrm dx$ where I tried to change the problem knowing $r^2=x^2+y^2$ but stuck to this part. What is the change in the title called and why is it so? - See here, Example 3, and also here. – t.b. May 4 '11 at 21:54 If you think of $x$ and $y$ being cartesian coordinates for the plane, then what this does is a change to polar coordinates. – Lagerbaer May 4 '11 at 22:12 A physicist would note that this is dimensionally correct. Typically $dx, dy, r$ and $dr$ have dimensions of length, while $d \alpha$ is dimensionless (it is an angle). So $dxdy=r drd\alpha$ is dimensionally correct: both sides of the equation are areas. On the contrary $dx dy=drd\alpha$ is not dimensionally correct: you have an area equal to a length. This is useful to quickly spot computational errors. – Giuseppe Negro May 4 '11 at 23:07 You don't have to be a physicist to talk about dimensional analysis. $\mathbb{R}^2$ is acted on by scaling and this extends to an action on differential forms, etc. And of course if two things are equal then the corresponding scaling action needs to agree. – Qiaochu Yuan May 5 '11 at 0:12 $r$ is the "Jacobian" ... when you learn about multi-dimensional integration, you should learn how to change variables in that context. – GEdgar Sep 11 '11 at 13:00 Generally speaking, a double integral always has an area differential, so that you're integrating $\int \int dA$. Another way to view the question, then, is why $dA = dx dy$ in Cartesian coordinates and $dA = r dr d\theta$ in polar coordinates. An area element in Cartesian is a rectangle, as Qiaochu describes in his answer. The area of the rectangle is the small change in $x$ times the small change in $y$, or $\Delta x \Delta y$. An area element in polar, however, is a piece of a circle sector. There's a nice picture below taken from here. (The area element is the shaded part.) If the angle is measured in radians, we know that the area of a sector of angle $\theta$ of a circle of radius $r$ is $\frac{1}{2}r^2 \theta$. So the area of the shaded piece in the picture is $$\Delta A = \frac{1}{2}(r + \Delta r)^2 \Delta \theta - \frac{1}{2}r^2 \Delta \theta = r \, \Delta r \, \Delta \theta + \frac{1}{2}(\Delta r)^2 \, \Delta \theta.$$ The quadratic factor of $\Delta r$ makes the second term negligible compared to the first term for small enough $\Delta r$ and $\Delta \theta$. Thus in the limit we get $dA = r \, dr \, d\theta$. As others have said, you can also use the multivariate change of variables formula involving the Jacobian of the transformation directly. I like the geometric argument when first introducing the polar element in a calculus course, though. - @Jonas: Thanks for editing the picture into my answer. – Mike Spivey May 5 '11 at 2:44 Is that formula just for approximation? – Victor Dec 9 '11 at 0:55 @Victor: I'm not sure I understand your question. The expression for $\Delta A$ becomes $dA = r dr d\theta$ in the limit, as explained in the sentence following the formula. – Mike Spivey Dec 9 '11 at 1:00 @MikeSpivey Great answer!
It really made me get the idea though I'm not familiar with double integration! – Pedro Tamaroff Feb 20 '12 at 0:41 Hm i guess to ease the computation as in the limit it will not matter whether those lines are straight or arc-like – i squared - Keep it Real May 17 at 19:40 It's a special case of the multivariate change of variables formula. Intuitively you can think of it as follows: starting from a point $(x, y)$, you make an infinitesimal change in $x$ and then an infinitesimal change in $y$ to get to $(x + \delta x, y + \delta y)$. Those changes trace out a little square whose area is $\delta x \delta y$, and you're summing over a bunch of these little squares. So what happens when you do the same thing in polar coordinates? Starting from $(r, \theta)$ you move to $(r + \delta r, \theta + \delta \theta)$. Now moving $\delta r$ is just like moving $\delta x$ in an appropriately rotated set of axes. But when you move $\delta \theta$, the actual distance you move by is multiplied by $r$ (draw a diagram to see this), and it's in a direction orthogonal to the direction you move when changing $r$. You're actually moving by $r \, \delta \theta$. The result is not quite a square, but for small enough values of $\delta r, \delta \theta$ it approaches a square with side lengths $\delta r$ and $r \, \delta \theta$. - you don't need square, you only need rectangle.. – Aang Jul 10 '12 at 19:18 UPDATE: The series have been replaced by limits of sums. Geometric interpretation. By definition of a double integral of a continuous function over a bounded closed region $R$ of the $xy$-plane, we have $$\iint_R f(x,y)\;\mathrm{d}x\;\mathrm{d}y=\lim_{n\to\infty}\; \sum_{i=1}^{n }f(x_{i},y_{i})\Delta A_{i},$$ where $\Delta A_{i}$ is the area of a generic rectangular cell and $n$ the number of cells. If $f(x,y)=1$, we get the area of $R$ $$\iint_R \mathrm{d}x\;\mathrm{d}y=\lim_{n\to\infty}\; \sum_{i=1}^{n }\Delta A_{i}.$$ If we decompose $R$ into cells with a shape of sectors of a circle defined by two radii whose difference is $\Delta r_{i}$ for the generic $i^{th}$ cell and two rays making an angle $\Delta \theta _{i}$ with each other, the area of the cell, using the formula of a circle sector, is $$\frac{1}{2}\left[ \left( r_{i}+\frac{1}{2}\Delta r_{i}\right) ^{2}-\left( r_{i}-\frac{1}{2}\Delta r_{i}\right) ^{2}\right] \Delta \theta _{i}=r_{i}\Delta r_{i}\Delta \theta _{i}\text{,}$$ where $r_{i}$ is the radius of the middle point of the cell. 
The same area $R$ can be expressed as the limit $\lim_{n\to\infty}\;\sum_{i=1}^{n }r_{i}\Delta r_{i}\Delta \theta _{i}$, which by definition of a double integral is equal to $$\iint_R r\;\mathrm{d}r\;\mathrm{d}\theta.$$ Figure: Generic $i^{th}$-cell in polar co-ordinates with the shape of a circle sector This transformation is defined rigorously by the absolute value of the Jacobian of the transformation $\left\vert \frac{\partial (x,y)}{\partial (r,\theta )}\right\vert =r$ from the Cartesian to the polar system of co-ordinates ($x=r\cos \theta ,y=r\sin \theta$): $$\iint_R \mathrm{d}x\;\mathrm{d}y=\iint_R \left\vert \frac{\partial (x,y)}{\partial (r,\theta )}\right\vert \;\mathrm{d}r\;\mathrm{d}\theta = \iint_R r\;\mathrm{d}r\;\mathrm{d}\theta.$$ Added: Evaluation of the Jacobian determinant: $$\begin{eqnarray*} \frac{\partial \left( x,y\right) }{\partial \left( r,\theta \right) } &=&\det \begin{pmatrix} \partial x/\partial r & \partial x/\partial \theta \\ \partial y/\partial r & \partial y/\partial \theta \end{pmatrix} \\ &=&\det \begin{pmatrix} \cos \theta & -r\sin \theta \\ \sin \theta & r\cos \theta \end{pmatrix} \\ &=&r\cos ^{2}\theta +r\sin ^{2}\theta \\ &=&r. \end{eqnarray*}$$ - @Mike Spivey: Thanks! (corrected). – Américo Tavares May 5 '11 at 8:33 Let us approach this problem like some physicists would, that is, let us consider the differential $\mathrm{d}$ as an operator which obeys the rules of derivation, only with a sign. In the present case, $x=r\cos\alpha$ and $y=r\sin\alpha$ hence $$\mathrm{d}x=\cos\alpha\mathrm{d}r-r\sin\alpha\mathrm{d}\alpha, \quad \mathrm{d}y=\sin\alpha\mathrm{d}r+r\cos\alpha\mathrm{d}\alpha.$$ Now, there are some magic rules which allow to multiply $\mathrm{d}r$ and $\mathrm{d}\alpha$ elements. These are $$\mathrm{d}r\mathrm{d}r=0,\quad \mathrm{d}\alpha\mathrm{d}r=-\mathrm{d}r\mathrm{d}\alpha,\quad \mathrm{d}\alpha\mathrm{d}\alpha=0.$$ Hence, $$\mathrm{d}x\mathrm{d}y=(\cos\alpha\mathrm{d}r-r\sin\alpha\mathrm{d}\alpha)(\sin\alpha\mathrm{d}r+r\cos\alpha\mathrm{d}\alpha).$$ The $\mathrm{d}r\mathrm{d}r$ and $\mathrm{d}\alpha\mathrm{d}\alpha$ terms disappear and the terms which interest us are the $\mathrm{d}r\mathrm{d}\alpha$ and $\mathrm{d}\alpha\mathrm{d}r$ ones. One gets $$\mathrm{d}x\mathrm{d}y=(\cos\alpha\cdot r\cos\alpha-r\sin\alpha\cdot (-1)\sin\alpha)\mathrm{d}r\mathrm{d}\alpha,$$ that is, $$\color{green}{\mathrm{d}x\mathrm{d}y=r\mathrm{d}r\mathrm{d}\alpha},$$ a formula which yields $$\color{red}{\iint f(x,y)\mathrm{d}x\mathrm{d}y=\iint f(r\cos\alpha,r\sin\alpha)r\mathrm{d}r\mathrm{d}\alpha}.$$ One can also transform integrals the other way round, all there is to do is to compute a formula for $\mathrm{d}r\mathrm{d}\alpha$ as a multiple of $\mathrm{d}x\mathrm{d}y$. Our formula for $\mathrm{d}x\mathrm{d}y$ in terms of $\mathrm{d}r\mathrm{d}\alpha$ yields $$\mathrm{d}r\mathrm{d}\alpha=\frac1r\mathrm{d}x\mathrm{d}y=\frac1{\sqrt{x^2+y^2}}\mathrm{d}x\mathrm{d}y,$$ hence $$\color{blue}{\iint f(r,\alpha)\mathrm{d}r\mathrm{d}\alpha=\iint f(x,y)\frac1{\sqrt{x^2+y^2}}\mathrm{d}x\mathrm{d}y}.$$ Once again, this only describes the computational side of the story, but this recipe is supported by a well established theory of differential forms which we omitted. - it would be maybe helpful to use the symbol $\wedge$ (or $\cdot$, or $\otimes$, or ...) for the multiplication of two differentials just to make clear that the "usual" rules for product do not apply. – Fabian May 5 '11 at 9:47 @Fabian: I deliberately omitted it. 
Both choices (with and without $\land$) have their advantages. Thanks for your reaction. – Did May 5 '11 at 9:53 Can the "magic rules" be viewed as the cross product of "vectors" $\mathrm{d}r,\mathrm{d}\alpha$ ? – Américo Tavares May 5 '11 at 17:41 @Américo Rather as a differential form (a 2-form, in this case). This is explained rigorously in many lecture notes, a congenial one might be math.berkeley.edu/~wodzicki/H185.S11/podrecznik/2forms.pdf – Did May 5 '11 at 18:09 Thanks! +1 for your approach. – Américo Tavares May 5 '11 at 18:33
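As a complement to the answers above (my addition, not part of the thread), the Jacobian computation and the resulting change of variables can be verified symbolically. The SymPy sketch below, under the assumption that SymPy is available, reproduces ∂(x,y)/∂(r,θ) = r, checks that the area of a disk comes out the same in Cartesian and polar coordinates, and evaluates the Gaussian integral mentioned in the question.

```python
import sympy as sp

r = sp.symbols('r', positive=True)
theta, x, y = sp.symbols('theta x y', real=True)
R = sp.symbols('R', positive=True)

# Jacobian determinant of the polar map (x, y) = (r cos(theta), r sin(theta))
J = sp.Matrix([[sp.cos(theta), -r * sp.sin(theta)],
               [sp.sin(theta),  r * sp.cos(theta)]])
print(sp.simplify(J.det()))          # -> r

# Area of a disk of radius R both ways: double integral of dx dy vs r dr dtheta
area_polar = sp.integrate(r, (r, 0, R), (theta, 0, 2 * sp.pi))
area_cart = sp.integrate(
    sp.integrate(1, (y, -sp.sqrt(R**2 - x**2), sp.sqrt(R**2 - x**2))),
    (x, -R, R))
print(sp.simplify(area_polar - area_cart))   # -> 0, both give pi*R**2

# The integral behind the question: int_0^oo exp(-x^2) dx = sqrt(pi)/2
print(sp.integrate(sp.exp(-x**2), (x, 0, sp.oo)))
```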
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9291518926620483, "perplexity": 232.95828973278043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824345.69/warc/CC-MAIN-20160723071024-00053-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/is-this-equation-wrong.85111/
# Is this equation wrong? • #1 78 0 Is this equation wrong? This is a equation for "uniform acceleration directed line motion at zero initial speed" in our textbook. S=1/2 at^2 Here is the list to compare the answer: t: 1 2 3 v: 2 4 6 If we use the equation to calculate, the answer is different to the list. For example, 1/2(2)(3*3) = 9, s=9 but in the list, it shows the answer is 12! Is my wrong? Or the equation is really wrong? Last edited: ## Answers and Replies • #2 ZapperZ Staff Emeritus Science Advisor Education Advisor 35,847 4,676 yu_wing_sin said: Is this equation wrong? This is a equation for "uniform acceleration directed line motion at zero initial speed" in our textbook. S=1/2 at^2 Here is the list to compare the answer: t: 1 2 3 a: 2 4 6 If we use the equation to calculate, the answer is different to the list. For example, 1/2(2)(3*3) = 9, s=9 but in the list, it shows the answer is 12! Is my wrong? Or the equation is really wrong? It is wrong because the kinematical equation that is used to derive that expression is for constant acceleration only. If the acceleration varies, that equation isn't valid. Zz. • #3 78 0 ZapperZ said: It is wrong because the kinematical equation that is used to derive that expression is for constant acceleration only. If the acceleration varies, that equation isn't valid. Zz. But in my trial calculation, the a is already constant. The a is 2, it is still not equal to the list result. Is me there is any wrong? • #4 ZapperZ Staff Emeritus Science Advisor Education Advisor 35,847 4,676 yu_wing_sin said: But in my trial calculation, the a is already constant. The a is 2, it is still not equal to the list result. Is me there is any wrong? What list? All you did was plug in t and a, and calculate s. You got a value for s. Why is the eqn. wrong? Zz. edit: I see you EDITED your list to now list v. This is no longer relevant and appropriate for the equation you are using. How could you plug "a" for "v"? This is getting VERY confusing. Last edited: • #5 78 0 ZapperZ said: What list? All you did was plug in t and a, and calculate s. You got a value for s. Why is the eqn. wrong? Zz. edit: I see you EDITED your list to now list v. This is no longer relevant and appropriate for the equation you are using. How could you plug "a" for "v"? This is getting VERY confusing. The reason of a comes from v is because the v is a speed with initial zero speed, so the first second, 1/s' speed is the a. But anyway I calculate it, I really can't get the true result. • #6 ZapperZ Staff Emeritus Science Advisor Education Advisor 35,847 4,676 yu_wing_sin said: The reason of a comes from v is because the v is a speed with initial zero speed, so the first second, 1/s' speed is the a. But anyway I calculate it, I really can't get the true result. And you don't see ANYTHING wrong with plugging in the values of "v" into "a" in $$s = \frac{1}{2} at^2$$? If there is an INITIAL velocity at t=0, then your question is MISSING a term. The full equation for the displacement is $$s= s_0 + v_0t + \frac{1}{2} at^2$$ It is ONLY for s(t=0)=0 and v(t=0)=0 do you get the displacement equation that you quoted in the first place. Zz. Last edited: • #7 78 0 More details, t: 1 2 3 v: 2 4 6 The v is a speed with zero initial speed, front mentioned. So the first time unit is the a, thus the a is 2. If we put 2 into a for calculating, we will get: 1/2 a t^2 =1/2 (2) 3^2 =9 but the list shows that 3 seconds (t) = 12 (v). This is really making me doubt, whether the equation is wrong.
• #8 ZapperZ Staff Emeritus Science Advisor Education Advisor 35,847 4,676 yu_wing_sin said: More details, t: 1 2 3 v: 2 4 6 The v is a speed with zero initial speed, front mentioned. So the first time unit is the a, thus the a is 2. What the......? If we put 2 into a for calculating, we will get: 1/2 a t^2 =1/2 (2) 3^2 =9 but the list shows that 3 seconds (t) = 12 (v). This is really making me doubt, whether the equation is wrong. I give up. Maybe someone else can translate what's going on. Zz. • #9 78 0 ZapperZ said: And you don't see ANYTHING wrong with plugging in the values of "v" into "a" in $$s = 1/2 at^2$$? If there is an INITIAL velocity at t=0, then your question is MISSING a term. The full equation for the displacement is $$s= s_0 + v_0t + 1/2 at^2$$ It is ONLY for s(t=0)=0 and v(t=0)=0 do you get the displacement equation that you quoted in the first place. Zz. How come is the s_0 and v_0? The initial velocity is already the a, and the v is variable, and the s_0 where can find? Let me trial calculate it, ignore the s_0, v_0t+1/2 at^2 =2*3 + 1/2 (2) (3^2) =15 only "v_0t+1/2 at^2" result is ready far higher than this equation "S=1/2 at^2". • #10 LeonhardEuler Gold Member 859 1 The equation s=1/2at^2 gives the total distance traveled. After 3 seconds this is 1/2*2*3^2=9. The list says that after 3 seconds the velocity is 6. Where is the problem with this? • #11 LeonhardEuler Gold Member 859 1 yu_wing_sin said: How come is the s_0 and v_0? The initial velocity is already the a, and the v is variable, and the s_0 where can find? Let me trial calculate it, ignore the s_0, v_0t+1/2 at^2 =2*3 + 1/2 (2) (3^2) =15 only "v_0t+1/2 at^2" result is ready far higher than this equation "S=1/2 at^2". Why are you taking v_0 = 2? If the acceleration is constant then you can extrapolate v_0 to be 0. • #12 551 1 If you're starting at position s = 0 and time t = 0, the equation is just $$s = v_{0}t + \frac{1}{2}at^2$$ If you're starting with velocity v = 0 at time t = 0, the equation reduces further $$s = \frac{1}{2}at^2$$ The $s_{0}$ and $v_{0}$ are just the displacement and velocity at time t = 0, respectively. What's hard to understand? ZZ's explanation seemed pretty clear to me :/. • #13 78 0 LeonhardEuler said: The equation s=1/2at^2 gives the total distance traveled. After 3 seconds this is 1/2*2*3^2=9. The list says that after 3 seconds the velocity is 6. Where is the problem with this? This is an accelerated motion, the a is 2 , so the volecity is not wrong. v=at v=2*3 v=6 • #14 ZapperZ Staff Emeritus Science Advisor Education Advisor 35,847 4,676 yu_wing_sin said: How come is the s_0 and v_0? The initial velocity is already the a, and the v is variable, and the s_0 where can find? I'm sorry, but INITIAL VELOCITY is "a"? Since when can an acceleration becomes a velocity? Where did you learn this? Zz. • #15 78 0 LeonhardEuler said: Why are you taking v_0 = 2? If the acceleration is constant then you can extrapolate v_0 to be 0. Oh, sorry. I missed understanding to it. In my textbook there is no this mention. • #16 78 0 LeonhardEuler said: Why are you taking v_0 = 2? If the acceleration is constant then you can extrapolate v_0 to be 0. Oh, I missed understanding to it. In my textbook there is no mention of this. But the result is still wrong. • #17 LeonhardEuler Gold Member 859 1 ZapperZ said: I'm sorry, but INITIAL VELOCITY is "a"? Since when can an acceleration becomes a velocity? Where did you learn this? Zz.
He seems to have gotten this from the fact that it is a uniform acceleration, so: $$a = \frac{\Delta v}{\Delta t} = \frac{v_1-v_0}{1-0}=2$$ • #18 LeonhardEuler Gold Member 859 1 yu_wing_sin said: Oh, I missed understanding to it. In my textbook there is no mention of this. But the result is still wrong. Is it possible that the initial position is not zero? Could you explain to me exactly how you know that the result is wrong. I don't seem to see any contradiction within the data you gave. • #19 78 0 ZapperZ said: I'm sorry, but INITIAL VELOCITY is "a"? Since when can an acceleration becomes a velocity? Where did you learn this? Zz. The initial speed is at zero, that is not wrong. It has many motive examples, from still to accelerate. It is also used for calculating certain time's distance. But I found it is not correct. • #20 ZapperZ Staff Emeritus Science Advisor Education Advisor 35,847 4,676 LeonhardEuler said: He seems to have gotten this from the fact that it is a uniform acceleration, so: $$a = \frac{\Delta v}{\Delta t} = \frac{v_1-v_0}{1-0}=2$$ Yes, but he's going to cause himself a lot of grief if he keeps insisting that a=v. a=dv/dt is certainly not a = v. Zz. • #21 78 0 LeonhardEuler said: Is it possible that the initial position is not zero? Could you explain to me exactly how you know that the result is wrong. I don't seem to see any contradiction within the data you gave. Not right that. Many motions have the feature from still to accelerate, also the acceleration is average, for example the rockets or trains, in theoretical, it is possible. But often we will ignore the variable volecity for easy to calculate. • #22 78 0 ZapperZ said: Yes, but he's going to cause himself a lot of grief if he keeps insisting that a=v. a=dv/dt is certainly not a = v. Zz. You mistook my meaning... Also this is the second paper of me in my scheme. Today now I found me is right, I can be brave to write it. Thank you. • #23 ZapperZ Staff Emeritus Science Advisor Education Advisor 35,847 4,676 yu_wing_sin said: You mistook my meaning... Also this is the second paper of me in my scheme. Today now I found me is right, I can be brave to write it. Thank you. You are right about what, that the equation is "wrong"? Or did you not realize you were USING it wrongly? I hate to think this is the "paper" you have been touting about in the other part of PF. Zz. • #24 BobG Science Advisor Homework Helper 185 82 yu_wing_sin said: Is this equation wrong? This is a equation for "uniform acceleration directed line motion at zero initial speed" in our textbook. S=1/2 at^2 Here is the list to compare the answer: t: 1 2 3 v: 2 4 6 If we use the equation to calculate, the answer is different to the list. For example, 1/2(2)(3*3) = 9, s=9 but in the list, it shows the answer is 12! Is my wrong? Or the equation is really wrong? Your answer is correct! Even without the acceleration being given, you can look at the increase in 'v' each second and see that the velocity is increasing by 2 each second, which gives you an acceleration of 2 (which you correctly used in your equation). Or you could differentiate the position equation and solve for acceleration (the acceleration is a little too obvious to bother with that, in this case). • #25 HallsofIvy Science Advisor Homework Helper 41,833 963 You repeatedly said "the list shows the answer is 12". What list?? You didn't show us any list that include "12". 
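The confusion in the thread is easier to see with the values tabulated. The short sketch below is my addition, not part of the thread; it uses the thread's numbers a = 2 m/s², v₀ = 0, s₀ = 0 and prints v = at and s = ½at² side by side, showing that at t = 3 s the speed is 6 m/s while the displacement is 9 m, which are two different quantities.

```python
# Constant-acceleration kinematics with the thread's numbers:
# a = 2 m/s^2, starting from rest at the origin (v0 = 0, s0 = 0).
a, v0, s0 = 2.0, 0.0, 0.0

print(" t (s)   v = v0 + a*t (m/s)   s = s0 + v0*t + 0.5*a*t^2 (m)")
for t in (1, 2, 3):
    v = v0 + a * t
    s = s0 + v0 * t + 0.5 * a * t ** 2
    print(f"{t:5d}   {v:18.1f}   {s:28.1f}")
# At t = 3 s: v = 6 m/s (matching the list) but s = 9 m, not 12.
```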
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9339755773544312, "perplexity": 1547.3405095770195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991428.43/warc/CC-MAIN-20210514152803-20210514182803-00065.warc.gz"}
https://socratic.org/questions/what-is-the-pythagorean-theorem#467494
# What is the Pythagorean Theorem? Nov 24, 2014 The Pythagorean Theorem is a relation in a right-angled triangle. The rule states that ${a}^{2} + {b}^{2} = {c}^{2}$, in which $a$ and $b$ are the two legs, the sides which make the right angle, and $c$ is the hypotenuse, the longest side of the triangle. So if you have $a = 6$ and $b = 8$, $c$ would equal ${\left({6}^{2} + {8}^{2}\right)}^{\frac{1}{2}}$, (${x}^{\frac{1}{2}}$ meaning square rooted), which is equal to 10, $c$, the hypotenuse. Aug 24, 2017 #### Explanation: The Pythagorean Theorem (attributed to Pythagoras of Samos) is used to find the length of a side of a right triangle using the formula ${a}^{2} + {b}^{2} = {c}^{2}$! A right triangle has two "legs" and a hypotenuse. A hypotenuse is the longest side of a right triangle and is always opposite the right angle. The legs can be a or b (it doesn't matter which is $a$ or which is $b$). The $c$ is always longer than $a$ and $b$! To get some more clarity, take a look at the example down below! In this case, let's say that $a$ is $3$, $b$ is $4$ and $c$ is $x$. ${a}^{2} + {b}^{2} = {c}^{2}$ After substituting... ${3}^{2} + {4}^{2} = {x}^{2}$ After simplifying... $9 + 16 = {x}^{2}$ Now, solve it! ${x}^{2} = 25$ Whoa, whoa, wait a second before you finalize that as the answer! We can simplify this. It's not just $x$, it's ${x}^{2}$! So we have to find the square root of $25$ so that you can get your final answer! The square root of $25$ is $5$. So... $x = 5$! Remember, we don't use the Pythagorean Theorem just for the hypotenuse! We can use it for the other sides, too! Ex: In this problem, we know the hypotenuse, but we need to find out what one of the "legs" is. Let's say that $6$ is $a$, $x$ is $b$ and we know that $10$ has to be $c$. ${a}^{2} + {b}^{2} = {c}^{2}$ After substituting... ${6}^{2} + {x}^{2} = {10}^{2}$ After simplifying... $36 + {x}^{2} = 100$ Leave ${x}^{2}$ on one side... ${x}^{2} = 100 - 36$ ${x}^{2} = 64$ $x = 8$ There! We have it! I hope you have better clarity on the Pythagorean Theorem and understand it! My source (despite the images) is my mind! Sorry if my answer is too long!
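The two worked examples translate directly into a few lines of code. Below is a small Python sketch added here (not part of the original answer) that computes a hypotenuse from two legs and a leg from the hypotenuse, reproducing the 3-4-5 and 6-8-10 triangles used above.

```python
import math

def hypotenuse(a, b):
    """c = sqrt(a^2 + b^2) for legs a and b."""
    return math.hypot(a, b)

def missing_leg(c, a):
    """b = sqrt(c^2 - a^2), given the hypotenuse c and one leg a."""
    return math.sqrt(c ** 2 - a ** 2)

print(hypotenuse(3, 4))    # 5.0
print(hypotenuse(6, 8))    # 10.0
print(missing_leg(10, 6))  # 8.0
```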
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 45, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9137048721313477, "perplexity": 342.3850193637378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00603.warc.gz"}
https://www.physicsforums.com/threads/giancoli-ed-5-p-121.131447/
# Giancoli ed 5 p 121 1. Sep 10, 2006 ### jla2w I am teaching myself physics using Giancoli's ed 5 text, and am confused on one of the examples. P 121 in the fifth edition, example 5-8 to be precise. In this a car is on an inclined plane (i.e., a racetrack where the road banks to reduce the friction needed to keep the cars on the track). The solution explains that the normal force perpendicular to the track is greater than the force due to gravity directed downward. In previous examples, gravity was generally resolved into two components, but in this example it seems the normal force is resolved into two components, with the vertical component of the normal force set equal to gravity. This seems incorrect; what am I missing? Thanks 2. Sep 10, 2006 ### Chi Meson You can resolve any vector into components along axes in any direction. In this situation, due to the lack of friction, the only force that the track can exert on the car is perpendicular to the surface. This is the definition of "normal" (direction perpendicular to the plane of the surface). The normal force in this situation has to "accomplish" two things: first it must balance the weight (gravitational force), and simultaneously it must provide centripetal force. These two "required components" add up to the net force from the track, which is the normal force. Depending on the angle of pitch of the turn in the track, there is only one speed that will allow the car to make the turn without wiping out. Last edited: Sep 10, 2006 3. Sep 10, 2006 ### jla2w thanks, that is a helpful explanation, but there is one aspect I'm still unclear on. If the car were at rest, then the normal force in a direction perpendicular to the track would be less than the force due to gravity directed downward. In this case the normal force is greater than the gravitational force. What is the principle or generalized explanation for why when the car is in motion the normal force exceeds gravity, and at rest it is less than gravity? 4. Sep 10, 2006 ### mbrmbrg Normal force remains the same regardless of whether the object is at rest or in motion (so long as the object remains in contact with the surface it's supposed to be on, at any rate. I don't know about you, but I like all four wheels of my car to be in contact with the road at all times, whether the car is parked or moving. ) PS I can't believe you're teaching yourself physics--more power to your elbow! 5. Sep 10, 2006 ### Chi Meson I'm sorry, but that is wrong. Normal force will react to whatever the situation is. The normal force on a stationary car would be less than that of a car moving through the corner. In the situation of a car that starts at rest, the normal force would be balanced by (and therefore equal in magnitude to) a component of the car's weight (the so-called "perpendicular component" of the weight). In the moving example, the normal force is the vector sum of the weight plus the "required" centripetal force. 6. Sep 10, 2006 ### mbrmbrg :blush: My humblest apologies. Could someone please delete that? 7. Sep 11, 2006 ### jla2w thanks, very helpful. So as the car moves around the banked road, the road exerts a force on the car (centripetal) directly proportional to the square of the velocity. In addition, since friction is not a factor, as it is in the resting situation, the vertical component of the normal force MUST equal precisely mg. Makes sense, but still pretty difficult to deduce without much practice. 8.
Sep 11, 2006 ### Chi Meson The next step is to add friction (and thus make it a "real world" problem). Friction is the "surface force" that acts parallel to the surface (while normal is the perpendicular "surface force"). With friction, the speed of the car does not have to be exactly right for the given embankment. Keep in mind that if an object is moving in perfect circular motion, all forces must be balanced except for the force (or component) that is "centripetal." In other words, in uniform circular motion, the centripetal force is the net force. 9. Sep 11, 2006 ### jla2w Thanks Chi, and by the way, wahoowa (SEAS '02) 10. Sep 11, 2006 ### Chi Meson Aha! Rugby Road to Vinegar Hill!
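To make the force balance discussed in the thread concrete, here is a short Python sketch of my own (not from the thread or from Giancoli). On a frictionless banked turn the vertical component of the normal force must cancel gravity, N cos θ = mg, and the horizontal component must supply the centripetal force, N sin θ = mv²/r; together these give N = mg/cos θ (larger than mg, as Chi Meson explains) and a single no-friction design speed v = √(g r tan θ). The mass, radius, and bank angle below are illustrative values only.

```python
import math

g = 9.8  # m/s^2

def banked_turn(mass, radius, bank_deg):
    """Frictionless banked turn: normal force and the one speed that works."""
    theta = math.radians(bank_deg)
    N = mass * g / math.cos(theta)                # from N cos(theta) = m g
    v = math.sqrt(g * radius * math.tan(theta))   # from N sin(theta) = m v^2 / r
    return N, v

m, r, bank = 1000.0, 50.0, 15.0   # illustrative: 1000 kg car, 50 m radius, 15 deg bank
N, v = banked_turn(m, r, bank)
print(f"normal force = {N:.0f} N  (weight mg = {m * g:.0f} N)")
print(f"design speed = {v:.1f} m/s")
```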
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010303974151611, "perplexity": 542.8678282012506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280835.60/warc/CC-MAIN-20170116095120-00389-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/electrostatic-and-gravitational-potential-energy-question.33270/
# Homework Help: Electrostatic and gravitational potential energy question 1. Jun 30, 2004 ### Sigma Rho Hi all, I have a few questions that I'd appreciate some guidance on. There are two identical dust particles: mass 13 µg; charge +9.8E-15 C; electrostatic potential energy 8.7E-17 J; gravitational potential energy 1.1E-24 J. The mass is given in the question, the energies I calculated. The part of the question that I am having problems with asks me to comment on these values of energy, with reference to the zero points of each. Apart from saying that there's a lot more electrostatic potential energy than there is gravitational, and that electrostatic force is much more powerful than gravitational force, I don't really know what else to say. Do I need to say more than that? It then asks for the total potential energy of the system, which I haven't seen mentioned in the textbook (or maybe I just didn't read it properly). Is this just the algebraic sum of the electrostatic and gravitational energies, or is there something else to consider? How does this change as the separation of the particles changes? I guess I can work this out once I've mastered the question above. I'm not looking for actual answers to these, I'd rather work them out for myself, but any guidance would be greatly appreciated.
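The post does not quote the separation used, but the two energies can be reproduced from Coulomb's and Newton's laws; a separation of roughly 1 cm gives the quoted magnitudes, so that value is assumed in the sketch below (my addition, not part of the thread). It also illustrates the sign issue the question hints at: with the zero point at infinite separation, the gravitational potential energy of the attracting pair is negative, while the like-charge electrostatic energy is positive.

```python
k = 8.99e9      # Coulomb constant, N m^2 / C^2
G = 6.674e-11   # gravitational constant, N m^2 / kg^2

q = 9.8e-15     # C, charge on each dust particle (from the post)
m = 13e-9       # kg (13 micrograms, from the post)
r = 1.0e-2      # m -- assumed separation; not given in the excerpt

U_electric = k * q**2 / r       # positive: like charges repel
U_gravity = -G * m**2 / r       # negative: attractive, zero at infinity

print(f"electrostatic PE: {U_electric:.2e} J")   # ~ 8.6e-17 J
print(f"gravitational PE: {U_gravity:.2e} J")    # ~ -1.1e-24 J
print(f"total PE:         {U_electric + U_gravity:.2e} J")
```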
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8705604076385498, "perplexity": 367.4613826269542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865098.25/warc/CC-MAIN-20180623152108-20180623172108-00583.warc.gz"}
http://empslocal.ex.ac.uk/people/staff/mrwatkin/zeta/physics5.htm
## string theory, quantum cosmology, etc. E. Elizalde, S. Leseduarte and S. Zerbini, "Mellin transform techniques for zeta-function resummations" "Making use of inverse Mellin transform techniques for analytical continuation, an elegant proof and an extension of the zeta function regularization theorem is obtained...As an application of the method, the summation of the series which appear in the analytic computation (for different ranges of temperature) of the partition function of the string - basic in order to ascertain if QCD is some limit of a string theory - is performed. " E. Elizalde, S. Leseduarte and S.D. Odintsov, "Partition functions for the rigid string and membrane at any temperature" "Exact expressions for the partition functions of the rigid string and membrane at any temperature are obtained in terms of hypergeometric functions. By using zeta function regularization methods, the results are analytically continued and written as asymptotic sums of Riemann-Hurwitz zeta functions, which provide very good numerical approximations with just a few first terms." A class of zeta functions that extends the class of Epstein's was recently brought to my attention by Prof. E. Elizalde of M.I.T. They are spectral zeta functions associated with a quadratic + linear + constant form in any number of dimensions. Elizalde has developed formulas for them which extend the famous Chowla-Selberg formula. E. Elizalde, "Explicit zeta functions for bosonic and fermionic fields on a noncommutative toroidal spacetime", Journal of Physics A 34 (2001) 3025–3036 E. Elizalde, "Multidimensional extension of the generalized Chowla-Selberg formula", Communications in Mathematical Physics 198 (1998) 83–95 E. Elizalde, "Zeta functions, formulas and applications", J. Comp. Appl. Math. 118 (2000) 125 G.W. Moore, "Les Houches lectures on strings and arithmetic" (preprint 01/04) [abstract:] "These are lecture notes for two lectures delivered at the Les Houches workshop on Number Theory, Physics, and Geometry, March 2003. They review two examples of interesting interactions between number theory and string compactification, and raise some new questions and issues in the context of those examples. The first example concerns the role of the Rademacher expansion of coefficients of modular forms in the AdS/CFT correspondence. The second example concerns the role of the "attractor mechanism" of supergravity in selecting certain arithmetic Calabi-Yau's as distinguished compactifications." B. Dragovich, "Nonlocal dynamics of $p$-adic strings" (preprint 11/2010) [abstract:] "We consider the construction of Lagrangians that might be suitable for describing the entire $p$-adic sector of an adelic open scalar string. These Lagrangians are constructed using the Lagrangian for $p$-adic strings with an arbitrary prime number $p$. They contain space-time nonlocality because of the d'Alembertian in argument of the Riemann zeta function. We present a brief review and some new results." B. Dragovich, "On p-adic sector of adelic string" (Presented at the 2nd Conference on SFT and Related Topics, Moscow, April 2009. Submitted to Theor. Math. Phys.) [abstract:] "We consider construction of Lagrangians which are candidates for p-adic sector of an adelic open scalar string. Such Lagrangians have their origin in Lagrangian for a single p-adic string and contain the Riemann zeta function with the d'Alembertian in its argument.
In particular, we present a new Lagrangian obtained by an additive approach which takes into account all p-adic Lagrangians. The very attractive feature of this new Lagrangian is that it is an analytic function of the d'Alembertian. Investigation of the field theory with Riemann zeta function is interesting in itself as well." B. Dragovich, "Towards effective Lagrangians for adelic strings" (preprint 02/2009) B. Dragovich, "Some Lagrangians with zeta function nonlocality" (preprint, 05/2008) [abstract:] "Some nonlocal and nonpolynomial scalar field models originated from $p$-adic string theory are considered. Infinite number of spacetime derivatives is governed by the Riemann zeta function through d'Alembertian $\Box$ in its argument. Construction of the corresponding Lagrangians begins with the exact Lagrangian for effective field of $p$-adic tachyon string, which is generalized replacing $p$ by arbitrary natural number $n$ and then taken a sum of over all $n$. Some basic classical field properties of these scalar fields are obtained. In particular, some trivial solutions of the equations of motion and their tachyon spectra are presented. Field theory with Riemann zeta function nonlocality is also interesting in its own right." B. Dragovich, "Zeta nonlocal scalar fields" (preprint, 04/2008) [abstract:] "We consider some nonlocal and nonpolynomial scalar field models originated from p-adic string theory. Infinite number of spacetime derivatives is determined by the operator valued Riemann zeta function through d'Alembertian $\Box$ in its argument. Construction of the corresponding Lagrangians L starts with the exact Lagrangian $\mathcal{L}_p$ for effective field of p-adic tachyon string, which is generalized replacing p by arbitrary natural number n and then taken a sum of $\mathcal{L}_n$ over all n. The corresponding new objects we call zeta scalar strings. Some basic classical field properties of these fields are obtained and presented in this paper. In particular, some solutions of the equations of motion and their tachyon spectra are studied. Field theory with Riemann zeta function dynamics is interesting in its own right as well." B. Dragovich, "Zeta strings" (preprint 03/2007) [abstract:] "We introduce nonlinear scalar field models for open and open-closed strings with spacetime derivatives encoded in the operator valued Riemann zeta function. The corresponding two Lagrangians are derived in an adelic approach starting from the exact Lagrangians for effective fields of $p$-adic tachyon strings. As a result tachyons are absent in these models. These new strings we propose to call zeta strings. Some basic classical properties of the zeta strings are obtained and presented in this paper." B. Dragovich, "Lagrangians with Riemann zeta function" (preprint 08/2008) [abstract:] "We consider construction of some Lagrangians which contain the Riemann zeta function. The starting point in their construction is $p$-adic string theory. These Lagrangians describe some nonlocal and nonpolynomial scalar field models, where nonlocality is controlled by the operator valued Riemann zeta function. The main motivation for this research is intention to find an effective Lagrangian for adelic scalar strings." R. Auzzi and S. Elitzur and S.B. Gudnason and E. Rabinovici, "Time-dependent stabilization in AdS/CFT" (preprint 06/2012) [abstract:] "We consider theories with time-dependent Hamiltonians which alternate between being bounded and unbounded from below. 
For appropriate frequencies dynamical stabilization can occur rendering the effective potential of the system stable. We first study a free field theory on a torus with a time-dependent mass term, finding that the stability regions are described in terms of the phase diagram of the Mathieu equation. Using number theory we have found a compactification scheme such as to avoid resonances for all momentum modes in the theory. We further consider the gravity dual of a conformal field theory on a sphere in three spacetime dimensions, deformed by a doubletrace operator. The gravity dual of the theory with a constant unbounded potential develops big crunch singularities; we study when such singularities can be cured by dynamical stabilization. We numerically solve the Einstein-scalar equations of motion in the case of a time-dependent doubletrace deformation and find that for sufficiently high frequencies the theory is dynamically stabilized and big crunches get screened by black hole horizons." C. Angelantonj, M. Cardella, S. Elitzur and E. Rabinovici, "Vacuum stability, string density of states and the Riemann zeta function" (preprint 12/2010) [abstract:] "We study the distribution of graded degrees of freedom in classically stable oriented closed string vacua and use the Rankin-Selberg transform to link it to the finite one-loop vacuum energy. In particular, we find that the spectrum of physical excitations not only must enjoy asymptotic supersymmetry but actually, at very large mass, bosonic and fermionic states must follow a universal oscillating pattern, whose frequencies are related to the zeros of the Riemann zeta-function. Moreover, the convergence rate of the overall number of the graded degrees of freedom to the value of the vacuum energy is determined by the Riemann hypothesis. We discuss also attempts to obtain constraints in the case of tachyon-free open-string theories." M.A. Cardella, "Error estimates in horocycle averages asymptotics: Challenges from string theory" (preprint 12/2010) [abstract:] "We study asymptotics and error estimates of long horocycle averages of automorphic functions with not-so-mild growing conditions at the cusp. For modular functions of rapid decay, it is a classical result that a certain value of the power in the error estimate is equivalent to the Riemann hypothesis. For modular functions of polynomial growth, we study how asymptotics are modified, by devising and unfolding trick with a two-dimensional lattice theta series. For modular functions of exponential growth, we gain insights on the horocycle average asymptotic, by translating this quantity into a states counting function in heterotic string theory. Consistency conditions for the heterotic string physical spectrum, lead to a special bound on the modular function exponential growth. String theory suggests that automorphic functions in the growing class below that bound should have convergent horocycle average." S.L. Cacciatori and M. Cardella, "Equidistribution rates, genus $g$ closed string amplitudes, and the Riemann Hypothesis" (preprint 07/2010) [abstract:] "Equidistribution of unipotent averages of $Sp(2g,\mathbb{R})$ automorphic forms is discussed by using analytic methods. We find that certain values of the equidistribution convergence rates are only compatible with the Riemann hypothesis. These results have applications in string theory by using modular functions appearing in recently proposed genus $g \ge 2$ closed string amplitudes. 
The potential mathematical advantages of obtaining a mapping of the equidistribution convergence rates into corresponding quantities appearing in string theory is also outlined." M. McGuigan, "Riemann Hypothesis and short distance fermionic Green's functions" (preprint 04/05) [abstract:] "We show that the Green's function of a two dimensional fermion with a modified dispersion relation and short distance parameter a is given by the Lerch zeta function. The Green's function is defined on a cylinder of radius R and we show that the condition R = a yields the Riemann zeta function as a quantum transition amplitude for the fermion. We formulate the Riemann hypothesis physically as a nonzero condition on the transition amplitude between two special states associated with the point of origin and a point half way around the cylinder each of which are fixed points of a $Z_2$ transformation. By studying partial sums we show that that the transition amplitude formulation is analogous to neutrino mixing in a low dimensional context. We also derive the thermal partition function of the fermionic theory and the thermal divergence at temperature 1/a. In an alternative harmonic oscillator formalism we discuss the relation to the fermionic description of two dimensional string theory and matrix models. Finally we derive various representations of the Green's function using energy momentum integrals, point particle path integrals, and string propagators." M. McGuigan, "Riemann Hypothesis, matrix/gravity correspondence and FZZT brane partition functions" (preprint 08/2007) [abstract:] "We investigate the physical interpretation of the Riemann zeta function as a FZZT brane partition function associated with a matrix/gravity correspondence. The Hilbert-Polya operator in this interpretation is the master matrix of the large N matrix model. Using a related function $\Xi(z)$ we develop an analogy between this function and the Airy function Ai(z) of the Gaussian matrix model. The analogy gives an intuitive physical reason why the zeros lie on a critical line. Using a Fourier transform of the $\Xi(z)$ function we identify a Kontsevich integrand. Generalizing this integrand to $n \times n$ matrices we develop a Kontsevich matrix model which describes n FZZT branes. The Kontsevich model associated with the $\Xi(z)$ function is given by a superposition of Liouville type matrix models that have been used to describe matrix model instantons." M. McGuigan, "Riemann Hypothesis and master matrix for FZZT brane partition functions" (preprint 05/2008) [abstract:] "We continue to investigate the physical interpretation of the Riemann zeta function as a FZZT brane partition function associated with a matrix/gravity correspondence begun in arxiv:0708.0645. We derive the master matrix of the $(2,1)$ minimal and $(3,1)$ minimal matrix model. We use it's characteristic polynomial to understand why the zeros of the FZZT partition function, which is the Airy function, lie on the real axis. We also introduce an iterative procedure that can describe the Riemann $\Xi$ function as a deformed minimal model whose deformation parameters are related to a Konsevich integrand. Finally we discuss the relation of our work to other approaches to the Riemann $\Xi$ function including expansion in terms of Meixner-Pollaczek polynomials and Riemann-Hilbert problems." Yang-Hui He, V. Jejjala, D. 
Minic, "From Veneziano to Riemann: A string theory statement of the Riemann Hypothesis" (preprint 01/2015)

[abstract:] "We discuss a precise relation between the Veneziano amplitude of string theory, rewritten in terms of ratios of the Riemann zeta function, and two elementary criteria for the Riemann hypothesis formulated in terms of integrals of the logarithm and the argument of the zeta function. We also discuss how the integral criterion based on the argument of the Riemann zeta function relates to the Li criterion for the Riemann hypothesis. We provide a new generalization of this integral criterion. Finally, we comment on the physical interpretation of our recasting of the Riemann hypothesis in terms of the Veneziano amplitude."

Yang-Hui He, V. Jejjala, D. Minic, "On the physics of the Riemann zeros" (Quantum Theory and Symmetries 6 conference proceedings)

[abstract:] "We discuss a formal derivation of an integral expression for the Li coefficients associated with the Riemann xi-function which, in particular, indicates that their positivity criterion is obeyed, whereby entailing the criticality of the non-trivial zeros. We conjecture the validity of this and related expressions without the need for the Riemann Hypothesis and discuss a physical interpretation of this result within the Hilbert-Polya approach. In this context we also outline a relation between string theory and the Riemann Hypothesis."

Jen-Chi Lee, Yi Yang, Sheng-Lan Ko, "Stirling number Identities and high energy string scatterings" (Invited talk presented by Jen-Chi Lee at "Tenth Workshop on QCD", Paris, France, June 7-12, 2009. To be published in the SLAC eConf series)

[abstract:] "We use Stirling number identities developed recently in number theory to show that ratios among high energy string scattering amplitudes in the fixed angle regime can be extracted from the Kummer function of the second kind. This result not only brings an interesting bridge between string theory and combinatoric number theory but also sheds light on the understanding of algebraic structure of high energy stringy symmetry."

P. Ranjan Giri and R.K. Bhaduri, "Physical interpretation for Riemann zeros from black hole physics" (preprint 05/2009)

[abstract:] "According to a conjecture attributed to Polya and Hilbert, there is a self-adjoint operator whose eigenvalues are the nontrivial zeros of the Riemann zeta function. We show that the near-horizon dynamics of a massive scalar field in the Schwarzschild black hole spacetime, under a reasonable boundary condition, gives rise to energy eigenvalues that coincide with the Riemann zeros. In achieving this result, we exploit the Bekenstein conjecture of black hole area quantization, and argue that it is responsible for the breaking of the continuous scale symmetry of the near horizon dynamics into a discrete one."

R. Nally, "Exact half-BPS black hole entropies in CHL models from Rademacher series" (preprint 03/2018)

[abstract:] "The microscopic spectrum of half-BPS excitations in toroidally compactified heterotic string theory has been computed exactly through the use of results from analytic number theory. Recently, similar quantities have been understood macroscopically by evaluating the gravitational path integral on the M-theory lift of the AdS2 near-horizon geometry of the corresponding black hole. In this paper, we generalize these results to a subset of the CHL models, which include the standard compactification of IIA on $K3 \times T^2$ as a special case.
We begin by developing a Rademacher-like expansion for the Fourier coefficients of the partition functions for these theories, which are modular forms for congruence subgroups. We then interpret these results in a macroscopic setting by evaluating the path integral for the reduced-rank $N=4$ supergravities described by these CFTs."

[abstract:] "In this article we present the post-Newtonian (pN) coefficients of the energy flux (and angular momentum flux) at infinity and event horizon for a particle in circular, equatorial orbits about a Kerr black hole (of mass $M$ and spin-parameter $a$) up to 20-pN order. When a pN term is not a polynomial in $a/M$ and includes irrational functions (like polygamma functions), it is written as a power series of $a/M$. This is achieved by calculating the fluxes numerically with an accuracy greater than 1 part in $10^{600}$. Such high accuracy allows us to extract analytical values of pN coefficients that are linear combinations of transcendentals like the Euler constant, logarithms of prime numbers and powers of $\pi$. We also present the 20-pN expansion (spin-independent pN expansion) of the ingoing energy flux at the event horizon for a particle in circular orbit about a Schwarzschild black hole."

E. Belbruno, "On the regularizability of the Big Bang singularity" (preprint 05/2012)

[abstract:] "The singularity for the big bang state can be represented using the generalized anisotropic Friedmann equation, resulting in a system of differential equations in a central force field. We study the regularizability of this singularity as a function of a parameter, the state variable, $w$. We prove that for $w >1$ it is regularizable only for $w$ satisfying relative prime number conditions, and for $w \leq 1$ it can always be regularized. This is done by using a McGehee transformation, usually applied in the three and four-body problems. This transformation blows up the singularity into an invariant manifold. This has implications on the idea that our universe could have resulted from a big crunch of a previous universe, assuming a Friedmann modeling."

J. Wang, "The zeros and poles of the partition function" (preprint 03/2011)

[abstract:] "In this paper, we consider the physical meaning of the zeros and poles of partition function. We consider three different systems, including the harmonic oscillator in one dimension, Riemann zeta function and the quasinormal modes of black hole."

J. Manuel Garcia-Islas, "Black hole entropy in loop quantum gravity and number theory" (preprint 07/09)

[abstract:] "We show that counting different configurations that give rise to black hole entropy in loop quantum gravity is related to partitions in number theory."

A. Dabholkar, J. Gomes and S. Murthy, "Nonperturbative black hole entropy and Kloosterman sums" (preprint 03/2014)

[abstract:] "Non-perturbative quantum corrections to supersymmetric black hole entropy often involve nontrivial number-theoretic phases called Kloosterman sums. We show how these sums can be obtained naturally from the functional integral of supergravity in asymptotically AdS_2 space for a class of black holes. They are essentially topological in origin and correspond to charge-dependent phases arising from the various gauge and gravitational Chern–Simons terms and boundary Wilson lines evaluated on Dehn-filled solid 2-torus.
These corrections are essential to obtain an integer from supergravity in agreement with the quantum degeneracies, and reveal an intriguing connection between topology, number theory, and quantum gravity. We give an assessment of the current understanding of quantum entropy of black holes." A. Corichi, "Black holes and entropy in loop quantum gravity: An overview" (preprint 01/2009) [abstract:] "Black holes in equilibrium and the counting of their entropy within Loop Quantum Gravity are reviewed. In particular, we focus on the conceptual setting of the formalism, briefly summarizing the main results of the classical formalism and its quantization. We then focus on recent results for small, Planck scale, black holes, where new structures have been shown to arise, in particular an effective quantization of the entropy. We discuss recent results that employ in a very effective manner results from number theory, providing a complete solution to the counting of black hole entropy. We end with some comments on other approaches that are motivated by loop quantum gravity." M. Axenides, E. Floratos and S. Nicolis, "Chaotic information processing by extremal black holes" (preprint 04/2015) [abstract:] "We review an explicit regularization of the AdS2/CFT1 correspondence, that preserves all isometries of bulk and boundary degrees of freedom. This scheme is useful to characterize the space of the unitary evolution operators that describe the dynamics of the microstates of extremal black holes in four spacetime dimensions. Using techniques from algebraic number theory to evaluate the transition amplitudes, we remark that the regularization scheme expresses the fast quantum computation capability of black holes as well as its chaotic nature." H.C. Rosu and M. Planat, "On arithmetic detection of grey pulses with application to Hawking radiation" (preprint 05/2002) [abstract:] "Micron-sized black holes do not necessarily have a constant horizon temperature distribution. The black hole remote-sensing problem means to find out the surface' temperature distribution of a small black hole from the spectral measurement of its (Hawking) grey pulse. This problem has been previously considered by Rosu, who used Chen's modified Möbius inverse transform. Here, we hint on a Ramanujan generalization of Chen's modified Möbius inverse transform that may be considered as a special wavelet processing of the remote-sensed grey signal coming from a black hole or any other distant grey source." H.C. Rosu, "Quantum Hamiltonians and prime numbers", Modern Physics Letters A 18 (2003) [abstract:] "A short review of Schroedinger hamiltonians for which the spectral problem has been related in the literature to the distribution of the prime numbers is presented here. We notice a possible connection between prime numbers and centrifugal inversions in black holes and suggest that this remarkable link could be directly studied within trapped Bose-Einstein condensates. In addition, when referring to the factorizing operators of Pitkanen and Castro and collaborators, we perform a mathematical extension allowing a more standard supersymmetric approach." This very welcome, thorough review article discusses and compares the various inter-related work of Bhaduri-Khare-Law, Berry-Keating, Aneva, Castro, et.al., Pitkanen, Khuri, Joffily, Wu-Sprung, Okubo, Mussardo, Boos-Korepin, Crehan and others. A. 
Sugamoto, "Factorization of number into prime numbers viewed as decay of particle into elementary particles conserving energy" (preprint 10/2009) [abstract:] "Number theory is considered, by proposing quantum mechanical models and string-like models at zero and finite temperatures, where the factorization of number into prime numbers is viewed as the decay of particle into elementary particles conserving energy. In these models, energy of a particle labeled by an integer $n$ is assumed or derived to being proportional to $\ln n$. The one-loop vacuum amplitudes, the free energies and the partition functions at finite temperature of the string-like models are estimated and compared with the zeta functions. The $SL(2, {\bf Z})$ modular symmetry, being manifest in the free energies is broken down to the additive symmetry of integers, ${\bf Z}_{+}$, after interactions are turned on. In the dynamical model existing behind the zeta function, prepared are the fields labeled by prime numbers. On the other hand the fields in our models are labeled, not by prime numbers but by integers. Nevertheless, we can understand whether a number is prime or not prime by the decay rate, namely by the corresponding particle can decay or can not decay through interactions conserving energy. Among the models proposed, the supersymmetric string-like model has the merit of that the zero point energies are cancelled and the energy levels may be stable against radiative corrections." M. Cvetič, I. García-Etxebarria, J. Halverson, "On the computation of non-perturbative effective potentials in the string theory landscape -- IIB/F-theory perspective" (preprint 09/2010) [abstract:] "We discuss a number of issues arising when computing non-perturbative effects systematically across the string theory landscape. In particular, we cast the study of fairly generic physical properties into the language of computability/number theory and show that this amounts to solving systems of diophantine equations. In analogy to the negative solution to Hilbert's 10th problem, we argue that in such systematic studies there may be no algorithm by which one can determine all physical effects. We take large volume type IIB compactifications as an example, with the physical property of interest being the low-energy non-perturbative F-terms of a generic compactification. A similar analysis is expected to hold for other kinds of string vacua, and we discuss in particular the extension of our ideas to F-theory. While these results imply that it may not be possible to answer systematically certain physical questions about generic type IIB compactifications, we identify particular Calabi-Yau manifolds in which the diophantine equations become linear, and thus can be systematically solved. As part of the study of the required systematics of F-terms, we develop technology for computing Z_2 equivariant line bundle cohomology on toric varieties, which determines the presence of particular instanton zero modes via the Koszul complex. This is of general interest for realistic IIB model building on complete intersections in toric ambient spaces." S. Davis, "Spin structures on Riemann surfaces and the perfect numbers" (preprint 12/1998) "The equality between the number of odd spin structures on a Riemann surface of genus $g$, with $2^g - 1$ being a Mersenne prime, and the even perfect numbers, is an indication that the action of the modular group on the set of spin structures has special properties related to the sequence of perfect numbers. 
A method for determining whether Mersenne numbers are primes is developed by using a geometrical representation of these numbers. The connection between the non-existence of finite odd perfect numbers and the irrationality of the square root of twice the product of a sequence of repunits is investigated, and it is demonstrated, for an arbitrary number of prime factors, that the products of the corresponding repunits will not equal twice the square of a rational number."

Related work by S. Davis: "A method for generating Mersenne primes and the extent of the sequence of even perfect numbers" (preprint, again involving spin structures and dynamical systems) "A rationality condition for the existence of odd perfect numbers" (preprint 11/2000) "A proof of the odd perfect number conjecture" (preprint 01/2004)

P. Frampton and T. Kephart, "Mersenne primes, polygonal anomalies and string theory classification"

"It is pointed out that the Mersenne primes $M_p = 2^p - 1$ and associated perfect numbers $P_p = 2^{p-1}M_p$ play a significant role in string theory; this observation may suggest a classification of consistent string theories."

P.H. Frampton and Y. Okada, "p-Adic string N-point function", Phys. Rev. Lett. 60 (1988) 484–486

N. Efremov and N.V. Mitskievich, "A $T_0$-discrete universe model with five low-energy fundamental interactions and the coupling constants hierarchy" (preprint, 09/03)

[abstract:] "A quantum model of universe is constructed in which values of dimensionless coupling constants of the fundamental interactions (including the cosmological constant) are determined via certain topological invariants of manifolds forming finite ensembles of 3D Seifert fibrations. The characteristic values of the coupling constants are explicitly calculated as the set of rational numbers (up to the factor 2) on the basis of a hypothesis that these values are proportional to the mean relative fluctuations of discrete volumes of manifolds in these ensembles. The discrete volumes are calculated using the standard Alexandroff procedure of constructing $T_0$-discrete spaces realized as nerves corresponding to characteristic canonical triangulations which are compatible with the Milnor representation of Seifert fibered homology spheres being the building material of all used 3D manifolds. Moreover, the determination of all involved homology spheres is based on the first nine prime numbers ($p_1 = 2, \ldots, p_9 = 23$). The obtained hierarchy of coupling constants at the present evolution stage of universe well reproduces the actual hierarchy of the experimentally observed dimensionless low-energy coupling constants."

D.B. Grunberg, "Integrality of open instanton numbers" (preprint 05/03)

[abstract:] "We prove the integrality of the open instanton numbers in two examples of counting holomorphic disks on local Calabi-Yau threefolds: the resolved conifold and the degenerate P x P. Given the B-model superpotential, we extract by hand all Gromov-Witten invariants in the expansion of the A-model superpotential. The proof of their integrality relies on enticing congruences of binomial coefficients modulo powers of a prime. We also derive an expression for the factorial $(p^k - 1)!$ modulo powers of the prime p. We generalise two theorems of elementary number theory, by Wolstenholme and by Wilson."

P. D'Eath and G. Esposito, "The effect of boundaries in one-loop quantum cosmology"

P. D'Eath and G. Esposito, "Local Boundary Conditions for the Dirac Operator and One-Loop Quantum Cosmology"

P. D'Eath and G.
Esposito, "Spectral boundary conditions in one-loop quantum cosmology" "For fermionic fields on a compact Riemannian manifold with boundary one has a choice between local and non-local (spectral) boundary conditions. The one-loop prefactor in the Hartle-Hawking amplitude in quantum cosmology can then be studied using the generalized Riemann zeta-function formed from the squared eigenvalues of the four-dimensional fermionic operators." A.P. de Almeida, F.T. Brandt and J. Frenkel, "Thermal matter and radiation in a gravitational field" "We study the one-loop contributions of matter and radiation to the gravitational polarization tensor at finite temperatures. Using the analytically continued imaginary-time formalism, the contribution of matter is explicitly given to next-to-leading T2 order. We obtain an exact form for the contribution of radiation fields, expressed in terms of generalized Riemann zeta functions." V. V. Nesterenko and I. G. Pirozhenko, "Justification of the zeta function renormalization in rigid string model" "A consistent procedure for regularization of divergences and for the subsequent renormalization of the string tension is proposed in the framework of the one-loop calculation of the interquark potential generated by the Polyakov-Kleinert string. In this way, a justification of the formal treatment of divergences by analytic continuation of the Riemann and Epstein-Hurwitz zeta functions is given. A spectral representation for the renormalized string energy at zero temperature is derived, which enables one to find the Casimir energy in this string model at nonzero temperature very easy." A. Edery, "Multidimensional cut-off technique, odd-dimensional Epstein zeta functions and Casimir energy of massless scalar fields", submitted to J. Physics A [abstract:] "Quantum fluctuations of massless scalar fields represented by quantum fluctuations of the quasiparticle vacuum in a zero-temperature dilute Bose-Einstein condensate may well provide the first experimental arena for measuring the Casimir force of a field other than the electromagnetic field. This would constitute a real Casimir force measurement - due to quantum fluctuations - in contrast to thermal fluctuation effects. We develop a multidimensional cut-off technique for calculating the Casimir energy of massless scalar fields in d-dimensional rectangular spaces with q large dimensions and d-q dimensions of length L and generalize the technique to arbitrary lengths. We explicitly evaluate the multidimensional remainder and express it in a form that converges exponentially fast. Together with the compact analytical formulas we derive, the numerical results are exact and easy to obtain. Most importantly, we show that the division between analytical and remainder is not arbitrary but has a natural physical interpretation. The analytical part can be viewed as the sum of individual parallel plate energies and the remainder as an interaction energy. In a separate procedure, via results from number theory, we express some odd-dimensional homogeneous Epstein zeta functions as products of one-dimensional sums plus a tiny remainder and calculate from them the Casimir energy via zeta function regularization." V. Di Clemente, S. F. King and D.A.J. Rayner, "Supersymmetry and electroweak breaking with large and small extra dimensions", Nucl. Phys. 
B 617 (2001) 71–100 [abstract:] "We consider the problem of supersymmetry and electroweak breaking in a 5d theory compactified on an $S^{1}/Z_{2}$ orbifold, where the extra dimension may be large or small. We consider the case of a supersymmetry breaking 4d brane located at one of the orbifold fixed points with the Standard Model gauge sector, third family and Higgs fields in the 5d bulk, and the first two families on a parallel 4d matter brane located at the other fixed point. We compute the Kaluza-Klein mass spectrum in this theory using a matrix technique which allows us to interpolate between large and small extra dimensions. We also consider the problem of electroweak symmetry breaking in this theory and localize the Yukawa couplings on the 4d matter brane spatially separated from the brane where supersymmetry is broken. We calculate the 1-loop effective potential using a zeta-function regularization technique, and find that the dominant top and stop contributions are separately finite. Using this result we find consistent electroweak symmetry breaking for a compactification scale {$1/R \approx 830$ GeV} and a lightest Higgs boson mass $m_{h} \approx 170$ GeV." K. Roland, "Two- and three-loop amplitudes in covariant loop calculus", Nuclear Physics B 313 (1989) 432–446 [abstract:] "We study two- and three-loop vacuum amplitudes for the closed bosonic string. We compare two sets of expressions for the corresponding density on moduli space. One is based on the covariant reggeon loop calculus (where modular invariance is not manifest). The other is based on analytic geometry. We want to prove identity between the two sets of expressions. Quite apart from demonstrating modular invariance of the reggeon results, this would in itself be a remarkable mathematical feature. Identity is established to 'high' order in some moduli and exactly in others. The expansions reveal an essentially number-theoretic structure. Agreement is found only by exploiting the connection between the four Jacobi theta-functions and number theory." J. L. Petersen, K. O. Roland and J. R. Sidenius, "Modular invariance and covariant loop calculus", Physics Letters B 205 (1988) 262–266 [abstract:] "The covariant loop calculus provides an efficient technique for computing explicit expressions for the density on moduli space corresponding to arbitrary (bosonic string) loop diagrams. Since modular invariance is not manifest, however, we carry out a detailed comparison with known explicit two- and three-loop results derived using analytic geometry (one loop is known to be okay). We establish identity to 'high' order in some moduli and exactly in others. Agreement is found as a result of various non-trivial cancellations, in part related to number theory. We feel our results provide very strong support for the correctness of the covariant loop calculus approach." G. Heinrich, T. Huber, D. Maitre, "Master integrals for fermionic contributions to massless three-loop form factors" (preprint 12/07) [abstract:] "In this letter we continue the calculation of master integrals for massless three-loop form factors by giving analytical results for those diagrams which are relevant for the fermionic contributions proportional to N_F^2, N_F*N, and N_F/N. Working in dimensional regularisation, we express one of the diagrams in a closed form which is exact to all orders in epsilon, containing Gamma-functions and hypergeometric functions of unit argument. 
In all other cases we derive multiple Mellin-Barnes representations from which the coefficients of the Laurent expansion in epsilon are extracted in an analytical form. To obtain the finite part of the three-loop quark and gluon form factors, all coefficients through transcendentality six in the Riemann zeta-function have to be included." G. Heinrich, T. Huber, D. A. Kosower, V. A. Smirnov, "Nine-propagator master integrals for massless three-loop form factors" (preprint 02/2009) [abstract:] "We complete the calculation of master integrals for massless three-loop form factors by computing the previously-unknown three diagrams with nine propagators in dimensional regularisation. Each of the integrals yields a six-fold Mellin-Barnes representation which we use to compute the coefficients of the Laurent expansion in epsilon. Using Riemann zeta functions of up to weight six, we give fully analytic results for one integral; for a second, analytic results for all but the finite term; for the third, analytic results for all but the last two coefficients in the Laurent expansion. The remaining coefficients are given numerically to sufficiently high accuracy for phenomenological applications." L.J. Dixon, J.M. Drummond, M. von Hippel and J. Pennington, "Hexagon functions and the three-loop remainder function" (preprint 08/2013) [abstract:] "We present the three-loop remainder function, which describes the scattering of six gluons in the maximally-helicity-violating configuration in planar $N=4$ super-Yang-Mills theory, as a function of the three dual conformal cross ratios. The result can be expressed in terms of multiple Goncharov polylogarithms. We also employ a more restricted class of "hexagon functions" which have the correct branch cuts and certain other restrictions on their symbols. We classify all the hexagon functions through transcendental weight five, using the coproduct for their Hopf algebra iteratively, which amounts to a set of first-order differential equations. The three-loop remainder function is a particular weight-six hexagon function, whose symbol was determined previously. The differential equations can be integrated numerically for generic values of the cross ratios, or analytically in certain kinematics limits, including the near-collinear and multi-Regge limits. These limits allow us to impose constraints from the operator product expansion and multi-Regge factorization directly at the function level, and thereby to fix uniquely a set of Riemann-zeta-valued constants that could not be fixed at the level of the symbol. The near-collinear limits agree precisely with recent predictions by Basso, Sever and Vieira based on integrability. The multi-Regge limits agree with the factorization formula of Fadin and Lipatov, and determine three constants entering the impact factor at this order. We plot the three-loop remainder function for various slices of the Euclidean region of positive cross ratios, and compare it to the two-loop one. For large ranges of the cross ratios, the ratio of the three-loop to the two-loop remainder function is relatively constant, and close to $-7$." M.B. Green, J.G. Russo, P. Vanhove, "Low energy expansion of the four-particle genus-one amplitude in type II superstring theory" (preprint 01/2008) [abstract:] "A diagrammatic expansion of coefficients in the low-momentum expansion of the genus-one four-particle amplitude in type II superstring theory is developed. 
This is applied to determine coefficients up to order $s^6R^4$ (where $s$ is a Mandelstam invariant and $R^4$ the linearized super-curvature), and partial results are obtained beyond that order. This involves integrating powers of the scalar propagator on a toroidal world-sheet, as well as integrating over the modulus of the torus. At any given order in $s$ the coefficients of these terms are given by rational numbers multiplying multiple zeta values (or Euler-Zagier sums) that, up to the order studied here, reduce to products of Riemann zeta values. We are careful to disentangle the analytic pieces from logarithmic threshold terms, which involves a discussion of the conditions imposed by unitarity. We further consider the compactification of the amplitude on a circle of radius $r$, which results in a plethora of terms that are power-behaved in $r$. These coefficients provide boundary data' that must be matched by any non-perturbative expression for the low-energy expansion of the four-graviton amplitude. The paper includes an appendix by Don Zagier." M.B. Green, S.D. Miller, J.G. Russo and P. Vanhove, "Eisenstein series for higher-rank groups and string theory amplitudes" (preprint 11/2011) [abstract:] "Scattering amplitudes of superstring theory are strongly constrained by the requirement that they be invariant under dualities generated by discrete subgroups, $E_n(Z)$, of simply-laced Lie groups in the $E_n$ series ($n\leq 8$). In particular, expanding the four-supergraviton amplitude at low energy gives a series of higher derivative corrections to Einstein's theory, with coefficients that are automorphic functions with a rich dependence on the moduli. Boundary conditions supplied by string and supergravity perturbation theory, together with a chain of relations between successive groups in the $E_n$ series, constrain the constant terms of these coefficients in three distinct parabolic subgroups. Using this information we are able to determine the expressions for the first two higher derivative interactions (which are BPS-protected) in terms of specific Eisenstein series. Further, we determine key features of the coefficient of the third term in the low energy expansion of the four-supergraviton amplitude (which is also BPS-protected) in the $E_8$ case. This is an automorphic function that satisfies an inhomogeneous Laplace equation and has constant terms in certain parabolic subgroups that contain information about all the preceding terms." E. D'Hoker and M.B. Green, "Zhang–Kawazumi invariants and superstring amplitudes" (preprint 08/2013) [abstract:] "The two-loop Type II superstring correction to supergravity at order $D^6 R^4$ is derived from the genus-two superstring 4-point function of massless NS-NS states. We show that this correction is proportional to the integral over moduli space of a modular invariant introduced recently by Zhang and Kawazumi in number theory and related to the Faltings delta-invariant studied for genus-two by Bost. Furthermore, the structure of two-loop superstring corrections at higher order in the low energy expansion leads to higher invariants, which naturally generalize Zhang-Kawazumi invariant at genus two. An explicit formula is derived for the unique higher invariant associated with order $D^8 R^4$. In an attempt to compare the prediction for the $D^6 R^4$ correction from superstring perturbation theory with the one produced by S-duality and supersymmetry of Type IIB, various reformulations of the invariant are given. 
This comparison with string theory leads to a predicted value for the integral of the invariant over the moduli space of genus-two surfaces."

F. Brown, "Periods and Feynman amplitudes" (preprint 12/2015)

[abstract:] "Feynman amplitudes in perturbation theory form the basis for most predictions in particle collider experiments. The mathematical quantities which occur as amplitudes include values of the Riemann zeta function and relate to fundamental objects in number theory and algebraic geometry. This talk reviews some of the recent developments in this field, and explains how new ideas from algebraic geometry have led to much progress in our understanding of amplitudes. In particular, the idea that certain transcendental numbers, such as $\pi$, can be viewed as a representation of a group, provides a powerful framework to study amplitudes which reveals many hidden structures."

P. Fleig, H.P.A. Gustafsson, A. Kleinschmidt and D. Persson, "Eisenstein series and automorphic representations" (preprint 11/2015)

[abstract:] "We provide an introduction to the theory of Eisenstein series and automorphic forms on real simple Lie groups $G$, emphasising the role of representation theory. It is useful to take a slightly wider view and define all objects over the (rational) adeles $A$, thereby also paving the way for connections to number theory, representation theory and the Langlands program. Most of the results we present are already scattered throughout the mathematics literature but our exposition collects them together and is driven by examples. Many interesting aspects of these functions are hidden in their Fourier coefficients with respect to unipotent subgroups and a large part of our focus is to explain and derive general theorems on these Fourier expansions. Specifically, we give complete proofs of Langlands' constant term formula for Eisenstein series on adelic groups $G(A)$ as well as the Casselman--Shalika formula for the $p$-adic spherical Whittaker vector associated to unramified automorphic representations of $G(Q_p)$. Somewhat surprisingly, all these results have natural interpretations as encoding physical effects in string theory. We therefore introduce also some basic concepts of string theory, aimed toward mathematicians, emphasising the role of automorphic forms. In addition, we explain how the classical theory of Hecke operators fits into the modern theory of automorphic representations of adelic groups, thereby providing a connection with some key elements in the Langlands program, such as the Langlands dual group ${}^L G$ and automorphic $L$-functions. Our treatise concludes with a detailed list of interesting open questions and pointers to additional topics where automorphic forms occur in string theory."

V. Ravindran, J. Smith and W.L. van Neerven, "Two-loop corrections to Higgs boson production" (Report-no: YITP-SB-04-46, 08/04)

[abstract:] "In this paper we present the two-loop vertex corrections to scalar and pseudo-scalar Higgs boson production for general colour factors for the gauge group SU(N). We derive a general formula for the vertex correction which holds for conserved and non conserved operators. For the conserved operator we take the electromagnetic vertex correction as an example whereas for the non conserved operators we take the two vertex corrections above. Our observations for the structure of the pole terms $1/\epsilon^4$, $1/\epsilon^3$ and $1/\epsilon^2$ in two loop order are the same as made earlier in the literature for electromagnetism.
For the single pole terms $1/\epsilon$ we can predict the terms containing the Riemann zeta functions $\zeta(2)$ and $\zeta(3)$."

A. Schwarz, V. Vologodsky and J. Walcher, "Integrality of framing and geometric origin of 2-functions" (preprint 02/2017)

[abstract:] "We say that a formal power series $\sum a_nz^n$ with rational coefficients is a 2-function if the numerator of the fraction $a_{n/p} - p^2a_n$ is divisible by $p^2$ for every prime number $p$. One can prove that 2-functions with rational coefficients appear as building block of BPS generating functions in topological string theory. Using the Frobenius map we define 2-functions with coefficients in algebraic number fields. We establish two results pertaining to these functions. First, we show that the class of 2-functions is closed under the so-called framing operation (related to compositional inverse of power series). Second, we show that 2-functions arise naturally in geometry as $q$-expansion of the truncated normal function associated with an algebraic cycle extending a degenerating family of Calabi-Yau 3-folds."

S.K. Ashok, F. Cachazo, E. Dell'Aquila, "Children's drawings from Seiberg-Witten curves", Communications in Number Theory and Physics 1 no. 2 (2007) 237–305

[abstract:] "We consider $N=2$ supersymmetric gauge theories perturbed by tree level superpotential terms near isolated singular points in the Coulomb moduli space. We identify the Seiberg-Witten curve at these points with polynomial equations used to construct what Grothendieck called "dessins d'enfants" or "children's drawings" on the Riemann sphere. From a mathematical point of view, the dessins are important because the absolute Galois group $Gal(\bar{Q}/Q)$ acts faithfully on them. We argue that the relation between the dessins and Seiberg-Witten theory is useful because gauge theory criteria used to distinguish branches of $N=1$ vacua can lead to mathematical invariants that help to distinguish dessins belonging to different Galois orbits. For instance, we show that the confinement index defined in hep-th/0301006 is a Galois invariant. We further make some conjectures on the relation between Grothendieck's programme of classifying dessins into Galois orbits and the physics problem of classifying phases of $N=1$ gauge theories."

S. Bose, J. Gundry and Y.-H. He, "Gauge theories and dessins d'enfants: Beyond the torus" (preprint /2014)

[abstract:] "Dessin d'enfants on elliptic curves are a powerful way of encoding doubly-periodic brane tilings, and thus, of four-dimensional supersymmetric gauge theories whose vacuum moduli space is toric, providing an interesting interplay between physics, geometry, combinatorics and number theory. We discuss and provide a partial classification of the situation in genera other than one by computing explicit Belyi pairs associated to the gauge theories. Important also is the role of the Igusa and Shioda invariants that generalise the elliptic $j$-invariant."

G. Moore, "Arithmetic and attractors" (preprint 07/03)

[abstract:] "We study relations between some topics in number theory and supersymmetric black holes. These relations are based on the "attractor mechanism" of N=2 supergravity. In IIB string compactification this mechanism singles out certain "attractor varieties". We show that these attractor varieties are constructed from products of elliptic curves with complex multiplication for N=4 and N=8 compactifications. The heterotic dual theories are related to rational conformal field theories.
In the case of N=4 theories U-duality inequivalent backgrounds with the same horizon area are counted by the class number of a quadratic imaginary field. The attractor varieties are defined over fields closely related to class fields of the quadratic imaginary field. We discuss some extensions to more general Calabi-Yau compactifications and explore further connections to arithmetic including connections to Kronecker's Jugendtraum and the theory of modular heights. The paper also includes a short review of the attractor mechanism. A much shorter version of the paper summarizing the main points is the companion note entitled "Attractors and Arithmetic"" N. Benjamin, S. Kachru, K. Ono and L. Rolen, "Black holes and class groups" (preprint 07/2018) [abstract:] "The theory of quadratic forms and class numbers has connections to many classical problems in number theory. Recently, class numbers have appeared in the study of black holes in string theory. We describe this connection and raise questions in the hope of inspiring new collaborations between number theorists and physicists." Yu. Manin and M. Marcolli, "Holography principle and arithmetic of algebraic curves", Adv. Theor. Math. Phys. 5 (2001), no. 3, 617–650. [abstract:] "According to the holography principle (due to G. 't Hooft, L. Susskind, J. Maldacena, et al.), quantum gravity and string theory on certain manifolds with boundary can be studied in terms of a conformal field theory on the boundary. Only a few mathematically exact results corroborating this exciting program are known. In this paper we interpret from this perspective several constructions which arose initially in the arithmetic geometry of algebraic curves. We show that the relation between hyperbolic geometry and Arakelov geometry at arithmetic infinity involves exactly the same geometric data as the Euclidean AdS3 holography of black holes. Moreover, in the case of Euclidean AdS2 holography, we present some results on bulk/boundary correspondence where the boundary is a non-commutative space." Yu. Manin, "Reflections on arithmetical physics", in Conformal Invariance and String Theory (Academic, 1989) 293–303 M. Marcolli's survey article "Number Theory in Physics" contains some material on string theory. C. Hattori, M. Matsunaga, T. Matsuoka, K. Nakanishi, "Galois group on elliptic curves and flavor symmetry" (preprint 10/07) [abstract:] "Putting emphasis on the relation between rational conformal field theory (RCFT) and algebraic number theory, we consider a brane configuration in which the D-brane intersection is an elliptic curve corresponding to RCFT. A new approach to the generation structure of fermions is proposed in which the flavor symmetry including the R-parity has its origin in the Galois group on elliptic curves with complex multiplication (CM). We study the possible types of the Galois group derived from the torsion points of the elliptic curve with CM. A phenomenologically viable example of the Galois group is presented, in which the characteristic texture of fermion masses and mixings is reproduced and the mixed-anomaly conditions are satisfied." C. Castro, "On the Riemann Hypothesis and tachyons in dual string scattering amplitudes", International Journal of Geometric Methods in Modern Physics 3 no. 
2 (2006) 187–199 [abstract:] "It is the purpose of this work to pursue a novel physical interpretation of the nontrivial Riemann zeta zeros and prove why the location of these zeros $z_n = 1/2+i\lambda_n$ corresponds physically to tachyonic-resonances/tachyonic-condensates, originating from the scattering of two on-shell tachyons in bosonic string theory. Namely, we prove that if there were nontrivial zeta zeros (violating the Riemann hypothesis) outside the critical line Real $z = 1/2$ (but inside the critical strip), these putative zeros do not correspond to any poles of the bosonic open string scattering (Veneziano) amplitude $A(s,t,u)$. The physical relevance of tachyonic-resonances/tachyonic-condensates in bosonic string theory, establishes an important connection between string theory and the Riemann Hypothesis. In addition, one has also a geometrical interpretation of the zeta zeros in the critical line in terms of very special (degenerate) triangular configurations in the upper-part of the complex plane." "Supersymmetry, p-adic stochastic dynamics, Brownian motion, Fokker-Planck equation, Langevin equation, prime number random distribution, random matrices, p-adic fractal strings, the adelic condition, etc...are all deeply interconnected in this paper." C. Bachas and I. Brunner, "Fusion of conformal interfaces" (preprint 12/2007) [abstract:] "We study the fusion of conformal interfaces in the c=1 conformal field theory. We uncover an elegant structure reminiscent of that of black holes in supersymmetric theories. The role of the BPS black holes is played by topological interfaces, which (a) minimize the entropy function, (b) fix through an attractor mechanism one or both of the bulk radii, and (c) are (marginally) stable under splitting. One significant difference is that the conserved charges are logarithms of natural numbers, rather than vectors in a charge lattice, as for BPS states. Besides potential applications to condensed-matter physics and number theory, these results point to the existence of large solution-generating algebras in string theory." "Here we present the results of applying the generalized Riemann zeta-function regularization method to the gravitational radiation reaction problem. We analyze in detail the headon collision of two nonspinning black holes with extreme mass ratio. The resulting reaction force on the smaller hole is repulsive. We discuss the possible extensions of these method to generic orbits and spinning black holes. The determination of corrected trajectories allows to add second perturbative corrections with the consequent increase in the accuracy of computed waveforms." S. Benvenuti, B. Feng, A. Hanany, Yang-Hui He, "BPS operators in gauge theories: Quivers, syzygies and plethystics" (preprint 08/2006) [abstract:] "We develop a systematic and efficient method of counting single-trace and multi-trace BPS operators for world-volume gauge theories of $N$ $D$-brane probes, for both $N \rightarrow \infty$ and finite $N$. The techniques are applicable to generic singularities, orbifold, toric, non-toric, et cetera, even to geometries whose precise field theory duals are not yet known. Mathematically, fascinating and intricate inter-relations between gauge theory, algebraic geometry, combinatorics and number theory exhibit themselves in the form of plethystics and syzygies." Y.-H. 
He, "Graph zeta function and gauge theories" (preprint 02/2011) [abstract:] "Along the recently trodden path of studying certain number theoretic properties of gauge theories, especially supersymmetric theories whose vacuum manifolds are non-trivial, we investigate Ihara's Graph Zeta Function for large classes of quiver theories and periodic tilings by bi-partite graphs. In particular, we examine issues such as the spectra of the adjacency and whether the gauge theory satisfies the strong and weak versions of the graph theoretical analogue of the Riemann Hypothesis." [abstract:] "A Selberg zeta function is attached to the three-dimensional BTZ black hole, and a trace formula is developed for a general class of test functions. The trace formula differs from those of more standard use in physics in that the black hole has a fundamental domain of infinite hyperbolic volume. Various thermodynamic quantities associated with the black hole are conveniently expressed in terms of the zeta function." "In the recent publication (Journal of Geometry and Physics, 33 (2000) 23-102) we demonstrated that dynamics of 2+1 gravity can be described in terms of train tracks. Train tracks were introduced by Thurston in connection with description of dynamics of surface automorphisms. In this work we provide an example of utilization of general formalism developed earlier. The complete exact solution of the model problem describing equilibrium dynamics of train tracks on the punctured torus is obtained. Being guided by similarities between the dynamics of 2d liquid crystals and 2+1 gravity the partition function for gravity is mapped into that for the Farey spin chain. The Farey spin chain partition function, fortunately, is known exactly and has been thoroughly investigated recently. Accordingly, the transition between the pseudo-Anosov and the periodic dynamic regime (in Thurston's terminology) in the case of gravity is being reinterpreted in terms of phase transitions in the Farey spin chain whose partition function is just a ratio of two Riemann zeta functions. The mapping into the spin chain is facilitated by recognition of a special role of the Alexander polynomial for knots/links in study of dynamics of self homeomorphisms of surfaces. At the end of paper, using some facts from the theory of arithmetic hyperbolic 3-manifolds (initiated by Bianchi in 1892), we develop systematic extension of the obtained results to noncompact Riemannian surfaces of higher genus. Some of the obtained results are also useful for 3+1 gravity. In particular, using the theorem of Margulis, we provide new reasons for the black hole existence in the Universe: black holes make our Universe arithmetic. That is the discrete Lie groups of motion are arithmetic." "We study the fluctuation modes for lump solutions of the tachyon effective potential in p-adic open string theory. We find a discrete spectrum with equally spaced mass squared levels. We also find that the interactions derived from this field theory are consistent with p-adic string amplitudes for excited string J.A. Nogueira, A. Maia, Jr., "Demonstration of how the zeta function method for effective potential removes the divergences" [abstract:] "The calculation of the minimum of the effective potential using the zeta function method is extremely advantagous, because the zeta function is regular at s = 0 and we gain immediately a finite result for the effective potential without the necessity of subtraction of any pole or the addition of infinite counter-terms. 
The purpose of this paper is to explicitly point out how the cancellation of the divergences occurs and that the zeta function method implicitly uses the same procedure used by Bollini-Giambiagi and Salam-Strathdee in order to gain finite part of functions with a simple pole." V.S. Vladimirov and Ya.I. Volovich, "On the nonlinear dynamical equation in the p-adic string theory" (preprint 06/03) [abstract:] "In this work nonlinear pseudo-differential equations with the infinite number of derivatives are studied. These equations form a new class of equations which initially appeared in p-adic string theory. These equations are of much interest in mathematical physics and its applications in particular in string theory and cosmology. In the present work a systematical mathematical investigation of the properties of these equations is performed. The main theorem of uniqueness in some algebra of tempered distributions is proved. Boundary problems for bounded solutions are studied, the existence of a space-homogenous solution for odd p is proved. For even p it is proved that there is no continuous solutions and it is pointed to the possibility of existence of discontinuous solutions. Multidimensional equation is also considered and its soliton and q-brane solutions are discussed." I. Ya. Aref'eva, I.V. Volovich, "Quantization of the Riemann zeta-function and cosmology" (preprint 12/2006) [abstract:] "Quantization of the Riemann zeta-function is proposed. We treat the Riemann zeta-function as a symbol of a pseudodifferential operator and study the corresponding classical and quantum field theories. This approach is motivated by the theory of p-adic strings and by recent works on stringy cosmological models. We show that the Lagrangian for the zeta-function field is equivalent to the sum of the Klein-Gordon Lagrangians with masses defined by the zeros of the Riemann zeta-function. Quantization of the mathematics of Fermat-Wiles and the Langlands program is indicated. The Beilinson conjectures on the values of L-functions of motives are interpreted as dealing with the cosmological constant problem. Possible cosmological applications of the zeta-function field theory are discussed." "It has been conjectured that an extremum of the tachyon potential of a bosonic D-brane represents the vacuum without any D-brane, and that various tachyonic lump solutions represent D-branes of lower dimension. We show that the tree level effective action of p-adic string theory, the expression for which is known exactly, provides an explicit realisation of these conjectures." I.Ya. Aref'eva, M.G. Ivanov and I.V. Volovich, "Non-extremal intersecting p-branes in various dimensions", Phys. Lett. B 406 (1997) 44–48 [abstract:] "Non-extremal intersecting p-brane solutions of gravity coupled with several antisymmetric fields and dilatons in various space-time dimensions are constructed. The construction uses the same algebraic method of finding solutions as in the extremal case and a modified "no-force" conditions. We justify the "deformation" prescription. It is shown that the non-extremal intersecting p-brane solutions satisfy harmonic superposition rule and the intersections of non-extremal p-branes are specified by the same characteristic equations for the incidence matrices as for the extremal p-branes. We show that S-duality holds for non-extremal p-brane solutions. Generalized T-duality takes place under additional restrictions to the parameters of the theory which are the same as in the extremal case." 
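Several of the abstracts above (the tachyonic-resonance interpretation quoted at the start of this section, and the Aref'eva–Volovich proposal to treat the zeta function as the symbol of a pseudodifferential operator with masses defined by its zeros) revolve around the nontrivial zeros $z_n = 1/2 + i\lambda_n$. For readers who want to see these numbers concretely, here is a minimal numerical sketch, assuming Python with the mpmath library; it is not taken from any of the cited papers, and simply lists the first few zeros on the critical line (the first lies at approximately $1/2 + 14.1347i$).

```python
# List the first few nontrivial zeros 1/2 + i*lambda_n of the Riemann zeta
# function, using mpmath's built-in zetazero routine.
from mpmath import mp, zetazero

mp.dps = 15  # working precision in decimal digits

for n in range(1, 6):
    rho = zetazero(n)  # n-th zero in the upper half of the critical strip
    print(n, rho.real, rho.imag)
```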
I.Ya.Arefeva, K.S.Viswanathan, A.I.Volovich and I.V.Volovich, "Composite p-branes in various dimensions", Nucl. Phys. Proc. Suppl. 56B (1997) 52–60 [abstract:] "We review an algebraic method of finding the composite p-brane solutions for a generic Lagrangian, in arbitrary spacetime dimension, describing an interaction of a graviton, a dilaton and one or two antisymmetric tensors. We set the Fock-De Donder harmonic gauge for the metric and the "no-force" condition for the matter fields. Then equations for the antisymmetric field are reduced to the Laplace equation and the equation of motion for the dilaton and the Einstein equations for the metric are reduced to an algebraic equation. Solutions composed of n constituent p-branes with n independent harmonic functions are given. The form of the solutions demonstrates the harmonic functions superposition rule in diverse dimensions. Relations with known solutions in D = 10 and D = 11 dimensions are discussed." I.Ya. Aref'eva, K.S. Viswanathan and I.V. Volovich, "p-Brane solutions in diverse dimensions", Phys.Rev. D55 (1997) 4748–4755 [abstract:] "A generic Lagrangian, in arbitrary spacetime dimension, describing the interaction of a graviton, a dilaton and two antisymmetric tensors is considered. An isotropic p-brane solution consisting of three blocks and depending on four parameters in the Lagrangian and two arbitrary harmonic functions is obtained. For specific values of parameters in the Lagrangian the solution may be identified with previously known superstring solutions." I.V. Volovich, "p-Adic string", Classical Quantum Gravity 4 (1987) 83–87 I.V. Volovich, "From p-adic strings to étale strings", Proc. Steklov Inst. Math. 203 (1995) no. 3, 37–42 I.Aref'eva and A. Volovich, "Composite p-branes in diverse dimensions", Class. Quant. Grav. 14 (1997) 2991–3000 [abstract:] "We use a simple algebraic method to find a special class of composite p-brane solutions of higher dimensional gravity coupled with matter. These solutions are composed of n constituent p-branes corresponding n independent harmonic functions. A simple algebraic criteria of existence of such solutions is presented. Relations with D = 11, 10 known solutions are discussed." A. Volovich, "Three-block p-branes in various dimensions", Nucl. Phys. B492 (1997) 235–248 [abstract:] "It is shown that a Lagrangian, describing the interaction of the gravitation field with the dilaton and the antisymmetric tensor in arbitrary dimension spacetime, admits an isotropic p-brane solution consisting of three blocks. Relations with known p-brane solutions are discussed. In particular, in ten-dimensional spacetime the three-block p-brane solution is reduced to the known solution, which recently has been used in the D-brane derivation of the black hole entropy." G. Dattoli, M. Del Franco, "The Euler legacy to modern physics" (preprint 09/2010) [abstract:] "Particular families of special functions, conceived as purely mathematical devices between the end of XVIII and the beginning of XIX centuries, have played a crucial role in the development of many aspects of modern Physics. This is indeed the case of the Euler gamma function, which has been one of the key elements paving the way to string theories, furthermore the Euler–Riemann Zeta function has played a decisive role in the development of renormalization theories. 
The ideas of Euler and later those of Riemann, Ramanujan and of other, less popular, mathematicians have therefore provided the mathematical apparatus ideally suited to explore, and eventually solve, problems of fundamental importance in modern Physics. The mathematical foundations of the theory of renormalization trace back to the work on divergent series by Euler and by mathematicians of two centuries ago. Feynman, Dyson, Schwinger... rediscovered most of these mathematical "curiosities" and were able to develop a new and powerful way of looking at physical phenomena." In Section 2.3 of Bernard Julia's seminal 1990 paper "Statistical theory of numbers", the author turns briefly from multiplicative to additive number theory, in particular to generating functionals associated with integer partition problems. He relates these to the Veneziano open string model, the tachyon mode, and the phenomenon of "bosonization" which is discussed elsewhere in the paper. L. Brekke and P. Freund, "p-adic numbers in physics", Physics Reports 233, (1993) 1–66 This is a review article related to the achievements in application of p-adic numbers to string theory, quantum field theory and quantum mechanics during the period 1987-1992. The contribution of Freund and his collaborators is emphasised. Here is an excerpt from pp.61–62: "String theory, the candidate "theory of everything" is expected to raise fundamental issues both at the level of physics and at the level of mathematics. The old issue of the nature of continuity in physics naturally leads to the consideration of p-adic strings. It is remarkable that these very simple alternate topologies have not appeared earlier in physics (ultrametrics have appeared [62]). Yet, even now it would not be reasonable to actually select a prime and claim this to be the phenomenologically preferred prime which "underlies" physics. As we have seen, such a preferred prime could lead to serious causality problems. But if none of the primes is to be preferred, then why select a priori the prime at infinity, and deal exclusively with real numbers? A more "even-handed" procedure would envision dealing with all primes at the same time. This naturally leads to adelic theories. We have seen that this point of view immediately yields the remarkable adelic product formulae. Could it be that the adelic string is the "real thing"? This question has been raised by Manin [41] in the following (somewhat paraphrased) form. Supposing that the true physics is adelic, then why can we always assume it to be archimedean, grounded in the real numbers? Maybe this is on account of some experimental limitations, e.g. low energy. Could it be that once these limitations get lifted and we reach very high (Planck) energies, the full adelic structure of the string will reveal itself? This is an interesting possibility. Another possibility is that the true theory is archimedean, but that on account of the product formulae, one could alternatively conceive of the theory as an Euler product over all p-adic theories. As we saw, each such theory puts the strings' world sheet on a Bethe lattice. What the adelic formulae then tell us is that we should not opt for a particular Bethe lattice as the discretization of the world sheet, but rather study absolutely all of them. The cumulative understanding of all these discretizations is tantamount to understanding the ordinary archimedean string. Of course, each of these discretizations is far simpler than the ordinary string. 
On the other hand, there is the p-adics-quantum group connection, which places the ordinary and all the p-adic strings at certain special points in a continuum of theories. It is an important problem to assess the theoretical consistency of all these "quantum" strings and the phenomenological possibilities offered by them." Note that the elements of the Monster involve Ogg's supersingular primes. Here are some instances of The Monster in string theory (thanks to Mark Thomas for pointing these out): F. Lizzi and R.J. Szabo, "Duality symmetries and noncommutative geometry of string spacetime" F. Lizzi and R.J. Szabo, "Noncommutative Geometry and Spacetime Gauge Symmetries of String Theory", Chaos, Solitons and Fractals 10 (1999) 445–458 B. Craps, M.R. Gaberdiel and J.A. Harvey, "Monstrous branes", Commun. Math. Phys. 234 (2003) 229–251 M.B. Green and D. Kutasov, "Monstrous heterotic quantum mechanics", Journal of High Energy Physics 1 (01-12), 1–6 P. Henry-Labordere, B. Julia and L. Paulot, "Symmetries in M-theory: Monsters Inc." (talk given by PHL at Cargese 2002) M.A.R. Osorio and M.A. Vazquez-Mozo, "Strings below the Planck scale", Phys. Lett. B 280 (1992) 21–25 J.A. Harvey and G. Moore, "Exact gravitational threshold correction in the FHSV model", Phys. Rev. D 57 (1998) 2329–2336 S. Chaudhuri and D.A. Lowe, "Monstrous string-string duality", Nucl. Phys. B 469 (1996) 21–36 F.D.T. Smith, "E6, strings, branes, and the standard model" (preprint 04/04) L. Dolan and M. Langham, "Symmetric subgroups of gauged supergravities and AdS string theory vertex operators", Mod. Phys. Lett. A 14 (1999) 517–526 L. Dolan, "The beacon of Kac-Moody symmetry for physics", Notices of the American Mathematical Society, Dec. 1995, 1489 P.C. West, "Physical states and string symmetries", Mod. Phys. Lett. A 10 (1995) 761–772 K. Esmakhanova, G. Nugmanova and R. Myrzakulov, "A note on the relationship between solutions of Einstein, Ramanujan and Chazy equations" (preprint 02/2011) [abstract:] "The Einstein equation for the Friedmann–Robertson–Walker metric plays a fundamental role in cosmology. The direct search of the exact solutions of the Einstein equation even in this simple metric case is sometime a hard job. Therefore, it is useful to construct solutions of the Einstein equation using known solutions of some other equations which are equivalent or related to the Einstein equation. In this work, we establish the relationship of the Einstein equation with two other famous equations, namely the Ramanujan equation and the Chazy equation. Both these two equations play an important role in the number theory. Using the known solutions of the Ramanujan and Chazy equations, we find the corresponding solutions of the Einstein equation." [abstract:] "We obtain in this paper, as a consequence of the Riemann hypothesis, certain class of topological deformations of the graph of the function $|\zeta|$. These are used to construct an infinite set of microscopic universes (on the Planck's scale) of the Einstein type." M. Lapidus, In Search of the Riemann Zeros (AMS, 2008) [from publisher's description:] "In this book, the author proposes a new approach to understand and possibly solve the Riemann Hypothesis. His reformulation builds upon earlier (joint) work on complex fractal dimensions and the vibrations of fractal strings, combined with string theory and noncommutative geometry.
Accordingly, it relies on the new notion of a fractal membrane or quantized fractal string, along with the modular flow on the associated moduli space of fractal membranes. Conjecturally, under the action of the modular flow, the spacetime geometries become increasingly symmetric and crystal-like, hence, arithmetic. Correspondingly, the zeros of the associated zeta functions eventually condense onto the critical line, towards which they are attracted, thereby explaining why the Riemann Hypothesis must be true. Written with a diverse audience in mind, this unique book is suitable for graduate students, experts and nonexperts alike, with an interest in number theory, analysis, dynamical systems, arithmetic, fractal or noncommutative geometry, and mathematical or theoretical physics." S.L. Cacciatori and M.A. Cardella, "Uniformization, unipotent flows and the Riemann hypothesis" (preprint 02/2011) [abstract:] "We prove equidistribution of certain multidimensional unipotent flows in the moduli space of genus $g$ principally polarized abelian varieties (ppav). This is done by studying asymptotics of $\pmb{\Gamma}_{g} \sim Sp(2g,\mathbb{Z})$-automorphic forms averaged along unipotent flows, toward the codimension-one component of the boundary of the ppav moduli space. We prove a link between the error estimate and the Riemann hypothesis. Further, we prove $\pmb{\Gamma}_{g - r}$ modularity of the function obtained by iterating the unipotent average process $r$ times. This shows uniformization of modular integrals of automorphic functions via unipotent flows." [A connection with string theory is outlined on pp.4–5.] K. Nakayama, F. Takahashi and T.T. Yanagida, "Revisiting the number-theory dark matter scenario and the weak gravity conjecture" (preprint 11/2018) [abstract:] "We revisit the number-theory dark matter scenario where one of the light chiral fermions required by the anomaly cancellation conditions of $U(1)_{B-L}$ explains dark matter. Focusing on some of the integer B-L charge assignments, we explore a new region of the parameter space where there appear two light fermions and the heavier one becomes a dark matter of mass O(10)keV or O(10)MeV. The dark matter radiatively decays into neutrino and photon, which can explain the tantalizing hint of the 3.55keV X-ray line excess. Interestingly, the other light fermion can erase the AdS vacuum around the neutrino mass scale in a compactification of the standard model to 3D. This will make the standard model consistent with the AdS-WGC statement that stable non-supersymmetric AdS vacua should be absent." Conference: "Modular Forms and String Duality", Banff International Research Station, June 3–8, 2006 "Physical duality symmetries relate special limits of the various consistent string theories (Types I, II, Heterotic string and their cousins, including F-theory) one to another. By comparing the mathematical descriptions of these theories, one reveals often quite deep and unexpected mathematical conjectures. The best known string duality to mathematicians, Type IIA/IIB duality also called mirror symmetry, has inspired many new developments in algebraic and arithmetic geometry, number theory, toric geometry, Riemann surface theory, and infinite dimensional Lie algebras. Other string dualities such as Heterotic/Type II duality and F-Theory/Heterotic string duality have also, more recently, led to series of mathematical conjectures, many involving elliptic curves, K3 surfaces, and modular forms.
Modular forms and quasi-modular forms play a central role in mirror symmetry, in particular, as generating functions counting the number of curves on Calabi-Yau manifolds and describing Gromov-Witten invariants. This has led to a realization that time is ripe to assess the role of number theory, in particular, that of modular forms, in mirror symmetry and string dualities in general. One of the principal goals of this workshop is to look at modular and quasi-modular forms, congruence zeta-functions, Galois representations, and L-series for dual families of Calabi-Yau varieties with the aim of interpreting duality symmetries in terms of arithmetic invariants associated to the varieties in question. Over the last decades, a great deal of work has been done on these problems. In particular it appears that we need to modify the classical theories of Galois representations (in particular, the question of modularity) and modular forms, among others, for families of Calabi-Yau varieties in order to accommodate "quantum corrections"." School and Workshop on Modular Forms and Black Holes, 5–14 January 2017, National Institute of Science Education and Research, Bhubaneswar, India M. Nardelli, "Proposta di dimostrazione della variante Riemann di Lagarias (RH1), equivalente all'ipotesi di Riemann, con RH1=RH" [English title: "Proposed proof of Lagarias' Riemann variant (RH1), equivalent to the Riemann Hypothesis, with RH1 = RH"] (preprint in Italian, 12/2007) [translation of abstract provided by author:] "In this paper, we suggest a proof of the Riemann Hypothesis by the 'Lagarias variant' or 'Lagarias Equivalence': RH1 = RH. Hence, we prove that for abundant and colossally abundant numbers, $L(n)$ increases progressively, as $n$ increases, although with small oscillations, but which never lead to $L(n)$ taking negative or zero values. Furthermore, we obtain also some interesting mathematical connections between various equations concerning the Riemann zeta function and some solutions of equations regarding various models of string cosmology." [abstract:] "This paper is a review of some interesting results that has been obtained in the study of the categories of A-branes on the dual Hitchin fibers and some interesting phenomena associated with the endoscopy in the geometric Langlands correspondence of various authoritative theoretical physicists and mathematicians." [abstract:] "This paper is a review of some interesting results that has been obtained in the study of the quantum gravity partition functions in three-dimensions, in the Selberg zeta function, in the vanishing of cosmological constant and in the ten-dimensional anomaly cancellations. In the Section 1, we have described some equations concerning the pure three-dimensional quantum gravity with a negative cosmological constant and the pure three-dimensional supergravity partition functions. In the Section 2, we have described some equations concerning the Selberg super-trace formula for Super-Riemann surfaces, some analytic properties of Selberg super zeta-functions and multiloop contributions for the fermionic strings. In the Section 3, we have described some equations concerning the ten-dimensional anomaly cancellations and the vanishing of cosmological constant. In the Section 4, we have described some equations concerning p-adic strings, p-adic and adelic zeta functions and zeta strings. In conclusion, in the Section 5, we have described the possible and very interesting mathematical connections obtained between some equations regarding the various sections and some sectors of number theory (Riemann zeta functions, Ramanujan modular equations, etc...)
and some interesting mathematical applications concerning the Selberg super-zeta functions and some equations regarding the Section 1." [abstract:] "This paper is a review of some interesting results that has been obtained in the study of the physical interpretation of the Riemann zeta function as a FZZT Brane Partition Function associated with a matrix/gravity correspondence and some aspects of the Rigid Surface Operators in Gauge Theory. Furthermore, we describe the mathematical connections with some sectors of String Theory (p-adic and adelic strings, p-adic cosmology) and Number Theory. In the Section 1 we have described various mathematical aspects of the Riemann Hypothesis, matrix/gravity correspondence and master matrix for FZZT brane partition functions. In the Section 2, we have described some mathematical aspects of the rigid surface operators in gauge theory and some mathematical connections with various sectors of Number Theory, principally with the Ramanujan's modular equations (thence, prime numbers, prime natural numbers, Fibonacci's numbers, partitions of numbers, Euler's functions, etc...) and various numbers and equations related to the Lie Groups. In the Section 3, we have described some very recent mathematical results concerning the adeles and ideles groups applied to various formulae regarding the Riemann zeta function and the Selberg trace formula (connected with the Selberg zeta function), hence, we have obtained some new connections applying these results to the adelic strings and zeta strings. In the Section 4 we have described some equations concerning p-adic strings, p-adic and adelic zeta functions, zeta strings and p-adic cosmology (with regard the p-adic cosmology, some equations concerning a general class of cosmological models driven by a nonlocal scalar field inspired by string field theories). In conclusion, in the Section 5, we have showed various and interesting mathematical connections between some equations concerning the Section 1, 3 and 4." [abstract:] "According to quantum mechanics, the properties of an atom can be calculated easily if we known the eigenfunctions and eigenvalues of quantum states in which the atom can be found. The eigenfunctions depend, in general, by the coordinates of all the electrons. However, a diagram effective and enough in many cases, we can get considering the individual eigenfunctions for individual electrons, imagining that each of them is isolated in an appropriate potential field that represent the action of the nucleus and of other electrons. From these individual eigenfunctions we can to obtain the eigenfunction of the quantum state of the atom, forming the antisymmetrical products of eigenfunctions of the individual quantum states involved in the configuration considered. The problem, with this diagram, is the calculation of the eigenfunctions and eigenvalues of individual electrons of each atomic species. To solve this problem we must find solutions to the Schroedinger's equation where explicitly there is the potential acting on the electron in question, due to the action of the nucleus and of all the other electrons of the atom. To research of potential it is possible proceed with varying degrees of approximation: a first degree is obtained by the statistical method of Thomas-Fermi in which electrons are considered as a degenerate gas in balance as a result of nuclear attraction. 
This method has the advantage of a great simplicity as that, through a single function numerically calculated once and for all, it is possible to represent the behaviour of all atoms. In this work (Sections 1 and 2) we give the preference to the statistical method, because in any case it provides the basis for more approximate numerical calculations. Furthermore, we describe the mathematical connections that we have obtained between certain solutions concerning the calculation of any eigenfunctions of atoms with this method, the Aurea ratio, the Fibonacci's numbers, the Ramanujan modular equations, the modes corresponding to the physical vibrations of strings, the p-adic and Adelic free relativistic particle and p-adic and adelic strings (Sections 3 and 4)." [abstract:] "This paper is a review, a thesis, of some interesting results that has been obtained in various researches concerning the "brane collisions in string and M-theory" (Cyclic Universe), p-adic inflation and p-adic cosmology. In Section 1 we have described some equations concerning cosmic evolution in a Cyclic Universe. In the Section 2, we have described some equations concerning the cosmological perturbations in a Big Crunch/Big Bang space-time, the M-theory model of a Big Crunch/Big Bang transition and some equations concerning the solution of a braneworld Big Crunch/Big Bang Cosmology. In the Section 3, we have described some equations concerning the generating Ekpyrotic curvature perturbations before the Big Bang, some equations concerning the effective five-dimensional theory of the strongly coupled heterotic string as a gauged version of $N = 1$ five dimensional supergravity with four-dimensional boundaries, and some equations concerning the colliding branes and the origin of the Hot Big Bang. In the Section 4, we have described some equations regarding the "null energy condition" violation concerning the inflationary models and some equations concerning the evolution to a smooth universe in an ekpyrotic contracting phase with $w > 1$. In the Section 5, we have described some equations concerning the approximate inflationary solutions rolling away from the unstable maximum of p-adic string theory. In the Section 6, we have described various equations concerning the p-adic minisuperspace model, zeta strings, zeta nonlocal scalar fields and p-adic and adelic quantum cosmology. In the Section 7, we have showed various and interesting mathematical connections between some equations concerning the p-adic Inflation, the p-adic quantum cosmology, the zeta strings and the brane collisions in string and M-theory. Furthermore, in each section, we have showed the mathematical connections with various sectors of number theory, principally the Ramanujan's modular equations, the Aurea Ratio and the Fibonacci numbers." [abstract:] "The aim of this paper is that of show the further and possible connections between the p-adic and adelic strings and Lagrangians with Riemann zeta function with some problems, equations and theorems in number theory. In Section 1, we have described some equations and theorems concerning the quadrature- and mean-convergence in the Lagrange interpolation. In Section 2, we have described some equations and theorems concerning the difference sets of sequences of integers.
In Section 3, we have showed some equations and theorems regarding some problems of a statistical group theory (symmetric groups) and in Section 4, we have showed some equations and theorems concerning the measure of the non-monotonicity of the Euler phi function and the related Riemann zeta function. In Section 5, we have showed some equations concerning the p-adic and adelic strings, the zeta strings and the Lagrangians for adelic strings. In conclusion, in Section 6, we have described the mathematical connections concerning the various sections previously analyzed. Indeed, in the Section 1, 2 and 3, where are described also various theorems on the prime numbers, we have obtained some mathematical connections with Ramanujan's modular equations, thence with the modes corresponding to the physical vibrations of the bosonic and supersymmetric strings and also with p-adic and adelic strings. Principally, in Section 3, where is frequently used the Hardy-Ramanujan stronger asymptotic formula and are described some theorems concerning the prime numbers. With regard Section 4, we have obtained some mathematical connections between some equations concerning the Euler phi function, the related Riemann zeta function and the zeta strings and field Lagrangians for p-adic sector of adelic string (Section 5). Furthermore, in Sections 1, 2, 3 and 4, we have described also various mathematical expressions regarding some frequency connected with the exponents of the Aurea ratio, i.e. with the exponents of the number phi = 1.61803399. We consider it important to remember that the number 7 of the various exponents is related to the compactified dimensions of M-theory." [abstract:] "In this paper we have showed the various applications of the Boltzmann equation in string theory and related topics. In Section 1, we have described some equations concerning the time dependent multi-term solution of Boltzmann's equation for charged particles in gases under the influence of electric and magnetic fields, the Planck's blackbody radiation law, the Boltzmann's thermodynamic derivation and the connections with the superstring theory. In Section 2, we have described some equations concerning the modifications to the Boltzmann equation governing the cosmic evolution of relic abundances induced by dilaton dissipative-source and non-critical-string terms in dilaton-driven non-equilibrium string cosmologies. In Section 3, we have described some equations concerning the entropy of an eternal Schwarzschild black hole in the limit of infinite black hole mass, from the point of view of both canonical quantum gravity and superstring theory. We have described some equations regarding the quantum corrections to black hole entropy in string theory. Furthermore, in this section, we have described some equations concerning the thesis "Can the Universe create itself?" and the adapted Rindler vacuum in Misner space. In Section 4, we have described some equations concerning p-Adic models in Hartle-Hawking proposal and p-Adic and Adelic wave functions of the Universe. Furthermore, we have described in the various sections the various possible mathematical connections that we've obtained with some sectors of number theory and, in the Section 5, we have showed some mathematical connections between some equations of arguments above described and p-adic and adelic cosmology." R. Turco, M. Colonnese and M. Nardelli, "On the Riemann Hypothesis. Formulas explained - $\psi(x)$ as equivalent RH.
Mathematical connections with 'Aurea' section and some sectors of string theory" (preprint 06/2009) [abstract:] "In this work we will examine the themes of RH, equivalent RH and GRH. We will explain some formulas and will show other special functions that are usually introduced with the PNT (Prime Number Theorem) and useful to investigate in other ways. In the Sections 1 and 2, we describe $\psi(x)$, i.e. the second Chebyshev function as equivalent RH. In the Section 3, we describe a step function and a generalization of Polignac. In the Section 4, we describe some equations concerning p-adic strings, p-adic and adelic zeta functions, zeta strings and zeta nonlocal scalar fields. In conclusion, in the Section 5, we have described some possible mathematical connections between adelic strings and Lagrangians with Riemann zeta function with some equations in number theory above examined." R. Turco, M. Colonnesse, M. Nardelli, "Links between string theory and Riemann's zeta function" (preprint 01/2010) [abstract:] "There is a connection between string theory and the Riemann's zeta function: this is an interesting way, because the zeta is related to prime numbers and we have seen on many occasions how nature likes to express himself through perfect laws or mathematical models. In [6] the authors showed all the mathematical and theoretical aspects related to the Riemann's zeta, while in [9] showed the links of certain formulas of number theory with the golden section and other areas such as string theory. The authors have proposed a solution of the Riemann hypothesis (RH) and the conjecture on the multiplicity of nontrivial zeros, showing that they are simple zeros [7][8]. Not least the situation that certain stable energy levels of atoms could be associated with non-trivial zeros of the Riemann's zeta. In [6] for example has been shown the binding of the Riemann zeta and its nontrivial zeros with quantum physics through the Law of Montgomery-Odlyzko. The law of Montgomery-Odlyzko says that "the distribution of the spacing between successive non-trivial zeros of the Riemann zeta function (normalized) is identical in terms of statistical distribution of spacing of eigenvalues in an GUE operator", which also represent dynamical systems of subatomic particles! In [10] [11] have proposed hypotheses equivalent RH, in [12] [13] the authors have presented informative articles on the physics of extra dimensions, string theory and M-theory, in [15] the conjecture Yang and Mills, in [16] the conjecture of Birch and Swinnerton-Dyer." M. Nardelli, "From the Maxwell's equations to the string theory: new possible mathematical connections" (preprint 02/2010) [abstract:] "In this paper in the Section 1, we describe the possible mathematics concerning the unification between the Maxwell's equations and the gravitational equations. In this Section we have described also some equations concerning the gravitomagnetic and gravitoelectric fields. In the Section 2, we have described the mathematics concerning the Maxwell's equations in higher dimension (thence Kaluza-Klein compactification and relative connections with string theory and Palumbo-Nardelli model). In the Section 3, we have described some equations concerning the noncommutativity in String Theory, principally the Dirac-Born-Infeld action, noncommutative open string actions, Chern-Simons couplings on the brane, D-brane actions and the connections with the Maxwell electrodynamics, Maxwell's equations, B-field and gauge fields. 
In the Section 4, we have described some equations concerning the noncommutative quantum mechanics regarding the particle in a constant field and the noncommutative classical dynamics related to quadratic Lagrangians (Hamiltonians) connected with some equations concerning the Section 3. In conclusion, in the Section 5, we have described the possible mathematical connections between various equations concerning the arguments above mentioned, some links with some aspects of Number Theory (Ramanujan modular equations connected with the physical vibrations of the superstrings, various relationships and links concerning $\pi$, $\phi$, thence the Aurea ratio), the zeta strings and the Palumbo-Nardelli model that link bosonic and fermionic strings". M. Nardelli, "The mathematical theory of black holes: Mathematical connections with some sectors of string theory and number theory" (preprint 04/2010) [abstract:] "In this paper we describe some equations concerning the stellar evolution and their stability. We describe some equations concerning the perturbations of the Schwarzschild black hole, the Reissner-Nordstrom solution and the entropy of strings and black holes: Schwarzschild geometry in $D = d + 1$ dimensions. Furthermore, we show the mathematical connections with some sectors of number theory, principally with Ramanujan's modular equations and the aurea ratio (or golden ratio)." [abstract:] "In this paper we have described, in the Section 1, some mathematics concerning the Andrica's conjecture. In the Section 2, we have described the Cramer–Shank Conjecture. In the Section 3, we have described some equations concerning the possible proof of the Cramer's conjecture and the related differences between prime numbers, principally the Cramer's conjecture and Selberg's theorem. In the Section 4, we have described some equations concerning the p-adic strings and the zeta strings. In the Section 5, we have described some equations concerning the $\Omega$-deformation in toroidal compactification for N = 2 gauge theory. In conclusion, in the Section 6, we have described some possible mathematical connections between various sectors of string theory and number theory." M. Nardelli and R. Turco, "The circle method to investigate Goldbach's conjecture and the Germain primes: Mathematical connections with p-adic strings and zeta strings" (preprint 08/2010) [abstract:] "In this paper we have described in Section 1 some equations and theorems concerning the circle method applied to Goldbach's conjecture. In Section 2, we have described some equations and theorems concerning the circle method to investigate Germain primes by the major arcs. In Section 3, we have described some equations concerning the equivalence between Goldbach's conjecture and the generalized Riemann hypothesis. In Section 4, we have described some equations concerning p-adic strings and zeta strings. In conclusion, in Section 5, we have described some possible mathematical connections between the arguments discussed in the various sections." [abstract:] "In this paper, in Section 1, we have described some equations concerning the functions $\zeta(s)$ and $\zeta(s,w)$. In this Section, we have described also some equations concerning a transformation formula involving the gamma and Riemann zeta functions of Ramanujan. Furthermore, we have described also some mathematical connections with various theorems concerning the incomplete elliptic integrals described in "Ramanujan's lost notebook".
In Section 2, we have described some Ramanujan-type series for $1/\pi$ and some equations concerning the $p$-adic open string for the scalar tachyon field. In this section, we have described also some possible and interesting mathematical connections with some Ramanujan's Theorems, contained in the first letter of Ramanujan to G.H. Hardy. In Section 3, we have described some equations concerning the zeta strings and the zeta nonlocal scalar fields. In conclusion, in Section 4, we have showed some possible mathematical connections between the arguments above mentioned, the Palumbo--Nardelli model and the Ramanujan's modular equations that are related to the physical vibrations of bosonic strings and of superstrings." [abstract:] "In this paper, in the Section 1, we have described some equations and theorems concerning the Lebesgue integral and the Lebesgue measure. In the Section 2, we have described the possible mathematical applications, of Lebesgue integration, in some equations concerning various sectors of Chern-Simons theory and Yang-Mills gauge theory, precisely the two dimensional quantum Yang-Mills theory. In conclusion, in the Section 3, we have described also the possible mathematical connections with some sectors of String Theory and Number Theory, principally with some equations concerning the Ramanujan's modular equations that are related to the physical vibrations of the bosonic strings and of the superstrings, some Ramanujan's identities concerning $\pi$ and the zeta strings." [abstract:] "This paper is principally a review, a thesis, of principal results obtained from various authoritative theoretical physicists and mathematicians in some sectors of theoretical physics and mathematics. In this paper in the Section 1, we have described some equations concerning the quantum electrodynamics coupled to quantum gravity. In the Section 2, we have described some equations concerning the gravitational contributions to the running of gauge couplings. In the Section 3, we have described some equations concerning some quantum effects in the theory of gravitation. In the Section 4, we have described some equations concerning the supersymmetric Yang-Mills theory applied in string theory and some lemmas and equations concerning various gauge fields in any non-trivial quantum field theory for the pure Yang-Mills Lagrangian. Furthermore, in conclusion, in the Section 5, we have described various possible mathematical connections between the argument above mentioned and some sectors of Number Theory and String Theory, principally with some equations concerning the Ramanujan's modular equations that are related to the physical vibrations of the bosonic strings and of the superstrings, some Ramanujan's identities concerning $\pi$ and the zeta strings." [abstract:] "In this paper in the Section 1, we have described some equations concerning the duality and higher derivative terms in M-theory. In the Section 2, we have described some equations concerning the moduli-dependent coefficients of higher derivative interactions that appear in the low energy expansion of the four-supergraviton amplitude of maximally supersymmetric string theory compactified on a d-torus. Thence, some equations regarding the automorphic properties of low energy string amplitudes in various dimensions. In the Section 3, we have described some equations concerning the Eisenstein series for higher-rank groups, string theory amplitudes and string perturbation theory. 
In the Section 4, we have described some equations concerning U-duality invariant modular form for the $D^6R^4$ interaction in the effective action of type IIB string theory compactified on $T^2$. Furthermore, in the Section 5, we have described various possible mathematical connections between the arguments above mentioned and some sectors of Number Theory, principally the Aurea Ratio Phi, some equations concerning the Ramanujan's modular equations that are related to the physical vibrations of the bosonic strings and of the superstrings, some Ramanujan's identities concerning $\pi$ and the zeta strings. In conclusion, in the Appendix A, we have analyzed some pure numbers concerning various equations described in the present paper. Thence, we have obtained some useful mathematical connections with some sectors of Number Theory. In the Appendix B, we have showed the column "system" concerning the universal music system based on Phi and the table where we have showed the difference between the values of Phi^(n/7) and the values of the column "system"." [abstract:] "The present paper is a review, a thesis of some very important contributes of E. Witten, C. Beasley, R. Ricci, B. Basso et al. regarding various applications concerning the Jones polynomials, the Wilson loops and the cusp anomaly and integrability from string theory. In this work, in Section 1, we have described some equations concerning the knot polynomials, the Chern–Simons from four dimensions, the D3-NS5 system with a theta-angle, the Wick rotation, the comparison to topological field theory, the Wilson loops, the localization and the boundary formula. We have described also some equations concerning electric-magnetic duality to $N = 4$ super Yang-Mills theory, the gravitational coupling and the framing anomaly for knots. Furthermore, we have described some equations concerning the gauge theory description, relation to Morse theory and the action. In Section 2, we have described some equations concerning the applications of non-abelian localization to analyze the Chern–Simons path integral including Wilson loop insertions. In the Section 3, we have described some equations concerning the cusp anomaly and integrability from string theory and some equations concerning the cusp anomalous dimension in the transition regime from strong to weak coupling. In Section 4, we have described also some equations concerning the "fractal" behaviour of the partition function. Also here, we have described some mathematical connections between various equation described in the paper and (i) the Ramanujan's modular equations regarding the physical vibrations of the bosonic strings and the superstrings, thence the relationship with the Palumbo-Nardelli model, (ii) the mathematical connections with the Ramanujan's equations concerning $\pi$ and, in conclusion, (iii) the mathematical connections with the golden ratio $\phi$ and with $1.375$ that is the mean real value for the number of partitions $p(n)$." [abstract:] "The present paper is a review, a thesis of some very important contributes of P. Horava, M. Fabinger, M. Bordag, U. Mohideen, V.M. Mostepanenko, Trang T. Nguyen et al. regarding various applications concerning the Casimir Effect. In this paper in the Section 1 we have showed some equations concerning the Casimir Effect between two ends of the world in M-theory, the Casimir force between the boundaries, the Casimir effect on the open membrane, the Casimir form and the Casimir correction to the string tension that is finite and negative.
In the Section 2, we have described some equations concerning the Casimir effect in spaces with nontrivial topology, i.e. in spaces with non-Euclidean topology, the Casimir energy density of a scalar field in a closed Friedmann model, the Casimir energy density of a massless field, the Casimir contribution and the total vacuum energy density, the Casimir energy density of a massless spinor field and the Casimir stress-energy tensor in the multi-dimensional Einstein equations with regard the Kaluza–Klein compactification of extra dimensions. Further, in the Section 1 and 2 we have described some mathematical connections concerning some sectors of Number Theory, i.e. the Palumbo-Nardelli model, the Ramanujan modular equations concerning the physical vibrations of the bosonic strings and the superstrings and the connections of some values contained in the equations with some values concerning the new universal music system based on fractional powers of Phi and Pigreco. In the Section 3, we have described some mathematical connections concerning the Riemann zeta function and the zeta-strings. In conclusion, in Section 4, we have described some mathematical connections concerning some equations regarding the Casimir effect and vacuum fluctuations. In conclusion (Appendix A), we have described some mathematical connections between the equation of the energy negative of the Casimir effect, the Casimir operators and some sectors of number theory, i.e. the triangular numbers, the Fibonacci numbers, phi, Pigreco and the partition of numbers." [abstract:] "In the present paper we have described some interesting mathematical applications of number theory to heterotic string theory $E8 \times E8$. In Chapter 1, we have described various theoretic arguments and equations concerning the Lie group $E8$, $E8 \times E8$ gauge fields and heterotic string theory. In Chapter 2, we have described the link between the subsets of odd natural numbers and of squares, some equations concerning the theorem that: 'every sufficiently large odd positive integer can be written as the sum of three primes', and the possible method of factorization of a number. In Chapter 3, we have described some classifications of the numbers: perfect, defective, abundant. Furthermore, we have described an infinite set of integers, each of which has many factorizations. In Chapter 4, we have described some interesting mathematical applications concerning the possible method of factorization of a number to the number of dimensions of the Lie group $E8$. In conclusion, in the Appendix, we have described some mathematical connections between various series of numbers concerning Chapter 1 and some sectors of number theory." P.F. Roggero, M. Nardelli and F. Di Noto, "Study on the Riemann zeta function" (preprint 11/2012) [abstract:] "In this paper we show some connections between hyperbolic cotangent and Riemann zeta function plus many other interesting relations. Furthermore, we show also some possible mathematical connections between some equations concerning this thesis and some equations regarding the zeta-strings and the zeta nonlocal scalar fields." [abstract:] "In this paper we show that perfect numbers are only 'even' plus many other interesting relations about Mersenne's prime. Furthermore, we describe also various equations, lemmas and theorems concerning the expression of a number as a sum of primes and the primitive divisors of Mersenne numbers. 
In conclusion, we show some possible mathematical connections between some equations regarding the arguments above mentioned and some sectors of string theory ($p$-adic and adelic strings and Ramanujan modular equation linked to the modes corresponding to the physical vibrations of the bosonic strings)." M. Nardelli, "A possible proof of Fermat's Last Theorem throught the abc radical" (preprint 03/2013) [abstract:] "In this paper we show a possible proof of Fermat's Last Theorem through the 'abc' radical. Furthermore, in the various sections, we have described also some mathematical connections with $\pi$, $\phi$, thence with some sectors of string theory." [abstract:] "In the present paper in the Section 1, we have described some equations concerning the cusp anomalous dimension in the planar limit of $N = 4$ super Yang–Mills from a thermodynamic Bethe ansatz (TBA) system, the Luscher correction at strong coupling and the strong coupling expansion of the function $F$. In Section 2, we have described some equations concerning a two-parameter family of Wilson loop operators in $N = 4$ supersymmetric Yang–Mills theory which interpolates smoothly between the $1/2$ BPS line or circle, principally some equations concerning the one-loop determinants. In Section 3, we have described some results and equations of the mathematician Ramanujan concerning some definite integrals and an infinite product and some equations concerning the development of derivatives of order $n$ ($n$ positive integer) of various trigonometric functions and divergent series. Thence, we have described some mathematical connections between some equations concerning this section and Sections 1 and 2. In Section 4, we have described some equations concerning the relationship between Yang–Mills theory and gravity and, consequently, the complete four-loop four-point amplitude of $N = 4$ super-Yang–Mills theory including the nonplanar contributions regarding the gauge theory and the gravity amplitudes. In conclusion, in the Appendix A and B, we have described a new possible method of factorization of a number and various mathematical connections with some sectors of Number Theory (Fibonacci numbers, Lie numbers, triangular numbers, $\Phi$, $\pi$, etc.)." P.F. Roggero, M. Nardelli and F. Di Noto, "Universal rule to find all the prime numbers" (preprint 09/2013) [abstract:] "In the present paper in Section 1, we have described the formula to find all the prime numbers. In Section 1.1, we have described some equations and lemmas concerning the prime numbers and various mathematical connections with some sectors of string theory. In Section 2, we have described the universal rule to find a prime number as large as desired." P.F. Roggero, M. Nardelli and F. Di Noto, "Relations between the Gauss–Eisenstein prime numbers and their correlation with Sophie Germain primes" (preprint 11/2013) [abstract:] "In the present paper we examine the relations between the Gauss prime numbers and the Eisenstein prime numbers and their correlation with Sophie Germain primes. Furthermore, we have described also various mathematical connections with some equations concerning the string theory." O. Volonterio, M. Nardelli and F. Di Noto, "On a new mathematical application concerning the discrete and the analytic functions. 
Mathematical connections with some sectors of number theory and string theory" (preprint 02/2014) [abstract:] "In this work we have described a new mathematical application concerning discrete and analytic functions: the Volonterio transform and the Volonterio polynomial. The Volonterio transform (V transform), indeed, works from the world of discrete functions to the world of analytic functions. We have described various mathematical applications and properties of them. Furthermore, we have showed also various examples and the possible mathematical connections with some sectors of number theory and string theory." P. F. Roggero, M. Nardelli and F. Di Noto, "On some equations concerning Riemann's prime number formula and on a secure and efficient primality test. Mathematical connections with some sectors of string theory" (preprint 06/2014) [abstract:] "In this paper we focus attention on some equations concerning Riemann's prime number formula and on the behavior of a secure primality test. Furthermore, we have described also some mathematical connections with some sectors of string theory." O. Volonterio and M. Nardelli, "On some applications of the Volonterio transform: Series development of type $Nk+M$ and mathematical connections with some sectors of string theory" (preprint 02/2015) [abstract:] "In this work we have described a new mathematical application concerning discrete and the analytic functions: the Volonterio transform (V transform) and the Volonterio polynomial. We have described various mathematical applications and properties of them, precisely the series development of the type $Nk+M$. Furthermore, we have showed also various examples and the possible mathematical connections with some sectors of number theory and string theory." M. Nardelli and R. Servi, "On some equations concerning the M-theory and topological strings and the Gopakumar–Vafa formula applied in some sectors of string theory and number theory" (preprint 05/2015) [abstract:] "In the present paper we have described in Chapter 1 some equations concerning M-Theory, topological strings and topological gauge theory, in Chapter 2 some equations concerning the Gopakumar–Vafa formula in Type IIA compactification to four dimensions on a Calabi–Yau manifold in terms of a counting of BPS states in M-theory. Finally, in Chapter 3, we have described some possible methods of factorization and their various possible mathematical connections concerning the solutions for some equations regarding the above sectors of string theory." M. Nardelli, F. Di Noto and P. Roggero, "On some mathematical connections between the cubic equation and some sectors of string theory and relativistic quantum gravity" (preprint 11/2015) [abstract:] "In this paper we have described some interesting mathematical connections with various expressions of some sectors of string theory and relativistic quantum gravity, principally with the Palumbo–Nardelli model applied to the bosonic strings and the superstrings, and some parts of the theory of the cubic equation. In Appendix A, we have described the mathematical connections with some equations concerning the possible relativistic theory of quantum gravity. In conclusion in Appendix B, we have described a proof of Fermat's Last Theorem for the cubic equation case $n = 3$." P. Roggero, M. Nardelli and F.
Di Noto, "The sum of reciprocal Fibonacci prime numbers converges to a new constant: Mathematical connections with some sectors of Einstein's field equations and string theory" (preprint 03/2016) [abstract:] "In this paper we have described a sum of the reciprocal Fibonacci primes that converges to a new constant. Furthermore, in the Section 2, we have described also some new possible mathematical connections with the universal gravitational constant $G$, the Einstein field equations and some equations of string theory linked to $\phi$ and $\pi$." P. Roggero, M. Nardelli and F. Di Noto, "Sum of the reciprocals of famous series: Mathematical connections with some sectors of theoretical physics and string theory" (preprint 01/2017) [abstract:] "In this paper it has been calculated the sums of the reciprocals of famous series. The sum of the reciprocals gives fundamental information on these series. The higher this sum and larger numbers there are in series and vice versa. Furthermore we understand also what is the growth factor of the series and that there is a clear link between the sums of the reciprocal and the "intrinsic nature" of the series. We have described also some mathematical connections with some sectors of theoretical physics and string theory." [abstract:] "In this paper, in Sections 1 and 2, we have described some equations and theorems concerning and linked to the Riemann zeta function. In the Section 3, we have showed the fundamental equation of the Riemann zeta function and the some equations concerning a new possible method for the calculation of the prime numbers. In conclusion, in the Section 4 we show the possible mathematical connections with various expressions of some sectors of string theory and number theory and finally we suppose as the prime numbers can be identified as possible solutions to the some equations of the string theory (zeta string)." Nardelli has also provided four other preprints involving work relating aspects of number theory to string theory, quantum cosmology, gauge theory, noncommutative geometry, etc.: [1]   [2]   [3]   [4] Here is an excerpt from a posting by on the sci.physics newsgroup (02/98) by Dan Piponi: "In (bosonic) string theory via the operator formalism you find an infinite looking zero point energy just like in QED except that you get a sum that looks like: 1+2+3+4+... Now the naive thing to do is the same: subtract off this zero point energy. However later on you get into complications. In fact (if I remember correctly) you must replace this infinity with -1/12 (of all things!) to keep things consistent. Now it turns out there is a nice mathematical kludge that allows you to see 1+2+3+4+... as equalling -1/12. What you do is rewrite it as 1+2-n +3-n +... This is the Riemann Zeta function. This converges for large n but can be analytically continued to n = -1, even though the series doesn't converge there. Zeta(-1) is -1/12. So in some bizarre sense 1+2+3+4+... really is -1/12. But even more amazingly is that you can get the -1/12 by a completely different route - using the path integral formalism rather than the operator formalism. This -1/12 is tied up in a deep way with the geometry of string theory so it's a lot more than simply a trick to keep the numbers finite. However I don't know if the equivalent operation in QED is tied up with the same kind of interesting geometry." C. 
Clark, "Math formula gives new glimpse into the magical mind of Ramanujan" (phys.org, 12/2012) [excerpt:] "The result is a formula for mock modular forms that may prove useful to physicists who study black holes. The work, which Ono recently presented at the Ramanujan 125 conference at the University of Florida, also solves one of the greatest puzzles left behind by the enigmatic Indian genius. "No one was talking about black holes back in the 1920s when Ramanujan first came up with mock modular forms, and yet, his work may unlock secrets about them," [Ken] Ono says. Expansion of modular forms is one of the fundamental tools for computing the entropy of a modular black hole. Some black holes, however, are not modular, but the new formula based on Ramanujan's vision may allow physicists to compute their entropy as though they were." J.F.R. Duncan, M.J. Griffin and K. Ono, "Moonshine" (preprint 11/2014) [abstract:] "Monstrous moonshine relates distinguished modular functions to the representation theory of the monster. The celebrated observations that 196884 = 1 + 196883 and 21493760 = 1+196883+21296876, etc., illustrate the case of the modular function j - 744, whose coefficients turn out to be sums of the dimensions of the 194 irreducible representations of the monster. Such formulas are dictated by the structure of the graded monstrous moonshine modules. Recent works in moonshine suggest deep relations between number theory and physics. Number theoretic Kloosterman sums have reappeared in quantum gravity, and mock modular forms have emerged as candidates for the computation of black hole degeneracies. This paper is a survey of past and present research on moonshine. We also obtain exact formulas for the multiplicities of the irreducible components of the moonshine modules. These formulas imply that such multiplicities are asymptotically proportional to dimensions." number theory, renormalisation and zeta-function regularisation techniques archive      tutorial      mystery      new      search      home      contact
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120985865592957, "perplexity": 678.6441175382737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823303.28/warc/CC-MAIN-20181210034333-20181210055833-00442.warc.gz"}
http://mathhelpforum.com/advanced-algebra/165846-positive-definiteness.html
# Math Help - Positive Definiteness

1. ## Positive Definiteness Hi, I'm trying to show that the following matrix: $A = \begin{bmatrix} 1&2&3\\2&5&1\\3&1&36 \end{bmatrix}$ is positive definite. Using the typical definitions I get to this inequality: x^2 + 2x*(2y+3z) + 5y^2 + 2*y*z + 36z^2 > 0. I just don't know how to show that this is true. Also, I'm trying to show that this matrix: $B = \begin{bmatrix} 1&2&3\\2&5&1\\3&1&34 \end{bmatrix}$ is NOT positive definite. I can't find a vector x that shows x^T*B*x > 0. Thanks a lot for your help.

2. Never mind

3. Originally Posted by AKTilted Hi, I'm trying to show that the following matrix: $A = \begin{bmatrix} 1&2&3\\2&5&1\\3&1&36 \end{bmatrix}$ is positive definite. Using the typical definitions I get to this inequality: x^2 + 2x*(2y+3z) + 5y^2 + 2*y*z + 36z^2 > 0. I just don't know how to show that this is true. Also, I'm trying to show that this matrix: $B = \begin{bmatrix} 1&2&3\\2&5&1\\3&1&34 \end{bmatrix}$ is NOT positive definite. I can't find a vector x that shows x^T*B*x > 0. Thanks a lot for your help. Do you know Sylvester's Criterion? A is pos. def. because all its principal minors are positive, whereas B doesn't fulfill this condition (in fact, $\det B =0$ ). So in order to find an element $x\in\mathbb{R}^3\,\,s.t.\,\,x^tBx\ngtr 0$ , just choose a non-trivial vector in the kernel of B... Tonio
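As a supplement (my own check, not part of the thread), here is a small NumPy sketch of Sylvester's criterion for the two matrices above, together with a kernel vector of B of the kind Tonio suggests.

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 5, 1], [3, 1, 36]], dtype=float)
B = np.array([[1, 2, 3], [2, 5, 1], [3, 1, 34]], dtype=float)

def leading_principal_minors(M):
    """Determinants of the top-left 1x1, 2x2, ..., nxn submatrices."""
    return [np.linalg.det(M[:k, :k]) for k in range(1, M.shape[0] + 1)]

print(leading_principal_minors(A))  # approximately [1, 1, 2]: all positive, so A is positive definite
print(leading_principal_minors(B))  # approximately [1, 1, 0]: the last minor vanishes, so B is not

# A non-trivial kernel vector of B gives x^T B x = 0, witnessing that B is not positive definite.
x = np.array([13.0, -5.0, -1.0])    # B @ x = 0
print(B @ x, x @ B @ x)
```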
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8873574137687683, "perplexity": 692.9286250823311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925274.34/warc/CC-MAIN-20150521113205-00227-ip-10-180-206-219.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/physics-book-slides-off-a-horizontal.42230/
# Physics book slides off a horizontal

1. Sep 8, 2004 ### COCoNuT A physics book slides off a horizontal table top with a speed of 1.60m/s . It strikes the floor after a time of 0.440s . Ignore air resistance. the following are given: v(0)=1.60 t= 0.440s ok this is the question Find the height of the table top above the floor. Take free fall acceleration to be g=9.80m/s^2 . i need to use this formula: x(t) = x(0) + v(0)t + 1/2at^2 <-- height formula the thing is why is v(0)t equal to zero? isnt the v(0) given in the problem? i know the correct answer and i know how the book done it, but they dont explain why v(0)t is equal to zero. x(0) = 0 + 1/2(-9.8m/s^2)(.440s)^2 <--- i dont get why v(0) is zero

2. Sep 8, 2004 ### COCoNuT ok for that question the answer is 0.949m, so you dont have to do the calculations. im working on this question: Find the vertical component of the book's velocity just before the book reaches the floor. express in m/s wouldnt it just be distance/time? 0.949m/0.440 = 2.15 which is incorrect

3. Sep 8, 2004 ### Gza Your book should have explained that it was using the vertical component of the v(0) vector which happens to be zero (can you think of why this is so? Hint: the table is flat). The quantity you came up with was the average velocity of the book through its transit from the table to the ground. There is an acceleration due to gravity, and as you know, accelerations are changes in velocity, so you will need to use the relation v = at; where a will of course be 9.8 m/s^2

4. Sep 8, 2004 ### COCoNuT the answer is -4.321, but why v=at? do i need the formula.... Vfy = V(0)y - gt v = -(-9.8)(0.44) = 4.321 which is wrong, why is that?

5. Sep 8, 2004 ### Tide Bonus question: Does the book land face up or face down? :-)

6. Sep 9, 2004 ### Gza I apologize for my lack of explanation. I got v=at from the kinematical relation: $$v_f = v_i + a\Delta t$$ vi is simply zero since book slides off horizontally (with no initial vertical velocity.) $$\Delta t$$ is simply the time of flight. You got the wrong answer because you added in another minus sign that didn't need to be there. Hope I helped!

7. Sep 9, 2004 ### Gza You know, that would actually make an interesting problem to figure out! (NERD ALERT!!) Correct me if i'm wrong but wouldn't it depend on the velocity of the horizontal launch? Launching the book at a critical speed would ensure it staying upright upon landing. Any other speed and it will either flip over or land on its edge.

8. Sep 9, 2004 ### Tide Yes, it would depend on the velocity at launch as well as whether the book edge is parallel to the edge of the table! This was inspired by my observation that if I ever knock my toast off the counter it seems always to land jelly side down!
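A quick numerical sketch of the kinematics discussed above (my own, using the values given in the problem: horizontal launch speed 1.60 m/s, fall time 0.440 s, g = 9.80 m/s^2):

```python
g = 9.80               # m/s^2
v0_horizontal = 1.60   # m/s; purely horizontal, so the initial vertical velocity is 0
t = 0.440              # s

height = 0.5 * g * t**2     # h = (1/2) g t^2, since v0y = 0
v_y_impact = -g * t         # vertical velocity just before impact (downward taken negative)

print(f"table height      ~ {height:.3f} m")      # ~0.949 m
print(f"vertical velocity ~ {v_y_impact:.2f} m/s") # ~ -4.31 m/s
```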
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8196254968643188, "perplexity": 1626.4116209913739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103270.12/warc/CC-MAIN-20170817111816-20170817131816-00673.warc.gz"}
http://www.newworldencyclopedia.org/entry/Refraction
# Refraction A straw dipped in a colored solution appears to be broken, because of the refraction of light as it passes from the solution to the air. Refraction is the change in direction of a wave due to a change in its speed, as observed when a wave passes from one medium to another. The most common example is the refraction of light, as happens in the formation of rainbows in the sky or rainbow-like bands when white light passes through a glass prism. Other types of waves also undergo refraction, for example, when sound waves pass from one medium into another. The refraction of waves through a medium is quantified in terms of what is called the refractive index (or index of refraction). The refractive index of a medium is a measure of how much the speed of light (or other waves) is reduced inside the medium, compared with the speed of light in vacuum or air. For example, if a sample of glass has a refractive index of 1.5, it means that the speed of light traveling through the glass is $1/1.5=0.67$ times the speed of light in vacuum or air. Based on knowledge of the properties of refraction and refractive index, a number of applications have been developed. For example, the invention of lenses and refracting telescopes rests on an understanding of refraction. Also, knowledge of the refractive index of various substances is used to evaluate the purity of a substance or measure its concentration in a mixture. In eye tests performed by ophthalmologists or optometrists, the property of refraction forms the basis for the technique known as refractometry. ## Explanation In optics, refraction occurs when light waves travel from a medium with a particular refractive index to a second medium with another refractive index. At the boundary between the media, the wave's phase velocity is altered, it changes direction, and its wavelength increases or decreases, but its frequency remains constant. For example, a light ray will undergo refraction as it enters and leaves glass. An understanding of this concept led to the invention of lenses and the refracting telescope. Refraction of light waves in water. The dark rectangle represents the actual position of a pencil sitting in a bowl of water. The light rectangle represents the apparent position of the pencil. Notice that the end (X) looks like it is at (Y), a position that is considerably shallower than (X). Refraction can be seen when looking into a bowl of water. Air has a refractive index of about 1.0003, and water has a refractive index of about 1.33. If a person looks at a straight object, such as a pencil or straw, which is placed at a slant, partially in the water, the object appears to bend at the water's surface. This is due to the bending of light rays as they move from the water to the air. Once the rays reach the eye, the eye traces them back as straight lines (lines of sight). The lines of sight (shown as dashed lines) intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is. The depth that the water appears to be when viewed from above is known as the apparent depth, Diagram of refraction of water waves The diagram on the right shows an example of refraction in water waves. Ripples travel from the left and pass over a shallower region inclined at an angle to the wavefront. The waves travel more slowly in the shallower water, so the wavelength decreases and the wave bends at the boundary. 
The dotted line represents the normal to the boundary. The dashed line represents the original direction of the waves. The phenomenon explains why waves on a shoreline never hit the shoreline at an angle. Whichever direction the waves travel in deep water, they always refract towards the normal as they enter the shallower water near the beach. Refraction is also responsible for rainbows and for the splitting of white light into a rainbow-spectrum as it passes through a glass prism. Glass has a higher refractive index than air and the different frequencies of light travel at different speeds (dispersion), causing them to be refracted at different angles, so that you can see them. The different frequencies correspond to different colors observed. While refraction allows for beautiful phenomena such as rainbows it may also produce peculiar optical phenomena, such as mirages and Fata Morgana. These are caused by the change of the refractive index of air with temperature. Refraction in a Perspex (acrylic) block. Snell's law is used to calculate the degree to which light is refracted when traveling from one medium to another. Recently some metamaterials have been created which have a negative refractive index. With metamaterials, we can also obtain the total refraction phenomena when the wave impedances of the two media are matched. There is no reflected wave. Also, since refraction can make objects appear closer than they are, it is responsible for allowing water to magnify objects. First, as light is entering a drop of water, it slows down. If the water's surface is not flat, then the light will be bent into a new path. This round shape will bend the light outwards and as it spreads out, the image you see gets larger. ## Refractive index The refractive index (or index of refraction) of a medium is the inverse ratio of the phase velocity (defined below) of a wave phenomenon such as light or sound, and the phase velocity in a reference medium (substance that the wave passes through). It is most commonly used in the context of light with vacuum as a reference medium, although historically other reference media (e.g. air at a standard pressure and temperature) have been common. It is usually given the symbol n, In the case of light, it equals $n=\sqrt{\epsilon_r\mu_r}$, where εr is the material's relative permittivity (how a material affects an electric field), and μr is its relative permeability (how a material reacts to a magnetic field). For most materials, μr is very close to 1 at optical frequencies, therefore n is approximately $\sqrt{\epsilon_r}$. n may be less than 1 and this has practical technical applications, such as effective mirrors for X-rays based on total internal reflection. The phase velocity is defined as the rate at which any part of the waveform travels through space; that is, the rate at which the phase of the waveform is moving. The group velocity is the rate that the envelope of the waveform is propagating; that is, the rate of variation of the amplitude (the maximum up and down motion) of the waveform. It is the group velocity, the velocity at which the crests and troughs of a wave move through space, that (almost always) represents the rate that information (and energy) may be transmitted by the wave—for example, the velocity at which a pulse of light travels down an optical fiber. ### The speed of light Refraction of light at the interface between two media of different refractive indices, with n2 > n1. 
The velocity is lower in the second medium (v2 < v1), therefore the angle of refraction θ2 is less than the angle of incidence θ1; that is, the ray in the higher-index medium is closer to the normal. The speed of all electromagnetic radiation in vacuum is the same, approximately 3×10^8 meters per second, and is denoted by c. Therefore, if v is the phase velocity of radiation of a specific frequency in a specific material, the refractive index is given by $n =\frac{c}{v}$. This number is typically greater than one: the higher the index of the material, the more the light is slowed down. However, at certain frequencies (e.g., X-rays), n will actually be smaller than one. This does not contradict the theory of relativity, which holds that no information-carrying signal can ever propagate faster than c, because the phase velocity is not the same as the group velocity or the signal velocity, which is the same as the group velocity except when the wave is passing through an absorptive medium. Sometimes, a "group velocity refractive index," usually called the group index, is defined: $n_g=\frac{c}{v_g}$ where vg is the group velocity. This value should not be confused with n, which is always defined with respect to the phase velocity. At the microscale, an electromagnetic wave's phase velocity is slowed in a material because the electric field creates a disturbance in the charges of each atom (primarily the electrons) proportional (a $y=kx$ relationship) to the permittivity. The charges will, in general, oscillate slightly out of phase with respect to the driving electric field. The charges thus radiate their own electromagnetic wave that is at the same frequency but with a phase delay. The macroscopic sum of all such contributions in the material is a wave with the same frequency but shorter wavelength than the original, leading to a slowing of the wave's phase velocity. Most of the radiation from oscillating material charges will modify the incoming wave, changing its velocity. However, some net energy will be radiated in other directions (see scattering). If the refractive indices of two materials are known for a given frequency, then one can compute the angle by which radiation of that frequency will be refracted as it moves from the first into the second material from Snell's law.

### Negative Refractive Index

Recent research has also demonstrated the existence of negative refractive index, which can occur if ε and μ are simultaneously negative. Not thought to occur naturally, it can be achieved with so-called metamaterials. It offers the possibility of perfect lenses and other exotic phenomena such as a reversal of Snell's law.

## List of indices of refraction

Some representative refractive indices:

| Material | n at f = 5.09×10^14 Hz |
| --- | --- |
| Vacuum | 1 (exactly) |
| Helium | 1.000036 |
| Air @ STP | 1.0002926 |
| Carbon dioxide | 1.00045 |
| Water Ice | 1.31 |
| Liquid Water (20°C) | 1.333 |
| Cryolite | 1.338 |
| Acetone | 1.36 |
| Ethanol | 1.36 |
| Teflon | 1.35 - 1.38 |
| Glycerol | 1.4729 |
| Acrylic glass | 1.490 - 1.492 |
| Rock salt | 1.516 |
| Crown glass (pure) | 1.50 - 1.54 |
| Salt (NaCl) | 1.544 |
| Polycarbonate | 1.584 - 1.586 |
| Flint glass (pure) | 1.60 - 1.62 |
| Crown glass (impure) | 1.485 - 1.755 |
| Bromine | 1.661 |
| Flint glass (impure) | 1.523 - 1.925 |
| Cubic zirconia | 2.15 - 2.18 |
| Diamond | 2.419 |
| Moissanite | 2.65 - 2.69 |
| Cinnabar (Mercury sulfide) | 3.02 |
| Gallium(III) phosphide | 3.5 |
| Gallium(III) arsenide | 3.927 |
| Silicon | 4.01 |

Many materials have well-characterized refractive indices, but these indices depend strongly on the frequency of light.
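As a small self-contained illustration of Snell's law, n1·sin(θ1) = n2·sin(θ2), here is a sketch of my own (not part of the encyclopedia article) using indices from the table above:

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Refraction angle in degrees, or None when total internal reflection occurs."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # no transmitted ray
    return math.degrees(math.asin(s))

# Light passing from air (n ~ 1.0003) into water (n ~ 1.333) at 30 degrees incidence:
print(refraction_angle(1.0003, 1.333, 30.0))   # ~22 degrees, bent toward the normal

# Going from water back into air at a steep angle gives total internal reflection:
print(refraction_angle(1.333, 1.0003, 60.0))   # None
```

The indices used here are the tabulated values for visible light; as just noted, they change with the frequency of the wave.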
Therefore, any numeric value for the index is meaningless unless the associated frequency is specified. There are also weaker dependencies on temperature, pressure/stress, and so forth, as well as on precise material compositions. For many materials and typical conditions, however, these variations are at the percent level or less. It is therefore especially important to cite the source for an index measurement, if precision is required. In general, an index of refraction is a complex number with both a real and an imaginary part, where the latter indicates the strength of absorption loss at a particular wavelength—thus, the imaginary part is sometimes called the extinction coefficient k. Such losses become particularly significant—for example, in metals at short wavelengths (such as visible light)—and must be included in any description of the refractive index. ## Dispersion and absorption In real materials, the polarization does not respond instantaneously to an applied field. This causes dielectric loss, which can be expressed by a permittivity that is both complex and frequency dependent. Real materials are not perfect insulators either, meaning they have non-zero Direct Current (DC) conductivity. Taking both aspects into consideration, we can define a complex index of refraction: $\tilde{n}=n-i\kappa$ Here, n is the refractive index indicating the phase velocity, while κ is called the extinction coefficient, which indicates the amount of absorption loss when the electromagnetic wave propagates through the material. Both n and κ are dependent on the frequency. The effect that n varies with frequency (except in vacuum, where all frequencies travel at the same speed c) is known as dispersion, and it is what causes a prism to divide white light into its constituent spectral colors, which is how rainbows are formed in rain or mists. Dispersion is also the cause of chromatic aberration in lenses. Since the refractive index of a material varies with the frequency (and thus wavelength) of light, it is usual to specify the corresponding vacuum wavelength at which the refractive index is measured. Typically, this is done at various well-defined spectral emission lines; for example, nD is the refractive index at the Fraunhofer "D" line, the centre of the yellow sodium double emission at 589.29 nm wavelength. The Sellmeier equation is an empirical formula that works well in describing dispersion, and Sellmeier coefficients are often quoted instead of the refractive index in tables. For some representative refractive indices at different wavelengths, see list of indices of refraction. As shown above, dielectric loss and non-zero DC conductivity in materials cause absorption. Good dielectric materials such as glass have extremely low DC conductivity, and at low frequencies the dielectric loss is also negligible, resulting in almost no absorption (κ ≈ 0). However, at higher frequencies (such as visible light), dielectric loss may increase absorption significantly, reducing the material's transparency to these frequencies. The real and imaginary parts of the complex refractive index are related through use of the Kramers-Kronig relations. For example, one can determine a material's full complex refractive index as a function of wavelength from an absorption spectrum of the material. ## Birefringence A calcite crystal laid upon a paper with some letters showing birefringence. 
The refractive index of certain media may be different depending on the polarization and direction of propagation of the light through the medium. This is known as birefringence and is described by the field of crystal optics. ## Nonlinearity The strong electric field of high intensity light (such as output of a laser) may cause a medium's refractive index to vary as the light passes through it, giving rise to nonlinear optics. If the index varies quadratically with the field (linearly with the intensity), it is called the optical Kerr effect and causes phenomena such as self-focusing and self phase modulation. If the index varies linearly with the field (which is only possible in materials that do not possess inversion symmetry), it is known as the Pockels effect. ## Inhomogeneity A gradient-index lens with a parabolic variation of refractive index (n) with radial distance (x). The lens focuses light in the same way as a conventional lens. If the refractive index of a medium is not constant, but varies gradually with position, the material is known as a gradient-index medium and is described by gradient index optics. Light traveling through such a medium can be bent or focussed, and this effect can be exploited to produce lenses, some optical fibers and other devices. Some common mirages are caused by a spatially varying refractive index of air. ## Applications The refractive index of a material is the most important property of any optical system that uses the property of refraction. It is used to calculate the focusing power of lenses and the dispersive power of prisms. Since refractive index is a fundamental physical property of a substance, it is often used to identify a particular substance, confirm its purity, or measure its concentration. Refractive index is used to measure solids (glasses and gemstones), liquids, and gases. Most commonly, it is used to measure the concentration of a solute in an aqueous solution. A refractometer is the instrument used to measure refractive index. For a solution of sugar, the refractive index can be used to determine the sugar content. In medicine, particularly ophthalmology and optometry, the technique of refractometry utilizes the property of refraction for administering eye tests. This is a clinical test in which a phoropter is used to determine the eye's refractive error and, based on that, the best corrective lenses to be prescribed. A series of test lenses in graded optical powers or focal lengths are presented, to determine which ones provide the sharpest, clearest vision. ## Alternative meaning: Refraction in metallurgy In metallurgy, the term refraction has another meaning. It is a property of metals that indicates their ability to withstand heat. Metals with a high degree of refraction are referred to as refractory. These metals have high melting points, derived from the strong interatomic forces that are involved in metal bonds. Large quantities of energy are required to overcome these forces. Examples of refractory metals include molybdenum, niobium, tungsten, and tantalum. Hafnium carbide is the most refractory binary compound known, with a melting point of 3,890 degrees C.[1][2]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8712250590324402, "perplexity": 538.4426021420958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542288.7/warc/CC-MAIN-20161202170902-00145-ip-10-31-129-80.ec2.internal.warc.gz"}
https://byjus.com/ncert-exemplar-class-10-science-chapter-14-sources-of-energy/
# NCERT Exemplar Class 10 Science Solutions for Chapter 14 - Sources Of Energy

NCERT Exemplar solutions for Class 10 Science Chapter 14, Sources of Energy, give necessary insights into the concepts involved in Class 10 Sources of Energy. Studying the solutions provided in this NCERT exemplar is crucial, as it gives you a clear idea of the questions that can be asked on the topic. Having extra knowledge of questions of varied difficulty will boost your confidence, so you can write the examination without any fear. This chapter sheds light on important topics such as non-conventional sources of energy, their importance, the difference between conventional and non-conventional energy resources, and their usage. It also covers the types of energy sources and their utilization. Take a closer look at the Class 10 Science Chapter 14 NCERT exemplars below.

## Class 10 Sources of Energy Importance

Today we hear so much about the energy crisis and the need for conservation of sources of energy. Thus Chapter 14 is quite an important chapter for students. Here, they will study good sources of energy and their conservation methods. They will also discuss conventional and non-conventional sources of energy, renewable and non-renewable sources of energy, as well as learn about the thermal power plant, the hydroelectric power plant, and the biogas plant. While Chapter 14 is not so difficult compared to other chapters, we are offering a free NCERT exemplar for Class 10 Science Chapter 14 here to help students understand all the chapter topics smoothly. The exemplars provide detailed answers to all the questions given at the end of the chapter, and our experts have also provided additional explanations for difficult topics. Students can use these exemplars to practice solving different types of questions and also to have a thorough revision before the examinations.

### Topics covered in NCERT Exemplar of Chapter 14 Sources of Energy

1. What Is A Good Source Of Energy?
2. Conventional Sources Of Energy
   1. Fossil Fuels
   2. Thermal Power Plant
   3. Hydro Power Plants
   4. Improvements In Technology For Using Conventional Sources Of Energy
3. Alternative Or Non-conventional Sources Of Energy
   1. Solar Energy
   2. Energy From The Sea
   3. Geothermal Energy
   4. Nuclear Energy
4. Environmental Consequences
5. How Long Will An Energy Source Last Us?

BYJU'S provides you with CBSE notes, CBSE textbooks, sample papers and previous year questions for the benefit of students. At BYJU'S, students are provided with proper guidance, and their performance is assessed through periodic exercises, tests and assignments, which help them stay ahead of the competition. To get all the benefits we provide, download BYJU'S learning app and experience a new way of learning.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8493282198905945, "perplexity": 1638.8121014866726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525009.36/warc/CC-MAIN-20190717021428-20190717043428-00269.warc.gz"}
http://es.planetcalc.com/search/?tag=1776
# Search results

#### Volume of body given the buoyant force
This calculator computes the volume of body given the buoyant force and density of the body, assuming that the buoyant force equals gravity force.

• Density of oil: Recalculation of density of oil for different temperature and pressure values. Formulas are taken from Russia's GOST R 8.610-2004, "State system for ensuring the uniformity of measurements. Density of oil. The tables for recalculation" standard
• Petroleum product density: Petroleum product density conversion, according to GOST 3900-85
• Surface tension. Weight and volume of the drop of liquid: This online calculator calculates the weight and the volume of the drop given the surface tension, diameter of the capillary tube, and density of the liquid
• Wind energy and wind power: This online calculator computes kinetic energy of wind and wind power
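A tiny sketch (my own, not planetcalc's code) of the formula behind the first calculator listed above: if the buoyant force on a body equals its weight, then its mass is F_b / g and its volume follows from the body's density.

```python
G_ACCEL = 9.81  # m/s^2

def body_volume(buoyant_force_n, body_density_kg_m3):
    """Volume of the body (m^3) given buoyant force (N) and body density (kg/m^3)."""
    mass = buoyant_force_n / G_ACCEL       # weight balances buoyant force
    return mass / body_density_kg_m3

print(body_volume(981.0, 500.0))  # 0.2 m^3 for a ~100 kg body of density 500 kg/m^3
```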
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8576629161834717, "perplexity": 2868.525848527681}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320368.57/warc/CC-MAIN-20170624235551-20170625015551-00240.warc.gz"}
https://stacks.math.columbia.edu/tag/061R
Lemma 10.69.5. Let $R$ be a ring. Let $M$ be an $R$-module. Let $f_1, \ldots , f_ c \in R$ be an $M$-quasi-regular sequence. For any $i$ the sequence $\overline{f}_{i + 1}, \ldots , \overline{f}_ c$ of $\overline{R} = R/(f_1, \ldots , f_ i)$ is an $\overline{M} = M/(f_1, \ldots , f_ i)M$-quasi-regular sequence. Proof. It suffices to prove this for $i = 1$. Set $\overline{J} = (\overline{f}_2, \ldots , \overline{f}_ c) \subset \overline{R}$. Then \begin{align*} \overline{J}^ n\overline{M}/\overline{J}^{n + 1}\overline{M} & = (J^ nM + f_1M)/(J^{n + 1}M + f_1M) \\ & = J^ nM / (J^{n + 1}M + J^ nM \cap f_1M). \end{align*} Thus, in order to prove the lemma it suffices to show that $J^{n + 1}M + J^ nM \cap f_1M = J^{n + 1}M + f_1J^{n - 1}M$ because that will show that $\bigoplus _{n \geq 0} \overline{J}^ n\overline{M}/\overline{J}^{n + 1}\overline{M}$ is the quotient of $\bigoplus _{n \geq 0} J^ nM/J^{n + 1}M \cong M/JM[X_1, \ldots , X_ c]$ by $X_1$. Actually, we have $J^ nM \cap f_1M = f_1J^{n - 1}M$. Namely, if $m \not\in J^{n - 1}M$, then $f_1m \not\in J^ nM$ because $\bigoplus J^ nM/J^{n + 1}M$ is the polynomial algebra $M/JM[X_1, \ldots , X_ c]$ by assumption. $\square$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9980543851852417, "perplexity": 401.3387734296954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00718.warc.gz"}
https://www.nickzom.org/blog/2019/02/06/how-to-calculate-and-solve-for-gravitational-force-the-calculator-encyclopedia/
How to Calculate and Solve for Gravitational Force | The Calculator Encyclopedia

The image above represents the gravitational force. To compute the gravitational force of a field, three parameters are needed, and these parameters are mass (m1), mass (m2) and the radius between the masses (r).

The formula for calculating the gravitational force: F = Gm1m2 / r^2

Where;
F = Gravitational force
m1 = Mass 1
m2 = Mass 2
r = Radius between the masses

Let's solve an example; Find the gravitational force of a field when mass 1 is 8 kg, mass 2 is 10 kg and the radius between the masses is 14 m.

This implies that;
m1 = Mass 1 = 8 kg
m2 = Mass 2 = 10 kg
r = Radius between the masses = 14 m

F = Gm1m2 / r^2
F = (6.67 × 10^-11 × 8 × 10) / 14^2
F = 5.336 × 10^-9 / 196
F = 2.722 × 10^-11

Therefore, the gravitational force is 2.722 × 10^-11 newtons (N).

Nickzom Calculator – The Calculator Encyclopedia is capable of calculating the gravitational force. To get the answer and workings of the gravitational force using the Nickzom Calculator – The Calculator Encyclopedia, you first need to obtain the app. You can get this app via any of these means: To get access to the professional version via web, you need to register and subscribe for NGN 1,500 per annum to have full access to all functionalities. You can also try the demo version via https://www.nickzom.org/calculator Once you have obtained the calculator encyclopedia app, proceed to the Calculator Map, then click on Gravitational Field under the Physics section. Now, click on Gravitational Force under Gravitational Field. The screenshot below displays the page or activity to enter your values, to get the answer for the gravitational force according to the respective parameters, which are the mass (m1), mass (m2) and radius between the masses (r). Now, enter the values appropriately and accordingly for the parameters as required by the example above, where the mass (m1) is 8 kg, mass (m2) is 10 kg and the radius between the masses (r) is 14 m. Finally, click on Calculate. As you can see from the screenshot above, Nickzom Calculator – The Calculator Encyclopedia solves for the gravitational force and presents the formula, workings and steps too.
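The same calculation is short enough to do without the app; here is a sketch of my own (not the Nickzom implementation), using the value of G quoted in the worked example above:

```python
G = 6.67e-11  # gravitational constant, N m^2 / kg^2 (as used in the example)

def gravitational_force(m1_kg, m2_kg, r_m):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r^2."""
    return G * m1_kg * m2_kg / r_m**2

print(gravitational_force(8, 10, 14))  # ~2.722e-11 N, matching the worked example
```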
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9793190956115723, "perplexity": 1719.928498458777}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247513661.77/warc/CC-MAIN-20190222054002-20190222080002-00155.warc.gz"}
http://mathhelpforum.com/discrete-math/139792-two-set-theory-problems.html
# Thread: Two set theory problems 1. ## Two set theory problems I have this kind of problems: 1. Let $A$ be a set so that $card(A) \geq 2$. Show that $Sq(A) \preceq A^\omega$. $A ^\omega$ means funtion $g: \omega \rightarrow A$ and $Sq(A)= \bigcup_{n \in \omega} A^n$ ( $A^n$ means funtions $h: n \rightarrow A$). I know that I have to build injection between those sets, but how to do it? 2. Let's designate $F(A) = \{B \in P(A) \mid B finite\}$. Show that $F(A) \approx A$, when $A$ is infinite. I know that I have find bijection between those sets, but how to do it in this case? 2. Hm, I don't know if this is the step in the same direction, but the first thing that comes to mind for 2) is, if A were the set of natural numbers, then each finite subse can be encoded by a single number. E.g., the list of prime numbers is $p_1 = 2$, $p_2 = 3$, $p_3 = 5$, $p_4 = 7$, ..., so $\{3,4,7,0\}$ would be encoded as $p_1^{3+1}\cdot p_2^{4+1}\cdot p_3^{7+1}\cdot p_4^{0+1}$. By Fundamental Theorem of Arithmetic, factorization into primes is unique, so one gets the original set unambiguously from the code. 3. Originally Posted by Ester I have this kind of problems: 1. Let $A$ be a set so that $card(A) \geq 2$. Show that $Sq(A) \preceq A^\omega$. $A ^\omega$ means funtion $g: \omega \rightarrow A$ and $Sq(A)= \bigcup_{n \in \omega} A^n$ ( $A^n$ means funtions $h: n \rightarrow A$). I know that I have to build injection between those sets, but how to do it? I think it is enough to show that $card(A^n) \preceq A^\omega$. Showing this should be easy. Do you know how to continue after proving this? 4. Originally Posted by Ester 2. Let's designate $F(A) = \{B \in P(A) \mid B finite\}$. Show that $F(A) \approx A$, when $A$ is infinite. I know that I have find bijection between those sets, but how to do it in this case? I guess you've forgotten the assumption that A is infinite. Note that $F(A)=\bigcup_{n\in\mathbb{N}} \{ B\subset A \mid \mathrm{card} B=n\}$. (Finite sets = sets having 0,1,2,3,... elements.) Can you show (using some results you know from your lecture) that $\{ B\subset A \mid \mathrm{card} B=n\} \approx A$ for any infinite set A? How do you continue from there? 5. Originally Posted by kompik I think it is enough to show that $card(A^n) \preceq A^\omega$. Showing this should be easy. Do you know how to continue after proving this? I have been thinking this problem, and I can't get any further. Can you explain why this: $card(A^n) \preceq A^\omega$ instead of $card(\bigcup_{n \in \omega} A^n) \preceq A^\omega$? 6. Originally Posted by kompik I guess you've forgotten the assumption that A is infinite. Note that $F(A)=\bigcup_{n\in\mathbb{N}} \{ B\subset A \mid \mathrm{card} B=n\}$. (Finite sets = sets having 0,1,2,3,... elements.) Can you show (using some results you know from your lecture) that $\{ B\subset A \mid \mathrm{card} B=n\} \approx A$ for any infinite set A? How do you continue from there? This problem has also cause me some difficulties over the weekend.. I have been thinking this too, and I still haven't been able to solve it. 7. Originally Posted by Ester I have been thinking this problem, and I can't get any further. Can you explain why this: $card(A^n) \preceq A^\omega$ instead of $card(\bigcup_{n \in \omega} A^n) \preceq A^\omega$? If you don't mind, I will use $|A|$ instead of $card A$. If $\kappa$ is a cardinal and $|A^n| \le \varkappa$, then $|\bigcup_{n\in\omega} A^n| \le \aleph_0.\varkappa$, right? Now, for every infinite cardinal number you have $\aleph_0.\varkappa=\varkappa$. 
Originally Posted by wikipedia If either $\varkappa$ or $\mu$ is infinite and both are non-zero, then $\kappa\cdot\mu = \max\{\kappa, \mu\}.$ See cardinal multiplication at wikipedia, but I guess you've covered this at your lecture. 8. Originally Posted by Ester This problem has also cause me some difficulties over the weekend.. I have been thinking this too, and I still haven't been able to solve it. Have you seen this result at your lecture: If A is infinite then $A\times A \approx A$. Can you continue from there? 9. Originally Posted by kompik If you don't mind, I will use $|A|$ instead of $card A$. If $\kappa$ is a cardinal and $|A^n| \le \varkappa$, then $|\bigcup_{n\in\omega} A^n| \le \aleph_0.\kappa$, right? Hmm... I'm just wondering, how did you get in $|\bigcup_{n\in\omega} A^n| \le \aleph_0.\kappa$ that $\aleph_0$ there? I understand why I only need to show that $card(A^n) \preceq A^\omega$, but now I'm not sure how to construct an injection between these sets. 10. Originally Posted by kompik Have you seen this result at your lecture: If A is infinite then $A\times A \approx A$. Can you continue from there? Yes, I have seen that result, but I don't understand why I need it in this exercise. I'm totally lost. 11. Originally Posted by Ester Hmm... I'm just wondering, how did you get in $|\bigcup_{n\in\omega} A^n| \le \aleph_0.\kappa$ that $\aleph_0$ there? I understand why I only need to show that $card(A^n) \preceq A^\omega$, but now I'm not sure how to construct an injection between these sets. I think I have mixed up some things. (It wasn't always clear whether I am speaking about problem 1 or problem 2.) To be sure - now I am talking about problem 1. You know that $A\times A \approx A$ (for A infinite). Prove by induction that $A^n \approx A$. This means $|A^n|=|A|$. Now, if you make a union of countably many sets, each of them having cardinality $\varkappa$ then the cardinality of the union is (at most) $\aleph_0.\varkappa$. (This should answer your question concerning $\aleph_0$.) As far as your question about injection is concerned: To show that there is an inequality between two cardinalities, finding an injection is one possibility. The solution I am suggesting is different - we show the inequality without explicitly constructing an injective map between the two sets. 12. As far as problem 2 is concerned: We already know that $|A^n|=|A|$ for each n (assuming that A is infinite). Now, if you can construct an injection from $\{B\subseteq A; |B|=n\}$ to $A^n$, you get the inequality $|\{B\subseteq A; |B|=n\}|\le |A|$. The opposite inequality is (I believe) easy. Combining these two inequalities you get (using Cantor-Bernstein) that $|\{B\subseteq A; |B|=n\}|=|A|$ for each n. 13. Originally Posted by kompik As far as problem 2 is concerned: We already know that $|A^n|=|A|$ for each n (assuming that A is infinite). Now, if you can construct an injection from $\{B\subseteq A; |B|=n\}$ to $A^n$, you get the inequality $|\{B\subseteq A; |B|=n\}|\le |A|$. The opposite inequality is (I believe) easy. Combining these two inequalities you get (using Cantor-Bernstein) that $|\{B\subseteq A; |B|=n\}|=|A|$ for each n. Okay, this is how I was thinking to "solve" it: Let's designate $H: \{B \subseteq A \mid card(B) = n\} \rightarrow A^n$, so that $H(B) = f(\{n\})$, where $A^n$ is a function $f:n \rightarrow A$ with every $n\in \omega$. Let's choose arbitrary $B_1,B_2 \in dom(H)$ so that $B_1 \ne B_2$. Now $card(B_1) = n, card(B_2) = m$, where $n ,m \in \omega, n \ne m$.
$H(B_1) = f_1(\{n\})$ and $H(B_2) = f_2(\{m\})$. Now $f_1 \ne f_2$ so $f_1(\{n\}) \ne f_2(\{m\})$ and that's why $H(B_1) \ne H(B_2)$. So $H$ is injective. The other direction I was thinking to do like this: Let's designate $G: A^n \rightarrow \{B \subseteq A \mid card(B) = n\}$, so that $G(f) = B$ where $B \approx n$ and $A^n$ is a function $f:n \rightarrow A$ with every $n\in \omega$. Let's choose arbitrary $f_1, f_2 \in A^n, n\in \omega, f_1 \ne f_2$. Now $f_1:n \rightarrow A, f_2:m \rightarrow A$. $G(f_1) = B_1, G(f_2) = B_2, B_1 \approx n, B_2 \approx m$, so $B_1 \ne B_2$ and that's why $G(f_1) \ne G(f_2)$. So $G$ is injective. I think I got that last one somehow wrong... And I'm not sure about the first one either. 14. Originally Posted by Ester 1. Let $A$ be a set so that $card(A) \geq 2$. Show that $Sq(A) \preceq A^\omega$. $A ^\omega$ means funtion $g: \omega \rightarrow A$ and $Sq(A)= \bigcup_{n \in \omega} A^n$ ( $A^n$ means funtions $h: n \rightarrow A$). I know that I have to build injection between those sets, but how to do it? My original intention was to guide you somehow, so that you come up with a solution by yourself (in several steps). Maybe I am not good at this, or maybe I tried to use the facts about cardinal numbers, which you're not familiar with. So I will simply try to write down some solutions and then we can go through them, if something is not clear. Problem 1 first. Forget the approach I suggested before (using cardinalities). We construct an injection F from Sq(A) to $A^\omega$. A has at least two elements. Choose some $a\ne b$ in A. If f belongs to some $A^n$, i.e., it is a function from $n=\{0,1,\dots,n-1\}$ to A, let us define $F(f):\omega \to A$ as follows: $F(f)(k)=\begin{cases} f(k),&\text{if }k<n\\ a,&\text{if }k=n\\ b,&\text{if }k>n. \end{cases}$ The basic idea is using a and b to discriminate between elements of $A^n$ and $A^m$ for $n\ne m$. You can try to prove that this is indeed an injection. Drawing some pictures might help to understand it. 15. Originally Posted by Ester 2. Let's designate $F(A) = \{B \in P(A) \mid B finite\}$. Show that $F(A) \approx A$, when $A$ is infinite. I know that I have find bijection between those sets, but how to do it in this case? In this case I do not know how to describe the bijection explicitly. So I try to prove |F(A)|=|A| instead. (Which is equivalent.) $|A|\le|F(A)|$ is easy, since we have injection $a\mapsto\{a\}$ from A to F(A). Now, let us denote $A_n:=\{B\subseteq A; |B|=n\}$. Then $F(A)=\bigcup_{n\in\omega} A_n.$ If I show $|A_n|\le|A|$ then I get $|F(A)|\le|A|$. (I tried to explain this before - check the older posts.) You agreed that $|A|=|A\times A|$ and this can be used to show (by induction) that $|A|=|A^n|$. (Here $A^n$ means the set of all ordered n-tuples, which is basically the same thing as all functions $n\to A$.) I want to show $|A_n|\le|A^n|$ by exhibiting an injection $A_n\to A^n$. The injection can be (for example) obtained in such a way that to a set (containing n elements of A) I assign an n-tuple where these elements are ordered in increasing order. So we have $|A_n|\le |A^n| = |A|.$ EDIT: Perhaps someone could provide a simpler solution to this one - I was not able to come up with anything easier from scratch. (In particular, I was not able to avoid using Cantor-Bernstein and cardinalities.)
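The prime-power encoding suggested early in the thread (for the special case where A is the set of natural numbers) is concrete enough to try out. Here is a small sketch of my own showing that the encoding of a finite subset is injective, since unique factorization lets us decode the set again:

```python
from sympy import prime, factorint

def encode(finite_subset):
    """Encode a finite set of naturals: the k-th prime raised to (k-th smallest element + 1)."""
    code = 1
    for k, element in enumerate(sorted(finite_subset), start=1):
        code *= prime(k) ** (element + 1)
    return code

def decode(code):
    """Recover the set from its code via prime factorization (subtract 1 from each exponent)."""
    return {exponent - 1 for exponent in factorint(code).values()}

s = {3, 4, 7, 0}
print(encode(s))                 # 2^1 * 3^4 * 5^5 * 7^8, since the sorted elements are 0, 3, 4, 7
print(decode(encode(s)) == s)    # True
```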
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 135, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9628283977508545, "perplexity": 228.2125035940465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686117.24/warc/CC-MAIN-20170920014637-20170920034637-00501.warc.gz"}
https://compsciedu.com/mcq-questions/Fuzzy-Systems/NET-computer-science-question-paper/3
21. A basic feasible solution to an m-origin, n-destination transportation problem is said to be ................... if the number of positive allocations is less than m + n – 1.
a. degenerate
b. non-degenerate
c. unbounded
d. unbalanced

22. The total transportation cost in an initial basic feasible solution to the following transportation problem using Vogel's Approximation method is
a. 76
b. 80
c. 90
d. 96
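The degeneracy test in Q21 is easy to express in code. Here is a quick sketch of my own (the allocation matrix below is made up for illustration, since the cost table for Q22 is not reproduced here):

```python
def is_degenerate(allocation):
    """allocation: m x n matrix of shipped quantities for a basic feasible solution."""
    m, n = len(allocation), len(allocation[0])
    positive = sum(1 for row in allocation for x in row if x > 0)
    return positive < m + n - 1

example = [[20, 0, 0],
           [0, 30, 0],
           [0, 0, 25]]           # hypothetical 3x3 allocation with only 3 positive cells
print(is_degenerate(example))    # True, because 3 < 3 + 3 - 1 = 5
```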
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8030047416687012, "perplexity": 887.8296422481955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00146.warc.gz"}
http://mathoverflow.net/questions/8331/approximately-known-matrix?sort=oldest
# Approximately known matrix

What linear algebraic quantities can be calculated precisely for a nonsingular matrix whose entries are only approximately known (say, entries in the matrix are all huge numbers, known up to an accuracy of plus or minus some small number)? Clearly not the determinant or the trace, but probably the signature, and maybe some sort of twisted signatures? What is a reference for this sort of stuff? (numerical linear algebra, my guess for the name of such a field, seems to mean something else). -

SVD is stable, and in some sense incorporates all the stable data you can have, so the answer is: "anything you can see on the SVD". Specifically you can easily see the signature (assuming the matrix is far enough from being singular). - Could you add some details? What do you mean by "SVD is stable"? –  Daniel Moskovich Dec 9 '09 at 10:11 I mean that you can easily bound the change in the output by the change in the input (and the input itself), which is something you cannot do for e.g. LU decomposition. Caveat: If some Eigenvalue of the SVD is (close to) multiple, then the output is of course the span of the related vectors, and not the vectors themselves. –  David Lehavi Dec 9 '09 at 11:05 I assume you mean “far enough from singular”. –  Harald Hanche-Olsen Dec 9 '09 at 13:59 @ Harald: Thanks –  David Lehavi Dec 9 '09 at 15:02 This is a great answer. What about the change in the singular values? Also, is there a name for this field or a reference for related problems? –  Daniel Moskovich Dec 10 '09 at 4:24

If an invariant of nonsingular matrices is locally constant (I guess this is what's meant by "can be calculated precisely"), then it can only depend on the connected component of the linear group, which means only the orientation (sign of the determinant) can be calculated. For symmetric matrices, the same argument shows that any calculable quantity is a function of the signature since any matrix can be connected to a standard representative of one of the signature classes using a continuous version of orthogonalization. - I think the words "approximate" and "numerical" in the question hint that this is not really what Daniel has in mind.... –  David Lehavi Dec 9 '09 at 15:15 This is a fair interpretation of "precisely," I think. Perhaps Daniel should rephrase his question if that's not what he has in mind. –  Qiaochu Yuan Dec 9 '09 at 15:50
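A quick numerical illustration (my own, not part of the question or answers) of the "SVD is stable" point: perturbing the entries of a matrix by a small amount changes each singular value by no more than the spectral norm of the perturbation (Weyl's inequality for singular values).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) * 1e6        # "huge" entries, as in the question
E = rng.normal(size=(4, 4)) * 1e-3       # small uncertainty in each entry

s_A = np.linalg.svd(A, compute_uv=False)
s_AE = np.linalg.svd(A + E, compute_uv=False)

print(np.max(np.abs(s_A - s_AE)))        # tiny compared with the singular values themselves...
print(np.linalg.norm(E, 2))              # ...and bounded by the spectral norm of the perturbation
```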
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9503215551376343, "perplexity": 631.5836034076161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999664754/warc/CC-MAIN-20140305060744-00065-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/initially-you-are-driving-at-45-ft-sec-on-a-straight-road.820447/
# Initially you are driving at 45 ft/sec on a straight road

1. Jun 23, 2015 ### Tonia 1. The problem statement, all variables and given/known data Initially you are driving at 45 ft/sec on a straight road. You accelerate the car at a steady rate and increase the speed to 76ft/sec. During this acceleration process you travel a distance of 300 ft. 2. Relevant equations a) Calculate the time required for the 300 ft travel b) determine the acceleration of the car during the 300 ft displacement c) find the time when the speed of the car will be 55.8 ft/sec. d) determine how much of the 300 ft displacement is due to the initial velocity and how much is due to the acceleration e) find the time required for traveling the first 150 ft. 3. The attempt at a solution a) time = distance divided by speed, I am not sure how to use this equation since I need to know the speed to find the time. Speed = distance divided by time. b) I need help c) I need help acceleration = m = a = change in velocity divided by change in time = (76ft/sec - 45 ft/sec)/time = ?? d) I need help e) I need help

2. Jun 23, 2015 ### tony873004 This problem builds on itself. You can't get part b unless you get part a. a) time = distance divided by AVERAGE speed. What's the average of 45 and 76?

3. Jun 23, 2015 ### Tonia (76ft/sec + 45 ft/sec.)/2 = 121/2 = 60.5 ft/sec. 300 ft / 60.5 ft/sec. = 4.9 seconds?? acceleration = m = a = change in velocity divided by change in time = (76ft/sec - 45 ft/sec)/4.9 sec. = 31ft/sec/4.9 sec. = 6.3 ft/sec^2 c) find the time when the speed of the car will be 55.8 ft/sec. ?? d) determine how much of the 300 ft displacement is due to the initial velocity and how much is due to the acceleration ?? e) find the time required for traveling the first 150 ft. ??

4. Jun 23, 2015 ### tony873004 You are traveling 45 ft/s. You are gaining 6.3 ft/s each second. How many seconds until you are doing 55.8 ft/s? Think of it as a money problem. You have 45 dollars. You make $6.30 an hour. You need $55.80. How many hours must you work?

5. Jun 23, 2015 ### Tonia 55.8 ft/sec - 45 ft/sec = 10.8 ft/sec 10.8ft/sec/6.30ft/sec^2 = 1.714 sec. = 1.7 seconds d) determine how much of the 300 ft displacement is due to the initial velocity and how much is due to the acceleration e) find the time required for traveling the first 150 ft.

6. Jun 23, 2015 ### tony873004 d) How far would you have traveled at your initial velocity during the time you computed in part (a) if you did not accelerate? The rest belongs to acceleration. e) x = v_i t + (1/2) a t^2 You could use the quadratic formula to solve for t. There may be easier ways.
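For reference, here is a sketch of my own (not from the thread or an answer key) that runs parts (a) through (e) with the constant-acceleration formulas the posters are using, in feet and seconds:

```python
v0, v1, d = 45.0, 76.0, 300.0          # initial speed, final speed, distance

t_total = d / ((v0 + v1) / 2)          # (a) time = distance / average speed
a = (v1 - v0) / t_total                # (b) steady acceleration
t_558 = (55.8 - v0) / a                # (c) time to reach 55.8 ft/s
d_initial = v0 * t_total               # (d) part of the 300 ft due to the initial velocity
d_accel = 0.5 * a * t_total**2         #     part due to the acceleration
# (e) first 150 ft: solve 0.5*a*t^2 + v0*t - 150 = 0 with the quadratic formula
t_150 = (-v0 + (v0**2 + 2 * a * 150.0) ** 0.5) / a

print(round(t_total, 2), round(a, 2), round(t_558, 2))   # ~4.96 s, ~6.25 ft/s^2, ~1.73 s
print(round(d_initial, 1), round(d_accel, 1))            # ~223.1 ft + ~76.9 ft = 300 ft
print(round(t_150, 2))                                   # ~2.79 s for the first 150 ft
```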
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8980106115341187, "perplexity": 1045.6298294869869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104612.83/warc/CC-MAIN-20170818063421-20170818083421-00265.warc.gz"}
https://www.physicsforums.com/threads/conservation-of-elastic-and-gravitation-energy-2.581962/
# Homework Help: Conservation of Elastic and Gravitation Energy - 2 1. Feb 27, 2012 ### PeachBanana 1. The problem statement, all variables and given/known data A 70 kg bungee jumper jumps from a bridge. She is tied to a bungee cord whose unstretched length is 13 m , and falls a total of 37 m . Calculate the spring stiffness constant of the bungee cord, assuming Hooke's law applies. 88 N/M Calculate the maximum acceleration she experiences. 2. Relevant equations Spring Force = -k * x F = m * a 3. The attempt at a solution Spring Force = (88 N/M)(24 m) I used 24 m because that was the maximum stretch of the cord. Spring Force = 2,112 N a = F / m a = (2112 N) / (70 kg) a = 88 m/s^2 I added 9.8 m/s^2 because she was in free fall. a = 97.8 m/s^2 Could someone explain why this is incorrect? 2. Feb 27, 2012 ### Delphi51 I get 30, not 88. Why add 9.8? My thinking is that F = ma kx - mg = ma a = kx/m - g Maximum (upward) acceleration is when x is at its maximum but 9.8 would be subtracted rather than added. 3. Feb 27, 2012 ### PeachBanana I made a typo on the acceleration, oops. I added 9.8 m/s^2 because I thought the jumper was moving in the same direction as gravity as she jumped. Could you explain why you subtracted? Last edited: Feb 27, 2012 4. Feb 27, 2012 ### Delphi51 kx - mg = ma The spring force kx is upward. Gravity mg is downward.
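A minimal numeric sketch of the two results discussed in this thread (not from the original posts): the spring constant follows from energy conservation, m g h = ½ k x², and the maximum acceleration from Delphi51's force balance kx − mg = ma; g = 9.8 m/s² is an assumed value.

```python
# Sketch of the bungee calculation discussed above (assumes g = 9.8 m/s^2).
m, g = 70.0, 9.8
total_fall = 37.0            # m, total drop of the jumper
x_max = 37.0 - 13.0          # m, maximum stretch of the cord (24 m)

# Spring constant from energy conservation: m*g*h = 0.5*k*x^2
k = 2 * m * g * total_fall / x_max**2     # ~88 N/m

# Maximum acceleration at full stretch, from k*x - m*g = m*a
a_max = (k * x_max - m * g) / m           # ~20.4 m/s^2, directed upward

print(round(k, 1), round(a_max, 1))
```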
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9169297218322754, "perplexity": 3267.125534778981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741087.23/warc/CC-MAIN-20181112193627-20181112215627-00206.warc.gz"}
http://mathhelpforum.com/advanced-statistics/69712-joint-probability-density-print.html
# Joint Probability density • January 24th 2009, 11:30 AM Yan Joint Probability density Suppose that P, the price of a certain commodity (in dollars), and S, its total sales (in 10,000 units), are random variables whose joint probability distribution can be approximated closely with the joint probability density f(p,s)=5pe^(-ps) for 0.2<p<0.4, s>0 and 0 elsewhere Find the probabilities that (a) the price will be less than 30 cents and sales will exceed 20,000 units; (b) the price will between 25 cents and 30 cents and sales will be less than 10,000 units; (c) the marginal density of P; (d) the conditional density of S given P=p; (e) the probability that sales will be less than 30,000 units when p=25 cents. • January 24th 2009, 02:07 PM mr fantastic Quote: Originally Posted by Yan Suppose that P, the price of a certain commodity (in dollars), and S, its total sales (in 10,000 units), are random variables whose joint probability distribution can be approximated closely with the joint probability density f(p,s)=5pe^(-ps) for 0.2<p<0.4, s>0 and 0 elsewhere Find the probabilities that (a) the price will be less than 30 cents and sales will exceed 20,000 units; (b) the price will between 25 cents and 30 cents and sales will be less than 10,000 units; (c) the marginal density of P; (d) the conditional density of S given P=p; (e) the probability that sales will be less than 30,000 units when p=25 cents. These are all set up and solved form the basic definitions. (a) $\int_{p=0}^{p = 0.3} \int_{s=2}^{s=+\infty} f(p, s) \, ds \, dp$. (b) $\int_{p=0.25}^{p = 0.3} \int_{s=0}^{s=1} f(p, s) \, ds \, dp$. (c) $f_P(p) = \int_{s=0}^{s=+\infty} f(p, s) \, ds$. (d) $f_S(s | p) = \frac{f(p, s)}{f_P(p)}$. (e) $\int_{s=0}^{s=3} f_S(s | p = 0.25) \, ds$. • January 25th 2009, 11:17 AM Yan Quote: Originally Posted by mr fantastic These are all set up and solved form the basic definitions. (a) $\int_{p=0}^{p = 0.3} \int_{s=2}^{s=+\infty} f(p, s) \, ds \, dp$. how to calculate the first part (the ds part, $\int_{s=2}^{s=+\infty} f(p, s) \, ds$. and I think the dp part is $\int_{p=0.2}^{p=0.3} f(p, s) \, dp$, is it right? • January 25th 2009, 01:48 PM mr fantastic Quote: Originally Posted by Yan how to calculate the first part (the ds part, $\int_{s=2}^{s=+\infty} f(p, s) \, ds$. Mr F says: Do you know how to integrate? You're integrating a simple exponential function. Where are you stuck here? and I think the dp part is $\int_{p=0.2}^{p=0.3} f(p, s) \, dp$, is it right? Mr F says: Why would you think that when the question clearly says "the price will be less than 30 cents "?! .. • January 25th 2009, 06:22 PM Yan how to calculate the first part (the ds part, http://www.mathhelpforum.com/math-he...1f17a78b-1.gif. Mr F says: Do you know how to integrate? You're integrating a simple exponential function. Where are you stuck here? the problem is the S is form 2 to infin. there is not exactly number for infin. like if it is from negative infin to 2,then i know the number is from 0 to 2. and I think the dp part is http://www.mathhelpforum.com/math-he...cf451136-1.gif, is it right? Mr F says: Why would you think that when the question clearly says "the price will be less than 30 cents "?! because the problem is the Price is 0.2<p<0.4, so think is should be 0.2 to 0.3 • January 25th 2009, 07:10 PM mr fantastic Quote: Originally Posted by Yan [snip] and I think the dp part is http://www.mathhelpforum.com/math-he...cf451136-1.gif, is it right? 
Mr F says: Why would you think that when the question clearly says "the price will be less than 30 cents"?!

because the problem is the price is 0.2<p<0.4, so I think it should be 0.2 to 0.3

Well, that's a good reason why.

Quote:

Originally Posted by Yan
how to calculate the first part (the ds part, $\int_{s=2}^{s=+\infty} f(p, s) \, ds$).

Mr F says: Do you know how to integrate? You're integrating a simple exponential function. Where are you stuck here?

the problem is that s goes from 2 to infinity; there is no exact number for infinity. If it were from negative infinity to 2, then I would know the limits go from 0 to 2.

[snip]

$\int_{s=2}^{s=+\infty} f(p, s) \, ds$ is an improper integral. To find it you need to do the usual thing and consider a limit: $\lim_{a \rightarrow +\infty} \int_2^{a} f(p, s) \, ds$ etc.
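For readers who want to check the numbers, here is a short SymPy sketch (not part of the original thread) that evaluates part (a) with the corrected p-limits and the marginal density of P from part (c); the variable names are illustrative.

```python
import sympy as sp

p, s = sp.symbols('p s', positive=True)
f = 5 * p * sp.exp(-p * s)     # joint density on 0.2 < p < 0.4, s > 0

# (a) P(price < 0.3 and sales > 2): the density is zero for p < 0.2,
# so the effective p-limits are 0.2 to 0.3.
P_a = sp.integrate(f, (s, 2, sp.oo), (p, sp.Rational(1, 5), sp.Rational(3, 10)))

# (c) marginal density of P: integrate out s.
f_P = sp.integrate(f, (s, 0, sp.oo))   # should come out to 5 on 0.2 < p < 0.4

print(P_a, P_a.evalf(), sp.simplify(f_P))   # P_a is roughly 0.304
```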
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9235941171646118, "perplexity": 1216.8768755867127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398468233.50/warc/CC-MAIN-20151124205428-00231-ip-10-71-132-137.ec2.internal.warc.gz"}
http://moodle.archbold.k12.oh.us/mod/forum/discuss.php?d=5268&parent=11340
## Extended Response Questions for Test ### Describe how the gas molecules in the atmosphere are heated by radiation and conduction. Re: Describe how the gas molecules in the atmosphere are heated by radiation and conduction. or move faster...increased molecular motion=increased heat
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8789322376251221, "perplexity": 2345.442452013087}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889733.57/warc/CC-MAIN-20180120201828-20180120221828-00566.warc.gz"}
http://math.stackexchange.com/questions/42037/a-variation-of-borel-cantelli-lemma
# A variation of Borel Cantelli Lemma

If $P(A_n) \rightarrow 0$ and $\sum_{n=1}^{\infty}P(A_n^c\cap A_{n+1})<\infty$ then $P(A_n \text{ i.o.})=0$. How to prove this? Thanks.

-

What does $P(A_n \text{ i.o.}) = 0$ mean? Also, if this is homework, please tag it as such. –  mixedmath May 29 '11 at 23:12

Probably $P(\limsup A_n)$ (infinitely often). Further: continuity from above/below of the measure. –  Jonas Teuwen May 29 '11 at 23:12

@epsilon: Care to accept an answer? –  Did Aug 15 '11 at 13:00

How do you prove this if the complement is switched, i.e. if we know $$\sum_{n=1}^{\infty}P(A_n\cap A_{n+1}^c)<\infty$$ –  nelson meier Oct 13 '12 at 18:35

Hint: $\limsup A_n \subseteq A_N \cup \bigcup_{n=N}^\infty (A^c_n \cap A_{n+1})$. Estimate the probability of this.
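One way the hint's estimate can be finished, sketched here for completeness (this step is not in the original thread): apply countable subadditivity to the inclusion and then let $N \to \infty$.

```latex
% Sketch, assuming the hint's inclusion  limsup A_n ⊆ A_N ∪ ⋃_{n≥N} (A_n^c ∩ A_{n+1}):
P\big(\limsup_n A_n\big) \;\le\; P(A_N) \;+\; \sum_{n=N}^{\infty} P\big(A_n^c \cap A_{n+1}\big)
\qquad \text{for every } N.
% The first term tends to 0 because P(A_n) → 0, and the second tends to 0 because it is
% the tail of a convergent series; hence P(limsup A_n) = 0, i.e. P(A_n i.o.) = 0.
```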
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8724667429924011, "perplexity": 1921.0104528926727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/432172/developing-category-theory-inside-etcs/432291
# Developing category theory inside ETCS Trying to understand how Lawvere's [E]lementary [T]heory of the [C]ategory of [S]ets can be used as a foundation for mathematics alternative to ZFC, I am getting stuck with the question on how to develop category theory within ETCS. It's neither a problem in ZFC nor in ETCS to define the notion of a category, at least internally, when objects and morphisms are "sets". However, what is problematic in developing category theory completely internal to ZFC, i.e. without use of proper classes and metatheorems, is the choice of what we want to call the category of sets inside ZFC. One way to solve this is to introduce Grothendieck universes and to take them as the "base-categories of sets", on which category theory within ZFC can be built. My question now is: What are the possibilites in ETCS to ensure a reasonable "category of sets" based on which category theory can be developed inside ETCS? Are there any good sources for this question? Thanks! Hanno - My inclination would be that as long as you're using ETCS you might as well take categories to be your primitive notion and then you don't have to reaxiomatize categories. –  Qiaochu Yuan Jun 29 '13 at 10:00 Note that $\sf ETCS$ is much weaker than $\sf ZFC$, since it lacks the replacement schema (or anything equivalent/stronger). So $\sf ZFC$ can prove the consistency of $\sf ETCS$. If you could define categories fully in $\sf ETCS$ then you could in $\sf ZFC$ as well. –  Asaf Karagila Jun 29 '13 at 11:44 Qiaochu: Thank you for your comment! Could you elaborate a bit? So far I don't yet understand what you mean. –  Hanno Jun 29 '13 at 12:31 Asaf: Thank you! Ok, so one should at first add the replacement axiom scheme to ETCS. My question is then: What do we need in addition to this? –  Hanno Jun 29 '13 at 12:34 Universe in a topos. Note that this formulation is stricly relating to Zhen Lin's answer. –  Andrea Gagna Jun 29 '13 at 16:13 The point of ETCS is that a model of ETCS is already a category, so I suppose what you are asking for is how to do universes in ETCS. Well, ETCS is equivalent in a strong sense to Mac Lane set theory, and in Mac Lane set theory, Grothendieck universes are models of (second-order!) ZFC. So if you're willing to go down that route it suffices to axiomatise strongly inaccessible cardinals within ETCS – and this is straightforward. Now suppose $X$ is a set of strongly inaccessible cardinality. We construct within ETCS a category of small sets based on $X$. First, let $O$ be the subset of $\mathscr{P}(X)$ consisting of those subsets of $X$ strictly smaller than $X$, and let $E \subseteq X \times O$ be the binary relation obtained by restricting $[\in]_X \subseteq X \times \mathscr{P}(X)$. We may regard the projection $E \to O$ as an $O$-indexed family of sets, and we may then form $F \to O \times O$ such that the fibre of $F$ over an element $(Y, Z)$ of $O$ is the set $Z^Y$. This has the expected universal property in the slice category $\mathbf{Set}_{/ O \times O}$. Once we have $F \to O \times O$, it is a straightforward matter to build an internal category which, when externalised, is the full subcategory of $\mathbf{Set}$ spanned by the small subsets of $X$. Note that the assumption that the cardinality of $X$ is a regular cardinal implies that the resulting category is a model of not just ETCS but also the axiom of replacement. Of course, if one really believes ETCS is an adequate foundation for mathematics, then one can also make do with less. 
For instance, the same construction could be carried out for a set whose cardinality is a strong limit, and the resulting internal category of sets will still be a model of ETCS though not necessarily of replacement. - Zhen, thank you very much for this interesting and helpful answer! Could you explain in some detail what you mean by "when externalized, ..."? In the concrete situation, I think I know what you mean: we do not only have a category in ETCS, but in fact a category the objects of which "are" again sets in such a way that the morphism sets of the category we consider are the corresponding power sets. Still, this is rather an ad-hoc interpretation of what you're saying, without knowing what "externalizing" means in general, so could you maybe elaborate? –  Hanno Jun 29 '13 at 16:14 It essentially means to interpret internal structures as instances of those structures in the meta-logic. It is not meant to be precise. –  Zhen Lin Jun 29 '13 at 16:39 Is it true that the family $E(X)\to O(X)$ you constructed from $X$ is a universe, and that any universe $E\to U$ arises in this way? Like in ZFC where the Grothendieck universes are precisely the $V_\kappa$ with $\kappa$ strongly inaccessible? –  Hanno Jun 30 '13 at 6:00 It should be a universe, but I have no checked. Not all universes are of this form, I think. –  Zhen Lin Jun 30 '13 at 6:18 First, note that the theory of categories is one-sorted in nature, despite it is usually presented as two-sorted (objects, arrows), since you can identify an object with its identity arrow. The ETCS is actually part of a larger formal system, the Elementary Theory of the Category of Categories [ETCC], where alphabet's variables represent morphisms between categories. The most important objects, that have to be axiomatized, are the categories $\mathbf{0, 1, 2, 3}$. A good reference may be: Elementary Categories, Elementary Toposes. Lawvere, in his beautiful PhD thesis, suggested to deal with dimensional problems in a way very similar to ZF3 (i.e., ZF enhanced with universes $\omega = \theta_0 \in \theta_1\in \theta_2$). Thus, he proposed to axiomatize the existence of two categories (i.e., variables) • $\mathcal C_1$ which has to be thought as the category of small categories. Here, the most important category of sets is $\mathbf{FinSet}$ (Bill calls it $\mathcal S_0$). One requires it to have all the nice properties of $\mathbf{Cat}$. • $\mathcal C_2$ which has to be thought as the category of large categories. One requires it has all the nice properties of $\mathcal C_1$ and more an identity arrow (i.e., an object, a category) with all its property (that is, $\mathcal C_1$). Of course, the category of small sets may be think as an object of $\mathcal C_2$, i.e., an arrow $\{\mathcal S_1\}\colon \mathbf 1 \to \mathcal C_2$. In this general framework, one can axiomatize ETCS. Also, take a look to these 3d on MO: 1, 2. - Thank you, Andrea! I am rather interested in adding axioms to ETCS while keeping the formal system fixed instead of passing to a larger system; still, ETCC sounds very interesting and I will definitely have a look in McLarty's book - thank you! Concerning the third paragraph: In which formal system do the two axioms stating the existence of ${\mathcal C}_1$ and ${\mathcal C}_2$ live? Are they ETCS formulae? If yes, then what do you mean with your last comment? –  Hanno Jun 29 '13 at 12:44 In Lawvere's world, $\mathcal C_i, i=1, 2$ live in ETCC. Actually, you need less than all ETCC to formalize ETCS. 
Take a curious look at this page on nlab. As you can see, axioms are really easy. –  Andrea Gagna Jun 29 '13 at 13:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9518668055534363, "perplexity": 399.2073070244412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930895.96/warc/CC-MAIN-20150521113210-00265-ip-10-180-206-219.ec2.internal.warc.gz"}
http://wiki.stat.ucla.edu/socr/index.php?title=AP_Statistics_Curriculum_2007_Distrib_RV&diff=5934&oldid=4102
# AP Statistics Curriculum 2007 Distrib RV

## General Advance-Placement (AP) Statistics Curriculum - Random Variables and Probability Distributions

### Random Variables

A random variable is a function or a mapping from a sample space into the real numbers (most of the time). In other words, a random variable assigns real values to outcomes of experiments. This mapping is called random, as the output values of the mapping depend on the outcome of the experiment, which are indeed random. So, instead of studying the raw outcomes of experiments (e.g., define and compute probabilities), most of the time we study (or compute probabilities of) the corresponding random variables instead. The formal general definition of random variables may be found here.

### Examples of random variables

• Die: In rolling a regular hexagonal die, the sample space is clearly and numerically well-defined and in this case the random variable is the identity function assigning to each face of the die the numerical value it represents. Thus the possible outcomes of the RV of this experiment are { 1, 2, 3, 4, 5, 6 }. You can see this explicit RV mapping in the SOCR Die Experiment.

• Coin: For a coin toss, a suitable space of possible outcomes is S = {H, T} (for heads and tails). In this case these are not numerical values, so we can define a RV that maps these to numbers. For instance, we can define the RV $X: S \longrightarrow [0, 1]$ as: $X(s) = \begin{cases}0,& s = \texttt{H},\\ 1,& s = \texttt{T}.\end{cases}$ You can see this explicit RV mapping of heads and tails to numbers in the SOCR Coin Experiment.

• Card: Suppose we draw a 5-card hand from a standard 52-card deck and we are interested in the probability that the hand contains at least one pair of cards with identical denomination. Then the sample space of this experiment is large - it would be difficult to list all possible outcomes. However, we can assign a random variable $X(s) = \begin{cases}0,& s = \texttt{no-pair},\\ 1,& s = \texttt{at least one pair}.\end{cases}$ and try to compute the probability of P(X=1), i.e., the chance that the hand contains a pair. You can see this explicit RV mapping and the calculations of this probability at the SOCR Card Experiment.

### How do we use RVs?

There are 3 important quantities that we are always interested in when we study random processes. Each of these may be phrased in terms of RVs, which simplifies their calculations.

• Probability distribution: What is the probability of P(X = xo)? For instance, in the card example above, we may be interested in P(at least 1 pair) = P(X=1) = P(1 pair only) = 0.422569. Or in the die example, we may want to know P(Even number turns up) = $P(X \in \{2, 4, 6 \}) = 0.5$.

• Cumulative distribution: P(X ≤ xo), for all xo.
For instance, for the die example we have the following discrete cumulative distribution table:

| x | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| P(X ≤ x) | 1/6 | 2/6 | 3/6 | 4/6 | 5/6 | 1 |

• Moments/expected values:

### Model Validation

Checking/affirming underlying assumptions.

• TBD
• TBD

### Examples

Computer simulations and real observed data.

• TBD

### Hands-on activities

Step-by-step practice problems.

• TBD
• TBD
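As a small illustration of the die example above (not part of the wiki page), the sketch below simulates rolls of a fair die and estimates both the probability distribution P(X = x) and the cumulative distribution P(X ≤ x); the sample size and seed are arbitrary choices.

```python
import random
from collections import Counter

# Monte Carlo illustration of the die RV: estimate the pmf and the CDF.
random.seed(0)
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]
counts = Counter(rolls)

running = 0.0
for x in range(1, 7):
    pmf = counts[x] / n
    running += pmf
    print(f"x = {x}:  P(X = x) ~ {pmf:.3f}   P(X <= x) ~ {running:.3f}")
# Each pmf value should be close to 1/6 ~ 0.167 and the CDF should end near 1.
```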
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9172226786613464, "perplexity": 994.2363248479555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948550199.46/warc/CC-MAIN-20171214183234-20171214203234-00564.warc.gz"}
https://www.physicsforums.com/threads/4d-space.553723/
# 4D space

• #1

What is the definition and implications of 4D space? By implications I mean: if it existed, how would it redefine what we know about physics and reality up to now? (Applications, possibilities, etc.)

• #2 DaveC426913, Gold Member

There are some current, active theories (such as string theory) that require as many as ten spatial dimensions.

• #3

Special relativity is a theory in 4D space. What does it change? There is a difference from classical Newtonian mechanics, which postulates only 3 dimensions. In Newtonian mechanics we can rotate the coordinate system to transform spatial axes one into another. Adding the fourth dimension (identified with time) allows us to "rotate" over new axes, that is, to transform the time dimension into spatial ones and vice versa. That means one observer's time may be mixed with another observer's length. This "rotation" in the fourth dimension is exactly the Lorentz transformation. Lorentz length contraction and time dilation are the effects of looking at an object from different "angles" in the fourth dimension.

Postulating more dimensions gives us just that: the ability to transform some physical quantity into another, specifically length and time. Suppose we postulate a fifth dimension and we identify the electric charge with the momentum in that dimension. Then there must exist a transformation ("rotation") transforming charge into length and time under which the Lagrangian is invariant. These "rotation" transformations in higher dimensions usually have conserved quantities associated with them, just as the angular momentum is associated with the rotation transformation. Postulating the fourth dimension (time) gives us another conserved quantity associated with the resulting rotation group - the spin. That's why many authors write that the existence of spin is a relativistic effect. If we postulate a fifth dimension identified with the electric charge, we also get a new spin-like quantity - the isospin. That's why I believe the Kaluza-Klein theories, but that's my personal preference.

Postulating more dimensions implicitly assumes that translations in these dimensions are symmetries. This is not a problem with the time dimension, since it has been translation-symmetric since Newtonian dynamics, but with the fifth (electrical) dimension this means that there have to exist elementary particles with arbitrarily high electric charge. This is called the Kaluza-Klein tower. The hypothetical elementary particle with 2e electric charge is called the dilepton, and it has been looked for.

• #4

So does "entanglement" have anything to do with 4D space?

• #5

No, this is a completely unrelated concept.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566098213195801, "perplexity": 1093.4293341432392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00149.warc.gz"}
https://teachingcalculus.com/2012/10/26/reading-the-derivatives-graph/?replytocom=26170
A very typical calculus problem is: given the equation of a function, find information about it (extreme values, concavity, increasing, decreasing, etc., etc.). This is usually done by computing and analyzing the first derivative and the second derivative. All the textbooks show how to do this with copious examples and exercises. I have nothing to add to that. One of the "tools" of this approach is to draw a number line and mark the information about the function and the derivative on it.

A very typical AP Calculus exam problem is: given the graph of the derivative of a function, but not the equation of either the derivative or the function, find all the same information about the function. For some reason, students find this difficult even though the two-dimensional graph of the derivative gives all the same information as the number line graph and, in fact, a lot more.

Looking at the graph of the derivative in the x,y-plane, it is easy to determine the important information. Here is a summary relating the features of the graph of the derivative with the graph of the function.

| Feature | the function |
|---|---|
| ${y}'$ > 0 | is increasing |
| ${y}'$ < 0 | is decreasing |
| ${y}'$ changes – to + | has a local minimum |
| ${y}'$ changes + to – | has a local maximum |
| ${y}'$ increasing | is concave up |
| ${y}'$ decreasing | is concave down |
| ${y}'$ extreme value | has a point of inflection |

Here's a typical graph of a derivative with the first derivative features marked. Here is the same graph with the second derivative features marked.

The AP Calculus Exams also ask students to "Justify Your Answer." The table above, with the columns switched, does that. The justifications must be related to the given derivative, so a typical justification might read, "The function has a relative maximum at x = -2 because its derivative changes from positive to negative at x = -2."

| Conclusion | Justification |
|---|---|
| y is increasing | ${y}'$ > 0 |
| y is decreasing | ${y}'$ < 0 |
| y has a local minimum | ${y}'$ changes – to + |
| y has a local maximum | ${y}'$ changes + to – |
| y is concave up | ${y}'$ increasing |
| y is concave down | ${y}'$ decreasing |
| y has a point of inflection | ${y}'$ extreme values |

For notes on vertical asymptotes see
For notes on horizontal asymptotes see Other Asymptotes

## 17 thoughts on "Reading the Derivative's Graph"

1. hello! what does it mean if the graph of the y' has an asymptote or has an undefined part? Thanks!

• Thanks for this great question. I will try to answer it more fully in a new blog post that I hope to have ready for next Tuesday November 5, 2019. The short answer is that if there is a vertical asymptote on the derivative, then there is a vertical asymptote or a cusp on the function. Please check next week for more details. Again thanks for the suggestion; I'm always looking for more topics to write about.

2. Pingback: Graph Analysis (Type 3) | MATHMANMCQ

3. Thank you, great post!

4. I've never had this information provided so simply and easy to understand, thank you so much!

5. That's great, but where's the original graph: f(x)? Not seeing the whole picture at the same time is hurting my understanding. Thanks

• Peter; Yes, I can see that would help, however I don't have an equation for the derivative, so I cannot write the equation for the function and graph it. It was just a sketch from an old AP exam. Its equation is something like y = (x+2)(x-4)(x-1)^2. So its antiderivative is f(x) = $\displaystyle \frac{1}{6}{{x}^{6}}-\frac{2}{5}{{x}^{5}}-\frac{11}{4}{{x}^{4}}+\frac{8}{3}{{x}^{3}}+10{{x}^{2}}-16x+C$

Here's what you can try: On a calculator or Desmos or other graphing utility, graph a function and its derivative. Then you can compare the two. Starting with the function, see how its features show up in the derivative; starting with the derivative, see how its features tell you about the function. Try several different functions. They do not have to be complicated. Here are two links to Desmos graphs that may help: https://www.desmos.com/calculator/ovtzrjunz9 and https://www.desmos.com/calculator/tk241itk7v You can change the function (first line), leave the other parts alone.

6. Thanks so much! This is extremely useful! Beautiful explanation, demystified a week's worth of lessons!

7. Also, note that the justifications must refer to calculus on the AP exam. For example, stating "concave up" is not OK for justification; stating that f " > 0 or f ' increasing is OK.

8. f'(1)=0. You have it in the section labeled "decreasing". I tell my kids at f'(x) = 0 f is neither increasing nor decreasing. There may be an extremum or not. Am I wrong?

• Barbara: Good question. Thank you. This is a very common misunderstanding; the question comes up a lot. There is a theorem that says, if the derivative is positive on an interval, then the function is increasing on the interval. This theorem does not cover the case where the derivative is zero. Also, the converse is false. Consider the function $f\left( x \right)={{x}^{3}}$ which is increasing on any interval you choose, even though the derivative is zero at the origin. It is increasing because on every interval every point is higher (i.e. has a greater y-value) than all of those to its left. For more on this see my post Open or Closed? and the link at the end.

9. Hey, this has been REALLY helpful, outlining all the rules. I have always been confused with it, but not now. Thank you.

10. Thanks, this website is really useful.
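For readers without Desmos, here is a rough Python equivalent of the exercise suggested in the reply above: plot a derivative and a numerically built antiderivative and compare their features. The polynomial is the approximate one quoted in the comment, and the constant of integration (here 0) is an arbitrary assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot a derivative y' and a numerically-built antiderivative y so that the
# features (sign changes of y', extrema of y, etc.) can be compared visually.
x = np.linspace(-3, 5, 1000)
dy = (x + 2) * (x - 4) * (x - 1) ** 2          # the given derivative y'

# Trapezoid-rule running integral of y' (arbitrary constant of integration = 0).
y = np.concatenate(([0.0], np.cumsum((dy[1:] + dy[:-1]) / 2 * np.diff(x))))

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(x, dy)
ax1.axhline(0, color="gray")
ax1.set_ylabel("y'")
ax2.plot(x, y)
ax2.set_ylabel("y (antiderivative)")
plt.show()
# Where y' crosses zero from + to -, y should show a local maximum, and so on.
```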
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 16, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8372834324836731, "perplexity": 511.1400287607517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335124.77/warc/CC-MAIN-20220928051515-20220928081515-00559.warc.gz"}
https://byjus.com/physics/darcys-law/
# Darcy's Law

Darcy's law states the principle which governs the movement of fluid through a given substance. The Darcy's law equation describes the capability of a liquid to flow through a porous medium such as rock. The law is based on the fact that the flow between two points is directly proportional to the pressure difference between the points, the distance, and the connectivity of flow within the rock between the points. The measure of this interconnectivity is known as permeability.

One application of Darcy's law is the flow of water through an aquifer. Darcy's law together with the conservation of mass equation is equivalent to the groundwater flow equation, one of the basic relationships of hydrogeology. Darcy's law is also applied to describe oil, gas and water flows through petroleum reservoirs.

The liquid flow within the rock is governed by the permeability of the rock. Permeability has to be determined in both horizontal and vertical directions. For instance, shale has a permeability that is lower vertically than horizontally. This indicates that it is not easy for liquid to flow up and down through a shale bed, but easier for it to flow side to side.

### Darcy's Law Formula

To understand the mathematical aspect behind liquid flow in a substance, Darcy's law can be described as follows. Darcy's law gives the relationship between the instantaneous rate of discharge through a porous medium and the pressure drop over a distance. Using the specific sign convention, Darcy's law is expressed as:

### Q = -KA dh/dl

Wherein:

Q is the rate of water flow
K is the hydraulic conductivity
A is the column cross-section area
dh/dl is the hydraulic gradient (the change in head over the flow distance)

The darcy is used in many unit systems. A medium that has a permeability of 1 darcy permits a flow of 1 cm3/s of a liquid with viscosity 1 cP under a pressure gradient of 1 atm/cm acting across an area of 1 cm2.

Darcy's law is critical when it comes to determining the possibility of flow from a hydraulically fractured zone to a fresh-water zone, because fluid flow from one zone to the other determines whether hydraulic fluids can reach the fresh-water zone or not.

### Darcy's Law Limitations

Darcy's law can be applied to many situations, though not all of them correspond exactly to its original assumptions:

• Unsaturated and saturated flow.
• Flow in fractured rocks and granular media.
• Transient flow and steady-state flow.
• Flow in aquitards and aquifers.
• Flow in homogeneous and heterogeneous systems.

To know more about Darcy's law, groundwater flow and hydraulic conductivity, along with Darcy's law examples, you can visit us @Byju's.
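As a small illustration of the formula above, here is a Python sketch of Q = -KA dh/dl; the numerical values of K, A, dh and dl are assumptions chosen only for demonstration, not taken from the article.

```python
# Minimal sketch of Darcy's law, Q = -K * A * dh/dl, with illustrative values.
def darcy_flow(K, A, dh, dl):
    """Volumetric flow rate through a porous column.

    The minus sign encodes the sign convention: flow runs from high
    hydraulic head to low hydraulic head.
    """
    return -K * A * (dh / dl)

K = 1e-4      # hydraulic conductivity, m/s (assumed value)
A = 2.0       # cross-sectional area, m^2 (assumed value)
dh = -0.5     # head change along the column, m (head drops in the flow direction)
dl = 10.0     # flow path length, m (assumed value)

print(darcy_flow(K, A, dh, dl))   # 1e-05 m^3/s, positive in the flow direction
```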
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8198060989379883, "perplexity": 862.4155452152063}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743184.39/warc/CC-MAIN-20181116194306-20181116220306-00095.warc.gz"}
http://math.stackexchange.com/questions/17955/why-do-we-use-the-smash-product-in-the-category-of-based-topological-spaces
# Why do we use the smash product in the category of based topological spaces? I was telling someone about the smash product and he asked whether it was the categorical product in the category of based spaces and I immediately said yes, but after a moment we realized that that wasn't right. Rather, the categorical product of $(X,x_0)$ and $(Y,y_0)$ is just $(X\times Y,(x_0,y_0))$. (It seems like in any concrete category $(\mathcal{C},U)$, if we have a product (does a concrete category always have products?) then it must be that $U(X\times Y)=U(X)\times U(Y)$. But I couldn't prove it. I should learn category theory. Maybe functors commute with products or something.) Anyways, here's what I'm wondering: is the main reason that we like the smash product just that it gives the right exponential law? It's easy to see that the product $\times$ I gave above has $F(X\times Y,Z)\not\cong F(X,F(Y,Z))$ just by taking e.g. $X=Y=Z=S^0$. - Forgetful functors need not preserve products, but they do for nice (i.e. complete) concrete categories which have free objects. –  Chris Eagle Jan 18 '11 at 6:48 Two reasons why we want the smash product: 1. There is a natural homeomorphism $(X\times Y)^+ \cong X^+ \wedge Y^+$ for any spaces $X$ and $Y$. This is one of the places that the smash product naturally arises- you want to describe the compactification of a product in terms of each of the factors, how do you do it? It turns out the smash product is the best way to answer the question. In particular, using slightly more suggestive notation, this gives us the result $S^V \wedge S^W \cong S^{V \oplus W}$ for vector spaces $V$ and $W$... look familiar? 2. As Jonas said, the smash product is the analog of the tensor product. Recall that the tensor product for $R$-modules is not the categorical product, but it is left adjoint to the Hom functor. This gives it all sorts of nice properties; it's one of the reasons why there is a nice duality between Tor and Ext, and all sorts of other nice stuff. However: I don't think that this analogy is incredibly useful until you move to the stable category... there the smash product is much closer to an honest tensor product, since you're working with an additive category. You can start making sense of what it means to "smash over a ring spectrum" just as one "tensors over a ring" (at least I think you can; you know more about stable stuff than I do.) As a side note, in response to your parenthetical statement: If you want to know when some functor preserves products or coproducts or some type of limit, it's usually easiest to first check and see if it has an adjoint. See wikipedia on adjoints and (co)continuous functors. - You definitely can smash over a ring spectrum. Nice answer! –  Sean Tilson Jan 18 '11 at 14:44 From nLab: The smash product is the tensor product in the closed monoidal category of pointed sets. That is, $$\operatorname{Fun}_*(A\wedge B,C)\cong \operatorname{Fun}_*(A,\operatorname{Fun}_*(B,C))$$ Here, $\operatorname{Fun}_*(A,B)$ is the set of basepoint-preserving functions from $A$ to $B$, itself made into a pointed set by taking as basepoint the constant function from all of $A$ to the basepoint in $B$. There's more at the link. I must admit that I know nothing about this, but I recommend nLab as a good place to look for the categorical place of mathematical constructions. - Right, thanks. nLab is a great resource, but often just a bit above my head since they always define everything so categorically in the first place (as I'm sure they should). 
Anyways, my question is more about whether the validity of the exponential law with smash product the main reason we like it or whether there's another more primordial justification. –  Aaron Mazel-Gee Jan 18 '11 at 8:16 @Aaron: that is already a pretty primordial justification, in my opinion, at least as primordial as something being a categorical product. –  Qiaochu Yuan Jan 18 '11 at 11:46 Right, I guess that's more a philosophical question (albeit one that may have a right answer) than anything else. In any case, I was just wondering if there are any other really basic reasons why we use the smash product! –  Aaron Mazel-Gee Jan 25 '11 at 3:38 This is pretty much (derived from, I guess) Jonas Meyers answer, but a bit more concrete, and as far as I know why we're interested in it. There is an adjunction $\hom_*(\Sigma X, Y)\cong\hom_*(X,\Omega X)$, where $\Sigma X:=S^1\wedge X$ and $\Omega X:=\hom_*(S^1,X)$. If we define $\pi_n(X):=\pi_0(\Omega^n X)$, or indeed $\pi_n(X):=[S^n,X]_*$, we get $\pi_n(X):=\pi_0(\Omega^n X)\cong[S^0,\Omega^n X]_*\cong[\Sigma^n S^0,X]_*\cong[S^n,X]_*$, which is an interesting relationship.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657141327857971, "perplexity": 263.7432344820876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758214/warc/CC-MAIN-20131218054918-00089-ip-10-33-133-15.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/41211/two-questions-on-determinability-probability-theory
# Two questions on determinability (probability theory) The following are some problems I encountered when self-learning GTM 261 "Probability and Stochastics". Definition (determinability) If $X$ and $Y$ are random variables taking values in $(E,\mathcal{E})$ and $(D,\mathcal{D})$, then we say that $X$ determines $Y$ if $Y=f\circ X$ for some $f:E\rightarrow D$ measurable with respect to $\mathcal{E}$ and $\mathcal{D}$. Problem 1 Let $T$ be a positive random variable and define a stochastic process $X=(X_t)_{t\in\mathbb{R}_+}$ by setting, for each $\omega$ $$X_t(\omega) = \begin{cases} 0 & \text{if } t < T(\omega) \\ 1 & \text{if } t \geq T(\omega) \end{cases}$$ Show that $X$ and $T$ determine each other. If $T$ represents the time of failure for a device, then $X$ is the process that indicates whether the device has failed or not. That $X$ and $T$ determine each other is intuitively obvious, but the measurability issues cannot be ignored altogether. In particular, I do not know how to show the measurability part. Problem 2 A slight change in the preceding exercise shows that one might guard against raw intuition. Let $T$ have a distribution that is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}_+$; in fact, all we need is that $\mathbb{P}\{T = t\} = 0$ for every $t\in\mathbb{R}_+$. Define $$X_t(\omega) = \begin{cases} 1 & \text{if } t = T(\omega)\\ 0 & \text{otherwise} \end{cases}$$ Show that, for each $t\in\mathbb{R}_+$, the random variable $X_t$ is determined by $T$. But, contrary to raw intuition, $T$ is not determined by $X=(X_t)_{t\in\mathbb{R}_+}$. Show this by the following steps below: a. For each $t$, we have $X_t = 0$ almost surely. Therefore, for every sequence $(t_n)$ in $\mathbb{R}_+$, $X_{t_1} = X_{t_2} = \ldots = 0$ almost surely. b. If $V\in \sigma(X)$, then $V = c$ almost surely for some constant $c$. It follows that $T$ is not in $\sigma(X)$. • Problem 1: What is $Y$? (Is $Y=T$?) And what does it mean for a process to determine a random variable? – Florian May 25 '11 at 12:41 • @ Florian Yeah, it should be $T$. – Hawii May 25 '11 at 18:13 • The underlying question in problem 2 is purely about measurability; the introduction of the probability measure $\mathbb{P}$ is not necessary. In general, $T$ is determined by the process $(X_t)$ if and only if $T$ only takes on countably many values. – user940 Jun 8 '11 at 18:38 Problem 1: Fix $t\in\mathbb{R}_+$. Then $X_t=1_{\{T\le t\}}$. Since $\{T\le t\}\in\sigma T$, it follows that $X_t$ is $\sigma T$-measurable. Therefore, $\sigma X_t\subset\sigma T$, and this implies $$\sigma X = \bigvee_{t\in\mathbb{R}_+}\sigma X_t \subset\sigma T.$$ By Theorem 4.4, $X$ is a measurable function of $T$, and so $T$ determines $X$. For the converse, fix $t\in\mathbb{R}_+$. Then $$\{T\le t\} = \{X_t = 1\} \in \sigma X_t \subset \sigma X.$$ Since $t$ was arbitrary, this gives $\sigma T\subset\sigma X$, and so $X$ determines $T$. Problem 2(a): For each $t$, we have $P(X_t \ne 0) = P(T = t) = 0$. Hence, $X_t=0$ a.s. Moreover, given a sequence $(t_n)$, $$P(\exists n\text{ such that }X_{t_n}\ne 0) = P\bigg(\bigcup_{n=1}^\infty\{X_{t_n}\ne 0\}\bigg) \le \sum_{n=1}^\infty P(X_{t_n}\ne 0) = 0.$$ Thus, $P(X_{t_n}=0,\forall n)=1$, that is, $X_{t_1}=X_{t_2}=\cdots=0$ a.s. Problem 2(b): Suppose $T$ is $\sigma X$ measurable. Then by Proposition 4.6, we have $$T = f(X_{t_1},X_{t_2},\ldots),$$ for some sequence $(t_n)$ and some Borel-measurable function $f:\mathbb{R}^\infty \to \mathbb{R}_+$. Define $t=f(0,0,\ldots)\in\mathbb{R}_+$. 
Then, by part (a), we have $$T = f(X_{t_1},X_{t_2},\ldots) = f(0,0,\ldots) = t\text{ a.s.}$$ But this contradicts the hypothesis that $P(T=t)=0$ for all $t$. • It's a good solution, but the converse in part 1 seems overly complicated. You could just use $\{T\leq t\}=\{X_t=1\}\subseteq \sigma(X).$ – user940 Jun 8 '11 at 11:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9817735552787781, "perplexity": 77.76298672899937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256887.36/warc/CC-MAIN-20190522163302-20190522185302-00396.warc.gz"}
https://brilliant.org/problems/short-circuit-4/
# Short Circuit? This circuit, consisting of four resistors, is connected to a voltage source. The value of the resistance marked $x$ is unknown. When we accidentally drop a screwdriver across the wires, thus connecting points A and B directly, there is no noticeable change in the current that flows through the circuit. Calculate the resistance $x$, in ohms. ×
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8188116550445557, "perplexity": 855.609306206488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401641638.83/warc/CC-MAIN-20200929091913-20200929121913-00697.warc.gz"}
http://mathhelpforum.com/trigonometry/120342-trig-proofs-help-print.html
# Trig proofs help! • December 13th 2009, 10:04 PM steelersgirl18 Trig proofs help! Hi! I needed help verifying the following proofs! If anyone wants to contribute by solving either of them, I will greatly appreciate it! Sin^(4) x - cos^(4) x = 1-2cos^(2) x and: sec x - sin x • tan x = cos x and: {(cos^(2) x)/(1+ sin x)} -1= -sin x thank you! • December 13th 2009, 10:15 PM bigwave (sin^2 - cos^2)(1) = sin^2 - cos^2 here is the first one $Sin^{4}x - cos^{4} x = 1-2cos^{2} x$ $(sin^2 - cos^2)(sin^2 + cos^2) = sin^2 + cos^2 - 2cos^2$ $ (sin^2 - cos^2)(1) = sin^2 - cos^2$ continue more?? • December 13th 2009, 10:25 PM steelersgirl18 Wow! Thank you so much! Yes, can you please show me the other ones? Also, can you try verifying them by only working on the left side of the equation? I have to do it that way sorry • December 13th 2009, 10:40 PM bigwave the last 2 $sec x - sin x tan x = cos x$ get all terms in $sinx$ and $cosx$ $\frac{1}{cosx} - sinx \frac{sinx}{cosx} = cosx$ $\frac{1 - sin^{2}x}{cosx} = cosx$ $\frac{cos^{2}x}{cosx} = cosx$ $cosx = cosx$ -------------------------------------------------------- $\frac{cos^{2}x}{1 + sinx} - 1 = -sinx$ $cos^{2}x = (-sin+1)(1+ sinx)$ or $cos^{2}x = (1-sinx)(1+ sinx) = 1 - sin^{2}x$ $cos^{2}x = cos^{2}x$ • December 13th 2009, 10:52 PM steelersgirl18 Quote: Originally Posted by bigwave $sec x - sin x tan x = cos x$ get all terms in $sinx$ and $cosx$ $\frac{1}{cosx} - sinx \frac{sinx}{cosx} = cosx$ $\frac{1 - sin^{2}x}{cosx} = cosx$ $\frac{cos^{2}x}{cosx} = cosx$ $cosx = cosx$ wait a sec.... When you combined the left side to form (1-sin^2x)/(cosx) , aren't you also supposed to multiple the denominators? So would the correct combination be : (1-sin^2x)/(cos^2x) ? EDIT: nevermind, you're right whoops • December 13th 2009, 10:57 PM bigwave we are adding 2 fractions not mulitplying the common denominator is $cosx$ • December 13th 2009, 11:10 PM steelersgirl18 One laaaast thing ;) can you complete the first one but this time only work on the left side? Thank you so much, you're very helpful • December 13th 2009, 11:34 PM bigwave $sin^{4}x - cos^{4}x = 1-2cos^{2}x$ $(sin^{2}x - cos^{2}x)(sin^{2}x + cos^{2}x) = 1-2cos^{2}x$ $ ((1-cos^{2}x) - cos^{2}x)(1) = 1-2cos^{2}x$
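A quick symbolic check of the three identities worked through in this thread can be done with a computer algebra system; the sketch below uses SymPy and is only a verification aid, not part of the original exchange.

```python
import sympy as sp

# Each expression is (left side) - (right side); simplifying to 0 confirms the identity.
x = sp.symbols('x')
identities = [
    sp.sin(x)**4 - sp.cos(x)**4 - (1 - 2*sp.cos(x)**2),
    sp.sec(x) - sp.sin(x)*sp.tan(x) - sp.cos(x),
    sp.cos(x)**2/(1 + sp.sin(x)) - 1 - (-sp.sin(x)),
]
print([sp.simplify(expr) for expr in identities])   # expect [0, 0, 0]
```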
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8050351142883301, "perplexity": 3462.1999400048517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036742.18/warc/CC-MAIN-20150601214356-00059-ip-10-180-206-219.ec2.internal.warc.gz"}
http://mathhelpforum.com/discrete-math/230492-easy-compactness-proof-what-wrong-my-picture-logic.html
# Math Help - Easy compactness proof, What is wrong with my picture/logic? 1. ## Easy compactness proof, What is wrong with my picture/logic? Theorem. If $\{K_\alpha\}$ is a collection of compact subsets of a metric space $X$ such that the intersection of every finite subcollection of $\{K_\alpha\}$ is nonempty, then $\bigcap_\alpha K_\alpha$ is nonempty. Proof. Fix a member $K_1$ of $\{K_\alpha\}$ and put $G_\alpha = K_\alpha^c$ [this denotes the complement of $K_\alpha$]. Assume that no point of $K_1$ belongs to every $K_\alpha$. Then the sets $G_\alpha$ form an open cover of $K_1$ [this took me a bit but that's by the last assumption]; and since $K_1$ is compact, there are finitely many indices $\alpha_1,\ldots,\alpha_n$ such that $K_1 \subset \bigcup_{i=1}^{n} G_{\alpha_i}$ [so far so good]. But this means that $K_1 \cap K_{\alpha_1} \cap \cdots \cap K_{\alpha_n}$ is empty, in contradiction to our hypothesis. The area enclosed by the green is $K_1$, by the fuchsia $K_2$, and together they make up the set $\{K_\alpha \}$ The blue lines indicate the area of $K_2^c$. And $G_\alpha$ is the union of the blue shaded area and the part of $K_2$ not occupied by $K_1$ (which I didn't draw out as it looked too messy for one picture) Line for line of the proof; Fix a member $K_1$ of $\{K_\alpha\}$... I think I'm good on this line. It just means call the green blob's insides $K_1$ Assume that no point of $K_1$ belongs to every $K_\alpha$. This means that $K_1 \cap K_2$ is empty. So the bit of the green that overlaps the fuchsia is empty. Then the sets $G_\alpha$ form an open cover of $K_1$; As the blue area and the complement to $K_1$ certainly cover everything but $K_1 \cap K_2$, but that's empty, so this looks acceptable to me. and since $K_1$ is compact, there are finitely many indices $\alpha_1,...,\alpha_n s.t. K_1 \subset G_{\alpha_1} \cup ... \cup G_{\alpha_n}$ I think this means that there is a collection of open sets which cover $K_1$. This is only true because we've removed that chunk with the assumption "Assume that no point of $K_1$ belongs to every $K_\alpha$.". But this means that $K_1 \cap K_2 \cap ... \cap K_{\alpha_n}$ is empty, in contradiction to our hypothesis. I don't see how this contradicts anything. He assumed the intersection of all K's was empty and then showed that the intersection of all K's is empty. It looks like he's written $A \land A$ conclude $\perp$ which is nonsense. Where are my errors? 2. ## Re: Easy compactness proof, What is wrong with my picture/logic? Hi, It appears to me that your main problem is unfamiliarity with some basic set operations. I have rewritten your theorem and proof slightly. If you have questions about what I have posted, post again and I will try to explain.
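The step the original poster is unsure about, namely that "no point of $K_1$ belongs to every $K_\alpha$" forces the complements $G_\alpha$ to cover $K_1$, is pure set manipulation and can be sanity-checked on finite sets. A tiny sketch (my own illustration, not from the thread), with small sets of integers standing in for the compact sets:

```python
# Finite-set illustration of: if no point of K1 lies in every K_alpha,
# then the complements G_alpha = X \ K_alpha cover K1.
X = set(range(10))                      # ambient "space"
K = {                                   # a small made-up family, K1 = K['1']
    '1': {0, 1, 2, 3},
    '2': {2, 3, 4, 5},
    '3': {0, 1, 6, 7},
}
K1 = K['1']

in_every = set.intersection(*K.values())
print("points of K1 in every K_alpha:", in_every & K1)    # empty for this family

G = {name: X - S for name, S in K.items()}                # complements
covered = set.union(*G.values())
print("K1 covered by the G_alpha:", K1 <= covered)         # True whenever the line above is empty
```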
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9005730748176575, "perplexity": 402.4750733965975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116929.30/warc/CC-MAIN-20160428161516-00172-ip-10-239-7-51.ec2.internal.warc.gz"}
http://www.researchgate.net/publication/222822906_Interlaced_solitons_and_vortices_in_coupled_DNLS_lattices
Article # Interlaced solitons and vortices in coupled DNLS lattices Grupo de Física No Lineal, Universidad de Sevilla. Departamento de Física Aplicada I. Escuela Universitaria Politécnica, C/ Virgen de África, 7, E-41011 Sevilla, Spain; Department of Mathematics, Western New England College, Springfield, MA 01119, USA; School of Mathematical Sciences, University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom; Department of Mathematics and Statistics, University of Massachusetts, Amherst, MA 01003-4515, USA Physica D Nonlinear Phenomena (Impact Factor: 1.67). 01/2009; DOI:10.1016/j.physd.2009.09.002 Source: arXiv ABSTRACT In the present work, we propose a new set of coherent structures that arise in nonlinear dynamical lattices with more than one component, namely interlaced solitons. In the anti-continuum limit of uncoupled sites, these are waveforms whose one component has support where the other component does not. We illustrate systematically how one can combine dynamically stable unary patterns to create stable ones for the binary case of two-components. For the one-dimensional setting, we provide a detailed theoretical analysis of the existence and stability of these waveforms, while in higher dimensions, where such analytical computations are far more involved, we resort to corresponding numerical computations. Lastly, we perform direct numerical simulations to showcase how these structures break up, when they are exponentially or oscillatorily unstable, to structures with a smaller number of participating sites. ##### Article: Intrinsic localized modes in coupled DNLS equations from the anti-continuum limit ABSTRACT: In the present work, we generalize earlier considerations for intrinsic localized modes consisting of a few excited sites, as developed in the one-component discrete nonlinear Schrodinger equation model, to the case of two-component systems. We consider all the different combinations of "up" (zero phase) and "down" (π phase) site excitations and are able to compute not only the corresponding existence curves, but also the eigenvalue dependences of the small eigenvalues potentially responsible for instabilities, as a function of the nonlinear parameters of the model representing the self/cross phase modulation in optics and the scattering length ratios in the case of matter waves in optical lattices. We corroborate these analytical predictions by means of direct numerical computations. We infer that all the modes which bear two adjacent nodes with the same phase are unstable in the two component case and the only solutions that may be linearly stable are ones where each set of adjacent nodes, in each component, is out of phase. 02/2012
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.811547577381134, "perplexity": 1219.6805583070663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010450813/warc/CC-MAIN-20140305090730-00067-ip-10-183-142-35.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/491268/difference-between-permittivities-eps-opt-and-eps-inf
# Difference between permittivities eps-opt and eps-inf? Very often in materials physics we are interested in the relative permittivity at optical frequencies, which is usually denoted by $$\varepsilon_\text{opt}$$ or $$\varepsilon_\infty$$. But I'm confused because $$\varepsilon\left(\omega=\infty \right)=1$$, since at high enough frequencies the medium does not have time to respond to the field and alter the allowed flux per unit charge. So to me I feel like $$\varepsilon_\text{opt}$$ is very clear but that $$\varepsilon_\infty$$ is a stupid convention to denote optical permittivity because it should be =1 always... But perhaps there is something I'm not understanding? Am I missing something or is it just a convention thing? • And at x-ray frequencies, the refractive index is even slightly smaller than unity. – Pieter Jul 12 '19 at 20:26
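A standard single-resonance sketch (my own addition, not part of the original exchange) of why the two symbols coexist: for an ionic crystal one often writes $$\varepsilon(\omega)=\varepsilon_\infty+\frac{(\varepsilon_s-\varepsilon_\infty)\,\omega_{TO}^2}{\omega_{TO}^2-\omega^2-i\gamma\omega},$$ where $\omega_{TO}$ is a lattice (phonon) resonance. In this convention $\varepsilon_\infty$ is only the plateau reached well above $\omega_{TO}$ but still below the electronic transitions (that is, at optical frequencies), which is why it is used interchangeably with $\varepsilon_\text{opt}$. Going higher still, past the electronic resonances as well, that plateau itself relaxes and $\varepsilon\to 1$, consistent with the intuition stated in the question.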
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892632782459259, "perplexity": 302.91830085337284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250591234.15/warc/CC-MAIN-20200117205732-20200117233732-00376.warc.gz"}
http://mathhelpforum.com/algebra/225079-elementary-linear-algebra-finding-coefficients-b-c-d-given-curve.html
# Math Help - Elementary Linear Algebra: Finding the coefficients to a,b,c, and d of a given curve. 1. ## Elementary Linear Algebra: Finding the coefficients to a,b,c, and d of a given curve. The question is as follows from the text: "Find the coefficients a,b,c, and d so that the curve shown in the accompanying figure [a circle in the xy-plane that passes through the points (-4,5),(-2,7),(4,-3)] is given by the equation ax^2 + ay^2 + bx + cy + d = 0" What I've done so far is that I have used the given points and substituted them into the given equation, thus giving me three equations and four unknowns: • 41a-4b+5c+d=0 • 29a-2b+7c+d=0 • 25a+4b-3c+d=0 I have then put these three equations into an augmented matrix and proceeded to put it into reduced row echelon form. While doing so, I realized that the numbers were quite off and that I was no further into the solution to the problem. Please, if anybody can, pinpoint where I went wrong and guide me toward the solution and why it is the solution. Thanks so much. 2. ## Re: Elementary Linear Algebra: Finding the coefficients to a,b,c, and d of a given curve. Since the graph of the equation is a circle, a cannot be 0. So we will divide the equation by a to get the equation as x^2 + y^2 + (b/a) x + (c/a) y + d/a = 0 For easy handling we may put b/a = P, c/a = Q and d/a = R. Thus the equation becomes x^2 + y^2 + P x + Q y + R = 0. Now we have three variables and three points and hence we can solve it.
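A quick way to check the setup numerically (my own sketch, not from the thread): divide by a as suggested in the reply, then solve the 3x3 linear system for P = b/a, Q = c/a, R = d/a using the three given points.

```python
import numpy as np

# Points the circle must pass through
pts = [(-4, 5), (-2, 7), (4, -3)]

# x^2 + y^2 + P x + Q y + R = 0   ->   P x + Q y + R = -(x^2 + y^2)
A = np.array([[x, y, 1.0] for x, y in pts])
b = np.array([-(x**2 + y**2) for x, y in pts], dtype=float)

P, Q, R = np.linalg.solve(A, b)
print(P, Q, R)                     # the ratios b/a, c/a, d/a

# Sanity check: centre (-P/2, -Q/2) and radius sqrt(P^2/4 + Q^2/4 - R)
centre = (-P / 2, -Q / 2)
radius = np.sqrt(P**2 / 4 + Q**2 / 4 - R)
print(centre, radius)
```

Incidentally, (-2)^2 + 7^2 = 53 rather than 29, so the leading coefficient in the second bullet may be where the row reduction started to look off.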
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.873396635055542, "perplexity": 581.2932056876624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010354479/warc/CC-MAIN-20140305090554-00047-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/a-few-questions-on-tension-and-some-work.538374/
# A Few Questions on Tension, and Some Work 1. Oct 9, 2011 ### Nickg140143 1. The problem statement, all variables and given/known data You can find the questions, and relevant diagrams for each, within the attached image 2. Relevant equations Force Equations $$ƩF=ma$$ $$f_k=μ_kN$$ Work Equations $$W=Fs$$ (in this case, s=h) $$W_{tot}=\frac{1}{2}mv^2-\frac{1}{2}mv_0^2$$ My Questions (these are also written on the diagrams): -Looking at the diagram with the black background, have these tensions been identified correctly? If so, how would I try to solve for T2, given I've created all of my force equations? -Looking at the attachment that has two problems, with forces identified, and specific areas of interest highlighted: -Have I identified my forces correctly? -In question 1, it says block m2 just "drops" a distance h; since it says nothing about the velocity being constant, should I assume that there is an acceleration in the system? -In question 2, it says that block c descends with a constant velocity, can I say that block A and block B also move through the system at a constant velocity? -When finding the work done on each individual body in both questions, I forgot that none of the tensions are given to me. Would I simply set up force equations in order to find each tension? It seems I have most of the concepts understood to a certain degree, but there are certain things that tend to trip me up quite a bit, any help regarding these questions would be greatly appreciated. 2. Oct 9, 2011 ### grzz re diagram with black background: 3. Oct 9, 2011 ### Nickg140143 I have something along these lines forces: $$ƩF_{Ax}=T_1-f_{sa}=0 → f_s=T_1$$ $$ƩF_{Ay}=N_A-mg=0 → N_A=mg$$ $$ƩF_{Wy}=m_Wg-T_2=0 → m_W=\frac{T_2}{g}$$ friction: $$f_s=μ_sN_A$$ so, $$T_1=μ_smg$$ The third equation is the one I'm not too sure about. I've attached a better diagram of the problem, as well as my own work (you'll notice a typo in the third equation, T3 should be T2) My main concern regarding this problem is how should I use the given angle to help calculate my tension T2? That is, assuming I was even able to get tension T1 calculated correctly. 4. Oct 9, 2011 ### grzz If the horizontal tension is T1 then the tension at 45 deg CANNOT be also T1. 5. Oct 9, 2011 ### Nickg140143 Alright, so I can now say that I have 3 tensions: tension on Block A $$T_1=μ_smg$$ Tension on the wall from the rope at angle 45 $$T_2=$$ tension on hanging weight $$T_3=$$ Well, I'm looking at the rope on the wall, can I say I have a 45-45-90 triangle here, since one angle is 45, the angle the imaginary horizontal line makes with the wall is 90, and the remaining angle is 180=90+45+x, x=45? I'm not sure where to take it from here. 6. Oct 10, 2011 ### grzz Sorry for my abrupt disappearance last time, due to a total power failure in our area. Now have a look at the point where all three tensions meet. This point is in equilibrium. Hence you can find the horizontal and vertical components of T2 and get an equation connecting T1 and T2 and another connecting T2 and T3.
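For the black-background diagram (a block on a table pulled horizontally by T1, a rope at 45 degrees anchored to the wall carrying T2, and a vertical rope to the hanging weight carrying T3), the equilibrium of the knot that grzz points to can be written out numerically. A small sketch under those assumptions; the coefficient of friction, mass, and angle below are placeholders, since the real numbers are in the attachments, which are not reproduced here.

```python
import math

# Assumed values -- the actual ones are in the attached figures.
mu_s  = 0.30                        # static friction coefficient, block A on table
m_A   = 10.0                        # mass of block A, kg
g     = 9.8
theta = math.radians(45.0)          # angle of the rope anchored to the wall

# Block A on the verge of slipping: T1 balances the maximum static friction.
T1 = mu_s * m_A * g

# Equilibrium of the knot where the three ropes meet:
#   horizontal: T2*cos(theta) = T1
#   vertical:   T2*sin(theta) = T3 = m_W * g
T2  = T1 / math.cos(theta)
T3  = T2 * math.sin(theta)
m_W = T3 / g

print(f"T1 = {T1:.1f} N, T2 = {T2:.1f} N, T3 = {T3:.1f} N, hanging mass = {m_W:.2f} kg")
```

With the 45-degree angle the horizontal and vertical components of T2 are equal, so T3 = T1, which is the relation grzz's last hint leads to.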
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8831925392150879, "perplexity": 1288.7730660957338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806438.11/warc/CC-MAIN-20171121223707-20171122003707-00359.warc.gz"}
http://mathhelpforum.com/pre-calculus/207262-function-problem.html
# Math Help - function problem 1. ## function problem Hi, I have a function question, can you help me? Which one is an increasing function for all x values? (a) y = |x - 7| (b) y = 2x^2 + 9 (c) 3x^3 -11 (d) 4x^4 + 2 how can I prove it? Thank you 2. ## Re: function problem C (although it is neither increasing nor decreasing at x = 0) All the other ones are decreasing for part of the domain. 3. ## Re: function problem Originally Posted by sunrise Hi, I have a function question, can you help me? Which one is an increasing function for all x values? (a) y = |x - 7| (b) y = 2x^2 + 9 (c) 3x^3 -11 (d) 4x^4 + 2 how can I prove it? You posted this in the pre-calculus forum. Thus there is no way to prove this one way or the other. You can simply draw the graphs and see the answer. But that is hardly a proof. On the other hand, with calculus we can see which one has a non-negative derivative. That would prove it. 4. ## Re: function problem richard, how did you solve it and decide that C is increasing? Should I give a value to x? 5. ## Re: function problem Originally Posted by sunrise richard, how did you solve it and decide that C is increasing? Should I give a value to x? Assigning a value to x gives no indication of whether a function is increasing or not. Why don't you just graph the function? To rigorously prove it, we note that its derivative is 9x^2, which is always non-negative. Therefore the function in (C) is always increasing (except when x = 0, where the derivative is zero). 6. ## Re: function problem Originally Posted by richard1234 (C) is always increasing (except when x = 0, where the derivative is zero). Technically the function $f(x)=3x^3-11$ is increasing everywhere. The statement that $f$ is an increasing function means that if $a<b$ then $f(a)<f(b)$. That is clearly true in this case. When I made the remark in reply #3 about proof, it was addressing the fact that this is a precalculus forum. There is of course a perfectly good way of proving this. Suppose that $a<b$; then $a^3<b^3$, and then $a^3-11<b^3-11$. Proved. 7. ## Re: function problem If we think from your idea, y = 2x^2 + 9 is also increasing, isn't it? Thanks 8. ## Re: function problem Originally Posted by sunrise If we think from your idea, $f(x) = 2x^2 + 9$ is also increasing, isn't it? No it is not. $-3<1$ BUT $f(-3)>f(1)$. 9. ## Re: function problem Now I understand, thank you.
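A quick numerical check of the four options (my own sketch, not from the thread): sample each function on an increasing grid and see whether the values ever decrease. It matches both the graphical route and the derivative argument given above.

```python
import numpy as np

xs = np.linspace(-5, 5, 2001)
options = {
    "(a) |x-7|":    np.abs(xs - 7),
    "(b) 2x^2+9":   2*xs**2 + 9,
    "(c) 3x^3-11":  3*xs**3 - 11,
    "(d) 4x^4+2":   4*xs**4 + 2,
}

for name, ys in options.items():
    # non-decreasing on the sample grid iff consecutive differences never go negative
    print(name, "non-decreasing:", bool(np.all(np.diff(ys) >= 0)))
```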
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9162901639938354, "perplexity": 1225.206043331015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657113000.87/warc/CC-MAIN-20140914011153-00012-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://matroidunion.org/?m=202006
Online talk: Marthe Bonamy Mon, July 6, 3pm ET (8pm BST, 7am Tue NZST) Marthe Bonamy, Univ. Bordeaux, CNRS Graph classes and their Asymptotic Dimension Abstract: Introduced in 1993 by Gromov in the context of geometric group theory, the asymptotic dimension of a graph class measures how much “contact” is necessary between balls of “bounded” diameter covering a graph in that class. This concept has connections with clustered coloring or weak diameter network decompositions. While it seems surprisingly fundamental, much remains unknown about this parameter and it displays intriguing behaviours. We will provide a gentle exposition to the area: from the state of the art to the main tools and questions, including some answers. This is joint work with Nicolas Bousquet, Louis Esperet, Carla Groenland, François Pirot and Alex Scott. Online talks: announcement Hello everyone. As you have likely already noticed, there is no talk in the seminar series today. As several other seminars have paused talks over these summer months, we are expanding the focus of the series to “Graphs and Matroids”. We’re excited to have Marthe Bonamy giving the first talk next Monday under the new name, and Federico Ardila speaking the following week. As usual, more details will follow. See you next Monday! The geometry of cocircuits and circuits. A tombeau for Henry Crapo 1933 — 2019 A guest post by Joseph Kung. Henry Howland Crapo died on September 3, 2019, at the age of 86. Henry’s mathematical work has had significant influence in at least three areas: his early results were landmarks in matroid theory, he was one of the creators of structural topology, and he was one of the first to apply exterior or Cayley algebra methods to matroid theory, computational geometry and automatic theorem proving. Henry’s life did not follow the usual path of an academic mathematician in the second half of the twentieth century. After a short more-or-less conventional academic career, ending as professor at the University of Waterloo, he chose to resign and pursue a peripatetic career in France, taking inappropriately junior positions. This is probably not because he needed a job, as he had private means, but to gain residency in France. He achieved this, eventually retiring to La Vacquerie (Hérault), a small village in the mountains near Montpellier. His beautifully restored house, on the Grand’Rue, was the venue for several private conferences — the Moutons Matheux — first in structural topology and then in matroid theory. Henry was a contrarian, not only in mathematics, but in life. Henry’s politics were to the left (in the French sense), and they were never dogmatic, but infused with a deep humanity. He was fond of conspiracy theories, but as intellectual exercises, so that it was possible to discuss issues like the Kennedy assassination with him, reasonably and humorously. He was constantly exploring the possibilities and improbabilities of the human mind, individual and collective; for example, as the guests and performers of a musical evening at a Moutons Matheux, one found a violist da gamba playing J.S. Bach and Tobias Hume, and a Dervish musician singing traditional melodies, accompanying himself on the Saz. On a more personal note, my introduction to matroids is through Henry’s book with Rota “On the Foundations of Combinatorial Theory: Combinatorial Geometries”, the flimsy preliminary edition which, to be honest, taught me a minimum about matroids in the technical sense, but a maximum about their deep structures. 
I also “lifted” much of the chapter on strong maps in the book “Theory of Matroids” from his Bowdoin lecture notes. I should also apologize to Henry for not using the ineffable effable effanineffable name “combinatorial geometry” instead of “matroid”, but the latter name has stuck, and there is no doing anything about it. I should also mention two expository or survey papers Henry wrote. The first is “Structural topology, or the fine art of rediscovery” (MR1488863) and the second Ten abandoned gold mines” (MR1854472). These papers are well-worth reading. The geometry of circuits I had intended to write in some detail about Henry’s work, but cold reflection soon indicates that this is a task which will almost certainly exceed my own mortality. One way to remember Henry is to write about one of his idées fixes, the problem of defining a matroid on the set of circuits or the set of cocircuits of a matroid. The program of finding (all) the dependencies on the circuits of a matroid was stated by Gian-Carlo Rota, in the 1960’s, as a part of his vision that matroid theory is an “arithmetical” or discrete counterpart of classical invariant theory. Thus, this program asks an analogue of the algebraic geometry question of “finding the syzygies among the syzygies”. This is an easy task when the matroid comes with a representation over a field, but I believe, impossible in general. Henry’s favorite approach was to build a free exterior algebra with “formal” coordinates from the matroid. One can then perform algebraic or symbolic calculations with formal coordinates as if the matroid were represented in a vector space. The ultimate version of this is the Whitney algebra, which Henry developed in joint work with Bill Schmitt. See the paper “The Whitney algebra of a matroid” (MR1779781). However, I think that there are intrinsic difficulties in this approach because the combinatorial interpretation of one algebraic identity is sufficiently ambiguous that when repeated, as is necessary when an algebraic derivation is performed, the interpretation usually becomes meaningless. However, there are exceptions — the Rota basis conjecture, for example — which show that this approach could be fruitful. Henry wrote an introductory article “The geometry of circuits” (unpublished) about this approach. The geometry of cocircuits In the remainder of this note, I will describe what seems to be practical from Henry’s ideas. I begin by describing a way of building a matroid of cocircuits from a given representation of a matroid. Before starting, I should say all this is elementary linear algebra. Some would say, and they would be right, that it is a special case of the theory of chain groups of Tutte. Recall that a cocircuit is the complement of a copoint. (A copoint is also known as a hyperplane.) Let $M^{(0)}$ be a matroid of rank $r$ with a specific representation as a multiset $S$ of vectors in $\mathbb{K}^r$, where $\mathbb{K}$ is a field. For each cocircuit $D$, the copoint $S \backslash D$ spans a unique (linear) hyperplane in $\mathbb{K}^r$ and hence, each cocircuit $D$ defines a linear functional $\alpha_D$, unique up to a non-zero multiplicative factor, whose value on a vector $x$ we denote by $\langle \alpha_D | x \rangle.$ For each vector $x$ in $S$, $\langle \alpha_D | x \rangle \neq 0$ if and only if $x$ is in the cocircuit $D$. 
It is easy to check that the cocircuit matrix, with columns indexed by $S$ and rows by the set $\mathcal{D}$ of cocircuits, with the $D,x$-entry equal to $\langle \alpha_D | x \rangle$, is a representation of $M^{(0)}$. Transposing the matrix, one obtains a matroid $M^{(1)}$ on the set $\mathcal{D}$ of cocircuits, This matroid is the cocircuit matroid of $M^{(0)}$. It has the same rank $r$ as $M^{(0)}$. Among the copoints of $M^{(1)}$ are the sets $E_a,$ where $a$ is a vector in $S$ and $E_a = \{D: a \in D\},$ the set of all copoints containing the vector $a$. If $M^{(0)}$ is simple, the map $a \mapsto E_a$ embeds $S$ injectively into $\mathcal{D}$. Deleting rows from the transpose of the cocircuit matrix so that there remains a matrix having $r$ rows and rank $r$, one obtains a representation of $M^{(1)}$ on which one can repeat the construction. Iterating, one constructs a sequence — I will call it the Crapo sequence $M^{(0)}, M^{(1)}, M^{(2)}, M^{(3)}, \,\,\ldots$ with $M^{(i)}$ embedded naturally in $M^{(i+2)},$ so that there are two non-decreasing sequences of matroids, all having the same rank $r$ as the original matroid $M^{(0)}$, one starting with the matroid and the other starting with the cocircuit matroid. It is not easy to describe the Crapo sequence given $M^{(0)}$, but an example might clarify the situation. Let $M^{(0)} = M(K_4)$, the cycle matroid of the complete graph on $4$ vertices, or as Henry would have preferred, the complete quadrilateral, with the specific representation $e_1,e_2,e_3,e_1-e_2, e_1-e_3,e_2 – e_3$. The matroid $M(K_4)$ has $7$ copoints (four $3$-point lines and three $2$-point lines), so if one takes the representation over $\mathrm{GF}(2),$ then $M^{(1)}$ is the Fano plane $F_7$ and the Crapo sequence stabilizes, with $M^{(i)}= F_7$ for $i \geq 2$. On the other hand, if one takes the representation over $\mathrm{GF}(3)$, then $M^{(1)}$ is the non-Fano configuration $F_7^-$. The non-Fano configuration has $9$ copoints ($6$ are $3$-point lines and $3$ are $2$-point lines) with $4$ points on (exactly) $3$ lines, and $3$ points on $4$-lines. From this, one sees that $M^{(2)}$ has nine points, three $4$-point lines, four $3$-point lines, and perhaps more lines. One concludes that $M^{(2)}$ is the rank-$3$ Dowling matroid $Q_3$ on the group $\{+1,-1\}$. The matroid $Q_3$ has six additional $2$-point lines, making a total of thirteen lines. Thus, $M^{(3)}$ is the ternary projective plane $\mathrm{PG}(2,3)$. The Crapo sequence now stabilizes and $M^{(i)} = \mathrm{PG}(2,3)$ forever after. Over $\mathrm{GF}(5)$ and larger fields, the situation is much more complicated and I do not know simple descriptions, although as will be shown shortly, the Crapo sequence of $M(K_4)$ eventually stabilizes over any finite field. I might note that since $M(K_n)$ has $2^{n-1}-1$ copoints, its Crapo sequence stabilizes at $i=1$ over $\mathrm{GF}(2)$. I should make two remarks at this point. The first is that the construction of $M^{(1)}$ from $M^{(0)}$ may change if one chose inequivalent representations. An example is $U_{6,3}$ and its two inequivalent representations. The second is that it is possible to define a looser structure, that of $2$- or line-closure, on the set of cocircuits; the closed sets in this structure are the linear subclasses of Tutte. 
Being the closed sets of a closure, the linear subclasses form a lattice, with the points (that is, the elements covering the minimum, in this case the empty set) being the one-element linear subclasses consisting of one copoint. Henry discussed this theory in his papers “Single-element extensions of matroids” (MR0190045) and “Erecting geometries” (MR0272655, 0277407). It is almost true that if one takes a matroid with a specific representation over a finite field $\mathrm{GF}(q)$, then the Crapo sequence $M^{(i)}$ eventually stabilizes at $\mathrm{PG}(r-1,q^{\prime})$, where $q^{\prime}$ divides $q$. It is clear that if the sequence reaches a projective geometry, then the sequence stabilizes. For a sequence to stabilize at $M^{(i)}$, the number of points and the number of copoints must be the same, and a theorem (discovered many times) says that $M^{(i)}$ must be modular, and by a theorem of G. Birkhoff, a direct sum of projective geometries. Thus, the assertion is true except when $M^{(0)} = U_{r,r}$. In particular, with one exception, if one starts with a representation of $M^{(0)}$ over a finite field $\mathbb{K}$ and $r \geq 3,$ then from the projective geometry in the Crapo sequence, one can recover a subfield of $\mathbb{K}$. When one takes a representation over a field of characteristic $0$ (and $M^{(0)} \neq U_{r,r}$), then the Crapo sequence cannot stabilize at a finite index $i$, but one should be able to take an infinite limit to get a geometric structure in which points are in bijection with copoints. The matroid of circuits of a matroid $M$ with a representation as a multiset of vectors can be defined as the cocircuit matroid of the dual $M^{\perp}$ with an orthogonal dual of the given representation. If the original matroid has size $n$ and rank $r$, the circuit matroid has rank $n-r$ and iterating this construction usually gives a sequence which blows up. See the papers of Longyear (MR566870) and Oxley and Wang (MR4014616). There are specific problems about circuit matroids which are interesting. For example, one can ask whether there is a matroid of circuits of a transversal matroid which is a gammoid. (The orthogonal dual of a transversal matroid is a gammoid, and the transpose of a circuit matrix gives a representation of the dual, so the answer is probably affirmative.) Or, one can ask for a construction (which has to be combinatorial) of a circuit matroid of a paving matroid (if one exists). The adjoint is an attempt at constructing the cocircuit matroid without using a representation. I will work with geometric lattices rather than matroids. The opposite $P^{\mathrm{opp}}$ of a partial order $P$ is the partial order obtained by reversing the order relation. An adjoint $\mathrm{Adj}(L)$ of a geometric lattice $L$ is a geometric lattice of the same rank as $L$ such that there is a one-to-one order-preserving function mapping points of $L^{\mathrm{opp}}$ (which are copoints of $L$) into the points of $\mathrm{Adj}(L)$. Alan Cheung (MR0373976) showed that the lattice of flats of the Vamós matroid (or the “twisted cube”) does not have an adjoint. Alan proved this using the lattice of linear subclasses. The Vamós matroid is the smallest rank-$4$ paving matroid which is not representable over any field as it is a relaxation of the configuration given by the bundle theorem in projective geometry. It might be worthwhile to look at bigger rank-$4$ paving matroids: do the non-representable ones also not have adjoints? The Vamós matroid is self-dual. 
A natural question is whether the existence of an adjoint for the lattice of flats of a matroid implies the existence of an adjoint for the lattice of flats of its dual. The question now arises of what happens in rank $3$. In rank $3$ or the plane, if two lines do not intersect at a point, then it is always possible to add that point of intersection. Thus adjoints exist at rank $3$. When one iterates taking adjoints, the Crapo sequence does not necessarily stabilize at a finite projective plane, but it is possible to construct the infinite limit. This limit is the free closure of D.R. Hughes; see his book “Projective Planes”, written with Piper (MR0333959). It might be of interest to extend the Hughes construction for rank $4.$ I end with matroids which can be said to be hiding in plain sight. Henry was the first to look at what are now called paving matroids. He and Rota called them Hartmanis partitions, which is correct, but hardly “catchy”. (Incidentally, Hartmanis’ papers (MR0098358, 0086045) are well worth studying.) In his work with Blackburn and Higgs (MR0419270) on cataloging all simple matroids having at most $8$ elements, a remarkably far-sighted effort for its time, it was observed that paving matroids seem to “predominate”, an observation which has now been formalized as a conjecture. However, while much work is done on asymptotic predomination, the matroids themselves have hardly been studied. One way to study paving matroids in detail (suggested by Cheung’s work), is to look at the lattice of linear subclasses, which should answer questions such as representability or the existence of adjoints. Online talk: Matthew Kwan Mon, June 22, 3pm ET (8pm BST, 7am Tue NZST) Matthew Kwan, Stanford University Halfway to Rota’s basis conjecture In 1989, Rota made the following conjecture. Given $n$ bases $B_1,\ldots,B_n$ in an $n$-dimensional vector space $V$, one can always find $n$ disjoint bases of $V$, each containing exactly one element from each $B_i$ (we call such bases transversal bases). Rota’s basis conjecture remains wide open despite its apparent simplicity and the efforts of many researchers. In this talk we introduce the conjecture and its generalisation to matroids, and we outline a proof of the result that one can always find $(1/2−o(1))n$ disjoint transversal bases (improving on the previous record of $\Omega(n/\log{n})$). This talk will be accessible to non-matroid theorists.
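Rota's conjecture as stated in the abstract is easy to play with computationally for tiny n. A brute-force sketch (my own, nothing to do with the talk itself), using three made-up bases of R^3; it simply searches over all ways of distributing the elements and checks each candidate transversal with a determinant.

```python
import itertools
import numpy as np

def find_transversal_bases(bases):
    """Brute-force search for disjoint transversal bases (only feasible for tiny n)."""
    n = len(bases)
    for perms in itertools.product(itertools.permutations(range(n)), repeat=n):
        # transversal basis j takes element perms[i][j] from basis i
        transversals = [[bases[i][perms[i][j]] for i in range(n)] for j in range(n)]
        if all(abs(np.linalg.det(np.array(T, dtype=float))) > 1e-9 for T in transversals):
            return transversals
    return None

# three example bases of R^3 (each row of a basis is one vector)
B1 = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
B2 = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]
B3 = [(1, 2, 3), (0, 1, 4), (0, 0, 1)]

result = find_transversal_bases([B1, B2, B3])
if result is None:
    print("no decomposition found by brute force")
else:
    for j, T in enumerate(result):
        print(f"transversal basis {j + 1}: {T}")
```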
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8580052256584167, "perplexity": 648.499058131271}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573849.97/warc/CC-MAIN-20220819222115-20220820012115-00192.warc.gz"}
https://mathproblems123.wordpress.com/2010/07/07/application-to-isoperimetric-inequality/
## Application of the isoperimetric inequality Find the shortest curve which splits an equilateral triangle with edge of length $1$ into two regions having equal area. Hint: This is an awesome problem. Try to transform, or modify the figure such that you can use the isoperimetric inequality. The result is unusual and you cannot really expect it intuitively. Consider the curve which splits the equilateral triangle into two regions of equal area. Since we are looking for the shortest curve, we can assume there are no self-intersections and the curve touches the boundary of the equilateral triangle in two points lying on two different edges. Then we make another 5 equilateral triangles such that the 6 congruent curves form a closed curve, thus dividing the resulting regular hexagon into two regions of equal area. By the isoperimetric inequality, the shortest curve which encloses a given area is the circle, and therefore our curve is an arc of a circle equal to $\frac{1}{6}$ of the length of the circle which encloses an area equal to $\frac{1}{2}\cdot 6 \cdot \frac{\sqrt{3}}{4}=\frac{3\sqrt{3}}{4}$.
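To make the answer concrete (my own numerical footnote, not part of the original post): the enclosed area is 3*sqrt(3)/4, so the full circle has radius r = sqrt(3*sqrt(3)/(4*pi)) and the dividing arc is one sixth of its circumference.

```python
import math

area = 3 * math.sqrt(3) / 4      # area enclosed by the full circle (half the hexagon)
r = math.sqrt(area / math.pi)    # pi * r^2 = area
arc = (2 * math.pi * r) / 6      # our curve is 1/6 of the circumference

print(f"radius     = {r:.4f}")
print(f"arc length = {arc:.4f}")  # ~0.673, shorter than the straight cut of length 1/sqrt(2) ~ 0.707
```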
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9144846200942993, "perplexity": 217.943049304522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804965.9/warc/CC-MAIN-20171118132741-20171118152741-00406.warc.gz"}
https://community.ptc.com/t5/Creo-Modeling-Questions/Does-anyone-remember-DesignView/m-p/60173
## Does anyone remember DesignView? Does anyone remember DesignView from the early '90s? I seem to recall that it was really rather good. Can anyone suggest a modern equivalent? Alternatively, does anyone have a copy of DesignView I could have? Cheers, Sam ## RE: Does anyone remember DesignView? Well fortunately some of you do... My thanks to John for sending me a copy of the software, and also to various other people for their suggestions. However I don't think there is an alternative solution to this piece of 1980's software (yes, it is that old). I have actually put in a request to PTC that they should re-instate it because it is/was brilliant (see attached). The simplest/best 2D parametric solution that I can find. Sam
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9317820072174072, "perplexity": 2899.9393563874796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400197946.27/warc/CC-MAIN-20200920094130-20200920124130-00683.warc.gz"}
http://owenduffy.net/blog/?p=1581
Efficiency and gain of Small Transmitting Loops (STL) Small Transmitting Loops (STL) are loops of less than about 0.1λ in diameter or about 0.3λ in circumference. Below these limits, the current around the loop is almost uniform and this permits a simplified analysis. STL are commonly known by Hams as “magnetic loops”, but that term is rarely used in recognised antenna text books. The efficiency and free space gain of a circular STL can be easily estimated by calculation from simple measurements. Theory The half power bandwidth BW1 of a lossless STL at some frequency f is f/Q1. Q1 is the ratio of inductive reactance Xl of the loop to radiation resistance Rr and these can readily be calculated using simple formulas. BW1=f/(Xl/Rr)   (1) The half power bandwidth BW2 of a practical STL at some frequency f is f/Q2. Q2 is the ratio of inductive reactance of the loop to total resistance (radiation resistance Rr plus loss resistance Rloss). The half power bandwidth of a practical STL can be easily measured by matching it at some frequency and measuring the VSWR=2.62 bandwidth (which corresponds to half power bandwidth or Zload=Zo±jZo). BW2=f/(Xl/(Rr+Rloss))    (2) It follows from (1) and (2) that Rr/(Rr+Rloss)=BW1/BW2   (3). The efficiency η of an STL is Rr/(Rr+Rloss), so from (1) and (3) we can say η=(f/(Xl/Rr))/BW2   (4) The directivity of a small loop irrespective of size is 1.5 or 1.76dB. The gain of a small loop is Directivity*Efficiency which can be stated in deciBels as gain=1.76+10*log((f/(Xl/Rr))/BW2)dB. The calculator Calculate small transmitting loop gain from bandwidth measurement uses this method to predict efficiency and gain of a circular STL from the loop radius, conductor radius and measured half power bandwidth. Note that the ARRL handbooks frequently use Q of an STL to mean the Q when loaded with a receiver. That quantity involves another variable (rx Zin) with uncertainty. ARRL observations about Q will usually be lower and bandwidth higher for that reason, but these are properties of the receive system and not the antenna system alone.
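The linked calculator is not reproduced here, but the method in equations (1) to (4) is easy to sketch. The snippet below uses the usual textbook formulas for a single-turn circular loop, Rr ~ 20*pi^2*(C/lambda)^4 for radiation resistance and L = mu0*a*(ln(8a/b) - 2) for inductance; it is a rough illustration of the bandwidth method, not a reimplementation of the linked calculator, and the example numbers are made up.

```python
import math

MU0 = 4e-7 * math.pi
C_LIGHT = 299792458.0

def stl_gain(freq_hz, loop_radius_m, cond_radius_m, bw2_hz):
    """Estimate efficiency and gain of a circular STL from the measured half-power bandwidth."""
    lam = C_LIGHT / freq_hz
    circumference = 2 * math.pi * loop_radius_m

    # radiation resistance of an electrically small single-turn loop
    rr = 20 * math.pi**2 * (circumference / lam) ** 4

    # loop inductance and reactance
    L = MU0 * loop_radius_m * (math.log(8 * loop_radius_m / cond_radius_m) - 2)
    xl = 2 * math.pi * freq_hz * L

    bw1 = freq_hz / (xl / rr)       # eq (1): lossless half-power bandwidth
    efficiency = bw1 / bw2_hz       # eq (3)/(4)
    gain_dbi = 1.76 + 10 * math.log10(efficiency)
    return efficiency, gain_dbi

# Example: 1 m radius loop, 10 mm radius conductor, 7.1 MHz, measured BW2 = 12 kHz (made-up numbers)
eff, gain = stl_gain(7.1e6, 1.0, 0.010, 12e3)
print(f"efficiency = {eff * 100:.1f}%   gain = {gain:.1f} dBi")
```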
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9629201292991638, "perplexity": 2687.633474555449}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319943.55/warc/CC-MAIN-20170623012730-20170623032730-00534.warc.gz"}
https://physics.stackexchange.com/questions/63660/irreducible-representations-of-lorentz-group/63708
# Irreducible Representations Of Lorentz Group In Weinberg's The Quantum Theory of Fields Volume 1, he considers the classification of one-particle states under the inhomogeneous Lorentz group. My question only considers pages 62-64. He defines states as $P^{\mu} |p,\sigma\rangle = p^{\mu} |p,\sigma\rangle$, where $\sigma$ is any other label. Then he shows that, for a Lorentz transformation: $$P^{\mu}U(\Lambda)|p,\sigma\rangle = \Lambda^{\mu}_{\rho} p^{\rho}U(\Lambda)|p,\sigma\rangle$$ Therefore: $$U(\Lambda)|p,\sigma\rangle = \sum_{\sigma'} C_{\sigma' \sigma}(\Lambda,p)|\Lambda p,\sigma'\rangle.$$ Then he wants to find $C$ in irreducible representations of the inhomogeneous Lorentz group. For any $m$ he chooses a $k$ such that $k^{\mu}k_{\mu} = - m^2$. Then he expresses the $p$'s with mass $m$ according to $p^{\mu} = L^{\mu}_{\nu}(p)k^{\nu}$. Then he defines $$|p,\sigma\rangle = N(p)U(L(p))|k,\sigma\rangle$$ (where $N(p)$ are normalization constants). I didn't understand this last statement. Is $\sigma$ an eigenvalue of the corresponding operator, or just a label? I mean, if $J |k,\sigma \rangle = \sigma |k,\sigma\rangle$, is it then true that $J |p,\sigma\rangle = \sigma |p,\sigma\rangle$? If so, how can we say that $$U(\Lambda)|k,\sigma\rangle = \sum_{\sigma'} C_{\sigma' \sigma}(\Lambda,k)|\Lambda k=p,\sigma'\rangle$$ Thanks for any help. First pages of these notes on General Relativity from Lorentz Invariance are very similar to Weinberg's book. For the Poincaré algebra there are (as far as I know) two different approaches to find its representations. In the first approach one begins from a finite dimensional representation of the (complexified) Lorentz algebra, and using it one constructs a representation on the space of some fields on Minkowski space. The representation so obtained is usually not irreducible and an irreducible representation is obtained from it through some differential equation. E.g. the space of massive Dirac fields satisfying the Dirac equation forms an irreducible representation of the Poincaré group (added later : last statement is not quite correct). Another approach is to find (irreducible, unitary) Hilbert space representations of the identity component of the Poincaré group by the so called "Little group method". This is what Weinberg is doing on pages 62-64 of volume 1 of his QFT book. The idea of this approach is the following -- In momentum space fix a hyperboloid $S_m=\{p|p^2=m^2,p_0 \geq 0\}$ corresponding to a given (nonnegative) mass $m$. (note : here I am using signature $(1,-1,-1,-1)$) Choose a 4-momentum $k$ on $S_m$. Let $G_k$ be the maximal subgroup of (the identity component) of the Lorentz group such that $G_k$ fixes $k$, i.e. for each Lorentz transformation $\Lambda\in G_k$ we have $\Lambda k=k$. $G_k$ is called the little group corresponding to the 4-momentum $k$. Let $V_k$ be a fixed finite dimensional irreducible representation of $G_k$ (or double cover of $G_k$)$^{**}$. Fix a basis of this vector space $|k,1\rangle,|k,2\rangle,\ldots,|k,n\rangle$ where $n$ is the (complex) dimension of $V_k$ {note that $k$ is a fixed vector, and not a variable.} Now for every other $p\in S_m$ introduce a vector space $V_p$ which is spanned by the basis $|p,1\rangle,|p,2\rangle,\ldots,|p,n\rangle\;.$ The Hilbert space representation of (the identity component of) the Poincaré group is now constructed by gluing these vector spaces $V_p$'s together. This is done as follows :- i) Define $H$ to be the direct sum of the $V_p$'s. ii) For every $p\in S_m$ fix a Lorentz transformation $L_p$ that takes you from $k$ to $p$, i.e. $L_p(k)=p$.
Also fix a number $N(p)$ (this is used for fixing suitable normalization for the basis states). In particular, take $L_k=I$. iii) Define operator $U(L_p)$ corresponding to $L_p$ on $V_k$ as :- $U(L_p)|k,\sigma\rangle =N(p)^{-1}|p,\sigma\rangle,\:\sigma=1,\ldots,n\tag1$ This only defines action of $L_p$'s on subspace $V_k$ of $H$. But in fact this definition uniquely extends to the action of whole of (identity component of) Poincaré group on the whole of $H$ as follows -- Suppose $\Lambda$ be ANY Lorentz transformation in the identity component of the Lorentz group, and $|p,\sigma\rangle$ be any basis state. Then (all the following steps are from Weinberg's book): \begin{align}U(\Lambda)|p,\sigma\rangle &= N(p) U(\Lambda) U(L_p)|k,\sigma\rangle\,\,\,\,\,\, \textrm{using def. (1)}\\ &= N(p) U(\Lambda.L_p)|k,\sigma\rangle \,\,\,\, \textrm{(from requiring}\,\, U(\Lambda) U(L_p)=U(\Lambda.L_p))\\ &= N(p) U(L_{\Lambda p}.L_{\Lambda p}^{-1}.\Lambda.L_p)|k,\sigma\rangle\\ &= N(p) U(L_{\Lambda p})U(L_{\Lambda p}^{-1}.\Lambda.L_p)|k,\sigma\rangle\;.\end{align} Now note that $L_{\Lambda p}^{-1}.\Lambda.L_p$ is an element of $G_k$ {check it} and $V_k$ is irreducible representation of $G_k$. So $U(L_{\Lambda p}^{-1}.\Lambda.L_p)|k,\sigma\rangle$ is again in $V_k$; and from (1) we know how $U(L_{\Lambda p})$ acts on $V_k$; thus we know what is $U(\Lambda)|p,\sigma\rangle\;.$ Summarizing, the idea of little group method is to construct irreducible Hilbert space representations of the identity component of Poincare group starting from finite dimensional irreducible representations of the Little group corresponding to a fixed four momenta. $^{**}$ If $V_k$ is not a proper representation of $G_k$ but is a representation of the double cover $\mathcal{G}_k$ of $G_k$ then we'll also need to specify a section $G_k\to \mathcal{G}_k$ of the covering map so that we know how $G_k$ acts on $V_k$. • Comments are not for extended discussion; this conversation has been moved to chat. Mar 18, 2017 at 13:54 With respect to the discussion of momentum-eigenstates and the following derivation in Weinberg's book, $\sigma$ is just a label that denotes any degree of freedom that is not momentum. Even though it can be identified with spin, its nature is not relevant for the discussion at hand. • Thanks for your comment, I know that he uses $\sigma$ for anything other than momentum but my question don't have anything to do with spin, I used $J |p,\sigma\rangle = \sigma |p,\sigma\rangle$ for any observable. I ask if this relation is true after definition of $|p,\sigma\rangle = N(p)U(L(p))|k,\sigma\rangle$. – hans May 7, 2013 at 14:16 • Yes, it's true. The $\sigma$ are eigenvalues of some operators that commute with the $P$ operators. It wouldn't make any sense to use them to label eigenkets otherwise. May 7, 2013 at 20:48 • @user1504 But in this notes link, he says that (at page 2 between (7) and (8) ) $\sigma$ is not an eigenvalue of $J_z$ for $p \neq 0$. – hans May 7, 2013 at 21:29 • That doesn't contradict anything I said. It doesn't have to be the eigenvalue of $J_z$. May 7, 2013 at 21:55 • But it is an eigenvalue of $J_z$ for $|k,\sigma>$. I mean the operator does not important here, in the link it says that for some operator $J$ $J |k,\sigma\rangle = \sigma |k,\sigma\rangle$ but $J |p \neq k, \sigma\rangle \neq \sigma |p \neq k, \sigma\rangle$. – hans May 7, 2013 at 22:14
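The key algebraic fact used above, that $L_{\Lambda p}^{-1}\Lambda L_p$ always lands back in the little group of $k$, is easy to check numerically in the massive case, where $k=(m,0,0,0)$ and the little group is the rotation group. A small sketch (my own, with signature $(+,-,-,-)$ and the standard-boost convention assumed):

```python
import numpy as np

def standard_boost(p_spatial, m):
    """Lorentz transformation L(p) with L(p) k = p for k = (m, 0, 0, 0)."""
    p_spatial = np.asarray(p_spatial, dtype=float)
    E = np.sqrt(m**2 + p_spatial @ p_spatial)
    gamma = E / m
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = p_spatial / m
    n2 = p_spatial @ p_spatial
    if n2 > 0:
        L[1:, 1:] += (gamma - 1.0) * np.outer(p_spatial, p_spatial) / n2
    return L

def boost_x(beta):
    g = 1.0 / np.sqrt(1 - beta**2)
    B = np.eye(4)
    B[0, 0] = B[1, 1] = g
    B[0, 1] = B[1, 0] = -g * beta
    return B

def rot_z(theta):
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[1, 1], R[1, 2], R[2, 1], R[2, 2] = c, -s, s, c
    return R

m = 1.0
k = np.array([m, 0.0, 0.0, 0.0])

Lp = standard_boost([0.3, -0.2, 0.5], m)      # some p on the mass hyperboloid
p = Lp @ k

Lam = rot_z(0.7) @ boost_x(0.4)               # an arbitrary Lorentz transformation
Lam_p = Lam @ p
W = np.linalg.inv(standard_boost(Lam_p[1:], m)) @ Lam @ Lp   # "Wigner rotation"

print(np.allclose(W @ k, k))                              # True: W fixes k, so W is in G_k
print(np.allclose(W[1:, 1:] @ W[1:, 1:].T, np.eye(3)))     # spatial block of W is a rotation
```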
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9923191070556641, "perplexity": 276.5061100230077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521883.7/warc/CC-MAIN-20220518083841-20220518113841-00255.warc.gz"}
http://128.84.4.34/abs/2110.02355
cs.GT # Title: Robustness and sample complexity of model-based MARL for general-sum Markov games Abstract: Multi-agent reinforcement learning (MARL) is often modeled using the framework of Markov games (also called stochastic games or dynamic games). Most of the existing literature on MARL concentrates on zero-sum Markov games but is not applicable to general-sum Markov games. It is known that the best-response dynamics in general-sum Markov games are not a contraction. Therefore, different equilibria in general-sum Markov games can have different values. Moreover, the Q-function is not sufficient to completely characterize the equilibrium. Given these challenges, model-based learning is an attractive approach for MARL in general-sum Markov games. In this paper, we investigate the fundamental question of \emph{sample complexity} for model-based MARL algorithms in general-sum Markov games and show that $\tilde{\mathcal{O}}(|\mathcal{S}|\,|\mathcal{A}| (1-\gamma)^{-2} \alpha^{-2})$ samples are sufficient to obtain an $\alpha$-approximate Markov perfect equilibrium with high probability, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the joint action space of all players, $\gamma$ is the discount factor, and the $\tilde{\mathcal{O}}(\cdot)$ notation hides logarithmic terms. To obtain these results, we study the robustness of Markov perfect equilibrium to model approximations. We show that the Markov perfect equilibrium of an approximate (or perturbed) game is always an approximate Markov perfect equilibrium of the original game and provide explicit bounds on the approximation error. We illustrate the results via a numerical example. Subjects: Computer Science and Game Theory (cs.GT); Multiagent Systems (cs.MA); Systems and Control (eess.SY); Optimization and Control (math.OC) Cite as: arXiv:2110.02355 [cs.GT] (or arXiv:2110.02355v1 [cs.GT] for this version)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8405836224555969, "perplexity": 398.2304978310669}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300934.87/warc/CC-MAIN-20220118152809-20220118182809-00632.warc.gz"}
https://math.stackexchange.com/questions/2505442/shortest-distance-between-2-lines-with-direction-vectors
# Shortest distance between $2$ lines with direction vectors

Let $L_1$ be the line passing through the point $P_1=(-11, 10, -1)$ with direction vector $\vec d_1=\begin{bmatrix}-2\\ 3\\ -1\end{bmatrix}$, and let $L_2$ be the line passing through the point $P_2=(-8, 9, 10)$ with direction vector $\vec d_2=\begin{bmatrix}-1\\ 3\\1\end{bmatrix}$. Find the shortest distance, $d$, between these two lines, and find a point $Q_1$ on $L_1$ and a point $Q_2$ on $L_2$ so that $d(Q_1,Q_2) = d$.

I don't really understand how to properly approach this question. How do I start?

Hints: any point on $L_1$ can be written as $(-11-2t,10+3t,-1-t)$, where $t$ is a parameter. $Q_1$ is one such point. Likewise for $L_2$ with parameter, say, $s$. Note that the line connecting $Q_1$ and $Q_2$ is perpendicular to both $d_1$ and $d_2$, so it must be along their cross product $d_3$. Parametrise that direction by $u$. So, $$Q_1 + u d_3=Q_2$$
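As a numerical companion to these hints (my own sketch, not part of the original answer), the distance can be checked with the standard formula $d = |(P_2-P_1)\cdot(\vec d_1\times\vec d_2)| / |\vec d_1\times\vec d_2|$, and the points $Q_1, Q_2$ recovered by solving a small linear system in $(t, s, u)$:

```python
import numpy as np

P1 = np.array([-11.0, 10.0, -1.0]); d1 = np.array([-2.0, 3.0, -1.0])
P2 = np.array([-8.0, 9.0, 10.0]);  d2 = np.array([-1.0, 3.0, 1.0])

d3 = np.cross(d1, d2)                                   # perpendicular to both lines
dist = abs(np.dot(P2 - P1, d3)) / np.linalg.norm(d3)

# Solve P1 + t*d1 + u*d3 = P2 + s*d2, i.e. t*d1 - s*d2 + u*d3 = P2 - P1
A = np.column_stack([d1, -d2, d3])
t, s, u = np.linalg.solve(A, P2 - P1)
Q1 = P1 + t * d1
Q2 = P2 + s * d2

print(dist)                          # shortest distance d
print(Q1, Q2)                        # closest points on L1 and L2
print(np.linalg.norm(Q2 - Q1))       # should equal dist
```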
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9021493196487427, "perplexity": 74.37626063411908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056476.66/warc/CC-MAIN-20210918123546-20210918153546-00604.warc.gz"}
http://studyadda.com/notes/jee-main-advanced/mathematics/determinants/minors-and-cofactors/9494
# JEE Main & Advanced Mathematics Determinants Minors and Cofactors

## Minors and Cofactors

Category : JEE Main & Advanced

(1) Minor of an element : If we take the element of the determinant and delete (remove) the row and column containing that element, the determinant left is called the minor of that element. It is denoted by $M_{ij}$. Consider the determinant $\Delta =\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$, then the determinant of minors is $M=\begin{vmatrix} M_{11} & M_{12} & M_{13} \\ M_{21} & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{vmatrix}$

where $M_{11}=$ minor of $a_{11}=\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}$, $M_{12}=$ minor of $a_{12}=\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix}$, $M_{13}=$ minor of $a_{13}=\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$.

Similarly, we can find the minors of the other elements. Using this concept, the value of the determinant can be written as $\Delta = a_{11}M_{11}-a_{12}M_{12}+a_{13}M_{13}$, or $\Delta =-a_{21}M_{21}+a_{22}M_{22}-a_{23}M_{23}$, or $\Delta = a_{31}M_{31}-a_{32}M_{32}+a_{33}M_{33}$.

(2) Cofactor of an element : The cofactor of an element $a_{ij}$ (i.e. the element in the $i$th row and $j$th column) is defined as $(-1)^{i+j}$ times the minor of that element. It is denoted by $C_{ij}$ or $A_{ij}$ or $F_{ij}$: $C_{ij}=(-1)^{i+j}M_{ij}$.

If $\Delta =\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$, then the determinant of cofactors is $C=\begin{vmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{vmatrix}$

where $C_{11}=(-1)^{1+1}M_{11}=+M_{11}$, $C_{12}=(-1)^{1+2}M_{12}=-M_{12}$ and $C_{13}=(-1)^{1+3}M_{13}=+M_{13}$.

Similarly, we can find the cofactors of the other elements.
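As a quick numerical illustration of these definitions (a sketch of my own, not part of the original notes), the matrices of minors and cofactors can be computed directly and checked against the cofactor expansion of the determinant:

```python
import numpy as np

def minors_and_cofactors(a):
    """Return the matrix of minors M_ij and the matrix of cofactors C_ij = (-1)^(i+j) M_ij."""
    n = a.shape[0]
    M = np.zeros_like(a, dtype=float)
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(a, i, axis=0), j, axis=1)  # remove row i and column j
            M[i, j] = np.linalg.det(sub)
    signs = np.array([[(-1) ** (i + j) for j in range(n)] for i in range(n)])
    return M, signs * M

a = np.array([[1.0, 2.0, 3.0], [0.0, 4.0, 5.0], [1.0, 0.0, 6.0]])
M, C = minors_and_cofactors(a)
# Expansion along the first row: a11*C11 + a12*C12 + a13*C13 should equal det(a)
print(np.dot(a[0], C[0]), np.linalg.det(a))
```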
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9576646089553833, "perplexity": 714.2499553726434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890514.66/warc/CC-MAIN-20180121100252-20180121120252-00488.warc.gz"}
http://nntdm.net/volume-20-2014/number-3/
# Volume 20, 2014, Number 3

Solutions with infinite support bases of a functional equation arising from multiplication of quantum integers
Original research paper. Pages 1—28
Lan Nguyen
Full paper (PDF, 258 Kb) | Abstract
It follows from our previous works and those of Nathanson that if P is a set of primes, then the greater the cardinality of P, the less likely that there exists a sequence of polynomials, satisfying the functional equation arising from multiplication of quantum integers studied by Nathanson, which has P as its support base and which cannot be generated by quantum integers. In this paper we analyze the set of roots of the polynomials involved, leading to a direct construction of a polynomial solution Γ which has infinite support base P and which cannot be generated by quantum integers. Our results demonstrate that there is more to these solutions than those provided by quantum integers. In addition, we also show that a result of Nathanson does not hold if the condition tΓ = 1 is removed.

Remark on twin primes
Original research paper. Pages 29—30
József Sándor
Full paper (PDF, 71 Kb) | Abstract
Recently, I. Gueye proved a variant of Clement's Theorem on twin primes. We show that this result follows from a simple identity.

A note on the number of perfect powers in short intervals
Original research paper. Pages 31—35
Rafael Jakimczuk
Full paper (PDF, 166 Kb) | Abstract
Let N(x) be the number of perfect powers that do not exceed x. In this note we obtain asymptotic formulae for the difference N(x + x^θ) − N(x), where 1/2 < θ < 2/3 + 1/7. We also prove that if θ = 1/2 the difference N(x + x^θ) − N(x) is zero for infinitely many arbitrarily large x.

On the summation of certain infinite series and sum of powers of square root of natural numbers
Original research paper. Pages 36—44
Ramesh Kumar Muthumalai
Full paper (PDF, 177 Kb) | Abstract
Summation of certain infinite series involving powers of square roots of natural numbers is evaluated through the Riemann zeta function. The sum of powers of the square roots of the first n natural numbers is expressed in terms of infinite series and the Riemann zeta function.

Fibonacci numbers with prime subscripts: Digital sums for primes versus composites
Original research paper. Pages 45—49
J. V. Leyendekkers and A. G. Shannon
Full paper (PDF, 190 Kb) | Abstract
If we use the expression F_p = kp ± 1, p prime, then digital sums of k reveal specific values for primes versus composites in the range 7 ≤ p ≤ 107. The associated digital sums of F_p ± 1 also yield prime/composite specificity. It is shown too that the first digit of F_p, and hence of the corresponding triples (F_p, F_p±1) and (F_p, F_p−1, F_p−2), can be significant for primality checks.

Note on φ, ψ and σ-functions. Part 7
Original research paper. Pages 50—53
Krassimir Atanassov
Full paper (PDF, 137 Kb) | Abstract
An inequality connecting the φ, ψ and σ-functions is formulated and proved.

Sieving 2m-prime pairs
Original research paper. Pages 54—60
Srečko Lampret
Full paper (PDF, 154 Kb) | Abstract
A new characterization of 2m-prime pairs is obtained. In particular, twin prime pairs are characterized. Our results give elementary methods for finding 2m-prime pairs (e.g. twin prime pairs) up to a given integer.

On s_k-Jacobsthal numbers
Original research paper. Pages 61—63
Aldous Cesar F. Bueno
Full paper (PDF, 143 Kb) | Abstract
In this paper the s_k-Jacobsthal numbers are introduced and their properties are studied.
On the tree of the General Euclidean Algorithm
Original research paper. Pages 64—84
Vlasis Mantzoukas
Full paper (PDF, 226 Kb) | Abstract
The General Euclidean Algorithm (GEA) is the natural generalization of the Euclidean Algorithm (EA) and is equivalent to Semi Regular Continued Fractions (SRCF). In this paper, we consider the finite case with natural numbers as entries in the GEA. Consider the Euclidean division. In the GEA we want the divisor to be bigger than the absolute value of the remainder, so at each step we obtain two divisions, except for the case where the remainder is zero. For convenience, we take the non-Euclidean remainder without its negative sign, so that we do not need to take its absolute value at the next step of the algorithm, since we need two positive integers. This produces a binary tree, except at the next-to-last vertex of a path, which gives a single division because the remainder is zero. This paper mainly presents a criterion with which we can find all the shortest paths of this tree, and not only the one given by Vahlen–Kronecker's criterion. In terms of SRCF, this criterion gives all the SRCF expansions of a rational number t with the same length as the Nearest Integer Continued Fraction (NICF) expansion of t. This criterion, as we shall see, is related to the golden ratio. Afterwards, we present a theorem which connects the Fibonacci sequence with the difference between the longest and the shortest path of this tree, a theorem which connects the Fibonacci sequence with the longest path of this tree, and a different proof of a known theorem which connects the Pell numbers with the shortest path of the aforementioned tree. After that, a connection of this tree to the harmonic and the geometric mean is proved; in particular, two new criteria for finding a shortest path are constructed based on these two means. The final chapter gives an algorithm which has an "opposite" property to the EA, a property which has been proved previously and concerns the number of steps the Least Remainder Algorithm (LRA) needs to finish relative to the EA, together with the signs of the remainders along the LRA path.
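As a rough illustration of the two division choices described in this abstract (a sketch of my own, not taken from the paper), the following compares the step counts of the ordinary Euclidean algorithm and the least-remainder variant, which always keeps the remainder of smaller absolute value:

```python
def euclid_steps(a, b):
    """Number of divisions in the ordinary Euclidean algorithm."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

def least_remainder_steps(a, b):
    """Number of divisions when we always choose the remainder of least absolute value."""
    steps = 0
    while b:
        r = a % b
        if 2 * r > b:          # the 'negative' remainder r - b is smaller in absolute value
            r = b - r
        a, b = b, r
        steps += 1
    return steps

# Consecutive Fibonacci numbers are the classical worst case for the ordinary EA.
print(euclid_steps(89, 55), least_remainder_steps(89, 55))   # 9 vs 5 divisions
```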
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8677294254302979, "perplexity": 1110.6363580578482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897707.23/warc/CC-MAIN-20200708211828-20200709001828-00002.warc.gz"}
https://neurips.cc/Conferences/2017/ScheduleMultitrack?event=9475
Poster

Multi-view Matrix Factorization for Linear Dynamical System Estimation

Mahdi Karami · Martha White · Dale Schuurmans · Csaba Szepesvari

Wed Dec 06 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #80

We consider maximum likelihood estimation of linear dynamical systems with generalized-linear observation models. Maximum likelihood is typically considered to be hard in this setting since latent states and transition parameters must be inferred jointly. Given that expectation-maximization does not scale and is prone to local minima, moment-matching approaches from the subspace identification literature have become standard, despite known statistical efficiency issues. In this paper, we instead reconsider likelihood maximization and develop an optimization based strategy for recovering the latent states and transition parameters. Key to the approach is a two-view reformulation of maximum likelihood estimation for linear dynamical systems that enables the use of global optimization algorithms for matrix factorization. We show that the proposed estimation strategy outperforms widely-used identification algorithms such as subspace identification methods, both in terms of accuracy and runtime.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9473677277565002, "perplexity": 1547.7015486512212}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738746.41/warc/CC-MAIN-20200811090050-20200811120050-00424.warc.gz"}
https://jp.mathworks.com/help/antenna/ug/method-of-moments.html
Method of Moments Solver for Metal Structures

Method of Moments computation technique for metal antennas.

The first step in the computational solution of electromagnetic problems is to discretize Maxwell's equations. The process results in this matrix-vector system:

$V = ZI$

• V — Applied voltage vector. This signal can be voltage or power applied to the antenna or an incident signal falling on the antenna.

• I — Current vector that represents current on the antenna surface.

• Z — Interaction matrix or impedance matrix that relates V to I.

Antenna Toolbox™ uses the method of moments (MoM) to calculate the interaction matrix and solve the system equations.

MoM Formulation

The MoM formulation is split into three parts.

Discretization of Metals

Discretization carries the formulation from the continuous domain to the discrete domain. This step is called meshing in the antenna literature. In the MoM formulation, the metal surface of the antenna is meshed into triangles.

Basis Functions

To calculate the surface currents on the antenna structure, you first define basis functions. Antenna Toolbox uses Rao-Wilton-Glisson (RWG) [2] basis functions. The arrows show the direction of current flow. The basis function includes a pair of adjacent (not necessarily coplanar) triangles and resembles a small spatial dipole with linear current distribution. Each triangle is associated with a positive or negative charge.

For any two triangle patches $t_n^{+}$ and $t_n^{-}$, having areas $A_n^{+}$ and $A_n^{-}$, and sharing the common edge $l_n$, the basis function is

$\vec{f}_n(\vec{r}) = \begin{cases} \dfrac{l_n}{2A_n^{+}}\,\vec{\rho}_n^{\,+}, & \vec{r} \in t_n^{+} \\[4pt] \dfrac{l_n}{2A_n^{-}}\,\vec{\rho}_n^{\,-}, & \vec{r} \in t_n^{-} \end{cases}$

• $\vec{\rho}_n^{\,+} = \vec{r} - \vec{r}_n^{\,+}$ — Vector drawn from the free vertex of triangle $t_n^{+}$ to the observation point $\vec{r}$

• $\vec{\rho}_n^{\,-} = \vec{r}_n^{\,-} - \vec{r}$ — Vector drawn from the observation point to the free vertex of triangle $t_n^{-}$

and

$\nabla \cdot \vec{f}_n(\vec{r}) = \begin{cases} \dfrac{l_n}{A_n^{+}}, & \vec{r} \in t_n^{+} \\[4pt] -\dfrac{l_n}{A_n^{-}}, & \vec{r} \in t_n^{-} \end{cases}$

The basis function is zero outside the two adjacent triangles $t_n^{+}$ and $t_n^{-}$. The RWG vector basis function is linear and has no flux (no normal component) through its boundary.

Interaction Matrix

The interaction matrix is a complex dense symmetric matrix. It is a square N-by-N matrix, where N is the number of basis functions, that is, the number of interior edges in the structure. A typical interaction matrix for a structure with 256 basis functions is shown in the documentation figure.

To fill out the interaction matrix, calculate the free-space Green's function between all basis functions on the antenna surface.
The final interaction matrix equations are:

$Z_{mn} = \frac{j\omega\mu}{4\pi} \int_S \int_S \vec{f}_m(\vec{r}) \cdot \vec{f}_n(\vec{r}\,')\, g \, d\vec{r}\,'\, d\vec{r} \;-\; \frac{j}{4\pi\omega\epsilon} \int_S \int_S \left(\nabla\cdot\vec{f}_m\right)\left(\nabla\cdot\vec{f}_n\right) g \, d\vec{r}\,'\, d\vec{r}$

where

$g(\vec{r},\vec{r}\,') = \dfrac{\exp\left(-jk\,|\vec{r}-\vec{r}\,'|\right)}{|\vec{r}-\vec{r}\,'|}$ — Free-space Green's function

To calculate the interaction matrix, excite the antenna by a voltage of 1 V at the feeding edge. So the voltage vector has zero values everywhere except at the feeding edge. Solve the system of equations to calculate the unknown currents. Once you determine the unknown currents, you can calculate the field and surface properties of the antenna.

Neighbor Region

From the interaction matrix plot, you observe that the matrix is diagonally dominant. As you move further away from the diagonal, the magnitude of the terms decreases. This behavior is the same as the Green's function behavior. The Green's function decreases as the distance between r and r' increases. Therefore, it is important to calculate the region on the diagonal and close to the diagonal accurately. This region on and around the diagonal is called the neighbor region. The neighbor region is defined within a sphere of radius R, where R is in terms of triangle size. The size of a triangle is the maximum distance from the center of the triangle to any of its vertices. By default, R is twice the size of the triangle. For better accuracy, a higher-order integration scheme is used to calculate the integrals.

Singularity Extraction

Along the diagonal, r and r' are equal and the Green's function becomes singular. To remove the singularity, extraction is performed on these terms.

$\int_{t_p}\int_{t_q} \left(\vec{\rho}_i \cdot \vec{\rho}\,'_j\right) g(\vec{r},\vec{r}\,')\, ds'\, ds = \int_{t_p}\int_{t_q} \frac{\vec{\rho}_i \cdot \vec{\rho}\,'_j}{|\vec{r}-\vec{r}\,'|}\, ds'\, ds + \int_{t_p}\int_{t_q} \frac{\left(\exp\left(-jk\,|\vec{r}-\vec{r}\,'|\right)-1\right)\left(\vec{\rho}_i \cdot \vec{\rho}\,'_j\right)}{|\vec{r}-\vec{r}\,'|}\, ds'\, ds$

$\int_{t_p}\int_{t_q} g(\vec{r},\vec{r}\,')\, ds'\, ds = \int_{t_p}\int_{t_q} \frac{1}{|\vec{r}-\vec{r}\,'|}\, ds'\, ds + \int_{t_p}\int_{t_q} \frac{\exp\left(-jk\,|\vec{r}-\vec{r}\,'|\right)-1}{|\vec{r}-\vec{r}\,'|}\, ds'\, ds$

The two integrals on the right side of the equations, called potential or static integrals, are found using analytical results [3].
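Before moving on to arrays, here is a minimal numerical sketch of the RWG basis function defined earlier (my own illustration, not Antenna Toolbox code; the triangle data in the example are made up):

```python
import numpy as np

def rwg_basis(r, free_plus, free_minus, edge_length, area_plus, area_minus, in_plus):
    """Evaluate the RWG basis function f_n at observation point r.

    free_plus / free_minus : free vertices of triangles t_n^+ / t_n^-
    edge_length            : length l_n of the shared edge
    area_plus / area_minus : triangle areas A_n^+ / A_n^-
    in_plus                : True if r lies in t_n^+, False if it lies in t_n^-
    """
    r = np.asarray(r, dtype=float)
    if in_plus:
        rho = r - np.asarray(free_plus, dtype=float)       # rho_n^+ = r - r_n^+
        return edge_length / (2.0 * area_plus) * rho
    rho = np.asarray(free_minus, dtype=float) - r          # rho_n^- = r_n^- - r
    return edge_length / (2.0 * area_minus) * rho

# Two unit right triangles sharing the edge from (0,0,0) to (1,0,0):
f = rwg_basis(r=[0.4, 0.2, 0.0], free_plus=[0.0, 1.0, 0.0], free_minus=[1.0, -1.0, 0.0],
              edge_length=1.0, area_plus=0.5, area_minus=0.5, in_plus=True)
print(f)   # inside t_n^+ the current points away from the free vertex, toward the shared edge
```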
Finite Arrays

The MoM formulation for finite arrays is the same as for a single antenna element. The main difference is the number of excitations (feeds). For finite arrays, the voltage vector is now a voltage matrix. The number of columns is equal to the number of elements in the array. For example, the voltage matrix for a `2x2` array of rectangular patch antennas has four columns, as each antenna can be excited separately.

Infinite Array

To model an infinite array, you change the MoM to account for the infinite behavior. To do so, you replace the free-space Green's functions with periodic Green's functions. The periodic Green's function is an infinite double summation.

Green's function:

$g = \dfrac{e^{-jkR}}{R}, \qquad R = |\vec{r}-\vec{r}\,'|$

Periodic Green's function:

$g_{\text{periodic}} = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} e^{\,j\varphi_{mn}}\, \frac{e^{-jkR_{mn}}}{R_{mn}}$

$R_{mn} = \sqrt{(x - x' - x_m)^2 + (y - y' - y_n)^2 + (z - z')^2}$

$\varphi_{mn} = -k\left(x_m \sin\theta\cos\phi + y_n \sin\theta\sin\phi\right)$

$x_m = m\cdot d_x, \qquad y_n = n\cdot d_y$

$d_x$ and $d_y$ are the ground plane dimensions that define the x and y dimensions of the unit cell. θ and φ are the scan angles.

Comparing the two Green's functions, you observe an additional exponential term that is added to the infinite sum. The phase term $\varphi_{mn}$ accounts for the scanning of the infinite array. The periodic Green's function also accounts for the effect of mutual coupling.
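In practice the double summation has to be truncated. The following Python sketch (my own, not Antenna Toolbox code) evaluates the periodic Green's function above with a finite number of terms; this spatial-domain series converges slowly and production solvers accelerate it, but the sketch shows the structure of the sum:

```python
import numpy as np

def periodic_greens(r, rp, k, dx, dy, theta, phi, n_terms=50):
    """Truncated spatial-domain periodic Green's function.

    r, rp      : observation and source points (3-tuples)
    k          : free-space wavenumber
    dx, dy     : unit-cell dimensions
    theta, phi : scan angles in radians
    n_terms    : the sum runs over m, n = -n_terms .. n_terms
    """
    x, y, z = r
    xp, yp, zp = rp
    total = 0.0 + 0.0j
    for m in range(-n_terms, n_terms + 1):
        for n in range(-n_terms, n_terms + 1):
            xm, yn = m * dx, n * dy
            Rmn = np.sqrt((x - xp - xm) ** 2 + (y - yp - yn) ** 2 + (z - zp) ** 2)
            phase = -k * (xm * np.sin(theta) * np.cos(phi) + yn * np.sin(theta) * np.sin(phi))
            total += np.exp(1j * phase) * np.exp(-1j * k * Rmn) / Rmn
    return total

# Broadside scan (theta = 0), half-wavelength unit cell, observation point slightly off the source
k = 2 * np.pi   # wavelength = 1
print(periodic_greens(r=(0.1, 0.0, 0.2), rp=(0.0, 0.0, 0.0), k=k,
                      dx=0.5, dy=0.5, theta=0.0, phi=0.0))
```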
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8940775394439697, "perplexity": 540.5289507644779}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00109.warc.gz"}
https://apps.iu.edu/ccl-prd/events/view/17900295?pubCalId=GRP1322&amp;utm_source=crimsoncard.iu.edu/news-events/events-iu-calendar.html&amp;utm_medium=web&amp;utm_campaign=framework&amp;utm_term=standard&amp;utm_content=2019-04-22-15-00-Department%20of%20Statistics%20Colloquium%20Series
# IUB All Events

## Department of Statistics Colloquium Series

Speaker: Raghu Pasupathy, Associate Professor, Department of Statistics, Purdue University

Where: IMU Sassafras Room

When: Monday, April 22, 2019, 3:00 PM

Title: Second-Order Adaptive Sampling Recursion and the Effect of Curvature on Convergence Rate

Abstract: The Stochastic Approximation (SA) recursion $X_{k+1} = X_k - \gamma_k \nabla F(X_k), k=1,2,\ldots$, also known as Stochastic Gradient Descent (SGD), is the "workhorse" recursion for solving stochastic optimization problems. For smooth, strongly convex objectives, it is well known that SA achieves the optimal convergence rate (the so-called information-theoretic Cramer-Rao lower bound) to the stationary point when the step size $\gamma_k = \theta k^{-1}$, assuming the constant $\theta$ is chosen to be larger than the inverse of the smallest eigenvalue of the Hessian of the objective function at the optimum. When this condition on $\theta$ is violated, SA's convergence rate deteriorates rapidly, a situation that seems to manifest frequently in practice because the eigenvalues of the Hessian at the optimum are rarely known in advance. While the natural remedy to this conundrum is to estimate the eigenvalues of the Hessian, this is a computationally expensive step in high dimensions. We remedy this impasse using an adaptive sampling quasi-Newton recursion where the inverse Hessian and gradient estimates are constructed using prescribed but different sample sizes, and potentially updated on different timescales. A fine analysis of the resulting recursion reveals that the Cramer-Rao information-theoretic bound is attained with only a "light updating" of the Hessian inverse, as long as the sample sizes used to update the gradient increase at a specific rate. Moreover, unlike in optimal SGD, none of the optimal parameter prescriptions within the proposed procedure depend on unknown curvature constants. To fully illuminate the effect of (a) dimension, (b) computation, and (c) sampling, we express all convergence rates in terms of a work measure that includes all such costs. Our theory and illustrative numerical examples are consistent with success reported by a number of recent adaptive sampling heuristics.

Start: Monday April 22, 2019 03:00 PM
End: Monday April 22, 2019 04:00 PM
Location: Indiana Memorial Union (Union Building)
Contact: Kelly Hanna
Contact Email: [email protected]
Cost: Free
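As a toy illustration of the step-size sensitivity described in the abstract (my own sketch, not the speaker's code), consider SGD on a one-dimensional quadratic $F(x) = c x^2/2$ with noisy gradients and $\gamma_k = \theta/k$. In one dimension the inverse of the smallest Hessian eigenvalue is simply $1/c$, and choosing $\theta$ below that threshold visibly degrades the error:

```python
import numpy as np

def sgd_quadratic(theta, curvature=0.2, x0=5.0, noise=1.0, n_steps=50_000, seed=0):
    """Run SGD on F(x) = curvature * x^2 / 2 with additive gradient noise and step gamma_k = theta / k."""
    rng = np.random.default_rng(seed)
    x = x0
    for k in range(1, n_steps + 1):
        grad = curvature * x + noise * rng.standard_normal()
        x -= (theta / k) * grad
    return x

# The inverse Hessian here is 1 / 0.2 = 5, so theta must exceed 5 for the optimal rate.
for theta in (1.0, 5.0, 10.0):
    errs = [abs(sgd_quadratic(theta, seed=s)) for s in range(10)]
    print(f"theta = {theta:4.1f}   mean |x_n - x*| = {np.mean(errs):.4f}")
```

With these (assumed) parameters the run with theta = 1 is left with an error orders of magnitude larger than the runs with theta at or above the threshold, which is the phenomenon the talk addresses.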
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9533654451370239, "perplexity": 780.3737403564357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319155.91/warc/CC-MAIN-20190823235136-20190824021136-00016.warc.gz"}
https://www.physicsforums.com/threads/currents-and-magnetic-fields.298709/
# Homework Help: Currents and magnetic fields

1. Mar 10, 2009

### w3390

1. The problem statement, all variables and given/known data

Your friend wants to be a magician and intends to use Earth's magnetic field to suspend a current-carrying wire above the stage. He asks you to estimate the minimum current needed to suspend the wire just above Earth's surface at the equator (where the Earth's magnetic field is horizontal). Assume the wire has a linear mass density of 10 g/m. Would you advise him to proceed with his plans for this act?

2. Relevant equations

dF = I dl x B, where dl x B is a cross product

3. The attempt at a solution

I am having trouble understanding where to start. I am having trouble visualizing the problem and I don't know how to incorporate the linear mass density into anything. My book does not go over anything like this. Help on how to start would be much appreciated.

2. Mar 10, 2009

### LowlyPion

3. Mar 10, 2009

### w3390

I do not understand how to get I from the equations in your second link. I understand what they mean and I have used them a lot, but I am not sure how the force is going to come into play.

4. Mar 10, 2009

### LowlyPion

Well it will take real magic then to suspend a wire. If the qV got you then try http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/forwir.html#c1

5. Mar 10, 2009

### w3390

Thank you so much. That last link does an amazing job at helping the visualization process. Once I could see what was happening, it made complete sense.
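For reference, the estimate the thread is driving at balances the magnetic force per unit length against gravity, $I B = \lambda g$, so $I = \lambda g / B$. A quick check in Python (my own numbers; the horizontal equatorial field strength of about $3\times10^{-5}$ T is an assumed typical textbook value, not given in the thread):

```python
# Minimum current so that the magnetic force per metre balances gravity: I = lambda * g / B
linear_mass_density = 0.010      # kg/m  (10 g/m)
g = 9.81                         # m/s^2
B_equator = 3e-5                 # T, assumed typical horizontal field at the equator

I_min = linear_mass_density * g / B_equator
print(f"Minimum current: {I_min:.0f} A")   # roughly 3000 A, not a practical stage trick
```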
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8439767360687256, "perplexity": 624.9842039358372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510866.52/warc/CC-MAIN-20181016180631-20181016202131-00491.warc.gz"}
https://leon.bottou.org/_export/xhtml/news/counterfactual_reasoning_and_learning_systems
# Counterfactual Reasoning and Learning Systems The report “Counterfactual Reasoning and Learning Systems” shows how to leverage causal inference to understand the behavior of complex learning systems interacting with their environment and predict the consequences of changes to the system. Such predictions allow both humans and algorithms to select changes that improve both the short-term and long-term performance of such systems. This work is illustrated by experiments carried out on the ad placement system associated with the Bing search engine.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.833544135093689, "perplexity": 917.9633465520204}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160454.88/warc/CC-MAIN-20180924125835-20180924150235-00111.warc.gz"}
http://mathhelpforum.com/advanced-applied-math/204902-real-functions-not-sure-my-answers.html
Math Help - real functions (not sure of my answers)

1. real functions (not sure of my answers)

Indicate whether the described function is a real function:

a) the binary operation of addition of real numbers. YES
b) the operation that corresponds each nonzero real number with its two square roots. YES
c) the function that associates with each quadratic polynomial function p defined by y=p(x)=ax^2+bx+c, with a not equal to zero, the coordinates (r,s) of the vertex of the graph of p in the xy-plane. YES
d) the function that maps each positive integer into the number of its different prime factors. NO
e) the function f with f(x)=0 for all real numbers x. NO
f) the Dirichlet function D with D(x)=1 if x is rational and D(x)=0 if x is irrational. YES

2. Re: real functions (not sure of my answers)

Hey franios.

b) is wrong if it returns both square roots, since a function can only return one single output (not two). If you are returning the two square roots of a positive number which you are taking the square root of, then this is not a function.

e) is wrong as well, since f maps all numbers to a unique value for every element of the domain, and this is a function.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8265565037727356, "perplexity": 547.4083902923113}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163421.31/warc/CC-MAIN-20160205193923-00116-ip-10-236-182-209.ec2.internal.warc.gz"}
https://socratic.org/questions/5a7dc38211ef6b0bf22b1358
# What is the "ppm" concentration of salt in a solution composed of 3.5*g of salt, and 96.5*g water?

Feb 10, 2018

So, in $100 \cdot g$ of solution there are $3.5 \cdot g$ of salt, and the balance water?

#### Explanation:

By definition... $\text{1 ppm} = \frac{1\ \text{mg}}{1\ \text{L}}$... and we call this ratio $\text{parts per million}$ because there are $1000 \times 1000\ \text{mg} \equiv 10^{6}\ \text{mg}$ IN ONE LITRE VOLUME of water.

...most of the time we can ignore the density because the mass of the solute is minuscule... here we assume that the $96.5 \cdot g$ of solvent water expresses a volume of $96.5 \cdot mL$ in the SOLUTION...

And so we take the quotient....

$\frac{3.5\ \text{g} \times 10^{3}\ \text{mg}\cdot\text{g}^{-1}}{96.5\ \text{mL} \times 10^{-3}\ \text{L}\cdot\text{mL}^{-1}} = 36{,}269\ \text{mg}\cdot\text{L}^{-1} \equiv 36{,}269\ \text{ppm}$.

Do you think this $\text{ppm}$ quotation of concentration is appropriate here?

Note that I have been asked by several posters whether this dissolution reaction of sodium chloride in water represents a physical or chemical reaction. My own very conservative notion of the definition of chemical reaction INSISTS that such a process, while REVERSIBLE, is CHEMICAL. Chemical change is characterized by the formation of new substances and the making and breaking of chemical bonds. The dissolution reaction certainly qualifies...

$NaCl(s) \stackrel{H_2O}{\rightarrow} Na^{+} + Cl^{-}$

Where the ion is the aquated complex, i.e. $Na^{+} \equiv \left[Na(OH_2)_6\right]^{+}$.
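The same arithmetic as a quick check in Python (illustrative only; it keeps the answer's assumption that 96.5 g of water occupies 96.5 mL):

```python
salt_mg = 3.5 * 1000          # 3.5 g of salt expressed in mg
water_volume_L = 96.5 / 1000  # assume 96.5 g of water ~ 96.5 mL = 0.0965 L

ppm = salt_mg / water_volume_L   # mg per litre, i.e. parts per million for a dilute aqueous solution
print(f"{ppm:,.0f} ppm")          # about 36,269 ppm
```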
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9239616990089417, "perplexity": 1379.230678345223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540523790.58/warc/CC-MAIN-20191209201914-20191209225914-00550.warc.gz"}
https://www.sofia.usra.edu/science/proposing-and-observing/sofia-observers-handbook-cycle-4/6-instruments-iii-flitecam/61-1
# 6.1.1 Design

FLITECAM consists of a cryogenically cooled near-IR (NIR) camera that can be used for both imaging and grism spectroscopy. A schematic of the optical bench is shown in Figure 6-1 with a full ray trace diagram in Figure 6-2. The incoming beam first passes through the entrance aperture and into the collimator assembly, a stack of custom designed lenses that allow imaging of nearly the entire 8'x8' SOFIA field of view (FOV). The beam is then repositioned using three flat fold-mirrors so that it passes through the image pupil and through a pair of 12-position filter wheels. A fourth flat fold-mirror redirects the beam through the f/4.7 refractive imaging assembly, which then focusses the beam on the array. When observing in spectroscopy mode, only minimal changes to the optical path are required. First, the slit mask is inserted into the beam immediately behind the aperture window at the telescope focus. The slit is a single 16.5 mm long slit (2' on the detector) divided in half with two different widths, one approximately 2'' and the other 1''. Second, the chosen grism and order sorting filter, located in filter wheel #2 and #1, respectively, are set in place. Details are presented in Section 6.1.5 below.

Figure 6-1: This is a block diagram of the front end of the FLITECAM instrument with labels of important components.

Figure 6-2: This is the ray diagram for the FLITECAM instrument. The inset at the upper left displays the additional lenses inserted into the light path for the pupil-viewing mode.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8088178634643555, "perplexity": 2295.991409001478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519600.31/warc/CC-MAIN-20210119170058-20210119200058-00715.warc.gz"}
http://math.stackexchange.com/questions/204445/help-to-understand-the-mathcalo-p-notation
Help to understand the $\mathcal{O}_p$-Notation

I work with the following definitions: if $\{a_n\}$ is a sequence of r.v. and $g(n)$ a real-valued function of the positive integer argument $n$, then the notation $a_n = o_p(g(n))$ means that

$\operatorname{plim}_{n \rightarrow \infty} \left(\frac{a_n}{g(n)}\right)=0$

Similarly, the notation $a_n = \mathcal{O}_p(g(n))$ means that there is a constant $K$ such that, for all $\epsilon>0$, there is a positive integer $N$ such that

$\Pr\left(\left|\frac{a_n}{g(n)}\right|>K\right) < \epsilon \quad$ for all $n>N$

Now my problem, which I don't quite understand: I look at a sequence of random variables $\{x_n\}$ such that $x_n$ is distributed as $N(0,n^{-1})$. It is easy to see that $x_n$ has the c.d.f. $\Phi(n^{\frac{1}{2}}x)$, i.e. that $\Pr(x_n < x ) = \Phi(n^{\frac{1}{2}}x)$. But why is $n^{\frac{1}{2}}x_n$ an $\mathcal{O}_p(1)$?

My understanding of the given definition of $\mathcal{O}_p(\cdot)$ is that I have to find a constant K such that, for $g(n) = n^0 = 1$, the probability of $|n^{\frac{1}{2}}x_n|$ being greater than K is smaller than $\epsilon$. Since $n^{\frac{1}{2}}x_n$ is $N(0,1)$ distributed, I really don't see why this is an $\mathcal{O}_p(1)$. If n gets large, the probability for large K is small, and to write down a limit distribution for $n^{\frac{1}{2}}x_n$ is also possible, but I don't see how this would help. Or is it enough to find any sort of K for which the probability is smaller than $\epsilon$, even if K is very large?

Thanks Tim

-

Let $X$ be any $N(0,1)$ random variable. You know that for each $\epsilon>0$ there is a $K_\epsilon$ such that $\operatorname{Pr}(|X|>K_\epsilon)<\epsilon$. Now each of your random variables $n^{1/2}x_n$ is $N(0,1)$, so for every $n$ you have $$\operatorname{Pr}\left(\left|n^{1/2}x_n\right|>K_\epsilon\right)<\epsilon\;.\tag{1}$$ In other words, you can take $N=1$, and $(1)$ will be true for all $n>N$, because in fact it's true for all $n$. The point is that since your variables are identically distributed, a $K_\epsilon$ that works for one of them automatically works for all of them: it's independent of $n$. It's perfectly true that if $\epsilon=10^{-10^{10}}$, say, $K_\epsilon$ will be fairly large, but that's perfectly acceptable: the definition merely requires that for each $\epsilon$, no matter how small, there be some $K_\epsilon$ that 'works', even if it's huge.

Hey, thanks for your answer. I guess the other way around works too? Meaning that $\epsilon$ does not have to be so very small if $K_\epsilon$ is not too large? I'm just confused because usually $\epsilon$ is used for very small amounts, but in this case $\epsilon$ does not need to be very small, i.e. it could possibly be something like 0.5. – tim Sep 29 '12 at 23:14

@tim: In all limit contexts $\epsilon$ can be any positive real number; it's just that smaller $\epsilon$'s impose stiffer requirements. Here, too, smaller $\epsilon$'s impose stiffer requirements and typically require larger $K_\epsilon$'s. It's harder to get the probability of being in one of the tails of the distribution under $0.001$ than it is to get it under $0.1$, for instance: you have to go further out and use smaller tails. – Brian M. Scott Sep 29 '12 at 23:24

But is it not very easy to find a $K_\epsilon$ and a corresponding $\epsilon$ when the random variables follow a known c.d.f.? Like you said, especially when they are iid distributed. – tim Sep 29 '12 at 23:42

@tim: You have to know the details of the distribution in order to get a specific $K_\epsilon$.
If you're dealing with just one distribution, though, you know that the cdf $F$ approaches $0$ to the left and $1$ to the right, so for any given $\epsilon>0$ you know that there is a $K_\epsilon$ such that $1-\bigl(F(K_\epsilon)-F(-K_\epsilon)\bigr)<\epsilon$, which is all you need. – Brian M. Scott Sep 30 '12 at 1:25
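A small simulation makes the accepted point concrete (my own illustration, not part of the thread): since $\sqrt{n}\,x_n$ is standard normal for every $n$, one fixed $K_\epsilon$ bounds the tail probability uniformly in $n$:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1.96                     # K_eps for eps = 0.05, since P(|N(0,1)| > 1.96) ~ 0.05

for n in (10, 1_000, 100_000):
    x_n = rng.normal(0.0, 1.0 / np.sqrt(n), size=200_000)   # x_n ~ N(0, 1/n)
    tail = np.mean(np.abs(np.sqrt(n) * x_n) > K)
    print(f"n = {n:>7}:  P(|sqrt(n) x_n| > {K}) ~ {tail:.3f}")   # roughly 0.05 for every n
```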
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9524229168891907, "perplexity": 112.20679515748141}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398461132.22/warc/CC-MAIN-20151124205421-00177-ip-10-71-132-137.ec2.internal.warc.gz"}
https://sumidiot.wordpress.com/2009/01/
Archive for January, 2009 Non-Functorial January 31, 2009 In other words, I’m hosed. I’ve been thinking about a category of ‘abstract locally complete partitions’. Fix $M=m\times \mathbb{R}^n$, a disjoint union of copies of $\mathbb{R}^n$ indexed by elements of the finite set $m$. My category is then supposed to consist of pairs $(\rho,f)$ where $\rho$ is a finite collection of affine spaces, $A_{\rho}=\coprod_{i\in s}A_i$, along with a partition, $\Lambda$, of the indexing set $s$, and $f:A_{\rho}\rightarrow M$ an affine (on each component) map which is non-locally-constant. I interchangeably let $\rho$ refer to the collection of spaces, or the partition on that collection of spaces. To describe non-locally-constant, I must first remind you that I let my partition $\Lambda$ on $s$ induce a partition on $A_{\rho}$ where $x\sim y$ iff the component containing $x$ is equivalent to the component containing $y$, mod $\Lambda$. That is, all points in a component $A_i$ are considered equivalent, and two components are equivalent as determined by $\Lambda$. Now $f$ being non-locally-constant means that there exists $x,y$ equivalent mod $\Lambda$ but with $f(x)\neq f(y)$. Now, given such an object $(\rho,f)$ in my category, I would like to reduce it to a more restricted type of complete locally affine partition. In particular, I would like to reduce it to the case where • There are no more than $m$ zero-dimensional affine spaces in $A_{\rho}$, and no more than $m$ non-zero-dimensional affine spaces in $A_{\rho}$. • If $A_i$ is a component of $A_{\rho}$ that is not zero-dimensional, then the $\Lambda$-equivalence class of $i$ is just $i$ itself. • My map $f$ takes distinct zero-dimensional spaces in $A_{\rho}$ to distinct components of $M$. Similarly for non-zero-dimensional spaces. To complete the description of my big category, I need to describe the morphisms. A map $\alpha$ from $(\rho,f)$ to $(\rho',f')$ will be an affine map $\alpha:A_{\rho}\rightarrow A_{\rho'}$ such that $f'\circ \alpha=f$ and $\overline{\alpha(\rho)}\leq \rho'$ – a property I will now further clarify. My $\rho$ consists of a partition of the space $A_{\rho}$. By $\alpha(\rho)$, I mean the transitive closure of the relation on $A_{\rho'}$ where whenever $x\sim y$ in $A_{\rho}$ then $f(x)\sim f(y)$ in $A_{\rho'}$. This process gives me an equivalence relation $\alpha(\rho)$ on $A_{\rho'}$. Any time I have an equivalence relation $\sigma$ on a (disjoint union of) affine space(s), I let $\overline{\sigma}$ denote the finest coarsening of $\sigma$ that is “locally affine” – meaning equivalence classes are (disjoint unions of) affine subspaces, and all equivalence classes in a given component are parallel (technically, there’s probably a little more than that, but it’s good enough for now I guess). So now we have the meaning of $\overline{\alpha(\rho)}$, and by $\overline{\alpha(\rho)}\leq \rho'$, I simply mean that $\overline{\alpha(\rho)}$ is coarser than $\rho'$ (by which I mean the equivalence relation on $A_{\rho'}$. So allow me to recap. My $\alpha$ is an affine map with a property saying that an appropriate triangle commutes ($f'\circ \alpha=f$), and the affine closure of the image of the partition for $\rho$ is coarser than the partition for $\rho'$. Since $\rho'$ is a “complete” locally affine partition (any two points in the same component are in the same equivalence class), this also forces $\overline{\alpha(\rho)}$ to be “complete”. 
Of course, I’m not mentioning that this category is really “a category object in the category of topological spaces”. So really I have a space of objects, a space of morphisms, and enough maps between them to make sense of things. I’ll continue not mentioning that, saving it for another day. Now, like I said, I want to be reducing any $(\rho,f)$ in my big category to get it down to a particularly nice form. One of the main steps I had been relying on turns out to not be allowed, because it isn’t functorial. I hardly knew one could write down “obvious” maps that weren’t functorial, but I’ve apparently done so rather frequently lately. So, what is this construction? Given $(\rho,f)$, one step I want to take is to replace any subset of the components of $A_{\rho}$ that are (1) all related by the partition, and (2) all map to the same component of $M$. I want to replace such a subset by the affine span (direct sum in the category of affine spaces) of the spaces in it. This seems entirely reasonable. Given a bunch of affine spaces, and a map to an affine space, I get, for free, a map from the direct sum of the original affine spaces. That’s what direct sums do. However, since I have disjoint unions as targets of maps $\alpha$, I run into trouble. Consider, for example, $(\rho,f)$ where the space $A_{\rho}$ consists of three disjoint points, the equivalence relation has them all equivalent, and $f$ sends them all to the same component. Consider $(\rho',f')$ the same three points, the same $f$, but only two of the points are equivalent. The obvious map $\alpha$ has $\overline{\alpha(\rho)}\leq \rho'$, and the triangle commutes by construction. Now, when I take affine spans as mentioned, I end up with a single space in $\rho$ (a plane, the affine span of 3 points), and two spaces in $\rho'$ (a line (the span of two points), and a point). That’s a problem, because I no longer know what to do with $\alpha$. As I mentioned before, given a map from a bunch of affine spaces to a single affine space, I get a map from the affine span to the single space. However, given a map from a bunch of affine spaces to a bunch of affine spaces, I no longer get a map from the affine span to the same bunch of spaces (the single affine span, being connected, can only end up in one of the target spaces). So that’s upsetting. It’s not even the only thing I’ve written down recently that wasn’t a functor. Back to the drawing board, as a fella says. My Problem with 0 January 28, 2009 In my research recently, I’ve been debating between two setups for a category. My category is supposed to have, as objects, a finite set $s$ of spaces, a partition $\Lambda$ of $s$, and a map from the disjoint union of those spaces to a space $M$. I tend to bundle all of this information up into $\rho$ (for the finite set, it’s partition, and the collection of spaces) and $f$ (the map to $M$). In my situation, $M$ is a disjoint union of copies of $\mathbb{R}^n$. The spaces I have in $\rho$ have, for a while, been affine spaces. But there’s also always been a question about maybe having them be vector spaces. The difference, of course, is the existence of 0. There are a few ways to think about affine spaces. The least precise is to say it is a vector space that forgot where its 0 is. With this idea, a pointed affine space is (essentially) a vector space. Every affine space has an underlying vector space, and given two points in the affine space, you can find their difference, which will be a vector in the underlying vector space. 
Since differences are defined in a vector space, every vector space is (essentially) an affine space whose underlying vector space is the one you started with. Now, I have this collection of spaces (either vector or affine) and an affine map $f$ to $M$ – that is, the map is affine (linear after a linear translation) on each space in $\rho$. Since I have an equivalence relation $\Lambda$ (by abuse of notation) on my spaces in $\rho$, I can take the transitive closure of the image, $f(\Lambda)$, and get an equivalence relation on $M$. I then have this process in mind where I convert this equivalence relation on $M$ to one of a particularly nice form, which I have been calling ‘locally affine.’ For more, see my earlier writeup. Part of the process to convert an equivalence relation to one that is locally affine involves looking at pairs of points that are parallel to some linear subspace of (a component of) $M$. For reasons that deserve to be called ‘continuity’, this is not a great procedure to do in $M$. If two points are parallel to some line, and you wiggle the two points a little, there’s no reason to assume they are still parallel. And that messes some things up (at least, can, and seems to with what I’ve been hoping to do), or, if nothing else, makes them uglier. So what I’ve been trying to accomplish, or approximate, is to do the similar operations on my original $\rho$ in my category of ‘abstract’ locally affine partitions. I would like to convert the original $\rho$ to something pretty similar, staying in the category I’m defining while not changing the (locally affine coarsening of the) image of the equivalence relation too much. It’s tempting to assume that my spaces in $\rho$ are affine spaces, because a big part of making the locally affine partitions above is taking affine spans of things. Taking the affine span of a vector space would give me an affine space, but that would mean I’ve changed categories (from a category using vector spaces to a category using affine spaces). But taking the affine space of affine spaces causes no such problem. The problem with using affine spaces is that sometimes I also want to take linear spans, which is not defined for affine spaces. Taking linear spans only works if you have a 0. Grr. I’m going to have to start being more clever, or more careless. I wish I knew which. Counting Cards January 3, 2009 Over the past few days my roommate and I have been working on a business card Menger sponge origami project. This was most recently inspired by Thomas Hull‘s fun book ‘Project Origami‘, but it’s quite possible I’d heard of the project before I found the book. I know that in Hull’s book he finds formulas for the number of cards it takes to make various iterations of the sponge, but I wanted to try to come up with them again on my own, and thought I’d share my process. Let $F_n$ be the number of cards showing on each face of a level $n$ sponge (recalling that level 0 is just an ordinary cube). The exterior faces of the Menger sponge are Sierpinski carpets, and it’s pretty easy to determine that $F_0=1$ and $F_n=8\cdot F_{n-1}$. This means $F_n=8^n$. Next, let $U_n$ be the number of cards needed to make a completely unpanelled level $n$ sponge. So $U_0=6$, and to make a level $n$ sponge requires 20 level $n-1$ sponges, so $U_n=20\cdot U_{n-1}$. This makes $U_n=20^n\cdot 6$. Now I’d like to determine the number of cards needed for what I’ll call ‘interior panelled’ sponges. Let $I_n$ denote the number of cards for such a level $n$ sponge. 
A level $n$ sponge consists of lots of little cubes, some of the faces of which are showing (not stuck against faces of other little cubes). Some of the showing faces occur on the very exterior of the sponge (on the faces of the encompassing cube, if you want). So ‘interior panelled’ would be all of the faces except these very exterior ones. There’s no such thing as an interior panelled level 0 sponge, so we start at level 1, counting $I_1$ as follows: Begin with an unpanelled level 1 sponge ($U_1=20\cdot 6$ cards), and panel the two interior faces of each of the 12 ‘edge’ level 0 cubes. So $I_1=U_1+2\cdot 12=20\cdot 6 +2\cdot 12=144$. What about the next iteration for $I_n$? Well, there are 20 ‘interior panelled’ $n-1$ sponges, and you need to panel 2 of the exterior faces of each of the 12 ‘edge’ cubes. The exterior faces, recall, require $F_{n-1}$ cards to panel. So $I_n=20\cdot I_{n-1}+2\cdot 12\cdot F_{n-1}$. Since we know $F_{n-1}=8^{n-1}$, we can clean this formula up a little: $I_n=20\cdot I_{n-1}+24\cdot 8^{n-1}$. Now we’re almost done. The overall goal is to have a fully panelled level $n$ sponge. Let $P_n$ denote the number of cards required. Since a fully panelled level $n$ cube is an interior panelled level $n$ cube, along with exterior panelling, it’s easy to count. There are 6 exterior faces, each requiring $F_n=8^n$ cards to panel, so $P_n=I_n+6\cdot 8^n$. So we’re down to two formulas: $\begin{array}{rcl}I_n &=& 20\cdot I_{n-1}+24\cdot 8^{n-1} \\ P_n &=& I_n+6\cdot 8^n\end{array}$ At this point, I checked Hull’s book, and noticed he had only one formula – one for $P_n$. So how do we get rid of $I_n$? Solve the second equation for $I_n$, obtaining $I_n=P_n-6\cdot 8^n$ (and so also $I_{n-1}=P_{n-1}-6\cdot 8^{n-1}$). Now substitute these two formulas in their appropriate places in the formula above for $I_n$. This gives $P_n-6\cdot 8^n=20(P_{n-1}-6\cdot 8^{n-1})+24\cdot 8^{n-1}$ which we simplify to $P_n=20\cdot P_{n-1}-6\cdot 8^n,$ obtaining the formula that Hull’s book has (hurray!). Perhaps there’s something more than can be done to understand this formula though (and Hull does have remarks like this as well). I’d really like my recurrence formula for $P_n$ to have $(n-1)$s showing up, instead of that $n$ in the power for 8. So write $P_n=20\cdot P_{n-1}-6\cdot 8\cdot 8^{n-1}$. How can we interpret this formula geometrically, in terms of the sponge? Well, we see that to make a panelled level $n$ sponge, we make 20 level $n-1$ sponges, and then remove some bits. In particular, we remove the panelling on faces of cubes that come together – that’s where the $8^{n-1}$ comes from, it is, after all $F_{n-1}$, the number of panels on a face of an $n-1$ sponge. How many faces come together? Well, each of the 8 corner cubes (that lonely 8 in $6\cdot 8\cdot 8^{n-1}$) is 3-valent, meaning there are three ‘edges’ hitting that cube. At each of these 3 edges, we have two ($3\cdot 2=6$, in $6\cdot 8\cdot 8^{n-1}$) panelled faces coming together, and we must remove all of that panelling. So there you have it – a formula for the number of cards to build a level $n$ Menger sponge. If you’re interested in building one, start gathering up cards. While the level 0 sponge (a cube) requires only 12 cards, a level 1 sponge already requires 192 cards. The level 2 sponge, that my roommate and I built, took 3456 cards, and the next level would require a staggering 66,048 cards. I’m in no hurry to start making one of those.
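The recurrences are easy to check numerically; a short Python sketch (my own, simply implementing the formulas above) reproduces the card counts quoted at the end of the post:

```python
def panelled_card_counts(max_level):
    """Fully panelled Menger sponge card counts: P_n = 20*P_{n-1} - 6*8**n with P_0 = 12."""
    P = [12]
    for n in range(1, max_level + 1):
        P.append(20 * P[-1] - 6 * 8 ** n)
    return P

P = panelled_card_counts(3)
print(P)               # [12, 192, 3456, 66048]
print(P[1] - 6 * 8)    # I_1 = P_1 - 6*8^1 = 144, matching the interior-panelled count above
```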
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 156, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9206564426422119, "perplexity": 293.87394557406145}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657470.23/warc/CC-MAIN-20190116113941-20190116135941-00095.warc.gz"}
https://math.stackexchange.com/questions/2384639/bootstrap-confidence-interval-in-r-using-replicate-and-quantile
# Bootstrap confidence interval in R using 'replicate' and 'quantile' I have around one thousand measurements (numbers). All these measurements are my observations and I have calculated a 95% confidence interval for the mean and for the variance by using the normal formulas (without software). Then, I have used the replicate function in R with around one hundred thousand simulations for my measurements (observations) and the parameter "replace" has been set to "true". After that, I used the apply function with the three parameters: my measurements as the data, 2 as the margin and mean as the function. Then to get a 95% confidence interval this way, I used the quantile function containing the variable for the apply function I used, and then 0.025 and 0.975 combined as the second parameter for the quantile function. In that way, I got almost exactly the same 95% confidens interval as calculated with the normal formula (without software). So now I wanted to do exactly the same thing for the variance, i.e. use replicate, apply and quantile to get a 95% confidence interval for the variance. So I just changed the third parameter "mean" to "var" in the apply function. I then noticed that the outputted 95% confidence interval for the variance (from the quantile function) is a little bit different (it is wider) than the one I calculated by the normal formula without software. So my question is: Did I use the replicate, apply and quantile function correctly for the variance confidence interval? I know I did use the functions right for the mean, since I got almost exactly the same result there as calculated by normal formula. If I did use the functions correctly for the variance, why is the confidence interval a little bit different? Is it because as sample size increases, there might be more observations far away from the mean resulting in a bigger variance? • Hello and welcome to math.stackexchange. It looks like you are interested in comparing bootstrap and formula confidence intervals for the mean and the variance. From your description, it looks to me that you are doing the right thing. As to the difference between the bootstrap and formula confidence intervals for the variance, this is most likely due to the fact that your distribution is not normal. The formula confidence interval for the variance is not as robust to such deviations as the formula confidence for the mean. That is, the bootstrap CI is more accurate. – Hans Engler Aug 6 '17 at 16:15 • Hi. Thank you for your comment. But the distribution of my data (my observations) is definitely normal. I have made both a histogram of it (and it forms a bell curve) and a Q-Q Normal plot containing an almost straight line through the dots. – John A Aug 6 '17 at 16:33 • Try the same thing with synthetic data from $N(0,1)$ and see if you get the same effects. – Hans Engler Aug 6 '17 at 18:20 From your description, I am not sure exactly what you are doing. But maybe I can give you some helpful ideas. 1) For normal data, the standard CI for the population mean $\mu$ uses Student's t distribution which is symmetrical. The 95% CI is of the form $\bar X \pm t^*S/\sqrt{n},$ where $t^*$ cuts 2.5% from the upper tail of $\mathsf{T}(\nu= n-1)$. By contrast, the standard CI for the population variance $\sigma^2$ uses the chi-squared distribution which is not symmetrical. 
The 95% CI is of the form $\left(\frac{(n-1)S^2}{U}, \frac{(n-1)S^2}{L}\right),$ where $L$ and $U$ cut 2.5% of the probability from the lower and upper tails, respectively, of $\mathsf{Chisq}(\nu = n-1).$ Unless your bootstrapping procedure corrects for the bias in the case of the variance (with its skewed distribution), you cannot expect an accurate result.

2) It is not exactly 'fair' to compare CIs (using Student's t and chi-squared distributions) based on the assumption data are normal with results of nonparametric bootstrap CIs. The assumption that data are normal provides 'information' for the t and chi-squared intervals that is not used in nonparametric bootstrapping.

3) In making bootstrap CIs for variation, I have found it better to find CIs for $\sigma$ rather than $\sigma^2,$ possibly because the former has the same units as the data. Also, for scale parameters, I have found that it often works better to bootstrap ratios rather than differences.

4) Finally, along the lines of @HansEngler's suggestion, I will generate $n = 1000$ observations from $\mathsf{Norm}(\mu = 100, \sigma=10),$ find the chi-squared CI for $\sigma$ and compare it with a bias-corrected nonparametric bootstrap. First, here are my fake data and the corresponding standard CI for $\sigma,$ which turns out to be $(9.55, 10.43).$ [I have provided set.seed statements, so that you can replicate the exact simulations I have used.]

set.seed(1234)
n = 1000; mu = 100; sg = 10; x = rnorm(n, mu, sg)
a = mean(x); s = sd(x); a; s
## 99.73403  # sample mean
## 9.973377  # sample SD
# CI for sg
UL = qchisq(c(.975,.025), 999); UL
## 1088.487 913.301
sqrt((n-1)*var(x)/UL)
## 9.554619 10.430810  # CI for sg (includes 9.973, as it must)

Now for the (bias corrected) nonparametric bootstrap CI: I will bootstrap the ratio $R = S/\sigma.$ If I knew the distribution of $R,$ then I could find $L$ and $U$ with $P(L \le R = S/\sigma \le U) = 0.95$ so that $P(S/U \le \sigma \le S/L) = .95$ and a 95% CI for $\sigma$ would be of the form $(S/U,\, S/L).$ By bootstrapping $R,\,$ I can estimate $U$ and $L.$ I use the observed $s = 9.97$ temporarily as a proxy for unknown $\sigma.$ Suffixes .re indicate bootstrapped quantities. [On this site with relatively few experienced R users, I try to use only the most fundamental R functions; I will leave it to you to write more elegant R code, which is obviously possible.]

set.seed(1235)
B = 10^5; r = numeric(B)
for(i in 1:B) {
  s.re = sd(sample(x,n,repl=T))
  r[i] = s.re/s
}
L.re = quantile(r, .025); U.re = quantile(r, .975)
c(s/U.re, s/L.re)
##    97.5%      2.5%
## 9.537866 10.462202

So the nonparametric bootstrap CI is $(9.54, 10.46),$ which is not a bad match for the standard chi-squared CI $(9.55, 10.43)$ obtained above. [The large sample size ($n$ = 1000) has mostly obviated my comment in (2) above, which still stands for smaller $n$. Even so, it is possible that the information that the data are normal accounts for the fact that the normal-based CI is a bit shorter.]

Note: If you do a parametric bootstrap CI, you may get closer to the result for the chi-squared CI. In parametric bootstrapping, one re-samples from a parametric distribution (here normal) with parameters suggested by the data (rather than from the data themselves). In particular, in the code above one would substitute the code s.re = sd(rnorm(n, a, s)) for the line with sample. With this change and bootstrap seed 1066, I got the 95% parametric bootstrap CI $(9.55, 10.43).$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682306408882141, "perplexity": 461.2955337924894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00408.warc.gz"}
http://math.stackexchange.com/questions/266794/how-do-i-read-this-question/266799
# Introduction In Basic Algebra I, I am struggling with fully understanding the following exercise: Show that $S\overset{\alpha}{\to}T$ is injective if and only if there is a map $T\overset{\beta}{\to}S$ such that $\beta\alpha=1_S$, surjective if and only if there is a map $T\overset{\beta}{\to}S$ such that $\alpha\beta=1_T$. In both cases, investigate the assertion: if $\beta$ is unique then $\alpha$ is bijective. # My Problem I am struggling only with the bold portion. (I have written proofs by contradiction for the other aspects of the question.) What confuses me specifically is this: • What is this question really asking? Is it saying, "What happens when $\beta$ is unique when both $\beta\alpha=1_S$ and $\alpha\beta=1_T$?" or is it saying, "What happens when $\beta$ is unique and either $\beta\alpha=1_S$ or $\alpha\beta=1_T$ is true?" # Remarks As you can see, my real problem here is understanding precisely what is being asked. If it is asking, the first (both $\alpha\beta=1_T$ and $\beta\alpha=1_S$ are true), then we're simply constructing the very definition of a bijection. If it's asking the latter, I don't know what's going on . . . Are we somehow still constructing a bijection? Can you all give me help on reading questions such as this? - That an injective map has a left inverse is only true if its domain is nonempty. –  Michael Greinecker Dec 29 '12 at 21:01 I parse this as follows: The bold sentence refers separately to each of the two statements. So expanded out, this would be: 1a. Show that ... is injective if and only if ... Show also that if $\beta$ is unique then $\alpha$ is bijective. 1b. Show that ... is surjective if and only if... Show also that if $\beta$ is unique then $\alpha$ is bijective. - Why do you parse this this way? Why is this correct when the other interpretation is not? I'm not saying you're wrong. I'm genuinely asking those questions because I don't see why this is the intended meaning of the question. –  000 Dec 29 '12 at 1:00 Fair question. I replace 'both' with 'each', where the claim is that the statement "in both cases" is true iff "each case" is true separately. –  AKE Dec 29 '12 at 1:04 Could you explain what the mathematical consequences of your interpretation are? I am not seeing any and it makes the question seem null. I am most likely wrong as I am heavily confused. I think it is interpreted the other way because that appears to be the only way of making a sensible question. I mean, Jacobson (the author) even puts a definition of bijections in terms identical to what the other interpretation says: "$S\overset{\alpha}{\to}T$ is bijective if and only if there exists a map $T\overset{\beta}{\to}S$ such that $\beta\alpha=1_S$ and $\alpha\beta=1_T$." –  000 Dec 29 '12 at 1:09 Jacobson (in my opinion) is not the clearest author, and I've also found understanding exactly what he means a tad challenging (frustrating?) Do you want to take this into a chat room? It may be quicker and easier there. –  AKE Dec 29 '12 at 1:17 Transcript resolving the problem. –  AKE Dec 29 '12 at 2:15 # Central Matter With the help of AKE, I was able to figure out what's being said here. The crux is this: $$\beta \text{ is unique} \iff |S|=|T| \iff \alpha \text{ is bijective},$$ which quickly implies that if $\alpha$ is bijective in either case. # Elaboration and Specifics For the mapping $\alpha$ wherein $\beta\alpha=1_S$, we have that $\alpha$ is injective. However, $\beta$ can be any map $T\to S$ such that all the elements of $T$ map to $S$. 
This means that $\beta$ acts not just on the elements $\alpha(s)$; it acts on all elements of $T$. As a result, there are, in general, elements in $T$ which can be mapped to any $s\in S$. Thus, there are many possible maps $\beta$; we can create a new one simply by changing what a given $t\in T$ which is not $\alpha(s)$ maps to. Now, the issue is this: We are supposing $\beta$ is unique. This means there cannot be elements in $T$ which are not equal to $\alpha(s)$. If there were, then $\beta$ would cease to be unique for the reason outlined just above. Hence, we have that $\alpha$ is also surjective. Therefore, $\alpha$ is bijective. $\blacksquare$ For the mapping $\alpha$ wherein $\alpha\beta=1_T$, we have a similar situation. Using the same line of logic, we see that there cannot be $s\in S$ such that $s\ne \beta(t)$. If there were, $\beta$ would cease to be unique. Thus, we have that $\alpha$ is injective. Therefore, $\alpha$ is bijective. $\blacksquare$ - Hint: You might want to start with extreme cases, such as when $S$ or $T$, but not both, has only one element. (I take the problems to be whether "$\alpha$ is bijective if and only if there is a unique map $\beta$ such that $\beta\alpha=1_S$" and similarly for the other part.) –  Michael E2 Dec 29 '12 at 5:23 @MichaelE2 That's a very good point. This question has indeed taught me to examine cases in particular detail. –  000 Dec 29 '12 at 5:49
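A small finite illustration of the first argument (a sketch added here, not part of the original question or answers; the sets and map are made up): if $\alpha$ is injective but misses part of $T$, the left inverse $\beta$ is far from unique, and one can simply enumerate the possibilities.

```python
from itertools import product

# alpha: S -> T, injective but not surjective (3 is not in the image).
S = [1, 2]
T = [1, 2, 3]
alpha = {1: 1, 2: 2}

# Enumerate every map beta: T -> S and keep those satisfying beta(alpha(s)) = s.
left_inverses = []
for values in product(S, repeat=len(T)):
    beta = dict(zip(T, values))
    if all(beta[alpha[s]] == s for s in S):
        left_inverses.append(beta)

print(len(left_inverses))   # 2: the missed element 3 may be sent to either 1 or 2
for beta in left_inverses:
    print(beta)
```

As soon as a single element of $T$ lies outside the image of $\alpha$, there are at least $|S|$ choices for where $\beta$ sends it, so uniqueness of $\beta$ forces $\alpha$ to be surjective as well.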
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8595277070999146, "perplexity": 225.4869952147639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345758389/warc/CC-MAIN-20131218054918-00088-ip-10-33-133-15.ec2.internal.warc.gz"}
https://mathfoolery.wordpress.com/category/geometric-series/
## Archive for the ‘Geometric series’ Category ### What is so Geometric about the Geometric Series January 17, 2010 Behold the picture: Exhibit number 2, details on Wikipedia: Exhibit number 3, calculation of the area under a power curve by Fermat, pilfered from Analysis by Its History, by E.Hairer and G.Wanner (Springer 1996), Sect. I.3, pp. 33-34: From this picture it looks like the area we want to calculate is sandwiched between the sums of two geometric series: $B^{k+1}(1-R)(R^k+R^{2k+1}+\dots) \le A(B) \le B^{k+1}(1-R)(1+R^{k+1}+R^{2k+2}+\dots)$ After summing the series we get $R^k B^{k+1}(1-R)/(1-R^{k+1}) \le A(B) \le B^{k+1}(1-R)/(1-R^{k+1})$ At $R=1$ the upper and the lower bounds come together. Unfortunately, $(1-R)/(1-R^{k+1})$ is undefined for $R=1$ since it reduces to $0/0$. But fortunately Fermat knew how to make sense out of the undefined expressions like this, he knew how to differentiate, so he could figure out that $A(B)=B^{k+1}/(k+1)$. Had he noticed this peculiar link between differentiation, i.e., making sense of $0/0$, and integration, i.e., calculating areas? I have no doubt he had, Fermat was Fermat. Had he pointed it out to his high-esteemed correspondents? I don’t know. But it was up to Isaac Barrow to expose differentiation and integration as two operations inverse to each other, and up to Newton, Leibniz and Bernoulli brothers to put his remarkable observation to work.
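A one-line completion of the "$0/0$" step above (a paraphrase added here, not in the original post): since $1-R^{k+1}=(1-R)(1+R+R^2+\dots+R^k)$, we have

$\lim_{R\to 1}\frac{1-R}{1-R^{k+1}} = \lim_{R\to 1}\frac{1}{1+R+R^2+\dots+R^k} = \frac{1}{k+1},$

and the extra factor $R^k$ in the lower bound tends to $1$, so both bounds squeeze the area to $A(B)=B^{k+1}/(k+1)$.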
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 8, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9364420771598816, "perplexity": 1061.8060508756137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687833.62/warc/CC-MAIN-20170921153438-20170921173438-00014.warc.gz"}
http://www.newser.com/story/152097/a-fertile-uterus-is-mathematically-perfect.html
A Fertile Uterus Is Mathematically ... Perfect
Gynecologist finds the Golden Ratio in fertile uteri
By Dustin Lushing,  Newser Staff
Posted Aug 19, 2012 4:36 PM CDT
(Newser) – The world's most mysterious number has popped up in the uterus. Known as the Golden Ratio, 1.618 is hailed by devotees as the formula for perfect natural beauty. Fanatics say the most aesthetically pleasing rectangle and the most attractive smiles adhere to the numeral. Now Jasper Verguts, a Belgian gynecologist, has discovered the ratio inside the female reproductive system, reports the Guardian. Verguts measured the organs of 5,000 women using ultrasound and found that the ratio of length to width in the most fertile women came out to 1.6. "This is the first time anyone has looked at this, so I am pleased it turned out so nicely," said Verguts. The ratio was initially derived from the Fibonacci sequence, a chain of numbers in which every figure is the sum of the previous two. Want to see it on yourself? The distance between the first and second knuckles on each finger is about 1.618 times the distance between the second and third.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8585818409919739, "perplexity": 1766.8385100998144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690016.68/warc/CC-MAIN-20170924115333-20170924135333-00059.warc.gz"}
https://digitalcommons.lsu.edu/gradschool_disstheses/5610/
## LSU Historical Dissertations and Theses

1993

Dissertation

#### Degree Name

Doctor of Philosophy (PhD)

#### Department

Educational Theory, Policy, and Practice

David Kirshner

#### Abstract

Competence in algebra requires knowledge of parsing and transformations. The parsing component specifies the structure of algebraic expressions based on conventional operation hierarchies. The transformational component specifies the real number properties used to transform algebraic expressions into equivalent forms. In standard models of mathematical cognition, both components are conceived as strictly propositional domains (e.g., Anderson, 1983a). Kirshner (1989b) demonstrated that parsing knowledge initially is apprehended in a visual (non-propositional) modality. This dissertation extends that visual analysis beyond parsing to the transformational component. It is proposed that transformations range on a continuum from highly propositional (e.g., $x(y^{-1}) = \frac{x}{y}$, $x^2 - y^2 = (x - y)(x + y)$) to highly visual (e.g., $(x^y)^z = x^{yz}$, $(xy)^n = x^n y^n$). It is hypothesized that visual rules are: (1) easier to apprehend initially, but (2) less easily constrained to their proper contexts of application. Thus common errors like $\frac{a+b}{a+c} = \frac{b}{c}$ and $\sqrt[n]{a+b} = \sqrt[n]{a} + \sqrt[n]{b}$ are analyzed as overgeneralizations of visual rules like $\frac{ab}{ac} = \frac{b}{c}$ and $\sqrt[n]{ab} = \sqrt[n]{a}\,\sqrt[n]{b}$, respectively. Two groups of algebra neophytes were taught a mixture of visual and nonvisual rules. One group was taught using ordinary algebraic notation; the other using a syntactic tree notation which distorts the visual structure of ordinary notation, forcing propositional-level learning for all rules. Each group was evaluated using recognition tasks, which were applications of rules that had been taught, and rejection tasks, to which no rule applied though one rule nearly applied. Recognition tasks assess the students' initial rule acquisition. Rejection items invite overgeneralization of rules. In tree notation the two rule types were equally difficult to recognize, but in ordinary notation visual rules were significantly easier to recognize than propositional rules. For rejection tasks in tree notation, visual and propositional items were equally difficult. In ordinary notation the visual items tended to be more difficult to constrain than propositional items, though because of basement effects these differences were significant only in some cases. The visual salience construct is more fully analyzed in the Conclusions section.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9618173837661743, "perplexity": 4018.625868225278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991693.14/warc/CC-MAIN-20210512004850-20210512034850-00491.warc.gz"}
http://math.stackexchange.com/questions/191348/how-to-solve-this-system-in-matlab
# How to solve this system in matlab? \begin{align} x_1 &= 20000+0.5x_2+0.1x_3 \\ x_2 &= 40000+0.2x_1+0.6x_3 \\ x_3 &= 20000+0.1x_1+0.25x_2 \end{align} I want to write the system as $Ax=b$, what will then $A$, $b$ and $x$ be? I suppose $x$ should be $[x_1, x_2, x_3]$ but then I must solve for all of the variables? ## Update Did I formulate the system correctly as $Ax=b$? $$A=\begin{pmatrix} -1 & 0.5 & 0.1 \\ 0.2 & -1 & 0.6 \\ 0.1 & 0.25 & 1 \end{pmatrix}$$ $$b=\begin{pmatrix} -20000 \\-40000 \\-20000 \end{pmatrix}$$ and $$x=\begin{pmatrix} x1 \\ x2 \\x3 \end{pmatrix}$$ ## Update 2 I think I got it right, did it this way in matlab: >> A=[-1 0.5 0.1;0.2 -1 0.6;0.1 0.25 -1] A = -1.0000 0.5000 0.1000 0.2000 -1.0000 0.6000 0.1000 0.2500 -1.0000 >> b=[-20000 -40000 -20000]' b = -20000 -40000 -20000 >> x=A\b x = 1.0e+004 * 6.5248 8.1135 4.6809 >> - You will need to form $A$ and $b$ by hand. You can then use $x=A/b$ to solve the linear system $Ax=b$. – Daryl Sep 5 '12 at 11:02 @Daryl Thank you for the comment. I followed your advice so you can inspect my update that I can continue to develop intosomething that I can load into matlab. – Dac Saunders Sep 5 '12 at 11:24 Yes, except $A_{33}$ should be $-1$, not $1$. Then / will do the job for you. For help, at the MATLAB prompt, type help mldivide. – Daryl Sep 5 '12 at 11:37 @Daryl Of course, it should be -1. Now I understand this part and can go on with formulating it in matlab. Thanks for the help. – Dac Saunders Sep 5 '12 at 12:13 EDIT: I had the wrong operator. I have updated all operators to \ which is the correct operator to solve $Ax=b$. The operator / solves $A^Tx=b$. You will need to form $A$ and $b$ by hand. You can then use x=A\b to solve the linear system $Ax=b$. Your matrix $A$ is almost correct. The $(3,3)$ entry should be $-1$, not 1. For help with the \ operator, at the MATLAB prompt, type help mldivide. - Thank you for the answer. I exercised the solution in matlba and updated the question with the numbers I got that I also could verify fit the equations. So it must be right. – Dac Saunders Sep 5 '12 at 14:15 sol1 = inv(A)*b; it's the most simplest question - @M-AskmanYou definitely want to use the '\' operator for solving standard linear systems. The use of 'inv()' is rather costly (see example in reference)! mathworks.co.uk/help/techdoc/ref/inv.html – vanguard2k Sep 5 '12 at 12:06
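For readers without MATLAB, the same system can be cross-checked in Python with NumPy (an addition here, not part of the original thread; `numpy.linalg.solve` plays the role of MATLAB's backslash operator):

```python
import numpy as np

# Coefficient matrix and right-hand side in the A x = b form derived in the question.
A = np.array([[-1.0,  0.5,  0.1],
              [ 0.2, -1.0,  0.6],
              [ 0.1,  0.25, -1.0]])
b = np.array([-20000.0, -40000.0, -20000.0])

x = np.linalg.solve(A, b)
print(x)                      # approximately [65248., 81135., 46809.]
print(np.allclose(A @ x, b))  # True: the solution satisfies the system
```

The values match the MATLAB output `1.0e+004 * [6.5248 8.1135 4.6809]`, and, as noted in the comments, a direct solver is preferable to forming `inv(A)` explicitly.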
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.995548665523529, "perplexity": 807.0828246047909}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828286.80/warc/CC-MAIN-20160723071028-00242-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.elitedigitalstudy.com/10563/in-the-hcl-molecule-the-separation-between-the-nuclei-of-the-two-atoms-is-about-127-a-1-a-10-10-m
In the HCl molecule, the separation between the nuclei of the two atoms is about 1.27 Å (1 Å = $$10^{-10}$$ m). Find the approximate location of the CM of the molecule, given that a chlorine atom is about 35.5 times as massive as a hydrogen atom and nearly all the mass of an atom is concentrated in its nucleus.

Asked by Pragya Singh | 1 year ago |  129

##### Solution :-

Given:

mass of hydrogen atom = 1 unit

mass of chlorine atom = 35.5 units (as a chlorine atom is about 35.5 times as massive as a hydrogen atom)

Let the centre of mass lie at a distance x from the chlorine atom.

Then the distance of the centre of mass from the hydrogen atom = 1.27 – x

Taking the centre of mass of HCl as the origin, with the hydrogen atom on one side and the chlorine atom on the other, the centre-of-mass condition gives

$$\dfrac{-m(1.27 - x) + 35.5mx}{m + 35.5m} = 0$$

-m(1.27 – x) + 35.5mx = 0

-1.27 + x + 35.5x = 0

36.5x = 1.27

Therefore, x = $$\dfrac{1.27}{36.5}$$ = 0.035 Å

The centre of mass lies at 0.035 Å from the chlorine atom.

Answered by Abhisek | 1 year ago

### Related Questions

#### Separation of Motion of a system of particles into motion of the centre of mass and motion about the centre of mass.

Separation of Motion of a system of particles into motion of the centre of mass and motion about the centre of mass:

(i) Show $$p_i = p_i' + m_iV$$, where $$p_i$$ is the momentum of the ith particle (of mass $$m_i$$) and $$p_i' = m_iv_i'$$. Note $$v_i'$$ is the velocity of the ith particle with respect to the centre of mass. Also, verify using the definition of the centre of mass that $$\sum p_i' = 0$$.

(ii) Prove that $$K = K' + \dfrac{1}{2}MV^2$$, where K is the total kinetic energy of the system of particles, K′ is the total kinetic energy of the system when the particle velocities are taken relative to the centre of mass, and $$\dfrac{1}{2}MV^2$$ is the kinetic energy of the translation of the system as a whole.

(iii) Show $$L = L' + R \times MV$$, where $$L' = \sum r_i' \times p_i'$$ is the angular momentum of the system about the centre of mass with velocities considered with respect to the centre of mass. Note $$r_i' = r_i - R$$; the rest of the notation is the standard notation used in the lesson. Note that L′ and $$R \times MV$$ can be said to be angular momenta, respectively, about and of the centre of mass of the system of particles.

(iv) Prove that $$\dfrac{dL'}{dt} = \sum r_i' \times \dfrac{dp_i'}{dt}$$. Further prove that $$\dfrac{dL'}{dt} = \tau'_{ext}$$, where $$\tau'_{ext}$$ is the sum of all external torques acting on the system about the centre of mass. (Clue: apply Newton's third law and the definition of centre of mass. Consider that internal forces between any two particles act along the line connecting the particles.)

#### During rolling, the force of friction acts in the same direction as the direction of motion of the CM of the body.

Read each statement below carefully, and state, with reasons, if it is true or false;

(a) During rolling, the force of friction acts in the same direction as the direction of motion of the CM of the body.

(b) The instantaneous speed of the point of contact during rolling is zero.

(c) The instantaneous acceleration of the point of contact during rolling is zero.

(d) For perfect rolling motion, work done against friction is zero.

(e) A wheel moving down a perfectly frictionless inclined plane will undergo slipping (not rolling) motion.

#### A cylinder of mass 10 kg and radius 15 cm is rolling perfectly on a plane of inclination 30°.

A cylinder of mass 10 kg and radius 15 cm is rolling perfectly on a plane of inclination 30°. The coefficient of static friction $$\mu_s = 0.25$$.
(a) How much is the force of friction acting on the cylinder?

(b) What is the work done against friction during rolling?

(c) If the inclination θ of the plane is increased, at what value of θ does the cylinder begin to skid, and not roll perfectly?

#### A solid disc and a ring, both of radius 10 cm are placed on a horizontal table simultaneously

A solid disc and a ring, both of radius 10 cm are placed on a horizontal table simultaneously, with an initial angular speed equal to $$10\pi$$ rad s$$^{-1}$$. Which of the two will start to roll earlier? The coefficient of kinetic friction is $$\mu_k = 0.2$$.
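A short numerical check of the HCl centre-of-mass result above (a sketch added here, not part of the original solution; masses are in units of the hydrogen mass, distances in angstroms):

```python
# Centre of mass of HCl, measuring position along the bond from the H nucleus.
m_H, m_Cl = 1.0, 35.5        # relative atomic masses
d = 1.27                     # internuclear separation in angstroms

x_H, x_Cl = 0.0, d
x_cm = (m_H * x_H + m_Cl * x_Cl) / (m_H + m_Cl)

print(round(x_cm, 3))        # ~1.235 angstroms from the hydrogen nucleus
print(round(d - x_cm, 3))    # ~0.035 angstroms from the chlorine nucleus
```

The centre of mass sits almost on top of the much heavier chlorine nucleus, in agreement with the 0.035 Å figure obtained algebraically.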
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896406888961792, "perplexity": 447.8478882867821}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00846.warc.gz"}
http://alexanderpruss.blogspot.com/2012/11/infinity-and-probability.html
## Monday, November 5, 2012 ### Infinity and probability You are one of a countably infinite number of blindfolded people arranged in a line. You have no idea where in the line you are. Each person tosses a independent fair die, but doesn't get to see or feel the result. Case 1: What is the probability you tossed a six? Obviously 1/6. Case 2: You are informed that, surprisingly, all the even-numbered people tossed sixes, and all the odd numbered people tossed something else. What is the probability you tossed a six? It sure seems like it's no longer 1/6. For suppose it's still 1/6. Then when you learned that all the even-numbered people tossed sixes and the odd-numbered ones didn't, you thereby received evidence yielding probability 5/6 that you were one of the odd-numbered ones. But surely you didn't. After all, if there were ten people in the line and you learned that all but the tenth tossed a six, that wouldn't give you probability 5/6 that you were the tenth! But as long as infinitely many people tossed six and infinitely many didn't (and with probability one, this is true), there is always some ordering on the people such that relative to that ordering you can be correctly informed that every second person tossed a six. That puts the judgment in Case 1 into question. Moreover, why should the alternating ordering in Case 2 carry any weight that isn't already carried by the fact which you know ahead of time (with probability one) that there are equally many sixes as non-sixes? But if the judgment in Case 1 is wrong, then were we to find out we are in an infinite multiverse, that would undercut our probabilistic reasoning which assumes we can go from intra-universe chances to credences, and hence undercut science. Thus, a scientific argument that we live in an infinite multiverse would be self-defeating. I don't exactly know what to make of arguments like this. residentoftartarus said... I think the correct answer to case 2 is 1/2 via the following reasoning: A = You throw a six. B = Your place in line is even-numbered. C = All the even-numbered people tossed sixes. Now, P(A|C) = P(B|C) since (A & C) is the same event as (B & C), and P(B|C) = P(B) since B and C are independent events. Ergo, P(A|C) = P(B|C) = P(B) = 1/2. Similarly, in the case of ten people in line we have the following computation: A = You throw a six. B = You're the tenth person in line C = Everyone but the tenth person tossed a six. Then P(A|C) = P(B|C) = P(B) = 1/10 via the exact same reasoning. Indeed, it's quite obvious that (A & C) is the same event as (B & C) and we can manually check that B and C are independent events in this case by verifying that P(B & C) = P(B)P(C). Alexander R Pruss said... What I am not clear on is whether P(B)=1/2 in case 2. residentoftartarus said... I think your confusion stems from not clearly differentiating between two different possibilities. In case 2 of your entry, the even-numbered people toss sixes subsequent to everyone in line being ordered in some way, in which case I believe the analysis I gave in the previous comment follows. However, if we change the situation in case 2 so that the blindfolded people are intelligently ordered subsequent to their toss in order to achieve the desired result of having all the even-numbered people toss a six (insofar as this is possible) then I believe the probability in that case is 1/6 via the following reasoning: A = You throw a six. B = All the even-numbered people tossed sixes. C = An infinite number of people tossed a six. 
Now, P(B) = P(C) since B and C are the same event, but then it follows that P(B) = 1 since P(C) = 1. Moreover, P(A|B) = P(A) since P(B) = 1, hence P(A|B) = 1/6. Alexander R Pruss said... By "ordering" I meant an abstract total ordering relation. Set-theoretically speaking, all of the orderings exist. There is the standard numerical ordering. But there are infinitely many others. residentoftartarus said... As per my last comment, if B = {your place in line is even-numbered} then P(B) will depend on your setup. However, if the blindfolded people are randomly ordered independent of anything else then it seems clear to me that P(B) should equal 1/2. residentoftartarus said... If you just mean some abstract total ordering then I am quite confident that the probability of any individual being even-numbered should is 1/2; however, I would concede that any mathematical demonstration of this would involve some technical details. residentoftartarus said... Omega = the set of all total orderings on the positive integers = the set of all bijections from the positive integers to the positive integers. F = The sigma-algebra generated by the subsets {g in Omega | g(i) is divisible by j}, where i and j are any positive integers. Now, P({g in Omega | g(i) is divisible by j}) = 1/j induces the natural probability measure on F. But then it follows from the natural probability measure that P({g in Omega | g(i) is divisible by 2}) = 1/2 for any positive integer i. Thomas Larsen said... "Thus, a scientific argument that we live in an infinite multiverse would be self-defeating." There are good arguments to the effect that, on an unrestricted multiverse, it's far more likely than not that I am a Boltzmann brain - which would undermine any scientific basis I might have for an unrestricted multiverse in the first place. Gill said... Either I missed something or there isn't any puzzle here. In case 1, 1/6 is correct only if the die is assumed to be symmetrical. I actually doubt whether we are justified to make that assumption. But maybe in the absence of evidence to the contrary, we are. In case 2, however, since people who tossed even all got sixes, we do have strong evidence that the die isn't symmetrical. And we have no evidence as to how it isn't, which is crucial, since the different ways in which the die might be asymmetrical would give rise to different probability distributions. So in that case all bets should be off. We simply can't make warranted probability judgments. Did I miss anything? Alexander R Pruss said... Gill: I am assuming, throughout, that we are certain that it's a fair die, i.e., one that has equal chances of landing on each side. Mr Larsen: 1. Physicists are trying to find measures on multiverses where it's not likely that we're Boltzmann brains. Maybe they'll succeed. 2. But in any case, if the line of thought in this post pans out, then the Boltzmann brain argument fails--as probabilistic reasoning breaks down entirely in such contexts. residentoftartarus: One problem is that in the case where we have the sequence where even numbered items are sixes and odd-numbered ones are non-sixes, there is an alternate ordering, where in the exact same physical situation instead the items whose positions are divisible by 3 are sixes. For instances, the ordering can be given by 0, 1, 3, 2, 5, 7, 4, 9, 11, 6, .... Let B = your place in line is even in the standard ordering. B1 = your place in line is divisible by three in the alternate ordering. If P(B)=1/2, then by parity of reasoning P(B1)=1/3. 
But B holds if and only if B1 does. Now maybe the standard ordering is somehow probabilistically special. But why? residentoftartarus said... Alex, If you change B and B1 as follows: B = {your place in line is even} = {g in Omega | g(1) is divisible by two} B1 = {your place in line is divisible by three} = {g in Omega | g(1) is divisible by three} Then P(B) = 1/2 and P(B1) = 1/3 according to the probability measure I gave earlier and it's not the case that B is equivalent to B1 in event space so that there's no contradiction. The way this works is that you privilege some fixed numbering of the blindfolded people such that you are given the number 1, and then identify the set of all numberings of the blindfolded people as the set of all bijections from the positive integers to itself. Alexander R Pruss said... But why should you privilege some ordering? residentoftartarus said... You privilege some numbering of the blindfolded people so that you can identify the set of all numberings of the blindfolded people as the set of all bijections from {positive integers} to {positive integers}. So, consider some function f from {blindfolded people} to {positive integers} such that f(you) = 1 (note that f acts as a numbering of the blindfolded people). Then for every bijection g from {positive integers} to {positive integers} we have a numbering (g o f) from {blindfolded people} to {positive integers}. Indeed, this is how we identify the aforementioned set of all numberings with the relevant bijections. Of course, it hardly matters what numbering we privilege at the beginning of our analysis. Alexander R Pruss said... Let's make this a little more concrete. Suppose that the people are arranged in an infinite two-dimensional field, with clearly defined x and y axes. Each axis defines an ordering. Suppose that ordered by x coordinates the pattern is: six non-six six non-six six non-six And ordered by y coordinates the pattern is: six non-six non-six six non-six non-six non-six non-six ... You don't know where in the field you are. What is your probability of six? Well, going by the reasoning in Case 2 as applied to the x-axis ordering, your probability of six is 1/2. But applying this reasoning to the y-axis ordering, your probability of six is 1/3. And quite possibly, if you order by Euclidean distance from some origin point, you get an even different probability. So which is it? residentoftartarus said... Alex, In your field example the x and y axes are giving you two different random variables X and Y and the analysis proceeds as follows: P({you tossed a six}) = P({X(1) is divisible by two} or {Y(1) is divisible by three}) = P({X(1) is divisible by two}) + P({Y(1) is divisible by three}) - P({X(1) is divisible by two} and {Y(1) is divisible by three}) = 1/2 + 1/3 - 1/6 = 2/3. residentoftartarus said... Alex, Sorry, the analysis in my last comment is wrong. Here's the setup for your field example: A = You throw a six. B = X(1) is a multiple of two. C = Y(1) is a multiple of three. D = X^-1({multiples of two}) = Y^-1({multiples of three}) are precisely the people who throw sixes. Now, P(A|D) = P(B|D) = P(C|D) since (A and D) = (B and D) = (C and D) just as before. However, B and C are not independent of D since in that case 1/2 = P(B) = P(B|D) = P(C|D) = 1/3. In other words, the lesson of your field example is that the independence assumption I made in my first comment does not necessarily, which is fine by me since I always found that assumption to be somewhat problematic. 
The computation of P(A|D) is still an open problem for me at this point.
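The infinite case is exactly what is contested in the post, but the finite intuition the commenters appeal to in Case 2 is easy to simulate (a sketch added here, with an arbitrary toy line of 4 people; nothing below resolves the infinite puzzle): condition on sixes appearing exactly at the even positions, place "you" at a uniformly random position, and the conditional probability that you tossed a six is 1/2, not 1/6.

```python
import random

random.seed(0)
people = 4                 # positions 1..4; "even" means positions 2 and 4
target_trials = 5000
hits = trials = 0

while trials < target_trials:
    rolls = [random.randint(1, 6) for _ in range(people)]
    # Keep only outcomes matching Case 2: sixes at exactly the even positions.
    if all((rolls[i] == 6) == ((i + 1) % 2 == 0) for i in range(people)):
        me = random.randrange(people)   # you are at a uniformly random position
        hits += rolls[me] == 6
        trials += 1

print(hits / trials)       # close to 0.5, not 1/6
```

In the finite toy model the answer 1/2 simply reflects the uniform distribution over positions; whether any analogue of that uniformity makes sense for a countably infinite line is the open question of the post.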
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067057967185974, "perplexity": 946.2896498820274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/50284/anonymous?tab=summary
# anonymous less info reputation 510 bio website location Florida age member for 2 years, 1 month seen Jun 12 '13 at 4:20 profile views 107 17 What is the difference between writing f and f(x)? 6 I can't find the contradiction in this proof 5 Proving a theorem about continuity property with contradiction. 5 What is the difference between exponentials and powers? 4 Proof of \$\{f+g # 1,973 Reputation +10 Problem Involving Hardy-Littlewood Maximal Function +10 What is the norm of a complex number? +10 Two trivial questions in general topology +5 Total variation of complex measure is finite # 5 Questions 6 Total variation of complex measure is finite 5 Measurability of a certain set in Falcolner's Geometry of Fractal Sets 3 Improper integral evaluation 1 Independence of a certain Linear combination of random variables 0 Interpretation of functional equation of dedekind eta function # 52 Tags 64 real-analysis × 41 9 functions × 3 24 calculus × 6 8 complex-analysis × 6 23 analysis × 11 8 linear-algebra × 4 14 limits × 8 8 proof-writing × 3 11 general-topology × 8 8 proof-strategy × 3 # 1 Account Mathematics 1,973 rep 510
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8137133717536926, "perplexity": 1990.3483482455383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447549662.85/warc/CC-MAIN-20141224185909-00055-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/a-paradox-in-plancks-law.243110/
# A paradox in Planck's Law? 1. Jul 2, 2008 ### cmos Hi all, Physical law: I understand the derivation of the Planck law for the blackbody spectrum and why it takes slightly different forms whether you are doing the analysis in the frequency domain or the wavelength domain. That is to say, you cannot simply invoke the Planck relation ($$E=h\nu=hc/\lambda$$) if you want to convert the final result between frequency and wavelength domains. My problem: What is the physical implication of this? Maybe another way of saying this is, do we detect wavelength or frequency? An example: Consider a blackbody at 6000 K. From the Planck law (derived in the frequency domain), the spectral radiance of emitted light will peak at 353 THz. From the Plack law (derived in the wavelength domain), the spectral radiance of emitted light will peak at 483 nm. Clearly the Planck relation does not prevail. As a check, my results correspond with those obtained from the displacement law of Wien. The two numbers above correspond to 1.46 eV and 2.57 eV, respectively. Suppose I had a friendly and very much alive cat put into some ungodly contraption, a box. There are two photon counters, A and B. A will accept only photons of 1.46 eV energy and B will only accept photons of 2.57 eV (maybe put in a plus or minus several meV for all us Heisenberg buffs). The contraption is designed so that once activated, once detector A receives 1000 photons, it will crack a vial of cyanide thus ending the life of our friendly companion. However, if detector B should receive 1000 photons first, then detector A will be deactivated and our friendly companion will happily survive and join us for some future thought experiment. So, does Planck's cat live or die? 2. Jul 2, 2008 ### George Jones Staff Emeritus It's just integration by substitution. I don't know if this will help. Last edited: Jul 2, 2008 3. Jul 2, 2008 ### cmos Dr. Jones, Your comments are always welcomed; however, I think you misunderstood my dilemma. I understand the mathematics to derive either form of the Planck law. One such way is simply by the link you posted; you show how you transform between the two domains. The problem I presented is in the physical interpretation. Please look at the numbers in my first post, they speak for themselves. In my "paradox," I outline how the peaks obtained from the two domains lead to two vastly different results. Should our kitten live or die? I feel that after 107 years, that this paradox must have been resolved; I have just been unable to ascertain the means by which to do so. 4. Jul 2, 2008 ### malawi_glenn I have never noticed that, are you sure you have done correct? 5. Jul 2, 2008 ### Zizy Using ln, this problem doesnt occur and peak also matches eye maximum sensitivity. Loose translation of a statement of my professor - this could be used as a proof that the god invented logarithm before humans. EDIT: Also, I do not think math thing behind has any physical significance. 6. Jul 2, 2008 ### gel If I understand you correctly, if the photon counters only accept photons of exactly the frequencies you state, then the experiment will never end because they will never detect any photons. If they accept a range of photons, you just multiply the Planck density by this width, being consistent between using the correct units (energy/wavelength/frequency) for the density and the width of the range. Then there's no problem, and it is all consistent because $$I(\nu,T)\,d\nu=-I(\lambda,T)\,d\lambda.$$ See Planck's law. 
A probability distribution, and its peak, is entirely dependent on the variable you express it in terms of. So, no paradox. edit: If you use logs as Zizy suggests then it just converts the non-linear relation between the frequency and wavelenth to a linear one, so the issue desn't arrive. However you still have to multiply the density by the range of log(frequency) or log(wavelength) that the sensor detects to get the probability. That's just what probability density means. Ah, I see you said maybe put in plus or minus a few meV. That's the whole issue, you have to (nothing to do with Heisenberg). And if the range is expressed in meV, then you should use the Planck distribution in meV (proportional to frequency). Last edited: Jul 2, 2008 7. Jul 2, 2008 ### cmos gel, I had a feeling that this problem could be resolved by the fact that we are dealing with distributions; however, I still do not accept your answer. I will pose the problem slightly different, but first some results I worked on today.... I based my original post on counting photons from the peaks of the Planck energy density distributions in the two domains. I realized soon thereafter that directly relating the peaks of these distributions to photon number was not valid. I have since come up with expressions for photon flux from a blackbody radiator. I list these in case I have made a mistake in my transformations: $$\dot n(\nu) = \frac{2\pi\nu^2}{c^2} \frac{1}{e^{h\nu/kT}-1}$$ $$\dot n'(\lambda) = \frac{2\pi c}{\lambda^4} \frac{1}{e^{hc/\lambda kT}-1}$$ Please note that I use the 'prime' symbol to denote that we are working in a different domain; it does not represent differentiation. Edit: The above expressions are for the flux through 2 pi steradians of space. Sadly, the results still give two different peaks. In the frequency domain: 199 THz. In the wavelength domain: 612 nm. Suppose I pose the "paradox" as so: Let detector A be a photon counter that works in the frequency domain while detector B is a photon counter that works in the wavelength domain. These detectors accept ALL photons and records the count in a histogram. After an hour a computer looks at the photon count at 199 THz from detector A and the count at 612 nm from detector B. If A is greater than B, then kitty lives, otherwise kitty dies. Thoughts? Last edited: Jul 2, 2008 8. Jul 2, 2008 ### cesiumfrog Isn't this just something really simple? Due to the inverse (non-linear) relationship of frequency and wavelength, consider two "equal chunks of frequency space" namely 1-2Hz and 3-4Hz. Now, if we convert to wavelength space, the first box is twice as large as the second box. Agreed? So why should it surprise you if the peak in frequency space doesn't quite match the peak in wavelength space? The OP's paradox is that saying is the same as saying "plus or minus a nm or so" for one detector but "plus or minus half a nm" for the other. No wonder the first detector seems to get too much signal in the wavelength picture: it's less selective. 9. Jul 2, 2008 Staff Emeritus I think that all the OP is saying is that the average of the reciprocal is not the reciprocal of the average. It has nothing really to do with QM, or Stat Mech, or really, anything except arithmetic. 10. Jul 2, 2008 ### cmos Which is why I amended the "paradox" at the end of post 7; so that we may look at the final count at the specified peak position. 11. Jul 2, 2008 ### cmos 12. Jul 2, 2008 Staff Emeritus There is no physical interpretation. It's just arithmetic. 
For example, equal sized bins in wavelength are not equal sized bins in frequency, so you don't expect the peaks to be in the same place in the two domains. 13. Jul 2, 2008 ### cesiumfrog 14. Jul 2, 2008 ### cmos If there is no physical interpretation, then you are clearly lacking in the physics. Also, your explanation does not resolve the apparent "paradox" I have posted. 15. Jul 2, 2008 ### cmos cesiumfrog, I suggest carefully re-reading the end of post #7 where I amend my question to properly make use of photon counting. None on this thread, thus far, have been able to explain this. 16. Jul 3, 2008 ### malawi_glenn I think this is the answer: $$I(\nu,T)\,d\nu=-I(\lambda,T)\,d\lambda$$ This gives you the true relation between the two distributions= $$I(\nu,T)=I(\lambda,T)\frac{\lambda ^2}{c}$$ you think it is: $$I(\nu,T)=I(\lambda,T)$$, which is wrong, since we are dealing with probabilty densities. Last edited by a moderator: Jul 3, 2008 17. Jul 3, 2008 Staff Emeritus Hmmm...one of us has a PhD in physics. The other is making an elementary arithmetic mistake, calling it a paradox, and refusing to believe the correct explanation. Which one of us is lacking in physics? 18. Jul 3, 2008 ### Redbelly98 Staff Emeritus This is an "apples vs. oranges" comparison. One number is the peak power emitted per unit frequency, and the other is the peak power emitted per unit wavelength. In other words, the two spectra are really of different quantities, Watts-per-Hz and Watts-per-nm, and have different peaks. 19. Jul 3, 2008 ### cmos Well then, I strongly question the rigor your institution put you through. 20. Jul 3, 2008 ### cmos In post # 7, I accounted for this (somewhat?) by expressing the Planck law in terms of (spectral) photon flux as opposed to spectral radiance. But this still does make sense, b/c now we are looking at flux-per-Hz and flux-per-nm. A peculiarity I can see falling out of this is that the true peak flux (as opposed to spectral flux) may not coincide with either of the peaks I previously mentioned. I also thought last night how one might make and invoke a spectral radiometer. Similar to a scattering experiment where you must measure over an interval $$d\theta$$, by using a prisim you would effectively measure of an interval $$d\lambda$$. This leads me to wonder if it even makes sense to talk about a "true peak flux." More on all of this when I have had time to think about it. Similar Discussions: A paradox in Planck's Law?
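The numbers quoted in the thread are easy to reproduce. The spectral-radiance peak in frequency solves $x = 3(1-e^{-x})$ with $x = h\nu/kT$, the peak in wavelength solves $y = 5(1-e^{-y})$ with $y = hc/\lambda kT$, and the photon-flux peaks quoted later in the thread (199 THz and 612 nm) come from the same equations with 2 and 4 in place of 3 and 5. A small Python sketch (added here, not from the thread; physical constants are standard SI values):

```python
import math

h, k, c = 6.62607e-34, 1.380649e-23, 2.99792458e8
T = 6000.0

def wien_root(a, x=1.0):
    # Solve x = a*(1 - exp(-x)) for the nonzero root by fixed-point iteration.
    for _ in range(200):
        x = a * (1.0 - math.exp(-x))
    return x

nu_peak = wien_root(3) * k * T / h          # peak of radiance per unit frequency
lam_peak = h * c / (wien_root(5) * k * T)   # peak of radiance per unit wavelength

print(nu_peak / 1e12)          # ~353 THz
print(lam_peak * 1e9)          # ~483 nm
print(c / nu_peak * 1e9)       # ~850 nm: not the same as the 483 nm peak

# Photon-flux versions discussed in post #7 (exponents 2 and 4):
print(wien_root(2) * k * T / h / 1e12)           # ~199 THz
print(h * c / (wien_root(4) * k * T) * 1e9)      # ~612 nm
```

The mismatch between $c/\nu_{peak}$ and $\lambda_{peak}$ is exactly the Jacobian effect described above: the densities are per unit frequency and per unit wavelength, so their maxima need not correspond under $\lambda = c/\nu$.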
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099194407463074, "perplexity": 1166.9777787359785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187144.60/warc/CC-MAIN-20170322212947-00648-ip-10-233-31-227.ec2.internal.warc.gz"}
https://cstheory.wordpress.com/category/mathematics/
## Projections onto a Convex Body August 23, 2010 Given a point ${x} \in {{\bf R}^d}$ and an affine subspace ${H \neq {\bf R}^d}$, the projection of ${x}$ onto ${H}$ is the point ${P_H(x)}$ such that, $\displaystyle \begin{array}{rcl} \vert \vert x - P_H(x) \vert \vert = \min \{ \vert \vert x - y \vert \vert ~\mid~ y \in H \} \end{array} .$ One can obviously define other notions of projection but the above is probably the most commonly used in geometry. If ${x \notin H}$ then the projected point ${P_H(x)}$ is the unique point in ${H}$ such that the vector ${P_H(x) - x}$ is orthogonal to ${H}$. It is also well known that projecting any segment to an affine subspace can only shrink its length. The proofs of these facts are easy to see. But in fact these facts are just corollaries of the following two results. Given any nonempty closed convex set ${C \subseteq {\bf R}^d}$ and a point ${x \in {\bf R}^d}$, there is a unique point ${P_C(x) \in C}$ such that, $\displaystyle \begin{array}{rcl} \vert \vert x - P_C(x) \vert \vert = \min \{ \vert \vert x - y \vert \vert ~ \mid ~ y \in C\} \end{array} .$ and, Let ${C}$ be a nonempty closed convex set. The mapping ${P_C : {\bf R}^d \rightarrow C}$ is a contraction, i.e. for any ${x,y \in {\bf R}^d}$ we have, $\displaystyle \begin{array}{rcl} \vert \vert P_C(x) - P_C(y) \vert \vert \leq \vert \vert x - y \vert \vert \end{array}$ Applying these results to affine spaces (which are nonempty closed convex sets) yields the results mentioned earlier. This projection that maps $x$ to $P_C(x)$ is known as the metric projection. The proofs of these facts are in order. ## Bounding the Volume of Hamming Balls August 13, 2010 In this post I will derive an oft-used inequality in theoretical computer science, using the Chernoff technique. The following function turns up in a few places in combinatorial geometry and elsewhere. $\displaystyle \begin{array}{rcl} \Phi_d(m) = \sum_{i=0}^d {m \choose i}. \end{array}$ Here we assume ${m \geq d}$ since otherwise the function is trivially seen to be equal to ${2^m}$. Here we will show that, $\displaystyle \begin{array}{rcl} \Phi_d(m) \leq \left ( \frac{me}{d} \right )^d \end{array}$ Two easy interpretations of ${\Phi_d(m)}$ are as follows. First, it counts the number of points in the Hamming cube ${{\bf H}^m = \{0,1\}^m}$ whose Hamming distance from the origin is at most ${d}$. This interpretation also allows us to derive the nice identity, $\displaystyle \begin{array}{rcl} \Phi_d(m) = \sum_{h=0}^d \sum_{l = 0}^h {s \choose l}{m - s \choose h - l}. \end{array}$ for any ${0 \leq s \leq m}$. In the above expression we obey the convention ${{p \choose q} = 0}$ when ${p < q}$. To see this, fix a point ${p \in {\bf H}^m}$ whose distance from the origin is ${s}$. In the sum, ${h}$ is the distance from ${p}$, and the inner summation counts the number of points at distance ${h}$ from ${p}$. Since the Hamming cube looks the same from every point, the number of points at distance at most ${d}$ is the same whether measured from the origin or from ${p}$; hence the identity. This definition has a nice geometric interpretation because ${\Phi_d(m)}$ is also the number of points of ${{\bf H}^m}$ inside any ball of radius ${d}$. The second interpretation is as follows. Consider the discrete probability space ${\Omega = \{ 0, 1 \}^m}$ with the uniform measure, so that each point of ${\Omega}$ has probability ${2^{-m}}$.
Let ${X_1, X_2, \dots, X_m}$ be ${m}$ independent random variables ${X_i : \Omega \rightarrow \{0,1\}}$ defined as ${X_i(y) = y_i}$, the ${i}$-th coordinate of ${y}$, for ${y \in \Omega}$. Let ${X = \sum X_i}$. Then we have, $\displaystyle \Pr\{ X_1 = a_1, X_2 = a_2, \dots, X_m = a_m \} = \frac{1}{2^m},$ for any ${a_i \in \{0,1\}}$. As such we have, $\displaystyle \frac{\Phi_d(m)}{2^m} = \Pr\{ X \leq d\}$. We will use this interpretation to derive our inequality. Lemma: ${\Phi_d(m) \leq \left ( \frac{me}{d} \right )^d}$. Proof: Our first observation is that we may take ${d \leq m/2}$. The function ${\left ( \frac{me}{d} \right )^d}$ is an increasing function of ${d}$. Denoting this by ${f(d)}$ and extending it to a function on ${[0,m]}$ we have, by logarithmic differentiation, $\displaystyle \begin{array}{rcl} \frac{f'(d)}{f(d)} = \ln \left ( \frac{m}{d} \right ) \geq 0 \end{array}$ Thus for ${d > m/2}$ we have ${f(d) \geq f(m/2) \geq (2e)^{m/2} > 2^m}$. On the other hand ${\Phi_d(m) \leq 2^m}$ trivially. Thus we may assume ${0 \leq d \leq m/2}$. Now we have the following identity, $\displaystyle \begin{array}{rcl} \sum_{i=0}^d {m \choose i} = \sum_{j=m-d}^m {m \choose j} = 2^m \Pr\{ X \geq m-d \}. \end{array}$ so it suffices to upper bound ${\Pr \{ X \geq m - d \}}$. We now use the Chernoff technique. For any ${\lambda > 0}$, $\displaystyle \begin{array}{rcl} \Phi_d(m) &=& 2^m \Pr\{ X \geq m - d \} \\\\ &=& 2^m \Pr \{ e^{\lambda X} \geq e^{\lambda(m - d)} \} \\\\ &\leq& 2^m \frac{{\bf E}(e^{\lambda X})}{e^{\lambda(m - d)}} ~~ \text{[By Markov's Inequality]} \\\\ &=& 2^m \frac{\prod_{i=1}^m {\bf E}(e^{\lambda X_i})}{e^{\lambda(m-d)}} ~~ \text{[By Independence of the} ~ X_i] \end{array}$ Now for any ${i}$ we have, $\displaystyle {{\bf E}(e^{\lambda X_i}) = \frac{e^{\lambda} + 1}{2}}$ and so we have, $\displaystyle \begin{array}{rcl} \Phi_d(m) \leq \frac{(e^{\lambda} + 1)^m}{e^{\lambda(m-d)}} \end{array}$ We now choose the ${\lambda}$ which minimizes the right-hand side. Noting that ${m/2 \geq d}$ we have, by differentiation and observing the sign of the derivative, that the ${\lambda}$ minimizing the expression is given by, $\displaystyle \begin{array}{rcl} e^{\lambda} = \frac{m}{d} - 1 \end{array}$ and substituting that in the estimate we have, $\displaystyle \begin{array}{rcl} \Phi_d(m) &\leq& \frac{\left ( \frac{m}{d} \right )^m}{\left ( \frac{m-d}{d}\right )^{m-d}} \\ &=& \left ( 1 + \frac{d}{m-d} \right )^{m-d} \left ( \frac{m}{d} \right )^d \leq \left ( \frac{me}{d} \right )^d. \end{array}$ The last step follows from the simple fact that, $\displaystyle \begin{array}{rcl} \left ( 1 + \frac{d}{m-d} \right )^{m - d} \leq \left ( e^{\frac{d}{m-d}} \right )^{m-d} = e^d. \end{array}$ $\Box$ ## Jung’s Theorem August 7, 2010 Given a set of points of diameter ${D}$ in ${n}$-dimensional Euclidean space ${{\bf R}^n}$ it is trivial to see that it can be covered by a ball of radius ${D}$. But the following theorem by Jung improves the result by a factor of about ${\frac{1}{\sqrt{2}}}$, and is the best possible. Theorem 1 [ Jung’s Theorem: ] Let ${S}$ be a set of points in ${{\bf R}^n}$ of diameter ${D}$. Then there is a ball of radius ${\sqrt{\frac{n}{2n + 2}} D}$ covering ${S}$. Proof: We first prove this theorem for sets of points ${S}$ with ${\vert S \vert \leq n + 1}$ and then extend it to an arbitrary point set. If ${\vert S \vert \leq n + 1}$ then the smallest ball enclosing ${S}$ exists. We assume that its center is the origin. Denote its radius by ${r}$.
Denote by ${S' \subseteq S}$ the subset of points such that ${||p|| = r}$ for ${p \in S'}$. It is easy to see that ${S'}$ is in fact nonempty. Observation: The origin must lie in the convex hull of ${S'}$. Suppose the contrary; then there is a separating hyperplane ${H}$ such that ${S'}$ lies on one side and the origin lies on the other side of ${H}$ (strictly). By assumption, every point in ${S \setminus S'}$ is at distance strictly less than ${r}$ from the origin. Move the center of the ball slightly from the origin, in a direction perpendicular to the hyperplane ${H}$ and towards ${H}$, by a small enough amount that the distance from the new center to every point in ${S \setminus S'}$ remains less than ${r}$. However, now the distance to every point of ${S'}$ is decreased and so we will have a ball of radius strictly less than ${r}$ enclosing ${S}$, which contradicts the minimality of ${r}$. Let ${S' = \{ p_1, p_2,\dots, p_m\}}$ where ${m \leq n + 1}$ (by Carathéodory's theorem we may discard points of ${S'}$ not needed to express the origin as a convex combination). Because the origin is in the convex hull of ${S'}$ we have nonnegative ${\lambda_i}$ such that, $\displaystyle \sum \lambda_i p_i = 0, \sum \lambda_i = 1$ Fix a ${k, 1 \leq k \leq m}$. Then we have, $\displaystyle \begin{array}{rcl} 1 - \lambda_k &=& \sum_{i \neq k} \lambda_i \\ &\geq& \frac{1}{D^2}\sum_{i = 1}^m \lambda_i || p_i -p_k ||^2 \\ &=& \frac{1}{D^2}\left ( \sum_{i=1}^m \lambda_i (2r^2 - 2 \langle p_i, p_k \rangle) \right ) \\ &=& \frac{1}{D^2} \left ( 2r^2 - 2 \left \langle \sum_{i=1}^m \lambda_i p_i, p_k \right \rangle \right ) \\ &=& \frac{2r^2}{D^2} \end{array}$ Adding up the above inequalities for all values of ${k}$, we get $\displaystyle \begin{array}{rcl} m - 1 \geq 2m r^2 / D^2 \end{array}$ Thus we get ${\frac{r^2}{D^2} \leq \frac{m - 1}{2m} \leq \frac{n}{2n + 2}}$ since ${m \leq n + 1}$ and the function $(x-1)/(2x)$ is monotonically increasing. So we have immediately ${r \leq \sqrt{\frac{n}{2n+2}} D}$. The remainder of the proof uses the beautiful theorem of Helly. So assume ${S}$ is any set of points of diameter ${D}$. With each point as center draw a ball of radius ${r = \sqrt{\frac{n}{2n+2}}D}$. Clearly any ${n + 1}$ of these balls intersect. This is true because the center of the smallest ball enclosing ${n+1}$ of the points is at most ${r}$ away from each of those points. So we have a collection of compact convex sets, any ${n+1}$ of which have a nonempty intersection. By Helly’s theorem all of them have a nonempty intersection. Any point of this intersection can be chosen to be the center of a ball of radius ${r}$ that will enclose all of ${S}$. $\Box$ ## Tensor Calculus II July 10, 2008 In the last post I described a contravariant tensor of rank 1 and a covariant tensor of rank 1. In this post we will consider generalizations of these. We will introduce tensors of arbitrary rank $(k,l)$ where $k$ is the number of contravariant indices and $l$ is the number of covariant indices. How many numbers does such a tensor represent? It is easy to see that if the tensor is defined in $n$-dimensional space, it defines $n^{(k+l)}$ real numbers for each point of space, and each coordinate system. The notation for such a tensor is $A^{i_1 i_2 \dots i_k}_{j_1 j_2 \dots j_l}$ Now before we go ahead, I would like to clarify that the above represents just a single component of the tensor, out of the $n^{(k+l)}$ components. The reason why I clarified this is that with tensors and the Einstein summation convention introduced in the last post, indices are sometimes used as summation indices.
Specifically we said that if an index repeats in a tensor expression in a contravariant position and a covariant position it stands for summation. You need to sum over that index for all possible values (1 to $n$). Also, multiple indices can repeat in a tensor expression. In that case one needs to sum over all such indices varying from 1 to $n$. Therefore a tensor expression in which one index is to be summed upon expands into a sum of $n$ quantities, and a tensor expression in which two indices are to be summed upon expands into a sum of $n^2$ quantities. Guess what? Armed with this understanding we are already ready to understand the general transformation law of tensors. Despair not, my friends, on looking at this expression. If you do not understand this – don't worry. We will probably never see such complicated mixed tensors of general rank $(k,l)$. However, it is necessary to understand how to compute with tensors and their transformation law. So after reading the general law of transformation and understanding what it means, I would like you to forget it temporarily instead of getting bogged down by the weird equation! I will follow the definition with an example. This will be restricted to 3 dimensions and to our familiar spherical polar and cartesian coordinate systems. If you can understand the example, I would like you to remember the general computational method used there. The general law of tensor transformation for a tensor of rank $(k,l)$ is $\bar{A}^{i_1 i_2 \dots i_k}_{j_1 j_2 \dots j_l} = A^{u_1 u_2 \dots u_k}_{v_1 v_2 \dots v_l}\frac{\partial{\bar{x}^{i_1}}}{\partial{x^{u_1}}} \frac{\partial{\bar{x}^{i_2}}}{\partial{x^{u_2}}} \dots \frac{\partial{\bar{x}^{i_k}}}{\partial{x^{u_k}}} \cdot \frac{\partial{x^{v_1}}}{\partial{\bar{x}^{j_1}}} \frac{\partial{x^{v_2}}}{\partial{\bar{x}^{j_2}}} \dots \frac{\partial{x^{v_l}}}{\partial{\bar{x}^{j_l}}}$ In this equation, the left hand side represents the value of one component out of the $n^{(k+l)}$ components in the “bar” coordinate system [which is why we denote the tensor by $A$ with a bar]. The right hand side, on the other hand, is a huge sum of $n^{(k+l)}$ quantities because of the repeating indices $u_1, u_2, \dots, u_k, v_1, v_2, \dots, v_l$. Each term of the summation is a product of a component of the tensor with the appropriate partial derivatives – that is, each term is a particular instantiation of the $u_1, u_2, \dots, u_k, v_1, v_2, \dots, v_l$. The transformation law for the general rank tensor is a direct generalization of the transformation law for rank 1 tensors. The tensor's value is taken at the index $i_1,i_2,\dots,i_k,j_1,j_2,\dots,j_l$, and the corresponding terms for the primed coordinate system occur in the numerator or the denominator according to whether they belong to the $i$‘s – the contravariant indices – or the $j$‘s – the covariant indices. The rest of the newly introduced $u_s, v_t$ have been summed upon since they are repeating indices. Wow! We have stated the general transformation law and now we will proceed to an example. Before we do that, I would like to state our general roadmap for the future posts. Remember that our goal is to understand the equations of General Relativity. We are studying tensors just because we want to understand the weird symbols and equations in the profound equations laid down by Einstein. So in the next post, we will study general operations on tensors like – addition, subtraction, inner products etc. Along with this we will state some rules to recognize tensors.
We will state a general rule by which we can ascertain that a set of numbers forms the components of a tensor. We will use this general rule to prove that some collections of numbers are tensors in later posts. Here we will not, however, talk of the derivative of a tensor because that involves some more machinery – but hey... without derivatives there is no calculus, so get back we will! The next few posts will talk about the fundamental tensor, which is related to measuring the distance in space between two points. That will define distance for us. In the next few posts we will then discuss derivatives of tensors. In doing so we will introduce some tensors in terms of the fundamental tensor. These will be helpful for defining derivatives. Armed with all this machinery about distances and derivatives, we will then state equations for geodesics, which are shortest paths between two points in space. In ordinary space this is the straight line. But in spaces where the fundamental tensor is more complicated, the geodesics are not “straight” lines [Well, by then we will probably wonder what straight means anyway!]. Finally we will have the notions of distance, derivatives and shortest paths in our bag, so we will talk about what we really need to understand the Einstein equations – curvature. Specifically we will introduce quantities which represent how curved a space is at a point. After this we will state the Einstein equations and, if I can manage, I will show some simple consequences of these equations. The Einstein equations somehow relate the curvature of our playground [which is the “space” of 4-dimensional spacetime] to the distribution of mass and energy. So much for the future posts – all that is probably going to take several posts! But for the time being let's get back to an example using familiar 3-space. In familiar 3-space, we are going to work with two coordinate systems – one: cartesian $x^1=x, x^2=y, x^3=z$, and the other spherical polar $\bar{x}^1=r, \bar{x}^2=\theta, \bar{x}^3 = \phi$. Recall that for a point $P$, $r$ is the distance from the origin $O$, $0 \leq \phi \leq \pi$ is the angle made by $OP$ with the $z$ axis and $0 \leq \theta < 2\pi$ is the angle made by the projection of $OP$ onto the $xy$-plane with the $x$ axis. Then it is easy to see that $x^1 = \bar{x}^1 \sin{\bar{x}^3} \cos{\bar{x}^2}$ $x^2=\bar{x}^1 \sin{\bar{x}^3} \sin{\bar{x}^2}$ $x^3=\bar{x}^1 \cos{\bar{x}^3}$. The inverse transformation equations are $\bar{x}^1 = \sqrt{ {x^1}^2 + {x^2}^2 + {x^3}^2}$ $\bar{x}^2=\tan^{-1}\left( \frac{x^2}{x^1}\right)$ $\bar{x}^3=\cos^{-1}\left( \frac{x^3}{\sqrt{ {x^1}^2 + {x^2}^2 + {x^3}^2}}\right)$ See here for an illustration. Assume we are working in a small region of space where all of $x^1,x^2,x^3$ are nonzero so that $\bar{x}^2,\bar{x}^3$ are well defined. We also need to verify that the Jacobian matrix of the partial derivatives is non-singular, but that is left as an exercise! We will take a simple case here. Example Suppose there is a contravariant tensor that has components $\bar{A}^1, \bar{A}^2, \bar{A}^3$. What are the components in the first coordinate system? For illustration we only show $A^2$.
Now by the tensor transformation law $A^2 = \bar{A}^i \frac{\partial{x^2}}{\partial{\bar{x}^i}}$ and so, evaluating the partial derivatives and summing them up, $A^2$ evaluates to $A^2=\bar{A}^1\sin{\bar{x}^2}\sin{\bar{x}^3} + \bar{A}^2\bar{x}^1\cos{\bar{x}^2}\sin{\bar{x}^3}+\bar{A}^3\bar{x}^1\sin{\bar{x}^2}\cos{\bar{x}^3}$ Of course the above represents the value at a specific point in space. A contravariant tensor of rank 2 will have 9 such terms to sum up. We encourage the reader to evaluate the components of such a tensor. Before the next post, do think of some of the things you can do with tensors. Specifically think about 1. When can two tensors be added/subtracted to produce new tensors? 2. Can two tensors be multiplied to produce a new tensor? 3. Consider a function $f$ defined for each point of space. Suppose for each coordinate system $(x^1,x^2,\dots, x^n)$ we define $n$ components at a point by $A_i= \frac{\partial{f}}{\partial{x^i}}$. Prove that these numbers define a covariant tensor of rank 1. ## Tensor Calculus – Part 1 May 12, 2008 Firstly, I am sorry for the outage in posts. I could not find time to write up anything worth a really good post. I have received some verbal requests for a new post. I appreciate all such requests! So without further ado, let me get down to business. Today's post will be about the Tensor Calculus. In fact I intend to write much more about it in subsequent posts, but we will see how that goes. But before I go on to give an introduction, let me state my motivation in studying tensors. While some familiarity with Physics is needed to understand the next paragraph, the tensor calculus is a purely mathematical topic. The motivation comes from my long-standing wish to be able to understand the General Theory of Relativity. The GTR is a geometrical theory of gravity. It explains gravity as a result of the deformation of SpaceTime. Mass and Energy cause SpaceTime to be distorted and curved. This distortion in turn acts back on mass to determine its motion. The perceived effects of gravitation on a particle are a direct result of the motion in the distorted SpaceTime, without any mysterious force acting on the particle. That is quite as much as I know about the GTR for now. But the basic mathematical tool to study the GTR is Riemannian Geometry. Riemannian Geometry is the appropriate generalization, for a general $n$-dimensional space, of the study of things like curves, surfaces and curvature, so common from everyday experience in two and three dimensional spaces. Many basic entities in Riemannian Geometry are best described by Tensors. Tensors are also used in many other places, and in order to understand current physical theories and their mathematical formulations, one needs to know about tensors and basic differential geometry. So, now let's get down to an introduction. Consider an $n$-dimensional space $V_n$. A coordinate system is an assignment of an $n$-tuple of numbers $x^i, i = 1 \cdots n$ to each point of $V_n$. In a different coordinate system the same point will have a different set of coordinates $\bar{x}^i, i = 1 \cdots n$. We also assume that the $\bar{x}^i$ are given by $n$ functions $\varphi_i(x^j)$ of the $x^i$. The functions are assumed to be such that the Jacobian matrix $\frac{\partial{\bar{x}^i}}{\partial{x^j}}$ is invertible. Example. In 3-dimensional space, a standard cartesian coordinate system can be the first coordinate system, while the polar coordinates $(r,\theta,\phi)$ can be the primed coordinate system.
Or the primed coordinate system can be one with the 3 axes not necessarily at right angles to each other. Definition. A contravariant tensor of rank 1 is a set of $n$ functions $A^i$ of the coordinates such that they transform according to the following law $\bar{A}^i = A^j \frac{\partial{\bar{x}^i}}{\partial{x^j}}$ Now in the above expression you might notice that the index $j$ is repeated on the right hand side of the equation. That implies a summation. Therefore the expression actually means the following sum $\sum^n_{j=1} A^j \frac{\partial{\bar{x}^i}}{\partial{x^j}}$. In general, in writing tensorial expressions, the above so-called summation convention is widely used. Whenever an index is used as a superscript and the same index occurs as a subscript, or if it occurs in the numerator and the denominator, it is to be summed over all the coordinates. The above definition was for a contravariant tensor of rank 1. In fact a contravariant tensor of rank 1 is just a vector. The tensorial transformation law just generalizes the transformation of vectors. To see this consider 3-dimensional space and basis vectors $e_1, e_2, e_3$. Suppose a vector $v = a_1 e_1 + a_2 e_2 + a_3 e_3$. Consider a different coordinate system with basis vectors $e'_i = \sum_j a_{ij} e_j$. Now in this coordinate system let $v = a'_1 e'_1 + a'_2 e'_2 + a'_3 e'_3$. Now what are the $a'_i$ as functions of the $a_i$? A quick calculation shows that $a' = B a$ where $a'$ is the vector of coordinates $a'_i$, $a$ is the vector of coordinates $a_i$ and $B$ is the inverse of the matrix $A = (a_{ij})$. Also, as one can verify, $b_{ij} = \frac{\partial{\bar{x}^i}}{\partial{x^j}}$. Example. Consider the differentials $dx^i$. Clearly the differentials in the primed coordinate system are given by $d\bar{x}^i= dx^j \frac{\partial{\bar{x}^i}}{\partial{x^j}}$ Next, we define a covariant tensor of rank 1. Definition. A covariant tensor of rank 1 is a set of $n$ functions $A_i$ that transform according to $\bar{A}_i = A_j \frac{\partial{x^j}}{\partial{\bar{x}^i}}$. In the next post we will go on to the definition of more general tensors called mixed tensors. Such tensors are contravariant or covariant in more than one index, or have a mix of such indices. We will see the transformation law of such tensors. We will also start with some simple operations on tensors. ## Superconcentrators From Expanders March 7, 2008 Today I will be blogging about the construction of superconcentrators from expanders. This is directly from the expanders survey. I am just reviewing it here for my own learning. First some definitions. Definition 1. A $d$-regular bipartite graph $(L,R,E)$, with $|L|=n$ and $|R|=m$, is called a $\emph{magical graph}$ if it satisfies the properties below. For a given set $S$ of vertices, we denote the set of vertices to which some vertex in $S$ is connected as $\Gamma(S)$. 1. For every $S$ with $|S| \leq \frac{n}{10}, |\Gamma(S)| \geq \frac{5d}{8} |S|$ 2. For every $S$ with $\frac{n}{10} < |S| \leq \frac{n}{2}$, $|\Gamma(S)| \geq |S|$. Definition 2. A superconcentrator is a graph $G=(V,E)$ with two given subsets $I, O \subseteq V$ with $|I|=|O|=n$, such that for every $S \subseteq I$ and $T \subseteq O$ with $|S|=|T|=k$, the number of vertex-disjoint paths from $S$ to $T$ is at least $k$. Superconcentrators with $O(n)$ edges are interesting for various reasons which we do not go into here. But we do give a construction of a superconcentrator with $O(n)$ edges from the magical graphs above.
A simple probabilistic argument can show the following result. Theorem. There exists a constant $n_0$ such that for every $d \geq 32$, every $n \geq n_0$ and every $m$ with $n \geq m \geq \frac{3n}{4}$, there is a magical graph with $|L|=n,|R|=m$. Here is the construction of a superconcentrator from magical graphs. Assume that we can construct a superconcentrator with $O(m)$ edges for every $m \leq (n-1)$. The construction is recursive. First take two copies of the magical graphs $(L_1,R_1,E_1), (L_2,R_2,E_2)$ with $|L_1|=|L_2|=n$ and $|R_1|=|R_2|=\frac{3n}{4}$. Connect every vertex of $L_1$ to the corresponding vertex of $L_2$, and add edges between $R_1$ and $R_2$ so that, with $R_1$ as input set $I$ and $R_2$ as output set $O$, they form a superconcentrator whose input vertex set has size $\frac{3n}{4}$. We claim that the resulting graph is a superconcentrator with input vertex set of size $n$. Identify the input vertex set as $L_1$ and the output vertex set as $L_2$. For every $S \subseteq L_1$ with $|S| = k \leq \frac{n}{2}$ it is true that $| \Gamma(S) \cap R_1 | \geq |S|$. Therefore by Hall's Marriage Theorem there exists a perfect matching between vertices of $S$ and $\Gamma(S) \cap R_1$. Similarly there exists a perfect matching between vertices of $T \subseteq L_2$ and $\Gamma(T) \cap R_2$. Together with the edges of the superconcentrator between input set $R_1$ and output set $R_2$, there are at least $k$ disjoint paths between $S$ and $T$. It remains to handle the case of $k > \frac{n}{2}$. For $|S|=|T|=k > \frac{n}{2}$, there is a subset $U \subseteq S$ of vertices with $|U| \geq (k - \frac{n}{2})$ such that the vertices corresponding to $U$ in $L_2$ are in $T$. Edges between such vertices contribute $(k- \frac{n}{2})$ disjoint paths between $S$ and $T$. The remaining $\frac{n}{2}$ disjoint paths exist as proved earlier. Hence we have a superconcentrator with input and output vertex sets of size $n$. How many edges are there in this graph? For the base case of this recursion, we let a superconcentrator with input/output sets of size $n \leq n_0$ be the complete bipartite graph with $n^2$ edges. The following recursion counts the edges. Let $e(k)$ denote the number of edges in the superconcentrator as per this construction with input and output sets of size $k$. Then $e(k) = k^2$ for $k \leq n_0$ and $e(k) = k + 2dk + e(\frac{3k}{4})$ for $k > n_0$. It can be easily seen that $e(n) \leq Cn$ for $C \geq \max(n_0,4(2d+1))$.
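To close the loop on the edge count, here is a small sketch (my own addition; the concrete values d = 32 and n_0 = 100 are made-up placeholders, since the theorem only asserts that some n_0 exists). It evaluates the recursion numerically, using floor division for the recursive size, and checks the linear bound e(n) ≤ Cn with C = max(n_0, 4(2d+1)).

```python
from functools import lru_cache

d = 32                         # degree of the magical graphs (d >= 32)
n0 = 100                       # hypothetical base-case threshold
C = max(n0, 4 * (2 * d + 1))   # the constant from the analysis, here 260

@lru_cache(maxsize=None)
def e(k):
    """Edge count of the recursively constructed superconcentrator."""
    if k <= n0:
        return k * k                        # base case: complete bipartite graph
    return k + 2 * d * k + e(3 * k // 4)    # matching + two magical graphs + recursion

# The linear bound holds at every size, not just asymptotically.
assert all(e(n) <= C * n for n in range(1, 200001))
print(e(200000), "<=", C * 200000)          # the count sits just below the bound C*n
```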
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 298, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9664477705955505, "perplexity": 138.86950003414924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522999.27/warc/CC-MAIN-20171213104259-20171213124259-00350.warc.gz"}
http://block-library.enseeiht.fr/html/Progress/ModeResolution.html
# Mode resolution ## Context This page describes the mode resolution mechanism. Its goal is to compute the computation mode of a block according to the DataTypes, SignalTypes and values of its inputs, outputs, memories and parameters. In order to find the correct mode for an instance of a block, it is mandatory to represent each mode by a different expression (signature). For a BlockType it will not be possible to have two BlockModes having the same signature. If that is the case, it will not be possible to select a mode for this block. This will mean either: • The provided block instance is not correct • The specification is not correct • The specification is not complete In the BlockLibrary instance verification page we describe how to check that a block's specification is correct according to the conditions stated above. We also provide means to verify whether a block specification is complete. According to this, if we do not manage to find a correct BlockMode for a block instance, this means that the provided block instance is not correct. ### Precisions on the BlockMode/BlockVariant relations with BlockVariants BlockModes implement BlockVariants. BlockVariants extend other BlockVariants. These two relations are modeled using VariantSet model elements. See the BlockLibrary Meta-model page for documentation. A VariantSet is contained in either a BlockVariant, a BlockMode or another VariantSet. Each VariantSet has an attached operator giving it a specific semantics. We distinguish between two operators, AND and ALT: • AND operator: The AND VariantSet container must implement (or extend, if the container is a BlockVariant) all of the BlockVariants and VariantSets referred to by this VariantSet. • ALT operator: The ALT VariantSet container must implement (or extend, if the container is a BlockVariant) exactly one of the BlockVariants and VariantSets referred to by this VariantSet. It is worth noting that: • From one BlockMode/BlockVariant we can have multiple relations. This allows the two kinds of relations to be combined. • A VariantSet has either a reference to BlockVariants or to VariantSets, but not a combination of both. ## Definitions ### The ALT relation We need here to introduce a new symbol for logical formulas: $$\bigodot$$, named ALT for Alternative. This symbol states that exactly one property in a set of logical properties is true (this behavior is close to that of the binary logical operator XOR, but generalized to a set of values). It is often referred to as n-ary XOR. Here is the formal definition of the ALT operator: $S = \{a_1, a_2, ..., a_{n}\}, \forall i \in [1..n], a_i \in \mathcal{B}$ $\bigodot^{n}_{i=1} S(i) = \left( \bigvee^{n}_{i=1} S(i) \right) \wedge \left( \bigwedge^{i,j \leq n, i \not= j}_{i,j =1} \neg(S(i) \wedge S(j)) \right)$ ### Operations on specification elements Let's assume the following operation definitions: • $$SIG(e)$$: BlockMode | BlockVariant -> Signature The signature of $$e$$. Its detailed definition is given below. • $$SC(e)$$: StructuralFeature -> Set(Annotation) The collection of structural constraints of $$e$$. Structural constraints are the properties expressed in the StructuralFeatures of the element $$e$$ and its structural constraints. Returns a collection of Annotations. Note In the following, $$SC(e)_i$$ is the $$i^{th}$$ element of this collection.
• $$BV^{\&}(e)$$: BlockMode | BlockVariant -> Set(BlockVariant) The collection of BlockVariants implemented (if $$e$$ is a BlockMode) or extended (if $$e$$ is a BlockVariant) by the $$e$$ : SpecificationElement related to it by AND relations. Returns a collection of BlockVariants. Note In the following, $$BV^{\&}(e)_i$$ is the $$i^{th}$$ element of this collection. • $$BV^{\odot}(e)$$: BlockMode | BlockVariant -> Set(Set(BlockVariant)) The collection of collections of BlockVariants implemented (if $$e$$ is a BlockMode) or extended (if $$e$$ is a BlockVariant) by the $$e$$ : SpecificationElement related to $$e$$ by ALT relations. Each sub-collection in the result contains the elements related in one ALT relation. Returns a collection of collections of BlockVariants. Note In the following, $$BV^{\odot}(e)_{i,j}$$ is the $$j^{th}$$ BlockVariant of the $$i^{th}$$ collection of BlockVariants. In the following we will write $$|X|$$ for the cardinality (number of elements) of the collection $$X$$. ## BlockMode signature calculus According to the previous definitions: • The signature of an element $$e$$ (either a BlockMode or a BlockVariant) is recursively defined by: $\begin{split}SIG(e) = \left( \bigwedge^{|SC(e)|}_{i=1} SC(e)_i \right) \land \left( \left( \bigwedge^{|BV^{\&}(e)|}_{i=1} SIG(BV^{\&}(e)_i) \right) \land \left( \bigwedge^{|BV^{\odot}(e)|}_{i=1} \left( \bigodot^{|BV^{\odot}(e)_i|}_{j=1} SIG(BV^{\odot}(e)_{i,j}) \right) \right) \right)\end{split}$ ## Example Example 1: Let's take a BlockMode called $$a$$. Let's imagine a first ALT ($$\odot$$) relation between $$a$$ and three BlockVariants ($$v_1, v_2, v_3$$) and a second ALT relation between $$a$$ and two BlockVariants ($$v_4, v_5$$). As these two ALT relations both need to be satisfied, an AND VariantSet holds both of them. A compact form for this is: • $$a$$ implements $$\&(\odot(v_1, v_2, v_3), \odot(v_4, v_5))$$ We give in the following the result of the computation of the set of signatures for $$a$$: $$SIG(a) = \bigodot\{$$ $\left(\bigwedge^{|SC(a)|}_{i=1} SC(a)_i \right) \land \left(\bigwedge^{|SC(v_1)|}_{i=1} SC(v_1)_i \right) \land \left(\bigwedge^{|SC(v_4)|}_{i=1} SC(v_4)_i \right) \land \left(\bigwedge^{|SC(v_5)|}_{i=1} SC(v_5)_i \right),$ $\left(\bigwedge^{|SC(a)|}_{i=1} SC(a)_i \right) \land \left(\bigwedge^{|SC(v_2)|}_{i=1} SC(v_2)_i \right) \land \left(\bigwedge^{|SC(v_4)|}_{i=1} SC(v_4)_i \right) \land \left(\bigwedge^{|SC(v_5)|}_{i=1} SC(v_5)_i \right),$ $\left(\bigwedge^{|SC(a)|}_{i=1} SC(a)_i \right) \land \left(\bigwedge^{|SC(v_3)|}_{i=1} SC(v_3)_i \right) \land \left(\bigwedge^{|SC(v_4)|}_{i=1} SC(v_4)_i \right) \land \left(\bigwedge^{|SC(v_5)|}_{i=1} SC(v_5)_i \right)$ $$\}$$ ## Termination of the calculus In the OCL constraints specified on the BlockLibrary metamodel, we specified that no cycle can exist in the BlockVariant hierarchy. Furthermore, there is a finite number of BlockVariants and BlockModes in a BlockLibrary. This makes the previous calculus terminate, and so the computation is always possible.
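To make the AND/ALT semantics concrete, here is a toy sketch (my own illustration; the data layout and names are hypothetical and not part of the BlockLibrary metamodel). A signature is modeled as a nested expression whose leaves are boolean structural constraints already evaluated against a block instance; an AND node requires all of its children to hold, and an ALT node requires exactly one of them to hold.

```python
def alt(values):
    """n-ary XOR as defined above: true iff exactly one value is true."""
    return sum(1 for v in values if v) == 1

def evaluate(node):
    """Evaluate a toy signature expression.

    A node is either a bool (a structural constraint checked against the
    block instance) or a tuple (op, children) with op in {"AND", "ALT"}.
    """
    if isinstance(node, bool):
        return node
    op, children = node
    results = [evaluate(child) for child in children]
    if op == "AND":
        return all(results)
    if op == "ALT":
        return alt(results)
    raise ValueError(f"unknown operator: {op}")

# Example 1 from the text: a implements &( ALT(v1, v2, v3), ALT(v4, v5) ),
# with made-up truth values standing in for the variants' structural constraints.
v1, v2, v3, v4, v5 = True, False, False, False, True
sig_a = ("AND", [("ALT", [v1, v2, v3]), ("ALT", [v4, v5])])
print(evaluate(sig_a))   # True: exactly one of v1..v3 and exactly one of v4, v5 hold
```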
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569306969642639, "perplexity": 4460.096371666704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104634.14/warc/CC-MAIN-20170818102246-20170818122246-00328.warc.gz"}
https://scholars.unh.edu/dissertation/2026/
## Doctoral Dissertations #### Title Ground state of $^{16}$O Spring 1998 Dissertation Physics #### Degree Name Doctor of Philosophy We use the coupled cluster expansion (exp(S) method) to solve the many-body Schrödinger equation in a configuration space of 35 $\hbar\omega.$ The Hamiltonian includes a nonrelativistic one-body kinetic energy, a realistic two-nucleon potential and a phenomenological three-nucleon potential. Using this formalism we generate the complete ground-state correlations due to the underlying interactions between nucleons. The resulting ground-state wave function is used to calculate the binding energy and the one- and two-body densities for the ground state of $^{16}$O. The problem of center-of-mass corrections in calculating observables has been worked out by expanding the center-of-mass correction as many-body operators. For convergence testing purposes, we apply our formalism to the case of the harmonic oscillator shell model, where an exact solution exists. We also work out the details of the calculation involving realistic nuclear wave functions.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8213514089584351, "perplexity": 916.0561701856725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510866.52/warc/CC-MAIN-20181016180631-20181016202131-00410.warc.gz"}
http://particle.thep.lu.se/pub/Preprints/11/lu_tp_11_09_abs.html
Lisa Carloni, Johan Rathsman, Torbjorn Sjostrand Discerning Secluded Sector gauge structures Abstract New fundamental particles, charged under new gauge groups and only weakly coupled to the standard sector, could exist at fairly low energy scales. In this article we study a selection of such models, where the secluded group either contains a softly broken U(1) or an unbroken SU(N). In the Abelian case new $\gamma_v$ gauge bosons can be radiated off and decay back into visible particles. In the non-Abelian case there will not only be a cascade in the hidden sector, but also hadronization into new $\pi_v$ and $\rho_v$ mesons that can decay back. This framework is developed to be applicable both for e+e- and pp collisions, but for these first studies we concentrate on the former process type. For each Abelian and non-Abelian group we study three different scenarios for the communication between the standard sector and the secluded one. We illustrate how to distinguish the various characteristics of the models and especially study to what extent the underlying gauge structure can be determined experimentally. LU TP 11-09
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8874532580375671, "perplexity": 1089.9351241166194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424090.53/warc/CC-MAIN-20170722182647-20170722202647-00642.warc.gz"}
https://infoscience.epfl.ch/record/202321
Infoscience Journal article # Phonon-limited resistivity of graphene by first-principles calculations: Electron-phonon interactions, strain-induced gauge field, and Boltzmann equation We use first-principles calculations, at the density-functional-theory (DFT) and GW levels, to study both the electron-phonon interaction for acoustic phonons and the "synthetic" vector potential induced by a strain deformation (responsible for an effective magnetic field in case of a nonuniform strain). In particular, the interactions between electrons and acoustic phonon modes, the so-called gauge-field and deformation potential, are calculated at the DFT level in the framework of linear response. The zero-momentum limit of acoustic phonons is interpreted as a strain of the crystal unit cell, allowing the calculation of the acoustic gauge-field parameter (synthetic vector potential) within the GW approximation as well. We find that using an accurate model for the polarizations of the acoustic phonon modes is crucial to obtain correct numerical results. Similarly, in the presence of a strain deformation, the relaxation of atomic internal coordinates cannot be neglected. The role of electronic screening on the electron-phonon matrix elements is carefully investigated. We then solve the Boltzmann equation semianalytically in graphene, including both acoustic and optical phonon scattering. We show that, in the Bloch-Grüneisen and equipartition regimes, the electronic transport is mainly ruled by the unscreened acoustic gauge field, while the contribution due to the deformation potential is negligible and strongly screened. We show that the contribution of acoustic phonons to resistivity is doping- and substrate-independent, in agreement with experimental observations. The first-principles calculations, even at the GW level, underestimate this contribution to resistivity by approximately 30%. At high temperature (T > 270 K), the calculated resistivity underestimates the experimental one more severely, the underestimation being larger at lower doping. We show that, besides remote phonon scattering, a possible explanation for this disagreement is the electron-electron interaction that strongly renormalizes the coupling to intrinsic optical-phonon modes. Finally, after discussing the validity of the Matthiessen rule in graphene, we derive simplified forms of the Boltzmann equation in the presence of impurities and in a restricted range of temperatures. These simplified analytical solutions allow us to extract the coupling to acoustic phonons, related to the strain-induced synthetic vector potential, directly from experimental data.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924528181552887, "perplexity": 1313.3165703891173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120001.0/warc/CC-MAIN-20170423031200-00474-ip-10-145-167-34.ec2.internal.warc.gz"}
http://masteringolympiadmathematics.blogspot.com/2015/05/show-that-math16.html
Wednesday, May 6, 2015 Show that [MATH]16<\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}<17[/MATH]. In my previous post (Show that [MATH]16<\dfrac{1}{\sqrt{1}}+\dfrac{1}{\sqrt{2}}+\cdots+\dfrac{1}{\sqrt{80}}<17[/MATH]), we used the trapezoid rule to prove the lower bound in the given inequality, i.e. [MATH]16<\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}[/MATH]. We are not supposed to use the same method to prove the upper bound, simply because the function $y=\dfrac{1}{\sqrt{x}}$ is concave up. No matter how we manipulate that concept, we will only end up proving that the target sum [MATH]\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}[/MATH] is greater than some quantity, not less than. This works against what we are looking to prove, that is, [MATH]\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}<17[/MATH]. We have to come up with another idea. More often than not, questions that ask us to estimate a sum are questions about telescoping series. But, the thing is, how exactly can [MATH]\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}=\dfrac{1}{\sqrt{1}}+\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{3}}+\cdots +\dfrac{1}{\sqrt{80}}[/MATH] be a telescoping series in disguise? Okay, let's start from the very obvious, then work things out in detail: It is very clear that: $k>k-1$ Taking the square root of both sides yields: $\sqrt{k}>\sqrt{k-1}$ Adding the quantity $\sqrt{k}$ to both sides gives: $\sqrt{k}+\sqrt{k}>\sqrt{k}+\sqrt{k-1}$ Don't worry that we have a sum here: if we turn both sides into fractions and rationalize the denominator, we will get a difference instead. $2\sqrt{k}>\sqrt{k}+\sqrt{k-1}$ $\dfrac{2}{\sqrt{k}+\sqrt{k-1}}>\dfrac{1}{\sqrt{k}}$ (Note that [MATH]\color{yellow}\bbox[5px,purple]{k\ne 1}[/MATH] here: we will apply this bound only for $k\ge 2$ and keep the $k=1$ term exactly as it is, which is what makes the final estimate sharp enough to stay below 17.) $\dfrac{2}{\sqrt{k}+\sqrt{k-1}}\cdot\dfrac{\sqrt{k}-\sqrt{k-1}}{\sqrt{k}-\sqrt{k-1}}>\dfrac{1}{\sqrt{k}}$ $\dfrac{2(\sqrt{k}-\sqrt{k-1})}{1}>\dfrac{1}{\sqrt{k}}$ $2(\sqrt{k}-\sqrt{k-1})>\dfrac{1}{\sqrt{k}}$ Remember that we're asked to show [MATH]\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}<17[/MATH]. Keeping the condition we just set ($k\ne 1$), we sum the inequality $2(\sqrt{k}-\sqrt{k-1})>\dfrac{1}{\sqrt{k}}$ over $k=2,\dots,80$ so we can estimate the sum [MATH]\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}[/MATH].
[MATH]\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}=\dfrac{1}{\sqrt{1}}+\sum_{k=2}^{80}\dfrac{1}{\sqrt{k}}[/MATH] For $k=2$:  $2(\bcancel{\sqrt{2}}-\sqrt{1})>\dfrac{1}{\sqrt{2}}$ For $k=3$:  $2(\bcancel{\sqrt{3}}-\bcancel{\sqrt{2}})>\dfrac{1}{\sqrt{3}}$ For $k=4$:  $2(\bcancel{\sqrt{4}}-\bcancel{\sqrt{3}})>\dfrac{1}{\sqrt{4}}$ $\vdots$           $\vdots$                    $\vdots$ For $k=79$:  $2(\bcancel{\sqrt{79}}-\bcancel{\sqrt{78}})>\dfrac{1}{\sqrt{79}}$ For $k=80$:  $2(\sqrt{80}-\bcancel{\sqrt{79}})>\dfrac{1}{\sqrt{80}}$ Adding these up, the intermediate square roots cancel, and therefore [MATH]\begin{align*}\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}=1+\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{3}}+\dfrac{1}{\sqrt{4}}+\cdots+\dfrac{1}{\sqrt{80}}&=\dfrac{1}{\sqrt{1}}+\sum_{k=2}^{80}\dfrac{1}{\sqrt{k}}\\& < 1+(-2\sqrt{1})+(2\sqrt{80})\\& = 2\sqrt{80}-1 \end{align*}[/MATH] Since $81>80$, we have $9> \sqrt{80}$, so $18> 2\sqrt{80}$ and $17> 2\sqrt{80}-1$. We can put together what we just found to see that: [MATH]\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}=\dfrac{1}{\sqrt{1}}+\sum_{k=2}^{80}\dfrac{1}{\sqrt{k}}< 2\sqrt{80}-1< 17[/MATH] In other words, we have proved that [MATH]\sum_{k=1}^{80}\dfrac{1}{\sqrt{k}}< 17[/MATH]. And we're done.
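A quick numerical check (my own addition, independent of the proof above) confirms both of the claimed bounds and shows how tight the telescoping estimate is:

```python
import math

s = sum(1 / math.sqrt(k) for k in range(1, 81))
bound = 2 * math.sqrt(80) - 1

print(f"sum of 1/sqrt(k), k = 1..80   : {s:.6f}")      # about 16.48
print(f"telescoping bound 2*sqrt(80)-1: {bound:.6f}")   # about 16.89
print(16 < s < bound < 17)                              # True
```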
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9502334594726562, "perplexity": 1138.1227278197973}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689823.92/warc/CC-MAIN-20170924010628-20170924030628-00565.warc.gz"}
http://mathoverflow.net/questions/31809/pre-triangulated-category-that-isnt-triangulated
# Pre-triangulated category that isn't triangulated I've been working through some of the early parts of Neeman's book on triangulated categories, and he mentions that he does not know of a pre-triangulated category that is not triangulated. Is this still an open question? Actually, let me be a bit more specific and break this into two parts: 1. The usual axioms for a triangulated category are known to be redundant. Do we know of a list of independent axioms? (Peter May wrote something about this, and it's the best I've seen but it still doesn't give independence results for the new list of axioms.) 2. Assuming it is known that, say, the octahedral axiom is independent from the others, what is an example of a pre-triangulated category that is not a triangulated category? It appears that if there is such an example, it would have to be very artificial... all of the pre-triangulated categories appearing in nature are automatically triangulated. (I haven't read all of May's article but it looks like he explains how one usually goes about proving that the octahedral axiom holds for a given category, so perhaps this would give some clues for situations where this would fail.) - I've never thought hard about producing a counterexample but I have discussed this with a couple of people relatively recently and they weren't aware of any examples of a category which was pre-triangulated but not triangulated. – Greg Stevenson Jul 14 '10 at 23:48 Part 2 of this question came up in a local reading group - even though it's not so recent, I hope people won't mind my attempting to bump it up again. Is there still no known example, abstract or otherwise? – Jan Grabowski May 19 '14 at 15:28 ## 3 Answers Don't know if you're still interested in this question, but for anyone who is curious about it, the answer seems to be that there are no pretriangulated categories that are not triangulated. http://arxiv.org/abs/1506.00887 - Wow, I find it amazing that such a basic result took 50 years for anyone to figure out! – Eric Wofsey Jun 3 '15 at 1:47 I think it's a great, major achievement! – Fernando Muro Jun 3 '15 at 4:04 Whoa! I definitely didn't expect that result to be true- very cool! – Dylan Wilson Jun 3 '15 at 8:00 The author has apparently removed the preprint from the arxiv (see arxiv.org/abs/1506.00887v2), leaving the following comment: "There is a serious error at the end of the proof of 3.4 which cannot be fixed. Thee [sic] is a counterexample to the statement of 3.4 due to Canonaco and Kunzer. Lemma 3.5 also needs stronger hypotheses. Many thanks to those whonsent [sic] comments, counterexamples and encouragement" – Ricardo Andrade Jun 17 '15 at 14:37 Perhaps all the excitement was premature... – Ricardo Andrade Jun 17 '15 at 14:41 I have to say that this article http://arxiv.org/abs/1506.00887 has at least one mistake. Lemma 3.5, Counterexample - take group $G=$ sequences of integer numbers $=\mathbb{Z}^\mathbb{N}$. $\ell$ and $m$ are right and left shift respectively. Then $m\circ\ell=id$, so $Image(e)=G$. - Why is this a counterexample? Take the isomorphism asked of in the lemma to be $l$. – Dylan Wilson Jun 8 '15 at 15:25 well, it is not isomorphism ) right shift don't have sequence 1 0 0 0 ... 
in its image – Tomas Paul Jun 8 '15 at 15:31 although I'm not sure if this is very important - this lemma is used to construct an automorphism of some Ext, which is zero in many cases – Tomas Paul Jun 8 '15 at 16:27 (I have informed the author of the paper) – André Henriques Jun 8 '15 at 16:39 Yes, I'm about to post some fixes. My interest in it was really just the k-linear case. In fact, that example is not a counterexample to the main result because the map omega_* cannot be an injection (as the spectral sequence implies that the kernel and cokernel must either both not vanish or both vanish). However, in replacing Lemma 3.5 I need an assumption on the enrichment. This means that the result is still open for the general case. It seems to work for k-linear, divisible, finitely generated and, as it were, co-finitely generated (ie the duals of finitely generated groups). The assumptions are not likely to be necessary though and I would expect that someone better equipped to deal with the infinite group case might be able to fix it. There are a couple of other blunders at the end. In particular, there is some mistake in the uniqueness result. I've removed it in the meantime. I'll post the fixes on arxiv in a day or so. Antony Maciocia -
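The shift counterexample mentioned in the comments above ($G=\mathbb{Z}^\mathbb{N}$ with the right shift $\ell$ and the left shift $m$) is easy to see concretely. In this little sketch (my own illustration, not part of the discussion), sequences are modeled as functions from non-negative indices to integers; it shows that $m\circ\ell$ is the identity while $\ell\circ m$ is not, and that a sequence with nonzero first entry is not in the image of $\ell$.

```python
def right_shift(f):
    """l: (x0, x1, x2, ...) -> (0, x0, x1, ...)"""
    return lambda n: 0 if n == 0 else f(n - 1)

def left_shift(f):
    """m: (x0, x1, x2, ...) -> (x1, x2, x3, ...)"""
    return lambda n: f(n + 1)

x = lambda n: n + 1                 # the sequence 1, 2, 3, 4, ...
ml = left_shift(right_shift(x))     # m o l
lm = right_shift(left_shift(x))     # l o m

print([x(n) for n in range(5)])     # [1, 2, 3, 4, 5]
print([ml(n) for n in range(5)])    # [1, 2, 3, 4, 5]  -> m o l = id
print([lm(n) for n in range(5)])    # [0, 2, 3, 4, 5]  -> l o m != id
# Every sequence in the image of l starts with 0, so (1, 0, 0, ...) is never hit:
# l has a one-sided inverse but is not an isomorphism.
```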
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8292672038078308, "perplexity": 529.6250351831686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461864121714.34/warc/CC-MAIN-20160428172201-00178-ip-10-239-7-51.ec2.internal.warc.gz"}
http://www.ck12.org/statistics/Chi-Square-Test/studyguide/user:ZGFuZGFuMjc0QGdtYWlsLmNvbQ../Chi-Square-Test/
# Chi-Square Test We use the chi-square test to examine patterns between categorical variables, such as genders, political candidates, locations, or preferences. There are two types of chi-square tests: the goodness-of-fit test and the test for independence. We use the goodness-of-fit test to estimate how closely a sample matches the expected distribution. We use the test for independence to determine whether there is a significant association between two categorical variables in a single population. To test for significance, it helps to make a table containing the observed and expected frequencies of the data sample. If you have two different categorical variables, this is called a contingency table. The Chi-Square Statistic The value that indicates the comparison between the observed and expected frequency is called the chi-square statistic. The idea is that if the observed frequency is close to the expected frequency, then the chi-square statistic will be small. On the other hand, if there is a substantial difference between the two frequencies, then we would expect the chi-square statistic to be large. To calculate the chi-square statistic, $\chi^2$, we use the following formula: $\chi^2=\sum_{i} \frac{(O_{i}-E_{i})^2}{E_{i}}$ where: $\chi^2$ is the chi-square test statistic. $O_{i}$ is the observed frequency value for each event. $E_{i}$ is the expected frequency value for each event. The number of degrees of freedom associated with a goodness-of-fit chi-square test is df = c - 1, where c is the number of categories. The number of degrees of freedom associated with a chi-square test of independence is df = (r-1) * (c-1), where r is the number of levels for one categorical variable, and c is the number of levels for the other categorical variable. We use the chi-square test statistic and the degrees of freedom to determine the p-value on a chi-square probability table. Using the p-value and the level of significance, we are able to determine whether to reject or fail to reject the null hypothesis and write a summary statement based on these results. Test of Single Variance We can use the chi-square test if we want to test two samples to determine if they belong to the same population. We are testing the hypothesis that the sample comes from a population with a variance greater than the observed variance. Here is the formula for the chi-square statistic: $\chi^2=\frac{df(s^2)}{\sigma^2}$ where: $\chi^2$ is the chi-square statistical value. $df=n-1$, where $n$ is the size of the sample. $s^2$ is the sample variance. $\sigma^2$ is the population variance. Once we have the chi-square statistic, find the p-value and complete the test as usual.
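As a quick worked illustration of the goodness-of-fit formula above, here is a small Python sketch (the die-roll counts are made-up example data, not from the study guide): it computes the chi-square statistic and its degrees of freedom for observed versus expected counts.

```python
# Goodness-of-fit example: are 120 die rolls consistent with a fair die?
observed = [25, 16, 17, 22, 18, 22]            # made-up counts for faces 1..6
expected = [sum(observed) / 6.0] * 6           # fair die: 20 expected per face

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1                         # c - 1 categories

print(f"chi-square = {chi_square:.2f}, df = {df}")   # 3.10 with 5 degrees of freedom
# Compare against a chi-square table (or scipy.stats.chi2.sf(chi_square, df))
# at the chosen significance level to decide whether to reject the null hypothesis.
```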
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 11, "texerror": 0, "math_score": 0.9830873608589172, "perplexity": 670.2890132186071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931011060.35/warc/CC-MAIN-20141125155651-00087-ip-10-235-23-156.ec2.internal.warc.gz"}
http://slideplayer.com/slide/2766370/
# Agenda

Information
– Informal evaluation
– 2 questionnaires + evaluation
Recap from last time
– Variable: definition and types
– Distribution
– Measures of center
Probability theory
A. Definitions
B. Definitions and basic rules
C. Multiplication rule and independence
Today's exercises

Mean (gennemsnittet)
The mean is the sum of the observations divided by the number of observations.
– n denotes the number of observations (the sample size)
– y1, y2, y3, …, yi, …, yn denote the n observations
– ȳ denotes the mean
It is the center of mass.

Standard Deviation (standardafvigelsen)
Gives a measure of variation by summarizing the deviations of each observation from the mean and calculating an adjusted average of these deviations.

Site | Obs. 1 | Obs. 2 | Obs. 3 | Sum | n | Mean | Std. dev.
A | 5 | 5 | 5 | 15 | 3 | 5 | 0.0
B | 4 | 5 | 6 | 15 | 3 | 5 | 1.0
C | 3 | 5 | 7 | 15 | 3 | 5 | 2.0

A. Learning Objectives
1. Random Phenomena
2. Law of Large Numbers
3. Probability
4. Independent Trials (trial = experiment)
5. Finding probabilities

Learning Objective 1: Random Phenomena
For a random phenomenon, the outcome is uncertain.
– In the short run, the proportion of times that something happens is highly random
– In the long run, the proportion of times that something happens becomes very predictable
Probability quantifies long-run randomness.

Learning Objective 2: Law of Large Numbers
As the number of trials increases, the proportion of occurrences of any given outcome approaches a particular number "in the long run".
For example, as one tosses a die, in the long run 1/6 of the observations will be a 6.
What do we get in the long run if we roll a die and compute the proportion of rolls greater than 3?

Learning Objective 3: Probability
With random phenomena, the probability of a particular outcome is the proportion of times that the outcome would occur in a long run of observations.
Example:
– When rolling a die, the outcome "6" has probability 1/6. In other words, the proportion of times that a 6 would occur in a long run of observations is 1/6.
Exercise:
– We draw a card from a deck of 4 x 13 = 52 playing cards (and put it back again). In what proportion of draws do we get a card above 10 (in the long run)?

Learning Objective 4: Independent Trials
Different trials of a random phenomenon are independent if the outcome of any one trial is not affected by the outcome of any other trial.
Example:
– If you have 20 flips of a coin in a row that are "heads", you are not "due" a "tail"; the probability of a tail on your next flip is still 1/2. The trial of flipping a coin is independent of previous flips.

Learning Objective 5: How can we find Probabilities?
Observe many trials of the random phenomenon and use the sample proportion of the number of times the outcome occurs as its probability. This is merely an estimate of the actual probability.
Calculate theoretical probabilities based on assumptions about the random phenomena. For example, it is often reasonable to assume that outcomes are equally likely, such as when flipping a coin or rolling a die.

A. Learning Objectives (recap)
1. Random Phenomena
2. Law of Large Numbers
3. Probability
4. Independent Trials
5. Finding probabilities
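To illustrate the law-of-large-numbers question above (the proportion of die rolls greater than 3), here is a small simulation sketch in Python; it is an added illustration, not part of the original slides.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulate die rolls and track the proportion of outcomes > 3.
# In the long run this proportion should settle near 3/6 = 0.5.
for n in [10, 100, 1_000, 100_000]:
    rolls = rng.integers(1, 7, size=n)   # uniform integers 1..6
    proportion = np.mean(rolls > 3)
    print(f"n = {n:>7}: proportion > 3 = {proportion:.3f}")
```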
B. Learning Objectives
1. Sample Space for a Trial
2. Event
3. Probabilities for a sample space
4. Probability of an event
5. Basic rules for finding probabilities about a pair of events
6. Probability of the union of two events
7. Probability of the intersection of two events

Learning Objective 1: Sample Space for a Trial
The sample space is the set of all possible outcomes.
The sample space for a quiz consisting of 3 questions, each of which can be answered correctly, C (correct), or incorrectly, I (incorrect), is shown in the figure. What is the sample space?

Learning Objective 2: Event
An event is a subset of the sample space.
An event corresponds to a particular outcome or a group of possible outcomes. For example:
– Event A = student answers all 3 questions correctly = (CCC)
– Event B = student passes (at least 2 correct) = (CCI, CIC, ICC, CCC)

Learning Objective 3: Probabilities for a sample space
Each outcome, e.g. CCC, in a sample space has a probability.
The probability of each individual outcome is between 0 and 1.
The total (the sum) of all the individual probabilities equals 1.

Learning Objective 4: Probability of an Event
The probability of an event A is denoted by P(A).
The probability is obtained by adding the probabilities of the individual outcomes in the event.
When all the possible outcomes are equally likely: P(A) = (number of outcomes in A) / (number of outcomes in the sample space)

Learning Objective 4: Example: Requests to a website
List 3 events in the sample space above. What is the probability that a randomly chosen visit...
– was made from a mobile phone?
– was made to a website with more than 100,000 pages?
Which website size has the highest probability of being visited by a mobile-phone user?

Number of pages | Mobile | PC | Total
Under 25,000 | 90 | 14,010 | 14,100
25,000–49,999 | 71 | 30,629 | 30,700
50,000–99,999 | 69 | 24,631 | 24,700
100,000+ | 80 | 10,620 | 10,700
Total | 310 | 79,890 | 80,200

Learning Objective 5: Basic rules for finding probabilities about a pair of events
Some events are expressed as the outcomes that
1. Are not in some other event (complement of the event)
2. Are in one event and in another event (intersection of two events)
3. Are in one event or in another event (union of two events)

Learning Objective 5: Complement of an event
The complement of an event A consists of all outcomes in the sample space that are not in A.
The probabilities of A and of A' add to 1: P(A') = 1 - P(A)

Learning Objective 5: Disjoint events
Two events, A and B, are disjoint if they do not have any common outcomes.

Learning Objective 5: Intersection of two events
The intersection of A and B consists of outcomes that are in both A and B.

Learning Objective 5: Union of two events
The union of A and B consists of outcomes that are in A or B or in both A and B.

Learning Objective 6: Probability of the Union of Two Events
Addition Rule: For the union of two events, P(A or B) = P(A) + P(B) - P(A and B)
If the events are disjoint, P(A and B) = 0, so P(A or B) = P(A) + P(B)

Learning Objective 6: Example
Event A = Mobile
Event B = Site with more than 100,000 pages
How do we compute P(A and B) = 0.001?

Number of pages | Mobile | PC | Total
Under 25,000 | 90 | 14,010 | 14,100
25,000–49,999 | 71 | 30,629 | 30,700
50,000–99,999 | 69 | 24,631 | 24,700
100,000+ | 80 | 10,620 | 10,700
Total | 310 | 79,890 | 80,200

Learning Objective 7: Probability of the Intersection of Two Events
Multiplication Rule: For the intersection of two independent events, A and B, P(A and B) = P(A) x P(B)
Exercise: Roll two dice. What is the probability of getting two 6s?
– Define the events A and B.

Learning Objective 7: Example
What is the probability of getting 3 questions correct by guessing (i.e., answering at random)? A = correct. Probability of guessing correctly, P(A) = 0.2, so P(CCC) = 0.2 x 0.2 x 0.2 = 0.008.
What is the probability that a student answers at least 2 questions correctly?
P(CCI) + P(CIC) + P(ICC) + P(CCC) = 0.032 + 0.032 + 0.032 + 0.008 = 0.104

Learning Objective 7: Events Often Are Not Independent
Don't assume that events are independent unless you have given this assumption careful thought and it seems plausible.

Pips on the die roll | 1–2 | 3–4 | 5–6
Coin toss: Tails (plat) | ½ x ⅓ | ½ x ⅓ | ½ x ⅓
Coin toss: Heads (krone) | ½ x ⅓ | ½ x ⅓ | ½ x ⅓

Learning Objective 7: Events Often Are Not Independent
Example: A Pop Quiz with 2 Multiple Choice Questions
– Data giving the proportions for the actual responses
– Events: II IC CI CC
– Probability: 0.26 0.11 0.05 0.58
– P(CC) = 0.58

Learning Objective 7: Events Often Are Not Independent
Define the events A and B as follows:
– A: {first question is answered correctly}
– B: {second question is answered correctly}
P(A) = P{(CC), (CI)} = 0.58 + 0.05 = 0.63
P(B) = P{(CC), (IC)} = 0.58 + 0.11 = 0.69
P(A and B) = P{(CC)} = 0.58
If A and B were independent, P(A and B) = P(A) x P(B) = 0.63 x 0.69 = 0.43
Thus, in this case, A and B are not independent!

B. Learning Objectives (recap)
1. Sample Space for a Trial
2. Event
3. Probabilities for a sample space
4. Probability of an event
5. Basic rules for finding probabilities about a pair of events
6. Probability of the union of two events
7. Probability of the intersection of two events

C. Learning Objectives
1. Conditional probability
2. Multiplication rule for finding P(A and B)
3. Independent events defined using conditional probability

Learning Objective 1: Conditional Probability
For events A and B, the conditional probability of event A, given that event B has occurred, is P(A|B) = P(A and B) / P(B).
P(A|B) is read as "the probability of event A, given event B." The vertical slash represents the word "given". Of the times that B occurs, P(A|B) is the proportion of times that A also occurs.

Learning Objective 6: Example: 1) Converting from counts to probabilities

Number of pages | Mobile | PC | Total
Under 25,000 | 90 | 14,010 | 14,100
25,000–49,999 | 71 | 30,629 | 30,700
50,000–99,999 | 69 | 24,631 | 24,700
100,000+ | 80 | 10,620 | 10,700
Total | 310 | 79,890 | 80,200

Number of pages | Mobile | PC | Total
Under 25,000 | 0.0011 | 0.1747 | 0.1758
25,000–49,999 | 0.0009 | 0.3819 | 0.3828
50,000–99,999 | 0.0009 | 0.3071 | 0.3080
100,000+ | 0.0010 | 0.1324 | 0.1334
Total | 0.0039 | 0.9961 | 1.0000

Learning Objective 1: Example 1
What was the probability of a cell phone visit, given that the site has ≥ 100,000 pages?
– Event A: Cell phone is used
– Event B: Site has ≥ 100,000 pages
P(A|B) = P(A and B) / P(B) = 0.0010 / 0.1334 ≈ 0.0075

Learning Objective 1: Example 2
What is the probability of a cell phone visit given that the site has < 25,000 pages?
A = Cell phone is used, B = Pages < 25,000
P(A and B) = 0.0011, P(B) = 0.1758, P(A|B) = 0.0011 / 0.1758 ≈ 0.0063

Learning Objective 3: Independent Events Defined Using Conditional Probabilities
Two events A and B are independent if the probability that one occurs is not affected by whether or not the other event occurs.
Events A and B are independent if: P(A|B) = P(A), or equivalently, P(B|A) = P(B).
If events A and B are independent, P(A and B) = P(A) x P(B).

Learning Objective 3: Checking for Independence
To determine whether events A and B are independent:
– Is P(A|B) = P(A)?
– Is P(B|A) = P(B)?
– Is P(A and B) = P(A) x P(B)?
If any of these is true, the others are also true and the events A and B are independent.

Pips on the die roll | 1–2 | 3–4 | 5–6
Coin toss: Tails (plat) | | |
Coin toss: Heads (krone) | | |
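As a cross-check of the worked examples above, the following Python sketch recomputes one of the conditional probabilities from the website table and the "at least 2 correct" quiz probability; it is an added illustration using the numbers quoted in the slides.

```python
# Joint counts from the slides: rows = page-count category, columns = (Mobile, PC)
counts = {
    "under 25,000":  (90, 14_010),
    "25,000-49,999": (71, 30_629),
    "50,000-99,999": (69, 24_631),
    "100,000+":      (80, 10_620),
}
total = sum(m + pc for m, pc in counts.values())          # 80,200 visits

# P(mobile | fewer than 25,000 pages) = P(A and B) / P(B)
p_a_and_b = counts["under 25,000"][0] / total
p_b = sum(counts["under 25,000"]) / total
print(f"P(mobile | <25,000 pages) = {p_a_and_b / p_b:.4f}")   # about 0.0064

# Quiz: 3 questions, P(correct guess) = 0.2 per question, independent guesses.
p = 0.2
p_at_least_2 = 3 * p**2 * (1 - p) + p**3
print(f"P(at least 2 correct) = {p_at_least_2:.3f}")          # 0.104
```

The exact counts give 0.0064 here; the slides quote 0.0063 because they divide the already-rounded probabilities 0.0011 and 0.1758.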
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8176745176315308, "perplexity": 3884.6768153362596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814002.69/warc/CC-MAIN-20180222041853-20180222061853-00577.warc.gz"}
https://www.physicsforums.com/threads/angle-between-products-of-decay.225508/
Homework Help: Angle between products of decay 1. Mar 31, 2008 estrella

1. The problem statement, all variables and given/known data

Meson B+ decays to J/ψ and K+. We know the masses of the particles and we are looking for the angle between the products when the B+ has a momentum of 5 GeV in the lab frame.

2. Relevant equations

3. The attempt at a solution

I found the momenta and energies in the CM frame (easy..). Then I assumed that the B+ is moving along the x axis (in the lab) and analyzed the momenta of the products (both in the CM and the lab) along the x axis to use the Lorentz transformations:
qx = q*cosθ, qy = q*sinθ
Lorentz: qxL = γ(q*cosθ + uE*), qyL = q*sinθ
so tanΘL = qyL/qxL

How can I find the angle θ??? Am I missing something? I think I don't have enough data.

Last edited: Apr 1, 2008
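One way to see why the problem feels underdetermined is that the lab-frame opening angle depends on the (unspecified) CM decay angle θ*. The sketch below is an illustrative addition, not from the original thread; it assumes PDG-like masses and computes the opening angle for one chosen θ* using standard two-body kinematics.

```python
import numpy as np

m_B, m_psi, m_K = 5.279, 3.097, 0.494   # GeV (assumed PDG-like values)
p_B = 5.0                                # lab momentum of the B+ (GeV)

# CM momentum of the decay products (two-body decay formula)
p_star = np.sqrt((m_B**2 - (m_psi + m_K)**2) * (m_B**2 - (m_psi - m_K)**2)) / (2 * m_B)
E_psi, E_K = np.hypot(p_star, m_psi), np.hypot(p_star, m_K)

# Boost parameters of the B+ in the lab
E_B = np.hypot(p_B, m_B)
beta, gamma = p_B / E_B, E_B / m_B

def lab_vector(p_par, p_perp, E):
    """Boost a CM momentum (parallel, perpendicular to the B+ direction) to the lab."""
    return np.array([gamma * (p_par + beta * E), p_perp])

theta_star = np.deg2rad(90.0)            # CM decay angle: the missing piece of data
q_psi = lab_vector( p_star * np.cos(theta_star),  p_star * np.sin(theta_star), E_psi)
q_K   = lab_vector(-p_star * np.cos(theta_star), -p_star * np.sin(theta_star), E_K)

cos_open = q_psi @ q_K / (np.linalg.norm(q_psi) * np.linalg.norm(q_K))
print(f"p* = {p_star:.3f} GeV, opening angle = {np.degrees(np.arccos(cos_open)):.1f} deg")
```

Varying `theta_star` shows that the opening angle is not unique, which supports the poster's suspicion that an additional piece of information (the CM decay angle, or one of the lab angles) is needed.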
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8760573863983154, "perplexity": 2193.615165996247}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944682.35/warc/CC-MAIN-20180420194306-20180420214306-00023.warc.gz"}
http://swmath.org/?term=upper%20bounds
• # MULKNAP • Referenced in 35 articles [sw06467] • constrained MKP and present a branch-and-bound algorithm to solve this problem to optimality ... Lagrangian relaxation approach to obtain an upper bound. Together with the lower bound obtained... • # COSTA • Referenced in 23 articles [sw00162] • selection of resource descriptions, and tries to bound the resource consumption of the program with ... such recurrence relations which represents an upper bound, COSTA includes a dedicated solver. An interesting ... much the same machinery for inferring upper bounds on cost as for proving termination (which... • # FRSDE • Referenced in 15 articles [sw08762] • advantage: it can guarantee that the upper bound of the time complexity is linear with ... large data set and the upper bound of the space complexity is independent... • # CodingTheory • Referenced in 8 articles [sw01945] • constant weight codes (lower and upper bounds) and also using the Table of Constant Weight ... www.research.att.com/∼njas/codes/Andw/] and the table of upper bounds on A(n,d,w) (which ... electronic supplement to their paper “Upper bounds for constant-weight codes” [IEEE Trans. Inf. Theory ... parameters of new codes with classical upper bounds such as Johnson bound, linear programming...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924788236618042, "perplexity": 1749.870918489949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107863364.0/warc/CC-MAIN-20201019145901-20201019175901-00036.warc.gz"}
https://freshergate.com/mechanical-engineering/engineering-mechanics/discussion/1228
### Discussion :: Engineering Mechanics

1. The time of flight (t) of a projectile on an upward inclined plane is (where u = velocity of projection, α = angle of projection, and β = inclination of the plane with the horizontal):

A. t = $$\frac {g cos\beta} { 2u sin(\alpha - \beta) }$$ B. t = $$\frac {2u sin(\alpha - \beta)} { g cos\beta }$$ C. t = $$\frac {g cos\beta} { 2u sin(\alpha + \beta) }$$ D. t = $$\frac {2u sin(\alpha + \beta)} { g cos\beta }$$
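The choices can be checked with the standard derivation (added here as working; the original page does not show it). Resolving the motion perpendicular to the incline, the initial velocity component is $$u \sin(\alpha - \beta)$$ and the acceleration component is $$-g \cos\beta$$, so the projectile returns to the plane when the perpendicular displacement vanishes: $$0 = u \sin(\alpha - \beta)\, t - \tfrac{1}{2} g \cos\beta\, t^2,$$ which gives $$t = \frac{2u \sin(\alpha - \beta)}{g \cos\beta},$$ i.e., option B.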
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8421263694763184, "perplexity": 4137.875793491096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303779.65/warc/CC-MAIN-20220122073422-20220122103422-00208.warc.gz"}
http://sites.psu.edu/johnroe/tag/holomorphic-calculus/
# “Holomorphic Functional Calculus” Writing up the Connes-Renault notes, which I mentioned in a previous post, leads to a number of interesting digressions. For instance, the notion of “holomorphic closure” is discussed at some length in these early notes. But what exactly is the relationship between “holomorphic closure”, “inverse closure”, “complete holomorphic closure” (= holomorphic closure when tensored with any matrix algebra), and so on? I was aware that there had been some progress in this area but had not really sorted it out in my mind. Here’s a summary (all these results are pretty old, so perhaps everyone knows this but me…)
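For reference (this is an added gloss, not part of the original post), the standard definitions being compared are roughly as follows, for a unital Banach algebra $B$ and a subalgebra $A \subseteq B$ containing the unit:

– $A$ is inverse closed in $B$ if every $a \in A$ that is invertible in $B$ satisfies $a^{-1} \in A$.
– $A$ is holomorphically closed (stable under the holomorphic functional calculus) if for every $a \in A$ and every $f$ holomorphic on a neighborhood of the spectrum $\mathrm{sp}_B(a)$, the element $f(a) = \frac{1}{2\pi i}\oint f(z)\,(z-a)^{-1}\,dz$ again lies in $A$.
– $A$ is completely holomorphically closed if $M_n(A) \subseteq M_n(B)$ is holomorphically closed for every $n$, matching the parenthetical description above.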
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8603073358535767, "perplexity": 878.8976664996792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921957.9/warc/CC-MAIN-20140901014521-00187-ip-10-180-136-8.ec2.internal.warc.gz"}
http://www.physicsforums.com/showthread.php?s=52b6a6f8327c79e4e0eeca045dbdac43&p=3995942
## Linear Algebra - Jordan form basis

Hi all, I'm having trouble finding a Jordan basis for matrix A, i.e. the P matrix of: $J=P^{-1}AP$

Given $A = \begin{pmatrix} 4 & 1 & 1 & 1 \\ -1 & 2 & -1 & -1 \\ 6 & 1 & -1 & 1 \\ -6 & -1 & 4 & 2 \end{pmatrix}$ I found the Jordan form to be: $J = \begin{pmatrix} -2 & & & \\ & 3 & 1 & \\ & & 3 & \\ & & & 3 \end{pmatrix}$

Now we're looking for $v_1, v_2, v_3, v_4$ such that: $Av_1 = -2v_1 → (A+2I)v_1=0$ $Av_2 = 3v_2 → (A-3I)v_2=0$ $Av_3 = v_2+3v_3 → (A-3I)v_3=v_2$ $Av_4 = 3v_4 → (A-3I)v_4=0$

So now I find: $v_1 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix} \hspace{10mm} v_2,v_4 = \begin{pmatrix} 1 \\ 0 \\ 1 \\ -2 \end{pmatrix},\begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}$

Now I try to solve $(A-3I)v_3=v_2$ for each of the possible v2's I just found above, but there's no solution for any of 'em...

$\begin{pmatrix} 1 & 1 & 1 & 1 \\ -1 & -1 & -1 & -1 \\ 6 & 1 & -4 & 1 \\ -6 & -1 & 4 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}=\begin{pmatrix} 1 \\ 0 \\ 1 \\ -2 \end{pmatrix}\hspace{5mm} OR \hspace{5mm} \begin{pmatrix} 1 & 1 & 1 & 1 \\ -1 & -1 & -1 & -1 \\ 6 & 1 & -4 & 1 \\ -6 & -1 & 4 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}=\begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}$

(Here the matrix is $A-3I$.) Where am I going wrong? Thanks in advance!

Recognitions: Homework Help Hi oferon! Did you consider that the proper v2 could be a linear combination of your current v2 and v4? What if you try ##\lambda v_2 + \mu v_4## to find v3?

If it was a linear combination of other vectors then v1-v4 would not be a basis.. Am I wrong? Plus, another student told me the method I tried was completely wrong and that the correct method is finding more vectors through $Ker (A-λI)^j$ where j=2,3,... depends on how many more vectors I need for my basis. Which of the methods should I use? And why? I'm lost

Recognitions: Homework Help ## Linear Algebra - Jordan form basis

Quote by oferon: If it was a linear combination of other vectors then v1-v4 would not be a basis.. Am I wrong?
You need to find a ##v_3## that satisfies ##(A-3I)v_3=λv_2+μv_4##. When you find it, v1-v4 will form a basis.

Quote by oferon: Plus, another student told me the method I tried was completely wrong and that the correct method is finding more vectors through $Ker (A-λI)^j$ where j=2,3,... depends on how many more vectors I need for my basis.
That would work too, but it seems to me that it is a lot more work. (Short story: that student is wrong. Your method is fine. You just did not finish it.)

Quote by oferon: Which of the methods should I use? And why? I'm lost
If you're wondering... try both?

Hi, thanks for your kind replies. Ok, first I try what you suggested..
I take $(A-3I)v_3 = λv_2+μv_4$ I get: $\begin{pmatrix} 1 & 1 & 1 & 1 \\ -1 & -1 & -1 & -1 \\ 6 & 1 & -4 & 1 \\ -6 & -1 & 4 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}=\begin{pmatrix} λ \\ μ \\ λ \\ -2λ-μ \end{pmatrix} ----> \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \\ 6 & 1 & -4 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}=\begin{pmatrix} λ \\ λ+μ \\ λ \\ -λ-μ \end{pmatrix}$ Now I see it must satisfy $μ = -λ$ so I pick $λ=1, μ=-1$ thus $v_2-v_4=\begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}$ so now I solve: $\begin{pmatrix} 1 & 1 & 1 & 1 \\ -1 & -1 & -1 & -1 \\ 6 & 1 & -4 & 1 \\ -6 & -1 & 4 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}=\begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}$ But the solutions I get are exactly $\begin{pmatrix} 1 \\ 0 \\ 1 \\ -2 \end{pmatrix} , \begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}$ The same v2,v4... So where am I wrong now? Second thing, I've searched all over the net, and found this method. Yet the method the other student told me is what was taught in class. Can I be 100% sure both methods are equivalent and can be used both in all cases? I thank you again for your time. Ok, so I asked our instructor about the second question and yes, both methods are good. I prefer "my" method, but as you can see I still get stucked with it.. So how do I move on with this $(A-3I)v_3 = λv_2+μv_4$ ? Thanks again Recognitions: Homework Help Can you find a 3rd solution that is independent of v2 and v4? (Let's say with the first 2 entries set to zero. ;) Hmm, ok I see what you say.. So now I have 3 final questions to close this case for good: 1) I thought all solutions were given by span of $\begin{pmatrix} 1 \\ 0 \\ 1 \\ -2 \end{pmatrix} , \begin{pmatrix} 0 \\ 1 \\ 0 \\ -1 \end{pmatrix}$ So where did this $\begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}$ (tho I agree it IS a solution for this system) come from?? 2) How is it possible that $v_2 , v_4$ are solutions of both homogeneous and non-homogeneous $(A−3I)v_3=0$ and $(A−3I)v_3=v_2-v_4$. I doubt if it was just by accident.. 3) Final question is how come I'm allowed to go from the equation I got by comparing columns of PJ and AP: $(A−3I)v_3=v_2$, , to the equation $(A−3I)v_3=λv_2+μv_4$? The third column in J matrix $\begin{pmatrix} 0 \\ 1 \\ 3 \\ 0 \end{pmatrix}$ clearly shows I should find $Av_3=v_2+3v_3$ , not $Av_3=v_2+3v_3-v_4$ I appreciate your help alot! Thank you. Oh ok, I discard my 3rd question... The answer is that I pick v2 to be $\begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}$ Now I remain only with questions 1, and 2.. More related to equations system rather than J form I suppose OK, please discard all of my question, I'm an idiot :) Everything is clear now, I thank you very much for the last time :)
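For anyone following along, the computation in this thread can be checked symbolically. The sketch below uses SymPy's `jordan_form` (an added illustration, not something posted in the thread) and should reproduce a Jordan matrix with the eigenvalues -2 and 3 found above.

```python
from sympy import Matrix

A = Matrix([
    [ 4,  1,  1,  1],
    [-1,  2, -1, -1],
    [ 6,  1, -1,  1],
    [-6, -1,  4,  2],
])

# P holds the (generalized) eigenvector basis, J the Jordan normal form,
# with A = P * J * P**(-1).
P, J = A.jordan_form()

print(J)                      # expect a 1x1 block for -2 and blocks for 3
print(P * J * P.inv() - A)    # sanity check: should print the zero matrix
```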
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8778128027915955, "perplexity": 599.6354149967874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705956263/warc/CC-MAIN-20130516120556-00018-ip-10-60-113-184.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/184022/solving-a-system-of-linear-congruences-in-2-variables
# Solving a system of linear congruences in 2 variables Given: $$6x+7y \equiv 17 \pmod{42} \tag1$$ $$21x+5y \equiv 13 \pmod{42} \tag2$$ Here's my initial attempt at solving the above system. $(2) \times 35$: $$21x+7y \equiv 35 \pmod{42} \tag3$$ $(3)-(1)$: $$15x \equiv 18 \pmod{42}$$ $$5x \equiv 6 \pmod{14}$$ $$x \equiv 4 \pmod{14}$$ $$x \equiv 4,18,32 \pmod{42} \tag4$$ Substitute $(4)$ into $(2)$: $$5y \equiv 13 \pmod{42}$$ $$y \equiv 11 \pmod{42}$$ Hence the solutions in $\mathbb Z_{42}$ are $(4,11), (18,11), (32,11)$. I know this is correctly the solution set because the answers work, and because I've been told the system has 3 solutions. Then I tried substituting $(4)$ into $(1)$, and also into $(3)$, and each time I got $$7y \equiv 35 \pmod{42}$$ $$7y \equiv 35 \pmod{42}$$ $$y \equiv 5,11,17,23,29,35,41 \pmod{42}$$ Now, I don't understand why substituting $(4)$ into $(1)$ (or $(3)$) instead of into $(2)$ created excess solutions. I would really appreciate it if someone could take a look and explain it to me..thanks! - Equation 1. and 3. don't give you enough information to identify $y$, since you can only solve for the expression $7y$ and $7$ isn't a unit modulo $42$. Just to be sure: do you mean that eqn 2 works because $gcd(5,42)=1$, whereas eqns 1 & 3 fail because $gcd(7,42)\neq 1$? – Ryan Aug 18 '12 at 15:34 @Ryan Right, equation 2. can be written as $y = 21x + 11 (\bmod 42)$ while 1. and 3. only determine $7y$, and therefore $y$ only up to a multiple of $6$, as you can see from your set of candidates. This is really because $7*6 = 42$. – Cocopuffs Aug 18 '12 at 15:39 How did you get $y=21x+11 \pmod{42}$?? Assuming that you are basically just saying that the coefficient of $y$ (which we want to solve for) needs to be coprime with $42$, then my follow-up question is: what if the neither of the two coefficients of $y$ in the given system had been coprime with $42$? – Ryan Aug 18 '12 at 16:00 @Ryan I multiplied both sides by $17$. If neither of the two coefficients had been coprime, you would only be able to determine $y$ up to $\bmod 6$ and would have more solutions $\bmod 42$. – Cocopuffs Aug 18 '12 at 16:14 Oh heehee yes of course. Thanks. But I still don't understand how my multiplying Eqn 2 by $35$ created more solutions (after all, I had made sure to go back to $\mathbb Z_{42}$ immediately after multiplying Eqn 2 by $35$). I had originally thought that Eqns 2 and 3 were equivalent to each other (just like when solving regular simultaneous eqns). Without this intuition/understanding, do I then just have to be very careful in the final step and make sure that the eqn I choose to solve for $y$ contains the $y$-coefficient that has the lowest gcd with $42$ among all the available eqns?? – Ryan Aug 18 '12 at 16:47
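A quick brute-force check of the system (added here as an illustration) confirms that the three solutions found above are the only ones in $\mathbb{Z}_{42}$:

```python
# Enumerate Z_42 x Z_42 and keep the pairs satisfying both congruences.
solutions = [
    (x, y)
    for x in range(42)
    for y in range(42)
    if (6 * x + 7 * y) % 42 == 17 and (21 * x + 5 * y) % 42 == 13
]
print(solutions)   # [(4, 11), (18, 11), (32, 11)]
```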
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9479617476463318, "perplexity": 179.84824710322303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278887.83/warc/CC-MAIN-20160524002118-00028-ip-10-185-217-139.ec2.internal.warc.gz"}
https://www.albert.io/ie/electricity-and-magnetism/semi-circle-of-charge-vs-a-transverse-line-of-charge
Semi-Circle of Charge vs. a Transverse Line of Charge

Figure 1 depicts a non-conducting semi-circle of radius $\rho = 3.66 \text{ cm}$, upon which a charge of $q_{sc} = + 0.724 \text{ nC}$ has been uniformly distributed. A distance $\rho$ from the center of the semi-circle of charge, a straight non-conducting wire of length $2\rho$ has been placed, oriented perpendicularly to the axis of symmetry of the semi-circle.
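The question prompt itself is not included in the excerpt above, so purely as an illustrative sketch (not the original problem's solution), here is how one might evaluate the electric field at the center of curvature due to the charged semi-circle alone, using the standard result $E = 2k\lambda/\rho$ with $\lambda = q_{sc}/(\pi\rho)$:

```python
import numpy as np

k = 8.99e9            # Coulomb constant, N m^2 / C^2
rho = 3.66e-2         # radius of the semi-circle, m
q_sc = 0.724e-9       # charge on the semi-circle, C

lam = q_sc / (np.pi * rho)        # linear charge density, C/m
E_center = 2 * k * lam / rho      # field at the center of curvature, N/C
print(f"E at center ≈ {E_center:.0f} N/C")   # roughly 3.1e3 N/C
```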
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9136143326759338, "perplexity": 524.5233651061761}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541876.62/warc/CC-MAIN-20161202170901-00287-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.the-cryosphere.net/13/911/2019/tc-13-911-2019.html
The Cryosphere, 13, 911–925, 2019 https://doi.org/10.5194/tc-13-911-2019 | Research article | 15 Mar 2019

# Large spatial variations in the flux balance along the front of a Greenland tidewater glacier

Till J. W. Wagner1, Fiamma Straneo2, Clark G. Richards3, Donald A. Slater2, Laura A. Stevens4, Sarah B. Das5, and Hanumant Singh6

• 1Department of Physics and Physical Oceanography, University of North Carolina Wilmington, NC 28403, USA
• 2Scripps Institution of Oceanography, University of California at San Diego, La Jolla, CA 92093, USA
• 3Bedford Institute of Oceanography, Fisheries and Oceans Canada, Dartmouth, NS B2Y 4A2, Canada
• 4Lamont-Doherty Earth Observatory, Columbia University, Palisades, NY 10964, USA
• 5Department of Geology and Geophysics, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
• 6Department of Electrical & Computer Engineering, Northeastern University, Boston, MA 02115, USA

Correspondence: Till J. W. Wagner ([email protected])

Abstract

The frontal flux balance of a medium-sized tidewater glacier in western Greenland in the summer is assessed by quantifying the individual components (ice flux, retreat, calving, and submarine melting) through a combination of data and models. Ice flux and retreat are obtained from satellite data. Submarine melting is derived using a high-resolution ocean model informed by near-ice observations, and calving is estimated using a record of calving events along the ice front. All terms exhibit large spatial variability along the ∼5 km wide ice front. It is found that submarine melting accounts for much of the frontal ablation in small regions where two subglacial discharge plumes emerge at the ice front. Away from the subglacial plumes, the estimated melting accounts for a small fraction of frontal ablation. Glacier-wide, these estimates suggest that mass loss is largely controlled by calving. This result, however, is at odds with the limited presence of icebergs at this calving front – suggesting that melt rates in regions outside of the subglacial plumes may be underestimated. Finally, we argue that localized melt incisions into the glacier front can be significant drivers of calving. Our results suggest a complex interplay of melting and calving marked by high spatial variability along the glacier front.

1 Introduction

The retreat of Greenland's tidewater glaciers may be among the most noticeable manifestations of a changing global climate. Tidewater glaciers act as thermodynamic buffers as well as mechanical buttresses between the ocean and the main Greenland ice sheet. The speedup of the Greenland ice sheet observed since the early 2000s has likely been caused (at least to some degree) by the thinning of the glaciers' termini and, in some cases, the disappearance of their floating tongues. The processes that determine the flux balance at the glacier front therefore impact the ice sheet as a whole, yet a comprehensive understanding of these processes remains elusive. Increased ocean and air temperatures are expected to further increase the rates of glacier retreat in the coming decades, lending additional weight and urgency to the study of calving front dynamics.
For a retreating glacier, the delivery of upstream ice to the terminus is outweighed by the loss of ice due to frontal ablation. At tidewater glaciers this frontal ablation occurs predominantly through two distinct processes: submarine melting and calving, both of which remain very difficult to constrain observationally. Recent studies have reported ways to measure submarine melting either directly (from repeat multibeam sonar surveys; Fried et al.2015), or indirectly (by considering the ocean heat transport toward the glacier; Jackson and Straneo2016). In most cases, however, melt is estimated using parameterizations that require local ocean temperatures and water velocities . Constraining melt rates at glacier fronts then relies on accurate observations of the ocean properties at these hard-to-reach ice–ocean interfaces and on finding appropriate parameterizations that translate these observations to melt rates. The continued scarcity of near-terminus data results in large uncertainties in current melt parameterizations . While melting is a continuous process, calving is discontinuous, highly complex, and influenced by a multitude of environmental factors, as well as the condition of the ice itself . In recent years, much effort has been dedicated to studying the calving of tidewater glaciers (see the review by ), yet a comprehensive understanding of what processes and variables determine the frequency and magnitude of calving events is still lacking. Oftentimes calving and melt fluxes are not considered separately but rather as a single ablation term, in particular when derived from satellite imagery . In situ ablation data remain scarce, and previous studies of explicit calving activities of Greenland's tidewater glaciers have typically been limited to visible daylight hours (see, for example, the calving event catalogue of Åström et al.2014) or somewhat indirect detection methods such as teleseismicity and measuring calving-generated surface gravity waves . Finally, the calving and melt fluxes of glaciers are oftentimes described by single (horizontally and vertically averaged) mean values . However, both melting and calving can vary substantially along the front of a glacier, with largely unknown implications for the overall stability of a glacier front. For example, submarine melt is enhanced in the vicinity of subglacial discharge plumes, leading to pronounced undercutting and incisions into the ice front . Spatially resolving these differences is challenging, and in particular spatial calving distributions are difficult to obtain. Here we use a multifaceted dataset for a first attempt at quantifying the relative contribution of calving and melting and their spatial variability along a glacier front. The dataset consists of both in situ and remotely sensed observations of the front of Saqqarliup Sermia, a midsized Greenland tidewater glacier. The dataset is unique in its detail, in its close proximity to the glacier front, and in that it contains observations of all of the main physical quantities of interest. The dataset consists of (i) detailed bathymetry at the glacier front, (ii) high-resolution ice-surface elevations, (iii) InSAR-derived ice velocities at and upstream from the glacier front, (iv) a continuous 3-week calving event catalogue, (v) local hydrographic measurements that allow for estimates of melt rates, and (vi) multibeam sonar imagery of the underwater shape of the glacier front. 
The spatial and temporal concurrence of these observations allows us to compare and contrast the individual components that make up the frontal mass budget of the glacier. Specifically, we first derive ice flux and retreat using satellite data collected over the observational period. We then compute submarine melting using a numerical model that is constrained (and validated) by near-ice hydrographic observations. Next, we estimate calving as a residual of the other terms in the frontal mass budget and compare this estimate with the observed calving frequencies. Finally, we bring our findings together to assess the overall mass budget and discuss how calving may be enhanced by highly focused melt “hot spots”. Figure 1(a) Landsat 8 image of the lower part of Saqqarliup Sermia and Sarqardleq Fjord. The inset of Greenland shows the location of the glacier. (b) Gridded bathymetry from in situ observations (readings indicated by gray dots). Also shown is the surface height from ArcticDEM (digital elevation map created by the Polar Geospatial Center from DigitalGlobe, Inc. imagery). The red line shows the front position on 9 July 2013. (c) Surface height (blue) and bathymetry (black) along the glacier front (following the red line in b). Also shown is the isostatic bottom of the ice (blue dashed). Locations of the two main plumes are highlighted in (b, c) by and ; two additional surface dips are indicated by and . The green horizontal line above panel (c) and the letters A–D indicate the locations of the front profiles shown in Fig. 8. 2 Field campaigns and physical setting Saqqarliup Glacier and the adjacent Sarqardleq Fjord were visited during two field seasons in the summers of 2012 and 2013. This site was chosen because ocean properties and bathymetry could be measured within 100 m of the ice front. Such observations are exceedingly difficult to obtain at larger glaciers that often have an ice mélange that obstructs access and where calving poses a major threat to equipment and personnel. The fjord is a tributary of the Ilulissat Icefjord, with the northwest-facing front of the glacier (Fig. 1) located 30 km southeast of Ilulissat Icefjord. At the glacier front, the fjord is about 5 km wide and the terminus is mostly, if not completely, grounded. Since 2004, the main northeastern part of the terminus has been retreating more rapidly than the southwestern section, which now juts out by almost 1 km from the rest of the glacier front (Fig. S2 in the Supplement). This part of the glacier, which we refer to as the “promontory” (Fig. 1), is grounded in shallow bathymetry and features tall ice cliffs (40–50 m above mean sea level; see Sect. 2.2). Overall, the glacier advanced slightly between 1975 and the early 1990s, but experienced an accelerating retreat from the mid-1990s until 2016 (Fig. S2;  Stevens et al.2016). The front position has been relatively stable from 2016 to 2018. The 2012 survey, described by , revealed the presence of two main subglacial discharge plumes along the glacier front, which, in turn, drained the two dominant catchment basins. The plume entering the fjord at the eastern edge of the promontory (Fig. 1) has drainage an order of magnitude greater and can result in an outcropping surface pool . We refer to this as the “main plume”. While this plume appears to be an annually recurring feature, its discharge is likely amplified episodically by the cyclical drainage of the ice-dammed Lake Tininnilik located to the southwest of the promontory . 
We note that the dramatic retreat of the glacier front in 2015 coincided with a major drainage event of Lake Tininnilik . The second recurring plume, which we will refer to as the “secondary plume”, is located closer to the northeastern margin of the glacier (Fig. 1). In what follows, we use bathymetry data from both years, while the other in situ observations were mostly collected during the 2013 season (see , for further details on the field campaigns). ## 2.1 Bathymetry The bathymetry of Sarqardleq Fjord was first mapped in detail during the 2012 and 2013 field seasons and the immediate bay in front of the terminus was found to feature depths of up to 150 m . These initial results were limited to data from a Remote Environmental Monitoring UnitS (REMUS) acoustic Doppler current profiler (ADCP) and a shipboard ADCP, which did not get closer than ∼200 m to the glacier front. Here, we supplement these data with several additional near-terminus datasets from the 2013 field campaign (Fig. S1), which allows for a detailed bathymetry map along the grounding line. The new data consist of circa 39 000 depth readings taken with Jetyak-mounted and ship-mounted ADCPs. In addition, there are approximately 6000 readings from a ship-mounted National Marine Electronics Association (NMEA) bottom-range profiler and six readings from expendable CTD sensors (XCTDs) deployed in the otherwise undersampled region of the main plume. Most of these readings are within 10–100 m of the glacier front. Figure 1c shows the new bathymetry at the glacier front as a function of x, the distance along the glacier front. The bathymetry can be split into two main regimes: for x<1800 m (the promontory) the glacier is grounded in shallow waters and its surface heights are elevated substantially above flotation. From here on, we refer to the eastern part of the glacier (x>1800 m) as the “main” glacier. In 2013, the front of the promontory was grounded on a sill that runs parallel to the glacier front. This sill coincides approximately with the furthest advance of the glacier in 1992 . By 2013 the main glacier had retreated ∼500 m from the sill, but the promontory was still perched on it in a bathymetry of 60 m in depth or less (Fig. 1c). Since 2013, this part of the glacier front has also retreated by several hundred meters (Fig. S2). In 2013, the main part of the glacier front was in waters of a depth of up to 150 m. A pronounced dip in bathymetry – suggestive of a subglacial channel – is found near the location of the main plume (x= 2000–2400 m). A number of smaller dips are observed between x= 3400 and 4700 m. Beyond 4700 m the water depth decreases rapidly as one approaches the northeastern shoreline. ## 2.2 Glacier surface topography We obtained a digital elevation map (DEM) from an ArcticDEM overflight on 22 March 2013, which covers the full span of the Saqqarliup glacier front and some of the upstream region (Fig. 1). The DEM has a horizontal resolution of 2 m and is capable of resolving individual crevasses on the glacier surface. The DEM shows that the front of the glacier is heavily crevassed and has several pronounced dips in the surface elevation at the terminus. The ice cliff is highest (up to 50 m) and most uniform in the region of the promontory, while the main glacier is much more variable with four distinct depressions that reach below 10 m in surface elevation (indicated by symbols in Fig. 1). 
The coincident high-resolution surface elevation and bathymetry data near the terminus enable us to compute the total ice thickness along the glacier front, which allows for an estimation of the total ice flux (discussed in Sect. 3.1).

3 Components of the frontal mass balance

In order for the mass budget along the glacier front to be balanced, the sum of advective ice flux and frontal retreat must be balanced by total ablation (i.e., by the sum of melting and calving fluxes). Here we consider a steady-state, vertically averaged balance. At a given point x along the glacier front this can be written as

$$H\left(R+v_{\mathrm{i}}\right)=D\,\overline{M}+C. \tag{1}$$

The left-hand side represents retreat and advection, where H is the ice thickness (in meters), R is the retreat rate, and vi is the ice velocity at the terminus (both in meters per year). The first term on the right represents the ice loss due to submarine melting, where D is the draft of the glacier (in meters) and $\overline{M}$ is the depth-averaged melt rate (in meters per year). The final term, C, is the ice loss due to calving (in square meters per year). In this section we discuss the data used and assumptions made to estimate each term in detail.

## 3.1 Ice velocity and advective ice flux

Several dozen ice velocity reconstructions of the lower part of the glacier are available for the years 2009–2015 from InSAR data. The mean ice velocity at the glacier front (space- and time-averaged over all available fields) is ∼350 m yr−1 with minima at the edges of the glacier. There is a notable peak in time-mean ice velocity (up to 750 m yr−1) near the location of the main plume (Fig. 2). A second region of elevated velocities is found near x=4500 m and is more pronounced further upstream from the glacier front. The drainage location of this second ice stream coincides with that of the secondary plume. It is worth noting that the spatial distribution of velocities was remarkably consistent during summer months (June–September) from 2012 to 2014 (Fig. 2b), followed by a substantial overall slowdown in 2015. This slowdown is not included here as it has been linked to a major drainage event of Lake Tininnilik and therefore is subject to altogether different environmental forcing. In what follows, we will consider the 2012–2014 mean July velocity profile along the glacier front. Using the mean summer (June–September) velocities instead does not change the results appreciably.

Figure 2 (a) InSAR ice velocity data near the glacier front. Shown are mean summer (June–September) values averaged over 28 velocity fields, collected during 2012–2014. Note that there is a consistent data gap near the promontory. The shading represents the horizontal velocity magnitude. (b) Velocity profiles along the glacier front. Here, as in all figures, the orientation is looking down-glacier. The faint gray lines show the 28 individual velocity fields. Also indicated are the approximate locations of the two plumes.

The magnitude of the summer ice velocity along the glacier front, vi(x), shown in Fig. 2b, together with the ice thickness profile H(x), allows for an estimate of total advective ice flux (Fig. 3). This assumes plug flow, i.e., that the ice velocity is approximately constant from the surface to the ice–bedrock interface. Note that for a glacier with no sliding and uniform temperature, the depth-averaged velocity is 80 % of the surface velocity.
For fast-flowing tidewater glaciers with concentrated deformation at depth, such as Saqqarliup, plug flow is therefore considered a good approximation . Figure 3(a) Mean July ice velocity along the glacier front in blue (right vertical axis). Here we used cubic interpolation to fill the data gap shown in Fig. 2b. In red (left vertical axis) is shown the estimated ice thickness along the glacier front, obtained by computing the difference of the surface and bathymetry profiles of Fig. 1c. The red dotted line shows the ice thickness at the glacier front assuming the ice is locally in isostatic equilibrium everywhere. (b) Ice flux per unit width along the glacier front (in black), computed from the product of velocity and thickness (shown in a). The shaded gray areas under the curve show the ice-flux range due to potential flotation. This is a result of the thickness ranges indicated as red shaded areas in (a). Uncertainties for thickness, velocity, and ice flux are shown by the red, blue, and black standard error bars, respectively. Also indicated are the approximate locations of the two known plumes (, ), which coincide with two areas of possible flotation. We note that the thickness data suggest that the terminus might be floating at several locations: the four highlighted surface depressions at the glacier front are all low enough to raise the isostatic bottom of the ice above the local sea floor. The locally isostatic bottom of the ice is indicated in Fig. 1c (blue dashed line). Here we assume an average ice density of 883 kg m−3, obtained as a mean of low and high values commonly used for glacier and ice shelf front densities, namely 850 kg m−3 and 917 kg m−3 (pure ice). The surrounding ice and the associated stiffness of the glacier will likely prevent the ice from assuming local isostasy along the glacier front. However, the isostatic bottom can be used to compute a lower bound on the ice thickness in regions where the ice may be floating. It may be speculated that the ice appears to be floating in these regions due to undercutting by submarine melt (which in turn is associated with rising discharge plumes, as discussed in Sect. 3.3). The ice would be grounded everywhere else. In particular, the ice surface is elevated substantially beyond its isostatic height in the region of the promontory. The uncertainty in ice thickness associated with the glacier potentially floating at several points is illustrated by the shaded areas in Fig. 3. In the figure, the upper bound of the ice thickness assumes a fully grounded glacier front, while the lower (dashed) bound assumes local isostasy everywhere. The ice flux is highest when assuming a fully grounded glacier, while a partially floating glacier front would have a correspondingly reduced flux. ## 3.2 Changes in glacier front position Superimposed on the aforementioned long-term retreat of the glacier front over the past decades (Fig. S2) we observe a seasonal advance–retreat cycle during 2012 and 2013 (Fig. 4). A total of 27 front positions between January and October 2012 and between January and October 2013 were digitized from TerraSAR-X satellite images. The 15 profiles from 2013 are shown in Fig. 4a. Both years exhibit a clear, albeit modest, seasonal cycle in terminus position, with a mean advance for the entire front of roughly 30 m from January through April/May, followed by a more rapid retreat from June to September of ca. 80 m (Fig. 4b). However, there is substantial variability along the glacier front in this cycle. 
Near the edges of the glacier, and in particular at the promontory, the glacier exhibits a much reduced advance–retreat cycle, and more variable regions are found in the main dynamic section of the glacier. Figure 4Seasonal advance and retreat of glacier front. (a) The total 15 front profiles acquired from February to September 2013; the legend lists every second profile. The thick red and blue profiles represent the May–June and September averages, respectively. Also indicated are the locations of the two plumes (, ). (b) Mean front position, shown as an anomaly from the yearly mean position. The 2012 values are shown in gray and 2013 in black. The 2013 spring profiles used in (a) are highlighted in red, fall profiles in blue, and July profiles in green. The vertical dotted lines demarcate the period from 12 to 31 July during which calving was observed. (c) Retreat rates, R(x), along the glacier front. Positive R represents glacier retreat and negative R glacier advance. The dashed line represents the spring–fall 2013 mean retreat rates; the solid line shows the retreat rates between 9 and 31 July, computed from the profiles marked green in (b). R(x) is computed as the rate of retreat perpendicular to the initial glacier front. The most rapid retreat in 2013 was observed at the time of the July study period. Figure 4b shows the spatial-mean seasonal retreat anomalies for 2012 and 2013, with profiles from 9 and 31 July 2013 highlighted in green. Such rapid retreat is spatially highly variable (Fig. 4c) and strongly impacted by sporadic large individual calving events. Longer-term mean retreat rates, computed from average spring and fall glacier front positions (highlighted in Fig. 4 in red and blue, respectively) may therefore be more representative on longer timescales. ## 3.3 Submarine melting Submarine melt rates at Saqqarliup Sermia during summer 2013 have been estimated by . Here we provide only a brief overview of the approach and build on the results of to investigate the glacier's flux balance. Melt rates within the two plumes were estimated using standard buoyant plume theory . Melt rates outside of the plumes were estimated using a high-resolution numerical model of the fjord in the Massachusetts Institute of Technology general circulation model (MITgcm), which has become the leading model for simulating the circulation and water properties of glacial fjords and for estimating the resulting submarine melt rates . Both buoyant plume theory and MITgcm were forced with runoff from the regional climate model RACMO and initialized with hydrographic profiles from the fjord. also presented observationally inferred melt rates using water property and velocity measurements collected within 100 m of the calving front. In each approach, then used the standard three-equation melt rate parameterization of to convert the modeled or observed water properties and velocities to an estimated submarine melt rate. There is good agreement between the melt rates estimated with MITgcm and with observations (, their Fig. 3). Here, we only consider the modeled melt rates (Fig. 5), which have the advantage of covering the whole extent of the glacier front (unlike rates inferred from observations, which have data gaps in and around the plumes). Figure 5(a) Time-mean melt rates along the glacier front as estimated from MITgcm, adapted from , their Fig. 3f. The bathymetry in the model (white) is based on that of . 
(b) Melt rates averaged inside the main discharge plume (blue) and outside of both plumes (red). There is large spatial variability in submarine melt rates along the glacier front (Fig. 5). Submarine melt rates are highest (in both a depth-averaged and maximum sense) within the two plumes in which the discharge of buoyant surface meltwater from beneath the glacier gives high water velocities. Outside of the two plumes melt rates are much smaller in a depth-averaged sense; however the lateral circulation excited by the plumes combines with warm surface waters to give high melt rates near the surface outside of the plumes (Fig. 5b; see also ). While these melt rate estimates represent the state of the art in terms of melt rate modeling, we stress that they are based on a melt rate parameterization that has not been confirmed by observations, especially for the case of a mostly vertical front of a tidewater glacier. The uncertainty associated with these melt rate estimates is further discussed in Sect. 5. ## 3.4 Calving frequency and distribution Calving events were detected over a 19-day period from 12 July to 31 July 2013, using two pressure sensor moorings located on the western and eastern banks of the fjord, each at a distance roughly 2 km from the nearest point along the glacier front (Fig. 6a). The dispersion of waves that are created by individual calving events can be inverted to estimate the distance between the mooring and the origin of the wave. Wave packets that are detected by both moorings can be used to triangulate the time and position of the corresponding calving event . For the present dataset, this method has been validated against a photography-derived calving record and good correspondence was observed (not shown). The study by provides a detailed description of the method. Figure 6(a) Spatial calving distribution as estimated from pressure sensor data; the shaded rectangle indicates the promontory. The inset shows a close-up of the glacier front and adjacent fjord, with the red rectangle outlining the region of interest and red stars indicating the location of the wave moorings. (b) Calving count along the glacier front, obtained as the total number of calving events detected within a 300 m running window along the glacier front (red bars, left axis). Also shown is an estimate for the relative calving volume, computed from the product of the frequency of calving events and the corresponding magnitudes of the detected waves (black line, right axes). Plume and surface dip locations are indicated as in previous figures. In total, 336 calving events were identified using this method over the period that both sensors were recording. Figure 6a shows the location and wave amplitude of the individual events. The calving frequency distribution along the glacier front is illustrated in Fig. 6b. A pronounced peak in frequency is found at the promontory, where shallow bathymetry causes the glacier to be elevated substantially beyond its isostatic height of flotation. With its high ice cliffs the promontory can be regarded as a region that is subject to a rather different calving regime than the rest of the glacier. For the main glacier, we observe a peak in calving activity at a distance x≈2400 m along the glacier front, near the concave bend in the glacier front. A second peak in calving activity is found around x≈4300 m. Both peaks appear slightly offset from the location of the two plumes. The calving activity is lowest at the northeastern edge of the terminus. 
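The calving count in Fig. 6b is described as the number of detected events falling within a 300 m running window along the glacier front. A minimal sketch of that binning is given below; the variable names, grid spacing, and placeholder event positions are ours, and the real positions would come from the wave-triangulation step described above.

```python
import numpy as np

def running_window_count(event_x, front_x, window=300.0):
    """Number of calving events within +/- window/2 of each along-front position (m)."""
    event_x = np.asarray(event_x, dtype=float)
    return np.array([np.sum(np.abs(event_x - x) <= window / 2.0) for x in front_x])

# Illustrative use with made-up positions (the observed record contains 336 events):
front_x = np.arange(0.0, 5000.0, 50.0)         # evaluation grid along the front, m
rng = np.random.default_rng(0)
event_x = rng.uniform(0.0, 5000.0, size=336)   # placeholder event positions, m
counts = running_window_count(event_x, front_x)
```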
Even though this dataset presents a rather accurate record of calving frequencies, it remains challenging to infer a total volume of calved ice . This is due to the different modes of calving (e.g., ice-cliff calving versus submarine calving), as well as the different shapes of calved ice blocks and the differing heights from which they fall (or depths from which they rise). Distinguishing between these events from the pressure sensor data is a difficult task and beyond the scope of this study. The pressure sensors do record an amplitude of the incoming wave packet associated with a given calving event. Crudely approximating that this amplitude is proportional to the size of the calved ice, we can estimate a relative calving volume (black curve in Fig. 6b). However, since, for example, a small cone-shaped ice block can act as a more efficient wave generator than a large flat piece of ice (Nicholas Pizzo, personal communication, 2018; ), it is difficult to ascertain a direct relation between wave amplitudes and calving volume. In what follows we therefore only consider the calving frequency record and will scale this record such that the resulting calving flux approximately closes the mass budget at the glacier front (see Sect. 4.2). Given the limitations of the data, we take such a scaling to be the most justifiable first-order approximation, supported by the rather uniform distribution of estimated calving event sizes along the glacier front. The scaling factor is chosen such that the mean calving volume is equal to the mean of the residual, i.e., $〈C〉=〈H\left(R+{v}_{\mathrm{i}}\right)-D\stackrel{\mathrm{‾}}{M}〉$, where 〈 〉 denotes the spatial mean along the glacier front. 4 Overall flux balance and spatial variability In what follows we consider the volume flux across the glacier front during the summer of 2013. We make the assumption that this flux was steady during the study period and ignore time dependencies of the individual terms in Eq. (1). To compare the different terms in the mass budget, we consider the retreat rate as computed from the two fronts measured on 9 and 31 July 2013 since this is almost the exact time window of the calving observations (12–31 July). For the advection term we use the July average over the years 2012–2014 since the July 2013 ice velocity fields have substantial data gaps at the glacier front. However, as discussed above, there is little interannual variability in vi over these years, so the 3-year mean likely gives a close approximation to the July 2013 velocity field. Front retreat and advective flux along the glacier front (i.e., the left-hand side of Eq. 1) are shown in Fig. 7a. The sum of ice advection and retreat is compared to the estimated melt fluxes in Fig. 7b. Figure 7c shows the calving flux as estimated from the observations (Sect. 3.4), compared to the residual C of the other three terms in Eq. (1), such that $C=H\left(R+{v}_{\mathrm{i}}\right)-D\stackrel{\mathrm{‾}}{M}$. Figure 7Flux balance along the glacier front. Dashed lines indicate uncertainties as discussed in the text. (a) The green line represents the July 2013 retreat rate and the blue line the advective ice flux. (b) Sum of retreat and advection (gray) and melt flux (orange). (c) Approximate closure of the volume flux budget along the glacier front. The black line shows the residual of advection plus retreat minus melting, while the red line shows the observational calving estimate as in (b). 
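To make the closure step concrete, a rough sketch of the residual calving term and the scaling of the relative calving record is shown below. The array names are hypothetical; `H` is frontal ice thickness, `R` the retreat rate, `v_i` the ice velocity, and `melt_flux` stands for the depth-integrated melt term D·M̄ in Eq. (1). The uniform profiles use round numbers of the order reported later in Sect. 4.

```python
import numpy as np

def residual_calving(H, R, v_i, melt_flux):
    """Calving flux per unit width required to close the budget: C = H*(R + v_i) - D*Mbar."""
    return H * (R + v_i) - melt_flux

def scale_calving_record(rel_counts, C_residual):
    """Scale a relative calving record so its along-front mean equals the mean residual <C>."""
    return rel_counts * np.mean(C_residual) / np.mean(rel_counts)

# Hypothetical along-front profiles (units: m, m/yr, m^2/yr):
H = np.full(100, 128.0)            # frontal thickness
R = np.full(100, 620.0)            # retreat rate
v_i = np.full(100, 780.0)          # ice advection
melt_flux = np.full(100, 0.1e5)    # depth-integrated melt
rel_counts = np.ones(100)          # relative calving record (arbitrary units)

C = residual_calving(H, R, v_i, melt_flux)           # ~1.7e5 m^2/yr with these numbers
calving_flux = scale_calving_record(rel_counts, C)
```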
Note that the calving flux has been scaled to approximately close the budget for the main part of the glacier. ## 4.1 High spatial variability along the glacier front A striking feature of almost all components of this multipartite dataset is their high spatial variability along the glacier front. Away from the margins, the ice thickness at the front ranges from thin (<40 m) sections near the northeast edge to ∼100 m along the promontory and up to 192 m near the main plume, with substantial variations throughout. Overall, we observe a mean thickness of 128 m with a variability of ±38 m (1 standard deviation). We find that the advective flux is the most uniform component; still it is notably suppressed at the promontory and highest near the outflow location of the main plume (Figs. 37a). The retreat rates are of comparable magnitude to the advective flux overall. However, the retreat rates are spatially extremely variable, in particular the observed July 2013 rates, which exhibit three regions of enhanced retreat, two of which are close to the two discharge plumes, with peaks at x=2400 and 4400 m (Figs. 4c, 7a). Averaged over longer time periods, the retreat rates become more uniform. Over shorter time periods retreat rates are more strongly influenced by individual calving events. The melting estimates feature two pronounced maxima at the plumes and are small everywhere else (Fig. 7b). The maximum melt flux value at the main plume (1.5×105 m2 yr−1) is slightly higher than the mean retreat and advective flux values (1.0×105 and 0.8×105 m2 yr−1, respectively). Outside the two plumes the mean melting flux (0.04×105 m2 yr−1) is an order of magnitude lower than inside the plume and than the other budget terms. Dividing these depth-integrated flux values by the average thickness (128 m), we obtain depth-averaged velocities for each term. These are ice advection of 780 m yr−1, retreat of 620 m yr−1, maximum melt rate at the main plume of 1200 m yr−1, and mean melt rate outside the plumes of 30 m yr−1. Calving frequencies are strongly enhanced at the promontory, which – given the reduced advection and retreat in this area – implies that calved pieces are in general smaller here. Since we are unable to adequately distinguish between the different calving sizes, the heightened calving activity at the promontory results in a large discrepancy between the computed residual C and the observationally estimated calving flux in that region (Fig. 7c). We may also be underestimating the advective flux at the promontory slightly since we only consider horizontal velocities, and the ice flow may have a non-negligible vertical component as the glacier rides onto the local sill. Even though calving frequencies are overall lower for the main part of the glacier, we observe two slight local maxima, slightly offset from the plumes (Fig. 7b). The lowest calving frequencies are found between the two plumes in the region farthest from both plumes. The two peaks in depth-averaged melt flux (Fig. 7b), co-located with the two discharge plumes, are just offset from the two maxima in frontal retreat and calving. ## 4.2 Spatially integrated mass budget Integrated along the main part of the glacier front we estimate an ice advection rate of 0.2±0.05 Gt yr−1 and a retreat rate of 0.3±0.03 Gt yr−1. This gives ∼0.5 Gt yr−1 as a best estimate for the total rate of ice loss. The uncertainty in ice advection corresponds to 1 standard deviation in the spread of mean July ice velocities. 
The uncertainty in the retreat rate is largely due to the somewhat arbitrary selection of “before” and “after” dates, and the resultant disproportionate impact of individual calving events. The error reported here is 1 standard deviation in the difference in retreat when choosing the frontal profiles of 28 June (instead of 9 July) as the before date or 22 August (instead of 31 July) as the after date. Integrating the estimated melt over the main glacier front gives a total melting flux of 0.03 Gt yr−1. This would suggest that ∼0.47 Gt yr−1 (or 94 %) of ablation occurs in the form of calving, thus implying that the glacier balances the ice flux almost exclusively through calving (except in the narrow regions at the discharge plumes). The lack of an ice mélange in the fjord and the anecdotal observation of limited calving are, however, at odds with this finding. This raises the question of whether the melt term – estimated using state-of-the-art parameterizations informed by observations very close to the ice front – is incorrect? This is discussed further in Sect. 5. While we have no direct measurement of calving volume, we can close the integrated mass budget by scaling the observed relative calving frequencies to give the required total calving flux of 0.47 Gt yr−1 (Fig. 7c). This corresponds to a mean calving flux of 1.7×105 m2 yr−1 along the glacier front (compared to a mean melting flux of 0.1×105 m2 yr−1). Again dividing by the average thickness, we estimate 1300 m yr−1 of ice loss due to calving, compared 80 m yr−1 of melting. ## 4.3 Variations in the vertical glacier front profile A final piece of observational evidence that may help in the interpretation of the results above is provided by point cloud images of the glacier front profile. These were collected during the 2013 field season using an autonomous surface vehicle, the Woods Hole Oceanographic Institution “Jetyak” . Among other instruments, the Jetyak carried a multibeam sonar that was mounted sideways facing the glacier, which collected three-dimensional maps of the underwater portion of the glacier front. Further details of the Jetyak's operation and data can be found in . Here, we highlight several characteristic frontal profiles. Figure 8 shows a point-cloud transect of the northeastern flank of the glacier, as well as four vertical line profiles at different locations along the transect. Figure 8Multibeam sonar data of glacier front from 26 July 2013. (a) Map illustrating the location of the multibeam cross sections A–D and the two plumes (, ). (b) The 3-D point-cloud transect showing a part of the eastern side of the glacier (distance along glacier front,  4000–4800 m). Data are color-coded by depth below sea level. Indicated are the locations of the four cross sections A–D shown in (c, d). (c) Cross sections A and B near subglacial plume, exhibiting characteristic undercutting. (d) Cross sections C and D away from plume, showing submarine protrusions without undercutting. The first two profiles (A and B) are placed near the secondary plume. Both profiles are marked by two features: (i) a sloped upper 20–25 m, which results in the above-water cliff of the glacier being set back by 10–20 m, relative to the most ocean-ward point of the glacier face, and (ii) up to 10 m of undercutting below 40 m in depth, such that the protrusion beyond the above-water cliff is most pronounced at depths of 20–40 m, and the ice is substantially eroded at greater depths. 
This is likely caused by the rising subglacial plume, which leads to preferential melt of the deeper parts of the glacier front . Note that the high turbidity of water within the main plume prevented the Jetyak from surveying the shape of the glacier front occupied by that plume. Profiles C and D, which are located far from the plume, also feature said underwater ice protrusion; however, they show no signs of undercutting. The presence of such net-buoyant underwater protrusions and their potential impact on calving has been studied previously and will be discussed further in the next section. We note that the bathymetry reaches depths of around 130 m for this part of the glacier and the bottom ∼50 m is unfortunately not captured by the multibeam sonar. However, the profiles located near the melt (A and B) can be expected to be further undercut below the observed range , while profiles C and D likely do not feature such undercutting. 5 The role of melting in the frontal mass budget ## 5.1 Uncertainty in melt rate estimates The finding that calving appears to make up almost the entire loss of ice is somewhat unexpected, in particular since during the study period the glacier's calving activity was limited to relatively small events, and the fjord was by-and-large devoid of icebergs. Furthermore, the melt rates used here are roughly double that of what previous estimates would have been since we account for additional melt that arises from the recirculation of warm ambient surface waters . However, melting supposedly only makes up ∼6 % of the total ablation. Given the lack of observational verification of the current melt rate parameterization, it is worth considering end-member melt rate scenarios. A key parameter in the melt rate parameterization is the thermal Stanton number, which directly controls the rate of transfer of heat from the ocean to the ice. Its canonical value is based largely on field observations at a cold Antarctic ice shelf and there are as yet no strong observational constraints from tidewater glaciers. Furthermore, have recently argued for a larger Stanton number based on numerical simulations. We thus consider lower and upper bounds for melt rates in which the thermal Stanton number is respectively reduced and increased by 50 % (Fig. 7b, c). To obtain the upper-bound melt rate scenario, we also increase the outside-of-plume water velocity that enters the melt rate parameterization. While vertical velocities inside of plumes might be considered reliable based on well-validated plume theory , one could argue that the mean modeled outside-of-plume velocities may be too small for a number of reasons, including coarse model resolution and the lack of tides, surface waves, and calving events that may excite water motion. These factors might crudely be taken into account by placing an additional velocity in the melt rate parameterization. Such an approach has some precedent with the inclusion of tides beneath ice shelves . In the upper-bound melt rate scenario, we thus add 0.2 m s−1 to the outside-of-plume water velocity entering the melt rate parameterization. In the lower-bound melt rate scenario, melting accounts for an even smaller fraction of mass loss than in our best estimate but is still significant inside the plumes. In the upper-bound melt rate estimate, melting accounts for a significant proportion of mass loss both inside and outside of the plumes (Fig. 7b). 
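To make this sensitivity concrete, the sketch below uses a heavily simplified, linearized melt law of the form m ≈ (ρ_w c_w / ρ_i L) · St · U · (T_w − T_f). This is not the full three-equation parameterization used in the study, and the transfer coefficient, velocity, and thermal forcing values are purely illustrative; the point is only how the ±50 % change in the Stanton number and the added 0.2 m s−1 propagate into the melt rate.

```python
# Linearized submarine-melt sensitivity sketch (all coefficient values illustrative only).
RHO_W, RHO_I = 1027.0, 917.0     # seawater and ice densities, kg m^-3
C_W, L_I = 3980.0, 3.34e5        # seawater heat capacity (J kg^-1 K^-1), latent heat (J kg^-1)
SECONDS_PER_YEAR = 3.15e7

def melt_rate(stanton, velocity, thermal_forcing):
    """Melt rate (m s^-1) from a linear transfer law: heat flux = rho_w c_w St U (T_w - T_f)."""
    return (RHO_W * C_W / (RHO_I * L_I)) * stanton * velocity * thermal_forcing

st0 = 6e-4                       # placeholder thermal Stanton number (velocity-independent form)
u0, dT = 0.03, 3.0               # outside-of-plume speed (m/s) and thermal forcing (K), assumed

best  = melt_rate(st0, u0, dT) * SECONDS_PER_YEAR              # a few tens of m/yr here
lower = melt_rate(0.5 * st0, u0, dT) * SECONDS_PER_YEAR        # Stanton number reduced by 50 %
upper = melt_rate(1.5 * st0, u0 + 0.2, dT) * SECONDS_PER_YEAR  # +50 % and 0.2 m/s added velocity
```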
Clearly this is an observationally under-constrained discussion, and we emphasize that these upper and lower bounds are very rough error estimates as the state of understanding of submarine melting does not yet permit rigorous quantitative assessment of uncertainties. However these bounds do show that through reasonable modification of the melt rate parameterization, melting can account for a larger fraction of the ice loss than reported in our best estimate. Even by introducing these uncertainties, however, the analysis presented still indicates that calving is the dominant mode of mass loss for most of the glacier front (except at the localized melt plumes). ## 5.2 Impact of melting on calving In addition to balancing the frontal ice flux, the data allow us to examine how melting and calving may be interlinked. Specifically, one consequence of melting being focused on narrow regions is that it can lead to sharp incisions in the glacier front, which in turn may enhance calving. found that fjord-scale circulations driven by plumes can result in enhanced submarine melting near the fjord surface in regions distant from the plume Fig. 5b). This near-surface melting has in turn been suggested as a potential driver for large calving events at glacier fronts that are floating or close to floating : preferential near-surface melting at the glacier front leads to a horizontal melt incision near the water surface, which in turn causes erosion of the above-water ice cliff. As a result, the front of the glacier is left with an underwater protrusion (or “ice foot”) as in profiles C and D of Fig. 8. This frontal profile is statically unstable since the ice foot is net buoyant and exerts bending stresses on the glacier . Calving events occur when such stresses surpass the yield strength of the terminus. It is likely that profiles C and D represent sizable ice feet that exert bending stresses that enhance the calving flux in this region. Furthermore, it is possible that the regions adjacent to the meltwater plumes are more prone to calving since the high melt rates at the plumes cause vertical incisions in the glacier front . These in turn would reduce the transverse (i.e., along-front) stability of the terminus and trigger further calving. A surface expression of such a vertical incision in the glacier front can be found near the main plume in the profile of August 2012 (Fig. S2). Considering the particular geometry of Saqqarliup, as the two main plumes drive rapid melt near the two edges of the main part of the glacier, this may cause the entire front between the plumes to be more prone to calving, in particular since we have found this region to be close to (or at) flotation. In summary, from the observations presented in the previous sections, we propose that there are two distinct regimes driving ablation at Saqqarliup: (a) melting-dominated ablation in spatially confined regions near the discharge plumes, and (b) calving-dominated ablation in the regions away from the plumes (which may be enhanced by near-surface horizontal melt incisions). This is further supported by the local minima in calving activity at the location of the two discharge plumes (Fig. 7b). The two ablation regimes are summarized in the schematic of Fig. 9. Figure 9Schematic of two distinct ablation regimes. (a) Melt-dominated regime: the vertical structure of melting due to a rising subglacial discharge plume that entrains warm ambient water results in substantial undercutting of the glacier front (as in profiles A and B in Fig. 
8). These front profiles likely do not cause large calving events, with calving mostly confined to the smaller above-water cliff. Profiles are drawn for an earlier time t1 and a later time t2 by which the glacier has retreated mostly due to melting. (b) Calving-dominated regime: here the growth of sizable and buoyant underwater feet (as in profiles C and D in Fig. 8) can accelerate calving, with the melt contribution confined to a small region near the water surface. Again, profiles are shown at t1 and t2 (pre- and post-calving), as part of the "footloose" calving cycle. 6 Conclusions We have presented a multifaceted dataset of a Greenland tidewater glacier and its surroundings. The unique dataset enables us to investigate the individual terms that determine the flux balance along the glacier front. We find that the individual terms that comprise the glacier's frontal mass budget are marked by high spatial variability. Ice velocities feature maxima that coincide with troughs in the bathymetry and locations of subglacial discharge plumes. The retreat rates are spatially particularly variable when calculated over shorter periods of time (days to weeks) and are likely dominated by somewhat stochastic calving events over such short timescales. Estimated submarine melt rates from numerical modeling of fjord circulation show rapid melting within the two discharge plumes and more widely at the fjord surface but limited melting elsewhere. If we use the inferred melt rate to scale the calving flux, we find that 94 % of the frontal mass loss of this glacier must occur through calving. This finding appears to be at odds with the observation of limited calving and the lack of icebergs in the fjord. We suggest that the numerical model – even though constrained by direct measurements and using the standard melt parameterization – may underestimate melting outside of the plumes, indicating that current melt models for tidewater glacier fronts may need to be reviewed and should be treated with caution. The spatial variability of the observed processes suggests the presence of two distinct ablation regimes: a melting-dominated regime near the discharge plumes and a calving-dominated regime away from the plumes. We argue that melting, through its horizontal and vertical variability, may play an important role in driving calving, thus having a dynamic effect out of proportion to the fraction of mass lost by melting. If calving is indeed dependent on the localized melt rates, this may have far-reaching implications for the overall stability of the glacier. Understanding the impact of these spatially highly variable processes on ice sheet dynamics should thus be a priority in the study of ice–ocean interactions. Data availability. Temperature and salinity profiles collected near the glacier front are available at https://doi.org/10.18739/A2B853H78 (Straneo, 2019). Water pressure data used to detect calving events are available at https://doi.org/10.18739/A24Q7QP6V. ADCP-derived water velocities near the terminus are available at https://data.nodc.noaa.gov/cgi-bin/iso?id=gov.noaa.nodc:0177127. InSAR-derived surface ice velocities are available at https://nsidc.org/data/nsidc-0478/versions/1. Supplement. Author contributions. TW led the analysis and integrated the data. FS, SD, and CR planned and supervised the project. FS, SD, CR, LS, and HS carried out the field work. DS developed the melt model and performed the melt simulations.
TW, DS, and FS drafted the paper. All authors discussed the results and commented on the paper. Competing interests Competing interests. The authors declare that they have no conflict of interest. Acknowledgements Acknowledgements. We acknowledge support from the Woods Hole Oceanographic Institution Ocean and Climate Change Institute Arctic Research Initiative, and NSF OPP-1418256 and OPP-1743693, to Fiamma Straneo and Sarah B. Das. Till J. W. Wagner was further supported by NSF OPP award 1744835. Geospatial support for this work was provided by the Polar Geospatial Center under NSF OPP awards 1043681 and 1559691. DEMs provided by the Polar Geospatial Center under NSF OPP awards 1043681, 1559691, and 1542736. Donald A. Slater acknowledges the support of Scottish Alliance for Geoscience, Environment and Society early-career research exchange funding. Edited by: Benjamin Smith Reviewed by: two anonymous referees References Åström, J. A., Vallot, D., Schäfer, M., Welty, E. Z., O'Neel, S., Bartholomaus, T. C., Liu, Y., Riikilä, T. I., Zwinger, T., Timonen, J., and Moore, J. C.: Termini of calving glaciers as self-organized critical systems, Nat. Geosci., 7, 874–878, 2014. a Benn, D. I., Warren, C. R., and Mottram, R. H.: Calving processes and the dynamics of calving glaciers, Earth-Sci. Rev., 82, 143–179, 2007. a Benn, D. I., Cowton, T., Todd, J., and Luckman, A.: Glacier Calving in Greenland, Current Climate Change Reports, 3, 282–290, 2017. a, b Bühler, O.: Impulsive fluid forcing and water strider locomotion, J. Fluid Mech., 573, 211–236, 2007. a Carr, J. R., Stokes, C. R., and Vieli, A.: Threefold increase in marine-terminating outlet glacier retreat rates across the Atlantic Arctic: 1992–2010, Ann. Glaciol., 58, 72–91, 2017. a Carroll, D., Sutherland, D. A., Shroyer, E. L., Nash, J. D., Catania, G. A., and Stearns, L. A.: Modeling turbulent subglacial meltwater plumes: implications for fjord-scale buoyancy-driven circulation, J. Phys. Oceanogr., 45, 2169–2185, 2015. a Carroll, D., Sutherland, D. A., Hudson, B., Moon, T., Catania, G. A., Shroyer, E. L., Nash, J. D., Bartholomaus, T. C., Felikson, D., Stearns, L. A., Noel, B. P. Y., and van den Broeke, M. R.: The impact of glacier geometry on meltwater plume structure and submarine melt in Greenland fjords, Geophys. Res. Lett., 43, 9739–9748, 2016. a Cowton, T. R., Slater, D. A., Sole, A. J., Goldberg, D. N., and Nienow, P.: Modeling the impact of glacial runoff on fjord circulation and submarine melt rate using a new subgrid-scale parameterization for glacial plumes, J. Geophys. Res.-Oceans, 120, 796–812, 2015. a Ezhova, E., Cenedese, C., and Brandt, L.: Dynamics of Three-Dimensional Turbulent Wall Plumes and Implications for Estimates of Submarine Glacier Melting, J. Phys. Oceanogr., 48, 1941–1950, 2018. a Fried, M. J., Catania, G. A., Bartholomaus, T. C., Duncan, D., Davis, M., Stearns, L. A., Nash, J., Shroyer, E., and Sutherland, D.: Distributed subglacial discharge drives significant submarine melt at a Greenland tidewater glacier, Geophys. Res. Lett., 42, 9328–9336, 2015. a, b, c, d, e Hill, E. A., Carr, J. R., and Stokes, C. R.: A review of recent changes in major marine-terminating outlet glaciers in northern greenland, Front. Earth Sci., 4, 111, https://doi.org/10.3389/feart.2016.00111, 2017. a Holland, D. M. and Jenkins, A.: Modeling thermodynamic ice-ocean interactions at the base of an ice shelf, J. Phys. Oceanogr., 29, 1787–1800, 1999. a, b Holland, D. M., Thomas, R. H., De Young, B., Ribergaard, M. 
H., and Lyberth, B.: Acceleration of Jakobshavn Isbrae triggered by warm subsurface ocean waters, Nat. Geosci., 1, 659–664, 2008. a Howat, I. M., Joughin, I., and Scambos, T. A.: Rapid changes in ice discharge from Greenland outlet glaciers, Science, 315, 1559–1561, 2007. a Howat, I. M., Joughin, I., Fahnestock, M., Smith, B. E., and Scambos, T. A.: Synchronous retreat and acceleration of southeast Greenland outlet glaciers 2000–06: ice dynamics and coupling to climate, J. Glaciol., 54, 646–660, 2008. a Jackson, R. H. and Straneo, F.: Heat, salt, and freshwater budgets for a glacial fjord in Greenland, J. Phys. Oceanogr., 46, 2735–2768, 2016. a Jenkins, A.: Convection-Driven Melting near the Grounding Lines of Ice Shelves and Tidewater Glaciers, J. Phys. Oceanogr., 41, 2279–2294, 2011. a Jenkins, A. and Nicholls, K.: Observation and parameterization of ablation at the base of Ronne Ice Shelf, Antarctica, J. Phys. Oceanogr., 40, 2298–2313, 2010. a Jensen, T. S., Box, J. E., and Hvidberg, C. S.: A sensitivity study of annual area change for Greenland ice sheet marine terminating outlet glaciers: 1999–2013, J. Glaciol., 62, 72–81, 2016. a Joughin, I., Smith, B., Howat, I., and Scambos, T.: MEaSUREs Greenland Ice Sheet Velocity Map from InSAR Data, Version 1, Boulder, Colorado USA, NASA National Snow and Ice Data Center Distributed Active Archive Center, https://doi.org/10.5067/MEASURES/CRYOSPHERE/nsidc-0478.001 (last access: 8 March 2019), 2010. a, b Joughin, I., Alley, R. B., and Holland, D. M.: Ice-Sheet Response to Oceanic Forcing, Science, 338, 1172–1176, 2012. a Kimball, P., Bailey, J., Das, S., Geyer, R., Harrison, T., Kunz, C., Manganini, K., Mankoff, K., Samuelson, K., Sayre-McCord, T., Straneo, F., Traykovski, P., and Singh, H.: The WHOI Jetyak: An autonomous surface vehicle for oceanographic research in shallow or dangerous waters, in: 2014 IEEE/OES Autonomous Underwater Vehicles (AUV), Oxford, MS, USA, 6–9 October 2014, IEEE, 1–7, https://doi.org/10.1109/AUV.2014.7054430, 2014. a, b, c Kjeldsen, K. K., Khan, S. A., Bjørk, A. A., Nielsen, K., and Mouginot, J.: Ice-dammed lake drainage in west Greenland: Drainage pattern and implications on ice flow and bedrock motion, Geophys. Res. Lett., 44, 7320–7327, 2017. a, b, c Luckman, A., Benn, D. I., Cottier, F., Bevan, S., Nilsen, F., and Inall, M.: Calving rates at tidewater glaciers vary strongly with ocean temperature, Nat. Commun., 6, 8566, https://doi.org/10.1038/ncomms9566, 2015. a Mankoff, K. D., Straneo, F., Cenedese, C., Das, S. B., Richards, C. G., and Singh, H.: Structure and dynamics of a subglacial discharge plume in a Greenlandic fjord, J. Geophys. Res.-Oceans, 121, 8670–8688, 2016. a, b, c Meier, M. F. and Post, A.: Fast tidewater glaciers, J. Geophys. Res., 92, 9051–9058, 1987. a Minowa, M., Podolskiy, E. A., Sugiyama, S., Sakakibara, D., and Skvarca, P.: Glacier calving observed with time-lapse imagery and tsunami waves at Glaciar Perito Moreno, Patagonia, J. Glaciol., 21, 1–15, 2018. a, b, c, d Moon, T., Joughin, I., Smith, B., and Howat, I.: 21st-Century Evolution of Greenland Outlet Glacier Velocities, Science, 336, 576–578, 2012. a Morton, B. R., Sir Geoffrey Taylor, F. R. S., and Turner, J. S.: Turbulent gravitational convection from maintained and instantaneous sources, P. Roy. Soc. Lond. A Mat., 234, 1–23, 1956. a Nick, F. M., Vieli, A., Howat, I. M., and Joughin, I.: Large-scale changes in Greenland outlet glacier dynamics triggered at the terminus, Nat. Geosci., 2, 110–114, 2009. a Nick, F. 
M., Vieli, A., Andersen, M. L., Joughin, I., Payne, A., Edwards, T. L., Pattyn, F., and van de Wal, R. S. W.: Future sea-level rise from Greenland's main outlet glaciers in a warming climate, Nature, 497, 235–238, 2013. a Rignot, E. and Thomas, R. H.: Mass balance of polar ice sheets, Science, 297, 1502–1506, 2002. a Rignot, E., Xu, Y., Menemenlis, D., Mouginot, J., Scheuchl, B., Li, X., Morlighem, M., Seroussi, H., van den Broeke, M., Fenty, I., Cai, C., An, L., and de Fleurian, B.: Modeling of ocean-induced ice melt rates of five west Greenland glaciers over the past two decades, Geophys. Res. Lett., 43, 6374–6382, 2016. a Robertson, C. M., Benn, D. I., Brook, M. S., Fuller, I. C., and Holt, K. A.: Subaqueous calving margin morphology at Mueller, Hooker and Tasman glaciers in Aoraki/Mount Cook National Park, New Zealand, J. Glaciol., 58, 1037–1046, 2012. a Sciascia, R., Straneo, F., Cenedese, C., and Heimbach, P.: Seasonal variability of submarine melt rate and circulation in an East Greenland fjord, J. Geophys. Res.-Oceans, 118, 2492–2506, 2013. a Silva, T. A. M., Bigg, G. R., and Nicholls, K. W.: Contribution of giant icebergs to the Southern Ocean freshwater flux, J. Geophys. Res., 111, C03004, https://doi.org/10.1029/2004JC002843, 2006. a Slater, D. A., Goldberg, D. N., Nienow, P. W., and Cowton, T. R.: Scalings for submarine melting at tidewater glaciers from buoyant plume theory, J. Phys. Oceanogr., 46, 1839–1855, 2016. a Slater, D. A., Nienow, P. W., Goldberg, D. N., Cowton, T. R., and Sole, A. J.: A model for tidewater glacier undercutting by submarine melting, Geophys. Res. Lett., 44, 2360–2368, 2017. a Slater, D. A., Straneo, F., Das, S. B., Richards, C. G., Wagner, T. J. W., and Nienow, P. W.: Localized Plumes Drive Front-Wide Ocean Melting of A Greenlandic Tidewater Glacier, Geophys. Res. Lett., 45, 12350–12358, 2018. a, b, c, d, e, f, g, h, i Stevens, L. A., Straneo, F., Das, S. B., Plueddemann, A. J., Kukulya, A. L., and Morlighem, M.: Linking glacially modified waters to catchment-scale subglacial discharge using autonomous underwater vehicle observations, The Cryosphere, 10, 417–432, https://doi.org/10.5194/tc-10-417-2016, 2016. a, b, c, d, e, f Straneo, F.: Temperature and salinity profiles adjacent to a tidewater glacier in Sarqardleq Fjord, West Greenland, collected during July 2013, Arctic Data Center, https://doi.org/10.18739/A2B853H78., 2019. a Straneo, F. and Cenedese, C.: The Dynamics of Greenland's Glacial Fjords and Their Role in Climate, Annu. Rev. Mar. Sci., 7, 89–112, 2015. a Straneo, F. and Richards, C.: Detecting Glacier Calving Events from Ocean Waves and Underwater Acoustics, Saqardleq fjord, West Greenland 2013, Arctic Data Center, https://doi.org/10.18739/A24Q7QP6V, 2018. a Straneo, F., Richards, C., and Holte, J.: Eastward and northward components of ocean current profiles from ADCP taken from small boat in Sarqardleq Fjord adjacent to a tidewater glacier, West Greenland from 2013-07-25 to 2013-07-27 (NCEI Accession 0177127), Version 1.1, NOAA National Centers for Environmental Information, dataset, last access: 8 March 2019, 2018. a Sugiyama, S., Minowa, M., and Schaefer, M.: Underwater ice terrace observed at the front of Glaciar Grey, a freshwater calving glacier in Patagonia, Geophys. Res. Lett., 46, https://doi.org/10.1029/2018GL081441, online first, 2019. a Veitch, S. A. and Nettles, M.: Spatial and temporal variations in Greenland glacial-earthquake activity, 1993–2010, J. Geophys. Res., 117, F04007, https://doi.org/10.1029/2012JF002412, 2012. 
a Vieli, A. and Nick, F. M.: Understanding and Modelling Rapid Dynamic Changes of Tidewater Outlet Glaciers: Issues and Implications, Surv. Geophys., 32, 437–458, 2011. a Wagner, T. J. W., Wadhams, P., Bates, R., Elosegui, P., Stern, A., Vella, D., Abrahamsen, P., Crawford, A., and Nicholls, K. W.: The “footloose” mechanism: Iceberg decay from hydrostatic stresses, Geophys. Res. Lett., 41, 5522–5529, 2014.  a, b, c Wagner, T. J. W., James, T. D., Murray, T., and Vella, D.: On the role of buoyant flexure in glacier calving, Geophys. Res. Lett., 43, 232–240A, 2016. a, b Wilson, N., Straneo, F., and Heimbach, P.: Satellite-derived submarine melt rates and mass balance (2011–2015) for Greenland's largest remaining ice tongues, The Cryosphere, 11, 2773–2782, https://doi.org/10.5194/tc-11-2773-2017, 2017. a Xu, Y., Rignot, E., Menemenlis, D., and Koppes, M.: Numerical experiments on subaqueous melting of Greenland tidewater glaciers in response to ocean warming and enhanced subglacial discharge, Ann. Glaciol., 53, 229–234, 2012. a
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8125157356262207, "perplexity": 4198.51890874463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144058.43/warc/CC-MAIN-20200219061325-20200219091325-00017.warc.gz"}
http://ab-initio.mit.edu/wiki/index.php?title=Meep_Tutorial/Multilevel-atomic_susceptibility&diff=prev&oldid=4941
# Meep Tutorial/Multilevel-atomic susceptibility

Meep 1.4 introduced a feature to model saturable absorption/gain via multilevel-atomic susceptibility. This is based on a generalization of the Maxwell-Bloch equations which involve the interaction of a quantized system having an arbitrary number of levels with the electromagnetic fields. Meep's implementation is similar to that described in S.-L. Chua et al (eqns. 1-5). We will demonstrate this feature by computing the lasing thresholds of a two-level, multimode cavity in 1d similar to the example used in A. Cerjan et al (Fig. 2). The cavity consists of a high-index medium with a perfect-metallic mirror on one end and an abrupt termination in air on the other. We will specify an initial population density for the ground state of the two-level system. The field within the cavity is initialized to arbitrary non-zero values and a fictitious source is used to pump the cavity at a fixed rate. The fields are time stepped until reaching steady state. Near the end of the time stepping, we output the electric field at the center of the cavity and then, in post processing, compute its Fourier transform to obtain the spectra. The simulation script is as follows.

```
(set-param! resolution 1000)

(define-param ncav 1.5)   ; cavity refractive index
(define-param Lcav 1)     ; cavity length
(define-param dpad 1)     ; padding between cavity and PML (assumed value; needed for sz below)
(define-param dpml 1)     ; PML thickness
(define-param sz (+ Lcav dpad dpml))
(set! geometry-lattice (make lattice (size no-size no-size sz)))
(set! dimensions 1)
(set! pml-layers (list (make pml (thickness dpml) (side High))))

(define-param freq-21 (/ 40 (* 2 pi)))    ; emission frequency (units of 2\pi c/a)
(define-param gamma-21 (/ 4 (* 2 pi)))    ; emission linewidth (units of 2\pi c/a)
(define-param sigma-21 8e-23)             ; dipole coupling strength
(set! sigma-21 (/ sigma-21 (sqr freq-21)))
(define-param rate-21 0.005)              ; non-radiative rate (units of c/a)
(define-param N0 5e23)                    ; initial population density of ground state
(define-param Rp 0)                       ; pumping rate of ground to excited state

(define two-level (make medium (index ncav)
  (E-susceptibilities
    (make multilevel-atom (sigma 1)
      (transitions
        (make transition (from-level 1) (to-level 2) (pumping-rate Rp)
              (frequency freq-21) (gamma gamma-21) (sigma sigma-21))
        (make transition (from-level 2) (to-level 1) (transition-rate rate-21)))
      (initial-populations N0)))))

(set! geometry (list (make block (center 0 0 (+ (* -0.5 sz) (* 0.5 Lcav)))
                     (size infinity infinity Lcav) (material two-level))))

(init-fields)
(meep-fields-initialize-field fields Ex
  (lambda (p) (if (= (vector3-z p) (+ (* -0.5 sz) (* 0.5 Lcav))) 1 0)))

(define print-field (lambda ()
  (print "field:, " (meep-time) ", "
         (real-part (get-field-point Ex (vector3 0 0 (+ (* -0.5 sz) (* 0.5 Lcav))))) "\n")))

(define-param endt 30000)
(run-until endt (after-time (- endt 250) print-field))
```

Definition of the two-level medium involves the `multilevel-atom` sub-class of the `E-susceptibilities` material type. Each radiative and non-radiative `transition` is specified separately. The atomic resonance used to drive absorption and gain is based on a damped harmonic oscillator described in Materials in Meep with the same parameters. Note that the `sigma` of any given transition is multiplied by the `sigma` of its sub-class definition (1 in this example). `transition-rate` defines the rate of non-radiative decay and `pumping-rate` refers to pumping of the ground to the excited state. It is important to specify the `from-level` and `to-level` parameters correctly, otherwise the results will be undefined.

The choice of these parameters requires some care. For example, choosing a pumping rate that lies far beyond threshold will cause large inversion, which is not physical, and produce meaningless results. The simulation time is also important when operating near the threshold of a particular mode: the fields contain relaxation oscillations and require sufficient time to reach steady state. We also need to choose a small timestep to ensure that the data are smooth and continuous, which requires a large resolution. A large resolution is also necessary to ensure stability when the strength of the source driving the polarization, which depends on `sigma` and `N0`, is large (this also applies to a linear absorber).

The spectra at a pumping rate of 0.0073 are shown below. There are four modes present: two are lasing while the other two are slightly below threshold. The frequency of the passive cavity modes can be computed analytically using the equation ω_cav = (m + 0.5)π / (n_cav L_cav), where n_cav and L_cav are the cavity index and length, and m is an integer. The four modes in the figure correspond to m = 17–20, which are labelled. In the continuum limit, these modes would appear as Dirac delta functions in the spectra. The discretized model, however, produces peaks with finite width. Thus, we need to integrate a fixed number of points around each peak to smooth out the modal intensity.

For this simple two-level cavity, the thresholds can be computed analytically using the steady-state ab-initio laser theory (SALT) developed by Prof. A. Douglas Stone and his group at Yale. Based on the default parameters in the script, two modes, m=18 and 19, should begin to lase very close to the relaxation rate. We plot the variation of the modal intensity with pumping rate. The two modes predicted by SALT to have the lowest thresholds are indeed the first to begin lasing. Note that the slopes of each curve for the two lasing modes are decreasing with increasing pumping rate. This gain saturation occurs because the onset of lasing from additional modes means there is less gain available to the other modes. The modal intensities reach an asymptote in the limit of large pumping rates. We can convert Meep's dimensionless parameters into real units by specifying the units of the cavity length Lcav and then multiplying the rate terms by Lcav / c.
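The Fourier-transform step is left to post-processing. A minimal sketch in Python is shown below; it assumes the `field:` lines printed by `print-field` have been extracted to a text file (for example with `grep field: meep.out > field.csv`), and the file name is ours.

```python
import numpy as np

# Each line printed by print-field has the form: "field:, <time>, <Ex at cavity center>"
data = np.genfromtxt("field.csv", delimiter=",", usecols=(1, 2))
t, ex = data[:, 0], data[:, 1]

dt = t[1] - t[0]                          # sampling interval in Meep time units (a/c)
power = np.abs(np.fft.rfft(ex)) ** 2      # power spectrum of the field at the cavity center
freqs = np.fft.rfftfreq(len(ex), d=dt)    # frequencies in Meep units (c/a)

# Peaks should appear near the passive-cavity frequencies (m + 0.5) / (2 * ncav * Lcav).
```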
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9277218580245972, "perplexity": 1468.153279142395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509170.2/warc/CC-MAIN-20181015100606-20181015122106-00177.warc.gz"}
http://www.newworldencyclopedia.org/entry/Hyperbola
# Hyperbola Not to be confused with hyperbole. Graph of a hyperbola (red). The two foci are B1 and B2. As point P moves along an arm of the hyperbola, the difference between the distances d1 and d2 remains a constant. In mathematics, a hyperbola (from the Greek word ὑπερβολή, literally meaning "overshooting" or "excess") is a geometric figure such that the difference between the distances from any point on the figure to two fixed points is a constant. The two fixed points are called foci (plural of focus). This figure consists of two disconnected curves called its arms or branches that separate the foci. The bend points of the arms of a hyperbola are called the vertices (plural of vertex). A hyperbola is a type of conic section. Thus a second definition of a hyperbola is that it is the figure obtained by the intersection between a right circular conical surface and a plane that cuts through both halves of the cone. ## Contents A third definition is that a hyperbola is the locus of points for which the ratio of the distances to one focus and to a line (called the directrix) is a constant greater than one. This constant is the eccentricity of the hyperbola. Hyperbola (shaded green) in terms of its conic section. ## Definitions of terms and properties The point that lies halfway between the two foci is called the center of the hyperbola. The major axis runs through the center of the hyperbola and intersects both arms at their vertices. The foci lie on the extension of the major axis of the hyperbola. The minor axis is a straight line that runs through the center of the hyperbola and is perpendicular to the major axis. The distance from the center of the hyperbola to the vertex of the nearest branch is known as the semi-major axis of the hyperbola. If a point P moves along an arm of the hyperbola and the distances of that point from the two foci are called d1 and d2, the difference between d1 and d2 remains a constant. This constant is equal to two times a, where a is the semi-major axis of the hyperbola. At large distances from the foci, the hyperbola begins to approximate two lines, known as asymptotes. The asymptotes cross at the center of the hyperbola and have slope $\pm \frac{b}{a}$ for an East-West opening hyperbola or $\pm \frac{a}{b}$ for a North-South opening hyperbola. A hyperbola has the property that a ray originating at one of the foci is reflected in such a way as to appear to have originated at the other focus. Also, if rays are directed toward one focus from the exterior of the hyperbola, they will be reflected toward the other focus. ## Special cases Conjugate unit rectangular hyperbolas. A special case of the hyperbola is the equilateral or rectangular hyperbola, in which the asymptotes intersect at right angles. The rectangular hyperbola with the coordinate axes as its asymptotes is given by the equation xy=c, where c is a constant. Just as the sine and cosine functions give a parametric equation for the ellipse, so the hyperbolic sine and hyperbolic cosine give a parametric equation for the hyperbola. If on the hyperbola equation one switches x and y, the conjugate hyperbola is obtained. A hyperbola and its conjugate have the same asymptotes. ## Equations Algebraically, a hyperbola is a curve in the Cartesian plane defined by an equation of the form Ax2 + Bxy + Cy2 + Dx + Ey + F = 0 such that B2 > 4AC, where all of the coefficients are real, and where more than one solution, defining a pair of points (x, y) on the hyperbola, exists. 
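As a quick check of the algebraic condition just stated, the sign of B² − 4AC classifies the (non-degenerate) conic; a small sketch with arbitrary example coefficients:

```python
def classify_conic(A, B, C):
    """Classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by its discriminant (degenerate cases ignored)."""
    disc = B ** 2 - 4 * A * C
    if disc > 0:
        return "hyperbola"
    if disc == 0:
        return "parabola"
    return "ellipse"

print(classify_conic(1, 0, -1))   # x^2 - y^2 = 1           -> hyperbola
print(classify_conic(0, 1, 0))    # xy = c (rectangular)     -> hyperbola
print(classify_conic(1, 0, 1))    # x^2 + y^2 = 1 (circle)   -> ellipse
```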
### Cartesian East-west opening hyperbola centered at (h,k): $\frac{\left( x-h \right)^2}{a^2} - \frac{\left( y-k \right)^2}{b^2} = 1$ North-south opening hyperbola centered at (h,k): $\frac{\left( y-k \right)^2}{a^2} - \frac{\left( x-h \right)^2}{b^2} = 1$ In both formulas, a is the semi-major axis (half the distance between the two arms of the hyperbola measured along the major axis), and b is the semi-minor axis. If one forms a rectangle with vertices on the asymptotes and two sides that are tangent to the hyperbola, the length of the sides tangent to the hyperbola are 2b in length while the sides that run parallel to the line between the foci (the major axis) are 2a in length. Note that b may be larger than a. If one calculates the distance from any point on the hyperbola to each focus, the absolute value of the difference of those two distances is always 2a. The eccentricity is given by $e = \sqrt{1+\frac{b^2}{a^2}}$ The foci for an east-west opening hyperbola are given by $\left(h\pm c, k\right)$ where c is given by c2 = a2 + b2 and for a north-south opening hyperbola are given by $\left( h, k\pm c\right)$ again with c2 = a2 + b2 For rectangular hyperbolas with the coordinate axes parallel to their asymptotes: $(x-h)(y-k) = c \,$ A graph of the rectangular hyperbola, $y=\tfrac{1}{x}$. The simplest example of these are the hyperbolas $y=\frac{m}{x}\,$. ### Polar East-west opening hyperbola: $r^2 =a\sec 2\theta \,$ North-south opening hyperbola: $r^2 =-a\sec 2\theta \,$ Northeast-southwest opening hyperbola: $r^2 =a\csc 2\theta \,$ Northwest-southeast opening hyperbola: $r^2 =-a\csc 2\theta \,$ In all formulas the center is at the pole, and a is the semi-major axis and semi-minor axis. ### Parametric East-west opening hyperbola: $\begin{matrix} x = a\sec t + h \\ y = b\tan t + k \\ \end{matrix} \qquad \mathrm{or} \qquad\begin{matrix} x = \pm a\cosh t + h \\ y = b\sinh t + k \\ \end{matrix}$ North-south opening hyperbola: $\begin{matrix} x = a\tan t + h \\ y = b\sec t + k \\ \end{matrix} \qquad \mathrm{or} \qquad\begin{matrix} x = a\sinh t + h \\ y = \pm b\cosh t + k \\ \end{matrix}$ In all formulas (h,k) is the center of the hyperbola, a is the semi-major axis, and b is the semi-minor axis.
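A short numerical check ties these formulas together for an east-west opening hyperbola centered at the origin: the eccentricity, asymptote slopes, and the defining property |d1 − d2| = 2a all follow from a and b. The values of a, b, and t below are arbitrary examples.

```python
import numpy as np

a, b = 3.0, 2.0
c = np.hypot(a, b)                  # focal distance, c^2 = a^2 + b^2
e = c / a                           # eccentricity, equal to sqrt(1 + b^2/a^2)
slopes = (b / a, -b / a)            # asymptote slopes

t = 0.7                             # parameter for a point on the right branch
x, y = a * np.cosh(t), b * np.sinh(t)
d1 = np.hypot(x - c, y)             # distance to focus (+c, 0)
d2 = np.hypot(x + c, y)             # distance to focus (-c, 0)
assert abs(abs(d1 - d2) - 2 * a) < 1e-9   # |d1 - d2| = 2a
```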
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8907546401023865, "perplexity": 340.43098278433763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917663.12/warc/CC-MAIN-20140901014517-00160-ip-10-180-136-8.ec2.internal.warc.gz"}
https://arxiv.org/abs/cs/0302028
# Title: The Boolean Functions Computed by Random Boolean Formulas OR How to Grow the Right Function

Abstract: Among their many uses, growth processes (probabilistic amplification) were used for constructing reliable networks from unreliable components, and for deriving complexity bounds of various classes of functions. Hence, determining the initial conditions for such processes is an important and challenging problem. In this paper we characterize growth processes by their initial conditions and derive conditions under which results such as Valiant's (1984) hold. First, we completely characterize growth processes that use linear connectives. Second, by extending Savický's (1990) analysis via "Restriction Lemmas", we characterize growth processes that use monotone connectives, and show that our technique is applicable to growth processes that use other connectives as well. Additionally, we obtain explicit bounds on the convergence rates of several growth processes, including the growth process studied by Savický (1990).

Subjects: Discrete Mathematics (cs.DM); Computational Complexity (cs.CC)
ACM classes: F.1.1; F.1.2; G.2.1; G.3
Cite as: arXiv:cs/0302028 [cs.DM] (or arXiv:cs/0302028v1 [cs.DM] for this version)

## Submission history
From: Alex Brodsky
[v1] Wed, 19 Feb 2003 21:57:06 UTC (23 KB)
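As a toy illustration of what a growth process does (this is the classic Valiant-style amplification step, not one of the constructions analyzed in the paper): combining four independent copies of a formula with (x1 ∧ x2) ∨ (x3 ∧ x4) maps the probability p that a copy evaluates to 1 into A(p) = 1 − (1 − p²)² = 2p² − p⁴. Iterating this map pushes p away from the non-trivial fixed point (√5 − 1)/2 ≈ 0.618, which is the amplification effect that the initial conditions of such processes must be tuned to exploit.

```python
def amplify(p):
    """Probability that (x1 AND x2) OR (x3 AND x4) is 1 when each xi is 1 independently with prob. p."""
    return 2 * p ** 2 - p ** 4

p_star = (5 ** 0.5 - 1) / 2          # non-trivial fixed point of the map, ~0.618
for p0 in (p_star - 0.05, p_star + 0.05):
    p = p0
    for _ in range(6):               # six levels of the growth process
        p = amplify(p)
    print(f"start {p0:.3f} -> after 6 levels {p:.3f}")  # drifts toward 0 or 1
```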
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9281798601150513, "perplexity": 2737.0405619267494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494449.56/warc/CC-MAIN-20190220044622-20190220070622-00484.warc.gz"}
http://aas.org/archives/BAAS/v34n4/aas201/13.htm
AAS 201st Meeting, January, 2003
Session 40. Normal and Dwarf Novae Poster, Tuesday, January 7, 2003, 9:20am-6:30pm, Exhibit Hall AB

## [40.16] A preliminary model of the intermediate polar YY Dra from FUSE spectra

A.P. Linnell (Univ. Washington), D.W. Hoard (SSC/IPAC), P. Szkody (Univ. Washington)

YY Dra is an intermediate polar with orbital period 3h 58m. The WD has magnetic poles lying close to the equator (Szkody et al. 2002); the WD spin period is 529 s (Haswell et al. 1996). We have obtained phase-resolved FUSE spectra of the YY Dra system, and have determined a mean FUSE spectrum. We have measured flux values at spin maximum, spin minimum, and mean flux values from the HST plots by Haswell et al. (1996), and have combined these with the corresponding FUSE spectra. The BINSYN program suite (Linnell & Hubeny 1996) has been used to calculate synthetic spectra of the WD, based on the Haswell et al. (1996) system parameters. The distance determination by Mateo et al. (1991) establishes an absolute flux calibration factor to superpose synthetic spectra on the observed spectra. An initial comparison between the spot minimum spectrum and the model WD, but without model hot spots, shows that the WD Teff must be less than 20 kK. We represent the hot spots as flat, circular regions on the WD photosphere, separated by 180° in longitude. With one spot located on the WD central meridian, as seen by the observer, we explored the tradeoff between spot angular radius and spot Teff in fitting the FUSE+HST spot maximum flux. The best fit was with a spot Teff of 80 kK on a WD of 16 kK Teff. A comparison of the same model, but with the spots at the WD limb, and a simulation with no spots shows substantial differences and demonstrates that the spots must be visible at the WD limb. We empirically vary the spot size as a function of angular distance from the central meridian to fit the observed light variation. The paper reports satisfactory fits to the observational data. We expect that re-extraction of the phase-resolved spectra with the latest FUSE pipeline software will improve the spectral S/N and allow further refinement of the model.

Bulletin of the American Astronomical Society, 34, #4 © 2002. The American Astronomical Society.
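A crude way to see the spot-radius versus spot-temperature tradeoff described above is to model the far-UV flux as an area-weighted sum of two blackbodies, one for the WD photosphere and one for the hot spot. This is only a blackbody sketch, not the BINSYN synthetic-spectrum calculation used in the abstract, and the spot covering fractions below are arbitrary illustrative values.

```python
import numpy as np

H_PLANCK, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T):
    """Blackbody spectral radiance B_lambda (W m^-3 sr^-1)."""
    x = H_PLANCK * C_LIGHT / (wavelength_m * K_B * T)
    return 2.0 * H_PLANCK * C_LIGHT ** 2 / wavelength_m ** 5 / np.expm1(x)

wavelengths = np.linspace(905e-10, 1187e-10, 200)   # roughly the FUSE band, m

def composite(spot_fraction, T_wd=16000.0, T_spot=80000.0):
    """Disk-averaged brightness with a spot covering a fraction of the visible disk."""
    return (1 - spot_fraction) * planck(wavelengths, T_wd) + spot_fraction * planck(wavelengths, T_spot)

# A small, very hot spot and a larger, cooler spot can produce broadly similar FUSE-band flux,
# which is the radius-versus-Teff degeneracy the phase-resolved fitting tries to break.
f_small_hot  = composite(0.01, T_spot=80000.0)
f_large_warm = composite(0.05, T_spot=40000.0)
```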
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9423447251319885, "perplexity": 4383.045648080126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929899.62/warc/CC-MAIN-20150521113209-00224-ip-10-180-206-219.ec2.internal.warc.gz"}
https://openstax.org/books/chemistry/pages/11-exercises
Chemistry # Exercises ChemistryExercises ### 11.1The Dissolution Process 1. How do solutions differ from compounds? From other mixtures? 2. Which of the principal characteristics of solutions can we see in the solutions of K2Cr2O7 shown in Figure 11.2? 3. When KNO3 is dissolved in water, the resulting solution is significantly colder than the water was originally. (a) Is the dissolution of KNO3 an endothermic or an exothermic process? (b) What conclusions can you draw about the intermolecular attractions involved in the process? (c) Is the resulting solution an ideal solution? 4. Give an example of each of the following types of solutions: (a) a gas in a liquid (b) a gas in a gas (c) a solid in a solid 5. Indicate the most important types of intermolecular attractions in each of the following solutions: (a) The solution in Figure 11.2. (b) NO(l) in CO(l) (c) Cl2(g) in Br2(l) (d) HCl(g) in benzene C6H6(l) (e) Methanol CH3OH(l) in H2O(l) 6. Predict whether each of the following substances would be more soluble in water (polar solvent) or in a hydrocarbon such as heptane (C7H16, nonpolar solvent): (a) vegetable oil (nonpolar) (b) isopropyl alcohol (polar) (c) potassium bromide (ionic) 7. Heat is released when some solutions form; heat is absorbed when other solutions form. Provide a molecular explanation for the difference between these two types of spontaneous processes. 8. Solutions of hydrogen in palladium may be formed by exposing Pd metal to H2 gas. The concentration of hydrogen in the palladium depends on the pressure of H2 gas applied, but in a more complex fashion than can be described by Henry’s law. Under certain conditions, 0.94 g of hydrogen gas is dissolved in 215 g of palladium metal (solution density = 10.8 g cm3). (a) Determine the molarity of this solution. (b) Determine the molality of this solution. (c) Determine the percent by mass of hydrogen atoms in this solution. ### 11.2Electrolytes 9. Explain why the ions Na+ and Cl are strongly solvated in water but not in hexane, a solvent composed of nonpolar molecules. 10. Explain why solutions of HBr in benzene (a nonpolar solvent) are nonconductive, while solutions in water (a polar solvent) are conductive. 11. Consider the solutions presented: (a) Which of the following sketches best represents the ions in a solution of Fe(NO3)3(aq)? (b) Write a balanced chemical equation showing the products of the dissolution of Fe(NO3)3. 12. Compare the processes that occur when methanol (CH3OH), hydrogen chloride (HCl), and sodium hydroxide (NaOH) dissolve in water. Write equations and prepare sketches showing the form in which each of these compounds is present in its respective solution. 13. What is the expected electrical conductivity of the following solutions? (a) NaOH(aq) (b) HCl(aq) (c) C6H12O6(aq) (glucose) (d) NH3(aq) 14. Why are most solid ionic compounds electrically nonconductive, whereas aqueous solutions of ionic compounds are good conductors? Would you expect a liquid (molten) ionic compound to be electrically conductive or nonconductive? Explain. 15. Indicate the most important type of intermolecular attraction responsible for solvation in each of the following solutions: (a) the solutions in Figure 11.8 (b) methanol, CH3OH, dissolved in ethanol, C2H5OH (c) methane, CH4, dissolved in benzene, C6H6 (d) the polar halocarbon CF2Cl2 dissolved in the polar halocarbon CF2ClCFCl2 (e) O2(l) in N2(l) ### 11.3Solubility 16. Suppose you are presented with a clear solution of sodium thiosulfate, Na2S2O3. 
How could you determine whether the solution is unsaturated, saturated, or supersaturated? 17. Supersaturated solutions of most solids in water are prepared by cooling saturated solutions. Supersaturated solutions of most gases in water are prepared by heating saturated solutions. Explain the reasons for the difference in the two procedures. 18. Suggest an explanation for the observations that ethanol, C2H5OH, is completely miscible with water and that ethanethiol, C2H5SH, is soluble only to the extent of 1.5 g per 100 mL of water. 19. Calculate the percent by mass of KBr in a saturated solution of KBr in water at 10 °C. See Figure 11.17 for useful data, and report the computed percentage to one significant digit. 20. Which of the following gases is expected to be most soluble in water? Explain your reasoning. (a) CH4 (b) CCl4 (c) CHCl3 21. At 0 °C and 1.00 atm, as much as 0.70 g of O2 can dissolve in 1 L of water. At 0 °C and 4.00 atm, how many grams of O2 dissolve in 1 L of water? 22. Refer to Figure 11.11. (a) How did the concentration of dissolved CO2 in the beverage change when the bottle was opened? (b) What caused this change? (c) Is the beverage unsaturated, saturated, or supersaturated with CO2? 23. The Henry’s law constant for CO2 is 3.4 $××$ 10−2 M/atm at 25 °C. What pressure of carbon dioxide is needed to maintain a CO2 concentration of 0.10 M in a can of lemon-lime soda? 24. The Henry’s law constant for O2 is 1.3 $××$ 10−3 M/atm at 25 °C. What mass of oxygen would be dissolved in a 40-L aquarium at 25 °C, assuming an atmospheric pressure of 1.00 atm, and that the partial pressure of O2 is 0.21 atm? 25. How many liters of HCl gas, measured at 30.0 °C and 745 torr, are required to prepare 1.25 L of a 3.20-M solution of hydrochloric acid? ### 11.4Colligative Properties 26. Which is/are part of the macroscopic domain of solutions and which is/are part of the microscopic domain: boiling point elevation, Henry’s law, hydrogen bond, ion-dipole attraction, molarity, nonelectrolyte, nonstoichiometric compound, osmosis, solvated ion? 27. What is the microscopic explanation for the macroscopic behavior illustrated in Figure 11.15? 28. Sketch a qualitative graph of the pressure versus time for water vapor above a sample of pure water and a sugar solution, as the liquids evaporate to half their original volume. 29. A solution of potassium nitrate, an electrolyte, and a solution of glycerin (C3H5(OH)3), a nonelectrolyte, both boil at 100.3 °C. What other physical properties of the two solutions are identical? 30. What are the mole fractions of H3PO4 and water in a solution of 14.5 g of H3PO4 in 125 g of water? (a) Outline the steps necessary to answer the question. 31. What are the mole fractions of HNO3 and water in a concentrated solution of nitric acid (68.0% HNO3 by mass)? (a) Outline the steps necessary to answer the question. 32. Calculate the mole fraction of each solute and solvent: (a) 583 g of H2SO4 in 1.50 kg of water—the acid solution used in an automobile battery (b) 0.86 g of NaCl in 1.00 $××$ 102 g of water—a solution of sodium chloride for intravenous injection (c) 46.85 g of codeine, C18H21NO3, in 125.5 g of ethanol, C2H5OH (d) 25 g of I2 in 125 g of ethanol, C2H5OH 33. 
Calculate the mole fraction of each solute and solvent: (a) 0.710 kg of sodium carbonate (washing soda), Na2CO3, in 10.0 kg of water—a saturated solution at 0 °C (b) 125 g of NH4NO3 in 275 g of water—a mixture used to make an instant ice pack (c) 25 g of Cl2 in 125 g of dichloromethane, CH2Cl2 (d) 0.372 g of tetrahydropyridine, C5H9N, in 125 g of chloroform, CHCl3 34. Calculate the mole fractions of methanol, CH3OH; ethanol, C2H5OH; and water in a solution that is 40% methanol, 40% ethanol, and 20% water by mass. (Assume the data are good to two significant figures.) 35. What is the difference between a 1 M solution and a 1 m solution? 36. What is the molality of phosphoric acid, H3PO4, in a solution of 14.5 g of H3PO4 in 125 g of water? (a) Outline the steps necessary to answer the question. 37. What is the molality of nitric acid in a concentrated solution of nitric acid (68.0% HNO3 by mass)? (a) Outline the steps necessary to answer the question. 38. Calculate the molality of each of the following solutions: (a) 583 g of H2SO4 in 1.50 kg of water—the acid solution used in an automobile battery (b) 0.86 g of NaCl in 1.00 $××$ 102 g of water—a solution of sodium chloride for intravenous injection (c) 46.85 g of codeine, C18H21NO3, in 125.5 g of ethanol, C2H5OH (d) 25 g of I2 in 125 g of ethanol, C2H5OH 39. Calculate the molality of each of the following solutions: (a) 0.710 kg of sodium carbonate (washing soda), Na2CO3, in 10.0 kg of water—a saturated solution at 0°C (b) 125 g of NH4NO3 in 275 g of water—a mixture used to make an instant ice pack (c) 25 g of Cl2 in 125 g of dichloromethane, CH2Cl2 (d) 0.372 g of tetrahydropyridine, C5H9N, in 125 g of chloroform, CHCl3 40. The concentration of glucose, C6H12O6, in normal spinal fluid is $75mg100g.75mg100g.$ What is the molality of the solution? 41. A 13.0% solution of K2CO3 by mass has a density of 1.09 g/cm3. Calculate the molality of the solution. 42. Why does 1 mol of sodium chloride depress the freezing point of 1 kg of water almost twice as much as 1 mol of glycerin? 43. What is the boiling point of a solution of 115.0 g of sucrose, C12H22O11, in 350.0 g of water? (a) Outline the steps necessary to answer the question 44. What is the boiling point of a solution of 9.04 g of I2 in 75.5 g of benzene, assuming the I2 is nonvolatile? (a) Outline the steps necessary to answer the question. 45. What is the freezing temperature of a solution of 115.0 g of sucrose, C12H22O11, in 350.0 g of water, which freezes at 0.0 °C when pure? (a) Outline the steps necessary to answer the question. 46. What is the freezing point of a solution of 9.04 g of I2 in 75.5 g of benzene? (a) Outline the steps necessary to answer the following question. 47. What is the osmotic pressure of an aqueous solution of 1.64 g of Ca(NO3)2 in water at 25 °C? The volume of the solution is 275 mL. (a) Outline the steps necessary to answer the question. 48. What is osmotic pressure of a solution of bovine insulin (molar mass, 5700 g mol−1) at 18 °C if 100.0 mL of the solution contains 0.103 g of the insulin? (a) Outline the steps necessary to answer the question. 49. What is the molar mass of a solution of 5.00 g of a compound in 25.00 g of carbon tetrachloride (bp 76.8 °C; Kb = 5.02 °C/m) that boils at 81.5 °C at 1 atm? (a) Outline the steps necessary to answer the question. (b) Solve the problem. 50. A sample of an organic compound (a nonelectrolyte) weighing 1.35 g lowered the freezing point of 10.0 g of benzene by 3.66 °C. Calculate the molar mass of the compound. 51. 
A 1.0 m solution of HCl in benzene has a freezing point of 0.4 °C. Is HCl an electrolyte in benzene? Explain. 52. A solution contains 5.00 g of urea, CO(NH2)2, a nonvolatile compound, dissolved in 0.100 kg of water. If the vapor pressure of pure water at 25 °C is 23.7 torr, what is the vapor pressure of the solution? 53. A 12.0-g sample of a nonelectrolyte is dissolved in 80.0 g of water. The solution freezes at −1.94 °C. Calculate the molar mass of the substance. 54. Arrange the following solutions in order by their decreasing freezing points: 0.1 m Na3PO4, 0.1 m C2H5OH, 0.01 m CO2, 0.15 m NaCl, and 0.2 m CaCl2. 55. Calculate the boiling point elevation of 0.100 kg of water containing 0.010 mol of NaCl, 0.020 mol of Na2SO4, and 0.030 mol of MgCl2, assuming complete dissociation of these electrolytes. 56. How could you prepare a 3.08 m aqueous solution of glycerin, C3H8O3? What is the freezing point of this solution? 57. A sample of sulfur weighing 0.210 g was dissolved in 17.8 g of carbon disulfide, CS2 (Kb = 2.43 °C/m). If the boiling point elevation was 0.107 °C, what is the formula of a sulfur molecule in carbon disulfide? 58. In a significant experiment performed many years ago, 5.6977 g of cadmium iodide in 44.69 g of water raised the boiling point 0.181 °C. What does this suggest about the nature of a solution of CdI2? 59. Lysozyme is an enzyme that cleaves cell walls. A 0.100-L sample of a solution of lysozyme that contains 0.0750 g of the enzyme exhibits an osmotic pressure of 1.32 $××$ 10−3 atm at 25 °C. What is the molar mass of lysozyme? 60. The osmotic pressure of a solution containing 7.0 g of insulin per liter is 23 torr at 25 °C. What is the molar mass of insulin? 61. The osmotic pressure of human blood is 7.6 atm at 37 °C. What mass of glucose, C6H12O6, is required to make 1.00 L of aqueous solution for intravenous feeding if the solution must have the same osmotic pressure as blood at body temperature, 37 °C? 62. What is the freezing point of a solution of dibromobenzene, C6H4Br2, in 0.250 kg of benzene, if the solution boils at 83.5 °C? 63. What is the boiling point of a solution of NaCl in water if the solution freezes at −0.93 °C? 64. The sugar fructose contains 40.0% C, 6.7% H, and 53.3% O by mass. A solution of 11.7 g of fructose in 325 g of ethanol has a boiling point of 78.59 °C. The boiling point of ethanol is 78.35 °C, and Kb for ethanol is 1.20 °C/m. What is the molecular formula of fructose? 65. The vapor pressure of methanol, CH3OH, is 94 torr at 20 °C. The vapor pressure of ethanol, C2H5OH, is 44 torr at the same temperature. (a) Calculate the mole fraction of methanol and of ethanol in a solution of 50.0 g of methanol and 50.0 g of ethanol. (b) Ethanol and methanol form a solution that behaves like an ideal solution. Calculate the vapor pressure of methanol and of ethanol above the solution at 20 °C. (c) Calculate the mole fraction of methanol and of ethanol in the vapor above the solution. 66. The triple point of air-free water is defined as 273.16 K. Why is it important that the water be free of air? 67. Meat can be classified as fresh (not frozen) even though it is stored at −1 °C. Why wouldn’t meat freeze at this temperature? 68. An organic compound has a composition of 93.46% C and 6.54% H by mass. A solution of 0.090 g of this compound in 1.10 g of camphor melts at 158.4 °C. The melting point of pure camphor is 178.4 °C. Kf for camphor is 37.7 °C/m. What is the molecular formula of the solute? Show your calculations. 69. 
A sample of HgCl2 weighing 9.41 g is dissolved in 32.75 g of ethanol, C2H5OH (Kb = 1.20 °C/m). The boiling point elevation of the solution is 1.27 °C. Is HgCl2 an electrolyte in ethanol? Show your calculations. 70. A salt is known to be an alkali metal fluoride. A quick approximate determination of freezing point indicates that 4 g of the salt dissolved in 100 g of water produces a solution that freezes at about −1.4 °C. What is the formula of the salt? Show your calculations. ### 11.5 Colloids 71. Identify the dispersed phase and the dispersion medium in each of the following colloidal systems: starch dispersion, smoke, fog, pearl, whipped cream, floating soap, jelly, milk, and ruby. 72. Distinguish between dispersion methods and condensation methods for preparing colloidal systems. 73. How do colloids differ from solutions with regard to dispersed particle size and homogeneity? 74. Explain the cleansing action of soap. 75. How can it be demonstrated that colloidal particles are electrically charged?
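Several of the colligative-property exercises above reduce to the same arithmetic: convert mass to moles, form the molality, and compare the predicted ΔT with the observed one. The sketch below works through that arithmetic for the HgCl2/ethanol boiling-point-elevation problem (exercise 69) purely as an illustration of the method — treat it as a check of your own work, not an official solution key.

```python
# Boiling-point elevation check for HgCl2 in ethanol (exercise 69 above).
M_HgCl2 = 200.59 + 2 * 35.45      # g/mol, from standard atomic masses
mass_solute = 9.41                # g of HgCl2
mass_solvent = 32.75 / 1000.0     # kg of ethanol
Kb_ethanol = 1.20                 # °C/m, given in the problem

moles = mass_solute / M_HgCl2
molality = moles / mass_solvent
dT_predicted = Kb_ethanol * molality      # assuming no dissociation (i = 1)

print(f"molality            = {molality:.3f} m")
print(f"predicted dT (i = 1) = {dT_predicted:.2f} °C   (observed: 1.27 °C)")
# Predicted and observed elevations agree with i ~ 1, suggesting HgCl2 behaves
# as a nonelectrolyte (does not dissociate appreciably) in ethanol.
```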
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071391582489014, "perplexity": 3354.4567887694325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00432.warc.gz"}
https://www.physicsforums.com/threads/simple-calculus-volumes-integration.593058/
# Simple calculus volumes integration 1. ### togo 106 1. The problem statement, all variables and given/known data Find the volume of this equation, revolved around x axis 2. Relevant equations y=x^2 y^2=x 3. The attempt at a solution 1) (pi)(r^2) 2) r = x^2 3) (pi)((x^2)^2) 4) (pi)(x^4) now to integrate 5) (pi)(1/5(x^5)) since x = 1, and 1^5=1, 1/5=1/5 pi/5? there are two equations here (y=x^2 & y^2=x), are these two somehow combined for the result the answer is supposed to be 3pi/10 thanks. 2. ### NewtonianAlch 448 You're not looking for the volume of the "equation", you're looking for the volume of the object that's formed when you revolve a 2-d object around the x-axis. This may help: http://www.wyzant.com/Help/Math/Calculus/Integration/Finding_Volume.aspx Also, were those the actual equations given to you? y = x^2 and y^2 = x? 3. ### togo 106 yes the question gave me those two equations specifically. The picture of the answer shows two curves. 4. ### LawrenceC 3) (pi)((x^2)^2) ?????????????????????? 5. ### togo 106 ((x^2)^2) = x^4? could someone just do this, I've wracked my brain on it hard enough already. ### Staff: Mentor Have you sketched the graphs? Are you finding the volume of material used in molding the walls of that 3D object? Perhaps you should post the solution. 106 question 7 ### Staff: Mentor But you've cut off that part that was going to answer my question! 106 lol sorry ### Staff: Mentor Okay, so from the solution we can see that the question does indeed require that you, for example, find the volume of clay needed to make the walls of that aforementioned jar. Now that we all understand the question...are you right to finish it? 11. ### togo 106 obviously not ### Staff: Mentor It might be clearer if we attack this in two steps: ① Find the volume of the generated solid enclosed within the outer curve (viz., x=y²) for 0≤ x ≥1, ② Find the volume of the generated solid enclosed within the inner curve (viz., y=x²) for that same domain. Finally, subtract these volumes to determine the difference. The volume of the disc shown shaded in your figure is a circular-based cylinder of thickness dx. For the moment, let's forget about the hole in the disc. At any distance x, the circular face of that disc is of radius = y. Using the area of a circle formula, and the thickness of the disc, what is the expression for the volume of just that thin disc shown shaded? (No calculus is involved in answering this.) Last edited: Apr 10, 2012 Similar discussions for: Simple calculus volumes integration
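A quick numerical check of the washer-method setup discussed in this thread: for the region between y = x² (inner radius at a given x) and y = √x (outer radius, from y² = x) revolved about the x-axis, the volume is π∫₀¹ (x − x⁴) dx = 3π/10. The sketch below simply confirms that value symbolically (sympy assumed available); it is a sanity check, not a substitute for the setup the mentors are guiding the poster toward.

```python
import sympy as sp

x = sp.symbols('x')
outer = sp.sqrt(x)   # outer curve on [0, 1], from y**2 = x
inner = x**2         # inner curve on [0, 1]

# Washer method: V = pi * integral of (outer^2 - inner^2) dx over [0, 1]
V = sp.pi * sp.integrate(outer**2 - inner**2, (x, 0, 1))
print(sp.simplify(V), "=", float(V))   # 3*pi/10 ~ 0.9425
```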
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.809619128704071, "perplexity": 2272.4576706645516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298538.29/warc/CC-MAIN-20150323172138-00121-ip-10-168-14-71.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Beta-binomial_model
# Beta-binomial distribution (Redirected from Beta-binomial model) Parameters Probability mass function Cumulative distribution function n ∈ N0 — number of trials ${\displaystyle \alpha >0}$ (real) ${\displaystyle \beta >0}$ (real) k ∈ { 0, …, n } ${\displaystyle {n \choose k}{\frac {\mathrm {B} (k+\alpha ,n-k+\beta )}{\mathrm {B} (\alpha ,\beta )}}\!}$ ${\displaystyle 1-{\tfrac {\mathrm {B} (\beta +n-k-1,\alpha +k+1)_{3}F_{2}({\boldsymbol {a}},{\boldsymbol {b}};k)}{\mathrm {B} (\alpha ,\beta )\mathrm {B} (n-k,k+2)(n+1)}}}$ where 3F2(a,b,k) is the generalized hypergeometric function =3F2(1, α + k + 1, −n + k + 1; k + 2, −β − n + k + 2; 1) ${\displaystyle {\frac {n\alpha }{\alpha +\beta }}\!}$ ${\displaystyle {\frac {n\alpha \beta (\alpha +\beta +n)}{(\alpha +\beta )^{2}(\alpha +\beta +1)}}\!}$ ${\displaystyle {\tfrac {(\alpha +\beta +2n)(\beta -\alpha )}{(\alpha +\beta +2)}}{\sqrt {\tfrac {1+\alpha +\beta }{n\alpha \beta (n+\alpha +\beta )}}}\!}$ See text ${\displaystyle _{2}F_{1}(-n,\alpha ;\alpha +\beta ;1-e^{t})\!}$ ${\displaystyle {\text{for }}t<\log _{e}(2)}$ ${\displaystyle _{2}F_{1}(-n,\alpha ;\alpha +\beta ;1-e^{it})\!}$ ${\displaystyle {\text{for }}|t|<\log _{e}(2)}$ In probability theory and statistics, the beta-binomial distribution is a family of discrete probability distributions on a finite support of non-negative integers arising when the probability of success in each of a fixed or known number of Bernoulli trials is either unknown or random. The beta-binomial distribution is the binomial distribution in which the probability of success at each trial is not fixed but random and follows the beta distribution. It is frequently used in Bayesian statistics, empirical Bayes methods and classical statistics as an overdispersed binomial distribution. It reduces to the Bernoulli distribution as a special case when n = 1. For α = β = 1, it is the discrete uniform distribution from 0 to n. It also approximates the binomial distribution arbitrarily well for large α and β. The beta-binomial is a one-dimensional version of the Dirichlet-multinomial distribution, as the binomial and beta distributions are univariate versions of the multinomial and Dirichlet distributions, respectively. ## Motivation and derivation ### Beta-binomial distribution as a compound distribution The Beta distribution is a conjugate distribution of the binomial distribution. This fact leads to an analytically tractable compound distribution where one can think of the ${\displaystyle p}$ parameter in the binomial distribution as being randomly drawn from a beta distribution. Namely, if {\displaystyle {\begin{aligned}X&\sim \operatorname {Bin} (n,p)\\\end{aligned}}} then {\displaystyle {\begin{aligned}P(X=k|p,n)&=L(k|p)={n \choose k}p^{k}(1-p)^{n-k}\end{aligned}}} where Bin(n,p) stands for the binomial distribution, and where p is a random variable with a beta distribution. 
{\displaystyle {\begin{aligned}\pi (p|\alpha ,\beta )&=\mathrm {Beta} (\alpha ,\beta )\\&={\frac {p^{\alpha -1}(1-p)^{\beta -1}}{\mathrm {B} (\alpha ,\beta )}}\end{aligned}}} then the compound distribution is given by {\displaystyle {\begin{aligned}f(k|n,\alpha ,\beta )&=\int _{0}^{1}L(k|p)\pi (p|\alpha ,\beta )\,dp\\&={n \choose k}{\frac {1}{\mathrm {B} (\alpha ,\beta )}}\int _{0}^{1}p^{k+\alpha -1}(1-p)^{n-k+\beta -1}\,dp\\&={n \choose k}{\frac {\mathrm {B} (k+\alpha ,n-k+\beta )}{\mathrm {B} (\alpha ,\beta )}}.\end{aligned}}} Using the properties of the beta function, this can alternatively be written ${\displaystyle f(k|n,\alpha ,\beta )={\frac {\Gamma (n+1)}{\Gamma (k+1)\Gamma (n-k+1)}}{\frac {\Gamma (k+\alpha )\Gamma (n-k+\beta )}{\Gamma (n+\alpha +\beta )}}{\frac {\Gamma (\alpha +\beta )}{\Gamma (\alpha )\Gamma (\beta )}}.}$ ### Beta-binomial as an urn model The beta-binomial distribution can also be motivated via an urn model for positive integer values of α and β, known as the Polya urn model. Specifically, imagine an urn containing α red balls and β black balls, where random draws are made. If a red ball is observed, then two red balls are returned to the urn. Likewise, if a black ball is drawn, then two black balls are returned to the urn. If this is repeated n times, then the probability of observing k red balls follows a beta-binomial distribution with parameters n,α and β. Note that if the random draws are with simple replacement (no balls over and above the observed ball are added to the urn), then the distribution follows a binomial distribution and if the random draws are made without replacement, the distribution follows a hypergeometric distribution. ## Moments and properties The first three raw moments are {\displaystyle {\begin{aligned}\mu _{1}&={\frac {n\alpha }{\alpha +\beta }}\\[8pt]\mu _{2}&={\frac {n\alpha [n(1+\alpha )+\beta ]}{(\alpha +\beta )(1+\alpha +\beta )}}\\[8pt]\mu _{3}&={\frac {n\alpha [n^{2}(1+\alpha )(2+\alpha )+3n(1+\alpha )\beta +\beta (\beta -\alpha )]}{(\alpha +\beta )(1+\alpha +\beta )(2+\alpha +\beta )}}\end{aligned}}} and the kurtosis is ${\displaystyle \beta _{2}={\frac {(\alpha +\beta )^{2}(1+\alpha +\beta )}{n\alpha \beta (\alpha +\beta +2)(\alpha +\beta +3)(\alpha +\beta +n)}}\left[(\alpha +\beta )(\alpha +\beta -1+6n)+3\alpha \beta (n-2)+6n^{2}-{\frac {3\alpha \beta n(6-n)}{\alpha +\beta }}-{\frac {18\alpha \beta n^{2}}{(\alpha +\beta )^{2}}}\right].}$ Letting ${\displaystyle \pi ={\frac {\alpha }{\alpha +\beta }}\!}$ we note, suggestively, that the mean can be written as ${\displaystyle \mu ={\frac {n\alpha }{\alpha +\beta }}=n\pi \!}$ and the variance as ${\displaystyle \sigma ^{2}={\frac {n\alpha \beta (\alpha +\beta +n)}{(\alpha +\beta )^{2}(\alpha +\beta +1)}}=n\pi (1-\pi ){\frac {\alpha +\beta +n}{\alpha +\beta +1}}=n\pi (1-\pi )[1+(n-1)\rho ^{2}]\!}$ where ${\displaystyle \rho ^{2}={\tfrac {1}{\alpha +\beta +1}}\!}$. The parameter ${\displaystyle \rho \!}$ is known as the "intra class" or "intra cluster" correlation. It is this positive correlation which gives rise to overdispersion. 
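The pmf and the moment formulas above are easy to check numerically. The sketch below implements the beta-binomial pmf directly from the compound-distribution formula (using log-beta and log-gamma functions for stability), simulates the Pólya urn described earlier, and compares the empirical mean and variance with the stated formulas. numpy and scipy are assumed available; recent scipy versions also ship `scipy.stats.betabinom`, which could be used instead of the hand-rolled pmf.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_comb(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf from the compound-distribution formula."""
    return np.exp(log_comb(n, k) + betaln(k + a, n - k + b) - betaln(a, b))

def polya_urn(n, a, b, rng):
    """One realization of the urn scheme: start with a red and b black balls."""
    red, black, successes = a, b, 0
    for _ in range(n):
        if rng.random() < red / (red + black):
            red += 1          # observed red: return two red balls
            successes += 1
        else:
            black += 1        # observed black: return two black balls
    return successes

n, a, b = 12, 3, 5
rng = np.random.default_rng(0)
samples = np.array([polya_urn(n, a, b, rng) for _ in range(50_000)])

mean_theory = n * a / (a + b)
var_theory = n * a * b * (a + b + n) / ((a + b) ** 2 * (a + b + 1))
print("mean:", samples.mean(), "vs", mean_theory)
print("var :", samples.var(), "vs", var_theory)
print("pmf sums to", sum(betabinom_pmf(k, n, a, b) for k in range(n + 1)))
```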
The following recurrence relation holds: ${\displaystyle \left\{{\begin{array}{l}(\alpha +k)(n-k)p(k)-(k+1)p(k+1)(\beta -k+n-1)=0,\\[10pt]p(0)={\frac {(\beta )_{n}}{(\alpha +\beta )_{n}}}\end{array}}\right\}}$ ## Point estimates ### Method of moments The method of moments estimates can be gained by noting the first and second moments of the beta-binomial namely {\displaystyle {\begin{aligned}\mu _{1}&={\frac {n\alpha }{\alpha +\beta }}\\\mu _{2}&={\frac {n\alpha [n(1+\alpha )+\beta ]}{(\alpha +\beta )(1+\alpha +\beta )}}\end{aligned}}} and setting these raw moments equal to the first and second raw sample moments respectively {\displaystyle {\begin{aligned}{\hat {\mu }}_{1}&:=m_{1}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}\\{\hat {\mu }}_{2}&:=m_{2}={\frac {1}{N}}\sum _{i=1}^{N}X_{i}^{2}\end{aligned}}} and solving for α and β we get {\displaystyle {\begin{aligned}{\hat {\alpha }}&={\frac {nm_{1}-m_{2}}{n({\frac {m_{2}}{m_{1}}}-m_{1}-1)+m_{1}}}\\{\hat {\beta }}&={\frac {(n-m_{1})(n-{\frac {m_{2}}{m_{1}}})}{n({\frac {m_{2}}{m_{1}}}-m_{1}-1)+m_{1}}}.\end{aligned}}} Note that these estimates can be non-sensically negative which is evidence that the data is either undispersed or underdispersed relative to the binomial distribution. In this case, the binomial distribution and the hypergeometric distribution are alternative candidates respectively. ### Maximum likelihood estimation While closed-form maximum likelihood estimates are impractical, given that the pdf consists of common functions (gamma function and/or Beta functions), they can be easily found via direct numerical optimization. Maximum likelihood estimates from empirical data can be computed using general methods for fitting multinomial Pólya distributions, methods for which are described in (Minka 2003). The R package VGAM through the function vglm, via maximum likelihood, facilitates the fitting of glm type models with responses distributed according to the beta-binomial distribution. Note also that there is no requirement that n is fixed throughout the observations. ### Example The following data gives the number of male children among the first 12 children of family size 13 in 6115 families taken from hospital records in 19th century Saxony (Sokal and Rohlf, p. 59 from Lindsey). The 13th child is ignored to assuage the effect of families non-randomly stopping when a desired gender is reached. Males 0 1 2 3 4 5 6 7 8 9 10 11 12 Families 3 24 104 286 670 1033 1343 1112 829 478 181 45 7 We note the first two sample moments are {\displaystyle {\begin{aligned}m_{1}&=6.23\\m_{2}&=42.31\\n&=12\end{aligned}}} and therefore the method of moments estimates are {\displaystyle {\begin{aligned}{\hat {\alpha }}&=34.1350\\{\hat {\beta }}&=31.6085.\end{aligned}}} The maximum likelihood estimates can be found numerically {\displaystyle {\begin{aligned}{\hat {\alpha }}_{\mathrm {mle} }&=34.09558\\{\hat {\beta }}_{\mathrm {mle} }&=31.5715\end{aligned}}} and the maximized log-likelihood is ${\displaystyle \log {\mathcal {L}}=-12492.9}$ from which we find the AIC ${\displaystyle {\mathit {AIC}}=24989.74.}$ The AIC for the competing binomial model is AIC = 25070.34 and thus we see that the beta-binomial model provides a superior fit to the data i.e. there is evidence for overdispersion. Trivers and Willard posit a theoretical justification for heterogeneity (also known as "burstiness") in gender-proneness among mammalian offspring (i.e. overdispersion). 
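The method-of-moments arithmetic for the Saxony sibship data can be reproduced directly from the frequency table; the short script below computes m₁, m₂ and the resulting α̂, β̂, and should land very close to the quoted 34.13 and 31.61 (small differences come only from rounding). The maximum-likelihood fit mentioned in the text would additionally need a numerical optimizer (e.g. `scipy.optimize.minimize` applied to the negative log-likelihood), which is omitted here. The comparison table that follows shows the resulting fitted expected counts against the observed ones.

```python
import numpy as np

males = np.arange(13)                     # 0..12 boys among the first 12 children
families = np.array([3, 24, 104, 286, 670, 1033, 1343,
                     1112, 829, 478, 181, 45, 7])
n = 12
N = families.sum()                        # 6115 families

m1 = (males * families).sum() / N         # first raw sample moment
m2 = (males**2 * families).sum() / N      # second raw sample moment

denom = n * (m2 / m1 - m1 - 1) + m1
alpha_hat = (n * m1 - m2) / denom
beta_hat = (n - m1) * (n - m2 / m1) / denom

print(f"m1 = {m1:.2f}, m2 = {m2:.2f}")
print(f"alpha_hat = {alpha_hat:.2f}, beta_hat = {beta_hat:.2f}")
```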
The superior fit is evident especially among the tails Males 0 1 2 3 4 5 6 7 8 9 10 11 12 Observed Families 3 24 104 286 670 1033 1343 1112 829 478 181 45 7 Fitted Expected (Beta-Binomial) 2.3 22.6 104.8 310.9 655.7 1036.2 1257.9 1182.1 853.6 461.9 177.9 43.8 5.2 Fitted Expected (Binomial p = 0.519215) 0.9 12.1 71.8 258.5 628.1 1085.2 1367.3 1265.6 854.2 410 132.8 26.1 2.3 ## Further Bayesian considerations It is convenient to reparameterize the distributions so that the expected mean of the prior is a single parameter: Let {\displaystyle {\begin{aligned}\pi (\theta |\mu ,M)&=\operatorname {Beta} (M\mu ,M(1-\mu ))\\&={\frac {\Gamma (M)}{\Gamma (M\mu )\Gamma (M(1-\mu ))}}\theta ^{M\mu -1}(1-\theta )^{M(1-\mu )-1}\end{aligned}}} where {\displaystyle {\begin{aligned}\mu &={\frac {\alpha }{\alpha +\beta }}\\M&=\alpha +\beta \end{aligned}}} so that {\displaystyle {\begin{aligned}\operatorname {E} (\theta |\mu ,M)&=\mu \\\operatorname {Var} (\theta |\mu ,M)&={\frac {\mu (1-\mu )}{M+1}}.\end{aligned}}} The posterior distribution ρ(θ|k) is also a beta distribution: {\displaystyle {\begin{aligned}\rho (\theta |k)&\propto \ell (k|\theta )\pi (\theta |\mu ,M)\\&=\operatorname {Beta} (k+M\mu ,n-k+M(1-\mu ))\\&={\frac {\Gamma (M)}{\Gamma (M\mu )\Gamma (M(1-\mu ))}}{n \choose k}\theta ^{k+M\mu -1}(1-\theta )^{n-k+M(1-\mu )-1}\end{aligned}}} And ${\displaystyle \operatorname {E} (\theta |k)={\frac {k+M\mu }{n+M}}.}$ while the marginal distribution m(k|μ, M) is given by {\displaystyle {\begin{aligned}m(k|\mu ,M)&=\int _{0}^{1}l(k|\theta )\pi (\theta |\mu ,M)\,d\theta \\&={\frac {\Gamma (M)}{\Gamma (M\mu )\Gamma (M(1-\mu ))}}{n \choose k}\int _{0}^{1}\theta ^{k+M\mu -1}(1-\theta )^{n-k+M(1-\mu )-1}d\theta \\&={\frac {\Gamma (M)}{\Gamma (M\mu )\Gamma (M(1-\mu ))}}{n \choose k}{\frac {\Gamma (k+M\mu )\Gamma (n-k+M(1-\mu ))}{\Gamma (n+M)}}.\end{aligned}}} Substituting back M and μ, in terms of ${\displaystyle \alpha }$ and ${\displaystyle \beta }$, this becomes: ${\displaystyle m(k|\alpha ,\beta )={\frac {\Gamma (n+1)}{\Gamma (k+1)\Gamma (n-k+1)}}{\frac {\Gamma (k+\alpha )\Gamma (n-k+\beta )}{\Gamma (n+\alpha +\beta )}}{\frac {\Gamma (\alpha +\beta )}{\Gamma (\alpha )\Gamma (\beta )}}.}$ which is the expected beta-binomial distribution with parameters ${\displaystyle n,\alpha }$ and ${\displaystyle \beta }$. We can also use the method of iterated expectations to find the expected value of the marginal moments. Let us write our model as a two-stage compound sampling model. 
Let ki be the number of success out of ni trials for event i: {\displaystyle {\begin{aligned}k_{i}&\sim \operatorname {Bin} (n_{i},\theta _{i})\\\theta _{i}&\sim \operatorname {Beta} (\mu ,M),\ \mathrm {i.i.d.} \end{aligned}}} We can find iterated moment estimates for the mean and variance using the moments for the distributions in the two-stage model: ${\displaystyle \operatorname {E} \left({\frac {k}{n}}\right)=\operatorname {E} \left[\operatorname {E} \left(\left.{\frac {k}{n}}\right|\theta \right)\right]=\operatorname {E} (\theta )=\mu }$ {\displaystyle {\begin{aligned}\operatorname {var} \left({\frac {k}{n}}\right)&=\operatorname {E} \left[\operatorname {var} \left(\left.{\frac {k}{n}}\right|\theta \right)\right]+\operatorname {var} \left[\operatorname {E} \left(\left.{\frac {k}{n}}\right|\theta \right)\right]\\&=\operatorname {E} \left[\left(\left.{\frac {1}{n}}\right)\theta (1-\theta )\right|\mu ,M\right]+\operatorname {var} \left(\theta |\mu ,M\right)\\&={\frac {1}{n}}\left(\mu (1-\mu )\right)+{\frac {n-1}{n}}{\frac {(\mu (1-\mu ))}{M+1}}\\&={\frac {\mu (1-\mu )}{n}}\left(1+{\frac {n-1}{M+1}}\right).\end{aligned}}} (Here we have used the law of total expectation and the law of total variance.) We want point estimates for ${\displaystyle \mu }$ and ${\displaystyle M}$. The estimated mean ${\displaystyle {\hat {\mu }}}$ is calculated from the sample ${\displaystyle {\hat {\mu }}={\frac {\sum _{i=1}^{N}k_{i}}{\sum _{i=1}^{N}n_{i}}}.}$ The estimate of the hyperparameter M is obtained using the moment estimates for the variance of the two-stage model: ${\displaystyle s^{2}={\frac {1}{N}}\sum _{i=1}^{N}\operatorname {var} \left({\frac {k_{i}}{n_{i}}}\right)={\frac {1}{N}}\sum _{i=1}^{N}{\frac {{\hat {\mu }}(1-{\hat {\mu }})}{n_{i}}}\left[1+{\frac {n_{i}-1}{{\widehat {M}}+1}}\right]}$ Solving: ${\displaystyle {\widehat {M}}={\frac {{\hat {\mu }}(1-{\hat {\mu }})-s^{2}}{s^{2}-{\frac {{\hat {\mu }}(1-{\hat {\mu }})}{N}}\sum _{i=1}^{N}1/n_{i}}},}$ where ${\displaystyle s^{2}={\frac {N\sum _{i=1}^{N}n_{i}({\hat {\theta _{i}}}-{\hat {\mu }})^{2}}{(N-1)\sum _{i=1}^{N}n_{i}}}.}$ Since we now have parameter point estimates, ${\displaystyle {\hat {\mu }}}$ and ${\displaystyle {\widehat {M}}}$, for the underlying distribution, we would like to find a point estimate ${\displaystyle {\tilde {\theta }}_{i}}$ for the probability of success for event i. This is the weighted average of the event estimate ${\displaystyle {\hat {\theta _{i}}}=k_{i}/n_{i}}$ and ${\displaystyle {\hat {\mu }}}$. Given our point estimates for the prior, we may now plug in these values to find a point estimate for the posterior ${\displaystyle {\tilde {\theta _{i}}}=E(\theta |k_{i})={\frac {k_{i}+{\widehat {M}}{\hat {\mu }}}{n_{i}+{\widehat {M}}}}={\frac {\widehat {M}}{n_{i}+{\widehat {M}}}}{\hat {\mu }}+{\frac {n_{i}}{n_{i}+{\widehat {M}}}}{\frac {k_{i}}{n_{i}}}.}$ ## Shrinkage factors We may write the posterior estimate as a weighted average: ${\displaystyle {\tilde {\theta }}_{i}={\hat {B}}_{i}\,{\hat {\mu }}+(1-{\hat {B}}_{i}){\hat {\theta }}_{i}}$ where ${\displaystyle {\hat {B}}_{i}}$ is called the shrinkage factor. ${\displaystyle {\hat {B_{i}}}={\frac {\hat {M}}{{\hat {M}}+n_{i}}}}$ ## Related distributions • ${\displaystyle BB(1,1,n)\sim U(0,n)\,}$ where ${\displaystyle U(a,b)\,}$ is the discrete uniform distribution.
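Returning to the two-stage estimation and shrinkage formulas above, a minimal sketch of the empirical-Bayes workflow is given below. The data here are synthetic (generated from a beta-binomial two-stage model purely for illustration); the moment-based estimate of M can be unstable or even negative for small or nearly equidispersed samples, which is worth checking in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-stage data: true rates theta_i drawn from Beta(mu*M, (1-mu)*M).
true_mu, true_M = 0.3, 20.0
n_i = rng.integers(5, 60, size=30)
theta_i = rng.beta(true_mu * true_M, (1 - true_mu) * true_M, size=30)
k_i = rng.binomial(n_i, theta_i)

N = len(k_i)
mu_hat = k_i.sum() / n_i.sum()
theta_hat = k_i / n_i

# Moment-based estimates of s^2 and M, following the formulas above.
s2 = N * (n_i * (theta_hat - mu_hat) ** 2).sum() / ((N - 1) * n_i.sum())
M_hat = (mu_hat * (1 - mu_hat) - s2) / (s2 - mu_hat * (1 - mu_hat) / N * (1.0 / n_i).sum())

# Shrink each raw rate toward the pooled mean; small n_i => more shrinkage.
B_i = M_hat / (M_hat + n_i)
theta_tilde = B_i * mu_hat + (1 - B_i) * theta_hat

print("mu_hat =", round(mu_hat, 3), " M_hat =", round(M_hat, 2))
print("raw    :", np.round(theta_hat[:5], 3))
print("shrunk :", np.round(theta_tilde[:5], 3))
```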
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 65, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9616560935974121, "perplexity": 2865.3761692868643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982296721.55/warc/CC-MAIN-20160823195816-00228-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/physical-equivalence-of-lagrangian-under-addition-of-df-dt.578871/
# Homework Help: Physical equivalence of Lagrangian under addition of dF/dt 1. Feb 18, 2012 ### Favicon 1. The problem statement, all variables and given/known data This isn't strictly a homework question as I've already graduated and now work as a web developer. However, I'm attempting to recover my ability to do physics (it's been a few months now) by working my way through the problems in Analytical Mechanics (Hand and Finch) in my free time and have got stuck on a question about physically invariant Lagrangians. I understand that the Lagrangian of a physical system is not unique because there are many Lagrangians for which the Euler-Lagrange equations reduce to the same thing. The question I can't solve asks for a proof that the Euler-Lagrange (EL) equations are unchanged by the addition of $dF/dt$ to the Lagrangian, where $F \equiv F(q_1, ..., q_2, t)$. 2. Relevant equations Euler-Lagrange equations: $\frac{d}{dt}\frac{∂L}{∂\dot{q_k}} - \frac{∂L}{∂q_k} = 0$ Change of Lagrangian $L \rightarrow L' = L + \frac{dF}{dt}$ Chain rule: $\frac{dF}{dt} = \sum\limits_k{\frac{∂F}{∂q_k}\frac{∂q_k}{∂t}}$ 3. The attempt at a solution I guess the solution is to substitute the new Lagrangian, $L'$, into the EL equations and somehow show that it reduces to exactly the EL equations for $L$. The substitution gives: $\frac{d}{dt}\frac{∂}{∂\dot{q_k}}\left(L + \frac{dF}{dt}\right) - \frac{∂}{∂q_k}\left(L + \frac{dF}{dt}\right) = 0$ This can be rewritten as: $\left(\frac{d}{dt}\frac{∂L}{∂\dot{q_k}} - \frac{∂L}{∂q_k}\right) + \left(\frac{d}{dt}\frac{∂}{∂\dot{q_k}}\frac{dF}{dt} - \frac{∂}{∂q_k}\frac{dF}{dt}\right) = 0$ where the first set of bracketed terms are just the left hand side of the EL equation in L. Therefore, I now need to show that the other bracketed terms equate to zero also. i.e. $\frac{d}{dt}\frac{∂}{∂\dot{q_k}}\frac{dF}{dt} - \frac{∂}{∂q_k}\frac{dF}{dt} = 0$ All I can think of to try next is apply the chain rule to the differentiation of $F$ and then hope that everything cancels nicely. Applying the chain rule gives: $\frac{d}{dt}\frac{∂}{∂\dot{q_k}}\frac{∂F}{∂q_k} \frac{∂q_k}{∂t} + \frac{∂}{∂q_k}\frac{∂F}{∂q_k}\frac{∂q_k}{∂t} = 0$ So I guess my proof works if $\frac{d}{dt}\frac{∂}{∂\dot{q_k}} = \frac{d}{dt}\frac{∂}{∂q_k/∂t}$ is equivalent to $\frac{∂}{∂q_k}$. Is this correct? I feel like you can't just cancel the $dt$ and $∂t$ like this, but I can't see how else this proof can be done. 2. Feb 18, 2012 ### ehild Correctly: $$\frac{dF}{dt} = \sum\limits_k{\frac{∂F}{∂q_k}\frac{dq_k}{dt}}+ \frac {∂F}{∂t}$$, that is $$\frac{dF}{dt} = \sum\limits_k{\frac{∂F}{∂q_k}\dot q_k}+ \frac {∂F}{∂t}$$ Correct. Substitute equation $\frac{dF}{dt} = \sum\limits_k{\frac{∂F}{∂q_k}\dot q_k}+ \frac {∂F}{∂t}$ for dF/dt, derive it with respect of $\dot q_k$ and apply chain rule again when determining the time derivative. ehild 3. Feb 18, 2012 ### genericusrnme You can do it without going all that dirty work by looking at the action $S=\int L dt = \int (L' + \frac{dF}{dt})dt$ That method seems much neater to me Mod note: removed remaining steps. Last edited by a moderator: Feb 18, 2012 4. Feb 18, 2012 ### ehild Yes, the variation of F disappears between the endpoints of the path... That was as we learnt, but it works with the derivatives, too. 5. Feb 21, 2012 ### Favicon Indeed, I was aware of the action as the simpler way to prove it, but I wanted to do it with derivatives too as the book I'm working from doesn't discuss action until later on. 
Thought it made a good exercise in partial differentiation and chain rule, which it seems I needed! In fact I'm still struggling a little. Applying the chain rule gives me: $\frac{d}{dt}\frac{\partial}{\partial \dot{q_k}}\frac{dF}{dt} = \left(\frac{\partial^2 F}{\partial q_k^2} + \frac{\partial^3F}{\partial q_k \partial \dot{q_k}\partial q_k} + \frac{\partial^3F}{\partial q_k\partial \dot{q_k}\partial t}\right) \dot{q_k} + \frac{\partial^2F}{\partial t\partial q_k} + \frac{\partial^3 F}{\partial t\partial\dot{q_k}\partial q_k} + \frac{\partial^3F}{\partial t\partial\dot{q_k}\partial t}$ and $\frac{\partial}{\partial q_k}\frac{dF}{dt} = \frac{\partial}{\partial q_k}\left(\frac{\partial F}{\partial q_k}\dot{q_k} + \frac{\partial F}{\partial t}\right)$ Now, it seems to me that the only way this doesn't end with a lot of second and third order partial derivatives that don't cancel out is if I've got a sign wrong. For instance, if the chain rule had a minus $\frac{\partial F}{\partial t}$ at the end instead of plus, then I think everything would cancel out perfectly with each of the above becoming equal to zero, as required. But I don't think the chain rule should have a negative term like that. Have I missed something? 6. Feb 22, 2012 ### ehild Have you read my post #2? $$\frac{dF}{dt} = \sum\limits_k{\frac{∂F}{∂q_k}\dot q_k}+ \frac {∂F}{∂t}$$ First determine he derivative of dF/dt with respect to one of the velocity components, $\dot q_j$ As F itself does not depend on the velocities, $$\frac{\partial}{\partial \dot q_j}\frac{dF}{dt}=\frac{∂F}{∂q_j}$$ Then take the time derivative of $\frac{\partial}{\partial \dot q_j}\frac{dF}{dt}=\frac{∂F}{∂q_j}$: $$\frac{d}{dt} ( \frac{∂F}{∂q_j} )$$ and you have to subtract $\frac{∂}{∂q_j}\frac{dF}{dt}$ at the end. ehild 7. Feb 22, 2012 ### Favicon Ah ha, it was the $\frac{\partial }{\partial \dot{q_j}}\frac{dF}{dt} = \frac{\partial F}{\partial q_j}$ that I missed. That got rid of all the 3rd order derivatives, leaving only 2nd order one's which all cancel each other out in the end. Thanks very much for the help ehild! 8. Feb 22, 2012 ### ehild It was a great battle that ended well... ehild
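For readers who want to double-check the identity this thread settles on — that d/dt[∂(dF/dt)/∂q̇] − ∂(dF/dt)/∂q vanishes identically — the computation can also be done symbolically. The sketch below uses sympy with a single generalized coordinate and one concrete but representative choice of F(q, t); swapping in any other smooth F is a one-line change. Plain symbols are used for q and q̇ to keep the partial/total derivative bookkeeping explicit.

```python
import sympy as sp

t, q, qd = sp.symbols('t q qdot')

# A representative F(q, t); any smooth function of q and t works here.
F = q**3 * sp.sin(t) + sp.exp(q) * t**2

# Total derivative dF/dt = F_q * qdot + F_t, viewed as a function of (q, qdot, t).
dFdt = sp.diff(F, q) * qd + sp.diff(F, t)

# Euler-Lagrange expression for dF/dt:
#   d/dt [ d(dF/dt)/d(qdot) ] - d(dF/dt)/dq.
# Since d(dF/dt)/d(qdot) = F_q has no qdot-dependence, its total time
# derivative is F_qq*qdot + F_qt, and the whole expression cancels by
# equality of mixed partials.
p = sp.diff(dFdt, qd)
total_dt_p = sp.diff(p, q) * qd + sp.diff(p, t)
expr = total_dt_p - sp.diff(dFdt, q)
print(sp.simplify(expr))   # -> 0
```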
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9187101721763611, "perplexity": 428.42369495394667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864622.33/warc/CC-MAIN-20180522014949-20180522034949-00452.warc.gz"}
https://mathoverflow.net/questions/267169/upper-bound-for-tail-in-dirichlet-series
# Upper bound for tail in Dirichlet series

We know the elementary fact that if the partial sums $\sum_{n\leq X} a_n$ are bounded, say by $C$, then the series $\sum_{n\geq 1} a_n n^{-s}$ converges for $s >0$. My question then is, is there a simple upper bound in terms of $X$ and $C$ of the tail series $\sum_{n\geq X} a_n n^{-s}$ for a complex number $s$ with $\Re s>0$? The upper bound should tend to $0$ as $X\to \infty$.

Take $Y>X>0$ two integers and write $s = \sigma + ib$ with $\sigma = \Re s > 0$. Denote $S_X = \sum_{n=1}^X a_n n^{-s}$ and, for $u \ge X$, set $A(u) = \sum_{X < n \le u} a_n$, so that $|A(u)| \le 2C$ by the assumed bound on the partial sums. Partial (Abel) summation gives $S_Y - S_X = \sum_{n=X+1}^{Y} a_n n^{-s} = A(Y)\,Y^{-s} + s\int_X^Y A(u)\,u^{-s-1}\,du$, hence $|S_Y - S_X| \le 2C\,Y^{-\sigma} + 2C\,|s|\int_X^{\infty} u^{-\sigma-1}\,du \le 2C\left(1 + \frac{|s|}{\sigma}\right) X^{-\sigma}$. You get convergence using the Cauchy characterisation, and letting $Y\to\infty$ yields the tail bound $\left| \sum_{n > X} a_n n^{-s} \right| \le 2C\left(1 + \frac{|s|}{\sigma}\right) X^{-\sigma}$, which tends to $0$ as $X\to\infty$. Note that some dependence on $|s|/\sigma$ is needed: the cruder step $\sum_{n>X}|a_n|\,n^{-\sigma} \le C(X+1)^{-\sigma}$ does not follow from bounded partial sums alone, since the triangle inequality discards the cancellation that the hypothesis provides.
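As a quick numerical illustration of the Abel-summation bound stated above (the particular test sequence and parameters below are only illustrative assumptions), take $a_n = (-1)^{n+1}$, so the partial sums are bounded by $C = 1$:

```python
import numpy as np

C = 1.0
s = 0.5 + 2.0j
sigma = s.real
X = 100

bound = 2 * C * (1 + abs(s) / sigma) * X ** (-sigma)

n = np.arange(X + 1, 200_001, dtype=float)
a = np.where(n % 2 == 1, 1.0, -1.0)         # a_n = (-1)^(n+1)
partial_tails = np.cumsum(a * n ** (-s))    # S_Y - S_X for Y = X+1, X+2, ...

print("max |S_Y - S_X|     :", np.abs(partial_tails).max())
print("Abel-summation bound:", bound)       # every partial tail stays below the bound
```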
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994539618492126, "perplexity": 79.00447152748244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500140.36/warc/CC-MAIN-20230204142302-20230204172302-00617.warc.gz"}
http://www.ub.edu/focm2017/content/viewAbstract.php?code=1725
#### Conference abstracts

Session B7 - Numerical Linear Algebra

July 14, 15:30 ~ 16:00

## Vector Spaces of Linearizations for Rectangular Matrix Polynomials

### INDIAN INSTITUTE OF TECHNOLOGY GUWAHATI, INDIA   -   [email protected]

The seminal work [MMMM06] introduced vector spaces of matrix pencils, with the property that almost all the pencils in the vector space are strong linearizations of a given square regular matrix polynomial. This work was subsequently extended to include the case of square singular matrix polynomials in [DTDM09]. Extending these ideas, we construct similar vector spaces of rectangular matrix pencils such that almost every matrix pencil of the space is a strong linearization of a given rectangular matrix polynomial $P$ in a generalized sense. Moreover, the minimal indices of $P$ can be recovered from those of the matrix pencil. We further show that such pencils can be 'trimmed' to form smaller pencils that are unimodular equivalent to $P$. The resulting pencils are almost always strong linearizations of $P$. Moreover they are easier to construct and are often smaller than the Fiedler linearizations of $P$ introduced in [DTDM12]. Further, the backward error analysis carried out in [DLPVD16], when applied to these trimmed linearizations, shows that under suitable conditions the computed eigenstructure of the linearizations obtained from some backward stable algorithm yields the exact eigenstructure of a slightly perturbed matrix polynomial.

REFERENCES

[DTDM09] F. De Teran, F. M. Dopico, and D. S. Mackey, Linearizations of singular matrix polynomials and the recovery of minimal indices, Electron. J. Linear Algebra, 18 (2009), pp. 371-402.

[DTDM12] F. De Teran, F. M. Dopico, and D. S. Mackey, Fiedler companion linearizations for rectangular matrix polynomials, Linear Algebra Appl., 437 (2012), pp. 957-991.

[DLPVD16] F. M. Dopico, P. W. Lawrence, J. Perez and P. Van Dooren, Block Kronecker linearizations of matrix polynomials and their backward errors, MIMS EPrint 2016.34, 2016.

[MMMM06] D. S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, Vector spaces of linearizations for matrix polynomials, SIAM J. Matrix Anal. Appl., 28(4): 971-1004, 2006.

Joint work with Biswajit Das (Indian Institute of Technology Guwahati).
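As generic background to the abstract above — this is not the construction from the talk, which concerns rectangular polynomials and the pencil spaces of [MMMM06] — the basic idea of a linearization can be illustrated with the standard first companion form of a square quadratic matrix polynomial P(λ) = A₂λ² + A₁λ + A₀: the generalized eigenvalues of the 2k×2k pencil coincide with the roots of det P(λ). A small numerical sketch (numpy/scipy assumed, random coefficient matrices as placeholders):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
k = 3
A0, A1, A2 = (rng.standard_normal((k, k)) for _ in range(3))

I = np.eye(k)
Z = np.zeros((k, k))

# First companion form of P(lam) = A2*lam^2 + A1*lam + A0:
#   L(lam) = lam * diag(A2, I) + [[A1, A0], [-I, 0]]
B = np.block([[A2, Z], [Z, I]])
A = np.block([[A1, A0], [-I, Z]])

# L(lam) v = 0  <=>  (-A) v = lam * B v, a generalized eigenvalue problem.
eigvals = eig(-A, B, right=False)

# Check: each finite computed eigenvalue is (numerically) a root of det P(lam).
P = lambda lam: A2 * lam**2 + A1 * lam + A0
residuals = [abs(np.linalg.det(P(lam))) for lam in eigvals if np.isfinite(lam)]
print("largest |det P(lambda)| over computed eigenvalues:", max(residuals))
```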
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352675080299377, "perplexity": 2486.5624169410107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865691.44/warc/CC-MAIN-20180523161206-20180523181206-00615.warc.gz"}
https://www.physicsforums.com/threads/physics-problem.11363/
# Physics problem

1. Dec 20, 2003

### mustang

Problem 16. Three charges: +8.2 µC 4.8 cm to the left of 4 µC and a -2.3 µC 2.1 cm to the right of the 4 µC charge. What is the electric field strength at a point 2.7 cm to the left of the middle charge? In N/C.
Note: Is the answer 1.733218664*10^7 from the 8.99*10^9 (4*10^-6)/(0.027^2)=49327846.36 and (8.99*10^9)(8.2*10^-6)/(0.048^2)=31995659.72 which were subtracted to get that answer.

2. Dec 20, 2003

### gnome

You might want to look that over again. For starters, there are 3 charges & you figured the field from only 2 (& got one of the distances wrong). Draw a diagram first, & work from that.

3. Dec 20, 2003

### mustang

I drew a diagram and have 8.2 µC 2.1 cm away from the point that is 2.7 cm to the left of the middle charge. In addition should I multiply 8.99*10^9 to -2.3 µC divided by 3.1 cm and subtract what is now three values to get my answer?

4. Dec 20, 2003

### gnome

First, where did you get 3.1 cm? Second, make sure you keep the directions straight. The field from the positive 8.2 µC charge is directed toward the right. What are the directions of the other two fields? i.e.: same direction = add; opposite direction = subtract. And don't forget, you are dividing by the square of the distance.

5. Dec 20, 2003

### mustang

Woops! From -2.3 µC to 4 µC is 2.1 cm and from 4 µC it is 2.7 cm to reach that point so the distance would be 4.8 cm. So for -2.3 µC I multiply 8.99*10^9 to -2.3 µC divided by 4.8 cm or 0.048 m?

6. Dec 20, 2003

### gnome

0.048^2. But think carefully about the directions. It's not just a question of the sign of the charge. You have to consider the relative positions of the particles.

7. Dec 20, 2003

### gnome

The field from the +8.2 µC charge has the same direction at point P as the -2.3 µC charge. Do you see why?

8. Dec 20, 2003

### mustang

No, I don't see why 8.2 µC and -2.3 µC have the same direction. I got three values: from 4*10^-6 it is 49327846.36, from 8.2*10^-6 it is 167160997.7, and from -2.3*10^-6 it is 8974392.361. So would I add 167160997.7 to 8974392.361 and subtract that from 167160997.7?

9. Dec 20, 2003

### gnome

No. The direction of the electric field at any point P is the same as the direction of the electrical force that would be experienced by a positive "test" charge placed at that point. In this problem, if you placed a positive test charge at point P, it would be repelled by the positive 8.2 µC charge (call that charge A) towards the right since they're both positive, so the component of the field from charge A at point P is directed toward the right. But the negative 2.3 µC charge (call it C) would ATTRACT a positive charge, so a positive test charge located at point P would be pulled to the right. Therefore, the field component produced by charge C at point P is also directed toward the right. Therefore, the field components of charges A and C at point P are added, not subtracted. On the other hand, what would charge B (the +4 µC charge in the middle) do to a positive test charge at point P? Get it?

10. Dec 20, 2003

### mustang

Are the three values I got from the charges right?

11. Dec 20, 2003

### gnome

Yes. Now you just have to figure out what to add & what to subtract.

12. Dec 20, 2003

### mustang

So gnome since you said that "The field from the +8.2 µC charge has the same direction at point P as the -2.3 µC charge." I would add 167160997.7 to 8974392.361 to get 176135390.1. From that I would subtract 49327846.36 and get 126807543.7, right?

13.
Dec 20, 2003

### gnome

Yes, but do you understand why, or are you just taking my word for it?
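The arithmetic the thread converges on can be checked in a few lines: put the +8.2 µC charge at x = 0, the +4 µC charge at x = 4.8 cm, the −2.3 µC charge at x = 6.9 cm, and the field point 2.7 cm to the left of the middle charge (x = 2.1 cm). The signed contributions then sum to about 1.27 × 10⁸ N/C pointing to the right, matching the value in post 12. A minimal sketch:

```python
k = 8.99e9  # Coulomb constant, N m^2 / C^2

# (charge in C, position in m) along a line; field point 2.7 cm left of the middle charge
charges = [(+8.2e-6, 0.000), (+4.0e-6, 0.048), (-2.3e-6, 0.069)]
x_p = 0.048 - 0.027

E = 0.0
for q, x in charges:
    r = x_p - x                                   # signed displacement from charge to field point
    E += k * q * (1 if r > 0 else -1) / r**2      # x-component of a point-charge field

print(f"E_x = {E:,.0f} N/C")   # ~ +126,800,000 N/C; positive means the field points right
```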
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.882841169834137, "perplexity": 1653.5369833001614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319933.33/warc/CC-MAIN-20170622234435-20170623014435-00596.warc.gz"}
https://byjus.com/maths/differential-equation/
# Differential Equation

## What is a Differential Equation?

A differential equation is an equation that contains derivatives, which are either partial derivatives or ordinary derivatives. The derivatives represent a rate of change, and the differential equation describes a relationship between a quantity that is continuously varying and its rate of change.

### Real World Usage of Linear Differential Equations

To understand differential equations, let us consider this simple example. Have you ever thought about why a hot cup of coffee cools down when kept under normal conditions? According to Newton, the cooling of a hot body is proportional to the temperature difference between its own temperature $T$ and the temperature $Tₒ$ of its surroundings. This statement in terms of mathematics can be written as:

$\frac{dT}{dt} ~∝~ (T ~-~ Tₒ)$    …………(1)

Introducing a proportionality constant $k$, the above equation can be written as:

$\frac{dT}{dt}$ = $k(T~ -~ Tₒ)$    …………(2)

Here, $T$ is the temperature of the body, $t$ is the time, $Tₒ$ is the temperature of the surroundings, and $\frac{dT}{dt}$ is the rate of cooling of the body.

Fig: The path of the projectile follows a curve which can be derived from an ordinary differential equation.

Example: dy/dx = 3x. Here, the differential equation contains a derivative that involves one variable (the dependent variable, y) with respect to another variable (the independent variable, x).

The types of differential equations are:

1. An ordinary differential equation contains one independent variable and its derivatives. It is frequently called an ODE. The general form of an ordinary differential equation is: given F, a function of x, y and derivatives of y, the equation F(x, y, y′, …, y^(n−1)) = y^(n) is an explicit ordinary differential equation of order n.

2. A partial differential equation contains one or more independent variables.

There are two ways to solve a differential equation:

1. Separation of variables
2. Integrating factor

Separation of variables is used when the differential equation can be written in the form dy/dx = f(y)g(x), where f is a function of y only and g is a function of x only. Taking an initial condition, we rewrite the problem as dy/f(y) = g(x)dx and then integrate both sides.

The integrating factor technique is used when the differential equation is of the form dy/dx + p(x)y = q(x), where p and q are both functions of x only.

A first order differential equation is of the form y′ + P(x)y = Q(x), where P and Q are both functions of x; it is called a first order differential equation because it contains the function and only the first derivative of y.

A higher order differential equation is an equation that contains higher derivatives of an unknown function, which can be either partial or ordinary derivatives, of any order.

### Application of differential equations

1) Differential equations describe various exponential growths and decays.
2) They are also used to describe the change in investment return over time.
3) They are used in the field of medicine for modeling cancer growth or the spread of a disease in the body.
4) Movement of electricity can also be described with the help of differential equations.
5) They help economists in finding the optimum investment strategies.
6) Motion of waves or a pendulum can also be described using differential equations.

Illustration 1: Verify that the function $y$ = $e^{-3x}$ is a solution to the differential equation $\frac{d^2y}{dx^2}~ + ~\frac{dy}{dx} ~-~ 6y$ = $0$.
Solution: The function given is $y$ = $e^{-3x}$. We differentiate both sides of the equation with respect to $x$:

$\frac{dy}{dx}$ = $-3 e^{-3x}$   …………(1)

Differentiating the above equation again with respect to $x$:

$\frac{d^2y}{dx^2}$ = $9 e^{-3x}$   …………(2)

We substitute the values of $\frac{dy}{dx}$, $\frac{d^2y}{dx^2}$ and $y$ into the differential equation given in the question. On the left hand side we get

LHS = $9e^{-3x}~+~(-3e^{-3x})~-~6e^{-3x}$ = $9e^{-3x}~-~9e^{-3x}$ = $0$ (which is equal to the RHS).

Therefore the given function is a solution to the given differential equation.

A stochastic differential equation is one that contains one or more stochastic terms, and the solution it provides is also a stochastic process.

Applications of differential equations in engineering and science include heat conduction analysis; in physics they are used to understand the motion of waves and pendulums, in chemistry for modeling chemical reactions, and in medical science for monitoring cancer growth. Ordinary differential equations are also applied in engineering, for example to find the relationship between the various parts of a bridge.
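Both the worked illustration and the Newton's-cooling example from the start of this article are easy to check with a computer algebra system. The sketch below verifies that y = e^(−3x) satisfies y″ + y′ − 6y = 0 and solves dT/dt = k(T − T₀) symbolically (sympy assumed available; no specific numeric values are implied beyond those in the article).

```python
import sympy as sp

x, t, k, T0 = sp.symbols('x t k T0')

# Illustration 1: check that y = exp(-3x) solves y'' + y' - 6y = 0.
y = sp.exp(-3 * x)
print(sp.simplify(sp.diff(y, x, 2) + sp.diff(y, x) - 6 * y))   # -> 0

# Newton's law of cooling, equation (2): dT/dt = k (T - T0).
T = sp.Function('T')
sol = sp.dsolve(sp.Eq(T(t).diff(t), k * (T(t) - T0)), T(t))
print(sol)   # T(t) = T0 + C1*exp(k*t); with k < 0 the body relaxes toward T0
```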
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8561936616897583, "perplexity": 404.34665835695546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256724.28/warc/CC-MAIN-20190522022933-20190522044933-00544.warc.gz"}
http://tex.stackexchange.com/questions/95401/questions-on-macro-writing-in-tex-to-modify-an-existing-style-file-fancybox-sty
# Questions on macro writing in TeX to modify an existing style file: fancybox.sty

I've decided that I must become more adept at TeX if I'm to become proficient with LaTeX. While this is probably obvious to most readers who pass by, my ability to muddle through has hidden this small but important truth. As an example, consider:

% \doublebox
\def\doublebox{\VerbBox\@doublebox}
\def\@doublebox#1{%
\begingroup
\setbox\@fancybox\hbox{{#1}}%
\fboxrule=.75\fboxrule
\setbox\@fancybox\hbox{\fbox{\box\@fancybox}}%
\fboxrule=2\fboxrule
\fboxsep=\fboxrule
\fbox{\box\@fancybox}%
\endgroup}

This is a snippet from fancybox.sty by Timothy Van Zandt. I wish to clone and improve it (at least by my lights) by adding control over inner and outer rule width, and likewise over the separation. To this small wish list I want to add color on a per rule basis. So what's the problem, you say? The problem is that I don't understand the above well enough to modify it. In a nutshell, I don't see a way to add parameters to \doublebox in a way that gets through to \@doublebox. For that matter I'm not really sure I understand the calling sequence. As an example consider my title page that I use as a test bed:

\documentclass{article}
\usepackage{modfancybox}
\usepackage{nth}
\begin{document}
\newlength{\fb}
\setlength{\fb}{5.625ex}
\newlength{\myl}
\setlength{\myl}{\textwidth}
\thispagestyle{empty}
\thisfancypage{%
\setlength{\fboxrule}{.75ex}
\setlength{\fboxsep}{10pt}
\doublebox
}{}
\parbox{\myl}{%
\null\vfil
\vskip 60pt
\centering
{\huge A HISTORY OF\\
THE\\
MYERS\\
OVERSTREET\\
and\\
GRAY\\
FAMILIES\\}
\vskip 2em
{\large
\lineskip .75em
\textit{\nth{1} Edition By}: Jourdan George Myers \par
\textit{\nth{2} Edition Edited By}: Hugh Shannon Myers \par
\vskip 1.5em}
{\large
1st. Edition
\\December 27, 1983
\\2nd. Edition
\\\today \par}
{\small vrs.\input{version}$\alpha$}
\vskip 1in
``It is not Abraham -- It is Abram'' }
\end{document}

The heart of all of this seems to be:

\thisfancypage{%
\setlength{\fboxrule}{.75ex}
\setlength{\fboxsep}{10pt}
\doublebox
}{}

This somehow seems to pull in the \parbox that follows. Given the setup of \thisfancypage, this certainly makes sense. It also makes it rational to hard-wire the 3 values I want better control over :) That said, I still want my cake and to eat it as well. Adding color seems to be the least of my concerns as there are a number of ways to handle that. And now that I think about it, I could also create the various new lengths to add the control that I want. But that isn't really a solution that works in the long term. I can only get that by increasing my knowledge.

So TL;DR: How do I create a parametric version of \doublebox? And what is happening in \thisfancypage? My hope is to be able to get far enough to not only create a newer \doubleboxP but even perhaps a \Nbox as well.

For those who dislike such things as somehow violating current typographic standards I apologize, but my likes relate more to the 19th century than to the 21st century :)

-

Please don't call them "style files" just because they have the ".sty" extensions. While this was the original purpose and name, they are now called "packages". Most of them don't influence the document style at all. – Martin Scharrer Jan 26 '13 at 8:32

For starters, in \def\doublebox{\VerbBox\@doublebox}, \VerbBox gets \@doublebox as argument, as a macro to execute after it has boxed the content. You should be able to simply write e.g. \def\doublebox#1{\VerbBox{\@doublebox{#1}}} or \newcommand\doublebox[1][]{\VerbBox{\@doublebox{#1}}}.
However, the look-ahead code for optional arguments will fix the catcode of the first character which follows, which might be an issue. – Martin Scharrer Jan 26 '13 at 8:37

I'd do something like this to parameterise the macro

\documentclass{article}
\usepackage{fancybox}
\usepackage{keyval}
\makeatletter
\define@key{myfb}{inner}{\def\myfb@inner{#1}}
\define@key{myfb}{outer}{\def\myfb@outer{#1}}
\define@key{myfb}{sep}{\def\myfb@sep{#1}}
\newcommand\mydoublebox[1][]{%
\def\myfbkeys{\setkeys{myfb}{#1}}%
\VerbBox\@doublebox}
\def\@doublebox#1{%
\begingroup
\def\myfb@inner{.75\fboxrule}%
\def\myfb@outer{2\fboxrule}%
\def\myfb@sep{\fboxrule+.5pt}%
\myfbkeys
\setbox\@fancybox\hbox{{#1}}%
\fboxrule\dimexpr\myfb@inner\relax
\setbox\@fancybox\hbox{\fbox{\box\@fancybox}}%
\fboxrule\dimexpr\myfb@outer\relax
\fboxsep\dimexpr\myfb@sep\relax
\fbox{\box\@fancybox}%
\endgroup}
\makeatother
\begin{document}
\mydoublebox{hello}
\mydoublebox[inner=4pt,sep=10pt]{hello}
\end{document}

Also don't do this

{\huge A HISTORY OF\\
THE\\
MYERS\\
OVERSTREET\\
and\\
GRAY\\
FAMILIES\\}

As that ends the font size group before the paragraph ends, so it sets huge text on a normal baseline. (You do the same with \small and possibly some other size changes.)

-

Do you mean {size text} should be just size text next size text? No brace enclosure? – hsmyers Jan 26 '13 at 13:36

No brace, or include a blank line or \par before the }. The baseline setting can only be set once per paragraph and the value at the end is what counts, but the font changes straight away, so {\huge one two three} sets big letters without increasing the baseline spacing but {\huge one two three\par} uses an appropriate baseline. – David Carlisle Jan 26 '13 at 14:48

How do I create a parametric version of \doublebox?

In the absence of any pre-defined hooks in an existing macro, you have three choices:

1. Re-write the existing macro fully.
2. Inject code using the LaTeX2e macro \g@addto@macro or using similar macros from the etoolbox package.
3. Use the existing macro and add parameters, using a key-value interface.

I personally prefer a combination of 1) and 3), which I will explain in detail below using the LaTeX macro \rule as an example, which might come in handy for your particular case. The normal command has the format:

\rule[<raised>]{<width>}{<height>}

Personally, I have trouble remembering if the width comes first or the height when calling the macro, and it would also be nice if one could set the color as well. A command of the form:

\Rule[rule color = thegray, rule thickness = 1pt, rule raised = 2pt, rule width = 85pt]

is preferable, as the key values can be typed in any order and one can also set default values at the beginning of a document. If you notice, I capitalized the name of the macro, as it is considered good practice not to change existing macros if possible. I also use PGF keys, as I find it quicker to code them.
\documentclass{article}
\usepackage{pgf}
\definecolor{thegray}{rgb}{0.9,0.9,0.9}
\def\setcolor#1{\color{#1}}

% create family of keys called rule
\pgfkeys{/rule/.is family}
\def\cxset{\pgfqkeys{/rule}}
\cxset{rule width/.store in = \rulewidth@my,
rule thickness/.store in=\rulethickness@my,
rule color/.code ={\setcolor{#1}},
rule raised/.store in = \ruleraised@my
}

\cxset{rule thickness = 10pt,
rule raised = 2pt,
rule width = 45pt}

\newcommand\Rule[1][rule color = thegray,
rule thickness = 1pt,
rule raised = 2pt,
rule width = 85pt]{%
\colorlet{originalcolor}{.}%
\cxset{#1}%
\begingroup
\rule{\rulewidth@my}{\rulethickness@my}%
\endgroup
\color{originalcolor}}

\begin{document}
\Rule
\Rule[rule width=60pt, rule color= purple]
test
\end{document}

See if you can use this approach and modify \doublebox to your requirements. If you succeed, post the answer.

-

I will do so and let you know – hsmyers Jan 26 '13 at 13:39

As advised by the good Mr Carlisle, here is my version of his excellent coding:

\define@key{myfb}{inner}{\def\myfb@inner{#1}}
\define@key{myfb}{outer}{\def\myfb@outer{#1}}
\define@key{myfb}{sep}{\def\myfb@sep{#1}}
\define@key{myfb}{ocolor}{\def\myfb@ocolor{#1}}
\define@key{myfb}{icolor}{\def\myfb@icolor{#1}}
\newcommand\mydoublebox[1][]{%
\def\myfbkeys{\setkeys{myfb}{#1}}%
\VerbBox\@doublebox}
\def\@doublebox#1{%
\begingroup
\def\myfb@inner{.75\fboxrule}%
\def\myfb@outer{2\fboxrule}%
\def\myfb@sep{\fboxrule+.5pt}%
\def\myfb@ocolor{black}
\def\myfb@icolor{black}
\myfbkeys
\setbox\@fancybox\hbox{{#1}}%
\fboxrule\dimexpr\myfb@inner\relax%
\setbox\@fancybox\hbox{\fcolorbox{\myfb@icolor}{white}{\box\@fancybox}}%
\fboxrule\dimexpr\myfb@outer\relax%
\fboxsep\dimexpr\myfb@sep\relax%
\fcolorbox{\myfb@ocolor}{white}{\box\@fancybox}%
\endgroup
}

As you can see, all I've done is add two more keys, icolor and ocolor. They default to black, as they would be if \fcolorbox were not replacing \fbox. This pretty much meets my admittedly vague specs. And even better, I've learned a fair amount in the process. Xcolor me a happy camper :)

-
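For completeness, an added usage sketch of this final version; it assumes the key definitions above sit in a preamble between \makeatletter and \makeatother, and that fancybox, keyval and xcolor (or another package providing \fcolorbox) are loaded:

\mydoublebox{hello}
\mydoublebox[inner=1pt, outer=2pt, sep=6pt, icolor=red, ocolor=blue]{hello}

The first call uses the black defaults, while the second overrides all five keys; any subset of the keys can be given, in any order.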
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8295482993125916, "perplexity": 2041.1511905938792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860106452.21/warc/CC-MAIN-20160428161506-00072-ip-10-239-7-51.ec2.internal.warc.gz"}
https://math.eccentric.dk/2019/01/20/is-this-quotient-space-connected/
# Is this quotient space connected?

I came across this little problem recently: If $$X$$ is a topological space with exactly two components, and given an equivalence relation $$\sim$$, what can we say about its quotient space $$X/{\sim}$$?

It turns out that $$X/{\sim}$$ is connected if and only if there exist $$x, y \in X$$ lying in separate components such that $$x \sim y$$.

Suppose first that there exist such $$x$$ and $$y$$. Let $$C_1$$ and $$C_2$$ be the two components of $$X$$, with $$x \in C_1$$ and $$y \in C_2$$, and let $$p: X \to X/{\sim}$$ be the natural projection. Since $$p$$ is a quotient map it is in particular continuous, and since the image of a connected space under a continuous map is connected, both $$p(C_1)$$ and $$p(C_2)$$ are connected. Because $$x \sim y$$ we have $$p(C_1)\cap p(C_2)\neq \varnothing$$, so $$X/{\sim}$$ is connected, being the union of two connected sets with a point in common: $p(C_1)\cup p(C_2) = p(C_1\cup C_2)=p(X)=X/{\sim},$ as wanted.

To show the reverse implication, we use the contrapositive of the statement and show: if for no $$x\in C_1$$ and $$y\in C_2$$ we have $$x\sim y$$, then $$X/{\sim}$$ is not connected. Assume the hypothesis. Then $$p(C_1)$$ and $$p(C_2)$$ are disjoint, nonempty, connected subspaces whose union equals all of $$X/{\sim}$$ (since $$p$$ is surjective). Moreover, since no point of $$C_1$$ is identified with a point of $$C_2$$, we have $$p^{-1}(p(C_1)) = C_1$$, which is both open and closed in $$X$$ (each of the two components is the complement of the other, hence clopen), so $$p(C_1)$$ is clopen in $$X/{\sim}$$; the same holds for $$p(C_2)$$. Hence $$X/{\sim}$$ splits into two disjoint nonempty open sets, the images of $$C_1$$ and $$C_2$$, showing that $$X/{\sim}$$ is not connected. As wanted.
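A concrete example, added for illustration: take $$X = [0,1] \cup [2,3]$$ with the subspace topology from $$\mathbb{R}$$, so the two components are $$C_1 = [0,1]$$ and $$C_2 = [2,3]$$. If $$\sim$$ identifies only the points $$1$$ and $$2$$ (and is otherwise trivial), then $$X/{\sim}$$ is homeomorphic to a closed interval and hence connected. If instead the only nontrivial identification is $$0 \sim 1$$, which stays inside $$C_1$$, the quotient is a circle together with a disjoint interval and remains disconnected, exactly as the criterion predicts.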
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9765026569366455, "perplexity": 40.96115478157987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666959.47/warc/CC-MAIN-20191016090425-20191016113925-00321.warc.gz"}
http://mathhelpforum.com/advanced-statistics/89764-linear-regression-mean-square-error-mse.html
# Math Help - Linear Regression: Mean square error (MSE)

1. ## Linear Regression: Mean square error (MSE)

Simple linear regression model:
Y_i = β0 + β1*X_i + ε_i , i = 1,...,n
where n is the number of data points and ε_i is the random error.

Let σ^2 = V(ε_i) = V(Y_i).
Then an unbiased estimator of σ^2 is s^2 = [1/(n-2)]∑(e_i)^2, where the e_i's are the residuals.
s^2 is called the "mean square error" (MSE).

My concerns:
1) The GENERAL formula for sample variance is s^2 = [1/(n-1)]∑(y_i - y bar)^2. It's defined in the first pages of my statistics textbook and I've been using it again and again, but now I don't see how this general formula (which always holds) can reduce to the formula for s^2 above. How come we have (n-2) and e_i in the formula for s^2?

2) From what I've learnt in previous stat courses, the "mean square error" of a point estimator is by definition MSE(θ hat) = E[(θ hat - θ)^2].
Is this the same MSE as the one above? Are they related at all?

Any help is greatly appreciated!

note: also under discussion in Talk Stats forum

2. My textbook also says that the sample variance s^2 = [1/(n-1)]∑(y_i - y bar)^2 has n-1 in the denominator because it has n-1 degrees of freedom.
And s^2 = [1/(n-2)]∑(e_i)^2 has n-2 in the denominator because it has n-2 degrees of freedom.

Now I am puzzled... what is "degrees of freedom"? Why does it have n-2 degrees of freedom? What is the simplest way to understand this?

Thanks!
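A short added sketch of where the residuals come from, using the standard least-squares definitions: the fitted values are Y_i hat = β0 hat + β1 hat*X_i, and the residuals are e_i = Y_i - Y_i hat. Because the two coefficients β0 hat and β1 hat are chosen to minimize ∑(e_i)^2, the residuals automatically satisfy the two linear constraints ∑ e_i = 0 and ∑ X_i e_i = 0, so only n-2 of them can vary freely; this is the usual motivation for dividing by n-2 in s^2 = [1/(n-2)]∑(e_i)^2.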
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9442357420921326, "perplexity": 1702.9531652971066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447549301.141/warc/CC-MAIN-20141224185909-00038-ip-10-231-17-201.ec2.internal.warc.gz"}